VCS Lab (from the old 1.3 release, but still helpful for understanding VCS)

Posted 2003-01-16 09:43
Lab Exercise 1A – VCS Hardware Setup

The purpose of this lab is to set up the hardware to prepare for the installation of the VCS software.  It is important to verify that you have set up the hardware correctly before you attempt to install the software.

The Ethernet and SCSI interfaces may already be installed.

Instructions

1.        Check the major and minor numbers for the disk devices:

# ls -lL  /dev/rdsk/c1t1d0s3
crw-r----- root sys xx,yy

Make a note of the “xx,yy” numbers; they must be the same on both systems.

If you have VERITAS Volume Manager installed, also check:

# cat  /etc/name_to_major

Make a note of the values of vxio and vxspec; they must be the same on both systems.
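Both checks come down to pulling one field out of a line of text and comparing it across the two nodes. A small illustrative sketch (the device numbers and file contents below are invented for illustration; on the lab systems you would use the real ls -lL output and /etc/name_to_major):

```shell
# Sample ls -lL output line (invented values):
line='crw-r-----  1 root  sys   32,27 Jan 16 09:43 /dev/rdsk/c1t1d0s3'
# Field 5 holds the "major,minor" pair to compare between systems.
echo "$line" | awk '{print $5}'    # prints: 32,27

# Sample /etc/name_to_major fragment (invented major numbers):
cat > /tmp/name_to_major.sample <<'EOF'
sd 32
vxio 102
vxspec 103
EOF
# Pull out just the vxio and vxspec entries for comparison.
grep -E '^(vxio|vxspec) ' /tmp/name_to_major.sample
```

Running the same extraction on both systems and comparing the output is all the "reconcile" step in the next instruction is checking for.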
2.        If necessary, reconcile the major and minor numbers according to the instructions in Chapter 3 of the VERITAS Cluster Server Installation Guide.

3.        Connect the system and network cables as necessary.

-        Connect hme0 to the public network.
-        Connect the two private network cables (qfe0 and qfe1 if you are using quad-Ethernet cards).
-        Plumb the private network interfaces:

# ifconfig qfe0 plumb
# ifconfig qfe1 plumb

-        Run ifconfig -a to verify that you can see the private network interfaces.

4.        Reboot the systems with a reconfiguration boot:

# reboot -- -r
5.        When the system is up, test public network connectivity using the ping command.


Lab Exercise 1B – Installing the VCS Software

The purpose of this lab is to install the VCS software on a two-system cluster, then to verify the heartbeat network connectivity.

Instructions

1.        The VCS software is on CD-ROM.

2.        Create the /.rhosts file with an entry of   +   (this trusts all remote hosts, which is acceptable only on an isolated lab network).
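For reference, the file consists of a single line containing only a + character. A sketch using a scratch path so it can be tried without touching the real /.rhosts:

```shell
# Write the single "+" line; on the lab systems the real path is /.rhosts.
RHOSTS=/tmp/rhosts.demo    # scratch path for illustration only
echo '+' > "$RHOSTS"
cat "$RHOSTS"              # prints: +
```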
3.        Enter the following to start the installation:
# pkgadd  -d  /cdrom/cdrom0
4.        Install all of the VCS packages.  Just keep entering “y” until the installation is complete.

5.        Verify the heartbeat connectivity on both systems by using the dlpiping command, with one system acting as the server and the other as the client:

On the system acting as the server:
# /opt/VRTSllt/dlpiping  -vs  /dev/<interface such as qfe:0>

The server just listens.  At the end of the test you will need to press Control-C to end the session.

In another terminal window, do the following to get the MAC address of the machine acting as the server:

# /opt/VRTSllt/getmac  /dev/qfe:0

The response will be in the form of   xxxxxx.   Use that address on the client machine in the dlpiping -vc command below.

On the system acting as the client:

# /opt/VRTSllt/dlpiping  -vc  /dev/<interface such as qfe:0>  xxxxxx

Now reverse the roles of the systems and run the test again.  Verify that each interface works in both directions and on both private networks.

       

Lab Exercise 2A – Configuring llt and gab       

The purpose of this lab is to configure the llt and gab environment in a two-system configuration.

NOTE:  Solutions are provided at the end of this section.

Instructions

The key VCS directories are:

/sbin  -  gabconfig and lltconfig commands
/opt/VRTSvcs/bin  -  VCS server commands, VCS agent directories and scripts
/etc/VRTSvcs/conf/config  -  VCS server configuration

Note:  Before starting this lab, be sure that /sbin and /opt/VRTSvcs/bin are in the root PATH.

1.        Use vi to create an /etc/llttab file on each system.  Make sure you have the appropriate set-node,  link, and start statements in the file:

  
2.        Use the lltconfig command to initialize llt.

3.        Use the lltconfig command to see if llt is running.
4.        Now, use the lltstat command to get information about the llt process.  Look at the “CONNECT PACKETS” field.
5.        Are there any packets present?
6.        Use the gabconfig command to start the gab process.
7.        Use the gabconfig command to obtain gab port information.
8.        Which ports are configured?
9.        As you can see, no ports are present.  One of the systems needs to be a seed node. On one of the systems, use the gabconfig command to indicate that it is the seed node to start the packets flowing.  
The recommended method is to modify the S92gab startup script to start a 2-node cluster.
10.        Use gabconfig again to now see if any ports are open.  The “a” port should be open.
11.        Stop gab and llt on each system using:  gabconfig -u   and   lltconfig -U

Lab Exercise 2A SOLUTIONS – Configuring llt and gab        

The purpose of this lab is to configure the llt and gab environment in a two-system configuration.

Instructions

The key VCS directories are:

/sbin  -  gabconfig and lltconfig commands
/opt/VRTSvcs/bin  -  VCS server commands, VCS agent directories and scripts
/etc/VRTSvcs/conf/config  -  VCS server configuration

Note:  Before starting this lab, be sure that /sbin and /opt/VRTSvcs/bin are in the root PATH.

# PATH=$PATH:/sbin:/opt/VRTSvcs/bin; export PATH

1.        Use vi to create an /etc/llttab file on each system.  Put the following lines in the file:
set-node n (where n is 0 or 1)
link qfe0 /dev/qfe:0
link qfe1 /dev/qfe:1
start  
2.        Use the following to initialize llt:
# lltconfig  -c
3.        See if llt is running by entering the following:
# lltconfig
5.        Start the gab process as follows:
# gabconfig -c
6.        Now obtain gab port information as follows:
# gabconfig -a
7.        Which ports are configured?
8.        As you can see, no ports are present.  One of the systems needs to be manually seeded. On one of the systems, use the following command to start the packets flowing:
# gabconfig -xc
NOTE: If you choose not to seed one system manually, you can put  gabconfig -c -n2  in the /etc/gabtab file so the packets flow once the second node of a two-node cluster has started.
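Following that note, a two-node /etc/gabtab would contain just one line (the /sbin path is assumed from the key-directories list at the start of this lab):

```
/sbin/gabconfig -c -n2
```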
9.        Use gabconfig again to now see if any ports are open.  The “a” port should be open.
# gabconfig -a
10.        Stop gab and llt on each system using:  gabconfig -u   and   lltconfig -U
Lab Exercise 2B – Configuring a basic VCS Cluster and Using the GUI       

The purpose of this lab is to set up a basic VCS environment by defining the VCS configuration file, main.cf, and to use the VCS GUI.

NOTE:  Solutions are provided at the end of this section.

Instructions

If you rebooted, the cluster server is probably running (had, the HA daemon).  Ensure that the cluster server is stopped: # /opt/VRTSvcs/bin/hastop -all

IMPORTANT  IMPORTANT  IMPORTANT  IMPORTANT  IMPORTANT

Decide which system in your cluster will be “systemA” and which system will be “systemB”.  Substitute the appropriate host name whenever the instructions refer to “systemA” or “systemB”.

On the systemA system, do the following:
1.        # cd /etc/VRTSvcs/conf
# mkdir config
# cp types.cf   ./config
# cd config
2.        Use vi to create a main.cf file that contains the appropriate statements to include the types.cf file, define the cluster with the name vcslab, define snmp with the name vcslab, and define two systems, systemA and systemB.
       
3.        Verify the main.cf file by doing:
# hacf  -verify config
4.        Generate the main.cf file  (the  -generate option can be used to reset the status of the configuration  when the VCS server thinks that the configuration is bad):
# hacf  -generate config
5.        Start llt
6.        Start gab
7.        Start the VCS process using the hastart command.  Watch the system console: when you see “Entering running state”, the VCS server has started successfully.  If that message does not appear on the system console, the VCS server did not start.
8.        How many systems do you see when you use the hasys command?

On the systemB system, do the following:

# cd  /etc/VRTSvcs/conf
# mkdir config
1.        Start llt
2.        Start gab
3.        Start the VCS process:
4.        You should notice that VCS is building the configuration file from the remote system, systemA.
5.        You can see what was built by doing the following:
# haconf -dump
# cat main.cf    (Do the cat main.cf on both systems.  They should be identical)

Using the GUI
1.        The VCS GUI requires a user ID and password.  Use the hauser command to add a user; at the password prompts, just press Return.
2.        Start the VCS GUI in the background.
3.        To start the log in, click on one of the VCS GUI quadrants.
4.        It is also helpful to run the hastatus command in a separate X terminal window and keep the VCS status screen on the desktop.
5.        Click on the orange pyramid in the upper right quadrant.
6.        In the GUI tree on the left, double click on the “Member Systems” branch to show the two systems in the cluster.  You can expand and contract the tree and view information about the cluster.  Since you do not have any groups defined yet, there is not a lot to see at this point.
7.        Prepare to stop VCS by dumping the memory copy of the configuration file to disk:
# haconf -dump -makero
8.        Stop the VCS processes on all systems:
# hastop -all
Lab Exercise 2B SOLUTIONS – Configuring a basic VCS Cluster and Using the GUI       


The purpose of this lab is to set up a basic VCS environment by defining the VCS configuration file, main.cf, and to use the VCS GUI.

Instructions

If you rebooted, the cluster server is probably running (had, the HA daemon).  Ensure that the cluster server is stopped: # /opt/VRTSvcs/bin/hastop -all

IMPORTANT  IMPORTANT  IMPORTANT  IMPORTANT  IMPORTANT

Decide which system in your cluster will be “systemA” and which system will be “systemB”.  Substitute the appropriate host name whenever the instructions refer to “systemA” or “systemB”.


On the systemA system, do the following:
1.        # cd /etc/VRTSvcs/conf
# mkdir config
# cp types.cf   ./config
# cd config
2.        Use vi to create a main.cf file that contains the following:
        include "types.cf"
        cluster vcslab
        snmp vcslab
        system systemA
        system systemB
3.        Verify the main.cf file by doing:
# hacf  -verify config
4.        Generate the main.cf file  (the  -generate option can be used to reset the status of the configuration  when the VCS server thinks that the configuration is bad):
# hacf  -generate config
5.        Start llt
# lltconfig -c -o
6.        Start gab
# gabconfig -xc
7.        Start the VCS process:
# hastart
Go look on the system console.  When you see “Entering running state” then you know that the VCS server has successfully started.  If you do not see “Entering running state” on the system console, then the VCS server did not start.

8.        How many systems do you see when you use the following command?
# hasys -list

On the systemB system, do the following:
# cd /etc/VRTSvcs/conf
# mkdir config
1.        Start llt
# lltconfig -c -o
2.        Start gab
# gabconfig -xc
3.        Start the VCS process:
# hastart
4.        You should notice that VCS is building the configuration file from the remote system, systemA.
5.        You can see what was built by doing the following:
# haconf -dump
# cat main.cf    (Do the cat main.cf on both systems.  They should be identical)

Using the GUI
1.        The VCS GUI requires a user id and password:
# haconf -makerw
        # hauser -add  <user name>
        At the password prompts, just press Return.
2.        Start the VCS GUI in the background.
# hagui &
3.        To start the log in, click on one of the VCS GUI quadrants.
4.        It is also helpful to bring up another X terminal window and keep the VCS status screen on the desktop:
# hastatus
5.        Click on the orange pyramid in the upper right quadrant.
6.        In the GUI tree on the left, double click on the “Member Systems” branch to show the two systems in the cluster.  You can expand and contract the tree and view information about the cluster.  Since you do not have any groups defined yet,  there is not a lot to see at this point.
7.        Prepare to stop VCS by dumping the memory copy of the configuration file to disk:
# haconf -dump -makero
8.        Stop the VCS processes on all systems:
# hastop -all
Lab Exercise 3A – Managing Groups and Resources via the GUI       
The purpose of this lab is to manage VCS groups and resources using the GUI.


Instructions
Ensure that the cluster server is stopped: # hastop -all


1.        Start VCS on the machine that has the main.cf  file for this lab.  Wait for it to get into the RUNNING state  (view this on the system console or via hastatus).

2.        Now start VCS on the other machine and it will get the configuration from the first
machine that is running.

You will use the VCS GUI to create a service group that contains resources and dependencies.

3.        Bring up the GUI

-        Once you are logged into the GUI, go to the cluster tree and right-click on Service Groups.
-        Click Add, then enter the group name f_group.
-        Highlight both systems by holding the Shift key when selecting the second system.
-        Click OK.

4.        From the cluster tree, double click Service Groups to see the newly created f_group

-        Right-click on f_group -> View -> Resource View to bring up a window.
-        Right-click on f_group -> Add Resource.
-        Enter the name: tmp_a.
-        Choose the resource type of FileOnOff.
-        Click in the value column and enter: /tmp/a
-        Ensure that the Critical and Enabled boxes are checked.
-        Click OK.  You should see the new resource in the Resource View window.

5.        Repeat step 4 to add three additional resources: tmp_b, tmp_c, tmp_d.  Be sure to enter the appropriate values of  /tmp/b, /tmp/c, and /tmp/d, respectively.

6.        Now let's create resource dependencies.

-        Double click on the tmp_c resource.  It should change color from gray to yellow.
-        Click on the tmp_d resource.  You should see a box asking about the dependency.
-        Have the tmp_c resource be the parent and the tmp_d resource be the child.
-        Double click on the tmp_b resource.
-        Click on the tmp_d resource.
-        Have the tmp_b resource be the parent and the tmp_d resource be the child.
-        Double click on the tmp_a resource.
-        Click on the tmp_c resource.
-        Have the tmp_a resource be the parent and the tmp_c resource be the child.
-        Double click on the tmp_a resource.
-        Click on the tmp_b resource.
-        Have the tmp_a resource be the parent and the tmp_b resource be the child.

You should now have a diamond shaped dependency: tmp_a at the top depending on tmp_b and tmp_c while tmp_b and tmp_c depend on tmp_d.
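Sketched as a picture, the diamond described above looks like this (parents above, children below; a child must be online before its parent):

```
        tmp_a
       /     \
   tmp_b    tmp_c
       \     /
        tmp_d
```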

7.        Go to the cluster tree and double click on Member Systems.  Double click on systemA, then right-click on f_group and choose the View -> Resource View menu item.  Do the same for systemB.  You should now have two windows of the resources of f_group on your workspace.

At this point have only one person in the cluster do the following.  The others can see the results in the resource views.  The cluster operations should be done from the tree portion of the GUI (the left side of the display).

8.        Online group f_group on system systemA.

You should see the resources tmp_d, then tmp_c and tmp_b in parallel, then tmp_a are onlined on systemA.

9.        Online group f_group on system systemB, what happens?

Since f_group is a failover group it may be online on only one system at a time.  Attempting to online it when it is online elsewhere in the cluster will return an error.

10.        Offline f_group on systemA:

Notice the order that the resources are put offline.

11.        Online resource tmp_c on systemB.

Notice the order that the resources are onlined.  Since tmp_c depends on tmp_d,
tmp_d must be online first.  The resources tmp_a and tmp_b are unaffected.

Look at the state of the group f_group on system systemB.

The group is now partially online.

12.        Online resource tmp_b on systemA.  What happens?

The command will fail because the failover group f_group is partially online on  systemB.

13.         Online resource tmp_a on systemB.  What happens?

The resource tmp_a will go online after tmp_b is online.  
Look at the state of the group f_group on system systemB.

The group is now online because all critical resources (tmp_a) are online.
14.        From a terminal window on systemB, remove the file /tmp/c on systemB.  Wait approximately 60 seconds.  What happens?

The resource tmp_c will fault on systemB because it went offline without a directive from the VCS server.  Because the critical resource, tmp_a, depends on tmp_c, the group is faulted and will failover to systemA.

15.        Offline f_group on systemA.

Notice the order that the resources are put offline.
16.        Confirm that tmp_c is offline on systemB and clear the faulted resource.

The resource tmp_c will transition from faulted to offline.  The group, f_group, on systemB will also transition from faulted to offline.


17.        Online f_group on systemA.

18.        Now that the fault is cleared for the resource tmp_c on systemB,  offline group f_group on systemA and bring up group f_group on systemB using the hagrp command  (there is an option that will do this with one hagrp command).


Notice that f_group is offlined on systemA and then onlined on systemB.

19.        Offline resource tmp_b on systemB.  What happens?

The command fails because an online resource, tmp_a, depends on tmp_b.
20.         Offline resource tmp_a on systemB.  What happens?

Only the resource tmp_a will be offline.  To propagate to all resources in the dependency tree, use the command:

# hares -offprop tmp_a -sys systemB

Lab Exercise 3A – Managing Groups and Resources via the Command Line Interface       
The purpose of this lab is to manage VCS groups and resources using the command line.
You will be using the hares and hagrp commands numerous times.  To get a listing of the syntax, run  hares /  or  hagrp /  (an invalid argument makes the command print its usage summary).

NOTE:  Solutions are provided at the end of this section.

Instructions
1.        Online group f_group on system systemA.

You should see the resources tmp_d, then tmp_c and tmp_b in parallel, then tmp_a are onlined on systemA.

2.        Online group f_group on system systemB, what happens?

Since f_group is a failover group it may be online on only one system at a time.  Attempting to online it when it is online elsewhere in the cluster will return an error.

3.        Offline f_group on systemA:

Notice the order that the resources are put offline.

4.        Online resource tmp_c on systemB.

Notice the order that the resources are onlined.  Since tmp_c depends on tmp_d,
tmp_d must be online first.  The resources tmp_a and tmp_b are unaffected.

Look at the state of the group f_group on system systemB.

The group is now partially online.

5.        Online resource tmp_b on systemA.  What happens?

The command will fail because the failover group f_group is partially online on  systemB.

6.         Online resource tmp_a on systemB.  What happens?

The resource tmp_a will go online after tmp_b is online.  
Look at the state of the group f_group on system systemB.

The group is now online because all critical resources (tmp_a) are online.
7.        From a terminal window on systemB, remove the file /tmp/c on systemB.  Wait approximately 60 seconds.  What happens?

The resource tmp_c will fault on systemB because it went offline without a directive from the VCS server.  Because the critical resource, tmp_a, depends on tmp_c, the group is faulted and will failover to systemA.

8.        Offline f_group on systemA.

Notice the order that the resources are put offline.
9.        Confirm that tmp_c is offline on systemB and clear the faulted resource.

The resource tmp_c will transition from faulted to offline.  The group, f_group, on systemB will also transition from faulted to offline.


10.        Online f_group on systemA.

11.        Now that the fault is cleared for the resource tmp_c on systemB,  offline group f_group on systemA and bring up group f_group on systemB using the hagrp command  (there is an option that will do this with one hagrp command).


Notice that f_group is offlined on systemA and then onlined on systemB.

12.        Offline resource tmp_b on systemB.  What happens?

The command fails because an online resource, tmp_a, depends on tmp_b.
13.         Offline resource tmp_a on systemB.  What happens?

Only the resource tmp_a will be offline.  To propagate to all resources in the dependency tree, use the command:

# hares -offprop tmp_a -sys systemB
Lab Exercise 3A SOLUTIONS – Managing Groups and Resources via the Command Line Interface

The purpose of this lab is to manage VCS groups and resources using the command line.
You will be using the hares and hagrp commands numerous times.  To get a listing of the syntax, run  hares /  or  hagrp /  (an invalid argument makes the command print its usage summary).


Instructions

1.        Online group f_group on system systemA:

# hagrp -online f_group -sys systemA

You should see the resources tmp_d, then tmp_c and tmp_b in parallel, then tmp_a are onlined on systemA.

2.        Online group f_group on system systemB, what happens?

# hagrp -online f_group -sys systemB

Since f_group is a failover group it may be online on only one system at a time.  Attempting to online it when it is online elsewhere in the cluster will return an error.

3.        Offline f_group on systemA:

# hagrp -offline f_group -sys systemA

Notice the order that the resources are put offline.

4.        Online resource tmp_c on systemB:

# hares -online tmp_c  -sys systemB

Notice the order that the resources are onlined.  Since tmp_c depends on tmp_d,
tmp_d must be online first.  The resources tmp_a and tmp_b are unaffected.

Look at the state of the group f_group on system systemB:

# hagrp -display f_group -sys systemB

The group is now partially online.

5.        Online resource tmp_b on systemA.  What happens?

# hares -online tmp_b  -sys systemA

The command will fail because the failover group f_group is partially online on  systemB.

6.         Online resource tmp_a on systemB.  What happens?

# hares -online tmp_a -sys systemB

The resource tmp_a will go online after tmp_b is online.  
Look at the state of the group f_group on system systemB:

# hagrp -display f_group -sys systemB

The group is now online because all critical resources (tmp_a) are online.
7.        From a terminal window on systemB, remove the file /tmp/c on systemB.  Wait approximately 60 seconds.  What happens?

# /bin/rm  /tmp/c

The resource tmp_c will fault on systemB because it went offline without a directive from the VCS server.  Because the critical resource, tmp_a, depends on tmp_c, the group is faulted and will failover to systemA.

8.        Offline f_group on systemA:

# hagrp -offline f_group -sys systemA

Notice the order that the resources are put offline.
9.        Confirm that tmp_c is offline on systemB and clear the faulted resource:

Look in the /tmp directory on systemB and verify that the file “c” does not exist, then

# hares -clear tmp_c  -sys systemB

The resource tmp_c will transition from faulted to offline.  The group, f_group, on systemB will also transition from faulted to offline.

10.        Online f_group on systemA.

11.        Now that the fault is cleared for the resource tmp_c on systemB,  offline group f_group on systemA and bring up group f_group on systemB using one command:

# hagrp -switch f_group -to systemB

Notice that f_group is offlined on systemA and then onlined on systemB.

12.        Offline resource tmp_b on systemB.  What happens?

# hares -offline tmp_b -sys systemB

The command fails because an online resource, tmp_a, depends on tmp_b.
13.         Offline resource tmp_a on systemB.  What happens?

# hares -offline tmp_a -sys systemB

Only the resource tmp_a will be offline.  To propagate to all resources in the dependency tree, use the command:

# hares -offprop tmp_a -sys systemB
Lab Exercise 3B – Adding Groups and Resources       

The purpose of this lab is to add a group and its resource to VCS and then modify an attribute value and localize an attribute.

You will be creating a group, xcg, that will have a resource, xcr, to be a resource type of  PROCESS.  The xcr resource will start the XCLOCK process and direct the clock to be displayed on your terminal.


NOTE:  Solutions are provided at the end of this section.

Instructions
1.        Use vi to create a shell script, /bin/vcsview, that has the following lines, and make it executable with chmod +x /bin/vcsview:

#!/bin/sh
haconf  -dump
sleep 1
cat /etc/VRTSvcs/conf/config/main.cf

2.        Make the VCS configuration writeable:

3.        Allow connections from other systems to the local system:

# xhost  +

4.        Add a group, xcg, for the xclock utility

5.        Run the vcsview script you created in step 1.   What do you see added to the configuration file?

You will see the entries for the group xcg.

6.        Display group attribute information:

# haattr -display group

Look at the information about the SystemList and the AutoStartList attributes for a group.

7.        Add systemA and systemB to the SystemList attribute for the group xcg:

# hagrp -modify xcg SystemList -add systemA  0  systemB  1

8.        Run the vcsview script  again.  What has changed?

You will see the entries for the SystemList attribute.

9.        Put systemA on the list of systems to start automatically for the group xcg.

10.        Run the vcsview script again to see the changes to the configuration.
11.        Add an xclock resource, xcr of the Process type to the group xcg.

Display the resource xcr that was just created.  Is the resource Enabled?

The resource is disabled.  This is the default for resources added after the cluster is started.  You will enable this resource later.

12.        Modify the Arguments and PathName attributes of the xcr resource to allow use of the local display by the resource.

# hares -modify xcr Arguments “%-display  <hostname>:0.0”
      where <hostname> is the hostname of the system whose display you are using

# hares -modify xcr PathName “/usr/openwin/bin/xclock”

13.        Modify the interval that the Process type is monitored to 5 seconds so that you don’t have to wait too long when you offline the xcr resource.

14.        Enable the xcr resource.  Resources added by the command line are, by default, not Enabled.

15.        Display the xcr resource again.  The status of the xcr resource should be Enabled.
16.        Check to see if the xcr resource is online.
17.        Online the resource xcr.  What happens?

The xclock will appear on your workspace.

18.        Close the xclock window.  What happens?

After 5 seconds the xclock will appear again  (the default interval to monitor a Process  resource type is 60 seconds.  But, you modified the interval to 5 seconds in a previous step).  You can look at the MonitorInterval for the Process type by entering:

# hatype -display Process

On the systemB system check to see that the xclock process is now running on systemB.


19.        Clear the fault on the xcr resource on systemA, then close the xclock window; the xclock should appear again.  Check that the process is now running on systemA.

20.        The Arguments attribute is global, so the xclock window is displayed on the systemA machine no matter which machine is actually running the xclock process.  From a user perspective, that is OK, because the user should not care which machine an application is running on.

      As an example of the effects of making an attribute local to a specific
machine in the cluster, use the hares command to display the Arguments of the xcr resource, and then use the hares command to make the Arguments attribute local.

Display the xcr resource and see what has changed in the Arguments attribute.

Notice that the Arguments attribute now has a line for each system (systemA and systemB).  Notice that the hostname is the same on both lines.

Change the line for  systemB to have the hostname of  systemB’s system:

21.        Clear the faulted xcr resource on systemB.
22.        Close the xclock window and it should appear on the systemB system.
Now let's create another resource in the group xcg and create a resource dependency.

1.        Create another resource, tmpg, of type FileOnOff with PathName /tmp/g, in the same group xcg.  To reduce the wait time for the failover to occur, change the MonitorInterval of FileOnOff to 5 seconds.
2.        Make xcr (the xclock resource) dependent on tmpg using the hares command.
3.        Start the xcr and tmpg resources (enable and online the resources).
4.        Remove /tmp/g.  What happens to xclock?
5.        Start both resources again.  Be sure to clear the faulted resources.
6.        Close  xclock.  What happens to tmpg?
7.        Change xcr’s Critical attribute so that xcr is non-critical.
8.        Close xclock.  What happens to the xcg group?
9.        Clear the faulted xcr resource.  
10.        Remove /tmp/g.  What happens to the xcg group?
Lab Exercise 3B SOLUTIONS – Adding Groups and Resources       


The purpose of this lab is to add a group and its resource to VCS and then modify an attribute value and localize an attribute.

You will be creating a group, xcg, that will have a resource, xcr, to be a resource type of  PROCESS.  The xcr resource will start the XCLOCK process and direct the clock to be displayed on your terminal.

Instructions
1.        Use vi to create a shell script, /bin/vcsview, that has the following lines, and make it executable with chmod +x /bin/vcsview:

#!/bin/sh
haconf  -dump
sleep 1
cat /etc/VRTSvcs/conf/config/main.cf

2.        Make the VCS configuration writeable:

# haconf -makerw

3.        Allow connections from other systems to the local system:

# xhost  +

4.        Add a group, xcg, for the xclock utility:

# hagrp -add xcg

5.        Run the vcsview script you created in step 1.   What do you see added to the configuration file?

You will see the entries for the group xcg.

6.        Display group attribute information:

# haattr -display group

Look at the information about the SystemList and the AutoStartList attributes for a group.

7.        Add systemA and systemB to the SystemList attribute for the group xcg:

# hagrp -modify xcg SystemList -add systemA  0  systemB  1

8.        Run the vcsview script  again.  What has changed?

You will see the entries for the SystemList attribute.

9.        Put systemA on the list of systems to start automatically for the group xcg.

# hagrp -modify xcg AutoStartList -add systemA

10.        Run the vcsview script again to see the changes to the configuration.
11.        Add an xclock resource, xcr of the Process type to the group xcg:

# hares -add xcr Process xcg

Display the resource xcr that was just created.  Is the resource Enabled?

# hares -display xcr

The resource is disabled.  This is the default for resources added after the cluster is started.  You will enable this resource later.

12.        Modify the Arguments and PathName attributes of the xcr resource to allow use of the local display by the resource:

# hares -modify xcr Arguments “%-display  <hostname>:0.0”
      where <hostname> is the hostname of the system whose display you are using

# hares -modify xcr PathName “/usr/openwin/bin/xclock”

13.        Modify the interval that the Process type is monitored so that you don’t have to wait too long when you offline the xcr resource:

# hatype -modify Process MonitorInterval 5

14.        Enable the xcr resource.   Resources added via the command line are, by default, not Enabled:

# hares -modify xcr Enabled 1

15.        Display the xcr resource again.   The status of the xcr resource should be Enabled.
16.        Check to see if the xcr resource is online:

# hares -display xcr

17.        Online the resource xcr.  What happens:

# hares -online xcr  -sys systemA

The xclock will appear on your workspace.

18.        Close the xclock window.  What happens:

After 5 seconds the xclock will appear again.  (The default MonitorInterval for the Process resource type is 60 seconds, but you changed it to 5 seconds in a previous step.)  You can look at the MonitorInterval for the Process type by entering:

# hatype -display Process

On the systemB system check  to see that the xclock process is now running on systemB:

# ps -ef  | grep xc
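What the agent is doing on that 5-second cycle can be sketched in plain shell (an illustrative model only, not the actual agent code; pgrep stands in for the agent's process-table scan):

```shell
#!/bin/sh
# Simplified model of one Process-agent monitor cycle (illustrative
# sketch, not the real agent code).  pgrep -f matches the full command
# line, much as the agent scans the process table for PathName+Arguments.
monitor_once() {
    if pgrep -f "$1" >/dev/null 2>&1; then
        echo ONLINE
    else
        echo OFFLINE
    fi
}

sleep 30 &                  # stand-in for the managed process
demo_pid=$!
monitor_once "sleep 30"     # prints ONLINE
kill "$demo_pid"
wait "$demo_pid" 2>/dev/null
monitor_once "no_such_prog_$$"   # prints OFFLINE
```

With MonitorInterval set to 5, VCS performs the equivalent of monitor_once every 5 seconds for each Process resource.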


19.        Clear the fault on the xcr resource on systemA, then close the xclock window; the xclock should appear again.  Check that the process is now running on systemA.

20.        The Arguments attribute is global, so the xclock window is displayed on the systemA machine no matter which machine is actually running the xclock process.  From a user perspective that is fine, because the user should not care which machine an application is running on.

      To see the effect of making an attribute local to a specific machine in the cluster, use the hares command to look at the Arguments attribute of the xcr resource, then use the hares command to make the attribute local:

# hares -display xcr
# hares -local xcr Arguments

Display the xcr resource and see what has changed in the Arguments attribute:

# hares -display xcr

Notice that the Arguments attribute now has a line for each system (systemA and systemB).  Notice that the hostname is the same on both lines.

Change the line for systemB to have the hostname of  systemB’s system:

# hares -modify xcr Arguments "%-display <hostname>:0.0" -sys systemB
        where <hostname> is the hostname of systemB

21.        Clear the faulted xcr resource on systemB
22.        Close the xclock window and it should appear on the systemB system.
Now let's create another resource in the group xcg and create a resource dependency.

1.        Create another resource, tmpg, of type FileOnOff with PathName /tmp/g, in the same group xcg.  To reduce the wait for failover to occur, change the MonitorInterval of the FileOnOff type to 5 seconds.

2.        Make xcr (the xclock resource) dependent on tmpg:

# hares  -link xcr  tmpg

3.        Start the xcr and tmpg resources (enable and online the resources).
4.        Remove /tmp/g.  What happens to xclock?
5.        Start both resources again.  Be sure to clear the faulted resources.
6.        Close  xclock.  What happens to tmpg?
7.        Change xcr’s Critical attribute so that xcr is non-critical.
8.        Close xclock.  What happens to the xcg group?
9.        Clear the faulted xcr resource.  
10.        Remove /tmp/g.  What happens to the xcg group?
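The parent/child rule exercised above can be modeled in a few lines of shell (a toy illustration only; the function names are invented, while xcr and tmpg are the lab's resources):

```shell
#!/bin/sh
# Toy model of a resource dependency: a parent (xcr) may only be brought
# online after its child (tmpg) is online.  Illustration only; the
# function and variable names are invented for this sketch.
tmpg_online=0
online_tmpg() { tmpg_online=1; }
online_xcr() {
    if [ "$tmpg_online" -eq 1 ]; then
        echo "xcr: ONLINE"
    else
        echo "xcr: waiting (child tmpg is offline)"
    fi
}

online_xcr     # prints: xcr: waiting (child tmpg is offline)
online_tmpg
online_xcr     # prints: xcr: ONLINE
```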

Lab Exercise 3C – Setting up NFS Failover                             
       
The purpose of this lab is to take a main.cf file and modify it to meet your environment.  In this exercise we will use an existing main.cf file set up to provide NFS server failover and modify it to match your environment.

Instructions
You will use the main.cf provided in the /etc/VRTSvcs/conf/Sample_nfs directory.

The resources used in the Sample_nfs configuration are based on the bundled agents and resource types that come with the VCS product:

§        Disk
§        DiskGroup  <= use only if you are using VxVM
§        IP
§        Mount
§        NFS
§        NIC
§        Share
§        Volume <= use only if you are using VxVM

Take a few minutes to review Appendix A at the end of the lab guide to see what the agents do and how to configure the resources in a service group.  If you would like to see a graphic representation of the dependency statements for a non-VxVM/VxFS environment, refer to the class handouts, Module 7, VCS Resources and Agents.  Better yet, see if you can draw the dependency tree yourself.


1.        Save the main.cf file from Lab Exercise 3B by renaming it to another name like lab3b.cf.  We will use this main.cf file in Lab Exercise 4.

2.        Copy the main.cf file from the /etc/VRTSvcs/conf/Sample_nfs directory to /etc/VRTSvcs/conf/config as main.cf.

3.        You will notice that the main.cf  file has four groups defined, two for UFS file systems and two for VxFS file systems.  You will only need one of the groups for this lab (either the UFS or the VxFS, VxVM group).

If you have shared disk ensure that the major and minor numbers are the same on both systemA and systemB.

If you do not have shared disk then you will create the same file system on each system and see that NFS failover is “simulated” (resources are offlined on one system and onlined on the other system).



4.        If you do not  have VxVM and VxFS installed then create a UFS file system:

§        Select a free partition:
# format => select the boot disk => partition => print

§        Select a free partition and enter the partition number, then:
-          Set the tag to home
-          Set the permission flags to wm
-          Choose a starting cylinder that does not overlap other partitions
-          Set the size to 100mb

§        Select label and reply y.

§        Quit out of format.

§        Create a file system:
# newfs  /dev/rdsk/c0t0d0sX   (where X is the partition number)

§        Create a mount point:
# mkdir /vcsnfs


5.        If you have VxVM and VxFS installed then create a VxFS file system:

§        Define a volume in rootdg:
# vxassist -g rootdg make vcsvol01 100m

§        Create a file system:
# mkfs -F vxfs /dev/vx/rdsk/rootdg/vcsvol01

§        Create a mount point:
# mkdir /vcsnfs


6.        Now modify the main.cf information to reflect the resources that you have in your environment (IP address, disk, file system, mount point, etc.).   Choose an IP address that you can ping from the other system.

7.        Add a user so you can log into the GUI.

8.        Start VCS and see that the NFS group is running on one system:

§        An IP alias was created on your public network interface.  Use ifconfig -a and ping.
§        The file system was mounted.  Use df -k.
§        The share was created in  /etc/dfs/sharetab.

9.        Cause one of the resources to fail so that the NFS group will fault and be brought up on the other system.  Suggestion:  unmount or unshare the file system.  You may want to change the MonitorInterval for the resource type that you fault so you won’t have to wait too long for the fault to be detected.







Lab Exercise 4 – Creating a New Resource Type                             
       
The purpose of this lab is to go through the steps to create a new resource type including defining the type, creating and installing an agent for the type, and creating a group that uses the new resource type.

Instructions
1.        To prepare for this lab, shut down the VCS cluster gracefully by using the following command:

# hastop -all

In this lab, you will create and test an agent called MyFile using the provided ScriptAgent for the startup entry point.  ScriptAgent is a compiled C++ startup entry point with all entry points set to null.

§        A MyFile resource represents a regular file.
§        The MyFile online entry point creates the file if it does not already exist.
§        The MyFile offline entry point deletes the file.
§        The MyFile monitor entry point returns the online confidence level of 110 if the file exists, and 100 (offline) if it does not.

Do the following on both systemA and systemB to create the new resource.

2.        Define the resource type called MyFile in /etc/VRTSvcs/conf/config/Mytypes.cf.

type MyFile (
NameRule = resource.PathName
str PathName
static str ArgList[] = { PathName }
)

The resource name and ArgList attribute values are passed to the script entry points as command-line arguments. For example, in the above configuration, script entry points receive the resource name as the first argument, and PathName as the second.
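This calling convention can be checked with a throwaway shell function (a sketch; entry_point and the sample values are invented stand-ins for an actual online/offline/monitor script):

```shell
#!/bin/sh
# Stand-in for a script entry point: VCS invokes each script with the
# resource name as $1 followed by the ArgList values, so with
# ArgList[] = { PathName } the script sees PathName as $2.
entry_point() {
    echo "resource=$1 pathname=$2"
}

entry_point myfile1 /tmp/agentfile
# prints: resource=myfile1 pathname=/tmp/agentfile
```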

Now, follow these steps to build the MyFile agent without writing and compiling any C++ code. This procedure uses the online, offline, and monitor entry points only.

3.        Create the directory /opt/VRTSvcs/bin/MyFile

4.        Use the  /opt/VRTSvcs/bin/ScriptAgent startup entry point as the MyFile startup entry point.  Copy this file to /opt/VRTSvcs/bin/MyFile/MyFileAgent OR create a symbolic link.

To copy the agent binary:

cp /opt/VRTSvcs/bin/ScriptAgent  /opt/VRTSvcs/bin/MyFile/MyFileAgent

To create a link to the agent binary:

ln  -s  /opt/VRTSvcs/bin/ScriptAgent   /opt/VRTSvcs/bin/MyFile/MyFileAgent

Now let’s implement the online, offline, and monitor entry points using scripts.

5.        Using any editor, create the file /opt/VRTSvcs/bin/MyFile/online with the contents:

#!/bin/sh
# Create the file specified by the PathName attribute
touch "$2"

6.        Create the file /opt/VRTSvcs/bin/MyFile/offline with the contents:

#!/bin/sh
# Remove the file specified by the PathName attribute
rm "$2"

7.        Create the file /opt/VRTSvcs/bin/MyFile/monitor with the contents:

#!/bin/sh
# Verify that the file specified by the PathName
# attribute exists.
if test -f "$2"
then exit 110;
else exit 100;
fi
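Before wiring the scripts into VCS, the same logic can be rehearsed by hand. This sketch repeats the online/offline/monitor logic as shell functions so it runs anywhere; the resource name myfile1 and the scratch file name are illustrative:

```shell
#!/bin/sh
# Rehearsal of the MyFile entry-point logic as functions ($1 = resource
# name, $2 = PathName), mirroring the online/offline/monitor scripts.
mf_online()  { touch "$2"; }
mf_offline() { rm -f "$2"; }
mf_monitor() { if test -f "$2"; then return 110; else return 100; fi; }

f=/tmp/agentfile.$$          # scratch file; the lab itself uses /tmp/agentfile
mf_online  myfile1 "$f"
mf_monitor myfile1 "$f"; echo "after online:  $?"   # prints 110 (online)
mf_offline myfile1 "$f"
mf_monitor myfile1 "$f"; echo "after offline: $?"   # prints 100 (offline)
```

If the by-hand run shows 110 after online and 100 after offline, the real scripts should behave the same when the agent invokes them.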

Now stop and make certain that both systems are set up with the new MyFile agent components:

§        MyFile subdirectory
§        MyFileAgent startup entry point
§        online, offline, monitor scripts
§        Mytypes.cf  in /etc/VRTSvcs/conf/config

Be certain that all of the binaries and scripts have execute permissions.

8.        Retrieve the main.cf file that you used in Lab Exercise 3B and make sure it is in /etc/VRTSvcs/conf/config


9.        Add an include statement in the main.cf file that points to the Mytypes.cf file:

include "Mytypes.cf"

10.        Define a MyFile resource in the xcg group in the main.cf file (the resource name agentfile used here is just an example):

MyFile agentfile (
PathName = "/tmp/agentfile"
Enabled = 1
)

11.        Verify and generate the configuration (use the hacf command).

12.        Bring up the cluster.

13.        Change the MonitorInterval of MyFile to 5 seconds.

14.        Online the xcg group.  What happens to the MyFile resource?  Make sure it is online.  Did the agentfile get created?

15.        Remove the  /tmp/agentfile and see what happens.  Did the xcg group fault?

16.        Create a dependency in the xcg group using the MyFile resource and see how the group and resource behaves depending on which object is put online, offline, faulted, etc.

Appendix A – VCS Resource Types and Agents


Bundled Agents

The agent descriptions provided below are taken from Chapter 2 in the VCS Installation Guide.  These are a subset of the agents described in the Installation Guide.

Disk Agent

Description -  Manage a raw disk.

Entry Points

§        Online—Not applicable.
§        Offline—Not applicable.
§        Monitor—Determines if the disk is accessible by performing read I/O on the raw disk.

Required Attribute

Partition - Indicates which partition to monitor. Partition is specified with the full path beginning with a slash (/). Otherwise the name given is assumed to reside in /dev/rdsk.

Type Definition

type Disk (
str Partition
NameRule = resource.Partition
static str Operations = None
static str ArgList[] = { Partition }
)

Sample Configuration

Disk c1t0d0s0 (
Partition = c1t0d0s0
)


DiskGroup Agent

Description - Bring online, take offline, and monitor a VERITAS Volume Manager disk group.

Entry Points

§        Online - Using the command vxdg, this script imports the disk group.
§        Offline - Using the command vxdg, this script deports the disk group.
§        Monitor - Using the command vxdg, this agent determines if the disk group is online or offline. If the disk group has been imported with noautoimport=off, and if the service group is not frozen, the service group to which the DiskGroup resource belongs is taken offline.

Required Attribute

DiskGroup - Disk group name.

Optional Attributes

StartVolumes - If value is 1, the DiskGroup online script starts all volumes belonging to that disk group after importing. Default is 1.

StopVolumes - If value is 1, the DiskGroup offline script stops all volumes belonging to that disk group before deporting. Default is 1.

Type Definition

type DiskGroup (
static int OnlineRetryLimit = 1
str DiskGroup
NameRule = resource.DiskGroup
static str ArgList[] = { DiskGroup, StartVolumes,
StopVolumes, MonitorOnly }
str StartVolumes = 1
str StopVolumes = 1
static int NumThreads = 1
)

Sample Configuration

DiskGroup sharedg (
DiskGroup = sharedg
)


FileOnOff Agent

Description -  Create, remove, and monitor files.

Entry Points

§        Online - Create an empty file with the specified name, if one does not already
exist.
§        Offline - Remove the specified file.
§        Monitor - Check if the specified file exists. If it does, the agent reports as online. If it does not, the agent reports as offline.

Required Attribute

PathName - Specifies the complete pathname, starting with the slash (/) preceding the file name.

Type Definition

type FileOnOff (
str PathName
NameRule = resource.PathName
static str ArgList[] = { PathName }
)

Sample Configuration

FileOnOff tmp_file01 (
PathName = "/tmp/file01"
)


IP Agent

Description - Manage the process of configuring an IP address on an interface.

Entry Points

§        Online - Check if the IP address is in use by another system. Uses ifconfig to set the IP address on a unique alias on the interface.
§        Offline - Bring down the IP address associated with the specified interface.  Uses ifconfig to set the interface alias to 0.0.0.0 and the state to “down.”
§        Monitor - Monitor the interface to test if the IP address associated with the interface is alive.

Required Attributes

Address - IP address associated with the interface.

Device - Name of the NIC device associated with the IP address.  Should contain only the device name, without an alias; for example, le0.

Optional Attributes

ArpDelay - Number of seconds to sleep between configuring an interface and sending out a broadcast to inform routers about this IP address. Default is 1 second.

IfconfigTwice - Causes an IP address to be configured twice, using an ifconfig up-down-up sequence. Increases probability of gratuitous arps (caused by ifconfig up) reaching clients. Default is 0.

NetMask - Netmask associated with the interface.

Options - Options for the ifconfig command.

Type Definition

type IP (
str Device
str Address
str NetMask
str Options
int ArpDelay = 1
int IfconfigTwice = 0
NameRule = resource.Address
static str ArgList[] = { Device, Address, NetMask,
Options, ArpDelay, IfconfigTwice
}
)

Sample Configuration

IP IP_192_203_47_61 (
Device = le0
Address = "192.203.47.61"
)


Mount Agent

Description - Bring online, take offline, and monitor a file system mount point.

Entry Points

§        Online—Mount the block device on the directory. If the mount fails, the agent tries to run fsck on the raw device and then remount the block device.
§        Offline—Unmount the file system.
§        Monitor—Determine if the file system is mounted. Checks mount status using the commands stat and statvfs.

Required Attributes

BlockDevice - Block device for mount point.
MountPoint - Directory for mount point.
FSType - File system type, for example, vxfs, ufs, etc.

Optional Attributes

FsckOpt - Options for fsck command.
MountOpt - Options for mount command.

Type Definition

type Mount (
str MountPoint
str BlockDevice
str FSType
str MountOpt
str FsckOpt
NameRule = resource.MountPoint
static str ArgList[] = { MountPoint, BlockDevice,
FSType, MountOpt, FsckOpt }
)

Sample Configuration

Mount export1 (
MountPoint = "/export1"
BlockDevice = "/dev/dsk/c1t1d0s3"
FSType = vxfs
MountOpt = ro
)


NFS Agent

Description - Start and monitor the nfsd and mountd processes required by all exported NFS file systems.

Entry Points

§        Online - Check if nfsd and mountd processes are running. If they’re not, it starts them and exits.
§        Offline - Not applicable.
§        Monitor - Monitor versions 2 and 3 of the nfsd process, and versions 1, 2, and 3 of the mountd process. Monitors tcp and udp versions of the processes by sending RPC (Remote Procedure Call) calls clnt_create and clnt_call to the RPC server. If calls succeed, the resource is reported as online.

Optional Attribute

Nservers - Specifies the number of concurrent NFS requests the server can handle. Default is 16.

Type Definition

type NFS (
int Nservers = 16
NameRule = "NFS_" + group.Name + "_" + resource.Nservers
static str ArgList[] = { Nservers }
static str Operations = OnOnly
static int RestartLimit = 1
)

Sample Configuration

NFS NFS_groupx_24 (
Nservers = 24
)


NIC Agent

Description  - Monitor the configured NIC. If a network link fails, or if there is a problem with the device card, the resource is marked as offline. The NIC listed in the Device attribute must have an administration IP address, which is the default IP address assigned to the physical interface of a host on a network.  The agent will not configure network routes or an administration IP address.

Entry Points

§        Online—Not applicable.
§        Offline—Not applicable.
§        Monitor—Test the network card and network link. Use the DLPI (Data Link Provider Interface) layer to send and receive messages across the driver. Ping the broadcast address of the interface to generate traffic on the network. Count the number of packets passing through the device before and after the address is pinged. If the count decreases or remains the same, the resource is marked as offline.

Required Attribute

Device - NIC name.

Optional Attributes

PingOptimize - Number of monitor cycles to detect if the configured interface is inactive. A value of 1 optimizes broadcast pings and requires two monitor cycles. A value of 0 performs a broadcast ping during each monitor cycle and detects the inactive interface within the cycle. Default is 1.

NetworkHosts - List of hosts on the network that will be pinged to determine if the network connection is alive. The IP address of the host should be entered instead of the HostName to prevent the monitor from timing out (DNS problems can cause the ping to hang); for example, 166.96.15.22. If this optional attribute is not specified, the monitor will test the NIC by pinging the broadcast address on the NIC. If more than one network host is listed, the monitor will return online if at least one of the hosts is alive.

NetworkType - Type of network, such as Ethernet (ether), FDDI (fddi), Token Ring (token), etc.
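The "online if at least one listed host answers" behavior of NetworkHosts amounts to a short loop. In this sketch the probe command is pluggable, and stubs replace real ping traffic so it is self-contained (real ping flag syntax varies by OS):

```shell
#!/bin/sh
# Sketch of the NetworkHosts check: the NIC counts as alive if at least
# one listed host answers.  PROBE is pluggable; in real use it would be
# some form of ping, but here stubs keep the sketch self-contained.
any_host_alive() {          # any_host_alive host1 [host2 ...]
    for h in "$@"; do
        $PROBE "$h" >/dev/null 2>&1 && return 0
    done
    return 1
}

PROBE=true                  # stub: every host "answers"
any_host_alive 166.93.2.1 166.99.1.2 && echo "NIC online"
PROBE=false                 # stub: no host answers
any_host_alive 166.93.2.1 166.99.1.2 || echo "NIC offline"
```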

Type Definition

type NIC (
str Device
str NetworkType
str NetworkHosts[]
NameRule = group.Name + "_" + resource.Device
int PingOptimize = 1
static str ArgList[] = { Device, NetworkType,
NetworkHosts, PingOptimize }
static str Operations = None
)



Sample Configurations

Sample 1: Without Network Hosts, Using the Default Ping Mechanism
NIC groupx_le0 (
Device = le0
PingOptimize = 1
)

Sample 2: With Network Hosts
NIC groupx_le0 (
Device = le0
NetworkHosts = { "166.93.2.1", "166.99.1.2" }
)


Process Agent

Description - Start, stop, and monitor a process specified by the user.

Entry Points

§        Online—Start the process with optional arguments.
§        Offline—VCS sends a SIGTERM. If the process does not exit within one second, VCS sends a SIGKILL.
§        Monitor—Check to see if the process is alive by scanning the process table for the name of the executable pathname and argument list. Because of the Solaris procfs-interface, the match is limited to the initial 80 characters.

Required Attribute

PathName - Defines complete pathname for accessing an executable program, including the program name.

Optional Attribute

Arguments - Passes arguments to the process.  Note: Multiple tokens must be separated by one space only.  String cannot accommodate more than one space between tokens, or leading or trailing whitespace characters.

Type Definition

type Process (
str PathName
str Arguments
NameRule = resource.PathName
static str ArgList[] = { PathName, Arguments }
)

Sample Configuration

Process usr_lib_sendmail (
PathName = "/usr/lib/sendmail"
Arguments = "bd q1h"
)


Share Agent

Description - Share, unshare, and monitor a single local resource for exporting an NFS file system to be mounted by remote systems.

Entry Points

§        Online - Share an NFS file system.
§        Offline - Unshare an NFS file system.
§        Monitor - Read the /etc/dfs/sharetab file and look for an entry for the file system specified by PathName. If the entry exists, the resource is reported as online.
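That monitor check can be mimicked against a scratch sharetab-style file (illustrative only; share_is_online is an invented helper, and the real agent reads /etc/dfs/sharetab):

```shell
#!/bin/sh
# Mimic of the Share monitor: the share is online if its pathname appears
# as the first field of a sharetab-style file.  share_is_online is an
# invented helper; the real agent reads /etc/dfs/sharetab.
share_is_online() {         # share_is_online SHARETAB PATHNAME
    awk '{print $1}' "$1" | grep -qx "$2"
}

tab=/tmp/sharetab.$$        # scratch file standing in for /etc/dfs/sharetab
printf '%s\n' "/share1x - nfs rw " > "$tab"
share_is_online "$tab" /share1x && echo ONLINE    # prints ONLINE
share_is_online "$tab" /share2x || echo OFFLINE   # prints OFFLINE
rm -f "$tab"
```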

Required Attribute

PathName - Pathname of the file system to be shared.

Optional Attributes

OfflineNFSRestart - Restart NFS when the offline entry point is executed. Default is 1. If there are multiple shares in a single service group, setting this attribute for one share only is sufficient.

OnlineNFSRestart - Restart NFS when the online entry point is executed.  Default is 0. If there are multiple shares in a single service group, setting this attribute for one share only is sufficient.

Options - Options for the share command.

Type Definition

type Share (
str PathName
str Options
int OnlineNFSRestart = 0
int OfflineNFSRestart = 1
NameRule = nfs + resource.PathName
static str ArgList[] = { PathName, Options,
OnlineNFSRestart,
OfflineNFSRestart }
)

Sample Configuration

Share nfsshare1x (
PathName = "/share1x"
)


Volume Agent

Description - Bring online, take offline, and monitor a VERITAS Volume Manager volume.

Entry Points

§        Online—Using the command vxvol, this agent starts the volume.
§        Offline—Using the command vxvol, this agent stops the volume.
§        Monitor—Determines if the volume is online or offline by reading a block from the raw device interface to the volume.

Required Attributes

DiskGroup - Disk group name.
Volume - Volume name.

Type Definition

type Volume (
str Volume
str DiskGroup
NameRule = resource.DiskGroup + "_" + resource.Volume
static str ArgList[] = { Volume, DiskGroup }
)

Sample Configuration

Volume sharedg_vol3 (
Volume = vol3
DiskGroup = sharedg
)

Posted 2003-01-16 11:35

Great post. I suggest spookish contribute more good stuff like this!!!

Posted 2003-01-16 15:21

Thanks Spookish!!! Please post the 3.5 version too; you must have done it already and should have the docs.

Posted 2003-01-16 21:30

Sorry, I just haven't had time to do 3.5; probably after the new year.
Right now I'm wrapping up my 2002 business.
This wasn't written by me, by the way; it's VERITAS's own lab.

Posted 2003-01-17 01:29

[quote]Originally posted by "sea-unix": Thanks Spookish!!! Please post the 3.5 version too; you must have done it already and should have the docs.[/quote]

Is 3.5 very different?