Setting up ASE Cluster Edition, step by step

Posted 2010-04-01 15:54
Sybase's cluster edition arrived relatively late, and we were among the earlier companies to put it into production, so I had a chance to learn ASE Cluster Edition. Material on Sybase clustering is scarce online, so I am writing up this installation for reference.

This setup uses two Red Hat 5.2 virtual machines as ASE CE nodes (sdc1 and sdc2). Two clusters were built:
mycluster: two instances, one on sdc1 and one on sdc2, sharing storage.
my3nodes: three instances, all on sdc1, using the local file system.
To get there, the following preparation is needed:

1. Install ASE CE on sdc1 and sdc2 as user sdc. This is very straightforward, much easier than Oracle: for the most part you just press Enter.

2. Partition the shared storage
On sdc1 and sdc2, partition the shared /dev/sdb with fdisk:
[root@sdc1 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        2479    19808145   83  Linux
/dev/sda3            2480        2610     1052257+  82  Linux swap / Solaris

Disk /dev/sdb: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          25      200781   83  Linux
/dev/sdb2              26          50      200812+  83  Linux
/dev/sdb3              51          75      200812+  83  Linux
/dev/sdb4              76         391     2538270    5  Extended
/dev/sdb5              76         100      200781   83  Linux
/dev/sdb6             101         125      200781   83  Linux
/dev/sdb7             126         150      200781   83  Linux
/dev/sdb8             151         175      200781   83  Linux
/dev/sdb9             176         200      200781   83  Linux
/dev/sdb10            201         225      200781   83  Linux
/dev/sdb11            226         250      200781   83  Linux
/dev/sdb12            251         275      200781   83  Linux
/dev/sdb13            276         300      200781   83  Linux
/dev/sdb14            301         325      200781   83  Linux
/dev/sdb15            326         391      530113+  83  Linux

Bind the partitions to raw devices on both sdc1 and sdc2:
[root@sdc1 ~]# cat /etc/udev/rules.d/60-raw.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"                    
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"   
ACTION=="add", KERNEL=="sdb3", RUN+="/bin/raw /dev/raw/raw3 %N"   
ACTION=="add", KERNEL=="sdb5", RUN+="/bin/raw /dev/raw/raw5 %N"   
ACTION=="add", KERNEL=="sdb6", RUN+="/bin/raw /dev/raw/raw6 %N"   
ACTION=="add", KERNEL=="sdb7", RUN+="/bin/raw /dev/raw/raw7 %N"   
ACTION=="add", KERNEL=="sdb8", RUN+="/bin/raw /dev/raw/raw8 %N"   
ACTION=="add", KERNEL=="sdb9", RUN+="/bin/raw /dev/raw/raw9 %N"
ACTION=="add", KERNEL=="sdb10", RUN+="/bin/raw /dev/raw/raw10 %N"
ACTION=="add", KERNEL=="sdb11", RUN+="/bin/raw /dev/raw/raw11 %N"
ACTION=="add", KERNEL=="sdb12", RUN+="/bin/raw /dev/raw/raw12 %N"
ACTION=="add", KERNEL=="sdb13", RUN+="/bin/raw /dev/raw/raw13 %N"
ACTION=="add", KERNEL=="sdb14", RUN+="/bin/raw /dev/raw/raw14 %N"
ACTION=="add", KERNEL=="sdb15", RUN+="/bin/raw /dev/raw/raw15 %N"
ACTION=="add", KERNEL=="raw*", OWNER="sdc", GROUP="sdc", MODE="0664"

Reboot sdc1 and sdc2 for the bindings to take effect.
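The fourteen binding rules above follow a single pattern, so they can be generated rather than typed by hand. A minimal sketch that writes the same rules to a scratch file (it does not touch /etc/udev/rules.d; copy the output into 60-raw.rules on both nodes yourself):

```shell
#!/bin/sh
# Generate raw bindings for /dev/sdb1..sdb15, skipping the extended
# partition sdb4, matching the hand-written 60-raw.rules above.
out=$(mktemp)
for n in 1 2 3 5 6 7 8 9 10 11 12 13 14 15; do
    printf 'ACTION=="add", KERNEL=="sdb%s", RUN+="/bin/raw /dev/raw/raw%s %%N"\n' \
        "$n" "$n" >>"$out"
done
# Ownership/permissions are udev assignments (single =), not matches (==).
printf 'ACTION=="add", KERNEL=="raw*", OWNER="sdc", GROUP="sdc", MODE="0664"\n' >>"$out"
cat "$out"
```

Note that OWNER, GROUP and MODE take a single = (assignment); with == udev treats them as match conditions and the raw devices stay owned by root.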

3. Edit /etc/hosts on both nodes
sdc1:
[root@sdc1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost
11.24.114.192   sdc1
11.24.114.182   sdc2
192.168.202.130 sdc1-priv1
192.168.202.131 sdc1-priv2
192.168.202.133 sdc1-priv3
192.168.202.134 sdc1-priv4
192.168.202.129 sdc2-priv1

sdc1-priv1 and sdc2-priv1 serve as the heartbeat IPs for sdc1/sdc2 in the mycluster cluster; sdc1-priv2 through sdc1-priv4 serve as sdc1's heartbeat IPs in the my3nodes cluster.

sdc2:
[root@sdc2 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost
11.24.114.192   sdc1
11.24.114.182   sdc2
192.168.202.130 sdc1-priv1
192.168.202.129 sdc2-priv1


4. Create symbolic links on sdc1 and sdc2 for the mycluster cluster to use:
[sdc@sdc1 data]$ ll
total 0
-rw-r--r-- 1 sdc sdc  0 Dec 18 17:46 empty
lrwxrwxrwx 1 sdc sdc 13 Mar 29 05:13 master.dat -> /dev/raw/raw1
lrwxrwxrwx 1 sdc sdc 13 Mar 29 05:13 procs.dat -> /dev/raw/raw2
lrwxrwxrwx 1 sdc sdc 13 Mar 29 05:17 quorum.dat -> /dev/raw/raw5
lrwxrwxrwx 1 sdc sdc 13 Mar 29 05:29 sdc1_lst.dat -> /dev/raw/raw6
lrwxrwxrwx 1 sdc sdc 13 Mar 29 05:29 sdc2_lst.dat -> /dev/raw/raw7
lrwxrwxrwx 1 sdc sdc 13 Mar 31 04:19 sdc3_lst.dat -> /dev/raw/raw9
lrwxrwxrwx 1 sdc sdc 14 Mar 31 04:20 sdc4_lst.dat -> /dev/raw/raw10
lrwxrwxrwx 1 sdc sdc 13 Mar 29 05:13 sybsystem.dat -> /dev/raw/raw3
lrwxrwxrwx 1 sdc sdc 13 Mar 29 05:46 temp.dat -> /dev/raw/raw8

[sdc@sdc1 data]$ pwd
/opt/sdc/data
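The links above can be created with one short loop instead of nine individual ln -s commands. A sketch of the same name-to-raw-device mapping; it builds the links in a throwaway directory so it can be tried safely (on a real node, run the equivalent in /opt/sdc/data):

```shell
#!/bin/sh
# name:raw-number pairs, taken from the directory listing above
dir=$(mktemp -d)
for pair in master.dat:1 procs.dat:2 sybsystem.dat:3 quorum.dat:5 \
            sdc1_lst.dat:6 sdc2_lst.dat:7 temp.dat:8 \
            sdc3_lst.dat:9 sdc4_lst.dat:10; do
    name=${pair%%:*}                       # part before the colon
    raw=${pair##*:}                        # part after the colon
    ln -s "/dev/raw/raw$raw" "$dir/$name"  # dangling links are fine for a dry run
done
ls -l "$dir"
```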

5. Start the agent on sdc1 and sdc2:
uafstartup.sh &
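uafstartup.sh returns immediately, so it is worth confirming the agent is actually listening before running sybcluster (the agent port, 9999, appears in the show agents output further down). A minimal sketch using bash's /dev/tcp pseudo-device; any port checker works equally well:

```shell
#!/bin/bash
# Print "open" if something accepts TCP connections on host:port,
# otherwise print "closed" and return nonzero.
probe_port() {
    if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
        echo open
    else
        echo closed
        return 1
    fi
}

probe_port sdc1 9999 || echo "UAF agent not reachable on sdc1:9999"
```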


6. Create the mycluster cluster:
Two nodes (sdc1, sdc2) and two instances (inst01, inst02):

sybcluster -Uuafadmin -P -Cmycluster -Fsdc1,sdc2
> show agents
Verifying the supplied agent specifications...

Agent Information: service:jmx:rmi:///jndi/rmi://sdc1:9999/agent
-----------------------------------------------------


    Node Name:       sdc1
    Agent Port:      9999
    Agent Version:   2.5.0
    Agent Build:     1003

    OS Name:         Linux
    OS Version:      2.6.18-92.el5
    OS Architecture: amd64

    Agent Service Info:
      Agent Service (Agent)  Build: 1003  Status: running
      BootstrapService (BootstrapService)  Build: <unavailable>  Status: running
      Configuration Service (ConfigService)  Build: 1003  Status: running
      Deployment Service (DeploymentService)  Build: 20  Status: running
      Environment Service (EnvironmentDiscoveryService)  Build: 1003  Status: running
      File Transfer Service (FileTransferService)  Build: 1003  Status: running
      Plugin Registration Service (PluginRegisterService)  Build: 1003  Status: running
      RMI Service (RMIService)  Build: 1003  Status: running
      Remote Shell Service (RemoteShellService)  Build: 1003  Status: running
      Security Service (SecurityService)  Build: 1003  Status: running
      Self Discovery Service (SelfDiscoveryService)  Build: 1003  Status: running
      Service Registration Service (ServiceRegistrationService)  Build: 1003  Status: running
      Session Service (SessionService)  Build: 1003  Status: running
      Sybase Home Service (SybaseHomeService)  Build: 14  Status: running

    Agent Plugin Info:

Agent Information: service:jmx:rmi:///jndi/rmi://sdc2:9999/agent
-----------------------------------------------------


    Node Name:       sdc2
    Agent Port:      9999
    Agent Version:   2.5.0
    Agent Build:     1003

    OS Name:         Linux
    OS Version:      2.6.18-92.el5
    OS Architecture: amd64

    Agent Service Info:
      Agent Service (Agent)  Build: 1003  Status: running
      BootstrapService (BootstrapService)  Build: <unavailable>  Status: running
      Configuration Service (ConfigService)  Build: 1003  Status: running
      Deployment Service (DeploymentService)  Build: 20  Status: running
      Environment Service (EnvironmentDiscoveryService)  Build: 1003  Status: running
      File Transfer Service (FileTransferService)  Build: 1003  Status: running
      Plugin Registration Service (PluginRegisterService)  Build: 1003  Status: running
      RMI Service (RMIService)  Build: 1003  Status: running
      Remote Shell Service (RemoteShellService)  Build: 1003  Status: running
      Security Service (SecurityService)  Build: 1003  Status: running
      Self Discovery Service (SelfDiscoveryService)  Build: 1003  Status: running
      Service Registration Service (ServiceRegistrationService)  Build: 1003  Status: running
      Session Service (SessionService)  Build: 1003  Status: running
      Sybase Home Service (SybaseHomeService)  Build: 14  Status: running

    Agent Plugin Info:
> create cluster
Cluster mycluster - Enter the maximum number of instances:  [ 4 ] 2
How many agents will participate in this cluster:  [ 4 ] 2
Verifying the supplied agent specifications...
1) sdc1 9999 2.5.0 Linux
2) sdc2 9999 2.5.0 Linux
Enter the number representing the cluster node 1:  [ 2 ] 1 1
2) sdc2 9999 2.5.0 Linux
Enter the number representing the cluster node 2:  [ 2 ] 2
Will this cluster be configured using private SYBASE installations? (Y/N) :   [ N ] y

------------------ Quorum Device  ---------------------
The quorum device is used to manage a cluster.  It contains information shared between instances and nodes.

Enter the full path to the quorum disk: /opt/sdc/data/quorum.dat
Enter any traceflags:

-------------------- Page Size  --------------------

Enter the page size in kilobytes:  [ 2 ] 4

--------------- Master Database Device  ----------------
The master database device controls the operation of the Adaptive Server and stores information about all user databases and their associated database devices.

Enter the full path to the master device: /opt/sdc/data/master.dat
Enter the size the Master Device (MB):  [ 60 ] 150
Enter the size the Master Database (MB):  [ 26 ] 100

------------ Sybase System Procedure Device --------
Sybase system procedures (sybsystemprocs) are stored on a device.

Enter the System Procedure Device path: /opt/sdc/data/procs.dat
Enter System Procedure Device size (MB):  [ 152 ]
Enter the System Procedure Database size (MB):  [ 152 ]

-------------- System Database Device ------------------
The system database (sybsystemdb) stores information about distributed transactions.

Enter the System Database Device path: /opt/sdc/data/sybsystem.dat
Enter the System Database Device size (MB):  [ 12 ] 50
Enter the System Database size (MB):  [ 12 ] 50

--------------- PCI Device  ----------------
Pluggable Component Interface (PCI) provides support for Java in database by loading off-the-shelf JVMs from any vendor. If you want to use JVM, create a device for it.

Enable PCI in Adaptive Servier (Y/N):  [ N ]

--------------------------------------------------------
Does this cluster have a secondary network:  [ Y ] n
Enter the port number from which this range will be applied:  [ 15100 ]

--------------------------------------------------------

--------------------------------------------------------

You will now be asked for the instance information on a node by node basis.


-- Cluster: mycluster - Node: sdc1 - Agent: sdc1:9999 --

Enter the name of the cluster instance: inst01
Enter the Sybase installation directory on mycluster:  [ /opt/sdc ]
Enter the environment shell script path on mycluster:  [ /opt/sdc/SYBASE.sh ]
Enter the ASE home directory on mycluster:  [ /opt/sdc/ASE-15_0 ]
Enter path to the dataserver config file:  [ /opt/sdc/mycluster.cfg ]
Enter the interface file query port number for instance sdc1: 6001
Enter the primary protocol address for sdc1:  [ sdc1 ] sdc1-priv1

--------------- Local System Temporary Database ---------
The Local System Temporary Database Device contains a database for each instance in the cluster.
Enter the LST device name: sdc1_lst
Enter the LST device path: /opt/sdc/data/sdc1_lst.dat
Enter LST device size (MB): 150
Enter the LST database name:  [ mycluster_tdb_1 ]
Enter the LST database size (MB):  [ 150 ]

-- Cluster: mycluster - Node: sdc2 - Agent: sdc2:9999 --

Enter the name of the cluster instance: inst02
Enter the Sybase installation directory on mycluster:  [ /opt/sdc ]
Enter the environment shell script path on mycluster:  [ /opt/sdc/SYBASE.sh ]
Enter the ASE home directory on mycluster:  [ /opt/sdc/ASE-15_0 ]
Enter path to the dataserver config file:  [ /opt/sdc/mycluster.cfg ]
Enter the interface file query port number for instance sdc2: 6001
Enter the primary protocol address for sdc2:  [ sdc2 ] sdc2-priv1

--------------- Local System Temporary Database ---------
The Local System Temporary Database Device contains a database for each instance in the cluster.
Enter the LST device name: sdc2_lst
Enter the LST device path: /opt/sdc/data/sdc2_lst.dat
Enter LST device size (MB): 150
Enter the LST database name:  [ mycluster_tdb_2 ]
Enter the LST database size (MB):  [ 150 ]
Would you like to save this configuration information in a file?  [ Y ]
Enter the name of the file to save the cluster creation information:  [ /home/sdc/mycluster.xml ]

--------------------------------------------------------
Create the cluster now?  [ Y ]
INFO  - Creating the Cluster Agent plugin on host address sdc1 using agent: sdc1:9999
INFO  - Creating the Cluster Agent plugin on host address sdc2 using agent: sdc2:9999
Enter the path to the Interfaces file on sdc1:  [ /opt/sdc ]
Enter the path to the Interfaces file on sdc2:  [ /opt/sdc ]
Would you like to check whether this device supports IO fencing capability (Y/N)?  [ Y ]
Validating the device /opt/sdc/data/master.dat for IO Fencing Capabilities.
This device does not have SCSI-3 PGR capability. It does not support I/O fencing
Do you want to continue (Y/N)?  [ N ] y
Validating the device /opt/sdc/data/procs.dat for IO Fencing Capabilities.
This device does not have SCSI-3 PGR capability. It does not support I/O fencing
Do you want to continue (Y/N)?  [ N ] y
Validating the device /opt/sdc/data/sybsystem.dat for IO Fencing Capabilities.
This device does not have SCSI-3 PGR capability. It does not support I/O fencing
Do you want to continue (Y/N)?  [ N ] y
INFO  - Cluster "mycluster" creation in progress.
INFO  - Choosing the first instance to be created using the connected agent...
INFO  - The Sybase home directory is /opt/sdc.
INFO  - The ASE home directory is /opt/sdc/ASE-15_0.
INFO  - Retrieving environment variables from /opt/sdc/SYBASE.sh.
INFO  - The first instance created will be sdc1.
INFO  - Warning: You have selected '4k' as the logical page size for the Adaptive
INFO  - Server. If you plan to load dump from another database, make sure this logical
INFO  - page size matches the size of the source database. The default logical page
INFO  - size in previous Adaptive Server versions was 2KB.
INFO  - Warning: Unable to verify /opt/sdc/data/master.dat device size.  Please verify
INFO  - that this device is not already in use and that it has sufficient space
INFO  - available.
INFO  - Warning: Unable to verify /opt/sdc/data/procs.dat device size.  Please verify
INFO  - that this device is not already in use and that it has sufficient space
INFO  - available.
INFO  - Building Adaptive Server 'inst01':
INFO  - Writing entry into directory services...
INFO  - Directory services entry complete.
INFO  - Building master device...
INFO  - Master device complete.
INFO  - Starting server...
INFO  - Server started.
INFO  - Building sysprocs device and sybsystemprocs database...
INFO  - sysprocs device and sybsystemprocs database created.
INFO  - Running installmaster script to install system stored procedures...
INFO  - installmaster: 10% complete.
INFO  - installmaster: 20% complete.
INFO  - installmaster: 30% complete.
INFO  - installmaster: 40% complete.
INFO  - installmaster: 50% complete.
INFO  - installmaster: 60% complete.
INFO  - installmaster: 70% complete.
INFO  - installmaster: 80% complete.
INFO  - installmaster: 90% complete.
INFO  - installmaster: 100% complete.
INFO  - installmaster script complete.
INFO  - Creating two-phase commit database...
INFO  - Two phase commit database complete.
INFO  - Installing common character sets (Code Page 437, Code Page 850, ISO Latin-1,
INFO  - Macintosh and HP Roman-...
INFO  - Character sets installed.
INFO  - Server 'inst01' was successfully created.
INFO  - Connecting to the dataserver using the host and query port sdc1:6001.
INFO  - Creating the Local System Temporary device sdc1_lst at /opt/sdc/data/sdc1_lst.dat of size 150M.
INFO  - Creating the Local System Temporary device sdc2_lst at /opt/sdc/data/sdc2_lst.dat of size 150M.
INFO  - sdc1: Creating the Local System Temporary database mycluster_tdb_1 on sdc1_lst of size 150M.
INFO  - sdc2: Creating the Local System Temporary database mycluster_tdb_2 on sdc2_lst of size 150M.
INFO  - The cluster is now configured. Shutting down this first instance.
The cluster mycluster was successfully created.

7. Create the my3nodes cluster: one node (sdc1), three instances (sdc1, sdc2, sdc3).
Since all three instances run on the same machine, when prompted
Will this cluster be configured using private SYBASE installations? (Y/N) :
I answered N, meaning all instances share one set of binaries.

[sdc@sdc1 onenodes]$ sybcluster -Uuafadmin -P -Cmy3nodes -F"sdc1"
> show agents
Verifying the supplied agent specifications...

Agent Information: service:jmx:rmi:///jndi/rmi://sdc1:9999/agent
-----------------------------------------------------


            Node Name:       sdc1
            Agent Port:      9999
            Agent Version:   2.5.0
            Agent Build:     1003

            OS Name:         Linux
            OS Version:      2.6.18-92.el5
            OS Architecture: amd64

            Agent Service Info:
              Agent Service (Agent)  Build: 1003  Status: running
              BootstrapService (BootstrapService)  Build: <unavailable>  Status: running
              Configuration Service (ConfigService)  Build: 1003  Status: running
              Deployment Service (DeploymentService)  Build: 20  Status: running
              Environment Service (EnvironmentDiscoveryService)  Build: 1003  Status: running
              File Transfer Service (FileTransferService)  Build: 1003  Status: running
              Plugin Registration Service (PluginRegisterService)  Build: 1003  Status: running
              RMI Service (RMIService)  Build: 1003  Status: running
              Remote Shell Service (RemoteShellService)  Build: 1003  Status: running
              Security Service (SecurityService)  Build: 1003  Status: running
              Self Discovery Service (SelfDiscoveryService)  Build: 1003  Status: running
              Service Registration Service (ServiceRegistrationService)  Build: 1003  Status: running
              Session Service (SessionService)  Build: 1003  Status: running
              Sybase Home Service (SybaseHomeService)  Build: 14  Status: running

            Agent Plugin Info:

              ASE Cluster Agent Plugin (com.sybase.ase.cluster) Version: 15.0.1 Build: 400 Instance: 1  Status: running
                 Cluster Name: mycluster
                 Env Shell:    /opt/sdc/SYBASE.sh Shell Type: sh
                 Sybase Home:  /opt/sdc
                 ASE Home:     /opt/sdc/ASE-15_0
                 ASE Version:  Adaptive Server Enterprise/15.5/EBF 17487 Cluster Edition/P/x86_64/Enterprise Linux/ase155ce/2422/64-bit/FBO/Thu Feb 25 08:14:07 2010
                 ASE Login:    sa
                 Update Time:  800 seconds
                 Last Update:  2010-03-31 11:22:41 -0400
> create cluster
Cluster my3nodes - Enter the maximum number of instances:  [ 4 ] 3
How many agents will participate in this cluster:  [ 4 ] 1
Verifying the supplied agent specifications...
        1) sdc1 9999 2.5.0 Linux
Enter the number representing the cluster node 1:  [ 1 ]
Will this cluster be configured using private SYBASE installations? (Y/N) :   [ N ]

------------------ Quorum Device  ---------------------
The quorum device is used to manage a cluster.  It contains information shared between instances and nodes.

Enter the full path to the quorum disk: /opt/sdc/onenodes/quorum.dat
Enter any traceflags:

-------------------- Page Size  --------------------

Enter the page size in kilobytes:  [ 2 ] 4

--------------- Master Database Device  ----------------
The master database device controls the operation of the Adaptive Server and stores information about all user databases and their associated database devices.

Enter the full path to the master device: /opt/sdc/onenodes/master.dat
Enter the size the Master Device (MB):  [ 60 ] 100
Enter the size the Master Database (MB):  [ 26 ] 50

------------ Sybase System Procedure Device --------
Sybase system procedures (sybsystemprocs) are stored on a device.

Enter the System Procedure Device path: /opt/sdc/onenodes/procs.dat
Enter System Procedure Device size (MB):  [ 152 ] 160
Enter the System Procedure Database size (MB):  [ 152 ]

-------------- System Database Device ------------------
The system database (sybsystemdb) stores information about distributed transactions.

Enter the System Database Device path: /opt/sdc/onenodes/sybsystem.dat
Enter the System Database Device size (MB):  [ 12 ] 20
Enter the System Database size (MB):  [ 12 ] 20

--------------- PCI Device  ----------------
Pluggable Component Interface (PCI) provides support for Java in database by loading off-the-shelf JVMs from any vendor. If you want to use JVM, create a device for it.

Enable PCI in Adaptive Servier (Y/N):  [ N ]

--------------------------------------------------------
Does this cluster have a secondary network:  [ Y ] n
Enter the port number from which this range will be applied:  [ 15100 ]

--------------------------------------------------------
Enter the SYBASE home directory:  [ /opt/sdc ]
Enter the environment shell script path:  [ /opt/sdc/SYBASE.sh ]
Enter the ASE home directory:  [ /opt/sdc/ASE-15_0 ]
Enter path to the dataserver config file:  [ /opt/sdc/my3nodes.cfg ]

--------------------------------------------------------

You will now be asked for the instance information on a node by node basis.


-- Cluster: my3nodes - Node: sdc1 - Agent: sdc1:9999 --

Enter the name of the cluster instance: sdc1
Enter the interface file query port number for instance sdc1: 4001
Enter the primary protocol address for sdc1:  [ sdc1 ] sdc1-priv2

--------------- Local System Temporary Database ---------
The Local System Temporary Database Device contains a database for each instance in the cluster.
Enter the LST device name: sdc1_lst
Enter the LST device path: /opt/sdc/onenodes/sdc1_lst.dat
Enter LST device size (MB): 50
Enter the LST database name:  [ my3nodes_tdb_1 ]
Enter the LST database size (MB):  [ 50 ]

--------------------------------------------------------
Do you want to add another instance? (Y or N):  [ N ] y
Enter the name of the cluster instance: sdc2
Enter the interface file query port number for instance sdc2: 4002
Enter the primary protocol address for sdc2:  [ sdc1 ] sdc1-priv3

--------------- Local System Temporary Database ---------
The Local System Temporary Database Device contains a database for each instance in the cluster.
Enter the LST device name: sdc2_lst
Enter the LST device path: /opt/sdc/onenodes/sdc2_lst.dat
Enter LST device size (MB): 50
Enter the LST database name:  [ my3nodes_tdb_2 ]
Enter the LST database size (MB):  [ 50 ]

--------------------------------------------------------
Do you want to add another instance? (Y or N):  [ N ] y
Enter the name of the cluster instance: sdc3   
Enter the interface file query port number for instance sdc3: 4003
Enter the primary protocol address for sdc3:  [ sdc1 ] sdc1-priv4

--------------- Local System Temporary Database ---------
The Local System Temporary Database Device contains a database for each instance in the cluster.
Enter the LST device name: sdc3_lst
Enter the LST device path: /opt/sdc/onenodes/sdc3_lst.dat
Enter LST device size (MB): 50
Enter the LST database name:  [ my3nodes_tdb_3 ]
Enter the LST database size (MB):  [ 50 ]
Would you like to save this configuration information in a file?  [ Y ]
Enter the name of the file to save the cluster creation information:  [ /opt/sdc/onenodes/my3nodes.xml ]

--------------------------------------------------------
Create the cluster now?  [ Y ]
INFO  - Creating the Cluster Agent plugin on host address sdc1 using agent: sdc1:9999
Enter the interfaces directory:  [ /opt/sdc ]
Would you like to check whether this device supports IO fencing capability (Y/N)?  [ Y ] n
Processing the create request...
INFO  - Cluster "my3nodes" creation in progress.
INFO  - Choosing the first instance to be created using the connected agent...
INFO  - The Sybase home directory is /opt/sdc.
INFO  - The ASE home directory is /opt/sdc/ASE-15_0.
INFO  - Retrieving environment variables from /opt/sdc/SYBASE.sh.
INFO  - The first instance created will be sdc1.
INFO  - Warning: You have selected '4k' as the logical page size for the Adaptive
INFO  - Server. If you plan to load dump from another database, make sure this logical
INFO  - page size matches the size of the source database. The default logical page
INFO  - size in previous Adaptive Server versions was 2KB.
INFO  - Building Adaptive Server 'sdc1':
INFO  - Writing entry into directory services...
INFO  - Directory services entry complete.
INFO  - Building master device...
INFO  - Master device complete.
INFO  - Starting server...
INFO  - Server started.
INFO  - Building sysprocs device and sybsystemprocs database...
INFO  - sysprocs device and sybsystemprocs database created.
INFO  - Running installmaster script to install system stored procedures...
INFO  - installmaster: 10% complete.
INFO  - installmaster: 20% complete.
INFO  - installmaster: 30% complete.
INFO  - installmaster: 40% complete.
INFO  - installmaster: 50% complete.
INFO  - installmaster: 60% complete.
INFO  - installmaster: 70% complete.
INFO  - installmaster: 80% complete.
INFO  - installmaster: 90% complete.
INFO  - installmaster: 100% complete.
INFO  - installmaster script complete.
INFO  - Creating two-phase commit database...
INFO  - Two phase commit database complete.
INFO  - Installing common character sets (Code Page 437, Code Page 850, ISO Latin-1,
INFO  - Macintosh and HP Roman-...
INFO  - Character sets installed.
INFO  - Server 'sdc1' was successfully created.
INFO  - Connecting to the dataserver using the host and query port sdc1:4001.
INFO  - Creating the Local System Temporary device sdc1_lst at /opt/sdc/onenodes/sdc1_lst.dat of size 50M.
INFO  - Creating the Local System Temporary device sdc2_lst at /opt/sdc/onenodes/sdc2_lst.dat of size 50M.
INFO  - Creating the Local System Temporary device sdc3_lst at /opt/sdc/onenodes/sdc3_lst.dat of size 50M.
INFO  - sdc1: Creating the Local System Temporary database my3nodes_tdb_1 on sdc1_lst of size 50M.
INFO  - sdc2: Creating the Local System Temporary database my3nodes_tdb_2 on sdc2_lst of size 50M.
INFO  - sdc3: Creating the Local System Temporary database my3nodes_tdb_3 on sdc3_lst of size 50M.
INFO  - The cluster is now configured. Shutting down this first instance.
The cluster my3nodes was successfully created.

8. Start the mycluster cluster.
[sdc@sdc1 ~]$ sybcluster -Uuafadmin -P -Cmycluster -F"sdc1,sdc2"
> connect
mycluster> start cluster
... (startup output omitted) ...
mycluster> show cluster status
INFO  - Listening for the cluster heartbeat. This may take a minute. Please wait... (mycluster::AseProbe:434)

        Id   Name   Node  State  Heartbeat
        --  ------  ----  -----  ---------
         1  inst01  sdc1    Up      Yes   
         2  inst02  sdc2    Up      Yes   
        --  ------  ----  -----  ---------

9. Create a logical cluster and bind it to a login:
[sdc@sdc2 ~]$ isql -Usa -P -Smycluster -Q -w300

sp_cluster logical, "create", testcluster1
go
sp_cluster logical, "add", testcluster1, instance, inst01  -- add instance inst01 to the logical cluster; more instances can be added
go
sp_cluster logical, "add", testcluster1, failover, inst02  -- add inst02 as a dedicated failover instance; more failover instances can be added
go

--testuser is an existing login; route it to testcluster1 at login time (since the logical cluster holds only inst01, connections will land on inst01)
sp_cluster logical, "add", testcluster1, route, login, testuser
go
sp_cluster logical, "online", testcluster1
go
--check the current state of the logical cluster:
1> sp_cluster logical, "show", testcluster1
2> go

ID   Name                     State        Online Instances                     Connections            
---- ------------------------ ------------ ------------------------------------ ----------------------
  2   testcluster1             online                        1                             0            


Instance         State          Type             Connections            Load Score               Failover Group                  
---------------- -------------- ---------------- ---------------------- ------------------------ --------------------------------
inst01           online         base                       0                    0.02                         NULL                 
inst02           offline        failover                   0                    0.00                            1                 

Attribute                            Setting                                            
------------------------------------ --------------------------------------------------
Action Release                       manual                                             
Down Routing Mode                    system                                             
Failover Mode                        instance with fail_to_any                          
Failover Recovery                    on                                                
Gather Mode                          manual                                             
LC Roles                             none                                               
Load Profile                         sybase_profile_oltp                                
Login Distribution                   affinity                                          
Startup Mode                         automatic                                          
System View                          cluster                                            

Route Type               Route Key              
------------------------ ----------------------
login                    testuser               

Open another window and log in as testuser; note that the -Q option is required:
[sdc@sdc1 ~]$ isql -Utestuser -Ptestuser -Smycluster -Q -w300
1> select @@instancename
2> go
                                                              
------------------------------------------------------------
inst01                                                      

(1 row affected)
As expected, the session is on inst01.

Log in as sa and manually fail testcluster1 over:
[sdc@sdc2 ~]$ isql -Usa -P -Smycluster -Q -w300
1> sp_cluster logical,'failover',testcluster1,cluster
2> go

Logical Cluster                    Handle       Action                           From         To       State        InstancesWaiting                 ConnectionsRemaining                     WaitType         StartTime                              Deadline         CompleteTime            
---------------------------------- ------------ -------------------------------- ------------ -------- ------------ -------------------------------- ---------------------------------------- ---------------- -------------------------------------- ---------------- ------------------------
testcluster1                            3       failover cluster                 1            NULL     active                      1                                    1                     infinite         Apr  1 2010  2:37PM                        NULL                 NULL            

(return status = 0)

Now, in the earlier testuser window, run:
1> select @@instancename
2> go
                                                              
------------------------------------------------------------
inst02                                                      

(1 row affected)
The connection migrated to inst02 automatically, with no need to reconnect. The logical cluster's state is now:

1> sp_cluster logical,show,testcluster1
2> go

ID   Name                     State        Online Instances                     Connections            
---- ------------------------ ------------ ------------------------------------ ----------------------
  2   testcluster1             online                        1                             1            


Instance         State          Type             Connections            Load Score               Failover Group                  
---------------- -------------- ---------------- ---------------------- ------------------------ --------------------------------
inst01           offline        base                       0                    0.00                         NULL                 
inst02           online         failover                   1                    0.00                            1              

inst01 is now offline while inst02 is online.
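The instance-state table printed by `sp_cluster logical, show` is easy to post-process for monitoring. As a rough illustration, a small parser could turn it into structured data; note that `parse_instance_states` is a hypothetical helper written for this post (it assumes the whitespace-separated layout captured above), not anything shipped with ASE:

```python
# Hypothetical helper: parse the instance table that
# "sp_cluster logical, show, <lc>" prints (layout as captured above).
def parse_instance_states(output):
    lines = [ln.rstrip() for ln in output.splitlines() if ln.strip()]
    # Locate the header row of the instance table.
    hdr = next(i for i, ln in enumerate(lines) if ln.startswith("Instance"))
    states = {}
    for ln in lines[hdr + 2:]:          # skip the header and dashed separator
        fields = ln.split()
        states[fields[0]] = {"state": fields[1],
                             "type": fields[2],
                             "connections": int(fields[3])}
    return states

sample = """\
Instance         State          Type             Connections
---------------- -------------- ---------------- -----------
inst01           offline        base                       0
inst02           online         failover                   1
"""
print(parse_instance_states(sample)["inst02"]["state"])   # online
```

A cron job running `isql` and feeding the output through something like this could alert as soon as the base instance drops to offline.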
Next, fail back:
[sdc@sdc2 ~]$ isql -Usa -P -Smycluster -Q -w300
1> sp_cluster logical,'failback',testcluster1,cluster
2> go
Action '4' has been issued for the 'failback cluster' command.


Logical Cluster                    Handle       Action                           From         To       State        InstancesWaiting                 ConnectionsRemaining                     WaitType         StartTime                              Deadline         CompleteTime            
---------------------------------- ------------ -------------------------------- ------------ -------- ------------ -------------------------------- ---------------------------------------- ---------------- -------------------------------------- ---------------- ------------------------
testcluster1                            4       failback cluster                 2            1        active                      1                                    1                     infinite         Apr  1 2010  2:41PM                        NULL                 NULL            

(return status = 0)

The testuser connection is automatically switched back to inst01:
1> select @@instancename
2> go
                                                              
------------------------------------------------------------
inst01                                                      

(1 row affected)

And the logical cluster returns to its initial state:
1> sp_cluster logical,show,testcluster1
2> go

ID   Name                     State        Online Instances                     Connections            
---- ------------------------ ------------ ------------------------------------ ----------------------
  2   testcluster1             online                        1                             1            


Instance         State          Type             Connections            Load Score               Failover Group                  
---------------- -------------- ---------------- ---------------------- ------------------------ --------------------------------
inst01           online         base                       1                    0.02                         NULL                 
inst02           offline        failover                   0                    0.00                            1                 

If inst01 is shut down outright, the connections on testcluster1 also fail over to inst02 automatically.
I later ran the same tests on the three-instance my3nodes cluster, but did not record them. A logical cluster spanning three instances is more flexible: for example, you can build one with two working instances plus a third failover instance serving as a hot standby.

10. Summary
The most important capability a cluster provides is HA. A failover instance in one logical cluster can simultaneously serve as an ordinary working instance of another logical cluster, and a single instance can act as the failover node for several logical clusters at once. Within a logical cluster, the failover instance behaves like a hot standby, ready to take over another instance's workload at any moment. A logical cluster can also be built without any failover instance: client connections are then redirected to an instance according to a configurable policy (for example, by CPU utilization or connection count, much like load balancing). If that instance goes down, its connections are automatically migrated to the remaining instances of the logical cluster; and when every instance of a logical cluster has failed, its connections are migrated to another, designated logical cluster.
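The routing and migration behavior summarized above can be sketched in a short simulation. This is a hypothetical illustration of the semantics only, not ASE internals; the `LogicalCluster` class and the fewest-connections policy are assumptions made for the sketch:

```python
# Hypothetical sketch of logical-cluster routing/failover semantics
# (illustration only, not ASE code). Policy: fewest connections wins.
class LogicalCluster:
    def __init__(self, name, instances, fallback=None):
        self.name = name
        self.online = {i: 0 for i in instances}   # instance -> connection count
        self.fallback = fallback                   # another LogicalCluster, or None

    def route(self):
        """Redirect a new connection to the least-loaded online instance."""
        if not self.online:                        # whole cluster down
            if self.fallback is None:
                raise RuntimeError("no instance available")
            return self.fallback.route()           # spill to designated cluster
        inst = min(self.online, key=self.online.get)
        self.online[inst] += 1
        return inst

    def instance_down(self, inst):
        """An instance dies: its connections migrate within this cluster,
        or to the fallback cluster if this was the last instance."""
        moved = self.online.pop(inst)
        for _ in range(moved):
            self.route()

fb = LogicalCluster("backupcluster", ["inst03"])
lc = LogicalCluster("testcluster1", ["inst01", "inst02"], fallback=fb)
a = lc.route()          # new connection lands on the least-loaded instance
lc.instance_down(a)     # its connection migrates to the surviving instance
```

In this toy model, killing both instances of `testcluster1` would push the connections on to `backupcluster`, mirroring the "all instances gone, migrate to the designated logical cluster" behavior described above.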

4 — posted 2010-04-01 16:07
Nice, learned something.

5 — posted 2010-04-01 16:48
Bump, thanks!

6 — posted 2010-04-01 22:51
Nice one, bump.

7 — posted 2010-04-02 11:25
Does the ASE CE install package come on separate installation media, apart from the regular database media?

8 — posted 2010-04-07 22:52
(replying to wangdonsy, 2010-04-02 11:25: "Does the ASE CE install package come on separate installation media?")

Yes, it is different media. According to the vendor, the final release will merge them into one; for now they are still separate.

9 — posted 2010-04-08 08:50
Well written. The installation really is quite simple; I set it up when it first came out in 2008. At the time write performance was terrible and it even crashed, so I don't know how it is now. Back then there was only a Linux build, and I'm not sure whether the full platform lineup is available yet. Sybase keeps drifting further and further away from me.

10 — posted 2010-04-08 11:30
(replying to echoaix, 2010-04-08 08:50)

Yeah, it feels like ASE is gradually converging toward Oracle.