Repost: Master-Master Replication Example using MMM

#1 Posted 2009-03-27 17:16
http://blog.kovyrin.net/2007/04/ ... -example-using-mmm/

  Despite my heavy workload at the office, I decided to release mmm-1.0-pre2 today. It contains some small but critical fixes, and much more is coming next week (or a bit later if mysqlconf takes more of my time than I expect).

After the first alpha release I received lots of emails, messages on the mmm-devel mailing list, and even some bug reports in the Google Code bug tracker. One of the most requested things was documentation. ;-) So I decided to write a few posts on this blog (sorry to non-SQL readers) and then compose the docs for the final release from these posts and readers' comments. This post, the first in the mmm series, describes how to use mmm in a simple master+master scheme where one master accepts write requests and both masters accept read requests. It gives detailed instructions on MySQL setup, permissions, mmm installation and configuration, and cluster management.

Network Infrastructure

All my example configs in this article will be based on the following network infrastructure:

    * Web Server + MMM Monitoring Server - 192.168.1.1
    * MySQL Server db1 - 192.168.1.111
    * MySQL Server db2 - 192.168.1.112

All servers are connected to the same switched network.
Software Prerequisites

Before you begin your setup, look through the following list of prerequisites for each server in your cluster to make sure you have all the mentioned packages, modules, etc.

Each MySQL server in the cluster needs the iproute2 package so that mmm can manage the IP addresses on these servers with the ip command. As for Perl modules, run the install.pl script and it will tell you what needs to be added to your system before installation is possible.
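
For illustration, here is roughly what that means in practice: when a role moves, the agent adds or removes the role's virtual IP on the cluster interface using plain iproute2 commands, along these lines (the exact invocations and the /32 prefix are my illustration, not taken from mmm itself):

# ip addr add 192.168.1.200/32 dev eth0    (bring the writer IP up on this host)
# ip addr del 192.168.1.200/32 dev eth0    (drop it when the role moves away)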
MySQL Servers Setup

First of all, you need to set up both of your MySQL servers to replicate from each other. Example configs follow.

my.cnf on db1 should have the following options:
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log

my.cnf on db2 should have the following options:
server-id = 2
log_bin = /var/log/mysql/mysql-bin.log
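
The post doesn't mention it, but a common extra precaution in master-master setups is to give the two servers disjoint auto-increment sequences, so that generated keys cannot collide if writes ever reach both masters (during a failover race, for example). These options are my addition, not part of the original example:

db1:
auto_increment_increment = 2
auto_increment_offset = 1

db2:
auto_increment_increment = 2
auto_increment_offset = 2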

Replication settings (db1):
mysql> grant replication slave on *.* to 'replication'@'%' identified by 'slave';
...
mysql> change master to master_host='192.168.1.112', master_port=3306, master_user='replication', master_password='slave';
...
mysql> start slave;

Replication settings (db2):
mysql> grant replication slave on *.* to 'replication'@'%' identified by 'slave';
...
mysql> change master to master_host='192.168.1.111', master_port=3306, master_user='replication', master_password='slave';
...
mysql> start slave;

Once all these steps are done, both servers should show SHOW SLAVE STATUS output like the following:
mysql> show slave status\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.1.112
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.000026
        Read_Master_Log_Pos: 98
             Relay_Log_File: db1-relay-bin.000339
              Relay_Log_Pos: 235
      Relay_Master_Log_File: mysql-bin.000026
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
            Replicate_Do_DB:
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 0
                 Last_Error:
               Skip_Counter: 0
        Exec_Master_Log_Pos: 98
            Relay_Log_Space: 235
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: 0
1 row in set (0.00 sec)
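
Before moving on, it's worth sanity-checking replication in both directions with a quick test write (my suggestion, not part of the original post; the database and table names are throwaway):

On db1:
mysql> create database repl_test;
mysql> create table repl_test.t (id int auto_increment primary key, src varchar(10));
mysql> insert into repl_test.t (src) values ('db1');

On db2:
mysql> select src from repl_test.t;
mysql> insert into repl_test.t (src) values ('db2');

The SELECT on db2 should return the 'db1' row, and running it again on db1 should now show both rows. Drop repl_test when you're done.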

Setting Up MMM Agents

Each MySQL server should run one mmmd_agent. To set the agents up, install mmm on each database server as follows:
# mkdir ~/mmm
# cd ~/mmm
# wget http://mysql-master-master.googl ... mm-1.0-pre2.tar.bz2
...
# tar xjf mmm-1.0-pre2.tar.bz2
# cd mmm-1.0-pre2
# ./install.pl
...
Installation is done!
#

After mmm installation you’ll need to configure your agents.

db1 config /usr/local/mmm/etc/mmm_agent.conf:
#
# Master-Master Manager config (agent)
#

# Debug mode
debug no

# Paths
pid_path /usr/local/mmm/var/mmmd_agent.pid
bin_path /usr/local/mmm/bin

# Logging setup
log mydebug
    file /usr/local/mmm/var/mmm-debug.log
    level debug

log mytraps
    file /usr/local/mmm/var/mmm-traps.log
    level trap

# MMMD command socket tcp-port and ip
bind_port 9989

# Cluster interface
cluster_interface eth0

# Define current server id
this db1
mode master

# For masters
peer db2

# Cluster hosts addresses and access params
host db1
    ip 192.168.1.111
    port 3306
    user rep_agent
    password RepAgent

host db2
    ip 192.168.1.112
    port 3306
    user rep_agent
    password RepAgent

db2 config /usr/local/mmm/etc/mmm_agent.conf:
#
# Master-Master Manager config (agent)
#

# Debug mode
debug no

# Paths
pid_path /usr/local/mmm/var/mmmd_agent.pid
bin_path /usr/local/mmm/bin

# Logging setup
log mydebug
    file /usr/local/mmm/var/mmm-debug.log
    level debug

log mytraps
    file /usr/local/mmm/var/mmm-traps.log
    level trap

# MMMD command socket tcp-port and ip
bind_port 9989

# Cluster interface
cluster_interface eth0

# Define current server id
this db2
mode master

# For masters
peer db1

# Cluster hosts addresses and access params
host db1
    ip 192.168.1.111
    port 3306
    user rep_agent
    password RepAgent

host db2
    ip 192.168.1.112
    port 3306
    user rep_agent
    password RepAgent
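
Note that both agent configs reference a rep_agent MySQL user, which the post never creates. Presumably a grant along the following lines is needed on the database servers (I'm mirroring the broad rep_monitor grant shown later; check the mmm documentation for the minimal privilege set):

mysql> grant all privileges on *.* to 'rep_agent'@'192.168.1.%' identified by 'RepAgent';

As with the monitor user, running this on one master is enough if replication is already working.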

Now you can start mmmd_agent on each server, and your servers will be ready for management with mmm.
MMM Server Installation and Configuration

When everything is done on the MySQL servers, you are ready to set up the monitoring node. It can, of course, be combined with a web server node or other services; dedicated hardware is not required. Before the configuration step, install mmm on this node just as you did on the MySQL servers. Then create a configuration file for the mmmd_mon program, which will monitor your nodes. For our example scheme the config file could look like the following:

Config file for monitoring node - /usr/local/mmm/etc/mmm_mon.conf:
#
# Master-Master Manager config (monitor)
#

# Debug mode
debug no

# Paths
pid_path /usr/local/mmm/var/mmmd.pid
status_path /usr/local/mmm/var/mmmd.status
bin_path /usr/local/mmm/bin

# Logging setup
log mydebug
    file /usr/local/mmm/var/mmm-debug.log
    level debug

log mytraps
    file /usr/local/mmm/var/mmm-traps.log
    level trap


# MMMD command socket tcp-port
bind_port 9988
agent_port 9989
monitor_ip 127.0.0.1

# Cluster interface
cluster_interface eth0

# Cluster hosts addresses and access params
host db1
    ip 192.168.1.111
    port 3306
    user rep_monitor
    password RepMonitor
    mode master
    peer db2

host db2
    ip 192.168.1.112
    port 3306
    user rep_monitor
    password RepMonitor
    mode master
    peer db1

#
# Define roles
#

active_master_role writer

# Mysql Reader role
role reader
    mode balanced
    servers db1, db2
    ip 192.168.1.201, 192.168.1.202

# Mysql Writer role
role writer
    mode exclusive
    servers db1, db2
    ip 192.168.1.200

#
# Checks parameters
#

# Ping checker
check ping
    check_period 1
    trap_period 5
    timeout 2

# Mysql checker
check mysql
    check_period 1
    trap_period  2
    timeout 2

# Mysql replication backlog checker
check rep_backlog
    check_period 5
    trap_period 10
    max_backlog 60
    timeout 2

# Mysql replication threads checker
check rep_threads
    check_period 1
    trap_period 5
    timeout 2

With this configuration file you get three virtual IP addresses used to "speak" with your cluster (see the connection example after the list):

    * Writer IP (192.168.1.200) - use this address for write requests; because the writer role is exclusive, it lives on exactly one master at a time.
    * Reader IPs (192.168.1.201 and 192.168.1.202) - addresses for read-only requests; the balanced reader role spreads them across the online servers.
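
Nothing mmm-specific is needed on the application side; clients simply connect to the role IPs instead of the physical servers. For example, from the web server (app_user and app_db are hypothetical placeholders):

# mysql -h 192.168.1.200 -u app_user -p app_db    (writes, via the writer IP)
# mysql -h 192.168.1.201 -u app_user -p app_db    (reads, via a reader IP)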

Before you start the monitoring part of the cluster, make sure mmmd_mon will be able to connect to your servers with the credentials from the mmm_mon.conf file. Run the following command on one node; if your replication was set up correctly (you've already tested it, right?), the other server will receive the statement via replication:
mysql> GRANT ALL PRIVILEGES on *.* to 'rep_monitor'@'192.168.1.1' identified by 'RepMonitor';
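
To confirm that the grant really did replicate, you can ask the second server directly (this check is my addition):

mysql> show grants for 'rep_monitor'@'192.168.1.1';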

MMM Monitoring and Management Hints

Once your configuration is finished and mmmd_mon is started, you'll want to look at the mmm_control script, a small program dedicated to sending commands to the mmmd_mon process and printing the results in a readable format. Start it without any parameters to see its usage information. At the moment the following commands are available:

    * show - displays the list of servers with status info and bound roles.
    * ping - sends a ping command to the local mmmd_mon daemon to check that it is running.
    * set_online host_name / set_offline host_name - changes the status of the specified server.
    * move_role role_name host_name - asks mmmd_mon to move the specified role to the specified host (useful for exclusive roles like writer).

When mmmd_mon starts for the first time, it assumes that all servers were offline and have just come back, so the initial status of every server is AWAITING_RECOVERY and you'll need to set both servers to ONLINE:
# mmm_control set_online db1
Config file: /usr/local/mmm/mmm_mon.conf
[2007-04-23 09:49:15]: Sending command 'SET_ONLINE(db1)' to 127.0.0.1
Command sent to monitoring host. Result: OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!

# mmm_control set_online db2
Config file: /usr/local/mmm/mmm_mon.conf
[2007-04-23 09:49:53]: Sending command 'SET_ONLINE(db2)' to 127.0.0.1
Command sent to monitoring host. Result: OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!

# mmm_control show
Config file: /usr/local/mmm/mmm_mon.conf
[2007-04-23 09:50:31]: Sending command 'PING()' to 127.0.0.1
Daemon is running!
Servers status:
  db1(192.168.1.111): master/ONLINE. Roles: reader(192.168.1.201;), writer(192.168.1.200;)
  db2(192.168.1.112): master/ONLINE. Roles: reader(192.168.1.202;)
#
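
For completeness, here is what move_role usage would look like, following the syntax listed above; for example, pushing the exclusive writer role to db2 before taking db1 down for maintenance (this invocation is my illustration; its output is not shown in the original post):

# mmm_control move_role writer db2

Afterwards, mmm_control show should list writer(192.168.1.200;) among db2's roles.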

So, that's it. If you have any questions or suggestions, you can leave them in the comments below or post them to the mmm mailing list.

[ Last edited by 枫影谁用了 on 2009-3-27 17:21 ]

#2 Posted 2009-03-27 17:23

#3 Posted 2009-03-27 17:23
A Chinese version can be found here:
http://www.technow.com.hk/mysql- ... eplication-manager2

There are articles like this on the CU blog too.

#4 Posted 2009-03-27 17:50
Heh, the Chinese versions are mostly straight translations; follow them verbatim and hardly anyone gets a working setup.

#5 Posted 2009-03-27 17:51
Just realized the link in post #2 is a real treasure trove.

#6 Posted 2009-03-28 00:25
I don't get what practical value mutual master-master backup has. In real applications it's useless.

#7 Posted 2009-03-30 14:54
If you need more capacity you can just set up several slaves. I agree with the poster above; I don't see what MMM is actually used for.

#8 Posted 2009-05-06 11:12
It's used for load balancing, disaster recovery, and failover.

#9 Posted 2009-05-06 13:09
Is anyone using MMM in production? I keep wondering whether that proxy written in Perl can really hold up under high load.
I don't have the nerve to use it.

#10 Posted 2009-05-06 18:47

Reply to #9 Coolriver:

I haven't tried it in production either.
Even mysql-proxy I've only used in a test environment, for a little over a month.