
MySQL Read/Write Splitting: Amoeba & mysql-proxy

Published: 2020-08-10 13:57:13  Source: ITPUB blog  Reads: 154  Author: 彩伦  Category: MySQL Database

----Master-slave replication overview

Reference:    https://www.cnblogs.com/lin3615/p/5684891.html

1. Read/write splitting approaches

Two approaches are introduced here: one uses mysql-proxy, the other Amoeba.

 

Amoeba

Pros: implements read/write splitting and load balancing directly, with no code changes, and offers flexible data-routing options.

Cons: it manages its own accounts, independent of the back-end databases' privilege system, so permission handling is not very flexible.

 

mysql-proxy

Pros: implements read/write splitting and load balancing directly, with no code changes; master and slave use the same account.

Cons: character-set issues, read/write splitting scripted in Lua, still only an alpha release, and noticeable added latency.

 

2. With read/write splitting, replication delay is a major issue

Run show slave status on the slave to check replication state:

Master_Log_File: name of the master binary log file the slave's I/O thread is currently reading.

Read_Master_Log_Pos: position up to which the I/O thread has read in the current master binary log.

Relay_Log_File: name of the relay log file the SQL thread is currently reading and executing.

Relay_Log_Pos: position up to which the SQL thread has read and executed in the current relay log.

Relay_Master_Log_File: name of the master binary log file containing the most recent events executed by the SQL thread.

Slave_IO_Running: whether the I/O thread is started and has successfully connected to the master.

Slave_SQL_Running: whether the SQL thread is started.

Seconds_Behind_Master: how far, in seconds, the slave's SQL thread lags behind the events fetched by the I/O thread.
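For monitoring, these fields can be scraped straight from the vertical SHOW SLAVE STATUS\G output. A minimal Python sketch (the sample text and the 30-second threshold are illustrative, not from this article):

```python
def parse_slave_status(text):
    """Parse the vertical `SHOW SLAVE STATUS\\G` output into a dict."""
    status = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

# Fabricated sample output from a lagging slave.
sample = """
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 120
"""

s = parse_slave_status(sample)
healthy = (s["Slave_IO_Running"] == "Yes"
           and s["Slave_SQL_Running"] == "Yes"
           and int(s["Seconds_Behind_Master"]) < 30)
print(healthy)  # False: both threads run, but 120 s of lag exceeds the threshold
```

Both threads showing "Yes" is necessary but not sufficient; the lag check is what catches the delay problem discussed next.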

 

Signs of slave replication delay:

1. Seconds_Behind_Master is non-zero, and the value can grow very large.

2. Relay_Master_Log_File and Master_Log_File show bin-log numbers that are far apart, meaning the bin-logs are not being applied on the slave in time: the bin-log executed most recently and the one the I/O thread is currently reading differ widely.

3. The MySQL data directory on the slave holds a large number of mysql-relay-log files. A relay log is deleted automatically once it has been fully applied, so a large backlog of them means replication is lagging badly.
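Symptom 3 can be turned into a quick scripted check by counting relay-log files in the slave's data directory. A sketch (the directory and file names here are fabricated for the demonstration):

```python
import glob
import os
import tempfile

def relay_log_backlog(datadir, pattern="*relay-bin.[0-9]*"):
    """Count numbered relay-log files still present in the data directory."""
    return len(glob.glob(os.path.join(datadir, pattern)))

# Demonstrate against a throwaway directory holding fake relay logs.
demo = tempfile.mkdtemp()
for i in (1, 2, 3):
    open(os.path.join(demo, "mysql-relay-bin.%06d" % i), "w").close()

backlog = relay_log_backlog(demo)
print(backlog)  # 3; a persistently large number suggests the SQL thread is far behind
```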

 

3. Why MySQL replication lags

How MySQL replication works

The master writes the binlog sequentially for all write operations; a single thread on the slave reads that binlog of writes from the master and replays it locally (as random writes), keeping master and slave logically consistent.

MySQL replication is single-threaded on both ends. The master produces a binlog entry for every DDL and DML statement; since the binlog is written sequentially, this is very efficient. The slave's Slave_IO_Running thread fetches the log from the master, which is also fairly cheap. The trouble starts at the next step: the slave's Slave_SQL_Running thread replays the master's DDL and DML on the slave. That I/O is random rather than sequential and therefore far more expensive, and other queries running on the slave may hold locks. Because Slave_SQL_Running is also a single thread, one long-running DDL blocks everything behind it: every later statement waits until it completes, and delay builds up. The master can execute concurrently, but the Slave_SQL_Running thread cannot, so a DDL that takes a while on the master takes at least as long again on the slave, producing lag.

 

Causes of replication delay

When the master's TPS is high, the volume of DDL/DML it generates exceeds what the slave's single SQL thread can absorb, and delay appears; lock waits caused by large queries on the slave can add to it.

Primary cause: the database is under heavy read/write pressure from the business, with high CPU load, NIC load, and random disk I/O.

Secondary causes: the performance cost of reading and writing the binlog, and network latency.

 

4. Remedies for replication delay

Architecture

1. Shard the persistence layer across multiple databases so the MySQL tier can scale horizontally and spread the load.

2. Within a single database, split reads from writes: one master, several slaves; write to the master, read from the slaves.

3. Put a cache layer between the application and MySQL.

4. Run different businesses' MySQL instances on different machines.

5. Give the slave better hardware than the master.

In short: reduce the pressure on MySQL and the delay shrinks with it.

 

Hardware:

Use better servers.

 

5. Speeding up MySQL replication

1. Set sync_binlog to 0 on the slave.

2. Leave --log-slave-updates off, so updates the slave receives from the master are not written into its own binary log.

3. Or disable the binlog on the slave entirely.

4. On the slave, if the storage engine is InnoDB, set innodb_flush_log_at_trx_commit = 2.
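Collected into a my.cnf fragment for the slave, the four points above look roughly like this (a sketch; use it only where the slave's own durability can be sacrificed, since a slave can always be rebuilt from the master):

```ini
[mysqld]
# 1. Do not force binlog flushes on the slave.
sync_binlog = 0
# 2. Keep log-slave-updates off (simply omit it) so replicated
#    changes are not written back into the slave's own binlog.
# 3. Or go further and disable the slave's binlog entirely by
#    omitting the log-bin line from this file.
# 4. InnoDB: write the log at commit, flush to disk once per second.
innodb_flush_log_at_trx_commit = 2
```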

 

Optimizing at the file-system level

On the master:

Disable the atime attribute on the Linux/Unix file system. Every time a file is read, the OS writes the access time back to disk; for heavily read database files this is unnecessary and only burdens the disk subsystem and hurts I/O performance. A mount option stops the OS from writing atime. On Linux:

Open /etc/fstab and add the noatime option:

/dev/sdb1 /data reiserfs noatime 1 2

Then remount the file system:

# mount -o remount /data

 

The master takes the writes and needs stronger durability, so settings such as sync_binlog=1 and innodb_flush_log_at_trx_commit = 1 belong there.

The slave does not need that level of safety: set sync_binlog to 0 or disable the binlog entirely, and set innodb_flush_log_at_trx_commit to 0 to speed up SQL execution.

1. sync_binlog=1

MySQL provides the sync_binlog parameter to control how the binlog is flushed to disk.

The default, sync_binlog=0, means MySQL does not force binlog flushes and leaves cache flushing to the file system. Performance is best this way, but so is the risk: if the system crashes, everything still in the binlog cache is lost.

With sync_binlog>0, MySQL calls the file system's flush once every sync_binlog transaction commits. The safest value is sync_binlog=1: the binlog is flushed on every single commit. It is the safest and also the most expensive setting; with it, even if the host OS is damaged or power is suddenly lost, at most one transaction can be lost.

Binlog writes are sequential I/O, but with sync_binlog=1 and many transactions committing at once, MySQL's I/O performance still suffers badly.

The group-commit patch mitigates this, yet flushing at that frequency remains costly: on a highly concurrent transactional system, write throughput with sync_binlog=0 versus 1 can differ by a factor of five or more.

Many MySQL DBAs therefore set sync_binlog not to the safest value 1 but to 2 or 0, trading some consistency for higher concurrency and performance.

By default the binlog is not synced to disk on every write, so if the OS or the machine (not just the MySQL server) crashes, the last statements in the binlog may be lost. To guard against that, set the sync_binlog global variable (1 is the safest value, but also the slowest) so the binlog is synced to disk after every N writes. Even with sync_binlog=1, a crash can still leave the table contents and the binlog inconsistent with each other.

 

2. innodb_flush_log_at_trx_commit (this one really helps)

Complaining that InnoDB is 100x slower than MyISAM? Then you probably forgot to tune this value. The default, 1, means every transaction commit (and every statement outside a transaction) flushes the log to disk, which is very time-consuming, battery-backed write cache or not. Setting it to 2 is fine for many applications, particularly ones migrated from MyISAM tables: the log is written to the OS cache instead of being flushed to disk.

The log is still flushed to disk once a second, so you normally lose no more than 1-2 seconds of updates. Setting it to 0 is slightly faster again but less safe: transactions can be lost even if only MySQL itself crashes, whereas with 2 data is lost only if the whole operating system goes down.

 

3. The ls command can list a file's atime, ctime, and mtime

atime: the file's access time, updated when the file is read or executed.

ctime: the file's inode change time, updated along with the inode when the file is written or its owner, permissions, or links change.

mtime: the file's modification time, updated when the file's content is written.

ls -lc filename   lists the file's ctime

ls -lu filename   lists the file's atime

ls -l filename    lists the file's mtime

stat filename     lists atime, mtime, and ctime together

atime is not necessarily updated on every access,

because on an ext3 file system mounted with the noatime option, atime is never updated.

All three timestamps live in the inode. If mtime or atime changes, the inode changes, and since the inode changed, ctime changes with it.

The point of the noatime mount option is exactly to spare the file system these extra writes and so improve read performance.
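The same three timestamps are also available programmatically through os.stat. A small sketch using a throwaway file:

```python
import os
import tempfile

# Create a throwaway file and read its three inode timestamps.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

st = os.stat(path)
print("atime:", st.st_atime)  # access time (frozen under a noatime mount)
print("mtime:", st.st_mtime)  # content modification time
print("ctime:", st.st_ctime)  # inode change time
os.remove(path)
```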

 

4. Shard databases and tables to reduce the volume of data that must be replicated.

 

 

 

 

 

 

I. Setting up MySQL master-slave replication

 

1. Master and slave definitions

Master    192.168.12.56   3306

Slave 1   192.168.12.56   3307

Slave 2   192.168.12.55   3306

 

 

2. Master configuration

====192.168.12.56

# vi /etc/my.cnf

[mysqld]

server-id       = 1

log-bin=mysql-bin

 

 

------Restart MySQL

# /etc/init.d/mysqld restart

# netstat -nltpd |grep mysql

 

------Verify the settings

# ls -lrth /app/mysql/data/|grep mysql-bin

-rw-rw---- 1 mysql mysql  107 Oct 29 22:35 mysql-bin.000001

-rw-rw---- 1 mysql mysql   19 Oct 29 22:35 mysql-bin.index

 

# mysql -uroot -p111111 -e 'show variables;'|egrep "log_bin|server_id"

log_bin ON

server_id       1

 

 

3. Slave configuration

3.1 Slave 1

====192.168.12.56

# vi /mysqldata/3307/my3307.cnf

[mysqld]

server-id       = 2

 

# mysqladmin -uroot -p1234567 -S /mysqldata/3307/mysql3307.sock shutdown

# mysqld_safe --defaults-file=/mysqldata/3307/my3307.cnf 2>&1 >/dev/null &

 

# mysql -uroot -p1234567 -S /mysqldata/3307/mysql3307.sock -e 'show variables like "server%"'

+---------------+-------+

| Variable_name | Value |

+---------------+-------+

| server_id     | 2     |

 

 

 

 

3.2 Slave 2

====192.168.12.55

# vi /etc/my.cnf

[mysqld]

server-id       = 3


# /etc/init.d/mysqld restart

# netstat -nltpd |grep mysql

 

# mysql -uroot -p111111 -e 'show variables like "server%"'

+---------------+-------+

| Variable_name | Value |

+---------------+-------+

| server_id     | 3     |

+---------------+-------+

 

 

4. Create the replication account rep on the master

# mysql -uroot -p111111

 

mysql> grant replication slave on *.* to 'rep'@'192.168.12.%' identified by '123456';

mysql> flush privileges;

 

----replication slave is the privilege MySQL replication requires; do not grant all here

----*.* means every table in every database; replication can also be limited to a specific database or table, e.g. shaw_gbk_db.test_tb

----'rep'@'192.168.12.%': rep is the replication account, 192.168.12.% the authorized network segment

 

mysql> select user,host from mysql.user where user='rep';

+------+--------------+

| user | host         |

+------+--------------+

| rep  | 192.168.12.% |

+------+--------------+

1 row in set (0.04 sec)

mysql> show grants for rep@'192.168.12.%'\G

*************************** 1. row ***************************

Grants for rep@192.168.12.%: GRANT REPLICATION SLAVE ON *.* TO 'rep'@'192.168.12.%' IDENTIFIED BY PASSWORD '*6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9'

 

 

5. Building the slaves

5.1 ### Slave 1 ### (method 1)

5.1.1 Export the data from the master

------Lock the master. Keep this session open while the lock is held; the session is also subject to these timeouts:

mysql> show variables like '%timeout';

| interactive_timeout        | 28800    |

| wait_timeout               | 28800    |

 

mysql> flush tables with read lock;

Note: MySQL accepts FLUSH TABLE as a synonym for FLUSH TABLES, so either spelling works in both 5.1 and 5.5.

 

 

------Check the current binlog position

mysql> show master status;

+------------------+----------+--------------+------------------+

| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+------------------+----------+--------------+------------------+

| mysql-bin.000001 |      334 |              |                  |

+------------------+----------+--------------+------------------+

 

------Export the data from a new window

# mkdir /bak

# mysqldump -uroot -p1111111 -A -B --events|gzip >/bak/mysql_bak_$(date +%F).sql.gz

 

# ll -lrht /bak/

total 144K

-rw-r--r-- 1 root root 141K Jan 10 07:58 mysql_bak_2018-01-10.sql.gz

 

------After the dump, confirm the binlog position is unchanged, then release the lock

mysql> show master status;

+------------------+----------+--------------+------------------+

| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+------------------+----------+--------------+------------------+

| mysql-bin.000001 |      334 |              |                  |

+------------------+----------+--------------+------------------+

mysql> unlock tables;

 

-------Create a database so that, once the slave is built, we can check that it syncs over automatically

mysql> create database shaw_db;

Query OK, 1 row affected (0.02 sec)

mysql> show master status;

+------------------+----------+--------------+------------------+

| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+------------------+----------+--------------+------------------+

| mysql-bin.000001 |      423 |              |                  |

+------------------+----------+--------------+------------------+

mysql> show processlist;

+----+------+-------------+---------+-------------+------+-----------------------------------------------------------------------+------------------+
| Id | User | Host        | db      | Command     | Time | State                                                                 | Info             |
+----+------+-------------+---------+-------------+------+-----------------------------------------------------------------------+------------------+
|  4 | root | localhost   | shaw_db | Query       |    0 | NULL                                                                  | show processlist |
| 10 | rep  | xuan2:37165 | NULL    | Binlog Dump |  406 | Master has sent all binlog to slave; waiting for binlog to be updated | NULL             |
+----+------+-------------+---------+-------------+------+-----------------------------------------------------------------------+------------------+

 

 

5.1.2 Import the data on slave 1

# gzip -d /bak/mysql_bak_2018-01-10.sql.gz

# ls -lrht /bak/

total 516K

-rw-r--r-- 1 root root 516K Jan 10 07:58 mysql_bak_2018-01-10.sql

# mysql -uroot -p1234567 -S /mysqldata/3307/mysql3307.sock </bak/mysql_bak_2018-01-10.sql

mysql> show slave status\G

Empty set (0.00 sec)

 

5.1.3 Run CHANGE MASTER on slave 1

# mysql -uroot -p1234567 -S /mysqldata/3307/mysql3307.sock<<EOF

  change master to

master_host='192.168.12.56',

master_port=3306,

master_user='rep',

master_password='123456',

master_log_file='mysql-bin.000001',

master_log_pos=334;

EOF

 

====> Slave 1 now has a master.info file

# ls -lrht /mysqldata/3307/data/

total 29M

drwx------ 2 mysql mysql 4.0K Oct 29 21:45 performance_schema

-rw-rw---- 1 mysql mysql 5.0M Oct 29 21:47 ib_logfile1

-rw-rw---- 1 mysql mysql  18M Oct 29 22:42 ibdata1

-rw-rw---- 1 mysql mysql 5.0M Oct 29 22:43 ib_logfile0

drwx------ 2 mysql root  4.0K Oct 29 23:42 mysql

-rw-rw---- 1 mysql mysql   78 Oct 29 23:45 master.info

-rw-rw---- 1 mysql mysql  107 Oct 29 23:45 mysql3307-relay-bin.000001

-rw-rw---- 1 mysql mysql   29 Oct 29 23:45 mysql3307-relay-bin.index

-rw-r----- 1 mysql root  6.0K Oct 29 23:45 xuan2.err

-rw-rw---- 1 mysql mysql   52 Oct 29 23:45 relay-log.info

 

# cat master.info

18

mysql-bin.000001

334

192.168.12.56

rep

123456

3306

60

0

 

5.1.4 Start the slave threads on slave 1

mysql> start slave;

 

mysql> show slave status\G

             Slave_IO_Running: Yes

            Slave_SQL_Running: Yes

        Seconds_Behind_Master: 0   (seconds behind the master)

*************************** 1. row ***************************

               Slave_IO_State: Waiting for master to send event

                  Master_Host: 192.168.12.56

                  Master_User: rep

                  Master_Port: 3306

                Connect_Retry: 60

              Master_Log_File: mysql-bin.000001

          Read_Master_Log_Pos: 423

               Relay_Log_File: orcl-relay-bin.000002

                Relay_Log_Pos: 342

        Relay_Master_Log_File: mysql-bin.000001

             Slave_IO_Running: Yes

            Slave_SQL_Running: Yes

              Replicate_Do_DB:

          Replicate_Ignore_DB:

           Replicate_Do_Table:

       Replicate_Ignore_Table:

      Replicate_Wild_Do_Table:

  Replicate_Wild_Ignore_Table:

                   Last_Errno: 0

                   Last_Error:

                 Skip_Counter: 0

          Exec_Master_Log_Pos: 423

              Relay_Log_Space: 497

              Until_Condition: None

               Until_Log_File:

                Until_Log_Pos: 0

           Master_SSL_Allowed: No

           Master_SSL_CA_File:

           Master_SSL_CA_Path:

              Master_SSL_Cert:

            Master_SSL_Cipher:

               Master_SSL_Key:

        Seconds_Behind_Master: 0

Master_SSL_Verify_Server_Cert: No

                Last_IO_Errno: 0

                Last_IO_Error:

               Last_SQL_Errno: 0

               Last_SQL_Error:

  Replicate_Ignore_Server_Ids:

             Master_Server_Id: 1

1 row in set (0.00 sec)

 

mysql> show processlist;

+----+-------------+-----------+------+---------+------+-----------------------------------------------------------------------------+------------------+
| Id | User        | Host      | db   | Command | Time | State                                                                       | Info             |
+----+-------------+-----------+------+---------+------+-----------------------------------------------------------------------------+------------------+
|  9 | root        | localhost | NULL | Query   |    0 | NULL                                                                        | show processlist |
| 10 | system user |           | NULL | Connect |  347 | Waiting for master to send event                                            | NULL             |
| 11 | system user |           | NULL | Connect |  289 | Slave has read all relay log; waiting for the slave I/O thread to update it | NULL             |
+----+-------------+-----------+------+---------+------+-----------------------------------------------------------------------------+------------------+

3 rows in set (0.00 sec)

 

----Check: the database created earlier on the master is here

mysql> show databases like 'shaw_db';

+--------------------+

| Database (shaw_db) |

+--------------------+

| shaw_db            |

+--------------------+

1 row in set (0.00 sec)

 

5.1.5 Test replication again

----Master

mysql> use shaw_db;

Database changed

mysql> create table t_zhong as select * from mysql.user;

Query OK, 3 rows affected (0.02 sec)

Records: 3  Duplicates: 0  Warnings: 0

mysql> show master status;

+------------------+----------+--------------+------------------+

| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+------------------+----------+--------------+------------------+

| mysql-bin.000001 |      537 |              |                  |

+------------------+----------+--------------+------------------+

1 row in set (0.00 sec)

 

----Slave 1

mysql> select count(*) from shaw_db.t_zhong;

+----------+

| count(*) |

+----------+

|        3 |

+----------+

1 row in set (0.00 sec)

 

 

 

--5.1.6 Log files produced on the master and the slave

---- With the binlog enabled, the master produces binlog files

# ls -lrt |grep bin

-rw-rw---- 1 mysql mysql   33 Jan 10 07:13 mysql-bin.index

-rw-rw---- 1 mysql mysql  827 Jan 10 10:57 mysql-bin.000001

 

# cat mysql-bin.index

/mysqldata/3309/mysql-bin.000001

 

# mysqlbinlog mysql-bin.000001|more

/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;

/*!40019 SET @@session.max_insert_delayed_threads=0*/;

/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;

DELIMITER /*!*/;

# at 4

#180110  7:13:05 server id 1  end_log_pos 107   Start: binlog v 4, server v 5.5.32-log created 1801

10  7:13:05 at startup

# Warning: this binlog is either in use or was not closed properly.

ROLLBACK/*!*/;

BINLOG '

... (output truncated)

 

 

---- The slave has the binlog disabled, but it has relay logs plus a master.info that records replication state

# ls -lrt |grep relay

-rw-rw---- 1 mysql mysql      155 Jan 10 08:20 orcl-relay-bin.000001

-rw-rw---- 1 mysql mysql       48 Jan 10 08:20 orcl-relay-bin.index

-rw-rw---- 1 mysql mysql      746 Jan 10 10:57 orcl-relay-bin.000002

-rw-rw---- 1 mysql mysql       49 Jan 10 10:57 relay-log.info

 

# cat  orcl-relay-bin.index   ## relay-log index file

./orcl-relay-bin.000001

./orcl-relay-bin.000002

 

# cat relay-log.info  ## relay-log.info belongs to the SQL thread

./orcl-relay-bin.000002

746    ## Relay_Log_Pos: 746, the position in the relay log up to which the slave's SQL thread has applied events

mysql-bin.000001   ## the master binlog file the slave has fetched from

827   ## Exec_Master_Log_Pos: 827, the position reached in that master binlog

 

# mysqlbinlog orcl-relay-bin.000002 |more

/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;

/*!40019 SET @@session.max_insert_delayed_threads=0*/;

/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;

DELIMITER /*!*/;

# at 4

#180110  8:20:25 server id 2  end_log_pos 107   Start: binlog v 4, server v 5.5.32 created 180110 

8:20:25

... (output truncated)

 

# cat master.info   ## master.info belongs to the I/O thread

18    ##

mysql-bin.000001  ## master binlog file

827   ## position in the master binlog

192.168.12.55   ## master address

rep      ## replication account

123456   ## replication password

3309  ## master port

60    ## seconds the slave waits before retrying after a replication failure

0

 

 

 

 

 

0

1800.000

 

0

 

 

5.2 ### Slave 2 ### (method 2, recommended)

5.2.1 Export the data from the master

====192.168.12.56

---- Export with master-data=1

With --master-data=1, the dump itself contains an active CHANGE MASTER TO statement, so after importing it on the slave you do not need to pass the options below to change master to; with --master-data=2 the statement is only a comment, and you must pass them yourself:

master_log_file='mysql-bin.000001',

master_log_pos=334;

 

## Export the data

# mysqldump -uroot -p111111 -A -B -F --master-data=1 --events --single-transaction|gzip > /bak/mysqld_$(date +%F).sql.gz

 

===============================================================

## Parameter notes:

-B  name specific databases; adds CREATE DATABASE and USE statements

--compact  strip comments; handy when debugging output, not for production use

-A  dump all databases

-F  flush (rotate) the binlog

--master-data  record the binlog file name and position in the dump

-x, --lock-all-tables 

   Locks all tables across all databases. This is achieved by taking a global read lock for the duration of the whole dump. Automatically turns --single-transaction and --lock-tables off.

-l, --lock-tables  Lock all tables for read

-d  dump table structure only

-t  dump data only

--single-transaction  suitable for backing up InnoDB transactional databases

When dumping InnoDB tables, --single-transaction is usually enabled to keep the backup consistent: it sets the session's isolation level to REPEATABLE READ, so the dump does not see data committed by other sessions while it runs.

 

MyISAM backup command:

mysqldump -uroot -p111111 -A -B -F --master-data=2 -x --events|gzip > /opt/all.sql.gz

 

InnoDB backup command (recommended):

mysqldump -uroot -p111111 -A -B -F --master-data=2 --events --single-transaction |gzip > /opt/all.sql.gz

===============================================================
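With --master-data=2 the coordinates end up in a commented-out CHANGE MASTER TO line inside the dump, and a few lines of Python can recover them (master_coords is a hypothetical helper; the sample line mirrors the format mysqldump writes):

```python
import re

def master_coords(dump_text):
    """Extract (binlog file, position) from a --master-data dump."""
    m = re.search(
        r"CHANGE MASTER TO MASTER_LOG_FILE='([^']+)',\s*MASTER_LOG_POS=(\d+)",
        dump_text)
    if m is None:
        return None
    return m.group(1), int(m.group(2))

sample = "-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=334;"
print(master_coords(sample))  # ('mysql-bin.000001', 334)
```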

 

 

5.2.2 Import the data on slave 2

====192.168.12.55

# scp /bak/mysqld_2016-10-30.sql.gz root@192.168.12.55:/root

# gunzip mysqld_2016-10-30.sql.gz

# ls -lrht |grep mysqld

-rw-r--r--   1 root root  520K Feb 27 14:12 mysqld_2016-10-30.sql

 

# mysql -uroot -p111111 <mysqld_2016-10-30.sql

 

# mysql -uroot -p111111

mysql> show slave status\G

Empty set (0.00 sec)

 

5.2.3 Run CHANGE MASTER on slave 2

# mysql -uroot -p1111111 <<EOF

  change master to

master_host='192.168.12.56',

master_port=3306,

master_user='rep',

master_password='123456';

EOF

 

 

mysql> show slave status\G

*************************** 1. row ***************************

               Slave_IO_State:

                  Master_Host: 192.168.12.55

                  Master_User: rep

                  Master_Port: 3309

                Connect_Retry: 60

              Master_Log_File:

          Read_Master_Log_Pos: 4

               Relay_Log_File: mysql3308-relay-bin.000001

                Relay_Log_Pos: 4

        Relay_Master_Log_File:

             Slave_IO_Running: No

            Slave_SQL_Running: No

              Replicate_Do_DB:

          Replicate_Ignore_DB:

           Replicate_Do_Table:

       Replicate_Ignore_Table:

      Replicate_Wild_Do_Table:

  Replicate_Wild_Ignore_Table:

                   Last_Errno: 0

                   Last_Error:

                 Skip_Counter: 0

          Exec_Master_Log_Pos: 0

              Relay_Log_Space: 107

              Until_Condition: None

               Until_Log_File:

                Until_Log_Pos: 0

           Master_SSL_Allowed: No

           Master_SSL_CA_File:

           Master_SSL_CA_Path:

              Master_SSL_Cert:

            Master_SSL_Cipher:

               Master_SSL_Key:

        Seconds_Behind_Master: NULL

Master_SSL_Verify_Server_Cert: No

                Last_IO_Errno: 0

                Last_IO_Error:

               Last_SQL_Errno: 0

               Last_SQL_Error:

  Replicate_Ignore_Server_Ids:

             Master_Server_Id: 0

1 row in set (0.00 sec)

 

5.2.4 Start the slave threads on slave 2

mysql> start slave;

 

mysql> show slave status\G

*************************** 1. row ***************************

               Slave_IO_State: Waiting for master to send event

                  Master_Host: 192.168.12.55

                  Master_User: rep

                  Master_Port: 3309

                Connect_Retry: 60

              Master_Log_File: mysql-bin.000002

          Read_Master_Log_Pos: 107

               Relay_Log_File: mysql3308-relay-bin.000002

                Relay_Log_Pos: 480

        Relay_Master_Log_File: mysql-bin.000001

             Slave_IO_Running: Yes

            Slave_SQL_Running: No

              Replicate_Do_DB:

          Replicate_Ignore_DB:

           Replicate_Do_Table:

       Replicate_Ignore_Table:

      Replicate_Wild_Do_Table:

  Replicate_Wild_Ignore_Table:

                   Last_Errno: 1007

                   Last_Error: Error 'Can't create database 'shaw_db'; database exists' on query. Default database: 'shaw_db'. Query: 'create database shaw_db'

                 Skip_Counter: 0

          Exec_Master_Log_Pos: 334

              Relay_Log_Space: 1642

              Until_Condition: None

               Until_Log_File:

                Until_Log_Pos: 0

           Master_SSL_Allowed: No

           Master_SSL_CA_File:

           Master_SSL_CA_Path:

              Master_SSL_Cert:

            Master_SSL_Cipher:

               Master_SSL_Key:

        Seconds_Behind_Master: NULL

Master_SSL_Verify_Server_Cert: No

                Last_IO_Errno: 0

                Last_IO_Error:

               Last_SQL_Errno: 1007

               Last_SQL_Error: Error 'Can't create database 'shaw_db'; database exists' on query. Default database: 'shaw_db'. Query: 'create database shaw_db'

  Replicate_Ignore_Server_Ids:

             Master_Server_Id: 1

1 row in set (0.00 sec)

 

Cause of the error: an object (database, table, ...) was created on the slave first, and then an object with the same name was created on the master, producing the error above.

 

5.2.5 Fixing the replication fault

----There are two fixes. The first:

stop slave;

set global sql_slave_skip_counter = 1;

start slave;


 

----The second fix requires restarting MySQL:

Skip the listed error numbers.

slave-skip-errors = 1032,1062,1007

1032: record does not exist

1062: duplicate key value, insert failed

1007: database already exists, CREATE DATABASE failed

1050: table already exists

 

Failures caused by duplicate inserts can generally be ignored. You can also ignore every error with slave-skip-errors = all, but that is not recommended.
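The slave-skip-errors mechanism boils down to an error-number whitelist; a sketch of that decision (should_skip is a made-up name for illustration):

```python
# Error numbers the text above treats as generally safe to skip.
SKIPPABLE = {1032, 1062, 1007, 1050}

def should_skip(errno, skip_list=SKIPPABLE):
    """Mirror slave-skip-errors: skip whitelisted errors, stop on the rest."""
    return errno in skip_list

print(should_skip(1062))  # True: duplicate key, typically a re-applied row
print(should_skip(1146))  # False: table missing; needs manual attention
```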

 

 

===After the fix:

mysql> show slave status\G

*************************** 1. row ***************************

               Slave_IO_State: Waiting for master to send event

                  Master_Host: 192.168.12.56

                  Master_User: rep

                  Master_Port: 3306

                Connect_Retry: 60

              Master_Log_File: mysql-bin.000002

          Read_Master_Log_Pos: 107

               Relay_Log_File: orcl-relay-bin.000007

                Relay_Log_Pos: 253

        Relay_Master_Log_File: mysql-bin.000002

             Slave_IO_Running: Yes

            Slave_SQL_Running: Yes

              Replicate_Do_DB:

          Replicate_Ignore_DB:

           Replicate_Do_Table:

       Replicate_Ignore_Table:

      Replicate_Wild_Do_Table:

  Replicate_Wild_Ignore_Table:

                   Last_Errno: 0

                   Last_Error:

                 Skip_Counter: 0

          Exec_Master_Log_Pos: 107

              Relay_Log_Space: 554

              Until_Condition: None

               Until_Log_File:

                Until_Log_Pos: 0

           Master_SSL_Allowed: No

           Master_SSL_CA_File:

           Master_SSL_CA_Path:

              Master_SSL_Cert:

            Master_SSL_Cipher:

               Master_SSL_Key:

        Seconds_Behind_Master: 0

Master_SSL_Verify_Server_Cert: No

                Last_IO_Errno: 0

                Last_IO_Error:

               Last_SQL_Errno: 0

               Last_SQL_Error:

  Replicate_Ignore_Server_Ids:

             Master_Server_Id: 1

1 row in set (0.00 sec)

 

 

5.2.6 Test replication

---- On the master

mysql> use shaw_db;

Database changed

mysql> create table t_user as select * from mysql.user;

 

---- Check on slave 1

mysql> show slave status\G

mysql> select count(*) from shaw_db.t_user;

+----------+

| count(*) |

+----------+

|        3 |

+----------+

1 row in set (0.01 sec)

 

---- Check on slave 2

mysql> show slave status\G

mysql> select count(*) from shaw_db.t_user;

+----------+

| count(*) |

+----------+

|        3 |

+----------+

1 row in set (0.01 sec)

 

 

5.3 Create a user for applications to connect with

mysql> grant all privileges on *.* to user01@'192.168.12.%' identified by "111111";

Query OK, 0 rows affected (0.00 sec)

 

mysql> flush privileges;

Query OK, 0 rows affected (0.00 sec)

 

mysql> show grants for user01@'192.168.12.%'\G

*************************** 1. row ***************************

Grants for user01@192.168.12.%: GRANT ALL PRIVILEGES ON *.* TO 'user01'@'192.168.12.%' IDENTIFIED BY PASSWORD '*FD571203974BA9AFE270FE62151AE967ECA5E0AA'

1 row in set (0.00 sec)

 

###### Note:

========> For a local login, the account is matched against the localhost entry by default, so the default login fails; connect via the IP instead:

# mysql -uuser01 -p111111

ERROR 1045 (28000): Access denied for user 'user01'@'localhost' (using password: YES)

 

# mysql -uuser01 -p111111 -h 192.168.12.56 -P3306

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 20

Server version: 5.5.32-log Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> exit

Bye

 

=====> The fix used here is to create an additional localhost user:

mysql> grant all privileges on *.* to user01@'localhost' identified by "111111";

Query OK, 0 rows affected (0.00 sec)

 

mysql> flush privileges;

 

# mysql -uuser01 -p111111

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 23

Server version: 5.5.32-log Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

 

 

mysql> select user,host from mysql.user;

+--------+--------------+

| user   | host         |

+--------+--------------+

| root   | 127.0.0.1    |

| rep    | 192.168.12.% |

| user01 | 192.168.12.% |

| root   | localhost    |

| user01 | localhost    |

+--------+--------------+

5 rows in set (0.00 sec)

 

 

5.4 Insert different rows on the master and on each slave

----so that later, during read/write splitting, it is easy to tell which server a query landed on

----On the master, 192.168.12.56 3306

# mysql -uroot -p111111

mysql> use shaw_db;

mysql> create table t_usedb (id int,name varchar(20));

mysql> insert into t_usedb values(1,'master');

 

----On slave 1, 192.168.12.55 3306

# mysql -uroot -p111111

mysql> use shaw_db;

mysql> select * from t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

+------+--------+

mysql> insert into t_usedb values(2,'slave1');

 

----On slave 2, 192.168.12.56 3307

# mysql -uroot -p111111 -S /mysqldata/3307/mysql3307.sock

mysql> select * from shaw_db.t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

+------+--------+

mysql> insert into shaw_db.t_usedb values(3,'slave2');

 

 

 

 

II. MySQL read/write splitting with Amoeba

References:

http://blog.chinaunix.net/uid-20639775-id-154600.html

https://www.cnblogs.com/liuyisai/p/6009379.html

https://www.cnblogs.com/xyp-blog123/p/6684118.html

 

1. About Amoeba

The name Amoeba is that of the amoeba, the shape-shifting single-celled organism.

The main ways to implement MySQL read/write splitting today:

a. In application code. Plenty of ready-made code exists online, but it is complex, and adding a slave means changing code on several servers.

b. With mysql-proxy, whose read/write splitting is implemented through Lua scripts. Lua script development has not kept pace and no polished ready-made script exists, so using it in production carries real risk, and many people online report that mysql-proxy's performance is not high.

c. Build your own middleware layer. The barrier to entry and the development cost are high; most small companies cannot afford it.

d. Use Alibaba's open-source Amoeba project, which offers load balancing, high availability, SQL filtering, read/write splitting, and routing of queries to target databases, and is very easy to install and configure.

 

The Amoeba project focuses on developing a distributed-database proxy. It sits between the client and the DB server(s) and is transparent to the client, providing load balancing, high availability, SQL filtering, read/write splitting, routing of queries to target databases, and fanning a request out to several databases and merging the results.

It mainly addresses:

- reducing the complexity that data sharding adds to a multi-database structure

- providing sharding rules while limiting their impact on the application

- reducing the number of client connections to the databases

- read/write splitting
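At its core, read/write splitting is a routing decision made per statement: writes go to the write pool, everything else may go to a read pool. A toy classifier in Python illustrates the idea (this is not Amoeba's actual routing code, which is driven by its XML configuration; the pool names writedb and myslave follow the dbServers.xml names used in this setup):

```python
# Statement verbs that must reach the master; anything else may read from a slave.
WRITE_PREFIXES = ("insert", "update", "delete", "replace",
                  "create", "drop", "alter", "truncate")

def route(sql):
    """Return the name of the pool a statement should be sent to."""
    verb = sql.lstrip().split(None, 1)[0].lower()
    return "writedb" if verb.startswith(WRITE_PREFIXES) else "myslave"

print(route("SELECT * FROM shaw_db.t_usedb"))          # myslave
print(route("INSERT INTO t_usedb VALUES (4, 'app')"))  # writedb
```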

 


 

2. Deployment environment

Master    192.168.12.56   3306

Slave 1   192.168.12.56   3307

Slave 2   192.168.12.55   3306

Amoeba    192.168.12.55   8066

 

The Amoeba framework is built on JDK 1.5 and uses JDK 1.5 features, so a Java environment must be installed; JavaSE 1.5 or later is recommended.

 

 

3. Install the Java environment

Official download addresses:

http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

http://download.oracle.com/otn-pub/java/jdk/8u161-b12/2f38c3b165be4555a1fa6e98c45e0808/jdk-8u161-linux-x64.rpm?AuthParam=1519720220_d473abf93bf78651f1ec927514473d86

 

Java can also be installed from the binary tarball; here the rpm package is used:

========192.168.12.55

# rpm -ivh jdk-8u161-linux-x64.rpm

Preparing...                ########################################### [100%]

   1:jdk1.8                 ########################################### [100%]

Unpacking JAR files...

        tools.jar...

        plugin.jar...

        javaws.jar...

        deploy.jar...

        rt.jar...

        jsse.jar...

        charsets.jar...

        localedata.jar...

 

# rpm -qa|grep jdk1.8

jdk1.8-1.8.0_161-fcs.x86_64

# rpm -ql jdk1.8-1.8.0_161-fcs.x86_64

/usr/java/jdk1.8.0_161/…….

 

## Configure the Java environment variables

# vim /etc/profile

#set java environment

export JAVA_HOME=/usr/java/jdk1.8.0_161

export JRE_HOME=$JAVA_HOME/jre

export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH

export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

 

# source /etc/profile

 

# java -version

java version "1.8.0_161"

Java(TM) SE Runtime Environment (build 1.8.0_161-b12)

Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)

 

 

4. Install Amoeba

Official download addresses:

https://sourceforge.net/projects/amoeba/

https://nchc.dl.sourceforge.net/project/amoeba/Amoeba%20for%20mysql/3.x/amoeba-mysql-3.0.5-RC-distribution.zip

 

Installing Amoeba is very simple: just unzip it and it is ready to use. Here it is extracted to /usr/local/amoeba, and with that the installation is complete.

 

========192.168.12.55

# mkdir /usr/local/amoeba/

# unzip -d /usr/local/amoeba/ amoeba-mysql-3.0.5-RC-distribution.zip

 

# cd /usr/local/amoeba/

# ls

benchmark  bin  conf  jvm.properties  lib

 

 

 

5. Configure Amoeba

5.1 The configuration files

Amoeba has seven configuration files in total:

#/*  Main configuration ($AMOEBA_HOME/conf/amoeba.xml): the service's basic parameters, such as Amoeba's host address, port, authentication method, the username and password clients connect with, thread counts, timeouts, and the locations of the other configuration files.

#/*  Database server configuration ($AMOEBA_HOME/conf/dbServers.xml): the information for the database servers Amoeba proxies, such as host IP, port, username, and password.

#/*  Sharding-rule configuration ($AMOEBA_HOME/conf/rule.xml): the data-sharding rules.

#/*  Database function configuration ($AMOEBA_HOME/conf/functionMap.xml): how database functions are handled; Amoeba parses database functions with the methods defined here.

#/*  Sharding-rule function configuration ($AMOEBA_HOME/conf/ruleFunctionMap.xml): handlers for user-defined functions used inside the sharding rules.

#/*  Access-control configuration ($AMOEBA_HOME/conf/access_list.conf): which server IPs are allowed or denied access to Amoeba.

#/*  Logging configuration ($AMOEBA_HOME/conf/log4j.xml): the level and destination of Amoeba's log output.

 

 

5.2 The configuration files in practice

In this environment the configuration files live under /usr/local/amoeba/conf. There are quite a few, but for the read/write splitting feature alone only two need editing: dbServers.xml and amoeba.xml. For IP-based access control, also edit access_list.conf.

 

#### Note:

In these files, "<!--     -->" marks a comment; the block below, for example, is commented out:

                <!--

                <property name="writePool">master</property>

                <property name="readPool">myslave</property>

                -->

 

 

 

 

 

 

First, dbServers.xml:

[root@bogon amoeba]# cat conf/dbServers.xml

<?xml version="1.0" encoding="gbk"?>

 

<!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">

<amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">

 

        <!--

            Each dbServer needs to be configured into a Pool,

            If you need to configure multiple dbServer with load balancing that can be simplified by the following configuration:

             add attribute with name virtual = "true" in dbServer, but the configuration does not allow the element with name factoryConfig

             such as 'multiPool' dbServer  

        -->

       

    <dbServer name="abstractServer" abstractive="true">

        <factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">

            <property name="connectionManager">${defaultManager}</property>

            <property name="sendBufferSize">64</property>

            <property name="receiveBufferSize">128</property>

               

            <!-- mysql port -->

            <property name="port">3306</property>  #the MySQL port Amoeba connects to on the back ends; default 3306

           

            <!-- mysql schema -->

            <property name="schema">testdb</property>  #the default schema. When connected through Amoeba, tables must be named explicitly as dbname.tablename; "use dbname" is not supported, because statements are dispatched across the back-end dbServers

           

            <!-- mysql user -->

            <property name="user">test1</property>  #the account (with the password below) Amoeba uses to connect to the back-end database servers; create this user on every back end and grant it access from the Amoeba server

           

            <property name="password">111111</property>

        </factoryConfig>

 

        <poolConfig class="com.meidusa.toolkit.common.poolable.PoolableObjectPool">

            <property name="maxActive">500</property>  #Maximum connections, default 500

            <property name="maxIdle">500</property>    #Maximum idle connections

            <property name="minIdle">1</property>    #Minimum idle connections

            <property name="minEvictableIdleTimeMillis">600000</property>

            <property name="timeBetweenEvictionRunsMillis">600000</property>

            <property name="testOnBorrow">true</property>

            <property name="testOnReturn">true</property>

            <property name="testWhileIdle">true</property>

        </poolConfig>

    </dbServer>

 

    <dbServer name="writedb"  parent="abstractServer">  #Defines a writable backend dbServer, named writedb here; the name is arbitrary and is referenced later

        <factoryConfig>

            <!-- mysql ip -->

            <property name="ipAddress">192.168.2.204</property> #IP of the writable backend

        </factoryConfig>

    </dbServer>

   

    <dbServer name="slave"  parent="abstractServer">  #Defines a readable backend dbServer

        <factoryConfig>

            <!-- mysql ip -->

            <property name="ipAddress">192.168.2.205</property>

        </factoryConfig>

    </dbServer>

   

    <dbServer name="myslave" virtual="true">  #Defines a virtual dbServer, effectively a dbServer group; the readable backends are grouped together here under the name myslave

        <poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">

            <!-- Load balancing strategy: 1=ROUNDROBIN , 2=WEIGHTBASED , 3=HA-->

            <property name="loadbalance">1</property>  #Load-balancing strategy: 1=round-robin, 2=weight-based, 3=HA; 1 is used here

           

            <!-- Separated by commas,such as: server1,server2,server1 -->

            <property name="poolNames">slave</property>  #Members of the myslave group

        </poolConfig>

    </dbServer>

       

</amoeba:dbServers>
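With loadbalance set to 1, reads are distributed round-robin across the pool members. As a rough illustration (this is not Amoeba's actual code, and the pool members here are examples), the strategy amounts to cycling through the configured poolNames:

```python
from itertools import cycle

# Example pool members, as they would appear in poolNames (e.g. "slave1,slave2")
pool_names = "slave1,slave2".split(",")
next_backend = cycle(pool_names)  # round-robin iterator over the pool

# Each incoming read query is handed to the next backend in turn
picks = [next(next_backend) for _ in range(4)]
print(picks)  # ['slave1', 'slave2', 'slave1', 'slave2']
```

With a single member, as in the poolNames="slave" configuration above, every read simply lands on that one backend.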

 

 

 

 

The other configuration file, amoeba.xml:

[root@bogon amoeba]# cat conf/amoeba.xml

<?xml version="1.0" encoding="gbk"?>

 

<!DOCTYPE amoeba:configuration SYSTEM "amoeba.dtd">

<amoeba:configuration xmlns:amoeba="http://amoeba.meidusa.com/">

 

    <proxy>

   

        <!-- service class must implements com.meidusa.amoeba.service.Service -->

        <service name="Amoeba for Mysql" class="com.meidusa.amoeba.mysql.server.MySQLService">

            <!-- port -->

            <property name="port">8066</property>    #Port Amoeba listens on, default 8066

           

            <!-- bind ipAddress -->    #The bind address is configured below; if unset, Amoeba listens on all IPs

            <!--

            <property name="ipAddress">127.0.0.1</property>

             -->

           

            <property name="connectionFactory">

                <bean class="com.meidusa.amoeba.mysql.net.MysqlClientConnectionFactory">

                    <property name="sendBufferSize">128</property>

                    <property name="receiveBufferSize">64</property>

                </bean>

            </property>

           

            <property name="authenticateProvider">

                <bean class="com.meidusa.amoeba.mysql.server.MysqlClientAuthenticator">

                   

 

# Account clients must use when connecting to Amoeba (this account/password is unrelated to the one Amoeba uses to connect to the backend databases)

 

                    <property name="user">root</property>    

 

                   

                    <property name="password">123456</property>

                   

                    <property name="filter">

                        <bean class="com.meidusa.toolkit.net.authenticate.server.IPAccessController">

                            <property name="ipFile">${amoeba.home}/conf/access_list.conf</property>

                        </bean>

                    </property>

                </bean>

            </property>

           

        </service>

       

        <runtime class="com.meidusa.amoeba.mysql.context.MysqlRuntimeContext">

           

            <!-- proxy server client process thread size -->

            <property name="executeThreadSize">128</property>

           

            <!-- per connection cache prepared statement size  -->

            <property name="statementCacheSize">500</property>

           

            <!-- default charset -->

            <property name="serverCharset">utf8</property>

           

            <!-- query timeout( default: 60 second , TimeUnit:second) -->

            <property name="queryTimeout">60</property>

        </runtime>

       

    </proxy>

   

    <!--

        Each ConnectionManager will start as thread

        manager responsible for the Connection IO read , Death Detection

    -->

    <connectionManagerList>

        <connectionManager name="defaultManager" class="com.meidusa.toolkit.net.MultiConnectionManagerWrapper">

            <property name="subManagerClassName">com.meidusa.toolkit.net.AuthingableConnectionManager</property>

        </connectionManager>

    </connectionManagerList>

   

        <!-- default using file loader -->

    <dbServerLoader class="com.meidusa.amoeba.context.DBServerConfigFileLoader">

        <property name="configFile">${amoeba.home}/conf/dbServers.xml</property>

    </dbServerLoader>

   

    <queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter">

        <property name="ruleLoader">

            <bean class="com.meidusa.amoeba.route.TableRuleFileLoader">

                <property name="ruleFile">${amoeba.home}/conf/rule.xml</property>

                <property name="functionFile">${amoeba.home}/conf/ruleFunctionMap.xml</property>

            </bean>

        </property>

        <property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>

        <property name="LRUMapSize">1500</property>

        <property name="defaultPool">writedb</property>  #Amoeba's default pool, set here to writedb

       

       

        <property name="writePool">writedb</property>  #These two properties are commented out by default and must be uncommented; they name the write and read pools defined earlier

        <property name="readPool">myslave</property>   #

       

        <property name="needParse">true</property>

    </queryRouter>

</amoeba:configuration>

 

 

 

5.3 Configuring dbServers.xml

Note first that abstractServer is an abstract parent definition that the concrete servers inherit. Every server needs a backend MySQL port, schema, account, password, connection-pool size, and so on; keeping these in the abstract parent makes them easy to manage in one place. Without inheritance, the same settings would have to be copied into every concrete server.

 

Next, two concrete servers are created. Both inherit from the same parent; the only difference is their IP addresses.

 

Finally, a virtual server is configured, which freely combines concrete servers into one group (the stock comments call this a 'multiPool' dbServer); here it is named myslave, and the name can be anything.

 

# cd /usr/local/amoeba/conf/

# vi dbServers.xml

<?xml version="1.0" encoding="gbk"?>

 

<!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">

<amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">

 

                <!--

                        Each dbServer needs to be configured into a Pool,

                        If you need to configure multiple dbServer with load balancing that can be simplified by the following configuration:

                         add attribute with name virtual = "true" in dbServer, but the configuration does not allow the element with name factoryConfig

                         such as 'multiPool' dbServer  

                -->

 

        <dbServer name="abstractServer" abstractive="true">

                <factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">

                        <property name="connectionManager">${defaultManager}</property>

                        <property name="sendBufferSize">64</property>

                        <property name="receiveBufferSize">128</property>

 

                        <!-- mysql port -->

                        <property name="port">3306</property>

 

                        <!-- mysql schema -->

                        <property name="schema">shaw_db</property>

 

                        <!-- mysql user -->

                        <property name="user">user01</property>

 

                        <property name="password">111111</property>

                </factoryConfig>

 

                <poolConfig class="com.meidusa.toolkit.common.poolable.PoolableObjectPool">

                        <property name="maxActive">500</property>

                        <property name="maxIdle">500</property>

                        <property name="minIdle">1</property>

                        <property name="minEvictableIdleTimeMillis">600000</property>

                        <property name="timeBetweenEvictionRunsMillis">600000</property>

                        <property name="testOnBorrow">true</property>

                        <property name="testOnReturn">true</property>

                        <property name="testWhileIdle">true</property>

                </poolConfig>

        </dbServer>

 

        <dbServer name="master"  parent="abstractServer">

                <factoryConfig>

                        <!-- mysql ip -->

                        <property name="ipAddress">192.168.12.56</property>

                </factoryConfig>

        </dbServer>

 

        <dbServer name="slave1"  parent="abstractServer">

                <factoryConfig>

                        <!-- mysql ip -->

                        <property name="ipAddress">192.168.12.55</property>

                </factoryConfig>

        </dbServer>

 

 

        <dbServer name="myslave" virtual="true">

                <poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">

                        <!-- Load balancing strategy: 1=ROUNDROBIN , 2=WEIGHTBASED , 3=HA-->

                        <property name="loadbalance">1</property>

 

                        <!-- Separated by commas,such as: server1,server2,server1 -->

                        <property name="poolNames">slave1</property>

                </poolConfig>

        </dbServer>

 

</amoeba:dbServers>

 

 

 

 

 

 

5.4 Configuring amoeba.xml

Configure the port Amoeba listens on and the account/password clients use to log in to Amoeba. This account is different from the one in dbServers.xml, which is the account for the backend MySQL servers.

 

Set the default pool to master, along with the write and read strategies; here reads are round-robined across the virtual pool. When a SQL statement cannot be parsed, and therefore cannot be routed correctly to writePool or readPool, it is routed to defaultPool, so defaultPool is normally set to the master node.
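The queryRouter thus parses each statement and picks a pool. A much-simplified sketch of that decision (illustrative only; the pool names match this configuration, but the real parser handles far more cases):

```python
# Pool names as configured in amoeba.xml below
WRITE_POOL, READ_POOL, DEFAULT_POOL = "master", "myslave", "master"

def route(sql: str) -> str:
    """Crude illustration of write/read/default pool selection."""
    head = sql.lstrip().split(None, 1)
    if not head:
        return DEFAULT_POOL          # unparsable -> default pool
    verb = head[0].upper()
    if verb in ("SELECT", "SHOW"):
        return READ_POOL             # reads go to the read pool
    if verb in ("INSERT", "UPDATE", "DELETE", "CREATE", "DROP", "ALTER"):
        return WRITE_POOL            # DML/DDL goes to the write pool
    return DEFAULT_POOL              # anything else -> default pool

print(route("select * from t_usedb"))               # myslave
print(route("insert into t_usedb values (1,'a')"))  # master
```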

 

# cd /usr/local/amoeba/conf

# vi amoeba.xml

<?xml version="1.0" encoding="gbk"?>

 

<!DOCTYPE amoeba:configuration SYSTEM "amoeba.dtd">

<amoeba:configuration xmlns:amoeba="http://amoeba.meidusa.com/">

 

        <proxy>

 

                <!-- service class must implements com.meidusa.amoeba.service.Service -->

                <service name="Amoeba for Mysql" class="com.meidusa.amoeba.mysql.server.MySQLService">

                        <!-- port -->

                        <property name="port">8066</property>

 

                        <!-- bind ipAddress -->

                        <property name="ipAddress">192.168.12.55</property>

 

                        <property name="connectionFactory">

                                <bean class="com.meidusa.amoeba.mysql.net.MysqlClientConnectionFactory">

                                        <property name="sendBufferSize">128</property>

                                        <property name="receiveBufferSize">64</property>

                                </bean>

                        </property>

 

                        <property name="authenticateProvider">

                                <bean class="com.meidusa.amoeba.mysql.server.MysqlClientAuthenticator">

 

                                        <property name="user">amobeba</property>

 

                                        <property name="password">123456</property>

 

                                        <property name="filter">

                                                <bean class="com.meidusa.toolkit.net.authenticate.server.IPAccessController">

                                                        <property name="ipFile">${amoeba.home}/conf/access_list.conf</property>

                                                </bean>

                                        </property>

                                </bean>

                        </property>

 

                </service>

 

                <runtime class="com.meidusa.amoeba.mysql.context.MysqlRuntimeContext">

 

                        <!-- proxy server client process thread size -->

                        <property name="executeThreadSize">128</property>

 

                        <!-- per connection cache prepared statement size  -->

                        <property name="statementCacheSize">500</property>

 

                        <!-- default charset -->

                        <property name="serverCharset">utf8</property>

 

                        <!-- query timeout( default: 60 second , TimeUnit:second) -->

                        <property name="queryTimeout">60</property>

                </runtime>

 

        </proxy>

 

        <!--

                Each ConnectionManager will start as thread

                manager responsible for the Connection IO read , Death Detection

        -->

        <connectionManagerList>

                <connectionManager name="defaultManager" class="com.meidusa.toolkit.net.MultiConnectionManagerWrapper">

                        <property name="subManagerClassName">com.meidusa.toolkit.net.AuthingableConnectionManager</property>

                </connectionManager>

        </connectionManagerList>

 

                <!-- default using file loader -->

        <dbServerLoader class="com.meidusa.amoeba.context.DBServerConfigFileLoader">

                <property name="configFile">${amoeba.home}/conf/dbServers.xml</property>

        </dbServerLoader>

 

        <queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter">

                <property name="ruleLoader">

                        <bean class="com.meidusa.amoeba.route.TableRuleFileLoader">

                                <property name="ruleFile">${amoeba.home}/conf/rule.xml</property>

                                <property name="functionFile">${amoeba.home}/conf/ruleFunctionMap.xml</property>

                        </bean>

                </property>

                <property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>

                <property name="LRUMapSize">1500</property>

                <property name="defaultPool">master</property>


                <property name="writePool">master</property>

                <property name="readPool">myslave</property>

                <property name="needParse">true</property>

        </queryRouter>

</amoeba:configuration>

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

5.5 Starting the Amoeba service

5.5.1 Start the service

#  /usr/local/amoeba/bin/launcher

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=16m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=96m; support was removed in 8.0

 

The stack size specified is too small, Specify at least 228k

Error: Could not create the Java Virtual Machine.

Error: A fatal exception has occurred. Program will exit.

 

5.5.2 Fixing the startup error

From the error text, the JVM failed to start because the configured stack size is too small. How do we fix it?

Amoeba has anticipated this and keeps its JVM options in a properties file, so the JVM parameters can be changed there.

Modify the JVM_OPTIONS parameter in the jvm.properties file.

 

# vim /usr/local/amoeba/jvm.properties

Change it to:

JVM_OPTIONS="-server -Xms1024m -Xmx1024m -Xss256k -XX:PermSize=16m -XX:MaxPermSize=96m"

It was originally:

JVM_OPTIONS="-server -Xms256m -Xmx1024m -Xss196k -XX:PermSize=16m -XX:MaxPermSize=96m"

 

# /usr/local/amoeba/bin/launcher

2018-03-06 14:17:12,281 INFO  net.ServerableConnectionManager - Server listening on 0.0.0.0/0.0.0.0:8066.

 2018-03-06 15:19:46 [INFO] Project Name=Amoeba-MySQL, PID=34261 , System shutdown ....

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=16m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=96m; support was removed in 8.0

 2018-03-06 16:12:06 [INFO] Project Name=Amoeba-MySQL, PID=34592 , starting...

log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml

2018-03-06 16:12:06,852 INFO  context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-3.0.4-BETA

log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf

2018-03-06 16:12:07,142 INFO  net.ServerableConnectionManager - Server listening on /192.168.12.55:8066.

 2018-03-06 16:22:06 [INFO] Project Name=Amoeba-MySQL, PID=34592 , System shutdown ....

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=16m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=96m; support was removed in 8.0

 2018-03-06 16:22:54 [INFO] Project Name=Amoeba-MySQL, PID=34684 , starting...

log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml

2018-03-06 16:22:54,571 INFO  context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-3.0.4-BETA

log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf

2018-03-06 16:22:54,855 INFO  net.ServerableConnectionManager - Server listening on /192.168.12.55:8066.

 

 

5.5.3 Check the listening port

# netstat -tlnp |grep java

tcp        0      0 :::8066                     :::*                        LISTEN      32534/java 

 

5.5.4 Stop the service

# /usr/local/amoeba/bin/shutdown

kill -15 34592

 

 

 

6. Testing read/write splitting through Amoeba

## Note the server banner on login: it is Amoeba

# mysql -uamobeba -p123456 -h 192.168.12.55 -P8066

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 2012979869

Server version: 5.1.45-mysql-amoeba-proxy-3.0.4-BETA Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

 

## Earlier a row 'slave1' was inserted on slave 1, so the presence of that row confirms this query was served by slave 1

mysql> select * from t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    2 | slave1 |

+------+--------+

 

## Insert a row

mysql> insert into t_usedb values (55,'amoeba');

Query OK, 1 row affected (0.10 sec)

 

mysql> select * from t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    2 | slave1 |

|   55 | amoeba |

+------+--------+

 

### Now log in to the master to check whether the new row arrived

# mysql -uroot -p111111

mysql> select * from shaw_db.t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|   55 | amoeba |

+------+--------+

 

### And log in to slave 2 to check whether the new row arrived

# mysql -uroot -p111111 -S /mysqldata/3307/mysql3307.sock

mysql> select * from shaw_db.t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    3 | slave2 |

|   55 | amoeba |

+------+--------+

 

##### Stop the master, then try another insert to see whether it fails

# /etc/init.d/mysqld stop

# mysql -uamobeba -p123456 -h 192.168.12.55 -P8066

mysql> insert into t_usedb values (100,'amoeba222');

ERROR 1044 (42000): Amoeba could not connect to MySQL server[192.168.12.56:3306],Connection refused

mysql> select * from t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    2 | slave1 |

|   55 | amoeba |

+------+--------+

3 rows in set (0.00 sec)

 

 

 

7. Adding slave 2 to Amoeba

7.1 Edit dbServers.xml

### Only dbServers.xml needs to change for this

# cd /usr/local/amoeba/conf

# /usr/local/amoeba/bin/shutdown

 

# vi dbServers.xml

 

<?xml version="1.0" encoding="gbk"?>

 

<!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">

<amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">

 

                <!--

                        Each dbServer needs to be configured into a Pool,

                        If you need to configure multiple dbServer with load balancing that can be simplified by the following configuration:

                         add attribute with name virtual = "true" in dbServer, but the configuration does not allow the element with name factoryConfig

                         such as 'multiPool' dbServer  

                -->

 

        <dbServer name="abstractServer" abstractive="true">

                <factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">

                        <property name="connectionManager">${defaultManager}</property>

                        <property name="sendBufferSize">64</property>

                        <property name="receiveBufferSize">128</property>

 

                        <!-- mysql port -->

                        <property name="port">3306</property>

 

                        <!-- mysql schema -->

                        <property name="schema">shaw_db</property>

 

                        <!-- mysql user -->

                        <property name="user">user01</property>

 

                        <property name="password">111111</property>

                </factoryConfig>

 

                <poolConfig class="com.meidusa.toolkit.common.poolable.PoolableObjectPool">

                        <property name="maxActive">500</property>

                        <property name="maxIdle">500</property>

                        <property name="minIdle">1</property>

                        <property name="minEvictableIdleTimeMillis">600000</property>

                        <property name="timeBetweenEvictionRunsMillis">600000</property>

                        <property name="testOnBorrow">true</property>

                        <property name="testOnReturn">true</property>

                        <property name="testWhileIdle">true</property>

                </poolConfig>

        </dbServer>

 

        <dbServer name="master"  parent="abstractServer">

                <factoryConfig>

                        <!-- mysql ip -->

                        <property name="ipAddress">192.168.12.56</property>

                </factoryConfig>

        </dbServer>

 

        <dbServer name="slave1"  parent="abstractServer">

                <factoryConfig>

                        <!-- mysql ip -->

                        <property name="ipAddress">192.168.12.55</property>

                </factoryConfig>

        </dbServer>

 

        <dbServer name="slave2"  parent="abstractServer">

                <factoryConfig>

                        <property name="ipAddress">192.168.12.56</property>

                        <property name="port">3307</property>

                        <property name="user">user01</property>

                        <property name="password">111111</property>

                </factoryConfig>

        </dbServer>

 

 

        <dbServer name="myslave" virtual="true">

                <poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">

                        <!-- Load balancing strategy: 1=ROUNDROBIN , 2=WEIGHTBASED , 3=HA-->

                        <property name="loadbalance">1</property>

 

                        <!-- Separated by commas,such as: server1,server2,server1 -->

                        <property name="poolNames">slave1,slave2</property>

                </poolConfig>

        </dbServer>

 

</amoeba:dbServers>

 

 

7.2 Test the read/write split

# /usr/local/amoeba/bin/launcher &

[1] 34792

[root@orcl conf]# 2018-03-06 16:12:07,142 INFO  net.ServerableConnectionManager - Server listening on /192.168.12.55:8066.

 2018-03-06 16:22:06 [INFO] Project Name=Amoeba-MySQL, PID=34592 , System shutdown ....

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=16m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=96m; support was removed in 8.0

 2018-03-06 16:22:54 [INFO] Project Name=Amoeba-MySQL, PID=34684 , starting...

log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml

2018-03-06 16:22:54,571 INFO  context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-3.0.4-BETA

log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf

2018-03-06 16:22:54,855 INFO  net.ServerableConnectionManager - Server listening on /192.168.12.55:8066.

 2018-03-06 16:35:49 [INFO] Project Name=Amoeba-MySQL, PID=34684 , System shutdown ....

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=16m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=96m; support was removed in 8.0

 2018-03-06 16:44:36 [INFO] Project Name=Amoeba-MySQL, PID=34797 , starting...

log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml

2018-03-06 16:44:37,450 INFO  context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-3.0.4-BETA

log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf

2018-03-06 16:44:37,735 INFO  net.ServerableConnectionManager - Server listening on /192.168.12.55:8066.

 

 

### After connecting, repeated queries show one served by slave 1 and the next by slave 2

# mysql -uamobeba -p123456 -h 192.168.12.55 -P8066

mysql> select * from t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    2 | slave1 |

|   55 | amoeba |

+------+--------+

3 rows in set (0.00 sec)

mysql> select * from t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    3 | slave2 |

|   55 | amoeba |

+------+--------+

3 rows in set (0.01 sec)

 

## Create a table. The master was stopped during the earlier test, so this fails at first; start the master and create the table again

mysql> create table t_zhong as select * from mysql.user;

ERROR 1044 (42000): Amoeba could not connect to MySQL server[192.168.12.56:3306],Connection refused

 

# /etc/init.d/mysqld start

 

mysql> create table t_zhong as select * from mysql.user;

ERROR 1050 (42S01): Table 't_zhong' already exists

mysql> show tables;

+-------------------+

| Tables_in_shaw_db |

+-------------------+

| t_usedb           |

| t_zhong           |

+-------------------+

2 rows in set (0.00 sec)

mysql> create table t_zhong2 as select * from mysql.user;

Query OK, 5 rows affected (0.06 sec)

Records: 5  Duplicates: 0  Warnings: 0

 

## Log in to slave 2 to verify

# mysql -uuser01 -p111111 -h 192.168.12.56 -P3307

mysql> use shaw_db;

Database changed

mysql> show tables;

+-------------------+

| Tables_in_shaw_db |

+-------------------+

| t_usedb           |

| t_zhong           |

| t_zhong2          |

+-------------------+

3 rows in set (0.00 sec)

 

 

8. Setting up an init script for Amoeba

Write the script below to /etc/init.d/amoeba.

If "file not found" or similar errors occur, run dos2unix amoeba to convert the file's line endings.

 

 

cd /etc/init.d

 chkconfig --add ./amoeba

 chkconfig amoeba on

 service amoeba start

 

 

### The script:

 

#!/bin/sh

# chkconfig: 12345 62 62

 # description: amoeba 3.05 AutoRun Services

 # /etc/init.d/amoeba

 #

 # Run-level startup script for the Amoeba proxy service

 export JAVA_HOME=/usr/java/jdk1.8.0_161

 export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

 PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

 NAME=Amoeba

 AMOEBA_BIN=/usr/local/amoeba/bin/launcher

 SHUTDOWN_BIN=/usr/local/amoeba/bin/shutdown

 PIDFILE=/usr/local/amoeba/Amoeba-MySQL.pid

 SCRIPTNAME=/etc/init.d/amoeba

 

 case "$1" in

start)

echo -n "Starting $NAME... "

$AMOEBA_BIN &

echo " done"

;;

 

stop)

echo -n "Stopping $NAME... "

$SHUTDOWN_BIN

echo " done"

;;

 

restart)

$SHUTDOWN_BIN

sleep 1

$AMOEBA_BIN &

;;

 

*)

echo "Usage: $SCRIPTNAME {start|stop|restart}"

exit 1

;;

 

 esac

 

 

III. MySQL read/write splitting with mysql-proxy

Reference    https://www.cnblogs.com/lin3615/p/5684891.html

http://www.mamicode.com/info-detail-1566167.html

http://blog.itpub.net/15480802/viewspace-1432659/

http://www.bubuko.com/infodetail-1523794.html

https://www.cnblogs.com/tae44/p/4701226.html

 


 

1. Environment overview

Master        192.168.12.56   3306

Slave 1       192.168.12.56   3307

Slave 2       192.168.12.55   3306

MySQL-Proxy   192.168.12.55   4040

 

mysql-proxymysql官方提供的mysql中间件服务,上游可接入若干个mysql-client,后端可连接若干个mysql-server。它使用mysql协议,任何使用mysql-client的上游无需修改任何代码,即可迁移至mysql-proxy上。

In its most basic use, mysql-proxy is an interception and forwarding middle layer; intercepting queries and modifying results is done by writing Lua scripts.

 

MySQL Proxy controls connection forwarding through Lua scripts whose main hook functions map to the stages of the MySQL protocol:

    * connect_server()      // called when a connection request arrives from a client

    * read_handshake()      // called when reading the handshake the server initiates

    * read_auth()           // called when reading the client's authentication credentials

    * read_auth_result()    // called when reading the authentication result

    * read_query()          // called when reading a query from the client

    * read_query_result()   // called when reading the query result

 

 

Capabilities:

   1. Failover of database connections

   2. Load balancing of database connections

   3. Query interception (reading the wire packets, e.g. for keyword substitution)

   4. Query rewriting (e.g. enforcing rules such as password strength)

   5. Injecting additional queries

   6. Removing, modifying, or adding to the SQL result set returned to the client

 

Configuration file:

mysql-proxy.cnf (permissions must be set to 660)

 [mysql-proxy]

    admin-username=root

    admin-password=123456

    admin-lua-script=/usr/local/lib/admin.lua

    proxy-read-only-backend-addresses=192.168.2.115

    proxy-backend-addresses=192.168.2.117

    proxy-lua-script=/usr/local/lib/rw-splitting.lua

    log-file=/var/log/mysql-proxy.log

    log-level=debug

    daemon=true

keepalive=true

 

proxy-lua-script names a Lua script that controls how mysql-proxy runs; the script is invoked again on each new connection and whenever the script file is modified.

keepalive starts an extra process dedicated to monitoring the mysql-proxy process and restarting it when mysql-proxy crashes.
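The effect of keepalive=true can be sketched as a watchdog loop (an illustration in Python, not how mysql-proxy implements it; the mysql-proxy command at the end is only an example invocation):

```python
import subprocess
import time

def keep_alive(cmd, max_restarts=3):
    """Re-launch the child whenever it exits abnormally, as keepalive=true does."""
    restarts = 0
    while restarts <= max_restarts:
        proc = subprocess.Popen(cmd)
        proc.wait()
        if proc.returncode == 0:   # clean shutdown: stop supervising
            return "clean-exit"
        restarts += 1              # crash: restart after a short pause
        time.sleep(0.1)
    return "gave-up"

# e.g. keep_alive(["/usr/local/mysql-proxy/bin/mysql-proxy",
#                  "--defaults-file=/etc/mysql-proxy.cnf"])
```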

 

Start:

/usr/local/mysql-proxy/bin/mysql-proxy -P 192.168.2.112:3306 --defaults-file=/etc/mysql-proxy.cnf

 

 

Read/write splitting:

When proxy-lua-script is set to rw-splitting.lua, mysql-proxy applies read/write splitting to the SQL that clients send in;

within the same transaction, DML goes to the backend while SELECTs go to the read-only-backend.

By default the Lua script requires at least 4 (and up to 8 or more) client connections before splitting kicks in: mysql-proxy tracks client connections, and until the count exceeds the preset min_idle_connections it does not split, i.e. queries still go to the master.
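That threshold behavior can be sketched as follows (an illustration with simplified names; the real logic lives in rw-splitting.lua):

```python
MIN_IDLE_CONNECTIONS = 4  # rw-splitting.lua's default lower bound

def pick_backend(sql, idle_connections):
    """Send everything to the master until enough idle connections exist."""
    if idle_connections < MIN_IDLE_CONNECTIONS:
        return "master"              # below threshold: no splitting yet
    if sql.lstrip().upper().startswith("SELECT"):
        return "read-only-backend"   # reads go to the read-only backends
    return "master"                  # writes always go to the master

print(pick_backend("select 1", 1))   # master
print(pick_backend("select 1", 5))   # read-only-backend
```

This is why, in a quick single-session test, SELECTs may still land on the master; the guide later lowers min_idle_connections to 1 to make the split observable immediately.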

 

 

2. Installing MySQL-Proxy

Install the required base packages; most systems already satisfy the component versions Lua needs.

Install Lua (check first whether it is actually needed):

    yum -y install gcc* gcc-c++* autoconf* automake* zlib* libxml* ncurses-devel* libmcrypt* libtool* flex* pkgconfig*

 

Read/write splitting itself is implemented by a Lua script that already ships inside mysql-proxy, so nothing extra needs to be installed for it.

Download:  https://downloads.mysql.com/archives/proxy/

 

# tar zxvf mysql-proxy-0.8.5-linux-rhel5-x86-64bit.tar.gz

# mv mysql-proxy-0.8.5-linux-rhel5-x86-64bit /usr/local/mysql-proxy

 

 

3. Configuring MySQL-Proxy: creating the main configuration file

The file below is authoritative. If the admin interface on port 4041 is not configured, all the admin-related settings can be omitted.

 

 


 

# cd /usr/local/mysql-proxy

# mkdir lua

# mkdir logs

# cp share/doc/mysql-proxy/rw-splitting.lua ./lua/

# cp share/doc/mysql-proxy/admin-sql.lua ./lua/

# vi /etc/mysql-proxy.cnf

[mysql-proxy]

user=root

admin-username=user01

admin-password=111111

proxy-address=192.168.12.55:4040

proxy-read-only-backend-addresses=192.168.12.55

proxy-backend-addresses=192.168.12.56

proxy-lua-script=/usr/local/mysql-proxy/lua/rw-splitting.lua

admin-lua-script=/usr/local/mysql-proxy/lua/admin-sql.lua

log-file=/usr/local/mysql-proxy/logs/mysql-proxy.log

log-level=info

daemon=true

keepalive=true

 

# chmod 660 /etc/mysql-proxy.cnf

 

4. Adjusting the read/write-splitting script


mysql-proxy tracks client connections; until the connection count exceeds the preset min_idle_connections, no read/write splitting happens, i.e. queries go to the master.

 

# vi /usr/local/mysql-proxy/lua/rw-splitting.lua

if not proxy.global.config.rwsplit then

        proxy.global.config.rwsplit = {

                min_idle_connections = 1,

                max_idle_connections = 1,

 

                is_debug = false

        }

end

 

 

5. Starting mysql-proxy

# /usr/local/mysql-proxy/bin/mysql-proxy --defaults-file=/etc/mysql-proxy.cnf

# netstat -nltp| grep mysql-proxy

tcp        0      0 192.168.12.55:4040          0.0.0.0:*                   LISTEN      31749/mysql-proxy 

 

# killall -9 mysql-proxy   # use this to stop mysql-proxy

 

---- Check the log

# tail -200f /usr/local/mysql-proxy/logs/mysql-proxy.log

2018-03-07 11:22:39: (message) chassis-unix-daemon.c:136: [angel] we try to keep PID=31749 alive

2018-03-07 11:22:39: (critical) plugin proxy 0.8.5 started

2018-03-07 11:22:39: (message) proxy listening on port 192.168.12.55:4040

2018-03-07 11:22:39: (message) added read/write backend: 192.168.12.56

2018-03-07 11:22:39: (message) added read-only backend: 192.168.12.55

 

 

6. Testing read/write splitting

# mysql -uuser01 -p111111 -h192.168.12.55 -P4040

 

## This connection is served by the master

mysql>  select * from shaw_db.t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

+------+--------+

 

#### Open two more sessions; this one is served by slave 1

# mysql -uuser01 -p111111 -h192.168.12.55 -P4040

mysql>  select * from shaw_db.t_usedb;

+------+--------+

| id   | name   |

+------+--------+

|    1 | master |

|    2 | slave1 |

+------+--------+

 

## Insert test

mysql> insert into shaw_db.t_usedb values(55,'myproxy');

Query OK, 1 row affected (0.25 sec)

 

mysql>  select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    2 | slave1  |

|   55 | myproxy |

+------+---------+

 

## Connect to slave 2 to check whether the data replicated

# mysql -uroot -p111111 -S /mysqldata/3307/mysql3307.sock

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    3 | slave2  |

|   55 | myproxy |

+------+---------+

 

7. Add slave 2 to the proxy configuration file

7.1 Edit the main configuration file

# killall -9 mysql-proxy    # stop mysql-proxy first

# vi /etc/mysql-proxy.cnf

[mysql-proxy]

user=root

admin-username=user01

admin-password=111111

proxy-address=192.168.12.55:4040

proxy-read-only-backend-addresses=192.168.12.55:3306,192.168.12.56:3307

proxy-backend-addresses=192.168.12.56:3306

proxy-lua-script=/usr/local/mysql-proxy/lua/rw-splitting.lua

admin-lua-script=/usr/local/mysql-proxy/lua/admin-sql.lua

log-file=/usr/local/mysql-proxy/logs/mysql-proxy.log

log-level=info

daemon=true

keepalive=true

 

7.2 Start mysql-proxy

# /usr/local/mysql-proxy/bin/mysql-proxy --defaults-file=/etc/mysql-proxy.cnf

# tail -200f /usr/local/mysql-proxy/logs/mysql-proxy.log

2018-03-07 14:08:59: (critical) plugin proxy 0.8.5 started

2018-03-07 14:08:59: (message) proxy listening on port 192.168.12.55:4040

2018-03-07 14:08:59: (message) added read/write backend: 192.168.12.56:3306

2018-03-07 14:08:59: (message) added read-only backend: 192.168.12.55:3306

2018-03-07 14:08:59: (message) added read-only backend: 192.168.12.56:3307

 

 

# netstat -nltpd |grep mysql-proxy

tcp      0    0 192.168.12.55:4040    0.0.0.0:*   LISTEN   31871/mysql-proxy

 

7.3 Test master/slave routing

# mysql -uuser01 -p111111 -h192.168.12.55 -P4040

 

## This connection hits the master

mysql>  select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|   55 | myproxy |

+------+---------+

 

## Open several more windows; all of them can query normally

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    3 | slave2  |

|   55 | myproxy |

+------+---------+

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    2 | slave1  |

|   55 | myproxy |

 

 

## Stop/start slave 2 (stop slave 2 here, insert a row through mysql-proxy, then start slave 2 again)

# mysqladmin -uroot -p111111 -S /mysqldata/3307/mysql3307.sock shutdown

# mysqld_safe --defaults-file=/mysqldata/3307/my3307.cnf >/dev/null 2>&1 &
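Redirection order matters in the start command above: redirections apply left to right, so `>/dev/null 2>&1` silences both streams, while the reversed `2>&1 >/dev/null` still lets stderr through to the terminal. A quick demonstration:

```shell
# 2>&1 first: stderr is duplicated onto the *current* stdout (here, the
# command substitution), and only then is stdout pointed at /dev/null,
# so the error message is still captured.
noisy=$( { echo oops >&2; } 2>&1 >/dev/null )

# >/dev/null first: stdout goes to /dev/null, then stderr is duplicated
# onto it, so both streams are silenced.
quiet=$( { echo oops >&2; } >/dev/null 2>&1 )

echo "noisy='${noisy}' quiet='${quiet}'"   # prints: noisy='oops' quiet=''
```

For a quiet background daemon start, `>/dev/null 2>&1 &` is the order that discards both streams.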

 

## With slave 2 stopped, insert a row into t_usedb to test

# mysql -uuser01 -p111111 -h192.168.12.55 -P4040

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    2 | slave1  |

|   55 | myproxy |

+------+---------+

mysql> insert into shaw_db.t_usedb values(100,'zhong');

Query OK, 1 row affected (0.03 sec)

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    2 | slave1  |

|   55 | myproxy |

|  100 | zhong   |

+------+---------+

4 rows in set (0.00 sec)

 

## Now start slave 2 and connect to it to check the table data

# mysql -uroot -p111111 -S /mysqldata/3307/mysql3307.sock

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    3 | slave2  |

|   55 | myproxy |

|  100 | zhong   |

+------+---------+

 

 

## Finally, shut down the master and note the errors

# /etc/init.d/mysqld stop

# mysql -uuser01 -p111111 -h192.168.12.55 -P4040

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    2 | slave1  |

|   55 | myproxy |

|  100 | zhong   |

+------+---------+

mysql> insert into shaw_db.t_usedb values(222,'err');

ERROR 2013 (HY000): Lost connection to MySQL server during query

mysql> create table t as select * from mysql.user;

ERROR 2006 (HY000): MySQL server has gone away

No connection. Trying to reconnect...

Connection id:    5

Current database: *** NONE ***

ERROR 2013 (HY000): Lost connection to MySQL server during query

 

 

8. Configure a mysql-proxy service startup script

Save the script below as /etc/init.d/mysqlproxy

dos2unix mysqlproxy    # if running the script fails with "No such file or directory"-style errors, its line endings may need converting
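If `dos2unix` is not installed, stripping the carriage returns with GNU `sed` achieves the same conversion. Demonstrated on a scratch script (illustrative /tmp path):

```shell
# Scratch script with Windows (CRLF) line endings: the condition that makes
# "#!/bin/bash" scripts fail with "bad interpreter: No such file or directory".
printf '#!/bin/bash\r\necho ok\r\n' > /tmp/mysqlproxy.demo

# Strip the trailing carriage return from every line (what dos2unix does);
# -i edits the file in place (GNU sed).
sed -i 's/\r$//' /tmp/mysqlproxy.demo

# The cleaned script now runs normally.
bash /tmp/mysqlproxy.demo   # prints: ok
```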

 

 

cd /etc/init.d

 chkconfig --add ./mysqlproxy

 chkconfig mysqlproxy on

 service mysqlproxy start

 

 

### The script:

#!/bin/bash

#chkconfig: - 99 23

#description: mysql_proxy

mysql_proxy_home='/usr/local/mysql-proxy'

case "$1" in

start)

        $mysql_proxy_home/bin/mysql-proxy --defaults-file=/etc/mysql-proxy.cnf

        ;;

stop)

        killall -9 mysql-proxy &>/dev/null

        ;;

restart)

        killall -9 mysql-proxy &>/dev/null

        $mysql_proxy_home/bin/mysql-proxy --defaults-file=/etc/mysql-proxy.cnf

        ;;

*)

        echo "Usage: $0 {start|stop|restart}"

        exit 1

esac

exit 0

 

 

9. !! Configure the mysql-proxy admin interface !!

9.1 Edit the main configuration file

[mysql-proxy]

user=root

plugins=admin,proxy

admin-username=admin

admin-password=admin

admin-lua-script=/usr/local/mysql-proxy/lua/admin.lua

admin-address=192.168.12.55:4041

proxy-backend-addresses=192.168.12.56:3306

proxy-read-only-backend-addresses=192.168.12.55:3306,192.168.12.56:3307

proxy-address=192.168.12.55:4040

proxy-lua-script=/usr/local/mysql-proxy/lua/rw-splitting.lua

log-file=/usr/local/mysql-proxy/logs/mysql-proxy.log

log-level=debug

daemon=true

keepalive=true

 

#### Note: this configuration differs slightly from the earlier one. The admin-specific parameters: plugins=admin,proxy loads the admin plugin alongside the proxy plugin; admin-address is the address:port of the management interface; admin-username/admin-password are its login credentials; admin-lua-script points to the Lua script that implements the admin commands.

 

 

9.2 Write the admin interface script

# vi /usr/local/mysql-proxy/lua/admin.lua

function set_error(errmsg) 

    proxy.response = { 

        type = proxy.MYSQLD_PACKET_ERR, 

        errmsg = errmsg or "error" 

    } 

end

function read_query(packet)

    if packet:byte() ~= proxy.COM_QUERY then  

        set_error("[admin] we only handle text-based queries (COM_QUERY)") 

        return proxy.PROXY_SEND_RESULT 

    end

    local query = packet:sub(2)

    local rows = { }

    local fields = { }

    if query:lower() == "select * from backends" then

        fields = {  

            { name = "backend_ndx",  

              type = proxy.MYSQL_TYPE_LONG },

            { name = "address",

              type = proxy.MYSQL_TYPE_STRING }, 

            { name = "state", 

              type = proxy.MYSQL_TYPE_STRING }, 

            { name = "type", 

              type = proxy.MYSQL_TYPE_STRING }, 

            { name = "uuid", 

              type = proxy.MYSQL_TYPE_STRING }, 

            { name = "connected_clients",  

              type = proxy.MYSQL_TYPE_LONG }, 

        }

        for i = 1, #proxy.global.backends do

            local states = { 

                "unknown", 

                "up", 

                "down" 

            } 

            local types = { 

                "unknown", 

                "rw", 

                "ro" 

            } 

            local b = proxy.global.backends[i]

            rows[#rows + 1] = {

                i, 

                b.dst.name,          -- configured backend address 

                states[b.state + 1], -- the C-id is pushed down starting at 0 

                types[b.type + 1],   -- the C-id is pushed down starting at 0 

                b.uuid,              -- the MySQL Server's UUID if it is managed 

                b.connected_clients  -- currently connected clients 

            } 

        end 

    elseif query:lower() == "select * from help" then 

        fields = {  

            { name = "command",  

              type = proxy.MYSQL_TYPE_STRING }, 

            { name = "description",  

              type = proxy.MYSQL_TYPE_STRING }, 

        } 

        rows[#rows + 1] = { "SELECT * FROM help", "shows this help" } 

        rows[#rows + 1] = { "SELECT * FROM backends", "lists the backends and their state" } 

    else 

        set_error("use 'SELECT * FROM help' to see the supported commands") 

        return proxy.PROXY_SEND_RESULT 

    end

    proxy.response = {

        type = proxy.MYSQLD_PACKET_OK, 

        resultset = { 

            fields = fields, 

            rows = rows 

        } 

    } 

    return proxy.PROXY_SEND_RESULT 

end

 

 

9.3 Restart mysql-proxy

Note: this restart has to load the extra plugin we added, so the following options come into play (set in /etc/mysql-proxy.cnf above):

--plugins=admin  plugins mysql-proxy loads at startup;

--admin-username="admin"  user for the mysql-proxy admin interface;

--admin-password="admin"  its password;

--admin-lua-script=/usr/local/mysql-proxy/lua/admin.lua  path of the script the admin plugin uses;

 

## After restarting, check /usr/local/mysql-proxy/logs/mysql-proxy.log

# /etc/init.d/mysqlproxy restart

 


 

# netstat -nltpd |grep mysql-

tcp        0      0 192.168.12.55:4040    0.0.0.0:*     LISTEN  34596/mysql-proxy  

tcp        0      0 192.168.12.55:4041    0.0.0.0:*     LISTEN  34596/mysql-proxy

 

 

9.4 Log in to the admin port

# mysql -uadmin -padmin -h192.168.12.55 -P4041

 

## The admin port accepts only two query commands

mysql> select * from help;

+------------------------+------------------------------------+

| command                | description                        |

+------------------------+------------------------------------+

| SELECT * FROM help     | shows this help                    |

| SELECT * FROM backends | lists the backends and their state |

+------------------------+------------------------------------+

2 rows in set (0.00 sec)

 

## The following command shows whether the master and slaves are up

mysql> SELECT * FROM backends;

+-------------+--------------------+---------+------+------+-------------------+

| backend_ndx | address            | state   | type | uuid | connected_clients |

+-------------+--------------------+---------+------+------+-------------------+

|           1 | 192.168.12.56:3306 | unknown | rw   | NULL |                 0 |

|           2 | 192.168.12.55:3306 | unknown | ro   | NULL |                 0 |

|           3 | 192.168.12.56:3307 | unknown | ro   | NULL |                 0 |

+-------------+--------------------+---------+------+------+-------------------+

3 rows in set (0.00 sec)
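The state column is what an availability check would key on. A small sketch of such a check, with sample rows hard-coded as space-separated fields for the demo (one backend shown down, as in the failure test later); in practice the rows would come from running the SELECT above through the mysql client on the admin port, e.g. `mysql -uadmin -padmin -h192.168.12.55 -P4041 -e 'SELECT * FROM backends'`:

```shell
# Sample backend rows: backend_ndx, address, state, type, uuid, clients.
backends='1 192.168.12.56:3306 down rw NULL 0
2 192.168.12.55:3306 up ro NULL 0
3 192.168.12.56:3307 up ro NULL 0'

# A monitoring hook could alert on any backend whose state column is "down".
down=$(echo "$backends" | awk '$3 == "down" {print $2}')
echo "down backends: ${down}"   # prints: down backends: 192.168.12.56:3306
```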

 

 

9.5 Master/slave login test

## Log in from several windows

# mysql -uuser01 -p111111 -h 192.168.12.55 -P4040

# mysql -urep -p123456 -h192.168.12.55 -P4040

 

 

## Now check the admin port: the master and slave 1 show as up

# mysql -uadmin -padmin -h192.168.12.55 -P4041

mysql> SELECT * FROM backends;

+-------------+--------------------+---------+------+------+-------------------+

| backend_ndx | address            | state   | type | uuid | connected_clients |

+-------------+--------------------+---------+------+------+-------------------+

|           1 | 192.168.12.56:3306 | up      | rw   | NULL |                 0 |

|           2 | 192.168.12.55:3306 | up      | ro   | NULL |                 0 |

|           3 | 192.168.12.56:3307 | unknown | ro   | NULL |                 0 |

+-------------+--------------------+---------+------+------+-------------------+

 

9.6 Additional notes on testing mysql-proxy

Ideally, when the master host is shut down the data would still be readable (just not writable), and when a slave host goes down the data would remain both readable and writable. The first part is hard to achieve; if you know Lua, modifying mysql-proxy's read/write splitting script might make it possible.

 

As for a slave going down while the setup stays readable and writable: that needs no read/write splitting at all. It is simply the basic capability of master-slave replication, with the slave acting as a standby. After a stopped slave is restarted, its I/O thread automatically resumes copying the master's binary log from the point of interruption, and its SQL thread reads and executes the SQL in those logs.

In short, no special configuration is needed; this is what replication does on its own.

 

 

When the master is shut down, however, the address mysql-proxy listens on becomes unreliable. Existing connections keep working at first, but as soon as any connection runs a DML or DDL statement it errors out and is dropped, after which new connections can no longer be established (existing connections can still run SELECTs). Adjusting the proxy-lua-script (rw-splitting.lua) might work around this, but without deeper Lua knowledge it is unclear whether that is feasible.

 

The test errors are shown below.

After the master goes down:

mysql> SELECT * FROM backends;

+-------------+--------------------+-------+------+------+-------------------+

| backend_ndx | address            | state | type | uuid | connected_clients |

+-------------+--------------------+-------+------+------+-------------------+

|           1 | 192.168.12.56:3306 | down  | rw   | NULL |                 0 |

|           2 | 192.168.12.55:3306 | up    | ro   | NULL |                 0 |

|           3 | 192.168.12.56:3307 | up    | ro   | NULL |                 0 |

+-------------+--------------------+-------+------+------+-------------------+

 

### Session 1: run DDL statements

mysql> create table shaw_db.zhong as select * from mysql.zhong;

ERROR 2013 (HY000): Lost connection to MySQL server during query

mysql> create table shaw_db.zhong as select * from mysql.zhong;

ERROR 2006 (HY000): MySQL server has gone away

No connection. Trying to reconnect...

Connection id:    23

Current database: *** NONE ***

ERROR 2013 (HY000): Lost connection to MySQL server during query

mysql>  select * from shaw_db.t_usedb;

ERROR 2006 (HY000): MySQL server has gone away

No connection. Trying to reconnect...

ERROR 1105 (HY000): (proxy) all backends are down

ERROR:

Can't connect to the server

 

### Session 2: try a new connection

[root@xuan2 ~]# mysql -uuser01 -p111111 -h 192.168.12.55 -P4040

ERROR 1105 (HY000): (proxy) all backends are down

 

### Session 3: an existing connection keeps running queries

mysql> select * from shaw_db.t_usedb;

+------+---------+

| id   | name    |

+------+---------+

|    1 | master  |

|    2 | slave1  |

|   55 | myproxy |

|  100 | zhong   |

+------+---------+

 

 

 

 
