1. Initial state: both master and slave are running.
2. Upgrade procedure
Master
1). Stop the master and note the latest checkpoint location. This is where your downtime starts.
As the postgres user, run:
$ pg_ctl -D $PGDATA stop -m fast
$ pg_controldata | grep "Latest checkpoint location"
Latest checkpoint location: 0/C619840
2). Stop the slave and compare the latest checkpoint locations.
$ pg_ctl -D $PGDATA stop -m fast
$ pg_controldata | grep "Latest checkpoint location"
Latest checkpoint location: 0/C619840
Because the two checkpoint locations are identical, we know the standby has applied all WAL: there is no data difference between master and slave.
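This comparison can also be scripted. A minimal sketch, with the checkpoint values from above hardcoded; on the real hosts each value would come from pg_controldata as shown:

```shell
# Hardcoded sample values for illustration; on each host you would capture
# the value with:
#   pg_controldata | awk -F': *' '/Latest checkpoint location/ {print $2}'
master_ckpt="0/C619840"
slave_ckpt="0/C619840"

if [ "$master_ckpt" = "$slave_ckpt" ]; then
    echo "checkpoints match, standby has applied all WAL"
else
    echo "checkpoints differ, do NOT proceed with the upgrade" >&2
    exit 1
fi
```

If the locations differ, the standby has not replayed everything yet and the upgrade must not continue.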
3). Save the old configuration files
$ cp /u02/pgdata/testmig/postgresql.conf /var/tmp
$ cp /u02/pgdata/testmig/pg_hba.conf /var/tmp
4). Upgrade the master in link mode. On a multi-core server you can add the "-j" option to run pg_upgrade jobs in parallel.
$ export PGDATAOLD=/u02/pgdata/testmig/
$ export PGDATANEW=/u02/pgdata/testmig95/
$ export PGBINOLD=/u01/app/postgres/product/91/db_8/bin/
$ export PGBINNEW=/u01/app/postgres/product/95/db_5/bin/
$ /u01/app/postgres/product/95/db_5/bin/pg_upgrade -k
(Usually you'd do a "-c" check run before the real upgrade.) In link mode the files are hard-linked instead of copied, which is much faster and saves disk space. The downside is that you cannot revert to the old cluster if anything goes wrong. When it goes fine, it looks like this:
Performing Consistency Checks
-----------------------------
Checking cluster versions ok
Checking database user is the install user ok
Checking database connection settings ok
Checking for prepared transactions ok
Checking for reg* system OID user data types ok
Checking for contrib/isn with bigint-passing mismatch ok
Checking for invalid "line" user columns ok
Creating dump of global objects ok
Creating dump of database schemas
ok
Checking for presence of required libraries ok
Checking database user is the install user ok
Checking for prepared transactions ok
If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.
Performing Upgrade
------------------
Analyzing all rows in the new cluster ok
Freezing all rows on the new cluster ok
Deleting files from new pg_clog ok
Copying old pg_clog to new server ok
Setting next transaction ID and epoch for new cluster ok
Deleting files from new pg_multixact/offsets ok
Setting oldest multixact ID on new cluster ok
Resetting WAL archives ok
Setting frozenxid and minmxid counters in new cluster ok
Restoring global objects in the new cluster ok
Restoring database schemas in the new cluster
ok
Setting minmxid counter in new cluster ok
Adding ".old" suffix to old global/pg_control ok
If you want to start the old cluster, you will need to remove
the ".old" suffix from /u02/pgdata/testmig/global/pg_control.old.
Because "link" mode was used, the old cluster cannot be safely
started once the new cluster has been started.
Linking user relation files
ok
Setting next OID for new cluster ok
Sync data directory to disk ok
Creating script to analyze new cluster ok
Creating script to delete old cluster ok
Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
./analyze_new_cluster.sh
Running this script will delete the old cluster's data files:
./delete_old_cluster.sh
5). Restore the configuration files into the new data directory
$ mkdir -p /u02/pgdata/testmig95/pg_log
$ cp /var/tmp/postgresql.conf /u02/pgdata/testmig95/postgresql.conf
$ cp /var/tmp/pg_hba.conf /u02/pgdata/testmig95/pg_hba.conf
6). Start and then stop the upgraded instance, and check the log file that everything is fine.
$ /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ -l /u02/pgdata/testmig95/pg_log/log.log start
$ /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ stop
The cluster is now on the new version and was shut down cleanly (the standby will be rebuilt next).
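Scanning the log for trouble is easy to script. A self-contained sketch that greps a sample log line; on the real server you would point logfile at /u02/pgdata/testmig95/pg_log/log.log instead:

```shell
# Write a sample log line so the sketch runs anywhere; on the real server
# set logfile=/u02/pgdata/testmig95/pg_log/log.log instead.
logfile=$(mktemp)
printf 'LOG:  database system is ready to accept connections\n' > "$logfile"

# Any ERROR/FATAL/PANIC line in the startup log deserves a closer look.
if grep -qiE 'ERROR|FATAL|PANIC' "$logfile"; then
    result="problems found, check the log"
else
    result="log looks clean"
fi
echo "$result"
rm -f "$logfile"
```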
Slave
1). Save the configuration files
$ cp /u02/pgdata/testmig/postgresql.conf /var/tmp
$ cp /u02/pgdata/testmig/pg_hba.conf /var/tmp
$ cp /u02/pgdata/testmig/recovery.conf /var/tmp
Sync the master's directories to the standby (this will be very fast because rsync creates hard links on the standby server instead of copying the user files):
$ cd /u02/pgdata
$ rsync --archive --delete --hard-links --size-only testmig testmig95 192.168.22.33:/u02/pgdata
$ cd /u03
$ rsync -r pgdata/testmig95 192.168.22.33:/u03/pgdata/testmig95
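Before the real sync, rsync's --dry-run (-n) flag lists what would be transferred without changing anything on the receiver. A self-contained sketch using throwaway local directories; the real command targets 192.168.22.33 as shown above:

```shell
# Throwaway local directories stand in for the master/standby paths.
src=$(mktemp -d); dst=$(mktemp -d)
echo demo > "$src/base.file"

# --dry-run reports what would be transferred but copies nothing.
rsync --archive --hard-links --size-only --dry-run "$src"/ "$dst"/

# Confirm the receiver was left untouched.
[ -e "$dst/base.file" ] && copied=yes || copied=no
rm -rf "$src" "$dst"
echo "copied after dry run: $copied"
```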
2). Restore the configuration files on the standby
$ cp /var/tmp/postgresql.conf /u02/pgdata/testmig95/postgresql.conf
$ cp /var/tmp/pg_hba.conf /u02/pgdata/testmig95/pg_hba.conf
$ cp /var/tmp/recovery.conf /u02/pgdata/testmig95/recovery.conf
3). Start the master
$ export PATH=/u01/app/postgres/product/95/db_5/bin:$PATH
$ pg_ctl -D /u02/pgdata/testmig95/ start -l /u02/pgdata/testmig95/pg_log/log.log
4). Start the standby
$ export PATH=/u01/app/postgres/product/95/db_5/bin:$PATH
$ pg_ctl -D /u02/pgdata/testmig95/ start -l /u02/pgdata/testmig95/pg_log/log.log
5). Check the standby's log file
LOG: database system was shut down at 2017-01-19 07:51:24 GMT
LOG: creating missing WAL directory "pg_xlog/archive_status"
LOG: entering standby mode
LOG: started streaming WAL from primary at 0/E000000 on timeline 1
LOG: consistent recovery state reached at 0/E024D38
LOG: redo starts at 0/E024D38
LOG: database system is ready to accept read only connections
6). Additional checks on the standby
$ psql
psql (9.5.5)
Type "help" for help.
postgres=# select pg_is_in_recovery();
pg_is_in_recovery
-------------------
t
(1 row)
postgres=# \dx
List of installed extensions
Name | Version | Schema | Description
-----------+---------+------------+-----------------------------------------
adminpack | 1.0 | pg_catalog | administrative functions for PostgreSQL
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
(2 rows)
postgres=# \c testmig
You are now connected to database "testmig" as user "postgres".
testmig=# \dx
List of installed extensions
Name | Version | Schema | Description
----------------+---------+------------+-------------------------------------------------------------------
pg_buffercache | 1.0 | public | examine the shared buffer cache
pg_trgm | 1.0 | public | text similarity measurement and index searching based on trigrams
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
(3 rows)
testmig=# \d
List of relations
Schema | Name | Type | Owner
--------+------------------+-------+----------
public | pg_buffercache | view | postgres
public | pgbench_accounts | table | postgres
public | pgbench_branches | table | postgres
public | pgbench_history | table | postgres
public | pgbench_tellers | table | postgres
(5 rows)
testmig=# select count(*) from pgbench_accounts;
count
---------
1000000
(1 row)
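Cross-checking such counts between master and standby can be scripted too. A sketch with hypothetical hardcoded values; on the real hosts each value would be captured with psql as noted in the comment:

```shell
# Hypothetical values; on each host you would capture the count with:
#   psql -Atc 'select count(*) from pgbench_accounts'
master_rows=1000000
standby_rows=1000000

if [ "$master_rows" -eq "$standby_rows" ]; then
    echo "row counts match: $master_rows"
else
    echo "row counts differ: master=$master_rows standby=$standby_rows" >&2
fi
```

This is only a smoke test, not a full data comparison, but it catches a standby that silently fell behind.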
7). Run analyze_new_cluster.sh on the master
$ ./analyze_new_cluster.sh
This script will generate minimal optimizer statistics rapidly
so your system is usable, and then gather statistics twice more
with increasing accuracy. When it is done, your system will
have the default level of optimizer statistics.
If you have used ALTER TABLE to modify the statistics target for
any tables, you might want to remove them and restore them after
running this script because they will delay fast statistics generation.
If you would like default statistics as quickly as possible, cancel
this script and run:
"/u01/app/postgres/product/95/db_5/bin/vacuumdb" --all --analyze-only
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "testmig": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "testmig": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics
vacuumdb: processing database "testmig": Generating default (full) optimizer statistics
8). Delete the old cluster on the master
$ ./delete_old_cluster.sh
Copy the script to the standby, or delete the old standby directories manually:
$ rm -rf /u02/pgdata/testmig
$ rm -rf /u03/pgdata/testmig
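As a final sanity check, you can verify on each host that the old 9.1 directories are really gone. A minimal sketch using the paths from this walkthrough:

```shell
# Confirm the old cluster's directories (paths from the steps above)
# no longer exist on this host.
ok=yes
for d in /u02/pgdata/testmig /u03/pgdata/testmig; do
    if [ -e "$d" ]; then
        echo "still present: $d"
        ok=no
    fi
done
echo "old cluster removed: $ok"
```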