Data Replication Setup Using Master and Slave: PostgreSQL – II

In the last few blogs, we have seen how to automate the steps required to set up master-slave nodes. In the next few blogs, we will explore how to set up High Availability (HA).

This master-slave environment can be implemented on separate VMs or on the same machine. For the purposes of this blog, we will implement the setup on a single machine.

In this blog, we will set up the master node using our automated scripts. You can also follow along manually, without the automated scripts, by creating the database node and modifying the configurations yourself.

Let’s list all the configuration parameters that need to be updated to create a master-slave environment.

Configurations to be updated on the master; these will be applied through a supplementary configuration file, “pgrpl.conf”:

  • listen_addresses = '*'
  • synchronous_commit = local
  • max_wal_size = 1GB
  • min_wal_size = 100MB
  • max_wal_senders = 5
  • wal_keep_segments = 0
  • max_replication_slots = 5
  • archive_mode = on
  • archive_command = 'cp %p /home/sanjeeva/postgres/Archivelogs/pg01/%f'
  • synchronous_standby_names = 'pgcluster'
  • hot_standby = on
  • wal_level = hot_standby

Configurations to be updated on the slave, in addition to primary_conninfo:

  • application_name=pgcluster
  • restore_command = 'cp /home/sanjeeva/postgres/Archivelogs/pg01/%f %p'
  • recovery_target_timeline = 'latest'

Let’s now discuss these parameters and the values to be set, starting with the configuration on the master node. All configuration changes mentioned here belong in the postgresql.conf file. However, if we do not want to modify that file directly, we can create a separate file and include it in postgresql.conf as an additional configuration file; PostgreSQL will then read these settings from the new file, as shown below.
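As a minimal sketch, assuming the supplementary file is named pgrpl.conf and sits in the same directory as postgresql.conf (the file name and location are our choice, not a PostgreSQL requirement), the include directive at the end of postgresql.conf would look like this:

    # at the end of postgresql.conf: pull in the replication settings
    # from the supplementary file (a relative path is resolved against
    # the directory containing postgresql.conf)
    include 'pgrpl.conf'

    # alternatively, skip silently if the file is absent
    # include_if_exists 'pgrpl.conf'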

We have set listen_addresses to * for the purposes of this blog; however, we strongly recommend restricting it to the specific hosts that need to connect. This parameter is a string and accepts a comma-separated list of host names or IP addresses.
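For illustration only (the addresses below are placeholders, not part of our setup), a restricted configuration might look like this:

    # listen only on localhost and the interface the standby connects to
    listen_addresses = 'localhost,192.168.10.21'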

All WAL-related parameters should be tuned to your own scenario and environment. Since we are running everything on a single VM, the values shown here are reasonable for that case. For example, on high-end hardware with a heavy transaction load, we could increase max_wal_senders from 5 to 10.
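As an illustrative sketch only (these numbers are assumptions for a busier environment, not the values used in our single-VM setup), the WAL section of pgrpl.conf might be scaled up like this:

    # example sizing for a busier master (illustrative values)
    max_wal_senders = 10        # more concurrent standbys / base backups
    max_replication_slots = 10  # one slot per standby, plus headroom
    max_wal_size = 4GB          # allow more WAL between checkpoints
    min_wal_size = 512MB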

For replication to the slave node, archive_mode should be on, and the database must be told where to copy the WAL files using the archive_command parameter. This can point to any folder that is reachable from the master database; in our case, we created a folder under postgres/Archivelogs in our home directory.
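A minimal sketch of preparing that archive location, assuming the commands run as the OS user that owns the PostgreSQL instance (here the home directory of user sanjeeva, matching the archive_command above):

    # create the archive directory the master will copy WAL files into
    mkdir -p /home/sanjeeva/postgres/Archivelogs/pg01
    chmod 700 /home/sanjeeva/postgres/Archivelogs/pg01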

hot_standby should be on and wal_level should be set to hot_standby.

The synchronous_commit parameter accepts several values, and the behaviour of the master-slave pair varies considerably depending on which one is chosen. Its effect is also tied to the synchronous_standby_names parameter. If synchronous_standby_names is not defined, synchronous_commit only governs local commit behaviour (effectively off or on). If we have supplied a value for synchronous_standby_names, synchronous_commit determines how the master and the slave agree on a transaction. For example, if we set the value to remote_apply, the master will wait for the slave to acknowledge that a write has been applied before the commit returns. Hence, it is advised to choose the value carefully, based on the requirements and the objective of setting up the replica. A sketch of the strictest combination is shown below.
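As a hedged example of that strictest combination (our setup in this series uses synchronous_commit = local, so the following is illustrative only):

    # master waits until the named standby has received and applied the
    # WAL for a commit before returning success to the client
    synchronous_standby_names = 'pgcluster'
    synchronous_commit = remote_apply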

So far, we have covered all the parameters to be defined on the master node. On the slave, there are only three parameters to update, and they do not go into postgresql.conf; it is the recovery.conf file that we need to update with these parameters. The values are self-explanatory.

The application_name parameter is defined inside primary_conninfo, which takes a string as input, for example: primary_conninfo = '... application_name=pgcluster'. A sketch of the full recovery.conf follows.
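As a minimal sketch of the slave's recovery.conf, with placeholder connection details (the host, port, and user below are assumptions to be replaced with your own; depending on the PostgreSQL version, standby_mode = 'on' is also typically required in recovery.conf):

    # recovery.conf on the slave (placeholder connection details)
    standby_mode = 'on'
    primary_conninfo = 'host=<master_host> port=5432 user=<replication_user> application_name=pgcluster'
    restore_command = 'cp /home/sanjeeva/postgres/Archivelogs/pg01/%f %p'
    recovery_target_timeline = 'latest'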

By now we have understood all the parameters required for the master-slave replication setup. We are almost done. In the next blog, we will create the nodes and update these values using our automated scripts.
