Note

To use automatic transactional xCluster replication, both the Primary and Standby universes must be running v2.25.1 or later.

Automatic transactional xCluster replication (Tech Preview) handles all aspects of replication for both data and schema changes.

In particular, DDL changes made to the Primary universe are automatically replicated to the Standby universe.

In this mode, xCluster replication operates at the YSQL database granularity. This means you only run xCluster management operations when adding and removing databases from replication, and not when tables in the databases are created or dropped.

Set up Automatic mode replication

Because this feature is in Tech Preview, you must enable it by adding the xcluster_enable_ddl_replication flag to the allowed_preview_flags_csv list and setting it to true on the yb-master servers of both universes.
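
For example, assuming the flags are passed on the yb-master command line (adapt this to however you configure your masters, for example yugabyted's --master_flags option), the settings look like the following:

    ./bin/yb-master \
        --allowed_preview_flags_csv=xcluster_enable_ddl_replication \
        --xcluster_enable_ddl_replication=true \
        <other yb-master flags>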

Before setting up xCluster replication, ensure you have reviewed the Prerequisites and Best practices.

DDL operations must be paused on the Primary universe during the entire setup process. See issue #26053.

The following assumes you have set up Primary and Standby universes. Refer to Set up yugabyted universes. The yugabyted node must be started with --backup_daemon=true to initialize the backup/restore agent.
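
For example (addresses and directories are placeholders), each yugabyted node can be started as follows:

    ./bin/yugabyted start \
        --advertise_address <node_ip> \
        --base_dir <base_dir> \
        --backup_daemon=true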

  1. Create a checkpoint on the Primary universe for all the databases that you want to be part of the replication.

    ./bin/yugabyted xcluster create_checkpoint \
        --replication_id <replication_id> \
        --databases <comma_separated_database_names> \
        --automatic_mode
    

    The command informs you if any data needs to be copied to the Standby, or only the schema (empty tables and indexes) needs to be created. For example:

    +-------------------------------------------------------------------------+
    |                                yugabyted                                |
    +-------------------------------------------------------------------------+
    | Status               : xCluster create checkpoint success.              |
    | Bootstrapping        : Bootstrap is required for database `yugabyte`.   |
    +-------------------------------------------------------------------------+
    For each database which requires bootstrap run the following commands to perform a backup and restore.
     Run on source:
    ./yugabyted backup --cloud_storage_uri <AWS/GCP/local cloud storage uri>  --database <database_name> --base_dir <base_dir of source node>
     Run on target:
    ./yugabyted restore --cloud_storage_uri <AWS/GCP/local cloud storage uri>  --database <database_name> --base_dir <base_dir of target node>
    
  2. If needed, perform a full copy of the database(s) on the Primary to the Standby using distributed backup and restore.

    Note that if your source database is not empty, it must be bootstrapped, even if the output suggests otherwise. This applies even if it contains only empty tables, unused types, or enums (issue #24030).
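
    For example, assuming a hypothetical cloud storage location s3://my-bucket/xcluster and the yugabyte database, the copy might look like the following:

    # Run on the Primary (source) node
    ./bin/yugabyted backup \
        --cloud_storage_uri s3://my-bucket/xcluster \
        --database yugabyte \
        --base_dir <primary_base_dir>

    # Run on the Standby (target) node
    ./bin/yugabyted restore \
        --cloud_storage_uri s3://my-bucket/xcluster \
        --database yugabyte \
        --base_dir <standby_base_dir>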

  3. Enable point-in-time recovery (PITR) on the database(s) on both the Primary and Standby universes:

    ./bin/yugabyted configure point_in_time_recovery \
        --enable \
        --retention <retention_period> \
        --database <database_name>
    

    The retention_period must be greater than the amount of time you expect the Primary universe to be down before it self recovers or before you perform a failover to the Standby universe.
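
    For example, to keep three days of history for the yugabyte database (the retention value and its format are illustrative; confirm the accepted format in the yugabyted reference):

    ./bin/yugabyted configure point_in_time_recovery \
        --enable \
        --retention 3d \
        --database yugabyte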

  4. Set up the xCluster replication.

    ./bin/yugabyted xcluster set_up \
        --target_address <ip_of_any_target_cluster_node> \
        --replication_id <replication_id> \
        --bootstrap_done
    

    You should see output similar to the following:

    +-----------------------------------------------+
    |                   yugabyted                   |
    +-----------------------------------------------+
    | Status        : xCluster set-up successful.   |
    +-----------------------------------------------+
    

The following describes the equivalent manual procedure using yb-admin, and assumes you have set up Primary and Standby universes. Refer to Set up universes.

  1. Create a checkpoint using the create_xcluster_checkpoint command, providing a name for the replication group, and the names of the databases to replicate as a comma-separated list.

    ./bin/yb-admin \
        -master_addresses <primary_master_addresses> \
        create_xcluster_checkpoint \
        <replication_group_id> \
        <comma_separated_namespace_names> \
        automatic_ddl_mode
    

    The command informs you if any data needs to be copied to the Standby, or only the schema (empty tables and indexes) needs to be created. For example:

    Waiting for checkpointing of database(s) to complete
    Checkpointing of yugabyte completed. Bootstrap is not required for setting up xCluster replication
    Successfully checkpointed databases for xCluster replication group repl_group1
    Create equivalent YSQL objects (schemas, tables, indexes, ...) for databases [yugabyte] on the standby universe
    Once the above step(s) complete run 'setup_xcluster_replication'
    

    You can also manually check the status as follows:

    ./bin/yb-admin \
    -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \
    is_xcluster_bootstrap_required repl_group1 yugabyte
    

    You should see output similar to the following:

    Waiting for checkpointing of database(s) to complete
    Checkpointing of yugabyte completed. Bootstrap is not required for setting up xCluster replication
    
  2. If needed, perform a full copy of the database on the Primary to the Standby using distributed backup and restore. See Distributed snapshots for YSQL.

    Note that if your source database is not empty, it must be bootstrapped, even if the output suggests otherwise. This applies even if it contains only empty tables, unused types, or enums (issue #24030).
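
    The full procedure is described in the distributed snapshots documentation; the following is only a rough sketch of the yb-admin commands involved (IDs and file names are placeholders, and copying the snapshot data files between universes is not shown):

    # On the Primary: snapshot the database and export the snapshot metadata
    ./bin/yb-admin -master_addresses <primary_master_addresses> \
        create_database_snapshot ysql.<database_name>
    ./bin/yb-admin -master_addresses <primary_master_addresses> \
        export_snapshot <snapshot_id> <database_name>.snapshot

    # On the Standby: import the metadata, then restore the imported snapshot
    ./bin/yb-admin -master_addresses <standby_master_addresses> \
        import_snapshot <database_name>.snapshot
    ./bin/yb-admin -master_addresses <standby_master_addresses> \
        restore_snapshot <imported_snapshot_id>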

  3. Enable point-in-time recovery (PITR) on the database(s) on both the Primary and Standby universes:

    ./bin/yb-admin \
        -master_addresses <standby_master_addresses> \
        create_snapshot_schedule \
        <snapshot-interval> \
        <retention-time> \
        <ysql.database_name>
    

    The retention-time must be greater than the amount of time you expect the Primary universe to be down before it self recovers or before you perform a failover to the Standby universe.
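
    For example (the interval and retention are specified in minutes; the values are illustrative), to take a snapshot every 60 minutes and retain one day of history for the yugabyte database:

    ./bin/yb-admin \
        -master_addresses <standby_master_addresses> \
        create_snapshot_schedule 60 1440 ysql.yugabyte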

  4. Set up the xCluster replication group.

    ./bin/yb-admin \
    -master_addresses <primary_master_addresses> \
    setup_xcluster_replication \
    <replication_group_id> \
    <standby_master_addresses>
    

    You should see output similar to the following:

    xCluster Replication group repl_group1 setup successfully
    

Monitor replication

For information on monitoring xCluster replication, refer to Monitor xCluster.
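
For a quick check, you can also query the xCluster safe time and estimated lag on the Standby universe (a sketch, not a replacement for full monitoring):

    ./bin/yb-admin \
        -master_addresses <standby_master_addresses> \
        get_xcluster_safe_time include_lag_and_skew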

Add a database to a replication group

To be added to replication, a database must contain at least one table. If the database is colocated, it must contain at least one colocated table.
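
For example (the database db2 and the table definition are illustrative), you can create a minimal table on the Primary before adding the database:

    ./bin/ysqlsh -h <primary_node_ip> -d db2 \
        -c "CREATE TABLE IF NOT EXISTS seed_table (id int PRIMARY KEY);"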

  1. Create a checkpoint on the Primary universe for all the databases that you want to add to an existing replication group.

    ./bin/yugabyted xcluster add_to_checkpoint \
        --replication_id <replication_id> \
        --databases <comma_separated_database_names>
    

    You should see output similar to the following:

    Waiting for checkpointing of database to complete
    Successfully checkpointed database db2 for xCluster replication group repl_group1
    Bootstrap is not required for adding database to xCluster replication
    Create equivalent YSQL objects (schemas, tables, indexes, ...) for the database in the standby universe
    
  2. If needed, perform a full copy of the database(s) on the Primary to the Standby using distributed backup and restore. If your source database is not empty, it must be bootstrapped, even if the output suggests otherwise. This applies even if it contains only empty tables, unused types, or enums (#24030).

  3. Enable point-in-time recovery (PITR) on the database(s) on both the Primary and Standby universes:

    ./bin/yugabyted configure point_in_time_recovery \
        --enable \
        --retention <retention_period> \
        --database <database_name>
    

    The retention_period must be greater than the amount of time you expect the Primary universe to be down before it self recovers or before you perform a failover to the Standby universe.

  4. Add the databases to the xCluster replication.

    ./bin/yugabyted xcluster add_to_replication \
        --databases <comma_separated_database_names> \
        --replication_id <replication_id> \
        --target_address <IP-of-any-target-node> \
        --bootstrap_done
    
Using yb-admin directly:

  1. Create a checkpoint.

    ./bin/yb-admin \
    -master_addresses <primary_master_addresses> \
    add_namespace_to_xcluster_checkpoint <replication_group_id> <namespace_name>
    

    You should see output similar to the following:

    Waiting for checkpointing of database to complete
    Successfully checkpointed database db2 for xCluster replication group repl_group1
    Bootstrap is not required for adding database to xCluster replication
    Create equivalent YSQL objects (schemas, tables, indexes, ...) for the database in the standby universe
    
  2. If needed, perform a full copy of the database(s) on the Primary to the Standby using distributed backup and restore. If your source database is not empty, it must be bootstrapped, even if the output suggests otherwise. This applies even if it contains only empty tables, unused types, or enums (#24030).

  3. Enable point-in-time recovery (PITR) on the database(s) on both the Primary and Standby universes:

    ./bin/yb-admin \
        -master_addresses <standby_master_addresses> \
        create_snapshot_schedule 1 10 ysql.yugabyte
    
  4. Set up the database using the checkpoint.

    ./bin/yb-admin \
    -master_addresses <primary_master_addresses> \
    add_namespace_to_xcluster_replication <replication_group_id> <namespace_name> <standby_master_addresses>
    

    You should see output similar to the following:

    Successfully added db2 to xCluster Replication group repl_group1
    

Remove a database from a replication group

To remove a database from a replication group, use the following command:

./bin/yugabyted xcluster remove_database_from_replication \
    --databases <comma_separated_database_names> \
    --replication_id <replication_id> \
    --target_address <ip_of_any_target_cluster_node>
Or, using yb-admin directly:

./bin/yb-admin \
-master_addresses <primary_master_addresses> \
remove_namespace_from_xcluster_replication <replication_group_id> <namespace_name> <standby_master_addresses>

You should see output similar to the following:

Successfully removed db2 from xCluster Replication group repl_group1

Drop xCluster replication group

To drop a replication group, use the following command:

./bin/yugabyted xcluster delete_replication \
    --replication_id <replication_id> \
    --target_address <ip_of_any_target_cluster_node>

Or, using yb-admin directly:

./bin/yb-admin \
-master_addresses <primary_master_addresses> \
drop_xcluster_replication <replication_group_id> <standby_master_addresses>

You should see output similar to the following:

Outbound xCluster Replication group rg1 deleted successfully

Making DDL changes

DDL operations must only be performed on the Primary universe. All schema changes are automatically replicated to the Standby universe.
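
For example (the table and column are illustrative), a schema change issued on the Primary requires no additional xCluster commands and is applied to the Standby automatically:

    ./bin/ysqlsh -h <primary_node_ip> -d yugabyte \
        -c "ALTER TABLE orders ADD COLUMN discount numeric DEFAULT 0;"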