Multi-Server Installation

This section describes a Carbonio CE Multi-Server installation, that is, a Carbonio installation spread across multiple nodes, each playing one or more Roles.

Rather than giving fixed installation instructions, with each functionality bound to a specific node, we present an installation scenario that can be adapted to the different needs of Carbonio CE users, who may use a different number of nodes. For this reason, we introduce the notion of Role: a Carbonio CE functionality that is considered atomic and consists of one or more packages.

A Role can be installed on any node of the cluster, therefore the scenario we describe below can be modified at will by installing a Role on a different node (or even on a dedicated node).

Six Nodes Scenario

In the suggested scenario we will set up a Carbonio CE Multi-Server environment, composed of six nodes (which we will denote as SRV1, …, SRV6) as follows:

  1. SRV1 features a dedicated installation of Postgres

  2. SRV2 represents the core infrastructure of Carbonio CE and installs Directory Server, Carbonio Mesh, and DB connection

  3. SRV3 is equipped with MTA, the mail server

  4. SRV4 hosts the Proxy, which allows web access to all components

  5. SRV5 is an AppServer on which we install Carbonio Files and Carbonio Docs, which provide sharing and collaborative editing of documents, together with the Carbonio Storages CE instance

  6. SRV6 is another AppServer and hosts Carbonio Preview, which provides previews, snippets, and thumbnails of documents, and the User Management

The Carbonio Storages CE Role must be unique within a Carbonio CE infrastructure. Carbonio Storages CE is a dependency of Carbonio Files and is therefore installed on the first node on which you install Carbonio Files. When you install Carbonio Files on another node, Carbonio will recognise via Carbonio Mesh that there is already an instance of Carbonio Storages CE, prevent its installation, and rely on the existing one.

In our scenario, we start the Carbonio CE installation from 6 nodes equipped with Ubuntu 20.04 LTS. The instructions are also valid for nodes installed with RHEL 8: the only difference is the command for package installation, while the commands to configure the nodes are the same.

We also assume that the IP address of each node is 172.16.0.1X, where X is the node's number. In other words, IPs will be in the range from 172.16.0.11 (SRV1) to 172.16.0.16 (SRV6, the second AppServer). These values will be used in configuration files that need to be manually modified during the installation or upgrade procedures.
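
For reference, under this addressing scheme the mapping between nodes and addresses is the following (the hostnames are placeholders: replace them with your own FQDNs, as discussed in the Requirements below):

172.16.0.11   srv1.example.com   # Postgres
172.16.0.12   srv2.example.com   # Directory Server, Carbonio Mesh, DB connection
172.16.0.13   srv3.example.com   # MTA
172.16.0.14   srv4.example.com   # Proxy
172.16.0.15   srv5.example.com   # AppServer: Files, Docs, Storages
172.16.0.16   srv6.example.com   # AppServer: Preview, User Management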

In most Multi-Server scenarios, it proves useful to install a Replica Directory Server in a Master/Slave setup for improved reliability and load-balancing. We describe in a dedicated section the procedure to install the Replica on a dedicated node, SRV7 (which must be equipped with the same OS as the other six). However, you can install the Replica on any node other than SRV2, following the same procedure.

Requirements

Each node of a Multi-Server installation must satisfy the Single-Server’s Software Requirements; the System Requirements also apply, but take into account the following advice:

  • By dividing the services, and therefore the load, across more nodes, fewer resources per node are needed. We nonetheless recommend at least 4GB of RAM on each node.

Additional Requirements

On Ubuntu, no additional requirement is necessary.

On RHEL 8, the following additional requirements must be satisfied.

  • An active subscription (you must be able to fetch from BaseOS and the other main repositories):

    # subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms
    
  • The CodeReady repository enabled:

    # subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
    

Some more remarks:

  • Familiarity with the CLI is necessary. All carbonio commands must be executed as the zextras user (these commands will feature a zextras$ prompt), while all other commands must be issued as the root user, unless stated otherwise.

  • Give meaningful names to the nodes. For example, call them proxy.example.com, mta.example.com, and so on. Replace example.com with your domain name.

  • During the installation procedure, you will need to write down some configuration options and their values, because they will be needed in the setup of the next nodes. This information is summarised at the end of each node’s installation: copy it to a safe place and keep it at hand until the end of the installation. Examples of such values include the IP address (public or private) of a node or the password of a database user.

Preliminary Tasks

Before starting with the actual installation, carry out the following tasks on each of the six nodes.

Task 1: Configure repositories

In order to add Carbonio CE’s repository, go to the following page and fill in the form:

https://www.zextras.com/carbonio-community-edition/#discoverproduct

You will receive an e-mail containing:

  • the URL of the repository

  • the GPG key of the repository

Follow the instructions in the e-mail to add these data to your system.

Repository and Channels

The following is important information concerning the Carbonio CE package repository and its content. Please read it carefully, as it might save you some time in case of installation or upgrade problems and help you provide more precise bug reports.

The repository hosts simultaneously packages of two channels:

  • Release Candidate (RC). Packages in this channel are made available as soon as they are built by Zextras development and tested by the QA team. While they are stable, they are not suitable for use in a production environment, because they might still contain bugs or new functionalities that have not yet reached production-level quality, or some broken dependencies might be introduced.

    Usually these problems are fixed within days or even hours, so try again later before reporting a problem.

    Use this channel and its packages for testing (or demo) installations only.

  • RELEASE. This channel contains only packages that are stable and suitable for a production environment.

Hint

When reporting a problem or opening a ticket, remember to always mention the channel you are using, as this helps us analyse the problem more quickly.

FAQ

  1. I want to help testing things, which channel should I use?

    RC channel.

  2. I need to install Carbonio in a production environment, which channel should I use?

    RELEASE channel.

  3. How will we be informed about new RC packages?

    There will be no notification, because the RC channel is updated continuously.

  4. How will we be informed about a potential new release incoming?

    A red message on the homepage of the documentation site will notify you of the release of a new stable version. You may also be informed through other means of communication such as email and social media.

  5. Could there be bugs in the packages downloaded from the RC channel?

    Yes, RC versions have a risk of containing bugs (which in some cases could lead to data loss). If you find a bug in an RC package we kindly ask you to report it on the appropriate community page. We will try to resolve it as quickly as possible.

Task 2: Setting Hostname

Carbonio CE needs a valid FQDN as hostname and a valid entry in the /etc/hosts file. To configure them, execute these two commands. First, set the hostname

# hostnamectl set-hostname mail.example.com

then update /etc/hosts with IP and hostname

# echo "172.16.0.10 mail.example.com mail" >> /etc/hosts

You can also print the current IP address and hostname to check them:

# echo "$(hostname -I) $(hostname -f)"

Hint

Replace 172.16.0.10 with the actual management IP to be assigned to the server.

It is mandatory to configure the hostname, especially on the Directory Server node, otherwise the services will not be able to bind to the correct address, leading to a disruption of Carbonio CE's functionality.
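
A quick way to verify that the FQDN resolves to the intended address is to query the local resolver; hostname and getent are standard tools, so no additional package should be needed.

# hostname -f
# getent hosts mail.example.com

The second command should print the IP address you added to /etc/hosts.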

RHEL 8-only Preliminary Tasks

A few tasks are required for the installation of Carbonio CE on RHEL 8 systems; they concern the configuration of SELinux and the firewall.

SELinux and Firewall

SELinux

SELinux must be set to disabled or permissive in the file /etc/selinux/config. You can check the current status using the command

# sestatus
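
If sestatus reports enforcing, a minimal way to switch to permissive mode, assuming the standard /etc/selinux/config layout, is

# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# setenforce 0

The first command makes the change persistent across reboots, the second applies it immediately.
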
Firewall

All the ports needed by Carbonio CE must be open on the firewall, or the firewall must be disabled. To disable the firewall, issue the commands

# systemctl stop firewalld.service
# systemctl disable firewalld.service

Node Installation

The installation procedure follows the suggested order of nodes as described in the scenario. A few remarks:

  • It is assumed that the Postgres node is not a “real” part of the infrastructure, in the sense that it can also be an existing server that is configured to communicate correctly with Carbonio CE (configuration instructions are part of the SRV1 installation).

    Note

    In our scenario, we install Postgres and configure it from scratch (SRV1).

  • The first node to be installed is the one that will feature the Directory Server role (SRV2)

  • The next server to be installed is the MTA one (SRV3)

  • The other nodes can be installed in any order; you can skip the instructions for any node or Role that you do not plan to install

  • While the overall procedure is the same for both Ubuntu and RHEL 8, the actual commands and file paths may differ between the two operating systems, so make sure you execute the correct command on the correct file

When the installation process has successfully finished, you can access Carbonio CE's GUI using a browser: directions can be found in Section Access to the Web Interface.

SRV1: Postgres

The first node is dedicated to PostgreSQL and will host all the databases required by Carbonio CE.

On Ubuntu, install PostgreSQL 12 with

# apt install postgresql-12

On RHEL 8, the first step is to add the dedicated PostgreSQL repository

# yum -y install https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm

Then, make sure that PostgreSQL 12 is installed, by running the commands

# dnf -qy module disable postgresql
# dnf -y install postgresql12 postgresql12-server

Finally, initialise and enable PostgreSQL

# /usr/pgsql-12/bin/postgresql-12-setup initdb
# systemctl enable --now postgresql-12
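
To make sure PostgreSQL is up and accepting local connections before moving on, you can run a quick query as the postgres user (this check works on both operating systems):

# su - postgres -c "psql --command=\"SELECT version();\""

The output should report PostgreSQL 12.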

Carbonio CE relies on a number of databases to store and keep track of all the objects it needs to manage. The main database can be configured in two steps, but if you are running Carbonio CE on RHEL 8, please first configure Postgres according to the guidelines.

The first step is to create a role with administrative rights and an associated password.

# su - postgres -c "psql --command=\"CREATE ROLE carbonio_adm WITH LOGIN SUPERUSER encrypted password 'DB_ADM_PWD';\""

Remember to replace the password with a robust password of your choice and store it in a safe place (preferably using a password manager), as you will need it in the remainder of the procedure and might also need it in the future. This password will be denoted as DB_ADM_PWD.

The second step is to create the database.

# su - postgres -c "psql --command=\"CREATE DATABASE carbonio_adm owner carbonio_adm;\""

Finally, allow the other nodes to access the databases that will be stored on this node by running the following four commands. Note that the path of the pg_hba.conf file and the name of the service differ between the two operating systems.

On Ubuntu:

# su - postgres -c "psql --command=\"ALTER SYSTEM SET listen_addresses TO '*';\""
# su - postgres -c "psql --command=\"ALTER SYSTEM SET port TO '5432';\""
# echo "host    all             all             0.0.0.0/0            md5" >> /etc/postgresql/12/main/pg_hba.conf
# systemctl restart postgresql

On RHEL 8:

# su - postgres -c "psql --command=\"ALTER SYSTEM SET listen_addresses TO '*';\""
# su - postgres -c "psql --command=\"ALTER SYSTEM SET port TO '5432';\""
# echo "host    all             all             0.0.0.0/0            md5" >> /var/lib/pgsql/12/data/pg_hba.conf
# systemctl restart postgresql-12

Hint

You may replace the 0.0.0.0/0 network with the network in which the cluster is installed, to prevent unwanted access.
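
If you want to verify remote access before configuring the other nodes, you can attempt a connection from any other node with the psql client, assuming it is installed there; you will be prompted for DB_ADM_PWD. Replace SRV1_IP with the IP address of this node.

# psql -h SRV1_IP -p 5432 -U carbonio_adm -d carbonio_adm -c "SELECT 1;"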

Values used in the next steps

  • DB_ADM_PWD the password of the carbonio_adm database role

  • SRV1_IP the IP address of the node

SRV2: Directory Server, DB connection, and Carbonio Mesh Server

The installation of this server encompasses a number of tasks, as it will feature several services that are crucial for the correct operation of Carbonio CE: the Directory Server, the connection with the PostgreSQL node using Pgpool-II, and the Carbonio Mesh server.

Note

It is possible to install multiple instances of the service-discover service provided by Carbonio Mesh. Please refer to section Set up Multiple Carbonio Mesh Servers for details.

  1. Install the following packages.

    On Ubuntu:

    # apt install service-discover-server pgpool2 \
      carbonio-directory-server carbonio-files-db

    On RHEL 8:

    # dnf install service-discover-server pgpool2 \
      carbonio-directory-server carbonio-files-db
    
  2. Configure Pgpool-II to work with the node on which PostgreSQL runs (SRV1), using the following command. Replace SRV1_IP with the value saved in the previous task.

    # echo "backend_clustering_mode = 'raw'
      port = 5432
      backend_hostname0 = 'SRV1_IP' # eg 192.168.1.100
      backend_port0 = 5432" > /etc/pgpool2/pgpool.conf
    
  3. Restart the service using this command.

    # systemctl restart pgpool2.service
    
  4. Bootstrap Carbonio

    # carbonio-bootstrap
    

    The bootstrap command will execute a number of tasks and will set up the node. At the end, you will be prompted with a menu and, if everything has already been configured, you only need to press y to confirm.

  5. Setup Carbonio Mesh

    Carbonio Mesh is required to allow communication between Carbonio CE and its components. The configuration is interactively generated by command

    # service-discover setup-wizard
    

    This command will:

    • ask for the IP address and netmask

    • ask for the Carbonio Mesh secret, which is used for setups, management, and to access the administration GUI. See section Carbonio Mesh Administration Interface for more information.

      This password will be denoted as MESH_SECRET throughout the documentation.

      Hint

      We suggest using a robust password at least 16 characters long, containing lowercase and uppercase letters, numbers, and special characters, and storing it in a password manager.

      In case the password is lost or the credential file becomes corrupted and unusable, you can Regenerate Carbonio Mesh Secret.

    • store the setup in file /etc/zextras/service-discover/cluster-credentials.tar.gpg

    To complete Carbonio Mesh installation, run

    # pending-setups -a
    

    Hint

    The secret needed to run the above command is stored in file /var/lib/service-discover/password, which is accessible only by the root user.

  6. Bootstrap the Carbonio Files Database, using the Postgres user created on SRV1 and the password (DB_ADM_PWD) defined in a previous step.

    # PGPASSWORD=DB_ADM_PWD carbonio-files-db-bootstrap carbonio_adm 127.0.0.1
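
As an optional check that the connection to the database on SRV1 works through the local Pgpool-II instance configured above, you can list the databases via localhost; this assumes the psql client is available on this node.

# PGPASSWORD=DB_ADM_PWD psql -h 127.0.0.1 -p 5432 -U carbonio_adm -l

The carbonio_adm database created on SRV1 should appear in the list.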
    

Values used in the next steps

  • SRV2_hostname this node’s hostname, which can be retrieved using the command su - zextras -c "carbonio prov gas service-discover"

  • MESH_SECRET the Carbonio Mesh password

  • LDAP_PWD the LDAP bind password for the root user and applications, retrieved with command:

    # zmlocalconfig -s zimbra_ldap_password
    
  • AMAVIS_PWD the password used by Carbonio for the Amavis service, retrieved with command

    # zmlocalconfig -s ldap_amavis_password
    
  • POSTFIX_PWD the password used by Carbonio for the Postfix service, retrieved with command

    # zmlocalconfig -s ldap_postfix_password
    
  • NGINX_PWD the password used by Carbonio for the NGINX service, retrieved with command

    # zmlocalconfig -s ldap_nginx_password
    

Note

By default, all the LDAP_PWD, AMAVIS_PWD, POSTFIX_PWD, and NGINX_PWD bind passwords have the same value.

SRV3: MTA

On this node we install the MTA, which is the software that actually sends and receives emails.

On Ubuntu:

# apt install service-discover-agent carbonio-mta

On RHEL 8:

# dnf install service-discover-agent carbonio-mta

The following tasks must be executed to configure the MTA.

  1. Bootstrap Carbonio

    # carbonio-bootstrap
    

    In the bootstrap menu, use SRV2_hostname, LDAP_PWD, POSTFIX_PWD, and AMAVIS_PWD in the following items to successfully complete the bootstrap.

    • Ldap master host: SRV2_hostname

    • Ldap Admin password: LDAP_PWD

    • Bind password for postfix ldap user: POSTFIX_PWD

    • Bind password for amavis ldap user: AMAVIS_PWD

  2. Run Carbonio Mesh setup using MESH_SECRET

    # service-discover setup-wizard
    

    Since this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be automatically downloaded.

  3. Complete Carbonio Mesh setup

    # pending-setups -a
    

    Hint

    The secret needed to run the above command is stored in file /var/lib/service-discover/password which is accessible only by the root user.
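
Once the bootstrap has finished, you can optionally check that Postfix is listening for SMTP connections on this node; this is a minimal check using the standard ss utility, and port 25 should appear among the listening sockets.

# ss -ltn | grep ':25'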

Values used in the next steps

  • MTA_IP the IP address of this node

SRV4: Proxy

This node features the Proxy, and all the *-ui packages (i.e., the front-end packages for Carbonio Files and the Carbonio Admin Panel) will be installed here.

  1. Install packages

    On Ubuntu:

    # apt install service-discover-agent carbonio-proxy \
      carbonio-webui carbonio-files-ui \
      carbonio-admin-ui carbonio-admin-console-ui

    On RHEL 8:

    # dnf install service-discover-agent carbonio-proxy \
      carbonio-webui carbonio-files-ui \
      carbonio-admin-ui carbonio-admin-console-ui
    
  2. Bootstrap Carbonio

    # carbonio-bootstrap
    

    In the bootstrap menu, use SRV2_hostname, LDAP_PWD, and NGINX_PWD in the following items to successfully complete the bootstrap.

    • Ldap master host: SRV2_hostname

    • Ldap Admin password: LDAP_PWD

    • Bind password for nginx ldap user: NGINX_PWD

  3. Run Carbonio Mesh setup using MESH_SECRET

    # service-discover setup-wizard
    

    Since this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be automatically downloaded.

  4. Complete Carbonio Mesh setup

    # pending-setups -a
    

    Hint

    The secret needed to run the above command is stored in file /var/lib/service-discover/password which is accessible only by the root user.

  5. Enable Memcached access by running the following commands as the zextras user:

    zextras$ carbonio prov ms $(zmhostname) zimbraMemcachedBindAddress $(hostname -i)
    zextras$ zmmemcachedctl restart
    zextras$ zmproxyctl restart
    

    Warning

    Since Memcached does not support authentication, make sure that the Memcached port (11211) is accessible only from internal, trusted networks.
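
A quick, optional way to check on which address Memcached is listening, and hence whether it is exposed beyond the trusted networks, is to list the listening sockets; ss is assumed to be available on the node.

# ss -ltn | grep 11211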

Values used in the next steps

  • SRV4_IP the IP address of the node

SRV5: AppServer, Files and Docs

On this node, first install all the required packages for Carbonio Files, then configure the various services needed.

On Ubuntu, install all the packages with a single command

# apt install service-discover-agent carbonio-appserver \
  carbonio-storages-ce carbonio-user-management \
  carbonio-files-ce carbonio-docs-connector-ce \
  carbonio-docs-editor

On RHEL 8, install the packages with the following commands, making sure to respect the order of installation.

# dnf install service-discover-agent carbonio-appserver
# dnf install carbonio-storages-ce carbonio-user-management
# dnf install carbonio-files-ce carbonio-docs-connector-ce
# dnf install carbonio-docs-editor

Execute the following tasks.

  1. Bootstrap Carbonio

    # carbonio-bootstrap
    

    In the bootstrap menu, use SRV2_hostname, LDAP_PWD, and NGINX_PWD in the following items to successfully complete the bootstrap.

    • Ldap master host: SRV2_hostname

    • Ldap Admin password: LDAP_PWD

    • Bind password for nginx ldap user: NGINX_PWD

  2. Run Carbonio Mesh setup using MESH_SECRET

    # service-discover setup-wizard
    

    Since this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be automatically downloaded.

  3. Complete Carbonio Mesh setup

    # pending-setups -a
    

    Hint

    The secret needed to run the above command is stored in file /var/lib/service-discover/password which is accessible only by the root user.

SRV6: AppServer, Preview and Logger

On this node we show how to install Carbonio Preview and the User Management; later in the procedure, this node will also act as the Log Server (see Centralised Logging Configuration).

Hint

We suggest that Preview and the Carbonio Docs-related packages be installed on different physical nodes.

First, install all the necessary packages.

On Ubuntu:

# apt install service-discover-agent carbonio-appserver \
  carbonio-user-management carbonio-preview-ce

On RHEL 8, make sure to respect the order of installation:

# dnf install service-discover-agent carbonio-appserver
# dnf install carbonio-user-management carbonio-preview-ce

Execute the following tasks.

  1. Bootstrap Carbonio

    # carbonio-bootstrap
    

    In the bootstrap menu, use SRV2_hostname and LDAP_PWD in the following items to successfully complete the bootstrap.

    • Ldap master host: SRV2_hostname

    • Ldap Admin password: LDAP_PWD

  2. Run Carbonio Mesh setup using MESH_SECRET

    # service-discover setup-wizard
    

    Since this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be automatically downloaded.

  3. Complete Carbonio Mesh setup

    # pending-setups -a
    

    Hint

    The secret needed to run the above command is stored in file /var/lib/service-discover/password which is accessible only by the root user.

  4. Let Carbonio Preview use Memcached. Edit file /etc/carbonio/preview/config.ini and search for section #Nginx Lookup servers.

    1  nginx_lookup_server_full_path_urls = https://172.16.0.16:7072
    2  memcached_server_full_path_urls = 172.16.0.14:11211
    

    Make sure that:

    • in line 1, the protocol is https and the IP address is the address of one AppServer; for simplicity, we use the current node’s IP address (172.16.0.16)

    • in line 1, also make sure to specify the port used by Preview, 7072

    • in line 2, SRV4_IP (172.16.0.14) is written, to allow this node to access Memcached, which is installed on the Proxy node

  5. Restart the Carbonio Preview process

    # systemctl restart carbonio-preview
    # systemctl restart carbonio-preview-sidecar
    
  6. As the last task, restart the mailbox process as the zextras user

    zextras$ zmcontrol stop
    zextras$ zmcontrol start
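
As an optional sanity check after the restarts, you can confirm that the values edited in step 4 are in place and that the Preview services are running; this is a minimal sketch to be run as the root user.

# grep -E 'nginx_lookup_server_full_path_urls|memcached_server_full_path_urls' /etc/carbonio/preview/config.ini
# systemctl status carbonio-preview carbonio-preview-sidecar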
    

Values used in the next steps

  • SRV6_hostname this node’s hostname, which can be retrieved using the command su - zextras -c "carbonio prov gas service-discover"

Installation Complete

At this point installation is complete and you can start using Carbonio CE and access its graphic interface as explained in section Access to the Web Interface.

If you need to access the administration interface, you need to create a system user with administrative access, a task explained in Create System User below.

Centralised Logging Configuration

The log system used by Carbonio CE is rsyslog, which supports a centralised setup: all log files produced by Carbonio CE can be sent to a single host (the “Log Server”) that is appropriately configured to receive them. This is particularly useful in a Multi-Server installation.

In the instructions below, we elect SRV6 as the Log Server.

Centralised Logging Setup

On SRV6, open the file /etc/rsyslog.conf, find the following lines, and uncomment them (i.e., remove the # character at the beginning of the line).

$ModLoad imudp
$UDPServerRun 514

$ModLoad imtcp
$TCPServerRun 514

Then, restart the rsyslog service.

# systemctl restart rsyslog
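
Before proceeding, you can check that rsyslog is now listening on port 514 for both TCP and UDP:

# ss -ltnu | grep 514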

Finally, specify the host server that will receive the logs. Since this is the SRV6 node, we use SRV6_hostname.

zextras$ carbonio prov mcf zimbraLogHostname SRV6_hostname
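
To double-check that the attribute has been stored, you can read it back with the corresponding get command:

zextras$ carbonio prov gcf zimbraLogHostname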

Note

Since zimbraLogHostname is a global attribute, this command must be run only once on one node.

Other Nodes Setup

Once the Log Server node has properly been initialised, on all other nodes, execute

# /opt/zextras/libexec/zmsyslogsetup  && service rsyslog restart
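
To test the setup end to end, you can emit a test message from any node and look for it on the Log Server; this is only a sketch, since the file the message ends up in (if it is forwarded at all) depends on which facilities your rsyslog rules forward.

# logger -p local0.info -t carbonio-test "centralised logging test"

Then, on SRV6, search for the carbonio-test tag in the files under /var/log/.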

Install a Directory Server Replica

In this section we explain how to install a Directory Server Replica, i.e., a second instance of a Directory Server in a Master/Slave setup, which proves useful whenever the load on the Directory Server is continuously high.

Indeed, in this setup the Master Directory Server will remain authoritative for storing and managing both user information and server configuration, but will delegate to the Replica all the queries coming from the other infrastructure nodes. Therefore, whenever some data is updated on the Master, it is immediately copied to the Slave and becomes available for queries. The bottom line is that the two servers split their tasks, thus reducing the load on the main instance.

Preliminaries

Before attempting to install a Directory Server Replica, please read carefully the whole procedure in this page and make sure the following requirements are satisfied.

In the remainder, we show how to install one Replica on a dedicated node, but it is also possible to install it on an existing node of the cluster.

  • A Multi-Server Carbonio CE is already operating correctly

  • A new node is available, on which to install the Replica, which satisfies the Multi Server Requirements and on which the Preliminary Tasks have already been executed. We will call this node SRV7.

    Note

    In case you plan to install the Replica on an existing node, execute all commands on that node instead of on SRV7.

  • Except for the one in the Installation section, all commands must be executed as the zextras user

  • Give the new node a meaningful name/FQDN. We will use ds-replica.example.com whenever necessary. Remember to replace it with the name you give.

  • Have CLI access to the Main and Replica Directory Servers, as you need to execute commands on both servers

Installation

The installation requires executing this command on SRV7.

# apt install carbonio-directory-server service-discover-agent
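
On RHEL 8, assuming the package names are the same as on Ubuntu (as they are for the other nodes of this scenario), the equivalent command would be

# dnf install carbonio-directory-server service-discover-agent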

Note

As an alternative, you may install the package service-discover-server if you plan to have multiple Carbonio Mesh servers. In this case, however, please refer to section Set up Multiple Carbonio Mesh Servers for detailed directions.

Configuration

Configuration of the Replica Directory server requires a few steps.

Step 1: Activate replica

On SRV2 activate the replica by executing

$ /opt/zextras/libexec/zmldapenablereplica

Step 2: Retrieve passwords from SRV2

Then, retrieve a few passwords that you will need on the Replica to configure the connection and access to SRV2

$ zmlocalconfig -s zimbra_ldap_password
$ zmlocalconfig -s ldap_replication_password
$ zmlocalconfig -s ldap_postfix_password
$ zmlocalconfig -s ldap_amavis_password
$ zmlocalconfig -s ldap_nginx_password

Note

By default, these passwords are the same and coincide with zimbra_ldap_password. If you did not change them, use the same password in the next step.

Step 3: Bootstrap Carbonio CE on Replica

After the previous command has completed successfully, log in to SRV7 and bootstrap Carbonio CE. You will need to configure a number of options, so make sure you have all of them at hand.

$ carbonio-bootstrap

Step 4: Configure Replica

You will be asked to properly configure a couple of options in the Common configuration and Ldap configuration menus. In the first menu, provide these values:

Common configuration

   1) Hostname: The hostname of the Directory Server Replica.
   2) Ldap master host: The hostname of SRV2
   3) Ldap port: 389
   4) Ldap Admin password: The zimbra_ldap_password

Exit this menu and go to the second:

Ldap configuration

   1) Status: Enabled
   2) Create Domain: do not change
   3) Domain to create: example.com
   4) Ldap root password: The zimbra_ldap_password
   5) Ldap replication password: The ldap_replication_password
   6) Ldap postfix password: The ldap_postfix_password
   7) Ldap amavis password: The ldap_amavis_password
   8) Ldap nginx password: The ldap_nginx_password

Hint

Remember to always use the zimbra_ldap_password in case you did not change the other passwords.

Step 5: Complete the installation

You can now continue the bootstrap process; after a while, the installation will be successfully completed and, immediately after, the content of the Directory Server on SRV2 will be copied over to the Replica on SRV7.

Testing

Once the installation has completed successfully, you can quickly test whether the Replica works correctly as follows.

  1. Log in to the Master (SRV2) and create a test user with a password:

    $ carbonio prov ca john.doe@example.com MySecretPassword
    
  2. Log in to the Replica and check that all accounts have been copied over from the Master:

    $ carbonio prov -l gaa
    

    Among the results, the account john.doe@example.com must be present.

    Hint

    You can pipe the previous command to grep to check only the new account (or any given account): carbonio prov -l gaa | grep "john.doe@example.com"

Set up Replica to Answer Queries

It is now time to configure the Replica to answer queries in place of the Master, which requires reconfiguring the value of the ldap_url parameter so that it points to the Replica. You can achieve this setup with a few commands on the Master.

Values used in this step

You need to keep at hand the following data

  • SRV2_hostname: the hostname on which the Directory Server Master is installed

  • SRV7_hostname: the hostname on which the Directory Server Replica is installed, e.g., ds-replica.example.com

Hint

To retrieve the hostnames, use the hostname command on the Master and Replica nodes.

  1. Stop all Carbonio CE services

    $ zmcontrol stop
    
  2. Update the value of ldap_url

    $ zmlocalconfig -e \
      ldap_url="ldap://SRV7_hostname ldap://SRV2_hostname"
    

    If you plan to install multiple Replica Directory Servers, you can install all of them and then execute the above-mentioned command once for all Replicas, making sure that their hostnames precede the Master’s hostname. For example, provided you installed two Replica Directory Servers on SRV4 and SRV5, execute:

    $ zmlocalconfig -e \
      ldap_url="ldap://SRV7_hostname ldap://SRV4_hostname \
      ldap://SRV5_hostname ldap://SRV2_hostname"
    

    The Replica instance to query first is the first listed in the command.
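
You can read the value back to confirm the change and, since the services were stopped in step 1, remember to start them again.

$ zmlocalconfig -s ldap_url
$ zmcontrol start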

Uninstall a Replica

To remove a Replica, you need to carry out two tasks:

  1. On each node of the Multi-Server installation, execute the following command, so that only the Master is used for queries

    $ zmlocalconfig -e ldap_url="ldap://SRV2_hostname"
    

    In case you had configured multiple Replicas, the above command will redirect all queries to the Master. If you want to remove only some of the Replicas, simply omit their hostnames from the list. For example, to remove SRV5, use the command

    $ zmlocalconfig -e \
      ldap_url="ldap://SRV7_hostname ldap://SRV4_hostname \
      ldap://SRV2_hostname"
    
  2. Execute, only on the MTA node, the command

    $ /opt/zextras/libexec/zmmtainit
    

    This command will update the Postfix configuration with the new ldap_url.