Multi-Server Installation
This section describes a Carbonio Multi-Server installation, that is, a Carbonio installation spread across multiple nodes, each playing one or more Roles.
Rather than giving fixed installation instructions that pin each functionality to a specific node, we present an installation scenario that can be adapted to the needs of Carbonio users who deploy a different number of nodes. For this reason, we introduce the notion of Role: a Carbonio functionality that is considered atomic and consists of one or more packages.
A Role can be installed on any node of the cluster, therefore the scenario we describe below can be modified at will by installing a Role on a different node (or even on a dedicated node).
Six Nodes Scenario
Carbonio Multi-Server is the only supported installation method in a production environment, especially for large production systems: it scales better as the infrastructure grows, and the communication across all nodes is set up and secured automatically by Carbonio Mesh, which also adds fault detection and dynamic routing between components of the infrastructure.
In the suggested scenario we will set up a Carbonio Multi-Server environment, composed of six nodes (which we will denote as SRV1, …, SRV6) as follows:
SRV1 features a dedicated installation of Postgres
SRV2 represents the core infrastructure of Carbonio and installs Directory Server, Carbonio Mesh, and DB connection
SRV3 is equipped with MTA, the mail server
SRV4 hosts the Proxy, which allows web access to all components, and Carbonio VideoServer, which provides the video-conference features and includes the ability to record meetings
SRV5 is an AppServer which installs Carbonio Files & Carbonio Docs, that provide sharing and collaborative editing of documents
SRV6 is another AppServer and hosts Carbonio Preview, which generates snippets and thumbnails of documents, the User Management, and some advanced services
In our scenario, we start the Carbonio installation from six nodes equipped with Ubuntu 20.04 LTS. The instructions are also valid for six nodes installed with RHEL 8: the only difference is the command for package installation, while the commands to configure the nodes are the same.
We also assume that the IP address of each node is 172.16.0.1X, with X the number of the node. In other words, IPs will be in the range 172.16.0.11 (SRV1) to 172.16.0.16 (SRV6, the second AppServer). These values will be used in configuration files that need to be manually modified during the installation or upgrade procedures.
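The addressing convention above can be sketched as a small helper; the SRV names and the 172.16.0.1X scheme follow the scenario described in this section:

```shell
# Derive each node's IP from the scenario's 172.16.0.1X convention,
# where X is the node number (SRV1 .. SRV6).
node_ip() {
  echo "172.16.0.1$1"
}

for n in 1 2 3 4 5 6; do
  echo "SRV$n -> $(node_ip "$n")"
done
```

This is only a mnemonic for the scenario; substitute your real addressing plan where the documentation mentions these IPs.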
In most Multi-Server scenarios, it proves useful to install a Replica Directory Server in a Master/Slave setup for improved reliability and load-balancing. We describe in a dedicated section the procedure to install the Replica on a dedicated node, SRV7 (which must be equipped with the same OS as the other six). However, you can install the Replica on any node other than SRV2, following the same procedure.
Requirements
Each node of a Multi-Server must satisfy the Single-Server's Software Requirements; the System Requirements also apply, but take into account the following advice:
By dividing the services, and therefore the load, among more nodes, fewer resources per node are needed. We nonetheless recommend at least 4GB of RAM on each node.
Additional Requirements
On Ubuntu, no additional requirement is necessary.
On RHEL 8, the following additional requirements are needed:
- An active subscription (you must be able to fetch from BaseOS and the other main repositories):
# subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms
- The CodeReady repository enabled:
# subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
Some more remarks:
- Acquaintance with the CLI is necessary. All carbonio commands must be executed as the zextras user (these commands will feature a zextras$ prompt), while all other commands must be issued as the root user, unless stated otherwise.
- Give meaningful names to the nodes. For example, call them proxy.example.com, mta.example.com, and so on. Replace example.com with your domain name.
- During the installation procedure, you will need to write down some configuration options and their values, because they will be needed in the setup of the next nodes. This information is summarised at the end of each node's installation: copy it to a safe place and keep it at hand until the end of the installation. Examples of values include the IP address (public or private) of a node or the password of a database user.
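The role-based naming advice can be sketched as follows; the role names and example.com are illustrative assumptions, not values mandated by Carbonio:

```shell
# Hypothetical role-based hostnames for the six nodes, combined with the
# scenario's 172.16.0.1X addresses; prints /etc/hosts-style entries.
hosts_entries() {
  i=1
  for role in postgres directory mta proxy files preview; do
    echo "172.16.0.1$i ${role}.example.com ${role}"
    i=$((i + 1))
  done
}
hosts_entries
```

Adapt the names and addresses to your own domain before adding anything to a real /etc/hosts file.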
Preliminary Tasks
Before starting with the actual installation, carry out the following two tasks on each of the six nodes.
Task 1: Configure repositories
The following is important information concerning the Carbonio package repository and its content. Please read it carefully, as it might save you some time in case of installation or upgrade problems and help you provide more precise bug reports.
The repository hosts simultaneously packages of two channels:
- Release Candidate (RC). Packages in this channel are made available as soon as they are built by Zextras development and tested by the QA team. While they are stable, they are not suitable for use in a production environment, because they might still contain bugs, new functionalities that have not yet reached production-level quality, or broken dependencies.
Usually these problems are fixed within days or even hours, so just try again before reporting a problem.
Use this channel and its packages for testing (or demo) installations only.
- RELEASE. This channel contains only packages that are stable and suitable for a production environment.
Hint
When reporting a problem or opening a ticket, remember to always mention the channel you are using, as this helps us analyse the problem quickly.
FAQ
-
I want to help testing things, which channel should I use?
RC channel.
-
I need to install Carbonio in a production environment; which channel should I use?
RELEASE channel.
-
How will we be informed about new RC packages?
There will be no notification, because the RC channel is updated continuously.
-
How will we be informed about an upcoming new release?
A red message on the homepage of the documentation site will notify you of the release of a new stable version. You may also be informed through other means of communication such as email and social media.
-
Could there be bugs in the packages downloaded from the RC channel?
Yes, RC versions have a risk of containing bugs (which in some cases could lead to data loss). If you find a bug in an RC package we kindly ask you to report it on the appropriate community page. We will try to resolve it as quickly as possible.
Task 2: Setting Hostname
Carbonio needs a valid FQDN as hostname and a valid entry in the /etc/hosts file. To configure them, execute these two commands. First, set the hostname
# hostnamectl set-hostname mail.example.com
then update /etc/hosts with IP and hostname
# echo "172.16.0.10 mail.example.com mail" >> /etc/hosts
You can also simply print the current IP and hostname to check them:
# echo "$(hostname -I) $(hostname -f)"
Hint
Replace 172.16.0.10 with the actual management IP to be assigned to the server.
It is mandatory to configure the hostname, especially on the Directory-Server node, otherwise the services will not be able to bind to the correct address, leading to a disruption in Carbonio's functionality.
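Given how important a correct FQDN is here, a rough sanity check can be sketched as below; this is a minimal heuristic (at least two dot-separated labels, no spaces or empty labels), not a full RFC-level validation:

```shell
# Rough check that a hostname looks like an FQDN before passing it to
# hostnamectl: it must contain a dot, and must not contain spaces,
# empty labels, or leading/trailing dots.
is_fqdn() {
  case "$1" in
    *' '*|*..*|.*|*.|'') return 1 ;;
    *.*) return 0 ;;
    *) return 1 ;;
  esac
}

is_fqdn "mail.example.com" && echo "mail.example.com looks valid"
```

A plain short name such as mail would be rejected, which matches the requirement that Carbonio nodes use a fully qualified name.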
RHEL 8-only Preliminary Tasks
A few tasks are required for the installation of Carbonio on RHEL 8 systems; they concern the configuration of SELinux and the firewall.
SELinux and Firewall
- SELinux
Must be set to disabled or permissive in file /etc/selinux/config. You can check the current profile using the command
# sestatus
- Firewall
All the ports needed by Carbonio must be open on the firewall, or the firewall must be disabled. To disable the firewall, issue the commands
# systemctl stop firewalld.service
# systemctl disable firewalld.service
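The SELinux change above can be sketched with sed; here a temporary copy stands in for /etc/selinux/config, since on a real node you would edit the actual file (and reboot for the new profile to take full effect):

```shell
# Sketch: flip SELINUX=enforcing to permissive in an selinux-style
# config file. Works on a file path passed as argument.
set_selinux_permissive() {
  sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$1"
}

cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
set_selinux_permissive "$cfg"
grep '^SELINUX=' "$cfg"   # now shows SELINUX=permissive
rm -f "$cfg"
```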
Node Installation
The installation procedure follows the suggested order of nodes as described in the scenario. A few remarks:
-
It is assumed that the Postgres node is not a “real” part of the infrastructure, in the sense that it can also be an existent server that is configured to communicate correctly with Carbonio (configuration instructions are part of the SRV1 installation).
Note
In our scenario, we install Postgres and configure it from scratch (SRV1).
The first node to be installed is the one that will feature the Directory Server role (SRV2)
The next server to be installed is the MTA one (SRV3)
The other nodes can be installed in any order; you can skip instructions for any node or role that you do not plan to install
While the overall procedure is the same for both Ubuntu and RHEL 8, the actual commands and file paths may differ on the two operating systems, so pay attention that you execute the correct command on the correct file
When the installation process has successfully finished, you can access Carbonio's GUI using a browser: directions can be found in Section Access to the Web Interface.
SRV1: Postgres
The first node is dedicated to PostgreSQL and will host all the databases required by Carbonio.
On Ubuntu, install PostgreSQL 12 directly:
# apt install postgresql-12
On RHEL 8, the first step is to add the dedicated PostgreSQL repository
# yum -y install https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
Then, make sure that PostgreSQL 12 is installed, by running the commands
# dnf -qy module disable postgresql
# dnf -y install postgresql12 postgresql12-server
Finally, initialise and enable PostgreSQL
# /usr/pgsql-12/bin/postgresql-12-setup initdb
# systemctl enable --now postgresql-12
Carbonio relies on a number of databases to store and keep track of all the objects it needs to manage. The main database can be configured in two steps, but if you are running Carbonio on RHEL 8, please first configure Postgres according to the guidelines.
The first step is to create a role with administrative rights and an associated password.
# su - postgres -c "psql --command=\"CREATE ROLE carbonio_adm WITH LOGIN SUPERUSER encrypted password 'DB_ADM_PWD';\""
Remember to replace the password with a robust password of your choice and store it in a safe place (preferably using a password manager), as you will need it in the remainder of the procedure and possibly in the future. This password will be denoted as DB_ADM_PWD.
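One way to obtain a robust DB_ADM_PWD is to draw it from /dev/urandom; the 24-character alphanumeric format below is an example choice, not a Carbonio requirement:

```shell
# Sketch: generate a 24-character alphanumeric password for DB_ADM_PWD
# from /dev/urandom; store the result in a password manager.
gen_password() {
  LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24
}

DB_ADM_PWD=$(gen_password)
echo "DB_ADM_PWD is ${#DB_ADM_PWD} characters long"
```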
The second step is to create the database.
# su - postgres -c "psql --command=\"CREATE DATABASE carbonio_adm owner carbonio_adm;\""
Finally, allow the other nodes to access the databases that will be stored on this node by running these commands.
On Ubuntu:
# su - postgres -c "psql --command=\"ALTER SYSTEM SET listen_addresses TO '*';\""
# su - postgres -c "psql --command=\"ALTER SYSTEM SET port TO '5432';\""
# echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/12/main/pg_hba.conf
# systemctl restart postgresql
On RHEL 8:
# su - postgres -c "psql --command=\"ALTER SYSTEM SET listen_addresses TO '*';\""
# su - postgres -c "psql --command=\"ALTER SYSTEM SET port TO '5432';\""
# echo "host all all 0.0.0.0/0 md5" >> /var/lib/pgsql/12/data/pg_hba.conf
# systemctl restart postgresql-12
Hint
You may replace the 0.0.0.0/0 network with the network in which the cluster is installed, to prevent unwanted access.
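Following that hint, the pg_hba.conf rule can be restricted to the cluster subnet; 172.16.0.0/24 below matches the scenario's addressing and is an assumption you should adapt to your network:

```shell
# Sketch: build a pg_hba.conf rule limited to the cluster network
# instead of the wide-open 0.0.0.0/0.
pg_hba_rule() {
  # $1 = CIDR network allowed to reach PostgreSQL
  echo "host all all $1 md5"
}

pg_hba_rule "172.16.0.0/24"
# On SRV1, append the printed line to pg_hba.conf and restart PostgreSQL.
```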
Values used in the next steps
DB_ADM_PWD the password of the carbonio_adm database role
SRV1_IP the IP address of the node
SRV2: Directory Server, DB connection, and Carbonio Mesh Server
The installation of this server encompasses a number of tasks, as it will feature several services crucial for the correct working of Carbonio: the Directory Server, the connection with the PostgreSQL node using Pgpool-II, and Carbonio Mesh.
Note
It is possible to install multiple instances of the service-discover service provided by Carbonio Mesh. Please refer to section Set up Multiple Carbonio Mesh Servers for details.
-
Install the following packages from main repository.
# apt install service-discover-server \
    carbonio-directory-server carbonio-files-db \
    carbonio-mailbox-db carbonio-docs-connector-db
# dnf install service-discover-server \
    carbonio-directory-server carbonio-files-db \
    carbonio-mailbox-db carbonio-docs-connector-db
-
Install pgpool
# apt install pgpool2
# dnf install https://www.pgpool.net/yum/rpms/4.3/redhat/rhel-8-x86_64/pgpool-II-release-4.3-1.noarch.rpm
-
Configure Pgpool-II to work with the node on which PostgreSQL runs (SRV1), using the following command. Replace SRV1_IP with the value saved in the previous task.
# echo "backend_clustering_mode = 'raw'
port = 5432
backend_hostname0 = 'SRV1_IP' # eg 192.168.1.100
backend_port0 = 5432" > /etc/pgpool2/pgpool.conf
# echo "backend_clustering_mode = 'raw'
port = 5432
backend_hostname0 = 'SRV1_IP' # eg 192.168.1.100
backend_port0 = 5432" > /etc/pgpool-II/pgpool.conf
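The same configuration write can be sketched with printf, one directive per line; a temporary file stands in for /etc/pgpool2/pgpool.conf (Ubuntu) or /etc/pgpool-II/pgpool.conf (RHEL 8), and 172.16.0.11 is the scenario's SRV1_IP:

```shell
# Sketch: write the minimal Pgpool-II configuration and verify the
# backend host afterwards.
write_pgpool_conf() {
  # $1 = destination file, $2 = PostgreSQL node IP (SRV1_IP)
  printf "%s\n" \
    "backend_clustering_mode = 'raw'" \
    "port = 5432" \
    "backend_hostname0 = '$2'" \
    "backend_port0 = 5432" > "$1"
}

conf=$(mktemp)
write_pgpool_conf "$conf" "172.16.0.11"
grep backend_hostname0 "$conf"
rm -f "$conf"
```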
-
Restart the service using this command.
# systemctl restart pgpool2.service
# systemctl restart pgpool.service
-
Bootstrap Carbonio
# carbonio-bootstrap
The bootstrap command will execute a number of tasks and will set up the node. At the end, you will be prompted with a menu and, if everything is already configured, you only need to press y to confirm.
-
Setup Carbonio Mesh
Carbonio Mesh is required to allow communication between Carbonio and its components. The configuration is interactively generated by command
# service-discover setup-wizard
This command will:
- ask for the IP address and netmask
- ask for the Carbonio Mesh secret, which is used for setups, management, and to access the administration GUI. See section Carbonio Mesh Administration Interface for more information.
This password will be denoted as MESH_SECRET throughout the documentation.
Hint
We suggest using a robust password which is at least 16 characters long, includes lowercase and uppercase letters, numbers, and special characters, and storing it in a password manager.
In case the password is lost or the credential file becomes corrupted and unusable, you can Regenerate Carbonio Mesh Secret.
- store the setup in file /etc/zextras/service-discover/cluster-credentials.tar.gpg
To complete Carbonio Mesh installation, run
# pending-setups -a
Hint
The secret needed to run the above command is stored in file /var/lib/service-discover/password, which is accessible only by the root user.
-
Bootstrap the Carbonio databases, using the Postgres role created on SRV1 and the password defined in the previous step.
-
Carbonio Advanced
# PGPASSWORD=DB_ADM_PWD carbonio-mailbox-db-bootstrap carbonio_adm 127.0.0.1
-
Carbonio Files
# PGPASSWORD=DB_ADM_PWD carbonio-files-db-bootstrap carbonio_adm 127.0.0.1
-
Carbonio Docs
# PGPASSWORD=DB_ADM_PWD carbonio-docs-connector-db-bootstrap carbonio_adm 127.0.0.1
-
Values used in the next steps
SRV2_hostname this node’s hostname, which can be retrieved using the command su - zextras -c "carbonio prov gas service-discover"
MESH_SECRET the Carbonio Mesh password
-
LDAP_PWD the LDAP bind password for the root user and applications, retrieved with command:
# zmlocalconfig -s zimbra_ldap_password
-
AMAVIS_PWD the password used by Carbonio for the Amavis service, retrieved with command
# zmlocalconfig -s ldap_amavis_password
-
POSTFIX_PWD the password used by Carbonio for the Postfix service, retrieved with command
# zmlocalconfig -s ldap_postfix_password
-
NGINX_PWD the password used by Carbonio for the NGINX service, retrieved with command
# zmlocalconfig -s ldap_nginx_password
Note
By default, the LDAP_PWD, AMAVIS_PWD, POSTFIX_PWD, and NGINX_PWD bind passwords all have the same value.
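When noting down these values, it can help to strip the key prefix from zmlocalconfig output, which prints lines of the form "key = value". The helper below is a sketch; the sample line stands in for real zmlocalconfig output:

```shell
# Extract the bare value from a "key = value" line as printed by
# zmlocalconfig -s, convenient when saving LDAP_PWD, AMAVIS_PWD,
# POSTFIX_PWD, and NGINX_PWD for the next nodes.
lc_value() {
  sed 's/^[^=]*= //'
}

echo 'zimbra_ldap_password = s3cretValue' | lc_value
```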
SRV3: MTA
On this node we install the MTA, the software that actually sends and receives emails.
# apt install service-discover-agent carbonio-mta
# dnf install service-discover-agent carbonio-mta
The following tasks must be executed to configure the MTA.
-
Bootstrap Carbonio
# carbonio-bootstrap
In the bootstrap menu, use SRV2_hostname, LDAP_PWD, POSTFIX_PWD, and AMAVIS_PWD in the following items to complete the bootstrap successfully.
Ldap master host: SRV2_hostname
Ldap Admin password: LDAP_PWD
Bind password for postfix ldap user: POSTFIX_PWD
Bind password for amavis ldap user: AMAVIS_PWD
-
Run Carbonio Mesh setup using MESH_SECRET
# service-discover setup-wizard
Since this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be automatically downloaded.
-
Complete Carbonio Mesh setup
# pending-setups -a
Hint
The secret needed to run the above command is stored in file /var/lib/service-discover/password, which is accessible only by the root user.
Values used in the next steps
MTA_IP the IP address of this node
SRV4: Proxy and Carbonio VideoServer
This node features the Proxy, the *-ui files (i.e., the front-end packages for Carbonio Chats, Carbonio Admin Panel, and Carbonio Files), and the packages related to Carbonio VideoServer. Since Proxy and Carbonio VideoServer are different Roles, we separate their installation and setup, so they can easily be installed on different nodes.
Proxy Installation and Node Setup
The proxy functionality requires no configuration, so we can just install the packages and configure the node only.
-
Install packages
# apt install service-discover-agent carbonio-proxy \
    carbonio-webui carbonio-files-ui carbonio-chats-ui \
    carbonio-admin-ui carbonio-admin-console-ui
# dnf install service-discover-agent carbonio-proxy \
    carbonio-webui carbonio-files-ui carbonio-chats-ui \
    carbonio-admin-ui carbonio-admin-console-ui
-
Bootstrap Carbonio
# carbonio-bootstrap
In the bootstrap menu, use SRV2_hostname, LDAP_PWD, and NGINX_PWD in the following items to complete the bootstrap successfully.
Ldap master host: SRV2_hostname
Ldap Admin password: LDAP_PWD
Bind password for nginx ldap user: NGINX_PWD
-
Run Carbonio Mesh setup using MESH_SECRET
# service-discover setup-wizard
Since this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be automatically downloaded.
-
Complete Carbonio Mesh setup
# pending-setups -a
Hint
The secret needed to run the above command is stored in file /var/lib/service-discover/password, which is accessible only by the root user.
Carbonio VideoServer and Video Recording
It is possible to install the Carbonio VideoServer without the Video Recording feature. If you wish to do so, follow the procedure below, but skip the last step, labelled [Video Recording].
-
Install the Carbonio VideoServer package. On Ubuntu:
# apt install carbonio-videoserver
On RHEL 8, before starting the procedure, install Fedora's EPEL repository.
# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
Then, install the package.
# dnf install carbonio-videoserver
After the installation, make sure that the Carbonio VideoServer public IP address (i.e., the one that will accept incoming connections to the Carbonio VideoServer) is present in the configuration file
/etc/janus/janus.jcfg
and add it if missing. -
Enable and start the service with the commands
# systemctl enable videoserver.service
# systemctl start videoserver.service
-
Enable Memcached access using the following commands as the zextras user:
zextras$ carbonio prov ms $(zmhostname) zimbraMemcachedBindAddress $(hostname -i)
zextras$ zmmemcachedctl restart
zextras$ zmproxyctl restart
Warning
Since Memcached does not support authentication, make sure that the Memcached port (11211) is accessible only from internal, trusted networks.
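A quick way to spot a dangerously exposed listener is to filter socket listings for 11211 bound to all interfaces. The sample lines below mimic `ss -ltn` output and are assumptions for illustration; on the real Proxy node you would feed the actual command output through the filter and expect no match:

```shell
# Sketch: flag a Memcached listener bound to all interfaces (0.0.0.0 or *).
wide_open_memcached() {
  grep -E '(\*|0\.0\.0\.0):11211' || true
}

printf '%s\n' \
  'LISTEN 0 1024 172.16.0.14:11211 0.0.0.0:*' \
  'LISTEN 0 128  0.0.0.0:22       0.0.0.0:*' | wide_open_memcached
# No output here: 11211 is bound to the internal address only.
```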
[Video Recording] To implement this feature, install the package
# apt install carbonio-videoserver-recorder
# dnf install carbonio-videoserver-recorder
The video-recording feature is enabled by default and does not require configuration on this node, but on the next one. Indeed, it requires a node which installs the carbonio-appserver packages. The recorded sessions will be stored on that node, in directory /var/lib/videorecorder/. Make sure that the directory has sufficient free space, otherwise recorded videos cannot be stored.
Hint
You can mount a dedicated disk or partition on that location and keep it monitored for space usage.
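That monitoring hint can be sketched with a df-based check; /tmp stands in here for /var/lib/videorecorder/, and the 1024 MiB threshold is an arbitrary example value:

```shell
# Sketch: warn when the partition holding the recordings runs low on
# free space (threshold in MiB).
free_mib() {
  # df -Pk prints KiB; column 4 is available space on the data line.
  df -Pk "$1" | awk 'NR==2 { print int($4 / 1024) }'
}

if [ "$(free_mib /tmp)" -lt 1024 ]; then
  echo "warning: less than 1 GiB free for recorded sessions"
else
  echo "enough free space for recordings"
fi
```

Such a check could run from cron against the real recording directory.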
Values used in the next steps
VS_IP the local IP address of this node
VS_PWD the password of the Carbonio VideoServer, which can be retrieved by running, as the root user, the command grep -i -e nat_1_1 -e api_secret /etc/janus/janus.jcfg
SERVLET_PORT the value of the servlet port configuration option saved in file /etc/carbonio/videoserver-recorder/recordingEnv, needed when running the previous command
SRV5: Advanced, AppServer, Files, and Docs
On this node, first install all the required packages for Carbonio Files, then configure the various services needed.
# apt install service-discover-agent carbonio-appserver \
carbonio-advanced carbonio-zal \
carbonio-user-management \
carbonio-files carbonio-docs-connector \
carbonio-docs-editor
On RHEL 8, make sure to respect the order of installation.
# yum install service-discover-agent carbonio-appserver
# yum install carbonio-files
# yum install carbonio-user-management carbonio-advanced carbonio-zal
# yum install carbonio-docs-connector carbonio-docs-editor
Execute the following tasks.
-
Bootstrap Carbonio
# carbonio-bootstrap
In the bootstrap menu, use SRV2_hostname, LDAP_PWD, and NGINX_PWD in the following items to complete the bootstrap successfully.
Ldap master host: SRV2_hostname
Ldap Admin password: LDAP_PWD
Bind password for nginx ldap user: NGINX_PWD
-
Run Carbonio Mesh setup using MESH_SECRET
# service-discover setup-wizard
Since this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be automatically downloaded.
-
Complete Carbonio Mesh setup
# pending-setups -a
Hint
The secret needed to run the above command is stored in file /var/lib/service-discover/password, which is accessible only by the root user.
-
Run the following command as the zextras user to configure the Video Recording, using the VS_IP, SERVLET_PORT, and VS_PWD values noted on SRV4
zextras$ carbonio chats video-server add VS_IP port 8188 \
    servlet_port SERVLET_PORT secret VS_PWD
-
Enable Carbonio VideoServer at COS level, Video Recording, and the possibility for each user to record meetings.
zextras$ carbonio config set cos default teamChatEnabled true
zextras$ carbonio config set global teamVideoServerRecordingEnabled true
zextras$ carbonio config set global teamMeetingRecordingEnabled true
Note
In the commands above, the policy allows every user to record a meeting. It is however possible to enforce this policy at user or COS level, to allow only selected users or members of a COS to record meetings.
-
(optional) Activate the license as the zextras user
zextras$ carbonio core activate-license TOKEN
SRV6: Advanced, AppServer, and Preview
On this node we install the Preview, the User Management, and advanced services.
Hint
We suggest that Preview and the Carbonio Docs-related packages be installed on different physical nodes.
First install all the necessary packages:
# apt install service-discover-agent carbonio-appserver \
carbonio-user-management carbonio-advanced carbonio-zal \
carbonio-preview
On RHEL 8, make sure to respect the order of installation.
# dnf install service-discover-agent carbonio-appserver
# dnf install carbonio-user-management carbonio-advanced carbonio-zal
# dnf install carbonio-preview
Execute the following tasks.
-
Bootstrap Carbonio
# carbonio-bootstrap
In the bootstrap menu, use SRV2_hostname and LDAP_PWD in the following items to complete the bootstrap successfully.
Ldap master host: SRV2_hostname
Ldap Admin password: LDAP_PWD
-
Run Carbonio Mesh setup using MESH_SECRET
# service-discover setup-wizard
Since this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be automatically downloaded.
-
Complete Carbonio Mesh setup
# pending-setups -a
Hint
The secret needed to run the above command is stored in file /var/lib/service-discover/password, which is accessible only by the root user.
-
Let Carbonio Preview use Memcached. Edit file /etc/carbonio/preview/config.ini and search for the section # Nginx Lookup servers:
nginx_lookup_server_full_path_urls = https://172.16.0.16:7072
memcached_server_full_path_urls = 172.16.0.14:11211
Make sure that:
- in the first line, the protocol is https and the IP address is the address of one AppServer; we use the current node's IP address for simplicity
- in the first line, the port used by Preview, 7072, is specified
- in the second line, VS_IP is written, to allow this node's access to Memcached, which is installed on the Proxy node
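Those two checks can be sketched as a small filter; the sample lines mirror the documented config.ini values, and the helper itself is an assumption for illustration, not part of Carbonio:

```shell
# Sketch: verify that the Preview lookup line uses https and port 7072,
# and that the memcached line carries port 11211.
check_preview_cfg() {
  cfg=$(cat)
  echo "$cfg" | grep -q 'nginx_lookup_server_full_path_urls = https://.*:7072' &&
  echo "$cfg" | grep -q 'memcached_server_full_path_urls = .*:11211'
}

printf '%s\n' \
  'nginx_lookup_server_full_path_urls = https://172.16.0.16:7072' \
  'memcached_server_full_path_urls = 172.16.0.14:11211' \
  | check_preview_cfg && echo "Preview configuration looks consistent"
```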
-
Restart the Carbonio Preview processes
# systemctl restart carbonio-preview
# systemctl restart carbonio-preview-sidecar
-
As the last task, restart the mailbox process as the zextras user
zextras$ zmcontrol stop
zextras$ zmcontrol start
Values used in the next steps
SRV6_hostname this node’s hostname, which can be retrieved using the command su - zextras -c "carbonio prov gas service-discover"
Installation Complete
At this point installation is complete and you can start using Carbonio and access its graphic interface as explained in section Access to the Web Interface.
If you need to access the administration interface, you need to create a system user with administrative access, a task explained in Create System User below.
Centralised Logging Configuration
The log system in Carbonio is rsyslog, which supports a centralised setup: in other words, all log files produced by Carbonio can be sent to a single host server (we call it “Log Server”), which is appropriately configured to receive log files. This is particularly useful in a Multi-Server installation.
In the instructions below, we elect SRV6 as the Log Server.
On SRV6, open file /etc/rsyslog.conf, find the following lines, and uncomment them (i.e., remove the # character at the beginning of the line).
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$TCPServerRun 514
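The uncommenting step can be sketched with sed; a temporary copy stands in for the real /etc/rsyslog.conf here:

```shell
# Sketch: strip the leading # from the four rsyslog directives.
uncomment_rsyslog() {
  sed -i -E 's/^#(\$(ModLoad|UDPServerRun|TCPServerRun) .*)/\1/' "$1"
}

conf=$(mktemp)
printf '%s\n' '#$ModLoad imudp' '#$UDPServerRun 514' \
  '#$ModLoad imtcp' '#$TCPServerRun 514' > "$conf"
uncomment_rsyslog "$conf"
grep -c '^\$' "$conf"   # all four directives now start with $
rm -f "$conf"
```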
Then, restart the rsyslog service.
# systemctl restart rsyslog
Finally, specify the host server that will receive the logs. Since it is the SRV6 node, we need SRV6_hostname.
zextras$ carbonio prov mcf zimbraLogHostname SRV6_hostname
Note
Since zimbraLogHostname is a global attribute, this command must be run only once, on one node.
Once the Log Server node has properly been initialised, on all other nodes, execute
# /opt/zextras/libexec/zmsyslogsetup && service rsyslog restart
Install a Directory Server Replica
In this section we explain how to install a Directory Server Replica, i.e., a second instance of a Directory Server in a Master/Slave setup, which proves useful whenever the load on the Directory Server is continuously high.
Indeed, in this setup the Master Directory Server will remain authoritative for storing and managing both user information and server configuration, but will delegate to the Replica all the queries coming from the other infrastructure nodes. Therefore, whenever some data is updated on the Master, it is immediately copied to the Slave and available for queries. The bottom line is that the two servers will split their tasks, thus reducing the load on the main instance.
Preliminaries
Before attempting to install a Directory Server Replica, please read carefully the whole procedure in this page and make sure the following requirements are satisfied.
In the remainder, we show how to install one Replica on a dedicated node, but it is possible to install it also on an existing node in the cluster.
A Multi-Server Carbonio is already operating correctly
-
A new node is available on which to install the Replica; it satisfies the Multi-Server Requirements, and the Preliminary Tasks have already been executed on it. We will call this node SRV7.
Note
In case you plan to install the Replica on an existing node, execute all commands on that node instead of on SRV7.
Except for the one in the Installation section, all commands must be executed as the zextras user
Give the new node a meaningful name/FQDN. We will use ds-replica.example.com whenever necessary. Remember to replace it with the name you give.
Have CLI access to the Master and Replica Directory Servers, as you need to execute commands on both servers
Installation
The installation requires executing this command on SRV7.
# apt install carbonio-directory-server service-discover-agent
Note
As an alternative, you may install the package service-discover-server if you plan to have multiple Carbonio Mesh servers. In this case, however, please refer to section Set up Multiple Carbonio Mesh Servers for detailed directions.
Configuration
Configuration of the Replica Directory server requires a few steps.
Step 1: Activate replica
On SRV2 activate the replica by executing
$ /opt/zextras/libexec/zmldapenablereplica
Step 2: Retrieve passwords from SRV2
Then, retrieve a few passwords that you will need on the Replica to configure the connection and access to SRV2
$ zmlocalconfig -s zimbra_ldap_password
$ zmlocalconfig -s ldap_replication_password
$ zmlocalconfig -s ldap_postfix_password
$ zmlocalconfig -s ldap_amavis_password
$ zmlocalconfig -s ldap_nginx_password
Note
By default, these passwords are the same and coincide with zimbra_ldap_password. If you did not change them, use the same password in the next step.
Step 3: Bootstrap Carbonio on Replica
After the commands have completed successfully, log in to SRV7 and bootstrap Carbonio. You will need to configure a number of options, so make sure to have them all available.
$ carbonio-bootstrap
Step 4: Configure Replica
You will be asked to properly configure a couple of options in the Common configuration and Ldap configuration menus. In the first menu, provide these values:
Common configuration
1) Hostname: The hostname of the Directory Server Replica
2) Ldap master host: The hostname of SRV2
3) Ldap port: 389
4) Ldap Admin password: The zimbra_ldap_password
Exit this menu and go to the second:
Ldap configuration
1) Status: Enabled
2) Create Domain: do not change
3) Domain to create: example.com
4) Ldap root password: The zimbra_ldap_password
5) Ldap replication password: The ldap_replication_password
6) Ldap postfix password: The ldap_postfix_password
7) Ldap amavis password: The ldap_amavis_password
8) Ldap nginx password: The ldap_nginx_password
Hint
Remember to always use the zimbra_ldap_password in case you did not change the other passwords.
Step 5: Complete the installation
You can now continue the bootstrap process; after a while, the installation will be successfully completed and, immediately after, the content of the Directory Server on SRV2 will be copied over to the Replica on SRV7.
Testing
To verify that the Replica works correctly after the installation has completed, make a quick test as follows.
-
Log in to the Master (SRV2) and create a test user with a password:
$ carbonio prov ca john.doe@example.com MySecretPassword
-
Log in to the Replica and check that all accounts have been copied over from the Master:
$ carbonio prov -l gaa
Among the results, john.doe@example.com must be present.
Hint
You can pipe the previous command to grep to check only the new account (or any given account): carbonio prov -l gaa | grep "john.doe@example.com"
Set up Replica to Answer Queries
It is now time to configure the Replica to answer queries in place of the Master, which requires reconfiguring the value of the ldap_url parameter to point to the Replica. You can achieve this setup with a few commands on the Master.
You need to keep at hand the following data
SRV2_hostname: the hostname on which the Directory Server Master is installed
SRV7_hostname: the hostname on which the Directory Server Replica is installed, e.g., ds-replica.example.com
Hint
To retrieve the hostname, use the hostname command on the Master and Replica nodes.
-
Stop all Carbonio services
$ zmcontrol stop
-
Update the value of ldap_url
$ zmlocalconfig -e \
    ldap_url="ldap://SRV7_hostname ldap://SRV2_hostname"
If you plan to install multiple Replica Directory Servers, you can install all of them and then execute the above command once for all Replicas, making sure that their hostnames precede the Master's hostname. For example, provided you installed two Replica Directory Servers on SRV4 and SRV5, execute:
$ zmlocalconfig -e \
    ldap_url="ldap://SRV7_hostname ldap://SRV4_hostname \
    ldap://SRV5_hostname ldap://SRV2_hostname"
The Replica instance to query first is the first listed in the command.
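Since ordering matters, a small helper can extract the first host from an ldap_url value to confirm that a Replica, not the Master, leads the list; the hostnames below are the examples used in this section:

```shell
# Sketch: print the first (i.e., first-queried) host of an ldap_url value.
first_ldap_host() {
  # relies on shell word splitting of the space-separated URL list
  set -- $1
  echo "$1"
}

first_ldap_host "ldap://ds-replica.example.com ldap://srv2.example.com"
```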
Uninstall a Replica
To remove a Replica, you need to carry out two tasks:
-
On each node of the Multi-Server installation, execute the following command, which will use only the Master for the queries
$ zmlocalconfig -e ldap_url="ldap://SRV2_hostname"
In case you had configured multiple Replicas, the above command will redirect all queries to the Master. If you want to remove only some of the Replicas, simply omit their hostnames from the list. For example, to remove SRV5, use the command
$ zmlocalconfig -e \
    ldap_url="ldap://SRV7_hostname ldap://SRV4_hostname \
    ldap://SRV2_hostname"
-
Execute the following command only on the MTA node
$ /opt/zextras/libexec/zmmtainit
This command will update the Postfix configuration with the new ldap_url.