Advanced Administration Tasks#

This section outlines a few advanced tasks and provides quick directions to carry them out.

Role-based Access Control#

Carbonio uses a delegation mechanism that allows the principal administrator to assign roles and permissions to other users, sharing administration tasks and duties with them by electing them as either Global or Delegated Administrators, who have different privileges to configure and manage these features.

In the context of Anti-Virus and Anti-Spam management, each Administrator, whether Global or Delegated, can log in with their personal account and control and manage these functionalities according to the permissions granted.

Server-side Management of E-mails#

Carbonio's MTA (Postfix) receives e-mail by means of the SMTP protocol and forwards it to the correct e-mail queue according to the user’s username (which coincides with the user’s e-mail address). If the recipient of an e-mail is a distribution list (i.e., a mailing list), the e-mail is forwarded to each member of the list.

Additional rules can be created directly on Postfix from the CLI. Examples of restriction rules can be found in the official Postfix documentation, starting with the Postfix Standard Configuration Examples; more examples, depending on the use case, are available there as well.
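As a purely illustrative sketch (standard Postfix, not Carbonio-specific: the restriction chosen here is an assumption and may not suit your use case), a client restriction can be added with postconf and activated with a reload. Keep in mind that Carbonio may regenerate parts of the Postfix configuration from its own templates, so manual changes of this kind could be overwritten.

# postconf -e "smtpd_client_restrictions = reject_unknown_client_hostname"
# postfix reload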

Server-side Attachment Management#

Global e-mail attachment settings allow you to specify global rules for handling attachments of an e-mail message.

Modify list of blocked file extensions

You can set rules at COS level and also for individual accounts. When attachment settings are configured in Global Settings, the global rule takes precedence over COS and Account settings.

A list of formats (extensions) can be created via CLI to restrict the attachments allowed.

Warning

All files whose extension is in the list will never reach the recipient.

For example, to block .exe files, use the command

# carbonio prov mcf +zimbraMtaBlockedExtension "exe"

Hint

Only one format/extension at a time can be specified.
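To block several extensions, repeat the command once per extension; a minimal sketch based on the command above (the extensions are just examples):

# carbonio prov mcf +zimbraMtaBlockedExtension "bat"
# carbonio prov mcf +zimbraMtaBlockedExtension "js"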

Manage attachments’ size

It is possible to limit both the total size of a message (including its attachments) and the size of each individual attachment.

The total size of the message can be set for each MTA server directly on the Carbonio Admin Panel, in the Tuning section under Admin Panel ‣ MTA ‣ Advanced (see documentation).
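If you prefer the CLI, the corresponding setting is, to the best of our knowledge, the zimbraMtaMaxMessageSize attribute (value in bytes). A minimal sketch, assuming a 10 MB limit on a server called mta.example.com:

zextras$ carbonio prov ms mta.example.com zimbraMtaMaxMessageSize 10485760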

The maximum size of each attachment can be set via the CLI for each user, CoS, or domain by issuing, as the zextras user

zextras$ carbonio prov ma alice@example.com zimbraFileUploadMaxSizePerFile 1048574

This command prevents Alice from sending an attachment larger than 1048574 bytes (approximately 1 MB).
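The same attribute can be set at CoS or domain level; a sketch assuming a CoS named default and the domain example.com:

zextras$ carbonio prov mc default zimbraFileUploadMaxSizePerFile 1048574
zextras$ carbonio prov md example.com zimbraFileUploadMaxSizePerFile 1048574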

Management of a COS#

Each account on Carbonio is assigned a COS, a global object that is not restricted to a particular domain or set of domains.

The COS assigned to an account determines the default attributes for that account and the features to be enabled or not for it. The COS controls mailbox quotas, message lifetime, password restrictions, attachment blocking, and server pool usage.

You can create and edit the classes of service via CLI and soon also via the new Carbonio Admin Panel, in which the COS management tasks will feature a helpful wizard that guides the Administrator through the creation process.

# carbonio prov createCos {name} [attribute value ...]

All the attributes that need to be customised can be either added at the end of the base command above as [ attribute value ] pairs, or at a later point with the following command:

# carbonio prov modifyCos {name} [attribute value ...]
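For example (a sketch with assumed names and values), you could create a COS with a 5 GB mailbox quota and then assign it to an account:

# carbonio prov createCos standard zimbraMailQuota 5368709120
# carbonio prov setAccountCos alice@example.com standard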

S/MIME support#

S/MIME support for all users can be enabled using the command

zextras$ carbonio config set global enableSmimeEncryption TRUE

Users will then be able to upload certificates to be used for S/MIME signing; setting it to false will prevent users from using the feature.
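To check the current value of the setting, you can query it following the same config get syntax used later in this section (a minimal sketch):

zextras$ carbonio config get global enableSmimeEncryption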

Additionally, the command below enables users to verify S/MIME signatures.

zextras$ carbonio prov mcf carbonioSMIMESignatureVerificationEnabled TRUE

To access the S/MIME Certificate Store from within their Settings page, users need to supply a password, which is different from the S/MIME certificate’s password. You can set various parameters of this password from the CLI, by using the following command as the zextras user.

zextras$ carbonio config set global \
encryptionPasswordPolicyAttribute <attribute> <value>

The list of all available attributes that can be customised can be retrieved using the command

zextras$ carbonio config get global \
encryptionPasswordPolicyAttribute

The output of the above command is:

global
      values

              attribute                                                   encryptionPasswordPolicyAttribute
              inheritedValue
                  minLength                                                       8
                  maxLength                                                       100
                  minUpperCase                                                    1
                  minLowerCase                                                    1
                  minDigits                                                       1
                  minPunctuation                                                  0
                  minAlphaChars                                                   0
                  minPunctuationOrDigitChars                                      0
                  requireAlphanumeric                                             true
                  allowedChars
                  allowedPunctuation
                  denyList
              inheritedFrom                                               default
              isInherited                                                 true
              modules
                      ZxCore

The attribute is available only at the global level, meaning this password policy is the same for all configured domains, CoS, and users on the system.
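For example, to require a minimum length of 12 characters for this password (a sketch using one of the attributes listed in the output above):

zextras$ carbonio config set global \
encryptionPasswordPolicyAttribute minLength 12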

Pending-setups#

This section describes the functionalities provided by pending-setups in Carbonio and explains when it is necessary to run or re-run it for Carbonio to function properly.

Overview#

In a nutshell, pending-setups in Carbonio manages post-installation and configuration tasks that are required to finalise Carbonio's setup and to ensure that the services run correctly.

pending-setups is a wrapper around a collection of scripts, provided by other packages and placed in the /etc/zextras/pending-setups.d/ directory.

When invoked, pending-setups carries out these tasks:

  • Set up authentication and security policies using service-discover

  • Securely store and manage credentials

  • Configure services to ensure all Carbonio components (e.g., Mail, Authentication, Video Server) are properly configured

  • Reload services to apply changes

When a script is successfully executed, it is moved to the /etc/zextras/pending-setups.d/done directory.
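To see which scripts have already been executed and moved there, you can simply list that directory:

# ls /etc/zextras/pending-setups.d/done/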

Running pending-setups#

Hint

All commands shown in this section must be executed as the root user.

Before executing the command, make sure that you retrieve the cluster credential password, which is stored in file /var/lib/service-discover/password.
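For instance, you can display it with:

# cat /var/lib/service-discover/password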

pending-setups can be invoked with the -a option: in this case, it simply reads all the scripts in the /etc/zextras/pending-setups.d/ directory and executes them, with the output showing the tasks it carries out.

Invoked with no options, it enters an interactive mode, showing a menu similar to:

You have 6 pending setups to run
0) carbonio-catalog.sh
1) carbonio-tasks.sh
2) carbonio-user-management.sh
3) carbonio-tasks-db-setup.sh
4) carbonio-files.sh
5) carbonio-files-db-setup.sh

a) execute all
q) quit

Please input your selection:
>

The initial list shows all the scripts that must be executed: enter the number corresponding to a script to launch it, or choose a to execute them all, or q to exit. In the latter case, however, it will not be possible to use Carbonio successfully, as a number of configurations would be missing.

If all scripts have been executed, the command will output:

There are no pending-setups to run. Exiting!

Why Running pending-setups Again#

There are a number of situations in which you need to run pending-setups again. In some of the following cases, the scripts might not have been executed successfully, but they are nonetheless moved under /etc/zextras/pending-setups.d/done, so simply invoking pending-setups again will not suffice.

After a failed setup or partial installation

If the initial pending-setups process fails or gets interrupted (e.g., due to a system crash, missing dependencies, or misconfigured services)

After manually fixing a configuration issue

If you manually adjust configuration files, services, or permissions

After upgrading Carbonio

If an update doesn’t fully apply

After restoring from a backup or migrating to a new server

If you restore a Carbonio system from a backup or migrate it to a new server, some configurations might need to be re-applied to match the new environment

When adding a new Carbonio Role

If you install a new Carbonio Role (e.g., Files, Chats), running pending-setups ensures that the new component is properly integrated into the system

Debugging and troubleshooting

If a service is misbehaving or showing unexpected errors, re-running pending-setups can sometimes reapply missing configurations and fix the issue

How to Run pending-setups Again#

There are two options to execute a failed script.

Option 1: Move the Script Back and Re-run pending-setups

The first one proves useful if you realise, or believe, that multiple scripts have failed. In this case, the procedure is:

  1. Copy the script from /etc/zextras/pending-setups.d/done back to the main directory

    # cp /etc/zextras/pending-setups.d/done/<script-name>.sh \
    /etc/zextras/pending-setups.d/
    

    Replace <script-name>.sh with the actual script name.

  2. Run the command

    # pending-setups -a
    
  3. The script will be executed again as part of the pending setups process, as described in Section Running pending-setups above

Option 2: Manually Execute the Script

If you have only one script to execute, you can run the script manually.

  1. Navigate to the /etc/zextras/pending-setups.d/done directory

    # cd /etc/zextras/pending-setups.d/done

  2. Execute the script directly. For example, to execute carbonio-videoserver.sh

    # bash carbonio-videoserver.sh

  3. The script will immediately be executed

Carbonio and HTTP Proxy#

Many organisations operate within secured network environments where direct Internet access is restricted or blocked by firewall rules or corporate policies.

Using an HTTP proxy allows administrators to control and monitor outbound connections from the Carbonio server, ensuring that only approved services and destinations are accessible.

In a scenario like this, configuring an HTTP proxy is mandatory for Carbonio to function properly: this can be achieved using the zimbraHttpProxyURL attribute as follows.

Note

All commands in this section must be executed as the zextras user.

First, log in to any Node where the Proxy is installed and check the current configuration of the attribute.

zextras$ carbonio prov getConfig zimbraHttpProxyURL

If there is no output, Carbonio tries to use a direct connection to the Internet, which in the scenario depicted above does not work, so we need to configure the attribute as follows.

Unauthenticated proxy
zextras$ carbonio prov modifyConfig zimbraHttpProxyURL http://proxy.example.com:8080

Replace http://proxy.example.com:8080 with the actual proxy that needs to be used.

Authenticated proxy
zextras$ carbonio prov modifyConfig zimbraHttpProxyURL http://username:password@proxy.example.com:8080

Replace http://username:password@proxy.example.com:8080 with the correct values.

Finally, restart the service.

Execute, as the zextras user

zextras$ zmcontrol restart

Alternatively, execute, as the root user

# systemctl restart carbonio-proxy.target

Freshclam and HTTP Proxy#

Similarly to the scenario described in Section Carbonio and HTTP Proxy, freshclam cannot download the antivirus signatures if Carbonio is placed behind a proxy. The solution is to put the proxy information in the freshclam configuration template, which is /opt/zextras/conf/freshclam.conf.in.

Log in to the Node where the MTA AV/AS Role is installed and edit the file by adding the following lines at the bottom:

HTTPProxyServer  proxy.example.com
HTTPProxyPort    8080
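If the proxy requires authentication, freshclam also supports the HTTPProxyUsername and HTTPProxyPassword directives (these are upstream ClamAV options; verify them against the freshclam.conf shipped with your ClamAV version, and replace the example values with your own credentials):

HTTPProxyUsername  proxyuser
HTTPProxyPassword  proxypassword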

As the zextras user, execute

zextras$ zmclamdctl restart

To verify that the new configuration is correct and the HTTP proxy is being used, check the log file

zextras$ tail -f /opt/zextras/log/freshclam.log

In the output you should find some lines similar to the following:

ClamAV update process started at Thu Mar 13 14:01:58 2025
Trying to retrieve CVD header from https://database.clamav.net/daily.cvd
OK
main database available for download (remote version: xx)
Testing database: '/opt/zextras/data/clamav/db/tmp.03d9f579e4/clamav-f220b006e75bbf945b40dd0f5b8a2f29.tmp-main.cvd' ...
Database test passed.
main.cvd updated (version: 62, sigs: 6647427, f-level: 90, builder: sigmgr)
Trying to retrieve CVD header from https://database.clamav.net/bytecode.cvd
OK

Make a copy of the configuration template

If you have modified freshclam's configuration template /opt/zextras/conf/freshclam.conf.in, you will likely need to edit it again after an upgrade: since Carbonio regenerates freshclam.conf from the template, an update of Carbonio or ClamAV may reset the template to its default or overwrite it with a new version, and any custom modifications would be lost.

To make sure changes are preserved after an upgrade, create a backup of the template file

zextras$ cp /opt/zextras/conf/freshclam.conf.in /opt/zextras/conf/freshclam.conf.in.backup

After the upgrade, compare and restore changes

zextras$ diff /opt/zextras/conf/freshclam.conf.in.backup /opt/zextras/conf/freshclam.conf.in

If the output is empty, there is no difference between the new template and the backup, so freshclam will keep working without issues. If you see any difference, you might want to restore the proxy settings, either by following this procedure from the beginning or by copying your backup file over the new template.

zextras$ cp /opt/zextras/conf/freshclam.conf.in.backup /opt/zextras/conf/freshclam.conf.in

Clean Activesync Status#

The PostgreSQL active_sync table in the activesync database contains the synchronisation status of mobile devices connected to Carbonio. Since there is no mechanism in place to clean this table, it can grow significantly over time and occupy unnecessary space. The simplest solution is to add a line to the zextras user’s crontab.

Hint

Remember that active_sync is a table and activesync is the database containing it.

To edit the crontab, execute, as the zextras user

zextras$ crontab -e

Scroll to the very end of the file (after the comment ZEXTRAS-END -- DO NOT EDIT ANYTHING BETWEEN THIS LINE AND ZEXTRAS-START) and add this line:

45 23 * * * /opt/zextras/bin/carbonio mobile doPurgeMobileState > /dev/null 2>&1

Save the file, then exit. This line will clear all connections from the state table every day at 23:45 (11:45 PM). The command is completely transparent: active connections are immediately restored in the table, so users will not notice any service interruption.

Hint

You can change the time at which the script runs by modifying the 23 (hour) and 45 (minute) values.
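For example, to run the purge at 02:30 in the morning instead (same command, different schedule):

30 2 * * * /opt/zextras/bin/carbonio mobile doPurgeMobileState > /dev/null 2>&1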

However, when many devices connect, the table can become so large that carbonio mobile doPurgeMobileState may time out without cleaning the table, so a different approach is needed.

This approach requires you to access the Database Node and execute some SQL commands directly on the database.

Warning

Please be careful when you act on the database, because wrong commands may have unexpected outcomes.

After you access the Database Node, become the postgres user.

# su - postgres

Then access the PostgreSQL client

$ psql

From here, select the activesync database

postgres=# \c activesync

Then use the following query to retrieve the number of mobile device entries recorded in the table, grouped by month of last modification.

select concat(extract(YEAR from last_modification), '-', extract(MONTH from last_modification)), count(*)
  from active_sync.state
 group by concat(extract(YEAR from last_modification), '-', extract(MONTH from last_modification))
 order by concat ASC;

You will have an output similar to:

 concat  | count
---------+-------
 2024-1  |    41
 2024-10 |    55
 2024-11 |   165
 2024-12 |    89
 2024-2  |    10
 2024-3  |     7
 2024-4  |   273
 2024-5  |    68
 2024-6  |   190
 2024-7  |    33
 2024-8  |    90
 2024-9  |  1529
 2025-1  |   989
 2025-2  |   606
 2025-3  |    90
(15 rows)

This output shows that the table holds the status of mobile devices going back more than a year. Old entries can be safely removed from the table, because they may refer to devices that are no longer used or have not connected recently. For example, to remove all entries from January to October 2024, execute the following query.

delete from active_sync.state where last_modification between '2024-01-01' and '2024-10-31';

Hint

Do not clean the entire table, otherwise all the mobile devices will need to re-sync.

You can now check the size of the table by running the command

postgres=# \dt+ active_sync.*

If the size of the table is no longer as large as before, quit the PostgreSQL client

postgres=# \q

As the postgres user, you can analyse the table using the command

$ vacuumdb -t active_sync.state -Z -v activesync

If the output is similar to the following one, the table is clean:

vacuumdb: vacuuming database "activesync"
INFO:  analyzing "active_sync.state"
INFO:  "state": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows

Otherwise, if the number of dead rows is high, you can run the command, as the postgres user

$ vacuumdb -t active_sync.state -f -v activesync

This command removes dead tuples (rows) to reduce the space used and keep database performance at an optimal level.