Carbonio Backup#

This chapter describes Carbonio Backup, the component responsible for backing up all data. The chapter is divided into several sections: at the beginning, an overview of the most common tasks is given, along with pointers to more technical references.

Next, the architecture of Carbonio Backup is described, including important concepts to know beforehand; these concepts are detailed in the remainder of the chapter.

Finally, the options for backing up items and accounts are detailed, accompanied by the corresponding CLI commands. Tasks that can be carried out from the GUI can be found in Admin Panel’s Backup, while those that can be carried out on both CLI and GUI are cross-referenced between the two sections, letting you choose your preferred way to execute them.

Documentation of the Backup is therefore split into four main parts:

  1. Backup (of a Mailstore) is the current page, which includes: the architecture of the backup modules and a glossary of most relevant terms; the most common operations related to the backup and how to execute them from the CLI

  2. Restore Strategies for the Backup: how to restore items, accounts, or whole Mailstores from the CLI

  3. Advanced Backup Techniques, including Disaster Recovery, a collection of last-resort recovery possibilities after hardware or software errors (not related to Carbonio)

  4. Admin Panel’s Backup, which contains all tasks that can be carried out from the GUI only.

Carbonio Backup Common Tasks#

This section contains guidelines for the most common tasks required by users; links to technical resources are also provided.

How to Activate Carbonio Backup#

Once you have finished your Carbonio setup, you need a few more steps to configure the Backup component and have all your data automatically backed up.

  • Mount a storage device at your target location. We use the default /opt/zextras/backup/zextras throughout this section; remember to replace it with the path you chose.


    The size of the device should be at least 80% of primary + secondary volume size.

  • Set the correct permission on the backup path: chown zextras:zextras /opt/zextras/backup/zextras
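
The steps above can be sketched as follows; the volume sizes and the device name are placeholders for your own values:

```shell
# Sketch of preparing the backup storage (sizes and device are examples).
# Minimum device size: at least 80% of primary + secondary volume sizes.
primary_gb=500      # size of the primary volume (example value)
secondary_gb=1500   # size of the secondary volume (example value)
min_backup_gb=$(( (primary_gb + secondary_gb) * 80 / 100 ))
echo "minimum backup device size: ${min_backup_gb} GB"

# Mount the device and fix ownership (requires root; /dev/sdb1 is a
# hypothetical device name):
#   mount /dev/sdb1 /opt/zextras/backup/zextras
#   chown zextras:zextras /opt/zextras/backup/zextras
```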


To avoid a flood of notifications about running operations, it is suggested to lower the default Notification level from Information to one of Warning, Error, or Critical using the command line:

zextras$ carbonio config set global ZxCore_LogLevel 0

to increase the log verbosity, or

zextras$ carbonio config set global ZxCore_LogLevel 1

to restore the normal log verbosity. You can also check the current log level as follows:

zextras$ carbonio config dump global | grep LogLevel

Architecture of Carbonio Backup#

This section introduces the main concepts needed to understand the architecture of Carbonio Backup and outlines their interaction; each concept is then detailed in a dedicated section.

Foremost, Carbonio Backup can be configured and executed only on Nodes equipped with the Mailstore & Provisioning Role. If multiple Mailstore & Provisioning Roles are installed, a Carbonio Backup must be configured for each Node: the backups will be completely separate and independent from each other, so you need to configure them to use different buckets or storage devices. You can however centralise the backups on the same NAS: create different partitions on it, then assign the appropriate Backup Path to each Backup.

Then, before delving into the architecture of Carbonio Backup, we recall two general metrics that are taken into account when defining a backup strategy: RPO and RTO.

The Recovery Point Objective (RPO) is the maximum amount of data that a stakeholder is willing to lose in case of a disaster, while the Recovery Time Objective (RTO) is the maximum amount of time that a stakeholder is willing to wait to recover their data.

According to these definitions, the ideal value for both is zero, while realistic values are usually near zero, depending on the size of the data. In Carbonio, the combination of Realtime Scanner and SmartScan guarantees that both RTO and RPO values are quite low: the Realtime Scanner ensures that all metadata changes are recorded as soon as they happen, while the SmartScan copies all items that have been modified, hence the possible loss of data is minimised and usually limited to those items that have changed between two consecutive runs of SmartScan.


The whole architecture of Carbonio Backup revolves around the concept of ITEM: An item is the minimum object that is stored in the backup, for example:

  • an email message

  • a contact or a group of contacts

  • a folder

  • an appointment

  • an account (including its settings)

  • a distribution list

  • a domain

  • a class of services (COS)


The last three items (distribution lists, domains, classes of services) are subject to the SmartScan only, i.e., the Real Time Scan will not record any change of their state.

There are also objects that are not items, and as such will never be scanned for changes by the Realtime Scanner and will never be part of a restore:

  • Node settings, i.e., the configuration of each Node

  • Global settings of Carbonio product

  • Any customizations made to the software (Postfix, Jetty, etc…​)

For every item managed by Carbonio, every variation in its associated metadata is recorded and saved, allowing its restore at a given point in time. In other words, whenever one of the metadata associated with an item changes, a “photograph” of the whole item is taken and stored with a timestamp by means of a transaction. Examples of metadata associated with an item include:

  • when the email was read, deleted, moved to a folder

  • a change in the name/address/job of a contact

  • the deletion or addition of a file in a folder

  • the change of status of an item (e.g., an account)

Technically, an item is stored as a JSON Array containing all changes in the item’s lifetime. More about this in the Structure of an Item section.
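
As a purely illustrative sketch (the real on-disk format is internal to Carbonio, and the field names below are invented), such an array might resemble:

```shell
# Hypothetical example of an item's change history as a JSON array;
# the field names are NOT the actual Carbonio Backup format.
json=$(cat <<'EOF'
[
  {"transaction_id": 1, "start-date": 1700000000, "end-date": 1700003600, "folder": "Inbox", "unread": true},
  {"transaction_id": 2, "start-date": 1700003600, "end-date": null, "folder": "Archive", "unread": false}
]
EOF
)
echo "$json"
```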

A Deleted Item is an item that has been marked for removal.


An element in the trash bin is not considered a deleted item: it is a regular item, placed in a folder that is special only to users. From Carbonio Backup’s point of view, the item has merely changed its state when moved to the trash bin.


A Transaction is a change of state of an item, meaning that one of the metadata associated with the item has been modified by a user. A Transaction can therefore be seen as a snapshot of the metadata at a moment in time. Each transaction is uniquely identified by a Transaction ID. It is possible to restore an item to any past transaction. See more in Restore Strategies.

SmartScan and Realtime Scanner#

The initial structure of the backup is built during the Initial Scan, performed by the SmartScan: the actual content of a Node featuring the Mailstore & Provisioning Role is processed and used to populate the backup. The SmartScan is then executed at every start of the Carbonio Backup and on a daily basis if the Scan Operation Scheduling is enabled in the Carbonio Admin Panel.


If neither of the two Scan Operations is active, no backup is created!

SmartScan runs daily at a fixed, configurable time and is not deferred. This implies that if, for any reason (e.g., the server is turned off, or Carbonio is not running), SmartScan does not run, it will not run until the next day. You may however configure the Backup to run the SmartScan every time Carbonio is restarted (although this is discouraged), or you may manually run SmartScan to compensate for the missing run.
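
If a scheduled run was missed, a SmartScan can be launched by hand from the CLI (the same doSmartScan command documented later in this chapter):

```shell
# Manually compensate for a missed scheduled run (run as the zextras user).
carbonio backup doSmartScan start
```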


Make sure that SmartScan is enabled whenever you want to carry out any backup or restore operation, otherwise it will not be successful!

SmartScan’s main purpose is to check for items modified since its previous run and to update the database with any new information.


The SmartScan is the scheduled component that keeps the backup aligned with production data in all those situations where the Realtime Scanner is unable to operate, such as account data changes or periods when the backup service is suspended or inactive. To maintain consistency, the SmartScan runs automatically once a day. This process also takes care of storing metadata on the remote backup volume, in case one has been configured. Both SmartScan and Realtime Scanner are enabled by default. While both can be stopped independently, it is suggested to leave them running, as they are intended to complement each other.

Realtime Scanner

The Realtime Scanner is the technology that allows changes to Mails and Calendar Module’s items or Contacts to be intercepted in real time, just after the application server has actually executed them. This allows the backup to record and archive them in virtually real time, reducing the RPO (the time distance between what is in the backup and what is in the live system) to 0. In addition, thanks to the separation of the backup into metadata and raw data, when changes affect only the metadata of an object (e.g., changing the state or the folder that contains it), only the metadata is updated and not the entire item, drastically reducing resource usage (CPU, IO, bandwidth).

When to Disable Scan Operations#

Backups are written to disk, therefore the Scan operations result in disk I/O. For this reason, there are a number of scenarios in which either the SmartScan or the Realtime Scanner might (or should) be disabled, even if only temporarily. For example:

  • You have a high number of transactions every day (or you often work with Carbonio Files documents) and notice a high load in the Node’s resource consumption. In this case, you can temporarily disable the Realtime Scanner.

  • You start a migration: in this case it is suggested to stop the SmartScan, because it would create a lot of disk I/O operations and could even block the server. Indeed, it would treat every migrated or restored item as a new one.

  • You have a high traffic of incoming and outgoing emails per day. In this case, you should always have the Realtime Scanner active, because otherwise all transactions will be backed up only by the SmartScan, which might not be able to complete in a reasonable time, due to the resources required for the I/O operations.

Example Scenarios of Interaction#

The interaction between SmartScan and Realtime Scanner is designed to keep the backup always up to date, provided that both of them run. This section shows what can happen in some scenarios that may (partially) prevent the update of the Backup.

Scenario 0: Stopped RealTime Scanner

When the Realtime Scanner is stopped, only the daily (or differently scheduled) SmartScan updates the Backup. However, if the system experiences a problem or an item is deleted in the meantime, the corresponding blob is not updated and therefore cannot be recovered from the Backup.

Scenario 1: The backup is stopped for one hour (or for any period)

In this case, there will be a one-hour “hole” in the backups that can be filled only by a SmartScan run, which by default is executed at the start of the Carbonio Backup service.

Scenario 2: Changes in LDAP

Since the Realtime Scanner operates at the Mailstore & Provisioning level, changes made at the LDAP level are not automatically picked up by Carbonio Backup.

In this case, manually running the SmartScan allows those changes to be included and the Backup copies to be updated.

Scenario 3: Multiple Mailstore & Provisioning Nodes

There is a corner case in which the Realtime Scanner may fail. Suppose you have two Mailstore & Provisioning Nodes (call them srv-mail and srv-alternate for simplicity). If srv-mail is offline for any reason and you log in to srv-alternate and make some changes to srv-mail, the Realtime Scanner will not be able to record these changes in the Backup. In this case too, running the SmartScan will bring the changes into the Backup.

Scenario 4: Other Cases

In general, the Realtime Scanner does not record any changes in those parts of Carbonio that do not have a handler for the Realtime Scanner. For example, Scenario 2 above is caused by the Realtime Scanner’s inability to interact with LDAP. Other examples include:

  • changes in a COS

  • changes in a domain

  • the membership of a user in Distribution Lists.

Backup Path#

The backup path is the place on a filesystem where all the information about the backup and archives is stored. Each Node has exactly one backup path; different Nodes cannot share the same backup path. It is structured as a hierarchy of folders, the topmost of which is by default /opt/zextras/backup/zextras/. Under this directory, the following important files and directories are present:

  • map_[server_ID] are so-called map files, which show whether the Backup has been imported from an external backup; their filename contains the unique ID of the Node.

  • accounts is a directory under which the information of all accounts is stored. In particular, the following important files and directories can be found there:

    • account_info is a file that stores all metadata of the account, including password, signature, preferences

    • account_stat is a file containing various statistics about the account, like for example the ID of the last element stored by SmartScan

    • backupstat is a file that maintains generic statistics about the backup, including the timestamp of the first run

    • drive_items is a directory containing up to 256 subfolders (whose name is composed of two hexadecimal lowercase letters), under which are stored Carbonio Files items, according to the last two letters of their UUID

    • items is a directory containing up to 100 subfolders (whose name is composed of two digits), in which items are stored according to their ID’s last two digits

  • servers is a directory that contains archives of the Node configuration and customisations, of the Carbonio configuration, and of the chat: one per day, up to the configured Node retention time.

  • items is a directory containing up to 4096 additional folders, whose name consists of two hexadecimal (uppercase and lowercase) characters. Items in the Mailstore & Provisioning Role are stored in the directory whose name matches the last two characters of their ID.

  • id_mapper.log is a user object ID mapping file that contains a map between the original object and the restored object. It is located at /backup/zextras/accounts/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/id_mapper.log. This file is present only in case of an external restore.
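
As a sketch of the layout described above, the subfolder holding an item can be derived from the last two characters of its ID (the ID below is a made-up example):

```shell
# Hypothetical item ID; per the layout above, the item is stored under
# the items/ subfolder named after the last two characters of its ID.
item_id="4f3a9b2c-1d5e-6f70-8a9b-0c1d2e3f4a7e"
subfolder="${item_id: -2}"
echo "items/${subfolder}"
```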

See also

Community Article

A more in-depth and comprehensive overview of the Backup Path.

Setting the Backup Path#

A Backup Path is a location in which all items and metadata are saved. Each Node must define one Backup path, which is unique to that server and not reusable. In other words, trying to use a Backup Path on a different Node and setting it there as the current Backup Path will return an error. Trying to force this situation in any way by tampering with the backup file will cause corruption of both old and new backup data.

The current value of the Backup Path can be retrieved using the command

zextras$ carbonio config get server ZxBackup_DestPath

     server                                              9d16badb-e89e-4dff-b5b9-bd2bddce53e2

             attribute                                                   ZxBackup_DestPath
             value                                                       /opt/zextras/backup/zextras/
             isInherited                                                 false

To change the Backup Path, use the set sub-command instead of get and append the new path,

zextras$ carbonio config set server ZxBackup_DestPath /opt/zextras/new-backup/path

On success, the command displays an ok message.

See also

You can do the same from the Carbonio Admin Panel under Server Config (Admin Panel ‣ Global Server Settings ‣ Server Config).

Retention Policy#

The Retention Policy (also retention time) defines after how many days an object marked for deletion is actually removed from the backup. The retention policies in the Backup are:

  • Data retention policy concerns single items; it defaults to 30 days

  • Account retention policy refers to whole accounts; it defaults to 30 days

All retention times can be changed; if set to 0 (zero), archives will be kept forever (infinite retention) and the Backup Purge will not run.

You can check the current values of the Retention Policies using, respectively:

zextras$ carbonio config dump global | grep ZxBackup_DataRetentionDays
zextras$ carbonio config dump global | grep backupAccountsRetentionDays

In order to change either value, use 0 for infinite retention or any integer value as the number of days. For example, to set the retention to 15 days for data and accounts, use:

zextras$ carbonio config set global ZxBackup_DataRetentionDays 15
zextras$ carbonio config set global backupAccountsRetentionDays 15

In case an account is deleted and must be restored after the Data retention time has expired, it will nonetheless be possible to recover all items up to the Account retention time: in that case, even if all the metadata have been purged, the digest can still contain the information required to restore the item.

See also

You can set retention policies from the Carbonio Admin Panel under Server Config (Admin Panel ‣ Global Server Settings ‣ Server Config).

Backup Purge#

The Backup Purge is a cleanup operation that removes from the Backup Path any deleted item that has exceeded the retention time defined by the Data retention policy and the Account retention policy.

Coherency Check#

The Coherency Check is specifically designed to detect corrupted metadata and BLOBs and performs a deeper check of a Backup Path than SmartScan.

While the SmartScan works incrementally by only checking items modified since the last SmartScan run, the Coherency Check carries out a thorough check of all metadata and BLOBs in the Backup Path.

To start a Coherency Check via the CLI, use the carbonio backup doCoherencyCheck command:

zextras$ carbonio backup doCoherencyCheck *backup_path* [param VALUE[,VALUE]]

See also

Community Article

A detailed analysis of the Coherency Check

How Does Carbonio Backup Work#

Carbonio Backup has been designed to store each and every variation of an ITEM. It is not intended as a system or Operating System backup, therefore it can work across different OS architectures and Carbonio versions.

Carbonio Backup allows administrators to create an atomic backup of every item in the Mailstore & Provisioning account and restore different objects on different accounts or even on different servers.

By default, Carbonio Backup saves all backup files in the local directory /opt/zextras/backup/zextras/. To be eligible as the Backup Path, a directory must:

  • Be both readable and writable by the zextras user

  • Use a case sensitive filesystem
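
A quick way to verify the case-sensitivity requirement on a candidate directory is to create two files whose names differ only by case and check that both survive (a sketch using standard shell tools):

```shell
# Check whether the filesystem hosting a directory is case sensitive:
# two names differing only by case must remain distinct files.
dir=$(mktemp -d)
touch "$dir/CaseTest" "$dir/casetest"
count=$(ls -1 "$dir" | wc -l)
if [ "$count" -eq 2 ]; then
  result="case-sensitive"
else
  result="case-insensitive"
fi
echo "$result"
rm -rf "$dir"
```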


You can modify the default setting by using either technique shown in section Setting the Backup Path.

When first started, Carbonio Backup launches a SmartScan to fetch all data from the Mailstore & Provisioning Role and create the initial backup structure, in which every item is saved along with all its metadata as a JSON array on a case sensitive filesystem. After the first start, either the Realtime Scanner, the SmartScan, or both can be employed to keep the backup updated and synchronised with the account.

Structure of an Item#

The basic structure of the item is a JSON Array that records all the changes happening during the lifetime of each item, such as information related to emails (e.g., tags, visibility, email moved to a folder), contacts, tasks, single folders, groups, or Carbonio Files documents, and user preferences (e.g., hash of the password, general settings).

To improve performance, only the changes needed to restore the items are recorded: for example, it is not useful to store the user’s last login time or the IMAP and ActiveSync state, because if the account is restored onto a new one, the values of those attributes would relate to the old account.

By collecting the timestamp of the transaction, we are able to restore data at a specific moment of its life.

During a restore, the engine finds all the valid transactions by evaluating the “start-date” and “end-date” attributes.

The same logic is used to retrieve deleted items: when an item is deleted, its timestamp is stored, so items that have been deleted within a specific time frame can be restored.

Even if the blob associated with the item changes, and consequently its digest changes too (as happens for Carbonio Files documents), the metadata records the validity of both the old and the new digest.


The SmartScan operates only on accounts that have been modified since the previous SmartScan, hence it improves the system’s performance and significantly decreases the scan time.

The SmartScan is a resource-intensive process and should never be run during peak hours or regular working time, but only when the load on the Carbonio infrastructure is low, to prevent reductions in Carbonio’s performance.

By default, a SmartScan is scheduled to be executed each night at 4:00 AM (if Scan Operation Scheduling is enabled in the Carbonio Backup section of the Carbonio Admin Panel). Once a week, on a day set by the user, a Purge is executed together with the SmartScan to clear the volume on which the Carbonio Backup is saved from any deleted item that exceeded the retention period.

The Carbonio Backup engine scans all the items on the Carbonio mailbox, looking for items modified after the last SmartScan. It updates any outdated entry and creates any item not yet present in the backup while flagging as deleted any item found in the backup and not in the Carbonio mailbox.

Then, all configuration metadata in the backup are updated, so that domains, accounts, COSs and server configurations are stored along with a dump of all configuration.

When the backup contains LDAP data, SmartScan will save in the Backup Path a compressed dump that can also be used standalone to restore a broken configuration.


In case the LDAP backup cannot be executed (e.g., because the access credentials are wrong or invalid), SmartScan will simply skip backing up the Directory Server configuration, but will nonetheless save a backup of all the remaining configuration.

When the External Restore functionality is active, SmartScan creates one (daily) archive for each account, which includes all the account’s metadata, and stores it on the external volume. More information in section Backup on External Storage.

SmartScan can be run manually from the CLI or configured from the Admin Panel (Admin Panel ‣ Global Server Settings ‣ Server Config).

Running a SmartScan

To start a SmartScan via the CLI, use the command:

zextras$ carbonio backup doSmartScan *start* [param VALUE[,VALUE]]
Checking the Status of a Running Scan

Before actually carrying out this check, it is suggested to verify how many operations are running, to find the correct UUID. You can do this using the command

zextras$ carbonio backup getAllOperations [param VALUE[,VALUE]]

To check the status of a running scan via the CLI, use the command

zextras$ carbonio backup monitor *operation_uuid* [param VALUE[,VALUE]]

Realtime Scanner#

The Realtime Scanner is an engine tightly connected to the Mailstore & Provisioning, which intercepts all the transactions that take place on each user mailbox and records them with the purpose of maintaining the whole history of an item for its entire lifetime.

Thanks to the Realtime Scanner, it is possible to recover any item at any point in time.

The Realtime Scanner reads all the events of the Mailstore & Provisioning in almost real-time, then it replicates the same operations on its own data structure, creating items or updating their metadata. No information is ever overwritten in the backup, so every item has its own complete history.

Enable the Realtime Scanner

Set the ZxBackup_RealTimeScanner property to TRUE.

zextras$ carbonio config set server $(zmhostname) ZxBackup_RealTimeScanner TRUE
Disable the Realtime Scanner

Set the ZxBackup_RealTimeScanner property to FALSE.

zextras$ carbonio config set server $(zmhostname) ZxBackup_RealTimeScanner FALSE

Backup Purge#

The Backup Purge is a cleanup operation that removes from the Backup Path any deleted item that exceeds the retention time defined by the Retention Policy.

The Purge engine scans the metadata of all the deleted items and when it finds an item marked for deletion whose last update is older than the retention time period, it erases it from the backup.

Note however, that if the blob of an item is still referenced by one or more valid metadata files, due to Carbonio Backup’s built-in deduplication, the blob itself will not be deleted.

The Backup Purge can be started manually from the CLI or scheduled from the Admin Panel (Admin Panel ‣ Global Server Settings ‣ Server Config).

However, note that when infinite retention is active (i.e., the Data Retention Policy is set to 0), the Backup Purge will immediately exit, since no deleted item will ever exceed the retention time.

Run a Backup Purge

To start a Backup Purge run the command

zextras$ carbonio backup doPurge [param VALUE[,VALUE]]
Check the Status of a Running Backup Purge

To check the status of a running Purge run the command

zextras$ carbonio backup monitor *operation_uuid* [param VALUE[,VALUE]]

Limitations and Corner Cases of the Backup#

There are a few cases in which the backup does not work correctly; we discuss them here.

  1. Restoring an active account onto a new account should NOT be done using the latest state available. Suppose that a user by mistake deletes all of their emails, or that for any reason (e.g., a server failure) the emails in an account are lost. The user wants them back and asks the admin. If the admin restores the account to the latest available state, the new account will be empty, since in the latest state the emails had already been deleted. Therefore, in order to correctly restore the account, it is necessary to restore it to a point in time prior to the deletion of the emails.

  2. When using the POP3/POP3S protocol, if the email client is configured to download email messages and delete them immediately from the server, these messages may not be included in the backup. This does not happen if the Carbonio Storage component is installed.

  3. When sending an email directly through an SMTP connection (e.g., using a multipurpose device or connecting to the SMTP server using telnet), that email will not be part of the backup.

  4. When sending email using an IMAP/SMTP client, the IMAP client must be configured to store the sent email in a remote folder (using the IMAP APPEND command) after the send operation; otherwise the email may not be included in the backup.


The last two cases do not apply when using a browser to connect to the Node hosting the Mailstore & Provisioning Role. In this case it is the Mailstore that contacts the SMTP server to send the email and automatically passes it to mailboxd.

Backup on External Storage#

As described in section Architecture of Carbonio Backup, Carbonio Backup is composed of metadata and blobs (compressed and deduplicated), saved by default in the same folder (or mounted volume) specified as the Backup Path. The real-time backup requires that the Backup Path be fast enough to avoid queuing operations and/or risking data loss.

However, S3 buckets, NFS shares, and other storage mounted using Fuse can be very slow and might not be suited as storage mounted on the Backup Path.

Because the most important part of backups is the metadata, the idea behind Backup on External Storage is to use two different storage locations: one local (and typically fast) for metadata and cache, and one external (local network or cloud) for the blobs and a copy of the metadata.

If the external storage is remote, multiple changes will be bundled and sent together, while if it is local, larger but slower and cheaper storage can be employed.

Metadata are saved locally in the Backup Path, while BLOBs are temporarily cached on the local disk and uploaded to the remote storage as soon as possible.

The SmartScan locally updates the metadata for accounts that have been modified since the previous scan and archives them on the remote storage.

The remote metadata archiving can also be triggered manually by running any of the following commands with the remote_metadata_upload true parameter:

  • carbonio backup doSmartScan

  • carbonio backup doAccountScan

  • carbonio backup doBackupServerCustomizations

  • carbonio backup doBackupLDAP

  • carbonio backup doBackupCluster
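
For example, to run a SmartScan that also archives the metadata to the remote storage (run as the zextras user):

```shell
# SmartScan plus remote metadata archiving; the remote_metadata_upload
# parameter is appended as documented above.
carbonio backup doSmartScan start remote_metadata_upload true
```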

By splitting the I/O-intensive metadata folder from the BLOB one, it is also ensured that the backup keeps working even if the remote storage is temporarily unavailable (for example, because of network issues or ongoing maintenance tasks), granting better reliability and backup resilience.

Activate Backup on External Storage#

Once the external storage has been set up, it is necessary to let Carbonio use it. The procedure is slightly different, depending on whether the new storage needs to be accessed from a newly installed server or existing local backups must be migrated to the external storage.

Configure on newly installed or uninitialized server

If the backup has not been initialized on the server, an Administrator can configure the external storage by running

zextras$ carbonio backup setBackupVolume type bucket_configuration_id VALUE [param VALUE[,VALUE]]

For example

zextras$ carbonio backup setBackupVolume S3 123e4567-e89b-12d3-a456-556642440000

Once the backup is initialized, it will use the external storage.

Then, check for any missing blobs in the mounted volumes with doCheckBlobs to avoid integrity errors.

Migrate existing backups

Before actually carrying out the migration, please perform the following important maintenance tasks. This procedure will minimise the risk of errors:

  1. Double-check the permissions on the active backup path

  2. Make sure that the Carbonio cache folder is accessible by the zextras user (typically under /opt/zextras/cache)

  3. Check for table errors in the myslow.log and in the MariaDB integrity check report. If any error is found, consider running the mysqlcheck command to verify the database integrity.

  4. Check for any missing blobs in the mounted Carbonio volumes with carbonio powerstore doCheckBlobs

  5. Check for any missing digest in the backup with doSmartScan deep=true

  6. Check for any orphaned digest or metadata in the Backup with carbonio backup doCoherencyCheck

  7. Optionally run a carbonio backup doPurge to remove expired data from the Backup
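
Steps 4 to 7 above can be sketched as a sequence of CLI calls (run as the zextras user; /opt/zextras/backup/zextras is the default Backup Path, replace it with yours; exact parameter spellings may vary between versions):

```shell
# Pre-migration checks, in order (sketch based on the list above):
carbonio powerstore doCheckBlobs                              # missing blobs in mounted volumes
carbonio backup doSmartScan start deep true                   # missing digests in the backup
carbonio backup doCoherencyCheck /opt/zextras/backup/zextras  # orphaned digests/metadata
carbonio backup doPurge                                       # optional: remove expired data
```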

You can now proceed to migrate the existing backup using the appropriate carbonio backup migrateBackupVolume [[ Default | Local | S3 ]] command.

Finally, once the migration has been completed, you can perform one final task:

  • Manually remove the old backup data. Indeed, the migration only copies the backup files to the new external storage and leaves the originals in place.

Goals and Benefits#

It is worth highlighting the two main advantages of the Backup on External Storage:

  • Fast IOPS storage is needed only for the metadata, which statistically amount to less than 10% of the total backup size.

  • Backups are typically stored externally, away from the local infrastructure and are therefore accessible from disaster recovery sites


When activating the Backup on External Storage, it is not possible to modify the Backup Path from the UI: the corresponding input text area will be shown but cannot be edited. Moreover, the following warning is displayed:

“The backup path cannot be managed using this UI since the Backup On External Storage is enabled. Please use the backup CLI commands”

In order to disable the External Storage, you can run the carbonio backup setBackupVolume Default command:

zextras$ carbonio backup setBackupVolume Default start

Data Stored in the External Storage#

Data is stored in external storage using a structure very similar to the one of the Backup Path:

|-- accounts
|-- items
|-- server
`-- backupstat

The external volume is used as storage for $BACKUP_PATH/items only, while the metadata (which are in $BACKUP_PATH/accounts) still use the local volume as a working directory to store the changed metadata.

There is a set of dedicated commands to download the metadata from the external storage and rebuild the structure and the content of the account in case of Disaster Recovery or to update/fix local metadata.

For example, the following command downloads the latest metadata available in the remote storage to the Backup Path:

zextras$ carbonio backup retrieveMetadataFromArchive S3 *destination*

Types of External Storage#

Supported external volumes are of two types: NFS or FUSE external volumes, i.e. shared volumes mounted at the OS level, which are described in the remainder of this section; and External ObjectStorage, i.e. object storage entirely managed by Carbonio, which is described in detail in Section Bucket Management.

NFS/FUSE External Storage#

Before using the NFS/FUSE share, it is necessary to configure the new volume(s) that will store the backup, because no existing volume can be reused. Depending on which approach you choose, the steps to carry out differ; we describe here only the easiest and most reliable one.

The Administrator must ensure that each Node writes on its own directory, and the destination volume must be readable and writable by the zextras user.

Consider a scenario in which a single NAS is involved, which exposes the NFS share as /media/externalStorage. We want to store the backups of our multi-server installation on this NAS, with the backup of each Node in a separate directory.

To do so, on each Node you need to add one entry similar to the following to the /etc/fstab file (the hostname nas.example.com is a placeholder for your NAS): nas.example.com:/media/externalStorage/SRV1 /mnt/backup nfs rw,hard,intr 0 0


You need to add an entry like the one above on each Node, replacing SRV1 with the corresponding directory on the NAS on which that Node's backup will be stored.
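After editing /etc/fstab on a Node, you can mount the share and verify that the zextras user can write to it. This is a minimal sketch assuming the mount point /mnt/backup from the example above:

```shell
# Mount the share using the new /etc/fstab entry
mount /mnt/backup

# Confirm the mount is active
mount | grep /mnt/backup

# Verify the zextras user can write to the destination
sudo -u zextras touch /mnt/backup/.write_test && \
  sudo -u zextras rm /mnt/backup/.write_test && \
  echo "zextras can write to /mnt/backup"
```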

External ObjectStorage#

An external ObjectStorage is a Bucket created either on a remote provider or on a local infrastructure and used as the backup destination. Please refer to the dedicated Section Bucket Management for more information and directions to create one.

Troubleshooting Backups on Defective ObjectStorage#

There are unfortunate cases in which a remote ObjectStorage holding a Backup becomes completely unavailable, for example because of a hardware failure.

This situation is unfortunate in several respects:

  • All the data saved on the Bucket is already lost

  • The remote bucket still shows up when issuing the command carbonio core listBuckets all

  • The Backup still tries to use that bucket

  • The defective Bucket can not be removed

  • Trying to redirect the backup to a new volume with the migrateBackupVolume command is fruitless, because the remote Bucket is unresponsive and inaccessible

The solution to this impasse is however quite simple; there are two alternatives:

  1. You do not have another ObjectStorage available: use the command

    zextras$ carbonio backup setBackupVolume Default start

    The Backup will now use the default, local path.

  2. You already have another ObjectStorage available: create a new Backup Volume with the following command (we use a new S3 bucket as an example)

    zextras$ carbonio backup setBackupVolume S3 58fa4ca2-31dd-4209-aa23-48b33b116090 volume_prefix new_backup

In both cases, at this point you can proceed to remove the volume that is no longer functional.
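As a recap, the second alternative can be sketched as the following session; the bucket UUID and prefix are the example values from above and must be replaced with those of your replacement bucket.

```shell
# List the configured buckets and note the UUID of the new, working one
carbonio core listBuckets all

# Point the Backup at the new S3 bucket (UUID and prefix are example values)
carbonio backup setBackupVolume S3 58fa4ca2-31dd-4209-aa23-48b33b116090 volume_prefix new_backup
```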