As described in section [backup-architecture], Zextras Backup is composed of metadata and blobs (compressed and deduplicated), saved by default in the same folder—or mounted volume—specified in the Backup Path. The real-time backup requires that the Backup Path be fast enough to avoid queued operations and the risk of data loss.
However, S3 buckets, NFS shares, and other storage mounted using Fuse can be very slow and might therefore be unsuitable as the Backup Path.
Because the most important part of a backup is the metadata, the idea behind Backup on External Storage is to use two different storage locations: a local (and typically fast) one for metadata and cache, and an external one (on the local network or in the cloud) for the blobs and a copy of the metadata.
Metadata are saved locally in the Backup Path; BLOBs are temporarily cached on the local disk and uploaded to the remote storage as soon as possible.
The SmartScan locally updates the metadata for accounts that have been modified since the previous scan and archives them on the remote storage.
Remote metadata archiving can also be triggered manually by running either of the following commands with the remote_metadata_upload true parameter:
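As a sketch (the exact command names are not reproduced here), the parameter is simply appended to the invocation; doSmartScan, mentioned later in this section, is used below as a plausible example:

```shell
# Hypothetical invocation: append remote_metadata_upload true to the
# backup command (doSmartScan is assumed here) so that the metadata
# are also archived on the remote storage.
zxsuite backup doSmartScan remote_metadata_upload true
```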
By splitting the I/O-intensive metadata folder from the BLOBs folder, the backup is also guaranteed to keep working even if the remote storage is temporarily unavailable (for example, because of network issues or ongoing maintenance tasks), granting better reliability and backup resilience.
Goals and benefits of a backup on external storage include:
Data is stored in external storage using a structure very similar to the one of the Backup Path:
|-- accounts
|-- items
|-- server
`-- backupstat
The external volume is used as storage for the BLOBs only, while the metadata (which are kept in the Backup Path) still use the local volume as a working directory to store the cache.
There is a set of dedicated commands to download the metadata from the external storage and rebuild the structure and content of an account in case of Disaster Recovery, or to update/fix the local metadata.
For example, the following command downloads the latest metadata available in the remote storage to the Backup Path:
zxsuite backup retrieveMetadataFromArchive S3 destination
See the documentation of retrieveMetadataFromArchive S3 for more information.
Supported external volumes (i.e. shared volumes mounted at the OS level, or object storage entirely managed by Zextras) are of two types, NFS and Fuse, which are described in the remainder of this section.
Before using the NFS/Fuse share, it is necessary to configure the new volume(s) that will store the backup, because no existing volume can be reused. Depending on which approach you choose, the steps to carry out differ. We describe here only the easiest and most reliable one.
When NFS shares are used, you need to make them visible and accessible to the OS and Zextras. This only requires adding a row to /etc/fstab with the information needed to mount the volume. For example, to mount the volume /media/mailserver/backup/ from a NAS located at 192.168.72.16, add the following line to the bottom of /etc/fstab:
192.168.72.16:/media/mailserver/backup/ /media/external/ nfs rw,hard,intr 0 0
You can now mount the external storage by simply running mount /media/external/ on the server.
In the case of a multiserver installation, the admin must ensure that each server writes to its own directory, and that the destination share is readable and writable by the zimbra user.
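As a minimal check, assuming the mount point /media/external/ from the example above, both conditions can be verified with:

```shell
# Mount the share defined in /etc/fstab, then verify that the zimbra
# user can create and delete a file on it; paths are the ones used
# in the example above.
mount /media/external/
sudo -u zimbra touch /media/external/.rw_test \
  && sudo -u zimbra rm /media/external/.rw_test \
  && echo "share is writable by the zimbra user"
```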
In a multiserver installation, consider a scenario in which the same NAS at 192.168.72.16 exposes the directory /media/externalStorage via NFS, and we want to store our multiserver backups on this NAS.
To do so, on each server you need to add the corresponding entry (Server1 on the first server, Server2 on the second, and so on):

192.168.72.16:/externalStorage/Server1 /mnt/backup nfs rw,hard,intr 0 0
192.168.72.16:/externalStorage/Server2 /mnt/backup nfs rw,hard,intr 0 0
192.168.72.16:/externalStorage/Server3 /mnt/backup nfs rw,hard,intr 0 0
Before using an ObjectStorage, a dedicated Zextras bucket must be created.
While similar in concept, Zextras Backup and Zextras Powerstore buckets are not compatible with each other. If Powerstore data is stored in a bucket, it is not possible to store Backup data in the same bucket, and vice versa.
The zxsuite core listBuckets all command reports the bucket usage, for example:
bucketName                  hsm
protocol                    HTTPS
storeType                   S3
accessKey                   xxxxx
region                      EU_WEST_1
uuid                        58fa4ca2-31dd-4209-aa23-48b33b116090
usage in powerstore volumes
        server: server1 volume: centralized-s3
        server: server2 volume: centralized-s3
usage in external backup
        unused

bucketName                  backup
protocol                    HTTPS
storeType                   S3
accessKey                   xxxxxxx
region                      EU_WEST_1
destinationPath             server2
uuid                        5d32b50d-79fc-4591-86da-35bedca95de7
usage in powerstore volumes
        unused
usage in external backup
        server: server2
Since each Zextras Bucket is identified by a prefix, the combination of S3 bucket credentials and Zextras bucket prefix uniquely identifies a Zextras Bucket, so multiple Zextras Buckets can be stored within a single S3 bucket. In other words, within the same Amazon S3 bucket you can define several Zextras Buckets, to be used both for Powerstore HSM and for Backup.
In multi-mailbox environments, it is not necessary to create multiple buckets: you only enter the bucket configuration information when enabling the remote backup on the first server. The prefix parameter can then be used to store each additional server's data in a separate directory on the same storage.
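For illustration, assuming the setBackupVolume syntax shown later in this section, an additional server could reuse the same bucket with its own prefix (the prefix value below is a placeholder, not a confirmed default):

```shell
# Hypothetical: reuse the bucket already registered on the first server.
# The bucket_configuration_id is the uuid reported by listBuckets; the
# prefix value (server2) is illustrative only.
zxsuite backup setBackupVolume S3 bucket_configuration_id 5d32b50d-79fc-4591-86da-35bedca95de7 prefix server2
```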
Once the external storage has been set up, it is necessary to let Zextras Suite use it. The procedure is slightly different depending on whether the new storage will be accessed from a newly installed server or existing local backups must be migrated to the external storage.
If the backup has not yet been initialized on the server, an Administrator can configure the external storage by running
zxsuite backup setBackupVolume S3 bucket_configuration_id VALUE [param VALUE[,VALUE]].
Once the backup is initialized, it will use the external storage. In either case, check for any missing blobs in the Zimbra volumes with doCheckBlobs to avoid integrity errors.
Before actually carrying out the migration, please perform the following maintenance tasks, which will minimise the risk of errors:
Double-check Zimbra permissions on the active backup path
Make sure that the Zextras cache folder is accessible by the Zimbra user (typically under
Check for table errors in myslow.log and in the MariaDB integrity check report. If any error is found, consider running the mysqlcheck command to verify the database integrity.
Check for any missing blobs in the Zimbra volumes with doCheckBlobs
Check for any missing digests in the backup with doSmartScan deep=true
Check for any orphaned digests or metadata in the Backup with doCoherencyCheck
Optionally run a doPurge to remove expired data from the Backup
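Under the assumption that the checks above are run through the zxsuite CLI (the module names below are assumptions where this section does not state them), the pre-migration sequence might be sketched as:

```shell
# Sketch of the checklist above as a command sequence; module and
# command placement (powerstore vs. backup) are assumed, not confirmed.
zxsuite powerstore doCheckBlobs        # missing blobs in the Zimbra volumes
zxsuite backup doSmartScan deep=true   # missing digests in the backup
zxsuite backup doCoherencyCheck        # orphaned digests or metadata
zxsuite backup doPurge                 # optional: remove expired data
```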
Finally, once the migration has been completed, you can carry out this last task:
Manually remove the old backup data. The migration only copies the backup files to the new external storage and leaves the originals in place.