|This feature is currently in beta; its use in production environments is not recommended.|
Zextras Backup can also store its data on a third-party S3 bucket or on an NFS/FUSE share.
The standard local Backup Path will still be used to store metadata, a copy of which will then be uploaded to S3 every night, while BLOBs will be stored in the bucket straight away.
This makes it possible to effectively split the live data, keeping the metadata on a local disk and the BLOBs on remote storage. Since metadata usually accounts for about 10% of the backup space but 90% of the total read/write operations, this design ensures a good level of performance (especially in metadata I/O-heavy tasks, such as the Purge and the SmartScan) even when the largest part of the data is hosted remotely.
|Even when backing up on a Third Party Store, the local Backup Path will still be used for metadata and to cache BLOB writes.|
While BLOBs are temporarily cached on the local disk and uploaded to the remote volume as soon as possible, metadata is stored locally in the Backup Path.
An incremental copy of all metadata to the remote volume is performed every time the scheduled SmartScan runs, and every time a SmartScan is manually executed with the deep true option.
It is also possible to force a metadata upload using the remote_metadata_upload option in the following commands:
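As a sketch (the exact set of commands accepting this option is not listed here, so treat this as an assumption and verify against your installed version), forcing the metadata upload during a manually triggered SmartScan might look like:

```shell
# Assumed usage: run a manual SmartScan and force the metadata
# upload to the remote volume in the same pass.
zxsuite backup doSmartScan remote_metadata_upload true
```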
Remote metadata can be fetched to perform a Disaster Recovery or to update/fix local metadata: the zxsuite backup retrieveMetadataFromArchive command downloads the latest metadata available in the remote storage to the Backup Path.
Data is stored in third-party storages using a structure very similar to the one of the Backup Path:
|-- accounts
|-- items
|-- server
`-- backupstat
The only difference in how the content is saved is that all metadata for a mailbox are compressed into a single .tar.gz file instead of being stored uncompressed and spread across multiple subfolders.
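The packing difference can be illustrated with plain tar; the paths and file names below are illustrative only, not the exact Zextras layout:

```shell
# Illustrative only: mimic a per-mailbox metadata tree and pack it the
# way the remote store keeps it -- one compressed archive per mailbox.
set -e
demo=$(mktemp -d)
mkdir -p "$demo/accounts/account1/items"
echo '{"id": 1}' > "$demo/accounts/account1/items/1.meta"

# Remote layout: the whole per-mailbox tree collapses into a single .tar.gz
tar -czf "$demo/account1.tar.gz" -C "$demo/accounts" account1

# Listing the archive shows the subfolder structure preserved inside it
tar -tzf "$demo/account1.tar.gz"
```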
New Zextras Backup users who don’t have an initialized backup just need to run the setBackupVolume S3 command to set up the S3 backup. Running it without any further arguments will display the full usage message:
zxsuite backup setBackupVolume S3 [param VALUE[,VALUE]]
Existing Zextras Backup users, on the other hand, must use the migrateBackupVolume S3 command to set up the S3 backup and migrate their current data to it.
zxsuite backup migrateBackupVolume S3 [param VALUE[,VALUE]]
In both cases, the command will create a new bucket if provided with all the required information, or will use an existing bucket if the bucket_configuration_id parameter is passed.
The backup migration to a third-party store also copies the metadata archives, the server’s configuration, and the map files. Unlike in previous releases, SmartScan is not run at the end of the migration, in order to reduce bandwidth consumption. Moreover, if the server’s logging is set to debug level, a line is added to the log file for each uploaded file.
|While similar in concept, Zextras Backup and Zextras Powerstore buckets are not compatible with each other - if Powerstore data is stored in a bucket it is not possible to store Backup data on the same bucket and vice-versa.|
In multi-server environments, it’s not necessary to create multiple buckets: enter the bucket configuration information when enabling the remote backup on the first server, then use the volume_prefix parameter to store each server’s data in a separate directory of the same storage.
On Server 1, set up the S3 backup by creating a new bucket:
zxsuite backup setBackupVolume S3 \
    bucket_name "Backup Bucket" access_key a1b2c3e4 \
    secret f5g6h7i8j9k0 region EuWest \
    url s3api.myvendor.com volume_prefix "server1"
After the backup is created, list all buckets and take note of the bucket_configuration_id of the backup bucket:
zxsuite core listBuckets all
On Server 2, set up the S3 backup using the previously created bucket:
zxsuite backup setBackupVolume S3 bucket_configuration_id vw12xy34z56 volume_prefix "server 2"
While at first glance setting up the backup specifically for NFS or FUSE might seem of little use, since it still requires a local mountpoint, the backend differences in metadata handling ensure a greater degree of data safety.
Splitting the high-access metadata from the BLOBs means that storage failures, such as the share briefly becoming unavailable, are handled more gracefully: the local cache grants a higher backup resilience.
To back up on "Local" shares such as NFS or FUSE, first mount the share, then use the appropriate command based on your needs:
No pre-existing backup:
zxsuite backup setBackupVolume Local
Pre-existing backup:
zxsuite backup migrateBackupVolume Local
Both commands only require a single argument, which is the path to the local mountpoint of the NFS/FUSE share.
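As a sketch for a fresh setup (the NFS server name, export path, and mountpoint below are placeholders, and the mount options depend on your storage; the mountpoint-as-sole-argument usage follows the description above):

```shell
# Mount the NFS export locally first (placeholder host and paths)
mount -t nfs storage.example.com:/export/zextras-backup /mnt/backup_share

# Point Zextras Backup at the mountpoint (no pre-existing backup)
zxsuite backup setBackupVolume Local /mnt/backup_share
```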