We are currently testing MBS as a last-resort backup solution for our business customers' servers. We have had local backups set up for years, but due to recent events we have started integrating cloud backups for all of our customers' servers. I have really liked the feature set of MBS so far, and I believe it is one of the best backup solutions available. We are using the image backup software to keep a copy of each server so that if a location is lost we can restore the server to another set of hardware or to a cloud computing service.
What I have questions about is the recommended retention policy. I know everyone's situation is different and will have different requirements, but I would like some help figuring out our position. I have been trying to strike a balance between keeping a current image of the server and keeping the data storage footprint as small as possible.
I wanted to know if there is some retention policy documentation that I am missing, or if someone at CloudBerry would be able to break down the requirements for being able to restore a full, current server image on demand. We can already restore past versions with our local backup solution, so having that ability in the cloud is not a dire requirement; we are using MBS as more of a last-resort backup.
If anyone could help answer these questions I would greatly appreciate it.
Thanks in advance,
Caleb
[reply=“Tom’s Help Desk;d85”] As you mentioned, every situation is different. Retention periods simply refer to “how far back do you need to recover from”. If your data is otherwise protected and company policy is being met by that retention period, you might be OK. If you just want MBS to provide the “most recent image” and don’t need it for data retention then you can set your retention to be very small.
I wish there was a “standard” answer but it really does depend on each situation. Certain regulations can also dictate retention periods and then there’s company policy, especially if you need to do “discovery” for a lawsuit.
I’d also consider whether you are running a cloud, local, or hybrid backup.
What’s the size of your instances? If you are running cloud, is your ISP good enough to upload images daily? If not, consider running synthetic full image backup (www.cloudberrylab.com/blog/synthetic-full-backup-with-cloudberry-backup/)
But, as Doug said, consider your RTO/RPO first.
[reply=“Denis Gorbachev;117”] [reply=“Doug Hazelman;116”] Thank you both for your replies. Those links have really good information, especially the “Block Level Backup and Full Backup Explained” linked in the article that Denis linked.
Our current schedule for backups is:
Backup options:
Compression: enabled
Encryption: disabled
Retention policy:
purge version older than 7 day(s)
keep only 7 versions
Schedule backup:
Occurs every week on Monday, Tuesday, Wednesday, Thursday, Friday at 12:00 AM.
Full backup options:
Full backup schedule:
Occurs every week on Saturday at 12:00 AM.
This was created during a remote session with a support rep, and I wanted to confirm that what we are doing is not incorrect. The RTO and RPO are definitely going to be customer-specific, and that is something I will have to go over with them. For the time being, I wanted to get the current situation in order.

From what I read in the article linked above, the “Full Backup” does not re-image the server, but re-uploads all the files that had previously been backed up using block-level backup. That makes a lot of sense, so what I had thought of doing is performing a block-level backup two to three times a day and a full backup once a night at midnight.

What I wanted to confirm is how many versions we want to keep in the cloud. As I said before, with our current local backups we can see previous versions going back years, so to cut costs I would like to have only 1–2 versions available in the cloud. Storage space is the main deterrent from keeping more versions, and I would like to keep the backups as small as possible.
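To sanity-check my understanding of how the two retention settings in the plan above interact, here is a rough sketch in Python. `apply_retention` is my own illustrative function, not anything from MBS; it just models “purge versions older than 7 days” combined with “keep only 7 versions” against a run of daily versions:

```python
from datetime import datetime, timedelta

def apply_retention(versions, now, max_age_days=7, max_versions=7):
    """Illustrative purge logic (not MBS code): keep at most
    `max_versions` versions and drop versions older than `max_age_days`.
    The newest version is always kept so a restore stays possible."""
    versions = sorted(versions, reverse=True)  # newest first
    kept = [versions[0]]                       # always keep the latest
    cutoff = now - timedelta(days=max_age_days)
    for v in versions[1:]:
        if len(kept) >= max_versions:
            break                              # count limit reached
        if v >= cutoff:
            kept.append(v)                     # young enough to keep
    return kept

now = datetime(2018, 6, 18)
daily = [now - timedelta(days=d) for d in range(10)]  # 10 daily versions
print(len(apply_retention(daily, now)))
```

With daily backups, the 7-day age limit and the 7-version count limit end up purging at the same point; the count limit only bites sooner if you back up more than once a day.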
[reply=“Denis Gorbachev;117”] The MBS backup is cloud-only; we already have a system for local backups. The instance sizes range from around 200 GB of backed-up data to just over 1 TB. Some of our customers' ISPs are very fast, while others are not so much. The synthetic backup looks good, except we would have to switch over to S3 (we are currently on B2).
My question is: does the number of versions kept apply to every backup run, or just to the files themselves? That is, does the backup keep 7 changed versions of each file before purging? If there is any documentation on the retention policy, I would be happy to take a look at it.
I would not say that it is incorrect. The schedule looks just fine.
If you are running a full backup while running image-based backup, it actually creates a newer image. If you are running file-level backup, then a new full backup creates a new full version of each file, and after that a new sequence of block-level backups begins.
You can set different versioning for each backup you perform. This is done from the given backup plan wizard.
In the case of a file-level backup, if a given file was not changed, we don’t create a new version. In other words, we don’t re-back up all files, only changed ones.
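If I follow that correctly, the per-file behavior could be sketched like this (a hypothetical model of my own, not the actual MBS data model): an unchanged file gets no new version, a full run stores a changed file whole, and in-between runs add block-level versions:

```python
def backup_run(history, current_files, full=False):
    """Hypothetical sketch of per-file versioning, not MBS internals.
    history maps filename -> list of version records."""
    for name, content in current_files.items():
        versions = history.setdefault(name, [])
        if versions and versions[-1]["content"] == content:
            continue  # unchanged file: no new version is created
        # a changed file is stored whole on a full run (or its first
        # backup ever), otherwise as a block-level version
        kind = "full" if full or not versions else "block-level"
        versions.append({"kind": kind, "content": content})
    return history

h = {}
backup_run(h, {"a.txt": "v1", "b.txt": "v1"}, full=True)   # initial full
backup_run(h, {"a.txt": "v2", "b.txt": "v1"})              # daily run
backup_run(h, {"a.txt": "v2", "b.txt": "v1"}, full=True)   # weekly full
print(len(h["a.txt"]), len(h["b.txt"]))
```

Under this model the changed file `a.txt` ends up with two versions (a full plus a block-level) while the untouched `b.txt` keeps just its one original full version, which matches the “only changed files get new versions” behavior described above.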
Regarding the retention, I think you may find these articles helpful: