• Brad

    My company provides MSP backup services and is moving to MSP360. We are using image-based backups to provide bare-metal restore capabilities, and we are backing up to Backblaze B2. I am having trouble understanding the interaction between incremental/full backups and versions. A test server we are backing up has approximately 700 GB of data, and our image-based backup takes about two weeks to upload a full. We have Versions set to 30, Delete Versions Older Than 30 Days, and Always Keep Last Version. Incrementals run daily and fulls run monthly.

    The problem I foresee with this configuration is that, given the size of our backup, the server will literally be uploading a full backup half the time. This would leave the server susceptible to data loss if an event occurred during the full backup upload, since I cannot run another image-based backup (locally) while the upload is in progress. Am I correct in assuming that versions are only deleted during a full? If we increase the time between fulls, how do we ensure the retention policy is still met (versions older than 30 days are deleted) without increasing the number of versions stored? What is the advised solution for backing up large servers while balancing the amount of data stored in the cloud?

  • Matt
    Yeah, a full is required for proper purging of your data. The software only deletes old versions in whole chains of full + incrementals, so if retention is somehow misconfigured you'd end up with a lot of data that is never purged. The best way to set it up is to run full backups more frequently.

    If there are problems with slow local upload speeds, it's better to contact support about that, since it could be a technical problem.
  • Brad (Accepted Answer)
    What is the proper way to configure a 30-day retention policy so we don't end up with excess data?
  • David Gugick
    If you set 30 days' retention and run a full every 30 days, you'll end up with 60 days in storage for one day, until the previous 30-day set can be removed. How you configure it is completely up to you and may be driven by your desire to limit the frequency of long full backups versus the need to reduce overall cloud storage. I would suggest you stagger your full backups so all servers are not running fulls at the same time.

    Remember that retention will be maintained as you set it, and in order for older backup sets to be removed (as they are full + incrementals), the entire set must be removed at the same time. That can only happen when removing the set does not violate retention settings. If you need 30 days of retention, your full backup schedule determines how long before a set can be removed. If you run fulls every 15 days with 30 days' retention, then after the third backup set is complete (45 days), the first 15 days can be removed while still leaving you 30 days in storage. But if running fulls is more of a concern, then just be aware that you'll need more storage.
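    David's arithmetic above can be sketched as a quick calculation. This is only an illustration of the purge logic he describes, not an MSP360 feature; the function name and the formula are my own framing of his two examples:

    ```python
    import math

    def storage_window(full_interval_days: int, retention_days: int) -> tuple[int, int]:
        """Worst-case days of data held in storage, and the day the oldest
        backup set first becomes purgeable, given that a set (full +
        incrementals) can only be deleted whole without violating retention."""
        # Enough newer sets must exist to cover the retention window
        # before the oldest set can be removed.
        sets_to_cover = math.ceil(retention_days / full_interval_days)
        first_purge_day = (sets_to_cover + 1) * full_interval_days
        worst_case_days = retention_days + full_interval_days
        return worst_case_days, first_purge_day

    # Fulls every 30 days, 30-day retention: 60 days held, first purge at day 60.
    print(storage_window(30, 30))   # (60, 60)
    # Fulls every 15 days, 30-day retention: 45 days held, first purge at day 45.
    print(storage_window(15, 30))   # (45, 45)
    ```

    Both results match the examples in the reply above: shorter full intervals shrink the worst-case storage overhead at the cost of uploading fulls more often.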
  • Brad
    Understanding that the configuration is ultimately up to me, our recovery point objective is to have at least 14 restore points. How would you configure versioning and fulls on a server with close to 1 TB of data and a 30 Mbps upload link? I'm not trying to make you liable for anything; I just want an experienced recommendation so I can gain a better understanding. Thank you for the help.
  • Steve Putnam
    Our approach to client backup/recovery using MSP360 MBS is a bit different, and is based on separating data-file recovery from OS/system recovery.
    For the data files we use file-level backup to a local USB drive and to the Cloud. The initial Cloud backup takes a long time, but after that, only files that have been added or modified are uploaded, typically a very small amount of data. The retention period for versions is usually 90 days. We run "full" file backups once a week, which are only marginally bigger than a block-level backup.

    For operational OS/system recovery (meaning any issue that requires a reload), we do daily or weekly Image backups of the C: drive to the local USB drive, but exclude the folders/drives that contain the data files, as they are backed up at the file level.

    For true disaster recovery (when the server PC and the local USB drive are unusable), we run monthly full Image backups to the Cloud, again excluding the data folders.
    These Image backups typically range from 25 GB to 100 GB or so, and we keep two months' worth in the Cloud.
    We do not see the need for (or have the bandwidth for) a daily Cloud Image backup, or even a weekly one for most customers, whose OS and apps do not change often.
    To recover, we do a bare-metal image recovery from the USB drive or Cloud, then restore the files from the most recent backup.
    Other notes:
    At 30 Mbps, you should be able to upload 10-13 GB per hour, meaning a 50 GB system image would take under 5 hours to upload to the Cloud. And most recoveries can utilize the local image backup.
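    As a sanity check on those throughput numbers, here is a rough sketch. The 0.8 efficiency factor (for protocol overhead and real-world throughput below line rate) is my assumption, chosen to land in the 10-13 GB/hour range quoted above:

    ```python
    def upload_hours(size_gb: float, mbps: float, efficiency: float = 0.8) -> float:
        """Rough upload-time estimate: line rate in Mbps discounted by an
        assumed efficiency factor, converted to GB (decimal) per hour."""
        gb_per_hour = mbps / 8 * 3600 / 1000 * efficiency
        return size_gb / gb_per_hour

    # 30 Mbps at 80% efficiency is about 10.8 GB/hour,
    # so a 50 GB image takes roughly 4.6 hours -- under 5, as stated.
    print(round(upload_hours(50, 30), 1))   # 4.6
    ```

    The same formula puts a 1 TB full at roughly four days on a 30 Mbps link, which is why keeping Cloud images small (by excluding data folders) matters so much in this approach.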
    We have a customer with 3 TB of data and have no trouble running the local and Cloud file backups each night and the OS images on the weekends.
    We employ this same approach for our clients with Hyper-V instances. We try to create separate VHDX files for the data drives so that we can exclude them.
    I realize that other MSPs have different approaches and requirements, but this strategy has worked well for the 60 servers that we support.
    I would be happy to discuss the details of different strategies with you either in this forum or offline.
  • David Gugick
    Set retention to 14 days and run the fulls as needed. The full schedule does not affect retention; it only determines how frequently the fulls run and how long before data can be removed from storage. I'd say a 14-day full schedule is a good start. If fulls are too burdensome to run that frequently, move to 28 days, but understand your effective retention is really 28 days then.
  • Brad
    Thank you that is very helpful.