• Dhayes
    Hi all. I am loving my trial of CloudBerry Backup. So simple and powerful. But I have a more general question for people using the product who might be pushing about 1 TB of data, perhaps across multiple VMs. Assuming you are NOT using synthetic fulls and do not have a super fast internet pipe (>20 Mbps upload), how is it working for you? How are you getting your fulls up to the cloud efficiently without having periodic full backup jobs cause incrementals to wait or be skipped?

    We have many larger sites using storagecraft and their cloud option. We are looking at CB but are just concerned about the amount of data that needs to get up to the cloud.

    I am just curious how others with that amount of data going to the cloud manage this. For instance, 3 fulls totaling 1 TB would take over a week to fully upload, so the daily jobs will not run until they complete.

    This is NOT a knock. CBB is awesome and it has been super solid. I was just wondering what others are doing.

  • tekshelter
    I completely agree. We currently have a customer with a Hyper-V host with over 1TB of data. It takes 4-6 days to push a full backup to the cloud. We are struggling to come up with a good solution, as missing that many days of incremental backups is not viable. We previously used Solarwinds Backup, which syncs the local backup to the cloud in the background. If CBB could implement a similar feature, it would be great!
  • Steve Putnam
    The concept of a periodic full is what confused me in the beginning. I assumed it was like tape backup - a whole new full backup of every file. But if a file never changes, it never gets backed up again. Think PDFs, JPGs, MP3s, etc. Our "full" is run once per month and takes only a little longer than an incremental, as it is only uploading full copies of the files that have changed since the last full backup.
    We have many customers with >1 TB of data. We do file level incremental backups of the changed data to the cloud nightly. Typically the monthly "full" on 1TB is only about 30 - 50GB. We run it on weekends and it finishes with no problem.
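    To illustrate the scheduling logic described above (this is just a rough sketch of the idea, not CloudBerry's actual implementation, and all file names are made up): a nightly incremental uploads only files changed since the last backup, while the periodic "full" re-uploads complete copies of just the files changed since the previous full. Unchanged files are never sent again.

    ```python
    from datetime import datetime, timedelta

    def files_to_upload(files, last_backup, last_full, is_full_run):
        """files: dict of {name: last_modified_datetime}.
        Returns the names that this run would upload."""
        # A full run reaches back to the last full; an incremental
        # only reaches back to the last backup of any kind.
        cutoff = last_full if is_full_run else last_backup
        return sorted(name for name, mtime in files.items() if mtime > cutoff)

    now = datetime(2023, 6, 1)
    files = {
        "report.pdf": now - timedelta(days=400),  # never changes: never re-uploaded
        "ledger.db":  now - timedelta(hours=6),   # changed today
        "photo.jpg":  now - timedelta(days=10),   # changed earlier this month
    }
    last_backup = now - timedelta(days=1)   # last night's incremental
    last_full   = now - timedelta(days=30)  # last month's full

    # Nightly incremental: only what changed since yesterday.
    print(files_to_upload(files, last_backup, last_full, is_full_run=False))
    # Monthly "full": everything changed since the last full -- still far
    # less than the whole 1 TB data set.
    print(files_to_upload(files, last_backup, last_full, is_full_run=True))
    ```

    This is why the monthly "full" on 1 TB of mostly static data can be only 30-50 GB: the cutoff never reaches back past the last full, so long-unchanged files stay out of every run.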
    We also do VHDX file backups to the cloud each weekend for DR purposes. We have separate Hyper-V guests for the domain controller, file server, SQL app, etc. Further, we use separate VHDX files for the file server instance's C: (OS) drive and D: (data) drive, so we only need to upload the C: drive VHDX, which is under 30 GB (compressed).
    One of our customers has 4 TB of data, and once the initial upload was done (that took a long time), we have had no issues completing incremental backups to two cloud locations each night, as well as a local backup and weekly VHDX backups.
    There is no need to do a VHDX backup of the data drive, as you have the file backups to pull from.
    And if your setup has apps and data on the Hyper-V host itself, then an image backup can be done the same way - just exclude the data paths.
    I apologize if I am misinterpreting your situation, but I would be glad to assist you in any way I can.