Forum tip: Always check when replies were posted. Technology evolves quickly, so some answers may not be up-to-date anymore.

Comments

  • How to Delete entire Folder from Cloud Backup Storage
    Just be sure that you have “Allow Data Deletion in Backup Agent” selected under Companies: Agent Options. You should then be able to remove unwanted folders.
  • Expected Data Usage Compared to Drive Size
    You want an image backup of a machine so you can restore the OS and all of the installed apps. If the user folders are not huge, then the image will also include a copy of all of the data files. But if the user data folders are large (greater than 200GB in our model), we exclude them from the image backups. The data is already being backed up separately via daily file backups with a 90-day retention period.
    The image backups are for disaster recovery - not file recovery.
    I would be happy to work with you to get things set up properly; let me know if you want help. (I don't charge anything.)
  • Expected Data Usage Compared to Drive Size
    Get yourself a NAS device with 4TB usable (and room to add drives if needed later). That should be plenty for your situation, and they are fairly inexpensive.
    To attempt to answer your question:
    For standard (legacy format) file backups of 200GB with a retention period such as yours, I would expect you to consume between 190GB and 240GB depending on the data change rate. How could it be less than 200GB? Compression. We typically get a ~20% compression rate overall. Most data never changes (PDFs, pictures, etc.), and if you are backing up QuickBooks, Word, Excel docs, etc., you get a high compression rate and the block-level incrementals are small. (A rough sketch of the math is at the end of this comment.)
    There are some data types (like app-specific backups) that can generate a new GB+ file each day, such that keeping a month's worth will consume more.
    If you plan on doing local image backups for six devices, you will consume a lot more space, but daily block-level incrementals tend to be small, and you can exclude the data folders from the image backup since you are already backing up those files separately.
    Remember that with a once-per-month full image, you will always have two months' worth of backups in storage, since you can't purge the previous month until that month's last block incremental has reached 30 days old, around day 59.
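    For anyone who wants to sanity-check those numbers, here is a rough back-of-the-envelope sketch in Python. The compression ratio and daily change rate are my own assumptions for illustration, not MSP360 figures:

      # Rough estimate of cloud storage consumed by legacy-format file backups.
      # All rates below are assumptions for illustration only.
      protected_data_gb = 200        # size of the protected data set
      compression_ratio = 0.80       # ~20% compression overall (assumed)
      daily_change_rate = 0.002      # fraction of data changed per day (assumed)
      retention_days = 90            # file backup retention period

      full_gb = protected_data_gb * compression_ratio
      incrementals_gb = (protected_data_gb * daily_change_rate
                         * compression_ratio * retention_days)
      print(f"Estimated consumption: {full_gb + incrementals_gb:.0f} GB")
      # With these assumptions: 160 GB full + ~29 GB of incrementals, roughly 189 GB,
      # near the low end of the 190-240 GB range quoted above. A higher change
      # rate pushes you toward the high end.

      # The two-months-in-storage effect for monthly full images: the previous
      # month's chain can only be purged once its *last* block incremental is
      # older than the 30-day retention period, i.e. around day 59.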
  • Image Restore Followed By File Restore
    Under the Remote Management tab, where the devices are listed, you will see a gear with a drop-down arrow. Select Edit: Edit Options and you can do a repo sync.
  • Image Restore Followed By File Restore
    Either from the server console or the MBS portal, select Options: Repository, then select the cloud/local storage that you want and synchronize it. This will update the repository so that a file restore will get the most current backup data.
  • Prefilled Encryption Key in MBS portal - Big Problem
    Figured out a way: the CBB_Report folder has the same strings associated with the plans, so by looking at the plan run report I can match them up.
  • Prefilled Encryption Key in MBS portal - Big Problem
    So it turns out that the prefilled encryption key is not causing the issue. The original encryption key was wrong, so when I put in the right one, it said the key had changed and would do a new full.
    Looking for a way to tell which other plans have the wrong encryption key. In legacy, I would simply download a small file using Cloudberry Explorer, and if the encryption key is correct, the download works.
    In V7 however, all I see in CB Explorer is a lengthy string for each plan. Is there any way to display the string associated with each plan?
  • Configuring incremental backups with periodic full backup
    That was my plan - stick with legacy for file backups and use the new format for image/VHDx backups, taking advantage of the synthetic full capability. What worried me was the statement (hopefully since retracted) that there will come a time when the legacy format will no longer be supported. Let's hope that day never comes.
  • Prefilled Encryption Key in MBS portal - Big Problem
    I will send the logs to the open ticket. Thanks for your help.
  • Prefilled Encryption Key in MBS portal - Big Problem
    I have been avoiding looking at the keys for that reason, but the prefilled password managed to get saved in a plan, forcing a full backup to occur. It could be that someone on the team looked at it, not realizing the implications.
    I am just glad that it throws a notification that the encryption key has been changed. Otherwise we'd never know until a restore didn't work. I would be looking for another job.
  • Prefilled Encryption Key in MBS portal - Big Problem
    Edit a V7 file plan in the portal and go to Compression and Encryption. Sometimes the default encryption key is visible; other times, if you look at it, it shows jlx40..6mp…qZnu9wVJQ==
    If you then save the plan, it apparently changes the key.
  • Delete Cloud Data When No Longer Using A Backup Plan
    Yes, I forgot that part - you can resync the repository via the MBS Portal by selecting Edit: Options: Repository. Another note - if you switch from legacy to V7 format on a machine, the new format data is in something called CBB_Archive - the old data will be in the folder with a drive letter.
    Question for David G.:
    I have a server with 1 million files. The initial backup two years ago took a week, but ever since, every backup has been "incremental" - meaning only changed files get backed up. With the new format, do we actually have to periodically upload everything again? Even JPGs and PDFs that have never changed?
    I suppose the synthetic full would reduce the time to do that - but we use Google Nearline, which currently does not support synthetic fulls.
  • Delete Cloud Data When No Longer Using A Backup Plan
    We use Cloudberry Explorer to delete old cloud backups in scenarios such as the one you described.
    Be sure to put a password on the Explorer app for security purposes.
    For local backups we log in to the server/device being backed up and manually delete the old backups. Since we started using 5TB external USB drives for local backups, space has rarely been an issue, so sometimes we just leave the old ones there. We use our RMM tool to monitor available space on the drive (a simple script like the one below works too).
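    If you don't have an RMM handy, even a tiny script like this (a generic sketch, nothing MSP360-specific; the drive letter and threshold are just example values) can flag a local backup drive that is filling up:

      import shutil

      BACKUP_DRIVE = "E:\\"    # hypothetical local backup target
      MIN_FREE_GB = 500        # alert threshold (example value)

      free_gb = shutil.disk_usage(BACKUP_DRIVE).free / (1024 ** 3)
      if free_gb < MIN_FREE_GB:
          print(f"WARNING: only {free_gb:.0f} GB free on {BACKUP_DRIVE}")
      else:
          print(f"OK: {free_gb:.0f} GB free on {BACKUP_DRIVE}")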
  • Configuring incremental backups with periodic full backup
    So help me understand. If we use the new backup format for a server with 1 million files (800GB), we have to re-upload all of the files periodically? I get that if one uses a cloud storage vendor that supports synthetic fulls, the full would act like an incremental, but we back up to Google Nearline - not supported for synthetic fulls.
    Is it true that the entire set of files would have to be re-uploaded with each full? Even PDFs, JPGs, etc. that never change?
  • MSPBackups.com Website Slow
    Performance is MUCH better. Kudos to MSP360 for upgrading the backend.
  • Unfinished Large Files
    Great, thank you. Saves me a lot of hassle having to manually go in and delete them each week.
  • Unfinished Large Files
    David-
    Can you confirm that the latest MSP360 Version 7 now properly deletes unfinished file uploads from BackBlaze B2/S3-compatible storage?
  • Retention Policy Problem with V7
    So I finally got the answer from support:

    You are absolutely right about new backup format retention. In new backup format, retention is the period of time a backup generation is retained once it has been replaced. This means in your Full Backup Only configuration you will always have two Full Backups. It is not currently possible to delete your previous Full Backup when you perform your next Full Backup.

    This is different from how retention worked in legacy backup format. Legacy format retention period was based on backup date.

    I'm going to submit a feature request on your behalf.
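    To make the difference concrete, here is a small sketch of how I read support's explanation (my interpretation, not official MSP360 logic), comparing when a full backup becomes purgeable under the two retention models:

      from datetime import date, timedelta

      retention = timedelta(days=30)
      full_1_taken = date(2021, 10, 1)   # previous full backup
      full_2_taken = date(2021, 11, 1)   # next full, which replaces it

      # Legacy format: retention counts from the backup date.
      legacy_purge = full_1_taken + retention          # 2021-10-31

      # New backup format: retention counts from the moment the generation
      # is replaced by the next full.
      nbf_purge = full_2_taken + retention             # 2021-12-01

      print("Legacy format purge date:", legacy_purge)
      print("New format purge date:   ", nbf_purge)
      # Under the new format the previous full sits in storage for its whole
      # lifetime plus the retention period, so there are always two fulls.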
  • How to configure backup to use the less possible volume on destination ?
    David/MHC - A couple of points:
    1. Wasabi has a 90-day minimum (timed) storage policy, meaning that if you purge data before 90 days, you still get charged for the full 90 days. BackBlaze has no such timed-storage restrictions. (There's a quick cost example at the end of this comment.)
    2. I am working with support to understand why the retention/purge process behaves differently with the New backup format (NBF) compared to the old format.
    Simply put, in the old format I could run a weekly full image backup to BackBlaze with a 3 day retention period, and when the next weekly full ran, the previous week's image would get purged (as it is over three days old). I end up with one full image.
    That is NOT what is happening in NBF.
    Using weekly synthetic fulls only - no scheduled incrementals, with the same 3-day retention period - the previous week's generation is NOT getting purged at the completion of the new synthetic full.
    I am in the process of trying the one-day retention setting to see if it changes the behavior, but for the life of me I do not understand why it doesn't work the way it used to. Once the synthetic full completes, there is ZERO dependency on the prior generation.
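    On point 1, the billing difference is easy to underestimate. A quick illustrative calculation (the per-TB price is an assumption for the example only; check Wasabi's current pricing):

      # Cost of 1 TB stored for only 7 days before being purged.
      price_per_tb_month = 5.99   # assumed USD list price, for illustration
      data_tb = 1.0
      days_stored = 7

      # Wasabi-style 90-day minimum: you are billed for 90 days regardless.
      with_minimum = price_per_tb_month * data_tb * (90 / 30)
      # Provider with no minimum duration: you pay only for the days used.
      without_minimum = price_per_tb_month * data_tb * (days_stored / 30)

      print(f"With 90-day minimum:    ${with_minimum:.2f}")
      print(f"Without minimum charge: ${without_minimum:.2f}")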
  • Retention Policy Problem with V7
    See the screenshot below, taken today, 11/7.
    The first generation from 10/30-10/31 says it will be deleted in two days.
    Yet the retention period is only 3 days and the job runs weekly, so the first generation should have been deleted at the completion of the second generation on 11/6.
    Now, if the data would actually get purged without having to wait until the next week's plan runs, it would be OK. But it appears that there is no way to prevent there always being two generations in cloud storage - doubling my cost.
    [screenshot attachment]