• Stratos Misinezis
    4
    Much has been written in this forum, and I was wondering if there is a pros and cons comparison between the new and legacy backup formats.
  • David Gugick
    118
    There is extensive coverage of all the benefits over the legacy format in the online help here: https://mspbackups.com/AP/Help/backup/about/backup-format/about-format

    Here's an overview of the benefits:
    • Grandfather-Father-Son (GFS) retention policy for long-term retention management
    • Immutability - for backup data protection in the cloud
    • Client-Side Deduplication to reduce backup data and improve backup times (see the sketch after this list)
    • Consistency Checks to ensure backup data is present and correct
    • Synthetic Backup for all backup types: file, image-based, Hyper-V and VMware backups
    • Changed Block Tracking for Image-Based Backup
    • Restore from restore points
    • The number of requests to storage is reduced significantly (faster backups and restores)
    • Uploading by data parts enables continued upload in case of network issues
    • Support for any characters (emoji, 0xFFFF, etc) and extra-long filenames
    • Automatic filename encryption (one password for generation)
    • Real full backup for file-level backups (for generational retention of complete backup sets)
    • Fast synchronization (reduced number of objects in backup storage)
    • Plan configuration is always included in a backup
    • Backup logs are backed up along with backup data
    • Object size is now limited to 256 TB regardless of the storage provider limitations
    • Much faster purge operations
    • Faster backups and restores for large numbers of small files (file backup)
    • Lower cloud storage costs for large numbers of small files for warm and cold storage classes
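    On the client-side deduplication point, here is a minimal, generic sketch of the idea: hash fixed-size chunks of the source data and upload only chunks the backup storage has not seen before. This is an illustration under assumed details (the chunk size, hash choice, and upload callback are placeholders), not MSP360's actual implementation.

    ```python
    import hashlib
    from pathlib import Path

    CHUNK_SIZE = 4 * 1024 * 1024  # assumed 4 MB chunks; the real product may differ


    def dedup_upload(path, seen_hashes, upload_chunk):
        """Upload only the chunks of `path` whose hashes are not already known.

        `seen_hashes` stands in for the client-side index of previously uploaded
        chunks; `upload_chunk(digest, data)` is a placeholder for the storage call.
        Returns (chunks_uploaded, chunks_skipped).
        """
        uploaded = skipped = 0
        with Path(path).open("rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                digest = hashlib.sha256(chunk).hexdigest()
                if digest in seen_hashes:
                    skipped += 1  # duplicate data: nothing is sent over the wire
                else:
                    upload_chunk(digest, chunk)
                    seen_hashes.add(digest)
                    uploaded += 1
        return uploaded, skipped
    ```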

    I would say the only reasons to stay with the legacy format, if you will not greatly benefit from the new features, are:

    • File Backup: If you need incremental-forever backups, stay with the legacy format. This may be an issue if you're performing large local file backups (there is no synthetic full support for local storage yet)
    • File Backup: If you prefer version-based retention versus generational retention
    • If you need hybrid backups - as opposed to running two separate backup plans with the new format
  • jeff kohl
    0
    Can a synthetic full backup be saved to B2, and can it also be an immutable backup?
  • David Gugick
    118
    Synthetic Full backups are supported on Backblaze B2 with the new backup format for all backup types. https://mspbackups.com/AP/Help/backup/about/backup-format/synthetic-full-backup
  • Steve Putnam
    35
    David - Great write-up explaining the differences. Can you help me get a feel for the reduction in object count using the new format? I use Amazon lifecycle management to move files from S3 to Archive after 120 days (the files that do not get purged after my 90-day retention expires). However, the cost of the API calls makes that a bad strategy for customers with lots of very small files (I'm talking a million files that take up 200 GB total). If I were to re-upload the files in the new format to Amazon and do a weekly synthetic full (such that I only have two fulls for a day or so, then back to one), would the objects be significantly larger such that the Glacier migration would be cost-effective?
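    To put rough numbers on the API-call concern (purely illustrative; the per-request price below is an assumed placeholder, so check current AWS pricing for your region), lifecycle transitions to Glacier are billed per request, so object count drives the cost far more than total size:

    ```python
    # Back-of-the-envelope: lifecycle transition cost scales with object count.
    TRANSITION_PRICE_PER_1000 = 0.05  # assumed $ per 1,000 Glacier transition requests


    def transition_cost(object_count):
        return object_count / 1000 * TRANSITION_PRICE_PER_1000


    legacy_objects = 1_000_000          # one cloud object per small file
    total_gb = 200
    archive_size_gb = 1                 # assumed ~1 GB archives in the new format
    new_format_objects = total_gb // archive_size_gb

    print(f"Legacy format: ~${transition_cost(legacy_objects):,.2f} in transition requests")
    print(f"New format:    ~${transition_cost(new_format_objects):,.4f} in transition requests")
    ```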
  • David Gugick
    118
    Object count will normally be much, much lower. We use a dynamic algorithm to group files into archives - based partly on file processing speed and time, with a maximum archive size. I do not recall off-hand what that maximum is, but it's at least 1 GB. So, if you're backing up many small objects, they should fit nicely into the new archives. This will reduce API calls, since there will be far fewer objects to upload and move around through lifecycle policies (and restores), and you're not going to be below the minimum object size for S3-IA and Glacier, which can add greatly to storage costs. You can always run a local test off-hours on a subset of data to see what the archives look like.
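    If it helps to visualize the grouping, here is a toy sketch of packing many small files into archives capped at a maximum size. The 1 GB cap and the simple greedy strategy are assumptions for illustration; the actual algorithm also weighs processing speed and time, as noted above.

    ```python
    MAX_ARCHIVE_BYTES = 1 * 1024**3  # assumed 1 GB cap per archive


    def pack_into_archives(file_sizes, max_bytes=MAX_ARCHIVE_BYTES):
        """Greedily group file sizes (in bytes) into archives no larger than max_bytes."""
        archives = []
        current, current_size = [], 0
        for size in file_sizes:
            if current and current_size + size > max_bytes:
                archives.append(current)
                current, current_size = [], 0
            current.append(size)
            current_size += size
        if current:
            archives.append(current)
        return archives


    # Roughly the scenario above: a million small files totaling ~200 GB.
    files = [200 * 1024] * 1_000_000  # ~200 KB average file size
    archives = pack_into_archives(files)
    print(f"{len(files):,} source files -> {len(archives):,} cloud objects")
    ```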
  • Steve Putnam
    35
    Thanks David. I will do a test tomorrow.
  • Steve Putnam
    35
    The resulting CBL files are anywhere from 3-17 GB each.
    So this leads to my next question:
    Let's say I have files with 90 days' worth of versions backed up in these CBL files (new format) on my local drive. If I were to copy those files to Amazon using CB Explorer into the appropriate MBS-xxxxx folder, would a subsequent repository sync see them and allow me to "continue on" with a new-format cloud backup?
    Assuming that would work, if I were to then use bucket lifecycle management to migrate CBL files to a lower tier of cloud storage, would every file contained in the CBL have to match the aging criteria?
    Or is there some intelligence as to what gets put into each CBL file?
  • David Gugick
    118
    Lifecycle policies, as I recall, work on the object creation date in the cloud, not the dates on the original files. So any lifecycle policy on a new-format backup would move the archive according to the date on the archive. You'd effectively be seeding the backups in the cloud, as some customers do on initial backups when the source data is large and the upstream bandwidth is inadequate.
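    For reference, a lifecycle rule keyed to object creation date looks like this with boto3 (the bucket name and prefix are placeholders; adjust the days and storage class to your scheme):

    ```python
    import boto3

    s3 = boto3.client("s3")

    # Transition objects under the backup prefix to Glacier 120 days after
    # their creation date in the bucket. Bucket and prefix are hypothetical.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-backup-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-old-backup-objects",
                    "Filter": {"Prefix": "MBS-xxxxx/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 120, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )
    ```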
  • Steve Putnam
    35
    You are correct about the lifecycle policy behavior. I was asking whether MSP360 backups would recognize the contents of the CBL files, old versions and all, such that I could "seed" 90 days' worth of backups to the cloud that were originally on the local drive.
  • David Gugick
    118
    Yes. That's how you would seed, as if you were using something like S3 Snowball. Again, I would do a simple test: a few files in an archive saved locally, copied to the cloud, and synchronized.
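    If you would rather script the copy step of that test than use CB Explorer, a minimal boto3 sketch like the one below walks a local backup folder and uploads each file into the bucket under the same relative path; a repository sync afterwards should then pick the archives up. The paths, bucket, and prefix are assumptions.

    ```python
    import boto3
    from pathlib import Path

    s3 = boto3.client("s3")

    LOCAL_BACKUP_DIR = Path(r"D:\LocalBackups\MBS-xxxxx")  # hypothetical local backup folder
    BUCKET = "my-backup-bucket"                            # hypothetical destination bucket
    PREFIX = "MBS-xxxxx"                                   # keep the same folder layout in the cloud

    for path in LOCAL_BACKUP_DIR.rglob("*"):
        if path.is_file():
            key = f"{PREFIX}/{path.relative_to(LOCAL_BACKUP_DIR).as_posix()}"
            s3.upload_file(str(path), BUCKET, key)
            print(f"uploaded {path} -> s3://{BUCKET}/{key}")
    ```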