Comments

  • New Backup Format - Size comparison on the bucket vs legacy

    Thanks for sharing your schedule, Steve; it gave some insight into the process.


    David, our requirement in this scenario is mostly for file backups.

    We would like to keep 1-year retention for files and at least 3 versions.

    Today, we run the legacy format, deleting files one year after they are deleted locally and keeping at least 3 versions.

    We have one customer with 6 million files, and restoring that entire dataset would take forever.

    So, we thought about using the new format.
    But from what I'm seeing, an always-incremental approach wouldn't work, and running monthly fulls would consume something like 10x more storage than the legacy format, because of the retained monthly full backups (a rough calculation is sketched at the end of this comment).

    I wanted to use the new format because of the deduplication and the much shorter recovery time: fetching the packages rather than millions of small files is much faster.

    I know there is no rule of thumb and it depends on requirements, but are you seeing traction for file-based backups with this new format?

    Thanks and regards,
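
    To make that "10x" figure concrete, here is a rough back-of-the-envelope Python sketch. Every number in it is a placeholder assumption (a 1 TB file set, roughly 0.1% of data changing per day, 1-year retention, no compression or deduplication), so the ratio will move with the real change rate, but it shows why keeping 12 monthly fulls costs far more than one full plus a year of file versions:

        # Illustrative storage estimate: legacy (one full + file versions) vs
        # new format with monthly fulls, both kept for 1 year.
        # All values are assumptions, not measurements.

        FULL_TB = 1.0            # assumed size of one full backup of the file set (TB)
        DAILY_CHANGE_TB = 0.001  # assumed changed data per day (TB), i.e. ~0.1%
        RETENTION_DAYS = 365

        # Legacy format: one copy of every file plus a year of changed-file versions.
        legacy_tb = FULL_TB + DAILY_CHANGE_TB * RETENTION_DAYS

        # New format with monthly fulls: 12 retained fulls plus the incrementals between them.
        monthly_fulls_tb = 12 * FULL_TB + DAILY_CHANGE_TB * RETENTION_DAYS

        print(f"legacy, forever incremental: ~{legacy_tb:.2f} TB")
        print(f"new format, monthly fulls:   ~{monthly_fulls_tb:.2f} TB")
        print(f"ratio:                       ~{monthly_fulls_tb / legacy_tb:.1f}x")

    With these assumptions the ratio comes out around 9x; a higher daily change rate shrinks it, a lower one pushes it past 10x.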
  • New Backup Format - Size comparison on the bucket vs legacy
    Thanks for sharing, Steve. I was thinking about rolling out the new backup format for specific customers who have a lot of files on their file servers.
    On Amazon S3, we pay a request charge for each file uploaded to the cloud, and there are customers with something like 6,000,000 files for whom a complete restore would probably take weeks with the legacy format but would be much faster with the new format (a rough request-count comparison is sketched at the end of this comment).

    I'm trying to find a way to make the new backup format work without too much storage overhead while still allowing a faster restore.

    I haven't figured it out yet.

    I was thinking of maybe using reduced retention, such as 3 months, or running the new format forever-incremental, but I'm still not convinced about the best course of action.
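
    To put rough numbers on the per-file cost and restore-time point above: in the legacy format every file is its own S3 object, so restoring 6,000,000 files issues 6,000,000 GET requests and the per-object overhead dominates, while in the new format files are packed into large archive objects, so only a few thousand GETs are needed. A quick Python sketch, in which the average file size, package size, and per-request overhead are all illustrative assumptions:

        # Back-of-the-envelope comparison of a full restore of 6,000,000 small files.
        # Every number below is an assumption, not a measurement.

        FILES = 6_000_000
        AVG_FILE_KB = 100              # assumed average file size
        PACKAGE_SIZE_MB = 256          # assumed archive/package size in the new format
        PER_REQUEST_OVERHEAD_S = 0.05  # assumed per-object latency, sequential download

        total_gb = FILES * AVG_FILE_KB / 1024 / 1024

        # Legacy format: one object (and one GET request) per file.
        legacy_requests = FILES
        legacy_overhead_h = legacy_requests * PER_REQUEST_OVERHEAD_S / 3600

        # New format: files packed into large archives, so only a few thousand GETs.
        new_requests = int(total_gb * 1024 / PACKAGE_SIZE_MB) + 1
        new_overhead_h = new_requests * PER_REQUEST_OVERHEAD_S / 3600

        print(f"data to restore:          ~{total_gb:,.0f} GB")
        print(f"legacy GET requests:      {legacy_requests:,} (~{legacy_overhead_h:,.0f} h of per-object overhead)")
        print(f"new-format GET requests:  {new_requests:,} (~{new_overhead_h:.2f} h of per-object overhead)")

    Real restores run requests in parallel, so the absolute hours overstate wall-clock time, but the relative gap between millions of small GETs and a few thousand large ones is what makes the packaged format restore so much faster. Upload request charges scale the same way, since S3 bills PUT requests per object.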