• Wenneker
    0
    Hi,
    We're making backups of a file server. Since it's quite a lot of data (100 TB+) we're only making incremental backups. We have the retention set to 3 months, but nothing ever seems to get deleted. How should we configure the backup so that it actually deletes backups of files that are no longer on the source after a certain amount of time has passed?
    Thanks, Bart
  • David Gugick
    118
    You'll need to clarify whether you're using the legacy file backup format or the new backup format. You'll also need to post whether or not you have the option enabled to mark files as deleted in backup storage when they're deleted locally (if legacy format), and also post all of the settings you have on the Retention tab (post a screenshot if that's easier). If you're doing file backups, then they're always incremental forever with the legacy backup format.
    If you're using the new backup format, then also post the schedule for the incremental and full backups. And lastly, let us know if you're backing up to local disk or cloud storage.
  • Wenneker
    0
    I think we're using the new format. We're backing up to the cloud (Backblaze B2).
    s3teb3cn2b4ifrwa.png
    upnc516zzl9xzqke.png
  • David Gugick
    118
    The reason nothing is getting deleted is that you're not running any full backups.
    You can see that in your first screenshot; it's the option right below the incremental backup schedule. Nothing can be deleted because the new backup format uses generational backup sets: a chained set of backups consisting of one full and a number of incrementals. In that respect it works similarly to the legacy image and virtual machine backups, since it does not manage backup data at the file level.
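    Here's a rough sketch of how set-level (generational) retention behaves, just to illustrate why nothing ages out without fulls. This is not the product's actual code; the class and function names are made up for the example:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BackupSet:
    full_date: datetime        # when this set's full backup ran
    incremental_dates: list    # incrementals chained to that full

def purgeable_sets(sets, retention=timedelta(days=90), now=None):
    """Return the backup sets (oldest-first list) that retention may remove."""
    now = now or datetime.now()
    purgeable = []
    for s in sets[:-1]:        # the newest set is never purgeable
        newest_point = max([s.full_date] + s.incremental_dates)
        if now - newest_point > retention:
            purgeable.append(s)
    return purgeable

# With incrementals only, everything lives in one ever-growing set, so there
# is never an "older" set that can age out -- nothing gets deleted.
only_set = BackupSet(datetime(2024, 1, 1),
                     [datetime(2024, 1, 1) + timedelta(days=d) for d in range(1, 400)])
print(purgeable_sets([only_set]))   # -> []
```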

    Having said that, I don't think the new backup format is what you need, because in order to remove data you have to run periodic full backups, and as you've said, with a hundred terabytes of data you may not want to use that much backup storage. The B2 full backups will be synthetic, meaning we only run a new incremental backup and then use the intelligence in the cloud to have your B2 account create a new full from the data that's already up there. But currently you'll need to run full backups at least once a month (a version that allows less frequent full backups is not out yet), and if you want to keep data for 3 months, that means you'll end up with 4 full backups in storage before the first backup set can be removed. That means 4 full copies of the hundred terabytes of data, which could greatly increase your cloud storage needs and cost. If that works for you, then you can continue using the new backup format. But...
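    To put rough numbers on it (these are my own assumptions, not a quote; plug in your actual change rate and check current B2 pricing):

```python
source_tb = 100             # approximate size of the file server
fulls_retained = 4          # fulls in storage before the oldest set ages out
                            # (3-month retention, monthly synthetic fulls)
daily_change_tb = 0.5       # ASSUMED daily change rate -- adjust for your data
incrementals_tb = daily_change_tb * 90

total_tb = source_tb * fulls_retained + incrementals_tb
b2_price_per_tb_month = 6   # ASSUMED B2 list price per TB/month -- verify current pricing

print(f"~{total_tb:.0f} TB stored, roughly ${total_tb * b2_price_per_tb_month:,.0f}/month")
# -> ~445 TB stored, roughly $2,670/month under these assumptions
```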

    You may want to use the legacy backup format instead. The legacy backup format is true incremental forever: we only ever back up new and changed files, and all cloud objects are managed at the file level. It uses version-based retention, which gives you flexibility in how many revisions you want to keep for each file. You do lose some of the advantages of the new backup format, like faster backups, faster restores, client-side deduplication, automatic backup consistency checks, GFS retention, and backup immutability on some cloud platforms.
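    Conceptually, file-level version-based retention works something like the sketch below (again, illustrative only and not the actual implementation; the function and parameter names are hypothetical): keep up to N versions per file, and drop files that were deleted on the source once the purge delay you configure has passed.

```python
from datetime import datetime, timedelta

def prune(versions_by_file, source_files, keep_versions=3,
          delete_after=timedelta(days=90), now=None):
    """versions_by_file: {path: [(upload_time, object_key), ...]}, newest first.
    source_files: set of paths that still exist on the source.
    Returns the object keys that retention would remove from storage."""
    now = now or datetime.now()
    to_delete = []
    for path, versions in versions_by_file.items():
        if path not in source_files:
            # File was deleted locally: drop every version once the newest
            # one is older than the configured purge delay.
            if now - versions[0][0] > delete_after:
                to_delete.extend(key for _, key in versions)
        else:
            # File still exists: keep only the newest N versions.
            to_delete.extend(key for _, key in versions[keep_versions:])
    return to_delete
```

    Because pruning happens per file rather than per backup set, files that disappear from the source can be purged without ever running another full.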


    It sounds like what you need is incremental forever, and the legacy backup format is the way to get there. The only catch is that in order to move to the legacy format you'll have to start a brand new backup plan using the legacy backup format and re-upload the data to B2. You can disable the schedule on the new-format plan and leave the existing backup data in B2 until it's no longer needed, then use the Backup Storage tab in the client to remove it.

    Let me know if you have any additional questions.
  • Wenneker
    0
    Thanks for the insight.
    I'll have a look at the legacy format.