• John Sikora
    0
    I was running the legacy backup to basically create a synced version of my data on Amazon Glacier. I was thinking of changing over to the new backup approach, but was confused on one point: if I'm running the new backup format, how are local deletes reflected in the data store? Essentially, I want my backup to contain only the current version of each file, with local deletions reflected in the data store.
  • David Gugick
    118
    When you run an incremental backup, locally deleted files are noted in the new backup data. When you start a new full backup (which will be synthetic on most clouds), a new full is created with only the current set of live files. Your retention settings for Keep Backups For and GFS will determine how long those backups stick around. If you're using the legacy format to back up directly to Amazon S3 Glacier, then you can stick with that format if you desire. It's not going away. When you say "a synced version" it sounds like you want a copy of the current data in the cloud and do not really need generations of backups, so stick with the legacy format if it works for you. If you want to move to the new format, you might see much improved backup performance, and you'll most certainly see improved restore performance. But if you're backing up to Glacier, then you're presumably not planning to do restores unless absolutely necessary. If you want to elaborate on your exact use case, I can provide additional guidance.
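    To make the delete handling concrete, here's a rough conceptual sketch in plain Python. This is not the product's actual code or storage format, and the file names and helper function are made up for illustration; it only shows the idea that incrementals record deletions and the next (synthetic) full is built from live files only.

        # Conceptual model only: incrementals note deletions, and the next
        # full generation contains just the files that are still live.

        def apply_incremental(live_files, incremental):
            """live_files: {path: version}; incremental: {'changed': {...}, 'deleted': [...]}"""
            live_files = dict(live_files)
            live_files.update(incremental["changed"])    # new and modified files
            for path in incremental["deleted"]:          # deletions are recorded, not purged
                live_files.pop(path, None)
            return live_files

        # Generation 1: a full plus two incrementals
        full_1 = {"a.txt": 1, "b.txt": 1, "c.txt": 1}
        inc_1  = {"changed": {"b.txt": 2}, "deleted": []}
        inc_2  = {"changed": {}, "deleted": ["c.txt"]}   # c.txt deleted locally

        state = apply_incremental(apply_incremental(full_1, inc_1), inc_2)

        # Generation 2: the next (synthetic) full holds only live files,
        # so c.txt no longer appears in the new generation.
        print(state)   # {'a.txt': 1, 'b.txt': 2}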
  • John Sikora
    0
    Thanks for the clarification. I was planning to transition to the new method anyway; even though it may take a while, I have about 22 TB of data to back up. That's about 6 months of backup time, and since it's only going to grow, there's no time like the present. Your other assumptions are correct, so I'll get some performance improvement, but not necessarily much.
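    For rough context, here's a back-of-envelope sketch of what "22 TB in about 6 months" implies as a sustained upload rate (assuming continuous uploading and decimal units; the numbers are illustrative only):

        # Back-of-envelope: implied sustained upload rate for 22 TB over ~6 months
        data_bytes = 22 * 10**12             # 22 TB, decimal units
        seconds    = 182.5 * 24 * 60 * 60    # ~6 months of continuous uploading

        rate_mb_s   = data_bytes / seconds / 10**6   # megabytes per second
        rate_mbit_s = rate_mb_s * 8                  # megabits per second
        print(f"~{rate_mb_s:.1f} MB/s (~{rate_mbit_s:.0f} Mbit/s) sustained")
        # -> ~1.4 MB/s (~11 Mbit/s) sustained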

    A second question that I should have asked earlier: is there an issue with a never-ending set of incremental backups against a primary full, or does it not really matter? Or, to ask the question another way, is there a rule of thumb for how often a full backup MUST be run? The vast majority of my files don't change. I've split the backups into a couple of jobs. The more frequently changing ones (and hence the ones where I could run a full backup more often) are also the smallest. So my thinking was that the more frequently changing ones would get a new full every year, while the less frequently changing one might never have another full (or maybe one every two or three years). Thoughts?
  • David Gugick
    118
    If you're referring to the legacy format, then there is no concept of a full backup. Everything is incremental forever, as the backups are at the file level. If you're referring to the new backup format, then we recommend full backups be run periodically, but that interval is up to you. Currently, the user interface implies they should be run monthly, but that may change with a future update to allow less frequent fulls to be run. The new backup format uses a generational format of large archive files that contain the individual files being backed up, and there is a chain of backups that may be needed to perform a restore. We always run consistency checks with the new backup format, so if there's a missing backup archive, you'll be notified, and the remedy is to run a new full backup. Full backups are usually synthetic in the cloud, and they tend to run much, much faster than a full backup of the source data. But the new full backup generation does use an equivalent amount of storage. So, in your case, where you only want a single copy of each source file in backup storage, the legacy format is fine.
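    To illustrate the chain idea, here's another rough conceptual sketch in Python. It is not the actual archive format; the archive names and helper function are invented for illustration. The point is simply that a restore may need the full plus every incremental up to the restore point, so a missing archive breaks the chain and the fix is a new full.

        # Conceptual sketch of a generational chain: a full plus its incrementals.
        chain = ["full_2024-01", "inc_2024-02", "inc_2024-03", "inc_2024-04"]

        def check_chain(chain, available_archives):
            """Return what a restore needs, or report the broken link."""
            missing = [a for a in chain if a not in available_archives]
            if missing:
                return f"chain broken, missing {missing}; run a new full backup"
            return f"restorable: needs {chain[0]} plus {len(chain) - 1} incrementals"

        print(check_chain(chain, {"full_2024-01", "inc_2024-02", "inc_2024-03", "inc_2024-04"}))
        print(check_chain(chain, {"full_2024-01", "inc_2024-03", "inc_2024-04"}))  # one archive lost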