• Natechev
    0
    Is this a problem at all? Let's say this is done every hour, daily, for a year. I would guess that has an effect on the restoration process? I like the deduplication in the new backup format, as well as the much lower number of files and folders it uses, which means fewer transactions. I also like the quick restore feature. I like the file-delete warning the legacy format gives, as well as the file version list in the restore window. It would be nice if the best of both were merged, if that's possible, but anyway.

    So far CB is the best option I've found and the most comprehensive. Products like Backblaze, Carbonite, and CrashPlan are OK, but I never liked the auto-delete retention scheme. I realize they are forcing themselves to do this because they offer unlimited (uncapped, but not really unlimited) storage. The problem is if your files are accidentally deleted, or maliciously deleted by malware or maybe an ex-spouse. If you don't know the files have been deleted, then they are gone in 30-90 days. You have 30-90 days to realize it? Backblaze offers 1 year for more money, which is much better, but still. That is unacceptable to me. If I don't realize they have been deleted, then 1 of 3 copies is gone. That breaks the 3-2-1 strategy IMMEDIATELY. Until I realize it, I no longer have a 3-2-1 strategy. Period. The companies themselves say they are not an archive service. That is absurd for residential customers. Many people have a mixture of data: some with a short expiration date, some longer, and some that should NEVER be deleted. That defeats the point. Most people have that data; they simply don't know if they'll need it again. Anyway, feel free to ignore or not respond to this. This is just my frustration, really. Maybe much of it is unfounded, though I've read a number of online comments/stories highlighting precisely this!
  • David Gugick
    118
    Here's my response:
    1 - The new backup format (for files) is not designed for incremental-forever backups like the legacy format. The new format requires periodic full backups, since it uses a generational approach (full backup sets) for data retention. These full backups are synthetic fulls (on most cloud storage providers) and will run quickly compared to the first full backup. For image and VM backups, the periodic need for a (synthetic) full backup is no different from the legacy format.
    2 - You get many new features using the new backup format, as you've pointed out: better performance, faster restores, client-side deduplication, synthetic fulls for all backup types, GFS retention, and immutability on AWS and Wasabi (and soon others). We also support restore verification for image backups.
    3 - The new backup format also includes built-in backup consistency checks. So, to your last point about someone deleting backup data, there are no notification issues as you describe with the new format. We will see missing backups and automatically run a full backup if there are backup files that are missing or incorrect.
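
    To make that consistency-check idea concrete, here is a minimal sketch in Python. It is illustrative only and not the product's actual implementation; the file names, checksum scheme, and decision logic are all assumptions.

    # Illustrative sketch only -- not the product's actual implementation.
    # Idea: compare the backup files the chain metadata says should exist
    # against what is actually in storage, and fall back to a new full
    # backup if anything is missing or altered.
    from dataclasses import dataclass

    @dataclass
    class BackupFile:
        name: str        # hypothetical names, e.g. "gen1-full" or "gen1-inc-0003"
        checksum: str    # checksum recorded when the file was written

    def chain_is_consistent(expected: list[BackupFile], storage: dict[str, str]) -> bool:
        """True if every expected backup file exists with the recorded checksum."""
        for f in expected:
            if storage.get(f.name) != f.checksum:
                return False   # a missing or altered backup file breaks the chain
        return True

    def plan_next_backup(expected: list[BackupFile], storage: dict[str, str]) -> str:
        # Damaged generation -> start a fresh full backup; otherwise continue incrementally.
        return "incremental" if chain_is_consistent(expected, storage) else "full"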
  • Natechev
    0
    TY

    1. How often would a full (or synthetic) backup need to be performed? Just to be clear, a synthetic full creates a new full backup from the blocks already in the cloud plus the new blocks created locally since. So if I don't delete any files, there would be 200 GB of data after the synthetic full, assuming the size of the original full backup was 100 GB, which would increase my per-GB storage bill? So the new format wouldn't necessarily save the most online space?

    3. I don't mean deleting backup data in the backup itself. I mean the local files marked for backup. If I don't notice those are gone, the 30-day retention auto-deletion kicks in and they are gone. I may just stick with legacy, maybe.
  • David Gugick
    118
    You can run the full backup on a schedule that works for you. However, depending on retention settings, if you use GFS, you may need to run a recurring full on a specific schedule to support those GFS settings. But you would be warned if you mismatch GFS and your full backup schedule.

    Synthetic fulls are as you described. We run an incremental backup and then use the bits and bytes already in the cloud to reconstruct the new full without the need to upload all the data again. It's possible that your storage requirements could go up depending on file changes. But that's only really true for file backup. For image and VM backup, storage requirements would be about the same or less because of client-side deduplication. But it also depends on retention settings.
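
    A rough sketch of that synthetic-full mechanism, assuming a simple block-based layout (the block and manifest structures below are made up for illustration, not the actual backup format):

    # Illustrative only. A synthetic full uploads just the changed blocks;
    # the unchanged blocks are reused on the storage side from the previous full,
    # so a complete new full set exists without re-sending everything.
    def synthetic_full(previous_full: dict[str, bytes],
                       changed_blocks: dict[str, bytes]) -> dict[str, bytes]:
        new_full = dict(previous_full)   # unchanged blocks reused server-side, not uploaded
        new_full.update(changed_blocks)  # only these blocks crossed the wire
        return new_full

    old_full = {"blk1": b"A", "blk2": b"B", "blk3": b"C"}   # e.g. the original 100 GB full
    new_full = synthetic_full(old_full, {"blk2": b"B2"})    # only one changed block uploaded

    uploaded_blocks = 1                             # small upload
    stored_blocks = len(old_full) + len(new_full)   # both full sets occupy storage until retention removes one
    print(uploaded_blocks, stored_blocks)           # 1 6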

    Correct on the last point. If you're not using GFS, or not using a longer retention to give yourself a longer period and opportunity to discover that files have been deleted locally, then you may be better off sticking with the legacy format for file backups because of the ability to manage retention at the version level and use the incremental-forever style of backup. And that backup format isn't going anywhere, so if it works best for you, then stick with it.
  • Natechev
    0
    "But that's only really true for file backup. For image and VM backup, storage requirements would be about the same or less because of client-side deduplication. But it also depends on retention settings." - David Gugick

    A synthetic full is a server-side created full backup with only the changed bits uploaded, right? So two full backups now reside in the cloud? How does that save space? Unless there is block-level dedup on the server as well, and the new synthetic full just links to the same blocks from the first full?
  • David Gugick
    118
    It's a full backup, as you mentioned. But so are the full backups that need to be run periodically with image and VM backups that use the legacy backup format. The client-side deduplication can help save space with the incremental backups.
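
    For what client-side deduplication does during the incremental backups, here is a minimal sketch, assuming fixed-size blocks and SHA-256 content hashes (both assumptions, not the product's documented parameters):

    # Illustrative only: skip uploading any block whose content has already been backed up.
    import hashlib

    def dedup_incremental(blocks: list[bytes], seen_hashes: set[str]) -> list[bytes]:
        to_upload = []
        for data in blocks:
            digest = hashlib.sha256(data).hexdigest()
            if digest not in seen_hashes:    # identical content is skipped client-side
                seen_hashes.add(digest)
                to_upload.append(data)
        return to_upload

    seen: set[str] = set()
    first = dedup_incremental([b"A", b"B", b"A"], seen)  # duplicate block sent once
    later = dedup_incremental([b"A", b"C"], seen)        # only the new block "C" goes up
    print(len(first), len(later))                        # 2 1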
  • Natechev
    0
    "The new backup format (for files) is not designed for incremental-forever backups like the legacy format. The new format requires periodic full backups, since it uses a generational approach (full backup sets) for data retention. These full backups are synthetic fulls (on most cloud storage providers) and will run quickly compared to the first full backup. For image and VM backups, the periodic need for a (synthetic) full backup is no different from the legacy format." - David Gugick

    TY for your responses.

    Could you elaborate on this a bit? I'm not sure I entirely understand the data retention point. What if I decide to keep it forever and never purge it? Why would that be a problem? Is it a problem because the generation is treated as one unit of restoration, so if one part of it goes you can't restore to it? So the more incrementals you have, the more points of failure?
  • David Gugick
    118
    You could do that, but it's not recommended, because the more incremental backups you have in the chain, the more data loss is possible if there is disk corruption or some other type of backup file data loss. Even if you wanted to keep data "forever", you'd probably want to run periodic full backups. You could do something like use GFS with 10 years of annual backups, so you're only keeping one generation per year. Or use the legacy format, which I think is the better choice in your case: it is incremental forever, you can keep locally deleted files in backup storage, and you can also check the option to always keep the last version of every file. That way, you keep everything forever without chained backups, since all backups are managed at the file object level.
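
    As a sketch of what "GFS with 10 years of annual backups" means for which generations survive (the policy shape and dates below are illustrative, not the product's exact configuration):

    # Illustrative only: keep the latest full generation from each calendar year,
    # for up to the last 10 years; older generations are pruned.
    from datetime import date

    def keep_annual_generations(fulls: list[date], years_to_keep: int = 10) -> list[date]:
        latest_per_year: dict[int, date] = {}
        for d in sorted(fulls):
            latest_per_year[d.year] = d              # the later full in the same year wins
        kept_years = sorted(latest_per_year)[-years_to_keep:]
        return [latest_per_year[y] for y in kept_years]

    fulls = [date(2021, 1, 3), date(2021, 7, 4), date(2022, 1, 2), date(2023, 1, 1)]
    print(keep_annual_generations(fulls))            # one generation kept per year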
  • Natechev
    0
    TY.

    But legacy is also incremental forever? That isn't a problem? Is that because of the format?
  • David Gugick
    118
    No. All file backups using legacy mode are at the file level. We only back up new and changed files, and there is a one-to-one correspondence of objects in backup storage. The only "chaining" that can occur is when you use block-level backups, which back up changes within large files. Even so, you need to schedule periodic backups of those files in full, but those backups are still at the file level. The new backup format uses archives that bundle many files together for easier backup file management and faster backups and restores.
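
    A small sketch of the difference in how the two formats map files to storage objects (object names and the archive layout are invented for illustration; neither is the actual on-disk layout):

    # Illustrative only: legacy keeps a one-to-one mapping between changed files and
    # storage objects; the new format bundles the changed files into one archive per run.
    def legacy_upload(changed_files: dict[str, bytes]) -> list[str]:
        return [f"backup/{path}" for path in changed_files]   # N files -> N objects

    def archive_upload(changed_files: dict[str, bytes], run_id: int) -> list[str]:
        return [f"backup/run-{run_id}.archive"]               # N files -> 1 object

    changed = {"docs/a.txt": b"...", "photos/b.jpg": b"...", "notes/c.md": b"..."}
    print(len(legacy_upload(changed)))       # 3 objects, each restorable on its own
    print(len(archive_upload(changed, 42)))  # 1 object, fewer transactions per run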