• Jerry
    0
    With CloudBerry for Windows 6 I have been using Hybrid to back up my PC to a local NAS and to the Backblaze cloud. Hybrid is not yet available in the version 7 new backup format. I would like to take advantage of deduplication in the new backup format, and I wonder whether this alternative approach to hybrid is practical:
    1. Run files backup (encrypted and compressed) source=PC, destination=NAS, then...
    2. Run files backup (no encryption and no compression) source=NAS, destination=Backblaze

    Would this produce similar backup time savings as version 6 hybrid does?
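    To make the idea concrete, the flow I have in mind is roughly the sketch below. The plan names are placeholders, and the cbb.exe path and "plan -r" syntax are just my assumptions about the command-line interface, so verify against your install:

```python
import subprocess
import sys

# Hypothetical plan names; the cbb.exe path and "plan -r" syntax are
# assumptions about the CloudBerry CLI -- check your installed version.
CBB = r"C:\Program Files\CloudBerryLab\CloudBerry Backup\cbb.exe"

def run_plan(name):
    """Run a named backup plan and stop the script if it fails."""
    result = subprocess.run([CBB, "plan", "-r", name])
    if result.returncode != 0:
        sys.exit(f"Plan '{name}' failed with exit code {result.returncode}")

# Step 1: encrypted, compressed file backup, PC -> NAS
run_plan("PC to NAS (encrypted)")
# Step 2: plain copy of the NAS repository to Backblaze,
# run only if step 1 succeeded
run_plan("NAS to Backblaze (plain)")
```

    The point is just that step 2 runs only after step 1 finishes cleanly, so the cloud copy always mirrors a consistent NAS repository.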
  • Jerry
    0
    The silence is deafening ;-).

    I would appreciate any input I can get on the value proposition of the new backup format vs the legacy format. Obviously, deduplication is one. The new format also provides separation between backup plans - but I'm not sure what that means to me.

    FWIW, I just ran a deduplication analysis on my 1.2TB of local storage and determined deduplication could shave approximately 150GB off a full backup. I assume I would see a similar percentage on future incrementals. Here are my dedupe percentages by data type, with a quick sanity check on the math after the list:

    Documents and general data: 6.87%
    Photos/images: 3.83%
    Music: 0.67%
    Videos and NLE projects: 16.77%
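    Here is the arithmetic behind that check; the per-type sizes are my rough guesses, and only the percentages come from the analysis:

```python
# Weighted dedupe savings. The sizes_gb values are assumed; only the
# percentages come from the deduplication analysis quoted above.
sizes_gb = {
    "Documents and general data": 300,
    "Photos/images": 250,
    "Music": 150,
    "Videos and NLE projects": 500,
}
dedupe_pct = {
    "Documents and general data": 6.87,
    "Photos/images": 3.83,
    "Music": 0.67,
    "Videos and NLE projects": 16.77,
}

saved = sum(sizes_gb[k] * dedupe_pct[k] / 100 for k in sizes_gb)
total = sum(sizes_gb.values())
print(f"Estimated savings: {saved:.0f} GB of {total} GB "
      f"({100 * saved / total:.1f}%)")  # 115 GB / 9.6% with these guesses
```

    With these guessed sizes the model says ~115GB; my actual analysis came out nearer 150GB (12.5% of 1.2TB), so my size guesses are off, but the shape of the calculation is the same.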
  • David Gugick
    118
    What type of backup are you running? File/Folder or Image? For File/Folder, deduplication may not have the storage savings you'd see with image-based backups, since a true full backup of all files is only run once. In addition, if you use the Block-Level Backup option with the legacy file format, we'll back up only the changes within larger files that change little data day to day. My opinion is that you would be better served by continuing to use the legacy hybrid backup format.
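    Conceptually, block-level change detection works something like the sketch below. This is an illustration of the general technique, not our actual implementation:

```python
import hashlib

BLOCK_SIZE = 1 << 20  # 1 MiB here; real products choose their own size

def block_hashes(path):
    """Hash a file in fixed-size blocks."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def changed_block_indices(old_hashes, new_hashes):
    """Blocks that are new or differ since the last backup."""
    return [i for i, h in enumerate(new_hashes)
            if i >= len(old_hashes) or old_hashes[i] != h]

# An incremental run uploads only the blocks listed by
# changed_block_indices() for each large file; a scheduled full
# re-uploads such files in their entirety.
```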
  • Jerry
    0
    Thanks @David Gugick. I am referring to File/Folder backups. I think I will go with your recommendation. I have been doing Forever Incremental but I would like to instead go with Forward Incremental which would create periodic full backups and start a new backup chain, after which an older backup chain would be purged. I am having a hard time figuring out how to accomplish that using the CBB Hybrid Backup wizard retention settings. Specifically, how do I force periodic full backups, and then control when an entire backup chain (a full and its subsequent incrementals) is purged?
  • David Gugick
    118
    I'll have to loop back with you, but what you are describing is not a thing with file/folder backup. There is no such thing as a periodic full backup with respect to all the files/folders in a backup. If you use block-level, a file may have only its changed parts backed up, and the full backup schedule will back up that file (and any other block-level files) in full as needed at that time. But at no time are all files backed up in full after the initial backup. File backups are effectively incremental forever.
  • Jerry
    0
    @David Gugick, my concern with Forever Incremental is how reliable (and timely) will a full restore be if the backup chain has hundreds of incremental backups. The retention policy would take care of changes to files, but what if there is at least one new file every day for a year, and I run daily incremental backups? Is my concern valid?
  • David Gugick
    118
    I may need to reboot this question, because I assumed you were talking about the legacy format. Assuming that is the case, there is no backup chain at the group level. There are only backup chains at the file level when using the block-level backup option. If you do not use block-level, then every backup of every file is always in full. Do not think of file backup in the context you are painting it in - your analogy is more applicable to image backups, where you need to periodically run a new full backup or construct one using technology like a synthetic full backup. While a file backup plan might manage and back up tens of thousands of files, the metadata is managed at the file level.

    If, however, you were referring to the new backup format, then let me know...
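    In the meantime, to picture the legacy file-level behavior: the version history lives per file, so a "latest" restore is one lookup per file rather than a chain replay. A toy illustration with made-up paths and dates:

```python
# Conceptual sketch: in a file-level repository the "chain" exists per
# file, not per plan. Restoring the latest state just takes the newest
# version of each file independently; nothing is replayed.
repo = {
    "report.docx": ["2021-01-05", "2021-03-10", "2021-06-01"],
    "photo.jpg":   ["2021-01-05"],                # never changed again
    "video.mp4":   ["2021-02-14", "2021-08-30"],
}

latest = {path: versions[-1] for path, versions in repo.items()}
print(latest)  # one lookup per file; no chain of increments to apply
```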
  • Jerry
    0
    @David Gugick Thanks for the explanation. Let me restate what you are saying in a different way to be sure we are understanding each other. Let's say I have been doing a "forever incremental" backup using the legacy file level backup every day for one year, with a new source file every day. Separately, at the one-year mark, I do a full legacy file level backup of the same files. So assume each backup repository has exactly the same files in it. If I do a full restore from either, I think you are saying the time to restore would be the same. Correct?

    One other question... with the legacy hybrid backups my cloud backup is a file level backup to Backblaze. My intention is if I ever need to do a full restore from Backblaze I would have them send me my backup on hard disk. How would I use CBB to restore from the BB hard disk?
  • Jerry
    0
    @David Gugick I am starting to understand why I am so confused. First, in the MSP360 Windows 10 Backup-Restore Full Guide I find this:

    "Full Backup Options help keep backup sizes low and ensure you can restore data quickly. If you can only run incremental block-level backups for data that changes often, you can get a Windows 10 cloud backup archive that significantly exceeds your actual data size (each incremental copy stores all data changed from the last full backup). To avoid such situations you need to schedule full backups or look to creating a full data copy if the total size of your previous block-level backups is larger than that certain amount of your data. The second option can help if you are unsure of a desired full backup schedule."

    My version 6 file level backups DID use the block level option. Now, when I drill down in the version 7 Legacy File Backup wizard, I find it has removed block level backup from the advanced options. What would happen if I transition an existing version 6 file backup plan that uses the block level option to version 7? Does that break my existing backups? Are there other changes I haven't discovered yet? Are these documented anywhere?

    Again, thanks for your help in my quest to figure this out.
  • David Gugick
    118
    That explanation you posted is for image backups. Again, your example in the previous post is not a thing you can do. You can't run a legacy full backup that backs up every single file at any point; that function does not exist. The only time every file is backed up is the very first time you back up on a new plan. After that, only changed and new files are backed up. Whether those changed files are backed up in full, or only the changes within them, depends on whether the block level backup option is selected and on the size of the file - small files will probably just be backed up in their entirety every time regardless. Any files that are backed up using block level will only be backed up in full again according to your full backup schedule. But just to reiterate, and to be clear: when the full backup runs, only files that are new or changed are backed up, and they are backed up in their entirety. Any files that are unchanged are not backed up.

    As far as I'm aware, you would need to create a new plan to transition to the new backup format, and that would create a new backup set using the new format. Block level backup does not exist in the new format because we perform client-side deduplication automatically, and that effectively replaces the block level backup option.
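    If it helps, client-side deduplication in the general sense looks roughly like the sketch below - again a conceptual illustration, not our code. The key difference from block-level is that the hash index is shared across the whole backup set, so identical chunks in different files are stored once:

```python
import hashlib

def backup_file(data, repo_index, chunk_size=1 << 20):
    """Upload only chunks whose hash is not already in the
    repository-wide index, so duplicate data anywhere in the
    backup set is stored once."""
    new_chunks = 0
    for i in range(0, len(data), chunk_size):
        digest = hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        if digest not in repo_index:   # seen anywhere before?
            repo_index.add(digest)     # a real client would upload here
            new_chunks += 1
    return new_chunks
```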
  • Jerry
    0
    @David Gugick I found the article that I was remembering when I made the comments about "Forever Incremental" and the need for periodic full backups. The article is undated, so hopefully this has changed in version 7. Please advise.

    It implies that you want incremental backups to be done at regular intervals after the initial full backup was performed. It is very helpful that you don’t need to worry about full backups afterward.

    However, with each subsequent backup, the chain of incremental backups becomes larger. As a result, it takes more time and computing capacity for backup software to analyze the full backup and all increments so as to determine the difference between the data on your server/workstation and the data in your backup repository.

    This backup type reduces recovery reliability. It also becomes harder and takes longer to recover the whole data set, as it takes time to analyze and recover each backup in the chain.

    That is why periodical full backups are highly recommended, in order to start a new sequence of incremental backups. The frequency of full backups depends on your business needs. You may want to conduct it weekly, monthly, or once every couple of months. There is an advanced backup technique to simulate full backups, called a synthetic full backup.
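    Back in my own words: here is a toy model of why long chains worry me. The minute figures are invented; only the shape matters:

```python
# Toy model: restoring from the Nth increment means reading the initial
# full plus every increment since it. All timings are invented.
FULL_MIN = 60    # assumed time to restore the initial full
INCR_MIN = 0.5   # assumed time to locate and apply one increment

def restore_minutes(increments_since_full):
    return FULL_MIN + INCR_MIN * increments_since_full

print(restore_minutes(30))   # monthly fulls -> 75.0 minutes
print(restore_minutes(365))  # a year of daily increments -> 242.5 minutes
```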
  • David Gugick
    118
    That's just a generic article about incremental backups and has nothing to do with our products; it's for education. File/folder backups do not implement that scheme. That article is more applicable to our image-based backups.
  • Jerry
    0
    @David Gugick first, thanks for taking time out of your Saturday to respond. Your explanations have helped me better understand the nuances and characteristics of CBB backup types. I do think there are some opportunities for your tech writers to provide that same kind of information for each backup type in the documentation.

    If I use the new files backup type and in the future need to do a full data restore from Backblaze, would I be able to do that locally from the hard drive they provide (which contains my entire bucket)? What would I need to do to point the restore at that drive?
  • David Gugick
    118
    You'd register the new external HD as a target and synchronize the repository. You can then restore.
  • Jerry
    0
    @David Gugick
    "Block level backup does not exist in the new format because we perform client-side deduplication automatically and that effectively replaces the block level backup option." - David Gugick

    David, am I correct that this section of the documentation should state that block level backups apply only to the legacy format? Or better, move the whole section under the legacy format section.
  • Jerry
    0
    @David Gugick, I am trying to understand the implications of legacy vs GFS retention in the new files backup format. I believe legacy retention can be set to keep every version that is backed up over the retention period, whereas GFS would normally purge some of the intermediate file versions. Is this generalization correct, assuming they are both set to one year maximum retention?
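    To make my mental model concrete, here is how I picture the two policies over a year of daily backups. The keep counts are invented for illustration, not your actual GFS defaults:

```python
from datetime import date, timedelta

# One year of daily backups; the GFS keep rules below (7 daily,
# 4 weekly, 12 monthly, 1 yearly) are invented for illustration.
today = date(2021, 12, 31)
backups = [today - timedelta(days=n) for n in range(365)]  # newest first

daily   = set(backups[:7])                                 # last 7 days
weekly  = sorted((b for b in backups if b.weekday() == 6), reverse=True)[:4]
monthly = sorted((b for b in backups if b.day == 1), reverse=True)[:12]
yearly  = [b for b in backups if b.month == 1 and b.day == 1]

gfs_kept = daily | set(weekly) | set(monthly) | set(yearly)

print(len(backups))   # 365 versions if every version is kept
print(len(gfs_kept))  # 22 under this GFS scheme; intermediates purged
```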
  • David Gugick
    118
    Legacy retention can be set to keep any number of versions. It could also be set to keep every version, if needed. What's your goal with retention?
  • Jerry
    0
    @David Gugick my goal is to be sure I understand the nuances so I don't get into a situation where retention works differently than I had assumed. My background is engineering... I ask a lot of questions ;-).
  • David Gugick
    118
    If you provide specifics, assuming you have them at this time, I can provide additional guidance. But your assessment of how they operate is accurate. The new backup format also brings benefits like deduplication and improved performance with large numbers of files.
  • Jerry
    0
    @David Gugick I've used the new format to complete a Files and an Image backup to my NAS. I'm impressed with the performance and especially the new ability to automatically test the system image restore in a VM.

    I appreciate the offer of guidance for retention. I will get back to you later this week with some specifics.
  • David Gugick
    118
    Sounds great. Thanks.