• New Backup Format Setup Help
    I am glad the deletions are working for you. The empty folder thing is a known issue.
    I find that doing periodic consistency checks avoids repo sync issues that can affect the purge schedule.
  • New Backup Format Setup Help
    Lukas -
    I am going to be honest with you, unless I am totally misunderstanding your question, I think that you would be wise to keep using the legacy Backup format.
    Recap of how Legacy works:
    - Files get created or modified and are backed up to Wasabi.
    - If they never get modified or deleted, they simply stay in Wasabi forever.
    - If a file gets deleted, it stays in Wasabi for 90 days (based on your setting in Legacy for deleted files).
    - A "Full" backup in legacy mode is actually an incremental backup. It simply backs up any file that has had a block level incremental done since the last "Full". This is typically only a small subset of your entire backup data on Wasabi - since the vast majority of space is consumed by video files that will never change and will never need to be backed up again.

    Let's look at how FFI would work in your scenario:
    1. You re-upload all of the existing data to Wasabi using the new backup format.
    2. You set the FFI interval to 90 days - or let Intelligent Retention do it for you because of the Wasabi early delete penalty for objects less than 90 days old.
    3. Each night, only files that have been changed or added that day get included in the incremental backup
    4. If a file gets deleted, it will be kept in the cloud for the FFI interval - the same as for file versions.
    5. At the end of 90 days you will have one true Full backup and 89 incrementals.
    Here is where it gets dicey:
    On day 91, the system takes one more incremental - then starts creating a brand new "Synthetic" Full which uses the "in-cloud copy" feature of Wasabi to create a brand new Full - just as if you re-uploaded the files - even ones that have not changed.
    Now this would be okay except for one issue: The in-cloud copy feature runs at between 200GB and 350GB per hour. You can do the math - but 70TB is going to take a LONG time to copy in Wasabi.
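    To put rough numbers on "do the math" - a quick sketch assuming the 200-350 GB/hour copy rate range quoted above:

```python
def copy_hours(total_tb: float, rate_gb_per_hr: float) -> float:
    """Hours needed to synthesize a full via in-cloud copying."""
    return total_tb * 1000 / rate_gb_per_hr  # 1 TB = 1000 GB

slow = copy_hours(70, 200)   # worst case from the quoted range
fast = copy_hours(70, 350)   # best case
print(f"70 TB takes {fast:.0f}-{slow:.0f} hours ({fast/24:.1f}-{slow/24:.1f} days)")
# → 70 TB takes 200-350 hours (8.3-14.6 days)
```

    Over a week of copying for every synthetic full, at best.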
    And here is the best part - On day 92 the system will do ANOTHER synthetic full as its goal is to keep no more than one Full and 90 incrementals in storage at any point in time.
    I have requested that we be given the ability to schedule when the Synthetic full occurs - so that we can perform it once a week on weekends instead of every night.
    So you can see that the new format and the FFI are not going to be viable for the amount of data that you have.
    And your static video file data type is ideal for the legacy format's "forever incremental" design.
    When you move the completed project video files to the other NAS, it will be considered deleted and get purged from Wasabi after 90 days (or whatever you set)

    Now my only disclaimer is that if you are actually only storing a small amount of data in Wasabi at any given point in time, and the majority of your 70-80TB is on a NAS and not in the cloud, then the FFI might make sense.
    Also, we use BackBlaze as there are no early deletion fees.
    Happy to discuss further.
    Steve
  • Large Cloud Image Backups
    First of all, we use redirected folders for the majority of our clients, and only back up the server. Workstations are standard builds that can be reloaded fairly quickly, and because we encourage clients to maintain a spare (or two) the rebuild of an individual PC is not an emergency.

    We do local image backups of the server on a daily basis - Usually weekly fulls and daily incrementals.
    This provides full operational recovery in the event of a failure of the OS/Hardware.
    Prior to the availability of Synthetic full backups, we did a full cloud image backup only once per month.
    We would exclude from the image backup all data folders as well as the temp, recycle bin, etc. to keep the size down. In a true disaster, having a one-month-old image was acceptable as the OS and Apps typically do not change significantly in a month.
    We do cloud and local daily file backups as well that would be used to bring the server up to date after the image is restored.
    The daily delta for our image backups is typically in the 5-15GB range, due to the fact that any change in the OS, location of temp files, etc will result in changed blocks which need to be backed up again.
    With the synthetic full capability we now run image backups every night for all clients except those with the very slowest link speeds (<5 Mbps).
    The synthetic full gets run on the weekend and takes a tenth of the time that a true full would take.
    For those with slow links, we do a monthly synthetic full and weekly incrementals on the weekends.
    For our clients who are using P2P devices for file sharing, again, we only do an image of the P2P server, not individual workstations on the network.
    Not knowing how your clients are set up it is hard to make a recommendation, but certainly you should have local and cloud image backups, and utilize Synthetic cloud fulls. I recommend using BackBlaze as there is no minimum retention period. And for disaster recovery, there is no real need to keep more than one or two versions of the image in the cloud.
    For our clients that have only individual machines with the data stored locally, we simply backup the data files to the cloud (and locally if they have a USB HD device). We do not do image backups unless they are willing to pay extra for that service. ($10/month).
    Brevity is not my strong suit :)
  • Confused by schedule options
    The "repeat every xx" setting seems irrelevant with FFI. With other backups I used it to run a Full every three months and incrementals every week.
    With FFI, the frequency of the synthetic full is now dictated by the retention period that you set, or by Intelligent Retention if you have it turned on and your retention period is less than the platform minimum.
  • Cloudberry Upload to s3 bucket just showing $GmetaaaAAA#
    The new backup format does not show individual files, just the backup generations with .cbl files.
  • "Do not back up system and hidden files" Option Best Practice
    That is what the image backup is for. You would only need to restore those system files if the system was hosed in some way. As you know, you can restore individual files from an image backup, but in my experience restoring an individual system file rarely fixes anything.
    One thing we have always done is to include system directories (including appdata user folders) in our local backups since there is no cost to do so. This can lead to errors however, as some of the files are temporary and are present when the snapshot is taken, but disappear by the time the system goes to do the backup. (things like roaming profiles).
    Hope this helps.
  • Backing up a VMware VM and restoring to Hyper-V
    To your first question, yes this is all doable in MSP360.
    As to whether you can take an image of your guest OS’s from VMWare and turn them into HyperV VHDx files - We don’t use VMWare and never have, but I don’t see why not.
  • Backing up a VMware VM and restoring to Hyper-V
    HyperV is very well supported in MSP360. You run the VM version on the host and it backs up all of the Virtual disks/machines. We do weekly synthetic fulls and nightly incrementals to BackBlaze for all of our clients. We keep only one week’s worth of VM backups as they are only for Disaster Recovery. We also back up the files within the File server VM’s and keep versions for 90 days.
    DM me if you want any specific recommendations.
  • Cloud Files Not Being Deleted After Local Files Are Moved/Deleted
    Are you using CloudBerry Explorer to verify what files are actually out in the cloud?
  • Backup Agent 7.8 for Windows / Management Console 6.3
    Wspeed - given that all the files you upload are never modified and never deleted, the old backup format is probably a better fit. One of the things that has tripped me up is the cost of lifecycle transitions from S3 to Glacier. They charge five cents for every thousand objects migrated.
    I did some calculations a while back and determined that transitioning to Glacier was only cost-effective if the average file size was over half a megabyte.
    I suspect the average size of your images is significantly more than that so even in legacy format it’s worth it.
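    A quick sketch of that break-even calculation. The transition fee is from the post above; the S3 Standard and Glacier storage rates ($0.023 and $0.004 per GB/month) are illustrative assumptions - check current AWS pricing before relying on them:

```python
TRANSITION_COST = 0.05 / 1000          # $ per object migrated (from the post)
S3_RATE, GLACIER_RATE = 0.023, 0.004   # $/GB/month - assumed, verify current pricing

def breakeven_months(avg_file_mb: float) -> float:
    """Months of Glacier storage needed to recoup the transition fee."""
    monthly_savings_per_file = (avg_file_mb / 1000) * (S3_RATE - GLACIER_RATE)
    return TRANSITION_COST / monthly_savings_per_file

print(f"0.5 MB files break even after {breakeven_months(0.5):.1f} months")
# → 0.5 MB files break even after 5.3 months
```

    Tiny files take years to pay back the fee; large image files recoup it in days.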
    Going forward, we are sending all backups, file and image, to Backblaze, as it only costs $0.005 per gigabyte per month, supports synthetic fulls, and has no minimum retention.
    The API call charges are also significantly lower than Amazon.
  • Backup Agent 7.8 for Windows / Management Console 6.3
    The new backup format, as I understand it, will continue "where it left off" rather than having to start all over again. But the big thing is the synthetic fulls process data at a rate of 200-300 GB per hour using in-cloud copying. In my experience synthetic fulls take less than 20% of the time that the initial full took. So a full that took 4 days gets done in less than 15 hours - something easily done over the weekend.
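    A rough sketch of why the gap is so large: the initial full is bound by your uplink speed, while the synthetic full is bound by the in-cloud copy rate. The 3.5 TB size and 100 Mbps uplink here are hypothetical figures, not from the thread:

```python
def upload_hours(size_gb: float, uplink_mbps: float) -> float:
    """Hours to push size_gb over an uplink_mbps link (1 GB = 8000 megabits)."""
    return size_gb * 8000 / uplink_mbps / 3600

def synthetic_hours(size_gb: float, copy_gb_per_hr: float = 250) -> float:
    """Hours for an in-cloud synthetic full at the mid-range quoted copy rate."""
    return size_gb / copy_gb_per_hr

full = upload_hours(3500, 100)     # ~3.5 TB over a 100 Mbps uplink
synth = synthetic_hours(3500)
print(f"initial full ~{full/24:.1f} days, synthetic full ~{synth:.0f} hours")
# → initial full ~3.2 days, synthetic full ~14 hours
```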
  • Cloud Files Not Being Deleted After Local Files Are Moved/Deleted
    Lukas - When you say you are “seeing things in wasabi that were uploaded over a year ago”, do you mean you are seeing files in backup storage that were deleted long ago? Or just valid undeleted files that were uploaded a year ago.
    If the former, then put in a support ticket. If it is the latter, I suggest you read my recent post here in this forum for an explanation as to how fulls and incrementals work.
  • How exactly does an incremental backup work?
    First of all, are you using the new backup format or the legacy format?
    The hardest thing to do is to wrap your head around what “incremental forever” means.
    In the legacy format, you take an initial full backup. That captures every file that is on source storage (that you included in the backup set).
    After that the ONLY things that get backed up are newly added files and modifications to existing files.
    Files that never change (think .pdf files) are never backed up again. They stay forever or until someone deletes them. If someone deletes a file, it will be kept in backup storage for as long as you set in the “keep files deleted on source for xx days” setting. A “Full” in the legacy format is simply a re-upload of the files that had block-level incrementals captured after the previous full, plus any new files added since the last incremental. These “fulls” are only slightly larger than the incrementals.
    The New Backup format is much more in line with traditional tape backups. You do a full backup of everything - run some incrementals, then do another full backup of everything, even the files that have not been (and never will be) modified.
    Until very recently you had to do a full backup at least once per month, meaning that if you have a 90-day version retention period you would have three complete backup sets in storage at any given time (actually four, but that is beyond this discussion).
    The only thing that makes this even remotely viable is that many (but not all) backend storage providers have a feature called “in-cloud copying” which allows the creation of synthetic fulls.
    A synthetic full takes all of the unchanged blocks of data in the previous full and copies them, behind the scenes, into a new full. This in-cloud copying processes at a rate of between 200 GB and 300 GB/hr, or 55-83 MB/s.
    Back to your question, in the new backup format, there is no separate setting for how long to save deleted files, you simply have x number of restore points based on the retention that you set in the plan. The setting applies to versions of files and deleted files.
    The latest feature, Forever Forward Incremental (FFI), eliminates the 3x-4x storage consumption problem by keeping only one full followed by x number of incrementals based on your retention setting.
    So if you have 30-day retention, on day 31 a new full will be created using the synthetic full process plus the oldest incremental. This process will happen every day going forward. You end up with a “rolling 30 days” of backup/restore points.
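    A minimal sketch of that rolling window - my own model of the behavior described, not MSP360 code:

```python
def chain_on_day(day: int, retention: int = 30):
    """Backups held in storage at the end of a given day (days are 1-indexed)."""
    if day <= retention:
        # still filling the first window: the initial full plus incrementals
        return [("full", 1)] + [("incr", d) for d in range(2, day + 1)]
    # rolling window: a synthetic full absorbs everything up to the cutoff day
    cutoff = day - retention + 1
    return [("synthetic_full", cutoff)] + [("incr", d) for d in range(cutoff + 1, day + 1)]

print(chain_on_day(31)[0], len(chain_on_day(31)))
# → ('synthetic_full', 2) 30
```

    On day 31 the full rolls forward to day 2, and storage always holds exactly 30 restore points.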
    The only issue I see with this is that for very large backup sets, and/or companies with very slow uplink speeds, the synthetic full might take longer than the hours available overnight. Not that this is a huge issue, but I have asked MSP360 to consider adding a feature that would allow us to specify when the synthetic full should run, perhaps once per week on weekends.
    And that is the short explanation.
    Hope it helps.
  • How exactly does an incremental backup work?
    When files get deleted on the source, they are flagged to be purged after the period of time set in the backup plan for keeping deleted files for x days/weeks/months.
    Incrementals back up new and modified files. The weekly full backs up all files that have outstanding block-level incremental versions.
    Your full Quickbooks file for example might be 300MB. The daily incremental is probably around 10MB.
    When you run the next full, it backs up the entire 300MB.
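    Putting a week together with those figures (the 300 MB file and ~10 MB daily delta are the example sizes from above):

```python
FULL_MB, DAILY_DELTA_MB = 300, 10   # figures from the QuickBooks example

# one weekly cycle: a full, then six daily incrementals
week_uploads = FULL_MB + 6 * DAILY_DELTA_MB
print(f"uploaded this cycle: {week_uploads} MB; next full re-uploads {FULL_MB} MB")
# → uploaded this cycle: 360 MB; next full re-uploads 300 MB
```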
  • Backup Agent 7.8 for Windows / Management Console 6.3
    The way I see it, they are mutually exclusive. The only value of FFI is to those of us who do not use GFS or immutability.
    It will hopefully make NBF file backups have storage consumption similar to the Legacy format.
    Am still trying to fully understand the new features, but when I do, I will write-up my assessment in this forum.
  • Bandwidth Settings
    I use the MSP version of the SW but I believe the code is the same.
    I set "When using Cloud/Local storage" to unlimited.
    I then specify the workday hours cloud bandwidth to something less as you have done, but leave local at unlimited since it has no impact on the performance of anything.
    During the workday the screen shows "specified" - and when I am in off-hours it shows Unlimited.
    I have not tested to see if perhaps the software is not using local time to adjust the bandwidth.
    The most important thing is whether it is actually lowering your bandwidth during the peak hours - something that you can see when the plan is running.
  • "retrieving data" operation takes a lot of time, is it normal? How can I speed it up?
    The new format does not support incremental forever, so it is not a good fit for Cloud file backups as it requires periodic "Fulls" which require re-uploading all 2TB of data.
    For Image and VHD Cloud backups - and for local drives - the new format works great. You would just need a larger NAS or USB drive.
    You could run a FULL once a month and incrementals each night.
    If you want to keep 90 days worth of backups of 2TB of data on a local drive, you will consume 8TB plus the daily incremental space consumption.
    I would recommend at least 10TB of backup drive space, preferably 12TB.
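    The sizing logic as a quick sketch - the 4-fulls multiplier (matching the 8TB figure above, since fulls overlap while the oldest ages out) and the 25% incremental headroom are my assumptions:

```python
def drive_needed_tb(data_tb: float, fulls_kept: int = 4,
                    incr_headroom: float = 0.25) -> float:
    """Estimated local backup capacity: coexisting fulls plus incremental headroom."""
    return data_tb * fulls_kept * (1 + incr_headroom)

print(f"{drive_needed_tb(2):.0f} TB for 2 TB of source data")
# → 10 TB for 2 TB of source data
```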
    I also recommend compression and encryption. They do not slow down the backups unless you are on a horrendously old, slow machine.
    I would also encourage you to utilize Wasabi or Backblaze to keep a copy of the data using legacy mode incremental forever in the cloud, as a local NAS device can break, get stolen (and not encrypting the data is asking for trouble), or be destroyed in a disaster.
    Alex is correct, the new format is orders of magnitude faster in bringing up the contents of storage to view/restore than legacy mode for large file stores, but takes significantly more space.
    Local space is a one-time, relatively cheap cost.
    Hope this helps.
    - Steve
  • How to Restore to Azure VM
    HYPERV replication does not rely on backups. It is a Windows feature that mirrors the state of the primary server and can be configured to keep multiple checkpoints to restart from on the HYPERV replication server. It is a manual failover - but can be completed in minutes.
    We use ramnode to host several clients' servers, as well as to spin up an instance in a disaster scenario where using our local spare server is not a viable option.
    Ramnode is significantly cheaper than Azure or AWS and has better, more accessible, technical support. We have used all three and prefer Ramnode overall.
    We only use the HYPERV replication feature for our largest, most critical customers, where downtime due to hardware or OS failures cannot exceed an hour.
  • Immutable Backups
    As for your dilemma, what you really want (and unfortunately cannot have) is the legacy format sent to Immutable storage.
    Why do you need immutable storage?
    Is the customer demanding that? Or are you just trying to use immutability to protect your data?
  • How to Restore to Azure VM
    HyperV replication is what we use for clients that need fast recovery from major, non-Disaster events.
    We keep the old server when upgrading and make it the replica.
    Other MSP’s rely on Datto-type devices to spin up virtual machines on prem.
    Neither of these approaches help in a true Disaster where the client equipment and/or site is unusable for whatever reason.
    For that we rely on a spare server in our office that can quickly download the client’s image/vhdx file(s), or we spin-up a Ramnode hosted instance and do the recovery from there. We tell clients that in a true disaster scenario, we will aim to have core systems up and running in less than 24 hours from the time a disaster is declared, with data from the previous backup.