They are just File/Folder backups. Currently the Legacy plans are set to run a block-level backup every day and an incremental every 7 days.
I see that the new backup format does not have block-level backups. My question is: what should I be setting instead? Daily incrementals with a full backup every so often?
When using the new backup format, you'll need to schedule incremental backups (daily, for example) and also schedule the full backup, say monthly. Full backups to the cloud will run using the synthetic option: they're created from the data already in cloud storage combined with the latest incremental data, so they run faster than a traditional full backup. Full backups to local disk will still have to run in full.

Schedule your full backups with the understanding that you'll have multiple backup sets in storage, and weigh that against your retention requirements, which are set on the Retention tab and determine how many days of backups you keep. For example, if you're keeping 90 days' worth of backups and you run a full backup every 30 days, you'll end up with four backup sets covering 120 days, at which point the oldest 30-day set will be removed from storage, leaving you with 90 days.

If you do need features like GFS retention or immutability, you can enable those options on the Retention tab as well. If you provide some details about what you're using now for retention, I might be able to provide better guidance on the ideal options with the new backup format.
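To make that set arithmetic concrete, here's a quick back-of-the-envelope sketch (plain Python, not anything from the product) of the worst-case number of backup sets, and days of data, you'd hold in storage for a given retention window and full-backup interval. It assumes a set can only be purged as a whole, once every backup in it has aged past the retention window:

```python
import math

# Back-of-the-envelope sketch; assumes a backup set is only removed
# once every backup in it is older than the retention window.
def worst_case_storage(retention_days: int, full_interval_days: int):
    """Return (max_sets, max_days_in_storage) in the worst case."""
    # Sets needed to cover the retention window...
    sets_to_cover = math.ceil(retention_days / full_interval_days)
    # ...plus one more set that must fully age out before the oldest
    # set can be deleted.
    max_sets = sets_to_cover + 1
    return max_sets, max_sets * full_interval_days

print(worst_case_storage(90, 30))  # (4, 120): four sets, up to 120 days on disk
print(worst_case_storage(90, 90))  # (2, 180): two sets, up to 180 days on disk
```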
Thanks for replying, David. Currently we have a two-year retention on the plans. My main concern with running full backups so frequently is the amount of space they take up; some of the plans we run aren't backing up any more than 20 GB, and some are backing up over 3 TB of data.
I cannot speak to the reason Support recommended that you move to the new backup format. If it's because of what the KB article talks about (millions of files), then moving to the new backup format may only be needed on those backup plans with a large number of files.
Generally speaking, though, for those plans with large numbers of files, the new format has much improved backup and restore performance, plus client-side deduplication, synthetic fulls (new for file backups), GFS retention, immutability for Wasabi / Amazon S3 (and soon for Backblaze B2), and built-in backup consistency checks.
As you've noted, the new format does require periodic full backups because backups are managed at the generation level - a generation being a full set of the data. On those systems where the source data is large, you can schedule less frequent full backups. And you may want to consider using GFS retention to keep data longer term, but with less granularity in restore points as the data ages.
Some of these settings are going to depend on your negotiated retention and backup contracts with your customers. You say you are keeping 2 years, but you didn't indicate whether you're keeping all versions of every file (including files deleted locally) for 2 years, or only a fixed number of file versions.
For example, on your 3 TB data set, you could:
* Run daily incrementals - or more frequently if needed
* Run a full backup every 3 months
* Keep backups for 90 days
* Turn on GFS and set Yearly backups to 2 years
With those settings you'll have all data kept for 90 days (you'll really hold up to 180 days before the oldest 90-day backup set can be removed). Beyond the 90 days, you'll keep full backup snapshots for 2 years. So you may end up with 4 full backup snapshots in storage plus up to 180 days of incrementals. Obviously, those retention and schedule settings can be adjusted as needed for your particular use case.
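If it helps to sanity-check the storage footprint for that 3 TB example, here's a rough sketch (plain Python, with illustrative numbers I've assumed, not a product calculator). The daily change rate is a made-up 1%; real usage depends on compression, deduplication, and how much data actually changes:

```python
# Rough worst-case storage estimate for the 3 TB example above.
# All numbers are assumptions for illustration only.

FULL_SIZE_TB = 3.0          # size of one full backup
DAILY_CHANGE_TB = 0.03      # assumed ~1% daily change captured by incrementals
FULL_INTERVAL_DAYS = 90     # full backup every 3 months
RETENTION_DAYS = 90         # keep backups for 90 days
GFS_YEARLY_FULLS = 2        # GFS: keep yearly fulls for 2 years

# Worst case before the oldest set can be purged: two regular sets
# (up to 180 days of incrementals) plus the GFS yearly fulls.
regular_sets = RETENTION_DAYS // FULL_INTERVAL_DAYS + 1
incremental_days = regular_sets * FULL_INTERVAL_DAYS

storage_tb = ((regular_sets + GFS_YEARLY_FULLS) * FULL_SIZE_TB
              + incremental_days * DAILY_CHANGE_TB)

print(f"{regular_sets + GFS_YEARLY_FULLS} fulls, "
      f"{incremental_days} days of incrementals ≈ {storage_tb:.1f} TB")
# -> 4 fulls, 180 days of incrementals ≈ 17.4 TB (with these assumed numbers)
```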