If you're using block-level backups, you need to schedule both full and block-level runs. If you're not using block-level, you just run the same type of backup over and over, and every file that changes is backed up in its entirety each time, which makes retention very simple. If you want to be sure the last backup of a file is kept even when the file hasn't been touched in more than 6 months, so it isn't deleted from backup storage, there's an option for that. The only chains that exist are the full backup of a file and its block-level backups, if you have that option on. And if your retention settings call for an older version to be removed, but block-level backups in storage still depend on it, the full backup of that file will not be removed until it's safe to do so. So most customers will, for example, schedule nightly backups and then once a month or once every two weeks run a backup that backs up the block-level files in full. But there are no backup sets, as I've already discussed: each file exists in a universe of its own and has no relationship to other files.
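To make that pattern concrete, here's a minimal sketch (not CloudBerry Backup code) of the schedule described above for a single block-level file: nightly runs upload only changed blocks, and a periodic run re-uploads the file in full, starting a new chain. The dates, interval, and change pattern are made up purely for illustration.

```python
from datetime import date, timedelta

# Hypothetical model of the schedule above for ONE file using block-level backup.
FULL_EVERY_N_DAYS = 30                      # e.g. the monthly "full" run
start = date(2024, 1, 1)

chains = []                                 # all chains for this single file
for day in range(60):
    run_date = start + timedelta(days=day)
    file_changed = (day % 2 == 0)           # assume the file changes every other day
    if not file_changed:
        continue                            # unchanged files are simply skipped

    if day % FULL_EVERY_N_DAYS == 0 or not chains:
        chains.append([("full", run_date)])        # new chain: full copy of the file
    else:
        chains[-1].append(("block", run_date))     # only the changed blocks are uploaded

for i, chain in enumerate(chains, 1):
    print(f"chain {i}: 1 full + {len(chain) - 1} block-level versions")
```

Each file keeps its own chains like this, independent of every other file, which is why there are no backup sets to manage.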
If you're talking about file backups, there's no such thing as a full backup of the entire data set; a "full" just means each file is backed up in its entirety. All retention should be handled by our product: you should not use any retention settings available from the cloud storage providers to remove data independently of the retention settings in our product. When you run a backup for the first time, all files are backed up in their entirety, and each file is backed up as a separate object. Every backup after that is incremental, meaning only files that are new or have changed are backed up. If you use the block-level backup option, that simply means that when a file is large enough and only a small part of it changed, we back up only the changes. That can help with large files like Outlook PST files, which may change only 10 or 20 megabytes per day but be many gigabytes in size. With regard to retention, including block-level backups, everything happens at the file level, not at the backup-set level. If you let our product manage all the retention, nothing will ever be removed in an unsafe way. Meaning, we will never remove a full backup of a file and leave its incremental block-level backups in storage, because that would not allow you to restore. We only remove the full and its related incremental backups when it's safe to do so based on your retention settings.
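Here's a hedged sketch of that per-file retention rule. It only models the behavior described above (it's not the product's actual implementation): a chain is one full backup of a file plus its block-level increments, and the whole chain becomes removable only once every version in it falls outside the retention window.

```python
from datetime import date, timedelta

RETENTION = timedelta(days=90)              # illustrative retention window
today = date(2024, 6, 1)

def purgeable(chain, today, retention):
    """A whole chain (full + its block-level increments) can be removed only
    when every version in it is older than the retention window."""
    newest = max(version_date for _, version_date in chain)
    return today - newest > retention

chain = [("full", date(2024, 1, 1)),
         ("block", date(2024, 2, 15)),
         ("block", date(2024, 5, 20))]      # a recent increment still needs the old full

print(purgeable(chain, today, RETENTION))   # False: the 2024-05-20 increment is in retention
```

The old full stays in storage even though it's past the 90-day window, because removing it would orphan the newer block-level increments and break restores.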
Performance depends on several variables: disk read speed, available CPU cores, the number of threads used in the backup product, network speed, data egress speed from AWS, and data ingress speed at Backblaze. It's hard to say what's limiting you to the 16 Gbit speeds you're seeing. Both services can support faster speeds than that, so it's most likely a VM performance issue. How fast can you read data inside the VM? What network speed do your virtual adapters support? How many threads are you using in CloudBerry Backup (Options | Advanced)? And what about the sizes of the files being backed up? Are they small or large?
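If you want to rule out disk read speed inside the VM, a quick check like the one below can help. It's just a rough sketch; the sample path is an assumption, so point it at any large file sitting on the disk you're backing up.

```python
import time

SAMPLE_PATH = r"C:\temp\large_sample.bin"   # assumption: any multi-GB file on the source disk
CHUNK = 8 * 1024 * 1024                     # read in 8 MiB chunks

read_bytes = 0
start = time.perf_counter()
with open(SAMPLE_PATH, "rb") as f:
    while True:
        chunk = f.read(CHUNK)
        if not chunk:
            break
        read_bytes += len(chunk)
elapsed = time.perf_counter() - start

mbit_per_s = read_bytes * 8 / 1_000_000 / elapsed
print(f"Read {read_bytes / 1_000_000_000:.1f} GB in {elapsed:.1f}s "
      f"≈ {mbit_per_s:,.0f} Mbit/s")
```

If that number is close to the speed you're seeing in backups, the bottleneck is the VM's storage, not the backup product or the cloud providers.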