• New backup format and incremental forever.......
    It's a full backup as you mentioned. But so are the full backups that need to be run periodically with image and VM backups that use the legacy backup format. The client-side deduplication can help save space with the incremental backups.
  • New backup format and incremental forever.......
    You can run the full backup on a schedule that works for you. However, depending on retention settings, if you use GFS, then you may need to have a recurring full backup set on a specific schedule to support those GFS settings. But you would be warned if you mismatch GFS and your full backup schedule.

    Synthetic fulls are as you described. We run an incremental backup and then use the bits and bytes already in the cloud to reconstruct the new full without the need to upload all the data again. It's possible that your storage requirements could go up depending on file changes. But that's only really true for file backup. For image and VM backup, storage requirements would be about the same or less because of client-side deduplication. But it also depends on retention settings. There's a rough sketch of the synthetic full idea at the end of this reply.

    Correct on the last point. If you're not using GFS, or you want a longer retention to give yourself a longer window to discover that files have been deleted locally, then you may be better off sticking with the legacy format for file backups because of its version-based retention and incremental-forever style of backup. And that backup format isn't going anywhere, so if it works best for you, then stick with it.
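
    To make the synthetic full idea concrete, here's a rough conceptual sketch in Python. It is not our actual implementation, and all the names in it are made up; it only shows how chunks already in the cloud can be reused while just the new or changed chunks are uploaded:

    def upload(data):
        # Placeholder for a PUT to the storage provider.
        return f"key-{hash(data)}"

    def build_synthetic_full(previous_full, changed_chunks):
        """previous_full: {chunk_id: storage_key} already in the cloud.
        changed_chunks: {chunk_id: bytes} produced by the incremental run."""
        new_full = {}
        for chunk_id, data in changed_chunks.items():
            new_full[chunk_id] = upload(data)      # only new/changed data goes up
        for chunk_id, key in previous_full.items():
            new_full.setdefault(chunk_id, key)     # everything else is reused as-is
        return new_full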
  • Version 7.4.0.161
    7.4.1 was released today.
  • MSO365 SMTP Basic Authentication Discontinued
    I'm not sure I fully understand. You posted a link about Microsoft 365 no longer supporting Basic Authentication and then you talk about using our email service for notifications. Maybe I am missing the connection.

    The Managed Backup notification feature supports using our servers, using Amazon SES, or using your own SMTP server. Many customers opt to use their own servers - we support SSL and full authentication.

    More about notification options here in help:
    https://mspbackups.com/AP/Help/settings/email-service

    But in case I am misunderstanding, please restate the question.

    Thanks.
  • New backup format and incremental forever.......
    Here's my response:
    1 - The new backup format (for files) is not designed for incremental-forever backups like the legacy format. The new format requires periodic full backups, since it uses a generational approach (full backup sets) for data retention. These full backups are synthetic fulls (on most cloud storage providers) and will run quickly compared to the first full backup. For image and VM backups, the periodic need for a (synthetic) full backup is no different from the legacy format.
    2 - You get many new features using the new backup format, as you've pointed out: better performance, faster restores, client-side deduplication, synthetic fulls for all backup types, GFS retention, immutability on AWS and Wasabi (and soon others). We also support restore verification for image backups.
    3 - The new backup format also includes built-in backup consistency checks. So, to your last point about someone deleting backup data, there are no notification issues like you describe with the new format. We will see missing backups and automatically run a full backup if there are backup files that are missing or incorrect.
  • C++ 2010 Library Required, but Unable to Download
    I believe engineering is aware of this particular issue that some customers are running into. They are working to address it. I'm glad you're up and running. Thanks for the update.
  • Storage Account not applied
    Can you tell me what agent version you're running? The only reference to that error I see in the system is back in version 7.2.2, and it was fixed in 7.2.2.1. I'd probably recommend that you open a support case, but if you are running an older version of the agent, then upgrading the agent, at least on one endpoint with the issue, as a test might be worthwhile.
  • constraint failed UNIQUE constraint failed: cloud_files.destination_id, cloud_files.local_path
    The only reference to this issue I can find is for file backups when the backup plan has duplicate files / folders selected for backup. This can happen when you specify a Users folder dynamically for backup (e.g. %userprofile%\nicholas) and then also have a file or folder in the Nicholas Users folder selected for backup as well. A quick way to check for that kind of overlap is sketched at the end of this reply.

    If this is not the case, then reply back with the information requested above and I'll see what I can find.
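
    If it helps, here's a quick, hypothetical Python check (not something built into the product) that flags overlapping selections once variables like %userprofile% are expanded. It assumes a Windows-style environment for the %...% expansion:

    import os

    def overlapping_selections(selected_paths):
        expanded = [os.path.normcase(os.path.normpath(os.path.expandvars(p)))
                    for p in selected_paths]
        overlaps = []
        for i, a in enumerate(expanded):
            for b in expanded[i + 1:]:
                if a == b or a.startswith(b + os.sep) or b.startswith(a + os.sep):
                    overlaps.append((a, b))
        return overlaps

    # e.g. a dynamically selected profile folder plus an explicit subfolder of it
    print(overlapping_selections([r"%userprofile%", r"C:\Users\nicholas\Documents"]))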
  • constraint failed UNIQUE constraint failed: cloud_files.destination_id, cloud_files.local_path
    Version?
    What type of backup are you running - file, image, SQL Server, other?
    New or legacy backup format?
    What's your backup storage target?
  • Version 7.4.0.161
    Brandon, I recommend you open a support case to report the issue and ask them about the 7.4.1 hotfix, which I believe Support now has available for those customers experiencing the issue you're reporting.

    A few other points:

    I cannot address directly what happened to cause this issue for those customers reporting it, but the reports seem to be isolated to a few customers. I realize that you're experiencing the issue and that probably comes as little consolation. But the team is aware of these reports (and your logs would help the team to ensure the issue is properly corrected). We do strive to ensure all new releases are fully verified before they are made available, but sometimes an issue can occur.

    There is already an internal item about rolling back to a previous version and I have added your comments and some of my own about how we might improve remediation in the future.

    If you do not have the Download - Sandbox enabled, you can enable it if you want to test builds internally or deploy to a customer before making it public to all customers. And as I tell customers, upgrading to a new version is under your control. If you want to wait some time after a new release before deploying to your customers, you can certainly do so.

    But for now, I'd report the issue and ask support about the hotfix.

    I will speak to the team to better understand what occurred and work with them to ensure QA has this new test case in their arsenal going forward.


    Thanks.
  • Backup retention isn't deleting old backups from 3 years ago. How do I remove these?
    Have you verified that those backups are actually in storage? If not, I would check first, as it's possible the repository database is just not properly synchronized. If they are there you should be able to delete them from the backup storage tab. If not, synchronize the repository and that should fix the issue.

    If the backups are, in fact, in backup storage:
    Are you saying that when you right-click on the full backup you want to remove, there's no option to delete it?
  • CloudBerry Backup Questions
    That's correct. Each generation is a full copy, even if it's created using a synthetic full operation. That's needed for proper GFS management and Immutability / Object-Lock if the customer enables that feature. We do not yet support Google Cloud for synthetic full operations.
  • CloudBerry Backup Questions
    There are likely going to be multiple IO operations per file, probably based on the chunk size you're uploading. If the chunk size is large enough and most of your files are under that size, then you may get two IO operations per file. But again, I wouldn't spend too much time thinking about IO operation cost, as it's likely to be very low. Just find the calculator that Google has posted and type in a number that's something like 10 times the number of files you're uploading to get a very rough, high-level estimate of what those operations might cost for the initial backup. After that, costs should be relatively low, as file changes should be minimal compared to the total number of files already backed up. There's a rough back-of-the-envelope sketch of that arithmetic at the end of this reply.

    But you're correct that the legacy file backup format uses a one-to-one ratio of file to object in the cloud, whereas the new format groups many files together into archives. It's not a single archive, although it can be if the number of files backed up is small or the sizes are small. But there is a lot of grouping going on, which will minimize IO and the associated latency when backing up to the cloud.
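
    Here's the kind of back-of-the-envelope arithmetic I mean, as a rough Python sketch. The per-file operation counts and the sample rate are assumptions on my part, so use the storage provider's own calculator for real pricing:

    import math

    def estimate_put_ops(file_sizes, chunk_size):
        # Legacy format: roughly one object per file, with large files adding
        # extra operations per chunk/part (the "+ 1" is an assumed overhead).
        legacy = sum(math.ceil(s / chunk_size) + 1 for s in file_sizes)
        # New format: files are grouped into archives, so operations scale more
        # with total data over chunk size than with the file count.
        grouped = math.ceil(sum(file_sizes) / chunk_size)
        return legacy, grouped

    sizes = [5 * 1024**2] * 200_000                  # 200k files of ~5 MB each
    legacy, grouped = estimate_put_ops(sizes, 10 * 1024**2)
    print(legacy, grouped, legacy / 10_000 * 0.05)   # assuming $0.05 per 10k PUTs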
  • CloudBerry Backup Questions
    Sorry, but I can't estimate the various API calls and related charges for the clouds. You may be able to estimate using the number of files and the chunk size (or part size, as we call it in Options for the legacy format), and then use whatever calculator the cloud storage provider offers to help estimate those costs. But as I said earlier, I've never seen API charges be expensive.

    The new backup format uses backup generations, and those backup generations are kept as whole sets. It's not version based because GFS demands that an entire backup set is kept. So it just works differently. But both backup formats are staying around and you should use the one that works best for you.

    For file backups, use the one that is best for you. For image and virtual machine backups, the new backup format is far superior. As far as client-side deduplication goes, you're not going to benefit much if you're talking about documents that do not change much, like digital music. But even so, the legacy file backup format supports block-level backups if enabled via scheduling. And those block-level backups can back up only the changes within larger files, so the whole file doesn't have to be backed up each time it changes. But that's generally something that's really helpful for very large files, like an Outlook PST file, which can grow to many gigabytes in size. For something like music files that are generally in the megabytes, that feature won't provide much utility, and it's much better to just upload the file in full if it changes for some reason.
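
    For what it's worth, here's a toy Python sketch of the block-level idea only (not our actual algorithm): hash the blocks of a big file and re-upload only the blocks whose hash changed since the last run.

    import hashlib

    BLOCK = 4 * 1024 * 1024  # arbitrary 4 MB block size for the sketch

    def changed_blocks(path, previous_hashes):
        """previous_hashes: {block_index: sha256_hex} saved from the last run."""
        to_upload, current = [], {}
        with open(path, "rb") as f:
            index = 0
            while chunk := f.read(BLOCK):
                digest = hashlib.sha256(chunk).hexdigest()
                current[index] = digest
                if previous_hashes.get(index) != digest:
                    to_upload.append(index)   # only this block needs re-uploading
                index += 1
        return to_upload, current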
  • CloudBerry Backup Questions
    The best way to determine transaction fees for those cloud service providers that charge such fees is to look at your monthly bills after using our product. There's no real way to estimate the number of operations using our software since there are many variables to consider. But usually those fees are pretty low, so I wouldn't expect them to consume any significant part of your monthly cost. However, if you prefer not to deal with those fees, there are cloud service providers out there that do not charge any data egress or API fees that you can look into. But I think you'll find that even with those charges, the Backblaze B2 fees are extremely reasonable.

    Regarding your second question: If you're using the new backup format for your file backups, and you're using GFS retention, then you'll maintain backup sets for the total duration of your GFS settings. So if you keep annual backups for 3 years, then your oldest backup that contains all of your files will not be removed for 3 years. And presumably, you'd realize by then that something needs to be restored. Having said that, you may be better off using the legacy backup format for your file backups, as that format has version-based retention, an option to always keep the last version of every file even if it's past the retention period, and an option to keep the file backup if the original is deleted. Since that format is not backup-set based, we back up everything only the first time, and after that we back up only new and changed files (incremental forever). And if you use the proper retention settings mentioned above, you'll never have to worry about old files that were removed locally on your computer without your knowledge disappearing from backup storage. So my recommendation would be to move over to the legacy file backup format to solve your particular needs.
  • Suggestion for clean termination of backup
    I understand. I am assuming it's only your initial backup that takes that long, correct?

    You can cleanly stop the backups, however. What you do not get, I believe, is in-progress file completion - although I'd need to check with the engineers to understand exactly what happens when you click Stop, as the process may complete some files if they are close to completing. The next backup you run should pick up from where the last one left off.
  • Suggestion for clean termination of backup
    That's not how VSS works with MSP360. Once the backup is complete the snapshot is stopped and removed. Snapshots do not stick around, nor should they. But you can sometimes run into a VSS issue, and many of those issues can be addressed by reviewing that knowledge base article.
  • Suggestion for clean termination of backup
    What you're referring to is VSS, which you presumably have enabled. VSS is Microsoft's Volume Shadow Copy Service, and it allows applications such as ours to quiesce the volume in order to back up files that are in use by other applications. If you run into an inability to back up some files because they are in use, then enabling VSS is the best way to overcome that limitation. However, if you're backing up files that are not in use, then you may not need VSS enabled for backup.

    The snapshots that are created during the VSS process have nothing to do with the backups themselves. If you're having a problem with VSS then I would suggest you reach out to support in an effort to understand what's going on as they may be able to assist and provide some remedy to that particular issue so you don't have that issue again. Or, you can refer to this knowledge base article: https://www.google.com/amp/s/www.msp360.com/resources/blog/troubleshooting-vss-volume-shadow-copy-issues/
  • Suggestion for clean termination of backup
    You mention volume snapshots. Are you running Image backups and not File backups, as I had assumed from the original post?
  • Suggestion for clean termination of backup
    I know you can stop the backups now and continue them later. Any in-progress files would be backed up fresh the next time. Are you saying that that's insufficient for your needs? What if one of the files you're uploading is 60 GB in size and is only 1 GB into the upload? What's the advantage, from your perspective, of waiting for 59 more GB until the backup of that file is complete, versus terminating the backup safely as quickly as possible and then backing up that 60 GB file the next time the backup runs? Usually when a backup needs to be stopped manually, while infrequent, it's because of a pressing need to turn off or cycle the machine.