• Change backup prefix?
    You can create a custom prefix with any name you want when setting it, but you'd still need to select it again after a new installation.
    You can even do that right now by renaming the CBB_Prefix folder to anything you want, so it looks like "CBB_Anythingyouwant", and then specifying the same prefix when editing the storage account (just type it in manually). Then you just need to sync the repository via tools > options > repository and that's it.
    Note that this is only applicable to situations where the rest of the folder structure stays the same.
    Other than that, there shouldn't be any problems with the backup prefix.
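    The rename step above can be sketched in a few lines; this is a minimal sketch, and the destination path plus the new name "CBB_Clients" are assumptions you should adjust to your own setup:

```python
import os

def rename_backup_prefix(destination_root, old_name="CBB_Prefix", new_name="CBB_Clients"):
    """Rename the backup prefix folder on disk.

    After renaming, re-enter the same prefix name when editing the
    storage account and run a repository sync (tools > options > repository).
    Returns True if the folder existed and was renamed.
    """
    old_path = os.path.join(destination_root, old_name)   # current prefix folder
    new_path = os.path.join(destination_root, new_name)   # any name you want
    if os.path.isdir(old_path):
        os.rename(old_path, new_path)
        return True
    return False
```

    The function only touches the folder on disk; the prefix still has to be updated in the storage account settings and synced as described above.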
  • Multiple Buckets MSP Edition
    It's simpler than that. Some of our clients simply prefer to track individual bucket names instead of checking which destination is assigned to a particular user/company.

    Just to add something: in my opinion, the best approach is a bucket-per-company setup: you get enough granularity and destinations are really easy to manage.
  • Amazon S3 Bucket and Client size discrepancy
    Thanks!
    Here's what I suggest:
    Run a repository sync (tools > options > repository) for both destinations and try performing a full backup after that.
    I extended your trial period for another 15 days.

    If that doesn't help, you'd need to open a support ticket for that.
  • AWS Glacier - Archive deletion/Vault deletion (Help!)
    Might be a problem with synchronization due to how Glacier works. The first thing I'd try is to synchronize the repository via tools > options > repository and then check the storage tab after 5 hours have passed. The global Glacier inventory update occurs every 24 hours, so you might need to wait a bit longer.
  • Multiple Buckets MSP Edition

    Security is not the main problem here. For Wasabi it doesn't really make a difference, but on B2 it could cause performance problems due to their API limitations.
    As for security concerns, it would, of course, be better to give each user their own bucket, which also makes destination tracking easier.
  • Amazon S3 Bucket and Client size discrepancy
    That's unusual. Can you provide a screenshot?
  • Cloud Storage Providers & New Features
    Correct, Hybrid and VM synthetic backups should be implemented soon. VMs should be the first, I believe.
  • Cloud Storage Providers & New Features
    B2 and Azure are next on the list for synthetic backup, but I don't have any info on the release date right now.

    Regarding Wasabi: yes, its universal S3-compatible API is actually its main advantage. Our software was designed around S3, so naturally it works very well with S3-compatible cloud storage providers.
  • Cloud Storage Providers & New Features
    Well, all enterprise-level cloud storage providers are basically the same in that regard, so we don't see much point in maintaining a comprehensive list of such features. It's not really beneficial for cloud storage providers to be radically different from each other.
    The closest that we have is this table: https://help.cloudberrylab.com/cloudberry-backup/backup-destinations/feature-comparison

    Currently only S3 and Wasabi are a little bit different, since they support synthetic backups.

    As for the Wasabi vs. B2 question: a lot of our customers are moving from B2 to Wasabi. The main problems with B2 are its API limitations and (sometimes) upload speed.
  • Sequence contains more than one matching element
    That's a known issue with VM backups. To properly register the issue in our bug tracker and add your info there, I suggest sending us diagnostic logs via the tools > diagnostic menu.
  • Error - There is no connection to the Cloud
    To fix such problems in a closed network environment you need to:
    1) Go to C:\ProgramData\CloudBerryLab\CloudBerry Drive
    2) Find the productConfig.xml file
    3) Open it in Notepad and edit the following parameter:
    <CheckInternetConnection>true</CheckInternetConnection>
    Just change it to
    <CheckInternetConnection>false</CheckInternetConnection>
    That should work.
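    If you need to apply this across several machines, the same edit can be scripted. This is a minimal sketch: only the element name and file path come from the steps above, and it assumes the element sits somewhere inside the root element of productConfig.xml:

```python
import xml.etree.ElementTree as ET

def disable_connection_check(config_path):
    """Set <CheckInternetConnection> to "false" in the given config file.

    Returns True if the element was found and updated, False otherwise.
    """
    tree = ET.parse(config_path)
    # Search all descendants of the root for the connectivity-check flag.
    node = tree.getroot().find(".//CheckInternetConnection")
    if node is None:
        return False
    node.text = "false"
    tree.write(config_path, encoding="utf-8", xml_declaration=True)
    return True
```

    Usage would be along the lines of disable_connection_check(r"C:\ProgramData\CloudBerryLab\CloudBerry Drive\productConfig.xml"); note that editing files under ProgramData may require an elevated (administrator) session.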
  • False positive ransomware
    You should see the corresponding warning in the GUI. When you click on it, you can approve or delete the flagged files from that menu.
  • Just a caution regarding scheduled backups
    Thanks once again, quite interesting stuff.
  • Just a caution regarding scheduled backups
    I would say this is overkill, but you can actually share the script with us. I believe we already discussed that in one of our support tickets. If the script is indeed useful, I can share it with our developers and discuss it with our R&D team.
  • no option to verify backups
    That's a very old article, but I guess it's still relevant.
    Such files can be backed up simply by using a file backup plan; nothing complicated here.

    You can PM me here on the forum (emails can't be seen by other users) or just send an email mentioning my name. All info in the ticket system is confidential.
  • Path doesn't exist for Sharepoint mapped drives
    SharePoint mapped drives are not yet available as a backup target. It's on our roadmap, but there's no ETA yet.
  • no option to verify backups
    The procedure is quite simple: a file is technically not in the folder until it's fully copied there, so if you already have a large data set uploaded to the cloud, the software will start uploading the file almost immediately once it detects its presence in that location.
  • Backup fails to access local OneDrive folder
    Until Microsoft does something about how files like that are handled, we can't really do much on our side.
    Creating a separate plan with VSS disabled, along with the "Files On-Demand" option enabled on the OneDrive side for such data, is the only workaround I can suggest.
    I've already created a feature request regarding this problem; some improvements should be implemented in version 6.2.
  • no option to verify backups
    Yes, that's correct; that's why this option is there.
    We've never seen a single case where MD5 checks failed during backup/restore procedures.
  • MBS + B2 - 7% Discrepancy in Storage Reporting
    Thanks for providing that info. According to our devs, a couple of other users have encountered this issue. The problem is still under investigation.