Comments

  • Beyond frustrated
    Just want to make sure you're good, or if you have additional questions. I have passed your comments on to the team, and I agree we could improve the UI validation to ensure the full backup step is not missed. But I wanted to make sure your backups are now set up correctly and you're good to go. Let me know. Thanks.
  • Feature addition suggestion: Warn before shutdown/restart/sleep when running backup
    Most backups can continue from where they left off. If that was the case for the backup you were running, would the addition of such a feature still provide the same utility for you?

    How would you expect end users to respond? And how would we handle "I'm done with work and closing the lid of my laptop" scenarios? Those users would not even see the screen. Most people I know just close the lid when they are done using their laptop and expect it to go to sleep.

    But even for desktops, I wonder under what circumstances an end-user would really not shut down the system, despite the message. Let's say they are done for the day and they get the message a backup is running when shutting down. What would you expect they would do?

    Maybe a solution is to have an option to lock the computer and shut down when the current backup completes. I'm not sure this would work. If they are on a laptop, they're probably just closing the lid, and even if the laptop does not shut down (I don't like that thought), they will probably lose their internet connection anyway when they leave the office, and the backup stops.

    I could see this as a useful message for power users and administrators working on servers. But I wonder what non-technical end users would do.

    I'll check the system to see if a request already exists. If not, I'll add one for user-initiated shutdown requests. I'm not sure of the best solution here, but let me know if you have any additional thoughts from the MSP perspective.
  • File access with file manager.
    There's a difference between accessing files through a file manager (literally) and accessing files through a file manager (in the way we mean it). Yes, you can access the files, but you cannot do anything with them using Advanced Mode backup, short of seeing them and maybe deleting them. So, I would argue that for most use cases, accessing the backup files via a file manager will not provide any advantage to you.

    What we mean is that if you want to be able to replicate files, access them directly using a file manager (local or cloud), and leave them in their native format, then you need to use Simple Mode.

    Advanced Mode changes the format of the file as it's processed and adds compression and encryption. You would then need to use our product to restore. Simple Mode does not provide true backup. It's really simple replication of files without version-based retention, compression, and encryption.
  • Beyond frustrated
    Yes, you can. But you'll have 3 weeks, just like Acronis, before the oldest backup set can be removed. If you're keeping 14 days, and you have weekly full backups and daily incrementals, then you have 1 full + 6 incremental backups before a new set is created. Since you are keeping 14 days, you cannot delete the first backup set until the third is complete on day 21. Then you'll be back to 14 days and the process will start again. Even so, you are paying for a 30-day minimum with Wasabi, so you'll stay within that 30-day window, keeping your cloud charges to a minimum.
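    The day-21 figure above can be sketched as simple arithmetic. This is only an illustration of the reasoning in this reply (the day numbering is assumed; it is not the product's actual deletion logic):

```python
# Sketch of the retention math above (assumed day numbering;
# not the product's actual deletion code).
FULL_INTERVAL = 7      # weekly full + 6 daily incrementals per set
RETENTION_DAYS = 14    # keep 14 days of data

# Set 1 covers days 1-7, set 2 covers days 8-14, set 3 covers days 15-21.
# Set 1 can only be removed once every day it covers falls outside the
# 14-day retention window, i.e. after day 7 + 14 = day 21 -- which is
# exactly when the third set completes.
first_set_last_day = FULL_INTERVAL                          # day 7
earliest_removal_day = first_set_last_day + RETENTION_DAYS  # day 21
print(earliest_removal_day)  # 21
```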

    Just enable that full backup as described above since you do not have one enabled yet.
  • Beyond frustrated

    I think we could do better with a warning about a full backup not being scheduled, while still providing flexible scheduling options. That would, at the very least, prevent the accidental creation of a backup plan without a full backup scheduled. I have already spoken to the team about these design improvements and will reiterate your frustration to them.

    But for now, what you need to do is schedule the Full backup by checking that option and you should be good to go.
  • Beyond frustrated
    Let's correct the problem you have and get full backups scheduled so your old data can be properly removed. You must schedule your full backups. For your use case, I would consider scheduling them weekly (I do not think we have an every-2-weeks option yet). In that case, you can remove the incremental backup on the full backup day. With retention set at 2 weeks, you should never have more than 3 backup sets in storage before the oldest is removed, keeping you under 30 days.

    As an alternative, you can run full backups monthly, but then you'll end up with 2 months of backups in storage - if retention is set for 1 month. The benefit, though, is that you'd run fewer full backups over the course of the year, and you might find storage use close to that of the weekly full backups. And you could keep customer data for longer - 2 months versus 2 weeks.

    Yes, you are correct that for retention to work correctly with the new backup format, some recurring fulls need to take place. But "misalignment" of scheduling vs. retention is always possible, even with everything configured. For example, if you run a full backup once a month and only want to keep 1 week of data, you'll effectively be keeping 2 months of data because of the full backup schedule.
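    The monthly-full worst case above works out the same way. A minimal sketch, assuming 30-day months and the same set-deletion rule as the weekly example (illustrative only):

```python
# Sketch of the scheduling-vs-retention "misalignment" above
# (assumed 30-day months; not the product's actual logic).
full_interval_days = 30   # one full backup per month
retention_days = 7        # only want to keep 1 week of data

# A set spans a whole month, and the oldest set can only be dropped at
# a full-backup boundary, once a newer set fully covers the retention
# window. Just before that happens you are holding set 1 (month 1)
# plus set 2 (month 2), so data effectively lives about two full
# intervals, regardless of the short retention setting.
worst_case_days = 2 * full_interval_days
print(worst_case_days)  # 60, i.e. roughly 2 months
```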
  • File level restore of Image Based Backup // key not working
    It sounds like you may have an incorrect encryption password, but Support can probably assist - and may be able to see additional details in the log files.
  • Changing drive letter for backed up Windows directory hierarchy
    That article has been updated by the team to include references to the new backup format.
  • File level restore of Image Based Backup // key not working
    I would open a Support Case. Are you restoring from the source computer or are you restoring from another PC?
  • Beyond frustrated
    That first image is only for the incremental backups. You are running incremental backups every day. That's fine, but what I need to see is the screen behind it that has a separate option for the full backup called "Execute Full Backup (Synthetic Full If Possible)". A full must be scheduled or you'll end up with an incremental-forever issue. On Wasabi, fulls will run as synthetic fulls and will complete much faster than the initial full backup did.
  • Beyond frustrated
    Can you reply with your retention settings for the new backup format? Also, check your backup plan to ensure you're running a full backup on schedule. If you're only running incrementals, then your backups will never be deleted.
  • Beyond frustrated
    I'll need some clarification on exactly what you're doing to be able to assist. First, I need to know if you are using the new backup format or the legacy format that's been around for a long time. I also need to know if these are file or image-based backups. And then I need to know what your retention settings are.

    Something you need to be aware of with Wasabi is that if you use your own Wasabi account, there is normally a 90-day retention requirement. So, even if you delete data before those 90 days, you'll be billed an early delete fee for the remaining time. If you're using our integrated Wasabi storage option, then we only require 30 days retention.
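    The early-delete fee described above is easy to estimate. A minimal sketch with made-up numbers (check your own Wasabi plan's terms; the 40-day figure is just an example):

```python
# Sketch of an early-delete charge as described above
# (illustrative numbers only; verify against your plan's terms).
minimum_retention_days = 90   # Wasabi's typical minimum for own accounts
stored_days = 40              # example: object deleted after 40 days

# Deleting early doesn't reduce the bill below the minimum: you are
# charged as if the object were stored for the full minimum period.
charged_days = max(minimum_retention_days, stored_days)
early_delete_days = charged_days - stored_days
print(charged_days, early_delete_days)  # 90 50
```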

    Usually, though, excessive storage use relates to your retention settings. So let's start there and get this problem corrected.

    It would also probably be helpful to know if Wasabi is reporting similar storage use when you log into their management console. That would remove the possibility that the local data dictionary is out of date and the repository needs to be synchronized.

    I look forward to your response.
  • What if my computer with CloudBerry Backup fails?
    Assuming your machine dies and you need to continue backups on another computer, and the source files are in the same location, then you can follow the directions in this article: https://www.msp360.com/resources/blog/how-to-continue-backup-on-another-computer/amp/
  • An error occurred (Code: 1003) An error occurred: Unable to read data from the transport connection
    You'll need to open a support case. There could be a couple different reasons for that error.
  • Changing drive letter for backed up Windows directory hierarchy
    If you are not moving computers and are just moving data, then the new backup format supports deduplication, so it should see those bytes already in storage and avoid uploading them again. You're in B2 and will benefit from synthetic full backups when the next full backup runs. I would recommend you open a support case for best assistance.
  • Changing drive letter for backed up Windows directory hierarchy
    The new format has many advantages. The determining factor will be if you need the new features and are using a supported cloud for synthetic full backups. You are, with B2. But if you prefer the incremental-forever, version-based retention with the legacy format, then you can use it.

    Those directions that Steve posted only apply to the legacy backup format. I am waiting on direction from the team about how (or if) this is possible with the new backup format since there is more involved, like the deduplication database.
  • What will happen to the old backup format?
    It’s staying. No plans to remove it.
  • Powershell Snap in
    And have you installed the necessary prerequisites?
    • .NET Framework 4.0 (full version)
    • Windows Management Framework 3.0

    If so, I think you'll need to reach out to Support again for assistance.
  • Error 1003 Disk I/O
    Support also warned me that the error you're getting would prevent you from submitting the logs. You'll need to create the ticket from the Support Portal on the website. If you want maintenance, then Support can direct you to Sales. But for this case, they'll look at the logs. https://support.msp360.com/

    Also, DM me your email address so I can give it to Support.
  • Error 1003 Disk I/O
    I think you're going to need a support case for this. Please submit the logs through the tools diagnostic toolbar option. I'll let Support know the case is coming.