Comments

  • File Delete Warnings on Legacy
    Yes, I was able to confirm that it's a known issue. I believe what's actually happening is that the folder may not be scheduled for purge, which is why the purge warning is not showing. Support would have to take a closer look at your logs if you feel like submitting them via the Tools diagnostic toolbar option. The team is going to report back to me when they have more information and a planned resolution, but I recommend opening a support case anyway for better tracking on your end.
  • Recommended Backup Exclusions
    I'm not in front of the software right now, but I believe you can see that information from the backup history in the agent. I'll have to check on Monday, but if you're near the software and feel like checking, see if you can find that info, and sort as needed to find the large files.
  • Restoring From a NAS to an Alternate Computer
    Can you confirm you're trying to restore an image? Meaning, the entire operating system? Or are you trying to restore files from the backup, whether that's an image or a file backup?
  • Recommended Backup Exclusions
    I believe we have a requirement in the system to allow that in a future release, but I don't think it's supported now. I'll pass this on to the team or add your comments to the open item. I think it would be a good feature to have for any type of backup.
  • Server 2019 backups failing - Error 1003
    At this point you can either reach out to Support via the Tools diagnostic toolbar option, which will send the logs to them for review. The logs may contain some additional information that is not visible without a closer look.

    Or you could try upgrading one of the servers to a more recent version of the software to see if that addresses the issue.
  • Immutable storage without doing full backups?
    The way immutable storage works is:
    1 - You need to use the new backup format
    2 - You need to enable GFS, as immutability is currently tied to that option
    3 - You need to schedule full backups on your preferred schedule - at least once a month
    4 - The full backups will be synthetic on supported cloud platforms and will only back up new data. The cloud APIs will be used to create the next full in the cloud from data that is mostly already stored there. The new full backup will contain a new generational copy of all files
    5 - You can schedule incremental backups to run on a schedule in between the full backups
    6 - Set your Keep Backups For option to 30 days
    7 - Set GFS options to coordinate with your full backup schedule - so if you're running a full backup every 30 days, then all you need to keep is 1 Monthly backup in GFS
    8 - Enable immutability - you need to create the bucket new and enable immutability at creation time. You cannot use an existing bucket if it was created without immutability
    9 - You'll end up with 2 full backup sets in storage before the oldest one can be removed

    Unless you change the full backup schedule, there is no way to keep only 7 days of backups: you're going to have a backup chain that includes a full plus 30 incremental backups, and none of that backup data can be removed unless you schedule the full backup at least once a week.
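    As a worked example (the dates and exact purge rules below are illustrative assumptions, not product output), here is a rough sketch of why two full backup sets coexist in storage before the oldest chain becomes removable, given monthly fulls and Keep Backups For set to 30 days:

    ```python
    # Rough illustration only -- dates and purge logic are assumptions,
    # not MSP360's actual retention implementation.
    from datetime import date, timedelta

    keep_for = timedelta(days=30)
    full_1 = date(2024, 1, 1)    # full #1 starts chain 1 (daily incrementals follow)
    full_2 = date(2024, 1, 31)   # next monthly full starts chain 2
    last_incremental_chain_1 = full_2 - timedelta(days=1)  # Jan 30

    # Chain 1 can only be purged once everything in it is past the retention
    # window and a newer complete chain (chain 2) exists.
    earliest_purge = last_incremental_chain_1 + keep_for
    print(f"Chain 1 earliest purge date: {earliest_purge}")  # ~end of February

    # From Jan 31 until that date, chain 1 and chain 2 are both in storage,
    # which is why roughly 2 full backup sets are kept at any given time.
    ```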
  • Cloudberry works at host level or Vsphere , Vcenter level?
    Yes, we support connecting to vCenter. If you're having issues, I recommend you reach out to Support, or if you're evaluating the product, work with your account manager.
  • Remote Assistant 2.3.x - Wake On LAN
    No, it has not been implemented; the feature is currently in review. We've added some support for Wake-on-LAN to the RMM product, but for Managed Remote Desktop it's still under review.
  • CloudBerry Explorer Invalidation
    CloudFront is no longer supported. You can manage invalidations directly from AWS or stick with the older version for now.
    https://www.msp360.com/resources/blog/cloudfront-support-is-going-to-be-discontinued/amp/
  • an error occurred while retrieving data from ****.vhdx
    Glad you're up and running. Let us know if you have any further questions.
  • an error occurred while retrieving data from ****.vhdx
    The only reference to that error that I see in this system is when you specify an incorrect encryption password. That error was reported in the 7.0 release and subsequently fixed to bring up the encryption password dialog again rather than throw that generic error. Can you confirm what version of the product you are using, whether this is a legacy or new backup format virtual machine backup, and whether you are using encryption and, if so, whether it's possible you specified an incorrect password at restore time?
  • Endpoint Limits
    Not in the current version. I don't recall if the current version limits address book entries to five, but even if it does, you can still type in the remote computer ID. As I said, there will be changes coming that will encourage customers who need to connect to more than five endpoints to upgrade to a not-yet-released paid version, but you should be able to use the current free version to connect to your family.
  • Endpoint Limits
    The free version is officially licensed for up to five connections. That's five total, not five simultaneous. In the current version you may be able to add more than five remote systems to the address book, but in future versions that may be limited to five. When that newer version is released, we'll also be releasing a subscription option for those who need more connections and features but don't need our more advanced Managed Remote Desktop solution. More information will be forthcoming when that version is ready. In the interim, use the free version and see how it works for you.
  • Version 7.4.0.161
    Okay, I understand now. I've updated my comments to the team and we're going to discuss it this week. In the interim, you may want to stick with the version that was working for you.
  • Version 7.4.0.161
    I'll relay your comments to the team. Just want to clarify one thing: You were manually running the synthetic full on the cloud backup once a month since it could not be scheduled using chained backups, correct?
  • Version 7.4.0.161
    I'll assume you are using the legacy backup format. You created an image backup to a NAS, with a chained image backup to the cloud. You are doing this so one plan can follow the other without the risk of overlap and a possible backup failure.

    As I recall from these changes some time back, the previous implementation was flawed in that the chained backup could only ever run an incremental backup (or you could force a full, but that was not a great option). Since you are running image backups, you need to periodically run a new full or you end up with a very long incremental backup chain and a risk of data loss if any incremental backup in the chain is lost or damaged.

    The change, though, is as you describe. Either the chained backup follows what the parent backup plan does (meaning, a full for a full, an incremental for an incremental) or you force a full (which again is not a great option for most customers).

    I am not exactly clear how the old implementation was working for you given that the chained plan always ran an incremental backup, which as I explained earlier is risky.

    You could:

    1 - Continue to use the legacy backup format and use a hybrid backup. A hybrid backup reads the source data once and sends it to two locations: a local target (your NAS) and cloud storage.
    2 - Continue to use the legacy backup plans as you have them now, but do not chain them. Instead, schedule their periodic full backups on different days (use a synthetic full on the cloud backup if the option is available) and schedule the incremental backups so that they overlap as little as possible.
  • Azure Storage Archive - Incremental Backups
    Let me clarify one point. When restoring, it's only necessary to restore the needed blocks within each archive that contains the files selected for restore. We do not have to restore the entire archive file. For example, if you have a file that is one megabyte in size and it's contained within an archive that's one gigabyte, we only have to restore the blocks that contain that one-megabyte file. There's no reason to restore the entire one-gigabyte archive file.
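    As a rough illustration of the general idea (not the product's actual restore code), a ranged read against the storage object lets you pull only the bytes that hold the blocks of the file you need. The URL, offsets, and readable-tier assumption below are all hypothetical:

    ```python
    # Illustrative sketch only -- the URL, offsets, and mechanism are assumptions.
    import requests

    ARCHIVE_URL = "https://example.blob.core.windows.net/backups/archive-0001.bin?sig=..."  # hypothetical signed URL

    def restore_bytes(offset: int, length: int, out_path: str) -> None:
        """Fetch only `length` bytes starting at `offset` from the archive object."""
        headers = {"Range": f"bytes={offset}-{offset + length - 1}"}  # standard HTTP range request
        resp = requests.get(ARCHIVE_URL, headers=headers, timeout=60)
        resp.raise_for_status()  # expect 206 Partial Content
        with open(out_path, "wb") as f:
            f.write(resp.content)

    # e.g. a 1 MB file whose blocks sit at offset 512 MiB inside a 1 GiB archive:
    restore_bytes(offset=512 * 1024**2, length=1 * 1024**2, out_path="restored.dat")
    ```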
  • Server 2019 backups failing - Error 1003
    Product version?

    Do you know if those servers have the updates from Microsoft installed? I see some references to this error on the internet that talk about a fix in the July 2020 Windows Server update.

    You didn't mention the types of backups you are running, but if it's a file backup using the legacy format, do you happen to have any folders being backed up with hundreds of thousands of files in them? I'm not saying that's the issue. I have not been able to find a reference to that error in our system, but I'll wait for your reply before reaching out to support.
  • Outlook Hosted Exchange OST File Backup - CBT
    Generally, you do not have to back up OST files. They are simply cached data from the Exchange servers. You can research this yourself, but that is the general recommendation. If the OST file is deleted / lost, the next time Outlook connects to the Exchange Server, all email will cache again locally. Having said that, there's no harm in backing them up. This applies to file backups.

    It's not clear whether you are running image or file backups, or whether you're using the new backup format. For image backups, we only ever back up changed blocks, not files. For file backups, the legacy format can leverage Block-Level Backups to only back up the changes within the OST file. For the new backup format, we automatically use client-side deduplication - again, only backing up changes within the file.
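    As a generic sketch of how block-level change detection works in principle (not MSP360's actual algorithm; the block size and hashing scheme are assumptions), you can hash fixed-size blocks of a file and re-upload only the blocks whose hashes changed since the previous backup:

    ```python
    # Generic block-level change detection sketch -- illustrative only.
    import hashlib
    from pathlib import Path

    BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks, an arbitrary choice for this example

    def block_hashes(path: Path) -> list[str]:
        """Hash each fixed-size block of the file."""
        hashes = []
        with path.open("rb") as f:
            while chunk := f.read(BLOCK_SIZE):
                hashes.append(hashlib.sha256(chunk).hexdigest())
        return hashes

    def changed_blocks(current: list[str], previous: list[str]) -> list[int]:
        """Return indexes of blocks that are new or differ from the last backup."""
        return [i for i, h in enumerate(current)
                if i >= len(previous) or previous[i] != h]

    # Only the changed block indexes would be read and uploaded again, so a
    # multi-GB OST file with small day-to-day changes yields a small upload.
    ```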
  • Azure Storage Archive - Incremental Backups
    What type of backup are you running? File or image? Are you using the new backup format?

    I'll make a few comments and can adjust based on your reply:

    Generally speaking, archive tiers do not support synthetic full backups because of limitations in their storage access times and / or APIs. Azure Storage Archive is not supported for synthetic full backups. You can see a list of supported cloud storage options for the new backup format in this help page: https://help.msp360.com/cloudberry-backup/backup/about-backups/new-backup-format/synthetic-full-backup.

    You could use a lifecycle policy to move the data from a hot Azure storage tier to a less expensive one on a schedule. That way, you can get the advantage of synthetic fulls while moving your older backup data to less expensive Azure Archive. Whether that works for you may depend on your retention needs - feel free to elaborate on how long you keep backup data.
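    For illustration, assuming your backup data sits in an Azure Blob container under a prefix such as backups/ (hypothetical), an Azure Blob lifecycle management rule that tiers older blobs down to Archive could look roughly like the policy below (shown as a Python dict mirroring the JSON policy; verify the exact schema and day counts against the Azure documentation and your retention needs):

    ```python
    # Sketch of an Azure Blob lifecycle management policy -- the prefix and day
    # counts are assumptions for illustration, not recommendations.
    lifecycle_policy = {
        "rules": [
            {
                "name": "archive-old-backup-data",
                "enabled": True,
                "type": "Lifecycle",
                "definition": {
                    "filters": {
                        "blobTypes": ["blockBlob"],
                        "prefixMatch": ["backups/"],  # hypothetical container prefix
                    },
                    "actions": {
                        "baseBlob": {
                            # tier down once the data is no longer needed for synthetic fulls
                            "tierToArchive": {"daysAfterModificationGreaterThan": 35}
                        }
                    },
                },
            }
        ]
    }
    ```

    A policy like this can be applied from the Azure portal or CLI, so the agent keeps writing to a hot or cool tier (where synthetic fulls work) while older data moves to Archive automatically.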

    The new backup format file backup does group files together as you described, but not necessarily into a single file object. That could be the case if the total size of the files is under the archive size limit and the archive is created fast enough (the archives themselves are dynamically managed based on size and speed of creation). The benefits are in both backup and restore speeds, along with the other new backup format advantages (https://mspbackups.com/AP/Help/backup-and-restore/about-backup/backup-format/about-format).

    In contrast, the legacy file backup format manages backups at the file object level. Meaning, each file backed up has one or more objects created in backup storage each time (the file and optionally NTFS permissions). Since each object needs to be created / uploaded individually via the cloud APIs, this creates a lot of IO and associated latencies if file counts are high and / or files are small. This is largely eliminated with the new backup format.
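    As a back-of-the-envelope illustration (every number here is an assumption, not a measured figure), the per-object request overhead is what hurts the legacy format when file counts are high:

    ```python
    # Rough comparison of per-file uploads vs. a handful of archive uploads.
    # All numbers are assumptions chosen for illustration only.
    file_count = 100_000            # many small files in the backup set
    per_request_overhead_s = 0.05   # ~50 ms of API/latency overhead per object

    legacy_overhead_s = file_count * per_request_overhead_s   # one object per file (plus optional NTFS permission objects)
    archive_count = 10                                         # same data grouped into a few archives
    new_format_overhead_s = archive_count * per_request_overhead_s

    print(f"legacy format request overhead: ~{legacy_overhead_s / 3600:.1f} hours")
    print(f"new format request overhead:    ~{new_format_overhead_s:.1f} seconds")
    ```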

    When you restore, the software will restore every archive that is needed based on the restore criteria.