Comments

  • Restore a file-level backup to Hyper-v
    Yes, you would need to use an image-based backup, not a file backup. File backups only back up the files you've selected; they do not image the volume or volumes. Once the backup is complete, you can restore it as a virtual machine or virtual disk to either Hyper-V or VMware.

    https://mspbackups.com/AP/Help/restore/ibb/console
  • Microsoft OneDrive Data Restore
    I think we need some clarification on exactly what's going on. Are you saying you have a customer using a perpetually licensed, standalone version of our CloudBerry Backup product to back up a Windows desktop or server, and that they did so with an image-based backup using the new backup format? If so, are you looking to do a file-level restore from the image backup? That would be supported.

    But I see you're asking about restoring OneDrive and SharePoint Online data, and that would require a Microsoft 365 backup, not one performed by a Windows backup agent. Any clarification would help.
  • New Backup Format: Run Full Every 6 months, is it safe?
    Much of this depends on what your customers need if you're in managed services. If you have restore SLAs in place and can't meet those SLAs using the legacy format, then you'll have to move to the new format. You'll also need to negotiate with your customers how long you keep deleted files, or how long they need them kept. Using that information, you should be able to gauge the best choice of backup format and estimate backup storage requirements. Of course, there are things you can do to reduce backup storage costs as well, such as making sure you're using the best cloud storage vendor for that customer. You could also mix the legacy format for local backups with the new backup format for cloud backups and adjust the retention on each, so you're meeting customer needs while still limiting cloud storage usage.
  • New Backup Format: Run Full Every 6 months, is it safe?
    There are a lot of ways to keep more deleted files, but none provide complete assurance that you'll have them all, short of keeping all backups for 1 year.

    If you set Keep Backup for 1 year, you'd have everything. You can set the full to whatever schedule you want and calculate how many full backups that will be. A full every 30 days would require up to 13 full backups in storage. Every 2 months would require 7. Every 3 months requires 5, and so on.
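
    To make the arithmetic concrete, here is a minimal sketch of that calculation in Python (a simplified model for illustration; it assumes a backup set is only deleted once every backup in it falls outside the keep period):

        # Estimate how many full backups sit in storage at peak, given a
        # "Keep Backup for" period and a full-backup interval, both in days.
        # One extra full is always retained beyond the keep window, because
        # its set still anchors incrementals inside that window.
        def fulls_in_storage(keep_days: int, full_interval_days: int) -> int:
            return keep_days // full_interval_days + 1

        for interval_days in (30, 60, 90):
            print(interval_days, "->", fulls_in_storage(365, interval_days))
        # Prints 30 -> 13, 60 -> 7, 90 -> 5, matching the counts above.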

    I do not think you'll get lower storage use with GFS given your requirement of keeping only a year. GFS benefits are geared toward restore points and longer-term storage, when needed. If you ran monthly fulls, kept only the last 30 days of backups, and used 12 monthly GFS backups, you'd have up to 60 days of fulls + incrementals (2 fulls, each with its incrementals) plus monthly full backups for the remainder of the year. So about 12 monthly fulls + 60 days of incrementals.

    If you kept only 3 years of GFS backups (and no monthly backups), you'd have 60 days (2 fulls + incrementals) plus 3 more fulls for the yearly backups, but you'd only have deleted-file restore capability for 30-60 days.
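
    For a rough side-by-side of those two GFS scenarios, here is a simplified sketch (my own model for illustration; it only counts fulls and the rolling incremental window, not actual data sizes or change rates):

        # Rough model of the two GFS scenarios above: a rolling window of
        # fulls + incrementals, plus standalone GFS fulls kept beyond it.
        def gfs_storage(keep_days, full_interval_days, gfs_fulls):
            rolling_fulls = keep_days // full_interval_days + 1
            return {"fulls_with_incrementals": rolling_fulls,
                    "standalone_gfs_fulls": gfs_fulls}

        print(gfs_storage(30, 30, 12))  # 12 monthly GFS: ~2 fulls w/ incrementals + 12 fulls
        print(gfs_storage(30, 30, 3))   # 3 yearly GFS: ~2 fulls w/ incrementals + 3 fulls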

    You do get the ability to use Immutability with Amazon S3, Wasabi, and soon Backblaze B2 with your GFS backups for added backup protection.

    If GFS, Immutability, long-term retention management, large file counts, restore speed, etc. are most important, then use the new backup format.

    But as I tell customers who are running file backups: if the legacy format, with its version-based retention and one-to-one mapping of files to cloud objects, works better for you, then stick with it. If your primary concerns are deleted-file restoration and reduced storage, the legacy format may be a better fit. Do not be concerned about using the older format. It's tested, very capable, and if it's right for you, use it.
  • Immutability in Wasabi does not work
    It's enabled as far as I am aware; you are referring to an old post. Your screenshots were confusing: one seemed to show the bucket without immutability selected (from the agent), and the other was taken at creation time, not after creation. So I am not sure whether something happened at creation time or the agent is not picking up the setting for some reason.

    Where are you trying to create the backup plan: from the agent or from the management console? If you have not tried the management console, please try that.

    If all else fails, then I think the best option is for you to open a support case.
  • Immutable Backups
    I am not clear on your use case. Are you using MSP360 to back up the SQL Server databases? It sounds like you are running native SQL Server backups rather than using our SQL Server agent. Our SQL Server agent runs native SQL Server backups in the background, processes them for retention, and is built into the overall management structure of the product.

    Regardless, if you want immutability for whatever file types you are backing up, you can do what I said in my previous post and use a file backup with the new backup format to Amazon S3. Use the GFS settings on the backup to define how long you want to keep backups and at what granularity (weekly, monthly, yearly), and you should be good to go. The product will protect all files being backed up using those GFS settings. If you wanted, you could also use multiple file backup plans for the various files you are protecting, so each backup set is smaller and can have a customized retention.
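
    For reference, this is roughly how a bucket with Object Lock gets created outside of MSP360, using Python and boto3 (the bucket name, region, and 30-day default retention here are placeholders, not product settings):

        import boto3

        s3 = boto3.client("s3", region_name="us-east-1")

        # Object Lock must be enabled when the bucket is created; it cannot
        # be switched on for an existing bucket.
        s3.create_bucket(Bucket="example-immutable-backups",
                         ObjectLockEnabledForBucket=True)

        # Optional: a default retention applied to every new object version.
        s3.put_object_lock_configuration(
            Bucket="example-immutable-backups",
            ObjectLockConfiguration={
                "ObjectLockEnabled": "Enabled",
                "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
            },
        )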
  • New Backup Format: Run Full Every 6 months, is it safe?
    I think you answered this in another thread and understand how the retention works... If you run a full every 6 months and need to keep 1 year, then you'll have to run backups until there are 18 months in storage before the oldest 6-month backup set can be removed, leaving you with 12 months in storage.
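
    As a rule of thumb, peak storage is the keep period plus one full interval:

        peak storage = keep period + full interval = 12 months + 6 months = 18 months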

    As you pointed out, if we detect an issue, we'll notify you, but any damage to a backup archive file in the chain could render every incremental backup after the missing or damaged file inaccessible. So there is more risk of data loss if you run into an issue. I won't get into cloud durability, as that's something you can research yourself. But I would say that for many customers, running a full backup every 6 months would not be frequent enough (or for their customers, in the case of managed services).

    You can optionally limit your exposure to losing deleted files by using the GFS retention settings to keep backups for a longer period of time.
  • Immutable Backups
    SQL Server is not yet supported by the new backup format. If you are using the SQL Server edition, then you are running native SQL Server full backups, plus optional differential and transaction log backups. For SQL Server, that is the best option to protect your databases.

    In your case, I assume you are either tiering your data with a lifecycle policy (backing up to S3 Standard and automatically moving those backups to S3 Glacier on some schedule for archival purposes) or targeting S3 Glacier directly. Even though we do not natively support immutability with SQL Server, you may be able to do this:
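
    If you're doing the former, a lifecycle rule along these lines (a boto3 sketch; the bucket name, prefix, and 30-day transition are placeholder values) is the usual way to tier S3 Standard data into Glacier:

        import boto3

        s3 = boto3.client("s3")

        # Move objects under the backup prefix to S3 Glacier after 30 days.
        s3.put_bucket_lifecycle_configuration(
            Bucket="example-backup-bucket",
            LifecycleConfiguration={
                "Rules": [{
                    "ID": "tier-backups-to-glacier",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "backups/"},
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                }]
            },
        )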

    Back up the SQL Server databases to a NAS or local disk using the SQL Server agent, then create a second file backup plan using the new backup format that takes those backups on the NAS / local disk and backs them up to Amazon S3 Glacier. You could then use object lock on the cloud data.
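
    Once that second plan has uploaded data to the locked bucket, you can spot-check that retention actually took effect; a boto3 sketch (bucket and key names are placeholders):

        import boto3

        s3 = boto3.client("s3")

        # Ask S3 for the Object Lock retention on one uploaded backup file.
        resp = s3.get_object_retention(Bucket="example-immutable-backups",
                                       Key="backups/example-archive-file")
        print(resp["Retention"])  # e.g. {'Mode': 'COMPLIANCE', 'RetainUntilDate': ...}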

    I have added your request for Immutability support to the existing item in the system for SQL Server.
  • Immutability in Wasabi does not work
    You cannot change immutability settings on an existing bucket. You need to create a new bucket on that account and enable immutability at creation time.
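
    If you're scripting bucket creation rather than using the Wasabi console, the same rule applies; a sketch using Python and boto3 against Wasabi's S3-compatible endpoint (bucket name, endpoint, and credentials are placeholders):

        import boto3

        wasabi = boto3.client("s3",
                              endpoint_url="https://s3.wasabisys.com",
                              aws_access_key_id="YOUR_KEY",
                              aws_secret_access_key="YOUR_SECRET")

        # Immutability (object lock) must be requested at creation time;
        # it cannot be added to an existing bucket.
        wasabi.create_bucket(Bucket="example-immutable-bucket",
                             ObjectLockEnabledForBucket=True)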
  • Immutability in Wasabi does not work
    In order to use immutability, you have to use GFS retention. Do you have GFS selected on the retention page? If you want, you can post an image of your retention page here on the forum.
  • 0% disk load while searching for modified files.
    Are you asking because the scan is taking longer than expected, or because you don't see much load on the disk during the scan process? What options have you set in the backup plan? Are you using the NTFS fast scan option? And can you tell me whether those are locally attached storage drives or network shares?
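
    For context on why the disk can sit at 0%: a modified-file scan is mostly metadata reads (directory entries and timestamps), which are tiny and often served from cache, so throughput counters barely move. A minimal Python illustration of that kind of scan (the path and cutoff are placeholders):

        import os, time

        # Walk a tree and collect files changed since a cutoff. Only stat
        # metadata is read -- no file contents -- which is why disk load
        # can look near zero during this phase.
        def modified_since(root, cutoff):
            changed = []
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    try:
                        if os.stat(path).st_mtime > cutoff:
                            changed.append(path)
                    except OSError:
                        pass  # skip files we cannot stat
            return changed

        print(len(modified_since(r"C:\Data", time.time() - 86400)))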
  • cloudberry linux agent installation
    Can you provide any additional information about what is out of date, any issues you are having with what is documented, or what you're looking for specifically that is missing from the help information? That way, I'll be able to provide the assistance you need. Thanks.
  • cloudberry linux agent installation
    Are you asking for instructions on how to create an AWS AMI that includes the Linux backup agent pre-installed, so new VMs created from that AMI will have backup ready to go? Are you using standalone CloudBerry Backup or our Managed Backup product? I'm asking because, since you're using AMIs, I suspect you may be managing many VMs.

    In general, I think you would create your virtual machine, configure it the way you need, and then use the AWS Management Console to create an AMI from the VM. https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/tkv-create-ami-from-instance.html
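
    If you'd rather script it than use the console, the equivalent boto3 call looks like this (the instance ID, region, and image name are placeholders):

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Create an AMI from a configured instance. NoReboot=False lets AWS
        # stop the instance briefly so the image is filesystem-consistent.
        resp = ec2.create_image(InstanceId="i-0123456789abcdef0",
                                Name="linux-backup-preinstalled",
                                NoReboot=False)
        print(resp["ImageId"])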
  • Backup server to local NAS and Cloud storage
    OK. I look forward to your reply once your testing is complete.
  • Backup server to local NAS and Cloud storage
    You can use one user account and assign both storage accounts (NAS and Cloud), and each storage can have a different storage limit applied, if needed. Is that what you want?
  • Backup server to local NAS and Cloud storage
    Users are just service accounts that authenticate the MSP360 services on the endpoint with one of your customers. You can only have one user account per endpoint, and that account can be shared across all endpoints at a customer. While your main admin account may show up in the Users list, it's really an administration account (see the Administrators section) used for management, not for backup.

    I do not think you will have an issue changing the storage destination from the User to the Company level (by removing it from the User and then assigning it at the Company level), but I am not sure this change provides any benefits. I am going to check with the team and report back on whether this is a good idea, so hold off on this change until I reply. Thanks.
  • How to set Content-Type for uploaded HTML files to, well, html !
    I do not think this feature has been added to the Mac version of Explorer, and given the age of this post, I would not feel comfortable telling you the feature is coming. You can either move to the Windows version of Explorer or, if you're dealing with a small number of objects, change the Content-Type in the Amazon S3 Management Console. Wish I had better news. I'll log your request in the system.
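
    If scripting is an option for you, the standard workaround is an in-place copy that rewrites the object's metadata; a boto3 sketch (bucket and key are placeholders):

        import boto3

        s3 = boto3.client("s3")
        bucket, key = "example-bucket", "pages/index.html"

        # Copy the object onto itself, replacing its metadata so the new
        # Content-Type is returned on future downloads.
        s3.copy_object(Bucket=bucket, Key=key,
                       CopySource={"Bucket": bucket, "Key": key},
                       ContentType="text/html",
                       MetadataDirective="REPLACE")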
  • Backup server to local NAS and Cloud storage
    "I don't remember adding the storage destinations to the user, but must have when I created that user. It looks like Storage limit is set to unlimited for both destinations when I'm looking at the sja_backup_user details." - Rick Huebner

    You can either set the storage at the Company level or at the User level. Until last year, the User level was the only option; it also allows storage limits to be applied. Adding backup destinations at the Company level is easier for some Managed Backup customers, but it does not allow storage limits.

    It seems you have no storage limits set, so I am wondering how your NAS backup could have been failing because a storage limit was reached.

    Since you're using two image backup plans (NAS and Cloud), can I ask if you are using the new backup format? There are a lot of benefits to using the new backup format for image backups. https://mspbackups.com/AP/Help/backup/about/backup-format/about-format
  • One or More Files are EFS Encrypted
    EFS support was added in the latest Windows 7.6 agent. https://help.mspbackups.com/backup/about/efs

    You have the option to either keep EFS encryption or decrypt EFS files at backup time. This feature is only available when using the new backup format.