• Immutable Backups
    It's coming in September. Thanks for asking. We'll post here when it's available.
  • Glacier and new Backup format in-cloud copy
    I do not believe synthetic fulls work in S3 Glacier. Glacier is designed for archival storage, and I do not think it exposes the APIs needed to perform a synthetic full. When it comes time to restore from Glacier, you can use standard or expedited retrieval depending on your SLAs for getting access to the data; expedited is more expensive. There is also a newer storage class, S3 Glacier Instant Retrieval, that we support, and I believe it allows you to access your data immediately.

    I do not generally recommend backing up directly to S3 Glacier. You can do it, but in my experience Glacier is best used for archival storage: long-term storage of data that you do not plan to change and do not plan to restore from unless absolutely necessary - in other words, for compliance and emergencies. You could back up to a storage class like S3 Standard or S3 Standard-Infrequent Access and use a lifecycle policy to automatically move that data to Glacier for long-term storage after some time has passed, for example 30 or 60 days (a sketch of such a policy is below). If you do that and the transition window falls outside your full backup schedule, then you're effectively only moving data that will never change.
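
    If it helps, here is a minimal boto3 sketch of that kind of lifecycle rule. The bucket name, prefix, and 60-day threshold are placeholders, not defaults; adjust them to your own setup.

    import boto3

    s3 = boto3.client("s3")

    # Transition objects under the backup prefix to Glacier after 60 days
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-backup-bucket",                  # placeholder bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "move-old-backups-to-glacier",
                    "Filter": {"Prefix": "backups/"},   # placeholder prefix
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 60, "StorageClass": "GLACIER"}
                    ],
                }
            ]
        },
    )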

    Glacier does have minimum storage durations. If I'm not mistaken, regular Glacier (Flexible Retrieval) is 90 days and Glacier Deep Archive is 180 days. You can use standard or expedited retrieval, and then there's the newer Glacier Instant Retrieval storage class. It can get a little confusing, and pricing varies between the different storage classes and restore options - not only in what you pay for the storage, but in how much it will cost you to restore the data and get it back out of Glacier.
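
    For reference, here is a minimal boto3 sketch of requesting a temporary restore of an archived object and choosing the retrieval tier. The bucket, key, and number of days are placeholders; objects in Glacier Instant Retrieval do not need this step.

    import boto3

    s3 = boto3.client("s3")

    # Ask S3 to stage a temporary, readable copy of an archived object
    s3.restore_object(
        Bucket="my-backup-bucket",              # placeholder
        Key="backups/db-full-2024-01.bak",      # placeholder
        RestoreRequest={
            "Days": 7,  # how long the restored copy stays available
            "GlacierJobParameters": {"Tier": "Standard"},  # or "Expedited" / "Bulk"
        },
    )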

    So I would strongly encourage you to use the AWS pricing calculator for Glacier: plug in some examples of how much data you might be restoring, and do that for the different storage classes and retrieval types, so you understand what you are going to pay to restore that data, and so you can negotiate those prices properly, in advance, with your customers so there are no surprises.

    https://calculator.aws/#/
    https://aws.amazon.com/s3/storage-classes
    https://aws.amazon.com/s3/storage-classes-infographic/
    https://aws.amazon.com/s3/storage-classes/glacier/
  • New Backup Format: Run Full Every 6 months, is it safe?
    That's correct if you're keeping 3 months with a 3 Month Full Backup schedule.
  • New Backup Format: Run Full Every 6 months, is it safe?
    Sounds like a good plan.

    I think you could probably decide on a few default options for customers and then have conversations with them about anything they need over and above that. For example, you could use the new backup format for cloud backups with 6 months of deleted file restorability, then open the conversation with customers about whether they need longer retention or longer deleted file restorability, and move on from there. If many customers select the default, then you don't have to worry about a custom-designed backup plan for each; but you always have that option.

    Feel free to post back to the forum on your progress and how things are working out for you. Also let us know if you have any further questions I can help you with.
  • Region in Web URL
    You need to hit the Support section of the msp360.com web site. Usually you need maintenance on a product to open a support case, but if they ask (and you do not have maintenance), then ask politely and let them know you tried the forum. If that fails, I can try to open an internal case.

    Having said that, I think the product is working correctly now given the Amazon specs for URL access. I realize that the older version may not have been AWS compliant in that regard. Why can't you use the recommended URL style, if I may ask? Is it because you have an application expecting the old format and changes would be difficult?

    What happens when you upload a file to your S3 bucket using the AWS Management Console - is the URL format the same as Explorer or does it use the older format? Just curious.

    And I guess I should have asked: is this the paid or free version of Explorer? If it's the free version, you will not be able to open a support case...
  • Region in Web URL
    Also see if this helps:
    https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#VirtualHostingCustomURLs

    Otherwise, you can open a support case, as I am not sure whether the issue you're running into is something that was supported by Amazon but has been deprecated, or something that can be changed in the product or in your application that accesses the objects.
  • Region in Web URL
    As far as I can tell from AWS docs, the region is a part of the full name you use to access objects in Amazon S3: https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-bucket-intro.html

    I am not sure about the format you are using. From what I can see, there are two ways to access:

    Virtual-hosted–style access:
    https://bucket-name.s3.region-code.amazonaws.com/key-name
    
    Path-style access:
    https://s3.region-code.amazonaws.com/bucket-name/key-name
    
    You could try using the recommended virtual-hosted style without the region and see if it works.
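
    If it helps to see them side by side, here is a trivial Python sketch that builds both documented styles from placeholder bucket, region, and key values:

    bucket = "my-bucket"          # placeholder
    region = "us-east-1"          # placeholder
    key = "folder/file.txt"       # placeholder

    virtual_hosted = f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
    path_style = f"https://s3.{region}.amazonaws.com/{bucket}/{key}"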
  • Immutable Backups
    How are you currently restoring from S3? Do you copy the files from S3 down to local disk and then perform a restore, or do you have some way to restore directly from S3?

    Using the new backup format we store things in archive files, so there won't be a one-to-one list of files in the cloud that matches what you have locally. The way you have things set up, you may want to have different backup plans for each database, but I think you could include the t-log backups if you wanted, since most backups are removed every week.

    As a side note, if you're deleting your backups locally after the next successful full backup, then you're only leaving yourself with one day's worth of backups, since you're deleting the previous week. But if that process is working for you, that's fine. I would be a little concerned about it, though, especially since you're backing up locally and presumably have the disk space. Restoring from the cloud is certainly going to take a lot longer, and if you find yourselves restoring a few times a week, I'd be looking to keep more backups locally to avoid the need to restore from the cloud. But more backups kept locally means larger full backups on our side, so there's a trade-off.

    You can run the fulls as frequently or as infrequently as you want. I assume you don't need all backups for the last 10 years, is that correct? Or are you keeping every single full and every single differential for every single database for 10 years? I guess I would need to know that to provide some guidance on retention settings.

    If you're truly keeping 10 years' worth of everything, then you don't need to run the fulls very frequently at all. And you may want to run them after the cleanup happens locally to minimize the size of the next full backup in S3. You could run them once a month if you wanted. Or you could run them weekly if they're aligned with the local full backup and cleanup. In that case you're just going to have a new full backup for the database, and it sounds like the rest of the folder is going to be cleared out anyway. So the synthetic full won't really do anything, as the only thing being backed up is the new full backup, because that's the only file in the folder. And in that sense, a full is no different from an incremental in terms of what data is backed up.

    Use Keep Backups For as an absolute measure of how long you need to keep the backups. Use the GFS settings if you do not need to keep every backup and are looking for longer-term retention with less granular restore points - or use GFS if you need immutability.
  • Restore a file-level backup to Hyper-v
    We only back up the used blocks. Who told you that the image will match the actual volume size?
  • Immutable Backups
    A full, in your case, would be all of the files that are selected for backup. If you have some retention that's removing old backups from that folder, then those files obviously would not be a part of it. And if you had different backup plans for different databases, as an example, then each full backup would be for an individual database. An incremental backup in your case would just be any new files that are created, as it's unlikely that existing SQL Server backup files are going to be modified (unless you're appending transaction log or other backup files to one another, which I generally do not recommend). Then, as you create new transaction log, differential, or full backups of the databases, they would be considered part of the incremental backup. At the end of, say, a month (if you're using a 30-day cycle), a new full would be created based on the data that's currently in your local backup folder, while using the bits that are already in the cloud to help create that next full.

    How much data you decide to keep in the cloud is going to be up to you, and likely based on your retention needs.

    I don't know how you're managing the deletion of old backups if you're not using our agent to do the SQL Server backups. But let's just assume that you're intelligently removing old backup files that are no longer needed, and treating every backup set for SQL Server as a full plus any differentials plus transaction log backups - that way you're not removing a differential or a full while leaving transaction log backups in the backup folder. If that's the case then you would end up with backups in the cloud that contain all the files that are needed for a restore. Of course, if you're doing it that way you'd have to restore the files locally and then restore them to SQL Server using native SQL Server restore functionality. You couldn't restore directly to SQL Server from the cloud because you're not using our SQL Server agent.

    If your retention needs are different based on the databases that are being backed up, then you would probably be best served by creating separate backup plans for each database (assuming you're backing them up to different folders) that match your retention needs in the cloud. That would keep each full backup smaller because each backup would only be dealing with a single database.

    The number of fulls that you actually keep in the cloud is going to depend on your full backup schedule, your setting for Keep Backups For, and if you're using GFS what type of backups you're keeping (weekly, monthly, yearly) and how many of each.

    Feel free to reply back with more details on your retention needs and maybe we can dial in the right settings for your particular use case.
  • Restore a file-level backup to Hyper-v
    Yes, you would need to use an image-based backup, not a file backup. File backups just back up the files that you've selected for backup; they do not image the volume or volumes. Once you have the backup completed, you can restore it as a virtual machine or virtual disk, either Hyper-V or VMware.

    https://mspbackups.com/AP/Help/restore/ibb/console
  • Microsoft OneDrive Data Restore
    I think we need some clarification on exactly what's going on. Are you saying you have a customer that is using a perpetually licensed, standalone version of our CloudBerry Backup product to back up a Windows desktop or server, and they did so using an image-based backup in the new backup format? If so, are you looking to do a file-level restore from the image backup? That would be supported.

    But I see you're asking about restoring OneDrive and SharePoint Online data, and that would require a Microsoft 365 backup, not one performed by a Windows backup agent. Any clarification would help.
  • New Backup Format: Run Full Every 6 months, is it safe?
    Much of this is going to depend on what your customers need if you're in managed services. If you have restore SLAs in place and can't meet those SLAs using the legacy format, then you'll have to move to the new format. And you'll need to negotiate with your customers how long you keep deleted files, or how long they need them kept. Using that information, you should be able to gauge the best choice of backup format and estimate backup storage requirements. Of course, there are things you can do to reduce backup storage costs as well, such as making sure you're using the best cloud storage vendor for that customer. You could also mix the legacy format for local backups with the new backup format for cloud backups and adjust the retention on each to make sure you're meeting customer needs while still limiting cloud storage usage.
  • New Backup Format: Run Full Every 6 months, is it safe?
    There are a lot of ways to keep more deleted files, but none provide complete assurance that you'll have them all, short of keeping all backups for 1 year.

    If you set Keep Backups For to 1 year, you'd have everything. You can set the full to whatever schedule you want and calculate how many full backups that will be: a full every 30 days would require up to 13 full backups in storage, every 2 months would require 7, every 3 months requires 5, and so on.
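
    Here is that back-of-the-envelope math as a small Python sketch, assuming a full backup set can only be removed once its newest restore point is older than the Keep Backups For period:

    import math

    def max_full_sets(keep_months: int, full_interval_months: int) -> int:
        # sets needed to cover the keep period, plus one that cannot be removed yet
        return math.ceil(keep_months / full_interval_months) + 1

    for interval in (1, 2, 3, 6):
        print(f"Full every {interval} month(s), keep 12 months:",
              max_full_sets(12, interval), "full sets")
    # prints 13, 7, 5, 3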

    I do not think you'll get lower storage use with GFS given your requirement of keeping only a year. GFS benefits are geared to restore points and longer-term storage, when needed. If you ran monthly fulls, kept only the last 30 days of backups, and used 12 monthly GFS backups, you'd have up to 60 days of fulls + incrementals (2 fulls + incrementals for each) and monthly full backups for the remainder of the year - so about 12 monthly fulls + 60 days of incrementals.

    If you kept only 3 years of GFS backups (and no monthly backups), you'd have 60 days (2 fulls + incrementals) + 3 more fulls for the yearly backups, but you'd only have deleted file restore capability for 30 - 60 days.

    You do get the ability to use Immutability with Amazon S3, Wasabi, and soon Backblaze B2 with your GFS backups for added backup protection.

    If GFS, Immutability, long-term retention management, large file counts, restore speed, etc. are most important, then use the new backup format.

    But as I tell customers who are running file backups: if the legacy format, with its version-based retention and file-to-cloud-object management, works better for you, then stick with the legacy format. If your primary concerns are deleted file restoration and reduced storage, then the legacy format may be a better fit. Do not be concerned about using the older format. It's tested, very capable, and if it's right for you, use it.
  • Immutability in Wasabi does not work
    It's enabled as far as I am aware - you are referring to an old post. Your screenshots were confusing as one seemed to show the bucket as not having Immutability selected (from agent) and the other was at creation time, but not after creation. So I am not sure if something happened at creation time or the agent is not picking up the setting for some reason.

    Where are you trying to create the backup plan? From the agent or from the management console? If you have not tried the management console, please try that.

    If all else fails, then I think the best option is for you to open a support case.
  • Immutable Backups
    I am not clear on your use case. Are you using MSP360 to back up the SQL Server databases? It sounds like you are running native SQL Server backups rather than using our SQL Server agent. We do have a SQL Server agent that runs native SQL Server backups in the background, handles retention, and is built into the overall management structure of the product.

    Either way, if you want immutability, regardless of the file types you are backing up, you can do what I described in my previous post and use a file backup with the new backup format to Amazon S3. Use the GFS settings on the backup to define how long you want to keep backups and at what granularity (weekly, monthly, yearly), and you should be good to go. The product will protect all files being backed up using those GFS settings. If you wanted, you could also use multiple file backup plans for the various files you are protecting, so each backup set is smaller and can have customized retention.
  • New Backup Format: Run Full Every 6 months, is it safe?
    I think you answered this in another thread and understand how the retention works... If you run a full every 6 months and need to keep 1 year, then you'll have to run backups until there are 18 months in storage before the oldest 6-month backup set can be removed, leaving you with 12 months in storage.

    As you pointed out, if we detect an issue, we'll notify you, but any damage to a backup archive file in the chain could render any incremental backups after the missing or damaged file inaccessible. So there is more risk of data loss if you run into an issue. I won't get into cloud durability, as that's something you can research yourself. But I would say that for many customers, running a full backup every 6 months would not be frequent enough (or for their customers, in the case of managed services).

    You can optionally limit deleted file exposure with the GFS Retention settings by keeping backups for a longer period of time.
  • Immutable Backups
    SQL Server is not yet supported by the new backup format. If you are using the SQL Server edition, then you are running native SQL Server fulls, differentials (optionally), and transaction log backups (optionally). For SQL Server, you are using the best option to protect your databases.

    In your case, I assume you are either tiering your data using a lifecycle policy to back up to S3 Standard and automatically moving those backups to S3 Glacier for archival purposes on some schedule or you are targeting S3 Glacier directly. Even though we do not natively support Immutability with SQL Server, you may be able to do this:

    Back up the SQL Server databases to a NAS or local disk using the SQL Server agent, and then create a second file backup plan, using the new backup format, that takes those backups on the NAS / local disk and backs them up to Amazon S3 Glacier. You could then use Object Lock on the cloud data (see the sketch below).
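
    As a rough illustration, here is a boto3 sketch of applying a default Object Lock retention to a bucket that already has Object Lock enabled. The bucket name, mode, and retention days are placeholders; match them to your own requirements.

    import boto3

    s3 = boto3.client("s3")

    # Apply a default retention so newly written backup objects are locked
    s3.put_object_lock_configuration(
        Bucket="my-sql-backup-bucket",   # placeholder
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 90}},
        },
    )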

    I have added your request for Immutability support to the existing item in the system for SQL Server.
  • Immutability in Wasabi does not work
    You cannot change immutability settings on an existing bucket. You need to create a new bucket on that account and enable immutability at creation time.
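
    If you are scripting this, here is a minimal boto3 sketch of creating a bucket with Object Lock enabled at creation time. The endpoint URL and bucket name are placeholders - use the service URL for your Wasabi region.

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.wasabisys.com",   # placeholder Wasabi endpoint
    )

    # Object Lock must be enabled when the bucket is created
    s3.create_bucket(
        Bucket="my-immutable-bucket",              # placeholder
        ObjectLockEnabledForBucket=True,
    )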
  • Immutability in Wasabi does not work
    In order to use immutability, you have to use GFS retention. Do you have GFS selected on the retention page? If you want, you can post an image of your retention page here on the forum.