• Move data between Backup storage devices
    The data will not be moved automatically. You have a couple of options:
    1. Copy/move the backup folder from the old NAS to the new one, then create a new local storage account for the new NAS and run a repository sync. You can then pick up where you left off.
    2. If you want to keep the old backups on the original NAS and only have newer files backed up to the new location, create a local storage account for the new location and set the Advanced filter to "back up objects modified since xx/xx/xxxx". It will then only back up files created/modified after that date.
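    Option 2's date filter is essentially a file-modification-time cutoff. A minimal sketch of that selection logic (illustrative only — `modified_since` is a hypothetical helper, not MSP360's implementation):

```python
import os
from datetime import datetime

def modified_since(root, cutoff):
    """Return files under root whose modification time is after cutoff."""
    cutoff_ts = cutoff.timestamp()
    selected = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > cutoff_ts:
                selected.append(path)
    return selected

# e.g. modified_since("/data", datetime(2024, 1, 1)) picks up only files
# changed after the cutover date, mirroring the Advanced filter behavior.
```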

    If you use option 2 and have to do a complete restore, you could restore the more recent files first then do a second restore for the older stuff.
  • Optimum S3/Cloudberry config for desktop data
    For new clients, we now do nightly cloud file backups to Wasabi and BackBlaze B2. It works great.
  • What MSP360 Windows backup strategy do you use for largely static collections?
    I may be atypical, but for this type of data set, I would recommend just using the legacy file format, AND sending a copy to one of the low-cost Storage providers (BackBlaze or Wasabi).
    As Alex points out, if you use the New Backup Format, there will always be two FULL copies of the data consuming space at any given point in time. For local backups that may not matter to you, but for cloud storage it makes a significant difference.
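    To put a rough number on that difference, here is a cost sketch (the 5 TB data set and the $/TB rate are hypothetical; real NBF overhead depends on your schedule and retention):

```python
# Hypothetical figures: 5 TB of source data at $6/TB/month.
data_tb = 5.0
rate_per_tb_month = 6.0

# Legacy format: roughly one full copy plus changed-file versions.
legacy_peak_tb = data_tb
# New Backup Format: two full copies can exist at the same time.
nbf_peak_tb = data_tb * 2

print(f"legacy: ~${legacy_peak_tb * rate_per_tb_month:.0f}/mo")  # ~$30/mo
print(f"NBF:    ~${nbf_peak_tb * rate_per_tb_month:.0f}/mo")     # ~$60/mo
```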
    What is your reason for switching this existing set of data to the New Backup Format?
  • Full backup from time to time
    What kind of backups are you referring to? File or Image/VM?
  • Why is Cloudberry Backup For Windows Server So Slow?? (Backblaze B2)
    You say that you are restoring individual files from an image backup? We do file backups using the legacy format and image backups using NBF (to take advantage of synthetic fulls in BackBlaze). We keep one full and daily incrementals of the image backup, and for files we keep 90 days' worth of versions using the legacy format. We have never had an issue with file download speed. If we need an entire system, we restore the image, which includes all of the data - that runs at near-ISP-max download speed.
  • Forever Forward Backup “Times Out”
    Don't use FFI for anything. But using the new backup format with weekly synthetic fulls (weekend) and daily incrementals works well for us. The weekday incrementals take only a very short time.
  • Error code: 1003
    What are you using for backend cloud storage? BackBlaze?
  • Browsing Cloudberry Backups with Explorer PRO
    The New Backup Format combines files into an archive file using a different format than the legacy file backup format. For that and other reasons, we do not use the New Backup Format for data/file backups; only for image and Hyper-V VHDX backups.
  • Files changed during backup code 1629
    This has been a pet peeve of mine for a long time. It is not an error or a warning - it is by design that files not needed for recovery are skipped. Someone from MSP360 care to comment?
  • Forever Forward Backup “Times Out”
    You are still using the New Backup Format, which you probably should not be using. DM'ing you some questions and more info.
  • Forever Forward Backup “Times Out”
    Kaleb - this is a problem with BackBlaze - their West Coast data center in particular. I live on the East Coast and was getting these timeouts on a regular basis. I resolved it by moving the backups to the East Coast BB DC (and at the same time set the bucket up for immutable Object Lock for those companies that might need it some day).
    As for Forever Forward Incremental (FFI), I DO NOT recommend using it for regular data file backups (documents, PDFs, etc.), only for VHDX or image backups.
    The problem with FFI is that once you reach the end of the retention period, it has to do a synthetic backup EVERY day. For example, you do your first true full upload on day one with a retention period of 30 days. On the 31st day the system does a synthetic backup that rolls the day-1 incremental into the full (using something called "in-cloud copy"), which takes a lot of time. The next day it does the same thing to fold the day-2 incremental into the full, and so on in perpetuity. With 5 TB of data, the synthetic folding of the incremental into the full takes a long time, and with BackBlaze West Coast it all too often times out.
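    The scheduling difference is easy to see in a toy simulation (day counts are illustrative, not a claim about MSP360 internals):

```python
RETENTION_DAYS = 30

def ffi_synthetic_days(total_days, retention=RETENTION_DAYS):
    """Days on which FFI must perform an in-cloud synthetic fold:
    every day once the retention window has filled up."""
    return [d for d in range(1, total_days + 1) if d > retention]

def weekly_full_days(total_days):
    """Days on which a weekly synthetic full runs (every 7th day)."""
    return [d for d in range(1, total_days + 1) if d % 7 == 0]

days = 60
print(len(ffi_synthetic_days(days)))   # 30 heavy in-cloud copy operations
print(len(weekly_full_days(days)))     # 8 synthetic fulls over the same span
```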

    Now contrast that to regular file backups. You do an initial upload of all the files. Each day you run an incremental backup which uploads any new files created that day plus any new versions of a file such as a spreadsheet or QuickBooks file. You never have to do another true full, synthetic or otherwise. At the end of the retention period any old versions of files are purged from the cloud, but any files that have not changed stay forever.
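    A sketch of that legacy-format retention behavior (the `purge` helper and the dates are hypothetical, not MSP360 code): versions older than the window are purged, but each file's latest version is kept indefinitely.

```python
from datetime import date, timedelta

def purge(versions, today, retention_days=90):
    """versions: list of (filename, version_date) tuples.
    Keep versions inside the retention window, plus each file's newest
    version regardless of age - unchanged files stay forever."""
    cutoff = today - timedelta(days=retention_days)
    newest = {}
    for name, d in versions:
        if name not in newest or d > newest[name]:
            newest[name] = d
    return [(name, d) for name, d in versions
            if d >= cutoff or d == newest[name]]

versions = [
    ("budget.xlsx", date(2024, 1, 5)),   # old version, superseded
    ("budget.xlsx", date(2024, 8, 20)),  # current version
    ("archive.pdf", date(2019, 3, 1)),   # unchanged for years
]
print(purge(versions, date(2024, 9, 1)))  # old budget version is purged
```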
    For image and VHDX backups it is a different story. Before the New Backup Format (NBF), we could only do a full backup once a month due to bandwidth considerations, and it took 2-3 days for each one to finish. With the NBF, which supports synthetic fulls, we can now do a weekly full and nightly incrementals. The synthetic fulls run on the weekends and typically take 80% less time than a real full backup would. We do not use FFI, as it would cause the same issue as file-based FFI: each night it would have to do the synthetic-full folding of the oldest incremental, which could take too long.
    So my recommendation would be to:
    1. Set up a different email account with BackBlaze and assign it to the data center closest to where your devices are.
    2. Set up a bucket that supports Object Lock immutability (you don't need to use it in your backup plans, but you could at some point in the future).
    3. Upload the 5 TB of files - understand that it could take a week (or more).
    4. Once the initial backup is complete, schedule weekly fulls and daily incrementals - understand that the weekly full will only back up files that changed during the week, such as a QB or Excel file, so it takes only slightly longer than an incremental.
    For any image or Hyper-V VHDX backups (yes, you can do that with the Server version by just uploading the VHDX files), set the retention period to one day. You will then have anywhere from one to seven backups available for recovery.

    I don't know your exact situation, but would be happy to assist you in designing the optimum backup plans.
  • Recovering Data after Project Loss
    Install MSP360 on another Linux machine, sign in with the same account as the old machine, and create a restore plan that has the correct decryption key.
  • Email Notification Assistance
    I tried extensively for two years to modify the verbiage in the email notifications to clients, as it is not clear and frankly frightens them. Even though I could change the wording of the body by editing the HTML, I could not get the subject line/status to appear the way it does with the default.
    Yet no one at MSP360 would work with me to get the custom email notifications to work the way I wanted.
    Maybe I will send in another ticket asking for help.
  • How to Resolve Missed Full Backups
    Because only one image backup can run at any one time, we run into this issue now and then. The incidence of this is far less frequent now that we utilize the synthetic full capability for the cloud Image backups. Full Image backups take, on average, around 75% less time to complete than they did previously.
    Because the fulls now take so much less time, we can distribute the schedule throughout the weeknights. My suggestion would be to schedule the local block-level incremental image backups first each night, given that they take only a short time and are not subject to internet speed variability. Then schedule your cloud fulls/incrementals. But above all, be sure to utilize the synthetic full (and immutability) offered by Amazon, BackBlaze and Wasabi. For example, a full image backup that once took 2.5 days to complete now completes in under 8 hours.
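    As a quick sanity check on those figures (using the example numbers from this reply; a single job can beat the roughly 75% average quoted above):

```python
old_hours = 2.5 * 24   # full image backup before synthetic fulls
new_hours = 8.0        # the same backup with an in-cloud synthetic full
reduction = 1 - new_hours / old_hours
print(f"{reduction:.0%} less time")  # 87% less time
```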
  • Split Backup Data to multiple Jobs
    Not sure I understand what you are trying to accomplish, but here is my interpretation:
    You are running two file backup jobs - One to the NAS and another to AWS, but these jobs backup data from two separate clients/companies.
    You want to wind up with two jobs for each company - one to the NAS and the other to AWS - each containing only one company's files/folders.
    The easiest way to accomplish this is to clone the existing jobs and then go into the "What to backup" and select the appropriate files/folders for each one.
    The data that is already on the NAS/AWS doesn't change and no re-uploading is involved.
    If this is not what you are looking to do please elaborate.
  • Intelligent Retention Postponed
    James
    The answer is yes - it will wait until the standard 90-day minimum retention period has expired before doing a full again. But there is a special offer from MSP360 that reduces that to 30 days. See below.

    https://help.mspbackups.com/billing-storage/storage-providers/wasabi/min-retention-policy
    We use BackBlaze for our image and VHDX backups that we only need to keep for a week, as there is no minimum retention period - but not without issues (DM me if interested in more details).
    Hope this helps
  • Maintenance
    Following.
  • Swapping machines
    Don't worry about the licenses - you will get a trial license. Log in to MSP360 using the client's credentials on the new machine, and then create a restore plan.
  • Cannot open system after downloading
    Check to see if it is being quarantined by your AV software