• Full backup from time to time
    What kind of backups are you referring to? File or Image/VM?
  • Why is Cloudberry Backup For Windows Server So Slow?? (Backblaze B2)
    You say that you are restoring individual files from an image backup? We do file backups using the legacy format and Image backups using NBF (to take advantage of synthetic fulls in BackBlaze). We keep one full and daily incrementals of the Image backup, and for files we keep 90 days' worth of versions using the legacy format. We have never had an issue with file download speed. If we need an entire system, we restore the Image, which includes all of the data - that runs at near-ISP-max download speed.
  • Forever Forward Backup “Times Out”
    Don't use FFI for anything. But using the new backup format with weekly synthetic fulls (weekend) and daily incrementals works well for us. The weekday incrementals take only a very short time.
  • Error code: 1003
    What are you using for backend Cloud storage? BackBlaze?
  • Browsing Cloudberry Backups with Explorer PRO
    The New Backup Format combines files into an archive file using a different format than the legacy file backup format. For that, and other reasons, we do not use the New Backup Format for data/file backups; only for Image and HyperV VHDx file backups.
  • Files changed during backup code 1629
    Been a pet peeve of mine for a long time. It is not an error or a warning. It is by design that the software skips files that are not needed for recovery. Someone from MSP360 care to comment?
  • Forever Forward Backup “Times Out”
    You are still using the New Backup Format, which you probably should not be using. DM'ing you some questions and more info.
  • Forever Forward Backup “Times Out”
    Kaleb - This is a problem with BackBlaze - their West Coast data center in particular. I live on the East Coast and was getting these timeouts on a regular basis. I resolved it by moving the backups to the East Coast BB DC (and at the same time set the bucket up for immutable Object Lock for those companies that might need it some day).
    As for Forever Forward Incremental (FFI), I DO NOT recommend using that for regular data file backups (documents, PDFs, etc.), only for VHDx or Image backups.
    The problem with FFI is that once you reach the number of days in the retention period, it has to do a synthetic backup EVERY DAY. For example, you do your first true full upload on day one with a retention period of 30 days. On the 31st day the system does a synthetic backup that rolls the day-1 incremental into the full (using something called "in-cloud copy"), which takes a lot of time. The next day it does the same thing to fold the day-2 incremental into the full, and so on in perpetuity. With 5TB of data, it takes a long time to do the synthetic folding of the incremental into the full. And with BackBlaze West Coast, it all too often times out.
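    To make that concrete, here is a minimal sketch (my own illustration of the schedule described above, assuming a 30-day retention) of what each daily FFI run has to do once the retention window fills:

```python
# Sketch of the FFI schedule: after the retention window fills, every
# daily run must fold the oldest incremental into the synthetic full
# via in-cloud copy - and that fold is the slow part.
RETENTION_DAYS = 30

def daily_operation(day: int) -> str:
    if day == 1:
        return "true full upload"
    if day <= RETENTION_DAYS:
        return "incremental only"
    # past the retention window: the fold happens every day, forever
    return "incremental + synthetic fold of oldest incremental"

print(daily_operation(2))    # incremental only
print(daily_operation(31))   # incremental + synthetic fold of oldest incremental
print(daily_operation(365))  # same fold - it never stops
```

    The point is that the expensive in-cloud copy is not a one-time event; it becomes part of every nightly run.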

    Now contrast that to regular file backups. You do an initial upload of all the files. Each day you run an incremental backup which uploads any new files created that day plus any new versions of a file such as a spreadsheet or QuickBooks file. You never have to do another true full, synthetic or otherwise. At the end of the retention period any old versions of files are purged from the cloud, but any files that have not changed stay forever.
    For Image and VHDx backups it is a different story. Before the New Backup Format (NBF) we could only do a full backup once a month due to bandwidth considerations, and it took 2-3 days for each one to finish. With the NBF, which supports synthetic fulls, we can now do a weekly full and nightly incrementals. The synthetic fulls run on the weekends and typically take 80% less time than a real full backup would. We do not use FFI, as it would cause the same issue as the file-based FFI: each night it would have to do the synthetic full folding of the oldest incremental, and that could take too long.
    So my recommendation would be to:
    1. Set up a different email account with BackBlaze, and assign it to the data center closest to where your devices are.
    2. Set up a bucket that supports Object Lock immutability (you don't need to use it in your backup plans, but you could at some point in the future).
    3. Upload the 5TB of files - understand that it could take a week (or more).
    4. Once the initial backup is complete, schedule weekly fulls and daily incrementals - understand that the weekly full will only back up files that have had changes during the week, such as a QB or Excel file. It takes only slightly longer than an incremental.
    For any Image or HyperV VHDx backups (yes, you can do that with the Server version by just uploading the VHDx files), set the retention period to one day. You will then have anywhere from one to seven backups available for recovery.

    I don't know your exact situation, but would be happy to assist you in designing the optimum backup plans.
  • Recovering Data after Project Loss
    Install MSP360 on another Linux machine, sign in with the same account as the old machine, and create a restore plan that has the correct decryption key.
  • Email Notification Assistance
    I tried extensively for two years to modify the verbiage in the email notifications to clients, as it is not clear and frankly frightens them. But even though I could change the wording of the body by editing the HTML, I could not get the subject line/status to appear the way it does with the default.
    Yet no one at MSP360 would work with me to get the custom email notifications to work the way I wanted.
    Maybe I will send in another ticket asking for help.
  • How to Resolve Missed Full Backups
    Because only one image backup can run at any one time, we run into this issue now and then. The incidence of this is far less frequent now that we utilize the synthetic full capability for the cloud Image backups. Full Image backups take, on average, around 75% less time to complete than they did previously.
    Because the fulls now take so much less time, we can distribute the schedule throughout the week nights. My suggestion would be to schedule the local block-level incremental image backups first each night, given that they take only a short time and are not subject to internet speed variability. Then schedule your cloud fulls/incrementals. But above all, be sure to utilize the synthetic full (and immutability) offered by Amazon, BackBlaze and Wasabi. For example, a full image backup that once took 2.5 days to complete now completes in under 8 hours.
  • Split Backup Data to multiple Jobs
    Not sure I understand what you are trying to accomplish, but here is my interpretation:
    You are running two file backup jobs - one to the NAS and another to AWS, but these jobs back up data from two separate clients/companies.
    You want to wind up with two jobs for each company - one to NAS and the other to AWS - each containing only one company's files/folders.
    The easiest way to accomplish this is to clone the existing jobs and then go into the "What to backup" and select the appropriate files/folders for each one.
    The data that is already on the NAS/AWS doesn't change and no re-uploading is involved.
    If this is not what you are looking to do please elaborate.
  • Intelligent Retention Postponed
    The answer is yes - it will wait until the standard 90-day minimum retention period has expired before doing a full again. But there is a special offer from MSP360 that reduces that to 30 days. See below.
    We use BackBlaze for our Image and VHDx backups that we only need to keep for a week, as there is no minimum retention period - but not without issues (DM me if interested in more details).
    Hope this helps
  • Maintenance
  • Swapping machines
    Don't worry about the licenses - you will get a trial license. Log in to MSP360 using the client's credentials on the new machine, and then create a restore plan.
  • Cannot open system after downloading
    Check to see if it is being quarantined by your AV software
  • Windows Home Server, AWS, TLS 1.2
    We are supporting a Windows XP server that only works with older MSP360 versions.
    We were told it is a TLS-related issue.
  • Restore to local computer using agent or mbs web portal
    You can do an item-level restore in a restore plan from both the web portal and the agent console. I learned through testing that the source machine has to be on in order to access its backups, but it is still a very useful feature that I did not know existed.
  • Optimum S3/Cloudberry config for desktop data
    thanks for answering my questions.
    This is how we would setup a backup scheme based on your requirements:
    Local Backup
    If you don’t already have one, get a 4-5 TB USB 3.x capable removable Hard Drive.
    Set it up as a remote shared device and send both Image and data backups in legacy format from each of your computers to that device. If you have a standard OS build, you really do not need to image every desktop, just your standard OS build and any one-offs.
    This costs nothing other than the device cost (~$100) and should allow you to keep a couple of weeks of images (daily incremental/weekly full with a 6-day retention, using the legacy backup format).
    We keep a year or two worth of data versions and deleted files - as long as we have the drive capacity (hence the 5TB drive).

    Cloud Image backups:
    Once you have a set of local Image backups, there is no need to keep more than one or two copies of your standard image in the cloud.
    We send daily Image backups to the cloud using the New Backup Format (NBF), with a synthetic full scheduled each weekend. We give it a one-day retention, so we have anywhere from two to seven copies depending on the day of the week. (If this is confusing, let me know and I will explain.)
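    The two-to-seven count works out like this - a minimal sketch (my own illustration, assuming the weekend synthetic full is day 0 and one incremental runs each following day):

```python
# Restore points under a weekly synthetic full + daily incrementals with
# a one-day retention: the current full plus one incremental per day
# elapsed since that full (an assumption based on the plan above).
def restore_points(days_since_full: int) -> int:
    return 1 + days_since_full

# One day after the weekend full through six days after:
print([restore_points(d) for d in range(1, 7)])  # [2, 3, 4, 5, 6, 7]
```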
    Now, to keep costs down, we use BackBlaze B2 (not BB S3-compatible) for our Image cloud backups.
    Reason #1 - the cost is only $0.005/GB/mo vs. $0.01 for OZ-IA.
    Reason #2 - it supports synthetic full backups.
    Reason #3 - there is no minimum retention, as there is with Amazon One Zone-IA (30 days).

    Cloud File Backups
    We would use the legacy file format and back up to Amazon OZ-IA with a 90-day retention.
    We run monthly fulls and daily block level incrementals.
    Understand that a “full” in legacy format is only backing up files that have outstanding block level versions since the last full.
    So the actual space consumed for all of the unchanged files and versions is typically not more than 10-15% more than the size of the data on the source.

    File Backup Retention policies
    Set up a separate daily cloud backup plan for that infrequently used Access database and give it a 90- or 180-day retention period. Keep in mind you will eventually have a year's worth on the local drive, but that cannot be guaranteed, as the drive could fail.
    Exclude those files from your normal cloud file backup plan and give it a 30-day retention in OZ-IA.
    Understand that with a monthly full and 29 incrementals, the previous set of fulls/incrementals will not be purged until the last incremental of the set has aged to 30 days.
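    That purge timing can be sketched like this (the dates are made up for illustration; the rule is the one just described - a set lives until its newest incremental ages out):

```python
# An old full/incremental set is only deleted once its *last* incremental
# exceeds the retention period, because the full is needed to restore any
# incremental in the chain. Dates below are hypothetical.
from datetime import date, timedelta

RETENTION = timedelta(days=30)

full_taken = date(2024, 1, 1)          # old monthly full
last_incremental = date(2024, 1, 30)   # final incremental of that set
purge_date = last_incremental + RETENTION

print(purge_date)  # 2024-02-29
```

    So the January 1 full is retained for roughly 59 days even with a 30-day retention setting - worth knowing when estimating storage costs.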

    So in summary:
    - Get a 4-5 TB local drive and back up files and images from all of your machines to it using the legacy format, with as long a retention setting as you want.
    - Send nightly images to the Cloud (only unique ones) using NBF and weekly synthetic fulls. With your 280 Mbps upstream speed this will be a piece of cake. Set retention to one or two days, since it is for disaster recovery, not long-term retention.
    - Set up a legacy backup for your normal files to Amazon OZ-IA with a 90-day retention, monthly “incrementals” (fulls in our language) and block-level incrementals each day.
    - For those infrequently updated Access DB files, set up a separate backup plan and set the retention to a year or whatever you like.

    As for Glacier, there is a significant cost to use lifecycle management to migrate from OZ-IA to Glacier - $0.05 per thousand objects. For small files, you will wind up paying more just to migrate them than you will save. When we have a particular folder that holds large files (over 2MB each on average) that don't change, we will use Cloudberry Explorer to set up a lifecycle policy for those folders to migrate to Glacier after 30 days.
    In general, I do not recommend using the Glacier lifecycle migration. Not worth the trouble.
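    Here is a rough break-even sketch. The $0.05-per-thousand-object transition fee is from the discussion above, but the per-GB storage prices are my assumptions for illustration - check current AWS pricing before relying on this:

```python
# Rough break-even for OZ-IA -> Glacier lifecycle migration.
# Transition fee: $0.05 per 1,000 objects (as noted above).
# Storage prices below are ASSUMED, not authoritative.
TRANSITION_FEE = 0.05 / 1000   # $ per object migrated
OZ_IA_PRICE = 0.01             # $/GB/mo (assumed)
GLACIER_PRICE = 0.0036         # $/GB/mo (assumed)

def breakeven_months(object_size_mb: float) -> float:
    """Months of Glacier storage needed to recoup the migration fee."""
    size_gb = object_size_mb / 1024
    savings_per_month = size_gb * (OZ_IA_PRICE - GLACIER_PRICE)
    return TRANSITION_FEE / savings_per_month

for size_mb in (0.1, 2.0, 100.0):
    print(f"{size_mb:6.1f} MB -> break-even in {breakeven_months(size_mb):6.1f} months")
```

    With these assumed prices, a 100 KB file takes years to pay back the transition fee, while a 2 MB file breaks even in a few months - which lines up with the 2 MB rule of thumb above.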

    So I apologize for the lengthy and perhaps confusing reply, but there are a lot of factors to take into account when optimizing backup strategies.