• Recovering Data after Project Loss
    Install MSP360 on another Linux machine, sign in with the same account as the old machine, and create a restore plan that has the correct decryption key.
  • Email Notification Assistance
    I tried extensively for two years to modify the verbiage in the email notifications to clients, as it is not clear and frankly frightens them. But even though I could change the wording of the body by editing the HTML, I could not get the subject line/status to appear the way it does with the default.
    Yet no one at MSP360 would work with me to get the custom email notifications to work the way I wanted.
    Maybe I will send in another ticket asking for help.
  • How to Resolve Missed Full Backups
    Because only one image backup can run at any one time, we run into this issue now and then. The incidence is far less frequent now that we utilize the synthetic full capability for the cloud image backups. Full image backups take, on average, around 75% less time to complete than they did previously.
    Because the fulls now take so much less time, we can distribute the schedule throughout the week nights. My suggestion would be to schedule the local block-level incremental image backups first each night, given that they take only a short time and are not subject to internet speed variability. Then schedule your cloud fulls/incrementals. But above all, be sure to utilize the synthetic full (and immutability) offered by Amazon, BackBlaze, and Wasabi. For example, a full image backup that once took 2.5 days to complete now completes in under 8 hours.
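    Doing the arithmetic on that closing example (figures are from this post, not a benchmark - and note this particular case works out even better than the 75% average):

```python
# Time-savings check for synthetic fulls, using the numbers quoted
# above: a true full that took 2.5 days vs. a synthetic full under 8 h.
old_hours = 2.5 * 24   # 60 hours
new_hours = 8.0

savings = 1 - new_hours / old_hours
print(f"Window shrank from {old_hours:.0f} h to {new_hours:.0f} h")
print(f"Time saved: {savings:.0%}")
```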
  • Split Backup Data to multiple Jobs
    Not sure I understand what you are trying to accomplish, but here is my interpretation:
    You are running two file backup jobs - one to the NAS and another to AWS - but these jobs back up data from two separate clients/companies.
    You want to wind up with two jobs for each company - one to NAS and the other to AWS - each containing only one company's files/folders.
    The easiest way to accomplish this is to clone the existing jobs and then go into the "What to backup" and select the appropriate files/folders for each one.
    The data that is already on the NAS/AWS doesn't change and no re-uploading is involved.
    If this is not what you are looking to do please elaborate.
  • Intelligent Retention Postponed
    The answer is yes - it will wait until the standard 90-day minimum retention period has expired before doing a full again. But there is a special offer from MSP360 that reduces that to 30 days. See below.

    We use BackBlaze for our Image and VHDX backups that we only need to keep for a week, as there is no minimum retention period - but not without issues (DM me if interested in more details)
    Hope this helps
  • Maintenance
  • Swapping machines
    Don't worry about the licenses - you will get a trial license. Log in to MSP360 using the client's credentials on the new machine, and then create a restore plan.
  • Cannot open system after downloading
    Check to see if it is being quarantined by your AV software
  • Windows Home Server, AWS, TLS 1.2
    We are supporting a Windows XP server that only works with an older MSP360 version and prior.
    We were told it is a TLS-related issue.
  • Restore to local computer using agent or mbs web portal
    You can do item-level restores in a restore plan from both the web portal and the agent console. I learned through testing that the source machine has to be on in order to access its backups, but it is still a very useful feature that I did not know existed.
  • Optimum S3/Cloudberry config for desktop data
    Thanks for answering my questions.
    This is how we would set up a backup scheme based on your requirements:
    Local Backup
    If you don’t already have one, get a 4-5 TB USB 3.x capable removable Hard Drive.
    Set it up as a remote shared device and send both Image and data backups in legacy format from each of your computers to that device. If you have a standard OS build, you really do not need to image every desktop - just your standard OS build and any one-offs.
    This costs nothing other than the device cost (~$100) and should allow you to keep a couple of weeks of images (daily incremental/weekly full with a 6-day retention, using legacy backup format).
    We keep a year or two's worth of data versions and deleted files - as long as we have the drive capacity (hence the 5 TB drive).

    Cloud Image backups:
    Once you have a set of local Image backups, there is no need to keep more than one or two copies of your standard image in the cloud.
    We send daily Image backups to the cloud using the New Backup Format (NBF), with a synthetic full scheduled each weekend. We give it a one-day retention, so we have anywhere from 2 to 7 copies depending on the day of the week (if this is confusing, let me know and I will explain).
    Now, to keep costs down, we use BackBlaze B2 (not the BB S3-compatible option) for our Image cloud backups.
    Reason #1 - the cost is only $0.005/GB/mo vs. $0.01 for OZ-IA
    Reason #2 - it supports synthetic full backups
    Reason #3 - there is no minimum retention as there is with Amazon One Zone-IA (30 days).
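    To put Reason #1 in dollar terms, a quick sketch using the per-GB rates quoted above (rates change, so verify current pricing; this ignores API calls, egress, and minimum-retention charges):

```python
# Monthly storage cost comparison at the quoted rates:
# Backblaze B2 at $0.005/GB/mo vs. Amazon One Zone-IA at $0.01/GB/mo.
B2_RATE = 0.005    # $/GB/month
OZ_IA_RATE = 0.01  # $/GB/month

def monthly_cost(gb: float, rate: float) -> float:
    """Storage-only cost; excludes requests, egress, and minimums."""
    return gb * rate

for tb in (1, 5, 10):
    gb = tb * 1000
    print(f"{tb:>2} TB: B2 ${monthly_cost(gb, B2_RATE):7.2f}/mo  "
          f"OZ-IA ${monthly_cost(gb, OZ_IA_RATE):7.2f}/mo")
```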

    Cloud File Backups
    We would use legacy file format and back up to Amazon OZ-IA with a 90-day retention.
    We run monthly fulls and daily block-level incrementals.
    Understand that a "full" in legacy format is only backing up files that have outstanding block-level versions since the last full.
    So the actual space consumed for all of the unchanged files and versions is typically not more than 10-15% more than the size of the data on the source.

    File Backup Retention policies
    Set up a separate daily cloud backup plan for that infrequently used Access database and give it a 90- or 180-day retention period. Keep in mind you will eventually have a year's worth on the local drive, but that cannot be guaranteed as the drive could fail.
    Exclude those files from your normal cloud file backup plan and give it a 30-day retention in OZ-IA.
    Understand that with a monthly full and 29 incrementals, the previous set of fulls/incrementals will not be purged until the last incremental of the set has aged to 30 days.
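    A small sketch of that purge timing (hypothetical set: full on day 0, incrementals on days 1-29, new full on day 30, 30-day retention):

```python
# Legacy-format purge timing as described above: the whole old set is
# only purged once its NEWEST member (the last incremental of the set)
# has aged past the retention period.
RETENTION_DAYS = 30
last_incremental_day = 29              # final incremental of the old set
purge_day = last_incremental_day + RETENTION_DAYS

print(f"Old set is purged on day {purge_day}")
print(f"So the day-0 full sits in storage ~{purge_day} days, "
      f"nearly twice the {RETENTION_DAYS}-day retention setting")
```

    In other words, plan storage capacity for roughly two full sets, not one.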

    So in summary:
    - Get a 4-5 TB local drive and back up files and images from all of your machines to it using Legacy format, with as long a retention setting as you want.
    - Send nightly images to the Cloud (only unique ones) using NBF and weekly synthetic fulls. With your 280 Mbps upstream speed this will be a piece of cake. Set retention to one or two days, since it is for disaster recovery, not long-term retention.
    - Set up a legacy backup for your normal files to Amazon OZ-IA with a 90-day retention, monthly "fulls" (which, as explained above, behave like incrementals) and block-level incrementals each day.
    - For those infrequently updated Access DB files, set up a separate backup plan and set the retention to a year or whatever you like.

    As for Glacier, there is a significant cost to use lifecycle management to migrate from OZ-IA to Glacier - $0.05 per thousand objects. For small files, you will wind up paying more just to migrate them than you will save. When we have a particular folder that holds large files (over 2 MB each on average) that don't change, we will use CloudBerry Explorer to set up a lifecycle policy for that folder(s) to migrate the large files to Glacier after 30 days.
    In general, I do not recommend using the Glacier lifecycle migration. Not worth the trouble.
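    To illustrate the small-file point, a rough break-even sketch. The $0.05-per-thousand-objects transition fee is from above; the Glacier storage rate is an assumption (~$0.0036/GB/mo), so check current pricing before relying on this:

```python
# Break-even time for migrating one object from OZ-IA to Glacier.
TRANSITION_FEE = 0.05 / 1000   # $ per object migrated ($0.05/1,000)
OZ_IA_RATE = 0.01              # $/GB/mo (quoted earlier in this post)
GLACIER_RATE = 0.0036          # $/GB/mo (assumed; verify current rate)

def months_to_break_even(size_mb: float) -> float:
    """Months of storage savings needed to recoup the migration fee."""
    size_gb = size_mb / 1024
    monthly_savings = size_gb * (OZ_IA_RATE - GLACIER_RATE)
    return TRANSITION_FEE / monthly_savings

for mb in (0.1, 2, 100):
    print(f"{mb:>6} MB object: ~{months_to_break_even(mb):.1f} "
          f"months to break even")
```

    Sub-megabyte objects take years to recoup the fee, while multi-megabyte files break even within months - consistent with the 2 MB rule of thumb above.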

    So I apologize for the lengthy and perhaps confusing reply, but there are a lot of factors to take into account when optimizing backup strategies.
  • Optimum S3/Cloudberry config for desktop data
    A few questions first:
    • Do you plan to do image backups? I believe the newer version allows image backups of desktop OSes
    • Roughly how many files, and how many GBs, are you backing up?
    • What ISP upload/download speed do you currently have?
    • Are you sure you want to "keep # of versions" vs., say, a 30- or 90-day retention period for versions?
  • Delete Files in Hybrid Plans
    I would be happy to help set up your Image and File backups - when to use Legacy and when to use NBF formats for both Local and Cloud storage - as well as to help choose the optimum backend Cloud platform. I have been using MSP360 for 9 years and thus have a pretty thorough understanding of the product's features and limitations. DM me if interested.
  • Delete Files in Hybrid Plans
    You are right, there is no setting for "purging deleted files" with Hybrid as there is with standard Legacy file backups.
    We do not use Hybrid Backups; we simply run separate plans to Local and Cloud storage.
    We run local backups every four hours and keep 2 years of versions and deleted files, since there is no cost to do so. We might lower that to one year if the local storage capacity is smaller, but with the cost of a 4-5 TB USB drive these days, space is rarely a concern.
    We keep Cloud storage versions for 90 days (on two separate cloud platforms) unless the customer pays for extended 15-month retention (recommended for CPAs and legal firms that often touch their client files only once per year).
    Can you help me understand the need for a hybrid backup?
  • Backup Agent 7.8 for Windows / Management Console 6.3
    Backup Fan - I understand your concerns with BB and Wasabi - We actually use them as a SECOND cloud backup after our primary - Amazon S3 IA
  • Backup Agent 7.8 for Windows / Management Console 6.3
    I do not believe that any of the Glacier options support the in-cloud copying necessary for Synthetic fulls. Curious as to why you want to use Glacier - I know it is less expensive, but BackBlaze and Wasabi are not much more per GB and they do support the Synthetic fulls.
  • New Backup Format Setup Help
    I am glad the deletions are working for you. The empty folder thing is a known issue.
    I find that doing periodic consistency checks avoids repo sync issues that can affect the purge schedule.
  • New Backup Format Setup Help
    Lukas -
    I am going to be honest with you, unless I am totally misunderstanding your question, I think that you would be wise to keep using the legacy Backup format.
    Recap of how Legacy works:
    - Files get created or modified and are backed up to Wasabi.
    - If they never get modified or deleted, they simply stay in Wasabi forever.
    - If a file gets deleted, it stays in Wasabi for 90 days (based on your setting in Legacy for deleted files)
    - A "Full" backup in legacy mode is actually an incremental backup. It simply backs up any file that has had a block-level incremental done since the last "Full". This is typically only a small subset of your entire backup data on Wasabi, since the vast majority of space is consumed by video files that will never change and will never need to be backed up again.

    Let's look at how FFI would work in your scenario:
    1. You re-upload all of the existing data to Wasabi using the New Backup Format.
    2. You set the FFI interval to 90 days - or let Intelligent Retention do it for you because of the Wasabi early-delete penalty for objects less than 90 days old
    3. Each night, only files that have been changed or added that day get included in the incremental backup
    4. If a file gets deleted, it will be kept in the cloud for the FFI interval - the same as for file versions.
    5. At the end of 90 days you will have one true Full backup and 89 incrementals.
    Here is where it gets dicey:
    On day 91, the system takes one more incremental - then starts creating a brand new "Synthetic" Full, which uses the "in-cloud copy" feature of Wasabi to create a brand new Full, just as if you re-uploaded the files - even ones that have not changed.
    Now this would be okay except for one issue: The in-cloud copy feature runs at between 200GB and 350GB per hour. You can do the math - but 70TB is going to take a LONG time to copy in Wasabi.
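    Doing that math (using the copy rates quoted above):

```python
# In-cloud copy time for a 70 TB synthetic full at the quoted Wasabi
# copy rates of 200-350 GB per hour (figures from the post above).
DATA_GB = 70 * 1000  # 70 TB in decimal GB

for rate_gb_per_hour in (350, 200):  # best case, worst case
    hours = DATA_GB / rate_gb_per_hour
    print(f"At {rate_gb_per_hour} GB/h: {hours:.0f} h "
          f"(~{hours / 24:.1f} days)")
```

    So a single synthetic full of that data set would still be running - more than a week later - when the next one is due to start.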
    And here is the best part - On day 92 the system will do ANOTHER synthetic full as its goal is to keep no more than one Full and 90 incrementals in storage at any point in time.
    I have requested that we be given the ability to schedule when the Synthetic full occurs - so that we can perform it once a week on weekends instead of every night.
    So you can see that the new format and the FFI are not going to be viable for the amount of data that you have.
    And your static video file data type is ideal for the legacy format's "forever incremental" design.
    When you move the completed project video files to the other NAS, they will be considered deleted and get purged from Wasabi after 90 days (or whatever you set).

    Now my only disclaimer: if you are actually only storing a small amount of data in Wasabi at any given point in time, and the majority of your 70-80 TB is on a NAS and not in the cloud, then FFI might make sense.
    Also, we use BackBlaze as there are no early deletion fees.
    Happy to discuss further.
  • Large Cloud Image Backups
    First of all, we use redirected folders for the majority of our clients, and only back up the server. Workstations are standard builds that can be reloaded fairly quickly, and because we encourage clients to maintain a spare (or two), the rebuild of an individual PC is not an emergency.

    We do local image backups of the server on a daily basis - Usually weekly fulls and daily incrementals.
    This provides full operational recovery in the event of a failure of the OS/Hardware.
    Prior to the availability of Synthetic full backups, we did a full cloud image backup only once per month.
    We would exclude from the image backup all data folders, as well as the temp folders, recycle bin, etc., to keep the size down. In a true disaster, having a one-month-old image was acceptable, as the OS and apps typically do not change significantly in a month.
    We do cloud and local daily file backups as well that would be used to bring the server up to date after the image is restored.
    The daily delta for our image backups is typically in the 5-15GB range, due to the fact that any change in the OS, location of temp files, etc will result in changed blocks which need to be backed up again.
    With the synthetic full capability we now run image backups every night for all clients except those with the very slowest link speeds (<5 Mbps).
    The synthetic full gets run on the weekend and takes a tenth of the time that a true full would take.
    For those with slow links, we do a monthly synthetic full and weekly incrementals on the weekends.
    For our clients who are using P2P devices for file sharing, again, we only do an image of the P2P server, not individual workstations on the network.
    Not knowing how your clients are setup it is hard to make a recommendation, but certainly you should have local and cloud image backups, and utilize Synthetic cloud fulls. I recommend using BackBlaze as there is no minimum retention period. And for disaster recovery, there is no real need to keep more than one or two versions of the image in the cloud.
    For our clients that have only individual machines with the data stored locally, we simply backup the data files to the cloud (and locally if they have a USB HD device). We do not do image backups unless they are willing to pay extra for that service. ($10/month).
    Brevity is not my strong suit :)
  • Confused by schedule options
    The "repeat every xx" setting seems irrelevant with FFI. With other backups I used it to run a Full every three months and incrementals every week.
    With FFI, the frequency of the synthetic full is now dictated by the retention period that you set, or by Intelligent Retention if you have it turned on and your retention period is less than the platform minimum.