• Backup Agent 7.8 for Windows / Management Console 6.3

    Thanks for your valuable inputs as always Steve.

    I asked first because they mentioned it was supported, but in fact the agent doesn't support it. I received a reply that in an upcoming version it will be available, but only for Glacier Instant Retrieval, which is good.

    The reason we want to use Glacier for specific projects is that we have customers with huge amounts of archival data, on the order of 6, 7, even 10 million files, which need to be stored for life. If we use the NBF, we get a lower cost on the initial upload (fewer transactions), the archival requirement is satisfied, and downloads are fast. The way the agent works with the legacy backup format, unarchiving files in batches of 1,000, it would take a year to download the entire dataset, whereas with the NBF it would be much faster.
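    To make the download concern concrete, here's a back-of-the-envelope sketch; the per-batch retrieval time, the serial processing, and the file count are illustrative assumptions, not measured values:

    ```python
    # Rough estimate of restoring a multi-million-file archive when the
    # legacy format unarchives files in batches of 1,000. The 4-hour
    # per-batch figure (in line with Glacier standard retrievals) and the
    # serial processing are assumptions for illustration only.

    files = 6_000_000
    batch_size = 1_000
    retrieval_hours_per_batch = 4

    batches = files // batch_size
    serial_days = batches * retrieval_hours_per_batch / 24

    print(f"{batches} batches -> ~{serial_days:.0f} days if retrieved serially")
    ```

    Even with some batch overlap, per-batch overhead dominates at this scale, which matches the "it would take a year" experience; a handful of large NBF archives would pay that retrieval latency only a few times.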

    Finally, the reason we don't use Wasabi or Backblaze (despite knowing they're a lot cheaper than any of the AWS services) is that they are not even on the Gartner Magic Quadrant:
    https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/10/28/CIPS-MQ-991x1024.png

    Those players don't provide the SLA and security we require for our customers.

    Thanks again for the inputs.

    Best regards,
  • Backup Agent 7.8 for Windows / Management Console 6.3
    Hi Team,
    Are you able to use the New Backup Format and Glacier Instant Retrieval for Synthetic Fulls?
    I tried to configure it, but it prompted me to disable the synthetic full, despite using a supported format. As per the documentation (https://help.mspbackups.com/backup/about/backup-format/synthetic-full-backup):
    Support for Major Storage Providers
    A synthetic backup type is supported by the following storage providers:

    Amazon S3 (except S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes)

    I opened a ticket and they said the documentation needed to be updated.

    Just checking whether I got confused here, or maybe I'm doing something wrong in trying to combine the NBF and Glacier Instant Retrieval (the new Glacier storage class).
  • Backup Agent 7.8 for Windows / Management Console 6.3
    Thanks Steve. I'm considering other storage options, but as these are very important files, we're staying with Amazon.

    The bottom line is that Backblaze is not even mentioned by Gartner: https://aws.amazon.com/resources/analyst-reports/22-global-gartner-mq-cips/?nc1=h_ls

    Regarding the lifecycle transition: if we could use the NBF, the number of requests would be greatly reduced. For example, in a case where we have 5.8 million files, the NBF would make it cheaper to store, to transact, and to change lifecycle.
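    As a rough illustration of the request-count argument (the per-1,000-requests price and the NBF object count below are placeholders; check current AWS pricing):

    ```python
    # Lifecycle transition requests are billed per object, so packing
    # millions of files into a few large NBF archives slashes that bill.
    # The price and the NBF object count are illustrative placeholders.

    price_per_1000_transitions = 0.03      # USD, placeholder
    legacy_objects = 5_800_000             # one object per file
    nbf_objects = 60                       # assumed archive count under NBF

    legacy_cost = legacy_objects / 1_000 * price_per_1000_transitions
    nbf_cost = nbf_objects / 1_000 * price_per_1000_transitions

    print(f"legacy: ${legacy_cost:,.2f} vs NBF: ${nbf_cost:.4f}")
    ```

    The same per-object logic applies to the initial PUTs and to restore requests, which is why the gap compounds.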

    Anyway, hopefully the team makes some small changes to FFI and intelligent tracking to allow the use of long-term storage.

    As we say here, we already have the knife and the cheese in hand; maybe a couple of changes could make it possible.

    Until then, we'll keep using legacy.
  • Backup Agent 7.8 for Windows / Management Console 6.3
    I guess the lifecycle transition would be the way to go, but the tool would need to work with it when we need to restore data.

    We already have a storage scheme in place and working:
    7 days in hot storage
    on the 8th day, data moves to cold storage (Glacier)
    Using the legacy format, it works fine.

    With the new backup format, we could upload to S3 and then transition to Glacier. That's a test I'm going to run next week to see if it works, especially the download part.
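    That tiering can be expressed as an S3 lifecycle rule; a minimal sketch in the dict shape boto3 expects, with the bucket name as a placeholder:

    ```python
    # "7 days hot, cold on day 8" as an S3 lifecycle configuration, in
    # the shape accepted by boto3's put_bucket_lifecycle_configuration.
    # The bucket name below is a placeholder.

    lifecycle = {
        "Rules": [
            {
                "ID": "hot-7-days-then-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},            # whole bucket
                "Transitions": [
                    {"Days": 8, "StorageClass": "GLACIER"},
                ],
            }
        ]
    }

    # Applying it (requires AWS credentials, not executed here):
    # import boto3
    # boto3.client("s3").put_bucket_lifecycle_configuration(
    #     Bucket="example-backup-bucket",
    #     LifecycleConfiguration=lifecycle,
    # )
    ```

    The open question for the test is exactly the one above: whether the agent can still restore cleanly once the objects have transitioned.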
  • Backup Agent 7.8 for Windows / Management Console 6.3
    Thanks for your feedback, Steve. That was the same assumption I had, until I uploaded a lot of data and the backup failed because the MSP360 maintenance window fell in the middle of uploading 2-3 TB of initial data.

    I opened a support ticket, and this is the reply I got when I asked why it didn't pick up where it left off and why it was starting a new full backup, meaning that all the data I had sent over those days was going to the trash:

    "If the backup fails, it needs to be started anew (just the latest run, not the whole chain), leaving the data that was uploaded on the storage and marking it as a failed incremental run. The backup cannot be continued from the same spot, neither in the legacy nor in the new backup format plan.
    What actually happens when the incremental backup fails and we start a new backup run: the software checks if the previous backup execution was successful or not and, most importantly, if the restore point created during the previous backup execution is valid. If the restore point is invalid due to backup failure, the software looks for the last valid restore point before it. When the valid restore point is finally found, the software starts listing the information about the data that is on the machine and in the cloud. Once this is done, it starts the upload of the data. (Some steps, like shadow copy creation, are skipped to shorten the description of the process.)
    That said, according to the screenshot you have provided, the full backup run failed and the last restore point before the full was in a previous chain, hence all your current incrementals that were successfully uploaded belong to a previous backup chain (the last successful full backup)."

    Later I asked whether, if the first full backup fails, it would reuse the data...

    "Unfortunately no, since the shadow copies are different, and the data on the storage might have an unfinished upload, which means that the file is basically useless since part of it is missing. There is a way to use the data that is on the storage already, but it requires a few things, such as a successful first full backup and backup storage that supports synthetic backups. You can read about this option here: https://help.mspbackups.com/backup/about/backup-format/synthetic-full-backup"

    About the Synthetic Full: I'm aware of it, and that's wonderful, as it comes at no extra cost as well.
  • Backup Agent 7.8 for Windows / Management Console 6.3
    That's awesome.
    Another thing to add here: imagine a backup plan that uploads 5 TB of data. It may take 3-6 weeks depending on the internet connection, and during business hours we must throttle usage to 30%, so it takes a while to upload everything.
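    A rough sketch of the arithmetic behind those 3-6 weeks; the link speed, window lengths, and the 30% business-hours cap are assumptions:

    ```python
    # How long a 5 TB initial full takes on a throttled link. The uplink
    # speed and the business-hours window are illustrative assumptions.

    total_gb = 5 * 1000
    link_mbps = 25
    business_hours, off_hours = 10, 14
    business_cap = 0.30

    effective_hours = business_hours * business_cap + off_hours   # 17 h/day
    gb_per_day = effective_hours * 3600 * link_mbps / 8 / 1000
    days = total_gb / gb_per_day

    print(f"~{gb_per_day:.0f} GB/day -> ~{days:.0f} days (~{days / 7:.1f} weeks)")
    ```

    Any failure late in a window that long is what makes restarting from scratch so painful.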

    Sometimes the connection drops or the server gets rebooted, and the first full backup fails.

    We would also like your help in being able to continue a failed full backup (the data is already in the cloud), knowing that something happened in the middle of the operation.

    I'm not sure I made myself clear, but with the legacy backup format, in these cases we sometimes have to run the backup 5-6 times over a two-month span until all the data is uploaded. As you might know, s*** happens, and that's what we must take into consideration for the first full backup, so we don't need to create a brand-new generation just because the first one failed in the middle (keeping that failed one in the cloud is pointless).
  • Backup Agent 7.8 for Windows / Management Console 6.3
    In this scenario for long-term storage, we're talking about imaging exams that need to be stored forever.
    That being said, no files are ever deleted, only added.

    We run daily backups using legacy format.
    We haven't used the NBF because, IMO, it's pointless to keep multiple full backups of a dataset to which we're only adding data.

    In summary, we have a backup plan that backs up a folder, runs every day at 8 pm, and never deletes files.
  • Backup Agent 7.8 for Windows / Management Console 6.3
    Understood. Are there any plans to support this in the future? We use Flexible Retrieval heavily.
  • Backup Agent 7.8 for Windows / Management Console 6.3
    Hi

    Does FFI work with Glacier storage? This is a game changer for us. We thought we were going to use the NBF for long-term storage, but as it required many full uploads for data that is only ever added, it didn't work for us.

    Hopefully this new FFI works so we can migrate all customers to this new format and save money.
  • Issues with Cloud Backups
    Great feedback.
    I opened a couple of forum threads trying to figure out the proper way to use the new backup format, but I guess that's the way to go as well.
    Thanks Steve.
  • Bad Gateway and 3015 internal storage error
    Same here. I thought it was an intermittent issue, but it still occurs today; it's been happening since Saturday evening on many servers.
    The suggested workaround is to rerun the backup plans that fail, but that's not feasible, as some plans require a pre-batch to run.

    Hopefully they can put their efforts into solving this and later improve the communication channel, as that would also help them avoid getting clogged with support messages while putting all efforts into solving this massive problem.
  • Glacier and new Backup format in-cloud copy
    Thanks David for the clarification.
    We already use it for archival purposes, but with the legacy backup format, which works fine.
    I was thinking of using the NBF because of the reduced number of files, which would be cheaper.

    The problem is that we don't have a folder that is frozen and will never change again. If we did, we could run a full backup with the NBF and forget about it.

    However, we have this folder where all the files (images) never change and can be archived directly. Although the NBF could be a very good solution, I don't think it fits this scenario.

    It would be great if it did, as it would be much cheaper for us and faster to recover from in a disaster.
  • New Backup Format: Run Full Every 6 months, is it safe?
    I think you could probably decide on a few default options for customers and then have the conversation with them about anything they need over and above that. For example, you could use the new backup format for cloud backups with 6 months of deleted file restorability, and then open the conversation with the customers about whether or not they need longer retention or longer deleted file restorability, and move on from there. If many customers select the default, then you don't have to worry about a custom-designed backup plan for each; but you always have… — David Gugick

    Definitely, that's what we came up with.
    Instead of a full every 6 months, which might be dangerous, we set up a baseline: keep 3 months of generations and run a full every 3 months.

    In the end, we will end up with 3-6 months of deleted files, because only when the third full backup runs, after 6 months, will the first generation and its differentials be deleted.
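    A small sketch of that math, assuming a generation is purged only once the full that follows it is itself past the keep window (my reading of the behavior, not an official formula):

    ```python
    # "Keep 3 months of generations, full every 3 months": how long a
    # deleted file stays restorable depends on where in the cycle it was
    # deleted. The purge rule below is an interpretation for illustration.

    keep_months = 3
    full_every = 3

    def restorable_months(deleted_in_month):
        gen_start = (deleted_in_month // full_every) * full_every
        purge_at = gen_start + full_every + keep_months
        return purge_at - deleted_in_month

    print(restorable_months(0))  # deleted right after a full
    print(restorable_months(2))  # deleted late in the generation
    ```

    So restorability floats between just over 3 months (deleted right before the next full) and 6 months (deleted right after one).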

    Is my understanding correct?

    Thanks again for the help David.
  • New Backup Format: Run Full Every 6 months, is it safe?
    Thanks for the clarifications David.
    We were trying to find a "cookbook recipe", but we are now looking at each customer's specific needs.
    If they want to store forever and don't mind a slower restore, we go with the traditional backup format.
    If they want blazing fast restores but don't mind keeping only the last 3, 6, or 12 months of deleted data, we use the new format.

    Cheers
  • New Backup Format: Run Full Every 6 months, is it safe?

    Thanks for the clear explanation David.

    My concern was the restore speed, which with the new backup format is much better!
    The problem is that we need extra storage consumption to meet the requirements.

    The major concern is that the retention policy is now set per generation and not per file, so we need to worry about how long to keep each generation, at least until the new Consolidation and Reverse Full Backup feature is in place, right?

    I guess that will be a game changer for this concern.
  • New Backup Format: Run Full Every 6 months, is it safe?
    Thanks for the clarification David.
    We wanted to hear that experienced opinion before thinking this through.

    You can optionally limit deleted file exposure with the GFS Retention settings by keeping backups for a longer period of time. — David Gugick

    Regarding GFS: in the end, I will still end up with many full backups, increasing the total consumption even more, right?

    I couldn't find a comparison of the total storage used by GFS vs. a regular plan with monthly fulls.

    Please advise on an example GFS policy that would let me keep the last year of deleted files with the least storage usage.
  • New Backup Format Retention Policy - Keep for 1 week HELP
    Can you elaborate on what you would like to see, so I understand better? What would you like to happen after you run 7 days' of backups and then, on the 8th day, a new full is executed? — David Gugick

    Thinking about how the NBF is organized, I guess there won't be a way for me to accomplish this requirement.

    If I set a 1-week retention policy, after a while I will always have at least 2 fulls + incrementals.
    Whenever the third full runs, the oldest full + its incrementals are removed due to the policy.
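    A day-by-day sketch of that behavior, under my reading of the generation purge rule (not the product's documented algorithm):

    ```python
    # Weekly fulls with "keep 1 week": the oldest generation only goes
    # once the next full is itself a week old, so storage holds 2 fulls
    # most of the time and briefly 3 when a new full lands.

    keep_days = 7
    full_every = 7

    fulls = []                       # start days of generations on storage
    for day in range(0, 36):
        if day % full_every == 0:
            fulls.append(day)        # new full completes
            # purge generations whose successor is past the keep window
            while len(fulls) > 2 and day - fulls[0] >= full_every + keep_days:
                fulls.pop(0)

    print(len(fulls))
    ```

    So a space quote for customers should assume roughly two fulls plus a week of incrementals, not one.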

    Anyway, I guess that's what we need to cope with.

    Just trying to figure out, for future cases, how to quote the amount of space to customers.

    Thanks David
  • Tracking deleted objects
    Hi Team, a suggestion would be to add this to the Detailed Reporting,
    as there is already information about which files were backed up, purged, etc.

    Instead of having only
    "Information about 1432 deleted files", you could have one line per file.

    The end result would be something like:

    C:\File.txt Backup
    C:\fileold.txt Purged
    c:\filedeleted.txt Deleted
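    To illustrate, per-file lines like those above could be produced from the existing event data with something as simple as (the event tuples and field names are hypothetical):

    ```python
    # Mock-up of the suggested per-file detail lines. The event data and
    # field names are hypothetical; this only illustrates the output format.

    events = [
        (r"C:\File.txt", "Backup"),
        (r"C:\fileold.txt", "Purged"),
        (r"C:\filedeleted.txt", "Deleted"),
    ]

    report = "\n".join(f"{path} {action}" for path, action in events)
    print(report)
    ```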

    We had this requirement in the past, but we decided not to pursue it.

    Hopefully this suggestion helps decide where to put this information.
  • New Backup Format Retention Policy - Keep for 1 week HELP
    Hi David, it's the latest version, 7.5.3.

    I understood your point.

    The thing is that the retention policy now applies to the generation. So, bottom line, if I set the config to keep for 1 week, it's not that I'm keeping 1 week of file retention; it applies to the generation, so we need to do the math, and it's misleading to the customer.

    So, to always keep exactly the last 7 days, I guess it's not possible to accomplish such a requirement, right?

    With the legacy format I would just set to keep 7 versions and run a daily backup.

    Please correct me if I'm wrong.

    As for running more frequent fulls, I don't think it's possible in the config; it only allows weekly and monthly.

    Cheers,
  • Deleted files restore issue
    Hi Steve, I've seen that happen.
    We solved it with a 2-step process:
    1. Backup (file-level, legacy): enable the option to mark files as deleted in the destination.
    2. Restore: instead of restoring the latest version, we set Backup Period from 1900 to the date of the restore point we want.

    We had a problem before, raised a ticket and MSP solved that.

    We wrote a KBA about this process, but it's in Portuguese and carries our white label; hopefully it helps (run it through Google Translate).

    https://suporte-wspeed-com-br.translate.goog/929884-Como-Recuperar-seus-Dados-no-Momento-Exato-logo-antes-da-Pane?_x_tr_sl=pt&_x_tr_tl=en&_x_tr_hl=pt-BR&_x_tr_pto=wapp