• Aaron Swayze
    0
    Hi everyone,
    I have a client who is using CloudBerry Backup Ultimate (v6.0.1.66) to perform a daily backup of their file server (running Windows Server 2012 R2).
    I recently performed a test/review of their backup and restore procedures and noticed that a large number of the local files were not present in the S3 bucket. Long story short, I found there was a lifecycle policy on the bucket itself that was automatically deleting objects older than 30 days.
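    For reference, here is a minimal boto3 sketch for checking whether a lifecycle rule like this is still attached to a bucket. The bucket name is just a placeholder, and the delete call is commented out on purpose; this is only a sanity check, not part of the CloudBerry workflow itself.

    ```python
    import boto3
    from botocore.exceptions import ClientError

    BUCKET = "client-fileserver-backup"  # placeholder; use the client's real bucket name

    s3 = boto3.client("s3")

    try:
        config = s3.get_bucket_lifecycle_configuration(Bucket=BUCKET)
        for rule in config["Rules"]:
            # Show each rule's ID, status, and expiration settings
            print(rule.get("ID"), rule["Status"], rule.get("Expiration"))
        # Once confirmed, the rules can be removed so the backup software
        # alone controls retention:
        # s3.delete_bucket_lifecycle(Bucket=BUCKET)
    except ClientError as e:
        if e.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
            print("No lifecycle rules attached to this bucket.")
        else:
            raise
    ```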
    I am just wondering if my plan to resolve the issue is what I should be doing. Here is what I have done:

    1. I ran a consistency check on the storage account in question. Unsurprisingly, it found a large number of discrepancies. :)

    2. I then performed a repository sync on the storage account which completed successfully.

    3. Finally, I am manually running the backup plan.

    Is this the correct way to resolve the issue? I was about to create a brand-new bucket and start the backup from scratch, but the client has over 4 million files totaling more than 4 TB. I would assume the repository sync was the right approach, but I am unsure.
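    As a rough sanity check once the backup plan finishes (rather than eyeballing 4 million files), something like the boto3 sketch below can total up the objects and bytes in the bucket. The bucket name is a placeholder, and CloudBerry's object layout may not map one-to-one to local files, so treat the numbers as approximate.

    ```python
    import boto3

    BUCKET = "client-fileserver-backup"  # placeholder; use the client's real bucket name

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    total_objects = 0
    total_bytes = 0
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            total_objects += 1
            total_bytes += obj["Size"]

    print(f"{total_objects} objects, {total_bytes / 1024**4:.2f} TiB")
    ```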

    Thanks for your help!
  • Matt
    91
    Either points 1 & 3 or points 2 & 3 will be enough.

    Make sure that all of the versioning is handled through our software via the retention policy, and you should be good.
  • Aaron Swayze
    0

    I confirmed that CloudBerry is now handling the versioning of all files, so we should be good there. I also started a manual backup about 24 hours ago, which is still running, so it looks like the repository sync identified the missing files and they are now being backed up again.
    Thanks for your help!