Hi everyone,
I have a client using CloudBerry Backup Ultimate (v6.0.1.66) to perform a daily backup of their file server (running Windows Server 2012 R2).
I recently performed a test/review of their backup and restore procedures and noticed that a large number of the local files were not present in the S3 bucket. Long story short, there was a lifecycle policy on the bucket itself that was automatically deleting objects older than 30 days.
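For anyone who hits the same thing, this is roughly how I confirmed and removed the lifecycle rule. It's a minimal sketch using boto3; the bucket name client-backups is a placeholder, and it assumes credentials with permission to read and delete the bucket's lifecycle configuration:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "client-backups"  # placeholder -- use the actual backup bucket

# Show any lifecycle rules currently attached to the bucket.
try:
    config = s3.get_bucket_lifecycle_configuration(Bucket=bucket)
    for rule in config["Rules"]:
        print(rule.get("ID"), rule["Status"], rule.get("Expiration"))
except ClientError as e:
    if e.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
        print("No lifecycle rules on this bucket.")
    else:
        raise

# Remove the lifecycle configuration entirely so nothing expires
# backup objects behind CloudBerry's back.
s3.delete_bucket_lifecycle(Bucket=bucket)
```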
I am just wondering if my plan to resolve the issue is what I should be doing. Here is what I have done:
- Ran a consistency check on the storage account in question. Obviously, it found a large number of discrepancies.
- Performed a repository sync on the storage account, which completed successfully.
- Finally, I am manually running the backup plan.
Is this the correct way to resolve the issue? I was about to create a brand-new bucket and start the backup from scratch, but the client has over 4 million files totaling over 4 TB, so a full re-upload would take a very long time. I assume the repository sync was the right approach, but I am unsure.
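Once the backup plan finishes, my plan is to spot-check the counts rather than trust the console alone. A rough sketch of that check, assuming boto3 and local access to the server's data root (the bucket name and D:\Shares path are placeholders):

```python
import boto3
from pathlib import Path

bucket = "client-backups"        # placeholder bucket name
local_root = Path(r"D:\Shares")  # placeholder data root on the file server

# Count local files on the server.
local_count = sum(1 for p in local_root.rglob("*") if p.is_file())

# Count objects in the bucket via paginated listing
# (4M+ keys won't fit in a single ListObjectsV2 response).
s3 = boto3.client("s3")
s3_count = 0
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
    s3_count += page.get("KeyCount", 0)

print(f"local files: {local_count:,}  s3 objects: {s3_count:,}")
```

(CloudBerry keeps its own folder structure in the bucket and may store multiple versions per file, so I'd only expect the S3 count to be at least the local count, not an exact match.)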
Thanks for your help!