Should you make separate buckets per company or just one big bucket?
Wondering if anyone uses separate buckets per company or just one big bucket. Right now we use one big bucket, but it seems like separate buckets per company would make organization easier. What do you think?
Most customers I speak with use a common bucket. Data is separated by the company user account, so customer A cannot see customer B's backup data. If you have a specific need you can certainly use different buckets, but I'd advise against it unless it's a one-off customer use case. Remember that many cloud services limit the number of buckets per account, some to as few as 100, which may not scale for you, so check with your vendor.
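To illustrate how a shared bucket keeps companies separated, here is a minimal sketch: each company's data lives under its own key prefix, and a per-company listing is just a prefix filter. The company IDs, filenames, and helper functions below are hypothetical, not anything MSP360 actually exposes; real backup software builds keys in a similar spirit.

```python
# Sketch: per-company separation inside one shared bucket via key prefixes.
# All names below (company IDs, file names, helpers) are illustrative.

def object_key(company_id: str, path: str) -> str:
    """Namespace every object under its company's prefix."""
    return f"{company_id}/{path}"

def keys_for_company(all_keys: list[str], company_id: str) -> list[str]:
    """A listing scoped to one company is just a prefix filter."""
    prefix = f"{company_id}/"
    return [k for k in all_keys if k.startswith(prefix)]

bucket = [
    object_key("customer-a", "2024/full.vbk"),
    object_key("customer-b", "2024/full.vbk"),
]
print(keys_for_company(bucket, "customer-a"))  # ['customer-a/2024/full.vbk']
```

As long as each company's credentials are only ever granted access to its own prefix, customer A has no path to customer B's objects even though both share one bucket.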
What is the best way to rename a bucket without losing any backups? Is it best to just make a new destination bucket on mspbackups?
Or can I rename the bucket through Wasabi and just select "sync local repository" in the backup settings for the backup plans?
I do not think you can rename a bucket. Buckets are not folders; in fact, even "folders" in the cloud are not real folders the way they are on a local disk. Everything in object storage is an object, and the folder path is just part of each object's key. Renaming a bucket would require rewriting the address of every object it contains, which is probably why it's not supported.
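A toy model makes this concrete: object storage is essentially a flat map from keys to bytes, with the bucket name as part of every object's address. The in-memory dicts below stand in for real buckets (an assumption for illustration only), and the "rename" ends up being a copy of every object followed by deleting the old bucket.

```python
# Sketch: why a bucket "rename" is really a full copy.
# The nested dicts stand in for real buckets (illustrative assumption).

def rename_bucket(store: dict[str, dict[str, bytes]], old: str, new: str) -> None:
    """Move every object under the new bucket name, then drop the old one."""
    store[new] = {}
    for key, data in store[old].items():
        store[new][key] = data   # one copy operation per object
    del store[old]               # only now can the old name disappear

store = {"old-backups": {"clientA/full.vbk": b"...", "clientA/inc1.vbi": b"..."}}
rename_bucket(store, "old-backups", "new-backups")
```

With millions of backup objects, that per-object copy is exactly the cost a true rename would hide, which is why providers simply don't offer one.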
Why do you need to rename the bucket? Do you just dislike the name? The usual approach is to create a new bucket and assign it, but backups will start fresh. While you could seed the new bucket by copying data from the old one and then synchronizing the repository, you may find that process slower and more complicated than simply starting backups fresh. You can remove the old bucket once you're certain you no longer need the data it contains.
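If you do go the seeding route, the copy step can be done with the AWS CLI's sync command, which works against S3-compatible services like Wasabi when you pass the service endpoint. This is a sketch: the bucket names are placeholders, and the Wasabi endpoint shown is region-specific, so check yours.

```shell
# Copy every object from the old bucket to the new one (names are placeholders).
aws s3 sync s3://old-bucket-name s3://new-bucket-name

# For Wasabi, point the CLI at the Wasabi endpoint (region-specific; verify yours).
aws s3 sync s3://old-bucket-name s3://new-bucket-name \
    --endpoint-url https://s3.wasabisys.com
```

After the copy finishes, assign the new bucket as the destination and run a repository synchronization so the backup plans pick up the existing data, and only then delete the old bucket.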
© 2024 MSP360 Forum