• Martin W
    I've configured a SQL Server backup plan to back up to Amazon S3. While some of my small SQL databases are successfully backed up, the larger databases fail with the message "Error occurred. Possible reason: low local disk space." Here are my questions:

    (1) Does Cloudberry Backup need local disk space to make SQL backups before uploading to S3? If so:

    (2) What are the local disk space requirements?

    (3) Where are the local backups created? Is it possible to change the configuration for the local backup location?

    On my server, I have three drives: C:, E:, and F:. The database data and log files are stored on E: and F:, respectively. There's plenty of free space on E: and F:, but C: has limited free space, less than the size of the database.

    If local disk space is required for a successful backup, this seems like a pretty important matter. It would be good to include info about this in the documentation. In addition, it seems like Cloudberry Backup should be able to check the free disk space and provide more informative messages if there isn't enough space.
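    A pre-flight check like the one suggested above could look something like the following. This is only a minimal sketch with a hypothetical `has_room_for_backup` helper, assuming the worst case (no compression) must fit; it is not how Cloudberry Backup actually behaves.

```python
# Hypothetical pre-flight check: before starting a SQL Server backup,
# verify that the temporary folder's volume has enough free space to
# hold an uncompressed copy of the database.
import shutil


def has_room_for_backup(temp_dir: str, database_bytes: int) -> bool:
    """Return True if temp_dir's volume can hold an uncompressed backup.

    Conservative by design: assumes no compression, because the
    achievable compression ratio is unknown until the backup runs.
    """
    free = shutil.disk_usage(temp_dir).free
    return free >= database_bytes


# Example: check whether a 50 GB database would fit on the volume
# holding the current directory.
print(has_room_for_backup(".", 50 * 1024**3))
```

    A tool using a check like this could then fail fast with a message naming the temp folder and the shortfall, instead of failing mid-backup.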


  • David Gugick (Accepted Answer)
    You are correct that SQL Server databases are backed up locally first. With SQL Server, we use the native backup features to perform the backup and then send the finished backup file to the desired backup storage. You can change the temporary/default location for these SQL Server backups in the Options dialog (available from the system menu), on the Advanced tab, under the Temporary Folder option.

    If you enable Compression, SQL Server compresses the backup as it is written, but the actual disk space used depends entirely on the database size, the log file size (if any), and how compressible the data is. Many databases compress 80% or more, but some, if they contain a lot of binary data or already use table/row compression or TDE, will compress far less. Differential and transaction log backups vary in size based on the actual data changes, so size estimates for those backup types are not possible. Since we do not know how well a database will compress before the backup runs, we cannot predict whether the backup will fit.

    As you noted, you have other drives, so for best performance choose one that does not hold the database files. For example, if your database files are on E:, consider using F: for the backups.

  • Martin W
    Thanks! I've reconfigured the temp folder location to use a folder on F:, and will see how well that works when the scheduled job runs next.