• yesopk
    0
    Hi All,

    I have set up CloudBerry Drive on our SQL Server so that all the database backups are stored in AWS S3 instead of on a local drive on the EC2 server.

    We have about 16 databases (some larger than others) that should be backed up nightly. Most of the backups are successful; however, 5 of them failed with the same message in the SQL job history.

    Executing the query "BACKUP DATABASE [Database1] TO  DISK = N'G:\Data..." failed with the following error: "Write on "G:\Database1\Database1_backup_2019_04_24_000002_6585953.bak" failed: 31(A device attached to the system is not functioning.)
    BACKUP DATABASE is terminating abnormally.
    10 percent processed.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
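
    For reference, the failing job step runs a statement along these lines (the database name and path are shortened here to illustrate; the real file name carries the timestamp shown in the error):

        BACKUP DATABASE [Database1]
        TO DISK = N'G:\Database1\Database1_backup.bak'
        WITH STATS = 10;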
    

    I have checked the CloudBerry Drive log, and nothing was logged for the night/time that the backups ran. Given that some backups work fine, I can only presume the connection timed out.

    I have not made any tweaks to the CloudBerry Drive configuration other than setting up the account and the mapped drive, and then testing the connection, which was fine.

    Has anyone experienced this before?
  • Sergey N
    26
    Hello,

    In this case it might be a simple network error; maybe the connection dropped or became unstable. Does it happen constantly with any operation, or only on the DB backup?

    We also know that SQL Server does not write backup files in an ordinary way: sometimes it writes to the end of the file, sometimes to the beginning, so the failures might be related to that. One possible workaround is disabling the caching function in the drive, which you can do via the Advanced Options of the Mapped Drives.

    Thank you, and please keep us posted on the results. We can also continue the investigation in a ticket to avoid public exposure of your data.
  • yesopk
    0
    Hi,
    Thanks for the info. I did switch off the cache in the advanced tab.

    I have un-ticked the "Limit cache size" and "Folder cache retention period" settings.

    However, there were still some failures last night with the same error message.

    Are those the only two settings that relate to the cache?

    As you would expect, if I run the backups manually, they run fine.
  • David Gugick
    118
    Have you considered using CloudBerry Backup for SQL Server to perform the SQL Server backups to S3 storage, rather than using Drive? It would probably eliminate these issues, since the backups would be performed locally using native SQL Server backups and then seamlessly moved to S3 (with the local backups removed afterward). Plus, you can use the product to back up files / images of the EC2 instance itself, if needed.
  • yesopk
    0
    Hi David,

    I didn't know there was a product specifically for backing up SQL Server. I would like to be able to write the backups directly to the drive without storing them locally first, to reduce the cost of additional drive space on the EC2 server.

    I did find that the issue was due to the size of the backups being produced by SQL Server. I thought CloudBerry Drive would be able to handle any file size that AWS S3 can handle. Is this not the case?
  • David Gugick
    118
    Even with Drive, the files need to be written locally first. While there are no file size limitations in Drive, it's best to work with Support to understand exactly what's going on in your environment. With CloudBerry Backup for SQL Server, the files are only stored locally temporarily and then removed once they are moved to cloud storage. This is done for best performance with SQL Server databases as it's best to reduce backup time as much as possible to ensure they complete in a timely fashion. I assume you are enabling SQL Server backup compression in your backup scripts / maintenance plans to reduce backup sizes, correct?
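
    For reference, enabling compression is a one-keyword change in the backup statement. A minimal sketch, with the database name and path as placeholders:

        -- COMPRESSION shrinks the backup as it is written; CHECKSUM adds page verification.
        BACKUP DATABASE [Database1]
        TO DISK = N'G:\Database1\Database1_backup.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;

    You can also make compression the server-wide default with sp_configure 'backup compression default', so existing jobs pick it up without script changes.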
  • yesopk
    0
    Hi David,
    Sorry for the late reply. Yes, the backups are being compressed, but they were still quite big. We have done a bit of data cleanup in the large DBs, and all is working fine now.

    Thanks
  • David Gugick
    118
    Great. We appreciate the update and please let us know if anything else comes up.
  • Bruce Yen
    0
    Hi, we are using CB as a drive-mount tool to serve S3 buckets read-only for access to a lot of training materials. Access is pretty intensive and the video files are potentially large, and we have run into the IIS service dropping in the middle of serving those training materials. So, my questions are:
    1. Is there any benchmark or anything we can test for performance/bottlenecks, as far as bandwidth, requests/sec, or bytes/sec limits?
    2. Since it's read-only, would a longer Folder Cache retention period or Disk Driver callback timeout help?
    3. What are the suggested config settings for our application, such as "Chunk size", "Retry Attempts Number and Time", and "Queue thread count"?
    4. Also, just an FYI: your software has a memory leak issue. We have to run a nightly scheduled job to restart CB every night. But this is our least concern.

    Thanks for your time.
  • David Gugick
    118
    What's your execution timeout in IIS? It seems like it could be increased, as that appears to be where the timeout occurs: https://stackoverflow.com/questions/2414441/how-to-increase-request-timeout-in-iis
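
    For an ASP.NET site, the usual knob is executionTimeout in web.config. A minimal sketch (the 600-second value is illustrative, and the setting only takes effect when compilation debug is off):

        <configuration>
          <system.web>
            <!-- Maximum request time in seconds; raise it to cover large downloads. -->
            <httpRuntime executionTimeout="600" />
          </system.web>
        </configuration>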

    Regarding testing speed, you can use the AWS Console to download a large file and see what speed you are getting. Final numbers depend on available downstream bandwidth and S3 maximum read speeds. Once you know the speed at which you can pull data from S3 and the maximum file size, you can figure out how long the downloads might take and set the execution timeout as needed on the IIS side.

    The Retention Period determines how frequently Drive refreshes the files for display in Windows. If the files do not change frequently, you can increase the time based on your needs.

    You can try disabling Compression in Advanced Options for the mapped drive. The video files likely do not benefit from compression.

    If you're using encryption, that might slow downloads too. But for security reasons you may not want to leave your video files in S3 unencrypted; that's up to you, depending on how secure the files need to be.

    As far as the Cache Size, if you do not set a limit, then Drive will use available disk space, so make sure you have sufficient space to store the large video files.

    As far as any memory leaks, I do not see any open issues in the current release. So, if you're using the latest version and are having an issue, then please open a support case using the Diagnostic option in the context menu of the Drive tray icon for the support team to review.

    How large are these files? What downstream bandwidth do you have (please note whether the figure is in megabits/sec or megabytes/sec)?
  • Bruce Yen
    0
    Thanks for your prompt reply.
    We are using the latest version, 3.0.1.5, and the memory leak is definitely there. You can see it clearly in the AWS dashboard:
    it starts with high memory, and by the end of the day it drops very low and never recovers. It's high again after the CB service is restarted.
    I'll look into your suggestions and will reply to your questions when I get more detailed info.
    Thanks again.