Comments

  • Option to install Linux Backup in alternate path/location
    Apparently, the screenshot in the help file that shows the Move option is an old one; that option has been removed for Mac and Linux, although there is a task to reimplement it. I'm certain the removal was an engineering necessity at the time.

    The support team has provided the following directions if you want to move the database manually. I won't be able to provide much support here if you have questions about the steps, but feel free to reply here or open a support case directly. If you do open a support case, please reference this forum post. Here are the instructions:

    1) Stop backup service:

    ##### For RedHat 6, CentOS 6, Ubuntu 12-14, Amazon Linux standalone v 2.* #####

    sudo service cloudberry-backup stop

    ##### For RedHat 7, CentOS 7, Ubuntu 16-18 standalone v 2.* #####

    sudo systemctl stop cloudberry-backup.service

    2) Move cbbackup.db, cbbackup.db-wal, and cbbackup.db-shm from the /opt/local/Online\ Backup/00000000-0000-0000-0000-000000000000/config/ folder to another folder on a local disk.

    3) Create symlinks in the original location pointing to the 3 moved files (see the example after step 4).

    4) Start backup service:

    ##### For RedHat 6, CentOS 6, Ubuntu 12-14, Amazon Linux standalone v 2.* #####

    sudo service cloudberry-backup start

    ##### For RedHat 7, CentOS 7, Ubuntu 16-18 standalone v 2.* #####

    sudo systemctl start cloudberry-backup.service
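
    If it helps, here's a rough sketch of steps 2 and 3 as shell commands. The destination directory /data/cbb-config is just a placeholder I made up; adjust it (and the source path) to your environment. If the -wal/-shm files are absent after a clean service stop, that's fine and the loop simply skips them.

    SRC="/opt/local/Online Backup/00000000-0000-0000-0000-000000000000/config"
    DST="/data/cbb-config"
    sudo mkdir -p "$DST"
    for f in cbbackup.db cbbackup.db-wal cbbackup.db-shm; do
        if [ -e "$SRC/$f" ]; then
            sudo mv "$SRC/$f" "$DST/$f"     # step 2: move the database file
            sudo ln -s "$DST/$f" "$SRC/$f"  # step 3: symlink back to the original location
        fi
    done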
  • How to Change Backup Agent Icons Remotely
    Have the agents without the updated icons ever been upgraded from an old version of the agent to a newer version? Or were they already on 7.3 before the icon changes were made in Rebranding?

    My recommendation is to work with support on this issue. It seems odd that some agents would be showing the updated icons while others are not, especially if those agents were upgraded to a newer version after the rebranding was put in place.
  • Version 7.4.0.161
    Please work with the support team directly on this ticket. Thanks.
  • Version 7.4.0.161
    Unfortunately, I cannot find any open items that reference that error message / code. My recommendation is that you open a support case so the team can review the logs. You can do this very easily using the diagnostic option on the Tools toolbar.
  • Version 7.4.0.161
    A few questions to help narrow this down:
    • Is what you posted the exact error you are seeing? If not, please post the exact error.
    • Where are you backing up? Any changes to your storage (if this is a local backup)?
    • Are you using the new backup format or the older format?
    • Is this a new backup or were you running it successfully on an older version of the agent and then you upgraded and started seeing this error?
  • Long time user, completely stuck in restarting seeded remote share.
    Is the relative too far away for you to grab the drive and do all this work locally, like you did for the initial seed?
  • Long time user, completely stuck in restarting seeded remote share.
    So, you're connecting over the VPN to a remote external drive that is shared from a computer at a relative's house? If so, can you access the share normally from Windows Explorer? Are you using login credentials on the remote share for access? In other words, how does the remote share know to grant you access? It could be through Windows security, or you could be logging in with a user ID and password. Are you sure the software is not responding, or could it just be busy doing something? How long did you let it run before you shut down the program?

    As far as opening a support case is concerned, I would reach out to support and just explain that you don't have maintenance. They can probably provide some options to get you access to the full support technicians while remaining on your perpetual license.

    It's hard to diagnose this particular issue given the lack of technical details about what's going on. It could be a security issue, it could be a VPN issue, or it could be something else entirely, such as the agent doing some kind of work where very high latency is causing whatever it's doing to take a long time. That's something the support team can review in the logs. You can also review the logs yourself if you want: check the options dialog in the agent to see the default log location, then open the logs and examine them. They may point to the operation that's going on at the time you terminate the program.
  • Option to install Linux Backup in alternate path/location
    Okay, I mentioned to support that you might be running the web interface and asked what the method would be to move the repository database manually. I'll reply here once I hear back.
  • Option to install Linux Backup in alternate path/location
    Are you running the full GUI or the web interface? If you're using the web interface and are able to switch, please try the GUI.
  • Option to install Linux Backup in alternate path/location
    I'll have to ask Support what's going on. Possibly it's a limitation in the Linux build (compared to the Mac build) and there is an alternate way to move the database. I'll reply when I hear back from the team. Thanks.
  • Option to designate bucket subfolder for S3 backup
    I would recommend you use (as you described) a dedicated bucket for CloudBerry backups to avoid mixing our backups in the same bucket with object I/O operations from other products. I realize it's not exactly what you want, but I think it's safer in the long run.
  • Restoring data from deleted S3 object
    I'm not exactly clear why you're using versioning on the S3 side. All versioning should be handled by our software. We do not support S3 versioning directly, but having it on should not pose any issues. If you could elaborate a bit more here I'd be interested to understand more about why you're using it. Are you using object lock, perhaps?

    When you restore with our software, it uses the local repository database that is built when backups and purge operations occur so it knows what files are in cloud storage. It does not do a real-time scan as that would be hugely time-consuming. If the file is up there and properly logged in the repository database, when you restore that file you should see all the versions that you're keeping. Looks like you're keeping three versions at the most, so you would likely not see more than that.

    If you're not seeing versions or the file itself, then it's likely it got removed through some other means or because you don't have your retention settings set properly. There is an option in the retention tab to always keep the latest version of every file and also options about how you want to handle files that are deleted locally.

    If you restore the file from S3 versioning, and it's really in the correct format that we need, then you can try synchronizing the repository in the software as that will read what's in the bucket and synchronize it with the local database. Then you should see the file when you do the restore.
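
    If you do need to dig a deleted object out of S3 versioning yourself, here's a rough sketch using the AWS CLI; the bucket and key names are placeholders, and I can't confirm this is the supported path, so treat it as a starting point only:

    # List all versions and delete markers for the object
    aws s3api list-object-versions --bucket my-backup-bucket --prefix path/to/the/file

    # Either remove the delete marker so the last real version becomes current again...
    aws s3api delete-object --bucket my-backup-bucket --key path/to/the/file --version-id DELETE_MARKER_VERSION_ID

    # ...or copy a specific older version back on top as the new current version
    aws s3api copy-object --bucket my-backup-bucket --key path/to/the/file --copy-source "my-backup-bucket/path/to/the/file?versionId=OLDER_VERSION_ID"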

    I'm assuming you're trying the restore from the same computer that did the backup, or at the very least, on a new computer that uses the same backup prefix. If you don't have access to the original computer, and you're doing this from a new computer, then you need to make sure the backup prefix is the same. However, setting the backup prefix would usually cause the repository to synchronize automatically, so I'm guessing that's not the issue. Please refer to this article about how to do a manual synchronize and then report back the results.

    https://help.msp360.com/cloudberry-backup/restore/about-restore/repository-sync
  • HTTP headers change from HTML to octetstream
    See my comment above. Make sure you don't have duplicate rules.
  • Problem Connecting to Google Cloud
    There's a drop-down option when you register the Google account in Explorer to select the authentication type. If you need additional assistance, let me know. I may be able to point you to a Google article that describes the authentication types and information more clearly.
  • Problem Connecting to Google Cloud
    Are you able to try a different authentication method than the one selected? Access/Secret Key or OAuth Service Account?
  • Option to install Linux Backup in alternate path/location
    If you're seeing high disk usage, it's probably from the local repository database, which you can move. Please refer to this page for more information: https://help.msp360.com/cloudberry-backup-mac-linux/settings/application-settings
  • Option to designate bucket subfolder for S3 backup
    There is no such option as far as I'm aware. Backups will be placed in the target bucket in folders named according to the endpoint backup prefix and plan names. Can you describe your use case in more detail as to why you need this feature?
  • Scheduled restores for all clients?
    Go to Remote Management, click on the endpoint computer name in question or click the gear icon and then Show Plans. You can add backup and restore plans there using the "+" Create Plan icon.
  • Default bucket?
    I think your client should be setting the target bucket name in the role as described here: https://docs.aws.amazon.com/config/latest/developerguide/s3-bucket-policy.html#required-permissions-using-servicelinkedrole

    What happens if you type in the bucket name into the folder text box? If you can type it in, then you can save as a favorite to get there easily in the future. If you cannot type in the bucket name without an error, then it sounds like the permissions may not be set up correctly.
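
    As a quick sanity check outside of our software, you can try listing the bucket with the same credentials using the AWS CLI (the bucket name below is a placeholder). If this fails with an access denied error, the permissions on the role or policy are the likely culprit:

    # Verify the credentials can see the bucket and list its contents
    aws s3api head-bucket --bucket client-backup-bucket
    aws s3 ls s3://client-backup-bucket/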