• Old Exchange Backups are not deleted
    Hi,
    after removing an old image I was able to perform an Exchange backup, which resulted in the old Exchange backups being deleted. The problem is therefore solved; I did not know that old backups are only deleted when performing a backup.

    Florian
  • Support for FreeNAS/FreeBSD?
    Hi,
    I know this, but we are using FreeNAS in a more comprehensive way.
    For example, as NFS/iSCSI storage for VMware, for certain customers as object storage, or with extensive jail use.
    Backing up these systems can't be done with a Samba share. Currently we are using rsync tasks and the built-in Cloud Sync Tasks feature, which is working quite well, but we cannot centrally manage the backup plans and storage. That's the problem.

    Florian
  • Support for FreeNAS/FreeBSD?
    Yes, I mean a native agent. We have customers with FreeNAS units, and there we can't offer MSP360 backup.
    I would generally suggest improving the Linux/Unix support of the backup software. Especially in times of ransomware, heterogeneous infrastructure is becoming more and more popular. Lots of customers use AD and MS Exchange but want a different system, e.g. as a file server.

    Florian
  • Best practise to move existing standalone instance to MSP?
    Thanks for the link, that's what I was searching for!

    Florian
  • How can I create a regular plan for consistency check of the backup in the webinterface?
    Hi,
    thank you for your response. Yes, I am using MBS. Is it possible that the consistency check is only available in the Ultimate Edition? I have an "old" desktop license which is currently associated with the Ubuntu server, and there I cannot find any possibility to create a consistency check plan.

    Thanks,
    Florian
  • "The creation of a shadow copy is already in progress."
    An old topic, but today I also had this problem at one of my customers. The reason was that my customer had enabled versioning, and at the time the backup was supposed to run, the volume shadow copy was busy removing old versions of files. That was the cause of the error.
  • Reporting improvement -- collecting your feedback
    Hello,
    please extend the API with additional features, e.g.:
    - Get all backup plans from a certain computer (already mentioned in another post)
    - Get the backup history of a plan with detailed information about which files have been backed up, etc. (already mentioned in another post)
    - Implement a possibility to get the current remote state of the servers.

    We have our own monitoring and customer information system where we integrate all services we provide.

    Florian
  • Problems with Consistency Check
    Hi Matt,
    thank you for the response. A second server also uses this NAS and CloudBerry as backup destination; there the backups work fine, the consistency check is quick, and the repository has a normal size. Therefore I don't believe that the NAS is the problem. Versioning is disabled on the NAS.

    Currently I am trying to shrink the database, and afterwards I will start a new synchronization of the repository. I hope the consistency check will then run quickly and correctly.

    Florian
  • Cloudberry API getting Backup History
    Thank you for your quick response, we are using MBS Web API 2.0.

    Florian
  • Cannot use Amazon Glacier with ManagedBackup
    Thank you for your explanation. We need to evaluate it, but we are not sure it makes sense for us to first back up to S3 and then move the backup to Glacier, because it complicates our workflow.
    First we also need to check whether the costs are practicable with our flat rates.
    Personally, I do not understand why backup to Glacier is possible in the standalone mode of CloudBerry but not in Managed Backup.

    Thanks,
    Florian
  • Cannot use Amazon Glacier with ManagedBackup
    Thank you for your information,
    that's frustrating, because it means that we need to rewrite our backend management application, which is based on Glacier. :sad:
    We decided to use Glacier because we only provide hybrid backup solutions to our customers. The cloud copy is only used in case of fire, water, or property damage. Using S3 also means that we would need to adapt our fees.

    Thanks,
    Florian
  • Best practise for first backup to harddisk and then switch to cloud

    I just want to add that the "file folders" (for each file, a folder is created in which the different versions of the file are stored) are displayed when exploring the backup storage after deleting a connection, creating a new one, and then performing a repository synchronization. They are displayed alongside the files, so you can recover or update each file. This probably depends on the encoding the object storage server uses.

    To clarify: In this scenario I made a cloud backup, deleted the connection, created a new one with the same settings and performed a repository synchronization.

    You can see it on the image:
    2CDE91161C05772CD397D2834CC8B140

    Greetings,
    Florian
  • Best practise for first backup to harddisk and then switch to cloud

    Thank you for the clarification, even if this is not good news for our use case. The method from the blog entry is nearly impossible to realize in our case, because we will have dozens of users, physicians, lawyers, and small companies with internet contracts with low upload rates, where the first backup to the cloud would take several weeks.
    You have a really good backup solution, especially with the possibility of managed backup, but I would really suggest integrating a "local to cloud backup" mode into your CloudBerry Explorer, because I think we are not the only ones who have this problem.

    Thanks,
    Florian
  • Best practise for first backup to harddisk and then switch to cloud

    Hi, thank you for your help, but I am still suffering from this problem.
    I uploaded the backup from the external hard disk with the CloudBerry Explorer. Then I manually renamed all $ symbols to %3A, see this image:
    5A1C2EE0879865C12744B658AA4991FB
    Then I updated the repository:
    0A2F73B1FFA01D089725434BC844B657
    As you can see, the same problem as above, but the images are not even recognized.

    Thanks,
    Florian
  • Best practise for first backup to harddisk and then switch to cloud

    I think there is an encoding confusion. The NAS uses en_US.UTF-8 as its encoding. If I replace : with %3A, the images are not recognized anymore, see the following image:
    3D228309881C78775CC51FDF5F3E4CB3

    I performed the renaming directly on our backup NAS, using the standard Linux command mv.
    Therefore I need to replace %3A with :, then the images are recognized, and I can also perform a recovery, despite the fact that the image folders are displayed as in my previous post.
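    The per-file mv calls can be scripted. Below is a minimal sketch of the same bulk rename (here reversing %3A back to :), assuming it is run on the backup root of the NAS; the function name and defaults are illustrative, not part of CloudBerry.

```python
# Minimal sketch: bulk-rename files/directories under the backup root,
# replacing "%3A" with ":" (the equivalent of running mv per file).
from pathlib import Path

def rename_encoded(root, old="%3A", new=":"):
    renamed = 0
    # Deepest paths first, so children are renamed before their parents
    # and directory renames never invalidate pending child paths.
    for path in sorted(Path(root).rglob("*"),
                      key=lambda p: len(p.parts), reverse=True):
        if old in path.name:
            path.rename(path.with_name(path.name.replace(old, new)))
            renamed += 1
    return renamed
```

    Note that ":" is a valid filename character on Linux but not on Windows, which is presumably why the client percent-encodes it in the first place.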

    Thanks,
    Florian
  • Best practise for first backup to harddisk and then switch to cloud
    Thank you for your help, it has worked as expected. There is still one little problem; perhaps you can give me a hint. When browsing the cloud storage, the "versioning" folders are displayed in the CloudBerry client. This does not happen on the local storage. What can be the reason for this? See:

    8F196392CD7E7AC4B76B014C2A8019BA

    Thanks,
    Florian
  • Best practise for first backup to harddisk and then switch to cloud

    Thank you for this link.

    I hoped there might be a more straightforward way to achieve this:
    1. Creating a backup plan which stores the backup in a folder on an external hard disk.
    2. Uploading the content of the folder to the cloud storage.
    3. Creating the cloud backup account.
    4. Performing a repository synchronization.
    5. Changing the backup storage in the backup plan from hard disk to cloud storage.

    But it seems that this is not possible; CloudBerry seems to use absolute paths, including the path to the backup storage, instead of relative paths.
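    For reference, step 2 (uploading the seeded folder) would not have to go through a GUI tool; a small script can mirror the folder to S3 while preserving the relative layout. This is only a sketch under the assumption that preserving relative paths were sufficient; the bucket and prefix names are made up, and boto3 must be installed.

```python
# Sketch of step 2: mirror a local seed-backup folder into an S3 bucket,
# keeping the relative directory layout. Bucket/prefix names are examples.
import os

def build_upload_keys(local_root, prefix):
    """Map each local file path to the S3 object key it should receive."""
    keys = {}
    for dirpath, _dirs, files in os.walk(local_root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, local_root)
            keys[path] = prefix + "/" + rel.replace(os.sep, "/")
    return keys

def upload_seed(local_root, bucket, prefix):
    import boto3  # third-party: pip install boto3
    s3 = boto3.client("s3")
    for path, key in build_upload_keys(local_root, prefix).items():
        s3.upload_file(path, bucket, key)

# Example (hypothetical paths/names):
# upload_seed("/mnt/external-hdd/backup", "my-backup-bucket", "seed")
```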

    Thanks,
    Florian
  • Could not parse the specified URII
    The solution I found is quite simple. :wink:
    I switched the cloud account from S3 to OpenStack, and since then I have been able to upload even bigger files than the 8 GB public folder database.

    I contacted the support team, and they found out that a timeout problem occurs while uploading the file. Therefore I think the cause is the default configuration of the object storage server, which is based on OpenStack Swift.
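    If the server really is Swift-based, the relevant knobs are probably the proxy server's timeout settings. As an assumption (not verified against this particular appliance), raising them in proxy-server.conf might look like:

```ini
# /etc/swift/proxy-server.conf (sketch; Swift's defaults are 60s / 10s)
[app:proxy-server]
client_timeout = 300
node_timeout = 60
```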

    Thank you for your help,
    Florian
  • Could not parse the specified URII
    Thank you for the reply. I have just created a ticket; if we find a solution, I will post it in this forum, because it might help others.

    Florian