malicious and accidental deletions

So basically what we know right now is that unless you pay for advanced rebranding, there is no way to protect against malicious deletions.
The CLI “currently” bypasses the console password, and the console password itself is trivial to bypass by editing a file.

It sounds like the only real built-in option is to use filename encryption, which requires the encryption password to delete?

[reply=“Brandon K;3442”] Yes, image-based backups can’t be deleted via CLI. Option to disable CLI via rebranding will be available soon, as well as master password security improvements.

[reply=“Mark Hodges;3443”] File name encryption should help, and master password issue will be fixed soon, as I already mentioned.

Can you enable filename encryption after the fact so all new files are encrypted (and protected)? I’ve tried it on a test server and the filename encryption doesn’t appear to take effect: the newly backed-up file is visible, and I could just right-click and delete it.

[reply=“Mark Hodges;3448”] You need to start from scratch in this case.

Here is another question. Assuming we removed the delete option with advanced rebranding, won’t criminals just bring their own copies of the exe files and replace them, thereby regaining delete access?

Hopefully someone at CloudBerry is actively thinking about security like a malicious actor would, as we have 24 TB of client data that’s all but useless if our clients are maliciously attacked.

[reply=“Mark Hodges;3462”] Not sure what “exe files” you’re referring to. Installation packages are unique for each provider, if that’s what you mean.

They are unique, but the code is all the same at its core. As a test, has anyone tried using advanced rebranding to remove the delete option (which probably just removes that function from a DLL), then overwriting all the exes and DLLs used by the customized version with the same files from a non-customized version, to see if the functionality comes back?

It’s not like installs are completely unique, and with a little work it wouldn’t be hard to find out which DLLs cbb.exe calls and replace those files with the same ones from another generic install to see if the command line and/or delete functionality came back.
At least that would be my first attempt at getting delete functionality back once it’s removed.

And Matt, thank you for responding so quickly to this. Obviously this is a major concern for those of us who have not yet been hit by a malicious attack where the threat actors have gained complete access to our environments.

[reply=“Mark Hodges;3464”] Well, the only scenario I can imagine is that one of our developers decides to do that, and I doubt any of them will be interested :wink: .

As for security concerns, we’re always listening to the feedback and try to improve the software with each release.

“Well, the only scenario I can imagine is that one of our developers decides to do that, but I doubt any of them will be interested” - that concerns me, because you can be sure that a malicious actor looking for a 5+ figure payout by encrypting the environment and deleting the backups sure as hell will be attempting it.

Mark,
I totally understand what you mean.
If some of the local program files determine whether these features are enabled or disabled (I assume this is true because a settings change such as “disable delete from GUI” made in the console requires an updated installer on the endpoint), a malicious actor could theoretically get his hands on a generic CloudBerry installer or .dll files with all of those options disabled, run it on the endpoint and have access to everything.

I’m also wondering: once an updated installer is run, is there no need for account reauthentication before opening the CloudBerry GUI on the endpoint?

Let’s say a malicious actor does the above. He’d then, in theory, have unrestricted access to the GUI on the endpoint with all security options disabled, including the ability to delete backups from within.

I’ve talked to Backblaze as well. There is no way to restrict deletions. They have the ability to do the rotations, which I assume is how CloudBerry is handling the API calls to do the rotations, but there is no way to protect the data from deletion for X days, or to replicate it. You can snapshot a bucket manually, but only up to 10 TB.

quote:
“a malicious actor could theoretically get his hands on a generic cloudberry installer or .dll files with all of those options disabled, run it on the endpoint and have access to everything.”

That’s not even a theoretical issue; it’s going to be very simple to bypass for all but the dumbest criminals, and the new breed making money off encrypting data are anything but stupid. You can bet a simple script that just replaces all the DLLs and exes present in the CBB directory will be utilized PDQ.

At this point I can’t imagine a situation where CloudBerry is going to be able to solve this, to be honest, unless we start writing to tape.

to add to my last point…

The only true defense I see going forward is either to go with the higher-priced S3 with replication (paying for twice the data) or to redo my 24 TB of backups across all my clients with filename encryption. Even when console security is fixed, I think replacing the application files is still going to let them bypass the console lock.

In theory, even if they replace all the application files, the encryption password should still be required to delete, since that requirement has probably been around a long time.

[reply=“Mark Hodges;3474”] [reply=“Brandon K;3473”] That’s… not really how it works in general (even theoretically), and that’s not how install packages are generated on our side.
If it were that simple, every backup software manufacturer would be out of business in a matter of weeks.

Besides, as mentioned previously, if your system is breached then nothing is safe. Backup software is a part of a security system, not THE security system.

Actually, it is that simple, as I’ve done it. Sure, you have dependencies between files and DLL files, but let’s be honest: when I remove one version and install another, it keeps all the config, including the encryption and config passwords, which are backwards compatible. So if, as a hacker, I removed the version that prevented deletion and installed a version that allowed it, guess what: they can then delete the data. Hell, I can remove a branded version, install the copy downloaded from your website, and all the account info is still stored and accessible. Regmon, Procmon and other such tools make it trivial to find which registry keys and files a program reads and to manipulate things to gain or remove functionality. Determined hackers would just reverse engineer the encryption process (but honestly that’s probably a state-level attack, and none of us using CloudBerry are going to have any defense against that).

The whole POINT of the backup software is to get the data offsite and protected from deletion WHEN you are breached, which, if you read back to the start of this forum post, is exactly what the backup software DIDN’T do: protect the data.

In reality, the backup software should be WORM (write once, read many), which means the client has no ability to delete that data at all.

The data truly should only ever be deletable from the CloudBerry, Backblaze, or Amazon portals, which includes retention processing.
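On S3 at least, that kind of WORM behavior can apparently be turned on at the bucket level with Object Lock. A rough sketch of the configuration (the bucket name is hypothetical, and the boto3 calls are commented since I haven’t run this against a live account):

```python
# Default Object Lock retention for a new bucket (names are hypothetical).
# In COMPLIANCE mode, no principal -- not even the account root -- can
# shorten the retention or delete a locked object version before it expires.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 90}},
}

# Applied with boto3 (untested sketch):
# import boto3
# s3 = boto3.client("s3")
# s3.create_bucket(Bucket="worm-backups", ObjectLockEnabledForBucket=True)
# s3.put_object_lock_configuration(
#     Bucket="worm-backups",
#     ObjectLockConfiguration=object_lock_config,
# )
```

Object Lock has to be enabled when the bucket is created; it can’t be bolted onto an existing bucket.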

We are going to try to redo all 50 of our clients to add filename encryption, to hopefully make it more difficult for their data to be deleted.

[reply=“Mark Hodges;3478”] Sorry, but I highly doubt you’ll be able to do that with generic standalone build or a build that was generated for another provider due to the following reasons:

That’s part of standard functionality. You can only do that if you have an administrator account in your web panel, which can be protected by 2FA, which requires your phone running Google Authenticator. You can also go to Settings > Security and allow only a certain range of IPs to log in to your portal, which further brings the possibility of that to zero.

Standalone builds are incompatible with managed ones; we even use different folder structures and authentication methods for them. Even in this highly hyperbolic (I would say even spy-thriller-like) scenario, you’d need too much data from your storage provider, from your MBS panel, and from our servers. And I can assure you we protect our data well enough.

This is not really something I can comment on since I don’t know if the data has indeed been deleted via our software and not directly from storage side.

Fair enough. I am going to reproduce those scenarios anyhow, and I think you are right that it’s unlikely to work. But people don’t always protect old versions of the builds (i.e. they sit in the account downloads folder), so attackers wouldn’t need the folder or the portal access.

I think the best defense at the moment is filename encryption, since it should prompt for the encryption password regardless of what version of the software they use.

Which of the issues discussed above will be fixed in the next release? BTW, the capability to prevent storage-tab deletions from the server console does not work: I created a build that did not allow deletions but was still able to delete files (see ticket #231523).

That was no more sophisticated than things I’ve personally done to regain access to a Windows domain controller whose administrator password had been forgotten, or for other system recovery tasks.

The news is full of recent 5 and even 6 figure payouts; there is plenty of money in play here. And the clever geniuses only have to figure out the defeat once; then they automate it into a script a child could run, and sell it.

Does that prevent the backup plan from being modified too? Retention settings maliciously altered, etc.? Change it all to keep just the last version, run the ransomware on the local system, then run the backup, and it obediently wipes out all the previous versions and replaces the current version with gibberish?

Also, how does the encryption password work? Is it stored locally with reversible encryption? Some meaty detail on how the keys are secured would be good.

At this point I’m stuck. Even with AWS cross-region replication to a second account (I haven’t had a chance to play with that extensively yet), the reading I’ve done so far suggests it replicates deletions, so within hours of the first bucket being cleaned out, the second bucket might empty itself out too.
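One thing worth checking when testing this: if I’m reading the S3 docs right, V2 replication configurations only propagate delete markers when the rule opts in, and permanent deletions of specific object versions are never replicated. A sketch of the rule I’d try (the role and bucket ARNs are hypothetical):

```python
# V2 replication rule (role/bucket ARNs are hypothetical). Delete markers
# only propagate to the replica if DeleteMarkerReplication is "Enabled";
# deletes of specific object versions are never replicated either way.
replication_config = {
    "Role": "arn:aws:iam::111111111111:role/replication-role",
    "Rules": [{
        "ID": "backup-replication",
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {"Prefix": ""},
        "DeleteMarkerReplication": {"Status": "Disabled"},  # don't copy deletes
        "Destination": {"Bucket": "arn:aws:s3:::backup-replica-bucket"},
    }],
}

# Applied with boto3 (untested sketch):
# import boto3
# boto3.client("s3").put_bucket_replication(
#     Bucket="backup-primary-bucket",
#     ReplicationConfiguration=replication_config,
# )
```

With delete-marker replication disabled, emptying the primary bucket shouldn’t empty the replica, though I’d want to verify that in practice.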

In my case, it was deleted via your software. Backblaze confirmed there were no logins to its web portal (2FA was enabled there) and that the delete commands came through its API. Your software was the ONLY thing that had been paired to the Backblaze API and the only thing I used that Backblaze account for. I was using Backblaze in part to totally isolate the backups from anything else we use.

Anyway, the reason I came back to the forums was that I was trying AWS S3 “compliance retention” aka “Object Lock” policies, to see if a compliance retention policy could protect buckets from object deletions.

My thinking was that if I do block-level backups and full backups once a month, then a retention period of 90 days would make the last couple of full backups immutable, along with all the daily diffs. That seemed like a good model, and AWS says nothing will delete the objects or remove the retention policy from them, and even the bucket can’t be deleted short of cancelling your AWS account entirely.

But it turns out Cloudberry backups to that bucket fail with:

Content-MD5 HTTP header is required for Put Object requests with Object Lock parameters

Has anyone else tried this and got it working?
Any word from cloudberrylab on supporting this?
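For anyone else hitting this, the error just means the upload has to carry a Content-MD5 header when Object Lock parameters are present. A minimal sketch of computing it (the bucket/key names are hypothetical, and the boto3 call itself is untested here):

```python
import base64
import hashlib


def content_md5(body: bytes) -> str:
    """Base64-encoded MD5 digest of the payload, as the S3 API expects
    in the Content-MD5 header."""
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")


# Upload with Object Lock parameters (hypothetical names; untested sketch):
# import boto3
# from datetime import datetime, timedelta, timezone
# body = open("backup.cbb", "rb").read()
# boto3.client("s3").put_object(
#     Bucket="my-backup-bucket",
#     Key="backups/backup.cbb",
#     Body=body,
#     ContentMD5=content_md5(body),
#     ObjectLockMode="COMPLIANCE",
#     ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
# )
```

Of course that only helps if you control the uploader; it doesn’t fix CloudBerry’s own requests, which is what the error above is about.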

[reply=“db ots;3573”]

you are killing me…
I totally didn’t think about modifications to the plan being an attack vector, but yeah, it totally makes sense now that you mention it: change the retention to 1 day, uncheck “keep 1 version”, run the backup to wipe everything older, encrypt the crap out of things, and then run the backup again…
sigh…
makes that console password all the more critical