Which environment would give the best throughput?

When running Cloudberry Drive Server for Windows, in which scenario should I expect the greatest throughput? (This is not a trick question :slight_smile:)

All options have the same attributes except for the following:

A: A server with a single 1 Gb NIC

B: A server with 4 × 1 Gb NICs aggregated via LACP

C: A server with 2 × 10 Gb NICs aggregated via LACP

Each of these scenarios would use the same route to Azure.
After your answer, I will let you know what we found.

[reply=“andrew nee;d241”] It sounds like a student task :smile: but it’s actually hard to say which is best. I would say that B and C are expected to be at least no worse than A. What would you say? :wink:

[reply=“andrew nee;d241”] A lot of it depends on the hardware and software (AV/FW). In an ideal environment it would be variant C, but as I’ve mentioned, it depends on various factors and bottlenecks, like cache settings.

In my experience, with almost all hardware being equal, though admittedly not identical, I observed the following:

B (4 × 1 GbE LACP): upload 280 Mbps, download 766 Mbps
C (2 × 10 GbE LACP): upload 143 Mbps, download 395 Mbps
(Speeds measured with LAN Speed Test Lite)

Given these results, the only explanation I can think of is that multi-threading is allowing B to be faster. I don’t know whether the Cloudberry developers are watching this thread, but it would be nice to hear from them whether multi-threading is in fact being used.
If someone has the time and resources, it would be nice if they could simulate a similar environment and see what they get.
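For what it’s worth, the multi-threading hypothesis fits how LACP normally works: link aggregation typically hashes each *flow* (e.g. by IP addresses and ports) onto one member link, so a single TCP stream can never exceed one link’s speed, while several parallel streams can spread across links. Here is a minimal illustrative sketch of that per-flow hashing behavior (this is not Cloudberry or switch code; the hash function and flow key are assumptions standing in for whatever policy the real switch/OS uses):

```python
import hashlib

def pick_link(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
              num_links: int) -> int:
    """Map a flow onto one member link, roughly as an LACP hash
    policy might (real policies vary by switch and OS)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_links

# A single flow always hashes to the same one link, so one stream
# tops out at one member link's line rate no matter how many links
# are in the bundle.
flow_link = {pick_link("10.0.0.5", 50000, "52.1.2.3", 443, 4)
             for _ in range(100)}
print(len(flow_link))  # 1

# Many flows (here: different source ports, as a multi-threaded
# uploader would create) spread across all member links.
links = {pick_link("10.0.0.5", 50000 + i, "52.1.2.3", 443, 4)
         for i in range(200)}
print(sorted(links))
```

So if the backup software opens many parallel upload streams, a 4 × 1 GbE bundle can use all four links, whereas a single-stream transfer on the 2 × 10 GbE bundle still rides one link; that alone wouldn’t explain B beating C in absolute numbers, but it does explain why thread count can matter more than aggregate link speed here.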