I cannot speak to the reason Support recommended that you move to the new backup format. If it's because of what the KB article describes (millions of files), then moving to the new backup format may only be needed on those backup plans with a large number of files.
Generally speaking, though, for those plans with large numbers of files the new format brings:
* Much improved backup and restore performance
* Client-side deduplication
* Synthetic fulls (new for file backups)
* GFS retention
* Immutability for Wasabi / Amazon S3, and soon for Backblaze B2
* Built-in backup consistency checks
As you've noted, the new format does require periodic full backups, because data is managed at the generation level: a generation is a complete set of the backed-up data (one full plus the incrementals that depend on it). On systems where the source data is large, you can schedule full backups less frequently. You may also want to consider GFS retention to keep data longer term, accepting less granularity in restore points as the data ages.
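If it helps to see the generation rule in action, here's a minimal Python sketch. The dates and the 90-day retention are hypothetical, and this is my rough model of the behavior, not the product's actual code:

```python
from datetime import date, timedelta

RETENTION = timedelta(days=90)  # assumed retention window for this sketch

# A generation = one full backup plus the incrementals chained to it.
# Hypothetical quarterly generations:
generations = [
    # (date of the full, date of the last restore point in the generation)
    (date(2023, 1, 1), date(2023, 3, 31)),
    (date(2023, 4, 1), date(2023, 6, 30)),
    (date(2023, 7, 1), date(2023, 9, 27)),  # current generation, still growing
]

today = date(2023, 9, 28)

for full_date, last_restore_point in generations:
    # A generation is purged as a unit, and only once its NEWEST restore
    # point falls outside the retention window -- removing the full any
    # earlier would orphan the incrementals that depend on it.
    deletable = (today - last_restore_point) > RETENTION
    print(f"full {full_date}: {'deletable' if deletable else 'kept'},"
          f" now {(today - full_date).days} days old")
```

Running this shows the April full still being held at 180 days old, which is exactly the "up to double the retention period" effect in the example below.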
Some of these settings will depend on the retention and backup contracts you've negotiated with your customers. You say you are keeping 2 years, but you didn't indicate whether that means every version of every file, including files deleted locally, for the full 2 years, or only a fixed number of file versions.
For example, on your 3 TB data set, you could:
* Run daily incrementals - or more frequently if needed
* Run a full backup every 3 months
* Keep backups for 90 days
* Turn on GFS and set Yearly backups to 2 years
With those settings you'll have all data kept for at least 90 days (in practice you'll hold up to 180 days, since the oldest 90-day backup set can't be removed until its newest restore point ages past retention). Beyond the 90 days, you'll keep full backup snapshots for 2 years via GFS. So you may end up with about 4 full backup snapshots in storage, plus up to 180 days of incrementals. Obviously, those retention and schedule settings can be adjusted as needed for your particular use case.
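To sanity-check that arithmetic, here's a back-of-the-envelope sketch under the same assumptions (quarterly fulls, 90-day retention, 2 GFS yearlies; deduplication and compression ignored):

```python
from datetime import timedelta

full_interval = timedelta(days=90)  # one full every ~3 months
retention = timedelta(days=90)      # keep backups for 90 days
gfs_yearly_kept = 2                 # GFS yearly snapshots, kept for 2 years

# The oldest generation survives until its newest restore point ages past
# retention, so its full can reach (full_interval + retention) days old
# before the whole set is removed.
max_data_age = full_interval + retention
print(f"Oldest regular data on disk: up to ~{max_data_age.days} days")  # ~180

# Steady-state full backups in storage: the current generation's full,
# the previous generation still inside the retention window, plus the
# GFS yearly snapshots.
fulls_in_storage = 2 + gfs_yearly_kept
print(f"Full snapshots in storage: ~{fulls_in_storage}")  # ~4
```

The exact counts will drift a bit around full-backup boundaries, but it's a reasonable steady-state estimate.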