I changed from 50Mb/s to 1000Mb/s service a few months ago; latency is much the same, which means that when the pipe is not crowded, I really can't tell the difference.
(But it's wonderful headroom for large file transfers.)
I can't tell the difference between 75Mb/s and 900Mb/s. Most servers don't get anywhere near 900Mb/s, and my own hardware is almost always the limiting factor (Wi-Fi, HDD speed, etc.).
One way to do a test is to bittorrent something legal, say an Ubuntu DVD.
If you use a client like Transmission, for example, you can limit the upload speed of your client. Do one test at a low upload cap (5 KB/s), and one just above your ISP's advertised upload speed.
In the first case you should see your download speed top out around 850Mb/s. In the second, if you have a bufferbloat situation, the download speed will drop significantly as the uploads push the download ACKs into a queue, forcing TCP to throttle heavily downwards.
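A cheap way to watch the same effect without staring at the torrent client is to sample ping times while you flip the upload cap up and down. A minimal sketch, assuming Linux-style `ping` output and an arbitrary target host (1.1.1.1 here is just a stable, nearby choice):

```python
#!/usr/bin/env python3
"""Rough bufferbloat check: print one RTT sample per second so you can
watch latency climb when the uplink gets saturated (e.g. when you raise
the torrent client's upload cap)."""
import re
import subprocess
import time

HOST = "1.1.1.1"  # any stable, nearby host works


def rtt_ms(host: str) -> float | None:
    # Single ICMP echo; returns round-trip time in ms, or None on loss/timeout.
    out = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                         capture_output=True, text=True)
    m = re.search(r"time=([\d.]+) ms", out.stdout)
    return float(m.group(1)) if m else None


if __name__ == "__main__":
    print("Sampling RTT; start and stop the upload and watch the numbers.")
    while True:
        r = rtt_ms(HOST)
        print(f"{time.strftime('%H:%M:%S')}  "
              f"{'timeout' if r is None else f'{r:6.1f} ms'}")
        time.sleep(1)
```

On a bloated uplink the RTT typically jumps from tens of milliseconds to hundreds (or worse) the moment the upload saturates, and falls back as soon as you cap it.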
It might actually, mostly due to buffer bloat on some modem stacks.
The anti-bufferbloat stuff from the recent DOCSIS spec (I forget if it was 3.0 or the one after) is supposed to help a LOT with latency issues. Not as good as fiber, but close enough that global distance issues become dominant again.
The "buffer bloat" guy has been useful. I've been arguing against big FIFO queues for decades, but he's more into PR and visibility than I am. Any place you have a major choke point, especially home router uplink connections, you need QoS, or fair queuing, or something, or queue delays will go through the roof. At least send the ACKs first.
Latency can definitely improve from where we are. The speed of light is different through different materials; that's why some people are using hollow-core fiber optic cables to get lower-latency connections. Laying a more direct line would also improve latency: according to traceroutes, my connections to Seattle backtrack about a hundred miles east before heading out west again. Faster routing hardware would shave off a few milliseconds. Economic factors, not hard physical limits, hold back latency improvements for the majority of internet users.
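The back-of-envelope numbers support this; a quick calculation (the refractive index is the usual ~1.47 figure for standard silica fiber, and the 100-mile detour is the one from the comment above):

```python
"""Back-of-envelope latency arithmetic for route detours and hollow-core fiber."""
C_KM_S = 299_792.458    # speed of light in vacuum, km/s
N_SILICA = 1.47         # typical index for standard single-mode fiber
MILE_KM = 1.609344

v_glass = C_KM_S / N_SILICA   # ~204,000 km/s through solid fiber
v_hollow = 0.997 * C_KM_S     # hollow-core fiber is close to vacuum speed

# 1. A 100-mile wrong-way backtrack adds ~200 miles one way, ~400 miles per RTT.
extra_rtt_km = 4 * 100 * MILE_KM
print(f"detour cost:        {extra_rtt_km / v_glass * 1000:.1f} ms RTT")

# 2. Swapping solid fiber for hollow-core over a 1000 km span.
span_km = 1000
saving_ms = 2 * span_km * (1 / v_glass - 1 / v_hollow) * 1000
print(f"hollow-core saving: {saving_ms:.1f} ms RTT per {span_km} km span")
```

That works out to roughly 3 ms of round-trip time for the detour and about 3 ms per 1000 km for hollow-core fiber: small per link, but they add up across a long path.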
Speed of light isn't the only factor in latency. Quality of the equipment, routing protocols, etc., are far more important, since those are the things we can actually change.
I strongly suspect that for an average consumer, latency is driven by the TCP stack on every piece of hardware close to the edge (local machine, router, modem, ISP point of presence) to a much higher degree than the transmission medium or backhaul. Consumer internet has a long way to go before it becomes HFT backhaul-grade.
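One crude way to check where the milliseconds go on a given connection is to compare the RTT to the first hop (the home router) with the RTT to the far end. A sketch, assuming Linux-style `ping` output; the gateway and target addresses are placeholders for your own network:

```python
"""Compare first-hop RTT (home router) against end-to-end RTT, to get a
rough sense of how much latency is added at the edge vs. in transit."""
import re
import statistics
import subprocess


def median_rtt_ms(host: str, count: int = 10) -> float:
    """Median RTT parsed from Linux-style `ping` output; raises if every probe fails."""
    out = subprocess.run(["ping", "-c", str(count), "-i", "0.2", host],
                         capture_output=True, text=True).stdout
    samples = [float(x) for x in re.findall(r"time=([\d.]+) ms", out)]
    return statistics.median(samples)


if __name__ == "__main__":
    gateway = "192.168.1.1"   # assumed home router address
    target = "example.com"    # whichever server you actually care about
    edge = median_rtt_ms(gateway)
    total = median_rtt_ms(target)
    print(f"first hop: {edge:.1f} ms   end-to-end: {total:.1f} ms   "
          f"edge share: {edge / total:.0%}")
```

If a surprising share of the total shows up on the first hop, the bottleneck is in your own gear, not the backhaul.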