* I presume the point here is that the proxy is close to the server, so the handshake is faster; that would be the single benefit of this setup, although that's not at all what's illustrated.
* The illustrations actually show the request taking longer with the proxy, although maybe the two diagrams aren't to scale.
* The originating UDP packet could get lost and the client would never know
The author could improve the latency by prepping the TCP connection before the request comes in, giving a significant reduction in latency.
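A minimal sketch of that idea, assuming Python and an illustrative `prewarm` helper: open (and keep) the TCP connection to the upstream before any request arrives, so the three-way handshake is already paid for by the time a request needs to go out.

```python
import socket

# Hypothetical helper: pre-open a TCP connection to the upstream so the
# handshake happens ahead of time, not on the request's critical path.
def prewarm(host, port, timeout=1.0):
    sock = socket.create_connection((host, port), timeout=timeout)
    # Disable Nagle so the eventual request isn't delayed by batching.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock
```

In practice you'd keep a small pool of these warmed sockets and replace each one after use, which is essentially what HTTP keep-alive connection pools do.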
It's about SENDING requests with "zero latency", not about completing http operations with zero latency.
And yes, you get no confirmation, and no reply.
I must admit I'm hard pressed to come up with a use case for this. You could just as easily do a regular HTTP request on a separate thread and throw away the result to get "zero latency" fire-and-forget behavior.
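For concreteness, the "regular HTTP request on a separate thread" alternative can be sketched like this in Python (the `fire_and_forget` name and the URL are illustrative, not from the original):

```python
import threading
import urllib.request

# Hypothetical fire-and-forget: issue the request on a daemon thread and
# deliberately discard both the response body and any error.
def fire_and_forget(url, data=None, timeout=2.0):
    def _send():
        try:
            urllib.request.urlopen(url, data=data, timeout=timeout).close()
        except OSError:
            pass  # swallowed: the caller never learns the outcome
    threading.Thread(target=_send, daemon=True).start()
```

The caller returns immediately; as with the UDP scheme, delivery is best-effort and unconfirmed.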
> I must admit I'm hard pressed to come up with a use case for this. You could just as easily do a regular HTTP request on a separate thread and throw away the result to get "zero latency" fire-and-forget behavior.
I do this quite often. Creating another thread might be easy to code, but it's harder on capacity planning and management. It's also hard to gossip utilisation of a shared network link across a cluster: if, say, requests originate infrequently on any one machine, I may want to permit a pool of 500k concurrent outgoing HTTP requests across the entire cluster, but I don't want to give every machine a mere 10k, since busy endpoints would be starved by unbusy ones (and my network link would have plenty of spare capacity). Managing all that could be a full-time job if I went down the "easy" path of just creating another thread.
Using UDP means that if there is network congestion, messages just get dropped, and I don't waste more network capacity and CPU by sending retransmits. I already have retries further up the chain for other reasons, so it makes sense to me to have less code and reuse what I've got.
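The send side of that pattern is tiny; a sketch (address and payload shape are illustrative):

```python
import json
import socket

# One sendto per message: if the network drops the datagram, nothing
# retransmits; retries, if any, live further up the application stack.
def send_event(payload, addr=("127.0.0.1", 9999)):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(json.dumps(payload).encode(), addr)
    finally:
        sock.close()
```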
Do you really want to risk dropping a log that might hold useful information in case of a meltdown? I think logging is something you should probably confirm, or you're setting yourself up for nasty bugs.
A lot of logging software supports UDP transport, though. Not all logs are critical, and when you're close to 100% network saturation, dropping some log traffic to keep the production environment functional is not a big deal for most.
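Python's standard library already ships this trade-off: `logging.handlers.DatagramHandler` sends each record over UDP, so a saturated or unreachable collector just loses records instead of blocking the application. The host/port below are placeholders.

```python
import logging
import logging.handlers

# Best-effort log shipping over UDP: records are pickled and sent as
# datagrams; a down or congested collector silently drops them.
log = logging.getLogger("app")
log.addHandler(logging.handlers.DatagramHandler("127.0.0.1", 9999))
log.setLevel(logging.INFO)
log.info("request served")  # never waits for a reply
```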