If all of your agents are seeing the UUID for the first time, they cannot deduplicate amongst themselves without quorum or shared state, which has race conditions of its own. It's non-trivial.
There's no need to. Different agents will have different copies of the same message; that's a given. If the sender ensures that a UUID is attached to a single specific message and never re-used (though it may retry sending the exact same message multiple times), then there is no risk that multiple receivers end up with different versions of a message, and consensus is guaranteed without the receivers even having to talk to each other.
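A minimal sketch of that scheme, assuming the sender attaches the UUID once and reuses the exact same message object on retries (names here are illustrative, not from any particular library):

```python
import uuid

def make_message(payload):
    """Sender side: attach a fresh UUID exactly once.
    Retries must resend this same dict unchanged, never mint a new id."""
    return {"id": str(uuid.uuid4()), "payload": payload}

class Receiver:
    """Each receiver dedupes independently; no coordination with other
    receivers is needed, because the id-to-message mapping is fixed."""
    def __init__(self):
        self.seen = set()
        self.delivered = []

    def on_message(self, msg):
        if msg["id"] in self.seen:
            return False  # duplicate delivery of a retried message
        self.seen.add(msg["id"])
        self.delivered.append(msg["payload"])
        return True
```

Every receiver that follows this rule converges on the same set of delivered payloads, regardless of how many times each message was retried.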
If you assume that the sender cannot be trusted to not reuse a single UUID for multiple messages (e.g. to trick the receiver), you can still work around that limitation by computing and comparing message hashes (e.g. sha256) on the receiver side... Don't even need UUIDs. Every time you receive a message, you can hash it and check whether you've received a message with this hash before (storing the hash of each message as it is received). You can use the hash in place of a UUID, though in this case you probably need to add some index to each message to ensure that each hash is unique over time (since a message with the exact same payload would otherwise be counted as the same message, even if broadcast a long time apart).
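A sketch of that receiver-side hashing, where `seq` stands in for the per-message index mentioned above (both the field name and the JSON framing are assumptions for illustration):

```python
import hashlib
import json

class HashDeduper:
    """Receiver-side dedup without trusting sender-supplied UUIDs.
    Including `seq` in the hash keeps identical payloads sent at
    different times from colliding into one message."""
    def __init__(self):
        self.seen_hashes = set()

    def accept(self, seq, payload):
        digest = hashlib.sha256(
            json.dumps({"seq": seq, "payload": payload}, sort_keys=True).encode()
        ).hexdigest()
        if digest in self.seen_hashes:
            return False  # already processed this exact (seq, payload)
        self.seen_hashes.add(digest)
        return True
```

Without the `seq` field, two legitimately distinct "ping" messages a week apart would hash identically and the second would be dropped.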
It depends on your definition of 'recipient' and 'delivery'. If you assume that the receiver is a specific database instance (or shard) or a specific data store (on the receiver side), then it's totally possible to have exactly-once delivery in the sense that each message sent ends up being inserted into the database/data store exactly once, without duplicates.
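With the database as the recipient, exactly-once insertion falls out of a uniqueness constraint on the message id. A sketch with SQLite (table and column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id TEXT PRIMARY KEY, payload TEXT)")

def deliver(conn, msg_id, payload):
    """Insert at most once; redelivery of the same id is a silent no-op.
    rowcount is 1 only when the row was actually inserted."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO messages (id, payload) VALUES (?, ?)",
        (msg_id, payload),
    )
    return cur.rowcount == 1
```

The sender can now retry `deliver` as many times as it likes; the at-least-once transport plus the idempotent insert yields exactly-once in the data store.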
A common Kafka approach is to partition by key, so that a given UUID will only be placed on one partition, and we're guaranteed that any further messages with that key will also land on that partition and therefore be handled by the same consumer.
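The property being relied on is just a deterministic key-to-partition mapping. A sketch of the idea (Kafka's default partitioner actually uses murmur2 on the key bytes; sha256 is substituted here purely for illustration):

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Deterministic key -> partition mapping: the same key always
    lands on the same partition, so one consumer sees all its messages."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions
```

As long as the partition count doesn't change, every message carrying the same UUID is routed to the same partition, which is what makes single-consumer dedup possible.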
Then create a change-log topic that's co-partitioned by key with the input topic. And then futz around with partition assignment strategies so that if consumer X is assigned partition 10 of the input topic, it's also assigned partition 10 of the change-log topic.
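The assignment strategy being described boils down to: hand partition i of every co-partitioned topic to the same consumer. A toy sketch of that invariant (real Kafka clients would do this with a custom partition assignor; the round-robin choice of owner here is arbitrary):

```python
def copartitioned_assignment(consumers, num_partitions, topics):
    """Assign partition i of every topic to one consumer, so whoever
    holds input partition 10 also holds change-log partition 10."""
    assignment = {c: [] for c in consumers}
    for p in range(num_partitions):
        owner = consumers[p % len(consumers)]  # arbitrary but stable choice
        for topic in topics:
            assignment[owner].append((topic, p))
    return assignment
```

The point is not how partitions are spread across consumers, but that the (topic, partition) pairs sharing a partition number never split across consumers.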
Then add RocksDB as a fast state store on a persistent volume claim, as restoring state fresh from your change-log topic turns out to take about 7 minutes.
And then realise you've just reimplemented bits of Kafka Streams poorly.