Hacker News

If the idea is to leverage the Redis protocol for an “API”, what are the benefits to this approach over using the built-in `pub-sub` and the regular old C implementation of Redis Server?

I could maybe see some really specific use-cases for this, but for probably 90% of cases a distributed process API should be able to go over pubsub fine, shouldn’t it? Or does pubsub have some sort of massive overhead?

EDIT: Just because it’s tangential to the topic at hand, and since I’m up here at the top, I also wanna throw out a nod to libraries like `nng` and `nanomsg`, which are spiritual successors to ZeroMQ and have functionality like brokerless RPC/pubsub built in as messaging models. I don’t see tools like this talked about a lot in this space cuz systems software isn’t the sexiest, but if you need to embed a small and lightweight messaging endpoint in your backend stuff then look at those as well. No horse in the race, just sharing useful tools with people.

https://nng.nanomsg.org



> and the regular old C implementation of Redis Server?

To be clear, this article is talking about implementing your own synchronous-RPC-request server, i.e. a network service that other services talk to through an API over some network wire protocol, to make requests over a socket and then wait for responses to those requests over that same socket. This article assumes that you already know that that's what you need. This article then offers an additional alternative to the traditional wire protocols one might expose to clients in a synchronous-RPC-request server (RESTful HTTP, gRPC, JSON-RPC over HTTP, JSON-RPC over TCP, etc.); namely, mimicking the wire protocol Redis uses, but exposing your own custom Redis commands. This choice allows you to use existing Redis client libraries as your RPC clients, just as writing a RESTful HTTP server allows you to use existing HTTP client libraries as your RPC clients.

The alternative to doing so, if you want to call it that, would be to write these custom commands as a Redis module in C. But then you have to structure your code to live inside a Redis server, when that might not be at all what you want, especially if your code already lives inside some other kind of framework, or is written in a managed-runtime language unsuited to plugging into a C server.

Or think of it like this: this article is about taking an existing daemon, written in some arbitrary language (in this case Elixir), where that daemon already speaks some other, slower RPC protocol (e.g. REST over HTTP); and adding an additional Redis-protocol RPC listener to that daemon, so that you can use a Redis client as a drop-in replacement for an HTTP client for doing RPC against the daemon, thus (presumably) lowering per-request protocol overhead for clients that need to pump through a lot of RPC requests.
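To make that concrete, here's a rough Python sketch (hypothetical names, not the article's Elixir code) of the server side of such a listener: parse one RESP command off the wire, dispatch on a custom verb, and encode a RESP reply. The TCP accept loop is omitted.

```python
# Minimal sketch of a server-side RESP command handler. A Redis client
# sends each command as a RESP array of bulk strings; we parse it,
# dispatch on the verb, and encode a RESP reply.

def parse_command(data: bytes) -> list:
    """Parse one RESP array of bulk strings, e.g. b'*1\\r\\n$4\\r\\nPING\\r\\n'."""
    lines = data.split(b"\r\n")
    assert lines[0].startswith(b"*"), "expected a RESP array"
    n = int(lines[0][1:])
    parts, i = [], 1
    for _ in range(n):
        assert lines[i].startswith(b"$"), "expected a bulk string"
        parts.append(lines[i + 1].decode())
        i += 2
    return parts

def encode_reply(value: str) -> bytes:
    """Encode a reply as a RESP bulk string."""
    payload = value.encode()
    return b"$%d\r\n%s\r\n" % (len(payload), payload)

def handle(data: bytes) -> bytes:
    verb, *args = parse_command(data)
    if verb.upper() == "WEBHOOK.TRIGGER":   # a custom, non-Redis verb
        return encode_reply("triggered:" + args[0])
    return b"-ERR unknown command\r\n"
```

Hooking `handle` up to a socket loop is all that's left; any off-the-shelf Redis client could then issue `WEBHOOK.TRIGGER foo` against it.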

I do realize that you're suggesting that you could use Redis as an event-bus between two processes that each connect to it via the Redis protocol; and then use a "fire an async request as an event over the bus, and then await a response event containing the request's ref to show up on the bus" RPC strategy, ala Erlang's own gen_server:call messaging strategy. All I can say is that, due to there being three processes and two separate RPC sessions involved, with their own wire-protocol encoding/decoding phases, that's likely higher-overhead than even a direct RESTful-HTTP RPC session between the client and the relevant daemon; let alone a direct Redis RPC session between the client and the daemon.


> This article assumes that you already know that that's what you need

This is how I read the article. It was about how to implement network protocols in Elixir, and here are two of them: Redis and MessagePack.

Having an Elixir-based Redis server is not the same piece of the puzzle as having Elixir simply talk to a Redis server. For one, the Elixir-based Redis server can have arbitrary rules around keys and values that are not supported by Redis. (The same goes for a Redis server written in C or Python or Rust or...)

This approach lets you store all keys and values in a dict, b-tree, sqlite or postgres etc. Want to store each value in a flat file? Sure, now you can. Only you know if this is actually useful.

At least, this is how I made sense of this.


I don't think the article was talking about writing a "Redis Server" in the sense of writing something that tries to do what redis-server does — i.e. to be a "data-structure server" that has a root-level keyspace of variously-typed values, and commands that atomically build up and query those data structures. Maybe they used that as an example of what you can do, but it's not the most useful example. (If that's what you wanted, why not just use a regular redis-server deployment?)

I think the article was instead just using the term "Redis server" to mean "any server that speaks the server side of the client-server protocol that redis-server speaks" (without implication that it stores data ala redis-server) — in the same way that an "HTTP server" is "any server that speaks the server side of the client-server protocol that HTTP servers speak" (without implication that it serves HTML files, generates directory indices, and supports per-user multitenant shares, ala default-configuration Apache.)

Note that the command set of Redis isn't part of the Redis protocol. A "Redis protocol" server could have an entirely novel set of commands, none of which have anything to do with keys or data-structures. It's just another way of exposing an API to clients. Your application-layer protocol over the Redis wire protocol could be "an API for triggering webhooks", or "a group-chat software protocol ala IRC/XMPP", etc.; and in none of those cases do you need to implement GET/SET/DEL/etc., or to describe your own use-case in terms of GET/SET/DEL/etc.†

The only thing the Redis wire-protocol necessitates, IIRC, is that each command start with a verb; that verbs consist of ASCII characters, with a certain maximum length; and that each verb have a fixed "schema" for the members of its parameter list, that pre-determines the encoding a client should use to send an instance of that command over the text or binary wire-protocols, without any connection-time schema discovery.
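That framing is small enough to show in a few lines. A hedged sketch (plain Python, not tied to any client library) of how a client puts an arbitrary custom command on the wire under RESP2:

```python
def encode_command(verb: str, *args: str) -> bytes:
    """Encode a command as a RESP array of bulk strings: the verb first,
    then its parameters — exactly how a Redis client frames any command,
    built-in or custom."""
    parts = [verb, *args]
    out = [b"*%d\r\n" % len(parts)]
    for p in parts:
        raw = p.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(raw), raw))
    return b"".join(out)
```

Note there's nothing key/value-specific here: `CHAT.SEND room1 hi` frames exactly the same way `SET k v` does, which is why a novel command set rides on the protocol for free.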

And yes, most Redis clients do have some way of sending custom commands to the server, with the schema for those commands specified at runtime (at least over the Redis text protocol), even if they don't have syntax sugar for doing it the way they do for the redis-server built-in commands. Even if a Redis client's aim is only to talk to redis-server deployments, they still have to support custom commands, because individual redis-server deployments are extensible with https://redis.io/modules that expose arbitrary commands, and client libraries can't possibly know about those modules at compile-time. So they have to support potentially any command at runtime, somehow or another.

-----

† Mind you, just like HTTP has "REST" (which basically means "using the default HTTP verbs for analogous purposes in your own API, instead of totally abusing their semantics or inventing your own verbs"), there could be a similar convention on top of the Redis wire protocol, where you implement your API in terms of the built-in redis-server verbs GET/SET/DEL/etc. Then you could use the full syntax-sugared default commands built into Redis-protocol clients, to talk to your server, instead of needing to rely on the runtime custom-command support. However, unlike with REST, I don't think this use-case is very useful — the schema of redis-server's built-in command verbs has pretty tight tolerances, and doesn't allow for too many use-cases that aren't just "building a data-structure server."


> This choice allows you to use existing Redis client libraries as your RPC clients, just as writing a RESTful HTTP server allows you to use existing HTTP client libraries as your RPC clients.

Thanks for this beautiful analogy. I'm familiar with redis and elixir and found the article interesting but I didn't quite understand the "why" and now this makes sense. Highlighting it here for others as well.


That’s fair. Although I think the characterization of doing RPC over Redis’s built-in pubsub is a little uncharitable. You’d just fire off a PUBLISH command w/ the channel and payload, and you’d receive the response to the RPC request on a subscriber that you can immediately drop (a pseudo-one-shot channel). It doesn’t have to be an event bus (even though that’s close to how Redis does pubsub internally).
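The shape of that pattern, sketched against an in-memory stand-in for the broker (the `Broker` class here is illustrative only; a real version would go through PUBLISH/SUBSCRIBE on an actual Redis connection):

```python
import uuid
from collections import defaultdict

class Broker:
    """In-memory stand-in for a pubsub broker, just to show the shape of
    the pattern; delivery is synchronous here, unlike real Redis."""
    def __init__(self):
        self.subs = defaultdict(list)
    def subscribe(self, channel, callback):
        self.subs[channel].append(callback)
    def unsubscribe(self, channel):
        self.subs.pop(channel, None)
    def publish(self, channel, message):
        for cb in list(self.subs.get(channel, [])):
            cb(message)

def rpc_call(broker, service_channel, payload):
    """Fire a request carrying a unique reply channel, collect the
    response, then drop the subscription (pseudo-one-shot channel)."""
    reply_channel = "reply:" + uuid.uuid4().hex
    result = []
    broker.subscribe(reply_channel, result.append)
    broker.publish(service_channel, {"reply_to": reply_channel, "body": payload})
    broker.unsubscribe(reply_channel)
    return result[0] if result else None

def serve(broker):
    """Server side: answer each request on its stated reply channel."""
    def handler(msg):
        broker.publish(msg["reply_to"], {"ok": True, "echo": msg["body"]})
    broker.subscribe("svc", handler)
```

In real Redis pubsub the reply would of course arrive asynchronously, so the client would block on its subscriber connection (with a timeout) rather than reading a list.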


When I say "event bus", I mean "an async RPC architecture using a reliable at-least-once message-queuing model, where clients connect to a message broker [e.g. a Redis stream], and publish RPC workloads there; backends connect to the same message broker, and subscribe as consumers of a shared consumer-group for RPC workloads, greedily take messages from the queue, and do work on them; backends that complete RPC workloads publish the workload-results back to the broker on channels specific to the original clients, while ACKing the original workloads on the workloads channel; and clients subscribe to their own RPC workload-results channel, ACKing messages as they receive them."

Event bus is the name for this network architecture. And if you're trying to replicate what synchronous client-server RPC does in a distributed M:N system, it's what you'd have to use. You can't use at-most-once/unreliable PUBSUB to replicate how synchronous client-server RPC works, as a client might sit around forever waiting for a response that got silently dropped due to the broker or a backend crashing, without knowing it. All the queues and ACKs are there to replicate what clients get for free from having a direct TCP connection to the server.
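The difference is easiest to see in miniature. Below is an in-memory caricature (not Redis — a real deployment would use stream consumer groups with XACK/XPENDING/XCLAIM) of why the ACKs matter: a delivered-but-unACKed message can be reclaimed and redelivered, where at-most-once pubsub would have silently dropped it.

```python
class AtLeastOnceQueue:
    """Tiny in-memory caricature of at-least-once delivery: a message
    stays 'pending' until the consumer ACKs it, so a crashed consumer's
    workload can be redelivered instead of silently lost."""
    def __init__(self):
        self.next_id = 0
        self.ready = []      # (id, payload) waiting for a consumer
        self.pending = {}    # id -> payload: delivered, not yet ACKed
    def publish(self, payload):
        self.next_id += 1
        self.ready.append((self.next_id, payload))
        return self.next_id
    def take(self):
        msg_id, payload = self.ready.pop(0)
        self.pending[msg_id] = payload
        return msg_id, payload
    def ack(self, msg_id):
        del self.pending[msg_id]
    def reclaim(self):
        # Broker-side sweep: anything still pending (consumer crashed
        # before ACKing) goes back on the queue for redelivery.
        for msg_id, payload in self.pending.items():
            self.ready.append((msg_id, payload))
        self.pending.clear()
```

With plain PUBSUB there is no `pending` set and no `reclaim`: a message delivered to a dead consumer — or delivered to nobody — is simply gone, and the waiting RPC client hangs forever.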

(Yes, Erlang uses timeouts on gen_server:call to build up distributed-systems abstractions on top of an unreliable message carrier. But everything else in an Erlang system has to be explicitly engineered around having timeouts on one end and idempotent handling of potentially-spurious "leftover" requests on the other. Clients that were originally doing synchronous RPC, where you don't know exactly how they were relying on that synchronous RPC, can switch to a Redis-streams event-bus based messaging protocol as a drop-in replacement for their synchronous client-server RPC, because reliable at-least-once async delivery can embed the semantics of synchronous RPC; but they can't switch to unreliable async pubsub as a drop-in replacement for their synchronous client-server RPC. Doing the latter would require investigation and potentially re-engineering, on both sides. If you don't control one end — e.g. if the clients are third-party mobile apps — then that re-engineering might even be impossible.)


I don’t know if anyone ever told you before but you have a real talent for explaining things clearly and simply!!

You should write a technical book or something!!


This article is about implementing the Redis Protocol on a socket yourself. Not about the Redis Database. Sorry for the confusion.


I actually did understand that and I think it’s super cool from a technical perspective. I was just curious as to the use case where this was a strong contender to beat out the built-in stuff you can get from Redis server. It’s a “small” implementation but you still have to test it and own it throughout the lifecycle of the product that’s using it.


It's pretty common that you won't have a specific use case in mind when learning something new, but I've often found that later it ends up fitting in somehow.


If it's straight RPC there's no need for the extra moving parts, and the simpler your topology the fewer ways for it to go wrong.

I often build TCP-based "protocols" that are just newline-delimited JSON in each direction; it's a nice middle ground between using an HTTP POST and something like gRPC.
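For illustration, the framing half of such a protocol fits in a few lines (function names are mine, not from any particular library):

```python
import json

# Newline-delimited JSON framing: each message is one JSON object
# followed by '\n'. json.dumps never emits a raw newline (it escapes
# them inside strings), so '\n' is a safe frame delimiter.

def encode_message(obj) -> bytes:
    return json.dumps(obj).encode() + b"\n"

def decode_stream(buffer: bytes):
    """Split a receive buffer into complete messages plus leftover
    bytes (a partially received frame to prepend to the next read)."""
    *frames, rest = buffer.split(b"\n")
    return [json.loads(f) for f in frames], rest
```

The receiver just appends each `recv()` chunk to its buffer, calls `decode_stream`, and keeps the leftover bytes for the next round — which is most of what gRPC's framing buys you, minus the schema.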


I’m not 100% sure I’m understanding, but if the idea is to implement a “traditional”/synchronous RPC mechanism, you can do that using the PubSub functionality trivially, and I’m not sure which additional moving parts it adds unless you’re in a situation where you have to cluster Redis (which has a reputation for being a PITA but isn’t that hard honestly).

The only use case that really jumps out is when you don’t want a broker because you don’t want a single point of failure, but now you’re embedding a Redis server implementation in every one of the services in your mesh, and I’m not convinced that’s much better, though I can see where it might be helpful.


> I’m not sure which additional moving parts it adds

> The only use case that really jumps out is when you don’t want a broker

The broker is exactly the additional moving part I was referring to.

Pubsub over a broker is more complicated than RPC operationally, whether you're already using the broker software elsewhere or not.

Especially when you're looking at the RPC server living inside an Erlang VM that's already really good at handling things like load shedding of direct connections.


I don't think this has anything to do with Redis server; you are simply using the Redis wire protocol/parser? I could be wrong...


That’s correct but my point was that Redis has an out-of-the-box solution for “message passing arbitrary data using the Redis protocol over TCP to named endpoints”, and Redis Server is a very lightweight piece of software. Even if you aren’t using it as an in-memory key-value DB it’s not a big problem to pull it in to your stack even if you’re only gonna use it for PubSub or RPC/IPC.

This is a super cool write-up and I’m not saying anything negative about what the author did. I like it a lot. I’m just asking from a technical and curiosity perspective what the advantages are to this over using the stuff Redis Server already provides which can do the same thing.


With Elixir specifically, you have the option of clustering these things. We (fly.io) send messages between servers using NATS. This works well with geographically distributed infrastructure, messages are somewhat peer to peer. If we were using Redis, we'd need a round trip to a centralized server. And we'd need the internet to always work well.

You can do a similar thing with Elixir and your own protocol (or the Redis protocol).


The problem with Redis as a pub-sub is that it’s not transactional. So a consumer can’t take a temporary lease on a message, try to process it, and then delete the message if it was processed successfully.

This pattern is very common in banking but requires something like RabbitMQ or Azure Service Bus …


I suppose the main advantages would be fewer dependencies (no Redis server) and less overhead (you aren’t routing through anything).

I haven’t looked at the exact performance characteristics but it would be fun! You would have built-in load balancing!


Correct, this post is about the Redis Protocol.



