
This is a question more than a comment, as I have only casual knowledge of event sourcing...

"""

You wouldn't let two separate services reach directly into each other's data storage when not event sourcing – you'd pump them through a layer of abstraction to avoid breaking every consumer of your service when it needs to change its data

"""

Isn't the event itself precisely that layer of abstraction? That is, you're not publishing the details of your data store. You're publishing an event which is a thin slice or crafted combination of details that ultimately reside in that store, but which you are hiding...
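To make my mental model concrete, here's roughly how I picture it (a TypeScript sketch; all the names are invented):

    // Internal storage shape -- private to the service, free to change.
    interface OrderRow {
      id: string;
      customer_fk: number;
      line_items_json: string; // denormalized blob, an internal detail
      created_at: Date;
    }

    // Published event -- the deliberate, narrow contract consumers see.
    interface OrderPlaced {
      orderId: string;
      customerId: string;
      placedAt: string; // ISO 8601
    }

    // The mapping itself is the abstraction layer: storage details stay hidden.
    function toEvent(row: OrderRow, customerId: string): OrderPlaced {
      return {
        orderId: row.id,
        customerId,
        placedAt: row.created_at.toISOString(),
      };
    }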

Am I misunderstanding the quote?



> Am I misunderstanding the quote?

I don't think you are misunderstanding the quote; I think you are misunderstanding the nature of the problem.

If you tip your head sideways, you may notice that the persisted representation of your model is "just" a message, from the past to the future. It might describe a sequence of patches, or it might be a snapshot of rows/columns/relations. But it is still a message.

The trick that makes managing changes to this message schema easy is that you own the schema, the sender, and the receiver. So coordinating changes is "easy" -- you just need to migrate all of the information that you have into its new representation.
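As a sketch of that migration step (hypothetical names, and assuming you really do control both ends):

    // Old persisted shape (v1) and new shape (v2) of the same "message".
    interface AccountV1 { name: string }                 // one free-form field
    interface AccountV2 { first: string; last: string }  // split representation

    function loadAllV1(): AccountV1[] {
      return [{ name: "Ada Lovelace" }]; // stand-in for reading old storage
    }

    // Because one party owns schema, sender, and receiver, everything
    // stored can be migrated to the new representation in one pass.
    function migrate(old: AccountV1): AccountV2 {
      const [first, ...rest] = old.name.split(" ");
      return { first, last: rest.join(" ") };
    }

    const upgraded: AccountV2[] = loadAllV1().map(migrate); // then persist as v2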

If the schema is stable, the risk of coupling additional consumers to the schema is relatively small. Think HTTP -- we've been pushing out new clients and servers for years, but they are still interoperable, because the schema has only changed in quite safe ways.

But if the schema _isn't_ stable, then all bets are off.

Because of concerns of scale/speed, we normally can't lock all of our information at once. Instead, we carve up little islands of information that can be locked individually. The schemas that we use are often implicitly coupled to our arrangement of these islands, which means that if we need to change the boundaries later, we often need to change the schema, and that ripples.
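A tiny sketch of that ripple (the event names are made up):

    // The event schema quietly embeds the current island boundary: the
    // address lives inside the Order island, so the event carries it.
    interface OrderPlacedV1 {
      orderId: string;
      shippingAddress: string;
    }

    // Move addresses into a separate Customer service and the event can
    // only carry a reference -- a schema change every consumer feels.
    interface OrderPlacedV2 {
      orderId: string;
      customerId: string; // consumers now resolve the address themselves
    }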

And all of this is happening in an environment where businesses expect to change, and there is competitive advantage in being able to change quickly. So it turns out to be really important that we can easily understand how many modules are going to need to be modified to respond to the needs of the business, and to ensure as often as possible that the sizes of the changes to be made are commensurate with the benefits we hope to accrue.


Stupid non-web guy here. I don't understand why you can't just start publishing events, and if you need to change the event contents, just create a new version of the event and support the legacy clients until you can get around to rewriting them. If you can't control the clients and need to support the legacy events indefinitely... well, you probably would have had that same problem no matter what you did, right?
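Something like this, I mean (a sketch; the event names are invented):

    // v1 and v2 of the same event, published side by side during migration.
    interface OrderShippedV1 { kind: "OrderShippedV1"; orderId: string; address: string }
    interface OrderShippedV2 { kind: "OrderShippedV2"; orderId: string; addressId: string }
    type OrderShipped = OrderShippedV1 | OrderShippedV2;

    // Legacy consumers keep reading v1; updated ones handle both until
    // the old version can be retired.
    function handle(event: OrderShipped): void {
      switch (event.kind) {
        case "OrderShippedV1":
          console.log(`ship to ${event.address}`);           // legacy path
          break;
        case "OrderShippedV2":
          console.log(`look up address ${event.addressId}`); // new path
          break;
      }
    }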

This article seems like another instance of criticizing an oversimplified example of something.


> You're publishing an event which is a thin slice or crafted combination of details that ultimately reside in that store

The event stream is the canonical store.
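In other words, derived state is a fold over the stream. A sketch (made-up event names):

    // If the stream is the canonical store, current state isn't read from
    // some other database -- it's derived by folding over the events.
    type AccountEvent =
      | { kind: "Deposited"; amount: number }
      | { kind: "Withdrawn"; amount: number };

    function balance(stream: AccountEvent[]): number {
      return stream.reduce(
        (total, e) => (e.kind === "Deposited" ? total + e.amount : total - e.amount),
        0
      );
    }

    console.log(balance([
      { kind: "Deposited", amount: 100 },
      { kind: "Withdrawn", amount: 30 },
    ])); // 70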


I was wondering this too. Seems like services publish to an event stream which other services then read from.
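Roughly this, as an in-memory sketch (not any particular broker's API):

    // One service appends; each consumer reads from its own cursor.
    class EventStream<E> {
      private log: E[] = [];
      append(event: E): void { this.log.push(event); }
      readFrom(offset: number): E[] { return this.log.slice(offset); }
    }

    const stream = new EventStream<{ kind: string; userId: string }>();
    stream.append({ kind: "UserSignedUp", userId: "alice" }); // producer side

    let cursor = 0;                        // consumer-side offset
    const batch = stream.readFrom(cursor); // consumer side: read new events
    cursor += batch.length;                // advance past what was processed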



