
> the trend has been to encourage avoiding monolithic architectures like Rails

I'd say it's completely the opposite.

Yes, microservices might've been the trend in the late 2010s, but after everyone got burned by the unnecessary complexity (unless you're at "Google" scale), people went back to being productive building modular "monolithic" architectures, and reaching for "microservices" only when absolutely necessary.



Surely one can imagine a middle ground between one giant monolith and a huge mess of microservices?


In some ways it's more about the organization of the humans doing the work. Breaking some piece of a monolith off into its own application (not a microservice) has the advantage that you never have more than a two-pizza-sized team on an app. Sometimes the business grows and you move from the startup-era one-app-does-everything implementation to needing more decoupled, single-business-responsibility organizations of code.

I suppose places like Spotify or GitHub may have good practices around working on large monoliths, but I would think that takes a lot of effort to get right and may have trade-offs.


This is correct.

It was always more of a team organization solution than a system architectural solution. Leaning into it too much on the latter created a lot of complications for the former.


Totally, I think there's a lot of retroactive justification for what's familiar whether it be microservice or monolith. They both have advantages and disadvantages -- we're at a point where deploying and running either is available on most clouds.

That said, I think interesting possibilities exist where the two approaches are blended. I work for a cloud [1] that supports a concept of "services" grouped within an app. Each of those services is a web server that can be configured to sleep [2] under specific conditions -- effectively serverless, but without the loss of shared memory and all the efficiencies of running a more traditional web server.

The grouping of services also provides a way to spin off parts of an application while keeping it within the same deployment process.

1. https://noop.dev

2. https://noop.dev/docs/sleepy-services/


Depends what you wanted from microservices. If all you wanted was scale, then Rails ActiveJob solves that very effectively, allowing you to scale your job runners.
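
For example (a minimal sketch; the job, model, and service-object names are hypothetical), the heavy work goes into a job class on its own queue, and you scale by running more workers for that queue:

    class ReportExportJob < ApplicationJob
      queue_as :exports  # dedicated queue, so its workers scale independently of the web tier

      def perform(report_id)
        report = Report.find(report_id)                # hypothetical model
        ReportExporter.new(report).generate_and_store  # hypothetical service object doing the slow work
      end
    end

    # Enqueue from a controller; web processes stay fast:
    #   ReportExportJob.perform_later(report.id)
    # Then run as many workers as you need, e.g. with Sidekiq as the backend:
    #   bundle exec sidekiq -q exports -c 10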

If you're looking for the "mindset" of microservices, where you store the data separately and independently, then I believe Rails discourages that mindset.


I keep hearing this "microservices to allow scale", in which "scale" implies performance, as a counterargument against microservices.

Honest question, who ever seriously proposed microservices to improve performance? It doesn't take a lot of thought to figure out that microservices have overhead that will always put them at a disadvantage relative to a monolith in this regard.

The only "scale" that makes sense wrt microservices is that of the scale of teams, and manage-ability of the scale of features and codebase. They are primarily a solution to "too many people working in too many poorly bounded domains". But as a solution to performance problems? Is it really proposed as that?


This was seriously proposed by some. E.g. "scaling services independently".

Scaling services independently is usually a recipe for outages where something is at the wrong scale. Sometimes you want to separate workloads that don't fit the request response model well because they take too long or use too much CPU or RAM, but you don't need micro services to get that benefit.

I don't think anyone was claiming it would lower latency for typical requests except maybe indirectly through materializing views in event-driven architecture.

I think the steel man has been about scaling teams, but the discourse was not limited to that.


The idea of reserving some capacity for specific workloads makes sense, but that's mostly a load balancer / job scheduler configuration thing. The latent capability to serve other workloads that happens to be sitting in the same binary is really unlikely to have a material impact if you're not sending it any work.


It was proposed in the sense that Ruby, or Python, or whatever web server language you used (Perl, PHP, even JavaScript) was slow, single-core, synchronous, database-blocked, or whatever else made it "unscalable", so you built a tiny service that only focused on your core bottleneck, like an API call that only returns the coordinates of your map position, on things like AWS Lambda.

Then for some reason some junior engineers decided you could make everything an API call, write each service in the most optimal language, and glue them all together to get functional "scalable" apps.

And thus the horrors of being a web dev in 2016+ began.

Of course it didn't help that SPAs were encouraging backends to be decoupled from frontends and completely hidden in their implementation, so the fact that "it was now possible" enticed backend devs to experiment with multiple API services.


Well, Ruby (on Rails) is slow, single-core, synchronous, database-blocked and hard to scale. But certainly almost everyone realises that's not a feature of it being a monolith, but comes from its language/stack/paradigms (AR, templating, dynamic typing, JIT, etc.)?

I have, certainly, replaced some endpoints in Rails apps with lambdas, Rust, or even standalone Sinatra services for performance reasons.

For example, an endpoint that generated "default stable avatar PNGs" for new users: Ruby just isn't cut out for image generation and manipulation. Rewriting that in a stack that performed 100x better in this use case (we picked Rust) took a lot of heat off the cluster of servers.

Or moving OAuth and registration to a separate Rails app that served those pages - the only endpoints that did HTML - allowing the "main" Rails app to remain leaner by not loading all of the templating and other HTML middleware into memory when it would never be used.

In that sense, I guess, monoliths can have a performance disadvantage: they require the entire app to load stuff for one endpoint or feature even if 99% of requests and users never touch it.

Like the "PDF generation for reports" we once had, that was rarely used but still loaded in every running thread that would never handle anything related to reports or PDFs. Extracting that to a separate "PDF report generation worker" freed GBs of memory on almost all servers.


Yes, this is the sensible and necessary side of microservices...

Now, take your auth logic and put it on a 3rd party, rewriting all of your auth to do so.

Now, make your database shared across multiple distribution platforms and 12 services (aws, cloud, heroku, tableau).

When one of your 15 services goes offline for temporary maintenance, for some reason your entire website goes down.

The 17th service you created has had an IP address change and gone missing, and the response to all URLs is the default Apache gateway page.

The 24th service upgraded from Node 12 and is now broken, while the 26th service, built in Go, doesn't compile on the specific Linux variant one of your devs runs.

Before you know it, you're just doing maintenance work because something is broken and it isn't your code, it's some random downtime or brittleness that is inherent in microservice architecture.


What you describe is common "management of complexity", or, really, the lack thereof.

These problems are independent of "microservices" vs "monolith". They are independent of "using a framework" vs "no framework". They are independent of programming-language or hosting infra.

Managing complexity, in itself, is a daunting task. It's hard in a monolith, it's hard in microservices. Building a tangled big ball of spaghetti is rather common in e.g. Rails - it takes a lot of experience, discipline and dedication to avoid it.

Languages (type systems, checkers, primitives), frameworks, hosting infra, design patterns, architectures, all of these are tools to help manage the complexity. But it still starts with a dedication to manage it today, and still be able to do so in a decade.

Microservices don't inherently descend into an unmanageable tangle of tightly coupled, poorly bounded "services". Just as a monolith doesn't inherently descend into an unmanageable tangle of tightly coupled, poorly bounded "modules".


Image manipulation is the one thing I also run as a microservice whenever needed. I just set up imagor once and never need to manage it in the installation/images of all the other apps. No extra memory needed for shelling out to libvips or ImageMagick.

The PDF use case also sounds like very promising low-hanging fruit.


> very promising low-hanging fruit

That was actually exactly our angle of attack: find the routes, modules, or jobs that were putting the most pressure on the servers.

Then copy the entire app over to separate servers connected to the same DB cluster. Have an app router direct everything except, say, /reports/ to the old servers, and /reports/ itself to the copies on the new servers.
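
The router split can be as simple as a reverse-proxy config; a sketch in nginx, with hypothetical upstream addresses:

    upstream legacy_app  { server 10.0.0.10:3000; server 10.0.0.11:3000; }
    upstream reports_app { server 10.0.1.10:3000; }

    server {
      listen 80;

      location /reports/ {
        proxy_pass http://reports_app;  # the copies on the new servers
      }

      location / {
        proxy_pass http://legacy_app;   # everything else stays put
      }
    }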

Did the old servers see a significant reduction in load? Rip that part out of the old app. Better? Now rewrite, clean up, isolate, or extract the /reports/ part on the new servers. Better?

Then, finally, disconnect the service from the shared DB (microservices sharing a DB is the worst idea ever) and have it communicate via a message bus, via REST calls, or not at all.
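
The REST variant can start as small as this (hypothetical host and path; the point is that the main app stops touching the reports tables):

    require "net/http"
    require "json"

    # The main app no longer reads the reports tables;
    # it asks the extracted service over REST instead.
    def fetch_report(report_id)
      uri = URI("http://reports.internal/api/reports/#{report_id}")
      res = Net::HTTP.get_response(uri)
      raise "reports service returned #{res.code}" unless res.is_a?(Net::HTTPSuccess)
      JSON.parse(res.body)
    end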


Scale in the sense that you can scale that one part of the system independently when it is a microservice.

You can still run into situations where adding a network call is a small overhead compared to the optimizations available once the service has its own database running on its own VM, where you can add more resources just for that specific thing.

Maybe you can rewrite that part in a language that fits the use case better, for only that service.


Modulith - you still program the app, usually as a single-repo project, but you take care of code-level modularization so that, if needed, you can simply extract a separate (micro)service.


A modular monolith is distinct from a "plain" monolith. It's a good middle ground for most web services.


What's a "plain" monolith? Is a modular monolith "just a monolith except we don't suck at coding"?


Let's use MVC for the sake of argument. A regular monolith has lots of models, lots of controllers, and lots of views. A modular monolith has several collections of models/controllers/views, which might be tightly coupled internally, but each collection as a whole exposes a much smaller API to the others. You cannot just reach into an implementation detail of distantly related functionality, even if this functionality is "public" in the programming-language package-visibility sense (i.e. the users repository is visible to the users controller).

This is basically what's accomplished by publishing a Thrift/Proto/OpenAPI IDL from a collection of packages comprising a network service. The key insight is that the serialization and networking parts of this are superfluous; what you actually wanted was the visibility rules.
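
In Ruby terms, a sketch of one such collection (all names hypothetical): internals stay private to the module, and the rest of the monolith only sees a deliberately small facade. Plain Ruby won't fully enforce this, which is why tooling like Packwerk exists, but private_constant gets you surprisingly far:

    module Billing
      # The only surface other modules are supposed to call.
      module API
        def self.invoice_total(customer_id)
          Internal::InvoiceRepository.totals_for(customer_id)
        end
      end

      # Implementation detail, invisible outside Billing.
      module Internal
        class InvoiceRepository
          def self.totals_for(customer_id)
            # ... query billing's own tables ...
          end
        end
      end
      private_constant :Internal
    end

    # Billing::API.invoice_total(42)  => fine
    # Billing::Internal               => NameError: private constant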


A modular monolith has a single executable which runs in different modes, typically depending on environment variables. So you can run three processes in the mode that handles web requests, five processes in the mode that processes events on a queue (e.g. Kafka), etc. Eight processes, running in two different modes, but it's all the same executable. That's the basic idea of a modular monolith.
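
A sketch of that entry point (the env var and consumer names are hypothetical):

    #!/usr/bin/env ruby
    # bin/app: one build artifact, several run modes
    case ENV.fetch("APP_MODE", "web")
    when "web"
      exec "bundle", "exec", "puma", "-C", "config/puma.rb"  # HTTP workers
    when "events"
      require_relative "../config/environment"
      EventConsumer.run  # hypothetical loop draining the Kafka topic
    else
      abort "unknown APP_MODE: #{ENV["APP_MODE"]}"
    end

Run three copies with APP_MODE=web and five with APP_MODE=events, and you have the eight processes, two modes, one executable.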

By "plain monolith" I meant just any kind of monolith.


A distributed monolith! Worst of both worlds! I’m just kidding of course.


Yes, it's called SOA and it's been around for decades at this point.


Hey, we should invent a Protocol for it!


We stopped making microscopic microservices, but we still ship services. Services can be deployed and scaled independently of each other. A monolith that results in a single build artifact/executable, or whose contents must all run in a single pod or application server, is inherently harder for larger teams to work on. Deploying a small change means re-deploying your entire application. Scaling your authz system means scaling up your analytics data producers. Separating your code into services that run and deploy independently of each other means organizations can scale without creating headaches for developers.


> people just went back by being productive building modular "monolithic" architectures, and using "microservices" only when absolutely necessary.

The number of hi-tech companies in the middle-to-large range has increased significantly since the first wave of the Rails era.

The majority of hi-tech companies with listed stock have more complexity than a "monolithic" architecture can reasonably contain.

Sadly, if a company doesn't grow, it will get eaten by its competitors.


> Majority of hi-tech companies with listed stock

Isn't that kinda circular? Generally speaking, companies only list their stock when they grow large. The vast majority of web dev is not happening in those kinds of companies.


There are publicly listed hi-tech companies that may not be that big...


Is this correct? Practically every job advert I've seen claims they are using microservices (and want candidates with such experience).


It is not correct. This is the sentiment people who don't understand k8s often have, because of handwavy complexity blah blah blah. The predictable quote is always along the lines of "unless you're Google scale..." - which misses perhaps 80% of what microservices bring to the table.

Then they go off and build complicated monorepos that take 6 months to learn before you can effectively contribute.

All paradigms have trade-offs. Engineering is about understanding those trade-offs and using them where appropriate. Unfortunately too many people just jump on the "k8s is complicated" bandwagon (because it challenges most of the assumptions they've previously held in software development) and entirely write off microservices without taking the time to learn what problems microservices solve.


People are well aware of what problems microservices solve. They are also aware of all the numerous problems they introduce, such as making debugging much more complicated, for starters.


Well, you're repeating one of the myths yourself here.

Debugging is different and some people find that harder because it's not what they are used to. That does not mean it is actually more complicated.


Debugging IS more difficult. Observability is much harder, sometimes needlessly so. When you're four languages deep and have to enable distributed tracing across a variety of brokers/protocols (http, sqs, grpc, a queue in a database), you know the Lord intended you to spend your time in a more useful way.


So you're describing a bunch of unnecessary things, and stating they make microservices more difficult to debug?

Nobody says you need http, sqs, grpc, and queue (message broker) all in the same system. Nor does anyone say you need to use 4 languages for these things, nor does anyone say you need to only enable observability when there's issues.

Bunch of handwavy "it's complicated" nonsense basically - demonstrating yet again most people don't really understand microservices.

A microservice can be as simple as studying its inputs and outputs and nothing more. All of which can be observed via tracing, logging or whatever you prefer, either on a sampled basis, only on errors, or all the time.
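
To make that concrete, a toy single-purpose service (Sinatra for brevity; the endpoint and fields are illustrative, not a recommendation):

    require "sinatra"
    require "json"
    require "logger"

    LOG = Logger.new($stdout)

    post "/resize" do
      input = JSON.parse(request.body.read)
      LOG.info("input:  #{input.inspect}")    # the entire contract, part 1
      output = { url: input["url"], width: Integer(input["width"]) }  # stand-in for the real work
      LOG.info("output: #{output.inspect}")   # the entire contract, part 2
      content_type :json
      output.to_json
    end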

Microservices make you rethink almost all of your software engineering assumptions - and some people are just not in the right headspace to make the jump. The biggest change is viewing/treating your codebases like cattle more than pets. Automate everything, fail quickly, stop caring about crashes, keep services as small and focused as possible, stop worrying about multiple supported releases, etc.

It is very different, but that doesn't mean it needs to be complex.

That doesn't mean microservices are the solution to every problem. It does mean, however, it's a solution to more problems than some people are willing to admit and/or take the time to understand.


> Nobody says you need http, sqs, grpc, and queue

I'll give you that, were it not for microservices, I wouldn't need as much of the http, sqs, etc., nor would I need to make sense of all of it via tracing/metrics/logging...

I've worked with microservices enough, and heard enough managers preaching the same "cattle, not pets" and "small, focused services" lines, to know that the inevitable outcome is high turnover or several rounds of layoffs, unreasonably high cloud bills, and clients frustrated with high latency, often caused by hops across several services.


Microservices are distributed, and distributed systems are inherently more complex than non-distributed ones.[0]

> A microservice can be as simple as studying it's inputs and outputs and nothing more. All of which can be observed via tracing, logging or whatever you prefer, either on a sampled-basis, only on errors, or all the time.

In a statically-typed monolith, function calls are typed: I can jump to the declaration site with a mouse click, and if I pass the wrong type, the code won't compile. That's way easier than "you can observe the inputs and outputs" (and hope that your observations generalise sufficiently). And yes, there are solutions for this (contract tests, Avro schemas, etc.), but they are inherently more complicated (maybe the most interesting idea I've seen in this space is Unison[1], but I predict that's not gonna gain a lot of traction anytime soon).

Of course, microservices have valid use cases, but they come at a significant cost. I don't understand how one can argue that that cost doesn't exist.

[0]: https://en.m.wikipedia.org/wiki/Fallacies_of_distributed_com...

[1]: https://www.unison-lang.org/


It is inevitably more complicated because there are more moving pieces to coordinate. This applies to all IPC, not just in this context. To some extent you can mitigate this with tooling that tries to hide the complexity, but that usually only works for simple cases.


You're generalizing way too much. There are still tons of teams out there running and creating new microservices.



