So many companies I've interviewed with are rushing toward microservices and containerization as the cure for all their problems. The trouble is that the champions often have no clue what any of it means.
I recently spoke with a company that had no testing whatsoever for a large production app. When I asked about it, they proudly said "Oh, we do CI. We have Jenkins!" Any tests? "We're going to add them after we move to microservices. Moving away from our monolith is top priority because monoliths are difficult to debug."
I see a ton of companies shitting all over best practices and then chanting buzzwords to pretend that they're all about it. That, or a gross misunderstanding of the concepts behind the buzzwords.
X company uses Docker. We should use Docker. "Um. This code runs on an FPGA." "Does it run Docker?"
If it weren't containers, it would be a new programming language, a framework, or another agile methodology. Your argument has very little to do with containers.
Containers are just a better tool for writing OS configuration scripts. (If your team is full of Chef experts then it's not "better" for your team, but for a lot of teams it is).
What you're saying applies much more to microservices, which are a fundamental architecture choice. Containers aren't; they're just better than a tangle of bash scripts that create stateful VMs. And the problems you're describing apply no matter which tools a team uses.
Remember that you can use containers without complicated orchestration or microservices. I think a better argument would be to untangle these three things and describe how each one can solve certain problems or make the problem worse, and under which conditions.
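As a concrete illustration (the file names and the app itself are hypothetical), a container used without any orchestration is little more than a reproducible build recipe:

```dockerfile
# Illustrative only: a self-contained image for a hypothetical Python app.
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Then copy the application code itself.
COPY . .
CMD ["python", "app.py"]
```

Built with `docker build -t myapp .` and started with `docker run myapp`, this replaces a pile of provisioning scripts without committing you to Kubernetes or microservices.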
> "If it weren't containers it would be a new programming language, framework, or another agile methodology. Your argument has very little to do with containers."
Agreed, containers are just one way to package software; they aren't the be-all and end-all of making software modular.
One example of a modular design abstraction that does not rely on containers is the data access layer ( https://en.wikipedia.org/wiki/Data_access_layer ). The idea is that you can design a service that sits on top of a data store (whether that's an RDBMS or otherwise) and encapsulates the core business logic you want the applications in your business to adhere to. This data access layer can potentially be shared by different applications, and its implementation does not rely on containers. Also, just in case "data access layer" (DAL for short) sounds like a corporate IT term, I'd say the best tool I've ever seen for building DALs is GraphQL.
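A minimal sketch of the idea in Python (all names here are invented for illustration): applications call the DAL, never the store directly, so the business rules live in one place and the backing store can be swapped without touching them.

```python
class InMemoryStore:
    """Stand-in for an RDBMS or any other data store."""

    def __init__(self):
        self.rows = {}


class UserDAL:
    """Data access layer: encapsulates the business rules around user data."""

    def __init__(self, store):
        self.store = store

    def create_user(self, user_id, email):
        # The business rule lives here once, not in every application.
        if "@" not in email:
            raise ValueError("invalid email")
        self.store.rows[user_id] = {"id": user_id, "email": email}
        return self.store.rows[user_id]

    def get_user(self, user_id):
        return self.store.rows.get(user_id)
```

Swapping `InMemoryStore` for a real database client changes nothing for the applications calling `UserDAL`.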
>> Containers are just a better tool for writing OS configuration scripts. (If your team is full of Chef experts then it's not "better" for your team, but for a lot of teams it is).
No, not really. You could argue that Dockerfiles are part image-provisioning script and part process-environment specification, but I think you'd still be missing the main advantage. Dependency isolation is the thing that usually gets touted, but that's only part of the picture. After all, you can isolate dependencies now by baking images; that works great, and it's well proven and reliable. But the VM that runs a single boot image can potentially run dozens of different containers, and with an orchestration platform you can easily and quickly shift those loads around, scale up and down, and reconfigure and redeploy, all with far less overhead than deploying an image to a VM requires. Containers didn't become a popular tool because they don't add value. The use case for them has been clear for over four years now.
> Containers didn't become a popular tool because they don't add value. The use case for them has been clear for over four years now
Well, yes, they did become popular despite not providing anything substantially new. The main value of containers is that a programmer who works with the network now doesn't need (initially) to understand how to configure the network, which is a dumb idea in itself. Everything else containers add boils down to distributing a tarball of a whole operating system so you can run it in a chroot.

From where I stand, it seems that programmers didn't want to learn how to build, distribute, and configure software with OS packages, so they invented their own binary package system.
The irony here is that I had to port our production RPM (RHEL-based) build system to Docker just so it could have a reasonable API and be anything close to maintainable.
Edit: the extreme portability and "free" concurrency were just a bonus.
Containers are not OS configuration scripts: they still need a host to run on, and they still need networking managed. I'm not sure how you figure that one out.

Chef experts likely deploy your underlying hosts and set up other required services (e.g. load balancers, state), while you provide a generic "container" that will run whatever you want and expose consistent ingress/egress points, so the "Chef experts" can run it without caring what it is they are running.
I agree you shouldn't always go for microservices, but you should always have separate, well-defined domains with well-defined boundaries, even in a monolithic application.
I'm a C# guy, so I'm going to speak in C# terms. Even in a monolithic solution, I don't see any reason you shouldn't always have C# projects where all of the internal classes are "internal", with a public API that is either a single class or interface.

That gives you the option of either extracting the project into an in-process NuGet package or carving out a microservice later when it makes sense. It also makes merge conflicts less likely and lends itself to easier testability. In the last year and a half, I've been combining and separating projects from one application to another between microservices, NuGet packages, and even Git subtrees as it made sense.
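A rough sketch of the same pattern, in Python for brevity (the names are invented): the module's internals stay private and callers see only one public class, so the whole thing can later be extracted into a package or fronted by a microservice without touching call sites.

```python
class _PriceTable:
    """Internal detail: callers never import or construct this directly."""

    def __init__(self):
        self._prices = {"basic": 10, "pro": 25}

    def lookup(self, plan):
        return self._prices[plan]


class BillingApi:
    """The single public surface of the 'project'.

    If billing later becomes a NuGet-style package or a microservice,
    only this class changes (e.g. to an HTTP client); callers don't.
    """

    def __init__(self):
        self._table = _PriceTable()

    def quote(self, plan, seats):
        return self._table.lookup(plan) * seats
```

Callers depend on `BillingApi.quote` alone, which is exactly what makes the later extraction cheap.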
I agree. Programming languages give us so many ways to organize domains without the added deployment complexity and rewrites that microservices bring. I wonder why microservice proponents talk as if those approaches don't exist or aren't equally valid. There used to be this term called "refactoring" which has gone out of fashion.
Too many people would use the term "refactor" when they really mean "rewrite".
As far as microservices, one benefit is that it makes blurring the lines between domains much harder. A mediocre developer can easily go into a well designed code base and make a mess of it quickly. In a microservice setup, their mess is mostly contained to one domain at a time.
> Too many people would use the term "refactor" when they really mean "rewrite".
I am a fan of "refactor from zero".
Yes, you're right. People do often mean rewrite. I suspect that using "refactor" instead comes from working in environments where rewriting is seen as akin to proposing sacrificing babies to Satan, but refactoring is a daily event.
There's still a distinction to be made. A rewrite usually means "one day we will turn off this 'legacy' system and replace the whole thing at once", but a "refactor from zero" can clearly involve running two systems at once for a while, slowly offloading functionality until you have replaced each part. The former is usually such a bad idea that it's worth clarifying that you're not doing it.
I agree completely, and I implemented a microservice-like architecture for just that reason. My team consists of junior devs and contractors, and I didn't want bad design to infect the system.

But I made sure we had an easy-to-use CI/CD solution, orchestration, and service discovery.
Complex social systems are highly prone to fads. In technical areas, we call this "cargo culting".
The reason is that a deep understanding of the problem is hard and expensive. A proposed solution, particularly one that's being widely adopted in other places, has the surface appearance of a potential solution, but it is difficult to tell in advance whether or not this is true.
Hence: software, programming, technical, and management fads.
This also appears in clothing, music, diet, arts, and language (most especially dialects and/or slang).
Another problem particularly bad for software developers:
Once we've solved a problem once, we 'understand' it. But we don't want to solve the same problem again. And even if we do show up for another one of these, we have to somehow explain it all to people who won't believe it until they see it anyway. It's boring, thankless work.
For instance I've done things that look a lot like CMSes many, many times. I can predict pretty accurately what the bosses will be pissed about in 6 months. I'm only surprised by a production issue if it's actually dumber than I thought we could possibly be. Yeah, of course that broke. I've been warning you for months.
But if I had a nickel for every time I said "You really don't want to do it that way, do it this way", and people actually listened, I wouldn't be able to afford a cup of coffee. Only the Jr devs listen. The rest think they're special and will avoid the problems that everybody runs into.
At this point, I should probably have my head examined for showing up again. I have resolved that next time I will work on something where I can make all new (to me) mistakes and have a chance to learn. But here's the rub: that's probably exactly what 90% of my coworkers were thinking when they joined this project.
I am coming to a very, very sad realization that teams that 'need a rewrite' probably don't deserve them. The desire for a do-over is a little childish to begin with, but the fact that you can't find a route from A to B means you lack a useful combination of imagination and discipline.
From my personal experiences and those of my peers, I don't think you can trick people into discipline by rewriting the application and then letting them in after you've "fixed everything".
First, you are most assuredly deluding yourself about your own mastery of the problem space. The problems you don't see can kill you just as badly as all the ones you do. Second, the bad patterns will sneak back in the first time you are distracted. Which will be almost immediately, because you just made assurances about when major pieces of functionality will be ready to use.
If the team has enough discipline already, you can start refactoring the code to look more like what you wanted. By the time your rewrite would objectively ship you'll be a long way toward it already (and maybe discover some even cooler features along the way.) Refactoring is the Ship of Theseus scenario. You get a new ship but you still call it by the old name.
CI, containers, and microservices all come under process. Without a culture that fosters a solid engineering strategy to support and enrich them, those processes will die on the vine.