The argument that Kubernetes adds complexity is, in my opinion, bogus. Kubernetes is a "define once, forget about it" type of infrastructure. You define the state you want your infrastructure to be in, and Kubernetes takes care of maintaining that state. Tools like Ansible and Puppet, as great as they are, do not guarantee your infrastructure will end up in the state you defined, and you can easily end up with broken services. The only complexity in Kubernetes is the fact that it forces you to think about and carefully design your infra in a way people aren't used to yet. More upfront, careful thinking isn't complexity. It can only benefit you in the long run.
There is, however, a learning curve to Kubernetes, but it isn't that sharp. It does require you to sit down and read the docs for 8 hours, but that's a small price to pay.
A few months back I wrote a blog post[1] that, by walking through the few different infrastructures my company experimented with over the years, surfaces many reasons one would want to use [a managed] Kubernetes. (For a shorter read, you can probably start at [2].)
It's pretty common for new technologies to advertise themselves as "adopt, and forget about it", but in my experience it's unheard of for any of them to actually deliver on this promise.
Any technology you adopt today is a technology you're going to have to troubleshoot tomorrow. (I don't think the 15,000 Kubernetes questions on StackOverflow are all from initial setup.) I can't remember the last [application / service / file format / website / language / anything related to computer software] that was so simple and reliable that I wasn't searching the internet for answers (and banging my head against the wall because of it) the very next month. It was probably something on my C=64.
As Kernighan said back in the 1970's, "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?" I've never used Kubernetes, but I've read some articles about it and watched some videos, and despite the nonstop bragging about its simplicity (red flag #1), I'm not sure I can figure out how to deploy with it. I'm fairly certain I wouldn't have any hope of fixing it when it breaks next month.
Hearing testimonials only from people who say "it doesn't break!" is red flag #2. No technology works perfectly for everyone, so I want to hear from the people who had to troubleshoot it, not the people who think it's all sunshine and rainbows. And those people are not as kind, and make it sound like the cost is far more than just "8 hours reading the docs" -- in fact, the docs are often called out as part of the problem.
Disclaimer: I work for Red Hat as an OpenShift Consulting Architect.
If you want a gentle, free introduction to OpenShift (our Kubernetes distribution), I recommend trying out the Katacoda portal Learn OpenShift [0]. Katacoda [1] also has vanilla Kubernetes lessons.
> As Kernighan said back in the 1970's, "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"
Any fool can write code that a computer can understand. Good programmers write code that humans can understand. – Martin Fowler
Just because people tell you it can’t be done, that doesn’t necessarily mean that it can’t be done. It just means that they can’t do it. – Anders Hejlsberg
The true test of intelligence is not how much we know how to do, but how to behave when we don’t know what to do. – John Holt
Controlling complexity is the essence of computer programming. – Brian W. Kernighan
The most important property of a program is whether it accomplishes the intention of its user. – C.A.R. Hoare
No one in the brief history of computing has ever written a piece of perfect software. It’s unlikely that you’ll be the first. – Andy Hunt
In my experience, the clusters that break are clusters that are architected poorly and/or go unmaintained. There are also some of us who believe clusters need to be treated more like cattle than pets.
Disclaimer: I work as a Red Hat Consulting Architect focused on OpenShift (our Kubernetes distro).
I didn’t say anything about being a genius or being stupid. But it does help to have something to reference [0][1].
In short, there are several things that help.
1) Start with 3 Master nodes that sit behind an Enterprise Load Balancer (or even HAProxy) and a VIP (console.example.com)
2) We choose to have 3 Infrastructure nodes that perform Container load balancing (app routers), log aggregation, metrics collection, and registry hosting
3) We then put another VIP and load balancer in front of the application subdomain (*.apps.example.com) so that apps can be exposed outside the cluster (myjavaapp.apps.example.com); there's an example Route sketched below
4) Stand up 3 or more worker nodes
Yes. This is more complex than plain old containers. Yes. This is harder than putting an Apache server with RoR and MariaDB on the same Linux VM. But there is lots of value there if you put in the effort. Lots of consultants run “all-in-one” VMs or minishift/minikube on their laptops or homelabs.
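To make step 3 concrete, here is roughly what exposing an app under that wildcard subdomain looks like as an OpenShift Route. This is a minimal sketch; the app name, hostname, and port are illustrative, not from a real setup:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: myjavaapp
    spec:
      host: myjavaapp.apps.example.com   # resolved by the *.apps.example.com wildcard
      to:
        kind: Service
        name: myjavaapp                  # the Service fronting the app's pods
      port:
        targetPort: 8080

The app routers on the Infrastructure nodes pick up Routes like this, and the load balancer/VIP in front of *.apps.example.com sends traffic their way.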
This is not my experience with k8s at all. Sure, everything will work fine when it works (although I don't think it's as easy as you make it out to be), but when something unexpected happens it's a massive pain to debug.
"8 hours" initial investment is already large for a small/simple scenario; and it's not the full costs either, because fixing problems will be much harder/time-consuming.
I wrote a post about this just the other day[1]:
> What does it mean for a framework, library, or tool to be “easy”? There are many possible definitions one could use, but my definition is usually that it’s easy to debug. I often see people advertise a particular program, framework, library, file format, or something else as easy because “look with how little effort I can do task X, this is so easy!” That’s great, but an incomplete picture.
[..]
> Abstractions which make something easier to write often come at the cost of making things harder to understand. Sometimes this is a good trade-off, but often it’s not. In general I will happily spend a little bit more effort writing something now if that makes things easier to understand and debug later on, as it’s often a net time-saver.
I strongly agree that "when something unexpected happens it's a massive pain to debug"; though it's somewhat under-appreciated how often this applies to open source software in general.
I don't think it matters if it was a k8s issue or not, what matters is that k8s made it much harder to get to the bottom of it.
We actually had similar issues when we deployed k8s. In the end it turned out to be a misconfiguration, but took weeks to figure out, and only because our entire dev team looked at it (and not the k8s guru who implemented it all).
The problem is that a k8s cluster requires hundreds/thousands of lines of YAML configuration just for the core components, including a choice of network overlay. Everybody copies the same default configs hoping they will work out of the box, without understanding how each option actually affects the cluster. Add to this the container and YAML upgrades that should be applied per component every month or so, and it's nigh impossible for most companies to handle.
Kubernetes is far too complex to set up from scratch. The only way to reduce the complexity – or rather, to offload it – is to use managed Kubernetes via AWS or GKE. The fact that using AWS or GKE is effectively the only viable method for running a production Kubernetes cluster speaks volumes about how non-simple the stack truly is.
New kubeadm makes it easier to set up a cluster from scratch. But the complexity truly is there and it's difficult to understand all parts of it. But clusters are complicated. Running hundreds of workload jobs in a manageable way shouldn't be easy, right? Maybe it is not a problem of k8s; the fact is we now require a complicated stack with many moving parts, and it is not easy to manage it reliably.
You need to be able to debug and separate out problems no matter your infrastructure.
You'll get lots of SMEs that can use a product but can't trace or profile, and it isn't the underlying product's fault beyond hype-driven development.
If you want to spin up ephemeral environments to test your code, kube is one of the easier ways to achieve this. Similarly, it helps you solve problems such as app health, dying nodes, etc.
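For the app-health part, for example, the mechanism is liveness and readiness probes on the pod spec. A minimal sketch; the image, paths, and ports are made up:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app
    spec:
      containers:
        - name: web
          image: example/my-app:1.0
          ports:
            - containerPort: 8080
          livenessProbe:                 # kube restarts the container if this keeps failing
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:                # traffic is only routed once this passes
            httpGet:
              path: /ready
              port: 8080
            periodSeconds: 5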
It's a great product, especially when you've scaled past a single app and a single team, but that's largely because it's a framework, similar to how Rails is, and if you do something serious with it, you need to know how it works.
The alternative is you build your own framework, but your new hires will prefer k8s.
Realistically though, use GKE and forget about it until you grow. Otherwise you're using something closed or something bespoke, the latter being fine for a 1-2 man team, but kinda pointless when you can have a managed service.
If you read the book you would not call it advertising material. It's very open-minded and it's honestly not a book that is designed to sell you on Pivotal PAS in any direct way.
The first line in the first chapter is more like an "ok, full disclosure, this is advertising material" but if you read the full text, the Pivotal suite of tools is hardly mentioned. It merely paints a picture you can understand to demonstrate that some kind of platform is definitely needed for basically any enterprise of nontrivial size. (Now if you want to read the real promotional material, get yourself a copy of "Cloud Foundry the Definitive Guide")
This book is only advertising in the sense that if its message is delivered successfully, you will concede that you should use a platform, and we will both agree on the meaning of the word platform. That's it. It's only a bit tongue-in-cheek at first, since Pivotal actually makes such a platform, when it says on page 1 that "our platform is the best and only platform" – the rest of the book isn't at all like that.
But if you take it in context for the date of publication, it might make more sense that it was framed this way. Obviously it does not pre-date Heroku. It is a different platform from Heroku. But it is a platform, and a cloud-native one. It's an example of "why you might not need Kubernetes." It also might help to note that this book was published in 2016.
It might be (read: definitely is) harder to assert in 2019 that there really isn't any other platform worth considering (than Cloud Foundry was in 2016), but for a medium-large enterprise in 2016 I'm honestly not so sure. Kubernetes was still new on the scene. People in 2016 in large part still needed to be convinced that such a platform was needed, or even possible, since few existed in the wild/open source commons. (Name your favorite platform that is older than 2016 if you still disagree with the thesis. There are surely several right answers, and I seriously don't doubt this at all.)
To take the contrary position to the extreme a bit: present company and the original poster included, I think nobody serious (I hope) is suggesting "you may not need Kubernetes, and there is also no serious contender which you should consider in the race for a thing like K8s that you may need for your enterprise, either." Even the original post recommends something instead (Nomad). There is a decent chance you are already sold on this idea if you're reading this article.
The central argument isn't that you should definitely pick Nomad, it's that you really do need a platform that works for you, even if it only gets you 80% of the way there. The author of "Maybe You Don't Need Kubernetes" started out explaining why Kubernetes just wouldn't work for them. The book, just like this article, comes off a bit like "hey, why not this product instead of whatever it is you're doing there."
But just a little. I strongly agree myself – "In 2019, pick something, anything. (Even Nomad) Just don't do nothing."
I agree to some extent. One of the key value propositions of Kubernetes is indeed in its abstractions, which are mostly quite good. And I really like its declarative approach.
However, I've been dealing with its ins and outs for a while now, partly because I'm developing tooling around it, and I've found there are already a lot of idiosyncrasies in there. Some APIs have very frustrating flaws, and most annoying of all is how tightly some core functionality is coupled to kubectl. A remarkable amount of logic is embedded in the CLI, as opposed to the API layer, and if you really get into the weeds you're likely to start pulling your hair out.
Which is to say, even once you get through the learning curve, you may still find yourself struggling with it as you use it. Sometimes when you declare your intent, you find that it just doesn't work, and it can take a while to figure out why. Before you know it, you're using various operators that are meant to mask some of the deficiencies in the core orchestrator, and by extension you're outside of the core abstractions. And don't get me started on Istio... (but that's a tangent).
Anyhow. If you're not trying to do anything unusual, it's a good orchestrator. Not perfect, and for sure heavy, but a good choice for a lot of use cases.
The internet is both full of people complaining that Kubernetes is too complicated and full of people arguing that it makes things easier.
The people saying it's simple are looking up from the world of IaaS and non-declarative orchestration software (Ansible, Puppet etc) and saying "hey this is much easier".
The people saying it's complicated are looking down from the world of PaaS and FaaS (Heroku, AWS Lambda, Google App Engine) and saying "why the hell would I want to manage this?"
I agree with this: after you know k8s, it is pretty darn great. Two more points:
1. K8s of 2019 is significantly better than K8s of 2017. With stable Deployments, StatefulSets, and CronJobs (see the sketch below), binaryData in ConfigMaps, etc., I haven't missed a single feature in my latest setup: I just typed up helm'd manifests and things just worked. This is in stark contrast to the dance around missing features with initContainer and crond-container kludges you had to do in 2017, when I built my first Kubernetes setup.
2. People tend to conflate maintaining a k8s cluster with using it. Setting up your own k8s cluster with HA masters is likely still a royal pain (disclaimer: my attempts at that are from 2017, so this could be wrong; see point 1). But for a small company, whipping up a cluster from GKE is a breeze, and for big corps, company-wide cluster(s) set up by an ops team (OpenShift has been ok) is the way to go. The end result is that as a developer you just push containers and apply manifests.
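To illustrate point 1: a job that needed a crond-container kludge in 2017 is now just a small CronJob manifest. A minimal sketch, with the schedule, image, and names made up:

    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: nightly-report
    spec:
      schedule: "0 3 * * *"              # every night at 03:00
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: report
                  image: example/report-generator:1.0   # hypothetical image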
I am scared of Helm. I would expect most people install a Helm chart like a package in Ubuntu, without knowing what it is going to do, what images are going to run, and why. What if something malfunctions? What if a helm upgrade doesn't succeed? Are you going to call a Helm support line?
I was referring to using Helm to configure my own charts to work in different test environments. With regard to public Helm charts, yes, you want to understand what the chart installs. Most of them are really dead simple if you know k8s.
> Tools like Ansible and Puppet, as great as they are, do not guarantee your infrastructure will end up in the state you defined and you easily end up with broken services.
False dilemma. Ansible and Puppet are great tools for configuring Kubernetes and Kubernetes worker nodes, and for building container images.
Kubernetes does not solve for host OS maintenance; though there are a number of host OS projects which remove most of what they consider to be unnecessary services, there's still a need to upgrade Kubernetes nodes and move pods out of the way first (which can be done with e.g. Puppet or Ansible).
As well, it may not be appropriate for monitoring to depend upon Kubernetes; there again you have nodes to manage with an SCM tool.
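For the "move pods out of the way first" part, a rough Ansible sketch is below; the host group, flags, and use of plain kubectl via the command module are assumptions, not a recommendation:

    - hosts: k8s_workers
      serial: 1                          # one node at a time
      tasks:
        - name: Drain the node before maintenance
          command: "kubectl drain {{ inventory_hostname }} --ignore-daemonsets --delete-local-data"
          delegate_to: localhost

        - name: Apply OS updates on the node
          package:
            name: '*'
            state: latest

        - name: Put the node back into rotation
          command: "kubectl uncordon {{ inventory_hostname }}"
          delegate_to: localhost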
> There is, however, a learning curve to Kubernetes, but it isn't that sharp. It does require you to sit down and read the docs for 8 hours, but that's a small price to pay.
Last time I sat down with k8s docs it was a broken mess.
It was impossible for me to follow any of the examples I tried all the way through.
I would not recommend most people run their own k8s cluster. Even Amazon's EKS is enough of a pain in the ass that I don't think it's close to a "define once, forget about it" type of experience.
However, this is pretty much my experience using GKE on Google Cloud. You can get a cluster up and running and start deploying stuff to it in a matter of minutes.
I have never heard any infrastructure explained as “define once, forget about it.” Ever. Everything requires maintenance, monitoring, upgrades, security, etc. And things just randomly break.
I like Kubernetes but it is not without problems. And when your entire business is now leaning on Kubernetes, it can really hurt when it has problems and you can’t figure it out quickly.
And if your expertise is not running infrastructure, a lot of people, as this post explains, are better off on AWS or Google Cloud using managed services.
Or you might have a problem with the underlying storage platform that Kubernetes uses, so you now have to be a storage expert.
I think it’s telling how many job offers I receive from companies that are already running Kubernetes and need help with it.
My personal take is that it's because Kubernetes has a lot of compositional features and abstractions, rather than a bunch of one-off ones that don't play well with other features.
A good example is label selectors with Services, Pods, and Deployments.
You create a deployment, which is basically a scaling group for pods; pods are the unit of deployment and run the containers.
A service exposes your pods' ports to other components through a cluster-local virtual IP, or a load balancer.
You cannot simply say "this service is for this app". You must instead say: this service will map its service port to the following pod port, for all pods that match the following label selector.
There are 5 different concepts (pods, deployments, services, labels and selectors) you need to learn before you can actually make your app accessible. But what's nice is that these concepts are used elsewhere in Kubernetes.
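A minimal sketch of that wiring, with hypothetical names and ports; the only thing tying the Service to the pods is the matching label:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app                  # label the pods...
        spec:
          containers:
            - name: web
              image: example/my-app:1.0
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      selector:
        app: my-app                      # ...and select them here
      ports:
        - port: 80                       # service port
          targetPort: 8080               # pod port it maps to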
Labels and selectors are used to query over any group of objects, and allow grouping different things in arbitrary ways.
Pods are concepts that get reused by anything that deploys a container. There are Deployments, StatefulSets, DaemonSets, Jobs, and CronJobs; these all create pods. So it's nice that pods are their own decoupled concept, but now you've got to understand how they're used and how they relate to the other resource types.
Part of what's great about this all is also that now that people are building custom operators/controllers, you can understand how to use them fairly easily because they build on all these existing concepts.
I find that in Kubernetes I do a great deal of tedious boilerplate work. Sometimes I just want to run some damn software and let the platform do the boring stuff.
Disclosure: I work for Pivotal, we have a hand in a few such things.
I've tried that, but then been burned by PaaS switching things up in unexpected ways. Best to make the investment and just own it. I absolutely share the sentiment, though.
Good news is, it only took me a week to really pick up k8s.
The documentation around it isn't that great. Once you know the key actors -- which are a dozen or so key types of resources -- and how they play together, you understand how to create the configuration for your environment.
The YAML files are quite intimidating at first, if you look at examples out there.
I'm probably not a great teacher, but I think I could boil it down to a slide deck of no more than 25 slides and an hour's worth of time, and clearly teach how to interface with Kubernetes for a basic web app w/ horizontal scaling, Let's Encrypt SSL certificates, etc.
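For instance, the horizontal-scaling part of that hypothetical web app boils down to one small object; the target Deployment name and thresholds here are made up:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70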
[1]: https://boxunix.com/post/bare_metal_to_kube
[2]: https://boxunix.com/post/bare_metal_to_kube/#_hardware_infra...