It’s not off-base, just outside of your personal experience. This shows in your first comment about updating when you deploy: think about a large and not especially functional environment where there’s stuff written by vendors, contractors, and acquisitions which have been folded into various areas. Those apps might not be deployed very frequently at all, and nobody in ops is sure which ones can handle a rebuild, whether the upstream images for all of those containers are still getting updates, or whether upgrades will break something else. They track Red Hat CVEs but not Alpine’s. The corporate security scanner looks at the base OS but doesn’t know how to introspect containers – or it’s Red Hat’s and can’t handle anything which doesn’t use rpm. Performance is similar: containers add a layer of complexity which requires a lot of new tooling and practice to manage – arguably a change for the better, but many places weren’t doing so well even before the problem got harder.
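To put something concrete behind “nobody in ops is sure”: the first pass at answering that question is often as crude as the sketch below. It’s purely illustrative (not from any real tool), assumes the Docker CLI is on the PATH, and treats image age as a rough proxy for whether upstream patches are being picked up; the 90-day threshold is picked arbitrarily.

    #!/usr/bin/env python3
    """Rough audit sketch: flag images that haven't been rebuilt recently.

    Illustrative only. Assumes the Docker CLI is on the PATH and treats image
    age as a crude proxy for whether upstream patches are being picked up,
    which is exactly the kind of guess ops ends up making when nobody owns
    the image.
    """
    import subprocess
    from datetime import datetime, timezone

    MAX_AGE_DAYS = 90  # arbitrary threshold for this sketch

    def images_in_use():
        """Return the images referenced by currently running containers."""
        out = subprocess.run(["docker", "ps", "--format", "{{.Image}}"],
                             capture_output=True, text=True, check=True)
        return sorted(set(out.stdout.split()))

    def image_created(image):
        """Return the image's build timestamp as a timezone-aware datetime."""
        out = subprocess.run(
            ["docker", "inspect", "--type=image", "--format", "{{.Created}}", image],
            capture_output=True, text=True, check=True)
        # Docker prints an RFC 3339 timestamp with nanoseconds; seconds-level
        # precision is plenty for an age check, so keep only the first 19 chars.
        ts = out.stdout.strip()[:19]  # e.g. 2021-03-01T12:34:56
        return datetime.fromisoformat(ts).replace(tzinfo=timezone.utc)

    if __name__ == "__main__":
        now = datetime.now(timezone.utc)
        for image in images_in_use():
            age_days = (now - image_created(image)).days
            flag = "STALE" if age_days > MAX_AGE_DAYS else "ok"
            print(f"{flag:5} {image}  built {age_days} days ago")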
I still think containers are the best answer to a ton of operational needs, but it’s absolutely true that better tooling is needed for a bunch of problems. This is repeating the classic hype cycle: containers are being billed as a cure-all when in fact they’re still going to require time, staffing, and a commitment to do the job well.
As an example, start with the easiest problem: say you want to prove that you’ve installed the latest OpenSSL patch. On traditional servers, this is a well-solved problem. If you’re using Docker, your options are to buy a commercial offering with a support contract or, if your purchasing process is dysfunctional, build something around Clair, which has a bunch of usable but not great tools. If you’re the ops person looking at that, you’re probably thinking this just made your life worse, even if there’s the promise that in some indefinite future it could get better. I’m hoping that the OSS community starts rounding out rough edges like that, because it’s definitely an enterprise adoption barrier.
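To make that gap concrete, here’s the kind of ad-hoc check an ops team ends up writing in the meantime. This is a sketch of mine, not anything from Clair or a commercial scanner; it assumes the Docker CLI is available and that each container has a shell plus one of rpm, dpkg, or apk, and it simply walks the running containers asking each one which OpenSSL packages it has.

    #!/usr/bin/env python3
    """Ad-hoc sketch: report the OpenSSL packages in each running container.

    Illustrative only. Real scanners (commercial ones, or something built
    around Clair) analyse image layers; this just execs into live containers,
    and it has to try several package managers because the fleet is a mix of
    RHEL-ish, Debian-ish, and Alpine images.
    """
    import subprocess

    # One query per package-manager family; the first one that succeeds wins.
    QUERIES = [
        "rpm -qa 2>/dev/null | grep -i openssl",                  # RHEL/CentOS
        "dpkg-query -W 2>/dev/null | grep -iE 'openssl|libssl'",  # Debian/Ubuntu
        "apk info -v 2>/dev/null | grep -iE 'openssl|libssl'",    # Alpine
    ]

    def running_containers():
        """Return (container_id, image) pairs for every running container."""
        out = subprocess.run(["docker", "ps", "--format", "{{.ID}} {{.Image}}"],
                             capture_output=True, text=True, check=True)
        return [line.split(maxsplit=1) for line in out.stdout.splitlines()]

    def openssl_packages(container_id):
        """Try each query inside the container; return the first that matches."""
        for query in QUERIES:
            result = subprocess.run(
                ["docker", "exec", container_id, "sh", "-c", query],
                capture_output=True, text=True)
            if result.returncode == 0 and result.stdout.strip():
                return result.stdout.strip()
        return "unknown (no supported package manager, or no shell in the image)"

    if __name__ == "__main__":
        for cid, image in running_containers():
            print(f"== {image} ({cid})")
            print(openssl_packages(cid))

Even that only tells you what’s installed, not whether it’s vulnerable – mapping package versions to CVEs is exactly the part a commercial product or something built around Clair is supposed to handle, which is why the tooling gap stings.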