Quit Covering Up Your Toxic Hellstew with Docker (neutrondrive.com)
124 points by cyberpanther on Sept 26, 2014 | 51 comments


I read the opening paragraph and thought 'ah well, another boring didactic angry developer rant.' Just as I was about to close the tab, my eye caught the start of the second paragraph:

This reminds me of my days in the Space Shuttle program.

Which, to put it mildly, is something of a credibility boost. So I finished the article.


"Which, to put it mildly, is something of a credibility boost. So I finished the article."

Yes, but then when he repeated the line with "this reminds me of my days with Perl", suddenly I started hearing the entire post in Grandpa Simpson's voice...


"I used an onion as my scripting language, which was the style at the time."


I enjoyed that brain reset feeling. I think I'm going to borrow the phrase...

"You think a reboot is going to help? This reminds me of my days in the Space Shuttle program."


An attention-getting phrase, but not the best one in the post as far as I'm concerned -- I'm going through our g/h issues just now and adding comments like "people, let's fix this toxic hellstew!"


I like to think the merit of an idea comes from the quality of its contents, not the stamp on the envelope.


Is it?

I mean, yes, it's literally rocket science, but the shuttle program fell well short of its reliability goals.


Don't see why I'm getting downvoted here, honestly. Even NASA management has admitted that the Shuttle program fell well short of the planned goals, and left us with much worse capabilities than if we had never chased the "reusable thingy with wings" pipe dream.


None of the reliability or capability issues with the Space Shuttle were software-related, as far as I know, so I'm not sure how your objection is relevant. The Shuttle is generally regarded as having one of the most complex and quality-focused software design efforts in history.

http://www.fastcompany.com/28121/they-write-right-stuff


You are probably correct that the Shuttle software was highly reliable and highly complex. But the real failure of the Shuttle was the cost. I think the Shuttle was originally sold to Congress as being able to do 40 missions a year. We only ever did 4 or 5 a year at most. The cost was way too high. And while the software worked flawlessly, the process to create it cost too much.

Also, the amount of change to the software was relatively low because of the slow bureaucratic process. So you have a high cost for not much change. The computers used in the Shuttle were extremely old towards the end and, because of Moore's law, really laughable in terms of processing power. Towards the end NASA was buying old hardware parts for the Shuttle program on eBay. http://www.nytimes.com/2002/05/12/us/for-parts-nasa-boldly-g...

NASA, with the Shuttle program, didn't fight cost and complexity hard enough. Hence you had large groups of engineers doing tasks that can now be handled by better software. Our "Data Group" should have been removed long before the whole program shut down, but half of our department refused to change and use the new simulator.

I think Elon Musk gets this with SpaceX, and I believe SpaceX vehicles already operate at lower costs. Now he just has to prove their reliability and keep becoming more efficient.


To be fair, going to the moon was once a pipe dream too. As was flying around the world.

I don't think following this pipe dream was the wrong move to make. While it could have had better execution and resulted in aerospace setbacks, overall I think the idea was a good one, perhaps just far too far ahead of its time.

One day, I hope we will have a reliable reusable thingy with wings. It just has to beat the cost of a non-reusable tube with giant flames.


Yes, simplicity leads to understanding, and I don't understand why more people don't get this simple concept. I've dealt with codebases with such a horrendous build process that it doesn't matter what kind of sugar you sprinkle on top, because making any change is practically impossible. That complexity has to live somewhere: if you offload it to the Dockerfile, it's still in the Dockerfile. The problem at the end of the day comes down to the fact that most developers either don't understand enough to build proper build pipelines, or they're lazy, or they don't think complexity in the build pipeline is anything to worry about. Docker does not change those things.
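
To make "the complexity has to live somewhere" concrete, here's a hypothetical Dockerfile (every package, path, and script name below is invented for illustration) that relocates a messy build instead of simplifying it:

    # A made-up example of a build process hidden, not fixed, by Docker
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y build-essential

    # Unexplained private fork of a dependency, vendored straight in
    ADD vendor/patched-libfoo /opt/patched-libfoo
    # "--force or it fails" -- a workaround nobody can explain anymore
    RUN /opt/patched-libfoo/install.sh --force

    ADD . /app
    # The build runs twice because the first pass "usually" fails
    RUN cd /app && (./build.sh || ./build.sh)

The image builds, so the pipeline looks green, but every fragile step is still there, just frozen into layers.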


I agree completely, and I think git vs. CVS is a good illustration of why people don't always understand the value of it. From a certain point of view, branching and merging was easier for most developers under CVS, because they never had to do it. It was a Big Deal and was done by special people. If you tell someone who is using CVS and never deals with branching and merging that they're switching to git, and oh, by the way, they'll be doing their own branching and merging from now on, that sounds like a horrible step backwards to them. Telling someone that you're going to make something difficult accessible to them can sound extremely threatening.


I think the gist of your sentiment is that barriers to entry aren't necessarily bad. I agree. I disagree that DVCS (as a satisfied Fossil user, I refuse to call these general concepts "git concepts") is the right way to describe it, though. I feel a closer analogy (from experience) is papering over horrible app performance by throwing a proxy like Varnish in front of it. You've still got garbage under the hood.


I don't mean to agree with people who resist this type of change, just to explain how it looks from their point of view. A CVS user sees branching and merging as difficult, painful, time-consuming things to be avoided, so they're horrified if you tell them they're going to be branching and merging on a daily basis. They aren't right (generalizing from their CVS experience has given them a mistaken idea of what using git or Fossil is like), but it's easy to understand why they're afraid. Also, ironically and unfortunately, the fact that a git (or Fossil) advocate is enthusiastic about the ease of branching and merging is likely to make the CVS user think they're crazy, since you'd have to be crazy to love branching and merging in CVS.


>Yes, simplicity leads to understanding and I don't understand why more people don't get this simple concept.

I too have seen this time and time again. It pops up constantly in poorly managed software projects, especially those that have changed hands over the years. In addition, I believe simplicity is lost because many engineers/managers struggle to step back from their immediate task and instead visualize the whole system in its entirety. This is difficult and takes a special combination of knowledge and procedure to do well.


Chronic complexity is usually job security for someone.


What if the problem actually is complex? Let's say that I'm building an embedded firmware module for a chipset that can only be compiled with a proprietary compiler under Visual Studio, and you're a Unix shop with Bamboo as the continuous integration server.

Then that means the Unix bamboo server needs a Windows agent that can execute the cross-compiler inside a MS VC environment. It certainly is complex, but what piece are you going to change?


You are always going to have complexity, but we often forget to simplify before adding more layers and tools onto our processes. I singled out Docker in particular because it is so awesome and powerful that I feel a lot of people are just using it to cover up crappy processes. Eventually that is going to come back to haunt you in terms of slower development.


Honestly? Port it to GCC. (Easier than LLVM, IMO)

That, or switch to a different chipset, this time taking care to select something that includes unix build tools.

Neither of those is trivial, nor are they cheap. But rewriting their simulator in C, as described in the article, couldn't have been cheap either.


Perhaps the instruction set is undocumented? (It happens.) Alternative chipsets cost 10x as much (that also happens, and is probably correlated with documentation quality).

Price is an important factor, and the messy choice is often not unreasonable.


Why are you a unix shop if using Visual Studio is a hard requirement? The complexity is entirely due to you choosing tools that work poorly together, so instead pick tools that work well with the ones you can't change.


Seems like you're suggesting running a completely separate parallel build platform for one project. The parent seems to have taken the pragmatic, but messier choice.

Sometimes, there just aren't good clean solutions, and you have to be messy and pragmatic. That, of course, is often used as an excuse for being lazy as well.


This reminds me of the "Every dev should be senior" mindset that is all too common in this industry.

I don't see how supposing every DevOps specialist is replaceable by average engineers is a real solution.


"Every dev should be senior" is, in my experience, a result of dumb hierarchies and skewed goals. Where I work (as a manager), we're given a certain number of "heads" (headcount) each quarter. As managers, we get to decide what level we want to hire at. There are no tradeoffs presented to us. We're basically asked, "would you like a junior, senior, or staff level engineer?" Most managers just try to hire as senior as possible because it increases the size of their fiefdom and gives them more clout. A staff engineer will command more respect in meetings and will get his/her way more often. It's a toxic culture of unaligned goals, where managers aren't penalized for spending too much on their teams and are rewarded for political successes rather than shipping code. If I wasn't resting-n-vesting, I'd be out of there.

The beauty of Docker is that it, to some extent, does turn average engineers into devops specialists. They just specialize in a very specific, somewhat abstract/virtual environment. Then the real devops specialists can write tooling around that virtual environment that doesn't require detailed knowledge of what's inside the black box. It's an interface between application and deployment engineers that obviates the need to coordinate on deployment process.


I think it's something different: "Junior devs shouldn't reinvent the wheel in production [1] until after they understand the solutions of the senior members of their field." I.e., don't use MongoDB until you fully understand SQL, and don't build your own fancy planning algo until you understand linear programming.

[1] When learning, reinvent all the wheels.


Also, don't use SQL until you fully understand MongoDB.


No. SQL is the culmination of years of research into data storage systems. It's a well established solution to a large number of problems. MongoDB simply isn't.


So just use something you don't understand because someone said so. Got it.


No. Trust what's battle-tested in production.

Use MongoDB for all your pet projects, all your development. Joining data from documents is a good way to learn where SQL came from.

Just don't do it in production before you master it.


And don't use MongoDB for data that matters ;-)


Can't upvote this enough. Expertise is very difficult to replicate, yet it's the first solution everybody reaches for when things don't quite work the way they want. It just piles onto the number of domains a developer is expected to fully understand.


The article is essentially discussing frustration at the hiding, by abstraction, of technical debt incurred in adopting poor software architecture and/or development processes.

Some potentially relevant quotes[1]:

Zymurgy's First Law of Evolving Systems Dynamics: Once you open a can of worms, the only way to recan them is to use a larger can.

Ducharm's Axiom: If you view your problem closely enough you will recognize yourself as part of the problem.

The organization of the software and the organization of the software team will be congruent. (paraphrasing of Conway's Law)

Separation of concerns ... a necessary consequence of loss of resolution due to scale ... a strategy for staying sane. (Mark Burgess, In Search of Certainty: The Science of Our Information Infrastructure, 2013)

[1] Taken from my fortune clone https://github.com/globalcitizen/taoup


Actually the more complex it is, the more beneficial encapsulation can be.

I think maybe I know what his actual problem is: the build cache either didn't detect a change to requirements.txt, or it's _always_ detecting a change.

In the Dockerfile you want to ADD your requirements.txt first, then RUN the pip install, then ADD . last (see the sketch below).

http://stackoverflow.com/questions/25305788/how-to-avoid-rei...

Also check whether a --no-cache flag is being passed to docker build.
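
A minimal sketch of that ordering (the base image and /app paths here are assumptions, not from the post):

    FROM python:2.7

    # Dependency manifest first: this layer, and the pip install below,
    # stay cached until requirements.txt itself changes.
    ADD requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt

    # Application source changes often; ADDing it last means routine
    # edits don't invalidate the cached pip install layer above.
    ADD . /app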


Thanks for this comment! I read this whole thread primarily to find this information.


I have been working recently with a PHP application whose installation instructions are "Run this VirtualBox image inside your network, and without a proxy in front of it because we are not properly configured".

This PHP application is not trivial, but also not very complex. Yet its developers do not provide it as a PEAR package, and replied that this kind of thing is superfluous these days; VMs are simpler.

People like these developers fail to understand that there is a lot more in a VM than just their software (from the kernel to all the exposed services), and that by distributing a VM they also become the maintainers of a very complex set of dependencies. Not that they care: it took them two months to release a VM not vulnerable to Heartbleed. Let's see how long before they release a VM not vulnerable to Shellshock.


This makes sense. I work in a pretty convoluted SharePoint environment. It is completely impossible to spin up a development environment without dozens of scripts and knowing exactly what lists to manually create and what data must be present inside them.

This means that new-hires are handed a cryptic and seriously out-of-date document with instructions on how to set up a proper VM environment.

They check out their code.

They deploy... but wait, the deploy fails because of missing data, missing document types, missing lists...

Open up SharePoint, enable some doc types not turned on by default, turn on some more features, add a list. Deploy again; no wait, it died again; oh, now it's a different doc type and a different feature that the deployment doesn't turn on by default... etc.

The end result is a mess that requires days to get up and running, not hours.


> I work in a pretty convoluted SharePoint environment

For some reason I intuited that you were not going to talk about how well the organization has dealt with said complexity. I'm curious if there is any way to FFI into SharePoint and start using tools that don't actively sabotage their users.


So our answer has just been to hire more developers, more and more and more developers. As the complexity grows, so does headcount.

This is why I thank the stars that I am an open-source developer, focusing on Linux / Python / Django development. I cross over into SharePoint only when we begrudgingly use it as a CMS for a Windows 8 app or similar.

There is an effort now to write an external service bus to manage a lot of the business-logic complexity that we've baked into SharePoint, with the eventual goal of having SharePoint and other resources leverage this service bus instead.


I'd like to know if anyone has anything nice to say about SharePoint. As a user I find it miserable in all browsers, and have obviously avoided doing anything with it as a developer.


It's better than staring into the computer as if it were a dark abyss, screaming and longing for a way to build something. It's easy to discount how useful this is for people who really have no tools or means otherwise (not that more specialized services like, without loss of generality, Wufoo, aren't way better). The general discontent has to do (mostly) with "SharePoint Engineers", which is a bit like having someone build a house with Legos.


I guess a regular wiki is too much to ask for...

(For the omg-cryptic-markup crowd there is always Google Docs. I seriously wonder why that hasn't crushed all trivial Sharepoint instances yet.)


> I'd like to know if anyone has anything nice to say about SharePoint.

It's marginally less of a total failure for issue tracking than Excel.

I'm not sure that that counts as something nice.


Edward Snowden's last job was as a SharePoint administrator, IIRC.


A very valuable concept, regurgitated with [flavour-of-the-year] Docker as the focus.

The article seems to portray Docker as the _cause_ of this anti-pattern, without any sort of context for why Docker and not xyz, or why not your implementation of xyz.


I realize a lot of people use Docker for a lot of different reasons, but the technology seems more geared toward solving deployment concerns (especially seeing as dotCloud, not Docker, is the PaaS company).


I don't know much about Docker. What is he saying it makes easier?


The problem is the carpenter, not the hammer. Please stop blaming the hammers in your headlines.

The tool was used incorrectly and it resulted in badness. Bringing Docker into it is pointless. This would have happened with Salt, Fabric, Chef, Puppet, etc. with the same team.


Even in the headline, it was obvious that Docker wasn't the problem. In the article itself it was abundantly clear that Docker was just the misused hammer closest to the front of the author's mind. It's almost as much a defense of Docker from abusers as anything else.


Nobody blamed hammers, or said they were bad. He said stop using hammers to smash other people's thumbs.


Testing of environments is needed on/with Docker as well...



