
FWIW, I used to work at a company with a severe case of NIH syndrome, but it worked out well. From what I could tell, the difference was all about mentality: This company did it, not because of weak management, but because of strong management that was trying to push a culture of KISS and technical ownership.

So, whenever a technical team identified a need for X, a lot of time was put into understanding the problem, understanding the company's real needs, and figuring out the simplest thing that would get the job done. If there was an off-the-shelf product that did one thing and did it well (e.g., protocol buffers for serializing datagrams), that would get chosen. If everything out there was a confusing pile of feature creep trying to be all things to all people (e.g., configuration management), an in-house product would get launched.

TBH, the tech stack was a joy to work on. Especially that home-grown configuration management system. So simple, so effective.

I've never worked for Uber or anything, but I've attended a few lectures and conference sessions where Uber engineers presented things they worked on, and my sense was that that wasn't at all what was happening there. Most of the projects I saw seemed to be motivated by resume-driven development. Which is absolutely the way to go if you want to end up with solutions that are complicated, hard to manage, and half-implemented.



Are any of these projects open source, and if so can you point me toward some of them?


Nope. They had no particular business interest in doing so, and, from a technical perspective, doing so would have created a conflict between the needs of the community and the needs of the company. I'm not sure a lot of it would have flown anyway, given how much it went against the grain of modern software development.

For example, automated package management was banned at the company level. Instead there was a central repository of approved packages that everyone used. That's actually another great example of figuring out the simplest solution that would work: It ensured internal consistency in a way that more-or-less eliminated a problem that is more commonly solved with baroque solutions like containerization. It gave tech ops a really, really easy place to go look when they needed to audit our dependencies or push security updates. And it made it easy for legal to keep an eye on compliance. Stuff like that. Honestly, I thought it worked really, really, amazingly well. And I'd never have thought of it, because it flies in the face of 20 years' worth of the entire software field pulling in a completely different, much fancier, more complicated, bell-y and whistle-y direction.
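
To make that concrete, here's a minimal sketch of the kind of audit the approved-packages model makes easy; the file names, manifest formats, and package names here are my own invention, not what the company actually used:

    # Hypothetical audit of a project's declared dependencies against a central
    # approved-packages list. File names and formats are illustrative only.
    import json
    import sys

    def load_approved(path="approved_packages.json"):
        # e.g. {"protobuf": "3.6.1", "openssl": "1.1.1", ...}
        with open(path) as f:
            return json.load(f)

    def audit(deps_path, approved):
        violations = []
        with open(deps_path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                name, _, version = line.partition("==")
                if name not in approved:
                    violations.append(f"{name}: not on the approved list")
                elif version and version != approved[name]:
                    violations.append(f"{name}: {version} != approved {approved[name]}")
        return violations

    if __name__ == "__main__":
        problems = audit(sys.argv[1], load_approved())
        print("\n".join(problems))
        sys.exit(1 if problems else 0)

With one blessed list to check against, "what are we running and is it patched?" becomes a grep, not an archaeology project.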

Also, most of what was going on was so simple you wouldn't even be able to turn it into an open source project. Like, I know that the official wisdom is that you're not supposed to route your event streaming system through a database, but wow, assuming you're willing to shell out for a commercial RDBMS, it's amazing how far you can scale an event streaming mechanism that routes everything through database tables. And don't even get me started on how easy it was to support in production. Sometimes I have to remind myself, open source is only free when your programmers aren't drawing a paycheck.
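
For anyone who hasn't seen the pattern, here's a minimal sketch of what routing an event stream through database tables can look like. I'm using SQLite just to keep the example self-contained (the real thing ran on a commercial RDBMS), and all the table and column names are made up:

    # Minimal sketch of an event stream routed through a database table.
    # SQLite keeps the example self-contained; names are illustrative only.
    import json
    import sqlite3

    conn = sqlite3.connect("events.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id      INTEGER PRIMARY KEY AUTOINCREMENT,
            topic   TEXT NOT NULL,
            payload TEXT NOT NULL
        )
    """)

    def publish(topic, payload):
        # Producers just insert rows; the auto-increment id gives a total order.
        conn.execute("INSERT INTO events (topic, payload) VALUES (?, ?)",
                     (topic, json.dumps(payload)))
        conn.commit()

    def consume(topic, last_seen_id):
        # Consumers poll for rows newer than the last id they processed and
        # persist that id themselves, which gives at-least-once delivery.
        rows = conn.execute(
            "SELECT id, payload FROM events WHERE topic = ? AND id > ? ORDER BY id",
            (topic, last_seen_id)).fetchall()
        return [(row_id, json.loads(payload)) for row_id, payload in rows]

    publish("orders", {"order_id": 42, "status": "shipped"})
    for event_id, event in consume("orders", last_seen_id=0):
        print(event_id, event)

And when something goes wrong, debugging is a SELECT statement away, which is a big part of why this sort of setup is so easy to support in production.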


> For example, automated package management was banned at the company level. Instead there was a central repository of approved packages that everyone used.

Do you mean they had a private dpkg/yum repository? That is a well-supported and somewhat common setup, and it counts as automated package management (your application gets packaged up as a .deb or .rpm with the necessary dependency and installation info).

> It ensured internal consistency in a way that more-or-less eliminated a problem that is more commonly solved with baroque solutions like containerization.

I worked at a company with that kind of build/deployment system. Around the start of the Docker mania, a corporate merger happened, and the acquiring company, in addition to buying the company I worked for, also bought into the hype and insisted that the build and deployment process go through Docker. The early days of shitty Docker tooling meant one of my colleagues spent a ridiculous amount of time getting the system working, with the result that build and deploy times went up an order of magnitude and debugging got a lot harder. Containers are not a substitute for package managers or automated provisioning.


> Do you mean they had a private dpkg/yum repository?

Two things:

All libraries were collected into a single repository in source control that developers would clone onto their machines and use. These were the same libraries that were installed on the production servers, and they were kept very consistent, modulo major platform migrations and the fact that multiple server OSes ran in production.

Binaries were named, versioned tarballs that were deployed by just unpacking them on the target server. That was all that was necessary, regardless of server OS, because of the above-mentioned consistency in which shared libraries were installed on the servers.

With those standards in place, CI, CD, configuration management, all those sorts of things became just wildly simple.
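
As a rough illustration of how simple the deploy step can be under that scheme (the paths and naming conventions here are guesses on my part, not their actual tooling):

    # Rough sketch: deploy a named, versioned tarball by unpacking it on the
    # target server. Paths and naming conventions are hypothetical.
    import pathlib
    import tarfile

    DEPLOY_ROOT = pathlib.Path("/opt/apps")  # hypothetical install location

    def deploy(tarball_path, app_name, version):
        # e.g. deploy("billing-1.4.2.tar.gz", "billing", "1.4.2")
        target = DEPLOY_ROOT / app_name / version
        target.mkdir(parents=True, exist_ok=True)
        with tarfile.open(tarball_path) as tar:
            tar.extractall(target)
        # Point a "current" symlink at the new version; rollback is just
        # re-pointing the symlink at the previous one.
        current = DEPLOY_ROOT / app_name / "current"
        if current.is_symlink():
            current.unlink()
        current.symlink_to(target)

When the binary only depends on libraries you already know are on every server, that really is the whole deployment story.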

It's a bit like Bazel's golden handcuffs: If you're willing to let go of the basket case of flexibility that C and C++ have traditionally offered, Bazel can deliver you a build system that's just a dream to work with compared to old friends like autotools and CMake.


Name this unicorn?



