
In my experience this is completely true of large organizations: "Most operations departments are inflexible and inefficient because they rely on specialized engineers glued together with manual processes and a large IT bureaucracy – all fundamentally at odds with the fast-moving, application-centric world of cloud computing."

This is a cycle. If the management of the Operations organization is measured on reducing downtime, they control what they can: Release & Change Management. This kills frequent small releases, so development teams have to build big releases. If management in the development organization is measured mostly on delivering on schedule, they cut scope. You end up with development organizations delivering the minimum needed to meet the project's mostly artificial timelines for huge releases. Suggesting small, frequent releases sounds good to Development (assuming the operational paperwork associated with releasing can be reduced), but it jeopardizes Operations' control of stability, so Operations resists it. Suggesting that more be delivered in each huge release jeopardizes Development's ability to meet project deadlines, because so much is unknown and the commitment is expected up front, a quarter or more (I've seen 18 months) in advance.

There are reasons for all of this; it's not bad people, just a consequence of large organizations. Reducing downtime reduces costs because you can cut support staff. Delivering on time increases productivity because code that isn't being used is useless code.
