
The existence of small and large companies with bad systems architecture isn't an argument for implementing more bad systems architecture. Small or large, it doesn't need to cost more to design stuff well.


For what it's worth, I agree with you, and that's why folks are flocking to Docker/Kubernetes and devops tooling like Terraform. I just think you're missing the scope of it. Take a big box store: they might have 500 locations, and all of a sudden they buy a few more companies and merge everything under a single brand. Now they have tons of systems all over the place, lots of existing platforms that need to talk to each other, and staff that are resistant to change. This isn't "newly designed stuff"; it's systems that grew organically over a long time. You're talking lots of different operating systems, networks, front-end/back-end systems, the www corporate site, mobile site, rewards sites, all sorts of internal support applications, POS system backends, etc. They probably have 20+ different large database systems, and they might not even know all the apps connecting to them. I actually worked with a company like this on a few cloud projects. It was amazing to see the complexity. This is the type of stuff that's running large parts of the companies you interact with on a daily basis.

I guess what I'm getting at is that, sure, if they design new things they will follow modern patterns, but there are so many things that are not modern. They don't have the time or incentive to just go and rebuild all this stuff. There is zero bottom-line benefit to them unless there is some burning fire, a way they can extract more money, or a way to save tons of money. So they keep these systems on life support and run in keep-the-lights-on mode until something happens. These are the systems all sysadmins just wish would go away, and there are many of them all over the place.


Yeah, I agree. Your initial reply to me didn't mention legacy issues, just org size, and I was interpreting it more in the context of a large org building new systems the same unmaintainable way they always have because of inertia / politics / ossified sysadmins / etc.


The problems are almost always around process. But I’ve also seen, on occasion, some sadly pragmatic reasons for very slow processes, such as critical legacy software for which a replacement can’t be found even with 8+ figure checks waved at vendors.

One of the big trends in cloud software from the 2010s was to cloudwash old stacks that had done little more than be validated not to crash and burn on an EC2 instance, which is why the entire cloud native movement exists: to differentiate greenfield cloud architecture services from cloudwashed ones.

Things can be pretty frustrating when working with different vendors of different applications and competencies. People will be patching log4j issues for years to come, for example, and that’s probably easier to validate in aggregate than entire kernel upgrades for decrepit, unsupported distros like CentOS 5, which I still hear about being used.
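Patching log4j in aggregate usually starts with just finding the affected jars across a fleet of filesystems. A minimal sketch of that first step, assuming filename-based version detection only (a real audit would also inspect jar contents and shaded/fat jars; the helper names here are made up for illustration):

```python
import re
from pathlib import Path

# Releases before 2.17.1 are flagged here as a blunt cutoff covering the
# 2021 Log4Shell-era CVEs; adjust the threshold to your own advisory list.
PATCHED = (2, 17, 1)

def parse_version(name):
    """Extract (major, minor, patch) from a log4j-core jar filename, or None."""
    m = re.match(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$", name)
    return tuple(map(int, m.groups())) if m else None

def find_stale_log4j(root):
    """Walk a directory tree and list log4j-core jars older than PATCHED."""
    stale = []
    for jar in Path(root).rglob("log4j-core-*.jar"):
        version = parse_version(jar.name)
        if version is not None and version < PATCHED:
            stale.append(str(jar))
    return stale
```

Something like this only catches jars named conventionally on disk, which is exactly why validating the fix across many vendors' applications drags on for years.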


If you are starting from scratch, sure.

Real life is more complicated, and even if the organisation willpower and politics are aligned in a way to _want_ to fix it, this takes a long time.

Chastising someone on HN because they own a system that they probably didn’t design and might not have the power to fix seems, at best, a little unfair.


I agree; there are many reasons why bad systems architecture might remain in place: inertia, lack of resources, organisational politics, etc.

I didn't chastise anyone.


Sorry, that wasn’t addressed to you specifically, just aiming upwards in the thread.



