The Configuration Complexity Clock (mikehadlow.blogspot.com)
67 points by henrik_w on May 9, 2017 | 30 comments


I think this "hard-coding phobia" is a leftover from the '60s, when recompiling the code took all night. It has since been echoed in universities for no clear reason.

Today, it's probably not that hard to recompile and redeploy unless it's an embedded application orbiting Mars.

It's also most likely that you need to redeploy and rerun all tests regardless of whether you change a source file or a config file.

But the warning regarding a "rules engine" or a DSL is worth listening to. Express the rules in code to reduce the complexity, and make sure that you can redeploy fast and without any fuss. The maintenance will be so much easier.

Simple flags or values that may change occasionally, or need to be changed really quickly, should most likely be stored in a database rather than a config file.
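To make that concrete, here's a minimal sketch of the idea using Python and an in-memory SQLite database; the table and key names are invented for illustration:

```python
import sqlite3

# Hypothetical key/value table for flags that must be changeable at
# runtime without a redeploy. Schema and names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE app_settings (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO app_settings VALUES ('maintenance_mode', 'false')")

def get_setting(key, default=None):
    row = conn.execute(
        "SELECT value FROM app_settings WHERE key = ?", (key,)
    ).fetchone()
    return row[0] if row else default

# Ops can flip the flag with a single UPDATE -- no redeploy needed.
conn.execute(
    "UPDATE app_settings SET value = 'true' WHERE key = 'maintenance_mode'"
)
print(get_setting("maintenance_mode"))  # true
```

The point is that the value lives where it can be changed immediately, while the logic that interprets it stays in code.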

Note that if changing a config value also requires changing the code, there is little point in maintaining a separate config option in a file.

Don't be afraid of hard-coding logic. If you don't know the logic yet, help someone figure out the requirements rather than building super-flexible rules engines or DSLs.
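As a sketch of what "hard-coding the logic" looks like in practice: business rules as ordinary, testable functions rather than entries in a rules engine. The discount rules below are invented for illustration:

```python
# Hypothetical business rule, hard-coded as a plain function.
# Trivially readable, testable, and under source control.
def discount(order_total, is_repeat_customer):
    if order_total > 1000:
        return 0.10   # big orders get 10% off
    if is_repeat_customer:
        return 0.05   # loyalty discount
    return 0.0

print(discount(1500, False))  # 0.1
print(discount(200, True))    # 0.05
```

When the rule changes, you change three lines of code and redeploy, instead of debugging a rule-evaluation engine.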


While I agree, I think the problem is that coding your little interpreter or grand service framework or what have you is a whole lot more fun than coding business logic. Also, more often than not, business requirements are unclear at the beginning of a project, so developers attempt to anticipate everything at the meta-programming level.


Indeed. All developers enjoy converting boring and repetitive tasks into interesting and complex problems.

It's the essence of programming in a way...


For web servers, everything you say is true.

I spent years writing desktop applications, and the config file was part of the whole application bootstrap process. It couldn't connect to the database until it knew where that was, and that entirely depended on the deployment environment.

Using environment variables basically meant that bits of the application's config lived in whatever environment system the OS used. It was possible, but massively more painful than shipping a text file with the application.

Creating a new executable for each customer install, with the config values hard-coded, was not an option, regardless of how fast it compiled. Keeping a copy of the text config files (in case there was a problem) was easy, but rarely done. Usually we talked the customer through emailing us a copy of their config file if there was a problem.

This kind of attitude persists. I still find myself hiving off any config into a separate part of the code base, with the knowledge that at some point I'm going to have to deal with that being serialised from a file. Even if it's a single-deploy web server. I think it's a good habit ;)


You might be able to recompile quickly but if you're waiting 4 weeks for Apple to put your application in its store, there's still a benefit to having external configuration in sane amounts. As the article says, everything in moderation.


> Express the rules in code to reduce the complexity, and make sure that you can redeploy fast and without any fuss. The maintenance will be so much easier.

Not to mention you get source/change control for free. I push back constantly on junior engineers overzealously config-ifying everything for some hypothetical future use case. Yagni. It's a two way door and is easy to move later.

(Of course, this doesn't apply in all domains, particularly when you're shipping full applications to clients and code changes are very difficult to deploy.)


> Today, it's probably not that hard to recompile and redeploy unless it's an embedded application orbiting Mars.

For most companies it still is that hard; the idea of clicking a button and deploying is still alien. This leads to some rather elaborate workarounds: at my current company we have an SQL procedure that looks up which SQL procedures to call, for instance. To them it's a life-saving feature, but to us it's an artifact of an awful deployment system.


True.

It should not be so hard.

The obstacles to a smooth deploy are most often related more to bureaucracy than to technical implementation.

I'd say it's possible to automate almost any application update process, if you spend enough time and effort on it.

Our applications also have indirections upon indirections, and the amortised time we have spent on maintaining that mess could easily have funded a one-click deploy and then some...


CI/CD, deploying fast, changing things easily and getting them into production within minutes is well within reach of most teams these days. I'm not saying it's an inexpensive exercise to get there for a long-deployed, monolithic application, but it can be done in most cases I find; and when it is, you have more options.

You can hard code things and kill most of the complexities this author talks about. It's a great feeling.

Up front cost in terms of automation is often linear. The opportunity cost of automating and simplifying is far more than linear.

The foremost principle of #leanstartup is Small Batches, IMO. Keeping everything small (functions, reusability, work items, features, complexities, etc.) means you do things fast and often. You automate when repetition becomes painful and you understand the pain well (and not before).

These days I have a `Config` object which contains a static tree of config params. Sure these change occasionally, but not that often. It's more for reusability than 'configuration'. It's a source of truth for common values. Only a few of these values are deserialised from a config file, env variable or replaced by a deployment tool.
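A rough Python sketch of that `Config` idea: a mostly static source-of-truth tree, with only a couple of fields overridable from the environment or a deployment tool. All field names here are made up for illustration:

```python
import os
from dataclasses import dataclass, field

# Illustrative static config tree. Most values are plain hard-coded
# constants reused across the codebase; only db_url is deploy-time.
@dataclass(frozen=True)
class Config:
    page_size: int = 50       # plain constant, a source of truth
    retry_limit: int = 3      # ditto
    db_url: str = field(default_factory=lambda: os.environ.get(
        "DB_URL", "postgres://localhost/dev"  # one of the few external values
    ))

CONFIG = Config()
print(CONFIG.page_size)  # 50
```

Changing `page_size` is a one-line commit that goes through the normal build/test/deploy pipeline, which is exactly the point.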

Mainline dev (Github Flow) means I can make a change, deploy it to an integrated environment quickly (single digit minutes) with all the build and tests passing, check it in the staging env, and click a button to put it into Prod (seconds usually). With the up front investment of automation making this possible, I can avoid a heap of non-linear complexity in my architecture and code. It's the only way to fly.


> CI/CD, deploying fast, changing things easily and getting them into production within minutes is well within reach of most teams these days.

Someone else who's living in the as-a-service bubble. Meanwhile, embedded systems, OS-level components, desktop apps, and compliance-heavy industries continue to exist.

> It's the only way to fly.

Yeah... no. Even on projects where I have fast CI/CD, the best complexity killer is taking a step back, thinking about the problem space and weighing the options before settling on an implementation.


In the world I live in, most configurations changes require a redeploy of the application anyways, so moving them to a config file doesn't really help much.

I definitely see the value of driving some configuration from database values, primarily if they're rapidly changing OR (more importantly) are being updated on the fly by non-technical employees.


If you've lived through this the post is really funny.

We've all had someone clear their throat when taking you through a new codebase and be like 'yeah, the configuration language kind of became self aware'.

Organizations aren't equipped to evaluate the costs of complexity and our software tooling isn't strong enough (yet) to tweak old design decisions to make room for changes.


To be honest, I've never seen an organisation go all the way around the clock, but I've seen plenty that have got to 5, 6, or 7 and feel considerable pain.

In my (very) brief experience with enterprise Java, a very long time ago, I saw one go around the clock and make its way toward a second round: a DSL which started growing another DSL inside it. Of course, being Java, the DSL processors themselves were also overengineered excesses and had their own configuration and configuration-configuration... I didn't stay around long enough to see what eventually became of that system, but I have a feeling it's still in active use.


We've got one that went around the clock. A DSL was created for support, but the DSL is more of a UI that builds expression trees. Of course, support don't understand expression trees, so the developers write the C# (what the DSL "compiles" to); the C# is inserted into the database and then compiled on the fly.


Oracle Forms used to be pretty bad for this; I've seen a "form that controls a form" crop up fairly often.


Great article, and definitely made me chuckle -- I've been on this train too with enterprisey code and it's not fun.

Often, the road to hell is at least nominally based on good intentions: beasts like a Rules Engine are as much design tools ("nothing that can't be solved with another layer of abstraction") as responses to a particular requirement, like the customer -- who is not you, and may not even have a direct line to you -- being able to choose the exact codepath out of the sum total of all delivered codepaths without having to get the developer on the phone to modify the application itself.

This is the same line of thinking that led to stuff like Spring's beans.xml, where the person configuring the code on the customer side can specify which particular implementation of a Kind of Thing to use, out of a larger palette of possible implementations [1]. This system has been decried by many for years, to the point of Spring itself adopting a different preferred way of configuring these things; but once your "configuration" can only be changed by recompilation, it can no longer be called configuration with a straight face.

Rules Engines err by swinging too far to the other side of "flexibility", and tend to happen when software is used to model complex systems and ("human") business processes where no single person really understands all the requirements from the very beginning. It's the committee-approved solution to a //TODO: impl, where the non-technical but hopefully domain-proficient users of the system will be able to teach the system what to do as they go.

As the understanding of the system improves, both by the developers and the customers, it then becomes tempting to intentionally constrain the problem space by reframing it in terms of domain-specific concepts. This doesn't necessarily mean writing a domain-specific language (DSL), but it can. This is where the article's premise starts to stretch, though: if the quip "now we're hardcoding everything in a crappier language" can be considered true, then the DSL has been bungled from the start -- and you haven't reduced the complexity of the problem space; you've increased it. Now you've got two problems.

Nonetheless the clock metaphor is illuminating because of the cyclical tendency of increasing flexibility as the requirements are poorly understood, vs. augmenting the domain model once the requirements are better known.

[1] https://news.ycombinator.com/item?id=13683547


I've certainly been bitten by hardcoded values in some of our "business apps" before.

One notorious example I can think of is a laboratory LIMS system that had hardcoded sample pick-up point locations; every time there was an equipment breakdown or similar and a sample would get diverted, a bunch of automated reports and database triggers would fall over.

Nowadays I prefer to use a database table to store values. I guess SQL could be considered as a kind of DSL.

I tend to use some schema like "Key : Value : Date Applied". It is often useful to keep the historical rules so you can rerun events using the business rules that were in place "at the time"; that's not easy to do if you rely on hard-coded constants.
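A small sketch of that "Key : Value : Date Applied" idea, using Python with an in-memory SQLite table; the schema, keys, and rates below are invented for illustration:

```python
import sqlite3

# Hypothetical temporal rules table: every historical value is kept,
# and lookups pick the value in force at a given point in time.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE business_rules (key TEXT, value TEXT, date_applied TEXT)"
)
conn.executemany(
    "INSERT INTO business_rules VALUES (?, ?, ?)",
    [("vat_rate", "17.5", "2010-01-01"),
     ("vat_rate", "20.0", "2011-01-04")],
)

def rule_at(key, when):
    # Most recent value whose date_applied is on or before `when`.
    row = conn.execute(
        """SELECT value FROM business_rules
           WHERE key = ? AND date_applied <= ?
           ORDER BY date_applied DESC LIMIT 1""",
        (key, when),
    ).fetchone()
    return row[0] if row else None

print(rule_at("vat_rate", "2010-06-15"))  # 17.5
print(rule_at("vat_rate", "2012-01-01"))  # 20.0
```

Replaying an old event then just means passing the event's own timestamp to the lookup, something a hard-coded constant can't give you.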


> Nowadays I prefer to use a database table to store values. I guess SQL could be considered as a kind of DSL.

For user configurable options this is great, for system configuration it's awful. If you copy your production database you may end up reading/writing to places you shouldn't be. It also sucks if you use a shared development database (which is awful too, but unrelated).


With SQLite it works as a decent alternative to config files for many kinds of applications.


That's way more complexity for very little gain. Text files are simple and universal.


> except now in a much crappier language

This is one principal point where I disagree with this post. A proper DSL is a much better language than whatever the core is built in; for example, C++ may be the perfect language to write the game engine, but Lua is usually a much better choice for game designers to write the level-specific and quest-specific logic.


The argument the article is making is that at some point the complexity of business rules exceeds the complexity of the DSL you wrote, at which point people tend to evolve the DSL until it becomes a crappy general-purpose programming language.

For a good example of that, see the evolution of template engines in web development. They quickly accrued conditionals, function definition, recursion, and at some point they just turned into a crappier version of PHP. The latter is doubly ironic when you see people using template engines in PHP, as if not noticing that PHP is already a better templating language.


Good point! I thought about templating also when I read this article. I also like how a similar perspective inspired Tero to develop RiotJS, he is even quoting the Facebook team: "Templates separate technologies, not concerns" - http://riotjs.com


In my eyes, PHP was a template engine to begin with, so it indeed doesn't make sense. But I doubt that your business logic would require manual memory management or bitwise operations, for example.


This is exactly the sort of application that Ruby is best for. Metaprogramming can boil down a lot of complexity. Moving along each phase on the process can be done with refactoring rather than redesign / re-engineering.


Boiling down a lot of complexity... into even more complex code. Metaprogramming is usually only simpler for the developer who wrote it.

I love Ruby metaprogramming, don't get me wrong, but using it in production requires diligence with test coverage, and clear and well documented interfaces so non-Ruby experts can still contribute code.


> into even more complex code.

Sure, but there's way less of it. Way less to test, way less to maintain.

Also, you can refactor metaprogramming back out of a codebase after you've introduced it. Basically this involves taking the hacks and giving them classes with state. Essentially you're modifying the design of your application without ever introducing a break in continuity of functionality.
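The thread is about Ruby, but the same refactoring can be sketched in Python: a dynamic attribute "hack" replaced by an explicit class with state, without changing the observable interface. Names here are invented:

```python
# Before: attributes conjured up dynamically via __getattr__ -- concise,
# but opaque to readers and tooling.
class DynamicSettings:
    _data = {"timeout": 30, "retries": 3}

    def __getattr__(self, name):
        return self._data[name]

# After: the metaprogramming refactored back out into an explicit class
# with state. Same interface, no break in continuity of functionality.
class Settings:
    def __init__(self, timeout=30, retries=3):
        self.timeout = timeout
        self.retries = retries

assert DynamicSettings().timeout == Settings().timeout  # behaviour preserved
```

The assertion is the whole point: the refactoring is safe precisely because callers can't tell the difference.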


Perl scripts are even shorter.


Harder to refactor though.


I think you more or less get the best of all the worlds if you:

1) have a good build-test-release process so updating hard coded values is not so hard or dangerous

2) use a language like Haskell that makes custom DSLs easy and automatically well-tooled for the more 'logic-y' configuration

3) possibly use a non-DSL configuration file loaded separately from code for basically 'scalar' things like the VAT rate, etc.
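Point 3 might look something like this minimal Python sketch; the file contents and keys are made up for illustration:

```python
import json

# Hypothetical contents of a settings.json shipped alongside the code:
# purely 'scalar' values, no logic, loaded separately from the DSL.
config_text = '{"vat_rate": 0.20, "currency": "GBP"}'

settings = json.loads(config_text)
vat_rate = settings["vat_rate"]
print(vat_rate)  # 0.2
```

Keeping only scalars in the file sidesteps the clock entirely: anything "logic-y" stays in real, tooled code.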



