The Lava Layer Anti-Pattern (2014) (mikehadlow.blogspot.com)
77 points by cheeseface on July 1, 2023 | 62 comments


Too many software charlatans with too much responsibility (architecture/strategy), too many books read that were written by evangelists, and way too little experience in the whole software development lifecycle.

Write software, 1.5 years later update your CV with the fancy buzzwords you used, change jobs for better comp, and repeat. Who cares how the design matured?

They always have some method or approach that is unparalleled on every metric, except for actually being reflected in reality.

Years of brainwashing have caused new developers to give you strange looks if you use an "if" statement or write comments in your code.

They also act as if a design pattern were some holy code instead of just a name for an approach to some problem (just normal code, but with a name).


The aversion to comments is especially painful. There is no such thing as self-descriptive code.

Same with if statements. There is a point to be made about encoding logic in the type system (type-state pattern), which is great. However, encoding it in indirection and abstraction in the name of saving a couple of lines of imperative code is an especially egregious cancer.
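For the curious, a minimal type-state sketch in C++ (all names here are hypothetical): the compiler, not an if-check, enforces that a connection is opened before it is queried.

    #include <string>
    #include <utility>

    // "Open" is a type, not a boolean flag, so querying an unopened
    // connection is a compile error rather than a runtime branch.
    class OpenConnection {
    public:
        explicit OpenConnection(std::string dsn) : dsn_(std::move(dsn)) {}
        std::string query(const std::string& sql) { return "result of " + sql; }
    private:
        std::string dsn_;
    };

    class ClosedConnection {
    public:
        explicit ClosedConnection(std::string dsn) : dsn_(std::move(dsn)) {}
        // The only way to obtain an OpenConnection is to consume a closed one.
        OpenConnection open() && { return OpenConnection(std::move(dsn_)); }
    private:
        std::string dsn_;
    };

    int main() {
        OpenConnection conn = ClosedConnection("db://example").open();
        conn.query("SELECT 1");  // fine
        // ClosedConnection("db://example").query("SELECT 1");  // does not compile
    }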


Here is the problem with the majority of comments I see in the wild.

They just repeat what the code is doing. And they are often wrong, because they are not functional: they don't cause compilation errors, they don't cause crashes, and they don't make tests fail, so errors in the comments tend to go unnoticed. It is so bad sometimes that I have trained myself not to read comments, as they can be deceptive.

Comments are a side channel, and IMHO, strictly a side channel. They can be used to express what can't be expressed in code. A common usage is to explain why you chose a solution over another.
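For instance (a made-up snippet; the numbers and names are hypothetical):

    #include <chrono>
    #include <thread>

    // Why, not what: the shifting delay below looks arbitrary without this.
    // We back off exponentially because the upstream API rate-limits bursts;
    // a fixed delay kept all retries in lockstep and amplified outages.
    void backoff(int attempt) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100 << attempt));
    }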

I have absolutely no problem with "if" statements; I also think too much indirection is cancer. I do have a problem with boolean parameters, however. They tend to result in confusing and error-prone calls like style(true, false, true), instead of something like style(ITALIC, NO_BOLD, UNDERLINE). In C/C++, I then use "switch" instead of "if". The advantage of "switch" is that the compiler warns you if you forgot a case, and you also avoid the problem of accidentally reversing the condition. Make sure your compiler warns you of unintended fall-through too.
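A sketch of what I mean (the enum and function names are mine); with -Wswitch, the compiler flags any missing case:

    #include <cstdio>

    enum class Slant  { Upright, Italic };
    enum class Weight { Normal, Bold };

    // style(Slant::Italic, Weight::Normal) is unambiguous at the call site,
    // unlike style(true, false).
    void style(Slant slant, Weight weight) {
        switch (slant) {  // -Wswitch warns if an enumerator is unhandled
            case Slant::Upright: std::printf("upright "); break;
            case Slant::Italic:  std::printf("italic ");  break;
        }
        switch (weight) {
            case Weight::Normal: std::puts("normal"); break;
            case Weight::Bold:   std::puts("bold");   break;
        }
    }

    int main() { style(Slant::Italic, Weight::Normal); }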


That’s indeed a problem. However, writing no comments can also be a problem, so one has to find a compromise.

I think comments are helpful given these two criteria are met:

* The comment is concise.

* The comment reveals something that is not obvious in the code that follows immediately after.

It is then also easier to spot and fix outdated comments.

A typical problem that I have encountered with the “no comments” approach is that developers then have to name stuff. And naming stuff is hard. Writing a sentence is much easier. This can be a bigger issue than an outdated comment.

As an example, I just reviewed a code base and found a class named "CaptionSubtitles". It had a comment, but that also suffered from a language gap. I think this is an issue that is widely overlooked. Code expresses semantics, so names matter. A comment can in this case at least hint at a concept or thought that the original author, who of course left a couple of years ago, had in mind.


Yeah, comments like what you describe should simply be removed on sight. I think they are most useful in describing domain-specific context around some logic, that may not be obvious by reading the code alone.


Code tells me what you wanted to achieve. Comments tell me why you needed to.

The problem developers have is they assume it's obvious why things are needed. It's usually not obvious 6 months from now, even if you wrote it.


No, comments tell me what you wanted to achieve. Code only tells me what you actually achieved.

Winston Royce's (fantastic) 1970 paper "Managing the Development of Large Software Systems" accurately anticipated how things would end up working 30+ years later in shops that try to use 'self-documenting' code as a substitute for good documentation.

> Without good documentation every mistake, large or small, is analyzed by one man who probably made the mistake in the first place because he is the only man who understands the program area.

> When system improvements are in order, good documentation permits effective redesign, updating, and retrofitting in the field. If documentation does not exist, generally the entire existing framework of operating software must be junked, even for relatively modest changes.

Add modern-day turnover rates on technical teams, so that the person who originally wrote the code is unlikely to be around to help with analyzing problems, and you've got an excellent recipe for the current chronic burnout status quo.

https://www.praxisframework.org/files/royce1970.pdf


Why from a user level belongs in executable specifications.

Why from an implementation level belongs in comments, but that should be fairly rare. Most code shouldn't provoke the question "why on earth did they implement it like this?"


Most of my job is figuring out why things were built the way they were. That's the only way I can know if I'm making the right changes without breaking something else in the process.

That is to say, sure most code is obvious. I don't work on most code. I work on the parts that need fixing.


I feel there is a huge over-indexing on Clean Code and self-describing code. Essentially, are we suffering from a journeyman problem, where the commentary we focus on is written by those who are new and have trouble understanding basic syntax? In that case the comments can largely be removed, as describing a code statement with a comment can indeed often be replaced with a better statement.

Though, comments are so much more. They can be like the foreword in a book, the intro paragraph of a paper. While every paragraph might be easy to understand, a foreword to help the reader know what to expect can be invaluable.

Comments can be a scaling tool. A few lines above a test telling me what is being tested might take 5s to digest, compared to reverse engineering the code, which might take 25s or longer. Multiplied over a dozen test cases, you have something that can be understood in a minute instead of a dozen minutes. Multiply that out to a half dozen test suites or more and it is a time savings of hours.

As another analogy, it's like someone providing a travel itinerary. If you know the overall itinerary, the individual parts become expected and obvious; so much of the guesswork of "where are we going with this?" is removed. Instead it is replaced with "I now expect these three steps", which makes it easier to recognize those steps and fit them into place.

Another analogy is a puzzle. Well written code is like having very large pieces where you can see a lot of detail. Good commentary is like having the full puzzle picture provided to you. Having both makes for an easy puzzle (which requires less time to understand, which means it requires less time to modify.)


>There is no such thing as self-descriptive code.

Oh, there is. Well-written code with good naming and clean structure often doesn't require comments.


As has been said a bazillion times, the problem is that comments get outdated quickly. And yes, there is self-descriptive code, and I'd wager 80 percent of code is self-descriptive. Comments should be reserved for describing something that might be surprising or not obvious. And for APIs.


Self commenting code gets outdated quickly too. Not the code itself, but the names. Just because a developer managed to embed their comments into function and variable names doesn't suddenly make them immune to staleness.


I've been solving a lot of codebase pattern/anti-pattern nonsense by pushing all of the types, properties and relationships into SQL. It took me about a decade to get to the point where I was done trying to be more clever than 40+ years of computer science aggregated into one magical box. A well-designed schema & clean SQL is an ego killer. It is so goddamned boring. Who would want to work on something that easy? Where did all the code go!?

I'd argue database-centric design is the only rational place to start if there is money on the table. Done correctly, this forces you to have deep & meaningful conversations with business stakeholders and customers. It was arguably the only way to build this stuff until the resume stat padders wandered into the job market and started throwing nu-SQL & "patterns" at everything that moved, or otherwise drew everyone's attention away from what is effectively a perfect solution for 99.99% of business needs.


Completely agree, show me the data and I'll tell you if your solution matches the problem you are trying to solve.

So much impedance comes from unclear thinking and overly complex solutions, and it usually combines with inconsistent naming schemes across the stack.

It's easier to gatekeep if you're the only one who knows a "game" equals a "contestQuest" because you've created some overloaded polymorphic "quest" system in your backend.


The best thing I ever did was have a personal project that I've been on for 7 years. It forced me to design in an intuitive, tested, structured way. Because I will come back to an area in a few months and not know what the hell is going on.


> Write software, 1.5 years later update your CV with the fancy buzzwords you used, change jobs for better comp, and repeat. Who cares how the design matured?

That's on their employer for offering subpar comp. They could have offered that employee a raise instead, and asked him to rewrite his bad code.


> They could have offered that employee a raise instead, and asked him to rewrite his bad code.

Why would an employee deserve a raise if they write bad code that has to be rewritten?


Usually you don't get a raise because you deserve it, you get it because the company believes keeping you will result in a more positive financial outcome in the next few quarters than replacing you.

Plenty of companies give their engineers a chance to pay down their tech debt, whether it is bad code, bad infra decisions, etc. If you don't believe some engineer is capable of paying down their tech debt, you may as well let them go. Or not give them a raise and hope they leave, which at most places is much cheaper.


I think you are describing a dogmatic approach to development. It's very common and causes problems like those described in the article.


Wait, what's wrong with if-statements? I apparently missed the boat on that one.


It really depends. Sometimes people try to stuff way too much strategy-picking in if-conditions when they ought to delegate the behavior to some strategy-objects instead, where each code path could be better understood and tested in isolation. (Subclassing can also work here but it’s less flexible.)
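Something like this sketch (domain and names invented for illustration):

    #include <memory>

    // Each strategy is one code path, testable in isolation, instead of a
    // growing if/else ladder picking behavior inline at every call site.
    struct ShippingStrategy {
        virtual ~ShippingStrategy() = default;
        virtual double cost(double weightKg) const = 0;
    };

    struct GroundShipping : ShippingStrategy {
        double cost(double weightKg) const override { return 5.0 + weightKg; }
    };

    struct ExpressShipping : ShippingStrategy {
        double cost(double weightKg) const override { return 15.0 + 2.0 * weightKg; }
    };

    // The condition is evaluated once, at construction time.
    std::unique_ptr<ShippingStrategy> pickStrategy(bool express) {
        if (express) return std::make_unique<ExpressShipping>();
        return std::make_unique<GroundShipping>();
    }

    int main() { return pickStrategy(true)->cost(2.0) > 0.0 ? 0 : 1; }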

I also particularly dislike it when you’re trying to support two different formats of payload in the same API and you if() based on the presence of the new field. Expedient, I’ll grant, but if not reversed quickly it soon becomes absolutely unintelligible. The more stable systems that have to support clients sending older formats do better by versioning the format, having multiple implementations that each do whole-payload validation, and delegating to the same underlying task object (you do have one of those outside the class that’s just about interfacing with HTTP, right? ... okay, that’s fine, I understand, but you’re gonna do it now).
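Roughly the shape I mean, as a hedged sketch (every name here is hypothetical, and validation is elided):

    #include <string>

    // One underlying task object, shared by every payload version.
    struct CreateOrder { std::string sku; int quantity; };

    // Lives outside the HTTP-facing class; every version ends up here.
    int execute(const CreateOrder& order) { return order.quantity; }

    // Each format version gets its own parser that maps its whole payload
    // to the task object: no if(hasNewField) sniffing inside one parser.
    CreateOrder parseV1(const std::string& body) { return {body, 1}; }
    CreateOrder parseV2(const std::string& body) { return {body, 2}; }

    int main() {
        // The route picks a parser from the declared version, e.g. a header.
        return execute(parseV2("sku-123"));
    }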

On the other hand, sometimes an if-statement is the logic itself. That’s fine.


A lot of it's down to a paradigm conflict. In general, if statements are fine. But they're not really considered to be object-oriented. Smalltalk technically doesn't even have an if statement.

I will say that, when I see object-oriented code that branches on the particular subclass of a value at run time, it's often one of the earlier signs that the code is getting messy. Subclassing is supposed to be used for a "tell, don't ask" style of programming. A lot of the problems with OOP that people like to complain about aren't really problems with object-oriented programming, per se; they're problems that crop up when object-oriented and procedural programming are mixed in an uncontrolled manner.
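A toy C++ illustration (classes invented): polymorphic dispatch replaces the run-time subclass check.

    #include <cstdio>
    #include <memory>
    #include <vector>

    // "Tell, don't ask": callers tell the object to act; they never ask
    // which subclass it is and branch on the answer.
    struct Notification {
        virtual ~Notification() = default;
        virtual void send() const = 0;
    };

    struct Email : Notification {
        void send() const override { std::puts("sending email"); }
    };

    struct Sms : Notification {
        void send() const override { std::puts("sending sms"); }
    };

    int main() {
        std::vector<std::unique_ptr<Notification>> outbox;
        outbox.push_back(std::make_unique<Email>());
        outbox.push_back(std::make_unique<Sms>());
        for (const auto& n : outbox) n->send();  // no dynamic_cast ladder
    }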


> I will say that, when I see object-oriented code that branches on the particular subclass of a value at run time, it's often one of the earlier signs that the code is getting messy.

Suppose you have an AST, which has Node and its various subclasses like ExpressionNode, which then has e.g. ConstantExpressionNode.

And so on, many, many others.

How would you then avoid branching based on type?


There is a concept known as Chesterton's Fence - that you shouldn't take down a fence someone else put up until you know why they put it up, or more generally if you don't understand why something was previously considered a good solution to a problem, then it's possible you don't understand the problem it was meant to solve.


In my experience, so many things were _not_ a good solution to the problem but merely the first thing that looked like it worked. This leaves behind a trail of garbage where it's unclear whether it is safe to remove or not, which is a huge cognitive overhead for someone trying to make changes later on.

If something is there that's not going to be obvious in 6 months time please please document/comment it. Stop building mysterious fences!


That's right, but the advice is targeted at you when somebody's already left you a mystery fence. They should have done better and documented it, but they didn't and now you have to deal with it.


Indeed, a comment in the code indicating that a choice was probably not optimal and was made in a hurry would help the next developer not to assume too much intention when reading that part of the code.


Is this Chesterton's Garbage?


Chesterton's Roadside Picnic


I love this analogy.

Sometimes it's a perfectly good fence, sometimes it's an invisible spatial anomaly that turns people inside out.

¯\_(ツ)_/¯


In my experience, extremely abstracted code with inconsistent naming schemes and general data confusion is a sign of bad code. Add metaprogramming (outside of well-documented libraries with simple ergonomics) to that list.

In 20 years, if it smells like a duck, it's because the duck was trying to shoehorn functional programming paradigms into Ruby by abusing the hell out of blocks. Bad clever code is just bad code.


I've never understood the analogy. Fences are built to keep people or other animals inside or outside an area; the purpose is almost always fully obvious. They are also not that cheap and require some upkeep, and it is almost always easy to find the owner of a fence. I really don't know what sort of fence he had in mind.


The original quote mentions a fence or gate built across a road. Also Chesterton is writing from an English perspective which includes familiarity with manmade landscapes built up over thousands of years where the intentions of the original builders are lost to history. He probably had a dry stone wall in mind which are often used to fence in parcels of land. In any case, I’ll let him defend his position himself, his wit is as sharp as ever:

“In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, "I don't see the use of this; let us clear it away." To which the more intelligent type of reformer will do well to answer: "If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."

This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable. It is extremely probable that we have overlooked some whole aspect of the question, if something set up by human beings like ourselves seems to be entirely meaningless and mysterious. There are reformers who get over this difficulty by assuming that all their fathers were fools; but if that be so, we can only say that folly appears to be a hereditary disease. But the truth is that nobody has any business to destroy a social institution until he has really seen it as an historical institution. If he knows how it arose, and what purposes it was supposed to serve, he may really be able to say that they were bad purposes, that they have since become bad purposes, or that they are purposes which are no longer served. But if he simply stares at the thing as a senseless monstrosity that has somehow sprung up in his path, it is he and not the traditionalist who is suffering from an illusion.”


The sort that controls perfectly spherical cattle?

I also don’t really like the fence analogy. A real fence typically exists to demarcate property lines and / or keep things in / out. This is usually perfectly obvious, even if you don’t know who the owner actually is, or the specific role.


That’s part of the point. It usually is obvious. So when it’s not immediately obvious, we should err on the side of caution rather than action.

Chesterton’s fence is another adage in the “try not to assume things” category.


Fences are boundaries. Code structure can be thought of as mostly a set of boundaries around knowledge domains.


Each wanted to re-write the application rather than maintain it, but the business owners would not allow them the resources to do it.

One trick I've successfully used to get around this is to basically work on doing that in what would otherwise be slack time, at "background priority". Only when you're done do you give the business owners the proposal, and when they object, say that you already did it and show them the results. Of course this works only if you actually do have slack time, which is normally a given in the sort of enterprise environment these abominations usually get created in; and you understand the existing application to a sufficiently large extent and manage to find a way to simplify it to a fraction of its current size while maintaining backwards compatibility, something that seems to be the exact opposite of what the leads in this story wanted to do.

"It's easier to ask for forgiveness than permission."


There seems to be an innate human desire to fix problems right in front of your eyes that you can see clearly. And that's not limited to software development.

People seem to be constantly developing workarounds for the deliberate sabotage by their own superiors preventing them from doing the work they're being paid to do, and even sacrificing their own time for it.

Unfortunately, this keeps the parasites in power as it masks the damage their approaches are doing, or would be doing if left to run their natural course.

Don't be a cuck and pick up others' shit. Especially not if they're higher-paid and get to give you orders.


>Unfortunately, this keeps the parasites in power as it masks the damage their approaches are doing, or would be doing if left to run their natural course.

I hear this, but I also find there's practically no retrospective at the managerial level outside of an acquisition, as that's the only way a culture shift is incurred at the upper levels, and that of course all depends on the acquirer.

They're all too busy playing the game of minimizing their exposure, creating leverage and advantage for their ambitions, etc.

I feel like a managerial culture is something that kicks off on day one and gets perpetuated, or it just turns into every other Enterprise eventually.


Quote from https://medium.com/geekculture/the-dead-sea-effect-d71df1372... Dead Sea Effect:

The “bad” employees left behind are not always bad — hence the quotes. I disagree with that idea. I think it lacks empathy and perspective. There are a lot of instances where good employees stay in situations like this for very valid reasons. Here are some examples:

[...]

They believe they can singlehandedly change bad practices as an individual contributor.

---

I can only say: take the whole company with you to remove technical _and_ organizational debt, otherwise any rewrite will end up as the next lava layer.


In my experience, it is very difficult to get a commitment for the effort to remove dead code.

The problem is essentially invisible to everyone except maybe the dev who is dealing with a high noise-to-signal ratio as a result.

If it was a woodworking shop instead of a code repository, management would constantly see what a mess the shop is due to never being cleaned up properly.

They would never tolerate this because of the way it looks. The woodshop looks sloppy, half-assed and almost sure to be an inefficient place to do work, if it does not get cleaned up on a regular basis.

Because they cannot see dead code in the repository in a similar manner, it is especially challenging to get them to care.


This might not add much to the discussion, since the post is not really about dead code removal, but your woodworking shop analogy reminds me of the bakery analogy at the beginning of this old post by Joel Spolsky:

https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...


You just delete it on the fly as you come across it, since it lives on in the version control system anyway. If your workplace doesn't have version control, leave.


Code unambiguously defines 'what' it's doing, so I'm not sure what value technical consistency provides.

If anything, consistency hurts the capacity for developers to generalise their understanding of a system. Differences in representation, i.e. multiple perspectives, help people to develop their understanding of the underlying concepts of the system.

This is important because code doesn't effectively record concepts, intents and oughts. These are things recorded within the context of the system. So the more context, usage and interactions available, the more opportunity for learning.

I think consistency is one of those things that sounds appealing and offers the illusion of making development 'easier' but boils down to someone deciding by fiat that their poorly communicated representation of the system is correct and nobody else should add their own representations of the system to build up a useful level of context and understanding.


Hard disagree. Inconsistency increases the amount of time needed for any feature and increases the likelihood of bugs. There is no upside in having to understand each individual piece of functionality from the ground up, because a lot of functionality is simply repeated in slightly different ways. What you call "hurting the capacity for developers to generalise their understanding of a system" is what I would call "building the capacity for developers to generalise their understanding of a system" - consistency is what allows for generalization.


Consider a codebase where all variables are consistently named with single letters and all values are string-typed for consistency.

Somebody comes along and creates a new, isolated piece of functionality using words for variable names and using the languages type system for numeric values and structures. It works well.

It certainly looks like this inconsistency would make it easier to work with this code in the future. Any conversion functions introduced that deal with the consistent string types and the new inconsistent types will help new developers understand both types of code.


I don't think that is in any way a realistic example.

But let's turn it around: consider a codebase where every type has an alias for every use. Any function that returns an int has it packed in its own type, and if you want to use that type somewhere else, you have to add a new method to turn it into that other type.

Every function returns multiple values, but in slightly different containers. Each container is unpacked slightly differently, has a slightly different memory layout.

Why would the above be easier than a codebase that always uses int and returns tuples?


Functions that return types wrapping int instead of raw ints are a good thing. Error codes in particular really should be something enum-like and not 32 bits of best of luck.

In contrast, having some containers call it size and others call it length is indeed moderately unhelpful.
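A minimal sketch of the enum-like wrapper I mean above (names invented):

    #include <cstdio>

    enum class ParseError { None, UnexpectedEof, BadDigit };

    // A dedicated result type instead of "32 bits of best of luck": the
    // error can't be mistaken for a payload int, and -Wswitch catches any
    // enumerator a caller forgets to handle.
    struct ParseResult {
        int value = 0;
        bool ok = false;
        ParseError error = ParseError::None;
    };

    ParseResult parseDigit(char c) {
        if (c < '0' || c > '9') return {0, false, ParseError::BadDigit};
        return {c - '0', true, ParseError::None};
    }

    int main() {
        ParseResult r = parseDigit('7');
        if (r.ok) std::printf("%d\n", r.value);
    }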


> Functions that return types wrapping int instead of raw ints are a good thing. Error codes in particular really should be something enum like and not 32 bits of best of luck.

That's not what I described. I was talking about numbers without special meaning, and with types which require manual implementation of any conversion. One function might return a container whose length() function returns a wrapped integer, and a different container with the same function name returns a different wrapper. You can't combine those wrappers without explicitly defining a conversion. That's what inconsistency in return types means.

> In contrast, having some containers call it size and others call it length is indeed moderately unhelpful.

And now imagine it's not just two options, but literally every container returned by every function being special. That's what it means to be inconsistent. But as you can see in your example - even small bits of inconsistency (1 bit, if you want) are annoying and unhelpful.


I don’t think anybody is arguing that consistency is _the only thing_ that matters. The argument is that "consistent and reasonable patterns" are preferable to "inconsistent yet just-as-reasonable patterns" by most criteria.


Yes, but note that the new code imposes a cost in understanding at first, because it doesn't work the way the old code does. Then it makes up for that cost by just being so much better that it's on net easier to understand anyway. That doesn't mean the cost isn't there.


> Code unambiguously defines 'what' it's doing, so I'm not sure what value technical consistency provides

Unless it’s really convoluted, most of the time I’m reading code, I’m concerned with why it’s doing something. Technical consistency helps understand the intent by removing distractions and the need to switch paradigms.


A consistent code base makes it easier:

* to read

* to onboard new people

* to review

If you have, e.g., 3 different ways of handling errors, it is a mess: one time you expect exceptions, another time monads, another time int values. (See the sketch below.)
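For instance, a sketch of picking one convention (std::optional here, purely as an example) and using it everywhere, so readers always know what failure looks like:

    #include <optional>
    #include <string>

    // One failure convention across the code base: every fallible function
    // returns std::optional, so callers never guess between exceptions,
    // error codes, and sentinel values.
    std::optional<int> parsePort(const std::string& s) {
        try { return std::stoi(s); } catch (...) { return std::nullopt; }
    }

    std::optional<std::string> hostFromUrl(const std::string& url) {
        const auto pos = url.find("://");
        if (pos == std::string::npos) return std::nullopt;
        return url.substr(pos + 3);
    }

    int main() {
        return parsePort("8080").value_or(0) == 8080 ? 0 : 1;
    }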


It also makes it faster to write code. If the code base has a strong culture around doing X, then whenever you do X, you don’t have to spend time thinking about the best way to do it.


I’m surprised that tests are only mentioned as a way to squash bugs and not as a way to make refactoring faster. Just today I converted a large library into a service and brought in some endpoints from another service. The diff is several thousand lines. But I’m not too worried about bugs because we have decent test coverage.

I’m not sure why, but I really like working with legacy code (as long as I’m given some autonomy to improve it over time). Part of the enjoyment is the restraint, the humility when facing code that, despite not being squeaky clean, has racked up millions or billions of hours getting shit done. The code knows itself better than you do.


I once, for a short period, maintained a web application that had three different frontend frameworks (Angular 1, Angular 2, and one other I don't quite remember) and four different JavaScript builds that had to be run to build the application. Apparently management prioritized giving demos to investors, and migrations to the different frameworks were aborted halfway through.

It was completely impossible to work with. Each week the build failed for new random reasons. I hardly dared touch the thing.

This "pattern" is really a failure from multiple parties:

* Managing software engineers is an art, and you really need to understand what is happening to succeed. Only prioritizing short-term goals just ensures you're going to fail in the long-term. Make sure you understand technical debt. The speed of work in bad code-bases versus good code-bases can be orders of magnitude in difference.

* Software engineers really need to use branches properly. Work that is halfway done should not be in the main branch. Consistency and simplicity are king here. Maintaining an old and a new version of software for a while can be a pain, but it's much better than maintaining a halfway-converted application. Pressure from management is no reason to release stuff halfway done. And if you need to demo, release a specific branch.

Nowadays I don't even ask to do necessary maintenance. It's just part of the job. Always stick to the boy scout rule (leave things in a better state than you found them). Make your code-bases cleaner incrementally, and eventually you'll be in a much better state.


I've coined this "architecture archaeology" at work. The deeper you dig, the more new technologies you find in use, each corresponding to a change in technical leadership.


Technical leadership determines technology... with some (or a lot) of "legacy" tech remaining in place.

At a company, I was brought in to help Version 2 get off the ground (serverless everything, TypeScript). Version 1 worked just fine (PHP + VM); they kept none of it. When I left two years later, the new tech leadership started Version 3 (Ruby).

Three rewrites in three years!


Struggling with this on a current project, except I’d say it’s not currently consistent. It already has “lava layers”, lots of bugs and unintended behavior devs struggle to reproduce in dev/test. We’re debating rewrite vs refactor now. I find myself going back and forth between dogmatic and pragmatic approaches. Which maybe is a good balance to try to strike.


You already know the answer: rewrite is almost never the right option. If you’re not familiar with everything the code does, how will you rebuild it? Step 1 is to get some tests up to allow yourself to refactor quickly without breaking too much stuff.


In other words: any incomplete refactoring is technical debt.

When considering a refactor, always make sure you can finish it completely.




