Too many software charlatans with too-big responsibilities (architecture/strategy), too many books read that were written by evangelists, and way too little experience with the whole software development lifecycle.
Write software, update your CV 1.5 years later with the fancy buzzwords you used, change jobs for better comp, and repeat. Who cares how the design matured?
They always have some method/approach that is unparalleled by every metric, except that this is never reflected in reality.
Years of brainwashing have caused new developers to give you strange looks if you use an "if" statement or write comments in your code.
They also act as if a design pattern were some holy code instead of just a name for an approach to a problem (just normal code, but with a name).
The aversion to comments is especially painful. There is no such thing as self-descriptive code.
Same with if statements. There is a point to be made about encoding logic in the type system (type-state pattern), which is great. However, encoding it in indirection and abstraction in the name of saving a couple of lines of imperative code is an especially egregious cancer.
Here is the problem with the majority of comments I see in the wild.
They just repeat what the code is doing. And they are often wrong, because they are not functional: they don't cause compilation errors, they don't cause crashes, and they don't make tests fail, so errors in comments tend to go unnoticed. It is sometimes so bad that I have trained myself not to read comments, as they can be deceptive.
Comments are a side channel, and IMHO strictly a side channel. They can be used to express what can't be expressed in code. A common usage is to explain why you chose one solution over another.
I have absolutely no problem with "if" statements, and I also think too much indirection is cancer. I do have a problem with boolean parameters, however. They tend to result in confusing and error-prone calls like style(true, false, true), instead of something like style(ITALIC, NO_BOLD, UNDERLINE). In C/C++, I then use "switch" instead of "if". The advantage of "switch" is that the compiler warns you if you forgot a case, and you also avoid the problem of accidentally reversing the condition. Make sure your compiler warns you of unintended fall-through too.
That’s indeed a problem. However, writing no comments can also be a problem, so one has to find a compromise.
I think comments are helpful given these two criteria are met:
* The comment is concise.
* The comment reveals something that is not obvious in the code that follows immediately after.
It is then also easier to spot and fix outdated comments.
A typical problem that I have encountered with the “no comments” approach is that developers then have to name stuff. And naming stuff is hard. Writing a sentence is much easier. This can be a bigger issue than an outdated comment.
As an example, I just reviewed a code base and found a class named "CaptionSubtitles". It had a comment, but that also suffered from a language gap. I think this is an issue that is widely overlooked. Code expresses semantics, so names matter. A comment can in this case at least hint at a concept or thought that the original author, who of course left a couple of years ago, had in mind.
Yeah, comments like what you describe should simply be removed on sight. I think they are most useful in describing domain-specific context around some logic, that may not be obvious by reading the code alone.
No, comments tell me what you wanted to achieve. Code only tells me what you actually achieved.
Winston Royce's (fantastic) 1970 paper "Managing the Development of Large Software Systems" accurately anticipated how things would end up working 30+ years later in shops that try to use 'self-documenting' code as a substitute for good documentation.
> Without good documentation every mistake, large or small, is analyzed by one man who probably made the mistake in the first place because he is the only man who understands the program area.
> When system improvements are in order, good documentation permits effective redesign, updating, and retrofitting in the field. If documentation does not exist, generally the entire existing framework of operating software must be junked, even for relatively modest changes.
Add modern-day turnover rates on technical teams, so that the person who originally wrote the code is unlikely to be around to help with analyzing problems, and you've got an excellent recipe for the current chronic burnout status quo.
"Why" at the user level belongs in executable specifications.
"Why" at the implementation level belongs in comments, but that should be fairly rare. Most code shouldn't provoke the question "why on earth did they implement it like this?"
Most of my job is figuring out why things were built the way they were. That's the only way I can know if I'm making the right changes without breaking something else in the process.
That is to say, sure most code is obvious. I don't work on most code. I work on the parts that need fixing.
I feel there is a huge over-indexing on Clean Code and self-describing code. Essentially, are we suffering from a journeyman problem, where the commentary we focus on is written by those who are new and have trouble understanding basic syntax? In that case the comments can largely be removed, as describing a code statement with a comment can indeed often be replaced with a better statement.
Though, comments are so much more. They can be like the foreword in a book, the intro paragraph of a paper. While every paragraph might be easy to understand, a foreword to help the reader know what to expect can be invaluable.
Comments can be a scaling tool. A few lines above a test to tell me what is being tested might take 5s to digest, compared to reverse-engineering the code, which might take 25s or longer. Multiplied over a dozen test cases, you have something that can be understood in a minute instead of a dozen minutes. Multiply that out to a half dozen test suites or more and it is a time savings of hours.
As another analogy, it's like someone providing a travel itinerary. If you know the overall itinerary, the individual parts become expected and obvious, so much of the guesswork of "where are we going with this" is removed. Instead, that is replaced with "I now expect these three steps", which makes it easier to recognize those steps and fit them into place.
Another analogy is a puzzle. Well written code is like having very large pieces where you can see a lot of detail. Good commentary is like having the full puzzle picture provided to you. Having both makes for an easy puzzle (which requires less time to understand, which means it requires less time to modify.)
As has been said a bazillion times, the problem is that comments get outdated quickly. And yes, there is self-descriptive code, and I'd wager 80 percent of code is self-descriptive. Comments should be reserved for describing something that might be surprising or not obvious. And for APIs.
Self commenting code gets outdated quickly too. Not the code itself, but the names. Just because a developer managed to embed their comments into function and variable names doesn't suddenly make them immune to staleness.
I've been solving a lot of codebase pattern/anti-pattern nonsense by pushing all of the types, properties and relationships into SQL. It took me about a decade to get to the point where I was done trying to be more clever than 40+ years of computer science aggregated into one magical box. A well-designed schema & clean SQL is an ego killer. It is so goddamned boring. Who would want to work on something that easy? Where did all the code go!?
I'd argue database-centric design is the only rational place to start if there is money on the table. Done correctly, this forces you to have deep & meaningful conversations with business stakeholders and customers. It was arguably the only way to build this stuff until the resume stat padders wandered into the job market and started throwing nu-SQL & "patterns" at everything that moved, or otherwise drew everyone's attention away from what is effectively a perfect solution for 99.99% of business needs.
Completely agree, show me the data and I'll tell you if your solution matches the problem you are trying to solve.
So much of that impedance is unclear thinking and overly complex solutions, and it usually combines with inconsistent naming schemes across the stack.
It's easier to gatekeep if you're the only one who knows a "game" equals a "contestQuest" because you've created some overloaded polymorphic "quest" system in your backend.
The best thing I ever did was have a personal project that I've been on for 7 years. It forced me to design in an intuitive, tested, structured way. Because I will come back to an area in a few months and not know what the hell is going on.
Usually you don't get a raise because you deserve it, you get it because the company believes keeping you will result in a more positive financial outcome in the next few quarters than replacing you.
Plenty of companies give their engineers a chance to pay down their tech debt, whether it is bad code, bad infra decisions, etc. If you don't believe some engineer is capable of paying down their tech debt, you may as well let them go. Or not give them a raise and hope they leave, which at most places is much cheaper.
It really depends. Sometimes people try to stuff way too much strategy-picking in if-conditions when they ought to delegate the behavior to some strategy-objects instead, where each code path could be better understood and tested in isolation. (Subclassing can also work here but it’s less flexible.)
I also particularly dislike it when you’re trying to support two different formats of payload in the same API, and if() based on presence of the new field. Expedient, I’ll grant, but if not reversed quickly it’s soon absolutely unintelligible. The more-stable systems which have to support clients sending older formats do better by versioning the format, and just having multiple implementations doing whole-payload validation, and delegating to the same underlying task-object (you do have one of those outside the class that’s just about interfacing with HTTP, right? ... okay that’s fine, I understand, but you’re gonna do it now.)
On the other hand, sometimes an if-statement is the logic itself. That’s fine.
A lot of it comes down to a paradigm conflict. In general, if statements are fine. But they're not really considered to be object-oriented. Smalltalk technically doesn't even have an if statement.
I will say that, when I see object-oriented code that branches on the particular subclass of a value at run time, it's often one of the earlier signs that the code is getting messy. Subclassing is supposed to be used for a "tell, don't ask" style of programming. A lot of the problems with OOP that people like to complain about aren't really problems with object-oriented programming, per se; they're problems that crop up when object-oriented and procedural programming are mixed in an uncontrolled manner.
> I will say that, when I see object-oriented code that branches on the particular subclass of a value at run time, it's often one of the earlier signs that the code is getting messy.
Suppose you have an AST, which has Node and its various subclasses like ExpressionNode, which in turn has e.g. ConstantExpressionNode, and so on, with many, many others.
How would you then avoid branching based on type?