When a manager walks in with a new requirement, you'd rather be able to do it in 15 minutes than 2 hours, because managerial injections are annoying and you'd rather get back to more interesting work. Sure. I get that.
However, once enough hands have passed over code, with hackish fixes thrown in to satisfy dipshit flavor-of-the-month requirements, then if it's easy to inject complexity without considering the loss of conceptual integrity, people will do so. What you get over time is spaghetti. It doesn't happen overnight. Almost no one sits down and says, "I'm going to write a bunch of shitty spaghetti code."
Spaghetti code is rarely written by one person. It's an "all of us is dumber than any of us" phenomenon. Even Google has some spaghetti code, and I've never met an incompetent engineer at Google. (Project and people management would be a different discussion.)
The nice thing about FP is that changing behavior (except for performance) actually requires altering an interface. In a statically typed language, this requires making changes elsewhere in the code to accommodate the new information flow. Yes, it's time-consuming, but it also forces you to think about how your change is going to affect the whole system. That's a good thing.
Writing code shouldn't be easy, because reading average-case code is impossible. If it takes 50% longer to write code, but the result is that people can still comprehend it at scale, I call that a major win.
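A minimal sketch of what "changing behavior requires altering an interface" looks like in practice (the names `price`, `total`, and `catalog` are hypothetical): a lookup that once returned a plain `Int` gains a failure case, and the type change forces every caller to be updated, with the compiler finding them all.

```haskell
import qualified Data.Map as Map

-- Before (sketch): price :: Map.Map String Int -> String -> Int
-- After a new requirement allows lookups to fail, the failure case
-- becomes part of the interface:
price :: Map.Map String Int -> String -> Maybe Int
price catalog item = Map.lookup item catalog

-- Every call site must now say what happens on a miss; you cannot
-- quietly ignore the new information flow.
total :: Map.Map String Int -> [String] -> Maybe Int
total catalog items = sum <$> traverse (price catalog) items
```

So `total catalog ["a", "b"]` succeeds only if every item is present; one missing key and the whole result is `Nothing`, visibly, in the types.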
> to think about how your change is going to affect the whole system. That's a good thing.
I don't think you've made a coherent argument for why that's a good thing. Putting in a printf for debugging is not spaghettification, and it does not (and should not) affect the whole system.
You can print lines for debugging using the trace function in Haskell. Or use unsafePerformIO, which is usually outlawed in production code but just fine in a transient state for debugging.
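For the skeptical, here is what both tricks look like (the `factorial` and `debugShow` examples are mine, not from the thread). `trace :: String -> a -> a` from Debug.Trace prints to stderr and returns its second argument unchanged, so it drops into pure code without touching any type signatures:

```haskell
import Debug.Trace (trace)
import System.IO.Unsafe (unsafePerformIO)

-- trace prints its message to stderr as a side effect, then behaves
-- as the identity on its second argument.
factorial :: Integer -> Integer
factorial 0 = 1
factorial n = trace ("factorial " ++ show n) (n * factorial (n - 1))

-- unsafePerformIO smuggles arbitrary IO into a pure function:
-- fine as a transient debugging aid, outlawed in production code.
debugShow :: Show a => a -> a
debugShow x = unsafePerformIO (print x >> return x)
```

Neither changes the value computed, which is exactly the point: you get printf-style visibility without spaghettifying any interfaces.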
On "writing code shouldn't be easy", you're right that this statement, out of context, seems ridiculous. What I mean is that, when there are trade-offs between making it easy to write vs. read code, generally the reader's interests should be favored, because legible code is so rare. Obviously it's better for it to be easy to write code and to read it, but ease-of-writing is a secondary priority for large, production systems.
I pretty much agree with everything you say here. Thanks for the trace and unsafePerformIO tips.
> ease-of-writing is a secondary priority
In my experience, once you have enough experience and have witnessed the suck of maintaining crappy code, writing good code becomes second nature and doesn't take conscious effort.
There's a lot of truth in what you're saying. However, if maintaining shit code turned people into good coders, then there'd be a lot less bad code (because the precursor is so damn common). I think it also requires curiosity, desire to do things right, and access to the right information (which is now trivial if one knows where to look).
What I would actually say is: the differential in difficulty between writing good code and writing bad code should be minimal or, if possible, negative.
Actually, I disagree with your first point. My first job was maintenance programming, and for some reason I stuck around and did it for 3 years. That taught me the suck of trying to maintain and work with shitty code.
Now, X years later I still tell stories about that first job, and I always remember how I felt maintaining spaghetti code when I write code.
BUT, I find that most of my fellow programmers either have not worked as maintenance programmers, or have done so but it didn't register, because I guess they don't have a built-in desire for arete, or they just didn't want to get upset about their work and chose to ignore the crappy code.
> it also forces you to think about how your change is going to affect the whole system. That's a good thing.
How do you reconcile that with software's pervasive idea of abstractions, which lets us avoid exactly what you say is a good thing?
Abstraction is a good thing, but what I'm talking about here is object-oriented programming. An "object" is something of which one has incomplete knowledge. OOP is, I would argue, about managing incomplete knowledge, on the (correct) assumption that if systems get large, it will be inevitable that people don't understand all the details of what they're working with.
Good abstractions deliver a lot of value. Files and sockets, for example, should be treated as objects, because the client should (usually) be agnostic of how the things are actually implemented. So OOP isn't all bad.
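A minimal sketch of that file/socket point in Haskell terms (the `Sink` class and its instances are hypothetical names, not a real library): clients write through one interface and stay agnostic of whether the thing behind it is a file, a socket, or an in-memory buffer.

```haskell
import Data.IORef

-- The client-facing interface: something you can push messages into.
class Sink s where
  push :: s -> String -> IO ()

-- One implementation backed by a file on disk...
newtype FileSink = FileSink FilePath
instance Sink FileSink where
  push (FileSink path) msg = appendFile path (msg ++ "\n")

-- ...and one backed by an in-memory buffer, e.g. for tests.
newtype MemSink = MemSink (IORef [String])
instance Sink MemSink where
  push (MemSink ref) msg = modifyIORef ref (++ [msg])

-- Client code depends only on the interface, never the implementation.
logAll :: Sink s => s -> [String] -> IO ()
logAll sink = mapM_ (push sink)
```

That's the good half of "objects": `logAll` has genuinely incomplete knowledge of the sink, and that's exactly what you want.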
What's messy is a paradigm that gives an unbounded right to inject complexity. Programmers don't have enough autonomy in most corporations to do things right, so the result in most cases of Brownian-motion requirements is an increasingly gnarly pile of hacks. What Haskell does well (being both functional and statically-typed) is punish that style of development while it occurs, making it so that the kludge isn't always the path of least resistance.
That's a feature, because it protects against spaghettification. Explained here: http://michaelochurch.wordpress.com/2012/12/06/functional-pr...