I don't think a method being tied to an instance is a good example of what people call "tight coupling".
OOP can be used to design relatively uncoupled systems.
One of the lessons learned in all these decades is what the grandparent post alludes to, which is distilled into "prefer composition over inheritance". Implementation inheritance (as opposed to interface inheritance) indeed introduces coupling and is therefore discouraged in current advice.
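Roughly, the difference looks like this (a toy Python sketch; the Stack classes are just for illustration):

    # Implementation inheritance: Stack inherits list's entire
    # interface (insert, __setitem__, ...), so callers can bypass
    # the stack discipline, and changes to the base class ripple
    # into Stack.
    class InheritedStack(list):
        def push(self, x):
            self.append(x)

    # Composition: the list is an internal detail; the coupling
    # surface is exactly the methods defined below.
    class ComposedStack:
        def __init__(self):
            self._items = []
        def push(self, x):
            self._items.append(x)
        def pop(self):
            return self._items.pop()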
> OOP can be used to design relatively uncoupled systems.
Never go full Object-Oriented Programming.
In this case I think it is valuable to make a distinction between OOP, which is a style of programming, and object-oriented languages, which are just languages designed with that style in mind.
I have seen issues in codebases where developers have used OOP as a style to aspire to, applying it in an academic sense. They tend to use inheritance frequently, build deep inheritance trees, and suffer from hidden coupling through these.
On the other hand, those who use object-oriented languages in a mostly functional style (side-effect-free functions, effective immutability, and almost no use of inheritance) tend to produce codebases that are much healthier in the long term.
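For illustration, that style looks something like this (a rough Python sketch; Account is a made-up example):

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)          # effectively immutable
    class Account:
        owner: str
        balance: int

    # Side-effect free: returns a new value, mutates nothing.
    def deposit(acct: Account, amount: int) -> Account:
        return replace(acct, balance=acct.balance + amount)

    a = Account("alice", 100)
    b = deposit(a, 50)               # a is untouched; b.balance == 150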
So it's fine to use OO languages, but never go full OO programming.
I think the poster child of going full OOP (one you can actually look at) is ACE/TAO [1], an implementation of CORBA. It had deep inheritance trees and abstractions piled on abstractions.
Similar to Mach and microkernels, folks ran ACE/TAO and thought CORBA was slow, when it was just the implementation that was not built for speed.
Agreed, but that wasn't what I was saying or replying to, was it?
I was arguing that a method implementation being tied to an instance isn't the kind of thing people mean when they refer to tight coupling. Coupling is about breakage and maintenance: when you touch this thing here, if it's tightly coupled to some other component, that other component will also require (sometimes unexpected) changes.
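For instance (a contrived Python sketch):

    class Engine:
        def __init__(self):
            self.rpm_table = [800, 2400, 6000]   # internal detail

    class Dashboard:
        # Tightly coupled: reads Engine's internals directly, so
        # renaming rpm_table or making it a dict breaks Dashboard
        # too -- a change "here" forces a change "there".
        def redline(self, engine: Engine) -> int:
            return engine.rpm_table[-1]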
Whether one should or shouldn't go full OO is an orthogonal consideration.
It’s better to have all your logic be loosely coupled, down to the smallest primitive.
What’s the point of tying groups of logic together, gluing them up with state, and calling that the fundamental unit of composition?
You see the problem? Two years down the line you find out that a certain class has methods that would be better reused in another context, but it’s so tightly coupled that the refactoring is insanity.
Better to have state decoupled from functions, and functions decoupled from each other rather than tied together by common state. If you do this you get rid of the fundamental technical debt that arises from OOP. You guys don’t see it: OOP is a major cause of technical debt because of tight coupling.
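Compare (a toy Python sketch; Report is made up):

    # Method tied to instance state: usable only via a Report.
    class Report:
        def __init__(self, rows):
            self.rows = rows
        def average(self):
            return sum(self.rows) / len(self.rows)

    # State decoupled from function: plain data in, plain data out,
    # reusable in any other context two years from now.
    def average(rows):
        return sum(rows) / len(rows)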
We can’t predict the future. You can’t guess that a method that exists in class A will, two years down the line, be a better fit in class B or as its own class. So since you can’t know the future, isn’t it logically better not to couple all your logic together into these arbitrary bundles called classes?
Break your functions down into smaller modules of computation. The class is too large a unit.
But then you ask: how do I create bigger abstractions? Just compose functions together to form bigger functions. For state, compose struct types together to form bigger structs. Building your abstractions this way lets you break them back down into smaller units whenever you want!
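A quick sketch of what I mean (the types here are made up):

    from dataclasses import dataclass

    # Compose structs into bigger structs.
    @dataclass
    class Name:
        first: str
        last: str

    @dataclass
    class Address:
        city: str
        zip_code: str

    @dataclass
    class Customer:                  # bigger struct from smaller ones
        name: Name
        address: Address

    # Compose functions into bigger functions.
    def full_name(n: Name) -> str:
        return f"{n.first} {n.last}"

    def mailing_label(c: Customer) -> str:
        return f"{full_name(c.name)}, {c.address.city} {c.address.zip_code}"

    # Name, Address, and full_name all remain reusable on their own;
    # no shared state glues them to Customer.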
You can’t break down the class. The class is stuck. I can’t reuse a portion of state in another context and I can’t do the same thing with my methods. What’s the point of using classes to place arbitrary and pointless restrictions on modularity? None.
I agree that in many cases there is a problem, and indeed, objects can be designed too coarsely.
I agree that in many cases it leads to problems of composition. Some design principles have been devised to mitigate this, such as the "Single Responsibility Principle" (among others). Nothing is foolproof, however, and everything is further complicated by the fact that no one seems to agree on precise definitions of any of these principles.
God Objects are one such known problem: highly coupled, low-cohesion functions grouped into an arbitrary object.
Objects naturally group related functions in some cases (when they truly conform to a coherent entity), so I guess I disagree they are always wrong. But when OOP became fashionable, designers started thinking everything must be an object, and this is obviously wrong -- but is it OOP's fault, or was it the fault of its adopters? The "everything is an object" mantra is indeed misguided when applied to every software system.
Functions can fall prey to the same faulty thinking. I've seen many functions "in the wild" that do too many things, tweakable through too many parameters. They usually must be refactored.
In fact, refactoring is where you split objects that have become too large or ill-defined for their own good, is it not?
In the end, I think this is more about good software engineering practices than about "one must use/must not use OOP/FP" or whatever other programming style.
>I agree that in many cases there is a problem, and indeed, objects can be designed too coarsely.
Or don't put your methods in an object at all. Then you don't even need to worry about objects being designed too coarsely, because the object doesn't exist in the first place.
>Objects naturally group related functions in some cases (when they truly conform to a coherent entity), so I guess I disagree they are always wrong.
Think of it like this: you can build a Lego project by gluing all the pieces together (aka OOP), but I would say this is always wrong, because if you just connect the pieces without glue they still hold together, yet they can be split apart at the same time. In OOP your mistakes may not be evident until years later, OR changing requirements make the glue hard to remove...
Thus I say it's always wrong to use OOP. Just don't glue anything together. Leave it all decoupled. There's no point in bringing glue to a Lego set.
>Functions can fall prey to the same faulty thinking. I've seen many functions "in the wild" that do too many things, tweakable through too many parameters. They usually must be refactored.
So? It's not like taking the SAME function and placing it in an object solves this problem. The problem you describe is completely orthogonal to the issue I'm describing, because it exists in your logic regardless of whether that logic is a method or a function.
>In fact, refactoring is where you split objects that have become too large or ill-defined for their own good, is it not?
Yeah, and if your logic were a collection of functions you wouldn't have to spend that inordinate effort removing the glue. All you'd need to do is recompose the Lego building blocks in a different way, because there wasn't any glue holding them together (if you didn't use OOP).
>In the end, I think this is more about good software engineering practices than about "one must use/must not use OOP/FP" or whatever other programming style.
I didn't specify FP here. What I'm saying is that OOP is NOT good software engineering practice.