
These conversations aren't helped by the fact that OOP isn't well-defined, and observations about 'typical' OOP--the banana that has a reference to the gorilla and transitively the entire jungle, or the pervasive abuse of inheritance--are met with "that isn't really OOP, and you can write bad code in any paradigm!", even though other paradigms (for whatever reason) don't exhibit these issues at anywhere near the scale that OOP does. Inheritance and god objects were certainly how OOP was taught at universities and in the predominant literature when I was in school circa 2010. And if these (mis)features aren't part of the definition of OOP, then what distinguishes it from functional and/or data-oriented paradigms? Encapsulation? FP seems to have encapsulation, and DO is orthogonal to it. Certainly localization of state is common to both FP and DO--it's not an OOP innovation.

If OOP is so abstract as to be indistinguishable from other paradigms, then what value does it add?



An object is in some ways a function that defines its own parameters (fields and methods) and arguments (state). This then is injected into another such function.


if it's a function, what are the inputs/outputs? do you mean that in the way of "an object is a poor person's closure"?

in case anyone's unfamiliar, here's one basic way to emulate objects with closures. i think i saw it in SICP (?), reproducing it here because i just love how simple it is:

  // might be easier to first look at
  // the usage example at the bottom

  let Point = (x, y) => {
    let self = (message, args=null) => {
      switch (message) {
        // getters
        case 'x': return x;
        case 'y': return y;

        // some operations
        // (immutable, but that's not required)

        case 'toString':
          return `Point(x=${x}, y=${y})`;

        case 'move':
          let [dx, dy] = args;
          // use our getters
          return Point(self('x')+dx, self('y')+dy);

        // let's get DRY!
        case 'plus':
          let [other] = args;
          return self('move', [other('x'), other('y')]);

        default:
          throw Error(`unknown message: ${message} ${JSON.stringify(args)}`);
      }
    };
    return self;
  };
  
  
  let p1 = Point(3, 5);
  // sending messages
  p1('x') === 3;
  p1('y') === 5;
  
  p1('move', [1, 2])('toString');
  // --> "Point(x=4, y=7)"
  
  let p2 = Point(1, 2);
  p1('plus', [p2])('toString');
  // --> "Point(x=4, y=7)"
  
dynamic dispatch & message passing, just like that! and you can easily do `__getattr__/method_missing`-style dynamism just by looking at `message`.
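for instance, here's a rough standalone sketch of that fallback idea (hypothetical, not part of the example above): a record whose default case resolves any unrecognized message against a plain fields object at runtime:

  // hypothetical sketch: unknown messages are handled
  // method_missing-style by a dynamic lookup in `fields`
  let Record = (fields) => {
    let self = (message, args=null) => {
      switch (message) {
        case 'keys':
          return Object.keys(fields);
        default:
          if (message in fields) return fields[message];
          throw Error(`unknown message: ${message}`);
      }
    };
    return self;
  };

  let r = Record({name: 'Ada', year: 1815});
  r('name') // 'Ada'
  r('keys') // ['name', 'year']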

for the other way around, see how Java lambdas desugar to objects with a "call" method and closed-over variables as members.
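(a rough sketch of that correspondence in JS terms, not Java's actual desugaring: the captured variable becomes a member and the lambda body becomes a single `call` method)

  // the closure version: `base` is captured by the returned function
  let makeAdder = (base) => (n) => base + n;

  // a hypothetical object rendering of the same thing: the captured
  // variable is a field, the body is the `call` method
  let makeAdderObj = (base) => ({
    base,
    call(n) { return this.base + n; }
  });

  makeAdder(10)(5)         // 15
  makeAdderObj(10).call(5) // 15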


> do you mean that in the way of "an object is a poor person's closure"?

Objects are a poor man's closures. And closures are a poor man's objects.

Modern languages have both. And they serve different purposes.


i intended to put "(and vice versa)" in a footnote, but forgot about it! you can see a relic of that in the last paragraph.

though i must say, i'm pretty happy in languages that have closures but don't have objects, as long as there's a nice way to do ad-hoc polymorphism (like traits/typeclasses)


also, i just made a possibly interesting connection with JS methods (and their weirdness). in the above implementation, a message is first class, so you can send the same message to multiple objects:

  let moveUp = ['move', [0, 1]];
  p1(...moveUp)
  p2(...moveUp)
but you can't get a `move` that's "bound" to a particular object¹ – the "receiver" is decided when you "send" the message. which reminds me of how JS methods work: `this` is bound to what's "to the left of the dot" when you call a method, so unlike Python,

  obj.foo(1)
is not the same as

  let f = obj.foo
  f(1) // error: `this` is no longer obj (undefined in strict mode)
maybe there's a connection to some language that inspired JS' OO model, with prototypes and all that?

---

¹ well, unless you explicitly wrap it in another closure like

  (args) => p1('move', args)


p1.move.bind(p1)


i know :) the point is that that's not the default (unlike many other languages), and i was wondering about a possible origin for that choice.


btw this could (sorta) be considered a special case of the fact that a tuple/struct (product type) can be represented as a function:

  let pair = (a, b) => (
    (ix) => (
      ix === 0 ? a :
      ix === 1 ? b :
      undefined
    )
  );

  
  let p = pair(3, "foo");
  p(0) // 3
  p(1) // "foo"
i've seen something like this used as the definition of tuples in a math context. (there are other ways to do it too, like Church/Scott encoding)
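for reference, here's a quick standalone sketch (hypothetical, just to illustrate) of the Church-style version: the pair is a function that hands both components to whatever "consumer" function you give it:

  // hypothetical sketch of a Church-encoded pair
  let churchPair = (a, b) => (f) => f(a, b);
  let first  = (p) => p((a, b) => a);
  let second = (p) => p((a, b) => b);

  let q = churchPair(3, "foo");
  first(q)  // 3
  second(q) // "foo"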

however the object has the extra wrinkle of self-referentiality, because the `self` function is defined recursively


This is a neat example, but wouldn't dynamic dispatch imply polymorphism?


It’s dynamic dispatch because the decision about what code to call happens at runtime (via the switch statement) rather than compile time.


But that's essentially a table lookup, not a vtable lookup, right?

Dynamic dispatch implies that you dispatch based on the class of the receiver. For instance, you can't have a Point3 value that uses Point's implementation of 'x' and its own implementation of 'toString' under this example.


  let Point3 = (x, y, z) => {
    let parent = Point(x, y);
    let self = (message, args=null) => {
      switch (message) {
        // getters
        case 'z': return z;

        // some operations
        // (immutable, but that's not required)

        case 'toString':
          return `Point3(x=${x}, y=${y}, z=${z})`;

        case 'move':
          let [dx, dy, dz] = args;
          // use our getters
          return Point3(self('x')+dx, self('y')+dy, self('z')+dz);

        // let's get DRY!
        case 'plus':
          let [other] = args;
          return self('move', [other('x'), other('y'), other('z')]);

        default:
          // fall back to the parent for anything we don't
          // handle ourselves (e.g. 'x' and 'y')
          return parent(message, args);
      }
    };
    return self;
  };


maybe "extremely late binding" would be more appropriate? it's true that usually "dynamic dispatch" means a vtable, but i think this is dynamic dispatch in spirit - in fact "dispatching" messages to implementations is pretty much the only thing a closure-object does :) i chose to write the impls inline, so they're p much impossible to reuse, but that's not required.

> you can't have a Point3 value that uses Point's implementation of 'x' and its own implementation of 'toString' under this example.

you can't do much of anything under this example! i didn't think a whole object system would fit in a HN comment, i intended it to be minimal :)


this is particularly visible in the implementation of `plus` – `other` might not be a Point at all, so `other('x')` and `other('y')` might do anything; which i'd say is textbook (ad-hoc) polymorphism.

i remember reading this somewhere: "branching on values is the ultimate dynamic dispatch" :)


Excellent example, thank you.

Edit: To answer your question, the set of all public methods could be seen as one, but not the only, interpretation of the functional interface.


thank you :) i hope i didn't hijack the thread! but i just love sharing stuff like this

> the set of all public methods could be seen as one, but not the only, interpretation of the functional interface.

could you describe another interpretation? "the set of public methods"¹ is the only thing i can think of when i hear about an object's "functional interface". it could be a terminology issue though – i'm reading "functional interface" in a general sort of way, but i could imagine it having some specific definition in OO theory.

---

¹ or recognized messages, in a more smalltalk-ish context


The other interpretation has slipped my mind since yesterday, sorry mate. I do think it's an interesting idea that I now see I have no formal understanding of. Cheers.


shame! cheers :)

PS. if you're interested in weird perspectives on this, you might find Coinduction (and "codata") interesting. from wikipedia:

> Informally, rather than defining a function by pattern-matching on each of the inductive constructors, one defines each of the "destructors" or "observers" over the function result.

it's a deep rabbit hole, but i remember it being applied to OOP-ish things – "destructors" would roughly correspond to exposed methods.
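a tiny (hypothetical) taste of that flavour, in the same closure style: an infinite stream isn't built up from constructors, it's defined entirely by what its observers return:

  // hypothetical sketch: an infinite stream of naturals, defined only
  // by its "observers" ('head' and 'tail'), never constructed in full
  let from = (n) => (observer) => {
    switch (observer) {
      case 'head': return n;
      case 'tail': return from(n + 1);
    }
  };

  let naturals = from(0);
  naturals('head')                 // 0
  naturals('tail')('tail')('head') // 2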


In Perl an object is literally a data structure with functions glued on, which is kind of the opposite of what you say!

Usually this is a hash (dictionary) which is similar to the object with fields that we all know, but it can just as easily be an array or scalar value.


> These conversations aren't helped by the fact that OOP isn't well-defined, and observations about 'typical' OOP--the banana that has a reference to the gorilla and transitively the entire jungle, or the pervasive abuse of inheritance--are met with "that isn't really OOP, and you can write bad code in any paradigm!"

From Alan Kay (who coined the term)[1]:

> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them.

Almost anyone who has dealt with present-day OOP (Python, Ruby, C++, C#, others, or, God forbid, Java) has had the same thoughts as you, I'm guessing. I've come to the conclusion that OO as originally envisioned had a lot of good ideas, and these are surprisingly compatible with and complementary to a lot of the (almost-)pure FP ideas that have gained traction over the last few years.

OOP also had huge mistakes:

- Class inheritance is the biggest one. Kay himself mentions inheritance appearing only in the later versions of Xerox's Smalltalk. No need to really elaborate. Implement interfaces, compose behavior and don't inherit. He has posted some more of his thoughts on this exact topic.[2]

- Java was a giant industry-harming one. Now multiple generations of programmers have been brought up to think that the definition of an object is to write a class (which is not an essential characteristic), declare internal state and then write accessors and mutators for every one of them. We might as well write COBOL. But, having tried to solve problems in Java, I get why it's done. The OO is so weak and awful that I'd rather just crank out slightly-safer C in Java than jump through the horrific hoops. (Just as an aside, I find it sad that Sun had the teams that developed both Java and Self. Self used prototypes instead of classes, which influenced Javascript, but also had a killer JIT that made it super fast and a ground-breaking generational garbage collector. Sun ended up cannibalizing the Self team to assist with Java, and some of the features of the JVM were ports of work that had originally been done for the Self VM.)

- C++ was kind of a mistake, because it introduced the interminable "public" and "private" and now the endless litany of stupid access modifiers. They were needed because C++ had this need to be compatible with C and do dynamic dispatch and have some semblance of type safety (though C is only weakly typed) and have it run (and compile) before the heat death of the universe. C++ isn't as horrible as what came after it though IMO (i.e. Java).

- Xerox horribly mismanaging Smalltalk was another mistake. Xerox was smart enough to realize that Smalltalk was insanely valuable, but instead of realizing that it should be a loss leader/free drugs, they decided to lock it away in the high tower and then let everyone else pillage their research that could be commercialized (GUIs on the desktop, laser printers, Ethernet, among many others). This literally led Apple to develop Objective-C.

I really went down a rabbit hole on this topic around two years ago. Two ideas helped crystallize it for me, both from Kay.

One is that the Internet is an object-oriented system. It communicates via message-passing, it is fault-tolerant, and all of its behavior is defined at runtime (i.e. dynamically bound). We don't need to bring the entire Internet down when someone develops a new network protocol or when a gateway dies or some other problem happens. It continues to work and has in fact never gone down (though early on during ARPANet, there were instances of synchronous upgrades [3]).

The other, deeper one (and the much more open one) is that "data" is error-prone and should be avoided and is, in fact, "the problem" (i.e. the root of the Software Crisis). This sounds preposterous to most programmers.[4] In fact, I didn't even get it when I first heard it. I think the basic idea is that he uses "data" as a physicist or statistician would use it: quantities measured in the world with varying accuracy/precision.

This seemed preposterous or even heretical until I started noticing it in the large codebase I was working on at the time. A lot of the code was in a nominally object-oriented language, but the style was very procedural (i.e. "If a then do X, do Y; else do Z"). I noticed that all of our code was working on large data structures (think a relational database schema), but there was no coherent description of it or what it meant anywhere. If I touched one of the pieces of code, I invariably had to start adding more conditional branches in order to preserve correctness. In order to make any sense of the code, I made a ton of implicit assumptions about what the patterns I saw actually meant, rather than relying on the program itself to communicate this through its code or meaningful constraints on the data.

We would be better served with "information" with less noise and more signal than what we get with data and data structures (think Shannon). The only real success we've had with data is a few encoding schemes (e.g. ASCII/Unicode, 8-bit bytes, IEEE-754, etc.), but that approach clearly doesn't scale beyond a dozen or two data encoding schemes. OO (of the high signal-to-noise variety championed by Kay) at least has a story for this, which is that certain patterns do indeed have predictable characteristics, and complex patterns can usually be composed from a handful of simple ones. I have yet to see a system like this in practice, but I can imagine something like it could exist, given the appropriate tools (in particular, a good, state-of-the-art language that isn't some rehash of stuff from the 60s - Erlang might fit the bill).

The ideas of DO as articulated by Mike Acton are antithetical to the ideas of OO (although a well-designed future OO system - including late bound hardware - could probably represent and implement a lot of the techniques that Acton talks about to improve performance and quality). This doesn't mean they're wrong, but I see treating data as the _only_ thing that matters as being fundamentally opposed to the idea that data is actually the root of most evil. People adopting DO should be aware that robustly applying the technique in the long run will require having detailed knowledge of every aspect of the system on every change. (I think it's not a coincidence that it was originally developed for game dev, which often doesn't have the same long-term maintenance requirements as other kinds of software, and certainly not software like the Internet.)

FP and OO are more complementary. Immutability and referential transparency can be very valuable in object systems; encapsulation, extreme late binding and message passing can be very valuable in functional systems (in fact, Haskell and Erlang surpass "OO" languages when scored on some of these characteristics). Scala explicitly embraces the complementarity between the two paradigms.

[1] https://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_ka...

[2] https://www.quora.com/What-does-Alan-Kay-think-about-inherit...

[3] https://en.wikipedia.org/wiki/Flag_day_(computing)

[4] https://news.ycombinator.com/item?id=11945869





