Hacker News | rauljara's comments

This is true, in that most of the scholarship builds up its proofs starting with the lambda calculus. But there are so many paradigms (Turing machines, SKI combinators, excel spreadsheets) that are equivalent that I’m not at all convinced they had to start with lambda calculus. They just happened to.
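To make that equivalence concrete, here's a sketch (illustrative only, in Python rather than raw lambda calculus) of Church encodings: numbers and arithmetic built from nothing but composed anonymous functions.

```python
# Church numerals: a number n is "apply f to x, n times".
# Everything below is built purely from anonymous functions.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by counting function applications."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)

print(to_int(add(two)(three)))  # 5
print(to_int(mul(two)(three)))  # 6
```

The same arithmetic could just as easily be encoded on a Turing machine tape or in SKI combinators, which is the point: none of these encodings is privileged.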

Out in the real world, the thing that all programming languages are actually built on top of looks much more like a Turing machine than a collection of composed anonymous functions. But of course, if you want to make your programs go really fast, you can’t treat them like Turing machines either: much of this theory goes out the window given how important optimizing around memory access is.

Which isn’t to say one perspective is right and one is wrong. These perspectives all exist and have spread because they can all be useful. But acting like one of them is “reality” isn’t all that helpful.

Ps. Not that the parent actually said the formal perspective was reality. I just wanted to articulate this thought I had bouncing around in my head for a while.


> Out in the real world, the thing that all programming languages are actually built on top of looks much more like a Turing machine than a collection of composed anonymous functions.

Hardware logic as described in an HDL is precisely a collection of "composed anonymous functions", including higher-order functions which are encoded as "instructions" or "control signals". We even build stateful logic from the ground up by tying these functions together in a "knot", with the statefulness being the outcome of feedback.
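A toy illustration of that "knot" (in Python, not a real HDL): an SR latch is just two pure NOR functions fed back into each other, and the state emerges from iterating the feedback loop to a fixed point.

```python
# Toy sketch (not real HDL): state from pure functions plus feedback.
# An SR latch is two combinational NOR gates tied in a "knot".

def nor(a, b):
    return int(not (a or b))

def sr_latch(s, r, q, qbar):
    """Iterate the feedback loop until the outputs settle."""
    for _ in range(4):  # a few passes suffice for this tiny circuit
        q_next = nor(r, qbar)
        qbar_next = nor(s, q)
        if (q_next, qbar_next) == (q, qbar):
            break
        q, qbar = q_next, qbar_next
    return q, qbar

q, qbar = sr_latch(s=1, r=0, q=0, qbar=1)     # set
print(q, qbar)                                 # 1 0
q, qbar = sr_latch(s=0, r=0, q=q, qbar=qbar)   # hold: the state persists
print(q, qbar)                                 # 1 0
q, qbar = sr_latch(s=0, r=1, q=q, qbar=qbar)   # reset
print(q, qbar)                                 # 0 1
```

With s = r = 0, the gates compute the same outputs they already hold, which is exactly the "statefulness as the outcome of feedback" point.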


But it's hard to argue the machine at the end is stateless. We can endlessly do this. You can construct lambda calculus with Turing machines and Turing machines in lambda calculus.

There seems to be this weird idea in the functional community that the existence of some construction of one thing in another shows that one of those things is "more fundamental" than the other, when in reality this is often a circular exercise. e.g. Functions can be formalized as sets and sets can be formalized as functions.

Even worse in this specific case, the Church-Turing thesis tells us that they're equivalent, which is the only sensible answer to the question of which is more fundamental. There's an oft-quoted phrase of "deep and abiding equivalencies", and it bears pointing out how deep and abiding these equivalencies are. From a formal perspective they are the same. Yes, there are arguments to be made that typed lambda calculus and its relation to logic are important, and that's true, but it's not a formal argument at all, and I think it's best to be clear on that.


> You can construct lambda calculus with Turing machines and Turing machines in lambda calculus.

I realize that these models of computation are equivalent. My point was rather that the imperative paradigm collapses into the functional paradigm in practical programming when I disregard the admissibility of arbitrary side effects.

> e.g. Functions can be formalized as sets and sets can be formalized as functions

I can derive functions from set theory in my sleep, and I can kickstart set theory without functions, but I wouldn't know how to define the concept of a function without sets. And even if I did: I can't even specify the characteristic function of a set without resorting to the inclusion relation.
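For reference, the usual set-theoretic definition being alluded to: a function from A to B is a set of ordered pairs that is total and single-valued.

```latex
% A function f : A \to B as a set of ordered pairs (the standard definition):
f \subseteq A \times B
\quad\text{such that}\quad
\forall x \in A\; \exists!\, y \in B:\ (x, y) \in f
```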

> But it's hard to argue the machine at the end is stateless.

I'm not really that interested in the relationship between the various paradigms and the machine. What interests me most is how well I, as a human being, can write non-trivial programs. To me, it is immediately obvious that the composition of purely functional program units is conceptually simple enough to be done by a child, while unrestricted side effects can very quickly make things very complicated. However, I don't want to get involved in the discussion on this topic. I have accepted that others see it differently, although I find that completely baffling. I don't want to take away anyone's for loops, etc. To each their own.


> I realize that these models of computation are equivalent. My point was rather that the imperative paradigm collapses into the functional paradigm in practical programming when I disregard the admissibility of arbitrary side effects.

But in practical programming with imperative languages, arbitrary side effects can't be disregarded, so they don't collapse into the functional paradigm. In fact, from a physical perspective, every possible CPU has states, so the most physically fundamental model of computation (something like register machines, or GOTO programs) is imperative and more fundamental than functional models, like untyped lambda calculus. Functional models might be more mathematically elegant though.

> I wouldn't know how to define the concept of a function without sets.

Whitehead and Russell showed how to define functions just in first-order logic with identity, without requiring any set theoretical axioms, by defining an n-ary function via an n+1 place relation. See here top right: https://mally.stanford.edu/Papers/rtt.pdf
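In a nutshell (my paraphrase, not the paper's notation): a unary function symbol f is eliminated in favor of a binary relation R that is required to be total and single-valued, with no sets anywhere.

```latex
% Replace f(x) = y by a relation R(x, y) satisfying:
\forall x\, \exists y\; R(x, y)
\qquad\text{(totality)}
\forall x\, \forall y\, \forall z\;
  \bigl( R(x, y) \land R(x, z) \rightarrow y = z \bigr)
\qquad\text{(single-valuedness)}
```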

This is quite natural, because predicates (properties and relations) already occur in natural language, while sets do not; they are a mathematical abstraction. For example, sets can be empty, or arbitrarily nested, or both arbitrarily nested and otherwise empty, which has no analog in natural language.

> I can't even specify the characteristic function of a set without resorting to the inclusion relation.

If you define sets in terms of functions, then functions are in that context assumed to be more fundamental than sets, and you don't need to define them. The inclusion relation is then simply defined via the characteristic function, which itself needs no definition. Just as, in the reverse case, you don't need to define sets if you want to define functions via sets.
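Spelled out, with the characteristic function taken as primitive:

```latex
% Membership and inclusion defined from characteristic functions:
x \in A \;:\Leftrightarrow\; \chi_A(x) = 1
A \subseteq B \;:\Leftrightarrow\;
  \forall x\, \bigl( \chi_A(x) = 1 \rightarrow \chi_B(x) = 1 \bigr)
```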


> But in practical programming with imperative languages, arbitrary side effects can't be disregarded, so they don't collapse into the functional paradigm.

I'm sorry to have to say this so bluntly, but I think you understand as well as I do that in a language such as C#, it is entirely possible to write large amounts of purely functional yet useful code, just as you would in Haskell. That's why it's possible in SICP to wait until Chapter 3 to introduce the special form set!. That is the issue I was concerned with.

> from a physical perspective

I already mentioned that this is not the perspective that interests me. I don't care at all about the physical substrate for computation.

Thanks for the paper. I might take a look at it, although I've already been given a good tip elsewhere with map theory. I'm not convinced by the claim that properties and relations occur in natural language but sets supposedly do not.

The last paragraph isn't very helpful either. I'm not sure who is misunderstanding whom here, but we don't need to hash it out. This isn't a conversation I'm enjoying.


You're looking for Grue's map theory for formalizing sets in terms of functions.


Thanks! It seems that in the metatheory, one can resort to type theory in order to avoid having to fall back on set theory in a circular manner. Unfortunately, I don't know anything about that, but I'll take a closer look at it.


“Sec. 2. The Gold Card. (a) The Secretary of Commerce, in coordination with the Secretary of State and the Secretary of Homeland Security, shall establish a “Gold Card” program authorizing an alien who makes an unrestricted gift to the Department of Commerce under 15 U.S.C. 1522 (or for whom a corporation or similar entity makes such a gift) to establish eligibility for an immigrant visa using an expedited process, to the extent consistent with law and public safety and national security concerns. The requisite gift amount shall be $1 million for an individual donating on his or her own behalf and $2 million for a corporation or similar entity donating on behalf of an individual”

Calling it a “gift” somehow manages to add an extra level of ick in my mind.


> Calling it a “gift” somehow manages to add an extra level of ick in my mind.

Yeah, it's a gross lie. The "gift" is clearly a fee.


The 'r' in gift is both silent and invisible.


Thank you for sharing, and sorry you had to go through that. I had a good friend go through a psychotic break and I spent a long time trying to understand what was going on in his brain. The only solid conclusion I could come to was that I could not relate to what he was going through, but that didn’t change that he was obviously suffering and needed whatever support I could offer. Thanks for giving me a little bit of insight into his brain. Hope you were/are able to find support out there.


GC compactions were indeed a problem for a number of systems. The trading systems in general had a policy of not allocating after startup. JS has a library, called "Zero" that provides a host of non-allocating ways of doing things.


Couldn’t find this after 6 seconds of googling, link?


The linked podcast episode mentions it.


There's no mention of a library called zero, or even JavaScript.


I'm assuming the JS refers to Jane Street


That makes sense, I guess I've got web tunnel vision.


I was bit by the same spider that gave you web tunnel vision. In any case, I find OCaml too esoteric for my taste. F# is softer and feels more... modern, perhaps? But I don’t think GC can be avoided in dotnet.


You can avoid GC in hot loops in F# with value-types, explicit inlining, and mutability.

Mutability may not result in very idiomatic code however, although it can often be wrapped with a functional API (e.g. parser combinators).
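The idea transplanted into Python for illustration (F#'s value types and explicit inlining don't map over directly): reuse one preallocated mutable buffer across calls instead of allocating per call, and keep the mutation hidden behind the API.

```python
# Illustrative sketch (not F#): avoid per-iteration allocation in a hot loop
# by reusing one buffer, with the mutation wrapped behind a functional-looking API.

def moving_sum(xs, window, out=None):
    """Sum of each `window`-length slice of xs, written into a reusable buffer."""
    n = len(xs) - window + 1
    if out is None or len(out) != n:
        out = [0] * n          # allocate once; later calls can reuse it
    acc = sum(xs[:window])
    out[0] = acc
    for i in range(1, n):
        acc += xs[i + window - 1] - xs[i - 1]  # update in place, no new lists
        out[i] = acc
    return out

buf = moving_sum([1, 2, 3, 4, 5], 2)
print(buf)                                      # [3, 5, 7, 9]
buf = moving_sum([5, 4, 3, 2, 1], 2, out=buf)   # reuses the same list
print(buf)                                      # [9, 7, 5, 3]
```

Callers see a pure-looking function; the reuse of `buf` is an opt-in detail, which is roughly the "wrap mutability in a functional API" move.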


> This is what I like to call a dialect of OCaml. We speak in sometimes and sometimes we gently say it’s zero alloc OCaml. And the most notable thing about it, it tries to avoid touching the garbage collector ...


Would love to see a pterosaur / bat version of this drone. Birds use one set of muscles to jump in the air and another to flap their wings, limiting how big they can get. That’s because, if you make your wing muscles bigger, then you need bigger leg muscles to support them, then you need bigger wing muscles to support your legs, etc. Pterosaurs and bats have tiny little legs and use their “arm” (wing) muscles to do the initial jump into the air. It’s just one set of muscles that are used for both functions, which is why pterosaurs were able to get so big. It does beg the question, tho, why we haven’t seen any truly giant bats.

This PBS Eons video has a great explanation: https://youtu.be/scAp-fncp64?si=hjeWKGBI7riyjE1M


> It does beg the question, tho, why we haven’t seen any truly giant bats.

They're mammals; birds have a different respiratory system:

"Flow-Through Ventilation

Unlike mammals, birds breathe through continuous one-directional flow of air through the respiratory system. We take air in and breathe it out, sort of like the tide moves in and out of a bay. As a result, our breathing system is said to be tidal. Avians have a non-tidal respiratory system, with air flowing more like a running stream."

https://birdfact.com/anatomy-and-physiology/respiratory-syst...


That's why mammals can't breathe at high altitudes that birds can, but I'm not sure if that affects the body plan much in terms of size. The largest birds are smaller than the largest mammals on land or at sea. Then again, lower oxygen levels compared to the past seems to be a limitation for insect sizes too (who have an even less efficient respiratory system).

I also don't think it's the warmbloodedness. There are giant mammals in general after all.

Perhaps it is because bats form large, dense colonies? There are only so many resources available in any given ecological niche, so for any species that fills a niche one would expect those resources to be divided either among many small individuals or a few large ones. Bat evolution chose the "big colony" route, which I assume favors smaller individuals.


> The largest birds are smaller than the largest mammals on land or at sea

With all due respect to your theory, I think comparing the sizes of animals should not ignore the medium they move in: water, land, or air. Weight is (loosely, but still) related to size. It’s probably not a coincidence that the largest mammals live in water, where they need less energy to support their weight, and it’s not a coincidence that the largest mammals on land are way bigger than bats.

The biggest bats are ~1.7m, which is not so far from the biggest albatrosses (3.7m).

Also consider that the biggest bird (the ostrich) can’t fly. Now I’m trying to picture a gigantic swimming bird.


Well, fair. But birds are warmblooded too, so that doesn't change much there, and on top of that, needing bigger lungs for the same amount of oxygen extraction wouldn't exactly add much weight per volume, so to speak.


An Emperor Penguin?


Right! To complete the unusual list : flying fish and... Amphibious fish! Wikipedia says there's 11 of them. Ok stop procrastinating now.


Nature optimizes. The bigger you get, the more you need to eat. The harder it gets to fly. Fruit bats eat fruits.

Look at the food source and you'll understand the evolution.


> Fruit bats eat fruits.

The most caloric dense source of nutrition available in nature? I don't see why that is a limitation to body size for a flying animal - quite the opposite!


fruit bats are the biggest bats

not GP but I think that was the point.

also, volume grows as the cube of linear dimensions, which puts an upper limit on size, since wing surface area only grows as the square (not sure exactly how lift scales with size, though)
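the square-cube point in numbers: double the linear size and wing area goes up 4x but mass goes up 8x, so the load each unit of wing must carry doubles with every doubling.

```python
# Square-cube law: scale linear size by k; area scales by k^2, volume (~mass) by k^3.
k = 2.0
area_factor = k ** 2                        # wing area grows 4x
mass_factor = k ** 3                        # body mass grows 8x
wing_loading_factor = mass_factor / area_factor
print(area_factor, mass_factor, wing_loading_factor)  # 4.0 8.0 2.0
```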


Plants aren't particularly calorie-dense. Meat, on the other hand...


this is almost in "not even wrong" territory, but for the fact that autotrophs are definitionally the entry point for abiotic energy into edible calories for animals, and the observation that the largest terrestrial megafauna are herbivorous.

bamboo is not calorie dense to humans, because we've lost the ability to digest most of it, but pecans are absolutely more calorie dense than even fatty beef.

all else being equal, an ideal carbohydrate source is more calorically dense than an equivalent ideal lean protein source due to the balance in the thermic effect of food between the two. most mammals outside the obligate carnivores are really well optimized for getting calories from plants— this is why we have amylase in our saliva.


Look at great apes. Large land mammals in general. (Apes came to mind specifically because they usually eat fruit)


Are you aware you switched "fruit" for "plant" there?


Fruits want to be eaten, Veggies don't.


Robots and living animals have different limitations and constraints though: compared to separate legs and wings for animals, using one motor with some kind of gearbox to switch output from wings/propellers to legs might have a lower added cost in terms of weight. The legs can stay very skinny. The limitation would be how bulky such a gearbox would be, and how much extra kinetic energy loss it would introduce. At the same time, creating functioning wings that can also work as legs sounds like it might be a huge challenge in robotics (unless there's a way to massively simplify it).

Definitely an interesting idea that should be investigated though! :)

(Also, I've seen so many "AI learns to walk" videos that I'm wondering if it could be used to find a design that would work for this task)


I wish articles like this had more examples in them. In between “this thin wrapper adds no value but a lot of complexity”, and “this thin wrapper clarified the interface and demonstrably saved loads of work last time requirements changed” is an awful lot of grey area and nuance.

I did like the advice that if you peek under the abstraction a lot, it’s probably a bad one, tho even this I feel could use some nuance. I think if you need to change things in lots of places that’s a sign of a bad abstraction. If there is some tricky bit of complexity with changing requirements, you might find yourself “peeking under the hood” a lot. How could it be otherwise? But if you find yourself only debugging the one piece of code that handles the trickiness, and building up an isolated test for that bit of code, well, that sounds like you built a wonderful abstraction despite it being peeked at quite a bit.


The article did start off giving TCP as a good abstraction but then didn't follow up with examples of bad abstractions.

Dynamic typing is an example of an indirection masquerading as an abstraction. You end up carrying around an object and occasionally asking it whether it's an int64_t or a banana. You maybe think your type luggage will take you on exotic vacations, when in fact you take it on exotic vacations.


To me, it ties in with John Ousterhout's concept of "deep, small interfaces"

TCP is a good abstraction because it's essentially 4 operations (connect, disconnect, send, receive), but there's a lot going on inside to make these operations work. So are TLS, filesystems, optimizing compilers and JITs, modern CPUs, React (or rather the concept of "reactive UI" in general), autograd and so on.
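That four-operation surface is visible in any sockets API. A sketch in Python, using socketpair() as a stand-in for a real connect/accept so it runs anywhere:

```python
import socket

# The whole user-facing TCP surface: connect, send, receive, disconnect.
# socketpair() stands in for connect()/accept() so this sketch runs offline.
a, b = socket.socketpair()

a.sendall(b"hello")    # send
data = b.recv(1024)    # receive
print(data)            # b'hello'

a.close()              # disconnect
b.close()
```

Everything underneath (retransmission, ordering, congestion control, flow control) stays hidden behind those few calls, which is exactly the "deep interface" Ousterhout describes.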


Isn't there a flip side to this? earlier today i saw someone tweet that monads are indirection, not abstraction.


Amen.

Articles like this are a dime a dozen. Literally, there are 1000s of articles that all say the exact same thing using way too many words: "Bad abstractions are bad, good abstractions are good".


I second this, such posts are very generic, they are hard to disagree with, but also to agree with empathically as there are no clear examples of what is too much.

As someone who uses lots of layers and dependency injection I would like to be poked on where is that too much abstraction but I end up being no wiser.


I believe that the place to find detailed examples and deep analysis is in books, not one-off web articles.


The way to tell whether an abstraction is good or bad is to develop good taste. Engineers with good taste have intuition about these things.

You are not going to acquire good taste from reading an article.


Relying on mere “taste” is bad engineering. Engineers do need experience to make good decisions, yes. But surely we are able to come up with objective criteria of what makes a good abstraction vs. a bad abstraction. There will be trade-offs, as depending on context, some criteria will be more important than other (opposing) criteria. These are sometimes called “forces”. Experience is what leads an engineer in assessing and weighing the different present forces in the concrete situation.


That seems like it should be true, and it would be great if it were.

But in my many years of experience working with Jr engineers, I have found no substitute other than practice guided by someone more Sr (who has good taste).

There are just too many different situations and edge cases. Everything is situational. You can come up with lists of factors to consider (better versions of this post often have them), but no real firm rules.


I wouldn’t call that “taste”. It’s not a matter of taste which solution is better. If different engineers disagree about which solution to choose, then it’s fundamentally a different assessment of the relevant factors, and not about taste. Or at least, it shouldn’t be the latter.


I don’t know. We could look for some other word to encode “often sub-conscious though sometimes explicit heuristics developed by long periods of experiencing the consequences of specific trade-offs” but “taste” seems like a pretty good one because it’s quite intuitive.

There often - usually? - are more than one good solution and more than one path to success, and I don’t find calling different good engineers making different choices primarily because of their past experiences an egregious misuse of language.


I think you are going after something that is more an element of craftsmanship than engineering, and I agree it is a big part of real world software development. And, not everyone practices it the same way! It's more of a gestalt perception and thinking process, and that instinctual aspect is colored by culture and aesthetics.

In my career, I've always felt uncomfortable with people conflating software development with engineering. I think software has other humans as the audience more so than traditional engineered products. Partly this may be the complexity of software systems, but partly it is how software gets modified and reused. There isn't the same distinction between the design and the product as in other domains.

Other domains have instances of a design and often the design is tweaked and customized for each instance for larger, complex products. And, there is a limited service life during which that instance undergoes maintenance, possible refurbishing, etc. Software gets reused and reformed in ways that would make traditional engineers panic at all the uncertainties. E.g. they would rather scrap and rebuild, and rely on specialists to figure out how to safely recycle basic materials. They don't just add more and more complexity to an old building, bridge, airplane, etc.


Perhaps there is a better word. But there is a real skill that you pretty much have to learn through experience and mentorship.


Yes, this is what I mentioned in my original comment about experience being needed to weigh the trade-offs. That doesn’t mean that we can’t very concretely speak about the objective factors in play for any given decision. We can objectively say that x, y, z are good about this abstraction and a, b, c are bad, and then discuss which might outweigh the other in the specific context.

Needing experience to regularly make good decisions doesn’t mean that an article explaining the important factors in deciding about an abstraction is useless.


If we used the word "judgement", would that be a better option? It seems that pretty much anyone can write code (even AI), but ultimately in software development, we get paid for judgement.


Is-ought fallacy.


Folks like to claim that software is Engineering but it’s as much Craftsmanship. Hence, taste is in fact important.

In some areas you need more engineering but API design, for example, is mostly taste and hardly any science.


No kind of engineering ever gets into that "taste-independent" level of formalization.

Yes, it should be this way. But it's not.


Taste amounts to a well-trained neural net in the engineer's skull. It should not be belittled. Articles like this attempt to describe taste systematically, which is worth attempting but impossible.


Maybe not, but you can still move the needle one way or another based on reading an article. For those readers who recognize themselves as erring on the side of adding too many abstractions, they might move the needle a bit towards the other side.


try using LangChain and you'll get countless examples of bad abstractions

started working with it this week for a new project

gosh, it's so painful and unintuitive... I find myself digging deep into their code multiple times a day to understand how I'm supposed to use their interfaces


"I wish articles like this had more examples in them."

There is a class of things that don't fit in blogs very well, because any example that fits in a blog must be broken some other way to fit into a blog, and then you just get a whole bunch of comments about how the example isn't right because of this and that and the other thing.

It's also a problem because the utility of an abstraction depends on the context. Let me give an example. Let us suppose you have some bespoke appliance and you need to provide the ability for your customer to back things up off of it.

You can write a glorious backup framework capable of backing up multiple different kinds of things. It enforces validity checks, slots everything nicely into a .zip file, handles streaming out the backup so you don't have to generate everything on disk, has metadata for independent versions for all the components and the ability to declare how to "upgrade" old components (and maybe even downgrade them), support for independent testing of each component, and has every other bell and whistle you can think of. It's based on inheritance OO and so you subclass a template class to fill out the individual bit and it comes with a hierarchy pre-built for things like "execute this program and take the output as backup" and an entire branch for SQL stuff, and so on.

Is this a good abstraction?

To which the answer is, insufficient information.

If the appliance has two things to backup, like, a small SQL database and a few dozen kilobytes of some other files, such that the streaming is never useful because it never exceeds a couple of megabytes, this is an atrocious backup abstraction. If you have good reason to believe it's not likely to ever be much more than that, just write straight-line code that says what to do and does it. Jamming that into the aforementioned abstraction is a terrible thing, turning straight code into a maze of indirection and implicit resolution and a whole bunch of code that nobody is going to want to learn about or touch.

On the other hand, if you've got a dozen things to backup, and every few months another one is added, sometimes one is removed, you have meaningful version revs on the components, you're backing up a quantity of data that perhaps isn't practical to have entirely in memory or entirely on disk before shipping it out, if you're using all that capability... then it's a fantastic abstraction. Technically, it's still a lot of indirection and implicit resolution, but now, compared to "straight line" code that tries to do all of this in a hypothetical big pile of spaghetti, with redundancies, idiosyncracies of various implementations, etc., it's a huge net gain.

I don't know that there's a lot of abstractions in the world that are simply bad. Yeah, some, because not everything is good. But I think they are greatly outnumbered by places where people use rather powerful, massive abstractions meant to do dozens or hundreds of things, for two things. Or one thing. Or in the worst case, for no things at all, simply because it's "best practices" to put this particular abstraction in, or it came with the skeleton and was never removed, or something.
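For the small-appliance case, the "straight-line code that says what to do" might be little more than this (paths and contents are hypothetical and inlined so the sketch runs):

```python
import io
import zipfile

# Straight-line backup for the two-item case: no framework, just do the thing.
# A real script would read the SQL dump and config from disk; hypothetical
# contents are inlined here so the sketch is self-contained.
sql_dump = b"-- sqlite dump would go here --"
config = b"key = value\n"

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("db.sql", sql_dump)
    z.writestr("config.ini", config)

backup = buf.getvalue()
print(len(backup) > 0)  # True
```

A dozen lines anyone can read end to end; wrapping this in the glorious framework buys nothing until the component count and versioning needs actually materialize.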


The entangled particles don’t have any sort of an effect on the other. Changing one doesn’t change the other. You can think of it like the two particles were always a pair and you just didn’t know which particle was the left one and which was the right. By measuring one, you know what the other one “has always” been.

The “has always” is in quotes because it’s a useful lie. You kind of need to really understand the double slit experiment to get quantum fields, superpositions, and how that related to entanglement. Took me years and years of occasional YouTube physics videos before it finally clicked. But if entanglement still doesn’t make sense, I’d start by trying to understand the double slit experiment. It sounds way less awesome than entanglement, but it isn’t really. Double slit is in fact awesome and just as weird. Entanglement is way less cool than it sounds, and no, not actually a way of cheating the speed of light limit for information transmission.


A huge +1 for automating all the things as a form of practice! I don’t even think that you have to (strictly speaking) end up saving time on a particular task for a lot of automation to be worth it. The act of practicing automation makes you more efficient at future automation. Even a failed attempt at automating something can teach you stuff about why certain things are hard to automate that can make you a better engineer.

Getting in the habit of automating stuff in your editor and environment can also have a real snowballing effect. Yes, you end up “wasting” some time with yak shaves that don’t work out. But it doesn’t take long before the scope of what you can tackle in a day grows. It’s really profound how much friction you can remove, and how much friction there is in fresh environments.

Also, and ymmv, but a lot of repetitive tasks can be pretty soul crushing. Too much toil and you can come to dread your job. Automating something away almost always feels rewarding to me. Keeping yourself happy and motivated in your work should also count for something.


I feel like my ADD makes the soul-crushing busywork thing way more of a motivator. Half of what I do at work is just an upgraded version of the things I do to go really far out of my way not to do a thing twice.

There's another benefit, too. By "living in the system" and treating every part of the computer as something to manipulate, automate, and control, you get a more organic sense of the shape of the thing. It's a fairly common occurrence that I'll have a feeling that something ought to be different without really knowing why, and then later that feeling is proven out (this typically has to do with stuff like the shape of dependencies or the usage of tools in contexts that are a misfit, which then turns into ever-expanding kluges that should have been better design from the outset).

That said, try explaining to someone else on the team that "I dunno, just doesn't feel, like, nice".


Hear, hear! And not just automation, but glue! At my last job I was a big advocate for automate all the things. "Manual Jenkins lookup task? I bet I could write a plugin for that.", etc. I got lots of practice jumping into new systems, tracing what the minimum essentials for what I needed to do were, and then making something useful.

At my current job we have some code that generates typescript RPC bindings from our java code. It's quite slick but the backend definitions and the frontend definitions aren't linked, and navigation between them is a pain. So I decided to write an IntelliJ plugin that allows for navigation and find usages across our two languages. Took 2 days-ish, but totally worth it.


Hmmm… at first glance, this feels like I’d use it for the same sorts of things I’d use jq for, only easier to use but also way less powerful. Jq does have a little bit of a learning curve necessary to get good use out of it, so I could see this being a nice quick tool for people who don’t want to make that investment. Having already learned jq, I’m not sure why I would reach for gron, but maybe I’m missing something.
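For the curious, gron's core idea is tiny: flatten JSON into one greppable assignment path per leaf. A toy version in Python (illustrative only, not the real tool):

```python
import json

# Toy gron: flatten a JSON document into greppable assignment paths.
def gronify(value, path="json"):
    lines = []
    if isinstance(value, dict):
        lines.append(f"{path} = {{}};")
        for k, v in value.items():
            lines.extend(gronify(v, f"{path}.{k}"))
    elif isinstance(value, list):
        lines.append(f"{path} = [];")
        for i, v in enumerate(value):
            lines.extend(gronify(v, f"{path}[{i}]"))
    else:
        lines.append(f"{path} = {json.dumps(value)};")
    return lines

doc = {"user": {"name": "rauljara", "karma": 42}, "tags": ["hn"]}
print("\n".join(gronify(doc)))
```

The output lines (`json.user.name = "rauljara";` and so on) are what make grep-then-eyeball workflows possible without learning a query language, which is the trade against jq.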


Not missing something, but retaining something: the details of jq. Many developers find that difficult for rarely used tools. See also bash.


Seems like HN broke the trailer on their site. Fortunately, it also lives on YouTube: https://m.youtube.com/watch?v=CNBk1DK046k&pp=ygUMTmFldiB0cmF...

Glad they aren’t shy about their escape velocity roots. I swear, the ship in their logo looks just like a kestrel.


It's the Kestrel from EV: Nova (which as we all know is the ship that singlehandedly put Krain Industries on the map), with a little bit of red added.

They're pretty shameless! Lots of the ships are pulled from EV: Nova with cosmetic changes (no idea about the legality, I suspect they got permission but I haven't looked into it), and there's a setting that skins the UI to look like EV classic.


The Kestrel in Nova is a copy of the Kestrel from the original EV, which itself was a copy of the Estes 'Corsair' model rocket kit, see: https://blog.eamonnmr.com/2018/02/estes-rockets-and-escape-v...


Isn't it all just based upon Wing Commander?

(Without this bracketed addendum, this would be a troll comment rank 16 mega-trolls)


Actually, he based the gameplay on reading the Elite manual but never actually having played Elite.

