
I have been using omz for YEARS. I have practically grown up with it. I resent that, if it were not for this discussion, I would never have noticed that it takes several hundred milliseconds to load. I never particularly felt the delay unless I was in a very large Git repository — which is rare. On the bright side, now I know about `fish` and am learning there are some nice features I never had in Zsh (e.g. much more advanced autosuggestions). What I basically use omz for is 1) autocomplete, 2) git aliases, and 3) my beloved prompt. In about 30 minutes, Claude helped me port my prompt from Zsh, autocomplete comes OOB, and a cursory Google search shows someone has made a Fish plugin to satiate my Git alias muscle memory. I could leave Zsh/omz in the rear view mirror tomorrow — but why would I? I never would have noticed before this discussion...
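(For the curious: in fish, a git alias is just a function, so porting one is a minimal sketch like this, with `gst` as an example name:)

    # define the alias for the current session
    alias gst 'git status'
    # fish implements aliases as functions; persist it across sessions with:
    funcsave gst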


I thought Canadians were supposed to be nice…


This tells me nothing except the author’s politics.


Super impressive. Looks like the author also reimplemented Apple’s new Liquid Glass UI in Jetpack Compose?


I don’t know what the other UI options are, but that seems like a step backwards.


I think this is wrong and should not exist.


Do you care to elaborate, or are we being mysterious today?


Just installed uBlock lite on my iPhone and it seems to hold up against this test.


I don’t want to undermine the author’s enthusiasm for the universality of the MCP. But part of me can’t help wondering: isn’t this the idea of APIs in general? Replace MCP with REST and does that really change anything in the article? Or even an Operating System API? POSIX, anyone? Programs? Unix pipes? Yes, MCP is far simpler/universal than any of those things ended up being — but maybe the solution is to build simpler software on good fundamental abstractions rather than rebuilding the abstractions every time we want to do something new.


MCP is not REST. In your comparison, it's more that MCP is a protocol for discovering REST endpoints at runtime and letting users configure which endpoints should be used at runtime.

Say I'm building an app and I want my users to be able to play Spotify songs. Yeah, I'll hit the Spotify API. But now, say I've launched my app and I want my users to be able to play a song from sonofm when they hit play. Alright, now I have to open up the code, add some if statements, hard-code the sonofm API, ship a new version, and show some update messages.

MCP is literally just a way to make this extensible: instead of hardcoding it, it can be configured at runtime.


That only works if you let the LLM do the interpretation of the MCP descriptions. In the case of TFA, the idea was to use MCP without an LLM, which is essentially the same as any old API.


You can use MCP to dynamically call different services, without ever having to use an LLM to decide.

With an LLM, the flow would go:

List MCP tools -> get user prompt -> feed both into the LLM -> the LLM tells you which tools to call.

You could skip the LLM aspect completely: fetch all the tools and let the user pick the tool that "playsSong" at runtime, for example (sketched below).
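A minimal sketch of that LLM-free flow in Python, assuming a hypothetical MCP server over HTTP at localhost (real servers also want an initialize handshake, omitted here; the "playSong" tool name is made up):

    import requests

    MCP_URL = "http://localhost:8000/mcp"  # hypothetical server address

    def rpc(method, params=None, msg_id=1):
        # MCP messages are plain JSON-RPC 2.0
        body = {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params or {}}
        return requests.post(MCP_URL, json=body).json()["result"]

    # 1. discover the tools at runtime -- no LLM involved
    tools = {t["name"]: t for t in rpc("tools/list")["tools"]}

    # 2. let the user (or plain old code) pick a tool by name
    if "playSong" in tools:
        print(rpc("tools/call", {"name": "playSong", "arguments": {"title": "Song 2"}}, msg_id=2))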


HATEOAS was supposed to be that.

https://en.wikipedia.org/wiki/HATEOAS


Wait, was it? HATEOAS is all about hypermedia, which means there must be a human in the loop being presented with the rendered hypermedia. MCP seems like it's meant for machine<->machine communication, not human<->machine.


I agree that HATEOAS never made sense without a human in the loop, although I have also never seen it described as such. IMO that’s an important reason why it never gained useful traction.

There is a confused history where Roy Fielding described REST; then people applied some of that to JSON HTTP APIs, designating those as REST APIs; then Roy Fielding said "no, you have to do HATEOAS to achieve what I meant by REST"; then some people tried to make their REST APIs conform to HATEOAS, all the while that change was of no use to REST clients.

But now with AI it actually can make sense, because the AI is able to dynamically interpret the hypermedia content similar to a human.


Hypermedia isn't just for human consumption. Back in the 90s, the Web was going to be crawled by "User Agents": software performing tasks on behalf of people (say, finding good deals on certain items; or whatever). Web browsers (human-driven interfaces) were the most common User Agent, but ended up being a lowest-common-denominator; the only other User Agents to get any widespread support were Google crawlers.


My understanding was that the discoverable part of HATEOAS was meant for machine-to-machine use. Actually, all of REST is machine-to-machine except in very trivial situations.

I'm not sure I understand your point that hypermedia means there is a human in the loop. Can you expand?


The H in HATEOAS stands for "hypermedia". Hypermedia is a type of document that includes hypermedia controls, which are presented by the hypermedia client to a user for interaction. It's the user who decides which controls to interact with. For example, when I'm writing this comment, the HN server gave me a hypermedia document, which contains your comment, a textarea input, and a button to submit my reply, and I, the human in the loop, decide what to put in the input and when to press the button. A machine can't do that on its own (but LLMs potentially can), so a user is required. That also means that JSON APIs meant for purely machine-to-machine interactions, commonly referred to as REST, can't be considered HATEOAS (or REST) due to the absence of hypermedia controls.

Further reading:

- https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi...

- https://htmx.org/essays/hateoas/


So that's not my understanding. Hypermedia, as I understand it, means embedded links in responses that present possible forward actions.

They are structured in a way that a machine program could parse and use.

I don't believe it requires human-in-the-loop, although that is of course possible.


HTML is a hypermedia format, the most widely used, and it's designed mainly for human consumption. "Machines parsing and using something" is too broad an idea to engage with meaningfully: browsers parse HTML and do something with it, namely present it to humans so they can select actions (i.e. hypermedia controls) to perform.

Your understanding is incorrect; the links above will explain it. HATEOAS (and REST, which is a superset of HATEOAS) requires a consumer with agency to make any sense (see https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...)

MCP could profitably explore adding hypermedia controls to the system; it would be interesting to see if agentic MCP APIs are able to self-organize:

https://x.com/htmx_org/status/1938250320817361063


I've programmed machines to use those links, so I'm pretty certain machines can use them. I've never heard of the HTML variation though, so I will have a look at those links.


> I've programmed machines to use those links, so I'm pretty certain machines can use them

I'm curious to learn how it worked.

The way I see it, the key word here is "programmed". Sure, you read the links from responses and eliminated the need to hardcode API routes in the system, but what would happen if a new link were created or an old link unexpectedly removed? Unless an app somehow presents to the user all available actions generated from those links, it would have to be modified every time to take advantage of newly added links. It would also need rigorous existence checking for every link it uses; otherwise the system would break if a link were suddenly removed. You could argue that that would not happen, but then it's just regular old API coupling with backward-compatibility concerns.

Building on my previous example of HN comments: if HN decides to add another action, for example "preview", the browser would present it to the user just fine, and the user would be able to use it immediately. HN could also remove the "reply" button, and again, nothing would break. That would render the form somewhat useless, of course, but that's a product question at that point.
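To make those failure modes concrete, here is a sketch of the kind of link-following client being described (the API and its "links" field are illustrative, not any real standard):

    import requests

    doc = requests.get("https://api.example.com/comments/42").json()
    links = {l["rel"]: l["href"] for l in doc.get("links", [])}

    # the client only handles rels it was programmed for: a brand-new rel
    # is invisible to it, and a removed rel must be checked for explicitly
    if "reply" in links:
        requests.post(links["reply"], json={"body": "Thanks!"})
    else:
        raise RuntimeError("server no longer offers the 'reply' action")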


Yes, most of your observations about limitations are true. That doesn't mean it's not a useful technique.

This is a reasonable summary of how I understand it: https://martinfowler.com/articles/richardsonMaturityModel.ht...

This aligns with how I've seen it used. It helps identify forward actions, many of which will be standard to the protocol or to the domain. These can be programmed for and traversed or called, and the aggregated data presented using general or specific logic.

So new actions can be catered for, but novel actions cannot; custom logic would need to be added. That logic then becomes part of the domain, and the machine can potentially handle it.

Hope that helps illustrate how it can be used programmatically.


how did the machines you programmed react to new and novel links/actions in the response?


New ones are fine; novel ones need to be catered for. I have left a fuller explanation in a sibling comment.


Right, so that means that the primary novel aspect of REST (according to the coiner of that term, Roy Fielding), the uniform interface, is largely wasted. Effectively you get a level of indirection on top of hard-coded API endpoints. Maybe better, but I don't think by much, and a lot of work for the payoff.

To take advantage of the uniform interface you need to have a consumer with agency who can respond to new and novel interactions as presented in the form of hypermedia controls. The links above will explain more in depth, the section on REST in our book is a good overview:

https://hypermedia.systems/components-of-a-hypermedia-system...


If you have machine <-> machine interaction, why would you use HTML with forms and buttons and text inputs etc? Wouldn't JSON or something else (even XML) make more sense?


heh, there was a good convo about HATEOAS and MCP on HN a while back:

* https://news.ycombinator.com/item?id=43307225

* https://www.ondr.sh/blog/ai-web


MCP is a JSON-RPC implementation of OpenAPI, or, get this, of XML and WSDL/SOAP.


WSDL triggered me ha. I'm afraid you're right


> Alright, now I have to open up the code, add some if statements, hard-code the sonofm API, ship a new version, and show some update messages.

You will need to do that anyway. Easier discovery of the API doesn't change much.

The user might want complicated functionality that combines several API calls, plus more code for filtering/sorting/searching that information locally. If you let the LLM write the code by itself, it might take 20 minutes and millions of wasted tokens of the LLM going back and forth in the code to implement the functionality. No user is going to find that acceptable.


so... is this OpenAPI then?


Basically, yes. But with much more enthusiasm!


OpenAPI doesn't have a baked-in discoverability mechanism. It isn't compatible with LLMs out of the box. It is a lower-level abstraction. I don't want to write a blob of code that talks to an OpenAPI service every time I want to do something with an LLM.


>OpenAPI doesn't have a baked-in discoverability mechanism.

Well, Swagger was there from the start, and there's nothing stopping an LLM from connecting to an openapi.json/swagger.yaml endpoint, perhaps mediated by a small xslt-like filter that would make it more concise.
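As a sketch of that idea, assuming a hypothetical service that serves its spec at /openapi.json (the exact path varies by service, and spec corner cases like shared "parameters" keys are ignored here):

    import requests

    spec = requests.get("https://api.example.com/openapi.json").json()

    # condense the spec into the kind of short tool list an LLM could digest
    for path, ops in spec.get("paths", {}).items():
        for verb, op in ops.items():
            print(verb.upper(), path, "-", op.get("summary", ""))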


Can't you just build a simple REST API that abstracts away plugging in different song providers?


Feels like segment.com but for calling APIs rather than adding libraries to the frontend.


Now make the segment for MCPs ;p


The main difference between MCP and REST is that MCP is self-describing from the very start. REST may have OpenAPI, but that is a later add-on, and we haven't quite standardised on using it. The first step of exposing an MCP server is describing it; for REST that is an optional step that's often omitted.


Isn't SOAP also self-described?


When I read about MCP for the first time and saw that it requires a "tools/list" API, it reminded me of COM/DCOM/ActiveX from Microsoft, which had things like QueryInterface and IDispatch. And I'm sure that wasn't the first time someone came up with dynamic runtime discovery of the APIs a server offers.

Interestingly, ActiveX was quite the security nightmare for very similar reasons, actually, and we had to deal with the infamous "DLL Hell". So, history repeats itself.


And gRPC with reflection, yeah?


and GQL with reflection?


JSON-LD?


Is it "self-described" in the sense that I can get a list of endpoints or methods, with a human- (or LLM-) readable description for each, or does it supply actual schemata that I could also use with non-AI clients?

(Even if only the former, it would of course be a huge step forward, as I could have the LLM generate schemata. Also, at least everyone is standardizing on a base protocol now, and on a way to pass command names, arguments, results, etc. That's already a huge step forward in contrast to arbitrary REST+JSON or even HTTP APIs.)


For each tool you get the human-readable description as well as a JSON schema for the parameters needed to call the function.
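For illustration, a single entry in a tools/list response looks roughly like this (the tool itself is made up; the field names follow the MCP spec):

    {
      "name": "playSong",
      "description": "Play a song by title on the user's device.",
      "inputSchema": {
        "type": "object",
        "properties": { "title": { "type": "string" } },
        "required": ["title"]
      }
    }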


You're getting an arbitrary string back though...


how else would you describe an arbitrary tool?


But you're describing it in a way that is useless to anything but an LLM. It would have been much better if the description language had been more formalized.


> It would have been much better if the description language had been more formalized.

To speculate about this: perhaps the informality is the point. A full formal specification of something is somewhere between daunting and Sisyphean, and we're more likely to see supposedly formal documentation that is nonetheless incomplete or contains gaps to be filled with background knowledge or common sense.

A mandatory but informal specification in plain language might be just the trick, particularly since vibe-APIing encourages rapid iteration and experimentation.


The description includes an input and an output JSON schema.



You're not looking at the latest version. They added output schemas.


Thank you!


In my mind, the only thing novel about MCP is requiring that the schema be provided as part of the protocol. Like, sure, it's convenient that the shape of the request/response wrappers is always the same; that certainly helps with managing them using libraries that can wrap dynamic types in static types, but everyone was already doing that with APIs; we just didn't agree on what that envelope's shape should be. BUT, with the requirement that the schema be provided with the protocol, and the carrot of AI models seamlessly consuming it, that was enough of an impetus.


> the only thing novel about MCP is requiring that the schema be provided as part of the protocol

You mean, like OpenAPI, gRPC, SOAP, and CORBA?


Where is the mandatory human-readable prose description of the tool's purpose in any of those specs? There isn't one. Also, the simplicity of JSON interface descriptions is key.


You can’t connect to a gRPC endpoint and ask to download the client protobuf, but yes.


It's not enabled by default, but you can, via gRPC reflection:

* https://github.com/grpc/grpc-java/blob/master/documentation/...

* https://grpc.io/docs/guides/reflection/

You can then use generic tools like grpc_cli or grpcurl to list available services and methods, and call them.
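For example, against a server with reflection enabled (the service name here is illustrative):

    # list services exposed via reflection
    grpcurl -plaintext localhost:50051 list

    # describe a service's methods and message types
    grpcurl -plaintext localhost:50051 describe my.package.MyService

    # call a method with a JSON request body
    grpcurl -plaintext -d '{"id": 1}' localhost:50051 my.package.MyService/GetThing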


The main difference between MCP and REST is `list-tools`.

REST APIs have 5 or 6 ways of doing that, including "read it from our docs site", HATEOAS, and an OpenAPI spec served from an endpoint as part of the API.

MCP has a single way of listing endpoints.


> The main difference between MCP and REST is `list-tools`.

> REST APIs have 5 or 6 ways of doing that

You think nobody's ever going to publish a slightly different standard than Anthropic's MCP that is also primarily intended for LLMs?


Why would they? I'm sure the "Enterprise" folks are putting together some working group to develop an ANSI-xyzzy standard for Enterprise operability which will never see the light of day.


Because they genuinely think theirs will work better, because they think it will build brand awareness or a moat, or because they're upset that MCP comes from a competitor.


WSDL + XML APIs have been around since 1998.

OpenAPI, OData, gRPC, GraphQL

I'm sure I'm missing a few...


In other words, there's no commonly used, agreed-upon standard for creating APIs. The closest is REST-like APIs, which are really no more specific than "hit a URL and get some data back".

So why are we all bitching about it? Programmatically communicating with an ML model is a new thing, and it makes sense that it might need some new concepts. It's basically just a wrapper with a couple of opinions. Who cares? It's probably better to be opinionated about exactly what you put into MCP, rather than just exposing your hundreds of existing endpoints.


I don't think the comments here are complaining; they are pointing out that what's being claimed as new is not actually new.


Where is "list-tools" in any of those low-level protocols?


I don't know enough about OData, but:

- Introspection (__schema queries) for every GraphQL server. You can even see what it exposes, because most services expose a web playground for testing GraphQL APIs, e.g. GitHub: https://docs.github.com/en/graphql/overview/explorer (a sketch of an introspection call follows below)

- Server reflection for gRPC, though here it's optional, and I'm not aware of any hosted web clients, so you'll need a tool like grpcurl if you want to see how it looks in real services yourself.

- OpenAPI is not a protocol but a standard for describing APIs. It is the list-tools for REST APIs.
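A minimal sketch of the GraphQL case (the endpoint URL is hypothetical; the introspection query itself is standard and answered by every spec-compliant server):

    import requests

    query = "{ __schema { queryType { fields { name description } } } }"
    resp = requests.post("https://api.example.com/graphql", json={"query": query})

    for field in resp.json()["data"]["__schema"]["queryType"]["fields"]:
        print(field["name"], "-", field["description"] or "")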


All of them already provide an IDL with text descriptions and a way to query a server's current interface; what else do we need? Just make those two optional features required for LLM tool calls and we're done.

Is there anything stopping generic MCP servers from bridging those protocols as-is? If not, we might as well keep using them.


Damn, I just read this and it's comforting to see how similar it is to my own response.

To elaborate on this: I don't know much about MCP, but usually when people speak about it, it is in a buzzword-seeking kind of way, and the people who are interested in it make these kinds of conceptual snafus.

Second, and this applies not just to MCP but to things like JSON, Rust, and MongoDB: there's this phenomenon where people learn the complex stuff before learning the basics. It's not the first time I've cited this video of Homer studying marketing, where he reads the books out of order: https://www.youtube.com/watch?v=2BT7_owW2sU . It makes sense that this mistake is so common: the amount of literature and resources is like an inverted pyramid, with very few classical foundations and A LOT of new stuff, most of which will not stand the test of time. Typically you have universities to lead the way and establish a classical corpus and path, but this is such a young discipline; 70 years in, we are still not finding much stability. Universities have gone from teaching C, to teaching Java, to teaching Python (at least in intro to CS); maybe they will teach Rust next. But this buzzwording seems more in line with trying to predict the future, and there will be way more losers than winners in that realm. And the winners will have learned the classics in addition to the new technology; learning the new stuff without the classics is a recipe for disaster.


APIs do not necessarily need to tell you everything about themselves. Anyone who has used poorly documented or fully undocumented APIs knows exactly what I'm talking about here.

Obviously, for HTTP APIs you might often see something like an OpenAPI specification or GraphQL, both of which typically allow an API to describe itself. But this is not commonly a thing outside HTTP, which is something that MCP supports.

MCP might be the first standard for self-described APIs across all protocols (I might be misusing "protocols" here; I'm not sure what the word technically should be. I think the MCP spec calls it "transport", but I might be wrong there), making it slightly more universal.

I think the author is wrong to discount the importance of an LLM as an interface here, though. I do think the majority of MCP clients will be LLMs. An API might get you 90% of the way there, but if the LLM gets you to 99.9% by handling that last bit of plumbing, it's going to go mainstream.


honestly, yes - but MCP includes a really simple 'reflection' endpoint to list the capabilities of an API, with human-readable docs on methods and types. That is something that gRPC and OpenAPI and friends have supported as an optional extension for ages, but it has largely been a toy. MCP makes it central, and maybe that makes all the difference.


At a previous job most of our services supported gRPC reflection, and exploring and tinkering with these APIs using the grpc_cli tool was some of the most fun I had while working there. Building and using gRPC services in golang left a strong positive impression on me.


I had the same experience working with GQL :)


This is exactly what the author is saying. It is "the idea of APIs in general" that has suddenly become a fad under the guise of MCP, riding the AI wave. And it may well be a very imperfect way to build APIs, but if it eventually becomes the standard to the point where every app has to offer it, it's still "good enough" and would massively improve interoperability all around as a side effect.


One major difference is that MCP has discovery built into the protocol. There’s nothing in REST that informs clients what the API can do, what resources are available, etc.


My first thought as well. But maybe people wanting to plug their apps into their AI will at least force developers to actually implement the interface, unlike plain APIs, which are mostly unheard of among the general population and thus not offered?


The tragedy of the modern library is that no one has the attention span for good books. Libraries are getting rid of the classics to make room for new books, the majority of which are not worth the paper they’re printed on. We would do well to heed C.S. Lewis’ call to read more old books for every new book that we read.


I personally think the focus on attention span is a red herring.

Many good books don't require that much attention span, and putting the onus on the reader to like and focus on a book that is supposed to be good feels kind of backward. Given that people binge-watch whole TV series and still read a ton online, the desire is there, and there are probably ways to properly reach the audience.

Not all classics need to be liked forever; tastes change, and the stories get retold in different forms anyway. I'd be fine with people reading Romeo and Juliet as a Mastodon-published space opera if it brings them joy and insight.


Some classics were written under a per-word payment scheme for the author. That produced bad writing in awkward places.

The Swiss Family Robinson is an example of this: stretches of interesting adventure, then long passages of poetry analysis.

Ironically, reading it feels like reading the work of an author with a low attention span.

There's a reason so many of the classics have abridged versions.


Even a short and engaging chapter book will require someone to focus on the text for more than 10 minutes.

I have been online since the early web and have seen how much content has changed to keep people engaged. It's all short-form videos and posts with a 4th-grade vocabulary now. If you post anything longer, I have seen people actually get upset about it.

People may binge a series but they are still on their phones half of the time scrolling for dopamine. I am trying to train my own children to seek out difficult things to consume and to balance out the engagement bait.

It's hard these days. Everything is engineered to hijack your attention.


> People may binge a series but they are still on their phones half of the time scrolling for dopamine.

This. Both movies and series are now FAR less popular (and profitable) than video games, and video games are far less popular than social media. Even the minority that still enjoys legacy media enjoys it WHILE consuming other media.

Movie theaters are in as much trouble as libraries, and blaming either of them for their decline in popularity without mentioning the root causes would be myopic.

The cost of all this is that nuance and the ability to have a single train of thought that lasts longer than the length of a TikTok video or tweet are dying.


> The cost of all this is that nuance and the ability to have a single train of thought

People aren't watching TikToks while video gaming. The rise of video games, and the success of narrative ones, should tell us that people engage with the content and focus. For hours at a time.

But they need to care about it; they expect way more quality and are way less tolerant of mediocrity. That's sure not great for Hollywood producers; cry me a river.

Libraries are reinventing themselves in many places; IMHO they'll happily outlive movie theaters by a few centuries.


> People aren't watching TikToks while video gaming.

I'm aware that the plural of anecdote is not data, but I can say from personal experience that most of the people I know pick up their phones whenever an unskippable cut scene appears on screen. Many, many people no longer have the patience for narrative in any form and as a consequence literacy rates have been declining for years.

> Libraries are reinventing themselves in many places

They have no choice. People can't read anymore. Fifty-four percent of Americans now read below the sixth-grade level.


> I can say from personal experience that most of the people I know pick up their phones whenever an unskippable cut scene appears on screen

My personal experience as a gamer and running a gaming community for many years does not line up with this at all.


> My personal experience as a gamer and running a gaming community

I think that's the rub. Your experience is with people who care.

For example, I'm a cinephile. My personal experience is that people have home theaters with 100"+ screens, Dolby Atmos, and Dolby Vision, and they would never use a cell phone during a film. That's not most people's reality though.


People definitely watch YouTube videos while playing video games and play games on their phones while watching TV/movies.

Narrative video games are a tiny and obscure niche.


I'm not sure if it's true, but I've heard that the reason so many streaming shows are twice as long as they should be to best serve their stories, and are so repetitive, is that they're written for an audience that's using their phones while they "watch".


I wonder if it's not that people are getting dumber or less able to hold attention, but rather that everyone is more exposed to lowest-common-denominator material because of efficient distribution.

Reader's Digest was always there on the shelf at the store and was very commercially successful. Most people who consumed more advanced content ignored it.


> It's all short-form videos and posts with a 4th-grade vocabulary now

We've had more publicly available educational content than ever, with 40+ minute videos finding their audience. Podcasts have brought the quality of audio content to a new level, and people pay to get additional content.

People are paying for publications like The Verge and Medium, and newsletters have also become a viable business model. And they're not multitasking when watching YouTube or reading on their phones.

That's where I'd put the spotlight. And the key to all of it is that content length is often not dictated by ads (sponsors pay by the unit; paid members don't get the ads) but by how long it needs to be.

If on the other hand we want to keep it bleak, I'd remind you that before-the-web TV was mostly atrocious and aimed at people keeping it on while they did the dishes. The bulk of books sold were "Men Are from Mars" airport books, and movies were so formulaic I had a friend who wouldn't pause them when going to the bathroom and wouldn't miss much.

Basically, we accepted filler as a fact of life, and we're now asking the young generation why they're not biting the bullet. And honestly, I can still read research papers, but I've completely lost tolerance for a 400-page book that could have been a blog post.


I've come to the same conclusion after years of feeling like the idiot for not being able to sit through books. If people aren't making it through your book, they might have short attention spans, but your book also might just be bloated, unclear, or uninteresting. It may not even have set expectations well enough. As Brandon Sanderson says, it's very easy to skip out on the last half of Into the Woods if you don't know who Stephen Sondheim is as a writer.


Early in life I learned the rule: If one person is a jerk, he's just a jerk. If you feel like everyone is a jerk, you are probably the one being a jerk.

The same is true of books. If you think one book is bad, it's probably the book. If you think all or most books are slow, you should work on your attention span.


Shouldn't we take into account that the industry is also famous for being a monetization path for bloggers, pundits, and grifters, for whom a book deal means a jackpot, combined with minimum word counts pushing authors and ghostwriters to pad their work to reach an average page volume?

I mostly read non-fiction, so the landscape is probably grimmer there, but genuinely good books aren't that many, and I feel that has been common wisdom for centuries. Except we're trying to push that fact under the carpet as ever fewer people are buying books.


There are more books now than ever, and we've been producing books in vast numbers for hundreds of years. Even if the vast majority were garbage there would still be more great books available than could be read in several lifetimes.

Have you considered trying to optimize the way you discover your next read? It almost sounds like you're getting your recommendations from social media, and that it isn't really working out well for you.


"More books than ever" will be eternally true unless we actively destroy books (god no).

The book industry isn't in good shape otherwise [0]: revenue has recovered while unit sales are declining.

I actually don't get recommendations per se. I mostly read books from authors I already like (fiction), or books on subjects I want to read about, where I'll scrape the reviews to see what to settle on, or just go through each book if it's at my local library (non-fiction).

A surprising number of them are available in the Kindle Unlimited bundle or at the library, so I read a lot without per unit money involved, and without the sunk cost calculation.

> your next read

I think that might be the core of it. I don't see books as something that needs to be read continually. I already use my eyes way too much, so it's not a hobby, and I expect value that can't be gained by other means.

[0] https://nielseniq.com/global/en/news-center/2025/internation...


> "More books than ever" will be eternally true unless we actively destroy books (god no).

You are right, of course. My phrasing was off. I meant to say that we produce more books than ever.

Although that is also a bit of a misleading statement. It is factually true that we produce more books per annum than ever before, but the average book now sells far fewer than 1,000 copies in its lifetime (one source I found said around 500), and the growth in quantity has not produced a corresponding growth in quality.

> I don't see books as something that needs to be read continually.

Fair enough. There are only so many hours in a lifetime, and we all have to choose how we spend the ones allotted to us. Although, personally I feel that the world would be better off if people spent more of them reading fiction, and fewer on social media.


People don't even have the attention span for tweets. You see people asking Grok to summarize the points of whoever they're fighting with.

Try going back in time and explaining to Neil Postman that people today find watching TV to be a chore that needs abbreviation or summarization.


"Grok summarize this comment"

I kid you not, I've had people ask Grok to summarize a 3-4 tweet thread I posted.


40 minutes or so? You guys are getting lazy. I expected an AI connection in less than 10 minutes after the post.


Are you being too passive-aggressive to say directly that you're offended by commentary about AI that disagrees with your stance, or do you really keep track of these timings?


My stance is chaotic good, and HN keeps track of timings for me, I just have to look.


Most libraries track the circulation of their catalog. If nobody is using the classics, they're going to get weeded. Most libraries have limited shelf space, and it's best used for things that people are using.

Archiving can be part of a library too, but I think a reasonable tradeoff is interlibrary loans, public catalogs, and considering copies in other libraries while weeding. Some library systems can also move items to non-public stacks, which may be less space-constrained, and access them only on request.


> The tragedy of the modern library is that no one has the attention span for old books.

Fixed that to mean what you say.

Luckily, people still have the attention span for good books. Which is why libraries still stock good books, classic or otherwise. They also stock books that people want to read. Which might seem odd until you realize that libraries are there for the community to use.

However, you are free to set up a library that stores books that no one reads.


This has been an ongoing discussion within libraries for more than a hundred years, not a recent issue. Should libraries be a place with classics to uplift people, or a place with popular books that people want to read even if they are low quality?


I respect what libraries do, yet the past few times I went to my local library I couldn't find anything I was looking for — and these were well-regarded and well-known books. I get that they want to stock things people read, but I am a person who wants older books, and I think part of the library's responsibility should include such books.


I find that old books can often take away more than they give me. They often have outdated ideas about women or race, and they are usually far clumsier in depicting homeless, disabled, or sick people. When I point these out, engagement with fans of old books often amounts to very sheepish defensiveness.


You're lucky these days if all you get is sheepish defensiveness and not revanchist conservatism.


I think this is a somewhat wrong framing, and it's also shitty to blame libraries for this shift. Tech companies, for the most part, are responsible for the destruction of attention spans, if that has really happened. And I'd be happy to bet that, by whatever criteria you choose, there are more great books written per year now than in 1240 or whatever time you think they only wrote great stuff. It's just that there is now much more to wade through, and the media environment is totally different.

At any rate, I just think it's a very strange thing to use "old" as a substitute for "good." There are tons of old books that are moronic, and if the population of the world back then had been the same as now, there would be tons more.


This ranks among those things that are obvious, and yet we "study" them expecting a different outcome.

“If anyone will not work, neither shall he eat. For we hear that there are some who walk among you in a disorderly manner, not working at all, but are busybodies. Now those who are such we command and exhort through our Lord Jesus Christ that they work in quietness and eat their own bread.” - 2 Thes 3:10b-12


I see lots of people claiming massive positive economic impacts from UBI, but whenever the results are released it's always "recipients saved money and are happier."

Like, yeah, fairly obvious conclusions.

The closest I have seen to measuring the economic impact was a study of a charity that runs a pseudo-UBI in African villages, but that's skewed by being in a place with no other monetary investment mechanism. It's probably true that UBI can be used instead of targeted industrial investment by government.


> It's probably true that UBI can be used instead of targeted industrial investment by government.

In a small village, maybe, on the metric of wealth per capita. But there are going to be massive externalities, because there is nothing intrinsic there to perpetuate or grow that wealth through industry. That old proverb rings true: "Give a man a fish, he'll eat for a day. Teach a man to fish…"


Your statement is very true, but the report I read found that when they had capital, instead of buying a meal they began businesses (or made larger capital investments, like livestock) so they could take care of themselves long term.

The issue is that, sort of like you say, there's a lot of low-hanging fruit in a poor village, where just a bit of money can set someone up. And the comparison was other charity: apparently another charity had sent this village a shipping container full of beach balls. So comparatively, money is easily the victor.


I remember installing iPod Linux on my first gen iPod nano. One of my biggest regrets is trading it in for a brand new iPod nano when they had the battery replacement program. The memories were worth much more than a free iPod.


I remember playing Doom on Linux on my 3rd gen iPod...

It got rather warm, and it just seemed crazy silly at the time. For context, smartphones weren't quite a thing yet, and capacitive touch screens were an emerging technology.


And the Half-Life to Doom engine port! Good memories.


I still have an iPod. I replaced the battery, swapped the hard drive for an SD card, and it's still a great way to listen to music over a nice pair of headphones. I plan to build a few more so I have enough for the rest of my life.

Phone + Bluetooth is more convenient, but something about the experience of the iPod, perhaps the lack of distraction, makes it more visceral.


Modern digital audio players (DAPs) exist. Basically an iPod with an upgraded DAC and amp, and optionally WiFi and Bluetooth for streaming directly from Tidal or a local media server, or from your phone. I also use mine as a Bluetooth receiver for my TV audio when watching late at night.


What model do you have?


Recently got a Hiby S3 II. Happy with it so far. The Tidal UI could be better but it works.

