
I took a look--it seems like you can pass a path on the command line to open. Can you pass a line number, also?

No, but that's a good idea. I'll add that.

Also--cool editor!

Done. You can now pass file, file:line, or file:line:column in the CLI.
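So, concretely (binary name here is a placeholder; substitute the editor's actual command):

  myeditor src/app.ts          # open a file
  myeditor src/app.ts:42       # open at line 42
  myeditor src/app.ts:42:7     # open at line 42, column 7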

> the tech is real and has great promise.

This was very true of the dotcom bubble. The entire "web" was new, and the promise was everything you use it for today.

Pets.com was a laughingstock for years as an example of dotcom excess, and now we have chewy.com successfully running the same model.

Webvan.com was a similar example of "excess," and now we have Instacart and others.

I looked up webvan just now--the postmortem seems relevant:

"Webvan failed due to a combination of overspending on infrastructure, rapid and unproven expansion, and an unsustainable business model that prioritized growth over profitability."


This to me is the whole bubble.

The problem with dotcom was that we needed a cultural shift. I had my first internet date during the dotcom bubble, and I remember we would lie to people about how we met, because the idea sounded so insane to basically everyone at the time. In 1999 it seemed kind of crazy to even use your real name online, let alone put your credit card into the web browser.

Put your credit card into the browser, and then a stranger brings you items in their van? Completely insane culturally in 1999. It would have sounded like the start of an Unsolved Mysteries episode to the average person. There was no market for it.

The lesson I take from dotcom is that we had this massive bubble and burst over technology that already existed, worked flawlessly, and largely just needed time for the culture to adapt to it.

The main difference this time is we are pricing in technology that doesn't actually exist.

I can't think of another bubble that was based on something that doesn't exist. The closest analogy I can come up with is the railroad bubble, but with the trains not existing outside of some vague theoretical idea we don't yet know how to build: a bubble in laying down rail because of how big it will be once we figure out how to build the trains.

The only way you would get a bubble that stupid would be to have 50-100 years of art, stories and movies priming the entire population on the inevitability of the train.


Uber might be the wildest cultural shift of the last 25 years.

Nobody blinks twice nowadays at getting into a car with a total stranger.


I don't get it. Nobody blinked twice about getting into a car with a total stranger before Uber either — taxis have been around for well over a hundred years. It's not exactly a huge cultural change, just more efficient and convenient.


Isn't OpenAI already profitable on inference?

I understand training is still costly, but it's not unimaginable for it to turn profitable as well, if you believe they'll generate trillions in value by eliminating millions of jobs.


If you eliminate ONE job, and let's say the job pays $100K, then in theory at most $100K goes to AI revenue instead. In practice it's a lot less; nobody is going to move everything to AI if it's just a 10% saving.

So, to get a trillion in value, you'd have to eliminate many tens or even hundreds of millions of jobs; the arithmetic below makes that concrete.
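Back-of-the-envelope, using the $100K salary from above and illustrative capture rates (shell arithmetic, just to keep the orders of magnitude honest):

  # $1T of AI revenue / $100K per job = 10M jobs even at 100% capture
  echo $(( 10**12 / 10**5 ))    # 10000000
  # at 20% capture ($20K of AI spend per displaced job):
  echo $(( 10**12 / 20000 ))    # 50000000
  # at 10% capture:
  echo $(( 10**12 / 10000 ))    # 100000000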


Yeah, I think high tens of millions of jobs would be eliminated. Most employees are seat warmers anyway.


No, inference is actually pointing to them being economically unviable.

https://www.ft.com/content/fce77ba4-6231-4920-9e99-693a6c38e...


> Isn't OpenAI already profitable on inference?

I don't believe this has ever been the case, or even the claim. At best, they have identified some limited use cases in certain models where API tokens have generated a gross profit.


They won't generate trillions, because there are several companies all competing, and they will undercut each other to win users.


> Isn't OpenAI already profitable on inference?

Probably not, but the numbers they've released are too opaque to tell.


Some people treat politics like a tribal sport where "morally OK" is determined solely by which team did it.

Their mental model of the "other side" is someone who is similarly team-driven.

These folks get really confused when "whatabout your team?" falls flat on people who want to live by principles or morality, rather than hat color.


Not the OP, but I think about that. Here's what I came to, for the moment:

* LLMs are lousy at bugs

* Apps are a bit like making a baby. Fun in the moment, but a lifetime support commitment

* Supporting software isn't fun, even with an LLM. Burnout is common in open source.

* At the end of the day, it is still a lot of work, even guiding an LLM

* Anything hosted is a chore: uptime, monitoring, patching, backups, upgrades, security, legal, compliance, vulnerabilities

I think we'll see GitHub littered with buggy, unsupported, vibe-coded one-offs for every conceivable purpose. As it stands, though, you literally have no idea what you're looking at or whether it's decent.

Claude made four different message-passing implementations in my vibe-coded app. I realized this when it tried to modify the wrong one during a fix. In other words, Claude was falling over trying to support what it made, and only a dev could bail it out. I am perfectly capable of coding this myself, but you have two choices at the moment--invest the labor, or get crap. But then we come to "maybe I should just pay for this instead of burning my time and tokens."


Regarding the duplication of code: yes, I've found this to be a tremendous problem.

One technique which appears to combat this is "red team / blue team Claude."

Red team Claude is hypercritical and tries to find weaknesses in the code. Blue team Claude is your partner, whom you collaborate with to set up PRs.

While this has definitely helped me find "issues" that blue team Claude would lie to me about, hallucinations are still a bit of an issue. I mostly put red team Claude into ultrathink + task mode to improve the veracity of its critiques.
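A minimal sketch of the red-team half, assuming Claude Code's -p/--print mode; the prompt wording, the branch name, and the idea of piping a diff in are all illustrative choices, not a prescribed setup:

  # Red team: a fresh, hypercritical session that reviews rather than writes
  git diff main | claude -p "ultrathink: act as a hostile code reviewer. \
  Hunt for duplicated logic, dead code, and design weaknesses in this diff. \
  Cite a file and line for every finding so each claim can be verified."

Asking for file-and-line citations is the useful part: it turns vague critiques into checkable ones, which helps filter out the hallucinations.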


Injecting ENV variables into the template would be super useful.


  gemini -p "Say hello"

This says hello, and just returns right away. The gemini doc for -p says "Prompt. Appended to input on stdin (if any)." So it doesn't follow the doc.

  gemini "Say hello"

This fails, as gemini doesn't take positional arguments.

For comparison, claude lets you pass the prompt as a positional argument, but it appends it to the prompt and then gives you a running session. That's what I'd want for my use case.


Feedback: a command to add MCP servers, like Claude Code offers, would be handy.
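For reference, the Claude Code command in question looks roughly like this (server name and package are made up for illustration):

  claude mcp add my-server -- npx -y @example/mcp-server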


100% - It's on our list!


Sounds good on paper, but it has a game theory problem. If your efforts can always be outraced by someone using AI to do autonomous work, don't you end up having to use it that way just to keep up?


Game theory ideas are great on paper, but the real world is messy. For simple, demo- and concept-sized uses, sure, the AI doing it autonomously will succeed. But that betrays the reality that any real application with real-world complexity, a dynamic environment, and ongoing maintenance cannot be created by AI autonomously while also existing within an organization that can maintain it. The AI may create it, but it will be a shit show of cascading failures over time.


You know who outpaces you 100% of the time as you walk down the stairs? The guy jumping out of the window. Just because it is faster does not mean it is the right economic strategy. E.g., which contractor would you hire for your roof: the old roofer with 20+ years of experience, or some AI startup that hires the cheapest subcontractors and "plans" your roof using an LLM?

The latter may be cheaper, sure. But too cheap can become very expensive quickly.


I love your analogy.


If this paper is right, then you might at first be outraced by a competitor using autonomous AI, but only until that competitor gets stabbed in the back by its own AI.


Which unfortunately might still be long enough for them to sink your business.

And their customers won't care either way, it seems.

Or if they do care, they won't have any real ability to do anything about it anyway.


Maybe. The backstabbing rate is unknown so far. If it's high enough, then autonomy will be a poor strategy.


It might be the trigger :)


From an economic perspective, it requires LLMs and humans to have comparable outputs. That's not possible in all domains, at least not in the near future.


Maybe they’ll outpace you, or maybe they’ll end up dying in a spectacular fiery crash?


Is the percentage meaningful, though? If an LLM produces the most interesting, insightful, thought-provoking content of the day, isn't that what the best version of HN would be reading and commenting on?

If I invent the wheel, and have an LLM write 90% of the article from bullet points and edit it down, don't we still want HN discussing the wheel?

Not to say that the current generation of AI isn't often producing boring slop, but there's nothing that says it will remain that way, and percent-AI-assistance seems like the wrong metric to chase, to me.


Because why do you anti-compress your thoughts using an LLM at all? It makes things harder to read.


I re-compress my thoughts during editing. That's how I write normally. First, a long draft, then a short one. Saving writing time on the long draft is helpful.

Slop is slop, whether a human or AI wrote it--I don't want to read it. Great is great. Period. If a human or AI writes something great, I want to read it.

That AI writing will remain slop is a bold assumption, even if it holds for the next 24 hours.

“I didn't have time to write a short letter, so I wrote a long one instead.”

- Mark Twain


> If an LLM produces the most interesting, insightful, thought-provoking content of the day, isn't that what the best version of HN would be reading and commenting on?

Absolutely not. I would much rather read something boring and not thought-provoking, but authentic and real, than, as you say, AI slop.

If you want that sort of content, maybe LinkedIn is a better place.


Working gVisor Mac install instructions here:

https://dev.to/rimelek/using-gvisors-container-runtime-in-do...

After this is done, it's just:

  docker run --rm --runtime=runsc hello-world
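The heart of that guide is registering runsc as a Docker runtime. For the general shape, the daemon.json entry ends up looking roughly like this (the binary path is an assumption; use wherever runsc actually got installed):

  {
    "runtimes": {
      "runsc": {
        "path": "/usr/local/bin/runsc"
      }
    }
  }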

