
This conception makes sense iff you believe in ChatGPT as the universal user interface of the future. If anything the agentic wave is showing that the chat interfaces are better off hidden behind stricter user interface paradigms.


I suspect there are many, many things for which chat is a great interface. And by positioning ChatGPT as the distributor for all these things, they get to be the new Google. But you're also right that there are many domains for which a purpose-built interface is the right approach, and if a domain is valuable enough, someone will come after it to build that.


I have yet to see a chat agent deployed that is more popular than tailored browsing methods. The most charitable way to explain this is that the tailored browsing methods already in place are the results of years of careful design and battle testing, and that the chat agent provides most of the value a tailored browsing method would, but without any of the investment required to bring a traditional UX to fruition. That may be the case, and if it is, then allowing chat agents the same time to be refined and improved would be fair. I am skeptical that this is the only difference, though. I think chatbots are, essentially, a way to outsource the difficult work of locating data within a corpus onto the user, and users will always be at a disadvantage compared to the (hopefully) subject-matter experts building the system.

So perhaps chatbots are an excellent method for building out a prototype in a new field while you collect usage statistics to build a more refined UX - but it is bizarre that so many businesses seem to be discarding battle tested UXes for chatbots.


agree.

Thing is, for those who paid attention to the last chatbot hype cycle, we already knew this. Look at how Google Assistant was portrayed back in 2016. People thought you'd be buying Starbucks via the chat. Turns out the Starbucks app has a better UX.


Yea, I don't want to sit there at my computer, which can handle lots of different input methods, like keyboard, mouse, clicking, dragging, or my phone, which can handle gestures, pinching, swiping... and try to articulate what I need it to do in English-language conversation. This is actually a step backwards in human-computer interaction. To use an extreme example: imagine if instead of a knob on my stereo for volume, I had a chat box where I had to type in "Volume up to 35". Most other "chatbot-solved" HCI problems are just like this volume control example, but less extreme.


It's funny, because the chatbot designers seem to be continually attempting to recreate the voice computer interface from Star Trek: TNG. Yet if you watch the show carefully, the vast majority of the work done by all the Enterprise crew is done via touchscreens, not voice.

The only reason for the voice interface is to facilitate the production of a TV show. By having the characters speak their requests aloud to the computer as voice commands, the show bypasses all the issues of building visual effects for computer screens and making those visuals easy to interpret for the audience, regardless of their computing background. However, whenever the show wants to demonstrate a character with a high level of computer mastery, the demonstration is almost always via the touchscreen (this is most often seen with Data), not the voice interface.

TNG had issues like this figured out years ago, yet people continue to fall into the same trap because they repeatedly fail to learn the lessons the show had to teach.


It's actually hilarious to think of a scene where all the people on the bridge are shouting over each other trying to get the ship to do anything at all.

Maybe this is how we all get our own offices again and the open floor plan dies.


Hmm. Maybe something useful will come of this after all!

"...and that is why we need the resources. Newline, end document. Hey, guys, I just got done with my 60 page report, and need-"

"SELECT ALL, DELETE, SAVE DOCUMENT, FLUSH UNDO, PURGE VERSION HISTORY, CLOSE WINDOW."

Here's hoping this at least gets us back to cubes.


Getting our own offices would simply take collective action, and we're far too smart to join a union, err, software developers association to do that.


They’d just have an array of microphones everywhere and isolate each voice - rooms only need n+1 microphones, where n is the maximum number of people. That’s already simple to do today, and it’s not even that expensive.
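
For flavor, the simplest version of that idea is delay-and-sum beamforming. This is a toy sketch, assuming you already know each mic's steering delay toward one talker; real multi-talker isolation takes far more than this:

    import numpy as np

    def delay_and_sum(signals, delays_samples):
        # signals: ndarray of shape (num_mics, num_samples)
        # delays_samples: per-mic integer delays steering toward one talker
        # Aligning every channel on the target talker and averaging
        # reinforces that voice; other directions stay misaligned and
        # partially cancel. np.roll wraps around, which is fine for a toy
        # sketch but not for real audio.
        aligned = [np.roll(sig, -d) for sig, d in zip(signals, delays_samples)]
        return np.mean(aligned, axis=0)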


Profound observation, thank you for this.


Remember Alexa? Amazon kept wanting people to buy things with their voice via assorted echo devices, but it turns out people really want to actually be in charge of what their computers are doing, rather than talking out loud and hoping for the best.


“volume up to 35”

>changes bass to +4 because the unit doesn't do half increments

“No, volume up to 35, do not touch the EQ”

>adjusts volume to 4 because the unit doesn’t do half increments

> I reach over, grab my remote, and do it myself

We have a grandparent who really depends on their Alexa, and let me tell you, repeatedly going “hey Alexa, volume down. Hey Alexa, volume down. Hey Alexa, volume down,” gets really old lol. We just walk over and start using the touch interface.


It's also a matter of incentives. Starbucks wants you in their app instead of as a widget in somebody else's - it lets them tell you about new products, cross-sell/up-sell, create habits, etc.

This general concept (embedding third parties as widgets in a larger product) has been tried many times before. Google themselves have done this - by my count - at least three separate times (Search, Maps, and Assistant).

None have been successful in large part because the third party being integrated benefits only marginally from such an integration. The amount of additional traffic these integrations drive generally isn't seen as being worth the loss of UX control and the intermediation in the customer relationship.


Current LLMs are way better at understanding language than the old voice assistants.


Omg thank you guys. It felt so obvious to me but nobody talked about it.

A dedicated UX is better, and a separate app or website feels like exactly the separation needed.

Booking flights => browser => Skyscanner => destination typing => evaluating options with AI suggestions on top and a UX to fine-tune if I have out-of-the-ordinary wishes (don’t want to get up so early)

I can’t imagine a human or an AI being better than this specialized UX.


Hard disagree.

At least in my domains, the "battle-tested" UX is a direct replication of underlying data structures and database tables.

What chat gives you access to is a non-structured input that a clever coder can then sufficiently structure to create a vector database query.

Natural language turns out to be a far more flexible and nuanced interface than walls of checkboxes.
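
As a minimal sketch of what I mean (hypothetical helper names; embed() stands in for whatever embedding model you actually use), the free-text request just gets embedded and ranked against pre-embedded rows:

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def search(query_text, rows, embed, top_k=5):
        # rows: (text, vector) pairs pre-computed with the same embed()
        # model. Ranking by cosine similarity is the whole "vector
        # database query" in miniature.
        q = embed(query_text)
        ranked = sorted(rows, key=lambda r: cosine(q, r[1]), reverse=True)
        return [text for text, _ in ranked[:top_k]]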


> I have yet to see a chat agent deployed that is more popular than tailored browsing methods.

Not an agent, but I've seen people choose doctors by asking ChatGPT for criteria, and they did make those appointments. It saved them countless web interfaces to dig through.

ChatGPT saved me so much money by searching for discount coupons on courses.

It even offered free-entrance passwords for events I didn't know had such a thing (I asked it where the event was, and it also told me the free-entrance password it found on some obscure site).

I've seen doctors use ChatGPT to generate medical letters -- ChatGPT used some medical-letter Python code, and the doctors loved the result.

I've used ChatGPT to trim an energy bill to 10 pages because my current provider generated a 12-page bill in an attempt to prevent me from switching (they knew the other provider did not accept bills of more than 10 pages).

Combined with how incredibly good Codex is, and with how easily ChatGPT can just create throwaway one-time apps, there's no way the whole agent interface doesn't eat a huge chunk of the traditional UX software we are used to.


> the tailored browsing methods already in place are the results of years of careful design and battle testing

Have you ever worked in a corporation? Do you really think the Windows 8 UI was the fruit of years of careful design? What about Workday?

> but it is bizarre that so many businesses seem to be discarding battle tested UXes for chatbots

Not really. If the chatbot is smart enough, then the chatbot is the more natural interface. I've seen people who prefer to say "hey siri set alarm clock for 10 AM" rather than use the UI. Which makes sense, because language is the thing people have literally evolved specialized organs for. If anything, language is the "battle tested UX", and the other stuff is a temporary fad.

Of course the problem is that most chatbots aren't smart. But this is a purely technical problem that can be solved within the foreseeable future.


> I've seen people who prefer to say "hey siri set alarm clock for 10 AM" rather than use the UI.

It's quicker that way. Other things, such as zooming in on an image, are quicker with a GUI. Blade Runner makes clear how poor a voice UI is for this compared to a GUI.


With an alarm, there is only one parameter to set. In more complex tasks, chat is a bad UI because it does not scale well and does not offer good ways to arrange information. E.g. if I want to buy something and I have a bunch of constraints, I would rather use a search-based UI where I can quickly tweak those constraints and decide. ChatGPT being smart or not is irrelevant here; it would just be a bad UI for the task.


You're thinking in the wrong categories. Suppose you want to buy a table. You could say "I'm looking for a €400 100x200cm table, black" and these are your search criteria. But that's not what you actually want. What you actually want is a table that fits your use case and looks nice and doesn't cost much, and "€400 100x200cm table, black" is a discrete approximation of your initial fuzzy search. A chatbot could talk to you about what you want, and suggest a relevant product.

Imagine going to a shop and browsing all the aisles vs talking to the store employee. A chatbot is like the latter, but for a webshop.

Not to mention that most webshops have their categories completely disorganized, making "search by constraints" impossible.


Funny, I almost always don't want to talk to store employees about what I want. I want to browse their stock and decide for myself. This is especially true for anything that I have even a bit of knowledge about.


The thing is that "€400 100x200cm table, black" is just much faster to input and validate versus a salesperson, be it a chatbot or an actual person.

Also, the chatbot is just not going to have enough context, at least not in its current state. Why those measurements? Because that's how much room you have; you measured. Why black? Because your couch is black too (bad choice), and you're trying to do a theme.

That's kind of a lot to explain.


Even when going to a shop, I prefer to look into the options myself first. Explaining to a salesperson what I need can take much more time, and then I am never sure if they are just trying to upsell, whether I explained my use case well, etc. The only case where I opt for a salesperson first is when I cannot translate my use case into a specification due to the degree of technical or other knowledge needed. I can imagine, e.g., somebody who knows nothing about computers asking "I want a laptop, with good battery, I would use it for this and that", the same way they would ask a salesperson or a technical friend. But I cannot imagine using such an LLM to look for a table where I need it to fit measurements etc., or for anything else where the product knowledge is accessible. If I know the specifications, opting for an AI chatbot is inefficient. If not, it could help.


> I've seen people who prefer to say "hey siri set alarm clock for 10 AM" rather than use the UI. Which makes sense, because language is the thing people have literally evolved specialized organs for.

I don't think it's necessary to resort to evolutionary-biology explanations for that.

When I use voice to set my alarm, it's usually because my phone isn't in my hand. Maybe it's across the room from me. And speaking to it is more efficient than walking over to it, picking it up, and navigating to the alarm-setting UI. A voice command is a more streamlined UI for that specific task than a GUI is.

I don't think that example says much about chatbots, really, because the value is mostly the hands-free aspect, not the speak-it-in-English aspect.


Even when my phone is in my hand I'll use voice for a number of commands, because it's faster.


I'd love to know the kind of phone you're using where the voice commands are faster than touchscreen navigation.

Most of the practical day-to-day tasks on the Androids I've used are 5-10 taps away from the lock screen, and tapping gets far fewer dirty looks from those around me.


My favorite voice command is to set a timer.

If I use the touchscreen I have to:

1. Unlock the phone - easy, but takes an active swipe.

2. Go to the clock app - I might not have been on the home screen; maybe a swipe or two to get there.

3. Set the timer to what I want - and here it COMPLETELY falls down, since it's probably showing how long the last timer I set was, and if that's not what I want, I have to fiddle with it.

If I do it with my voice I don't even have to look away from what I'm currently doing. AND I can say "90 seconds" or "10 minutes" or "3 hours" or even (at least on an iPhone) "set a timer for 3PM" and it will set it to what I say without me having to select numbers on a touchscreen.

And 95% of the time there's nobody around who's gonna give me a dirty look for it.


And less mental overhead. Go to the home screen, find the clock app, go to the alarm tab, set the time, set the label, turn it on, get annoyed by the number of alarms that are there that I should delete so there isn't a million of them. Or just ask Siri to do it.


One thing people forget is that if you do it by hand, you can do it even when people are listening, or when it’s loud. Meaning it works more reliably. And in your brain you only have to store one way of executing it instead of two. So I usually prefer the more reliable approach.

I don’t know any people who use Siri except the people with really bad eyes.


God, I miss physical buttons and controls: being able to do something without even looking at it.


> Not really. If the chatbot is smart enough, then the chatbot is the more natural interface. I've seen people who prefer to say "hey siri set alarm clock for 10 AM" rather than use the UI. Which makes sense, because language is the thing people have literally evolved specialized organs for. If anything, language is the "battle tested UX", and the other stuff is a temporary fad.

I do that all the time with Siri for setting alarms and timers. Certain things have extremely simple speech interfaces, and we've already found a ton of them over the last decade+. If it had been useful to use speech for ordering an Uber, it would've been worth it for me to learn the specific syntax Alexa wanted.

Do I want to talk to a chatbot to get a detailed table of potential flight and hotel options? Hell no. It doesn't matter how smart it is, I want to see them on a map and be able to hover, click into them, etc. Speech would be slow and awful for that.


An alarm is a good example of an “output only” task. The more inputs that need to be processed, the worse a pure chatbot interface fits (think lunch bowl menus, shopping in general, etc.).


> Of course the problem is that most chatbots aren't smart. But this is a purely technical problem that can be solved within the foreseeable future.

Ah yes, it's just a small detail. Don't worry about it.


I'm sure some very smart Chatbots are working on it.


I don't understand how a website for tech people turned into a boomerland of people who pride themselves on not using technology. It's like those people who refuse to use computers because they prefer doing everything the old-fashioned way, and insist on society following them.


Maybe you can have discussions with a chatbot instead. They always agree with you.


I knew it!

-diehard CLI user


I can't imagine that users will be interested in asking ChatGPT to ask Zillow things, or asking ChatGPT to ask Canva to do things. That's a clunky interface. I can see users asking ChatGPT to look up house prices, or to generate graphics, but they're not going to ask for Zillow or Canva specifically.

And if the apps are trusting ChatGPT to send them users based on those sorts of queries, it's only a matter of time before ChatGPT brings the functionality first-party and cuts out the apps - any app that believes chat is the universal interface of the future and exposes its functionality as a ChatGPT app is signing its own death warrant.


Every company should see OpenAI as a threat. They absolutely will come for you when the time comes.

It's just like Google and websites, but much more insidious. If they can get your data, they'll subsume your function (and revenue stream).


That and the erosion of privacy make OpenAI something to be very vigilant about.


This x1000. Are businesses short-sighted enough to help create and develop another walled garden, just like "Google" and "Amazon" are right now? Time will tell, but I think businesses want to own their sales funnel, not just give the user a way to avoid interacting with them.


Exactly.

This is exactly the same playbook that has already been run multiple times in the past (and is being run now) by existing companies.

These companies initially laid out red carpets for such builders, but once they themselves had enough apps, they started to tighten their grip, and then gradually shifted to complete 100% control and extortion in the name of "security" or some other made-up excuse.

No more walled gardens. If something like this has to come (which I truly believe is helpful), it should be built on the open web and open protocols, not controlled by a single for-profit company (ironic, since OpenAI is technically a non-profit).


>If anything the agentic wave is showing that the chat interfaces are better off hidden behind stricter user interface paradigms.

I'm not sure that claim is justified. The primary agentic use case today is code generation, and the target demographic is used to IDEs/code editors.

While that's probably a good chunk of total token usage, it's not representative of the average user's needs or desires. I strongly doubt that the chat interface would have become so ubiquitous if it didn't have merit.

Even for more general agentic use, a chat interface allows the user the convenience of typing or dictating messages. And it's trivially bundled with audio-to-audio or video-to-video, the former already being common.

I expect that even in the future, if/when richer modalities become standard (and the models can produce video in real-time), most people will be consuming their outputs as text. It's simply more convenient for most use-cases.


Having already seen this explored in late '24, what ends up happening is that the end user generates apps that have lots of jank, quirks, and logical errors that they lack the ability to troubleshoot or resolve. Like the fast-forward button corrupting their settings config, the cloud sync feature causing 100% CPU load, icons gradually drifting away from their original positions on each window resize event, or the GUI tutorial activating every time they switch views in the app. Even worse, because their app is the only one of its kind, there is no other human to turn to for advice.


Hopefully people and technology aren't stuck in late '24.


It's not just ChatGPT as the interface. It's that chat with AI will now be the universal interface, and every tech company will have their version of it. Everything you want to do will happen in one place. Cards will provide predefined, interactive experiences. Over time you'll see entirely dynamic content generated on the fly. The user experience is going to be one where we've shrunk websites to apps, and apps to cards or widgets. Effectively any action you need to take can be done like this, and then agents can operate more complex workflows in the background. This is probably the interface for the next 10 years, and what replaces the mobile app experience and the stronghold that Apple and Google have. This lasts until fully immersive AR/VR becomes a more mainstream thing. At that point these cards are on a heads-up display, but we'll be looking at something totally different. Like agents roaming the earth...


This has been the pitched playbook for decades. (Metamates!) I'm increasingly convinced it's driven by a specific generation of tech entrepreneurs who cut their teeth reading ca. 1980s science fiction.

I could see chat apps becoming dominant in Slack-oriented workplaces. But, like, chatting with an AI to play a song is objectively worse than using Spotify. Dynamically-created music sounds nice until one considers the social context in which non-filler music is heard.


The thing it reminds me of is those old Silicon Graphics greybeards who were smug about how they were creating tools for people who created wealth, while those other system providers "just" created tools for people tracking wealth.

There's a whole bizarre subculture in computing that fails to recognize what it is about computers that people actually find valuable.


It's because Zuck can't own a pane of glass. He's locked out of the smartphone duopoly.

Everyone wants the next device category. They covet it. Every other company tries to will it into existence.


Chatting with an AI to play a song whose title you know, sure.

Getting an AI to play "that song that goes hmm hmmm hmmm hmmm ... uh, it was in some commercials when I was a kid" tho


> Getting an AI to play "that song that goes hmm hmmm hmmm hmmm ... uh, it was in some commercials when I was a kid" tho

Absolutely. The point is that this is a specialised and occasional use case. You don't want to have to go through a chatbot every time you want to play a particular song just because sometimes you might hum at it.

The closest we've come to a widely adopted AR interface is AirPods. Critically, however, they work by mimicking how someone would speak to a real human next to them.


More abstract than that: "I'm throwing a wedding/funeral/startup IPO/Halloween/birthday party for a whatever-year-old and need appropriate music". Or, without knowing specific bands, "I want to hear some 80's metal music". "more cowbell!"


You don't need AI for this, Spotify has, like, infinite playlists.

Also their playlists are made by real people (mostly...), so they don't completely suck ass.


Playlists aren't interactive though. I can't say "like this but with less guitar".

Also, following the Beatport top 100 tech house playlist, and hearing how many tracks aren't actually tech house makes me wonder about who makes that particular playlist.


I don't know, I don't buy that this is a use case that matters enough to sway anyone.

That's how I feel about a lot of AI stuff.

Like... It's neat. It's a fun novelty. It makes a good party trick. It's the software equivalent of a knick knack.

Like 90% of the Pixel AI features. There's some good ones in there, sure, but most of them you play around with for a day and then forget exist.


Okay so you're at the party, and you do a cool party trick, and then that cute stranger you've been eyeing all night finally comes over to talk to you. Why's it need to be more than that?


Because we're pouring trillions of dollars into that.

This isn't me making a cute little website in my free time. This is thousands of developers, super computers out the wazoo, and a huge chunk of the western economy.

Like, a snowglobe is cute. They don't do much, but they're cute. I'd buy one for ten dollars.

I would not buy a snowglobe for 10 million dollars.


The interface of the future is local "AI" in the form of functions embedded in hardware, inferred from data sets.

One way to consider it that I like, as an EE working in the energy-model realm: consider the geometry of an oscilloscope.

Electromagnetism gets carved up into equations that recreate it.

Geometric generators create bulk structure and allow changing min/max parameters to achieve the desired result.

Consider a hardware system that boots and offers little more than Blender- and Photoshop-like parameter UI widgets to manipulate whatever segment of the geometry isn't quite right.

Currently we rely on an OS paradigm that is basically a virtual machine for noodling strings. The future will be a vector virtual machine that lets users noodle coordinates.

It's way less resource-intensive to think of it all as a sync of memory matrix to display matrix, and to jettison all the syntax sugar developers are stuck with from the string-munging OSes of history.


I agree with you. I think chat interfaces are really good as voice interfaces while walking: asking for a foreign-language lesson, or effectively doing a web search by speaking and listening to the answer.

Other app-like interfaces like NotebookLM can be useful; for me, one or two real uses a week.

Then there is engineering small open models into larger systems to do structured data extraction, etc.

I am skeptical about the current utility of agentic systems, MCP, etc. - even though I like to experiment.

Someone else said that at least they didn’t go on and on about AGI today - a nice thing. FOMO-chasing ASI and AGI will drive us bankrupt, and produce some useful results.


I agree with what you are saying.

I’m building a tool that helps you solve any type of questionnaire (https://requestf.com) and I just can’t imagine how I could leverage Apps.

It would be awesome to get the distribution, but it has to also make sense from the UX perspective.


Your link is broken?


> conception makes sense iff you believe in ChatGPT as the universal user interface of the future

Out of curiosity, why iff?


"iff" means "if and only if". It's common in mathematics.


Correct. I’m asking why this SDK makes sense <—> ChatGPT becomes a universal interface. Why isn’t it useful for intermediate applications?


The apps can send arbitrary HTML / interfaces back, though.

E.g. Coursera can send back a video player.
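
As a purely hypothetical sketch (made-up field names, not the real Apps SDK schema), the tool result just needs to bundle structured data with an HTML snippet for the chat client to render inline:

    # Hypothetical response shape, NOT the actual Apps SDK schema: the
    # point is only that structured data and renderable HTML travel
    # together in one tool result.
    result = {
        "data": {"course_id": "ml-101", "progress": 0.4},  # made-up fields
        "html": "<video src='https://example.com/intro.mp4' controls></video>",
    }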


This will be a bunch of rushed garbage. It will be like Java applets.


Maybe, but don't forget they are godly at iteration.


There's a lot of appropriate blowback against stupid AI hype and I'm all for it. But I do think in many respects it's a better interface than (1) bad search results, (2) cluttered websites, (3) freemium apps with upgrade nags, as well as the collective search cost of sorting through all those things.

I remember reading some not-Neuromancer book by William Gibson where one of his near-future predictions was print magazines but with custom printed articles curated to fit your interests. Which is cool! In a world where print magazines were still dominant, you could see it as a forward iteration from the magazine status quo, potentially predictive of a future to come. But what happened in reality was a wholesale leapfrogging of magazines.

So I think you sometimes get leapfrogging rather than iteration, which I suspect is in play as a possibility with AI-driven apps. I don't think apps will ever literally be replaced, but I think there's a real chance they get displaced by AI everything-interfaces. I think the mitigating factor is not some foundational limit to AI's usefulness but enshittification, which I don't think consumed good services as voraciously in the 00s or 2010s as it does today. Something tells me we might look back at the current chat-based interfaces as the good old days.


I think you need to be careful here, because you shouldn't be comparing chat apps to the current state of search results. Instead, compare them to the ideal, or to the state they were in before companies decided that instead of providing what people are looking for, it was more profitable to show them related content they're paid to promote.

We are at a moment where we're trying to figure out how to design good interfaces, but very soon after that, the moment of "okay, now let's start selling with them" will come, and that's really what we're going to be left with.

In that regard, things like adblockers, which nowadays can be used to mitigate some of the defects you talk about, are probably going to be much more difficult to implement in a chat-app interface. What are we going to do when we ask an agent for something and it responds with an ad rather than the relevant information we're seeking? It seems to me like it's going to be even harder for the user to stay in control.


It's fine though, because this technology is a commodity; anyone can run it or resell it. I expect I can continue paying Kagi or someone like them to provide a good experience at a fair price.


I think you're right that it's going to get enshittified (in fact I tried to say a similar thing toward the end of my comment). I'll stand by this though, LLM Chat, as it exists now, is (imo) objectively better than Google Search, as it is now. Google Search at its best (or, say, Kagi), vs LLM Chat at its best, I would say there's an interesting open question, but I can see the case for chat winning.

But I think it's going to be like Kagi: you'll pay for a subscription to a good-enough one, but the main companies will try to make their proprietary ones too feature-rich and too convenient, so that you'll have no choice but to use their enshittified version. What we have now might be a golden age that we will miss having.

But, for better or worse, I do think what's coming may be a paradigm where they are effectively one big omniscient super-app.


I'll say it: ChatGPT is better than Kagi, and better than Google Search 1.0, at searching the web and finding relevant sources, even if all you use it for is to find links that you read. Usually its analysis is sound if I don't know anything about the subject matter.


My 5 year-old nephew's analysis is also sound when you don't know anything about the subject matter


Does your 5 year-old nephew understand sarcasm?


At least with bad search results, you had to look at them to know they were bad, or you became familiar enough with certain domains that you could prejudge a result and move on to the next one. LLMs confidently tell you false/made-up information as fact. If you fail to follow up on any references and just accept the result, you are very susceptible to getting fooled by the machine. Getting outside of the tech-bubble echo chamber that is HN, a large number of GPT app users have never heard of hallucinations or any of the issues inherent in LLMs.


Once it's efficient enough, you will be able to just talk to your computer to do all of this. Text chat is just the simplest form of a natural language interface, which is obviously the future of computing.


The ChatGPT phone app has had a voice conversation mode for a while now. It's more interactive than a podcast while driving. There are apps (Wispr, no affiliation) to make talking to your computer easier. The future is definitely a hybrid of the two: sometimes I want to talk, other times I want to type.


I don’t think natural language is efficient enough, whether that be text or voice.

I imagine the Star Trek vision is pretty accurate. You occasionally talk to the computer when it makes sense, but more often than not you’re still interacting with a GUI of some kind.


WeChat is a counterexample to your claim.


Is WeChat purely conversational, without visuals? I think not.


Is it? Honestly, most agents and/or AI apps I interact with that are actually useful present some form of chat-like interface.

I’m not very bullish on people wanting to live in the ChatGPT UI specifically, but the concept of dynamic apps embedded into a chat experience is, I think, a reasonable direction.

I’m mostly curious about if and when we get an open standard for this, similar to MCP.


The whole value of an actual executive assistant is them solving problems and you not micromanaging them.

What users want, which various entities religiously avoid providing to us, is a fair price comparison and discovery mechanism for essentially everything. A huge part of the value of LLMs to date is in bypassing much of the obfuscation that exists to perpetuate this, and that's completely counteracted by much of what they're demonstrating here.


Yes, I certainly prefer "chatting with Claude Code" to "Copilot taking forever to hallucinate all over my IDE, displacing the much-more-useful previous-generation semantic autocomplete."

The former is like a Waymo, the latter is like my car suddenly and autonomously deciding that now is a good time to turn into a Dollar Tree to get a COVID vaccine when I'm on my way to drop my kid off at a playdate.



