Kamshak's comments | Hacker News

Good question, they also list the same customers (create.xyz, continue.dev)


I think both may be using "customers" very loosely :)


It's more expensive than Gemini Flash, which can actually write pretty decent code (not just apply a diff). Fast AI edit application is definitely great, but that's pretty expensive.

Morph v3 fast: Input $1.20 / M tokens, Output $2.70 / M tokens

Gemini 2.5 Flash: Input $0.30 / M tokens, Output $2.50 / M tokens

(Source: OpenRouter)
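To put the price gap in concrete terms, here is a rough sketch using the per-million-token rates quoted above; the request sizes (10k input, 2k output tokens) are hypothetical:

```python
# Cost of one request given per-million-token prices.
def cost(in_tok, out_tok, in_price, out_price):
    return (in_tok * in_price + out_tok * out_price) / 1e6

# Hypothetical request: 10k input tokens, 2k output tokens.
morph = cost(10_000, 2_000, 1.20, 2.70)   # Morph v3 fast
gemini = cost(10_000, 2_000, 0.30, 2.50)  # Gemini 2.5 Flash

print(f"Morph v3 fast: ${morph:.4f}")    # $0.0174
print(f"Gemini 2.5 Flash: ${gemini:.4f}") # $0.0080
```

At these rates the input-heavy nature of edit application (you resend the whole file) makes the input price the dominant term.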


That's for 0 data retention - on the Morph website it's $0.80 / 1M tokens input, $1.20 / 1M tokens output. We have discounts for large volumes/reserved instances as well.


In 2-3 years, once this begins to matter for a new project, you can probably just put the codebase into context and the "make it DRY" prompt will work. It already works with 2-3 files.


The usual "the models will improve a lot" narrative that I've seen in the majority of companies calling themselves an "AI company".

Could be true of course, fair bet to make.

Could also not be true, and I think as far as tech due diligence goes, an acquiring company or investor might want to play it safe.

But like I said, it's a fair bet to make - companies are essentially bets, and so are investments. As long as it's conscious and you like your odds, why not.


But what's the financial incentive to make it DRY? The LLM services your company pays for are incentivized to make the output as long and verbose as possible, because they can charge more. And from an engineer's perspective it doesn't matter, because no one is reading said code, or they're just asking the same LLM to tell them what the code does. And obviously more lines of code looks better to your execs.

There are a bunch of anti-patterns both in how these services are used and sold to you.


> The LLM services your company pays for are incentivized to ensure the output is as long and verbose

The incentive is to avoid you moving to a competing LLM provider which has short and concise output.

See how a large number of devs switched from Claude to Gemini 2.5 because the generated code was "better".

Providing a bad service is not a competitive advantage.


You seem to have missed a core point. I, as a developer, do not have the ability to make that change for whatever company I work for. Otherwise I would've canned shit like JIRA and Confluence instantly.

The 'competing' LLM providers don't matter. Companies will sign on with the big players.


> The LLM services your company pays for are incentivized to ensure the output is as long and verbose as possible because they can charge more.

At least for now you can make the output quite short, with 95+% code. I use this prompt for Claude Sonnet:

Communicate with direct, expert-level technical precision. Prioritize immediate, actionable solutions with minimal overhead. Provide concise, thorough responses that anticipate technical needs and demonstrate deep understanding. Avoid unnecessary explanations or formalities.

Key Communication Guidelines:

- Terse, no-nonsense language
- Assume high technical competence
- Immediate solution-first approach
- Speculative ideas welcome if flagged
- Prioritize practical implementation
- Technical accuracy over diplomatic language
- Minimal context, maximum information

The user has included the following content examples. Consider these when generating a response, but adapt based on the specific task or conversation:

<userExamples>
[Technical Query Response]
Quick solution for async race condition in Python:

```python
from threading import Lock

class SafeCounter:
    def __init__(self):
        self._lock = Lock()
        self._value = 0

    def increment(self):
        with self._lock:
            self._value += 1
```

[Problem-Solving Approach]
Unexpected edge case in data processing? Consider implementing a robust error handling strategy with dynamic fallback mechanisms and comprehensive logging.
</userExamples>


1/ this requires AGI, not a stochastic parrot - the training dataset does not contain any successful instances of achieving actual DRY at scale, so the agent will need to contribute novel research and essentially solve software engineering. It's not 2-3 years out. And why should any result be aligned to how things work today - why not start over from the qubits and work our way up?

2/ The worst instances of coupling are between foreign systems, where one is not in your control to change, or is too high risk (such as banking) and prohibitively difficult to test beyond the most incremental changes. So at some point you need to decide what error rate is acceptable in any tool-assisted rewrite, where error rate is measured in death count and lawsuit liability.


I've used that kind of prompt successfully using codex with o4-mini.


Doesn't really work for me at the moment with 3.7 Sonnet.

If I point out redundancies it will fix them but just giving it an open-ended kind of directive hasn't been very successful in my experience.


Any perf improvement is great but the way they promote it seems a bit much?

- 1.7% faster navigation times
- 2% faster startup times
- 5% to 7% improvement in web page responsiveness

I'd say in practice a 2% faster startup time is probably barely noticeable?


It's not noticeable at all.

Also, you would barely see the difference in the chart if they actually used a zero axis.

Here is a better (more honest) chart:

    Edge 132  |  28.8 #############################
    Edge 133  |  29.6 ##############################
    Edge 134  |  32.7 #################################


Almost enough to counteract the additional adware bloat from an average monthly Windows update


If you have a job offer for 50k and a university degree, you can get an EU "Blue Card". I went through the process and it's pretty easy; I don't think this visa is a big issue for tech workers.


There is also unintentional randomness due to the parallelism in inference (e.g. parallel matmuls added together on the GPU). Since it's multiplying floats, every operation has rounding error that accumulates differently depending on the order of operations. So even at temperature 0 you're not getting deterministic outputs.
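A minimal Python sketch of the underlying effect: float addition is not associative, so the same values summed in a different grouping (as happens when a GPU splits a reduction across parallel workers) can give bit-different results.

```python
# Float addition is not associative: the same three values
# summed with different grouping give different results.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)  # False
```

Scaled up to millions of accumulated products per matmul, these last-bit differences compound, which is why runs at temperature 0 can still diverge.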


Because addition and multiplication are not associative with floats?


Firebase has a free built-in auth API; it's weird that they didn't just use it. Idk if whoever built this would have built a more secure solution with a server layer, or just a public Mongo instance instead.


no jokes allowed on HN, sorry


I'll be quiet and good


I've done a few impulse purchases from FB / Instagram ads (a razor, two ebooks, signed up to a subscription service). I'm very happy with these purchases.

Are you saying advertising on FB/IG doesn't work in general?


I was thinking the same, but then browsers have caches, and most CDNs (e.g. jsDelivr) set max-age to a year and immutable, so you're only tracked once.


For privacy reasons, caches are partitioned per site these days. So even if you visit one page using a CDN and the resource is cached, when you visit a different website using the same CDN it will be downloaded again.
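A toy model of how partitioned caching behaves, assuming (as modern browsers do) that the cache key includes the top-level site; the site and URL names are made up:

```python
# Toy partitioned HTTP cache: entries are keyed by
# (top-level site, resource URL), so the same CDN file
# cached under site A is a miss when site B requests it.
cache = {}

def fetch(top_level_site, url):
    key = (top_level_site, url)
    if key in cache:
        return "cache hit"
    cache[key] = "response bytes"
    return "network fetch"

print(fetch("news.example", "https://cdn.example/lib.js"))  # network fetch
print(fetch("news.example", "https://cdn.example/lib.js"))  # cache hit
print(fetch("blog.example", "https://cdn.example/lib.js"))  # network fetch again
```

This is why a shared CDN no longer gives cross-site cache wins: each top-level site gets its own copy.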


Ironically, this degrades privacy, as the CDN can track you everywhere.


> My browser caches downloaded CDN libraries, doesn't that protect my privacy?

> Sadly, no. Even if the file in question is stored inside of your cache, your browser might still contact the referenced Content Delivery Network to check if the resource has been modified.

https://git.synz.io/Synzvato/decentraleyes/-/wikis/Frequentl...

