Hacker News | new | past | comments | ask | show | jobs | submit | prasoon2211's comments

Agree on all points, especially half of my friend circle being unemployed at one point.

Folks who had amazing jobs are having to spend >1y trying to find a "meh" one. It's pretty much impossible to find a job without fluent German.


My dev friends could find English-speaking jobs but everyone else is struggling. Some are moving away, compromising, or nervously watching the end date of their unemployment insurance. Salaries have actually gone down since last year, if I'm not mistaken.


Most people actually using Python do not start off writing scripts. Usually, I would mess around in IPython / Jupyter for a couple days until I have something I'm happy with. Then I'll "productionize" the project.

tbh this has been a sticking point for me too with uv (though I use it for everything now). I just want to start up a repl with a bunch of packages installed so I can try things out. My solution now is to have a ~/tmp dir where I can mess around with all kinds of stuff (not just Python), and there I have a uv virtualenv with all kinds of packages pre-installed.


> Usually, I would mess around in IPython / Jupyter for a couple days until I have something I'm happy with. Then I'll "productionize" the project.

Right, it's this. I get the feeling a lot of people here don't work that way though. I mean I can understand why in a sense, because if you're doing something for your job where your boss says "the project is X" then it's natural to start with a project structure for X. But when I'm going "I wonder if this will work..." then I want to start with the code itself and only "productionize" it later if it turns out to work.


> tbh this has been a sticking point for me too with uv (though I use it for everything now). I just want to start up a repl with a bunch of packages installed so I can try things out.

I hope the people behind uv, or someone else, address this: a repl/notebook thing that runs in a .venv pre-installed with stuff defined in some config file.


> A repl/notebook thing that is running on a .venv preinstalled with stuff defined in some config file.

So, create a project as a playground, put what you want it to include (including something like Jupyter if you want notebooks) in the pyproject.toml and... use it for that?

What do you want a tool to do for that style of exploration that uv doesn't already do? If you want to extract stuff from that into a new, regular project, that could maybe use some new tooling, sure.

Do you need a prepackaged set of things to define the right “bunch of stuff” for the starting point? Because that will vary a lot by what your area of exploration is.
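Concretely, a playground project's pyproject.toml could be as simple as the following (the package names are just examples of a "bunch of stuff" you might pre-install; pick your own):

```toml
[project]
name = "playground"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "ipython",
    "jupyter",
    "numpy",
    "pandas",
]
```

Then `uv run ipython` (or `uv run jupyter lab`) inside that directory drops you into a repl with everything available.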


    uv run --with=numpy,pandas python


This is partially why, at least for LLM-assisted coding workloads, orgs are going with the $200 / mo Claude Code plans and similar.


Until the rug inevitably gets pulled on those as well. It's not in your interest to buy a $200/mo subscription unless you use >$200 of tokens per month, and long term it's not in their interest to sell you >$200 of tokens for a flat $200.


> It's not in your interest to buy a $200/mo subscription unless you use >$200 of tokens per month

This is only true if you can find someone else selling them at cost.

If a company has a product that cost them $150, but they would ordinarily sell piecemeal for a total of $250, getting a stable recurring purchase at $200 might be worthwhile to them while still being a good deal for the customer.


The pricing model works as long as people (on average) think they need >$200 worth of tokens per month but actually use somewhat less, like $170/month. Is that happening? No idea.


Maybe that is what Anthropic is banking on. From what I gather, they obscure Max accounts' actual token spend, so it's hard for subscribers to tell whether they're getting their money's worth.

https://github.com/anthropics/claude-code/issues/1109


Well, the $200/mo plan model works as long as the $100/mo plan is insufficient for some people, which in turn works as long as the $17/mo plan is insufficient for some people.

I don't see how it matters to you that you aren't saturating your $200 plan. You have it because you hit the limits of the $100/mo plan.


I don't know about people using CC on a regular basis, but according to `ccusage`, I can trivially go over $20 of API credits in a few days of hobby use. I'd presume if you are paying for a $200 plan then you know you have heavy usage and can easily exceed that.


It's probably easier (and hence, cheaper) to finance the AI infrastructure investments if you have a lot of recurring subscriptions.

There is probably a lot of value in predictability, meaning it might be viable for a $200/mo plan to offer more than $200 worth of tokens.


meanwhile me hiding from accounting for spending $500 on cursor max mode in a day


Did you actually get 500 bucks worth of work out of it?


How should they know? It's not like they're checking what it does.


No way to measure it directly, but it did write 4kLOC of mostly working angular... whether non-max would manage the same feat in the same time is an open question.


It depends on the salary, right? If you're in Silicon Valley paying 500k TC it probably makes sense to let your employees go wild and use as much token spend as they like.


Presumably the tab-based edit-prediction model + $5 of tokens is worth the (new) $10 / mo price.

Though from everything I've read online, Zed's edit prediction model is far, _far_ behind that of Cursor.


RLHF is not the "RL" the parent is posting about. RLHF is specifically human-driven reward (subjective, doesn't scale, doesn't improve the model's "intelligence", just tweaks behavior) - which is why the labs have started calling it post-training rather than RLHF.

True RL is where you set up an environment in which an agent can "discover" solutions to problems by iterating against some kind of verifiable reward, AND the entire space of outcomes is theoretically largely explorable by the agent. Math and coding have proven amenable to this type of RL so far.
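To make "verifiable reward" concrete, here's a toy sketch (not any lab's actual setup): the reward is computed mechanically by checking the agent's output against a known answer, so no human judge is needed and the loop can run millions of times.

```python
def verifiable_reward(candidate: str, expected: int) -> float:
    """Reward computed mechanically by checking the output -- no human judge."""
    try:
        return 1.0 if int(candidate) == expected else 0.0
    except ValueError:
        return 0.0  # malformed output earns nothing

# Toy "agent" exhaustively exploring a small answer space for 2 + 3.
# A real RL setup would sample from a policy and update it on the reward,
# but the key property is the same: the reward signal is checkable code.
best, best_r = None, -1.0
for guess in map(str, range(11)):
    r = verifiable_reward(guess, 2 + 3)
    if r > best_r:
        best, best_r = guess, r

print(best, best_r)  # the correct answer "5" is the only guess rewarded 1.0
```

Unit tests for generated code play the same role as `expected` here: a reward the agent can't argue with.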


Right, it took me a couple of re-reads, and I (a non-native speaker) ended up asking ChatGPT about it; yes, the sentence is worded incorrectly.


As others have said, this is a classic selection effect. Lina Khan isn't coming out and telling people about:

- the companies that died because acquiring them was too much of a hassle

- the companies that died or never got funded / started because the investors couldn't see an exit path

- the companies that got acquired piecemeal (Windsurf, Inflection), leaving the early employees with NOTHING, simply to avoid the ire of anti-trust hawks at the FTC. This has irreversibly damaged the SV bargain: early startup employees work hard so that, in the case of an acquisition, they get rich.

So Lina Khan can keep patting herself on the back, but there's a reason founders, early-stage startup employees, and investors disagree.


> the companies that died because acquiring them was too much of a hassle

So it was a shitty non-viable business

> the companies that died or never got funded / started because the investors couldn't see and exit path

More shitty non-viable businesses. Creating a company only to sell it and screw over your customers is evil.

Maybe those founders should try to make real businesses, instead of playing glorified roulette.

> the companies that got acquired piecemeal (Windsurf, Inflection), leaving the early employees with NOTHING simply to avoid the ire of anti-trust hawks at the FTC

Plain old loophole exploitation. A simple anti-competitive stealth acqui-hire that isn't called one. My conclusion is that the market needs more regulation.

Does your ideal market only contain 5 or so mega-corps that control every aspect of our lives?


I worked at a company where we copied this from Amazon for a specific type of meeting (bi-weekly review). But we also had the other "normal" type of meeting.

People never read the documents before the meeting in those "normal" meetings.

The challenge with your suggestion is that people will half-ass the doc reading before the meeting - we tried doing this for the "normal" meetings, and it was obvious that people had only skimmed the doc. You're also now relying on the manager (if there even IS one for everyone in the meeting!) to care about this.

So, in practice, giving people dedicated 10 minutes at the start of the meeting works far better.

Besides, in most "normal" meetings, the main presenter often ends up discussing background / context for 10 minutes interspersed throughout the meeting anyway. In the "pre-read" meetings, you're just compressing that into the first 10 minutes while increasing the amount of information transferred.


This is really cool! Just started using it today. It's missing some of superwhisper's ease of use but other than that, 10/10


I tried using it for something non-trivial. And:

> 429: Too many requests

Mind you, this is with a paid API key.


Been working on this, and it should now be resolved: https://github.com/google-gemini/gemini-cli/discussions/2064

