Difwif's comments | Hacker News

This just seems like an engineered pipeline of existing GenAI to get a 3d procedurally generated world that doesn't even look SOTA. I'm really sorry to dunk on this for those that worked on it, but this doesn't look like progress to me. The current approach looks like a dead end.

An end-to-end _trained_ model that spits out a textured mesh of the same result would have been an innovation. The fact that they didn't do that suggests they're missing something fundamental for world model training.

The best thing I can say is that maybe they can use this to bootstrap a dataset for a future model.


The people who worked on it did what they could to satisfy the demands of their higher-ups, who frequently are out of touch with the technical landscape.

Being kind to them and understanding the environment they work in won’t improve their lives, but it will expand our understanding of the capability of particular large companies to innovate.


What’s SOTA in this area right now?


[removed]


My roommate was born disabled.

He relies on SNAP and SSI disability.

These extra steps can cause him weeks of stress, physical and mental. These extra steps cost him money he does not have. The stress can set him back physically for weeks.

Reapplying, waiting on hold for half a day, going down to offices, etc are not easy for some folks. People fall through the cracks and die.

This is called forced attrition. It's pretty common in the business world when companies don't want to fire people. Make it too difficult to bother, so folks stop bothering. Unfortunately this is a literal lifeline for millions of people, so it's more like make it too difficult to bother, so folks start dying.


It doesn't pass the sniff test. If they "know" 186,000 people are deceased who are receiving benefits, then they can simply stop disbursements to those accounts. It doesn't require any action from those who are alive.


> If someone doesn't reapply for food stamps then they weren't that critical for their survival.

For a good number, it might be that they don't successfully reapply because they're living on a knife's edge, without the slack to jump through yet another hoop.

The experience here in Australia is that raising welfare barriers hurts those who need welfare the most; the actual fraudsters have the resources to beat the system.


> somehow incapable of doing basic things for something they care about

Even my ADHD has often made me incapable of doing basic things for stuff I cared about. I can't imagine the struggle for people with more severe conditions. Same goes for you, apparently.


Maybe go try to meet some truly poor people and understand their story. It might provide you enough context for this discussion.


You go through the process of actually calling, get sent down a 4-5 week rabbit hole, and then people wonder why fewer people make it through a funnel that has more holes than a grater.

Remember the whole "waste, fraud and abuse" stuff at the beginning of the year? Yeah, there's a lot of waste in how inefficient it is to sign up for this government stuff.


> Hate this argument so much. You lose people in your sales funnel because they didn't actually care all that much about the product to justify the extra effort.

On more than one occasion I've been the primary decision maker for a technology choice that was going to be worth tens of thousands of dollars or more per year.

For reasons that aren't relevant here, I didn't have a ton of time to do the evaluation... extreme prejudice was exercised against anything that didn't have a 'download now and get started' button.

Even if I wanted to jump on a sales call, I didn't have 2 and 1/2 days to wait for you to get back to me.

Maybe a sales funnel is the right tool for certain industries but when your primary user is technical, don't make them jump on a phone call. Get out of their way and make sure the documentation is good. If they like what they see and they have questions, they will chase you down. That is when you should do the pitch call...


A valid rationalization but never an excuse. At some point the buck has to stop being passed around. Standing up to all instances of violence is the only way to stop the endless cycles.


Pixi has also been such a breath of fresh air for me. I think it's as big of a deal as UV (it uses UV under the hood for the pure Python parts).

It's still very immature but if you have a mixture of languages (C, C++, Python, Rust, etc.) I highly recommend checking it out.


I used to be in this camp until I tried and bought an M1 MacBook as my daily driver. I thought I was going to be a Thinkpad/XPS-with-Linux person until I die. I don't love macOS, but POSIX is mostly good enough for me, and the hardware is so good that I'm willing to look past the shortfalls.

Seriously, I would love to switch back to a full-time Linux distro, but I'm more interested in getting work done and having a stable, performant platform. Losing a day of productivity fixing drivers and patching kernels gets old. The M-series laptops have been the perfect balance for me so far.


>Losing a day of productivity fixing drivers and patching kernels gets old.

You are talking like it was 1997.

The typical Linux user doesn't have to do that. Only those who buy unsupported devices on purpose, for the challenge of making them work.


That's just not true. Every coworker I know who uses Linux[1] has occasional issues with webcams, mics, Slack notifications, whatever. It's all fixable, and this kind of inconvenience can be worth it when balanced against the perceived advantages, but saying driver issues are a thing of the past is just a lie.

[1]: I’ve seen these issues on Dell (XPS 13), Thinkpads, and HP laptops


That's funny, because you sent this comment a few hours after I struggled at work with a webcam constantly freezing on Windows/Teams.

A webcam that has always worked flawlessly on Fedora on my other laptops.

Also, Teams was much more reliable for the 5-6 years or so I used it with ungoogled-chromium on Linux than it has been for the last 6 months using the official app on Windows. I have had to kill it an awful number of times after struggling with an unrecognized audio device, freezing video, or even everything freezing except the sound.


I've been using Linux for 25 years, and I think it's been nearly that long since I had kernel issues that required patching the kernel (if ever). Maybe back in the 2.5 days?

The only drivers that I've had memorable issues with over the years are printer drivers, but those have nothing to do with the kernel. And printers are pretty cursed on every platform.


Every coworker I know who uses Windows has occasional issues with webcams, mics, Slack notifications, whatever.


Well, you should tell that to Dell, because I have coworkers with a range of their models who are constantly fighting with webcams, audio, Bluetooth, Wi-Fi, and Nvidia driver updates.


If they're new models, the webcam issue is not Dell specific, but an Intel / ipu6 thing. It should be integrated into most systems by now though, even as an out of tree module. The rest should just work, especially on xps machines. Without specifying the models/issues, it's hard to take it as more than an anecdote.


I am surprised. My former employer gave me a Dell and the experience was quite smooth on Fedora.


They have a line they sell with Linux preinstalled. Those always work fine. It takes some work to figure out which old ones on eBay were in that situation.


I'm really not sure why you have to lie to make your point. Just to be clear, you never tried a modern laptop with Linux, because you certainly don't have to patch kernels or deal with drivers anymore. The only time you have to deal with drivers is if you want to game on Linux, and even then most of that is covered by modern distros.


This was me too. It just works and it's nice to use. Sometimes life's too short to be hacking around all day.


I'm not really sure what you mean? I've been in fast and crazy startups for years now, always with a ton of work to do. I never have issues with Linux; the CachyOS and Fedora spins I run just keep on chugging day to day.

Using a workstation and an AMD Thinkpad.


Is this available to use now in Codex? Should I see a new /model?


Yes, but I had to update the Codex CLI manually via NPM to see it. The VS Code extension auto-updated for me.


(2) Seems like a media narrative rather than truth. I don't think that would be anywhere remotely high on a CEO's priority list unless they were a commercial real estate company.

It's far more likely a mixture of (1) and actual results: in-person/hybrid teams produce better outcomes (even if why that's true hasn't been deeply evaluated, or ultimately falls on management).


It would be interesting to see two versions of a model. A primary model tuned for precision that's focused on correctness that works with or orchestrates a creative model that's tuned for generating new (and potentially incorrect) ideas. The primary model is responsible for evaluating and reasoning about the ideas/hallucinations. Feels like a left/right brain architecture (even though that's an antiquated model of human brain hemispheres).
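To make the idea concrete, here's a rough sketch of that split (purely illustrative: the model objects, their generate() method, and its temperature argument are made-up stand-ins, not any real API):

    import re

    def extract_score(text):
        # Crude, hypothetical parsing: pull the first integer out of the evaluator's reply.
        match = re.search(r"\d+", text)
        return int(match.group()) if match else 0

    def solve(task, creative_model, precise_model, n_ideas=5):
        # Divergent pass: the creative model samples loosely constrained ideas;
        # hallucinations are acceptable, even desirable, at this stage.
        ideas = [creative_model.generate(task, temperature=1.2) for _ in range(n_ideas)]

        # Convergent pass: the precision-tuned model critiques each idea against the task.
        scored = []
        for idea in ideas:
            verdict = precise_model.generate(
                f"Task: {task}\nProposal: {idea}\n"
                "Is this correct and feasible? Reply with a score from 0 to 10 and a reason."
            )
            scored.append((extract_score(verdict), idea))

        # Keep the idea the evaluator rated highest and let the primary model
        # turn it into the final answer.
        best_idea = max(scored, key=lambda pair: pair[0])[1]
        return precise_model.generate(f"Task: {task}\nRefine this proposal into a final answer: {best_idea}")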


I took a quick informal poll of my coworkers and the majority of us have found workflows where CC is producing 70-99% of the code on average in PRs. We're getting more done faster. Most of these people tend to be anywhere from 5-12 yrs professional experience. There are some concerns that maybe more bugs are slipping through (but also there's more code being produced).

We agree most problems stem from neglecting a few basics:

1. Don't get lazy and auto-accept edits. Always review changes and make sure you understand everything.
2. Write a clear specification document before starting complex work items.
3. Break tasks down into manageable chunks of scope.
4. Keep the code architecture clean and digestible. If it's hard for a human to understand (e.g. poor separation of concerns), it will be hard for the LLM too.

But yeah I would never waste my time making that video. Having too much fun turning ideas into products to care about proving a point.


> Having too much fun turning ideas into products to care about proving a point.

This is a strange response to me. Perhaps you and others aren’t aware that there’s a subculture of folks who livestream coding in general? Nothing to do with proving a point.

My interest in finding such examples is exactly due to the posting of comments like yours - strong claims of AI success - that don’t reflect my experience. I want to see videos that show what I’m doing wrong, and why that gives very different results.

I don’t have an agenda or point to prove, I just want to understand. That is the hacker way!


2, 3, and 4 are all things human coders need to be efficient too :)

I'm kinda hoping that this LLM craze will force people to be better at it. Having documentation up to date and easily accessible is good for everyone.

Like how we're (over here) better at marking lines on the road, because the EU-mandated lane keeping assist needs the road markings to be there or it won't work.


My parents could have said your first paragraph when I tried to teach them they could Google their questions and find answers.

Technology moves forward and productivity improves for those that move with it.


A few examples of technology that moved 'forward' but decreased productivity for those who moved with it from my 'lived' experience:

1) CASE tools (and UML driven development)

2) Wizard driven code.

3) Distributed objects

4) Microservices

These all really were the hot thing, with massive pressure to adopt them, just like now. The Microsoft demos of Access wizards generating a complete solution for your business had that same wow feeling as LLM code. That's not to say that LLM code won't succeed, but it is to say that this statement is definitely false:

> Technology moves forward and productivity improves for those that move with it.


> Technology moves forward and productivity improves for those that move with it.

It does not; technology regresses just as often, and linear, deterministic progress is a myth to begin with. There is no guarantee that technology will move forward and always make things better.

There are plenty of examples to be made where technology has made certain things worse.


I would say it as "technology tends to concentrate power to those who wield it."

That's not all it does but I think it's one of the more important fundamentals.


Why is productivity so important? When do regular people get to benefit from all this "progress?"


Being permitted to eat - is that not a great benefit?


"But with Google is easier!" When you were trying to teach your folks about Google, were you taking into consideration dependence, enshittification, or the surveillance economy? No, you were retelling them the marketing.

Just by having lived longer, they might've had the chance to develop some intuition about the true cost of disruption, and about how whatever Google's doing is not a free lunch. Of course, neither they, nor you (nor I, for that matter), had been taught the conceptual tools to analyze the workings of whichever Ivy League whiz kids have been assigned to be "eating the world" this generation.

Instead we've been incentivized to teach ourselves how to be motivated by post-hoc rationalizations. And ones we have to produce at our own expense too. Yummy.

Didn't Saint Google end up enshittifying people's very idea of how much "all of the world's knowledge" is, gatekeeping it in terms of breadth, depth and availability to however much of it makes AdSense? Which is already a whole lot of new useful stuff at your fingertips, sure. But when they said "organizing all of the world's knowledge", were they making any claims to the representativeness of the selection? No, they made the sure bet that it's not something the user would measure.

In fact, with this overwhelming amount of convincing non-experientially-backed knowledge being made available to everyone - not to mention the whole mass surveillance thing lol (smile, their AI will remember you forever) - what happens first and foremost is the individual becomes eminently marketable-to, way more deeply than over Teletext. Thinking they're able to independently make sense of all the available information, but instead falling prey to the most appealing narrative, not unlike a day trader getting a haircut on market day. And then one has to deal with even more people whose life is something someone sold to them, a race to the bottom in the commoditized activity (in the case of AI: language-based meaning-making).

But you didn't warn your parents about any of that or sit down and have a conversation about where it means things are headed. (For that matter, neither did they, even though presumably they've had their lives altered by the technological revolutions of their own day.) Instead, here you find yourself stepping in for that conversation to not happen among the public, either! "B-but it's obvious! G-get with it or get left behind!" So kind of you to advise me. Thankfully it's just what someone's paid for you to think. And that someone probably felt very productive paying big money for making people think the correct things, too, but opinions don't actually produce things do they? Even the ones that don't cost money to hold.

So if it's not about the productivity but about obtaining the money to live, why not go extract that value from where it is, instead of breathing its informational exhaust? Oh, just because, figuratively speaking, it's always the banks that have AIs that don't balk at "how to rob the bank"; and it's always we that don't. Figures, no? But they don't let you in the vault for being part of the firewall.

