Hacker News | smartbit's comments

Walk Through Walls: A Memoir https://www.penguin.co.uk/books/288046/walk-through-walls-by... also covers her relationship with Ulay.

Easy read, personal & entertaining. Highly recommended.


Also if you’re interested in this type of art, look up Tehching Hsieh, who was an inspiration to Abramović. He did some really cool stuff and only got recognition in the last 15 years or so.

Great research and write-up, maybe a bit too elaborate.

It will be interesting to see whether a public outcry happens once these boxes start arriving at those who funded the Kickstarter.


It’s LLM slop and very shallow, in my opinion.

What's shallow about the research? It all seems to check out.

Thank you.

Every time I complain about this kind of useless AI slop I get downvoted to hell and get dozens of comments saying "it doesn't look AI at all", so I don't even bother anymore. It's incredibly sad, I expected much more from this community... But it looks like it'll soon be dead like the rest of the internet.


The blogpost?

ctrl-f for "This isn't" and note how many instances of this pattern there are:

> This isn't X. It's Y.


I don't see a single occurrence in the article of the word "isn't".

> That means the lock-in isn’t just product strategy. It’s also architecture.

> And that omission isn’t some harmless simplification. It’s the entire trick.

It isn't just once. It's—twice. ;)


Also stuff like this:

>That’s not exotic. That’s just model parallelism with extra suffering.

>That’s not product magic. That’s a checkbox.

What really triggers my internal AI slop detector is this:

>Their renders. Their prototype shots. Their exploded views. Their spec sheet.

>Nobody asked what silicon was inside. Nobody asked how 120B on LPDDR5X was supposed to work. Nobody spent

>No cloud. No GPU. No subscriptions.

>wrong class of chip, wrong power envelope, wrong everything

>The visual geometry matches. The licensing model matches. The China-based semiconductor ecosystem match

>Real researchers. Real papers. Real contributions.

LLMs love to overuse this pattern.
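For what it's worth, the contrast pattern quoted above is mechanically searchable. Here is a toy counter (just a sketch; the regex is illustrative, catches only the simplest form, and is not a real slop detector):

```python
import re

# Matches the simplest form of the contrast pattern quoted above:
# "That's not X. That's Y." / "It isn't X. It's Y."
# Handles both straight (') and curly (’) apostrophes.
CONTRAST = re.compile(
    r"\b(?:This|That|It)(?:['’]s not| isn['’]t| is not)"  # the negation
    r"[^.?!]*[.?!]\s*"                                    # "... X."
    r"(?:It['’]s|That['’]s|This is)\b",                   # "It's Y"
    re.IGNORECASE,
)

def count_contrast_pattern(text: str) -> int:
    """Count occurrences of the not-X-but-Y contrast pattern."""
    return len(CONTRAST.findall(text))
```

Running it over the article and over a baseline of human-written posts would make the "overuse" claim quantitative.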


This also smells of an autoregressive model trying to make a point that TiinyAI simply forked another repo and claimed it as their own invention, before realizing mid-paragraph that it's by the same people:

>So no, TiinyAI did not “launch” PowerInfer. SJTU researchers did.

>TiinyAI’s GitHub repo is a fork of the original PowerInfer repository. At least one of the original academic authors appears tied to the code history. So there is clearly some real overlap between the research world and the product world.


Oof, thanks! (I'm going to blame it on my Android Chrome "find in page" tool not working as expected, and I apologize)

IMHO the Dutch are more direct for the same reason they are less sensitive to authority and approach their superiors as equals.

The Netherlands effectively being a river delta, there was always the threat of water, a force greater than anyone. IOW, if a flood comes, both the king and the peasant start digging.

This is completely different from the neighboring UK and Germany, which both traditionally had a strong sense of hierarchy and of not contradicting the master.


> IOW if a flood comes, both the king and the peasant start digging.

By the same reasoning, India, Bangladesh and China — all ancient civilizations threatened by great rivers — should have developed similar egalitarian cultures but the reality is the polar opposite.

Maybe something as complex as human civilizations can't be the result of just one geographical feature.


India and China are huge countries with a very small percentage of river deltas. Not comparable by any means. Bangladesh is a very young country that inherited its culture from India. I’m not an anthropologist, but sorry, I don’t agree there is any likeness with these countries.

If you have another theory about Dutch culture and why it is so different from its neighbors’, I’m happy to hear it.


No, you’re not the only person. I use it to read news, blogs & hckrnews, probably more than 2h per day. Often in an IKEA bamboo Bergenes stand, of which I have several lying around the house, upside down with a USB-C cord charging it to 80%.


The person https://www.linkedin.com/in/vvoss/ seems to exist; I even have a mutual LinkedIn connection. What makes you think the “person” behind it doesn't exist?


I can't log in to LinkedIn right now, but here are a few things:

- the profile picture is almost certainly (like 99% certainty) AI-generated (I can even tell you it's ChatGPT-generated; the style is way too characteristic to miss).

- the LinkedIn profile shows prolific activity for the past few days, but almost nothing before that; I'm not sure the profile existed before.

- the GitHub account is just 2 weeks old.

Having a mutual connection doesn't mean much; the interesting questions are who the mutual is and how long they have been a connection. It's not hard to get to 500 LinkedIn connections in a few days: you just need to add headhunters and other hiring specialists, who will never refuse an invitation from a profile that looks interesting. They could also have added someone who interacted with their LinkedIn slop submission, making that person more likely to accept the invitation.


Note: AI features must be enabled in the server configuration:

  LLM_ENABLED = True

in config.py for these preferences to be available.
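Presumably the server only exposes the panel when that flag is read from config.py; here is a minimal sketch of such a gate (everything except the `LLM_ENABLED` name is hypothetical and made up for illustration):

```python
# Hypothetical sketch: gate the AI-preferences panel on the
# LLM_ENABLED flag from config.py. Function and panel names
# are invented for illustration.
LLM_ENABLED = True  # normally imported from config.py

def available_preference_panels(llm_enabled: bool = LLM_ENABLED) -> list:
    """Return the preference panels the UI should render."""
    panels = ["general", "appearance"]
    if llm_enabled:
        panels.append("ai")  # only shown when the flag is set
    return panels
```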


I did not enable this and yet I got the panel in the UI.


This Time Is Different: Eight Centuries of Financial Folly (2011) https://press.princeton.edu/books/paperback/9780691152646/th...


Guardian article https://archive.ph/lDwTA


Qwen version 3.5 might be the last serious version (for some time at least), see Something is afoot in the land of Qwen (2 days ago) https://news.ycombinator.com/item?id=47249343

Also, interesting experiences are shared in that thread, including someone running it on a rented H200.


Not necessarily, Alibaba is still working on it and the CEO is directly co-leading the team. Translated with Qwen 3.5:

> To all colleagues in the Tongyi Lab:

> The company has approved Lin Junyang’s resignation and thanks him for his contributions during his tenure. Jingren will continue to lead the Tongyi Lab in advancing future work. At the same time, the company will establish a Foundation Model Support Group, jointly coordinated by myself, Jingren, and Fan Yu, to mobilize group resources in support of foundation model development.

> Technological progress demands constant advancement — stagnation means regression. Developing foundational large models is our key strategic direction toward the future. While continuing to uphold our open-source model strategy, we will further increase R&D investment in artificial intelligence, intensify efforts to attract top talent, and move forward together with renewed commitment.

> Wu Yongming

https://x.com/poezhao0605/status/2029396117239276013


Thanks for the tip, didn’t think of using 2 subscriptions at the same company.

When reaching the limit, I switch to GLM 4.7 as part of a GLM Coding Lite subscription offered at the end of 2025 for $28/year. I also use it for compaction and the like to save tokens.

