
Anthropic emphasizes safety but their acceptance of Middle Eastern sovereign funding undermines claims of independence.

Their safety-first image doesn’t fully hold up under scrutiny.



IMO the idea that an LLM company can make a "safe" LLM is unrealistic at this time. LLMs are not well understood. Any guardrails are best-effort, so even purely technical claims of safety are suspect.

That's leaving aside your point, which concerns the overwhelming financial incentive to leverage manipulative, destructive, or unethical psychological instruments to drive adoption.


There’s a close tangle between two problems: we don’t know how to build a company that would turn down the opportunity to make every human into paperclips for a dollar, and no one knows how to build a smart AI that still prevents that outcome even if the companies would choose to avoid it, given the chance.



