trcf23's comments | Hacker News

Yet most people seem to agree that AI favours skilled devs more than unskilled ones...

Look at the marketing. It's focused on unskilled people.

Nice! Do you think it would be easy for someone with no hardware experience to build one?

Yes, I think so. Electronics prototyping is so accessible now, and there's such a deluge of inspirational projects out there to learn from. YouTube is a gold mine, and I'll leave links to a few channels I follow below.

If you get an Arduino or ESP32 microcontroller (maybe in one of those starter kits with various sensors), some breadboards, assorted jumper cables, and a kit of electronic components (resistors, caps), you'll be good to go. A device like a wall clock most likely won't require soldering, since it won't be jostled or moved around much.

Ben Eater: https://www.youtube.com/@BenEater/videos

Paul McWhorter: https://www.youtube.com/@paulmcwhorter/videos

Huy Vector: https://www.youtube.com/@huyvector/videos

I'd also take a look at the other DIY projects that people have linked in this discussion.


If you already have Home Assistant running, I think it should be simple. Most of the time you can buy devices with pins already soldered, and it's just a matter of connecting them together. AIs are pretty good with ESPHome configs. You can even take a picture so that they can help you identify the correct pins. Some coding may be required for drawing things on the display, though.
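To give a feel for how little glue that takes: a sketch of an ESPHome wall-clock config pulling time from Home Assistant (the board, I2C pins, display model, and font here are assumptions for illustration, not from any particular build) might look roughly like:

```yaml
esphome:
  name: wall-clock

esp32:
  board: esp32dev  # assumed generic dev board

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

api:  # enables the Home Assistant connection

time:
  - platform: homeassistant  # clock synced from Home Assistant
    id: ha_time

i2c:
  sda: GPIO21  # common ESP32 I2C pins; check your board
  scl: GPIO22

font:
  - file: "gfonts://Roboto"
    id: clock_font
    size: 24

display:
  - platform: ssd1306_i2c  # assumed 128x64 OLED module
    model: "SSD1306 128x64"
    lambda: |-
      it.strftime(0, 0, id(clock_font), "%H:%M", id(ha_time).now());
```

The whole thing is declarative; the only "code" is the one-line display lambda, which matches the comment's point that drawing on the display is where any real coding shows up.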

Has anyone found a useful way to do something with Claws without massive security risk?

As an n8n user, I still don't understand the business value it adds beyond being exciting...

Any resources or blog posts to share on that?


> Has anyone found a useful way to do something with Claws without massive security risk?

Not really, no. I guess the amount of integrations is what people are raving about or something?

I think one of the first things I did when I got access to Codex was to write a harness that lets me fire off jobs via a web UI on a remote server, and made it possible for Codex to edit and restart its own process and send notifications via Telegram. It was a fun experiment, and I still use it from time to time, but it's not a working environment, just a fun prototype.

I gave OpenClaw a try some days ago, and aside from the setup writing config files with syntax errors, it couldn't run in a local container, and the terminology is really confusing ("lan-only mode" really means "bind to all found interfaces" for some stupid reason). The only "benefit" I could see would be the big number of integrations it comes with by default.

But it seems like such a vibeslopped approach, with errors and nonsense all over the UI and implementation, that I don't think it'll be manageable even in the short term; it seems to have already fallen over its own spaghetti architecture. I'm kind of shocked OpenAI hired the person behind it, but they probably see something we on the outside cannot, as they surely weren't hired because of how OpenClaw was implemented.


Well, for the OpenAI part, there was another HN thread on it where several people pointed out it was a marketing move more than a technical one.

If Anthropic is able to spend millions on TV commercials to attract laypeople, OpenAI can certainly do the same to gain traction with dev/hacky folks, I guess.

One thing I've done so far (not with Claws) is to create several n8n workflows: reading an email, creating a draft + label, connecting to my backend or CRM, etc., which let me control all that from Claude or Claude Code if needed.

It's been a nice productivity boost, but I do accept/review all changes beforehand. I guess the reviewing is what makes it different from OpenClaw.


Once the models get smart enough, you won't need n8n; they will just do the workflow without it needing to be specified. This is coming pretty soon.

Probably, but with n8n you can keep a trace of execution, no?

They’re raising tens and hundreds of billions.

If you and others want that feature, and they think that’ll keep you using and paying, they’ll build it.


The question is: what type of Mac Mini? If you go for something with 64GB and 16+ cores, it's probably more than most laptops, so you can run much bigger models without impacting your work laptop.

64GB Mac Mini is easily in the $2000 territory. At that point you might as well just buy a DGX Spark and get proper CUDA/Linux support.

If the idea is to have a few Claws instances running non-stop and scraping every bit of the web, emails, etc., it would probably cost quite a lot of money.

But it still feels safer to not have OpenAI access all my emails directly, no?


Very nice, thanks! It’s great to be able to play with viz!

For a deeper tutorial, I highly recommend the PyTorch for Deep Learning Professional Certificate on deeplearning.ai — probably one of the best MOOCs I’ve seen so far.

https://www.deeplearning.ai/courses/pytorch-for-deep-learnin...


Good introduction! Building a PyTorch-lite using Python and NumPy is the way to go.

Free book: https://zekcrates.quarto.pub/deep-learning-library/

ML by hand: https://github.com/workofart/ml-by-hand

Micrograd: https://github.com/karpathy/micrograd
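For anyone curious what these "build it yourself" projects boil down to, here is a minimal sketch of a micrograd-style scalar autodiff engine (my own illustrative version, not code from any of the repos above): each operation records how to push gradients back to its inputs, and `backward()` replays those rules in reverse topological order.

```python
import math

class Value:
    """A scalar that tracks its gradient through the computation graph."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # set by the op that produced this node
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,))
        def _backward():
            self.grad += (1 - t * t) * out.grad  # d tanh(x)/dx = 1 - tanh(x)^2
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# One neuron: y = tanh(w*x + b)
x, w, b = Value(2.0), Value(-3.0), Value(6.5)
y = (w * x + b).tanh()
y.backward()
# x.grad now holds dy/dx = (1 - tanh(w*x + b)^2) * w
```

That ~60 lines is essentially the core of micrograd; the books/repos above layer tensors (NumPy arrays), more ops, and neural-net modules on top of the same idea.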


Wow, thanks for sharing! Could you explain how you made the specs? Did you already know pretty much everything you wanted to cover beforehand? Was one CC session enough to go through it?

In my experience, trying to make a plan/specs that really match what I want often ends in a struggle with Claude trying to regress to the mean.

Also it’s so easy to write code that I always have tons of ideas I end up implementing that diverge from the original plan…


- No, I did not know everything I wanted to cover beforehand. Claude helps me brainstorm, research, and elaborate on my ideas. The spec is a living document that I occasionally check in: https://github.com/Leftium/rift-transcription/commits/main/s...

- It was definitely not one CC session. In fact, this spec is a spin-off of several other specs on several other branches/projects.

- I've actually experienced quite the opposite: I suggest an idea for the spec and Claude says "great idea!" Then I change my mind and go in the opposite direction: "great idea!" again. Once in a while, I have to argue with Claude to get my idea implemented (like adding dependencies to parse into a proper AST instead of regex.)

- One tip: it's very useful to explain the "why" to Claude vs the "what." In fact, if you just explain the why/problem without a specific solution, Claude's suggestions may surprise you!


The what-why switch is quite useful, because it also helps you avoid Claude's "great idea!" responses as well.


They can’t update it, though. In docs it makes sense to use that as a basis and have the LLM update it when needed.


Mermaid diagrams are even better because you don't waste characters on the visual representation, but rather spend them on the relationships between the nodes. It's the difference between

    graph TD
            User -->|Enters Credentials| Frontend[React App]
            Frontend -->|POST /auth| API[NodeJS Service]
            API -->|Query| DB[(PostgreSQL)]
            API --x|Invalid| Frontend
            DB -->|User Object| API
            API -->|JWT| Frontend
and

    +-------+           +-------------+           +---------+
    |  User |           | React App   |           | NodeJS  |
    +-------+           +-------------+           +---------+
        |                      |                       |
        |  Enters Creds        |       POST /auth      |
        |--------------------->|---------------------->|
        |                      |                       |
        |      Invalid         |    <-- [X] Error -----|
        |<---------------------|                       |
        |                      |       Query DB        |
        |                      |---------------------->| [ DB ]
Plus, while an LLM can understand relationships via pure ASCII or an image, it's just easier to give it the relationship data directly.


But the point is to have something easy to read for both humans and LLMs, no?

It’s harder to read Mermaid in a terminal or a markdown file…


Mermaid diagrams automatically render in Markdown and in IDE chat windows, as in VS Code or Cursor. So you get the best of both worlds: a graph you can look at and manipulate with the mouse, but also in a format LLMs can read.


Ah thanks I didn’t know that…


Lucky you. It still crashes quite often for me, and it drives me nuts that my Claude Code history is lost every time…

But I love the project and have been using it for almost 2 years now.


What does the debug log say about the crash?


Never thought to check the debug log.

I would say it's mostly the agent panel going into a panic in longer conversations, or not being able to proceed for some reason.

Claude Code in the terminal also has rendering issues. I never have that with VS Code or iTerm, so I guess it’s Zed-related.

And sometimes with ESLint it won’t check TS errors, then starts adding false errors everywhere and becomes unusable if I don’t start over.


This too I’m curious about. If it’s memory-related, I have 36GB on the work laptop, if that helps.


I didn’t know it was a common thing but it’s definitely something I’ve experienced!

