Hacker News | digdugdirk's comments

There is a substantial difference between standard lobbying and greasing of the legislative wheels, and what's going on with this current administration.

Even if companies were pretending to play by the rules before, at least they had some need to put in the effort to pretend. When a society can see belligerent ostentatious corruption going on as the norm, nothing good can follow.


> Even if companies were pretending to play by the rules before, at least they had some need to put in the effort to pretend

I'm not sure that's better. I'm hopeful that all this open air corruption leads to real reform. But I'm sure I'll be disappointed.


In at least the last couple of US elections, "people" paid more than a billion dollars to each wannabe president.

That is investment, a.k.a. corruption.


Very interesting! I hope it gets upstreamed soon. There's a ton of potential for "mental overhead" simplification in the Nix ecosystem, and this seems like it could be a huge help with that.

I'd imagine at some point the rig tolerances/vibrations/newly settled dust specks from snapshot to snapshot would completely negate any benefits you'd get from that level of detail. The processing power to handle that resolution would be a huge (but potentially interesting...) problem as well.

Can you please explain a bit more about why it's a difficult photogrammetry challenge, or point me toward resources so I can learn more about it myself? This exact project is on my projects list, so I'd love to have a better grounding in the topic when I get around to diving into it.

Edit: I'm more focused on getting a dimensionally accurate/stable model than an aesthetically pleasing one, if that matters. The hope is to be able to scan a broken chair and then design a jig in CAD that I could 3D print to hold a specific piece in place while everything goes back together.


Most recent Gaussian-splatting and NeRF-to-mesh algorithms are surprisingly good at getting reasonable results for objects that traditional photogrammetry would struggle with. The main challenges are reflective and uniform surfaces (e.g. leather or coated wood). See this overview of what you'd want for perfect photogrammetry: https://openscan-org.github.io/OpenScan-Doc/photogrammetry/b... and also the challenging surfaces further down that site.

Same, which is why I asked. My naive intuition is that if you had an industrial-grade turntable, like the one in the video below, you could hack together a hardware setup.

https://www.youtube.com/watch?v=YWaJEnKSM0w


They don't get as much visibility into your data, just the actual call to/from the api. There's so much more value to them in that, since you're basically running the reinforcement learning training for them.


Looks very useful and very cool! Just a heads up - your graph loads terribly on mobile (android + Firefox), it's just a skinny strip in a container at the top of the page.


Thanks! Yeah the pyvis viewer isn't mobile-friendly — it's built for desktop browser exploration. I should add a note about that. Appreciate the heads up.


Aha! You might be just the person to ask about something I've always been curious about - are there any types of Braille mechanisms other than the "pin on a lever arm" concept? They seem so fragile and clunky, and I'm surprised nothing revolutionary has sprung out of the miniaturization of the past three decades or so.


There are some; in particular, the Orbit Reader[0] is much cheaper than a piezoelectric display. The trade-off is that it is relatively slow to refresh and quite noisy.

There is also the Dot Pad[1], which is much more like a screen, with a rectangle of cells that can show Braille and graphics! It uses a different technology: electromagnetic actuators with latching. It can only refresh when not being touched. It's also out of the price range of most consumers, but apparently the technology scales very well, so they expect the price to fall. It is also modular, so users can easily replace broken cells.

The Monarch[2] is based on Dot Pad technology and also runs Android and Humanware's Keysoft software like the BrailleNotes.

[0] https://www.dotincorp.com/en/product/dotpadx

[1] https://www.dotincorp.com/en/product/dotpadx

[2] https://www.aph.org/product/monarch/


I agree with Rob here; piezoelectric displays are expensive to build, need quite a bit of tuning, and are almost always non-repairable.

When I was working on the Tactis and researching all the mechanisms that exist, I came across electromagnetism-based mechanisms very rarely. It is an underexplored way of building Braille displays, mainly because of the actuation problem when the pins are pressed against. We are trying to come up with a solution in our V2. Hopefully we get there.


I'd love to see this paired with Pydantic for a lightweight Pydantic-based configuration "language". Similar to CUE, but using Pydantic to describe the configuration models themselves.
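A minimal sketch of what I mean, assuming Pydantic v2 is installed (the `ServerConfig` model and the raw dict are made up for illustration, not anything from the project):

```python
from pydantic import BaseModel, Field


class ServerConfig(BaseModel):
    # The schema, defaults, and validation rules all live with the model,
    # which is what would make this feel like a tiny config language.
    host: str
    port: int = Field(default=8080, ge=1, le=65535)
    debug: bool = False


# Raw data as it might come out of a TOML/JSON/YAML loader.
raw = {"host": "example.org", "port": 9000}

cfg = ServerConfig.model_validate(raw)
print(cfg.port)   # validated, out-of-range values would raise
print(cfg.debug)  # default filled in
```

You'd get type coercion, range checks, and useful error messages for free, without inventing a new syntax.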


How does this compare to the sPy project [1]?

[1] - https://github.com/spylang/spy


I'd love to see a breakdown of the token consumption of inaccurate/errored/unused task branches for claude code and codex. It seems like a great revenue source for the model providers.
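As a rough illustration of the kind of breakdown I mean (the record shape and numbers here are entirely made up, not an actual Claude Code or Codex log format):

```python
from collections import defaultdict

# Hypothetical per-branch usage records; "outcome" marks whether the
# branch's work actually landed or was thrown away.
records = [
    {"branch": "fix-tests", "outcome": "merged",  "tokens": 12_000},
    {"branch": "fix-tests", "outcome": "errored", "tokens": 8_500},
    {"branch": "refactor",  "outcome": "unused",  "tokens": 20_000},
    {"branch": "refactor",  "outcome": "merged",  "tokens": 6_000},
]

totals = defaultdict(int)
for r in records:
    totals[r["outcome"]] += r["tokens"]

wasted = totals["errored"] + totals["unused"]
total = sum(totals.values())
print(f"wasted: {wasted}/{total} tokens ({wasted / total:.0%})")
```

Even a crude tally like this, aggregated across users, would show how much billed usage never produces merged work.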


Yeah, that's what I was thinking. They have an incentive not to get everything right on the first try, as long as they don't overdo it... I also feel like they try to drive more token usage by asking unnecessary follow-up questions that the user may say yes to, etc.

