Hacker News | llimllib's comments

My alma mater, Ursinus, is a very small school with few claims to fame, but one of them is that John Mauchly taught there before going to Penn to design ENIAC. Wikipedia puts it bluntly:

> Mauchly's teaching career truly began in 1933 at Ursinus College where he was appointed head of the physics department, where he was, in fact, the only staff member.


OKPalette by David Aerne is my favorite tool for this; it chooses points sensibly, but then also lets you drag them around or change the number of colors you want: https://okpalette.color.pizza/


For me, it's filesystem latency on macOS when virtualizing that kills me. Cargo, npm, pip, etc. create many small files, and there's high per-file latency at the FS layer.
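You can get a rough feel for the per-file cost with a tiny microbenchmark like this (just a sketch; run it natively and then inside the VM's shared filesystem and compare the numbers):

```python
import os
import tempfile
import time

def per_file_create_latency(n=1000):
    """Create n tiny files and return the mean wall-clock seconds per file."""
    with tempfile.TemporaryDirectory() as d:
        start = time.perf_counter()
        for i in range(n):
            with open(os.path.join(d, f"f{i}.txt"), "w") as f:
                f.write("x")
        elapsed = time.perf_counter() - start
    return elapsed / n

if __name__ == "__main__":
    print(f"{per_file_create_latency() * 1e6:.1f} µs per file")
```

On a shared/virtualized mount the per-file number is often an order of magnitude or more above native, which is exactly what bites package managers that touch thousands of small files.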

I've built tools with both Go and Rust as LLM experiments, and it is a real advantage for Go that the test/compile cycle is much faster.

I've been successful with each, and I think there are positives and negatives to both; I just wanted to mention that particular one, which stands out as making Go relatively more pleasant to work with.


> who has a js engine that is both fast and embeds well? nobody

Fabrice Bellard! https://github.com/bellard/mquickjs

(I agree with you, just wanted to note this super neat project)


QuickJS/mquickjs are good at embedding but nowhere close to LuaJIT in terms of speed. (I have some experience with QuickJS: https://github.com/justjake/quickjs-emscripten)

As an aside, I'm curious how QuickJS/mquickjs compare to mruby in speed and size. Something to ponder.


I created one I like: https://github.com/llimllib/mdriver

It can echo images with the kitty image protocol, and it streams its output, which I use to show LLM output as it arrives.
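To give a flavor of how the kitty graphics protocol works (a generic sketch of the protocol, not mdriver's actual code): the image bytes are base64-encoded and sent inside chunked APC escape sequences, where `f=100` means PNG data, `a=T` means transmit-and-display, and `m` flags whether more chunks follow.

```python
import base64
import sys

def kitty_show_png(png_bytes, out=sys.stdout):
    """Write a PNG to a kitty-protocol terminal as chunked escape sequences."""
    data = base64.standard_b64encode(png_bytes).decode("ascii")
    first = True
    while data:
        # the protocol caps each escape's payload at 4096 bytes
        chunk, data = data[:4096], data[4096:]
        more = 1 if data else 0
        if first:
            out.write(f"\x1b_Gf=100,a=T,m={more};{chunk}\x1b\\")
            first = False
        else:
            # continuation chunks carry only the m key
            out.write(f"\x1b_Gm={more};{chunk}\x1b\\")
    out.flush()
```

Run in a terminal that speaks the protocol (kitty, and a few others), this displays the image inline; terminals that don't understand the APC sequence generally just ignore it.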

It doesn't handle paging; you can pipe it to `less` or another pager for that.


Not OP, but I use ripgrep and customize it with an alias as well, so the same approach applies there.
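For example, something along these lines in a shell rc file (not my exact setup, just illustrative; all of these are standard ripgrep flags):

```shell
# ~/.zshrc or ~/.bashrc -- aliases aren't recursive, so aliasing rg to rg is fine
alias rg='rg --smart-case --hidden --glob "!.git/"'
```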


Yes, I think the device itself is fine, but the Apple TV apps are mostly terrible: often very laggy and poorly written.

The way developers use the UI toolkit that the Apple TV provides also seems to tend towards apps where it's very difficult to figure out what the active selection is, which is of course _the_ critical interaction challenge.


The issue here is that the app developers design & test for the latest Apple TV 4K models, which have about 10X the performance (and 2-4X the RAM) compared to the old HD models.

Apple left a large generational gap because they kept selling the HD for many years (until 2022) as an entry-level device alongside much more capable 4K models.

> it's very difficult to figure out what's the active selection

Yes, from what I've observed this is one of the biggest challenges people face with the Apple TV interface, along with accidentally moving the selection when they try to select it (because of the remote's sensitive touch controls).


Is that why the BritBox app is absolute garbage?


> it's very difficult to figure out what's the active selection

I don't think this is the fault of the third-party devs; Apple seems to have started this, and other devs followed their example.

I tend to make a small circle with my thumb in the center of the select button, or just move it slightly back and forth, to see what on the screen starts moving with me.


I have never noticed this issue. Buttons get highlighted in contrasting colours; things like episode thumbnails get a different-coloured highlight border and sometimes even a drop shadow. What I find harder is telling whether pressing left will open the menu on that side or just move to the previous tile.


