Hacker News | anyfoo's comments

A big problem I have with SSH certs is that they are not universally supported. For me, there is always some device or daemon (for example tinyssh in the initramfs of my gaming PC so that I can unlock it remotely) that only works with "plain old SSH keys". And if I have to distribute and sync my keys onto a few hosts anyway, that takes away the benefits.

Adding to this: while certs are indeed well supported by OpenSSH, OpenSSH is not always the SSH daemon used on alternative or embedded platforms.

For example, OpenWRT uses Dropbear [1] instead, which does not support certs. Also, Java programs that implement SSH functionality, like Jenkins, may do so using Apache MINA [2]; although the underlying library supports certs, that support is buggy [3], and the application still has to add the UX for it.

[1] https://matt.ucc.asn.au/dropbear/dropbear.html

[2] https://mina.apache.org/sshd-project/

[3] For years I've been dealing with NullPointerExceptions that crash the connection when certain ed25519 certificates are presented.


You can just replace Dropbear with OpenSSH on OpenWRT. That was one of the first things I did, since Dropbear also doesn't support hardware-backed (sk) keys. Just move Dropbear to port 2222 and disable the service.

I re-enabled Dropbear on that alternate port when I did the recent major update, just in case, but it wasn't necessary. After the upgrade, OpenSSH was alive and ready.
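On a current OpenWrt install the switch is roughly the following (a sketch, not a definitive recipe: it assumes the `opkg` package manager, a default Dropbear config, and that the OpenSSH package installs an init script named `sshd` -- verify the script name on your build):

```shell
# Install the OpenSSH server.
opkg update
opkg install openssh-server

# Move Dropbear to an alternate port so both daemons can coexist
# while you confirm OpenSSH works (section index assumes defaults).
uci set dropbear.@dropbear[0].Port='2222'
uci commit dropbear
/etc/init.d/dropbear restart

# Enable and start OpenSSH on the standard port 22.
/etc/init.d/sshd enable
/etc/init.d/sshd start

# Once an OpenSSH login is confirmed, disable Dropbear entirely.
/etc/init.d/dropbear disable
/etc/init.d/dropbear stop
```

Keeping Dropbear alive on 2222 during the transition means a broken OpenSSH config can't lock you out of the router.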


Upgrade to a better one in initramfs?

Might actually be a positive instead of a negative. Gaming use cases should not have any effect on security policies; these should be as separate as possible. Different auth mechanisms for your gaming stuff and your professional stuff ensure nothing gets mixed.

Hah? It being my gaming machine has nothing to do with the problem. It’s also my FPGA development machine, though it gets used less for that. It only happens to be the only Linux workstation in my home (the others are Macs or OpenBSD).

If you care about security, I recommend investing into a separate computer for developing hardware and software and another for downloading games on.

You can setup your security any way you like, but nothing beats an air gap in terms of security and simplicity.


remote unlock is also useful when you're not gaming so that feels like the wrong aspect to focus on

I feel like that was super common. Apart from changing the volumes of entire channels (e.g. changing the level of Line In vs. digital sound), volume was a relatively “global” thing.

And I’m not sure if that was still the case in 1997, but most likely changing the volume of digital sound meant the CPU having to process the samples in real time. Now on one hand, that’s probably dwarfed by what the CPU had to do for decompressing the video. On the other hand, if you’re already starved for CPU time…


I mentioned this in another thread now, but it was definitely noteworthy to me that it did this, since I was used to other programs not doing so, for example Winamp. I would also have thought Windows' Media Player did not do this, but I can't remember for certain.


Winamp had a software equalizer with a preamp, which was noteworthy. Are you sure changing the volume did not mean changing the preamp level in Winamp?

If you turned off the preamp (could be directly done in the EQ window I think), what did the volume control actually do?


Maybe we're not understanding each other correctly here.

It's 30 years ago now, but my recollection is that Winamp did not change Windows' global volume.

I am less certain, but I thought Windows' own Media Player similarly also did not change Windows' global volume.

What I definitely recall correctly is being surprised that RealPlayer would change the Windows global volume; it would not have been so noteworthy to me unless it was unusual compared to the other applications I typically used.


No, I get you. I'm saying that Winamp might have been "special" because it had a software equalizer, and its volume control might have actually changed the preamp level. That would be fairly unusual for other apps of its time. I also wondered what would happen if you turned the preamp off with its big shiny button: would that let the volume control change the global volume instead, or would it disable the volume control entirely?

What I'm saying is: I still feel (perhaps wrongly, quite possibly so) that in 1997, changing the global volume was more common, and that even being able to change app-specific volume required some non-trivial features from any app that wanted to do so.


Awesome.

Side note: virtual 8086 mode was protected mode, or rather, implied protected mode. A task could run in virtual 8086 mode where to the task it was (mostly) looking like it was running in real mode, when in actuality the kernel was running in full protected mode.

Note that the "kernel" was never DOS. It could often actually be a so-called "memory manager", like EMM386, and the actual DOS OS (the entire thing, including apps, not just the DOS "kernel") would run as a sole vm86 task, without any other tasks. The memory manager was then serving DOS with a lot of the 386 32-bit goodness through a straw, effectively.

It's very bizarre by today's (or even that era's) OS standards, and it evolved that way purely because of compatibility.


Is it so bizarre from today's perspective? Virtualization and hypervisors are commonplace.


The virtualization itself is not the bizarre part. The bizarre part is where the actual OS is 16 bit and runs as the singular "task" of a thin 32 bit layer that merely calls itself a "memory manager". The details of that machinery (segmentation, DPMI, ...) are quite a sight to behold. And it's all because of how PCs evolved at that time, and because we needed to keep running DOS and still wanted to make use of all the extra memory that wouldn't fit into its address space.


macOS also uses compression in the virtual memory layer.

(It's fun to note that I try to type out "virtual memory" in this thread, because I don't want people to think I talk about virtual machines.)


I'm getting tired of typing this, but swap space is not just to increase available virtual memory. If you upgrade from 8 GB to 24 GB, then with proper swap space usage, you have 16 GB that could be used for additional disk cache.

Sure, you're still better off with 24 GB overall compared to 8 GB + swap, whether you add swap to your 24 GB or not, but swap can still make things better.

(That says nothing about whether the 2x rule is still useful though, I have no idea.)
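If you want to see how the kernel is actually splitting RAM between page cache and anonymous memory (and how much of the swap is in use), a quick Linux-specific look at /proc/meminfo works; the field names below are standard, but exact accounting varies by kernel version:

```shell
# Show total RAM, page cache, anonymous pages, and swap usage (values in kB).
awk '/^(MemTotal|Cached|AnonPages|SwapTotal|SwapFree):/ {printf "%-12s %8s kB\n", $1, $2}' /proc/meminfo
```

A large `Cached` figure next to a modest `AnonPages` one is usually a sign the kernel is putting otherwise-idle RAM to work as disk cache.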


There's a chance that those servers might run more efficiently with some swap space, for the reasons mentioned many times in this thread. Swap space is not just for overcommitting.


The theories are repeated often, but I have never seen any empirical data to back them up, assuming one sets the options I mentioned. These anecdotes usually come from servers with default settings, no attempt to tune them for the intended workloads, and no capacity planning for application resources. Even OS maintainers are starting to recognize this and have created daemons such as tuned for the people that never touch settings. The next evolution will be dynamic adjustments from continuous bpf traces. I just keep it simple and avoid the circular arguments altogether.


Oh sure, it might or might not make a significant difference at all. Chances are, if you do a lot of I/O on a large (or very large) amount of data, and you also have a lot of rarely used but resident anonymous memory, then swap space should help, as that anonymous memory can get paged out in favor of disk cache, but I have no idea how common that is.


Yeah I mean, I know what you mean, but this is where it gets into circular reasoning. I will always have operations groups move the workload to a node that has more memory if that is what is needed. In my case, having swap on disk would require it to be encrypted, due to contracts requiring any customer data touching a disk to be encrypted, but I just avoid that altogether and add more memory. If 2 TB of RAM isn't enough, then they get 3 TB, and so on. We pushed vendors and OEMs to grow their motherboard capacity. At some point application groups just get more servers.


Yeah, that seems like a reasonable approach for your case!


As has been mentioned a few times in other comments here, I don't believe that's correct. Swap space is not just for "using more memory than you have RAM".


I'm not an expert, but aren't you just reducing the choice of what pages can be offloaded from RAM? Without swap space, only file-backed pages can be written out to reclaim RAM for other uses (including caching). With swap space, rarely used anonymous memory can be written out as well.

Swap space is not just for overcommitting memory (in fact, I suspect nowadays it rarely ever is), but also for improving performance by maximizing efficient usage of RAM.

With 48GB, you're probably fine, but run a few VMs or large programs, and you're backing your kernel into a corner in terms of making RAM available for efficient caching.
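How aggressively Linux pages out anonymous memory in favor of file cache is tunable via `vm.swappiness` (a sketch; the value 10 below is purely illustrative, and the right setting is workload-dependent):

```shell
# vm.swappiness biases reclaim: higher values page out anonymous memory
# more readily to preserve file cache; lower values favor keeping
# anonymous pages resident and dropping cache instead.
sysctl vm.swappiness          # show the current value (default is often 60)
sysctl -w vm.swappiness=10    # example value only; requires root

# Persist it across reboots (assumes a distro that reads /etc/sysctl.d):
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swap.conf
```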


The point is to have so much RAM that you don't need to offload anything.


I don't think that's correct. Having swap still allows you to page out rarely-used pages from RAM, and lets that RAM be used for things that positively impact performance, like caching actually-used filesystem objects. Pages that are backed by disk (e.g. files) don't need swap, but anonymous memory that has, say, only been touched once and never even read afterwards should have a place to go as well. Also, without swap space, reclaim is limited to file-backed pages, instead of being able to include anonymous memory in that choice.

For that reason, I always set up swap space.

Nowadays, some systems also have compression in the virtual memory layer, i.e. rarely used pages get compressed in RAM to use up less space there, without necessarily being paged out (= written to swap). Note that I don't know much about modern virtual memory and how exactly compression interacts with paging out.
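On Linux that compressed layer is typically zswap (or a zram-backed swap device); whether it's active can be checked from sysfs. A sketch, assuming a zswap-capable kernel (the paths won't exist otherwise, hence the fallbacks):

```shell
# Is zswap compiled in and enabled? Prints Y or N on zswap-capable kernels.
cat /sys/module/zswap/parameters/enabled 2>/dev/null || echo "zswap not present"

# Which compressor and backing pool allocator are configured?
cat /sys/module/zswap/parameters/compressor 2>/dev/null || true
cat /sys/module/zswap/parameters/zpool 2>/dev/null || true
```

On macOS, compressed-memory counters show up in the output of `vm_stat`.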


Every time I've run out of physical memory on Linux, I've had to just reboot the machine, being unable to issue any kind of command via input devices. I don't know what it is, but Linux just doesn't seem to be able to deal with that situation cleanly.


The mentioned situation is not running out of memory, but being able to use memory more efficiently.

Running out of memory is a hard problem, because in some ways we still assume that computers are Turing machines with an infinite tape. (And in some ways, theoretically, we have to.) But it's not clear at all which memory to free up (by killing processes).

If you are lucky, there's one giant process with tens of GB of resident memory to kill to put your system back into a usable state, but that's not the only case.


Windows doesn't do that, though. If a process starts thrashing the performance goes to shit, but you can still operate the machine to kill it manually. Linux though? Utterly impossible. Usually even the desktop environment dies and I'm left with a blinking cursor.

What good is it to get marginally better performance under low memory pressure at the cost of having to reboot the machine under extremely high memory pressure?


In my experience the situations where you run into thrashing are rather rare nowadays. I personally wouldn't give up a good optimization for the rare worst case. (There's probably some knobs to turn as well, but I haven't had the need to figure that out.)


Try doing cargo build on a large Rust codebase with a matching number of CPU cores and GBs of RAM.
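One way to keep a parallel build from thrashing the whole box is to fence it into its own memory cgroup, so the OOM killer targets the build rather than the desktop. A sketch, assuming systemd with cgroups v2; the 8G/2G limits are illustrative only:

```shell
# Run the build in a transient scope with a hard memory cap.
systemd-run --user --scope -p MemoryMax=8G -p MemorySwapMax=2G \
    cargo build --release

# Or simply lower parallelism, since peak memory scales with
# the number of concurrent rustc jobs.
cargo build -j4
```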


I believe that it's not very hard to intentionally get into that situation, but... if you notice it doesn't work, won't you just not? (It's not that this will work without swap after all, just OOM-kill without thrashing-pain.)


I don't intentionally configure crash-prone VMs. I have multiple concerns to juggle and can't always predict with certainty the best memory configuration. My point is that Linux should be able to deal with this situation without shitting the bed. It sucks to have some unsaved work in one window while another has decided that now would be a good time to turn the computer unusable. Like I said before, trading instability for marginal performance gains is foolish.


No argument there. I also always had the impression that Linux fails less gracefully than other systems.


That only helps if you don't have much free RAM. If you've got more free RAM than you need for cache (including disk cache), swap only slows things down. With RAM prices these days, though, buying enough RAM just to avoid swap is not worth it. IME on a desktop with 128 GiB of RAM and zswap, I've never hit the backing store, but have gone over 64 GiB a few times. I wouldn't want to have to pay to rebuild my desktop these days; 128 GiB of ECC RAM was pricey enough in 2023!


Yes, it was apparently very visible: https://martypc.blogspot.com/2024/09/pc-floppy-copy-protecti...

But as I mentioned in a sibling comment, I’m not sure it was ever confirmed that it was really a laser that made that mark.

