Probably a safe default, as you want headroom for kernel structures, file buffering, and SSH sessions so you can log in and debug why your server suddenly has high load and high iowait (swapping).
If you know a system is going to run (e.g.) a Postgres database, then tweaking the vm.* sysctl values is part of the tuning process.
> the way stuff fails when it runs out of memory is really confusing
have you checked what your `vm.overcommit_ratio` is? If it's < 100, then you will get OOM kills even if plenty of RAM is free, since the default is 50, i.e. only 50% of RAM can be COMMITTED and no more.
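For anyone who wants to check their own box, something like this reads the relevant knobs from standard Linux procfs paths (values will obviously vary per system; note that `vm.overcommit_ratio` only takes effect in strict mode, `vm.overcommit_memory=2`):

```shell
# Inspect the current overcommit policy and the kernel's computed limit.
cat /proc/sys/vm/overcommit_memory   # 0 = heuristic, 1 = always, 2 = never (strict)
cat /proc/sys/vm/overcommit_ratio    # % of RAM counted toward CommitLimit in mode 2
grep -E '^Commit' /proc/meminfo      # CommitLimit vs Committed_AS (currently committed)
```

If `Committed_AS` is close to `CommitLimit` while `free` still shows plenty of memory, that's the failure mode described above.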
curious what kind of failures you are alluding to.
The main scenario that caused me a lot of grief is temporary RAM usage spikes, like a single process run during a build that uses ~8GB of RAM or more for a mere few seconds and then exits. In some cases the OOM killer was reaping the wrong process, or the build was just failing cryptically; if I examined stuff like top I wouldn't see any issue, plenty of free RAM. The tooling for examining this historical memory usage is pretty bad; my only option was to look at the OOM killer logs and hope that eventually the culprit would show up.
Thanks for the tip about vm.overcommit_ratio though, I think it's set to the default.
you can get statistics off cgroups to get an idea of what it was (assuming it's a service and not something a user ran), but that requires probing them often enough
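A rough sketch of what that probing can look like under cgroup v2 (the `example.service` name is a placeholder, and `memory.peak` needs a reasonably recent kernel, so the reads are guarded):

```shell
# Hypothetical unit name; substitute the service you actually care about.
svc=example.service
base=/sys/fs/cgroup/system.slice/$svc
for f in memory.current memory.peak; do
  if [ -r "$base/$f" ]; then
    printf '%s: %s bytes\n' "$f" "$(cat "$base/$f")"
  else
    printf '%s: unavailable\n' "$f"
  fi
done
```

On a real host you'd sample this periodically (or watch `systemd-cgtop`) to have any chance of catching a spike that only lasts a few seconds.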
For anyone feeling brave enough to disable overcommit after reading this, be mindful that the default `vm.overcommit_ratio` is 50, which means that if no swap is available, on a system with 2GB of total RAM, more than 1GB of RAM can't be allocated and requests will fail with preemptive OOMs. (e.g. PostgreSQL servers typically disable overcommit)
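To make the arithmetic concrete, the strict-mode limit (ignoring hugepages) is CommitLimit = swap + RAM × overcommit_ratio / 100, which for the 2GB/no-swap example works out to:

```shell
# Strict overcommit (vm.overcommit_memory=2), 2 GB RAM, no swap, default ratio.
ram_kb=$((2 * 1024 * 1024))
swap_kb=0
ratio=50
commit_limit_kb=$(( swap_kb + ram_kb * ratio / 100 ))
echo "CommitLimit: ${commit_limit_kb} kB"   # 1048576 kB = 1 GB
```

Bumping `vm.overcommit_ratio` to 100 (or setting `vm.overcommit_kbytes` directly) avoids leaving half of RAM unusable on swapless machines.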
Nice usability features, definitely. Apart from that, how would you say it compares against something like Sysdig Falco or Cilium + Tetragon?
Apart from this, a major issue is DNS-based dynamic filtering, which is way better handled in a Kubernetes environment with something like Cilium. IP lists are impossible to manage with the modern level of third-party integrations.
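For reference, FQDN-based egress in Cilium looks roughly like this (the app label and domain are placeholders; the DNS rule toward kube-dns is needed so Cilium can observe lookups and learn which IPs back the name):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-api-fqdn
spec:
  endpointSelector:
    matchLabels:
      app: example-app        # placeholder workload label
  egress:
    # Allow DNS to kube-dns and inspect queries.
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # Allow traffic only to IPs resolved for this name.
    - toFQDNs:
        - matchName: "api.example.com"   # placeholder domain
```

The point is that the allow-list is expressed as names, so the IPs behind a third-party service can churn without anyone maintaining CIDR lists.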
- technological advantage (eBPF + AI/LLM)
- lightweight; uses far fewer resources than other heavy/bloated solutions
- seamless installation
- highly customizable, with fast shipping compared to existing solutions
- can create custom rules to raise alerts on any file, command, UID, GID, port, IP, etc.
- XDR: automated response/blocking of malicious IPs/ports
I would have expected at least VirusTotal to flag them if that were the case. It does more than just look the URL up in a database of known malicious URLs, and I think the reputation of the domains is the key factor here.
It is the same for nested links as well. They mostly have a chain of links, each one taking you to a new one, with hop counts ranging anywhere from 5 up to 10 or more.
There is also another escape sequence, OSC 1337, apparently already implemented in iTerm2 [0], which makes iTerm2 open the URL instead of printing it:
The hypothetical new control code is different because it does not display a hyperlink; it directly opens the link using the appropriate system URL handler.
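For contrast, the existing OSC 8 hyperlink sequence only renders a clickable label; nothing opens without user interaction:

```shell
# OSC 8 hyperlink: displays "click here" linked to the URL in supporting
# terminals; the terminal never opens the URL on its own.
printf '\033]8;;https://example.com\033\\click here\033]8;;\033\\\n'
```

An auto-opening sequence removes that interaction step entirely, which is exactly why the security questions below matter.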
The commit linked there checks host only and not protocol for "always allow". I wonder if that's going to be a problem with some of the more interesting protocols.
iTerm2 has definitely not been designed with security in mind.
It has a massive and rapidly growing attack surface and quite a bit of feature bloat (literally hundreds of "features"). I would not recommend using it over Terminal for anyone security-minded.