
I made a web extension that adds PassMark benchmark results to processors and mainboards with embedded CPUs, so you can compare price per performance: https://addons.mozilla.org/en-US/firefox/addon/sharkys-geizh...

It really helped me make some decent budget builds.

I tried to publish it to the Chrome Web Store as well, but they removed it. Honestly, just use Firefox; Google is garbage.


I suffer from borderline personality disorder and it's rough but usually treatable with lots of therapy and learning of "skills."

The covid crisis made my issues boil over to the point of losing relationships, friendships and attempting to take my own life twice (before I found the therapist).


If I made a facemask with that printed on it (as a QR code or whatever) would google stop trying to identify my face? Or would that paint a huge red target on my face?


Some HN'ers might be too young to remember the DeCSS [0] saga and, in particular, the t-shirts.

The EFF, who successfully represented two defendants accused of publishing the DeCSS source code in the two main cases, has the details [1]:

> In Bunner, the [ DVD Copy Control Association ] summarily dismissed its claims after the California Supreme Court ruled that computer programs could be preliminarily restrained from publication only in very narrow circumstances. The California Court of Appeals ruled that those circumstances were not met in Mr. Bunner's case because the program was not a trade secret at the time it was published, but instead was widely available around the world.

> In Pavlovich, the California Supreme Court ruled that Matthew Pavlovich, a Texas resident who published DeCSS on the Internet, could not be forced to stand trial in California. The landmark decision laid out clear jurisdiction rules for claims arising from publishing information on the Internet. DVD CCA's attempt to seek U.S. Supreme Court review of the decision was also rejected.

Additionally, the one known author of DeCSS was acquitted in a criminal trial in Norway.

Obligatory EFF donation link: https://supporters.eff.org/donate/

--

[0]: https://en.wikipedia.org/wiki/DeCSS

[1]: https://www.eff.org/cases/dvdcca-v-bunner-and-dvdcca-v-pavlo...

--

EDIT: See also "AACS encryption key controversy" and the "free speech flag": https://en.wikipedia.org/wiki/AACS_encryption_key_controvers...


Probably a good idea to also incorporate that into some very-clearly-satire related artwork on the mask, to help with the legality. :)


OpenWRT is awesome, though I have my own story of woe: they switched a very specific WiFi driver to another implementation, which completely broke my 5GHz WiFi; it leaked kernel memory until the AP quickly became unresponsive. But OpenWRT is so awesome that even someone like me, with no experience with embedded hardware, could compile a new firmware with the old WiFi driver, and it worked! :)


The first-gen Archer C7 routers kernel panic because Qualcomm decided not to support the 5GHz card in the driver they use (though later revisions of that card are supported). You end up needing to open the router and remove the card just to get it to boot.


That issue will be fixed in 19.07.

https://github.com/openwrt/openwrt/commit/34113999ef430ce74a...

Given that the PCI card is removable, it makes sense to replace it with something else.


I'm curious. What implementation is this? The only thing that comes to mind is ath10k-ct.

edit: Recompilation should not be necessary. opkg remove && opkg update && opkg install should do it.


(meme) I figured out why the google outage took a while to recover: https://i.imgur.com/hzcLx5X.png


Personally, I've only ever had issues with Ubuntu Server and have since switched back to Debian.

I don't know if I'm extremely unlucky or if the choices I make are really that bad:

- I've hit a bug with amavisd and bitdefender[1]

- Recently, Ubuntu pushed for 'netplan' instead of ifupdown and that didn't work with an empty bridge for LXC[2][3]

- They broke a (convenience) script for remotely unlocking a LUKS rootfs[4]

- postconf segfaulting every 5 minutes[5], nothing bad but it just looks ugly in the server logs

Those are the major annoyances I've experienced in the last 3 years. I never had anything like it on Debian. :/

[1] https://bugs.launchpad.net/ubuntu/+source/amavisd-new/+bug/1...

[2] https://bugs.launchpad.net/netplan/+bug/1736975

[3] https://bugs.launchpad.net/netplan/+bug/1773997

[4] https://bugs.launchpad.net/ubuntu/+source/busybox/+bug/16518...

[5] https://bugs.launchpad.net/ubuntu/+source/postfix/+bug/17534...


Most Debian packages are years behind the current stable release. This debate is as old as Ubuntu itself; it is the very reason Ubuntu exists. Stable, free, current: pick any two.


netplan seems to be a nightmare; I ended up falling back to ifupdown when trying to set up libvirtd.


I wrote a lightweight DNS proxy for this purpose in Python + Twisted and sqlite3: https://gitlab.com/Sharky/blocklist2bind/blob/master/twisted... It also works really well on a Raspberry Pi 3 with pypy3.
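The linked file is truncated, but the core decision such a blocklisting proxy has to make — is the queried name (or any parent domain) on the list? — can be sketched with just the stdlib. The schema and names below are hypothetical, not the actual project's:

```python
import sqlite3

# In-memory blocklist table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blocklist (domain TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO blocklist VALUES (?)",
                 [("ads.example.com",), ("tracker.example.net",)])

def is_blocked(name: str) -> bool:
    """True if name or any parent domain is on the blocklist."""
    labels = name.rstrip(".").split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if conn.execute("SELECT 1 FROM blocklist WHERE domain = ?",
                        (candidate,)).fetchone():
            return True
    return False

print(is_blocked("sub.ads.example.com"))   # True (parent domain matches)
print(is_blocked("news.ycombinator.com"))  # False
```

A blocked name would then get a sinkhole answer (e.g. NXDOMAIN or 0.0.0.0), while everything else is forwarded upstream.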


The next default compressor might be lrzip [1] by Con Kolivas; I've only seen it a couple of times in the wild so far, but for certain files it can increase the compression ratio quite a bit.

[1] https://github.com/ckolivas/lrzip

  # 151M    linux-4.14-rc6.tar.gz
  # GZIP decompression
  ~$ time gzip -dk linux-4.14-rc6.tar.gz

  real    0m4.518s
  user    0m3.328s
  sys     0m13.422s

  # 787M    linux-4.14-rc6.tar
  # LRZIP compression
  ~$ time lrzip -v linux-4.14-rc6.tar
  [...]
  linux-4.14-rc6.tar - Compression Ratio: 7.718. Average Compression Speed: 13.789MB/s.
  Total time: 00:00:56.37

  real    0m56.533s
  user    5m35.484s
  sys     0m9.422s

  # 137M    linux-4.14-rc6.tar.lrz
  # LRZIP decompression
  ~$ time lrzip -dv linux-4.14-rc6.tar.lrz
  [...]
  100%     786.16 /    786.16 MB
  Average DeCompression Speed: 131.000MB/s
  Output filename is: linux-4.14-rc6.tar: [OK] - 824350720 bytes
  Total time: 00:00:06.35

  real    0m6.524s
  user    0m8.031s
  sys     0m1.766s

  # Results
  ~$ du -hs linux* | sort -h
  137M    linux-4.14-rc6.tar.lrz
  151M    linux-4.14-rc6.tar.gz
  787M    linux-4.14-rc6.tar

Tested on WSL (Bash on Ubuntu on Windows 10).

edit:

  ~$ time xz -vk linux-4.14-rc6.tar
  linux-4.14-rc6.tar (1/1)
    100 %        98.9 MiB / 786.2 MiB = 0.126   3.0 MiB/s       4:25

  real    4m25.189s
  user    4m23.828s
  sys     0m1.094s
  
  ~$ du -hs linux* | sort -h
  99M     linux-4.14-rc6.tar.xz
  137M    linux-4.14-rc6.tar.lrz
  151M    linux-4.14-rc6.tar.gz
  787M    linux-4.14-rc6.tar
It looks like xz still has the best compression ratio, but it also took the longest (real) time.
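The same ordering can be reproduced with Python's stdlib codecs alone (gzip wraps DEFLATE, lzma is the algorithm behind xz; zstd and lrzip aren't in the stdlib). This is a rough sketch on synthetic repetitive input, not the kernel tarball from the thread, so the exact ratios differ:

```python
import gzip
import lzma

# Highly redundant input so both codecs have something to work with.
data = b"The quick brown fox jumps over the lazy dog. " * 20_000

gz = gzip.compress(data)   # DEFLATE, as used by .tar.gz
xz = lzma.compress(data)   # LZMA2, as used by .tar.xz

print(f"raw: {len(data)}  gzip: {len(gz)}  xz: {len(xz)}")
# xz should come out smallest, mirroring the du listing above.
```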


lrzip is a preprocessor that finds matches in the distant past that the backend compressor (xz) couldn't normally find. zstd has a new long-range-matcher mode inspired by the ideas behind rzip/lrzip, with some extra tricks. It produces data in the standard zstd format, so it can be decompressed by the normal zstd decompressor. There is a short article about it in the latest release notes: https://github.com/facebook/zstd/releases/tag/v1.3.2


I tried a few times with zstd at various compression levels on the Linux kernel sources. While I've been impressed with zstd, and have some projects lined up to use it, it seems the kernel sources are not a great fit for it: xz handily beats it, and not by a small margin either. I had to really ratchet up the compression level (20+) before I could get close to 100 MB.


In general, xz beats zstd in compression ratio, as xz is very committed to providing the strongest compression at the expense of speed, while zstd provides a range of compression-ratio-vs-speed tradeoffs [0]. At the lower levels, zstd isn't approaching xz's compression ratio, but it's doing its work much, much faster. Additionally, zstd generally massively outperforms xz in decompression speed.

  $ time xz linux-4.14-rc6.tar

  real    4m26.009s
  user    4m24.828s
  sys     0m0.724s

  $ wc -c linux-4.14-rc6.tar.xz
  103705148 linux-4.14-rc6.tar.xz

  $ time zstd --ultra -20 linux-4.14-rc6.tar
  linux-4.14-rc6.tar   : 12.81%   (824350720 => 105564246 bytes, linux-4.14-rc6.tar.zst)

  real    4m34.129s
  user    4m33.484s
  sys     0m0.432s

  $ time cat linux-4.14-rc6.tar.xz | xz -d > out1                                                                                                                                           

  real    0m9.677s
  user    0m6.608s
  sys     0m0.704s

  $ time cat linux-4.14-rc6.tar.zst | zstd -d > out2

  real    0m1.702s
  user    0m1.220s
  sys     0m0.520s
[0]: https://github.com/facebook/zstd/blob/dev/doc/images/DCspeed...


While making no judgement against lrzip, I'll point out that outperforming gzip is pretty much the baseline as far as compression goes. A more interesting comparison would be against a modern compressor like zstd: http://facebook.github.io/zstd/


lrzip is a somewhat more polished implementation of the same idea as rzip by Andrew Tridgell. That means: use rsync's rolling-hash algorithm to implement LZ77-style matching with an enormous window, and compress the output of that with some general-purpose compression algorithm (bzip2 in rzip; IIRC in lrzip it is configurable).
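A toy sketch of that rolling-hash idea (illustrative only, nothing like rzip's actual code): hash every fixed-size window in O(1) per byte and remember where each hash was first seen, so a duplicated chunk can be found no matter how far back it occurred — far beyond gzip's 32 KB window.

```python
import hashlib

CHUNK = 16          # window size; real rzip chunking differs
BASE, MOD = 257, 1 << 32

def rolling_hashes(data: bytes, size: int = CHUNK):
    """Yield (offset, hash) for each size-byte window, updated in O(1)."""
    power = pow(BASE, size - 1, MOD)
    h = 0
    for i, b in enumerate(data):
        if i >= size:                        # slide: drop the oldest byte
            h = (h - data[i - size] * power) % MOD
        h = (h * BASE + b) % MOD
        if i >= size - 1:
            yield i - size + 1, h

def find_long_range_match(data: bytes):
    """Return (first_offset, repeat_offset) of the first duplicated chunk."""
    seen = {}
    for off, h in rolling_hashes(data):
        prev = seen.get(h)
        if prev is not None and data[prev:prev + CHUNK] == data[off:off + CHUNK]:
            return prev, off                 # verified match, however far apart
        seen.setdefault(h, off)
    return None

# Pseudorandom filler with the same 16-byte block at both ends.
filler = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(100))
data = b"HEADER-BLOCK-123" + filler + b"HEADER-BLOCK-123"
print(find_long_range_match(data))  # (0, 3216) -- 3.2 KB apart
```

The real tools then feed the de-duplicated stream to a backend compressor (bzip2, lzma, etc.), which only has to handle the short-range redundancy.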


> ifupdown has been deprecated in favor of netplan and is no longer present on new installs [...]

> Given that ifupdown is no longer installed by default, its commands will not be present: ifup and ifdown are thus unavailable, replaced by ip link set $device up and ip link set $device down.

Sad to see this go; I always use ifup, ifdown and ifconfig. Yes, they're old and clunky tools but in a way they are the staple of a proper Linux installation to me.

Not to mention that it's quite a handful to write out "ip link set $device up".


Yeah. This will no doubt affect me too.

On a related note, I've been meaning to read this article from 2011 for a good while now: https://dougvitale.wordpress.com/2011/12/21/deprecated-linux...

I guess it's getting about time to get it done now :)


> ifup and ifdown are thus unavailable, replaced by ip link set $device up and ip link set $device down.

That's weird, they do two very different things. ifup and ifdown apply / unapply the configuration in /etc/network/interfaces, like setting IP addresses, setting routes, running other commands you might have like configuring card-specific hardware things or sending notifications to other processes, etc. ip link set dev $dev up/down only enables or disables the card. They also require the card to exist already, in the case of bridges or bonds or VLANs or similar things.
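To make the difference concrete, here's a hypothetical /etc/network/interfaces stanza (addresses are TEST-NET examples): `ifup eth0` applies every line of it, while `ip link set eth0 up` does only the link-state part.

```
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    up ip route add 198.51.100.0/24 via 192.0.2.1
```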

Does netplan cause all of these things to be done automatically when the card is upped or downed?

> ifconfig ... staple of a proper Linux installation to me.

To be honest, I have a lot of muscle memory for ifconfig because it's what I grew up with, but also Linux's ifconfig has gotten very different from the BSD ifconfig (or ifconfigs? I know FreeBSD's and macOS's support different things), so it's not like it was a standard tool across UNIXes, just a standard name. I'm honestly happier with the Linux tool having its own name.

Also, ip supports things that ifconfig straight-up doesn't, like having multiple IP addresses on an interface (without configuring alias interfaces), creating various types of virtual devices, creating VLANs without using an even more awful tool, getting card stats (ip -s -s link) in a vendor-neutral way, etc.

I do really really hate the ip command's syntax though.


A tool is a tool, but holy hell do I hate ip's and iw's syntax, and the difficulty of automatically parsing their output compared to net-tools.


This is just such a shambles.

Great idea number 1: let's put all the functionality into one command-line tool, and we'll call it ip for ... "internet protocol"? Why is this ip tool managing my ethernet network card?

Great idea number 2: for this powerful swiss army knife, only a special syntax will do, where we repeat the name of an argument before the argument value. To make sure no one can use its functionality with fewer than 5 invocations, we'll have a hierarchical help menu that literally outputs BNF.


This tool is hardly new; I think it was introduced back in the nineties. http://linux-ip.net/gl/ip-cref/ says April 14, 1999.


I think it's not a good thing to mention age as if that matters much.

Would you argue that gpg is old and therefore good software?

I would argue instead that iproute2 was not picked up quickly because it's clunky and hard to reason about. OpenBSD managed to improve ifconfig enough that a change was not needed; I applaud that, to be honest.


Sadly too much of IT is run on childish bravado these days.

End result is that anything older than the person talking is stale and clunky code best replaced by something written in the latest bling language that "everyone" raves about...


When it comes to standard unix tools, a part of me still feels 1999 is rather new. :)


and it was a pain in the arse then, and it's no better now.


> we'call it ip for ... "internet protocol"? Why is this ip tool managing my ethernet network card?

For CLI tools I use every day, their etymology is far less important to me than their brevity. "ip" fits the bill quite nicely.

Yeah, the name-before-argument thing can be annoying though, but again, at least they're all short.


Effectively the ip syntax reminds me of configuring cisco hardware, and not in a good way.


It's not like we're losing the previous tools completely. This is just switched defaults and net-tools and ifupdown are just one "apt install" away.


It stands for iproute not "internet protocol". :p


How did the "ip" get into "iproute"?


You could consider making an alias for both maybe? I know it's not the same, but at least then you can have your own shorthand for it.

I've done this for similar commands from time to time when I find myself typing the wrong one (sometimes I mix up Windows and Linux shell commands; PowerShell has helped with this, at least, by supporting plenty of Linux command names).
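One wrinkle: aliases can't take a positional argument, so for ifup/ifdown you'd want shell functions instead. A minimal sketch for ~/.bashrc (names chosen here for illustration; they only shadow the real tools if ifupdown is installed):

```shell
# Functions, not aliases, so the interface name can be passed as "$1".
ifup()   { ip link set "$1" up; }
ifdown() { ip link set "$1" down; }
```

Note these only toggle link state; unlike the real ifup/ifdown they don't apply any /etc/network/interfaces configuration.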


I wonder why netplan instead of straight-up systemd-networkd?

To avoid the usual "systemd takes away all my toys" shitstorm? Does systemd-networkd fundamentally lack anything which netplan doesn't?


"Server users now have their network devices managed via systemd-networkd on new installs. This only applies to new installations."


networkd is really designed more for server users, on a laptop that connects through wlan you want NetworkManager instead. Netplan is designed to be a single frontend to whichever of the two you are using.


FYI: you can shorten "ip link set $device up" to "ip l s $device up" (the same goes for all ip commands, e.g. "ip address" can be shortened to "ip addr" or simply "ip a").


Yeah but please don't.... or at least don't create documentation or guides with the abbreviations.


To be honest, I didn't even know "addr" was short for address. When I was first taught iproute2 in 2007, I was told that the command was "ip addr" (and I'm fairly sure the very first iproute2 command I ever typed was "ip addr show").


`apt install ifupdown net-tools` and you have those tools again.


Maybe reducing the entropy by almost half for a vanity key/domain name wasn't such a good idea.


What are you talking about?


The address: 7 out of its 16 characters are fixed.
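Assuming this refers to a v2 onion address (16 base32 characters, 5 bits per character — an assumption on my part, since the thread doesn't say), the arithmetic behind "almost half" works out as:

```python
BITS_PER_CHAR = 5                        # base32: 5 bits per character
total_bits = 16 * BITS_PER_CHAR          # 80 bits in the full address
fixed_bits = 7 * BITS_PER_CHAR           # 35 bits pinned by the vanity prefix
remaining_bits = total_bits - fixed_bits

print(total_bits, fixed_bits, remaining_bits)  # 80 35 45
# 35/80 = 43.75% of the keyspace bits spent on vanity -- "almost half".
```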

