> That's as opposed to "validated" ECC support in, say, AMD's Threadripper platform. In that case, if a company builds and markets a motherboard as compatible with Threadripper, and it lacks ECC support, they can expect to receive a nasty letter from AMD's legal team.

Are you sure about that? I'm only asking because the System76 Threadripper Thelio[0] doesn't support ECC (according to the response I received from their support people). Their response was actually that "Threadripper and our motherboard do not offer ECC" (TR obviously does support it though), but is it the case that they're actually contractually obligated to support ECC?

[0] - https://system76.com/cart/configure/thelio-major-r1


> I'm only asking because the System76 Threadripper Thelio[0] doesn't support ECC

The Thelio Major uses a Gigabyte X399 Designare EX motherboard, which has ECC DRAM support. System76 may not offer or support ECC DRAM as an option, but you can add it yourself if you're so inclined.


Okay, that's actually great to know. Thanks!


It's EPYC that has ECC. Threadripper is for enthusiasts, not the professional market.


I'm running ECC memory on my R1700X... I'm fairly sure TR also has ECC memory. In fact, all of AMD's offerings except for APUs have ECC enabled, but AMD does not force motherboard manufacturers to implement it.


Threadripper and many Ryzen chips support ECC as well, and the professional market can use either and still be professionals.


EPYC has registered memory. While ECC is (traditionally) more common in RDIMMs, it's also available in UDIMMs.


OP is somewhat conflating two different things: non-strict function evaluation and lazy IO. With lazy IO, you can get, for example, a String from a file. That string is actually a lazily-constructed chain of Cons cells, so if you're into following linked lists and processing files one char at a time, then it's fun to use. The dangerous bit comes in when you close the file after reading its contents as a string:

    fd <- openFile "/some/path" ReadMode  -- from System.IO
    s <- hGetContents fd                  -- lazy: s starts life as a thunk
    hClose fd
    pure $ processString s
Now, processString is getting a string with the file's contents, right? Nope: you have a cons cell that probably contains the first character of the file, and maybe even a few more up to the first page that got read from disk. But eventually, as you're processing that string, you'll hit a point where your pure string processing actually tries to do IO on a file that isn't open anymore, and your perfectly sane, pure string-processing code will throw an exception. So, that's gross.
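
The quick-and-dirty fix is to force the whole string while the handle is still open. A sketch (evaluate is from Control.Exception; length walks every cons cell, so all the reads happen before hClose):

    fd <- openFile "/some/path" ReadMode
    s <- hGetContents fd
    _ <- evaluate (length s)  -- demand the whole file now, while fd is open
    hClose fd
    pure $ processString s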

That's a real issue that will hit beginners. There's been a lot of work done to make ergonomic and performant libraries that handle this without issues; I think that right now pipes[0] and conduit[1] are the big ones, but it's a space that people like to play with.
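
For a taste, a minimal conduit sketch (assuming the Conduit prelude module from the conduit package): the reads happen in chunks inside the pipeline, and runConduitRes guarantees the file is closed when the pipeline finishes or throws:

    import Conduit

    main :: IO ()
    main = do
      n <- runConduitRes $ sourceFile "/some/path" .| lengthCE
      print (n :: Int)  -- byte count, computed in constant memory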

[0] - https://hackage.haskell.org/package/pipes

[1] - https://github.com/snoyberg/conduit


Seems like the problem is that file close is strict, whereas the file handle should be a locked resource that is auto-closed when the last reference is destroyed (and “fclose” just releases the handle’s own lock).

In other words the problem seems to be (in this example) that the standard library mixes lazy and strict semantics. A better library wouldn’t carry that flaw.
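
For what it's worth, the bracketed style that System.IO already offers (withFile) gets partway there: the handle is closed deterministically when the callback exits, even on exceptions. The remaining trap is that you still have to force everything you need before the bracket closes. A sketch (force is from the deepseq package; the path is made up):

    import Control.DeepSeq (force)
    import Control.Exception (evaluate)
    import System.IO

    main :: IO ()
    main = do
      ls <- withFile "/some/path" ReadMode $ \fd -> do
        s <- hGetContents fd
        evaluate (force (lines s))  -- fully evaluate before the handle closes
      print (length ls)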


So that's actually how it works if you just ignore hClose. The problem is that it sometimes matters when things get closed, so they do "need" to expose the ability to close things sooner.


Sort of. It eventually gets cleaned up by the garbage collector, yes. But that could be after an indeterminate amount of time if the GC is mark-and-sweep. My point is that in this circumstance reference counting could be used regardless, so that as soon as the last thunk is read, the file is closed. The 'hClose' is basically making a promise to close the file as soon as it is safe to do so.


> as soon as the last thunk is read, the file is closed.

That's probably doable. It's true that when the only reference to the handle in question is the one buried in the thunk pointed at by the lazy input, it should be safe to close it when a thunk evaluates to end-of-input (or an error, for that matter).

I'm not sure whether or not it'd be applicable enough to be worth doing. The immediate issues I spot are that a lot of input streams aren't consumed all the way to the end, and that you'd have to be careful not to capture a reference anywhere else (or you'll be waiting for GC to remove that reference before the count falls to zero).


Also things like unix pipes or network sockets, where the "close" operation means something different as there are multiple parties involved. Arguably the same is true of files as you could be reading a file being simultaneously written to by others.


Right. It's easy to handle the simple case, but honestly "let the GC close it" works fine in the simplest cases.


Why is it possible to "close" a file? You could have a function that mapped open files to closed files, but the open files would still be there... I think the reason this weird behavior crops up is that the entire language is designed around functions, and here you are reaching into the internal data structures, mutating state.


> Why is it possible to "close" a file?

Because the program we're compiling needs to work on actual computers, running under actual (usually at least vaguely POSIX) operating systems. In that context, it's unavoidable that the set of open file descriptors sometimes matters. It can matter because of resource limits. It can also change whether another process gets a SIGPIPE versus blocking forever. It can affect locking.


> Why is it possible to "close" a file?

I guess I would ask why it's possible to close a file that's going to be used after it's closed? Will linear types[1] solve this?

[1] https://gitlab.haskell.org/ghc/ghc/wikis/linear-types


> I guess I would ask why it's possible to close a file that's going to be used after it's closed?

I don't think there's much reason to want to do it, but it's not obvious how to enforce that while still retaining the flexibility we'd want.

Linear types expand the solution space, to be sure. Whether they "solve this" depends a bit on exactly what we consider the problem to be.
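
As a rough illustration, here are hypothetical signatures (made up to show the shape, not GHC's actual design or any real library) using the LinearTypes multiplicity arrow: every read consumes the handle and returns a replacement, and close consumes it for good, so use-after-close can't typecheck:

    {-# LANGUAGE LinearTypes #-}
    module LinearFile where

    data LinearHandle = LinearHandle  -- stand-in for a real fd-wrapping handle

    -- the %1 arrow means the handle must be consumed exactly once
    readLine :: LinearHandle %1 -> (String, LinearHandle)
    readLine = undefined  -- sketch only

    close :: LinearHandle %1 -> ()
    close = undefined  -- sketch only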


The file isn't going to be used after it's closed. The string from the file is going to be used after the file is closed. But with lazy IO, you don't have (all of) the string from the file yet, even though you've "read" it.

That is, the abstractions don't do what non-Haskell abstractions would lead you to expect.


Right. The whole point of lazy IO is that you hide the actual IO behind values that don't appear to be IO. That means your use of the file isn't visible to the type system, so it's not really reasonable to expect the type system to prevent the misuse. Unless I miss something, you can't write lazy IO without lying to the type system anyway (unsafePerformIO, or its cousin unsafeInterleaveIO).
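
For the curious, here's roughly what the lie looks like. This is a sketch in the spirit of how hGetContents is built on unsafeInterleaveIO (the real one reads in blocks and deals with errors):

    import System.IO
    import System.IO.Unsafe (unsafeInterleaveIO)

    -- each tail of the string is a suspended IO action, so forcing the
    -- string performs reads long after lazyRead itself has returned
    lazyRead :: Handle -> IO String
    lazyRead h = unsafeInterleaveIO $ do
      eof <- hIsEOF h
      if eof
        then hClose h >> pure []
        else do
          c <- hGetChar h
          cs <- lazyRead h
          pure (c : cs)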


I've had a thought for a while, that if a cop can produce their body cam footage, then they were acting as a cop. If they cannot, then they were acting as a private citizen (and their actions must be judged as though any other private citizen had done them). It seems sufficient to me, but I'd love to hear people's ideas on how it would be abused. I'm guessing it falls apart somewhere around how evidence is judged to be admissible (cops actually have tighter rules than private citizens?), but I really don't know.


That's an interesting idea, but I'm not sure how it would work in practice. How much police abuse goes unpunished due to legal protections afforded to police, vs. police departments and prosecutors protecting each other?


I believe there should be a special prosecutor specifically for police. This would remove the conflict of interest between prosecutors and the officers they rely on to work with them to do their jobs.


I believe you're thinking about their initial application. They've updated it [0] to an altitude of 550 km, so atmospheric drag should still de-orbit broken satellites fairly promptly.

[0] - <https://fcc.report/IBFS/SAT-MOD-20181108-00083>


Thank you, I was thinking about their initial application. Yes, de-orbit times from 550 km are on the order of years to possibly tens of years. The requirement from the FCC is to deorbit your satellite within 25 years after the end of mission.

Edit: Their most recent application only moved the Phase 1 satellites into the lower orbit. In later phases they are still planning on having more than 1000 satellites at the higher orbital altitude.


> Yes, de-orbit times from 550 km are on the order of years to possibly tens of years.

So it wouldn't stop progress for a century, but might it put a damper on things on the same scale as the Great Depression?


I think the correct spelling is "無", but Wikipedia lists English spellings as "mu" (from Japanese) or "wu" (from Chinese)[0]

[0] - https://en.wikipedia.org/wiki/Mu_(negative)


Thank you!


So, I think WireGuard can't, but it can be used as part of the puzzle. I'm running a mesh using WG as the security layer, and then l2tp to provide a layer 2 on which to run batman-adv. I have a bunch of machines getting DHCP addresses from my home router that way, and (I assume) batman gives me good routing. Of course, cobbling together your own "secure" mesh probably isn't ideal, but it works surprisingly well.
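
Roughly the shape of it on each node, sketched from memory (the addresses, tunnel IDs, and MTU here are made up, and wg0 is assumed to already be up between the two peers):

    # layer 2 over the WireGuard link (peers at 10.0.0.1 / 10.0.0.2 on wg0)
    ip l2tp add tunnel tunnel_id 1 peer_tunnel_id 1 encap udp \
        local 10.0.0.1 remote 10.0.0.2 udp_sport 1701 udp_dport 1701
    ip l2tp add session tunnel_id 1 session_id 1 peer_session_id 1
    ip link set l2tpeth0 up mtu 1350  # leave headroom for the WG+l2tp overhead

    # hand the l2tp link to batman-adv; bat0 then carries DHCP and the rest
    batctl if add l2tpeth0
    ip link set bat0 up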


Mind providing more details on your setup here? I tried throwing together an overlay network using OSPF, but never really got it off the ground. I'd love to hear what you've got here!


Looks like a 2 vs 3 thing:

  Python 2.7.14 (default, Jan  6 2018, 14:37:03)
  [GCC 5.4.0] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> dict(**{1:2,3:4})
  {1: 2, 3: 4}
  >>>
vs

  Python 3.4.5 (default, Jan  6 2018, 14:44:12)
  [GCC 5.4.0] on linux
  Type "help", "copyright", "credits" or "license" for more information.
  >>> dict(**{1:2,3:4})
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  TypeError: keyword arguments must be strings
  >>>


This works in Python 3.6, since ** in a dict display just merges mappings (it's only ** in a call that has to produce string-keyed keyword arguments):

    >>> {**{1:2,3:4}}
    {1: 2, 3: 4}
:)


tbrock gave the work-around, but as for the "why", it's probably because Debian uses glibc for its libc, while Alpine uses musl. musl aims to be a much more compliant libc than glibc, so when you tell it to look up a name, it probably always does so. Like tbrock said, installing a local DNS cache should help (or, failing that, an /etc/hosts entry?).


Ah ok..

My apk list looks something like

            gcc \
            make \
            libc-dev \
            musl-dev \
            linux-headers \
            pcre-dev \
            postgresql-dev \
            zlib-dev \
            jpeg-dev \
            libxslt-dev \
            libxml2-dev \
            git \

so I see musl but no glibc in there....

I do not think I want to do the /etc/hosts thing, as the IP can change for some things.. I would rather just cache it locally for, say, 1 minute or something


It's probably just nscd that is caching DNS requests on Debian, not glibc. Though I thought nscd DNS TTL caching was broken there. I could be wrong.

musl libc's DNS strategy should actually be faster than glibc's when it's used.

https://wiki.musl-libc.org/functional-differences-from-glibc...

That said, a local cache would be ideal.
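
For example, a minimal dnsmasq config would do it (a sketch; the upstream address is a placeholder, and note dnsmasq caps min-cache-ttl at an hour):

    # /etc/dnsmasq.conf
    listen-address=127.0.0.1
    no-resolv             # ignore resolv.conf, use the server line below
    server=10.0.0.53      # placeholder upstream resolver
    cache-size=1000
    min-cache-ttl=60      # floor TTLs at 60s so short-lived records still cache

Then point /etc/resolv.conf at 127.0.0.1.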


GHC does non-strict evaluation as well as tail call optimization, and it offers pretty decent stack traces. The team that implemented them wrote a paper on how they did it, including how they fold mutually recursive calls in the call stack representation so that a normal Haskell program won't cause unbounded growth of the stack representation: https://www.microsoft.com/en-us/research/wp-content/uploads/...

I wouldn't expect the same from an OO language, but I'm not sure that it's impossible either.


That's really weird. I haven't used nftables, but I'm planning to do so the next time I upgrade my router. https://stosb.com/blog/explaining-my-configs-nftables/ makes it look nearly as pretty as pf. Is there something under the hood that's awful?


That actually doesn't look too bad.

I do wish they'd just adopted the PF syntax tho. It really is the gold standard for stateful firewall definition.

Was anyone involved in the discussions around the creation of nftables that can comment on whether this was considered?


Let's see: Getting started

https://wiki.nftables.org/wiki-nftables/index.php/Main_Page#...

None of these is anything like a tutorial or introduction. "Quick reference, nftables in 10 minutes" claims to be a ten-minute guide but it's actually just an information dump without any guidance.

Some highlights: "matches are clues used to access to certain packet information and create filters according to them." My translation: "Matches are conditions for rules to apply. They match certain properties (hyperlink) of packets."

"position is an internal number that is used to insert a rule before a certain handle." My translation: "position is an index into the list of rules. It can be used to insert rules at a given position in the list."

I don't know if my translations are correct due to the absurdly bad originals. It is like the authors explain verbs without explaining the nouns they act on. For the nouns, there are mostly just tables of them without any explanation at all. In other places, the few most important nouns are explained.

This alien logic is not only in the documentation, it is also in the syntax. Nobody I know thinks like that.


Wow, that NAT syntax is just plain awful as well. :-/

For reference: https://wiki.nftables.org/wiki-nftables/index.php/Multiple_N...


That example is made needlessly complicated to compress down to a one-liner and show off maps.

It does look nicer when properly formatted as part of a rule file, however.
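
Something in this spirit, say (a sketch of a NAT table in rule-file form; the addresses and interface name are placeholders, not the wiki's example):

    table ip nat {
        chain prerouting {
            type nat hook prerouting priority -100;
            tcp dport 80 dnat to 10.0.0.10
        }
        chain postrouting {
            type nat hook postrouting priority 100;
            oifname "wan0" masquerade
        }
    }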

The docs need some work.


Yes, I see that now, and yes, those docs definitely need work.

