
I already have problems following Proposition 1, where they introduce a different DNF with twice as many clauses as the original CNF, then claim that maximizing the number of satisfied clauses in the DNF also yields a maximum-satisfying solution for the CNF.

It's a pity they don't release any source code that we could unit-test against problems with known solutions (while also profiling the run-time to check it stays within polynomial bounds) :)

edit: what am I missing? Their reduction to a DNF seems unsuitable to me.

I read Proposition 1 [1] to mean: they find an optimal solution for the DNF over V∪X, then expect the corresponding CNF C to have at least as many satisfied clauses. But isn't that insufficient? I mean, there could be an even better solution with more satisfied clauses in C that is missed entirely when only searching for solutions over V∪X. But maybe I'm just misunderstanding something.

[1] https://arxiv.org/pdf/2304.12517.pdf
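For the kind of unit-testing wished for above, a brute-force reference solver is easy to sketch. This is not the paper's algorithm, just a minimal exponential-time MAX-SAT oracle (DIMACS-style clause encoding) that a claimed polynomial solver's answers could be checked against on small instances:

```python
from itertools import product

def max_sat(clauses, n_vars):
    """Brute-force MAX-SAT: return the maximum number of clauses
    satisfiable by any assignment. Clauses are lists of non-zero ints,
    DIMACS-style: +v means variable v, -v its negation (1-indexed)."""
    best = 0
    for bits in product([False, True], repeat=n_vars):
        sat = sum(
            any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        best = max(best, sat)
    return best

# (x1 or x2) and (not x1 or x2) and (not x2): at most 2 of 3 satisfiable
print(max_sat([[1, 2], [-1, 2], [-2]], 2))  # -> 2
```

Obviously this only scales to a couple dozen variables, but that is enough to catch a reduction that silently loses solutions.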


The conversion to DNF is fine. The original clauses map directly to pairs of new clauses, so fixing an assignment to the original variables V, you can always satisfy the same number of DNF clauses. You can do worse by choosing the wrong values for X, sure, but there's always a straightforward correspondence.


Last time I read about this, the problem seemed to be buckling [1]. E.g. if you look at Euler's critical load [2], the pressure a slender column can withstand depends only on the column's aspect ratio (imagine a vacuum airship that uses internal columns to withstand the air pressure). So scaling things up means the columns' weight has to scale proportionally with total airship volume (and the air pressure is a constant anyway). See also the discussion here [3]. Disclaimer: I don't claim to understand any of this.

[1] https://en.wikipedia.org/wiki/Buckling

[2] https://en.wikipedia.org/wiki/Euler%27s_critical_load

[3] https://en.wikipedia.org/wiki/Vacuum_airship
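The aspect-ratio claim can be checked numerically from Euler's formula in [2]. A rough sketch, assuming a pinned-pinned (K = 1) solid circular column, where the critical stress works out to pi^2 E r^2 / (4 L^2), i.e. size-independent at fixed r/L:

```python
import math

def euler_critical_stress(E, r, L):
    """Critical compressive stress (Pa) at which a pinned-pinned slender
    circular column buckles (Euler's formula, K = 1).
    With I = pi r^4 / 4 and A = pi r^2, this reduces to
    pi^2 E r^2 / (4 L^2): only the aspect ratio r/L matters."""
    I = math.pi * r**4 / 4           # second moment of area
    A = math.pi * r**2               # cross-sectional area
    P_cr = math.pi**2 * E * I / L**2  # Euler's critical load
    return P_cr / A

E = 70e9  # Young's modulus, roughly aluminium (Pa)
small = euler_critical_stress(E, r=0.01, L=1.0)
large = euler_critical_stress(E, r=0.10, L=10.0)  # 10x geometric scale-up
print(small, large)  # identical stress: scaling up buys nothing
```

Since the sustainable stress stays constant under uniform scaling, the column cross-section (and hence column mass) has to grow in proportion to the volume being supported, which is the scaling problem described above.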


I did check the wiki article on vacuum airships before writing my initial comment, and it talks about the buckling of a hollow sphere, not of a column, which is why Akhmeteli and Gavrilin suggest considering alternative structures.

But regarding your point that the columns would have to scale with volume, you might be right. I was thinking they should scale with the total force, but the bigger the structure, the longer the distance over which that force has to be transmitted. So yeah, I think I got that one wrong.


I'd think that the distance limit for _terrestrial_ mobile networks comes from the guard interval of the OFDM modulation [1]. I.e. over longer distances the time offset between different reception paths (due to reflections) becomes so large that you can no longer compensate for it with just a complex gain factor per OFDM subcarrier.

AFAIR LTE (4G) even uses different guard intervals depending on rural vs. city settings, because that time offset is larger in rural areas (lower base-station density).

I would not expect those problems to be relevant for satellite communication, as the ground<->satellite link does not suffer much from the multi-path propagation of terrestrial systems. (IIRC DVB-c sat-TV broadcasts did not even use OFDM, at least not the older "v1" DVB-c standard.)

[1] https://en.wikipedia.org/wiki/Orthogonal_frequency-division_...
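The guard-interval-vs-distance relationship is simple to put in numbers. A sketch using the approximate LTE cyclic-prefix durations (normal ~4.7 us, extended ~16.7 us; the exact 3GPP values vary slightly per symbol):

```python
C = 299_792_458.0  # speed of light, m/s

def max_path_difference_km(cp_seconds):
    """Longest excess path length (reflected vs. direct ray) whose echo
    still lands inside the OFDM cyclic prefix / guard interval, so the
    receiver can absorb it with a per-subcarrier complex gain."""
    return C * cp_seconds / 1000.0

normal_cp = 4.7e-6     # ~4.7 us: normal CP, dense (urban) cells
extended_cp = 16.7e-6  # ~16.7 us: extended CP, large (rural) cells
print(max_path_difference_km(normal_cp))    # ~1.4 km of excess path
print(max_path_difference_km(extended_cp))  # ~5.0 km of excess path
```

So the extended CP tolerates roughly 5 km of multipath spread, at the cost of spending a larger fraction of each symbol on the guard interval.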


Nitpick: You probably mean DVB-S (and S2, S2X..) rather than DVB-C, and you're correct that they don't employ OFDM (they're just a straightforward single wideband channel mostly employing variations on phase-shift keying), and aren't particularly concerned with multipath interference.


> You can't get these speeds with even VDSL..

That's not quite true. At least in Germany, VDSL2 with the "supervectoring" profile 35b [1] is routinely used, advertised as 250 Mbit internet service [2].

Technologically it's quite a waste of hardware resources and electrical power to push these kinds of data rates over old twisted-pair copper (instead of deploying fiber everywhere), but that seems to be the status quo here right now.

(And I'd guess the share of people in need of satellite internet is much lower in Germany than in the US.)

[1] https://en.wikipedia.org/wiki/VDSL#Vplus/35b

[2] https://de.wikipedia.org/wiki/Supervectoring


Oh man, I remember a podcast with a network guy in Germany who hated 'supervectoring' and ranted about it at length: basically a way for the telco to get lots of money for not building any fiber.


Interesting, and what speeds can an average customer on the top tier actually get? In the UK the fastest profile used is 17a, which our telco then caps at an 80 Mbit down / 20 Mbit up line rate. In reality only a few percent of users actually achieve these speeds; the median is just over 50 Mbit, and 30 Mbit or less is not uncommon.

We do have G.fast in some urban areas, but it's only useful if the cabinet is less than 200 m or so away. We are actually making good progress on 1-10 Gbit FTTP coverage now, though.

Just thought it was an interesting, if not surprising, note that internet from space, which travels hundreds of miles through free air to the ground station where it hits fibre, is faster than we can achieve over a few hundred metres of twisted copper pair on a consumer-level budget.


I'm currently on profile 17a, I pay for 100 Mbit (downstream) and get about 90 Mbit of actual IP-level bandwidth (wget tops out at 10.7 MB/s). But I think the ATM framing that's still used on the underlying VDSL physical layer eats a significant portion of the bandwidth (due to the high cell-header overhead).


Sounds like you have a nearly perfect line, very close to the cabinet?

In the UK lines are typically around 500 m, while the worst case may be 1 km or more from the cabinet. We also have a lot of aluminium or copper-clad-aluminium lines due to copper conservation in World War 2.

All in all, your near-perfect 17a VDSL line is still slower than what Jeff is complaining about over Starlink (100 Mbit).


Yes, I can see the cabinet from my window, so it's maybe less than 100 meters of cable. Inside big cities that seems to be the state of the art; they must have placed a lot of cabinets in recent years. I'm surprised those cabinets don't give off a noticeable amount of heat, as they don't even have noticeable active cooling. That's a huge number of DSP operations they're running, like a hundred DSL modems crammed together. But maybe modern ASICs can run a single 17a VDSL line at just a few watts.


Right now the modem reports a (link layer) data rate of 98338000 bit/s. Wget tops out at 10.7 MB/s.
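Those two numbers are roughly consistent with the ATM-overhead theory. A back-of-envelope sketch (assuming classic ATM/AAL5 cell framing on the line, which is my guess, not something the modem reports):

```python
def atm_payload_rate(link_bps):
    """ATM carries 48 payload bytes per 53-byte cell, so about 9.4% of
    the sync rate is cell-header overhead before IP even starts."""
    return link_bps * 48 / 53

link = 98_338_000                  # sync rate reported by the modem (bit/s)
after_atm = atm_payload_rate(link)
print(after_atm / 1e6)             # ~89.1 Mbit/s left for AAL5/PPP/IP
# 10.7 MB/s of wget throughput is ~86-90 Mbit/s of TCP payload (depending
# on whether MB means 10^6 or 2^20 bytes), which fits once PPPoE/IP/TCP
# headers take their few percent of the remaining 89 Mbit/s as well.
```

So the cell tax alone accounts for most of the gap between the 98 Mbit sync rate and the observed goodput.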


100 Gbps over fiber is now the new norm.


Not for FTTP :). In the UK we mostly have XGS-PON or GPON, with maybe a few hundred thousand active Ethernet lines.

You can always order a 1 Gbit service if you have FTTP here, sometimes a 2.5 or 3 Gbit service, and rarely a 10 Gbit service at consumer prices. What about in your area?


Many apartments in big cities in the US have telecom infrastructure which behaves similarly to a wet string and can only pull a few megabits down on a good day. There is no alternative for them except T-Mobile Home Internet (5G) if they are lucky.


> Interestingly, videos don't seem to count? It must be a written description?

Wondering the same myself. Googling for this issue turns up this PowerPoint [1], which seems to imply on page 6 that "electronic publications, on-line databases, websites, or Internet publications" are also considered "printed documents". But this is just a PowerPoint, so who knows which standard gets applied in practice.

I get the impression that the "printed document" language got written before digital documents and the internet were a thing.

I am not a lawyer, I don't know a thing about the topic, this is not legal advice, etc.

[1] https://www.uspto.gov/sites/default/files/documents/May%20In...


The PowerPoint says "Public Use or On Sale" counts. That could be interesting, given that the lock was given to a member of the public, the Lock Picking Lawyer, for a public picking. A convenient case of having a lawyer when you need one!

https://www.youtube.com/watch?v=Ecy1FBdCRbQ


But you need to be aware of the downsides of becoming inadvertently over-reliant on GPS [1].

[1] https://en.wikipedia.org/wiki/Death_by_GPS


No offense, but that's not really a downside of GPS; people make bad decisions all the time, tech or no tech.


> Long love ergomacs!

I recently stumbled upon, and started using (and modifying), Xah's "xah-fly-keys" Emacs bindings [1], which are a somewhat more radical implementation of the ideas behind ErgoEmacs (e.g. using Emacs without any "chording", i.e. without ever having to press two keys at once apart from shift+letter).

[1] https://github.com/xahlee/xah-fly-keys


Why not just use Evil, the Vim emulation layer?


> Why not just use Evil, the VIM emulation layer.

To cite Xah [1]:

"xah-fly-keys.el is a modal editing mode for emacs, like vi, but the design of key/command choice is based on command frequency statistics and ease-of-key score. Most frequently used commands have most easy keys."

I can confirm that the strain on the hands is greatly reduced with xah-fly-keys vs. vanilla Emacs (depending on where you put the command-mode switch key), though I have no experience with Evil or vi to compare against.

[1] http://xahlee.info/emacs/misc/ergoemacs_vi_mode.html


This really disregards the compositional aspect of Vim's commands.


My not-very-well-informed take on the situation: yes, Emacs seems to be making a lot of progress in many directions. The latest git master has Elisp native-compilation support (via libgccjit), and various Elisp semantics have been modernized (e.g. support for Common Lisp-style lexical closures).

Still, it is rather lightweight (compared with, say, Firefox); you can download the upstream source and compile and install it in minutes rather than hours.


> Also, the fact that it's not rewritable means that...

> 1. Security bugs[2] will never be fixed[3], rendering the hardware unsafe to use over time.

But on the other hand: security bugs will never be introduced post-manufacturing. Additional DRM cannot be forced onto you after you bought the product. Or: DRM bugs that let you fully use your hardware cannot later be patched out by the vendor.


Without the source, there could be all sorts of time bombs or tripwires in the firmware, for instance a device refusing to operate unless another device is present, or changing behavior after a certain date.


I think this is called "green hydrogen" [1], which has applications as an energy-storage medium, but also beyond that. Energy storage also does not imply that the hydrogen needs to be turned back into grid electricity; it could be used for heating or for hydrogen cars.

[1] https://en.wikipedia.org/wiki/Green_hydrogen


> I think this is called "green hydrogen"

I'm aware of the term 'green hydrogen', which is in contrast to 'blue hydrogen' (or, as I prefer to call it, 'dirty hydrogen'), which is a product of methane gas extraction and allows some of that methane to escape.

> but could be used for heating or for hydrogen cars.

I'm under the impression that hydrogen ICE vehicles at scale are impractical due to lack of infrastructure, safety issues, etc., at least compared to batteries.


Methane synthesis is a potential solution: simpler infrastructure, or reuse of existing infrastructure.

Or potentially methanol (or better yet butanol, which is easier to deal with than methanol: less corrosive, less hygroscopic).


Hydrogen as a replacement fuel for vehicles is a dead duck; batteries have won there.

Hydrogen, or zero-net-carbon hydrocarbons synthesized from it, is a good fit where the required energy density exceeds what current lithium-chemistry batteries can provide, like aeroplanes or rockets.


If we overbuild solar to store energy for winter heating, CNG may be pretty competitive for on-road use (especially for larger vehicles).

