Hacker News | haikuginger's comments

Python urllib3 maintainer here. urllib3 made a change in December to be more RFC-compliant, which fixed this issue, but that change has not been released yet. We are in the process of looking into getting a release out.

I have verified that Requests, which builds on us, appears to have its own handling, going back at least to Requests 2.0 (released in 2013), that prevents this when it is used as an abstraction layer on top of urllib3.


Interesting. I was recently debating whether to use Requests or just urllib3 directly. Figured I'd minimize dependencies by just using urllib3 but didn't think it might actually be more secure to use Requests. Great work btw!


What is your use case that would make minimizing dependencies to this extreme a valuable activity?


I was just using urllib3 to post a form on another website and get the resulting html page, then parsed it with BeautifulSoup.
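
The whole thing was roughly this shape (placeholder URL and form fields here, obviously not the real site):

    import urllib3
    from bs4 import BeautifulSoup

    http = urllib3.PoolManager()
    # urllib3 encodes `fields` as multipart/form-data by default;
    # pass encode_multipart=False for a plain urlencoded form post.
    resp = http.request("POST", "https://example.com/search",
                        fields={"q": "something"}, encode_multipart=False)
    soup = BeautifulSoup(resp.data, "html.parser")
    print(soup.title)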

Since it was just a one-off use case and ultimately very simple, I didn't see the need for any more functionality. Why bother with the extra packages? Or do you think it's still worthwhile to use Requests? Is it not just unnecessary bloat that might slow runtime?


There's a lot to unpack in your comment, but I'll just work with the most easily verifiable thing for you: what was the response time of the resource you were querying with urllib3, and do you think using Requests instead of urllib3 directly would add (or save) an order of magnitude or two of runtime?


I admittedly didn't test the response times between the two, but just felt adding additional dependencies was unnecessary. I don't realistically expect the speed to be too different between the two, but the less I have to rely on external libraries the better. If I can get the job done with urllib3 why use Requests?

Though admittedly, after reading OP's statement, I see that Requests might actually have some extra security that urllib3 alone might not have. But barring security improvements or the need for extra features that Requests has, it seems like using Requests for my use case would be adding unnecessary complexity.


> but just felt adding additional dependencies was unnecessary

This notion, especially in Python and HTTP client programming, is wrong and will cost you many many more hours than it will save you.

Requests is an entire order of magnitude easier to use than urllib3, and while we may be dealing in minutes for this specific scenario, you will make up for any time investment you pay to learn Requests the very next time you need to do HTTP related work in the language.
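
For a concrete comparison, the same kind of form post in Requests is about this much code (placeholder URL; just a sketch, not anyone's production script):

    import requests
    from bs4 import BeautifulSoup

    # The same form post and parse, with Requests handling the encoding,
    # response decoding, and redirects for you.
    resp = requests.post("https://example.com/search", data={"q": "something"})
    soup = BeautifulSoup(resp.text, "html.parser")
    print(soup.title)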

It's a matter of not overreacting to a cost: you're paying way more than you should for a much smaller gain than you could get if you paid that cost elsewhere (by learning/using Requests and how to manage dependencies in Python, which you have to do anyway for bs4).


Not the OP, but deploying code on government servers that interact with the public web means that minimizing the required modules saves you piles of paperwork and meetings. I'd rather spend the extra two days writing/testing my own code than filling out paperwork and waiting weeks.


Fair enough in general, but IMO Requests really is worth it.


The trust dynamic is the opposite of what you think - SGX doesn't enable an enclave that protects the machine owner from the code they execute; it enables an enclave that protects executed code from the machine owner.

The largest consumer application of this is DRM - modern UHD Blu-Ray playback on a PC requires a fully SGX-enabled backend; the negotiation to obtain playback keys and the decryption of the on-disc content is done in the SGX enclave.


In that case the DRM code running in your PC is in a hostile environment. That is someone else's code running on my host and fearing me. The dynamic is exactly as I described. The question is: Whose code are you going to run on your machine? And why is there a trust issue?

We've already seen the Sony rootkit fiasco, so it doesn't seem unfair to say one should not trust what DRM providers are doing. We should definitely not let their malicious garbage run in a secure enclave where you can't tell what it's doing, as suggested by these researchers.


> So... Because lists are dynamically typed and heterogeneous, does that mean the underlying C is basically a contiguous segment of memory of python object references?

Yes. Every Python object is held by the interpreter as a pointer to a PyObject struct on the (C) heap.
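
You can see this from Python itself; a quick CPython-specific sketch (id() being a memory address is an implementation detail):

    import sys

    x = "a fairly long string object"
    a = [x, x, x]

    # Three list slots, one underlying PyObject: every slot holds the same
    # reference, and the list itself only grows by one pointer per slot.
    print(id(a[0]) == id(a[1]) == id(x))   # True
    print(sys.getsizeof(a))                # list header + 3 pointers, not 3 strings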


Not something I would put into production, but I had a fun application of this on a side project. I had the `id` of a function (but not the function object itself) and needed to recover the function. In CPython, the `id` of a function corresponds to its memory address (not sure if that can be overridden). By casting the id to a PyObject, I was able to recover the original.

    import ctypes  # CPython-only: id() is the object's memory address
    func = ctypes.cast(int(address), ctypes.py_object).value
Was kinda cool actually seeing that work.


For fuck's sake, "being gay" isn't even one of those classes.


Federal protected class, no. But in CA, yeah, it is.

https://www.employmentattorneyla.com/blog/2017/06/what-are-c...


The CPU itself had plenty of thermal headroom, but was overdrawing the power system - which then thermally throttled itself way more aggressively and in a less-controlled manner. The update changes the curve to ensure that the CPU doesn't draw more power than the power system can handle for extended periods. The CPU will still downclock itself as needed if it experiences its own thermal overload.


Are you saying Intel CPUs can now adapt to power loss, instead of just not working? I doubt that very much. The CPU got too hot (easily reaching 100°C), so there was definitely not "plenty of headroom"; in fact, there was no headroom at all.


It's not a power loss; the voltage just drops. Modern VRMs can also signal their power capabilities and reserves.

If the CPU draws more current than the VRM can handle, this is usually okay within bounds for a very short time; after that you'll get a dropping voltage as the VRM starts to self-regulate, and the CPU will downclock in response.


I'm curious to see how the "Quad Bayer" mosaic works out. Other manufacturers have tried novel filter patterns before, but nothing so far has really been able to compete.

Essentially, almost all digital cameras today use a planar CMOS sensor with alternating RGB-sensitive pixels, arrayed like so:

    RGRGRGRG
    GBGBGBGB
    RGRGRGRG
    GBGBGBGB
This pattern is not perfect, but is highly effective. Luma (color-independent) resolution is essentially equivalent to the actual number of pixels, while chroma (color-dependent) resolution is only slightly less - we essentially get one "real" point of color information at each intersection of four color pixels, because at each of those intersections we have one red, one blue, and two green pixels.

In other words, the luma information we gather at a pixel of a given color contributes directly to each of the effective pixels formed, at each of its four corners, by it and its adjacent neighbors. In this 8x8 Bayer-pattern pixel grid with 64 real pixels, we get 49 effective chroma pixels: one for each intersection of 4 physical pixels.
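
As a quick sanity check on that count (a trivial sketch):

    # An 8x8 grid has a 7x7 grid of interior corners, and each corner's four
    # surrounding pixels contribute one R, one B, and two G samples.
    w = h = 8
    print((w - 1) * (h - 1))   # 49 effective chroma pixels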

In comparison, here's the pattern for "Quad Bayer":

    RRGGRRGG
    RRGGRRGG
    GGBBGGBB
    GGBBGGBB
I'm concerned that chroma resolution and overall color accuracy will be much lower with this pattern. Essentially, with the original Bayer demosaicing, you only need to sample from the four color pixels adjacent to each corner in order to get a sample of all three channels, and the pattern gives equal weight to both red and blue, while providing extra accuracy in the green channel that human vision is most sensitive to.

In comparison, as far as I can tell, a single "effective pixel" (one with information on all three channels) using the Quad Bayer pattern has to be made up of data from nine individual pixels. Additionally, when an effective pixel is centered on an actual pixel with either red or blue filters, that color is relatively dominant in the pixels considered - it'll be equally weighted with the green channel, and the opposing color will only make up 1/9 of the total signal composing that effective pixel. Effective pixels centered on green pixels will give equal weight to red and blue, with 5/9 of the weight given to the green channel.

Granted, the sensor should still be able to produce a full 48MP of luma resolution, but chroma detail will be much more "smeared" because of the wide area that has to be considered to get a full color pixel, and the more substantial overlap of that full color pixel with other full color pixels. Color accuracy will likely also be lower, because in effective pixels centered on red and blue pixels, only a single pixel of the opposing color will be used, which means that any noise in that channel will have an outsized impact on the overall color.

What this boils down to is that, when used as a 48MP sensor, this sensor will have entirely different imaging characteristics than a traditional Bayer imager, and that those characteristics will be highly dependent on how the output of this sensor is processed - which will be interesting in a world full of software highly optimized to demosaic Bayer-pattern images.

What's slightly more interesting is the high-sensitivity 12MP mode. Essentially, it's an attempt to reduce the impact of random noise in the image by adding together four pixels of each channel to produce a "superpixel" less impacted by noise overall. These superpixels can then be processed in a standard Bayer pattern as a 12MP effective image.
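
That binning step is easy to picture in code; here's a minimal numpy sketch (illustrative only, not the sensor's actual readout path):

    import numpy as np

    raw = np.random.randint(0, 1024, size=(8, 8))   # stand-in for raw Quad Bayer samples

    # Sum each 2x2 block of same-color pixels into one superpixel; the 4x4
    # result can then be demosaiced as an ordinary Bayer image.
    binned = raw.reshape(4, 2, 4, 2).sum(axis=(1, 3))
    print(binned.shape)   # (4, 4)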

Thinking about it overall, though, I become more and more confused. In both of these modes, this pattern doesn't give us anything, really, that we can't already do using a Bayer filter.

Let B represent a sensor using a standard Bayer filter pattern, and let Q represent a sensor using this "Quad Bayer" pattern, where each of these patterns have a red pixel in the top-left corner.

Let any given effective pixel be represented by a 3-tuple of the form (R, G, B), where R, G, and B are the number of physical pixels sensitive to each of the red, green, and blue channels which compose that effective pixel.

Let f(p, w, h, s, i) be a function returning a two-dimensional matrix of all the effective pixels produced by a matrix of physical RGB pixels, laid out in pattern p, with actual pixel width and height w and h, where an effective pixel measures s actual pixels horizontally and vertically, and where an offset of i actual pixels in either vertical or horizontal directions produces the "next" pixel in that direction.
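
As a concrete illustration, here's a small Python sketch of f as just defined, with the patterns B and Q represented as (row, col) -> color functions (the names bayer and quad_bayer below are my own); the matrices further down can be reproduced by calling it.

    def bayer(r, c):
        # Standard Bayer: red at even row/even column, blue at odd/odd, green elsewhere.
        if r % 2 == 0:
            return 'R' if c % 2 == 0 else 'G'
        return 'G' if c % 2 == 0 else 'B'

    def quad_bayer(r, c):
        # Quad Bayer: the same layout, with each filter covering a 2x2 block.
        return bayer(r // 2, c // 2)

    def f(p, w, h, s, i):
        """Matrix of (R, G, B) pixel counts for each s-by-s effective pixel,
        stepping i physical pixels between adjacent effective pixels."""
        out = []
        for top in range(0, h - s + 1, i):
            row = []
            for left in range(0, w - s + 1, i):
                counts = {'R': 0, 'G': 0, 'B': 0}
                for r in range(top, top + s):
                    for c in range(left, left + s):
                        counts[p(r, c)] += 1
                row.append((counts['R'], counts['G'], counts['B']))
            out.append(tuple(row))
        return tuple(out)

    # f(bayer, 4, 4, 2, 1), f(quad_bayer, 4, 4, 3, 1), f(bayer, 8, 8, 4, 2), etc.
    # reproduce the matrices shown below.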

Thus, our standard Bayer pattern produces the following:

    f(B, 4, 4, 2, 1) =>
    (
        ((1,2,1),(1,2,1),(1,2,1)),
        ((1,2,1),(1,2,1),(1,2,1)),
        ((1,2,1),(1,2,1),(1,2,1))
    )
The 9-pixel-effective-pixel Quad Bayer pattern produces this:

    f(Q, 4, 4, 3, 1) =>
    (
        ((4,4,1),(2,5,2)),
        ((2,5,2),(1,4,4))
    )
Note that there are fewer effective pixels for the same total number of pixels - that's okay, though, because the number of effective pixels approaches the number of total pixels as the sensor scales in the X and Y dimensions - this very small hypothetical sensor doesn't benefit from that scale yet.

You can also see that each effective pixel is composed of a larger number of physical pixels - there's a tradeoff there, in that this means that overall, noise should have a smaller impact on the value of a given pixel, but there's a loss of resolution because those pixels are spread over a wider area.

This raises the question, "what if we do a 9-pixel effective pixel on a standard Bayer pattern?" Well, we get this:

    f(B, 4, 4, 3, 1) =>
    (
        ((4,4,1),(2,5,2)),
        ((2,5,2),(1,4,4))
    )
Interestingly, while the exact arrangements of the different color channels within the effective pixels are different, the total number of pixels of each channel remains completely identical, meaning that any given effective pixel should have identical noise characteristics to the Quad Bayer pattern. In fact, it's arguable that the Bayer pattern is better, because the color physical pixels are more evenly distributed around the effective pixel.

What if we do the high-sensitivity superpixel sampling? For the Quad Bayer pattern, it looks like this:

    f(Q, 8, 8, 4, 2) =>
    (
        ((4,8,4),(4,8,4),(4,8,4)),
        ((4,8,4),(4,8,4),(4,8,4)),
        ((4,8,4),(4,8,4),(4,8,4))
    )
And for the standard Bayer, like this:

    f(B, 8, 8, 4, 2) =>
    (
        ((4,8,4),(4,8,4),(4,8,4)),
        ((4,8,4),(4,8,4),(4,8,4)),
        ((4,8,4),(4,8,4),(4,8,4))
    )
Again, sampling in a similar pattern gives the same overall result. So, all else being equal, I'm not sure it makes sense.

Of course, there is the possibility that all else is not equal. Having multiple adjacent pixels of the same color could enable consolidating the signals of those pixels together earlier on in the image processing pipeline into an actual lower-resolution standard-Bayer signal. That could actually have real benefits if that early-stage signal combination results in a greater signal amplitude that drowns out noise.

Basically, this has all been kind of stream-of-consciousness and much longer than I originally planned, but here's the Cliff Notes from what I can tell.

In comparison to a Bayer sensor of the same pixel resolution...

Pros:

- (If implemented to take advantage, possibly) Ability to act as unified large pixels, increasing SNR at lower resolution settings

Cons:

- Less-fine maximum chroma resolution when all pixels are active


GP is saying that since there is no license, the code is copyrighted and there is no allowable use anyone could put it to. Therefore, since you own the copyright, and did not license the code to a third party, you could have the app using your code taken down from the App Store.

In contrast, if you had put an open source license on it, then anyone would be well within their rights (assuming the license allows it) to compile and release a version to whatever app store they want.


> In contrast, if you had put an open source license on it, then anyone would be well within their rights (assuming the license allows it) to compile and release a version to whatever app store they want.

This is actually potentially untrue, as some versions of the GPL require that the end user must not be restricted wrt the app they download, and that's not compatible with Apple's App Store requirements. See e.g. https://apple.stackexchange.com/questions/6109/is-it-possibl...


Also, the Mac has a webcam instead of a nose hair viewer.


Ha. I've seen many a review that calls that out actually. The consequence of edge-to-edge screen. Gotta put that cam somewhere! I'd disable it and get a hi-qual external.


Maybe the next version will adopt the 'notch' that's becoming popular on mobile phones!


Having wireless providers who are able to provide landline-level speeds means that many markets will go from having one or maybe two real ISP options to having three or four. Competition means that if the benefits of net neutrality are desirable, customers will prefer an ISP (wireless or otherwise) that provides those benefits.

Of course, most actual counterexamples to net neutrality are seen as positives (free Netflix or Hulu with usage not counted towards a data cap) rather than negatives, so it might not work out.


I really don't get why people think this is possible. Even with microcells and loads of spectrum, you might be able to get 2 gigabits/sec of internet per cell, which would be enough for perhaps a couple hundred streams (not including all the other internet services people require all the time).

Considering most cells right now serve thousands if not tens of thousands of devices, there is simply no way that wireless broadband will ever be able to service that, unless you have hundreds of femtocells, but at that point you might as well just deliver fibre to the home as you'll be a few metres from the premises.

5G really changes nothing of this. Shannon's law dictates this and we are close to topping out on it in terms of radio efficiency.
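
For context, the rough math behind that claim (illustrative channel width and SNR, not measured figures):

    import math

    def shannon_capacity_bps(bandwidth_hz, snr_db):
        # Shannon limit: C = B * log2(1 + SNR)
        return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

    per_stream = shannon_capacity_bps(100e6, 20)   # 100 MHz channel at 20 dB SNR
    print(per_stream / 1e6)                        # ~666 Mbit/s per spatial stream

    # Even a generous 2 Gbit/s cell, divided by per-viewer video bitrates:
    print(2e9 / 25e6)   # ~80 concurrent 4K streams at 25 Mbit/s
    print(2e9 / 5e6)    # ~400 concurrent HD streams at 5 Mbit/s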


5G changes things dramatically. 5G deployment will be heavily focused on small cells. That means you can go to higher frequencies, because you don't care as much about propagation, and you've got much more bandwidth available at higher frequencies. So cell size goes down, users per cell goes down, and bandwidth per cell goes up.

That still ends up being massively cheaper than FTTP. Getting fiber into peoples' houses is an incredibly labor-intensive and high-touch process. I just had fiber installed at my house. It took half a day to run fiber down the main road about 1/3 of a mile to my subdivision. Another half day to run it 200 feet down my residential road. Almost a full day to dig under my driveway into my house. And a solid half day to install the CPE. With small cells, you'd basically only have to do the first step. You could've installed a small cell serving hundreds of people in the time it took to retrofit just my house.


5G might end up cheaper than FTTP, but I wouldn't get my hopes up on it being massively cheaper.

Higher frequencies will require either line of sight or very short distances to the small cell. The small cells themselves will incur costs both CAPEX and OPEX.

Basically the only part 5G will replace in a FTTP network is the drop. And that's where the density and the topography is cooperating. Whereas if you install a fiber drop, you'll be set for 20+ years and you won't have to install, maintain and power a small cell forever.


You’re not just getting rid of the drop, but also the last 100 meters or so through the subdivision. Moreover, the drop and CPE install is 30-40% of the cost of deployment.

Also, fiber is not fire and forget. Just the other day a tree took out the cable to my house. Buried cable has less maintenance, but also much higher initial costs, increasing the cost advantage of wireless for the last 200m.


Like I wrote in the grandparent, it's a density thing. How many subscribers have line of sight (or close enough) for the 5G small cell to work? At some point it's going to be more cost effective to do FTTP.

The CPE cost is negligible. You can pick one up for $20. True, the drop will cost you, but it has a far longer lifespan than the small cell. It's not like the small cell, its installation, permits, engineering, pole rental or tower, power, etc. are free either.

Like I stated earlier, 5G may be cheaper than FTTP. Or it may not. It may not even be available in your area due to insufficient density. Even if 5G is cheaper, it's not going to be massively cheaper.


Sorry, how is this any different to LTE on 3.4GHz or 2.6GHz? There is literally nothing different between 4G and 5G on this. 4G deployment on 3.4 or 2.6 could equally be said to be focussed on small cell, but we also have massive worldwide deployments on 450, 600, 700 and 800MHz. So is LTE also about long range?


US 5G deployment will leverage spectrum above 24 GHz where it is feasible: https://arstechnica.com/information-technology/2015/10/5g-mo.... The FCC is working on auctions of 100+ MHz channels in these bands: http://www.telecompetitor.com/fcc-proposes-schedule-for-28-g.... AT&T already spent more than a billion dollars buying up that spectrum. It’s starting 5G trials at 15 GHz: http://www.lightreading.com/mobile/5g/atandt-5g-trials-to-st...

In the US, almost all LTE deployment is below 2 GHz.


24GHz will require line of sight. I can't see how this is going to work for anything more than cell backhaul or fixed wireless access in very rural areas (without trees)?


At relatively short distances (hundreds of meters), enough radio waves bounce around in the environment to still make it to the receiver. https://spectrum.ieee.org/telecom/wireless/smart-antennas-co.... You need beam-forming antennas to take advantage of this fact.

See: http://about.att.com/innovationblog/two_years_of_5g_tria

> Learned mmWave signals can penetrate materials such as significant foliage, glass and even walls better than initially anticipated.


>5G really changes nothing of this. Shannon's law dictates this and we are close to topping out on it in terms of radio efficiency.

10 to 15 years ago, that is what I thought. Until Massive MIMO. Some of the crazy stuff we are doing now in wireless tech was literally deemed theoretically impossible when I did some very early work on 3G at university. It doesn't break Shannon's law; we just find many ways around it.

Many doubted Massive MIMO would ever work, including industry experts. I doubted it too, if anyone remembers something similar called pCell from many years ago. It turns out it did work. It was originally created for and based on TDD; we could do 128 x 128, or even 1024 x 1024, antennas. Sprint is already running 64 x 64 / 128 x 128 on its network. [1] There is a cost in power etc. involved, but the tech works. We thought this crazy thing would not work on FDD, which is what the majority of the world uses; it turns out they have found ways around that too. Both Ericsson's and Huawei's solutions are much better than some originally thought. Not as elegant or as effective as on TDD, but it still works.

We have small cells, and this time they actually work as advertised, combined with LAA in the 5GHz spectrum.

5G provides an order-of-magnitude increase in capacity. It also makes some backend services cheaper to run; a few countries have already seen price wars start as carriers bid to attract more customers onto their networks, since they have more capacity to spare.

I still remember, a few years ago, when my friend was getting her first fibre installation at her home. It was such a hassle with the cable laying, ONT modem, etc. She was very frustrated and asked a simple question that stuck with me at the time: why can't we all just use mobile? Mobile is enough for me, so why can't my home PC use 4G too? Will there be a day when they send me a "modem" with a SIM card in the mail, I plug it in, and it just works?

I thought she was crazy. That is not possible; what makes her think that? You told me 5 years ago (that is, 8-9 years back from today) that smartphones weren't a thing; now everyone has one and we are watching video on them already. Surely 10 years from now that should be possible, right?

I said no, it is not possible. I had Shannon's law in my mind. I had my BitTorrent client downloading terabytes of data in my mind. Cell tower contention in my mind. Now I am not so sure.

[1] https://www.youtube.com/watch?v=7onQZ51b0yc


If there is more demand for cell service the cell companies will build more towers. Lower the power levels and use higher frequency bands to reduce interference.

By your numbers, 200 clients x $50/mo = $10k revenue per month for a tower. A new cell tower costs $150k, so it pays for itself in 15 months.
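
Spelled out (same illustrative figures):

    subscribers = 200
    price_per_month = 50        # USD
    tower_cost = 150_000        # USD, the claimed cost of a new tower

    monthly_revenue = subscribers * price_per_month   # $10,000
    print(tower_cost / monthly_revenue)               # 15.0 months to pay back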


It can take years in the UK to add a new cell tower. Apply to the local govt for planning, then more applications to dig the road up for fibre. And then you need a power connection, which often isn't trivial or quick to get. $150k sounds unbelievably low.


You have to remember that the denser you install towers, the shorter and less obtrusive they need to be. I.e., you don't need to be nearly as high if you only need line of sight over one mile instead of ten.



Do what Verizon did in Boston. Get the local government to give you access ostensibly for FTTH broadband like FiOS, then go "psych!" and use the fiber to run cell towers.


Are these providers actually going to compete with landline ISPs? My landline ISPs over the last couple of years have had monthly data caps ranging from 256GB to 1TB. My cell plan currently throttles me after I exceed 6GB... That's two orders of magnitude by which my wireless provider would have to increase the cap in order to compete with the landline ISP.


The plan appears to be, yes: https://www.pcmag.com/news/357374/verizon-no-4g-level-data-c.... Verizon execs are throwing around 5G caps in that ballpark.


That's a tiny ballpark:

> During a roundtable, VP of network support Mike Haberman, some other Verizon folks, and the assembled journalists agreed that an average data cap in the vicinity of 180GB/month would satisfy the average consumer.

> "That shouldn't be a problem with 5G. What does 4K video use? Think about how many 4K TVs you can put on a service that's a true 1 gigabit to your house," Haberman said.

I don't see that 180GB lasting long at all...


The average Netflix user watches 40-50 hours per month, and it's going down: https://techcrunch.com/2017/12/11/netflix-users-collectively.... Netflix says 7GB/hour for 4K (~350 gigabytes), but HEVC will halve that number. That's under 180GB, and that's assuming all streaming is 4K, which is far from the case.
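
Working through those cited numbers (a rough check, not measured data):

    hours_per_month = 50        # top of the 40-50 hour range cited
    gb_per_hour_4k = 7          # Netflix's own figure for 4K

    all_4k = hours_per_month * gb_per_hour_4k   # ~350 GB if every hour were 4K
    with_hevc = all_4k / 2                      # HEVC roughly halves that
    print(all_4k, with_hevc)                    # 350 175.0 -> under a 180 GB cap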


That's also just for one person on Netflix. Now add a roommate, Youtube, Twitch, general overhead (email, web browsing, app updates, etc).


That is per account, not per person. For a typical household, all other bandwidth usage pales in comparison to Netflix. YouTube, etc., streaming at HD or lower resolutions isn't going to move the needle much compared to the 4K streaming in the calculation above.


T-Mobile's soft cap is at 50GB, and that's on LTE. With 5G speeds going up 10x, I wouldn't be surprised if caps go up 10x as well. We're nearly there.


Hmmm. I'm on T-Mobile, but I share the plan with my brother and he's the one who actually cares about this stuff. Maybe I don't get throttled at 6GB, but I just lose a discount. Either way, I'm definitely incentivized to stay under 6GB on T-Mobile.


Verizon really really wants to be your main provider. Once they can provide you with a 1gbps wireless service, they will compete on price and data caps with your local wireline ISP to win your business.


We will see. Would be a first for wireless in a metropolitan area.

(WISPS in rural areas are a bit different)


> Having wireless providers who are able to provide landline-level speeds ...

Are they going to cap me at 1000GB like Comcast, also?


I think it's much more arbitrary. Silverlight is on life support now that EME/HTML5 video is available on most platforms, but Netflix has historically chosen to support 1080p video only in the first-party browser for any given OS (Chrome on ChromeOS, Safari on macOS, and IE/Edge on Windows).

EDIT: Looking at some stuff, it seems like Netflix might "trust" a first-party browser to select the highest-quality stream that it has hardware video decode support for. In comparison, it sounds like there are extensions that enable 1080p in Chrome by pushing it into the list of playlist options, but it can cause a serious performance hit by decoding on the CPU.

