jbellis's comments | Hacker News

Not because of a sudden outbreak of sanity, but because they have CT scanners now.

3-1-1 is rarely enforced. I was always confused about why the 100 ml limit existed, since I could just take multiple 100 ml bottles of whatever I wanted and it was okay. Then I realized that technically I could only take 3 bottles, but I've been getting away with more for decades.

It's not 3 bottles, it's 3.4 oz or 100 ml.

Isn't it whatever fits in a quart-sized ziploc? I presume that's where the other poster estimated "only 3 bottles."

3-1-1 is an awful mnemonic, but it's basically: 3.4 oz containers in 1 1-quart ziplock bag.

I guess the comms people got their hands on it before they deployed the original mnemonic: 3.4-1-1

It’s as many bottles of 100 ml or less as you can fit in a 1-liter bag.

Yeah, but aren't you allowed to exit and re-enter security as many times as you like, as long as you have a valid ticket?

They'd probably find it suspicious

Then you hide them somewhere inside and go back out and in again

OR, you just have one or more accomplices ;-)

> Not because of a sudden outbreak of sanity, but because they have CT scanners now.

What is the evidence for believing so strongly that airports all over the world have been prohibiting large amounts of liquids due to widespread insanity?


Yeah, I flew thru Eindhoven Airport in the Netherlands a few years ago, and I couldn't believe they let me through with water.

The security used something I would describe as out of an Iron Man film, they were zooming around a translucent 3D view of my backpack. (It was on an LCD display instead of hovering midair, but I was still impressed. But the fact they let me keep the water was even more amazing, hahah.)


> The security used something I would describe as out of an Iron Man film, they were zooming around a translucent 3D view of my backpack. (It was on an LCD display instead of hovering midair, but I was still impressed.

I just flew with two laptops in my backpack which, for the first time, I didn't have to take out (I haven't flown in a while), plus a custom PCB with a couple of Vivaldi antennas sandwiched in between the laptops.

It was a real trip watching them view the three PCBs as a single stack, then automatically separate them out and rotate them individually in 3D. The scanner threw some kind of warning, and the operator asked me what the custom PCB was, so I had to explain that it was a ground-penetrating radar. (That didn't go over well; I had to check the bag.)


Tel Aviv has been allowing this for quite some time (10 years?). I guess they update their security devices as soon as new technology becomes available.

They don't advertise it; I found out by accident. I was trying to empty my water bottle by drinking it when a security person told me to just put it through with the rest of my stuff. I had no idea that was a thing and was pretty confused.


They’re multi-wavelength CT scanners. Basically, whenever you see a 4:3 box with a "Smiths" logo over the belt, it's going to be a pretty painless process (take nothing out except analog film).

You can do realtime 3D flythroughs on CT scans with open source viewers. If you've ever had one, get your DICOM data set and enjoy living in the future.

Can you recommend one? I've tried Aeskulap and Amide and I found it hard to get the 3D views to work.

InVesalius works well. The UI lacks some polish, but the rendering beats what most physicians have access to.
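
If you want to poke at the raw data yourself before settling on a viewer, here's a minimal sketch of loading a CT series into a NumPy volume with pydicom (the directory path is a placeholder; viewers like InVesalius do all of this for you):

    import glob
    import numpy as np
    import pydicom

    # read every slice in the series (path is hypothetical)
    slices = [pydicom.dcmread(p) for p in glob.glob("ct_series/*.dcm")]
    # order slices along the scan axis by the z component of ImagePositionPatient
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    # convert raw detector values to Hounsfield units with the stored rescale tags
    volume = volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)
    print(volume.shape)  # (num_slices, rows, cols), ready to hand to a 3D renderer

From there any volume renderer (or InVesalius itself) can do the flythrough.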

I've seen this in the US too; the newer machines let them spin the scan around in 3D space, which must make it much easier to tell whether something needs inspection or not.

Yeah, these are pretty common in the US, but they're just not ubiquitous. Many airports will still have a CT machine next to an old one, and it just depends on which line you end up in.

I would say just as important, if not more so, is probably advanced nitrate detection.

Love to see people leveraging static analysis for AI agents. Similar to what we're doing in Brokk but we're more tightly coupled to our own harness. (https://brokk.ai/) Would love to compare notes; if you're interested, hmu at [username]@brokk.ai.

Quick comparison: Auditor does framework-specific stuff that Brokk does not, but Brokk is significantly faster (~1M loc per minute).


Would be really cool to compare notes :D Sent from a "non tech" company email so it doesn't get filtered lol.

My speed really depends on language and what needs indexing. On pure Python projects I get around 220k loc/min, but for deeper data flow in Node apps (TypeScript compiler overhead + framework extraction) it's roughly 50k loc/min.

Curious what your stack is and what depth you're extracting to reach 1M/min - those are seriously impressive numbers! :D


Now I'm curious about what fastutil's implementation is doing.


> where are the professional tools, meant to be used for people who don't want to do vibe-coding, but be heavily assisted by LLMs?

This is what we're building at Brokk: https://brokk.ai/

Quick intro: https://blog.brokk.ai/introducing-lutz-mode/


Has the current administration made it harder to qualify for an O-1 visa?


A bit but not significantly so.


SWE-bench is (1) terrible and (2) saturated


"Released" but not available on API. I think they rushed it out before Gemini 3 drops.


Two great points here: (1) quantization is how you speed up vector indexes, and (2) how you build your graph matters much, much less*

These are the insights behind DiskANN, which has replaced HNSW in most production systems.

Past that, well, you should really go read the DiskANN paper instead of this article; product quantization is way, way more effective than simple int8 or binary quant.

here's my writeup from a year and a half ago: https://dev.to/datastax/why-vector-compression-matters-64l

and if you want to skip forward several years to the cutting edge, check out https://arxiv.org/abs/2509.18471 and the references list for further reading

* but it still matters more than a lot of people thought circa 2020
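
For anyone who wants the gist of product quantization without reading the paper, here's a toy sketch (my own illustration, not DiskANN's or JVector's code): split each vector into M subvectors, k-means each subspace, store one centroid id per subspace, and answer queries from a per-query lookup table.

    import numpy as np
    from sklearn.cluster import KMeans

    def pq_train(X, M=8, K=256):
        """Train one K-centroid codebook per subspace; X is (n, d) with d divisible by M."""
        sub = X.shape[1] // M
        return [KMeans(n_clusters=K, n_init=4).fit(X[:, m*sub:(m+1)*sub]).cluster_centers_
                for m in range(M)]

    def pq_encode(X, codebooks):
        """Replace each subvector with its nearest centroid id (1 byte per subspace if K <= 256)."""
        sub = codebooks[0].shape[1]
        codes = np.empty((X.shape[0], len(codebooks)), dtype=np.uint8)
        for m, cb in enumerate(codebooks):
            d2 = ((X[:, m*sub:(m+1)*sub, None] - cb.T[None]) ** 2).sum(axis=1)
            codes[:, m] = d2.argmin(axis=1)
        return codes

    def pq_distances(q, codes, codebooks):
        """Approximate squared L2 from a full-precision query to every encoded vector."""
        sub = codebooks[0].shape[1]
        # (M, K) table of query-subvector-to-centroid distances, then a gather + sum
        table = np.stack([((q[m*sub:(m+1)*sub] - cb) ** 2).sum(axis=1)
                          for m, cb in enumerate(codebooks)])
        return table[np.arange(len(codebooks)), codes].sum(axis=1)

With K = 256 and M = 8, each vector compresses to 8 bytes, which is where the memory and bandwidth win over int8 comes from; systems like DiskANN then rerank the surviving candidates with full-precision vectors.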


Hi! I worked with product quantization in the past, in the context of a library I released to read LLMs stored in llama.cpp format (GGUF). However, in the context of in-memory HNSWs, I found it to make only a small difference: the recall is already almost perfect with int8. Of course it is very different when you are quantizing an actual neural network with, for instance, 4-bit quants; there it makes a huge difference. But for my use case I picked whatever would be the fastest, given that both performed equally well. What could potentially be done with PQ in the case of Redis Vector Sets is to make 4-bit quants work decently (though not as well as int8 anyway), but given how fat the data structure nodes are per se, I don't think this is a great tradeoff.

All this to say: the blog post mostly gives the conclusions, but to reach that design many things were tried, including things that looked cooler but in practice were not the best fit. It's not by chance that Redis HNSWs are easily able to do 50k full queries/sec on decent hardware.


If you're getting near-perfect recall with int8 and no reranking, then you're either testing an unusual dataset or a tiny one, but if it works for you then great!


Near-perfect recall vs. fp32, not in absolute terms. TL;DR: it's not int8 that ruins it, at least if the int8 quants are computed per-vector and not with global centroids. Also, recall is a rather illusory metric, but that's an argument for another blog post. (In short, what really matters is that the best candidates are collected: the long tail is full of elements that are far enough away, or practically equivalent, anyway. And this all happens under the assumption that the embedding model already captures the similarity our application demands, which is itself an illusion, so if the 60th result shows up 72nd, it normally does not matter.) The reranking that really matters (if there is the ability to do it) is the LLM picking / reranking: that, yes, makes all the difference.
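
For concreteness, here's what per-vector int8 quantization (as opposed to global centroids) amounts to; a toy sketch, not Redis's actual code:

    import numpy as np

    def quantize_int8(v):
        """Quantize one float vector to int8 codes plus its own scale factor."""
        scale = float(np.abs(v).max()) / 127.0
        if scale == 0.0:
            scale = 1.0  # degenerate all-zero vector
        codes = np.clip(np.round(v / scale), -127, 127).astype(np.int8)
        return codes, scale

    def approx_dot(qa, sa, qb, sb):
        """Approximate the original dot product from two quantized vectors."""
        return sa * sb * float(np.dot(qa.astype(np.int32), qb.astype(np.int32)))

Because each vector carries its own scale, one vector's outliers don't eat into the precision of the others, which is presumably why recall stays so close to fp32 in the setup described above.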


About twice the price of the Dell 8k.


This is extremely well trodden ground, and he's right. The world doesn't need him to spend time explaining that water is wet.


And I think he's also acknowledging that not everybody has an application that needs these performance optimizations.


I have an unpopular opinion: I simply do not read anything on Medium anymore. In fact, I have a uBlock rule that blocks the site so I don't accidentally go there or give them traffic anymore.

I saw Go in the title so I just checked the HN comments first.

