When an asteroid strikes Earth, its kinetic energy is rapidly transferred to the atmosphere, surface, and subsurface, primarily in the forms of thermal energy (heating and vaporization), mechanical energy (crater formation and ejecta), and seismic energy (earthquakes and waves).
I think the question was less about the efficacy of ABS, and more about the failure mode. Is it possible for the ABS system to "fail open" unintentionally, such that depressing the brake pedal has no effect whatsoever?
There is no credible way to answer that question. Analysis cannot prove the absence of such a flaw. All we have are records, and those are incomplete. What we can say is that there is no record of ABS “fail-open” flaws plaguing passenger vehicle designs, and the record is now rather long.
My employer blocks access to Google Docs as part of our confidential information protection policy. They're certainly not the only ones. I'd hesitate to call on-premises file management "specialized needs" - rather, it's (still) the default, particularly if you take a peek outside of the software bubble.
Historically this was a huge concern because not every manufacturer implemented their ESD protection properly; or, on occasion, the process technology meant that ESD protection would hinder the functionality of the device. This happened a lot in RF circuits, and to this day many RF instruments are extremely sensitive to ESD events. Board assembly was also a lot less automated in the early days of integrated circuits, so there were more human handlers and therefore more opportunities for ESD events.
Modern IC ESD protection is very effective against a few moderate energy events distributed on different pins, and there's a few industry standards that help determine the required amount of caution for dealing with a particular IC (HBM or human-body model, and CDM or charged-device model, are common - targeted toward human assembly procedures and things like triboelectric or inductive charge buildup). In the right climate, a single high energy event is sometimes enough to degrade functionality or (rarely) completely destroy the device, so board assembly and semiconductor manufacturing facilities still require workers to use wrist straps, shoe grounders, mats, treated floors, climate control, etc. Some high voltage GaN work I did years ago required ionizing blowers (basically a spark gap with a fan) because GaN gates are easy to destroy with gate overstress, and there are risks involved with unintended high voltage contact with typical ESD protective solutions. In another embedded-focused lab, the only time I've ever seen someone put on a wrist strap was for handling customer hardware returns. It really depends what you're working with, and in what environment.
I've more frequently (once or twice a year) had devices which exhibit symptoms of something being wrong at the inputs or the outputs, but only on a specific pin or port. For outputs, some symptoms include inadequate slew rate, an output that appears intermittently stuck, or higher-than-expected voltage noise (though this is a non-exhaustive list). For inputs, the symptoms are more complex - sometimes there's a manifestation at the outputs for amplifiers or other linear circuits, but feedback systems or digital systems might behave as though an input is stuck, toggling slowly, etc., which is difficult to distinguish from other, more common errors. I've also directly been the cause of several ESD failures, but in those cases the test objective was to determine the failure thresholds for the system, so I'm not sure that counts.
I've had a customer hardware failure that was eventually traced back to electrical overstress damage on a single pin of an IC near the corner of a board, right where someone might put their thumb if they were holding the board in one hand. In the absence of a better explanation, I suggested this was an ESD failure due to handling error. I never heard about it again, which is weak evidence in favor of a one-off ESD event.
These parts are bonkers. The ringing on its own outputs with a few inches of trace or (heaven forbid) a connector is regularly sufficient to self-trigger the automatic direction reversal. These things genuinely deserve the "experts only" label - they are close to unusable in the situations where you'd be most inclined to reach for them.
Learned a few tricks that I'm sure are buried on fstring.help somewhere (^ for centering, # for 0x/0b/0o prefixes, !a for ascii). I missed the nested f-strings question, because I've been stuck with 3.11 rules, where nested f-strings are still allowed but require different quote characters (e.g. print(f"{f'{{}}'}") would work). I guess this got cleaned up (along with a bunch of other restrictions like backslashes and newlines) in 3.12.
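For reference, a quick sketch of those format-spec tricks; the specific values are just illustrative, and the behavior matches CPython 3.11+:

```python
value = 42

# '^' centers the value within the given field width
assert f"{value:^10}" == "    42    "

# '#' adds the 0x / 0o / 0b prefix for the hex/octal/binary presentations
assert f"{value:#x}" == "0x2a"
assert f"{value:#o}" == "0o52"
assert f"{value:#b}" == "0b101010"

# '!a' applies ascii() to the value before formatting
assert f"{'café'!a}" == "'caf\\xe9'"
```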
F-strings are great, but trying to remember the minute differences between string interpolation, old-style formatting with %, and new-style formatting with .format() is sort of a headache, and there are cases where switching between them with some regularity is unavoidable (custom __format__ methods, template strings, logging, etc). It's great that there are ergonomic new ways of doing things, which makes it all the more frustrating to regularly have to revert to older, less polished solutions.
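A minimal side-by-side of the three styles (the names and values here are made up for illustration; all three produce the same string):

```python
name, score = "Ada", 95.5

old_style = "%s scored %.1f" % (name, score)        # printf-style %
new_style = "{} scored {:.1f}".format(name, score)  # str.format
f_string  = f"{name} scored {score:.1f}"            # f-string

assert old_style == new_style == f_string == "Ada scored 95.5"
```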
Yeah I consider that one to be a trick question. I knew same-quote-style nested f-strings were coming, I just didn't know which version, and I still use the `f'{f"{{}}"}'` trick because I want my code to support "older" versions of python. One of my servers is still on 3.10. 3.11 won't be EOL until 2027.
One thing I keep not understanding: I see colleagues (and AIs) put f-strings in logger calls! Doesn't the logger do lazy interpolation? Doesn't this lose that nice feature?
I used to take this approach (didn’t worry about late interpolation for debug logs unless in a tight loop), but I was wrong and got bitten several times. Now I very strongly tend to favor lazy interpolation for logging.
The reason is that focusing on formatting CPU performance misses a bigger issue: memory thrashing.
There are a surprisingly large number of places in the average log-happy program where the arguments to the logger format string can be unexpectedly large in uncommon-but-not-astronomically-improbable situations.
When the amount of memory needed to construct the string to feed down into the logger (even if nothing is done with it due to log levels) is an order of magnitude or two bigger than the usual less-than-1kb log line, the performance cost of allocating and freeing that memory can be surprising—surprising enough to go from “only tight number crunching loops will notice overhead from logger formatting” to “a 5krps fast webhook HTTP handler that calls logger.debug a few times with the request payload just got 50% slower due to malloc thrashing”.
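To make the eager-vs-lazy difference concrete, here's a small sketch using the stdlib logging module; the Probe class is just an illustrative stand-in for a payload that's expensive to stringify:

```python
import logging

logging.basicConfig(level=logging.INFO)  # DEBUG records are discarded
log = logging.getLogger("app")

class Probe:
    """Stand-in for a payload that is expensive to stringify."""
    def __init__(self):
        self.str_calls = 0
    def __str__(self):
        self.str_calls += 1
        return "<huge payload>"

p = Probe()

# Eager: the f-string renders the payload *before* logging even looks
# at the level, so the work happens even though the record is dropped.
log.debug(f"got {p}")
assert p.str_calls == 1

# Lazy: the format string and args are stored on the record; %-style
# interpolation only runs if a handler actually emits the record.
p.str_calls = 0
log.debug("got %s", p)
assert p.str_calls == 0
```

The same reasoning applies to any level that gets filtered out by the logger's configuration, not just DEBUG.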
As I said, I default to deferred %-formatting. Beyond that it's a judgement call: need vs. cost of the call vs. call frequency, etc. F-strings, for their part, are faster than the other styles (they have their own bytecode instruction), more readable, and less likely to throw a runtime error.
Bottom line, still a micro optimization on a modern machine in the vast majority of cases.
If you have a truly costly format op and still want to use an f-string, look into .isEnabledFor().
Because, unlike most professions, ATC is immediately, personally responsible for making decisions for which a slight mistake could instantly claim the lives of hundreds of people.
We've not heard of anyone unqualified getting a job, only of not enough people being hired. This has been an issue for many, many years now. On the other hand, there are lots of other professions with their own high-profile hiring issues that claim lives over extended periods of time. So yeah, it still smells like an artificially popular topic. Especially since many changes have already been made this year but we're still seeing the same gripes reposted - but https://www.transportation.gov/briefing-room/us-transportati... hasn't been posted even once.
They hired more people who then failed the academy and the FAA only has budget for a fixed number of seats. This resulted in a shortage of trainees making it to towers.
Coincidence: I was just noodling around youtube and saw that "Still Alive" from Portal was added 17 years ago. I'm pretty sure I've discovered a date bug in youtube.
I think OTP is pretty much universally admired, the language syntax and tooling much less so.
Erlangers generally dismiss criticisms about syntax as unimportant, but the popularity of Elixir (which compiles to the same VM) suggests they're wrong to discount syntax's importance for adoption. Another new language that targets the BEAM VM is Gleam, which looks fantastic.
From reading Armstrong's stories about the origins of Erlang, it seems to me the Prolog syntax was an accident of history and a pragmatic engineering decision at the time, more than it was the deliberate choice of a careful language designer. That suggests that other options shouldn't be dismissed.
I seem to remember a phase when talking about it was denounced as vitriol, together with praising Rust. Maybe it was just a phase, maybe I took some random criticism too seriously.
For what it's worth, this post just helped me explain several years of failures to wake from sleep, across several different MSI-based machines, when I've connected them to an HDMI port on my TV. I think this debug is interesting in its own right, and unlike 99% of the content on this website, it was directly and immediately useful to me. I doubt I'm the only one, too.
As someone with a hardware background, I'll throw in my $0.02. The schematic capture elements to connect up large blocks of HDL with a ton of I/O going everywhere are one of the few applications of visual programming that I like. Once you get past defining the block behaviors in HDL, instantiation can become tedious and error-prone in text, since the tools all kinda suck with very little hinting or argument checking, and the modules can and regularly do have dozens of I/O arguments. Instead, it's often very easy to map the module inputs to schematic-level wires, particularly in situations where large buses can be combined into single fat lines, I/O types can be visually distinguished, etc. IDE keyboard shortcuts also make these signals easy to follow and trace as they pass through hierarchical organization of blocks, all the way down to transistor-level implementations in many cases.
I've also always had an admiration for the Falstad circuit simulation tool[0], as the only SPICE-like simulator that visually depicts magnitude of voltages and currents during simulation (and not just on graphs). I reach for it once in a while when I need to do something a bit bigger than I can trivially fit in my head, but not so complex that I feel compelled to fight a more powerful but significantly shittier to work with IDE to extract an answer.
Schematics work really well for capturing information that's independent of time, like physical connections or common simple functions (summers, comparators, etc). Diagrams with time included sacrifice a dimension to show sequential progress, which is fine for things that have very little changing state attached or where query/response is highly predictable. Sometimes, animation helps restore the lost dimension for systems with time-evolution. But beyond trivial things that fit on an A4 sheet, I'd rather represent time-evolution of system state with timing diagrams. I don't think there's many analogous situations in typical programming applications that call for timing diagrams, but they are absolutely foundational for digital logic applications and low-level hardware drivers.
As much as I prefer to do everything in a text editor and use open-source EDA tools/linters/language servers, Xilinx's Vivado deserves major credit from me for its block editor, schematic view, and implementation view.
For complex tasks like connecting AXI, SoC, memory, and custom IP components together, things like bussed wires and ports, as well as GUI configurators, make the process of getting something up and running on a real FPGA board much easier and quicker than if I had to do it all manually (of course, after I can dump the Tcl trace and move all that automation into reproducible source scripts).
I believe the biggest advantage of the Vivado block editor is the "Run Block Automation" flow that can quickly handle a lot of the wire connections and instantiation of required IPs when integrating an SoC block with modules. I think it would be interesting to explore if this idea could be successfully translated to other styles of visual programming. For example, I could place and connect a few core components and let the tooling handle the rest for me.
Also, a free idea (or I don't know if it's out there yet): an open-source HDL/FPGA editor or editor extension with something like the Vivado block editor that works with all the open source EDA tools with all the same bells and whistles, including an IP library, programmable IP GUI configurators, bussed ports and connections, and block automation. You could even integrate different HDL front-ends as there are many more now than in the past. I know Icestudio is a thing, but that seems designed for educational use, which is also cool to see! I think a VSCode webview-based extension could be one easier way to prototype this.
> Also, a free idea (or I don't know if it's out there yet): an open-source HDL/FPGA editor or editor extension with something like the Vivado block editor that works with all the open source EDA tools with all the same bells
"Free idea: do all this work that it takes hundreds of people to do. Free! It's even free! And easy!"
Lol you must be one of those "the idea is worth more than the implementation" types.
> The schematic capture elements to connect up large blocks of HDL with a ton of I/O going everywhere are one of the few applications of visual programming that I like.
Right. Trying to map lines of code to blocks 1:1 is a bad use of time. Humans seem to deal with text really well. The problem arises when we have many systems talking to one another; skimming through text becomes far less effective. Being able to connect 'modules' or 'nodes' together visually (whatever those modules are) and rewire them seems to be a better idea.
For a different take that's not circuit-based, see how shader nodes are implemented in Blender. That's not (as far as I know) a Turing complete language, but it gives one an idea of how you can connect 'nodes' together to perform complex calculations: https://renderguide.com/blender-shader-nodes-tutorial/
A more 'general purpose' example is the blueprint system from Unreal Engine. Again we have 'nodes' that you connect together, but you don't create those visually, you connect them to achieve the behavior you want: https://dev.epicgames.com/documentation/en-us/unreal-engine/...
> I don't think there's many analogous situations in typical programming applications that call for timing diagrams
Not 'timing' per se (although those exist too), but situations where you want to see changes over time across several systems are incredibly common, and existing tooling is pretty poor for that.
There's no reason they can't instead be used to show how data transforms. Think of the sort of 'flow wall' someone sees in a large industrial setting (water/wastewater treatment plants, power plants, chemical plants, etc.) or the process mockup diagrams for spreadsheet-heavy modpacks (I'm looking at you, GregTech New Horizons).
Data can instead be modeled as inputs which transform as they flow through a system, and possibly modify the system.
Building block diagrams in Vivado to whip up quick FPGA designs was a pleasant experience. Unfortunately the biggest problem wasn't the visual editor. The provided implementations of the AMD/Xilinx IP cores are terrible and not on par with what you would expect first party support to be. The other problem was that their AXI subordinate example was trash and acted more like a barrier to get started. What they should have done is acquire or copy airhdl and let people auto generate a simpler register interface that they can then drag and drop.
i remember using the falstad sim constantly at university a decade ago. super helpful and so much more intuitive than any spice thing. cool to see that it's still around and used