In 1974, Practical Wireless, a UK magazine, published a design for a Pong-style game that connects to a television, called PW Tele Tennis: http://searle.x10host.com/TeleTennis/PWTennis.html
It uses sixty-four NAND gates, twelve NE555 timers, two dozen diodes, and some analog parts.
It's about the most basic version of the game. They later published a sound effects board and an on-screen scoring board that use a couple of dozen more chips.
The implications of Karnaugh maps and state machine reduction, which
we did in "Digital Logic" when I was a student, were that you could
take any problem, express it as a set of states and transforms, and
boil that down to an optimal netlist of discrete logic gates.
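For anyone who never did the exercise, here's a minimal illustrative sketch (Python standing in for the pencil-and-paper workflow) for a mod-3 counter: write out the state table, read the minimized next-state equations off the K-map, and what remains is exactly the netlist of gates you'd wire up.

    # Mod-3 counter: states 00 -> 01 -> 10 -> 00, encoded as (q1, q0).
    # K-map minimization of the state table yields:
    #   q0_next = NOT q1 AND NOT q0
    #   q1_next = q0
    # i.e. one AND gate plus two inverters -- the "optimal netlist".
    def step(q1, q0):
        q0_next = (1 - q1) & (1 - q0)   # ~q1 & ~q0
        q1_next = q0
        return q1_next, q0_next

    state = (0, 0)
    for _ in range(6):
        print(state)                    # (0,0) (0,1) (1,0) (0,0) ...
        state = step(*state)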
Of course, in the mid-'80s that was a pedagogical tool to lead us
toward register machines and von Neumann architectures, but there were
still some old-skool EE hackers around who built things like guidance
systems for the Navy which were hybrid analogue/digital "computers"
totally without CPUs or code. Today we have FPGAs and high-level tools
for building ASICs, but cheap microprocessors effectively swept aside
an entire approach.
Maybe we missed something. Many small and well-constrained problems in
IoT-type applications might be better served by hard-configured
solutions. They would use less power, be immune to malicious network
hacking, and not need 'firmware' updates.
True, but most people doing hardware use formal design methods and prove their work correct. While the proofs are not perfect, there are far fewer bugs. The cost of fixing bugs in hardware is much higher than in software, so it is seen as worth it.
Of course the cost of doing the above is one reason we don't do everything in hardware. If you have the money you could implement everything people do with computers in hardware, no software - I don't even want to think about the cost.
In the past, technicians would get engineering change orders and follow the instructions to rewire boards to fix problems. You could also replace state machine and microprogram ROMs, which I guess sounds a lot like a firmware update.
It's expensive to hire someone technical enough to:
- Perform such an update
- Sign off that they performed the update _correctly_
If you need an air-gapped system, it's still much easier to set it up so it can update from a USB flash drive and log "I did the update correctly" back to the drive.
The problems are power consumption, speed, cost, size, development time, and the difficulty of updates and bug fixes. A modern embedded processor handily solves all of those.
Boards full of TTL are a fascinating engineering exercise, but there aren't many applications where they're a better solution.
It's also tempting to cheat and solve some of the sub-problems with monostables and analog timers. As soon as you do that you're introducing potential issues caused by temperature drift, component tolerances, and component ageing.
A fully clocked solution is always more reliable, but often that means a higher component count and cost.
FPGAs have real applications, but they're still harder to develop than code.
When I was a student one of the tutors said "We'll all be doing this in software soon" - and he was right.
We as EE engineers learned to test our creations. We are, however, slowly being pushed into a SW-process world where there are modules and integration tests, and at the end testing is just ping-ponged between EE and System and nobody does the testing.
That's the only one I don't quite understand. All your other points
are definitely great objections.
Are you saying that a clocked system consistently uses less power than
a stateful but quiescently 'static' circuit? I can imagine there's a
reason, but it goes counter to my experience that the faster you clock
a microprocessor the more power it consumes; therefore at zero clock
rate a purely data-driven system should consume the least power. What
am I missing?
Roughly, the power of a digital system is the sum of the static power and the dynamic power (P_dyn = ½·C·V²·f, where C is the switched capacitance, V the supply voltage, and f the switching frequency).
1. Discrete logic chips tend to be built in substantially larger process nodes (microns vs nanometers) that are less efficient. This means higher leakage current and more static power.
2. Discrete logic has to drive traces on a PCB, which have substantially higher capacitance (C) and therefore use more power getting across a board.
3. Discrete logic operates at higher voltages. Contrast 5V TTL vs. 1V core voltage inside a processor. Power is proportional to the voltage squared.
4. A microprocessor running even at low speed can replace a massive number of discrete logic chips, so for simple solutions F is low. If you're doing something very simple and interrupt-driven, F can be in the tens-hundreds of kHz.
Consequently, there's a whole lot more of both static and dynamic power with discrete logic than with a uC.
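To put rough numbers on points 2-4 (back-of-envelope, with illustrative values rather than measured ones):

    # Dynamic power, P = C * V^2 * f (activity factor taken as 1).
    def p_dyn(c_farads, volts, freq_hz):
        return c_farads * volts ** 2 * freq_hz

    board = p_dyn(20e-12, 5.0, 1e6)   # ~20 pF PCB trace, 5 V TTL, 1 MHz
    chip  = p_dyn(10e-15, 1.0, 1e6)   # ~10 fF on-chip node, 1 V core, 1 MHz

    print(board)   # 5e-4 W = 500 uW per toggling board node
    print(chip)    # 1e-8 W = 10 nW per toggling on-chip node

Same frequency, same logical function, roughly four orders of magnitude between them - and that's before counting the dozens of chips a uC replaces.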
I don't understand speed, either. Once you have the desired circuit, you don't have to build it out of discrete components; you can also send it to a fab. (I think that already happened with 7400-style ICs, too: the 74248 BCD-to-seven-segment decoder doesn't contain lots of individual NAND gates.)
That will introduce practical problems, though. If you want your design on the best tech possible, that costs serious money that you may not be able to afford, especially if you don’t want an enormous number of circuits. Your 10,000 transistor design may fit a million or more times on a top-of-the-line die.
> Once you have the desired circuit, you don’t have to build it out of discrete components, you also can send it to a fab
You are still going to use a very old and obsolete process, compared to the microcontroller.
As a rule of thumb, every generation of lithography that has made transistors smaller and more efficient has also roughly doubled the NRE costs. As you move down the feature-size slope, you get all kinds of useful properties, but the tradeoff is that you have to manufacture more of any given design for it to make economic sense. To the point where you can get an amazing chip that has an ARM core, storage, and memory in a single package that costs pennies (well, not right now it doesn't, but it did in the past and will again) and uses almost no power - so long as you can use the exact same device that also ships in the millions for other things.
Simplest explanation: universal logic chips MUST have very wide tolerances to be truly universal.
- They have to use significantly higher voltages, and allow for higher currents, than they really need to work.
For example, a typical output of universal TTL logic is specified to drive more than 10 inputs, each of which draws some current.
Universal logic I/O must also tolerate differences in power supply voltage and interference in real circuits.
But if you don't need to communicate outside the chip, you can optimize things much more: make customized outputs sized for only as much load as actually exists in the circuit, and build a highly stabilized internal power supply with a very strong power distribution network.
For the first CPUs this wasn't discussed - they were just treated as very expensive logic chips - but from around the 80186 (hard to say exactly when) a division appears: some outputs become high power, others stay "normal", low power.
And in commodity CPUs, the Pentium introduced two voltages - one for the core and another for the interface circuits.
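The fan-out point is easy to see with classic 74xx datasheet numbers (standard TTL; values from memory, so treat as illustrative):

    # Standard-TTL fan-out arithmetic.
    I_OL = 16e-3    # current a standard TTL output can sink when LOW
    I_IL = 1.6e-3   # current one standard TTL input sources when LOW

    print(I_OL / I_IL)   # 10.0 -- the guaranteed fan-out

    # An on-chip driver feeding exactly one known gate needs none of
    # that margin, which is part of why custom silicon can be so lean.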
I could not disagree more. These were highly reliable and effective systems long after their expected design lifetime. The F-14 CADC is one example, as well as earlier ADCs.
That depends on how you use them; this is a people problem, not a tech problem, as you (IMHO) correctly observe.
If you use the voting machines to keep a running tally of votes cast so results are available immediately after polls close, you have already gotten a large benefit from them.
However, to ensure the (most warranted!) concern of the electorate that the votes are not being tampered with, the machine should also print a receipt to the voter after his/her vote is cast, in a human-readable format, which is then deposited in an urn much like today.
So - you get instant results, and if the result is challenged, you can audit the actual ballots rather than just doing a code audit and hoping the numbers haven't been tampered with in some undetectable way.
Paper slips signed by an independent observer and marked with indelible ink can be audited more easily by electoral participants than counting featureless stones in a bucket.
They have the added security feature of oily fingerprints containing unique DNA imprinted on them. It's customary in functioning democracies to not sequence fingerprints on a ballot paper, but theoretically it could be done.
Since you posted this comment, we've seen a very interesting article
on formal languages and 'program proving compilers based on separation
logic'. The basis for truly correct, reusable, "eternal code" is in
fact to regress to something not unlike ASM but combined with a
Rust-like higher level.
Always good to remain mindful that what we assume is 'progress' in one
direction may not be progress overall, and that allowing backtracking
to older interpretations is actually a more mature scientific stance.
There is a very old Steve Jobs video where he says something like: "I don't understand what is special about software, what you cannot do in hardware." Let me see if I can find that.
In my EE degree (in year 2000), we had to implement something with external I/O as a state machine using logic gates (it was in an FPGA though, we drew the schematic). Pong is a way cooler exercise than whatever I did with a 7 segment display and a keypad. But the idea of state machines is still a big part of introductory digital logic.
That is way cooler than what we did (early '90s), the typical garage door opener.
Later, in digital circuits, we got to design basic CPUs, with optional breadboard implementation, but no one bothered to go that far for optional stuff.
While it has nothing to do with my current work, introductory digital systems was definitely one of my favorite classes. It was amazing to go from logic gates to adders and muxes and whatnot, to state machines, flip-flops, and useful computation. Now I work in software, and although it doesn't really come up, it's very satisfying to understand down to first principles how computers can be built up from the gate level to whatever ridiculous level of abstraction we work with on a daily basis. That's what I love about engineering generally, the ability to roughly understand what is going on around me down to some first principles.
Same here! I built a 5x7 animated display from logic gates in an electronics class as part of my degree in physics. That class and fortran were the only two classes I liked in college. I'm glad I got that experience to work with circuit gates. I feel like I have some insight into the magic box I program all day.
The coolest part was our professors never told us we had to use logic (and then cycle through the pieces faster than the eye could see) to get the 5x7 led display to work for non-symmetric letters. They let us figure that out on our own. I was sitting in history class not paying attention when it came to me. I drew out the circuit I wanted and couldn't wait to get to electronics class to try it out.
> That's what I love about engineering generally, the ability to roughly understand what is going on around me down to some first principles.
This! It's very empowering and one of the things that drew me to tech/computers. Being able to understand things helped me realize the potential of what is possible with computers/computing technology.
I majored in EE in undergrad and didn't really appreciate my EE education until I got older (I was more interested in software).
In probably 1976, for $1, I bought a one-page schematic for such a thing from a classified ad in the back of a magazine. I suppose it was a related version, but IIRC it relied upon 74123-style monostables.
The 70's were when it started. You can see it in the Sears catalogs (which are available online). For most of the early 70's there's nothing interesting. Then there's Pong. Then there's an absolute explosion.
I implemented Conway's Game of Life in TTL and an oscilloscope for my 1975 MIT digital circuits lab. The clock period was 6 nanoseconds, or 166 MHz. The limiting chip was a one-kilobit RAM, which was in tight supply and expensive. I think we used two for alternating generations. There are similarities to the pong circuit.
At MIT in 1973 I had to use RTL logic (lower density, speed, and fan-out than TTL) for my digital design lab. I decided to design a circuit that played perfect Nim on a board of one to four piles of up to 15 stones in each pile. It took around 20 JK Flip-flops to manage the game state and do the calculations plus more latches and muxes and demuxes for I/O.
My biggest problem was that I hadn’t yet learned to pick minimum viable projects that would still result in a good grade.
What logic family did you use? Standard 74-series TTL chips that existed in 1975 are unlikely to work at 166 MHz; their propagation delay was around 20 ns.
The 1 kbit RAM chip probably also would have difficulty running at 166 MHz. That being said, having the master clock of the system running at 166 MHz doesn't mean the entire thing does. You could use a high clock rate to generate a video signal but have the actual logic of the system behind a clock divider, running at a much lower frequency.
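A toy model of that split, if it helps (Python, purely illustrative):

    # One fast 'pixel clock' domain; game logic ticks on a divided enable.
    PIXEL_CLOCK_HZ = 166_000_000
    DIVIDE_BY = 1024                  # logic domain ~162 kHz

    divider = 0
    slow_ticks = 0
    for _ in range(4096):             # each iteration = one pixel clock
        # video shift/sync logic would run here at full speed
        divider = (divider + 1) % DIVIDE_BY
        if divider == 0:
            slow_ticks += 1           # slow game-state logic runs here
    print(slow_ticks)                 # 4 slow ticks per 4096 fast ones

Only the handful of flip-flops in the video path need to be fast; everything behind the enable can be slow, cheap logic.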
My hardware design prof in undergrad had a story about one of the chip manufacturers having some crazy-fast internal clock in the 70s or 80s that was running microcode to implement the processor's ISA. In my (probably faulty) memory I think the story was something like 800 MHz, made by a weird brand (weird to my naive undergrad brain) like Rockwell or something, while the instruction clock presented to the user was maybe in the single or double digit MHz. No idea if I'm remembering right, and my Google-fu is failing to verify this story. But anyway, to your point, modern GPUs have a bunch of different clocks for different sub-systems.
Way out of my wheelhouse here but maybe something like emitter-coupled logic that was used in the Cray-1 of similar vintage? Power hungry but allowed the Cray-1 to hit 80 MHz in 1975.
It'll be exciting if clock speeds keep stalling like they basically have in the last few years and we end up having to learn and do things at this level again to squeeze what we want out of a long-term fixed compute budget.
The literal clock speed of CPUs has been stalling, but CPU performance is and has been on a massive increasing trend effectively ever since AMD released the first Ryzen CPUs.
Recent product announcements from Intel and AMD show no sign of slowing down. Sure it's not the 'double performance in 1-2 generations' of the olden days, but it's definitely not stalled either.
The improvements outside the processor matter more for a typical user. DDR5, PCIe5, USB4, Bluetooth 5.2, WiFi 6E, etc. These recent version bumps make everything feel faster, but the CPU gains are indeed coming much slower and at vastly higher cost. Die shrinks will likely reach their physical limit this decade for traditional silicon.
The Ryzen 7 2700X chip I bought back in 2018 is still fairly close to the latest Ryzen 9 chips in terms of single-core performance on benchmarks (within 10% IIRC). The Ryzen 7 is an 8-core (16 thread) CPU, and now you can get 12 or 16 core Ryzen 9's, but most workloads don't take proper advantage of even an 8-core machine.
Quick online research shows a 30%-40% single core improvement from the 2700X to the 5700X, and a 40% to 50% (single core) improvement to the 5900X. Maybe you have a specific weird workload that isn't improved much, but performance improvement for the average workload is much better than 10%.
Most workloads don't even take proper advantage of more than 1 core. Single core performance is still the most important metric and there hasn't been anything exciting regarding that in the last... 15 years?
One of the reasons why CPUs are getting faster is that the more transistors you have, the faster the chip is, and vice versa (this is not a proven statement but rather my intuition). We might not be able to use higher clock frequencies, but we can still fit more transistors in the same area.
What's more, we can make higher-frequency chips; it just turned out that higher-density gates were the easier path to go down.
If the gate-density path reaches its end, we can still go back to clock speed. It won't be easy or cheap to solve all the clock problems, but if it's better than the alternative, someone will do it (like how fracking only became viable as a means of extracting oil once the cheap, easy-to-get oil was somewhat depleted).
By the way, why did clock frequency stop around 3-4 GHz? I assume that as transistors become smaller, their propagation delay decreases, as does their power consumption, and that allows higher clock frequencies. Is there something else that I am missing?
Even accounting for security vuln mitigations? As an aside, is there a good table somewhere listing all of the Meltdown/Spectre etc. vulns and their current status regarding software and hardware fixes? My understanding is most still don't have hardware fixes yet.
This is part of why Intel acquired Altera. On the other end of the spectrum, some of the latest AVR families have configurable glue logic akin to a tiny PLD.
In the same vein, the Raspberry Pi RP2040 has what they call Programmable IO modules which are tiny cores that can run small state machines doing whatever you want, separate from the main ARM cores.
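For the curious, the MicroPython blink example for PIO looks something like this (from memory, so double-check against the RP2040 docs; pin 25 is the Pico's onboard LED):

    import rp2
    from machine import Pin

    @rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
    def blink():
        set(pins, 1) [31]   # drive the pin high, wait 31 extra cycles
        nop()        [31]
        set(pins, 0) [31]   # drive the pin low, wait again
        nop()        [31]

    sm = rp2.StateMachine(0, blink, freq=2000, set_base=Pin(25))
    sm.active(1)            # the PIO runs with no further CPU involvement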
Yeah, doing my thesis with it right now, but not sure if it's worth looking into. For one I'd have to brush up on ASM, while the two-core Pico lets me meet my requirements as-is anyway.
Pretty cool though, you can write VGA drivers with it apparently!
PLDesque glue logic is cool, but the coolest thing I've seen is uC's with fully programmable pins, so eg the DAC device inside the chip can be connected to any of the external pins, making board layout dramatically simpler.
I'm guessing you've seen the Infineon (formerly Cypress) PSoC [0] stuff too? Both analogue and digital peripherals that can be configured mostly arbitrarily.
There are also some neat mixed-signal parts from Dialog [1] - no MCU but interesting analogue and digital blocks all the same.
The analog configurability of PSoCs is perhaps a little less flexible than you might be led to believe. There's a small number of analog components which can be linked to pins or to each other through a limited number of non-uniform interconnects. It's certainly useful, don't get me wrong, but I'd hesitate to call it arbitrarily configurable.
Probably a decade or two premature, but it can't hurt to hedge the bets. (There's definitely an opportunity to make software that makes FPGA and ASIC compilation from higher-level languages easier, though - the only reason it's not happening is the high talent required and the low number of people at that intersection; there's no way Verilog/VHDL is the global minimum.)
I'm going to sound like one of those fanboys, but Rust really is a breath of fresh air. Lots of people now are starting to make actually fast and performant applications because the ergonomics of the language are more high-level than something like C or C++. It's actually my favorite ML-type language; I've used OCaml to a large extent before, but with Rust the DX is still pretty nice, even if you have to contend with the borrow checker.
I think D will be much better suited for embedded systems because you can even seamlessly import C functions into a D program [1],[2]. Since C is the de facto language for embedded systems, this new capability is a game changer.
Add to that the fact that D's language designers try to make programming D similar to programming Python, and that D has GC by default, and it becomes easier for those coming from an application-software background to program embedded systems with D.
The level of a language depends on what idiomatic code looks like. Idiomatic C is lower level than idiomatic C++ which is lower level than idiomatic Rust. Just because you can write things at a lower level doesn't mean that people do so, or even are allowed to do so. Most places where you write C++ you aren't allowed to write it like C, and similarly most places where you write Rust doesn't allow you to write large unsafe blocks.
That's ... not even wrong. Or severely misguided. It's like saying "My impression of cooking is that you have to make several hotdogs before you can start frying things."
Doing things at this level is what FPGAs are used for. Processing many GiB per second of samples through complex chains of signal processing logic, for example.
Even with modern technology (I guess) we have to use lots of tricks to get the desired performance. So (I guess) the level of skill that was necessary to create Pong would help in designing modern chips as well.
About 10 years ago, I met Al Alcorn at an event. Prior to that, I had studied the original pong schematics in school as part of an interesting challenge in a digital design course, where the goal was to figure out what the schematic did, without knowing it was pong.
So when I met Al, I mentioned that I found the schematics fascinating, and had some questions. He was happy to walk me through the whole thing! After that, he told me all kinds of great stories about the different versions of pong that they built, including color support, the home version, and PAL support.
When I was a kid, I remember being at my local arcade and seeing them open up Monaco GP to service it. My mind was completely blown by the hundreds and hundreds of wires and I couldn't fathom how anyone could make sense of it. I believe this is another game that doesn't have code or a CPU, but uses discrete logic circuitry instead. ...Which, I think, is why it's not emulated in MAME.
Life is weird - I was just using this site yesterday to do some electrical diagrams for a wiring harness I am building for my ancient automobile.
My trouble was understanding how to flow something through a series of relays, and implementing "AND" and "OR" logic with a relay series.
if (AC && Temp > 160) { run both fans }
if (AC || Temp > 160) { run both fans at reduced speed }
Now, the AC is already a relay, but you can't just do 'and' and 'or' together without an extra set of diodes.
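For anyone else mapping code-brain onto relays, the translation I eventually internalized: contacts in series are AND, contacts in parallel are OR, and the diodes show up when two circuits share a wire and you need to block the sneak path backwards. A toy truth-table check (Python, just to sanity-check the wiring logic):

    # Series contacts = AND, parallel contacts = OR.
    def fans_full(ac_on, temp):
        return ac_on and temp > 160    # two contacts in series

    def fans_reduced(ac_on, temp):
        return ac_on or temp > 160     # two contacts in parallel

    for ac in (False, True):
        for temp in (120, 180):
            print(ac, temp, fans_full(ac, temp), fans_reduced(ac, temp))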
Anyway, falstad, I love your CircuitJS. I wish I could drag items around though, as for me the hardest part of making sense of things is having a good clean layout.
Random question to readers: does anyone know of a tool to generate the very nice wiring diagrams you'll see in Bentley and other automotive manuals? I imagine they were originally done by hand but later with a CAD tool. Just wondering if there are any good open source options.
CircuitJS fails there because I want to create objects, like a 4 pin relay, that has pinouts numbered (87, 87a, 30, etc get reused a lot!) and have colored cables (because they're colored in real life!).
I've played around with it a bit in the past for Arduino stuff, and it could be promising for this application. Do you get to pick the colors of each wire? (Bonus: are striped wires supported?)
Ideally you'd be able to mark each wire with an indication of color(s) and diameter so the diagram could be printed in black and white (example[0]). I assume that could be done with labels, but on the scale of a vehicle that could get real tedious real fast.
When I need a pretty diagram I typically use either Dia or Libreoffice Draw. Neither is perfectly suited to vehicle wiring, but they get the job done. (Edit: also no simulation, which both of the tools mentioned upthread seem capable of.)
Drew, why do you have a fuel diagram of a Saab 900? are you another saab guy?
The wiring diagram I am working on is actually for my 900, for which the original fans are bulky, tend to break, and are NLA. I've exhausted my spares, so now I am going a different route.
But yes, these are exactly the diagrams I am talking about.
Having a computerized version of this would be awesome, actual colors, inlining some information or having 'hyperlinking' around. A lot of the density in these diagrams is to simply fit them on two pages. But, at the same time, it'd be cool to have a picture of say, the solenoid with arrows pointing to the particular pins or replacing the numbers with labels, or say, selecting a relay and having an 'active' path so you could easily see how things flow around without finger tracing it over 3 pages.
Coincidentally, yes! I had to use that diagram to troubleshoot a no-fuel issue on my '83 900 last year. (Bad pump.) Now I'm learning all about fuel leaks and brake calipers and putting too many miles on (and paying too much attention to) my emergency backup Saturn.
What year is your 900? I haven't heard of this fan issue yet, but there are a lot of really specific SAAB things that I'm sure I'll encounter if I can keep this car going long enough.
What I'd really like would be to select a wire and have it highlight the wire and every connection; that'd make continuity tests a lot easier to confirm. I'm sure places like Haynes have some great software in-house they can use to make their diagrams.
Please reach out to me at calvin@pobox.com. It's good to have saab friends on call when you have Q's. I have... a LOT of experience with 900's. This is my 88. I am getting fairly close to ripping out the whole fan relay system, as it's very complex, and replacing it with fewer, simpler, easier-to-replace-and-maintain switches and relays.
Even with the 80's CIS stuff I have experience, so... happy to help. The pre-86 calipers are trash though; if you are east coast I have a whole set of 88 front knuckles you might be able to swap over (not sure if it'll work with your axles).
Anyway, I have a good network of saab people offline, so parts and things I can usually wrangle up. Last year we did a saab underground railroad and shipped a 1968 96 transmission 700 miles for free over a few months of people visiting people.
That's awesome! I've never been a part of a car community anything like this before. The closest is with Saturn fans, but those cars are so plentiful that parts can almost always be found locally even with the early ones like my backup car.
I'll have to see where I get on the caliper rebuild. The surviving local SAAB shop has been a great resource, and they've offered some good used pistons if I end up needing them. I'm in the midwest and I get the sense that SAAB never had as big of a presence around here; mine was an IDS car so it was purchased directly from the factory. Most of them including mine have suffered heavy rust damage by now, so there aren't many left.
The best trick I have for old caliper rebuilds is to pop the piston out with compressed air. You can do it with a bike pump and some tubes or anything you can figure out how to supply air to the brake line inlet. This will act like you are pressing the pedal, sort of.
What's an IDS Car?
Also, for rust... I honestly mostly ignore it. These cars are unibody construction. They have to be seriously rusty to have real structural problems. I am the king of the wire wheel and POR-15. I have only replaced a few sections on my cars, mostly floorboards and lower A-arm areas.
For what it's worth, I have gotten by with the humble Harbor Freight welding setup, flux core. It doesn't look pretty but it gets metal welded. I did both of my floor pans this way, and it's... well, it's fine. It's not beautiful but it does work. Copious seam sealer helps along as well.
I'd also lightly hone the inside of the calipers if you can, if there's pitting that you can feel rubbing your finger along the inner wall.
I haven't rebuilt a pre-86 caliper in a long time, but there should be some good resources on the forums.
I'd mostly focus on getting on the saab facebook group. I am not on facebook, but it is a treasure trove for asking questions. Unfortunately the forums are mostly relegated to historical information now as people have transitioned to single sign on social media. There's also saabnet which is invaluable.
As for popularity, you're quite right. Saab seems most popular in New England. With its relative wealth, weather, and proximity to the outdoors, I think that was a natural fit. Unfortunately that means we lack the Arizona rust-free cars of yesteryear; there weren't many saabs out there to begin with.
Anyway... I could probably write forever on saab, but I have just received parts in the mail from the Saab Heritage Museum and am going to try out the fitment!
Good tip on the compressed air. IDS = International Diplomat Sales; as I understand it, instead of renting a car when traveling in Europe, you could buy a SAAB and they'd arrange shipment back home. The former owner of mine was a professor who spent a year guest lecturing in Norway in 1983-84 and brought the car home when he was done. And I'm certainly heading toward needing to do some welding, because here in the rust belt it's impossible to ignore. (3 of my last 4 cars were killed by structural rust, and of those, 2 were unibody.)
I'll send you an email. I could (evidently) talk about SAAB for weeks.
Unfortunately it hasn't been maintained for a while, which is a shame because I don't think any other program does this. MAME/MESS only simulates machines with CPUs.
The MiSTer FPGA system is probably what you’re after, in the modern scene. See https://youtu.be/lVPa5EW5mp8 for an example of it being used with arcade hardware via an adapter.
Pong is one of the sample chapters for the "Retrocomputing with Clash" book (https://gergo.erdi.hu/retroclash/). Clash is a hardware description DSL using Haskell, and the book is all about using it to implement progressively more complex 1970s chips.
Haskell, which makes I/O a pain, finds major application in the realm of "problem solving" where the program does some math and spits out an answer. Hardware design is a perfect example.
So I've been learning a bit of puredata recently and thought to myself when I saw the article "I wonder if someone has made a pong patch for pd?". So I googled it and found this[1]. It's interesting because in some ways the idiom for pd is like working with pure electronic circuitry.
this sent me down a rabbit hole of circuit based games, thanks for sharing. I found this article about pong that goes into the details and logic behind each of the circuits: http://www.pong-story.com/LAWN_TENNIS.pdf
Yes! Atari Pong Circuit Analysis - Awesomely detailed, 106 pages. "Atari’s Arcade Pong PCB contained 66 IC’s. ... It was simply hard wired TTL logic and predates microprocessor and software controlled video games ... the game has also been emulated in software to play on computers. ... in most cases it is a poor facsimile of the real thing."
> General Instrument Microelectronics, also known as General Instruments (GI), was well known for designing Large Scale Integration (LSI) chips. In 1975, GI had a revolutionary idea: the design of a low-cost chip playing several Ball & Paddle games, and available to any manufacturer.
[snip]
> GI's first video game chip was the AY-3-8500. It played six games: four Ball & Paddle variants and two target shooting games, which all had variable difficulty settings changed using switches. In addition, a seventh undocumented game could be played when none of the previous six was selected: Handicap, a football/hockey variant where the player on the right has a third paddle. Very few systems played this game. Interestingly, two versions of the AY-3-8500 exist: the early one with a dashed central line (about twice as large) and solid horizontal boundaries.
GI expanded its lineup of single-game chips but, by the 1980s, it looks like the whole concept was dead.
My great aunt used to work at Atari building Pong units among other things; there's a story that if you brought her a metal lunchbox she'd stuff it with Pong and mount the paddles through the lid.
I can't remember which game it was, but one of the arcade games used a set of basic chips to create a detailed vertical half of a spacecraft & the other half was mirrored over from the centerline due to space/current & chip cost constraints. When I had read about it, I marveled how absolutely ingenious early game developers were.
Since I grew up in post-golden age of arcade, I don't really know which game that was - but I presume it was one of the Nolan Bushnell creations from vague recollection.
Edit: It was Space Race. From Wikipedia:
>The engineering and prototyping for Asteroid was done by Alcorn. The game is encoded entirely in discrete electronic components, like Atari's earlier games, and unlike later computer-based arcade games; the graphics are all simple line elements with the exception of the spacecraft, which are generated based on diodes on the circuit board arranged in the shape of half of a ship to represent the shape they create [..] That half ship is mirrored on the screen, similar to the diode array in Computer Space, which generated eight directions of a rotating ship with a mirrored four images.
It may surprise you, but Cray-1 machines did not have MICROprocessors at all - just small-scale-integration digital logic.
And nearly all "big" computers before the era of minicomputers also did not have MICROprocessors; their CPUs consisted of a whole board of chips, or even more than one board.
Imagine: one of the first commercial computers with a MICROprocessor chip, the MicroVAX II, appeared nearly a decade later than the 8086.
- A MICROprocessor is, ideally, a single semiconductor die which integrates all CPU parts.
Unfortunately, silicon is now approaching its limits, so to make things cost-effective we have to accept compromises - chiplets, interposers - but this is still very far from universal components on a board.
One indication of the enduring relevance of gate-level design is that algorithmic complexity can be expressed in terms of the minimum number of gates involved. There are proofs relating to NP-completeness et al. that cite gate count as evidence of an algorithm's membership in such and such complexity class.
Indeed. I got curious how the display works, and apparently in the circuit there are just labeled nodes, and in the frame with the display the simulated CRT is just a bit of JavaScript looking at the voltage of those nodes (going to the next line or frame based on the voltage on the respective sync signals, monochrome CRTs are very simple in that regard).
That's enormously cool, I do use the falstad circuit simulator, but I've never thought I could add a separate frame with JavaScript for I/O.
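The scanning logic itself is tiny. A rough sketch of the idea (not the actual simulator code - just what "a bit of JavaScript looking at node voltages" has to do, rendered here in Python):

    # Software CRT driven by three probed node voltages.
    # The simulator would call sample() once per simulated clock tick.
    WIDTH, HEIGHT, THRESH = 640, 480, 2.5
    frame = [[0] * WIDTH for _ in range(HEIGHT)]
    x = y = 0

    def sample(video_v, hsync_v, vsync_v):
        global x, y
        if vsync_v > THRESH:              # vertical retrace: restart frame
            x = y = 0
        elif hsync_v > THRESH:            # horizontal retrace: next line
            x, y = 0, y + 1
        elif y < HEIGHT and x < WIDTH:
            frame[y][x] = 1 if video_v > THRESH else 0
            x += 1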
This just brings me back to Mathilde Mupe's 'power pong' (as featured in the docu Hippies from Hell). It was 1v1 pong on bicycles. Steering affected the pad, and cycling quicker or slower made the ball go quicker or slower.
I believe Pong was, due to its complexity and possible dual use, not an exportable item. Not sure if that's true, or how I knew it; perhaps folklore? Perhaps not.
Neither were most of IBM's processors.
Shit, I just remembered that PGP in its early days was considered a "munition" and also not exportable. whoops, sorry for the de-rail.
Notably, there was Computer Space, by Nolan Bushnell and Ted Dabney (1971, Nutting Associates/Syzygy Engineering), the first coin-op arcade video game, also made from TTL logic. This one actually pioneered the approach.
Syzygy Engineering (Bushnell and Dabney) soon became Atari, but still entertained links to Nutting Associates with Atari titles appearing under the Nutting Associates brand, as well. (E.g., Pong was Computer Space Ball in the somewhat fancier NA version.)
This may be due to the machine not being a great success and becoming somewhat obscure because of this. Also, restoration may not be that easy for those more accustomed to later arcade machines.
Most of the early arcade games, like Space Race (much like its later, better remembered revival Chicken Run), Gotcha, etc., are seriously overshadowed by Pong.
Regarding Computer Space, I once made a simulator for the PDP-1 (the machines that ran Spacewar), so it can be played in a browser (emulating a PDP-1). I have never seen the original in person, so there's no guarantee for this being faithful down to the tinier details. Anyways: https://www.masswerk.at/icss/
I'm not sure how many Computer Space machines were made, but one ended in the corner of my smokey late 80s arcade. It sorta worked for a bit before glitching out.
The game is also seen in the movie Soylent Green (1973) as one of the "furniture" in a rich man's apartment.
Man, I can't believe that was a thing - like, "alright, let's play bounce the pixel", ooh, ahh. Then of course the foresight to see what we have now (doubtful), but still, crazy.
At our school, pong was given as a freebie template for building your final project in the digital design course! A great time - I ended up enjoying it so much I TAed it till graduation.
Circa 1975-1976, as best as I can remember, a magazine (Popular Electronics maybe, or that other one?) had an article about building a Pong game, complete with extensive schematics. There was zero code involved. It comprised pretty much nothing but TTL chips in DIP packages and maybe an oscillator or two. I imagined I could build it in my basement. Of course I never did.
https://en.wikipedia.org/wiki/Pong states "TTL logic" with later IC versions for consumer mass production, whereas the subcircuits in this demonstration state "analog". Well, I guess everything's analog if you look at it that way. Certainly logic ICs are a long way from pure analog. The interface is https://lushprojects.com/circuitjs/
Huh? TFA states "It was a circuit, implemented mostly using digital logic chips, with a few timers and other analog components." and provides schematics from which all the components can be identified as digital (74xx) or analog.