
I wonder how easy and cheap it would be to build a comparable analog computer using modern ICs. I also wonder if analog computers could make a comeback. Training a neural net seems very much like an analog computation to me...


If you want an accurate analog computer, you'd still be paying a pile of money for precision components. (0.01% resistors cost about $10 on Mouser.) And even with modern ICs, you're not getting a Moore's Law scaleup on your op amps. Plus, the interconnect doesn't scale nicely.

That said, there are claims that analog computers will make a comeback: https://spectrum.ieee.org/computing/hardware/not-your-father...


A good neural net simulator would work just fine with imperfect components.


This is basically what a brain is, right?


Far less than you might think. For example, the wide range of neurochemicals would be largely unnecessary if that were the case.

It’s like how a city contains many buildings, but it also contains roads etc.


Different neurotransmitters provide different types (speed, response function) of activation.


The same neuron may release different neurotransmitters based on stimulus, but it’s not universal.

https://www.ncbi.nlm.nih.gov/books/NBK10818/


In my view it has become a cost/performance decision. There are still some cases where analog signal processing is the only way, or the cheapest way, of doing something. There will always be a place for the electronic equivalent of "last mile delivery," at least for getting stuff in and out of the analog domain.

One of my projects at my day job was an analog front end for a digital system, where part of the analog circuit was attached to a sensitive optical detector held at liquid nitrogen temperature. But the majority of signal processing was done in the digital domain.


I agree. A couple of years ago I was asked to design a circuit that could read a voltage, perform a few calculations and multiply the result by another voltage and create two phase-shifted signals from that multiplication product. It was a simple design in that it could be reduced to high-school trigonometry. However, finding an efficient implementation was not trivial.

After spending a lot of time looking at parts and considering multiple approaches, the simplest answer was to do the slower aspects in the CPU and use a 4-quadrant Multiplying Digital/Analog Converter (MDAC) to do the heavy lifting (signal multiplication).

ISTR that doing it all digitally would have required a processor with at least a 40MHz clock and probably a separate 200MHz clock to implement a 1-bit DAC. And I would still have needed extra circuitry since one of the inputs (and both outputs) could swing +/-10V.

Not to say that the all-digital approach wouldn't have worked, but the development cost would have been overwhelming for something that probably won't sell more than 200 units or so. By using an MDAC and a couple of opamps, the design was reduced to an Arduino "shield."
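
The division of labor is easy to sketch numerically. Here's a toy model of an ideal 4-quadrant MDAC (hypothetical 12-bit width and helper names, not the actual part used):

```python
def mdac_output(v_in, code, bits=12):
    # Ideal 4-quadrant MDAC: a signed digital code scales an analog input,
    # so the CPU can set a gain and let the converter do the multiply.
    # code ranges over [-2**(bits-1), 2**(bits-1) - 1].
    full_scale = 2 ** (bits - 1)
    return v_in * (code / full_scale)

# CPU does the slow math (here: deciding on a gain of -0.5);
# the MDAC does the heavy lifting (the actual signal multiplication).
gain = -0.5
code = round(gain * 2 ** 11)      # -1024 for a 12-bit converter
print(mdac_output(10.0, code))    # -5.0 V out from a +10 V input
```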

Have to say, it works pretty damn nice :-)


Look into field-programmable analog arrays (FPAAs). An FPAA is an FPGA for op amps instead of gates.


It's questionable how valuable this would be. Analog computers were just a precise implementation of technology available at the time. To multiply more accurately, you bought more accurate discrete components. Now that we have digital computers, we are not constrained by the physical properties of the components of the computer -- you can use as many bits as you need to get the desired accuracy in your computation, and it doesn't matter if every component in the computer has a calibrated accuracy.
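
To make that concrete, here's a minimal Python sketch: the precision of a digital multiply is a one-line software setting, not a component tolerance.

```python
from decimal import Decimal, getcontext

# "Buy" 50 significant digits by changing one line -- no calibrated parts needed.
getcontext().prec = 50

a = Decimal("1.0000000001")
b = Decimal("2.9999999999")
product = a * b
print(product)  # exact to the last digit; an analog multiplier is lucky to hold 0.01%
```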

(Work on better manufacturing is still relevant for digital computers, of course. But the work leads to lower-power devices, not more accurate computations. Years ago, we used 5V logic because the average manufacturing tolerances gave acceptable differentiation between 5V and 0V. As manufacturing got better, 3.3V worked fine, then 1.2V, and now even lower. The result is wasting less power.)

There are certainly physical processes at work that limit how "good" of a digital computer we can build... the question is whether there is something about manufacturing analog computers that scales better than digital ones, and whether analog computers will become competitive again once we hit the limits of die size and lithographic feature size. I kind of doubt it. But I know pretty much nothing about this area.


For simple adder/multiplier-type circuits, analog computers are inferior to digital ones for most uses because of the additive error that accumulates over every iteration of a loop.

I imagine that there are some analog circuits that would directly solve complex polynomials or perhaps differential equations without iteration, which would not suffer the error amplification problem. I don't know how common the equations or systems that those circuits solve for are in most scientific or other application use.
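
That accumulation is easy to simulate. Here's a toy sketch (assumed noise magnitude, not a measured figure) of an iterated accumulator with and without a small additive error on each pass:

```python
import random

random.seed(0)  # a digital luxury: even the "noise" is reproducible

def integrate(steps, dt=1e-3, noise=0.0):
    # Accumulate a constant rate; the exact answer is steps * dt.
    # 'noise' models a small additive error injected on every loop pass,
    # the way an analog integrator's drift and offset would be.
    total = 0.0
    for _ in range(steps):
        total += dt + random.uniform(-noise, noise)
    return total

print(integrate(10_000))              # 10.0, up to float rounding
print(integrate(10_000, noise=1e-4))  # off by an error that grows with step count
```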


I once came across a paper talking about something similar (https://dl.acm.org/citation.cfm?id=3001164). I can't access it right now and don't remember all of it, but it was about using analog hardware to run a convolutional network for computer vision before converting the camera's output to digital.



Now that analog FPGAs are becoming a thing, I can see someone replicating one of these with them in the next couple of years.


Training a neural net with analog circuitry would mean an extra challenge in the reproducibility of results.

That's already a challenge today: tiny changes can produce large differences in the network's performance.
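
A toy sketch of the contrast, modeling analog error as unseedable jitter on each gradient step (made-up loss function and noise level):

```python
import random

def train(rng, noise=0.0):
    # Toy gradient descent on the loss (w - 3)^2; 'noise' stands in for
    # unrepeatable analog error on each multiply-accumulate.
    w = 0.0
    for _ in range(200):
        grad = 2.0 * (w - 3.0) + rng.gauss(0.0, noise)
        w -= 0.1 * grad
    return w

# Digital: same seed, bit-identical weights every run.
print(train(random.Random(42)) == train(random.Random(42)))   # True

# "Analog": the injected noise differs per run, so the weights never repeat.
print(train(random.Random(), noise=0.01) == train(random.Random(), noise=0.01))
```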


Yes, but reproducibility is not always necessary.

For example, drawing a simple anti-aliased line on a computer produces results that depend (at the pixel level) on the software stack and/or graphics card used. Nobody ever complained (much) about that.

(Then again, it may allow your computer to be fingerprinted more easily).


The thing about transistors is that it's easy to make many transistors on the same IC behave in set ratios of each other (by changing the transistor width-to-length ratios, etc.), but getting an absolute performance metric exactly controlled is fairly difficult (not impossible, just something you try to avoid because of cost, reliability, and yield).

I'd imagine any hypothetical implementation of a NN would be fairly difficult because of that.

Also achieving linearity cheaply (especially multiplication) is very difficult.


Doing it with discrete components is of limited use, but there are various companies working on doing it on-chip.

(Don't forget that analog systems have limited gain-bandwidth and noise immunity)


Quantum computers are analog devices! And, yes, folks are using them for training neural nets.


> Quantum computers are analog devices!

I've seen this used disparagingly before: quantum is "just analog." Is that really the case? Is quantum computing simply the ultimate miniaturization of analog computing?

Reading the recently disclosed "quantum supremacy" paper from Google[1] that was somehow leaked via NASA, one could be convinced that quantum is "just analog." The paper deals in "resonance", "coupling", "filter" and "attenuator"; it reads like the description of a superheterodyne transceiver; analog RF.

This reverse-engineering story points out that the speed of analog computation is due to the effectively parallel processing of the op amps. This aligns with the description of qubits also working in parallel.

One thing is certain: classical analog computers are vastly easier to understand. An op amp is simply an electrical function. So now I'm really intrigued: is quantum computing really "just" faster and smaller analog?

[1] The actual PDF has appeared: https://drive.google.com/file/d/19lv8p1fB47z1pEZVlfDXhop082L...


> The paper deals in "resonance", "coupling", "filter" and "attenuator"; it reads like the description of a superheterodyne transceiver; analog RF.

This is the scaffolding for measuring and manipulating quantum states. Quantum computing itself is not "analog" in the original, signal processing meaning of the word: the states are not isomorphic analogies for modelled physical processes.


It's true that qubit states are inherently quantized. However, the control circuitry, couplers or gates, and often the qubits themselves, are analog devices. The "un-sexy" truth is that calibration is one of the most difficult parts of producing a useful quantum computer.


Sure, and any PC has tons of analog circuits too, for example the speaker output. The calibration and scaffolding in quantum computers aren't a model of anything. Not in the same way that the op-amp board in this article is an analog of an integral.


Digital circuits also process everything in parallel. It's only CPUs (i.e. one particular thing you can build out of a digital circuit) that have trouble with it, really...

Quantum computing is (somewhat) analog in nature, but it's not "just analog". In theory, taking advantage of quantum physics in the computer gives you an improvement in asymptotic complexity (not just a constant factor speedup), one that you can't get any other way.
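
As a concrete instance of that asymptotic gap, here's a toy classical state-vector simulation of Grover search (N and the marked index are arbitrary choices): the marked item's probability approaches 1 after only ~sqrt(N) oracle queries, versus ~N probes for classical search.

```python
import math

N = 16       # toy search space (4 "qubits")
marked = 7   # the item the oracle recognizes
state = [1 / math.sqrt(N)] * N                 # uniform superposition

iterations = int(math.pi / 4 * math.sqrt(N))   # ~O(sqrt(N)) queries; here 3
for _ in range(iterations):
    state[marked] = -state[marked]             # oracle: flip the marked amplitude
    mean = sum(state) / N
    state = [2 * mean - a for a in state]      # diffusion: invert about the mean

prob = state[marked] ** 2
print(round(prob, 3))   # 0.961 -- near certainty after 3 queries, vs ~16 classically
```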



