Is analog signal arithmetic faster than digital?

37

Would it be theoretically possible to speed up modern processors if one used analog signal arithmetic (at the cost of precision and accuracy) instead of digital FPUs (CPU -> DAC -> analog FPU -> ADC -> CPU)?

Is analog signal division possible (as FPU multiplication often takes one CPU cycle anyway)?

mrpyo
source
It doesn't answer your question, but here is an interesting article on the use of analog electromechanical computers in warships arstechnica.com/information-technology/2014/03/…
Doombot
There have been proposals from time to time to use multi-state digital logic -- e.g., "flip-flops" with four states instead of two. This has actually been done in some production memory chips, since it reduces the wiring bottleneck. (I don't know if any currently produced chips use multi-state logic, though.)
Hot Licks

Answers:

45

Fundamentally, all circuits are analog. The problem with performing calculations with analog voltages or currents is a combination of noise and distortion. Analog circuits are subject to noise and it is very hard to make analog circuits linear over huge orders of magnitude. Each stage of an analog circuit will add noise and/or distortion to the signal. This can be controlled, but it cannot be eliminated.

Digital circuits (namely CMOS) basically side-step this whole issue by using only two levels to represent information, with each stage regenerating the signal. Who cares if the output is off by 10%? It only has to be above or below a threshold. Who cares if the output is distorted by 10%? Again, it only has to be above or below a threshold. At each threshold comparison the signal is basically regenerated, and noise, nonlinearity, etc. are stripped out. This is done by amplifying and clipping the input signal - a CMOS inverter is just a very simple amplifier made with two transistors, operated open-loop as a comparator. If noise pushes the level across the threshold, you get a bit error. Processors are generally designed to have bit error rates on the order of 10^-20, IIRC. Because of this, digital circuits are incredibly robust: they can operate over a very wide range of conditions because linearity and noise are basically non-issues.

It's almost trivial to work with 64-bit numbers digitally. 64 bits represents 385 dB of dynamic range. That's 19 orders of magnitude. There is no way in hell you are going to get anywhere near that with analog circuits. If your resolution is 1 picovolt (10^-12 V) - and this will basically be swamped instantly by thermal noise - then you have to support a maximum value of about 10^7 V. Which is 10 megavolts. There is absolutely no way to operate over that kind of dynamic range in analog - it's simply impossible.

Another important trade-off in analog circuitry is between bandwidth/speed/response time and noise/dynamic range. Narrow-bandwidth circuits average out noise and perform well over a wide dynamic range; the trade-off is that they are slow. Wide-bandwidth circuits are fast, but noise is a larger problem, so the dynamic range is limited. With digital, you can throw bits at the problem to increase the dynamic range, get an increase in speed by doing things in parallel, or both.
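A quick numeric check of the dynamic-range figures above (a minimal Python sketch added for illustration; the formulas are just the standard dB and order-of-magnitude conversions):

```python
import math

bits = 64
levels = 2.0 ** bits                    # distinct values a 64-bit word can represent

db = 20 * math.log10(levels)            # dynamic range in dB (voltage-ratio convention)
decades = math.log10(levels)            # orders of magnitude
print(f"64 bits: {db:.0f} dB of dynamic range, about {decades:.0f} orders of magnitude")
# -> 64 bits: 385 dB of dynamic range, about 19 orders of magnitude

# With a 1 pV resolution, the full-scale value would have to be:
resolution = 1e-12                      # 1 picovolt
print(f"Full scale: {resolution * levels:.1e} V")   # ~1.8e+07 V, on the order of 10 megavolts
```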

However, for some operations, analog has advantages - faster, simpler, lower power consumption, etc. Digital has to be quantized in level and in time. Analog is continuous in both. One example where analog wins is in the radio receiver in your wifi card. The input signal comes in at 2.4 GHz. A fully digital receiver would need an ADC running at at least 5 gigasamples per second. This would consume a huge amount of power. And that's not even considering the processing after the ADC. Right now, ADCs of that speed are really only used for very high performance baseband communication systems (e.g. high symbol rate coherent optical modulation) and in test equipment. However, a handful of transistors and passives can be used to downconvert the 2.4 GHz signal to something in the MHz range that can be handled by an ADC in the 100 MSa/sec range - much more reasonable to work with.
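To put rough numbers on the receiver example (a sketch; the 30 MHz intermediate frequency is only an illustrative choice, not a figure from the answer):

```python
# Nyquist: to digitize a signal directly, you must sample at least twice
# its highest frequency component.
rf = 2.4e9                          # Wi-Fi carrier frequency, Hz
print(f"Direct sampling of a {rf/1e9:.1f} GHz signal needs >= {2*rf/1e9:.1f} GSa/s")

# An analog mixer (a handful of transistors and passives) shifts the band down
# to a low intermediate frequency first, so a much slower ADC is enough.
intermediate_freq = 30e6            # illustrative IF in the tens-of-MHz range (assumed)
print(f"Sampling a {intermediate_freq/1e6:.0f} MHz IF needs >= "
      f"{2*intermediate_freq/1e6:.0f} MSa/s, well within a 100 MSa/s ADC")
```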

The bottom line is that there are advantages and disadvantages to analog and digital computation. If you can tolerate noise, distortion, low dynamic range, and/or low precision, use analog. If you cannot tolerate noise or distortion and/or you need high dynamic range and high precision, then use digital. You can always throw more bits at the problem to get more precision. There is no analog equivalent of this, however.

alex.forencich
source
This deserves much more upvoting!
John U
I knew it! I just couldn't put it into good words. Nice additional info on the wireless receivers.
Smithers
2
Sample and hold circuit? Magnetic tape? Phonographic record? Photographic film? Analog memory devices certainly exist, but they have slightly different characteristics from digital ones.
alex.forencich
1
Any range, yes. But any range with any arbitrary resolution? Not so much.
alex.forencich
1
@ehsan amplification does not increase your dynamic range; your minimum value (the noise floor) gets amplified right along with the maximum.
mbrig
20

I attended an IEEE talk last month titled “Back to the Future: Analog Signal Processing”. The talk was arranged by the IEEE Solid-State Circuits Society.

It was proposed that an analog MAC (multiply and accumulate) could consume less power than a digital one. One issue, however, is that an analog MAC is subject to analog noise. So, if you present it with the same inputs twice, the results will not be exactly the same.
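A toy model of that repeatability point (a sketch only; the 1% noise figure is an assumed value, not one from the talk):

```python
import random

def digital_mac(weights, inputs):
    """Ideal digital multiply-accumulate: exactly repeatable."""
    return sum(w * x for w, x in zip(weights, inputs))

def analog_mac(weights, inputs, noise=0.01):
    """Crude analog MAC model: every product picks up ~1% random error (assumed)."""
    return sum(w * x * (1 + random.gauss(0, noise)) for w, x in zip(weights, inputs))

weights = [0.5, -1.2, 0.8, 2.0]
inputs  = [1.0,  0.3, -0.7, 0.25]

print(digital_mac(weights, inputs), digital_mac(weights, inputs))  # identical results
print(analog_mac(weights, inputs),  analog_mac(weights, inputs))   # slightly different
```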

Nick Alexeev
source
1
(+1 for analog noise.)
George Herold
Likewise, an article on the use of mechanical computers on warships arstechnica.com/information-technology/2014/03/…
Doombot
18

What you're talking about is called an Analog Computer, and was fairly widespread in the early days of computers. By about the end of the '60s they had essentially disappeared. The problem is that not only is precision much worse than for digital, but accuracy is as well. And speed of digital computation is much faster than even modest analog circuits.

Analog dividers are indeed possible, and Analog Devices makes about 10 different models. These are actually multipliers which get inserted into the feedback path of an op amp, producing a divider, but AD used to produce a dedicated divider optimized for large (60 dB, I think) dynamic range of the divisor.

Basically, analog computation is slow and inaccurate compared to digital. Not only that, but the realization of any particular analog computation requires the reconfiguration of hardware. Late in the game, hybrid analog computers were produced which could do this under software control, but these were bulky and never caught on except for special uses.

WhatRoughBeast
source
6
I like your answer, (+1) and the question. But I'll disagree on the speed part. Analog is plenty fast. The problem is precision and perhaps most importantly noise. Analog always has some noise. Digital is noise free, computer-wise.
George Herold
Thanks for the kind words. But. Analog may be "plenty" fast but in general digital is faster. And noise is easy to simulate.
WhatRoughBeast
4
Analog is fast, if it's just arithmetic, exp, sqrt etc. But as soon as you add a capacitor or inductor, needed for differentiation and integration, then it's slow. The analog computers of history were often used for solving differential equations - they were "slow". But some just did algebra. So I can see why different people may have different views on analog computation speed.
DarenW
1
Could you explain why analog is slow? In a digital computer some instructions are "slow" because they need a few iterations to complete. But with analog I believe it only takes one pass to get the result.
mrpyo
1
@mrpyo - Absolutely, you can do both functions. If you take a multiplier and connect both inputs together, it becomes a "squarer". If you use the circuit The Photon used in his answer with both inputs tied to the op amp output it generates square roots. The voltage/current relationship in a diode is exponential, so you can use that to generate exponents. And by putting a diode in a feedback path you get logarithms. In all cases, though, the dynamic range can be limited by amplifier offsets, drifts, etc. And for the diode circuits there are other error sources as well.
WhatRoughBeast
11

Is analog signal division possible (as FPU multiplication often takes one CPU cycle anyway)?

If you have an analog multiplier, an analog divider is "easy" to make:

[schematic: an op amp with an analog multiplier in its feedback loop, configured as a divider (drawn in CircuitLab)]

Assuming X1 and X2 are positive, this solves Y = X1 / X2.

Analog multipliers do exist, so this circuit is possible in principle. Unfortunately most analog multipliers have a fairly limited range of allowed input values.

Another approach would be to first use log amplifiers to get the logarithm of X1 and X2, subtract, and then exponentiate.
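Two back-of-the-envelope models of these approaches (a sketch under idealized assumptions: A stands for the op amp's open-loop gain, and the log/antilog blocks are treated as ideal and positive-input-only):

```python
import math

def feedback_divider(x1, x2, gain=1e5):
    """Multiplier in an op amp's feedback path: the loop settles at
    Y = A*(X1 - Y*X2), i.e. Y = A*X1 / (1 + A*X2), which approaches
    X1/X2 for large open-loop gain A."""
    return gain * x1 / (1 + gain * x2)

def log_domain_divider(x1, x2):
    """Log-amp approach: take logs, subtract, then exponentiate.
    Only valid for positive inputs, as with the real circuits."""
    return math.exp(math.log(x1) - math.log(x2))

x1, x2 = 3.0, 1.5
print(feedback_divider(x1, x2))    # ~1.99999 (small finite-gain error)
print(log_domain_divider(x1, x2))  # 2.0
print(x1 / x2)                     # exact reference
```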

Would it be theoretically possible to speed up modern processors if one used analog signal arithmetic (at the cost of precision) instead of digital FPUs (CPU -> DAC -> analog FPU -> ADC -> CPU)?

At heart it's a question of technology: so much has been invested in R&D to make digital operations faster that analog technology would have a long way to go to catch up at this point. But there's no way to say it's absolutely impossible.

On the other hand, I wouldn't expect my crude divider circuit above to work above maybe 10 MHz without having to do some very careful work and maybe deep dive research to get it to go faster.

Also, you say we should neglect precision, but a circuit like the one I drew is probably only accurate to 1% or so without tuning, and probably only to 0.1% even with tuning, short of inventing new technology. And the dynamic range of the inputs that can usefully be calculated on is similarly limited. So not only is it probably 100 to 1000 times slower than available digital circuits, its dynamic range is probably about 10^300 times worse as well (compared to IEEE 64-bit floating point).

The Photon
source
5
Hey I've got an old AD multiplier that does 10 MHz. I bet I can get something faster now. Just to throw a monkey wrench into this topic, if quantum computing ever pans out it will be analog.
George Herold
@GeorgeHerold, that's my best argument why quantum computing is snake oil.
The Photon
Very neat trick. Except I think that computes A·X1 / (1 + A·X2), which should be accurate for a large gain A.
Yale Zhang
@georgeherold A mixer is really just a fast analog multiplier with slightly odd input requirements, and I think microwave people are getting those up to 60 GHz or more these days
mbrig
@mbrig, the difficulty is the op-amp and keeping the feedback loop closed.
The Photon
7
  1. No, because DAC and ADC conversions take much more time than digital division or multiplication.

  2. Analog multiplication and division are not that simple, use more energy, and would not be cost-efficient (compared to a digital IC).

  3. Fast (GHz-range) analog multiplication and division ICs have a precision of about 1%. That means all you can divide on a fast analog divider is... 8-bit numbers or something like that. Digital ICs deal with numbers like this very fast.

  4. Another problem is that floating point numbers cover a very huge range - from very small to very large numbers. The 16-bit float number range is 3.4·10^-34 to 3.4·10^34. That would require 1360 dB of dynamic range (!!!) if I didn't mess anything up (see the sketch just after this list).
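A sketch of the arithmetic behind points 3 and 4 (assuming the usual 20·log10 voltage-ratio convention for dB):

```python
import math

# Point 3: roughly how many bits of result does 1% precision correspond to?
precision = 0.01
print(f"1% precision ~ {math.log2(1 / precision):.1f} bits")   # ~6.6 bits

# Point 4: dynamic range, in dB, needed to cover that float range as a raw analog level.
vmin, vmax = 3.4e-34, 3.4e34
print(f"Required dynamic range: {20 * math.log10(vmax / vmin):.0f} dB")  # 1360 dB
```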

Here you can look at analog dividers and multipliers offered by Analog Devices (link)

[image: table of analog multiplier/divider parts offered by Analog Devices]

These things are not very useful in general computing; they are much better suited to analog signal processing.

Kamil
source
4. Not exactly. Floating point numbers are represented in scientific notation - basically two numbers, a coefficient and an exponent, each of which covers a more limited range.
mrpyo
@mrpyo Are you sure? I think 16-bit float range is much higher than numbers I wrote before edit (something like 0000000000000.1 and 10000000000000).
Kamil
en.wikipedia.org/wiki/IEEE_floating_point For C float it's 23 bits for coefficient, 8 bits for exponent and 1 bit for sign. You would have to represent those 3 ranges in analog.
mrpyo
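Following up on the bit layout mentioned in the comment above, a minimal sketch (Python, used only for illustration) that unpacks a 32-bit IEEE 754 float into its three fields:

```python
import struct

def float32_fields(x):
    """Split a number into the sign, exponent and coefficient (mantissa) bits
    of its IEEE 754 single-precision encoding (1 + 8 + 23 bits)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign     = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa

print(float32_fields(1.0))     # (0, 127, 0)        -> +1.0    * 2^(127 - 127)
print(float32_fields(-6.25))   # (1, 129, 4718592)  -> -1.5625 * 2^(129 - 127)
```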
Couldn't you reduce the required frequency by having many units in series and using only one at a time?
mrpyo
4
The true analog equivalent of Floating Point would be the logarithmic domain, therefore absurdly high dynamic range (higher than the FP mantissa) is not necessary. Otherwise, good points.
Brian Drummond