Would it be theoretically possible to speed up modern processors by using analog-signal arithmetic (at the cost of precision and accuracy) instead of digital FPUs (CPU -> DAC -> analog FPU -> ADC -> CPU)?
Is analog-signal division possible (given that FPU multiplication often takes only one CPU cycle anyway)?
Answers:
Fundamentally, all circuits are analog. The problem with performing calculations with analog voltages or currents is a combination of noise and distortion. Analog circuits are subject to noise and it is very hard to make analog circuits linear over huge orders of magnitude. Each stage of an analog circuit will add noise and/or distortion to the signal. This can be controlled, but it cannot be eliminated.
Digital circuits (namely CMOS) basically side-step this whole issue by using only two levels to represent information, with each stage regenerating the signal. Who cares if the output is off by 10%? It only has to be above or below a threshold. Who cares if the output is distorted by 10%? Again, it only has to be above or below a threshold. At each threshold comparison the signal is basically regenerated, and the noise, nonlinearity, and so on are stripped out. This is done by amplifying and clipping the input signal: a CMOS inverter is just a very simple two-transistor amplifier operated open-loop as a comparator. If noise does push the level across the threshold, you get a bit error. Processors are generally designed for bit error rates on the order of 10^-20, IIRC. Because of this, digital circuits are incredibly robust; they can operate over a very wide range of conditions because linearity and noise are basically non-issues.

It's almost trivial to work with 64-bit numbers digitally. 64 bits represents 385 dB of dynamic range. That's 19 orders of magnitude. There is no way in hell you are going to get anywhere near that with analog circuits. If your resolution is 1 picovolt (10^-12 V), which would be instantly swamped by thermal noise anyway, then you have to support a maximum value on the order of 10^7 V. That's 10 megavolts. There is absolutely no way to operate over that kind of dynamic range in analog; it's simply impossible.

Another important trade-off in analog circuitry is bandwidth/speed/response time versus noise/dynamic range. Narrow-bandwidth circuits average out noise and perform well over a wide dynamic range; the trade-off is that they are slow. Wide-bandwidth circuits are fast, but noise is a larger problem, so the dynamic range is limited. With digital, you can throw bits at the problem to increase dynamic range, do things in parallel to increase speed, or both.
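As a rough check on those numbers, here is a minimal Python sketch; the 1 pV step is the same illustrative figure used above, not a real design value:

```python
import math

# Dynamic range of an N-bit digital word, in dB: 20*log10 of the ratio between
# the largest and smallest representable step.
def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

bits = 64
print(f"{bits}-bit dynamic range: {dynamic_range_db(bits):.0f} dB")   # ~385 dB
print(f"Orders of magnitude: {bits * math.log10(2):.1f}")             # ~19.3

# If the analog step size (LSB) were 1 picovolt, the full-scale value needed
# to match 64 bits of range would be absurd:
lsb_volts = 1e-12
print(f"Required full scale: {lsb_volts * 2 ** bits:.2e} V")          # ~1.8e7 V, i.e. megavolts
```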
However, for some operations, analog has advantages - faster, simpler, lower power consumption, etc. Digital has to be quantized in level and in time. Analog is continuous in both. One example where analog wins is in the radio receiver in your wifi card. The input signal comes in at 2.4 GHz. A fully digital receiver would need an ADC running at at least 5 gigasamples per second. This would consume a huge amount of power. And that's not even considering the processing after the ADC. Right now, ADCs of that speed are really only used for very high performance baseband communication systems (e.g. high symbol rate coherent optical modulation) and in test equipment. However, a handful of transistors and passives can be used to downconvert the 2.4 GHz signal to something in the MHz range that can be handled by an ADC in the 100 MSa/sec range - much more reasonable to work with.
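To put rough numbers on the receiver example, here is a back-of-the-envelope sketch assuming plain Nyquist sampling, an illustrative 20 MHz channel, and a 20 MHz intermediate frequency after mixing (the channel width and IF are assumptions, not Wi-Fi specifics):

```python
# Minimum (Nyquist) sample rates: digitizing the 2.4 GHz carrier directly
# versus digitizing after an analog mixer has shifted it down to a low IF.
carrier_hz = 2.4e9       # Wi-Fi band, as in the text
channel_bw_hz = 20e6     # assumed channel bandwidth
if_hz = 20e6             # assumed intermediate frequency after downconversion

direct_rate = 2 * (carrier_hz + channel_bw_hz / 2)
downconverted_rate = 2 * (if_hz + channel_bw_hz / 2)

print(f"Direct RF sampling:   >= {direct_rate / 1e9:.1f} GSa/s")         # ~4.8 GSa/s
print(f"After downconversion: >= {downconverted_rate / 1e6:.0f} MSa/s")  # ~60 MSa/s
```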
The bottom line is that there are advantages and disadvantages to analog and digital computation. If you can tolerate noise, distortion, low dynamic range, and/or low precision, use analog. If you cannot tolerate noise or distortion and/or you need high dynamic range and high precision, then use digital. You can always throw more bits at the problem to get more precision. There is no analog equivalent of this, however.
source
I attended an IEEE talk last month titled “Back to the Future: Analog Signal Processing”. The talk was arranged by the IEEE Solid-State Circuits Society.
It was proposed that an analog MAC (multiply and accumulate) could consume less power than a digital one. One issue, however, is that an analog MAC is subject to analog noise, so if you present it with the same inputs twice, the results will not be exactly the same.
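A minimal sketch of that non-repeatability, modeling the analog MAC as an ideal multiply-accumulate plus additive Gaussian noise (the noise level is an arbitrary assumption, not a figure from the talk):

```python
import random

def analog_mac(weights, inputs, noise_rms=1e-3):
    """Ideal multiply-accumulate plus additive Gaussian noise per operation."""
    acc = 0.0
    for w, x in zip(weights, inputs):
        acc += w * x + random.gauss(0.0, noise_rms)
    return acc

w = [0.5, -1.2, 0.8, 2.0]
x = [1.0, 0.3, -0.7, 0.25]

# Same inputs, two runs: the results differ slightly, unlike a digital MAC.
print(analog_mac(w, x))
print(analog_mac(w, x))
```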
source
What you're talking about is called an analog computer, and these were fairly widespread in the early days of computing. By about the end of the '60s they had essentially disappeared. The problem is that not only is precision much worse than for digital, but accuracy is as well. And digital computation is much faster than even modest analog circuits can manage.
Analog dividers are indeed possible, and Analog Devices makes about 10 different models. These are actually multipliers which get inserted into the feedback path of an op amp, producing a divider, but AD used to produce a dedicated divider optimized for large (60 dB, I think) dynamic range of the divisor.
Basically, analog computation is slow and inaccurate compared to digital. Not only that, but the realization of any particular analog computation requires the reconfiguration of hardware. Late in the game, hybrid analog computers were produced which could do this under software control, but these were bulky and never caught on except for special uses.
source
If you have an analog multiplier, an analog divider is "easy" to make:
(Schematic created using CircuitLab: an analog divider built from an analog multiplier and an op amp in a feedback loop.)
Assuming X1 and X2 are positive, this solves Y = X1 / X2.
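A minimal numerical sketch of how this works when, as an earlier answer describes, the multiplier sits inside an op amp's feedback loop (idealized components, arbitrary loop-gain constant): the loop keeps adjusting Y until the fed-back product Y*X2 matches X1, so Y settles at X1/X2.

```python
def feedback_divider(x1, x2, gain=0.1, steps=500):
    """Crude model of an op amp with a multiplier in its feedback path."""
    y = 0.0
    for _ in range(steps):
        error = x1 - y * x2   # difference seen at the op amp's input
        y += gain * error     # high-gain stage drives Y to null the error
    return y

print(feedback_divider(6.0, 3.0))   # settles near 2.0, i.e. 6 / 3
print(feedback_divider(1.0, 4.0))   # settles near 0.25
```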
Analog multipliers do exist, so this circuit is possible in principle. Unfortunately most analog multipliers have a fairly limited range of allowed input values.
Another approach would be to first use log amplifiers to get the logarithm of X1 and X2, subtract, and then exponentiate.
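A sketch of the log-amp route in the same spirit, again valid only for positive inputs and assuming ideal (noiseless, offset-free) log and antilog stages:

```python
import math

def log_antilog_divider(x1, x2):
    """Take logs, subtract, then exponentiate: the log-amp chain in idealized form."""
    return math.exp(math.log(x1) - math.log(x2))

print(log_antilog_divider(6.0, 3.0))   # 2.0
print(log_antilog_divider(0.5, 8.0))   # 0.0625
```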
At heart it's a question of technology: so much has been invested in R&D to make digital operations faster that analog technology would have a long way to go to catch up at this point. But there's no way to say it's absolutely impossible.
On the other hand, I wouldn't expect my crude divider circuit above to work above maybe 10 MHz without some very careful work, and perhaps deep research, to get it to go faster.
Also, you say we should neglect precision, but a circuit like the one I drew is probably only accurate to 1% or so without tuning, and probably only to 0.1% even with tuning, short of inventing new technology. The dynamic range of inputs it can usefully handle is similarly limited. So not only is it probably 100 to 1000 times slower than available digital circuits, its dynamic range is probably about 10^300 times worse as well (comparing to IEEE 64-bit floating point).
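To put a number on that comparison, here is a back-of-the-envelope sketch; the four decades of usable analog input range is an assumption chosen for illustration, not a measured figure:

```python
import math
import sys

# Largest finite IEEE 754 double (the "64-bit floating point" above):
double_decades = math.log10(sys.float_info.max)   # ~308 decades above 1.0

# Assume the analog divider is usable over ~4 decades of input
# (say 100 uV to 1 V at roughly 0.1% accuracy): an illustrative guess.
analog_decades = 4

print(f"64-bit float reaches about 10^{double_decades:.0f}")
print(f"Range ratio: about 10^{double_decades - analog_decades:.0f}")   # ~10^304
```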
source
No, because DAC and ADC conversions take much more time than digital division or multiplication.
Analog multiplication and division are not that simple, use more energy, and would not be cost-efficient compared to digital ICs.
Fast (GHz-range) analog multiplication and division ICs have a precision of about 1%. That means all you can divide on a fast analog divider is... 8-bit numbers or something like that. Digital ICs deal with numbers like that very quickly.
Another problem is that floating-point numbers cover a huge range, from very small to very large values. A 32-bit float spans roughly 1.2*10^-38 to 3.4*10^38, which would require something like 1500 dB of dynamic range (!!!) if I didn't mess anything up.
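A quick sketch of both figures in this answer, assuming "1% precision" means 1 part in 100 and using the standard single-precision limits:

```python
import math

# Dynamic range implied by a 32-bit float: smallest normal value (~1.2e-38)
# up to the largest finite value (~3.4e38), expressed in dB.
f32_min, f32_max = 1.18e-38, 3.40e38
print(f"32-bit float span: {20 * math.log10(f32_max / f32_min):.0f} dB")  # ~1530 dB

# Effective resolution of a 1%-accurate analog multiplier/divider:
precision = 0.01
print(f"1% precision is about {math.log2(1 / precision):.1f} bits")       # ~6.6 bits; '8-bit' above is a rounded-up ballpark
```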
Here you can look at analog dividers and multipliers offered by Analog Devices (link)
These things are not very useful for general computing; they are much better suited to analog signal processing.
source
For a 32-bit float, it's 23 bits for the coefficient (mantissa), 8 bits for the exponent, and 1 bit for the sign. You would have to represent those three fields in analog, as the sketch below illustrates.
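For concreteness, a small Python sketch (standard library only) that pulls those three fields out of a 32-bit float:

```python
import struct

def float32_fields(x):
    """Unpack an IEEE 754 single-precision value into sign, exponent, and mantissa bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign     = bits >> 31            # 1 bit
    exponent = (bits >> 23) & 0xFF   # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF       # 23 bits of fraction
    return sign, exponent, mantissa

print(float32_fields(1.0))     # (0, 127, 0)
print(float32_fields(-0.75))   # (1, 126, 4194304)
```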
Actually, researchers are now revisiting analog computing techniques in the context of VLSI, because analog computation can provide much higher energy efficiency than digital in specific applications. See this paper:
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7313881&tag=1
source