What kind of hardware implements Fourier transform?
I looked around online but found nothing relevant. It is very difficult for an electronic device to decompose a signal into different frequencies.
How is this done at the bare-metal level?
Any suggested sources or comments would be very helpful.
Tags: power-electronics, hardware, fourier
asked 2 days ago by veronika
A lot of the time you don't need an FT to do signal processing, especially filtering. For example, you can use passive or active filters, which rely on the properties of capacitors and inductors. Even in the digital domain, when working with values out of an ADC, you can go without an FT for some tasks (e.g. see exponential smoothing).
– anrieff, 2 days ago
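As a minimal illustration of the exponential-smoothing alternative mentioned in the comment above (my own sketch, not anrieff's; the smoothing factor and test signal are arbitrary):

```python
import numpy as np

def exponential_smoothing(samples, alpha=0.2):
    """One-pole low-pass on ADC samples: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    y = np.empty(len(samples))
    acc = samples[0]                              # start from the first sample
    for i, x in enumerate(samples):
        acc = alpha * x + (1.0 - alpha) * acc     # no Fourier transform involved
        y[i] = acc
    return y

# Smooth a noisy sine without ever leaving the time domain.
t = np.linspace(0.0, 1.0, 500)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)
smoothed = exponential_smoothing(noisy)
```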
"What kind of..." questions are too broad to fit the Stack Exchange model. Typically, when one specifically mentions a Fourier transform, something capable of computation is implied (approximately a convolution, i.e. delay, multiply, and accumulate, either in parallel or with storage and logic for an iterative sequence), but the hardware requirements depend on the application requirements, and as many are pointing out, there are alternatives to numeric (or at least digital) computation.
– Chris Stratton, 2 days ago
A lens does (not an answer since it is not an electronic device but then again neither are vibrating reeds).
– Ghanima, yesterday
en.wikipedia.org/wiki/Cooley%E2%80%93Tukey_FFT_algorithm
– mkeith, yesterday
6 Answers
Devices using the Fourier Transform
"It is very difficult for an electronic device to decompose a signal into different frequencies."
It's not.
There are actually quite a few devices that do exactly that, explicitly.
First of all, you have to distinguish between the continuous Fourier transform (which you probably know as $\mathcal{F}\left\{x(t)\right\}(f)=\int_{-\infty}^{\infty} x(t)\,e^{-j2\pi f t}\,\mathrm{d}t$) and the Discrete Fourier Transform (DFT), which is what you can compute on a sampled signal.
There are devices that implement both.
Continuous Fourier Transform
There's little actual need for this in digital electronics – digital signals are sampled, so you'd use the DFT.
In optics and photonics, however, there's an actual chance of getting perfectly periodic signals over a "large" (read: nearly as infinite as the integral above) length. Effectively, an acousto-optic element can be excited with one or multiple tones, and it will have the same correlating effect as the integral above. You don't even have to look at the 2018 Physics Nobel Prize winners to find an example of Fourier optics.
Discrete Fourier Transform
This is really all over the place; it's such a standard processing step that, as communication engineers, we often forget it's even there.
So this list is far from complete; these are just examples:
Equalizers: It's pretty easy to build a digital audio equalizer with a DFT. Typically, the zero-forcing equalizer used in communication systems uses a DFT to find the frequency-domain representation of the channel that needs to be "removed", inverts it, and uses the IDFT to bring the result back to the time domain for use as taps in an FIR filter.
Antenna Arrays / Beamsteering: If you have an array of antennas at fixed distances from each other, you can steer the beam of that array by calculating the DFT of the "directional vector" you'd like to achieve and using the result as complex coefficients multiplied onto the transmit signal that you distribute to these antennas. Real-world MIMO systems do that.
Direction Finding: What works in the transmit direction works exactly the same way, in reverse, in the receive direction: get a signal from each antenna in your array, find the complex factors between these signals, do an IDFT, and get a vector telling you how much power came from which direction. Easy! This is done to estimate where aircraft are, where Wi-Fi communication partners are, and where submarines are (though there it's not antennas but underwater microphones)…
Channelization: Satellites in space are expensive, so multiple TV programs need to be uplinked to one satellite. You can use a DFT (especially in a polyphase filterbank) to put multiple channels into one uplink, or to isolate individual channels from one wideband signal. That's not limited to TV; it happens in audio processing, medical imaging, ultrasonic analysis, radio broadcasting…
Data Encoding for multicarrier systems: To combat the problems of wide channels (which you need if you want to transport many bits per second), namely the need for complex equalizers, you'd want to chop your channel up into many small channels (see "Channelization" above). However, you can understand the DFT alone as a filterbank of frequency-shifted, time-domain-rectangular filters. One nice thing about that is that these channels are very tightly packed. The other nice thing is that the convolution with the channel reduces to a point-wise multiplication, which is super simple to revert. We call that method OFDM, and Wi-Fi, LTE, 5G, WiMAX, ATSC, DVB-T, Digital Audio Broadcasting, DSL, and many more systems use it.
Efficient Filtering: An FIR filter is a convolution with the filter impulse response in the time domain. As such, it uses a lot of operations per output sample – it's very computationally intense. You can greatly reduce that effort when you implement fast convolution, which is based on DFT'ing sections of input samples, multiplying them with the DFT of the impulse response in the frequency domain, overlapping with previous segments, and transforming back to the time domain (a minimal sketch follows this list). That's so handy that it's used in almost all systems that have long FIR filters (and "long" might start at numbers as benign as 16 taps).
Radar: Classical automotive radars use self-modulating FMCW radar; to get a picture of both the relative speed and the distance of the reflectors they observe, you typically do a two-dimensional DFT (which really is just DFT'ing all columns of a matrix and then all rows of the result).
Audio and Image/Video Compression: Though JPEG uses the Discrete Cosine Transform rather than the DFT itself, there are plenty of codecs that use that mechanism, or at least significant parts of a DFT.
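As promised in the Efficient Filtering item, here is a minimal overlap-add fast-convolution sketch, assuming Python/NumPy; the block length and the 16-tap example filter are arbitrary illustration choices, not something from the answer.

```python
import numpy as np

def fir_filter_overlap_add(x, h, block_len=256):
    """FIR-filter x with impulse response h via FFT-based overlap-add fast convolution."""
    n_fft = block_len + len(h) - 1              # linear-convolution length per block
    H = np.fft.rfft(h, n_fft)                   # DFT of the taps, computed once
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block_len):
        X = np.fft.rfft(x[start:start + block_len], n_fft)   # DFT of one input section
        seg = np.fft.irfft(X * H, n_fft)        # point-wise multiply, back to time domain
        end = min(start + n_fft, len(y))
        y[start:end] += seg[:end - start]       # overlap-add with the previous segment
    return y

x = np.random.default_rng(0).standard_normal(10_000)
h = np.ones(16) / 16                            # a "long" 16-tap moving-average FIR
assert np.allclose(fir_filter_overlap_add(x, h), np.convolve(x, h))
```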
Note that the above list only contains things that perform DFTs during operation. You can be 100% sure that during the design of anything remotely related to RF – especially antennas, mixers, amplifiers, (de)modulators – a lot of Fourier transforms / spectral analysis was involved. The same goes for audio device design, any high-speed data link design, image analysis…
How is it done?
I'll just address the DFT here.
Usually, that's implemented as an FFT, the Fast Fourier Transform. That's one of the most important algorithmic discoveries of the 20th century, so I'll spare but a few words on it, because there are literally thousands of articles out there that explain the FFT.
Look at the $e^{-j2\pi\frac{n}{N}k}$ multipliers of a DFT. You'll notice that these can basically be understood as $\left(e^{-j2\pi\frac{1}{N}k}\right)^n=W^n$; and there you have your twiddle factor. Now you avoid recalculating coefficients you've already calculated, and just swap a sign where necessary.
That way, you can reduce the complexity of a DFT from $N^2$ (which would be the complexity if you implemented the DFT as the naive sum) to something on the order of $N\log N$ – a huge win, even for relatively small $N$.
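To make the twiddle-factor recursion concrete, here is a teaching sketch (power-of-two lengths only, Python/NumPy; not how production libraries are written) of a radix-2 decimation-in-time FFT, checked against the naive $N^2$ sum:

```python
import numpy as np

def naive_dft(x):
    """Direct evaluation of the DFT sum: N^2 multiply-accumulates."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)    # matrix of twiddle factors
    return W @ x

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    even = fft_radix2(x[0::2])                      # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])                       # DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + twiddle * odd,    # butterfly, upper half
                           even - twiddle * odd])   # lower half: just a sign swap

x = np.random.default_rng(1).standard_normal(1024)
assert np.allclose(fft_radix2(x), naive_dft(x))
assert np.allclose(fft_radix2(x), np.fft.fft(x))    # same result as a library FFT
```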
It's relatively straightforward to implement that in hardware if you can get your whole input vector at once – you get $\log N$ as combinatorial depth and fixed coefficients at every step. The trick is knowing how (or whether) to pipeline the individual layers, and how to use the specific hardware type you have (ASIC? FPGA? FPGA with hardware multipliers?). You can basically piece together an $N=2^l$-length transform from nothing but what we call butterflies, which you'll recognize once you read about the FFT.
In software, the principle is the same, but you need to know how to multi-thread very large transforms, and how to access memory as fast as possible by utilizing your CPU caches optimally.
However, for both hardware and software, there are libraries that you'd just use to calculate the DFT (FFT). For hardware, that usually comes from your FPGA vendor (e.g. Altera/Intel, Xilinx, Lattice…), from a large ASIC design tool company (e.g. Cadence), or from your ASIC house.
answered 2 days ago, edited 2 days ago, by Marcus Müller
Impressive dedication to your art, agree that 'long' is O(16) for FIR filters.
– Neil_UK, 2 days ago
:) thank you! Seen you write far more impressive answers than this, though :)
– Marcus Müller, 2 days ago
While this is a very detailed answer, and will provide an accurate FFT of an incoming signal, it does not answer the question. This is a digital process applied to an input signal; it is not a solution implemented in hardware (other than the A/D converter on the front end).
– Jennifer, 2 days ago
Jennifer is right. You should discuss analogue DFT, or at least clarify that DFT means discrete FT, but not necessarily digital FT.
– leftaroundabout, 2 days ago
Page 43 (PDF numbering) in this proceedings discusses an FIR based on an analogue FFT: imagesensors.org/Past%20Workshops/Marvin%20White%20Collection/…
– leftaroundabout, yesterday
You can't get much more "bare metal" and "hardware" than a set of vibrating reeds:
http://www.stichtco.com/freq_met.htm
So, what hardware does a Fourier transform? A bunch of resonant systems can do that.
answered 2 days ago by Jasen
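In the digital domain, the closest relative of one vibrating reed per frequency is the Goertzel algorithm: one second-order resonator per DFT bin of interest. A minimal sketch (my own illustration, with an arbitrary 440 Hz test tone), assuming NumPy:

```python
import numpy as np

def goertzel_power(x, k):
    """|X[k]|^2 for one DFT bin, computed with a single second-order resonator."""
    N = len(x)
    coeff = 2.0 * np.cos(2.0 * np.pi * k / N)
    s1 = s2 = 0.0
    for sample in x:                              # drive the resonator with the samples
        s1, s2 = sample + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

fs, N = 8000, 800                                 # 10 Hz bin spacing
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 440.0 * t)                 # a 440 Hz tone sits exactly on bin k = 44
print([round(goertzel_power(x, k), 3) for k in (40, 44, 48)])   # only bin 44 responds
assert np.isclose(goertzel_power(x, 44), np.abs(np.fft.fft(x)[44]) ** 2)
```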
Huh, fancy. My father told me of similar devices they used at university for frequency analysis of vibrating machines.
– Marcus Müller, 2 days ago
This is roughly how your ears work, too; see cochlea.eu/en/cochlea/function
– zwol, 2 days ago
Surface acoustic wave (SAW) devices were used as analog electromechanical devices to perform several signal-processing tasks. Most papers on them are paywalled.
Chapter 16 of Colin Campbell's 1989 book Surface Acoustic Wave Devices and their Signal Processing Applications
Publisher Summary
This chapter presents fast real-time Fourier transform techniques using SAW linear frequency modulated (FM) chirp filters with processing times of only a few microseconds. SAW based techniques have applications to sonar, radar, spread spectrum, and other communications technologies requiring the fast analysis or filtering of complex signals. With SAW-based Fourier transform systems, this is carried out in the receiver intermediate-frequency (IF) stages. SAW linear FM chirp filters can be configured to effect a number of Fourier transform manipulations. Three of these are (1) single-stage Fourier transformers for spectrum or network analysis, (2) two-stage Fourier transform processors for cepstrum analysis, and (3) two-stage Fourier transform processors for real-time filtering. SAW-based Fourier transform processors for the spectral analysis of signals, known as compressive receivers, are available in a wide range of configurations to provide spectral resolutions over analytic bandwidths up to 1 GHz. The chapter also discusses the use of bilinear mixers in a SAW Fourier transform processor.
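The "multiply, chirp-filter, multiply" arrangement such SAW chirp-filter processors implement rests on the chirp-transform identity $nk = \frac{n^2 + k^2 - (k-n)^2}{2}$. A small NumPy sketch of the same arithmetic (my own illustration, not from the chapter), to show why a dispersive chirp filter plus two multipliers yields a Fourier transform:

```python
import numpy as np

def chirp_transform_dft(x):
    """DFT via the multiply / chirp-convolve / multiply arrangement (Bluestein identity)."""
    N = len(x)
    n = np.arange(N)
    down_chirp = np.exp(-1j * np.pi * n**2 / N)        # e^{-j*pi*n^2/N}
    pre = x * down_chirp                                # first multiplier
    m = np.arange(-(N - 1), N)
    up_chirp = np.exp(1j * np.pi * m**2 / N)            # the dispersive (chirp) filter
    conv = np.convolve(pre, up_chirp)                   # convolution in the "delay line"
    return conv[N - 1:2 * N - 1] * down_chirp           # second multiplier

rng = np.random.default_rng(2)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
assert np.allclose(chirp_transform_dft(x), np.fft.fft(x))
```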
This can be done at the – literally – bare-metal level using a mechanical harmonic analyzer:
https://www.youtube.com/watch?v=NAsM30MAHLg
And sorry to give a link-only answer, but this one you really have to see for yourself.
Yep, the short series is worth a watch.
– uhoh, 2 hours ago
A Fourier transform of a discretely sampled function is a change of basis: from a series of (typically) time-of-sample values to an equivalent series of frequency-component values.
It is a linear transformation (the Fourier transform of a sum of two series is the sum of the Fourier transforms of the two series), so it is identical to a matrix operating on a vector (the time-of-sample series).
An N×N matrix operating on a vector with N components generates a second vector with N components by doing N^2 multiplies and (N^2 - N) additions.
Okay, so now, how does the metal do this?
There's a gizmo called a 'harmonic analyzer', a kind of analog computer, which multiplies and accumulates one frequency at a time (basically one row of the matrix). It involves plotting the input function on graph paper, connecting a polar planimeter (a mechanical integrator) and a linkage (a mechanical multiplier), and tracing the curve; that gives you... one element of the output.
Using it isn't too bad, but for a 1024-element transform, you have to do the operation... 1024 times. This is how tide tables were calculated a century ago, though. See the Mathematical Instruments article here, page 71.
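In arithmetic terms, one pass of such an analyzer performs the multiply-and-accumulate for a single row of that matrix. A tiny NumPy illustration (my own, with an arbitrary test signal), just to make the N-operations-per-output-element point concrete:

```python
import numpy as np

N = 1024
x = np.random.default_rng(0).standard_normal(N)    # the sampled curve traced on the plot
n = np.arange(N)
k = 3                                              # the one harmonic this "pass" extracts

a_k = np.sum(x * np.cos(2 * np.pi * k * n / N))    # N multiplies + N-1 adds: cosine part
b_k = np.sum(x * np.sin(2 * np.pi * k * n / N))    # and again for the sine part

# Repeating this for every k is the N^2 route; one pass matches one bin of the full DFT.
assert np.isclose(a_k - 1j * b_k, np.fft.fft(x)[k])
```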
Then there's the manual method, using a slide rule and an adding machine, which requires looking up the matrix elements in a table of sines/cosines; for a 1024-element sampling, that means operating your slide rule over 2 million times.
A general-purpose computer can do the operation, too. Some specialized CPU designs (digital signal processors, DSPs) are made with accelerated multiply-accumulate hardware, which speeds things up.
And there's a very clever algorithm, the FFT, that gets around the problem of N samples requiring N^2 operations by noting that a 4x4 matrix is a 2x2 matrix of 2x2 matrices; there's a way to take any composite length N (a power of two, like 1024, is convenient) and use only on-the-order-of N*log(N) operations instead of N^2. That means 1024 inputs require only 61,440 operations instead of 1,048,576.
The FFT doesn't simplify a general discrete Fourier transform, because it requires N to be composite (and almost always a power of two is used), but it can be hardware-supported in a variety of ways, so that the operations (multiply-accumulate) become the time-limiting step.
One modern (2019) chip (the ADBSP-561 from Analog Devices; see the MMAC column) can do 2400 such operations per microsecond.
That's basically what a spectrum analyzer does:
https://www.electronics-notes.com/articles/test-methods/spectrum-analyzer/realtime-spectrum-analyser.php
No it's not what a spectrum analyzer does in general. Some (many) spectrum analyzers have an FFT mode, but even then, what the spectrum analyzer shows you is a PSD estimate, not a Fourier transform.
– Marcus Müller, 2 days ago
Answers that are mainly a link to another site do not provide lasting value, as the link may be broken tomorrow. You should summarize the important content from the link in your own answer.
– Elliot Alderson, 2 days ago
@MarcusMüller -- what is a "PSD estimate"?
– Pete Becker, 2 days ago
@PeteBecker An estimate of the Power Spectral Density: the distribution of expected power over frequency for a signal that you have to consider random because you don't know it. The mathematically exact definition of a PSD is "the Fourier transform of the autocorrelation function of a stochastic process"; but in most cases we just assume the stochastic process (== random signal) to be wide-sense stationary, and hence FT(ACF) == expectation(|FT(time signal)|²).
– Marcus Müller, 2 days ago
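A minimal sketch of that "expectation of the squared transform" idea, via an averaged periodogram over segments (my own illustration; the segment length and test signal are arbitrary), assuming NumPy:

```python
import numpy as np

def averaged_periodogram(x, seg_len=256):
    """Estimate the PSD as the average of |FFT|^2 over non-overlapping segments."""
    n_seg = len(x) // seg_len
    segs = x[:n_seg * seg_len].reshape(n_seg, seg_len)
    spectra = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / seg_len
    return spectra.mean(axis=0)                 # the averaging stands in for the expectation

fs = 1000.0
t = np.arange(100_000) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 100.0 * t) + rng.standard_normal(t.size)   # tone buried in noise
psd = averaged_periodogram(x)
freqs = np.fft.rfftfreq(256, d=1 / fs)
print(freqs[np.argmax(psd)])                    # peaks near 100 Hz, the tone frequency
```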
@MarcusMüller -- thanks for clarifying.
– Pete Becker, 2 days ago
$begingroup$
Devices using the Fourier Transform
It is very difficult for an electronic device to decompose a signal in different frequencies.
It's not.
There's actually quite a few devices that do that, explicitly.
First of all, you'll have to make a difference between the continuous Fourier transform (which you probably know as $mathcal Fleft{x(t)right}(f)=int_{-infty}^{infty} x(t)e^{j2pi f t},mathrm dt$) and the Digital Fourier Transform (DFT), which is what you can do with a sampled signal.
For both, there's devices that implement these.
Continuous Fourier Transform
There's little in the ways of actual need for this in digital electronics – digital signals are sampled, so you'd use the DFT.
In optics and photonics you'll notice that there's an actual chance to get perfectly periodic things for a "large" (read as: nearly as infinite as the integral above) length. Effectively, an acousto-optic element can be excited with one or multiple tones, and it will have the same correlating effects as the integral above. You don't have to look at 2018's Physics Nobel Prize winners to find an example of Fourier Optics.
Discrete Fourier Transform
This is really all over the place; it's such a standard processing step that as a communication engineer, we often even forget where it is.
So, this list is far less than complete; just examples:
Equalizers: It's pretty easy building a digital audio equalizer with a DFT. Typically, the zero-forcing equalizer type for communication systems uses a DFT to find the frequency domain representation of the channel necessary to be "removed", inverts that and uses the IDFT to get back that back to time domain to be used as taps in a FIR filter.
Antenna Arrays / Beamsteering: If you have an array of antennas in fixed distance from each other, you can steer the beam of these antennas, by calculating the DFT of the "directional vector" you'd like to achieve and use the result as complex coefficients to be multiplied with the transmit signal that you distribute to these antennas. Real-world MIMO systems do that.
Direction Finding: What works in transmit direction works exactly the same, but reverse, in receive direction: Get a signal for each of your antennas in your array, find the complex factors between these signals, do an IDFT, get a vector containing the info how power came from which direction. Easy! And done for estimation where aircraft are, where Wifi communication partners are, submarines (though there it's not antennas but underwater microphones)…
Channelization: Satellites in space are expensive, so multiple TV programs need to be uplinked to one satellite. You can use a DFT (especially in an Polyphase Filterbank) to put multiple channels in one uplink, or to isolate individual channels from one wideband signal. That's not a domain of TV; it happens in audio processing, medical imaging, ultrasonic analysis, Radio broadcasting…)
Data Encoding for multicarrier systems: To combat the problems of wide channels (which you need if you want to transport many bits per second), namely the need for complex equalizers, you'd want to chop up your channel in many small channels (see "channelization" above). However, you can understand the DFT alone as Filterbank for frequency-shifted time-domain-rectangular filters. The nice thing about that is that these channels are very tightly packed. The other nice thing is that the convolution with the channel reduces to a point-wise multiplication which is super simple to revert. We call that method OFDM, and all Wifi, LTE, 5G, WiMax, ATSC, DVB-T, Digital Audio Broadcasting, DSL, and many more systems use that.
Efficient Filtering: A FIR filter is a convolution with the filter impulse response in time domain. As such, it uses a lot of operations per output sample – it's very computationally intense. You can greatly reduce that effort when you implement fast convolution, which is based on DFT'ing sections of input samples, mutliplying them with the DFT of the impulse response in frequency domain, overlapping with previous segments, and back-transformation to time domain. That's so handy that it's used in almost all systems that have long FIR filters (and "long" might start with so benign numbers as "16 taps").
Radar: Classical automotive radars use self-modulating FMCW radars; to get a picture of both the relative speed and distance of reflectors observed by that, you typically do a two-dimensional DFT (which really is just DFT'ing all columns of a matrix and afterwards all rows of the result).
Audio and Image/Video Compression: Though JPEG uses the Discrete Cosine Transform, not the DFT itself, there's ample of codecs that mechanism which at least use significant parts of a DFT.
Note that the above list only contains things that do DFTs during operation. You can be 100% sure that during design of anything remotely related to RF, especially antenas, mixers, amplifiers, (de)modulators, a lot of Fourier Transforms / Spectral analysis was involved. Same goes for audio device design, any high-speed data link design, image analysis…
How is it done?
I'll just address the DFT here.
Usually, that's implemented as an FFT, Fast Fourier Transform. That's one of the most important algorithmic discoveries of the 20th century, so I will spare but few words on it, because there's literally thousands of articles out there that explain the FFT.
You go in and look at the $e^{j2pi frac nN k}$ multipliers of a DFT. You'll notice that these can basically be understood as ${e^{j2pi frac 1N k}}^n=W^n$; and there you have your twiddle factor. Now you avoid calculating coefficients that you've already calculated, and just swap a sign where necessary.
That way, you can reduce the complexity of a DFT from the $N^2$ (which would be the complexity if you implemented the DFT as the naive sum) to something in the order of $Nlog N$ – a huge win, even for relatively small $N$.
It's relatively straightforward to implement that in hardware, if you can get your whole input vector at once – you get $log N$ as a combinatorial depth and fixed coefficients at every step. The trick is knowing how (whether) to pipeline the individual layers, and how to use the specific hardware type you have (ASIC? FPGA? FPGA with hardware multipliers?). You can basically piece together $N=2^l$-length transform only from what we call Butterflies, which you'll recognize once you read about the FFT.
In software, the principle is the same, but you need to know how to multi-thread very large transforms, and how to access memory as fast as possible by utilizing your CPU caches optimally.
However, for both hardware and software, there's libraries that you'd just use to calculate the DFT (FFT). For Hardware, that usually comes from your FPGA vendor (e.g. Altera/Intel, Xilinx, Lattice…) , or a large ASIC design tool company (Cadence) or your ASIC house.
$endgroup$
$begingroup$
Impressive dedication to your art, agree that 'long' is O(16) for FIR filters.
$endgroup$
– Neil_UK
2 days ago
$begingroup$
:) thank you! Seen you write far more impressive answers than this, though :)
$endgroup$
– Marcus Müller
2 days ago
1
$begingroup$
While this is a very detailed answer, and will provide an accurate FFT of an incoming signal, it does not answer the question. This is a digital process applied to an input signal, it is not a solution implemented in hardware (other than the AD converter on the front end).
$endgroup$
– Jennifer
2 days ago
1
$begingroup$
Jennifer is right. You should discuss analogue DFT or at least clarify that DFT means discrete FT, but not necessarily digital FT.
$endgroup$
– leftaroundabout
2 days ago
2
$begingroup$
Page 43 (pdf numbering) in this proceedings discusses a FIR based on analogue FFT: imagesensors.org/Past%20Workshops/Marvin%20White%20Collection/…
$endgroup$
– leftaroundabout
yesterday
|
show 5 more comments
$begingroup$
Devices using the Fourier Transform
It is very difficult for an electronic device to decompose a signal in different frequencies.
It's not.
There's actually quite a few devices that do that, explicitly.
First of all, you'll have to make a difference between the continuous Fourier transform (which you probably know as $mathcal Fleft{x(t)right}(f)=int_{-infty}^{infty} x(t)e^{j2pi f t},mathrm dt$) and the Digital Fourier Transform (DFT), which is what you can do with a sampled signal.
For both, there's devices that implement these.
Continuous Fourier Transform
There's little in the ways of actual need for this in digital electronics – digital signals are sampled, so you'd use the DFT.
In optics and photonics you'll notice that there's an actual chance to get perfectly periodic things for a "large" (read as: nearly as infinite as the integral above) length. Effectively, an acousto-optic element can be excited with one or multiple tones, and it will have the same correlating effects as the integral above. You don't have to look at 2018's Physics Nobel Prize winners to find an example of Fourier Optics.
Discrete Fourier Transform
This is really all over the place; it's such a standard processing step that as a communication engineer, we often even forget where it is.
So, this list is far less than complete; just examples:
Equalizers: It's pretty easy building a digital audio equalizer with a DFT. Typically, the zero-forcing equalizer type for communication systems uses a DFT to find the frequency domain representation of the channel necessary to be "removed", inverts that and uses the IDFT to get back that back to time domain to be used as taps in a FIR filter.
Antenna Arrays / Beamsteering: If you have an array of antennas in fixed distance from each other, you can steer the beam of these antennas, by calculating the DFT of the "directional vector" you'd like to achieve and use the result as complex coefficients to be multiplied with the transmit signal that you distribute to these antennas. Real-world MIMO systems do that.
Direction Finding: What works in transmit direction works exactly the same, but reverse, in receive direction: Get a signal for each of your antennas in your array, find the complex factors between these signals, do an IDFT, get a vector containing the info how power came from which direction. Easy! And done for estimation where aircraft are, where Wifi communication partners are, submarines (though there it's not antennas but underwater microphones)…
Channelization: Satellites in space are expensive, so multiple TV programs need to be uplinked to one satellite. You can use a DFT (especially in an Polyphase Filterbank) to put multiple channels in one uplink, or to isolate individual channels from one wideband signal. That's not a domain of TV; it happens in audio processing, medical imaging, ultrasonic analysis, Radio broadcasting…)
Data Encoding for multicarrier systems: To combat the problems of wide channels (which you need if you want to transport many bits per second), namely the need for complex equalizers, you'd want to chop up your channel in many small channels (see "channelization" above). However, you can understand the DFT alone as Filterbank for frequency-shifted time-domain-rectangular filters. The nice thing about that is that these channels are very tightly packed. The other nice thing is that the convolution with the channel reduces to a point-wise multiplication which is super simple to revert. We call that method OFDM, and all Wifi, LTE, 5G, WiMax, ATSC, DVB-T, Digital Audio Broadcasting, DSL, and many more systems use that.
Efficient Filtering: A FIR filter is a convolution with the filter impulse response in time domain. As such, it uses a lot of operations per output sample – it's very computationally intense. You can greatly reduce that effort when you implement fast convolution, which is based on DFT'ing sections of input samples, mutliplying them with the DFT of the impulse response in frequency domain, overlapping with previous segments, and back-transformation to time domain. That's so handy that it's used in almost all systems that have long FIR filters (and "long" might start with so benign numbers as "16 taps").
Radar: Classical automotive radars use self-modulating FMCW radars; to get a picture of both the relative speed and distance of reflectors observed by that, you typically do a two-dimensional DFT (which really is just DFT'ing all columns of a matrix and afterwards all rows of the result).
Audio and Image/Video Compression: Though JPEG uses the Discrete Cosine Transform, not the DFT itself, there's ample of codecs that mechanism which at least use significant parts of a DFT.
Note that the above list only contains things that do DFTs during operation. You can be 100% sure that during design of anything remotely related to RF, especially antenas, mixers, amplifiers, (de)modulators, a lot of Fourier Transforms / Spectral analysis was involved. Same goes for audio device design, any high-speed data link design, image analysis…
How is it done?
I'll just address the DFT here.
Usually, that's implemented as an FFT, Fast Fourier Transform. That's one of the most important algorithmic discoveries of the 20th century, so I will spare but few words on it, because there's literally thousands of articles out there that explain the FFT.
You go in and look at the $e^{j2pi frac nN k}$ multipliers of a DFT. You'll notice that these can basically be understood as ${e^{j2pi frac 1N k}}^n=W^n$; and there you have your twiddle factor. Now you avoid calculating coefficients that you've already calculated, and just swap a sign where necessary.
That way, you can reduce the complexity of a DFT from the $N^2$ (which would be the complexity if you implemented the DFT as the naive sum) to something in the order of $Nlog N$ – a huge win, even for relatively small $N$.
It's relatively straightforward to implement that in hardware, if you can get your whole input vector at once – you get $log N$ as a combinatorial depth and fixed coefficients at every step. The trick is knowing how (whether) to pipeline the individual layers, and how to use the specific hardware type you have (ASIC? FPGA? FPGA with hardware multipliers?). You can basically piece together $N=2^l$-length transform only from what we call Butterflies, which you'll recognize once you read about the FFT.
In software, the principle is the same, but you need to know how to multi-thread very large transforms, and how to access memory as fast as possible by utilizing your CPU caches optimally.
However, for both hardware and software, there's libraries that you'd just use to calculate the DFT (FFT). For Hardware, that usually comes from your FPGA vendor (e.g. Altera/Intel, Xilinx, Lattice…) , or a large ASIC design tool company (Cadence) or your ASIC house.
$endgroup$
$begingroup$
Impressive dedication to your art, agree that 'long' is O(16) for FIR filters.
$endgroup$
– Neil_UK
2 days ago
$begingroup$
:) thank you! Seen you write far more impressive answers than this, though :)
$endgroup$
– Marcus Müller
2 days ago
1
$begingroup$
While this is a very detailed answer, and will provide an accurate FFT of an incoming signal, it does not answer the question. This is a digital process applied to an input signal, it is not a solution implemented in hardware (other than the AD converter on the front end).
$endgroup$
– Jennifer
2 days ago
1
$begingroup$
Jennifer is right. You should discuss analogue DFT or at least clarify that DFT means discrete FT, but not necessarily digital FT.
$endgroup$
– leftaroundabout
2 days ago
2
$begingroup$
Page 43 (pdf numbering) in this proceedings discusses a FIR based on analogue FFT: imagesensors.org/Past%20Workshops/Marvin%20White%20Collection/…
$endgroup$
– leftaroundabout
yesterday
|
show 5 more comments
$begingroup$
Devices using the Fourier Transform
It is very difficult for an electronic device to decompose a signal in different frequencies.
It's not.
There's actually quite a few devices that do that, explicitly.
First of all, you'll have to make a difference between the continuous Fourier transform (which you probably know as $mathcal Fleft{x(t)right}(f)=int_{-infty}^{infty} x(t)e^{j2pi f t},mathrm dt$) and the Digital Fourier Transform (DFT), which is what you can do with a sampled signal.
For both, there's devices that implement these.
Continuous Fourier Transform
There's little in the ways of actual need for this in digital electronics – digital signals are sampled, so you'd use the DFT.
In optics and photonics you'll notice that there's an actual chance to get perfectly periodic things for a "large" (read as: nearly as infinite as the integral above) length. Effectively, an acousto-optic element can be excited with one or multiple tones, and it will have the same correlating effects as the integral above. You don't have to look at 2018's Physics Nobel Prize winners to find an example of Fourier Optics.
Discrete Fourier Transform
This is really all over the place; it's such a standard processing step that as a communication engineer, we often even forget where it is.
So, this list is far less than complete; just examples:
Equalizers: It's pretty easy building a digital audio equalizer with a DFT. Typically, the zero-forcing equalizer type for communication systems uses a DFT to find the frequency domain representation of the channel necessary to be "removed", inverts that and uses the IDFT to get back that back to time domain to be used as taps in a FIR filter.
Antenna Arrays / Beamsteering: If you have an array of antennas in fixed distance from each other, you can steer the beam of these antennas, by calculating the DFT of the "directional vector" you'd like to achieve and use the result as complex coefficients to be multiplied with the transmit signal that you distribute to these antennas. Real-world MIMO systems do that.
Direction Finding: What works in transmit direction works exactly the same, but reverse, in receive direction: Get a signal for each of your antennas in your array, find the complex factors between these signals, do an IDFT, get a vector containing the info how power came from which direction. Easy! And done for estimation where aircraft are, where Wifi communication partners are, submarines (though there it's not antennas but underwater microphones)…
Channelization: Satellites in space are expensive, so multiple TV programs need to be uplinked to one satellite. You can use a DFT (especially in an Polyphase Filterbank) to put multiple channels in one uplink, or to isolate individual channels from one wideband signal. That's not a domain of TV; it happens in audio processing, medical imaging, ultrasonic analysis, Radio broadcasting…)
Data Encoding for multicarrier systems: To combat the problems of wide channels (which you need if you want to transport many bits per second), namely the need for complex equalizers, you'd want to chop up your channel in many small channels (see "channelization" above). However, you can understand the DFT alone as Filterbank for frequency-shifted time-domain-rectangular filters. The nice thing about that is that these channels are very tightly packed. The other nice thing is that the convolution with the channel reduces to a point-wise multiplication which is super simple to revert. We call that method OFDM, and all Wifi, LTE, 5G, WiMax, ATSC, DVB-T, Digital Audio Broadcasting, DSL, and many more systems use that.
Efficient Filtering: A FIR filter is a convolution with the filter impulse response in time domain. As such, it uses a lot of operations per output sample – it's very computationally intense. You can greatly reduce that effort when you implement fast convolution, which is based on DFT'ing sections of input samples, mutliplying them with the DFT of the impulse response in frequency domain, overlapping with previous segments, and back-transformation to time domain. That's so handy that it's used in almost all systems that have long FIR filters (and "long" might start with so benign numbers as "16 taps").
Radar: Classical automotive radars use self-modulating FMCW radars; to get a picture of both the relative speed and distance of reflectors observed by that, you typically do a two-dimensional DFT (which really is just DFT'ing all columns of a matrix and afterwards all rows of the result).
Audio and Image/Video Compression: Though JPEG uses the Discrete Cosine Transform, not the DFT itself, there's ample of codecs that mechanism which at least use significant parts of a DFT.
Note that the above list only contains things that do DFTs during operation. You can be 100% sure that during design of anything remotely related to RF, especially antenas, mixers, amplifiers, (de)modulators, a lot of Fourier Transforms / Spectral analysis was involved. Same goes for audio device design, any high-speed data link design, image analysis…
How is it done?
I'll just address the DFT here.
Usually, that's implemented as an FFT, Fast Fourier Transform. That's one of the most important algorithmic discoveries of the 20th century, so I will spare but few words on it, because there's literally thousands of articles out there that explain the FFT.
You go in and look at the $e^{j2pi frac nN k}$ multipliers of a DFT. You'll notice that these can basically be understood as ${e^{j2pi frac 1N k}}^n=W^n$; and there you have your twiddle factor. Now you avoid calculating coefficients that you've already calculated, and just swap a sign where necessary.
That way, you can reduce the complexity of a DFT from the $N^2$ (which would be the complexity if you implemented the DFT as the naive sum) to something in the order of $Nlog N$ – a huge win, even for relatively small $N$.
It's relatively straightforward to implement that in hardware, if you can get your whole input vector at once – you get $log N$ as a combinatorial depth and fixed coefficients at every step. The trick is knowing how (whether) to pipeline the individual layers, and how to use the specific hardware type you have (ASIC? FPGA? FPGA with hardware multipliers?). You can basically piece together $N=2^l$-length transform only from what we call Butterflies, which you'll recognize once you read about the FFT.
In software, the principle is the same, but you need to know how to multi-thread very large transforms, and how to access memory as fast as possible by utilizing your CPU caches optimally.
However, for both hardware and software, there's libraries that you'd just use to calculate the DFT (FFT). For Hardware, that usually comes from your FPGA vendor (e.g. Altera/Intel, Xilinx, Lattice…) , or a large ASIC design tool company (Cadence) or your ASIC house.
$endgroup$
Devices using the Fourier Transform
It is very difficult for an electronic device to decompose a signal in different frequencies.
It's not.
There's actually quite a few devices that do that, explicitly.
First of all, you'll have to make a difference between the continuous Fourier transform (which you probably know as $mathcal Fleft{x(t)right}(f)=int_{-infty}^{infty} x(t)e^{j2pi f t},mathrm dt$) and the Digital Fourier Transform (DFT), which is what you can do with a sampled signal.
For both, there's devices that implement these.
Continuous Fourier Transform
There's little in the ways of actual need for this in digital electronics – digital signals are sampled, so you'd use the DFT.
In optics and photonics you'll notice that there's an actual chance to get perfectly periodic things for a "large" (read as: nearly as infinite as the integral above) length. Effectively, an acousto-optic element can be excited with one or multiple tones, and it will have the same correlating effects as the integral above. You don't have to look at 2018's Physics Nobel Prize winners to find an example of Fourier Optics.
Discrete Fourier Transform
This is really all over the place; it's such a standard processing step that as a communication engineer, we often even forget where it is.
So, this list is far less than complete; just examples:
Equalizers: It's pretty easy building a digital audio equalizer with a DFT. Typically, the zero-forcing equalizer type for communication systems uses a DFT to find the frequency domain representation of the channel necessary to be "removed", inverts that and uses the IDFT to get back that back to time domain to be used as taps in a FIR filter.
Antenna Arrays / Beamsteering: If you have an array of antennas in fixed distance from each other, you can steer the beam of these antennas, by calculating the DFT of the "directional vector" you'd like to achieve and use the result as complex coefficients to be multiplied with the transmit signal that you distribute to these antennas. Real-world MIMO systems do that.
Direction Finding: What works in transmit direction works exactly the same, but reverse, in receive direction: Get a signal for each of your antennas in your array, find the complex factors between these signals, do an IDFT, get a vector containing the info how power came from which direction. Easy! And done for estimation where aircraft are, where Wifi communication partners are, submarines (though there it's not antennas but underwater microphones)…
Channelization: Satellites in space are expensive, so multiple TV programs need to be uplinked to one satellite. You can use a DFT (especially in an Polyphase Filterbank) to put multiple channels in one uplink, or to isolate individual channels from one wideband signal. That's not a domain of TV; it happens in audio processing, medical imaging, ultrasonic analysis, Radio broadcasting…)
Data Encoding for multicarrier systems: To combat the problems of wide channels (which you need if you want to transport many bits per second), namely the need for complex equalizers, you'd want to chop up your channel in many small channels (see "channelization" above). However, you can understand the DFT alone as Filterbank for frequency-shifted time-domain-rectangular filters. The nice thing about that is that these channels are very tightly packed. The other nice thing is that the convolution with the channel reduces to a point-wise multiplication which is super simple to revert. We call that method OFDM, and all Wifi, LTE, 5G, WiMax, ATSC, DVB-T, Digital Audio Broadcasting, DSL, and many more systems use that.
Efficient Filtering: A FIR filter is a convolution with the filter impulse response in time domain. As such, it uses a lot of operations per output sample – it's very computationally intense. You can greatly reduce that effort when you implement fast convolution, which is based on DFT'ing sections of input samples, mutliplying them with the DFT of the impulse response in frequency domain, overlapping with previous segments, and back-transformation to time domain. That's so handy that it's used in almost all systems that have long FIR filters (and "long" might start with so benign numbers as "16 taps").
Radar: Classical automotive radars use self-modulating FMCW radars; to get a picture of both the relative speed and distance of reflectors observed by that, you typically do a two-dimensional DFT (which really is just DFT'ing all columns of a matrix and afterwards all rows of the result).
Audio and Image/Video Compression: Though JPEG uses the Discrete Cosine Transform, not the DFT itself, there's ample of codecs that mechanism which at least use significant parts of a DFT.
Note that the above list only contains things that do DFTs during operation. You can be 100% sure that during design of anything remotely related to RF, especially antenas, mixers, amplifiers, (de)modulators, a lot of Fourier Transforms / Spectral analysis was involved. Same goes for audio device design, any high-speed data link design, image analysis…
How is it done?
I'll just address the DFT here.
Usually, that's implemented as an FFT, Fast Fourier Transform. That's one of the most important algorithmic discoveries of the 20th century, so I will spare but few words on it, because there's literally thousands of articles out there that explain the FFT.
You go in and look at the $e^{j2pi frac nN k}$ multipliers of a DFT. You'll notice that these can basically be understood as ${e^{j2pi frac 1N k}}^n=W^n$; and there you have your twiddle factor. Now you avoid calculating coefficients that you've already calculated, and just swap a sign where necessary.
That way, you can reduce the complexity of a DFT from the $N^2$ (which would be the complexity if you implemented the DFT as the naive sum) to something in the order of $Nlog N$ – a huge win, even for relatively small $N$.
It's relatively straightforward to implement that in hardware, if you can get your whole input vector at once – you get $log N$ as a combinatorial depth and fixed coefficients at every step. The trick is knowing how (whether) to pipeline the individual layers, and how to use the specific hardware type you have (ASIC? FPGA? FPGA with hardware multipliers?). You can basically piece together $N=2^l$-length transform only from what we call Butterflies, which you'll recognize once you read about the FFT.
In software, the principle is the same, but you need to know how to multi-thread very large transforms, and how to access memory as fast as possible by utilizing your CPU caches optimally.
However, for both hardware and software, there's libraries that you'd just use to calculate the DFT (FFT). For Hardware, that usually comes from your FPGA vendor (e.g. Altera/Intel, Xilinx, Lattice…) , or a large ASIC design tool company (Cadence) or your ASIC house.
edited 2 days ago
answered 2 days ago
Marcus MüllerMarcus Müller
34.4k362100
34.4k362100
$begingroup$
Impressive dedication to your art, agree that 'long' is O(16) for FIR filters.
$endgroup$
– Neil_UK
2 days ago
$begingroup$
:) thank you! Seen you write far more impressive answers than this, though :)
$endgroup$
– Marcus Müller
2 days ago
1
$begingroup$
While this is a very detailed answer, and will provide an accurate FFT of an incoming signal, it does not answer the question. This is a digital process applied to an input signal, it is not a solution implemented in hardware (other than the AD converter on the front end).
$endgroup$
– Jennifer
2 days ago
1
$begingroup$
Jennifer is right. You should discuss analogue DFT or at least clarify that DFT means discrete FT, but not necessarily digital FT.
$endgroup$
– leftaroundabout
2 days ago
2
$begingroup$
Page 43 (pdf numbering) in this proceedings discusses a FIR based on analogue FFT: imagesensors.org/Past%20Workshops/Marvin%20White%20Collection/…
$endgroup$
– leftaroundabout
yesterday
|
show 5 more comments
$begingroup$
You can't get much more "bare metal" and "hardware" than a set of vibrating reeds.
http://www.stichtco.com/freq_met.htm
So, what hardware does a Fourier transform? A bunch of resonant systems can do that.
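As a rough software analogue of that reed bank (a sketch added here, not part of the original answer; the sample rate, reed frequencies and pole radius are arbitrary illustration values), each "reed" below is a lightly damped two-pole resonator, and its output energy tells you how much of its frequency is in the signal:

    import numpy as np

    fs = 8000.0                      # sample rate in Hz (arbitrary)
    reeds = [200, 400, 600, 800]     # "reed" resonant frequencies in Hz (arbitrary)
    r = 0.999                        # pole radius: how lightly damped each reed is

    t = np.arange(0, 1.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 400 * t) + 0.3 * np.sin(2 * np.pi * 800 * t)  # test signal

    for f0 in reeds:
        c = 2.0 * r * np.cos(2 * np.pi * f0 / fs)
        y1 = y2 = 0.0
        energy = 0.0
        for sample in x:             # drive the damped resonator with the signal
            y = sample + c * y1 - r * r * y2
            y2, y1 = y1, y
            energy += y * y
        print(f"{f0:4d} Hz reed: rms response {np.sqrt(energy / len(x)):10.1f}")

The 400 Hz and 800 Hz "reeds" ring strongly while the others stay quiet, which is exactly the coarse frequency decomposition the mechanical frequency meter gives you.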
$endgroup$
answered 2 days ago
Jasen
$begingroup$
huh, fancy. My father told me of similar devices they used at uni for frequency analysis of vibrating machines.
$endgroup$
– Marcus Müller
2 days ago
4
$begingroup$
This is roughly how your ears work, too, see cochlea.eu/en/cochlea/function
$endgroup$
– zwol
2 days ago
add a comment |
$begingroup$
Surface acoustic wave (SAW) devices were used as analog electro-mechanical processors for several signal-processing tasks. Most papers are paywalled.
Chapter 16 of Colin Campbell's 1989 book Surface Acoustic Wave Devices and their Signal Processing Applications
Publisher Summary
This chapter presents fast real-time Fourier transform techniques using SAW linear frequency modulated (FM) chirp filters with processing times of only a few microseconds. SAW-based techniques have applications to sonar, radar, spread spectrum, and other communications technologies requiring the fast analysis or filtering of complex signals. With SAW-based Fourier transform systems, this is carried out in the receiver intermediate-frequency (IF) stages. SAW linear FM chirp filters can be configured to effect a number of Fourier transform manipulations. Three of these are (1) single-stage Fourier transformers for spectrum or network analysis, (2) two-stage Fourier transform processors for cepstrum analysis, and (3) two-stage Fourier transform processors for real-time filtering. SAW-based Fourier transform processors for the spectral analysis of signals, known as compressive receivers, are available in a wide range of configurations to provide spectral resolutions over analytic bandwidths up to 1 GHz. The chapter also discusses the use of bilinear mixers in a SAW Fourier transform processor.
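The multiply-convolve-multiply arrangement these chirp filters realize can be mimicked numerically; the following sketch (added here for illustration, not from the book) computes a DFT by premultiplying with a chirp, convolving with the conjugate chirp, and postmultiplying with a chirp:

    import numpy as np

    def chirp_dft(x):
        """DFT via the multiply-convolve-multiply (chirp transform) identity,
        the same arrangement a SAW chirp-filter Fourier processor realizes."""
        N = len(x)
        n = np.arange(N)
        chirp = np.exp(-1j * np.pi * n**2 / N)        # premultiply chirp
        a = np.asarray(x, dtype=complex) * chirp      # multiply
        k = np.arange(-(N - 1), N)                    # kernel indices -(N-1)..N-1
        kernel = np.exp(1j * np.pi * k**2 / N)        # conjugate-chirp impulse response
        conv = np.convolve(a, kernel)                 # convolve
        return chirp * conv[N - 1:2 * N - 1]          # postmultiply, keep bins 0..N-1

    x = np.random.randn(100)
    print(np.allclose(chirp_dft(x), np.fft.fft(x)))   # True, even for non-power-of-two N

In the SAW processor the three steps are a mixer, a dispersive chirp filter, and another mixer; here they are just array operations.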
$endgroup$
answered 2 days ago
uhoh
add a comment |
$begingroup$
This can be done at the - literally - bare metal level using the Harmonic Analyzer:
https://www.youtube.com/watch?v=NAsM30MAHLg
And sorry to give a link-only answer, but this one you really have to see for yourself.
$endgroup$
answered yesterday
AndreKR
$begingroup$
yep, the short series is worth a watch.
$endgroup$
– uhoh
2 hours ago
add a comment |
$begingroup$
A Fourier transform on a discrete sampled function is a change of basis functions: from a series of (typically) time-of-sample values to an equivalent series of frequency-component values.
It is a linear transformation (the Fourier transform of a sum of two series is the sum of the Fourier transforms of the two series), so it is identical to a matrix operating on a vector (the time-of-sample series).
An N×N matrix operating on a vector with N components generates a second vector with N components by doing N^2 multiplies and (N^2 - N) additions.
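As a sketch of that matrix picture (an illustration added here, not part of the original answer; it uses NumPy's sign convention), the whole DFT is literally one matrix-vector product:

    import numpy as np

    N = 8
    n = np.arange(N)
    # DFT matrix: row k holds the k-th frequency basis function sampled at times n
    F = np.exp(-2j * np.pi * np.outer(n, n) / N)

    x = np.random.randn(N)                  # N time-of-sample values
    X = F @ x                               # N^2 multiplies, N*(N-1) additions
    print(np.allclose(X, np.fft.fft(x)))    # True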
Okay, so now how the metal does this:
There's a gizmo called a 'harmonic analyzer', a kind of analog computer, which multiplies and accumulates one frequency (basically one row of the matrix). It involves plotting the input function on graph paper; connecting a polar planimeter (mechanical integrator) and a linkage (mechanical multiplier) and tracing the curve gives you... one element of the output.
Using it isn't too bad, but for a 1024-element transform you have to do the operation... 1024 times. This is how tide tables were calculated a century ago, though. See the Mathematical Instruments article here, page 71.
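Numerically, what the planimeter-and-linkage rig measures for one output element is just an area, i.e. one weighted sum. A small sketch (added here, with arbitrary example frequencies):

    import numpy as np

    # one "plotted" record: a 2 Hz sine plus a weaker 5 Hz cosine, sampled for 1 second
    fs = 1024
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 2 * t) + 0.5 * np.cos(2 * np.pi * 5 * t)

    def one_coefficient(x, t, f):
        """Area under x(t)*cos(2*pi*f*t) and x(t)*sin(2*pi*f*t) over the record:
        one row of the matrix, i.e. one setting of the harmonic analyzer."""
        dt = t[1] - t[0]
        c = np.sum(x * np.cos(2 * np.pi * f * t)) * dt   # in-phase area
        s = np.sum(x * np.sin(2 * np.pi * f * t)) * dt   # quadrature area
        return c, s

    for f in (2.0, 3.0, 5.0):
        print(f, one_coefficient(x, t, f))   # roughly (0, 0.5), (0, 0), (0.25, 0)

The analyzer does this once per frequency, which is where the 1024 repetitions above come from.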
Then there's the manual method, using a slide rule and an adding machine, which requires looking up the matrix elements in a table of sines/cosines; for a 1024-element sampling, that means operating your slide rule over 2 million times.
A general-purpose computer can do the operation, too. Some specialized CPU designs (digital signal processors, DSPs) are made with accelerated multiply-accumulate hardware, which speeds things up.
And there's a very clever algorithm, the FFT, that gets around the problem of N samples requiring N^2 operations by noting that a 4x4 matrix is a 2x2 matrix of 2x2 matrices; there's a way to take any composite length N (a power of two, like 1024, is convenient) and use only on the order of N*log(N) operations instead of N^2. That means 1024 inputs require only 61,440 operations instead of 1,048,576.
The FFT doesn't simplify a general discrete Fourier transform, because it requires that N be nonprime (and almost always a power of two is used), but it can be hardware-supported in a variety of ways, so that the operations (multiply-accumulate) are the time-limiting step. One modern (2019) chip (the ADBSP-561 from Analog Devices; see the MMAC column) can do 2400 such operations per microsecond.
$endgroup$
answered yesterday
Whit3rd
add a comment |
$begingroup$
That's basically what a spectrum analyzer does:
https://www.electronics-notes.com/articles/test-methods/spectrum-analyzer/realtime-spectrum-analyser.php
$endgroup$
answered 2 days ago
Kit
7
$begingroup$
No it's not what a spectrum analyzer does in general. Some (many) spectrum analyzers have an FFT mode, but even then, what the spectrum analyzer shows you is a PSD estimate, not a Fourier transform.
$endgroup$
– Marcus Müller
2 days ago
$begingroup$
Answers that are mainly a link to another site do not provide lasting value, as the link may be broken tomorrow. You should summarize the important content from the link in your own answer.
$endgroup$
– Elliot Alderson
2 days ago
$begingroup$
@MarcusMüller -- what is a "PSD estimate"?
$endgroup$
– Pete Becker
2 days ago
$begingroup$
@PeteBecker An estimate for the Power Spectral Density. The distribution of the expected power over frequencies for a signal that you have to consider random because you don't know it. The mathematically exact definition for a PSD is "Fourier Transform of the Autocorrelation Function of a stochastic process"; but for most cases, we just assume the stochastic process (==random signal) to be Weak-Sense stationary and hence FT(ACF) == expectation(FT²(Time signal)).
$endgroup$
– Marcus Müller
2 days ago
1
$begingroup$
@MarcusMüller -- thanks for clarifying.
$endgroup$
– Pete Becker
2 days ago
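To make the "expectation of the squared Fourier transform" estimate from the comments above concrete, here is a sketch (added for illustration; the sample rate, segment sizes and the buried 100 Hz tone are arbitrary): averaging the squared-magnitude FFT over many segments of a noisy signal yields a stable PSD estimate, which is what an FFT-mode spectrum analyzer displays.

    import numpy as np

    rng = np.random.default_rng(0)
    fs, nseg, seglen = 1024, 200, 256
    # a noisy signal: unit-variance white noise plus a weak 100 Hz tone
    n = nseg * seglen
    x = rng.standard_normal(n) + 0.2 * np.sin(2 * np.pi * 100 * np.arange(n) / fs)

    segs = x.reshape(nseg, seglen)
    periodograms = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / seglen  # |FT|^2 per segment
    psd_estimate = periodograms.mean(axis=0)                        # average ~ expectation
    freqs = np.fft.rfftfreq(seglen, d=1.0 / fs)
    print(freqs[np.argmax(psd_estimate[1:]) + 1])                   # 100.0: the buried tone shows up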
add a comment |