Talk:Spectral density


Cross spectral density definition

What is the definition of the capital F function used in the definition of the cross spectral density? I don't think it's the F defined just above it, which is the integrated spectrum. 208.69.128.41 (talk) 22:48, 26 January 2015 (UTC)

Article currently reads: "By an extension of the Wiener–Khinchin theorem, the Fourier transform of the cross-spectral density [...] is the cross-covariance function" This is not correct, and should instead read: "By an extension of the Wiener-Khinchin theorem, the inverse Fourier transform of the cross-spectral density [...] is the cross-covariance function" or: "By an extension of the Wiener-Khinchin theorem, the Fourier transform of the cross-covariance function [...] is the cross-spectral density" Nickmasonsmith (talk) 00:33, 6 June 2017 (UTC)

Other fields

Physics is not the only field in which this concept appears, although I am not ready to write an article on its use in mathematics or statistics. Is any other Wikipedian? Michael Hardy 20:11 Mar 28, 2003 (UTC)

Yes, I am. All the fields in which it appears use the methods of statistical analysis, so the concept is fundamentally statistical. The different applications could be preceded by some fundamental sections in which the statistical concepts are explained, using only the signal processing case as an example, or using only a statistical time series example. 66.167.204.242 (talk) 07:13, 18 April 2014 (UTC)

middle c

" then the pressure variations making up the sound wave would be the signal and "middle C and A" are in a sense the spectral density of the sound signal."

What does that mean? - Omegatron 13:39, May 23, 2005 (UTC)

My idea is to write an article such that the first paragraph would be useful to the newcomer to the subject. I may not have succeeded here, so please fix it if you have a better idea. I just don't want to start the article with "In the space of Lebesgue integrable functions..." PAR 14:58, 23 May 2005 (UTC)

yeah i hate that! i see what you're trying to say now... hmm... - Omegatron 16:57, May 23, 2005 (UTC)

stationarity

"If the signal is not stationary then the same methods used to calculate the spectral density can still be used, but the result cannot be called the spectral density."

Are you sure of that? - Omegatron 13:39, May 23, 2005 (UTC)

Well, no. It was a line taken from the "power spectrum" article which I merged with this one. Let's check the definition. If you find it to be untrue, please delete it.

I'll try to find something. -Omegatron 16:57, May 23, 2005 (UTC)
It seems that this is true. They are considering the power spectrum to only apply to a stationary signal, and when you take a measurement of a non-stationary signal you are approximating. I know that to take the spectrum of an audio clip with an FFT, you window the clip and either pad to infinity with zeros or loop to infinity, which sort of turns it into a stationary signal. - Omegatron 17:56, May 23, 2005 (UTC)
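A minimal numpy sketch of the windowing idea described just above (illustrative only; the clip x and sample rate fs are assumed, and zero-padding to nfft stands in for "padding to infinity"):

    import numpy as np

    def windowed_spectrum(x, fs, nfft=None):
        # Taper the clip so its ends go smoothly to zero, reducing leakage.
        w = np.hanning(len(x))
        # FFT of the windowed clip; rfft zero-pads up to nfft samples.
        X = np.fft.rfft(x * w, n=nfft)
        f = np.fft.rfftfreq(nfft or len(x), d=1.0/fs)
        return f, np.abs(X)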

I'm starting to wonder about this. I think of a stationary process as a noise signal, with a constant average value, and a constant degree of correlation from one point to the next (with white noise having no correlation). I don't understand the meaning of stationary if it's not with respect to noise, so I don't know whether the top statement is true or not. I'm not sure I understand what you mean by stationary in your example either. PAR 21:18, 23 May 2005 (UTC)

Yeah, my concept of "stationarity" is not terribly well-defined, either. I'm pretty sure a sinusoid is stationary, and a square or triangle wave would be. As far as the power spectrum is concerned, stationary means that the spectrum will be the same no matter what section of the signal you window and measure. An audio signal would not be stationary, for instance. But I am thinking in terms of spectrograms and I'm not really sure of the mathematical foundations behind this. - Omegatron 21:25, May 23, 2005 (UTC)
Omegatron, are you sure that you're not thinking of "cyclostationarity"? I'm not sure if a sinusoid is stationary. I think what IS stationary is a process that looks like sin( t + x ) where t is a deterministic variable and x is a random phase, picked uniformly from [0,2*pi). Without the random variable x, it's not stationary. Lavaka (talk) 16:55, 19 November 2009 (UTC)
A sine wave is a sample from a stationary process of many sine waves with the same frequency but random phases. 66.167.204.242 (talk) 07:13, 18 April 2014 (UTC)
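For what it's worth, the computation behind that claim is short. With X(t) = sin(t + φ) and φ uniform on [0, 2π):

    E[X(t)] = \frac{1}{2\pi}\int_0^{2\pi} \sin(t+\phi)\,d\phi = 0,
    \qquad
    E[X(t)X(s)] = \frac{1}{2\pi}\int_0^{2\pi} \sin(t+\phi)\sin(s+\phi)\,d\phi = \tfrac{1}{2}\cos(t-s),

so the mean is constant and the covariance depends only on t - s, which is wide-sense stationarity; without the random phase, E[X(t)] = sin(t) already fails to be constant.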

Change Definition

I removed the line defining the SPD as the FT of the autocorrelation. The relationship to the autocorrelation is in the "properties" section. As it stands now, the SPD is defined as the absolute value of the FT of the signal, and the relationship to the autocorrelation follows. We could use either as a definition, and the other as a derived property, but I think the present set-up is best, because it goes directly to the fact that the SPD is a measure of the distribution among frequencies. If we start by defining it as the FT of the autocorrelation, we have to introduce the autocorrelation which may be confusing, then define the SPD, then show that it's the square of the FT of the signal, in order to show that it's a measure of distribution among frequencies. This way is better. PAR 19:44, 21 July 2005 (UTC)

I disagree. The definition you removed was the correct and only sensible definition of PSD. What you put instead I have relabeled as an energy spectral density, and I've put back a definition of PSD in terms of autocorrelation function of a stationary random process. I think I need to go further, making that definition primary, since that's what the article is supposed to be about. Comments? Dicklyon 19:47, 13 July 2006 (UTC)
It is common and mathematically well-founded to DEFINE the Power Spectral Density (PSD, and where on Earth did "SPD" come from?) as the Fourier Transform of the autocorrelation function. All of the others, such as the ones given in the article now, are simply mathematical Hand Waving with no foundation in fact. I despise Hand Waving because it is more confusing than it is anything else. 98.81.0.222 (talk) 22:18, 8 July 2012 (UTC)
The literature and texts vary on whether the FT of the auto-correlation function is the definition of the power spectral density, or the result of a *theorem* because they have already defined the power spectrum. It is untrue to say that only one of these alternatives can be mathematically rigorous. This article has a definition of power spectrum which is not far from being rigorous, although it is not very physically motivated either. The idea of defining the power spectrum by the distribution function appearing in the Stieltjes integral that gives the spectral decomposition of the auto-correlation function is perfectly rigorous even for auto-correlation functions that are not square-integrable or absolutely integrable. It is the definition used by e.g. Chatfield in his undergrad text (without proofs), and it does require a bit of work to prove, which is why it deserves to be called a theorem. I myself disapprove of using the Wiener--Khintchine formula as a definition since it obscures the physical content. 98.109.227.161 (talk) 00:38, 8 November 2012 (UTC)

finite time interval

It is nice to define the spectral density by the infinite time Fourier integral of the signal.

However, often one has a signal s(t) defined for 0<t<T only. One can then define S(w) at the frequencies w_n=2 pi n/T as the modulus-squared of the respective Fourier coefficients normalized by 1/T. The limit T->infinity recovers the infinite time Fourier integral definition.

I found the normalization by 1/T especially tricky. However, it is needed to get a constant power spectrum for white noise independent of T. This somehow reflects the decay of Fourier modes of white noise due to phase diffusion.

reference: Fred Rieke et al.: Spikes: Exploring the Neural Code.
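A sketch of that finite-interval recipe (illustrative only; numpy assumed): normalising the squared Fourier coefficients by 1/T makes the white-noise level independent of T, as described above.

    import numpy as np

    def periodogram_1_over_T(s, fs):
        # |Fourier coefficient|^2 / T at the discrete frequencies n/T.
        T = len(s) / fs                # record length in seconds
        S = np.fft.rfft(s) / fs        # approximates the Fourier integral
        return np.abs(S)**2 / T        # units: signal^2 per Hz

    fs = 1000.0
    for N in (1000, 10000):            # two record lengths, same level
        x = np.random.randn(N)
        print(N, periodogram_1_over_T(x, fs).mean())   # ~1/fs both times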

I'm not sure I understand your point. First of all, a Fourier series (discrete frequencies of a finite or periodic signal) is not a spectral density. More of a periodogram. For a stationary random process, where PSD is the right concept, the periodogram does not approach a limit as T goes to infinity. And for a signal only defined on 0 to T, taking such a limit makes no sense.
I just noticed the comment above yours, where in July 2005 the correct definition of PSD based on Fourier transform of autocorrelation function was removed. That explains why I had to put it back early this year. I think I didn't go far enough in fixing the article, though. Dicklyon 19:44, 13 July 2006 (UTC)
I take back what I said about the limit as T goes to infinity. As long as you have the modulus squared inside the limit, it will exist as you said, assuming the signal is defined for all time and is a sample function of a weak-sense-stationary ergodic random process; that is, the time average of the square converges on the expected value of the square, i.e. the variance. Blackman and Tukey use something like that definition in their book The Measurement of Power Spectra. Dicklyon 05:23, 14 July 2006 (UTC)

Units of Phi(omega) for continuous and discrete transforms

Hi!

I may be wrong, but the letter phi is used for definition of both continuous (Eq. 1) and discrete Fourier transforms (Eq. 2), which is confusing since phi doesn't have the same dimensions (units) from one to the other. Let's say f(t) is the displacement of a mass attached to a spring; then f(t) is in meters, phi(omega) will be in meters squared times seconds squared in the continuous definition, and phi(omega) will be in meters squared in the discrete definition.

Using the same letter (phi) for two quantities that do not have the same meaning seems inappropriate. Would it be possible to add a small sentence saying that units differ in the continuous and discrete transforms?

What do you people think of this suggestion?

142.169.53.185 12:11, 26 March 2007 (UTC)

Yes, that makes sense. I'll add something. Dicklyon 19:24, 26 March 2007 (UTC)

QFT

The concept of a spectral mass function appears in quantum field theory, but that isn't mentioned here. I don't feel confident enough in my knowledge to write a summary of it, however. --Starwed 13:44, 12 September 2007 (UTC)

Difference between power spectral density and energy spectral density

The difference between power spectral density and energy spectral density is very unclear. Take a look at Signals Sounds and Sensations from William Hartmann, chapter 14. You can find this book almost entirely at http://books.google.com/. It is not really my field unfortunately, so I would rather not change the text myself. —Preceding unsigned comment added by 62.131.137.4 (talk) 13:52, 29 December 2007 (UTC)

Hartmann talks about power spectral density, as is typical in the sound field, since they're speaking of signals that are presumed to be stationary, that is, with the same statistics at all times. Such signals do not have a finite energy, but do have a finite energy per time, or power. The alternative, energy spectral density, applies to signals with finite energy, that is, things you can take a Fourier transform of. I tried to make this clear in the article a while back, but maybe it needs some help. Dicklyon (talk) 16:05, 29 December 2007 (UTC)
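For reference, a compact way to write the distinction just described, with x̂_T the Fourier transform of x restricted to an interval of length T (the expectation is needed for random signals):

    \text{ESD:}\quad \bar{S}_{xx}(f) = |\hat{x}(f)|^2,
    \qquad\text{finite energy } \int |x(t)|^2\,dt < \infty;

    \text{PSD:}\quad S_{xx}(f) = \lim_{T\to\infty} \frac{1}{T}\,
    \mathrm{E}\!\left[\,|\hat{x}_T(f)|^2\,\right],
    \qquad\text{finite power (bounded mean square).}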

Spectral Analyzer

If I am not mistaken, spectral analyzers (SAs) do not always measure the magnitude of the short-time Fourier transform (STFT) of an input signal, as suggested by the text. Indeed, I understand that (as indicated by the link to SAs) this kind of measurement is not performed by analog SAs. Digital SAs do perform some kind of Fourier Transform on the input signal but then the spectrum becomes susceptible to aliasing. This could be discussed in the text.

What do you guys know about it? Anyway, I don't feel confident enough for changing the text. —Preceding unsigned comment added by Abbade (talkcontribs) 05:02, 6 January 2008 (UTC)

Analog spectrum analyzers use a heterodyne/filter technique, sort of like an AM radio. The result is not so much different from using an FFT of a windowed segment; both give you an estimate of spectral density, with ways to control bandwidth, resolution, and leakage; and each way can be re-expressed, at least approximately, in terms of the other. Read all about it. Dicklyon (talk) 05:31, 6 January 2008 (UTC)

Spectral intensity

Some sources call F(w) the spectral density and Phi(w) the spectral intensity (cf. Palmer & Rogalski). Is this a British vs. American thing or did I misunderstand?--Adoniscik (talk) 20:28, 23 January 2008 (UTC)

Neither the math nor the terminology in that section connects to anything I can understand. Do those Fourier-like integrals make any sense to you? How did they get them to be one-sided? Anyway, I'd be surprised to find that usage of spectral density in other places; let us know what you find. In general, it appears to me that "density" means on a per-frequency basis, while "intensity" can mean just about anything. I don't see any other sources that would present a non-squared Fourier spectrum as a "density"; it doesn't make sense. Dicklyon (talk) 23:30, 23 January 2008 (UTC)
I didn't dwell on it, but the one-sidedness of the Fourier transform follows because it assumes the source to be real (opening paragraph, second sentence, and also after equation 20.6). Im[f(t)]=0 implies F(w)=F(-w) (Fourier transform#Functional relationships). I'm thinking the density here is akin to the density in "probability density function" … aka the "probability distribution function". See also Special:Whatlinkshere/Spectral_intensity… --Adoniscik (talk) 00:53, 24 January 2008 (UTC)
No, there's no such relationship, and nothing that looks like that on the page you cite. Read it again. Now if he had said the wave was even symmetric about zero, that would be different; but he didn't. If he had said the power spectrum was symmetric about zero, that would be OK; but he didn't, he implied the Fourier amplitude spectrum is symmetric, and it's not, due to phase effects (it's Hermitian). Your interpretation of "density" is correct; that's why it has to be in an additive domain, such as power. Dicklyon (talk) 01:26, 24 January 2008 (UTC)

In general, all over math and engineering and statistics, "density" refers to an infinitesimal contribution, so here, that is on a per-frequency basis. In the theory of power spectra, "intensity" is merely a synonym: in fact, it was Sir Arthur Schuster's original word. He referred to the contribution of a frequency to the power of the light as the "intensity" of that frequency in its spectrum. 66.167.204.242 (talk) 18:01, 18 April 2014 (UTC)

Survey: bit/s/Hz, (bit/s)/Hz or bit·s^−1·Hz^−1 as Spectral efficiency unit?

Please vote at Talk:Eb/N0#Survey on which unit that should be used at Wikipedia for measuring Spectral efficiency. For a background discussion, see Talk:Spectral_efficiency#Bit/s/Hz and Talk:Eb/N0#Bit/s/Hz. Mange01 (talk) 07:21, 16 April 2008 (UTC)

Energy of a signal - duplicate wiki entry

There is another wiki page on signal energy. Most of its content is better explained in the spectral density page. As far as appropriate, the two pages should be merged. I suggest keeping only the explanation of why energy of a signal is called energy in the other page. I found this page navigating from energy disambiguation, where I created a link to the spectral density entry. Sigmout (talk) 09:16, 18 August 2008 (UTC)

Recent changes that confuse power and energy spectral density

A raft of changes introduced a ton of confusion and the ambiguous acronym "SD", with no supporting sources. That's why I reverted them and will revert them again. I realize it was well intentioned, but since I can't discern what the point is, it's hard to see how to fix it. What kind of SD is not already covered in the distinction between power and energy spectral density? Why is variant terminology being introduced? Is there some source that this change is based on? The one that was cited about the W-K theorem was not at all in support of the text it was attached to. Dicklyon (talk) 05:19, 28 November 2008 (UTC)

Be that as it may, it would have been more friendly to discuss the matter and give whoever it was a chance to provide sources, rather than revert what is clearly several hours of work. Particularly since it was probably done by a newbie who may not be aware of the possibility of reversions, or of the need for RS. --Zvika (talk) 13:35, 28 November 2008 (UTC)
OK, it was me (admittedly very much a newbie in wiki modification) who introduced these reverted changes. Let me explain my point of view.
Spectral density is a widely used concept, not necessarily related to energy or power. Spectral density is therefore defined for physical quantities which are not linked by any means to a power or energy. OK, I agree that a voltage spectral density is most often called a power spectral density, for the reason that it relates nicely and conventionally to a power by the relation V^2/R with R=1 Ohm. However, a lot of other physical quantities do not relate to a power in any conventional way. One crucial example of that, which is of prime importance in my own personal field of work, is the spectral density of phase fluctuations and of frequency fluctuations. Refer for example to IEEE Std 1193-1994 for more detail (I recommend the draft revision in proc. of IEEE IFCS 1997 p338-357). They do not relate to a power at all. Yes, I know that some authors/communities conventionally use the term "power spectral density" for anything which is the square of a given physical quantity per Hz. This usage is however not followed by everyone (by far) (see again IEEE IFCS 1997 p338-357 and IEEE Std 1193), and I was actually trying to clarify that a little more than what was already made.
The actual form of the article is IMHO not totally adapted to all these fields where SD is being used. The title of the article being "spectral density" and not "spectral density of power (and energy)", it seems to me that it needs to be much less specific and define SD for every physical quantity. Again, this is especially true when there is a large community using SD for physical quantities other than power (or energy) which cannot be turned into a power by any canonical means.
One possible way to address all these different communities would be to give the specific explanations for every single SD concept we can find in the literature (PSD, ESD, frequency fluctuation SD, phase fluctuation SD, timing fluctuation SD, etc.), with the risk of forgetting a lot of them we don't know about (because those working in these fields don't bother correcting Wikipedia). The other possibility is to give a very general (mathematical) definition of SD for any physical quantity, and then give some specific examples of use. This second solution is what I would very strongly recommend.
The definition in terms of variance is universally applicable to all these situations. All are data, therefore all have variance, and variance is always the squared thing. But even in Stats, it is traditional and universal to call it the "power" spectrum because signal processors did so much with it early on. 66.167.204.242 (talk) 06:20, 18 April 2014 (UTC)
Another thing which I have a strong problem with is the "snake biting its tail" situation between WK and spectral density which is very confusing in Wikipedia:en right now: basically the WK theorem states that the spectral density of a signal is the FT of the autocorrelation of the signal, while the PSD is defined by the current article by use of the WK theorem. This doesn't make sense at all and will confuse anyone not already familiar with the concepts (either the WK theorem is a theorem or a definition...). The WK and SD articles together should give a better clarification on this topic IMHO. In practice, nowadays, the use of autocorrelation for extracting SD from a signal is not so frequent, and the fast Fourier transform of the sampled signal is very often preferred; the definition of SD in Wikipedia should therefore reflect this reality. Besides, the easy confusion I find in students' minds between one-sided and two-sided spectral densities was also a topic which, IMHO, needed clarification. I added some comments on that with this purpose.
This issue is addressed in some of the more carefully written texts on the subject, but not the majority. The vast majority suffer from the problem you mention. I have been looking at this issue for quite a while now, in part because of your and Dicklyon's comments about it. The original approach took a physical definition for the power spectrum: it indicates the amount of power contributed to the signal (or data) by that set of frequencies, which can either be a single frequency if there is a line in the spectrum, or a band of frequencies if there is a continuous spectrum. Then it becomes a theorem, due to Wiener but conjectured by Einstein much earlier, that the spectrum, defined in this way, is equal to the Fourier transform of the auto-correlation function. (With the necessary caveats in case the usual definition of Fourier transform does not converge.) But this approach is difficult for an introductory engineering course to take because for continuous signals, it is very difficult to make a formal mathematical formula for what one means by "power contributed to the signal by those frequencies". The intuition is to imagine the signal passing through a perfect bandpass filter that covers precisely the frequencies in question, then calculate the average power of the resulting signal. But there are convergence problems with writing this down precisely, so only one text I have seen, cited by someone on this page, tries that approach (and fails since they run into trouble with the convergence problems and do not know how to treat them). Wiener's original papers in 1925 take this approach and succeed in showing that his notion of "quadratic variation" of the function's generalised Fourier transform embodies the intuition of the power being contributed to the signal by that range of frequencies. Some other texts and reference works treat the discrete time case (and assuming a finite number of sample points) first, where there are no convergence problems, and based on this motivation, then just assert that it works for the continuous case and the infinite time case as well. I could write something original that does this, but Wikipedia does not incorporate original research. I am instead writing up the approach of Wiener himself so that we can avoid the logical problem of defining the power spectrum as the Fourier transform of the auto-correlation function and then never proving that it has anything to do with "power". (And yet, this illogical approach is what Wiener himself adopted in his later papers, from 1930 on, saying, after proving that the auto-correlation function has a generalised Fourier transform 'S', "now it is manifest that S gives the distribution of power ... " when all he does is show this for the simple example of a line spectrum... 66.167.204.242 (talk) 06:20, 18 April 2014 (UTC)
Hope this clarifies my admittedly clumsy and too quick attempt at giving the article more generality and clarity.
On the other hand, I admit introducing the acronym "SD" for convenience, while it's not particularly standard (except as slang in the research labs I know...). I agree that SD should therefore be replaced by "spectral density" everywhere it appears.
I'm refraining from restoring for a while, but, seriously, it looks to me that my modifications were, at least, a step in the right direction. —Preceding unsigned comment added by 145.238.204.158 (talk) 16:13, 18 December 2008 (UTC)
Thanks for responding. A step in the right direction is a step that is backed up by sources (see WP:V and WP:RS). What sources do it your way? Without knowing, it's hard to help integrate your viewpoint. As for "the square of a given physical quantity per Hz", that applies to both energy and power spectral density; the difference is in whether it's per unit time or not. An energy spectral density is the squared magnitude of the Fourier transform of a signal with finite integral of its square (finite energy); a power spectral density, on the other hand, is mean square per Hz, as opposed to integral square per Hz, and applies to a signal with a finite mean square, and infinite integral square. It really doesn't matter whether the values relate to physical energy. If this is not clear in the article, consult the cited sources and think about clarifying it. As for calculating with an FFT or DFT, that's a detail of how to estimate an SD, not a definition of what it is. So if you'd like to add a section on estimating it, go ahead, as long as you cite a good source. Dicklyon (talk) 05:03, 19 December 2008 (UTC)
Also, if you'd like me to look at those IEEE standards, post us a link, or email me a copy. Dicklyon (talk) 05:04, 19 December 2008 (UTC)

terminology, scaling

Is there any difference in the terms 'power spectral density', 'power spectrum', 'power density spectrum', 'spectral power distribution', and similarly for energy? Also, 'spectral density' and 'spectrum level'? I'm looking through some of my textbooks and I don't see any attempt to distinguish between these terms, so I assume there are simply two concepts, the power spectrum and the energy spectrum. Also, doesn't the definition of the power spectrum require a scale factor to ensure that the total signal power is equal to the integral of the power spectrum (to account for the arbitrary scale factor in front of the Fourier transform)? Since several different definitions of the Fourier transform are commonly used, wouldn't it be appropriate to elaborate on how these definitions depend on each other? 146.6.200.213 (talk) 17:27, 11 May 2009 (UTC)

The statement at the end of the "energy spectral density" section notes that the definition depends on the scale factor, but it does not specify how exactly. The page on Fourier transforms might elaborate on this; it's hard to say because that page is longer than it should be. It is also one of many Wikipedia pages where intuitive explanations have been replaced with abstract mathematical definitions (more than would be necessary to ensure the article is strictly correct). 146.6.200.213 (talk) 19:09, 11 May 2009 (UTC)
An encyclopedia has to make correct statements and refrain from making incorrect ones. Many times, this looks like unnecessarily abstract mathematics, but in this case, it is the minimum necessary to avoid the howlers that pervade the first chapters of so many engineering textbooks on the subject. A lecture can focus on intuitive explanations that are full of "little white lies", but not an encyclopedia article. The current state of this article is very bad in this respect, for example talking about the expectation of x(t) when x is deterministic and X is the stochastic process of which x is one sample. 66.167.204.242 (talk) 06:27, 18 April 2014 (UTC)

Suggestion for better definition of spectral density (June 2010)

The current definition of (Power) Spectral Density (PSD) in this article obviously has dimension "... per squared Hz". A proper definition should have the intended dimension, i.e. "... per Hz". Moreover, the PSD should never diverge, as will happen when using the present definition in the case of stationary noise for example. Therefore I suggest changing the integral into a time average: integrate over a time interval ΔT and divide the (squared) integral by ΔT. Then take the limit for ΔT → ∞.

  • This comment was written after reading a lecture by E. Losery (www.ee.nmt.edu/~elosery/lectures/power_spectral_density.pdf). In turn, E. Losery refers to the definition of Stremler (F.G. Stremler, Introduction to Communication Systems, 2nd Ed., Addison-Wesley, Massachusetts, 1982).

Eric Hennes June 3, 2010 —Preceding undated comment added 16:06, 2 June 2010 (UTC).
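Written out, the suggested time-averaged definition appears to be the following (for a random signal one would also take an expectation, per the sources cited above):

    S(f) = \lim_{\Delta T \to \infty} \frac{1}{\Delta T}
           \left| \int_0^{\Delta T} s(t)\, e^{-i 2\pi f t}\, dt \right|^2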

What you say is obvious remains unclear to me. Can you point which formula obviously has these wrong units, and what you would do to fix it? Or did you already fix it? Dicklyon (talk) 05:55, 12 October 2010 (UTC)
Well, suppose the signal s is unitless; then the autocorrelation integral R(tau) of the signal has dimension "time". Its Fourier integral adds another "time", so we end up with time^2, i.e. "inverse frequency"^2, 1/Hz^2. Eric Hennes December 21, 2010
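A quick dimensional check of that point, with the time-averaged autocorrelation for comparison:

    R(\tau) = \int s(t)\,s(t+\tau)\,dt \;\Rightarrow\; [R] = \mathrm{s},\quad
    [\mathcal{F}R] = \mathrm{s}^2 = 1/\mathrm{Hz}^2;

    R(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_0^T s(t)\,s(t+\tau)\,dt
    \;\Rightarrow\; [R] = 1,\quad [\mathcal{F}R] = \mathrm{s} = 1/\mathrm{Hz},

so the 1/T normalisation restores the intended per-Hz dimension.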

s and S

In the section on power spectral density, the discussion is confusing, in part because of bad notation. At one point s(t)**2 is called 'power' and at another point S(f) (without a square) is called 'power'. For someone trying to understand, this confuses things. One might expect that S(f) = F(s(t)), where 'F' denotes a Fourier transform, etc. —Preceding unsigned comment added by 136.177.20.13 (talk) 15:09, 11 October 2010 (UTC)

True. Any expectation that the case shift signifies a relationship between the two is not correct here. Otherwise, it looks right. See if you can find a better or more conventional set of names. Dicklyon (talk) 05:53, 12 October 2010 (UTC)

Adding to the confusion described above, the definition of the term is not entirely clear. Yitping (talk) 13:46, 30 March 2011 (UTC)

stationary vs cyclostationary

The article says "The power spectral density of a signal exists if and only if the signal is a wide-sense stationary process." Is this correct? I would think that it also exists if the signal is wide-sense cyclostationary. Lavaka (talk) 00:31, 31 January 2011 (UTC)

I think you're right; this book agrees. I'll take out "and only if". Dicklyon (talk) 04:20, 31 January 2011 (UTC)
This is one of the many inaccuracies in the article. The integrated spectrum of a signal exists if the signal comes from a stationary process almost surely, but there is no «only if». But even so, the spectral density need not exist, since the integrated spectrum might not be absolutely continuous. If the stationary process is also purely indeterministic, then the spectral density will exist almost everywhere, but even then might not be differentiable itself (it is the integrated spectrum which is then differentiable almost everywhere, and its derivative gives the spectral density). I fear that a cyclostationary process will not have a continuous integrated spectrum, and so its spectral density will not exist, and I would rather omit the topic of cyclostationarity from this article altogether, as being too specialised and too advanced. 98.109.227.161 (talk) 15:49, 9 November 2012 (UTC)

PSD relation to periodogram

If we are to follow the definition of the periodogram given in the periodogram wiki page, which is linked here, the periodogram has units of voltage (given that the signal has units of voltage). The PSD, which according to the definition here is the squared periodogram divided by time, must then have units of voltage squared times frequency, or voltage squared divided by time. If either the formula for the PSD here or the definition of the periodogram omits the division by T, we get the correct units. Thus something must be changed, or are my dimensional analysis skills severely lacking?
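For comparison, the convention in most texts (offered here as a reference point, without ruling on what the linked page currently says) already builds both the squaring and the 1/T into the periodogram, so the units come out right with no further division:

    I_T(f) = \frac{1}{T}\left|\int_0^T x(t)\,e^{-i 2\pi f t}\,dt\right|^2,
    \qquad
    [I_T] = \frac{\mathrm{V}^2\,\mathrm{s}^2}{\mathrm{s}} = \mathrm{V}^2/\mathrm{Hz}.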

Definition section was confusing: Major Rewrite

Why? A good and useful article should bridge the gap from beautiful abstract mathematics to real-world applications. It should help engineers to understand the language of mathematicians and vice versa. This is not the case right now. Please let me outline a possible route for reorganizing the article (that builds on suggestions made by others on this page, thanks!):
1. Explain power spectrum in laymen's language
2. Abstract mathematical definition of power spectrum using infinite time Fourier transform
3. Introduce normalized Fourier transform for stationary processes and define power spectrum with respect to those
4. Discuss the case of discrete time-series and show pseudo-code (see the sketch below)
5. State the Wiener-Khinchine theorem and remark that it is sometimes used even as a definition of the power spectrum
— Preceding unsigned comment added by Benjamin.friedrich (talkcontribs) 09:24, 25 May 2012 (UTC)
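As a stake in the ground for item 4, a minimal sketch of the discrete-time recipe (illustrative only; scipy's averaged-periodogram estimator is one standard realisation):

    import numpy as np
    from scipy.signal import welch

    fs = 1000.0                                        # sample rate, Hz
    t = np.arange(0, 10, 1/fs)
    x = np.sin(2*np.pi*50*t) + np.random.randn(t.size) # tone plus noise

    # Welch: segment, window, |FFT|^2 normalised by (fs * window power),
    # then average over segments to reduce the variance of the estimate.
    f, Pxx = welch(x, fs=fs, nperseg=1024)
    print(f[np.argmax(Pxx)])                           # ~50 Hz, the tone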

Dear all,

I started to rewrite the definition section. I do very much appreciate the contribution of the former authors. Still, I think a good Wikipedia article should bridge the gap between elegant textbook definitions and recipes for real world tasks (like analyzing time series data). In the interest of all, please do not just undo my changes. If you feel like adding a little bit more mathematical rigor, please feel free to do so. But I honestly believe the link to physics and engineering should be kept. (In this spirit, all quantities should have correct physical units.) Let's move this article to a higher quality level together.

Best Ben Benjamin.friedrich (talk)

These recent redefinitions are a lot to digest. It would not be unreasonable to back them out and take a more incremental approach; the attempt to start a discussion, misplaced at the top, needs time for reactions. I can't tell without a lot more work whether there's an improvement there. Anyone else willing to study it? Dicklyon (talk) 16:53, 25 May 2012 (UTC)
Please note that the recent rewrite takes up suggestions made before. In particular, defining the power spectral density using a finite time equivalent of the Fourier transform was suggested several times before (see sections 'finite time interval' and 'Suggestion for better definition of spectral density (June 2010)'). I think this suggestion is more transparent than the alternative definition as the Fourier transform of the autocorrelation function (which, of course, should still be mentioned for completeness). A second point that came up before is how all the different definitions of the spectral density (energy/power/time-continuous signals/discrete signals) go together, and which normalization factors should be used, so that they get the right physical units. Correct physical units are not so important in pure math but get crucial in physics and engineering applications. So, I would say the new changes are useful and actually combine previous suggestions by others. But I agree that all this needs discussion, so let's start it here ... Benjamin.friedrich (talk)
The new definition of PSD in terms of the limit of the Fourier transform of a random signal doesn't make sense, as this book explains. It at least needs an expectation over an ensemble average. You mention a source, but there's no citation. And the Wiener–Khintchine theorem isn't linked, and there are lots of errors. Very hard to review this big a change. Dicklyon (talk) 16:12, 28 May 2012 (UTC)
I just added some sources. I very much like the reference you give: the book by Miller very nicely explains how using the truncated Fourier transform ensures the existence of Fourier transforms of stochastic signals. I added this as a source too. I agree on your point that for stochastic signals, one is interested not so much in the PSD of a particular realization (although it is perfectly fine to apply the definition to that case), but rather in an ensemble average. I added a sentence on that. Surely, there will be more errors. My personal view is that any larger improvement implies leaving a local maximum of text quality and will transiently introduce errors, before a new (hopefully even better) maximum is found. Thanks a lot for spotting some of these errors. Benjamin.friedrich (talk)
Thanks, I appreciate the support. I did a bit more on it, and changed the definition to be the one in this source. I can't see the relevant page on the other source, but I have a hard time imagining that it said what was in the article. I'm unsure of the rest of that section, as I haven't digested it all yet. I think it needs work. Dicklyon (talk) 18:38, 3 June 2012 (UTC)
Typo? The PSD is now defined as a limit T->0. I wonder if it shouldn't read T->infty as in reference [5]? Otherwise, writing the PSD as an expectation value is fine with me. Benjamin.friedrich (talk) —Preceding undated comment added 13:15, 5 June 2012 (UTC)
Yes, sorry, my mistake. Thanks for fixing the limit. Dicklyon (talk) 14:20, 5 June 2012 (UTC)

Most basic texts and all mathematicians make the definition of the power spectrum come from the autocorrelation function via the Wiener--Khintchine theorem, or Bochner's theorem. Some engineering texts first define the power spectrum in terms of the physical concept of power. Some statistics texts first motivate the definition of power spectrum, but *only* for a discrete set of data, i.e., Fourier *series*, in terms of an ANOVA and least-squares regression, so the square of a Fourier coefficient measures the contribution of that component of the model to the variance of the data. And only then do they do Wiener--Khintchine, so it is then truly a theorem and not a definition. And that is the way I prefer. Now this is independent of the question of rigour: I have seen each way presented both rigorously and sloppily. And sloppily-but-not-so-bad and also sloppy-nauseating. The texts out there are all over the map. Very few of them are reliable. 98.109.227.161 (talk) 15:57, 9 November 2012 (UTC)

I agree with using Wiener–Khintchine as a theorem; using it as a definition seems revisionist, and misses the point that power spectral density has a more physical or intuitive meaning. Dicklyon (talk) 17:43, 9 November 2012 (UTC)

Properties section

Could someone please have a look at the properties section? What is so special about the interval [-1/2,1/2]? I wonder if the statement on the variance is true? Benjamin.friedrich (talk) —Preceding undated comment added 14:52, 25 May 2012 (UTC).

The insistence on that interval [-1/2,1/2] is quite strange and doubtless not true. It sounds like someone copied this without understanding why, wherefore, or under what circumstances this holds.
D.A.W., master of science in electrical engineering, Georgia Tech. 98.81.0.222 (talk) 22:01, 8 July 2012 (UTC)

Here is the diff where the problem came in. The ref he's using is probably about discrete time series, and this property is in units of cycles per sample, I'd guess. But it needs to be fixed. We can ask User:Tomaschwutz, but he seems to be long dormant. Dicklyon (talk) 22:26, 8 July 2012 (UTC)

The properties section is full of falsehoods and missing hypotheses. I may get around to fixing it. 98.109.227.161 (talk) 15:36, 9 November 2012 (UTC)
I have begun fixing it, but it is a mammoth task, for example some of the references are references to textbooks with mistakes in them. I will continue gradually unless other editors' feedback is decidedly negative. I feel that the cross-spectral density should be put in a later, separate section. 98.109.240.7 (talk) 01:56, 14 November 2012 (UTC)

Definition

No statistician has had a chance to have some input about the article, I am guessing. The math definition of spectral density is the following: start with the autocovariance function r_x(h) (or autocorrelation function) of a covariance stationary time series x_t. By covariance-stationarity, this is a positive definite function on the real line R in the case of continuous time (or the integers Z for discrete time). Bochner's theorem then says r_x(h) is the Fourier transform (i.e. characteristic function) of a positive measure m on R (or Z):

    r_x(h) = ∫ e^{ixh} dm(x)

Using this measure, the given time series can be given a spectral representation as a stochastic integral of a time series with orthogonal increments. Only when the covariance function has sufficient decay at infinity (when it's in the Lebesgue space L^1) does a spectral density emerge. Under this assumption, the measure m is absolutely continuous and the spectral density is its Radon-Nikodym derivative f with respect to the Lebesgue measure, dm = f(x)dx; the spectral density f and r_x form a Fourier pair. This is the case, for example, in autoregressive–moving-average models where the filters are square-integrable.

Mathematically, one cannot simply take the Fourier transform of any function (in this case any autocovariance function). Also, in pure math, I believe the modern route usually goes through Bochner's theorem, not the more cumbersome Wiener–Khinchin.

This can be found in probably any math textbook on spectral analysis of time series. The article right now has a very applied point of view, which is OK, but a clean definition would be useful for mathematicians and statisticians who happen to wander in. This should also fall under the Wiki project on statistics. Mct mht (talk) 09:01, 7 October 2012 (UTC)
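Condensed, the chain described above (writing λ for the frequency variable to keep it distinct from the series x_t):

    r_x(h) = \int_{\mathbb{R}} e^{i\lambda h}\, dm(\lambda)
    \;\xrightarrow{\; r_x \in L^1 \;}\;
    dm(\lambda) = f(\lambda)\, d\lambda,
    \qquad
    f(\lambda) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{-i\lambda h}\, r_x(h)\, dh,

with f the Radon-Nikodym derivative of m with respect to Lebesgue measure, i.e. the spectral density.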

This is the cleanest, most elegant route. But when taken, you have no connection with "power". You have defined a measure, call it μ, and given it the name "power spectrum", but never shown that μ([a,b]) equals the amount of power contributed to the signal by the set of all frequencies lying between a and b. 66.167.204.242 (talk) 06:43, 18 April 2014 (UTC)

Statisticians never define the PSD using Bochner's theorem. I do not see that Bochner's theorem actually subsumes the Wiener-Khintchine theorem: in order to use Bochner, you have to verify that the autocorrelation function phi is positive definite. That seems like a lot of work to me. Then, after you have finished with Bochner, all you have is that phi is the Fourier transform of *some* probability measure, whereas the Wiener-Khintchine formula tells you *which* measure: the power spectrum. Interestingly, Wiener's methods of proof foreshadow the Laurent Schwartz theory of distributions, whereas Bochner's don't. Schwartz comments on something similar in Wiener's collected works (where each paper is commented on by a prominent contemporary mathematician, such as Kahane, Ito, Schwartz, etc.) — Preceding unsigned comment added by 98.109.227.161 (talk) 20:01, 7 November 2012 (UTC)[reply]

From your comments in our exchange on talk:Wiener-Khintchin theorem, I would disagree that Bochner's theorem is less direct. If anything, Wiener's approach is more roundabout. Sure, it gives you a measure: the one obtained by dividing the autocovariance function by x, taking the Fourier transform (using a P.V. argument), then taking the distributional derivative. This is not exactly like specifying a measure by, say, giving you its values on Borel sets. Bochner points out that the only property of the ACF, the essential property, one needs is positive definiteness. This is trivial to verify, and begs one to apply the Gelfand transform, and the measure comes right out. As for foreshadowing the theory of distributions, a Radon measure on T is just a special case of a distribution, although one can probably say Wiener's technique leans more heavily in that direction as Bochner's has more of the harmonic analysis flavor. Matter of taste, I suppose.
It would seem that we have very different definitions of a statistician. Mct mht (talk) 12:29, 13 November 2012 (UTC)

Statisticians also avoid stochastic integrals and complex numbers... Unless I am mistaken, the whole intuition behind Wiener and Chatfield is to first define the power spectrum as the amount of variance contributed to the signal by the frequencies up to and including omega. In an undergrad text like Chatfield, and survey articles by Wiener, this is done by example, the example of a Fourier series; then the distribution function of the power spectrum is obviously the sum of the squares of the Fourier coefficients up to and including that frequency... a finite sum since we work with cosine and sine transforms (avoiding complex numbers and negative frequencies... get it?) Then one skips over the formal definition, but intuitively it is this: given any interval of frequencies, construct a band-pass filter for that interval. The impulse response function is bounded and decays nicely, so the power in the signal that is output by the filter is finite even if the signal has no Fourier transform, as long as the signal is bounded, locally measurable, and has a finite auto-correlation function. This defines the power spectrum rigorously, and there are a few textbooks which do something like that. 98.109.227.161 (talk) 00:48, 8 November 2012 (UTC)
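That band-pass intuition is easy to check numerically; a sketch under obvious assumptions (scipy available, a white-noise test signal standing in for the process):

    import numpy as np
    from scipy.signal import butter, sosfilt, welch

    fs = 1000.0
    x = np.random.randn(200_000)                  # test signal

    # average power in the band [a, b] via an (approximate) band-pass filter
    a, b = 50.0, 100.0
    sos = butter(8, [a, b], btype='bandpass', fs=fs, output='sos')
    p_filter = np.mean(sosfilt(sos, x)**2)

    # the same quantity as the integral of a PSD estimate over [a, b]
    f, Pxx = welch(x, fs=fs, nperseg=4096)
    band = (f >= a) & (f <= b)
    p_psd = np.trapz(Pxx[band], f[band])

    print(p_filter, p_psd)                        # agree to a few percent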

The phrase power spectrum is indifferent to whether it is a probability density or a probability distribution. sin(t) has a power spectrum, but not a spectral density (unless one is going to talk about delta functions). Probabilists and statisticians are both equally willing to use either a cumulative distribution function to describe a random variable or a probability density function. But the power spectrum is an example of a probability measure which is not necessarily absolutely continuous, so its density might not exist. It is more natural, then, to first define the cumulative distribution function of the power spectrum and only then pass on to the density for the special case of a purely indeterministic process. So a formula or definition that goes straight to the integrated spectrum from the signal is not more roundabout than a formula for the density that only makes sense for a special class of signals. A Stieltjes integral that actually converges is not more roundabout than a Riemann or Lebesgue integral that doesn't even exist. 98.109.240.7 (talk) 13:24, 13 November 2012 (UTC)

Desiderata:

Contributors to the talk page have asked for no handwaving, which I take to mean statements that do not contain mathematical mistakes, such as claiming that taking expectations cures the problem of the lack of existence of a Fourier transform of a sample signal (this is a mistake copied from several standard engineering textbooks and shows why a source which is reliable for its engineering content cannot be relied on for mathematical content); a counterexample is included already in this talk page. Such as claiming that the Fourier transform of something exists when it doesn't, etc. On the other hand, the way many careful texts solve this problem is the un-physical approach of defining the power spectrum as the F.T. of the auto-correlation function or using the Wiener--Khintchine theorem. It would be desirable to follow the lead of the few textbooks that define the power spectrum directly from the notions of power and signal or process. But, per Wikipedia standards, one cannot propose an original treatment. So, I am working on a treatment which follows Wiener, Koopman, Chatfield, and Oppenheim, but tacitly correcting the mistakes contained in Oppenheim (an engineer). It is not easy to do this without being «original» or carrying out «synthesis», which are forbidden by Wikipedia policies. [the loser who missed Canadian Thanksgiving by visiting Hurricane Sandy in NYC, and is now missing USA-ian Thanksgiving in order to work on a Wikipedia article :-P] 174.94.44.105 (talk) 14:57, 23 November 2012 (UTC)
Wiener and Koopman and Chatfield all start with the same motivating example I inserted into the article. It is intuitive, clear, *and* highlights why simple-minded approaches do not work: although the sinusoidal process obviously has a power spectrum, and a power spectral distribution function, it does not have a power spectral density; neither it nor its auto-correlation function has a Fourier transform; one cannot convolve it with a bandpass filter since the convolution integral does not converge, i.e., all the simple-minded hand waving in engineering textbooks fails. Some slightly more careful engineering texts, such as Oppenheim, try to approach the definition by truncating the signal in time, and this almost works, but various steps are still invalid unless one assumes in advance that the process has a continuous spectrum. But since one of the main things one does in the spectral analysis of time series is to split up the process into two parts, based on analysing the power spectrum, into a continuous part and a deterministic part, their approach makes this impossible. It's as if someone gave a definition of «filter» that assumed the process had mean zero, forgetting that some of the most important filters are used to de-trend a series.... The next step is to define the power due to frequencies from 0 to nu by using a bandpass on the FT side, which exists because of the truncation, and is equal to what we think of as power by Parseval's theorem, which relates the L2 norm on the FT side to the power on the time-domain side, where the signal is. The last step is to pass to the limit as the truncating parameter goes to infinity. As an encyclopedia article, we can omit the proof of why this limit exists, and simply assert it. But we must not tacitly interchange two limits where this is invalid, as Oppenheim does. 174.94.44.105 (talk) 15:19, 23 November 2012 (UTC)
The preceding steps work as long as x is measurable and the power is finite. The procedure defines the power spectrum via its power spectral distribution function F (and Koopman shows it defines the power spectrum as a measure), and then can state the hypothesis that the distribution function be absolutely continuous, and define a process as «purely indeterministic» when that happens. Only then can one define the power spectral density. 174.94.44.105 (talk) 16:00, 23 November 2012 (UTC)
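In symbols, the construction just sketched (an outline only, with the convergence caveats exactly as stated above):

    \hat{x}_T(f) = \int_{-T/2}^{T/2} x(t)\, e^{-i 2\pi f t}\, dt,
    \qquad
    F(\nu) = \lim_{T\to\infty} \frac{1}{T} \int_{-\nu}^{\nu}
             \big|\hat{x}_T(f)\big|^2\, df,

where Parseval's theorem identifies the right-hand side with the average power in the band, and the spectral density is defined as F' only when F is absolutely continuous.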

New section, simple example for motivation

I have added a section to motivate the concepts and theorems: the example of a finite sum of sines and cosines. No complex numbers, no negative frequencies, no difficult convergence issues.

But even this simple example is an example of where a commonly offered definition of the power spectrum breaks down: take f(t) to be simply cos(t). The average power is finite. The truncated Fourier transform, normalised as one of the editors has suggested here, is in the limit always zero or infinity... that is not very useful. This is why mathematically careful writers define the spectral distribution function first, and then, *if* it is differentiable, define the spectral density, as is usual in probability and statistics.

I will make the notation consistent throughout the article, later, consistent with standard time-series and signal processing texts. (Not all are completely consistent with each other, but those square brackets have to go...) 98.109.240.7 (talk) 02:47, 13 November 2012 (UTC)
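To make the cos(t) point concrete: at the line frequency the 1/T-normalised truncated transform diverges, and elsewhere it vanishes,

    \hat{f}_T(\omega) = \int_0^T \cos(t)\, e^{-i\omega t}\, dt
    \approx \tfrac{T}{2} \;\text{ at } \omega = 1,
    \qquad
    \frac{|\hat{f}_T(1)|^2}{T} \approx \frac{T}{4} \to \infty,
    \qquad
    \frac{|\hat{f}_T(\omega)|^2}{T} \to 0 \;\;(\omega \neq 1),

which is why the spectral distribution function (with a jump at ω = 1 carrying the average power) exists while a density does not.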

It's kind of long, and doesn't end up with something that has a spectral density, as you note. So maybe it's not the best way to motivate this stuff? I did some copyedits for WP style (WP:MOS). Dicklyon (talk) 03:04, 13 November 2012 (UTC)
Well, the example of the autocorrelation function could be left out if people thought it was not worth the space. One important point is that the definitions of power density given later don't make sense for the general wide-sense stationary process. For example, the comment that the expectation (ensemble average) of the truncated Fourier transform will have a limit even if the typical sample path, or signal, does not, is just a mistake, as seen by the simple example of cos(t + p), where p is a random variable uniformly distributed on the interval from zero to 2π. So it is crucial to talk about the power spectrum even when the density is always zero or a delta function. That is why careful writers define the integrated spectrum, or spectral distribution function, first. 98.109.240.7 (talk) 03:33, 13 November 2012 (UTC)
It would also be shorter if the summation notation were removed and one simply did the example of sin(t). 98.109.240.7 (talk) 13:15, 13 November 2012 (UTC)

notational consistency

Using phi was just my habit; Wiener always uses it. Now I have been consulting the few (less than a dozen) time-series or signal processing texts which I happen to have with me to pick notation. I am thinking as follows: as always in stats, caps for random variables, lower case for the sample; Greek for theoretical statistics, Roman for the sample statistic. So, the stochastic process will be X(t) and the signal will be x(t). Frequency will be f, angular frequency omega. Discrete time will be n, the sampled signal will be x_n. Correlation will be rho for theoretical, r for sample. Covariance will be gamma for theoretical, c for sample. But as usual, what most statisticians call the autocovariance function will be called the autocorrelation function in accordance with Wiener and many signal processing engineers, so it will be c. When cross-correlations are involved, r and c will be supplied with subscripts, but not in the basic discussion of the univariate case. Stochastic integrals and processes with orthogonal increments will only be mentioned at the tail end, as dessert... as is usual in statistics or engineering texts. So far, the consensus on this page seems to be not to bring up Laurent Schwartz's theory of distributions as a key building block in the definitions. I actually don't know of a reliable source for the off-hand statement often made, that that theory makes the use of all these non-convergent integrals rigorous.... and such a reliance on Schwartz is not normal in careful statistics texts. Some mention should be made of it at the tail end. 98.109.240.7 (talk) 17:26, 13 November 2012 (UTC)
Thinking more about it, the off-hand statement common in engineering texts that the use of Laurent Schwartz's notion of distribution fixes things is just wrong. Distributions cannot be multiplied together, so you cannot take their absolute value squared. Yet that is what always winds up needing to be done. Distributions do not have a value at a point, so you cannot prove that the power contributed to the signal by the frequencies in the band [a,b] is given by S(b)-S(a) if you have defined S as the Fourier transform of something in the sense of a distribution. S has to be shown to exist as a statistical distribution function, i.e., a monotone function. 66.167.204.242 (talk) 06:35, 18 April 2014 (UTC)

Illustrative image(s)

For real? This is a relatively long article on PSD and there is no sample graph?! —DIV (137.111.13.36 (talk) 23:29, 4 December 2013 (UTC))

Thank you for your suggestion. When you believe an article needs improvement, please feel free to make those changes. Wikipedia is a wiki, so anyone can edit almost any article by simply following the edit this page link at the top.
The Wikipedia community encourages you to be bold in updating pages. Don't worry too much about making honest mistakes—they're likely to be found and corrected quickly. If you're not sure how editing works, check out how to edit a page, or use the sandbox to try out your editing skills. New contributors are always welcome. You don't even need to log in (although there are many reasons why you might want to). Zueignung (talk) 04:18, 5 December 2013 (UTC)

Title

The concept of spectrum is more fundamental than that of spectral density. For the same reasons, the concept of power spectrum is more fundamental than that of power spectral density. When analysing real data, one first looks at the power spectrum as a whole, then filters out the discontinuous part. Only the remainder, the continuous part, has a power spectral density. So it is a pity that the title "Power Spectrum" redirects to "spectral density". I think that including the word "density" in this title is disorienting to the whole article, and its baleful influence disorients both the readers and the writers. 66.167.204.242 (talk) 07:13, 18 April 2014 (UTC)

The most serious mistake at present

The definition.

The formula given for the definition is nonsensical, i.e., it makes no sense. It is

"
"Here E denotes the expected value; explicitly, we have
"

But in the notation of this article, x is a deterministic signal, so it does not have a probabilistic expectation, so the formula is nonsensical.

But even if this were fixed, the formula would be a mistake. Because the process is wide-sense stationary, its expectation is a constant. And even if this were fixed, one would run into convergence problems. 66.167.204.242 (talk) 07:44, 18 April 2014 (UTC)


I totally agree that this needs to be changed. I got so confused because S_xx is defined as both the energy spectral density AND the power spectral density in the article. Please someone fix this! Nscozzaro (talk) 05:48, 21 September 2016 (UTC)

Beware of internet sources and non-famous textbooks on this topic[edit]

You would think that MIT OpenCourseWare notes would be reliable. But no. Look at http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-011-introduction-to-communication-control-and-signal-processing-spring-2010/readings/MIT6_011S10_chap10.pdf for example. They use non-standard terminology without alerting the reader, make mathematical mistakes, and are generally unreliable. The sentence leading up to Eq. 10.1, for example, uses the probabilistic term "expectation" and the usual expectation symbol, E, but what it describes is actually a time average. Perhaps this is why they use the phrase "given by" instead of "defined by". They say the expected instantaneous power is "given by" Eq. 10.1. Well, it is, almost always, provided that the process is ergodic. But this formula is not the definition of expected instantaneous power, since the expectation operator is an ensemble average, not a time average. But the worst thing about this is that the average reader is going to get the impression that (10.1) is the definition of the concept, which it is not.

They themselves *might be* suffering from some confusion on exactly this issue, since they go on to say that when the system is ergodic, this *also* "represents" the time average. This is confused, at best. It is always the time average of the instantaneous power, and if the system is ergodic then it is, for almost all sample realisations, equal to the expected instantaneous power.
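
The distinction is easy to see numerically. Here is a minimal sketch (my own; the AR(1) model, numpy, and all parameter values are arbitrary choices) comparing the ensemble average of the instantaneous power with the time average along one realisation of an ergodic process:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 50_000   # samples per realisation
    M = 100      # ensemble size
    phi = 0.9    # AR(1) coefficient (arbitrary choice)

    # Generate an ensemble of AR(1) realisations: x[n] = phi*x[n-1] + w[n]
    w = rng.standard_normal((M, N))
    x = np.zeros((M, N))
    for n in range(1, N):
        x[:, n] = phi * x[:, n - 1] + w[:, n]

    # Ensemble average of instantaneous power at one fixed time
    ensemble_power = np.mean(x[:, N - 1] ** 2)

    # Time average of instantaneous power along a single realisation
    time_power = np.mean(x[0] ** 2)

    # Both estimates should approach the stationary variance 1/(1 - phi**2)
    print(ensemble_power, time_power, 1 / (1 - phi ** 2))

For this ergodic process both estimates converge to the same number; for a non-ergodic process they need not agree, which is exactly why (10.1) cannot serve as the definition.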

The treatment by Tan, also in internet-available course notes but from New Zealand, is correct, and contradicts the treatment at MIT; see http://home.comcast.net/~szemengtan/LinearSystems/noise.pdf .


Another two examples, this time published[edit]

The Amazon user reviews of an introductory text in one of the Wiley series point out mathematical "howlers" in the foundational chapter on Fourier analysis. Yet this is a printed text, published by a reputable publisher. http://www.amazon.com/Principles-Random-Signal-Analysis-Design/dp/0471226173 Ah ha... but reputable in what precise sense? This is an engineering series. Engineers are professionally trained for, and the reviewers for an engineering series are known for, getting the right answers and making things that work. They are not professionally trained to make mathematical statements that are not mistaken. The foundational chapter in question, then, does not fall within the sphere of the good repute of this series of texts, nor of its authors, nor of its reviewers. The conclusion, fortified by my comparison of how statistics textbooks treat this topic with how engineering textbooks treat it, is that an engineering text is not a reliable source for any mathematical statement on this topic.

So I then looked at my little brother's old textbook on this topic, written by the head of AT&T's French division of research. He makes the mistake of saying that the operational properties of the Fourier transform, of taking products into convolutions and vice versa, are still true when the Fourier transform in the sense of distribution is used. But everyone knows that you cannot multiply together two distributions! Digital Processing of Signals, M. Bellanger, Wiley, 1984. 66.167.204.242 (talk) 17:56, 18 April 2014 (UTC)[reply]
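
A concrete instance of the problem (my own example, not Bellanger's): for x(t) = cos(2 pi f_0 t), the Fourier transform in the sense of distributions is

    \hat{x}(f) = \tfrac{1}{2}\left[ \delta(f - f_{0}) + \delta(f + f_{0}) \right],

so forming |x-hat(f)|^2 would require squaring a Dirac delta, which is undefined. This is exactly the product of two distributions that the operational rules would need.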

Constant normalising factors in definition of fourier Transform[edit]

The Wikipedia article on the Fourier transform puts the 2 pi in the exponent, so that the argument is frequency in cycles per second, i.e., hertz. For consistency, we should do the same, and not use angular frequency. Many mathematicians, such as Wiener, and many electrical engineering texts do use angular frequency, and then the 2 pi has to be put into the definition of the transform or else into the spectral synthesis. But then the units aren't quite Hz, which is just another reason for being consistent with the other Wikipedia article and working in terms of frequency rather than radians.
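
Concretely, the two conventions at issue are (a sketch):

    \hat{x}(f) = \int_{-\infty}^{\infty} x(t)\, e^{-i 2\pi f t}\, dt            (ordinary frequency, hertz)
    X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-i \omega t}\, dt, \quad \omega = 2\pi f   (angular frequency)

and if the densities are normalised so that each integrates to the same total power, then S(f) = 2\pi\, S(\omega) evaluated at \omega = 2\pi f, which is the stray 2 pi that has to live somewhere.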

Furthermore...most statisticians prefer to use real frequencies and thus put the 2 pi in the exponent or inside the argument of the sine and cosine. Chatfield, for example. Also, among signal processing texts, the classic Blackman and Tukey do so, as does Laurent Schwartz in his book on mathematics for Physics. There are many computational and conceptual advantages, pointed out by such authors.

Furthermore, since all data are real, and all measurements are real, many statisticians prefer not to work with complex exponentials either. This then has the advantage that all frequencies are positive, and the formulas for the power contributed by frequencies between a and b are simpler. The concept of a negative frequency is not used in Statistics. Complex exponentials have advantages when dealing with proofs and the operational properties of the Fourier transform, and are useful in electrical engineering in order to deal with phases and phase shifts. But this article will omit nearly all proofs, and the power spectrum ignores all phase relations, so a text such as Chatfield does not use complex exponentials.68.164.80.96 (talk) 06:22, 3 May 2014 (UTC)[reply]
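
For reference, the real, positive-frequency formulation runs roughly as follows (my sketch of the style used by Chatfield and by Blackman and Tukey):

    x(t) = \int_{0}^{\infty} \left[ a(f) \cos(2\pi f t) + b(f) \sin(2\pi f t) \right] df,

with a one-sided spectral density G(f) = 2 S(f) for f > 0, so that the power contributed by the frequencies between f_1 and f_2 is simply \int_{f_1}^{f_2} G(f)\, df, with no negative frequencies anywhere.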

merge[edit]

Signal frequency spectrum here: "distribution over the frequency domain" is synonymous with spectral distribution. Fgnievinski (talk) 04:29, 27 June 2015 (UTC)[reply]

I disagree. There are many ways to measure a spectral density, but the signal frequency spectrum is obtained by measuring a time or space domain signal and then computing the Fourier transform. The signal frequency spectrum is a special case of a much more general idea. Rather than combine the two articles, the spectral density article needs to be fixed so that it is clear to readers that the spectral density represents any effective quantification of the density of some response (output) as a function of frequency or wavelength input. Fourier1789 (talk) 17:39, 3 July 2015 (UTC)[reply]
@Fourier1789: I hesitate to disagree with Baron Jean Baptiste Joseph Fourier himself, but I don't think signal frequency spectrum necessarily implies the application of a particular spectral density estimation method, your highness; could you provide any sources documenting the distinction you make? I do agree with either (i) clarifying spectral density so as to remove any assumption that a time series is a prerequisite or (ii) renaming the article to signal spectral density and referring the reader to Spectrum#Physical science for energy spectrum, mass spectrum, etc.; which one would you prefer? Fgnievinski (talk) 19:11, 3 July 2015 (UTC)[reply]
The "signal" in signal frequency spectrum implies a subset of signal processing. In contrast, if you do a google scholar search for things like spectral density in astronomy, spectral density in chemistry, and spectral density in atomic physics, you will find lots of scholarly uses for spectral density unrelated to signal processing or taking a Fourier transform of time series data. I think the better approach is to keep separate articles, because "signal frequency spectrum" is sufficiently notable to have a separate article independent of the more general "spectral density" article. I propose the spectral density article be made more general to include other areas as outlined above.Fourier1789 (talk) 20:22, 3 July 2015 (UTC)[reply]
@Fourier1789: So let's create a new spectral density (physical sciences), move the current spectral density to spectral density (signal processing), and leave a disambiguation page in spectral density. Then signal frequency spectrum can be merged into spectral density (signal processing). Fgnievinski (talk) 20:35, 3 July 2015 (UTC)[reply]

@Fourier1789: I've put a hatnote [1] but I didn't rename the present article, as the signal processing meaning is primary. Let me know if there are any outstanding impediments for the merger, otherwise signal frequency spectrum will go into spectral density (signal processing)#Explanation. Thanks. Fgnievinski (talk) 23:31, 4 July 2015 (UTC)[reply]

Explanation/Definition[edit]

The page doesn't seem to have a clear explanation of what a spectral density actually is.

This seems quite... unintuitive? Why have an article on spectral density as a concept but not the explanation? Reading a few resources has left me with around 20 ideas of what it should be, with nothing quite definitive as a concept. Would anyone with a better understanding of the concept be able to provide an explanation in the definition or explanation section? — Preceding unsigned comment added by 65.39.42.160 (talk) 02:18, 19 November 2018 (UTC)[reply]

total hog wash[edit]

The opening thesis fails logic and learning.

"According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies"

Fourier analysis neither allows nor disallows any of the above, i.e., categorizations or partitioning style.

It would be like saying, "addition allows groups of things to be added together" — Preceding unsigned comment added by 2601:143:400:547B:B5C2:D763:11F8:A71D (talk) 18:45, 28 April 2019 (UTC)[reply]
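
To be fair to both sides: what the inversion theorem actually provides for an integrable, non-periodic signal is an integral over a continuum of frequencies,

    x(t) = \int_{-\infty}^{\infty} \hat{x}(f)\, e^{i 2\pi f t}\, df,

and a decomposition into a number of discrete frequencies, x(t) = \sum_k c_k\, e^{i 2\pi k t / T}, arises only in the special case of a T-periodic signal. So the quoted sentence is indeed loose.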

Symmetry of complex PSD[edit]

I often see it mentioned that the PSD of a complex signal is not necessarily symmetric. So I don't know why this statement is here: "The spectrum of a real valued process (or even a complex process using the above definition[dubious – discuss]) is real and an even function of frequency:" 174.62.124.25 (talk) 01:59, 12 December 2020 (UTC)[reply]
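
A standard counterexample (my own illustration): the complex exponential with a uniformly random phase, X(t) = e^{i(2\pi f_0 t + \Phi)}, is wide-sense stationary with

    \gamma(\tau) = \mathbf{E}\left[ X^{*}(t)\, X(t+\tau) \right] = e^{i 2\pi f_0 \tau},
    S(f) = \delta(f - f_0),

which is concentrated entirely at +f_0 and is not an even function of frequency.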

section "Properties" is redundant[edit]

I don't know which came first, Spectral_density#Energy_spectral_density or Spectral_density#Properties, but now they are redundant. If there is any new information, e.g. the name Wiener–Khinchin theorem, it could probably be incorporated into Energy_spectral_density. Another problem with Properties, as currently written, is the mixture of $x$ and $f$. $f$ should be changed to $x$ for consistency with the other sections.
--Bob K (talk) 13:36, 30 December 2020 (UTC)[reply]

How does the magnitude squared of a Fourier transform become a time convolution?[edit]

Just below Eq.2, there is an unnumbered equation, whose first part is:

    \left| \hat{x}(f) \right|^{2} = \mathcal{F}\left\{ x(t) * x^{*}(-t) \right\}

Is this correct? How do you get a convolution? I would expect the complex conjugates to multiply to give an absolute value squared, not a convolution. The rest of the equation is then used as a proof (without using the Wiener–Khinchin theorem) that the power spectral density and the autocorrelation of a function are Fourier transform pairs.

-- 108.60.43.48

See Fourier_transform#Cross-correlation_theorem. For a more convincing argument, see Multi-dimensional derivation of Eq.1, for the dimension n=1, and for the special case of $y(t) = x^{*}(-t)$, whose Fourier transform is $\hat{x}^{*}(f)$.
--Bob K (talk) 21:42, 27 May 2021 (UTC)[reply]
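
Spelled out (a sketch of the step Bob K points to): using the facts that convolution maps to multiplication and that \mathcal{F}\{x^{*}(-t)\}(f) = \hat{x}^{*}(f),

    \mathcal{F}\left\{ x(t) * x^{*}(-t) \right\}(f) = \hat{x}(f)\, \hat{x}^{*}(f) = \left| \hat{x}(f) \right|^{2},

so the magnitude squared is the transform of the convolution, and that convolution is precisely the (deterministic) autocorrelation; no Wiener–Khinchin theorem is needed at this step.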

Cross spectral density section[edit]

This section needs work. The first sentence should be extended to actually define the CSD: "a CSD can be defined... as something". There is actually no definition, either in text or in math. And towards the end there's a "therefore it is one of them with a factor of 2", which kind of comes out of nowhere, since it was not defined yet. Moo (talk) 16:11, 17 January 2023 (UTC)[reply]
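
For reference, the definition the section presumably wants, sketched by analogy with the univariate case (my wording, not the article's): for jointly wide-sense-stationary processes X and Y,

    \gamma_{xy}(\tau) = \mathbf{E}\left[ X^{*}(t)\, Y(t+\tau) \right],
    S_{xy}(f) = \int_{-\infty}^{\infty} \gamma_{xy}(\tau)\, e^{-i 2\pi f \tau}\, d\tau,

i.e. the CSD is the Fourier transform of the cross-covariance; the "factor of 2" remark then presumably refers to a one-sided convention G_{xy}(f) = 2\, S_{xy}(f) for f > 0.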