I came across Shannon’s formula for calculating channel capacity, and I don’t quite understand why we calculate average signal power “over bandwidth”. Why over that "domain of interest"?

Here is my interpretation of why, but I’m not sure I have understood it correctly.

Imagine we are sending some signal f(t) from point A to point B. We take the Fourier transform (FT) of f(t) at point A, which gives us all the sinusoidal components of that signal; for every component we have a particular amplitude. Now, I’m not sure whether this can be done, but here is my interpretation of “power over bandwidth”: in the frequency domain, we calculate the power of every component of the signal, and by doing some manipulation with all those calculated “powers” (their sum equals the power of the signal in the time domain?) we can calculate the average power of the signal in the frequency domain.
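The relation being guessed at here is Parseval’s theorem: the mean power computed in the time domain equals the sum of the component powers from the FFT. A minimal numerical check (sample rate, frequencies, and amplitudes are made up for illustration):

```python
import numpy as np

# Illustrative signal: two sinusoids with amplitudes 3 and 1.
fs = 1000                      # assumed sample rate in Hz
t = np.arange(0, 1, 1 / fs)
f = 3 * np.sin(2 * np.pi * 50 * t) + 1 * np.sin(2 * np.pi * 120 * t)

# Average power computed directly in the time domain.
p_time = np.mean(f ** 2)

# Average power computed from the Fourier components (Parseval's theorem):
# summing |F[k]|^2 / N^2 over all bins gives the same mean power.
F = np.fft.fft(f)
p_freq = np.sum(np.abs(F) ** 2) / len(f) ** 2

print(p_time, p_freq)  # both ≈ 3²/2 + 1²/2 = 5.0
```

So “manipulating the powers of the components” is just summing them; the average power is the same whichever domain you compute it in.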

So we have the average power of the “clean/undistorted” signal, say 40 W, “over the bandwidth” that interests us (the bandwidth of our signal).

The signal travelling from point A to the receiver at point B gets distorted. At point B we take the FT of that distorted signal. Now, maybe some components have been added to our signal; all those added components represent the “noise signal”. We “separate” (in our head) the noise components from our signal components. Since the signal is distorted, some of the amplitudes of the signal’s components may have changed, so we again calculate (in the same fashion as above) the average power of those components and get a possibly different average power, which might for example be less than the one at point A.

So, based on this interpretation, is that what we mean by average signal power “over bandwidth” (“over the bandwidth of something that interests us, over our signal bandwidth, to see if something has changed”)?

## Best Answer

First to your question: the term "average received signal power over bandwidth" is the power spectral density (PSD), usually in units of dBm/Hz or W/Hz. It cannot be used directly to calculate channel capacity; its prime use is to calculate interference.
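To make the PSD concrete, here is a sketch of a periodogram-style estimate in W/Hz (assuming a 1 Ω load so that power is just the mean square; the tone frequency and sample rate are made up). Integrating the PSD over frequency recovers the average signal power:

```python
import numpy as np

fs = 1000                                # assumed sample rate in Hz
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 100 * t)          # illustrative 100 Hz tone

# One-sided periodogram PSD estimate in W/Hz.
F = np.fft.rfft(x)
psd = (np.abs(F) ** 2) / (fs * len(x))   # two-sided scaling
psd[1:-1] *= 2                           # fold negative frequencies in

# Integrating the PSD over the bandwidth gives back the mean power.
p = np.sum(psd) * (fs / len(x))          # bin width is fs/N
print(p)                                 # ≈ 0.5 W for a unit-amplitude sine
```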

For the discussion of channel capacity it is important that we are not dealing with "distortion" (which would be counteracted by pre-distortion and line coding), but with added noise. There are different sources of noise with different spectra, but for the discussion of channel coding it is sufficient to look at additive white Gaussian noise (AWGN), a memoryless stochastic process totally uncorrelated with your signal.

AWGN has infinite bandwidth but finite power spectral density. This opens the door to reducing the total received noise power (by limiting the receiver bandwidth to that of the signal) while keeping the full total received signal power, thus improving the SNR.
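A small sketch of this idea, using an idealized brick-wall filter (not a practical receiver filter) and made-up frequencies: the signal lives below 200 Hz, so band-limiting the receiver throws away most of the white noise power but none of the signal power.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000                                   # assumed sample rate in Hz
t = np.arange(0, 1, 1 / fs)

signal = np.sin(2 * np.pi * 100 * t)          # signal confined below 200 Hz
noise = rng.normal(0, 1, len(t))              # white noise across all of fs/2

def snr_db(s, n):
    return 10 * np.log10(np.mean(s ** 2) / np.mean(n ** 2))

def brickwall_lowpass(x, cutoff):
    # Idealized filter: zero out all FFT bins above the cutoff frequency.
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[f > cutoff] = 0
    return np.fft.irfft(X, len(x))

snr_wide = snr_db(signal, noise)
snr_narrow = snr_db(brickwall_lowpass(signal, 200),
                    brickwall_lowpass(noise, 200))
print(snr_wide, snr_narrow)  # band-limiting raises the SNR markedly
```

Keeping only 200 Hz out of a 5000 Hz noise bandwidth removes about 96 % of the noise power, roughly a 14 dB SNR improvement here.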

At each reference point in your receiver you can divide the total signal power by the noise power and state a signal-to-noise ratio (SNR), typically in decibels. Important for channel capacity is the SNR before channel decoding. Now Shannon comes into play.

He states that information can be transferred as long as the SNR (in dB) is $>-\infty$. A common misunderstanding is that SNR > 0 dB is required, but that is just the point where detection and demodulation of the signal become very easy. Detection is harder for SNR < 0 dB, but any deviation from a pure stochastic process is still detectable. The reasoning that information transfer with arbitrarily low bit error rate is possible at every SNR is quite surprising and makes Shannon's original article an interesting read.

The rate at which information can be transferred is, however, limited, and Shannon is able to prove an upper bound for this rate. It depends on the occupied bandwidth, which is related to the maximum slew rate (think of the Fourier decomposition) and the minimum temporal spacing of symbols, and on the SNR, which requires a minimum symbol distinctness and limits the bits per symbol.
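This upper bound is the Shannon–Hartley formula, C = B · log₂(1 + S/N), with the SNR as a linear power ratio. A short sketch with illustrative numbers:

```python
import numpy as np

def capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity in bit/s; SNR converted from dB to a ratio."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * np.log2(1 + snr_linear)

# A 3.1 kHz telephone-style channel at 30 dB SNR (illustrative numbers):
print(capacity(3100, 30))    # ≈ 30.9 kbit/s

# Capacity stays positive even for SNR far below 0 dB:
print(capacity(3100, -20))   # small but nonzero
```

Note how the capacity only reaches zero as the SNR goes to $-\infty$ dB, matching the statement above.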

The Shannon limit is a theoretical upper bound; channel codes with near-maximum performance are still an active field of research. LDPC codes and turbo codes are the best known solutions.