3GPP TS 11.21: Base Station System (BSS) equipment specification; Radio aspects (Release 1999)
The idealized assumptions resulting in table A.1 are:
1) All random variables Xi (error events) are assumed to be independent.
2) The observed random variable X is assumed to have a Gaussian distribution.
3) All random variables Xi (error events) are assumed to be outputs of stationary random processes with identical distributions.
4) The system requirement Ps is assumed to be sufficiently small.
A.3.1 Independent errors
The assumption that all error events are independent does not strictly hold. Mutual dependence between error events increases the variance of the observed random variable X; consequently, the number of samples needed for the required confidence should be multiplied by a factor indicating the number of error events which on average are completely correlated.
‑ For FERs the events occur so seldom that they may be regarded as independent (factor of 1), the exception being TCH/FS, FACCH and TCH/AxS, which should have a factor of 2.
‑ Since a convolutional decoder on average will produce burst errors of the order of the constraint length, BERs and RBERs should have a factor of 5.
Generally, a "good" BSS will have a performance Pg which is better than Ps. Consequently, the number of samples found in all cases by (Eq 6) should be multiplied by an additional factor of 2.
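The multiplicative corrections above can be sketched as follows. This is an illustrative sketch, not a normative calculation: the channel-type names and the function `corrected_samples` are assumptions for illustration; the factor values (1, 2, 5 and the additional factor of 2) are taken from the text, and the baseline count is assumed to come from (Eq 6).

```python
# Hypothetical sketch of the A.3.1 corrections to the (Eq 6) sample count.
# Factor values are taken from the text; names are illustrative.
CORRELATION_FACTOR = {
    "FER": 1,         # frame erasures rare enough to be treated as independent
    "FER_TCH_FS": 2,  # exception: TCH/FS, FACCH, TCH/AxS
    "BER": 5,         # decoder burst errors of the order of the constraint length
    "RBER": 5,
}
GOOD_BSS_MARGIN = 2   # a "good" BSS has a performance Pg better than Ps

def corrected_samples(n_eq6: int, channel: str) -> int:
    """Apply the A.3.1 multiplicative corrections to a baseline (Eq 6) count."""
    return n_eq6 * CORRELATION_FACTOR[channel] * GOOD_BSS_MARGIN
```

For example, a BER test whose baseline requirement is 1000 samples would need 1000 x 5 x 2 = 10 000 samples under these corrections.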
A.3.2 Gaussian distribution
The assumption of a Gaussian distribution for the observed random variable X should hold in most cases due to the high number of samples used.
A.3.3 Stationary random processes
The assumption that the error events are outputs of stationary random processes with identical distributions holds generally for static propagation conditions. For multipath propagation conditions, however, this is not true. On the other hand, the multipath propagation condition may be assumed to be stationary for short periods of time. Taking into account the worst‑case situation of flat fading, where the distance between fades is a wavelength, the characteristics of the propagation condition may be assumed to change e.g. 10 times per wavelength and to be short‑term stationary in between. This means that the different random variables Xi (error events) have different pi and consequently different E(Xi) and Var(Xi). Since all pi are unknown and only the random variable X, which is the average of all Xi, is observed against a system requirement Ps, the statistical parameters in the case of multipath propagation conditions are those of (Eq 7), assuming that all pi are independent.
Also in this case the variance can (and should) be simplified to p/N if all pi are small. However, in this case the second term of (Eq 7b) is dominated by the greatest pi and the simplification is less valid than for static propagation conditions. Nevertheless, the needed number of samples given by (Eq 6) is conservative because the variance would ideally be lower. On the other hand, if the fact that the different pi are likely to be correlated with positive correlation is taken into account, Var(X) will increase and the simplification to p/N might be adequate.
Since under multipath conditions the observed random variable X results from an average over a set of random processes, we should ensure that the average covers a sufficient number of processes to obtain an overall stationary process. Requiring an average over 1000 wavelengths (or 10 000 processes if the multipath propagation condition is updated every 10th of a wavelength), the resulting observation period needed is indicated in table A.2 if the logical channel in question occupies the basic physical channel all the time. The percentage of the time "on the air" for the logical channel should also be taken into account; consequently, the observation period indicated in table A.2 is multiplied by the inverse of the frame filling factor.
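The observation-period reasoning above can be sketched numerically. This is a hedged illustration only: the function name, the 900 MHz carrier and the mobile speed in the example are assumptions; the 1000-wavelength requirement and the inverse frame filling factor come from the text.

```python
# Hypothetical sketch of the A.3.3 observation-period calculation.
C = 3.0e8  # speed of light, m/s

def observation_period_s(carrier_hz: float, speed_m_s: float,
                         wavelengths: int = 1000,
                         frame_filling_factor: float = 1.0) -> float:
    """Time needed to traverse `wavelengths` wavelengths at `speed_m_s`,
    divided by the frame filling factor (i.e. multiplied by its inverse)
    to account for the fraction of time the logical channel is on the air."""
    wavelength_m = C / carrier_hz
    return wavelengths * wavelength_m / speed_m_s / frame_filling_factor
```

For instance, at an assumed 900 MHz carrier (wavelength about 0,33 m) and 50 km/h (about 13,9 m/s), 1000 wavelengths take roughly 24 s of fully occupied channel time; a frame filling factor of 0,5 doubles that.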
Table A.2: Required observation periods under multipath
Time per wavelength | Required observation period
A.3.4 Low error ratios
The assumption that the system requirement Ps is sufficiently small holds generally. However, at a high Ps, e.g. around 10^‑1, the approximation in (Eq 3) is not strictly accurate. Using the correct variance would decrease the needed number of samples, so the assumption gives conservative results.
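The conservativeness argument above can be made concrete with a small check. This is an illustrative sketch under the assumption that (Eq 3) simplifies the binomial variance p(1-p)/N to p/N; the function names are not from the specification.

```python
# Illustrative comparison (assumed form of the Eq 3 simplification):
# the exact binomial variance of the observed ratio X is p(1-p)/N,
# while the small-p simplification uses p/N, which is always larger.
def var_exact(p: float, n: int) -> float:
    return p * (1.0 - p) / n

def var_simplified(p: float, n: int) -> float:
    return p / n
```

At Ps around 10^-1 the simplification overestimates the variance by a factor 1/(1-p), about 11 %, so sample counts derived from it err on the conservative side.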
A.3.5 Total corrections
In conclusion, the various limitations of the assumptions discussed in the above subclauses all lead to different increases in the number of samples needed to obtain the required confidence. These increases should all be taken into account by applying the highest one, and the calculated numbers of samples are indicated in annex C. The resulting overall confidence is possibly slightly lower than 99.9 % and 95.0 %, but should be quite close. Considering also that the different tests are likely to be correlated, the overall probabilities of passing a "good" BSS and failing a "bad" BSS are higher than indicated.
NOTE: The worst case in terms of test time is the static sensitivity performance test for the SACCH/T, giving 7,9 hours. On average, the test times are around 35,6 min, ranging upward from 5,0 s.