Generalized characteristics of signals. Main characteristics of signals

When studying the generalized theory of signals, the following questions are considered.

1. Basic characteristics and methods of analyzing signals used in radio engineering to transmit information.

2. The main types of signal transformations performed when building communication channels.

3. Methods for constructing and analyzing the radio circuits that perform operations on signals.

Radio engineering signals can be defined as signals that are used in radio engineering. According to their purpose, radio signals are divided into signals:

radio broadcasting,

television,

telegraph,

radar,

radio navigation,

telemetry, etc.

All radio signals are modulated. When generating modulated signals, primary low-frequency signals (analog, discrete, digital) are used.

An analog signal follows the law of variation of the transmitted message.

A discrete signal arises when the message source transmits information at certain moments in time (for example, weather reports); a discrete signal can also be obtained by time sampling of an analog signal.

A digital signal is a representation of a message in digital form; for example, a text message encoded as a digital signal.

All message characters can be encoded into binary, hexadecimal and other codes. Encoding is carried out automatically using an encoder. Thus, the code symbols are converted into standard signals.

The advantage of digital data transmission is its high noise immunity. The reverse conversion is carried out using a digital-to-analog converter.

Mathematical models of signals

When studying the general properties of signals, one usually abstracts from their physical nature and purpose, replacing them with a mathematical model.

Mathematical model – the selected method of mathematical description of the signal, reflecting the most essential properties of the signal. Based on a mathematical model, it is possible to classify signals in order to determine their common properties and fundamental differences.

Radio signals are usually divided into two classes:

deterministic signals,

random signals.

A deterministic signal is a signal whose value at any moment in time is known or can be calculated in advance.

A random signal is a signal whose instantaneous value is a random variable (for example, a beep).

Mathematical models of deterministic signals

Deterministic signals are divided into two classes:

periodic,

non-periodic.

Let s(t) be a deterministic signal. Periodic signals are described by a periodic function of time:

s(t) = s(t ± nT), n = 1, 2, …,

and repeat after a period T; in practice the observation time satisfies t >> T. The remaining signals are non-periodic.

A pulse is a signal whose value differs from zero only within a limited time interval (the pulse duration).

However, a mathematical model may use functions defined over an infinite time interval, so the concept of effective (practical) pulse duration is introduced: the interval within which the signal value remains above some specified fraction of its maximum.

Exponential pulse. For example, define the effective duration of an exponential pulse s(t) = A·exp(−αt) as the time interval during which the signal value decreases by a factor of 10. Then A·exp(−α·τeff) = 0.1·A, whence τeff = ln 10/α ≈ 2.3/α.
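A quick numeric check of this definition (a sketch, assuming the exponential pulse s(t) = A·exp(−αt) and an illustrative decay constant):

```python
import math

# Effective duration of an exponential pulse s(t) = A*exp(-alpha*t):
# the time after which the signal falls to 1/10 of its initial value.
# A*exp(-alpha*tau_eff) = 0.1*A  =>  tau_eff = ln(10)/alpha
alpha = 2.0e3                        # decay constant, 1/s (assumed value)
tau_eff = math.log(10) / alpha
print(f"tau_eff = {tau_eff * 1e3:.3f} ms")   # ~1.151 ms for this alpha
```

At the moment t = tau_eff the pulse value is exactly one tenth of its initial value, which matches the definition above.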

Energy characteristics of the signal. Instantaneous power is the signal power at a resistance of 1 Ohm:

p(t) = s²(t).

For a non-periodic signal, the concept of energy at a resistance of 1 Ohm is introduced:

E = ∫ s²(t) dt, taken over −∞ < t < ∞.

For a periodic signal, the concept of average power is introduced:

Pavg = (1/T) ∫ s²(t) dt, taken over one period T.

The dynamic range of a signal is defined as the ratio of the maximum power P(t) to the minimum power P(t) that still ensures the given transmission quality, usually expressed in dB:

D = 10 lg (Pmax/Pmin).

The calm speech of a speaker has a dynamic range of approximately 25...30 dB; for a symphony orchestra it reaches 90 dB. The choice of the value Pmin is related to the level of interference: Pmin must exceed the interference power.
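The dynamic range figures quoted above can be checked directly from the decibel formula D = 10 lg(Pmax/Pmin) (a sketch; the power ratios are illustrative assumptions):

```python
import math

# Dynamic range in dB: D = 10*lg(Pmax/Pmin).
def dynamic_range_db(p_max, p_min):
    return 10 * math.log10(p_max / p_min)

# A 1000:1 power ratio gives 30 dB, matching the quoted 25...30 dB for speech;
# a 10^9:1 ratio gives the 90 dB quoted for a symphony orchestra.
print(dynamic_range_db(1.0, 1.0e-3))   # 30.0
print(dynamic_range_db(1.0, 1.0e-9))   # 90.0
```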

A signal is a physical process that represents a message. In technical systems, electrical signals are most often used. Signals are usually functions of time.

1. Signal classification

Signals can be classified according to various criteria:

1. Continuous (analog) - signals described by continuous functions of time, i.e. taking a continuous set of values on the interval of definition. Discrete - signals described by discrete functions of time, i.e. taking a finite set of values on the interval of definition.

2. Deterministic - signals described by deterministic functions of time, i.e. whose values are determined at any moment in time. Random - signals described by random functions of time, i.e. whose value at any moment is a random variable. Random processes (RP) can be classified into stationary and non-stationary, ergodic and non-ergodic, as well as Gaussian, Markov, etc.

3. Periodic - signals whose values repeat at intervals equal to the period:

x(t) = x(t + nT), where n = 1, 2, …, ∞; T is the period.

4. Causal - signals that have a beginning in time.

5. Finite - signals of finite duration and equal to zero outside the detection interval.

6. Coherent - signals that coincide at all points of the definition interval.

7. Orthogonal - signals that are the opposite of coherent (their scalar product is zero).

2. Signal characteristics

1. Signal duration (transmission time) Tc - the time interval during which the signal exists.

2. Spectrum width Fc - the range of frequencies within which the main signal power is concentrated.

3. Signal base - the product of the signal spectrum width and its duration.

4. Dynamic range Dc - the logarithm of the ratio of the maximum signal power Pmax to the minimum power Pmin (the minimum distinguishable above the noise level):

Dc = log (Pmax/Pmin).

In expressions where logarithms with any base can be used, the base of the logarithm is not indicated. Typically, the base of the logarithm determines the unit of measurement (for example, decimal - [bel]; natural - [neper]).

5. Signal volume is determined by the relation V c = T c F c D c .
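The signal volume relation Vc = Tc·Fc·Dc can be sketched numerically (the parameter values below are illustrative assumptions, roughly in the spirit of a telephone channel):

```python
# Signal volume Vc = Tc * Fc * Dc
# (duration x spectrum width x dynamic range).
T_c = 60.0       # s, signal duration (assumed)
F_c = 3100.0     # Hz, spectrum width, e.g. the 300...3400 Hz band (assumed)
D_c = 30.0       # dynamic range in logarithmic units (assumed)
V_c = T_c * F_c * D_c
print(V_c)       # 5580000.0
```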

6. Energy characteristics: instantaneous power P(t); average power Pavg; and energy E. These characteristics are determined by the relations:

P(t) = x²(t);  Pavg = (1/T) ∫ x²(t) dt;  E = ∫ x²(t) dt, (1)

where both integrals are taken from tmin to tmax and T = tmax − tmin.
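The relations (1) can be approximated numerically on a sampling grid; a minimal sketch for a unit-amplitude cosine over one period (the test signal and grid size are assumptions):

```python
import math

# Numerical estimate of average power Pavg and energy E of x(t) = cos(2*pi*f*t)
# over one period T, via a Riemann sum approximation of the integrals in (1).
f, T = 50.0, 0.02            # one period of a 50 Hz cosine (assumed)
n = 100000                   # grid points
dt = T / n
samples = [math.cos(2 * math.pi * f * i * dt) for i in range(n)]
energy = sum(x * x for x in samples) * dt    # E = integral of x^2(t) dt
p_avg = energy / T                           # Pavg = E / T
print(p_avg)                                 # ~0.5, the average power of a unit cosine
```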

3. Mathematical models of random signals

A deterministic message, i.e. one known in advance, does not carry information, since the recipient knows beforehand what the transmitted signal will be. Therefore, information-bearing signals are statistical (random) in nature.

A random (stochastic, probabilistic) process is a process that is described by random functions of time.

A random process X(t) can be represented by an ensemble of non-random functions of time x_i(t), called realizations or sample functions (see Fig. 1).


Fig. 1. Realizations of the random process X(t)

A complete statistical characteristic of a random process is the n-dimensional distribution function F_n(x1, x2, …, xn; t1, t2, …, tn) or the probability density f_n(x1, x2, …, xn; t1, t2, …, tn).

The use of multidimensional laws involves certain difficulties, so one is often limited to the one-dimensional law f_1(x, t), which characterizes the statistical properties of the random process at individual moments in time (called sections of the random process), or to the two-dimensional law f_2(x1, x2; t1, t2), which characterizes not only the statistical properties of individual sections but also their statistical relationship.

Distribution laws are exhaustive characteristics of a random process, but random processes can be characterized quite fully using the so-called numerical characteristics (initial, central and mixed moments). The following characteristics are most often used: the mathematical expectation (first-order initial moment)

m_x(t) = M[X(t)] = ∫ x f_1(x, t) dx; (2)

the mean square (second-order initial moment)

M[X²(t)] = ∫ x² f_1(x, t) dx; (3)

the dispersion (second-order central moment)

D_x(t) = M[(X(t) − m_x(t))²] = ∫ (x − m_x(t))² f_1(x, t) dx; (4)

and the correlation function, which equals the correlation moment of the corresponding sections of the random process

R_x(t1, t2) = M[(X(t1) − m_x(t1))(X(t2) − m_x(t2))]. (5)

In this case, the following relation is valid:

R_x(t, t) = D_x(t). (6)
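The moment characteristics (2)-(5) can be estimated from an ensemble of realizations; a sketch with a synthetic Gaussian ensemble, assuming relation (6) in the standard form R(t1, t1) = D(t1):

```python
import random

# Sample estimates of expectation, dispersion and the correlation function
# at one section t1, from an ensemble of realizations (synthetic data).
random.seed(1)
ensemble = [[random.gauss(0.0, 1.0) for _ in range(2)] for _ in range(200000)]

m1 = sum(x[0] for x in ensemble) / len(ensemble)               # expectation (2)
d1 = sum((x[0] - m1) ** 2 for x in ensemble) / len(ensemble)   # dispersion (4)
# Correlation of the section with itself, R(t1, t1) — relation (6) says
# this must equal the dispersion D(t1).
r11 = sum((x[0] - m1) * (x[0] - m1) for x in ensemble) / len(ensemble)
print(abs(r11 - d1) < 1e-12)   # True
```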

Stationary processes are processes whose numerical characteristics do not depend on time.

Ergodic processes are processes for which time averaging over a single realization coincides with averaging over the ensemble.

Gaussian processes are processes with a normal distribution law:

f_1(x) = (1/(σ_x √(2π))) exp(−(x − m_x)²/(2σ_x²)). (7)

This law plays an extremely important role in the theory of signal transmission, since most interference is normal.

According to the central limit theorem, the sum of a large number of independent random processes tends to a Gaussian process, which is why most random processes encountered in practice are close to Gaussian.

A Markov process is a random process in which the probability of each subsequent value is determined only by one previous value.

4. Forms of analytical description of signals

Signals can be presented in the time, operator or frequency domain, the connection between which is determined using the Fourier and Laplace transforms (see Fig. 2).

Laplace transform:

L: X(p) = ∫ x(t) e^(−pt) dt, taken over 0 ≤ t < ∞;  L⁻¹: x(t) = (1/2πj) ∫ X(p) e^(pt) dp. (8)

Fourier transforms:

F: X(jω) = ∫ x(t) e^(−jωt) dt, taken over −∞ < t < ∞;  F⁻¹: x(t) = (1/2π) ∫ X(jω) e^(jωt) dω. (9)

Fig. 2. Signal representation domains

In this case, various forms of signal representation can be used in the form of functions, vectors, matrices, geometric, etc.

When describing random processes in the time domain, the so-called correlation theory of random processes is used, and when describing in the frequency domain, the spectral theory of random processes is used.

Taking into account the evenness of the functions R_x(τ) and S_x(ω), and in accordance with Euler's formulas

e^(±jωτ) = cos ωτ ± j sin ωτ, (10)

we can write expressions for the correlation function R_x(τ) and the energy spectrum (spectral density) S_x(ω) of a random process, which are related by the Fourier transform, or the Wiener-Khinchin formulas:

S_x(ω) = ∫ R_x(τ) cos ωτ dτ; (11)  R_x(τ) = (1/2π) ∫ S_x(ω) cos ωτ dω. (12)

5. Geometric representation of signals and their characteristics

Any n numbers can be represented as a point (vector) in n-dimensional space at a distance D from the origin, where

D = √(x1² + x2² + … + xn²). (13)

A signal of duration Tc and spectrum width Fc is, in accordance with Kotelnikov's theorem, determined by N samples, where N = 2FcTc.

This signal can be represented by a point in n-dimensional space or by a vector connecting this point to the origin.

The length of this vector (its norm) is:

‖X‖ = √(x1² + x2² + … + xN²), (14)

where x_i = x(iΔt) is the signal value at the moment t = iΔt.
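A sketch of this sampling representation: the sample count N = 2·Fc·Tc from Kotelnikov's theorem and the vector norm (14), for an assumed test tone and assumed Fc, Tc values:

```python
import math

# A signal of duration Tc and spectrum width Fc is represented by
# N = 2*Fc*Tc samples taken at intervals dt = 1/(2*Fc); the norm of the
# resulting vector is the root of the sum of squared samples, as in (14).
F_c, T_c = 3400.0, 0.01                  # Hz, s (assumed values)
N = int(2 * F_c * T_c)                   # number of samples: 68
dt = 1 / (2 * F_c)                       # sampling interval
x = [math.sin(2 * math.pi * 1000 * i * dt) for i in range(N)]  # 1 kHz tone
norm = math.sqrt(sum(v * v for v in x))  # vector length per (14)
print(N, round(norm, 3))
```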

Suppose X is the transmitted message and Y the received one; both can be represented by vectors (Fig. 3).

Fig. 3. Geometric representation of signals

Let us establish the connection between the geometric and physical representations of signals. For the angle γ between the vectors X and Y one can write:

cos γ = cos(α1 − α2) = cos α1 cos α2 + sin α1 sin α2 = (X, Y)/(‖X‖·‖Y‖),

i.e. the cosine of the angle equals the normalized scalar product of the vectors.

Signal characteristics

Generalized block diagram of a telecommunications system

Classification of converters

Ways to convert a message into a signal and back

Sound-to-signal converters

Still-image-to-signal converters

Moving-image-to-signal converters

Harmonic Signal Characteristics. The signals we use in telecommunications networks, whether analog or digital, exist in the form of electrical voltage and current. The magnitude of such voltage or current changes over time, and this change carries the information. The simplest signal is one that varies according to the cosine law; it is called a cosine or harmonic signal.

We can consider any telecommunication signal as a combination of cosine waves with different amplitudes and frequencies. Frequency is determined by the number of cycles or complete oscillations per second. For example, we hear fluctuations in air pressure as sound. We are able to hear frequencies ranging from approximately 20 Hz to 15 kHz, where 1 Hz (hertz) represents 1 cycle per second. We experience these vibrations as sounds of low and high tones.

The AC voltage example is much more important. The alternating voltage periodically changes its direction and magnitude, several tens of times per second. The complete voltage fluctuation is known as a cycle, and the frequency of voltage fluctuations is defined as the number of cycles per second. If the voltage has 1,000 complete oscillations per second, then the frequency is 1,000 Hz or 1 kHz.

Fig. 4.3 shows, in the form of an arrow, a wire frame rotating in a constant magnetic field. The magnetic flux through the frame is proportional to the sine of the angle between the plane of the frame and the direction of the field. Since the flux changes, a voltage is induced between the ends of the frame whose magnitude varies in time according to the cosine law:

v(t) = V cos(ωt − φ) = V cos(2πft − φ),

where:

(2πft − φ) is the oscillation phase in radians;

f is the frequency, equal to the number of complete oscillations (cycles) per second and measured in Hz; it characterizes the rapidity of the process;

ω = 2πf is the angular frequency, measured in radians per second;

t is time, measured in seconds;

φ is the initial phase of the oscillation at the moment t = 0; it characterizes the delay of the wave when passing through the network. Indeed, let the initial phase of the oscillation be zero at the input of the network and φ at the output. The output oscillation can then be represented as

v(t) = V cos(ωt − φ) = V cos(ω(t − τ)),

where τ = φ/ω plays the role of the delay time.

The period T represents the time of one cycle, i.e. time of complete oscillation:

T= 1/f and f= 1/T

The maximum value of the oscillation is called its amplitude. The square of this quantity serves as an energy characteristic of the oscillation.

An oscillation propagating in space is called a wave. The wavelength is the distance a wave travels in one cycle (one period):

λ = c/f = cT,

where c is the wave propagation speed. The speed of a sound wave in air is approximately 346 m/s; for light and radio waves c = 300,000 km/s.
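The relation λ = c/f can be checked with the propagation speeds quoted above (the chosen frequencies are illustrative):

```python
# Wavelength: lambda = c / f = c * T.
def wavelength(c, f):
    return c / f

print(wavelength(346.0, 1000.0))    # ~0.346 m: a 1 kHz sound wave in air
print(wavelength(3.0e8, 1.0e8))     # 3.0 m: a 100 MHz radio wave
```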

Fig. 4.3 Cosine oscillation and its parameters

Frequency ranges in telecommunications. The information signal is usually low frequency, but we can use a high frequency signal called a carrier wave to transport it. To do this, you need to change the amplitude, frequency or initial phase of the carrier oscillation according to the law of the information signal. This process is called modulation. With modulation, telecommunications signals can be placed in a wide variety of frequency ranges.

Fig. 4.4 shows the frequency ranges, the associated media for the propagation of telecommunication signals, the methods of their transmission, and their applications.

Transmission speed is determined by the rate at which digital signals are transmitted over a network. Generally speaking, the transmission rate r is measured in bits per second (bps).

A bit is a minimal message indicating the choice of one of two values: "0" or "1". Eight bits make up 1 byte, which can encode one value of a digital signal. Transmitting a signal at 2 bps over a network typically requires 1 Hz of bandwidth.

Signal spectrum. Real telecommunication signals are complex, but any of them can be represented as a combination of harmonic components (harmonics). The set of frequencies of the harmonic components corresponding to a signal is usually called the spectrum of this signal. The difference between the maximum and minimum frequencies of the spectrum is called the spectrum width (Hz) of the signal. The more the signal shape differs from a sinusoid, the more components the signal contains and the wider its spectrum. The signal spectrum is one of the most important characteristics of analog signals, and it is also the most important factor limiting their transmission speed.

In telecommunications technology the signal spectrum is reduced, because the equipment has a limited frequency bandwidth. The spectrum is reduced within the limits of permissible signal distortion. For example, in telephone communication it is required that speech be intelligible and that subscribers can recognize each other by voice; to fulfill these conditions it is sufficient to transmit the speech signal in the frequency range from 300 to 3400 Hz. The spectrum width of a telegraph signal depends on its transmission speed and is usually taken equal to F ≈ 1.5υ, where υ is the transmission (telegraphy) speed in Baud, i.e. the number of characters transmitted per second. Thus, for teletype transmission υ = 50 Baud and F = 75 Hz.
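The rule F ≈ 1.5υ can be sketched directly; the teletype case from the text serves as a check:

```python
# Spectrum width of a telegraph signal: F = 1.5 * v,
# where v is the transmission (telegraphy) speed in Baud.
def telegraph_bandwidth(baud):
    return 1.5 * baud

print(telegraph_bandwidth(50))   # 75.0 Hz, the teletype example (v = 50 Baud)
```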

Figure 4.4 Frequency bands used in telecommunications

Parameter units. In communications technology, along with absolute units for measuring the parameters of electrical signals (power, voltage and current), relative units are widely used.

The transmission level of a signal at some point in a channel or path is a logarithmic transformation of the ratio of an energy parameter S (power, voltage or current) to a reference value S0 of the same parameter. The conversion rule is determined by the formula

p = m log_a (S/S0),

where m is a scale factor, a is the base of the logarithm, and S0 is the reference value of the parameter.

Transmission levels are measured in decibels if the following relationships apply:

p = 10 lg (P/P0) for power levels, in dB (power decibels);

p = 20 lg (U/U0) for voltage levels, in dB (voltage decibels).

A transmission level is called absolute if P0 = 1 mW. If the level is specified at a resistance R0, then for the given values of power and resistance it is easy to obtain the corresponding reference voltage U0 across that resistance:

U0 = √(P0·R0).

At R0 = 600 Ohm, the rounded value U0 = 0.775 V is used in practical calculations.
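The reference voltage quoted here follows from P0 = 1 mW and R0 = 600 Ohm via U0 = √(P0·R0):

```python
import math

# Reference voltage for the absolute level: U0 = sqrt(P0 * R0).
P0 = 1.0e-3     # W  (1 mW)
R0 = 600.0      # Ohm
U0 = math.sqrt(P0 * R0)
print(round(U0, 4))   # 0.7746 -> rounded to 0.775 V in practice
```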

Gain, attenuation and decibel power measurement. Over a long path in a telecommunications network the signal weakens and is amplified again and again. Signal power is tightly controlled so that it is high enough relative to the noise, yet low enough to avoid overloading the network and the associated signal distortion. A decrease in signal level is described by the term "attenuation" in power; restoration of the signal is described by the term "gain" in power. Attenuation by a factor of 10 is thus the same as amplification by a factor of 0.1.

Alexander Bell was the first to propose using a logarithmic scale to measure power levels. The scale proved successful, and power gain came to be expressed in decibels (dB). The gain in decibels is determined by the formula

G = 10 lg (Pout/Pin).

If the output power is greater than the input power, there is amplification and G is positive; otherwise G is negative. If the output and input powers are equal, there is neither gain nor attenuation and G equals zero.

Fig. 4.4 shows an element of a telecommunications network with a specific input and output. The formulas given determine the amplification and attenuation of signal power during transmission. In a telecommunications network we usually have many (often more than 100) elements arranged in a chain.

Fig. 4.4. Gain and attenuation calculations for network sections

If you need to calculate the overall gain or attenuation, you multiply the corresponding coefficients of the individual elements. If the coefficient of each element is expressed in decibels, the decibel values are simply added, as shown in the figure. Decibels allow small positive or negative values to be added instead of multiplied. For example, a twofold power gain corresponds to approximately 3 dB, a tenfold gain to 10 dB, and so on.
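A sketch of a chain of network elements (the element gains below are illustrative assumptions): decibel values add, while the underlying power ratios multiply.

```python
# Chained network elements: gains in dB add, power ratios multiply.
gains_db = [-3.0, 20.0, -13.0, 10.0]   # attenuations and amplifications (assumed)
total_db = sum(gains_db)               # overall gain in dB
total_ratio = 10 ** (total_db / 10)    # the same result as a power ratio
print(total_db, round(total_ratio, 2)) # 14.0 25.12
```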

Power levels. Power levels in telecommunications networks vary widely, from picowatts to tens of watts, corresponding to a range of 1 to 1,000,000,000. Power measurements based on decibels make it easy to express this wide range. The absolute power level is often expressed in dBm, comparing the measured power with 1 mW. The power level in dBm is given by

p = 10 lg (P/1 mW).

If we need to determine the power in milliwatts, we can easily do so from the known value of p: P = 1 mW · 10^(p/10). The absolute level in dBm is often used instead of expressing power in watts, for example when determining the output power from known values of the input power and the gain: pout = pin + G.

Examples of such calculations for a radio link and a section of a fiber-optic link are shown in Fig. 4.5.

Fig. 4.5. Calculations of output power levels for a radio link and a fiber-optic link section

The transmission speed of measurement information determines the efficiency of the communication system included in the measurement system.

A simplified diagram of a measuring system is shown in Fig. 175.

Typically, the primary measuring transducer converts the measured quantity into an electrical signal X(t), which must be sent over a communication channel. Depending on what the communication channel is (electrical wire or cable, optical fiber, water, air or airless space), the carriers of measurement information can be electric current, a light beam, sound vibrations, radio waves, etc. Selecting the carrier is the first step in matching the signal with the channel.

The generalized characteristics of a communication channel are the time Tk during which it is available for transmitting measurement information, the bandwidth Fk, and the dynamic range Nk, understood as the ratio of the permissible power in the channel to the power of the interference inevitably present in the channel, expressed in decibels. The product

Vk = Tk·Fk·Nk

is called the channel capacity.

Similar generalized characteristics of the signal are the time Tc during which the measurement information is transmitted, the spectrum width Fc, and the dynamic range Nc, the ratio of the highest signal power to the lowest power that must still be distinguished from zero for a given transmission quality, expressed in decibels. The product

Vc = Tc·Fc·Nc

is called the signal volume.

The geometric interpretation of the introduced concepts is shown in Fig. 176.

The condition for matching a signal with a channel, ensuring the transmission of measurement information without loss and distortion in the presence of interference, is the fulfillment of the inequality

Vc ≤ Vk,

when the signal volume completely "fits" into the channel capacity. However, the matching condition can be satisfied even when some (but not all) of the inequalities Tc ≤ Tk, Fc ≤ Fk, Nc ≤ Nk are not satisfied. In this case there is a need for so-called exchange operations, in which the duration of the signal is, as it were, "exchanged" for the width of its spectrum, or the width of the spectrum for the dynamic range of the signal, and so on.

Example 82. A signal with a spectrum width of 3 kHz must be transmitted over a channel whose bandwidth is 300 Hz. This can be done by first recording the signal on magnetic tape and playing it back during transmission at a speed 10 times lower than the recording speed. All frequencies of the original signal then decrease by a factor of 10, and the transmission time increases by the same factor. The received signal must also be recorded on magnetic tape; playing it back at 10 times the speed reproduces the original signal.

Similarly, it is possible to transmit a long-lasting signal in a short time if the channel bandwidth is wider than the signal spectrum.

In channels with additive uncorrelated interference the received power is

P = Pc + Pp,

where Pc and Pp are the signal and interference powers, respectively. When transmitting electrical signals, the ratio

n = √((Pc + Pp)/Pp) = √(1 + Pc/Pp)

can be considered as the number of signal quantization levels that ensure error-free transmission: with such a quantization step, a signal of one level cannot be mistaken for a signal of an adjacent level under the influence of the interference. If we now represent the signal as a set of instantaneous values taken, in accordance with V.A. Kotelnikov's theorem, at intervals

Δt = 1/(2Fc),

then at each of these moments the signal will correspond to one of the levels, i.e. it may take one of n equally probable values, which corresponds to the entropy

H = log n = log √(1 + Pc/Pp).

After the receiving device registers one of the levels at a fixed moment of time, the (a posteriori) entropy becomes 0, and the quantum of information (the amount of information transmitted at one discrete moment of time) is

i = H = log √(1 + Pc/Pp).

Since the entire signal is transmitted by N = 2FcTc quanta, the amount of information contained in it,

I = N·i = 2FcTc log √(1 + Pc/Pp) = FcTc log(1 + Pc/Pp),

is directly proportional to the volume of the signal. To transmit this information in a time Tk, it is necessary to ensure the transmission speed

v = I/Tk.

If the signal and the channel are matched and Tc = Tk, Fc = Fk, then

C = Fk log2(1 + Pc/Pp).

This is K. Shannon's formula for the maximum channel capacity. It sets the maximum speed of error-free information transfer. At Tc < Tk the speed may be lower, and at Tc > Tk errors are possible.
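Shannon's formula, assumed in its standard form C = Fk·log2(1 + Pc/Pp), can be evaluated for an illustrative telephone-like channel:

```python
import math

# Shannon's formula for the maximum channel capacity:
# C = Fk * log2(1 + Pc/Pp), in bits per second.
def capacity(bandwidth_hz, snr):
    return bandwidth_hz * math.log2(1 + snr)

# A telephone-like channel: 3100 Hz band, SNR = 1000 (30 dB) — assumed values.
print(round(capacity(3100.0, 1000.0)))   # ~30898 bit/s
```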

The dependence of the maximum channel capacity on the signal-to-noise ratio for several values of the bandwidth is shown in Fig. 177. The nature of this dependence differs for large and small ratios Pc/Pp. If Pc/Pp >> 1, then

C ≈ Fk log2(Pc/Pp),

i.e. the dependence of the channel capacity on the signal-to-noise ratio is logarithmic.

If Pc/Pp << 1, then although Pp >> Pc, error-free transmission is still possible, but only at a very low speed. In this case the expansion

log2(1 + x) = (x − x²/2 + x³/3 − …) log2 e

is valid, in which we can restrict ourselves to the first term. Taking into account that log2 e = 1.443, we get

C ≈ 1.443 Fk (Pc/Pp).

Thus, for small signal-to-noise ratios the dependence of the throughput on the signal-to-noise ratio is linear.
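The linear approximation for small signal-to-noise ratios can be checked against the exact formula (the bandwidth and SNR values are assumed for illustration):

```python
import math

# For q = Pc/Pp << 1: C = Fk*log2(1+q) ~ 1.443*Fk*q,
# since log2(1+q) ~ q*log2(e) for small q.
Fk, q = 1000.0, 0.01                      # Hz and SNR (assumed values)
exact = Fk * math.log2(1 + q)
approx = 1.443 * Fk * q
print(round(exact, 2), round(approx, 2))  # the two values nearly coincide
```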

The dependence of the throughput on the channel bandwidth in real systems is more complex than simply linear, because the power of the noise interference at the input of the receiving device depends on the channel bandwidth. If the interference spectrum is uniform, then

Pp = G·Fk,

where G is the spectral power density of the interference, i.e. the interference power per unit frequency band. Then

C = Fk log2(1 + Pc/(G·Fk)).

The signal power can be expressed in terms of the same spectral density by introducing the equivalent frequency band Fe:

Pc = G·Fe.

Dividing both sides of the expression for C by Fe, we get:

C/Fe = (Fk/Fe) log2(1 + Fe/Fk).


The nature of this dependence is shown in Fig. 178. It is important to note that as the channel bandwidth increases, the capacity does not grow without limit but tends to a certain bound. This is explained by the growth of the noise power in the channel and the deterioration of the signal-to-noise ratio at the input of the receiving device. The limit to which C tends as Fk increases can be found using the same series expansion of the logarithmic function for large Fk:

C → 1.443 Fe = 1.443 Pc/G as Fk → ∞.

Thus, the maximum value to which the channel capacity tends as its bandwidth increases is proportional to the ratio of the signal power to the interference power per unit frequency band. This leads to a practical conclusion: to increase the maximum channel capacity, one should increase the power of the transmitting device and use a receiving device with the minimum noise level at its input.
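A numeric sketch of this saturation effect, assuming the limit C∞ = 1.443·Pc/G discussed above (the power and noise-density values are illustrative):

```python
import math

# C(Fk) = Fk*log2(1 + Pc/(G*Fk)) grows with bandwidth but tends to the
# limit c_inf = 1.443 * Pc / G as Fk -> infinity.
Pc, G = 1.0e-3, 1.0e-9    # signal power (W) and noise spectral density (W/Hz), assumed
c_inf = 1.443 * Pc / G
for Fk in (1.0e5, 1.0e6, 1.0e7):
    c = Fk * math.log2(1 + Pc / (G * Fk))
    print(f"Fk = {Fk:.0e} Hz: C = {c:.0f} bit/s")
print(f"limit: {c_inf:.0f} bit/s")
```

As the bandwidth grows tenfold at each step, the capacity increases ever more slowly and approaches the limit from below.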

Along with efficiency, the second most important indicator of the quality of a communication system is its noise immunity. When transmitting measurement information in analog form, it is assessed by the deviation of the received signal from the transmitted one. The noise immunity of discrete communication channels is characterized by the probability of error P_err (the ratio of the number of erroneously received characters to the total number of transmitted ones); the noise immunity index æ is related to it by the dependence

æ = −lg P_err.

If, for example, P_err = 10⁻⁵, then æ = 5; if P_err = 10⁻⁶, then æ = 6.

An effective way of increasing noise immunity when transmitting measurement information in analog form under uncorrelated interference is accumulation. The signal is transmitted several times, and under coherent addition of all received realizations its values at the corresponding moments of time are summed, while the interference at these moments, being random, is partially compensated. As a result, the signal-to-noise ratio grows and the noise immunity increases. The idea of accumulation is implemented similarly when transmitting measurement information over a discrete channel.

Example 83. Let the nature of the interference be such that it can be mistaken for a signal (i.e. 0 can be mistaken for 1). When transmitted in the Baudot code, the combination 01001 is received three times. If the adder is a device that outputs 0 whenever at least one zero appears in a column, then the combination will be accepted correctly provided that each zero was received correctly at least once.

If the probability of independent errors in a single transmission is denoted by P_err, then after N repeated transmissions it becomes P_err^N. Therefore, the noise immunity after N retransmissions is

æ_N = −lg P_err^N = N·æ₁,

where æ₁ is the noise immunity of a single transmission. Thus, under accumulation the noise immunity grows in proportion to the number of repetitions.
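A simulation sketch of accumulation for an analog signal in uncorrelated Gaussian noise (the test tone, noise level and repetition count are assumptions): coherent addition of N realizations grows the signal amplitude N-fold (power N² times) while the noise power grows only N times, so the signal-to-noise power ratio improves roughly N times.

```python
import math
import random

# Accumulation: the same signal is transmitted several times and the received
# realizations are added coherently; the SNR grows ~N times.
random.seed(7)
N, L = 16, 20000
signal = [math.sin(2 * math.pi * i / 100) for i in range(L)]

def snr(sig, noise_sigma, repeats):
    acc = [0.0] * L
    for _ in range(repeats):
        for i in range(L):
            acc[i] += sig[i] + random.gauss(0.0, noise_sigma)
    p_sig = sum((repeats * s) ** 2 for s in sig) / L          # accumulated signal power
    p_noise = sum((a - repeats * s) ** 2
                  for a, s in zip(acc, sig)) / L              # residual noise power
    return p_sig / p_noise

print(snr(signal, 1.0, 1), snr(signal, 1.0, N))  # the second is ~16x larger
```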

One way to increase noise immunity is also the application of error-correcting codes.

Increased noise immunity is achieved by increasing redundancy or, more generally, by increasing the signal volume for the same amount of measurement information, while maintaining the condition of matching the signal with the channel. If this condition is met and Tc = Tk, Nc = Nk, then transmission of measurement information using amplitude-modulated high-frequency oscillations is more noise-resistant than direct signal transmission, because, for example with tone modulation, it occupies twice the frequency band. In turn, the use of deep frequency or phase modulation, due to spectrum expansion, further increases the noise immunity of the communication system. In this sense it is promising to use not simple signals, for which

FcTc ≈ 1,

but complex signals, for which

FcTc >> 1.

These include pulse signals with high-frequency filling and frequency modulation or phase-shift keying of the carrier oscillations, etc.

The requirements for efficiency and noise immunity of communication systems are contradictory: they encourage, on the one hand, reducing and, on the other hand, increasing the signal volume, without violating the conditions of its matching with the channel and without changing the amount of information it contains. Satisfying these requirements involves the synthesis of optimal technical solutions.

A signal can be characterized by various parameters. Generally speaking, there are very many such parameters, but for problems that have to be solved in practice only a small number of them are significant. For example, when choosing a device to monitor a technological process, knowledge of the signal dispersion may be required; if the signal is used for control, its power is essential, and so on. Three main signal parameters are essential for transmitting information over a channel. The first important parameter is the signal transmission time T_x. The second characteristic that has to be taken into account is the power P_c of the signal transmitted over a channel with a certain interference level P_z. The higher the value of P_c compared with P_z, the lower the probability of erroneous reception. Thus, the ratio of interest is P_c/P_z; it is convenient to use the logarithm of this ratio, called the excess of signal over interference:

L_x = log (P_c/P_z).

The third important parameter is the frequency spectrum F_x. These three parameters allow any signal to be represented in three-dimensional space with coordinates L, T, F as a parallelepiped with volume T_x·F_x·L_x. This product is called the volume of the signal and is denoted V_x:

V_x = T_x·F_x·L_x.

An information channel can likewise be characterized by three parameters: the time of use of the channel T_k, the bandwidth of frequencies transmitted by the channel F_k, and the dynamic range of the channel D_k, which characterizes its ability to transmit different signal levels. The quantity

V_k = T_k·F_k·D_k

is called the channel capacity.

Undistorted transmission of signals is possible only if the signal volume "fits" into the channel capacity.

Hence the general condition for matching a signal with an information transmission channel is determined by the relation

V_x ≤ V_k.

However, this relation expresses a necessary but not a sufficient condition for matching the signal with the channel. A sufficient condition is agreement in all parameters:

T_x ≤ T_k;  F_x ≤ F_k;  D_x ≤ D_k.

For an information channel, the following concepts are used: information input speed, information transmission speed and channel capacity.

The information input rate (information flow) I(X) is the average amount of information entering the information channel from the message source per unit time. This characteristic of the message source is determined only by the statistical properties of the messages.

The information transmission rate I(Z,Y) is the average amount of information transmitted over the channel per unit time. It depends on the statistical properties of the transmitted signal and on the properties of the channel.

The channel capacity C is the highest theoretically achievable information transmission rate for the given channel. It is a characteristic of the channel and does not depend on the signal statistics.

For the most effective use of an information channel, measures must be taken to bring the information transmission rate as close as possible to the channel capacity. At the same time, the information input rate must not exceed the channel capacity, otherwise not all the information will be transmitted over the channel.

This is the main condition for dynamic coordination of the message source and the information channel.

One of the main issues in the theory of information transmission is determining the dependence of information transmission speed and capacity on channel parameters and characteristics of signals and interference. These questions were first deeply studied by K. Shannon.
