Forget about discrete sound cards. Integrated audio is enough for everyone

A sound wave consists of alternating regions of high and low pressure that our hearing organs perceive. These waves can travel through solid, liquid, and gaseous media, which means they pass easily through the human body. In theory, if the pressure of a sound wave is high enough, it could kill a person.

Every sound wave has its own specific frequency. The human ear can hear sound waves with frequencies from 20 to 20,000 Hz. Sound intensity level is expressed in decibels (dB). For example, a jackhammer produces about 120 dB, and a person standing next to it experiences a far from pleasant roar in the ears. But if we sit in front of a speaker reproducing a 19 Hz tone at 120 dB, we will hear practically nothing, although the sound waves and vibrations will still affect us, and after a while strange visions and phantoms may begin to appear. The reason is that 19 Hz is close to the resonant frequency of the human eyeball.

This is interesting: scientists discovered that 19 Hz is the resonant frequency of the eyeball under rather curious circumstances. American astronauts complained of periodic visions during ascent to orbit. Detailed study of the phenomenon showed that the vibration frequency of the rocket's first-stage engines coincides with the resonant frequency of the human eyeball; at sufficient sound intensity, strange visions arise.

Sound with a frequency below 20 Hz is called infrasound. Infrasound can be extremely dangerous to living beings, since organs in human and animal bodies have resonant frequencies in the infrasound range. Superimposing certain infrasound frequencies on one another at sufficient intensity can disrupt the functioning of the heart, vision, nervous system, or brain. For example, exposing rats to 8 Hz infrasound at 120 dB causes brain damage [wiki]. At 180 dB and the same 8 Hz, a person fares badly: breathing slows and becomes intermittent, and prolonged exposure to such sound waves leads to death.

This is interesting: the record for the loudest car sound system belongs to two engineers from Brazil, Richard Clarke and David Navone, who managed to install a subwoofer with a theoretical sound pressure level of 180 dB in a car. Needless to say, such a system should never be used at full power.

During testing, the subwoofer, whose cone was driven by electric motors through a crankshaft, reached a sound level of 168 dB and then broke down. After this incident, the system was never repaired.


The world of home entertainment is quite varied: watching movies on a good home theater system, exciting gameplay, or listening to music. Everyone finds something of their own in this area, or combines everything at once. But whatever a person's goals in organizing leisure time, all of these activities are firmly connected by one simple and familiar word: "sound." Indeed, in all of the cases above, sound leads us by the hand. The question, however, is not as simple and trivial as it seems, especially when the goal is high-quality sound in a room or any other conditions. Achieving that does not always require buying expensive hi-fi or hi-end components (although they certainly help); a good grasp of physical theory is often enough to eliminate most of the problems facing anyone who sets out to obtain high-quality sound reproduction.

Next, the theory of sound and acoustics is considered from the standpoint of physics. I will try to make it as accessible as possible to anyone who may be far from physical laws and formulas but nevertheless passionately dreams of building a perfect acoustic system. I do not claim that achieving good results in this area at home (or in a car, for example) requires knowing these theories thoroughly, but understanding the basics will help you avoid many foolish and absurd mistakes, and will let you get the maximum sound quality out of a system of any level.

General theory of sound and musical terminology

What is sound? Sound is the sensation perceived by the hearing organ, the ear (the phenomenon itself exists without the ear's participation in the process, but it is easier to understand this way), which arises when the eardrum is excited by a sound wave. The ear acts as a "receiver" of sound waves of various frequencies.
A sound wave is essentially a sequential series of compressions and rarefactions of a medium (most often air, under normal conditions) at various frequencies. Sound waves are oscillatory in nature, caused and produced by the vibration of some body. The emergence and propagation of a classical sound wave is possible in three elastic media: gaseous, liquid, and solid. When a sound wave arises in one of these media, changes inevitably occur in the medium itself, such as changes in air density or pressure, movement of air particles, and so on.

Since a sound wave is oscillatory in nature, it has such a characteristic as frequency. Frequency is measured in hertz (named after the German physicist Heinrich Rudolf Hertz) and denotes the number of oscillations per second. For example, a frequency of 20 Hz means a cycle of 20 oscillations in one second. The subjective sense of pitch also depends on frequency: the more vibrations per second, the "higher" the sound appears. A sound wave also has another crucial characteristic called the wavelength. The wavelength is the distance the wave travels during one oscillation period, that is, the speed of sound divided by the frequency. In air (roughly 340 m/s), the wavelength of the lowest audible sound at 20 Hz is about 17 meters, while the wavelength of the highest at 20,000 Hz is about 1.7 centimeters.
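
To make the relationship concrete, here is a minimal Python sketch of the formula λ = v / f; the 343 m/s figure assumes sound in air at 20 °C:

```python
# Wavelength of a sound wave: lambda = v / f
# Assumes propagation in air at 20 °C (v ≈ 343 m/s).
SPEED_OF_SOUND_AIR = 343.0  # m/s

def wavelength(frequency_hz: float, speed: float = SPEED_OF_SOUND_AIR) -> float:
    """Return the wavelength in meters for a given frequency in Hz."""
    return speed / frequency_hz

print(f"20 Hz    -> {wavelength(20):.2f} m")            # ~17.15 m
print(f"20000 Hz -> {wavelength(20000) * 100:.2f} cm")  # ~1.7 cm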

The human ear is designed to perceive waves only in a limited range of approximately 20 Hz to 20,000 Hz (depending on the individual, some hear slightly more, some less). This does not mean that sounds below or above these frequencies do not exist; they simply are not perceived by the human ear, falling outside the audible range. Sound above the audible range is called ultrasound; sound below it is called infrasound. Some animals can perceive ultrasound and infrasound, and some even use these ranges for orientation in space (bats, dolphins). If sound passes through a medium that is not in direct contact with the human hearing organ, it may not be heard at all, or may be greatly weakened.

Musical terminology includes such important designations as octave, tone, and overtone. An octave is an interval in which the frequency ratio between sounds is 1:2. An octave is usually very distinguishable by ear, and sounds within this interval can be very similar to one another. An octave can also be described as a sound that vibrates twice as fast as another sound over the same period of time. For example, 800 Hz is simply the next higher octave of 400 Hz, and 400 Hz in turn is the next octave above 200 Hz. An octave, in turn, consists of tones and overtones. The human ear perceives periodic vibrations in a harmonic sound wave of a single frequency as a musical tone. High-frequency vibrations are interpreted as high-pitched sounds, low-frequency vibrations as low-pitched ones. The human ear can clearly distinguish sounds that differ by one tone (in the range up to 4,000 Hz). Despite this, music uses a rather small number of tones. This is explained by the principle of harmonic consonance; everything is built on the principle of octaves.

Let's consider the theory of musical tones using the example of a stretched string. Depending on the tension, such a string is "tuned" to one specific frequency. When the string is acted on with some specific force that makes it vibrate, one specific tone is consistently produced, and we hear the desired tuning frequency. This sound is called the fundamental tone. In music, the frequency of the note "A" of the first octave, 440 Hz, is officially accepted as the reference tone. However, most musical instruments never reproduce a pure fundamental tone alone; it is inevitably accompanied by additional tones called overtones. Here it is appropriate to recall an important definition of musical acoustics: the concept of timbre. Timbre is the feature of musical sounds that gives instruments and voices their unique, recognizable character, even when comparing sounds of the same pitch and volume. The timbre of each musical instrument depends on how the sound energy is distributed among the overtones at the moment the sound appears.
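
The octave relationship is easy to see numerically. Here is a small sketch assuming twelve-tone equal temperament with A4 = 440 Hz (the tuning convention mentioned above); each semitone multiplies the frequency by 2^(1/12), so twelve semitones double it:

```python
# Equal-temperament note frequencies relative to A4 = 440 Hz.
# Twelve semitones (one octave) exactly double the frequency.
A4 = 440.0

def note_frequency(semitones_from_a4: int) -> float:
    """Frequency of the note n semitones above (or below) A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

print(note_frequency(12))           # 880.0   -> one octave up
print(note_frequency(-12))          # 220.0   -> one octave down
print(round(note_frequency(3), 2))  # ~523.25 -> C of the next octave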

Overtones give the fundamental tone a specific coloring by which we can easily identify and recognize a specific instrument, and clearly distinguish its sound from another instrument. Overtones come in two kinds: harmonic and non-harmonic. Harmonic overtones are, by definition, integer multiples of the fundamental frequency. If the overtones are not multiples and deviate noticeably from those values, they are called non-harmonic. In music, non-multiple overtones are rarely dealt with, so the term is usually shortened to "overtone," meaning the harmonic kind. For some instruments, such as the piano, the fundamental tone does not even have time to fully form: in a short time the sound energy of the overtones rises and then just as rapidly decays. Many instruments produce a so-called "transient" effect, in which the energy of certain overtones is highest at a certain moment, usually at the very beginning, and then changes abruptly, shifting to other overtones. The frequency range of each instrument can be considered separately and is usually limited to the fundamental frequencies that the particular instrument can produce.

Sound theory also includes the concept of noise. Noise is any sound created by a combination of mutually inconsistent sources. Everyone is familiar with the noise of tree leaves swaying in the wind, and the like.

What determines the loudness of a sound? Obviously, it depends directly on the amount of energy carried by the sound wave. To quantify loudness, there is the concept of sound intensity. Sound intensity is defined as the flow of energy passing through some area of space (for example, a square centimeter) per unit of time (for example, per second). In normal conversation, the intensity is approximately 10⁻⁹ to 10⁻¹⁰ W/cm². The human ear can perceive sounds over a rather wide range of sensitivity, while its sensitivity is not uniform across the sound spectrum. The 1,000 Hz - 4,000 Hz frequency range, which covers most of human speech, is perceived best.

Because sounds vary so greatly in intensity, it is more convenient to treat intensity as a logarithmic quantity and measure it in decibels (after the Scottish-born scientist Alexander Graham Bell). The lower threshold of hearing sensitivity of the human ear is taken as 0 dB; the upper, 120 dB, is also called the "pain threshold." The pain threshold is also not uniform for the human ear but depends on frequency: low-frequency sounds must be much more intense than high-frequency ones to cause pain. For example, at a low frequency of 31.5 Hz pain sets in at a sound intensity level of 135 dB, while at 2,000 Hz the sensation of pain appears at 112 dB. There is also the concept of sound pressure, which expands the usual explanation of sound wave propagation in air. Sound pressure is the variable excess pressure that arises in an elastic medium as a sound wave passes through it.
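
The decibel scale is just a base-10 logarithm of the intensity ratio. A minimal sketch, assuming the standard reference intensity I₀ = 10⁻¹² W/m² for the 0 dB hearing threshold:

```python
import math

# Sound intensity level in decibels: L = 10 * log10(I / I0),
# where I0 = 1e-12 W/m^2 is the standard threshold of hearing (0 dB).
I0 = 1e-12  # W/m^2

def intensity_to_db(intensity_w_m2: float) -> float:
    return 10 * math.log10(intensity_w_m2 / I0)

print(intensity_to_db(1e-12))  # 0 dB   -> hearing threshold
print(intensity_to_db(1e-6))   # 60 dB  -> ordinary conversation
print(intensity_to_db(1.0))    # 120 dB -> pain threshold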

Wave nature of sound

To better understand how a sound wave is generated, imagine a classic speaker in a pipe filled with air. If the speaker makes a sharp movement forward, the air in the immediate vicinity of the diffuser is momentarily compressed. The air then expands, pushing the region of compressed air along the pipe. This wave motion becomes sound when it reaches the hearing organ and "excites" the eardrum. When a sound wave arises in a gas, excess pressure and excess density are created and the particles move at a constant speed. It is important to remember that the medium itself does not travel along with the sound wave; only a temporary disturbance of the air masses occurs.

If we imagine a piston suspended in free space on a spring, making repeated movements "back and forth," such oscillations are called harmonic or sinusoidal (plotting the wave as a graph yields a pure sinusoid with repeated crests and troughs). If we imagine a speaker in a pipe (as in the example above) performing harmonic oscillations, then as the speaker moves "forward" the familiar compression of air occurs, and as it moves "backward" the opposite effect, rarefaction. A wave of alternating compressions and rarefactions then propagates through the pipe. The distance along the pipe between adjacent maxima or minima (points of identical phase) is called the wavelength. If the particles oscillate parallel to the direction of wave propagation, the wave is called longitudinal; if they oscillate perpendicular to it, transverse. Sound waves in gases and liquids are typically longitudinal, while in solids both types can occur. Transverse waves in solids arise from resistance to changes in shape. The main difference between the two types is that a transverse wave has the property of polarization (the oscillations occur in a particular plane), while a longitudinal wave does not.
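
Such a harmonic oscillation is described by x(t) = A·sin(2πft). A small sketch that prints the first few samples of a pure tone; the 440 Hz frequency and 44.1 kHz sample rate are arbitrary example values:

```python
import math

# Samples of a pure (harmonic) tone: x(t) = A * sin(2 * pi * f * t).
AMPLITUDE = 1.0
FREQUENCY = 440.0    # Hz, assumed example tone
SAMPLE_RATE = 44100  # samples per second, assumed example rate

for n in range(8):
    t = n / SAMPLE_RATE
    x = AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * t)
    print(f"t = {t:.6f} s  x = {x:+.4f}")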

The speed of sound

The speed of sound depends directly on the characteristics of the medium in which it propagates. It is determined by two properties of the medium: the elasticity and the density of the material. The speed of sound in solids depends directly on the type of material and its properties. In gaseous media, the speed depends on only one type of deformation of the medium: compression-rarefaction. The pressure change in a sound wave occurs without heat exchange with the surrounding particles and is called adiabatic.
The speed of sound in a gas depends mainly on temperature: it increases as the temperature rises and decreases as it falls. The speed of sound in a gas also depends on the size and mass of the gas molecules themselves: the smaller the mass and size of the particles, the greater the "conductivity" of the wave and, accordingly, the higher the speed.

In liquid and solid media, the principle of propagation and the speed of sound are similar to those of a wave in air: compression-rarefaction. But in these media, besides the same dependence on temperature, the density and the composition/structure of the medium matter quite a lot. The lower the density of the substance, the higher the speed of sound, and vice versa. The dependence on the composition of the medium is more complex and is determined in each specific case, taking into account the arrangement and interaction of the molecules/atoms.

Speed of sound in air at 20 °C: 343 m/s
Speed of sound in distilled water at 20 °C: 1481 m/s
Speed of sound in steel at 20 °C: 5000 m/s
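
The temperature dependence for air can be sketched with the ideal-gas approximation v ≈ 331.3·√(1 + T/273.15) m/s, which reproduces the 343 m/s figure above at 20 °C:

```python
import math

# Speed of sound in dry air as a function of temperature (°C):
# v ≈ 331.3 * sqrt(1 + T / 273.15) m/s  (ideal-gas approximation)
def speed_of_sound_air(temp_celsius: float) -> float:
    return 331.3 * math.sqrt(1 + temp_celsius / 273.15)

for t in (-20, 0, 20, 40):
    print(f"{t:+d} °C -> {speed_of_sound_air(t):.1f} m/s")
# +20 °C gives ~343 m/s, matching the table above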

Standing waves and interference

When a speaker creates sound waves in a confined space, the waves are inevitably reflected from the boundaries. As a result, the interference effect most often occurs: two or more sound waves superimposed on one another. Special cases of interference are the formation of (1) beats and (2) standing waves. Beats occur when waves of similar frequencies and amplitudes are added together. The picture of beats arises when two waves of similar frequencies overlap: at some moments the amplitude peaks coincide ("in phase"), at others the peaks coincide with troughs ("in antiphase"). This is how sound beats are characterized. Unlike standing waves, the phase coincidence of the peaks does not occur constantly but at certain time intervals. To the ear, this pattern of beats is quite distinct, heard as a periodic rise and fall in volume. The mechanism of this effect is extremely simple: when the peaks coincide, the volume increases; when the troughs coincide, it decreases.

Standing waves arise when two waves of the same amplitude, phase, and frequency are superimposed, with one wave moving forward and the other in the opposite direction as they "meet." In the region of space where the standing wave forms, a pattern of superposition of the two amplitudes appears, with alternating maxima (the so-called antinodes) and minima (the so-called nodes). When this happens, the frequency, the phase, and the attenuation coefficient of the wave at the point of reflection are extremely important. Unlike traveling waves, a standing wave transfers no energy, because the forward and backward waves that form it carry equal amounts of energy in opposite directions. To clearly picture a standing wave, imagine an example from home acoustics. Say we have floor-standing speakers in some confined space (a room). Have them play a track with a lot of bass and try changing the listener's position in the room. A listener who ends up at a minimum (subtraction) of a standing wave will feel that there is very little bass, while a listener at a maximum (addition) gets the opposite effect, a significant boost in the bass region. The effect is observed at all multiples of the base frequency: for example, if the base frequency is 440 Hz, then "addition" or "subtraction" will also be observed at 880 Hz, 1,760 Hz, 3,520 Hz, and so on.
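
For the room example, the frequencies where such standing waves form between two parallel walls can be estimated with the standard axial-mode formula f_n = n·v/(2L). A minimal sketch, assuming rigid parallel walls and an example room length of 5 m:

```python
# Axial standing-wave (room mode) frequencies between two parallel walls:
# f_n = n * v / (2 * L), n = 1, 2, 3, ...
# Assumes rigid walls a distance L apart and v = 343 m/s.
SPEED = 343.0  # m/s

def axial_modes(room_length_m: float, count: int = 5) -> list[float]:
    return [n * SPEED / (2 * room_length_m) for n in range(1, count + 1)]

# For a 5 m long room the first modes land at ~34, 69, 103, 137, 172 Hz:
print([round(f, 1) for f in axial_modes(5.0)])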

Resonance phenomenon

Most solid bodies have a natural resonant frequency. This effect is easy to understand with an ordinary pipe open at only one end. Imagine that a speaker is connected to the other end of the pipe and can play one constant frequency, which can also be changed later. The pipe has its own natural resonant frequency: in simple terms, the frequency at which the pipe "resonates" or produces its own sound. If the speaker's frequency (as a result of adjustment) coincides with the pipe's resonant frequency, the volume increases severalfold. This happens because the loudspeaker excites vibrations of the air column in the pipe with significant amplitude; once this "resonant frequency" is found, the addition effect occurs. The resulting phenomenon can be described as follows: in this example the pipe "helps" the speaker by resonating at a specific frequency, their efforts add up and "result" in an audibly loud effect. This phenomenon is easy to see in musical instruments, since most designs contain elements called resonators. It is not hard to guess what they serve: amplifying a certain frequency or musical tone. Examples: a guitar body with a resonator in the form of a hole matched to the body volume; the design of a flute tube (and all pipes in general); the cylindrical shape of a drum body, which is itself a resonator of a certain frequency.
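
For a pipe closed at one end (as in the speaker example above, where the driver seals one end), the textbook quarter-wave formula gives the resonances: f_k = (2k − 1)·v/(4L), odd harmonics only. A small sketch with an assumed 1 m pipe:

```python
# Resonant frequencies of a pipe closed at one end and open at the other:
# f_k = (2k - 1) * v / (4 * L), k = 1, 2, 3, ...  (odd harmonics only)
SPEED = 343.0  # m/s, sound in air at ~20 °C

def closed_pipe_resonances(length_m: float, count: int = 4) -> list[float]:
    return [(2 * k - 1) * SPEED / (4 * length_m) for k in range(1, count + 1)]

# A 1 m pipe resonates near 85.75, 257.25, 428.75, 600.25 Hz:
print(closed_pipe_resonances(1.0))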

Frequency spectrum of sound and frequency response

Since in practice there are virtually no waves of a single pure frequency, it becomes necessary to decompose the entire sound spectrum of the audible range into overtones, or harmonics. For this purpose there are graphs that display the relative energy of sound vibrations as a function of frequency, called sound frequency spectrum graphs. The frequency spectrum of sound comes in two types: discrete and continuous. A discrete spectrum displays individual frequencies separated by gaps; a continuous spectrum contains all sound frequencies at once.
In music and acoustics, the most commonly used graph is the amplitude-frequency response (abbreviated "AFC," commonly called the frequency response). This graph shows the amplitude of sound vibrations as a function of frequency across the entire frequency spectrum (20 Hz - 20 kHz). From such a graph it is easy to see, for example, the strengths and weaknesses of a particular speaker or of an acoustic system as a whole: the areas of strongest energy output, frequency dips and rises, attenuation, and the steepness of the roll-off.
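
Decomposing a signal into its frequency components is exactly what the discrete Fourier transform does. A toy sketch, assuming NumPy is available; the 8 kHz sample rate and the two test tones are arbitrary example values:

```python
import numpy as np

# A toy discrete spectrum: sample a signal made of two pure tones and
# recover their frequencies with the discrete Fourier transform.
SAMPLE_RATE = 8000  # Hz (assumed for the example)
DURATION = 1.0      # second

t = np.arange(0, DURATION, 1 / SAMPLE_RATE)
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / SAMPLE_RATE)

# The two largest peaks sit at the component frequencies, 440 and 1000 Hz:
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))  # [440.0, 1000.0]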

Propagation of sound waves, phase and antiphase

Sound waves propagate in all directions from the source. The simplest example for understanding this phenomenon is a pebble thrown into water.
From the spot where the stone falls, waves spread across the surface of the water in all directions. Now imagine a speaker in a certain enclosed volume, say a sealed box, connected to an amplifier and playing some musical signal. It is easy to notice (especially with a powerful low-frequency signal, such as a bass drum) that the speaker makes a rapid movement "forward" and then the same rapid movement "backward." It remains to understand that when the speaker moves forward, it emits a sound wave that we hear afterwards. But what happens when the speaker moves backward? Paradoxically, the same thing: the speaker makes the same sound, only in our example it propagates entirely within the volume of the box, without going beyond its limits (the box is sealed). In this example one can observe quite a few interesting physical phenomena, the most significant of which is the concept of phase.

The sound wave that the speaker emits toward the listener is "in phase." The reverse wave, which goes into the volume of the box, is correspondingly in antiphase. It remains to understand what these concepts mean. The phase of a signal is the sound pressure level at the current moment in time at some point in space. Phase is easiest to understand with the example of music reproduced by an ordinary floor-standing stereo pair of home speakers. Imagine two such floor-standing speakers installed in a room and playing. Both speakers then reproduce a synchronous signal of variable sound pressure, and the sound pressure of one speaker adds to the sound pressure of the other. This effect occurs because the signal reproduction from the left and right speakers is synchronous: the peaks and troughs of the waves emitted by the left and right speakers coincide.

Now imagine that the sound pressures still vary in the same way (have not changed), but are now opposite to each other. This can happen if one of the two speaker systems is connected in reverse polarity (the "+" cable from the amplifier to the "-" terminal of the speaker, and the "-" cable from the amplifier to the "+" terminal). In this case, the opposite-direction signal causes a pressure difference, which can be represented numerically as follows: the left speaker creates a pressure of "1 Pa" while the right speaker creates a pressure of "minus 1 Pa." As a result, the total sound volume at the listener's position will be zero. This phenomenon is called antiphase. Looking at the example in more detail, two speakers playing "in phase" create identical regions of air compression and rarefaction, effectively helping each other. In an idealized antiphase, the region of compressed air created by one speaker is accompanied by a region of rarefied air created by the other. This looks roughly like mutual, synchronous cancellation of the waves. True, in practice the volume does not drop to zero, and we hear a heavily distorted and weakened sound instead.

The most accessible way to describe this phenomenon is as follows: two signals with the same oscillations (frequency) but shifted in time. It is convenient to picture this shift with ordinary round analog clocks. Imagine several identical round clocks hanging on a wall. When their second hands run synchronously, 30 seconds on one clock and 30 on another, that is an example of signals in phase. If the second hands move with a shift but at the same speed, for example 30 seconds on one clock and 24 seconds on another, that is a classic example of a phase shift. Phase is likewise measured in degrees, within a virtual circle. When the signals are shifted relative to each other by 180 degrees (half a period), classic antiphase is obtained. In practice, minor phase shifts often occur, which can also be measured in degrees and successfully eliminated.
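
To see how the phase shift affects the summed signal, here is a minimal sketch (the 1,000-point sampling grid is an arbitrary choice): summing two equal tones shifted by 0°, 90°, and 180° shows full addition, partial addition, and complete cancellation, respectively.

```python
import math

# Peak of the sum of two equal-amplitude tones with a phase offset.
# At 0° the signals reinforce (in phase); at 180° they cancel (antiphase).
def summed_peak(phase_deg: float, steps: int = 1000) -> float:
    phi = math.radians(phase_deg)
    return max(
        abs(math.sin(x) + math.sin(x + phi))
        for x in (2 * math.pi * i / steps for i in range(steps))
    )

print(round(summed_peak(0), 3))    # ~2.0   -> pressures add
print(round(summed_peak(180), 3))  # ~0.0   -> complete cancellation
print(round(summed_peak(90), 3))   # ~1.414 -> partial addition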

Waves can be plane or spherical. A plane wavefront propagates in only one direction and is rarely encountered in practice. A spherical wavefront is a simple type of wave that originates from a single point and travels in all directions. Sound waves have the property of diffraction, i.e. the ability to bend around obstacles and objects. The degree of bending depends on the ratio of the sound wavelength to the size of the obstacle or opening. Diffraction also occurs when there is an obstacle in the path of the sound. In this case, two scenarios are possible: 1) if the obstacle is much larger than the wavelength, the sound is reflected or absorbed (depending on the material's degree of absorption, the thickness of the obstacle, etc.), and an "acoustic shadow" zone forms behind the obstacle; 2) if the obstacle is comparable to the wavelength or even smaller, the sound diffracts to some extent in all directions. If a sound wave moving in one medium hits the interface with another medium (for example, air meeting a solid), three scenarios are possible: 1) the wave is reflected from the interface; 2) the wave passes into the other medium without changing direction; 3) the wave passes into the other medium with a change of direction at the boundary, which is called "wave refraction."

The ratio of the excess pressure of a sound wave to the oscillatory volumetric velocity is called the wave (acoustic) impedance. In simple terms, the wave impedance of a medium can be called its ability to absorb sound waves or "resist" them. The reflection and transmission coefficients depend directly on the ratio of the wave impedances of the two media. The wave impedance of a gaseous medium is much lower than that of water or solids. Therefore, if a sound wave in air strikes a solid object or the surface of deep water, the sound is either reflected from the surface or absorbed to a large extent. This depends on the thickness of the layer (of water or solid) on which the sound wave falls: when the solid or liquid layer is thin, sound waves almost completely "pass through," and conversely, when the layer is thick, the waves are more often reflected. The reflection of sound waves follows the well-known physical law: the angle of incidence equals the angle of reflection. When a wave from a medium of lower density hits the boundary with a medium of higher density, the phenomenon of refraction occurs. It consists in the bending (refraction) of the sound wave after "meeting" the obstacle and is necessarily accompanied by a change in speed. Refraction also depends on the temperature of the medium in which it occurs.

As sound waves propagate through space, their intensity inevitably decreases; one can say that the waves attenuate and the sound weakens. Encountering this effect in practice is quite simple: for example, if two people stand in a field at some close distance (a meter or closer) and begin saying something to each other, and the distance between them then increases, the same conversational volume becomes less and less audible. This example clearly demonstrates the decrease in the intensity of sound waves. Why does this happen? Part of the reason is the geometric spreading of the wavefront itself, and part is the various processes of heat exchange, molecular interaction, and internal friction. Most often, sound energy is converted into heat. Such processes inevitably arise in any of the three sound propagation media and can be characterized as absorption of sound waves.

The intensity and degree of absorption of sound waves depend on many factors, such as the pressure and temperature of the medium. Absorption also depends on the specific sound frequency. When a sound wave propagates through a liquid or gas, friction occurs between the particles, which is called viscosity. As a result of this friction at the molecular level, the wave's energy is converted from sound into heat. In other words, the higher the thermal conductivity of the medium, the lower the degree of wave absorption. Sound absorption in gaseous media also depends on pressure (atmospheric pressure changes with altitude above sea level). As for the dependence of absorption on frequency: given the above dependences of viscosity and thermal conductivity, the higher the frequency of the sound, the greater the absorption. For example, at normal temperature and pressure in air, the absorption of a 5,000 Hz wave is 3 dB/km, while the absorption of a 50,000 Hz wave is 300 dB/km.
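
A rough sketch combining the two loss mechanisms discussed above: geometric spreading and atmospheric absorption. The inverse-square spreading term (20·log10 of the distance ratio) and the 1 m reference distance are my assumptions for a point source in free space; the 3 dB/km coefficient is the 5 kHz figure quoted above:

```python
import math

# Total level drop over distance for a point source in free space:
# geometric spreading (20 * log10(d / d0) dB, inverse-square law)
# plus atmospheric absorption (roughly alpha dB/km, growing with frequency).
def level_drop_db(distance_m: float, alpha_db_per_km: float,
                  reference_m: float = 1.0) -> float:
    spreading = 20 * math.log10(distance_m / reference_m)
    absorption = alpha_db_per_km * distance_m / 1000
    return spreading + absorption

# Using the ~3 dB/km figure quoted above for 5 kHz in air:
print(round(level_drop_db(100, 3.0), 1))   # ~40.3 dB quieter at 100 m
print(round(level_drop_db(1000, 3.0), 1))  # ~63.0 dB quieter at 1 km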

In solid media, all the above dependences (on thermal conductivity and viscosity) remain, but several more conditions are added. They are associated with the molecular structure of solid materials, which can vary and has its own inhomogeneities. Depending on this internal molecular structure, the absorption of sound waves can differ and depends on the specific material. When sound passes through a solid body, the wave undergoes a number of transformations and distortions, which most often leads to the scattering and absorption of sound energy. At the molecular level, a dislocation effect can occur, in which the sound wave displaces atomic planes, which then return to their original position. Alternatively, the movement of dislocations leads to collisions with dislocations perpendicular to them, or with defects in the crystal structure, which slows them down and, as a consequence, absorbs some of the sound wave. However, the sound wave can also resonate with these defects, which distorts the original wave. The energy of the sound wave at the moment of interaction with the elements of the material's molecular structure is dissipated through internal friction processes.

In this article I will try to analyze the features of human auditory perception and some subtleties of sound propagation.

Sounds are the subject of phonetics. The study of sounds is included in every school curriculum in Russian. Familiarity with sounds and their basic characteristics begins in the lower grades; a more detailed study, with complex examples and nuances, takes place in middle and high school. This page provides only basic knowledge about the sounds of the Russian language in compressed form. If you need to study the structure of the speech apparatus, the tonality of sounds, articulation, acoustic components, and other aspects beyond the modern school curriculum, refer to specialized manuals and textbooks on phonetics.

What is sound?

Sound, like the word and the sentence, is a basic unit of language. However, a sound does not express any meaning; it reflects the sounding of a word. Thanks to this, we distinguish words from one another. Words differ in the number of sounds (порт - спорт, ворона - воронка), in the set of sounds (лимон - лиман, кот - кит), in the sequence of sounds (нос - сон, куст - стук), up to a complete mismatch of sounds (лодка - катер, лес - парк).

What sounds are there?

In Russian, sounds are divided into vowels and consonants. The Russian language has 33 letters and 42 sounds: 6 vowels, 36 consonants, and 2 letters (ь, ъ) that do not denote any sound. The discrepancy between the number of letters and sounds (not counting ь and ъ) arises because the 10 vowel letters correspond to 6 vowel sounds, and the 21 consonant letters correspond to 36 consonant sounds (counting all variants of consonant sounds: voiced/voiceless, soft/hard). In writing, a sound is indicated in square brackets.
The following sounds do not exist: [е], [ё], [ю], [я], [ь], [ъ], [ж'], [ш'], [ц'], or hard [й], [ч], [щ].

Scheme 1. Letters and sounds of the Russian language.

How are sounds pronounced?

We pronounce sounds while exhaling (only the interjection "a-a-a," expressing fear, is pronounced while inhaling). The division of sounds into vowels and consonants is related to how a person pronounces them. Vowel sounds are produced by the voice, as exhaled air passes through tensed vocal cords and exits freely through the mouth. Consonant sounds consist of noise, or a combination of voice and noise, because the exhaled air meets an obstruction in its path in the form of a closure or the teeth. Vowel sounds are pronounced loudly; consonant sounds are more muffled. A person can sing vowel sounds with the voice (on exhaled air), raising or lowering the pitch. Consonant sounds cannot be sung; they are pronounced equally muffled. The hard and soft signs do not denote sounds and cannot be pronounced as independent sounds. When pronouncing a word, they affect the consonant before them, making it soft or hard.

Transcription of the word

The transcription of a word is a record of the sounds in the word, that is, a record of how the word is actually pronounced. Sounds are enclosed in square brackets. Compare: а is a letter, [а] is a sound. The softness of a consonant is indicated by an apostrophe: п is a letter, [п] is a hard sound, [п'] is a soft sound. Voiced and voiceless consonants are not specially marked in writing. The transcription of a word is written in square brackets. Examples: дверь ("door") → [дв'эр'], колючка ("thorn") → [кал'уч'ка]. Sometimes stress is indicated in the transcription with an apostrophe before the stressed vowel.

There is no one-to-one correspondence between letters and sounds. In the Russian language there are many cases where vowel sounds change depending on the position of the word stress, and where consonants are replaced or lost in certain combinations. When compiling the transcription of a word, the rules of phonetics are taken into account.

Color scheme

In phonetic analysis, words are sometimes drawn as color schemes: the letters are colored differently depending on the sound they denote. The colors reflect the phonetic characteristics of the sounds and help visualize how a word is pronounced and what sounds it consists of.

All vowels (stressed and unstressed) are marked with a red background. Iotated vowels are marked green-red: green denotes the soft consonant sound [й'], red the vowel that follows it. Consonants denoting hard sounds are colored blue; those denoting soft sounds, green. The soft and hard signs are colored gray or not colored at all.

Designations:
red - vowel; green-red - iotated vowel; blue - hard consonant; green - soft consonant; blue-green - consonant that can be either soft or hard.

Note. Blue-green is not used in phonetic analysis diagrams, since a consonant sound cannot be soft and hard at the same time. The blue-green color in the legend above is used only to show that a sound can be either soft or hard.

If we are talking about objective parameters that characterize quality, then of course not. Recording to vinyl or cassette always introduces additional distortion and noise. The point, however, is that such distortion and noise do not subjectively spoil the impression of the music, and often quite the opposite. Our hearing and sound-analysis system works in a rather complex way; what matters for our perception and what can be assessed as quality from the technical side are slightly different things.

MP3 is a separate issue entirely: it is a deliberate degradation of quality in order to reduce file size. MP3 encoding involves discarding quieter harmonics and smearing transients, which means a loss of detail and a "blurring" of the sound.

The ideal option in terms of quality and faithful transmission of everything that happens is digital recording without compression, and CD quality, 16 bits at 44,100 Hz, is no longer the limit: you can increase both the bit depth (24 or 32 bits) and the sampling rate (48,000, 88,200, 96,000, 192,000 Hz). Bit depth affects the dynamic range, and the sampling rate affects the frequency range. Given that the human ear hears at best up to 20,000 Hz, by the Nyquist theorem a sampling rate of 44,100 Hz should be sufficient, but in practice, for fairly accurate reproduction of complex short sounds such as drums, a higher rate is better. A larger dynamic range is also better, so that quieter sounds can be recorded without distortion. In reality, though, the further these two parameters increase, the less noticeable the changes become.
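
The rough figures behind this trade-off are easy to compute. A minimal sketch, using the textbook approximation that ideal PCM gives about 6.02 dB of dynamic range per bit (plus 1.76 dB), and the Nyquist limit of half the sampling rate:

```python
# Dynamic range of ideal PCM ≈ 6.02 * bits + 1.76 dB,
# and the Nyquist limit: a sample rate fs captures frequencies up to fs / 2.
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

def nyquist_hz(sample_rate_hz: int) -> float:
    return sample_rate_hz / 2

print(round(dynamic_range_db(16), 1))  # ~98.1 dB  (CD quality)
print(round(dynamic_range_db(24), 1))  # ~146.2 dB
print(nyquist_hz(44100))               # 22050.0 Hz > 20 kHz hearing limit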

At the same time, you can appreciate all the delights of quality digital audio only if you have a good sound card. What is built into most PCs is generally terrible; Macs with built-in cards are better, but it is still better to have something external. Well, the question, of course, is where you will get digital recordings with quality higher than CD :) Although even the crappiest MP3 will sound noticeably better on a good sound card.

Returning to analog: people continue to use it not because it is really better and more accurate, but because high-quality, accurate recording without distortion is usually not the desired result. Digital distortions, which can arise from poor audio-processing algorithms, low bit depth or sampling rate, or digital clipping, certainly sound much nastier than analog ones, but they can be avoided. And it turns out that a really high-quality, accurate digital recording sounds too sterile and lacks richness. If, for example, you record drums to tape, that saturation appears and is preserved even if the recording is later digitized. Vinyl also sounds cooler, even when tracks made entirely on a computer are pressed to it. And of course, all of this includes external attributes and associations: how it all looks, the emotions of the people who do it. It is quite understandable to want to hold a record in your hands, to listen to a cassette on an old tape recorder rather than a recording from a computer, or to understand those who still use multitrack tape machines in studios, even though it is far more difficult and costly. But it has its own particular fun.
