Is there sound in space? Does sound travel in space? Propagation of sound waves, phase and antiphase

Sounds are studied in the branch of linguistics called phonetics. The study of sounds is part of every school curriculum in the Russian language. Students are introduced to sounds and their basic characteristics in the lower grades; a more detailed study, with complex examples and nuances, takes place in middle and high school. This page gives only basic knowledge about the sounds of the Russian language in condensed form. If you need to study the structure of the speech apparatus, the tonality of sounds, articulation, acoustic components and other aspects beyond the scope of the modern school curriculum, consult specialized manuals and textbooks on phonetics.

What is sound?

Sound, like the word and the sentence, is a basic unit of language. However, a sound expresses no meaning of its own; it conveys how a word sounds, and this is what lets us tell words apart. Words differ in the number of sounds (port - sport, crow - funnel), in the set of sounds (lemon - estuary, cat - mouse), in the sequence of sounds (nose - sleep, bush - knock), and even in a complete mismatch of sounds (boat - speedboat, forest - park).

What sounds are there?

In Russian, sounds are divided into vowels and consonants. The Russian language has 33 letters and 42 sounds: 6 vowel sounds, 36 consonant sounds, and 2 letters (ь, ъ) that denote no sound at all. The mismatch between the number of letters and sounds (not counting ь and ъ) arises because the 10 vowel letters correspond to only 6 sounds, while the 21 consonant letters correspond to 36 sounds (counting all varieties of consonant sounds: voiceless/voiced, soft/hard). In writing, a sound is indicated in square brackets.
There are no such sounds as [е], [ё], [ю], [я], [ь], [ъ], soft [ж'], [ш'], [ц'], or hard [й], [ч], [щ].

Scheme 1. Letters and sounds of the Russian language.

How are sounds pronounced?

We pronounce sounds while exhaling (the only exception is the interjection "a-a-a" expressing fear, which is pronounced while inhaling). The division of sounds into vowels and consonants reflects how a person pronounces them. Vowel sounds are produced by the voice: exhaled air passes through tensed vocal cords and exits freely through the mouth. Consonant sounds consist of noise, or of voice and noise combined, because the exhaled air meets an obstruction in its path formed by the tongue, lips, or teeth. Vowels are pronounced loudly, consonants are muffled. A person can sing vowel sounds with the voice (with exhaled air), raising or lowering the pitch. Consonant sounds cannot be sung; they are always pronounced in the same muffled way. The hard and soft signs do not represent sounds and cannot be pronounced on their own. Within a word, they affect the consonant before them, making it hard or soft.

Transcription of the word

The transcription of a word is a record of the sounds in the word, that is, a record of how the word is actually pronounced. Sounds are enclosed in square brackets. Compare: a is a letter, [a] is a sound. The softness of a consonant is indicated by an apostrophe: p is a letter, [p] is the hard sound, [p'] is the soft sound. Voiced and voiceless consonants are not specially marked in transcription. The transcription of a whole word is written in square brackets. Examples: door → [dv'er'], thorn → [kal'uch'ka]. Sometimes stress is indicated in the transcription by an accent mark before the stressed vowel.

There is no one-to-one correspondence between letters and sounds. In Russian there are many cases where vowel sounds are substituted depending on the position of the word's stress, and where consonant sounds are substituted or lost in certain combinations. When compiling the transcription of a word, the rules of phonetics are taken into account.

Color scheme

In phonetic analysis, words are sometimes written out as color schemes: the letters are colored differently depending on which sound they represent. The colors reflect the phonetic characteristics of the sounds and help you visualize how a word is pronounced and what sounds it consists of.

All vowels (stressed and unstressed) are marked with a red background. Iotated vowels are marked green-red: green means the soft consonant sound [й'], red means the vowel that follows it. Consonants with hard sounds are colored blue. Consonants with soft sounds are colored green. Soft and hard signs are painted gray or not painted at all.

Designations: red - vowel, green-red - iotated vowel, blue - hard consonant, green - soft consonant, blue-green - consonant that may be either soft or hard.

Note. Blue-green is not used in phonetic analysis diagrams, since a consonant sound cannot be soft and hard at the same time. The blue-green color in the legend above only demonstrates that a sound can be either soft or hard.

Space is not a homogeneous nothingness. There are clouds of gas and dust between various objects. They are the remnants of supernova explosions and the site of star formation. In some areas, this interstellar gas is dense enough to propagate sound waves, but they are imperceptible to human hearing.

Is there sound in space?

When an object moves - be it the vibration of a guitar string or an exploding firework - it affects nearby air molecules, as if pushing them. These molecules crash into their neighbors, and those, in turn, into the next ones. Movement travels through the air like a wave. When it reaches the ear, a person perceives it as sound.

When a sound wave passes through air, the pressure in it rises and falls, like seawater in a storm. The number of these oscillations per second is called the frequency of the sound and is measured in hertz (1 Hz is one oscillation per second). The distance between adjacent pressure peaks is called the wavelength.

Sound can only travel in a medium in which the wavelength is no smaller than the mean free path of the particles - the average distance a molecule travels between one collision and the next. Thus a dense medium can transmit sounds with a short wavelength, while a rarefied one can carry only long wavelengths.

Sounds with long wavelengths have frequencies the ear perceives as low tones. In a gas whose mean free path exceeds 17 m (the wavelength of a 20 Hz sound in air), any sound that can propagate will be too low in frequency for humans to perceive. Such sounds are called infrasound. If there were aliens with ears tuned to very low notes, they would know for certain that sound is audible in outer space.
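As a rough illustration of this rule of thumb, here is a sketch in Python. The 343 m/s speed of sound and the roughly 68 nm mean free path of sea-level air are textbook ballpark figures, not values from the article:

```python
import math

def wavelength(frequency_hz, speed_of_sound=343.0):
    """Wavelength of a sound wave: lambda = v / f (speed in m/s)."""
    return speed_of_sound / frequency_hz

# The lowest frequency humans hear, ~20 Hz, has a ~17 m wavelength in air.
lam_20hz = wavelength(20.0)  # ≈ 17.15 m

def can_propagate(wavelength_m, mean_free_path_m):
    """Sound propagates only if the wavelength is no smaller than
    the mean free path of the gas molecules."""
    return wavelength_m >= mean_free_path_m

# In sea-level air (mean free path ~68 nm) all audible sound travels freely.
print(can_propagate(lam_20hz, 68e-9))  # True
# In a thin gas with a 20 m mean free path, a 20 Hz sound cannot propagate.
print(can_propagate(lam_20hz, 20.0))   # False
```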

Song of the Black Hole

Some 220 million light years away, at the center of a cluster of thousands of galaxies, hums the deepest note ever detected in the universe: 57 octaves below middle C, roughly a million billion times lower than the deepest frequency a person can hear.

The deepest sound humans can detect has a cycle of about one oscillation every 1/20 of a second. The black hole in the constellation Perseus has a cycle of about one oscillation every 10 million years.
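The octave arithmetic behind these figures can be checked in a few lines of Python (taking middle C as roughly 261.63 Hz). The result, about 17 million years per oscillation, agrees with the article's rounded "about 10 million years" in order of magnitude:

```python
MIDDLE_C_HZ = 261.63   # approximate frequency of middle C
n_octaves = 57

# Going down one octave halves the frequency.
f = MIDDLE_C_HZ / 2 ** n_octaves        # ≈ 1.8e-15 Hz
period_seconds = 1.0 / f
period_years = period_seconds / (365.25 * 24 * 3600)

print(f"{period_years:.1e} years per oscillation")  # on the order of 10^7 years
```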

This became known in 2003, when NASA's Chandra X-ray Observatory spotted something in the gas filling the Perseus cluster: concentric rings of light and dark, like ripples in a pond. Astrophysicists say these are the traces of incredibly low-frequency sound waves. The brighter rings are the crests of the waves, where the pressure on the gas is greatest; the darker rings are the troughs, where the pressure is lower.

Sound you can see

Hot, magnetized gas swirls around the black hole, much as water swirls around a drain. As it moves, it generates a powerful electromagnetic field, strong enough to accelerate gas near the edge of the black hole to almost the speed of light and fling it outward in enormous bursts called relativistic jets. These jets plow into the surrounding gas and shove it aside, and that impact is what generates the eerie sounds from space.

The waves are carried through the Perseus cluster for hundreds of thousands of light years from their source, but sound can travel only as far as there is enough gas to carry it, so it stops at the edge of the gas cloud filling Perseus. That means its sound cannot be heard on Earth; we can only see its effect on the gas cloud, like peering through space into a soundproof chamber.

Strange planet

Our planet emits a deep groan every time its crust shifts, and then there is no doubt that sound travels into space. An earthquake can create vibrations in the atmosphere with a frequency of one to five hertz. If it is strong enough, it can send infrasonic waves up through the atmosphere into outer space.

Of course, there is no sharp boundary where the Earth's atmosphere ends and space begins; the air simply thins out gradually until it eventually disappears altogether. Between 80 and 550 kilometers above the Earth's surface, a molecule's mean free path is about a kilometer. The air at this altitude is roughly 59 times too thin to carry audible sound; it can transmit only long infrasonic waves.

When a magnitude 9.0 earthquake struck Japan's northeast coast in March 2011, seismographs around the world recorded its waves traveling through the Earth, and its vibrations set off low-frequency oscillations in the atmosphere. These oscillations traveled all the way up to where the Gravity field and steady-state Ocean Circulation Explorer (GOCE) maps the Earth's gravity from a low orbit about 270 kilometers above the surface. And the satellite managed to record these sound waves.

GOCE carries very sensitive accelerometers that drive an ion thruster, which keeps the satellite in a stable orbit. In 2011, those accelerometers detected vertical shifts in the very thin atmosphere around the satellite, as well as wave-like shifts in air pressure, as the sound waves from the earthquake propagated past. The engines corrected for the displacement, and the stored data became, in effect, a recording of the earthquake's infrasound.

That recording lay hidden in the satellite's data until a group of scientists led by Rafael F. Garcia published it.

The first sound in the universe

If we could go back in time, to roughly the first 760,000 years after the Big Bang, we could find out whether there was sound in space. At that time the Universe was so dense that sound waves could travel through it freely.

Back then the Universe was filled with charged particles - protons and electrons - that absorbed or scattered photons, the particles that make up light. Around that time, everything finally cooled enough for the particles to condense into atoms, and the first photons began to travel through space as light.

Today that first light reaches Earth as a faint microwave glow, visible only to very sensitive radio telescopes. Physicists call it the cosmic microwave background radiation: the oldest light in the universe. And it answers the question of whether there is sound in space, because the cosmic microwave background carries a recording of the oldest music in the universe.

Light to the rescue

How does light help us know if there is sound in space? Sound waves travel through air (or interstellar gas) as pressure fluctuations. When gas is compressed, it gets hotter. On a cosmic scale, this phenomenon is so intense that stars are formed. And when the gas expands, it cools. Sound waves traveling through the early universe caused slight fluctuations in pressure in the gaseous environment, which in turn left subtle temperature fluctuations reflected in the cosmic microwave background.

Using these temperature variations, University of Washington physicist John Cramer was able to reconstruct those eerie sounds from space: the music of an expanding universe. He multiplied the frequency by a factor of 10^26 so that human ears could hear it.

So no one will actually hear a scream in space, but sound waves do move through clouds of interstellar gas and through the rarefied outer layers of the Earth's atmosphere.

If we talk about objective parameters that characterize quality, then of course not. Recording to vinyl or cassette always introduces additional distortion and noise. The thing is, though, that such distortion and noise do not subjectively spoil the impression of the music, and often even the opposite. Our hearing and sound analysis work in quite complex ways; what matters to our perception and what can be assessed as quality from the technical side are somewhat different things.

MP3 is a separate issue entirely: it is a deliberate degradation of quality in order to reduce file size. MP3 encoding removes quieter harmonics and smears transients, which means a loss of detail and a "blurring" of the sound.

The ideal option in terms of quality and faithful reproduction of everything that happens is digital recording without compression. CD quality, 16 bits at 44,100 Hz, is not the limit: you can increase both the bit depth (24 or 32 bits) and the sampling rate (48,000, 88,200, 96,000, 192,000 Hz). Bit depth determines dynamic range, and sampling rate determines frequency range. Given that the human ear hears at best up to 20,000 Hz, by the Nyquist theorem a sampling rate of 44,100 Hz should be sufficient, but in practice, for accurate reproduction of complex short sounds such as drums, a higher rate is better. More dynamic range is also better, so that quiet sounds can be recorded without distortion. In reality, though, the further these two parameters are increased, the less noticeable the improvement.
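The two rules of thumb involved here, roughly 6 dB of dynamic range per bit and a Nyquist limit of half the sampling rate, can be sketched in Python:

```python
import math

def dynamic_range_db(bit_depth):
    """Theoretical dynamic range of linear PCM: 20*log10(2**bits) ≈ 6.02 dB per bit."""
    return 20 * math.log10(2 ** bit_depth)

def nyquist_limit_hz(sample_rate_hz):
    """Highest frequency representable at a given sample rate (Nyquist theorem)."""
    return sample_rate_hz / 2

print(round(dynamic_range_db(16)))  # ≈ 96 dB for CD audio
print(round(dynamic_range_db(24)))  # ≈ 144 dB
print(nyquist_limit_hz(44100))      # 22050.0 Hz, just above the limit of hearing
```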

At the same time, you can appreciate all the delights of high-quality digital sound if you have a good sound card. What's built into most PCs is generally terrible; Macs with built-in cards are better, but it's better to have something external. Well, the question, of course, is where you will get these digital recordings with a quality higher than CD :) Although the most crappy MP3 will sound noticeably better on a good sound card.

Returning to analog media: people keep using them not because they are really better or more accurate, but because a high-quality, accurate recording without distortion is usually not the desired result. Digital distortions, which can arise from poor audio processing algorithms, low bit depths or sampling rates, or digital clipping, certainly sound much nastier than analog ones, but they can be avoided. And it turns out that a really high-quality, accurate digital recording sounds too sterile and lacks richness. If, for example, you record drums on tape, that saturation appears and is preserved even if the recording is later digitized. Vinyl also sounds cooler, even when tracks made entirely on a computer are pressed onto it. And of course there are the external attributes and associations: how it all looks, the emotions of the people involved. The desire to hold a record in your hands, or to listen to a cassette on an old tape deck rather than a file on a computer, is quite understandable, as are the people who still use multi-track tape machines in studios, even though that is far more difficult and costly. It has its own particular fun.

February 18, 2016

The world of home entertainment is quite varied and can include watching movies on a good home theater system, exciting gameplay, or listening to music. As a rule, everyone finds something of their own in this area, or combines everything at once. But whatever a person's leisure goals and whatever extremes they go to, all these pursuits are firmly connected by one simple and understandable word: "sound". Indeed, in all these cases sound leads us by the hand. The question is not so simple and trivial, though, especially when the aim is to achieve high-quality sound in a room or in any other conditions. That does not always require buying expensive hi-fi or hi-end components (although they will be very useful); sometimes a good knowledge of physical theory is enough, and it can eliminate most of the problems facing anyone who sets out to obtain high-quality sound reproduction.

Next, the theory of sound and acoustics will be considered from the standpoint of physics. I will try to make it as accessible as possible to any person who may be far from physical laws and formulas but nevertheless dreams of building a perfect acoustic system. I do not claim that to achieve good results in this field at home (or in a car, for example) you need to know these theories thoroughly, but understanding the basics will let you avoid many foolish and absurd mistakes, and will help you get the maximum sound quality from a system of any level.

General theory of sound and musical terminology

What is sound? It is the sensation perceived by the organ of hearing, the ear (the phenomenon itself exists without the ear's participation, but this way it is easier to understand), which arises when the eardrum is excited by a sound wave. The ear acts as a "receiver" for sound waves of various frequencies.
A sound wave is essentially a sequential series of compressions and rarefactions of a medium (most often air, under normal conditions) at various frequencies. Sound waves are oscillatory in nature, caused and produced by the vibration of some body. The emergence and propagation of a classical sound wave is possible in three elastic media: gaseous, liquid, and solid. When a sound wave arises in one of these media, certain changes inevitably occur in the medium itself, for example a change in air density or pressure, movement of parcels of air, and so on.

Since a sound wave is oscillatory in nature, it has a characteristic called frequency. Frequency is measured in hertz (named after the German physicist Heinrich Rudolf Hertz) and denotes the number of oscillations in a period of time equal to one second. That is, a frequency of 20 Hz means 20 oscillations per second. The subjective pitch of a sound also depends on its frequency: the more vibrations per second, the "higher" the sound seems. A sound wave has another important characteristic called wavelength. The wavelength is the distance a wave of a given frequency travels during one full oscillation (one period). For example, the wavelength of the lowest sound in the human audible range, at 20 Hz, is 16.5 meters, while the wavelength of the highest sound, at 20,000 Hz, is 1.7 centimeters.

The human ear is built so that it can perceive waves only in a limited range, roughly 20 Hz to 20,000 Hz (depending on the individual, some people can hear a little more, some less). This does not mean that sounds below or above these frequencies do not exist; they are simply not perceived by the human ear, falling outside the audible range. Sound above the audible range is called ultrasound; sound below it is called infrasound. Some animals can perceive ultrasound and infrasound, and some even use these ranges for orientation in space (bats, dolphins). If sound passes through a medium that is not in direct contact with the human hearing organ, it may not be heard at all, or may be greatly weakened.

Musical terminology includes such important designations as octave, tone, and overtone. An octave is an interval in which the frequency ratio between sounds is 1 to 2. An octave is usually easily distinguished by ear, and sounds within this interval can be very similar to one another. An octave can also be described as a sound that vibrates twice as fast as another sound in the same period of time. For example, 800 Hz is nothing other than the higher octave of 400 Hz, and 400 Hz in turn is the next octave up from a sound at 200 Hz. An octave, in turn, consists of tones and overtones. Oscillations in a harmonic sound wave of a single frequency are perceived by the human ear as a musical tone. High-frequency vibrations are heard as high-pitched sounds, low-frequency vibrations as low-pitched ones. The human ear can clearly distinguish sounds that differ by one tone (in the range up to 4000 Hz). Despite this, music uses an extremely small number of tones. This is explained by considerations of harmonic consonance; everything is built on the principle of octaves.
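The 1:2 octave relationship is easy to illustrate in Python (starting from 440 Hz, an arbitrary choice for this sketch):

```python
# Each octave doubles the frequency; starting from concert A (440 Hz),
# the octaves above and below are:
base = 440.0
octaves_up = [base * 2 ** n for n in range(4)]       # 440, 880, 1760, 3520 Hz
octaves_down = [base / 2 ** n for n in range(1, 3)]  # 220, 110 Hz

print(octaves_up)    # [440.0, 880.0, 1760.0, 3520.0]
print(octaves_down)  # [220.0, 110.0]
```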

Let's consider the theory of musical tones using the example of a string stretched in a certain way. Depending on the tension, such a string will be "tuned" to one specific frequency. When the string is plucked or struck with a specific force, it vibrates and steadily produces one specific tone, and we hear the tuning frequency we wanted. This sound is called the fundamental tone. The frequency of the note A of the first octave, 440 Hz, is officially accepted as the reference tone in music. However, most musical instruments never reproduce a pure fundamental tone alone; it is inevitably accompanied by additional tones called overtones. Here it is appropriate to recall an important definition from musical acoustics, the concept of timbre. Timbre is the feature of musical sounds that gives musical instruments and voices their unique, recognizable character, even when comparing sounds of the same pitch and loudness. The timbre of each musical instrument depends on how the sound energy is distributed among the overtones at the moment the sound appears.

Overtones give the fundamental tone a specific coloring by which we can easily identify and recognize a particular instrument and clearly distinguish its sound from that of another instrument. Overtones come in two kinds: harmonic and non-harmonic. Harmonic overtones are, by definition, whole-number multiples of the fundamental frequency. If the overtones are not such multiples and deviate noticeably from those values, they are called non-harmonic. In music, non-harmonic overtones play almost no role, so in practice the term is reduced to "overtone" in the sense of a harmonic one. In some instruments, such as the piano, the fundamental tone does not even have time to form; over a short interval, the sound energy of the overtones rises and then just as rapidly falls. Many instruments create what might be called a "transition tone" effect, in which the energy of certain overtones is highest at a certain moment, usually at the very beginning, and then changes abruptly, shifting to other overtones. The frequency range of each instrument can be considered separately and is usually limited by the fundamental frequencies that the particular instrument is capable of producing.

Sound theory also includes the concept of noise. Noise is any sound created by a combination of sources that are not consistent with one another; a familiar example is the rustle of tree leaves swayed by the wind.

What determines the loudness of a sound? Obviously, it depends directly on the amount of energy carried by the sound wave. To quantify loudness there is the concept of sound intensity. Sound intensity is defined as the flow of energy passing through some area of space (for example, a square centimeter) per unit of time (for example, per second). In normal conversation the intensity is approximately 10⁻⁹ to 10⁻¹⁰ W/cm². The human ear can perceive sounds over a fairly wide range of sensitivity, and that sensitivity is not uniform across the sound spectrum: the frequency range 1000 Hz - 4000 Hz, which covers most of human speech, is perceived best.

Because sounds vary so greatly in intensity, it is more convenient to treat intensity as a logarithmic quantity and measure it in decibels (after the Scottish-born scientist Alexander Graham Bell). The lower threshold of hearing sensitivity of the human ear is 0 dB; the upper, 120 dB, is also called the "pain threshold". The upper limit of sensitivity is also perceived differently depending on frequency: low-frequency sounds must have much greater intensity than high-frequency ones to cause pain. For example, at a low frequency of 31.5 Hz the pain threshold occurs at a sound intensity level of 135 dB, while at 2000 Hz pain appears at 112 dB. There is also the concept of sound pressure, which extends the usual explanation of sound-wave propagation in air. Sound pressure is the alternating excess pressure that arises in an elastic medium as a sound wave passes through it.
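A minimal Python sketch of the decibel scale, assuming the conventional reference intensity of 10⁻¹² W/m² for the 0 dB threshold of hearing:

```python
import math

I0 = 1e-12  # reference intensity (W/m^2), the nominal threshold of hearing (0 dB)

def intensity_to_db(intensity_w_m2):
    """Sound intensity level in decibels: L = 10 * log10(I / I0)."""
    return 10 * math.log10(intensity_w_m2 / I0)

print(intensity_to_db(1e-12))        # 0.0 -> threshold of hearing
print(intensity_to_db(1.0))          # 120.0 -> roughly the pain threshold
print(round(intensity_to_db(1e-6)))  # 60 -> ordinary conversation, give or take
```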

Wave nature of sound

To better understand how a sound wave is generated, imagine a classic loudspeaker inside a pipe filled with air. If the speaker makes a sharp push forward, the air in the immediate vicinity of the diaphragm is momentarily compressed. The air then expands, pushing the compressed region along the pipe.
This wave motion becomes sound when it reaches the hearing organ and "excites" the eardrum. When a sound wave arises in a gas, excess pressure and excess density are created, and the particles oscillate about their positions. With sound waves it is important to remember that the substance does not travel along with the wave; only a temporary disturbance of the air masses occurs.

If we imagine a piston suspended in free space on a spring making repeated back-and-forth movements, such oscillations are called harmonic or sinusoidal (plotted as a graph, the wave is a pure sinusoid of repeated rises and falls). If we now imagine a speaker in a pipe (as in the example above) performing harmonic oscillations, then as the speaker moves "forward" we get the familiar compression of the air, and as it moves "back", rarefaction. A wave of alternating compressions and rarefactions then propagates through the pipe. The distance along the pipe between adjacent maxima or minima (points of the same phase) is called the wavelength. If the particles oscillate parallel to the direction of wave propagation, the wave is called longitudinal; if they oscillate perpendicular to it, transverse. Typically, sound waves in gases and liquids are longitudinal, while in solids waves of both types can occur. Transverse waves in solids arise from resistance to change of shape. The main difference between the two types is that a transverse wave has the property of polarization (the oscillations occur in a particular plane), while a longitudinal wave does not.
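A harmonic sound wave of the kind described can be sketched as a sinusoidal pressure function; the amplitude and the 440 Hz frequency below are arbitrary illustrative values:

```python
import math

def pressure(t, amplitude=1.0, frequency_hz=440.0):
    """Excess pressure of a harmonic (sinusoidal) sound wave at time t."""
    return amplitude * math.sin(2 * math.pi * frequency_hz * t)

period = 1 / 440.0
print(round(pressure(0.0), 6))             # 0.0  (equilibrium)
print(round(pressure(period / 4), 6))      # 1.0  (maximum compression)
print(round(pressure(3 * period / 4), 6))  # -1.0 (maximum rarefaction)
```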

Sound speed

The speed of sound depends directly on the characteristics of the medium in which it propagates. It is determined by two properties of the medium: its elasticity and its density. The speed of sound in solids depends directly on the type of material and its properties. In gaseous media the speed depends on only one kind of deformation of the medium: compression-rarefaction. The pressure change in a sound wave occurs without heat exchange with the surrounding particles and is called adiabatic.
The speed of sound in a gas depends mainly on temperature: it rises as the temperature rises and falls as the temperature falls. It also depends on the size and mass of the gas molecules themselves: the smaller the mass and size of the particles, the greater the "conductivity" of the wave and, accordingly, the greater the speed.

In liquid and solid media the principle of propagation is similar to that in air: compression-rarefaction. But in these media, besides the same dependence on temperature, the density of the medium and its composition/structure matter a great deal. The lower the density of a substance (for a given elasticity), the higher the speed of sound, and vice versa. The dependence on the composition of the medium is more complex and is determined in each specific case, taking into account the arrangement and interaction of the molecules or atoms.

Speed of sound in air at 20 °C: 343 m/s
Speed of sound in distilled water at 20 °C: 1481 m/s
Speed of sound in steel at 20 °C: 5000 m/s
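The air value in this table can be checked against a standard ideal-gas approximation for dry air (the formula below is a common textbook estimate, not something derived in the article):

```python
import math

def speed_of_sound_air(temp_celsius):
    """Approximate speed of sound in dry air (m/s), from the adiabatic
    ideal-gas estimate: v = 331.3 * sqrt(1 + T / 273.15)."""
    return 331.3 * math.sqrt(1 + temp_celsius / 273.15)

print(round(speed_of_sound_air(20)))   # ≈ 343 m/s, matching the table value
print(round(speed_of_sound_air(0)))    # ≈ 331 m/s
print(round(speed_of_sound_air(-20)))  # ≈ 319 m/s: colder air carries sound slower
```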

Standing waves and interference

When a speaker creates sound waves in a confined space, the waves are inevitably reflected from the boundaries. As a result, an interference effect most often arises, when two or more sound waves are superimposed on one another. Special cases of interference are the formation of (1) beats and (2) standing waves. Beats occur when waves of similar frequencies and amplitudes are added together. As the two waves overlap, at some moments their amplitude peaks coincide, in phase, while at other moments a peak of one wave meets a trough of the other, in antiphase. This is what characterizes acoustic beats. It is important to note that, unlike in standing waves, these phase coincidences do not hold constantly but recur at certain intervals. To the ear this pattern of beats is quite distinct and is heard as a periodic rise and fall in volume. The mechanism of the effect is extremely simple: when the peaks coincide the volume increases, and when a peak meets a trough the volume decreases.
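A small Python sketch of beats between two tones (440 Hz and 444 Hz are arbitrary illustrative values):

```python
import math

f1, f2 = 440.0, 444.0  # two tones with similar frequencies
beat_hz = abs(f1 - f2) # the loudness pulses |f1 - f2| times per second
print(beat_hz)         # 4.0

# sin(2*pi*f1*t) + sin(2*pi*f2*t) = 2*cos(pi*(f1-f2)*t) * sin(pi*(f1+f2)*t):
# a 442 Hz tone whose amplitude swells and fades 4 times per second.
def envelope(t):
    """Amplitude envelope of the sum of the two tones at time t."""
    return abs(2 * math.cos(math.pi * (f1 - f2) * t))

print(round(envelope(0.0), 3))                # 2.0 -> peaks coincide, loudest moment
print(round(envelope(1 / (2 * beat_hz)), 3))  # 0.0 -> antiphase, near silence
```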

Standing waves arise when two waves of the same amplitude, phase, and frequency are superimposed, with one moving forward and the other in the opposite direction as they "meet". In the region of space where the standing wave forms, a pattern of the superimposed amplitudes appears, with alternating maxima (called antinodes) and minima (called nodes). When this occurs, the frequency, phase, and attenuation coefficient of the wave at the point of reflection are extremely important. Unlike traveling waves, a standing wave transfers no energy, because the forward and backward waves that form it carry energy in equal amounts in both directions. To understand the emergence of a standing wave clearly, here is an example from home acoustics. Say we have floor-standing speakers in some confined space (a room). Playing something with a lot of bass, let's try changing the listener's position in the room. A listener who ends up at a minimum (subtraction zone) of a standing wave will feel that there is very little bass, while a listener at a maximum (addition zone) gets the opposite effect of a significantly boosted bass region. The effect is observed at all octaves of the base frequency: for example, if the base frequency is 440 Hz, the "addition" or "subtraction" will also be observed at 880 Hz, 1760 Hz, 3520 Hz, and so on.
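The "addition" and "subtraction" zones follow from the room's standing-wave (mode) frequencies. A sketch for a hypothetical 5-meter room, assuming 343 m/s for the speed of sound:

```python
def room_mode_frequencies(length_m, speed_of_sound=343.0, n_modes=4):
    """Axial standing-wave (room mode) frequencies between two parallel
    walls a distance L apart: f_n = n * v / (2 * L)."""
    return [n * speed_of_sound / (2 * length_m) for n in range(1, n_modes + 1)]

# Hypothetical room, 5 m between the front and back walls:
print(room_mode_frequencies(5.0))  # [34.3, 68.6, 102.9, 137.2]
# Bass at these frequencies is boosted or cancelled depending on
# where the listener stands relative to the nodes and antinodes.
```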

Resonance phenomenon

Most solid bodies have a natural resonance frequency. This effect is easy to understand with the example of an ordinary pipe open at only one end. Imagine that a speaker is attached to the other end of the pipe and can play one constant frequency, which can later be changed. The pipe has a natural resonance frequency; in simple terms, this is the frequency at which the pipe "resonates", or produces its own sound. If the frequency of the speaker (after adjustment) coincides with the resonance frequency of the pipe, the volume increases several times over. This happens because the loudspeaker excites oscillations of the air column in the pipe with significant amplitude; once the "resonant frequency" is found, the addition effect occurs. The resulting phenomenon can be described as follows: in this example the pipe "helps" the speaker by resonating at a specific frequency, their efforts add up and "result" in an audibly loud effect. With musical instruments this phenomenon is easy to see, since the design of most instruments contains elements called resonators. It is not hard to guess their purpose: amplifying a certain frequency or musical tone. For example: the guitar body, with a sound hole coupled to its internal volume; the tube of the flute (and all pipes in general); the cylindrical body of a drum, which is itself a resonator of a certain frequency.
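For a pipe open at one end and closed at the other, the resonance frequencies can be sketched as follows (a hypothetical 1-meter pipe, with 343 m/s assumed; such a pipe supports only odd harmonics of its quarter-wave fundamental):

```python
def pipe_resonances(length_m, speed_of_sound=343.0, count=3):
    """Resonance frequencies of a pipe open at one end and closed at the
    other: only odd harmonics, f_k = (2k - 1) * v / (4 * L)."""
    return [(2 * k - 1) * speed_of_sound / (4 * length_m)
            for k in range(1, count + 1)]

# A hypothetical 1 m pipe resonates near:
print(pipe_resonances(1.0))  # [85.75, 257.25, 428.75] Hz
```

Sweeping the speaker's frequency through any of these values would produce the loudness jump described above.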

Frequency spectrum of sound and frequency response

Since in practice waves of a single frequency are virtually never encountered, it becomes necessary to decompose the entire audible sound spectrum into overtones, or harmonics. For this purpose there are graphs that display the dependence of the relative energy of sound vibrations on frequency; such a graph is called a sound frequency spectrum. Frequency spectra come in two types: discrete and continuous. A discrete spectrum displays individual frequencies separated by empty gaps; in a continuous spectrum, all audio frequencies are present at once.
In music and acoustics, the usual graph is the amplitude-frequency response (abbreviated "AFR", commonly called the frequency response). This graph shows the dependence of the amplitude of sound vibrations on frequency across the entire audible spectrum (20 Hz - 20 kHz). From such a graph it is easy to see, for example, the strengths or weaknesses of a particular speaker or of an acoustic system as a whole: the regions of strongest energy output, frequency dips and peaks, attenuation, and the steepness of the roll-off.
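The idea of a discrete spectrum can be illustrated directly: decomposing a signal built from two pure tones yields energy only at those two frequencies, with gaps everywhere else. A minimal sketch using a naive discrete Fourier transform (the sample rate and tone frequencies are arbitrary illustrative values; real tools use the FFT for speed):

```python
import cmath
import math

SAMPLE_RATE = 1000   # Hz (illustrative)
N = 1000             # one second of samples -> 1 Hz bin spacing

# Signal: a mix of two pure tones, 50 Hz and 120 Hz.
signal = [math.sin(2 * math.pi * 50 * t / SAMPLE_RATE)
          + 0.5 * math.sin(2 * math.pi * 120 * t / SAMPLE_RATE)
          for t in range(N)]

def dft_magnitude(x, k):
    """Magnitude of the k-th DFT bin (here the bin index equals Hz)."""
    s = sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
            for n in range(len(x)))
    return abs(s) / len(x)

# The spectrum is discrete: significant energy only at the two tone bins.
peaks = [k for k in range(200) if dft_magnitude(signal, k) > 0.1]
print(peaks)   # [50, 120]
```

Every other bin comes out near zero, which is exactly the "individual frequencies separated by empty gaps" of a discrete spectrum.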

Propagation of sound waves, phase and antiphase

The process of propagation of sound waves occurs in all directions from the source. The simplest example to understand this phenomenon is a pebble thrown into water.
From the spot where the stone falls, waves spread across the surface of the water in all directions. Now imagine a loudspeaker in a certain enclosed volume, say a closed box, connected to an amplifier and playing some musical signal. It is easy to notice (especially with a powerful low-frequency signal, such as a bass drum) that the speaker cone makes a rapid movement "forward" and then an equally rapid movement "backward". When the cone moves forward, it emits the sound wave that we subsequently hear. But what happens when the speaker moves backward? Paradoxically, the same thing: the speaker produces the same sound, only in our example it propagates entirely within the volume of the box, without escaping its limits (the box is closed). The example above exhibits quite a few interesting physical phenomena, the most significant of which is the concept of phase.

The sound wave that the speaker emits toward the listener is "in phase". The reverse wave, which goes into the volume of the box, is correspondingly in antiphase. It remains to understand what these concepts mean. The phase of a signal is the sound pressure level at the current moment in time at some point in space. Phase is easiest to understand through the example of music played back by an ordinary floor-standing stereo pair of home speaker systems. Imagine two such floor-standing speakers installed in a room and playing. Both speakers then reproduce a synchronous signal of varying sound pressure, and the sound pressure of one speaker adds to the sound pressure of the other. This effect occurs because the left and right speakers reproduce the signal synchronously; in other words, the peaks and troughs of the waves emitted by the left and right speakers coincide.

Now imagine that the sound pressures still vary in the same way (have not changed in shape), but are now opposite to each other. This can happen if one of the two speaker systems is connected in reverse polarity (the "+" cable from the amplifier to the "-" terminal of the speaker, and the "-" cable from the amplifier to the "+" terminal). In this case, the inverted signal causes a pressure difference, which can be represented in numbers as follows: the left speaker system creates a pressure of "1 Pa" while the right speaker system creates a pressure of "minus 1 Pa". As a result, the total sound volume at the listener's position is zero. This phenomenon is called antiphase. Looking at the example in more detail, two speakers playing "in phase" create identical regions of air compression and rarefaction, effectively helping each other. In the case of idealized antiphase, a region of compressed air created by one speaker is accompanied by a region of rarefied air created by the second speaker, which looks approximately like the mutual, synchronous cancellation of waves. In practice, however, the volume does not drop to zero; instead we hear a strongly distorted and weakened sound.
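The in-phase and antiphase cases are easy to demonstrate numerically: adding a signal to an identical copy doubles the pressure, while adding it to an inverted copy (the reverse-polarity connection) cancels it completely in the idealized case. A minimal sketch with illustrative tone parameters:

```python
import math

def tone(n, amplitude=1.0, freq=100.0, rate=8000.0):
    """n samples of a pure tone (all parameters are illustrative)."""
    return [amplitude * math.sin(2 * math.pi * freq * t / rate)
            for t in range(n)]

left = tone(200)
right_in_phase = tone(200)            # same polarity as the left speaker
right_inverted = [-s for s in left]   # "+" and "-" swapped at the terminals

in_phase = [a + b for a, b in zip(left, right_in_phase)]
antiphase = [a + b for a, b in zip(left, right_inverted)]

# In phase: pressures add, giving double amplitude at the listener.
print(round(max(in_phase), 3))          # ~2.0
# Idealized antiphase: the total pressure is exactly zero everywhere.
print(max(abs(s) for s in antiphase))   # 0.0
```

As the text notes, real rooms never reach this perfect zero; the sketch shows only the idealized limit.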

The most accessible way to describe this phenomenon is as two signals with the same oscillation frequency but shifted in time. It is convenient to picture such shifts using ordinary round clocks. Imagine several identical round clocks hanging on a wall. When their second hands run synchronously, 30 seconds on one clock and 30 on another, this is an example of signals that are in phase. If the second hands move at the same speed but with an offset, say 30 seconds on one clock and 24 seconds on another, this is a classic example of a phase shift. Phase is likewise measured in degrees, around a virtual circle. When two signals are shifted relative to each other by 180 degrees (half a period), classical antiphase is obtained. In practice, minor phase shifts often occur; they too can be measured in degrees and successfully eliminated.
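The clock analogy translates directly into a formula: a time offset of dt within a period T corresponds to a phase shift of 360 * dt / T degrees. A small sketch using the numbers from the example above:

```python
def phase_shift_degrees(dt, period):
    """Phase shift, in degrees, between two signals of the same period
    offset in time by dt."""
    return (360.0 * dt / period) % 360.0

# Second hands: 30 s vs 30 s on a 60 s dial -> no shift (in phase).
print(phase_shift_degrees(30 - 30, 60))   # 0.0
# 30 s vs 24 s -> a 6 s offset on a 60 s dial -> 36 degrees.
print(phase_shift_degrees(30 - 24, 60))   # 36.0
# Half a period of offset is classical antiphase.
print(phase_shift_degrees(30, 60))        # 180.0
```

The same formula is what audio tools use when they report a phase error between channels in degrees.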

Waves can be plane or spherical. A plane wavefront propagates in only one direction and is rarely encountered in practice. A spherical wavefront is the simplest type of wave: it originates from a single point and travels in all directions. Sound waves have the property of diffraction, i.e. the ability to bend around obstacles and objects. The degree of bending depends on the ratio of the sound wavelength to the size of the obstacle or opening. Diffraction also occurs when there is an obstacle in the path of the sound. Two scenarios are then possible: 1) if the obstacle is much larger than the wavelength, the sound is reflected or absorbed (depending on the absorbency of the material, the thickness of the obstacle, and so on), and an "acoustic shadow" zone forms behind the obstacle; 2) if the obstacle is comparable to the wavelength or smaller, the sound diffracts to some extent in all directions. If a sound wave travelling in one medium hits the interface with another medium (for example, air meeting a solid), three scenarios are possible: 1) the wave is reflected from the interface; 2) the wave passes into the other medium without changing direction; 3) the wave passes into the other medium with a change of direction at the boundary, which is called "wave refraction".

The ratio of the excess pressure of a sound wave to the oscillatory volume velocity is called the wave impedance. In simple terms, the wave impedance of a medium is its ability to absorb sound waves or "resist" them. The reflection and transmission coefficients depend directly on the ratio of the wave impedances of the two media. The wave impedance of a gas is much lower than that of water or a solid. Therefore, when a sound wave in air strikes a solid object or the surface of deep water, the sound is either reflected from the surface or absorbed to a large degree. This depends on the thickness of the surface (water or solid) on which the sound wave falls: when the thickness of the solid or liquid medium is small, sound waves pass through almost completely; conversely, when the medium is thick, the waves are more often reflected. Reflected sound waves obey a well-known physical law: "the angle of incidence equals the angle of reflection". When a wave passes from a medium of lower density into a medium of higher density, the phenomenon of refraction occurs: the sound wave bends (refracts) at the boundary, which is necessarily accompanied by a change of speed. Refraction also depends on the temperature of the medium in which it takes place.
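The dependence of reflection on the ratio of wave impedances can be sketched with the standard normal-incidence formulas. The impedance figures below are typical textbook values for air and water and serve only as an illustration:

```python
def pressure_reflection(z1, z2):
    """Pressure reflection coefficient at normal incidence:
    r = (Z2 - Z1) / (Z2 + Z1)."""
    return (z2 - z1) / (z2 + z1)

def intensity_reflectance(z1, z2):
    """Fraction of incident sound energy that is reflected: R = r^2."""
    return pressure_reflection(z1, z2) ** 2

# Characteristic impedances in Pa*s/m (typical illustrative values):
Z_AIR = 415.0
Z_WATER = 1.48e6

R = intensity_reflectance(Z_AIR, Z_WATER)
print(round(R, 4))   # nearly all energy reflects at an air/water boundary
```

Because the two impedances differ by more than three orders of magnitude, over 99% of the energy is reflected, which is why sound from the air barely penetrates into deep water.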

As sound waves propagate through space, their intensity inevitably decreases: the waves attenuate and the sound weakens. Encountering this effect in practice is easy: if two people stand in a field at some close distance (a meter or less) and talk to each other, and then begin to move apart, the same conversational volume becomes less and less audible. This example clearly demonstrates the decrease in the intensity of sound waves. Why does this happen? The causes are various processes of heat exchange, molecular interaction and internal friction. Most often, sound energy is converted into heat. Such processes inevitably arise in any of the three sound-propagation media and can be characterized as absorption of sound waves.
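Besides absorption, the weakening with distance in the field example is dominated by simple geometric spreading: for a point source in a free field, intensity falls as 1/r², so the level drops by about 6 dB for every doubling of distance. A minimal sketch under those idealized assumptions (absorption neglected):

```python
import math

def level_drop_db(r1, r2):
    """Drop in sound pressure level (dB) when moving from distance r1
    to distance r2 from a point source in a free field (1/r^2 law)."""
    return 20.0 * math.log10(r2 / r1)

# Moving from 1 m to 2 m away: about 6 dB quieter.
print(round(level_drop_db(1.0, 2.0), 2))   # ~6.02
# From 1 m to 8 m (three doublings of distance): about 18 dB quieter.
print(round(level_drop_db(1.0, 8.0), 2))   # ~18.06
```

In a real field the absorption processes described above add a further, frequency-dependent loss on top of this geometric decline.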

The intensity and degree of absorption of sound waves depend on many factors, such as the pressure and temperature of the medium, and also on the specific frequency of the sound. When a sound wave propagates through a liquid or gas, friction arises between particles, an effect known as viscosity. As a result of this friction at the molecular level, sound energy is converted into heat; the greater the viscosity and thermal conductivity of the medium, the greater the absorption. Sound absorption in gases also depends on pressure (atmospheric pressure changes with altitude above sea level). As for the dependence of absorption on frequency: given the viscosity and thermal-conductivity effects just mentioned, the higher the frequency of the sound, the greater the absorption. For example, at normal temperature and pressure, the absorption of a 5,000 Hz wave in air is about 3 dB/km, while the absorption of a 50,000 Hz wave is about 300 dB/km.
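The numbers above follow the classical quadratic law: viscous and thermal absorption grows roughly as the square of frequency, so a tenfold increase in frequency gives roughly a hundredfold increase in absorption. A rough sketch built on the figure from the text (note this f² scaling is an idealization; real atmospheric absorption also depends on humidity, temperature and pressure):

```python
def scaled_absorption(alpha_ref, f_ref, f):
    """Classical (viscous + thermal) absorption scales roughly as f^2:
    alpha(f) ~ alpha_ref * (f / f_ref)^2.
    alpha_ref is the known absorption at reference frequency f_ref."""
    return alpha_ref * (f / f_ref) ** 2

# Reference point from the text: ~3 dB/km at 5 kHz in air.
print(scaled_absorption(3.0, 5000.0, 50000.0))   # 300.0 dB/km at 50 kHz
```

This is why high frequencies die out first over distance, while low bass carries much further.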

In solid media, all the above dependencies (thermal conductivity and viscosity) still hold, but several further conditions are added. They are connected with the molecular structure of solid materials, which varies and has its own inhomogeneities. Depending on this internal structure, the absorption of sound waves can differ and depends on the specific material. When sound passes through a solid body, the wave undergoes a number of transformations and distortions, most often leading to the scattering and absorption of sound energy. At the molecular level, a dislocation effect can occur, in which the sound wave displaces atomic planes, which then return to their original position. Alternatively, moving dislocations may collide with dislocations perpendicular to them, or with defects in the crystal structure, which slows them down and, as a consequence, absorbs some of the sound wave. The sound wave can also resonate with these defects, which distorts the original wave. The energy of the sound wave, at the moment of its interaction with the elements of the material's molecular structure, is dissipated through internal-friction processes.


Before you suspect that the sound card in your computer is broken, carefully inspect the PC's connectors for external damage. Also check the functionality of the subwoofer, speakers or headphones through which the sound is played by connecting them to another device. The cause of the problem may lie in the equipment itself.

It is also possible that reinstalling the Windows operating system, whether XP, 7, 8 or 10, will help in your situation, since the necessary settings may simply have been lost.

Let's move on to checking the sound card

Method 1

The first step is to deal with the device drivers. To do this you need:


After this, the drivers will be updated and the problem will be resolved.

This procedure can also be carried out if a current version of the driver software is available on removable media. In that situation, install it by specifying the path to the appropriate folder.

If the audio card does not appear in Device Manager at all, move on to the next option.

Method 2

In this case, a complete check is required to make sure the hardware is correctly connected. Do the following, in this order:


Please note that this option is only suitable for discrete sound cards installed as a separate board.

Method 3

If the visual inspection and the check of the speakers or headphones showed that they are in working order, and reinstalling the OS brought no results, we move on:


After the sound card test is completed, the system will report its status; if the card is inoperative, you will see this in the results.

Method 4

Another option for quickly and easily checking the sound card in Windows:


In this way, we will run a diagnosis of audio problems on the computer.

The program will offer several likely problems and also list the connected audio devices. If one of these problems is present, the diagnostic wizard will help you identify it quickly.

Method 5

Another way to check whether the sound card is working is as follows:


In the "Driver" and "Information" tabs you will find additional data about the parameters of all devices installed in your PC, both integrated and discrete. This method also allows you to diagnose problems and quickly identify them through software testing.

Now you know several quick and easy ways to check your sound card. Their main advantage is that they require no Internet access, and all the procedures can be carried out independently, without turning to a specialized service.



