Is there sound in space? Sound propagation, phase, and antiphase

Sounds belong to the field of phonetics. The study of sounds is part of every school curriculum in the Russian language: pupils first meet sounds and their main characteristics in the lower grades, and study them in more detail, with harder examples and edge cases, in middle and high school. This page gives only the basic facts about the sounds of the Russian language, in compressed form. If you need to study the structure of the speech apparatus, the tonality of sounds, articulation, acoustic components, and other topics beyond the scope of the modern school curriculum, consult specialized textbooks and manuals on phonetics.

What is sound?

Sound, like the word and the sentence, is a basic unit of language. A sound, however, expresses no meaning of its own; it conveys only how a word is pronounced, and this is what lets us tell words apart. Words may differ in the number of sounds (port - sport, crow - funnel), in their set of sounds (lemon - liman, cat - mouse), in the order of sounds (nose - dream, bush - knock), or may share no sounds at all (forest - park).

What sounds are there?

In Russian, sounds are divided into vowels and consonants. Russian has 33 letters and 42 sounds: 6 vowel sounds, 36 consonant sounds, and 2 letters (ь, ъ) that denote no sound at all. The mismatch between the number of letters and sounds (leaving aside ь and ъ) arises because the 10 vowel letters correspond to only 6 vowel sounds, while the 21 consonant letters correspond to 36 consonant sounds (counting all the voiced/voiceless and hard/soft variants). In writing, a sound is shown in square brackets.
The following sounds do not exist: [е], [ё], [ю], [я] (these letters denote combinations of sounds), [ь], [ъ], the soft [ж'], [ш'], [ц'] (these consonants are always hard), and the hard [й], [ч], [щ] (these are always soft).

Scheme 1. Letters and sounds of the Russian language.

How are sounds pronounced?

We pronounce sounds while exhaling (the only exception is the interjection "a-a-a" expressing fright, which is pronounced while inhaling). The division of sounds into vowels and consonants reflects how a person pronounces them. Vowels are produced by the voice alone: the exhaled air passes through the tensed vocal cords and leaves the mouth freely. Consonants consist of noise, or of voice and noise together, because the exhaled air meets an obstacle on its way, a closure or a narrow gap formed by the tongue, lips, or teeth. Vowels are pronounced loudly, consonants more quietly. A person can sing vowels with the voice (the exhaled air), raising or lowering the pitch; consonants cannot be sung and are pronounced with roughly equal loudness. The hard and soft signs denote no sound and cannot be pronounced on their own; within a word they affect the consonant before them, making it hard or soft.

Word transcription

The transcription of a word is a record of its sounds, that is, a record of how the word is actually pronounced. Sounds are enclosed in square brackets; compare: a is a letter, [a] is a sound. The softness of a consonant is marked with an apostrophe: p is a letter, [p] is the hard sound, [p'] the soft one. Voiced and voiceless consonants get no special mark. The whole transcription is written in square brackets. Examples: door → [dv'er'], thorn → [kal'uch'ka]. Stress is sometimes marked in the transcription with a stress mark before the stressed vowel.

There is no clear juxtaposition of letters and sounds. In the Russian language, there are many cases of substitution of vowel sounds depending on the place of stress of a word, substitution of consonants or dropping out of consonant sounds in certain combinations. When compiling a transcription of a word, the rules of phonetics are taken into account.

Color scheme

In phonetic analysis, words are sometimes drawn with color schemes: letters are painted with different colors depending on what sound they mean. Colors reflect the phonetic characteristics of sounds and help you visualize how a word is pronounced and what sounds it consists of.

All vowels (stressed and unstressed) are marked with a red background. Iotated vowel letters are marked green-and-red: green for the soft consonant sound [й'], red for the vowel sound that follows it. Consonant letters denoting hard sounds are colored blue; those denoting soft sounds are colored green. The soft and hard signs are painted gray, or are left uncolored.

Designations:
- red — vowel sound;
- green + red — iotated vowel (the sound [й'] plus a vowel);
- blue — hard consonant;
- green — soft consonant;
- blue-green — consonant that may be either hard or soft.

Note. Blue-green is never used in the schemes for phonetic analysis themselves, since a consonant cannot be both soft and hard at the same time; in the table above it only marks a consonant that can be either soft or hard.

Space is not a homogeneous void. Between objects drift clouds of gas and dust: the remnants of supernova explosions and the birthplaces of stars. In some regions this interstellar gas is dense enough to carry sound waves, though at frequencies no human ear could detect.

Is there sound in space?

When an object moves - be it a vibrating guitar string or an exploding firework - it pushes on the nearby air molecules. Those molecules collide with their neighbors, and those in turn with the next ones. The motion spreads through the air as a wave, and when it reaches the ear, we perceive it as sound.

As a sound wave travels through the air, its pressure rises and falls, like seawater in a storm. The number of these oscillations per second is called the frequency of the sound and is measured in hertz (1 Hz is one oscillation per second). The distance between successive pressure peaks is called the wavelength.

Sound can propagate only in a medium whose particles are close enough together, that is, where the wavelength is longer than what physicists call the mean free path - the average distance a molecule travels between one collision and the next. A dense medium can therefore carry short-wavelength sound, and a rarefied one only long-wavelength sound.

Long-wavelength sounds have frequencies the ear perceives as low tones. In a gas whose mean free path exceeds 17 m (the wavelength of a 20 Hz tone in air), sound waves are too low in frequency for humans to perceive; such waves are called infrasound. If there were aliens whose ears picked up very low notes, they would know for certain whether sounds can be heard in outer space.
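The 17-meter figure follows directly from the wave relation frequency = speed / wavelength. A minimal sketch, assuming the speed of sound in air is ~343 m/s:

```python
# Quick check: in air (sound speed ~343 m/s), what frequency
# corresponds to a 17-meter wavelength?  f = v / wavelength.
SPEED_OF_SOUND_AIR = 343.0  # m/s, at roughly 20 degrees C

def frequency_for_wavelength(wavelength_m, speed=SPEED_OF_SOUND_AIR):
    """Frequency (Hz) of a sound wave with the given wavelength."""
    return speed / wavelength_m

f_17m = frequency_for_wavelength(17.0)
print(f"17 m wavelength -> {f_17m:.1f} Hz")  # ~20 Hz, the edge of human hearing
```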

Song of the Black Hole

Some 220 million light-years away, at the center of a cluster of thousands of galaxies, hums the lowest note the universe has ever produced: 57 octaves below middle C, about a million billion times deeper than the lowest sound a person can hear.

The deepest sound humans can hear completes about one oscillation every 1/20 of a second. The note from the black hole in the constellation Perseus completes about one oscillation every 10 million years.
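A rough order-of-magnitude check of that claim, assuming middle C ≈ 261.6 Hz and halving the frequency once per octave. The arithmetic lands in the tens of millions of years per oscillation, consistent with the figure above to within the choice of exact note and rounding:

```python
# Order-of-magnitude check: a note 57 octaves below middle C.
# Assumes middle C ~ 261.6 Hz; each octave halves the frequency.
MIDDLE_C_HZ = 261.6
SECONDS_PER_YEAR = 365.25 * 24 * 3600

f_black_hole = MIDDLE_C_HZ / 2**57            # frequency in hertz
period_years = (1 / f_black_hole) / SECONDS_PER_YEAR

print(f"Frequency: {f_black_hole:.3e} Hz")
print(f"One oscillation every ~{period_years:.1e} years")  # tens of millions of years
```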

This became known in 2003, when NASA's Chandra X-ray Observatory spotted something in the gas filling the Perseus Cluster: concentric rings of light and dark, like ripples on a pond. Astrophysicists recognized them as the traces of incredibly low-frequency sound waves. The brighter rings are the crests, where pressure in the gas is highest; the darker rings are the troughs, where pressure is lower.

Sound that can be seen

Hot, magnetized gas swirls around the black hole, much as water swirls around a drain. As it moves, it generates a powerful electromagnetic field - strong enough to accelerate gas near the edge of the black hole to nearly the speed of light, launching it in huge outflows called relativistic jets. These jets shove the surrounding gas aside as they go, and that impact is what drives these eerie sounds from space.

The waves travel through the Perseus Cluster for hundreds of thousands of light-years from their source, but sound can go only as far as there is enough gas to carry it, so it stops at the edge of the gas cloud that fills Perseus. That means its sound cannot be heard on Earth; one can only see its effect on the gas cloud. It is like peering through space into a soundproof chamber.

A strange planet

Our planet lets out a deep groan every time its crust shifts, so there is no doubt that sound can propagate from it toward space. An earthquake can create vibrations in the atmosphere at frequencies of one to five hertz, and if it is strong enough, it can send infrasonic waves up through the atmosphere toward outer space.

Of course, there is no sharp boundary where the Earth's atmosphere ends and space begins; the air simply thins out until it eventually disappears altogether. Between 80 and 550 kilometers above the surface, the mean free path of a molecule is about a kilometer - roughly 59 times longer than the 17-meter wavelength of the deepest audible sound - so the air at this altitude is far too thin to carry anything audible. It can carry only long infrasonic waves.
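The "59 times" figure can be reproduced from the two numbers in the text, under the assumption that audibility requires the wavelength to exceed the mean free path:

```python
# At 80-550 km altitude the mean free path is ~1 km, while the
# longest audible wavelength (a 20 Hz tone in air) is ~17 m.
mean_free_path_m = 1000.0            # ~1 km between molecular collisions
longest_audible_wavelength_m = 17.0  # 20 Hz wave in air

ratio = mean_free_path_m / longest_audible_wavelength_m
print(f"The air is ~{ratio:.0f}x too thin for audible sound")  # ~59
```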

When a magnitude 9.0 earthquake struck the northeast coast of Japan in March 2011, seismographs around the world recorded its waves passing through the Earth, and the shaking set off low-frequency vibrations in the atmosphere. Those vibrations traveled all the way up to the Gravity field and steady-state Ocean Circulation Explorer (GOCE), an ESA satellite that maps Earth's gravity from a low orbit about 270 kilometers above the surface. And the satellite managed to record these sound waves.

GOCE carries very sensitive accelerometers on board that control its ion thruster, helping keep the satellite in a stable orbit. In 2011, those accelerometers detected vertical displacements of the very thin atmosphere around the satellite, along with wave-like shifts in air pressure, as the sound waves from the earthquake propagated. The thrusters corrected for the displacement, and the stored correction data became something like an infrasound recording of the earthquake.

The recording sat unnoticed in the satellite data until a team of scientists led by Rafael F. Garcia published a paper describing it.

The first sound in the universe

If you could go back in time to roughly the first 760,000 years after the Big Bang, you could find out first-hand whether there is sound in space: back then, the universe was so dense that sound waves could travel freely.

At that time the universe was filled with charged particles - protons and electrons - which absorbed or scattered photons, the particles that make up light. Only after everything cooled enough for the charges to condense into neutral atoms could the first photons begin to travel freely through space as light.

Today that light reaches Earth as a faint microwave glow, visible only to very sensitive radio telescopes. Physicists call it relic radiation, or the cosmic microwave background: the oldest light in the universe. And it answers the question of whether there is sound in space, because it carries a record of the oldest music in the universe.

Light to help

How does light help us learn whether there is sound in space? Sound waves travel through air (or interstellar gas) as pressure fluctuations. When gas is compressed, it heats up (on a cosmic scale this effect is so powerful that it forms stars), and when it expands, it cools. Sound waves propagating through the early universe caused slight pressure fluctuations in the gas, which in turn left subtle temperature fluctuations imprinted on the cosmic microwave background.

Using those temperature variations, University of Washington physicist John Cramer was able to reconstruct these eerie sounds of space - the music of the expanding universe. He multiplied the frequency by a factor of 10^26 so that human ears could hear it.

So no one will ever really hear a scream in space, but sound waves do move through clouds of interstellar gas and through the rarefied outer reaches of Earth's atmosphere.

If we are talking about objective, measurable parameters of quality, then no, of course not: recording to vinyl or cassette always introduces additional distortion and noise. The point, however, is that such distortion and noise do not subjectively spoil the impression of the music, and often do quite the opposite. Our hearing and our sound-analysis system work in complicated ways; what matters for our perception and what counts as quality from the technical side are slightly different things.

MP3 is a separate issue altogether: it is a deliberate degradation of quality in order to shrink the file. MP3 encoding discards the quieter harmonics and smears transient attacks, which means a loss of detail and a "blurring" of the sound.

In terms of quality and faithful reproduction of everything that happens, the ideal option is an uncompressed digital recording. CD quality - 16 bits, 44,100 Hz - is no longer the limit: you can raise both the bit depth (24 or 32 bits) and the sampling rate (48,000, 88,200, 96,000, 192,000 Hz). Bit depth determines the dynamic range, and the sampling rate determines the frequency range. Given that the human ear hears at best up to 20,000 Hz, the Nyquist theorem says a sampling rate of 44,100 Hz should be enough; in practice, though, a higher rate is better for accurately capturing complex short sounds such as drum hits. A larger dynamic range is likewise better, so that quieter sounds can be recorded without distortion. Realistically, the further these two parameters are increased, the harder it is to notice any change.
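The two numbers discussed above can be sketched with the usual rules of thumb: the Nyquist limit is half the sampling rate, and an ideal linear PCM encoding gains roughly 6.02 dB of dynamic range per bit:

```python
import math

# Rules of thumb for digital audio:
#   Nyquist limit = sample_rate / 2
#   dynamic range of ideal linear PCM = 20 * log10(2**bits) ~ 6.02 dB/bit
def nyquist_limit_hz(sample_rate_hz):
    return sample_rate_hz / 2

def dynamic_range_db(bit_depth):
    return 20 * math.log10(2**bit_depth)

print(nyquist_limit_hz(44100))      # 22050.0 -> covers the ~20 kHz hearing limit
print(round(dynamic_range_db(16)))  # ~96 dB for CD audio
print(round(dynamic_range_db(24)))  # ~144 dB for 24-bit audio
```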

At the same time, you can appreciate all the delights of high-quality digital sound only with a good sound card. What is built into most PCs is generally terrible; Macs do better with their built-in cards, but it is best to have something external. Then, of course, there is the question of where to get digital recordings of higher-than-CD quality :) Although even the worst MP3 will sound noticeably better on a good sound card.

Returning to analog formats: people continue to use them not because they are really better or more accurate, but because perfectly clean, accurate recording is usually not the desired result. Digital artifacts - poor processing algorithms, low bit depth or sample rate, digital clipping - admittedly sound much nastier than analog ones, but they can be avoided. And it turns out that a truly clean, accurate digital recording sounds too sterile; it lacks saturation. If, for example, you record drums to tape, that saturation appears and survives even if the recording is later digitized. Vinyl, too, sounds cooler, even when the tracks pressed onto it were made entirely on a computer.

And of course there are the external attributes and associations invested in all this: the way it looks, the emotions of the people involved. It is easy to understand the desire to hold a record in your hands, or to listen to a cassette on an old tape deck rather than to a file on a computer, or the people who still use multitrack tape machines in studios even though that is far more complicated and costly. It has its own particular fun.

February 18, 2016

The world of home entertainment is quite varied: watching a movie on a good home-theater system, absorbing gameplay, or listening to music. As a rule, everyone finds something of their own here, or combines everything at once. But whatever a person's goals in organizing their leisure, and whatever lengths they go to, all of these pursuits are firmly connected by one simple and understandable word: "sound". In each case, it is the sound that leads us by the hand. The question is not so simple or trivial, though, especially when the aim is high-quality sound in a room or any other conditions. That does not always require buying expensive hi-fi or high-end components (although they certainly help); sometimes a good grasp of physical theory is enough to eliminate most of the problems facing anyone who sets out to get high-quality sound reproduction.

Next, the theory of sound and acoustics will be considered from the standpoint of physics. I will try to make it as accessible as possible to anyone who may be far from physical laws and formulas but nevertheless dreams of building a perfect acoustic system. I do not claim that you must know these theories thoroughly to achieve good results in this area at home (or in a car, for example); however, understanding the basics will let you avoid many foolish and absurd mistakes and get the maximum sound quality out of a system of any level.

General sound theory and musical terminology

What is sound? It is the sensation perceived by the organ of hearing, the ear (the phenomenon itself exists even without an ear taking part in the process, but it is easier to understand this way), which arises when the eardrum is excited by a sound wave. The ear here acts as a "receiver" of sound waves of different frequencies.
A sound wave is essentially a sequential series of compressions and rarefactions of the medium (most often air, under normal conditions) at various frequencies. Sound waves are oscillatory in nature, caused and produced by the vibration of some body. A classical sound wave can arise and propagate in three kinds of elastic media: gaseous, liquid, and solid. When a sound wave arises in any of them, the medium itself inevitably changes: the density or pressure of the air varies, particles of the air mass move, and so on.

Since a sound wave is oscillatory in nature, it has such a characteristic as frequency. Frequency is measured in hertz (after the German physicist Heinrich Rudolf Hertz) and denotes the number of oscillations in one second; a frequency of 20 Hz, for example, means 20 oscillations per second. The subjective sense of pitch also depends on frequency: the more oscillations per second, the "higher" the sound seems. A sound wave has another important characteristic, the wavelength: the distance the wave travels during one full period of oscillation, that is, the speed of sound divided by the frequency. For example, the wavelength of the lowest sound in the human audible range, at 20 Hz, is 16.5 meters, while the wavelength of the highest, at 20,000 Hz, is about 1.7 centimeters.
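Those two wavelength figures imply a sound speed of about 330 m/s (close to the speed in cool air). A minimal sketch of the relation wavelength = speed / frequency, using that value:

```python
# Wavelength = speed of sound / frequency.
# The article's figures (16.5 m at 20 Hz, ~1.7 cm at 20 kHz)
# assume a sound speed of ~330 m/s.
SPEED = 330.0  # m/s

def wavelength_m(frequency_hz, speed=SPEED):
    """Wavelength in meters of a tone at the given frequency."""
    return speed / frequency_hz

print(wavelength_m(20))     # 16.5 m   (lowest audible tone)
print(wavelength_m(20000))  # 0.0165 m, i.e. ~1.7 cm (highest audible tone)
```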

The human ear is designed to perceive waves only in a limited range, roughly 20 Hz - 20,000 Hz (depending on the individual, some hear a bit more, some less). This does not mean that sounds below or above these frequencies do not exist; they are simply not perceived by the human ear, lying outside the audible range. Sound above the audible range is called ultrasound; sound below it, infrasound. Some animals can perceive ultra- and infrasound, and some even use these ranges to orient themselves in space (bats, dolphins). If sound passes through a medium that is not in direct contact with the human hearing organ, it may not be heard at all, or may arrive greatly weakened.

Musical terminology includes such important notions as the octave, the tone, and the overtone of a sound. An octave is an interval in which the frequency ratio between sounds is 1 to 2. The interval of an octave is very easy to hear, and sounds within it are perceived as very similar to each other. An octave can also be defined as a sound that makes twice as many oscillations as another in the same period of time: a frequency of 800 Hz is simply the next octave above 400 Hz, which is in turn an octave above 200 Hz. An octave is made up of tones and overtones. The oscillations in a harmonic sound wave of a single frequency are perceived by the human ear as a musical tone; high-frequency oscillations are heard as high-pitched sounds, low-frequency ones as low-pitched sounds. The ear can clearly distinguish sounds that differ by a single tone (in the range up to about 4,000 Hz). Despite this, music uses an extremely small number of tones, for reasons of harmonic consonance: everything is built on the principle of octaves.

Consider musical tones on the example of a string stretched in a certain way. Depending on the tension, the string is "tuned" to one specific frequency. When something acts on it with a given force and sets it vibrating, one specific tone is steadily produced, and we hear the frequency to which it was tuned. This sound is called the fundamental tone. The frequency of the note A of the first octave, 440 Hz, is officially accepted as the reference tone in music. Most musical instruments, however, never reproduce a pure fundamental alone; it is inevitably accompanied by overtones. Here it is worth recalling an important definition from musical acoustics, the concept of timbre. Timbre is the feature of musical sounds that gives instruments and voices their unique, recognizable character, even when comparing sounds of the same pitch and loudness. The timbre of each instrument depends on how the sound energy is distributed over the overtones at the moment the sound appears.

Overtones give the fundamental its specific coloration, by which we can easily identify a particular instrument and clearly distinguish its sound from another's. Overtones are of two kinds: harmonic and inharmonic. Harmonic overtones are, by definition, integer multiples of the fundamental frequency; if the overtones are not multiples of it and deviate noticeably from those values, they are called inharmonic. In music, non-multiple overtones play almost no role, so the term is usually shortened to "overtone", meaning a harmonic one. In some instruments, such as the piano, the fundamental barely has time to form: in a short time the energy of the overtones rises, and then declines just as rapidly. Many instruments create a so-called "transient" effect, in which the energy of certain overtones peaks at some moment, usually at the very beginning, then changes abruptly and passes to other overtones. The frequency range of each instrument can be considered separately and is usually limited by the fundamental frequencies that the instrument can reproduce.
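The definition of harmonic overtones as integer multiples of the fundamental can be sketched directly, using the concert pitch A = 440 Hz mentioned above:

```python
# Harmonic overtones are integer multiples of the fundamental.
# Using the concert pitch A = 440 Hz as the fundamental:
FUNDAMENTAL_HZ = 440.0

harmonics = [n * FUNDAMENTAL_HZ for n in range(1, 6)]
print(harmonics)  # [440.0, 880.0, 1320.0, 1760.0, 2200.0]

# The 2nd and 4th harmonics are exact octaves (frequency ratio 2:1):
assert harmonics[1] / harmonics[0] == 2
assert harmonics[3] / harmonics[1] == 2
```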

Sound theory also includes the notion of NOISE. Noise is any sound created by a combination of mutually inconsistent sources; the rustle of leaves swayed by the wind is a familiar example.

What determines loudness? Clearly, it depends directly on the amount of energy carried by the sound wave. To quantify loudness there is the concept of sound intensity: the flow of energy passing through some area of space (for example, a square centimeter) per unit of time (for example, per second). In normal conversation the intensity is about 10^-9 to 10^-10 W/cm^2. The ear perceives sounds over a very wide range of sensitivity, and its susceptibility is not uniform across the spectrum: the best-perceived range is 1,000 Hz - 4,000 Hz, which covers most of human speech.

Since sounds vary so widely in intensity, it is more convenient to treat loudness as a logarithmic quantity and measure it in decibels (after the Scottish-born scientist Alexander Graham Bell). The lower threshold of human hearing sensitivity is 0 dB; the upper, 120 dB, is also called the "pain threshold". The upper limit, too, is not perceived uniformly by the ear but depends on frequency: low-frequency sounds must be far more intense than high ones to provoke pain. At a low frequency of 31.5 Hz, for example, pain sets in at a level of 135 dB, whereas at 2,000 Hz it appears already at 112 dB. There is also the concept of sound pressure, which extends the usual account of sound-wave propagation in air: sound pressure is the alternating excess pressure that arises in an elastic medium as a sound wave passes through it.
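The decibel scale above can be sketched with the standard formula L = 10 · log10(I / I0), where I0 = 10^-12 W/m^2 is the conventional threshold-of-hearing reference:

```python
import math

# Sound intensity level in decibels, relative to the standard
# hearing-threshold reference I0 = 1e-12 W/m^2.
I0 = 1e-12  # W/m^2

def intensity_db(intensity_w_m2):
    """Intensity level in dB for a given intensity in W/m^2."""
    return 10 * math.log10(intensity_w_m2 / I0)

print(intensity_db(1e-12))  # 0 dB   -> threshold of hearing
print(intensity_db(1e-6))   # 60 dB  -> ordinary conversation
print(intensity_db(1.0))    # 120 dB -> the "pain threshold"
```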

Wave nature of sound

To better understand how a sound wave is generated, imagine a classic loudspeaker in a tube filled with air. If the speaker makes a sharp movement forward, the air in the immediate vicinity of the diaphragm is momentarily compressed; the air then expands, pushing the region of compressed air along the tube.
It is this wave motion that becomes sound when it reaches the hearing organ and "excites" the eardrum. When a sound wave arises in a gas, excess pressure and excess density are created, and the particles move at a small constant velocity. It is important to remember that the substance itself does not travel with the sound wave; only a temporary disturbance of the air masses occurs.

If we imagine a piston suspended in free space on a spring, making repeated back-and-forth movements, such oscillations are called harmonic or sinusoidal (plotted as a graph, they form a pure sine wave with repeated rises and falls). If the speaker in the tube from the example above performs harmonic oscillations, then each forward movement produces the familiar compression and each backward movement the opposite effect, rarefaction, so a wave of alternating compressions and rarefactions propagates along the tube. The distance along the tube between adjacent maxima or minima (points of equal phase) is the wavelength. If the particles oscillate parallel to the direction of wave propagation, the wave is called longitudinal; if perpendicular to it, transverse. Sound waves in gases and liquids are normally longitudinal, while in solids both types can occur; transverse waves in solids arise from resistance to changes of shape. The main difference between the two types is that a transverse wave can be polarized (its oscillations occur in a particular plane), while a longitudinal wave cannot.

Sound speed

The speed of sound depends directly on the characteristics of the medium in which it propagates. It is determined by two properties of the medium: the elasticity and the density of the material. The speed of sound in solids accordingly depends on the type of material and its properties. In gaseous media the speed depends on only one kind of deformation of the medium: compression-rarefaction. The pressure change in a sound wave occurs without heat exchange with the surrounding particles and is called adiabatic.
The speed of sound in a gas depends mainly on temperature: it rises as the temperature increases and falls as it decreases. It also depends on the size and mass of the gas molecules themselves: the smaller the mass and size of the particles, the greater the "conductivity" of the wave and hence the greater the speed.

In liquid and solid media, the principle of propagation and the speed of sound are similar to a wave in air: compression-rarefaction. But in these media, besides the same dependence on temperature, the density and the composition/structure of the medium matter considerably: for a given elasticity, the lower the density of the substance, the higher the speed of sound, and vice versa. The dependence on composition is more complicated and is determined in each specific case, taking into account the arrangement and interaction of the molecules or atoms.

Speed of sound in air at 20 °C: 343 m/s
Speed of sound in distilled water at 20 °C: 1481 m/s
Speed of sound in steel at 20 °C: 5000 m/s
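The temperature dependence mentioned above is often captured by the approximation v ≈ 331.3 · sqrt(1 + T/273.15) for dry air, with T in degrees Celsius; at 20 °C it reproduces the tabulated 343 m/s:

```python
import math

# Common approximation for the speed of sound in dry air:
#   v ~ 331.3 * sqrt(1 + T / 273.15), T in Celsius.
def sound_speed_air(temp_c):
    """Approximate speed of sound (m/s) in dry air at temp_c degrees C."""
    return 331.3 * math.sqrt(1 + temp_c / 273.15)

print(round(sound_speed_air(0)))    # ~331 m/s at the freezing point
print(round(sound_speed_air(20)))   # ~343 m/s, the tabulated value above
print(round(sound_speed_air(-20)))  # lower: speed falls with temperature
```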

Standing waves and interference

When a speaker creates sound waves in a confined space, reflection of the waves from the boundaries inevitably occurs, and as a result the effect of interference most often arises: two or more sound waves superimposed on each other. Special cases of interference are the formation of (1) beats and (2) standing waves. Beats occur when waves of close frequencies and amplitudes are added. When two such waves overlap, at some moments their amplitude peaks coincide "in phase", while at others the peaks of one fall on the troughs of the other, "in antiphase". Unlike standing waves, these phase coincidences do not hold constantly but recur at intervals. By ear, such beats are quite distinct, heard as a periodic rise and fall in volume. The mechanism is extremely simple: when the peaks coincide, the volume increases; when a peak coincides with a trough, it decreases.
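The rise and fall described above happens at the difference of the two frequencies. A minimal sketch, using a hypothetical pair of tones at 440 Hz and 444 Hz and checking the trigonometric identity behind beats at one instant:

```python
import math

# Two tones of close frequency sum to a tone at the average frequency
# whose loudness swells and fades at the difference frequency.
f1, f2 = 440.0, 444.0
beat_frequency = abs(f1 - f2)  # 4 Hz: four swells per second
print(beat_frequency)

# Check the underlying identity at one instant t:
#   sin(2*pi*f1*t) + sin(2*pi*f2*t)
#     == 2 * cos(pi*(f1-f2)*t) * sin(pi*(f1+f2)*t)
t = 0.123
lhs = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
rhs = 2 * math.cos(math.pi * (f1 - f2) * t) * math.sin(math.pi * (f1 + f2) * t)
assert abs(lhs - rhs) < 1e-9  # the slow cosine factor is the beat envelope
```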

Standing waves arise when two waves of the same amplitude, phase, and frequency are superimposed while traveling in opposite directions. In the region of space where a standing wave forms, a pattern of superimposed amplitudes arises, with alternating maxima (so-called antinodes) and minima (so-called nodes). The frequency, phase, and attenuation coefficient of the wave at the point of reflection are crucial here. Unlike traveling waves, a standing wave transfers no energy, because the forward and backward waves that form it carry equal amounts of energy in opposite directions. For a visual sense of a standing wave, take an example from home audio. Suppose we have floor-standing speakers in a limited space (a room). Playing a bass-heavy track, let us try changing the listener's position in the room: a listener who ends up in a minimum (subtraction) zone of the standing wave will feel the bass all but vanish, while a listener in a maximum (addition) zone gets the opposite effect, a marked boost of the bass region. The effect is observed at all octaves of the base frequency: if the base frequency is 440 Hz, the "addition" or "subtraction" will also be observed at 880 Hz, 1,760 Hz, 3,520 Hz, and so on.
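For a pair of parallel walls, standing waves (axial room modes) form at integer multiples of a fundamental given by f_n = n · v / (2L). A sketch for a hypothetical room dimension of 5 meters:

```python
# Axial room modes between two parallel walls a distance L apart:
#   f_n = n * v / (2 * L)
SPEED = 343.0        # m/s, speed of sound in air at ~20 C
room_length_m = 5.0  # hypothetical wall-to-wall distance

modes = [n * SPEED / (2 * room_length_m) for n in range(1, 5)]
print([round(f, 1) for f in modes])  # [34.3, 68.6, 102.9, 137.2]
```

At each of these frequencies a listener will find fixed zones of boosted and cancelled bass in the room.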

Resonance phenomenon

Most solid bodies have a resonance frequency of their own. The effect is easy to understand with an ordinary pipe open at only one end. Imagine that a speaker is attached to the other end of the pipe and can play one constant frequency, which can later be changed. The pipe has its own resonance frequency: in plain language, the frequency at which it "resonates", or sounds itself. If the frequency of the speaker (as a result of adjustment) coincides with the resonance frequency of the pipe, the volume increases severalfold. This is because the loudspeaker excites oscillations of the air column in the pipe with significant amplitude, and once the "resonant frequency" is found, the addition effect occurs. The resulting phenomenon can be described as follows: the pipe in this example "helps" the speaker by resonating at a specific frequency; their efforts add up and "pour out" into an audibly loud effect. In musical instruments this phenomenon is easy to trace, since the design of most of them includes elements called resonators, whose purpose, unsurprisingly, is to amplify a certain frequency or musical tone. Examples: a guitar body with its sound hole, matched to the body volume; the tube of a flute (and all tubes in general); the cylindrical shell of a drum, itself a resonator of a certain frequency.
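For the pipe closed at one end described above, standard acoustics gives a fundamental resonance of f = v / (4L), with only odd harmonics resonating. A sketch for a hypothetical half-meter pipe:

```python
# Resonance of a pipe closed at one end:
#   fundamental f = v / (4 * L); only odd multiples resonate.
SPEED = 343.0        # m/s, speed of sound in air
pipe_length_m = 0.5  # hypothetical half-meter pipe

fundamental = SPEED / (4 * pipe_length_m)
resonances = [n * fundamental for n in (1, 3, 5)]
print(round(fundamental, 1))              # 171.5 Hz
print([round(f, 1) for f in resonances])  # [171.5, 514.5, 857.5]
```

Sweeping the speaker's frequency through 171.5 Hz in this setup is the moment the "addition effect" described above would be heard.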

Frequency spectrum of sound and frequency response

Since pure single-frequency waves almost never occur in practice, it becomes necessary to decompose the sound of the entire audible range into overtones, or harmonics. For this purpose there are graphs that plot the relative energy of sound vibrations against frequency; such a graph is called the frequency spectrum of the sound. Frequency spectra come in two types: discrete and continuous. A discrete spectrum shows individual frequencies separated by gaps; in a continuous spectrum, all frequencies of the band are present at once.
In music and acoustics, the most common graph of this kind is the amplitude-frequency response, usually just called the frequency response. It shows the amplitude of sound vibrations as a function of frequency across the entire audible spectrum (20 Hz - 20 kHz). From such a graph it is easy to see, for example, the strengths and weaknesses of a particular speaker or of a speaker system as a whole: the regions of strongest energy output, frequency dips and peaks, attenuation, and the steepness of the roll-off.
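A discrete spectrum like the one described can be computed with a naive discrete Fourier transform. This is only an illustrative sketch (a real analyzer would use an FFT), applied to a made-up two-tone test signal:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform: one magnitude per frequency bin."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

# A made-up test signal: 50 Hz and 120 Hz tones sampled at 400 Hz for 1 s,
# so bin k of the transform corresponds to exactly k Hz.
rate = 400
signal = [math.sin(2 * math.pi * 50 * t / rate) +
          0.5 * math.sin(2 * math.pi * 120 * t / rate)
          for t in range(rate)]
mags = dft_magnitudes(signal)
peaks = sorted(range(len(mags)), key=mags.__getitem__, reverse=True)[:2]
print(sorted(peaks))   # the two dominant bins: [50, 120]
```

The two spikes at 50 Hz and 120 Hz, separated by empty bins, are exactly what a discrete spectrum plot displays.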

Propagation of sound waves, phase and antiphase

Sound waves propagate in all directions from their source. The simplest illustration of this is a pebble thrown into water.
From the spot where the stone falls, waves spread across the surface in every direction. Now imagine a speaker mounted in a certain volume, say a closed box, connected to an amplifier and playing some musical signal. It is easy to notice (especially with a powerful low-frequency signal, such as a bass drum) that the speaker cone makes a rapid movement "forward" and then an equally rapid movement "back". When the speaker moves forward, it emits the sound wave that we hear afterwards. But what happens when it moves backwards? Paradoxically, the same thing: the speaker produces the same sound, except that in our example it propagates entirely inside the volume of the box without leaving it (the box is closed). This example exhibits quite a few interesting physical phenomena, the most significant of which is the concept of phase.

The sound wave that the speaker mounted in the box radiates toward the listener is "in phase". The reverse wave, which travels into the volume of the box, is correspondingly in antiphase. It remains to understand what these concepts mean. The phase of a signal is the sound pressure level at the current moment of time at some point in space. Phase is easiest to understand through the playback of music by an ordinary stereo pair of floor-standing home speakers. Imagine two such speakers installed in a room and playing. Both speakers then reproduce a synchronous signal of variable sound pressure, and the sound pressure of one speaker adds to that of the other. This happens because the left and right channels play back synchronously: the peaks and troughs of the waves emitted by the left and right speakers coincide.

Now imagine that the sound pressures still vary in the same way, but are now opposite to each other. This can happen if one of the two speakers is connected in reverse polarity (the "+" cable from the amplifier to the "-" terminal of the speaker, and the "-" cable from the amplifier to the "+" terminal). The opposed signal then produces a pressure difference, which can be expressed in numbers as follows: the left speaker creates a pressure of "1 Pa" while the right speaker creates a pressure of "minus 1 Pa". As a result, the total sound volume at the listening position is zero. This phenomenon is called antiphase. Looking at the example in more detail: two speakers playing "in phase" create identical regions of air compression and rarefaction, and in effect help each other. In idealized antiphase, the region of compression created by one speaker coincides with a region of rarefaction created by the other, so the waves mutually and synchronously cancel. In practice, however, the volume does not drop all the way to zero; instead we hear a heavily distorted and attenuated sound.
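The "1 Pa plus minus 1 Pa" bookkeeping above can be sketched directly, assuming two ideal speakers emitting identical sine waves (a hypothetical model that ignores room effects):

```python
import math

def summed_pressure(t, freq=100.0, phase_shift=0.0):
    """Combined pressure of two identical speakers; the second is phase-shifted."""
    left = math.sin(2 * math.pi * freq * t)
    right = math.sin(2 * math.pi * freq * t + phase_shift)
    return left + right

t = 0.0025   # a quarter period of the 100 Hz tone, where each wave peaks at 1
print(summed_pressure(t, phase_shift=0.0))        # in phase: ~2 (pressures add)
print(summed_pressure(t, phase_shift=math.pi))    # antiphase: ~0 (cancellation)
```

Reversing the polarity of one speaker corresponds exactly to the `math.pi` (180-degree) shift: at every instant the two pressures are equal and opposite.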

In the most accessible terms, the phenomenon can be described as two signals of the same frequency that are shifted in time. The shift is conveniently pictured with ordinary round clocks. Imagine several identical round clocks hanging on a wall. When their second hands run in sync, 30 seconds on one clock and 30 seconds on another, the signals are in phase. If the second hands run at the same speed but with an offset, say 30 seconds on one clock and 24 seconds on another, that is a classic example of a phase shift. Phase is likewise measured in degrees around a virtual circle: when the signals are shifted relative to each other by 180 degrees (half a period), the result is classical antiphase. In practice one often encounters smaller phase shifts, which can also be measured in degrees and successfully corrected.
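The clock analogy translates into a one-line conversion from time offset to phase angle; the function below is an illustrative sketch:

```python
def phase_shift_degrees(delay_s, freq_hz):
    """Convert a time offset between two equal-frequency signals to degrees."""
    period = 1.0 / freq_hz
    return (delay_s % period) / period * 360.0

# The wall-clock analogy: second hands 6 s apart on a 60 s dial.
print(round(phase_shift_degrees(6, 1 / 60), 3))    # 36.0 degrees apart
# Half a period at any frequency is the classical antiphase:
print(round(phase_shift_degrees(0.005, 100), 3))   # 180.0 degrees
```

The 24-versus-30-second example from the text gives the same answer: a 6-second lag on a 60-second "period" is a 36-degree phase shift.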

Waves can be plane or spherical. A plane wavefront propagates in only one direction and is rarely encountered in practice. A spherical wavefront is the simple type of wave that radiates from a single point and propagates in all directions. Sound waves exhibit diffraction, i.e. the ability to bend around obstacles and objects. The degree of bending depends on the ratio of the sound wavelength to the dimensions of the obstacle or opening. Diffraction also occurs when an obstacle lies in the path of the sound. Two scenarios are then possible: 1) if the obstacle is much larger than the wavelength, the sound is reflected or absorbed (depending on the material's absorption, the thickness of the obstacle, and so on), and an "acoustic shadow" zone forms behind it; 2) if the obstacle is comparable in size to the wavelength, or smaller, the sound diffracts to some extent in all directions. When a sound wave traveling in one medium hits the boundary with another medium (for example, air meeting a solid), three scenarios are possible: 1) the wave is reflected from the boundary; 2) the wave passes into the other medium without changing direction; 3) the wave passes into the other medium and changes direction at the boundary, which is called "wave refraction".
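The wavelength-versus-obstacle comparison behind scenarios 1) and 2) can be sketched as a rough rule of thumb. Note that the factor of 10 used as the cutoff for "much larger" is an arbitrary illustrative choice, not a physical constant:

```python
def diffraction_regime(freq_hz, obstacle_m, speed=343.0):
    """Compare wavelength to obstacle size, per the two scenarios in the text.

    The factor of 10 is an arbitrary illustrative cutoff for 'much larger'.
    """
    wavelength = speed / freq_hz
    if obstacle_m > 10 * wavelength:
        return "acoustic shadow: reflection/absorption dominates"
    return "diffraction: sound bends around the obstacle"

# A 100 Hz wave (wavelength ~3.4 m) wraps around a 0.5 m pillar,
# while a 10 kHz wave (~3.4 cm) leaves a shadow zone behind it.
print(diffraction_regime(100, 0.5))
print(diffraction_regime(10_000, 0.5))
```

This is why bass travels around corners while treble is easily blocked by furniture or a doorway.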

The ratio of the excess pressure of a sound wave to the oscillatory volume velocity is called the wave impedance. In simple terms, the wave impedance of a medium can be described as its ability to absorb sound waves or to "resist" them. The reflection and transmission coefficients depend directly on the ratio of the wave impedances of the two media. The wave impedance of a gas is much lower than that of water or of solids. Therefore, when a sound wave in air strikes a solid object or the surface of deep water, the sound is either reflected from the surface or absorbed to a large extent. This also depends on the thickness of the layer (of water or solid) on which the wave falls: when the solid or liquid layer is thin, sound waves pass through it almost completely, whereas a thick layer more often reflects them. Reflection of sound waves obeys the well-known physical law: the angle of incidence equals the angle of reflection. When a wave from a less dense medium hits the boundary with a denser medium, refraction occurs: the sound wave bends (refracts) after "meeting" the obstacle, and this is necessarily accompanied by a change of speed. Refraction also depends on the temperature of the medium in which it takes place.
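The dependence of reflection on the impedance ratio can be illustrated with the standard normal-incidence formula R = (Z2 - Z1)/(Z2 + Z1), using textbook approximations for the density and sound speed of air and water:

```python
def reflection_coefficient(z1, z2):
    """Pressure reflection coefficient at normal incidence: R = (Z2-Z1)/(Z2+Z1)."""
    return (z2 - z1) / (z2 + z1)

# Characteristic impedance Z = rho * c; textbook approximations:
z_air = 1.2 * 343          # ~412 Pa*s/m   (rho ~1.2 kg/m^3, c ~343 m/s)
z_water = 1000 * 1480      # ~1.48e6 Pa*s/m (rho ~1000 kg/m^3, c ~1480 m/s)

r = reflection_coefficient(z_air, z_water)
print(round(r, 4))           # ~0.9994: air-to-water sound is almost all reflected
print(round(1 - r * r, 4))   # ~0.0011: only ~0.1% of the power gets through
```

The huge impedance mismatch between air and water is exactly why, as the text says, sound arriving from the air mostly bounces off a water surface.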

As sound waves propagate through space, their intensity inevitably decreases: the waves attenuate and the sound weakens. This effect is easy to observe in practice: if two people stand in a field a short distance apart (a meter or so) and talk, and then begin to move away from each other, the same conversational volume becomes harder and harder to hear. This example clearly demonstrates the falling intensity of sound waves. Why does this happen? The causes are various processes of heat exchange, molecular interaction, and internal friction in the medium; most often, sound energy is converted into heat. Such processes inevitably arise in any of the three propagation media (gases, liquids, and solids) and can be characterized as absorption of sound waves.
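For a point source in a free field, the weakening with distance follows the inverse-square law. The sketch below covers geometric spreading only, ignoring the absorption discussed next:

```python
import math

def level_drop_db(r1, r2):
    """SPL drop for a point source in a free field (inverse-square spreading).

    Intensity falls as 1/r^2, so each doubling of distance costs ~6 dB.
    """
    return 20 * math.log10(r2 / r1)

# The two talkers in the field, moving from 1 m apart to 2 m apart:
print(round(level_drop_db(1, 2), 2))   # 6.02 dB quieter per doubling
# From 1 m to 8 m (three doublings):
print(round(level_drop_db(1, 8), 2))   # 18.06 dB quieter
```

In a real field the drop is somewhat larger still, because absorption in the air adds to the purely geometric loss.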

The intensity and degree of absorption of sound waves depend on many factors, such as the pressure and temperature of the medium, and also on the specific frequency of the sound. When a sound wave propagates in a liquid or gas, friction arises between different particles; this is called viscosity. As a result of this friction at the molecular level, sound energy is converted into heat; heat conduction in the medium dissipates wave energy in a similar way. Sound absorption in gases also depends on pressure (atmospheric pressure changes with altitude above sea level). As for the dependence of absorption on frequency, given the viscosity and heat-conduction mechanisms above, the higher the frequency of the sound, the higher its absorption. For example, at normal temperature and pressure, the absorption of a 5000 Hz wave in air is 3 dB/km, while the absorption of a 50000 Hz wave is already about 300 dB/km.
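The roughly hundredfold jump in absorption for a tenfold rise in frequency is consistent with the classical approximation that absorption grows as the square of frequency. A minimal sketch of that scaling:

```python
def scaled_absorption(alpha_ref, f_ref_hz, f_hz):
    """Scale air absorption with frequency, assuming alpha grows as f^2.

    Classical (viscosity + heat conduction) absorption is roughly
    proportional to frequency squared, so a tenfold higher frequency
    gives about a hundredfold higher absorption.
    """
    return alpha_ref * (f_hz / f_ref_hz) ** 2

# With 3 dB/km at 5 kHz as the reference point:
print(scaled_absorption(3, 5_000, 50_000))   # -> 300.0 (dB/km at 50 kHz)
```

This is why distant thunder sounds like a dull rumble: the high-frequency components are absorbed along the way, and mostly the lows survive.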

In solids, all the above dependencies (on thermal conductivity and viscosity) remain, but several further conditions are added. They are related to the molecular structure of solid materials, which varies and has its own inhomogeneities. Depending on this internal structure, the absorption of sound waves differs from material to material. As sound passes through a solid body, the wave undergoes a series of transformations and distortions that most often lead to scattering and absorption of sound energy. At the molecular level, dislocation effects can occur: a sound wave displaces atomic planes, which then return to their original position; or moving dislocations collide with dislocations perpendicular to them, or with defects in the crystal structure, which slows them down and, as a result, absorbs part of the sound wave. The wave may also resonate with these defects, which distorts the original wave. The energy of the sound wave, as it interacts with the elements of the material's molecular structure, is dissipated through internal friction processes.

In what follows, I will try to analyze the features of human auditory perception and some of the subtleties and peculiarities of sound propagation.

Before suspecting that the sound card in your computer is broken, carefully inspect the PC's connectors for external damage. Also check that the subwoofer, speakers, or headphones through which sound is played are working: try connecting them to some other device. The cause of the problem may lie in the equipment you are using.

Reinstalling the Windows operating system, whether 7, 8, 10, or XP, may also help in your situation, as the necessary settings could simply have been corrupted.

Let's move on to checking the sound card

Method 1

The first step is to deal with the device drivers. For this you need:


After that, the drivers will be updated and the problem should be resolved.

The same procedure can be performed if you have the current version of the software on removable media. In that situation, install it by specifying the path to the specific folder.

If the audio card does not appear in Device Manager at all, move on to the next option.

Method 2

In this case, a full physical check of the card is required to make sure it is connected correctly. Do the following, in order:


Please note that this option is only suitable for discrete sound cards installed as a separate board.

Method 3

If, after the visual inspection, the speakers or headphones turned out to be in working order, and reinstalling the OS brought no results, we move on:


After the sound card test completes, the system will report its status; if the card turns out to be inoperative, the results will make this clear.

Method 4

Another quick and easy way to check the sound card on Windows:


Thus, we will start diagnosing sound problems on the computer.

The program will suggest several likely problems and also list the connected audio devices. If a problem exists, the diagnostic wizard will help you identify it quickly.

Method 5

One more way to check whether the sound card is working:


The "Driver" and "Details" tabs provide additional data on the parameters of every device installed in your PC, both integrated and discrete. This method also lets you diagnose problems and identify them quickly through a software check.

Now you know several quick and easy ways to check your sound card. Their main advantage is that none of them requires Internet access, and all the procedures can be carried out on your own, without visiting a specialized service center.



