How It Works: Digital Set-Top Boxes, Digital Cameras, and Digital Signatures

If you want to watch the highest-quality television in Russia, you need a grasp of the basics of digital broadcasting. The most important piece of equipment to know about is the digital television receiver, or set-top box. We will tell you everything about them!

A digital receiver is a device that receives a digital television signal, converts it, and passes it to an analog TV of virtually any model. Digital receivers are often also called digital set-top boxes, TV tuners, DVB-T2 set-top boxes, or simply DVB-T2 receivers. The designation "DVB-T2" indicates which digital television standard a given receiver supports. Several fundamentally different digital television standards exist today:
- DVB-T – first-generation terrestrial digital television
- DVB-T2 – second-generation terrestrial digital television
- DVB-S – satellite television
- DVB-C – cable television
- DVB-H – mobile (handheld) television

The simplest and most accessible standard today is DVB-T2 terrestrial digital television. Under a dedicated state program, it is scheduled to replace all analog broadcasting in Russia in the very near future. This article therefore focuses on digital television receivers designed to receive DVB-T2 signals. Set-top boxes exist for home TVs and for cars; they all work on the same principle, and all are characterized by simple operation and wide functionality.


Watching digital television channels is the receiver's main task; additional options include:

1. Support for various video and audio formats
2. Recording of live television broadcasts
3. Playback of multimedia files from USB drives
4. Pausing a live broadcast and resuming playback from the point where it was stopped
5. TimeShift – the ability to delay viewing of digital television programs

How does a digital television receiver work?

The way a digital set-top box works is quite simple. A signal in the 950-2150 MHz range (the first intermediate frequency) travels from the output of the converter's low-noise amplifier through the cable to the receiver's microwave front end. The demodulator corrects potential errors, and the selected stream goes to a demultiplexer, which separates it into video, audio, and other elementary streams and performs descrambling. In the MPEG-2 video decoder, the compressed video is decoded into an uncompressed digital signal, which is then divided into components: luminance (Y) and the red (R), green (G), and blue (B) color channels.

The digital TV encoder performs standard conversion, so the receiver's output can be connected to an analog TV using any of three standards: PAL, SECAM, or NTSC. The audio decoder outputs both digital and analog signals. The microprocessor controls the demultiplexer and decoder, extracts the signal when an interactive communication system is used, and separates embedded data packets. Thanks to the digital control module and an IR sensor, the receiver can be operated with a remote control.
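The demultiplexing step described above can be illustrated in miniature. An MPEG transport stream is a sequence of 188-byte packets, each carrying a PID (packet identifier) that tells the demultiplexer which elementary stream (video, audio, service data) the payload belongs to. A simplified sketch in Python, using synthetic packets rather than a real broadcast stream (the header here is reduced to the sync byte and PID; real TS headers carry more fields):

```python
# Minimal MPEG-TS demultiplexer sketch: split a transport stream
# into elementary streams by PID (packet identifier).
SYNC_BYTE = 0x47
PACKET_SIZE = 188

def make_packet(pid, payload):
    """Build a simplified 188-byte TS packet (header fields beyond
    the sync byte and PID are zeroed for illustration)."""
    header = bytes([SYNC_BYTE, (pid >> 8) & 0x1F, pid & 0xFF, 0x10])
    return header + payload.ljust(PACKET_SIZE - 4, b'\x00')

def demultiplex(stream):
    """Group packet payloads by PID, as the set-top box demultiplexer does."""
    streams = {}
    for i in range(0, len(stream), PACKET_SIZE):
        packet = stream[i:i + PACKET_SIZE]
        if packet[0] != SYNC_BYTE:
            continue  # skip packets that lost sync
        pid = ((packet[1] & 0x1F) << 8) | packet[2]
        streams.setdefault(pid, b'')
        streams[pid] += packet[4:].rstrip(b'\x00')
    return streams

ts = make_packet(0x100, b'video-data') + make_packet(0x101, b'audio-data')
elementary = demultiplex(ts)
print(sorted(elementary))   # [256, 257]
print(elementary[0x100])    # b'video-data'
```

In a real receiver this splitting happens in hardware at the full stream rate; the sketch only shows the addressing logic.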

In this issue I am starting a "long-running" series about how a digital camera is designed and works, what terms like "bracketing" and "exposure compensation" mean and, most importantly, how to use all of this purposefully.

In general, a digital camera is a device that captures images of objects in digital form. By and large, the only difference between a conventional and a digital camera is the image receiver. In the first case it is a photographic emulsion, which then requires chemical processing. In the second it is a special electronic component that converts incident light into an electrical signal. This component is called a sensor or matrix, and it really is a rectangular array of light-sensitive cells placed on a single semiconductor crystal.

When light hits a matrix element, the element produces an electrical signal proportional to the amount of light received. The signals (still analog at this point) are then read out from the matrix elements and converted into digital form by an analog-to-digital converter (ADC). Next, the digital data is processed by the camera's processor (yes, it has one too) and saved as, in fact, a picture.
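The read-and-digitize step can be sketched as a toy model: an analog voltage proportional to the captured light is clipped to the ADC's input range and quantized into an n-bit number. The voltage values and bit depth below are illustrative assumptions:

```python
# Toy model of sensor readout: analog voltages (0.0-1.0, proportional
# to captured light) are quantized by an n-bit ADC into integer levels.
def adc(voltage, bits=8):
    """Quantize a normalized analog voltage into an n-bit digital value."""
    levels = 2 ** bits - 1
    v = min(max(voltage, 0.0), 1.0)      # clip to the ADC input range
    return round(v * levels)

analog_row = [0.0, 0.25, 0.5, 1.0, 1.3]  # last cell is overexposed
digital_row = [adc(v) for v in analog_row]
print(digital_row)  # [0, 64, 128, 255, 255]
```

Note how the overexposed cell is clipped to the maximum level: this is exactly how blown-out highlights lose all detail.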

So, the heart of any digital camera is the sensor. Two main sensor technologies dominate today: CCD (charge-coupled device) and CMOS. In a CCD matrix, during exposure (that is, at the moment the photograph is actually taken), each photosensitive element accumulates a charge proportional to the intensity of the incident light. During readout, these charges are shifted from cell to cell until the entire matrix has been read (in practice, readout happens row by row). Popular literature likes to compare this process to passing buckets of water along a chain. CCD matrices are produced using MOS technology and, to yield a high-quality image, require highly uniform parameters across the entire chip area. Accordingly, they are quite expensive.
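The bucket-brigade analogy translates directly into code: charges shift toward the output node one cell at a time until every row has been read. A pure-Python sketch, with plain numbers standing in for accumulated charge:

```python
# Bucket-brigade model of CCD readout: each row's charges are shifted
# cell by cell into a single output node, one value per clock cycle.
def read_ccd_row(row):
    """Shift charges out of one CCD row, the cell nearest the output first."""
    row = list(row)                   # charges accumulated during exposure
    output = []
    while row:
        output.append(row.pop(0))     # charge reaches the output amplifier;
        # the remaining charges have shifted one cell toward the output
    return output

def read_ccd(matrix):
    """Read the full sensor row by row, as a real CCD does."""
    return [read_ccd_row(row) for row in matrix]

exposure = [[10, 20, 30],
            [40, 50, 60]]
print(read_ccd(exposure))  # [[10, 20, 30], [40, 50, 60]]
```

The output is simply the input in readout order: the point is that every charge must pass through every intervening cell, which is why CCD readout is strictly sequential.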

An alternative to CCDs are CMOS matrices. In essence, a CMOS sensor is quite similar to a dynamic random-access memory chip (DRAM): again a rectangular matrix, again capacitors, again random-access readout. CMOS matrices use photodiodes as their photosensitive elements. In general, CMOS sensors are much better suited to today's well-developed manufacturing processes. Among other benefits (higher packing density, lower power consumption, lower price), this makes it possible to integrate the supporting electronics onto the same chip as the matrix. True, until recently CMOS could not compete with CCD on quality, so CMOS sensors mostly went into cheap devices like webcams. Recently, however, several large companies (in particular, an industry giant like Kodak) have been developing processes for producing high-resolution, high-quality CMOS matrices. The first "serious" CMOS camera, the three-megapixel digital SLR Canon EOS D30, appeared almost two years ago. And the full-frame Canon EOS 1Ds and Kodak DCS Pro 14n announced at the latest Photokina finally demonstrated the potential of CMOS sensors. Nevertheless, most cameras are still built around CCD matrices.

Those who want to explore both technologies in more detail can start with www.eecg.toronto.edu/~kphang/ece1352f/papers/ng_CCD.pdf; we will move on.

The next point: the matrix elements (of either type described above) perceive only the intensity of the incident light, so by themselves they produce a black-and-white image. Where does color come from? To obtain a color image, a special filter array sits between the lens and the matrix, made up of primary-color cells (GRGB or CMYG) located above the corresponding pixels. Green gets two cells out of every four in the RGB variant (one in CMYG), since the eye is most sensitive to that color. The final color of each pixel in the picture is calculated from the intensities of neighboring elements of different colors, so that each single-color pixel of the matrix ends up corresponding to a full-color pixel in the image. The final image is therefore always interpolated to some degree (that is, calculated, rather than obtained by directly photographing the object), which inevitably affects the quality of fine detail. As for specific filters, most cameras use the rectangular GRGB matrix known as the Bayer filter.
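The idea of interpolation can be shown with a deliberately crude sketch: collapse each 2x2 GRGB tile into one full-color pixel, averaging the two green cells. Real demosaicing algorithms keep full resolution by drawing on a larger neighborhood, which is exactly where the interpolation artifacts mentioned above come from; this simplification only makes the arithmetic visible:

```python
# Simplest possible Bayer interpolation: every 2x2 GRGB tile
#   G R
#   B G
# is collapsed into one RGB pixel; the two green cells are averaged.
def demosaic_2x2(tile):
    (g1, r), (b, g2) = tile
    return (r, (g1 + g2) // 2, b)

tile = ((200, 90),    # green=200, red=90
        (30, 210))    # blue=30,  green=210
print(demosaic_2x2(tile))  # (90, 205, 30)
```

This naive scheme halves the linear resolution, which is why real cameras interpolate each missing component from neighbors instead of merging cells.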

There is also SuperCCD, a technology invented by Fuji Photo Film and used in Fuji cameras since 2000. Its essence is that the pixels (and the GRGB filter elements above them) are arranged in a kind of diagonal matrix.

Moreover, the camera interpolates not only the colors of the pixels themselves but also the colors of the points between them. Fuji cameras therefore always quote a resolution equal to twice the number of physical (single-color) pixels, which is misleading. Still, Fuji's technology has proved quite successful: most people who have compared images from SuperCCD and conventional cameras agree that SuperCCD image quality matches a conventional matrix with roughly 1.5 times the physical resolution of the SuperCCD. But not 2 times, as Fuji claims.

Finishing the conversation about filters, it is time to mention a third, alternative sensor technology: Foveon X3. It was developed by Foveon and announced in the spring of this year. The essence of the technology is the physical reading of all three colors for each pixel (in theory, the resolution of such a sensor is equivalent to that of a conventional sensor with three times as many pixels). To split the incident light into color components, it exploits the property of silicon (from which the sensor is made) of absorbing light of different wavelengths (that is, different colors) at different depths. In fact, each Foveon pixel is a three-layer structure, with the depths of the active elements matched to the penetration depths of the primary colors (RGB) in silicon. In my opinion, a very promising idea, at least in theory. In practice, the first camera announced with the Foveon X3 remains the only one so far, and its deliveries have not yet really begun. We wrote about this technology in more detail in the sixth issue of the newspaper this year.

However, let's return to the sensors. From the end user's point of view, the main characteristic of any matrix is its resolution, that is, the number of photosensitive elements. Most cameras are now built around matrices of 2-4 megapixels (a megapixel is one million pixels). Naturally, the higher the resolution of the matrix, the more detailed the image it can capture. Of course, a larger matrix is also more expensive, but quality always has its price. The matrix resolution and the pixel size of the resulting image are directly related: a megapixel camera, for example, produces a picture of about 1024x960 pixels (983,040 in total). It must be said that increasing matrix resolution is one of the main tasks digital camera manufacturers are currently wrestling with. Three years ago, most mid-priced cameras carried megapixel matrices; two years ago the figure rose to two megapixels; a year ago it was already three or four. Now most of the latest models come with 4-5 megapixel sensors, and there are already several semi-professional models with matrices above 10 megapixels. Apparently the race will stop somewhere around this level, since a picture from a 10-megapixel matrix is roughly as detailed as one taken on standard 35 mm film.

By the way, do not confuse the resolution of the matrix, as defined above, with resolving power. The latter is the camera's ability to render two closely spaced objects separately, and it is usually measured by photographing a test chart with lines a known distance apart. Resolving power describes the camera's entire optical system, that is, the matrix together with the lens. In principle, matrix resolution and resolving power are related, but the relationship is determined not only by the matrix parameters but also by the quality of the camera's optics.

The next characteristic of a digital camera directly related to the matrix is sensitivity, or more precisely, photosensitivity. As the name suggests, this parameter describes how sensitive the matrix is to incident light and is, in principle, completely analogous to the photosensitivity of conventional photographic materials. In a store you can buy film rated at 100, 200, or 400 units; in the same way you can set the sensitivity of the matrix, except that a digital camera lets you set it individually for each frame. In bright sunlight you might shoot at 100 or even 50, while for night photography you can switch to 400 (in some cameras even 1400). Most digital cameras offer the standard values of 50, 100, 200, and 400. In addition, the autoexposure system can vary sensitivity smoothly: since sensitivity is physically adjusted by changing the gain applied to the signal from the matrix, this is easy to implement in the camera.

Sensitivity is measured in ISO units (for digital cameras, at least, these have become the standard). The table shows how they convert into DIN and GOST units.

GOST:  8    11     32   65     90   180  250
ISO:   9    12     35   70     100  200  300
DIN:   10   11-20  16   19-20  21   24   25-26

However, adjustable sensitivity has its drawbacks. Since the physical properties of the matrix do not change and the existing signal is simply amplified, the noise inherent in any electronic device shows up more and more in the image. This greatly reduces the camera's working dynamic range, so at high sensitivity you will not get a good picture. A similar problem arises with long exposures: any matrix is noisy, and the noise accumulates over time. Many cameras now implement special noise-reduction algorithms for long exposures, but these tend to smooth the image and blur fine detail. You cannot argue with the laws of physics, but adjustable sensitivity remains a big advantage of digital cameras.
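Because raising ISO merely multiplies the signal (and the noise riding on it), the effect fits in a few lines. A sketch in arbitrary units; the read-noise figure and the gain model here are illustrative assumptions:

```python
# Raising ISO amplifies the sensor signal together with its noise,
# so the signal-to-noise ratio of the readout does not improve.
import random

def read_pixel(light, iso, base_iso=100, read_noise=2.0):
    """Model one pixel readout: photon signal plus read noise,
    both multiplied by the ISO gain."""
    gain = iso / base_iso
    noise = random.gauss(0, read_noise)
    return (light + noise) * gain

# The same dim scene read at ISO 100 and ISO 400: the ISO 400 value
# is ~4x brighter, but the noise term is amplified by the same factor.
print(read_pixel(10, 100))
print(read_pixel(10, 400))
```

Setting `read_noise=0.0` shows the pure gain: the ISO 400 reading is exactly four times the ISO 100 one, which is all the amplifier can ever do.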

Konstantin Afanasiev


To have complete control over the process of obtaining a digital image, you must at least have a general understanding of the structure and operating principle of a digital camera.

The only fundamental difference between a digital camera and a film camera is the nature of the photosensitive material used in them. If in a film camera it is film, then in a digital camera it is a light-sensitive matrix. And just as the traditional photographic process is inseparable from the properties of film, the digital photographic process largely depends on how the matrix converts the light focused on it by the lens into digital code.

The principle of operation of the photomatrix

The photosensitive matrix, or photosensor, is an integrated circuit (in other words, a silicon wafer) consisting of tiny light-sensitive elements: photodiodes.

There are two main types of sensors: CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide-Semiconductor). Both types of matrices convert the energy of photons into an electrical signal that is then digitized; however, in a CCD matrix the signal generated by the photodiodes reaches the camera's processor in analog form and is digitized centrally, whereas in a CMOS matrix each photodiode is equipped with an individual analog-to-digital converter (ADC), and the data arrives at the processor already in discrete form. In general, the differences between CMOS and CCD matrices, although fundamental for an engineer, are of almost no significance for a photographer. For manufacturers of photographic equipment it also matters that CMOS matrices, while more complex and expensive to develop than CCDs, turn out to be more profitable in mass production. So the future most likely belongs to CMOS technology for purely economic reasons.

Photodiodes, which make up any matrix, convert the energy of the light flux into electric charge. The more photons a photodiode captures, the more electrons appear at its output. Obviously, the larger the total area of all the photodiodes, the more light they can gather and the higher the photosensitivity of the matrix.

Unfortunately, photodiodes cannot be packed edge to edge, since room must be left on the matrix for the electronics accompanying them (which matters especially for CMOS matrices). On average, the light-sensitive surface occupies 25-50% of the sensor's total area. To reduce light loss, each photodiode is covered with a microlens of larger area that practically touches the microlenses of neighboring photodiodes. Microlenses gather the light falling on them and funnel it into the photodiodes, thus increasing the sensor's light sensitivity.

Once the exposure is complete, the electrical charge generated by each photodiode is read out, amplified, and converted by an analog-to-digital converter into a binary code of a given bit depth, which is then sent to the camera's processor for further processing. Each photodiode of the matrix corresponds (though not always) to one pixel of the future image.

Thank you for your attention!

Vasily A.

Post scriptum

If you found the article useful and informative, you can kindly support the project by making a contribution to its development. If you didn’t like the article, but you have thoughts on how to make it better, your criticism will be accepted with no less gratitude.

Please remember that this article is subject to copyright. Reprinting and quoting are permissible provided there is a valid link to the source, and the text used must not be distorted or modified in any way.

Modern cameras do everything themselves - to take a photo, the user just needs to press a button. But it’s still interesting: by what magic does the picture get into the camera? We will try to explain the basic principles of digital cameras.

Main parts

Basically, the design of a digital camera follows the design of an analog one. Their main difference is in the photosensitive element on which the image is formed: in analog cameras it is film, in digital cameras it is a matrix. Light passes through the lens onto the matrix, where an image is formed, which is then recorded in memory. Now let's look at these processes in more detail.

The camera consists of two main parts - the body and the lens. The body contains a matrix, a shutter (mechanical or electronic, and sometimes both), a processor and controls. A lens, detachable or integral, is a group of lenses housed in a plastic or metal housing.

Where does the picture come from?

The matrix consists of many photosensitive cells: pixels. When light hits a cell, it produces an electrical signal proportional to the intensity of the light flux. Since only brightness information is captured, the picture comes out black and white, and various tricks are needed to make it color. The cells are covered with color filters: in most matrices, each pixel is covered with a red, blue, or green filter (only one!) according to the well-known RGB (red-green-blue) color scheme. Why these colors in particular? Because they are the primaries, and all other colors are obtained by mixing them and reducing or increasing their saturation.

On the matrix, the filters are arranged in groups of four, so that for every two green there is one blue and one red. This is done because the human eye is most sensitive to green. Light rays from different parts of the spectrum have different wavelengths, so each filter passes only rays of its own color into the cell. The resulting image consists only of red, blue, and green pixels; this is the form in which RAW ("raw" format) files are recorded. For JPEG and TIFF files, the camera's processor analyzes the color values of neighboring cells and calculates each pixel's color. This processing is called color interpolation, and it is extremely important for producing high-quality photographs.

This arrangement of filters on matrix cells is called the Bayer pattern

There are two main types of matrices, and they differ in how information is read from the sensor. In CCD-type matrices, information is read from the cells sequentially, so processing a file can take quite a while. Although such sensors are slow, they are relatively cheap, and the noise level in the images they produce is lower.

CCD type matrix

In CMOS-type matrices, information is read individually from each cell. Every pixel is addressed by its coordinates, which makes it possible to use the matrix for exposure metering and autofocus.

CMOS matrix

The matrices described above are single-layer, but three-layer ones also exist, in which each cell perceives all three colors at once, separating the color streams by wavelength.

Three-layer matrix

The camera's processor has already been mentioned above: it is responsible for all the processes that result in a picture. The processor determines the exposure parameters and decides which of them to apply in a given situation. The quality of the photos and the speed of the camera depend on the processor and its software.

With the click of the shutter

The shutter controls the length of time that light acts on the sensor (the shutter speed). In the vast majority of cases this time is measured in fractions of a second: blink and you'll miss it, as they say. In digital SLR cameras, as in film cameras, the shutter consists of two opaque curtains that cover the sensor. Because of these curtains, a digital SLR cannot show a live image on its display: the matrix is covered and cannot pass the image on.

In compact cameras, the matrix is ​​not covered by a shutter, and therefore you can compose the frame according to the display

When the shutter button is pressed, springs or electromagnets drive the curtains, letting light in and forming an image on the sensor: that is how a mechanical shutter works. But digital cameras can also have electronic shutters, which are used in compact cameras. An electronic shutter, unlike a mechanical one, cannot be touched; it is essentially virtual. The matrix of a compact camera is always open (which is why you can compose a shot using the display rather than the viewfinder), but when the shutter button is pressed, the frame is exposed for the specified time and then written to memory. Because electronic shutters have no curtains, their shutter speeds can be ultra-short.

Let's focus

As mentioned above, the matrix itself is often used for autofocusing. In general, there are two types of autofocus - active and passive.

Active autofocus requires an infrared or ultrasonic transmitter and receiver in the camera: an ultrasonic system measures the distance to the subject by echolocation, timing the reflected signal. Passive autofocus works by estimating contrast. Some professional cameras combine both types of focusing.
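The ultrasonic variant is simple arithmetic: the pulse travels to the subject and back, so the distance is half the round-trip time multiplied by the speed of sound. A sketch (the speed-of-sound value assumes air at about 20 °C):

```python
# Echolocation rangefinding: the ultrasonic pulse travels to the
# subject and back, so distance = speed_of_sound * time_of_flight / 2.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def distance_m(echo_delay_s):
    """Distance to the subject from the measured echo delay."""
    return SPEED_OF_SOUND * echo_delay_s / 2

print(round(distance_m(0.02), 2))  # 3.43 m for a 20 ms round trip
```

The camera then moves the lens to the focus position corresponding to that distance, with no need to analyze the image at all, which is why active autofocus also works in complete darkness.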

In principle, the entire area of ​​the sensor can be used for focusing, and this allows manufacturers to place dozens of focusing zones on it, as well as use a “floating” focus point, which the user can place wherever he wants.

Anti-distortion

It is the lens that forms the image on the matrix. A lens consists of several optical elements: three or more. A single element cannot create a perfect image: it would be distorted at the edges (this is called aberration). Roughly speaking, the light beam should travel straight to the sensor without scattering along the way. The diaphragm helps with this to some extent: a round plate with a hole in the middle, assembled from several blades. But the aperture cannot be closed too far, since that reduces the amount of light reaching the sensor (a property used when setting the desired exposure). If several elements with different characteristics are assembled in series, their combined distortion is much smaller than the aberrations of each one separately. The more elements, though, the less aberration and the less light reaches the sensor. After all, glass, however transparent it seems to us, does not transmit all the light: some is scattered, some reflected. To make the elements transmit as much light as possible, they are given a special anti-reflective coating. If you look into a camera lens, you will see the surface shimmer with a rainbow: that is the anti-reflective coating.

The lenses are located inside the lens approximately like this

One of the characteristics of a lens is its aperture ratio: the value of the maximum aperture opening. It is marked on the lens, for example, as 28/2, where 28 is the focal length and 2 is the aperture ratio. For a zoom lens the marking looks like 14-45/3.5-5.8. Two aperture values are given for zooms because the maximum aperture differs at the wide-angle and telephoto ends: at different focal lengths, the aperture ratio is different.
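The connection between the two numbers in a zoom marking is easy to verify: the f-number is the focal length divided by the diameter of the aperture opening. The sketch below assumes, purely for illustration, that the opening diameter stays fixed at its wide-end value; the actual telephoto marking of 5.8 (rather than the computed 11.2) shows that in a real lens the effective opening also grows somewhat as you zoom:

```python
# f-number = focal length / effective aperture diameter.
# If a zoom's maximum opening stayed the same size, the f-number
# would grow in direct proportion to focal length.
def f_number(focal_mm, diameter_mm):
    return focal_mm / diameter_mm

diameter = 14 / 3.5  # 4 mm opening gives f/3.5 at the 14 mm wide end
print(round(f_number(14, diameter), 1))  # 3.5
print(round(f_number(45, diameter), 1))  # 11.2
```

This is why fast constant-aperture zooms need large front elements and cost so much: holding f/2.8 at the long end demands a much bigger opening.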

The focal length, marked on every lens, is the distance from the optical center of the lens to the light receiver (here, the matrix) when focused at infinity. The focal length determines the lens's angle of view and, so to speak, its reach, that is, how far it "sees." Wide-angle lenses make the scene look farther away than normal vision does, while telephoto lenses bring it closer and have a narrow angle of view.

The viewing angle of a lens depends not only on its focal length, but also on the diagonal of the light receiver. For 35 mm film cameras, a lens with a focal length of 50 mm is considered normal (that is, approximately corresponding to the viewing angle of the human eye). Lenses with a shorter focal length are “wide-angle”, and those with a longer focal length are “telephoto”.

The left part of the lower inscription on the lens is the focal length of the zoom, the right part is the aperture ratio

Herein lies the problem that explains why a digital lens's focal length is often accompanied by its 35 mm equivalent. The diagonal of the matrix is smaller than the diagonal of a 35 mm frame, so the numbers have to be "converted" into the more familiar equivalent. Because of this effective increase in focal length, wide-angle shooting becomes almost impossible on SLR cameras with "film" lenses: an 18 mm lens on a film camera is a super wide-angle, but on a digital body its equivalent focal length works out to around 30 mm, or even longer. For telephoto lenses, the added "reach" only benefits photographers, since a conventional lens with a focal length of, say, 400 mm is quite expensive.
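The "conversion" is a multiplication by the crop factor: the ratio of the 35 mm frame diagonal to the sensor diagonal. A sketch using a 22.5 x 15 mm sensor as an illustrative assumption (this gives the 1.6 crop factor typical of APS-C sensors):

```python
# Equivalent focal length = actual focal length x crop factor,
# where crop factor = diagonal of a 35 mm frame / sensor diagonal.
import math

def crop_factor(sensor_w_mm, sensor_h_mm, film_w_mm=36.0, film_h_mm=24.0):
    film_diag = math.hypot(film_w_mm, film_h_mm)
    sensor_diag = math.hypot(sensor_w_mm, sensor_h_mm)
    return film_diag / sensor_diag

# A typical APS-C sensor (22.5 x 15 mm):
cf = crop_factor(22.5, 15.0)
print(round(cf, 2))       # 1.6
print(round(18 * cf, 1))  # 28.8: an 18 mm lens behaves like ~29 mm
```

The 28.8 mm result matches the "around 30 mm" figure above; on sensors with an even larger crop factor, the equivalent focal length is longer still.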

Viewfinder

In film cameras, you can only compose a frame using the viewfinder. Digital ones allow you to completely forget about it, since in most models it is more convenient to use the display for this. Some very compact cameras do not have a viewfinder at all, simply because there is no room for one. The most important thing about a viewfinder is what you can see through it. For example, SLR cameras are so called precisely because of the design features of the viewfinder. The image through the lens is transmitted through a system of mirrors to the viewfinder, and thus the photographer sees the real area of ​​the frame. During shooting, when the shutter opens, the mirror blocking it rises and lets light into the sensitive sensor. Such designs, of course, cope with their tasks perfectly, but they take up quite a lot of space and therefore are completely inapplicable in compact cameras.

This is how the image through the mirror system gets into the viewfinder of a SLR camera

Compact cameras use real-vision optical viewfinders. This is, roughly speaking, a through hole in the camera body. Such a viewfinder does not take up much space, but its overview does not correspond to what the lens “sees”. There are also pseudo-mirror cameras with electronic viewfinders. Such viewfinders have a small display, the image to which is transferred directly from the matrix - just like to an external display.

Flash

Flash, a pulsed light source, is known to be used for illumination where the main lighting is not enough. Built-in flashes are usually not very powerful, but their impulse is enough to illuminate the foreground. On semi-professional and professional cameras there is also a contact for connecting a much more powerful external flash, it is called a “hot shoe”.

These are, in general, the basic elements and principles of operation of a digital camera. Agree, when you know how the device works, it is easier to achieve high-quality results.

Electronic digital signatures are now widely known: many modern companies are gradually switching to electronic document management, and you have probably encountered them in everyday life too. In a nutshell, the idea is very simple: there is a certification authority, there is a key generator, a little more magic, and voila, all your documents are signed. It remains to figure out what kind of magic makes a digital signature work.

Roadmap

This is the fifth lesson in the “Dive into Crypto” series. All lessons in the series in chronological order:

1. Key generation

The strength of RSA rests on the difficulty of factoring large numbers. In other words, it is very hard to find by brute force the prime numbers whose product gives the modulus n. Keys are generated the same way for signing as for encryption.
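A toy version of the key generation fits in a few lines of Python. The primes here are absurdly small, chosen only so the arithmetic stays visible; real keys use primes hundreds of digits long:

```python
# Toy RSA key generation with deliberately tiny primes.
from math import gcd

p, q = 61, 53             # two secret primes (far too small for real use)
n = p * q                 # modulus, part of both keys: 3233
phi = (p - 1) * (q - 1)   # Euler's totient: 3120

e = 17                    # public exponent, must be coprime with phi
assert gcd(e, phi) == 1

d = pow(e, -1, phi)       # private exponent: e*d = 1 (mod phi)
print((e, n))             # public key:  (17, 3233)
print((d, n))             # private key: (2753, 3233)
```

Factoring 3233 back into 61 and 53 is instant; with 1024-bit n it is the hard problem the whole scheme leans on.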


Once the keys have been generated, you can begin to calculate the electronic signature.

2. Calculation of electronic signature
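The signing step is a single modular exponentiation: the digest h of the message is raised to the private exponent d modulo n, that is, s = h^d mod n. A toy sketch with the same tiny demo numbers; the byte-sum "hash" is purely illustrative (real schemes use a cryptographic hash such as SHA-256, with padding):

```python
# Toy RSA signing: s = h^d mod n, where h is the message digest
# and (d, n) is the private key.
n, d = 3233, 2753  # toy private key (from the tiny primes 61 and 53)

def toy_hash(message):
    """Stand-in digest for illustration only; real schemes use SHA-2."""
    return sum(message) % n

def sign(message, d, n):
    h = toy_hash(message)
    return pow(h, d, n)  # modular exponentiation with the private key

signature = sign(b"pay 100 rubles", d, n)
print(signature)
```

Only the holder of d can produce s, yet anyone with the public exponent can undo the exponentiation and check it, which is the whole point of the scheme.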


3. Electronic signature verification
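Verification is the mirror operation with the public key: compute s^e mod n and compare the result with the digest of the received message; any tampering changes the digest and the check fails. A toy sketch with the same demo numbers (byte-sum "hash" for illustration only):

```python
# Toy RSA verification: the signature s is valid if s^e mod n
# equals the digest of the received message.
n, e, d = 3233, 17, 2753  # toy key pair (primes 61 and 53)

def toy_hash(message):
    return sum(message) % n  # stand-in digest, illustration only

def verify(message, signature, e, n):
    return pow(signature, e, n) == toy_hash(message)

message = b"pay 100 rubles"
signature = pow(toy_hash(message), d, n)  # signed with the private key

print(verify(message, signature, e, n))            # True
print(verify(b"pay 900 rubles", signature, e, n))  # False: digest changed
```

Note that verification needs only the public pair (e, n): the private exponent never leaves the signer.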


RSA, as we know, is nearing retirement, because computing power is growing by leaps and bounds; the day is not far off when a 1024-bit RSA key can be cracked in minutes. But quantum computers are a topic for next time.

In general, you should not rely on the strength of this RSA signature scheme, especially with such “crypto-strong” keys as in our example.




