Computer networks and telecommunications

Topic 9. Telecommunications

Lecture outline

1. Telecommunications and computer networks

2. Characteristics of local and global networks

3. System software

4. OSI model and information exchange protocols

5. Data transmission media, modems

6. Telecommunication capabilities of information systems

7. Capabilities of the worldwide Internet network

8. Prospects for creating an information highway

Telecommunications and computer networks

Communication is the transfer of information between people, carried out using various means (speech, symbolic systems, communication systems). As communication developed, telecommunications appeared.

Telecommunications is the transfer of information over a distance using technical means (telephone, telegraph, radio, television, etc.).

Telecommunications are an integral part of the country's industrial and social infrastructure and are designed to meet the needs of individuals, legal entities and public authorities for telecommunications services. Thanks to the emergence and development of data transmission networks, a new, highly efficient means of interaction between people has appeared - computer networks. The main purpose of computer networks is to provide distributed data processing and to increase the reliability of information and management decisions.

A computer network is a collection of computers and various devices that provides information exchange between computers on the network without the use of any intermediate storage media.

A related term is the network node. A network node is a device connected to other devices as part of a computer network. Nodes can be computers or special network devices such as a router, switch, or hub. A network segment is a part of the network bounded by its nodes.

A computer connected to a computer network is also called a “workstation.” Computers on a network are divided into workstations and servers. At workstations, users solve application problems (work in databases, create documents, perform calculations). A server serves the network and provides its own resources to all network nodes, including workstations.

Computer networks are used in various fields, affect almost all areas of human activity and are an effective tool for communication between enterprises, organizations and consumers.

The network provides faster access to various sources of information. Using the network reduces resource redundancy. By connecting several computers together, you can get a number of advantages:

· expand the total amount of available information;


· share a single resource among all computers (a common database, a network printer, and so on);

· simplify the procedure for transferring data from computer to computer.

Naturally, the total amount of information accumulated on computers connected to a network is incomparably greater than on a single computer. As a result, the network provides a new level of employee productivity and effective communication between the company and its manufacturers and customers.

Another purpose of a computer network is to ensure the efficient provision of various computer services to network users by organizing their access to resources distributed in this network.

In addition, an attractive feature of networks is the availability of e-mail and day-planning programs. Thanks to them, managers of large enterprises can quickly and effectively interact with a large staff of employees or business partners, and planning and adjusting the activities of the entire company can be carried out with much less effort than without networks.

Computer networks as a means of realizing practical needs find the most unexpected applications, for example: selling air and railway tickets; access to information from reference systems, computer databases and data banks; ordering and purchasing consumer goods; payment of utility costs; exchange of information between the teacher’s workplace and students’ workplaces (distance learning) and much more.

Thanks to the combination of database technologies and computer telecommunications, it has become possible to use so-called distributed databases. Huge amounts of information accumulated by humanity are distributed across various regions, countries, cities, where they are stored in libraries, archives, and information centers. Typically, all large libraries, museums, archives and other similar organizations have their own computer databases that contain the information stored in these institutions.

Computer networks allow access to any database that is connected to the network. This relieves network users from the need to maintain a giant library and makes it possible to significantly increase the efficiency of searching for the necessary information. If a person is a user of a computer network, then he can make a request to the appropriate databases, receive an electronic copy of the necessary book, article, archival material over the network, see what paintings and other exhibits are in a given museum, etc.

Thus, the creation of a unified telecommunications network should become a main direction for our state, guided by the following principles (the principles are taken from the Law of Ukraine “On Communications” dated February 20, 2009):

  1. consumer access to publicly available telecommunications services that they need to satisfy their own needs and to participate in political, economic and social life;
  2. interaction and interconnection of telecommunication networks to ensure the possibility of communication between consumers of all networks;
  3. ensuring the sustainability of telecommunication networks and managing these networks taking into account their technological features, on the basis of uniform standards, norms and rules;
  4. state support for the development of domestic production of technical telecommunications means;

5. encouraging competition in the interests of consumers of telecommunication services;

6. increasing the volume of telecommunications services, their list and the creation of new jobs;

7. introduction of world achievements in the field of telecommunications, attraction and use of domestic and foreign material and financial resources, the latest technologies, management experience;

8. promoting the expansion of international cooperation in the field of telecommunications and the development of the global telecommunications network;

9. ensuring consumer access to information on the procedure for obtaining and the quality of telecommunications services;

10. efficiency, transparency of regulation in the field of telecommunications;

11. creation of favorable conditions for activity in the field of telecommunications, taking into account the characteristics of technology and the telecommunications market.

The purpose of teaching students the basics of computer networks is to provide knowledge of the theoretical and practical foundations of LANs and WANs, of network applications and applications for creating web pages and sites, of organizing computer security and protecting information in networks, as well as of doing business on the Internet.

A computer network is a collection of computers that can communicate with each other using communication equipment and software.

Telecommunications is the transmission and reception of information such as sound, images, data and text over long distances via electromagnetic systems: cable channels, fiber optic channels, radio channels and other communication channels. A telecommunications network is a set of technical and software means through which telecommunications are carried out. Telecommunication networks include:

1. Computer networks (for data transmission);

2. Telephone networks (transmission of voice information);

3. Radio networks (transmission of voice information; broadcast services);

4. Television networks (transmission of voice and images; broadcast services).

Why are computing or computer networks needed? Computer networks are created for the purpose of accessing system-wide resources (information, software and hardware) distributed (decentralized) in the network. On a territorial basis, networks are divided into local and territorial (regional and global).

It is necessary to distinguish between computer and terminal networks. Computer networks connect computers, each of which can work autonomously. Terminal networks usually connect powerful computers (mainframes) with terminals (input and output devices). An example of terminal devices and networks is a network of ATMs or ticket offices.

The main difference between a LAN and a WAN is the quality of the communication lines used and the fact that in a LAN there is only one path for transmitting data between computers, while in a WAN there are many (there is redundancy of communication channels). Since the communication lines in a LAN are of higher quality, the speed of information transfer in a LAN is much higher than in a WAN. But LAN technologies are constantly penetrating into WANs and vice versa, which significantly improves the quality of networks and expands the range of services provided. Thus, the differences between LANs and WANs are gradually being smoothed out. The trend toward convergence is characteristic not only of LANs and WANs, but also of other types of telecommunication networks, including radio, telephone and television networks. Telecommunication networks consist of the following components: access networks, backbones, and information centers. A computer network can be represented as a multilayer model consisting of the following layers:

· computers;

· communication equipment;

· operating systems;

· network applications.

Computer networks use different types and classes of computers. Computers and their characteristics determine the capabilities of computer networks. Communication equipment includes modems, network cards, network cables and intermediate network equipment. Intermediate equipment includes transceivers, repeaters, hubs, bridges, switches, routers and gateways.

To ensure the interaction of software and hardware in computer networks, uniform rules - standards - were adopted that define the algorithm for transmitting information in networks. Network protocols, which define how equipment interacts on a network, were adopted as these standards. Since the interaction of equipment on a network cannot be described by a single protocol, a multi-level approach was used to develop network interaction tools. As a result, the seven-layer Open Systems Interconnection (OSI) model was developed. This model divides communication functions into seven levels: application, presentation, session, transport, network, data link and physical. A set of protocols sufficient to organize the interaction of equipment on a network is called a protocol stack. The most popular stack is TCP/IP. This stack is used to connect computers on the Internet and in corporate networks.
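As a simple illustration of the layering just described (this sketch is not part of the original lecture; Python and the loopback address are used purely for demonstration), an application only names a destination and hands bytes to the transport layer, while TCP, IP and the lower layers of the stack are handled by the operating system:

```python
import socket
import threading

def echo_once(listener: socket.socket) -> None:
    """Application layer: accept one connection and echo the received bytes."""
    conn, _addr = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024))     # TCP delivery and IP routing are done by the OS stack

# Transport layer: TCP stream socket; network layer: IPv4 (AF_INET).
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))           # port 0 lets the OS choose a free port
listener.listen(1)
threading.Thread(target=echo_once, args=(listener,), daemon=True).start()

with socket.create_connection(listener.getsockname()) as client:
    client.sendall(b"hello over TCP/IP")
    print(client.recv(1024))              # b'hello over TCP/IP'
listener.close()
```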

Protocols are implemented by stand-alone and network operating systems (the communication tools included in the OS), as well as by telecommunications equipment (bridges, switches, routers, gateways). Network applications include e-mail applications (Outlook Express, The Bat, Eudora and others) and browsers - programs for viewing web pages (Internet Explorer, Opera, Mozilla Firefox and others). Applications for creating websites include Macromedia HomeSite Plus, WebCoder, Macromedia Dreamweaver, Microsoft FrontPage and others. The global information network Internet is of particular interest. The Internet is an association of transnational computer networks with various types and classes of computers and network equipment, operating using various protocols and transmitting information through various communication channels. The Internet is a powerful means of telecommunication, of storing and providing information, of conducting electronic business and of distance (interactive or online) learning.

Ontopsychology has developed a whole series of rules and recommendations for shaping the personality of a manager, businessman, or top-level executive, which can be followed by almost any manager who is able to understand their usefulness and necessity. From the entire set of these recommendations, it is advisable to highlight and summarize the following:

1. There is no need to destroy your image by dishonest actions or fraud.

2. You should not underestimate your business partner, consider him more stupid than yourself, try to deceive him, or offer him a low-grade market scheme.

3. Never associate with those who are unable to manage their own affairs.

If you have a person working on your team who fails in all his endeavors, then you can predict that in a few years you will also experience collapse or large losses. Pathological losers, even if they are honest and intelligent, are characterized by unconscious programming, immaturity and unwillingness to take responsibility for their lives. This is already social psychosomatics.

4. Never hire a fool for your team. You need to stay away from him in work and in your personal life. Otherwise, unpredictable consequences for the manager may occur.

5. Never take on your team someone who is frustrated with you.

When selecting personnel, do not be guided by devotion, or be seduced by flattery or sincere love. Such people may prove incompetent in difficult work situations. You need to choose those who believe in their work, who use work to achieve their own interests, who want to make a career and improve their financial situation. By serving the leader (master) well, such a person can achieve all these goals and satisfy his personal egoism.

6. In order to earn money and prosper, you must be able to serve your partners and cultivate your own behavior.

The main tactic is not to please your partner, but to study his needs and interests and take them into account in business communication. It is necessary to build value-based relationships with the bearers of wealth and success.

7. You should never mix personal and business relationships, personal life and work.

An excellent leader should be distinguished by refined taste in his personal life and the highest reasonableness and extraordinary style in the business sphere.

8. A true leader needs the mentality of being the only person who has the absolute right to the final idea.

It is known that real leaders owe the success of their biggest projects to their own silence.

9. When making a decision, one must focus on global success for the company, i.e. when the result will benefit everyone who works for the leader and whom he leads.

In addition, in order for the solution to be optimal it is necessary:

preserving everything positive that has been created up to now;

careful rationality based on available means;

rational intuition (if, of course, it is inherent in the leader, since this is already the quality of a manager - a leader)

10. The law must be observed, circumvented, adapted to and used.

This formulation, despite its inconsistency, has a deep meaning and in any case means that the activities of a leader should always be in the right field, but this can be done in different ways. Law represents the power structure of society, the connective tissue between the leader and others physically aligned for or against him.

11. You should always follow a plan to get ahead of the situation and not pay too much attention to an erroneous action.

In the absence of the strictest control on the manager's part, the situation takes control of him, and ultimately, although he could have done everything, he does nothing, and stress arises and rapidly develops.

12. It is always necessary to create an everyday aesthetic, because achieving perfection in the little things leads to great goals.

The whole is achieved through the orderly coordination of the parts. Objects left in disarray are always the protagonists. The leader, depriving himself of aesthetics, robs his own aesthetic ability.

To lead effectively, you must have proportionality in 4 areas: individual personal, family, professional and social.

13. In order to avoid the conflicts that beset us every day, we must not forget about 2 principles: avoid hatred and revenge; never take someone else’s property that does not belong to you in accordance with the intrinsic value of things.

In general, all managers, merchants and businessmen, regional and party leaders can be divided into 2 classes:

The first class consists of individuals who, at their core, pursue personal and (or) social, humanistic, moral goals in their activities.

The second class pursues personal and (or) social egoistic, monopolistic goals (in the interests of a group of persons).

The first class of people is able to realize the need to use the rules and recommendations discussed above. A significant part of these people, due to their decency and rational intuition, are already using them, even without being familiar with these recommendations.

The second group of people, who can be conditionally called new Russians (“NR”), are incapable of understanding this problem due to their personal qualities and due to the lack, unfortunately, of a civilized socio-economic environment in the country.

Communication with this group has a number of negative aspects, because the “NR” have a number of negative professionally important qualities (Table 23).

Table 23

Negative professionally important qualities (PVK) of the “NR”

Psychological qualities:

1. Irresponsibility
2. Aggressiveness
3. Permissiveness
4. Impunity
5. Vagueness of the concept of “legality of actions”
6. Inflated professional self-esteem
7. Categoricalness
8. Arrogance
9. Low professional and interpersonal competence

Psychophysiological qualities:

1. Unproductive and illogical thinking
2. Conservatism of thinking
3. Lack of quick thinking in non-standard situations
4. Instability of attention
5. Poor working memory
6. Inability to coordinate different modes of information perception
7. Slow reaction to changing situations
8. Inability to act unconventionally
9. Lack of flexibility in decision making

These negative aspects of communication give rise to a number of conflicts, which are not always of a personal nature and, due to their widespread and often specific character, give rise to a number of public, departmental and state problems and, ultimately, affect the psychological safety of leaders as individuals and even the national security of the country. This situation can be reversed only through the purposeful formation of a civilized socio-economic environment oriented toward humanistic, moral, national goals and through widespread promotion of the achievements of ontopsychology in the field of shaping the personality of top-level managers. The ultimate goal of this process is to change the value orientations of the widest circles of the population. National security is obviously affected by the ratio of the number of first- and second-class persons. It is quite possible that at present the number of people in the second group is greater than in the first. By how much the first class must outnumber the second for national security to be ensured is a complex question. Perhaps the standard condition for the reliability of statistical hypotheses (95%) should be met. In any case, when the activities listed above are carried out, the number of people in the first class will increase and the number in the second will decrease, and this process itself will already have a beneficial effect.


Mironova E.E. Collection of psychological tests. Part 2.

Computer networks and telecommunications


The following types of networks are distinguished:

Local area networks (LAN, Local Area Network) are networks that are geographically small in size (a room, a floor of a building, a building or several adjacent buildings). As a rule, cable is used as the data transmission medium, although wireless networks have recently gained popularity. The close location of the computers is dictated by the physical laws of signal transmission through the cables used in the LAN or by the power of the wireless transmitter. LANs can connect from a few to several hundred computers.

The simplest LAN, for example, can consist of two PCs connected by a cable or wireless adapters.

Internetworks, or network complexes, are two or more LANs united by special devices into a larger network. They are, in essence, networks of networks.

Global networks (WAN, Wide Area Network) are LANs connected by means of remote data transmission.

Corporate networks are global networks run by a single organization.

From the point of view of logical organization, networks are divided into peer-to-peer and hierarchical.


Wireless connections using microwave radio waves can be used to organize networks within large premises such as hangars or pavilions, where the use of conventional communication lines is difficult or impractical. In addition, wireless links can connect remote segments of local networks at distances of 3-5 km (with a wave-channel antenna) or 25 km (with a directional parabolic antenna), subject to line of sight. Organizing a wireless network is significantly more expensive than a conventional one.

To organize educational LANs, twisted-pair cable is most often used, as the cheapest option, since the requirements for data transfer speed and line length are not critical.

To connect computers to LAN communication lines, network adapters (or, as they are sometimes called, network cards) are needed. The best known are adapters of the following three types:

· ArcNet;

· EtherNet;

· Token Ring.

INTRODUCTION

A computer network is an association of several computers for the joint solution of information, computing, educational and other problems.

One of the first problems that arose during the development of computer technology and that required the creation of a network of at least two computers was to ensure reliability many times greater than a single machine of that time could provide when managing a critical process in real time. Thus, when launching a spacecraft, the required rate of reaction to external events exceeds human capabilities, and failure of the control computer threatens irreparable consequences. In the simplest scheme, the work of this computer is duplicated by a second identical one, and if the active machine fails, the contents of its processor and RAM are very quickly transferred to the second, which takes over control (in real systems, of course, everything is much more complicated).

Here are examples of other, very heterogeneous, situations in which the unification of several computers is necessary.

A. In the simplest, cheapest educational computer class, only one computer - the teacher's workstation - has a disk drive that allows you to save programs and data for the entire class on disk, and a printer that can be used to print texts. To exchange information between the teacher’s workstation and the students’ workstations, a network is needed.

B. To sell railway or airline tickets, in which hundreds of cashiers across the country simultaneously participate, a network is needed that connects hundreds of computers and remote terminals at ticket sales points.

C. Today there are many computer databases and data banks on various aspects of human activity. To access the information stored in them, you need a computer network.

Computer networks are breaking into people's lives - both in professional activities and in everyday life - in the most unexpected and massive way. Knowledge about networks and skills in working with them are becoming necessary for many people.

Computer networks have given rise to significantly new information processing technologies - network technologies. In the simplest case, network technologies allow the sharing of resources - large-capacity storage devices, printing devices, Internet access, databases and data banks. The most modern and promising approaches to networks involve the use of a collective division of labor when working together with information - developing various documents and projects, managing an institution or enterprise, etc.

The simplest type of network is the so-called peer-to-peer network, which provides communication between personal computers of end users and allows the sharing of disk drives, printers, and files.

More developed networks, in addition to end-user computers - workstations - include special dedicated computers - servers. A server is a computer that performs special functions on the network, servicing other computers on the network - workstations. There are different types of servers: file servers, telecommunication servers, servers for mathematical calculations, database servers.

A very popular and extremely promising technology for processing information on the network today is called “client-server”. The client-server methodology assumes a deep separation of the functions of computers on the network. In this case, the functions of the “client” (by which we mean a computer with the appropriate software) include

Providing a user interface tailored to the user's specific operational responsibilities and permissions;

Generating requests to the server, without necessarily informing the user about it; ideally, the user does not delve into the technology of communication between the computer on which he works and the server;

Analysis of server responses to requests and presentation of them to the user. The main function of the server is to perform specific actions in response to client requests (for example, solving a complex mathematical problem, searching for data in a database, connecting a client to another client, etc.); the server itself does not initiate any interactions with the client. If the server that the client has contacted is unable to solve the problem due to lack of resources, then ideally it itself finds another, more powerful server and transfers the task to it, becoming in turn a client, but without needlessly informing the initial client about this. Note that the “client” is not at all a remote terminal of the server. The client can be a very powerful computer which, due to its capabilities, solves problems independently.
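A minimal sketch of this division of labour (an illustration only, not taken from the text; the request format "sum ..." is invented): the server performs the internal work in response to a request, while the client only forms the request and presents the answer.

```python
import socket
import threading

def handle_one_request(listener: socket.socket) -> None:
    """Server side: wait for a request, do the work, send back the result."""
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024).decode()              # e.g. "sum 10 20 30"
        numbers = [float(x) for x in request.split()[1:]]
        conn.sendall(str(sum(numbers)).encode())        # the server never initiates contact itself

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=handle_one_request, args=(listener,), daemon=True).start()

# Client side: form the request, send it, and present the server's answer to the user.
with socket.create_connection(listener.getsockname()) as client:
    client.sendall(b"sum 10 20 30")
    print("Server replied:", client.recv(1024).decode())   # Server replied: 60.0
listener.close()
```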

Computer networks and network information processing technologies have become the basis for building modern information systems. The computer should now be considered not as a separate processing device, but as a “window” into computer networks, a means of communication with network resources and other network users.

LOCAL NETWORKS

HARDWARE

Local networks (LANs) unite a relatively small number of computers (usually from 10 to 100, although much larger ones are occasionally found) within one room (an educational computer class), building or institution (for example, a university). The traditional name - local area network (LAN) - is rather a tribute to the times when networks were mainly used to solve computing problems; today, in 99% of cases, we are talking exclusively about the exchange of information in the form of text, graphic and video images, and numerical arrays. The usefulness of LANs is explained by the fact that from 60% to 90% of the information an institution needs circulates within it, without needing to go outside.

The creation of automated enterprise management systems (ACS) had a great influence on the development of LANs. ACS include several automated workstations (AWS), measuring systems, and control points. Another important field of activity in which LANs have proven their effectiveness is the creation of educational computer technology classes (ECT).

Thanks to the relatively short lengths of communication lines (usually no more than 300 meters), information can be transmitted digitally over a LAN at a high transmission speed. At long distances, this transmission method is unacceptable due to the inevitable attenuation of high-frequency signals; in these cases, it is necessary to resort to additional technical (digital-to-analog conversions) and software (error correction protocols, etc.) solutions.
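The error-handling protocols mentioned above add redundancy to the transmitted data so that the receiver can notice corruption and request retransmission. A minimal sketch of this idea (an illustration, not a protocol from the text) using a CRC32 checksum:

```python
import zlib

def frame(payload: bytes) -> bytes:
    """Sender: append a 4-byte CRC32 checksum to the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def is_intact(framed: bytes) -> bool:
    """Receiver: recompute the checksum; False means the frame must be retransmitted."""
    payload, received_crc = framed[:-4], int.from_bytes(framed[-4:], "big")
    return zlib.crc32(payload) == received_crc

sent = frame(b"digital data over a long line")
corrupted = bytes([sent[0] ^ 0x01]) + sent[1:]   # simulate a single-bit error in transit

print(is_intact(sent))        # True  - frame accepted
print(is_intact(corrupted))   # False - error detected, retransmission requested
```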

A characteristic feature of a LAN is the presence of a high-speed communication channel connecting all subscribers for transmitting information in digital form. There are wired and wireless (radio) channels. Each of them is characterized by certain parameter values that are essential from the point of view of LAN organization:

Data transfer rates;

Maximum line length;

Noise immunity;

Mechanical strength;

Convenience and ease of installation;

Cost.

Currently, four types of network cables are commonly used:

Coaxial cable;

Unprotected twisted pair;

Protected twisted pair;

Fiber optic cable.

The first three types of cables transmit an electrical signal through copper conductors. Fiber optic cables transmit light along glass fibers.

Most networks allow several cabling options.

Coaxial cables consist of two conductors surrounded by insulating layers. The first layer of insulation surrounds the central copper wire. This layer is braided from the outside with an external shielding conductor. The most common coaxial cables are thick and thin "Ethernet" cables. This design provides good noise immunity and low signal attenuation over distances.

There are thick (about 10 mm in diameter) and thin (about 4 mm) coaxial cables. Having advantages in noise immunity, strength, and line length, a thick coaxial cable is more expensive and more difficult to install (it is more difficult to pull through cable channels) than a thin one. Until recently, a thin coaxial cable represented a reasonable compromise between the basic parameters of LAN communication lines and in Russian conditions was most often used to organize large LANs of enterprises and institutions. However, thicker, more expensive cables provide better data transmission over longer distances and are less susceptible to electromagnetic interference.

Twisted pairs are two wires twisted together with six turns per inch to provide protection from electromagnetic interference and a consistent impedance, or electrical resistance. Another name commonly used for this wire is "IBM Type-3". In the USA, such cables are laid during the construction of buildings to provide telephone communication. However, using telephone wire, especially when it is already laid in a building, can create big problems. First, unprotected twisted pairs are susceptible to electromagnetic interference, such as electrical noise generated by fluorescent lamps and moving elevators. Interference can also be caused by signals transmitted in a closed loop in telephone lines running along a local network cable. In addition, a poor-quality twisted pair may have a variable number of turns per inch, which distorts its rated electrical resistance.

It is also important to note that telephone wires are not always laid in a straight line. A cable connecting two adjacent rooms can actually go around half the building. Underestimating the cable length in this case may result in it actually exceeding the maximum permissible length.

Protected twisted pairs are similar to unprotected twisted pairs, except that they use thicker wires and are protected from external influences by a layer of insulation. The most common type of such cable used in local networks, "IBM Type-1", is a protected cable with two twisted pairs of solid wire. In new buildings, "Type-2" cable may be a better option, since it includes, in addition to the data line, four unprotected pairs of solid wire for transmitting telephone conversations. Thus, "Type-2" allows one cable to be used to transmit both telephone conversations and data over a local network.

Protection and careful adherence to twists per inch make rugged twisted pair cable a reliable alternative cabling solution. However, this reliability comes at a cost.

Fiber optic cables transmit data in the form of light pulses along glass “wires.” Most LAN systems today support fiber optic cabling. Fiber optic cable has significant advantages over any copper cable option. Fiber optic cables provide the highest transmission speeds; they are more reliable because they are not subject to loss of information packets due to electromagnetic interference. Optical cable is very thin and flexible, making it easier to transport than heavier copper cable. However, the most important thing is that only optical cable has sufficient bandwidth, which will be required for faster networks in the future.

However, the price of fiber optic cable is significantly higher than that of copper. Compared to copper cable, installation of optical cable is more labor-intensive, since its ends must be carefully polished and aligned to ensure a reliable connection. Nevertheless, there is now a transition to fiber optic lines, which are completely immune to interference and are beyond competition in terms of bandwidth. The cost of such lines is steadily decreasing, and the technological difficulties of joining optical fibers are being successfully overcome.


ALL-RUSSIAN CORRESPONDENCE FINANCIAL AND ECONOMIC INSTITUTE

DEPARTMENT OF AUTOMATED PROCESSING OF ECONOMIC INFORMATION

COURSE WORK

In the discipline "Computer Science"

on the topic “Computer networks and telecommunications”

Performed:

Plaksina Natalya Nikolaevna

Specialty: GMU

Record book number 07МГБ03682

Checked:

Sazonova N.S.

Chelyabinsk - 2009

  • INTRODUCTION
  • THEORETICAL PART
    • 1. CLASSIFICATION OF COMPUTER NETWORKS
  • 2. LAN CONSTRUCTION TOPOLOGY
  • 3. METHODS OF ACCESS TO THE TRANSMISSION MEDIA IN THE LAN
  • 4. CORPORATE INTERNET NETWORK
  • 5. PRINCIPLES, TECHNOLOGIES, INTERNET PROTOCOLS
  • 6. INTERNET DEVELOPMENT TRENDS
  • 7. MAIN COMPONENTS WWW, URL, HTML
  • PRACTICAL PART
  • CONCLUSION
  • BIBLIOGRAPHY

INTRODUCTION

In recent years, the global Internet has become a global phenomenon. The network, which until recently was used by a limited number of scientists, government officials and educational workers in their professional activities, has become available to large and small corporations and even individual users.

Initially, the Internet was a fairly complex system for the average user. As soon as the Internet became available to businesses and private users, development began on software for working with various useful Internet services, such as FTP, Gopher, WAIS and Telnet. Specialists also created a completely new type of service, for example, the World Wide Web - a system that allows text, graphics and sound to be integrated.

In this work I will look at the structure of the Network, its tools and technologies and the applications of the Internet. The question I am studying is extremely relevant because the Internet today is experiencing a period of explosive growth.

THEORETICAL PART

1. CLASSIFICATION OF COMPUTER NETWORKS

Networks of computers have many advantages over a collection of individual systems, including the following:

· Resource sharing.

· Increasing the reliability of the system.

· Load distribution.

· Extensibility.

Resource sharing.

Network users can have access to certain resources of all network nodes. These include, for example, data sets, free memory on remote nodes, computing power of remote processors, etc. This allows you to save significant money by optimizing the use of resources and their dynamic redistribution during operation.

Increasing the reliability of system operation.

Since the network consists of a collection of individual nodes, if one or more nodes fail, other nodes will be able to take over their functions. At the same time, users may not even notice this; the redistribution of tasks will be taken over by the network software.

Load distribution.

In networks with variable load levels, it is possible to redistribute tasks from some network nodes (with increased load) to others where free resources are available. Such redistribution can be done dynamically during operation; moreover, users may not even be aware of the peculiarities of scheduling tasks on the network. These functions can be taken over by network software.

Extensibility.

The network can be easily expanded by adding new nodes. Moreover, the architecture of almost all networks makes it easy to adapt network software to configuration changes; this can even be done automatically.

However, from a security perspective, these strengths turn into vulnerabilities, creating serious problems.

The features of working on a network are determined by its dual nature: on the one hand, the network should be considered as a single system, and on the other, as a set of independent systems, each of which performs its own functions and has its own users. The same duality is manifested in the logical and physical perception of the network: at the physical level, the interaction of individual nodes is carried out using messages of various types and formats, which are interpreted by protocols. At the logical level (i.e., from the point of view of upper-level protocols), the network is presented as a set of functions distributed over various nodes but connected into a single complex.

Networks are divided as follows:

1. By network topology (classification by organization at the physical level).

Common bus.

All nodes are connected to a common high-speed data bus. They are simultaneously configured to receive a message, but each node can only receive the message that is intended for it. The address is identified by the network controller, and there can be only one node in the network with a given address. If two nodes are simultaneously busy transmitting a message (a packet collision), then one or both of them stop, wait for a random time interval, and then resume the attempt to transmit (the collision resolution method). Another case is possible: while a node is transmitting a message over the network, other nodes cannot begin transmission (the conflict prevention method). This network topology is very convenient: all nodes are equal, the logical distance between any two nodes is 1, and the message transmission speed is high. The "common bus" network organization and the corresponding lower-level protocols were first developed jointly by DIGITAL and Rank Xerox; it was called Ethernet.
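The collision-resolution method described above can be illustrated with a toy simulation (not from the source; the slot count and backoff range are arbitrary): stations that collide each wait a random number of time slots before trying again, so eventually exactly one of them seizes the shared bus.

```python
import random

def transmit_on_shared_bus(stations, max_slots=20, seed=1):
    """Toy 'common bus': a slot succeeds only if exactly one station transmits in it."""
    rng = random.Random(seed)
    wait = {s: 0 for s in stations}                   # slots each station still has to wait
    for slot in range(max_slots):
        ready = [s for s in stations if wait[s] == 0]
        if len(ready) == 1:
            return f"slot {slot}: station {ready[0]} transmitted successfully"
        for s in ready:                               # collision: every ready station backs off
            wait[s] = rng.randint(1, 4)               # wait a random number of slots
        wait = {s: max(0, w - 1) for s, w in wait.items()}
    return "no station succeeded within the slot limit"

print(transmit_on_shared_bus(["A", "B"]))
```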

Ring.

The network is built in the form of a closed loop of unidirectional channels between stations. Each station receives messages via an input channel; the beginning of the message contains address and control information. Based on it, the station decides to make a copy of the message and remove it from the ring or transmit it via the output channel to a neighboring node. If no message is currently being transmitted, the station itself can transmit a message.

Ring networks use several different control methods:

Daisy chain - control information is transmitted through separate sets (chains) of ring computers;

Control token - control information is formatted as a specific bit pattern circulating around the ring; only when a station receives the token can it issue a message to the network (the most well-known method, called token ring; see the sketch after this list);

Segmental - a sequence of segments circulates around the ring. Having found an empty one, the station can place a message in it and transmit it to the network;

Register insertion - a message is loaded into a shift register and transmitted to the network when the ring is free.
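The control-token method referred to above can be sketched as follows (a toy model, not from the source; the station names and frames are invented): the token visits the stations in ring order, and only the station currently holding it may place a frame on the network.

```python
from itertools import cycle

def token_ring(stations, pending, rounds=2):
    """Pass the token around the ring; a station transmits only while holding the token."""
    events = []
    holder = cycle(stations)                       # the token visits stations in ring order
    for _ in range(rounds * len(stations)):
        station = next(holder)
        if pending.get(station):                   # this station has a frame queued
            frame = pending[station].pop(0)
            events.append(f"{station} sends '{frame}' while holding the token")
        else:
            events.append(f"{station} passes the token on")
    return events

queues = {"A": ["frame-1"], "B": [], "C": ["frame-2", "frame-3"]}
for event in token_ring(["A", "B", "C"], queues):
    print(event)
```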

Star.

The network consists of one hub node and several terminal nodes connected to it, not directly connected to each other. One or more terminal nodes can be hubs of another network, in which case the network acquires a tree topology.

The network is managed entirely by the hub; terminal nodes can communicate with each other only through it. Typically, only local data processing is performed on terminal nodes. Processing of data relevant to the entire network is carried out at the hub; such processing is called centralized. Network management is usually carried out using a polling procedure: at certain intervals the hub polls the terminal stations in turn to see whether they have a message for it. If there is one, the terminal station transmits the message to the hub; if not, the next station is polled. The hub can transmit a message to one or more terminal stations at any time.
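The polling procedure can be sketched as a simple loop (a toy model, not from the source; station names and messages are invented): the hub asks each terminal station in turn whether it has a message and relays whatever it collects.

```python
def poll_terminals(outboxes):
    """Hub-side polling loop: visit each terminal in turn and collect at most one message."""
    delivered = []
    for station, queue in outboxes.items():
        if queue:                                       # "do you have a message for me?"
            delivered.append((station, queue.pop(0)))   # the hub relays it onwards
    return delivered

outboxes = {"T1": ["report"], "T2": [], "T3": ["query"]}
print(poll_terminals(outboxes))   # [('T1', 'report'), ('T3', 'query')]
```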

2. By network size:

· Local.

· Territorial.

Local.

A data network connecting a number of nodes within one local area (a room, an organization). Network nodes are usually equipped with the same type of hardware and software (although this is not necessary). Local networks provide high speeds of information transfer. Local networks are characterized by short (no more than a few kilometers) communication lines, a controlled operating environment, a low probability of errors, and simplified protocols. Gateways are used to connect local networks with territorial networks.

Territorial.

They differ from local ones by the greater length of communication lines (city, region, country, group of countries), which can be provided by telecommunications companies. A territorial network can connect several local networks, individual remote terminals and computers, and can be connected to other territorial networks.

Territorial networks rarely use standard topological designs, since they are built to perform other, usually specific, tasks. Therefore, they are usually built with an arbitrary topology, and control is carried out using specific protocols.

3. According to the organization of information processing (classification at the logical level of presentation; here the system is understood as the entire network as a single complex):

Centralized.

Systems of this organization are the most widespread and familiar. They consist of a central node, which implements the entire range of functions performed by the system, and terminals, whose role is limited to partial input and output of information. The role of terminals is mostly played by peripheral devices from which the information processing process is controlled; it can also be performed by display stations or personal computers, both local and remote. All processing (including communication with other networks) is performed through the central node. A feature of such systems is the high load on the central node, which must therefore be a highly reliable and high-performance computer. The central node is the most vulnerable part of the system: its failure disables the entire network. At the same time, security problems in centralized systems are solved most simply and actually come down to protecting the central node.

Another feature of such systems is the inefficient use of the resources of the central node, as well as the inability to flexibly rearrange the nature of work (the central computer must work all the time, which means that some part of it can be idle). Currently, the share of centrally controlled systems is gradually falling.

Distributed.

Almost all nodes of this system can perform similar functions, and each individual node can use the hardware and software of other nodes. The main part of such a system is a distributed OS, which distributes system objects: files, processes (or tasks), memory segments, and other resources. But at the same time, the OS can distribute not all resources or tasks, but only part of them, for example, files and free memory on the disk. In this case, the system is still considered distributed; the number of its objects (functions that can be distributed across individual nodes) is called the degree of distribution. Such systems can be either local or territorial. In mathematical terms, the main function of a distributed system is to map individual tasks to a set of nodes on which they are executed. A distributed system must have the following properties:

1. Transparency, that is, the system must ensure the processing of information regardless of its location.

2. A resource allocation mechanism, which must perform the following functions: ensure interaction of processes and remote calling of tasks, support virtual channels, distributed transactions and naming services.

3. A naming service that is uniform for the entire system, including support for a unified directory service.

4. Implementation of services of homogeneous and heterogeneous networks.

5. Controlling the functioning of parallel processes.

6. Security. In distributed systems, the security problem moves to a qualitatively new level, since it is necessary to control the resources and processes of the entire system as a whole, as well as the transfer of information between system elements. The main components of protection remain the same - access control and information flows, network traffic control, authentication, operator control and security management. However, control in this case becomes more complicated.

A distributed system has a number of advantages that are not inherent in any other organization of information processing: optimal use of resources, resistance to failures (failure of one node does not lead to fatal consequences - it can be easily replaced), etc. However, new problems arise: methods of resource distribution, ensuring security, transparency, etc. Currently, all the capabilities of distributed systems are far from being fully realized.
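One mechanism from the list above, remote calling of tasks, can be sketched with Python's standard XML-RPC modules (chosen here only as a convenient illustration of the idea; the registered function is invented). The caller invokes the function as if it were local, so the location of the work is transparent to it.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# One node of the distributed system offers a function to the others.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another node calls it remotely, but the call looks like an ordinary local call.
host, port = server.server_address
proxy = ServerProxy(f"http://{host}:{port}")
print(proxy.add(2, 3))   # 5

server.shutdown()
```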

Recently, the concept of client-server information processing has become increasingly recognized. This concept is transitional from centralized to distributed and at the same time combines both of the latter. However, client-server is not so much a way of organizing a network as a way of logical presentation and processing of information.

Client-server is an organization of information processing in which all functions performed are divided into two classes: external and internal. External functions consist of user interface support and user-level information presentation functions. Internal ones concern the execution of various requests, the process of information processing, sorting, etc.

The essence of the client-server concept is that the system has two levels of elements: servers that process data (internal functions), and workstations that perform the functions of generating queries and displaying the results of their processing (external functions). There is a stream of requests from the workstations to the server, and in the opposite direction - the results of their processing. There can be several servers in the system and they can perform different sets of lower-level functions (print servers, file and network servers). The bulk of information is processed on servers, which in this case play the role of local centers; information is entered and displayed using workstations.

The distinctive features of systems built on the client-server principle are as follows:

Optimal use of resources;

Partial distribution of the information processing process in the network;

Transparent access to remote resources;

Simplified management;

Reduced traffic;

Possibility of more reliable and simpler protection;

Greater flexibility in using the system as a whole, as well as heterogeneous equipment and software;

Centralized access to certain resources.

Separate parts of one system can be built according to different principles and combined using appropriate matching modules. Each class of networks has its own specific characteristics, both in terms of organization and in terms of protection.

2. LAN CONSTRUCTION TOPOLOGY

The term network topology refers to the path that data travels across a network. There are three main types of topologies: bus, star, and ring.

Figure 1. Bus (linear) topology.

The “common bus” topology involves the use of one cable to which all computers on the network are connected (Fig. 1). In the case of "common bus" the cable is shared by all stations in turn. Special measures are taken to ensure that when working with a common cable, computers do not interfere with each other transmitting and receiving data.

In a common bus topology, all messages sent by individual computers are received by all the other computers connected to the network. Reliability here is higher, since the failure of individual computers does not disrupt the functionality of the network as a whole. However, finding faults in the cable is difficult, and since only one cable is used, a break in it disrupts the entire network.

Figure 2. Star topology.

Figure 2 shows computers connected in a star configuration. In this case, each computer is connected via a special network adapter by a separate cable to a unifying device.

If necessary, you can combine several networks together with a star topology, resulting in branched network configurations.

From a reliability point of view, this topology is not the best solution, since failure of the central node will lead to the shutdown of the entire network. However, when using a star topology, it is easier to find faults in the cable network.

The “ring” topology is also used (Fig. 3). In this case, data is transferred from one computer to another as if in a relay race. If a computer receives data intended for another computer, it passes it on around the ring. If the data is intended for the computer that received it, it is not transmitted further.

The local network can use one of the listed topologies. The choice depends on the number of computers being combined, their relative location and other conditions. You can also combine several local networks with different topologies into a single local network; the result may be, for example, a tree topology.

Figure 3. Ring topology.

3. METHODS OF ACCESS TO THE TRANSMISSION MEDIA IN THE LAN

The undoubted advantages of information processing in computer networks result in considerable difficulties in organizing their protection. Let us note the following main problems:

Sharing of common resources.

Due to the sharing of a large number of resources by various network users, possibly located at a great distance from each other, the risk of unauthorized access greatly increases: on a network it can be carried out more easily and inconspicuously.

Expansion of control zone.

The administrator or operator of a particular system or subnetwork must monitor the activities of users outside its reach, perhaps in another country. At the same time, he must maintain working contact with his colleagues in other organizations.

Combination of various software and hardware.

Connecting several systems, even homogeneous in characteristics, into a network increases the vulnerability of the entire system as a whole. The system is configured to meet its specific security requirements, which may be incompatible with those on other systems. When disparate systems are connected, the risk increases.

Unknown perimeter.

The easy expandability of networks means that it is sometimes difficult to determine the boundaries of a network; the same node can be accessible to users of various networks. Moreover, for many of them it is not always possible to determine exactly how many users have access to a particular node and who they are.

Multiple attack points.

In networks, the same set of data or message can be transmitted through several intermediate nodes, each of which is a potential source of threat. Naturally, this cannot improve the security of the network. In addition, many modern networks can be accessed using dial-up lines and a modem, which greatly increases the number of possible points of attack. This method is simple, easy to implement and difficult to control; therefore it is considered one of the most dangerous. The list of network vulnerabilities also includes communication lines and various kinds of communication equipment: signal amplifiers, repeaters, modems, etc.

Difficulty in managing and controlling access to the system.

Many attacks on a network can be carried out without gaining physical access to a specific node - using the network from remote points. In this case, identifying the offender may be very difficult, if not impossible. In addition, the attack time may be too short to take adequate measures.

At their core, the problems of protecting networks are due to the dual nature of the latter: we talked about this above. On the one hand, the network is a single system with uniform rules for processing information, and on the other hand, it is a collection of separate systems, each of which has its own rules for processing information. In particular, this duality applies to protection issues. An attack on a network can be carried out from two levels (a combination of these is possible):

1. Upper - an attacker uses the properties of the network to penetrate another node and perform certain unauthorized actions. The protection measures taken are determined by the potential capabilities of the attacker and the reliability of the security measures of individual nodes.

2. Lower - an attacker uses the properties of network protocols to violate the confidentiality or integrity of individual messages or of the flow as a whole. Disruption of the message flow can lead to information leakage and even to loss of control over the network. The protocols used must ensure the security of both individual messages and the flow as a whole.

Network protection, like the protection of individual systems, pursues three goals: maintaining the confidentiality of information transmitted and processed on the network, the integrity and availability of resources and network components.

These goals determine actions to organize protection against attacks from the top level. The specific tasks that arise when organizing network protection are determined by the capabilities of high-level protocols: the wider these capabilities, the more tasks have to be solved. Indeed, if the network's capabilities are limited to the transfer of data sets, then the main security problem is to prevent tampering with data sets available for transfer. If the network capabilities allow you to organize remote launch of programs or work in virtual terminal mode, then it is necessary to implement a full range of protective measures.

Network protection should be planned as a single set of measures covering all features of information processing. In this sense, the organization of network protection, the development of security policy, its implementation and protection management are subject to general rules which were discussed above. However, it must be taken into account that each network node must have individual protection depending on the functions performed and the capabilities of the network. In this case, the protection of an individual node must be part of the overall protection. On each individual node it is necessary to organize:

Control of access to all files and other data sets accessible from the local network and from other networks;

Monitoring processes activated from remote nodes;

Control of network traffic;

Effective identification and authentication of users accessing this node from the network;

Controlling access to local node resources available for use by network users;

Control over the dissemination of information within the local network and other networks connected to it.

However, the network has a complex structure: to transfer information from one node to another, the information passes through several stages of transformation. Naturally, all these transformations must contribute to the protection of the transmitted information, since otherwise attacks from the lower level can compromise the network's security. Thus, the protection of the network as a single system is made up of the protection measures of each individual node and the protective functions of the protocols of this network.

The need for security functions in data transfer protocols is again determined by the dual nature of the network: it is a collection of separate systems that exchange information with each other using messages. On the way from one system to another, these messages are transformed by the protocols of all levels. Since they are the most vulnerable element of the network, protocols must be designed to protect these messages in order to maintain the confidentiality, integrity and availability of the information transmitted over the network.

Network software must be protected as part of the network node, since otherwise the operation and security of the network may be compromised by modification of programs or data. At the same time, protocols must implement the requirements for ensuring the security of transmitted information that form part of the overall security policy. The following is a classification of network-specific threats (low-level threats):

1. Passive threats (violation of confidentiality of data circulating on the network) - viewing and/or recording of data transmitted over communication lines:

Viewing a message - an attacker can view the contents of a message transmitted over the network;

Traffic analysis - an attacker can view the headers of packets circulating in the network and, based on the service information contained in them, draw conclusions about the senders and recipients of the packets and about the conditions of transmission (time of sending, message class, security category, etc.); in addition, the attacker can determine message lengths and the volume of traffic.

2. Active threats (violation of the integrity or availability of network resources) - unauthorized use of devices with access to the network to change individual messages or a flow of messages:

Failure of messaging services - an attacker can destroy or delay individual messages or the entire flow of messages;

“Masquerade” - an attacker can assign someone else's identifier to their node or relay and then receive or send messages on someone else's behalf;

Injection of network viruses - transmission of a virus body over a network with its subsequent activation by a user of a remote or local node;

Modification of the message flow - an attacker can selectively destroy, modify, delay, reorder and duplicate messages, as well as insert forged messages.

It is quite obvious that any manipulations described above with individual messages and the flow as a whole can lead to network disruptions or leakage of confidential information. This is especially true for service messages that carry information about the state of the network or individual nodes, about events occurring on individual nodes (remote launch of programs, for example) - active attacks on such messages can lead to loss of control over the network. Therefore, protocols that generate messages and put them into the stream must take measures to protect them and ensure undistorted delivery to the recipient.

The tasks solved by protocols are similar to those solved when protecting local systems: ensuring the confidentiality of information processed and transmitted in the network, the integrity and availability of network resources (components). These functions are implemented using special mechanisms. These include:

Encryption mechanisms, which ensure the confidentiality of transmitted data and/or of information about data flows. The encryption algorithm used in such a mechanism can rely on a secret or a public key; in the first case, mechanisms for key management and distribution are assumed. There are two encryption methods: link (channel) encryption, implemented by means of a data link layer protocol, and end-to-end (subscriber) encryption, implemented by means of an application layer or, in some cases, presentation layer protocol.

In the case of channel encryption, all information transmitted over the communication channel, including service information, is protected. This method has the following features:

Revealing the encryption key for one channel does not lead to compromise of information in other channels;

All transmitted information, including service messages and the service fields of data messages, is reliably protected;

All information is open at intermediate nodes - relays, gateways, etc.;

The user does not participate in the operations performed;

Each pair of nodes requires its own key;

The encryption algorithm must be sufficiently strong and provide encryption speed at the level of channel throughput (otherwise there will be a message delay, which can lead to blocking of the system or a significant decrease in its performance);

The previous feature leads to the need to implement the encryption algorithm in hardware, which increases the cost of creating and maintaining the system.

End-to-end (subscriber) encryption allows you to ensure the confidentiality of data transferred between two application objects. In other words, the sender encrypts the data, the recipient decrypts it. This method has the following features (compare with channel encryption):

Only the content of the message is protected; all service information remains open;

No one except the sender and recipient can recover the information (if the encryption algorithm used is strong enough);

The transmission route is unimportant - information will remain protected in any channel;

Each pair of users requires a unique key;

The user must be familiar with encryption and key distribution procedures.

The choice of one or another encryption method or a combination of them depends on the results of the risk analysis. The question is as follows: what is more vulnerable - the individual communication channel itself or the content of the message transmitted through various channels. Channel encryption is faster (other, faster algorithms are used), transparent to the user, and requires fewer keys. End-to-end encryption is more flexible and can be used selectively, but requires user participation. In each specific case, the issue must be resolved individually.
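
For illustration, the following is a minimal sketch of end-to-end (subscriber) encryption in Python, assuming the third-party cryptography package and a symmetric key that the sender and recipient have shared out of band; link encryption would instead sit below the network stack and is not shown here.

    # A minimal sketch of end-to-end (subscriber) encryption: only the message body
    # is protected, while headers added by lower-layer protocols stay readable at
    # intermediate nodes. Assumes the third-party "cryptography" package.
    from cryptography.fernet import Fernet

    # The sender and recipient are assumed to share this key out of band.
    shared_key = Fernet.generate_key()

    def sender_encrypt(plaintext: bytes, key: bytes) -> bytes:
        """Encrypt the application payload before handing it to the transport layer."""
        return Fernet(key).encrypt(plaintext)

    def recipient_decrypt(ciphertext: bytes, key: bytes) -> bytes:
        """Decrypt the payload on arrival; intermediate nodes never see the plaintext."""
        return Fernet(key).decrypt(ciphertext)

    token = sender_encrypt(b"quarterly report, confidential", shared_key)
    assert recipient_decrypt(token, shared_key) == b"quarterly report, confidential"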

Digital signature mechanisms, which include procedures for signing (closing) data blocks and for verifying a signed block. The signing process uses secret-key information; the verification process uses public-key information that does not allow the secret data to be recovered. Using the secret information, the sender forms a service data block (for example, based on a one-way function); the recipient, using the publicly available information, verifies the received block and establishes the authenticity of the sender. Only a user who possesses the appropriate secret key can form a genuine block.
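
A minimal sketch of this sign-and-verify scheme, again assuming the third-party cryptography package and using the Ed25519 algorithm (the lecture does not prescribe a specific signature algorithm):

    # Digital signature sketch: the sender "closes" a block with the secret key,
    # the recipient checks it with the public key.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # kept secret by the sender
    public_key = private_key.public_key()        # distributed to recipients

    message = b"transfer 100 units to account 42"
    signature = private_key.sign(message)        # closing the data block

    try:
        public_key.verify(signature, message)    # checking the closed block
        print("signature valid: the sender is authentic")
    except InvalidSignature:
        print("signature invalid: the block was forged or modified")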

Access control mechanisms.

They check the authority of a network object to access resources. Authorization is checked in accordance with the rules of the adopted security policy (discretionary, mandatory or any other) and the mechanisms implementing it.

Mechanisms to ensure the integrity of transmitted data.

These mechanisms ensure the integrity both of an individual block or field of data and of a stream of data. The integrity of a data block is ensured by the sending and receiving objects. The sending object adds to the data block a check attribute whose value is a function of the data itself; the receiving object computes the same function and compares it with the received value. If they do not match, a decision is made that integrity has been violated, and the detected change may trigger data recovery efforts. In the case of a deliberate violation of integrity, however, the value of the check attribute can be changed accordingly (if the algorithm for its formation is known), and the recipient will then be unable to detect the violation. In that case it is necessary to use an algorithm that generates the check attribute as a function of both the data and a secret key: without knowing the key it is impossible to forge a correct check attribute, so the recipient can determine whether the data has been modified.
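
The keyed check attribute described above corresponds to a message authentication code; a minimal sketch using Python's standard hmac and hashlib modules follows (the key-distribution step is assumed to have already taken place):

    # A keyed check attribute (message authentication code). Without the secret key
    # an attacker cannot recompute a valid tag after modifying the data, so the
    # recipient detects the integrity violation.
    import hmac
    import hashlib

    secret_key = b"shared-secret-distributed-in-advance"  # assumed pre-shared

    def make_tag(data: bytes) -> bytes:
        """Sender: compute the check attribute as a function of the data and the key."""
        return hmac.new(secret_key, data, hashlib.sha256).digest()

    def verify(data: bytes, tag: bytes) -> bool:
        """Recipient: recompute the attribute and compare it in constant time."""
        return hmac.compare_digest(make_tag(data), tag)

    block = b"sensor reading: 23.5"
    tag = make_tag(block)
    assert verify(block, tag)                      # unmodified block passes
    assert not verify(block + b" (tampered)", tag) # any modification is detected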

Protection of the integrity of data streams (from reordering, adding, repeating or deleting messages) is carried out using additional forms of numbering (control of message numbers in the stream), time stamps, etc.

The following mechanisms are desirable components of network security:

Mechanisms for authenticating network objects.

To ensure authentication, passwords, verification of object characteristics and cryptographic methods (similar to a digital signature) are used. These mechanisms are typically applied to authenticate peer network entities. The methods used can be combined with the “triple handshake” procedure (a threefold exchange of messages between the sender and the recipient containing authentication parameters and confirmations), sketched below.
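
The following is an illustrative sketch of such a three-step challenge-response exchange using only Python's standard library; it assumes the two peers already share a secret key, and the names and message formats are invented for illustration rather than taken from any particular standard.

    # Illustrative three-step (challenge-response) authentication between peers A and B.
    # Both sides hold a pre-shared secret; nonces prevent replay of old messages.
    import hmac
    import hashlib
    import os

    SECRET = b"pre-shared-secret"  # assumed to be distributed securely in advance

    def mac(data: bytes) -> bytes:
        return hmac.new(SECRET, data, hashlib.sha256).digest()

    # Step 1: A sends its nonce (challenge) to B.
    nonce_a = os.urandom(16)

    # Step 2: B proves knowledge of the secret by answering A's challenge
    # and sends its own challenge back.
    nonce_b = os.urandom(16)
    response_b = mac(nonce_a + nonce_b)

    # Step 3: A verifies B, then answers B's challenge, proving its own identity.
    assert hmac.compare_digest(response_b, mac(nonce_a + nonce_b))  # A authenticates B
    response_a = mac(nonce_b)
    assert hmac.compare_digest(response_a, mac(nonce_b))            # B authenticates A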

Traffic padding (text filling) mechanisms.

Used to provide protection against traffic analysis. Such a mechanism can work, for example, by generating fictitious messages, so that the traffic has a constant intensity over time.

Route control mechanisms.

Routes can be selected dynamically or predefined in order to use physically secure subnets, repeaters, and channels. End systems, when detecting intrusion attempts, may require the connection to be established via a different route. In addition, selective routing can be used (that is, part of the route is set explicitly by the sender - bypassing dangerous sections).

Attestation mechanisms.

Characteristics of data transferred between two or more objects (integrity, source, time, recipient) can be confirmed using an attestation mechanism. Confirmation is provided by a third party (arbitrator) who is trusted by all parties involved and who has the necessary information.

In addition to the security mechanisms listed above, implemented by protocols at various levels, there are two more that do not belong to a specific level. Their purpose is similar to control mechanisms in local systems:

Event detection and processing (analogous to the means of monitoring dangerous events).

Designed to detect events that lead or may lead to a violation of network security policy. The list of these events corresponds to the list for individual systems. In addition, it may include events indicating violations in the operation of the protection mechanisms listed above. Actions taken in this situation may include various recovery procedures, event logging, one-way disconnect, local or peripheral event reporting (logging), etc.

Security audit report (analogous to an audit based on the system log).

A security audit is an independent review of system records and activities for compliance with the specified security policy.

The security functions of protocols at each level are determined by their purpose:

1. Physical layer - control of the electromagnetic radiation of communication lines and devices, and keeping communication equipment in working order. Protection at this level is provided with the help of shielding devices, noise generators and means of physical protection of the transmission medium.

2. Data link level - increasing the reliability of protection (if necessary) by encrypting data transmitted over the channel. In this case, all transmitted data, including service information, is encrypted.

3. Network layer - the most vulnerable level from a security point of view. All routing information is generated at this level, the sender and recipient appear explicitly, and flow control is carried out. In addition, network layer packets are processed on all routers, gateways and other intermediate nodes. Almost all specific network violations are carried out using protocols of this level (reading, modification, destruction, duplication or redirection of individual messages or of the flow as a whole, masquerading as another node, etc.).

Protection against all such threats is carried out by network and transport layer protocols and using cryptographic protection tools. At this level, for example, selective routing can be implemented.

4. Transport layer - controls the functions of the network layer at the receiving and transmitting nodes (at intermediate nodes the transport layer protocol does not operate). Transport layer mechanisms check the integrity of individual data packets and of packet sequences, the route traveled, and departure and delivery times, and perform identification and authentication of the sender and recipient, among other functions. All active threats become visible at this level.

The integrity of transmitted data is guaranteed by cryptoprotection of data and service information. No one other than those who have the secret key of the recipient and/or sender can read or change the information in such a way that the change goes unnoticed.

Traffic analysis is prevented by the transmission of messages that contain no information but nevertheless look like real ones. By adjusting the intensity of these messages according to the volume of information actually transmitted, a constant traffic profile can be maintained. However, none of these measures can prevent the threat of destruction, redirection or delay of a message. The only defense against such violations may be the parallel delivery of duplicate messages along other paths.

5. Upper-level protocols provide control over the interaction of received or transmitted information with the local system. Session and presentation layer protocols do not perform security functions. The security functions of application layer protocols include controlling access to specific data sets, identifying and authenticating specific users, and other protocol-specific functions. These functions are more complex when a mandatory security policy is implemented on the network.

4. CORPORATE INTERNET NETWORK

A corporate Internet network is a special case of the corporate network of a large company. It is obvious that the specifics of its activity impose strict requirements on the information security systems in its computer networks. An equally important role when building a corporate network is played by the need to ensure trouble-free and uninterrupted operation, since even a short-term failure can lead to huge losses. Finally, large amounts of data must be transferred quickly and reliably, because many applications must operate in real time.

Corporate network requirements

The following basic requirements for a corporate network can be identified:

The network unites all information devices belonging to the company into a structured and managed closed system: individual computers and local area networks (LAN), host servers, workstations, telephones, faxes, office PBXs.

The network ensures reliable functioning and powerful information protection systems; that is, trouble-free operation of the system is guaranteed both in the event of personnel errors and in the event of an unauthorized access attempt.

There is a well-functioning communication system between departments at different levels (both departments within the city and those located elsewhere).

In connection with modern development trends, there is a need for specific solutions. The organization of prompt, reliable and secure access of a remote client to modern services plays a significant role.

5. PRINCIPLES, TECHNOLOGIES, INTERNET PROTOCOLS

The main thing that distinguishes the Internet from other networks is its protocols: TCP/IP. In general, the term TCP/IP usually means everything related to protocols for communication between computers on the Internet; it covers an entire family of protocols, application programs, and even the network itself. TCP/IP is an internetworking technology, internet technology. A network that uses internet technology is called an "internet"; the global network that combines many networks using internet technology is called the Internet.

The TCP/IP protocol suite gets its name from two of its communication protocols: the Transmission Control Protocol (TCP) and the Internet Protocol (IP). Although the Internet uses a large number of other protocols, it is often called a TCP/IP network, since these two protocols are, of course, the most important.

As in any other network, on the Internet there are seven levels of interaction between computers: physical, data link (logical), network, transport, session, presentation and application. Accordingly, each level of interaction corresponds to a set of protocols (i.e. rules of interaction).

Physical layer protocols determine the type and characteristics of communication lines between computers. The Internet uses almost all currently known communication methods, from a simple wire (twisted pair) to fiber-optic communication lines (FOCL).

For each type of communication line, a corresponding data link (logical) level protocol has been developed to control the transmission of information over the channel. The data link level protocols for telephone lines include SLIP (Serial Line Internet Protocol) and PPP (Point-to-Point Protocol). For communication over a LAN cable, these are the packet drivers of the LAN cards.

Network layer protocols are responsible for transmitting data between devices on different networks, that is, they are responsible for routing packets in the network. Network layer protocols include IP (Internet Protocol) and ARP (Address Resolution Protocol).

Transport layer protocols control the transfer of data from one program to another. Transport layer protocols include TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
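
A minimal sketch of the transport layer in action using Python's standard socket module: a TCP connection provides a reliable, ordered byte stream, while UDP sends independent datagrams. The host names, addresses and ports below are illustrative only.

    # Contrast of the two transport-layer protocols using the standard socket module.
    import socket

    # TCP: connection-oriented, reliable, ordered byte stream.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp_sock:
        tcp_sock.connect(("example.com", 80))          # three-way handshake happens here
        tcp_sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        reply = tcp_sock.recv(4096)                    # delivery and order are guaranteed by TCP

    # UDP: connectionless datagrams, no delivery or ordering guarantees.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp_sock:
        udp_sock.sendto(b"ping", ("192.0.2.10", 9999)) # each datagram is routed independently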

Session layer protocols are responsible for establishing, maintaining, and destroying appropriate channels. On the Internet, this is done by the already mentioned TCP and UDP protocols, as well as the UUCP (Unix to Unix Copy Protocol).

Presentation (representative) layer protocols serve application programs. Presentation-level programs include those that run, for example, on a Unix server to provide various services to subscribers. These programs include: a telnet server, an FTP server, a Gopher server, an NFS server, NNTP (Network News Transfer Protocol), SMTP (Simple Mail Transfer Protocol), POP2 and POP3 (Post Office Protocol), etc.

Application layer protocols include network services and programs for providing them.
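
As a hedged illustration of one of the mail services named above, the following sketch uses Python's standard smtplib module; the server address and the mail addresses are placeholders, and a real mail server would normally also require TLS and authentication.

    # Minimal SMTP sketch using the standard library; addresses and the server
    # are placeholders, and real servers usually require TLS and authentication.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "student@example.org"
    msg["To"] = "lecturer@example.org"
    msg["Subject"] = "Lecture notes"
    msg.set_content("The SMTP protocol delivers this message to the mail server.")

    with smtplib.SMTP("mail.example.org", 25) as server:  # placeholder mail relay
        server.send_message(msg)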

6. INTERNET DEVELOPMENT TRENDS

In 1961, DARPA (Defense Advanced Research Projects Agency), on behalf of the US Department of Defense, began a project to create an experimental packet transmission network. This network, called ARPANET, was originally intended to study methods of providing reliable communication between computers of various types. Many methods for transmitting data via modems were developed on the ARPANET. At the same time, the network data transfer protocols TCP/IP were developed. TCP/IP is a set of communication protocols that define how computers of different types can communicate with each other.

The ARPANET experiment was so successful that many organizations wanted to join it in order to use it for daily data transfer, and in 1975 ARPANET evolved from an experimental network into a working network. Responsibility for network administration was assumed by the DCA (Defense Communications Agency), now called DISA (Defense Information Systems Agency). But ARPANET's development did not stop there; the TCP/IP protocols continued to evolve and improve.

In 1983, the first standard for the TCP/IP protocols was released and included in the Military Standards (MIL STD), and everyone who worked on the network was required to switch to these new protocols. To facilitate this transition, DARPA approached the developers of Berkeley UNIX with a proposal to implement the TCP/IP protocols in Berkeley (BSD) UNIX. This is where the union of UNIX and TCP/IP began.

After some time, TCP/IP was adopted as a common, that is, publicly available, standard, and the term Internet came into general use. In 1983, MILNET was spun off from ARPANET and handed over to the US Department of Defense; the term Internet then began to be used to refer to the single network of MILNET plus ARPANET. And although ARPANET ceased to exist in 1991, the Internet lives on and is far larger than its original size, since it has united many networks around the world. Figure 4 illustrates the growth in the number of hosts connected to the Internet, from 4 computers in 1969 to 8.3 million in 1996. A host on the Internet is a computer running a multitasking operating system (Unix, VMS) that supports the TCP/IP protocols and provides users with some network services.

7. MAIN COMPONENTS WWW, URL, HTML

World Wide Web translates literally as “the worldwide web”, and in essence that is what it is. The WWW is one of the most advanced tools for working with the global Internet. This service appeared relatively recently and is still developing rapidly.

The largest number of developments is associated with the birthplace of the WWW, CERN, the European Laboratory for Particle Physics; but it would be a mistake to think of the Web as a tool designed by physicists for physicists. The fruitfulness and attractiveness of the ideas underlying the project have turned the WWW into a system of global scale, providing information in almost all areas of human activity and covering approximately 30 million users in 83 countries.

The main difference between WWW and other tools for working with the Internet is that WWW allows you to work with almost all types of documents currently available on your computer: these can be text files, illustrations, sound and video clips, etc.

What is the WWW? It is an attempt to organize all the information on the Internet, plus any local information you choose, as a set of hypertext documents. You navigate the web by following links from one document to another. All these documents are written in a language specially developed for this purpose, called the HyperText Markup Language (HTML). It is somewhat reminiscent of the languages used to prepare text documents, only HTML is simpler. Moreover, you can not only use the information provided on the Internet, but also create your own documents; in the latter case there are a number of practical recommendations for writing them.

The whole point of hypertext lies in creating hypertext documents: if you are interested in an item in such a document, you only need to point at it with the cursor to get the information you need. It is also possible to make links from one document to documents written by other authors or even located on a different server, while to the reader everything appears as a single whole.

Hypermedia is a superset of hypertext. In hypermedia, operations are performed not only on text but also on sound, images, and animation.

There are WWW servers for Unix, Macintosh, MS Windows and VMS, most of which are freely distributed. By installing a WWW server, you can solve two problems:

1. Provide information to external consumers - information about your company, catalogs of products and services, technical or scientific information.

2. Provide your employees with convenient access to the organization's internal information resources. These could be the latest management orders, the internal telephone directory, answers to frequently asked questions from users of application systems, technical documentation, and anything else that the imagination of the administrator and users suggests. The information you want to provide to WWW users is formatted as files in the HTML language. HTML is a simple markup language that allows you to mark up fragments of text and set links to other documents, highlight headings at several levels, break text into paragraphs, center them, and so on, turning plain text into a formatted hypermedia document. It is quite easy to create an HTML file manually; however, there are also specialized editors and converters from other file formats.
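
For illustration, a minimal hand-written HTML file of the kind described above might look as follows; the file name in the link and the page titles are placeholders.

    <!-- A minimal hand-written HTML page: a heading, a paragraph and a hyperlink.
         The link target is a placeholder. -->
    <html>
      <head>
        <title>Internal telephone directory</title>
      </head>
      <body>
        <h1>Company information server</h1>
        <p>Welcome to the internal WWW server.</p>
        <p><a href="phones.html">Internal telephone directory</a></p>
      </body>
    </html>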

Basic components of World Wide Web technology

By 1989, hypertext was a new, promising technology that, on the one hand, had a relatively large number of implementations and, on the other hand, had inspired attempts to build formal models of hypertext systems, which were more descriptive in nature and were motivated by the success of the relational approach to describing data. T. Berners-Lee's idea was to apply the hypertext model to information resources distributed over the network, and to do so in the simplest possible way. He laid three of the four cornerstones of the system that exists today, developing:

* the hypertext markup language for documents, HTML (HyperText Markup Language);

* a universal way of addressing resources on the network, the URL (Universal Resource Locator);

* the protocol for exchanging hypertext information, HTTP (HyperText Transfer Protocol); URL addressing and an HTTP exchange are illustrated in the sketch after this list;

* the universal gateway interface, CGI (Common Gateway Interface).
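
A minimal sketch, using only Python's standard library, of how a URL breaks down into its components and how a single HTTP request and response look; the address used below is a placeholder.

    # Decompose a URL into its parts and perform one HTTP GET request.
    # Only the standard library is used; the address is a placeholder.
    from urllib.parse import urlparse
    from http.client import HTTPConnection

    url = urlparse("http://www.example.org/docs/index.html")
    print(url.scheme, url.netloc, url.path)   # -> http www.example.org /docs/index.html

    # One HTTP request/response exchange over the protocol named in the URL.
    conn = HTTPConnection(url.netloc, 80, timeout=10)
    conn.request("GET", url.path)             # ask the server for the hypertext document
    response = conn.getresponse()
    print(response.status, response.reason)   # e.g. 200 OK
    html_text = response.read()               # the HTML document itself
    conn.close()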

The idea of HTML is an example of an extremely successful solution to the problem of building a hypertext system by means of a special facility for controlling display. The development of the hypertext markup language was significantly influenced by two factors: research into the interfaces of hypertext systems, and the desire to provide a simple and quick way of creating a hypertext database distributed over a network.

In 1989, the problem of the interface of hypertext systems was being actively discussed, i.e. methods of displaying hypertext information and of navigating the hypertext network. The importance of hypertext technology was compared with the importance of the printing press. It was argued that a sheet of paper and a computer display are significantly different from each other, and that the form in which information is presented should therefore also differ. Contextual hypertext links were recognized as the most effective form of hypertext organization; in addition, a distinction was drawn between links associated with the document as a whole and links associated with its individual parts.

The easiest way to create any document is to type it in a text editor. CERN had experience in creating documents marked up for subsequent display: it is difficult to find a physicist there who does not use the TeX or LaTeX system. In addition, by that time a markup language standard already existed, the Standard Generalized Markup Language (SGML).

It should also be taken into account that, according to his proposals, Berners-Lee intended to unite the existing information resources of CERN, and that the first demonstration systems were to be those for NeXT and VAX/VMS.

Typically, hypertext systems have special software for building hypertext links. The hypertext links themselves are stored in special formats or even in dedicated files. This approach is fine for a local system, but not for one distributed across many different computer platforms. In HTML, hypertext links are embedded in the body of the document and stored as part of it. Hypertext systems also often use special data storage formats to improve access efficiency; in the WWW, documents are ordinary ASCII files that can be prepared in any text editor. Thus, the problem of creating a hypertext database was solved extremely simply.

...
