The structure of the Internet: clients and servers. Main characteristics of the Internet. Current Internet governance


The Internet began its existence in 1982, when the largest US national networks, such as ARPANET, NSFNET and several others, were united. The main one was ARPANET, which appeared in 1969. It connected the computers of military and research centers and was used for the purposes of the US Department of Defense.

There is no single definition of the Internet. The Internet consists of many local and global networks belonging to various companies and enterprises and interconnected by various communication lines. The main function of the Internet is to connect individual devices, as well as to provide communication between various networks on a global scale.

A distinctive feature of the Internet is that it is peer-to-peer: all devices on the network enjoy equal rights, meaning each device can communicate with any other device connected to the Internet. Each computer can send requests for resources to other computers within the network and thus act as a client. At the same time, each computer can act as a server, processing requests from other computers on the network and sending back the requested data.
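As an illustration of this dual role, here is a minimal sketch in Python using only the standard socket library (the loopback address and port are arbitrary values chosen for the demo): a single machine first takes the server role in a background thread, then acts as a client toward itself.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9000   # arbitrary local address chosen for this demo
ready = threading.Event()

def serve_once():
    """Server role: accept one connection and echo the request back."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                          # tell the client role the server is up
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)           # receive the client's request
            conn.sendall(b"echo: " + data)   # answer it, acting as a server

threading.Thread(target=serve_once).start()

# Client role: the very same machine now sends a request.
ready.wait()
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(b"hello")
    print(cli.recv(1024).decode())           # prints: echo: hello
```

On the real Internet the two roles would simply run on different hosts; the program logic is the same.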

To transmit data and organize a network, communication lines are needed. They can be wired (telephone), cable (for example, twisted pair, coaxial cable, fiber optic), wireless (radio, satellite, cellular).

Providers

Users connect to the network thanks to providers - organizations that provide Internet access services and other Internet-related services, for example, allocating disk space for storing and running websites (hosting); support for mailboxes or virtual mail server; maintenance of communication lines, that is, maintaining them in working order, and others.

A peering agreement is a bilateral commercial agreement between providers on the mutual exchange of traffic.

There are several types of access providers: local, regional, and backbone. Local providers have a permanent connection to the Internet through regional providers and usually operate within a single city. A regional provider connects to a backbone provider, which, in turn, covers large regions, for example, a country or a continent. Backbone companies own the backbone communication channels, while regional ones rent communication channels from them. Relationships between providers are governed by peering agreements.

Backbone providers typically have peering agreements with all other backbone providers, and regional providers typically have peering agreements with one of the backbone providers and several other regional providers.

To simplify the interconnection of provider networks, there are special traffic exchange points (NAP, Network Access Point), where the networks of a large number of providers are connected. Large providers also have so-called points of presence (POP, Point of Presence), which house the hardware that connects local users to the Internet. Typically, a large provider has points of presence in several large cities.



Internet addressing

A computer connected to the Internet is called a host. Its actual computing power is not of fundamental importance. To become a host, a computer must be connected to another host, which, in turn, is connected to a third, and so on. This is how the constantly working part of the Internet is formed. Using host addresses, data packets make their way from the sender to the recipient.

The Internet has a unified addressing system that helps computers find each other in the process of exchanging information. The Internet operates using the TCP/IP protocol suite, which is responsible for sending messages between computers on the Internet. A protocol is a set of rules for transmitting information on the network.

Data transmission relies on unique IP addresses. The IP protocol is a set of rules that allows data to be delivered from one computer to another using the IP addresses of the sender and the recipient.

The modern Internet uses IPv4. In this version of the protocol, an IP address is a sequence of four dotted decimal numbers, each of which can have a value from 0 to 255, for example 19.226.192.108. This number can be permanently assigned to a computer or assigned dynamically, at the moment when the user connects to the provider, but at any given time no two computers on the Internet have the same IP address.

This version of the protocol allows for more than four billion unique IP addresses, but as the Internet continues to grow, IPv6 is gradually being introduced. It expands the length of an IP address to 128 bits, which increases the number of available identifiers to about 3.4 × 10^38, enough for the foreseeable future.
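The standard Python ipaddress module can be used to illustrate both address formats; the IPv6 address below is from the reserved documentation range and is chosen just for the example.

```python
import ipaddress

v4 = ipaddress.ip_address("19.226.192.108")   # the IPv4 example from the text
print(v4.version, int(v4))                    # 4, plus its 32-bit integer value

v6 = ipaddress.ip_address("2001:db8::1")      # documentation-range IPv6 address
print(v6.version, v6.exploded)                # 6, fully expanded 128-bit form

print(2 ** 32)    # size of the IPv4 address space: 4 294 967 296
print(2 ** 128)   # size of the IPv6 address space: about 3.4e38
```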

Routing

Information is delivered to the required address using routers that select the optimal route. This is made possible by the TCP protocol, whose task is to split the transmitted data into small packets of a fixed structure and limited length that travel through several servers. Delivery routes may vary: the packet sent first may arrive last, since the speed of delivery depends not on how close the recipient's and sender's servers are to each other, but on the chosen route. The optimal route is the one that reduces the load on the network. The shortest path (through the nearest servers) is not always optimal, since a communication channel to another continent may work faster than a channel to a neighboring city.

TCP (Transmission Control Protocol) ensures reliable delivery of data by controlling the packet size and resending packets in case of failure.

Domain system

People find it difficult to remember sequences of numbers, so the Domain Name System (DNS) was introduced. It maps the numerical IP address of each computer to a unique domain name, which usually consists of two to four words (domains) separated by dots.

A domain name is read from right to left. The rightmost word in a domain name is the top-level, or first-level, domain. There are two types of top-level domains: geographical (two-letter, indicating the country in which the node is located) and administrative (three-letter, indicating the type or profile of the organization). Each country in the world has its own geographic domain. For example, Russia owns the geographical domain ru, in which Russian organizations and citizens have the right to register second-level domains.

For the USA, the name of the country is traditionally omitted; the largest associations there are networks of educational (edu), commercial (com), government (gov), military (mil) institutions, as well as networks of other organizations (org) and network resources (net).

The top-level domain is followed by a second-level domain, then a third, and so on. For example, in the domain name gosuslugi.samara.ru, ru is the top-level domain, samara the second-level, and gosuslugi the third-level.

Tables mapping domain names to IP addresses are stored on special DNS servers connected to the Internet. If a device does not know the IP address of the computer with which it is going to establish a connection, but only has a symbolic DNS name, it queries a DNS server, providing the text version of the address, and receives the IP address of the desired recipient in response.
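A small sketch of this query process, using Python's standard socket resolver (it requires network access and a configured DNS server, and the addresses returned will vary over time):

```python
import socket

# Forward lookup: domain name -> IPv4 address, via the system's DNS resolver.
print(socket.gethostbyname("example.com"))

# getaddrinfo returns both IPv4 and IPv6 records when the name has them.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo("example.com", 80):
    print(family.name, sockaddr[0])   # e.g. AF_INET 93.184.216.34
```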

Current Internet Governance

There are special organizations that deal with addressing and routing issues.

In 1998, the international non-profit organization ICANN (Internet Corporation for Assigned Names and Numbers) was created. ICANN supervises and coordinates the IP address space and the DNS, helping to ensure universal connectivity on the Internet.

Addressing and routing tasks began to appear in the early 1970s and were originally performed by the American non-profit organization IANA (Internet Assigned Numbers Authority). Today IANA is a division of ICANN and is responsible for distributing IP addresses and top-level domains, as well as registering the parameters of various Internet protocols.

ICANN's tasks also include accrediting the regional Internet registries, which deal with the technical side of the functioning of the Internet: allocating IP addresses and autonomous system numbers, registering reverse DNS zones, and other technical projects. There are currently five of them: for North America; for Europe, the Middle East and Central Asia; for Asia and the Pacific; for Latin America and the Caribbean; and for Africa.

In Russia there is the Coordination Center for the National Internet Domain, the administrator of the national top-level domains .RU and .РФ. This organization has the authority to develop the rules for registering domain names in the .RU and .РФ domains, to accredit registrars, and to research promising projects related to the development of the Russian top-level domains.

Popular Internet services

World Wide Web

The World Wide Web is a distributed system that provides access to interconnected documents located on various computers connected to the Internet; the Internet is the physical basis for the World Wide Web. The Web uses hypertext technology, in which documents are linked together by hyperlinks. Documents containing hyperlinks are called web pages, and the Internet servers that store them are called web servers. Web pages are transmitted over the Internet using the Hypertext Transfer Protocol (HTTP). HTTP can carry any kind of information, including images, sound, and video. The World Wide Web operates on a client-server principle: a web server accepts HTTP requests from clients, which are typically web browsers, and issues HTTP responses.
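A minimal sketch of one such HTTP request/response cycle, using Python's standard urllib (it assumes Internet access; example.com is a generic demonstration host):

```python
from urllib.request import urlopen

# A browser does essentially the same thing under the hood: an HTTP GET request.
with urlopen("http://example.com/") as resp:
    print(resp.status, resp.headers["Content-Type"])   # e.g. 200 text/html...
    html = resp.read(200)                              # first bytes of the page
    print(html.decode("utf-8", errors="replace"))
```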

To access documents (web pages) linked via hyperlinks, the Web relies on browsers such as Internet Explorer or Google Chrome. To start your journey on the World Wide Web, you need to connect to the Internet and launch a browser.

With the help of the World Wide Web, it is convenient to search for information on the Internet, since it integrates text, graphics, audio and video data in the form of hypertext without any visible “joints”. Hypertext allows the user, while viewing one document, to simultaneously navigate to adjacent elements of another document using hyperlinks.

Email

Another popular Internet service is email. It appeared before the Internet, in 1971, and served for exchanging messages between the users of a local computer (personal computers intended for a single user did not yet exist). On early systems, where up to several hundred terminals could be connected to one machine, e-mail fully imitated the work of regular mail: you could send a letter, as in real life, even without a return address, and to receive mail you had to have your own mailbox.

The idea of using such a service for exchanging documents and messages between network users turned out to be so popular that e-mail became one of the key applications that stimulated the development of the early Internet. And today, despite the variety of ways to exchange information, email is still one of the most frequently used services.

Streaming media

Streaming media is rich Internet content (audio, video, or combined audio-video files) that a user can watch or listen to as a continuous stream in real time, without having to wait for the entire file to finish downloading to their personal computer. Streaming media is sent as a continuous sequence of compressed packets and is played as it arrives at the recipient's computer.

The popularity of streaming media is due to quick access to content-rich media material. More and more people today prefer to watch movies, sports broadcasts, and music videos without first downloading, so as not to waste time downloading files.

If the media is not streamed, the file can be watched only after it has been completely downloaded to the hard disk, and given the large size of many media files this process takes considerable time (sometimes up to several hours).

File Transfer Service (FTP)

The FTP service provides remote access to files on other computers on the network and to huge amounts of information on the Internet. The purpose of an FTP server is to store a set of files for a wide variety of purposes.

Thanks to the FTP service, users can send (copy, transfer) files on the Internet from a remote computer to a local one and from local to a remote one. Unlike web servers, which provide read-only information, FTP servers allow users to not only download information, but also add it to the server.
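A small sketch of such an FTP session using Python's standard ftplib; the host name, directory, and file names below are placeholders, not a real server:

```python
from ftplib import FTP

# Anonymous session with a hypothetical public FTP server.
with FTP("ftp.example.com") as ftp:      # placeholder host; substitute a real one
    ftp.login()                           # no arguments = anonymous login
    ftp.cwd("/pub")                       # move to a remote directory
    print(ftp.nlst())                     # list the files stored on the server
    with open("readme.txt", "wb") as f:   # copy a remote file to the local machine
        ftp.retrbinary("RETR README", f.write)
```

Uploading works symmetrically with storbinary, which is what distinguishes FTP servers from read-only web servers.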

This service remains one of the main methods of distributing free software, as well as various additions and fixes for commercial versions of programs.

Search engines

Search engines solve the problem of searching large volumes of unstructured information. They are software and hardware systems designed to search the Internet and respond to user queries with a list of links to sources of information, ordered by relevance: a service that helps users quickly find the information they need.

Web forums

Nowadays, newsgroups have been almost completely replaced by web forums. The essence of a web forum is that a user can go to a special web page tied to a specific topic and post a message there that everyone can discuss. Users can comment on a posted topic, ask questions related to it and receive answers, as well as answer questions from other forum users and give them advice. Polls and votes can also be organized within a topic, if the forum engine allows it. Unlike chat, where everyone participates at the same time, forum users can log in at different times.

IMS (Instant Messaging Service)

In addition to text messages, an instant messaging service can transfer audio, pictures, video, and files, and can also support activities such as joint drawing or games. To use it, you need a client program, the so-called messenger (from the English messenger, courier). Instant messaging is one of the most accessible and popular means of real-time communication on the Internet. The best-known messaging networks and clients are IRC, Skype, ICQ, MSN, Yahoo!, and Windows Live Messenger. If one of your contacts uses only the ICQ network and another uses only the MSN network, you can communicate with both simultaneously by installing both ICQ and MSN Messenger on your computer.

Telnet

This is a service that allows remote access to another computer system. Information is entered on one computer, transferred to another for processing, and the results are returned to the first. Telnet lets you work as if the keyboard of one computer were connected directly to another; that is, it makes it possible to use all the facilities that the remote computer provides to local terminals: log in, execute commands, or access a variety of special services.
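Since Telnet is, at its core, a text dialogue over TCP port 23, a bare-bones sketch can be written with a plain socket (the host below is hypothetical; a full client would also answer Telnet option negotiation, which Python's former telnetlib module handled before its removal in Python 3.13):

```python
import socket

# Connect to a hypothetical Telnet host and read whatever banner it prints.
with socket.create_connection(("telnet.example.org", 23), timeout=5) as s:
    banner = s.recv(512)                              # e.g. a login: prompt
    print(banner.decode("ascii", errors="replace"))
```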

OSI model

The work of a network is to transfer data from one computer to another. The International Organization for Standardization (ISO) has developed a model that clearly defines the different levels of interaction between systems, gives them standard names, and specifies what work each level should do. The OSI (Open Systems Interconnection) Reference Model describes open systems, that is, systems that are open to communication with other systems. It is used by Windows and most other network operating systems.

The OSI model has seven layers: physical, data link, network, transport, session, presentation, and application.

Thus, the problem of systems interaction is divided into seven separate tasks, each of which can be solved independently of the others.
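The layer stack can be summarized as a small lookup table; the protocol examples follow the descriptions given below in this text and are illustrative, not exhaustive:

```python
# The seven OSI layers, top to bottom, with example protocols from this text.
OSI_LAYERS = {
    7: ("application",  ["HTTP", "FTP", "SMTP", "Telnet"]),
    6: ("presentation", ["RDP", "LPP", "NDR"]),
    5: ("session",      ["ASP", "L2TP", "PPTP"]),
    4: ("transport",    ["TCP"]),
    3: ("network",      ["IP"]),
    2: ("data link",    ["Ethernet"]),
    1: ("physical",     ["cables, radio, optical fiber"]),
}

for number in sorted(OSI_LAYERS, reverse=True):
    name, examples = OSI_LAYERS[number]
    print(f"{number}: {name:<12} e.g. {', '.join(examples)}")
```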

Physical layer

The task of the physical layer, the lowest one, is to create a physical channel, such as a fiber optic cable, for sending bits. The physical layer is the medium for transmitting data, which takes the form of electrical impulses, beams of light, or electromagnetic waves.

Data Link Layer

The data link layer provides data transmission over a physical channel. This layer associates a computer's abstract address (such as its IP address) with a physical one. If data comes from below, that is, from the physical layer, the data link layer converts the electrical signals into frames or packets; if data comes from the network layer, packets are converted into electrical signals. The tasks of the link layer include checking the availability of the transmission medium and implementing error detection and correction mechanisms, that is, resending a damaged frame. An example of a protocol operating at the data link layer is Ethernet for local area networks.

Session layer

The session layer allows users of different computers to establish communication sessions with each other. It is needed to negotiate and maintain connections with other devices and ensures that sending and receiving devices can communicate without interrupting each other. The session layer is responsible for maintaining the session while data is being transferred and for releasing the connection when the data exchange ends. Protocols such as ASP, L2TP, PPTP and others operate at this level.

Presentation layer

The presentation layer ensures that information sent by the application layer of one system will be understood by the application layer of another. If necessary, the presentation layer converts data formats into a common presentation format and, on reception, performs the reverse conversion. Here data is encoded, compressed, or encrypted; for example, before a message is sent it can be compressed to reduce traffic. Protocols such as RDP, LPP, NDR and others operate at this level.

Application layer

The application layer contains the set of popular protocols needed by users. This layer does not deal with the applications the user runs on the computer; rather, it provides the infrastructure on top of which applications run. In the context of the OSI model, "application" does not mean Excel, Word, or similar programs. The application layer is the protocol that a program, such as Outlook or Internet Explorer, uses to send data over the network. For example, the file transfer program you use to send a file interacts with the application layer and determines which protocol (such as FTP, TFTP, or SMB) will be used for sending.

Let us list the most popular protocols of the upper, application level through which data is transmitted.

FTP (File Transfer Protocol) protocol

A widely used protocol for exchanging files between computers.

IRC (Internet Relay Chat) protocol

IRC is a protocol for real-time messaging.

RTSP (Real-Time Streaming Protocol) and RTP (Real-Time Transport Protocol) protocols

RTSP is a protocol that allows controlled transmission of video streams on the Internet. The protocol ensures the transfer of information in the form of packets between the server and client. In this case, the recipient can simultaneously play the first data packet, decode the second and receive the third.

The RTP (Real-Time Transport Protocol) protocol works together with RTSP. It detects and compensates for lost packets, ensures the security of content transmission and information recognition, is responsible for verifying the identity of sent and received packets, identifies the sender, and monitors network congestion.

A computer network consists of several computers connected to each other to process data together. Computer networks are divided into local and global. Local networks unite computers located in the same room or building, while wide area networks unite local networks or individual computers located more than 1 km apart. The Internet is a worldwide computer network consisting of a variety of computer networks united by standard agreements on how to exchange information and by a unified addressing system.

The unit of the Internet is the local area network; groups of local networks are united by some regional (global) network, departmental or commercial. At the highest level, regional networks are connected to one of the so-called Internet backbone networks. (In fact, regional networks can also be interconnected without access to a backbone network.) Wire lines, fiber optics, radio, satellite links, and so on serve as the connecting lines of the Internet.

There is a certain analogy between the topology of the Internet and a map of road, railway, and air transport routes. Internet protocols correspond to cargo transportation regulations; the addressing system to traditional postal addresses; and the communication channels between networks to transport highways.

In everyday speech, the World Wide Web is often simply equated with the Internet, which is also called "the Net".

A browser is a program on your computer with which you access the Internet, view information there, and navigate it. It processes data on the World Wide Web and allows browsing. In fact, this is a program with which you can surf the Internet. Very often the browser icon is located on the Desktop of your computer. When you click on it, you go to the Internet. The standard browser that most people know is Internet Explorer.

The Internet can behave differently in different browsers: what one browser cannot do, another can. If you are unable to view or listen to some information, or to do something on the Internet, try accessing it through a different browser. The most popular browsers are Google Chrome, Opera, and Mozilla Firefox. They can be found through an Internet search and installed on your computer.

A site (website) is an Internet resource that contains information (texts, pictures or videos), can perform certain functions (for example, receiving and sending letters), has its own address, its own name, its own owner and consists of separate pages. A page on a website is called a web page.

The address (name) of a site is a set of characters consisting of three parts: it begins with "http://" or "www.", followed by the name of the site, at the end of which is a zone (domain zone), a short designation of the country or type of organization: .com, .ru, .net, .biz, .org, .kz, .ua, etc. For example, consider the address http://website-income.ru/. First comes http://, then the name website-income.ru, and at the end .ru. As a rule, when you see the name of a site on any page on the Internet and click on it, you go to that site.
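Python's standard urllib.parse shows the same decomposition of the example address into protocol prefix, site name, and domain zone:

```python
from urllib.parse import urlparse

parts = urlparse("http://website-income.ru/")   # the example address from the text
print(parts.scheme)                  # "http" - the protocol prefix
print(parts.netloc)                  # "website-income.ru" - the site name
print(parts.netloc.rsplit(".", 1))   # ['website-income', 'ru'] - name and domain zone
```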

Email is a system through which you can send and receive letters on the Internet. In fact, it is a website or program that is used to send letters. Email is also the name given to the personal email address of a specific person or organization. In this case, one usually says whose email it is.

The distinctive sign of email is the @ symbol (in Russian colloquially called "dog" or "doggy"). To type it, press the Shift key and the number 2 with the keyboard in the Latin (English) layout.

Differences between a website and an email (i.e., an email address): a site name has the http:// or www symbols at the very beginning, while an email name has the @ sign in the middle. For example, http://website-income.ru/ is a site, and [email protected] is an email.

The address bar is the line at the top of the browser window where you enter the address of the site you want to visit. It is present on any page of any website. Computers may be set up differently: on some, to go to a site you only need to enter its address in the browser, while on others you also need to press the Enter key.

A search engine is a system that allows you to find the necessary information on the Internet. In fact, it is a site that provides search capabilities. When you go online, a particular search engine opens by default, for example Google. If you want to use another search engine, enter its name in the browser line and go to its website.

The search bar is a blank line in a search engine, usually located in the middle of the page, in which you write the words you want to find on the Internet. Next to this line there is a button labeled “Search”, “Find”, “Go”, etc. Once you enter words into the search bar and click on this button, the search process begins.

Services. Telnet is a network program that allows remote access to computers via the command line; it requires knowledge of a special command language. FTP (File Transfer Protocol) is a file transfer protocol of the TCP/IP family; there are many FTP applications with an accessible graphical interface that let you find and copy files from FTP servers. Email is one of the most popular Internet services; it allows people with email addresses to exchange messages, and files in any format can be attached to text email messages. Gopher: although FTP is great for transferring files, it does not have a good way to deal with files scattered across multiple computers, so an improved file access system, called Gopher, was developed.

Using a menu system, Gopher not only allows you to browse lists of resources, but also sends the material you need without knowing where it is located. Gopher is one of the most comprehensive browsing systems available, integrated with other programs such as FTP or Telnet. It is widespread on the Internet.

Gopher computers are linked, through distributed indexes, into a single search space called Gopherspace. Gopherspace is accessed through the menus it offers, and searching is carried out using several types of search engines, the best known of which are the Veronica system and the index search system of the wide area information server (WAIS, Wide Area Information Server).

WAIS (Wide Area Information Servers) is a system for storing and retrieving documents in thematic databases; for fast operation it uses index search. The WWW (World Wide Web) is the most popular Internet service: a collection of tens of millions of web servers scattered around the world and containing a huge amount of information. Web documents, called web pages, are magazine-style documents containing multimedia elements (graphics, audio, video, etc.) as well as hyperlinks that, when clicked, move users from one document to another.

Teleconferences

The teleconferencing system emerged as a means for groups of people with similar interests to communicate. Since its inception, it has spread widely, becoming one of the most popular Internet services.

This type of service is similar to Internet mailing lists, except that messages are not sent to all subscribers of a given newsgroup, but are placed on special computers called newsgroup servers or news servers. Subscribers to the teleconference can then read the incoming message and, if desired, respond to it.

A teleconference is like a bulletin board where everyone can post their own announcement and read the announcements posted by others. To make it easier to work with this system, all teleconferences are divided into topics, the names of which are reflected in their names. There are currently about 10,000 different newsgroups discussing everything you can imagine.

To work with the teleconferencing system, you need special software with which you can establish a connection to the news server and gain access to the teleconference articles stored on it. Since the news server stores articles from a very large number of newsgroups, users usually select those that are of interest to them (or, in other words, subscribe to them) and then work only with them.

After subscribing to the selected newsgroups, you establish a connection to the news server to view incoming messages. You can configure the newsgroup reader to track only the status of the newsgroups you have subscribed to, rather than forcing you to view the entire list.

Thus, teleconferences are virtual communication clubs. Each teleconference has its own address and is accessible from almost any other part of the Internet. Teleconferences usually have a more or less constant circle of participants.

Principles of information retrieval

The basic principles of information retrieval were formulated in the first half of the twentieth century. Between 1939 and 1945, W. E. Batten developed a system for finding patents. Each patent was classified according to the concepts to which it related, and an 800-position punch card was created for each concept used in the system. When a new patent was registered, the cards corresponding to the concepts it covered were taken, and the patent's number was punched at the corresponding position on each card. To find a patent that covered several concepts simultaneously, the cards corresponding to those concepts were stacked together; the number of the required patent was read off from the position where light shone through all the cards. The basic principles of information retrieval have not changed since then.

Using this retrieval system as an example, one can see how the search process works. First, an array of pointers to information resources must be created. A pointer (index) records a certain property of a document together with links to the documents that have this property. Pointers can be of various types; the author index, for example, is widespread: it yields links to the works of an author of interest. Indexes can also be compiled from other document attributes. The Batten system used a subject index, that is, documents were classified according to the concepts (subjects) they covered.

The process of creating pointers to documents is called indexing, and the terms used for indexing are called indexing terms. In the case of an author index, the indexing terms are the names of the authors of the works stored in the collection. The collection of indexing terms used is called a dictionary, and the array of pointers obtained after indexing information resources is called an index (index database). Once an index is created, it is accessed through queries. Since the search process matches a user's query against the available data, the query must also be translated into the indexing language. The index is searched for documents that match the query, and the user receives a list of links to suitable resources. To increase the speed of indexing and searching, the dictionary and the index must be organized according to a scheme that best suits the search tasks of the given subject area.
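A toy Python sketch of this scheme, where each punch card becomes a set of document numbers and overlaying cards becomes set intersection (the documents are invented for the example):

```python
from collections import defaultdict

# A tiny collection: document number -> text.
documents = {
    1: "patent for optical fiber communication",
    2: "patent for packet switching in networks",
    3: "optical packet switching patent",
}

# Indexing step: each term plays the role of one punch card.
index = defaultdict(set)                 # term -> set of document numbers
for number, text in documents.items():
    for term in text.split():
        index[term].add(number)

def search(*terms):
    """Overlay the 'cards' for all query terms: intersect their sets."""
    results = set(documents)
    for term in terms:
        results &= index.get(term, set())
    return sorted(results)

print(search("optical", "packet"))   # -> [3], the only document with both terms
```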



Content

Introduction
1. History of the development of the Internet
2. Structure and basic principles of building the Internet
3. Problems and prospects for the development of the Internet
Conclusion
List of used literature

Introduction

Computer science is a new information industry associated with the use of personal computers and the Internet.
In the new millennium, most of the information related to human activities will be stored in computer memory.
Electronic computing machines, one of the most important inventions of the 20th century, came to be called computers, first abroad and later in our country. Computers are used as universal devices for processing, transmitting, and storing a wide variety of information.
The basis of the modern information industry is the Internet.
The Internet is an international computer network that connects computers in all countries and on all continents, stores gigantic amounts of information, and gives quick access to this information to almost everyone.
The Internet has become an integral part of modern civilization. Rapidly breaking into education, trade, communications, and services, it gives rise to new forms of communication and learning, of commerce and entertainment. The "network generation" is a real socio-cultural phenomenon of our days; for its representatives, the Internet has long been a familiar and convenient companion in everyday life. Humanity is entering a new, informational, stage of its development, and network technologies play a huge role in it.

History of the development of the Internet.

About 20 years ago, the US Department of Defense created a network that was the forerunner of the Internet; it was called ARPAnet. ARPAnet was an experimental network, created to support scientific research in the military-industrial sphere and, in particular, to study methods for building networks that would be resistant to partial damage (for example, from air bombing) and could continue to function normally under such conditions. This requirement provides the key to understanding the principles of construction and structure of the Internet. In the ARPAnet model, there was always communication between the source computer and the destination computer (destination station). The network itself was assumed to be unreliable: any part of it could disappear at any moment.
The communicating computers—not just the network itself—also have the responsibility of establishing and maintaining communications. The basic principle was that any computer could communicate peer-to-peer with any other computer.
Data transmission on the network was organized based on the IP protocol. The IP protocol is the rules and description of how a network operates. This set includes rules for establishing and maintaining communications in the network, rules for handling and processing IP packets, descriptions of network packets of the IP family (their structure, etc.). The network was conceived and designed so that users were not required to have any information about the specific structure of the network. In order to send a message over the network, the computer must place data in a certain “envelope” called, for example, IP, indicate on this “envelope” a specific network address and transmit the resulting packets to the network.
These decisions may seem strange, like the assumption of an "unreliable" network, but experience has shown that most of them are quite reasonable and correct. While the International Organization for Standardization (ISO) spent years creating the final standard for computer networks, users did not want to wait. Internet activists began installing IP software on every possible type of computer, and it soon became the only acceptable way to connect disparate computers. This scheme appealed to the government and universities, which had a policy of buying computers from different manufacturers: everyone bought the computer they liked and had the right to expect it to work on a network together with other computers.
About 10 years after the advent of ARPAnet, local area networks (LANs) appeared, such as Ethernet. At the same time, computers appeared that came to be called workstations. Most workstations ran the UNIX operating system, which was able to work on a network using the Internet Protocol (IP). With the emergence of fundamentally new tasks and methods for solving them, a new need arose: organizations wanted to connect their local networks to ARPAnet. Around the same time, other organizations began creating their own networks using IP-like communication protocols. It became clear that everyone would benefit if these networks could communicate with each other, because then users on one network could reach users on another.
One of the most important of these new networks was NSFNET, developed as an initiative of the National Science Foundation (NSF). In the late 1980s, NSF created five supercomputing centers, making them available for use by any scientific institution. Only five centers were created because they were very expensive even for wealthy America, which is why they had to be used cooperatively. This created a communication problem: a way was needed to connect the centers and provide access to them for different users. An attempt was first made to use ARPAnet communications, but this solution foundered on defense-industry bureaucracy and staffing problems.
NSF then decided to build its own network based on ARPAnet IP technology. The centers were connected by special telephone lines with a capacity of 56 Kbps (7 KB/s). However, it was obvious that it was not worth even trying to connect every university and research organization directly to the centers, because laying that much cable would be not only very expensive but practically impossible. Therefore, it was decided to create networks on a regional basis: in every part of the country, the institutions concerned would connect to their nearest neighbors. The resulting chains were connected to a supercomputing center at one of their points, and the supercomputing centers were connected together. In this topology, any computer could communicate with any other by passing messages through its neighbors.
This solution was successful, but the time came when the network could no longer cope with the growing demand. Sharing the supercomputers allowed the connected communities to use many other things besides the supercomputers themselves. Suddenly, universities, schools, and other organizations realized that they had a sea of data and a world of users at their fingertips. The flow of messages on the network (traffic) grew faster and faster until it eventually overloaded the computers managing the network and the telephone lines connecting them. In 1987, the contract to manage and develop the network was awarded to Merit Network Inc., which operated the Michigan educational network in partnership with IBM and MCI. The old physical network was replaced by faster telephone lines (about 20 times faster), and the network control machines were replaced by faster ones as well.
The process of improving the network is ongoing, but most of these changes happen unnoticed by users. When you turn on your computer, you will not see a notice that the Internet will be unavailable for the next six months due to modernization. Perhaps even more importantly, overcoming congestion and improving the network have produced a mature and practical technology: problems were solved, and development ideas were tested in practice.

Structure and basic principles of building the Internet.

The Internet is a worldwide information computer network, which is a union of many regional computer networks and computers that exchange information with each other via public telecommunications channels (dedicated analog and digital telephone lines, optical communication channels and radio channels, including satellite communication lines).
Information on the Internet is stored on servers. Servers have their own addresses and are controlled by specialized programs. They allow you to forward mail and files, search databases, and perform other tasks.
Information exchange between network servers is carried out via high-speed communication channels (dedicated telephone lines, fiber optic and satellite communication channels). Individual users' access to Internet information resources is usually carried out through a provider or corporate network.
A provider, a network service provider, is a person or organization providing services for connecting to computer networks; typically it is an organization that has a modem pool for connecting clients and providing access to the World Wide Web.
The main cells of the global network are local area networks. If a local network is directly connected to a global network, then every workstation on this network can be connected to it.
There are also computers that are directly connected to the global network. They are called host computers (host - master).
A host is any computer that is a permanent part of the Internet, i.e. connected via the Internet protocol to another host, which in turn is connected to another, and so on.
To connect communication lines to computers, special electronic devices are used, which are called network cards, network adapters, modems, etc.
Almost all Internet services are built on the client-server principle. All information on the Internet is stored on servers. Information exchange between servers is carried out via high-speed communication channels or highways. Servers connected by high-speed highways make up the basic part of the Internet.
Individual users connect to the network through the computers of local Internet service providers, Internet Service Providers (ISPs), which have a permanent connection to the Internet. A regional provider connects to a larger national provider that has nodes in various cities of the country. Networks of national providers are combined into networks of transnational providers or first-tier providers. United networks of first-tier providers make up the global Internet.
The transfer of information to the Internet is ensured by the fact that each computer on the network has a unique address (IP address), and network protocols provide interaction between different types of computers running different operating systems.
The Internet primarily uses the TCP/IP family (stack) of network protocols. At the data link and physical layers, the TCP/IP stack supports Ethernet, FDDI, and other technologies. The basis of the TCP/IP protocol family is the network layer, represented by the IP protocol and various routing protocols. This layer moves packets across the network and controls their routing. Packet size, transmission parameters, and integrity control are handled at the TCP transport layer.
The application layer integrates all the services that the system provides to the user. The main application protocols include the telnet remote access protocol, the FTP file transfer protocol, the HTTP hypertext transfer protocol, and the email protocols and standards SMTP, POP, IMAP, and MIME.
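As a sketch of one of these application protocols in action, the standard smtplib module can hand a message to an SMTP server; the server name, addresses, and credentials here are placeholders only, not real endpoints:

```python
import smtplib
from email.message import EmailMessage

# Compose a message (all addresses below are placeholders for the example).
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello over SMTP"
msg.set_content("This message travels over TCP/IP via the SMTP protocol.")

# Hand it to a hypothetical SMTP server on the standard submission port.
with smtplib.SMTP("smtp.example.com", 587) as smtp:
    smtp.starttls()                     # encrypt the session
    smtp.login("alice", "password")     # placeholder credentials
    smtp.send_message(msg)
```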

Problems and prospects for the development of the Internet.

Today, the speed of development of the Internet has reached unprecedented levels. Due to its convenience and low price, this method of exchanging information is becoming more and more popular among people in different countries of the world.
Billions of websites and information resources attract an ever-increasing number of visitors. Since the network's inception, more and more communities have formed, and continue to form, on the Internet, each with its own traditions, ethics, and common tasks and goals.
We only have to plunge once into the vast expanses of www, quickly and easily gain access to the necessary information - and we immediately understand that we have at our disposal the greatest invention of mankind, which has already made the planet a large common home (a village or a metropolis, as is more convenient for us). Every time we connect to the network, we realize that on the other side of the monitor there are people and opportunities waiting for us, the existence of which we did not even know yesterday. It is these features of www that explain its rapid development.


The Internet is a global computer network covering the whole world. Today the Internet has about 15 million subscribers in more than 150 countries, and the network grows by 7-10% every month. The Internet forms a kind of core connecting the various information networks belonging to institutions around the world with one another.

If previously the network was used exclusively as a medium for transferring files and email messages, today more complex problems of distributed access to resources are being solved. About three years ago, shells were created that support network search and access to distributed information resources and electronic archives.

The Internet, which once served exclusively research and academic groups whose interests extended to access to supercomputers, is becoming increasingly popular in the business world.

Companies are attracted by the speed, cheap global communications, ease of collaborative work, available software, and the unique databases of the Internet. They view the global network as a complement to their own local networks.

At a low cost of service (often just a flat monthly fee for the lines or telephone used), users can access commercial and non-commercial information services in the United States, Canada, Australia, and many European countries. In the Internet's freely accessible archives you can find information on almost all areas of human activity, from new scientific discoveries to tomorrow's weather forecast.

In addition, the Internet provides unique opportunities for low-cost, reliable and confidential global communications around the world. This turns out to be very convenient for companies with branches around the world, transnational corporations and management structures. Typically, using the Internet infrastructure for international communications is much cheaper than direct computer communications via satellite or telephone.

E-mail is the most common Internet service. Currently, approximately 20 million people have an email address. Sending a letter by e-mail is much cheaper than sending a regular letter. In addition, a message sent by e-mail will reach the recipient in a few hours, while a regular letter may take several days, or even weeks, to reach the recipient.

The Internet is currently experiencing a period of growth, largely due to the active support of European governments and the United States. Every year in the United States, about 1-2 billion dollars are allocated to create new network infrastructure. Research in the field of network communications is also funded by the governments of Great Britain, Sweden, Finland, and Germany.

However, government funding is only a small part of the incoming funds, because the "commercialization" of the network is becoming increasingly visible (80-90% of the funds are expected to come from the private sector).

History of the Internet

In the late 1960s, the Defense Advanced Research Projects Agency (DARPA), on behalf of the US Department of Defense, began a project to create an experimental packet-switching network. This network, called ARPANET, was originally intended to study methods of providing reliable communication between computers of various types. Many methods for transmitting data via modems were developed on the ARPANET. In parallel, the network data transfer protocols, TCP/IP, were developed. TCP/IP is a set of communication protocols that define how computers of different types can communicate with each other.

The ARPANET experiment was so successful that many organizations wanted to join it and use it for daily data transfer. In 1975, ARPANET evolved from an experimental network into a working network. Responsibility for administering the network was assumed by the Defense Communication Agency (DCA), now called the Defense Information Systems Agency (DISA). But ARPANET's development did not stop there; the TCP/IP protocols continued to evolve and improve.

In 1983, the first standard for the TCP/IP protocols was released and included in the Military Standards (MIL STD), and everyone who worked on the network was required to switch to the new protocols. To ease this transition, DARPA approached Berkeley Software Design with a proposal to implement the TCP/IP protocols in Berkeley (BSD) UNIX. This is where the union of UNIX and TCP/IP began.

After a while, TCP/IP was adopted as a common, that is, publicly available, standard, and the term Internet came into general use. In 1983, MILNET was spun off from ARPANET and became part of the Defense Data Network (DDN) of the US Department of Defense. The term Internet came to be used to refer to the combined network: MILNET plus ARPANET. And although ARPANET ceased to exist in 1991, the Internet lives on; it is far larger than it originally was, having united many networks around the world. Figure 1 illustrates the growth in the number of hosts connected to the Internet, from 4 computers in 1969 to 3.2 million in 1994. An Internet host is a computer running a multitasking operating system (Unix, VMS) that supports the TCP/IP protocols and provides users with some network service.

What does the Internet consist of?

This is a rather difficult question, and the answer changes all the time. Five years ago the answer was simple: the Internet is all the networks that interact using the IP protocol to form a "seamless" network for their collective users. This includes various federal networks, a set of regional networks, university networks, and some foreign networks.

Recently, there has also been interest in connecting networks that do not use the IP protocol to the Internet. In order to provide Internet services to clients of these networks, methods have been developed to connect these "foreign" networks (for example, BITNET, DECnet, and others) to the Internet. At first these connections, called gateways, were intended simply to forward e-mail between the two networks, but some have grown to provide other services on an internetwork basis. Are these networks part of the Internet? Yes and no; it all depends on whether they want to be.

Currently, the Internet uses almost all known communication lines, from low-speed telephone lines to high-speed digital satellite channels. The operating systems used on the Internet also vary: most computers on the Internet run Unix or VMS, and special network routers, such as NetBlazer or Cisco devices, whose OS resembles Unix, are also widely represented.

In fact, the Internet consists of many local and global networks belonging to various companies and enterprises, interconnected by various communication lines. The Internet can be pictured as a mosaic made up of networks of different sizes that actively interact with one another, sending files, messages, and so on.

How does the Internet know where to send your data? If you simply drop a letter into a mailbox without an envelope, you cannot count on the correspondence being delivered to its destination. The letter must be placed in an envelope, the address must be written on the envelope, and a stamp must be affixed. Just as the post office follows rules that govern the operation of the postal network, certain rules govern how the Internet operates. These rules are called protocols. The Internet Protocol (IP) is responsible for addressing, i.e. it ensures that a router knows what to do with your data when it arrives. Following our analogy with the post office, we can say that the Internet Protocol performs the functions of the envelope.

Some address information is provided at the beginning of your message. It gives the network enough information to deliver the data packet.

Internet addresses consist of four numbers, each of which does not exceed 255. When written, the numbers are separated from one another by dots, for example: 192.168.1.25.

The address actually consists of several parts. Since the Internet is a network of networks, the beginning of the address tells the routers which network your computer belongs to. The right side of the address tells that network which computer should receive the packet. It is quite difficult to draw the line between the network subaddress and the computer subaddress; that boundary is established by agreement between neighboring routers. Luckily, as a user you never have to worry about this: it only matters when a network is being set up. Every computer on the Internet has its own unique address. Here again the analogy with mail delivery helps. Take the address "50 Kelly Road, Hamden, CT". The "Hamden, CT" element is like the network address: thanks to it, the envelope reaches the right post office, one that knows about the streets in a certain area. The "Kelly Road" element is like the computer address: it points to a specific mailbox in the area that the post office serves. The post office has done its job when it delivers the mail to the correct local office, and that office completes the job by placing the letter in the appropriate mailbox. Likewise, the Internet has done its job when its routers have forwarded the data to the appropriate network, and that local network has delivered it to the appropriate computer.
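To make the network/host split concrete, here is a minimal sketch in Python using the standard ipaddress module. The address and the 24-bit network prefix are illustrative assumptions; as noted above, the actual boundary is set by agreement between routers.

    import ipaddress

    addr = ipaddress.ip_address("192.168.1.25")    # hypothetical host address
    net = ipaddress.ip_network("192.168.1.0/24")   # assumed network part: the first three numbers

    print(addr in net)         # True: the left part of the address names the network
    print(int(addr) & 0xff)    # 25: the remaining bits pick out the host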

For various technical reasons (mainly hardware limitations), information sent over IP networks is divided into portions called packets. One packet usually carries from one to 1500 characters of information. This prevents any one user from monopolizing the network and allows everyone to count on timely service. It also means that if the network is overloaded, the quality of service deteriorates somewhat for all users rather than failing entirely for some: the network degrades gracefully instead of dying when a few users place heavy demands on it.

One of the advantages of the Internet is that the Internet Protocol alone is enough for it to operate at a basic level. The network will not be very friendly, but if you behave reasonably, you can get your tasks done. Because your data is placed in an IP envelope, the network has all the information it needs to move the packet from your computer to its destination. Here, however, several problems arise at once.

First, in most cases the amount of information to be sent exceeds 1500 characters. If the post office accepted only postcards, you would naturally be disappointed.

Second, an error may occur. The post office sometimes loses letters, and networks sometimes lose packets or damage them in transit. You will see that, unlike post offices, the Internet successfully solves such problems.

Third, the sequence of packet delivery may be disrupted. If you send two letters one after another to the same address, there is no guarantee that they will travel the same route or arrive in the order in which they were sent. The same problem exists on the Internet.

Therefore, the next level of the network gives us the ability to send larger pieces of information and takes care of eliminating the distortions that the network itself introduces.

To solve these problems, the Transmission Control Protocol (TCP) is used; it is often mentioned together with the IP protocol. What do you do if you want to send someone a book, but the post office accepts only letters? There is only one way out: tear all the pages out of the book, put each one in a separate envelope, and drop all the envelopes into the mailbox. The recipient has to collect all the pages (assuming no letters are lost) and glue them back into a book. These are the tasks TCP performs.

TCP breaks the information you want to transmit into portions. Each portion is numbered so that the receiver can check whether all the information has arrived and put the data back in the correct order. To transmit this sequence number over the network, the protocol has its own "envelope" on which the necessary information is "written". A portion of your data is placed in a TCP envelope, the TCP envelope is in turn placed in an IP envelope, and the result is handed to the network.
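The splitting and numbering can be sketched in Python as follows. This is a simplified illustration, not the real protocol: actual TCP numbers bytes rather than portions and its envelope carries far more information.

    # Split a message into numbered portions and reassemble them,
    # even if they arrive out of order.
    MAX_PORTION = 1500   # roughly the per-packet limit mentioned earlier

    def split_into_portions(data):
        return [(seq, data[i:i + MAX_PORTION])
                for seq, i in enumerate(range(0, len(data), MAX_PORTION))]

    def reassemble(portions):
        # Sort by sequence number, then glue the payloads back together.
        return b"".join(payload for _, payload in sorted(portions))

    message = b"x" * 4000
    portions = split_into_portions(message)
    portions.reverse()                        # simulate out-of-order arrival
    assert reassemble(portions) == message    # the original is restored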

At the receiving end, the TCP software collects the envelopes, extracts the data from them, and puts the portions in the correct order. If some envelopes are missing, it asks the sender to transmit them again. Once all the information is in the correct order, the data is passed to the application program that uses the TCP services.

This, however, is a somewhat idealized view of TCP. In real life packets are not only lost but are also altered in transit by short-lived failures on telephone lines. TCP solves this problem too. When data is placed in an envelope, a so-called checksum is calculated. The checksum is a number that allows the receiving TCP to detect errors in the packet. Say you are transmitting raw digital data in 8-bit chunks, or bytes. The simplest version of a checksum is to add up the values of these bytes and append an extra byte containing this sum (or at least the part of it that fits in 8 bits) to the end of the data. The receiving TCP performs the same calculation. If any byte changed during transmission, the checksums will not match and the error is detected. Of course, two errors can cancel each other out, but such cases can be caught by more complex calculations. When the packet arrives at its destination, the receiving TCP calculates the checksum and compares it with the one the sender supplied. If the values do not match, an error occurred during transmission; the receiving TCP discards the packet and requests retransmission.
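The byte-sum scheme just described can be written down directly. The sketch below uses the simple 8-bit sum from the text; real TCP uses a 16-bit ones'-complement checksum, but the principle is the same.

    def checksum8(data):
        # Add up the byte values and keep the part that fits in 8 bits.
        return sum(data) % 256

    packet = b"hello"
    sent = packet + bytes([checksum8(packet)])     # data plus checksum byte

    # Receiving side: recompute and compare.
    data, received = sent[:-1], sent[-1]
    assert checksum8(data) == received             # checksums match: accept

    corrupted = b"hellp" + sent[-1:]               # one byte damaged in transit
    assert checksum8(corrupted[:-1]) != corrupted[-1]  # mismatch: discard, ask again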

The TCP protocol creates the appearance of a dedicated communication line between two application programs, because it ensures that information entering at one end comes out at the other. In reality there is no dedicated channel between sender and recipient (other people's packets may travel through the same routers and wires in between yours), but it looks as if there is one, and in practice that is usually sufficient.

This is not always the best approach to using the network. Setting up a TCP connection takes significant overhead and time; if this machinery is not needed, it is better to do without it. If the data to be sent fits in a single packet and guaranteed delivery is not particularly important, TCP can become a burden.

There is another standard protocol that avoids this overhead: the User Datagram Protocol (UDP), which is used in some applications. Instead of putting your data in a TCP envelope and placing that envelope in an IP envelope, the application program puts the data in a UDP envelope, which is then placed in an IP envelope.

UDP is simpler than TCP because this protocol does not concern itself with missing packets, putting data in the correct order, and other subtleties. UDP is used by programs that send only short messages and can retransmit the data themselves if a response is delayed. Suppose you are writing a program that looks up telephone numbers in a database somewhere on the network. There is no need to establish a TCP connection in order to exchange 20-30 characters. You can simply put the name in a UDP packet, pack that into an IP packet, and send it. The receiving application gets the packet, reads the name, looks up the phone number, puts it in another UDP packet, and sends it back. What happens if the packet gets lost along the way? That is your program's problem: if there is no response for too long, it sends another request.
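Here is a minimal sketch of such an exchange, using Python's standard socket module. The server address and the phone-lookup service behind it are hypothetical, and the retry loop plays the role of the "ask again" logic just described.

    import socket

    SERVER = ("127.0.0.1", 9999)   # assumed address of the lookup service

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)           # don't wait forever for a lost reply

    for attempt in range(3):                    # retransmit on silence
        sock.sendto(b"John Smith", SERVER)      # the name fits in one UDP packet
        try:
            reply, _ = sock.recvfrom(512)       # the number comes back in one packet
            print(reply.decode())
            break
        except socket.timeout:
            continue                            # packet lost somewhere: just ask again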

How to make the network friendly

To do this, you need software configured for a specific task, and you need to use names, not addresses, when referring to computers.

Most users have no interest in the flow of bits between computers, however fast the lines and however exotic the technology that makes it possible. They want to use this stream of bits for something useful, be it moving a file, accessing data, or just playing a game. Application programs are the pieces of software that satisfy these needs. They form another layer of software built on top of the TCP or UDP service, and they provide the user with the tools to solve a particular problem.

The range of application programs is wide, from home-grown programs to proprietary products supplied by large development companies. The Internet has three standard application services (remote access, file transfer, and e-mail), as well as other widely used but not standardized programs. Chapters 5-14 show how to use the most common Internet applications.

When talking about application programs, one thing to keep in mind is that you see an application program as it appears on your local system. The commands, messages, prompts, and so on that appear on your screen may differ slightly from those in a book or on your friend's screen. Don't worry if a book says "connection refused" and your computer says "Unable to connect to remote host: refused"; they are the same thing. Don't cling to the words; try to understand the essence of the message. Likewise, don't worry if some commands have different names; most application programs come with fairly solid help subsystems that will help you find the command you need.

Digital addresses (and this became clear very quickly) are fine for communication between computers, but people prefer names. It is inconvenient to talk in numeric addresses, and even harder to remember them. That is why computers on the Internet are given names, and all Internet applications let you use system names instead of numeric computer addresses.

Of course, using names has its drawbacks. First, you need to make sure that no two computers accidentally get the same name. In addition, names must be converted to numeric addresses, because names are good for people, but computers still prefer numbers. A program can accept a name, but it needs a way to look that name up and convert it into an address.

In the early days, when the Internet was small, names were easy to handle. The Network Information Center (NIC) set up a special registration service. You sent in a completed form (electronically, of course), and the NIC added you to its list of names and addresses. This file, called hosts (the list of host computers), was regularly distributed to every computer on the network. Simple words were used as names, and each had to be unique. When you specified a name, your computer looked it up in this file and substituted the corresponding address.

As the Internet grew, unfortunately, so did the size of this file. There were significant delays in registering names, and finding unused names became difficult. In addition, too much network time was spent distributing this huge file to every computer listed in it. It became obvious that such growth demanded a distributed, interactive system. That system is called the Domain Name System (DNS).

The Domain Name System is a method of assigning names in which responsibility for subsets of names is delegated to different groups of users. Each level in this system is called a domain. Domains are separated from one another by dots, for example: ux.cso.uiuc.edu.

A name can contain any number of domains, although more than five is rare. Each successive domain in the name (reading from left to right) is larger than the previous one. In the name ux.cso.uiuc.edu, the element ux is the name of an actual computer with an IP address.

The computer's name is created and maintained by the cso group, which is simply the department in which the computer is located. The cso department is a division of the University of Illinois (uiuc), and uiuc belongs to the national group of educational institutions (edu). Thus the edu domain includes all computers of educational institutions in the United States; the uiuc.edu domain includes all computers of the University of Illinois, and so on.

Each group can create and change all the names under its control. If uiuc decides to create a new group and call it ncsa, it does not have to ask anyone's permission. All it has to do is add the new name to its part of the worldwide database, and sooner or later everyone who needs it will learn of the name (ncsa.uiuc.edu). Similarly, cso can buy a new computer, give it a name, and put it on the network without asking anyone. If every group, from edu on down, follows the rules and keeps names unique, no two systems on the Internet will have the same name. Two computers can both be named fred, as long as they are in different domains (for example, fred.cso.uiuc.edu and fred.ora.com).

It is easy to see where domains and names come from in an organization like a university or a business. But where do the top-level domains such as edu come from? They were created when the domain system was invented. Initially there were six organizational top-level domains (com, edu, gov, mil, net, org).

As the Internet became an international network, it became necessary to give other countries control over the names of the systems they host. For this purpose a set of two-letter domains was created that serve as the top-level domains for those countries. Since ca is the code for Canada, a computer in Canada may have the following name:

hockey.guelph.ca

There are about 300 country codes in total; computer networks exist in approximately 170 of those countries.

The final plan for expanding the Internet resource naming system has been announced by the IAHC (International Ad Hoc Committee), as of February 24, 1997. Under the new decisions, the following will be added to the top-level domains, which today include com, net, and org:

firm - for business resources of the Network;

store - for trading;

web - for organizations related to the regulation of activities on the WWW;

arts - for humanities education resources;

rec - games and entertainment;

info - provision of information services;

nom - for individual resources, and for anyone seeking their own path not covered by the categories above.

In addition, the IAHC decisions establish 28 designated naming agencies around the world. It is stated that the new system will make it possible to overcome the monopoly held until now by the sole authorized registrar, Network Solutions. All new domains will be distributed among the new agencies, while the old ones will be overseen jointly by Network Solutions and the National Science Foundation until the end of 1998.

Currently, approximately 85 thousand new names are registered every month. The annual fee for a name is $50. The new registration agencies will have to represent seven geographic regions, and lotteries will be held among agency applicants from each region. Companies wishing to participate must pay an entry fee of $20,000 and carry at least $500,000 in insurance against failure to perform their duties as a domain name registrar.

Now that you understand how domains relate to one another and how names are created, you can think about how to use this marvelous system. You use it automatically whenever you give a name to a computer that is "familiar" with it. You do not need to look the name up manually or issue a special command to find the desired computer, although you can do that if you wish. All computers on the Internet can use the domain system, and most of them do.

When you use a name such as ux.cso.uiuc.edu, the computer must convert it into an address. To do so, it begins asking DNS servers (computers) for help, starting from the right side of the name and moving left. First it asks the local DNS servers to find the address. There are three possibilities:

The local server knows the address, because the address is in the part of the worldwide database that this server maintains. For example, if you work at NSTU, then your local server probably holds information about all NSTU computers.

The local server knows the address because someone else asked for it recently. When an address is requested, the DNS server keeps it "on hand" for a while in case someone asks for it again a little later; this significantly increases the efficiency of the system.

The local server does not know the address, but knows how to determine it.

How does the local server determine the address? Its software knows how to contact the root server, which knows the addresses of the name servers for the top-level domains (the rightmost part of the name, such as edu). Your server asks the root server for the address of the computer responsible for the edu domain. Having received this information, it contacts that computer and asks it for the address of the uiuc server. It then contacts that computer and asks for the address of the cso domain server. Finally, from the cso server it receives the address of ux, the computer that was the target of the request.
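From a program's point of view the whole chain is hidden behind a single call; in Python, for example, the resolver walk described above is triggered by one standard library function:

    import socket

    # One call conceals the entire chain of queries: local server,
    # root server, edu server, uiuc server, cso server.
    print(socket.gethostbyname("www.example.com"))   # prints a dotted numeric address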

Some computers are still configured to use the old-fashioned hosts file. If you work on one of them, you may have to ask its administrator to look up the address you need manually (or do it yourself): the administrator has to enter the name of the desired computer into the local hosts file. Hint that it would be a good idea to install DNS software to avoid such complications in the future.
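For reference, a hosts file is just a plain-text table mapping addresses to names; the entries below are made up for illustration:

    # address          host name              alias
    192.168.10.5       fred.cso.uiuc.edu      fred
    192.168.10.6       barney.cso.uiuc.edu    barney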

What is allowed on the Internet

What you may do on the Internet is a difficult question. The Internet is not just one network but a network of networks, each of which may have its own policies and rules. Therefore legal and ethical standards, as well as political considerations, must be taken into account, and the relationship between these factors and their relative importance are not always the same.

To feel completely confident, it is enough to remember a few basic principles. Fortunately, these principles do not limit the user very much; if you stay within the established limits, you can do whatever you want. If you find yourself in a difficult situation, contact your service provider and find out exactly what you may and may not do. It may well be that what you want to do is permitted, but it is your responsibility to find out whether that is the case. Let's look at some principles that help define what is acceptable.

Legal standards

When using the Internet, three legal rules must be observed:

A significant part of the Internet is financed by federal subsidies, as a result of which purely commercial use of the network is excluded.

The Internet is an international network. When sending anything, including bits, across a national border, you must be guided by export laws, not only by the laws of your own state.

When sending software (or, for example, even just an idea) from one place to another, you must take into account regional legal rules concerning intellectual property and licensing.

Many Internet networks are funded by federal agencies. Under federal law, a department may spend its budget only on what falls within the scope of its mission. For example, the Air Force cannot quietly increase its budget by ordering rockets at NASA's expense. The same laws apply to the network: if NASA funds a network, that network should be used for space research. As a user, you may have no idea which networks your packets travel over, but it is better that the contents of those packets do not conflict with the mission of the agency that funds any given network.

In reality this is not as scary as it seems. A couple of years ago Washington realized that maintaining many parallel IP networks (NSFNET, the NASA Science Internet, and so on, one per federal agency) was a waste of money (a very radical idea). Legislation was passed to create NREN, a national network for scientific research and education. Part of the Internet was dedicated to supporting research and education, a mission common to all federal agencies; this means you may use the services of NREN to conduct or to support research and education. Strictly speaking, NREN is a planned network that has yet to be built: the bill authorizes such traffic over the existing federal networks, and what exists today is more correctly called the Interim Interagency NREN.

The importance of the clause "to support research and education" is hard to overstate. It legitimizes many uses of the network that at first glance might seem out of keeping with its purpose. For example, if a company sells software that is used in research and education, it has the right to distribute updates and answer questions by e-mail; this use is considered to support research and education. At the same time, the company cannot use NREN for commercial tasks such as marketing or accounting: for those there is the commercial part of the Internet. A list of the regulations governing the use of NSFNET is given in Appendix A. These regulations are among the most stringent regarding commercial use; if your work satisfies them, it satisfies the requirements of all the other networks.

There has been a lot of talk lately about the National Information Infrastructure (NII). This is a large and rather general project for building networks on a national scale. It can be seen equally well as a long-term development plan for NREN or as an alternative to NREN. There are many players in this game (network service providers, telephone companies, cable TV companies, and even energy corporations) jockeying for position. This essay will not give NII much attention, since we are looking at the network that exists now, not the one that may appear in a few years. It is clear that NII will significantly influence the development of computer networks, but exactly how has not yet become clear. All the interested parties promise faster access, lower prices, and higher data transfer speeds, but, as they say, seeing is believing.

When your organization negotiated its Internet connection, someone had to tell the network service provider whether the connection would be used for research and education or for commercial purposes. In the first case your traffic is sent along the subsidized NREN routes, in the second through private channels. Your organization's network access fee depends on the option chosen; commercial use of the network usually costs more because it is not subsidized by the state. Only the staff of your network administration can tell you whether commercial tasks may be carried out over your connection. Check this before using the connection for commercial purposes.

Of course, many corporations join the Internet as centers of research and education, and this is acceptable, since the motivation for connecting is often scientific research. For example, a seed company may intend to collaborate with a university on research into the properties of soybean seeds. Many corporate legal departments, on the contrary, declare their connections commercial; this protects against future legal liability if, for example, an uninformed employee leaks commercial information over a research connection.

There are a number of commercial providers on the Internet, such as Advanced Network and Services (ANS), Performance Systems International (PSI), and UUNET. Each of these companies has its own market niche and its own nationwide network for providing commercial services on the Internet. In addition, state and regional networks provide commercial services to their members. There are connections between each of these networks and the federal networks, and all of them interact with one another, in compliance with all laws and regulations, through these connections and through special accounting agreements.

Did you know that exporting bits is subject to Department of Commerce export restrictions? (These notes apply only to the United States; in other countries different laws apply.) The fact that the Internet is essentially a single global network makes it possible to export information without your even knowing it. Since I am not a lawyer, I will not go into technical details, but I will try to outline briefly what it takes to comply with the law. If, after reading these notes, you still believe you risk breaking the law, seek competent legal advice.

Export legislation rests on two provisions:

Exporting anything requires a license.

Exports of a service are considered to be approximately equivalent to exports of the components needed to provide that service.

The first point is self-explanatory: if you ship, transport, transmit a file, or send anything by e-mail out of the country, you must have an export license. Fortunately, there is a loophole: the general license, whose scope is very broad. A general license permits the export of anything that is not expressly forbidden to export and that may be openly discussed in the United States. Thus, anything you might learn at a conference or in a classroom is likely covered by the general license, unless the information is classified.

However, the list of restricted items contains many surprises; it may include information available to a student at any university. The export of the source texts of network programs and of encryption software may be prohibited. It often happens that a restriction starts as a small point, but by the time the corresponding instructions are drawn up it covers a much wider area. For example, during the Gulf War the Iraqi military network turned out to be much harder to disable than planned: Iraq was using commercial IP routers that very quickly found alternative routes. The export of all routers capable of finding alternative routes was therefore promptly prohibited. It is quite possible that this story is one of the "network legends": everyone on the Internet was talking about the case, but when I tried to verify it, I could not find a single reliable source.

The second point is even simpler. If the export of certain hardware, such as supercomputers, is prohibited, then remote access to such hardware located inside the country is not permitted either. So be careful about giving foreign users access to "special" resources (such as supercomputers). The nature of these restrictions naturally depends on the specific foreign country and (judging by the events of recent years) may change significantly.

In reviewing its potential legal liability, CREN, the consortium that operates the BITNET network, came to the following conclusions. A network operator is liable for illegal exports only if it knew about the violation and did not inform the relevant authorities. The operator is not responsible for users' actions and is not obliged to determine whether they comply with the law. Therefore, network staff do not inspect the contents of the packets you send abroad; but if an operator does discover a violation of the law in your packets, it is obliged to inform the government authorities.

Another factor to consider when sending anything to anyone is ownership. The problem is compounded when data crosses national borders, since copyright and patent laws vary greatly from country to country. You may discover on the Internet an interesting collection setting out the basics of some forgotten teaching, whose copyright has already expired in the United States; sending those files to England might violate British law. Be sure to find out who owns the rights to what you transmit over the network and, if necessary, obtain the appropriate permission.

Laws governing electronic data transmission have not kept pace with technology. If you have a book, a magazine, or a personal letter, almost any lawyer or librarian can tell you whether it may be copied or used in some way, and whose permission you need. Ask the same question about an article in an electronic newsletter, an e-mail message, or a file, and you will not get a precise answer. Even if you knew whose permission you needed and received it by e-mail, it is still unclear how messages received by e-mail can provide real legal protection. In this area the legislation is quite vague, and it will probably not be put in order until the next decade.

Ownership issues can arise even with publicly available files. Some software that is openly accessible on the Internet requires a license from the supplier. For example, a workstation vendor may make additions to its operating system accessible via anonymous FTP. You can easily obtain this software, but to use it legally you must hold a software maintenance license. The mere fact that a file is available online does not mean that taking it will not break the law.

Politics and the Internet

Many netizens see the political process as both a blessing and a curse. The blessing is money: subsidies enable many people to obtain services that were previously unavailable to them. The curse is that users' actions are under constant scrutiny: someone in Washington may suddenly decide that your actions can be exploited for political purposes. It is quite possible that the digitized color image of a naked girl stored on your disk will one day become the subject of an editorial under the catchy headline "Taxpayer dollars go to distributing pornography". A similar incident has in fact taken place: the contents of the files were somewhat more explicit than magazine illustrations, and the incident threatened the funding of the entire NSFNET. Such things can cause a great deal of trouble for those responsible for funding the Internet.

It is important to understand that the Internet has many supporters in the highest echelons of government, including members of the US Congress, representatives of the Clinton administration, leading scientists, and the heads of federal agencies. They support the Internet because it benefits the country, increasing the ability of the United States to compete with other countries in science and commerce. Faster data exchange accelerates progress in research and education; thanks to the Internet, American scientists and students can find more effective solutions to technical problems.

As always in politics, there are those who consider these advantages trivial. In their opinion, the millions of dollars going into the network could instead be spent on pork-barrel projects in their home constituencies. ("Pork barrel" is the American term for government spending steered toward a politician's home district to win popularity.)

The network enjoys the support of a fairly large number of politicians, but this support can hardly be called secure, and that is a potential source of trouble: any event that acquires political resonance can tip the scales the other way.

Network ethics

The Internet raises many ethical problems, but its ethics differ somewhat from the generally accepted kind. To understand this, consider the notion of "frontier law". When the West was first being settled, the laws of the United States were interpreted differently west of the Mississippi than east of it. The network is at the frontier of new technologies, so the term applies to it fairly. You can venture into it without fear if you know what to expect.

Network ethics is based on two main principles:

Individualism is respected and encouraged.

The network is good and should be protected.

Note that these rules are very close to the ethics of the pioneers of the West, for whom individualism and preservation of their way of life were paramount. Let's consider how these principles manifest themselves on the Internet.

In ordinary society everyone can claim individuality, but in many cases an individual must reconcile his interests with those of a fairly large group of people who share his views only to a certain extent. This is where the "critical mass" effect comes into play. You may love medieval French poetry, but you are unlikely to be able to organize a study circle for it in your city: most likely you will not gather enough people interested in the subject and willing to meet from time to time to discuss it. To have at least some opportunity for communication, you would have to join a poetry lovers' society, which unites people with broader interests, but it is unlikely to contain even one other lover of medieval French poetry. There may be no other poetry societies in your city, and the members of the only one available may spend their time discussing bad pseudo-religious verse. This is the problem of "critical mass": if you cannot gather a group of like-minded people, your interests suffer. The best you can do is join another, larger group, but it will not be what you need.

On the network, the critical mass is two. You communicate when you want and how you want; it is always convenient, and no one is forced. Geography does not matter: your interlocutor can be anywhere on the network (almost anywhere in the world). Therefore a group on any topic is entirely possible, and you can even form alternative groups. Some prefer to "meet" by e-mail, others in newsgroups, others through openly accessible files, and so on; everyone is free to choose. Since reaching critical mass does not require joining a larger group, every user is a member of some minority group, and harassment of dissenters is not welcomed. For this reason no one will claim that "this topic should not be discussed online". If I attacked the lovers of French poetry, you would have every right to speak out against my favorite newsgroup. Everyone understands that the opportunity to obtain the information they care about matters just as much to every other user as it does to them. Nevertheless, many Internet users fear, with good reason, that a movement for external censorship may arise, and that as a result the Internet will become much less useful.

Of course, individualism is a double-edged sword. It makes the network a great repository of information and a community of people, but it can put your altruism to the test, and there can be debate about what behavior is acceptable. Since you most often interact with a remote computer, most people are not aware of how you behave while doing so, and anyone who is aware may or may not pay attention. If you connect your computer to the network, you must realize that many users consider every file they can reach to be theirs. Their argument goes roughly like this: if you did not intend to let others use the files, you would not have put them anywhere accessible over the network. This point of view has no legal force, of course, but then much of what happened on the frontier during the settlement of the West was not supported by law either.

Regular Internet users consider it a very valuable tool for both work and entertainment. Although access to the Internet is usually paid for by organizations rather than by users themselves, users nevertheless feel it is their duty to protect this valuable resource. There are two sources of threats to the Internet:

intensive use of the network for unintended purposes;

political pressure.

NREN was created for a specific purpose, and a company's commercial connection to the Internet also has a specific purpose. It may be that no one on site will prosecute someone who misuses a connection, but such abuse can be dealt with in other ways. If you briefly use your employer's computer for personal purposes, such as balancing your checkbook, no one is likely to notice; likewise, no one will notice small amounts of network time spent on unintended purposes. (In fact, misuse can sometimes be viewed differently: when a student plays cards over the network, this may even qualify as part of the learning process, since to get that far the student must have learned quite a lot about computers and networks.) Problems arise only when a user does something blatantly inappropriate, such as organizing a nationwide "multi-user dungeon" day on the network.

Misuse of the network can also take the form of inappropriate use of resources. The network was not created to compensate for a lack of necessary hardware. For example, you cannot use a disk system located in the other hemisphere just because your boss would not buy a $300 disk for his computer. That disk may be needed for very important research, and the cost of such a network service is extremely high. The network is designed to provide efficient and quick access to special-purpose resources, not to treat them as free public property.

Regular netizens and service providers are quite normal people. They enjoy games as much as your neighbor does. Moreover, they are not stupid: they read the news and work on the network regularly. If the quality of service drops for no obvious reason, people try to find out what happened. Upon discovering that traffic in some area has grown a hundredfold, they will look for the cause, and if it turns out that you are using the network improperly, you will receive a polite e-mail asking you to stop. The messages may then become less polite, and finally a call to your network provider will follow. The result for you may be complete loss of network access, or higher access fees for your boss (who, I suspect, will not be pleased).

Self-restraint in using the network is also very important for political reasons. Any sane person understands that no network can exist entirely without abuse and problems. But if these problems are not resolved among Internet users themselves and instead spill onto the pages of newspapers or become the subject of hearings in the US Congress, everyone loses. Here are some things to avoid when working on the network:

excessively frequent and long games;

persistent abuse;

malicious, aggressive attitude towards other users and other antisocial actions;

intentionally harming other users or interfering with their work (for example, launching something like the Internet Worm, a program that used the Internet to "attack" certain types of computers: having gained unauthorized access to one computer, it used it to "break into" the next. Such a program resembles a computer virus but is called a worm because it does not deliberately damage the computers it invades; a detailed description is given in Computer Security Basics by Russell and Gangemi, O'Reilly & Associates);

creation of publicly accessible files of obscene content.

It would be very difficult to get funding for NREN through Congress if, the day before the hearing, the television program Sixty Minutes had aired a story about abuses on the network.

Ethics and the private commercial Internet

In the previous sections we talked about the political and social conditions that shaped the Internet as we know it today. But these conditions are changing: every day the share of Internet funding from the federal budget shrinks, while the share coming from commercial use of the network grows. The government's goal is to get out of the network business and hand the provision of services over to private capital. The obvious question is: if the government is getting out of the network business, must I keep playing by its rules? This problem has two aspects, personal and commercial.
