Profiling queries in MySQL: what server logs are, and how to view MySQL server and query logs

MySQL query profiling is a useful technique for analyzing the overall performance of database-backed applications. In medium to large applications, hundreds of queries are typically spread across a large code base, and the database may process many of them per second. Without query profiling it becomes very difficult to locate bottlenecks in an application and determine their causes. This tutorial describes some useful query profiling techniques that use MySQL's built-in tools.

MySQL slow query log

The MySQL slow query log is the log to which MySQL writes slow and potentially problematic queries.

This feature ships with MySQL but is disabled by default. MySQL decides which queries belong in the log based on server variables that let you tailor profiling to the performance requirements of your application. Typically the log captures queries that take a long time to execute and queries that use indexes poorly or not at all.

Profiling Variables

The basic server variables for configuring the MySQL slow query log are:

slow_query_log (global)
slow_query_log_file (global)
long_query_time (global/session)
log_queries_not_using_indexes (global)
min_examined_row_limit (global/session)

slow_query_log – a boolean variable that enables or disables the slow query log.

slow_query_log_file – the absolute path of the slow query log file. The file's directory must be owned by the mysqld user and have the appropriate read and write permissions. The MySQL daemon will most likely run as mysql, but to be sure, run this command in a Linux terminal:

ps -ef | grep bin/mysqld | cut -d" " -f1

The output will show the current user as well as the mysqld user. Then create the log directory with the correct owner and permissions, for example:

cd /var/log
mkdir mysql
chmod 755 mysql
chown mysql:mysql mysql

  • long_query_time – the threshold, in seconds, for query duration. With a value of 5, every query that takes longer than 5 seconds to execute is logged.
  • log_queries_not_using_indexes – a boolean value that determines whether to log queries that do not use indexes. Such queries are important when analyzing performance.
  • min_examined_row_limit – the minimum number of examined rows for a query to be logged. With a value of 1000, any query that examines fewer than 1000 rows is ignored.
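To make the interplay of these variables concrete, here is a minimal sketch of the decision the server applies when deciding whether a query belongs in the slow query log. It is purely illustrative (MySQL implements this internally, and the function name and defaults are assumptions for the example):

```python
def should_log(query_time, uses_indexes, rows_examined,
               long_query_time=1.0,
               log_queries_not_using_indexes=True,
               min_examined_row_limit=0):
    """Illustrative model of the slow-query-log decision described above."""
    # Queries examining fewer rows than min_examined_row_limit are skipped.
    if rows_examined < min_examined_row_limit:
        return False
    # Log when the query is slow, or when it used no index and that option is on.
    return (query_time > long_query_time
            or (log_queries_not_using_indexes and not uses_indexes))

# A fast indexed lookup is not logged; a full scan without an index is.
print(should_log(0.0003, True, 1))    # False
print(should_log(0.0003, False, 10))  # True
```

Raising min_examined_row_limit to 100 would suppress the second case as well, which is exactly the filtering behavior demonstrated later in this tutorial.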

MySQL server variables can be set in the MySQL configuration file or dynamically through a GUI or the MySQL command line. Variables set in the configuration file persist across restarts, but the server must be restarted for them to take effect. Note that the configuration file uses the option-file spelling (e.g. slow-query-log), while dynamic SET statements use the system variable name (slow_query_log). The MySQL configuration file is usually located at /etc/my.cnf or /etc/mysql/my.cnf. To find it, enter (you may need to extend the search to other root directories):

find /etc -name my.cnf
find /usr -name my.cnf

Once you have found the configuration file, add the required variables to the [mysqld] section:

[mysqld]
...
slow-query-log = 1
slow-query-log-file = /var/log/mysql/localhost-slow.log
long_query_time = 1
log-queries-not-using-indexes

For the changes to take effect, restart the server. If the changes need to be active immediately, set the variables dynamically instead:

mysql> SET GLOBAL slow_query_log = "ON";
mysql> SET GLOBAL slow_query_log_file = "/var/log/mysql/localhost-slow.log";
mysql> SET GLOBAL log_queries_not_using_indexes = "ON";
mysql> SET SESSION long_query_time = 1;
mysql> SET SESSION min_examined_row_limit = 100;

To check variable values:

mysql> SHOW GLOBAL VARIABLES LIKE "slow_query_log";
mysql> SHOW SESSION VARIABLES LIKE "long_query_time";

One disadvantage of setting MySQL variables dynamically is that they are lost when the server restarts. Any important variables you need to keep should therefore also be added to the configuration file.

Generating a Profiling Query

Now that you are familiar with the slow query log settings, try generating some query data for profiling.

Note: The example here was run on a MySQL instance without any prior slow query log configuration. The test queries can be run through a GUI or the MySQL command line.

When monitoring the slow query log, it is useful to open two terminal windows: one connected to MySQL for sending statements, and one for watching the log.

Log into the MySQL server from the console as a user with SUPER privileges. To get started, create a test database and table, add some dummy data, and enable slow query logging.

Note: Ideally, this example is best run in an environment without any other applications using MySQL to avoid cluttering the query log.

$> mysql -u -p
mysql> CREATE DATABASE profile_sampling;
mysql> USE profile_sampling;
mysql> CREATE TABLE users (id TINYINT PRIMARY KEY AUTO_INCREMENT, name VARCHAR(255));
mysql> INSERT INTO users (name) VALUES ("Walter"),("Skyler"),("Jesse"),("Hank"),("Walter Jr."),("Marie"),("Saul "),("Gustavo"),("Hector"),("Mike");
mysql> SET GLOBAL slow_query_log = 1;
mysql> SET GLOBAL slow_query_log_file = "/var/log/mysql/localhost-slow.log";
mysql> SET GLOBAL log_queries_not_using_indexes = 1;
mysql> SET long_query_time = 10;
mysql> SET min_examined_row_limit = 0;

Now you have a test database and a table with some data, and the slow query log is enabled. We deliberately set the query time threshold high and disabled the minimum row check. To view the log, enter:

cd /var/log/mysql
ls -l

For now there should be no slow query log in this directory, since no queries have been run yet. If such a log already exists, the database has encountered slow queries since you enabled the log, which may skew the results of this example. Go back to the MySQL terminal and run:

mysql> USE profile_sampling;
mysql> SELECT * FROM users WHERE id = 1;

This query simply retrieves data using the primary key index, so it executes quickly and is not recorded in the slow query log. Return to the log directory and confirm that no log file was created. Now go back to the MySQL window and run:

mysql> SELECT * FROM users WHERE name = "Jesse";

This query does not use an index. Now something like this should appear in the log /var/log/mysql/localhost-slow.log:

# Time: 140322 13:54:58
# User@Host: root@localhost
# Query_time: 0.000303 Lock_time: 0.000090 Rows_sent: 1 Rows_examined: 10
use profile_sampling;
SET timestamp=1395521698;
SELECT * FROM users WHERE name = "Jesse";

One more example. Raise the minimum number of examined rows and run:

mysql> SET min_examined_row_limit = 100;
mysql> SELECT * FROM users WHERE name = "Walter";

The query will not be logged, because it examined fewer than 100 rows.

Note: If the data does not appear in the log, check a few things. First check the permissions of the directory in which the log is created: it must be owned by the mysqld user/group and have 755 permissions. Then check whether other slow query settings on the server are overriding yours. Remove any pre-existing slow query variables from the configuration file and restart the server, or set the global variables back to their defaults dynamically. If you make changes dynamically, log out and back into MySQL to refresh the session settings.

Analyzing Query Profiling Data

Consider the following data:

# Time: 140322 13:54:58
# User@Host: root@localhost
# Query_time: 0.000303 Lock_time: 0.000090 Rows_sent: 1 Rows_examined: 10
use profile_sampling;
SET timestamp=1395521698;
SELECT * FROM users WHERE name = "Jesse";

This entry shows:

  • When the query was executed
  • Who issued it
  • How long the query took to execute
  • How long tables were locked
  • How many rows were returned
  • How many rows were examined

This is useful because any query that violates the performance requirements encoded in the variables ends up in the log, so a developer or administrator can quickly track down misbehaving queries. The profiling data can also help you determine what circumstances cause your application to perform poorly.
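When these entries need to be aggregated programmatically, for dashboards or alerts, the header lines are easy to parse. A minimal Python sketch, assuming the entry format shown above (the helper name is made up for the example):

```python
import re

# Matches the "# Query_time: ..." header line of a slow-query-log entry.
ENTRY_RE = re.compile(
    r"# Query_time: (?P<query_time>[\d.]+)\s+Lock_time: (?P<lock_time>[\d.]+)"
    r"\s+Rows_sent: (?P<rows_sent>\d+)\s+Rows_examined: (?P<rows_examined>\d+)"
)

def parse_slow_entry(entry):
    """Extract the numeric metrics from one slow-log entry, or None."""
    m = ENTRY_RE.search(entry)
    if not m:
        return None
    return {
        "query_time": float(m["query_time"]),
        "lock_time": float(m["lock_time"]),
        "rows_sent": int(m["rows_sent"]),
        "rows_examined": int(m["rows_examined"]),
    }

entry = """# Time: 140322 13:54:58
# User@Host: root@localhost
# Query_time: 0.000303 Lock_time: 0.000090 Rows_sent: 1 Rows_examined: 10
SELECT * FROM users WHERE name = "Jesse";"""

print(parse_slow_entry(entry)["rows_examined"])  # 10
```

Feeding every entry of a log file through such a parser gives you raw numbers to sort or alert on, which is essentially what the mysqldumpslow tool described next does in a more polished way.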

Using mysqldumpslow

In a busy application the slow query log records data constantly; far more is written to it than is ever read. As the log grows, it becomes difficult to parse, and problematic queries can easily get lost in it. MySQL offers a tool called mysqldumpslow, which helps avoid this problem by summarizing the slow query log. The binary ships with MySQL (on Linux), so you can simply run:

mysqldumpslow -t 5 -s at /var/log/mysql/localhost-slow.log

The command accepts various parameters to customize its output. The example above displays the top 5 queries sorted by average query time. The resulting entries are more readable and are grouped by query:

Count: 2 Time=68.34s (136s) Lock=0.00s (0s) Rows=39892974.5 (79785949), root@localhost
SELECT PL.pl_title, P.page_title
FROM page P
INNER JOIN pagelinks PL
ON PL.pl_namespace = P.page_namespace
WHERE P.page_namespace = N

The output shows the following data:

  • Count: how many times the query was logged.
  • Time: average query time (total time in parentheses).
  • Lock: table lock time.
  • Rows: the number of rows returned.

The command abstracts numeric and string values, so identical queries with different WHERE values are treated as the same and grouped together. The mysqldumpslow tool removes the need to review the slow query log by hand, allowing regular automated checks instead, and its options support quite complex reporting. Third-party tools with similar functionality also exist, for example pt-query-digest.
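The grouping mysqldumpslow performs can be imitated in a few lines: strip literal values so that queries differing only in their WHERE values compare equal. A rough Python sketch (not mysqldumpslow's actual implementation):

```python
import re

def normalize_query(sql):
    """Abstract literals the way mysqldumpslow does: numbers -> N, strings -> 'S'."""
    sql = re.sub(r"\b\d+\b", "N", sql)    # numeric literals
    sql = re.sub(r"'[^']*'", "'S'", sql)  # single-quoted strings
    sql = re.sub(r'"[^"]*"', "'S'", sql)  # double-quoted strings
    return sql

a = normalize_query('SELECT * FROM users WHERE name = "Walter"')
b = normalize_query('SELECT * FROM users WHERE name = "Jesse"')
print(a)       # SELECT * FROM users WHERE name = 'S'
print(a == b)  # True: identical shape, so they would be grouped together
```

Grouping log entries by their normalized form, then summing counts and times per group, reproduces the Count/Time summary lines shown above.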

Query Breakdown

Another profiling tool to keep in mind lets you break a complex query down into its execution steps: take a problematic query identified in the slow query log and run it directly in MySQL. First enable profiling, then run the query:

mysql> SET SESSION profiling = 1;
mysql> USE profile_sampling;
mysql> SELECT * FROM users WHERE name = "Jesse";
mysql> SHOW PROFILES;

Once profiling is enabled, SHOW PROFILES displays a table associating each Query_ID with an SQL statement. Find the Query_ID corresponding to your query and run the following (replace # with your Query_ID):

mysql> SELECT * FROM INFORMATION_SCHEMA.PROFILING WHERE QUERY_ID=#;

The command will return a table:

SEQ STATE DURATION
1 starting 0.000046
2 checking permissions 0.000005
3 opening tables 0.000036

STATE is a step in the query execution process, and DURATION is how long that step took, in seconds. It is not an everyday tool, but it can help determine which part of query execution causes the most latency.
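Once these rows are in hand, finding the dominant stage is straightforward. A small Python sketch over the sample output above (the rows are hard-coded here in place of a real INFORMATION_SCHEMA.PROFILING query):

```python
# Sample (SEQ, STATE, DURATION) rows as returned by INFORMATION_SCHEMA.PROFILING.
rows = [
    (1, "starting",             0.000046),
    (2, "checking permissions", 0.000005),
    (3, "opening tables",       0.000036),
]

total = sum(duration for _, _, duration in rows)
_, slowest_state, slowest_duration = max(rows, key=lambda r: r[2])

print(slowest_state)                               # the stage with the most latency
print(f"{slowest_duration / total:.0%} of total")  # its share of the query's time
```

For a real query with dozens of stages, sorting the rows by DURATION this way points immediately at the expensive part of execution.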

Note: This tool should not be used in a production environment, except to analyze specific queries.

Slow query log performance

All that remains is to figure out how the slow query log affects performance. In general it is safe to run in a production environment; neither CPU nor I/O should be noticeably affected. However, you should have a strategy for monitoring the log's size so it does not grow too large for the file system, and in production you should set long_query_time to 1 second or higher. Logging every statement (for example via general_log) should be avoided under heavy production load.
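Monitoring the log's size can be as simple as a periodic file-size check. A minimal sketch (the path and the 100 MB budget are illustrative assumptions, and the helper name is made up):

```python
import os

def log_needs_rotation(path, max_bytes=100 * 1024 * 1024):
    """Return True when the slow query log has outgrown its size budget."""
    try:
        return os.path.getsize(path) > max_bytes
    except OSError:
        return False  # the log has not been created yet

# In practice this would be pointed at the real log from a cron job:
print(log_needs_rotation("/var/log/mysql/localhost-slow.log"))
```

A check like this can trigger logrotate or an alert; dedicated log rotation, covered later in this article, is the more robust long-term answer.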

Conclusion

The slow query log helps you identify problematic queries and evaluate overall query performance, giving the developer a detailed understanding of how the application executes MySQL queries. The mysqldumpslow tool makes the log manageable and easy to incorporate into the development process. Once problematic queries are identified, they can be optimized to improve performance.

MySQL server logs

Event logs are the first and simplest tool for determining system status and identifying errors. MySQL has four main logs:

  • Error Log — the standard error log, collected while the server runs (including startup and shutdown);
  • Binary Log — a log of all data-modifying statements, needed for replication and backups;
  • General Query Log — the main query log;
  • Slow Query Log — the log of slow queries.

Error log

This log contains all errors that occurred while the server was running, including critical errors, server shutdowns and startups, and warnings. It is the place to start in case of a system failure. By default errors are written to the console (stderr); you can also send them to syslog (the default on Debian) or to a separate log file:

log_error = /var/log/mysql/mysql_error.log

# Errors will be written to mysql_error.log

We recommend keeping this log enabled so that errors are caught quickly. To understand what a particular error code means, MySQL provides the perror utility:

shell> perror 13 64
OS error code 13: Permission denied
OS error code 64: Machine is not on the network

# Explains the meaning of error codes

Binary log

All statements that modify the database are recorded in the binary log, which is useful for replication and recovery.

It turns on like this:

log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 5
max_binlog_size = 500M

# Specifies the file location, lifetime and maximum size

Note that if you do not plan to scale the system or implement fault tolerance, it is better not to enable the binary log: it is resource intensive and reduces system performance.

General query log

This log records all SQL queries received and information about client connections. It can be useful for index analysis and optimization, as well as for identifying erroneous queries:

general_log_file = /var/log/mysql/mysql.log
general_log = 1

# Enables the log and specifies the file location

You can also enable/disable it while the MySQL server is running:

SET GLOBAL general_log = "ON";
SET GLOBAL general_log = "OFF";

# No server restart is required

Slow query log

This log is useful for identifying slow, i.e. inefficient, queries; see the profiling section above for details.

Viewing logs

To view logs on Debian (Ubuntu) you need to run:

# Error log
tail -f /var/log/syslog
# Query log
tail -f /var/log/mysql/mysql.log
# Slow query log
tail -f /var/log/mysql/mysql-slow.log

# If the log paths are not configured explicitly, the logs are located in /var/lib/mysql

Log rotation

Don’t forget to compress (archive, rotate) log files so that they take up less space on the server. For this, use the logrotate utility, editing its configuration file /etc/logrotate.d/mysql-server:

# - I put everything in one block and added sharedscripts, so that mysql gets
#   flush-logs'd only once.
#   Else the binary logs would automatically increase by n times every day.
# - The error log is obsolete, messages go to syslog now.
/var/log/mysql.log /var/log/mysql/mysql.log /var/log/mysql/mysql-slow.log {
        daily
        rotate 7
        missingok
        create 640 mysql adm
        compress
        sharedscripts
        postrotate
                test -x /usr/bin/mysqladmin || exit 0
                # If this fails, check debian.conf!
                MYADMIN="/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf"
                if [ -z "`$MYADMIN ping 2>/dev/null`" ]; then
                  # Really no mysqld or rather a missing debian-sys-maint user?
                  # If this occurs and is not an error please report a bug.
                  #if ps cax | grep -q mysqld; then
                  if killall -q -s0 -umysql mysqld; then
                    exit 1
                  fi
                else
                  $MYADMIN flush-logs
                fi
        endscript
}

# Compresses and archives the necessary logs, cleans up files

DDL Log

MySQL also maintains a DDL (data definition language) log. It records metadata operations such as DROP TABLE and ALTER TABLE and is used to recover from failures that occur during such operations. The DDL log is a binary file not intended to be read by the user, so do not modify or delete it.

Key points

Always keep the error log enabled; use the general query log to check the application's connection to the database and to inspect queries and operations; and use the slow query log to optimize MySQL performance.


What server logs are

Server logs (log files) are files stored on the server that record system information about the server itself, as well as all available data about visitors to a web resource.

System administrators use logs to analyze visitors: studying the behavior patterns of particular user groups and obtaining information about them such as the browser used, IP address, geographic location, and much more. Beyond analysis, logs can reveal unauthorized access to the site, help establish exactly who was responsible, and provide evidence to hand over to the appropriate authorities.

In its raw form, the data in a log file means little to ordinary users, who see only a jumble of characters in no apparent order. For system administrators and web developers, however, it is perfectly readable text and a source of quite useful information.


Sequence of events

Every time a client accesses a web resource, several events are triggered in sequence; let's walk through them.

1. The page request. When you enter an address into the browser's address bar, or follow a link, for example from a search engine results page, the browser finds and connects to the server hosting the page and requests it. In doing so, it passes the server the following information:
- the IP address of the client computer requesting the page (or, if a proxy server is used, the proxy's IP address);
- the URL of the page the user requested;
- the exact time and date of the request;
- data about the actual location of the client (or, if a proxy server is used, the proxy's actual address);
- information about the client's browser (name, version, etc.);
- the web page from which the client arrived.

2. Transfer of the requested data. The requested data (web page, files, cookies, etc.) is transferred from the server to the user’s computer.

3. Writing to the server log. Finally, an entry is made in the log containing all the data from the two previous events: everything sent in step 1, plus information about the data that was transferred.

How to view server logs

Log files are stored in a file called access.log regardless of which web server you use (Apache, Nginx, a squid proxy, etc.). The file is a text document in which each line records one request. There are many record formats for access.log, but the most popular is combined, in which an entry has the following form and order:

Code: %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"
Where:

%h — the host/IP address from which the request was made;
%l — the client identity reported by identd (usually "-");
%u — the authenticated user name, if any;
%t — the time of the request to the server and the server's time zone;
%r — the request line: type, content and protocol version;
%>s — the final HTTP status code;
%b — the number of bytes sent by the server;
%{Referer}i — the URL the request came from;
%{User-Agent}i — the HTTP header with information about the client (application, language, etc.);
%{Host}i — the name of the virtual host being accessed.

An actual entry then looks something like this:

127.0.0.1 - - [12/Mar/2014:19:15:21 +0400] "GET /index.php HTTP/1.0" 200 4356 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)"
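A line in the combined format can be broken into its fields with a regular expression. A minimal Python sketch (the sample line and the group names are illustrative):

```python
import re

# One capture group per directive of the "combined" LogFormat.
COMBINED_RE = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('127.0.0.1 - - [12/Mar/2014:19:15:21 +0400] "GET /index.php HTTP/1.0" '
        '200 4356 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)"')

m = COMBINED_RE.match(line)
print(m.group("status"))   # %>s -> the HTTP status code
print(m.group("request"))  # %r  -> the request line
```

Running every line of access.log through such a pattern is the core of what the log analyzers mentioned below do, before they aggregate the results into reports.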

Reading logs manually takes quite a lot of time and effort, so experienced webmasters use special software known as log file analyzers. These tools parse the data, which is hard for a human to read, and produce structured reports. Examples include Analog, WebAnalizer, Webalizer, Awstats, Webtrends, and others. There are many such tools, both paid and free, so everyone should find something to their liking.

Where to find site logs

If you are on ordinary shared hosting, you will most likely have to write to your hosting provider and request the logs. Quite often you can also request them through the hosting control panel; different hosts handle this differently. If you have access to the server's system directories, in 99 cases out of 100 you will find the logs at /etc/httpd/logs/access_log.

The error log (error.log)

error.log is a file that also keeps logs, but of errors that occurred on the server rather than of visitors. As with access.log, each line of the file records one error. Entries include the exact date and time the error occurred, the IP address the error was served to, the type of error, and the reason it occurred.

Conclusion

Logs are a powerful and informative tool to work with. Nowadays they are being displaced by tools such as Yandex.Metrica and Google Analytics, which make our lives easier. Still, if you plan to develop, grow and learn new things, I certainly recommend getting to know this topic better.



