Software product testing. How do you test the functionality of a function that uses other functions within it? Change-related testing

Even if you are tolerant enough to restart a program 18 times after a crash within half an hour, and only then throw the monitor out the window, you will agree that working with the program would be more comfortable if it did not “crash”.

How can you make sure that crashes, freezes, or failures of the program you have developed to perform the necessary actions become very rare?

There is no exact answer to this question. But the wisest scientists have pondered this topic for years and managed to find a remedy that, if it does not eliminate all program errors, at least creates the illusion of an effort to eliminate them.

And this remedy is called TESTING the software product.

According to wise people, testing is one of the most established ways to ensure the quality of software development, and it is part of the set of effective tools in a modern software product quality assurance system.

The quality of a software product is characterized by a set of properties that determine how “good” the product is from the point of view of stakeholders, such as the product customer, sponsor, end user, product developers and testers, support engineers, marketing, training and sales personnel. Each of the participants may have a different idea about the product and how good or bad it is, that is, how high the quality of the product is. Thus, setting the problem of ensuring product quality results in the task of identifying stakeholders, their quality criteria and then finding the optimal solution that satisfies these criteria.

When and who?

According to experienced developers, testing of a software product should be carried out from the very beginning of its creation. At the same time, the experienced developers themselves should not take part in the testing, since this is, as the saying goes, not a royal matter. The software product must be tested by specially trained employees called testers, because even the most experienced developer will not see his own mistake, even with the latest optical instruments.

However, all developers agree that, classified by purpose, software product testing should be divided into two classes:

  • Functional testing
  • Non-functional testing

Functional testing

Functional testing means checking the compliance of a software product with the functional requirements specified in the technical specifications for the creation of this product. To put it simply, functional testing checks whether the software product performs all the functions it should.

So, you have finally decided to conduct functional testing. You look into the technical specification, read the functional requirements, and realize that they are not, to put it mildly, in a form in which testing can be carried out. You may be surprised to learn that this discrepancy was noticed long ago, and a way to overcome it was worked out.

To conduct functional testing, the technical control department develops a test program and methodology document (PMI). The PMI contains a list of software product testing scenarios (test cases) with a detailed description of the steps. Each step of a testing scenario is characterized by the actions of the user (the testing specialist) and the expected results: the program’s response to those actions. The test program and methodology must simulate the operation of the software product in real conditions. This means that a testing scenario should be built on an analysis of the operations that future users of the system will perform, not as an artificially compiled sequence of manipulations understandable only to the developer.
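A scenario of this kind can be sketched as a simple data structure in which each step pairs a user action with the expected result. The sketch below is a minimal illustration; the `login` function and the step wording are invented, not taken from any real PMI.

```python
# A minimal sketch of a PMI-style test scenario: each step pairs a
# user action (here, a callable) with the expected result.
# All names here are hypothetical illustrations, not a real product's API.

def run_scenario(steps):
    """Execute each step and compare the actual result with the expected one."""
    results = []
    for description, action, expected in steps:
        actual = action()
        results.append((description, actual == expected))
    return results

# Example system under test: a toy login function.
def login(user, password):
    return "welcome" if (user, password) == ("alice", "secret") else "denied"

scenario = [
    ("valid credentials are accepted", lambda: login("alice", "secret"), "welcome"),
    ("wrong password is rejected",     lambda: login("alice", "oops"),   "denied"),
]

for description, passed in run_scenario(scenario):
    print(f"{description}: {'PASS' if passed else 'FAIL'}")
```

Real PMI scenarios are, of course, richer (preconditions, test data, tear-down), but the action/expected-result pairing is the core idea.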

Typically, functional testing is carried out at two levels:

  • Component (unit) testing. Testing of individual components of a software product, focused on their specifics, purpose and functional features.
  • Integration testing. This type of testing is carried out after component testing and is aimed at identifying defects in the interaction of various subsystems at the level of control flows and data exchange.

Non-functional testing

Non-functional testing evaluates the qualities of a software product, such as ergonomics or performance.

I think the importance of this type of testing is clear and does not require justification. After all, everyone understands that if, for example, the system performance is not sufficient, then users will have to wait half a day for a response to their actions, which can lead to their mass hibernation.

As the name suggests, non-functional testing verifies that a software product meets the non-functional requirements in the technical specifications for its creation. And, as with functional testing, a test program and methodology are developed for non-functional testing.

Embedded Software Testing and Compliance in the Agile Era

Compliance with industry standards is not something you can neglect or postpone; it is an integral part of the embedded software development process. For some industries - such as avionics, automotive and healthcare - strict adherence to quality standards in the development of complex and trouble-free embedded systems is vital to bringing a product to market. Traditionally, testing has played an important role in the development of embedded systems for standards-regulated industries. However, in recent years, established testing practices and processes, and their place and role in such projects, have changed significantly. This was a game changer, and when the rules of the game change, you have to change with them to win.

With the constant development of new, cutting-edge technologies, companies need to quickly offer products to the market that are reliable, safe, easy to use and compatible with other systems - just to avoid getting lost in the rapidly changing technological world. In such a situation, the traditional waterfall model, where the software development process is strictly sequential and testing is performed at the very end, becomes a thing of the past. DevOps and Agile methods are becoming increasingly popular because they allow engineers to complete tasks that previously followed each other simultaneously.

Performance testing

During the performance testing phase, the first step is load testing, the purpose of which is to check whether the system will adequately respond to external influences in a mode close to real-life operation.
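As a rough sketch of the idea, a minimal load test repeatedly invokes an operation and records response times; the `handle_request` operation below is a hypothetical stand-in for a real request handler.

```python
import time

def measure_latency(operation, requests=100):
    """Call the operation repeatedly and collect per-request latencies (seconds)."""
    latencies = []
    for _ in range(requests):
        start = time.perf_counter()
        operation()
        latencies.append(time.perf_counter() - start)
    return latencies

# Hypothetical operation under test: a trivial computation standing in
# for a real request handler.
def handle_request():
    sum(range(1000))

latencies = measure_latency(handle_request, requests=50)
average = sum(latencies) / len(latencies)
worst = max(latencies)
print(f"avg={average:.6f}s max={worst:.6f}s")
```

A production load test would drive the system with many concurrent clients and realistic traffic patterns; this single-threaded loop only illustrates the measure-and-aggregate shape of the exercise.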

In addition to load testing, tests are carried out under conditions of minimal hardware resources and maximum load (stress testing), as well as under conditions of maximum volumes of processed information (volume testing).

There is another type of testing: stability and reliability testing, which includes not only long-term testing of a software product under normal conditions, but also its ability to return to normal operation after short periods of stressful loads.

Documentation for testing

As mentioned above, testing is carried out in accordance with the program and test methodology, which is developed in accordance with GOST 34.603-92.

To carry out testing, a test case is developed, which must contain enough data to test all modes of operation of the software product. Typically, a test case is created jointly by the customer and the contractor based on real data.

To carry out all types of performance testing, a so-called data generator is most often created, which makes it possible to automatically create a sufficient amount of data to obtain an objective result when assessing performance.
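Such a generator can be as simple as a loop producing random but realistically shaped records; the field names and value ranges below are invented for illustration.

```python
import random

def generate_users(count, seed=42):
    """Generate `count` synthetic user records for volume/performance testing."""
    rng = random.Random(seed)          # fixed seed: reproducible test data
    first_names = ["Anna", "Boris", "Clara", "Dmitry"]
    cities = ["Moscow", "Berlin", "Paris"]
    return [
        {
            "id": i,
            "name": rng.choice(first_names),
            "city": rng.choice(cities),
            "age": rng.randint(18, 90),
        }
        for i in range(count)
    ]

users = generate_users(1000)
print(len(users), users[0])
```

Seeding the generator is a deliberate choice: runs become reproducible, so a performance regression can be re-measured on exactly the same data.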

During testing, a testing protocol is drawn up, which contains information about the completion of all stages and steps of testing and comments received during testing.

If the test result is negative, the deficiencies are corrected and retested.

Exploratory testing

Exploratory testing (ad hoc testing) is a subtype of functional testing. It is used in fast-moving projects with agile development methods, where there is no clear documentation or requirements. Exploratory testing is the pinnacle of skill in software testing. Doing it well is possible only for highly qualified specialists, and the result depends almost entirely on the performer: his experience, knowledge (both of the subject area and of testing methods), and ability to quickly get to the essence.

Load testing

Load testing is the process of analyzing the performance of the system under test under load. The purpose of load testing is to determine the application's ability to withstand external loads. Typically, tests are carried out in several stages.

1. Generating test scripts

For effective analysis, scenarios must be as close as possible to real use cases. It is important to understand that exceptions are always possible, and even the most detailed test plan may fail to cover some particular case.

2. Development of a test configuration

Once the test scenarios exist, it is important to plan the order in which the load will be increased. For a successful analysis, it is necessary to identify the performance evaluation criteria (response speed, request processing time, etc.).

3. Running the tests

When conducting tests, it is important to monitor the execution of the scenarios and the response of the system under test in a timely manner. Emulating high loads requires a serious hardware and software infrastructure. In some cases, mathematical modeling methods are used to reduce the cost of the work: data obtained at low loads are taken as a basis and extrapolated. The higher the level of simulated load, the lower the accuracy of the estimate, but this method significantly reduces costs.
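The approximation idea can be illustrated by fitting a least-squares line to latencies measured at low load and extrapolating it to a higher load; the numbers below are invented for the sketch.

```python
# Fit a straight line latency = slope*load + intercept to measurements
# taken at low load, then extrapolate to a higher load level. The data
# points are invented; real ones would come from actual load tests.

loads = [10, 20, 30, 40]            # concurrent users (low-load measurements)
latencies = [105, 118, 131, 144]    # observed response time, ms

n = len(loads)
mean_x = sum(loads) / n
mean_y = sum(latencies) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(loads, latencies)) \
        / sum((x - mean_x) ** 2 for x in loads)
intercept = mean_y - slope * mean_x

def predict(load):
    return slope * load + intercept

# Extrapolate to a load we did not measure; remember: the further from
# the measured range, the less accurate the estimate.
print(f"predicted latency at 100 users: {predict(100):.1f} ms")
```

Real systems are rarely linear near saturation, which is exactly why the text warns that accuracy drops as the simulated load grows.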

Test automation

The main feature of automated testing is the ability to quickly conduct regression tests. The main advantages of automation (according to a report from Worksoft) are increased staff efficiency, earlier detection of defects and higher quality of business processes. These advantages are offset by a significant disadvantage: high cost. Due to the expense of implementing and supporting test automation, about 50% of companies still rely mainly on manual testing.

Usability testing

Any application is created in order to be used. Ease of use is an important quality indicator of a program. The IT industry knows many examples of projects taking off after a successful usability fix. The wider the audience, the more important the usability factor. Usability testing involves detailed analysis of user behavior. To assess ergonomics, it is important to have data not only on the speed of completing a business task, but also on the user’s emotions, facial expressions, and voice timbre.

Configuration testing

Configuration testing gives confidence that the application will work on different platforms, and therefore for the maximum number of users. For web applications, cross-browser testing is usually chosen; for Windows applications, testing on various operating systems and architectures (x86, x64). An important component of configuration testing is the test infrastructure: to conduct the tests, you need to constantly maintain a fleet of test machines, whose number varies from five to several dozen.

Integration testing

If your project has more than one component, it needs integration testing. With a complex application architecture, a necessary condition for quality assurance is checking the interaction of program parts. Testing is achieved by developing and conducting “end-to-end” cases. Integration testing is carried out after component testing. Therefore, it is very important to take into account the experience of component testing, while respecting the business orientation of the test cases.

Stress testing

Every system has a limit of normal functioning. When this limit is exceeded, the system enters a state of stress and changes its behavior significantly. Stress testing checks the operation of an application under conditions exceeding the limits of normal functioning. This is especially important for “critical” programs: banking software, aviation industry programs, medicine. Stress testing is carried out not only at the software development stage, but throughout the entire operating cycle, in order to obtain and process data on system behavior over a long period of time.

Let's assume there is a get-data function that returns a map of information about the user ID passed to it. This function uses three functions, source-a, source-b and source-c, to produce three different kinds of maps. get-data then combines all these maps into one map and returns it.

When I test get-data, should I check for the presence of data under the keys? Does it make sense for this function to fail unit tests if one of source-a, source-b or source-c fails? If the job of that function is to join data, and it does, that should be enough, right?


2 answers

Let's assume there is a get-data function that returns a map of information about the user ID passed to.

Great. Then that is what you should test: for a given ID, do you return the correct data?

Now this function uses three functions, source-a, source-b and source-c, to produce three different kinds of maps.

Which is an implementation detail that you should ignore in the test. All you're testing is that your unit of work (this function) does what it's supposed to do (take an ID and return XYZ data for that ID). How the function does it doesn't really matter; after all, the key benefit of this unit test is that you can refactor the function's implementation and the test will check that you did it correctly.

However, you'll probably have to mock the data sources, so at some point the test will probably need to know how the code works. You need to balance three competing goals here: keeping the test isolated (by mocking the data), keeping the test focused on requirements, and pragmatism.
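As a sketch of this advice (in Python, with hypothetical names mirroring the question's get-data, source-a, source-b and source-c), passing the sources in as parameters makes them trivial to replace with stubs, so the unit test checks only the combining logic:

```python
# Hypothetical sketch mirroring the question's get-data / source-a /
# source-b / source-c. Passing the sources in as parameters (dependency
# injection) makes them easy to replace with stubs in a unit test.

def get_data(user_id, source_a, source_b, source_c):
    """Combine the three maps into one; later sources win on key clashes."""
    return {**source_a(user_id), **source_b(user_id), **source_c(user_id)}

# Unit test: stub out the sources and check only the combining logic.
def fake_a(user_id):
    return {"id": user_id, "name": "Alice"}

def fake_b(user_id):
    return {"email": "alice@example.com"}

def fake_c(user_id):
    return {"name": "Alice Smith", "age": 30}   # overrides fake_a's "name"

result = get_data(7, fake_a, fake_b, fake_c)
print(result)
```

In a codebase where the sources are fixed module-level functions, `unittest.mock.patch` achieves the same isolation without changing the function's signature.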

In the end, what matters is the code itself. Tests exist to support the actual code; spending a lot of time and hassle polishing the tests is not as useful as simply having them.

In unit testing you should only test the functionality of one class. If your source-a, source-b and source-c functions call other classes, you should mock them (they should be unit tested in their own classes).

In integration testing, you are testing the behavior of multiple classes interacting with each other. This means that your test of get-data must check that the data being retrieved is correct (source-a, source-b and source-c are correct, and the data is combined properly).

Unit tests are simpler and more focused, and should be written by developers. Integration tests usually become outdated relatively quickly (whenever any internal component changes), making them more difficult to maintain. They should be created by a QA specialist.
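By contrast, an integration-style check of the same hypothetical function wires in the real (here: toy) sources and verifies the combined result end to end:

```python
# Integration-style check of a hypothetical get_data: instead of stubs,
# the real (here: toy) sources are wired together and the combined
# result is verified end to end.

def source_a(user_id):
    return {"id": user_id}

def source_b(user_id):
    return {"email": f"user{user_id}@example.com"}

def source_c(user_id):
    return {"active": True}

def get_data(user_id):
    return {**source_a(user_id), **source_b(user_id), **source_c(user_id)}

combined = get_data(42)
assert combined == {"id": 42, "email": "user42@example.com", "active": True}
print("integration check passed:", combined)
```

A failure here could come from any of the three sources or from the combining step, which is exactly why such tests are broader and harder to keep current than the unit tests above.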

Among all types, functional testing rightfully occupies a leading position, since the program must first of all work correctly, otherwise ease of use, security and sufficient speed will be of absolutely no use. In addition to mastering various testing techniques, each specialist must understand how to properly conduct a test in order to obtain the most effective result.

Functional testing: where to direct the main efforts?

  • On unit or system testing;
  • On “white box” or “black box” checking;
  • On manual testing or automation;
  • On testing new functionality or regression testing of the old;
  • On “negative” or “positive” tests.

It is important to find the right “middle” path between all these areas of activity, balancing the efforts and making maximum use of the advantages of each area.

Software verification is carried out in different ways, one of which is “black box”, or data-driven, testing.

The program in this case is treated as a “black box”, and the test is carried out to discover circumstances in which the behavior of the program does not comply with the specification. All errors are identified by managing the data, which in the limiting case means exhaustive testing, that is, using all possible combinations of input data.

If, for a program, the execution of a command depends on the events preceding it, then checking all possible sequences would be required. It is quite obvious that in most cases exhaustive testing is simply impossible, so an acceptable or reasonable option is usually chosen instead: running the program on a small subset of all input data. This option, of course, cannot fully guarantee the absence of deviations from the specifications.
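A back-of-the-envelope calculation shows why exhaustive testing is usually impossible: a function taking just two 32-bit integers already has more input combinations than could ever be tried.

```python
# Why exhaustive testing is infeasible: count the input space of a
# function taking two 32-bit integers.

values_per_int = 2 ** 32
total_inputs = values_per_int ** 2          # every pair of arguments

tests_per_second = 1_000_000                # a generous assumption
seconds_per_year = 60 * 60 * 24 * 365
years = total_inputs / (tests_per_second * seconds_per_year)
print(f"{total_inputs} inputs, about {years:.0f} years at a million tests/second")
```

And this is before considering sequences of commands, where the number of cases grows combinatorially with history length.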

Functional testing involves choosing the right tests. The following methods of generating test sets are usually distinguished:

  • Boundary value analysis;
  • Equivalence partitioning;
  • Error guessing;
  • Cause-and-effect analysis.

You can consider each of them separately.

Boundary value analysis. Boundary values are usually understood as the values lying on the boundaries of equivalence classes. These are the places where an error is most likely to be found. Using this method requires a certain amount of creativity from the specialist, as well as familiarity with the specific problem under consideration.
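For a hypothetical validator that accepts ages from 18 to 90, boundary value analysis suggests testing exactly at each boundary and one step outside it:

```python
# Hypothetical validator: accepts ages in the range 18..90 inclusive.
def is_valid_age(age):
    return 18 <= age <= 90

# Boundary value analysis: test at each boundary and one step beyond it,
# where off-by-one errors are most likely to hide.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    89: True,   # just below the upper boundary
    90: True,   # upper boundary
    91: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, age
print("all boundary cases pass")
```

An implementation that wrote `18 < age` instead of `18 <= age` would pass a mid-range test with 40 but fail exactly the `age == 18` boundary case, which is the point of the method.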

Equivalence partitioning. All possible sets of input parameters are divided into several equivalence classes. The data is grouped on the principle of detecting similar errors: it is generally accepted that if one test from a class reveals an error, then the equivalent ones will reveal it too. Functional testing by this method is carried out in two stages: first, the equivalence classes are identified, and then the specific tests are formed.
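For the same kind of hypothetical age check, equivalence partitioning splits the inputs into three classes and picks one representative per class:

```python
# Hypothetical validator under test: ages 18..90 inclusive are valid.
def is_valid_age(age):
    return 18 <= age <= 90

# Equivalence partitioning: one representative per class is assumed to
# stand in for every value of that class.
partitions = [
    ("below range (invalid)", 5,   False),
    ("within range (valid)",  40,  True),
    ("above range (invalid)", 120, False),
]

for name, representative, expected in partitions:
    assert is_valid_age(representative) == expected, name
print("one representative per equivalence class checked")
```

Three tests instead of thousands; combining this with the boundary values of each class gives a small but effective test set.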

Cause-and-effect analysis. Such checks allow the selection of tests with high yield. Here an individual input condition is treated as a cause, and an output condition as an effect. The method is based on relating causes to their effects, that is, on clarifying those cause-and-effect relationships. Testing of the software product is carried out in several stages, resulting in a list of causes and the effects that follow from them.

Typical sources of software errors include:

  • unintentional deviations of developers from working standards or implementation plans;
  • specifications of functional and interface requirements are made without complying with development standards, which leads to disruption of the functioning of programs;
  • organization of the development process - imperfect or insufficient management of the project manager's resources (human, technical, software, etc.) and issues of testing and integration of project elements.

Let's look at the testing process based on the recommendations of the ISO/IEC 12207 standard, and give the types of errors that are detected in each life cycle process.

Requirements development process. When determining the initial concept of the system and the initial requirements for it, analysts make errors in specifying the top level of the system and in building a conceptual model of the subject area.

Typical errors in this process are:

  • inadequacy of the requirements specification for end users;
  • incorrect specification of the interaction of the software with the operating environment or with users;
  • non-compliance with customer requirements for individual and general software properties;
  • incorrect description of functional characteristics;
  • lack of availability of tools for all aspects of implementing customer requirements, etc.

Design process. Errors in the design of components can occur when describing algorithms, control logic, data structures, interfaces, data flow modeling logic, input-output formats, etc. These errors stem from defects in the analysts' specifications and from design flaws. They include errors related to:

  • defining the user interface with the environment;
  • describing functions (inadequacy of the goals and objectives of components, discovered when checking the set of components);
  • defining the information-processing process and the interaction between processes (the result of incorrectly determining the relationships between components and processes);
  • incorrect specification of data and data structures, both in individual components and in the software as a whole;
  • incorrect description of module algorithms;
  • defining the conditions under which errors can occur in the program;
  • violation of the standards and technologies adopted for the project.

Coding stage. At this stage, errors arise as a result of design defects and of mistakes made by programmers and managers while developing and debugging the system. The causes of errors are:

  • lack of control over the values of input parameters, array indices, loop parameters, output results, division by zero, etc.;
  • incorrect handling of irregular situations when analyzing return codes from called subroutines and functions;
  • violation of coding standards (poor comments, irrational division into modules and components, etc.);
  • using one name to designate different objects, or different names for one object, and poor name mnemonics;
  • inconsistent changes to the program by different developers, etc.
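
The missing checks listed above (input values, array indices, division by zero) are exactly what defensive code adds. A minimal sketch, with an illustrative function of our own invention:

```python
# Defensive checks against typical coding-stage errors.
# average_of_slice is an invented example, not from any real library.
def average_of_slice(values, start, end):
    """Average of values[start:end], guarding against common coding errors."""
    # Control over input parameter types
    if not isinstance(values, (list, tuple)):
        raise TypeError("values must be a list or tuple")
    # Control over array indices
    if not (0 <= start <= end <= len(values)):
        raise IndexError("slice bounds out of range")
    window = values[start:end]
    # Guard against division by zero on an empty slice
    if not window:
        raise ValueError("empty slice: cannot average zero elements")
    return sum(window) / len(window)

print(average_of_slice([2, 4, 6, 8], 1, 3))  # -> 5.0
```

Without these guards, the same calls would fail later and further from the cause, as a crash or, worse, a silently wrong result.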

Testing process. In this process, errors are made by programmers and testers while carrying out the assembly and testing technology, selecting test sets and test scenarios, etc. Failures caused by such errors must be identified and eliminated, and must not distort the error statistics of the components or of the software as a whole.

Maintenance process.During the maintenance process, errors are discovered that are caused by shortcomings and defects in operational documentation, insufficient indicators of modifiability and readability, as well as the incompetence of persons responsible for maintaining and/or improving the software. Depending on the nature of the changes being made, almost any errors similar to the previously listed errors at previous stages may occur at this stage.

All errors that occur in programs are usually divided into the following classes [7.12]:

  • logical and functional errors;
  • calculation and runtime errors;
  • input/output and data manipulation errors;
  • interface errors;
  • data volume errors, etc.

Logical errors result from violations of the algorithm's logic, internal inconsistency of variables and operators, and breaches of programming rules. Functional errors are a consequence of incorrectly defined functions, violations of the order in which they are applied, incomplete implementation, etc.

Calculation errors arise due to inaccuracy of source data and implemented formulas, method errors, incorrect application of calculation operations or operands. Runtime errors are associated with failure to provide the required request processing speed or program recovery time.
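
A one-line illustration of such a calculation error: floating-point arithmetic accumulates rounding error, so exact comparison of computed results is a latent bug.

```python
import math

# A classic calculation error: accumulated floating-point inaccuracy
total = sum(0.1 for _ in range(10))
print(total)           # slightly less than 1.0
print(total == 1.0)    # False: naive exact comparison is a latent bug

# Correct approach: compare with a tolerance
print(math.isclose(total, 1.0, rel_tol=1e-9))  # True
```

Tests for computational code should therefore assert closeness within a tolerance, never exact equality of floating-point results.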

I/O and data-manipulation errors are a consequence of poor-quality preparation of data for program execution and of failures when entering data into databases or retrieving it from them.

Interface errors are errors in the relationships of individual elements with one another, which manifest themselves during the transfer of data between them, as well as during interaction with the operating environment.
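
A typical interface error of this kind is two components disagreeing about the order of transferred parameters. The `transfer` function below is an invented example:

```python
# Sketch of an interface error: caller and callee disagree on parameter order.
def transfer(amount_cents, account_id):
    """Callee expects (amount_cents, account_id)."""
    return {"account": account_id, "amount": amount_cents}

# Correct call
ok = transfer(500, "acc-1")

# Interface error: arguments swapped at the call site; no crash, just wrong data
bad = transfer("acc-1", 500)
print(ok)
print(bad)  # silently wrong: amount is a string, account is a number

# Keyword arguments make the interface explicit and prevent this class of error
safe = transfer(amount_cents=500, account_id="acc-1")
assert safe == ok
```

Note that the faulty call raises no exception, which is what makes interface errors hard to catch without tests that check the transferred data itself.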

Volume errors relate to data and are a consequence of the fact that the implemented access methods and database sizes do not satisfy the real volumes of system information or the intensity of their processing.

The main error classes listed above are characteristic of different types of software components and manifest themselves in programs in different ways. When working with a database, errors of data representation and manipulation arise, along with logical errors in specifying applied data-processing procedures. In computational programs, calculation errors predominate; in control and processing programs, logical and functional errors do. Software that consists of many diverse programs implementing different functions may contain errors of different types. Interface errors and volume violations are typical of systems of any type.

Analyzing the types of errors in programs is a prerequisite for creating test plans and test methods to ensure software correctness.

At the present stage of development of software-development support tools (CASE technologies, object-oriented methods and tools for designing models and programs), design can be carried out in such a way that the software is protected against the most common errors, thereby preventing the appearance of defects.

Relationship between error and failure. An error in a program, as a rule, leads to a failure of the software during its operation. To analyze the cause-and-effect "error-failure" relationships, the following actions are performed:

  • identification of flaws in design and programming technologies;
  • relating flaws in the development process to human errors;
  • classification of failures, flaws and possible errors, as well as defects, at each stage of development;
  • comparison of human errors made in a particular development process with defects in the object that result from errors in the project specification and program models;
  • verification and protection against errors at all stages of the life cycle, and detection of defects at each stage of development;
  • comparison of defects and failures in the software in order to build a system of interconnections and methods for localizing, collecting and analyzing information about failures and defects;
  • development of approaches to documenting and testing software.

The ultimate goal of analyzing "error-failure" cause-and-effect relationships is to define methods and means for testing and detecting errors of particular classes, criteria for completing testing on multiple data sets, and ways to improve the organization of software development, testing and maintenance.

The following classification of failure types can be given:

  • hardware failures, in which the system-wide software is inoperable;
  • information failures, caused by errors in input data and in data transmission over communication channels, as well as by failures of input devices (a consequence of hardware failures);
  • ergonomic failures, caused by operator errors during interaction with the machine (a secondary failure that can lead to information or functional failures);
  • software failures, when there are errors in components, etc.

Sources of errors. Errors can be introduced during the development of the project, its components, the code and the documentation. As a rule, they are discovered during execution or maintenance of the software, at the most unexpected points.

Some errors in a program may be the result of deficiencies in requirements definition, design, code generation, or documentation. Others arise during the development of the program or of the interfaces between its elements (for example, when the order of communication parameters is violated, or fewer or more parameters than required are passed, etc.).

The causes of such errors include misunderstanding of customer requirements and inaccurate specification of requirements in project documents. As a result, some system functions are implemented in a way that does not work as the customer intended. To clarify such details, the customer and the developer jointly discuss the requirements.

The development team may also change the syntax and semantics of the system description. However, some errors may still go undetected (for example, incorrectly set indices or variable values in statements).



