In Part II, I introduced the terms starting with the letters E through I. In this part, I will introduce the remaining common terms.
Link to Part II: https://viblo.asia/p/thuat-ngu-trong-kiem-thu-phan-mem-phan-i-bWrZnEmOKxw
K
Terms | Meaning | Note |
---|---|---|
keyword driven testing | A scripting technique that uses data files containing not only test data and expected results but also keywords related to the application under test. The keywords are interpreted by special support scripts that are called by a control script for the test. |
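As a minimal illustration of the idea (not taken from any particular framework), test data rows carry a keyword column, and a support script interprets each keyword. The keyword names and the toy "calculator" under test below are hypothetical.

```python
# Minimal keyword-driven sketch: data rows of (keyword, argument) are
# interpreted by dispatching on a dictionary of support functions.

def enter(state, value):
    state.append(float(value))          # push a value onto the calculator

def add(state, _):
    state.append(state.pop() + state.pop())  # replace top two values by sum

def check_total(state, expected):
    return state[-1] == float(expected)      # verdict keyword

KEYWORDS = {"enter": enter, "add": add, "check_total": check_total}

def run_table(rows):
    """Interpret each data row by dispatching on its keyword column."""
    state, results = [], []
    for keyword, arg in rows:
        outcome = KEYWORDS[keyword](state, arg)
        if outcome is not None:          # only "check" keywords return a verdict
            results.append(outcome)
    return results

# In practice this table would live in an external data file.
table = [("enter", "2"), ("enter", "3"), ("add", ""), ("check_total", "5")]
print(run_table(table))  # [True]
```

The point of the technique is that new tests can be added by editing the data table alone, without touching the support scripts.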
L
Terms | Meaning | Note |
---|---|---|
load test | A type of testing that measures the behavior of a component or system under increasing load, for example the number of parallel users and/or the number of transactions, in order to determine what load the component or system can handle. | Load testing is now considered almost mandatory for websites and apps with heavy interaction or big data. Many tools help QA and developers test system performance, for example JMeter, … |
low level test case | A test case with concrete (implementation-level) values for input data and expected results. | How low a test case is prioritized depends largely on the requirements of the system under development. For example, a font-size check is usually rated low. However, if the home screen shows an important message whose font size is too small for users to read, that is a fairly serious bug, and the font-size check can no longer be rated low. |
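A load test can be sketched as firing many parallel transactions and recording their response times. The snippet below is a toy illustration using threads against a stand-in transaction; a real load test would target the actual system, e.g. through a tool such as JMeter.

```python
# Toy load-test sketch: run N simulated transactions in parallel and
# record each response time. transaction() is a stand-in for a real call
# to the system under test (a real run would hit, e.g., an HTTP endpoint).
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    start = time.perf_counter()
    time.sleep(0.01)               # pretend to wait on the system under test
    return time.perf_counter() - start

def load_test(parallel_users=20):
    """Fire one transaction per simulated user, all in parallel."""
    with ThreadPoolExecutor(max_workers=parallel_users) as pool:
        times = list(pool.map(lambda _: transaction(), range(parallel_users)))
    return max(times), sum(times) / len(times)

worst, average = load_test()
print(f"worst={worst:.3f}s average={average:.3f}s")
```

Raising `parallel_users` and watching how the worst and average times grow is the essence of finding the load the system can handle.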
M
Terms | Meaning | Note |
---|---|---|
maintenance testing | Testing the changes to an operational system, or the impact of a changed environment on an operational system. | Typically used in maintenance projects. This kind of testing needs to be very careful and meticulous, and requires developers and QA to understand the system deeply, because even a few small modifications can lead to unforeseen consequences. |
management review | A systematic evaluation of a software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management, that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose. | |
measure | The number or category assigned to an attribute of an entity by making a measurement | |
memory leak | A defect in a program's logic that prevents it from reclaiming memory after it has finished using it, eventually causing the program to fail due to lack of memory. | |
monitor | A software tool or hardware device that runs simultaneously with a component or system that is tested and monitored, which records and / or analyzes the behavior of the component or system | |
multiple condition coverage | The percentage of combinations of all single condition outcomes within one statement that have been exercised by a test suite. 100% multiple condition coverage implies 100% condition coverage. | |
multiple condition testing | A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement). | |
mutation analysis | A method for determining the thoroughness of a test suite by measuring how well the suite can distinguish a program from small (mutant) variants of that program. |
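Mutation analysis can be illustrated in a few lines: a mutant is "killed" when at least one test that passes on the original fails on it. The functions and the mutation below are hypothetical examples, not the output of a real mutation tool.

```python
# Mutation-analysis sketch: a thorough test suite should be able to tell
# the original program apart from a small mutant of it.

def original(x):
    return x * 2

def mutant(x):
    return x + 2                       # "*" mutated to "+"

# Note: the input 2 alone would NOT kill this mutant (2*2 == 2+2),
# which is exactly the kind of gap mutation analysis reveals.
TESTS = [(2, 4), (3, 6)]

def passes(func):
    return all(func(arg) == expected for arg, expected in TESTS)

killed = passes(original) and not passes(mutant)
print("mutant killed:", killed)  # True
```

The mutation score of a suite is the fraction of generated mutants it kills; a low score suggests the suite is not thorough.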
N
Terms | Meaning | Note |
---|---|---|
negative testing | Tests intended to show that a component or system does not work. Negative testing is related to the tester's attitude rather than to a specific test approach or test design technique. | |
non-functional requirement | A requirement is not related to functionality, but relates to attributes such as reliability, efficiency, usability, maintainability, and portability. | |
non-functional testing | Examining properties of a component or system unrelated to functionality, e.g. reliability, efficiency, usability, maintainability, and portability | |
non-functional test design techniques | Methods used to design or select tests for non-functional testing. |
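A small sketch of negative testing: invalid inputs are fed to a component and the test passes only when the component rejects them. The `parse_age` function is a hypothetical component under test.

```python
# Negative-testing sketch: the tests below try to show the component
# does NOT work for invalid input, and pass only when it is rejected.

def parse_age(text):
    age = int(text)                    # raises ValueError on non-numeric input
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

def expect_rejection(value):
    """A negative test passes when the invalid value is rejected."""
    try:
        parse_age(value)
    except ValueError:
        return True
    return False

results = [expect_rejection(v) for v in ["abc", "-1", "999"]]
print(results)  # [True, True, True]
```

Positive tests ("does it accept valid input?") and negative tests ("does it reject invalid input?") complement each other.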
O
Terms | Meaning | Note |
---|---|---|
operability | The capability of a software product to enable the user to operate and control it. | |
operational environment | Hardware and software products installed at user or customer sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications. | |
operational testing | Tests are conducted to evaluate a component or system in its operating environment. | |
output value | A value produced at an output of the component or system under test. | In a system, an input value always produces some output value. |
P
Terms | Meaning | Note |
---|---|---|
pair testing | Two testers work together to find defects. Usually, they share a computer and trade control of it while testing. | |
Pass | A test is considered to be passed if its actual results match its expected results. | |
pass / fail criteria | Decision rules used to determine whether a test item (function) or feature has passed or failed a test. | |
path coverage | The percentage of paths that have been exercised by a test suite. | |
path testing | A white box test design technique in which test cases are designed to perform paths. | |
performance | The degree to which a system or component accomplishes its designated functions within given constraints on processing time and throughput rate. | |
performance testing | The process of testing to determine the performance of a software product. | |
performance testing tool | A tool that supports performance testing and usually has two main facilities: load generation and test transaction measurement. Load generation can simulate multiple users or high volumes of input data. During execution, response-time measurements are taken from selected transactions and recorded. Performance testing tools usually provide reports based on test logs, and load graphs plotted against response time. | |
phase test plan | A test plan that typically addresses one test level. | |
portability testing | The process of testing to determine the portability of a software product. | |
precondition | Environmental and state conditions must be met before a component or system can be executed with a specific test or test procedure. | |
Priority | The level of (business) importance assigned to an item, e.g. a defect or a test case. | Often used to rank a set of test cases so that QA knows in which order to check them. |
process cycle test | A black box test design technique in which test cases are designed to execute business procedures and processes. | |
project | A project is a unique set of coordinated and controlled activities, with start and finish dates, undertaken to achieve an objective conforming to specific requirements, including constraints of time, cost, and resources. | |
project test plan | A test plan that typically addresses multiple test levels. |
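Pass/fail criteria can be made concrete as an explicit decision rule. The sketch below combines a functional criterion (actual equals expected) with an assumed response-time budget; the 200 ms threshold is purely illustrative.

```python
# Sketch of pass/fail criteria as an explicit decision rule. The 200 ms
# response-time budget is an assumed, illustrative threshold.

def verdict(actual, expected, elapsed_ms, budget_ms=200):
    """Decide PASS/FAIL from a functional and a timing criterion."""
    if actual != expected:
        return "FAIL"          # functional criterion: result must match
    if elapsed_ms > budget_ms:
        return "FAIL"          # non-functional criterion: within time budget
    return "PASS"

print(verdict(actual=5, expected=5, elapsed_ms=120))  # PASS
```

Writing the rule down once, instead of judging case by case, keeps verdicts consistent across testers and test runs.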
Q
Terms | Meaning | Note |
---|---|---|
quality | The degree to which a component, system, or process meets specified requirements and/or the needs and expectations of the user/customer. | |
quality assurance | The part of quality management focused on providing confidence that quality requirements will be fulfilled. | |
quality management | Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally include establishing the quality policy and quality objectives, quality planning, quality control, quality assurance, and quality improvement. |
R
Terms | Meaning |
---|---|
random testing | A black box test design technique in which test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance. |
recoverability testing | The process of testing to determine the recoverability of a software product. |
regression testing | Testing of a previously tested program after modification, to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed. |
release note | A document identifying test items, their configuration, current status, and other delivery information, delivered by development to testing, and possibly to other stakeholders, at the start of a test execution phase. |
reliability testing | Testing process to determine the reliability of software products. |
requirement | A condition or ability required by the user to solve a problem or achieve a goal must be met or owned by a system or system component to meet the contract, standard, specification. or other official imposing document. |
requirements management tool | A tool that supports recording requirements, requirement attributes (e.g. priority, responsible owner) and annotations, and that facilitates traceability through layers of requirements as well as requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and checking for violations of predefined requirement rules. |
result | The consequence/outcome of executing a test. It includes outputs to screens, changes to data, reports, and communication messages sent out. See also actual result, expected result. |
re-testing | Running test cases that failed the last time they were run, in order to verify the success of corrective actions. |
review | An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough. |
risk | A factor that could result in future negative consequences; usually expressed as impact and likelihood. |
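The difference between re-testing and regression testing can be shown in a short sketch: after a fix, the previously failed cases are re-run (re-testing), and then the whole suite is run again to catch side effects in unchanged areas (regression testing). The buggy/fixed functions are hypothetical.

```python
# Re-testing vs regression testing on a toy "square" function whose
# first version is wrong for negative inputs.

def run_suite(func, cases):
    """Return a pass/fail map for each named (input, expected) case."""
    return {name: func(arg) == expected for name, (arg, expected) in cases.items()}

CASES = {"zero": (0, 0), "pos": (3, 9), "neg": (-2, 4)}

buggy = lambda x: x * x if x >= 0 else -x * x   # wrong for negatives
fixed = lambda x: x * x                         # after the fix

first_run = run_suite(buggy, CASES)
failed = [n for n, ok in first_run.items() if not ok]          # ["neg"]
retest = run_suite(fixed, {n: CASES[n] for n in failed})       # re-testing
regression = run_suite(fixed, CASES)                           # full re-run
print(failed, retest, all(regression.values()))
```

Re-testing alone confirms the fix; only the full regression run confirms the fix broke nothing else.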
S
Terms | Meaning |
---|---|
scalability | The capability of a software product to be upgraded to accommodate increased loads. |
scalability testing | Testing to determine the scalability of a software product. |
security | Attributes of a software product that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data. |
security testing | Testing to determine the security of a software product. |
simulation | Representation of selected behavioral characteristics of one physical system or abstraction by another. |
simulator | A device, computer program or system used during the test, which operates or behaves like a certain system when provided with a controlled set of inputs |
smoke test | A subset of all defined/planned test cases that covers the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices. |
specification | A document that specifies, ideally in a complete, precise, and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and often the procedures for determining whether these provisions have been satisfied. |
stability | Ability of a software product to avoid unwanted effects from software modifications |
state transition testing | A black box test design technique in which test cases are designed to execute valid and invalid state transitions. |
statement coverage | The percentage of executable statements that have been exercised by a test suite. |
statement testing | A white box test design technique in which test cases are designed to execute statements |
static testing | Testing a component or system at a specification or deployment level without implementing the software, e.g. static code evaluation or analysis |
statistical testing | A test design technique in which an input statistical distribution model is used to construct representative test cases |
stress testing | Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. |
suitability | Ability of a software product to provide an appropriate set of functions for specified user tasks and goals |
system integration testing | Testing the integration of systems and packages; testing interfaces to external organizations (e.g. electronic data interchange, Internet). |
system testing | The process of testing an integrated system to verify that it meets specified requirements |
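State transition testing can be sketched with a small transition table: valid transitions move the system to a new state, invalid ones are rejected. The door states and events below are illustrative.

```python
# State-transition sketch: a door modeled as a transition table keyed by
# (current state, event). Anything not in the table is invalid.

TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)], True
    except KeyError:
        return state, False            # invalid transition: state unchanged

# Valid path under test: closed -> opened -> closed -> locked
state = "closed"
for event in ["open", "close", "lock"]:
    state, ok = step(state, event)

# Invalid transition under test: you cannot open a locked door
state2, ok2 = step("locked", "open")
print(state, ok2)  # locked False
```

Test cases are then chosen to cover each valid transition at least once, plus representative invalid (state, event) pairs.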
T
Terms | Meaning |
---|---|
technical review | A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A technical review is also known as a peer review. |
test approach | The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project's goals and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria, and the test types to be performed. |
test automation | The use of software to perform or support test activities, e.g. test management, test design, test execution, and results checking. |
test basis | All documents from which the requirements of a component or system can be inferred; the documentation on which the test cases are based. If a document can be amended only by way of a formal amendment procedure, then the test basis is called a frozen test basis. |
test case | A set of input values, execution preconditions, expected results, and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. |
test case specification | The document specifies a set of test cases (objectives, inputs, test actions, expected results, and prerequisites) for a test item. |
test charter | A statement of test objectives, and possibly test ideas. Test charters are often used in exploratory testing. See also exploratory testing. |
test comparator | A testing tool to perform automated test comparisons |
test condition | An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, quality attribute, or structural element. |
test data | Data that exists (for example, in a database) before a test is executed and that affects or is affected by the tested component or system |
test design specification | A document specifying the test conditions (coverage items) for a test item, the detailed test approach, and identifying the associated high-level test cases. |
test design tool | A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository (e.g. a requirements management tool), or from test conditions held in the tool itself. |
test environment | An environment containing hardware, equipment, simulations, software tools and other support elements necessary to conduct testing |
test evaluation report | A document is created at the end of the testing process that summarizes all activities and test results. It also contains an assessment of the testing process and lessons learned |
test execution | The process of running a test by the component or system under test, producing actual results. |
test item | The individual element to be tested. There is usually one test object and many test items. See also test object. |
test log | A time record of relevant details about the performance of the test |
test manager | The person responsible for testing and evaluating a test object; the individual who directs, controls, administers, plans, and regulates the evaluation of a test object. |
Test Maturity Model (TMM) | A five-level staged framework for test process improvement, related to the Capability Maturity Model (CMM), that describes the key elements of an effective test process. |
Test Process Improvement (TPI) | A continuous framework for improving test procedures that describes the key elements of an effective test process, with particular emphasis on system testing and acceptance testing. |
test object | Component or system is tested |
test performance indicator | A high-level metric indicating to what extent a certain target value or criterion is met. Often related to test process improvement objectives, e.g. Defect Detection Percentage (DDP). |
test phase | A distinct set of test activities collected into a manageable phase of a project, e.g. the execution activities of a test level. |
test plan | A document describing the scope, approach, resources, and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. |
test procedure specification | A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. |
test process | The fundamental test process comprises planning, specification, execution, recording, and checking for completion. |
test script | Often used to refer to a test procedure specification, especially automated procedures |
test specification | A document including a test design specification, test case specification and / or test procedure specification |
test suite | A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one. |
test tool | A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution, and test analysis. |
test type | A group of test activities aimed at testing a component or system with regard to one or more interrelated quality attributes. A test type is focused on a specific test objective, i.e. reliability test, usability test, regression test, etc., and may take place on one or more test levels or test phases. |
tester | A technically skilled professional who is involved in the testing of a component or system. |
testing | The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation, and evaluation of software products and related work products, to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose, and to detect defects. |
thread testing | A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to integrating components by levels of a hierarchy. |
top-down testing | An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower-level components being simulated by stubs. Tested components are then used to test lower-level components, and the process is repeated until the lowest-level components have been tested. |
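A tiny sketch of the stub idea used in top-down testing: the high-level component is tested first, with its lower-level dependency replaced by a stub that returns canned data. The names `report` and `fetch_totals` are hypothetical.

```python
# Top-down/stub sketch: the high-level report() is tested before its
# lower-level data source exists, by injecting a stub in its place.

def report(fetch_totals):
    """High-level component; the dependency is injected so it can be stubbed."""
    totals = fetch_totals()
    return f"sum={sum(totals)} count={len(totals)}"

def stub_fetch_totals():
    return [10, 20, 30]    # canned data standing in for the real lower level

# The top component is verified against the stub's canned data.
print(report(stub_fetch_totals))  # sum=60 count=3
```

Once the real `fetch_totals` is implemented and tested, it simply replaces the stub in the same call.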
U
Terms | Meaning |
---|---|
understandability | The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. |
usability | The capability of the software to be understood, learned, used, and found attractive by the user when used under specified conditions. |
usability testing | Testing to determine the extent to which the software product is understood, easy to learn, easy to operate, and attractive to users under specified conditions. |
use case testing | A black box test design technique in which test cases are designed to execute user scenarios. |
user test | A test whereby real users participate to evaluate the usability of a component or system. |
V
Terms | Meaning |
---|---|
V-model | A framework to describe the software development life cycle activities from specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle. |
validation | Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. |
variable | An element of storage in a computer that is accessible by a software program by referring to it by a name. |
verification | Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. |
volume testing | Testing where the system is subjected to large volumes of data. See also resource-utilization testing. |
W
Terms | Meaning |
---|---|
walkthrough | A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. |
white box testing | The test is based on analyzing the internal structure of a component or system |
Reference link: https://www.softwaretestinghelp.com/software-testing-terms-complete-glossary/