Black-box testing – A testing approach whereby the program is considered as a complete entity and the internal structure is ignored. Test data are derived solely from the application’s specification.
Bottom-up testing – A form of incremental module testing in which terminal modules are tested first, then their calling modules, and so on.
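A minimal sketch of bottom-up testing, assuming a hypothetical terminal module `apply_tax` that has no subordinates of its own. Because its calling modules may not exist yet, a small driver exercises it directly:

```python
# Hypothetical terminal (lowest-level) module: it calls nothing else,
# so under bottom-up testing it is tested first.
def apply_tax(amount, rate):
    return round(amount * (1 + rate), 2)

# Driver: a throwaway harness standing in for the not-yet-written
# calling module, feeding the terminal module sample inputs.
assert apply_tax(100, 0.05) == 105.0
assert apply_tax(0, 0.05) == 0.0
```

Once the terminal module passes, its real calling module replaces the driver and is tested next.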
Boundary-value analysis – A black-box testing methodology that focuses on the boundary areas of a program’s input domain.
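A minimal sketch of boundary-value analysis, assuming a hypothetical validator whose specification accepts integer ages from 0 through 120. Test cases target the edges of the input domain and the values just outside them, rather than arbitrary interior values:

```python
# Hypothetical function under test: spec accepts integers in 0..120.
def is_valid_age(age):
    return 0 <= age <= 120

# Boundary-value test cases: each boundary plus its immediate neighbors.
boundary_cases = {
    -1: False,   # just below the lower boundary
    0: True,     # lower boundary
    1: True,     # just above the lower boundary
    119: True,   # just below the upper boundary
    120: True,   # upper boundary
    121: False,  # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert is_valid_age(value) == expected
```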
Cause-effect graphing – A technique that aids in identifying a set of high-yield test cases by using a simplified digital-logic circuit (combinatorial logic network) graph.
Code inspection – A set of procedures and error-detection techniques used for group code readings that is often used as part of the testing cycle to detect errors. Usually a checklist of common errors is used to compare the code against.
Condition coverage – A white-box criterion in which one writes enough test cases that each condition in a decision takes on all possible outcomes at least once.
Decision/condition coverage – A white-box testing criterion that requires sufficient test cases that each condition in a decision takes on all possible outcomes at least once, each decision takes on all possible outcomes at least once, and each point of entry is invoked at least once.
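A minimal sketch of decision/condition coverage, assuming a hypothetical function with the compound decision `a > 0 and b > 0`. Three test cases drive each individual condition to both outcomes (keeping short-circuit evaluation in mind) and the decision as a whole to both outcomes:

```python
# Hypothetical function under test with one compound decision.
def both_positive(a, b):
    return a > 0 and b > 0

# a > 0: True, True, False -- both outcomes covered.
# b > 0: True, False (evaluated in the first two cases) -- both covered.
# Decision: True, False, False -- both outcomes covered.
assert both_positive(1, 1) is True    # (T, T) -> True
assert both_positive(1, -1) is False  # (T, F) -> False
assert both_positive(-1, 1) is False  # (F, short-circuited) -> False
```

Note that the third case never evaluates `b > 0` at all because `and` short-circuits; that is why the (T, F) case is needed in addition to the (F, *) case.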
Decision coverage – A criterion used in white-box testing in which you write enough test cases that each decision has a true and a false outcome at least once.
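A minimal sketch of decision coverage, assuming a hypothetical discount function with a single decision. Two test cases suffice to drive the decision to a true and a false outcome at least once:

```python
# Hypothetical function under test with a single decision.
def discount(total):
    if total > 100:       # the decision under test
        return total * 0.9
    return total

assert discount(200) == 180.0  # decision outcome: true
assert discount(50) == 50      # decision outcome: false
```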
Desk checking – A combination of code inspection and walk-through techniques that a single person performs by reading the program at his or her desk.
Equivalence partitioning – A black-box methodology in which each test case should invoke as many different input conditions as possible in order to minimize the total number of test cases. You partition the input domain of a program into equivalence classes such that the test result for any input in a class is representative of the test results for all inputs in that class.
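A minimal sketch of equivalence partitioning, assuming a hypothetical validator that accepts names of 1 to 8 alphabetic characters. One representative per equivalence class stands in for every input in that class:

```python
# Hypothetical function under test: spec accepts 1..8 alphabetic chars.
def is_valid_name(s):
    return 1 <= len(s) <= 8 and s.isalpha()

# One representative per equivalence class.
representatives = {
    "Alice": True,       # valid class: 1..8 alphabetic characters
    "": False,           # invalid class: too short
    "Abcdefghi": False,  # invalid class: too long (9 characters)
    "Al1ce": False,      # invalid class: non-alphabetic character
}

for value, expected in representatives.items():
    assert is_valid_name(value) == expected
```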
Exhaustive input testing – A criterion used in black-box testing in which one tries to find all errors in a program by using every possible input condition as a test case.
External specification – A precise description of a program’s behavior from the point of view of the outside world (e.g., its user or a dependent system component).
Facility testing – A form of system testing in which you determine if each facility (a.k.a. function) stated in the objectives is implemented. Do not confuse facility testing with function testing.
Function testing – The process of finding discrepancies between the program and its external specification.
Incremental testing – A form of module testing whereby the module to be tested is combined with already-tested modules.
LDAP – Lightweight Directory Access Protocol.
Multiple-condition coverage – A white-box criterion in which one writes enough test cases that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once.
Non-incremental testing – A form of module testing whereby each module is tested independently.
Performance testing – A system test in which you try to demonstrate that an application does not meet certain criteria, such as response time and throughput rates, under certain workloads or configurations.
Random-input testing – The process of testing a program by randomly selecting a subset of all possible input values.
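A minimal sketch of random-input testing, assuming a hypothetical insertion sort as the implementation under test, with Python’s built-in `sorted()` serving as a trusted oracle for checking the randomly generated cases:

```python
import random

# Hypothetical implementation under test.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

random.seed(0)  # fixed seed so any failure is reproducible
for _ in range(100):
    # Randomly selected inputs from the (huge) space of integer lists.
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 15))]
    assert insertion_sort(xs) == sorted(xs)  # check against the oracle
```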
Security testing – A form of system testing whereby you try to compromise the security mechanisms of an application or system.
Stress testing – A form of system testing whereby you subject the program to heavy loads or stresses. Heavy stresses are considered peak volumes of data or activity over a short time span. Internet applications where large numbers of concurrent users can access the applications typically require stress testing.
System testing – A form of higher-order testing that compares the system or program to the original objectives. To complete system testing, you must have a written set of measurable objectives.
Testing – The process of executing a program, or a discrete program unit, with the intent of finding errors.
Top-down testing – A form of incremental module testing in which the initial module is tested first, then the next subordinate module, and so on.
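A minimal sketch of top-down testing, assuming a hypothetical top-level module whose not-yet-written subordinate module is replaced by a stub that returns canned answers:

```python
# Stub: stands in for a subordinate module that does not exist yet.
# It returns canned data rather than performing real logic.
def fetch_price_stub(item):
    return {"apple": 3, "pear": 5}[item]

# Top-level module under test; it depends on the subordinate module,
# here injected so the stub can be swapped for the real one later.
def order_total(items, fetch_price=fetch_price_stub):
    return sum(fetch_price(item) for item in items)

assert order_total(["apple", "pear"]) == 8
```

When the real subordinate module is written, it replaces the stub and the next level down is tested.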
Usability testing – A form of system testing in which the human-factor elements of an application are tested. Components generally checked include screen layout, screen colors, output formats, input fields, program flow, spellings, and so on.
Volume testing – A type of system testing of the application with large volumes of data to determine whether the application can handle the volume of data specified in its objectives. Volume testing is not the same as stress testing.
Walkthrough – A set of procedures and error-detection techniques for group code readings that is often used as part of the testing cycle to detect errors. Usually a group of people act as a “computer” to process a small set of test cases.
White-box testing – A type of testing in which you examine the internal structure of a program.