Technical Terms Used in the Testing World

Audit: An independent examination of a work product or set of work products to assess compliance with specifications, standards, contractual agreements, or other criteria. 

Acceptance Testing: Testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. 

Alpha Testing: Acceptance testing performed by the customer in a controlled environment at the developer's site. The software is used by the customer in a setting approximating the target environment with the developer observing and recording errors and usage problems. 

Assertion Testing: A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes. 
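
As a minimal sketch, the hypothetical Python function below embeds assertions about the relationships between its inputs and output; the function and its invariants are invented for illustration, not drawn from any standard. The assertions are checked dynamically, each time the code executes.

```python
# A minimal sketch of assertion testing: assertions about the
# relationships between program variables are inserted into the code
# and evaluated as the program runs. The function and its invariants
# are hypothetical, chosen only to illustrate the technique.

def apply_discount(price: float, rate: float) -> float:
    # Assert the assumed relationships among inputs before computing.
    assert price >= 0, "price must be non-negative"
    assert 0.0 <= rate <= 1.0, "rate must be a fraction between 0 and 1"

    discounted = price * (1.0 - rate)

    # Assert the expected relationship between input and output.
    assert 0 <= discounted <= price, "discounted price out of range"
    return discounted

if __name__ == "__main__":
    print(apply_discount(100.0, 0.25))  # 75.0; assertions pass silently
```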

Boundary Value: (1) A data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component. (2) A value which lies at, or just inside or just outside a specified range of valid input and output values. 

Boundary Value Analysis: A selection technique in which test data are chosen to lie along "boundaries" of the input domain [or output range] classes, data structures, procedure parameters, etc. Choices often include maximum, minimum, and trivial values or parameters. 

Branch Coverage: A test coverage criterion which requires that for each decision point each possible branch be executed at least once. 

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner. 

Beta Testing: Acceptance testing performed by the customer in a live application of the software, at one or more end user sites, in an environment not controlled by the developer. 

Boundary Value Testing: A testing technique using input values at, just below, and just above, the defined limits of an input domain; and with input values causing outputs to be at, just below, and just above, the defined limits of an output domain. 
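
The sketch below illustrates this for a hypothetical validator that accepts ages in the range 18 through 65 inclusive; the function and the range are assumptions made for the example. Test values are taken at, just below, and just above each limit.

```python
# A minimal sketch of boundary value testing for a hypothetical
# validator whose specified input domain is 18..65 inclusive.

def is_eligible(age: int) -> bool:
    return 18 <= age <= 65

boundary_cases = [
    (17, False),  # just below the lower limit
    (18, True),   # at the lower limit
    (19, True),   # just above the lower limit
    (64, True),   # just below the upper limit
    (65, True),   # at the upper limit
    (66, False),  # just above the upper limit
]

for age, expected in boundary_cases:
    actual = is_eligible(age)
    assert actual == expected, f"age={age}: expected {expected}, got {actual}"
print("all boundary cases passed")
```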

Branch Testing: A testing technique that satisfies coverage criteria requiring that, for each decision point, each possible branch [outcome] is executed at least once. Contrast with path testing and statement testing. See: branch coverage. 
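
A minimal sketch, using an invented function with a single decision point: its two branches require at least two test cases for full branch coverage.

```python
# A minimal sketch of branch testing: one decision point, two
# branches, so at least two test cases are needed for 100% branch
# coverage. The function is hypothetical.

def classify(n: int) -> str:
    if n < 0:               # decision point
        return "negative"       # branch 1
    return "non-negative"       # branch 2

# One test per branch gives full branch coverage.
assert classify(-5) == "negative"      # exercises branch 1
assert classify(3) == "non-negative"   # exercises branch 2
```

In practice, a coverage tool such as coverage.py (run with its --branch option) can report which branches a test suite actually exercised.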

Compatibility Testing: The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems. 

Cause Effect Graph: A Boolean graph linking causes and effects. The graph is actually a digital-logic circuit (a combinatorial logic network) using a simpler notation than standard electronics notation. 

Cause Effect Graphing: A test data selection technique. The input and output domains are partitioned into classes, and analysis is performed to determine which input classes cause which effects. A minimal set of inputs is chosen which will cover the entire effect set. It is a systematic method of generating test cases that represent combinations of conditions. 
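
As a hedged illustration, the sketch below models a hypothetical login rule with two Boolean causes and two Boolean effects. All four cause combinations are enumerated here for clarity; the technique itself would normally prune these to a minimal covering set.

```python
# A minimal sketch of cause-effect graphing for an invented login
# rule. Causes and effects are Boolean, as in the graph notation.

from itertools import product

# Causes: C1 = username is valid, C2 = password is valid.
# Effects: E1 = grant access, E2 = show error message.
def login_effects(valid_user: bool, valid_password: bool) -> dict:
    grant = valid_user and valid_password   # E1 = C1 AND C2
    return {"grant_access": grant, "show_error": not grant}

# Test cases derived from the cause-effect relationships.
for c1, c2 in product([True, False], repeat=2):
    effects = login_effects(c1, c2)
    assert effects["grant_access"] == (c1 and c2)
    assert effects["show_error"] != effects["grant_access"]
print("all cause-effect combinations verified")
```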

Code Inspection: A manual [formal] testing [error detection] technique in which the programmer reads source code, statement by statement, to a group who ask questions and analyze the program logic, check the code against a checklist of historically common programming errors, and verify its compliance with coding standards. 

Code Review: A meeting at which software code is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. 

Code Walkthrough: A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions. 

Coverage Analysis: Determining and assessing measures associated with the invocation of program structural elements to determine the adequacy of a test run. Coverage analysis is useful when attempting to execute each statement, branch, path, or iterative structure in a program. 

Crash: The sudden and complete failure of a computer system or component. 

Criticality: The degree of impact that a requirement, module, error, fault, failure, or other item has on the development or operation of a system. 

Cyclomatic Complexity: The number of independent paths through a program. For a program containing only binary decisions, the cyclomatic complexity equals the number of decision points plus one; in general it is computed as V(G) = E - N + 2 from a control-flow graph with E edges and N nodes. 
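
As a worked illustration, the hypothetical function below contains two binary decision points, giving a cyclomatic complexity of 2 + 1 = 3; covering its three independent paths therefore needs three test cases.

```python
# An illustrative function for counting cyclomatic complexity.
# Two binary decision points, so V(G) = 2 + 1 = 3: three
# independent paths exist through the code.

def grade(score: int) -> str:
    if score >= 90:        # decision 1
        return "A"
    if score >= 60:        # decision 2
        return "pass"
    return "fail"

# One test case per independent path:
assert grade(95) == "A"     # decision 1 true
assert grade(70) == "pass"  # decision 1 false, decision 2 true
assert grade(40) == "fail"  # both decisions false
```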

Error: A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. 

Error Guessing: A test data selection technique in which the selection criterion is to pick values that seem likely to cause errors. 
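
A minimal sketch: the function under test is hypothetical, and the guessed values (empty input, zero, cancelling negatives, extreme magnitudes) are the kind experience suggests often expose faults.

```python
# A minimal sketch of error guessing: inputs picked from experience
# because they frequently trigger faults. The averaging function is
# invented for the example.

def average(values: list) -> float:
    if not values:                  # a guard that guessing often finds missing
        raise ValueError("empty input")
    return sum(values) / len(values)

guessed_inputs = [
    [],                 # empty list: classic divide-by-zero trigger
    [0.0],              # single zero
    [-1.0, 1.0],        # values that cancel to zero
    [1e308, 1e308],     # summation overflows to infinity
]

for values in guessed_inputs:
    try:
        print(values, "->", average(values))
    except ValueError as exc:
        print(values, "-> raised", exc)
```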

Error Seeding: The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program. Contrast with mutation analysis. 
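
One common estimator, often attributed to Harlan Mills' fault-seeding model, is sketched below with made-up numbers: if testing recovers s of S seeded faults along with n indigenous (real) faults, the indigenous total is estimated as n * S / s.

```python
# A minimal sketch of the fault-seeding estimate often attributed to
# Mills. All numbers below are invented for illustration.

def estimate_indigenous_faults(seeded: int, seeded_found: int,
                               indigenous_found: int) -> float:
    if seeded_found == 0:
        raise ValueError("no seeded faults found; estimate undefined")
    return indigenous_found * seeded / seeded_found

# Example: 10 faults seeded, 8 recovered, 20 real faults found.
total = estimate_indigenous_faults(10, 8, 20)   # estimate: 25 real faults
remaining = total - 20                          # estimate: 5 still latent
print(f"estimated total: {total:.0f}, remaining: {remaining:.0f}")
```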

Exception: An event that causes suspension of normal program execution. Types include addressing exception, data exception, operation exception, overflow exception, protection exception, and underflow exception. 

Exhaustive Testing: Executing the program with all possible combinations of values for program variables. This type of testing is feasible only for small, simple programs. 
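
A minimal sketch, feasible here only because the invented function's input space is tiny: two Boolean flags give just four combinations.

```python
# A minimal sketch of exhaustive testing over a hypothetical
# two-input Boolean function: all 2**2 = 4 combinations are tried.

from itertools import product

def xor(a: bool, b: bool) -> bool:
    return a != b

for a, b in product([False, True], repeat=2):
    expected = (a or b) and not (a and b)
    assert xor(a, b) == expected, f"failed for a={a}, b={b}"
print("exhaustively verified all 4 combinations")
```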

Failure: The inability of a system or component to perform its required functions within specified performance requirements. 

Fault: An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner. 

Functional Testing: Testing that ignores the internal mechanism or structure of a system or component and focuses on the outputs generated in response to selected inputs and execution conditions. (2) Testing conducted to evaluate the compliance of a system or component with specified functional requirements and corresponding predicted results. 
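
As a hedged sketch, the black-box test below checks only the specified input/output behaviour of a hypothetical rounding routine; the expected values come from the assumed specification, and the test never inspects the routine's internal structure.

```python
# A minimal sketch of functional (black-box) testing: selected inputs
# are checked against outputs predicted by a hypothetical spec.

def round_to_nearest_ten(n: int) -> int:
    return int(round(n / 10.0)) * 10

# Expected results come from the (assumed) specification alone.
spec_cases = {4: 0, 6: 10, 14: 10, 23: 20, 99: 100}
for given, expected in spec_cases.items():
    actual = round_to_nearest_ten(given)
    assert actual == expected, f"{given}: expected {expected}, got {actual}"
print("all specified cases passed")
```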

Integration Testing: An orderly progression of testing in which software elements, hardware elements, or both are combined and tested, to evaluate their interactions, until the entire system has been integrated. 
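
A minimal sketch, assuming two invented modules: a parser and a calculator, each presumably unit-tested in isolation, are combined and exercised through their shared interface.

```python
# A minimal sketch of an integration test: data flows from one
# hypothetical module (the parser) into another (the calculator).

def parse_expression(text: str):
    left, op, right = text.split()
    return int(left), op, int(right)

def evaluate(left: int, op: str, right: int) -> int:
    return left + right if op == "+" else left - right

def compute(text: str) -> int:
    # The integration point: parser output feeds calculator input.
    return evaluate(*parse_expression(text))

assert compute("2 + 3") == 5
assert compute("7 - 4") == 3
```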

Usability Testing: Tests designed to evaluate the machine/user interface. 

Validation: Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes. 

Validation, Verification and Testing: Used as a unit to describe a process of review, analysis, and testing applied throughout the software life cycle to discover errors, determine functionality, and ensure the production of quality software. 

Volume Testing: Testing designed to challenge a system's ability to manage the maximum amount of data over a period of time. This type of testing also evaluates a system's ability to handle overload situations in an orderly fashion.

