Glossary: QA & Software Testing





Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, enabling the end user to decide whether or not to accept the system.
Alpha Testing: Testing of a software product or system conducted at the developer’s site by the end user.
Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well.
Agile Testing: Testing practice that emphasizes a test-first design paradigm.
Automated Testing: The part of software testing that is carried out with software tools and does not require operator input, analysis, or evaluation.
Beta Testing: Testing conducted at one or more end user sites by the end user of a delivered software product or system.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
Black Box Testing: Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing. Black Box testing indicates whether or not a program meets required specifications by spotting faults of omission -- places where the specification is not fulfilled.
Bottom-up Testing: An integration testing technique that tests the low-level components first using test drivers for those components that have not yet been developed to call the low-level components for test.
Boundary Testing: Testing that focuses on the boundary or limit conditions of the software being tested. Stress testing can also be considered a form of boundary testing.
Boundary Value Analysis: A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
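
For example, a minimal sketch in Python (the accept_age function and its 18-65 valid range are hypothetical) showing how boundary values are selected:

```python
# Hypothetical function under test: valid ages are 18..65 inclusive.
def accept_age(age: int) -> bool:
    return 18 <= age <= 65

# Boundary value analysis picks values at and around the limits,
# plus a typical value and an error value.
test_values = {
    17: False,   # just outside the lower boundary
    18: True,    # lower boundary (minimum)
    19: True,    # just inside the lower boundary
    40: True,    # typical value
    64: True,    # just inside the upper boundary
    65: True,    # upper boundary (maximum)
    66: False,   # just outside the upper boundary
    -1: False,   # error value
}

for age, expected in test_values.items():
    assert accept_age(age) == expected, f"boundary failure at age={age}"
print("all boundary checks passed")
```
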
Branch Coverage Testing: A test method satisfying coverage criteria that requires each decision point at each possible branch to be executed at least once.
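
A minimal sketch (the classify function is hypothetical): the two calls below force both outcomes of the single decision point, which is what branch coverage requires.

```python
def classify(n: int) -> str:
    if n < 0:              # decision point with two branches
        return "negative"
    return "non-negative"

# Branch coverage: execute each outcome of the decision at least once.
assert classify(-5) == "negative"      # takes the True branch
assert classify(3) == "non-negative"   # takes the False branch
```
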
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail. It is typically used when there is not enough time to execute all the test cases.
Bug: A design or implementation flaw that will result in symptoms exhibited by some module when the module is subjected to an appropriate test.
Code Complete: The phase of development in which all functionality described in the functional specifications has been implemented and only bug fixes remain. A code-complete module may still be far from release, as it may contain many bugs.
Code Coverage: An analysis method that determines which parts of the software/code have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
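
As an illustration only, a sketch using the third-party coverage.py package programmatically (assuming it is installed; the absolute function is hypothetical):

```python
# Sketch: measuring code coverage with coverage.py (pip install coverage).
import coverage

def absolute(n):
    if n < 0:
        return -n
    return n

cov = coverage.Coverage()
cov.start()

# Run the tests whose coverage we want to measure.
assert absolute(-3) == 3   # exercises only the n < 0 branch

cov.stop()
cov.save()
cov.report(show_missing=True)   # lines never executed are listed as "missing"
```
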
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Concurrency Testing:
Multi-user testing geared toward determining the effects of accessing the same application code, module, or database records. Identifies and measures the level of locking, deadlocking, and use of single-threaded code and locking semaphores. This is one area where the cause of many bugs previously considered random can be identified.
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Debugging:
The process of finding and removing the causes of software failures. Tools used for debugging are called debuggers.
End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Exhaustive Testing:
Testing which covers all combinations of input values and preconditions for an element of the software under test. This is practically infeasible.
Failure:
The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.
Fault:
A manifestation of an error in software. A fault, if encountered, may cause a failure.
Fault-based Testing:
Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-specified faults, typically, frequently occurring faults.
Function Points:
A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present, 1 = minor influence, 5 = strong influence.
Functional Testing:
Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black box testing.
Gorilla Testing:
Testing one particular module or functionality heavily.
Gray Box Testing:
A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Heuristics Testing:
Another term for failure-directed testing.
Incremental Analysis:
Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.
Infeasible Path:
A program statement sequence that can never be executed, i.e., unreachable code.
Inspection:
A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems. It is a generic term covering code inspections and similar reviews.
Integration Testing:
An orderly progression of testing in which software components or hardware components, or both, are combined and tested until the entire system has been integrated.
Intrusive Testing:
Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real environment. Usually involves additional code embedded in the software being tested or additional processes running concurrently with software being tested on the same platform.
Installation Testing:
Confirms that the application under test installs, launches, and uninstalls correctly on the supported hardware and software configurations.
IV&V:
Independent verification and validation is the verification and validation of a software product by an organization that is both technically and managerially separate from the organization responsible for developing the product.
Life Cycle:
The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a retirement phase.
Localization Testing:
Testing that a product has been correctly adapted (localized) for a specific locale or region.
Loop Testing:
A white box testing technique that exercises program loops.
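
A minimal sketch (the total function is hypothetical): loop testing typically exercises a loop zero times, once, a typical number of times, and many times.

```python
def total(values):
    # simple loop under test
    result = 0
    for v in values:
        result += v
    return result

# Exercise the loop with zero, one, typical, and large numbers of iterations.
assert total([]) == 0                      # zero passes through the loop
assert total([7]) == 7                     # exactly one pass
assert total([1, 2, 3, 4]) == 10           # typical number of passes
assert total(list(range(1000))) == 499500  # large number of passes
```
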
Manual Testing:
That part of software testing that requires operator input, analysis, or evaluation.
Monkey Testing:
Testing a system or application on the fly, i.e. running just a few tests here and there to ensure the system or application does not crash. It is a form of ad hoc testing.
Mutation Testing:
A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.
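
A hand-rolled sketch of the idea (real mutation tools automate the generation of variants): a mutant is created by making one small change to the program, and a thorough test set should "kill" it by failing against it.

```python
# Original program under test.
def is_adult(age: int) -> bool:
    return age >= 18

# A mutant: the same program with one small change (>= became >).
def is_adult_mutant(age: int) -> bool:
    return age > 18

def test_set_passes(fn) -> bool:
    """Return True if fn passes every test in the set."""
    return fn(18) is True and fn(17) is False and fn(30) is True

assert test_set_passes(is_adult)             # the original passes
assert not test_set_passes(is_adult_mutant)  # the mutant is killed, so the
                                             # test set discriminates this variant
```
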
Non-intrusive Testing:
Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another platform.
Negative Testing:
Testing aimed at showing software does not work. Also known as "test to fail".
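
A minimal sketch using pytest's raises helper (the parse_age function is hypothetical): each test passes only if the invalid input is rejected.

```python
import pytest

def parse_age(text: str) -> int:
    # Hypothetical function under test: rejects non-numeric and negative input.
    value = int(text)          # raises ValueError for non-numeric text
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

def test_rejects_non_numeric_input():
    with pytest.raises(ValueError):
        parse_age("abc")       # negative test: failure handling is expected

def test_rejects_negative_age():
    with pytest.raises(ValueError):
        parse_age("-3")
```
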
Path Analysis:
Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any path.
Path Coverage Testing:
A test method satisfying coverage criteria that each logical path through the program is tested. Paths through the program often are grouped into a finite set of classes; one path from each class is tested.
Peer Reviews:
A methodical examination of software work products by the producer’s peers to identify defects and areas where changes are needed.
Path Testing:
Testing wherein all paths in the program source code are tested at least once.
Performance Testing:
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing:
Testing aimed at showing software works. Also known as "test to pass".
Proof Checker:
A program that checks formal proofs of program properties for logical correctness.
Qualification Testing:
Formal testing, usually conducted by the developer for the end user, to demonstrate that the software meets its specified requirements.
Random Testing:
An essentially black-box testing approach in which a program is tested by randomly choosing a subset of all possible input values. The distribution may be arbitrary or may attempt to accurately reflect the distribution of inputs in the application environment.
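
A minimal sketch (the my_sort function is hypothetical): inputs are drawn at random and each result is checked against a trusted oracle, here Python's built-in sorted.

```python
import random

def my_sort(values):
    # Hypothetical implementation under test.
    return sorted(values)

random.seed(42)  # fixed seed so any failure is reproducible
for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert my_sort(data) == sorted(data), f"mismatch for input {data}"
print("1000 random test cases passed")
```
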
Regression Testing:
Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements.
Ramp Testing:
Continuously raising an input signal until the system breaks down. A form of stress testing.
Recovery Testing:
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Reliability:
The probability of failure-free operation for a specified period.
Run Chart:
A graph of data points in chronological order used to illustrate trends or cycles of the characteristic being measured for the purpose of suggesting an assignable cause rather than random variation.
Statement Coverage Testing:
A test method satisfying coverage criteria that requires each statement be executed at least once.
Static Testing:
Verification performed without executing the system’s code. Also called static analysis.
Statistical Process Control:
The use of statistical techniques and tools to measure an ongoing process for change or stability.
Structural Coverage:
This requires that each pair of module invocations be executed at least once.
Structural Testing:
A testing method where the test data is derived solely from the program structure.
Sanity Testing:
Brief test of major functional elements of a piece of software to determine if it is basically operational.
Scalability Testing:
Performance testing focused on ensuring the application under test gracefully handles increases in workload.
Security Testing:
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke Testing:
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Soak Testing:
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Software Requirements Specification:
A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
Software Testing:
A set of activities conducted with the intent of finding errors in software.
Static Analysis:
Analysis of a program carried out without executing the program.
Static Analyzer:
A tool that carries out static analysis.
Storage Testing:
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing:
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
Structural Testing:
Testing based on an analysis of internal workings and structure of a piece of software.
System Testing:
Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
Test Bed:
1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test of a logically or physically separate component. 2) A suite of test programs used in conducting the test of a component or system.
Test Development:
The development of anything required to conduct testing. This may include test requirements (objectives), strategies, processes, plans, software, procedures, cases, documentation, etc.
Test Harness:
A software tool that enables the testing of software components by linking test capabilities to perform specific tests: accepting program inputs, simulating missing components, comparing actual outputs with expected outputs to determine correctness, and reporting discrepancies.
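
A toy sketch of such a harness (all names are illustrative): it feeds inputs to the component under test, compares actual with expected outputs, and reports discrepancies.

```python
def run_harness(component, cases):
    """Run each (args, expected) pair through the component and report discrepancies."""
    failures = []
    for args, expected in cases:
        actual = component(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    for args, expected, actual in failures:
        print(f"FAIL {component.__name__}{args}: expected {expected}, got {actual}")
    print(f"{len(cases) - len(failures)}/{len(cases)} passed")
    return not failures

# Hypothetical component under test.
def add(a, b):
    return a + b

run_harness(add, [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)])
```
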
Testability:
The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Testing:
The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829). The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Test Case:
Test Case is a commonly used term for a specific test and is usually the smallest unit of testing. A test case consists of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, and test environment. More formally, it is a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test Driven Development:
Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a large number of tests, often producing roughly as many lines of test code as production code.
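
A minimal sketch of the red/green rhythm (the fizzbuzz function is hypothetical): the test is written first and fails because the production code does not yet exist, then just enough code is written to make it pass.

```python
# Step 1 (red): write the test first; running it fails while fizzbuzz() is missing.
def test_fizzbuzz_of_multiples_of_three():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(9) == "Fizz"

# Step 2 (green): write just enough production code to make the test pass.
def fizzbuzz(n: int) -> str:
    if n % 3 == 0:
        return "Fizz"
    return str(n)

# Step 3: run the test again (here by calling it directly), then refactor.
test_fizzbuzz_of_multiples_of_three()
```
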
Test Driver:
A program or test tool used to execute tests. Also known as a Test Harness.
Test Environment:
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.
Test First Design:
Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
Test Plan:
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.
Test Procedure:
A document providing detailed instructions for the execution of one or more test cases.
Test Script:
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
Test Specification:
A document specifying the test approach for a software feature or combination of features and the inputs, predicted results, and execution conditions for the associated tests.
Test Suite:
A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
Test Tools:
Computer programs used in the testing of a system, a component of the system, or its documentation.
Thread Testing:
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
Top Down Testing:
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
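
A minimal sketch with hypothetical names: the top-level generate_report is tested first while its lower-level dependency is simulated by a stub.

```python
# Lower-level component not yet implemented: replaced by a stub with canned data.
def fetch_sales_stub(region):
    return [100, 250, 50]   # stands in for the real database call

# Top-level component under test.
def generate_report(region, fetch_sales=fetch_sales_stub):
    sales = fetch_sales(region)
    return {"region": region, "total": sum(sales)}

# Test the top of the hierarchy first; the stub simulates the missing component.
assert generate_report("EMEA") == {"region": "EMEA", "total": 400}
```
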
Total Quality Management:
A company commitment to develop a process that achieves high quality product and customer satisfaction.
Traceability Matrix:
A document showing the relationship between Test Requirements and Test Cases.
Test Objective:
An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation.
Unit Testing:
The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification or its implemented structure matches the intended design structure.
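
A minimal sketch using Python's built-in unittest module (the leap_year function is hypothetical):

```python
import unittest

def leap_year(year: int) -> bool:
    # The smallest independently testable unit in this sketch.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTests(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_divisible_by_four_hundred(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```
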
Usability Testing:
Testing the ease with which users can learn and use a product.
Use Case:
The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.
V- Diagram (model):
A diagram that visualizes the order of testing activities and their corresponding phases of development.
Verification:
The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.
Volume Testing:
Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
Validation:
The process of evaluating software to determine compliance with specified requirements.
Walkthrough:
Usually, a step-by-step simulation of the execution of a procedure, as when walking through code, line by line, with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.
White-box Testing:
Testing approaches that examine the program structure and derive test data from the program logic. This is also known as clear box testing, glass-box or open-box testing. White box testing determines if program-code structure and logic is faulty. The test is accurate only if the tester knows what the program is supposed to do. He or she can then see if the program diverges from its intended goal. White box testing does not account for errors caused by omission, and all visible code must also be readable.
Workflow Testing:
Scripted end-to-end testing which duplicates specific workflows, which are expected to be utilized by the end-user.

