7) Metrics Used In Testing
Product Quality Measures:
1. Customer satisfaction index
This index is surveyed before and after product delivery (and on an ongoing, periodic basis, using standard questionnaires). The following are analyzed:
1. Number of system enhancement requests per year
2. Number of maintenance fix requests per year
3. User friendliness: call volume to customer service hotline
4. User friendliness: training time per new user
5. Number of product recalls or fix releases (software vendors)
6. Number of production re-runs (in-house information systems groups)
2. Delivered defect quantities
Delivered defect quantities are normalized per function point (or per LOC) at product delivery (first 3 months or first year of operation) or on an ongoing basis (per year of operation), by level of severity and by category or cause, e.g. requirements defect, design defect, code defect, documentation/online-help defect, defect introduced by fixes, etc.
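As a concrete illustration, the following minimal Python sketch normalizes delivered defect counts per function point by severity; all counts and the function-point size are invented example values, not data from any real project.

delivered_defects = {"critical": 3, "major": 12, "minor": 40}   # illustrative first-year counts
function_points = 850                                           # illustrative system size in FP

for severity, count in delivered_defects.items():
    print(f"{severity}: {count / function_points:.4f} defects per FP")

total = sum(delivered_defects.values())
print(f"overall: {total / function_points:.4f} defects per FP")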
3. Responsiveness (turnaround time) to users
1. Turnaround time for defect fixes, by level of severity
2. Time for minor vs. major enhancements; actual vs. planned elapsed time
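For example, the turnaround metrics above can be computed as in this minimal Python sketch; the day counts and the planned/actual figures are invented for illustration.

from statistics import mean

# days from defect report to verified fix, grouped by severity (illustrative)
fix_turnaround_days = {
    "critical": [1, 2, 1],
    "major": [5, 7, 4],
    "minor": [20, 15, 30],
}
for severity, days in fix_turnaround_days.items():
    print(f"{severity}: mean turnaround {mean(days):.1f} days")

# actual vs. planned elapsed time for one enhancement (illustrative)
planned_days, actual_days = 30, 42
print(f"actual/planned elapsed time: {actual_days / planned_days:.2f}")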
4. Product volatility
Ratio of maintenance fixes (to repair the system and bring it into compliance with specifications) to enhancement requests (requests by users to enhance or change functionality).
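A minimal sketch of this ratio, with invented counts:

maintenance_fixes = 24        # repairs to bring the system into compliance with spec
enhancement_requests = 60     # user requests to change or extend functionality
print(f"volatility ratio: {maintenance_fixes / enhancement_requests:.2f}")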
5. Defect ratios
1. Defects found after product delivery per function point
2. Defects found after product delivery per LOC
3. Ratio of pre-delivery defects to annual post-delivery defects
4. Defects per function point of system modifications
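The following minimal Python sketch computes the first three ratios above; every input is an invented example value.

post_delivery_defects = 55          # found by users after delivery
annual_post_delivery_defects = 40   # found in one year of operation
pre_delivery_defects = 320          # found internally before release
function_points = 850
kloc = 72.0                         # thousand lines of code

print(f"defects per FP:   {post_delivery_defects / function_points:.4f}")
print(f"defects per KLOC: {post_delivery_defects / kloc:.2f}")
print(f"pre- to post-delivery ratio: {pre_delivery_defects / annual_post_delivery_defects:.1f} : 1")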
6. Defect removal efficiency
1. Number of post-release defects (found by clients in field operation), categorized by level of severity
2. Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects
3. "All defects" includes defects found internally plus those found externally (by customers) in the first year after product delivery
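Putting the definition together, a minimal sketch of defect removal efficiency (DRE) with invented counts:

internal_defects = 320   # found via inspections and testing before release
external_defects = 55    # found by customers in the first year after delivery

# "all defects" = internal + external, per the definition above
dre = internal_defects / (internal_defects + external_defects) * 100
print(f"defect removal efficiency: {dre:.1f}%")   # 85.3% removed before release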
7. Complexity of delivered product
1. McCabe's cyclomatic complexity counts across the system
2. Halstead’s measure
3. Card's design complexity measures
4. Predicted defects and maintenance costs, based on complexity measures
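As an illustration of the first measure, McCabe's cyclomatic complexity can be computed from a control-flow graph as V(G) = E - N + 2P (edges, nodes, connected components); the tiny graph below, a single if/else, is an invented example.

# edges of a control-flow graph for one small function (illustrative)
edges = [("start", "if"), ("if", "then"), ("if", "else"),
         ("then", "end"), ("else", "end")]
nodes = {n for edge in edges for n in edge}
p = 1   # connected components: a single function

v_g = len(edges) - len(nodes) + 2 * p
print(f"cyclomatic complexity V(G) = {v_g}")   # 5 - 5 + 2 = 2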
8. Test coverage
1. Breadth of functional coverage
2. Percentage of paths, branches or conditions that were actually tested
3. Percentage by criticality level: perceived level of risk of paths
4. Ratio of the number of detected faults to the number of predicted faults
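A minimal sketch of two of these coverage figures, using invented counts (the fault prediction would come from, e.g., a complexity-based defect model as mentioned above):

branches_total = 480
branches_exercised = 312
print(f"branch coverage: {branches_exercised / branches_total:.1%}")

predicted_faults = 90    # from a defect-prediction model (assumed)
detected_faults = 72
print(f"detected/predicted fault ratio: {detected_faults / predicted_faults:.2f}")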
9. Cost of defects
1. Business losses per defect that occurs during operation
2. Business interruption costs; costs of work-around
3. Lost sales and lost goodwill
4. Litigation costs resulting from defects
5. Annual maintenance cost (per function point)
6. Annual operating cost (per function point)
7. Measurable damage to your boss's career
10. Costs of quality activities
1. Costs of reviews, inspections and preventive measures
2. Costs of test planning and preparation
3. Costs of test execution, defect tracking, version and change control
4. Costs of diagnostics, debugging and fixing
5. Costs of tools and tool support
6. Costs of test case library maintenance
7. Costs of testing & QA education associated with the product
8. Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)
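These buckets can be rolled up into a single cost-of-quality figure, as in this minimal sketch with invented amounts:

costs = {
    "reviews_and_prevention": 40_000,
    "test_planning": 25_000,
    "test_execution_and_tracking": 60_000,
    "debugging_and_fixing": 55_000,
    "tools_and_support": 15_000,
}
total_project_cost = 500_000   # illustrative

cost_of_quality = sum(costs.values())
print(f"cost of quality: {cost_of_quality:,} "
      f"({cost_of_quality / total_project_cost:.1%} of total project cost)")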
11. Re-work
• Re-work effort (hours, as a percentage of the original coding hours)
• Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
• Re-worked software components (as a percentage of the total delivered components)
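A minimal sketch of the three re-work percentages, with invented figures:

rework_hours, original_coding_hours = 300, 2_000
reworked_loc, delivered_loc = 4_500, 72_000
reworked_components, delivered_components = 9, 120

print(f"re-work effort:       {rework_hours / original_coding_hours:.1%}")
print(f"re-worked LOC:        {reworked_loc / delivered_loc:.1%}")
print(f"re-worked components: {reworked_components / delivered_components:.1%}")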
12. Reliability
• Availability (percentage of time a system is available, versus the time the system is needed to be available)
• Mean time between failures (MTBF)
• Mean time to repair (MTTR)
• Reliability ratio (MTBF / MTTR)
• Number of product recalls or fix releases
• Number of production re-runs as a ratio of production runs
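The reliability figures combine as in this minimal sketch (invented numbers; availability is computed here as MTBF / (MTBF + MTTR), one common formulation of "time available vs. time needed"):

mtbf_hours = 400.0   # mean time between failures
mttr_hours = 2.5     # mean time to repair

availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(f"availability:      {availability:.3%}")
print(f"reliability ratio: {mtbf_hours / mttr_hours:.0f}")   # MTBF / MTTR

production_runs, reruns = 1_000, 12
print(f"re-run ratio: {reruns / production_runs:.1%}")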
Metrics for Evaluating Application System Testing:
Metric = Formula
Test Coverage = Number of units (KLOC/FP) tested / total size of the system (KLOC = thousand lines of code; FP = function points)
Number of tests per unit size = Number of test cases / system size (per KLOC or FP)
Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria
Defects per size = Defects detected / system size
Test cost (in %) = Cost of testing / total cost *100
Cost to locate defect = Cost of testing / the number of defects located
Achieving Budget = Actual cost of testing / Budgeted cost of testing
Defects detected in testing = Defects detected in testing / total system defects
Defects detected in production = Defects detected in production/system size
Quality of Testing = Number of defects found during testing / (Number of defects found during testing + Number of acceptance defects found after delivery) * 100
Effectiveness of testing to business = Loss due to problems / total resources processed by the system.
System complaints = Number of third party complaints / number of transactions processed
Scale of Ten = Assessment of testing by assigning a rating on a scale of 1 to 10
Source Code Analysis = Number of source code statements changed / total number of tests.
Effort Productivity:
Test Planning Productivity = Number of test cases designed / Actual effort for design and documentation
Test Execution Productivity = Number of test cycles executed / Actual effort for testing
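To make the formulas concrete, this minimal Python sketch evaluates a few of them; all inputs are invented example values.

defects_in_testing = 320
acceptance_defects_after_delivery = 55
cost_of_testing, total_cost = 140_000, 500_000
test_cases_designed, design_effort_hours = 600, 400

quality_of_testing = defects_in_testing / (
    defects_in_testing + acceptance_defects_after_delivery) * 100
print(f"quality of testing: {quality_of_testing:.1f}%")

print(f"test cost: {cost_of_testing / total_cost * 100:.1f}% of total cost")
print(f"cost to locate a defect: {cost_of_testing / defects_in_testing:,.0f}")
print(f"test planning productivity: {test_cases_designed / design_effort_hours:.2f} test cases per effort-hour")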