Effective Software Testing

Effective Software Testing
A 1994 study in the US revealed that only about "9% of software projects were successful".

It is an understatement to say that unreliable software can severely hurt businesses; depending on the criticality of the application, it can even endanger lives. Even the simplest application, if poorly written, can degrade the performance of your environment, such as the servers and the network, causing an unwanted mess.
To ensure software application reliability and project success, Software Testing plays a crucial role. Everything can and should be tested:
• Test if all defined requirements are met
• Test the performance of the application
• Test each component
• Test the components integrated with each other
• Test the application end to end
• Test the application in various environments
• Test all the application paths
• Test all the scenarios and then test some more
What is Effective Software Testing?
How do we measure ‘Effectiveness’ of Software Testing?
The effectiveness of Testing can be measured if the goal and purpose of the testing effort is clearly defined. Some of the typical Testing goals are:
1. Testing in each phase of the development cycle to ensure that bugs (defects) are eliminated as early as possible
2. Testing to ensure no bugs creep through into the final product
3. Testing to ensure the reliability of the software
4. Above all, testing to ensure that user expectations are met
The effectiveness of testing can be measured by the degree of success in achieving the above goals.
Steps to Effective Software Testing:
Several factors influence the effectiveness of Software Testing Effort, which ultimately determine the success of the Project.
A) Coverage:
The testing process and the test cases should cover:
1. All the scenarios that can occur when using the software application
2. Each business requirement that was defined for the project
3. Every line of code written for the application, at the appropriate levels of testing
There are various levels of testing which focus on different aspects of the software application. The often-quoted V model best explains this:

The various levels of testing illustrated above are:
1. Unit Testing
2. Integration Testing
3. System Testing
4. User Acceptance Testing
The goal of each testing level is slightly different thereby ensuring the overall project reliability.
Each Level of testing should provide adequate test coverage.
Unit testing should ensure each and every line of code is tested
Integration Testing should ensure the components can be integrated and all the interfaces of each component are working correctly
System Testing should cover all the “paths”/scenarios possible when using the system
The system testing is done in an environment that is similar to the production environment i.e. the environment where the product will be finally deployed. There are various types of System Testing possible which test the various aspects of the software application.
B) Test Planning and Process:
To ensure effective testing, proper test planning is important.
An effective testing process comprises the following steps:
1. Test Strategy and Planning
2. Review the Test Strategy to ensure it is aligned with the Project Goals
3. Design/Write Test Cases
4. Review Test Cases to ensure proper Test Coverage
5. Execute Test Cases
6. Capture Test Results
7. Track Defects
8. Capture Relevant Metrics
9. Analyze
Having followed the above steps for the various levels of testing, the product is rolled out. It is not uncommon to see various bugs/defects even after the product is released to production. An effective Testing Strategy and Process helps to minimize or eliminate these defects. The extent to which it eliminates these post-production defects (design defects, coding defects, etc.) is a good measure of the effectiveness of the Testing Strategy and Process. As the saying goes, 'the proof of the pudding is in the eating'.
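The post-production measure described above can be quantified. One standard industry metric for it, added here purely as an illustration (it is not named in the text), is defect removal efficiency:

```python
def defect_removal_efficiency(found_before_release, found_in_production):
    """Fraction of all known defects that were caught before release."""
    total = found_before_release + found_in_production
    return found_before_release / total if total else 1.0

# e.g. 95 defects caught during testing, 5 escaped to production
print(round(defect_removal_efficiency(95, 5), 2))  # 0.95
```

A value close to 1.0 indicates that the testing strategy caught nearly all defects before release.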

Unit Testing: Why? What? & How?

There are various levels of testing:
• Unit Testing
• Integration Testing
• System Testing
There are various types of testing based upon the intent of testing, such as:
• Acceptance Testing
• Performance Testing
• Load Testing
• Regression Testing
Based on the testing techniques used, testing can be classified as:
• Black box Testing
• White box Testing
How does Unit Testing fit into the Software Development Life Cycle?
This is the first and most important level of testing. As soon as the programmer develops a unit of code, the unit is tested for various scenarios. It is much more economical to find and eliminate bugs early, while the application is being built; as the software project progresses, it becomes more and more costly to find and fix them. Hence Unit Testing is the most important of all the testing levels.
In most cases it is the developer’s responsibility to deliver Unit Tested Code.
Unit Testing Tasks and Steps:
Step 1: Create a Test Plan
Step 2: Create Test Cases and Test Data
Step 3: If applicable create scripts to run test cases
Step 4: Once the code is ready execute the test cases
Step 5: Fix the bugs, if any, and retest the code
Step 6: Repeat the test cycle until the “unit” is free of all bugs
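As a sketch of steps 2 through 6, here is how a unit test cycle might look using Python's standard unittest framework. The function `validate_name` is a hypothetical unit under test, invented for this example:

```python
import unittest

def validate_name(name):
    """Hypothetical unit under test: accept names of 1 to 10 characters."""
    return isinstance(name, str) and 1 <= len(name) <= 10

class TestValidateName(unittest.TestCase):
    """Step 2: test cases covering normal, boundary, and invalid input."""

    def test_accepts_ten_characters(self):
        self.assertTrue(validate_name("a" * 10))

    def test_rejects_empty_name(self):
        self.assertFalse(validate_name(""))

    def test_rejects_eleven_characters(self):
        self.assertFalse(validate_name("a" * 11))

# Steps 4-6: execute the cases; rerun after each fix until all pass.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestValidateName)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```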
What is a Unit Test Plan?
This document describes the Test Plan, in other words, how the tests will be carried out.
It will typically include the list of things to be tested, roles and responsibilities, prerequisites to begin testing, the test environment, assumptions, what to do after a test is successfully carried out, what to do if a test fails, a glossary, and so on.
What is a Test Case?
Simply put, a Test Case describes exactly how the test should be carried out.
For example the test case may describe a test as follows:
Step 1: Type 10 characters in the Name Field
Step 2: Click on Submit
Test Cases clubbed together form a Test Suite
Test Case Sample
Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks
Additionally the following information may also be captured:
a) Unit Name and Version Being tested
b) Tested By
c) Date
d) Test Iteration (One or more iterations of unit testing may be performed)
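As an illustration only (the field names simply follow the sample columns above; nothing here is prescribed by the text), each row of such a table can be captured as a small record in code:

```python
from dataclasses import dataclass

@dataclass
class UnitTestCase:
    """One row of the sample test case table."""
    case_id: str
    description: str
    input_data: str
    expected_result: str
    actual_result: str = ""
    remarks: str = ""

    @property
    def passed(self):
        # The Pass/Fail column derives from comparing actual vs expected.
        return self.actual_result == self.expected_result

tc = UnitTestCase("UT-001", "Name field accepts 10 characters",
                  "a" * 10, "accepted", "accepted")
print(tc.passed)  # True
```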
Steps to Effective Unit Testing:
1) Documentation: Early on, document all the Test Cases needed to test your code. Often this task is not given due importance. Document the Test Cases, the actual results when executing them, and the response time of the code for each test case. There are several important advantages if the test cases and their actual execution are well documented.
a. Documenting Test Cases prevents oversight.
b. Documentation clearly indicates the quality of test cases
c. If the code needs to be retested we can be sure that we did not miss anything
d. It provides a level of transparency of what was really tested during unit testing. This is one of the most important aspects.
e. It helps in knowledge transfer in case of employee attrition
f. Sometimes Unit Test Cases can be used to develop test cases for other levels of testing
2) What should be tested when Unit Testing: A lot depends on the type of program or unit that is being created. It could be a screen or a component or a web service. Broadly the following aspects should be considered:
a. For a UI screen include test cases to verify all the screen elements that need to appear on the screens
b. For a UI screen include Test cases to verify the spelling/font/size of all the “labels” or text that appears on the screen
c. Create Test Cases such that every line of code in the unit is tested at least once in a test cycle
d. Create Test Cases such that every condition in case of “conditional statements” is tested once
e. Create Test Cases to test the minimum/maximum range of data that can be entered. For example what is the maximum “amount” that can be entered or the max length of string that can be entered or passed in as a parameter
f. Create Test Cases to verify how various errors are handled
g. Create Test Cases to verify if all the validations are being performed
3) Automate where Necessary: Time pressure to get the job done may result in developers cutting corners in unit testing. It sometimes helps to write scripts that automate a part of unit testing. This helps ensure that the necessary tests were done and may save time required to perform the tests.
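Point 2(e) above, combined with the automation advice in point 3, can be sketched as a small self-checking script. The function `parse_amount` and its limits are hypothetical, invented for this example:

```python
def parse_amount(text, max_amount=10_000):
    """Hypothetical unit: parse a monetary amount with an upper limit."""
    value = float(text)
    if value < 0:
        raise ValueError("amount cannot be negative")
    if value > max_amount:
        raise ValueError("amount exceeds maximum")
    return value

# Boundary-value cases: the minimum, the maximum, and just past each edge.
assert parse_amount("0") == 0.0
assert parse_amount("10000") == 10000.0
for bad in ("-0.01", "10000.01"):
    try:
        parse_amount(bad)
        raise AssertionError("expected ValueError for " + bad)
    except ValueError:
        pass
print("all boundary cases passed")
```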

Integration Testing: Why? What? & How?

Each level of testing builds on the previous level.
“Unit testing” focuses on testing a unit of the code.
“Integration testing” is the next level of testing. This ‘level of testing’ focuses on testing the integration of “units of code” or components.
How does Integration Testing fit into the Software Development Life Cycle?
Even if a software component is successfully unit tested, in an enterprise n-tier distributed application it is of little or no value if the component cannot be successfully integrated with the rest of the application.
Once unit tested components are delivered we then integrate them together.
These “integrated” components are tested to weed out errors and bugs caused due to the integration. This is a very important step in the Software Development Life Cycle.
It is possible that different programmers developed different components.
A lot of bugs emerge during the integration step.
In most cases a dedicated testing team focuses on Integration Testing.
Prerequisites for Integration Testing:
Before we begin Integration Testing it is important that all the components have been successfully unit tested.
Integration Testing Steps:
Integration Testing typically involves the following Steps:
Step 1: Create a Test Plan
Step 2: Create Test Cases and Test Data
Step 3: If applicable create scripts to run test cases
Step 4: Once the components have been integrated execute the test cases
Step 5: Fix the bugs, if any, and retest the code
Step 6: Repeat the test cycle until the components have been successfully integrated
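As an illustration of the steps above, assuming two hypothetical components written by different developers, an integration test focuses on the hand-off between them rather than on either component alone:

```python
import unittest

# Two hypothetical components, possibly written by different developers.
def normalize_record(raw):
    """Component A: clean up a raw input record."""
    return {"name": raw["name"].strip().title(), "age": int(raw["age"])}

class RecordStore:
    """Component B: store normalized records."""
    def __init__(self):
        self._records = []

    def add(self, record):
        if record["age"] < 0:
            raise ValueError("age cannot be negative")
        self._records.append(record)

    def count(self):
        return len(self._records)

class TestIntegration(unittest.TestCase):
    """Focus on the interface: output of A must be valid input for B."""

    def test_normalized_record_is_stored(self):
        store = RecordStore()
        store.add(normalize_record({"name": "  ada lovelace ", "age": "36"}))
        self.assertEqual(store.count(), 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestIntegration)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```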
What is an ‘Integration Test Plan’?
As you may have read in the other articles in the series, this document typically describes one or more of the following:
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if a test fails
- Glossary
How to write an Integration Test Case?
Simply put, a Test Case describes exactly how the test should be carried out.
The Integration test cases specifically focus on the flow of data/information/control from one component to the other.
So the Integration Test cases should typically focus on scenarios where one component is being called from another. Also the overall application functionality should be tested to make sure the app works when the different components are brought together.
The various Integration Test Cases clubbed together form an Integration Test Suite
Each suite may have a particular focus. In other words different Test Suites may be created to focus on different areas of the application.
As mentioned before a dedicated Testing Team may be created to execute the Integration test cases. Therefore the Integration Test Cases should be as detailed as possible.
Sample Test Case Table:
Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks
Additionally the following information may also be captured:
a) Test Suite Name
b) Tested By
c) Date
d) Test Iteration (One or more iterations of Integration testing may be performed)
Working towards Effective Integration Testing:
There are various factors that affect Software Integration and hence Integration Testing:
1) Software Configuration Management: Since Integration Testing focuses on the integration of components, and components can be built by different developers and even different development teams, it is important that the right versions of the components are tested. This may sound very basic, but the biggest problem faced in n-tier development is integrating the right version of components. Integration Testing may run through several iterations, and to fix bugs, components may undergo changes. Hence it is important that a good Software Configuration Management (SCM) policy is in place: we should be able to track the components and their versions, so that each time we integrate the application components we know exactly which versions go into the build process.
2) Automate the Build Process where Necessary: A lot of errors occur because the wrong versions of components were sent for the build, or because components are missing. If possible, write a script to integrate and deploy the components; this helps reduce manual errors.
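A minimal sketch of such a script, in Python for illustration; the component names, versions, and the `echo` command are all placeholders for your real SCM checkout and packaging tools:

```python
import subprocess

# Hypothetical component list: names and the versions pinned for this build.
COMPONENTS = {"billing": "1.4.2", "reporting": "2.0.1"}

def build(dry_run=True):
    """Fetch the pinned version of every component, then report the count.

    'echo' stands in for real SCM checkout and packaging commands.
    """
    for name, version in sorted(COMPONENTS.items()):
        cmd = ["echo", "fetch {}=={}".format(name, version)]
        if dry_run:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
    return len(COMPONENTS)

build(dry_run=True)
```

Pinning the versions in one place means every integration build uses exactly the component versions that were tested.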
3) Document: Document the integration/build process to help eliminate errors of omission or oversight. It is possible that the person responsible for integrating the components forgets to run a required script, in which case Integration Testing will not yield correct results.
4) Defect Tracking: Integration Testing will lose its edge if the defects are not tracked correctly. Each defect should be documented and tracked. Information should be captured as to how the defect was fixed. This is valuable information. It can help in future integration and deployment processes.

What is Regression Testing?

If a piece of software is modified for any reason, testing needs to be done to ensure that it works as specified and that it has not negatively impacted any functionality it offered previously. This is known as Regression Testing.
Regression Testing attempts to verify:
- That the application works as specified even after the changes/additions/modification were made to it
- The original functionality continues to work as specified even after changes/additions/modification to the software application
- The changes/additions/modification to the software application have not introduced any new bugs
When is Regression Testing necessary?
Regression Testing plays an important role in any scenario where a change has been made to previously tested software code. Regression Testing is hence an important aspect of software methodologies where changes and enhancements occur frequently.
Any Software Development Project is invariably faced with requests for changing Design, code, features or all of them.
Some Development Methodologies embrace change.
For example ‘Extreme Programming’ Methodology advocates applying small incremental changes to the system based on the end user feedback.
Each change implies more Regression Testing needs to be done to ensure that the System meets the Project Goals.
Why is Regression Testing important?
Any Software change can cause existing functionality to break.
Changes to a Software component could impact dependent Components.
It is commonly observed that a Software fix could cause other bugs.
All this affects the quality and reliability of the system. Hence Regression Testing, since it aims to verify all this, is very important.
Making Regression Testing Cost Effective:
Every time a change occurs one or more of the following scenarios may occur:
- More Functionality may be added to the system
- More complexity may be added to the system
- New bugs may be introduced
- New vulnerabilities may be introduced in the system
- System may tend to become more and more fragile with each change
After the change the new functionality may have to be tested along with all the original functionality.
With each change Regression Testing could become more and more costly.
To make the Regression Testing Cost Effective and yet ensure good coverage one or more of the following techniques may be applied:
- Test Automation: If the test cases are automated, they may be executed using scripts after each change is introduced in the system. Executing test cases this way helps eliminate oversight and human error, and may also result in faster and cheaper execution of test cases. However, there is a cost involved in building the scripts.
- Selective Testing: Some teams choose to execute the test cases selectively. They do not execute all the test cases during Regression Testing; they test only what they decide is relevant. This helps reduce testing time and effort.
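Selective testing can be as simple as a mapping from changed modules to the suites that exercise them. The module and suite names below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical mapping from application modules to the regression suites
# that exercise them.
SUITE_MAP = {
    "payments": {"smoke", "payments_regression", "end_to_end"},
    "reports": {"smoke", "reports_regression"},
    "login": {"smoke", "security_regression"},
}

def suites_to_run(changed_modules):
    """Select only the suites relevant to what changed."""
    selected = set()
    for module in changed_modules:
        # Unknown changes still trigger at least the smoke suite.
        selected |= SUITE_MAP.get(module, {"smoke"})
    return sorted(selected)

print(suites_to_run(["reports"]))  # ['reports_regression', 'smoke']
```

The trade-off is the usual one: a change whose impact falls outside the mapping will not be retested, so the mapping itself must be kept current.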
Regression Testing – What to Test?
Since Regression Testing tends to verify the software application after a change has been made everything that may be impacted by the change should be tested during Regression Testing. Generally the following areas are covered during Regression Testing:
- Any functionality that was addressed by the change
- Original Functionality of the system
- Performance of the System after the change was introduced
Regression Testing – How to Test?
Like any other testing, Regression Testing needs proper planning.
For effective Regression Testing, the following ingredients are necessary:
- Create a Regression Test Plan: The Test Plan identifies focus areas, strategy, and test entry and exit criteria. It can also outline testing prerequisites, responsibilities, etc.
- Create Test Cases: Test Cases that cover all the necessary areas are important. They describe what to Test, Steps needed to test, Inputs and Expected Outputs. Test Cases used for Regression Testing should specifically cover the functionality addressed by the change and all components affected by the change. The Regression Test case may also include the testing of the performance of the components and the application after the change(s) were done.
- Defect Tracking: As in all other testing levels and types, it is important that defects are tracked systematically; otherwise it undermines the testing effort.

System Testing: Why? What? & How?

‘Unit testing’ focuses on testing each unit of the code.
‘Integration testing’ focuses on testing the integration of “units of code” or components.
Each level of testing builds on the previous level.
‘System Testing’ is the next level of testing. It focuses on testing the system as a whole.

How does System Testing fit into the Software Development Life Cycle?
In a typical Enterprise, ‘unit testing’ is done by the programmers. This ensures that the individual components are working OK. The ‘Integration testing’ focuses on successful integration of all the individual pieces of software (components or units of code).
Once the components are integrated, the system as a whole needs to be rigorously tested to ensure that it meets the Quality Standards.
Thus the System testing builds on the previous levels of testing namely unit testing and Integration Testing.
Usually a dedicated testing team is responsible for doing ‘System Testing’.
Why is System Testing important?
System Testing is a crucial step in the Quality Management Process.
- In the Software Development Life Cycle, System Testing is the first level where the system is tested as a whole
- The system is tested to verify that it meets the functional and technical requirements
- The application/system is tested in an environment that closely resembles the production environment where the application will be finally deployed
- System Testing enables us to test, verify, and validate both the Business Requirements as well as the Application Architecture
Prerequisites for System Testing:
The prerequisites for System Testing are:
- All the components should have been successfully unit tested
- All the components should have been successfully integrated and Integration Testing should be completed
- An environment closely resembling the production environment should be set up
When necessary, several iterations of System Testing are done in multiple environments.
Steps needed to do System Testing:
The following steps are important to perform System Testing:
Step 1: Create a System Test Plan
Step 2: Create Test Cases
Step 3: Carefully build the data used as input for System Testing
Step 4: If applicable, create scripts to build the environment and to automate execution of the test cases
Step 5: Execute the test cases
Step 6: Fix the bugs, if any, and retest the code
Step 7: Repeat the test cycle as necessary
What is a ‘System Test Plan’?
As you may have read in the other articles in the testing series, this document typically describes the following:
- The Testing Goals
- The key areas to be focused on while testing
- The Testing Deliverables
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if a test fails
- Glossary
How to write a System Test Case?
A Test Case describes exactly how the test should be carried out.
The System test cases help us verify and validate the system.
The System Test Cases are written such that:
- They cover all the use cases and scenarios
- They validate the technical requirements and specifications
- They verify whether the application/system meets the specified Business and Functional Requirements
- They may also verify whether the system meets the performance standards
Since a dedicated test team may execute the test cases, it is necessary that the System Test Cases be sufficiently detailed. Detailed test cases help the test executors do the testing as specified, without any ambiguity.
The format of the System Test Cases may be like all other Test cases as illustrated below:
• Test Case ID
• Test Case Description:
1. What to Test?
2. How to Test?
• Input Data
• Expected Result
• Actual Result
Sample Test Case Format:
Test Case ID | What To Test? | How to Test? | Input Data | Expected Result | Actual Result | Pass/Fail
Additionally the following information may also be captured:
a) Test Suite Name
b) Tested By
c) Date
d) Test Iteration (The Test Cases may be executed one or more times)
Working towards Effective Systems Testing:
There are various factors that affect success of System Testing:
1) Test Coverage: System Testing will be effective only to the extent of the coverage of Test Cases. What is Test coverage? Adequate Test coverage implies the scenarios covered by the test cases are sufficient. The Test cases should “cover” all scenarios, use cases, Business Requirements, Technical Requirements, and Performance Requirements. The test cases should enable us to verify and validate that the system/application meets the project goals and specifications.
2) Defect Tracking: The defects found during the process of testing should be tracked. Subsequent iterations of test cases verify if the defects have been fixed.
3) Test Execution: The Test cases should be executed in the manner specified. Failure to do so results in improper Test Results.
4) Build Process Automation: A lot of errors occur due to an improper build. A 'build' is a compilation of the various components that make up the application, deployed in the appropriate environment. The test results will not be accurate if the application is not built correctly or if the environment is not set up as specified. Automating this process may help reduce manual errors.
5) Test Automation: Automating the Test process could help us in many ways:
a. The tests can be repeated with fewer errors of omission or oversight
b. Some scenarios can be simulated if the tests are automated, for instance simulating a large number of users or simulating increasingly large amounts of input/output data
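For instance, simulating many concurrent users can be sketched with threads; in this illustration the per-user action is just a stub delay standing in for a real request against the system:

```python
import threading
import time

def simulate_user(user_id, results):
    """Stub user action: a short delay standing in for a real request."""
    time.sleep(0.01)
    results[user_id] = "ok"

def run_load_test(num_users=50):
    """Simulate many concurrent users and time the whole run."""
    results = {}
    threads = [threading.Thread(target=simulate_user, args=(i, results))
               for i in range(num_users)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(results), time.time() - start

count, elapsed = run_load_test()
print(count, "simulated users finished in", round(elapsed, 2), "seconds")
```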
6) Documentation: Proper Documentation helps keep track of Tests executed. It also helps create a knowledge base for current and future projects. Appropriate metrics/Statistics can be captured to validate or verify the efficiency of the technical design /architecture

What is User Acceptance Testing?

User Acceptance Testing is often the final step before rolling out the application.
Usually the end users who will be using the applications test the application before ‘accepting’ the application.
This type of testing gives the end users the confidence that the application being delivered to them meets their requirements.
This testing also helps nail bugs related to usability of the application.
User Acceptance Testing – Prerequisites:
Before User Acceptance Testing can be done, the application must be fully developed.
The various levels of testing (Unit, Integration, and System) should already be completed before User Acceptance Testing is done, so most of the technical bugs have already been fixed before UAT.
User Acceptance Testing – What to Test?
To ensure effective User Acceptance Testing, Test Cases are created.
These Test cases can be created using various use cases identified during the Requirements definition stage.
The Test cases ensure proper coverage of all the scenarios during testing.
During this type of testing the specific focus is the exact real world usage of the application. The Testing is done in an environment that simulates the production environment.
The Test cases are written using real world scenarios for the application
User Acceptance Testing – How to Test?
The user acceptance testing is usually a black box type of testing. In other words, the focus is on the functionality and the usability of the application rather than the technical aspects. It is generally assumed that the application would have already undergone Unit, Integration and System Level Testing.
However, it is useful if the User acceptance Testing is carried out in an environment that closely resembles the real world or production environment.
The steps taken for User Acceptance Testing typically involve one or more of the following:
1) User Acceptance Test (UAT) Planning
2) Designing UA Test Cases
3) Selecting a Team that would execute the UAT Test Cases
4) Executing Test Cases
5) Documenting the Defects found during UAT
6) Resolving the issues/Bug Fixing
7) Sign Off
User Acceptance Test (UAT) Planning:
As always the Planning Process is the most important of all the steps. This affects the effectiveness of the Testing Process. The Planning process outlines the User Acceptance Testing Strategy. It also describes the key focus areas, entry and exit criteria.
Designing UA Test Cases:
The User Acceptance Test Cases help the Test Execution Team to test the application thoroughly. This also helps ensure that the UA Testing provides sufficient coverage of all the scenarios.
The Use Cases created during the Requirements definition phase may be used as inputs for creating Test Cases. Inputs from Business Analysts and Subject Matter Experts are also used in creating the Test Cases.
Each User Acceptance Test Case describes in a simple language the precise steps to be taken to test something.
The Business Analysts and the Project Team review the User Acceptance Test Cases.
Selecting a Team that would execute the (UAT) Test Cases:
Selecting a Team that would execute the UAT Test Cases is an important step.
The UAT Team is generally a good representation of the real world end users.
The team thus comprises actual end users who will be using the application.
Executing Test Cases:
The Testing Team executes the Test Cases and may additionally perform random tests relevant to them.
Documenting the Defects found during UAT:
The Team logs their comments and any defects or issues found during testing.
Resolving the issues/Bug Fixing:
The issues/defects found during Testing are discussed with the Project Team, Subject Matter Experts and Business Analysts. The issues are resolved as per the mutual consensus and to the satisfaction of the end users.
Sign Off:
Upon successful completion of User Acceptance Testing and resolution of the issues, the team generally indicates acceptance of the application. This step is important in commercial software sales: once the users 'accept' the software delivered, they indicate that it meets their requirements.
The users are now confident of the software solution delivered, and the vendor can be paid for the same.
What are the key deliverables of User Acceptance Testing?
In the Traditional Software Development Lifecycle successful completion of User Acceptance Testing is a significant milestone.
The Key Deliverables typically of User Acceptance Testing Phase are:
1) The Test Plan- This outlines the Testing Strategy
2) The UAT Test cases – The Test cases help the team to effectively test the application
3) The Test Log – This is a log of all the test cases executed and the actual results.
4) User Sign Off – This indicates that the customer finds the product delivered to their satisfaction


SNMP | Krishna's page

(1) We test our SNMP agent with a popular NMS and it works just fine. Why would we need SilverCreek, the official SNMP Test Suite?

Interacting with a manager station does not prove correct implementation of the SNMP protocol. There are still issues such as lexicographic ordering, 32-bit signed integer arithmetic, correct implementation of read-only and read-write variables, checking that the agent implements the objects it claims to and does not implement objects it does not claim to, and other areas that implementors tend to get wrong. These implementation errors will result in wrong data being reported to the manager.

Doing "gets" and "sets" from a Network Management Station to your agent is an indicator that some of your implementation is correct. However, you have no way of determining that the "get" retrieved the right data and that the "set" set the correct variable.

An NMS is a great mechanism for cheating at agent testing! A Network Management System goes out of its way to work around badly implemented agents. If your agent works with an NMS, that proves the NMS is very good; it proves nothing about your agent.

Agent developers forget that Fortune 2000 companies often write their own applications for collecting and analyzing SNMP agent information. In such environments, these companies typically do not add a lot of padding or scaffolding to work around bad agent errors. A common complaint of end user network managers is that the information they collect from SNMP agents is unreliable. Two products from different vendors that should be reporting identical information often report different results.

We must also remind developers that all releases and improvements need to be tested. Using HP Open View for this purpose requires an SNMP expert employee or contractor who can interpret the results to know what is correct and what is not. Using SilverCreek will find the majority of your implementation errors and provide definitive test result information so that you don't need an SNMP expert to handle testing.

(2) We already did interoperability testing at the (Fill in Name of) Tradeshow, so why would we need your test product?

Interoperability testing is a great way to check against another engineer's interpretation of the specification. This can be very helpful in checking your own thinking about the specification and areas of potential misinterpretation. This can be a valuable learning experience for all implementors. We have seen situations where seven engineers implemented something one way, and six implemented it another way. A meeting was held with the RFC authors and the paragraphs were rewritten to eliminate the ambiguity. This is an extremely useful experience, but not sufficient for testing a product.

Interoperability testing does not determine your product's compliance with boundary conditions, its ability to handle various error conditions, or its correct implementation of the SNMP protocol, including lexicographic ordering, 32-bit signed integer arithmetic, correct implementation of read-only and read-write variables, and other areas that implementors tend to get wrong.

(3) We implemented our SNMP agent from the RFCs (coded it from scratch), so why would we need SilverCreek?

The RFCs are not robust specifications free from ambiguity, so it is likely that aspects of the implementation will be incorrect. There may be email archives, minutes from meetings, or papers that clarify these issues, but unless one knows what to look for, the required information may be missed. In fact, it is most likely that products based on commercial SNMP engines will be less error-prone than "from-scratch" implementations. The commercial SNMP engine developers have full time staff devoted to understanding the clarifications and subtleties as well as attending the meetings where these issues are discussed.

In addition, there is no "gold" implementation with IETF RFCs. When a single company dominates a standard, an implementor can test and check his product against that "gold" implementation (such as Windows 95 from Microsoft). Within the IETF standards-creating process, there are experimental implementations, but no "gold" implementation to test against.

Our experience, and the experience of countless SNMP developers, is that there are many parts of SNMP that are _hard_ to get right. Lexicographic ordering, for instance, seems like a fairly simple thing to get right -- but in reality, most developers do not. They may have OIDs out of order, or wrap around from the end of the OID tree back to the beginning. Sometimes you cannot even walk the MIB because GET-NEXTs will not work from an arbitrary starting point.
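The lexicographic-ordering requirement is easy to state in code. This sketch (OID strings invented for illustration) shows why OIDs must be compared component-wise as integers, not as strings; string comparison is exactly the mistake that produces out-of-order walks:

```python
# Sketch: verify that a walk of OIDs is in strict lexicographic order.
# The OID strings below are illustrative, not from a real agent.

def oid_to_tuple(oid):
    """Convert a dotted OID string like '1.3.6.1.2.1.1.1.0' to a tuple of ints."""
    return tuple(int(part) for part in oid.split("."))

def is_lexicographically_ordered(oids):
    """Return True if each OID is strictly greater than its predecessor."""
    tuples = [oid_to_tuple(o) for o in oids]
    return all(a < b for a, b in zip(tuples, tuples[1:]))

# A correct walk: component-wise integer comparison.
good_walk = ["1.3.6.1.2.1.1.1.0", "1.3.6.1.2.1.1.2.0", "1.3.6.1.2.1.1.10.0"]
# Naive string comparison would wrongly place "...10.0" before "...2.0".
bad_walk = ["1.3.6.1.2.1.1.1.0", "1.3.6.1.2.1.1.10.0", "1.3.6.1.2.1.1.2.0"]

print(is_lexicographically_ordered(good_walk))  # True
print(is_lexicographically_ordered(bad_walk))   # False
```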

(4) We only need to test our private MIBs, so why would we buy SilverCreek?

SilverCreek is extensible. Through simple scripts written in Tcl, you can write tests for your proprietary MIBs. It is a very simple process to load a private MIB into SilverCreek. Once it is loaded, SilverCreek can run a number of the standard tests (such as writing to read-only variables) on the objects defined in your MIB. From there you can extend SilverCreek to check the specific functionality of your MIB.
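SilverCreek's own tests are written in Tcl; as a language-neutral illustration of what a standard "write to a read-only variable" test checks, here is a minimal Python sketch against a toy in-memory agent. The OIDs, values, and class are all invented for illustration:

```python
# Hypothetical in-memory "agent" used to illustrate one standard test:
# attempting to SET a read-only object must return an error, not succeed.

READ_ONLY = "read-only"
READ_WRITE = "read-write"

class ToyAgent:
    def __init__(self):
        # MIB objects: oid -> (access, value); contents are illustrative.
        self.mib = {
            "1.3.6.1.2.1.1.1.0": (READ_ONLY, "Example system description"),
            "1.3.6.1.2.1.1.5.0": (READ_WRITE, "hostname"),
        }

    def snmp_set(self, oid, value):
        access, _ = self.mib[oid]
        if access == READ_ONLY:
            return "notWritable"          # the expected error response
        self.mib[oid] = (access, value)
        return "noError"

agent = ToyAgent()
print(agent.snmp_set("1.3.6.1.2.1.1.1.0", "new descr"))  # notWritable
print(agent.snmp_set("1.3.6.1.2.1.1.5.0", "router-7"))   # noError
```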

(5) There are some publicly available test tools, so why should we use SilverCreek?

We have evaluated a number of the publicly available test tools, and determined them to be inadequate. Many of these tests report false information. Many others are not rigorous enough.

Many of the publicly available tools are extremely difficult to install, requiring as much as five hours. They come with no technical support, bug fixes, or continuing effort to improve them.

Many of the other testing products available are not extensible; there is no means to test private MIBs, or customize the tests to fit your environment.

(6) Since we beta tested our product when it first came out and got rid of all the bugs, why would we need to retest with your test product now?

Beta testers tend to be users who find the obvious user-level bugs, or bugs that appear only in a user's equipment configuration that differs from the manufacturer's test environment. While this effort can provide good information about the usability of the product and about glitches in various combinations of equipment, it does not provide thorough product testing.

Beta testing will not determine your product's compliance at boundary conditions, its ability to handle various error conditions, or its correct implementation of the SNMP protocol: lexicographic ordering, 32-bit signed integer arithmetic, correct handling of read-only and read-write variables, and other areas that implementors tend to get wrong.

(7) We purchased our SNMP engine from a commercial organization specializing in SNMP engines so obviously they did all this testing and their code will be correct, so why would we need your test product?

It is most likely that products based on commercial SNMP engines will be less error-prone than "from-scratch" implementations. The commercial SNMP engine providers usually have full time staff devoted to understanding the RFC clarifications and subtleties as well as attending the IETF meetings where these issues are discussed. (In fact, we often see them there!)

However, there is no guarantee that commercial SNMP engine providers will have implemented everything correctly. Getting your agent code from a third party is a very good reason to perform your own testing in order to evaluate the quality of the product delivered to you. We have, sadly, tested products based on third party agents that could not pass any of our tests.

Differences between Application and Embedded Testing

Most embedded software is, well, embedded: it generally has no keyboard, hard disk, or monitor attached. In that case, a special mechanism needs to be created to validate the software and the complete system before actual use.

Real-time behaviour:

Most embedded systems are real-time. The output received from the system must not only be correct, it must also arrive within a specific time; a response that arrives too late is a failure even if its value is correct.
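A real-time test must therefore assert on both the value and the latency of a response. A minimal sketch, where the 50 ms deadline and the compute_response routine are invented stand-ins for a real embedded system:

```python
# Sketch: in a real-time system a correct answer that arrives late is
# still a failure, so the test checks both value and elapsed time.
# The deadline and compute_response are illustrative assumptions.

import time

DEADLINE_SECONDS = 0.050  # assumed response deadline (50 ms)

def compute_response():
    """Stand-in for the embedded routine under test."""
    time.sleep(0.001)  # simulate a small amount of work
    return 42

start = time.monotonic()
result = compute_response()
elapsed = time.monotonic() - start

print(result == 42)                 # True: the value is correct
print(elapsed <= DEADLINE_SECONDS)  # must also hold, or the test fails
```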

Difficult to simulate

Due to the conditions under which embedded systems are used, simulating the actual environment may be expensive, difficult, or dangerous. Even a simple embedded device such as an electric or smoke detector creates problems, since the validation system needs to bypass the actual sensor and send a signal directly to the software that would normally be triggered by it.

Difficulty in seeing the output

Since embedded systems are usually connected to devices, their outputs are generally not messages on a screen: they may be a command to drive the device or a write to memory. The test suite therefore needs to inspect the internal parts of the embedded system in order to verify that the desired operation was performed.
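This kind of check can be sketched with a fake device: the test verifies the side effect in a register map rather than any printed output. The register address, bit, and classes below are invented for illustration:

```python
# Sketch: when an embedded command writes to a device register instead
# of printing output, the test must inspect internal state. The register
# map and control bit below are hypothetical.

class FakeDevice:
    """Minimal stand-in for memory-mapped device registers."""
    def __init__(self):
        self.registers = {0x10: 0x00}  # 0x10: hypothetical control register

class Controller:
    def __init__(self, device):
        self.device = device

    def switch_on(self):
        # The observable "output" is a bit set in the control register.
        self.device.registers[0x10] |= 0x01

device = FakeDevice()
Controller(device).switch_on()

# The test verifies the side effect, not a screen message.
print(device.registers[0x10] & 0x01 == 0x01)  # True
```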

No down time

Unlike application software, many embedded systems are expected to run continuously. This poses its own problems, which are very difficult to detect during validation: memory leaks, recovery from incorrect states, hardware malfunctions, and similar failures are usually difficult to simulate.

Testing Process

Testing Life Cycle
A good testing life cycle begins during the requirements elicitation phase of software development, and concludes when the product is ready to install or ship following a successful system test.
A formal test plan is more than an early step in the software testing process -- it's a vital part of your software development life cycle. This section presents a series of tasks to help you develop a formal testing process model, as well as the inputs and outputs associated with each task. These tasks include:

• review of program plans
• development of the formal test plan
• creation of test documentation (test design, test cases, test software, and test procedures)
• acquisition of automated testing tools
• test execution
• updating the test documentation
• tailoring the model for projects of all sizes

What is a bug?
A software bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working as intended, or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program's source code or its design. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy. Reports about bugs in a program are referred to as bug reports, also called PRs (problem reports), trouble reports, CRs (change requests), and so forth.
Bugs can have a wide variety of effects, with varying levels of inconvenience to the user of the program. Some bugs have only a subtle effect on the program's functionality, and may thus lie undetected for a long time. More serious bugs may cause the program to crash or freeze. Other bugs lead to security problems; for example, a common type of bug which allows a buffer overflow may allow a malicious user to execute other programs that are normally not allowed to run.
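Subtle bugs of the kind that "lie undetected for a long time" are often boundary errors: the code works on everyday inputs and fails only at an edge value. A tiny invented example:

```python
# Sketch: a boundary bug that stays hidden until exactly the edge
# value is used. The function is invented for illustration.

def accept_percent(value):
    """Intended to accept 0..100 inclusive, but the upper bound is wrong."""
    return 0 <= value < 100   # bug: should be value <= 100

print(accept_percent(50))    # True: everyday inputs hide the bug
print(accept_percent(100))   # False: the legal edge value is rejected
```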
Bug tracking procedure
An issue (bug) tracking system (ITS) is a software application that allows an enterprise to record and follow the progress of every problem or "issue" that a computer system user identifies until the problem is resolved. With an ITS, an "issue", which can be anything from a simple customer question to a detailed technical report of an error or bug, can be tracked by priority status, owner, or some other customized criteria.
An ITS generally provides the user with a way to report an issue, track progression towards its resolution, and know who is responsible for resolving the issue. It also allows the manager of the system to customize the tracking procedure so that unnecessary documentation on the part of the problem solvers does not become a waste of time.
Bug Life Cycle
When reporting a bug, be as detailed as possible regarding execution, environment, how the bug can be replicated, and any other relevant information. Useful inputs include:
- Program that caused the error
- Code or program that is not part of the package
- Package sample
- Error messages, error codes
- Message transmission confirmation
- Server or service settings

Bug Entry Details

1. Select a Product
2. Specify a Version
- 1.0.pre1
- 1.0.pre2
3. Which Component?
- Server
- Other
4. Which Platform?
- All
- HP
5. Which OS
Ex: Windows 98 / ME / 2000 / NT / XP
6. Indicate a Priority
Available Levels of Priority
- P1 Most Critical
- P2
- P3
- P4
- P5 Least Critical
7. Initial Owner
8. Cc:
9. URL:
10. Bug Summary
11. Additional Comments
12. Specify Inspection Group

Testing Methodologies

There is a three-step approach to testing software.

  1. Test Strategy
  2. Test Planning
  3. Test Execution
Test Strategy
A product's test strategy ties the product's release and sign-off criteria to its business objectives.
In order to take key dependencies into account, test planning, test-case design, test automation (or manual testing), and test execution are aligned with the development schedule. Meaningful test scheduling requires a clear understanding of, and sequencing for:
• Completion of low- and high-level specifications;
• Code-complete (coding for everything but bug-fixes stops);
• Completion of component unit-testing;
• UI freeze.
Identifying key trade-offs is essential, for it is impossible to test all scenarios, cover the full configuration matrix, and automate all test cases, while remaining within the practical limits of time and budget. Identify the features, components, sub-components, and items to be tested and the range of tests to be carried out.

Test Planning
The next step is Test Planning, which defines the approach for testing and establishes a clear understanding of the project and its deliverables. Exhaustive analysis ensures no mismatch of requirements. All relevant product interfaces, components, and other external dependencies are identified, and the timeframe for delivering the results is computed. The resulting plan is presented in industry-standard format to the customer; further steps are not taken without customer acceptance.
Here are the key steps for Test Planning:
• Define release criteria (with the release manager)
• Outline and prioritize the testing effort.
• Chart test automation requirements
• Identify resource requirements at various stages of testing
• Set up calendar-based activity plan
• State reporting mechanism & establish communication model
• Configure the team, including the number, type, and seniority of resources and the length of time required, mapping each resource onto the activity plan.

Test Execution
Execution of the test plan begins with fulfillment of test-start criteria, and ends with the fulfillment of test-complete criteria. Intermediary steps are:
• Prepare comprehensive test plan specifications and test cases for each level of testing.
• Review all test plans and test cases
• Prepare test data and test logs.
• Set up the test environment so that all operations can be completed promptly, accurately, and efficiently.
• Execute Error/Trap tests to ensure testers' accuracy.
• Execute tests as described, noting where test cases require revision and updating.
• Report all bugs in the manner agreed upon with the customer, following all defect management protocols, informing customer of current status, monitoring and driving to resolution all red-flag conditions, and ensuring that communication between all parties is complete and accurate.
• Run spot checks to ensure quality.
• When the project has been completed, review all aspects of the project and submit a Project Retrospective report to the customer that objectively evaluates the project's execution.

Software development lifecycle - Process Models

Process Models
There are several methodologies or models that can be used to guide the software development lifecycle. Some of these include:

1. Waterfall model
2. V-Model
3. Spiral Model
4. Incremental model

1. Waterfall model
This is the best-known and oldest process, where developers (roughly) follow these steps in order. They state requirements, analyze them, design a solution approach, architect a software framework for that solution, develop code, test (perhaps unit tests then system tests), deploy, and maintain. After each step is finished, the process proceeds to the next step, just as builders don't revise the foundation of a house after the framing has been erected. The process has no provision for correcting errors in early steps (for example, in the requirements), so the entire (expensive) engineering process may be executed to the end, resulting in unusable or unneeded software features, just as a house built on an incorrect foundation might be uninhabitable after it is handed over to the customer. The original description of the methodology did include iteration, but that part of the process is usually overlooked.
The weaknesses of the Waterfall Model are readily apparent:
It is very important to gather all possible requirements during the first phase of requirements collection and analysis. If not all requirements are obtained at once, the subsequent phases suffer. Reality is cruel: usually only a part of the requirements is known at the beginning, and a good deal is gathered over the rest of the development time.
Iterations are only meant to happen within the same phase, or at best from the start of the subsequent phase back to the previous phase. If the process is followed by the book, this tends to shift the solution of problems into later phases, which eventually results in a bad system design: instead of solving the root causes, the tendency is to patch problems with inadequate measures.
There may be a very big "Maintenance" phase at the end, because the process only allows for a single run through the waterfall. The initial run may turn out to be only a first sample phase, which means that further development is squeezed into a last, never-ending maintenance phase and is virtually run without a proper process.

Advantages:
  • Simple and easy to use.
  • Easy to manage due to the rigidity of the model; each phase has specific deliverables and a review process.
  • Phases are processed and completed one at a time.
  • Works well for smaller projects where requirements are very well understood.
Disadvantages:
  • Adjusting scope during the life cycle can kill a project.
  • No working software is produced until late in the life cycle.
  • High amounts of risk and uncertainty.
  • Poor model for complex and object-oriented projects.
  • Poor model for long and ongoing projects.
  • Poor model where requirements are at moderate to high risk of changing.

2. V-model
The V-model is an improved version of the waterfall model: a further development of the waterfall model resulted in the so-called "V-Model". Looked at closely, the individual steps of the process are almost the same as in the waterfall model, so I will not describe them again; the description of the waterfall steps substitutes for this. There is, however, one big difference. Instead of going down the waterfall in a linear way, the process steps are bent upwards at the coding phase to form the typical V shape. The reason for this is that each of the design phases was found to have a counterpart among the testing phases, and these correlate with each other.

The time in which the V-model evolved was also the time in which software testing techniques were defined and various kinds of testing were clearly separated from each other. This new emphasis on software testing (of course along with improvements and new techniques in requirements engineering and design) led to the evolution of the waterfall model into the V-model. The tests are derived directly from their design or requirements counterparts. This made it possible to verify each of the design steps individually due to this correlation.
Another idea that evolved was traceability down the left side of the V. This means that the requirements have to be traced into the design of the system, thus verifying that they are implemented completely and correctly. Another feature can be observed when you compare the waterfall model to the V-model: the "Operation & Maintenance" phase was replaced in later versions of the V-model with the validation of requirements. This means that not only the correct implementation of the requirements has to be checked, but also whether the requirements themselves are correct. In case the requirements, and subsequently the design, coding, etc., need to be updated, there are two options: either this is treated as in the waterfall model, in a never-ending maintenance phase, or the process goes over to another V-cycle. The earlier versions of the V-model used the first option; for later versions, a series of subsequent V-cycles was defined.

Advantages:
  • Simple and easy to use.
  • Each phase has specific deliverables.
  • Higher chance of success than the waterfall model, due to the development of test plans early in the life cycle.
  • Works well for small projects where requirements are easily understood.
Disadvantages:
  • Very rigid, like the waterfall model.
  • Little flexibility; adjusting scope is difficult and expensive.
  • Software is developed during the implementation phase, so no early prototypes of the software are produced.
  • The model doesn't provide a clear path for problems found during the testing phases.

3. The Spiral Model

The process begins at the center. From there it moves clockwise in traversals, each of which usually results in a deliverable. What that deliverable is changes from traversal to traversal: the first traversal may result in a requirement specification, the second in a prototype, and the next in another prototype or product sample, until the last traversal leads to a product suitable to be sold. Consequently, the related activities and their documentation also mature towards the outer traversals; e.g. a formal design and testing session would be placed in the last traversal.

The spiral is divided into a number of task regions. These regions are:
a. The planning task - to define resources, responsibilities, milestones and schedules.
b. The goal determination task - to define the requirements and constraints for the product and define possible alternatives.
c. The risk analysis task - to assess both technical and management risks.
d. The engineering task - to design and implement one or more prototypes or samples of the application
The most outstanding distinction between the spiral model and other software models is the explicit risk evaluation task. Although risk management is part of the other processes as well, it does not have its own representation in those process models; there, risk assessment is a sub-task of, e.g., the overall planning and management. Further, there are no fixed phases for requirements specification, design, or testing in the spiral model. Prototyping may be used to find and define requirements. This may then be followed by "normal" phases, as found in other process models, to handle design and testing.

The advantages of the spiral model are that it reflects the development approach in many industries much better than the other process models do. It uses a stepwise approach which, for example, goes hand in hand with the practice of maintaining a number of hardware sample phases in cases where the product is not only software for a given environment but also involves hardware development. This way the developers and the customer can understand and react much better to risks in the evolutionary process. By having an iterative process which reduces formalisms and omittable activities in the earlier phases, the use of resources is optimized. Further, risks should be detected much earlier than in other process models, and measures can be taken to handle them.
The disadvantages of the spiral model are that the risk assessment is rigidly anchored in the process. First of all, it demands risk-assessment expertise to perform this task, and secondly, in some cases the risk assessment may not be necessary in this detail. For completely new products the risk assessment makes sense, but I dare say that the risks of programming yet another book-keeping package are well known and do not need a big assessment phase. Also, if you think of the multitude of carry-over projects in many industries, i.e. applying an already developed product to the needs of a new customer through small changes, the risks are not a subject generating big headaches. Generally speaking, the spiral model is not much esteemed and not much used, although it has many advantages and could have even more if the risk assessment phases were tailored down to the necessary amount.

Advantages:
  • High amount of risk analysis.
  • Good for large and mission-critical projects.
  • Software is produced early in the software life cycle.
Disadvantages:
  • Can be a costly model to use.
  • Risk analysis requires highly specific expertise.
  • The project's success is highly dependent on the risk analysis phase.
  • Doesn't work well for smaller projects.

4. Incremental Model
The incremental model applies the waterfall approach iteratively: multiple development cycles take place, making the life cycle a "multi-waterfall" cycle. Cycles are divided up into smaller, more easily managed iterations, and each iteration passes through the requirements, design, implementation, and testing phases.
A working version of software is produced during the first iteration, so you have working software early on during the software life cycle. Subsequent iterations build on the initial software produced during the first iteration.

Advantages:
  • Generates working software quickly and early during the software life cycle.
  • More flexible; less costly to change scope and requirements.
  • Easier to test and debug during a smaller iteration.
  • Easier to manage risk because risky pieces are identified and handled during their own iterations.
  • Each iteration is an easily managed milestone.
Disadvantages:
  • Each phase of an iteration is rigid, and phases do not overlap each other.
  • Problems may arise with the system architecture because not all requirements are gathered up front for the entire software life cycle.

Software Development Life cycle

Definition of SDLC:
A Software Development Life Cycle process is a structure imposed on the development of a software product. In other words it is a process of formal, logical steps taken to develop the software. Synonyms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process.

Phases of SDLC
The phases of SDLC can vary somewhat but generally include the following:

1. System/Information Engineering and Modeling
2. Software Requirements Analysis
3. Systems Analysis and Design
4. Code Generation
5. Testing
6. Maintenance
SDLC processes are composed of many activities, notably the following. They are considered sequential steps in the waterfall process, but other processes may rearrange or combine them in different ways.

1. System/Information Engineering and Modeling
As software is always part of a larger system (or business), work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when software must interface with other elements such as hardware, people, and other resources. The system is the basic and very critical requirement for the existence of software in any entity: if the system is not in place, it should be engineered and put in place. In some cases, to extract the maximum output, the system should be re-engineered and spruced up. Once the ideal system is engineered or tuned, the development team studies the software requirements for the system.

2. Software Requirement Analysis
Extracting the requirements of a desired software product is the first task in creating it. While customers probably believe they know what the software is to do, it may require skill and experience in software engineering to recognize incomplete, ambiguous or contradictory requirements.

3. Specification
Specification is the task of precisely describing the software to be written, in a mathematically rigorous way. In practice, most successful specifications are written to understand and fine-tune applications that were already well-developed, although safety-critical software systems are often carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable.

4. Software architecture
The architecture of a software system refers to an abstract representation of that system. Architecture is concerned with making sure the software system will meet the requirements of the product, as well as ensuring that future requirements can be addressed. The architecture step also addresses interfaces between the software system and other software products, as well as the underlying hardware or the host operating system.

5. Coding
Reducing a design to code may be the most obvious part of the software engineering job, but it is not necessarily the largest portion.

6. Testing
Testing of parts of the software, especially where code by two different engineers must work together, falls to the software engineer.

7. Documentation
An important (and often overlooked) task is documenting the internal design of software for the purpose of future maintenance and enhancement. Documentation is most important for external interfaces.

8. Maintenance
Maintaining and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development of the software. Not only may it be necessary to add code that does not fit the original design but just determining how software works at some point after it is completed may require significant effort by a software engineer. About 2/3 of all software engineering work is maintenance, but this statistic can be misleading. A small part of that is fixing bugs. Most maintenance is extending systems to do new things, which in many ways can be considered new work. In comparison, about 2/3 of all civil engineering, architecture, and construction work is maintenance in a similar way.

Overview of Software Testing

Software Testing is the process of executing a program or system with the intent of finding errors. Or, it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes for software is generally infeasible.
Regardless of the limitations, testing is an integral part of software development. It is broadly deployed in every phase of the software development cycle. Typically, more than 50 percent of the development time is spent in testing. Testing is usually performed for the following purposes.

To improve quality
Quality means, at a minimum, performing as required under specified circumstances.

For Verification & Validation (V&V)
• Verification: Confirms that products properly reflect the requirements specified for them. In other words, verification ensures that "you built it right."
• Validation: Confirms that the product, as provided, will fulfill its intended use. In other words, validation ensures that "you built the right thing."
Verification Techniques
Dynamic testing - Testing that involves the execution of a system or component. Basically, a number of test cases are chosen where each test case consists of test data. These input test cases are used to determine output test results. Dynamic testing can be further divided into three categories - functional testing, structural testing, and random testing.
Functional testing - Testing that involves identifying and testing all the functions of the system as defined within the requirements. This form of testing is an example of black-box testing since it involves no knowledge of the implementation of the system.
Structural testing - Testing performed with full knowledge of the implementation of the system; an example of white-box testing. It uses information from the internal structure of a system to devise tests to check the operation of individual components. Functional and structural testing both choose test cases that investigate a particular characteristic of the system.
Random testing - Testing that freely chooses test cases among the set of all possible test cases. The use of randomly determined inputs can detect faults that go undetected by other, systematic testing techniques. Exhaustive testing, where the input test cases consist of every possible set of input values, is a form of random testing. Although exhaustive testing performed at every stage in the life cycle results in a complete verification of the system, it is realistically impossible to accomplish. [Andriole86]
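Random testing is easy to sketch when a trusted oracle exists to compare against. In the example below, my_abs is an invented implementation with a deliberately seeded fault, and Python's built-in abs serves as the oracle:

```python
# Sketch of random testing: generate random inputs and compare the
# implementation under test against a trusted oracle. my_abs and its
# seeded bug are invented for illustration.

import random

def my_abs(x):
    """Implementation under test, with a deliberate fault at one input."""
    if x == -7:
        return -7          # seeded bug
    return x if x >= 0 else -x

random.seed(0)  # reproducible run
failures = [x for x in (random.randint(-10, 10) for _ in range(10_000))
            if my_abs(x) != abs(x)]

print(len(failures) > 0)   # True: random sampling caught the seeded fault
```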
Static testing - Testing that does not involve the operation of the system or component. Some of these techniques are performed manually while others are automated. Static testing can be further divided into 2 categories - techniques that analyze consistency and techniques that measure some program property.
Consistency techniques - Techniques that are used to ensure program properties such as correct syntax, correct parameter matching between procedures, correct typing, and correct translation of requirements and specifications.
Measurement techniques - Techniques that measure properties such as error proneness, understandability, and well-structuredness.
Validation Techniques
There are also numerous validation techniques, including formal methods, fault injection, and dependability analysis. Validation usually takes place at the end of the development cycle and looks at the complete system, as opposed to verification, which focuses on smaller sub-systems.
Formal methods - Formal methods are not only a verification technique but also a validation technique. A formal method means the use of mathematical and logical techniques to express, investigate, and analyze the specification, design, documentation, and behavior of both hardware and software.
Fault injection - Fault injection is the intentional activation of faults by either hardware or software means to observe the system operation under fault conditions.
Hardware fault injection - Can also be called physical fault injection because we are actually injecting faults into the physical hardware.
Software fault injection - Errors are injected into the memory of the computer by software techniques. Software fault injection is basically a simulation of hardware fault injection.
Dependability analysis - Dependability analysis involves identifying hazards and then proposing methods that reduce the risk of the hazard occurring.
Hazard analysis - Involves using guidelines to identify hazards, their root causes, and possible countermeasures.
Risk analysis - Takes hazard analysis further by identifying the possible consequences of each hazard and the probability of each occurring.
For reliability estimation
Software reliability has important relations with many aspects of software, including its structure and the amount of testing it has been subjected to. Based on an operational profile (an estimate of the relative frequency of use of the various inputs to the program), testing can serve as a statistical sampling method to gather failure data for reliability estimation.

Testing concepts - Definitions

Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
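For instance (using a hypothetical `classify` function), white-box test cases are derived by reading the code and choosing inputs that exercise every branch, including the boundary where the condition flips:

```python
def classify(age):
    """Hypothetical function under test, with one branch per outcome."""
    if age < 18:          # branch A
        return "minor"
    return "adult"        # branch B

# Tests chosen by inspecting the code: one input per branch, plus the
# boundary value where the condition flips.
assert classify(10) == "minor"   # exercises branch A
assert classify(30) == "adult"   # exercises branch B
assert classify(18) == "adult"   # boundary case: 18 is not < 18
print("both branches covered")
```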
Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
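A minimal unit-test sketch using Python's standard unittest module (the `word_count` function stands in for the module under test; its tests are hypothetical examples):

```python
import unittest

def word_count(text):
    """Hypothetical unit under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    """Each test exercises the unit in isolation, one behavior at a time."""

    def test_simple_sentence(self):
        self.assertEqual(word_count("effective software testing"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

if __name__ == "__main__":
    # exit=False keeps the interpreter alive after the test run
    unittest.main(exit=False)
```

Because such tests need no running system around them, they are cheap to run on every build, which is what makes catching defects at this earliest stage practical.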
Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).
System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
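A load-test sketch in Python (purely illustrative: the system under test is simulated here by a function that serializes on a lock, standing in for a bottleneck such as a single database connection). The harness ramps up concurrency and measures average response time at each level:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

_db_connection = threading.Lock()  # simulated bottleneck (single DB connection)

def handle_request():
    """Stand-in for the system under test: requests serialize on one resource."""
    with _db_connection:
        time.sleep(0.001)  # simulated work inside the critical section

def measure_avg_latency(concurrent_users, requests=50):
    """Average per-request response time at a given concurrency level."""
    def timed(_):
        start = time.perf_counter()
        handle_request()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(timed, range(requests)))
    return sum(latencies) / len(latencies)

# Ramp up the load and watch where response time starts to degrade.
for users in (1, 5, 20):
    avg_ms = measure_avg_latency(users) * 1000
    print(f"{users:2d} concurrent users: avg latency {avg_ms:.2f} ms")
```

Against a real web site the same ramp-up pattern applies, but dedicated tools (e.g. JMeter or Locust) replace the simulated handler with real HTTP requests.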

Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Failover testing - typically used interchangeably with 'recovery testing'.
Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
Context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
User acceptance testing - determining if software is satisfactory to an end-user or customer.
Comparison testing - comparing software weaknesses and strengths to competing products.
Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
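As an illustrative sketch (the `add` function, its tiny suite, and the hand-written mutants are all hypothetical; real tools generate mutants automatically), mutation testing judges a test suite by how many deliberately seeded bugs it detects, or 'kills':

```python
def run_suite(add):
    """A tiny test suite for an `add` function; True if every check passes."""
    try:
        assert add(2, 3) == 5
        assert add(-1, 1) == 0
        assert add(0, 0) == 0
        return True
    except AssertionError:
        return False

original = lambda a, b: a + b

# Hand-written mutants: each introduces exactly one small code change ('bug').
mutants = [
    lambda a, b: a - b,   # '+' mutated to '-'
    lambda a, b: a * b,   # '+' mutated to '*'
    lambda a, b: a,       # second operand dropped
]

assert run_suite(original)  # the suite must pass on the unmutated code
killed = sum(1 for mutant in mutants if not run_suite(mutant))
print(f"mutation score: {killed}/{len(mutants)} mutants killed")
```

A mutant that survives signals a gap in the test data; tools such as mutmut (Python) or PIT (Java) automate generating and running the mutants, which is where the large computational cost comes from.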

Trains, jobs and news information on Gtalk - gtalk bots | Krishna's page

This Google Talk bots information provided by Krishna will certainly help job searchers, as well as ordinary internet users searching for everyday information such as trains and astrology.

Openings at Aricent

We take this opportunity to thank you for your response to our last appeal to refer candidates.

We have uploaded new requirements on

The critical requirements that need immediate attention are:

Position ID   Title                                                      City        Posted on
J0309-0336    Mainframe (Vantage)                                        Bangalore   11/3/2009
J0309-0278    T1 - SQL Server DBA                                        Bangalore   10/3/2009
J0309-0235    COGNOS ReportNet                                           Bangalore   10/3/2009
J0109-0118    T1 - Messaging                                             Bangalore   10/3/2009
J0309-0115    Senior Software Programmer - .Net, SharePoint              Pune        10/3/2009
J0309-0161    Senior Software Engineer - Oracle                          Pune        10/3/2009
J0309-0112    Wintel Admin                                               Bangalore   9/3/2009
J0109-0119    T1 - Citrix                                                Bangalore   9/3/2009
J0109-0120    T1 - Service Delivery Manager                              Bangalore   9/3/2009
J0109-0121    T1 - Technical Project Manager                             Bangalore   9/3/2009
J0109-0116    T1 - UNIX Administration                                   Bangalore   9/3/2009
J1208-0091    Oracle DBA                                                 Pune        9/3/2009
J0309-0233    Lead Analyst - MCMS (Content Management)                   Bangalore   9/3/2009
J0309-0163    Senior Software Engineer - Gentran                         Bangalore   5/3/2009
J0209-0712    Senior Software Engineer - Lotus Notes, Notrix, Domino     Pune        27/2/2009
J0309-0351    Centura Gupta                                              Bangalore   12/3/2009
J0309-0234    SSIS ETL Informatica                                       Bangalore   12/3/2009
J0309-0127    .Net Developer                                             Bangalore   12/3/2009
J0209-0027    Configuration Manager - Harvest, Rational ClearCase        Bangalore   3/2/2009
J0209-0026    Build and Release Engineer - Rational ClearCase, Harvest   Bangalore   3/2/2009
J0309-0409    Lead Analyst - Mainframes (CICS)                           Bangalore   13/3/2009
J0309-0410    Senior Software Engineer - Java                            Bangalore   13/3/2009
J0309-0411    Team Lead - Siebel Admin and Configuration                 Bangalore   13/3/2009
J0309-0412    Senior Software Engineer - Siebel Admin and Configuration  Bangalore   13/3/2009
J0309-0413    Senior Software Engineer - Informatica ETL                 Bangalore   13/3/2009
J0309-0414    Senior Software Engineer                                   Bangalore   13/3/2009
J0309-041     Senior Software Engineer - Access, SQL                     Bangalore   13/3/2009

Send your resumes to: mallela_krishnakanth@emc.com

Employee Referral - EMC Network Operations Center

Group                  -     NOC

Req Number       -     37105BR

Position              -     Network Engineer

Exp(yrs)              -     3-6


-   3-5 years relevant experience

-   Good communication skills

-   Hands-on experience with Cisco Routers / Catalyst & IOS Switches / Cisco VPN Concentrators / Load Balancers

-   Hands-on experience troubleshooting LAN switching technologies such as VLAN / STP / Layer-3 Switching / MLS

-   Hands-on experience with routing protocols such as OSPF / BGP / EIGRP

-   Hands-on experience troubleshooting WAN circuits / MPLS networks / ATM / Frame Relay networks

-   Ready for 24x7 shift operations


 # of Positions     -       1  
Do you know somebody for this role?  

EMC encourages our employees to refer individuals for employment who can help us continue our success in meeting EMC's customers' requirements for Enterprise Storage products and services.

If you know of a friend, family member, or business associate who is actively looking to make a career move and has expressed an interest in joining EMC, please send their resume through 

Openings for PostgreSQL DBA in Synechron Technologies

We are looking for PostgreSQL DBA resources for the following profiles, to start immediately offshore. The positions are very critical and we need the resources by next month, hence we would need your help in finding us the right talent at the earliest.

 Requirement - PostgreSQL DBA

 Position Purpose (Summary):  

The Database Administrator is primarily responsible for the monitoring and maintenance of production database systems in a team environment, and assists in the design and development of new application databases as well as encouraging SQL best practices.

Major Job Accountabilities:

1. Monitor and maintain OLTP and OLAP database systems in a PostgreSQL environment.

2. Ensure database backups and disaster-recovery plans are current.

3. Participate in capacity planning and system monitoring improvements.

4. Develop and improve performance of SQL, functions, and procedures to ensure timely access to data.

5. Develop knowledge of web analytics and click-stream data warehousing techniques.

6. Develop a working knowledge of Search Engine Optimization (SEO) or Search Engine Marketing (SEM) principles and techniques.

1. Understanding of database administration tasks and data models; Oracle certification.


Knowledge/Technical Skills:

1. 2+ yrs database administration/development experience, including installation, tuning, replication, backup, and monitoring of database systems. Expertise in an RDBMS such as Oracle, Sybase, or SQL Server is acceptable; however, PostgreSQL is preferred.

2. Comfortable writing and tuning SQL.

3. 2+ yrs experience with UNIX/Linux OS and concepts (processes, cron, bash, top).

4. Familiarity with source control using CVS or Subversion.

5. 1+ yrs experience creating and reading data models (ERDs).

6. Familiarity with indexing and other optimization techniques.

7. Ability to write in PL/pgSQL, PL/Perl, or PL/SQL desired.

8. Thorough understanding of relational models, joins, and sub-queries.


1. Ability to be “on-call” one week per month.

2. Must possess effective interpersonal and communication skills and the ability to work successfully in a team environment.

3. Good organizational, time-management, and prioritization skills.

4. Attention to detail and follow-through.

5. High adaptability to rapidly changing environments.

Kindly send in the references to reference@synechron.com

Openings in RSA

You know that RSA, the Security Division of EMC, is a great place to work. We are looking for more amazing people like you! Why not spread the word to some of your friends looking to join an industry leader?

We are looking for QE Engineers for the different Business Units of RSA. Please apply or refer suitable candidates for the open positions listed below. All positions are based in Bangalore.


For a combination of Manual Functional, Non-Functional, and Automation Requirements

·         Experience in Linux/Unix and Windows platforms 

·         Experience in databases, preferably Oracle, SQL and DB2

·         Scripting languages: hands-on experience in Shell, Perl, Python, etc.

·         Good programming skills : Java / C / C++/ C# / .Net

·         Experience in Test planning / Estimation/ Tracking / Cross Functional Collaboration

·         Experience in API, Automation, functional and non-Functional testing (e.g., Performance, Scalability, Reliability)  

·         Should be flexible to work on the various test types mentioned above, as per business needs.

·         Should be self-motivated, proactive, and able to execute tasks independently and within a team environment.

·         Experience: 4-10 years

Senior Test Engineer - Printer Driver

Celstream Technologies

Refer your friends for the position of Senior Test Engineer - Printer Driver for the P&P BU.

We are looking for candidates with 4-6 years of experience; a minimum of 2-3 years in test development with a Printer Driver Testing background is mandatory.

mail me: kumarayita@gmail.com

Jobs in Aris Global

Send your resumes to mahendra.bhide@arisglobal.co.in

1. Lead Developers - 5 to 9 years
- Core Java, J2EE, Struts, Spring, Hibernate, JavaScript, and design concepts
2. Test Leads (Automation) - 5 to 8 years
- Selenium & Silk Test, automation framework
3. Performance Test Leads - 6 to 8 years
- LoadRunner, database, scripting
4. Automation Testing - 4 to 8 years
- Selenium, Silk Test, automation framework, scripting
5. Oracle Developer - 6 to 10 years
- Oracle, PL/SQL, BO
6. Java Architects - 6 to 10 years
- MVC architecture, SOA, Core Java, J2EE, JSF, Struts

For other details contact Naveen PS Or GuruPrakash KR