Sunday, 18 August 2013

Guidelines for Test Case Preparation – Integration Testing & Unit Testing

Unit Test Case Preparation Guidelines
The following are the suggested action points based on which a test case can be derived and executed for unit testing.
# Test case action points to check input to the AUT
1. Validation rules of data fields do not match with the program/data specification.
2. Valid data fields are rejected.
3. Data fields of invalid class, range and format are accepted.
4. Invalid fields cause abnormal program end.
# Test case action point to check output from the AUT
1. Output messages are shown with misspelling, or incorrect meaning, or not uniform.
2. Output messages are shown when they are not supposed to be, or they are not shown when they are supposed to be.
3. Reports/Screens do not conform to the specified layout, with misspelled data labels/titles, mismatched data labels and information content, and/or incorrect data sizes.
4. Reports/Screens page numbering is out of sequence.
5. Reports/Screens breaks do not happen or happen at the wrong places.
6. Reports/Screens control totals do not tally with individual items.
7. Screen video attributes are not set/reset as they should be.
# Test case action points to check File Access
1. Data fields are not updated as input.
2. “No-file” cases cause program abnormal end and/or error messages.
3. “Empty-file” cases cause program abnormal end and/or error messages.
4. Program data storage areas do not match with the file layout.
5. The first and last input records (in a batch of transactions) are not updated.
6. The first and last records in a file are not read when they should be.
7. Deadlock occurs when the same record/file is accessed by more than one user.
# Test case action points to check internal Logic of the AUT
1. Counters are not initialized as they should be.
2. Mathematical accuracy and rounding do not conform to the prescribed rules.
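The rounding check above can be sketched as a small unit test. This is an illustrative example only (Python is used for illustration and `round_amount` is a hypothetical helper): the point is that the test should assert the *prescribed* rounding rule rather than the language default, since defaults differ (Python's built-in `round()` uses banker's rounding, while business specifications often prescribe round-half-up).

```python
from decimal import Decimal, ROUND_HALF_UP

def round_amount(value, places=2):
    """Round a monetary amount half-up, as a spec might prescribe.
    Hypothetical helper, for illustration only."""
    quantum = Decimal("1").scaleb(-places)  # e.g. Decimal("0.01") for 2 places
    return Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP)

# The built-in round() uses banker's rounding, so round(2.675, 2) may not
# give 2.68. A unit test pins down the rule the specification demands:
assert round_amount(2.675) == Decimal("2.68")  # half-up, per the assumed spec
assert round_amount(2.674) == Decimal("2.67")
```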
# Test case action points to check Job Control Procedures
1. A wrong program is invoked and/or the wrong library/files are referenced.
2. Program execution sequence does not follow the JCL condition codes setting.
3. Run time parameters are not validated before use.
# Test case action point to check the program documentation
1. Supportive documentation (inline help, manuals, etc.) is not consistent with the program behavior.
2. The information inside the operations manual is not clear or is inconsistent with the application system.
3. The operations manual does not cover all the operating procedures of the system.
# Test case action point to check program structure (through program walkthrough)
Coding structure does not follow coding standards.
# Test case action point to check the performance of the AUT
The program runs longer than the specified response time.
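A minimal sketch of such a performance check, assuming a hypothetical transaction and a hypothetical two-second response-time budget (both names and limits are illustrative, not from any specification):

```python
import time

RESPONSE_TIME_LIMIT = 2.0  # seconds; a hypothetical specified response time

def run_transaction():
    """Stand-in for the program step under test (hypothetical)."""
    time.sleep(0.01)

start = time.perf_counter()
run_transaction()
elapsed = time.perf_counter() - start

# The test fails if the run exceeds the specified response time.
assert elapsed <= RESPONSE_TIME_LIMIT, (
    f"transaction took {elapsed:.2f}s, limit is {RESPONSE_TIME_LIMIT}s")
```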
Sample Test Cases
1. Screen label checks.
2. Screen video checks with test data set.
3. Creation of record with valid data set.
4. Rejection of record with invalid data set.
5. Error handling upon empty file.
6. Batch program run with test data set.
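Sample cases 3 and 4 above (creation with valid data, rejection with invalid data) might look like the following sketch. Everything here is hypothetical and for illustration only: `create_record` and its validation rules stand in for whatever the program specification actually prescribes.

```python
import unittest

def create_record(store, record):
    """Hypothetical record-creation routine with input validation."""
    if not record.get("id") or not isinstance(record.get("amount"), (int, float)):
        raise ValueError("invalid record")
    store.append(record)

class RecordCreationTests(unittest.TestCase):
    def test_valid_record_is_created(self):
        store = []
        create_record(store, {"id": "A001", "amount": 10.0})
        self.assertEqual(len(store), 1)

    def test_invalid_record_is_rejected(self):
        store = []
        with self.assertRaises(ValueError):
            create_record(store, {"id": "", "amount": "ten"})
        self.assertEqual(store, [])  # rejected input must not update data

if __name__ == "__main__":
    unittest.main(argv=["record-tests"], exit=False)
```

Note that the rejection test checks both that the error is raised and that the data store is left unchanged, matching the file-access action point "data fields are not updated as input."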
Integration Test Case Preparation Guidelines
The following are the suggested action points based on which the test case can be derived and executed for integration testing.
# Test case action point to check global data (e.g. Linkage Section)
Global variables have different definitions and/or attributes in the programs that reference them.
# Test case action point to check program interfaces
1. The called programs are not invoked while they are supposed to be.
2. Any two interfaced programs have different number of parameters, and/or the attributes of these parameters are defined differently in the two programs.
3. Passing parameters are modified by the called program while they are not supposed to be.
4. Called programs behave differently when the calling program calls twice with the same set of input data.
5. File pointers held in the calling program are destroyed after another program is called.
# Test case action point to check consistency among programs
The same error is treated differently (e.g. with different messages, with different termination status etc.) in different programs.
Sample Test Cases
1. Interface test between programs xyz, abc & jkl.
2. Global (memory) data file 1 test with data set 1.
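An interface test between two programs can be sketched along these lines. The modules and values below are hypothetical stand-ins; the sketch covers two of the action points above: the called program must not modify the passed parameters, and calling twice with the same input must behave identically.

```python
def tax_calculator(amounts):
    """Hypothetical 'called program': returns total tax, must not mutate input."""
    return sum(a * 0.1 for a in amounts)

def billing(amounts):
    """Hypothetical 'calling program' that delegates to tax_calculator."""
    return sum(amounts) + tax_calculator(amounts)

params = [100.0, 200.0]
snapshot = list(params)  # copy taken before the call

first = billing(params)
# Action point 3: passing parameters are not modified by the called program.
assert params == snapshot, "called program modified the passed parameters"

second = billing(params)
# Action point 4: the same input twice must produce the same behavior.
assert first == second, "same input twice produced different results"
```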

How and When Testing Starts

To improve the reliability and performance of an information system, it is always better to involve the testing team right from the beginning of the requirements analysis phase. The active involvement of the testing team gives the testers a clear vision of the functionality of the system, from which we can expect a better-quality product with fewer errors.
Once the Development Team Lead analyzes the requirements, he prepares the System Requirement Specification and the Requirement Traceability Matrix. After that he schedules a meeting with the Testing Team (the Test Lead and the testers chosen for that project), in which he explains the project, the overall schedule of modules, the deliverables, and the versions.
The involvement of the Testing Team starts from here. The Test Lead prepares the Test Strategy and the Test Plan, which together form the schedule for the entire testing process. Here he plans when each phase of testing, such as Unit Testing, Integration Testing, System Testing, and User Acceptance Testing, will take place. Organizations generally follow the V-Model for their development and testing.
After analyzing the requirements, the Development Team prepares the System Requirement Specification, Requirement Traceability Matrix, Software Project Plan, Software Configuration Management Plan, Software Measurements/Metrics Plan, and Software Quality Assurance Plan, and moves to the next phase of the software life cycle, i.e., Design. Here they prepare some important documents: the Detailed Design Document, an updated Requirement Traceability Matrix, the Unit Test Cases Document (prepared by the developers if there are no separate white-box testers), the Integration Test Cases Document, the System Test Plan Document, and review and SQA audit reports for all test cases.
After preparation of the Test Plan, the Test Lead distributes the work to the individual testers (white-box testers and black-box testers). The testers' work starts from this stage: based on the Software Requirement Specification/Functional Requirement Document, they prepare Test Cases using a standard template or an automation tool, and then send them to the Test Lead for review. Once the Test Lead approves them, they prepare the Test Environment/test bed, which is used specifically for testing. Typically the Test Environment replicates the client-side system setup. At this point we are ready for testing. While the testing team works on the Test Strategy, Test Plan, and Test Cases, the Development Team works in parallel on their individual modules. Three or four days before the first release, they give an interim release to the Testing Team, who deploy that software on the test machine, and the actual testing starts. The Testing Team handles configuration management of the builds.
After that, the testing team tests against the Test Cases that were already prepared and reports bugs in a bug report template or an automation tool (depending on the organization). They track the bugs by changing the status of each bug at every stage. Once Cycle #1 testing is done, they submit the Bug Report to the Test Lead, who discusses these issues with the Development Team Lead; the developers then work on those bugs and fix them. After all the bugs are fixed, they release the next build. Cycle #2 testing starts at this stage: now we have to run all the Test Cases again and check whether all the bugs reported in Cycle #1 have been fixed.
Here we also do regression testing, meaning we check whether the changes in the code have any side effects on the already tested code. We repeat the same process until the delivery date. Generally we document the information for four cycles in the Test Case Document. At the time of release there should not be any high-severity, high-priority bugs. It may still contain some minor bugs, which are to be fixed in the next iteration or release (generally called deferred bugs). At the end of delivery, the Test Lead and the individual testers prepare reports. Sometimes the testers also participate in code reviews, which is static testing: they check the code against a checklist of historical logical errors, indentation, and proper commenting. The testing team is also responsible for keeping track of change management, to deliver a qualitative, bug-free product.

Software Testing Requirements

Software testing is not an activity to take up only when the product is ready. Effective testing begins with a proper plan from the user-requirements stage itself. Software testability is the ease with which a computer program can be tested. Metrics can be used to measure the testability of a product. The requirements for effective testing are given in the following sub-sections.
Operability: The better the software works, the more efficiently it can be tested.
•The system has few bugs (bugs add analysis and reporting overhead to the test process)
•No bugs block the execution of tests
•The product evolves in functional stages (allows simultaneous development and testing)
Observability: What is seen is what is tested
•Distinct output is generated for each input
•System states and variables are visible or queriable during execution
•Past system states and variables are visible or queriable (e.g., transaction logs)
•All factors affecting the output are visible
•Incorrect output is easily identified
•Incorrect input is easily identified
•Internal errors are automatically detected through self-testing mechanisms
•Internal errors are automatically reported
•Source code is accessible
Controllability: The better the software is controlled, the more the testing can be automated and optimized.
•All possible outputs can be generated through some combination of input
•All code is executable through some combination of input
•Software and hardware states can be controlled directly by testing
•Input and output formats are consistent and structured
•Tests can be conveniently specified, automated, and reproduced.
Decomposability: By controlling the scope of testing, problems can be isolated quickly, and smarter testing can be performed.
•The software system is built from independent modules
•Software modules can be tested independently
Simplicity: The less there is to test, the more quickly it can be tested
•Functional simplicity
•Structural simplicity
•Code simplicity
Stability: The fewer the changes, the fewer the disruptions to testing
•Changes to the software are infrequent
•Changes to the software are controlled
•Changes to the software do not invalidate existing tests
•The software recovers well from failures
Understandability: The more information we have, the smarter we will test
•The design is well understood
•Dependencies between internal, external, and shared components are well understood.
•Changes to the design are communicated.
•Technical documentation is instantly accessible
•Technical documentation is well organized
•Technical documentation is specific and detailed
•Technical documentation is accurate

Creating Backup of Files While Deleting

Problem Description – We usually tend to use the rm -f or rm -rf commands to delete files and directories in UNIX/Linux. There is a high possibility that some important files get deleted by mistake (as in the case of rm -f *).
Solution – One way is to create a hard link to the file under a name that starts with a dot (.). Normally, shell globs such as rm * or cp * do not match dot files, so the hidden link survives.
Example:
ln ABC_file .ABC_file
rm -f *   # the data of "ABC_file" is saved, because the hidden hard link ".ABC_file" still references it

Using Built-in Environment Variables in QTP

QuickTest provides a set of built-in variables that enable you to use current information about the test and the QuickTest computer running your test. These can include the test name, the test path, the operating system type and version, and the local host name.
For example, you may want to perform different checks in your test based on the operating system being used by the computer that is running the test. To do this, you could include the OSVersion built-in environment variable in an If statement. You can also select built-in environment variables when parameterizing values. For more information, see Setting Environment Variable Parameter Options. The following built-in environment variables are available:
Name – Description
ActionIteration – The action iteration currently running.
ControllerHostName – The name of the controller’s computer. This variable is relevant only when running as a GUI VUser from the LoadRunner controller.
GroupName – The name of the group in the running scenario. This variable is relevant only when running as a GUI VUser from the LoadRunner controller.
LocalHostName – The local host name.
OS – The operating system.
OSVersion – The operating system version.
ProductDir – The folder path where the product is installed.
ProductName – The product name.
ProductVer – The product version.
ResultDir – The path of the folder in which the current test results are located.
ScenarioId – The identification number of the scenario. This variable is relevant only when running as a GUI VUser from the LoadRunner controller.
SystemTempDir – The system temporary directory.
TestDir – The path of the folder in which the test is located.
TestIteration – The test iteration currently running.
TestName – The name of the test.
UpdatingActiveScreen – Indicates whether the Active Screen images and values are being updated during the update run process. For more information, see Updating a Test Using the Update Run Mode Option.
UpdatingCheckpoints – Indicates whether checkpoints are being updated during the update run process. For more information, see Updating a Test Using the Update Run Mode Option.
UpdatingTODescriptions – Indicates whether the set of properties used to identify test objects is being updated during the update run process. For more information, see Updating a Test Using the Update Run Mode Option.
UserName – The Windows login user name.
VuserId – The VUser identification under load. This variable is relevant only when running as a GUI VUser from the LoadRunner controller.
Note: You cannot use the ResultDir environment variable when running a test from Business Availability Center, LoadRunner, or the Silent Test Runner in QuickTest.
Source – QuickTest Professional User’s Guide.