Sunday, 18 August 2013

Different Types of Testing

This article explains the different types of testing: Unit Test, System Test, Integration Test, Functional Test, Performance Test, Beta Test and Acceptance Test.
Introduction:
The development process involves various types of testing. Each test type addresses a specific testing requirement. The most common types of testing involved in the development process are:
• Unit Test
• System Test
• Integration Test
• Functional Test
• Performance Test
• Beta Test
• Acceptance Test
Unit Test – The first test in the development process is the unit test. The source code is normally divided into modules, which in turn are divided into smaller pieces called units. These units have specific behavior. The testing done on these units of code is called unit testing. Unit testing depends on the language in which the project is developed. Unit tests ensure that each unique path of the project performs accurately to the documented specifications and contains clearly defined inputs and expected results.
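As a hedged illustration (not from the original article), a minimal unit test might look like the following Python sketch; the add function and its expected results are hypothetical:

import unittest

def add(a, b):
    # Hypothetical unit under test: returns the sum of two numbers.
    return a + b

class AddTests(unittest.TestCase):
    def test_add_positive_numbers(self):
        # Clearly defined input (2, 3) and expected result (5), as the specification would state.
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()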
System Test – Several modules constitute a project. If the project is a long-term project, several developers write the modules. Once all the modules are integrated, several errors may arise. The testing done at this stage is called system testing.
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.
Testing a specific hardware/software installation is typically performed on a COTS (commercial off-the-shelf) system or any other system composed of disparate parts, where custom configurations and/or unique installations are the norm.
Functional Test – Functional test can be defined as testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the module performs its intended functions as stated in the specification and establishing confidence that a program does what it is supposed to do.
Acceptance Testing – Testing the system with the intent of confirming readiness of the product and customer acceptance.
Ad Hoc Testing – Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.
Alpha Testing – Testing after code is mostly complete or contains most of the functionality and prior to users being involved. Sometimes a select group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.
Automated Testing – Software testing that uses a variety of tools to automate the testing process, reducing the need for a person to test manually. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tool and the software being tested to set up the tests.
Beta Testing – Testing after the product is code complete. Betas are often widely distributed or even distributed to the public at large in hopes that they will buy the final product when it is released.
Black Box Testing – Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.
Compatibility Testing – Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.
Configuration Testing – Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.
Independent Verification & Validation – The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn’t fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. A term often applied to government work or where the government regulates the products, as in medical devices.
Installation Testing – Testing with the intent of determining if the product will install on a variety of platforms and how easily it installs.
Integration Testing – Testing two or more modules or functions together with the intent of finding interface defects between the modules or functions. It is sometimes completed as a part of unit or functional testing, and sometimes becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. (see system testing)
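For example (a hedged sketch, not from the article; both modules are hypothetical), an integration test in Python might exercise two pieces of code together and check the interface between them:

import unittest

# Hypothetical modules being integrated.
def fetch_price(item):
    prices = {"pen": 10, "book": 50}
    return prices[item]

def total_cost(items):
    # Calls into fetch_price; an interface defect here (wrong key, wrong type)
    # is exactly what an integration test should catch.
    return sum(fetch_price(item) for item in items)

class CartIntegrationTest(unittest.TestCase):
    def test_total_uses_price_module(self):
        self.assertEqual(total_cost(["pen", "book"]), 60)

if __name__ == "__main__":
    unittest.main()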
Load Testing – Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.
Performance Testing – Testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.
Pilot Testing – Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. It is often considered a Move-to-Production activity for ERP releases or a beta test for commercial products. It typically involves many users, is conducted over a short period of time and is tightly controlled. (see beta testing)
Regression Testing – Testing with the intent of determining if bug fixes have been successful and have not created any new problems. Also, this type of testing is done to ensure that no degradation of baseline functionality has occurred.
Security Testing – Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.
Software Testing – The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn’t fail in an unacceptable manner. The organization and management of individuals or groups doing this work is not relevant. This term is often applied to commercial products such as internet applications. (contrast with independent verification and validation)
Stress Testing – Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity.
White Box Testing – Testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose.

LoadRunner Facts (Goal of Performance Testing)

LoadRunner Facts
1. If we do not have baseline timings for an application, we should run the scripts for a single user and then compare those timings with the response times under load.
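As a rough illustration (the numbers are assumed, not taken from LoadRunner output), the comparison could be expressed like this in Python:

# Hypothetical timings in seconds: single-user run as baseline
# versus the same transaction measured under load.
baseline_response_time = 1.2   # one-user run
load_response_time = 3.8       # same script under concurrent load

degradation = load_response_time / baseline_response_time
print(f"Response time degraded by a factor of {degradation:.1f} under load")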
Goal of Performance Testing
The goal of performance testing is not to find bugs, but to remove bottlenecks from the application and improve its efficiency.
Before doing performance testing, we basically need to know the following points –
1. Expected number of concurrent users or HTTP connections to your application
2. Acceptable response time for your pages
For performance tuning we basically have two approaches.
In Approach 1 (white-box), we can do the following:
Code Analysis – We can search for poor algorithms or inefficient looping, which are common causes of inefficiency (a small example follows this list).
Database Analysis – We can use query optimizers and profilers to optimize the database.
Hardware & Network – We can use utilities such as top and iostat to monitor hardware resources, and ntop and netstat to monitor the network and sockets.
In Approach 2 (black-box), for a web application, testers use tools that simulate concurrent users/HTTP connections and measure the response times automatically. If the response time does not meet your expectations, tuning has to be done at the application, hardware or database level.
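A minimal sketch of this idea in Python (the URL, user count and acceptable threshold are assumptions; real tools such as LoadRunner or JMeter do far more) could look like this:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"    # hypothetical page under test
CONCURRENT_USERS = 10          # assumed number of concurrent users
ACCEPTABLE_SECONDS = 2.0       # assumed acceptable response time

def timed_request(_):
    start = time.time()
    urllib.request.urlopen(URL, timeout=30).read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    timings = list(pool.map(timed_request, range(CONCURRENT_USERS)))

average = sum(timings) / len(timings)
print(f"average response time: {average:.2f}s "
      f"({'OK' if average <= ACCEPTABLE_SECONDS else 'needs tuning'})")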
In tuning, we first need to improve the efficiency of the application code, then we can optimize the database.
If your application still does not meet your requirements, the following steps will help:
1. Using cache mechanisms (see the caching sketch after this list).
2. Publish highly requested pages statically, so that they don’t hit the database.
3. Scaling Web servers horizontally via load balancing.
4. Scaling database servers horizontally and splitting them into read/write servers and read-only servers.
5. Scaling the servers vertically by adding more hardware resources (CPU, RAM).
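To make step 1 concrete, here is a minimal caching sketch in Python (the function and its cost are hypothetical); the same idea applies to page caches or external caches such as memcached:

import time
from functools import lru_cache

@lru_cache(maxsize=1024)
def get_product_page(product_id):
    # Hypothetical expensive operation (stands in for a database query plus page rendering).
    time.sleep(0.5)
    return f"<html>product {product_id}</html>"

get_product_page(42)   # slow: does the expensive work
get_product_page(42)   # fast: served from the cache, no database hit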
Points to remember:
We should take care to modify only one variable at a time and then redo the measurements.
Functionally, the application should be well tested and of good quality; i.e., the software under test should already be stable enough that the performance testing process can proceed smoothly.

Guideline for Database Testing

You have to do the following to write database test cases.
1. First of all, you have to understand the functional requirement of the application (SRS) thoroughly.
2. Then you have to find out the back-end tables used, the joins used between the tables, the cursors used (if any), the triggers used (if any), the stored procedures used (if any), and the input and output parameters used to develop that requirement.
3. After knowing all these things, you have to write test cases with different input values that check all the paths of the stored procedures and compare the actual results with the expected results.
One thing to note: writing test cases for back-end testing is not like functional testing; you have to use white box testing techniques.
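As a hedged illustration of such a back-end test case (using an in-memory SQLite database in place of whatever database and stored procedures the application actually uses; the table and values are hypothetical):

import sqlite3
import unittest

class OrderTotalTest(unittest.TestCase):
    def setUp(self):
        # In-memory database standing in for the application's back end.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
        self.conn.executemany("INSERT INTO orders VALUES (?, ?)",
                              [(1, 100.0), (2, 250.0)])

    def test_total_amount_matches_expected(self):
        # Compare the actual result of the back-end query with the expected result.
        (actual,) = self.conn.execute("SELECT SUM(amount) FROM orders").fetchone()
        self.assertEqual(actual, 350.0)

    def tearDown(self):
        self.conn.close()

if __name__ == "__main__":
    unittest.main()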

EDI over Internet

EDI over Internet – Electronic Data Interchange (EDI) was developed as the de facto standard for organizations to exchange information electronically. However, it involves significant investment to deploy, which limits its use to large organizations. With the aim of broadening the user base and allowing everyone to enjoy the benefits of e-commerce at a lower cost and in a simpler manner, various public standards have emerged. Individual organizations may use standards different from those of their business partners, which creates a need to deploy several applications to support the different standards used by their business partners.

Limitations of QTP 8.2

The limitations listed below are specifically for QTP 8.2:
Maximum worksheet size—65,536 rows by 256 columns
Column width—0 to 255 characters
Text length—16,383 characters
Formula length—1024 characters
Number precision—15 digits
Largest positive number—9.99999999999999E307
Largest negative number— -9.99999999999999E307
Smallest positive number—1E-307
Smallest negative number— -1E-307
Maximum number of names per workbook—Limited by available memory
Maximum length of name—255
Maximum length of format string—255
Maximum number of tables (workbooks)—Limited by system resources (windows and memory)
Theoretically, we can say that there is no limitation on creating Actions in QTP.
But Microsoft Excel supports only 256 sheets, and a separate local data sheet is created for every Action (with one sheet used by the Global data sheet), so because of this limitation on the number of sheets in Microsoft Excel we can create 255 Actions. After 255 Actions have been created in a QTP test, further Actions can still be created, but no data sheet will be created for any Action beyond the 255th.
Generally, test scripts will fail if the system gets locked. This limitation is not specific to QTP; it happens with most tools. The reason is that focus shifts from the current execution process to the OS. We have overcome this problem by getting the Admin Pack installed for Windows XP. The Admin Pack basically enables OS activities to operate independently of the ongoing process, so the execution process continues to have focus even if the system gets locked. You could contact your sysadmin personnel for more information on the Admin Pack.