Sunday, 18 August 2013

Limitations of QTP 8.2

The limitations listed below apply specifically to QTP 8.2; most of them come from the Microsoft Excel workbook format behind the QTP data table:
Maximum worksheet size—65,536 rows by 256 columns
Column width—0 to 255 characters
Text length—16,383 characters
Formula length—1024 characters
Number precision—15 digits
Largest positive number—9.99999999999999E307
Largest negative number— -9.99999999999999E307
Smallest positive number—1E-307
Smallest negative number— -1E-307
Maximum number of names per workbook—Limited by available memory
Maximum length of name—255 characters
Maximum length of format string—255 characters
Maximum number of tables (workbooks)—Limited by system resources (windows and memory)
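Since the data table is Excel-backed, one practical consequence of the 16,383-character text limit is that long strings written to a cell can be silently truncated. Here is a minimal, hedged sketch of guarding against that in a QTP test (the column name is hypothetical; DataTable, GlobalSheet, AddParameter, and dtGlobalSheet are standard QTP data table members):
' Guard a long value against Excel's per-cell text limit before storing it.
Const MAX_CELL_TEXT = 16383
longValue = String(20000, "x")   ' example oversized value
If Len(longValue) > MAX_CELL_TEXT Then
    longValue = Left(longValue, MAX_CELL_TEXT)
End If
DataTable.GlobalSheet.AddParameter "ResponseBody", ""   ' hypothetical column
DataTable("ResponseBody", dtGlobalSheet) = longValue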
Theoretically we can say that there is no limitation on creating Actions in QTP.
But Microsoft Excel supports only 256 sheets per workbook, and a separate local data sheet is created for every action (in addition to the Global sheet). Because of this limit on the number of Excel sheets, we can create only 255 Actions. After creating 255 Actions in a QTP test, further actions will still be created, but no data sheet will be created for any action beyond the 255th.
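If you want to see the one-data-sheet-per-action behavior for yourself, the run-time data table can report its sheet count; a minimal sketch (GetSheetCount is a standard QTP DataTable property):
' Expect 1 Global sheet plus one local sheet per action.
sheetCount = DataTable.GetSheetCount
MsgBox "The data table currently holds " & sheetCount & " sheets"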
Generally, test scripts will fail in case the system gets locked. This limitation is not specific to QTP; it happens with most tools, because focus shifts from the current execution process to the OS. We overcame this problem by getting the Admin Pack installed for Windows XP. The Admin Pack basically enables OS activities to operate independently of the ongoing process, so the execution process continues to have focus even if the system gets locked. I suppose you could contact your sysadmin personnel to get more info on the Admin Pack.

Introduction of Performance Engineering Quick Review Sheet

Goals
—–
* Determine the client’s needs and expectations.
* Ensure accountability can be assigned for detected performance issues.
Pre-Signing
————
1. How many users (human and system) need to be able to use the system concurrently?
a. What is the total user base?
b. What is the projected acceptance rate?
c. How are the users distributed across the day/week/month?
d. Example: Assume evenly distributed billing cycles throughout the month, users
spending about 15 minutes each viewing and paying their bill, and a site generally
accessed between 9 AM EST and 6 PM PST (15 hours). Calculate concurrent
users with the following formula:
(total monthly users) / (30 days per month * 15 hours per day * 4 sessions per hour
{note: 60 min / 15 min per user}) = daily average concurrent user load
Normally test to 200% of the daily average concurrent user load.
With 1 million monthly users:
1,000,000 / (30 * 15 * 4) = 556 concurrent users (2,222 hourly users). At 200%, recommend
testing up to about 1,112 concurrent users (4,444 hourly users).
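As a quick cross-check of the arithmetic above, here is a minimal VBScript sketch (the inputs mirror the 1-million-user example; nothing here is QTP-specific):
' Daily average concurrent user load from monthly volume.
Function ConcurrentUsers(monthlyUsers, daysPerMonth, hoursPerDay, sessionsPerHour)
    ConcurrentUsers = monthlyUsers / (daysPerMonth * hoursPerDay * sessionsPerHour)
End Function

avgLoad = ConcurrentUsers(1000000, 30, 15, 4)   ' ~556 concurrent users
testLoad = avgLoad * 2                          ' 200% target: ~1,112 users
MsgBox "Average: " & Round(avgLoad) & ", test target: " & Round(testLoad)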
2. General performance objectives
a. Can the system service up to a peak of ???? customers an hour without losing
customers due to performance issues?
b. Is the system stable and functional under extreme stress conditions?
3. What are the project boundaries, such as:
a. Are all bottlenecks resolved? Or…
b. Best possible performance in ?? months? Or…
c. Continue tuning until goals are met? Or…
d. Continue tuning until bottlenecks are deemed “unfixable” for the current release?
Pre-Testing
———–
1. What are the specific/detailed performance requirements?
a. User Experience (preferred), e.g.: with 500 concurrent users, a user accessing the
site over a LAN connection will wait no more than 5 seconds for a small or medium
bill detail report to display, 95% of the time (see the timing sketch after this list)
b. Component Metric (not recommended), e.g.: the database server will keep memory
usage under 75% during all tested user loads
2. Detailed description of the test and production environments.
a. Associated risks if not identical
b. How best to extrapolate performance results if the environments are not identical
3. What is the availability of client system administrators/developers/architects?
4. What is the test schedule?
a. Can tests be executed during business hours?
b. Must tests be executed during off hours?
c. Must tests be executed during weekends?
5. Are system monitoring tools already installed/being used on the systems under test?
a. Perfmeter, Perfmon, Top
b. Who will monitor/evaluate monitoring tools on client machines?
6. Are any other applications/services running on the systems to be tested?
a. Think about associated risks to shared environments, like memory, disk I/O, drive
space, etc.
7. What types of tests/users/paths are desired?
a. Target about 80% of user activity
b. Always model system-intensive activity
8. Based on answers, create Test Strategy Document
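For the user-experience style of requirement in item 1, here is a minimal sketch of a single timing check in QTP (the object names are hypothetical; Timer is plain VBScript, and Reporter.ReportEvent with micPass/micFail is the standard QTP reporting call):
' Time one "display bill detail" step against the 5-second goal.
startTime = Timer
Browser("Billing").Page("Bill Detail").Sync   ' hypothetical objects
elapsed = Timer - startTime

If elapsed <= 5 Then
    Reporter.ReportEvent micPass, "Bill detail timing", elapsed & " sec"
Else
    Reporter.ReportEvent micFail, "Bill detail timing", elapsed & " sec (goal: 5 sec)"
End If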
Test Design/Execution
———————-
1. Design tests to validate requirements
a. Create User Experience tests
b. Generate loads and collect Component Metric data while under load
2. Always benchmark the application in a known environment first
a. The architecture need not match production
b. Benchmark tests do not need to follow the user community model exactly
c. Benchmark tests should represent about 15% of the expected peak load
d. Look for obvious problems (“low hanging fruit”)
e. Do not focus on maximizing User Experience; just look for show-stopping bottlenecks
3. Benchmark in the production environment if possible
a. Look for problems with code/implementation
b. Look for out-of-scope problems that the client must fix to meet performance goals
(i.e. network, architecture, security, firewalls)
c. This benchmark must be identical to the benchmark conducted in the known
environment (taking into account differences in architecture)
d. Do not focus on maximizing User Experience; just look for show-stopping bottlenecks
4. Load test (see the ramp-up sketch after this list)
a. Iteratively test/tune/increase load
b. Start with about the same load as the benchmark, but accurately depicting the user
community
c. Do not tune until the critical (primary) bottleneck cause is found; do not tune
symptoms, and do not tune components that “could be faster” unless you can prove
they are the primary bottleneck
d. Re-test after each change to validate that it helped; if it didn’t, change it back and
try the next most likely cause of the bottleneck. Make a note of the change
in case it becomes the primary bottleneck later
e. If bottlenecks are client-environment related (i.e. out of scope), document the
bottleneck and present it to the PM for guidance. Stop testing until the client/PM
agree on an approach
f. *Note* If you need performance improved by 200% to reach the goal, tuning a
method to execute in 0.2 seconds instead of 0.25 seconds will not fix the problem
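A minimal sketch of the benchmark-then-ramp idea above; the numbers reuse the Pre-Signing example, and the test run itself is a placeholder, since real load generation comes from a load tool rather than plain VBScript:
' Start near the benchmark load (~15% of expected peak) and ramp up,
' tuning only proven bottlenecks between runs.
expectedPeak = 1112                        ' 200% of the 556-user average
currentLoad = Round(expectedPeak * 0.15)   ' benchmark: ~167 virtual users

Do While currentLoad <= expectedPeak
    ' Placeholder: run the load tool at currentLoad virtual users,
    ' collect metrics, fix the primary bottleneck, and re-test.
    MsgBox "Run test at " & currentLoad & " virtual users"
    currentLoad = currentLoad * 2
Loop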
Results Reporting
—————–
1. Report test results related to stated requirements only.
a. Show summarized data validating requirements
b. If requirements are not met, show data explaining why, what needs to be fixed,
who needs to fix it, and by when
c. Show areas of potential future improvement that are out of scope
d. Show summary of tuning/settings/configurations if it adds value to the client
e. Be prepared to deliver formatted raw data as back-up

Introduction of QTP Framework

Linear FW is individual scripts written in QTP, with each script executed on its own.
Modular FW is a collection of actions created as reusable in one test and then called from other tests.
Keyword Driven FW is creating function libraries and calling the function names as keywords in the test using the default.xls data table (see the sketch below).
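To make the keyword-driven idea concrete, here is a minimal sketch of a driver loop; the Keyword column and the Login/Logout functions are hypothetical stubs, while DataTable.GetSheet, SetCurrentRow, and dtGlobalSheet are standard QTP data table members:
' Read a keyword from each row of the Global sheet and dispatch to a
' function that would normally live in an associated function library.
Sub Login()
    MsgBox "Login"    ' stub for illustration
End Sub
Sub Logout()
    MsgBox "Logout"   ' stub for illustration
End Sub

rowCount = DataTable.GetSheet(dtGlobalSheet).GetRowCount
For i = 1 To rowCount
    DataTable.GetSheet(dtGlobalSheet).SetCurrentRow i
    keyword = DataTable("Keyword", dtGlobalSheet)   ' hypothetical column
    Select Case keyword
        Case "Login"  : Login
        Case "Logout" : Logout
    End Select
Next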
Here are the reasons why we should use a custom framework.
1) I created some scripts using QTP's built-in Object Repository on my PC's C drive. Then
management decided that everyone should be able to log into the network via a Citrix server, open
QTP there, and execute the different scripts. But the scripts could not be executed, and I got the
error message "Action 1 not found", because all the scripts had been created on my PC's C drive.
If I had used a custom framework instead of QTP's built-in Object Repository, I could have
executed the scripts from the network.
2) You can start coding even when your application is not ready.
Let's take the Login page as an example. The description of the Login page will be:
Code:
——————————————————————————–
Object Type    Property    Property Value
-----------    --------    --------------
Browser        title       Scope
Page           name        Scope
WebEdit        name        User Id
WebEdit        name        Pwd
WebButton      name        Submit
--------------------------------------------------------------------------------
Based on the above description you can create a Dynamic Object Repository as below.
B = "title:=Scope"              ' Browser description (from the table above)
P = "name:=Scope"               ' Page description
Set BP = Browser(B).Page(P)     ' shorthand reference to the Login page
LO_UserId_WE = "name:=User Id"  ' WebEdit: User Id
LO_Pwd_WE = "name:=Pwd"         ' WebEdit: Pwd
LO_Submit_WB = "name:=Submit"   ' WebButton: Submit
The script could be:
BP.WebEdit(LO_UserId_WE).Set “Prashant”
BP.WebEdit(LO_Pwd_WE).Set “Patel”
BP.WebButton(LO_Submit_WB).Click
3) No need to keep multiple copies of the same types of objects.
For example: Page 1, Page 2, and Page 3 each have Submit, Cancel, and Reset buttons. When
QTP learns the objects it captures three objects for each page, totalling 9. With a
Dynamic Object Repository, however, you declare each button only once, totalling 3 (see the sketch below).
4) You can make changes to the Dynamic Object Repository without changing the script.
For example: if an object's property value changes, only the repository declaration needs updating; the script itself does not change (again, see the sketch below).
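Here is a minimal sketch illustrating reasons 3 and 4 together (the browser, page, and button names are hypothetical):
' Reason 3: declare each shared button once...
WB_Submit = "name:=Submit"
WB_Cancel = "name:=Cancel"
WB_Reset = "name:=Reset"
' ...and reuse the same description on any page.
Browser("title:=Scope").Page("name:=Page 1").WebButton(WB_Submit).Click
Browser("title:=Scope").Page("name:=Page 2").WebButton(WB_Cancel).Click
' Reason 4: if developers rename the button, only the declaration changes:
' WB_Submit = "name:=SubmitOrder"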

Difference Between Alpha and Beta Testing

Alpha testing is the final testing before the software is released to the general public. First (the first phase of alpha testing), the software is tested by in-house developers, using debugger software or hardware-assisted debuggers; the goal is to catch bugs quickly. Then (the second phase of alpha testing), the software is handed over to us, the software QA staff, for additional testing in an environment similar to the intended use.
Following alpha testing, “beta versions” of the software are released to a group of people for limited public testing, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.
Difference between Alpha and Beta Testing
In-house developers and software QA personnel perform alpha testing.
A few select prospective customers or the general public performs beta testing.
Difference between Verification and Validation
Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings. You CAN learn to do verification, with little or no outside help.
Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.
Difference between Verification and Validation:
Verification takes place before validation, and not vice versa. Verification evaluates documents, plans, code, requirements, and specifications; validation, on the other hand, evaluates the product itself. The inputs to verification are checklists, issues lists, walkthroughs, inspection meetings, and reviews. The input to validation is the actual testing of an actual product. The output of verification is a nearly perfect set of documents, plans, specifications, and requirements. The output of validation is a nearly perfect, actual product.

Introduction of Web Testing Parameters

1. Validation
1. Validate the HTML (W3C)
2. Validate the CSS (Cascading Style Sheets: a style sheet language used to describe the
presentation of a document written in a markup language)
3. Check for broken links (see the link-check sketch at the end of this section)
2. Flexibility
1. Try varying window sizes
2. Try varying font sizes & consistency
3. Performance (with benchmarking)
1. Access the site via a modem
2. Check image size specifications
4. Accessibility
1. Test accessibility
2. View in a text browser (ALT text, consistency)
5. Browser independence, followed by compatibility testing
1. Try different browsers
2. Check printed pages
3. Switch JavaScript off (via the browser's Internet options)
4. Switch plug-ins off
5. Switch images off
6. Other checks
1. Check non-reliance on mailto links
2. Check that there are no orphan pages
3. Check for sensible page titles
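For the broken-link check in item 1, here is a minimal QTP-style sketch that enumerates the links on a page using descriptive programming; the browser/page descriptions are hypothetical, Description.Create, ChildObjects, and GetROProperty are standard QTP methods, and the actual HTTP status check is left as a placeholder:
' List every Link object on the page so its target can be verified.
Set linkDesc = Description.Create()
linkDesc("micclass").Value = "Link"

Set pageObj = Browser("title:=Scope").Page("name:=Scope")   ' hypothetical
Set links = pageObj.ChildObjects(linkDesc)

For i = 0 To links.Count - 1
    url = links(i).GetROProperty("href")
    ' Placeholder: request url and flag any non-200 response as broken.
    Reporter.ReportEvent micDone, "Link found", url
Next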