Wednesday, 14 August 2013

QTP Interview Questions

A list of QTP interview questions is as follows -

1. What are the types of Object Repositories in QTP?
2. What is the extension of a shared object repository?
3. Explain the checkpoints in QTP (bitmap & image checkpoints).
4. Difference between re-usable & external actions?
5. How and when are regular expressions useful?
6. What is the use of Split Method?
7. What is Smart Identification?
8. Difference between re-testing & regression testing?
9. Explain Call to Copy of Action & Call to Existing Action.
10. Explain the bug life cycle.
11. What is the purpose of creating Synchronization Points in QTP?
12. How many types of parameters are there in QTP?
13. What is the use of Environment parameters in QTP?
14. What is the difference between an Action & a Function?
15. What is the use of Ordinal Identifiers in QTP?
16. What is the use of Repository Parameters in QTP?
17. What is the use of Transaction Statement?
18. When to use a Recovery Scenario and when to use "On Error Resume Next"?
19. How to close all browsers from QTP?
20. How to count all links on Web page?
21. I have a Microsoft Access database that contains data I would like to use in my test. How do I do this?
22. How can I check that a child window exists?
23. How can I check the properties of an object in an application without using checkpoints?
24. How can I check if a parameter exists in the DataTable or not?
25. How can I import environment variables from a file on disk?
26. How do you load a library file at run time? What are its limitations?
27. Explain the difference between .qfl and .vbs files in QTP.

How do you build a testing team and take its capability level to the next level

This is for Test Managers / Testing Heads / Sr. Test personnel who have managed / are managing a testing pool.
This article discusses how to go about building testing teams and enhancing their capability to ensure they manage the expectations of management, the customer and the testing team itself.
The following are some of the necessary ingredients for building effective teams, including a Testing organization:
- Smart people are the first priority when building a test team, besides a process framework.
- Recruiting with care.
- The Test Leader stamping his or her influence
- Aspects like induction and early involvement of testers matter a lot
- The organization structure of the testing team (my preference is centralized)
- Strong knowledge acquisition to complement the BA
- Keeping the team happy and motivated via rewards, promotions, team workshops and outings
- Training and sharing sessions
- Visionary Leadership
- Customer focus
- Focus on delivering excellence to customer
- Managing for innovation
- Management by facts
- Developing talent
- Agility
- Focus on results and creating value
- System perspective
- Organizational and self learning
- Giving new challenges
- No repetitive work
- Encourage them to take initiatives
- Assign coding tasks
- Automation Automation Automation
- Team spirit should be maintained and individualism should be barred.
- Understanding the strengths and capabilities of the team and striking the right balance of mentoring, motivating, providing regular feedback, KT and group discussions.
- Establishing effective and transparent metrics to measure team performance, in order to identify patterns and arrive at corrective/preventive actions through appropriate action plans. This ensures that everyone in the team is measured on the same scale and also creates a healthy competitive environment.
- Setting short-term and long-term goals for the team and individuals and guiding them to accomplish those goals.
There should be a special emphasis on the leadership vision, as creating a shared vision is vital for developing an effective team. The strategic direction of the testing group is determined by this vision. Very often the vision is stated in high-sounding jargon, which tends to dissipate somewhere into nothingness while percolating to the grass-roots level. It is hence imperative to word the vision statement as a more meaningful one, which everyone in the organization can relate to. The next part is to convert the vision into a strong mission statement and communicate it effectively to the entire team. If the team shares and owns the vision and mission statements, it creates a self-sustaining positive spiral.
The second part is to create a strong customer focus. The scope of the testing activity should be defined in the broader business context of the customer. The innovations in the testing team, in terms of building products and solutions, should focus on bringing better value to the customer at optimized cost, leading to a better Return on Investment. Such value articulation is very critical. The entire process and work-system design should focus on customer outcomes.
The third important element is developing the talent for testing. Talent development revolves around three axes: (i) developing the right attitude, (ii) imparting the right skills, both technical and behavioral, and (iii) empowering people at all levels. Apart from developing the team, it is also important to maintain the engagement levels of the team. A proper reward and recognition system, based on merit and creating healthy competition, is very important.
The fourth element is developing the core processes for the testing function. These core processes should address both the management and the operating processes. They should be agile, in the sense that they should not be too rigid but should be tailorable to meet the requirements of the customer.
Another important element is developing the right systems for defining, tracking and governing the right metrics for measuring the effectiveness of the team. These metrics should cover not only the operational part but also the results part. These systems, along with the right knowledge management system, are the bedrock for the sustained effectiveness of the testing team. This system should afford ample scope for recording learnings and integrating them back into the core processes to enhance their relevance and effectiveness.
All the above are valuable points. Building testing teams should start with you leading from the front. Extra effort needs to be taken to achieve this. As a leader, one should have significant achievements under one's belt.

LoadRunner Interview Questions

1. What is load testing? Can we test a J2ME application with LoadRunner? What is performance testing?
2. Which protocol has to be selected to record/play back an Oracle 9i application?
3. What are the enhancements included in LoadRunner 8.0 when compared to LoadRunner 6.2?
4. Can we use LoadRunner for testing desktop applications or non-web-based applications, and how do we use it?
5. How do you call a WinRunner script in LoadRunner?
6. What are the types of parameterization in LoadRunner? List the steps to do stress testing.
7. What are the steps for doing load and performance testing using LoadRunner?
8. What is concurrent load and correlation? What is the process of LoadRunner?
9. What is planning for the test?
10. What enables the Controller and the host to communicate with each other in LoadRunner?
11. Where is load testing usually done?
12. What are the only means of measuring performance?
13. Testing requirements and design are not part of what?
14. According to market analysis, 70% of performance problems lie with what?
15. What is the level of system loading expected to occur during a specific business scenario?
16. What is a run-time setting?
17. When is LoadRunner used?
18. What protocols does LoadRunner support?
19. What do you mean by creating a Vuser script?
20. What is a rendezvous point?

How to Handle Pop-up Windows in Oracle NCA

We will see a step-by-step procedure for handling pop-up windows while using the Oracle NCA protocol:
1. Put the title of the pop-up window in the nca_obj_status function.
2. Find out where the pop-up occurs and put the handling statement below it.
3. The handling statement could be nca_popup_message_press or nca_message_box_press.
To find out which function is suitable for your script, record a script using data that generates that pop-up window, click on the button and check which function gets recorded.
Example:
This piece of code will trigger a pop-up:
nca_set_window("PopUpObjects");
nca_lov_retrieve_items("PopUpObjects", 1, 20);
nca_lov_select_item("PopUpObjects", "POP UP NOTIFICATIONS");
If the title of the window is "Warning", put it inside the nca_obj_status function. The code would be something like:
int status;
status = nca_obj_status("Warning");
if (status == 0)
    nca_popup_message_press("Warning", "OK");
    // or nca_message_box_press("Forms", 1); - use whichever one of them gets recorded
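Putting the three steps together, a minimal sketch of the complete flow might look like the following. The window name, LOV value and the "Warning" title are the same hypothetical values used in the example above, and it assumes nca_obj_status returns 0 when the pop-up is present, as in that example:

/* Action that triggers the pop-up */
nca_set_window("PopUpObjects");
nca_lov_retrieve_items("PopUpObjects", 1, 20);
nca_lov_select_item("PopUpObjects", "POP UP NOTIFICATIONS");

/* Step 1: pass the pop-up title to nca_obj_status */
if (nca_obj_status("Warning") == 0)
{
    /* Step 3: dismiss the pop-up with whichever handling function was recorded */
    nca_popup_message_press("Warning", "OK");
    /* or: nca_message_box_press("Forms", 1); */
}

Note that the handling statement sits directly below the call that raises the pop-up, as suggested in step 2.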

Introduction to LoadRunner

LoadRunner is an industry-leading performance and load testing product by Hewlett-Packard (since it acquired Mercury Interactive in November 2006) for examining system behavior and performance, while generating actual load.
LoadRunner can emulate hundreds or thousands of concurrent users to put the application through the rigors of real-life user loads, while collecting information from key infrastructure components (web servers, database servers, etc.). The results can then be analyzed in detail to explore the reasons for particular behavior.
Consider the client-side application for an automated teller machine (ATM). Although each client is connected to a server, in total there may be hundreds of ATMs open to the public. There may be some peak times — such as 10 a.m. Monday, the start of the work week — during which the load is much higher than normal. In order to test such situations, it is not practical to have a testbed of hundreds of ATMs. So, given an ATM simulator and a computer system with LoadRunner, one can simulate a large number of users accessing the server simultaneously. Once activities have been defined, they are repeatable. After debugging a problem in the application, managers can check whether the problem persists by reproducing the same situation, with the same type of user interaction.
Modern client/server architectures are complex. While they provide an unprecedented degree of power and flexibility, these systems are difficult to test. Whereas single-user testing focuses primarily on functionality and the user interface of a single application, client/server testing focuses on performance and reliability of an entire client/server system.
For example, a typical client/server testing scenario might depict 200 users who log in simultaneously to a system on Monday morning: What is the response time of the system? Does the system crash? To be able to answer these questions, and more, a complete client/server performance testing solution must
• test a system that combines a variety of software applications and hardware platforms
• determine the suitability of a server for any given application
• test the server before the necessary client software has been developed
• emulate an environment where multiple clients interact with a single server application
• test a client/server system under the load of tens, hundreds, or even thousands of potential users
LoadRunner is divided into three smaller applications:
The Virtual User Generator allows us to determine what actions we would like our Vusers, or virtual users, to perform within the application. We create scripts that generate a series of actions, such as logging on, navigating through the application, and exiting the program.
The Controller takes the scripts that we have made and runs them through a schedule that we set up. We tell the Controller how many Vusers to activate, when to activate them, and how to group the Vusers and keep track of them.
The Results and Analysis program gives us all the results of the load test in various forms. It allows us to see summaries of data, as well as the details of the load test for pinpointing problems or bottlenecks.
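As an illustration of what the Virtual User Generator produces, here is a minimal sketch of a web-protocol Vuser Action. The URLs, transaction names and think time are hypothetical placeholders, not taken from any real application:

Action()
{
    /* Time the "login" step as a transaction so the Controller and Analysis can report on it */
    lr_start_transaction("login");

    /* Hypothetical login page of the system under test */
    web_url("login",
        "URL=http://example.com/app/login",
        "Resource=0",
        "Mode=HTML",
        LAST);

    lr_end_transaction("login", LR_AUTO);

    /* Simulate user think time between steps */
    lr_think_time(5);

    /* Navigate to a hypothetical report page, also timed as a transaction */
    lr_start_transaction("view_report");

    web_url("report",
        "URL=http://example.com/app/report",
        "Resource=0",
        "Mode=HTML",
        LAST);

    lr_end_transaction("view_report", LR_AUTO);

    return 0;
}

The Controller then runs many copies of this Action in parallel according to the scenario schedule, and the transaction timings feed the summaries and graphs shown by the Results and Analysis program.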