Wednesday, 14 August 2013

SQL Joins with Examples

The following two tables, Employee and Department, are used in the examples.
Employee Table :-
EmployeeID EmployeeName DepartmentID
1 Smith 1
2 Jack 2
3 Jones 2
4 Andrews 3
5 Dave 5
6 Joseph NULL

Department Table :-
DepartmentID DepartmentName
1 HR
2 Finance
3 Security
4 Sports
5 HouseKeeping
6 Electrical
************************************************************************************************
Inner Join
An Inner Join will take two tables and join them together based on the values in common columns ( linking field ) from each table.
Example 1 :- To retrieve only the information about those employees who are assigned to a department.
Select Employee.EmployeeID, Employee.EmployeeName, Department.DepartmentName From Employee Inner Join Department on Employee.DepartmentID = Department.DepartmentID
The ResultSet will be :-
EmployeeID EmployeeName DepartmentName
1 Smith HR
2 Jack Finance
3 Jones Finance
4 Andrews Security
5 Dave HouseKeeping
Example 2:- Retrieve only the information about departments to which at least one employee is assigned (Distinct ensures that a department with more than one employee appears only once).
Select Distinct Department.DepartmentID, Department.DepartmentName From Department Inner Join Employee on Employee.DepartmentID = Department.DepartmentID
The ResultSet will be :-
DepartmentID DepartmentName
1 HR
2 Finance
3 Security
5 HouseKeeping
************************************************************************************************
Outer Joins :-
An outer join can be a left, a right, or a full outer join.
A left outer join selects all the rows from the left table specified in the LEFT OUTER JOIN clause, not just the ones in which the joined columns match.
Example 1:- To retrieve the information of all the employees along with their department name, if they are assigned to one.
Select Employee.EmployeeID, Employee.EmployeeName, Department.DepartmentName From Employee LEFT OUTER JOIN Department on Employee.DepartmentID = Department.DepartmentID
The ResultSet will be :-
EmployeeID EmployeeName DepartmentName
1 Smith HR
2 Jack Finance
3 Jones Finance
4 Andrews Security
5 Dave HouseKeeping
6 Joseph NULL
A right outer join selects all the rows from the right table specified in the RIGHT OUTER JOIN clause, not just the ones in which the joined columns match.
Example 2:- Use a right outer join to retrieve all the departments along with the names of the employees belonging to each department, if any.
Select Department.DepartmentID, Department.DepartmentName, Employee.EmployeeName From Employee RIGHT OUTER JOIN Department on Employee.DepartmentID = Department.DepartmentID
The ResultSet will be :-
DepartmentID DepartmentName EmployeeName
1 HR Smith
2 Finance Jack
2 Finance Jones
3 Security Andrews
4 Sports NULL
5 HouseKeeping Dave
6 Electrical NULL
This query returns a NULL value for EmployeeName where no employee is assigned to the department.
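Example 3:- A full outer join combines the behavior of the left and right outer joins: it returns every employee and every department, matching them where possible. The query below is a sketch against the same two tables; note that not every database engine supports FULL OUTER JOIN directly.
Select Employee.EmployeeID, Employee.EmployeeName, Department.DepartmentName From Employee FULL OUTER JOIN Department on Employee.DepartmentID = Department.DepartmentID
The ResultSet would be :-
EmployeeID EmployeeName DepartmentName
1 Smith HR
2 Jack Finance
3 Jones Finance
4 Andrews Security
5 Dave HouseKeeping
6 Joseph NULL
NULL NULL Sports
NULL NULL Electrical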

Fundamentals of Performance Testing

Performance testing: This is a type of testing intended to determine the responsiveness, throughput, reliability, and/or scalability of a system under a given workload. Performance testing is commonly conducted to accomplish the following:
• Assess production readiness
• Evaluate against performance criteria
• Compare performance characteristics of multiple systems or system configurations
• Find the source of performance problems
• Support system tuning
• Find throughput levels
Core Activities of Performance Testing
Performance testing is typically done to help identify bottlenecks in a system, establish a baseline for future testing, support a performance tuning effort, determine compliance with performance goals and requirements, and/or collect other performance-related data to help stakeholders make informed decisions related to the overall quality of the application being tested. In addition, the results from performance testing and analysis can help you to estimate the hardware configuration required to support the application when you “go live” to production operation.
The core performance testing activities can be described in seven steps, as follows:
1. Identify Test Environment
2. Identify Performance Acceptance Criteria
3. Plan and Design Tests
4. Configure Test Environment
5. Implement Test Design
6. Execute Tests
7. Analyze, Report and Retest
The performance testing approach consists of the following activities:
1. Identify the Test Environment. Identify the physical test environment and the production environment as well as the tools and resources available to the test team. The physical environment includes hardware, software, and network configurations. Having a thorough understanding of the entire test environment at the outset enables more efficient test design and planning and helps you identify testing challenges early in the project. In some situations, this process must be revisited periodically throughout the project’s life cycle.
2. Identify Performance Acceptance Criteria. Identify the response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints, for example, using performance tests to evaluate what combination of configuration settings will result in the most desirable performance characteristics.
3. Plan and Design Tests. Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.
4. Configure the Test Environment. Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.
5. Implement the Test Design. Develop the performance tests in accordance with the test design (a minimal sketch of implementing and executing such a test follows this list).
6. Execute the Test. Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.
7. Analyze Results, Report, and Retest. Consolidate and share results data. Analyze the data both individually and as a cross-functional team. Reprioritize the remaining tests and re-execute them as needed. When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.
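As a concrete illustration of steps 5 and 6, the sketch below drives a web application with a fixed number of virtual users and reports throughput and response times. This is a minimal sketch only: the target URL, user count, and request count are placeholder values, and a real performance test would normally use the workload model and acceptance criteria from steps 2 and 3 and a dedicated load-testing tool rather than a hand-written script.

import time
import statistics
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Placeholder values -- the target URL and load levels would come from the test design.
TARGET_URL = "http://example.com/"   # hypothetical application under test
VIRTUAL_USERS = 10
REQUESTS_PER_USER = 20

def one_request(url):
    """Issue a single request and return its response time in seconds."""
    start = time.perf_counter()
    with urlopen(url) as response:
        response.read()
    return time.perf_counter() - start

def one_user(url, count):
    """Simulate one virtual user issuing `count` sequential requests."""
    return [one_request(url) for _ in range(count)]

if __name__ == "__main__":
    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = pool.map(one_user, [TARGET_URL] * VIRTUAL_USERS,
                           [REQUESTS_PER_USER] * VIRTUAL_USERS)
        timings = [t for user in results for t in user]
    elapsed = time.perf_counter() - started

    total = len(timings)
    print(f"Requests:          {total}")
    print(f"Throughput:        {total / elapsed:.1f} req/s")
    print(f"Avg response time: {statistics.mean(timings) * 1000:.0f} ms")
    print(f"95th percentile:   {sorted(timings)[int(total * 0.95)] * 1000:.0f} ms")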
Why Do Performance Testing?
At the highest level, performance testing is almost always conducted to address one or more risks related to expense, opportunity costs, continuity, and/or corporate reputation. Some more specific reasons for conducting performance testing include:
• Assessing release readiness by: Enabling you to predict or estimate the performance characteristics of an application in production and evaluate whether or not to address performance concerns based on those predictions. These predictions are also valuable to the stakeholders who make decisions about whether an application is ready for release or capable of handling future growth, or whether it requires a performance improvement/hardware upgrade prior to release.
Providing data indicating the likelihood of user dissatisfaction with the performance characteristics of the system.
Providing data to aid in the prediction of revenue losses or damaged brand credibility due to scalability or stability issues, or due to users being dissatisfied with application response time.
• Assessing infrastructure adequacy by: Evaluating the adequacy of current capacity. Determining the acceptability of stability. Determining the capacity of the application’s infrastructure, as well as determining the future resources required to deliver acceptable application performance. Comparing different system configurations to determine which works best for both the application and the business. Verifying that the application exhibits the desired performance characteristics, within budgeted resource utilization constraints.
• Assessing adequacy of developed software performance by: Determining the application’s desired performance characteristics before and after changes to the software. Providing comparisons between the application’s current and desired performance characteristics.
• Improving the efficiency of performance tuning by: Analyzing the behavior of the application at various load levels. Identifying bottlenecks in the application. Providing information related to the speed, scalability, and stability of a product prior to production release, thus enabling you to make informed decisions about whether and when to tune the system.

Introduction of Test Case

A test case is a set of conditions or variables under which a tester will determine whether a requirement of an application is partially or fully satisfied. It may take many test cases to determine that a requirement is fully satisfied. In order to fully test that all the requirements of an application are met, there must be at least one test case for each requirement, unless a requirement has sub-requirements; in that situation, each sub-requirement must have at least one test case. Some methodologies, like RUP, recommend creating at least two test cases for each requirement: one should perform positive testing of the requirement and the other negative testing.
If the application is created without formal requirements, then test cases are written based on the accepted normal operation of programs of a similar class. Written test cases are usually collected into Test suites.
Formal, written test cases consist of three main parts with subsections (a code sketch of this structure follows the list):
Introduction/overview contains general information about the test case.
• Identifier is a unique identifier of the test case for further reference, for example, when describing a found defect.
• Test case owner/creator is the name of the tester or test designer who created the test or is responsible for its development.
• Version of the current test case definition.
• Name of the test case should be a human-oriented title which makes it quick to understand the test case's purpose and scope.
• Identifier of the requirement which is covered by the test case. An identifier of a use case or functional specification item may also be given here.
• Purpose contains a short description of the purpose of the test and the functionality it checks.
• Dependencies
Test case activity
• Testing environment/configuration contains information about the configuration of hardware or software which must be met while executing the test case.
• Initialization describes actions which must be performed before test case execution is started; for example, opening some file.
• Finalization describes actions to be done after the test case is performed; for example, if the test case crashes the database, the tester should restore it before other test cases are run.
• Actions: the steps to be performed to complete the test.
• Input data description
Results
• Expected results contains a description of what the tester should see after all the test steps have been completed.
• Actual results contains a brief description of what the tester saw after the test steps were completed. This is often replaced with a Pass/Fail. Quite often, if a test case fails, a reference to the defect involved should be listed in this column.
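As a minimal sketch, the structure described above could be captured in code roughly as follows; the field names and the sample login test case are illustrative only and are not tied to any particular test-management tool.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Introduction/overview
    identifier: str                    # unique id, referenced when logging defects
    name: str                          # human-oriented title
    owner: str                         # tester or test designer responsible for it
    version: str                       # version of the test case definition
    requirement_id: str                # requirement / use case covered by the test
    purpose: str                       # what functionality the test checks
    dependencies: list = field(default_factory=list)
    # Test case activity
    environment: str = ""              # hardware/software configuration required
    initialization: str = ""           # actions performed before execution starts
    finalization: str = ""             # clean-up performed after execution
    steps: list = field(default_factory=list)   # step-by-step actions
    input_data: str = ""
    # Results
    expected_result: str = ""          # what the tester should see
    actual_result: str = ""            # often just Pass/Fail plus a defect reference

# Illustrative instance: a positive test case for a hypothetical login requirement.
login_test = TestCase(
    identifier="TC-LOGIN-001",
    name="Valid user can log in",
    owner="Test Designer",
    version="1.0",
    requirement_id="REQ-AUTH-01",
    purpose="Positive test of the login requirement",
    steps=["Open the login page", "Enter valid credentials", "Click the Login button"],
    expected_result="The user is redirected to the home page",
)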

What is Unit Testing?

Unit testing is the first and most important level of testing, in which each and every line of code is checked, so the testing is performed at the root level. As soon as the programmer develops a unit of code, the unit is tested for various scenarios. It is much more economical to find and eliminate bugs early on, while the application is being built; as the software project progresses, it becomes more and more costly to find and fix them. Hence unit testing is the most important of all the testing levels.
In most cases it is the developer’s responsibility to deliver Unit Tested Code.
Unit Testing Tasks and Steps:
Step 1: Create a Test Plan
Step 2: Create Test Cases and Test Data
Step 3: If applicable, create scripts to run the test cases
Step 4: Once the code is ready, execute the test cases
Step 5: Fix the bugs, if any, and retest the code
Step 6: Repeat the test cycle until the “unit” is free of all bugs
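As a minimal example of steps 2 to 5, here is a sketch of a unit test written with Python's unittest module; apply_discount is a hypothetical unit of code invented for the illustration.

import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: return the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_normal_discount(self):
        # Positive scenario: a 10% discount on 200.0 should give 180.0
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        # Negative scenario: an out-of-range discount must raise an error
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()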

What is Integration Testing

Integration testing (sometimes called integration and testing, abbreviated I&T) is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing.
The purpose of integration testing is to verify the functional, performance, and reliability requirements placed on major design items. These “design items”, i.e. assemblages (or groups of units), are exercised through their interfaces using black-box testing, with success and error cases being simulated via appropriate parameter and data inputs.
The overall idea is a “building block” approach, in which verified assemblages are added to a verified base which is then used to support the Integration testing of further assemblages.
The different types of integration testing are Big Bang, Top Down, Bottom Up, and Backbone.
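As a sketch of the building-block approach, the example below combines two hypothetical modules (a small data-access layer and a report function built on top of it) and tests them as a group through their public interfaces, using an in-memory SQLite database rather than mocks; all names are invented for the illustration.

import sqlite3
import unittest

# Hypothetical module A: data-access layer.
def create_schema(conn):
    conn.execute("CREATE TABLE Employee (EmployeeID INTEGER, DepartmentID INTEGER)")

def add_employee(conn, employee_id, department_id):
    conn.execute("INSERT INTO Employee VALUES (?, ?)", (employee_id, department_id))

# Hypothetical module B: reporting built on top of the data-access layer.
def headcount_by_department(conn, department_id):
    row = conn.execute(
        "SELECT COUNT(*) FROM Employee WHERE DepartmentID = ?", (department_id,)
    ).fetchone()
    return row[0]

class EmployeeReportIntegrationTest(unittest.TestCase):
    def setUp(self):
        # A real (in-memory) database is used instead of mocks, because the point
        # of the integration test is to verify that the modules work together.
        self.conn = sqlite3.connect(":memory:")
        create_schema(self.conn)

    def tearDown(self):
        self.conn.close()

    def test_headcount_reflects_inserted_employees(self):
        add_employee(self.conn, 1, 2)
        add_employee(self.conn, 2, 2)
        add_employee(self.conn, 3, 5)
        self.assertEqual(headcount_by_department(self.conn, 2), 2)

if __name__ == "__main__":
    unittest.main()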