
Thursday, December 10, 2009

Software Development Process...

What is a Software Development Process?

A software development process or life cycle is a structure imposed on the development of a software product. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process.

Processes

More and more software development organizations implement process methodologies.
The Capability Maturity Model (CMM) is one of the leading models. Independent assessments can be used to grade organizations on how well they create software according to how they define and execute their processes.
There are dozens of others; other popular ones include ISO 9000, ISO 15504, and Six Sigma.

Process Activities/Steps

Software Engineering processes are composed of many activities, notably the following:


• Requirements Analysis
Extracting the requirements of a desired software product is the first task in creating it. While customers probably believe they know what the software is to do, it may require skill and experience in software engineering to recognize incomplete, ambiguous or contradictory requirements.

• Specification
Specification is the task of precisely describing the software to be written, in a mathematically rigorous way. In practice, most successful specifications are written to understand and fine-tune applications that were already well-developed, although safety-critical software systems are often carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable.

• Software architecture
The architecture of a software system refers to an abstract representation of that system. Architecture is concerned with making sure the software system will meet the requirements of the product, as well as ensuring that future requirements can be addressed.

• Implementation
Reducing a design to code may be the most obvious part of the software engineering job, but it is not necessarily the largest portion.

• Testing
Testing of parts of software, especially where code by two different engineers must work together, falls to the software engineer.

• Documentation
An important task is documenting the internal design of software for the purpose of future maintenance and enhancement.

• Training and Support
A large percentage of software projects fail because the developers fail to realize that it doesn't matter how much time and planning a development team puts into creating software if nobody in an organization ends up using it. People are occasionally resistant to change and avoid venturing into an unfamiliar area, so as part of the deployment phase it's very important to hold training classes for the most enthusiastic software users first (to build excitement and confidence), then shift the training towards the neutral users intermixed with the avid supporters, and finally incorporate the rest of the organization into adopting the new software. Users will have lots of questions and software problems, which leads to the next phase of software.

• Maintenance
Maintaining and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development of the software. Not only may it be necessary to add code that does not fit the original design, but just determining how software works at some point after it is completed may require significant effort by a software engineer. About 60% of all software engineering work is maintenance, but this statistic can be misleading. A small part of that is fixing bugs. Most maintenance is extending systems to do new things, which in many ways can be considered new work.


Process Models

A decades-long goal has been to find repeatable, predictable processes or methodologies that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management is proving difficult.

Waterfall processes
The best-known and oldest process is the waterfall model, where developers follow these steps in order. They state requirements, analyze them, design a solution approach, architect a software framework for that solution, develop code, test, deploy, and maintain. After each step is finished, the process proceeds to the next step.

Monday, November 23, 2009

Types of Testing


Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.


White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.


Unit testing - a unit is the smallest compilable component, and a unit typically is the work of one programmer. The unit is tested in isolation with the help of stubs or drivers (see the sketch after this list). Typically done by the programmer and not by testers.
Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing - black-box testing aimed at validating the functional requirements of an application; this type of testing should be done by testers.
System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
End-to-end testing - similar to system testing but involves testing the application in an environment that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. Even the transactions performed mimic the end user's usage of the application.
Sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Smoke testing - the general (hardware-related) definition: smoke testing is a safe, harmless procedure of blowing smoke into parts of the sewer and drain lines to detect sources of unwanted leaks and sources of sewer odors. In relation to software, smoke testing is non-exhaustive testing that ascertains that the most crucial functions of a program work, without bothering with finer details.
Static testing - test activities that are performed without running the software are called static testing. Static testing includes code inspections, walkthroughs, and desk checks.
Dynamic testing - test activities that involve running the software are called dynamic testing.
Regression testing - testing of a previously verified program or application following program modification for extension or correction, to ensure no new defects have been introduced. Automated testing tools can be especially useful for this type of testing.
Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
Performance testing - validates that both the online response time and batch run times meet the defined performance requirements.
Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
Compatibility testing - testing how well software performs in a particular hardware/software/ operating system/network/etc. environment.
Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
Monkey testing - testing that runs with no specific test in mind. The monkey in this case is the producer of any input data (whether file data or input-device data): keep pressing keys randomly and check whether the software fails or not.
User acceptance testing - determining if software is satisfactory to an end-user or customer.
Comparison testing - comparing software weaknesses and strengths to competing products.
Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by users within the development team.
Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
Cross browser testing - the application is tested with different browsers for usability and compatibility testing.
Concurrent testing - multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking, and use of single-threaded code and locking semaphores.
Negative testing - testing the application for fail conditions; negative testing is testing the tool with improper inputs, for example entering special characters in a phone number field.
Load testing - a test whose objective is to determine the maximum sustainable load the system can handle. Load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay.
Stress testing - subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disk, MIPS, interrupts) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.
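
To make the unit-testing entry above concrete (see the note in that entry), here is a minimal Python sketch; the checkout function, the StubGateway class, and all their names are invented for illustration. The unit is tested in isolation, with a stub standing in for a collaborator that is not yet built:

import unittest

def checkout(cart_total, gateway):
    # Unit under test: charges the gateway and returns a receipt.
    if cart_total <= 0:
        raise ValueError("cart total must be positive")
    confirmation = gateway.charge(cart_total)
    return {"amount": cart_total, "confirmation": confirmation}

class StubGateway:
    # Stub standing in for the real (not yet constructed) payment service.
    def charge(self, amount):
        return "STUB-OK"  # canned response, no network call

class CheckoutUnitTest(unittest.TestCase):
    def test_successful_charge(self):
        receipt = checkout(100.0, StubGateway())
        self.assertEqual(receipt["confirmation"], "STUB-OK")

    def test_rejects_empty_cart(self):
        with self.assertRaises(ValueError):
            checkout(0, StubGateway())

if __name__ == "__main__":
    unittest.main()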

Friday, October 09, 2009

LoadRunner 8.0

• Performance Engineering Basic Concepts
• Performance Testing Concepts
• HTTP Protocol Fundamentals
• Differences between HTTP 1.0/1.1 and their Architecture Fundamentals

Load Testing Tool: HP LoadRunner 8.1


• VuGen - Scripting Fundamentals
• Advanced Scripting
• Controller
• Workload distribution - case study and practice sessions
• Concurrent users estimation
• Server monitoring:


1. Windows
2. Unix
3. Apache.

• Configuring Load generators.
• Executing different types of Scenarios.
• Monitoring UNIX resources using UNIX utilities.

Analysis

• Mathematical Fundamentals
(Averages, Percentiles, data interpretation, Queuing theory basics)
• Stats Pack; understanding LR graphs.
• Correlating data.

Server side metrics & Analysis

• OS basics (CPU, Memory, I/O, N/W)
• Web server (Apache) monitoring - case study
• DB monitoring using "Stats Pack"

Reporting

(1) Briefly about Manual Testing
(2) Briefly about Functional Testing.

COMMUNICATION SKILLS

• Personalizing & Behavioral Training
• Soft Skills
• Interview Skills
• Group Discussion
• Speaking on Stage
• Facing the Public and Public Speaking
• Just a Minute (JAM)
• HR Interviews
• Dress Code
• Body Language for Interviews
• Important Interview Questions

Sunday, August 02, 2009

Complete QTP Learning

QTP (QuickTest Professional )
1.0 Introduction to QuickTest

Over view of Quick Test Pro 8.0
QuickTest Pro is Mercury Interactive's functional enterprise testing tool.
QuickTest Professional is a fresh approach to automated software and application testing. It is designed to provide a robust application verification solution without the need for advanced technical or programming skills.

QuickTest Pro is similar to Astra QuickTest. The key difference lies in the number of environments that QuickTest Professional supports (e.g. ERP/CRM, Java applets and applications, multiple multimedia environments, etc.).

QuickTest Professional enables you to test standard Windows applications, Web objects, ActiveX controls, Visual Basic applications, and multimedia objects on Web pages.

We can use QuickTest add-ins for a number of special environments (such as Java, Oracle, SAP solutions, .NET Windows and Web Forms, Siebel, PeopleSoft, Web services, and terminal emulator applications).
QuickTest Pro 8.0 Environment Support

Windows Applications
• MFC
• Visual Basic
• Java
• ActiveX

Enterprise Applications
• SAP
• Oracle
• PeopleSoft
• Siebel
Web Technologies
• HTML
• DHTML
• JavaScript
Browsers
• IE
• Netscape
• AOL
Emerging Technologies
• .Net Winforms,
Webforms, Web services
• J2EE Web services
• XML, WSDL, UDDI
Terminal Emulators
• 3270
• 5250
• VT100
Server Technologies
• Oracle
• Microsoft
• IBM
• BEA
• ODBC
• COM/COM+
Multimedia
• RealAudio/RealVideo
• Windows Media Player
• Flash

Languages
• European
• Japanese
• Chinese (traditional and
simplified)
• Korean
Web Environments
• IE, Netscape, AOL
• ActiveX
• XML, DHTML, HTML

Client/Server
• Windows Win32/MFC
• Visual Basic

Operating Systems
Windows 98, 2000, NT, ME, XP
QuickTest Pro Add-ins
• .NET Add-in - WinForms, WebForms, .NET controls
• Java Add-in - JDK 1.1-1.4.2
• Terminal Emulator Add-in - 3270, 5250, VT100
• MySAP Add-in - SAP GUI, Web, Portals 6.2
• Oracle Add-in - 11i
• PeopleSoft Add-in - 8.0-8.8
• Siebel Add-in - 7.0 & 7.5
• Web Services Add-in - WSDL, .NET, J2EE
1. Start QuickTest and open a new test

· If QuickTest is not currently open, choose Start > Programs > QuickTest Professional > QuickTest Professional.

In the Add-in Manager, confirm that the Web Add-in is selected, and clear all other add-ins. Click OK to close the Add-in Manager and open QuickTest.

Note: While QuickTest loads your selected add-ins, the QuickTest splash screen is displayed. This may take a few seconds. If the Welcome window opens, click Blank Test.
Otherwise, choose File > New, or click the New button.
A blank test opens.

· If QuickTest is already open, check which add-ins are loaded by selecting Help > About QuickTest Professional. If the Web Add-in is not loaded, you must exit and restart QuickTest. When the Add-in Manager opens, select the Web Add-in, and clear all other add-ins.
Choose File > New, or click the New button.
A blank test opens.

Note: If the Add-in Manager does not open when starting QuickTest, choose Tools > Options. In the General tab, select Display Add-in Manager on startup. When you exit and restart QuickTest, the Add-in Manager opens.

2. Start recording

· Choose Test > Record or click the Record button. The Record and Run Settings dialog box opens. In the Web tab, select Open the following browser when a record or run session begins.

· Choose a browser from the Type list and confirm that the URL in the Address box is, for example, http://newtours.mercuryinteractive.com.

· Confirm that Close the browser when the test is closed is selected.

· In the Windows Applications tab, confirm that Record and run on these applications is selected, and that there are no applications listed. This setting prevents you from inadvertently recording operations performed on various Windows applications (such as e-mail) during a recording session.
· Click OK.

QuickTest begins recording, and your browser opens to the Mercury Tours Web site.

3. Log in to the Mercury Tours Web site.
In the User Name and Password boxes, type the name and password you registered with Mercury Tours.

Click Sign-In.

The Flight Finder page opens.

4. Enter flight details.

Change the following selections:
Departing From: New York
On: Dec 29
Arriving In: San Francisco
Returning: Dec 31
Service Class: Business class

Click CONTINUE to accept the other default selections. The Select Flight page opens.

Note: When entering dates while recording this test, do not click the View Calendar button, which opens a Java-based calendar. Your test will not record the date selected using this calendar because you did not load the Java Add-in for this tutorial.
To check which add-ins have been loaded, click Help > About QuickTest Professional. To change the available add-ins for your tests, you must close and reopen QuickTest Professional.

5. Select a flight.
Click CONTINUE to accept the default flight selections. The Book a Flight page opens.

6. Enter required passenger and purchase information.

Enter the required information (fields with red text labels) in the Passengers and Credit Card sections. (You may enter fictitious information.)
In the Billing Address section, select Ticketless Travel.
At the bottom of the page, click SECURE PURCHASE. The Flight Confirmation page opens.

7. Review and complete your booking.
Click BACK TO HOME. The Mercury Tours home page opens.
8. Stop recording.
In QuickTest, click Stop on the test toolbar to stop the recording process.
You have now reserved an imaginary business class ticket from New York to San Francisco. QuickTest recorded your Web browser operations from the time you clicked the Record button until you clicked the Stop button.

Save your test

Select File > Save or click the Save button. The Save dialog box opens to the Tests folder.
Create a folder named Tutorial, select it, and click Open.
Type Recording in the File name field.
Confirm that Save Active Screen files is selected.
Click Save. The test name (Recording) is displayed in the title bar of the main QuickTest window.

Friday, July 24, 2009

White-Box Testing Techniques


1) Control Flow

Statement Coverage: Write test cases such that every statement is executed at least once. This is a weak criterion; it will not test the validity of logical operators.

Decision (Branch) Coverage: Decision Coverage (also called Branch Coverage) states that test cases must be written such that each decision has a True and a False outcome at least once. It emphasizes that each branch direction must be executed at least once, and it usually satisfies Statement Coverage.

Condition Coverage: The Condition Coverage technique states that test cases must be written such that every condition in a decision takes all possible outcomes at least once. Condition Coverage may not cover all decision outcomes.


Decision/Condition Coverage: The Decision/Condition Coverage technique states that each condition should take on all possible outcomes at least once, and each decision must take on all possible outcomes at least once.


Multiple Condition Coverage: The Multiple Condition Coverage technique states that test cases must be written such that all possible combinations of conditions in each decision are taken at least once.

Loop Coverage: The Loop Coverage technique states that test cases must be written to test the loop counters.
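
As a rough illustration of how these control-flow criteria differ, consider this small Python function; the function and the test inputs are made up for the example:

def discount(is_member, total):
    # One decision with two conditions: (is_member) and (total > 100)
    if is_member and total > 100:
        rate = 0.10
    else:
        rate = 0.0
    return total * (1 - rate)

# Statement coverage: one True and one False pass through the decision:
#   discount(True, 200)  -> executes the if-branch
#   discount(False, 50)  -> executes the else-branch
# Decision (branch) coverage: the same two cases suffice (True/False outcomes).
# Condition coverage: each condition must be True and False at least once:
#   discount(True, 50)   -> is_member True,  total > 100 False
#   discount(False, 200) -> is_member False (total > 100 is short-circuited)
# Multiple condition coverage: all four combinations of the two conditions.
assert discount(True, 200) == 180.0
assert discount(False, 50) == 50.0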

2) Data Flow

All Definitions
All Predicate Uses
All Computation Uses
All P-uses/some C-uses
All Uses
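
A hedged sketch of the data-flow idea: these criteria are defined over where each variable is defined (def), used in a predicate (p-use), or used in a computation (c-use). The classify function below is made up purely to mark these sites:

def classify(n):
    sign = 1                  # def of sign
    if n < 0:                 # p-use of n (predicate use)
        sign = -1             # redefinition of sign
    magnitude = abs(n)        # c-use of n; def of magnitude
    return sign * magnitude   # c-uses of sign and magnitude

# "All Definitions" asks that each def of sign reach at least one use:
#   classify(5)  exercises the initial def of sign flowing to the return
#   classify(-3) exercises the redefinition of sign inside the if
print(classify(5), classify(-3))  # 5 -3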



Walkthrough - Main purpose: understanding
  • The author guides the group through a document, so that all understand the same thing and reach consensus on the changes to make

Review - Main purpose: decision-making

  • Group discusses document and makes a decision about the content, e.g. how something should be done, go or no-go decision

Inspection - Main purpose: find defects

  • Formal individual and group checking, using sources and standards, according to detailed and specific rules
  • Inspection is a well-defined process

Thursday, June 18, 2009

QTP (Quick Test Professional)


Quick Test Professional is software from Mercury Interactive. It is a Graphical User Interface functional testing tool that basically allows user actions to be automated on a client or web based computer application. QTP makes use of Visual Basic Script in order to allow the testing tool and the controls and objects of the application being tested to interact.

What is Smart Identification in QTP

Smart Identification is applicable to web applications only. Record an object from the application, modify one of the object's properties, and execute your script; you will get a warning message in the result. Navigate to the warning message in the result file and read the note about Smart Identification.
Smart Identification means that if a property of the test object does not match the run-time object's property, QTP writes a warning to the result and executes the rest of the steps. Note: in the result, a cap symbol is added for Smart Identification (by looking at this we can say that the properties of the test object do not match the run-time object).


How to increase the execution speed of QTP scripts?


Go to Tools > Options > Run and select Fast.
Using descriptive programming, you can also speed up execution.

Reduce the lines of script: a good test case is one that finds the real errors. Working without the object repository (i.e., using descriptive programming) is the optimization referred to here - decreasing the input while maintaining quality.


What is Keyword-Driven Testing?
Software test scripts are conventionally composed ad hoc by a coder. Some software development tools help automate testing by recording tests that are run, allowing "playback" of the test routines. However, an entire test routine is rarely, if ever, applicable to more than one release of one application. Data-driven testing adds some modularity by keeping test input and output values separate from the test procedure, but the procedure itself is still in a monolithic script. Keyword-driven testing breaks the test procedure into logical components that can then be used repeatedly in the assembly of new test scripts.
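
As a hedged sketch of the keyword-driven idea just described (not QTP's actual mechanism), the Python below separates reusable keywords from the data table that drives them; all names and the sample table are invented for illustration:

# Keyword implementations: small reusable actions, independent of any one test.
def open_app(name):
    print(f"opening {name}")

def enter_text(field, value):
    print(f"typing {value!r} into {field}")

def verify_title(expected):
    print(f"checking window title == {expected!r}")

KEYWORDS = {"open_app": open_app, "enter_text": enter_text,
            "verify_title": verify_title}

# A test script is just data: rows of (keyword, arguments),
# so new tests are assembled from existing keywords.
login_test = [
    ("open_app", ["MercuryTours"]),
    ("enter_text", ["username", "tutorial"]),
    ("enter_text", ["password", "secret"]),
    ("verify_title", ["Flight Finder"]),
]

def run(table):
    for keyword, args in table:
        KEYWORDS[keyword](*args)  # dispatch each row to its implementation

run(login_test)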

Friday, May 22, 2009

Testing Vocabulary


Every profession has its own vocabulary, and to learn a profession the first and crucial step is to master its vocabulary. The entire knowledge of a profession is compressed and kept in its vocabulary. Take our own software testing profession: while communicating with colleagues we frequently use terms like 'regression testing' and 'system testing'. Now imagine communicating the same to a person who is not in our profession or who doesn't understand our testing vocabulary; we would need to explain each and every term in detail, and communication would become difficult and painful. To speak the language of testing, you need to learn its vocabulary.

Find below a huge collection of testing vocabulary.

What is quality? or Define quality?


Many quality pioneers have defined quality in different ways.
A quality product is defined as one that meets product requirements, but quality can only be seen through the customer's eyes. So the most important definition of quality is meeting customer needs, or understanding customer requirements and expectations and exceeding those expectations. If the customer is satisfied by using the product, then it is a quality product.

Affinity Diagram: A group process that takes large amounts of language data, such as that developed by brainstorming, and divides it into categories.


Audit: An inspection/assessment activity that verifies compliance with plans, policies and procedures, and ensures that resources are conserved.


Baseline: A quantitative measure of the current level of performance.


Benchmarking: Comparing your company's products, services or processes against best practices or competitive practices, to help define superior performance of a product,service or support processes.


Black-box Testing: A test technique that focuses on testing the functionality of the program component or application against its specifications, without knowledge of how the system is constructed.


Boundary value analysis: A data selection technique in which test data is chosen from the "boundaries" of the input or output domain classes, data structures and procedure parameters. Choices often include the actual minimum and maximum boundary values, the maximum value plus or minus one and the minimum value plus or minus one.


Branch Testing: A test method that requires that each possible branch on each decision be executed at least once.

Brainstorming: A group process for generating creative and diverse ideas.


Bug: A catchall term for all software defects or errors.


Certification testing: Acceptance of software by an authorized agent after the software has been validated by the agent or after its validity has been demonstrated to the agent.


Checkpoint (or verification point): Expected behaviour of the application, which must be validated with the actual behaviour after a certain action has been performed on the application.


Client: The customer that pays for the product received and receives the benefit from the use of the product.


Condition Coverage: A white-box testing technique that measures the number or percentage of decision outcomes covered by the test cases designed. 100% condition coverage would indicate that every possible outcome of each decision had been executed at least once during testing.


Configuration Management Tools: Tools that are used to keep track of changes made to systems and all related artifacts. These are also known as version control tools.


Configuration testing: Testing of an application on all supported hardware and software platforms. This may include various combinations of hardware types, configuration settings and software versions.


Completeness: A product is said to be complete if it has met all requirements.


Consistency: Adherence to a given set of rules.


Correctness: The extent to which software is free from design and coding defects. It is also the extent to which software meets the specified requirements and user objectives.


Cost of Quality: Money spent above and beyond expected production costs to ensure that the product the customer receives is a quality product. The cost of quality includes prevention, appraisal, and correction or repair costs.


Conversion Testing: Validates the effectiveness of data conversion processes, including field-field mapping and data translation.


Customer: The individual or organization, internal or external to the producing organization that receives the product.


Cyclomatic complexity: The number of decision statements plus one.
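
Using that definition, a small made-up Python function as a worked example:

def grade(score):
    if score >= 90:       # decision 1
        return "A"
    if score >= 75:       # decision 2
        return "B"
    while score < 0:      # decision 3 (loop condition)
        score += 100
    return "C"

# 3 decision statements + 1 = cyclomatic complexity of 4, suggesting
# at least 4 test cases to exercise the independent paths.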

Debugging: The process of analysing and correcting syntactic, logic and other errors identified during testing.


Decision Coverage: A white-box testing technique that measures the number or percentage of decision directions executed by the test cases designed. 100% decision coverage would indicate that all decision directions had been executed at least once during testing. Alternatively, each logical path through the program can be tested.


Decision Table: A tool for documenting the unique combinations of conditions and associated results in order to derive unique test cases for validation testing.

Defect Tracking Tools: Tools for documenting defects as they are found during testing and for tracking their status through to resolution.


Desk Check: A verification technique conducted by the author of the artifact to verify the completeness of their own work. This technique does not involve anyone else.


Dynamic Analysis: Analysis performed by executing the program code. Dynamic analysis executes or simulates a development phase product and detects errors by analyzing the response of the product to sets of input data.


Entrance Criteria: Required conditions and standards for work product quality that must be present or met for entry into the next stage of the software development process.

Equivalence Partitioning: A test technique that utilizes a subset of data that is representative of a larger class. This is done in place of undertaking exhaustive testing of each value of the larger class of data.


Error or defect: 1. A discrepancy between a computed, observed or measured value or condition and the true, specified or theoretically correct value or condition. 2. Human action that results in software containing a fault (e.g., omission or misinterpretation of user requirements in a software specification, or incorrect translation or omission of a requirement in the design specification).


Error Guessing: Test data selection techniques for picking values that seem likely to cause defects. This technique is based upon the theory that test cases and test data can be developed based on intuition and experience of the tester.


Exhaustive Testing: Executing the program through all possible combination of values for program variables.


Exit criteria: Standards for work product quality which block the promotion of incomplete or defective work products to subsequent stages of the software development process.

Flowchart: Pictorial representations of data flow and computer logic. It is frequently easier to understand and assess the structure and logic of an application system by developing a flowchart than to attempt to understand narrative descriptions or verbal explanations. The flowcharts for systems are normally developed manually, while flowcharts of programs can be produced.


Force Field Analysis: A group technique used to identify both driving and restraining forces that influence a current situation.


Formal Analysis: A technique that uses rigorous mathematical techniques to analyze the algorithms of a solution for numerical properties, efficiency, and correctness.

Functional Testing: Testing that ensures all functional requirements are met without regard to the final program structure.


Histogram: A graphical description of individually measured values in a data set that is organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set, along with information regarding the average and variation.


Inspection: A formal assessment of a work product conducted by one or more qualified independent reviewers to detect defects, violations of development standards, and other problems. Inspections involve authors only when specific questions concerning deliverables exist. An inspection identifies defects, but does not attempt to correct them. Authors take corrective actions and arrange follow-up reviews as needed.


Integration Testing: This test begins after two or more programs or application components have been successfully unit tested. It is conducted by the development team to validate the interaction or communication/flow of information between the individual components which will be integrated.


Life Cycle Testing: The process of verifying the consistency, completeness, and correctness of software at each stage of the development life cycle.


Pass/Fail Criteria: Decision rules used to determine whether a software item or feature passes or fails a test.


Path Testing: A test method satisfying the coverage criterion that each logical path through the program be tested. Often, paths through the program are grouped into a finite set of classes and one path from each class is tested.


Performance Test: Validates that both the online response time and batch run times meet the defined performance requirements.


Policy: Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).


Population Analysis: Analyzes production data to identify, independently from the specifications, the types and frequency of data that the system will have to process/produce. This verifies that the specs can handle the types and frequency of actual data, and can be used to create validation tests.


Procedure: The step-by-step method followed to ensure that standards are met.


Process: 1. The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures. 2. A statement of purpose and an essential set of practices (activities) that address that purpose.


Proof of Correctness: The use of mathematical logic techniques to show that a relationship between program variables assumed true at program entry implies that another relationship between program variables holds at program exit.


Quality: A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to: quality means meets requirements. From a customer's perspective, quality means "fit for use."


Quality Assurance (QA): Deals with 'prevention' of defects in the product being developed. It is associated with a process. The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved to produce products that meet specifications and are fit for use.


Quality Control (QC): Its focus is defect detection and removal. Testing is a quality control activity.


Quality Improvement: To change a production process so that the rate at which defective products (defects) are produced is reduced. Some process changes may require the product to be changed.


Recovery Test: Evaluates the contingency features built into the application for handling interruptions and for returning to specific points in the application processing cycle, including checkpoints, backups, restores, and restarts. This test also assures that disaster recovery is possible.


Regression Testing: Testing of a previously verified program or application following program modification for extension or correction to ensure no new defects have been introduced.

Risk Matrix: Shows the controls within application systems used to reduce the identified risk, and in what segment of the application those risks exist. One dimension of the matrix is the risk, the second dimension is the segment of the application system, and within the matrix at the intersections are the controls. For example, if a risk is "incorrect input" and the system segment is "data entry," then the intersection within the matrix would show the controls designed to reduce the risk of incorrect input during the data entry segment of the application system.


Scatter Plot Diagram: A graph designed to show whether there is a relationship between two changing variables.


Standards: The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.


Statement of Requirements: The exhaustive list of requirements that define a product.


Statement Testing: A test method that executes each statement in a program at least once during program testing.


Static Analysis: Analysis of a program that is performed without executing the program. It may be applied to the requirements, design, or code.


Stress Testing: This test subjects a system, or components of a system, to varying environmental conditions that defy normal expectations. For example, high transaction volume, large database size or restart/recovery circumstances. The intention of stress testing is to identify constraints and to ensure that there are no performance problems.


Structural Testing: A testing method in which the test data is derived solely from the program structure.


Stub: Special code segments that, when invoked by a code segment under test, simulate the behavior of designed and specified modules not yet constructed.


System Test: During this event, the entire system is tested to verify that all functional, information, structural and quality requirements have been met.


Test Case: Test cases document the input, expected results, and execution conditions of a given test item.


Test Plan: A document describing the intended scope, approach, resources, and schedule of testing activities. It identifies test items, the features to be tested, the testing tasks, the personnel performing each task, and any risks requiring contingency planning.


Test Scripts: A tool that specifies an order of actions that should be performed during a test session. The script also contains expected results. Test scripts may be manually prepared using paper forms, or may be automated using capture/playback tools or other kinds of automated scripting tools.


Test Suite Manager: A tool that allows testers to organize test scripts by function or other grouping.


Unit Test: Testing individual programs, modules, or components to demonstrate that the work package executes per specification, and to validate the design and technical quality of the application. The focus is on ensuring that the detailed logic within the component is accurate and reliable according to pre-determined specifications. Testing stubs or drivers may be used to simulate the behavior of interfacing modules.


Usability Test: The purpose of this event is to review the application user interface and other human factors of the application with the people who will be using the application. This is to ensure that the design (layout and sequence, etc.) enables the business functions to be executed as easily and intuitively as possible. This review includes assuring that the user interface adheres to documented user interface standards, and should be conducted early in the design stage of development. Ideally, an application prototype is used to walk the client group through various business scenarios, although paper copies of screens, windows, menus, and reports can be used.


User Acceptance Test: User Acceptance Testing (UAT) is conducted to ensure that the system meets the needs of the organization and the end user/customer. It validates that the system will work as intended by the user in the real world, and is based on real-world business scenarios, not system requirements. Essentially, this test validates that the right system was built.


Validation: Determination of the correctness of the final program or software produced from a development project with respect to the user needs and requirements.


Verification: 1. The process of determining whether the products of a given phase of the software development cycle fulfill the requirements established during the previous phase. 2. The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services, or documents conform to specified requirements.


Walkthroughs: During a walkthrough, the producer of a product "walks through" or paraphrases the product's content, while a team of other individuals follows along. The team's job is to ask questions and raise issues about the product that may lead to defect identification.


White-box Testing: A testing technique that assumes that the path of the logic in a program unit or component is known. White-box testing usually consists of testing paths, branch by branch, to produce predictable results. This technique is usually used during tests executed by the development team, such as unit or component testing.

Friday, April 24, 2009

Mercury Certification Frequently Asked Questions



1. How often are Mercury Product Certification Exams conducted in India?
Once per quarter.

2. What exams are available to take in India?
The following Mercury product exams are available: QuickTest Pro, WinRunner, TestDirector, and LoadRunner.

3. What types of exams are available?
The CPC (Certified Product Consultant) and the Specialist exam. The CPC exam is an advanced certification exam designed for people with at least 1 year of experience with Mercury products who may wish to be a Mercury Product Consultant. The Specialist exam is designed for new users of the product who wish to demonstrate their mastery of the product.

4. What are the prerequisites for taking the exam?
We recommend the following:
1. A working knowledge of Mercury products: at least 1 year for CPC exams and 3-6 months for Specialist exams.
2. The completion of Mercury product training courses in the product you wish to take a certification exam for.

5. How do I enroll in an exam and what is the exam Process?
1. Enroll with Mercury for the Specialist or CPC exam and pay the exam fee. Enrollment form will be made available by Mercury.
2. You will be notified two weeks prior to the exam and provided with the following details:
a) Exam policy
b) Study Guides
c) Exam venue
d) Installation process
3. Next we will schedule your examination date.
4. A Mercury Certification expert will contact participants to assess test preparedness a week after sending exam instructions to participants.
5. Examination fees are approximately US$ 700 for Product Specialist & US$ 2500 for CPC
6. The Certification Exam will then be Proctored by a Mercury Certification representative.
7. A score of at least 70% must be attained to “Pass” the exam.
8. Upon the grading of your exam results, an official Mercury Certification pack will be forwarded to you via post within 2 weeks.

If you require any further assistance or clarification, please contact beena.rajeev@mercury.com.

Tuesday, March 24, 2009

IEEE Definitions in Software Testing

IEEE Definitions


1. Black-box testing, or 2. Functional testing: Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.


3. White-box testing, or 4. Structural testing: Testing that takes into account the internal mechanism of a system or component. Types include branch testing, path testing, and statement testing. (Also called glass-box testing.)


5. Unit testing: Testing of individual hardware or software units or groups of related units.

6. Integration testing: Testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them.

7. System testing: Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.

8. Acceptance testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system.

9. Regression testing: Selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements.

10. Anomaly: Anything observed in the documentation or operation of software that deviates from expectations based on previously verified software products or reference documents.

11. Performance testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements.

12. Formal testing: Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management.

13. Informal testing: Testing conducted in accordance with test plans and procedures that have not been reviewed and approved by a customer, user, or designated level of management.

14. Stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.

15. Test: An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component.

16. Testing: The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

17. Test item: A software item which is an object of testing.

18. Test plan: A document describing the scope, approach, resources, and schedule of intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

19. Test log: A chronological record of all relevant details about the execution of a test.

20. Test case: It has two definitions:
1. A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
2. Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.

21. Test design: Documentation specifying the details of the test approach for a software feature or combination of software features and identifying the associated tests.

22. Test procedure: It has the following definitions:
1. Detailed instructions for the set-up, execution, and evaluation of results for a given test case.
2. A document containing a set of associated instructions as in (1).
3. Documentation specifying a sequence of actions for the execution of a test.

23. Test report: A document that describes the conduct and results of the testing carried out for a system or component.

24. Test scenario: A test that verifies one specific instance of a single product feature. For example, when testing a word-processing application, one scenario would be to verify that the Paste command correctly pastes text at the beginning of a line.

Tuesday, February 24, 2009

ISO 9000-3 and ISO 9000:2000 Definitions

1. Regression Testing: Testing to determine that changes made in order to correct defects have not introduced additional defects.

2. Quality: Degree to which a set of inherent characteristics fulfills requirements.
Requirement: Need or expectation that is stated, generally implied or obligatory.

Characteristic: A distinguishing feature.
Inherent: Existing permanently in something, as opposed to assigned. Examples of assigned characteristics are the price of a product or the owner of a product.

3. Quality Control: Part of Quality Management focused on fulfilling quality requirements.

4. Quality Assurance: Part of Quality Management focused on providing confidence that quality requirements will be fulfilled.

5. Quality Management: Coordinated activities to direct and control an organization with regard to quality.

6. Defect: A serious form of an anomaly (one which affects an intended use).

7. Software, 8. Software product: The set of computer programs, procedures, and possibly associated documentation and data.

9. Software item: Any identifiable part of a software product.

10. Development: Software life cycle process that comprises the activities of requirements analysis, design, coding, integration, testing, installation and support for acceptance of software products.

11. Life cycle model: A framework containing the processes, activities, and tasks involved in the development, operation and maintenance of a software product, spanning the life of the system from the definition of its requirements to the termination of its use.

12. Incremental integration testing - continuous testing of an application as new functionality is added.

13. End-to-end testing - testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

14. Sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

15. Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

16. Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

17. Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

18. Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

19. Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.

20. Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

21. Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

22. Ad-hoc testing - a creative, informal software test that is not based on formal test plans or test cases, but often taken to mean that the testers have significant understanding of the software before testing it.

23. User acceptance testing - determining if software is satisfactory to an end-user or customer.

24. Comparison testing - comparing software weaknesses and strengths to competing products.

25. Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

26. Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

27. Reliability: Software is reliable if the user can depend on it.

28. Correctness: A program is functionally correct if it behaves according to the specification of the functions it provides. Any deviation from the specification makes the product incorrect. An incorrect program may still be reliable.

29. Usability: Easy to use, or the effort required to learn and use the software.

30. Portability: Effort required to shift the software from one operating system to another.

Components of SRS

i. Functional Requirements or constraints
ii. Design requirements or constraints
1. Architectural design
2. Detailed design
iii. Security Requirements
iv. Interface Requirements
1. Software Interface
2. Human Interface
3. Hardware Interface
v. Performance Requirements
1. Static performance
2. Dynamic Performance

31. Functional Requirements: Functions required to develop the software.

32. Architectural design: Identifying modules and the integration between modules.

33. Detailed design: Internal logic design is written in this phase. Pseudocode (algorithms) and the complete database design (triggers, procedures) are written in detailed design.

34. Software Interface: Operating system and languages used to develop the software.

35. Human Interface: User interface (design of screens).

36. Hardware Interface (design requirements): What hardware is required for the project.

37. Static performance: Identifies whether the system is single-user or multi-user.

38. Dynamic performance: Response time of a request.

ISO (International Organization for Standardization)

ISO was established in 1947 in Geneva, Switzerland. It is a non-governmental organization.

The work of ISO is to promote international standards and series related to Quality Management Systems.
The 2nd version was released in 1994 and the 3rd version in 2000.

Advantages: 1. Productivity improvement
2. Improvement of profits
3. Consistency of activities
4. New employees can be easily trained
5. International recognition
6. Marketing advantages

ISO 9000:2000-Quality Management- Fundamentals and Vocabulary

ISO 9001:2000-Quality Management- Requirements, design/development, production, installation, service.

ISO 9002:2000 - All the aspects of ISO 9001:2000 except design/development

ISO 9003:2000- Quality Management-Testing and Installation

ISO 9004:2000-Quality Management- Guidelines for performance and improvement.

For certification you have to follow the Quality Manual, Quality Procedures, Records, and work instructions, if any.

Quality Manual: A document specifying Quality Management system of an organization.

Quality procedure: A specified way of doing an activity

Quality policy: Overall intentions and direction of an organization towards quality, as formally expressed by the company.

Quality Objectives: Something aimed at related to Quality

Types of Audits
1. First-party audit or internal audit
2. Second-party audit or customer audit
3. Third-party audit or certification audit

Types of certification audits
1. Initial audit:
- Adequacy audit (documentation audit or desktop audit)
- Site audit (conformance or compliance audit)
2. Surveillance audit: conducted every 3 or 6 months
3. Re-certification audit: conducted every 3 years

Adequacy Audit: In the adequacy audit, auditors verify the manuals and procedures against ISO 9001:2000.

Site Audit: In the site audit, auditors verify whether the company is following its manuals and procedures, by interviewing or checking.

Clauses
1. Scope
2. Normative Reference
3. Definitions
4. Quality Management System
5. Management Responsibility
6. Resource Management
7. Product Realization
8. Measurement, Analysis and Improvement
7.5.1: Control of production and service provision
8.2.4: Monitoring & measurement of product
8.3: Control of nonconforming product

SEI-CMM
SEI - Software Engineering Institute
CMM - Capability Maturity Model
CMM is only for software companies.
CMM v1.0 → 1991
CMM v1.1 → 1993 [current]
By the end of 2003 the CMM levels were retired, and the Capability Maturity Model Integration (CMMI) was introduced.

Process: A process is a set of activities; process is what we do.

Process performance: A way of performing a process. Process performance determines the actual results achieved by following a software process.

Process Capability: Process capability determines the range of expected results that can be achieved by following a software process. Process capability is the ability to achieve results.

Process Maturity: Process maturity determines the extent to which a specific process is explicitly defined, managed, measured, controlled, and effective.

There are 5 levels in CMM:
1. Initial
2. Repeatable
3. Defined
4. Managed
5. Optimizing

There are 18 key process areas overall in the 5 levels.
At Level 1 there are no key process areas; even where processes exist, they are not followed consistently.
The process capability of a Level 2 organization is a disciplined process.
The process capability of a Level 3 organization is a standard and consistent process.
The process capability of a Level 4 organization is a predictable process.
The process capability of a Level 5 organization is a continuously improving process.

Key Process Areas at Level 2
At Level 2 a basic management process is established:
1. Requirements Management
2. Software Project Planning
3. Software Project Tracking & Oversight
4. Software Subcontract Management
5. Software Quality Assurance
6. Software Configuration Management

Key Process Areas at Level 3
1. Organization Process Focus
2. Organization Process Definition
3. Training Program
4. Integrated Software Management
5. Software Product Engineering: the actual development process
6. Intergroup Coordination
7. Peer Reviews: to identify defects effectively

Key Process Areas at Level 4
1. Quantitative Process Management.
2. Software Quality Management.

Key Process Areas at Level 5
1. Defect Prevention
2. Technology change Management
3. Process Change Management

Saturday, January 24, 2009

Black Box Testing Techniques
  • Equivalence Partitioning
  • Boundary Value Analysis
  • State Transitions
  • Exploratory Testing

1) Equivalence Partitioning
Partition the input domain into a finite number of equivalence classes
Create test cases to invoke as many different conditions as possible
It is based on the premise that if a test case in an equivalence class detects an error, all other test cases in the same class detect the same error

Step 1
Identify the input parameters
Identify the allowable classes of input
Select a representative value for each class
Create test cases to cover as many classes as possible

Step 2
Select invalid values for each class
Create additional test cases to cover invalid classes

Step 3
Identify the output parameters
Identify the possible classes of output
Create test cases to generate each possible output

Example:
Strategy:
• Identify input equivalence classes
- Based on conditions on inputs/outputs in the specification/description
- Both valid and invalid input equivalence classes
- Based on heuristics and experience
• "Input x in [1..10]" → classes: x < 1, 1 <= x <= 10, x > 10
• "Enumeration A, B, C" → classes: A, B, C, not in {A, B, C}
• Define one or a couple of test cases for each class
- Test cases that cover valid equivalence classes
- Test cases that cover at most one invalid equivalence class
• Test a function for calculation of the absolute value of an integer
• Equivalence classes: x < 0, x >= 0
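
A minimal Python sketch of the absolute-value example, with one representative test value per equivalence class (an invalid, non-integer class is added here for illustration):

def abs_int(x):
    # Function under test: absolute value of an integer.
    if not isinstance(x, int):
        raise TypeError("integer input required")
    return -x if x < 0 else x

# One representative value per equivalence class:
assert abs_int(-7) == 7   # valid class: x < 0
assert abs_int(4) == 4    # valid class: x >= 0
try:
    abs_int("4")          # invalid class: non-integer input
except TypeError:
    pass                  # expected
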
2) Boundary Value Analysis

Boundary Value Analysis technique tests conditions on, above and beneath the edges of input and output Equivalence Classes

Test cases are created to test the edge of each Equivalence Class
Test cases are created to test edges of both input and output classes

For each equivalence class identified:
Select a value on the class boundary
Pick a value just under the boundary
Pick a value just over the boundary
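
Continuing the made-up "x in [1..10]" example from the equivalence partitioning strategy above, a boundary value analysis sketch in Python picks values on, just under, and just over each edge:

def accept(x):
    # Example input condition: x must lie in [1..10].
    return 1 <= x <= 10

# Values on, just under, and just over each boundary:
cases = [(0, False), (1, True), (2, True),    # lower edge
         (9, True), (10, True), (11, False)]  # upper edge
for value, expected in cases:
    assert accept(value) is expected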

3) Finite State Testing Process:
Create test cases to:
Force each transition
Force each action
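
A hedged sketch of forcing each transition: the door state machine below is invented for illustration, and the tests drive every legal transition plus one illegal event to force the error action.

# Transition table: (state, event) -> new state
TRANSITIONS = {
    ("closed", "open"): "open",
    ("open", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def step(state, event):
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"illegal event {event!r} in state {state!r}")
    return TRANSITIONS[(state, event)]

# Force each transition at least once:
assert step("closed", "open") == "open"
assert step("open", "close") == "closed"
assert step("closed", "lock") == "locked"
assert step("locked", "unlock") == "closed"
# And one illegal transition to exercise the error action:
try:
    step("open", "lock")
except ValueError:
    pass  # expected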

4) Exploratory Testing
Some people can design test cases that will discover failures based on their experience
Allow these people to write test cases
Address complex areas
Address changes
Report failures, faults and errors so that experience level may grow