Search

Sunday, December 07, 2008

What is Acceptance Testing?
Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.

What is Accessibility Testing?
Verifying a product is accessible to people with disabilities (e.g. deaf, blind, or otherwise disabled).

What is Ad Hoc Testing?
A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

What is Agile Testing?
Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

What is Application Binary Interface (ABI)?
A specification defining requirements for portability of applications in binary form across different system platforms and environments.

What is Application Programming Interface (API)?
A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

What is Automated Software Quality (ASQ)?
The use of software tools, such as automated testing tools, to improve software quality.

What is Automated Testing?
Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
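A minimal sketch of that second definition: a script drives the function under test, compares actual outcomes to predicted outcomes, and reports results without manual intervention. The names `add` and `run_suite` are illustrative, not from any particular tool.

```python
def add(a, b):
    """The unit under test (a stand-in example)."""
    return a + b

def run_suite(cases):
    """Execute each case and compare actual vs. expected outcomes."""
    results = []
    for inputs, expected in cases:
        actual = add(*inputs)
        results.append((inputs, expected, actual, actual == expected))
    return results

# Each case pairs the test inputs with the predicted outcome:
suite = [((2, 3), 5), ((-1, 1), 0), ((0, 0), 0)]
report = run_suite(suite)
assert all(passed for *_, passed in report)
```

In a real tool the suite, the comparison, and the reporting would all be configurable, but the control loop is the same idea.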

What is Backus-Naur Form?
A metalanguage used to formally describe the syntax of a language.
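For example, a minimal BNF grammar for unsigned integers (an illustrative grammar, not taken from any standard):

```
<integer> ::= <digit> | <digit> <integer>
<digit>   ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"
```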

What is Basic Block?
A sequence of one or more consecutive, executable statements containing no branches.

What is Basis Path Testing?
A white box test case design technique that uses the algorithmic flow of the program to design tests.

What is Basis Set?
The set of tests derived using basis path testing.

What is Baseline?
The point at which some deliverable produced during the software engineering process is put under formal change control.

What will you do during the first day of the job?
What would you like to do five years from now?
Tell me about the worst boss you've ever had.
What are your greatest weaknesses?
What are your strengths?
What is a successful product?
What do you like about Windows?
What is good code?
What are basic, core, practices for a QA specialist?
What do you like about QA?
What has not worked well in your previous QA experience and what would you change?
How will you begin to improve the QA process?
What is the difference between QA and QC?
What is UML and how to use it for testing?
What is Beta Testing?
Testing of a pre-release of a software product, conducted by customers.
What is Binary Portability Testing?
Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
What is Black Box Testing?
Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
What is Bottom Up Testing?
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
What is Boundary Testing?
Test which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests).
What is Bug?
A fault in a program which causes the program to perform in an unintended or unanticipated manner.
What is Boundary Value Analysis?
BVA is similar to Equivalence Partitioning but focuses on "corner cases" or values that are usually just inside or just outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
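A sketch of boundary value analysis for the range given in the text (-100 to 1000); `in_range` is a hypothetical function under test, not from any library.

```python
def in_range(n):
    """Hypothetical unit under test: accepts values in -100..1000."""
    return -100 <= n <= 1000

# BVA clusters test inputs at the edges: the boundaries themselves
# plus the first value just outside each boundary.
boundary_cases = {
    -101: False,  # just below the lower boundary
    -100: True,   # lower boundary
    1000: True,   # upper boundary
    1001: False,  # just above the upper boundary
}
for value, expected in boundary_cases.items():
    assert in_range(value) == expected, value
```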
What is Branch Testing?
Testing in which all branches in the program source code are tested at least once.
What is Breadth Testing?
A test suite that exercises the full functionality of a product but does not test features in detail.
What is CAST?
Computer Aided Software Testing.
What is CMMI?
What do you like about computers?
Do you have a favourite QA book? More than one? Which ones? And why.
What is the responsibility of programmers vs QA?
What are the properties of a good requirement?
How do you test if we have minimal or no documentation about the product?
What are all the basic elements in a defect report?
Is an "A fast database retrieval rate" a testable requirement?
What is software quality assurance?
What is the value of a testing group? How do you justify your work and budget?
What is the role of the test group vis-à-vis documentation, tech support, and so forth?
How much interaction with users should testers have, and why?
How should you learn about problems discovered in the field, and what should you learn from those problems?
What are the roles of glass-box and black-box testing tools?
What issues come up in test automation, and how do you manage them?
What is Capture/Replay Tool?
A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
What is CMM?
The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
What is Cause Effect Graph?
A graphical representation of inputs and the associated outputs effects which can be used to design test cases.
What is Code Complete?
Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
What is Code Coverage?
An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
What is Code Inspection?
A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
What is Code Walkthrough?
A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
What is Coding?
The generation of source code.
What is Compatibility Testing?
Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
What is Component?
A minimal software item for which a separate specification is available.
What is Component Testing?
See the question what is Unit Testing.
What is Concurrency Testing?
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
What is Conformance Testing?
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
What is Context Driven Testing?
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
What development model should programmers and the test group use?
How do you get programmers to build testability support into their code?
What is the role of a bug tracking system?
What are the key challenges of testing?
Have you ever completely tested any part of a product? How?
Have you done exploratory or specification-driven testing?
Should every business test its software the same way?
Discuss the economics of automation and the role of metrics in testing.
Describe components of a typical test plan, such as tools for interactive products and for database products, as well as cause-and-effect graphs and data-flow diagrams.
When have you had to focus on data integrity?
What are some of the typical bugs you encountered in your last assignment?
How do you prioritize testing tasks within a project?
How do you develop a test plan and schedule? Describe bottom-up and top-down approaches.
When should you begin test planning?
When should you begin testing?
What is Conversion Testing?
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
What is Cyclomatic Complexity?
A measure of the logical complexity of an algorithm, used in white-box testing.
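The usual formula is V(G) = E - N + 2P, where E is the number of edges and N the number of nodes in the control-flow graph, and P the number of connected components (1 for a single routine). A sketch, with an illustrative graph for an if/else inside a loop:

```python
def cyclomatic_complexity(graph, components=1):
    """V(G) = E - N + 2P for a control-flow graph given as adjacency lists."""
    nodes = len(graph)
    edges = sum(len(successors) for successors in graph.values())
    return edges - nodes + 2 * components

# Hypothetical routine: a loop whose body contains an if/else.
cfg = {
    "entry": ["loop"],
    "loop":  ["if", "exit"],
    "if":    ["then", "else"],
    "then":  ["loop"],
    "else":  ["loop"],
    "exit":  [],
}
print(cyclomatic_complexity(cfg))  # 7 edges, 6 nodes -> V(G) = 3
```

Two decisions (the loop condition and the if) give V(G) = 3, which also bounds the number of basis paths needed in basis path testing.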
What is Data Dictionary?
A database that contains definitions of all data items defined during analysis.
What is Data Flow Diagram?
A modeling notation that represents a functional decomposition of a system.
What is Data Driven Testing?
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
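A sketch of data-driven testing: the test action stays fixed while the data rows come from an external file. The CSV content and the `to_upper` function are illustrative; a real suite would read a file or spreadsheet from disk.

```python
import csv
import io

def to_upper(s):
    """Hypothetical unit under test."""
    return s.upper()

# Stand-in for an external CSV of input/expected pairs:
csv_data = io.StringIO("input,expected\nabc,ABC\nHello,HELLO\n")

for row in csv.DictReader(csv_data):
    # One fixed action, parameterized by each data row.
    assert to_upper(row["input"]) == row["expected"], row
```

Adding a test case then means adding a data row, not writing new code.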
What is Debugging?
The process of finding and removing the causes of software failures.
What is Defect?
Nonconformance to requirements or functional/program specification.
What is Dependency Testing?
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
What is Depth Testing?
A test that exercises a feature of a product in full detail.
What is Dynamic Testing?
Testing software through executing it. See also Static Testing.
What is Emulator?
A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
What is Endurance Testing?
Checks for memory leaks or other problems that may occur with prolonged execution.
What is End-to-End testing?
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
What is Equivalence Class?
A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
What is Equivalence Partitioning?
A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
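A sketch of equivalence partitioning for a hypothetical age validator that accepts ages 18 to 65: one representative value per class stands in for the whole class.

```python
def is_eligible(age):
    """Hypothetical unit under test: valid ages are 18..65."""
    return 18 <= age <= 65

# One valid and two invalid equivalence classes, each represented
# by a single value chosen from inside the class.
partitions = {
    "below range (invalid)": (10, False),
    "within range (valid)":  (40, True),
    "above range (invalid)": (70, False),
}
for name, (representative, expected) in partitions.items():
    assert is_eligible(representative) == expected, name
```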
What is Exhaustive Testing?
Testing which covers all combinations of input values and preconditions for an element of the software under test.
What is Functional Decomposition?
A technique used during planning, analysis and design; creates a functional hierarchy for the software.
What is Functional Specification?
A document that describes in detail the characteristics of the product with regard to its intended features.
What is Functional Testing?
Testing the features and operational behavior of a product to ensure they correspond to its specifications.
Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
See also What is Black Box Testing.
What is Glass Box Testing?
A synonym for White Box Testing.
Do you know of metrics that help you estimate the size of the testing effort?
How do you scope out the size of the testing effort?
How many hours a week should a tester work?
How should your staff be managed? How about your overtime?
How do you estimate staff requirements?
What do you do (with the project tasks) when the schedule fails?
How do you handle conflict with programmers?
How do you know when the product is tested well enough?
What characteristics would you seek in a candidate for test-group manager?
What do you think the role of test-group manager should be? Relative to senior management? Relative to other technical groups in the company? Relative to your staff?
How do your characteristics compare to the profile of the ideal manager that you just described?
How does your preferred work style work with the ideal test-manager role that you just described? What is different between the way you work and the role you described?
Who should you hire in a testing group and why?
What is Gorilla Testing?
Testing one particular module or functionality heavily.
What is Gray Box Testing?
A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
What is High Order Tests?
Black-box tests conducted once the software has been integrated.
What is Independent Test Group (ITG)?
A group of people whose primary responsibility is software testing.
What is Inspection?
A group review quality improvement process for written material. It consists of two aspects; product (document itself) improvement and process improvement (of both document production and inspection).
What is Integration Testing?
Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
What is Installation Testing?
Confirms that the application under test installs, upgrades, and uninstalls correctly under the supported configurations, and that it functions correctly after installation.
What is Load Testing?
See Performance Testing.
What is Localization Testing?
This term refers to adapting software for a specific locality or locale, including language, formats, and cultural conventions, and testing that adaptation.
What is Loop Testing?
A white box testing technique that exercises program loops.
What is Metric?
A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.
What is Monkey Testing?
Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.
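A related automated form is throwing random inputs at the software and checking only that nothing crashes. A sketch, where `parse_quantity` is a hypothetical function under test:

```python
import random
import string

def parse_quantity(text):
    """Hypothetical unit under test: parse a non-negative quantity."""
    try:
        return max(0, int(text))
    except ValueError:
        return None  # bad input is rejected, not a crash

random.seed(0)  # reproducible "random" bashing
for _ in range(1000):
    junk = "".join(random.choice(string.printable) for _ in range(8))
    parse_quantity(junk)  # any unexpected exception here fails the run
```

No outputs are checked; the only oracle is "it did not crash", which is exactly the weakness (and the cheapness) of monkey testing.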
What is Negative Testing?
Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.
What is Path Testing?
Testing in which all paths in the program source code are tested at least once.
What is Performance Testing?
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
What is Positive Testing?
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
What is Quality Assurance?
All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
What is Quality Audit?
A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
What is Quality Circle?
A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
What is Quality Control?
The operational techniques and the activities used to fulfill and verify requirements of quality.
What is Quality Management?
That aspect of the overall management function that determines and implements the quality policy.
What is Quality Policy?
The overall intentions and direction of an organization as regards quality as formally expressed by top management.
What is Quality System?
The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
What is Race Condition?
A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
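A sketch of the mechanism that prevents the race: two threads perform a read-modify-write on a shared counter, and a lock moderates the simultaneous access so the result is deterministic. The names are illustrative.

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:  # moderates simultaneous access to the shared resource
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Without the lock, `counter += 1` is a non-atomic read-modify-write
# and updates could be lost; with it, the total is exact.
assert counter == 200_000
```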
What is Ramp Testing?
Continuously raising an input signal until the system breaks down.
What is Recovery Testing?
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
What is Regression Testing?
Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
What is Release Candidate?
A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
What is Sanity Testing?
Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.
What is Scalability Testing?
Performance testing focused on ensuring the application under test gracefully handles increases in work load.
What is the role of metrics in comparing staff performance in human resources management?
How do you estimate staff requirements?
What do you do (with the project staff) when the schedule fails?
Describe some staff conflicts you've handled.
Why did you ever become involved in QA/testing?
What is the difference between testing and Quality Assurance?
What was a problem you had in your previous assignment (testing if possible)? How did you resolve it?
What are two of your strengths that you will bring to our QA/testing team?
What do you like most about Quality Assurance/Testing?
What do you like least about Quality Assurance/Testing?
What is the Waterfall Development Method and do you agree with all the steps?
What is the V-Model Development Method and do you agree with this model?
What is Security Testing?
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
What is Smoke Testing?
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
What is Soak Testing?
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
What is Software Requirements Specification?
A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
What is Software Testing?
A set of activities conducted with the intent of finding errors in software.
What is Static Analysis?
Analysis of a program carried out without executing the program.
What is Static Analyzer?
A tool that carries out static analysis.
What is Static Testing?
Analysis of a program carried out without executing the program.
What is Storage Testing?
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
What is Stress Testing?
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
What is Structural Testing?
Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.
What is System Testing?
Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
What is Testability?
The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
What is Testing?
The process of exercising software to verify that it satisfies specified requirements and to detect errors.
The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
What is Test Automation?
It is the same as Automated Testing.
What is Test Bed?
An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
What is Test Case?
Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements testing, test steps, verification steps, prerequisites, outputs, test environment, etc.
A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
What is Test Driven Development?
Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. roughly as many lines of test code as production code.
What is Test Driver?
A program or test tool used to execute tests. Also known as a Test Harness.
What is Test Environment?
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.
What is Test First Design?
Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
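A sketch of the discipline, using the stock FizzBuzz exercise (an illustrative example, not from the text): the unit test is written first and fails until the production code below it is written to satisfy it.

```python
# Step 1: write the test before any production code exists.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(2) == "2"

# Step 2: only now write production code, just enough to pass the test.
def fizzbuzz(n):
    out = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
    return out or str(n)

test_fizzbuzz()  # all assertions pass once the implementation exists
```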
What is a "Good Tester"?
Could you tell me two things you did in your previous assignment (QA/Testing related hopefully) that you are proud of?
List 5 words that best describe your strengths.
What are two of your weaknesses?
What methodologies have you used to develop test cases?
In an application currently in production, one module of code is being modified. Is it necessary to re-test the whole application or is it enough to just test functionality associated with that module?
How do you go about going into a new organization? How do you assimilate?
Define the following and explain their usefulness: Change Management, Configuration Management, Version Control, and Defect Tracking.
What is ISO 9000? Have you ever been in an ISO shop?
When are you done testing?
What is the difference between a test strategy and a test plan?
What is ISO 9003? Why is it important?
What is Test Harness?
A program or test tool used to execute tests. Also known as a Test Driver.
What is Test Plan?
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.
What is Test Procedure?
A document providing detailed instructions for the execution of one or more test cases.
What is Test Script?
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
What is Test Specification?
A document specifying the test approach for a software feature or combination or features and the inputs, predicted results and execution conditions for the associated tests.
What is Test Suite?
A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
What is Test Tools?
Computer programs used in the testing of a system, a component of the system, or its documentation.
What is Thread Testing?
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
What is Top Down Testing?
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
What is Total Quality Management?
A company commitment to develop a process that achieves high quality product and customer satisfaction.
What is Traceability Matrix?
A document showing the relationship between Test Requirements and Test Cases.
What is Usability Testing?
Testing the ease with which users can learn and use a product.
What is Use Case?
The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.
What is Unit Testing?
Testing of individual software components.
What is Validation?
The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection, and reviewing.
What is Verification?
The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection, and reviewing.
What is Volume Testing?
Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
What is Walkthrough?
A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.
What is White Box Testing?
Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.
What is Workflow Testing?
Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.
What are ISO standards? Why are they important?
What is IEEE 829? (This standard is important for Software Test Documentation-Why?)
What is IEEE? Why is it important?
Do you support automated testing? Why?
We have a testing assignment that is time-driven. Do you think automated tests are the best solution?
What is your experience with change control? Our development team has only 10 members. Do you think managing change is such a big deal for us?
Are reusable test cases a big plus of automated testing and explain why.
Can you build a good audit trail using Compuware's QACenter products? Explain why.
How important is Change Management in today's computing environments?
Do you think tools are required for managing change? Explain, and please list some tools/practices which can help you manage change.
We believe in ad-hoc software processes for projects. Do you agree with this? Please explain your answer.
When is a good time for system testing?
Are regression tests required or do you feel there is a better use for resources?
Our software designers use UML for modeling applications. Based on their use cases, we would like to plan a test strategy. Do you agree with this approach, or would this mean more effort for the testers?
Tell me about a difficult time you had at work and how you worked through it.
Give me an example of something you tried at work but did not work out so you had to go at things another way.
How can one file-compare future-dated output files from a program which has changed against the baseline run which used the current date for input? The client does not want to mask dates on the output files to allow compares.
Test Automation
What automating testing tools are you familiar with?
How did you use automating testing tools in your job?
Describe some problem that you had with automating testing tool.
How do you plan test automation?
Can test automation improve test effectiveness?
What is data-driven automation?
What are the main attributes of test automation?
Does automation replace manual testing?
How will you choose a tool for test automation?
How will you evaluate the tool for test automation?
What are the main benefits of test automation?
What could go wrong with test automation?
How will you describe testing activities?
Which testing activities may you want to automate?

Monday, November 03, 2008

Software Testing concepts and Vocabulary

SDLC
Quality Assurance
Quality Control
The V Model
White Box Testing
Black Box Testing
Unit Testing
Integration Testing
Exploratory Testing
Smoke Testing
Adhoc Testing
Regression Testing
Progression Testing




System Testing
Performance
Load
Recovery
Acceptance Testing
Test Design Techniques
Test Automation
Test Planning
Risks
Estimation
Test Focus
Entry & Exit Criteria
Test Reporting
Coverage > Traceability Matrix
Reviews
Inspections
Walkthroughs
Formal Technical Reviews

Friday, October 31, 2008

BLACK – BOX TESTING

Black box testing attempts to find errors in the following categories:
1. Incorrect or missing functions
2. Interface errors
3. Errors in data structures or external database access
4. Behavior or performance errors
5. Initialization and termination errors

I. Graph Based Testing: The first step in black box testing is to understand the objects that are modeled in software and the relationships that connect those objects. Testing begins with creating a graph of important objects and their relationships, and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered. We start with creating a graph:

Nodes ----> objects
Directed links ----> relationships between objects
Node weights ----> attributes of the objects

Graph based testing begins with the definition of all nodes and node weights, i.e. objects and attributes are identified. The data model can be used as a starting point. Based on this, we can conduct the following behavioral testing methods: transaction flow modeling, finite state modeling, and data flow modeling. Now we must go for node coverage and link coverage to ensure that all objects and their relationships are exercised.
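Node and link coverage can be measured mechanically. A sketch, assuming an illustrative object graph and a set of paths exercised by the tests (the screen names are hypothetical):

```python
# Object graph as adjacency lists: node -> list of linked nodes.
graph = {
    "Login":  ["Menu"],
    "Menu":   ["Report", "Logout"],
    "Report": ["Menu"],
    "Logout": [],
}

# Paths actually traversed by the test cases so far:
paths = [["Login", "Menu", "Logout"]]

links = {(a, b) for a, succs in graph.items() for b in succs}
covered_links = {(p[i], p[i + 1]) for p in paths for i in range(len(p) - 1)}
covered_nodes = {node for p in paths for node in p}

print("node coverage:", len(covered_nodes), "/", len(graph))
print("missed links:", links - covered_links)
```

The missed links (here, Menu to Report and back) point directly at the relationships still needing a test.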


II. Equivalence Partitioning: Here the input domain is divided into classes of data from which test cases can be derived. An ideal test case is one which single-handedly uncovers a class of errors. If the input class defines a range, one valid and two invalid classes are defined. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined. If an input condition is boolean, one valid and one invalid class are defined.

III. Boundary Value Analysis: Boundary value analysis leads to a selection of test cases that exercise bounding values. BVA leads to the selection of test cases at the edges of the class. BVA also derives test cases from the output domain. If an input specifies a range of values bounded by a and b, then test cases should be designed with values a and b, and just above and just below a and b. Test cases should also be designed to create an output report that produces the maximum (and minimum) allowable number of table entries.

IV. Comparison Testing (Back-to-back testing): Used when the reliability of the software is very critical. In such situations, redundant software and hardware are used to minimize the possibility of error. When redundant software is developed, separate software engineering teams develop independent versions of an application using the same specification. Each version can then be tested with the same test data to ensure that all provide identical output. If the output from each version is the same, it is assumed that all implementations are correct. If the outputs differ, each of the applications is investigated to determine if a defect in one or more versions is responsible for the difference. If the specification from which all versions have been developed is itself in error, all versions will provide identical but incorrect results, and comparison testing will fail to detect the error.
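Back-to-back testing can be sketched in a few lines: two independently written versions of the same specification are fed identical inputs and their outputs are compared. Both implementations here are illustrative (the spec being "sum the integers 1..n").

```python
def version_a(n):
    """Team A's implementation: iterative summation."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def version_b(n):
    """Team B's implementation: closed-form formula for the same spec."""
    return n * (n + 1) // 2

# Same test data to both versions; any divergence flags a defect in
# at least one of them (or an ambiguity in the spec).
for n in range(100):
    assert version_a(n) == version_b(n), f"versions diverge at n={n}"
```

As the text notes, agreement is not proof of correctness: if the shared specification is wrong, both versions agree on the wrong answer.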

Wednesday, September 03, 2008

SDLC

A well-structured approach is required for the successful creation of any product. The System Development Life Cycle (SDLC) establishes a logical order in which an information system can be developed successfully; in other words, it is about delivering a quality system.
A quality system, in short, will:
Meet or exceed the agreed customer expectations
Fall within the agreed cost estimates
Work effectively and efficiently within the planned infrastructure.
The major SDLC models that have been used in the IT domain are:
Waterfall Model
Iterative Model
Spiral or Prototyping models, and the ones that follow the Agile methodology, like XP (Extreme Programming)

Waterfall Model:

Sunday, August 31, 2008

The V-Concept of Testing - The various steps involved in the V-concept are listed below:

Step 4: Test Software Design
This step tests both external and internal design, primarily through verification techniques. The testers are concerned that the design will achieve the objectives of the requirements, and that the design is effective and efficient on the designated hardware.

Step 5: Program (Build) Phase Testing
The method chosen to build the software from the internal design document will determine the type and extent of tests needed. As construction becomes more automated, less testing is required during this phase. However, if software is constructed using the waterfall process, it is subject to error and should be verified. Experience has shown that it is significantly cheaper to identify defects during the construction phase than through dynamic testing during the test execution step.

Step 6: Execute and Record Results
This involves testing the code in a dynamic state. The approach, methods, and tools specified in the test plan are used to validate that the executable code in fact meets the stated software requirements and the structural specifications of the design.

Step 7: Acceptance Test
Acceptance testing enables users to evaluate the applicability and usability of the software in performing their day-to-day job functions. This tests what the user believes the software should perform, as opposed to what the documented requirements state the software should perform.


Step 8: Report Test Results
Test reporting is a continuous process. It may be both oral and written. It is important that defects and concerns be reported to the appropriate parties as early as possible, so that corrections can be made at the lowest possible cost.

Step 9: Software Installation
Once the test team has confirmed that the software is ready for production use, the ability to execute that software in a production environment should be tested. This tests the interface to operating software, related software, and operating procedures.

Step 10: Test Software Changes
While this is shown as Step 10, in the context of performing maintenance after the software is implemented, the concept is also applicable to changes throughout the implementation process. Whenever requirements change, the test plan must change, and the impact of that change on software systems must be tested and evaluated.

Step 11: Evaluate Test Effectiveness
Testing improvement can best be achieved by evaluating the effectiveness of testing at the end of each software test assignment. While this assessment is primarily performed by the testers, it should involve the developers, the users of the software, and quality assurance professionals, if that function exists in the IT organization.

Thursday, July 31, 2008



The V-Concept of Testing - Points to Ponder on Agile

1. Cost/Schedule Impact – Agile does NOT have a cost/schedule advantage over a typical waterfall model, assuming the requirements remain stable over the duration of the project. Where Agile makes a difference: iterative development addresses the impact of changing requirements far better than waterfall; multiple iterations and continuous feedback from the customer reduce the risk of severe defects surfacing during customer review; and continuous integration and validation mean that defect fixes and minor requirement changes during development can be accommodated within the iteration.

2. Iterative Development – Split the development phase into multiple iterations, depending on the total effort and risk of the application. Given our resource situation, iterations should ideally be no more than 8-12 weeks. The first iteration, while focusing on the framework and infrastructure components, should include at least some UI-intensive scenarios, to facilitate early feedback from the client.

3. Development Focus – Test-driven design and programming: Agile depends heavily on the premise that the developed code is well tested. The programmers should write code that is easily testable from an object-oriented perspective. Automated and complete unit testing: code coming out of the developer's workshop should have undergone complete unit testing amounting to 100% of achievable code coverage (who certifies this?). Emphasis is also put on an automated unit test framework, so that multiple iterations can be executed with minimal additional effort. Whenever a new code unit is tested, test the previously released units as a form of regression. Pair programming: can be explored based on cost and availability of skilled resources.
4. Continuous Integration – Frequent incremental integration, as opposed to big-bang integration:
i. Eliminates high-risk integration issues which may lie hidden till the end in a big-bang integration scenario.
ii. Gives a quick turnaround time for integration defects, so development or testing is not held up for long periods.
Unit-tested code: only 100% unit-tested code qualifies for continuous integration; any integration issues should NOT be due to code defects missed during unit testing. Frequency: varies depending on the maturity of the team, the complexity of requirements, the percentage of UI components in the application, etc. In our scenario, the recommended target would be at least two integrations a week, after the initial build on the integration box. Validation: each integration shall be smoke tested, and milestones shall be system tested.


5. Integration Team – A small team of developers will work closely with QA to manage the continuous integration process and provide test support. The responsibilities of the team will be: controlling the contents of each integration; sorting out integration issues; and fixing defects found during testing.

6. Testing – Agile relies heavily on "nipping defects in the bud". Agile does not define a single, intensive integration testing phase, for the simple reason that there is no "integration phase". How do we validate the continuously integrated build, then? By using the system test plan instead, to test minor milestones:
i. Milestones – Define milestones that comprise a small number of related functionalities which can be tested independently. Ideally, the integrations and milestones should be planned in such a way that every three or four integrations add up to a milestone.
ii. The system test plan for a milestone should be prepared and reviewed at all levels before the milestone is reached.
iii. At the end of 2 or 3 milestones, do a full regression of the previous milestones.
Defect fixes – Defects encountered during system testing should be fixed during the integration for the next milestone, before a lot more functionality gets added on top of the buggy code. This ensures that by the time the application (or the iteration) reaches QA, the number of defects is minimal.
7. QA Involvement – The QA team gets involved from the requirements phase itself. An experienced QA resource shall be involved during initial requirements gathering and take control of requirement management from the QA side. From the end of the initial requirements phase, QA plan preparation will go hand in hand with design. The QA plan should be extremely granular, containing all of the system and integration test scenarios in detail. QA will be responsible for validating the milestones during continuous integration.

8. Team Composition – The development team should aim for at least a 50% mix of J2EE resources with quality experience in executing web projects and in using the tools identified. The QA team should have 50% of its resources well experienced in requirements handling, in the preparation and management of the QA plan from requirements/design, and in Agile testing processes.

Tuesday, June 03, 2008

The V-Concept of Testing

The V-concept of testing details the sequence in which testing should be performed. Life-cycle testing is performed against the deliverables at pre-determined, specified points; the SDLC has to be pre-determined for this to happen.
The V-concept recommends that both the system development process and the system test process start at the same point, referring to the same information. The development team has the responsibility of documenting the requirements for development purposes, and the test team can use that documentation for testing purposes as well. In the V-testing concept, your project's Do and Check procedures slowly converge from start to finish, which indicates that as the Do team attempts to implement a solution, the Check team concurrently develops a process to minimize or eliminate the risk. If the two groups work closely together, the high level of risk at a project's inception will decrease to an acceptable level by the project's conclusion.

The various steps involved in V-concept are listed below:

Step 1: Assess Development Plan and Status
This first step is a prerequisite to building the VV&T plan used to evaluate the implemented software solution. During this step, testers challenge the completeness and correctness of the development plan. Based on the extensiveness and completeness of the project plan, the testers can estimate the amount of resources they will need to test the implemented software solution.

Step 2: Develop the Test Plan
Forming the plan for testing follows the same pattern as any software planning process. The structure of all plans should be the same, but the content will vary based on the degree of risk the testers perceive as associated with the software being developed.

Step 3: Test Software Requirements
Incomplete, inaccurate, or inconsistent requirements lead to most software failures. The inability to get requirements right during the requirements-gathering phase can also increase the cost of implementation significantly. Testers, through verification, must determine that the requirements are accurate and complete, and that they do not conflict with one another.

Saturday, May 31, 2008

WHITE BOX TESTING

Using the white box testing method, the software engineer can derive test cases that:
1) Guarantee that all independent paths within a module have been exercised at least once.
2) Exercise all logical decisions on their true and false sides.
3) Exercise all loops at their boundaries and within their operational bounds.
4) Exercise internal data structures to ensure their validity.

CONTROL STRUCTURE TESTING: This refers to a family of white box testing methods.
1) Basis Path Testing: This enables the tester to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement at least once during testing. We may use FLOW GRAPH NOTATION as a useful tool for understanding control flow and illustrating the approach. CYCLOMATIC COMPLEXITY is a software metric that provides a quantitative measure of the logical complexity of a program. The value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program, and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been exercised at least once.
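As a rough sketch (in Python, not tied to any particular testing tool), cyclomatic complexity can be computed from a flow graph with V(G) = E - N + 2; the graph below, for a single if/else, is illustrative:

```python
def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2 for a single connected flow graph,
    where E is the number of edges and N the number of nodes."""
    return len(edges) - len(nodes) + 2

# Flow graph of a function with one if/else (node numbers are illustrative):
# node 1 branches to 2 (true) and 3 (false); both rejoin at node 4.
nodes = {1, 2, 3, 4}
edges = {(1, 2), (1, 3), (2, 4), (3, 4)}
print(cyclomatic_complexity(edges, nodes))  # 2 -> two independent paths
```

Here V(G) = 4 - 4 + 2 = 2, so two test cases (one per branch) suffice to cover the basis set.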


GRAPH MATRICES can be used to develop software tools that assist in basis path testing.
2) Condition Testing: This test case design method exercises the logical conditions contained in a program module. Its advantages are: (i) measurement of the test coverage of a condition is simple; (ii) the test coverage of conditions in a program provides guidance for generating additional tests for the program. BRANCH TESTING: for a compound condition C, the true and false branches of C, and every simple condition in C, need to be executed at least once. DOMAIN TESTING requires three or four tests to be derived for a relational expression.
3) Data Flow Testing: This method selects test paths of a program according to the locations of definitions and uses of variables in the program.
4) Loop Testing:
1) Start at the innermost loop; set all other loops to minimum values.
2) Conduct simple-loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter (loop counter) values. Add other tests for out-of-range or excluded values.
3) Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values.
4) Continue until the last loop has been tested.
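The simple-loop part of the steps above can be illustrated in Python; the function and iteration counts below are hypothetical, chosen only to show the classic test values (0, 1, 2, typical, m-1, m, m+1):

```python
def sum_first(values, n):
    """Sum the first n items of values; n is capped at len(values)."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

data = [1, 2, 3, 4, 5]           # maximum iteration count m = 5
# Simple-loop tests: skip the loop entirely, one pass, two passes,
# a typical count, and the boundary values m - 1, m, and m + 1.
assert sum_first(data, 0) == 0   # zero iterations
assert sum_first(data, 1) == 1   # one iteration
assert sum_first(data, 2) == 3   # two iterations
assert sum_first(data, 3) == 6   # typical count
assert sum_first(data, 4) == 10  # m - 1
assert sum_first(data, 5) == 15  # m
assert sum_first(data, 6) == 15  # m + 1 (out of range, capped)
print("all simple-loop tests passed")
```

The m+1 case is the one most likely to expose an off-by-one or missing-bounds-check defect.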

Thursday, April 03, 2008

TEST CASE DESIGN

Any engineered product can be tested in one of two ways.
1) Knowing the specified functions that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function. This is called BLACK BOX TESTING. It alludes to tests that are conducted at the software interface. Although they are designed to uncover errors, black box tests are also used to demonstrate that software functions are operational, that input is properly accepted and output is correctly produced, and that the integrity of external information (e.g., data) is maintained. A black box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software.


2) Knowing the internal workings of a product, tests can be conducted to ensure that all internal operations are performed according to specification and all internal components have been adequately exercised. This is called WHITE BOX TESTING or GLASS BOX TESTING. White box testing involves a close examination of procedural detail. Logical paths through the software are tested by providing test cases that exercise specific sets of conditions and/or loops. The status of the program can be examined at various points to determine whether the expected or asserted status corresponds to the actual status. Here, test case design uses the structure of the procedural design.


Sunday, February 03, 2008

BLACK BOX TESTING

Black box testing attempts to find errors in the following categories:
1. Incorrect or missing functions
2. Interface errors
3. Errors in data structures or external database access
4. Behavior or performance errors
5. Initialization and termination errors

I. Graph-Based Testing: The first step in black box testing is to understand the objects that are modeled in the software and the relationships that connect these objects. Testing begins with creating a graph of important objects and their relationships, and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered. We start by creating a graph where:

Nodes ----> objects
Node weights ----> attributes of the objects
Directed links ----> relations between objects
Graph-based testing begins with the definition of all nodes and node weights; i.e., the objects and attributes are identified. The data model can be used as a starting point. Based on this, we can conduct the following behavioral testing methods: transaction flow modeling, finite state modeling, and data flow modeling. We must then go for node coverage and link coverage to ensure that all objects and their relations are exercised.
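Node and link coverage can be sketched in Python as simple set checks; the object graph below (a word-processor-style example) and its names are purely illustrative:

```python
# Hypothetical object graph: nodes are objects, directed links are relations.
nodes = {"NewFileMenuSelect", "DocumentWindow", "DocumentText"}
links = {("NewFileMenuSelect", "DocumentWindow"),
         ("DocumentWindow", "DocumentText")}

# Each test case records the objects and relationships it exercised.
tests = [
    {"nodes": {"NewFileMenuSelect", "DocumentWindow"},
     "links": {("NewFileMenuSelect", "DocumentWindow")}},
    {"nodes": {"DocumentWindow", "DocumentText"},
     "links": {("DocumentWindow", "DocumentText")}},
]

covered_nodes = set().union(*(t["nodes"] for t in tests))
covered_links = set().union(*(t["links"] for t in tests))
print(covered_nodes == nodes)   # True -> node coverage achieved
print(covered_links == links)   # True -> link coverage achieved
```

Any node or link left uncovered by the union points directly at a missing test case.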


II. Equivalence Partitioning: Here the input domain is divided into classes of data from which test cases can be derived. An ideal test case is one which single-handedly uncovers a class of errors. If an input condition defines a range, one valid and two invalid classes are defined. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined. If an input condition is Boolean, one valid and one invalid class are defined.
III. Boundary Value Analysis: Boundary value analysis leads to a selection of test cases that exercise bounding values. BVA leads to the selection of test cases at the edges of a class, and it also derives test cases from the output domain. If an input specifies a range of values bounded by a and b, then test cases should be designed with the values a and b, and with values just above and just below a and b. Test cases should also be designed to create an output report that produces the maximum (and minimum) allowable number of table entries.
IV. Comparison Testing (back-to-back testing): This is used when the reliability of the software is critical. In such situations, redundant software and hardware are used to minimize the possibility of error. When redundant software is developed, separate software engineering teams develop independent versions of an application using the same specification. Each version can then be tested with the same test data to ensure that all provide identical output. If the output from each version is the same, it is assumed that all implementations are correct; if the output differs, each of the applications is investigated to determine whether a defect in one or more versions is responsible for the difference. One limitation: if the specification from which all versions have been developed is itself in error, all versions will likely provide identical but incorrect results, and comparison testing will fail to detect the error.
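The range rules above can be made concrete with a small Python sketch; the "age 18 to 60" rule is a hypothetical example, not taken from any particular system:

```python
def bva_values(a, b):
    """Boundary-value test inputs for an input range [a, b]:
    just below, on, and just above each boundary, plus a nominal value."""
    return [a - 1, a, a + 1, (a + b) // 2, b - 1, b, b + 1]

# Equivalence classes for a hypothetical rule "age must be 18 to 60":
# one valid class (18..60) and two invalid classes (<18 and >60).
def is_valid_age(age):
    return 18 <= age <= 60

print(bva_values(18, 60))  # [17, 18, 19, 39, 59, 60, 61]
print([is_valid_age(v) for v in bva_values(18, 60)])
# 17 and 61 come from the two invalid classes; the rest from the valid class.
```

Seven values exercise one representative of each equivalence class plus every boundary, which is where range-handling defects cluster.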

Wednesday, January 02, 2008

Load Testing Interview Questions and Answers

  1. What is load testing? - Load testing is to test whether the application works fine with the loads that result from a large number of simultaneous users and transactions, and to determine whether it can handle peak usage periods.
  2. What is Performance testing? - Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi user environment to determine the effect of multiple transactions on the timing of a single transaction.
  3. Did you use LoadRunner? What version? - Yes. Version 7.2.
  4. Explain the Load testing process? - Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish load-testing objectives. Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions. Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us. Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers. Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors. Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner’s graphs and reports to analyze the application’s performance.
  5. When do you do load and performance testing? - We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.
  6. What are the components of LoadRunner? - The components of LoadRunner are The Virtual User Generator, Controller, and the Agent process, LoadRunner Analysis and Monitoring, LoadRunner Books Online.
  7. What Component of LoadRunner would you use to record a Script? - The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.
  8. What Component of LoadRunner would you use to play Back the script in multi user mode? - The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of vusers in a group.
  9. What is a rendezvous point? - You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
  10. What is a scenario? - A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.
  11. Explain the recording mode for web Vuser script? - We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required function calls; and Insert the generated function calls into a Vuser script.
  12. Why do you create parameters? - Parameters are like script variables. They are used to vary input to the server and to emulate real users: different sets of data are sent to the server each time the script is run. This better simulates the usage model, for more accurate testing from the Controller; one script can emulate many different users on the system.
  13. What is correlation? Explain the difference between automatic correlation and manual correlation? - Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values, and it also optimizes the code (avoiding nested queries). Automatic correlation is where we set some rules for correlation; it can be application-server specific. Here values are replaced by data created by these rules. In manual correlation, the value we want to correlate is scanned, and "create correlation" is used to correlate it.
  14. How do you find out where correlation is required? Give a few examples from your projects? - Two ways: first, we can scan for correlations and see the list of values which can be correlated; from this we can pick a value to be correlated. Secondly, we can record two scripts and compare them; we can look at the difference file to see the values which need to be correlated. In my project, there was a unique id developed for each customer (the Insurance Number); it was generated automatically, was sequential, and was unique. I had to correlate this value in order to avoid errors while running my script. I did this using scan for correlation.
  15. Where do you set automatic correlation options? - Automatic correlation from web point of view can be set in recording options and correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for database can be done using show output window and scan for correlation and picking the correlate query tab and choose which query value we want to correlate. If we know the specific value to be correlated, we just do create correlation for the value and specify how the value to be created.
  16. What is the function to capture dynamic values in the web Vuser script? - The web_reg_save_param function saves dynamic data information to a parameter.
  17. When do you disable logging in the Virtual User Generator? When do you choose standard and extended logs? - Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard log option: when you select Standard log, it creates a standard log of the functions and messages sent during script execution, to use for debugging; disable this option for large load-testing scenarios. Extended log option: select Extended log to create an extended log, including warnings and other messages; disable this option for large load-testing scenarios. We can specify which additional information should be added to the extended log using the extended log options.
  18. How do you debug a LoadRunner script? - VuGen contains two options to help debug Vuser scripts: the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within the script using the lr_set_debug_message function. This is useful if we want to receive debug information about only a small section of the script.
  19. How do you write user-defined functions in LR? Give a few functions you wrote in your previous project? - Before we create user-defined functions, we need to create an external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, we can call the user-defined function as a parameter. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). GetVersion, GetCurrentTime, and GetPltform are some of the user-defined functions used in my earlier project.
  20. What are the changes you can make in run-time settings? - The run-time settings we can change are: a) Pacing - includes the iteration count. b) Log - under this we have Disable logging, Standard log, and Extended log. c) Think Time - here we have options like Ignore think time and Replay think time. d) General - under the General tab we can set the Vusers to run as a process or as multithreading, and whether to define each step as a transaction.
  21. Where do you set Iteration for Vuser testing? - We set Iterations in the Run Time Settings of the VuGen. The navigation for this is Run time settings, Pacing tab, set number of iterations.
  22. How do you perform functional testing under load? - Functionality under load can be tested by running several Vusers concurrently. By increasing the number of Vusers, we can determine how much load the server can sustain.
  23. What is Ramp up? How do you set this? - This option is used to gradually increase the amount of Vusers/load on the server. An initial value is set and a value to wait between intervals can be
    specified. To set Ramp Up, go to ‘Scenario Scheduling Options’
  24. What is the advantage of running the Vuser as thread? - VuGen provides the facility to use multithreading. This enables more Vusers to be run per
    generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single
    generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of
    Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.
  25. If you want to stop the execution of your script on error, how do you do that? - The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the “Continue on error” option in Run-Time Settings.
  26. What is the relation between Response Time and Throughput? - The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.
  27. Explain the configuration of your systems? - The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This client configuration should match the overall system configuration, which includes the network infrastructure, the web server, the database server, and any other components of the larger system, so as to achieve the load testing objectives.
  28. How do you identify the performance bottlenecks? - Performance bottlenecks can be detected by using monitors: application server monitors, web server monitors, database server monitors, and network monitors. They help locate the problem areas in our scenario that cause increased response time. The measurements made are usually response time, throughput, hits/sec, network delay graphs, etc.
  29. If web server, database and Network are all fine where could be the problem? - The problem could be in the system itself or in the application server or in the code written for the application.
  30. How did you find web server related issues? - Using web resource monitors, we can measure the performance of web servers. With these monitors we can analyze the throughput on the web server, the number of hits per second during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.
  31. How did you find database related issues? - By running the Database monitor and with the help of the Data Resource Graph, we can find database related issues. For example, you can specify the resource you want to measure before running the Controller, and then examine the database-related metrics in the graph.
  32. Explain all the web recording options?
  33. What is the difference between Overlay graph and Correlate graph? - Overlay graph: overlays the content of two graphs that share a common x-axis. The left y-axis on the merged graph shows the current graph’s values, and the right y-axis shows the values of the graph that was merged. Correlate graph: plots the y-axis values of two graphs against each other. The active graph’s y-axis becomes the x-axis of the merged graph, and the y-axis of the graph that was merged becomes the merged graph’s y-axis.
  34. How did you plan the load? What are the criteria? - The load test is planned to decide the number of users, the kind of machines to use, and where they are run from. It is based on two important documents: the Task Distribution Diagram and the Transaction Profile. The Task Distribution Diagram gives the number of users for a particular transaction and the timing of the load; peak usage and off-peak usage are decided from this diagram. The Transaction Profile gives the transaction names and their priority levels with regard to the scenario being designed.
  35. What does vuser_init action contain? - The vuser_init action contains procedures to log in to a server.
  36. What does vuser_end action contain? - The vuser_end section contains log-off procedures.
  37. What is think time? How do you change the threshold? - Think time is the time that a real user waits between actions. For example, when a user receives data from a server, the user may wait several seconds to review the data before responding; this delay is known as the think time. Changing the threshold: the threshold level is the level below which recorded think time will be ignored. The default value is five (5) seconds. We can change the think-time threshold in the Recording Options of VuGen.
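A recorded pause is replayed with lr_think_time. A minimal sketch (the URLs and the 8-second pause are illustrative assumptions):

```c
Action()
{
    web_url("home", "URL=http://example.com/", LAST);

    // Replays the pause a real user took before the next step.
    // How this is replayed (ignored, as recorded, or multiplied by
    // a factor) is governed by the Think Time run-time settings.
    lr_think_time(8);

    web_url("search", "URL=http://example.com/search", LAST);
    return 0;
}
```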
  38. What is the difference between standard log and extended log? - The standard log sends a subset of the functions and messages generated during script execution to the log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. It is mainly used during debugging, when we want information about parameter substitution, data returned by the server, and advanced trace.
  39. Explain the following functions:
    • lr_debug_message - sends a debug message to the output log when the specified message class is set.
    • lr_output_message - sends notifications to the Controller Output window and the Vuser log file.
    • lr_error_message - sends an error message to the LoadRunner Output window.
    • lrd_stmt - associates a character string (usually a SQL statement) with a cursor; it sets a SQL statement to be processed.
    • lrd_fetch - fetches the next row from the result set.
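A minimal sketch of the three lr_* message functions together (the rc variable and message texts are illustrative; LR_MSG_CLASS_EXTENDED_LOG is one of the standard message classes):

```c
Action()
{
    int rc = 200;  // illustrative value standing in for a server response code

    // Reaches the output log only when the given message class is enabled.
    lr_debug_message(LR_MSG_CLASS_EXTENDED_LOG,
                     "About to check response, rc=%d", rc);

    if (rc == 200)
        lr_output_message("Request succeeded with code %d", rc);  // Controller Output + Vuser log
    else
        lr_error_message("Request failed with code %d", rc);      // LoadRunner Output window

    return 0;
}
```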
  40. Throughput - If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.
  41. Types of Goals in Goal-Oriented Scenario - LoadRunner provides you with five different types of goals in a goal-oriented scenario:
    • The number of concurrent Vusers
    • The number of hits per second
    • The number of transactions per second
    • The number of pages per minute
    • The transaction response time that you want your scenario to reach
  42. Analysis scenario (bottlenecks): When the Running Vusers graph is correlated with the response time graph, you can see that as the number of Vusers increases, the average response time of the check-itinerary transaction increases very gradually; in other words, the average response time steadily increases as the load increases. At 56 Vusers, there is a sudden, sharp increase in the average response time. We say that the test broke the server; that point is the mean time between failures (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.
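The per-transaction response time analyzed above is measured by wrapping the request in a transaction. A minimal sketch (the transaction name mirrors the check-itinerary example, and the URL is an assumption):

```c
Action()
{
    lr_start_transaction("check_itinerary");

    web_url("itinerary", "URL=http://example.com/itinerary", LAST);

    // LR_AUTO lets LoadRunner set pass/fail from the request's outcome;
    // the elapsed time feeds the Average Transaction Response Time graph.
    lr_end_transaction("check_itinerary", LR_AUTO);
    return 0;
}
```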
  43. What is correlation? Explain the difference between automatic correlation and manual correlation? - Correlation is used to obtain data that is unique for each run of the script, such as values generated by the server. Correlation provides the value at run time to avoid errors arising out of duplicate values and also to optimize the code (to avoid nested queries). Automatic correlation is where we set rules for correlation; it can be application-server specific, and values are replaced according to these rules. In manual correlation, the script is scanned for the value we want to correlate, and the Create Correlation option is used to correlate it.
  44. Where do you set automatic correlation options? - Automatic correlation for web can be set in the Recording Options, on the Correlation tab. Here we can enable correlation for the entire script and choose either to issue online messages or perform offline actions, where we can define the rules for correlation. Automatic correlation for a database can be done by using the Show Output window, scanning for correlations, picking a value from the Correlated Query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we simply create a correlation for that value and specify how the value is to be created.
  45. What is the function to capture dynamic values in the web Vuser script? - The web_reg_save_param function saves dynamic data to a parameter.
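A minimal sketch of capturing a dynamic value with web_reg_save_param (the boundaries, parameter name, and URLs are illustrative assumptions; the function must be registered before the request whose response contains the value):

```c
Action()
{
    // Register the capture before the request: everything between the
    // left boundary "sessionID=" and the right boundary "&" in the
    // server response is saved into the parameter {sid}.
    web_reg_save_param("sid",
                       "LB=sessionID=",
                       "RB=&",
                       LAST);

    web_url("login", "URL=http://example.com/login", LAST);

    // Reuse the captured value in a later request.
    web_url("account", "URL=http://example.com/account?sessionID={sid}", LAST);
    return 0;
}
```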