Sunday 29 April 2012

Functionality Testing


    This section focuses on the functionality aspect of the application. The first step in testing functionality is to check whether all the requirements are covered in the software. The actual functionality tests vary from application to application, but some general guidelines can still be framed:
    1. Check that the functionality is covered as per the Requirement Specifications or Functional Specifications developed for the software.
    2. Within a dialog box, identify the dependent fields and, depending on the dependencies, check the enabling and disabling of the fields.
  • For example: creating contact addresses in an application.
  • To create contact addresses, the user should be able to add, delete and modify the information. A contact address will contain information like First Name, Last Name, Address1, Address2, City, State, Country, Zip, Phone, etc.; any other information may also be added.
    This form will have the required fields and, in addition, Add, Delete and Update buttons. The functionality of the buttons is as follows:
  • Initially only the Add button will be enabled; the Delete and Update buttons will be disabled. This is because initially there is no data available, and unless a record is added nothing can be deleted or updated. In short, unless at least one valid record is available, it is not possible to update or delete.
  • Only on selecting a record from the list are the Delete and Update buttons enabled and the Add button disabled. By default, no record is selected.
  • Delete and Update should always show a confirmation message before actually performing the operation.
  • A Delete operation should not leave the deleted item visible in the list (see the sketch after this list).
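
    The button-state rules above can be captured in an automated check. Below is a minimal sketch in Python (runnable with pytest or by calling the tests directly), assuming a hypothetical ContactForm model; a real UI test would drive the actual widgets instead.

class ContactForm:
    """Stand-in for the dialog box: tracks records and the selection."""
    def __init__(self):
        self.records = []
        self.selected = None  # index of the selected record, if any

    def button_states(self):
        # Add is enabled only when nothing is selected;
        # Delete and Update require a selected record.
        has_selection = self.selected is not None
        return {"add": not has_selection,
                "delete": has_selection,
                "update": has_selection}

def test_initially_only_add_is_enabled():
    form = ContactForm()
    assert form.button_states() == {"add": True, "delete": False, "update": False}

def test_selection_enables_delete_and_update():
    form = ContactForm()
    form.records.append({"first_name": "Jane", "last_name": "Doe"})
    form.selected = 0
    assert form.button_states() == {"add": False, "delete": True, "update": True}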

Friday 27 April 2012

Validation Testing


    This section focuses on the validation aspect of the application. Validation testing mainly depends on the fields in the dialog box and the functions it has to perform, but certain common rules can still be applied:
    1. For text box fields where the value entered has to be numeric, check the following:
  • It should accept numbers only, not alphabets.
  • If the field is used to accept values such as a total number of days, a telephone number, or a zip code, then it should not accept 0 or negative values (a sketch follows this list).
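
    A minimal sketch of such a check in Python; the function name and rules follow the guideline above, not any specific application.

import re

def is_valid_numeric_field(value: str) -> bool:
    # Digits only (no alphabets, no minus sign), and not zero.
    return bool(re.fullmatch(r"[0-9]+", value)) and int(value) > 0

assert is_valid_numeric_field("30")        # e.g. total number of days
assert not is_valid_numeric_field("0")     # zero is rejected
assert not is_valid_numeric_field("-5")    # negative values are rejected
assert not is_valid_numeric_field("12a")   # alphabets are rejected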

    2. For text box fields where the value entered has to be alpha-numeric, check the following:
  • It should accept alphabets and numbers only.
  • For fields such as First Name, Middle Name, Last Name, City, or Country, the field value should start with an alphabet.
  • Depending on the requirement, these fields may also accept special characters like -, _, . etc. (a sketch follows this list).
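
    A sketch of the alphanumeric rule in Python; exactly which special characters to allow is an assumption that depends on the requirement.

import re

# Must start with an alphabet; then alphabets, digits and, depending
# on the requirement, special characters such as space, '.', '_', '-'.
NAME_RE = re.compile(r"[A-Za-z][A-Za-z0-9 ._-]*")

def is_valid_name_field(value: str) -> bool:
    return bool(NAME_RE.fullmatch(value))

assert is_valid_name_field("New Delhi")
assert not is_valid_name_field("1stName")   # must start with an alphabet
assert not is_valid_name_field("City@")     # disallowed special character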

    3. If the field is a combo box, then it has to be checked for the following points:
  • Check that the combo box has drop-down values in it and is not empty.
  • Drop-down values should be alphabetically sorted. This might change as per the requirement, but as standard practice they should be sorted. For example, selecting a data type from a list such as:
  • Date
  • Integer
  • String
  • Text, etc.
  • The selected drop-down value should still be displayed on closing and reopening the same dialog box.
  • By default, a value like "Select Value" or "_______" should be displayed, so the user knows that a value is to be selected for this field. Avoid displaying the first real value in the list as the default (a sketch follows this list).
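
    These combo box checks can be automated once the drop-down values can be read from the UI. A minimal sketch, assuming the values arrive as a plain Python list:

def check_combo_box(values, displayed_default="Select Value"):
    assert values, "combo box must not be empty"
    assert values == sorted(values), "values should be alphabetically sorted"
    # The default shown should be a neutral prompt,
    # not the first real value from the list.
    assert displayed_default not in values

check_combo_box(["Date", "Integer", "String", "Text"])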

    4. If the field is a list box, then it has to be checked for the following points:
  • Check that the list box has values in it and is not empty.
  • List box values should be alphabetically sorted and displayed. This might change as per the requirement, but as standard practice they should be sorted.
  • Selecting a list box value should put a check mark before the value, and the correct value(s) should still show as selected on closing and reopening the same dialog box.
  • If the list box supports multiple selection, check whether multiple values can be selected.

    5. If the field is a list of radio buttons, then it has to be checked for the following points:
  • Check whether all the values required are listed. For example, to select a date format, the possible values displayed would be:
    mm/dd/yyyy
    dd/mm/yyyy
    mm/dd/yy
    dd/mm/yy
    yyyy/mm/dd etc.
    • The same selected value should be displayed on closing and reopening the same dialog box.

    6. Data Controls are to be tested as part of functionality testing.


GUI Testing


    This section focuses on the GUI (Graphical User Interface) aspect of the application. GUI guidelines, once set, cannot be followed blindly: GUI standards may vary from company to company and from application to application. Still, one can set general guidelines to get an overall idea of how to start GUI testing. These guidelines apply to every screen/dialog box of the application:
    1. All dialog boxes should have a consistent look throughout the application. For example, if the heading within one dialog box is blue, then the heading of every dialog box should be that color.
    2. Every field on the screen should have an associated label.
    3. Every screen should have equivalent OK and Cancel buttons.
    4. The color combination used should be appealing.
    5. Every field in the dialog box should have shortcut-key support. For example: User Name.
    6. Tab order should normally be set horizontally across the fields; in some cases it can be set vertically, as the design requires.
    7. Mandatory fields should be marked with a red asterisk (*) to indicate that they are mandatory.
    8. The default key <Enter> should be set as OK for the dialog box.
    9. The default key <Esc> should be set as Cancel for the dialog box.


Wednesday 25 April 2012

What Is a Test Case?



  • A test case is a case that tests the functionality of a specific object.
  • It is a description of what is to be tested, what data is to be given, and what actions are to be done to check the actual result against the expected result.
  • It is a document which includes a step-by-step description of input and output conditions, with test data, along with expected results.
  • It is a specific input that you’ll try and a procedure that you’ll follow when you test the software.
A good test case is:
  • effective
  • exemplary
  • evolvable
  • economic
  • self-standing
  • self-cleaning
  • traceable (requirement coverage)
  • accurate
  • repeatable/reusable
Review Format (sketched in code below):
    1. TC_ID
    2. Objective
    3. Test Data
    4. Expected Result
    5. Actual Result
    6. Status
    7. Remark
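
A minimal sketch of this review format as a Python record; the field names simply mirror the seven columns above.

from dataclasses import dataclass

@dataclass
class TestCaseRecord:
    tc_id: str
    objective: str
    test_data: str
    expected_result: str
    actual_result: str = ""
    status: str = "Not Run"   # e.g. Pass / Fail / Not Run
    remark: str = ""

tc = TestCaseRecord(
    tc_id="TC_001",
    objective="Verify login rejects an invalid password",
    test_data="user=jane, password=wrong",
    expected_result="Error message shown; user is not logged in",
)
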
Test Case Execution
  • Execution and execution results play a vital role in testing. Each and every activity should have proof.
  • The following should be recorded:
  • Number of test cases executed.
  • Number of defects found.
  • Screenshots of successful and failed executions, captured in a Word document.
  • Time taken to execute.
  • Time lost due to unavailability of the system.

Tuesday 24 April 2012

Test Strategy



A Test Strategy document is a high-level document, normally developed by the project manager. It defines the “Testing Approach” used to achieve the testing objectives, and is normally derived from the Business Requirement Specification document.
The Test Strategy document is a static document, meaning that it is not updated often. It sets the standards for testing processes and activities, and other documents such as the Test Plan draw their contents from those standards.
Some companies include the “Test Approach” or “Strategy” inside the Test Plan, which is fine and is usually the case for small projects. For larger projects, however, there is one Test Strategy document and a number of Test Plans, one for each phase or level of testing. Components of the Test Strategy document:

  • Scope and Objectives
  • Business issues
  • Roles and responsibilities
  • Communication and status reporting
  • Test deliverables
  • Industry standards to follow
  • Test automation and tools
  • Testing measurements and metrics
  • Risks and mitigation
  • Defect reporting and tracking
  • Change and configuration management
  • Training plan



Test Plan


The Test Plan document, on the other hand, is derived from the Product Description, the Software Requirement Specification (SRS), or Use Case documents.
The Test Plan document is usually prepared by the Test Lead or Test Manager, and its focus is to describe what to test, how to test, when to test, and who will do which test.
It is not uncommon to have one Master Test Plan as a common document for all the test phases, with each test phase having its own Test Plan document.
There is much debate as to whether the Test Plan document should also be a static document, like the Test Strategy document mentioned above, or should be updated often to reflect changes in the direction of the project and its activities.
My own personal view is that when a testing phase starts and the Test Manager is “controlling” the activities, the test plan should be updated to reflect any deviation from the original plan. After all, planning and control are continuous activities in the formal test process. Components of the Test Plan document:

  • Test Plan id
  • Introduction
  • Test items
  • Features to be tested
  • Features not to be tested
  • Test techniques
  • Testing tasks
  • Suspension criteria
  • Features pass or fail criteria
  • Test environment (Entry criteria, Exit criteria)
  • Test deliverables
  • Staff and training needs
  • Responsibilities
  • Schedule




Thursday 19 April 2012

Software Testing Types




  • Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

  • White box testing – This testing is based on knowledge of the internal logic of an application’s code. Also known as Glass Box Testing. The internal workings of the software and code must be known for this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.

  • Unit testing – Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses.
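
  For illustration, a minimal unit test using Python's built-in unittest framework; the function under test is hypothetical.

import unittest

def days_in_month(month: int, leap: bool = False) -> int:
    # The unit under test: small and self-contained.
    if month == 2:
        return 29 if leap else 28
    return 30 if month in (4, 6, 9, 11) else 31

class DaysInMonthTest(unittest.TestCase):
    def test_february_non_leap(self):
        self.assertEqual(days_in_month(2), 28)

    def test_february_leap(self):
        self.assertEqual(days_in_month(2, leap=True), 29)

if __name__ == "__main__":
    unittest.main()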

  • Incremental integration testing – A bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.

  • Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

  • Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per the requirements. Black-box-type testing geared to the functional requirements of an application.

  • System testing – The entire system is tested as per the requirements. Black-box-type testing that is based on the overall requirements specification and covers all combined parts of the system.

  • End-to-end testing – Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

  • Sanity testing – Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes in initial use, the system is not stable enough for further testing, and the build is sent back to be fixed.

  • Regression testing – Testing the application as a whole after a modification to any module or functionality. Since it is difficult to cover the whole system in regression testing, automation tools are typically used.

  • Acceptance testing – Normally this type of testing is done to verify that the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.

  • Load testing – Performance testing to check system behavior under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.
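
  A minimal sketch of the idea in Python: fire an increasing number of concurrent calls and watch the worst response time. Here fetch_home_page is a stand-in for the real request under test.

import time
from concurrent.futures import ThreadPoolExecutor

def fetch_home_page():
    time.sleep(0.01)  # simulates the server doing some work

def timed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def load_test(fn, users):
    # One thread per simulated user, all firing at once.
    with ThreadPoolExecutor(max_workers=users) as pool:
        return max(pool.map(lambda _: timed(fn), range(users)))

for users in (1, 10, 100):
    worst = load_test(fetch_home_page, users)
    print(users, "users -> worst response:", round(worst, 3), "s")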

  • Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, for example: input far beyond storage capacity, complex database queries, or continuous input to the system or database.

  • Performance testing – A term often used interchangeably with ‘stress’ and ‘load’ testing; checks whether the system meets its performance requirements. Various performance and load tools are used for this.

  • Usability testing – A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user may get stuck? Basically, system navigation is checked in this testing.

  • Install/uninstall testing – Tests full, partial, or upgrade install/uninstall processes on different operating systems and under different hardware and software environments.

  • Recovery testing – Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

  • Security testing – Can the system be penetrated by any hacking approach? Tests how well the system protects against unauthorized internal or external access, and checks whether the system and database are safe from external attacks.

  • Compatibility testing – Testing how well the software performs in a particular hardware/software/operating-system/network environment, and in different combinations of the above.

  • Comparison testing – Comparison of product strengths and weaknesses with previous versions or other similar products.

  • Alpha testing – An in-house virtual user environment can be created for this type of testing. It is done toward the end of development; minor design changes may still be made as a result of such testing.

  • Beta testing – Testing typically done by end-users or others; the final testing before releasing the application for commercial purposes.

Agile Development



  • Agile methods break tasks into small increments with minimal planning and do not directly involve long-term planning.
  • Iterations are short time frames (timeboxes) that typically last from one to four weeks.
  • Each iteration involves a team working through a full software development cycle, including planning, requirements, analysis, design, coding, unit testing, and acceptance testing, after which a working product is demonstrated to stakeholders.
  • This minimizes overall risk and allows the project to adapt to changes quickly.
  • Used for time-critical applications.

Wednesday 18 April 2012

Rapid Application Development Model (RAD Model)



  • The main objective of Rapid Application Development is to avoid extensive pre-planning, generally allowing software to be written much faster and making it easier to change requirements.
  • The Rapid Application Development Model (RAD Model) is a linear-sequential software development process model that focuses on a very short development cycle using a component-based construction approach.
  • When organizations adopt rapid development methodologies, care must be taken to avoid role and responsibility confusion and communication breakdown within the development team, and between the team and the client.
  • To facilitate rapid development, strong emphasis was placed on the idea of software re-use. The notion of software components began to be nurtured.
ADVANTAGES
  • It increases the speed of developing software. This can be achieved using methods like rapid prototyping, virtualization of system-related routines, the use of CASE tools, and other techniques.
  • Re-usability of components helps to speed up development.
  • It increases quality.
  • Some systems also deliver advantages of interoperability, extensibility, and portability.
  • It incorporates short development cycles.
  • It promotes a strong collaborative atmosphere and dynamic gathering of requirements.
DISADVANTAGES
  • Unknown cost of product.
  • It is difficult to commit the time required for success of the RAD process.
  • Short iterations may not add enough functionality, leading to significant delays in the final iterations.
  • Early RAD systems faced reduced scalability, because a RAD-developed application starts as a prototype and evolves into a finished application.
  • Early RAD systems had reduced features due to time boxing, where features are pushed to later versions in order to finish a release in a short amount of time.
  • Dependency on strong cohesive teams and individual commitment to the project.

Spiral Model



  • The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting with the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
  • Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risks and alternative solutions. A prototype is produced at the end of the risk analysis phase.
  • Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
  • In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.
Advantages
  • High amount of risk analysis
  • Good for large and mission-critical projects.
  • Software is produced early in the software life cycle.
Disadvantages
  • Can be a costly model to use.
  • Risk analysis requires highly specific expertise.
  • Project’s success is highly dependent on the risk analysis phase.
  • Doesn’t work well for smaller projects.


Sunday 15 April 2012

What Is the Prototype Model



    The Prototyping model starts with requirements gathering. The basic idea is that instead of freezing the requirements before design or coding can proceed, a throwaway prototype is built to understand the requirements. Prototyping is an attractive idea for complicated and large systems for which there is no manual process or existing system to help determine the requirements.
    • A prototype is generally developed at the cost of the developer and not at the cost of the client.
    • A prototype is developed using prototyping tools such as scripting languages or Rapid Application Development tools.
    • Prototyping is generally required to obtain user interface requirements.

          Advantages of Prototyping Model
    • It is good for developing software for users who are not IT-literate.
    • Errors can be detected much earlier, as the system is made side by side.
    • Quicker user feedback is available, leading to better solutions.
    • Reduced time and costs.

          Disadvantages of Prototyping Model
    • The development cost is borne by the developer.
    • The customer could mistake the prototype for the working version.
    • Users can begin to think that a prototype, intended to be thrown away, is actually a final system that merely needs to be finished or polished.
    • The expense of implementing prototyping.

        Friday 13 April 2012

        What Is the Incremental Model




        • Construct a partial implementation of a total system
        • Then slowly add increased functionality
        • The incremental model prioritizes requirements of the system and then implements them in groups.
        • Each subsequent release of the system adds function to the previous release, until all designed functionality has been implemented.
        Incremental Model Strengths
        • Develop high-risk or major functions first
        • Each release delivers an operational product
        • Lowers initial delivery cost
        • Risk of changing requirements is reduced
        Incremental Model Deficiencies
        • Requires good planning and design
        • Well-defined module interfaces are required
        • The total cost of the complete system is higher.
        When to use the Incremental Model
        • Reduce risk by developing in increments and getting client approvals
        • Most of the requirements are known beforehand but are expected to evolve over several iterations
        • A need to get basic functionality to the market early
        • On projects which have lengthy development schedules
        • On a project with new technology where end results are not completely understood

        Wednesday 11 April 2012

        When to Start or Stop Testing?


          Start Testing-When?
        • Testing starts right from the requirements phase and continues till the end of the SDLC.
        • The objective of starting early: requirements-related defects caught later in the SDLC cost more to fix.

          Stop Testing – When?
        • Test cases executed with acceptable percentage of defects
        • Project deadlines, e.g. release deadlines, testing deadlines
        • Test budget has run out
        • Coverage of code, functionality or requirements reaches a specified point.
        • Bug rate falls below acceptable level
        • Beta and alpha testing have been completed


        Saturday 7 April 2012

        Goals of the Testing Process



        Validation testing

        • To demonstrate to the developer and the system customer that the software meets its requirements;
        • A successful test shows that the system operates as intended.

        Defect testing

        • To discover faults or defects in the software where its behaviour is incorrect or not in conformance with its specification;
        • A successful test is a test that makes the system perform incorrectly and so exposes a defect in the system. 

        Friday 6 April 2012

        What Is System Testing




        For example, system tests for a document-download application might include:
        1. Test the login mechanism using correct and incorrect logins, to check that valid users are accepted and invalid users are rejected (a sketch of this check follows the list).
        2. Test the search facility using different queries against known sources, to check that the search mechanism is actually finding documents.
        3. Test the system presentation facility, to check that information about documents is displayed properly.
        4. Test the mechanism to request permission for downloading.
        5. Test the e-mail response indicating that the downloaded document is available.
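
        For check 1, a minimal automated sketch in Python; login here is a stand-in for the system's real authentication call.

def login(username: str, password: str) -> bool:
    # Stand-in: a real test would call the application itself.
    return (username, password) == ("valid_user", "right_password")

def test_valid_user_is_accepted():
    assert login("valid_user", "right_password")

def test_invalid_user_is_rejected():
    assert not login("valid_user", "wrong_password")
    assert not login("unknown_user", "right_password")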



        Testing Techniques




        • White Box Testing
        • Black Box Testing
        • Gray Box Testing

        White Box Testing:
        • Also known as glass box, structural, clear box and open box testing.
        • A software testing technique whereby explicit knowledge of the internal workings of the item being tested is used to select the test data. Unlike black box testing, white box testing uses specific knowledge of the programming code to examine outputs.
        • The test is accurate only if the tester knows what the program is supposed to do. He or she can then see if the program diverges from its intended goal.
        Black Box Testing:
        Also known as functional testing. A software testing technique whereby the internal workings of the item being tested are not known by the tester. For example, in a black box test of a software design, the tester only knows the inputs and what the expected outcomes should be, not how the program arrives at those outputs. The tester never examines the programming code and does not need any further knowledge of the program other than its specifications.

        Black Box Testing Techniques
            • Boundary Value Analysis: A technique used to minimize the number of test cases. It checks the corner cases: one value greater than the maximum and one value less than the minimum (see the sketch after this list).
            • Equivalence Partitioning: Another technique for restricting the number of test cases. Test data is divided into valid and invalid classes, and both classes are tested.
            • Error Guessing: The use of past experience and an understanding of the typical weaknesses of human developers.
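
        A minimal sketch combining both reduction techniques in Python, for a hypothetical field that accepts values from 1 to 100:

def boundary_values(minimum, maximum):
    # Corner cases, plus one value just outside each end of the range.
    return [minimum - 1, minimum, maximum, maximum + 1]

# Equivalence partitioning adds one representative from the valid class
# and one from the invalid class: six tests instead of a hundred-plus.
tests = boundary_values(1, 100) + [50, -40]
print(tests)  # [0, 1, 100, 101, 50, -40]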

        Gray Box Testing :
        • Gray box testing is the combination of black box and white box testing. The intention of this testing is to find defects related to bad design or bad implementation of the system.
        • In gray box testing, the test engineer is equipped with knowledge of the system and designs test cases or test data based on that system knowledge.

        Thursday 5 April 2012

        Software quality



        • SQA: Software Quality Assurance
        • SQC: Software Quality control

        • SQA: “Monitoring and measuring the strength of the development process is called SQA.”
        • SQC: “The validation of the final product before release to the customer is called SQC.”

          SQA (Software Quality Assurance)
        • Quality assurance activities are work process oriented.
        • Attempts defect prevention by concentrating on the process
        • The direct results of these activities are changes to the process.
        • Defect Prevention activity.

          SQC (Software Quality Control)
        • Quality control activities are work product oriented.
        • Attempts to test a product after it is built.
        • The direct results of these activities are changes to the product.
        • Defect-detection and defect-correction oriented.

        Wednesday 4 April 2012

        PDCA LIFE CYCLE


        PLAN
        Establish the objectives and processes necessary to deliver results in accordance with the expected output.

        DO
        Implement the new processes, often on a small scale if possible, to test their effects.

        CHECK
        Measure the new processes and compare the results against the expected results to ascertain any differences.

        ACT
        Analyze the differences to determine their causes; each difference will feed into one or more of the P-D-C-A steps.