Thursday, 29 November 2012

What is SVN and How to Use It

SVN stands for Subversion. It allows developers to work simultaneously on a centralised project.

svn checkout

To check out means that you copy all the files in the project repository to your computer.

The first commands
Go to your home directory with cd, then give the following command:

svn co <repository_url> --username <your_username>

where you replace <repository_url> with the URL of the repository and <your_username> with the username you have acquired from the admin. This will enable you to check in your work. If you don't have a username or just want to browse the code, skip the --username option.
Update your working copy
svn up
Schedule a file for addition
svn add filename (e.g. test.pl)

Schedule a file for deletion
svn delete filename
You may also copy and move files and directories with these two commands, but read about them in the svn documentation:
svn copy source destination (e.g. svn copy test.pl last.pl)
svn move source destination (e.g. svn move test.pl last1.pl)

Examine your changes
svn status
Examine the file history
svn log FILE
 
Change the commit message for a specific revision
svn propedit svn:log --revprop -r REV FILE
This will bring up the existing log text for the specified revision in your default editor. This is useful if you accidentally committed some changes with an empty or uninformative log message.
Compare your modified file to the version in the repository
svn diff FILE
 
Compare two earlier revisions, say revisions 100 and 95
svn diff -r 100:95 FILE
 
Undo your local changes (i.e. revert to the version in the repository)
svn revert FILE
 
Resolve conflicts (merging others' changes)
svn update
svn resolved FILE

Commit your changes
svn ci -m "Your description of the changes here." FILE
 
  The changes above (add, delete, copy, move) must all be committed with svn ci in order to take effect in the repository.

Saturday, 24 November 2012

How to Set Up a Password-less SSH Login

First, on the local machine you will want to generate a secure SSH key:
ssh-keygen

Walk through the key generator and set a passphrase (leave it empty if you want a fully password-less login); by default the key pair goes into ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub.

Next, you need to copy the generated public key to the remote server you want to set up password-less logins with. This is easily done with the following command, but you can use scp if you'd prefer:
cat ~/.ssh/id_rsa.pub | ssh user@remotehost 'cat >> ~/.ssh/authorized_keys'

This command takes the generated public SSH key from the local machine, connects to the remote host via SSH, and then uses cat to append the key to the remote user's authorized key list. Because this connects to the remote machine over SSH, you will need to enter the password one last time.
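The `cat >> authorized_keys` append can be simulated entirely locally — a minimal sketch (the key text and paths below are placeholders, not a real key):

```shell
# Simulate appending a public key line to an authorized_keys file.
mkdir -p /tmp/ssh-demo
printf 'ssh-rsa AAAA...placeholder user@localhost\n' > /tmp/ssh-demo/key.pub

# This mirrors: cat ~/.ssh/id_rsa.pub | ssh user@host 'cat >> ~/.ssh/authorized_keys'
cat /tmp/ssh-demo/key.pub >> /tmp/ssh-demo/authorized_keys

grep -c 'ssh-rsa' /tmp/ssh-demo/authorized_keys   # each run appends one more copy
```

Note that a plain `>>` append never checks for duplicates; in practice `ssh-copy-id user@remotehost` does the same job while avoiding duplicate keys and fixing file permissions.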
Finally, confirm that you can now login to the remote SSH server without a password:
ssh user@remotehost.com

Assuming the initial setup went as intended, you will connect to the remote machine without being asked for a password.

Thursday, 22 November 2012

Basic UNIX Commands

  • cat --- for creating and displaying short files
  • chmod --- change permissions
  • cd --- change directory
  • cp --- for copying files
  • date --- display date
  • echo --- echo argument
  • ftp --- connect to a remote machine to download or upload files
  • grep --- search file
  • head --- display first part of file
  • ls --- see what files you have
  • lpr --- standard print command (see also print )
  • more --- use to read files
  • mkdir --- create directory
  • mv --- for moving and renaming files
  • print --- custom print command (see also lpr )
  • pwd --- find out what directory you are in
  • rm --- remove a file
  • rmdir --- remove directory
  • rsh --- remote shell
  • setenv --- set an environment variable
  • sort --- sort file
  • tail --- display last part of file
  • tar --- create an archive, add or extract files 
  • telnet --- log in to another machine


cat
This is one of the most flexible Unix commands. We can use it to create, view and concatenate files. For our first example we create a three-line file called "dict."
   % cat > dict
   test1
   test2
   test3
   <control-D>
   %
chmod

This command is used to change the permissions of a file or directory. For example to make a file new_file.pl readable by everyone, we do this:
   % chmod a+r new_file.pl
To make a file executable, e.g. a shell script mycommand, we do this:
   % chmod +x mycommand
Now we can run mycommand as a command.
To check the permissions of a file, use ls -l . For more information on chmod, use man chmod.
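Putting the two chmod examples together, a runnable sketch (the file name and path are invented for the demo):

```shell
# Create a tiny shell script, make it executable, and run it.
printf '#!/bin/sh\necho hello\n' > /tmp/mycommand
chmod +x /tmp/mycommand     # add execute permission
chmod a+r /tmp/mycommand    # make it readable by everyone
/tmp/mycommand              # prints: hello
ls -l /tmp/mycommand        # the x bits now appear in the permission string
```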

cd
Use cd to change directory. Use pwd to see what directory you are in.
   % cd english
   % pwd
   /test_file/test/english
   % ls
scrapbook
   % cd

cp
Use cp to copy files or directories.
   % cp test_file test_file.bk
This makes a copy of the file test_file.
   % cp ~/new_file/new_file.bk .
This copies the file new_file.bk from the directory new_file to the current directory. The symbol "." stands for the current directory. The symbol "~" stands for the home directory.

date
Use this command to check the date and time.
   % date
Fri Nov  6 08:52:42 MST 2012

echo
The echo command echoes its arguments. For example:
   % echo hello world
hello world

grep

Use this command to search for information in a file or files. For example, suppose that we have a file dict whose contents are
red test1
white test1
Then the command
   % grep red dict
prints the lines that contain "red":
red test1

head

Use this command to look at the head of a file. For example,
   % head test.pl
displays the first 10 lines of the file test.pl. To see a specific number of lines, do this:
   % head -n 20 test.pl
This displays the first 20 lines of the file.
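A self-contained head example using a generated file (the path is throwaway):

```shell
# Generate 100 numbered lines, then take just the first three.
seq 100 > /tmp/lines.txt
head -n 3 /tmp/lines.txt   # prints 1, 2 and 3, one per line
```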

ls
Use ls to see what files you have. Your files are kept in something called a directory.
   % ls
     test      letter2
     test1    letter3
     test3   maple-assignment1
  

lpr
This is the standard Unix command for printing a file. It stands for the ancient "line printer." See
   % man lpr
for information on how it works. See print for information on our local intelligent print command.

mkdir
Use this command to create a directory.

   % mkdir test

mv
Use this command to change the name of files and directories, or to move them.
   % mv foo foobar
The file that was named foo is now named foobar.

print
This is a moderately intelligent print command.
   % print foo
   % print notes.ps
   % print manuscript.dvi

pwd
Use this command to find out what directory you are working in.
   % pwd

rm
Use rm to remove files from your directory. (If rm asks for confirmation, as in the examples below, your account has it aliased to rm -i.)
   % rm test.pl
     remove test.pl? y
   % rm letter*
     remove letter1? y
     remove letter2? y

rmdir
Use this command to remove a directory. For example, to remove a directory called "essays", do this:
   % rmdir essays
A directory must be empty before it can be removed. To empty a directory, use rm.
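The empty-directory rule can be seen in a quick session (the paths are throwaway):

```shell
mkdir -p /tmp/essays
touch /tmp/essays/draft.txt
rmdir /tmp/essays 2>/dev/null || echo "not empty"   # rmdir refuses a non-empty directory
rm /tmp/essays/draft.txt                            # empty it first
rmdir /tmp/essays                                   # now it succeeds
```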

rsh
Use this command if you want to work on a computer different from the one you are currently working on; one reason to do this is that the remote machine might be faster. For example, the command
% rsh solitude
connects you to the machine solitude. Note that rsh is unencrypted and has largely been superseded by ssh on modern systems.

sort
Use this command to sort a file. For example, suppose we have a file dict with contents
white test1
red test
Then
   % sort dict
displays the lines in alphabetical order:
red test
white test1

tail

Use this command to look at the tail of a file. For example,
   % tail new_test.pl
displays the last 10 lines of the file new_test.pl. To see a specific number of lines, do this:

   % tail -n 20 new_test.pl
This displays the last 20 lines of the file.

tar
Use tar to create compressed archives of directories and files, and also to extract directories and files from an archive. For example,
   % tar -tvzf test.tar.gz
displays the file names in the compressed archive test.tar.gz, while
   % tar -xvzf test.tar.gz
extracts the files.
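A full round trip — create an archive, list it, then extract it (file names invented for the demo):

```shell
mkdir -p /tmp/tardemo/src
cd /tmp/tardemo
echo data > src/a.txt
tar -czf test.tar.gz src      # create a compressed archive of the src directory
tar -tzf test.tar.gz          # list the contents without extracting
rm -rf src                    # throw away the original...
tar -xzf test.tar.gz          # ...and restore it from the archive
cat src/a.txt                 # prints: data
```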

telnet
Use this command to log in to another machine from the machine you are currently working on. For example, to log in to the machine "solitude", do this:
   % telnet solitude
Like rsh, telnet is unencrypted; prefer ssh on modern systems. See also: rsh.

wc
Use this command to count the number of lines, words, and characters in a file. Suppose, for example, that we have a file dict with contents
test file
test1 file1

Then we can do this
   % wc dict
       2       4      22 dict
This shows that dict has 2 lines, 4 words, and 22 characters.
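A quick way to check such counts yourself (the temp path is invented for the demo):

```shell
# Recreate the two-line sample file, then count lines, words and characters.
printf 'test file\ntest1 file1\n' > /tmp/dict
wc -l /tmp/dict   # 2 lines
wc -w /tmp/dict   # 4 words
wc -c /tmp/dict   # 22 characters (bytes), newlines included
```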


 

Sunday, 24 June 2012

What is Alpha and Beta Testing


User Acceptance Testing

Both testers and developers are involved.
After completion of system testing, project management concentrates on UAT to collect feedback from real or model customers.
There are two ways to conduct UAT:
Alpha Testing
Beta Testing


Alpha Testing

1. Performed by end users inside the development organization.
2. Done in a controlled environment.
3. Defects found by end users are noted down by the development team and fixed before release.
4. Developers are present.

Beta Testing

1. Performed by end users outside the development organization, at the end-user site.
2. The environment is not under control.
3. Defects found by end users are reported back to the development organization.
4. Developers are not present.


Monday, 18 June 2012

How to test a sample registration form with validation

Validation using a Sample Registration Form.




Thursday, 10 May 2012

Difference between QA and QC and Software Testing




    Quality Assurance
    1. QA is process related.
    2. QA focuses on building in quality and hence preventing defects.
    3. QA deals with process.
    4. QA spans the entire life cycle.
    5. Quality Assurance makes sure you are doing the right things, the right way.
    6. QA is a preventive process.
    Quality Control
    1. QC is the actual testing of the software.
    2. QC focuses on testing for quality and hence detecting defects.
    3. QC deals with product.
    4. QC covers the testing part of the SDLC.
    5. Quality Control makes sure the results of what you've done are what you expected.
    6. QC is a corrective process.
    Testing
    1. Testing is the process of executing a system with the intent of finding defects.
    2. Software testing is a planned process used to identify the correctness, completeness, security and quality of software.
    3. Testing is generally done to demonstrate that the software does what it is supposed to do, and does not do what it is not supposed to do.
    4. The goal of testing is to locate defects and make sure they get fixed.


Monday, 7 May 2012

Error Guessing



  • Why can one Tester find more errors than another Tester in the same piece of software?
  • More often than not this comes down to a technique called ‘Error Guessing’. To be successful at Error Guessing, a certain level of knowledge and experience is required: a Tester can then make an educated guess at where potential problems may arise. This could be based on the Tester's experience with a previous iteration of the software, or on knowledge of that area of technology. This test case design technique can be very effective at pinpointing potential problem areas in software. It is often used by creating a list of potential problem areas/scenarios and then producing a set of test cases from it. This approach can often find errors that would otherwise be missed by a more structured testing approach.
  • An example of how to use the ‘Error Guessing’ method would be to imagine you had a software program that accepted a ten digit customer code. The software was
    designed to only accept numerical data.

    Here are some example test case ideas that could be considered as Error Guessing:
    1. Input of a blank entry
    2. Input of greater than ten digits
    3. Input of mixture of numbers and letters
    4. Input of identical customer codes
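A sketch of how these guesses could be automated, assuming a hypothetical validate_code helper that accepts exactly ten digits (idea 4, duplicate codes, needs stored state and is left out here):

```shell
# Hypothetical validator: a customer code must be exactly ten digits.
validate_code() {
    case "$1" in
        [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]) echo valid ;;
        *) echo invalid ;;
    esac
}

validate_code ""              # 1. blank entry              -> invalid
validate_code "12345678901"   # 2. more than ten digits     -> invalid
validate_code "12345abcde"    # 3. mix of digits and letters -> invalid
validate_code "1234567890"    # a well-formed code          -> valid
```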




Friday, 4 May 2012

Ad-hoc Testing or Informal Testing



  • In general, every testing team conducts planned testing, but a team sometimes adopts informal testing because of certain challenges or risks.
  • E.g.: lack of time, lack of resources, small team size, lack of skill, etc.
  • There are different ways of Ad-hoc testing.
    Ways of Adhoc Testing
  • Monkey Testing
    Due to lack of time, the testing team concentrates on some of the main activities in the software build for testing. This style of testing is known as “Monkey testing” or “Chimpanzee testing” or “Gorilla testing”.
  • Buddy Testing
    Due to lack of time, the management groups programmers & testers as “Buddies”. Every buddy group consists of programmers & testers.
    E.g.: 1:1 (or) 2:1 (or) 3:1 (preferable)
  • Exploratory Testing
    Due to lack of proper documentation of the software being built, the test engineers depend on past experience, discuss with others, browse the Internet or Operate similar projects and contact customer side people if possible. This style of testing is called “Exploratory Testing”.
  • Pair Testing
    Due to lack of knowledge of the project domain, the management pairs a senior tester with a junior tester or programmer, and they conduct testing together. This is called Pair Testing.
  • Defect Seeding
    To estimate the efficiency of the test engineers, the programmers add some known bugs to the build. This task is called defect seeding (also known as bebugging).

How to create a Test Plan without docs?


    1. Try to break up your huge application into modules that are functionally independent.
    2. Within each module you start with the functions one by one.
    3. For a simple function, write all possible test cases that arise while using the application, since there are no specs.
    4. In this way you can complete one function and, in turn, the whole application.
    5. To prepare the test cases or plan, make use of an Excel sheet: each sheet defines one function within the module. This is a good way to organize the test cases.

Wednesday, 2 May 2012

Priority and severity



    1) Severity is the seriousness of the problem.
    2) Priority is the urgency of fixing a problem.
    Examples of Severity and Priority
  • High severity and high priority
    System crashes in Log-in Page
  • High severity and Low priority
    System crashes on clicking a button. The page on which the button is located is part of next release.
  • Low severity and high priority
    Spelling mistake in the company name on the web site.
  • Low severity and low priority
    The position of a button in a web-based application is slightly off.



Defect life cycle



  • New: The tester finds a new bug and reports it to the test lead.
  • Open: The test lead opens the bug and assigns it to a developer.
  • Assigned: Once the bug is assigned, the developer has several choices:
  • Rejected: The developer can say that this is not a bug, e.g. the defect appears only because of a hardware or environment problem.
  • Deferred: The developer can postpone fixing the bug according to its priority.
  • Duplicate: If the bug is reported twice, or two bugs describe the same underlying issue, one bug's status is changed to "Duplicate".
  • Fixed: If none of the conditions above (rejected, deferred, duplicate) apply, the developer fixes the bug.
  • Testing: Retest the application to find defects; if the defect is still there, re-open the bug, otherwise move it to verified.
  • Re-open: If the defect is raised again during testing, the bug is re-opened.
  • Verified: Retest the fix in the whole application.
  • Closed: Close the bug.


Sunday, 29 April 2012

Functionality Testing


    This section mainly emphasizes the functionality aspect of the application. The first step in testing functionality is to check whether all the requirements are covered by the software. The actual functionality tests vary from software to software, but one can still frame general guidelines:
    1. Check that the functionality is covered as per the Requirement specifications or Functional specifications developed for the software.
    2. Within a dialog box, identify the dependent fields, and check the enabling and disabling of fields according to those dependencies.
  • For e.g.: to create Contact addresses in any application.
  • To create contact addresses user should be able to add, delete and modify the information. Contact Addresses will contain information like, First Name, Last Name, Address1, Address2, City, State, Country, Zip, Phone, etc., any other information may also be added.
    This form will have the required fields and in addition to that will have Add, Delete and Update buttons. The functionality of the buttons is as follows:
  • Initially only Add button will be enabled. Delete, Update buttons will be disabled. This is because initially there is no data available and unless one adds one cannot delete or update. In short, unless there is a single valid record available it is not possible to update or delete.
  • Only on selecting a record from the list are the Delete and Update buttons enabled and the Add button disabled. By default no record is selected.
  • Delete and Update should always give confirmation message before actually performing the operation.
  • After a Delete operation, the deleted item should no longer appear in the list.

Friday, 27 April 2012

Validation Testing


    This section mainly emphasizes the validation aspect of the application. Validation testing mainly depends on the fields in the dialog box and the functions it has to perform, but there are certain common rules that can be applied. General guidelines are:
    1. For text box fields where the value entered has to be numeric, check the following:
  • It should accept numbers only, not alphabets.
  • If the field is used to accept, for example:
  • Total number of days
  • Telephone number
  • Zip code, etc.
  • then it should not accept 0 or negative values.

    2. For text box fields where the value entered has to be alphanumeric, check the following:
  • It should accept alphabets and numbers only.
  • If the field is used to accept, for example:
  • First Name
  • Middle Name
  • Last Name
  • City
  • Country, etc.
  • then the field value should start with an alphabet.
  • Depending on the requirements, such fields may also accept special characters like -, _, . etc.

    3. If the field is a combo box then it has to be checked for following points:
  • Check the combo box has drop down values in it, it is not empty.
  • Drop down values should be alphabetically sorted. This might change as per requirement but as standard practices it should be alphabetically sorted. For e.g. to select data type from the list it will be as follows:
  • Date
  • Integer
  • String
  • Text, etc.
  • The selected drop-down value should still be displayed on closing and reopening the same dialog box.
  • By default a placeholder like "Select Value" or "_______" should be displayed, so that the user knows a value has to be selected for this field. Avoid defaulting to the first real value in the list.

    4. If the field is a list box then it has to be checked for following points:
  • Check the list box has values in it, it is not empty.
  • List box values should be alphabetically sorted and displayed. This might change as per requirement but as standard practices it should be alphabetically sorted.
  • Selection of any list box value should put a check before the value and should display the correct value(s) selected on closing and opening of the same dialog box.
  • If the list box supports multiple selection then check whether multiple values can be selected.

    5. If the field is a list of radio button then it has to be checked for following points:
  • Check whether as per requirements all the values are listed. For e.g. to select date format. Possible values displayed will be as follows:
    mm/dd/yyyy
    dd/mm/yyyy
    mm/dd/yy
    dd/mm/yy
    yyyy/mm/dd etc.
  • The same selected value should be displayed on closing and reopening the same dialog box.

    6. Data Controls are to be tested as part of functionality testing.
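Guidelines 1 and 2 above can be turned into quick pattern checks — a minimal sketch with invented helper names and sample values:

```shell
# Rule 1: numeric-only, no zero or negatives (e.g. total number of days).
is_positive_number() {
    case "$1" in
        ''|*[!0-9]*) echo no ;;   # empty, or contains a non-digit (covers - signs)
        0*) echo no ;;            # zero itself, or a leading zero
        *) echo yes ;;
    esac
}

# Rule 2: alphanumeric, must start with a letter (e.g. First Name, City).
starts_with_alpha() {
    case "$1" in
        [A-Za-z]*) echo yes ;;
        *) echo no ;;
    esac
}

is_positive_number 30      # yes
is_positive_number 0       # no
is_positive_number -5      # no
starts_with_alpha London   # yes
starts_with_alpha 9ines    # no
```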


GUI Testing


    This section mainly emphasizes the GUI (Graphical User Interface) aspect of the application. There is no fixed set of GUI guidelines that can be followed blindly: GUI standards vary from company to company and from application to application. Still, one can set general guidelines for how to start GUI testing. These guidelines apply to every screen/dialog box of the application. General guidelines are:
    1. All dialog boxes should have a consistent look throughout the application. For example, if the heading within a dialog box is blue, then the heading of every dialog box should be that color.
    2. Every field on the screen should have an associated label.
    3. Every screen should have equivalent OK and Cancel buttons.
    4. The color combination used should be appealing.
    5. Every field in the dialog box should have shortcut-key support, e.g. User Name.
    6. Tab order should normally be set horizontally across the fields; in some cases it can be set vertically.
    7. Mandatory fields should be marked with * (a red asterisk) to indicate that they are mandatory.
    8. The default key <Enter> should act as OK for the dialog box.
    9. The default key <Esc> should act as Cancel for the dialog box.


Wednesday, 25 April 2012

What Is a Test Case



  • A test case is a case that tests the functionality of a specific object.
  • A test case is a description of what is to be tested, what data is to be given, and what actions are to be done to check the actual result against the expected result.
  • It is a document which includes step-by-step descriptions of input and output conditions, with test data, along with the expected results.
  • "It is a specific input that you'll try and a procedure that you'll follow when you test the software."
A good test case
  • effective
  • exemplary
  • evolvable
  • economic
  • self-standing
  • self-cleaning
  • traceable (requirement coverage)
  • accurate
  • repeatable/reusable
Review Format
    1. TC_ID
    2. Objective
    3. Test Data
    4. Expected Result
    5. Actual Result
    6. Status
    7. Remark
Test Case Execution
  • Execution and execution results play a vital role in testing. Each and every activity should have proof.
  • The following should be taken care of:
  • Number of test cases executed.
  • Number of defects found.
  • Screenshots of successful and failed executions, captured in a Word document.
  • Time taken to execute.
  • Time wasted due to the unavailability of the system.

Tuesday, 24 April 2012

Test Strategy



A Test Strategy document is a high-level document, normally developed by the project manager. It defines the "testing approach" used to achieve the testing objectives, and is normally derived from the Business Requirement Specification document.
The Test Strategy document is a static document, meaning that it is not updated often. It sets the standards for testing processes and activities; other documents, such as the Test Plan, draw their contents from the standards set in the Test Strategy document.
Some companies include the "Test Approach" or "Strategy" inside the Test Plan, which is fine and is usually the case for small projects. For larger projects, however, there is one Test Strategy document and a number of Test Plans, one for each phase or level of testing.

Components of the Test Strategy document:

  • Scope and Objectives
  • Business issues
  • Roles and responsibilities
  • Communication and status reporting
  • Test deliverables
  • Industry standards to follow
  • Test automation and tools
  • Testing measurements and metrics
  • Risks and mitigation
  • Defect reporting and tracking
  • Change and configuration management
  • Training plan



Test Plan


The Test Plan document on the other hand, is derived from the Product Description, Software Requirement Specification SRS, or Use Case Documents.
The Test Plan document is usually prepared by the Test Lead or Test Manager and the focus of the document is to describe what to test, how to test, when to test and who will do what test.
It is not uncommon to have one Master Test Plan as a common document for all the test phases, with each test phase having its own Test Plan document.
There is much debate as to whether the Test Plan should also be a static document, like the Test Strategy document mentioned above, or whether it should be updated often to reflect changes in the direction of the project and its activities.
My own personal view is that when a testing phase starts and the Test Manager is "controlling" the activities, the test plan should be updated to reflect any deviation from the original plan. After all, planning and control are continuous activities in the formal test process. A Test Plan typically contains:

  • Test Plan id
  • Introduction
  • Test items
  • Features to be tested
  • Features not to be tested
  • Test techniques
  • Testing tasks
  • Suspension criteria
  • Feature pass/fail criteria
  • Test environment (Entry criteria, Exit criteria)
  • Test deliverables
  • Staff and training needs
  • Responsibilities
  • Schedule




Thursday, 19 April 2012

Software Testing Types




  • Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

  • White box testing – This testing is based on knowledge of the internal logic of an application’s code. Also known as Glass box Testing. Internal software and code working should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, conditions.

  • Unit testing – Testing of individual software components or modules. Typically done by the programmer, not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses.

  • Incremental integration testing – A bottom-up approach: continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.

  • Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

  • Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per requirements. Black-box testing geared to the functional requirements of an application.

  • System testing – Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications, covers all combined parts of a system.

  • End-to-end testing – Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

  • Sanity testing – Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes during initial use, the system is not stable enough for further testing, and the build is sent back to be fixed.

  • Regression testing – Testing the application as a whole for the modification in any module or functionality. Difficult to cover all the system in regression testing so typically automation tools are used for these testing types.

  • Acceptance testing – Normally this type of testing is done to verify that the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.

  • Load testing – A performance test to check system behavior under load, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.

  • Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, e.g. input beyond storage capacity, complex database queries, or continuous input to the system or database.

  • Performance testing – A term often used interchangeably with ‘stress’ and ‘load’ testing; checks whether the system meets performance requirements. Various performance and load tools are used for this.

  • Usability testing – A user-friendliness check: the application flow is tested, whether a new user can understand the application easily, and whether proper help is available wherever the user gets stuck. Basically, system navigation is checked in this testing.

  • Install/uninstall testing - Tested for full, partial, or upgrade install/uninstall processes on different operating systems under different hardware, software environment.

  • Recovery testing – Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

  • Security testing – Can the system be penetrated by any hacking technique? Tests how well the system protects against unauthorized internal or external access, and whether the system and database are safe from external attacks.

  • Compatibility testing – Testing how well software performs in a particular hardware/software/operating system/network environment and different combination of above.

  • Comparison testing – Comparison of product strengths and weaknesses with previous versions or other similar products.

  • Alpha testing – An in-house virtual user environment can be created for this type of testing. It is done toward the end of development; minor design changes may still be made as a result.

  • Beta testing – Testing typically done by end users or others; the final testing before releasing the application for commercial use.