Friday, 31 January 2014

Globalization, Internationalization and Localization in Software

What is Globalization, Internationalization and Localization in Software Testing?

In today’s competitive world, many clients are targeting a global audience, which means going beyond borders and making sure the application is functional, readable, and viewable across multiple platforms and browsers. There are also many languages in the world, so do we need to create a separate application or website for each language and country? The answer is no. This can be accomplished simply by writing the code in such a way that, by changing the text in a resource file, the product can be localized into any language. This type of testing is called Globalization (Internationalization) and Localization Testing.

Summary of what is meant by Globalization, Internationalization and Localization Testing:

  • Translation is one part of Localization.
  • Internationalization is a pre-requisite of Localization.
  • Internationalization and Localization are parts of Globalization.
  • Globalization includes many business-related activities outside of the product itself.

What is Globalization, Internationalization and Localization?

The aim of Internationalization and Localization testing is to ensure usability, acceptability, and reliability for audiences and users worldwide, and to check whether the application under test is ready for world-readiness. First the application under test needs to be localized, and then it is tested on many other counts such as locale, copy text, language, compatibility, reliable functionality, and interoperability.
What is Globalization (Internationalization) Testing?

Globalization definition: Globalization testing is the process of checking whether software performs properly in any locale or culture and functions properly with all types of international input; it covers the steps needed to make your product truly global. This type of testing validates whether the application is capable of being used all over the world and whether its input fields accept text in all languages.

It is also called “G11N” because there are 11 characters between the G and the N. Globalization testing ensures that the product handles international support without breaking functionality. It mainly focuses on the functionality of the product with any culture/locale settings and every type of possible international input. It also helps uncover issues that could increase the cost of localization and future product support.


Localization definition: Localization testing is the process of validating whether an application is ready for use in a particular location or country. Localization testing is carried out to check the quality of the product for a particular locale/culture. To check the quality of translation, we should involve local staff as well. It is carried out on the localized version of the product, for example a French product for French users. It is also called “L10N” because there are 10 characters between the L and the N.

Let’s look at an example of a Zip code field in a sign-up form:
1) Globalized: it should allow alphanumeric input.
2) Localized (for a country like India): it should allow only numbers in the input field.
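This Zip code rule can be sketched as a locale-aware validator. The field rules and regular expressions below are invented examples, not taken from any particular application under test:

```python
import re

# Hypothetical validation rules: a permissive globalized rule plus a
# locale-specific rule for India (6-digit PIN codes).
ZIP_RULES = {
    "GLOBAL": re.compile(r"^[A-Za-z0-9 -]{3,10}$"),  # alphanumeric allowed
    "IN": re.compile(r"^[1-9][0-9]{5}$"),            # India: digits only
}

def is_valid_zip(value: str, locale: str = "GLOBAL") -> bool:
    """Return True if the zip/postal code matches the locale's rule."""
    rule = ZIP_RULES.get(locale, ZIP_RULES["GLOBAL"])
    return bool(rule.match(value))

print(is_valid_zip("SW1A 1AA"))        # globalized field accepts alphanumerics
print(is_valid_zip("110001", "IN"))    # Indian PIN code: digits only
print(is_valid_zip("SW1A 1AA", "IN"))  # rejected for the IN locale
```

A globalization test would feed international inputs to the GLOBAL rule; a localization test for India would assert that only six-digit numeric codes pass.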

One more nice example of Localization:
Localization Example of Steering Wheel



What do we need to test in Internationalization or Globalization?

  • Check the functionality with different language settings; it is quite possible that a feature works under the English setting but fails under others. One example is an API that caused a communication problem between consumer and owner: the two sides had no agreement on a data format, so one used the English format while the other used a local format.
  • Check that no hard-coded strings are used in the code. You can test with different languages by changing the language setting on the computer.
  • Check numbers, currencies, and character sets for different countries.
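The hard-coded-string check can be illustrated with a minimal sketch of externalized message bundles. The dictionaries, keys, and the `t()` helper here are invented for illustration; real products typically load translations from resource files (.po, .properties, .resx) rather than inline dicts:

```python
# Illustrative resource bundles keyed by language code.
MESSAGES = {
    "en": {"greeting": "Welcome, {name}!"},
    "fr": {"greeting": "Bienvenue, {name} !"},
    "de": {"greeting": "Willkommen, {name}!"},
}

def t(key: str, lang: str, **params) -> str:
    """Look up a message by key for a language, falling back to English."""
    bundle = MESSAGES.get(lang, MESSAGES["en"])
    template = bundle.get(key, MESSAGES["en"][key])
    return template.format(**params)

print(t("greeting", "fr", name="Ana"))  # Bienvenue, Ana !
print(t("greeting", "xx", name="Ana"))  # unknown locale falls back to English
```

Because the code only ever refers to message keys, switching the language setting swaps every visible string without touching the logic, which is exactly what the internationalization check above is verifying.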

Benefits of Localization and Globalization Testing

  • It reduces overall testing costs.
  • It reduces support costs.
  • It helps reduce testing time, which results in a faster time-to-market.
  • It gives more flexibility and scalability.

Conclusion: Localization and Globalization testing are done to adapt a product to a local or regional market, and their goal is to get the linguistic and cultural aspects of the product right. This testing is performed with the help of translators, language engineers, and localizers. We now understand the importance of Localization and Globalization Testing, and the risk we run without this type of software testing. It is very important to execute Globalization and Localization Testing for an international product.




Monday, 27 January 2014

Traceability Matrix or Requirement Traceability Matrix


What is a Traceability Matrix (RTM)?

A Traceability Matrix (also known as Requirement Traceability Matrix, RTM) is a table used to trace requirements during the Software Development Life Cycle. It can be used for forward tracing (i.e. from requirements to design or code) or backward tracing (i.e. from code to requirements). There are many user-defined templates for an RTM. Each requirement in the RTM document is linked with its associated test case, so that testing can be done as per the mentioned requirements. Furthermore, the Bug ID is also included and linked with its associated requirements and test cases.

The main goals of this matrix are:
  1. Make sure the software is developed as per the mentioned requirements.
  2. Help in finding the root cause of any bug.
  3. Help in tracing the developed documents during the different phases of the SDLC.
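A minimal sketch of such a matrix and the coverage check it enables. The requirement, test case, and bug IDs are made up for illustration:

```python
# RTM sketch: each requirement is linked to its test cases and any bug IDs.
rtm = {
    "REQ-001": {"test_cases": ["TC-01", "TC-02"], "bugs": ["BUG-7"]},
    "REQ-002": {"test_cases": ["TC-03"], "bugs": []},
    "REQ-003": {"test_cases": [], "bugs": []},  # not yet covered
}

def uncovered_requirements(matrix):
    """Return requirements with no linked test case (a coverage gap)."""
    return [req for req, links in matrix.items() if not links["test_cases"]]

print(uncovered_requirements(rtm))  # ['REQ-003']
```

The same table answers the backward-tracing question too: given BUG-7, the matrix shows which requirement and test cases it belongs to.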


Levels Of Testing

Levels of testing cover the different methodologies that can be used while conducting software testing. The main levels of software testing are:
  1. Functional Testing
  2. Non-functional Testing

1. Functional Testing
This is a type of black box testing that is based on the specifications of the software to be tested. The application is tested by providing input, and the results are examined to confirm that they conform to the functionality it was intended for. Functional testing is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.

There are five steps involved when testing an application for functionality.
Step I - Determine the functionality that the intended application is meant to perform.
Step II - Create test data based on the specifications of the application.
Step III - Determine the expected output based on the test data and the specifications of the application.
Step IV - Write test scenarios and execute the test cases.
Step V - Compare actual and expected results based on the executed test cases.
An effective testing practice applies the above steps to the testing policies of every organization, ensuring the organization maintains the strictest standards of software quality.
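Step V boils down to comparing actual and expected results for every executed case. In this sketch, `add_to_cart_total` is an invented stand-in for whatever functionality Step I identified, and the test data is made up:

```python
# Function under test (a stand-in for the real functionality).
def add_to_cart_total(prices):
    return round(sum(prices), 2)

# Step II/III: test data with expected output derived from the specification.
test_cases = [
    {"input": [10.0, 2.5], "expected": 12.5},
    {"input": [], "expected": 0},
    {"input": [0.1, 0.2], "expected": 0.3},
]

# Step IV/V: execute each case and compare actual vs expected.
results = []
for tc in test_cases:
    actual = add_to_cart_total(tc["input"])
    results.append("PASS" if actual == tc["expected"] else "FAIL")

print(results)  # ['PASS', 'PASS', 'PASS']
```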


  
2. Non-functional Testing

  1. Performance Testing
  2. Load Testing 
  3. Stress Testing 
  4. Usability Testing 
  5. Reliability Testing 



Non-Functional Testing

This section covers testing the application for its non-functional attributes. Non-functional testing involves testing the software against requirements which are non-functional in nature but just as important, such as performance, security, and user interface.
Some of the important and commonly used non-functional testing types are described below:

Performance Testing

Performance testing is mostly used to identify bottlenecks or performance issues rather than to find bugs in the software. Different factors contribute to lowering the performance of software:
  • Network delay.
  • Client side processing.
  • Database transaction processing.
  • Load balancing between servers.
  • Data rendering.
Performance testing is considered an important and mandatory testing type with respect to the following aspects:
  • Speed (i.e. Response Time, data rendering and accessing)
  • Capacity
  • Stability
  • Scalability
It can be either a qualitative or a quantitative testing activity, and can be divided into sub-types such as load testing and stress testing.
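As a quantitative sketch of the speed aspect, the snippet below times a single call and compares it against a response-time budget. The 200 ms budget and the sleeping stand-in operation are invented examples, not from any real SLA:

```python
import time

def operation():
    """Stand-in for the request or query under test."""
    time.sleep(0.01)

# Measure one call's response time with a high-resolution clock.
start = time.perf_counter()
operation()
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"response time: {elapsed_ms:.1f} ms")
print(elapsed_ms < 200)  # True when the call meets the example budget
```

Real performance testing would repeat this over many calls and report percentiles, not a single sample, but the comparison against a budget is the core idea.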

Load Testing

Load testing is the process of testing the behavior of software by applying maximum load, in terms of the software accessing and manipulating large input data. It can be done at both normal and peak load conditions. This type of testing identifies the maximum capacity of the software and its behavior at peak time.
Most of the time, load testing is performed with the help of automated tools such as LoadRunner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk Performer, and Visual Studio Load Test.
Virtual users (VUsers) are defined in the automated testing tool, and a script is executed to carry out the load test. The number of users can be increased or decreased concurrently or incrementally, based upon the requirements.
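The VUser idea can be sketched with plain Python threads. Real tools such as JMeter or LoadRunner manage virtual users far more elaborately; this only illustrates several concurrent users hitting the same operation and recording their timings:

```python
import threading
import time

results = []              # (user_id, elapsed_seconds) per virtual user
lock = threading.Lock()   # protects the shared results list

def virtual_user(user_id):
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for a request to the system under test
    elapsed = time.perf_counter() - start
    with lock:
        results.append((user_id, elapsed))

# Ramp up 10 concurrent virtual users, then wait for all of them.
threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 10: every virtual user completed
```

Increasing `range(10)` incrementally and watching the recorded timings degrade is, in miniature, how a load test finds the capacity limit.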

Stress Testing

This testing type covers the behavior of software under abnormal conditions: taking away resources, or applying load beyond the actual load limit.
The main intent is to test the software by applying load and taking away the resources used by the software, in order to identify the breaking point. This testing can be performed with different scenarios such as:
  • Shutting down or restarting network ports randomly.
  • Turning the database on or off.
  • Running other processes that consume resources such as CPU, memory, and server capacity.

Usability Testing

This section covers concepts and definitions of usability testing from the software point of view. It is a black box technique used to identify errors and improvements in the software by observing users during their usage and operation.
According to Nielsen, usability can be defined in terms of five factors: efficiency of use, learnability, memorability, errors/safety, and satisfaction. In his view, a product's usability is good and the system is usable if it possesses these factors.
Nigel Bevan and Macleod considered usability to be a quality requirement that can be measured as the outcome of interactions with a computer system. This requirement is fulfilled, and the end user satisfied, if the intended goals are achieved effectively with the use of proper resources.
Molich (2000) stated that a user-friendly system should fulfill five goals: easy to learn, easy to remember, efficient to use, satisfactory to use, and easy to understand.
In addition to these definitions of usability, there are standards, quality models, and methods which define usability in terms of attributes and sub-attributes, such as ISO 9126, ISO 9241-11, ISO 13407, and IEEE Std 610.12.

UI vs Usability Testing

UI testing involves testing the graphical user interface of the software. It ensures that the GUI conforms to requirements in terms of color, alignment, size, and other properties.
Usability testing, on the other hand, ensures that a good, user-friendly GUI is designed and is easy for the end user to use. UI testing can be considered a sub-part of usability testing.

Security Testing

Security testing involves testing the software in order to identify any flaws and gaps from a security and vulnerability point of view. The following are the main aspects which security testing should ensure:
  • Confidentiality.
  • Integrity.
  • Authentication.
  • Availability.
  • Authorization.
  • Non-repudiation.
  • Software is secure against known and unknown vulnerabilities.
  • Software data is secure.
  • Software is according to all security regulations.
  • Input checking and validation.
  • SQL injection attacks.
  • Injection flaws.
  • Session management issues.
  • Cross-site scripting attacks.
  • Buffer overflows vulnerabilities.
  • Directory traversal attacks.

Portability Testing

Portability testing checks that the software is reusable and can be moved from one environment to another. The following strategies can be used for portability testing:
  • Transferring installed software from one computer to another.
  • Building an executable (.exe) to run the software on different platforms.
Portability testing can be considered one of the sub-parts of system testing, as this testing type covers the overall testing of software with respect to its usage over different environments. Computer hardware, operating systems, and browsers are the major focus of portability testing. The following are some pre-conditions for portability testing:
  • Software should be designed and coded, keeping in mind Portability Requirements.
  • Unit testing has been performed on the associated components.
  • Integration testing has been performed.
  • Test environment has been established.

     

Installation Testing:

Installation testing is a type of testing in which the application is installed into the environment by following the guidelines provided in the installation document. If the installation is successful, we conclude that the given guidelines are correct; otherwise, we conclude that they are not.


Friday, 24 January 2014

GREY BOX TESTING

Grey Box testing


Grey box testing is a technique to test an application with limited knowledge of its internal workings. In software testing, the saying “the more you know, the better” carries a lot of weight when testing an application.
Mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black box testing, where the tester only tests the application’s user interface, in grey box testing, the tester has access to design documents and the database. Having this knowledge, the tester is able to better prepare test data and test scenarios when making the test plan.


Advantages

  • Offers combined benefits of black box and white box testing wherever possible.
  • Grey box testers don’t rely on the source code; instead they rely on interface definitions and functional specifications.
  • Based on the limited information available, a grey box tester can design excellent test scenarios, especially around communication protocols and data type handling.
  • The test is done from the point of view of the user, not the designer.

Disadvantages


  • Since access to the source code is not available, the ability to go over the code and check test coverage is limited.
  • The tests can be redundant if the software designer has already run a test case.
  • Testing every possible input stream is unrealistic because it would take an unreasonable amount of time; therefore, many program paths will go untested.

Differences Between terms in Software Testing

Differences Between Audit and Inspection

Audit: A systematic process to determine how the actual testing process is conducted within an organization or a team. Generally, it is an independent examination of the processes involved during the testing of software. As per IEEE, it is a review of documented processes to check whether the organization implements and follows them. Types of audit include the legal compliance audit, internal audit, and system audit.


Inspection: A formal technique which involves formal or informal technical reviews of any artifact, with the aim of identifying any error or gap. As per IEEE 94, inspection is a formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems.
A formal inspection meeting may follow this process: Planning, Overview, Preparation, Inspection Meeting, Rework, and Follow-up.


Difference between Testing and Debugging

Testing: It involves identifying bugs/errors/defects in the software without correcting them. Normally, professionals with a quality assurance background are involved in the identification of bugs. Testing is performed in the testing phase.

Debugging: It involves identifying, isolating, and fixing problems/bugs. Developers who code the software perform debugging upon encountering an error in the code. Debugging is part of white box or unit testing. Debugging can be performed in the development phase while conducting unit testing, or in later phases while fixing reported bugs.

Difference Between Testing, Quality Assurance and Quality Control

Most people are confused by the concepts of, and differences between, Quality Assurance, Quality Control, and Testing. Although they are interrelated, and at some level can be considered the same activities, there is indeed a difference between them. The definitions of and differences between them are outlined below:


Difference Between Verification and Validation




Difference Between Black Box Testing White Box Testing and Grey Box Testing





User Acceptance Testing (UAT)

What is User Acceptance Testing?
-----------------------------------

User Acceptance Testing (UAT) - also called beta testing, application testing, and/or end user testing - is a phase of software development in which the software is tested in the "real world" by the intended audience or a business representative. Whilst the technical testing of IT systems is a highly professional and exhaustive process, testing of business functionality is an entirely different proposition.

User Acceptance Testing (UAT) is one sure way to reduce or eliminate change requests, and drastically reduce project costs.

Advantages of UAT

•       Ensuring that the application behaves exactly as expected.
•       Reducing the total cost of ownership.
•       Reducing the cost of developing the application.


Tasks of User Acceptance Testing
When performing UAT, there are seven basic steps to ensure the system is tested thoroughly and meets the business needs.

1 – Analyze Business Requirements
2 – Identify UAT Scenarios
3 – Define the UAT Test Plan
4 – Create UAT Test Cases
5 – Run the Tests
6 – Record the Results
7 – Confirm Business Objectives are met
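Steps 5 to 7 can be sketched as a tiny checklist runner. The scenario names and the sign-off rule (every scenario must pass before business objectives are confirmed) are invented for illustration:

```python
# Recorded results of running the UAT scenarios (step 5 and 6).
uat_scenarios = {
    "customer can place an order": True,
    "invoice totals match the quote": True,
    "monthly report exports to PDF": True,
}

def uat_signoff(results):
    """Step 7: confirm business objectives only if every scenario passed."""
    failed = [name for name, passed in results.items() if not passed]
    return {"approved": not failed, "failed_scenarios": failed}

print(uat_signoff(uat_scenarios))
# {'approved': True, 'failed_scenarios': []}
```

Any failed scenario would appear in `failed_scenarios`, feeding back into change requests before release rather than after.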


Alpha Testing

Alpha testing is the first stage of testing and is performed within the teams (the developer and QA teams). Unit testing, integration testing, and system testing, when combined, are known as alpha testing. During this phase, the following will be tested in the application:
  • Spelling mistakes
  • Broken links
  • Unclear directions
  • The application will be tested on machines with the lowest specification to test loading times and any latency problems.

Beta Testing

This test is performed after alpha testing has completed successfully. In beta testing, a sample of the intended audience tests the application, which is why it is also known as pre-release testing. Beta versions of software are ideally distributed to a wide audience on the web, partly to give the program a real-world test and partly to provide a preview of the next release. In this phase:
  • Users install and run the application and send their feedback to the project team.
  • Typographical errors, confusing application flow, and even crashes are reported.
  • With this feedback, the project team can fix the problems before releasing the software to the actual users.
  • The more issues you fix that solve real user problems, the higher the quality of your application will be.
  • Releasing a higher-quality application to the general public will increase customer satisfaction.




Summary
Whether your organization designates the functional role associated with testing as a Business Analyst, Tester, or Quality Assurance professional, User Acceptance Testing done well will engage those responsible very early on in the project development cycle. DevelopMentor Business Analysis curriculum offers many opportunities for learning the skills required of those responsible for UAT.

Thursday, 23 January 2014

DEFECT TRACKING


DEFECT TRACKING

Defect: If a feature is not working according to the requirement, it is called a defect.
In other words, a deviation from the requirement specification is called a defect.

The developer develops the product, the test engineer (TE) starts testing it, and when the TE finds a defect, it must be sent to the development team.

The TE prepares a defect report and sends a mail to the development lead saying “bug open”.
The development lead looks at the mail and at the bug, identifies which development engineer developed the feature that has the bug, sends the defect report to that particular developer, and says “bug assigned”.

The development engineer fixes the bug and sends a mail to the test engineer saying “bug fixed”, with a CC to the development lead.
The TE then takes the new build in which the bug is fixed; if the bug is really fixed, the TE sends a mail to the developer saying “bug closed”, again with a CC to the development lead.

Every bug will have a unique number.
If the defect is still there, it will be sent back as “bug reopened”.
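The mail exchange described above amounts to a small state machine. Here is a minimal sketch, with transition rules invented to mirror that flow (open, assigned, fixed, then closed or reopened):

```python
# Allowed state transitions, mirroring the defect flow described above.
TRANSITIONS = {
    "open": {"assigned"},
    "assigned": {"fixed"},
    "fixed": {"closed", "reopened"},
    "reopened": {"assigned"},
}

class Defect:
    def __init__(self, defect_id):
        self.defect_id = defect_id  # every bug gets a unique number
        self.state = "open"

    def move_to(self, new_state):
        """Advance the defect, rejecting transitions the flow doesn't allow."""
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

bug = Defect("BUG-101")
bug.move_to("assigned")   # development lead assigns it
bug.move_to("fixed")      # developer fixes it
bug.move_to("reopened")   # retest fails: the defect is still there
print(bug.state)          # reopened
```

Encoding the lifecycle this way makes illegal jumps (say, open straight to closed) impossible, which is exactly what a defect tracking tool enforces.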

DEFECT LIFE CYCLE



DEFECT REPORT

Defect ID – a unique number given to the defect.

Test Case Name – whenever we find a defect, we send the defect report, not the test case, to the developer. For defect tracking, we track only the defect report; the test case is only a reference for the TE.

Below is how a defect report looks:




SOFTWARE TEST LIFE CYCLE (STLC)

SOFTWARE TEST LIFE CYCLE (STLC)
Testing itself has many phases; together they are called the STLC.
The STLC is part of the SDLC.
The Defect Life Cycle is a part of the STLC.


Requirement is the input for testing.

Test Plan – a document which drives all future testing activities of the project. All future testing activities are planned and put into a document, and this document is known as the test plan. It contains the number of engineers needed for the project, who should test which feature, how defects must be communicated to the development team, when we should start and finish writing and executing test cases, what types of testing we will use for the application, and so on.

Write test cases – we write test cases for each feature. These test cases are reviewed, and once all mistakes are corrected and the test cases are approved, they are stored in the test case repository.

Traceability Matrix – a document which ensures that every requirement has a test case.
Test cases are written by looking at the requirements, and tests are executed by looking at the test cases. If any requirement is missed, i.e. test cases are not written for a particular requirement, then that feature goes untested and may contain bugs. Just to ensure that all requirements are covered, the traceability matrix is written. This is shown below.

Defect Tracking – any bug found by the testing team is sent to the development team. The testing team then has to check whether the bug has been fixed by the developers.

Test Execution Report (TER) – sent to the customer; it contains a list of bugs (major, minor, and critical), a summary of tests passed and failed, etc. When this is sent, the project is over as far as the customer is concerned.
A TER is prepared after every test cycle and sent to the development team, the testing team, management, and the customer (depending on whether it is a fixed-bid project or a time-and-materials project).
The last TER of the last test cycle is always sent to the customer, and this marks the end of the project from the customer's point of view.

Retrospective meeting (also called the Post-Mortem Meeting or Project Closure Meeting)
The test manager calls everyone in the testing team to a meeting and asks them for a list of the mistakes and achievements in the project.
This is done by the test lead or test manager, who documents the retrospective and stores it in the QMS (Quality Management System). The QMS is a folder which contains a Retrospect folder, where this document is stored. When we get a new project, while writing the test plan we open this retrospective file, try to implement the good practices, and avoid repeating the mistakes.

REQUIREMENTS COLLECTION / SYSTEM STUDY

The requirements can be in any of the following forms,
·         CRS (Customer Requirement Specification)
·         SRS (System Requirement Specification)
·         FS (Functional Specification)
·         If we don’t have requirements and if we are given only the application, then we do exploratory testing.
·         Use case

Use Case
A use case is a pictorial representation of requirements. It explains how the end user interacts with the application and gives all the possible ways in which the end user uses it.

TEST PLAN
A test plan is a document which drives all future testing activities.
The test plan is prepared by the test manager (20%), the test engineers (20%), and the test lead (60%).

There are 15 sections in a test plan. We will look at each of them below:

1) Objective 
It gives the aim of preparing the test plan, i.e. why we are preparing it.

2) SCOPE

3) TESTING METHODOLOGIES

For example: smoke testing, functional testing, integration testing, system testing, adhoc testing, compatibility testing, regression testing, globalization testing, accessibility testing, usability testing, and performance testing.

4) APPROACH

The way we will go about testing the product:
a) By writing high-level scenarios
b) By writing flow graphs

5) ASSUMPTIONS
While writing the test plan, certain assumptions are made, for example about technology, resources, etc.

6) RISKS
If the assumptions fail, risks are involved.

7) CONTINGENCY PLAN OR MITIGATION PLAN OR BACK-UP PLAN
To overcome the risks, a contingency plan has to be made, at least to reduce the risk from 100% to 20%.

8) ROLES AND RESPONSIBILITIES


9) SCHEDULES :-
This section records when exactly each activity should start and end. An exact date is mentioned for every activity.

10) DEFECT TRACKING
In this section we describe how the defects found during testing are to be communicated to the development team, and how the development team should respond. We also mention the priority of each defect: high, medium, or low.

11) Test Environment
It describes the environment in which the tests are to be carried out.
It gives details about the software, the hardware, and the procedure to install the software.

12) Entry and Exit Criteria

Entry Criteria
a) WBT (white box testing) should be over
b) Test cases should be ready
c) The product should be installed with a proper test environment
d) Test data should be ready
e) Resources should be available

Exit Criteria
1) Based on the percentage of test cases executed
2) Based on the percentage of test cases passed
3) Based on severity
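These exit criteria can be made concrete with a small check. The 95% execution and 90% pass thresholds, and the zero-open-critical rule, are invented example values:

```python
def exit_criteria_met(executed, total, passed, open_critical,
                      min_exec_pct=95, min_pass_pct=90):
    """All three criteria: execution %, pass %, and no open critical bugs."""
    exec_pct = executed / total * 100
    pass_pct = passed / executed * 100 if executed else 0
    return (exec_pct >= min_exec_pct
            and pass_pct >= min_pass_pct
            and open_critical == 0)

print(exit_criteria_met(executed=98, total=100, passed=95, open_critical=0))  # True
print(exit_criteria_met(executed=98, total=100, passed=95, open_critical=2))  # False
```

A real test plan would state the actual thresholds agreed with the stakeholders; the point is that exit criteria should be checkable numbers, not opinions.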

13) TEST AUTOMATION

At this stage it is decided:
1) Which features should be automated
2) Which features are not to be automated
3) Which automation tool we plan to use
4) Which automation framework we plan to use

14) DELIVERABLES

The deliverables are the output from the testing team: what we will deliver to the customer at the end of the project. They include:
  • Test Plan
  • Test Cases
  • Test Scripts
  • Traceability Matrix
  • Defect Report
  • Test Execution Report
  • Graphs and Metrics
  • Release Note

15)TEMPLATES
This section contains all the templates for the documents which will be used in the project. Only these templates are used by all the test engineers in the project, so as to provide uniformity across the entire project.

The various documents which will be covered in the Template section are,
·         Test Case
·         Traceability Matrix
·         Test Execution Report
·         Defect Report
·         Test Case Review Template


TEST DESIGN

At this stage the test plan is implemented. The test cases are designed for each functionality at this stage.

TRACEABILITY MATRIX

A Traceability Matrix is a document which maps requirements to test cases. We write the TM to make sure that every requirement has at least one test case.

Advantages of Traceability Matrix
·         Ensures that every requirement has at least one test case
·         If a requirement suddenly changes, we will know exactly which test case or automation script must be modified
·         We will know which test cases should be executed manually and which should be automated

TEST EXECUTION

Here, we test the product.
We test repeatedly, for 40 to 60 cycles, doing all types of testing on the application. Test execution is the phase where we spend 80% of our time on the project; only 20% is spent on the remaining stages.

TEST REPORT
At this stage the defect is raised and assigned to the test manager, who assigns it to the person responsible for resolving it.