QA

We have been using the following strategies for the last 25 years to deliver the best-quality product.

The QA process we follow:

Project Kick-off

  • Team Introduction
  • Goal setting
  • Timeline

The project SRS (Software Requirements Specification) is shared with the QA team.

As we follow an agile methodology, specs can change during a project; such changes are handled through our daily meetings, chats, or updated stories in Jira.

Planning:

  • Identify tools
  • Prepare Test Plan
  • Identify testing levels
  • Create stories/features
  • Grooming and estimation
  • Define best practices
  • Decide communication channel

Project managers study the specifications and convert all business rules into stories. Each story can have several tasks, and tasks are divided into multiple subtasks (pictured as a simple hierarchy below).
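For illustration only (these class and field names are ours, not a Jira schema), the breakdown can be pictured as a small Python hierarchy:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Subtask:
    title: str

@dataclass
class Task:
    title: str
    subtasks: List[Subtask] = field(default_factory=list)

@dataclass
class Story:
    business_rule: str                                         # the SRS rule this story covers
    acceptance_criteria: List[str] = field(default_factory=list)
    tasks: List[Task] = field(default_factory=list)

# One business rule broken down into a story, its tasks, and their subtasks
story = Story(
    business_rule="Registered users can reset their password by email",
    acceptance_criteria=["Reset link expires after 24 hours"],
    tasks=[Task("Build reset-password API",
                subtasks=[Subtask("Add token table"), Subtask("Send reset email")])],
)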

The Test Plan is created as per the SRS.

Requirement Analysis:

  • Impact analysis
  • Finalize acceptance criteria
  • QA plan design
  • Test scenario writing
  • Test data creation
  • QA review
  • Test harness

Test Execution:

Set up the QA environment for manual testing.

  • Execution
  • Defect Reporting
  • Prepare Test Report


Test Closure

  • Release notes
  • Go/no-go decision for go-live


We recommend the following best practices:

Make sure all stories/bugs are assigned to QA team members, unit-test comments are added by the developer, the code is frozen, and the build is released to the QA environment.

Make sure all positive and negative criteria are covered in the test cases.

Keep every test case's pre-conditions, data, and expected results documented. It is very important to have all the data, flows, and environments needed for testing ready in advance.
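As a minimal sketch of this practice (pytest, with a toy login function standing in for the real application under test), the pre-condition lives in a fixture, the data and expected results live in a documented table, and the test reads from both:

import pytest

# Toy system under test, standing in for the real application code
def login(email, password, user_db):
    return user_db.get(email) == password

# Pre-condition: a known user must exist before every case runs
@pytest.fixture
def user_db():
    return {"valid_user@example.com": "Password1!"}

# Test data documented with the case: (email, password, expected result)
LOGIN_CASES = [
    ("valid_user@example.com", "Password1!", True),        # positive
    ("valid_user@example.com", "wrong-password", False),   # negative
    ("unknown@example.com", "Password1!", False),          # negative
]

@pytest.mark.parametrize("email,password,expected", LOGIN_CASES)
def test_login(user_db, email, password, expected):
    assert login(email, password, user_db) == expected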


Automation:

We recommend automating all test cases that you run over and over.

Sanity: Test cases that quickly validate that a specific functionality or fix still works after a change.

Smoke: Test cases that validate all critical functionality of a new build.

Regression: Automate all tests from the Test Plan.
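One common way to keep these three suites separately runnable is pytest markers (a sketch; the test names below are placeholders, and the markers would be registered in pytest.ini to avoid warnings):

import pytest

@pytest.mark.smoke          # critical functionality: must pass on every new build
def test_user_can_log_in():
    ...

@pytest.mark.sanity         # quick check that a changed feature still behaves
def test_password_reset_flow():
    ...

@pytest.mark.regression     # full Test Plan coverage
def test_reset_link_expiry():
    ...

# pytest -m smoke              -> smoke suite only
# pytest -m "smoke or sanity"  -> quick build verification
# pytest                       -> full run, including regression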


Execution plan: 

  • Run every night
  • Run on every release to the QA, Staging, and Prod environments (see the sketch below).
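One way the same suite can be pointed at the QA, Staging, or Prod environment is a pytest command-line option; the URLs below are placeholders:

# conftest.py
import pytest

ENV_URLS = {
    "qa": "https://qa.example.com",
    "staging": "https://staging.example.com",
    "prod": "https://www.example.com",
}

def pytest_addoption(parser):
    parser.addoption("--env", default="qa", choices=list(ENV_URLS),
                     help="Environment to run the suite against")

@pytest.fixture
def base_url(request):
    return ENV_URLS[request.config.getoption("--env")]

# Nightly and on-release runs then differ only by the flag, e.g.:
#   pytest -m smoke --env staging
#   pytest --env qa        (full regression, scheduled nightly by the CI server)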

Basic things that need to be covered in the automation tests (see the sketch after this list):

  • For every test-case execution, keep track of whether it passed or failed.
  • Capture the time taken for each action and response; this also helps to check performance.
  • For a failed test case, automatically create a new bug and assign it to the developer/lead.
  • Make sure all test data and screenshots are stored for the failed test case and attached to the newly created bug.
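The sketch below wires these points into a pytest hook; take_screenshot and create_bug are stand-ins (our assumption) for whatever screenshot helper and bug-tracker API (for example Jira REST) the project actually uses:

# conftest.py (a sketch)
import pytest

results = []   # pass/fail and duration for every executed test case

def take_screenshot(test_id):
    # Stand-in: call the UI driver's screenshot API here and return the file path.
    return f"{test_id.replace('::', '_')}.png"

def create_bug(title, description, attachments, assignee):
    # Stand-in: POST to the bug tracker's API (e.g. Jira REST) and assign to the developer/lead.
    print(f"BUG for {assignee}: {title}\n{description}\nattached: {attachments}")

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when != "call":                  # only the test body, not setup/teardown
        return
    results.append((item.nodeid, report.passed, report.duration))   # pass/fail + timing
    if report.failed:
        shot = take_screenshot(item.nodeid)
        create_bug(
            title=f"Automated test failed: {item.nodeid}",
            description=report.longreprtext,
            attachments=[shot],
            assignee="dev-lead",
        )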
