Thursday 12 April 2012

Agile Process in Software Development/Testing


Agile software development refers to a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams. 

The main features of Agile Development are:
Incremental, iterative, adaptive - Agile Development follows an iterative approach and builds the system gradually. Iterations are typically two weeks long, and each one includes requirements work, development, and testing, so a project has multiple checkpoints.
Regularly delivers business value - Work is broken into stories (comparable to use cases), and each story is defined with acceptance criteria.
Collaborative - Agile Development also allows team members to work on different modules and does not require narrowly specialised knowledge.
No backsliding - Agile Development builds in unit testing and continuous integration, often in a test-driven development style. It also leaves little scope for regressions, as regression testing is a crucial part of Agile Development.
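As a minimal sketch of the test-driven style mentioned above (the function and test names here are illustrative, not from any particular project): the test is written first, then just enough production code to make it pass.

```python
# Minimal test-driven development sketch (names are invented for
# illustration): the tests are written first, then the code to pass them.

def apply_discount(price, percent):
    """Return price reduced by percent; written to make the tests below pass."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_off():
    # This test existed (and failed) before apply_discount was implemented.
    assert apply_discount(200.0, 10) == 180.0

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        return
    raise AssertionError("expected ValueError for percent > 100")

# In a continuous integration pipeline these would run on every commit;
# here we simply run them directly.
test_ten_percent_off()
test_invalid_percent_rejected()
print("all tests green")
```

Because these tests run on every build, they double as the regression suite: a change that breaks old behaviour fails immediately rather than at release time.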

Agile in Software Testing:
Agile methods were developed in response to the problems that the traditional V-Model and waterfall methodologies had with defining requirements up front and then delivering a product that turned out not to be what the end user actually wanted and needed.
A software tester's role in a traditional software development methodology, i.e. Waterfall or the V-Model, can generally be summarised as:
1. Finding defects in development products, such as requirements and design documents
2. Proving that the software meets these requirements
3. Finding where the software under test breaks (whether through verification against requirements or validation that it is fit for purpose)
For most traditionally trained testers on a V-Model or waterfall based software project, the basic steps they perform are similar to the following:
  • They receive a requirements document which they proceed to review
  • They eventually get a requirements document that is considered baselined or signed-off
  • They analyse these requirements to create test conditions and test cases
  • They write their test procedures
  • Then they wait for a piece of software to miraculously appear in their test environment
  • They now start executing their tests
  • Oh, and now they begin re-executing some of these tests, as they start iterating through new builds that are released to fix bugs or may even include new functionality
  • Then they reach the "acceptable risk = enough testing" point (or the fixed, immovable deadline) and the software is released
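The analyse-requirements-then-execute part of the cycle above can be sketched in miniature. The username rule below is an invented requirement, used only to show how test conditions and test cases are derived from a baselined requirement:

```python
# Hypothetical requirement: "a username must be 3-12 alphanumeric characters".
# The test cases below are conditions a tester might derive from it.
import re

USERNAME_RULE = re.compile(r"^[A-Za-z0-9]{3,12}$")

def is_valid_username(name):
    """Check a username against the (invented) requirement above."""
    return bool(USERNAME_RULE.match(name))

# Test cases derived from the requirement: (input, expected result)
test_cases = [
    ("bob", True),         # minimum boundary: exactly 3 characters
    ("ab", False),         # below the minimum length
    ("a" * 12, True),      # maximum boundary: exactly 12 characters
    ("a" * 13, False),     # above the maximum length
    ("user name", False),  # contains a non-alphanumeric character
]

for value, expected in test_cases:
    assert is_valid_username(value) == expected, value
print("all test cases passed")
```

The point is that each test case traces back to a specific clause of the requirement, which is exactly the analysis work the tester does while waiting for the software to arrive.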
Now, while all of the above sounds logical and "easy" to do, the real world we live in makes it not quite so straightforward! Requirements are never complete, and there are always ambiguities to deal with. The worst case is software that meets its specification but doesn't meet the users' needs.
Wouldn't it be better to build smaller parts of the system, and have the business work with the developers and testers to confirm that what's being built is indeed what they want and need? So let's build the system in small increments, increasing the system's functionality in each release, and potentially deliver a working system at the end of each increment that actually meets the end users' needs.


