Sunday, 5 August 2012

EAI Blues | Developer Vs Tester

After a couple of hard-core technical entries, here is one that is just as true in its own nature, but not as heavy on the mind.

We all know and agree that Development and Testing teams are not always kind to each other. To date, the reasons remain unknown, but everyone agrees that yes, there is a cold war.

In EAI projects, middleware solutions span numerous applications, end-to-end scenarios, business rules, and multiple use cases, so the software testing effort is truly a bigger challenge than the development itself.

Many of the resulting config defects, mapping defects, design changes, and deployment failures can be attributed to the differences and the absence of synergy between these two teams.

Testers keep opening defects against documents, test scripts, deployment manuals, mappings, and result sets, and keep complaining that the developers have not done a thorough job. Developers keep saying that the testers are a bunch of idiots with little functional knowledge who do nothing but raise useless defects.
Amid all this chaos, key functionality and business logic are often missed, and defects surface in SIT, end-to-end, or full-data-set testing.

So now, what can we do to avoid these defects in the first place? The answer, according to experienced program managers, is to treat the dev and test teams like two little brothers fighting over the same candy: give both equal importance. Always pay as much attention to the requirements of the testers as to the deliverables of the developers.

Furthermore, below are some best practices we can follow to ensure a 'Fast and Flawless' delivery (a motto borrowed from my current company).

  • Involve resources from the testing team right from the beginning. It is better if they understand the Interface design as well and, as everyone always says, 'see the big picture'.
  • They should have an approved test catalog by the time the Interface design is complete. This will help a lot when creating the test scripts and expected results.
  • Developers and Testers should cross-review each other's work. Developers should get their deployment manuals and unit test cases reviewed by the Testing team, while the Testers should get their test scripts and expected results verified by a Business Analyst and NOT the Development team.
  • Developers tend to turn a blind eye to the result set. If the Interface has run fine, the data has been published, and there are no errors in the logs, they WILL (I can tell you, as I have been in development for 5 years) mark the test as passed without verifying that the data entered is absolutely correct. So ask the Developers to create expected results before unit testing, NOT after. (I have seen developers copy Interface execution results into the expected results!)
  • Keep the documents UP TO DATE. People NEVER do this, whether on a big project or a small one, at a big company or a small one, with 10 years of experience or 1. Developers and Testers both tend to finish the work at hand before updating the relevant documents, which often results in the documents never being updated. The Program Manager should always, always make sure that the info in the repository is up to date.
  • Big or small, document or code, please, please track all defects, and analyze them at regular intervals so the findings can be incorporated into your best practices and guidelines.
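To make the "create expected results before unit testing" advice above concrete, here is a minimal sketch of an automated result-set comparison. All of the names here (the `id` key, the field layout, the helper functions) are illustrative assumptions, not part of any real EAI toolkit; the point is simply that the expected results are authored separately and diffed against what the interface actually published, rather than being copied from the execution output.

```python
import json

def load_records(path):
    """Load a result set (a JSON array of records) indexed by record id.
    Assumes each record carries an "id" field; that key is an assumption."""
    with open(path) as f:
        return {rec["id"]: rec for rec in json.load(f)}

def diff_result_sets(expected, actual):
    """Compare two result sets field by field and return a list of
    human-readable mismatches (candidate defects)."""
    defects = []
    for rec_id, exp in expected.items():
        act = actual.get(rec_id)
        if act is None:
            defects.append(f"record {rec_id}: missing from published data")
            continue
        for field, exp_val in exp.items():
            if act.get(field) != exp_val:
                defects.append(
                    f"record {rec_id}, field {field}: "
                    f"expected {exp_val!r}, got {act.get(field)!r}"
                )
    return defects

# Typical use: expected results written before the run, actual data
# pulled from wherever the interface published it.
# defects = diff_result_sets(load_records("expected.json"),
#                            load_records("published.json"))
```

An empty defect list is what "test passed" should mean here; a clean log file alone proves only that the interface ran, not that the data it published is correct.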
That covers it at a crude level; the discussions are endless, the viewpoints as numerous as the stars in the universe, and the standards as obscure as alien languages. There are no set rules for success, only a few guidelines and best practices that might help you reduce risk and increase quality.

Thank you for your time and please do post your valuable comments to make the post and the blog more helpful.
