Milind Yadav
2 min read · May 3, 2019

--

10 Things to Avoid When Scripting for a Performance Testing Project

1. Not using the repository features of the tools. Although we can explicitly use Git, SVN, etc. to keep our code in branches matching the production or testing environments, many of us ignore this during our projects.

Tools (LoadRunner, SOASTA, etc.) now provide these features out of the box. Skipping version control leads to last-minute changes to already-working scripts, which sometimes breaks them; transaction, page, and other naming conventions also drift more and more, which is not good and shows up in the reporting.

2. Not using enough assertions or validations, for example checking only the HTTP response code instead of validating the response body text.
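As a sketch of what a body-level validation adds over a status-code check (the function name, the success marker "Order Confirmation", and the error markers are all illustrative, not taken from any particular tool):

```python
SUCCESS_MARKER = "Order Confirmation"  # placeholder text unique to a good page
ERROR_MARKERS = ("error occurred", "exception", "service unavailable")

def validate_response(status_code: int, body: str) -> bool:
    """Return True only if the response really looks successful.

    A server can return HTTP 200 while the page actually shows an
    application error, so we also check the body text itself.
    """
    if status_code != 200:
        return False
    lowered = body.lower()
    if any(marker in lowered for marker in ERROR_MARKERS):
        return False
    return SUCCESS_MARKER in body

# A 200 with an error page in the body is still a failure:
assert not validate_response(200, "An unexpected error occurred")
assert validate_response(200, "<h1>Order Confirmation</h1>")
```

The same idea maps directly onto JMeter's Response Assertion or a text check in LoadRunner.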

3. Not making sure that the runtime settings are agreed upon by all application teams involved in the tests.

4. Not starting monitors before the test to make sure load generators are also monitored and are healthy throughout the test.
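A minimal sketch of a local health probe for a generator, assuming a Unix host and thresholds invented for illustration; a real setup would use the tool's own monitors or an external agent, started before the test and left running throughout:

```python
import os
import shutil

def generator_healthy(max_load: float = 4.0, min_free_gb: float = 5.0) -> bool:
    """Rough local health check for a load generator (Unix-only).

    Checks the 1-minute load average and free disk space on /.
    The thresholds are illustrative; real monitoring would also
    cover memory and network from an external monitor.
    """
    load_1min, _, _ = os.getloadavg()
    free_gb = shutil.disk_usage("/").free / 1e9
    return load_1min <= max_load and free_gb >= min_free_gb
```

An overloaded generator skews response times in a way that looks like an application problem, which is exactly why it needs to be watched from before the first virtual user starts.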

5. Not checking the delay (think-time and pacing) settings whenever a new script is added.
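A small sketch of why those delay settings matter, assuming randomized think time (the base value and variation here are made up; LoadRunner and JMeter both offer equivalent runtime settings):

```python
import random

def think_time(base_seconds: float = 3.0, variation: float = 0.5) -> float:
    """Pick a randomized think time, e.g. 3 s +/- 50%.

    Randomizing delays avoids unrealistically synchronized virtual
    users; a newly added script left at the tool's default delay can
    silently change the overall request arrival rate of the scenario.
    """
    low = base_seconds * (1.0 - variation)
    high = base_seconds * (1.0 + variation)
    return random.uniform(low, high)

# In a script loop you would sleep for think_time() between transactions.
```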

6. Not involving functional testers to randomly browse the site during load tests.

7. Not developing a JMeter script in parallel, even if you have commercial tools like LoadRunner or SOASTA CloudTest at your disposal.

Having a JMX file gives us the ability to change the testing framework easily, for example from LoadRunner to BlazeMeter, RedLine13, or any other tool, without having to create the script from scratch.

Validations can still be done in the commercial tools themselves (such as LoadRunner, for as long as you hold the licence).
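One reason a JMX file ports so easily is that it is plain XML: JMeter stores the target host of each HTTP sampler in a `<stringProp name="HTTPSampler.domain">` element. As a sketch (the fragment below is a heavily stripped-down JMX, real files are much larger):

```python
import xml.etree.ElementTree as ET

def retarget_jmx(jmx_text: str, new_domain: str) -> str:
    """Point every HTTP sampler in a JMX test plan at a new host."""
    root = ET.fromstring(jmx_text)
    for prop in root.iter("stringProp"):
        if prop.get("name") == "HTTPSampler.domain":
            prop.text = new_domain
    return ET.tostring(root, encoding="unicode")

sample = (
    '<jmeterTestPlan><hashTree>'
    '<stringProp name="HTTPSampler.domain">test.example.com</stringProp>'
    '</hashTree></jmeterTestPlan>'
)
retargeted = retarget_jmx(sample, "prod.example.com")
```

Because any JMeter-compatible platform reads the same file, scripted edits like this are all it takes to move a test between environments or vendors.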

8. Not cleaning up synthetic data after the test. Always have a strategy for cleaning unused reports and results using the testing tool's CLI, for example via Jenkins jobs that run every few days or months: all results older than 500 days should be deleted by the Jenkins job, keeping the system healthy.
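A minimal sketch of such a retention job, assuming results live as files in a directory (the layout and the 500-day window from above are illustrative; in practice a Jenkins job would run this, or the tool's own CLI, on a schedule):

```python
import time
from pathlib import Path

MAX_AGE_DAYS = 500  # retention window; adjust to your project

def purge_old_results(results_dir: str, max_age_days: int = MAX_AGE_DAYS) -> list:
    """Delete result files older than max_age_days; return deleted names."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for path in Path(results_dir).glob("**/*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return removed
```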

9. Not cleaning up unused scripts.

10. Not removing stale data from data files, and dismissing the same recurring failures as “known errors”.
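A sketch of scrubbing a parameter data file, assuming a CSV with a `username` column (the column name and the set of stale entries are illustrative). Stale accounts, orders, or tokens that the application no longer accepts are exactly what produces the recurring failures we then learn to ignore:

```python
import csv
import io

def drop_stale_rows(csv_text: str, stale_usernames: set) -> str:
    """Return the CSV with rows for known-stale usernames removed."""
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["username"] not in stale_usernames:
            writer.writerow(row)
    return out.getvalue()
```

Running a scrub like this before each test cycle keeps the data files, and the error logs, honest.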

I feel that by not following the above we do a lot of rework, which takes away time that could be spent on analysing the test.

It also affects the results of the tests, so they don't give an accurate picture.

The project schedule gets derailed as well, which results in a loss of confidence.
