WebDriver challenges working in an agile project – Part 3

Challenge 7 – Tests ran away?


If your automated tests are only run manually and not through Continuous Integration, there is a significant risk that they are not being run regularly and may in fact be failing.

It is crucial in an agile project to get feedback early and often, so ensure that you have a continuous integration process in place. Your automation pack should run against the development environment at least nightly. If tests are broken, don’t allow the build to be deployed into your test environment until every single test passes. Also remember that quick feedback means the change is still fresh in the developer’s mind, allowing for a quicker fix!
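That gate can be sketched as a small script your CI server runs between the test and deploy steps. This is a minimal illustration, assuming a runner and deploy job invoked as shell commands; `gate_deployment` and both commands are made-up stand-ins for your real ones:

```python
import subprocess
import sys

def gate_deployment(test_cmd, deploy_cmd):
    """Run the automation pack; promote the build only if every test passes."""
    result = subprocess.run(test_cmd)  # a non-zero exit code means failures
    if result.returncode != 0:
        print("Tests failed - build NOT promoted to the test environment",
              file=sys.stderr)
        return False
    # All green: hand over to the (illustrative) deployment command.
    subprocess.run(deploy_cmd, check=True)
    return True
```

In a nightly job this might be called as `gate_deployment(["pytest", "tests/"], ["./deploy.sh", "test"])`, with both commands replaced by whatever your project actually uses.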

Challenge 8 – Don’t lose your Sanity


At some stage your automation pack will grow to the point where running every test against the build takes too long to provide quick feedback. When that happens, create a Sanity/Smoke test set – the key tests you need quick feedback from on every build. There is nothing like the confidence of a set of tests displaying green ticks!
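A low-tech way to carve out that set is to tag tests and select by tag. A sketch with purely illustrative test names follows; in a real suite you would use the tagging your framework already provides (pytest markers, TestNG groups, Cucumber tags and so on):

```python
# Illustrative registry mapping each test name to its set of tags.
ALL_TESTS = {
    "test_login": {"smoke"},
    "test_add_to_basket": {"smoke"},
    "test_edit_profile": set(),
    "test_search_filters": set(),
}

def select(tests, tag=None):
    """Return all test names, or only those carrying the given tag."""
    if tag is None:
        return sorted(tests)
    return sorted(name for name, tags in tests.items() if tag in tags)

# Quick feedback: run the smoke set on every build, the full set nightly.
smoke_set = select(ALL_TESTS, "smoke")
full_set = select(ALL_TESTS)
```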

Challenge 9 – Test Data


Test data is often key to whether a number of WebDriver tests can run at all.

Plan carefully at the start of the sprint what test data you require for your automated tests and where that data will come from. Will your automated tests create their own test data, or will it come from an Excel spreadsheet or a database?

A good way to ensure the test data you need is present in the build is to create seed scripts that run as part of every new build deployment. Plan this with the development team at the start of the sprint.
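A seed script can be as small as the sketch below, which uses Python’s built-in sqlite3 as a stand-in for your real database; the table and account names are purely illustrative:

```python
import sqlite3

# Known accounts the automated tests rely on (illustrative values).
SEED_USERS = [
    ("autotest_admin", "admin"),
    ("autotest_customer", "customer"),
]

def seed(conn):
    """Insert the baseline test data; safe to re-run on every deployment."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (username TEXT PRIMARY KEY, role TEXT)"
    )
    # INSERT OR IGNORE keeps the script idempotent across redeployments.
    conn.executemany(
        "INSERT OR IGNORE INTO users (username, role) VALUES (?, ?)", SEED_USERS
    )
    conn.commit()
```

Making the script idempotent matters because it will run on every deployment; here that falls out of the primary key plus `INSERT OR IGNORE`.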


Challenge 10 – WebDriving a manual or an automatic?


As we know, as flexible as WebDriver is, not everything in a web application can be automated. If you are banging your head against a brick wall trying to automate a tricky area of the application, be pragmatic: mark those tests as manual tests to be performed and communicate this back to the project. If you are using a behaviour-driven development tool like Cucumber or SpecFlow, you can tag your tests as automated or manual, so all your acceptance tests stay together while still being split into an automated pack and a manual pack respectively.
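In Cucumber-style Gherkin the split might look like the fragment below; the feature, scenarios, and tag names are made up for illustration, and the automated pack would then be selected with a tag filter at run time (e.g. `cucumber --tags @automated`):

```gherkin
# Both tests live together in one feature file; only the @automated
# scenario runs in the CI pack, the @manual one stays a checklist item.
Feature: Checkout

  @automated
  Scenario: Customer pays by credit card
    Given a customer with an item in their basket
    When they pay by credit card
    Then an order confirmation is shown

  @manual
  Scenario: Customer pays via the bank's 3-D Secure pop-up
    Given a customer paying with an enrolled card
    When the bank's pop-up appears
    Then the payment completes after authentication
```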