Tuesday, April 1, 2008

Automated Testing Experience

Keith Braithwaite presented his recent experience of automated testing, particularly using Test-Driven Development, at the SPA 2008 conference.

The audience, a mixture of youth and experience, already had good experience of automated testing tools such as xUnit, Selenium, CruiseControl, jMock, DbUnit and Build-o-Matic (funny how all of these tools are open source).

Keith outlined some of the problems he saw with manual testing, such as:

  • Takes a long time...

  • Not consistently applied

  • Not particularly effective

  • Requires scheduling of people

Keith advocated automated testing as a better way, with the clear view that 'checked examples' should be performed automatically. Before describing his approach he went back to the basics of software engineering: how system requirements are specified, namely in natural language, which is often vague and ambiguous. He argued that unless the customer talked the same language as the engineers, there would always be problems in building systems.

One of the major problems is specifying the boundaries or constraints of the system to be developed; writing these rules precisely is often very difficult, with multiple exceptions to the norm. However, the customer could often give examples of 'what the system should do'. These weren't the rules themselves, but they were sufficient for reference results to be calculated by hand. There was no guarantee that the reference results were correct, but ensuring there were multiple contributors to the production of this data reduced the probability of erroneous data.
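To make this concrete (the domain here is a hypothetical illustration of mine, not one from Keith's project), a checked example is simply a set of inputs paired with an expected output calculated by hand:

    order total    discount
    100.00         0.00
    1,000.00       50.00
    5,000.00       500.00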

With the reference data available, a test framework was developed using the FIT library, which was then used to compare the system's results against the reference results automatically. By the end several hundred scenarios (or examples) were in use, with the test data available before the features were developed, thereby facilitating incremental development. Keith stated that this approach of automatically comparing the system against the reference data detected problems early and also revealed issues with the design: if the system couldn't be instrumented for testing, the design was questionable.
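FIT works by binding tables of examples, like the hypothetical discount table above, to small fixture classes. As a minimal sketch only (DiscountCalculator is a hypothetical stand-in for the system under test, which the talk didn't name), a FIT column fixture might look like this:

    import fit.ColumnFixture;

    public class DiscountFixture extends ColumnFixture {
        // Input column: FIT sets this field from the 'orderTotal'
        // cell of each row in the example table.
        public double orderTotal;

        // Output column: FIT calls this method and compares its
        // return value against the 'discount()' cell of the row,
        // marking the cell as a pass or a fail.
        public double discount() {
            return new DiscountCalculator().discountFor(orderTotal);
        }
    }

FIT then renders the table back with each checked cell marked green or red, so the customer can read the results directly against their own examples.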

Keith stated that the delivered system was defect-free, with no failures reported.

Is Defect-Free possible?

  • Developers make mistakes and errors, so...

  • The system contains defects, so...

  • The user experiences failures.

So, if the user doesn't experience failures, does this mean the system is without defects? Probably not, but it is hard to verify. I would agree: it is very difficult to claim that any development is defect-free, only that the defects haven't revealed themselves during testing and the normal (and presumably intended) operation of the system. It also depends on the impact of a failure, as failures in some systems clearly have more severe consequences than in others. By adopting Keith's approach of developing reference data with the customer, the potential for misunderstanding the system before delivery is reduced.

Keith summarised by saying that the tests weren't really tests at all, merely gauges of how well the system was working for its users. The reference data were 'checked examples', which proved a very powerful way of ensuring that the customer and developers worked together.
