Cem Kaner: Testing Checklists = Good / Testing Scripts = Bad?

I highly recommend this presentation by Cem Kaner (available here as a PDF download of slides). It is provocative, funny, and insightful. In it, Cem Kaner makes a strong case for using checklists (and mercilessly derides many aspects of using completely scripted tests). Cem Kaner, as I suspect most people reading this already know, is one of the leading lights of software testing education. He is a professor of computer sciences at Florida Institute of Technology and has contributed enormously to software testing education by writing Testing Computer Software, "the best selling software testing book of all time," founding the Center for Software Testing Education & Research, and making an excellent free course on Black Box Software Testing available online. (Trivia: Cem Kaner is one of only two people I know of working in software testing today who have a law degree; the other is me. After graduating from the University of Virginia Law School, I worked as a lawyer in London and Hong Kong for a large global firm before coming to my senses and realizing that my interests, happiness, and competence lay elsewhere.)

Here are a couple of my favorite slides from the presentation.

My own belief is that the presentation is very good and makes its points well. If I have a minor quibble, it is that, in doing such a good job of laying out the case for checklists and against scripted testing, the presentation (almost by definition and design) does not go into as much detail as I would personally like about a topic I think is extremely important and not written about enough: namely, how practitioners should use an approach that blends the advantages of scripted tests (which can generate some of the huge efficiency benefits of combinatorial testing methods, for example) with those of checklist-based Exploratory Testing (whose advantages the presentation points out so well). A "both / and" option is not only possible; it is desirable.
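To make the combinatorial-efficiency point concrete: a hypothetical system with four parameters of three values each has 3^4 = 81 possible configurations, yet covering every *pair* of parameter values usually takes only around nine to a dozen tests. Here is a minimal greedy all-pairs sketch in Python; the parameter names and values are invented for illustration and this is just one simple way to build such a suite, not the algorithm any particular tool uses.

```python
from itertools import combinations, product

# Invented example parameters: 3 values each for 4 parameters,
# so 3 ** 4 = 81 exhaustive combinations.
params = {
    "browser":  ["Chrome", "Firefox", "IE"],
    "os":       ["Windows", "macOS", "Linux"],
    "language": ["en", "fr", "de"],
    "network":  ["wifi", "lan", "3g"],
}

def pairwise_suite(params):
    """Greedy all-pairs selection: repeatedly pick the candidate test
    that covers the most parameter-value pairs not yet covered."""
    names = list(params)
    # Every pair of values from every pair of parameters must appear
    # together in at least one selected test.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add(((i, va), (j, vb)))
    candidates = list(product(*params.values()))
    suite = []
    while uncovered:
        def gain(test):
            return sum(1 for pair in combinations(enumerate(test), 2)
                       if pair in uncovered)
        best = max(candidates, key=gain)
        if gain(best) == 0:
            break  # defensive; cannot happen while pairs remain uncovered
        suite.append(best)
        for pair in combinations(enumerate(best), 2):
            uncovered.discard(pair)
    return suite

suite = pairwise_suite(params)
print(f"{len(suite)} pairwise tests instead of {3 ** 4} exhaustive ones")
```

Greedy selection is not optimal (the theoretical minimum here is nine tests, from an orthogonal array), but it reliably lands far below the 81-test exhaustive suite, which is the efficiency argument for blending combinatorial design into scripted tests.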

– – –

Credit for bringing this presentation to my attention goes to Michael Bolton (the testing expert, of course, not the singer of "Office Space" fame), who posted a link to it. Thanks again, Michael. Your enthusiastic recommendation to pick up boxed sets of the BBC show Connections was also excellent; the presenter of Connections is like a slightly tipsy genius with ADHD who possesses an incredible grasp of history, an encyclopedic knowledge of quirky scientific developments, and a gift for storytelling. I like how your mind works.


4 thoughts on "Cem Kaner: Testing Checklists = Good / Testing Scripts = Bad?"

  1. I have been testing with checklists at collaborative test events for 7 years now and find them much more useful, but this doesn't discuss the use of models and "mindmaps," which are also very useful in some situations! I'd love to see a comparison, with valid controls, of the same tester using all of these methods.

    I really wish we had more hard science around these practices, but right now it's just whichever process is best sold to the company, based on little data, that determines what process is used for testing. Standardizing on an unproven hypothesis is the status quo in many companies, and I'd love to see more consideration given to checklists and models.

  2. Pingback: Tweets that mention Cem Kaner: Testing Checklists = Good / Testing Scripts = Bad? « Hexawise Blog -- Topsy.com

  3. Pingback: uberVU - social comments

  4. Interesting perspective and a good read. If you would indulge me, I’d like to propose a few corrections.

    Scripted:
    Design the test early, review it with the development team and project team (users, BAs, etc.), and modify it as needed.

    Execute the test when the code has been delivered.

    Verify that the results match the expected results documented in the test script and check them against the requirements. Additionally, the tester will be looking for anomalies during execution and monitoring screen changes, logs, and abnormal behaviors.

    When financially feasible, the testing will occur in parallel between at least two testers, and the results and execution will be reviewed by a senior member of the team.

    The test scripts are designed by a member of the team who has been engaged since the planning/analysis phase of the project.

    The tester (often in parallel with the test designer) will execute the tests according to the test scripts, observing the results and system behavior (see above).

    The tester will also make note of functionality not listed in the requirements or test scripts. These findings will be presented to the project team to determine if additional test cases are required.

    The tester/designer will maintain the documented test results and scripts and execute them as needed for each code delivery, updating them accordingly as new test cases are identified.

    I’m not adept in the ET methodology so I’ll defer to what you have listed. One question I do have is related to this comment:
    “Execute the test at time of design or reuse it later”.

    Is the implication that ‘time of design’ refers to designing the test? I guess so, because I’m not sure how you could execute the tests during the project design phase. Is the test design being created after the code is delivered?

    I'm still trying to get my head around this whole ET thing. I keep finding references to it across several blog sites.

    Thanks and happy testing!
