A Fun Presentation on a Powerful Software Test Design Approach

Combinatorial Software Test Design – Beyond Pairwise Testing


I put this together to explain combinatorial software test design methods in an accessible manner.  I hope you enjoy it and that, if you do, you’ll consider trying this approach to create test cases for your next testing project (whether you choose our Hexawise test case generator or some other test design tool).


Where I’m Coming From

As those of you who read my posts and articles and/or have attended my testing conference presentations know, I am a passionate proponent of software test design approaches that maximize variation from test case to test case and minimize repetition.  It’s not much of an exaggeration to say I hardly write or talk publicly about any other software testing-related topic.  My own consistent experiences and formal studies indicate that pairwise, orthogonal array-based, and combinatorial test design approaches often lead to a doubling of tester productivity (as measured in defects found per tester hour) compared to the far more prevalent industry practice of selecting and documenting test cases by hand.  How is it possible that this approach generates such a dramatic increase in productivity? What is so different between manually selected test cases and pairwise or combinatorial test cases?  Why isn’t this test design technique far more broadly adopted than it is?

A Common Challenge to Understanding: Complicated, Wonky Explanations

My suspicion is that a significant reason combinatorial software testing methods are not much more widely adopted is that many of the articles describing them are simply too complex and/or too abstract for many testers to understand and apply.  Such articles say things like:

A. Mathematical Model

A pairwise test suite is a t-way interaction test suite where t = 2. A t-way interaction test suite is a mathematical structure, called a covering array.

Definition 1 A covering array, CA(N; t, k, |v|), is an N × k array from a set, v, of values (symbols) such that every N × t subarray contains all tuples of size t (t-tuples) from the |v| values at least once [8].

The strength of a covering array is t, which defines, for example, 2-way (pairwise) or 3-way interaction test suite. The k columns of this array are called factors, where each factor has |v| values. In general, most software systems do not have the same number of values for each factor. A more general structure can be defined that allows variability of |v|.

Definition 2 A mixed level covering array, MCA(N; t, k, (|v1|, |v2|, …, |vk|)), is an N × k array on |v| values, where |v| = |v1| + |v2| + … + |vk|, with the following properties: (1) Each column i (1 ≤ i ≤ k) contains only elements from a set Si of size |vi|. (2) The rows of each N × t subarray cover all t-tuples of values from the t columns at least once.

– “Construct Pairwise Test Suites Based on the Bak-Sneppen Model of Biological Evolution” World Academy of Science, Engineering and Technology 59 2009 – Jianjun Yuan, Changjun Jiang

If you’re a typical software tester, even one motivated to try new methods to improve your skills, you could be forgiven for not mustering up the enthusiasm to read such articles.  The relevance, the power, and the applicability of combinatorial testing – not to mention the fact that this test design method can often double your software testing efficiency and increase the thoroughness of your software testing – all tend to get lost in the abstract, academic, wonky explanations that are typically used to describe it.  Unfortunately for pragmatic, action-oriented software testing practitioners, many of the readily accessible articles on pairwise and combinatorial testing tend to be on the wonky end of the spectrum; an exception to that general rule is the set of good, practitioner-oriented introductory articles available at combinatorialtesting.com.
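For what it’s worth, the core idea buried in all that notation is simple enough to express in a few lines of code.  Here is a minimal Python sketch (my own illustration, not taken from the paper quoted above) of what Definition 1 actually asks of a test suite:

```python
from itertools import combinations, product

def is_covering_array(rows, values, t=2):
    """Plain-English reading of Definition 1: for every choice of t columns,
    every possible t-tuple of values appears together in at least one row.

    rows:   list of equal-length tuples (the N x k array)
    values: list of the allowed values per column (so mixed levels work too)
    """
    k = len(values)
    for cols in combinations(range(k), t):
        needed = set(product(*(values[c] for c in cols)))
        seen = {tuple(row[c] for c in cols) for row in rows}
        if needed - seen:
            return False
    return True

# A classic pairwise (t=2) covering array: three two-valued factors
# covered in just 4 rows instead of the 2 x 2 x 2 = 8 exhaustive rows.
rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(is_covering_array(rows, [[0, 1], [0, 1], [0, 1]]))  # True
```

That is the entire concept: pick a small set of test cases such that, for every pair (or triple, etc.) of parameters, every combination of their values shows up somewhere.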

A Different Approach to Explaining Combinatorial Testing and Pairwise Testing

In the photograph-rich, numbers-light presentation embedded above, I’ve tried to explain what combinatorial testing is all about without the wonkiness.  The benefits of structured variation and combinatorial test design are, in my view, wildly under-appreciated.  The approach has the following extremely important benefits:

  • Less repetition from test case to test case
    • In the context of discussing testing’s “pesticide paradox,” James Bach, I believe, used the analogy that following in someone’s footsteps is a very good way to survive traversing a minefield but a generally lousy way to find software defects efficiently.
    • Maximizing variation from test case to test case, as a general rule, is an absolutely spectacular way to find defects quickly.
    • There are thousands, if not trillions, of relevant combinations to select from when identifying test cases to execute; computer algorithms can solve the problem of “how can maximum variation be achieved?” far better than human brains can (see the sketch after this list).
  • More coverage of combinations of test inputs
    • Most of the time, since awareness of pairwise and combinatorial testing methods remains low in the software testing community, combining all possible pairs of values in at least one test case is not even a conscious goal of testers.
    • Even if this were a goal of their test design strategy, testers would face a tremendous challenge in trying to achieve it: with hundreds, thousands, or tens of thousands of targeted combinations to cover, losing track of a significant number of them and/or forgetting to include them in software tests is virtually a foregone conclusion unless a test case generator is used.
    • More thorough coverage leads to more defects being found.
  • Efficiency (Testers can “turn the coverage dial” to achieve maximum efficiency with a minimal number of tests)
    • The efficiency and effectiveness benefits of pairwise testing have been demonstrated in testing projects in every major industry.
    • I wanted to prominently include the message that testers using test case generators have the option to dramatically increase the thoroughness of the tests they generate, a point that often gets ignored in case studies and introductions to pairwise testing.
  • Thoroughness (Testers can also “turn the coverage dial” to achieve maximum thoroughness if that is their goal)
    • Too often, testers view pairwise as a technique that focuses on a very small number of curiously strong tests; that is only part of the story.
    • This can lead to the false impression that combinatorial testing methods are inappropriate where high levels of testing thoroughness are required.
    • You can create very different sets of tests that are as thorough as possible (given your understanding of what you are testing) whether you have one hour or one month to execute them.
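Here is the sketch promised above: a naive greedy algorithm that keeps picking whichever test case covers the most not-yet-covered pairs until none remain.  To be clear, this is only a toy illustration of the idea, not how Hexawise or any other real test case generator is implemented (the brute-force scan below only scales to tiny inputs):

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedily select tests until every pair of parameter values is covered.

    parameters: dict mapping parameter name -> list of values.
    Returns a list of test cases (dicts).
    """
    names = list(parameters)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in parameters[a]
                 for vb in parameters[b]}

    def pairs_in(test):
        return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

    suite = []
    while uncovered:
        # Brute-force scan: keep the candidate covering the most new pairs.
        best = max((dict(zip(names, values))
                    for values in product(*parameters.values())),
                   key=lambda test: len(pairs_in(test) & uncovered))
        uncovered -= pairs_in(best)
        suite.append(best)
    return suite

params = {"Browser": ["Chrome", "Firefox", "IE"],
          "OS": ["Windows", "Mac", "Linux"],
          "User type": ["Admin", "Member", "Guest"]}
for test in pairwise_suite(params):
    print(test)
```

For this toy example, the greedy loop lands on roughly nine or ten tests instead of the 27 exhaustive combinations, and the gap widens dramatically as parameters and values are added.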

Other Recommended Sources of Information on Pairwise and Combinatorial Testing:

Questions or Comments?

If you have questions or comments, please leave a note below.  I’d love to hear about people’s experiences using these test design approaches.  Thank you.

13 Great Questions To Ask Software Testing Tool Vendors

There are good reasons James Bach is so well known among the testing community and constantly invited to give keynote presentations around the globe at software testing conferences. He’s passionate about testing and educating testers; he’s a gifted, energetic, and entertaining speaker with a great sense of humor; and he takes joy in rattling his saber and attacking well-established institutions and schools of thought that he disagrees with. He doesn’t take kindly to people who make inflated claims of benefits that would materialize “if only you’d perform testing in XYZ way or with ABC tool” given that (a) he can always seem to find exceptions to such claims, (b) he doesn’t shy away from confrontation, and (c) he (rightly, in my view) thinks that such benefits statements tend to discount the importance of critical thinking skills being used by testers and other important context-specific considerations.

Leave it to James to create a list of 13 questions that would be great to ask the next software testing tool vendor who shows up to pitch his problem-solving product. In his blog post titled “The Essence of Heuristics,” he posed this exact set of questions in a slightly different context, but as a software testing tool vendor myself, I found they really hit home. They are:

1. Do they teach you how to tell if it’s working?
2. Do they teach you how to tell if it’s going wrong?
3. Do they teach you heuristics for stopping?
4. Do they teach you heuristics for knowing when to apply it?
5. Do they compare it to alternative heuristics?
6. Do they show you why it works?
7. Do they help you understand when it probably works best?
8. Do they help you know how to re-design it, if needed?
9. Do they let you own it?
10. Do they ask you to practice it?
11. Do they tell stories about how it has failed?
12. Do they listen to you when you question or challenge it?
13. Do they praise you for questioning and challenging it?

[Side note: Apparently I wasn’t the only one who thought of Hexawise and pairwise / combinatorial test design approaches when they saw these 13 questions. I was amused that, after I drafted this post, I saw Jared Quinert’s / @xflibble’s tweet.]

Where do I come down on each of James’ 13 questions with respect to the people I talk to about our test design tool, Hexawise, and the types and size of benefits it typically delivers? Quite simply, “Yes” to all 13. I enjoy talking about exactly the kinds of questions that James raised in his list. In fact, when I sought out James to ask him questions at a conference in Boston earlier this year, it was because I wanted his perspective on many of the points above, particularly #11 (hearing stories about how James has seen pairwise and combinatorial approaches to test design fail) and #7 (hearing his views on where it works best and where it would be difficult to apply). I’ll save my specific answers for another post, but I am serious about wanting to share my thoughts on them; time constraints are holding me back today. I did give a speech at the ASQ World Conference on Quality Improvement in St. Louis last week, though, that addressed many, but not all, of James’ questions.

I’m not your typical software tool vendor. Basically, my natural instincts are all wrong for sales. I agree with the premise that “a fool with a tool is still a fool”; when talking to target clients and/or potential partners, I’m inclined to point out deficiencies, limitations, and various things that could go wrong; I’m more of an introvert than an extrovert, etc. Not exactly the typical characteristics of a successful salesman… Having said that, I believe that we’ve built a very good tool that helps enable dramatic efficiency and thoroughness benefits in many testing situations, but our tool, along with the pairwise and combinatorial test design approaches that Hexawise enables, has its limitations. It is primarily by talking to software testers about their positive and negative experiences that our company is able to improve our tool, enhance our training, and provide honest, pragmatic guidance to users about where and how to use our tool (and where and how not to).

Tool vendors who defend their tools (and/or the approaches by which their tools help users solve problems) as magical, silver-bullet solutions are being both foolish and dishonest. Tool vendors who choose not to engage in serious, honest, and open discussions with users about the challenges users face when applying their tools in different situations are being short-sighted. From my own experience, I can say that talking about the 13 topics raised by James has been invaluable.

25 Great Quotes for Software Testers

All the quotes below are from the inside cover of Statistics for Experimenters, written by George Box, Stuart Hunter, and William G. Hunter (my late father).  The Design of Experiments methods expressed in the book (namely, the science of finding out as much as possible from as few experiments as possible) were the inspiration behind our software test case generating tool.  In paging through the book again today, I found it striking (but not surprising) how many of these quotes are directly relevant to efficient and effective software testing (and to efficient and effective test case design strategies in particular):

  • “Discovering the unexpected is more important than confirming the known.”
  • “All models are wrong; some models are useful.”
  • “Don’t fall in love with a model.”
  • How, with a minimum of effort, can you discover what does what to what?  Which factors do what to which responses?
  • “Anyone who has never made a mistake has never tried anything new.” – Albert Einstein
  • “Seek computer programs that allow you to do the thinking.”
  • “A computer should make both calculations and graphs.  Both sorts of output should be studied; each will contribute to understanding.”  – F. J. Anscombe
  • “The best time to plan an experiment is after you’ve done it.” – R. A. Fisher
  • “Sometimes the only thing you can do with a poorly designed experiment is to try to find out what it died of.”  – R. A. Fisher
  • The experimenter who believes that only one factor at a time should be varied, is amply provided for by using a factorial experiment.
  • Only in exceptional circumstances do you need or should you attempt to answer all the questions with one experiment.
  • “The business of life is to endeavor to find out what you don’t know from what you do; that’s what I called ‘guessing what was on the other side of the hill.'”  – Duke of Wellington
  • “To find out what happens when you change something, it is necessary to change it.”
  • “An engineer who does not know experimental design is not an engineer.”  – Comment made to one of the authors by an executive of the Toyota Motor Company
  • “Among those factors to be considered there will usually be the vital few and the trivial many.”  – J. M. Juran
  • “The most exciting phrase to hear in science, the one that heralds discoveries, is not ‘Eureka!’ but ‘Now that’s funny…'” – Isaac Asimov
  • “Not everything that can be counted counts and not everything that counts can be counted.” – Albert Einstein
  • “You can see a lot by just looking.”  – Yogi Berra
  • “Few things are less common than common sense.”
  • “Criteria must be reconsidered at every stage of an investigation.”
  • “With sequential assembly, designs can be built up so that the complexity of the design matches that of the problem.”
  • “A factorial design makes every observation do double (multiple) duty.”  –  Jack Youden

Where the quotes are not attributed, I’m assuming the quote is from one of the authors.  The best known of the unattributed quotes above, “All models are wrong; some models are useful,” is widely attributed to George Box in particular, which is accurate.  I suspect most of the others are from George as well (as opposed to from Stu or my dad), although I forgot to confirm that suspicion with him when I saw him over Christmas break.  George is 90 now, still off-the-charts smart and funny, and is probably the best storyteller I’ve met in my life.  If he were younger and on Twitter, he’d be one of those guys who churned out highly retweetable chestnuts again and again.

Related thoughts

As you know if you’ve read my blog before, I am a strong proponent of taking the Design of Experiments principles laid out in this book and applying them in the field of software testing to improve the efficiency and effectiveness of software test case design (e.g., by using pairwise, orthogonal array-based, and/or combinatorial software testing techniques).  In fact, I decided to create my company’s test case generating tool, Hexawise, after using Design of Experiments-based test design methods on a couple dozen projects during my time at Accenture and measuring dramatic improvements in tester productivity (as well as dramatic reductions in the amount of time it took to identify and document test cases).  We saw these improvements in every single pilot project in which we used these methods to identify tests.

My goal, in continuing to improve our Hexawise test case generating tool, is to help make the efficiency-enhancing Design of Experiments methods embodied in the book accessible to “regular” software testers, and more broadly adopted throughout the software testing field.  Some days, it feels like a shame that the approaches from the Design of Experiments field (extremely well known and broadly used in manufacturing industries across the globe, in research and development labs of all kinds, and in product development projects in chemicals, pharmaceuticals, and a wide variety of other fields) have not made much of an inroad into software testing.  The irony is, it is hard to think of a field in which it is easier or quicker to prove that dramatic benefits result from adopting Design of Experiments methods than software testing.  All it takes is for a testing team to run a simple proof of concept pilot.  It could be for as little as a half-day’s testing activity for one tester.  Create a set of pairwise tests with Hexawise or another tool like James Bach’s AllPairs tool.  Have one tester execute the tests suggested by the test case generating tool. Have the other tester(s) test the same application in parallel.  Measure four things (a sketch of the resulting arithmetic appears below):

  1. How long did it take to create the pairwise / DoE-based test cases?
  2. How many defects were found per hour by the tester(s) who executed the “business as usual” test cases?
  3. How many defects were found per hour by the tester who executed the pairwise / DoE-based tests?
  4. How many defects were identified overall by each plan’s tests?

These four simple measurements will typically demonstrate dramatic improvements in:

  • Speed of test case identification and documentation
  • Efficiency in defects found per hour

As well as consistent improvements to:

  • Overall thoroughness of testing.
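To make the pilot arithmetic concrete, here is the sketch promised above.  The numbers are entirely made up for illustration; substitute your own measurements:

```python
# Hypothetical pilot results -- replace every number with your own data.
pilot = {
    "business as usual": {"design_hours": 12.0, "execution_hours": 16.0, "defects": 18},
    "pairwise / DoE":    {"design_hours": 3.0,  "execution_hours": 16.0, "defects": 37},
}

for approach, m in pilot.items():
    print(f"{approach}: {m['design_hours']:.0f} hours to design tests, "
          f"{m['defects'] / m['execution_hours']:.2f} defects found per tester hour")
```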

A Suggestion: Experiment / Learn / Get the Data / Let the Efficiency and Effectiveness Findings Guide You

I would be thrilled if this blog post gave you the motivation to explore this testing approach and measure the results.  Whether you’ve used similar-sounding techniques or never heard of DoE-based software testing methods before, and whether you’re a software testing newbie or a grizzled veteran, I suspect the experience of running a structured proof of concept pilot (and seeing the dramatic benefits I’m confident you’ll see) could be a watershed moment in your testing career.  Try it!  If you’re interested in conducting a pilot, I’d be happy to help get you started, and if you’d be willing to share the results of your pilot publicly, I’d be able to provide ongoing advice and test plan review.  Send me an email or leave a comment.

To the grizzled and skeptical veterans (and yes, Mr. Shrini Kulkarni / @shrinik, who tweeted “@Hexawise With all due respect. I can’t credit any technique the superpower of 2X defect finding capability. sumthng else must be goingon” before actually conducting a proof of concept using Design of Experiments-based testing methods and analyzing the findings, I’m lookin’ at you), I would (re)quote Sophocles: “One must try by doing the thing; for though you think you know it, you have no certainty until you try.” For newer testers, eager to expand your testing knowledge (and perhaps gain an enormous amount of credibility by taking the initiative while you’re at it), I’d (re)quote Cole Porter: “Experiment and you’ll see!”

I’d welcome your comments and questions.  If you’re feeling, “Sounds too good to be true, but heck, I can secure a tester for half a day to run some of these DoE-based / pairwise tests and gather some data to see whether or not it leads to a step-change improvement in the efficiency and effectiveness of our testing,” and you’re wondering how you’d get started, I’d be happy to help you out and do so at no cost to you.  All I’d ask is that you share your findings with the world (e.g., on your blog, or by letting me use your data as the firms did with their findings in the “Combinatorial Software Testing” article below).

– Justin

Related: (Introductory Hexawise video overview showing 6.5 trillion possible tests reduced, using Design of Experiments techniques, to the 37 tests most likely to find defects)

Related: (Article explaining Design of Experiments-based software testing techniques such as pairwise, OA, and n-wise testing: “Combinatorial Software Testing” by Kuhn, Kacker, Lei, and Hunter (pdf download))

Related: (Prior blog post) “In Praise of Data-Driven Management (AKA “Why You Should be Skeptical of HiPPO’s”)”

Related: (My brother’s blog: he’s in IT too and is also a strong proponent of using Design of Experiments-based software test design methods to improve software testing efficiency and effectiveness).

“I feel honored…”

There are some phrases in English that, as often as not, come off sounding obligatory and/or insincere. The phrase “I’m honored…” comes to mind (particularly if someone is accepting an award in front of a room full of people).

Be that as it may, I genuinely felt honored last night and again today by a couple of comments James Bach made about me, including these:

Here’s the quick background: (1) James knows much more about software testing than I do and I respect his views a lot. (2) He has a reputation for not suffering fools gladly and pretty bluntly telling people he doesn’t respect them if he doesn’t respect the content of their views. (3) In addition to his extremely broad expertise on “testing in general,” James, like Michael Bolton, knows a lot about pairwise and combinatorial testing methods and how to use them. (4) I firmly (and passionately) believe that pairwise and combinatorial testing methods are (a) dramatically under-appreciated and (b) dramatically under-utilized. (5) James has published a very good and well-reasoned article about some of the limitations of pairwise testing methods that I wanted to talk to him about. (6) I co-wrote an article that IEEE Computer recently published about combinatorial testing that I wanted to discuss with him. (7) James and I have been at the STP Conference in Boston over the past few days. (8) I reached out to him and asked to meet at the conference to talk about pairwise and combinatorial testing methods and to share with him my finding that – in the dozens of projects I’ve been involved with that have compared testers’ efficiency and effectiveness – I’ve routinely seen defects found per tester hour more than double. (9) I was interested in getting his insights into where these methods are most applicable, where they are least applicable, and what his experiences have been in teaching combinatorial testing methods to students, etc.

In short, frankly, my goals in meeting with him were to: (a) meet someone new, interesting, and knowledgeable, and learn as much as I could from his experiences, his impressive critical thinking, and his questioning nature, and (b) avoid tripping up with sloppy reasoning (when unapologetically expressing the reasons I feel combinatorial testing methods are dramatically under-appreciated by the software testing community) in front of someone who (i) can smell BS a mile away and (ii) doesn’t suffer fools gladly.

I learned a lot, heard some fantastic war stories, and heard his excellent counter-examples that disproved a couple of the generalizations I was making (but didn’t dampen my unshaken assertion that combinatorial testing methods are wildly under-utilized by the software testing community). I thoroughly enjoyed the experience. Moving forward, as a result of our meeting, I will go through an exercise that will make me more effective (namely, carefully thinking through and enumerating all of the assumptions behind statements of mine like: “I’ve measured the effectiveness of testers dozens of times – trying to control external variables as much as reasonably possible – and I’m consistently seeing more than twice as many defects per tester hour when testers adopt pairwise/combinatorial testing methods”).

His compliment last night was private so I won’t share it, but it ranks among my all-time favorite compliments. I’m honored. Thanks James.

What Else Can Software Development and Testing Learn from Manufacturing? Don’t Forget Design of Experiments (DoE).


Tony Baer from Ovum recently wrote a blog post titled: Software Development is like Manufacturing which included the following quotes:

“More recently, debate has emerged over yet another refinement of agile – Lean Development, which borrows many of the total quality improvement and continuous waste reduction principles of lean manufacturing. Lean is dedicated to elimination of waste, but not at all costs (like Six Sigma). Instead, it is about continuous improvement in quality, which will lead to waste reduction….

In essence, developing software is like making a durable good like a car, appliance, military transport, machine tool, or consumer electronics product…. you are building complex products that are expected to have a long service life, and which may require updates or repairs.”

Here are my views: I see valid points on both sides of the debate.  Rather than weigh general high-level pros and cons, though, I would like to zero in on what I see as an important topic that is all too often missing from the debate.  Specifically, Design of Experiments has been central to Six Sigma, Lean Manufacturing, the Toyota Production System, and Deming’s quality improvement approaches, and is equally applicable to software development and testing, yet adoption of Design of Experiments methods in software design and testing remains low.  This is unfortunate because significant benefits consistently result in both software development and software testing when Design of Experiments methods are properly implemented.

What are Design of Experiments Methods and Why are they Relevant?

In short, Design of Experiments methods are a proven approach to creating and managing experiments that alter variables intelligently between each test run in a structured way that allows the experimenter to learn as much as possible in as few experiments as possible.  From wikipedia: “Design of experiments, or experimental design, (DoE) is the design of all information-gathering exercises where variation is present, whether under the full control of the experimenter or not. Often the experimenter is interested in the effect of some process or intervention (the “treatment”) on some objects (the “experimental units”).”

Design of Experiments methods are an important aspect of Lean Manufacturing, Six Sigma, the Toyota Production System, and other manufacturing-related quality improvement approaches/philosophies.  Not only have Design of Experiments methods been very important to all of the above in manufacturing settings, they are also directly relevant to software development. By way of example, W. Edwards Deming, who was extremely influential in quality initiatives in manufacturing in Japan and the U.S., was an applied statistician. He and thousands of other highly respected quality executives in manufacturing, including Box, Juran, and Taguchi (and even my dad), have regularly used Design of Experiments methods as a fundamental anchor of quality improvement and QA initiatives, and yet relatively few people who write about software development seem to be aware that Design of Experiments methods exist.

What Benefits are Delivered in Software Development by Design of Experiments-based Tools?

Application optimization tools, like Google’s Website Optimizer, are a good example of how Design of Experiments methods can deliver powerful benefits in the software development process.  Website Optimizer allows users to easily vary multiple aspects of web pages (images, descriptions, fonts, colors, logos, etc.) and capture the results of user actions to identify which combinations work best. A recent YouTube multi-variate experiment (i.e., an experiment created using Design of Experiments methods) shows how they used this simple tool and increased sign-up rates by 15.7%.  The experiment involved 1,024 variations.

What Benefits are Delivered in Software Testing by Design of Experiments-based Tools?

In addition, software test design tools, like the Hexawise test design tool my company created, enable dramatically more efficient software testing by automatically varying different elements of the use cases being tested in order to achieve optimal coverage. Users input the things in the application they want to test, push a button and, as in the Google Website Optimizer example, the tool uses DoE algorithms to identify how the tests should be run to maximize efficiency and thoroughness.  A recent IEEE Computer article I contributed to, titled “Combinatorial Testing,” shows that, on average, over the course of 10 separate real-world projects, tester productivity (measured in defects found per tester hour) more than doubled compared to the control groups, which continued to use their standard manual methods of test case selection: http://tinyurl.com/nhzgaf
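The scale of the problem these algorithms tackle is easy to underestimate.  A quick back-of-the-envelope calculation (the parameter counts below are made up, purely for illustration) shows why hand-selecting combinations breaks down:

```python
import math

# Hypothetical system under test: number of values per parameter.
value_counts = {"Browser": 8, "OS": 5, "Language": 12, "User type": 4,
                "Payment method": 6, "Currency": 20, "Plan": 3}

exhaustive = math.prod(value_counts.values())

# Any pairwise suite must at least cross the two largest value sets.
second, largest = sorted(value_counts.values())[-2:]
print(f"Exhaustive testing: {exhaustive:,} combinations")
print(f"Pairwise coverage needs at least {largest * second} tests")
```

For these made-up counts, that is 691,200 exhaustive combinations versus a floor of 240 pairwise tests; a good test case generator will land in the neighborhood of that floor while still guaranteeing that every pair is covered.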

Unfortunately, Design of Experiments methods – one of the most powerful methods in Lean Manufacturing, Six Sigma, and the Toyota Production System – are not yet widely adopted in the software development industry. This is unfortunate for two reasons, namely:

  1. Design of Experiments methods will consistently deliver measurable benefits when implemented correctly, and
  2. Sophisticated new tools designed with very straightforward user interfaces make it easier than ever for software developers and testers to begin using these helpful methods.

– Justin

In Praise of Data-Driven Management (AKA “Why You Should be Skeptical of HiPPO’s”)


Jeff Fry recently linked to a fantastic webcast in Controlled Experiments To Test For Bugs In Our Mental Models. I would highly recommend it to anyone without reservation.  Ron Kohavi of Microsoft Research does a superb job of using interesting real-world examples to explain the benefits of conducting small experiments with web site content and of making data-driven decisions.  The link to the 22-minute video is here.

I firmly believe that the power of applied statistics-based experiments to improve products is dramatically under-appreciated by businesses (and, for that matter, business schools), as well as by the software development and software testing communities.  Google, Toyota, and Amazon.com come to mind as notable exceptions to this generalization; they “get it”.  Most firms, though, still operate, to their detriment, with their heads in the sand, placing too much reliance on untested guesswork, even for fundamentally important decisions that would be relatively easy to double-check, refine, and optimize through the small applied statistics-based experiments that Kohavi advocates.  Few people who understand how to properly conduct such experiments are as articulate and concise as Kohavi.  Admittedly, I could be accused of being biased, as: (a) I am the son of a prominent applied statistician who passionately promoted broader adoption of such methods by industry and (b) I am the founder of a software testing tools company that uses applied statistics-based methods and algorithms to make our tool work.

Here is a short summary of Kohavi’s presentation: Practical Guide to Controlled Experiments on the Web: Listen to Your Customers not to the HiPPO

1:00 Amazon: in 2000, Greg Linden wanted to add recommendations in shopping carts during the checkout process. The “HiPPO” (meaning the Highest Paid Person’s Opinion) was against it, thinking that such recommendations would confuse and/or distract people. Amazon, a company with a good culture of experimentation, decided to run a small experiment anyway, “just to get the data” – it was wildly successful, and the feature is in widespread use today at Amazon and other firms.

3:00 Dr. Footcare example: Including a coupon code above the total price to be paid had a dramatic impact on abandonment rates.

4:00 “Was this answer useful?” Dramatic differences in user response rates occur when Y/N is replaced with 5 stars, and depending on whether an empty text box is shown initially or only after a user clicks to give an initial response.

6:00 Sewing machines: experimenting with a sales promotion strategy led to extremely counter-intuitive pricing choice

7:00 “We are really, really bad at understanding what is going to work with customers…”

7:30DATA TRUMPS INTUITION” {especially on novel ideas}. Get valuable data through quick, cheap experimentation. “The less the data, the stronger the opinions.”

8:00 Overall Evaluation Criteria: “OEC” What will you measure? What are you trying to optimize? (Optimizing for the “customer lifetime value”)

9:00 Analyzing data / looking under the hood is often useful to get meaningful answers as to what really happened and why

10:30 A/B tests are good; more sophisticated multi-variate testing methods are often better

12:00 Some problems: Agreeing upon Overall Evaluation Criteria is hard culturally. People will rarely agree. If there are 10 changes per page, you will need to break things down into smaller experiments.

14:00 Many people are afraid of multiple experiments [e.g., multi-variate experiments or MVE] much more than they should be.

(A/B testing can be as simple as changing a single variable and comparing what happens when it is changed, e.g., A = “web page background = Blue” / B = “web page background = Orange.” Multi-variate experiments involve changing multiple variables in each test run, which means that the people running the tests must be able to vary the variables efficiently and effectively, ensuring not only that each variable is tested but also that each variable is tested in conjunction with each of the others, because they might interact with one another.) <My views on this: before software tools made conducting multi-variate experiments (and understanding their results) a piece of cake, this fear had some merit; you would need to be able to understand books like this to competently run and analyze such experiments. Today, however, many tools, such as Google’s Website Optimizer (used for making web sites better at achieving their click-through goals, etc.) and Hexawise (used to find defects with fewer test cases), build the complex Design of Experiments-based optimization algorithms into the tool’s computation engine and provide the user with a simple user interface and user experience. In short, in 2009, you don’t need a PhD in applied statistics to conduct powerful multi-variate experiments. Everyone can quickly learn how to, and almost all companies should, use these methods to improve the effectiveness of applications, products, and/or production methods. Similarly, everyone can quickly learn how to, and almost all companies should, use these methods to dramatically improve the effectiveness of their software testing processes.>

16:00 People do a very bad job at understanding natural variation and are often too quick to jump to conclusions.
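His point about natural variation is easy to check for yourself. Here is a small sketch (mine, not from the webcast) of the standard two-proportion z-test, which shows how a difference that looks meaningful can sit comfortably inside the noise:

```python
from math import sqrt

def two_proportion_z(conversions_a, n_a, conversions_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    return (p_b - p_a) / se

# Hypothetical A/B result: 2.0% vs 2.3% conversion on 10,000 visitors each.
z = two_proportion_z(200, 10_000, 230, 10_000)
print(f"z = {z:.2f}")  # ~1.46, below the usual 1.96 significance threshold
```

A 15% relative lift on 10,000 visitors per arm is still indistinguishable from noise at the conventional 95% confidence level, which is exactly the sort of premature conclusion Kohavi warns about.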

17:00 eBay does A/B testing and makes the control group ~1%. Ron Kohavi, the presenter, suggests starting small then quickly ramping up to 50/50 (e.g., 50% of viewers will see version A, 50% will see version B).

19:00 Beware of launching experiments that “do not hurt”; there are feature maintenance costs.

20:00 Drive to a data-driven culture. “It makes a huge difference. People who have worked in a data-driven culture really, really love it… At Amazon… we built an optimization system that replaced all the debates that used to happen on Fridays about what gets on the home page with something that is automated.”

21:00 Microsoft will be releasing its controlled experiments on the web platform at some point in the future, but probably not in the next year.

21:00 Summary

  1. Listen to your customers because our intuition at assessing new ideas is poor.
  2. Don’t let the HiPPO drive decisions; they are likely to be wrong.  Instead, let the customer data drive decisions.
  3. Experiment often; create a trustworthy system to accelerate innovation.

Justin Hunter
Founder and CEO
Hexawise
“More coverage. Fewer tests.”

Related: Statistics for Experimenters – articles on design of experiments