Information, and its relationship with testing and checking

One of the biggest problems facing the testing industry right now is the misconception that automation can replace testing. Michael Bolton, James Bach and others within the testing industry have been working hard to dispel this misconception, but I still see daily struggles in most companies I work with – not just with the management in those organisations, or with the developers or other project members, but with the vast majority of the people in the testing or automation roles too.

Many testers believe that they need to learn how to automate in order to stay in a job. And many people online (some of whom consider themselves industry experts) believe this too and preach it in their posts…

I read Michael and James’ 10,000-word white paper, which offers some insightful lessons on the differences between “testing” and “checking”, along with where automation fits within context-driven testing. I enjoyed reading it, and I’ve handed it out to many organisations that I know have problems understanding this. But these people still have the same problems. When I asked whether people had actually read the paper, the general response was either that it is too long or that they don’t like the term “checking”… (Many people in automation roles see this as demeaning to the work they do – they see the word “checking” similarly to how I view the word “manual”.)

So how do I simplify this? How do I get people to start understanding?

Initially, I started to change my language. I still spoke about “testing” and “checking”, but I didn’t speak about them directly… I spoke about them via their relationship with information. I think people started to understand when I spoke about information and how investigation uncovers more information, and how we can then check to confirm any information that we have.

After having a few lengthy conversations, I decided that it would be far easier to talk about information’s relationship with testing and checking via a model.

So without further ado, here is the model:

It’s fairly rudimentary right now. I have intentionally kept it basic, without adding information about the different types of knowledge, the artifacts that we create from our information/knowledge, or anything about reporting on our testing or checking activities, etc.

It also might not seem new, or might seem obvious to some people – the big difference with this model is that I’m intentionally putting the focus on INFORMATION.

Let me explain the model:

– Information –

This is our currency in software projects and products – information regarding our requirements, our designs & artifacts, our features and products, our platforms, our processes… Information is at the centre of it all.

– Testing Activities –

Testing is an investigatory activity (exploration) that has the effect of uncovering more information. We can explore requirements through questioning them as a team. We can explore the design of the product through questioning the wireframes and further discussions regarding the UX and UI design of the product. And of course, we can explore the product to uncover more information about it.

Also, the information that we uncover then informs our testing. It helps us stem more ideas. The previous question asked, or lesson learned from what we have tested, will stem and influence the next question or idea that we think of.

It’s also important to remember that there are many activities involved in the lifecycle of a product. Exploratory testing is just one. Code reviewing is another. Requirements analysis is a testing activity too – ideally conducted by the whole team…

– Checking Activities – 

If we think about what we do when we check a claim made by any source of information regarding a product, then it’s clear that the information we are using is what enables us to perform that check. If we didn’t have that information, we couldn’t assert whether the expectation is true or not.

There are many checking activities that we conduct against the product. A prime example is regression checking, where we are asserting our expectation that new features we are introducing to our software have not adversely affected pre-existing features that we have already built. Or when we check our expectations of the behaviour of single lines of code under the conditions that we know about.

And these checking activities are usually scripted – either in a “steps” and “expectation” format for a human to perform, or in code for a computer to automatically evaluate the expectations that we write as assertions.
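To make that concrete, here is a minimal sketch of what a coded check looks like. The `apply_discount` function and its expected values are hypothetical examples (not from this article); the point is that every expectation is written down up front as an assertion, so the computer can only confirm or refute information we already have.

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production logic being checked."""
    return round(price * (1 - percent / 100), 2)


def run_regression_checks() -> None:
    # Each assertion encodes an existing expectation (known information).
    # A failure refutes it; a pass merely confirms what we already knew.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
    assert apply_discount(50.0, 50) == 25.0


run_regression_checks()
```

Notice that nothing here can surface a question we didn’t think to ask – that is exactly the testing/checking distinction the model draws.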

– Automation can assist our testing –

The term “tool assisted testing” is one that Richard Bradshaw has been using for a long time, and it’s one that Michael and James used in their white paper too. I used to use the term “throwaway scripts” myself, but now I much prefer Richard’s terminology. The main thing to remember: although automation can assist our testing – with data generation, or by manipulating the software to get from A to B so that we can start testing from B to uncover new information – automation is assisting our testing. It’s not performing our testing at all.
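As an illustration of tool-assisted testing, here is a hedged sketch (the `generate_test_accounts` helper and its data shape are hypothetical) of automation doing the repetitive setup – generating the data needed to reach state B – so that a human can then explore from there:

```python
import random
import string


def generate_test_accounts(count: int, seed: int = 42) -> list[dict]:
    """Create throwaway account records for test setup.

    This is setup assistance, not checking: no expectations are
    asserted here, it just gets the product into a useful state.
    """
    rng = random.Random(seed)  # seeded so setup is repeatable
    accounts = []
    for i in range(count):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        accounts.append({"id": i, "email": f"{name}@example.test"})
    return accounts


accounts = generate_test_accounts(5)
# The tool has done the tedious part; the testing (operating, observing,
# questioning the product) still happens when a human explores from here.
```

The design point is that the script’s value ends at “B”: everything after that – noticing oddities, stemming new ideas – is the investigative work the tool cannot do.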

 

This all relates to Phillip Armour’s “5 orders of ignorance” regarding information.

I’m increasingly referring to the 5 orders of ignorance when discussing this model with people too. For those not familiar with the 5 orders:

  • The 0th order of ignorance is: KNOWLEDGE – This is our knowns. Our explicit information.
    It’s the information that we can use to enable our checking.
  • The 1st order of ignorance is: LACK OF KNOWLEDGE – This is our unknowns. It’s things that we are aware that we do not know. When we ask questions such as “what if…” or “what about…”, then we know that we don’t know the answer, so we can test to uncover the answer. To test, we ask the question – be it of a product owner regarding a requirement, or of the product through operating the product and observing how it responds.
  • The 2nd order of ignorance is: LACK OF AWARENESS – This is our unknown unknowns. It’s ultimately when we are unaware that we don’t know something. Therefore we can’t ask questions about it – we don’t even know to ask questions about it, as we are unaware of it. And this is caused by:
  • The 3rd order of ignorance: LACK OF PROCESS – This is where we have no process (or activity) in place that enables us to uncover information (gaining awareness) regarding our unknown unknowns.

    Testing activities are this process…
    If you think of exploring a requirement: Rob Lambert had a great blog post a while ago about an activity that he ran with his team, where he asked them to think of all the possible purposes of a brick. If that was a requirement and I shouted out “use it to break a car window”, then that would most likely trigger an idea in your head about either smashing other windows for different reasons (a house window to gain entry to the house), or other uses for the brick in the car (e.g. using it to hold down the accelerator). But if we didn’t have this activity of testing occurring, then would we even think about these possible uses?

    Or if you think about a product: If we were pairing and I mentioned an idea to try and put a double barrelled surname with a dash in the surname field (i.e. “Harding-Rolls”), that will most likely stem an idea in your head, perhaps about adding an apostrophe in the surname (“O’Brien”) or even a foreign character surname (“張”). These may have previously been unknown unknowns (2nd order) that we have triggered to become unknowns that we are now aware of (1st order) through our testing activity, that we can then perform a test by asking the question of the product to transfer this unknown into knowledge (0th order).

  • The 4th order of ignorance is: META IGNORANCE – This one is a bit cheesy. It’s when there is a lack of awareness of the 5 orders of ignorance themselves. We can ignore this in the context of this post… :)
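The surname example above can be sketched in code to show the transition between orders: once an exploration idea (a known unknown) has been asked of the product and answered, it becomes knowledge that a repeatable check can pin down. Here `validate_surname` is a hypothetical stand-in for the real field logic, not anything from the article.

```python
def validate_surname(surname: str) -> bool:
    """Hypothetical surname-field rule: non-empty, at most 50 chars."""
    return 0 < len(surname) <= 50


# Ideas stemmed while pairing (hyphen, apostrophe, non-Latin characters)
# were once unknown unknowns (2nd order). Raising them made them known
# unknowns (1st order); asking the product turned the answers into
# knowledge (0th order) that we can now re-check on every build.
SURNAME_IDEAS = ["Harding-Rolls", "O'Brien", "張"]

for surname in SURNAME_IDEAS:
    assert validate_surname(surname), f"rejected: {surname}"
```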

 

With all this in mind, the last thing to be aware of is CONTEXT. There will always be outside influences from the project, the product, the customers and users, our organisations, the community, our environments, etc., that form the contexts we work in and influence our testing.

Overall, I think this model (putting the emphasis on information) has helped many people to finally understand the differences between testing and checking. It’s helped those organisations reform their “testing” strategies to actually include investigative testing activities.

And this model isn’t intended to replace anything or annoy anyone. It’s hopefully useful in addition to other models, blogs and articles on this topic – in helping to dispel this misconception that has been haunting the testing industry and affecting software that people use on a daily basis.

I encourage your feedback on the model too. All models are fallible but can be useful, so feedback will help refine the model to help make it more useful.

 

(This post was originally posted on DanAshby.co.uk)

Dan, very good article. The checking element aside, it resonates with a lot of discussions I have been having over the last few years, in that testing really specialises in risk and information. Where I talk about risk seems to relate fairly closely to your thoughts on orders of ignorance. For example, in determining a suitable testing mission for a project, just one of the questions I might ask is “what level of risk are the team comfortable with?” In a very simplified view of levels of risk coverage:

  • Covering lower known risks: “what if the product does not do what it is desired to do?”
  • Investigating higher known risks: “what if the product does what it is not desired to do?”
  • Leveraging exploring and learning approaches on the known risks to intentionally expose previously unknown (potentially catastrophic) risks, where the entire investigation at its heart addresses the risk of “under-informed quality-related decisions being made”.

Now I am not going to change my wording to “what level of ignorance are the team comfortable with?”, but it is a question in a similar vein to the risk question, and definitely good meta-information to have insight into. I also really like the diagram.


Hi Dan, great article. For me, context is important, but it is our response to it that I think matters more – i.e. the testing solution within that context that gets and provides the information from and to our customers.

Excellent article. Automation can augment and assist in some testing activities, but I see too many companies that cling to the fallacy that automation will resolve all testing issues and provide massive cost savings.

Good article. Wouldn't you say scripted information can also inform exploratory testing activities? Or can't it? I think some confusion comes when we seem to suggest scripted can't be exploratory.
