Loads of people use the term “acceptance testing”, but most of them have different definitions of what it means, and more still would struggle to define what they personally mean when they say it.
I also regularly hear the words “feature files” mentioned in some of those conversations about “acceptance testing”. Personally, I don’t think it’s hard to realise that the primary function of feature files is to capture requirement examples that prompt the conversations which help define user stories and requirements as you form them. They are supposed to be part of a BDD process, which is all about conversations driving the design and refinement of the artefacts used to help the development and testing of the software. But somewhere, there always seems to be confusion relating feature files to testing, sometimes specifically to “acceptance testing”.
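To make that concrete, here is a hypothetical feature file in Gherkin (the feature, scenario and wording are invented purely for illustration). Notice that it reads as a requirement example to discuss with the team, not as a test script:

```gherkin
Feature: Basket totals
  # A hypothetical requirement example, agreed in conversation
  # while refining the story, written down for shared understanding.
  Scenario: Adding an item updates the basket total
    Given an empty basket
    When the user adds a book priced at £5
    Then the basket total is £5
```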
Anyway, sorry for the side track on feature files… back to this term “acceptance testing”! What does it actually mean?
The Agile Alliance describe the term as “a formal behaviour of the software product”. And the controversial ISTQB describe it as: “Formal testing with respect to user needs, requirements, and business processes to determine whether a system satisfies acceptance criteria…”. (They also appear to have questionable definitions on their site for other types of acceptance testing, such as “factory acceptance testing”, among other terms.)
These definitions just don’t sit well with me. They read as though they were written by people from other areas of the software world who don’t quite understand the different aspects and layers of software testing, or all the other testing activities, risks, techniques, contexts and variables that are entangled in the craft.
For me, there are two things I like to think (or hope) people could mean when they use the term:
- The activity of asserting any explicit expectations that are specified as acceptance criteria.
If we form acceptance criteria artefacts as part of our conversations about the requirements when forming our stories, then those criteria can be useful in driving design, development, checking and testing activities throughout the life-cycle of that story. (But don’t think that checking the acceptance criteria is enough to know whether your product “works or not”!)
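As a minimal sketch of what asserting an acceptance criterion through automation might look like, here is a plain-Python check. The `Basket` class and the criteria themselves are invented purely for illustration, not taken from any real project:

```python
# Hypothetical sketch: two acceptance criteria for a basket feature,
# asserted directly as an automated check.

class Basket:
    """Illustrative domain object; invented for this example."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    @property
    def total(self):
        return sum(price for _, price in self.items)


def test_adding_an_item_updates_the_total():
    basket = Basket()
    # Criterion 1: an empty basket totals zero.
    assert basket.total == 0
    basket.add("book", 5)
    # Criterion 2: the total reflects the added item.
    assert basket.total == 5


if __name__ == "__main__":
    test_adding_an_item_updates_the_total()
    print("acceptance criteria checked")
```

A test runner such as pytest would pick up the `test_` function automatically; the point is simply that an explicitly specified expectation is a natural candidate for an automated assertion.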
I’m not caught up on terminology here. If you don’t like the word “checking”, you don’t have to use it. I’m just saying there needs to be a recognition that asserting our expectations, although essential, is not enough on its own; you’ll need more than that type of activity to discover the level of quality your software possesses. You can never think of everything up front, and you’ll always need investigative activities (such as code reviews and exploratory testing) to discover information you were previously unaware of. Equally, I believe good automation practices should be used here, if this is what you mean by “acceptance testing”: if you have acceptance criteria specifying how the system SHOULD behave, then why not assert that expectation through automation?

Or
- The activity of users accepting the software by using the system on a test environment in the same way they would in a live environment (also more commonly known as “User Acceptance Testing”).
For the software to be “accepted” by the customer and users, many companies offer the users an opportunity to use the software in a live-like environment, in the same way they would use it in their day-to-day jobs if it were live. Completely unscripted, work-like simulations. This gives the users insight into whether the software is acceptable for doing their work. They can do what they want with the software: maybe a subset of people running last week’s tasks, or this week’s tasks in parallel with the system that the software under UAT will replace, and they can “accept” the software if they are happy.
If you mean this type of testing when you say “acceptance testing”, then imposing scripted checks on the users is a bad idea… Chances are, those scripts have already been run as part of the testing (or, ideally, automation), so having the users run through them again wouldn’t make much sense. Plus, you’ll miss all the things the users might do in a live-like environment, since they wouldn’t be exploring freely if they only follow what your script tells them to do.
If you’re conflating the two, writing acceptance criteria at the beginning and then waiting until the very end of the development cycle to have users check that expected behaviour, then you have big problems.
Personally, “acceptance testing” is a term I just don’t tend to use. If I have acceptance criteria that need asserting, I talk about checking the acceptance criteria. If I have users I want to invite to use the software in a safe environment on a live-like basis, I talk about inviting the users to trial the software and supply their feedback. I think the term causes confusion, and that’s ultimately why I avoid it.
I know I’ve only highlighted the two usages I think are most common for the term “acceptance testing”, and I’m absolutely positive there are other definitions that some might prefer.
What do you think? Do you use the term? If so, what do you really mean when you say it? And are you sure that everyone on your team has the same understanding?
Please leave your thoughts below in a comment!