This is a big misconception, and it usually comes from other members of the project team simply not understanding what testers do. That’s why I believe a huge part of a tester’s job is to be vocal about what they actually do.
Running through a set of test cases with step-by-step instructions for how the software is intended to work, complete with columns for the expected results and a tick box to indicate “pass/fail”, is not testing. I repeat – this is not testing.
This is checking. I’m not going to go on about the testing vs checking debate… Michael Bolton has a blog that perfectly details how I feel about it: http://www.developsense.com/blog/2013/03/testing-and-checking-redefined/
What I will say, though, is that these scripts are exactly the kind of thing that ought to be automated.
Automation should predominantly be the focus when you have a set of checks like these, where you can anticipate how the software should behave and have explicit expectations to check against. Even then, you should weigh up whether each check is actually worth automating.
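To make the distinction concrete, here is a minimal sketch of what automating one of these scripted checks might look like. Everything here is hypothetical and illustrative – the `add` function stands in for whatever feature your script exercises; none of these names come from any real project:

```python
# A scripted manual check might read:
#   Steps:           "Enter 2 and 3, press Add"
#   Expected Result: "5 is displayed"
# That is a check, not testing - and it can be automated.

def add(a, b):
    """Stand-in for the feature the script exercises (hypothetical)."""
    return a + b

def check_addition():
    # The "Steps" column becomes executable code...
    result = add(2, 3)
    # ...and the "Expected Result" column becomes an assertion.
    assert result == 5, f"expected 5, got {result}"
    return "pass"

if __name__ == "__main__":
    print(check_addition())
```

The point of the sketch: once steps and expected results are written down precisely enough to follow mechanically, a machine can follow them – freeing the human tester for the exploratory work a script can never capture.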
My advice: if you have a document in front of you with columns for “Description”, “Steps”, “Expected Result” and “Pass/Fail?” and you plan to use it for your sapient testing – take the two columns of “Steps” and “Expected Result” out of the document (set them aside to be automated), rename the “Pass/Fail” column to “Notes”, and open up your creativity with some exploratory testing, using your brain. Then talk to others about how you actually test, so they can learn that testing isn’t about following a script.