Anatomy of an Automated Test Case

February 4th, 2016 by inflectra


When test cases are written well, the automation follows simply from the steps. Writing a good test case isn't difficult; it's just tedious, boring, and sometimes downright painful. But when it's done right, the results are worth it.

Why Write About Test Cases First?

If you are building an automated system, it's best to know the final goal from the start. Moreover, a well-designed framework is both independent of and focused on the test case. Whatever happens behind the scenes in the automation framework, test cases will determine the quality of the Application Under Test (AUT). If a framework isn't designed to meet the needs of the automated test cases, then it fails to meet the most critical reason for its existence.
When test cases are done right, the QA team can be confident that it knows the software through and through. This knowledge is obtained through writing good test cases. Test cases are built on the idea that the QA team should always be able to predict the outcome of any action taken anywhere within the AUT. Wherever QA falls short of this goal is where the greatest number of bugs will lie fallow. Automation is manual testing made repeatable. It doesn't reduce the amount of manual testing up front because an automated test case is only as good as the manual test case it sprang from. And if you've ever written a test case, you know that you've looked at it from every angle by the time it is fully automated.

Why I Hate Writing Test Cases

I consider this paragraph the big DUH! If writing test cases is anything but drudgery, you're doing it wrong. There are as many definitions of a test case as there are people giving them, but I give mine for the sake of having a common starting point. What follows describes what I mean by my definition. Note to disbelievers: it's fine to disagree vehemently with my definition; just follow along in order to understand my methods for writing automated test cases, and I'm sure you'll be able to use your own definition to achieve the ends I describe here.

A good test case is a set of well-defined steps with one or more verifiable outcomes.

The implementation I give here, based on this definition, is the key to writing great automation. Test cases, both manual and automated, should differ in only one way: syntax. If computers could read English as it's written, you'd only write a test case once.

You Want Me to Do What?

I hate writing test cases for the simple reason that every single piece of information required to perform the test and determine the outcome – pass or fail – must be written out explicitly. If you were writing these test cases as code, and you will be, you couldn't leave a piece of information to the discretion of the CPU. Think of the person executing the manual test case as a CPU, capable of carrying out instructions but requiring every detail in writing to do it. When writing test case automation, always remember the compiler is a harsh taskmaster! The need to describe in excruciating detail what is happening in the test is what makes writing test cases such a painful experience. But far more importantly, detail is the determining factor in getting the test case done right.
Now that I've whined about how painful writing test cases is, here's the cheese. While writing great test cases challenges my patience, I have never been handed a test case written in such explicit detail that, when I looked at it, I was unhappy to have all that information. I have, however, on many, many occasions dreaded reading someone else's test case because I knew I would have no idea what to do with the information I'd been given. If you can't begin clicking and typing from the first line of a test case right through to the last, it needs to be rewritten. And who, you might ask, should make this call?

Who is the Enforcer of Quality Test Cases?

This is the most important question to answer. In government there are checks and balances for one important reason: everyone is human, and without checks and balances, individuals would mess things up for everyone. In QA terms: every person who writes test cases is human and will, at times, write bad test cases. We get tired, we get overworked, we get bored, and then we slip into shorthand where it's inappropriate (I'll get back to this "appropriate" bit). The person exercising a test case must not be the person who wrote it. Why? I know my shorthand. I know exactly what I mean (at least I think I do) by every step I've written out, so if I exercise my own test cases, they'll be as fuzzy as they were when I wrote them. And the results of all my testing will be equally fuzzy and unreproducible. There is really only one good way to get people to write good test cases: send every test case that isn't up to snuff back to be fixed by the original writer. No matter how trivial the change, consider the test case holy and inviolable. Never, ever change someone else's test case. This is where good QA software will help. Good software will record every change made and every person who made the change. With the history being unchangeable, violating this rule will not go unnoticed. Then, if I have to request a nit-picky change to one step of a test case, I can, because there's no alternative.
And why is there no alternative? Because if you don't know exactly what the original writer intended for that test case, and you guess (remember, the CPU doesn't guess), then you are responsible if you miss a bug because you didn't test what the writer had in mind.

Who Executes Test Cases?

It is widely understood that not every test case can be automated. If you have a team writing automation, you have people who can determine whether or not a test case can be automated. The person sifting through test cases, marking them manual or automated, determines who executes each test case.
If the test is manual, the work from here is straightforward: follow the steps. If you can't follow them, send the test case back to be rewritten. If you can, then pass the test, or fail it and write a bug.
If the test is marked for automation, then the engineer writing the test case automation will be following the steps. Just as with the manual tester, the person writing automation must stop if the test case's steps aren't absolutely crystal clear and return the test case to its author. Unlike a manual tester, the engineer cannot overlook ambiguity. Why? The compiler won't let you call a method passing the parameter "?" (well, it might, but the odds of that being the correct parameter aren't good). So the test is returned for a rewrite until every parameter necessary for automating the test case is clearly defined. The automation engineer doesn't get out of running the test manually either; that's part of why automation is so valuable. To write the automated script, the engineer will almost certainly run the test a number of times to get the automation to match the test case step by step. The precision and attention to detail that writing automation demands will lead to far fewer bugs in the AUT, as long as these rules are applied, along with this last one.

Brass Tacks Time

Now for the guts of an automated test case. I want to demonstrate the principles I’ve mentioned using a simple example. The AUT for my example is Wikipedia. In every case, this automation begins at the home page https://www.wikipedia.org/. Imagine you receive the following test case:
Synopsis: This test goes to the English landing page from the central logo link and verifies the URL is correct.

  1. Click on the English link next to the Central Logo that leads to the English Wiki landing page
  2. Get the actual URL of the English landing page
  3. Get the expected URL for the English landing page: https://en.wikipedia.org/wiki/Main_Page
  4. Compare actual and expected URLs, noting details if the test fails

The automated test case can use the manual test case as comments. Most importantly, there should be one method call for each step in the written test case. Here’s an idealized example that exists in running code available on GitHub:

        [TestMethod]
        public void goesToEnglishLandingPageFromCentralLogoLink()
        {
            // Click on the English link next to the Central Logo
            // that leads to the English landing page
            homePage.LinkCentralLogoEnglish.Click();

            // Get the actual URL of the English landing page
            string actualResult = homePage.getCurrentUrl();

            // Get the expected URL for the English landing page
            // https://en.wikipedia.org/wiki/Main_Page
            string expectedResult =
                HomePageTestResources.HomePageLinkToEnglishLandingPage;

            // Compare actual and expected URLs, noting details if the test fails
            Assert.IsTrue(actualResult.Equals(expectedResult),
                CommonMethods.FormatAssertMessage(expectedResult, actualResult));
        }


The point here is that the code is readable with little or no understanding of the C# language. The people writing the automation are using methods created in the framework that model the steps in a test case. This is why automation is a harsh taskmaster. The code is written clearly, step by step from the test case. There is no question what is being tested. There is no question what the results are. If the test case fails, it will be abundantly clear what went wrong.
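
The framework methods the test calls (LinkCentralLogoEnglish, getCurrentUrl, and the HomePageTestResources string) are defined in the framework project on GitHub rather than shown here. As a rough sketch of what that layer might look like, assuming Selenium WebDriver and a locator chosen purely for illustration, a page object along these lines would do the job:

        using OpenQA.Selenium;

        // Illustrative sketch only: the real page object lives in the GitHub
        // framework, and the locator below is an assumption, not the author's code.
        public class HomePage
        {
            private readonly IWebDriver driver;

            public HomePage(IWebDriver driver)
            {
                this.driver = driver;
            }

            // Models the element named in step 1:
            // "the English link next to the Central Logo"
            public IWebElement LinkCentralLogoEnglish
            {
                get { return driver.FindElement(By.Id("js-link-box-en")); }
            }

            // Models step 2: "Get the actual URL of the English landing page"
            public string getCurrentUrl()
            {
                return driver.Url;
            }
        }

Because the test script only talks to methods like these, the test itself stays a line-for-line mirror of the written test case.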

I’m a failure! Now what?

Here's what I get back when this test fails. For the sake of the example, imagine that while writing the automated script, I mistakenly entered the URL as https://en.wikipedia.org/wiki/MainPage (the link still works if you follow it in a browser, but it makes the comparison in the automation fail).
Here is part of the message from the output of the failure:

Test Name:        goesToEnglishLandingPageFromCentralLogoLink
Test Outcome:     Failed
Test Duration:    0:00:06.9473255
Result Message:   Assert.IsTrue failed.
Expected Value:   https://en.wikipedia.org/wiki/MainPage
Actual Value:     https://en.wikipedia.org/wiki/Main_Page


There is no question what the test was intended to do. There is no question that it failed or how it failed. How do you choose to fix this? That’s a question for another time.
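
The Expected Value and Actual Value lines come from the FormatAssertMessage helper passed to the assert. Its implementation isn't shown in this article, so take the following as a minimal sketch of the kind of helper that would produce output like the above:

        // Hypothetical sketch: the real CommonMethods class in the framework may
        // format its failure message differently.
        public static class CommonMethods
        {
            public static string FormatAssertMessage(string expected, string actual)
            {
                return "Expected Value: " + expected + System.Environment.NewLine +
                       "Actual Value:   " + actual;
            }
        }

Whatever its exact form, the helper's job is the same: put everything needed to diagnose the failure into the assert message, so no one has to reread the code to understand what went wrong.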

Author Bio

Denice Carver

When I started in the software industry, the number one word processor was WordStar. I got my first job there as a technical support specialist and had no formal training as a software engineer. It was at WordStar that I was introduced to software automation and got my first experience writing an automated testing system. The hardware involved one machine executing the commands on a second machine, and the automation language was Fortran. I left WordStar to work at Borland as a software quality assurance engineer. By my third job, I was working as a software engineer, writing the user interface (an early Wizard system) for Symantec's LiveUpdate in C++, my first object-oriented language. I have worked for a wide range of companies during my career: everything from Internet startups to large software companies like Borland, Symantec, and EMC. My recent efforts have included work for companies like Wells Fargo and American Specialty Health. I currently work at QualiTest Group, where our team manages test automation for Associated Press.
