
Wednesday, September 21, 2016

Legacy Code: Characterize or Test Protect?

In the context of working with legacy code, I often hear the terms "Characterization Tests" and "Test Protecting". Here is how I think about each term:


Characterization Tests: a suite of tests, built against legacy code, that uses enough representative inputs to convince me that I have characterized the behavior of the object under test. The suite also documents that behavior.

Test Protecting: building a suite of tests against legacy code that covers the object under test thoroughly enough that I am confident the suite protects me from inadvertently changing the object's behavior while refactoring or adding features.


One thing that is interesting to me is that the exact same suite of tests can both characterize and protect an object's behavior.

I was running a workshop today and introducing the concept of test protecting. Here are the steps I introduced today. If you are new to test protecting, they should be a good starting point. If you've done some test protecting, I hope you'll find some new nuance to your understanding. If you're already awesome at test protecting, maybe you'll want to teach from my steps, or comment with your insights about test protecting.

Steps I Teach to Test Protect Legacy Code:

Select a class to protect. 

Usually this is the class I need to fix a bug in or add a feature to. Or maybe I have a few spare cycles and want to add coverage to a frequently edited class. I do not select a rarely modified class to test protect - that would be a waste of time (and thereby money, whether mine or my employer's or client's).

Create a test class.

Legacy code, by one definition, is code that doesn't have tests, so it follows that a legacy class won't have a corresponding test class. Create a test file with a name and location that follow your unit testing framework's conventions.
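In a Java project using JUnit, that might be nothing more than a new, empty test class placed alongside your other tests. For example, if the class I'm protecting were a hypothetical DogWeightLookup, the new file might start out as just:

    public class DogWeightLookupTest {
        // Protecting tests get added here one at a time, method by method.
    }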
(Commit to your version control.)

Choose a public method to start with.

Often legacy classes are large and complex enough that it is difficult to test protect them well through only the public methods. I see several options (of varying risk) in that case:

  1. Change private methods to public
  2. Change private methods to package protected
  3. Don't protect private methods that are difficult to test through public methods.
  4. Refactor the class so that it is easier to test.
  5. Use a testing tool that allows you to test non-public methods.
  6. Protect as much as you can in a reasonable amount of time, assess risks posed by unprotected code, then decide on your next action.
The only answer that I am comfortable with is number 6.
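For reference, option 5 in Java usually means reflection, or a tool built on top of it. A rough sketch against a hypothetical private helper (the class and method names here are purely illustrative, not something from a real codebase) might look like:

    import java.lang.reflect.Method;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DogWeightLookupPrivateTest {

        @Test
        public void parsesARawWeightString() throws Exception {
            DogWeightLookup lookup = new DogWeightLookup();
            // Reach a hypothetical private helper, int parseWeightLbs(String raw), via reflection.
            Method parse = DogWeightLookup.class.getDeclaredMethod("parseWeightLbs", String.class);
            parse.setAccessible(true);
            int pounds = (Integer) parse.invoke(lookup, "40 lbs");
            assertEquals(40, pounds);
        }
    }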

Write a test that exercises the method you selected.

If you are at least somewhat familiar with the behavior of the method you selected, think about edge case inputs and happy and sad path inputs. Choose one to start with. Some devs like to start with edge cases, some like to start with the happy or sad path. I'm not sure it matters.

Give your test an absurd assertion against the result.

This step is as much like characterization as it is like test protecting. I want to be sure my test fails before it passes, so I choose a value that is unlikely to be correct. For instance, if the method I am exercising is:
int lookupDogAverageWeightLbs(String breed);
then I will choose a number like 5000:

assertEquals(5000, lookupDogAverageWeightLbs("Goldendoodle"));
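Inside the test class, that first protecting test might look something like this (assuming JUnit 4 and that the method is an instance method on the hypothetical DogWeightLookup class from earlier):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DogWeightLookupTest {

        @Test
        public void averageWeightOfAGoldendoodle() {
            DogWeightLookup lookup = new DogWeightLookup();
            // Absurd expectation on purpose - I want to watch this fail before I trust it.
            assertEquals(5000, lookup.lookupDogAverageWeightLbs("Goldendoodle"));
        }
    }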

Run the test so it fails.

In the previous example, your test should fail with a message something like:
Expected [5000] got [40].
If it doesn't, check your assumptions and look for mistakes in your exercising of the method or in how you wrote your assertion.

Fix the test so it passes.

When we test drive, we assert the expected value, run the test to see it fail, and then write the least amount of code needed to make it pass. In test protection, the existing implementation is the standard of behavior, so instead we fix our expectation to make the test pass. From the absurd failure message above, you can easily see the actual result the implementation delivers and update the expectation so that the test passes:
assertEquals(40, lookupDogAverageWeightLbs("Goldendoodle"));
(Commit to your version control.)

Examine the test coverage of the method you are exercising.

If your coverage tool doesn't automatically run when you run tests (why doesn't it?), then run your coverage tool. Find which lines of your exercised method are not executed by your first test. If there are none, your method is test protected.

Write another test that hopefully increases coverage. 

Write a new test and give your exercised method new input data. Use inputs that, based on whatever understanding you have of the method so far, will exercise a non-covered line. Sometimes the name of the method will give you an idea, sometimes you will have gleaned some tentative understanding by scanning the implementation code while looking at coverage, and sometimes you still won't have a clue. Pick new inputs however you can and run your test.
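Sticking with the hypothetical example: suppose the coverage report shows an uncovered branch that looks like it handles breeds the lookup doesn't recognize. A second test guessing at that branch might look like this (again starting from an absurd expectation):

    @Test
    public void averageWeightOfAnUnrecognizedBreed() {
        DogWeightLookup lookup = new DogWeightLookup();
        // Guessing that a breed the method doesn't know about takes the uncovered branch.
        // Absurd expectation again; I'll fix it to the actual value once I see the failure.
        assertEquals(5000, lookup.lookupDogAverageWeightLbs("Chupacabra"));
    }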

Get the new test passing.

Your new test might have gone green on its very first run, without ever going red. That's ok. In test driving, a test that doesn't go red before going green is worthless and time-wasting at best and possibly dangerous at worst. But in test protecting, such a test is only worthless if it does not add to the method's coverage. If your new test didn't increase the method's coverage, change its inputs until it does. This is another difference from test driving: there we choose inputs that make the test fail and then implement code to make it pass, changing the inputs only if the ones we chose did not make the test fail.
(Commit to your version control.)

Check the coverage of your method again.

If you have not covered all the lines in the method, jump back to "Write another test that hopefully increases coverage." If you have covered all the lines in the method, start over again with "Choose a public method to start with." If you've covered all the lines in the class, you have test protected the class. Congratulations!

Make sure new tests are run in CI.

If there are already other test classes in your project, your continuous integration server may already be configured to run tests in all test classes. If not, adjust your CI server's configuration to be sure that it runs your new tests. Kick off a build and inspect the build output to be sure your new tests ran.

