
SentryOne Unit Test Generator – Our Journey (Part 3)

In my previous posts, I covered how the Unit Test Generator works and why we chose to implement it. If you haven’t read them, check out Part 1 and Part 2. In this third post, I will talk about what we did to implement it and the structure of the extension.

What We Did

So, full disclosure: we did not finish this all during the Innovation Sprint. We made a great start, but like any software project there was a lot left to do at the end of the first frenetic development phase.

The basic structure is a Visual Studio extension wrapper and a core project that contains the actual code generation. This was a great side benefit of moving to Roslyn—the code generation can run in isolation outside of Visual Studio, which had an important impact on testability, since Visual Studio is not a pleasant environment from the inside. Unit testing Visual Studio extensions is extremely hard—you very quickly descend into COM-based interop, depending on which internal services you rely on. The idea was to keep the Visual Studio-specific code as light as possible: really the bare minimum that we could write while still achieving the integrations that we wanted.
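
To illustrate why that separation matters, here is a minimal sketch of how the core generation could be driven from a plain console app or unit test, with no Visual Studio services in sight. The Roslyn plumbing is standard; the CoreGenerator.GenerateTestsFor call at the end is a hypothetical stand-in for the core library's entry point, not its actual API.

    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;

    public static class GeneratorHarness
    {
        public static void Run()
        {
            // Source we want to generate tests for, parsed directly from text.
            const string source = @"
    public class Calculator
    {
        public int Add(int a, int b) => a + b;
    }";

            var tree = CSharpSyntaxTree.ParseText(source);

            // Build a compilation so that we have a semantic model to query.
            var compilation = CSharpCompilation.Create(
                "Sample",
                new[] { tree },
                new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });

            SemanticModel model = compilation.GetSemanticModel(tree);

            // Hypothetical entry point into the core project. It only needs the
            // syntax tree and semantic model, so it can run (and be tested)
            // entirely outside Visual Studio.
            // string generatedTests = CoreGenerator.GenerateTestsFor(tree, model);
        }
    }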

The old extension worked through the solution explorer exclusively—and I thought that was good enough until I experienced the code editor integration offered by the new extension. I really did not know what I was missing. A case in point was navigating to the existing tests. Previously that involved moving the mouse across to the top of the solution explorer, clicking the ‘sync’ button to highlight the currently open document, right-clicking in the solution explorer, and clicking ‘go to tests.’ Now, you just right-click on any member in the type and click ‘go to tests.’ While the previous method doesn’t sound like a herculean effort, it really did interrupt flow more than it needed to.

[Image: Generation of tests available in the code editor, via the source context menu]

The generation of tests is now available in the code editor as well, which again is a massive improvement. One additional feature that we implemented was regeneration—generating tests for certain members again and overwriting the existing tests. This is particularly useful when you’ve added a constructor parameter, for example, and you want to re-create the parameter null check tests for the constructor. (Pro tip: hold Shift while opening the context menu to see the regeneration option.)

There are also some niceties we added to the extension, such as configurable naming for projects, files, and types, and automatic creation of test projects, including installing dependencies for your chosen frameworks from NuGet.

Getting to the Core of the Issue

The core library is where all the magic happens. This is where we query the Roslyn semantic model to inspect the type we want to generate tests for and generate the output. The generation is based around strategies—small bits of focused code that understand one particular type of source code and generate the tests for that type.
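
To give a feel for what a strategy looks like, here is a rough sketch of the general shape. The interface name and members below are illustrative rather than the extension's actual types: a strategy decides whether it applies to a given symbol and, if it does, emits test methods as Roslyn syntax nodes.

    using System.Collections.Generic;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    // Illustrative shape of a generation strategy (hypothetical names).
    public interface IGenerationStrategy
    {
        // Does this strategy know how to produce tests for the given symbol
        // (a constructor, method, property, indexer, and so on)?
        bool CanHandle(ISymbol symbol);

        // Emit the test methods for that symbol as Roslyn syntax nodes.
        IEnumerable<MethodDeclarationSyntax> Create(ISymbol symbol, SemanticModel model);
    }

The generator can then walk the members of the type under test, ask each registered strategy whether it applies, and collect the emitted methods into the test class.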

We have several types of strategy in the code:

  • Class—This strategy generates the base code for the test class, including creating fields for each constructor parameter and the setup method. The abstract class strategy is by far the most complex because it involves generating a derived type and working out what abstract members require implementation. This is harder than it sounds!
  • Class level—These are constructor tests, where we exercise the various constructors of the type under test. We also check that reference-type parameters guard against null objects and that string parameters guard against null, empty, or whitespace strings.
  • Method—For methods, we generate boilerplate test methods that call the method, plus the same string/object guard checks we do for constructors.
  • Properties—We recognise members whose name matches a constructor parameter name and check that the value matches. We also check that properties that belong to an INotifyPropertyChanged class correctly raise the PropertyChanged event.
  • Indexers—Like methods, we emit boilerplate tests for indexers.
  • Values—This one is quite interesting in that we try to select values randomly. So, all built-in types will pick a random value of the correct type. But equally we try to pick values for more complex types too, like arrays. If we detect that a required value is a POCO type, then we will emit an inline initializer for the type (a simplified sketch of the idea follows this list).
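
Here is the simplified sketch promised above: a hypothetical helper, not the extension's actual code, showing the idea that built-in types get a random literal and anything else falls back to an object creation expression.

    using System;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    // Simplified, hypothetical version of the value-selection idea.
    public static class ValueGenerator
    {
        private static readonly Random Rng = new Random();

        public static ExpressionSyntax GetValueFor(ITypeSymbol type)
        {
            switch (type.SpecialType)
            {
                case SpecialType.System_Int32:
                    // A random int literal, e.g. 42.
                    return SyntaxFactory.LiteralExpression(
                        SyntaxKind.NumericLiteralExpression,
                        SyntaxFactory.Literal(Rng.Next(1, 100)));
                case SpecialType.System_String:
                    // A random string literal, e.g. "TestValue372".
                    return SyntaxFactory.LiteralExpression(
                        SyntaxKind.StringLiteralExpression,
                        SyntaxFactory.Literal("TestValue" + Rng.Next(1000)));
                case SpecialType.System_Boolean:
                    return SyntaxFactory.LiteralExpression(
                        Rng.Next(2) == 0
                            ? SyntaxKind.TrueLiteralExpression
                            : SyntaxKind.FalseLiteralExpression);
                default:
                    // Fall back to 'new SomeType()' for POCO-style types; the real
                    // strategy would also emit an inline initializer for its properties.
                    return SyntaxFactory.ObjectCreationExpression(
                            SyntaxFactory.ParseTypeName(type.ToDisplayString()))
                        .WithArgumentList(SyntaxFactory.ArgumentList());
            }
        }
    }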

Some things happen outside of the strategies. For example, any time we emit a type reference, we store it so that we can add the relevant using statement to the namespace, and, optionally, add the assembly reference to the test project.
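
The bookkeeping is fairly mechanical. A simplified sketch of the idea (not the actual implementation): collect the namespace of every type we reference into a set, then add a using directive for each one when the generated compilation unit is assembled.

    using System.Collections.Generic;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    public class ReferenceTracker
    {
        private readonly HashSet<string> _namespaces = new HashSet<string>();

        // Called whenever a strategy emits a reference to a type.
        public void Track(ITypeSymbol type)
        {
            if (type.ContainingNamespace != null && !type.ContainingNamespace.IsGlobalNamespace)
            {
                _namespaces.Add(type.ContainingNamespace.ToDisplayString());
            }
        }

        // Applied once, when the generated test file is put together.
        public CompilationUnitSyntax AddUsings(CompilationUnitSyntax unit)
        {
            foreach (var ns in _namespaces)
            {
                unit = unit.AddUsings(
                    SyntaxFactory.UsingDirective(SyntaxFactory.ParseName(ns)));
            }

            return unit;
        }
    }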

Testing the Test Generator

We wanted to test the test generator using the test generator—which was an interesting exercise. We bootstrapped the test generator enough to be able to run it in Debug, opened the test generator project, and started writing the tests. This really helped in the development of the generator itself, even though it was a weird “chicken and egg” experience.

[Image: Which came first, the test generator, or the tests for it?]

For the non-strategy-based code, this bootstrapping worked really well. For the strategy code, classic unit tests didn’t meet the standard that we wanted to achieve. Our first priority was to ensure that output could be compiled for every combination of mocking and test framework and didn’t interfere with the methods generated by any of the other strategies. We used a multi-pronged approach here:

  • We created a set of text file resources that covered all the cases that are catered for by the strategies and then made a test that parses a semantic model from each resource, generates the tests, and ensures that the generated test and the resource compile. We do this against the Cartesian product of the available mocking and testing frameworks to provide us with great coverage of the strategies and make sure that we generate syntactically correct code (a stripped-down sketch of this check follows the list).
  • We also created a set of SpecFlow tests to verify the output of tests. In this instance, we’re worried less about whether the code compiles in all variations of framework and more worried about the actual output from the strategies. This helps us to answer the question, “Do the generated tests actually cover the scenarios that we want them to cover?” This part of the process was largely completed by an engineering intern who came in to learn about software development in the real world. While that’s a topic for another post, I’d like to say, “Thank you!” to Will Atherton for his efforts on the SpecFlow tests.
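
The stripped-down sketch of that compile check mentioned in the first bullet looks something like this (framework references elided; the real tests add the references for whichever mocking and testing frameworks are being combined):

    using System;
    using System.Linq;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;

    public static class CompileChecker
    {
        // Parse the resource and the generated tests into one compilation and
        // fail if there are any errors. The real tests repeat this for every
        // combination of mocking and testing framework.
        public static void AssertCompiles(string sourceUnderTest, string generatedTests)
        {
            var compilation = CSharpCompilation.Create(
                "GeneratedTests",
                new[]
                {
                    CSharpSyntaxTree.ParseText(sourceUnderTest),
                    CSharpSyntaxTree.ParseText(generatedTests),
                },
                new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
                new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

            var errors = compilation.GetDiagnostics()
                .Where(d => d.Severity == DiagnosticSeverity.Error)
                .ToList();

            if (errors.Count > 0)
            {
                throw new InvalidOperationException(
                    "Generated code failed to compile: " + string.Join("; ", errors));
            }
        }
    }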

Remember: our aim here isn’t to ensure the tests that are generated pass when you run them—we are generating tests to be completed. For simple POCOs or model types, we’ll generate tests that compile, pass, and completely cover the source type. For methods, however, we don’t want to get into generating test code that completely exercises the content of the method body. That would firstly be outside of the intended scope and secondly would take away a large chunk of the benefit of testing—verifying that the code matches the intent. If we generate tests to completely exercise the content of a method, all we are really testing is that the compiler works, rather than testing that what was written does what we thought it would.

The Future

We are currently open sourcing the Unit Test Generator—we feel like we have made a great start. It will be fascinating to see how the community engages with it. There are some things that we would like to do in the future:

  • Extensibility—It would be nice to offer people the ability to plug in their own strategies without having to fork the repo.
  • Wider array of built-in strategies—This one speaks for itself.
  • More respectful regeneration—When regenerating tests, it would be nice if we carried over any user customizations to the test to the new method. This quickly becomes frighteningly complex, but it would still be awesome to try!
  • Support for other languages—It would be nice to be able to support building tests for F#. Perhaps the community will be interested in support for VB.NET too?
  • Further enable the “go to” feature—We’d like to make the ‘go to’ feature able to navigate to the first of the relevant generated tests in the test class when a symbol is selected.

Perhaps you’d be interested in helping move it forward? Feel free to join in by downloading the extension and contributing your ideas!


Matt is Director of Platform Delivery at SentryOne, facilitating development activities across our product portfolio. Having spent the first part of his career working on payment and loyalty systems with several high-volume databases, Matt developed a passion for tooling around database systems. He took some time to develop the tools that eventually became DBA xPress, which were acquired by Pragmatic Works. After working with Pragmatic Works to build out their database tooling, Matt joined SentryOne, where he is excited to have the opportunity to take that tooling to the next level.
