Elevating consulting-ware

Back when I was working at Clarify, a good chunk of my time was spent doing custom development work, i.e. consulting-ware. We developed the code, and handed it off to the customer. We (the consultants) did some testing, but the customer was responsible for the bulk of the testing.

There seems to be some sort of difference in mindset when developing a product versus developing consulting-ware. I’m not sure why.

For the last few months I’ve been working with one of our developers on a customer engagement, building some custom web services, amongst other things. For this project, we’ve incorporated some of our standard product development practices into the consulting work.

Unit Tests and Integration Tests

The biggest practice we’ve pushed into custom development work is automated tests. Along with the solution, we’re delivering a set of unit tests and a set of integration tests to the customer.

The project itself is about a dozen web services. We currently have 155 unit tests and 77 integration tests.

Now, we can prove that the system works, and the customer can prove it to themselves as well. If there’s a problem, we can have the customer run the tests, and we can easily track down where the issue is.
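
To give a flavor of what gets handed over, here’s a minimal sketch of a unit test in this style. The service and method names are hypothetical stand-ins, not the customer’s actual code; it’s just ordinary NUnit:

    // Minimal sketch only. "CaseService" is a hypothetical stand-in for the
    // real web service code, not the customer's actual class.
    using System;
    using NUnit.Framework;

    public class CaseService
    {
        public string CreateCase(string siteId, string description)
        {
            if (siteId == null || siteId.Length == 0)
                throw new ArgumentException("A site id is required.");
            return Guid.NewGuid().ToString();
        }
    }

    [TestFixture]
    public class CaseServiceTests
    {
        [Test]
        public void CreateCase_with_valid_data_should_return_a_case_id()
        {
            CaseService service = new CaseService();
            string caseId = service.CreateCase("SITE-001", "Printer is jammed");

            Assert.IsNotNull(caseId);
            Assert.IsNotEmpty(caseId);
        }
    }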

When I spoke with one of the developers at the customer, he said that they don’t do any automated testing. He wasn’t familiar with TestDriven.NET or NUnit at all. I’m hoping that our practice of delivering tests will help nudge the customer down the road to testing goodness.

Specification Report

In addition, we’ve started experimenting with nunit-spec, a Behavior-Driven Development extension library for NUnit written by one of our developers, Scott Bellware. You point it at an assembly, and it generates a report of all of the tests contained therein. It’s really interesting to see how the tests read when you take them out of Visual Studio and put them into a document format (nunit-spec generates an HTML document).
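
To make the idea concrete (and to be clear, this is not nunit-spec’s actual code or command line, just a sketch of the general technique), a tool like this can reflect over the test assembly, find the [Test] methods, and turn their underscored names back into readable sentences:

    // Sketch of the general reflection technique only; NOT nunit-spec itself.
    using System;
    using System.Reflection;
    using NUnit.Framework;

    public class SpecReportSketch
    {
        public static void Main(string[] args)
        {
            // Load the test assembly the tool is "pointed at".
            Assembly testAssembly = Assembly.LoadFrom(args[0]);

            foreach (Type fixture in testAssembly.GetTypes())
            {
                foreach (MethodInfo method in fixture.GetMethods())
                {
                    // Any method marked [Test] becomes one line of the report.
                    if (method.GetCustomAttributes(typeof(TestAttribute), false).Length > 0)
                    {
                        Console.WriteLine(method.Name.Replace('_', ' '));
                    }
                }
            }
        }
    }

nunit-spec produces a proper HTML document rather than console output, but the sketch shows the general shape of the idea: the test names themselves become the report.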

For example, on the first iteration of work for this customer project, the developer had test names such as:

  • Test Missing Case ID
  • Test SitePart Data
  • Test Case Closed Data Closed

After running nunit-spec and feeding that info back to the developer, the next iteration contained test names such as:

  • Create Case should throw exception when the contract has expired
  • AddNotesToCase should throw exception with invalid CaseId
  • Contract with no parts should return empty results

Much better! The tests clearly convey the behavior of the system.
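
For illustration only (these aren’t the customer’s actual fixtures), here’s roughly how that second set of names might read as NUnit test methods, which is the raw material nunit-spec pulls back out for the report:

    // Illustration only; bodies elided, names taken from the list above.
    using NUnit.Framework;

    [TestFixture]
    public class CaseWebServiceSpecs
    {
        [Test]
        public void CreateCase_should_throw_exception_when_the_contract_has_expired()
        {
            // ... arrange an expired contract, call CreateCase, assert the exception ...
        }

        [Test]
        public void AddNotesToCase_should_throw_exception_with_invalid_CaseId()
        {
            // ... arrange an invalid CaseId, call AddNotesToCase, assert the exception ...
        }
    }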

We’ve also made this report available to the customer. This allows the business analyst at the customer to understand what the system does, without having to open Visual Studio and pore over the code of the test project, which I don’t think she should ever have to do.

If you’re interested, Roy Osherove has some good background info on this style of test naming.

I’ve been very excited over the last year or so at the changes we’ve made in our product development practices, and now I’m very jazzed to see these practices elevating our consulting work as well.