Given: a .net BDD framework

are you familiar with behavior driven development?  the high points are that each work item’s acceptance criteria should be executable, and that at least these ‘acceptance tests’ should be written using a ubiquitous language co-crafted by development and product.  the general idea is something like:

Scenario 1: Account is in credit
Given the account is in credit
And the card is valid
And the dispenser contains cash
When the customer requests cash
Then ensure the account is debited
And ensure cash is dispensed
And ensure the card is returned

Dan North: Introducing BDD

breaking that down a little: you are describing a scenario, and the rest of the prose describes what makes up that scenario.  each ‘given’ builds up the context that the ‘when’ statement is executed inside of.  each ‘then’ statement is an expected behavior.  if you want more detail i’d suggest a read through Dan North’s blog.

as a .net developer there are some really great tools for test driven development: resharper, ncrunch, nunit, and the list could go on for miles.  there are also some very neat tools for bdd, like specflow.  there’s always been a catch though: test runners like nunit are much too focused on the outcome rather than the behavior, and tools like specflow are hard to use unless you have a really awesome BA/product owner.  to alleviate what i see as a gap in tooling i’ve created what i hope will one day be a comprehensive, test-runner-agnostic bdd framework for .net.  it’s called Given, and this is what it looks like:
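
the screenshot that originally sat here showed the nunit flavor of a Given spec.  as a stand-in, here’s a rough sketch of the cash-machine scenario above written as a plain given/when/then shaped nunit fixture; the domain types and names below are purely illustrative, and this is not Given’s actual syntax:

using NUnit.Framework;

// tiny stand-in domain types so the sketch compiles; purely illustrative
public class Account { public decimal Balance; }
public class Card { public bool IsValid; }
public class CashDispenser
{
    public decimal CashOnHand;
    public bool CardReturned;

    public void RequestCash(Card card, Account account, decimal amount)
    {
        if (!card.IsValid || account.Balance < amount || CashOnHand < amount) return;
        account.Balance -= amount;
        CashOnHand -= amount;
        CardReturned = true;
    }
}

[TestFixture]
public class when_the_customer_requests_cash_and_the_account_is_in_credit
{
    Account account;
    Card card;
    CashDispenser dispenser;

    [SetUp]
    public void EstablishContextAndAct()
    {
        // given: the account is in credit, the card is valid, and the dispenser contains cash
        account = new Account { Balance = 100m };
        card = new Card { IsValid = true };
        dispenser = new CashDispenser { CashOnHand = 500m };

        // when: the customer requests cash
        dispenser.RequestCash(card, account, 20m);
    }

    [Test]
    public void then_the_account_is_debited()
    {
        Assert.That(account.Balance, Is.EqualTo(80m));
    }

    [Test]
    public void then_cash_is_dispensed()
    {
        Assert.That(dispenser.CashOnHand, Is.EqualTo(480m));
    }

    [Test]
    public void then_the_card_is_returned()
    {
        Assert.That(dispenser.CardReturned, Is.True);
    }
}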

given currently has wrappers for both nunit and mstest (there’s an example project for each on github).  the syntax is almost entirely the same, and i’ve done my darnedest to get to feature parity between nunit and mstest despite microsoft’s best efforts.  the main goal here is readability.  you could pretty much hand the results of these tests to a product owner at the end of the sprint to show that all of their acceptance criteria had been met and were being tested, if only you could somehow export the results…

[screenshot: sample html report output from Given]

Given has a built-in reporting engine, and the default output uses t4 templating to export your test results to an html file.  the interface that the report engine uses is public, and can be overridden by your test project simply by having an implementation present.  you’d rather the output be xml or json or a pdf?  have at it.  i’d love a pull request when you’re done so i can integrate it as an option in the core.
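
for example, a json writer might look roughly like the sketch below.  the interface and member names here (IReportWriter, TestRunReport, Write) are hypothetical stand-ins on my part; the real contract lives in Given’s source:

using System.IO;
using Newtonsoft.Json;

// hypothetical contract: the actual interface name and shape are defined in Given's source
public class JsonReportWriter : IReportWriter
{
    // serialize the run's results to json instead of the default t4/html output
    public void Write(TestRunReport report)
    {
        var json = JsonConvert.SerializeObject(report, Formatting.Indented);
        File.WriteAllText("given-report.json", json);
    }
}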

using Given is as easy as installing the right variant as a nuget package.  for nunit use


install-package given.nunit

for mstest use


install-package given.mstest

so that pretty much covers the basics.  there is obviously a lot of nuance left out of this post, and i’d suggest that you take a look at the example projects on github for a little more depth.  thanks for taking a look!


how to know when things aren’t going well

[photo]

why isn’t it always this obvious?

i worked at the place the above picture is from not too terribly long ago.  there had been some major product upheaval, toes were stepped on, and teams were redistributed.  people were not happy.  one guy took the time to type and print a two-page list of all of the bad things that had happened in that particular sprint.  we all knew things were going poorly.  within four weeks that entire scrum team had quit and moved on to other jobs with varying degrees of happiness.  sometimes it’s very obvious when things aren’t going well at a company.  i’m finding that’s rarely the case, though.

don’t be scared, it’s just me, aop…

aop (aspect-oriented programming) is a concept i’ve always liked and hated in equal parts.  i think any typical object-oriented programmer rails against the idea of having aspects injected into their code at compile- or run-time based on a couple of class decorators.  i’ve come to the point, though, where i get the benefit.  the whole cross-cutting concerns thing hits home, so i’ve been investigating it.

a long time ago, a friend and i were discussing aop, and postsharp came up.  if you don’t know what it is, or what it does, let me begin by saying it is crazy.  hats off to the guys at sharpcrafters for devising something so wickedly magical.  the basic idea is that you inherit from certain attributes that they define, and then decorate classes with those new aspects.  that’s all well and good, normal aspect-oriented stuff so far.  it gets crazy at compile time.  postsharp hooks into the compiler and actually alters the IL that runs on the CLR, basically adding code to your code, without you having to do anything but add an attribute.  want every method on your class to be wrapped in a try/catch where the catch logs the exception and then rethrows it?  add an attribute, build, and voila, you now have logging around every method.  not only do you have logging, but unlike other dynamic-proxy-based solutions, it’s testable.  i’m not sure that i’d ever want or need to test this kind of boilerplate logging, but let’s assume a more complex scenario like wrapping methods with a TransactionScope, or decorating certain methods for audit logging.  those things both reflect real business behavior and should be testable, but an IoC container plus dynamic proxies won’t let you unit test that these aspects are appropriately applied.
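
to make that concrete, the logging example looks roughly like this; i’m recalling the postsharp api from memory, so treat the exact type and member names as approximate:

using System;
using PostSharp.Aspects;

// an aspect that wraps every method it is applied to in a try/catch,
// logs the exception, and rethrows it (names per the postsharp api as i remember it)
[Serializable]
public class LogExceptionsAttribute : OnExceptionAspect
{
    public override void OnException(MethodExecutionArgs args)
    {
        Console.Error.WriteLine("{0} threw {1}: {2}",
            args.Method.Name, args.Exception.GetType().Name, args.Exception.Message);

        // rethrow so callers still see the original exception
        args.FlowBehavior = FlowBehavior.RethrowException;
    }
}

// decorating the class applies the aspect to every one of its methods at compile time
[LogExceptions]
public class PaymentService
{
    public void Charge(decimal amount)
    {
        if (amount <= 0) throw new ArgumentOutOfRangeException("amount");
        // ... the real work would happen here ...
    }
}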

postsharp’s IL weaving allows for testable AOP, and i think i can stand behind that.  for now.


i’d really like to write more.  i honestly believe i have it in me to do it.  i just lack discipline.  well, discipline isn’t the only thing i lack, but we won’t get into that at the moment.  i’ve got some ideas.  i’ll likely have more before this is through.  i just need to get started.

first
