Starting an open-source rewrite of a validation microframework

A few years ago I created a validation framework that I felt was uniquely capable of addressing the complex validation scenarios I’ve come across. The following is a brief look at how it came to be and where it’s at.

If you’re familiar with DDD, you’ll remember the pattern for expressing business rules via specifications. While simple and powerful, I found it to be insufficient: what we want to know in real-world scenarios is not only whether something satisfies a given criterion, but also what it means if it does or doesn’t.

Here’s an example: given an entity E, you need to ensure that a field X is of a certain length. You can have a rule, here expressed as a function:

 e => e.X.Length < MAX_LENGTH

Great, you’ve got the logic, nicely encapsulated inside your domain. It can be referenced by name and it’s testable. Now what? What are we going to do with the fact that the rule evaluated to false?

What all of this lacks is context. What is the context of this evaluation? Why are we doing it? To throw an exception? No; we need more information even if we throw an exception, and even having composable rules (named Specifications by Evans) isn’t going to help us tell the user what we think is wrong.

Here’s an example of context: a user is entering some text into field X on a web form. Joel Spolsky would have a field day ridiculing your UX, but you want to tell the user that what he’s entering is too long.

Another example using the same rule in a different context: your API is called to import a list of Es, and you want to collect all the errors to respond with once you’re done processing the batch.

We most certainly want to reuse the rule, but what we do with the outcomes of the evaluation is completely different. How about an entirely different UI? Or another application that makes use of your domain internally? They may all want to do something different with the outcome!
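To make the reuse concrete, here’s a minimal sketch (the names and the MAX_LENGTH value are hypothetical, not the framework’s API) of one rule delegate driven by two different contexts:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class E { public string X; }

public static class LengthRule
{
    public const int MAX_LENGTH = 10;

    // The rule itself: one definition, referenced by name, testable.
    public static readonly Func<E, bool> XIsShortEnough = e => e.X.Length < MAX_LENGTH;

    // Context 1: a web form validates a single entity and reports immediately.
    public static string ValidateForm(E e) =>
        XIsShortEnough(e) ? null : "The text you entered is too long.";

    // Context 2: a batch import runs the same rule and collects every failure.
    public static List<string> ValidateBatch(IEnumerable<E> batch) =>
        batch.Select((e, i) => new { e, i })
             .Where(p => !XIsShortEnough(p.e))
             .Select(p => $"Item {p.i}: X is too long.")
             .ToList();
}
```

The rule is written once; only the handling of its outcome differs per context.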

I dislike the idea of validation as a cross-cutting concern (addressed with attributes) for the same reason – it makes no effort to incorporate the context.

Here’s another example of the context: Say the entity E contains two complex properties of the same type:

class E { public A A1; public A A2; }

which contains property B:

class A { public string B; }

And your rule is to permit the value “b1” for the A1.B property and “b2” for the A2.B property. You can see the context (it’s the overall path to property B), but your attributes can’t. It’s very hard to make sense of things looking from the inside out.
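Written from the outside, the check on each B carries the path that leads to it, so the outcome can name the offending property. A sketch, with hypothetical names:

```csharp
using System;

public class A { public string B; }
public class E { public A A1; public A A2; }

public static class OutsideInRules
{
    // From the outside we know the full path to each B, so the outcome
    // can say *which* B was wrong instead of a context-free "B is invalid".
    public static string Check(E e) =>
        e.A1.B != "b1" ? "A1.B must be \"b1\"" :
        e.A2.B != "b2" ? "A2.B must be \"b2\"" :
        null;
}
```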

This leads me to my proposition: validation is best performed and described from the outside, looking in.

Everything is easier when you have access to all sorts of context: the use case, the user, dependencies, etc. It’s easier to implement and it’s easier to understand.

So what do we need, other than the rules?

We’ll need to redefine what a Specification is. We’ll use Specifications to associate Rules with outcomes.

We’ll also need to express Interpretations of the rules in a manner that lets us carry out contextual parametrization (such as reporting the offending value, the constraints of the rule itself, localization, etc.).

For maximum flexibility we’ll need factories of Interpretations and of Actions to carry out given an outcome. Enter: Bindings.
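Put together, the shape of these abstractions might look something like this. This is a sketch of the idea only, not the actual API of the framework:

```csharp
using System;
using System.Collections.Generic;

// A Rule is just a predicate over the entity.
public delegate bool Rule<T>(T entity);

// An Interpretation turns an outcome into a contextual message
// (offending value, the rule's constraints, a localization hook, etc.).
public delegate string Interpretation<T>(T entity);

// A Specification associates a Rule with what its outcomes mean;
// a Binding (here just an Action) decides what to do with the result:
// throw, show inline, collect into a batch report...
public class Specification<T>
{
    public Rule<T> Rule;
    public Interpretation<T> WhenViolated;

    public void Evaluate(T entity, Action<string> binding)
    {
        if (!Rule(entity))
            binding(WhenViolated(entity));
    }
}
```

A web form might bind something like `msg => ShowInline(msg)`, while a batch import binds `errors.Add`; the Specification and its Rule stay untouched.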

The framework was very successful and ended up being used in several Novell products (of PlateSpin fame), so I’ve decided to do an open-source rewrite. That’s the story behind the microframework I posted on GitHub.

At the moment, I’ve got the basics set up: the build, the tests and the major components. The library is .NET Portable, so I’d like to have samples in MVC, WPF, Silverlight and WP.


DDD – misunderstood

Over the years since finding out about Domain-Driven Design I went through several iterations of grokking it. I’ve also seen other people completely miss the point of “tackling the complexity at the heart”.

Here are the mistakes I went through myself and still see others repeating:

  • I’ve dismissed it as nothing but a rehashing of OOD principles
  • I’ve overdone “IDs are impedance artifacts; an instance reference is my ID”
  • I’ve overdone “everything is an object, therefore every class will manage its own behavior”
  • I’ve modeled my entities and called it my domain model

I think Evans’ book is partly to blame; the man just wasn’t a writer. Partly, the reason is that what we seek is like the shadows in Plato’s cave: we may be able to glean the shape of the ultimate form, but it’s only an approximation of the real thing, lacking detail.

Here’s my current understanding:

  • It’s the principles!
    We use patterns and principles in our solutions all the time. OOD is great, but it is too abstract to be of any value by itself. SOLID is a good starting set, but it doesn’t go into domain modelling. There are lots of other principles that are suitable in one case or another. DDD is just like that: it’s a way to capture the desired characteristics of your problem domain in your code. Bringing any kind of dependency, especially a large framework, into the domain model deserves a frown. The focus of the modelling and the principles applied come at a cost, and that investment is wasted if something is sacrificed for a framework.
  • It’s about the code!
    We have models everywhere: some of them communicate with the database, some communicate to/from external services, and some talk to people. DDD is about the code at the core of the problem you are working to solve. There may be a need for a different model for each of those purposes. In CQRS, for example, you are expected to have two separate models within the same app!
  • It’s about modelling the behavior!
    Which leads me to the ultimate purpose of domain modelling: capturing the behavior as close to the way the business treats it as possible. Have you heard about coding dojos where you have to try to solve a problem without ever using setters? Well, it’s kind of like that, only with a real purpose. Immutability, for example, is of paramount importance when modelling the domain, because it’s one of the few ways to express the meaning and the intent of a particular interaction.
  • It’s about testability!
    Presumably, there’s a significant cost attached to a model that permits wrong behavior. Keep the model small and abstract, and inject all the infrastructure. We shouldn’t need a complex setup to test a theory or modify a behavior, and we should test continuously. The model and its Behavior-Driven tests are our documentation; they’re how the coder captures and understands the requirements.
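As an illustration of the behavior-and-immutability points above, here’s a hypothetical (and deliberately tiny) order model that expresses intent through a named behavior rather than setters:

```csharp
using System;

// Immutable: state changes only through a named business behavior,
// so the model documents *why* its state may change.
public class Order
{
    public decimal Total { get; }
    public bool Shipped { get; }

    public Order(decimal total) : this(total, false) { }
    Order(decimal total, bool shipped) { Total = total; Shipped = shipped; }

    // "Ship" captures the business interaction and its rule in one place;
    // there is no Shipped setter to abuse.
    public Order Ship()
    {
        if (Shipped) throw new InvalidOperationException("Already shipped.");
        return new Order(Total, true);
    }
}
```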

Happy coding!


Taking control of your application development – managing dependencies

If we look at the Wikipedia article on coupling, we’ll find that it’s synonymous with dependency. Just about everybody knows that coupling is bad, and yet you can still find a dependency graph like this:
Presentation->Business->Data Source
(meaning Presentation depends on Business, which depends on Data Source)
Assuming the implementation is not superficial and nothing like typeless datasets is being passed directly up to Presentation, let’s look at what this graph allows:
– direct use of Business abstractions in Presentation (good)
– no direct coupling of Presentation with Data Source (good)
– Data Source driving the interface/protocol for communications with Business (bad)
– testing of Data Source in isolation (worthless, because unless our Business abstractions are just DTOs, we are not involving any Business rules)
And what it doesn’t allow:
– testing Business in isolation (bad)
– testing Presentation in isolation from Data Source (bad)
– substitution of Data Source with another implementation (bad)
– and related to the previous, reuse of Business w/o the reuse of Data Source (bad)
Does that look like the Pit of Success to you? I don’t think so.
Here’s another graph suggested by Domain-Driven folks like Eric Evans and Jimmy Nilsson:
Presentation->Business<-Data Source
This graph allows us to keep all the good stuff from the previous graph and eliminate all the bad stuff.
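In code, the inverted arrow means Business owns the abstraction and Data Source implements it. A minimal sketch with hypothetical names:

```csharp
using System;
using System.Collections.Generic;

// Business layer: owns the abstraction the Data Source must satisfy.
public interface ICustomerRepository
{
    Customer FindByName(string name);
}

public class Customer { public string Name; }

public class BusinessService
{
    readonly ICustomerRepository repository;
    public BusinessService(ICustomerRepository repository) { this.repository = repository; }

    public bool CustomerExists(string name) => repository.FindByName(name) != null;
}

// Data Source layer: depends on the Business abstraction, not the other
// way around; an in-memory implementation doubles as a test substitute.
public class InMemoryCustomerRepository : ICustomerRepository
{
    readonly Dictionary<string, Customer> store = new Dictionary<string, Customer>();
    public void Add(Customer c) { store[c.Name] = c; }
    public Customer FindByName(string name)
    {
        Customer c;
        return store.TryGetValue(name, out c) ? c : null;
    }
}
```

With this arrangement, Business can be tested in isolation and the Data Source swapped for another implementation without touching the service.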
I want to elaborate a bit on the substitution of Data Source implementation. While this might seem unlikely, recent years have proven that Data Source and Presentation are the most likely targets of change (once the Business functionality has been achieved).
Imagine porting your ASP.NET application to Silverlight and you’ll realize:
1) Silverlight means you’re adding another application (a client for your web-app server!). There are, of course, other likely clients you might gain. If you’re like me, you’ll want to involve the Business rules as early as possible, and that means… a Business layer, hopefully reusing the implementation you’ve created for the server.
2) the change is not unlikely, given its feasibility
3) the change is not feasible if you have to drag the Data Source implementation along.
It might seem at first that we have a circular dependency here, but that’s not the case.
What are the Business abstractions that would allow us to do it? What is required from the infrastructure to let it happen?
Tune in later for the spectacular conclusion!