The complicated relationship of domain and persistence

It was only yesterday that I blogged about the .NET solution structure of an enterprise application. I got some feedback from my colleagues about that post. One of the points of confusion was: why would I make the persistence project depend on the domain project? In this blog post I try to give a complete answer for that design choice.

The argument against persistence depending on domain

The message I received was that the Persistence project should not depend on the Domain project. The argument was that this violates the layered architecture and leaks the domain outside of its project. One question was: how is persistence different from the other layers, which operate through the request and response DTOs? Why not follow the same reasoning when it comes to persistence? There was also a concern that the persistence layer needs to be changed every time the model changes. Decoupling the layers would solve this issue, I heard.

That’s a bunch of really good questions! Let me try to navigate through them and illustrate at the code level why making the Persistence project depend on the Domain project is actually a good idea.

The domain leakage

One of the main concerns seemed to be that no project other than the domain project itself should know about the inner structure of the domain. Fair enough; even the business layer doesn’t know what the aggregates consist of. Instead it just uses the interfaces of the aggregate roots to manipulate the state of the system. The keyword here is uses. I’m totally on board when it comes to restricting usage and modification of the domain aggregates. One of an aggregate’s responsibilities is, after all, to make sure that it is always in a consistent state. It would be hard to obey this rule if business code could change the internal state of an aggregate as it wishes.

However, the relationship between persistence and domain is completely different. Persistence doesn’t use the domain model. It doesn’t change its state. Instead, the persistence layer’s only responsibility is to persist the current state of an aggregate as it is. And the most convenient and clear way to do that is to allow it to access the state of the aggregate directly.

Also consider this: the business layer using the domain depends on it on a conceptual level, and this dependency exists at runtime. On the other hand, the dependency between the persistence layer and the domain is technical and exists only at compile time. This is exactly why inverting the dependency is such a powerful concept. At the code level, we can depend in the opposite direction of the logical dependency.

Why do we decouple?

We decouple the domain from persistence and other technical code because we want to protect it from change. The domain is the unique and valuable part of our application. Persistence is something that all enterprise apps do. That’s why there are so many tools that can help us implement it. Persistence is a detail; the domain is what matters. The domain deserves the protection; persistence, not so much. What’s more, it is conceptually impossible for any persistence implementation not to depend on the aggregate state. The only question is: do we depend on it directly or indirectly?

What would decoupling mean?

To find an answer to the question “Should we depend on the aggregate state directly, or should we introduce another level of abstraction between these layers?”, let’s consider what it would mean to make the persistence project not depend on the domain project.

The first obvious thing we notice is that we need some objects to represent the state of the aggregates. It’s also clear that persistence should be able to depend on those objects. Let’s call these objects state DTOs. So where do we put them? Surely they can’t be in the domain project, since we are trying to avoid referencing that from the persistence. Therefore we need to introduce a new project for these state DTOs and reference it from the persistence instead. The next question is: where do we map back and forth between these state DTOs and domain aggregates? Given the leakage premise, the only acceptable answer is in the domain project. Any other option would leak the implementation of the aggregates outside the domain, and that is exactly what we are trying not to do here. This decision leads us to reference our new state DTO project from the domain as well.
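To make this concrete, here’s a minimal sketch of what such a state DTO could look like. The name InvoiceStateDto, the namespace and the properties are my illustration, mirroring the Invoice aggregate from the previous post:

using System;

namespace EnterpriseApplication.State
{
    // A plain data bag that exposes the aggregate state
    // without exposing the aggregate itself.
    public class InvoiceStateDto
    {
        public Guid Id { get; set; }
        public string Number { get; set; }
        public DateTime Duedate { get; set; }
    }
}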


Next we need to change our thinking a bit. Now the persistence layer does not persist domain objects anymore, but the state DTOs. This means that the persistence layer shouldn’t implement the repository interfaces in the domain. After all, those depend on domain concepts, because they contain methods like Save(IInvoice invoice) and IInvoice GetByNumber(string number).

So what does the persistence layer implement then? We need to introduce a new interface for these “state DTO repositories”. I find it confusing to call these repositories as well, so I’ll call them persistence adapters to make a clear distinction between domain repositories and these DTO-driven persisters. The next question is where to put this interface. Our new project seems like the appropriate place, since it’s the shared part between persistence and domain. In practice, this state persistence adapter interface is a copy of the repository interface, with the exception of using DTOs in the interface instead of domain concepts. So instead of having Save(IInvoice invoice), it has Save(InvoiceStateDto dto).
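Sketched in code, assuming the InvoiceStateDto from above, the adapter interface could look like this:

using System;

namespace EnterpriseApplication.State
{
    // A mirror of IInvoiceRepository, expressed in state DTOs
    // instead of domain concepts.
    public interface IInvoicePersistenceAdapter
    {
        InvoiceStateDto GetById(Guid id);
        InvoiceStateDto GetByNumber(string invoiceNumber);

        void Save(InvoiceStateDto dto);
    }
}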

The last missing part is the repository implementation. With this model, that goes into the domain project. Basically, this repository implementation works as a gateway between the domain and the persistence adapter.
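Here’s a minimal sketch of that gateway, assuming the Invoice aggregate from the previous post and the DTO and adapter sketched above:

using System;
using EnterpriseApplication.State;

namespace EnterpriseApplication.Invoice
{
    // Lives in the domain project: maps between the aggregate and its state DTO
    // and delegates the actual persisting to the adapter in the persistence layer.
    public class InvoiceRepository : IInvoiceRepository
    {
        readonly IInvoicePersistenceAdapter adapter;

        public InvoiceRepository(IInvoicePersistenceAdapter adapter)
        {
            this.adapter = adapter;
        }

        public IInvoice GetById(Guid id)
        {
            return ToInvoice(adapter.GetById(id));
        }

        public IInvoice GetByNumber(string invoiceNumber)
        {
            return ToInvoice(adapter.GetByNumber(invoiceNumber));
        }

        public void Save(IInvoice invoice)
        {
            // The repository knows the concrete entity type; the rest of the system does not.
            var entity = (Invoice)invoice;
            adapter.Save(new InvoiceStateDto
            {
                Id = entity.Id,
                Number = entity.Number,
                Duedate = entity.Duedate
            });
        }

        IInvoice ToInvoice(InvoiceStateDto dto)
        {
            return new Invoice
            {
                Id = dto.Id,
                Number = dto.Number,
                Duedate = dto.Duedate
            };
        }
    }
}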

I implemented this decoupled version of the architecture and put it on GitHub for anyone to take a closer look. For comparison, you can find the original version here. Now we have implemented a new version of the architecture which doesn’t introduce any direct dependency between the persistence and the domain.

The benefits of decoupling

Let’s take a step back and see what we have achieved with this exercise. The Persistence project does not depend on the Domain project. Check!

Now we can modify the domain aggregate structure without modifying the persistence layer. True, but we do need to update the mapper that we didn’t have before. So instead of modifying the mapping in the persistence layer, we modify the mapping in the domain layer. What’s more, if you check the repository I had in my previous blog post, you’ll notice that there wasn’t actually any mapping in it. Indeed, using e.g. MongoDB as the persistence technology, you can modify your domain aggregate freely without modifying anything else at all. This is of course true only as long as the system is not in production. After that point, the mapping needs to be updated in the Mongo repository as well. Even so, the development phase will be so much faster and more enjoyable when you don’t need to keep things in sync all the time. If you use a traditional SQL database and an ORM, then the mapping needs to be changed in the persistence layer the same way it needs to be changed in the new mapper in the domain layer. So here, the benefit seems to be that we isolate the modification into one project instead of two. This can be useful if you dedicate separate teams to work on different layers of the application.

Now we can modify persistence without affecting the domain. Well, this was true already when persistence depended on the domain. So this isn’t really a benefit added by the updated architecture, but it is still important!

I couldn’t find any other benefits in this architecture compared to the original one introduced in the previous blog post. I truly wish someone would tell me if there is something more to gain from this decoupling. I’m really eager to understand the whole picture here.

The price of decoupling

When we went through the required modifications to the architecture at the beginning of this post, it quickly became evident that implementing this decoupling doesn’t come for free. We needed to sacrifice many things to achieve it. Let’s consider the price of this change in more detail.

We added a new project to the solution and introduced a new level of abstraction by implementing new repositories on top of the persistence adapter implementations, which mostly stayed the same. The new repository implementation maps the entity into a state DTO and delegates the persisting to the persistence adapter in the persistence layer.

We also introduced new mapping code. Not only that, the DTO mapper was implemented in the domain project. That is clearly not the place for such a technical requirement as DTO mapping. The domain should be all about the modeling. There is no way around this issue if we decide that the aggregate structure shouldn’t be visible to other projects. Mapping is clearly a cross-cutting concern: it’s not domain specific, it’s in all enterprise applications, and there are nice libraries to automate the effort. It would be tempting to add a reference from the domain project to such a library, but that would only make things worse. Our domain would depend on a technical library that has nothing to do with the domain.

Consider an application with multiple complex aggregates. This is a considerable amount of new code to be maintained: mappings, DTOs, extra interfaces and repositories. Not to mention how it increases the complexity of the overall system architecture. The worst part is that this is boilerplate code. It’s something you always need to do just to keep things running. Every time you refactor, add or remove properties, there are these extra steps you need to take.

Finally, introducing a new abstraction layer between persistence and domain prevents us from utilizing some of the features available in the most advanced ORM tools. Lazy loading is one of those features. For example, NHibernate can be configured so that it lazy loads parts of an aggregate only when needed. Of course, this lazy loading is not visible at all in the domain and business layers. When working with SQL databases, it can be a huge benefit that not all parts of the aggregate are always loaded eagerly. There might be an expensive part to load that is rarely needed in business operations, yet still clearly belongs to the aggregate. In these cases we want to utilize lazy loading. However, if you have a DTO layer in the middle, this is not possible anymore. The DTO layer forces aggregates to be loaded eagerly every time, needed or not.
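To see why, consider the mapper this design forces into the domain project. Building the DTO has to touch every part of the aggregate, so everything gets loaded whether the business operation needs it or not. A sketch, assuming the aggregate had a hypothetical, expensive-to-load Lines collection:

using System.Linq;

namespace EnterpriseApplication.Invoice
{
    public class InvoiceStateMapper
    {
        public InvoiceStateDto ToDto(Invoice invoice)
        {
            return new InvoiceStateDto
            {
                Id = invoice.Id,
                Number = invoice.Number,
                Duedate = invoice.Duedate,
                // Enumerating the hypothetical Lines collection here forces the
                // ORM to fetch it, defeating any lazy-loading proxy it provides.
                Lines = invoice.Lines.Select(line => line.Description).ToList()
            };
        }
    }
}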

Conclusions

I’m not saying it’s a bad idea to decouple the persistence and the domain from each other on a conceptual level. What I’m saying is that the price we need to pay, as added complexity, maintenance and polluting the domain with technical concerns, does not justify the benefits it provides. Not all of these costs are obvious until you start to implement the decoupling at the code level. As always with design decisions, it comes down to comparing the benefits against the price, and here the conclusion seems clear to me.

There is the special case of having different teams working on these layers. This is a rather old-school way of scaling development and luckily not too common in modern-day development. Nowadays, we scale vertically rather than horizontally. However, if you are stuck in an organization that imposes this type of scaling, then this complete decoupling might be a feasible solution to consider. Still, I would think really, really long before applying it. It won’t completely remove the dependency between the teams anyway. It’s probably still easier to work with the dependency between the projects than with all the extra bells and whistles.

 

.NET Solution structure of an enterprise application

After reading multiple DDD books (all two of them) and studying architectures like Onion, Clean and Hexagonal, I have tried to come up with a good .NET solution structure that enables developing a well-architected enterprise application with domain-driven design while following the SOLID principles. In this blog post I will build such a solution step-by-step while explaining the reasons behind each design decision. You can find the complete solution on GitHub with one commit for each step of the post.

I hope this blog post is useful for other developers struggling to map the high-level concepts of SOLID, DDD and architecture into actual code.

Preface

Before we dive into code and architecture, it’s worth mentioning that this is a huge topic to discuss. So huge, in fact, that there are several books on each smaller subtopic. With that in mind, it’s obvious that I can’t address every detail in this post. For example, I won’t go into details about what aggregate roots are or how dependency injection works. The architecture that I present here is not my invention. I just map the high-level concepts that I have learned from the masters into actual code and solution structure.

The architecture and its implementation are not specific to any domain, but to illustrate it with actual code, I have chosen to use invoicing as the example domain for this application. We will implement an application that allows a user to change the due date of an invoice.

1. Domain

We start by creating a new solution with Visual Studio. Let’s choose a library project and name it EnterpriseApplication. Visual Studio creates a new solution with one library project. Let’s rename this project to Domain. This project will be the core of our application, containing all the entities. I recommend creating a folder for each aggregate root right under the domain project. This convention makes aggregate boundaries visible at the solution level, and you can immediately see what the key concepts of the domain are. Notice that we organize the code around domain concepts instead of technical concepts like factories, entities, validators etc. This not only helps communicate the domain, but also makes it easier to find the piece of code that you are looking for.

For the sake of demonstration, let’s add an Invoice aggregate root. First I add an IInvoice interface that represents the concept in our domain. Next we need to create a factory to be able to create our invoices, so I add an InvoiceFactory class and also an actual implementation class for the IInvoice interface. Factories and the entities they create are highly cohesive and belong next to each other in the solution; that’s why I put them side by side in the domain project.

using System;

namespace EnterpriseApplication.Invoice
{
    public interface IInvoice
    {
        void ChangeDuedate(DateTime newDuedate);
    }
}
using System;

namespace EnterpriseApplication.Invoice
{
    public class Invoice : IInvoice
    {
        public Guid Id { get; set; }
        public string Number { get; set; }
        public DateTime Duedate { get; set; }

        public void ChangeDuedate(DateTime newDuedate)
        {
            // There is more complex logic, but that's not the
            // point of this blog post and is therefore skipped.
            Duedate = newDuedate;
        }
    }
}
using System;

namespace EnterpriseApplication.Invoice
{
    public class InvoiceFactory
    {
        public virtual IInvoice Create(string number, DateTime duedate)
        {
            return new Invoice
            {
                Id = Guid.NewGuid(),
                Number = number,
                Duedate = duedate
            };
        }
    }
}

There is still one concept to add to our domain project, and that is the repository for our aggregate root. Not the implementation, but the interface! So let’s add IInvoiceRepository to the Invoice folder. We add the interface there because whoever depends on the domain and its aggregates also wants to create (factory) and persist or reconstitute (repository) those aggregates. Therefore it’s a natural place for the interface.

using System;

namespace EnterpriseApplication.Invoice
{
    public interface IInvoiceRepository
    {
        IInvoice GetById(Guid id);
        IInvoice GetByNumber(string invoiceNumber);

        void Save(IInvoice invoice);
    }
}

To conclude, our Domain project consists of folders that represent the aggregates of the domain. Each folder contains the factory, entity and repository interface of the specific aggregate. This is an aggregate in its simplest form. Usually there are also multiple value objects related to the aggregate.

2. Business

The next step is to create a new project for the Business layer. We add a new library project to the solution and call it Business. Whereas the domain layer contains the domain logic in the form of aggregates, the business layer is the home of the business logic. This layer implements all the use cases of the application. One use case is usually one business operation that operates on one or more aggregates of the domain. Therefore the business layer naturally depends on the domain layer, so let’s add a project reference from Business to Domain.

If you think about any business application, they always consist of two kinds of operations: commands and queries. Commands modify the state of the system but never return any data. Queries, on the contrary, allow reading the system state without modifying it. This idea is also known as Command Query Responsibility Segregation (CQRS). However, I won’t introduce a separate read model in this example. Instead I use one model to implement both queries and commands, while still making a clear separation between those operations on the architectural level. There is no reason that would prevent introducing a new read model for queries as the need arises, but I would always start with a single model and use it as long as it’s feasible.

To make this idea visible in our solution architecture, let’s create two new folders under our Business project and name them Commands and Queries. Under the Commands folder I then create a folder for each business operation / use case. This way, by looking at the Business project you can instantly see all the business operations that are supported by the application. Folders and business operations should be part of the Ubiquitous Language just like the domain aggregates are.

For this example, let’s add a simplified business operation to change the invoice due date. We create a new folder for it called ChangeInvoiceDuedate, and right under it we create a new class called ChangeInvoiceDuedateCommand. I use a naming convention of suffixing all the entry points of the business commands with Command. This comes in handy later when we configure our DI container.

using EnterpriseApplication.Invoice;

namespace Business.Commands.ChangeInvoiceDuedate
{
    public class ChangeInvoiceDuedateCommand
    {
        IInvoiceRepository invoiceRepository;
        InvoiceDueDateChangeValidator validator;

        public ChangeInvoiceDuedateCommand(IInvoiceRepository invoiceRepository)
        {
            this.invoiceRepository = invoiceRepository;
            this.validator = new InvoiceDueDateChangeValidator();
        }

        public void Execute(ChangeInvoiceDuedateRequest request)
        {
            if(validator.IsDuedateValid(request.NewDuedate))
            {
                IInvoice invoice = invoiceRepository.GetByNumber(request.Number);
                invoice.ChangeDuedate(request.NewDuedate);
                invoiceRepository.Save(invoice);
            }
            else
            {
                throw new InvalidDueDateException();
            }
        }
    }
}

To implement ChangeInvoiceDuedateCommand, I use the constructor injection pattern to inject the dependencies of the command. As a dependency we need IInvoiceRepository to be able to fetch the invoice whose due date should be changed. Notice that we can access this repository interface since it was located in the domain project that Business depends on. For the sake of this example, I also added a class InvoiceDueDateChangeValidator to illustrate that the business layer contains not only entity calls but also business rules that are not part of the aggregates. The rules for when the due date of an invoice can be changed are part of the business operation. How the due date is changed and how it modifies the aggregate state is part of the domain logic and therefore lives in the Invoice aggregate. Also notice that the validator is not injected, but created by the command. It’s highly cohesive with the command and there is no need to inject it from the outside.

using System;

namespace Business.Commands.ChangeInvoiceDuedate
{
    public class InvoiceDueDateChangeValidator
    {
        public bool IsDuedateValid(DateTime duedate)
        {
            // Dummy validation for illustration purposes
            return duedate > DateTime.Today;
        }
    }
}

One more noteworthy design decision here is the request object that comes as a parameter to the command. Whenever a command or query is called, a request object is given as a parameter (and the only parameter). The request encapsulates all the data that is needed for that specific query or command. Requests are simple data transfer objects (DTOs) and are named with the Request suffix.

using System;

namespace Business.Commands.ChangeInvoiceDuedate
{
    public class ChangeInvoiceDuedateRequest
    {
        public string Number { get; set; }
        public DateTime NewDuedate { get; set; }
    }
}

I won’t go into the details of implementing a query in this example. I will mention, however, that all queries return Response objects that are similar to requests but move in the other direction. So basically, in a query implementation you fetch whatever aggregates are needed to produce the response and then map. To conclude, all the data that crosses the business layer boundary is within request and response DTOs.
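For illustration, a minimal query following the same conventions might look like the sketch below. This query is my own addition, not part of the example solution, and it assumes IInvoice exposes read accessors for Number and Duedate, which the simplified interface in step 1 leaves out:

using System;
using EnterpriseApplication.Invoice;

namespace Business.Queries.GetInvoice
{
    public class GetInvoiceRequest
    {
        public string Number { get; set; }
    }

    public class GetInvoiceResponse
    {
        public string Number { get; set; }
        public DateTime Duedate { get; set; }
    }

    public class GetInvoiceQuery
    {
        IInvoiceRepository invoiceRepository;

        public GetInvoiceQuery(IInvoiceRepository invoiceRepository)
        {
            this.invoiceRepository = invoiceRepository;
        }

        public GetInvoiceResponse Execute(GetInvoiceRequest request)
        {
            IInvoice invoice = invoiceRepository.GetByNumber(request.Number);

            // Map the aggregate into a response DTO before crossing the layer boundary.
            return new GetInvoiceResponse
            {
                Number = invoice.Number,
                Duedate = invoice.Duedate
            };
        }
    }
}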

3. Persistence

Now that we have a domain and business layer in place, let’s create the Persistence layer! I add a third project to our solution and name it Persistence. As you can imagine, the responsibility of this layer is to implement data access for our domain. You might recall that our repository interfaces were put into the domain project. The persistence layer implements those interfaces, so let’s add a new project reference from Persistence to Domain. Now that we have referenced the domain, let’s add a new class called InvoiceRepository and make it implement the IInvoiceRepository introduced earlier in step 1. In this example I will use MongoDB to actually implement the repository. If you choose to use a SQL database, I recommend using Entity Framework or NHibernate instead. Below is a naive implementation of the repository that is sufficient for this example.

using System;
using System.Linq;
using EnterpriseApplication.Invoice;
using MongoDB.Driver;
using MongoDB.Driver.Linq;

namespace Persistence
{
    public class InvoiceRepository : MongoRepository, IInvoiceRepository
    {
        public InvoiceRepository(MongoClient mongo) : base(mongo) { }

        public IInvoice GetById(Guid id)
        {
            return Invoices.AsQueryable<Invoice>()
                           .Single(c => c.Id == id);
        }

        public IInvoice GetByNumber(string invoiceNumber)
        {
            return Invoices.AsQueryable<Invoice>()
                           .Single(c => c.Number == invoiceNumber);
        }

        public void Save(IInvoice invoice)
        {
            // The repository may know the concrete entity type; the rest of the system does not.
            Invoices.Save((Invoice)invoice);
        }

        MongoCollection Invoices
        {
            get { return Database.GetCollection<Invoice>("invoices"); }
        }
    }
}

That’s all when it comes to the persistence layer. It’s a rather thin layer containing the repository implementations. Since the Persistence project depends on the Domain project, it can easily instantiate entities when reconstituting objects from the database. In the case of MongoDB, there isn’t even a need to do manual mapping, since Mongo does everything for you automatically. One thing to point out is that the domain aggregates do not use the constructor injection pattern. Instead they instantiate their dependencies themselves. Again, classes within the aggregates are highly cohesive with each other and can therefore depend on each other as needed.

4. Cross-cutting Concerns

Now we have three projects in our solution: Domain, Business and Persistence, all with very specific responsibilities. Both Business and Persistence depend on Domain, but there are no other dependencies between the projects. Next it’s time to stitch everything together. Let’s create a fourth project in our solution and call it CrossCuttingConcerns. As the name of the project implies, this project contains all the features that are cross-cutting to all layers of the application. These include, for example, logging and auditing, but more importantly dependency injection.

Let’s add a new folder for each cross-cutting concern under the project. I create folders named DependencyInjection and Logging for this example app. In real apps there could also be Security, Auditing, Monitoring, RequestValidation etc.

Next, let’s add project references from the CrossCuttingConcerns project to all three projects we created before: Domain, Business and Persistence. Due to the nature of cross-cutting concerns, it’s OK that this project depends on all the others. More importantly, the Domain project still depends on nothing and Business depends only on Domain. Those are the two projects that are the core of our system, the part that makes our system unique and valuable. In contrast, Persistence and CrossCuttingConcerns are the projects that implement responsibilities common to all enterprise systems, including database access, logging and DI, just to mention a few. If you think about it, these two projects are the places where we want to utilize already existing technologies like IoC containers, ORMs, Mongo drivers, validation frameworks, logging frameworks, and the list just goes on.

This is great, because our architecture makes it so that all these external dependencies are in the projects that do not contain any business or domain logic. Decoupling the external dependencies is important, because we don’t want to depend on those details. Instead we want those details to depend on our core! Dependency injection (see DIP) allows us to inject these cross-cutting concerns into the appropriate places of the application stack without making the stack depend on those concerns or their implementations. This can be done with the decorator pattern or by using interception, which is supported by some DI containers.

In this example app I use Castle Windsor as the IoC container. I won’t go into the details of how Castle works, but I have chosen it because it implements two crucial features: interception and convention-based registration.

Let’s start by implementing simple error logging. Castle makes it easy, because it provides integration for log4net out of the box. I’ll create an ExceptionLogger class in the Logging folder and make it implement IInterceptor, which is Castle’s interface for intercepting method calls. Implementing the logger itself is easy. Below is the full implementation of the exception logger that can be used throughout the system.

using System;
using Castle.Core.Logging;
using Castle.DynamicProxy;

namespace CrossCuttingConcerns.Logging
{
    public class ExceptionLogger : IInterceptor
    {
        ILogger log;

        public ExceptionLogger(ILogger log)
        {
            this.log = log;
        }

        public void Intercept(IInvocation invocation)
        {
            try
            {
                invocation.Proceed();
            }
            catch (Exception e)
            {
                var message = string.Format("Method '{0}' of class '{1}' failed.", 
                                            invocation.Method.Name, invocation.TargetType.Name);
                log.Error(message, e);
                throw;
            }
        }
    }
}

Now that we have all the pieces in place, let’s implement the composition root of the application. I will do this by creating a new class CompositionRoot under the DependencyInjection folder. Castle Windsor supports splitting the composition root into smaller components called installers. I prefer creating an installer per project/layer to keep my codebase well organized. Below you can see the implementation of the CompositionRoot and all the installers.

using Castle.Facilities.Logging;
using Castle.Windsor;

namespace CrossCuttingConcerns.DependencyInjection
{
    public class CompositionRoot
    {
        public virtual void ComposeApplication(IWindsorContainer container)
        {
            container.AddFacility<LoggingFacility>(f => f.UseLog4Net());

            container.Install(
                new CrossCuttingConcerns(),
                new Persistence(),
                new Domain(),
                new Business()
            );
        }
    }
}
using Business.Commands.ChangeInvoiceDuedate;
using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;
using CrossCuttingConcerns.Logging;
using EnterpriseApplication.Invoice;
using MongoDB.Driver;
using Persistence;

namespace CrossCuttingConcerns.DependencyInjection
{
    public class Domain : IWindsorInstaller
    {
        public void Install(IWindsorContainer container, IConfigurationStore store)
        {
            container.Register(Classes.FromAssemblyContaining<InvoiceFactory>()
                                      .Where(type => type.Name.EndsWith("Factory"))
                                      .WithServiceSelf()
                                      .LifestyleSingleton());
        }
    }

    public class Business : IWindsorInstaller
    {
        public void Install(IWindsorContainer container, IConfigurationStore store)
        {

            container.Register(Classes.FromAssemblyContaining<ChangeInvoiceDuedateCommand>()
                                      .Where(type => type.Name.EndsWith("Query"))
                                      .WithServiceSelf()
                                      .Configure(c => c.LifestyleSingleton().Interceptors<ExceptionLogger>()));

            container.Register(Classes.FromAssemblyContaining<ChangeInvoiceDuedateCommand>()
                                      .Where(type => type.Name.EndsWith("Command"))
                                      .WithServiceSelf()
                                      .Configure(c => c.LifestyleSingleton().Interceptors<ExceptionLogger>()));
        }
    }

    public class Persistence : IWindsorInstaller
    {
        public void Install(IWindsorContainer container, IConfigurationStore store)
        {
            RegisterMongoDb(container);
            RegisterRepositories(container);
        }

        protected virtual void RegisterMongoDb(IWindsorContainer container)
        {
            var mongoClient = new MongoClient("mongodb://localhost");
            container.Register(Component.For<MongoClient>().Instance(mongoClient));
        }

        void RegisterRepositories(IWindsorContainer container)
        {
            container.Register(Classes.FromAssemblyContaining<MongoRepository>()
                                      .BasedOn<MongoRepository>()
                                      .WithServiceFirstInterface()
                                      .LifestyleSingleton());
        }
    }

    public class CrossCuttingConcerns : IWindsorInstaller
    {
        public void Install(IWindsorContainer container, IConfigurationStore store)
        {
            container.Register(Component.For<ExceptionLogger>());
        }
    }
}

As you can see, I register most of the classes by convention for each layer. I also bind the logging interceptor to all commands and queries. This guarantees that any exception thrown from the domain, business or persistence layers will always get logged. Thanks to the convention-based configuration, there is no need to modify the composition root when we add new aggregates, repositories or business operations to our application. It all just works, as long as the classes are named by convention.

One important aspect of the dependency injection here is that we inject dependencies only at the boundaries of the layers, to decouple them from each other. Within the domain and business layers, however, I tend to create dependencies locally, since the classes are highly cohesive with each other within use cases and aggregates.

5. Services layer

So far we have four projects in our solution: Domain, Business, Persistence and CrossCuttingConcerns. These four projects together fully implement the system, but there is still one minor problem to solve. We can’t use the system at all! We need a delivery mechanism over the business layer to be able to call the commands and queries of the system. In this example, I will create a WCF service over the business layer to enable access to our domain. This layer could be REST, MVC, WPF or even a command-line application, but for this example it’s WCF. There is also no reason why this layer should be limited to only one. We could have WCF and WPF living side by side in our application.

Let’s create a new empty ASP.NET Web project in our solution and call it Services. This project’s responsibility is just to enable remote access to the domain. It does not contain any logic, and it’s a really thin layer over the others. No other project depends on this layer. The Services layer itself depends on CrossCuttingConcerns and Business, so let’s add project references for those two dependencies! Next we implement a trivial InvoiceService that has a method for changing the due date of an invoice. This class binds the remote interface to our business layer. We inject our ChangeInvoiceDuedateCommand as a constructor parameter into our service so that it can delegate the call to the business layer.

using System;
using Business.Commands.ChangeInvoiceDuedate;

public class InvoiceService : IInvoiceService
{
    ChangeInvoiceDuedateCommand changeInvoiceDuedate;

    public InvoiceService(ChangeInvoiceDuedateCommand changeInvoiceDuedate)
    {
        this.changeInvoiceDuedate = changeInvoiceDuedate;
    }
        
    public void ChangeInvoiceDuedate(string invoiceNumber, DateTime newDuedate)
    {
        var request = new ChangeInvoiceDuedateRequest 
        { 
            Number = invoiceNumber, 
            NewDuedate = newDuedate 
        };
            
        changeInvoiceDuedate.Execute(request);
    }
}

That’s almost all there is to the Services layer (in this example application). But there is still one trick we need to do. We need to somehow register our service with the IoC container so that the WCF framework can create the services for us with the dependencies in place. To do this we need to tell the WCF framework to use our IoC container as a dependency resolver. We also need to register the services of the Services layer to the container before the rest of the application is configured within the composition root. To do this, let’s add a Global.asax file to our Services project. This class contains a method Application_Start() that is executed when the application is started by IIS. Below is the code illustrating how to bootstrap dependency injection with WCF.

using System;
using Castle.Facilities.WcfIntegration;
using Castle.MicroKernel.Registration;
using Castle.Windsor;
using CrossCuttingConcerns.DependencyInjection;

public class Global : System.Web.HttpApplication
{
    WindsorContainer container;

    protected void Application_Start(object sender, EventArgs e)
    {
        container = new WindsorContainer();
        container.AddFacility<WcfFacility>();

        container.Register(Classes.FromThisAssembly()
                                  .Where(type => type.Name.EndsWith("Service"))
                                  .WithServiceDefaultInterfaces()
                                  .Configure(component => component.Named(component.Implementation.FullName)));

        new CompositionRoot().ComposeApplication(container);
    }

    protected void Application_End(object sender, EventArgs e)
    {
        if (container != null)
            container.Dispose();
    }
}

As you notice, we actually create the WindsorContainer already in the Services layer, register the services to it by convention and then pass the container to our composition root, which in turn configures the rest of the application. We do this because we can’t configure the services in the composition root that is located inside the CrossCuttingConcerns project. Remember that CrossCuttingConcerns does not depend on the Services project. We could move the whole dependency injection configuration to the Services layer, but that would couple DI tightly to the delivery mechanism. What if we want to add Web API next to WCF? No, we don’t want to couple those two concerns too tightly. By locating DI in CrossCuttingConcerns, we keep it separated while allowing any top layer to utilize it to configure the application.

Conclusions

Let’s take a step back and see what we have achieved here. We have implemented the basic structure of an enterprise system in .NET. It consists of five projects: Domain, Business, Persistence, CrossCuttingConcerns and Services. All of these have clear responsibilities and interfaces. The architecture is not specific to any domain. Domain-specific code is always located in the Domain and Business projects, which are completely independent from the rest of the system. This total decoupling between domain-specific logic and technical requirements is one of the biggest advantages of this architecture.

This is an enterprise application architecture in its simplest form. It’s not rare to have subdomains side by side with the core domain, or an Application layer on top of the business layer. This blog post was already way too long without those, so I just left them out of this exercise.

Physical dependencies of the projects

Pros

  • Architecture follows SOLID principles.
  • The solution structure screams the domain, with a folder structure built around the domain concepts and processes. Notice that there are no folders named Validators, Exceptions, Factories or the like.
  • Clear separation of concerns on the project level. All the code could be in one big project, but I find it helpful to organize code at the layer level. Managing the dependencies between these projects also enforces decoupling the high-level concepts from each other.
  • Cross-cutting concerns are isolated and put into clearly named folders. You can immediately see at the solution level what the cross-cutting features of the system are. More importantly, logging, security, audit etc. are not polluting the domain and business code.
  • The persistence implementation is separated from the domain, allowing the domain model to differ from the persistence model. This also enables adding technical features to persistence in an OCP manner. For example, adding caching is as easy as creating a caching decorator for the repository implementation and configuring it in the composition root (see the sketch after this list).
  • The Services layer stays extremely thin, and the rest of the application doesn’t need to know about its existence.
  • Using Request and Response DTOs on the business layer boundary decouples the upper layers from the domain concepts. The business layer can provide an API suitable for the upper layers. These DTOs also define the data interface of the application: what data is needed for each operation.
  • All technical frameworks are isolated from the actual business code, which makes it easy to switch any of them as needed. Wanna use Unity as the IoC container? Just rewrite the stuff within the DependencyInjection folder. Wanna use Entity Framework instead of MongoDB? Just rewrite a repository in Persistence. Wanna use Web API instead of WCF? Just implement another service layer next to WCF. None of these changes requires any changes to the core of the application.
  • Adding a new business operation to the system is easy. Add a new Command to the business layer and possibly new functionality to one or more aggregates. This follows the OCP principle, which states that the system should be open for extension but closed for modification. We don’t need to touch any existing business operations while adding a new one. We also shouldn’t need to modify aggregates, since the domain concepts stay the same across the business operations. We might need to add new domain functionality, though.
  • I feel I have to mention testing here. It’s a whole topic of its own that I have written about earlier, but let’s just say that testing an application following this architecture is not only easy, but fun!
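As a sketch of that caching decorator idea (a deliberately naive cache with no invalidation, just to show the OCP angle):

using System;
using System.Collections.Generic;
using EnterpriseApplication.Invoice;

namespace Persistence
{
    // Wraps any IInvoiceRepository and adds caching without modifying the
    // existing implementation. Register it in the composition root as a decorator.
    public class CachingInvoiceRepository : IInvoiceRepository
    {
        readonly IInvoiceRepository inner;
        readonly Dictionary<Guid, IInvoice> cache = new Dictionary<Guid, IInvoice>();

        public CachingInvoiceRepository(IInvoiceRepository inner)
        {
            this.inner = inner;
        }

        public IInvoice GetById(Guid id)
        {
            IInvoice invoice;
            if (!cache.TryGetValue(id, out invoice))
            {
                invoice = inner.GetById(id);
                cache[id] = invoice;
            }
            return invoice;
        }

        public IInvoice GetByNumber(string invoiceNumber)
        {
            return inner.GetByNumber(invoiceNumber);
        }

        public void Save(IInvoice invoice)
        {
            inner.Save(invoice);
        }
    }
}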

Cons

  • Dependency injection cannot be kept completely in CrossCuttingConcerns, because of its very nature. We need to be able to register the very “top level” classes of the application to the container, and those classes are always in the layer above everything else.

Your turn! Leave a comment and help me improve this solution structure and architecture. Tell me what its weak points are. Is there a way to make it less complex without compromising the benefits it provides?

After feedback I wrote a follow-up blog post. You can read it here.

 

Elokuvat

I’m really excited to release my 3rd iOS app, called Elokuvat. That’s Finnish for movies. Elokuvat is an application that brings the currently playing movies and showtimes of the Finnkino movie theaters in Finland into your pocket in an elegant and easy-to-use package. The application utilizes Finnkino’s open API and The Movie DB for enhanced content.

This is the first release, but not the last. You can expect more features to come in the future!

For more information, see Elokuvat App’s website.

 

DDD & Testing Strategy

In my previous post I discussed the unit of unit testing. Uncle Bob happened to blog about the same subject a few days after my post. In that post he proposed the following rule:

“Mock across architecturally significant boundaries, but not within those boundaries.”

In this post I’ll go through my current testing strategy for business applications implemented using Domain-Driven Design (DDD) principles and patterns. The units that I have found meaningful to test in isolation are quite close to (if not exactly) what Uncle Bob suggested in his post. I hope this blog post illustrates how that high-level rule can be applied in practice.

I assume that the reader is somewhat familiar with DDD concepts like domain, entity, repository etc. I’m also expecting that test automation is not black magic to you.

DISCLAIMER: I use the term mocking as a general term meaning all kinds of test substitutes, including spies, mocks, stubs and what not.

Application Architecture

Before getting into the testing strategy, it’s important to understand the architecture of a DDD application. I will only present the architecture with a single picture below, but I strongly recommend reading Uncle Bob’s Clean Architecture blog post, in which he explains it really well. I also recommend reading Jeffrey Palermo’s blog post series about the Onion Architecture. It’s those kinds of applications I’m talking about in this post.

DDD Architecture

Domain Layer

The domain model in a DDD application should be persistence ignorant. In other words, domain entities and aggregates don’t know anything about persistence at all. They don’t have a Save() method or dependencies on repositories. This is great news from the testing point of view. It means that all the domain logic happens in memory and there shouldn’t be any dependencies outside the domain.

Domain Model

The domain model can be seen as a collection of aggregates which have clear boundaries. To me, an aggregate is a good candidate for being a unit of testing. An aggregate encapsulates highly cohesive domain concepts together, as illustrated in the picture above. An aggregate is persisted as a whole, and parts of it can never be instantiated separately. The rest of the system always operates through the aggregate root and is not aware of the other parts of the aggregate. So there is a clear public interface to test against.

The only dependencies to mock out are the other aggregate roots of the domain. This is the case when an aggregate takes another aggregate as a parameter to some action. This could happen, for example, when settling a payment to an invoice: payment.SettleWith(invoice);

The tricky part of these tests is arranging the SUT (the aggregate) into the desired state. This problem is unique to domain layer tests, because the aggregates are the state of your application. When testing other layers, there is rarely state to worry about.

There are a few ways to do the arranging. One option is to create the aggregate under test using the same factory used in the production code. The downside of this strategy is that, when dealing with a more complex domain, it might take several actions against the aggregate before it’s in the desired state. This makes the test fragile, since any failure in those steps would break the test. Personally, I don’t see this as a deal breaker: what you lose is the error pin-pointing benefit of the test, but if you work in small iterations, you will automatically know that the reason for a failing test is the last five lines you wrote. The other option is to arrange the aggregate by explicitly setting its internals into the desired state. With this style your tests won’t fail if unrelated functionality breaks. However, you might end up having tests that arrange the aggregate into unrealistic states and, as a consequence, show false positives. Using the factory and the aggregate’s public interface guarantees that the aggregate state stays valid.

For now, I’m leaning towards using factories and executing the needed actions to arrange the state, but this is an area where I’m still actively exploring different solutions.
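As a sketch, a factory-arranged aggregate test could look like this. I’m assuming NUnit and the Invoice aggregate from my solution structure post, plus a Duedate accessor on IInvoice that the simplified example interface left out:

using System;
using NUnit.Framework;
using EnterpriseApplication.Invoice;

[TestFixture]
public class InvoiceTests
{
    [Test]
    public void Changing_duedate_updates_the_invoice()
    {
        // Arrange through the factory so the aggregate starts in a valid state.
        IInvoice invoice = new InvoiceFactory().Create("INV-001", DateTime.Today);
        var newDuedate = DateTime.Today.AddDays(14);

        invoice.ChangeDuedate(newDuedate);

        // Assert through the public interface only (assumes a Duedate accessor).
        Assert.AreEqual(newDuedate, invoice.Duedate);
    }
}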

Persistence Layer

The persistence layer in a DDD application contains the implementation of the repositories. Repositories are simple classes that implement the data persistence of the domain aggregates. My strategy for testing the persistence layer is to write test cases against the repositories without mocking out the persistence technology used by the repositories. The technology might be MongoDB or some ORM tool like NHibernate or Entity Framework. Which one it is doesn’t really matter. The responsibility of a repository is to persist and reconstitute domain aggregates. The database and mapping tool are a big part of this, and mocking those out would leave little to test.

Persistence Layer

These tests are slower than in-memory unit tests, but that’s OK. They don’t need to be executed as often as unit tests: only when persistence logic changes occur, which usually happens less often than domain logic modifications. What’s more, there’s really no way around it as long as you want to test that the persistence really works.

Repository tests simply store a domain aggregate, query it back and assert that the aggregate stayed intact. This tests the persistence behavior without depending on details like which technology is in use. So you are able to switch, let’s say, from NHibernate to MongoDB without breaking any test case. The same applies if you decide to refactor your database schema. Tests shouldn’t care how the data is persisted, just that it is.
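In practice such a round-trip test can be as simple as the sketch below (NUnit again, against the Mongo-backed repository from my solution structure post and a local test database):

using System;
using MongoDB.Driver;
using NUnit.Framework;
using EnterpriseApplication.Invoice;

[TestFixture]
public class InvoiceRepositoryTests
{
    [Test]
    public void Saved_invoice_can_be_reconstituted()
    {
        // The real database and driver are used on purpose: they are what we test.
        var repository = new Persistence.InvoiceRepository(new MongoClient("mongodb://localhost"));

        IInvoice invoice = new InvoiceFactory().Create("INV-001", DateTime.Today);
        repository.Save(invoice);

        IInvoice loaded = repository.GetByNumber("INV-001");

        // Assert that the aggregate survived the round trip intact.
        Assert.IsNotNull(loaded);
    }
}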

Application layer

The application layer uses the domain and persistence layers to implement business functionality. This layer contains the implementation of the application use cases. There is usually a class per use case, but every now and then larger use cases need to be split into multiple classes. A class that implements a use case usually starts by getting one or more aggregates from the repository, then performs one or more actions on the aggregates and finally persists the modified aggregates using the repositories.

Application Layer

In the application layer, a use case seems like a meaningful unit to be tested in isolation. Following Uncle Bob’s rule above, we should mock out all the boundaries that are used. In the case of use-case classes, those boundaries are the persistence layer (repositories) and the domain aggregates, which are implemented in the domain layer. Sometimes the application layer might use external services or similar. Those should be mocked out as well.
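Following that rule, a use-case test for the ChangeInvoiceDuedateCommand from my solution structure post could be sketched like this, using Moq for the test substitutes:

using System;
using Moq;
using NUnit.Framework;
using Business.Commands.ChangeInvoiceDuedate;
using EnterpriseApplication.Invoice;

[TestFixture]
public class ChangeInvoiceDuedateCommandTests
{
    [Test]
    public void Valid_duedate_change_is_persisted()
    {
        // Mock across the boundaries: the repository and the aggregate root.
        var invoice = new Mock<IInvoice>();
        var repository = new Mock<IInvoiceRepository>();
        repository.Setup(r => r.GetByNumber("INV-001")).Returns(invoice.Object);

        var command = new ChangeInvoiceDuedateCommand(repository.Object);
        var newDuedate = DateTime.Today.AddDays(14);

        command.Execute(new ChangeInvoiceDuedateRequest
        {
            Number = "INV-001",
            NewDuedate = newDuedate
        });

        // The command should delegate to the aggregate and persist the result.
        invoice.Verify(i => i.ChangeDuedate(newDuedate));
        repository.Verify(r => r.Save(invoice.Object));
    }
}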

Testing the features

This is the highest level of automated testing I do at the moment. Feature tests can be written in the Gherkin language using, for example, the SpecFlow testing framework. This helps communication with the non-technical persons involved with a project. These tests exercise the complete business processes of the application at the highest level. They make sure that the application as a whole is behaving as expected.

Feature tests should depend only on the application layer. Why not test all the way from the service layer, one might wonder? The service layer is technology dependent, and therefore you don’t want to make your tests depend on it. What if you want to change from ServiceStack to Web API (this happened to me when ServiceStack became proprietary)? It would require modifying all the tests although nothing but an external framework actually changes. The service layer is a delivery mechanism. It’s a detail and a very, very thin layer above the application.

So, make your feature tests depend on the application layer and the application layer only. Don’t go directly to the persistence layer to assert the application state. When doing the arrange part of the test, don’t set the application to a certain state directly from the persistence layer, bypassing the application and domain. In feature tests, always use only the public interface of the application layer. The state changes must be observable through the application layer; otherwise the change is meaningless. The same applies to setting the application into a certain state. If you can’t get the system into a certain state through application actions, then that state is not a valid state for the application.

Feature tests verify the application behavior as a whole, but on top of that there are two aspects that no other tests in my testing strategy verify: dependency injection and interception. In feature tests, I initialize the system using the same composition root that is used in production. The only exception is that I mock out some very specific parts of the system, like e-mail sending. By using the same composition root, I can be sure that all the components are properly configured in the container. Another closely related topic is interceptors. I use interceptors to implement cross-cutting concerns like logging and auditing. Because feature tests use the production composition root, the interceptors are configured and bound in the process as well. This enables asserting cross-cutting concerns in the feature tests when appropriate.
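As a sketch of how that swapping can be done while reusing the CompositionRoot class from my solution structure post: Castle Windsor treats the first registration of a service as the default, so registering a fake before composing the application makes it win. IEmailSender and FakeEmailSender are hypothetical examples here:

using Castle.MicroKernel.Registration;
using Castle.Windsor;
using CrossCuttingConcerns.DependencyInjection;

public class FeatureTestBootstrapper
{
    public IWindsorContainer Compose()
    {
        var container = new WindsorContainer();

        // Register the fake first so it becomes the default implementation,
        // overriding the production e-mail sender registered later.
        // IEmailSender and FakeEmailSender are hypothetical examples.
        container.Register(Component.For<IEmailSender>().ImplementedBy<FakeEmailSender>());

        // Everything else comes from the same composition root as production.
        new CompositionRoot().ComposeApplication(container);

        return container;
    }
}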

Bonus

Whenever there is a class (or a bunch of classes) that deserves to be tested separately in isolation, I write a unit test for it. Most often this is the case when the aggregates of the domain model use “helper classes” that do some complex calculations or something else very specific with a lot of business rules.

Conclusion

This testing strategy reflects my current understanding of how to get the maximum benefit from testing, while enabling refactoring of the system and feeling confident that everything keeps working. My experience is that this strategy also works nicely with a TDD approach.

If you follow this testing strategy you’ll get four kinds of tests:

  • Feature tests
  • Use-case tests
  • Domain aggregate tests
  • Repository tests

All these “units” have clear boundaries to write tests against and clear responsibilities to be tested.

I would like to get feedback from the community. Does this strategy make any sense? What are its weaknesses?

 

Unit of unit testing

Unit testing is something I have been doing for many years now. During that time, I have learned a lot about writing good tests, clean code and testability in general. I have learned mocking, TDD, BDD, you name it. One thing, however, has stayed the same during the whole learning process: the concept of a unit. When I started unit testing, I learned that a class is a unit and all the external dependencies of the class should be mocked away. So each class is tested in isolation from all the others, and there is always a test class for each production code class.

This definition of a unit is quite popular and a good guideline for developers learning the secrets of automated testing, mostly because it leaves no room for interpretation. While it has served me well on my journey to becoming a better developer, during the past year I have started to feel that it might not be the best unit testing strategy for me after all.

Let me go through the reasons that have led me to search for a more suitable unit than a class.

Refactoring – For me, one of the main benefits of unit tests is that they enable safe refactoring of the code. The conflict I have faced with class-level unit tests and refactoring is the following. Most of the refactoring I tend to do is reorganizing code towards some design pattern I notice is applicable to the situation, or extracting newly found responsibilities into their own classes. None of this refactoring happens in the scope of a single class. Therefore, I have to modify the unit tests every time I change the design, which is a sign of fragile tests: tests that probably test the implementation rather than the behavior. The tests become a burden that must be kept in line with the production code every time refactoring happens. New tests must be created for new classes, some removed and others updated. While making all these changes to the test suite at the same time as I refactor the production code, I find it difficult to feel safe doing the refactoring.

TDD – Learning test-driven development has been a bit of a struggle for me, to be honest. The idea in a simplified form is that you write a test, you write production code, and when the test is green you can go and refactor. When doing TDD, the refactoring step almost always leads to extracting a class from the class you started with. At that point, what should you do? Create a new test class for the extracted class and start to do TDD with it? If the class is your unit of unit testing, then this is the way to go. With this strategy I have found that I end up writing a lot of unit tests that just assert that the extracted classes are called with the correct parameters and that the return values of those extracted classes are processed correctly. These are tests that don’t test the domain behavior as much as they test the code behavior. These are the tests I need to modify each time I want to refactor the code, simply because they depend on the design of the application, not the domain behavior.

The second conflict I have experienced with TDD and the class as a unit is that it ruins my TDD experience. And I care about my experience! I find it hard to focus on implementing the behavior I started with when I need to write these did-it-call-that-with-those-parameters tests in between. That said, I still test units in isolation and there are boundaries where mocks must be applied. At those boundaries I still test that the calls are made with the expected parameters and that the inputs from the mocks are properly used.

Size of the unit – Being a fan of Clean Code has affected a lot how I write code and what my code looks like. Over time I’ve learned to write classes that better follow the SOLID principles. That has made my classes a lot smaller than they used to be. Many classes fit on a screen or two, and the methods in them are even smaller, usually no more than 5 to 10 lines of code. In other words, the size of the unit has dramatically shrunk as I have learned to organize my code better. This is a good thing, no doubt. The problem is that one class alone does not always implement a meaningful domain behavior. Indeed, separation of concerns usually leads to many, many small pieces that together do something meaningful. I find it more meaningful to test those bigger behaviors (still not that big!) than the pieces and their boundaries.

Conclusion

It’s all about trade-offs. My biggest concern was that I would lose the exact pin-pointing power of the unit tests. You know, they tell you exactly where the issue is. That, however, never materialized in practice. If you run your unit tests continuously as you develop, then you can be pretty sure that the few lines of code you added just before the test execution are the reason for a failing test. This applies whether the scope of the unit is one class or a few classes. The execution time of the test suite also stayed the same.

All that said, of course there are still classes that contain a significant amount of logic and deserve to be tested in isolation from all the other code. The change I have made is that I’m not that dogmatic about it anymore. I try to find better, more meaningful units and test those in isolation. If you are in doubt, I say: try it! Time will tell whether this works out great or whether I will revert to dogmatic class-level unit testing. So far, it has worked for me.

 

Junat 2

I finally finished my second iPhone app, which is pretty much the same as the first one was. My second application is Junat 2, which continues from where my first app, Junat, left off. Junat 2 is a complete rewrite of the application and brings a lot of new features that were missing from the original Junat application. I also had to change my strategy and make this application non-free to be able to cover my development costs. I’m hopeful that the users will understand this and are willing to pay the minimal price for my app.

Junat 2 is now in Apple’s approval process, and hopefully it will hit the App Store soon. Oh, and if you’re not familiar with the original app, let me tell you that Junat 2 is an app that allows you to easily access train timetable information in Finland: search for connections, check notifications and follow live train updates.

Here are some screenshots of the application:

 

 

White clock for OSX

I really like the Obsidian Menu Bar that Max has created for Lion. It basically just changes the OSX menu bar to black, which makes it blend very nicely with the Apple displays. The problem is that it doesn’t change the color of the clock and therefore forces users to install iStats or some other application that offers a more customizable clock. Although iStats is a great application, when you only need a clock, $16 might feel a bit too much.

I decided to use an hour or so and create a free alternative for those of us who just want the clock. It’s hardly fancy or optimized in any way, but it works and gives a nice white clock on the black menu bar just the way I want it.

You can download it from here.

UPDATE: Download 2.0 with font selection support

White clock was briefly mentioned in the MacFormat magazine. :)

MacFormat Magazine

 

Mikrobitti features Junat app

Mikrobitti is the largest magazine in Scandinavia concentrating on new technology, with over 330,000 readers. The latest issue of the magazine featured my iPhone app Junat. I’m glad to see that they liked it and were willing to recommend it to their readers. Although, I wonder why they used English screenshots when Finnish ones were also available. Anyway, thanks to the article, Junat app downloads have already increased.

So far, the Junat application has been installed on over 17,000 devices in Finland. This must be my most popular application so far. :)

 

XCode color scheme for MonoDevelop

I like the colors Xcode uses for syntax highlighting, so I decided to create a new color scheme for MonoDevelop with the same colors. There are some differences in how Xcode and MonoDevelop allow you to change the colors, but it’s still quite close to the original.

Download: XCode color scheme

To use it, open MonoDevelop preferences. Navigate to Syntax highlighting and add the downloaded scheme by clicking the Add button. After that you of course select it. :) If you want MonoDevelop to look even more like Xcode, you can change the editor font to Menlo Regular 11pt.

XCode color scheme for MonoDevelop