Oliver Drotbohm

Implementing DDD Building Blocks in Java

March 17th, 2020

NOTE: This post has been largely superseded by “Architecturally evident code with jMolecules”. Make sure to read that one for the most up-to-date state of affairs.

When it comes to implementing the building blocks of DDD, developers often struggle to find a good balance between conceptual purity and technical pragmatism. In this article I’m going to discuss an experimental idea to express some of the tactical design concepts of DDD in Java code and to derive from it the metadata needed to e.g. implement persistence, without polluting the domain model with annotations on the one hand or requiring an additional mapping layer on the other.

Context

In Java applications the building blocks of Domain-Driven Design can be implemented in a variety of ways, which usually make different trade-offs in decoupling the actual domain model from technology-specific aspects. A lot of Java projects err on the side of still annotating their model classes with e.g. JPA annotations for easy persistence, so that they don’t have to maintain a separate persistence model. Whether that’s a good idea is out of scope for this article; its primary focus is to see how we can make the model more focused on DDD even in that case.

Another aspect we’re going to touch on is how to make DDD building blocks visible within the code. Often a lot of them can only be identified indirectly, e.g. by recognizing that the domain type managed by a Spring Data repository has to be an aggregate by definition. However, in that particular case we’re relying on a particular persistence technology being in use to derive exactly that information. Also, it would be nice if we could reason about the role of a type by looking at it without any other context.
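For example, taking the Order type from the sample introduced below, the only hint that it is an aggregate might be a repository declaration like the following minimal sketch (the repository name is purely illustrative):

import org.springframework.data.repository.CrudRepository;

// That Order is managed by a repository implies that it is an aggregate root,
// but this knowledge lives in the repository declaration, not in Order itself.
interface Orders extends CrudRepository<Order, OrderId> {}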

Sample

Let’s start with a quick example that allows us to highlight the challenges. Note that this model is not the only way you could design it; I am just describing what could be the result of a design effort in a particular context. It’s about how an aggregate, entity or value object can be represented in code and the effect of one particular way of doing that. We model Customers that consist of Addresses, and Orders that consist of LineItems, which in turn point to Products, and that point to the Customer that placed the order. Both Customer and Order are aggregates conceptually.

Figure: A sample excerpt of the domain model (UML).

Let’s start with the Order’s relationship to the Customer aggregate. A very naive representation in code that uses JPA annotations directly would probably look something like this:

@Entity
class Order {
  @EmbeddedId OrderId id;
  @ManyToOne Customer customer;
  @OneToMany List<LineItem> items;
}

@Entity
class LineItem {
  @ManyToOne Product product;
}

@Entity
class Customer {
  @EmbeddedId CustomerId id;
}

While this constitutes working code, a lot of the semantics of the model remain implicit. In JPA, the most coarse-grained concept is an entity; it doesn’t know about aggregates. It will also default to eager loading for to-one relationships, which is not what we want for a cross-aggregate relationship.

A technology-focused reaction would be to switch to lazy loading. That however creates new problems: we start to dig down a rabbit hole and have actually moved away from domain modeling to modeling technology, something we wanted to avoid in the first place. We might also want to resort to mapping only identifiers instead of aggregate types, e.g. replacing Customer with CustomerId in Order. While that solves the cascading problem, it’s now even less clear that this property effectively establishes a cross-aggregate relationship.

For the referenced LineItems, a proper default mapping would rather be eager loading (instead of JPA’s lazy default for to-many relationships) and cascading all operations, as the aggregate usually governs the life cycle of its internals.
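Written out by hand in plain JPA, the two adjustments described above would look roughly like this (a sketch in the same excerpt style as before, with imports omitted; the customerId field name is only illustrative and assumes CustomerId is an @Embeddable value type):

@Entity
class Order {

  @EmbeddedId OrderId id;

  // Cross-aggregate reference: only the identifier of the Customer aggregate
  // is mapped, so nothing is loaded or cascaded across the boundary.
  @Embedded CustomerId customerId;

  // Intra-aggregate reference: the Order owns its line items, so they are
  // loaded with it and all operations cascade to them.
  @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
  List<LineItem> items;
}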

The idea

To improve the situation described above, we could start introducing types that allow us to explicitly assign roles to model artifacts and to constrain how they can be composed by using generics. Let’s start with the following interfaces (most of them originally described in John Sullivan’s Advancing Enterprise DDD - Reinstating the Aggregate, but slightly renamed alongside the attempt to turn this into a library).

interface Identifier {}

interface Identifiable<ID extends Identifier> {
  ID getId();
}

interface Entity<T extends AggregateRoot<T, ?>, ID extends Identifier>
  extends Identifiable<ID> {}

interface AggregateRoot<T extends AggregateRoot<T, ID>, ID extends Identifier>
  extends Entity<T, ID> {}

interface Association<T extends AggregateRoot<T, ID>, ID extends Identifier>
  extends Identifiable<ID> {}

Identifier is just a marker interface to equip identifier types with. It encourages dedicated types to describe identifiers, with the primary intent of avoiding every entity being identified by a common type (such as Long or UUID). While that might seem like a good idea from a persistence point of view, it makes it easy to mix up a Customer’s identifier with an Order’s. Dedicated identifier types avoid that problem.
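Such a dedicated identifier type can stay very small. A purely illustrative, UUID-backed sketch (equals(…) and hashCode() omitted for brevity):

import java.util.UUID;

// A dedicated identifier type: an OrderId can never accidentally be passed
// where a CustomerId is expected, which a raw UUID or Long could.
class OrderId implements Identifier {

  private final UUID id;

  private OrderId(UUID id) {
    this.id = id;
  }

  static OrderId of(UUID id) {
    return new OrderId(id);
  }

  static OrderId random() {
    return new OrderId(UUID.randomUUID());
  }

  @Override
  public String toString() {
    return id.toString();
  }
}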

A DDD Entity is an identifiable concept, which means it needs to expose its identifier. It is also bound to an AggregateRoot. That might seem counterintuitive at first, but it allows us to verify that an Entity such as LineItem is not accidentally referred to from a different aggregate. Using these interfaces we can set up static code analysis tooling to verify our model structure.

Association is basically an indirection towards a related aggregate’s identifier that purely serves expressiveness within the model.
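To make that indirection concrete, an Association implementation could be as simple as wrapping the target aggregate’s identifier. The following is just a sketch of one possible shape; the library does not prescribe this exact implementation:

// Wraps the target aggregate's identifier; the Customer instance itself is
// never held, keeping the aggregate boundary intact.
class CustomerAssociation implements Association<Customer, CustomerId> {

  private final CustomerId customerId;

  CustomerAssociation(CustomerId customerId) {
    this.customerId = customerId;
  }

  @Override
  public CustomerId getId() {
    return customerId;
  }
}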

These interfaces and all subsequently mentioned code are available through a library called jDDD, living in this GitHub repository.

Explicit building blocks in our sample

What would our model look like with these concepts applied (JPA annotations omitted for clarity, code to be found here)?

class OrderId implements Identifier {} // same for other ID types

class Order implements AggregateRoot<Order, OrderId> {
  OrderId id;
  CustomerAssociation customer;
  List<LineItem> items;
}

class LineItem implements Entity<Order, LineItemId> {  }

class CustomerAssociation implements Association<Customer, CustomerId> {  }

class Customer implements AggregateRoot<Customer, CustomerId> {  }

With that we can extract a lot of additional information from looking at the types and the fields alone:

  1. Order and Customer are aggregate roots, identified by OrderId and CustomerId respectively.
  2. LineItem is an entity that solely belongs to the Order aggregate.
  3. The customer field of Order establishes an association to the Customer aggregate, i.e. a cross-aggregate reference expressed via an identifier rather than an object reference.

It’s a pretty straightforward task to implement a verification of e.g. entities only being held within their owning aggregate using tools like jQAssistant or ArchUnit. The information can also be extracted and used for documentation, etc.
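As a rough illustration of such a verification (a minimal hand-rolled sketch using plain reflection, not the actual jQAssistant or ArchUnit rules), one could check that every Entity-typed field of an aggregate declares that very aggregate as its owner via the first type argument of Entity<T, ID>:

import java.lang.reflect.Field;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

class AggregateStructureVerifier {

  // Entity and AggregateRoot refer to the DDD interfaces introduced above.
  // Collection-valued fields (e.g. List<LineItem>) are not inspected in this
  // simplified sketch.
  static void verifyEntitiesBelongTo(Class<?> aggregateType) {

    for (Field field : aggregateType.getDeclaredFields()) {

      Class<?> fieldType = field.getType();

      // Only look at fields holding DDD entities; direct references to other
      // aggregate roots would be a separate rule.
      if (!Entity.class.isAssignableFrom(fieldType)
          || AggregateRoot.class.isAssignableFrom(fieldType)) {
        continue;
      }

      for (Type candidate : fieldType.getGenericInterfaces()) {

        if (!(candidate instanceof ParameterizedType)) {
          continue;
        }

        ParameterizedType parameterized = (ParameterizedType) candidate;

        if (!parameterized.getRawType().equals(Entity.class)) {
          continue;
        }

        // The first type argument of Entity<T, ID> names the owning aggregate.
        if (!parameterized.getActualTypeArguments()[0].equals(aggregateType)) {
          throw new IllegalStateException(
              String.format("%s.%s holds entity %s belonging to a different aggregate!",
                  aggregateType.getSimpleName(), field.getName(), fieldType.getSimpleName()));
        }
      }
    }
  }
}

Invoked from a test as e.g. AggregateStructureVerifier.verifyEntitiesBelongTo(Order.class), this would fail if LineItem declared a different aggregate than Order as its owner.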

Adapting fundamental persistence technology

While all of this is nice, we still face the challenge of having to map this to a data store eventually, in this case assuming JPA. As identified before, there are some default mapping rules we’d need to apply that would otherwise result in boilerplate annotations all over the model. Some examples of such defaults:

  1. Types implementing AggregateRoot or Entity need to carry JPA’s @Entity annotation to be persistable at all.
  2. Entities held within an aggregate should be loaded eagerly and have all operations cascaded to them.
  3. References to other aggregates should not be mapped as object references but rather via the association’s identifier only.

How do we actually massage these defaults into the types from the outside? I have a prototypical implementation based on ByteBuddy available. It ships a JpaPlugin implementing ByteBuddy’s Plugin interface for usage from its build plugins like this:

Using the ByteBuddy JPA plugin to default JPA annotations based on DDD concepts.

<plugin>
  <groupId>net.bytebuddy</groupId>
  <artifactId>byte-buddy-maven-plugin</artifactId>
  <version>${bytebuddy.version}</version>
  <executions>
    <execution>
      <goals>
        <goal>transform</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <transformations>
      <transformation>
        <plugin>….JDddJpaPlugin</plugin>
      </transformation>
    </transformations>
  </configuration>
</plugin>

The plugin would essentially work as follows:

  1. Identify the DDD concept by checking whether the type handed to the Plugin implements any of the interfaces of interest.
  2. For each concept inspect type and fields for existing annotations and — if not present — add the default appropriate for the relationship at hand.
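To make that a bit more tangible, here is a heavily simplified, hypothetical sketch of such a plugin that only covers the first default, adding JPA’s @Entity to aggregate roots (class name and the package of the DDD interfaces are assumptions; the actual JDddJpaPlugin handles more concepts and relationships, and a reasonably recent ByteBuddy version is assumed):

import javax.persistence.Entity;

import org.jddd.core.types.AggregateRoot; // package assumed; adjust to wherever the interfaces live

import net.bytebuddy.build.Plugin;
import net.bytebuddy.description.annotation.AnnotationDescription;
import net.bytebuddy.description.type.TypeDescription;
import net.bytebuddy.dynamic.ClassFileLocator;
import net.bytebuddy.dynamic.DynamicType;

// Hypothetical, heavily simplified defaulting plugin: it only adds JPA's
// @Entity to aggregate roots that are not annotated with it yet.
public class SimplifiedJpaDefaultingPlugin implements Plugin {

  @Override
  public boolean matches(TypeDescription target) {
    return target.isAssignableTo(AggregateRoot.class);
  }

  @Override
  public DynamicType.Builder<?> apply(DynamicType.Builder<?> builder,
      TypeDescription type, ClassFileLocator classFileLocator) {

    // Leave manually declared annotations untouched.
    if (type.getDeclaredAnnotations().isAnnotationPresent(Entity.class)) {
      return builder;
    }

    return builder.annotateType(AnnotationDescription.Builder.ofType(Entity.class).build());
  }

  @Override
  public void close() {
    // No resources to release.
  }
}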

Integration with Spring Data

A final step we can take in terms of letting persistence technology adapt to the information available in the type system is the question of how to resolve Associations easily. Our aforementioned sample contains these interfaces, which at some point could actually make it into Spring Data:

interface AggregateLookup<T extends AggregateRoot<T, ID>, ID extends Identifier> {
  Optional<T> findById(ID id);
}

interface AssociationResolver<T extends AggregateRoot<T, ID>, ID extends Identifier>
  extends AggregateLookup<T, ID> {

  default Optional<T> resolve(Association<T, ID> association) {
    return findById(association.getId());
  }

  default T resolveRequired(Association<T, ID> association) {
    return resolve(association).orElseThrow(
      () -> new IllegalArgumentException(
        String.format("Could not resolve association %s!", association)));
  }
}

AggregateLookup.findById(…) is essentially equivalent to CrudRepository.findById(…), which is not a coincidence. AssociationResolver exposes methods that resolve an Association via that findById(…) method using the association’s exposed identifier. This allows a Customer repository to look like this and work out of the box without any further changes:

interface Customers extends
  o.s.d.r.Repository<Customer, CustomerId>,
  AssociationResolver<Customer, CustomerId> {  }

Order order =  // obtain order
Customers customers =  // obtain repository

Optional<Customer> customer = customers.resolve(order.getCustomer());

Note how Customers is a standard Spring Data repository and how we can easily and explicitly resolve associations to other aggregates via their repositories.

Open questions and outlook

Would the interfaces be something you’d be willing to let your domain code depend on? – I know this is an extremely controversial topic. There’s a group of people that doesn’t bother at all and would happily depend on such interfaces if they were part of e.g. Spring Data. Others are rightfully concerned about keeping their domain model as independent of technology aspects as possible. However, there is a middle ground. If you think about it: implementing your domain model in a programming language is technical coupling as well, as is using date abstractions or e.g. the money APIs in your model. I think one can argue that it’s okay to depend on some technology if the library depended on is focused and the effort to remove it is small. What we have here is a set of interfaces that allows us to make DDD concepts explicit that previously had been implicit. That is very much in line with the spirit of DDD in the first place.

Is requiring Identifier too restrictive? — The generics bound of ID extends Identifier eliminates the possibility to use simple types (Long, UUID) as identifiers. While this is intentional to some degree, it might also be considered too invasive.

Is the persistence setup too confusing? — While the concept-specific defaulting is nice, it also might create some confusion, especially for developers used to seeing JPA annotations. “How is this working at all?” is a question likely to come up. What do the new defaults look like?

Further persistence technology integration — It might make sense for other persistence integrations to also provide mapping defaults based on rules applicable to DDD building blocks. This could mean both other ByteBuddy plugins and optionally inspecting the building block interfaces to reason about the model.

Build integration — While integrating the defaulting via ByteBuddy works well, it feels a bit awkward to see nothing happen if everything goes well, which might just be something to get used to. Errors result in failing builds on the command line, which is fine. The integration with Eclipse however has some issues: first, the ByteBuddy Maven plugin is currently not run by default because it apparently lacks the metadata necessary for m2e to know what to do. You can get this to work by adding the following configuration block to the m2e lifecycle mappings file:

Declaring the ByteBuddy Maven plugin in the m2e lifecycle mappings file

<pluginExecution>
  <pluginExecutionFilter>
    <groupId>net.bytebuddy</groupId>
    <artifactId>byte-buddy-maven-plugin</artifactId>
    <versionRange>[0.0.1,)</versionRange>
    <goals>
      <goal>transform</goal>
    </goals>
  </pluginExecutionFilter>
  <action>
    <execute>
      <runOnIncremental>true</runOnIncremental>
    </execute>
  </action>
</pluginExecution>

That said, errors in the plugin execution are signaled via a marker on the pom.xml and on the <execution /> element of the plugin declaration. The hover then contains the stack trace of the exception produced by the plugin. While this is fine, it’s not very convenient. I’ve filed a ticket asking for improvement.

Summary — tl;dr

This article described common problems in expressing DDD building blocks when designing domain models and mapping them directly onto persistence technology like JPA. A set of interfaces describing the building blocks as well as their relationships was suggested, potentially to be shipped as a library. Improved JPA default mappings based on those building blocks were implemented using a ByteBuddy Plugin. Additional Spring Data integration was suggested to make it easy to explicitly resolve associations between aggregates.

The library and sample code live in this GitHub repository. Binaries are currently available from the Spring Artifactory repository:

<repositories>
  <repository>
    <id>spring-libs-snapshot</id>
    <url>https://repo.spring.io/libs-snapshot</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.jddd</groupId>
    <artifactId>jddd-core</artifactId>
    <version>0.0.1-SNAPSHOT</version> <!-- Replace with current version if needed -->
  </dependency>
</dependencies>

Looking forward to your feedback, questions and comments!
