Deprecating the Decorator pattern


This post explains how modern languages make most usages of the decorator design pattern obsolete.

For those of you who do not know the decorator pattern, a quick recap is given before I explain why it’s not really a good idea if your language properly supports multiple inheritance. Finally, we will look at the few remaining cases where the decorator pattern is still justified.

Decorator design pattern

If you already know the design pattern for decorators (as defined by the Gang of Four), then feel free to skim this section in order to pick up the example we’ll revisit later on. For everyone else, I will present a shortened version of the Wikipedia article.

Decorator Design Pattern (UML class diagram)

According to the GoF book, the decorator design pattern (see the UML class diagram above) allows behavior to be added to an individual object, either statically or dynamically, without affecting the behavior of other objects from the same class.

In the diagram, you can see the main interface “Component”, for which you have an implementation called “ConcreteComponent”. It is this implementation whose behavior the decorator(s) will extend, without affecting it directly. This pattern is particularly useful for separation of concerns, where you want to keep cross-cutting concerns (like authentication, logging, etc.) separate from the concrete implementations.

For this post, we’ll go with one of the most common use cases: Logging.

Our example code will be in Scala, because.. well, you’ll understand that later. Let’s start with the interface first:

trait Component {
  def doSomething(x : Int) : Int
}

The interface consists of a single method, whose behavior will be decorated with logging later on. Before we get to that, we need a concrete implementation of this interface. For the sake of simplicity, let’s just have the method return the given value plus one:

class ConcreteComponent extends Component {
  override def doSomething(x : Int) : Int = x + 1
}

Even if you don’t know Scala, this should be pretty straightforward to understand so far. Next, the pattern suggests that we implement the Component interface one more time in a class called Decorator, but this time with an aggregation of another Component object (called decoratee below), to which we delegate:

class Decorator(decoratee : Component) extends Component {
  override def doSomething(x : Int) : Int = decoratee.doSomething(x)
}

Finally, we can create our concrete decorator to add the logging behavior as follows. Note that we simply use a global java.util logger to keep this example simple and to the point.

class LoggingDecorator(decoratee : Component) extends Decorator(decoratee) {
  override def doSomething(x : Int) : Int = {
    java.util.logging.Logger.getGlobal.info("This is a logged message")
    super.doSomething(x)
  }
}

Now how do you combine all of this together? You just wrap the objects into each other like this:

val component : Component = new LoggingDecorator(new ConcreteComponent)
component.doSomething(10) // logs the message, and returns 11

As you can see, when you call the LoggingDecorator’s doSomething method, a log message is created and afterwards we return the result of the base class. Said base class is the Decorator, which holds the ConcreteComponent as decoratee, so it calls the method that returns 10+1, which is in turn returned by the Decorator and then by the LoggingDecorator. From the outside, i.e. the usage site, you really just need to see the Component interface for all of this. So the pattern succeeded in adding logging to a concrete implementation of the interface, without ever having to touch the implementation class (ConcreteComponent) at all. Pretty awesome, isn’t it?

Deprecating the decorator pattern

So why did I insist on using Scala for this example? There are numerous ways in which it is superior to Java, but for this post, the relevant part is traits. Basically, traits are a way to deal with multiple inheritance in a sane way. Similar ideas exist in other languages, like mixins in Ruby. Unfortunately, Java doesn’t have any of this, so the deprecation really only applies to such modern languages, not to Java.

So what are traits, and how are they related to decorators? A trait can be just an interface, as we have seen above. It can, however, also include an implementation of some or all of its methods. At the usage site, you can mix traits into concrete objects, which is similar to saying that you inherit from all of the traits. The infamous diamond problem is solved in Scala in a very simple way: linearization. The implemented methods are called in the linear order given at the declaration site of the mixin composition.
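To make the linearization rule concrete, here is a minimal, self-contained sketch (the trait names Base, A and B are made up for illustration and do not appear elsewhere in this post). The rightmost trait in the mix-in order is called first, and each super-call proceeds leftwards through the linearization:

```scala
trait Base { def describe : String = "Base" }

trait A extends Base {
  // super is resolved via linearization, not statically
  override def describe : String = "A -> " + super.describe
}

trait B extends Base {
  override def describe : String = "B -> " + super.describe
}

// B is rightmost, so B runs first, then A, then Base:
val ab = new Base with A with B
ab.describe // "B -> A -> Base"

// Swapping the order swaps the call chain:
val ba = new Base with B with A
ba.describe // "A -> B -> Base"
```

Reordering the traits at the declaration site is all it takes to change the call order, which is exactly the property the stackable-traits pattern below relies on.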

Now all of this sounds really complicated, but as we’ll see, it is really simple in practice. So let’s use traits to add the same logging behavior we added with the decorator pattern before. The Component interface remains unchanged, just like the ConcreteComponent. The Decorator class itself becomes pointless, so we really just need to define the trait for the logging, which is a slightly simpler version than before:

trait LoggingDecoratorTrait extends Component {
  abstract override def doSomething(x : Int) : Int = {
    java.util.logging.Logger.getGlobal.info("This is a logged message")
    super.doSomething(x)
  }
}

What is interesting in this method is the call to super.doSomething. Since the trait extends Component, which does not provide an implementation of doSomething (it acts just like an interface), the super-call has no fixed target yet, which is why the method has to be marked abstract override, and why the trait cannot be instantiated directly. So “new LoggingDecoratorTrait” will fail and tell you that this trait is actually abstract. But we already have an implementation of the interface method in ConcreteComponent, so all we need to do is mix the two together like this:

val mixedComponent : Component = new ConcreteComponent with LoggingDecoratorTrait
mixedComponent.doSomething(10) // logs the message, and returns 11

That’s really all there is to it. The resulting behavior is pretty similar to before. The right-most implementation given in the declaration of mixedComponent above is the one that will be called first, which causes the logging. Its super-call then invokes the ConcreteComponent’s implementation of doSomething, which calculates 10+1, and that result is returned all the way back. We have all the same advantages in place as before, so this standard behavior of traits in Scala already provides a simpler way to “decorate” your components.

In Scala, the corresponding pattern is referred to as the “stackable traits” pattern, because you can stack traits on top of each other. We could consider another trait that provides a cache for successive calls (because, you know, addition can be expensive). All we need to do is change the usage site as follows:

val mixedComponent : Component = new ConcreteComponent with LoggingDecoratorTrait with CachedComponent

Here you can see the power of the separation of concerns. Each of these decorators/traits can be implemented individually. You can concentrate on one specific responsibility, and none of the other concerns gets in your way. What you really need to think about at this point is whether you want “LoggingDecoratorTrait with CachedComponent” or “CachedComponent with LoggingDecoratorTrait”. In other words, you need to decide if you want a logging message for each call to the component, or only in the case of a cache miss, when an actual computation is needed. The whole difference is just the order in which you mix in the traits – of course, this also applies to the original decorator pattern, where you have to switch the wrapping order of the decorators.
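The post does not show the body of CachedComponent, so here is a hypothetical sketch of it, together with a logging trait that counts its messages so the effect of the two mix-in orders becomes observable (the counter stands in for the actual log call and is my addition):

```scala
import scala.collection.mutable

trait Component { def doSomething(x : Int) : Int }

class ConcreteComponent extends Component {
  override def doSomething(x : Int) : Int = x + 1
}

// Logging trait with a counter instead of a real logger, for demonstration.
trait LoggingDecoratorTrait extends Component {
  var logged = 0
  abstract override def doSomething(x : Int) : Int = {
    logged += 1 // stands in for the actual log call
    super.doSomething(x)
  }
}

// Hypothetical cache implementation: answer repeated calls from a map.
trait CachedComponent extends Component {
  private val cache = mutable.Map.empty[Int, Int]
  abstract override def doSomething(x : Int) : Int =
    cache.getOrElseUpdate(x, super.doSomething(x))
}

// Cache outermost: the second call is a cache hit, so the logging
// trait underneath is reached only once.
val cacheFirst = new ConcreteComponent with LoggingDecoratorTrait with CachedComponent
cacheFirst.doSomething(10)
cacheFirst.doSomething(10)
// cacheFirst.logged == 1

// Logging outermost: every call is logged, even cache hits.
val logFirst = new ConcreteComponent with CachedComponent with LoggingDecoratorTrait
logFirst.doSomething(10)
logFirst.doSomething(10)
// logFirst.logged == 2
```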

Apart from needing only an interface instead of a full pattern, this trait-stacking approach has an advantage with regard to the type of the resulting object. With the decorator pattern, the wrapped object can only ever be treated as a “Component”, and any “instanceof” checks are bound to end in disaster once you wrap another decorator around it. In contrast, a mixed component with stacked traits can (if you want) be verified to be an instance of ConcreteComponent, just as well as of LoggingDecoratorTrait – and that’s because of another small advantage: there is only one object created at runtime.
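To make that runtime-type claim concrete, here is a quick check, repeating the earlier definitions in condensed form so the snippet stands alone (println stands in for the logger):

```scala
trait Component { def doSomething(x : Int) : Int }

class ConcreteComponent extends Component {
  override def doSomething(x : Int) : Int = x + 1
}

trait LoggingDecoratorTrait extends Component {
  abstract override def doSomething(x : Int) : Int = {
    println("This is a logged message")
    super.doSomething(x)
  }
}

// One single object carries both types at runtime:
val mixed : Component = new ConcreteComponent with LoggingDecoratorTrait
mixed.isInstanceOf[ConcreteComponent]     // true
mixed.isInstanceOf[LoggingDecoratorTrait] // true

// A wrapped decorator object, in contrast, is not a ConcreteComponent:
// new LoggingDecorator(new ConcreteComponent).isInstanceOf[ConcreteComponent] is false
```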

While this may not sound interesting in and of itself, it becomes more useful when combined with self-type annotations, i.e. restrictions on the mixing you can do. Let’s say that, for some reason, we want users of our CachedComponent to mix the trait into a concrete implementation only when they also mix in the logging trait. So we do not want something like “new ConcreteComponent with CachedComponent” to compile, because it misses the required logging. All we need to do is say so when we define CachedComponent, by using a self-type annotation like this:

trait CachedComponent extends Component { self : LoggingDecoratorTrait =>
  private val cache = scala.collection.mutable.Map.empty[Int, Int]
  abstract override def doSomething(x : Int) : Int =
    cache.getOrElseUpdate(x, super.doSomething(x)) // do cache stuff here
}
This ensures that the resulting object has to be of type “LoggingDecoratorTrait” in some way or another, and hence disallows the above instantiation. For more advanced applications of this, take a look at the cake pattern, which is a way to do dependency injection via stackable traits.
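Putting the self-type to work, in a condensed, self-contained form (the cache body is my assumption, since the post elides it, and println stands in for the logger):

```scala
import scala.collection.mutable

trait Component { def doSomething(x : Int) : Int }

class ConcreteComponent extends Component {
  override def doSomething(x : Int) : Int = x + 1
}

trait LoggingDecoratorTrait extends Component {
  abstract override def doSomething(x : Int) : Int = {
    println("This is a logged message")
    super.doSomething(x)
  }
}

// The self-type requires every concrete mix to also include the logging trait.
trait CachedComponent extends Component { self : LoggingDecoratorTrait =>
  private val cache = mutable.Map.empty[Int, Int]
  abstract override def doSomething(x : Int) : Int =
    cache.getOrElseUpdate(x, super.doSomething(x))
}

// Compiles: the self-type is satisfied by the logging trait.
val ok : Component = new ConcreteComponent with LoggingDecoratorTrait with CachedComponent

// Rejected by the compiler: self-type LoggingDecoratorTrait is not met.
// val bad = new ConcreteComponent with CachedComponent
```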

Summary and Disclaimer

We saw that trait linearization is a technique that subsumes the decorator pattern, in that it allows us to modify the behavior of objects without directly affecting them (i.e. without having to change their code). We don’t really need any fancy pattern, just abstract implementations of the interface methods. Yes, the implementations are “abstract”, because they rely on a super-call that is not defined in the trait itself. For this reason, we can completely replace all usages of the decorator pattern like this – or can we?

Actually, the title of this post was intentionally phrased in a provocative way, and indeed, we cannot fully deprecate the decorator pattern after all. However, having a clear understanding of the above approach allows us to clearly differentiate between usages of the pattern that are deprecated/avoidable/replaceable and those that aren’t: it all depends on whether or not the decoration happens at compile time! When you revisit the above code samples, you will notice that trait linearization is based on the declaration order of the traits given in the source code itself. This approach is simply not applicable if you want to extend behavior at runtime. Since we have a static type system, you cannot change an object’s type after it was created, so you can no longer mix in additional traits.

On the other hand, with the classic decorator pattern, we can dynamically modify behavior by simply wrapping existing objects in a new decorator object. With some more effort, we can even wrap and unwrap these to add or remove behavior at runtime. This is the real power of the decorator pattern – not static decoration! For static decoration, we can do better, as we have seen above: we can make more use of the compiler to enforce additional constraints and get a single, properly typed object at runtime.
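As a closing illustration, here is a sketch of the runtime flexibility that traits cannot offer: deciding whether to decorate while the program runs. The build helper and its verbose flag are made up for this example, and the intermediate Decorator base class is skipped for brevity (println stands in for the logger):

```scala
trait Component { def doSomething(x : Int) : Int }

class ConcreteComponent extends Component {
  override def doSomething(x : Int) : Int = x + 1
}

class LoggingDecorator(decoratee : Component) extends Component {
  override def doSomething(x : Int) : Int = {
    println("This is a logged message")
    decoratee.doSomething(x)
  }
}

// The decision to decorate is made at runtime, not in the source:
def build(verbose : Boolean) : Component =
  if (verbose) new LoggingDecorator(new ConcreteComponent)
  else new ConcreteComponent

build(verbose = true).doSomething(10)  // logs, then returns 11
build(verbose = false).doSomething(10) // returns 11 without logging
```

Because the wrapping happens through ordinary object composition, nothing stops you from reading the flag from a config file or toggling the decoration while the application is running, which is exactly the case the stackable-traits approach cannot cover.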