Embracing Change

(Or, “How I Learned to Stop Worrying and Love Dependency Injection”)

The dependency injection debate has continued to rage. Jacob posted a reply to Ayende’s two posts, which Ayende then responded to. Donald Belcham (the Igloo Coder) also got in on the act. The debate has basically centered on a few questions:

  1. Is there merit to dependency injection outside of unit testing and mocking?
  2. Is dependency injection alone enough to allow loose coupling?
  3. Is it dangerous to rely on a container to do “voodoo” to wire up objects?
  4. Doesn’t relying on interfaces violate YAGNI?

First off, I believe (as I’ve stated before) that there is a lot of benefit in dependency injection above and beyond unit testing. My development is not quite test-driven (yet), but I consistently rely on dependency injection to guide me in designing my software’s architecture. In my experience, the best feature of DI is that it makes it natural to separate concerns. Like most developers, I’m lazy, so if something is easy, it’s more likely to become second nature. These days, it’s very unlikely for me to accidentally violate the single responsibility principle (SRP).
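
To show what I mean, here’s a minimal sketch (every type name here is invented for the example) of the kind of class constructor injection tends to produce: it states what it depends on up front, and everything that isn’t its one job stays behind an interface.

    // Hypothetical types, for illustration only.
    public class Order
    {
        public decimal Total { get; set; }
    }

    public interface IOrderRepository
    {
        void Save(Order order);
    }

    public interface ILogger
    {
        void Info(string message);
    }

    // OrderProcessor declares its dependencies and keeps to a single concern:
    // processing orders. Persistence and logging live behind the injected interfaces.
    public class OrderProcessor
    {
        private readonly IOrderRepository _repository;
        private readonly ILogger _logger;

        public OrderProcessor(IOrderRepository repository, ILogger logger)
        {
            _repository = repository;
            _logger = logger;
        }

        public void Process(Order order)
        {
            _repository.Save(order);
            _logger.Info("Processed an order totaling " + order.Total);
        }
    }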

However, I think I haven’t been clear about one aspect of my stance: I also believe that dependency injection by hand is probably worse than using factories. As Jacob pointed out, by exposing the dependencies of a type, you can actually increase your coupling, because you’re requiring that any code that consumes your type also has knowledge of the type’s dependencies. Likewise, types that consume that type must know about the original type’s dependencies, and so on. In real-world scenarios, dependency injection by hand simply does not scale.
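
To make the scaling problem concrete, here’s a sketch of a hand-wired call site (again, all names are made up). The code only wants a report, but it has to know about the data source, the connection factory the data source needs, and the connection string the factory needs:

    // Hypothetical types, wired up by hand.
    public interface IConnectionFactory { System.Data.IDbConnection CreateConnection(); }
    public interface IDataSource { string[] FetchRows(string query); }

    public class SqlConnectionFactory : IConnectionFactory
    {
        private readonly string _connectionString;
        public SqlConnectionFactory(string connectionString) { _connectionString = connectionString; }

        public System.Data.IDbConnection CreateConnection()
        {
            return new System.Data.SqlClient.SqlConnection(_connectionString);
        }
    }

    public class SqlDataSource : IDataSource
    {
        private readonly IConnectionFactory _connections;
        public SqlDataSource(IConnectionFactory connections) { _connections = connections; }

        public string[] FetchRows(string query)
        {
            // Open a connection via _connections and run the query...
            return new string[0];
        }
    }

    public class ReportGenerator
    {
        private readonly IDataSource _dataSource;
        public ReportGenerator(IDataSource dataSource) { _dataSource = dataSource; }

        public string Build()
        {
            return string.Join("\n", _dataSource.FetchRows("select ..."));
        }
    }

    public class ReportPage
    {
        public string Render()
        {
            // This code only cares about ReportGenerator, but hand injection forces
            // it to know the whole chain underneath it. Every consumer of
            // ReportGenerator repeats this, and every change to the chain ripples up.
            var generator = new ReportGenerator(
                new SqlDataSource(
                    new SqlConnectionFactory("Server=.;Database=Reports;")));

            return generator.Build();
        }
    }

A factory at least hides that chain in one place, which is why I’d take factories over hand injection.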

Enter a dependency injection framework. In the past, I’ve referred to embracing a dependency injector as going Hollywood (from the Hollywood Principle: “don’t call us, we’ll call you”). If you go Hollywood, you submit yourself to the glitz and glamour of your injector, with its fancy XML mapping files or [Inject] attributes. Relying on a bunch of voodoo like reflection to wire up your objects might seem daunting at first, but it has a very important purpose: it effectively means you never have to think about how your objects need to communicate. You just need to know which objects need to talk to which. Tell the injector, and it will figure out the rest.
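
To take some of the mystery out of the voodoo, here’s a toy injector (nothing like a production one, and none of these names come from a real framework). The whole trick is to map interfaces to concrete types, then build objects by reflecting over their constructors and resolving each parameter recursively:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // A toy dependency injector, for illustration only.
    public class TinyContainer
    {
        private readonly Dictionary<Type, Type> _bindings = new Dictionary<Type, Type>();

        // Tell the container which implementation satisfies which service.
        public void Bind<TService, TImplementation>() where TImplementation : TService
        {
            _bindings[typeof(TService)] = typeof(TImplementation);
        }

        public T Get<T>()
        {
            return (T)Get(typeof(T));
        }

        private object Get(Type service)
        {
            Type implementation;
            if (!_bindings.TryGetValue(service, out implementation))
            {
                implementation = service; // assume concrete types are self-bound
            }

            // The "voodoo": pick the greediest constructor and recursively resolve
            // every parameter, so callers never spell out the object graph themselves.
            var constructor = implementation.GetConstructors()
                .OrderByDescending(c => c.GetParameters().Length)
                .First();

            var arguments = constructor.GetParameters()
                .Select(p => Get(p.ParameterType))
                .ToArray();

            return constructor.Invoke(arguments);
        }
    }

With something like this in place, consuming code shrinks to a handful of Bind calls at startup and a Get wherever an object is needed. A real injector does far more, of course: lifecycle management, supplying constants like connection strings from configuration, attributes or mapping files for the tricky cases, and so on.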

I can understand the reluctance to give up control over such a fundamental aspect of your software, but you do it already. There’s this big elephant of a framework called the CLR hiding behind our software, and no one is complaining (too loudly) about the things it does to and for us. Adding a dependency injector into the mix is just another step, and if it lets me be lazy about busywork like wiring up objects, then sign me up! I think there’s a preconceived notion about people who consider software from an architectural perspective, that somehow we are more interested in the elegance of the system than in what it does for its users. I can’t speak for everyone, but for me, I just recognize that a good architecture means I can spend more time adding features, and less time doing mundane tasks.

I’d say that there are different levels of commitment to dependency injection, too. You can use DI with or without programming against interfaces. I tend toward the former because, while it’s more work, it’s another tool that helps me to design my software. If I’m designing a service-oriented architecture, defining interfaces for each of my services makes it easier to keep the interaction points simple. In spite of my interest in DI, I’m still very much a proponent of information hiding, and it’s very difficult to “over-expose” a type when you have to add each of its members to an interface definition. Sure, I might never make another concrete implementation of an interface. But, if it’s there, I can take advantage of it if I need it. Not to mention that with a dependency injector, it’s perfectly natural to program against interfaces anyway — after all, there are no constructors to call!
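
As a sketch of what I mean (the service and its members are invented for the example), the interface becomes the type’s entire public face, so nothing gets exposed unless you deliberately put it there:

    // Hypothetical service contract, for illustration only.
    public class Customer
    {
        public int Id { get; set; }
        public string Email { get; set; }
    }

    public interface ICustomerService
    {
        Customer GetCustomer(int id);
        void ChangeEmail(int id, string newEmail);
    }

    // The concrete class can keep caching, helper methods, or extra constructors,
    // and none of it leaks: consumers are only ever handed an ICustomerService,
    // and with an injector they never call the constructor themselves anyway.
    public class CustomerService : ICustomerService
    {
        public Customer GetCustomer(int id)
        {
            // Load the customer from storage...
            return new Customer { Id = id };
        }

        public void ChangeEmail(int id, string newEmail)
        {
            // Persist the change...
        }
    }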

I made a comment, which has been repeated by Ayende, that dependency injection (particularly when used with interface-based development) makes your code easier to change. The tools that we have today for refactoring are great, but they aren’t magical. If you couple your types together too tightly, or jam all sorts of concerns into the same type, you will have a hell of a time separating them when it becomes necessary. Trust me. I’ve been there.

A great deal of effort in .NET has gone into supporting configuration. To me, programming against interfaces and using a dependency injector to wire everything up is a lot like putting connection strings in a configuration file. Sure, you can hardcode constants. They’re not going to change today, or maybe even for the foreseeable future. But they will. Given enough time, all requirements change. I’d rather plan for the inevitable.
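
In code, the analogy is just this (the “MainDb” name is made up; the entry would live under connectionStrings in app.config or web.config):

    using System.Configuration;

    public static class Connections
    {
        public static string Main
        {
            get
            {
                // Pulled from configuration at runtime, so swapping databases is a
                // config edit rather than a recompile.
                return ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
            }
        }
    }

Programming against interfaces plays the same role for behavior: the concrete binding is something you declare in one place instead of baking it into every consumer.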

I think what everyone is missing is that this is not a zero-sum game. No one is saying that if you don’t use dependency injection, your software will turn into a big ball of mud, your friends will abandon you, and your dog will bite you when you try to pet it. Likewise, no one is saying that if you embrace dependency injection, your code will turn into a symphony of light, become sentient, and begin to write itself while you nap at your desk. It’s just a principle to follow, like information hiding or single-responsibility. It can be misused, but if used correctly, it can and will change the way you think about software — and I think for the better.