
Test Driven?

TDD is a practice that I have found great success with. I have also found that as I became proficient in it, the name changed its meaning.

TL;DR

TDD, like any skill, requires time to learn and master. The meaning of the last ‘D’ in the abbreviation changes as your mastery grows. As the practice becomes increasingly rote, TDD starts to become less about development and more about design.

Setup

It drives me crazy when I hear someone say “TDD is easy”. For me TDD was anything but easy. In fact, it is still easier for me to revert to coding without tests, even though I have been doing nearly everything test driven for the last 10 years.

It took me 2 years to get the hang of TDD, and 3 more to feel like I started to master it. During this time, I noticed a profound difference in my code. I spent less and less time trying to figure out what my design was. It emerged and coalesced on its own. … Well not exactly on its own.

Development

When I started, I would think about what I wanted the design to be, and write the test to prove out that design. This led me down quite a few rabbit holes and wrong decisions. The good news was it allowed me to get better at refactoring.

As I started to get proficient in writing tests, my decisions became cleaner. My code never did, but my thought process cleared up. I would stop and think about my tests. What does the next test highlight? What part of my design is not yet implemented? (And yes, I still got surprised when things turned out never needing to be implemented.) Here the tests were driving the development of my design. I was doing Test Driven Development.

Design

As I became even better at writing tests, I found that the questions I was asking were laborious and time-consuming. I simplified them to two questions.

  1. Is this the API I want?
  2. What expectation is not met?

I found that as I wrote the tests to satisfy these two questions, the design would emerge. Now a fundamental change had occurred. I was using the tests to drive my design.
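Here is a rough sketch of those two questions in action, in TypeScript. The calculator, its discount rule, and the numbers are all invented for illustration; the point is the order in which things are written.

```typescript
import { strict as assert } from "node:assert";

// Question 1: "Is this the API I want?" I write the call I wish existed.
// Question 2: "What expectation is not met?" I state it as an assertion.
function bulkOrdersGetTenPercentOff(): void {
  const total = priceFor({ unitPrice: 10, quantity: 100 });
  assert.equal(total, 900); // 10% off once the quantity reaches 100
}

// Only after the test exists do I write just enough code to satisfy it.
interface Order {
  unitPrice: number;
  quantity: number;
}

function priceFor(order: Order): number {
  const gross = order.unitPrice * order.quantity;
  return order.quantity >= 100 ? gross * 0.9 : gross;
}

bulkOrdersGetTenPercentOff();
console.log("bulk order discount: ok");
```

The test is written against the API I want to exist, and the implementation follows from the expectation the test states.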

 


Kerney’s Hierarchy of Good Design

Talking with a friend made me think about what I use to classify my software architecture designs as good designs, and why. I realized I have a hierarchy of design criteria I apply to every solution to rate how good it is. The more a design conforms to this list, the better it is. The items at the top are more important than the items at the bottom. Not every design meets all criteria. In applying these criteria to a design, I will never give up an upper item to make a design that satisfies a lower item. For example, if removing duplication would make a class less explicit, I will not remove that duplication until I find a better design.

Here is my hierarchy, my reasons follow below:

  1. Tested Code is better than untested code
  2. Explicit code is better than non-explicit code
  3. Non-repetitive code is better than duplicated code
  4. SOLID code is better than code that is not SOLID
  5. Uncluttered Code is better than Cluttered Code

The reasons that follow all boil down to one fact: a good design is one that is easily changed to handle new requirements or lessons about how the system should behave.

My Reasons:

A. Tested Code is better than untested code

I believe that code under test is inherently better designed than code which is not tested, even if it is badly designed code that is hard to read and inappropriately abstracted. The reason for this is that if the code is tested, I can change it. I have some degree of certainty that my changes do not break intended functionality.

I seldom see the correct architectural solution the first time I do something. In fact, I seldom have it by the third. However, if I have a complete suite of tests backing me up, I can, and have, changed the architecture drastically based on new understanding and need. The architecture I choose initially seldom resembles what I end up with.

So having tests verifying that the behavior does not change is invaluable to me. It lets me ensure my solution is correct without relying on the particular architecture I am using.
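As a small, hypothetical illustration, a test like this pins behavior rather than architecture, so the code behind it is free to change:

```typescript
import { strict as assert } from "node:assert";

// The test talks only to the public behavior, so the code behind
// totalInCents can be restructured freely as long as this keeps passing.
function totalInCents(lineItems: Array<{ cents: number }>): number {
  // Today this is a simple reduce; tomorrow it could delegate to a
  // pricing module with a very different shape. The test does not care.
  return lineItems.reduce((sum, item) => sum + item.cents, 0);
}

assert.equal(totalInCents([{ cents: 250 }, { cents: 125 }]), 375);
console.log("total calculation behavior preserved");
```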

B. Explicit code is better than non-explicit code

Code that clearly states what it is doing in English, or your native language, is better designed than anything that requires interpretation of commands. If a programmer can read a method as if it were a paragraph and gain insight into its intent, then the programmer is better armed to change that method.

Being explicit about intent reduces a large number of errors by reducing what a programmer has to keep track of in their head. The more things someone has to juggle to understand the impact of a change, the more likely that person is to make a mistake.
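A tiny, made-up example of the difference:

```typescript
// Requires interpretation: what do d and 7 mean?
function chk(d: number): boolean {
  return d > 7;
}

// States its intent in English; the reader is armed to change it safely.
const RETURN_WINDOW_IN_DAYS = 7;

function isOutsideReturnWindow(daysSincePurchase: number): boolean {
  return daysSincePurchase > RETURN_WINDOW_IN_DAYS;
}

console.log(chk(10), isOutsideReturnWindow(10)); // both true, only one readable
```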

C. DRY code is better than duplicated code

DRY stands for “Don’t Repeat Yourself” and is about not having duplicated lines of code. Having 100% DRY code is often impossible, and striving for it is often a waste of time. However, my general rule is that removing duplication is always better than leaving it there. I go to great lengths to remove duplication when I find it.

The reason is that duplication often means that my code may change in unexpected ways. If one piece of logic changes and subsequent ones do not, I may not understand the full impact of my change. I may forget something and leave a lingering bug. Worse yet is this bug may take years to find.

When I have all similar logic in a single place, everything that touches that code changes every time that code does. I have a better understanding of the impact of a change. I can ensure that everything changes as needed when all the changes happen in a single place.
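A sketch of what removing that duplication looks like; the validation rule here is invented:

```typescript
// Before: the "non-empty, trimmed" rule lives in two places and can drift.
function saveUserName(name: string): string {
  if (name.trim().length === 0) throw new Error("user name is required");
  return name.trim();
}

function saveCompanyName(name: string): string {
  if (name.trim().length === 0) throw new Error("company name is required");
  return name.trim();
}

// After: one home for the rule, so every caller changes when it changes.
function requiredTrimmed(value: string, field: string): string {
  const trimmed = value.trim();
  if (trimmed.length === 0) throw new Error(`${field} is required`);
  return trimmed;
}

console.log(saveUserName(" Pat "), saveCompanyName(" Acme "));
console.log(requiredTrimmed(" Pat ", "user name"));
```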

D. SOLID code is better than code that is not SOLID

SOLID is a set of principles for “Object Oriented Programming” defined by Robert (Uncle Bob) Martin. Code that adheres to each of the principles is said to have the qualities that make it object oriented. Each principle has its own reason for being in my hierarchy, but the whole is here because together they define whether something is actually a well-behaved object oriented design.

Just as with DRY code, it is not reasonable to expect something to conform 100% of the way. But things that do adhere to these principles are better designed in my book.

1. Single Responsibility Principle

Single Responsibility Principle (SRP) states that any piece of code should have one and only one reason to change. If the constructor of a class changes, that should not cause code using other parts of that class to change.

SRP is about appropriately isolating dependencies. It is used to measure good design because code that has the appropriate level of isolation required by SRP is easier to understand and easier to change.
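A minimal sketch of that isolation, with invented classes: formatting a report and delivering it change for different reasons, so they get different homes.

```typescript
class ReportFormatter {
  format(lines: string[]): string {
    return lines.join("\n");
  }
}

class ReportSender {
  constructor(private readonly send: (body: string) => void) {}

  deliver(body: string): void {
    this.send(body);
  }
}

// Changing how a report looks never forces a change to how it is sent.
const formatter = new ReportFormatter();
const sender = new ReportSender((body) => console.log(body));
sender.deliver(formatter.format(["line one", "line two"]));
```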

2. Open Closed Principle

Open Closed Principle (OCP) states that an object should be open for extension and closed for modification. The heart of OCP is that if new behavior is required in the system, the programmer is able to extend the base objects to get this behavior without modifying that base object.

When a design adheres to the Open Closed Principle, it allows for easier modification that does not ripple through the system. Those changes are isolated to the new subclasses, and their introduction can be controlled.
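A sketch of that kind of extension; the discount rules are made up:

```typescript
interface DiscountRule {
  apply(total: number): number;
}

class NoDiscount implements DiscountRule {
  apply(total: number): number {
    return total;
  }
}

// New behavior arrives as a new class; nothing existing is edited.
class HolidayDiscount implements DiscountRule {
  apply(total: number): number {
    return total * 0.85;
  }
}

function checkout(total: number, rule: DiscountRule): number {
  return rule.apply(total);
}

console.log(checkout(100, new NoDiscount()));      // 100
console.log(checkout(100, new HolidayDiscount())); // 85
```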

3. Liskov Substitution Principle

Liskov Substitution Principle (LSP) states that any subclass must be able to be substituted for its base class without breaking calling code. The easiest way to explain this principle is by giving an example of breaking it. If Square inherits from Rectangle, this principle is violated because some calling code may try to set the length and width separately. With a square this is not possible, but with a rectangle it is.

A design that adheres to the LSP is better than one that does not, since LSP eliminates a large potential for bugs and the code that defends against them. When that defensive code disappears, the code becomes easier to understand.
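Here is the square/rectangle violation sketched out; the caller’s reasonable assumption about a Rectangle is broken by the Square substitute:

```typescript
class Rectangle {
  constructor(protected _width: number, protected _height: number) {}

  set width(value: number) { this._width = value; }
  set height(value: number) { this._height = value; }

  area(): number { return this._width * this._height; }
}

class Square extends Rectangle {
  constructor(side: number) { super(side, side); }

  // A square must keep both sides equal, which breaks the caller's
  // assumption that width and height can be set independently.
  set width(value: number) { this._width = value; this._height = value; }
  set height(value: number) { this._width = value; this._height = value; }
}

function stretch(shape: Rectangle): number {
  shape.width = 4;
  shape.height = 5;
  return shape.area(); // any true Rectangle returns 20
}

console.log(stretch(new Rectangle(1, 1))); // 20
console.log(stretch(new Square(1)));       // 25: the substitution broke the caller
```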

4. Interface Segregation Principle

Interface Segregation Principle (ISP) states that no class should accept an interface with methods on it that the class does not use. Simply put, it is better to have many specialized interfaces than a smaller number of generic ones.

Many specialized interfaces are a better design than fewer, more generalized interfaces because they limit the number of reasons a piece of code may have to change. If code relies on a more generalized interface, then when that interface changes it may impact that code without reason. Specialized interfaces also clearly divide the use cases and make the code more readable.
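A sketch with invented interfaces: the report below only asks for the reading half, so changes to the write side never touch it.

```typescript
interface Readable {
  read(id: string): string | undefined;
}

interface Writable {
  write(id: string, value: string): void;
}

// One implementation may offer both...
class InMemoryStore implements Readable, Writable {
  private data = new Map<string, string>();

  read(id: string): string | undefined {
    return this.data.get(id);
  }

  write(id: string, value: string): void {
    this.data.set(id, value);
  }
}

// ...but this consumer depends only on what it actually uses.
function buildReport(source: Readable, ids: string[]): string {
  return ids.map((id) => source.read(id) ?? "missing").join(", ");
}

const store = new InMemoryStore();
store.write("a", "alpha");
console.log(buildReport(store, ["a", "b"])); // "alpha, missing"
```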

5. Dependency Inversion Principle

Dependency Inversion Principle (DIP) states that classes should not directly depend on other classes, but should instead depend on abstractions such as interfaces.

If classes depend on interfaces instead of concrete classes, they are easier to modify and do not change just because an implementation changes. Instead, a change must be made to the interface before it can affect the code that depends on it.
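A minimal sketch (the names are mine): OrderService knows only the Notifier interface, so swapping the console for email, SMS, or a test double never forces a change to OrderService.

```typescript
interface Notifier {
  notify(message: string): void;
}

class ConsoleNotifier implements Notifier {
  notify(message: string): void {
    console.log(`NOTIFY: ${message}`);
  }
}

class OrderService {
  // The dependency is on the abstraction, not on ConsoleNotifier.
  constructor(private readonly notifier: Notifier) {}

  placeOrder(orderId: string): void {
    this.notifier.notify(`order ${orderId} placed`);
  }
}

new OrderService(new ConsoleNotifier()).placeOrder("42");
```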

E. Uncluttered Code is better than Cluttered Code

There are a lot of things that clutter code: poor formatting, unnecessary comments, and bad variable naming, along with a whole host of other things. Clutter confuses intent and therefore represents bad design. When code is uncluttered it is easier to understand and therefore easier to change.
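A small before and after, with an invented function, of how clutter hides intent:

```typescript
// Cluttered: noise comments, throwaway names, cramped formatting.
function f(x: number[]) {
  // loop over the array
  let t = 0; for (let i = 0; i < x.length; i++) { t = t + x[i]; } // add
  return t;
}

// Uncluttered: the same behavior, stated plainly.
function sumOf(values: number[]): number {
  return values.reduce((total, value) => total + value, 0);
}

console.log(f([1, 2, 3]), sumOf([1, 2, 3])); // 6 6
```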

F. Honorable Mention

An honorable mention needs to be given to design patterns. The reason they only get an honorable mention is that they do not, in and of themselves, represent good design. Instead they are a side effect of good design.

I should probably explain what a design pattern is. A design pattern is a common way people have solved similar problems in software. If the same architecture is found in a number of successful projects, it is deemed a pattern.

The risk here is that design patterns are a side effect of good design, not a measure of good design. If we identify a problem that has a pattern and then architect our solution from that pattern, we run the risk of over-engineering our solution.

However, if we apply the filters listed above and when we are done we notice a design pattern… Well that could be an indication of being on the correct path.

Knowing design patterns is helpful and useful. They allow us to recognize a path once we are on it. They are good for navigation during the journey. We should not use them as the goal of the journey.

 


These represent the measures I use for determining the validity of my designs. As a friend of mine used to say: “Everything in moderation, even moderation.” I do not use these like laws. Instead I use them like shading in a picture. They are applied with enough vigor to enhance what I am working on and no more.