April 19, 2015

Scrum advantages explained – in 2 simple whiteboard sketches (part 2 of 2)


Imagine you had only ten minutes and a whiteboard to convince people to introduce Scrum in your organization. What would you say and show? I would try to capture the essence of how Scrum improves your software development process. Two whiteboard sketches should be enough for that. Check out sketch one first. Here’s sketch two.

Scrum is founded on the same pillars other agile methodologies consider crucial factors for successful software development – people and knowledge.

In this second post I’d like to show how Scrum embraces uncertainties and supports knowledge building by deferring decision making. This is what my second and last whiteboard picture illustrates:

Traditional vs. Scrum approach to knowledge building in a whiteboard sketch

The “Traditional” approach

In a “traditional”, waterfall-based project, all decisions are made up front. It is laid out exactly what will be done, how it will be done, which technologies and tools will be used, how the team will work together and with the customer, and exactly how long everything will take. This is what the sketch illustrates:
  • At the beginning of the project, all decisions are made for the entire length of the project. Detailed effort estimates are made for every single forecasted work item. These initial decisions are irrevocable as the project planning is hard-wired to match the original effort estimates.
  • After the project has started, a few adjustments can still be applied to the initial planning, but more profound changes which would endanger that planning are impossible; and shortly thereafter, no further decisions will, or can, be made.
  • Even if the project’s duration is separated by milestones which provide a means to inspect, they do not provide a means to adapt as the project must go on based on the initial planning.
Unfortunately, this is exactly the opposite of how knowledge building throughout the project works. This is especially unfavorable as knowledge is the single basis for the decision-making process. The whiteboard sketch illustrates this very clearly:
  • At the beginning of the project, knowledge is naturally zero. We do not know anything about the customer or what he wants, and we potentially do not know the project’s technological base or how the team’s individuals will work together.
  • However, after the team started to work on the project, as it collaborates and as it interacts with the customer, it will quickly build up knowledge about the stakeholders, tools and processes involved in the project. Its knowledge will typically increase sharply right after beginning, and continue to grow until it reaches a high, stable level.
Now, the problem with this project model becomes apparent. To sum it up, all decisions are made when there is the least amount of knowledge in place; and as soon as there is more knowledge, no more decisions will or can be made. That does not sound like a good plan!

The Scrum approach

Scrum, on the other hand, embraces uncertainty at project start, and allows the team to adapt its strategy as knowledge is gained. Let’s at first observe the knowledge building process in Scrum:
  • There is no difference from a traditional project, and there is no magic involved. The team will build up knowledge as it works on the project. This knowledge is gathered in an empirical process.
 However, Scrum addresses the process of how knowledge is naturally gathered by adapting the way decisions are made:
  • There is no need nor use for a detailed overall plan. At the beginning of the project, only those decisions are made which are required to perform the work of the first Sprint. (In reality, in the first Sprint, there are potentially a few more decisions made than in later Sprints.)
  • As in a waterfall-based project, throughout the Sprint there are fewer and fewer opportunities to make decisions and adapt. This is okay, as only a bare minimum of decisions is involved in the Sprint’s initial planning, none of which can endanger the entire project, and as a single Sprint will not last longer than four weeks.
  • At the end of the Sprint, the team’s performance will not only be inspected, but the strategy and planning can also be immediately adapted as the team can now, based on its experiences in the current Sprint, plan the next Sprint.
  • Dividing the project into Sprints obviously is key to the success of this model. Each Sprint offers the opportunity to both inspect and adapt. Thus, each Sprint could in fact be considered a small (waterfall) project of its own!
Scrum applies the agile practice of positive procrastination: decision-making and commitment are deferred until the greatest possible amount of knowledge, essential to back any decision, has been gained. It also reduces risk by only allowing decisions about the immediate future and within a well-defined, confined scope. This process model really values knowledge as the key ingredient of software project management. Knowledge is fed back into the process frequently, and knowledge building is highly encouraged and thus potentially enhanced.

Conclusion

As these two articles have shown, Scrum’s advantages for managing two major driving forces of software development – people and knowledge – are tremendous.

Scrum gives responsibility and authority to those who have the knowledge and eligibility to perform the actual work: the Development Team, and the customer as the domain expert and Product Owner. It facilitates communication and collaboration within the Development Team, and between the Development Team and the Product Owner (customer). This is key to increasing transparency, thus helping to build trust, lower risk, and increase Team motivation.

Scrum embraces the complexity and uncertainty which come with software development, most notably with custom software development. It accepts an initial lack of knowledge and experience, allowing the Team to learn from its mistakes along an empirical path, whilst considerably lowering the risk of failure by enforcing clear, time-boxed scopes for planning and decision making.

These are, in my opinion, two very strong points for Scrum and Agile. They make it very clear that any waterfall-based approach is ill-suited for the shaky business of software development.

Please let me know in the comments section whether you agree with my position. I am also highly interested in hearing how you would try to convey the essence of Scrum “in ten minutes or so” – or even how you would argue against adopting Scrum.





Scrum advantages explained – in 2 simple whiteboard sketches (part 1 of 2)




Imagine you had only ten minutes and a whiteboard to convince people to introduce Scrum in your organization. What would you say and show? I would try to capture the essence of how Scrum improves your software development process. Two whiteboard sketches should be enough for that. Here’s sketch one.

Scrum is founded on the same pillars other agile methodologies consider crucial factors for successful software development – people and knowledge.

In this first post let’s see how Scrum transforms the interaction of the people involved in a software development project. I would illustrate it like so:

Traditional vs. Scrum approach to team organization in a whiteboard sketch

The “Traditional” approach

In a “traditional” software project, all responsibility is kept by the project leader:
  • The project leader coordinates the work of the developers, he/she plans the project, assigns tasks and coordinates communication with the customer.
  • The developers have no responsibility. They work on their assignments. They provide the project leader with their knowledge in order to help him/her make decisions.
  • The customer supplies the project leader with product feature requests.
This kind of project structure comes in several flavors. In some of them, there are multiple project leaders, or business analysts take the role of project leaders. They all share the same drawbacks:
  • Developers and the project leader are kept in a deadlock situation of mutual dependency, for the project leader cannot make decisions without the knowledge provided by the developers, whilst the developers have no use for their knowledge as they are unable to make decisions of their own.
  • The project leader acts as a proxy between the customer and the developers, thus becoming an information bottleneck. In this situation, project leaders tend to (rightfully) judge their own position higher than the customer’s, and to keep control of the project by selectively choosing which information exchanges between customer and developers to allow and which to cut.
  • As all information must pass through the project leader, he alone is able to build knowledge he can actually use. Neither the customer nor the developers will be able to see the project’s “big picture” or to learn from each other.

The Scrum approach

In Scrum, roles are designed entirely differently:
  • Developers act as a self-organizing team. They share information directly and transparently with each other as well as with the customer.
  • There is no such role as the project leader, as there is no need for it in a self-organizing team. There is, however, the role of the Scrum Master, who is responsible for ensuring that the team understands and follows the rules of Scrum, and who shields the team from external impediments.
  • In Scrum, the equivalent of the customer’s role is the Product Owner role. He provides product feature requests directly to the Development Team and freely exchanges information with it.
This setup offers the following advantages:
  • The presence of the Scrum Master relieves the team of all administrative burden, allowing the Development Team to fully concentrate on the software development process.
  • The Product Owner has full, transparent access to all relevant information. He knows the developers in person and is thus able to build a bond of trust with them. He, and he alone, with the help of the Development Team, determines the outcome of the project. The project really is built to fulfill his very needs.
  • The Development Team shares, as a unit, responsibility and knowledge within the project. They are able to learn from their experience and to communicate directly with the customer.
In Scrum, responsibility lies with the very individuals who have the knowledge to exercise it. This is a highly motivating, highly productive work environment. Transparency, and thus traceability, is maximized. There is no “single point of failure”. Interdependencies are minimized.

In part 2, I sketch another great advantage of the Scrum methodology – concerning knowledge.



April 12, 2015

Java EE 7 / Java EE 6 Bean types overview





In order to facilitate understanding of the many notions of “JavaBeans” and of the many means of dependency injection Java EE 7 offers, I created a demo web application which compares the five JavaBean specifications in action. See the results here or run the demo on your own server!

TL;DR? Jump into the demo immediately!

Modularity overkill in Java EE 7

Back in 2013, when the Java Community Process (JCP) released the Java Platform, Enterprise Edition (Java EE) 7 specification (Java Specification Request JSR 342), it promised to further ease the development of Java EE applications by providing a standardized, vendor-agnostic, modular, lightweight, straightforward API based on core concepts such as convention over configuration and inversion of control through dependency injection.

Not only were most of these promises kept, but one of them in particular was even surpassed: the framework is arguably too modular. The specification – which actually is an umbrella specification for 30+ JSR specs that deal with specialized aspects of EE development (service interfaces, O/R mapping, persistence, …) – contains some JSRs which, for sort of “historical” reasons, have overlapping responsibilities, and in some cases even terminology or API naming conflicts.

This is most obvious and most severe in the specification’s core concept of dependency injection: There are now five notions of “JavaBeans”, injectable components, each with its own means of dependency injection:
  • JSR-316 / JSR-250: Managed Beans
  • JSR-330: Dependency Injection for Java
  • JSR-346 (formerly JSR-299): Contexts and Dependency Injection for Java EE (CDI)
  • JSR-344 (formerly JSR-314): JavaServer Faces (JSF)
  • JSR-345 (formerly JSR-318): Enterprise JavaBeans (EJB) Lite
Some of them are based upon each other while others are mutually incompatible. Many of them have overlapping responsibilities, and between some of them there are even API name clashes. The fact that some specifications are actually updates of Java EE 6 specifications, so that the formerly well-known Java EE 6 JSR number simply changed for the same module, didn’t make things easier either.
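To make the clash tangible, here is a minimal sketch (my own illustration with hypothetical names, not code from the specs or from the demo) of how one and the same class could be declared under each of the five specifications:
import javax.enterprise.inject.spi.BeanManager;
import javax.inject.Inject;
import javax.inject.Named;

// Only one of the following component-defining annotations would typically be
// active at a time; the commented-out lines show the alternatives.
@Named                                // JSR-346 (CDI) bean, addressable via EL
// @javax.annotation.ManagedBean      // JSR-316 / JSR-250 Managed Bean
// @javax.faces.bean.ManagedBean      // JSR-344 (JSF) Managed Bean: same simple name, different package!
// @javax.ejb.Stateless               // JSR-345 (EJB Lite) stateless session bean
public class GreetingBean {
    @Inject                           // JSR-330: the generic injection point annotation
    private BeanManager beanManager;  // CDI's BeanManager is itself injectable
}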

Judging by the apparently high number of questions on Stack Overflow and other platforms, this left many EE developers confused, especially as to when to prefer one bean type over another, and how to properly use them in conjunction. The overall picture was missing, or the specification apparently failed to convey it to developers.

I was one of those developers, and I decided to grab as much information about this topic as I could and build a demo application which provides an overview and a comparison of the five standards of dependency injection in Java EE in action. Now that I also have my own blog, I decided to post it online.

Comparison of the five standards for Dependency Injection in Java EE 7 / Java EE 6 Web Profile

This Java EE 7 demo web application compares the five standards for Dependency Injection available on the platform. For each module, basic information about the standard is provided; there are tests for JavaBean lookups; and there are tests for injecting each JavaBean type in any other JavaBean type.

The various JavaBean lookup tests are deliberately not implemented as “classic” JUnit tests; instead, they are implemented in JSF backing beans and executed during Bean initialization. This technique ensures that the lookup tests are actually executed by the container, and nothing is mocked away.
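As a rough sketch of that technique (with illustrative names, not the demo’s actual classes), such a backing bean might look like this:
import javax.annotation.PostConstruct;
import javax.inject.Named;
import javax.naming.InitialContext;
import javax.naming.NamingException;

@Named
public class LookupTestBean {
    private String result;

    @PostConstruct
    public void runLookupTest() {
        // runs inside the real container during bean initialization; nothing is mocked
        try {
            Object bean = new InitialContext().lookup("java:module/SomeBean");
            result = "lookup succeeded: " + bean.getClass().getName();
        } catch (NamingException ex) {
            result = "lookup failed: " + ex;
        }
    }

    // rendered by a JSF page, e.g. via #{lookupTestBean.result}
    public String getResult() {
        return result;
    }
}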

This demo application is based on the so-called Java EE 7 Web-Profile which includes a subset of the full Java EE 7 module stack.

Currently, the only officially Java EE 7 Web Profile-compatible application servers are GlassFish 4.0+ and WildFly 8.x.

On GlassFish

Click here to see the results when running on GlassFish Server Open Source Edition 4.1.

GlassFish is the Java EE 7 (JSR 342) reference implementation.

On Payara

Click here to see the results when running on Payara Server Open Source Edition 4.1. It matches the GlassFish output.

Payara is a distribution of GlassFish with commercial support.

On WildFly

Note: When running the original application on WildFly 8.2.0, exceptions will be thrown when trying to inject a JSR-316 / JSR-250 ManagedBean into either a JSR-330 “Guice” Bean or a JSR-346 CDI Bean. One has to manually remove the respective @Resource annotation at the injection point.
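To illustrate, the affected injection point looks roughly like this (hypothetical names, not the demo’s actual classes):
import javax.annotation.ManagedBean;
import javax.annotation.Resource;
import javax.inject.Named;

@ManagedBean
class SomeJsr316Bean { }             // a JSR-316 / JSR-250 Managed Bean

@Named
public class CdiConsumer {
    @Resource                        // deploys on GlassFish; has to be removed for WildFly 8.2.0
    private SomeJsr316Bean bean;
}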

Click here to see the results when running on WildFly 8.2.0.

Because of the problem described above, those two Bean injections fail. The output thus differs from the GlassFish output.

LIVE on WildFly

Click here to see a LIVE demo of the application running on WildFly 8.2.0 (hosted on OpenShift).

The same limitations explained in the previous section apply.

On your own server

Please feel free to download this demo application from GitHub and deploy it to your own server instance.

Lessons learnt?

I created this demo application to help myself answer these questions: What are the differences between the various JavaBean types? How do the various annotations play together? And, most importantly: when should I use which JavaBean type?

I think the demo application does a great job of helping to find answers to these questions. I encourage you to study the application and to come to your own conclusions. I will probably post my personal conclusions in a future blog post. In this post, I’d like to present the facts only.

However, there are already many in-depth articles and discussions available online where Java EE / JavaBeans best practices are explained. You may find that they are also backed by the results of this demo application. Some of the most insightful online resources include the ones listed here.

(Note: Some of these articles are based on previous Java EE versions.)

Conclusion

I like Java EE 7. It brings everything Java EE out-of-the-box with a single Maven dependency, its programming model is lean and centered around an annotation-based “convention over configuration” design, and through its standardization, it really promotes “write once, run everywhere” (it’s still up to the server vendors to fulfill this promise).

But there are, in my opinion, serious design flaws in the platform, and studying its inversion of control mechanism reveals several of them. Really, having five standards for dependency injection is just pure overengineering. This is not DRY, this is re-inventing the wheel, and this is bad software design.

On top of that, there are several ambiguities and conflicts arising from similar names of concepts and components, the most severe of which are probably the two @ManagedBean annotations (which, ironically, are both kind of useless), class name clashes for scope annotations, and a confusing split into two similar but incompatible transaction mechanisms.

As an end user of the API, I’m not interested in the entire Java EE back-story. I want a concise, uniform, lean API. There’s just too much history and politics leaking through the current state of the Java EE 7 API. I wish that a future version of Java EE will see the JCP expert groups involved come together with the big picture in mind. Or at least, please hand out more practical API manuals.

Please let me know in the comments section below whether you found this demo application helpful or whether it lacks important information. Even though I worked on this application with great care, inaccurate information may have sneaked in. If you find any errors or wrong information, please report them here so I can update the application, or open a pull request on the GitHub repository. I’ll also be interested to hear whether you concur with my opinion on Java EE modularity; if you have other helpful guidelines, please let me know.

April 5, 2015

6 rules of exception handling by example (part 2 of 2)


In this blog post, I present six rules I consider crucial to exception handling design. I will illustrate each one of them with an anti-pattern and a “best practice” solution in order to show the trouble caused by improper exception handling. Make sure to check out the first part here.

Checked exceptions for business errors; unchecked exceptions for technical errors

The differentiation of checked and unchecked exceptions still seems to cause trouble for many developers, at least in my experience, and this is very bad. If you get this one wrong, you may end up with a codebase which is even harder to maintain than if any of the other anti-patterns were applied.

I will not cover here the technical differentiation of checked and unchecked exceptions, as this is covered exhaustively by basic Java tutorials. Instead, let me again explain how to implement proper exception handling design by comparing the anti-pattern and the best practice approach in action.

First of all, here again is the rule we must obey:
  • Use checked exceptions (sparingly) for business errors: These are expected errors that the caller can and must handle.
  • Use unchecked exceptions for technical errors: These are either unexpected errors that the caller cannot handle sensibly other than throwing them up in the call chain; or they are exposing the caller to technical internals he is not interested in or not even allowed to know.
It is, however, critical to understand that the notion of what is “business logic” and what is “technical logic” is highly context-dependent. Depending on the layer of abstraction you’re working on, this attribution will change. If, for example, you’re working on file access logic, retrieving a file clearly is business logic; whereas if you’re working on a chat application which, as a side effect, writes application log messages to a log file, accessing that file may be a purely technical issue.

The anti-pattern

Terrible things happen if you do not follow this rule. Let’s observe this example we're already familiar with:
import java.io.File;
import java.io.FileNotFoundException;

public class CheckedUncheckedClass {
    private static final String FILE_PATH = "my/file/path.txt";
    
    private static File tryLoadFile() throws FileNotFoundException {
        // let's assume file loading failed
        boolean fileLoadedSuccessfully = false;
        
        if (fileLoadedSuccessfully) {
            return new File(FILE_PATH);
        }
        else {
            throw new FileNotFoundException("Could not load file with path " + FILE_PATH);
        }
    }
    
    public static String getFileName() throws FileNotFoundException {
        File file = tryLoadFile();
        return file.getName();
    }
}

Let’s say this example applies a “use checked exceptions wherever possible” policy. In my experience, it’s a widespread misconception that using checked exceptions increases application robustness. Quite the contrary: this strategy can seriously diminish robustness whilst increasing component interdependence.

Here, tryLoadFile() throws a FileNotFoundException, which is a checked exception. This is reasonable: here, file handling is business logic. In the getFileName() method, however, according to this class’s exception handling policy, that checked exception is thrown up to the caller. This has two undesired consequences:
  • The conceptual problem: The caller of getFileName() is exposed to internals of file handling he should not need to know.
  • The software design problem: The caller cannot sensibly react to that exception. According to that policy, he can now only either throw it up further, or process it, e.g. by logging.
If the client opts for throwing the exception up, the problem really is just shifted up the stack. This reveals the true design debt of checked exceptions: they create implicit interfaces. The entire call stack is linked by the throws declarations. This really is viral behavior by design. It maximizes coupling and diminishes maintainability. Changing the exception type one day (e.g. using an unchecked type instead) would break the complete call chain and every class involved. This is truly not an option.
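Here is a small sketch, with illustrative names, of that viral effect:
import java.io.FileNotFoundException;

public class ThrowsChain {
    String readConfig() throws FileNotFoundException {
        throw new FileNotFoundException("config.txt");
    }

    // every caller up the stack is forced to repeat the declaration...
    String buildGreeting() throws FileNotFoundException {
        return "Hello, " + readConfig();
    }

    // ...so changing readConfig()'s exception type one day breaks every signature above it
    void render() throws FileNotFoundException {
        System.out.println(buildGreeting());
    }
}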

If, on the other hand, the client decides to process the exception, e.g. by logging it, things get even worse. Let’s assume that the client implements exception logging like so:
import java.io.FileNotFoundException;

public class CheckedUncheckedCaller {
     public void callBusiness() {
          String fileName;
          try {
                fileName = CheckedUncheckedClass.getFileName();
          } catch (FileNotFoundException ex) {
                DummyExceptionHandler.showMessage(ex.getMessage());
          }
     }
}

At the very moment he logs the exception, any information about it is lost. If the CheckedUncheckedCaller class is not the root class of our entire application, which we assume it is not, any subsequent caller of CheckedUncheckedCaller#callBusiness() would have no information about whether that method executed successfully or whether a FileNotFoundException occurred. Yes, logging an exception actually is equivalent to swallowing an exception! If you’re lucky, your logging works and prints the stack trace to the application / server log, but potentially, there’s no trace left of your exception. Also, this code is not testable, as the test (just like any caller) would have no information about the exception occurrence.

But here’s the thing: Let’s imagine a NullPointerException happened somewhere inside the callBusiness() execution. As it’s an unchecked exception, it would actually run through your entire call stack (given you didn’t catch it somewhere) and thus trigger proper exception handling, namely crashing the application (or rather, triggering some central logging; see the last section about AOP for details). This means that unexpected, unchecked exceptions would actually trigger proper exception handling whilst the exceptions you are painstakingly observing get lost. This is less than pleasant.

The solution

We already discussed in detail how both of the following strategies fail when dealing with checked exceptions:
  • Throwing them up. This couples caller and callee tightly by an implicit interface.
  • Processing them, e.g. logging them. This is essentially exception swallowing.
Thus, the only option left is really to wrap the checked exception into an unchecked exception and throw it up, as in this example implementation:
public static String getFileName() {
    File file;
    try {
        file = tryLoadFile();
    } catch (FileNotFoundException ex) {
        throw new RuntimeException(ex);
    }
    return file.getName();
}

Here, information about the original exception is preserved, but any subsequent caller is decoupled from getFileName() internals.

Note that this is an idealized example. What I really wanted to show here is that using checked exceptions should not be the default choice. However, checked exceptions can be used sensibly to actually mark business errors. They can warn the client that he must watch out for a well-defined error postcondition and react to it in a well-defined way.

Let’s observe this example as a “checked exceptions best practices” implementation:
public class CheckedClass {
     private void validate(String input) throws ValidationException {
          if (input == null) {
                throw new ValidationException(input);
          }
     }
     
     public void callBusiness() {
          try {
                validate(null);
          } catch (ValidationException ex) {
                DummyExceptionHandler.showMessage(ex.getMessage());
          }
     }
}

Here, the validate(String) method throws a ValidationException, which is a checked exception. This allows callBusiness() to carefully watch for this exception and react in a well-defined way. (Note that for simplicity’s sake, callBusiness() is considered the top of the interaction layer and thus responsible for processing exceptions before they leak into the UI layer.)

Also consider one last thing: after Java, no other major programming language has adopted checked exceptions. Programming language exception design is still a field of active research, but checked exceptions have proved to mislead developers into flawed software design. You should shield your application design from being led by such a complex and disputed concept.

Fine-grained exception control

This rule states that you should use the most specific exception class possible for both throwing and catching an exception to allow for fine-grained exception control. Violations of this rule are typically related to other rule violations, such as the usage of magical error codes, or swallowing exceptions.

The anti-pattern

As an extreme anti-pattern example, let’s assume a project follows the policy to use “one single exception class at a layer boundary”. To make things even worse, let’s assume this exception is designed as a checked type.

Let’s take a look at this example code:
import java.io.File;
import java.io.FileNotFoundException;

public class FineGrainedClass {
     private static final String FILE_PATH = "my/file/path.txt";
     
     private static File tryLoadFile() throws FileNotFoundException {
          // let's assume file loading failed
          boolean fileLoadedSuccessfully = false;
          
          if (fileLoadedSuccessfully) {
                return new File(FILE_PATH);
          }
          else {
                throw new FileNotFoundException("Could not load file with path " + FILE_PATH);
          }
     }
     
     public static String getFileName(String input) throws AllmightyException {
          if (input == null) {
                throw new AllmightyException("Input must not be null");
          }
          File file;
          try {
                file = tryLoadFile();
          } catch (FileNotFoundException ex) {
                throw new AllmightyException(ex);
          }
          return file.getName();
     }
}

Let’s assume that the getFileName(String) method marks the layer boundary. Thus, per our policy, throwing our single exception type AllmightyException is enforced.

Now, for the caller, an AllmightyException could imply either that the file loading failed or that getFileName(String)’s not-null precondition was violated. The caller’s only hope of finding out the actual error source is that our exception type encapsulates reasonable information about its source; this, however, would again be a violation of the magical error codes rule.

Note also that this particular policy cannot be technically guaranteed, as you have no control over RuntimeExceptions or Errors (trying to catch them all would undermine Java’s exception handling mechanics).

The solution

The solution of course is obvious: use the most specific exception class possible, both when throwing and when catching, as in this improved example:
import java.io.File;
import java.io.FileNotFoundException;

public class FineGrainedClassFixed {
     private static final String FILE_PATH = "my/file/path.txt";
     
     private File tryLoadFile() throws FileNotFoundException {
          // let's assume file loading failed
          boolean fileLoadedSuccessfully = false;
          
          if (fileLoadedSuccessfully) {
                return new File(FILE_PATH);
          }
          else {
                throw new FileNotFoundException("Could not load file with path " + FILE_PATH);
          }
     }
     
     public String getFileName(String input) {
          if (input == null) {
                throw new IllegalArgumentException("'input' must not be null.");
          }
          File file;
          try {
                file = tryLoadFile();
          } catch (FileNotFoundException ex) {
                throw new RuntimeException(ex);
          }
          return file.getName();
     }
}

Here, we also used unchecked exceptions only. As this example class represents a layer boundary, it’s most natural for the upper layer not to care about the internals of this layer – this is precisely the use case for technical, hence unchecked, exceptions.

It’s another best practice, too, to stick with the JDK’s exception classes whenever possible to keep your API standardized. Introduce your own exception type only if you cannot express the cause of the exception in a way that is semantically coherent with any of the built-in exception types.

Use AOP

The anti-pattern

Exception handling is a so-called cross-cutting concern, that is, it is common to all layers of a software system. In order not to violate the DRY principle, such aspects must be implemented in one central place. There are so-called aspect-oriented frameworks (as integrated in Java EE or Spring) for that purpose, or your technology stack may offer specific means of centralizing exception handling (such as Java web exception pages or servlet filters). Making use of those mechanisms must be key to your application’s overall exception handling strategy. Scattering exception handling over multiple layers not only violates DRY, but is the typical root cause of the “swallowing exceptions” anti-pattern we previously discussed.

Consider this example:
import java.io.File;
import java.io.FileNotFoundException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class CccClass {
     private static final String FILE_PATH = "my/file/path.txt";
     
     private File tryLoadFile(String fileName) throws FileNotFoundException {
          // simulate a NullPointerException
          if (fileName == null) {
                throw new NullPointerException();
          }
          // let's assume file loading failed
          boolean fileLoadedSuccessfully = false;
          
          if (fileLoadedSuccessfully) {
                return new File(FILE_PATH);
          }
          else {
                throw new FileNotFoundException("Could not load file with path " + FILE_PATH);
          }
     }
     
     public String getFileName(String input) {
          if (input == null) {
                Logger.getLogger(CccClass.class.getName()).log(Level.SEVERE, "Input must not be null");
          }
          File file;
          try {
                file = tryLoadFile(input);
          } catch (FileNotFoundException ex) {
                throw new RuntimeException(ex);
          }
          return file.getName();
     }
}

Here, getFileName(String) implements its own exception handling for the case where input is null. Exception handling is implemented here as a logger call. On the other hand, other exceptions such as a file loading error would be thrown up in order to be handled by the caller.

As you can see, this just resolves to another illustration of the point presented earlier: logging an exception means swallowing an exception. Imagine what happens if input actually is null: the respective message is logged, but the program just keeps going; in this example, it would trigger a NullPointerException within the tryLoadFile(String) call, which would merely be a side effect of improper exception handling. Everything we said about swallowing exceptions applies again. The original exception is lost.

The solution

Now the solution to that problem is straightforward again: throw exceptions up and implement exception handling in exactly one place. I will not print the solution listing here, as it’s literally identical to the FineGrainedClassFixed solution of the previous section.
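What that single place could look like depends on your technology stack. As an illustrative sketch (my own, not taken from the example project), a servlet filter acting as central exception handler in a plain Java web application might be implemented like this:
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletResponse;

@WebFilter("/*")
public class CentralExceptionHandlingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        try {
            chain.doFilter(request, response);  // all lower layers simply throw
        } catch (RuntimeException ex) {
            // the one place where exceptions are logged and translated for the user
            Logger.getLogger(getClass().getName()).log(Level.SEVERE, "Unhandled application error", ex);
            ((HttpServletResponse) response).sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
        }
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}

All lower layers can then simply let exceptions propagate, knowing they will be logged and handled exactly once.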

Conclusion

Obeying the technical specifications of Java’s exception handling is not enough to build robust, safe, and maintainable applications. I see quite a lot of misconceptions and anti-patterns in practice. Here I presented six rules of exception handling which I consider crucial and convenient guidelines for proper exception handling design. These six rules come in handy for code reviews as well: if you make one or more ticks for rule violations, the code is at least fishy:
  1. Magical error codes
  2. Return null on error
  3. Swallow exception
  4. Checked / unchecked
  5. Fine-grained exceptions
  6. AOP
I am highly interested in your opinion on this topic. Do you agree with these six rules? Have I missed other crucial rules which might lead to terrible code design flaws when violated? Please let me know your thoughts in the comments section below.

The complete code examples are available on GitHub. You can run the unit tests to see the results of both proper and improper exception handling in action.



6 rules of exception handling by example (part 1 of 2)





In this blog post, I present six rules I consider crucial to exception handling design. I will illustrate each one of them with an anti-pattern and a “best practice” solution in order to show the trouble caused by improper exception handling.

Error handling is a crucial part of any software application. If you don’t pay attention to it, if you don’t take any coordinated measures, your application will become unstable, error-prone, and hard to test and maintain. But on the other hand, if you do take action but do it the wrong way, things get even worse.

In Java, the concept of the Exception is vital to error handling in the application. It’s a very common, safe, mature concept. It’s really a core concept of the language, thus covered in any basic course as well as in many books and online resources.

Still I’ve seen many developers struggling with some of the key ideas behind the concept of exception handling, thus undermining the aim of stabilizing and securing an application. That's why I’d like to publish yet another article on that topic which illustrates anti-patterns and best practices in an example-driven way.

The Six Rules

There are six rules any exception handling strategy should obey. In my experience, exception handling fails whenever these rules get violated. They are:
  1. Never use a magical error code as the return value.
  2. Never EVER return null to signal an error occurrence.
  3. Never swallow an exception you cannot handle; throw it instead.
  4. Use unchecked exceptions for technical errors (you can’t handle them and recover); use checked exceptions sparingly for business errors (you know how to handle them and recover).
  5. Use the most fine-grained exception subclass possible.
  6. Error handling is a cross-cutting concern. That’s what AOP frameworks are for; use them.
As with any rule in software development, these rules need justification. I’ll illustrate my reasoning for every single one of them in the ensuing sections.

Magical error codes

The anti-pattern

Well, we all know this is wrong, right? Yet I’ve seen code going to production with error handling based on magical return codes. This drastically increases coupling between caller and callee, violates the Separation of Concerns principle (application code vs. error handling), and diminishes maintainability. It breaks compile-time checking, thus diminishing stability. And because plain return values can simply be ignored, it also undermines error safety.

That’s why we always use exceptions to encapsulate errors and error handling. There is no justification for not doing so, and most developers will agree on that. However, I’ve seen code strangely interweaving outdated error-code thinking into exception-based error handling, like so:
public void callBusiness() {
    try {
        doBusiness();
    } catch (MagicalErrorCodeException ex) {
        switch (ex.reason) {
            case INPUT_EMPTY:
                DummyExceptionHandler.handleCriticalError(ex);
                break;
            case INPUT_NOT_NUMERICAL:
                DummyExceptionHandler.showMessage("Input must be numerical.");
                break;
            case INPUT_TOO_LONG:
                DummyExceptionHandler.showMessage("Input contains too many digits.");
                break;
            default:
                throw new IllegalArgumentException("Exception reason not supported: " + ex.reason);
        }
    }
}

where the exception class is defined as:
public class MagicalErrorCodeException extends Exception {
    public final Reason reason;
    
    public static enum Reason {
        INPUT_EMPTY,
        INPUT_TOO_LONG,
        INPUT_NOT_NUMERICAL,
    }

    public MagicalErrorCodeException(Reason reason) {
        this.reason = reason;
    }
}

Whilst magical error codes are typically of type int, really any type can be abused as a magical error code; here, it’s an enum type which, although typesafe as opposed to int or String values, does not change the underlying problem of violating the open/closed principle. We have no idea what any particular Reason implies unless we scan the source code for its usages.

In this example, there is yet another software design anti-pattern: As both INPUT_NOT_NUMERICAL and INPUT_TOO_LONG enum values trigger the same error handling code, there’s a DRY violation.

In essence, error handling should not have to care about our exception class’s internals. Because in this anti-pattern example our exception class is nothing but a dumb state container, the logic wrongly resides in the error handling code, which thus also violates Java’s exception handling design by implementing a custom “catch switch”.

The solution

For this example, the solution is twofold, eliminating (a) the DRY violation and (b) the open/closed principle violation. At the same time, we will apply proper Java exception handling design principles.

Here’s the fixed error handling:
public void callBusiness() {
    try {
        doBusiness();
    } catch (CriticalException ex) {
        DummyExceptionHandler.handleCriticalError(ex);
    } catch (MagicalErrorCodeExceptionFixed ex) {
        DummyExceptionHandler.showMessage(ex.getMessage());
    }
}

And here’s the fixed exception class:
public class MagicalErrorCodeExceptionFixed extends Exception {
    private final Reason reason;
    
    public static enum Reason {
        CRITICAL(null),
        INPUT_TOO_LONG("Input must be numerical."),
        INPUT_NOT_NUMERICAL("Input contains too many digits.");
        
        private final String message;

        private Reason(String message) {
            this.message = message;
        }
    }

    public MagicalErrorCodeExceptionFixed(Reason reason) {
        this.reason = reason;
    }

    @Override
    public String getMessage() {
        return reason.message;
    }
}

First, we concentrate on the error handling’s catch (MagicalErrorCodeExceptionFixed ex) branch which is now implemented in a DRY-compliant manner. The Reason enum now knows the business value (here it’s a display message) associated with it, and the exception class applies the state pattern to return that value through the standard getMessage() method.

Let’s assume that the former INPUT_EMPTY Reason value marks a more severe exception case which must be handled individually. Here we use the standard Java way to distinguish between different kinds of exceptions: implementing an individual exception class for each respective exception type. In the example, we implemented an additional exception class:
public class CriticalException extends MagicalErrorCodeExceptionFixed {
     public CriticalException() {
          super(Reason.CRITICAL);
     }
}

In the exception handling code, we can now make a clean distinction of the two exception types.

Note that in a strongly object-oriented architecture, we could even implement exception handling using a strategy pattern on the exception class. As exception handling typically involves a lot of context information, it is still often preferable to implement it in a more service-oriented way, just as we did here.
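For illustration, here is a minimal sketch of such a strategy-pattern variant (my own example, reusing the DummyExceptionHandler from the listings above):
public abstract class SelfHandlingException extends Exception {
    protected SelfHandlingException(String message) {
        super(message);
    }

    // each concrete exception class encapsulates its own handling strategy
    public abstract void handle();
}

class RecoverableInputException extends SelfHandlingException {
    RecoverableInputException(String message) {
        super(message);
    }

    @Override
    public void handle() {
        DummyExceptionHandler.showMessage(getMessage());
    }
}

The catch block then shrinks to a single, type-agnostic call:
try {
    doBusiness();
} catch (SelfHandlingException ex) {
    ex.handle();
}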

Returning null as an error signal

The anti-pattern

The second most severe exception handling design error is using null to signal an error occurrence. To make this very clear: you must never, ever use null to signal an error occurrence. Doing so nukes traceability. Actually, null can be considered just another magical error code, as discussed above. Yet using null is actually the worst of all error code choices, as it can – by definition – literally provide no information about what went wrong. What’s more, it can even lead to side-effect errors.

Let’s observe this example code:
import java.io.File;

public class NullReturnClass {
     private static final String FILE_PATH = "my/file/path.txt";
     
     private File tryLoadFile() {
          // let's assume file loading failed
          boolean fileLoadedSuccessfully = false;
          
          if (fileLoadedSuccessfully) {
                return new File(FILE_PATH);
          }
          else {
                return null;
          }
     }
     
     public String getFileName() {
          File file = tryLoadFile();
          return file.getName();
     }
}

Here, tryLoadFile() returns null in order to signal that the file has not been loaded properly. The invoking method, getFileName(), not recognizing this postcondition, will try to operate on the returned object, which immediately triggers a NullPointerException as a side effect. Note that in this case, any information about the actual error source (the fact that tryLoadFile() failed) is lost.

Things are even worse in a situation where you do not actually use the return value of a method that might return null. You will simply lose any information about the error occurrence, potentially leaving the application in an undefined state.

The solution

Here is the fixed version of above implementation:
import java.io.File;
import java.io.FileNotFoundException;

public class NullReturnClassFixed {
    private static final String FILE_PATH = "my/file/path.txt";
    
    private File tryLoadFile() throws FileNotFoundException {
        // let's assume file loading failed
        boolean fileLoadedSuccessfully = false;
        
        if (fileLoadedSuccessfully) {
            return new File(FILE_PATH);
        }
        else {
            throw new FileNotFoundException("Could not load file with path " + FILE_PATH);
        }
    }
    
    public String getFileName() {
        File file;
        try {
            file = tryLoadFile();
        } catch (FileNotFoundException ex) {
            throw new RuntimeException(ex);
        }
        return file.getName();
    }
}

Here, tryLoadFile() will throw an exception if it runs into an erroneous state. (Because we are on a “file handling” abstraction layer, this is considered a “business error” here and thus uses a checked exception; refer to the respective section later on for more details.)

When calling that method, we can properly handle the error condition, and we can be sure that any unchecked exceptions would be propagated. Here, traceability is kept. There are no side effects from failed method calls.

Swallowing exceptions

The anti-pattern

Swallowing an exception of course is bad, as it means loss of information. It can actually occur as a side effect of many of the anti-patterns presented here (e.g. using magical / null return values, or not using AOP). But it can also occur where apparently proper exception handling is in place.

As an example, we will redesign the “file handling” class from the Returning null as an error signal section. This time, we want the client to handle file loading errors, so we propagate the FileNotFoundException as a business exception (hence a checked exception, as explained in more detail in the next section). Here is the relevant method:
public String getFileName() throws SwallowingException {
    File file = null;
    try {
        file = tryLoadFile();
    } catch (FileNotFoundException ex) {
        throw new SwallowingException(FILE_PATH);
    }
    return file.getName();
}

Note that the “business error” class’s constructor (SwallowingException(String)) only takes a String as a parameter. Information about the root exception (FileNotFoundException) is lost. Your IDE would probably even moan about the “ex” variable not being used.

The solution

If we design our own exception class which is used as a wrapper or as a “translator” of technical exceptions to business exceptions, we must be sure to always keep information about the original exception in the stacktrace. In order to do so, of course, we use the exception’s built-in stacking of causes by invoking one of the super constructors:
public class SwallowingExceptionFixed extends Exception {
    private final String filePath;

    public SwallowingExceptionFixed(String filePath, Throwable cause) {
        super(cause);
        this.filePath = filePath;
    }

    @Override
    public String getMessage() {
        return filePath + " could not be found.";
    }
}

Now all we need to do is provide the original exception when invoking our subclass’s constructor, like so:
catch (FileNotFoundException ex) {
    throw new SwallowingExceptionFixed(FILE_PATH, ex);
}

Information about the original error is now preserved in the stacktrace.

