February 14, 2016

Java EE: Top 6 design patterns in practice (part 2 of 2)



Singleton

Motivation: You want to create an object which exists only once.

Of course, we all know the classic Java implementation of the singleton pattern.
public class SystemConfig {
    private static class InstanceHolder {
        private static final SystemConfig INSTANCE = new SystemConfig();
    }
    
    public static SystemConfig getInstance() {
        return InstanceHolder.INSTANCE;
    }
    
    private SystemConfig() {}

    public String getContainerVersion() {
        ...
    }
    
    // other singleton methods...
}
However, this pattern is rightfully debated as being in fact an anti-pattern in many scenarios, as becomes apparent when we observe the client code:
public class Client {
    SystemConfig config = SystemConfig.getInstance();
    
    public void printInfo() {
        LOGGER.info(config.getContainerVersion());
    }
    
    ...
}
Its use of a global (static) field creates hard-wired dependencies in your code (a violation of the dependency inversion principle, DIP) and makes it hard to write tests (mocking!).
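To make the testability point concrete, here is a plain-Java sketch (not from the original article; the ContainerInfo interface and the constructor injection are invented for illustration) of a client that depends on an interface instead of a static lookup, so a test can hand in a fake:

```java
// Illustrative sketch: the client depends on an abstraction instead of
// calling a static getInstance(), so a test can substitute a fake.
interface ContainerInfo {
    String getContainerVersion();
}

class Client {
    private final ContainerInfo config;

    // The dependency is handed in (by a DI container or a test),
    // not looked up through a global static field.
    Client(ContainerInfo config) {
        this.config = config;
    }

    String describe() {
        return "Running on " + config.getContainerVersion();
    }
}
```

A test can then pass a lambda or a hand-rolled fake as the ContainerInfo, which is exactly what the static singleton makes impossible.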

Java EE addresses this issue by providing an out-of-the-box facility to create singleton instances which happens to be much more concise as well. Just use the respective annotation:
@ApplicationScoped
public class SystemConfig {
    public String getContainerVersion() {
        ...
    }
    
    // other singleton methods...
}
In the client, use the default dependency injection mechanism to access the singleton. The Java EE container makes sure that only one instance of a singleton exists at any moment, and that it is thread-safe. With this implementation, we could easily swap the singleton instance (e.g. using an @Alternative declaration) or mock it in our tests.
public class Client {
    @Inject SystemConfig config;
    
    public void printInfo() {
        LOGGER.info(config.getContainerVersion());
    }
    
    ...
}
Note, however, that depending on the underlying bean definition standard, you need different annotations to mark a bean as a singleton and to inject it:

  • CDI: no bean type annotation required; singleton annotation: @javax.enterprise.context.ApplicationScoped; injection annotation: @javax.inject.Inject
  • EJB: bean type: @javax.ejb.Singleton; singleton annotation: @javax.ejb.Singleton; injection annotation: @javax.ejb.EJB
  • JSF: bean type: @javax.faces.bean.ManagedBean; singleton annotation: @javax.faces.bean.ApplicationScoped; injection annotation: @javax.faces.bean.ManagedProperty
  • Dependency injection for Java: bean type: @javax.inject.Named; singleton annotation: @javax.inject.Singleton; injection annotation: @javax.inject.Inject

I went into more details about the different Java EE bean definitions in a previous blog post.

For now please note: In Java EE, never build the singleton pattern by hand; always use the respective Java EE facility.

Factory

Motivation: You want to decouple the creator of an object from the object itself and its user.

I’ve seen terrible ideas of what a factory is and how it gets implemented in real-world projects, including the naïve assumption that simply any facility which returns an instance is to be considered a factory:
public class DiscountProducer {    
    public static Discount createDiscount(Customer customer) {
        ...
    }
}

public class Client {
    public void applyDiscount(Customer customer) {
        int discount = DiscountProducer.createDiscount(customer).amount;
    }
}
This is actually just a global function which returns an object. As with primitive singleton implementations (discussed above), globals are always a bad idea because they increase coupling and diminish testability.

In Java EE, we can realize abstract factories as @ApplicationScoped singleton CDI beans (see above):
public interface DiscountFactory {
    public Discount createDiscount(Product product);
}

@Qualifier
@Retention(RUNTIME)
@Target({METHOD, FIELD, PARAMETER, TYPE})
public @interface Discounted {
    public DiscountType value() default DiscountType.DEFAULT;
}

@ApplicationScoped
@Discounted(DiscountType.EXTRA)
public class ExtraDiscountFactory implements DiscountFactory {    
    @Override
    public Discount createDiscount(Product product) {
        ...
    }
}

public class Client {
    @Inject
    @Discounted(DiscountType.EXTRA)
    private DiscountFactory discountFactory;
    
    public void applyDiscount(Product product) {
        int discount = discountFactory.createDiscount(product).amount;
    }
}
We can optionally have multiple factories implementing the same interface, and select the desired factory at runtime using the @Alternative annotation (globally) or a @Qualifier annotation (locally). Through the use of dependency injection, this code is compliant with the dependency inversion principle (DIP).

Bottom line on factories: A static method doesn’t make a factory. In fact, never use static methods when dealing with information provided by a transactional / contextual resource (testability!). Also, you should have a good reason to really use the factory pattern at all rather than simply injecting / creating the instance you’re looking for. There is no “technical” need for factories in Java EE, and introducing them just “because reasons” clearly violates YAGNI (You ain’t gonna need it).

Façade

Motivation: You want to make a complex system accessible through a simple interface.

Okay, this one is probably even simpler than the template method. You build an abstraction which internally uses several interfaces to other services. This pattern is typically used excessively in the “service” layer of a Java EE application, for instance to retrieve information from multiple other services.

The actual anti-pattern here rather lies in over-use of this pattern. It typically doesn’t make sense to create very narrow, specific interfaces just for one specific task. They are typically hard to document and maintain and may introduce a considerable number of delegate calls which add little value. Instead, try to keep your service interfaces lean and self-documenting.

This typically happens when a project strives for a strongly “service-oriented” architecture with action-driven interfaces rather than an “object-oriented” architecture with resource-based interfaces (e.g. REST).

Consider this example of a service implementation:
public class AddressFacade {
    @Inject
    CustomerService customerService;
    
    @Inject
    AddressService addressService;
    
    public void updateAddressStreet(Long customerId, String street) {
        Customer customer = customerService.findById(customerId);
        Address address = customer.getAddress();
        address.setStreet(street);
        addressService.save(address);
    }
    
    public void updateAddressCity(Long customerId, String city) {
        Customer customer = customerService.findById(customerId);
        Address address = customer.getAddress();
        address.setCity(city);
        addressService.save(address);
    }
}
The added value of these “umbrella methods” is questionable at best. If we imagine that our services consisted of dozens of highly specialized methods, this may become a true maintainability threat, known as a violation of the interface segregation principle (ISP). This is especially true if all these methods just consist of a few ever-same invocations of underlying services, further violating the DRY (Don’t repeat yourself) principle.

Also note that although we use container bean injection, interdependency between the beans potentially increases the more functionality a façade comprises.

In the above example, the façade implementation further ignores the fact that we could cascade saving a Customer (including its address), hence not needing an explicit AddressService reference / invocation at all. I’ve actually observed in practice that a lack of understanding of JPA’s cascade functionality caused the introduction of superfluous façades. This may hence also indicate a YAGNI (You ain’t gonna need it) design violation.
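To make the cascade point concrete, here is a sketch (not from the article; it assumes standard JPA annotations and invented field names) of a Customer mapping for which saving the customer also saves its address, making a separate AddressService call unnecessary:

```java
@Entity
public class Customer {
    @Id @GeneratedValue
    private Long id;

    // With cascading enabled, persisting or merging the Customer also
    // persists / merges its Address; no explicit AddressService.save() needed.
    @OneToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    private Address address;

    // Getters and setters...
}
```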

In an extreme case, a whole additional abstraction layer is introduced, which typically hampers maintainability. A very common example is the introduction of an empty delegate “DAO” layer behind the service façade, a job which in Java EE 5+ is perfectly accomplished by the built-in EntityManager.

In the example use case, I would get rid of the façade altogether, and use a proper MVC pattern to let the user update the customer model (including its address properties). Then it’s enough to simply save the updated model on the persistence layer:
customerService.save(customer);
You should always strive for a clean and reusable service interface. Thinking in objects / REST / CRUD helps us achieve this goal.

Bottom line: Never build an abstraction just for the sake of the abstraction. If you need an abstraction in one place, don’t enforce building an entire delegate-layer if you don’t need it in other places.

Conclusion

Of the many patterns which have been described over time, we really only use quite a few regularly in typical Java EE projects. It is all the more important to be able to identify opportunities for their usage and to apply them correctly.

These design patterns are so important especially in business software development, because:
  • They allow developers to use a common, abstract vocabulary to discuss and document software design (increasing software readability and maintainability).
  • They provide best-practice proven solutions to common problems (increasing robustness).
  • Their implementation is typically well-supported by software languages and frameworks (increasing ease-of-development).
Thank you for your interest. I shall probably address some other useful design patterns in future Java EE-themed articles. In the meantime, if you have any questions or if you’d like to propose corrections or additions, please post them in the comments section below.


Java EE: Top 6 design patterns in practice (part 1 of 2)



In real-world Java EE business software development projects, proper use of basic design pattern best practices determines software stability, development effort and long-term maintainability. Here I present 6 of my favorite design patterns as observed in real-world Java EE software development, complete with anti-patterns and best practice solutions.

Note: This article does not cover higher-level architectural patterns such as MVC or DAO, many of which became de-facto deprecated with recent Java EE versions.

I will use a fictional example project as the context in which the following code samples are embedded: an online portal of a library where clients can make book reservations.

Template method

Motivation: You need to extend behavior from a parent class.

Let’s start with the simplest one, which is also most often overlooked. Let’s say we have a base class, e.g. implementing a REST service endpoint, which defines an add() method for REST PUT:
public abstract class BaseResource<T extends Identifiable> {    
    @PUT
    public Response add(T entity) {
        entity = getService().save(entity);
        return Response.status(Response.Status.OK).entity(entity).build();
    }
    
    ...
}
which we happily use in sub-classes
@Path("customers")
@Stateless
public class CustomerResource extends BaseResource<Customer> {

}
until we discover that we need to do additional calculations in one of the sub-classes. For instance, as the REST endpoint implemented by the ReservationResource sub-class represents a nested resource (a reservation is part of a customer), we’d like to check whether the base resource (the customer) addressed in the path exists before invoking service#save().

Naïvely, we would implement this check by overriding the add() method, implementing additional logic before calling the super.add() method.
@Path("customers/{customerId}/reservations")
@Stateless
public class ReservationResource extends BaseResource<Reservation> {
    @Context UriInfo uri;

    @Override
    public Response add(Reservation entity) {
        Long customerId = getPathParam("customerId", Long.class);
        if (getService().findById(customerId) != null) {
            return super.add(entity);
        }
        else throw new IllegalArgumentException("Illegal customerId in resource path.");
    }
    
    private <T> T getPathParam(String key, Class<T> type) throws NumberFormatException {
        ...
    }
}
This, unfortunately, makes the design brittle. There are no (compile-time) guarantees that this class still implements the behavior defined by the super-class. This really is a violation of the Liskov substitution principle (LSP).

In the simplest case, you just forget to call the super method. In more complex scenarios with many sub-classes or even a nested class hierarchy, with many classes adding and changing super-class behavior at will, this will get out of hand quickly.

Here, the template method pattern comes to the rescue: simply divide up the functionality in the base class into several smaller units (methods), and implement them as needed in the specific implementation.

In the example, the base class may provide a “hook” to later implement a check before the service#save() call:
public abstract class BaseResource<T extends Identifiable> {    
    @PUT
    public final Response add(T entity) {
        try {
            checkForeignKeyConstraints();
        } catch (ResourcePathForeignKeyNotFoundException ex) {
            // translate into an unchecked exception; add() declares no checked exceptions
            throw new IllegalArgumentException(ex.getMessage(), ex);
        }
        entity = getService().save(entity);
        return Response.status(Response.Status.OK).entity(entity).build();
    }
    
    protected void checkForeignKeyConstraints() throws ResourcePathForeignKeyNotFoundException {
        // Hook. Do nothing by default
    }
    
    ...
}
which we then implement in the ReservationResource:
@Path("customers/{customerId}/reservations")
@Stateless
public class ReservationResource extends BaseResource<Reservation> {
    @Context UriInfo uri;

    @Override
    protected void checkForeignKeyConstraints() throws ResourcePathForeignKeyNotFoundException {
        Long customerId = getPathParam("customerId", Long.class);
        if (getService().findById(customerId) == null) {
            throw new ResourcePathForeignKeyNotFoundException("customerId");
        }
    }
    
    private <T> T getPathParam(String key, Class<T> type) throws NumberFormatException {
        ...
    }
}
(As a side note, we explicitly use a checked exception, not a boolean flag, for the check, to get even more compile-time safety.)

As a general rule of thumb, calling the super method is usually a sign of subpar design (a “code smell”) and a hint to apply the template method pattern.
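Stripped of the JAX-RS plumbing, the mechanics can be sketched in plain Java (the class and method bodies here are illustrative stand-ins for the resource classes above): a final template method, a protected hook, and a checked exception for compile-time safety.

```java
// Illustrative plain-Java sketch of the template method above (no JAX-RS).
class ResourcePathForeignKeyNotFoundException extends Exception {
    ResourcePathForeignKeyNotFoundException(String key) {
        super("Illegal " + key + " in resource path.");
    }
}

abstract class BaseResource {
    // final: sub-classes can no longer replace (or forget to call) the template itself.
    final String add(String entity) {
        try {
            checkForeignKeyConstraints();   // the hook runs before the actual work
        } catch (ResourcePathForeignKeyNotFoundException ex) {
            throw new IllegalArgumentException(ex.getMessage(), ex);
        }
        return "saved:" + entity;
    }

    // Hook. Do nothing by default.
    protected void checkForeignKeyConstraints() throws ResourcePathForeignKeyNotFoundException {
    }
}

class ReservationResource extends BaseResource {
    private final boolean customerExists;  // stands in for the findById() lookup

    ReservationResource(boolean customerExists) {
        this.customerExists = customerExists;
    }

    @Override
    protected void checkForeignKeyConstraints() throws ResourcePathForeignKeyNotFoundException {
        if (!customerExists) {
            throw new ResourcePathForeignKeyNotFoundException("customerId");
        }
    }
}
```

Because add() is final, the compiler guarantees every sub-class runs the full template; a sub-class can only ever customize the hook.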

State / Strategy

Motivation: You want to exchange behavior at runtime, possibly based on a model’s properties.

Allow me to treat the state and strategy patterns in the same section: in a Java EE environment, we typically refer to the same idea when talking about either of them, namely deciding at runtime which logic should be invoked. This sounds like simply implementing an if / else block, and that is typically also a perfectly valid solution:
public int calculateMaxNumberOfSimultaneousBorrows(Customer customer) {
    if (customer instanceof TrialCustomer) return 1;
    if (customer instanceof NormalCustomer) return 5;
    if (customer instanceof PlatinumCustomer) return 10;
    throw new IllegalArgumentException("Customer type not supported: " + customer.getClass());
}
But it is not a valid solution if we can bind the applicability of an algorithm to an object’s type or its properties.

In the above example, #calculateMaxNumberOfSimultaneousBorrows() depends on the Customer’s type, and defining its calculation outside of the Customer’s definition marks a violation of the open / closed principle (OCP): When adding a new Customer type, this central calculation needs to be updated – a change in one place hence requires a change in another (I’ve actually used the same example in a previous article to illustrate the OCP).

Thus here, the strategy pattern should be applied. We need to define getMaxNumberOfSimultaneousBorrows() on the Customer base class:
public abstract class Customer {
    public abstract int getMaxNumberOfSimultaneousBorrows();
}
and override it in each subclass:
public class NormalCustomer extends Customer {
    @Override
    public int getMaxNumberOfSimultaneousBorrows() {
        return 5;
    }
}
Based on the actual sub-type of the Customer instance a method receives, the correct algorithm is then invoked, and when defining a new sub-type, the compiler forces us to provide its getMaxNumberOfSimultaneousBorrows() implementation.

This is basic inheritance at work, and yet I see this pattern violated very often, which leads to code that is hard to maintain, but easy to break.

The main lesson learnt here is that except for a few cases of low-level reflection-based programming, usage of instanceof is always a code smell of bad inheritance / bad OOP application, and that you should use state / strategy in this situation.
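For completeness, the fragments above can be assembled into a self-contained sketch; the trial and platinum sub-classes are filled in here along the lines of the earlier if / else block.

```java
// Strategy by sub-type: each Customer kind carries its own borrowing limit,
// so no central instanceof dispatch is needed.
abstract class Customer {
    public abstract int getMaxNumberOfSimultaneousBorrows();
}

class TrialCustomer extends Customer {
    @Override
    public int getMaxNumberOfSimultaneousBorrows() { return 1; }
}

class NormalCustomer extends Customer {
    @Override
    public int getMaxNumberOfSimultaneousBorrows() { return 5; }
}

class PlatinumCustomer extends Customer {
    @Override
    public int getMaxNumberOfSimultaneousBorrows() { return 10; }
}
```

Adding a new Customer sub-type without its own limit now fails at compile time instead of falling through to a runtime IllegalArgumentException.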

Builder

Motivation: You want to build a structure dynamically, but compile-time safe.

The builder is probably one of my personal favorite design patterns because, if properly applied, it can create highly readable, almost self-documenting fluent interfaces. This is a pattern which describes class relationships at compile time and thus helps us create compile-time-safe structures.

It is most typically used when we have an object with a complex initialization routine, e.g. because it holds a lot of properties, the setting of individual properties must happen in a certain order, or setting one property may restrict the setting of other properties.

For instance, consider this Address definition:
public class Address {
    @NotNull private String street;
    @NotNull private String number;
    @NotNull private String zip;
    @NotNull private String city;
    private Country country;
    
    // Getters and setters...
}
Let’s assume that we want to make sure at compile time that we can only create Address objects with all the information set, except for country, which is optional. And we want to provide a self-documenting API to create an Address. We can do so by defining a series of builders:
public class Address {
    ...
    
    public static AddressBuilder street(String street, String number) {
        return new AddressBuilder(street, number);
    }
    
    public static class AddressBuilder {
        private final Address address = new Address();

        private AddressBuilder(String street, String number) {
            address.street = street;
            address.number = number;
        }
        
        public AddressBuilderLocation cityAndZip(String city, String zip) {
            address.city = city;
            address.zip = zip;
            return new AddressBuilderLocation(address);
        }
    }
    
    public static class AddressBuilderLocation {
        private final Address address;

        private AddressBuilderLocation(Address address) {
            this.address = address;
        }
        
        public AddressBuilderAll country(String countryCode) {
            address.country = Country.byCountryCode(countryCode);
            return new AddressBuilderAll(address);
        }
        
        public Address build() {
            return address;
        }
    }
    
    public static class AddressBuilderAll {
        private final Address address;
        
        private AddressBuilderAll(Address address) {
            this.address = address;
        }
        
        public Address build() {
            return address;
        }
    }
}
Which can then be invoked like so:
Address min = Address.street("First Street", "421").cityAndZip("Los Angeles", "90210").build();
Address all = Address.street("First Street", "421").cityAndZip("Los Angeles", "90210").country("US").build();
We just cannot go wrong here. Our builder’s API only allows us to build reasonable, easy-to-read object structures.

Note that we can make our life even easier by providing an even more simplified API through the builder (as with the country-by-String lookup example).

Of course, creating the builders themselves may become complex, so we have to keep a balance between making the API as user-friendly as possible and keeping the solution as lean as possible (keep it simple stupid (KISS)). (In fact, this address example would be quite over-engineered.)

Nevertheless, there are many excellent opportunities to use the builder pattern in a Java EE environment. In a more general sense, builders can also be used to build rather abstract logic chains; they don’t even need to yield an instance at the end, but can be used for their side effects only (then it would rather be a pure fluent interface). This makes them especially useful for building any kind of “rule set” you want to fix at compile time. I’ve created another example of such a builder-based rule set implementation in a previous blog post.
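As a toy illustration of such a side-effect-only fluent chain (all names here are invented, not from the article), consider a small rule set whose builder calls register rules rather than assemble an object:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch: the fluent chain is used purely for its side effect
// of registering rules; no final "built" object is returned.
class StringRules {
    private final List<Predicate<String>> rules = new ArrayList<>();

    StringRules notEmpty() {
        rules.add(s -> !s.isEmpty());
        return this;
    }

    StringRules maxLength(int max) {
        rules.add(s -> s.length() <= max);
        return this;
    }

    boolean test(String value) {
        return rules.stream().allMatch(rule -> rule.test(value));
    }
}
```

A rule set such as new StringRules().notEmpty().maxLength(5) then reads almost like the requirement it encodes, while the chain is fixed at compile time.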


February 7, 2016

New npm package: Rapid-prototyping CRUD REST-to-SQL



Last week, I discovered how implementing a simple CRUD REST-to-SQL web server with Node.js + Hapi + Bookshelf.js still requires an uncomfortable amount of manual work. This week I decided to do something about it. I hereby present the hapi-bookshelf-crud npm package.

Given that you have a Hapi instance server and a Bookshelf instance bookshelf, you can define your models like so:

const models = {
  Customer: bookshelf.Model.extend({
    tableName: 'customer',
    schema: {
      name: Joi.string().regex(/^[A-Za-z ]*$/).required(),
      employmentStatus: Joi.string().default('Unemployed'),
      payments: hapiCrud.empty(),
    },
  }),
  Payment: bookshelf.Model.extend({
    tableName: 'payment',
    schema: {
      amount: Joi.number().positive(),
      date: Joi.date().format('YYYY-MM-DD').allow(null),
    },
  }),
}

And define your CRUD REST endpoints like so:

const hapiCrud = require('hapi-bookshelf-crud')(server);

hapiCrud.crud({
  bookshelfModel: models.Customer,
  basePath: '/customers',
});

hapiCrud.crud({
  bookshelfModel: models.Payment,
  basePath: '/customers/{customerId}/payments',
  baseQuery: function(request) {
    return {customerId: request.params.customerId}
  },
});

That’s it. Now you have:

  • CRUD operations on the DB at the respective REST service endpoints, ready for use with Restangular (or any other REST consumer)
  • Validation error handling with I18N message support based on validation constraints on the model
  • DB error handling with I18N message support
  • Date format conversion based on model declaration
  • To-Many-relationship handling
  • And a few more handy things.
All in all it’s just a no-brainer super-rapid prototyping solution.

For more information about how to use it and in order to get started, visit its GitHub repository or its npm package page.

Read on to learn more about the motivation behind this project and what problems it is designed to solve, and how it does that.

The foundations

In 2016, Node.js is the platform of choice if we want to write a reliable server in a few lines of code. There’s no denying that a Java EE-based solution comes with more boilerplate.

If really all I want is a CRUD REST server, and I want it in as few lines as possible, Java EE is no match. Therefore, it makes sense to realize this project on top of Node.js.

Hapi is an industry-proven REST server abstraction, and Bookshelf.js is the most complete O/R mapper; it thus makes sense to use these packages as the basis.

Problem n° 1: Compact CRUD

The central idea of hapi-bookshelf-crud is to provide a simple API which builds all REST service operations for full CRUD at once for a given model, rather than building each route separately, as is the case with vanilla Hapi. hapi-bookshelf-crud borrows from Java EE the idea of each model belonging to a “base path”, so that CRUD can be built with nothing more than model and base path information.

Problem n° 2: Validation / Error handling

I want to have declarative validation constraints on my model. In fact, my model definition should hardly consist of anything else than validation constraints; those just are the most important part of its definition.

Unfortunately, although Bookshelf comes with its own validation framework called Checkit, it doesn’t offer out-of-the-box support for declarative validation; it needs to be coded explicitly. Also, Checkit’s API shows that the number of available constraints and the information yielded in case of a validation error are very limited.

On the other hand, Hapi provides a strong, elaborate validation framework called Joi. Therefore, I used this framework to do the model validations. I explicitly call the validation logic in a handler every REST endpoint access has to pass through rather than relying on Bookshelf’s callback hooks.

The validation rules can be specified in the model’s schema property. This is quite a lightweight solution which matches Joi’s own naming well and doesn’t interfere with anything from Bookshelf.js.

Joi comes with a huge amount of elaborate validation functionality, including cross-field validation and setting default values. It’s a joy (pun!) to see that those now work implicitly with hapi-bookshelf-crud.

The same goes with error handling which is implemented centrally in hapi-bookshelf-crud.

Problem n° 3: Table column mapping

SQL table columns, which come in snake_case, need to be matched against the model’s camelCased properties (camelCase is the de-facto standard for JSON objects). Bookshelf doesn’t do this by default; it needs a hook.

hapi-bookshelf-crud applies this hook to the model upon initialization.

Problem n° 4: Querying for nested resources

One of the most powerful aspects of REST is that its resources can be nested like a breadcrumb-navigation. For the “find all” and “find by id” GET queries, we thus need to incorporate any parent resources’ ids as an additional WHERE filter.

In the hapiCrud.crud API, the user must explicitly provide a map with the id-to-param mapping.

We could probably automate this step by regexing the basePath in a later release.

In the same way, we need to put all the foreign keys back into a model by extracting them from the request path before saving the model; otherwise we would risk overriding them with null. Typically, we can’t rely on the foreign keys being preserved in the model, but they’re guaranteed to be part of the (nested) resource URI. So we reverse the above logic to put that information back into the model.

Problem n° 5: Relationship handling

hapi-bookshelf-crud handles to-one and to-many relations:
  • to-one relations set the foreign key id. This happens implicitly and matches well Bookshelf’s defaults and SQL’s expectations.
  • to-many relationships however must be deleted from a model before save because otherwise, Bookshelf tries to insert the array containing the “many” objects as a single JSON LOB into the respective column which of course must fail miserably. In order to mark a property as a to-many field which must be emptied, we use a special marker for that property in the validation schema (hapiCrud.empty()).
In fact, cascading save operations to children doesn't seem to make sense in a strict REST architecture, as a sub-model would really be addressed by another (nested) REST endpoint.

With hapi-bookshelf-crud there is hence no need for any of Bookshelf’s special “relation” declarations on the model.

(Cascading delete operations however remain an open issue for now.)

Problem n° 6: Miscellaneous mapping problems

Bookshelf includes some additional O/R mapping problems hapi-bookshelf-crud is designed to solve:
  • numbers should be initialized with 0 in order to trigger e.g. Joi’s positive() constraint violations rather than not-null constraints on the DB. Here, hapi-bookshelf-crud happily relies on the model schema’s definitions to find numeric model properties.
  • dates should be initialized as dates and formatted appropriately when returned from the DB. Again, hapi-bookshelf-crud takes the model schema’s definitions of date()s and their format()s into account.

See it in action

See a complete demo server implementation here.

For the client, we can simply retake the Restangular-based example client application I built for use with a vanilla Node.js + Hapi + Bookshelf.js server and use it with a Node.js server built on top of hapi-bookshelf-crud.

Conclusion

In my last blog post comparing Java EE 7 with Node.js, I concluded that the Node.js ecosystem is “no match for a Java EE 7 tech stack” when it comes to building a REST-to-SQL server. Does this magically change with the hapi-bookshelf-crud package? No. There are still other concerns which don’t quite persuade me to make the change to a Node.js solution, as discussed in that article.

However, I think the hapi-bookshelf-crud package may actually make for a good solution when it comes to rapid-prototyping a CRUD REST-to-SQL backend. I’m especially happy that it supports declarative validation already and gets rid of almost all the technical boilerplate code. I would gladly use it to quickly build a REST backend in earlier iterations of a web app development project.

Please note that this npm package clearly is still very experimental and unfinished. If this has awakened your interest and you have Node.js / Hapi / Bookshelf.js knowledge, please head over to its GitHub repository to learn how to contribute.

Also, let me know your thoughts about this project in the comments section below. Please be nice; this is the first npm package I ever released…

Update March 2, 2016: hapi-bookshelf-crud 0.2.0 is now officially released.