January 31, 2016

Java EE with JAX-RS + JPA vs. Node.js with Hapi + Bookshelf.js (part 1 of 2)

Even as a Java EE developer, I have to admit that sophisticated JavaScript-based frameworks such as AngularJS are nowadays a compelling alternative to a Java-based web client. Searching for leaner and more elegant solutions, I will therefore examine how a JavaScript tech stack compares to its Java EE counterpart on the server side. This is a side-by-side comparison of a REST-to-SQL implementation in Java EE and Node.js.

I will build a REST server which serves basic CRUD operations as invoked by an example AngularJS / Restangular client. I make this comparison mainly from an ease-of-development point of view and clearly from a Java EE developer’s perspective.

At the end of the day, I really want a technology which easily allows me to:
  • Declare CRUD operations on a relational database storage
  • Define domain models with auto-ORM and declarative validation constraints
  • Serialize / deserialize JSON out of the box
  • Handle validation errors out of the box
  • Build on production-ready components
If this technology allowed me to use the same programming language to write both client and server, this would certainly mark a big plus.

With this article, I hope to help introduce the Node.js ecosystem to fellow Java EE developers, and to offer my opinion on how the two technologies compare for anyone who is evaluating a tech stack change.

Technology choice

For the Java part, I go with the official standards, of course: I will adhere to the Java EE 7, JAX-RS (REST interface) and JPA (ORM) standards, and use the respective reference implementations.

For the JavaScript part, I try to choose what seems to me to be a “best of breed”: a Node.js server with Hapi (REST interface, because it seems to be the industry leader) and Bookshelf.js (ORM, because it seems robust and versatile, and it properly supports transactions, quite in contrast to the popular Waterline framework).

For the actual persistence storage, I will use MySQL here, which I expect to be fully supported by any serious ORM framework.

Example application

Here, I will once again revisit one of my favorite example use cases:

I will build a very simple example web application with CRUD functionality for two business entities, Customer and Payment:

Each entity is uniquely identified by its id which is auto-generated on persist.

A customer can have one or more payments, and one payment is associated with exactly one customer.

Note that by design, I want to have transaction boundaries on a per-entity basis, i.e. no cascading relationships. Thanks to an SQL backend, we can choose our transaction boundaries arbitrarily. Transaction boundaries should always be a business decision.

I have actually implemented the Java EE version of the server as well as an AngularJS / Restangular client already in the previous blog post. In this blog post, I will build the Node.js server equivalent, and compare the two implementations. Feel free to download the complete implementation of the resulting Node.js server.

Throughout this article, I will mark the Java EE implementation in blue, and the equivalent Node.js implementation in green, in order to compare them side-by-side.

Client implementation

As stated above, I’ve implemented a Restangular-based client in the previous blog post already. It’s a goal of this article to be able to switch between
  • the Java EE REST server
  • and the Node.js REST server
without any client changes other than the endpoint setting.
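Switching between the two servers is then a one-line change in the client’s Restangular config. A sketch (the module name and base URLs are assumptions for a local setup, not taken from the example project):

```javascript
// AngularJS / Restangular config sketch; module name and base URLs
// are assumptions for a local setup.
angular.module('exampleApp').config(function (RestangularProvider) {
    // point the client at the Java EE server…
    RestangularProvider.setBaseUrl('http://localhost:8080/example/resources');
    // …or at the Node.js server instead:
    // RestangularProvider.setBaseUrl('http://localhost:3000');
});
```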

Server setup

For the Java implementation, I will cheat a tiny bit and use a small abstraction layer I’ve built which provides generic CRUD functionality, named Crudlet, in order to not re-invent the wheel for each CRUD-based REST server implementation. I’ll add this dependency to the Maven build; we must also configure the REST servlet and the persistence context.

A development-friendly allow-all CORS policy is implemented by Crudlet.

For the Node.js implementation, I use pure npm to manage the dependencies. We need to add Bookshelf, the underlying Knex, Hapi, and (for convenience) lodash and moment to the project. We’ll add additional packages as we need them. Then, set up the main Node.js entry file according to Bookshelf’s and Hapi’s docs.

In Node.js, there are no separate config files. Everything is JavaScript; the developer is responsible for managing the source files and keeping them clean.

Setting up CORS for development purposes is a matter of setting a boolean flag in Hapi’s config:
const server = new Hapi.Server();
server.connection({ routes: {cors: true }, port: 3000 });
Please take a look at the mentioned main index.js file for the complete server setup.

Database setup: Schema / Table init

In Java EE / JPA, the DB schema / tables will be created from @Entity annotated “POJO” Java classes; this step is configured in the persistence.xml config file.

Note that according to the Java EE standard, automatic table creation may use a suboptimal naming scheme for DB tables, with ALLCAPS table / column names, e.g. turning Customer#employmentStatus into CUSTOMER.EMPLOYMENTSTATUS. Note that here, case information is lost. In a pure Java solution, this should never be an issue because we would always access the database via the Java model, which keeps the casing on its entity properties.

However, when using the database as the single source of information about the model (as will be the case for the Node.js-based solution), we need to modify automatic table creation such that case information is preserved. Because we want to keep the same backend for both the Java EE and the Node.js middle tier, we will modify the Java EE DB table creation such that it yields the same schema as the Node.js counterpart.

Unfortunately, this is a rather complex matter of implementing a SessionCustomizer; I’ve done this for the example application. It configures a SNAKE_CASE naming scheme for DB tables, e.g. turning Customer#employmentStatus into CUSTOMER.EMPLOYMENT_STATUS. Note that here, case information is preserved in the underscore (_).

I also populated the DB with initial values in a Java bean initialized at server @Startup.

Bookshelf itself doesn’t provide DB schema / table creation support, but the underlying Knex does. Unfortunately, interplay of these two tools is not thoroughly documented by Bookshelf. It’s vital to realize that here, Knex acts as a (command line) DB schema migration tool rather than a pure server-side framework.

Here are the necessary steps:
  • knex init: Creates knexfile.js. You have to update this with the information in the server script’s Knex config.
  • knex migrate:make default: Creates a new empty migration script. The name is of no concern here, so I chose “default”.

Then, write the table creation script in the migration script file.

It would clearly be desirable that we could, as an alternative, generate the DB schema from the Bookshelf model, similar to the Java EE workflow.

Finally, call the migration script on server startup in the server script:
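A sketch of that wiring (assuming knex and server are set up as in the main index.js; insertDemoEntities is a hypothetical helper for the demo inserts discussed below):

```javascript
// index.js (excerpt) — run pending migrations before starting Hapi.
// `knex` and `server` are assumed to be configured as shown earlier;
// insertDemoEntities is a hypothetical demo-data helper.
knex.migrate.latest().then(function () {
    return insertDemoEntities();
}).then(function () {
    server.start(function (err) {
        if (err) {
            throw err;
        }
        console.log('Server running at:', server.info.uri);
    });
});
```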
Just like for the Java solution, I want to initialize the DB with some demo entities. I really wanted to use the Bookshelf abstraction to insert these demo entities, but there is literally no explanation about how to build entity relations, so I resorted to a Knex-based solution. This of course comes with the drawback that one has to write “SQL-style” insert scripts.

Also note that Knex apparently distinguishes between “migration scripts” to build the DB schema (which includes the intricate steps explained above) and normal queries, which include insert statements and happen outside of the migration scripts, inside the normal server-side code.

Hence I implemented the respective insert queries.
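Such a Knex-based insert might look roughly like this (a sketch; the helper name and the demo values are illustrative, not the example project’s actual data):

```javascript
// Demo data inserts via plain "SQL-style" Knex queries; values are
// illustrative, and the helper name is hypothetical.
function insertDemoEntities(knex) {
    return knex('customer').insert({
        name: 'John Doe',
        city: 'Exampletown',
        employment_status: 'Employed',
        company_name: 'Example Inc.'
    }).then(function (ids) {
        // with MySQL, insert resolves with the auto-generated id
        return knex('payment').insert({
            customer_id: ids[0],
            date: '2016-01-31'
        });
    });
}
```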

For unknown reasons, the migration script’s dropTableIfExists() commands seem to be ignored on subsequent server startups, and the only viable solution to clean the DB before re-inserting the demo entities seems to be to include and execute the independent 'knex-cleaner' Node.js package:
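Roughly like this (a sketch assuming a configured knex instance; insertDemoEntities is a hypothetical demo-data helper):

```javascript
// Clean all tables via the 'knex-cleaner' package before re-inserting
// the demo entities; `knex` is assumed to be configured as shown earlier.
var knexCleaner = require('knex-cleaner');

knexCleaner.clean(knex).then(function () {
    return insertDemoEntities(knex); // hypothetical insert helper
});
```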
Also, it seems that here, promise-style programming really comes at the expense of code readability (note that for better error handling, we’d need even more sophisticated promise control).

Model definition

In Java EE, the model is defined as a POJO (literally a “Plain Old Java Object”). We use JPA annotations to define the O/R mapping.

In order for Crudlet to recognize a Java class as a model, we implement CrudIdentifiable:
@Entity
public class Customer implements CrudIdentifiable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    @Column(nullable = false)
    @Pattern(regexp = "[A-Za-z ]*")
    private String name;
    private String address;
    private String city;
    @Column(nullable = false)
    private EmploymentStatus employmentStatus = EmploymentStatus.Unemployed;
    private String companyName;
    @OneToMany(mappedBy = "customer")
    private List<Payment> payments = new ArrayList<>();
    // Getters and setters...
}
In order to create a Bookshelf model, we extend bookshelf.Model (which we will later change to extending bookshelf-base-model’s BaseModel, see the Model validation section):
Customer: BaseModel.extend({
    tableName: 'customer',
    parse: tableColumnRenamer.renameOnParse,
    format: tableColumnRenamer.renameOnFormat,
    payments: {
        get: function () {
            // return nothing
            return null;
        },
        set: function (value) {
            // do nothing
        }
    },
    // validation configuration...
})
The tableName property binds a model to a data storage table. The other properties define virtual model properties or implement special behavior, as explained below.

JSON serialization / deserialization

Both JAX-RS and Bookshelf are capable of handling the standard data types, such as parsing JSON strings to numbers. However, we need to teach each system which date format to use for any date input, which is perfectly reasonable.

In Java EE, it’s a matter of implementing an XmlAdapter and using it in a @XmlJavaTypeAdapter annotation on the field in question:
@XmlJavaTypeAdapter(DateFormatterAdapter.class)
private Date date;

private static class DateFormatterAdapter extends XmlAdapter<String, Date> {
    DateFormat format = new SimpleDateFormat("yyyy-MM-dd");

    @Override
    public Date unmarshal(final String date) throws Exception {
        return format.parse(date);
    }

    @Override
    public String marshal(Date date) throws Exception {
        return format.format(date);
    }
}
For Bookshelf, we need to hook into the model’s format / toJSON functions. This is not trivial as we could damage the default functionality here.
format: function (attrs) {
    attrs = tableColumnRenamer.renameOnFormat(attrs);
    if (attrs.date != null) {
        // write to MySQL date, as in http://stackoverflow.com/a/27381633/1399395
        attrs.date = moment(attrs.date).format('YYYY-MM-DD HH:mm:ss');
    }
    return attrs;
},
// read from MySQL date, as in https://github.com/tgriesser/bookshelf/issues/246#issuecomment-35498066
toJSON: function () {
    var attrs = bookshelf.Model.prototype.toJSON.apply(this, arguments);
    if (attrs.date != null) {
        attrs.date = moment(this.get('date')).format('YYYY-MM-DD');
    }
    return attrs;
}
There is, however, an additional challenge for a Bookshelf-based solution, related to the problem described earlier: for Bookshelf, the only source of the model’s structure is the database table. As MySQL uses case-insensitive table / column naming, Bookshelf expects us to query for snake_case model properties (e.g. customer.employment_status instead of customer.employmentStatus) and fails when we query in camelCase, which is not what we desire (camelCase is the de-facto standard for JSON objects).

Hence, we need to hook into each model’s parse and format methods to do the explicit snake_case to camelCase conversion. I’ve put this functionality in a separate tableColumnRenamer.js script and used it in the model definition (see above listing).

Model validation

Thanks to the Bean Validation Standard which is implemented as a cross-cutting concern in Java EE, adding validation constraints to the model is as simple as putting a Bean Validation annotation to the field in question:
@Pattern(regexp = "[A-Za-z ]*")
private String name;
It is, however, true that validation logic with Bean Validation constraints is quite limited; and adding custom constraints forces us to define a custom annotation which still feels like too much overhead from an enterprise developer perspective.

On the other hand, Node.js-based Bookshelf.js doesn’t seem to support declarative model validation out of the box at all. Although there is a separate sub-package named Checkit, created especially for model validation, from its sparse documentation I gather that validation is currently meant to be implemented programmatically for each model, based on callbacks and hooks. This certainly is not ideal.
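For reference, the programmatic approach sketched in Bookshelf’s documentation looks roughly like this (assuming a configured bookshelf instance; the rules shown are illustrative):

```javascript
// Programmatic, hook-based validation with Checkit; `bookshelf` is
// assumed to be configured, and the rules are illustrative.
var Checkit = require('checkit');

var Customer = bookshelf.Model.extend({
    tableName: 'customer',
    initialize: function () {
        // run validation on every save via the 'saving' event hook
        this.on('saving', this.validateSave);
    },
    validateSave: function () {
        return new Checkit({
            name: ['required'],
            employmentStatus: ['required']
        }).run(this.toJSON());
    }
});
```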

The only more or less out-of-the-box solution for declarative validation constraints (with Checkit) I found is a very small GitHub project called bookshelf-base-model, which I then incorporated into my solution. Its integration into Bookshelf models is still quite cumbersome:
Customer: BaseModel.extend({
    fields: {
        name: {validate: ['required', 'pattern:[A-Za-z ]*']},
        employmentStatus: {validate: ['required']}
    },
    saving: function saving(model, attrs, options) {
        // override faulty 'bookshelf-base-model' default behavior
    }
})
Plus, it comes with its own drawbacks, e.g. a sub-optimal default configuration which would force us to declare every field again in the fields map. The solution is to override the saving callback on each model.

Also, the Checkit library leaves quite a lot to be desired. The number of available constraints seems quite limited (e.g. there is no regex pattern test), but most importantly, the validation error messages seem to contain only a bare minimum of information, whereas the Java Bean Validation counterpart can be queried for all kinds of “contextual” information (e.g. the erroneous input value and the validation constraint parameters).

I only briefly checked out Hapi’s Joi package, which apparently can be used independently of Hapi and in conjunction with Bookshelf, but I didn’t manage to get it to work better than Checkit.
