January 31, 2016

Java EE with JAX-RS + JPA vs. Node.js with Hapi + Bookshelf.js (part 2 of 2)



HTTP endpoint definition

Here, the Java EE-based solution profits from the small abstraction layer introduced with Crudlet, which comes with prepared HTTP endpoint definitions for the main CRUD operations according to de-facto REST standards. All you need to do is implement a CrudResource class with an almost empty body:
@Path("customers")
@Stateless
public class CustomerResource extends CrudResource<Customer> {
    @Inject
    private CustomerService service; // from "DB access definition" chapter
 
    @Override
    protected CrudService<Customer> getService() {
        return service;
    }
}
For custom HTTP endpoints or more sophisticated declarations (such as the payments nested base resource), use the simple JAX-RS annotations, or manipulate the control flow programmatically:
@Path("customers/{customerId}/payments")
@Stateless
public class PaymentResource extends CrudResource<Payment> {
    @Inject
    private PaymentService service;
    @Inject
    CustomerService customerService;
     
    @Override
    protected CrudService<Payment> getService() {
        return service;
    }
 
    @Override
    public List<Payment> findAll() {
        Long customerId = getPathParam("customerId", Long.class);
        return service.findAllByCustomer(customerId);
    }
     
    @Override
    public Response save(Payment entity) {
        Long customerId = getPathParam("customerId", Long.class);
        Customer customer = customerService.findById(customerId);
        entity.setCustomer(customer);
        return super.save(entity);
    }
}
A Hapi server doesn’t know the notion of a base resource to group endpoints; rather, you define several independent routes on the server:
// find all
server.route({
    method: 'GET',
    path: config.basePath,
    handler: function (request, reply) {
        doQuery(...).then(function (collection) {
            reply(collection);
        });
    }
});
 
...
The Hapi API is quite elegant; however, with the ever-same CRUD operations repeated for every single model, the server config may get quite bloated. For a fair comparison, I’ve locally built a similar abstraction layer for the main CRUD operations in the hapiCrud.js script, just as Crudlet does for the Java EE part.

Now we can configure the Hapi server for CRUD on each REST base path like so:
hapiCrud(server, {
    basePath: '/customers',
    bookshelfModel: Customer,
    beforeAdd: function(request) {
        delete request.payload.payments;
    },
    beforeUpdate: function(request) {
        delete request.payload.payments;
    }
});
 
hapiCrud(server, {
    basePath: '/customers/{customerId}/payments',
    baseQuery: function(request) {
        return {customer_id: request.params.customerId};
    },
    bookshelfModel: Payment,
    beforeAdd: function(request) {
        request.payload.customerId = request.params.customerId;
    }
});
The REST server config also includes a generic solution for error handling and validation error handling, which is addressed next.

Error handling

For this demo application, it’s a design goal to return localization-ready error messages in case of a DB problem. For instance, deleting a customer with a non-empty payments list should return a 400 error with a body like this:
{
    "error": {
        "detailMessage": "DELETE on table 'CUSTOMER' caused a violation of foreign key constraint 'PAYMENTCUSTOMER_ID' for key (1).  The statement has been rolled back.",
        "exception": "java.sql.SQLIntegrityConstraintViolationException"
    }
}
For the Java EE implementation, this kind of error handling is implemented in Crudlet. It returns the error message printed above.

For the Node.js server, we need to catch Bookshelf’s errors and let Hapi answer with a 400 message:
// delete
server.route({
    method: 'DELETE',
    path: config.basePath + '/{id}',
    handler: function (request, reply) {
        doQuery(...).then(function (entity) {
            reply().code(204);
        }).catch(function (err) {
            reply({error: {exception: err.code, detailMessage: err.message}}).code(400);
        });
    }
});
Because we explicitly return the raw error codes rather than localized error messages (we want localization to happen on the client), and because Java’s and Bookshelf’s error codes of course diverge, we need to maintain a separate set of I18N keys, for instance in an AngularJS client:

For a Java EE server:
'error.com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException':
    'Cannot delete an object which is still referenced to by other objects.',
For a Node.js server:
'error.ER_ROW_IS_REFERENCED_2':
    'Cannot delete an object which is still referenced to by other objects.',
The important part is that we can keep the localization logic the same.
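As a sketch of what that shared client-side logic can look like (the lookup function and its fallback are my own assumptions, not part of the demo code), both backends funnel through the same key-based resolution:

```javascript
// Sketch: backend-agnostic error localization on the client (assumed helper,
// not from the demo code). The I18N tables differ per backend, but the
// lookup logic stays identical.
var messages = {
    // key as returned by the Java EE server
    'error.com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException':
        'Cannot delete an object which is still referenced to by other objects.',
    // key as returned by the Node.js server
    'error.ER_ROW_IS_REFERENCED_2':
        'Cannot delete an object which is still referenced to by other objects.'
};

function localizeError(errorBody) {
    var key = 'error.' + errorBody.error.exception;
    // fall back to the raw message if no translation exists
    return messages[key] || errorBody.error.detailMessage;
}
```

Only the key table differs between the two servers; the lookup itself never has to know which backend produced the error.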

Validation error handling

For this demo application, we also want to return localization-ready error messages in case of a model validation constraint violation. For instance, trying to save a customer with illegal characters in its name should return a 400 error with a body like this:
{
    "validationErrors": {
        "name": {
            "attributes": {
                "flags": "[Ljavax.validation.constraints.Pattern$Flag;@1f414540",
                "regexp": "[A-Za-z ]*"
            },
            "constraintClassName": "javax.validation.constraints.Pattern",
            "invalidValue": "Name not allowed!!",
            "messageTemplate": "javax.validation.constraints.Pattern.message"
        }
    }
}
This again is implemented in Crudlet: the ConstraintViolationException thrown by Bean Validation contains all the information about a constraint violation, which Crudlet handles accordingly.

Inside the Hapi routes, we can also catch Checkit’s errors and return the respective answer:
// update
server.route({
    method: ['PUT', 'POST'],
    path: config.basePath + '/{id}',
    handler: function (request, reply) {
        if (typeof config.beforeUpdate !== 'undefined')
            config.beforeUpdate(request);
        doQuery(...).then(function (entity) {
            reply(entity);
        }).catch(config.bookshelfModel.ValidationError, function (err) {
            reply(transformConstraintViolationMessages(err)).code(400);
        });
    }
});
However, as mentioned before, Checkit doesn’t seem to provide as much information about a constraint violation as its Java Bean Validation counterpart does.

Again, we need separate I18N keys to localize the validation error messages coming from a Java EE or a Node.js server, respectively. Here, it becomes obvious that the Node.js / Checkit solution just doesn’t provide as much information as Java Bean Validation does:

For a Java EE server:
'error.javax.validation.constraints.Pattern.message': 'must match "{{regexp}}"',
For a Node.js server:
'error.pattern': 'must match the expected pattern',

DB access definition

Note that we have still skipped the actual DB queries. This is arguably also the most boring part of the implementation, because it’s basically the same CRUD queries for every REST endpoint.

Therefore, these operations are implemented in Crudlet already. Just implement CrudService:
public class CustomerService extends CrudService<Customer> {
    @Override
    @PersistenceContext
    protected void setEm(EntityManager em) {
        super.setEm(em);
    }
 
    @Override
    public Customer create() {
        return new Customer();
    }
 
    @Override
    public Class<Customer> getModelClass() {
        return Customer.class;
    }
}
Of course, you may add additional custom actions to the service where you’ll implement DB access using the EntityManager. This Java EE facility provides declarative and implicit transaction control.

DB access is what Bookshelf’s abstraction layer is made for. Hence, we use a Bookshelf model’s query operations to access the underlying DB (via Knex).
// update
server.route({
    method: ['PUT', 'POST'],
    path: config.basePath + '/{id}',
    handler: function (request, reply) {
        if (typeof config.beforeUpdate !== 'undefined')
            config.beforeUpdate(request);
        config.bookshelfModel.forge(request.payload).save().then(function (entity) {
            reply(entity);
        }).catch(config.bookshelfModel.ValidationError, function (err) {
            reply(transformConstraintViolationMessages(err)).code(400);
        });
    }
});
Bookshelf offers a clean, promise-based API which works excellently with Hapi.

Conclusion

I stated at the beginning that my opinion would be biased in favor of Java EE, and it clearly is. Both the Java EE and Node.js tech stack undoubtedly have their strengths and their flaws, but most importantly, I don’t see a single reason which would really let me turn down Java EE just now to write REST APIs. Both technologies offer a range of frameworks, and the opportunity to build your own, to make work easier. Most specifically, the important aspect of model validation currently seems superior in Java EE.

Some of the Node.js packages I covered here are clearly still at quite an early stage. This is especially true for Bookshelf and Knex. Their documentation in particular makes it frankly quite hard for new developers: it’s really just the API reference, with no information about how to solve everyday real-world problems such as Bookshelf / Knex interplay, accessing / modifying relations, or how to properly include a validation framework.

From a project management point of view, the idea of having the client and server written in the same language (isomorphic JavaScript) truly is compelling. The possibilities of a more rapid, prototyping-friendly, “Ruby on Rails”-like workflow enabled by JavaScript’s dynamic typing are interesting as well. But again, in total it’s not a strong enough argument to turn down Java EE.

Also from a management position, perhaps my biggest reservation about Node.js is the lack of “official” standardization. The Java EE ecosystem really is built upon officially recognized software specifications which act as a contract for software vendors. We can learn and talk about specifications, and work with the implementation of our choice. In the Node.js world, there are hardly any globally recognized specifications (except e.g. for promises). We learn to use and talk about products: Node.js, Hapi, Bookshelf; these are all just products, or current de-facto standards at most. They hardly offer investment protection; not when talking about 10+ years. The Java EE standards will still be around by then, and will probably further evolve, but no-one guarantees the survival of a single Node.js based product.

I’m still quite fascinated by the Node.js ecosystem and I will continue to study it for my own interest; but for true enterprise-quality software development, in my opinion, it is currently no match for a Java EE 7 tech stack, at least not in the use case discussed here.

Please let me know whether you found this article interesting and helpful, whether you agree with my conclusions or if you spotted an error of any kind.

Feel free to download the demo implementation of the Java EE 7 server or the Node.js server from their respective GitHub repositories.

Update February 9, 2016: Meanwhile, I have created the npm module hapi-bookshelf-crud, a simple abstraction layer on top of Hapi + Bookshelf.js + Joi to make CRUD REST endpoint creation as simple as possible. Head over to its npm package page to learn more.



Java EE with JAX-RS + JPA vs. Node.js with Hapi + Bookshelf.js (part 1 of 2)



Even as a Java EE developer, I have to admit that sophisticated JavaScript-based frameworks such as AngularJS nowadays are a compelling alternative to a Java-based web client. Searching for more lean and elegant solutions, I shall therefore examine how, on the server side, a JavaScript tech stack compares to its Java EE counterpart. This is a side-by-side comparison of a REST-to-SQL implementation in Java EE and Node.js.

I will build a REST server which serves basic CRUD operations as invoked by an example AngularJS / Restangular client. I make this comparison mainly from an ease-of-development point of view and clearly from a Java EE developer’s perspective.

At the end of the day, I really want a technology which easily allows me to:
  • Declare CRUD operations on a relational database storage
  • Define domain models with auto-ORM and declarative validation constraints
  • Get out-of-the-box JSON serialization / deserialization
  • Get out-of-the-box validation error handling
  • Build on production-ready components
If this technology allowed me to use the same programming language to write both client and server, this would certainly mark a big plus.

With this article, I hope to help introduce the Node.js ecosystem to fellow Java EE developers, and to offer my opinion on how the two technologies compare for anyone who is evaluating a tech stack change.

Technology choice

For the Java part, I go with the official standards, of course: I will adhere to the Java EE 7, JAX-RS (REST interface) and JPA (ORM) standards, and use the respective reference implementations.

For the JavaScript part, I try to choose what seems to me a “best of breed”: a Node.js server with Hapi (REST interface, because it seems to be the industry leader) and Bookshelf.js (ORM, because it seems robust and versatile, and it properly supports transactions, much in contrast to the popular Waterline framework).

For the actual persistence storage, I will use MySQL here which I expect to be fully supported by any serious ORM framework.

Example application

I will here once again revisit one of my favorite example use cases:

I will build a very simple example web application with CRUD functionality for the two business entities Customer and Payment.

Each entity is uniquely identified by its id which is auto-generated on persist.

A customer can have one or more payments, and one payment is associated with exactly one customer.

Note that by design, I want to have transaction boundaries on a per-entity basis, i.e. no cascading relationships. Thanks to an SQL backend, we can choose our transaction boundaries arbitrarily. Transaction boundaries should always be a business decision.

I have actually implemented the Java EE version of the server as well as an AngularJS / Restangular client already in the previous blog post. In this blog post, I will build the Node.js server equivalent, and compare the two implementations. Feel free to download the complete implementation of the resulting Node.js server.

Throughout this article, I will mark the Java EE implementation in blue, and the equivalent Node.js implementation in green, in order to compare them side-by-side.

Client implementation

As stated above, I’ve implemented a Restangular-based client in the previous blog already. It’s a goal of this article to be able to switch between

the Java EE REST server
RestangularProvider.setBaseUrl('http://localhost:8080/CrudletDemo.server/');
and the Node.js REST server
RestangularProvider.setBaseUrl('http://localhost:3000');
without any client changes other than the endpoint setting.
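One simple way to keep that switch in a single place is a small config map; this is a sketch of my own (the constant names are assumptions, not from the demo client):

```javascript
// Sketch: select the backend base URL in one place (names are my own
// assumptions, not from the demo client).
var BACKENDS = {
    javaee: 'http://localhost:8080/CrudletDemo.server/',
    nodejs: 'http://localhost:3000'
};

function baseUrlFor(backend) {
    return BACKENDS[backend];
}

// e.g. RestangularProvider.setBaseUrl(baseUrlFor('nodejs'));
```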

Server setup

For the Java implementation, I will cheat a tiny bit and use a small abstraction layer I’ve built which provides generic CRUD functionality, named Crudlet, in order to not re-invent the wheel for each CRUD-based REST server implementation. I’ll add this dependency to the Maven build; we must also configure the REST servlet and the persistence context.

A development-friendly allow-all CORS policy is implemented by Crudlet.

For the Node.js implementation, I use pure npm to manage the dependencies. We need to add Bookshelf, the underlying Knex, Hapi, and (for convenience) lodash and moment to the project. We’ll add additional packages as we need them. Then, setup the main Node.js entry file according to Bookshelf’s and Hapi’s doc.

In Node.js, there are no separate config files. Everything is JavaScript; the developer is responsible for managing the source files and keeping them clean.

Setting up CORS for development purposes is a matter of setting a boolean flag in Hapi’s config:
const server = new Hapi.Server();
server.connection({ routes: {cors: true }, port: 3000 });
Please take a look at the mentioned main index.js file for the complete server setup.

Database setup: Schema / Table init

In Java EE / JPA, the DB schema / tables are created from @Entity-annotated “POJO” Java classes; this step is configured in the persistence.xml config file.

Note that according to the Java EE standard, automatic table creation may use a suboptimal naming schema for DB tables, using ALLCAPS table / column names, e.g. turning Customer#employmentStatus into CUSTOMER.EMPLOYMENTSTATUS. Note that here, case information is lost. In a pure Java solution, this should never be an issue because we would always access the database via the Java model which keeps the casing on its entity properties.

However, when using the database as the single source of information about the model (as will be the case for the Node.js-based solution), we need to modify automatic table creation such that case information is preserved. Because here, we want to keep the backend for the Java EE and the Node.js middle-tier the same, we will modify Java EE DB table creation such that it yields the same schema as the Node.js counterpart.

Unfortunately, it is a rather complex matter of implementing a SessionCustomizer; I’ve done this for the example application. This configures a SNAKE_CASE naming schema for DB tables e.g. turning Customer#employmentStatus into CUSTOMER.EMPLOYMENT_STATUS. Note that here, case information is preserved in the underscore (_).

Also I populated the DB with initial values in a Java bean initialized at server @Startup.

Bookshelf itself doesn’t provide DB schema / table creation support, but the underlying Knex does. Unfortunately, interplay of these two tools is not thoroughly documented by Bookshelf. It’s vital to realize that here, Knex acts as a (command line) DB schema migration tool rather than a pure server-side framework.

Here are the necessary steps:
knex init
Creates knexfile.js. You have to update this with the information in the server script’s Knex config.
knex migrate:make default
Creates a new empty migration script. The name is of no concern here, so I chose “default”.

Then, write the table creation script in the migration script file.

It would clearly be desirable that we could, as an alternative, generate the DB schema from the Bookshelf model, similar to the Java EE workflow.

Finally, call the migration script on server startup in the server script:
knex.migrate.latest().then(...)
Just like for the Java solution, I want to initialize the DB with some demo entities. I really wanted to use the Bookshelf abstraction to insert these demo entities, but there is literally no explanation about how to build entity relations, so I resorted to a Knex-based solution. This of course comes with the drawback that one has to write “SQL-style” insert scripts.

Also note that Knex apparently distinguishes between “migration scripts” to build the DB schema (which involve the intricate steps explained above) and normal queries, which include insert statements and happen outside of the migration scripts, inside the normal server-side code.

Hence I implemented the respective insert queries.

For unknown reasons, the migration script’s dropTableIfExists() commands seem to be ignored on subsequent server startups, and the only viable solution to clean the DB before re-inserting the demo entities seems to be to include and execute the independent 'knex-cleaner' Node.js package:
knexCleaner.clean(knex).then(...)
Also, it seems that here, promise-style programming really comes at the expense of code readability (note that for better error handling, we’d need even more sophisticated promise control).

Model definition

In Java EE, the model is defined as a POJO (literally a “Plain Old Java Object”). We use JPA annotations to define the O/R mapping.

In order for Crudlet to recognize a Java class as a model, we implement CrudIdentifiable:
@Entity
public class Customer implements CrudIdentifiable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    
    @Column(nullable = false)
    @NotEmpty
    @Pattern(regexp = "[A-Za-z ]*")
    private String name;
    private String address;
    private String city;
    
    @Column(nullable = false)
    @NotNull
    @Enumerated(EnumType.STRING)
    private EmploymentStatus employmentStatus = EmploymentStatus.Unemployed;
    private String companyName;
    
    @OneToMany(mappedBy = "customer")
    private List<Payment> payments = new ArrayList<>();
 
    // Getters and setters...
}
In order to create a Bookshelf model, we extend bookshelf.Model (which we later change to extend bookshelf-base-model’s BaseModel; see the Model validation section):
Customer: BaseModel.extend({
    tableName: 'customer',
    parse: tableColumnRenamer.renameOnParse,
    format: tableColumnRenamer.renameOnFormat,
    payments: {
        get: function () {
            // return nothing
            return null;
        },
        set: function (value) {
            // do nothing
        }
    },
    // validation configuration...
})
The tableName property binds a model to a data storage table. Other properties build virtual model properties, or implement special behavior, as explained below.

JSON serialization / deserialization

Both JAX-RS and Bookshelf are capable of handling the standard data types such as parsing JSON Strings to numbers. However, we need to teach each system which date format to use for any date input, which is perfectly reasonable.

In Java EE, it’s a matter of implementing an XmlAdapter and use it in a @XmlJavaTypeAdapter annotation on the field in question:
@XmlJavaTypeAdapter(DateFormatterAdapter.class)
private Date date;

private static class DateFormatterAdapter extends XmlAdapter<String, Date> {
    DateFormat format = new SimpleDateFormat("yyyy-MM-dd");

    @Override
    public Date unmarshal(final String date) throws Exception {
        return format.parse(date);
    }

    @Override
    public String marshal(Date date) throws Exception {
        return format.format(date);
    }
}
For Bookshelf, we need to hook into the model’s format / toJSON functions. This is not trivial as we could damage the default functionality here.
format: function (attrs) {
    attrs = tableColumnRenamer.renameOnFormat(attrs);
    if (attrs.date != null) {
        // write to MySQL date, as in http://stackoverflow.com/a/27381633/1399395
        attrs.date = moment(attrs.date).format('YYYY-MM-DD HH:mm:ss');
    }
    return attrs;
},
// read from MySQL date, as in https://github.com/tgriesser/bookshelf/issues/246#issuecomment-35498066
toJSON: function () {
    var attrs = bookshelf.Model.prototype.toJSON.apply(this, arguments);
    if (attrs.date != null) {
        attrs.date = moment(this.get('date')).format('YYYY-MM-DD');
    }
    return attrs;
},
There is, however, an additional challenge for a Bookshelf-based solution, related to the problem described earlier: for Bookshelf, the only source of the model’s structure is the database table. As MySQL uses case-insensitive table / column naming, Bookshelf expects us to query for snake_case model properties (e.g. customer.employment_status instead of customer.employmentStatus) and fails when querying with camelCase naming, which is not what we desire (camelCase is the de-facto standard for JSON objects).

Hence, we need to hook into each model’s parse and format methods to do the explicit snake_case to camelCase conversion. I’ve put this functionality in a separate tableColumnRenamer.js script and used it in the model definition (see above listing).
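The renamer itself boils down to two small conversion functions; the following is a sketch of mine (the function names mirror those used in the model above, but this implementation is an assumption, not the demo’s actual tableColumnRenamer.js):

```javascript
// Sketch of the snake_case <-> camelCase attribute conversion behind
// tableColumnRenamer.js (implementation assumed).
function snakeToCamel(name) {
    return name.replace(/_([a-z])/g, function (match, letter) {
        return letter.toUpperCase();
    });
}

function camelToSnake(name) {
    return name.replace(/([A-Z])/g, function (match, letter) {
        return '_' + letter.toLowerCase();
    });
}

// parse: DB row -> model attributes (snake_case -> camelCase)
function renameOnParse(attrs) {
    var result = {};
    Object.keys(attrs).forEach(function (key) {
        result[snakeToCamel(key)] = attrs[key];
    });
    return result;
}

// format: model attributes -> DB row (camelCase -> snake_case)
function renameOnFormat(attrs) {
    var result = {};
    Object.keys(attrs).forEach(function (key) {
        result[camelToSnake(key)] = attrs[key];
    });
    return result;
}
```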

Model validation

Thanks to the Bean Validation Standard which is implemented as a cross-cutting concern in Java EE, adding validation constraints to the model is as simple as putting a Bean Validation annotation to the field in question:
@NotEmpty
@Pattern(regexp = "[A-Za-z ]*")
private String name;
It is, however, true that validation logic with Bean Validation constraints is quite limited; and adding custom constraints forces us to define a custom annotation which still feels like too much overhead from an enterprise developer perspective.


On the other hand, the Node.js-based Bookshelf.js doesn’t seem to support declarative model validation out of the box at all. Although there is a separate sub-package named Checkit created especially for model validation, from the sparse documentation I gather that it is currently meant to be implemented programmatically for each model, based on callbacks and hooks. This certainly is not ideal.

I found the only more-or-less out-of-the-box solution for declarative validation constraints (with Checkit) in a very small GitHub project called bookshelf-base-model, which I then incorporated into my solution. Its integration in Bookshelf models is still quite cumbersome:
Customer: BaseModel.extend({
    ...
    
    fields: {
        name: {validate: ['required', 'pattern:[A-Za-z ]*']},
        employmentStatus: {validate: ['required']},
    },
    saving: function saving(model, attrs, options) {
        // override faulty 'bookshelf-base-model' default behavior
    },
})
Plus, it comes with its own drawbacks, namely a sub-optimal default configuration which, for example, would force us to re-declare every field in the fields map. The solution is to override the saving callback on each model.

Also, the Checkit library leaves quite a lot to be desired. The number of available constraints seems quite limited (e.g. there is no regex pattern test), but most importantly, the validation error messages seem to contain only a bare minimum of information, whereas the Java Bean Validation counterpart can be queried for all kinds of “contextual” information (e.g. the erroneous input value and the validation constraint parameters).

I only briefly checked out Hapi’s Joi package which apparently can be used independently of Hapi and in conjunction with Bookshelf, but I didn’t manage to get it to work better than Checkit.


January 10, 2016

Crudlet: Ready-to-use Restangular-to-SQL CRUD with JAX-RS



I’ve been searching for a lean, simple REST-to-SQL CRUD framework before I realized that it’s actually quite easy to implement that based on JAX-RS. I named the result Crudlet and published it open source such that you can use it either as a dependency to build your web application, or to inspect its code in order to build your own CRUD application with vanilla JAX-RS from scratch.

What I wanted to have is a server framework which provides with minimal development / deployment overhead:
  • CRUD operations on any relational database storage of my choice
  • Declarative definition of strongly-typed domain models with auto-ORM
  • Out-of-the-box JSON serialization / deserialization
  • Out-of-the-box validation error handling with I18N support
  • Built on production-ready components

Technology choice

Most of what I needed to build it is provided out-of-the-box by Java EE’s JAX-RS standard, as implemented by Java EE containers:
  • Declarative definition of REST endpoints with Java annotations
  • Declarative definition of the domain models with auto-ORM as Java POJOs
  • Out-of-the-box JSON serialization / deserialization
  • Out-of-the-box validation error handling
Being based on a Java EE / JAX-RS stack, this comes with a couple of advantages. Note that I don’t mean to imply that the following technologies are bad, but that given the requirements declared above, Java EE / JAX-RS seems the better match.

Compared to Node.js / Express

  • Vanilla Java is strongly typed (as opposed to vanilla JavaScript), which seems to suit well the requirement to build a stable domain model.
  • JAX-RS is all about convention over configuration: you can declaratively build your models and register components (a “plugin” approach), whereas writing an actual Node.js server from scratch typically forces us to imperatively write a server daemon.

Compared to a MongoDB database backend

Java EE allows us to work with any JDBC compliant persistence storage. I’ve specifically decided to use an SQL-based database because of some restrictions a document-based data storage such as MongoDB implies:
  • SQL supports schema validation (which is not the case with plain MongoDB)
  • SQL supports per-entity transaction control (which is not the case with plain MongoDB)

An example application

Now, I will use Crudlet to build a demo CRUD application. I will here recreate a use case I’ve used in previous blog examples.

I will build a very simple example web application with CRUD functionality for the two business entities Customer and Payment.

Each entity is uniquely identified by its id which is auto-generated on persist.

A customer can have one or more payments, and one payment is associated with exactly one customer.

There will be a master view (list of all entities) and a detail view (edit page for current entity) for each entity type, respectively. The payments master view will be integrated in the detail view of the parent customer entity.

The rest of this tutorial is taken from Crudlet’s GitHub project usage information.

The complete source code of the example application (server and client) is available in a separate GitHub repository.

Server: Setup

JAX-RS

You need to setup the JAX-RS Application servlet in the web.xml file as shown in the demo project:

<servlet>
    <servlet-name>javax.ws.rs.core.Application</servlet-name>
</servlet>
<servlet-mapping>
    <servlet-name>javax.ws.rs.core.Application</servlet-name>
    <url-pattern>/*</url-pattern>
</servlet-mapping>

Database

Define your database connection in the persistence.xml file. Any JDBC compliant connection is supported. In the demo project, we use a JTA data source the configuration of which is set up in the application server.

CORS

Crudlet by default allows you to handle CORS requests without nasty errors, as is usually desired in the development / debug stage. The required request / response filters are implemented in the CorsRequestFilter and CorsResponseFilter class, respectively. Set the CorsRequestFilter#ALLOW_OPTIONS and CorsResponseFilter#ALLOW_CORS boolean flags to false (e.g. in a @Startup @Singleton EJB bean) to disable the CORS allow-all policy.

Server: Implementation

Crudlet provides a simple, lean framework to build (optionally RESTful) CRUD applications based on common best practices. Having a basic CRUD implementation in place means that you can, for any entity type:

  • Create (C) new entities
  • Read (R) persistent entities from the persistence storage
  • Update (U) entities in the persistence storage
  • Delete (D) entities from the persistence storage

Building your application around a CRUD centric approach brings a couple of advantages:

  • The service interface is very simplistic, lean and self-documenting
  • The business logic resides in the model rather than in the service interface which matches well an object-oriented language like Java
  • Because the service interface stays the same for all entities, we can make excessive use of abstraction through inheritance and generics
  • This architecture matches well a best-practices-compliant RESTful implementation where the four CRUD actions map directly to HTTP verbs.
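The verb mapping mentioned in the last point can be sketched as a simple routing table (the paths are assumptions based on the demo application, not Crudlet’s actual code):

```javascript
// Sketch: conventional CRUD-to-HTTP mapping for a 'customers' resource
// (paths assumed from the demo application).
var crudRoutes = [
    {action: 'Create', method: 'POST',   path: '/customers'},
    {action: 'Read',   method: 'GET',    path: '/customers'},      // find all
    {action: 'Read',   method: 'GET',    path: '/customers/{id}'}, // find one
    {action: 'Update', method: 'PUT',    path: '/customers/{id}'},
    {action: 'Delete', method: 'DELETE', path: '/customers/{id}'}
];
```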

This best practices architecture is based on three central artifacts for which Crudlet provides an abstract generic base implementation:

  • CrudEntity: the entity model
  • CrudService: the persistence service
  • CrudResource: the REST web service endpoint

In a CRUD application, the relation between these artifacts is 1 : 1 : 1; you will thus build a service and a controller for every entity. Thanks to the level of abstraction provided by Crudlet, this is a matter of about 30 lines of code:

  • CrudEntity makes sure your entity implements an auto-ID generation strategy
  • CrudService implements basic persistence storage access (through an EntityManager) for the four CRUD operations
  • CrudResource implements a REST web service endpoint for editing all entities in the persistence storage including out-of-the-box support for returning I18N-ready model validation error messages.

Entity

Use either the CrudIdentifiable interface or the CrudEntity class to derive your entity model classes from. This is the only prerequisite to use them with a CrudService and a CrudResource.

The difference between the interface and the class is that the latter provides an auto-generated Long id field implementation out-of-the-box.

For instance, to create a Customer entity:

@Entity
public class Customer extends CrudEntity { 
    @NotNull
    @Pattern(regexp = "[A-Za-z ]*")
    private String name;
    private String address;
    private String city;
    ...

Use Bean Validation constraints to declaratively specify the model validation.

Service

In order to create a CRUD service for an entity type, implement CrudService for the entity and register it as a CDI bean in the container (depending on beans.xml bean-discovery-mode, explicit registration may not be necessary).

For instance, to create the service for the Customer entity:

public class CustomerService extends CrudService<Customer> {
    @Override
    @PersistenceContext
    protected void setEm(EntityManager em) {
        super.setEm(em);
    }

    @Override
    public Customer create() {
        return new Customer();
    }

    @Override
    public Class<Customer> getModelClass() {
        return Customer.class;
    }
}
  • Within the setEm(EntityManager) method, simply call the super method. The important part is that you inject your @PersistenceContext in this method by annotation.

Of course, you are free to add additional methods to your CrudService implementation where reasonable.
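For instance, the findAllByCustomer(Long) finder used by the PaymentResource shown earlier might look like this. This is a sketch: it assumes the EntityManager set via setEm(EntityManager) is accessible to subclasses through a (hypothetical) getEm() accessor.

```java
public class PaymentService extends CrudService<Payment> {
    // ... create() and getModelClass() implemented as for CustomerService ...

    // Custom finder beyond the four basic CRUD operations; getEm() is
    // assumed to expose the EntityManager injected via setEm().
    public List<Payment> findAllByCustomer(Long customerId) {
        return getEm()
                .createQuery("SELECT p FROM Payment p WHERE p.customer.id = :customerId",
                        Payment.class)
                .setParameter("customerId", customerId)
                .getResultList();
    }
}
```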

Web service endpoint

Finally, create the REST web service endpoint by implementing CrudResource for the entity and register it as a @Stateless EJB bean in the container.

For instance, to create the web service endpoint for the Customer entity:

@Path("customers")
@Stateless
public class CustomerResource extends CrudResource<Customer> {
    @Inject
    private CustomerService service;

    @Override
    protected CrudService<Customer> getService() {
        return service;
    }
}
  • The @Path defines the base path of the web service endpoint.
  • Within the getService() method, return the concrete CrudService for the entity type in question which you should dependency-inject into the controller.

That’s it. Now you can use e.g. the httpie command line tool to verify that you can execute RESTful CRUD operations on your entity against the database.
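For example, against the Customer endpoint (using the demo application’s base URL; the field values are illustrative):

```shell
# create (C)
http POST localhost:8080/CrudletDemo.server/customers name="Alice" city="Berlin"
# read (R) all entities, or a single one by id
http GET localhost:8080/CrudletDemo.server/customers
http GET localhost:8080/CrudletDemo.server/customers/1
# update (U)
http PUT localhost:8080/CrudletDemo.server/customers/1 name="Alice" city="Hamburg"
# delete (D)
http DELETE localhost:8080/CrudletDemo.server/customers/1
```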

Of course, you are free to add additional methods to your CrudResource implementation where reasonable.

Read on for an example client implementation based on AngularJS.

AngularJS client: Setup

In this example, we use Restangular as an abstraction layer to do RESTful HTTP requests; it offers a far more sophisticated, yet more concise, API than AngularJS’s built-in $http and $resource services. It is set up as shown in the demo application’s main JavaScript file:

.config(function (RestangularProvider) {
    RestangularProvider.setBaseUrl('http://localhost:8080/CrudletDemo.server/');

    RestangularProvider.setRequestInterceptor(function(elem, operation) {
        // prevent "400 - bad request" error on DELETE
        // as in https://github.com/mgonto/restangular/issues/78#issuecomment-18687759
        if (operation === "remove") {
            return undefined;
        }
        return elem;
    });
})

You may also want to install and set up the angular-translate module for I18N support:

.config(['$translateProvider', function ($translateProvider) {
    $translateProvider.translations('en', translations);
    $translateProvider.preferredLanguage('en');
    $translateProvider.useMissingTranslationHandlerLog();
    $translateProvider.useSanitizeValueStrategy('sanitize');
}])

AngularJS client: Implementation

In the “controller” JavaScript file, we can use Restangular to access the RESTful web service endpoint of our Crudlet Customer service like so:

  • Get a list of entities (GET /customers/): Restangular.all("customers").getList().then(function(entities) {...})
  • Get a single entity (GET /customers/1): Restangular.one("customers", $routeParams.id).get().then(function (entity) {...})
  • Save an entity (PUT /customers/1): $scope.entity.save().then(function() {...})
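Putting it together, a save handler in the controller can capture the validationErrors map the server returns on a failed save. This is a sketch; the scope property and route names are illustrative:

```javascript
$scope.save = function () {
    $scope.validationErrors = null;
    $scope.entity.save().then(function () {
        // success: navigate back to the list view, for instance
        $location.path('/customers');
    }, function (response) {
        // a 400 response carries the validationErrors map in its body
        $scope.validationErrors = response.data.validationErrors;
    });
};
```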

Validation

An interesting aspect of Crudlet is its out-of-the-box support for localized validation error messages. If a validation error occurs on save, the server responds e.g. like this:

{
    "validationErrors": {
        "name": {
            "attributes": {
                "flags": "[Ljavax.validation.constraints.Pattern$Flag;@1f414540",
                "regexp": "[A-Za-z ]*"
            },
            "constraintClassName": "javax.validation.constraints.Pattern",
            "invalidValue": "Name not allowed!!",
            "messageTemplate": "javax.validation.constraints.Pattern.message"
        }
    }
}

Using the angular-translate module we set up previously, we can show all localized validation messages like so:

<div class="alert alert-danger" ng-show="validationErrors != null">
    <ul>
        <li ng-repeat="(component, error) in validationErrors">
            {{'payment.' + component | translate}}: {{'error.' + error.messageTemplate | translate:error.attributes }}
        </li>
    </ul>
</div>

The validationErrors.<property>.messageTemplate part is the message template returned by the bean validation constraint. We can thus e.g. base the validation error localization on Hibernate’s own validation messages:

var translations = {
    ...
    'error.javax.validation.constraints.Pattern.message': 'must match "{{regexp}}"',
    ...
};

(I preceded it with error. here.)
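Outside of Angular, the same lookup boils down to resolving each error’s messageTemplate against the translation table and interpolating the constraint attributes. Here is a plain-JavaScript sketch (the localizeErrors function and the sample response object are illustrative, not part of Crudlet):

```javascript
// Translation table keyed by 'error.' + messageTemplate, as set up above
var translations = {
    'error.javax.validation.constraints.Pattern.message': 'must match "{{regexp}}"'
};

// Map the server's validationErrors object to localized messages per property
function localizeErrors(validationErrors) {
    var messages = {};
    Object.keys(validationErrors).forEach(function (property) {
        var error = validationErrors[property];
        var template = translations['error.' + error.messageTemplate] || error.messageTemplate;
        // interpolate {{placeholders}} from the constraint attributes
        messages[property] = template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
            return error.attributes && key in error.attributes ? error.attributes[key] : match;
        });
    });
    return messages;
}

// Example: the validation error response shown above
var response = {
    validationErrors: {
        name: {
            attributes: { regexp: '[A-Za-z ]*' },
            constraintClassName: 'javax.validation.constraints.Pattern',
            invalidValue: 'Name not allowed!!',
            messageTemplate: 'javax.validation.constraints.Pattern.message'
        }
    }
};

console.log(localizeErrors(response.validationErrors).name);
// → must match "[A-Za-z ]*"
```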

Because the error object returned by the server is a map, we can also use it to conditionally render special error styling, e.g. using Bootstrap’s error style class:

ng-class="{'has-error': errors.amount != null}"

Exceptions

Similar to validation errors, some runtime exceptions will also return a user-friendly error response message. For instance, let’s assume that a Customer has a list of Payments and you try to delete a Customer with a non-empty Payments list:

{
    "error": {
        "detailMessage": "DELETE on table 'CUSTOMER' caused a violation of foreign key constraint 
            'PAYMENTCUSTOMER_ID' for key (1).  The statement has been rolled back.",
        "exception": "java.sql.SQLIntegrityConstraintViolationException"
    }
}

Again, you can catch and display these in the AngularJS view:

<div class="alert alert-danger" ng-show="errorNotFound != null || error != null">
    <ul>
        <li ng-show="error != null">
            {{'error.' + error.exception | translate}}
        </li>
    </ul>
</div>

With appropriate localization:

var translations = {
    ...
    'error.java.sql.SQLIntegrityConstraintViolationException': 'Cannot delete an object which is still referenced to by other objects.',
    ...
};

Because exposing exception details may be a security issue in a real-world production environment, you can suppress this user-friendly exception detail output by setting the RestfulExceptionMapper#RETURN_EXCEPTION_BODY boolean flag to false.

For a complete example, please take a look at the example application. It also shows you how to easily implement a CrudResource for nested resources.

Conclusion

With Crudlet, I feel like I finally have a safe starting point to build CRUD-based REST-to-SQL applications. In fact, CRUD is an ideal match for the REST pattern, and is a best-practices-compliant foundation for an application architecture.

As such, Crudlet is especially useful to kickstart an AngularJS / Restangular project (or really any web service client) where you’d like to concentrate on trying out / building front-end logic, assuming the database backend is “just there”, working as expected.

A lot of my experience, as documented in other blog posts, has actually inspired this framework:
In fact, the demo use case is copied from my AngularJS – MongoDB article. It’s interesting to see that with a minimal convention-over-configuration compliant Java EE server, we can actually overcome the restrictions of a direct MongoDB access I pointed out in that article.

(Just as a side note, I also reused the same original HTML markup, but I enhanced its original version with Bootstrap component styling using CrudFaces’ auto-styling abilities.)

Of course, the Crudlet implementation at its current stage is still very rough, and I hope to find time to make it more robust and actually production-ready in the near future. In the meantime, please don’t hesitate to let me know whether you find this project useful; please leave any improvement suggestions in the comments section below.

Again, feel free to check out the source code of the complete example application built on top of Crudlet as well.

Update February 26, 2016: Crudlet 0.2 is now officially released.