November 30, 2015

December 2015: Taking a break


Dear reader,

Thank you for your interest in my blog. I started writing articles in March of this year, and I’m happy to find that they actually seem to attract general interest. Having posted more than twenty in-depth articles about Java, Groovy, JavaScript, Agile and related topics, I’ve decided to take a break during December.

I will probably still work on some of my projects during that time, and I may still make occasional updates to existing articles, but I will not post new articles until the beginning of the year 2016. You can follow me on my GitHub and Twitter accounts if you’d like to get updated on what I’m currently working on.

Again, thank you very much for your continued interest in this blog. Feel free to post comments on any of my blog posts; I will read them all. I wish you happy holidays, and I hope to see you again in 2016.

November 29, 2015

10 common Scrum misconceptions (part 2 of 2)


Scrum provides clear guidance to organize work and manage the project or the development process

This is another very common misconception which hits many newly established Scrum Teams in organizations with no prior Scrum experience. As I wrote at the beginning of this article, Scrum’s apparent simplicity and its small set of rules may lead to the wrong assumption that the framework is easily established and that it’s hard to go wrong if you follow a few simple rules.

However, Scrum is not a software development methodology, but a framework. The difference is that Scrum does not provide specific software development processes or practices. This is advantageous in that it is flexible enough to be used for many purposes, and you are free to include any accompanying practice of your choice in the framework; a common example is the Extreme Programming (XP) methodology. The downside is that Scrum does not intend to help you with everyday software development and process management tasks. Thus, even if you apply every single one of Scrum’s rules perfectly, your development process can still go terribly wrong.

Over the years, many useful techniques have been established to facilitate Scrum’s central processes, such as the famous “Planning Poker” for workload estimates, and the “Burndown Chart” for Sprint forecasts. (Yes, not even Burndown Charts are officially part of the definition of Scrum!)

Thus, one common error for Scrum rookies is to kickstart full-blown Scrum on day X, hoping to get productive right away. Typically, near the end of Sprint 1, they start panicking as they realize they have essentially no idea how to run their Scrum processes and events and that they have been left in a vacuum. Of course, one might say that this is actually the Scrum way: they will learn from their mistakes, establish their own rules, and eventually get better, Sprint by Sprint. While this is theoretically right, it is not a very economical solution.

Personally, I would suggest two main measures to aspiring Scrum Teams. The first is to employ an experienced Scrum expert’s coaching services throughout the first few Sprints. This is a person who is at least Scrum-certified and has ideally served as Scrum Master on multiple Scrum projects. He will not take the role of the Scrum Master. Instead, he will stay at the Scrum Team’s service as an external consultant, and he will especially teach the Scrum Master how to fulfill his role. He will accompany the Team through its first Scrum events, give tips, and point out potential problems. After the Team and the Scrum Master have gained enough confidence, the coach leaves the Team.

My second suggestion is starting off the project with a “Sprint 0”. In this initial Sprint, the Team behaves as if the project had fully started, but primarily uses the time to establish its processes and try out Scrum practices. Note that this time is not wasted! There are typically many things which can be done weeks before a project actually starts: development tools need to be set up, document templates need to be prepared; maybe you can work on some technical proof-of-concept tryouts, or some requirements engineering has already started. You can take these activities and turn them into an actual Sprint, with the desired outcome (e.g. IDE set up, initial requirements gathered) filling the Sprint Backlog. This is an excellent, if somewhat relaxed, way to start a Scrum project and try out all those new Scrum processes and practices!

Scrum is not applicable for fixed price projects

This topic has been discussed at length in many articles. This post on the scrumalliance.org website suggests that it’s important to distinguish which parts of a project actually are “fixed” in order to assess Scrum’s applicability. For typical “fixed price” projects, it’s indeed implied that not only the price (cost, resources), but also the time to finish as well as the scope (quantity of functionality) are “fixed”, speaking in classical project management terms. But other project configurations are possible as well, so let’s take a closer look at these possibilities.

Scrum “by default” works with fixed time (based on the Sprint as a time-box, and with a pre-negotiated total number of Sprints), fixed costs, and fixed quality (the Definition of Done for each task strictly defines the quality). This does indeed work very well because for the customer, the three critical factors time, price, and quality are fixed. The only moving part is thus the scope, as defined by the Product Backlog. It is therefore of utmost importance for the customer to sort the Product Backlog by priority such that high-priority items get “done” first, before “time” or “costs” are consumed by the fixed “quality”. This is a legitimate contract model for agile projects (with the option to extend the project runtime later to build in more scope).

Fixing everything but the time (the number of Sprints) is possible as well. In this model, essentially, the project team works, Sprint by Sprint, until the customer says “Stop”. In practice, there is the well-known variant known as “money for nothing / changes for free” which especially takes into account the desire to lock down costs as well. This model is guided by three simple rules:
  • Before project start, a total cost estimate is made and these costs are accepted by both parties as fixed.
  • During the project, the customer (Product Owner) is free to make changes (not additions!) to the scope at will (within the Scrum process), thus allowing him to react to new developments and lessons learnt from earlier Sprints. This is the “changes for free” part.
  • If the Development Team happens to satisfy the Product Owner’s expectations earlier than the negotiated total runtime of the project, the Product Owner may choose to halt development at that point, and is usually charged a fee according to the remaining estimated runtime. This allows the customer to apply an 80/20 rule for cost efficiency. This is the “money for nothing” part.
This is truly an ingenious way to build a software development contract: it satisfies both customer and developer, and gives them maximum flexibility. It was established by Scrum co-founder Jeff Sutherland. This is thus another legitimate contract model for agile projects.
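As a worked example of the “money for nothing” clause, here is a minimal sketch of the early-termination arithmetic. The 20% fee rate is an assumption commonly associated with Sutherland’s model; the concrete rate, like all figures below, is negotiated per contract and is not part of Scrum itself:

```java
// Sketch of the "money for nothing" early-termination fee: the customer may
// stop the project early and pays a fee on the remaining contract value.
public class EarlyTermination {
    static double terminationFee(double costPerSprint, int totalSprints,
                                 int completedSprints, double feeRate) {
        // Value of the Sprints that will no longer be delivered
        double remainingValue = costPerSprint * (totalSprints - completedSprints);
        return remainingValue * feeRate;
    }

    public static void main(String[] args) {
        // 20 Sprints negotiated at 50,000 each; the Product Owner stops after 15,
        // judging the remaining backlog items not worth their cost (80/20 rule).
        double fee = terminationFee(50_000, 20, 15, 0.20);
        System.out.printf("Remaining contract value: %,.0f, termination fee: %,.0f%n",
                50_000.0 * 5, fee);
    }
}
```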

Now, let’s take a look at the case mentioned at the outset, where costs, time, and scope are all fixed. This is indeed an unfortunate situation, as we can clearly see that the only movable part here is software quality. From a Scrum perspective, this means that there is no fixed “Definition of Done”, as the Development Team can decide to ignore it in order to save costs, time, or scope. This of course is absurd, and the only expected outcome is that both the customer and the developers will end up frustrated (given that developers wish to build high-quality products).

Unfortunately, throughout my career, I’ve seen quite a few projects starting out in this latter configuration, or even worse, with a management which actually expects to fix costs, time, scope, and quality at the same time. Of course, basic project management knowledge teaches us that this is utopian thinking. Actually, this wouldn’t work regardless of the project management framework applied. As the scrumalliance.org article referred to above states: this is “a recipe for a death-march project”.

Of course, this realization has to come well before project start, at the contract negotiation level. Otherwise, development teams get stuck with a project which cannot possibly succeed, leaving developers and customers dissatisfied. There are indeed great contract models for agile projects, especially for fixed price projects, so please don’t choose one which is doomed to fail.

Scrum is not applicable for big projects

This is yet another highly disputed topic: Does Scrum scale for bigger projects? Short answer: it does.

The Scrum Guide recommends Development Team sizes between three and nine members. The reasoning behind this best practice is also provided (and nicely illustrated) by the Scrum DZone refcard (“Scaling” section): “Communication pathways increase geometrically with team size”. This refcard also explains the most common strategy for dealing with big Scrum Teams: the famous “Scrum of Scrums”, where the team is divided into sub-teams which work according to Scrum’s rules and synchronize with each other via a delegate sent to the daily “Scrum of Scrums”. For a more complete adaptation of these principles, you may also want to take a look at the Nexus framework or the LeSS framework, ready-to-use implementations of business-proven large-scale Scrum.
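The refcard’s “communication pathways” claim can be quantified with the standard pairwise-connection formula n·(n−1)/2 (the team sizes below are just illustrative):

```java
// Number of pairwise communication pathways in a fully connected team of n
// members: each member can talk to each of the other n-1 members, and each
// pathway is counted once, hence n * (n - 1) / 2.
public class Pathways {
    static int pathways(int teamSize) {
        return teamSize * (teamSize - 1) / 2;
    }

    public static void main(String[] args) {
        for (int n : new int[] {3, 9, 12, 20}) {
            System.out.printf("team of %2d -> %3d pathways%n", n, pathways(n));
        }
    }
}
```

A team of 9 already has 36 pathways; at 20 members there are 190, which makes it clear why one-to-one communication stops scaling and sub-teams become necessary.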

The important thing is to realize that you are in fact obliged to apply an appropriate strategy if you want to run Scrum with Development Team sizes outside the range recommended by the Scrum Guide. Otherwise, the execution of Scrum may be severely impeded or eventually fail.

Scrum really builds on the effectiveness of one-to-one communication, and this just doesn’t work with a dozen people or more. This shows most prominently in the conduct of the Daily Scrum, where time is wasted because people are forced to discuss topics which don’t concern their daily work.

Scrum experts will clearly advise that when dividing a big team into sub-teams, you should keep these sub-teams cross-functional, thus forming so-called “feature teams”. However, I have observed in practice that this is especially challenging for organizations which are new to Scrum, as they typically consist of individual departments, split technically (e.g. a requirements engineering team, a Java team, a database team), and breaking up and redistributing these well-oiled teams is a difficult step. In this situation, I would really recommend starting off with these technical teams as sub-teams, given that their interfaces are clearly defined through solid project-wide Definitions of Done and that information is shared freely between the teams using a “Scrum of Scrums” approach.

This is still much better than trying to squeeze a clearly oversized development team into a “Scrum container”, sacrificing the framework’s efficiency and flexibility. Over time, communication flow will naturally reveal opportunities to move to a more feature-centric, interdisciplinary sub-team organization.

One important limitation to this approach, as I would recommend it, is to not isolate a requirements engineering team, as this would be a functional rather than a technical division, and you risk your “requirements engineering team” becoming the pseudo-Product Owner for the other sub-teams, building a classical “ivory tower” architecture. Instead, it’s important that requirements engineers and software developers work closely together to build solutions. Maybe your requirements engineers could even specialize in working with a UI, a service interface, or a backend team.

A Scrum project does not need, or does not allow (broad) documentation

This seems to be a common misconception about Agile in general, and it is derived from a misunderstanding of the Agile Manifesto’s preference for “working software over comprehensive documentation”, disregarding the important qualification that “while there is value in the items on the right, we value the items on the left more.” Also, it may seem that when compared with heavyweight process models such as RUP, Scrum in particular requires only a few written artifacts.

Hence, introducing Agile or Scrum has been taken as a convenient excuse to lessen documentation effort, especially in the matter of requirements, decision making, and source code documentation.

And this is of course very wrong. Scrum isn’t a magical tool providing full efficiency with only half the effort. As described earlier, it’s not actually a tool at all: it’s a framework, and you can and must run your own tools and processes within it in order to make it effective.

As documentation is a vital part of software development, it’s only logical and good engineering practice to stick to it where sensible, regardless of the process management framework applied. That famous line from the Agile Manifesto really means, in short, to get rid of documentation tasks which do not add value to the end product.

Actually, as documenting is the only way to really fix an agreement, an idea, or an observation for future reference, it is an important part of Scrum as well: Scrum is all about revisiting and reassessing former decisions, and in order to do that, they must be recorded bindingly. This especially applies to:
  • Whatever is part of the contract with the Product Owner
  • The Product Backlog / Sprint Backlog and the Definition of Done for every Product Backlog item
  • Decisions concerning processes and tools
  • Technical decisions
  • Sprint retrospective outcome
As for the individual Product Backlog items, it’s another common myth that agile processes imply replacing classic use cases with user stories to describe individual elements of a software system. Whilst it is true that agile methodologies have led to the widespread use of user stories as a tool for development planning, they aren’t designed to replace use cases, and you are free to use whatever tool you feel most comfortable with when applying Scrum, as the Scrum Guide doesn’t mention either of them.

In fact, these tools have different purposes: whilst use cases are a tool for modeling software functionality (business requirements perspective), user stories are designed to plan feature development (business value and technical perspective), as explained in more detail by Mike Cohn.

In a real world project, these tools can be efficiently used in conjunction:
  • Top-down: Define the use case and split it in different (e.g. technical) user stories
  • Bottom-up: Start with individual user stories as “slices” of an “epic” which as a whole describes a complete use case.
So if you feel comfortable with use cases, start off by defining them, then break them into Product Backlog items, which will essentially yield user-story-like slices. However, because we value “working software over comprehensive documentation”, note that use case documentation should be as lightweight as possible, so get rid of diagrams and complicated sequence hierarchies unless they add value.

The Sprint Backlog can be modified freely throughout the Sprint

No, it cannot. The Scrum Guide does indeed allow the Sprint Backlog to be updated throughout the Sprint as more is learned, but with the important constraint that “only the Development Team can change its Sprint Backlog during a Sprint” whilst “scope may be clarified and re-negotiated between the Product Owner and Development Team”.

This means that whilst the Product Owner can give input to the current Sprint planning, it’s at the Development Team’s discretion to apply actual changes to the current Sprint Backlog.

Actually, this is an essential prerequisite for the Sprint to remain a time-boxed unit, which in turn is vital to ensure traceability: if the planning were allowed to change freely, initial planning and observed effort could not be compared, which would impede the all-important empirical learning process.

This is typically violated in projects with a very unclear Product Owner role, e.g. if a “project manager” takes the role of the Product Owner by dictating his own development plans. He may then wish to apply “last minute changes” even during development in order to reinforce control over the development process. This is clearly neither Agile nor Scrum.

Conclusion

If there’s one important piece of advice I would give to aspiring Scrum users, it’s this: it’s the Scrum Guide which describes what Scrum is; everything which is not described in the Scrum Guide is not part of Scrum (although it can be helpful), and everything which contradicts the Scrum Guide impedes Scrum from working properly. Verifying Scrum compatibility is thus as easy as reading the 16-page guide and comparing it with its application in the project in question. Both seminars and coaching may be useful, but first of all, your team has to agree on what Scrum is, and that is what the Scrum Guide provides.

Furthermore, in order for Scrum to succeed, it’s vital that every member of the Scrum Team acknowledges the value of its underlying mindset, namely shared effort, collaboration, transparency, and ever-ongoing (self-)improvement. If your team does not agree on these values, Scrum will not suit you.

Learning Scrum works like learning any tool: study it, use it, and only if experience shows that the way you were taught to use it doesn’t work in your situation should you start thinking about changing it. And, as always, applying modifications voids the warranty.

I hope this article has helped you learn more about how to apply Scrum in practice, and how to overcome issues which arise from its improper application and from common misconceptions.

Thank you for your interest in this article. Please feel free to share your thoughts and criticism in the comments section below, or mention any other common Scrum misconceptions I may have missed.




10 common Scrum misconceptions (part 1 of 2)



Throughout my career, I’ve seen many teams struggle to apply Scrum or Agile practices and ultimately failing, and for the most part, these struggles come from common misconceptions about what the framework is, and how to apply it. In this article, I’d like to address the most important Scrum misconceptions I’ve met, and how to handle them. Please take your time. This is a long read.


I have seen more than one software project where the understanding of what Scrum is and how it works diverged considerably between the organization’s management, the project’s designated Scrum Master (in this situation, typically without Scrum certification), and the actual definition of Scrum. An example situation is illustrated by the graph at the right. In this case, typically a mixture of what management and the Scrum Master take to be “Scrum’s definition” is applied (let’s call it the “violet belt”). The further apart these circles lie, the worse the situation presents itself, typically resulting in an improper implementation of Scrum and its eventual failure. Why would this happen? Is Scrum such a complex framework?

Scrum is sometimes described as “easy to learn, but difficult to master”. Indeed, its concepts and its set of rules, as established in the Scrum Guide, are all simple and few in number – the Guide being only 16 pages long. In my experience, this has led to the false assumption that introducing the framework to a project is simple and following its rules is easy, requiring only few adaptations of existing processes. Unfortunately, this is very wrong!

Applying Scrum means introducing a whole new mindset, based on transparency and collaboration. It is a disruptive framework which aims to question existing structures and processes, thus clearing the way for ever-ongoing improvement and adaptation in how the team works together to accomplish its goals.

Scrum has been developed by some of the most experienced minds in the software industry, and has proven highly advantageous for some of the most successful IT companies. Those, however, who are not ready to share and to adapt will fail to successfully apply this agile framework.

Over the years, some misconceptions about Scrum have become established, probably backed by failed Scrum experiences. These misconceptions, as I will show, often arose from improper use of the framework or an incomplete understanding of its underlying concepts.

In this article, I’d like to address some of the most wide-spread misconceptions; I hope it will help your project to keep its “Scrum circles” closely together:

Introducing Scrum solves any problems in your project / organization

Not at all. You will not solve any problems solely by introducing Scrum into your project or organization. Instead, due to its disruptive nature, Scrum will help you maximize transparency and thus identify potential problems on a methodological, organizational, or communication level; it is then up to you, as the entire Scrum Team, to address these problems. Typical misuse of Scrum includes modifying the framework in a way which hinders it from exposing potential problems in the organization, which then stay hidden instead of being addressed transparently.

As Scrum intentionally does not provide any means to solve problems in a project, it must not be blamed for not doing so.

Let’s take the example of a team which is not able to deliver functionality in time. Scrum is introduced in order to “solve” this problem. After its introduction, the team’s progress throughout an iteration is transparently shown on a burndown chart. This information can be used by the team to identify potential bottlenecks, for example. If, however, the team fails to interpret this information, or even worse, the burndown chart is manipulated in a way to convince the customer, or management, that the project is doing better than it actually is, the project will head for failure regardless of its underlying methodology.
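To make the burndown chart’s information concrete, here is a minimal sketch (all numbers are made up): remaining work recorded each day is compared against an ideal linear decrease, and any day where the actual value sits above the ideal line flags a potential bottleneck.

```java
import java.util.List;

// Minimal burndown illustration: remaining story points per elapsed day of a
// 10-day Sprint, compared against the ideal straight-line burndown.
public class Burndown {
    // Ideal remaining work on a given day, assuming a constant burn rate
    static double idealRemaining(double totalPoints, int sprintDays, int day) {
        return totalPoints - (totalPoints / sprintDays) * day;
    }

    public static void main(String[] args) {
        double total = 40;
        int sprintDays = 10;
        // Actual remaining points, one entry per day (day 0 = Sprint start)
        List<Double> actual = List.of(40.0, 38.0, 37.0, 37.0, 33.0, 30.0);
        for (int day = 0; day < actual.size(); day++) {
            double ideal = idealRemaining(total, sprintDays, day);
            String trend = actual.get(day) > ideal ? "behind" : "on track";
            System.out.printf("day %d: actual %.0f, ideal %.0f -> %s%n",
                    day, actual.get(day), ideal, trend);
        }
    }
}
```

The value lies entirely in the team’s interpretation of this data, which is exactly why manipulating the chart defeats its purpose.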

As a rule of thumb, if a Scrum project fails to reach an iteration’s goal, transparency should be further increased, not decreased, and team-wide communication should be improved, searching for actions towards improvement.

Scrum can or should be partially applied or tailored to your project's / organization's needs

No, this is wrong, and the Scrum Guide expresses this very clearly: “Scrum’s roles, artifacts, events, and rules are immutable and although implementing only parts of Scrum is possible, the result is not Scrum.” Through the experience of its creators, Scrum has been carefully designed to provide a robust framework for a highly productive process workflow. If you alter any of its rules, you risk lessening its effectiveness, or even introducing a counter-productive effect.

As mentioned earlier, achieving mastery in Scrum is hard, and it is so because although it consists of rather simple rules and elements, those play together in a complex manner, and only as a whole are they effective. Choosing to omit any part is like trying out a new recipe but leaving out a crucial ingredient just because you feel it doesn’t fit in.

Unless you are a software development process expert, you should not change what you don’t know, not even a small thing. Scrum is a well-established industry standard, and if you really think you could improve its design, you should probably write an article about it, because process engineers all over the world would be interested in it. Please leave a link in the comments section below.

In that same category falls the infamous notion of the “ScrumBut”, as in “We use Scrum, but...” There’s even a section about that practice on the Scrum.org website, so does that make it… legitimate? Not actually, as the above citation from the Scrum Guide forbids calling any incomplete implementation of the framework “Scrum”. But does that matter? Can’t you create a good process model, based not on Scrum, but on a “ScrumBut”? Well, you can try, but still everything mentioned in this section applies: unless you have an experienced Scrum expert on your team, you just cannot be sure that you are not leaving out a vital part when limiting yourself to a “ScrumBut” approach instead of applying full Scrum. In my personal experience, whenever a “ScrumBut” practice has been introduced, it aimed to hide away some circumstance impeding the proper application of agile practices. In this situation, however, it would be better to identify why Scrum cannot be fully implemented in your organization and either address this problem, or resort to an applicable alternative process framework. Please refer to the next section as well for more details.

Finally, some people will claim that Scrum itself suggests to adapt the framework in line with the Scrum Team’s increased experience, as Scrum promotes “adaptation” as one of the framework’s key elements to a team’s success. They typically recommend the Scrum Retrospective meeting as a chance for doing so. However, the Scrum Guide forbids this very practice when explaining the Scrum Retrospective: “The Scrum Master encourages the Scrum Team to improve, within the Scrum process framework, its development process and practices to make it more effective and enjoyable for the next Sprint.” As you can see, the Scrum framework is taken as immutable, for the very reasons I explained in this section. However, adaptation can and should be applied to any aspect of the development process itself.

Scrum is right for every project and every situation

This is certainly not true, and Scrum never claims to fulfill this vision.

I can shorten this paragraph considerably by simply citing what Michael James wrote on DZone’s excellent Scrum refcard about Scrum’s applicability: “Scrum is intended for the kinds of work defined processes have often failed to manage: uncertain requirements combined with unpredictable technology implementation risks. These conditions usually exist during new product development.”

As a rule of thumb, you might say that Scrum, just like other agile approaches, is a good fit for projects associated with a significant amount of unpredictability. This situation is also called the “unknown unknowns”: because of the lack of specific experience, no “best practices” exist. This is where Scrum and other agile methodologies, being empirical, iterative approaches, shine. This typically applies to any new custom software development effort.

If, however, you are in a situation which is highly predictable, and you have a well-known and broadly established knowledge base covering all the components involved, a well-defined, Tayloristic approach will suffice. This situation is also called the “known knowns” or the “known unknowns”. In software development, this may apply to a “bug fixing” project where you know exactly and in advance which components of the system are faulty and what exactly you have to do in order to fix them.

Note that unpredictability is actually the sole (main) prerequisite for Scrum’s applicability. All too often I’ve seen other “excuses” applied (e.g. by management) to not introduce Scrum in a project. One of them is having a customer who does not wish to collaborate closely with the Development Team, as discussed below in more detail. However, in this situation, the customer’s behavior actually increases unpredictability, making Scrum even more applicable. Here, Scrum would help your team identify the customer’s behavior as a critical impediment to the development process and find strategies to overcome this constraint. Applying any methodology which simply ignores the disadvantages this situation brings will not solve the underlying problems, such as difficult communication between team and customer and potential misunderstandings.

Scrum forces the customer to become a central part of the development process

This is actually true; however, it is wrong to blame Scrum for imposing this as an additional restriction. As mentioned earlier, this is sometimes even used as an “excuse” not to introduce Scrum, because let’s face it, it’s always easy to blame the customer!

As discussed previously, Scrum is based entirely on proven best practices; if you omit any of Scrum’s ingredients, you are most likely to render it less effective or even harmful. The very same is true for the customer’s role within the development process. Think about it: would you really trust any methodology which does not grant the customer a central place in the development process and stress the importance of a common understanding of requirements? This is especially true for custom software development – the “customer” is even referred to in the name of the thing!

Remember that Scrum aims to maximize transparency. Of course, transparent communication between the Development Team and the customer (the Product Owner) is critical to the project’s success. Other methodologies which do not acknowledge this fact are simply hiding their inability to build up transparent communication with the customer and to adapt to his needs. But their ignorance will not shield you from the negative consequences of this critical omission. Sooner or later, you will have to unveil everything you did to the customer – a situation in which you want to keep every chance to react if you have not yet satisfied his requirements.

How should you deal with customers who are not ready for transparent communication, close collaboration, and shared effort? From the Scrum and Agile perspective, the answer is clear: you do not want to start any business relationship with this type of customer, as it is destined to fail. Unfortunately, this understanding is still not thoroughly present in all executive management positions. It’s the salesman’s job to foster the customer’s understanding of the advantages which come with agile methodologies and his close involvement in the software development process.

Scrum Master is a management position

This is very wrong, and in my experience it’s a very common misunderstanding. In the Scrum Guide, the Scrum Master’s tasks are explicitly outlined, and none of them requires, or implies, a management position. The DZone refcard even clearly adds to the Scrum Master’s job description that “The ScrumMaster does these things (and more) without any authority on the Team. The ScrumMaster does not make business decisions or technical decisions, does not commit to work on behalf of the Team, etc.”

Violating this rule considerably harms two of Scrum’s main ingredients: that the Scrum Team is self-organizing, and that it understands its shared responsibility for the project outcome (in other agile software development methodologies, these practices are referred to as “common (code) ownership”). If there were any distinguished individual (whether you call him the “Scrum Master”, a “project leader”, a “lead developer”, or whatever) who makes decisions and thus takes sole responsibility for parts of the project, those essential ideas would be lost, weakening the Team’s collaboration and individual motivation: why should I care if I have neither responsibility nor authority?

This is a concept especially hard to grasp for classically structured, highly hierarchical organizations. But there has to be someone in charge!, they say. Indeed, there is. It’s the Team in its entirety which is in charge and takes common, shared responsibility for the work of each individual Team member. And this actually makes perfect sense. Software development is not warfare (usually). In the military, the commander with the most strategic experience becomes the leading general. But in software development, there are far too many technologies, layers, and stakeholders involved for a single individual to know the best strategy for everything. Scrum acknowledges this fact and thus gives responsibility to the Team as a whole, which will organically yield its leading minds through its self-organizing behavior.

The Scrum Master, to round off this section, will actually help the Team by facilitating Scrum events and removing impediments.

Nevertheless, there is absolutely space for a “management” position in an organization whose development departments act as Scrum Teams, but management will not be part of the Scrum Team. Under no circumstances will management take the role of the Scrum Master, as this would result in a conflict of interest. In real-world projects, management will either act as a proxy for the customer, thus taking the role and responsibility of the Product Owner, or, preferably, it will manage the Team’s surroundings, with the actual Scrum Master as the main communicator between management and the Scrum Team.



November 22, 2015

A naming convention for domain-driven programming



In this blog post I’d like to present a quite simple naming convention for identifiers which is enormously helpful for building true domain-driven, object-oriented structures. Whenever I see this convention violated in practice, it usually comes with poor, procedural design and code which is hard to read.

The naming convention I present here proposes that every identifier’s name be domain-dependent, “domain” referring to its context. I am certain that the rigorous application of this naming convention increases code readability and maintainability, and that it accords especially well with domain-driven (DD) design and object orientation. I will present some Java code examples here, but I think the convention is just as applicable to any object-oriented or even procedural programming language.

Identifiers must be unique. The compiler enforces that.

However, this DD naming convention proposes that an identifier should be unique only within its scope, its domain. Let’s start with an example.

Method variables

Let’s assume callPersonSaveService(…) is a method in a strongly service oriented interface:
public class PersonClient {
    private PersonService personService;

    public String callPersonSaveService(Person person) {
        PersonSaveRequest saveRequest = new PersonSaveRequest();
        saveRequest.setPerson(person);
        PersonSaveResponse saveResponse = personService.savePerson(saveRequest);
        return saveResponse.getMessage();
    }
}
Because this method is part of a PersonClient, we can already assume that it handles Persons, right? There’s no need to repeat this information in the variable names. Also, from the method name we can infer that it handles a “save” operation, so there’s no need to repeat that information in the variable naming either.

Actually, repeating any information is a violation of the DRY (don’t repeat yourself) principle. Now consider this DRYed variation of above method:
public String callSaveService(Person model) {
    PersonSaveRequest request = new PersonSaveRequest();
    request.setPerson(model);
    PersonSaveResponse response = personService.savePerson(request);
    return response.getMessage();
}
What have we gained? First of all, readability is increased. Short, more concise variable names are easier to read. Also, I can immediately identify the main actors: Here’s the model, there’s the request, there’s the response. I know from the context, from the business domain, what kind of model etc. is involved in this method. You can say that the context actually is part of the “fully-qualified” name of an identifier: com.mycompany.PersonClient#callSaveService.model.

Okay, but what is the actual benefit? Things get more rewarding as soon as you have to add functionality. Imagine building an equivalent callPersonDeleteService(…) method.

As a very KISS (but not DRY) approach, let’s say we do copy-paste coding.
public String callDeleteService(Person model) {
    PersonDeleteRequest request = new PersonDeleteRequest();
    request.setPerson(model);
    PersonDeleteResponse response = personService.deletePerson(request);
    return response.getMessage();
}
Obviously, it’s much easier to copy existing code when you don’t have to change variable names. Fewer changes mean less work, and fewer potential errors.

But now we can do even better. Applying domain-specific variable naming helps us identify similarities and hence potential abstractions. We can clearly see that both of these methods make a model-based request and return a String message from a response. We can use this information to build a very DRY (but admittedly not KISS) solution: a common interface forms a small abstraction layer and reduces the need for copy-paste coding:
private <T extends PersonRequest> String callService(Person model, T request, 
    Function<T, PersonResponse> serviceCall) {
    request.setPerson(model);
    return serviceCall.apply(request).getMessage();
}

public String callSaveServiceKiss(Person model) {
    return callService(model, new PersonSaveRequest(), personService::savePerson);
}

public String callDeleteServiceKiss(Person model) {
    return callService(model, new PersonDeleteRequest(), personService::deletePerson);
}
Now, both PersonSaveRequest and PersonDeleteRequest inherit from a common superclass PersonRequest, and ditto for the response classes.
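For completeness, here is a self-contained sketch of what such a request/response hierarchy and the generic callService(…) could look like. Note that everything beyond the snippets above (the Person and PersonResponse shapes, the in-memory PersonService stub) is an assumption for illustration, not the actual service interface:

```java
import java.util.function.Function;

public class PersonClientSketch {
    static class Person {
        final String name;
        Person(String name) { this.name = name; }
    }

    // Common superclass carries the shared "person payload" part of every request.
    abstract static class PersonRequest {
        Person person;
        void setPerson(Person person) { this.person = person; }
    }
    static class PersonSaveRequest extends PersonRequest {}
    static class PersonDeleteRequest extends PersonRequest {}

    static class PersonResponse {
        private final String message;
        PersonResponse(String message) { this.message = message; }
        String getMessage() { return message; }
    }

    // In-memory stand-in for the remote service interface.
    static class PersonService {
        PersonResponse savePerson(PersonSaveRequest r) {
            return new PersonResponse("saved " + r.person.name);
        }
        PersonResponse deletePerson(PersonDeleteRequest r) {
            return new PersonResponse("deleted " + r.person.name);
        }
    }

    private final PersonService personService = new PersonService();

    // The DRY abstraction: one method covers every model-based request
    // that returns a String message from its response.
    private <T extends PersonRequest> String callService(Person model, T request,
            Function<T, PersonResponse> serviceCall) {
        request.setPerson(model);
        return serviceCall.apply(request).getMessage();
    }

    public String callSaveService(Person model) {
        return callService(model, new PersonSaveRequest(), personService::savePerson);
    }

    public String callDeleteService(Person model) {
        return callService(model, new PersonDeleteRequest(), personService::deletePerson);
    }
}
```

The neutral names (model, request, serviceCall) are exactly what makes the two service calls collapse into one generic method.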

Of course, this is ridiculously overengineered for this simple case, and this code only stays readable because we use Java 8 method references here.

Still, this example illustrates how applying domain-driven thinking on variable naming helps us identify abstractions, which is even more important when thinking in bigger scale structures.

Class members

Now consider this class representing a CRUD controller for a Person entity, as typically used e.g. as a backing bean for JSF UI views:
public class PersonController {
    private PersonService personService;
    private List<Person> allPersons;
    private Person selectedPerson;
    
    public void initController() {
        setAllPersons(personService.findAllPersons());
    }
    
    public void saveSelectedPerson() {
        if (getSelectedPerson().getId() == null) {
            personService.insertPerson(getSelectedPerson());
        }
        else {
            personService.updatePerson(getSelectedPerson());
        }
    }

    public List<Person> getAllPersons() {
        return allPersons;
    }

    public void setAllPersons(List<Person> allPersons) {
        this.allPersons = allPersons;
    }

    public Person getSelectedPerson() {
        return selectedPerson;
    }

    public void setSelectedPerson(Person selectedPerson) {
        this.selectedPerson = selectedPerson;
    }
}
Do you see the person identifier cluttered all over the code? Now imagine we want to add another controller with the same CRUD functionality but for a Reservation entity:
public class ReservationController {
    private ReservationService reservationService;
    private List<Reservation> allReservations;
    private Reservation selectedReservation;
    
    public void initController() {
        setAllReservations(reservationService.findAllReservations());
    }
    
    public void saveSelectedReservation() {
        if (getSelectedReservation().getId() == null) {
            reservationService.insertReservation(getSelectedReservation());
        }
        else {
            reservationService.updateReservation(getSelectedReservation());
        }
    }

    public List<Reservation> getAllReservations() {
        return allReservations;
    }

    public void setAllReservations(List<Reservation> allReservations) {
        this.allReservations = allReservations;
    }

    public Reservation getSelectedReservation() {
        return selectedReservation;
    }

    public void setSelectedReservation(Reservation selectedReservation) {
        this.selectedReservation = selectedReservation;
    }
}
For a potential reader of this source code (yes, that would be your fellow developer), these two classes at first glance have hardly anything in common; in reality, of course, they are identical except that they serve different entity types.

We immediately realize this as soon as we apply domain-specific naming, e.g. to the PersonController and the PersonService. It then becomes apparent that we can build one unified interface for both Controller and Service functionality, respectively, by using Java’s abstraction facilities, namely inheritance and generics. We can build the functionality once, in a superclass, and building an implementation becomes a one-liner:
public abstract class Controller<T extends Entity> {
    private Service<T> service;
    private List<T> entities;
    private T selectedEntity;
    
    public void initController() {
        setEntities(service.findAll());
    }
    
    public void saveSelectedEntity() {
        if (getSelectedEntity().getId() == null) {
            service.insert(getSelectedEntity());
        }
        else {
            service.update(getSelectedEntity());
        }
    }

    public List<T> getEntities() {
        return entities;
    }

    public void setEntities(List<T> entities) {
        this.entities = entities;
    }

    public T getSelectedEntity() {
        return selectedEntity;
    }

    public void setSelectedEntity(T selectedEntity) {
        this.selectedEntity = selectedEntity;
    }
}

public class PersonController extends Controller<Person> {
    // empty
}
As far as the abstract Controller class is concerned: again, it is clear from the context that a PersonController will deal with Person entities. There’s no need to repeat this information.
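One detail the abstract Controller leaves open is how the Service instance is obtained; in a Java EE container it would typically be injected. As a self-contained sketch (the in-memory Service stub and the abstract getService() hook are assumptions of mine, not part of the original design), the wiring could look like this:

```java
import java.util.ArrayList;
import java.util.List;

public class ControllerSketch {
    interface Entity { Long getId(); }

    static class Person implements Entity {
        Long id;
        public Long getId() { return id; }
    }

    // Minimal in-memory stand-in for the generic service layer.
    static class Service<T extends Entity> {
        private final List<T> store = new ArrayList<>();
        List<T> findAll() { return new ArrayList<>(store); }
        void insert(T entity) { store.add(entity); }
        void update(T entity) { /* no-op for the in-memory stub */ }
    }

    abstract static class Controller<T extends Entity> {
        private List<T> entities;
        private T selectedEntity;

        // Hook for subclasses (or the container) to supply the concrete service.
        protected abstract Service<T> getService();

        public void initController() { entities = getService().findAll(); }

        public void saveSelectedEntity() {
            if (selectedEntity.getId() == null) {
                getService().insert(selectedEntity);
            } else {
                getService().update(selectedEntity);
            }
        }

        public List<T> getEntities() { return entities; }
        public void setSelectedEntity(T selected) { this.selectedEntity = selected; }
    }

    // The concrete controller stays a (near) one-liner.
    static class PersonController extends Controller<Person> {
        private final Service<Person> service = new Service<>();
        @Override protected Service<Person> getService() { return service; }
    }
}
```

With CDI, the getService() override would simply return an @Inject-ed field, exactly as in the CrudFaces example further down.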

As you can see, just by using sensible variable naming, we have identified a way to build a simple yet powerful CRUD abstraction over all layers. (If you’re interested in seeing a true, complete CRUD abstraction for JSF in action, please take a look at my recent CrudFaces implementation.)

But the applicability of these best practices doesn’t stop here.

HTML / Facelets markup

The context-sensitive naming policy described above should also be applied to markup source code artifacts, especially since they do not offer the same degree of abstraction as e.g. the Java language does, which typically leads to error-prone copy-paste coding. In a typical Java EE / JSF stack, this is especially true for XHTML / Facelets files.

Consider this example:
<h:body>
  <h:dataTable value="#{personController.models}" var="person">
    <h:column>
      <f:facet name="header">
        <h:outputText value="Id"/>
      </f:facet>
      <h:outputText value="#{person.id}"/>
    </h:column>
    <h:column>
      <f:facet name="header">
        <h:outputText value="Name"/>
      </f:facet>
      <h:outputText value="#{person.name}"/>
    </h:column>
  </h:dataTable>
  <h:commandButton value="Back" action="#{personController.back}"/>
</h:body>
Here, we define a simple table which renders, for each person item in personController.models, its id and name.

The iteration variable is named person here, although this violates the context-dependent naming policy: it is clear from the context of the iteration (over personController.models) that the element is of the person type. This again makes the code harder to read, and harder and more inconsistent to reuse (both by copy-pasting and by refactoring into a higher abstraction). Things are naturally worse in XHTML code due to the typically limited IDE tool support.

Now consider this improved example code:
<h:head>
  <ui:param name="controller" value="#{personController}"/>
</h:head>
<h:body>
  <h:dataTable value="#{controller.models}" var="item">
    <h:column>
      <f:facet name="header">
        <h:outputText value="Id"/>
      </f:facet>
      <h:outputText value="#{item.id}"/>
    </h:column>
    <h:column>
      <f:facet name="header">
        <h:outputText value="Name"/>
      </f:facet>
      <h:outputText value="#{item.name}"/>
    </h:column>
  </h:dataTable>
  <h:commandButton value="Back" action="#{controller.back}"/>
</h:body>
We changed two things here:
  • First, as described above, the iteration variable is now neutrally named “item”. Its true nature is implicitly shown by its context.
  • We also created an alias for the controller, enabling the use of the neutral controller variable to refer to it. This is a best practice especially when your MVC architecture implies a 1 : 1 relationship between view and controller: here, the view becomes the context within which the controller is defined. In the person view, there naturally lies the person controller. It’s thus again harmful to repeat that information. This is even more severe if the controller is referred to many times throughout the view.
This code is now consistently set up and ready for reuse, either to facilitate easy copy-pasting or, preferably, using abstraction techniques such as composite components.

Conclusion

I personally believe that opting for context-dependent identifier naming is a vital step towards making source code more modular, and towards identifying code duplication and abstraction potential. As professional engineers, it’s our job to build abstractions, and to strive for DRY (without breaking KISS). This is what ultimately leads to clean, S.O.L.I.D. code.

I decided to dedicate a whole blog post to this topic because I’ve seen even experienced developers struggling to recognize the value of this simple yet powerful naming convention.

So, do you agree with what I wrote in this article? Please feel free to raise your own opinion or share your experience with this matter in the comments section below.

November 15, 2015

CrudFaces: JSF best practices out-of-the-box



Do you feel like every time you start a JSF project, you have to reinvent the wheel to realize the same CRUD operations again? Do you feel like you waste precious development time to build JSF / PrimeFaces hacks and workarounds rather than developing actual business logic? Then you should take a close look at CrudFaces.

The good parts

Use PrimeFaces with Bootstrap

Activate it with a single faces-config.xml config:
<factory>
    <render-kit-factory>
        ch.codebulb.crudfaces.renderkit.StyleClassChangeRenderKitFactory
    </render-kit-factory>
</factory>

A truly responsive, implicit grid form layout

<cf:formLayout groups="2" styleClass="clearfix" style="margin-bottom: 12px;">
    <p:outputLabel for="firstName" value="#{I18N['firstName']}"/>
    <p:inputText id="firstName" value="#{formLayoutController.entity.firstName}"/>
    <p:message for="firstName"/>
    <p:outputLabel for="lastName" value="#{I18N['lastName']}"/>
    <p:inputText id="lastName" value="#{formLayoutController.entity.lastName}"/>
    <p:message for="lastName"/>
    <p:commandButton value="#{I18N['save']}" update="@form"/>
</cf:formLayout>

Un-hide a component without need for a parent component

<p:commandLink id="button" a:stealth="#{not stealthModeController.shown}" 
    class="btn btn-default">
    Destroy the world!
</p:commandLink>
<p:commandButton value="#{stealthModeController.controlText}"
    actionListener="#{stealthModeController.switchShown()}"
    process="@this" update="@this button"
/>

Make a request scoped backing bean flash scoped

<cf:formLayout>
    <p:outputLabel for="textFlash" value="@RequestScoped bean text:"/>
    <h:outputText id="textFlash" value="#{flashBeans.requestScopedBean.text}"/>
    <p:commandButton value="Update text and reload"
     action="#{flashBeans.requestScopedBean.updateText()}" ajax="false"/>
</cf:formLayout>

The best parts

Apart from the independently useful components presented above, plus some additional components which come bundled with CrudFaces, the project mainly addresses building best-practices-compliant CRUD (Create, Read, Update, Delete) applications. It does so by providing a bunch of simple, lean Java base classes which you can derive from to build a CRUD application in about 30 lines of Java code:
@ViewScoped
@Named
public class CustomerController extends CrudTableController<Customer> {
    @Inject
    private CustomerService service;
    
    @Override
    protected CrudService<Customer> getService() {
        return service;
    }
}
public class CustomerService extends CrudService<Customer> {
    @Override
    @PersistenceContext
    protected void setEm(EntityManager em) {
        super.setEm(em);
    }

    @Override
    public Customer create() {
        return new Customer();
    }

    @Override
    public Class<Customer> getModelClass() {
        return Customer.class;
    }
}
Most importantly, they support building RESTful JSF applications based on PrimeFaces components like <p:dataTable>, ready for use with PrettyFaces.

CrudFaces is not a replacement for PrimeFaces nor OmniFaces. Instead, it builds on top of them in order to create a smooth, modern, robust tech stack.

With more still to come

Please take a look at the project’s GitHub page and the live showcase demo to see which functionality is currently provided.

CrudFaces 0.1 is now released. However, a lot of its functionality is at a very early stage, and some of its features really are barely more than proofs of concept or early beta versions. I hope to add more functionality to existing components, and I have lots of additional features planned for future versions to make JSF development even easier and more enjoyable. I want this project to become the culmination of my experience from 5+ years of JSF development.

I hope this project has awakened your interest. Please feel free to share your opinion, your thoughts or any questions in the comments section below.


October 17, 2015

JPA query languages compared (part 4 of 4)

Pages: 1 2 3 4

Criteria API queries

The Criteria API is the only part of the JPA standard which allows for fully dynamic query building as well as (optionally) for true strongly typed query building.

Note that Hibernate provides its own “Criteria” API as well, which actually inspired the JPA Criteria API and thus shares its general “look and feel”. The Hibernate Criteria API is not discussed in this article: I believe one would either go with the standard (JPA), or look for an API with a completely different design (such as Querydsl, discussed below).

String-based Criteria API

1. Find customer by id:
CriteriaQuery<Customer> query = 
  em.getCriteriaBuilder()
  .createQuery(Customer.class);
Root<Customer> from = query.from(Customer.class);
query.where(em.getCriteriaBuilder()
  .equal(from.get("id"), id));
return em.createQuery(query).getSingleResult();

2. Find by id (generic):
CriteriaQuery<Customer> query = 
  em.getCriteriaBuilder()
  .createQuery(getModelClass());
Root<Customer> from = query.from(getModelClass());
query.where(em.getCriteriaBuilder()
  .equal(from.get("id"), id));
return em.createQuery(query).getSingleResult();
3. Find all customers:
CriteriaQuery<Customer> query = 
  em.getCriteriaBuilder()
  .createQuery(Customer.class);
Root<Customer> from = query.from(Customer.class);
return em.createQuery(query).getResultList();

4. Count all customers:
CriteriaQuery<Long> query = 
  em.getCriteriaBuilder().createQuery(Long.class);
Root<Customer> from = query.from(Customer.class);
query.select(em.getCriteriaBuilder().count(from));
return em.createQuery(query).getSingleResult();
5. Find purchase by customer id:
CriteriaQuery<Purchase> query = 
  em.getCriteriaBuilder()
  .createQuery(Purchase.class);
Root<Purchase> from = query.from(Purchase.class);
Join<Purchase, Customer> join = 
  from.join("customer");
query.where(em.getCriteriaBuilder()
  .equal(join.get("id"), id));
return em.createQuery(query).getResultList();

6. Count purchase by customer id:
CriteriaQuery<Long> query = 
  em.getCriteriaBuilder().createQuery(Long.class);
Root<Purchase> from = query.from(Purchase.class);
Join<Purchase, Customer> join = 
  from.join("customer");
query.where(em.getCriteriaBuilder()
  .equal(join.get("id"), id));
query.select(em.getCriteriaBuilder().count(from));
return em.createQuery(query).getSingleResult();
7. Find distinct product with purchase’s customer’s special condition:
// main query on products
CriteriaQuery<Product> query = 
  em.getCriteriaBuilder()
  .createQuery(Product.class);
Root<Product> from = query.from(Product.class);

// subquery on product ids
Subquery<Long> subQuery = 
  query.subquery(Long.class);
Root<Customer> subFrom = 
  subQuery.from(Customer.class);
Join<Customer, Purchase> joinPurchase = 
  subFrom.join("purchases");
Join<Purchase, Product> joinProduct = 
  joinPurchase.join("products");
subQuery.select(joinProduct.get("id")
  .as(Long.class)).distinct(true);
subQuery.where(em.getCriteriaBuilder()
  .equal(subFrom.get("premium"), true));

query.select(from);
query.where(em.getCriteriaBuilder().in(
  from.get("id")).value(subQuery));
return em.createQuery(query).getResultList();

8. Aggregate product price sum by purchase’s customers:
CriteriaQuery<Tuple> query = 
  em.getCriteriaBuilder().createTupleQuery();
Root<Customer> from = query.from(Customer.class);
Join<Customer, Purchase> joinPurchase = 
  from.join("purchases");
Join<Purchase, Product> joinProduct = 
  joinPurchase.join("products");
query.multiselect(from.get("id"), 
  em.getCriteriaBuilder().sum(joinProduct
  .get("price").as(Double.class)));
query.groupBy(from.get("id"));
List<Tuple> results = 
  em.createQuery(query).getResultList();

Map<Customer, Double> ret = new HashMap<>();
for (Tuple result : results) {
  Object[] arr = result.toArray();
  ret.put(customerService.findById((Long)arr[0]),
  ((Double)arr[1]));
}
return ret;

The String-based version of the Criteria API really is the same as the strongly typed Criteria API (discussed below), but without making use of the so-called entity meta-models, which are generated at compile time and provide type-safe references to entity property paths.

I will not discuss the String-based Criteria API in detail here. Let me just say that if you want to use Criteria queries because you want that extra type safety, you should go all the way and use the entity meta-models to ensure complete type safety. Otherwise, figuratively speaking, it’s like buying a car and then harnessing a horse in front of it.

Please compare above example queries with their equivalent in the next subsection to see how the entity meta-model improves type safety.
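The trade-off at stake can be illustrated with plain Java, outside of JPA entirely (the Customer stub below is hypothetical): a property referenced by its String name fails only at runtime, while a typed reference to the same property would fail at compile time.

```java
import java.util.function.Function;

public class StringlyTypedDemo {
    static class Customer {
        private long id = 42L;
        public long getId() { return id; }
    }

    public static void main(String[] args) {
        // Typed access: a typo like Customer::getIdd would not even compile.
        Function<Customer, Long> byReference = Customer::getId;
        System.out.println(byReference.apply(new Customer()));

        // "Stringly typed" access: the typo "idd" compiles just fine...
        try {
            Customer.class.getDeclaredField("idd");
        } catch (NoSuchFieldException e) {
            // ...and only blows up here, at runtime.
            System.out.println("typo detected at runtime only");
        }
    }
}
```

String-based Criteria paths like from.get("id") sit on the "reflection" side of this divide; meta-model paths like from.get(Customer_.id) sit on the compile-checked side.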

Bottom line:
  • Never use “Stringly typed” Criteria, always go for the strongly typed alternative.

Strongly typed Criteria API

1. Find customer by id:
CriteriaQuery<Customer> query = 
  em.getCriteriaBuilder()
  .createQuery(Customer.class);
Root<Customer> from = query.from(Customer.class);
query.where(em.getCriteriaBuilder()
  .equal(from.get(Customer_.id), id));
return em.createQuery(query).getSingleResult();

2. Find by id (generic):
CriteriaQuery<Customer> query = 
  em.getCriteriaBuilder()
  .createQuery(getModelClass());
Root<Customer> from = 
  query.from(getModelClass());
query.where(em.getCriteriaBuilder()
  .equal(from.get(Customer_.id), id));
return em.createQuery(query).getSingleResult();
3. Find all customers:
CriteriaQuery<Customer> query = 
  em.getCriteriaBuilder()
  .createQuery(Customer.class);
Root<Customer> from = query.from(Customer.class);
return em.createQuery(query).getResultList();

4. Count all customers:
CriteriaQuery<Long> query = 
  em.getCriteriaBuilder().createQuery(Long.class);
Root<Customer> from = query.from(Customer.class);
query.select(em.getCriteriaBuilder().count(from));
return em.createQuery(query).getSingleResult();
5. Find purchase by customer id:
CriteriaQuery<Purchase> query = 
  em.getCriteriaBuilder()
  .createQuery(Purchase.class);
Root<Purchase> from = query.from(Purchase.class);
Join<Purchase, Customer> join = 
  from.join(Purchase_.customer);
query.where(em.getCriteriaBuilder()
  .equal(join.get(BaseModel_.id), id));
return em.createQuery(query).getResultList();

6. Count purchase by customer id:
CriteriaQuery<Long> query = 
  em.getCriteriaBuilder().createQuery(Long.class);
Root<Purchase> from = query.from(Purchase.class);
Join<Purchase, Customer> join = 
  from.join(Purchase_.customer);
query.where(em.getCriteriaBuilder()
  .equal(join.get(BaseModel_.id), id));
query.select(em.getCriteriaBuilder().count(from));
return em.createQuery(query).getSingleResult();
7. Find distinct product with purchase’s customer’s special condition:
// main query on products
CriteriaQuery<Product> query = 
  em.getCriteriaBuilder()
  .createQuery(Product.class);
Root<Product> from = query.from(Product.class);

// subquery on product ids
Subquery<Long> subQuery = 
  query.subquery(Long.class);
Root<Customer> subFrom = 
  subQuery.from(Customer.class);
ListJoin<Customer, Purchase> joinPurchase = 
  subFrom.join(Customer_.purchases);
ListJoin<Purchase, Product> joinProduct = 
  joinPurchase.join(Purchase_.products);
subQuery.select(joinProduct.get(Product_.id))
  .distinct(true);
subQuery.where(em.getCriteriaBuilder()
  .equal(subFrom.get(Customer_.premium), true));

query.select(from);
query.where(em.getCriteriaBuilder().in(
  from.get(Product_.id)).value(subQuery));
return em.createQuery(query).getResultList();

8. Aggregate product price sum by purchase’s customers:
CriteriaQuery<Tuple> query = 
  em.getCriteriaBuilder().createTupleQuery();
Root<Customer> from = query.from(Customer.class);
ListJoin<Customer, Purchase> joinPurchase = 
  from.join(Customer_.purchases);
ListJoin<Purchase, Product> joinProduct = 
  joinPurchase.join(Purchase_.products);
query.multiselect(from.get(BaseModel_.id), 
  em.getCriteriaBuilder().sum(joinProduct
  .get(Product_.price)));
query.groupBy(from.get(BaseModel_.id));
List<Tuple> results = 
  em.createQuery(query).getResultList();

Map<Customer, Double> ret = new HashMap<>();
for (Tuple result : results) {
  Object[] arr = result.toArray();
  ret.put(customerService.findById((Long)arr[0]),
  ((Double)arr[1]));
}
return ret;

As with String-based Criterias, by invoking EntityManager#getCriteriaBuilder(), you get a builder instance to build statically typed queries by invoking methods on the builder or its child nodes, respectively. This is a very different approach to specifying a query in a String. Many builder nodes work with generics.

Moreover, in “strongly typed” mode, you use the entity meta-model to refer to entity property paths. The meta-model is generated from the entity classes at compile time; it is activated by simply adding the modelgen-dependency to the pom:
<dependency>
    <groupId>org.eclipse.persistence</groupId>
    <artifactId>org.eclipse.persistence.jpa.modelgen.processor</artifactId>
    <version>2.6.0</version>
    <scope>provided</scope>
</dependency>
The Criteria API thus makes it easy and safe to build queries at runtime.

Unfortunately, however, its API is rightfully considered cumbersome, unintuitive and inflexible by many developers. Actually, “translating” even quite simple JPQL or SQL queries into their Criteria API equivalents typically leads to a monstrous, almost unreadable blob of code; see the example queries above. This severely harms the real-world usefulness of the Criteria API.

On stackoverflow.com, you’ll find many workaround proposals to abstract the Criteria API and make it easier to work with for everyday problems. However, I think that if you really need a solid, easy-to-use framework to build dynamic queries, you should just pick an existing one. Don’t reinvent the wheel. Read on for a short overview of Querydsl, a very good contender.

Bottom line:
  • If you already have all your optimized SQL / JPQL scripts nicely prepared, don’t waste your time manually “translating” them into Criteria queries. Just use named native queries instead.
  • Unless you need true dynamic queries, always use named queries instead.
  • If you need a means to build very simple dynamic / generic queries in a few places in your code (e.g. example query 2: the famous generic “find all” query you can’t realize with any other JPA query language), you may consider solving it using the Criteria API in order not to introduce an additional 3rd party dependency to the project.
  • If however you have to build many highly dynamic queries, you should use a 3rd party framework with a more reasonable API (as e.g. Querydsl, which is discussed below). But also, you should probably ask yourself whether you really need a highly dynamic solution. You may get a considerable performance boost if you’d stick with static, precompiled queries.

Mysema Querydsl

1. Find customer by id:
QCustomer qCustomer = QCustomer.customer;
return new JPAQueryFactory(em)
  .selectFrom(qCustomer)
  .where(qCustomer.id.eq(id))
  .fetchOne();

2. Find by id (generic):
return new JPAQueryFactory(em)
  .selectFrom(getQModel())
  .where(getQModelId().eq(id))
  .fetchOne();
3. Find all customers:
QCustomer qCustomer = QCustomer.customer;
return new JPAQueryFactory(em)
  .selectFrom(qCustomer)
  .fetch();

4. Count all customers:
QCustomer qCustomer = QCustomer.customer;
return new JPAQueryFactory(em)
  .selectFrom(qCustomer)
  .fetchCount();
5. Find purchase by customer id:
QPurchase qPurchase = QPurchase.purchase;
QCustomer qCustomer = QCustomer.customer;
return new JPAQueryFactory(em)
  .selectFrom(qPurchase)
  .innerJoin(qPurchase.customer, qCustomer)
  .where(qCustomer.id.eq(id))
  .fetch();

6. Count purchase by customer id:
QPurchase qPurchase = QPurchase.purchase;
QCustomer qCustomer = QCustomer.customer;
return new JPAQueryFactory(em)
  .selectFrom(qPurchase)
  .innerJoin(qPurchase.customer, qCustomer)
  .where(qCustomer.id.eq(id))
  .fetchCount();
7. Find distinct product with purchase’s customer’s special condition:
QProduct qProduct = QProduct.product;
QPurchase qPurchase = QPurchase.purchase;
QCustomer qCustomer = QCustomer.customer;
return new JPAQueryFactory(em)
  .selectFrom(qProduct)
  .where(qProduct.id.in(JPAExpressions
    .selectFrom(qCustomer)
    .innerJoin(qCustomer.purchases, qPurchase)
    .innerJoin(qPurchase.products, qProduct)
    .select(qProduct.id)
    .where(qCustomer.premium.eq(true))
  ))
  .fetch();

8. Aggregate product price sum by purchase’s customers:
QProduct qProduct = QProduct.product;
QPurchase qPurchase = QPurchase.purchase;
QCustomer qCustomer = QCustomer.customer;
List<Tuple> results = new JPAQueryFactory(em)
  .from(qCustomer)
  .innerJoin(qCustomer.purchases, qPurchase)
  .innerJoin(qPurchase.products, qProduct)
  .groupBy(qCustomer.id)
  .select(qCustomer.id, qProduct.price.sum())
  .fetch();

Map<Customer, Double> ret = new HashMap<>();
for (Tuple result : results) {
  ret.put(customerService.findById(
  result.get(qCustomer.id)),
  result.get(qProduct.price.sum()));
}
return ret;

Mysema Querydsl is a third party alternative to the JPA standard Criteria API to dynamically build strongly typed queries. It comes with its own entity meta model generation. Take a look at the demo project’s pom.xml file for its dependencies declaration and the entity meta model generation plugin setup.

There are also some other, similar third-party tools. Another popular example is jOOQ. This tool, however, works very differently in that it builds its meta-model out of the database schema rather than the Java entity classes. Also, apparently, its support for JOINs is not yet quite good enough. Iciql, another similar tool, requires its own non-standard annotations on the entity classes.

As you can see from the example queries, Querydsl’s syntax is superior to that of the Criteria API in every aspect: it is much simpler and more concise, and thus easier to learn and maintain.

Unfortunately, I have found that the official documentation has not yet been updated with some breaking changes introduced in the most recent major release, 4.0, making the learning curve steeper than it should be.

Other than that, I consider Querydsl a pragmatic drop-in replacement for the highly inconvenient Criteria API.

As you can see from the demo project source code, I’d consider it best practice to declare the entity meta-model references in a single place, e.g. in the respective entity class, rather than declaring them locally in each method.

Much more than Criteria API, Querydsl also offers great opportunities to apply DRY (don’t repeat yourself) best practices by writing more generic but parameterizable queries. For instance, example queries 3 / 4 could reuse a general purpose method like this:
private JPAQuery<Customer> all() {
    return new JPAQueryFactory(em).selectFrom(qCustomer);
}

@Override
public List<Customer> findAll() {
    return all().fetch();
}

@Override
public long countAll() {
    return all().fetchCount();
}
This example, for instance, wouldn’t be possible with the Criteria API, where the SELECT and SELECT COUNT APIs differ considerably.

Bottom line:
  • If you have to build dynamic queries, you should consider using Querydsl. But again, you should probably ask yourself whether you really need a highly dynamic solution. You may get a considerable performance boost if you stick with static, precompiled queries.

Making a decision

Finally, I’d like to quickly recap what we learnt about these persistence query languages by giving you my recommendation on how to make a technology decision:
  • For CRUD (create / read by id / update / delete) operations
    • Use the EntityManager methods 
  • If you already have all your optimized SQL scripts nicely prepared or if you make use of native SQL constructs such as views and stored procedures
    • Use named native queries. Mind the loss of portability!
  • If you can build static queries from scratch
    • Use named JPQL queries
  • If you only need to build trivial dynamic / generic queries (such as “find all in x”) and portability is key
    • Use strongly typed Criteria API queries
  • If you need to build non-trivial dynamic / generic queries
    • Use Mysema Querydsl

Conclusion

In this blog post, I think I have exhaustively covered the most important techniques to write database queries in a Java EE 7 landscape. To further compact my conclusions from the previous sections, I’d say that named JPQL queries and Mysema Querydsl are the most solid choices for their respective use cases.

The most important advice I would give to any Java EE project however is to clearly define once which techniques will be used and then stick to this decision until it needs to be revised. Keeping the project tech stack clean and well-defined is an important prerequisite for long-term maintainability.

I hope this article helped you gain insight into these technologies. Please use the comments section below to let me know whether you agree or disagree with my conclusions.


Pages: 1 2 3 4

JPA query languages compared (part 3 of 4)

Pages: 1 2 3 4

Named queries

The standard way of working with DB query Strings is to specify them as so-called named queries. As with dynamic queries, both SQL and JPQL commands are supported.

Named native (SQL) queries

1. Find customer by id 2. Find by id (generic)
@NamedNativeQuery(name = Customer.SQL_FIND_BY_ID,
  query = "SELECT * FROM CUSTOMER WHERE ID = ?1",
  resultClass = Customer.class)
...
public static final String SQL_FIND_BY_ID =
  "SQL.Customer.findbyId";
...
return em.createNamedQuery(
  Customer.SQL_FIND_BY_ID, Customer.class)
  .setParameter(1, id)
  .getSingleResult();
(dynamic table names not supported)
3. Find all customers 4. Count all customers
@NamedNativeQuery(name = Customer.SQL_FIND_ALL,
  query = "SELECT * FROM CUSTOMER")
...
return em.createNamedQuery(Customer.SQL_FIND_ALL,
  Customer.class)
  .getResultList();
@NamedNativeQuery(name = Customer.SQL_COUNT_ALL,
  query = "SELECT COUNT(ID) FROM CUSTOMER")
...
return ((Number)em.createNamedQuery(
  Customer.SQL_COUNT_ALL, Long.class)
  .getSingleResult()).longValue();
5. Find purchase by customer id 6. Count purchase by customer id
@NamedNativeQuery(name = 
  Purchase.SQL_FIND_BY_CUSTOMER_ID, 
  query = "SELECT PURCHASE.* FROM PURCHASE 
  INNER JOIN CUSTOMER ON PURCHASE.CUSTOMER_ID=
  CUSTOMER.ID WHERE CUSTOMER.ID = ?1")
...
return em.createNamedQuery(
  Purchase.SQL_FIND_BY_CUSTOMER_ID, Purchase.class)
  .setParameter(1, id)
  .getResultList();
@NamedNativeQuery(name = 
  Purchase.SQL_COUNT_BY_CUSTOMER_ID,
  query = "SELECT COUNT(PURCHASE.ID) FROM PURCHASE 
  INNER JOIN CUSTOMER ON PURCHASE.CUSTOMER_ID=
  CUSTOMER.ID WHERE CUSTOMER.ID = ?1")
...
return ((Number)em.createNamedQuery(
  Purchase.SQL_COUNT_BY_CUSTOMER_ID, Long.class)
  .setParameter(1, id)
  .getSingleResult()).longValue();
7. Find distinct product with purchase’s customer’s special condition 8. Aggregate product price sum by purchase’s customers
@NamedNativeQuery(name = 
  Product.SQL_FIND_FOR_PURCHASE_CUSTOMER_PREMIUM, 
  query = "SELECT PRODUCT.* FROM PRODUCT 
  WHERE PRODUCT.ID IN (SELECT DISTINCT PRODUCT.ID 
  FROM PRODUCT INNER JOIN PURCHASE_PRODUCT 
  ON PURCHASE_PRODUCT.PRODUCTS_ID=PRODUCT.ID 
  INNER JOIN PURCHASE 
  ON PURCHASE.ID=PURCHASE_PRODUCT.PURCHASE_ID 
  INNER JOIN CUSTOMER 
  ON PURCHASE.CUSTOMER_ID=CUSTOMER.ID 
  WHERE CUSTOMER.PREMIUM = 1)",
  resultClass = Product.class)
...
return em.createNamedQuery(
Product.SQL_FIND_FOR_PURCHASE_CUSTOMER_PREMIUM,
  Product.class)
  .getResultList();
@NamedNativeQuery(name = 
Product.SQL_SUM_PRICE_BY_PURCHASE_CUSTOMER,
  query = "SELECT CUSTOMER.ID, SUM(PRODUCT.PRICE) 
  FROM CUSTOMER INNER JOIN PURCHASE 
  ON PURCHASE.CUSTOMER_ID=CUSTOMER.ID 
  INNER JOIN PURCHASE_PRODUCT 
  ON PURCHASE.ID=PURCHASE_PRODUCT.PURCHASE_ID 
  INNER JOIN PRODUCT 
  ON PURCHASE_PRODUCT.PRODUCTS_ID=PRODUCT.ID 
  GROUP BY CUSTOMER.ID")
...
List results = em.createNamedQuery(
  Product.SQL_SUM_PRICE_BY_PURCHASE_CUSTOMER)
  .getResultList();
Map<Customer, Double> ret = new HashMap<>();
for (Object result : results) {
  Object[] arr = (Object[]) result;
  ret.put(customerService.findById((Long)arr[0]),
  ((Double)arr[1]));
}
return ret;

Named native queries are specified on the entity class using the @NamedNativeQuery annotation. Because Java SE 7 doesn’t support multiple instances of the same annotation type on the same element, multiple query annotations must be combined in a @NamedNativeQueries umbrella annotation.

They can then be invoked on the EntityManager, much like dynamic native queries.

Named native queries still share quite a few disadvantages with dynamic native queries because they depend on native SQL:
  • You have to deal with the full complexity of SQL queries (e.g. JOIN clauses)
  • Native SQL queries work directly with the native data types of the underlying database, which severely harms portability. For instance, note that in the example, the data type returned by a COUNT query (e.g. int or long) depends on the underlying database implementation.
  • JPA does not allow named parameters for native SQL queries, only indexed parameters, which further decreases maintainability.
Support for generics is slightly better than with dynamic native queries:
  • By specifying resultClass in the annotation, a getResultList() call doesn’t have to be casted.
As with dynamic queries, always build TypedQuery instances for better generics support and less casting.

There is, however, an important advantage over dynamic queries in general:
  • JPA is able to apply performance optimization (preparsing, precompilation).
In fact, this is why you should never use dynamic queries at all, but replace them with named queries.

As you can see in the code examples, it’s a widespread best practice to use constants for query names in order to prevent mistyping them when referenced in an EntityManager call. I consider introducing constants for every named parameter overkill, however.

If you have non-trivial query Strings, it’s also best practice to load them from a static XML file in order not to clutter the entity Java source code file.
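Such externalized queries go into a JPA mapping file, by default META-INF/orm.xml. A minimal sketch of such a fragment (the query name and the result class package are assumptions for illustration, not taken from the demo project):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: declaring a named native query outside the Java source. -->
<entity-mappings xmlns="http://xmlns.jcp.org/xml/ns/persistence/orm"
                 version="2.1">
    <named-native-query name="SQL.Customer.findAll"
                        result-class="com.example.Customer">
        <query>SELECT * FROM CUSTOMER</query>
    </named-native-query>
</entity-mappings>
```

Queries declared this way are invoked via em.createNamedQuery(…) exactly as if they had been declared in an annotation.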

Note that because of the nature of annotations, you can only provide static query Strings; there’s no way to create a query String dynamically. But static query Strings really are what named queries are all about.

Bottom line:
  • Depending on how your development project is organized, named native queries may be useful: If your query logic resides in SQL scripts rather than in the Java code, and if you make use of native SQL constructs such as views and stored procedures, named native queries are the facility of choice for native SQL invocation. Note however that you thus create a strong dependency on the database’s native SQL dialect and hence lose portability.
  • If you already have all your optimized SQL scripts nicely prepared, don’t waste your time manually “translating” them into JPQL. Just use native queries instead.
  • If however you don’t have SQL expert knowledge in the team and you work from a Java perspective, take JPQL as a chance to abstract away nasty SQL.

Named JPQL queries

1. Find customer by id 2. Find by id (generic)
@NamedQuery(name = Customer.FIND_BY_ID, query =
  "SELECT e FROM Customer e WHERE e.id = :id")
...
public static final String FIND_BY_ID =
  "Customer.findbyId";
...
return em.createNamedQuery(
  Customer.FIND_BY_ID, Customer.class)
  .setParameter("id", id)
  .getSingleResult();
(dynamic table names not supported)
3. Find all customers 4. Count all customers
@NamedQuery(name = Customer.FIND_ALL,
  query = "SELECT e FROM Customer e")
...
return em.createNamedQuery(
  Customer.FIND_ALL, Customer.class)
  .getResultList();
@NamedQuery(name = Customer.COUNT_ALL,
  query = "SELECT COUNT(e.id) FROM Customer e")
...
return em.createNamedQuery(
  Customer.COUNT_ALL, Long.class)
  .getSingleResult();
5. Find purchase by customer id 6. Count purchase by customer id
@NamedQuery(name = Purchase.FIND_BY_CUSTOMER_ID,
  query = "SELECT e FROM Purchase e 
  INNER JOIN e.customer _customer 
  WHERE _customer.id = :id")
...
return em.createNamedQuery(
  Purchase.FIND_BY_CUSTOMER_ID, Purchase.class)
  .setParameter("id", id)
  .getResultList();
@NamedQuery(name = Purchase.COUNT_BY_CUSTOMER_ID, 
  query = "SELECT COUNT(e.id) 
  FROM Purchase e INNER JOIN e.customer _customer 
  WHERE _customer.id = :id")
...
return em.createNamedQuery(
  Purchase.COUNT_BY_CUSTOMER_ID, Long.class)
  .setParameter("id", id)
  .getSingleResult();
7. Find distinct product with purchase’s customer’s special condition 8. Aggregate product price sum by purchase’s customers
@NamedQuery(name = 
  Product.FIND_FOR_PURCHASE_CUSTOMER_PREMIUM,
  query = "SELECT e FROM Product e WHERE e.id IN (
  SELECT DISTINCT _products.id FROM Customer _customer 
  INNER JOIN _customer.purchases _purchases 
  INNER JOIN _purchases.products _products 
  WHERE _customer.premium = true)")
...
return em.createNamedQuery(
  Product.FIND_FOR_PURCHASE_CUSTOMER_PREMIUM,
  Product.class)
.getResultList();
@NamedQuery(name = 
  Product.SUM_PRICE_BY_PURCHASE_CUSTOMER, 
  query = "SELECT _customer.id, SUM(_products.price) 
  FROM Customer _customer 
  INNER JOIN _customer.purchases _purchases 
  INNER JOIN _purchases.products _products 
  GROUP BY _customer.id")
...
List results = em.createNamedQuery(
  Product.SUM_PRICE_BY_PURCHASE_CUSTOMER)
  .getResultList();
Map<Customer, Double> ret = new HashMap<>();
for (Object result : results) {
  Object[] arr = (Object[]) result;
  ret.put(customerService.findById((Long)arr[0]),
  ((Double)arr[1]));
}
return ret;

Similar to named native queries, named JPQL queries are specified on the entity class using the @NamedQuery annotation / @NamedQueries collection annotation.

Finally, named JPQL queries combine the advantages of named queries with the advantages of the JPQL syntax. Most importantly, because they are precompiled, JPQL syntax errors are detected early, at deployment time (or even statically by the IDE), rather than at query execution.

The common best practices explained above for named native queries apply to JPQL queries as well.

They really are the best fit for all but two specific situations:
  • You need to build queries dynamically
  • You want to or have to depend on native SQL constructs such as views and stored procedures.
Bottom line:
  • For static queries, use named JPQL queries for maximum static checking, query optimization, generics support and overall maintainability.
Pages: 1 2 3 4

JPA query languages compared (part 2 of 4)


Pages: 1 2 3 4

EntityManager operations

As discussed in the preliminary section, the EntityManager supports some very basic CRUD operations directly:
1. Find customer by id 2. Find by id (generic)
return em.find(Customer.class, id);
return em.find(getModelClass(), id);
3. Find all customers 4. Count all customers
(“read all” not supported) (“count all” not supported)
5. Find purchase by customer id 6. Count purchase by customer id
Customer customer = 
  em.find(Customer.class, id);
if (customer == null) {
  return new ArrayList<>();
}

return customer.getPurchases();
Customer customer = 
  em.find(Customer.class, id);
if (customer == null) {
  return new ArrayList<>();
}

return customer.getPurchases().size();
7. Find distinct product with purchase’s customer’s special condition 8. Aggregate product price sum by purchase’s customers
(not supported) (not supported)

It’s thus best practice to use EntityManager directly for simple CRUD operations, and not write any explicit DB queries.

Note that EntityManager only supports the read-by-id operation, not “read all”. You’ll have to use an alternate implementation for these cases.
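The generic read-by-id variant (query 2) typically lives in an abstract DAO base class. Here is a conceptual sketch, with an in-memory map standing in for the EntityManager; all class and method names besides getModelClass() are assumptions, not the demo project’s API:

```java
import java.util.HashMap;
import java.util.Map;

public class GenericDaoDemo {

    static abstract class AbstractDao<T> {
        // Stand-in for the persistence context; a real implementation
        // would delegate to em.find(getModelClass(), id) instead.
        private final Map<Object, T> store = new HashMap<>();

        // Each concrete DAO supplies its entity type exactly once.
        protected abstract Class<T> getModelClass();

        void save(Object id, T entity) {
            store.put(id, entity);
        }

        T findById(Object id) {
            // Mirrors: return em.find(getModelClass(), id);
            return getModelClass().cast(store.get(id));
        }
    }

    static class Customer {
        final String name;
        Customer(String name) { this.name = name; }
    }

    static class CustomerDao extends AbstractDao<Customer> {
        @Override
        protected Class<Customer> getModelClass() {
            return Customer.class;
        }
    }

    public static void main(String[] args) {
        CustomerDao dao = new CustomerDao();
        dao.save(1L, new Customer("Alice"));
        System.out.println(dao.findById(1L).name);
    }
}
```

With this pattern, every entity gets basic CRUD for free from the base class, and only non-CRUD queries need dedicated code.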

Also note that whilst it may be fine to “explicitly” load all children for an entity using a plain EntityManager call (query 5), it would be wasteful to do so just to get the number (size()) of children (query 6): As pointed out in this stackoverflow answer, calling #size() would trigger a full fetch of all child entities, regardless of lazy loading settings. It’s best practice to use an alternate implementation for this case.
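The size() pitfall can be sketched in plain Java. This is a conceptual model, not actual JPA provider internals: a lazy collection that materializes all elements on first access, the way a provider’s lazy-loaded child collection behaves.

```java
import java.util.AbstractList;
import java.util.List;
import java.util.function.Supplier;

public class LazyLoadDemo {

    static class LazyList<T> extends AbstractList<T> {
        private final Supplier<List<T>> loader;
        private List<T> loaded;      // stays null until first access
        boolean wasFetched = false;  // records whether a full fetch ran

        LazyList(Supplier<List<T>> loader) {
            this.loader = loader;
        }

        private List<T> load() {
            if (loaded == null) {
                loaded = loader.get();  // full fetch of all children
                wasFetched = true;
            }
            return loaded;
        }

        @Override
        public T get(int index) {
            return load().get(index);
        }

        @Override
        public int size() {
            // Even "just counting" forces the full fetch.
            return load().size();
        }
    }

    public static void main(String[] args) {
        LazyList<String> purchases =
                new LazyList<>(() -> List.of("p1", "p2", "p3"));
        System.out.println("fetched before size(): " + purchases.wasFetched);
        int count = purchases.size();
        System.out.println("count: " + count);
        System.out.println("fetched after size(): " + purchases.wasFetched);
    }
}
```

A dedicated COUNT query (query 6 in the named query sections) avoids materializing the collection entirely.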

Bottom line:
  • Use exclusively for simple CRUD operations.

Short digression: CRUD and REST

As a side note: If you ever find yourself forced to write a lot of queries with conditional “where” clauses other than the entity id, this may be a sign of an architectural misconception.

Especially in a RESTful environment, it’s typical to first receive a list of the ids of all the entities fulfilling a given condition, and then work with each individual entity based on its id.

For instance, in order to switch the “premium user” flag to true for a customer with a given customer name, you would do in REST (in pseudo-code):
Customer customer = em.createNamedQuery("Customer.findbyName")
    .setParameter("name", name)
    .getSingleResult();
customer.setPremium(true);
save(customer);
whereas in a more service-oriented architecture, you would do (in pseudo-code):
em.createNamedQuery("Customer.setPremiumByName")
    .setParameter("name", name)
    .executeUpdate();
where "Customer.setPremiumByName" is a specialized named update query implementing the business logic.

Even though in the service-oriented version, we fire only one query instead of two, the number of queries we have to define will be much higher than in a REST architecture which reuses the generic CRUD operations wherever possible. If however throughput is very critical, a service-oriented approach may be your last resort.

For any other situation, RESTful service interfaces generally increase reusability and maintainability and are a better match for an object-oriented environment. In any case, it’s key to decide once whether you are building a RESTful or a service-oriented architecture, and to then stick to it.

Dynamic queries

As the simplest (and most dynamic) option, you can run queries specified by a String on the EntityManager.

Dynamic native (SQL) queries

In the simplest case, you can use plain SQL (hence “native”) queries.
1. Find customer by id 2. Find by id (generic)
return (Customer) em.createNativeQuery(
  "SELECT * FROM CUSTOMER WHERE ID = ?1", 
  Customer.class)
  .setParameter(1, id)
  .getSingleResult();
(dynamic table names not supported)
3. Find all customers 4. Count all customers
return em.createNativeQuery(
  "SELECT * FROM CUSTOMER", Customer.class)
  .getResultList();
return ((Number) em.createNativeQuery(
  "SELECT COUNT(ID) FROM CUSTOMER")
  .getSingleResult()).longValue();
5. Find purchase by customer id 6. Count purchase by customer id
return em.createNativeQuery(
  "SELECT PURCHASE.* FROM PURCHASE\n" +
  "INNER JOIN CUSTOMER\n" +
  "ON PURCHASE.CUSTOMER_ID=CUSTOMER.ID\n" +
  "WHERE CUSTOMER.ID = ?1", Purchase.class)
  .setParameter(1, id)
  .getResultList();
return ((Number) em.createNativeQuery(
  "SELECT COUNT(PURCHASE.ID) FROM PURCHASE\n" +
  "INNER JOIN CUSTOMER\n" +
  "ON PURCHASE.CUSTOMER_ID=CUSTOMER.ID\n" +
  "WHERE CUSTOMER.ID = ?1")
  .setParameter(1, id)
  .getSingleResult()).longValue();
7. Find distinct product with purchase’s customer’s special condition 8. Aggregate product price sum by purchase’s customers
return em.createNativeQuery(
  "SELECT PRODUCT.* FROM PRODUCT\n" +
  "WHERE PRODUCT.ID IN (\n" +
  "SELECT DISTINCT PRODUCT.ID FROM PRODUCT\n" +
  "INNER JOIN PURCHASE_PRODUCT\n" +
  "ON PURCHASE_PRODUCT.PRODUCTS_ID=PRODUCT.ID\n" +
  "INNER JOIN PURCHASE\n" +
  "ON PURCHASE.ID=PURCHASE_PRODUCT.PURCHASE_ID\n" +
  "INNER JOIN CUSTOMER\n" +
  "ON PURCHASE.CUSTOMER_ID=CUSTOMER.ID\n" +
  "WHERE CUSTOMER.PREMIUM = 1\n" +
  ")", Product.class)
  .getResultList();
List results = em.createNativeQuery(
  "SELECT CUSTOMER.ID, " +
  "SUM(PRODUCT.PRICE) FROM CUSTOMER\n" +
  "INNER JOIN PURCHASE\n" +
  "ON PURCHASE.CUSTOMER_ID=CUSTOMER.ID\n" +
  "INNER JOIN PURCHASE_PRODUCT\n" +
  "ON PURCHASE.ID=PURCHASE_PRODUCT.PURCHASE_ID\n" +
  "INNER JOIN PRODUCT\n" +
  "ON PURCHASE_PRODUCT.PRODUCTS_ID=PRODUCT.ID\n" +
  "GROUP BY CUSTOMER.ID")
  .getResultList();
Map<Customer, Double> ret = new HashMap<>();
for (Object result : results) {
  Object[] arr = (Object[]) result;
  ret.put(customerService.findById((Long)arr[0]), 
    ((Double)arr[1]));
}
return ret;

Important note: in order to prevent SQL injection, we always provide dynamic parameters values with the #setParameter(…) method, which sanitizes potentially harmful input, rather than using query String concatenation.
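The difference between concatenation and parameter binding is easy to demonstrate. A sketch in plain Java; the query shape matches example query 1, and the naive helper is hypothetical, shown only to illustrate what NOT to do:

```java
public class InjectionDemo {

    // What NOT to do: splicing user input into the query String.
    static String naive(String id) {
        return "SELECT * FROM CUSTOMER WHERE ID = " + id;
    }

    public static void main(String[] args) {
        // A well-behaved caller:
        System.out.println(naive("42"));
        // A malicious caller smuggles an always-true clause into the
        // statement, turning "find one" into "find all":
        System.out.println(naive("42 OR 1=1"));
        // With #setParameter(1, id), the value is bound as data and
        // never parsed as SQL, so the injected clause has no effect.
    }
}
```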

This most basic DB query facility also suffers from the most inconveniences:
  • Because the query String is built dynamically, no performance optimization (preparsing, precompilation) is possible.
  • Declaring the query as a String of course is not typesafe at all, making the implementation highly brittle.
  • It’s also arguably hard to read and maintain.
  • Even though dynamic parameters are supported, they cannot be used for dynamic table names. The only way to support dynamic table names (for a very generic solution as in example query 2) would be String concatenation, which we have already rejected as a terrible idea. The same is true for any dynamic building of query clauses.
  • Support for aggregate functions (using the GROUP BY clause) is poor: they can only return value tuples, not entity-value Maps (we’ll later see that, unfortunately, this is basically true for every Java query language discussed in this article).
  • You have to deal with the full complexity of SQL queries (e.g. JOIN clauses)
  • Native SQL queries work directly with the native data types of the underlying database, which severely harms portability. For instance, note that in the example, the data type returned by a COUNT query (e.g. int or long) depends on the underlying database implementation.
  • Native SQL query results have to be cast because generics are not fully supported. Still, it’s best practice to always use the #create…Query(…) overload which takes the result class as an additional parameter, providing basic generics support.
  • JPA does not allow named parameters for native SQL queries, only indexed parameters, which further decreases maintainability. It’s best practice to at least use explicit parameter references (?1, ?2,… instead of ?, ?,…) to make things explicit. Note that query parameter indices start at 1.
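The COUNT portability point above is why the examples cast to Number rather than to a concrete type. A sketch of that pattern; the toCount helper name is hypothetical:

```java
import java.math.BigInteger;

public class CountResultDemo {

    // The untyped result of a native COUNT query may arrive as any
    // numeric wrapper depending on the database / JDBC driver.
    static long toCount(Object rawResult) {
        // Works whether the driver hands back Integer, Long,
        // BigInteger or BigDecimal.
        return ((Number) rawResult).longValue();
    }

    public static void main(String[] args) {
        System.out.println(toCount(Integer.valueOf(7)));
        System.out.println(toCount(Long.valueOf(7L)));
        System.out.println(toCount(BigInteger.valueOf(7)));
    }
}
```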
Bottom line:
  • Don’t use this facility ever. Check out named native queries if you need native SQL query support.

Dynamic JPQL queries

1. Find customer by id 2. Find by id (generic)
return em.createQuery(
  "SELECT e FROM Customer e WHERE e.id = :id",
  Customer.class)
  .setParameter("id", id)
  .getSingleResult();
(dynamic table names not supported)
3. Find all customers 4. Count all customers
return em.createQuery(
  "SELECT e FROM Customer e", Customer.class)
  .getResultList();
return em.createQuery(
  "SELECT COUNT(e.id) FROM Customer e", Long.class)
  .getSingleResult();
5. Find purchase by customer id 6. Count purchase by customer id
return em.createQuery("SELECT e FROM Purchase e\n" +
  "INNER JOIN e.customer _customer\n" +
  "WHERE _customer.id = :id", Purchase.class)
  .setParameter("id", id)
  .getResultList();
return em.createQuery(
  "SELECT COUNT(e.id) FROM Purchase e\n" +
  "INNER JOIN e.customer _customer\n" +
  "WHERE _customer.id = :id", Long.class)
  .setParameter("id", id)
  .getSingleResult();
7. Find distinct product with purchase’s customer’s special condition 8. Aggregate product price sum by purchase’s customers
return em.createQuery("SELECT e FROM Product e " +
  "WHERE e.id IN (" +
  "SELECT DISTINCT _products.id\n" +
  "FROM Customer _customer\n" +
  "INNER JOIN _customer.purchases _purchases\n" +
  "INNER JOIN _purchases.products _products\n" +
  "WHERE _customer.premium = true)", Product.class)
  .getResultList();
List results = em.createQuery(
  "SELECT _customer.id, " +
  "SUM(_products.price)\n" +
  "FROM Customer _customer\n" +
  "INNER JOIN _customer.purchases _purchases\n" +
  "INNER JOIN _purchases.products _products\n" +
  "GROUP BY _customer.id")
 .getResultList();
Map<Customer, Double> ret = new HashMap<>();
for (Object result : results) {
  Object[] arr = (Object[]) result;
  ret.put(customerService.findById((Long)arr[0]),
  ((Double)arr[1]));
}
return ret;

Another option for building queries dynamically is to specify them using a JPQL String rather than a SQL String. The JPQL (Java Persistence Query Language) is a DSL closely related to SQL, but adhering to the object-oriented domain model view on entities and their properties rather than on tables and columns, and generally making the language easier to use.

Historically, JPQL was heavily inspired by Hibernate’s HQL, which itself is still in use and provides a superset of JPQL functionality. HQL is not discussed in this article. If you use Hibernate rather than EclipseLink or any other persistence provider, you may want to take a look at it. Keep in mind, however, that you lose portability if you go for a proprietary solution. I would advise you to stick to JPQL which, as of now, offers a mature feature set covering virtually any DB query requirement.

When compared with dynamic native (SQL) queries, JPQL-based queries offer the following advantages:
  • Diminished complexity of the query language and close resemblance to object-oriented thinking (note the simpler JOIN clause and the object property references).
  • No dependency on native database data types, thus full portability.
  • Full support for generics.
  • Full support for named parameters.
A reasonable IDE such as NetBeans will even statically detect syntax errors and offer auto-completion within the query Strings.

As with dynamic native queries, it’s best practice to make use of the generics support provided by TypedQuery.

Still, some of the general drawbacks of String-based dynamic queries remain which would be solved using named queries (see below).

Bottom line:
  • Don’t use this facility either. Use named JPQL queries, or more strongly typed query building facilities if you still need the “dynamic” part.

Pages: 1 2 3 4