June 28, 2015

Painless Android development with Groovy and SwissKnife (part 3 of 3)



O/RM with Sugar ORM

Most non-trivial Android applications will have to store data in some way. Because manually writing SQL queries is cumbersome and error-prone, there are object-relational mapping frameworks which abstract away CRUD operations on entity models. These frameworks are well known and fully integrated in the Java EE platform; for Android, one must choose a third party framework of which there are luckily quite a few.

After some quick research, I decided to go with Sugar ORM. Yes, it’s not one of the “big names” (OrmLite, GreenDAO), but it’s the only one whose usage instructions fit on less than two screen pages. Given the small size of the example project and the limited amount of time I wanted to spend fiddling with an ORM, I decided to give it a go, and in the end, I never had to go back on that decision.

In action

Sugar ORM describes itself as an “insanely easy way to work with Android database” and this is certainly true. After the initial minimal setup, you don’t even need to annotate entity models. Just make sure to derive from SugarRecord:
class Card extends SugarRecord<Card> {
and all properties are auto-detected. Sugar ORM then enhances your model classes with CRUD persistence functionality:

Read a card:
card = Card.findById(Card, id)
Update a card:
card.save()
More information can be found in the short but sweet official documentation.
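For reference, a complete model class for this demo could look roughly like this (the property names are taken from the code further below; the import assumes Sugar ORM’s com.orm package):
import com.orm.SugarRecord

class Card extends SugarRecord<Card> {
    String text1
    String text2
    int correctGuesses = 0
}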

Problems

Where is my database?

This is not a problem of Sugar ORM, but of Android itself. Apparently, inspecting the SQL database on a real hardware device is not possible for security reasons unless the device is “rooted”. This is just ridiculous! Am I really supposed to implement persistent storage blindly? I haven’t checked all possible “hacks” and “workarounds”, so maybe there is a way to achieve this. I just don’t get why there is no straightforward support for such a vital development feature.

Dropping the database

This is not really a problem, just a lesson learnt: the easiest way to drop an app’s database seems to be to uninstall the application on the device, using the device’s application manager. Maybe there’s another way integrated into the deployment process, but I haven’t found it yet.
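Alternatively, clearing the app’s data from the command line should achieve the same, as it wipes the app’s databases along with all other app data (using the demo app’s package name):
adb shell pm clear ch.codebulb.groovyswissknifeandroidapp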

To-Many-Relationships, Cascading

As of Sugar ORM version 1.3, to-many relationships need to be managed manually, and there is no cascading. For the minimalistic demo application, this wasn’t a big deal. For more complex entity relations, you may want to implement cascading yourself, e.g. by overriding #save(). This is apparently being worked on for version 1.4.
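For illustration, such manual cascading could look roughly like this (not the demo app’s actual CardSet class; the imports assume Sugar ORM’s com.orm packages, and I use a dedicated deep-save method rather than overriding #save() itself):
import com.orm.SugarRecord
import com.orm.dsl.Ignore

class CardSet extends SugarRecord<CardSet> {
    @Ignore
    List<Card> cards = []   // to-many side, managed manually and excluded from mapping

    void saveDeep() {
        save()          // persist the card set itself
        cards*.save()   // then "cascade" the save to all child cards manually
    }
}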

Property mapping

Auto-magical property mapping comes at a price: you’re not in control of the mapping. To prevent a property from being mapped, you can use @Ignore (Java EE equivalent: @Transient), but that only works for classes you own.

For instance, when we annotate our model classes with @Bindable to implement two-way data-binding (see above), this weaves a final private java.beans.PropertyChangeSupport this$propertyChangeSupport field into the class, which Sugar ORM then treats as an additional property and persists as well! This may not be what you desire.

Putting it all together

This blog post really became a presentation of tools which work more or less independently of each other. Nevertheless, I’d like to emphasize here how seamlessly the Groovy + SwissKnife + Sugar ORM stack works as a unit by highlighting a complete example use case.

Let’s see how to navigate from CardSetActivity to CardEditActivity when the user clicks on a card to open it, edit the card, and then save it in CardEditActivity, which leads back to the CardSetActivity overview.

First of all, in CardSetActivity, we initialize the card set model:
List<CardSet> cardSets = CardSet.listAll(CardSet)
if (!cardSets.empty) {
    cardSet = cardSets[0]
}
else {
    cardSet = new CardSet()
}

cardSet.cards = Card.listAll(Card).sort {
    -it.correctGuesses
} as ObservableList
initListener()
And we then init the data-binding:
View table = findViewById(R.id.card_table_layout)
TableBinding.bind(cardSet.cards, table) { Card card ->
    TableBuilder.linearLayout(this,
            TableBuilder.button(this, "X", {
                card.delete()
                cardSet.cards.remove(card)
            }),
            TableBuilder.text(this, "($card.correctGuesses) $card.text1:\n$card.text2", {
                startActivity(ActivityUtil.intent(this, CardEditActivity, [
                        id: card.id
                ]))
            })
    )
}
The binding also builds the view elements, one for each object in cardSet.cards, and attaches them to the table View.

Note that TableBuilder.text(…) builds a TextView which outputs information about a card and registers an OnClickListener as a closure. Here, the listener starts an activity (startActivity(…) is provided by SwissKnife) with an Intent that takes a parameter named id. The current card’s id is thus passed into that activity.

In CardEditActivity, the id parameter is injected:
@Extra
Long id
and it’s used to load the card with the id provided from the database:
Card card = new Card()
if (id != null) {
    card = Card.findById(Card, id)
}
initListener()
You could even perceive this as “lazy loading”. Of course, DB access methods are provided by Sugar ORM.

Then, again, the data-binding is initialized:
TextViewBindings.bind(card, [
    text1       : text(findViewById(R.id.text1)),
    text2       : text(findViewById(R.id.text2)),
    correctGuesses: text(findViewById(R.id.correctGuesses)),
])
As this registers the respective TextWatcher, the card’s text1 and text2 properties will be updated on every keystroke, so we can just rely on the card model always containing the current values of the input components.

Thus, all we need to do when saving is persist the model via Sugar ORM:
@OnClick(R.id.saveButton)
public void save() {
    card.save()
    setResult(RESULT_OK)
    finish()
}
Then we finish() the activity to go back to the parent activity. The latter part is vanilla Android. However, note that registering the OnClick listener happens with a simple @OnClick annotation provided by SwissKnife.

Finally, we need to register a listener in CardSetActivity which re-initializes the model when a child activity is finished:
// Coming back from addNewCard
@Override
protected void onResume() {
    super.onResume()
    initModel()
}
Android wouldn’t call the onCreate(…) listener in that case. initModel() simply fetches the card set and its associated cards from the database again; it’s the method whose source code is shown at the beginning of this section.

I encourage you to take a look at the source code of these and the other two activities on the demo project’s GitHub repository. They contain some more variations of Groovy / SwissKnife / Sugar ORM interplay.

Conclusion

Honestly, I cannot imagine working as a professional Android app developer given the current state of the platform. Actually, I cannot imagine anyone working as a professional Android app developer! There are just too many problems left unsolved, and the developer’s burden is much too high. Coming from Java EE and Groovy, I’m accustomed to concentrating on adding business value when I develop, with everything else abstracted away. On Android, nothing is abstracted away, and I have to build everything from scratch, with every new app, before I can even think about adding business value.

Android apparently makes it easy to write poor, procedural, copy-pasted code, whilst making it hard to write proper, clean, modularized code. This is the opposite of what e.g. AngularJS does in the web app world, facilitating practices such as separation of concerns and inversion of control.

Still, there are potential solutions for these problems, and I’m convinced that the Groovy + SwissKnife stack lays a great foundation for facilitating Android application development. What’s more, integration with an O/RM framework such as Sugar ORM really is seamless.

Thus, I could actually imagine diving deeper into Android app programming, given that I’m free to choose and shape a truly productive development environment. The tools I used for this demo application would certainly be part of that tech stack. Note however that I neglected performance and security aspects for the most part so far.

Please let me know in the comments section below whether you found my insights useful and whether you share some of my concerns about Android application development. If I have missed any interesting library / tool to further increase productivity or which elegantly solves one of the problems I mentioned in this blog post, please let me know! I hate re-inventing the wheel and I would gladly add an additional tool to my stack if it helps me overcome some of Android’s shortcomings.



Painless Android development with Groovy and SwissKnife (part 2 of 3)



Groovy with SwissKnife

SwissKnife is “a multi-purpose Groovy library containing view injection and threading for Android using annotations”. It tremendously increases ease of development and maintainability, as we will see presently.

Setup

To use it, follow the official documentation or take a look at my module’s build.gradle file where it’s activated. I installed the accompanying SwissKnife Android Studio plugin as well for further convenience.

In action

With the above setup completed, you’re ready to use the SwissKnife enhancements in your Groovy code. Please refer to its excellent documentation page for more information about its components.

Note that you always have to initialize SwissKnife in an activity’s #onCreate(...) method, e.g. using SwissKnife.inject(this) for view injection and SwissKnife.loadExtras(this) for extra injection.
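In context, that initialization might look like this inside an activity (the layout id and the init methods are placeholders):
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_card_edit)  // placeholder layout id
    SwissKnife.inject(this)       // enables view annotations such as @OnClick
    SwissKnife.loadExtras(this)   // populates fields annotated with @Extra
    initModel()                   // placeholder: set up the model
    initListener()                // placeholder: set up the data-binding
}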

Here are some usage examples:

To toast:
toast("Correct answer was: $checked").show()
To start an activity (i.e. change to a different screen):
@OnClick(R.id.startCard)
public void onClick() {
    startActivity(intent(CardActivity))
}
To include extras from a parent activity:
@Extra
Long id

Problems

This is not a problem of SwissKnife itself, but a particular annoyance of Android: apparently, invoking Intent#putExtra(...) with a null value leads to an immediate NullPointerException. I have thus written a small helper method which allows for a concise, null-safe syntax:
startActivity(ActivityUtil.intent(this, CardEditActivity, [
    id: card.id
]))
Please check out ActivityUtil’s source code for more details.
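If you don’t want to dig through the repository, a minimal sketch of such a null-safe helper could look like this (not necessarily the actual implementation):
import android.content.Context
import android.content.Intent

class ActivityUtil {
    static Intent intent(Context context, Class activityClass, Map extras = [:]) {
        Intent intent = new Intent(context, activityClass)
        extras.each { String key, value ->
            // simply skip null values to avoid the NullPointerException mentioned above
            if (value != null) {
                intent.putExtra(key, value)
            }
        }
        intent
    }
}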

Two-way data-binding with Groovy

If you’ve ever enjoyed the pleasure of two-way data-binding, keeping the model (M) and view (V) auto-magically in sync, as implemented e.g. by the AngularJS JavaScript framework, you just don’t want to go back to manual synchronization. Android doesn’t even come with one-way data-binding out of the box, neglecting any need for model / view separation. Time to change that.

After checking out a couple of available solutions, such as Android’s official Data Binding extension (one-way only, currently in beta), bindroid and ngAndroid, I decided to try and implement it on my own using vanilla Groovy. I eventually ended up with a pretty decent solution in only about 150 lines of code. It doesn’t support conversion or validation, but it serves the requirements of this demo application well, and it’s potentially extensible.

In action

This solution is based on Groovy’s @Bindable annotation which has to be applied on the model class:
@Bindable
class Card {
Then, you have to initialize the Binding:
TextViewBindings.bind(guessedCard, [
    text1       : text(findViewById(R.id.text1)),
    text2       : text(findViewById(R.id.text2)),
    text1Guessed: enabled(findViewById(R.id.text1)),
    text2Guessed: enabled(findViewById(R.id.text2))
])
#bind(...) takes the model and a map which associates each model property (by its name) with a View component. It registers the necessary TextChangedListener (for changes in the View part) and PropertyChangeListener (for changes in the model part). Note that the enabled property can be bound to a model’s Closure property returning true or false.

That’s it. Any change to the TextViews will immediately update the model (did I hear “backing bean”?), and a change to the model will update the TextView’s text and enabled property.

Please check out TextViewBindings’s source code for more details.
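To illustrate the mechanism (this is a simplified sketch, not the actual TextViewBindings code), the view-to-model half of such a binding boils down to registering a TextWatcher which writes back into the bound model property:
import android.text.Editable
import android.text.TextWatcher
import android.widget.EditText

class TextBindingSketch {
    static void bindText(model, String property, EditText view) {
        view.addTextChangedListener(new TextWatcher() {
            void afterTextChanged(Editable s) {
                model[property] = s.toString()   // push every keystroke into the model
            }
            void beforeTextChanged(CharSequence s, int start, int count, int after) {}
            void onTextChanged(CharSequence s, int start, int before, int count) {}
        })
        // The model-to-view half registers a PropertyChangeListener on the @Bindable model
        // and writes changed values back into the view's text property (omitted here).
    }
}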

There’s also a data-binding solution for list models. However, I only implemented this as a one-way data-binding, from model to view. It is based on another Groovy feature, namely the ObservableList, which must be the type of the list model:
ObservableList cards = [] as ObservableList
You can then initialize the Binding:
View table = findViewById(R.id.card_table_layout)
TableBinding.bind(cardSet.cards, table) { Card card ->
    TableBuilder.linearLayout(this,
            TableBuilder.button(this, "X", {
                card.delete()
                cardSet.cards.remove(card)
            }),
            TableBuilder.text(this, "($card.correctGuesses) $card.text1:\n$card.text2", {
                startActivity(ActivityUtil.intent(this, CardEditActivity, [
                        id: card.id
                ]))
            })
    )
}
Here, #bind(...) takes the model, the parent view, and a closure which returns the child view created for each model newly added to the list (in the example, the closure is quite elaborate, building a “table row” consisting of a “delete” button and a text view).

Check out TableBinding’s source code for more details.

With this setup, you can work with your model entities throughout the application. You don’t ever need to touch view components in your code.

Problems

Classes annotated with @Bindable must not be @CompileStatic; the listeners would simply not fire otherwise. You thus have to stick with dynamically compiled classes, accepting the associated potential performance penalty and ProGuard minification problems. The TextViewBinding util class which initializes the actual data-binding also needs to be dynamically compiled.

Next steps

Of course, this solution is far from perfect. Still, for the demo application’s requirements, it’s almost overkill. What I really want to show here is that through sensible use of Groovy facilities, one can quite easily come up with a two-way data-binding framework, which Android desperately needs in my opinion.

The current solution doesn’t even make full use of two-way data-binding: as it works with a reference to a model variable, it has to be re-initialized as soon as that variable points to another model instance. Actually, the model variable should be @Bindable as well, and upon a property change, the binding should be re-initialized.

Also, the binding would need much more parameterization and support for observing / updating more View properties than just text, enabled, and hint.

Furthermore, it should support conversion (only Strings are supported now) and maybe even validation (which could be realized using @Vetoable rather than @Bindable).

Declarative Layout building

Android views are obviously meant to be built in the layout XML file, preferably using the IDE’s “design” UI. As soon as you have to build or alter the UI in Java code, e.g. to change the UI dynamically at runtime, you’re falling back to Swing-style development, which quickly devolves into a mess of procedural spaghetti code.

It’s a shame Android doesn’t provide any fluent, builder-style syntax out of the box. I took a look at the grooid-tools project, which aims to provide a true builder syntax for Android UIs, similar to Groovy’s built-in SwingBuilder, but as of now (June 2015), the project seems to lack any documentation and looks more or less abandoned.

In action

Thus I just built a tiny builder-like helper class for building a linear layout (it’s the same piece of code as in the previous section).
TableBuilder.linearLayout(this,
    TableBuilder.button(this, "X", {
        card.delete()
        cardSet.cards.remove(card)
    }),
    TableBuilder.text(this, "($card.correctGuesses) $card.text1:\n$card.text2", {
        startActivity(ActivityUtil.intent(this, CardEditActivity, [
                id: card.id
        ]))
    })
)
Please check out TableBuilder’s source code for more details.

Next steps

Obviously, this is only a very local solution for the specific requirements of this one layout. A more general solution might involve either writing a true Groovy builder or a builder-style method chaining API. Of course, it would have to provide support for all of Android’s view components.

Because we need context (activity) information to build a View component, we would probably have to implement this in a base class common to all activities in order to avoid having to pass the activity (this) reference around all the time.
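As a sketch of that idea (the TableBuilder signatures are assumed from their usage above; class and method names are illustrative):
import android.app.Activity
import android.view.View

abstract class BuilderActivity extends Activity {
    // context-bound shortcuts, so subclasses don't have to pass "this" around
    protected def linearLayout(View... children) {
        TableBuilder.linearLayout(this, children)
    }

    protected def button(String label, Closure onClick) {
        TableBuilder.button(this, label, onClick)
    }

    protected def text(String content, Closure onClick) {
        TableBuilder.text(this, content, onClick)
    }
}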

Escaping XML hell

Unfortunately, Android uses XML files for about every configuration, even where XML is clearly ill-suited, such as the key-value mappings of message bundles. Java EE, for example, uses the properties file format for that, which makes perfect sense. XML is a valid format for many purposes, but for Android’s use cases it is just bloated and obfuscates the code.

As a first step to overcome this problem, I ended up writing a small IntelliJ IDEA / Android Studio plugin which automatically converts XML files to properties files and vice versa on file save. I introduced that plugin in the previous blog post, and detailed usage information is provided on the plugin’s GitHub repository.

I thus used the plugin to create a properties-format mirror of the values/strings.xml file, so I can edit this file in the properties format rather than in the XML format, with the changes automatically synched back to the XML file on save.

This will then e.g. turn file content like this:
<string name="cardset_template1_hint">Upper Textbox hint</string>
<string name="cardset_template2_hint">Lower Textbox hint</string>
<string name="save">Save</string>
<string name="cardset_add_card">Add new card</string>
<string name="title_activity_card_edit">Edit card</string>
Into file content like this:
cardset_template1_hint=Upper Textbox hint
cardset_template2_hint=Lower Textbox hint
save=Save
cardset_add_card=Add new card
title_activity_card_edit=Edit card
I am convinced that the latter is much easier to handle. Its structure is much cleaner, it’s highly readable, and I have no problem bulk-editing many texts at the same time manually.

Next steps

The current implementation is more of a proof of concept than a production-ready solution.

In a next step, the solution would have to be generalized in order to apply it to other Android resource files as well; this really just means implementing the respective on-save Groovy scripts. Eventually, I would love to have the same mechanism for layout XML files, too. I imagine being able to write the layout in a concise builder-like syntax, as sketched by the grooid-tools project, and compile it to XML on file save.

As an accompanying feature, it would be nice to have a Gradle task / plugin which reuses the on-save Groovy script to generate the Android XML files at build time. That way, the Android XML files would become a truly passive artifact you wouldn’t even have to check in to source control.


Painless Android development with Groovy and SwissKnife (part 1 of 3)



After Groovy recently introduced support for Android, I wanted to give that platform another try. In this blog post, I will guide you through setting up a development environment for Android with a focus on ease of development, and show you how modern frameworks may help us create more maintainable Android apps in the future.

As an experienced Java EE and Groovy developer, I was genuinely shocked when I first came in contact with Android software development. In my opinion, it simply ignores major progress towards ease of development and maintainability, as achieved by the Java EE stack and as implemented by Groovy. Most prominently, I see these major inconveniences:
  • No MVC pattern. Development with Android feels like development with Swing, with validation, event handling and business logic cluttered all over the place, and with view components hard-wired into the controller code.
  • No O/RM out-of-the-box. Working with the DB on Android is a step backwards to hard-coded SQL querying.
  • XML hell. Whilst Java EE makes great effort to minimize XML configuration by preferring annotations, Android uses XML for about every non-Java artifact, including message bundles.
Or, as Guillaume Laforge (former Groovy project lead) puts it, “Android is in the Java stone age”. Luckily, however, tools and frameworks are emerging with the aim of making Android software development not only more productive, but also more enjoyable. Most prominently, with version 2.4, Groovy included official support for Android’s Dalvik VM and as such gave birth to a growing ecosystem of Groovy Android libraries, of which the SwissKnife project is the most prominent.

With this tutorial, I want to push support for modern development practices, productive frameworks and useful tools for Android development to their current limits. What I want to achieve is:
  • Minimize any kind of boilerplate code
  • Two-way databinding to strengthen model / view decoupling
  • Getting rid of XML resource files in favor of more maintainable file formats, such as properties files.
  • ORM to allow declarative, object-oriented persistence querying
Due to the limited amount of time I’m ready to invest in this topic, I will concentrate on presenting some basic implementations and sketching ideas on how the development experience could be further improved. Nevertheless, this tutorial will include a fully functional implementation of the aims stated above.

The example application

As an example application, I will implement a very simple, fun memorization tool.

The user can create, edit and delete cards which consist of two texts.

He can start a game with all created cards. The application will then randomly choose a card and show one of the two texts that card contains; the user has to correctly enter the second text. If he guesses it right, the number of correct guesses for that card is increased. That way, the user can later inspect which cards he has guessed right a couple of times and delete them once he has successfully memorized them.

This is how the final application will look. It will consist of four activities:



  • MainActivity: The starting point for playing a game or editing cards.
  • CardActivity: The in-game screen. It shows one card at a time. After the user clicks OK, the current guess is evaluated and the next card is shown. If the guess was wrong, an error message appears briefly; otherwise, the card’s correct guesses number is increased and stored immediately.
  • CardSetActivity: Here, the user can enter the default description for the two text fields of a card. He can inspect all available cards, delete cards (with immediate effect), and change to the card edit screen to edit an existing card or create a new one.
  • CardEditActivity: Here, the user can inspect an existing card and change its two texts, or create a new card. A click on save saves the card immediately and leads back to the card set overview.

Note: The application is purposefully kept very simple. There is no support for multiple card sets, no error handling and the GUI is arguably ugly. I have neglected those aspects in order to concentrate on simple functionality showcasing the usage of tools and libraries.

Basic Setup

Android Studio

I will use Android Studio here, which is the official IDE for Android development. It is based on IntelliJ IDEA.

You typically want to use a real hardware device to test your application, mostly for performance reasons. Make sure that Android Studio is properly set up, and follow the Android development documentation to set up the USB driver.

For convenience, follow this tutorial to add a “Groovy class” option to the “new file” dialog. The template may e.g. look like this:
#if (${PACKAGE_NAME} && $PACKAGE_NAME != "" )package ${PACKAGE_NAME}
#end

import groovy.transform.CompileStatic

@CompileStatic
class ${NAME} {
    
}

Setup Android Project

I created a new Android project targeting the 4.1 SDK, starting with a blank activity named MainActivity. Following IntelliJ / Android Studio project structuring conventions, I stick to the “one module per project” rule for Android projects.

Groovy on Android

Setup

Follow the official Groovy Android Gradle plugin documentation to activate Groovy in your Android project.

You may as well reuse my project build.gradle file and my module build.gradle file.

According to the Groovy-Android documentation, Groovy sources must be placed in the project/app/src/main/groovy folder rather than in the .../java folder. You have to change to the “project” view to create the folder and for it to stay visible. Moreover, Java classes which refer to Groovy classes must be placed in the .../groovy folder as well; thus, just put all source files (Java and Groovy) in the .../groovy folder to avoid trouble.

The Android build system ships with the ProGuard code minification tool, which we will use to purge any unused Groovy library files from the classpath in order to keep the release APK footprint small.

I used the example proguard-rules.pro file of this GitHub repository as a base for my project. My own proguard-rules.pro file contains some additional fixes. It minimizes a 9.20 MB APK down to 3.77 MB for a hello world application, and a 9.38 MB APK down to 4.06 MB for the finished demo application. Enabling code obfuscation further reduces the APK size drastically, so don’t include the -dontobfuscate option, even though it is shown in several online resources.

To see minification in action during development, I enabled it for the debug buildType as well. I highly recommend you do so, as wrong ProGuard rules will lead to exceptions at runtime rather than at compile time, due to Groovy’s use of reflection.

In action

With this basic setup done, you’re ready to write Groovy code for your Android application. In and of itself, this is potentially a huge productivity gain already as you can now use Groovy’s very concise syntax as well as its comprehensive API.

However, you should always keep your Groovy classes @CompileStatic for performance reasons as well as for type safety and to support code minification; otherwise, reflection is used, which provides none of these benefits. Remember to do so as well when you change an Android Studio-generated Java file (e.g. an Activity class) to *.groovy.

Problems

The combination of ProGuard code minification and Groovy is tricky, especially if you are forced to omit @CompileStatic / use @CompileDynamic for your classes, as these are then eligible for deletion during the minification process. To prevent this, you can exclude every class in your project’s base package in the proguard-rules:
-keep class ch.codebulb.groovyswissknifeandroidapp.**
-keepclassmembers class ch.codebulb.groovyswissknifeandroidapp.** {*;}
You should do that anyway. After all, if you really had code in your packages which you don’t need, you would just delete it, wouldn’t you?

Otherwise, you will get strange errors at runtime when your application tries to invoke code which isn’t there:
AndroidRuntime? FATAL EXCEPTION: main
    java.lang.RuntimeException: Unable to start activity ComponentInfo
    {codebulb.ch.groovyswissknifeandroidapp/ch.codebulb.groovyswissknifeandroidapp.CardSetActivity}: 
    groovy.lang.MissingMethodException: No signature of method: 
    groovyjarjaropenbeans.PropertyChangeSupport.firePropertyChange() is applicable for argument types: 
    (java.lang.String, groovy.util.ObservableList, groovy.util.ObservableList) values: 
    [cards, [], [ch.codebulb.groovyswissknifeandroidapp.model.Card@41927e20, ...]]
    Possible solutions: firePropertyChange(groovyjarjaropenbeans.PropertyChangeEvent)
            at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2092)
            ...
     Caused by: groovy.lang.MissingMethodException: No signature of method: 
    groovyjarjaropenbeans.PropertyChangeSupport.firePropertyChange() is applicable for argument types: 
    (java.lang.String, groovy.util.ObservableList, groovy.util.ObservableList) values: 
    [cards, [], [ch.codebulb.groovyswissknifeandroidapp.model.Card@41927e20, ...]]
    Possible solutions: firePropertyChange(groovyjarjaropenbeans.PropertyChangeEvent)
            at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:56)
            ...
There also seems to be a problem with enums. If you declare an enum yourself, you may end up with an exception like this:
AndroidRuntime? FATAL EXCEPTION: main
    java.lang.ExceptionInInitializerError
            at ch.codebulb.groovyswissknifeandroidapp.observer.ViewAttribute.text(Unknown Source)
            at ch.codebulb.groovyswissknifeandroidapp.CardActivity.initListener(Unknown Source)
            ...
     Caused by: java.lang.IllegalArgumentException: 
    This class has been compiled with a super class which is binary incompatible with 
    the current super class found on classpath. 
    You should recompile this class with the new version.
            at ch.codebulb.groovyswissknifeandroidapp.observer.ViewAttribute$Type.$INIT(Unknown Source)
            at ch.codebulb.groovyswissknifeandroidapp.observer.ViewAttribute$Type.(Unknown Source)
            at ch.codebulb.groovyswissknifeandroidapp.observer.ViewAttribute.text(Unknown Source)
            at ch.codebulb.groovyswissknifeandroidapp.CardActivity.initListener(Unknown Source)
            ...
I ended up just not using any enums myself. I should probably examine this a little bit closer at some point…

Also, ProGuard sometimes has a problem with closure methods, especially on collections:
AndroidRuntime? FATAL EXCEPTION: main
    b.b.cg: No signature of method: java.util.ArrayList.each() is applicable for argument types: 
    (ch.codebulb.groovyswissknifeandroidapp.observer.TextViewBinding$1$_afterTextChanged_closure1) values: 
    [ch.codebulb.groovyswissknifeandroidapp.observer.TextViewBinding$1$_afterTextChanged_closure1@41dfed48]
    Possible solutions: wait(), head(), max(), any(), last(), head()
            at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.a(Unknown Source)
            at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.a(Unknown Source)
            at org.codehaus.groovy.runtime.callsite.CallSiteArray.a(Unknown Source)
            at org.codehaus.groovy.runtime.callsite.AbstractCallSite.a(Unknown Source)
            at org.codehaus.groovy.runtime.callsite.AbstractCallSite.a(Unknown Source)
            at ch.codebulb.groovyswissknifeandroidapp.observer.TextViewBinding$1.afterTextChanged(Unknown Source)
            at android.widget.TextView.sendAfterTextChanged(TextView.java:7417)
            ...
In this case, I replaced the closure call with an equivalent for loop.
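The pattern is simply the following (the names are purely illustrative):
// closure-based iteration which ProGuard stumbled over:
// watchers.each { it.afterTextChanged(editable) }

// equivalent plain for loop which survives minification:
for (TextWatcher watcher : watchers) {
    watcher.afterTextChanged(editable)
}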



June 27, 2015

IntelliJ IDEA / Android Studio plugin: Save action Groovy scripts



So I accidentally created a new IntelliJ IDEA / Android Studio plugin. It came into existence out of necessity while I was working on an upcoming blog post with Android Studio.

TL;DR? Check out the plugin on GitHub!

What it is

From the plugin’s description: This IntelliJ IDEA / Android Studio plugin allows the user to automatically run custom Groovy scripts when a file is saved / synchronized. Any valid Groovy script is supported; moreover, the plugin exposes a simple API to make file handling especially easy, allowing to simply implement source code formatting, file backups, file transformations, and more. Groovy script execution can be conditionally enabled / disabled based on a regex check on the path of the file which is saved.

Motivation

When working with Android Studio, this plugin allows converting the values/strings.xml file back and forth to a values/strings.properties file.

Yes, Android Studio has built-in support to make XML editing easier. Still, I prefer having my message bundles in the simplistic properties file format, just as is standard in Java EE projects. Using the plugin, I can write bundles in the properties file format without caring about the XML file. This is just one piece in the puzzle to make Android development less painful; I elaborate more on that in the next blog post.
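As a rough idea of what such a conversion script does (plain Groovy here, not the plugin’s actual API; the file paths are illustrative):
def xml = new XmlSlurper().parse(new File('app/src/main/res/values/strings.xml'))
def lines = xml.string.collect { "${it.@name}=${it.text()}" }
new File('app/src/main/res/values/strings.properties').text = lines.join('\n')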

Use it!

Thanks to the highly customizable nature of the plugin, you can use it to do virtually anything in a project’s file structure on file save!

If you’ve ever felt the need to write customized “on save” macros for IntelliJ in Java / Groovy, please take a look at the plugin’s GitHub page. Its usage is thoroughly explained in the accompanying README.

June 14, 2015

Clean and SOLID Java EE code in practice (part 3 of 3)



LSP: Liskov substitution principle

Barbara Liskov’s principle states that you must be able to substitute an object with a derived object without breaking the original functionality. Violating the principle damages the very basics of class inheritance, which makes the code fragile with respect to modification. As such, the LSP can be perceived as a very object-oriented subtype of the OCP.

Example

Here is a rather “classic” violation of the LSP: A subtype is derived from its parent without needing its entire interface. Imagine that we have a book entity that we happily use all over our library application:
public class Book {
    private String name;
    private String author;
    private int pages;
    ...
}
But then one day, we decide we want to have audio books, too. So the audio book would make a subtype of the book in order to easily extend the application:
public class AudioBook extends Book {
    private int length;
    
    @Deprecated
    @Override
    public int getPages() {
        throw new UnsupportedOperationException("pages property is not supported by audio book.");
    }
    @Deprecated
    @Override
    public void setPages(int pages) {
        throw new UnsupportedOperationException("pages property is not supported by audio book.");
    }
    ...
}
In the class diagram, it looks like this:


But obviously, there’s a problem: for an audio book, the pages information doesn’t make sense. In the book entity, the #getPages() method is expected to return the number of pages, but this cannot be applied to the audio book entity (this is also a violation of the ISP, the interface segregation principle).

A “quick and dirty” solution is shown above, where the “surplus” methods are marked with @Deprecated and throw runtime exceptions. However, this very implementation would break existing code, as in
public int getTotalPages(List<Book> books) {
    return books.stream().mapToInt(Book::getPages).sum();
}
The #getPages() method should thus rather return 0, but that doesn’t make it any more right. In a way, the LSP is an “inconvenient” principle because it forces you to reconsider the design of your class hierarchies. In this example case, there are two conceivable solutions which accord with the LSP as well as with the ISP:

The straightforward one is to create a common super class which only contains the shared properties, making the common interface as small as possible, such as
public abstract class Work {
    private String name;
    private String author;
    ...
}
In the class diagram, it looks like this:

Book (adds pages) and AudioBook (adds length) would then both extend it. Now, the aforementioned #getTotalPages(List<Book>) method would remain LSP-compliant: as it only operates on non-audio books, querying the total of pages would always make sense. However, this approach will probably force you to exchange the Book interface for the more general Work interface everywhere either a book or an audio book is acceptable. On the other hand, this is a very lean solution and is definitely the way to go if you think that one day you may extend the subtypes of Work even further, as you’re then ready to do so.

Another solution the LSP offers is to use the composition over inheritance design, where you turn a (dysfunctional) is-a relationship into a has-a relationship. In the example use case, you might say that both a book and an audio book have a kind of content; for the book, it’s pages, and for the audio book, it’s audio tracks. Thus you can model:
public class Book {
    private String name;
    private String author;
    private final Content content;
    ...
}
a book having content, where Content is an abstract class which can either be
public class PaperContent extends Content {
    private int pages;
    ...
}
or
public class AudioContent extends Content {
    private int length;
    ...
}
In turn, you then have to modify the #getTotalPages(List<Book>) method:
public int getTotalPages(List<Book> books) {
    return books.stream().filter(it -> it.getContent() instanceof PaperContent)
            .mapToInt(it -> ((PaperContent)it.getContent()).getPages())
            .sum();
}
In the class diagram, it looks like this:


Generally speaking, with this approach, you will be able to leave your interfaces untouched, but the changes to the entity’s properties will break the code everywhere they are referenced.

Using composition over inheritance also has some more advantages:
  • You’re able to switch the composite relationship at runtime. If this is not desired, as in the example use case (how would you turn a book into an MP3 file?), you will typically mark the composite reference as final.
  • You’re not restricted to 1-to-n relationships, as you can model any n-to-n-relationship.
Depending on your use case, these advantages may drive you to go for a composition-based approach.

ISP: Interface segregation principle

Allow me to cite Robert C. Martin / Wikipedia directly here: the ISP states that no client should be forced to depend on methods it does not use. This is pretty simple and straightforward, and it makes sense. Bloating an interface with methods which are not used decreases readability, but most importantly, it further increases coupling between client and interface. One should thus always strive for narrow, concise interfaces.

Example

I’ve observed violations of this principle coming from a common misconception of the “program to interfaces, not implementations” design principle. In Java in particular, there’s the misconception of “interface” referring to a Java interface rather than the conceptual idea of a contract. However, Java EE 6+ in particular promotes usage of the “no-interface view”, which allows referring to (and injecting) any Java bean without the technical need to declare an interface for that bean.

The Java EE container doesn’t need an interface to refer to a single bean implementation. Even worse, if there were multiple implementations of this interface, without further specification (e.g. through a @Qualifier) the bean container would throw an exception at startup because it finds multiple matches for the interface!

This is an instance of cargo-cult programming in itself, leading to the YAGNI violation of creating interfaces for (in the worst case) every public method of a class.

As an anti-pattern example, take this service implementation:
public class ReservationServiceImpl implements IReservationService {
    @Override
    public List<Reservation> getReservations(Client client) {
        …
    }
    
    @Override
    public List<Reservation> getActiveReservations(Date inbetween) {
        …
    }
    
    @Override
    public void addReservation(Client client, Reservation reservation) {
        …
    }
    
    @Override
    public void updateReservation(Reservation reservation) {
        …
    }
    
    @Override
    public void cancelReservation(Reservation reservation) {
        …
    }
    …
}
which implements IReservationService even though there is no business value nor any technical need that would justify the existence of this interface. This anti-pattern is even worse in a service-oriented architecture, where the number of “service methods” is typically very high, as opposed to a pure CRUD service implementation.

To justify the existence of the interface, it would then have to be referred to in a call from some lower-layered module, e.g. a “view controller”. This approach not only makes the UML diagram look unnecessarily complicated:


but even worse, that “indirection” from the lower layer to the service layer is real, and in everyday programming it means that you have to navigate from the controller to the service by querying for the one interface implementation; and on every change to the interface implementation, the interface has to be updated. It’s just silly: the interface then depends on the implementation, which reduces its existence to absurdity.

Here, the solution is straightforward: get rid of the interface. If the service bean’s implicit interface defines its own implementation anyway, let it take this role. This will reduce the UML to


(You will also probably want to remove the unnecessary “…Impl” suffix.)
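For illustration, the interface-free variant could look as simple as this (assuming an EJB session bean; the names and method bodies are illustrative):
import java.util.Collections;
import java.util.List;
import javax.ejb.Stateless;
import javax.inject.Inject;

@Stateless
public class ReservationService {
    public List<Reservation> getReservations(Client client) {
        // query and return the client's reservations
        return Collections.emptyList(); // placeholder
    }
    // further service methods ...
}

// Callers simply inject the bean through its implicit no-interface view:
class ReservationViewController {
    @Inject
    private ReservationService reservationService;
}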

DIP: Dependency inversion principle

The DIP is sometimes also referred to as the “Hollywood principle” (“don’t call us, we’ll call you”) applied to software architecture. It states that low-level modules should depend on high-level modules’ abstractions rather than high-level modules depending on low-level module implementations.

This concept is key to realize loose coupling which in turn strengthens the SoC principle, as discussed earlier.

It is realized through the Inversion of control (IoC) design pattern which is most prominently implemented in a Java EE bean container through dependency injection mechanisms where dependencies on high level modules are “injected” into low level modules through abstractions (implicit or explicit interfaces).

Even though this concept is quite omnipresent in Java EE projects thanks to the “enforced” usage of IoC containers, I see developers struggling to apply the concept outside of the container, thus missing opportunities to loosen coupling between individual components.

Example

As an example, let’s consider the scenario where a user interface consists of a multi-tab view which is backed by a “controller” bean. Now, every time the user changes to a new tab, the content of this new tab should be refetched from the database so as to guarantee that the new view reflects the database’s current status (e.g. for list views: refetch all the entities).

A very naïve approach would be to just update every single view (i.e. its backing “controller”) on tab change:
public class TabController {
    private BookSearchController bookSearchController;
    private ReservationController reservationController;
    private ClientAccountController clientAccountController;
    
    public void onTabChange(TabChangeEvent event) {
        bookSearchController.update();
        reservationController.update();
        clientAccountController.update();
    }
}
Note that here, the actual violation of the DIP is the fact that TabController holds a reference to every single concrete controller which backs one of the view’s tabs. Therefore, dependency is maximized:




Note that there’s not even any need for the various …Controllers to implement a common interface (here, it’s just coincidence that their update methods all share the same name) as we do not depend on a common interface, but on the concrete implementation of each single controller. This is what violates DIP.

Also, what if you want to add a new controller? You need to modify that code; therefore, it also violates OCP.

As a more runtime-efficient solution, you may want to update only the controller whose tab is being switched to, by querying the TabChangeEvent for a clue as to which controller should be updated. However, as long as the actual controller instance is held by the TabController bean, nothing changes from a DIP / OCP point of view:
public class TabController {
    private BookSearchController bookSearchController;
    private ReservationController reservationController;
    private ClientAccountController clientAccountController;
    
    public void onTabChange(TabChangeEvent event) {
        switch (event.getControllerName()) {
            case "book": bookSearchController.update(); break;
            case "reservation": reservationController.update(); break;
            case "client": clientAccountController.update(); break;
        }
    }
}
(Also, switching on a String like this is terribly anti-OCP.)

In order to get rid of the interdependency chaos, you need to find a way for the TabChangeEvent to directly return a reference to the controller which it should update.

Given that you can provide a TabChangeEvent similar to this:
public abstract class TabChangeEvent {
    public abstract BaseController getController();
}
We can then work with a BaseController abstraction which every tab content controller must implement:
public abstract class BaseController {
    public abstract void update();
}
Therefore, TabController doesn’t have to hold any dependency to a concrete tab content controller anymore:
public class TabController {
    public void onTabChange(TabChangeEvent event) {
        event.getController().update();
    }
}
Dependencies are thus minimized:



In a Java EE environment, where controllers are supposed to be container-managed beans, TabChangeEvent would typically return the respective controller instance by bean manager lookup.
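For illustration, one concrete variant of the abstract TabChangeEvent shown above could resolve the controller via CDI’s programmatic lookup (the names are illustrative):
import javax.enterprise.inject.spi.CDI;

public class ControllerTabChangeEvent extends TabChangeEvent {
    private final Class<? extends BaseController> controllerType;

    public ControllerTabChangeEvent(Class<? extends BaseController> controllerType) {
        this.controllerType = controllerType;
    }

    @Override
    public BaseController getController() {
        // programmatic lookup of the container-managed controller bean
        return CDI.current().select(controllerType).get();
    }
}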

Conclusion

Writing software is easy and everyone can do it. Writing maintainable software, however, is hard, and takes years of experience. This is what makes you a professional engineer and a true software craftsman.

Luckily, great minds of the industry have come up with best practices and common design patterns to increase software quality and maintainability. However, I am surprised to find that even in professional environments, knowledge and acceptance of these common best practices is not always present. I am unsure whether this is caused by lack of interest, lack of knowledge or mismanagement.

With this blog post, I wanted to provide a comparison of anti-patterns and best practices for some of the most well-known software design principles, backed by practical application of those best practices.

Please tell me in the comments whether you can relate to this post, whether you found it helpful or if, on the other hand, it lacks important information or contains any errors. I highly appreciate your feedback.




Clean and SOLID Java EE code in practice (part 2 of 3)



YAGNI: You ain’t gonna need it

Violating YAGNI means implementing functionality which is not actually needed. This is typically KISS applied on a broader scope. Principle violations typically come either from pre-implementing features which are not yet needed and later turn out not to be needed at all, or from applying outdated or inappropriate design patterns based on a vague, improper understanding of their actual application.

The former is prevented by properly applying agile software development practices. 

The latter is also known as the cargo-cult programming anti-pattern. It shows that improper application of best practices, or a lack of understanding of them, can actually change code for the worse, which may severely damage software maintainability.

Example

As an example, the Java EE stack with its long history offers quite a few possibilities to apply improper or outdated design principles. Some of the most common misconceptions I have come across in my career revolve around the application of both the DAO and the DTO design pattern. As Adam Bien states in his blog, both the DAO and the DTO have nowadays lost their meaning, as their original use cases have been wiped from the Java EE landscape with newer Java EE editions; in short:
Of course, there may be other considerations in your specific case; but generally speaking, implementing DAOs or DTOs must be well justified nowadays.

In the example landscape, one could thus typically cut off both DAO and DTO layers, which shortens an architecture like this:


into this:

Yes, we can get rid of a whole layer as well as of massive information duplication just by using state-of-the-art technology and adhering to pragmatic best practices. Adam Bien identifies a lot more “retired design patterns” when working with a Java EE 6+ stack.

The lesson learnt here is that rather than sticking to whatever someone may have heard being promoted as a “best practice” some time ago, you have to investigate and understand the underlying problem, and you typically have to do so at project start. Violating YAGNI introduces a great amount of technical debt into the overall landscape, which costs a lot of effort before it finally gets killed off.

OCP: Open / closed principle

This principle can be formulated as “making components open for extension while keeping them closed for modification”, i.e. you should be able to add new functionality without changing existing code.

Although this is a leading principle of basic inheritance mechanisms, I see it violated very frequently.

Example

In its most basic form, this violation is signaled by inappropriate use of the instanceof operator.

For instance, say that we need to determine the maximum of books a customer can borrow simultaneously based on the type of customer. Consider this implementation:
public int calculateMaxNumberOfSimultaneousBorrows(Customer customer) {
    if (customer instanceof TrialCustomer) return 1;
    if (customer instanceof NormalCustomer) return 5;
    if (customer instanceof PlatinumCustomer) return 10;
    throw new IllegalArgumentException("Customer type not supported: " + customer.getClass());
}
I assert that this implementation is easily broken: whenever a new customer type is added, this code needs to be modified. This is exactly what violates the OCP: an extension in one place enforces a change in another place, namely in this method. But how is the developer supposed to remember to update this method when he adds a new customer subtype? This implementation could also be considered a violation of SoC (separation of concerns) as well as of the LSP (Liskov substitution principle). The fact that Java doesn’t allow switching over classes makes the code even harder to maintain.

This is in fact the classic use case for inheritance, which can here be implemented like so:
public int calculateMaxNumberOfSimultaneousBorrows(Customer customer) {
    return customer.getMaxNumberOfSimultaneousBorrows();
}
where the #getMaxNumberOfSimultaneousBorrows() method is defined in the abstract base class:
public abstract class Customer {
    public abstract int getMaxNumberOfSimultaneousBorrows();
}
and implemented in each concrete subclass, e.g.
public class NormalCustomer extends Customer {
    @Override
    public int getMaxNumberOfSimultaneousBorrows() {
        return 5;
    }
}
Adding a new customer type now is easy as it provides the definition of its #getMaxNumberOfSimultaneousBorrows() method itself, without requiring any changes in existing code.

Actually, this makes the #calculateMaxNumberOfSimultaneousBorrows() method (which would be implemented either statically or in a service layer bean) superfluous, because we have basically changed the architecture style from service-oriented to object-oriented, which in general suits an OO-based language like Java better.

The main lesson learnt here is that except for a few cases of low-level reflection-based programming, usage of instanceof is always a sign (a “code smell”) of bad inheritance / bad OOP application.

Similar OCP violations include switch/case-like conditional choices over magical ints, Strings or enums.

SRP: Single responsibility principle

SRP is about identifying a unit’s (e.g. a class’s) responsibility and implementing functionality which is not part of that responsibility in other parts of the system, so that a class has only one reason to exist, and one reason to change. This helps us achieve high cohesion, which is a measure of how closely related a single unit’s responsibilities are.

It’s another principle which I see violated very often. This typically leads to lowered cohesion, which results in “god classes” with cluttered functionality, many lines of code and decreased maintainability.

Example

In this example, we want to return an address in a printer-friendly format. Here is the code:
public static class AddressFields {
    private String street;
    private String co;
    private String zipCode;
    private String city;
    private Country country;
    
    @Override
    public String toString() {
        StringBuilder builder = new StringBuilder();
        builder.append(street).append("\n");
        appendToBuilderUnlessEmpty(builder, co);
        builder.append(zipCode).append(" ").append(city).append("\n");
        appendToBuilderUnlessEmpty(builder, country.getLocalizedName());
        return builder.toString();
    }
    
    private static void appendToBuilderUnlessEmpty(StringBuilder builder, Object object) {
        if (!Strings.isEmpty(object)) {
            builder.append(object);
            builder.append("\n");
        }
    }
}
The good thing is that the code is object-oriented rather than service-oriented: The AddressFields knows how to pretty-print itself through the #toString() method (although I would consider using a less fragile, explicitly user-defined method).

The bad thing is that String concatenation logic is now cluttered into the address entity, which is actually a completely different concern. This class now obviously has two main responsibilities: holding address information and String formatting, and the latter clearly does not belong here.

In order to become SRP compliant, move String formatting to a dedicated class:
public class StringFormatter {
    public static String join(String separator, String... elements) {
        return Arrays.stream(elements).filter(it -> it != null && !Strings.isEmpty(it))
                .collect(Collectors.joining(separator));
    }
}
(Implementation detail: empty (null) elements are ignored and do not introduce a separator.)

Now, the entity class is free of any String manipulation logic:
public static class AddressFields {
    private String street;
    private String co;
    private String zipCode;
    private String city;
    private Country country;
    
    @Override
    public String toString() {
        return StringFormatter.join("\n",
                street, co, zipCode, city, country.getLocalizedName()
        );
    }
}

SoC: Separation of concerns

SoC is about separating and encapsulating discrete parts of the system into separate, autonomous units, shielding their internals from other parts through information hiding.

When applying separation of concerns, it is key to strive for loose coupling as well, which is a measure of how closely interweaved the autonomous units of a system are. If coupling is high, the benefits of separation of concerns are minimized, as proper encapsulation is broken.

Example

One of the most prominent examples of separation of concerns is the MVC pattern, separating the model, view and controller parts of a system from each other. I’m afraid providing examples of violations of this pattern would deserve a blog post of its own, so I will only cover one “classic case” here: the “model” being mingled with the “controller”.

Given there’s a controller which holds a list model:
public class ReservationController {
    private List<Reservation> reservations;

    public List<Reservation> getReservations() {
        return reservations;
    }
}
What would a service method look like which is in charge of saving all the controller’s reservations to the database? Well, this is the wrong way:
public class ReservationService {
    public void save(ReservationController reservationController) {
        List<Reservation> entities = reservationController.getReservations();
        for (Reservation entity : entities) {
            save(entity);
        }
    }
    
    ...
}
Because #save(…) takes the whole ReservationController as its argument, the two components are tightly coupled, and the service invades the controller and its internal concerns.

Instead, of course, ReservationController will just pass its model (the reservation list) to the service:
public class ReservationService {
    public void save(List<Reservation> entities) {
        for (Reservation entity : entities) {
            save(entity);
        }
    }
    
    ...
}
Originally, I encountered this kind of SoC violation on a JSF Facelet page, where it was slightly harder to identify. In order not to clutter this article with XHTML code, I presented a purely Java-based implementation here. Note, however, that most of these anti-patterns can occur in any code artifact.


Clean and SOLID Java EE code in practice (part 1 of 3)




Writing software to deliver functionality is not enough in the real world. Software has to be maintainable right after its initial programming, and typically for many years after its release. Luckily, there are some well-established design patterns and best practices which help us achieve high software quality. Rather than publishing another article about how things are done in the perfect world, I will here present some real-world examples of violated or misinterpreted design patterns I found throughout my career – and how to fix them.

In this article, I will cover nine of the most well-known (object-oriented) software design principles, five of which make up the famous S.O.L.I.D. principles.

I see some of these principles violated quite often in everyday programming. For each principle, I will present a particular violation or misconception, based on what I have observed in a software development project I was involved in, and show how the design flaw can be fixed by applying true best practices.

To achieve some uniformity, I came up with a fictional example project in which the following code samples are embedded: an online portal for a library where clients can make book reservations.

KISS: Keep it simple, stupid

“Keep it simple, stupid” is probably the most important software design principle. It shortens meetings and acts as an advocate for clean, readable code.

KISS typically increases ease of initial programming, but it may decrease maintainability if overly or improperly used.

In general, I’ve found that there is a serious misconception about this principle which turns it into “keep it stupid”, and that is not what the principle is about. In that case, it is used as an excuse not to work out a cleaner abstraction, and hence introduces code which typically violates DRY and other clean code principles.

Example

As an example, I will try to demonstrate the limits of KISS applicability.

For instance, given this simple requirement: When the user queries for a library book he’d like to borrow, a simple availability check is performed which, if it fails, leads to an error message. Here’s how you could implement the logic to return that error message:
public String checkBookAvailableSimple(String name) {
    List<Book> books = service.getByName(name);
    if (books.isEmpty() || books.get(0).getCurrentReservation().isActive()) {
        return "book.reservation.check.notAvailable";
    }
    return null;
}
This is KISS in action. We use a very simple syntax and basic JDK classes; there is no further need to encapsulate or abstract things.

Now imagine the requirement becoming more complex: When the user queries for a library book he’d like to borrow, multiple validation checks are performed. Some of them are considered critical and immediately yield an error message, whilst others just add warning messages which are collected and, once all validation checks are completed, returned in a list.

Rigorously following the KISS principle, one could continue to implement the logic like so:
public List<String> checkBookAvailable(String name) {
    List<String> messages = new ArrayList<>();
    List<Book> books = service.getByName(name);
    if (books.isEmpty()) {
        messages.add("book.reservation.check.noMatch");
    }
    else if (books.size() > 1) {
        messages.add("book.reservation.check.duplicate");
    }
    else {
        if (books.get(0).getCurrentReservation().isActive()) {
            messages.add("book.reservation.check.currentReservation.active");
        }
        if (books.get(0).isDamaged()) {
            messages.add("book.reservation.check.damaged");
        }
    }
    ...
    
    return messages;
}
However, this code has tremendously increased in complexity. There are complex conditional branches, and there is mutable state (as held by the messages list).

(As we will see later, this code also violates SRP (Single responsibility principle) by mixing book reservation logic with validation message building.)

This code is hard to extend and maintain. It really is a naïve approach to the problem. It is KIS (“keep it stupid”) because the developer is in charge of managing control flow and state manually, which makes it error-prone.

By introducing some level of abstraction, the code can be enormously simplified:
public List<String> checkBookAvailable(String name) {
    List<Book> books = service.getByName(name);
    BookReservationValidator validator = new BookReservationValidator();
    return validator
            .check(books.isEmpty(), "book.reservation.check.noMatch")
            .ifOkCheck(books.size() > 1, "book.reservation.check.duplicate")
            .ifOkCheck(books.get(0).getCurrentReservation().isActive(), "book.reservation.check.currentReservation.active")
            .check(books.get(0).isDamaged(), "book.reservation.check.damaged")
            ...
            .getMessages();
}
Here, I introduced the BookReservationValidator class which manages error messages and provides a simple API, based on the builder pattern, to run new checks and query the messages.
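
The validator class itself isn’t listed in this article; the following is only a minimal sketch of what it could look like, matching the check(…) / ifOkCheck(…) / getMessages() calls used above (the actual implementation may differ):
import java.util.ArrayList;
import java.util.List;

public class BookReservationValidator {
    private final List<String> messages = new ArrayList<>();

    // Records the message if the failure condition holds.
    public BookReservationValidator check(boolean failed, String message) {
        if (failed) {
            messages.add(message);
        }
        return this;
    }

    // Runs the check only as long as no previous check has failed.
    public BookReservationValidator ifOkCheck(boolean failed, String message) {
        return messages.isEmpty() ? check(failed, message) : this;
    }

    public List<String> getMessages() {
        return messages;
    }
}
Note that with this simple boolean-based signature, the check arguments are still evaluated eagerly at the call site, so expressions such as books.get(0) need to be guarded against an empty list; a variant taking a BooleanSupplier would defer that evaluation.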

This code is far easier to maintain thanks to the self-documenting nature of the BookReservationValidator API.

(And it respects SRP. Validation message building is now encapsulated in BookReservationValidator and can be extended and tested independently of the concrete “book reservation” business case.)

Of course, this code example is still kept very simple for illustrative purposes; for a case this small, it borders on over-engineering. As a general rule, the more complex the functionality is, the more dangerous it is to drift from KISS to KIS, and the more a fluent interface, e.g. based on the builder pattern, may increase maintainability.

DRY: Don’t repeat yourself

Simply put, DRY is the opposite of “copy-paste programming”, which is generally considered a very poor and potentially dangerous coding practice as it seriously hampers maintainability.

I would rank this as my personal second leading principle, right after KISS. DRY is what you should strive for, but in certain situations, eradicating every last code duplication just isn’t worth the effort, would make a solution too complicated, or is not even possible. Common examples are XML files or JSF Facelets pages.

DRY typically demands more initial effort, but increases overall maintainability. Given these properties, it really is a counterforce to the KISS principle. On Stack Overflow, I found the wonderful advice to always “keep your KISSes DRY”. In good programming practice, these two principles should not compete, but complement each other:


While I see DRY typically respected when implementing common static helper methods and the like, I’ve observed people struggling to apply it at higher abstraction levels in particular.

Example

Say, for instance, we have client contact information consisting of three parts (contact, invoice and delivery information) which could be rendered in the GUI like so:



A very naïve approach to map this into a business model class would be to model every single piece of information as an individual field, like so:
public class ContactInformation {
    private String contactStreet;
    private String contactCity;
    private String contactZipCode;
    private String contactPhone;
    private String contactEmail;
    
    private String invoiceStreet;
    ...
}
In UML terms, this would be rendered like so:

A contact information consists of 11 fields. When ORM-mapped into a DB table, this object would be represented as a single table row.

There is no inner structure, which makes the code hard to read, and there is potentially a lot of code duplication: instead of defining once what a street, a city or a zip code is and then referring to it three times, we have to define each of them three times individually. This design favors copy-paste coding, which is error-prone and hard to maintain.

Imagine, for example, that there is functionality to auto-fill the city when the zip code input loses focus. With this naïve approach, as there is neither abstraction nor generalization, one has to implement this functionality for each individual field, like so:
public class ContactInformationController {
    public void updateContactCity(ContactInformation model) {
        String city = zipCodeService.getCity(model.getContactZipCode());
        if (city != null) {
            model.setContactCity(city);
        }
    }
    
    public void updateInvoiceCity(ContactInformation model) {
        String city = zipCodeService.getCity(model.getInvoiceZipCode());
        if (city != null) {
            model.setContactCity(city);
        }
    }
    ...
}
Did you spot the copy-paste error which crept in? Anyway, this is very bad programming style. Programming really is about building abstractions where appropriate, and not doing so leads to poor code quality.

Worst of all, this implementation does not match the reality of the business domain. If we look closely at the business model as it is defined, it is apparent that we actually do have reusable components, and the code should reflect that. We can identify these patterns:


Note that:
  • The blue parts are re-occurring components
  • The lighter blue parts are optional extensions to a single blue component
With Java’s basic OO mechanisms, we can easily model these relationships:
  • Composition for re-occurring parts
  • Inheritance for extensions
What I have in mind is a class diagram which looks like this:

Which is then implemented like this:
public class ContactInformation {
    private ContactFields contact;
    private AddressFields invoice;
    private AddressFields delivery;

    public static class AddressFields {
        private String street;
        private String city;
        private String zipCode;
        ...
    }
    
    private static class ContactFields extends AddressFields {
        private String phone;
        private String email;
        ...
    }
}
Of course, in the view, the input box for the street of the contact address would now map to the nested property contactInformation.contact.street.

Because we now have the address encapsulated in a separate entity, we can implement the “update city” functionality against that abstraction, thus getting rid of copy-paste code:
public class ContactInformationController {
    ...
    public void updateCity(AddressFields model) {
        String city = zipCodeService.getCity(model.getZipCode());
        if (city != null) {
            model.setCity(city);
        }
    }
}
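Depending on the view technology, the same method can now be wired to all three address blocks; a minimal sketch, assuming the obvious getters on ContactInformation and one hypothetical controller callback per input:
// Hypothetical delegating callbacks; the getters are assumed. The auto-fill logic itself lives only in updateCity(AddressFields).
public void updateContactCity(ContactInformation model)  { updateCity(model.getContact()); }
public void updateInvoiceCity(ContactInformation model)  { updateCity(model.getInvoice()); }
public void updateDeliveryCity(ContactInformation model) { updateCity(model.getDelivery()); }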
This design has an overwhelming number of advantages:
  • The structure of the business model source code matches the structure of the real world equivalent, including the structure of the UI, making the code easier to read.
  • The “container” objects are very simple. The main container, ContactInformation, really is just a POJO which doesn’t care about the sub-containers it consists of. This aligns with the SoC (Separation of concerns) principle (see below) and makes the design easily extensible.
  • Code duplication is minimized. Even in the GUI, you can probably (depending on the technology) build a component template for an AddressFields block (blue part) and include it three times with different parameters.
  • As for persistence concerns: because the individual parts are now modeled as independent entities, you are free to lower transaction boundaries to the level of individual address containers and to use lazy loading. Because you’ll typically use an automated ORM framework, the complexity introduced on the database level shouldn’t be an issue.
Before you go and abstract everything, however, still keep KISS in mind.


June 7, 2015

Java 8 lambdas vs. Groovy closures (part 2 of 2)


After taking a closer look at how Java 8 lambdas work, and after some practical experience, I have to conclude that Groovy’s closures are superior to their Java 8 counterparts in every aspect, but most importantly from an “ease of development” point of view. Here comes part two of my in-depth assessment. Please check out part 1 here first.

Collection API

Most prominently, closures and lambdas are put to use in the collection APIs, which define many ways to iterate over a collection.

Stream it, map it and collect it?

As you may have already observed, Groovy closure calls for collections are in general much more concise than their lambda equivalent.

This is because in Groovy, closures are directly integrated in the actual Collection API: closures work on collections, and they return collections. You can thus operate on the actual collection without the need for any “builder pattern” approach to create intermediate objects. The single method call you need is a method named after the desired functionality.

For instance, this line of code collects the result of upper-casing each element of a list into a new list which it returns:
INPUT_LIST.collect {String myVar -> myVar.toUpperCase()}
Below is the equivalent Java code. As you can see, one has to apply a complex builder sequence because lambdas cannot operate on collections directly and the actual computations are executed only in intermediate builder elements:
INPUT_LIST.stream().map((String it) -> it.toUpperCase()).collect(Collectors.toList());

With or without index

The Java 8 streams API lacks the feature of getting the index of a stream element. Thus there is no Java equivalent to the following concise Groovy code:
INPUT_LIST.eachWithIndex {it, i ->
    OUTPUT_MAP[i.toString()] = it.toUpperCase()
}
return OUTPUT_MAP;
where it is the current iteration element and i is its index in the collection.

Please check out this stackoverflow thread for (rather nasty) workarounds with Java lambdas.
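
One workaround I would consider, sketched here under the assumption that INPUT_LIST is a random-access List<String>, is to stream over the indices instead of the elements:
// Build the index -> upper-cased element map by iterating over the index range.
Map<String, String> outputMap = IntStream.range(0, INPUT_LIST.size())
        .boxed()
        .collect(Collectors.toMap(String::valueOf, i -> INPUT_LIST.get(i).toUpperCase()));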

Super-concisely collecting elements

One of the most frequently used iterative functions is creating a new collection by applying a function to every element of a provided original collection; this is the “map” part of the famous map-reduce algorithm. It’s super easy to write in plain Groovy:
INPUT_LIST.collect {it.toUpperCase()}
In addition to that, Groovy provides an even more concise version using the special “spread dot operator”. This line of code is equivalent:
INPUT_LIST*.toUpperCase()
The spread dot operator applies a method not to the collection itself, but to every element of the collection, returning a new collection consisting of the method call return values.

With Java lambdas, there is no such thing. You have to stick with the one single version available for mapping:
INPUT_LIST.stream().map(it -> it.toUpperCase()).collect(Collectors.toList());

Collecting maps

Collect / map functionality which involves Map objects works quite differently in Groovy and Java. As always, the Groovy version is typically more concise.

When collecting List elements into a Map, the Groovy closure passed to collectEntries returns one map entry per element:
INPUT_LIST.collectEntries {[(it): it.toUpperCase()]}
whereas the Java lambda version uses a dedicated #toMap(…) Collector:
INPUT_LIST.stream().collect(Collectors.toMap(it -> it, it -> it.toUpperCase()));
For the opposite case, collecting Map entries into a List, there’s a Groovy closure variant which takes two parameters: the first one is the key, the second one is the value:
INPUT_MAP.collect { key, value -> key + "=" + value }
This is very intuitive and straightforward. Another version which works with a single MapEntry parameter is also available.

That latter version is however the only one available with Java lambdas:
INPUT_MAP.entrySet().stream().map(it -> it.getKey() + "=" + it.getValue()).collect(Collectors.toList());

Easy finding / filtering

Another very frequent operation is finding each element in a collection which matches a given predicate, which could also be perceived as “filtering” functionality.

Again, this is very straightforward in Groovy:
INPUT_LIST.findAll {it == "c"} as List
INPUT_LIST.find {it == "c"}
Note that without as List, the first line would return a Collection object, not a List, even if the original collection is in fact a List! The second version returns the first matching object. The above code also relies on the fact that in Groovy, == is mapped to the #equals(…) method.

The Java version, on the other hand, is slightly more chatty:
INPUT_LIST.stream().filter(it -> it.equals("c")).collect(Collectors.toList());
INPUT_LIST.stream().filter(it -> it.equals("c")).findFirst().get();
Note that the last line would return an Optional<T> object if it weren’t for the #get() call.

Sorting and comparing

Of course, both lambdas and closures support sorting and comparing collection elements, and once again, the Groovy closure versions are far more concise.

Here’s the Groovy version of sorting a List of Strings according to the first characters of each String in reverse order:
INPUT_LIST.sort(false){(it as String).charAt(0)}.reverse(false)
The two boolean parameters with value false will make sure sorting is executed on a copy of the list, not on the original.

At the moment, #sort(…) breaks type inference, so the element must be explicitly cast to String. This is a known bug.

Intriguingly, Java lambdas also have type inference problems when chaining comparators. For instance, the following line would not compile:
INPUT_LIST.stream().sorted(Comparator.comparing(it -> it.charAt(0)).reversed())
    .collect(Collectors.toList());
because the lambda parameter cannot be inferred to be of type String.
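
In my experience, giving the lambda parameter an explicit type is usually enough to make this form compile (a sketch; the rest of the pipeline stays unchanged):
INPUT_LIST.stream().sorted(Comparator.comparing((String it) -> it.charAt(0)).reversed())
    .collect(Collectors.toList());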

For the rather simple reversing use case though, there luckily is an alternate syntax which works with inference:
INPUT_LIST.stream().sorted(Comparator.comparing(it -> it.charAt(0), Comparator.reverseOrder()))
    .collect(Collectors.toList());

The details are discussed in this stackoverflow thread.

The same problem appears when chaining comparators with #thenComparing(…), but this seems to be a defect of the Eclipse compiler only.

Let’s see another example: finding the max value based on a calculation done on each element of the collection. Here’s the Groovy solution:
INPUT_LIST.max { it.charAt(0) }
And here’s the Java lambda counterpart:
INPUT_LIST.stream().collect(Collectors.maxBy(Comparator.comparing(it -> it.charAt(0)))).get();
Note that maxBy returns an Optional<T> which you need to read through #get().
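
A slightly shorter alternative on the Java side is Stream#max(…), which skips the collector but still hands back an Optional<T> (again with an explicit parameter type to help inference along):
INPUT_LIST.stream().max(Comparator.comparing((String it) -> it.charAt(0))).get();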

Other convenience functions

Both Groovy and Java provide many auxiliary functions on collections which have been implemented with closures or lambdas, respectively. In general, though, the Groovy closure versions are more concise.

Here’s how to flatten a nested collection in Groovy:
INPUT_LIST_NESTED.flatten()
And here’s the same instruction in Java:
INPUT_LIST_NESTED.stream().flatMap(Collection::stream).collect(Collectors.toList());
Here’s how to join collection elements into a String with separator in Groovy:
INPUT_LIST.join(", ")
And here’s the Java equivalent:
INPUT_LIST.stream().collect(Collectors.joining(", "));
Note that in this latest example, the actual functionality happens in the #collect(…) “reducer” method.
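
As an aside, for joining a collection of Strings, plain Java 8 also offers String#join(…), no stream required:
String.join(", ", INPUT_LIST);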

Numeric / Range API

Another quite useful application for iteration and thus functional programming is dealing with numeric ranges.

Groovy actually provides a Range collection type which makes the syntax very smooth and natural:
(RANGE_START..RANGE_END_INCLUSIVE).collect()
Java supports ranges as well, but their instantiation is clearly more cumbersome:
IntStream.rangeClosed(RANGE_START, RANGE_END_INCLUSIVE).boxed().collect(Collectors.toList());

Compiler errors

Finally, one last annoyance about Java 8 lambdas I will share here is the error reporting for compile-time errors. These messages are typically just horrible, leaving the developer utterly confused.

For instance, what I did here was… wait, can you guess it from the compiler error message? Here’s the code:
IntStream.rangeClosed(RANGE_START, RANGE_END_INCLUSIVE).collect(Collectors.toList());
And that’s the error:
COMPILATION ERROR : 
-------------------------------------------------------------
projects/Java8LambdasVsGroovyClosures/src/main/java/ch/codebulb/java8lambdasvsgroovy/RangeTestsJava.java:[12,-1] 
1. ERROR in projects/Java8LambdasVsGroovyClosures/src/main/java/ch/codebulb/java8lambdasvsgroovy/RangeTestsJava.java (at line 12)
 return IntStream.rangeClosed(RANGE_START, RANGE_END_INCLUSIVE).collect(Collectors.toList());
                                                                ^^^^^^^
The method collect(Supplier<R>, ObjIntConsumer<R>, BiConsumer<R,R>) in the type IntStream is not applicable for the arguments 
    (Collector<Object,capture#1-of ?,List<Object>>)
----------

projects/Java8LambdasVsGroovyClosures/src/main/java/ch/codebulb/java8lambdasvsgroovy/RangeTestsJava.java:[12,-1] 
2. ERROR in projects/Java8LambdasVsGroovyClosures/src/main/java/ch/codebulb/java8lambdasvsgroovy/RangeTestsJava.java (at line 12)
 return IntStream.rangeClosed(RANGE_START, RANGE_END_INCLUSIVE).collect(Collectors.toList());
                                                                        ^^^^^^^^^^^^^^^^^^^
Type mismatch: cannot convert from Collector<Object,?,List<Object>> to Supplier<R>
----------
I forgot to apply the #boxed() method on the range, which would box the stream’s primitive int values into Integer objects, turning the IntStream into a Stream<Integer> that can be collected into a List.

This is pretty much a typical error message for any lambda compilation error (they’ll usually pop up in your IDE already rather than on the command line, but the problem stays the same). They typically include loads of references to nested generic types, which of course is to be expected given the underlying generic abstraction layer of lambdas, but it makes those error messages very hard to decipher nonetheless. I have to admit that in most cases the error message doesn’t help me at all in finding the error. I don’t care about how complex the underlying mechanisms of lambdas are; I expect proper error reporting on the “lambda layer”, without inner classes and generic abstractions leaking through.

I have to admit that Groovy isn’t exactly well known for its excellent error reporting either, mostly due to its highly dynamic nature. Nonetheless, closure errors will in my experience typically either lead to quite readable error messages, or Groovy will even run through anyway, with an exception and stack trace at runtime (this is why dynamically typed programs are typically developed test-first).

Conclusion

As I wrote in this article’s introduction, I have been looking forward to using the new exciting concept of lambdas in Java mostly because I really got into the apparently similar closure feature in Groovy, hoping to apply the same level of conciseness and readability to Java programs. Well, I was very wrong!

Both concepts share the same basic idea of promoting functional programming, thus reducing side effects and facilitating parallel computing, which I highly appreciate. And both languages doubtlessly do that very well.

However, there’s a huge difference in language design. Whilst in Groovy, writing iteration logic in closures is actually more readable, thus increasing maintainability over e.g. nested for-loops, this is in my opinion not true for Java lambdas. Quite the contrary: lambdas are hard to write and quite hard to read, with lots of boilerplate code, which forces us to trade off the advantages of functional programming against the negative impact lambda coding style has on maintainability, and that is very bad.

There are some good ideas behind lambdas, but they don’t really matter at the end of the day when you just want to quickly implement some common iteration algorithm. In my opinion, both the grammar as well as the API need some serious revision.

When I do make use of lambdas in real world projects, I do so primarily for two use cases:
  • for specific iteration algorithms such as #find(…) which can actually increase readability when compared with nested conditional for-loops;
  • and to implement the command pattern for runtime pluggable / interchangeable behavior.
I originally intended to write this article for Groovy developers who are interested in the differences from lambdas, and as a starting point for mapping their Groovy closure knowledge to the lambda world, which I found quite hard myself. However, I would like to address Java developers as well and simply share my thoughts on the current state of lambdas. If you are a Java developer who has read this far, I am keen to hear whether you share some of my concerns. Feel free to comment below with your experience and thoughts on working with Java 8 lambdas.

Again, you may also want to check out the accompanying GitHub project which contains all the code examples presented within the text, some additional lambda / closure examples, and JUnit tests which verify the equivalence of each lambda / closure implementation pair.

Update September 6, 2015: Meanwhile, I’ve created LambdaOmega, a small wrapper API for the Java Collection API which is simpler, more concise and more powerful than its vanilla Java counterpart. It especially fixes many flaws of the lambda API as discussed in this article. Check out the accompanying blog post here or visit its GitHub page. Version 0.1 is now RELEASED!

