Sakila: Dagger 2 Dependency Injection

REST web server and dependency injection

The code from this article is available here.

I already described the basic usage of Dagger 2 in this article; now we need to implement the dependency injection mechanism in a web application. A typical web application has at least two layers: one for the web server itself and another for client request processing.

The Spark Java web server internally uses an embedded Jetty web server. To set up and start the server we provide some central services such as configuration, authentication, statistics, etc. Those services are usually instantiated once per application.

For each client request a new thread is allocated for the whole duration of request processing. Each client request triggers a method defined in the router.

Depending on the design decisions, we usually want each request to be processed by a new object instance (an example is ActorResource in the picture). If we don't create a new instance each time a request is received, we will probably run into multi-threading problems or simply leak data from one user to another.

Some objects needed in a typical request-processing scenario have different lifecycle requirements. For example, a database transaction object must be the same for the whole duration of the started transaction but different for each user; sometimes we even need two independent transactions in one service request, and a transaction usually spans multiple service objects.

On the other hand, when we do not require a transaction (read-only processing), we are better off using the first available connection from the connection pool for the shortest time possible.

As we can see from the use cases above, there are very different lifecycle scenarios and dependency injection must support them all.

Scopes

We will need at least two scopes for our web application. The first is the application-level scope. In Dagger this scope is available by default: each time we tag a class with “@Singleton”, the object is instantiated at the application level and all subsequent requests for this object return the same instance. The singleton therefore represents the “application scope” by default, with no need for a specific scope definition.

Classes without any scope annotations (no @Singleton or any other scope) are always provided with a new instance.

To manage injection at the application level we create an ApplicationComponent interface and an ApplicationModule class.

The application module class demonstrates the following use cases (a sketch follows the list below):

  • creating an instance with a supplied constructor parameter (ConfigProperties service)
  • instantiating objects from external dependencies with a provide method (Gson)
  • instantiating a specific implementation for an interface (ResponseTransformer interface)
  • instantiating a String object with a @Named annotation (using the name as a differentiator)
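A minimal sketch of how this application-level wiring could look is shown below; the JsonTransformer implementation, the provider method names and the returned values are assumptions used only for illustration, not the article's verbatim code:

import javax.inject.Named;
import javax.inject.Singleton;

import com.google.gson.Gson;

import dagger.Component;
import dagger.Module;
import dagger.Provides;
import spark.ResponseTransformer;

@Module
public class ApplicationModule {

    private final String configFileName;

    // the configuration file name is supplied from the outside (e.g. from the main method)
    public ApplicationModule(String configFileName) {
        this.configFileName = configFileName;
    }

    // use case 1: creating an instance with a supplied constructor parameter
    @Provides @Singleton
    ConfigProperties provideConfigProperties() {
        return new ConfigProperties(configFileName);
    }

    // use case 2: instantiating an object from an external dependency
    @Provides @Singleton
    Gson provideGson() {
        return new Gson();
    }

    // use case 3: binding a specific implementation to an interface
    // (JsonTransformer is a hypothetical implementation of Spark's ResponseTransformer)
    @Provides @Singleton
    ResponseTransformer provideResponseTransformer(Gson gson) {
        return new JsonTransformer(gson);
    }

    // use case 4: a String differentiated by a @Named annotation
    @Provides @Named("appName")
    String provideAppName() {
        return "sakila-web";
    }
}

// in its own file: the application-level component that exposes the request sub-component
@Singleton
@Component(modules = ApplicationModule.class)
public interface ApplicationComponent {
    RequestComponent requestComponent(RequestModule requestModule);
}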

Request scope

To create the “request scope” we write one annotation interface (“@interface”), one component (RequestComponent) and at least one module class (RequestModule). The component must carry the @Subcomponent annotation.

To manage DI at the request-scope level we create the RequestScope annotation type, the RequestComponent interface and the RequestModule class.

To be clear, each class annotated with “@RequestScope” will be instantiated exactly once per created instance of the RequestComponent class. In other words, scope annotations represent local singletons.
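In sketch form (the entry point exposed by the component is an assumption):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

import javax.inject.Scope;

import dagger.Subcomponent;

// custom scope: one instance per RequestComponent instance (a "local singleton")
@Scope
@Retention(RetentionPolicy.RUNTIME)
public @interface RequestScope {
}

// in its own file: the sub-component created once per client request
@RequestScope
@Subcomponent(modules = RequestModule.class)
public interface RequestComponent {
    ActorResource actorResource();
}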

Module class:
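A possible shape of the request module, assuming a DataSource binding is available from the application level; the provider methods match the ones described below (provideDSLContext, provideActorDao), but their bodies are only a sketch:

import javax.sql.DataSource;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

import dagger.Module;
import dagger.Provides;

@Module
public class RequestModule {

    // one DSLContext per request; @RequestScope makes it a local singleton
    @Provides @RequestScope
    DSLContext provideDSLContext(DataSource dataSource) {
        return DSL.using(dataSource, SQLDialect.POSTGRES);
    }

    // ActorDao is generated by jOOQ, so it cannot carry a scope annotation in its source
    @Provides @RequestScope
    ActorDao provideActorDao(DSLContext ctx) {
        return new ActorDao(ctx.configuration());
    }
}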

We use localized singletons especially for the transaction and jOOQ data-access support.

Provide methods are optional; alternatively, we can decorate the classes themselves with the corresponding annotations in the source code.

Service classes

If we analyze the code in the consumer classes, it becomes ridiculously simple. All externalized requirements are created by the Dagger-generated code, almost hassle-free.

In the ActorResource class, for example, we analyze the received request, extract parameters and start the business logic. The transaction object is created in the request scope and passed down to all service objects that need to collaborate in the same database transaction.
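For illustration, a trimmed-down ActorResource could look like this (the handler name and the parameter extraction are assumptions):

import javax.inject.Inject;

import spark.Request;
import spark.Response;

public class ActorResource {

    private final ActorService actorService;

    @Inject
    public ActorResource(ActorService actorService) {
        this.actorService = actorService;
    }

    // called from the router: analyze the request, extract parameters, start the business logic
    public Object getActor(Request request, Response response) {
        int actorId = Integer.parseInt(request.params(":id"));
        return actorService.getActor(actorId);
    }
}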

In the ActorService class we receive all constructor parameters from Dagger automatically.

The ActorService class requires two objects in its constructor: the jOOQ DSLContext class and the ActorDao class.
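In sketch form (the business method and the Actor return type are illustrative):

import javax.inject.Inject;

import org.jooq.DSLContext;

public class ActorService {

    private final DSLContext ctx;
    private final ActorDao actorDao;

    // both parameters are supplied by Dagger automatically
    @Inject
    public ActorService(DSLContext ctx, ActorDao actorDao) {
        this.ctx = ctx;
        this.actorDao = actorDao;
    }

    public Actor getActor(int actorId) {
        return actorDao.findById(actorId);   // findById comes from the generated jOOQ DAO
    }
}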

The DSLContext class is part of the jOOQ data-access library and is instantiated with the provider method “provideDSLContext”. It is annotated with @RequestScope, which means the RequestComponent will keep a single instance of it for the duration of one request cycle.

The ActorDao class is also generated by the jOOQ library, so we could not tag it with a scope annotation in the source (which is why we wrote the provideActorDao method in the request module).

Summary

Dagger calculates all dependencies at compile time, generates the required code for the whole dependency graph, and is able to instantiate the appropriate objects at the appropriate times really fast.

 

The code from this article is available here.

Other resources:

More about Dagger scopes and sub-components.

 

Sakila: Sample app project setup

This sample application will integrate quite a few very nice open source tools available to every developer:

  • Postgresql – database
  • Flyway – database migration tool
  • Jooq – Java object oriented querying + HikariCP connection pool
  • Dagger 2 – dependency injection
  • SparkJava – fast and simple web server + GSON Json serializer
  • JavaScript Polymer SPA application framework
  • Vaadin Elements components

The application will consist of many modules:

Postgresql – database

Initialize sample database

To start, we will install the sample Sakila database into the local PostgreSQL server. Restore the downloaded file into a locally created database.

The Sakila sample database is an open source demo database that represents the data model of a DVD rental store. It consists of 15 relational tables, 7 views and a few other database objects, and it is full of data.

That gives us a database full of test data for the development environment. If we need an empty database (for production, for example), we need to run an initialization DDL script to build it.

To create the script from the existing database, use the pg_dump command, which can export the database in the form of SQL commands:

To export the database without any data (schema definitions only), we use the “schema only” (-s) option.
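For example (database name, user and output file names are placeholders):

# full dump of the existing database as SQL commands
pg_dump --dbname=sakila --username=postgres --file=sakila_full.sql

# schema-only dump (-s), suitable as the initialization DDL script
pg_dump -s --dbname=sakila --username=postgres --file=V1.0.1__sakila_init.sql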

Flyway migrations

Create a Flyway config file and a “migrations” folder under the project root.

Add the “fw” command somewhere on the path.
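Neither the config file nor the “fw” wrapper is shown here, so the following is just one possible version (connection values are examples). A flyway.conf in the project root, which the Flyway command-line tool picks up from the working directory automatically:

flyway.url=jdbc:postgresql://localhost:5432/sakila
flyway.user=postgres
flyway.password=postgres
flyway.locations=filesystem:migrations

And a minimal “fw” wrapper script placed somewhere on the PATH:

#!/bin/bash
flyway "$@"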

Put the “V1.0.1__sakila_init.sql” file in the migrations folder. If everything works as expected, the “info” command should report the pending migration.

Flyway migration and initial database state after database restore

After restoring the database with the test data in it, we need to “baseline” the initial migration. The initial SQL script that creates the empty database was bypassed by the restore, so the V1.0.1__sakila_init.sql migration script is still reported as pending.

With the baseline command we declare that the migration is not needed and mark that migration version as already applied.
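Assuming the “fw” wrapper from above, the sequence could look like this; the -baselineVersion flag tells Flyway which version the restored database already corresponds to:

fw info                              # V1.0.1__sakila_init.sql is reported as pending
fw baseline -baselineVersion=1.0.1   # mark the restored database as already at version 1.0.1
fw info                              # the init migration is no longer pending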

 

Setup java server project

In the IDE (IntelliJ IDEA Community 2017.2) create a new console-type project “sakilaweb/server”.

Setup git-bash terminal as the default IntelliJ terminal

Jooq – object oriented querying

Create a jOOQ config file and add the “jooq” command somewhere on the path.

Bash command:
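One possible version of the “jooq” wrapper script, assuming the jOOQ 3.10 code-generation jars and the PostgreSQL driver sit on the classpath and the generator configuration is stored in jooq-config.xml (both the jar locations and the config file name are assumptions):

#!/bin/bash
java -cp "jooq-3.10.1.jar:jooq-meta-3.10.1.jar:jooq-codegen-3.10.1.jar:postgresql-42.1.4.jar" \
     org.jooq.util.GenerationTool jooq-config.xml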

Add the “jooq-3.10.1.jar” library to the project dependencies. Add “postgresql-42.1.4.jar” if you use the same database.

Run the code generation tool with the “jooq” command in the terminal at the project root.

After the code has been successfully generated in the “./database” folder, you get a bunch of ready-made database-related code (database schema, POJOs and DAOs).

The project with the generated code now looks like this:

Setup Dagger 2

Configure IDEA for annotation processing.

Add the Dagger dependencies (dagger-compiler only as “Provided”, because it is used only for code generation).
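The dependencies are added through the IDE here; for reference, the equivalent Maven coordinates would look roughly like this (the version is only an example):

<dependency>
    <groupId>com.google.dagger</groupId>
    <artifactId>dagger</artifactId>
    <version>2.13</version>
</dependency>
<dependency>
    <groupId>com.google.dagger</groupId>
    <artifactId>dagger-compiler</artifactId>
    <version>2.13</version>
    <scope>provided</scope>
</dependency>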

Setup SparkJava web server

Add a few references to the project dependencies and set up a “hello world” web sample, just to be sure everything is set up as expected before the real coding starts.

Create the main procedure as:
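A minimal sketch of such a main procedure (the port and the route are examples):

import static spark.Spark.get;
import static spark.Spark.port;

public class Main {

    public static void main(String[] args) {
        port(4567);                                           // default SparkJava port
        get("/hello", (request, response) -> "Hello world");  // first test page
    }
}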

Now if you run the application you should already get the first page:

Publish to GitHub

First enable VCS support in the local project and add a .gitignore file to the project. Next we add the files to the local git repository created in the project.

If we want to push the code to a remote repository, we first need to create one to commit to. Log in to GitHub and create a new empty repository.
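The usual command sequence, with the repository URL as a placeholder:

git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/<your-account>/sakilaweb.git
git push -u origin master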

The code for the server side project is available here.

 

 

Next: in the next installment I will put the generated database layer to use and expose the first REST service.
Data migrations in Node applications

Setup db-migrate tool

As your application grows, your database model evolves too. The changes from one version to another are made by migrations. Each migration in the db-migrate tool is in essence a small JavaScript program. You can code your migration changes manually in JavaScript syntax, or generate a small JavaScript program (with the --sql-file option) that executes your SQL script files: one for the UP and one for the DOWN function of the migration.

up: upgrade your database schema to the new version
down: reverse the latest changes to the previous version
create: create a new migration; with the --sql-file option it also creates the SQL file runners

Installation

We install the db-migrate module and the corresponding database driver with npm.

Depending on where you install it, the command is available as follows:
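For a PostgreSQL project the installation looks like this (db-migrate-pg is the Postgres driver package):

# global install – the db-migrate command ends up on the PATH
npm install -g db-migrate db-migrate-pg

# local install – the command is then available as ./node_modules/.bin/db-migrate
npm install --save-dev db-migrate db-migrate-pg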

It is possible to add the local ./node_modules/.bin folder to the PATH variable, as I described in this article, and call the command the same way as you would if it were installed globally.

Configuration

A minimal config file (database.json) for working with a Postgres database in the development environment:
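For example (connection values are placeholders; “dev” is the default environment db-migrate looks for):

{
  "dev": {
    "driver": "pg",
    "host": "localhost",
    "database": "sampledb",
    "user": "postgres",
    "password": "postgres"
  }
}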

The file should be in the main project folder. More about configuration.

Migrations with SQL files

To create a new empty migration use the command:
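For example (the migration name is arbitrary):

db-migrate create add-customer-note --sql-file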

By default your migration JavaScript files reside in the “./migrations” folder and the SQL files in the “./migrations/sqls” subfolder.

Three new files are created; all of them are prefixed with a timestamp which determines the order of execution.

The two additional files with the sql suffix are prepared for your upgrade and downgrade scripts. If you want to be able to downgrade the database to the previous level, make sure you write the down script too.

Example of an upgrade script (just for the sake of example; you basically write DDL statements in familiar SQL syntax):
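A possible up script (the table and columns are invented for the example):

CREATE TABLE customer_note (
    id          SERIAL PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    note        TEXT,
    created_at  TIMESTAMP NOT NULL DEFAULT now()
);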

Example of a downgrade script:
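The matching down script simply reverses the change:

DROP TABLE customer_note;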

Run the “db-migrate up” command and all your migrations that have not yet been executed will run.

The migration log is kept in the migrations table in the database. The table is created automatically on the first run.

Other useful commands

reset: rewinds your database to the state before the first migration was applied

Using db-migrate with TypeScript

If you write Node programs in TypeScript, you probably want to use it for migrations too. I didn't go in this direction, simply because I write my scripts in SQL and the runners are already perfectly fine in JavaScript. Because the migrations (the runner part) are already in JavaScript, you should exclude the migrations folder from the TypeScript compiler path.

Sample tsconfig.json file with the migrations folder excluded:
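Something along these lines (the compiler options are only examples; the relevant part is the “exclude” entry):

{
  "compilerOptions": {
    "target": "es2015",
    "module": "commonjs",
    "outDir": "dist"
  },
  "exclude": [
    "node_modules",
    "migrations"
  ]
}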

I haven't included the migration scripts in the production deployment code yet; that step would be necessary if you want to upgrade the database automatically after deployment.

External links

Database migration tool: db-migrate.
GUI prototyping and drawing tool: The Pencil.

TomEE: Java EE server database configuration

Configure database access

I use the Apache TomEE server and therefore I need to configure it for database access before first use.

Install database driver

Drop the database driver jar file into the tomee/lib folder.

Configure datasource

Resources are usually defined in the server configuration file.

Add the datasource resource definition to the configuration file located at tomee/conf/tomee.xml.
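For a PostgreSQL datasource the resource definition could look like this (the id and the connection values are examples):

<tomee>
  <Resource id="demoDB" type="DataSource">
    JdbcDriver  org.postgresql.Driver
    JdbcUrl     jdbc:postgresql://localhost:5432/demodb
    UserName    postgres
    Password    postgres
    JtaManaged  true
  </Resource>
</tomee>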

Verify configuration

After a server (tomee service) restart, search for your datasource in the log (example log file: tomee/logs/catalina.2016-11-26.log). You will find a log entry with your resource id there:

If you restart the server from inside NetBeans, just search the output window where the log entries are shown.

Inject the datasource in Java code and use it

To inject an instance of the datasource where a connection is needed, you simply add the “@Resource” annotation above the variable declaration:
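For example (the resource name must match the id from tomee.xml; the class name is illustrative):

import javax.annotation.Resource;
import javax.sql.DataSource;

public class HelloRepository {

    // container-injected datasource configured in tomee.xml
    @Resource(name = "demoDB")
    private DataSource dataSource;
}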

Let's see a whole example with a select statement (jOOQ):
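A sketch of such a method, using the injected datasource and a plain SQL fetch (the table name is an example):

import java.sql.Connection;
import org.jooq.DSLContext;
import org.jooq.Record;
import org.jooq.Result;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

public Result<Record> listGreetings() throws Exception {
    // borrow a connection from the container-managed pool only for the duration of the query
    try (Connection connection = dataSource.getConnection()) {
        DSLContext ctx = DSL.using(connection, SQLDialect.POSTGRES);
        return ctx.fetch("select * from hello");
    }
}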

This example code is called from a REST JSON service and the result in the browser looks like this:

[Screenshot: JSON result at localhost:8080/helloworld/webresources/hello]

Cron job – running PHP in the background

Background jobs

If you wish to write responsive web applications, you will need to push some operations into the background. That way you can just register a request for some long-running task and immediately return to the client.

If you search the web, there are many ways to achieve this, but not many implementations are ready to do it in the constrained environment of simple web hosting.

My web page is hosted by GoDaddy with the so-called “Linux hosting with cPanel”. I have PHP and MySQL, but not much besides that. Luckily, GoDaddy allows cron jobs. We simply register some command as a “cron job” to run unattended at a specified frequency.

As a proof of concept I will write a simple PHP program and run it as a cron job. At each cron job iteration we will insert one record into a database table. We just want to prove that a PHP program can run in the background as a cron job.

Create a database and add a “tasklog” table.
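For example (MySQL syntax; the columns are an assumption):

CREATE TABLE tasklog (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    created_at DATETIME NOT NULL,
    message    VARCHAR(255)
);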

Our simple PHP program is:
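The original listing is not reproduced here; a minimal equivalent, with the connection values and column names as placeholders, could be:

<?php
// taskrun.php – inserts one record per invocation
error_reporting(E_ALL);
ini_set('display_errors', '1');   // report problems to the client while testing

$pdo = new PDO('mysql:host=localhost;dbname=sampledb', 'dbuser', 'dbpassword');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare('INSERT INTO tasklog (created_at, message) VALUES (NOW(), :msg)');
$stmt->execute([':msg' => 'cron job iteration']);

echo 'tasklog record inserted';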

To test it, just put “taskrun.php” in your “public_html” folder and navigate to it. If something goes wrong in the program, the exception settings are configured to report the error to the client. Keep testing the program until everything runs smoothly.

Register cron job

You can put the program file in any folder. If the folder is not under the “public_html” folder, it will be inaccessible from the public web and therefore much more secure. We create a new “jobs” folder under our home folder and move the “taskrun.php” program there.

In “cPanel” locate the “Advanced” section and select “Cron Jobs”:

Create a new job with a one-minute frequency as:
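A one-minute schedule with the output discarded looks like this (the path to the PHP binary and to the script may differ on your hosting):

* * * * * /usr/local/bin/php /home/<your-account>/jobs/taskrun.php > /dev/null 2>&1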

To prevent an email from being sent for each iteration, we add a redirection to the command ( > /dev/null 2>&1 ).

Wait a minute and check whether there are any records in the “tasklog” table. You will see something like this:

[Screenshot: rows in the tasklog table]

Success !

Of course, this is only a proof of concept for running something unattended in the background. But I think there are already some open source job/task runners out there.