Sakila: Sample app project setup

This sample application integrates quite a few excellent open-source tools available to every developer:

  • PostgreSQL – database
  • Flyway – database migration tool
  • jOOQ – Java object-oriented querying + HikariCP connection pool
  • Dagger 2 – dependency injection
  • SparkJava – fast and simple web server + GSON JSON serializer
  • JavaScript Polymer SPA application framework
  • Vaadin Elements components

The application will consist of several modules:

PostgreSQL – database

Initialize the sample database

To start, we will install the sample Sakila database into the local PostgreSQL server. Restore the downloaded file into a locally created database:

C:\Program Files\PostgreSQL\9.6\bin\pg_restore.exe --host "localhost" --port "5432" --username "postgres" --no-password --dbname "sakila" --no-owner --no-tablespaces --verbose "C:\\DVDRENTAL.TAR"

The Sakila sample database is an open-source demo database that models a DVD rental store. It consists of 15 relational tables, 7 views and a few other database objects, and it is full of data.

That gives us a database full of test data for the development environment. If we need an empty database (for production, for example), we need an initialization DDL script to build the database.

To create the script from the existing database, use the pg_dump command, which can export a database in the form of SQL commands:

pg_dump -s -U postgres sakila > V1.0.1__sakila_init.sql

To export the database without any data, only the schema definitions, we use the “schema only” (-s) option.

Flyway migrations

Create a Flyway config file and a “migrations” folder under the project root:

flyway.url=jdbc:postgresql://localhost:5432/sakila
flyway.user=postgres
flyway.password=postgres
flyway.schemas=public
flyway.locations=filesystem:./migrations

Add an “fw” command somewhere on the path:

#!/bin/sh
/c/Programs/flyway-4.2.0/flyway $@

Put the “V1.0.1__sakila_init.sql” file in the migrations folder. If everything works as expected, the “info” command should report the pending migration.

Flyway migration and initial database state after database restore

After restoring the database with test data in it, we need to “baseline” the initial migration. The initial SQL script that creates an empty database was bypassed by the restore, but the V1.0.1__sakila_init.sql migration script is still pending.

With the baseline command we declare that the migration is not needed and mark that migration version as already applied.
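For example, with the Flyway 4.x command line the baselineVersion parameter marks everything up to that version as already applied (a sketch):

fw baseline -baselineVersion=1.0.1
fw info

After the baseline, “info” should no longer report V1.0.1 as pending.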

Set up the Java server project

In the IDE (IntelliJ IDEA Community 2017.2) create a new console-type project “sakilaweb/server”.

Set up the git-bash terminal as the default IntelliJ terminal.
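In IntelliJ this is done under Settings → Tools → Terminal by pointing the “Shell path” to the git-bash binary (a sketch; the exact path depends on your Git installation):

"C:\Program Files\Git\bin\sh.exe" --login -i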

jOOQ – object-oriented querying

Create a jOOQ config file and add a “jooq” command somewhere on the path:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<configuration xmlns="http://www.jooq.org/xsd/jooq-codegen-3.10.0.xsd">
    <!-- Configure the database connection here -->
    <!-- TRACE, DEBUG, INFO, WARN, ERROR, FATAL -->
    <logging>INFO</logging>
    <jdbc>
        <driver>org.postgresql.Driver</driver>
        <url>jdbc:postgresql://localhost:5432/sakila</url>
        <user>postgres</user>
        <password>postgres</password>
    </jdbc>

    <generator>
        <!-- The default code generator. You can override this one, to generate your own code style.
             Supported generators:
             - org.jooq.util.JavaGenerator
             - org.jooq.util.ScalaGenerator
             Defaults to org.jooq.util.JavaGenerator -->
        <name>org.jooq.util.JavaGenerator</name>

        <database>
            <!-- The database type. The format here is:
                 org.util.[database].[database]Database -->
            <name>org.jooq.util.postgres.PostgresDatabase</name>

            <!-- The database schema (or in the absence of schema support, in your RDBMS this
                 can be the owner, user, database name) to be generated -->

            <inputSchema>public</inputSchema>

            <!-- All elements that are generated from your schema
                 (A Java regular expression. Use the pipe to separate several expressions)
                 Watch out for case-sensitivity. Depending on your database, this might be important! -->
            <includes>.*</includes>

            <!-- All elements that are excluded from your schema
                 (A Java regular expression. Use the pipe to separate several expressions).
                 Excludes match before includes -->
            <excludes></excludes>
        </database>

        <generate>
            <pojos>true</pojos>
            <daos>true</daos>
        </generate>

        <target>
            <!-- The destination package of your generated classes (within the destination directory) -->
            <packageName>com.bisaga.sakila.dbmodel</packageName>

            <!-- The destination directory of your generated classes. Using Maven directory layout here -->
            <directory>./database</directory>
            <encoding>UTF-8</encoding>
        </target>
    </generator>
</configuration>

Bash command:

#!/bin/sh
# Build the classpath from the jOOQ jars and the JDBC driver, then run the
# code generator (";" is the classpath separator for a Windows JVM).
DIR="C:/Programs/jOOQ-3.10.1/jOOQ-lib/"
DRIVER="C:/Programs/drivers/postgresql-42.1.4.jar"
VER="3.10.1"
CP=$DIR"jooq-"$VER".jar"
CP=$DIR"jooq-meta-"$VER".jar;"$CP
CP=$DIR"jooq-codegen-"$VER".jar;"$CP
CP=$DRIVER";"$CP
java -classpath $CP org.jooq.util.GenerationTool jooq.xml

Add the “jooq-3.10.1.jar” library to the project dependencies. Add “postgresql-42.1.4.jar” as well if you use the same database.

Run the code generation tool with the “jooq” command in the terminal at the project root.

After the code is successfully generated in the “./database” folder, you get a bunch of ready-made database-related code (database schema, POJOs and DAOs).

The project with the generated code will now look like this:
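Before wiring anything else, a quick smoke test can confirm the generated classes work (a sketch; it assumes the generated Actor table class in the com.bisaga.sakila.dbmodel package and the connection settings from jooq.xml):

package com.bisaga.sakila;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

import java.sql.Connection;
import java.sql.DriverManager;

// Minimal smoke test of the generated schema classes.
public class JooqSmokeTest {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/sakila", "postgres", "postgres")) {
            DSLContext create = DSL.using(conn, SQLDialect.POSTGRES);
            create.select()
                  .from(com.bisaga.sakila.dbmodel.tables.Actor.ACTOR)
                  .limit(5)
                  .fetch()
                  .forEach(r -> System.out.println(r.get("first_name") + " " + r.get("last_name")));
        }
    }
}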

Set up Dagger 2

Configure IDEA for annotation processing.

Add the Dagger dependencies (dagger-compiler only as “Provided”, because it is used only for code generation).

Set up the SparkJava web server

Add a few references to the project dependencies and set up a “hello world” web sample, just to be sure everything is set up as expected before the real coding starts:

com.sparkjava:spark-core:2.0.0
org.slf4j:slf4j-simple:1.7.25

com.google.code.gson:gson:2.8.2
com.google.dagger:dagger:2.12
com.google.dagger:dagger-compiler:2.12

Create the main procedure:

package com.bisaga.sakila;

import static spark.Spark.*;

public class Main {

    public static void main(String[] args) {
        get("/hello", (req, res)-> "Hello !");
    }
}

Now if you run the application, you should already get the first page:
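You can also check it from a terminal (Spark listens on port 4567 by default):

$ curl http://localhost:4567/hello
Hello !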

Publish to GitHub

First enable VCS support in the local project and add a .gitignore file to the project. Next we add the files to the local git repository created in the project.

If we want to push the code to a remote repository, we need to create one first so there is a repository to commit to. Log in to GitHub and create a new empty repository.
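The usual sequence from the project root then looks like this (the remote URL is a placeholder; use your own repository):

git add .
git commit -m "Initial commit"
git remote add origin https://github.com/<your-account>/sakilaweb-server.git
git push -u origin master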

The code for the server side project is available here.

Next: In the next installment I will put the generated database layer to use and expose the first REST service.

Dependency injection with Dagger 2

Prepare the project to use Dagger DI in the IntelliJ IDE

The IntelliJ IDEA Community version 2017.2.5 is used in this example.

Open the project structure and add Dagger as a project dependency from the Maven repository:

  • com.google.dagger:dagger:2.10
  • com.google.dagger:dagger-compiler:2.10

The Dagger compiler has to be available only during the development phase (annotation processing generates the code), so we add it as a “Provided” dependency.

Enable annotation processing in the project settings and select a content root for the generated code.

If you wish to inspect the generated code in the project tree, it is wise to open the “generated” folder manually and mark it as “Source root”. After compiling, the project tree will look like this:

Create a simple service application

First we will create a very simple application without DI. The service application will consist of one Application class that starts everything up and a single Service class that does some work.

package com.bisaga.simple;
public class Main {
    public static void main(String[] args) {
        Application app = new Application();
        app.run();
    }
}

package com.bisaga.simple;

public class Application {
    private Service service;

    public Application() {
        service = new Service();
    }

    public void run() {
        System.out.println("Application start.");
        service.doWork();
        System.out.println("Application end.");
    }
}

package com.bisaga.simple;

public class Service {

    public void doWork() {
        System.out.println("Service do work.");
    }
}

Implement dagger dependency injection

To start as simply as possible, we will not change how the application is instantiated and started. Dagger is initialized in the Application object, and the only injected instance will be the instance of the Service class.

package com.bisaga.simple;
public class Main {
    public static void main(String[] args) {
        Application app = new Application();
        app.run();
    }
}

Now we need to create a “Module” class where we implement a “provide” method that returns a new instance of the Service class. The module class must have the @Module annotation and its factory methods must have @Provides annotations.

package com.bisaga.simple;

import dagger.Module;
import dagger.Provides;

@Module
public class SimpleModule {

    @Provides
    Service provideService() {
        return new Service();
    }
}

Now we need a “Component”, an interface from which Dagger generates the injector class. The component must have a @Component annotation that lists the modules, and an inject method signature.

package com.bisaga.simple;

import dagger.Component;
import javax.inject.Singleton;

@Singleton
@Component(modules = { SimpleModule.class})
public interface SimpleComponent {
    void inject(Application application);
}

The Service class was not changed because we do not inject anything into it; the service has no external requirements, so the code stays exactly the same as before:

package com.bisaga.simple;

public class Service {

    public void doWork() {
        System.out.println("Service do work.");
    }
}

Connect everything together

In the Application constructor we initialize Dagger with the build() and inject() functions. The injection is achieved with the @Inject annotation in front of the local “service” member declaration.

package com.bisaga.simple;
import javax.inject.Inject;

public class Application {
    private SimpleComponent component;

    @Inject Service service;

    public Application() {
        component = DaggerSimpleComponent.builder().build();
        component.inject(this);
    }

    public void run() {
        System.out.println("Application start.");
        service.doWork();
        System.out.println("Application end.");
    }
}

If Dagger is to inject a class member, the member must be accessible (not private), because Dagger does not use any reflection (that’s good for speed and debugging!). But we can also initialize private fields through a constructor marked with the @Inject annotation that takes all required parameters.
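A short sketch of that constructor-based variant (the Worker class here is hypothetical, not part of the sample project):

package com.bisaga.simple;

import javax.inject.Inject;

// Hypothetical example: the field can stay private because Dagger calls
// the annotated constructor instead of setting the field directly.
public class Worker {
    private final Service service;

    @Inject
    public Worker(Service service) {
        this.service = service;
    }

    public void run() {
        service.doWork();
    }
}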

Extending our knowledge

@Inject on the constructor instead of provide method

Now we can explain that provideXXX methods in modules are not the only way to tell Dagger how to construct objects. Instead of the provide method we can simply annotate the object’s constructor with @Inject.

Because only one constructor can have the @Inject annotation, Dagger knows exactly how the object should be constructed. The special provideXXX method on the module is not needed anymore.

Try to add a constructor annotated with @Inject to the Service class and delete the provideService() method from the “SimpleModule” class; rebuild the project and everything should work as before. If you look at the generated classes, you will see that Dagger now generates a special factory object with the construction mechanism.

The code after changes should look like:

package com.bisaga.simple;

import javax.inject.Inject;

public class Service {

    @Inject
    public Service(){}

    public void doWork() {
        System.out.println("Service do work.");
    }
}

and the module:

package com.bisaga.simple;

import dagger.Module;

@Module
public class SimpleModule {
}

More about the topic

DI and the Dagger tool have many features not covered in this article. You can find more about them here.

Create an executable JAR from IntelliJ CE

To create an executable JAR we need to create a new artifact and build it. The build will save everything to the output folder.

The version of Java used in this article was 1.8.

Automatically created build artifact:

  1. Open File/Project structure
  2. Create new artifact, JAR, from modules with dependencies:
  3. Enter the proper startup class and select how external 3rd-party libraries are used in the target artifact (“copy to the output directory and link via manifest“). We don’t want to extract them into the target JAR file because some of them are probably signed, and signed code will not work inside the target JAR.

Now you can build the artifact from the menu (Build/Build artifact/) and test it from the command line:

$ java -jar out/artifacts/sparkweb/sparkweb.jar

A customized way of defining the build artifact

If you wish to have more influence on the target artifact, use the “Other” artifact type and design the process manually:

You can, for example, move the external libraries into a subfolder and then create the manifest file manually to reflect the new class path.

Sample manifest file:

If you edit the MANIFEST.MF file by hand, don’t forget the two spaces in front of each continued jar name: the first is the continuation-line marker required by the manifest format and the second separates the class-path entries.

Manifest-Version: 1.0
Class-Path: javax.servlet-3.0.0.v201112011016.jar
  jetty-http-9.0.2.v20130417.jar
  jetty-io-9.0.2.v20130417.jar
  jetty-security-9.0.2.v20130417.jar
  jetty-server-9.0.2.v20130417.jar
  jetty-servlet-9.0.2.v20130417.jar
  jetty-util-9.0.2.v20130417.jar
  jetty-webapp-9.0.2.v20130417.jar
  jetty-xml-9.0.2.v20130417.jar
  slf4j-api-1.7.2.jar
  slf4j-api-1.7.25.jar
  slf4j-simple-1.7.25.jar
  spark-core-2.0.0.jar
Main-Class: com.bisaga.Main
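If you moved the libraries into a subfolder as described above, the class-path entries get that prefix (a sketch assuming a folder named “lib”):

Manifest-Version: 1.0
Class-Path: lib/spark-core-2.0.0.jar
  lib/slf4j-api-1.7.25.jar
  lib/slf4j-simple-1.7.25.jar
Main-Class: com.bisaga.Main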

Read this blog for more detailed instructions.

Why we don’t create a single JAR from everything

Creating one single JAR from everything the application consists of is not a good idea because of security limitations. If any of the 3rd-party jar libraries is signed, the signature refers to the physical JAR file; when the code is loaded from another jar file (your single jar), the loading mechanism will refuse it because the signature no longer matches the jar file.

Data migrations in Node applications

Set up the db-migrate tool

As your application grows, your database model evolves too. The changes from one version to another are made by migrations. Each migration in the db-migrate tool is in essence a small JavaScript program. You can code your migration changes manually in JavaScript syntax, or generate a small JavaScript program (with the --sql-file option) that executes your SQL script files: one for the UP and one for the DOWN function of the migration.

up: upgrades your database schema to the new version
down: reverses the latest changes back to the previous version
create: creates a new migration; with the --sql-file option it also creates SQL file runners

Installation

We install the db-migrate module and the corresponding database driver with npm:

npm install db-migrate
npm install db-migrate-pg

Depending on where you install it, the command is available as:

$ ./node_modules/.bin/db-migrate
or
$ db-migrate

It is possible to add the local ./node_modules/.bin folder to the PATH variable, as I described in this article, and call the command the same way as if it were installed globally.
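For the current shell session that is a one-liner (a sketch):

export PATH=$PATH:$(pwd)/node_modules/.bin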

Configuration

A minimal config file (database.json) to work with a PostgreSQL database in the development environment:

{
  "dev": "postgresql://postgres:postgres@localhost:5432/dbsam"  
}

The file should be in the main project folder. More about configuration.

Migrations with SQL files

To create a new empty migration, use the command:

$ db-migrate create new_mig_name --sql-file

By default your migration JavaScript files reside in the “./migrations” folder and the SQL files in the “./migrations/sqls” subfolder.

Three new files are created, all prefixed with a timestamp that defines the order of execution.

The two additional files with the .sql suffix are prepared for your upgrade script and downgrade script. If you wish to be able to downgrade the database to the previous level, make sure you write the down script too.
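The generated names follow a pattern roughly like this (your timestamp prefix will differ):

migrations/20171031120000-new_mig_name.js
migrations/sqls/20171031120000-new_mig_name-up.sql
migrations/sqls/20171031120000-new_mig_name-down.sql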

An example of an upgrade script (it is just for the sake of example; you basically write DDL statements in familiar SQL syntax):

/*
* Table: Currency  
*
* Contains currencies 
*/
CREATE TABLE currency (
	code                varchar(60) not null,               -- code 
	abbreviation        varchar(60),						-- ex. EUR, USD
	description         text,								
	row_id              serial primary key,                 -- row identifier, autonumber, pk 
	org_id				smallint not null,					-- multitenant identifier 
	co_id				int NOT NULL,						-- company identifier  
	created_at          timestamptz not null default now(),
	created_by          varchar(60) not null,
	modified_at         timestamptz,
	modified_by         varchar(60),
	CONSTRAINT currency_sys_org_fk FOREIGN KEY (org_id) REFERENCES sys_org (row_id),
	CONSTRAINT currency_sys_co_fk FOREIGN KEY (co_id) REFERENCES sys_co (row_id)
) WITHOUT OIDS;                                               -- don't create object identifier field 
CREATE INDEX currency_org_id_fkix ON currency (org_id);
CREATE INDEX currency_co_id_fkix ON currency (co_id);
CREATE INDEX currency_created_at ON currency (created_at DESC) ;
CREATE INDEX currency_modified_at ON currency (modified_at DESC) ;

An example of a downgrade script:

DROP TABLE currency;

Run the “db-migrate up” command and all your migrations that have not yet been executed will run.

The migration log is kept in the “migrations” database table. The table is automatically created on the first run.

Other useful commands

reset: rewinds your database to the state before the first migration was applied
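Typical day-to-day usage then looks like this:

$ db-migrate up      # apply all pending migrations
$ db-migrate down    # revert the most recent migration
$ db-migrate reset   # rewind all executed migrations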

Using db-migrate with TypeScript

If you write Node programs in TypeScript, you probably want to use it for migrations too. I didn’t go in this direction, simply because I write my scripts in SQL and the runners are already perfect in JavaScript. Because the migrations (the runner part) are already in JavaScript, you should exclude the migrations folder from the TypeScript compiler path.

A sample tsconfig.json file with the migrations folder excluded (note that typeRoots belongs inside compilerOptions):

{
    "compilerOptions": {
        "target": "es2017",
        "module": "commonjs",
        "moduleResolution": "node",
        "noEmitOnError": true,
        "noImplicitAny": true,
        "experimentalDecorators": true,
        "sourceMap": true,
        "baseUrl": ".",
        "allowJs": true,
        "paths": {
            "*": [
                "src/types/*"
            ]
        },
        "typeRoots": [
            "node_modules/@types",
            "src/@types"
        ],
        "outDir": "dist"
    },
    "include": [
        "src/**/*",
        "spec/**/*"
    ],
    "exclude": [
        "dist",
        "migrations",
        "node_modules"
    ]
}

I didn’t include the migration scripts in the production deployment code yet; that step would be necessary if you want to upgrade the database automatically after deployment.

External links

Database migration tool: db-migrate.
GUI prototyping and drawing tool: The Pencil.