Run Dropwizard Java application on Vagrant

Post summary: How to run Dropwizard or any other Java application on Vagrant.

The code below can be found in the GitHub sample-dropwizard-rest-stub repository, in the Vagrantfile-jar file. Since Vagrant requires exactly one Vagrantfile, if you want to run this example you have to rename Vagrantfile-jar to Vagrantfile and then run the Vagrant commands described at the end of this post. This post is part of the Vagrant series. All other Vagrant-related posts, as well as more theoretical information on what Vagrant is and why to use it, can be found in the What is Vagrant and why to use it post.

Vagrantfile

As described in the Vagrant introduction post, all configuration is done in a single text file called Vagrantfile. Below is a Vagrantfile which can be used to deploy and start, as a service, the Dropwizard Java application described in the Build a RESTful stub server with Dropwizard post.

Vagrant.configure('2') do |config|

  config.vm.hostname = 'dropwizard'
  config.vm.box = 'opscode-centos-7.2'
  config.vm.box_url = 'http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_centos-7.2_chef-provisionerless.box'

  config.vm.synced_folder './', '/vagrant'

  config.vm.network :forwarded_port, guest: 9000, host: 9000
  config.vm.network :forwarded_port, guest: 9001, host: 9001

  config.vm.provider :virtualbox do |vb|
    vb.name = 'dropwizard-rest-stub-jar'
  end

  config.vm.provision :shell do |shell|
    shell.inline = <<-SHELL
      sudo service dropwizard stop
      sudo yum -y install java
      sudo mkdir -p /var/dropwizard-rest-stub
      sudo mkdir -p /var/dropwizard-rest-stub/logs
      sudo cp /vagrant/target/sample-dropwizard-rest-stub-1.0-SNAPSHOT.jar /var/dropwizard-rest-stub/dropwizard-rest-stub.jar
      sudo cp /vagrant/config-vagrant.yml /var/dropwizard-rest-stub/config.yml
      sudo cp /vagrant/linux_service_file /etc/init.d/dropwizard
      # Replace CR+LF with LF because of Windows
      sudo sed -i -e 's/\r//g' /etc/init.d/dropwizard
      sudo service dropwizard start
    SHELL
  end

end

Vagrantfile explanation

The file starts with Vagrant.configure('2') do |config|, which states that version 2 of the Vagrant API will be used and defines a variable named config to be used below. The guest operating system hostname is set with config.vm.hostname. If you use the vagrant-hostsupdater plugin, it will add the hostname to your hosts file, so you can access the machine from a browser in case you are developing web applications.

With config.vm.box you define which guest operating system to use. Vagrant maintains boxes such as hashicorp/precise64 (Ubuntu 12.04) and also recommends Bento's boxes. I have found issues with both Vagrant's and Bento's boxes, so I've decided to use one I know is working, CentOS 7.2, and I specify its location with config.vm.box_url.

With the config.vm.synced_folder command, you specify that the folder containing the Vagrantfile is shared as /vagrant/ in the guest operating system. This makes it easy to transfer files between the guest and host operating systems. This mount is done by default, but it is good to state it explicitly for better readability.

With config.vm.network :forwarded_port, a port from the guest OS is forwarded to your host OS. Without exposing any port you will not have access to the guest OS; the only port open by default is 22 for SSH. With config.vm.provider :virtualbox do |vb| you access the VirtualBox provider for further configuration; vb.name = 'dropwizard-rest-stub-jar' sets the name that you see in Oracle VirtualBox Manager.

Finally, the deployment takes place, done by the shell commands inside the config.vm.provision :shell do |shell| block. The dropwizard service is stopped first; if it does not exist an error is shown, but that does not interrupt the provisioning process. The command yum -y install java is CentOS-specific, installing Java with YUM, the CentOS package manager. For other Linux distributions, you have to use the command for their package manager. Folders are created, then the JAR and YML files are copied to the machine. Notice that files are copied from the /vagrant/ folder, which is the folder shared with the host OS. Installing the Java application as a service is done by copying linux_service_file to /etc/init.d/dropwizard, which creates a service named dropwizard. See more on how to install a Linux service in the Install Java application as a Linux service post. Since I'm on Windows, its line endings (CR+LF) differ from those on Linux (LF) and the service does not work, giving an env: /etc/init.d/dropwizard: No such file or directory error. This is why CR+LF should be replaced with LF, using the sudo sed -i -e 's/\r//g' /etc/init.d/dropwizard command. Finally, the script starts the dropwizard service. A nicer way to do all this is to extract the installation steps into a separate script file and just call that file from the Vagrantfile. I've put everything in the Vagrantfile to keep it in one place.

Running Vagrant

The command to start the Vagrant machine is vagrant up. Then, in order to invoke the provisioning section with the actual deployment, you have to call vagrant provision. Both can be done in one step: vagrant up --provision. To shut down the machine use vagrant halt. To delete the machine: vagrant destroy.
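
Put together, a typical session with the commands from this post looks like this:

vagrant up --provision
vagrant halt
vagrant up
vagrant provision
vagrant destroy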

Conclusion

It is very easy to create a Vagrantfile that installs a Java application. Once created, the file can be reused by all team members. It can be executed over and over again, making provisioning extremely easy.

What is Vagrant and why to use it

Post summary: Brief description on Vagrant and when and why to use it.

This post is a preface to a series of posts where I will describe in details with examples how to configure and run Vagrant.

What is Vagrant

Vagrant is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the "works on my machine" excuse a relic of the past. Vagrant also makes it convenient to share virtual environment setups and configurations.

How Vagrant works

Vagrant does not provide virtualization engines but builds on top of existing ones such as VirtualBox (the default provider), VMWare, Hyper-V or Docker. Vagrant providers are available as plugins, so they can be easily installed and used. Simply said, Vagrant spins up a virtual machine for you, configures it and installs software on it. All those actions are described in a single text file, called Vagrantfile, that can be shared among team members, allowing everyone to have one and the same setup.

Why use Vagrant

Vagrant makes it very easy to share setups between team members, allowing very easy spin-up of a work environment. For me, the most important reason to use Vagrant is to test how your deployment works, i.e. provisioning, locally before pushing those changes to other environments. Another important use case I've seen is to create your own development/test environment which is very hard to create on a local machine. One example was a huge Tomcat application consisting of tens of configuration files with hundreds of configuration values which was not possible to provision on the local box; here Vagrant came to the rescue, applying the Chef cookbook used for deployment on physical hosts.

Provisioning

Provisioning covers all tasks related to deployment and configuration of applications, making them ready to use. In the past, this was done with many scripts or manual steps, which was quite unreliable and error-prone. Nowadays tools like Chef or Ansible allow automatic deployment and configuration of applications. This is the proper way to do deployments, as it eliminates human error and speeds up deployment. Once you have your Chef cookbook or Ansible playbook ready, you want to test that it works properly. Here comes the true value of Vagrant: you can test changes locally which otherwise might break some shared environment and stop work for many people.

Why does this post exist?

This post has no real practical value. Its purpose is to introduce Vagrant and to serve as a preface to the three other posts from the Vagrant series.

Conclusion

Vagrant provides an easy way to define and share different application or environment setups in a single text file called Vagrantfile. Vagrant uses virtualization engines like VirtualBox, VMWare or Hyper-V and builds on top of them. The most valuable usage of Vagrant I've seen is to test your provisioning scripts and to provision an application which otherwise would be very hard to run manually on a local machine. Enjoy reading the posts with actual configurations and Vagrantfile examples.

Install Java application as a Linux service

Post summary: Code snippet how to start Java application as a Linux service.

The code below can be found in the GitHub sample-dropwizard-rest-stub repository, in the linux_service_file file. This post is related to the Build a RESTful stub server with Dropwizard post. The REST server built there is set up to run as a Linux service with the code snippet shown below.

Service snippet

This snippet can be used to run other applications as a Linux service as well, not only Java ones.

#!/bin/bash

BASE_DIR=/var/dropwizard-rest-stub
START_COMMAND="java -jar $BASE_DIR/dropwizard-rest-stub.jar server $BASE_DIR/config.yml"
PID_FILE=$BASE_DIR/dropwizard-rest-stub.pid
LOG_DIR=$BASE_DIR/logs

start() {
  PID=`$START_COMMAND > $LOG_DIR/init.log 2>$LOG_DIR/init.error.log & echo $!`
}

case "$1" in
start)
    if [ -f $PID_FILE ]; then
        PID=`cat $PID_FILE`
        if [ -z "`ps axf | grep ${PID} | grep -v grep`" ]; then
            start
        else
            echo "Already running [$PID]"
            exit 0
        fi
    else
        start
    fi

    if [ -z $PID ]; then
        echo "Failed starting"
        exit 1
    else
        echo $PID > $PID_FILE
        echo "Started [$PID]"
        exit 0
    fi
;;
status)
    if [ -f $PID_FILE ]; then
        PID=`cat $PID_FILE`
        if [ -z "`ps axf | grep ${PID} | grep -v grep`" ]; then
            echo "Not running (process dead but PID file exists)"
            exit 1
        else
            echo "Running [$PID]"
            exit 0
        fi
    else
        echo "Not running"
        exit 0
    fi
;;
stop)
    if [ -f $PID_FILE ]; then
        PID=`cat $PID_FILE`
        if [ -z "`ps axf | grep ${PID} | grep -v grep`" ]; then
            echo "Not running (process dead but PID file exists)"
            rm -f $PID_FILE
            exit 1
        else
            PID=`cat $PID_FILE`
            kill -term $PID
            echo "Stopped [$PID]"
            rm -f $PID_FILE
            exit 0
        fi
    else
        echo "Not running (PID not found)"
        exit 0
    fi
;;
restart)
    $0 stop
    $0 start
;;
*)
    echo "Usage: $0 {status|start|stop|restart}"
    exit 0
esac

Install as a Linux service

In order to make it a Linux service, the file has to be copied into the /etc/init.d/ Linux folder with the name that you want your service to have. If you want your service to be named service_name then you use the same name as the filename: /etc/init.d/service_name.

Nota bene: If you are creating the service by copying the file from a Windows machine, it has different line endings (CR+LF) than Linux (LF). Also, by default Git amends line endings on pull and push depending on the OS. If you receive the message env: /etc/init.d/service_name: No such file or directory then you have to replace CR+LF with LF only. This can be done with the following command: sed -i -e 's/\r//g' /etc/init.d/service_name.
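
A minimal install sequence, assuming the service file is named dropwizard as in this post (the chmod step is only needed if the copied file is not already executable):

sudo cp linux_service_file /etc/init.d/dropwizard
sudo chmod +x /etc/init.d/dropwizard
sudo sed -i -e 's/\r//g' /etc/init.d/dropwizard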

Manage service

Assuming you have named your file dropwizard, you manage your service with that name. The service has 4 commands: status, start, stop and restart. You start the service with the service dropwizard start command. If you input something different from the 4 options given above, the service will output its usage pattern, as shown in the commands below.
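
service dropwizard status
service dropwizard start
service dropwizard stop
service dropwizard restart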

Conclusion

In the current post, I have provided a sample bash script that can be used to install a Java (or any other) application as a Linux service and then start, stop or restart it.

Build a Dropwizard project with Gradle

Post summary: Code examples how to create Dropwizard project with Gradle.

The code sample here can be found as a buildable and runnable project in the GitHub sample-dropwizard-rest-stub repository, in a separate git branch called gradle.

Project structure

All classes have been thoroughly described in the Build a RESTful stub server with Dropwizard post. In this post, I will describe how to make the project buildable with Gradle. In order to make it more understandable, I will compare with Maven's pom.xml file elements by XPath.

Gradle

Gradle is an open-source build automation system that builds upon the concepts of Apache Ant and Apache Maven and introduces a Groovy-based domain-specific language (DSL) instead of the XML form used by Apache Maven for declaring the project configuration. Gradle is much more powerful and more complex than Maven. There is a significant tendency for Java projects to move towards Gradle, so I've decided to write this post.

Gradle artefacts

In order to make your project work with Gradle, you need several files. The list below shows how the files are placed in the project's root folder:

  • gradle/wrapper/gradle-wrapper.jar – the Gradle Wrapper allows you to make builds without installing Gradle on your machine. This is very convenient and makes Gradle usage easy. This JAR manages the automatic download and installation of the Gradle Wrapper on the first build.
  • gradle/wrapper/gradle-wrapper.properties – configures which Gradle Wrapper version is downloaded and installed on the first build.
  • build.gradle – the most important file. This is where you configure your project.
  • gradlew – the Gradle Wrapper executable for Linux.
  • gradlew.bat – the Gradle Wrapper executable for Windows.
  • settings.gradle – project settings, mainly used in case of multi-module projects.

settings.gradle file

This file is mainly used in case of a multi-module project. In it, we currently define only the project name: rootProject.name = 'sample-dropwizard-rest-stub'. This is the same value as in /project/name from the pom.xml file.
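
The whole settings.gradle file is thus a single line:

rootProject.name = 'sample-dropwizard-rest-stub'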

Constructing build.gradle file

This is the main file where you configure your project. You need to define version (/project/version in pom.xml), group (/project/groupId in pom.xml) and optionally description. Since this is a Java project you need to apply plugin: 'java'. You also need to specify the Java version, 1.8 in this case, via the sourceCompatibility and targetCompatibility values. Next is to set repositories. You can use mavenCentral or add a custom one with the following code, which is not shown in the example below: maven { url 'https://plugins.gradle.org/m2/' }. You need to define dependencies (/project/dependencies/dependency in the pom.xml file) to tell Gradle what libraries this project needs. In the current example, there is a compile dependency on io.dropwizard:dropwizard-core:0.8.0 and a testCompile dependency on junit:junit:4.12. This is enough to have a fully functional Dropwizard project with the code examples given in the Build a RESTful stub server with Dropwizard post.

version '1.0-SNAPSHOT'
group 'com.automationrhapsody.reststub'
description 'Sample Dropwizard REST Stub'

apply plugin: 'java'

sourceCompatibility = 1.8
targetCompatibility = 1.8

repositories {
	mavenCentral()
}

dependencies {
	compile 'io.dropwizard:dropwizard-core:0.8.0'

	testCompile 'junit:junit:4.12'
}

The beauty of Dropwizard is the ability to pack everything into a single JAR file and then run that file. In Maven this was done by the maven-shade-plugin; in Gradle the best way to do it is the Shadow JAR plugin. You need to define it via the plugins closure. Now let's configure shadowJar. You can specify archiveName or exclude some artefacts from the packed JAR. Optionally you can enhance your MANIFEST.MF file by adding more details to the manifest closure. A nice thing about Gradle is that you can use Groovy as well as pure Java code. Constructing Build-Time requires importing some Java DateTime classes and using them to produce a human-readable time. The next piece you need to add to your build.gradle file is:

import java.time.ZoneId
import java.time.ZonedDateTime
import java.time.format.DateTimeFormatter

plugins {
	id 'com.github.johnrengelman.shadow' version '1.2.4'
}

mainClassName = 'com.automationrhapsody.reststub.RestStubApp'

shadowJar {
	mergeServiceFiles()
	exclude 'META-INF/*.DSA', 'META-INF/*.RSA', 'META-INF/*.SF'
	manifest {
		attributes 'Implementation-Title': rootProject.name
		attributes 'Implementation-Version': rootProject.version
		attributes 'Implementation-Vendor-Id': rootProject.group
		attributes 'Build-Time': ZonedDateTime.now(ZoneId.of("UTC"))
				.format(DateTimeFormatter.ISO_ZONED_DATE_TIME)
		attributes 'Built-By': InetAddress.localHost.hostName
		attributes 'Created-By': 'Gradle ' + gradle.gradleVersion
		attributes 'Main-Class': mainClassName
	}
	archiveName 'sample-dropwizard-rest-stub.jar'
}

Once you have built your JAR file with the command gradlew shadowJar, you can run it with the java -jar build/sample-dropwizard-rest-stub.jar server config.yml command. Gradle has another option to run your project for testing purposes. It is done by first applying plugin: 'application'. You need to specify the mainClassName to be run and configure the run args. In order to run your project from Gradle with the gradlew run command you just add:

apply plugin: 'application'

mainClassName = 'com.automationrhapsody.reststub.RestStubApp'

run {
	args = ['server', 'config.yml']
}

build.gradle file

Full build.gradle file content is shown below:

import java.time.ZoneId
import java.time.ZonedDateTime
import java.time.format.DateTimeFormatter

plugins {
	id 'com.github.johnrengelman.shadow' version '1.2.4'
}

version '1.0-SNAPSHOT'
group 'com.automationrhapsody.reststub'
description 'Sample Dropwizard REST Stub'

apply plugin: 'java'
apply plugin: 'application'

sourceCompatibility = 1.8
targetCompatibility = 1.8

repositories {
	mavenCentral()
}

dependencies {
	compile 'io.dropwizard:dropwizard-core:0.8.0'

	testCompile 'junit:junit:4.12'
}

mainClassName = 'com.automationrhapsody.reststub.RestStubApp'

run {
	args = ['server', 'config.yml']
}

shadowJar {
	mergeServiceFiles()
	exclude 'META-INF/*.DSA', 'META-INF/*.RSA', 'META-INF/*.SF'
	manifest {
		attributes 'Implementation-Title': rootProject.name
		attributes 'Implementation-Version': rootProject.version
		attributes 'Implementation-Vendor-Id': rootProject.group
		attributes 'Build-Time': ZonedDateTime.now(ZoneId.of("UTC"))
				.format(DateTimeFormatter.ISO_ZONED_DATE_TIME)
		attributes 'Built-By': InetAddress.localHost.hostName
		attributes 'Created-By': 'Gradle ' + gradle.gradleVersion
		attributes 'Main-Class': mainClassName
	}
	archiveName 'sample-dropwizard-rest-stub.jar'
}

Conclusion

This post is an extension to the Build a RESTful stub server with Dropwizard post, in which I have described how to build a REST service with Dropwizard and Maven. In the current post, I have shown how to do the same with Gradle.

PowerMock examples and why better not to use them

Post summary: In this post, I have summarised all PowerMock examples I've given so far. More importantly, I will try to give some justification why I think the necessity to use PowerMock is an indicator of bad code design.

All code examples are available in the GitHub java-samples/junit repository.

PowerMock

PowerMock is a framework that extends other mock libraries giving them more powerful capabilities. PowerMock uses a custom classloader and bytecode manipulation to enable mocking of static methods, constructors, final classes and methods, private methods, removal of static initializers and more.

PowerMock series

So far in my blog, I have written a lot about PowerMock, even more than I have written about Mockito, which actually deserves better attention. The posts from the PowerMock series cover mocking static methods, verifying static method calls, mocking new object creation, mocking private method calls and invoking private methods.

Why avoid PowerMock

I have worked on a project where PowerMock was not needed at all. We had 91.6% code coverage with Mockito alone. Initially, it was 85%, but when we utilized PITest we increased the code coverage. See more on PITest in the Mutation testing for Java with PITest post. I have also worked on an old product where you cannot do decent unit testing without PowerMock. PowerMock was a must in order to achieve our goal of 80% code coverage. I can easily compare those two projects. The old one had large classes with lots of private methods and used lots of static methods, and it was really hard to maintain that code. In this post, I'm not going to talk about SOLID because I do not consider myself a total expert on the subject. There are lots of discussions over the internet about the pros and cons of static methods, so everyone can decide personally. For me, I've come to the conclusion that the necessity of using PowerMock in a project is an indicator of bad code design. In later projects, PowerMock is not used at all. If something cannot be unit tested with Mockito, then the class is refactored.

How to avoid PowerMock

The PowerMock features described here are related to static methods, private methods and creating new objects.

Mock or verify static methods

I'm not saying don't use static methods, but they should be deterministic and not very complex. Not being able to verify that a static method was called is a little pain, but most important are the input and output of your method under test; what internal calls it makes is not that important.

Mock or call private methods

Private methods are not supposed to be tested at all. It is as if they do not exist. If a class is complex enough that you have to call private methods, or to test private methods individually, then this class might be a good candidate to be split up.

Mock new object creation

Instead of creating the object in the class, use dependency injection to provide it to the class from outside, either via a constructor or via a setter. This way you can very easily test this class by injecting a mock, as sketched below.
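
Below is a minimal sketch of the idea with hypothetical class names, not taken from the project code. The collaborator comes in through the constructor, so a plain Mockito mock can be injected in a unit test:

// Hypothetical collaborator that the class under test depends on
public interface ReportWriter {
	void write(String content);
}

public class ReportService {

	private final ReportWriter writer;

	// The dependency is provided from outside instead of created with "new"
	public ReportService(ReportWriter writer) {
		this.writer = writer;
	}

	public void generate() {
		writer.write("report");
	}
}

In a test, new ReportService(mock(ReportWriter.class)) gives full control over the collaborator without any need for PowerMock.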

Conclusion

This is a very controversial post. On one hand, I describe how to use PowerMock and what features it has, on the other hand, I state that you’d better not use it. PowerMock is extremely powerful and can do almost anything you need in your testing, but for me, the necessity of using PowerMock is an indicator of bad code design.

Mock new object creation with PowerMock

Post summary: How to control what objects are being instantiated when using PowerMock.

This post is part of the PowerMock series examples. The code shown in the examples below is available in the GitHub java-samples/junit repository.

Mock new object creation

You might have a method which instantiates some object and works with it. This case could be very tricky to automate tests for, because you do not have any control over this newly created object. This is where PowerMock comes to help, allowing you to control what object is being created by replacing it with an object you can control.

Code to test

Below is a simple class where a new object is created inside the method that has to be unit tested.

public class PowerMockDemo {

	public Point publicMethod() {
		return new Point(11, 11);
	}
}

Unit test

What we want to achieve in the unit test is to control the instantiation of the new Point object so that it is replaced with an object we have control over. The first thing to do is to annotate the unit test with @RunWith(PowerMockRunner.class), telling JUnit to use the PowerMock runner, and with @PrepareForTest(PowerMockDemo.class), telling PowerMock to get inside the PowerMockDemo class and prepare it for mocking. Mocking is done with PowerMockito.whenNew(Point.class).withAnyArguments().thenReturn(mockPoint). It tells PowerMock, when a new object of class Point is instantiated with whatever arguments, to return mockPoint instead. It is also possible to return different objects based on the arguments Point is created with, using the withArguments() method. The full code is below:

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.mockito.Mockito.mock;

@RunWith(PowerMockRunner.class)
@PrepareForTest(PowerMockDemo.class)
public class PowerMockDemoTest {

	private PowerMockDemo powerMockDemo;

	@Before
	public void setUp() {
		powerMockDemo = new PowerMockDemo();
	}

	@Test
	public void testMockNew() throws Exception {
		Point mockPoint = mock(Point.class);

		PowerMockito.whenNew(Point.class)
			.withAnyArguments().thenReturn(mockPoint);

		Point actualMockPoint = powerMockDemo.publicMethod();

		assertThat(actualMockPoint, is(mockPoint));
	}
}
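
For completeness, if the mock should be returned only when Point is constructed with specific arguments, a sketch based on the same test would be:

PowerMockito.whenNew(Point.class)
	.withArguments(11, 11).thenReturn(mockPoint);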

Conclusion

PowerMock allows you to control what new objects are being created and to replace them with an object you have control over.

Mock private method call with PowerMock

Post summary: How to mock private method with PowerMock by using spy object.

This post is part of the PowerMock series examples. The code shown in the examples below is available in the GitHub java-samples/junit repository.

Mock private method

In some cases, you may need to alter the behavior of a private method inside the class you are unit testing. You will need to mock this private method and make it return what is needed for the particular case. Since this private method is inside your class under test, mocking it is a little more specific. You have to use a spy object.

Spy object

A spy is a real object which the mocking framework has access to. Spied objects are partially mocked objects: some of their methods are real, some mocked. I would say use spy objects with great caution, because you do not really know what is happening underneath and whether you are actually testing your class or a mocked version of it.

Code to be tested

Below is a simple class that has a private method which creates a new Point object based on the one given as argument. This private method is used to demonstrate how private methods can be called in the Call private method with PowerMock post. In the current example, there is also a public method which calls this private method with a Point object.

public class PowerMockDemo {

	public Point callPrivateMethod() {
		return privateMethod(new Point(1, 1));
	}

	private Point privateMethod(Point point) {
		return new Point(point.getX() + 1, point.getY() + 1);
	}
}

Unit test

What we want to achieve in the unit test is to mock the private method so that each call to it returns an object we have control over. The first thing to do is to annotate the unit test with @RunWith(PowerMockRunner.class), telling JUnit to use the PowerMock runner, and with @PrepareForTest(PowerMockDemo.class), telling PowerMock to get inside the PowerMockDemo class and prepare it for mocking. Then a spy object has to be created with PowerMockito.spy(new PowerMockDemo()). This is actually a real PowerMockDemo object, but PowerMock is spying on it. The mocking of the private method is done with the following code: PowerMockito.doReturn(mockPoint).when(powerMockDemoSpy, "privateMethod", anyObject()). When "privateMethod" is called with whatever object, mockPoint is returned, which is actually a mocked object. The full code example is shown below:

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.mockito.Matchers.anyObject;
import static org.mockito.Mockito.mock;

@RunWith(PowerMockRunner.class)
@PrepareForTest(PowerMockDemo.class)
public class PowerMockDemoTest {

	private PowerMockDemo powerMockDemoSpy;

	@Before
	public void setUp() {
		powerMockDemoSpy = PowerMockito.spy(new PowerMockDemo());
	}

	@Test
	public void testMockPrivateMethod() throws Exception {
		Point mockPoint = mock(Point.class);

		PowerMockito.doReturn(mockPoint)
			.when(powerMockDemoSpy, "privateMethod", anyObject());

		Point actualMockPoint = powerMockDemoSpy.callPrivateMethod();

		assertThat(actualMockPoint, is(mockPoint));
	}
}

Conclusion

PowerMock provides a way to mock private methods by using spy objects. Mockito also has spy objects, but they are not as powerful as PowerMock's. One example is that PowerMock can spy on final objects.

Call private method with PowerMock

Post summary: How to invoke a private method with PowerMock.

This post is part of the PowerMock series examples. The code shown in the examples below is available in the GitHub java-samples/junit repository.

Unit test private method

Mainly public methods are being tested, so it is a very rare case where you want to unit test a private method. PowerMock provides utilities that can invoke private methods via reflection and get output which can be tested.

Code to be tested

Below is a sample code that shows a class with a private method in it. It does nothing else but increase the X and Y coordinates of the Point given as argument.

public class PowerMockDemo {

	private Point privateMethod(Point point) {
		return new Point(point.getX() + 1, point.getY() + 1);
	}
}

Unit test

Assume that this private method has to be unit tested for some reason. In order to do so, you have to use PowerMock's Whitebox.invokeMethod(). You give it an instance of the object, the method name as a String and the arguments to call the method with. In the example below the argument is new Point(11, 11).

import org.junit.Before;
import org.junit.Test;
import org.powermock.reflect.Whitebox;

import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;

public class PowerMockDemoTest {

	private PowerMockDemo powerMockDemo;

	@Before
	public void setUp() {
		powerMockDemo = new PowerMockDemo();
	}

	@Test
	public void testCallPrivateMethod() throws Exception {
		Point actual = Whitebox.invokeMethod(powerMockDemo, 
			"privateMethod", new Point(11, 11));

		assertThat(actual.getX(), is(12));
		assertThat(actual.getY(), is(12));
	}
}

Conclusion

PowerMock provides utilities which use reflection to do certain things, as shown in the example above to invoke a private method.

Soft assertions that do not fail JUnit test

Post summary: Code examples how to use assertions that do not fail the unit test immediately.

The code shown in the examples below is available in the GitHub java-samples/jersey1 repository.

Unit vs Functional testing

The unit testing paradigm states that each test exercises a particular code behavior. So in a perfect world, one unit test would have one assertion which defines the unit test result: either passed or failed. This is why unit testing frameworks provide only asserts which stop further execution of the current test method. In functional testing, one test usually verifies several conditions. Not debating whether this is good or bad: assume you are doing GUI testing; once you have opened a particular page, you'd better do as much verification as possible to reduce the risk of bugs. Having this page opened over and over for every single check is not the most efficient way of testing. This is why, when you run functional tests, you need some kind of assert that indicates whether a check passed or failed but lets the test continue if no critical issue is present. These are generally called "soft" asserts.

Soft assertions and JUnit

TestNG provides the org.testng.asserts.SoftAssert class for soft asserts, as it is more oriented towards functional testing. JUnit is a unit testing framework, so it does not provide any soft assertions. In order to create such behavior, additional libraries are needed.

AssertJ

AssertJ is a library providing fluent assertions. It is very similar to Hamcrest, which comes by default with JUnit. Along with all the asserts, AssertJ provides soft assertions with its SoftAssertions class inside the org.assertj.core.api package.

Usage

Below is a functional test run against the Dropwizard stub described in the Build a RESTful stub server with Dropwizard post. It is important to instantiate a new SoftAssertions object before the test verifications and to call its assertAll() method at the end to collect the results. The best way to do this is to use JUnit's @Before and @After annotated methods.

package com.automationrhapsody.jersey1;

import com.automationrhapsody.jersey1.model.Person;
import com.automationrhapsody.jersey1.rules.PersonServiceJerseyClient;

import java.util.List;

import org.assertj.core.api.SoftAssertions;
import org.junit.After;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Test;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.core.Is.is;

public class PersonServiceTest {

	@ClassRule
	public static final PersonServiceJerseyClient CLIENT 
		= new PersonServiceJerseyClient();

	private SoftAssertions softAssertions;

	private Person person;

	@Before
	public void setUp() {
		person = new Person();
		person.setId(123);
		person.setFirstName("First Name");
		person.setLastName("Last Name");
		person.setEmail("Email");

		softAssertions = new SoftAssertions();
	}

	@After
	public void tearDown() {
		softAssertions.assertAll();
	}

	@Test
	public void testAllOperations() {
		String saveResult = CLIENT.save(person);
		assertThat(saveResult, is("Added Person with id=123"));

		Person actual = CLIENT.get(person.getId());
		softAssertions
			.assertThat(actual.getId()).isEqualTo(person.getId());
		softAssertions
			.assertThat(actual.getFirstName()).isEqualTo(person.getFirstName());
		softAssertions
			.assertThat(actual.getLastName()).isEqualTo(person.getLastName());
		softAssertions
			.assertThat(actual.getEmail()).isEqualTo(person.getEmail());

		String result = CLIENT.remove();
		assertThat(result, is("Last person remove. Total count: 4"));
	}
}

Conclusion

Soft assertions are needed in case functional tests are being run with JUnit. Since such assertions are not available out of the box, because JUnit is targeted at unit tests, soft assertions can be used from external libraries such as AssertJ.

Verify static method was called with PowerMock

Post summary: How to verify that static method was called during a unit test with PowerMock.

This post is part of the PowerMock series examples. The code shown in the examples below is available in the GitHub java-samples/junit repository.

In the Mock static methods in JUnit with PowerMock example post, I have given information about PowerMock and how to mock a static method. In the current post, I will demonstrate how to verify that a given static method was called during the execution of a unit test.

Example class for unit test

We are going to unit test a class called LocatorService that internally uses a static method from the utility class Utils. The method randomDistance(int distance) in Utils returns a random value, hence it has no predictable behavior, and the only way to test deterministically is by mocking it:

public class LocatorService {

	public Point generatePointWithinDistance(Point point, int distance) {
		return new Point(point.getX() + Utils.randomDistance(distance), 
			point.getY() + Utils.randomDistance(distance));
	}
}

And the Utils class is:

import java.util.Random;

public final class Utils {

	private static final Random RAND = new Random();

	private Utils() {
		// Utilities class
	}

	public static int randomDistance(int distance) {
		return RAND.nextInt(distance + distance) - distance;
	}
}

Nota bene: it is good code design practice to make utility classes final and with a private constructor.

Verify static method call

This is the full code. Additional details are shown below it.

package com.automationrhapsody.junit;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.internal.verification.VerificationModeFactory;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest(Utils.class)
public class LocatorServiceTest {

	private LocatorService locatorServiceUnderTest;

	@Before
	public void setUp() {
		PowerMockito.mockStatic(Utils.class);

		locatorServiceUnderTest = new LocatorService();
	}

	@Test
	public void testStaticMethodCall() {
		locatorServiceUnderTest
			.generatePointWithinDistance(new Point(11, 11), 1);
		locatorServiceUnderTest
			.generatePointWithinDistance(new Point(11, 11), 234);

		PowerMockito.verifyStatic(VerificationModeFactory.times(2));
		Utils.randomDistance(1);

		PowerMockito.verifyStatic(VerificationModeFactory.times(2));
		Utils.randomDistance(234);

		PowerMockito.verifyNoMoreInteractions(Utils.class);
	}
}

Explanation

The class containing the static method should be prepared for mocking with the PowerMockito.mockStatic(Utils.class) code. Then the call to the static method is done inside the locatorServiceUnderTest.generatePointWithinDistance() method. In this test, it is intentionally called 2 times with different distances (1 and 234) in order to show the verification, which consists of two parts. The first part is PowerMockito.verifyStatic(VerificationModeFactory.times(2)), which tells PowerMock to verify that the static method was called 2 times. The second part is Utils.randomDistance(1), which tells exactly which static method should be verified. Instead of 1 in the brackets you can use anyInt() or anyObject(); 1 is used to make the verification explicit. As you can see there is a second verification that the randomDistance() method was called with 234 as well: PowerMockito.verifyStatic(VerificationModeFactory.times(2)); Utils.randomDistance(234);.
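
If the exact argument value is not important, the same verification can be done with a matcher. A sketch, assuming a static import of org.mockito.Matchers.anyInt, would be:

// randomDistance() runs twice per generatePointWithinDistance() call,
// so the two invocations in the test give 4 calls in total
PowerMockito.verifyStatic(VerificationModeFactory.times(4));
Utils.randomDistance(anyInt());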

Conclusion

PowerMock adds additional power to the Mockito mocking library, which is described in the Mock JUnit tests with Mockito example post. In the current post, I have shown how to verify that a static method was called. It is very specific, as the verification actually consists of two steps.

Manage Microsoft Excel files in Java with Apache POI

Post summary: Code examples how to manage Microsoft Excel documents with Apache POI Java library.

Code examples in the current post can be found in the GitHub java-samples/apache-poi repository.

The two most important use cases where MS Excel documents are useful in automation testing are:

  • Data-driven testing where test data is input from an Excel file
  • Output test results to an Excel file for stakeholders

So far I did not need to use Excel in my automation. Where data-driven testing was needed, I either used a JUnit data provider (see more in the Data driven testing with JUnit parameterized tests and Data driven testing with JUnit and Gradle posts) or fed data in CSV format and read it with the Apache Commons CSV library. My colleague Petar Yordanov introduced me to Apache POI, which is very powerful for managing Excel and other MS format documents.

Apache POI

The Apache POI Project's mission is to create and maintain Java APIs for manipulating various file formats based upon the Office Open XML standards (OOXML) and Microsoft's OLE 2 Compound Document format (OLE2). In short, you can read and write MS Excel files using Java, and in addition MS Word and MS PowerPoint files. Apache POI is your Java Excel solution (for Excel 97-2008). See more on the Apache POI home page. Full details on the supported POI formats can be found in the Apache POI Component Overview. From this page, you can navigate to more details on how to work with a specific type of document. Full details on Excel management can be found in the Apache POI Spreadsheet Guide.

Usage

Include in Maven project

In the current example, I will use the poi-ooxml library, which is for the XML-based formats introduced in Microsoft Office 2007.

<dependency>
	<groupId>org.apache.poi</groupId>
	<artifactId>poi-ooxml</artifactId>
	<version>3.16</version>
</dependency>

Creating a Workbook

An Excel file is represented by an XSSFWorkbook object from the org.apache.poi.xssf.usermodel package. A workbook can be created empty or from an existing file:

// Create empty workbook
XSSFWorkbook newWorkBook = new XSSFWorkbook();
// Create workbook from existing file
XSSFWorkbook existingWorkBook = new XSSFWorkbook(new File("fileName.xlsx"));

Manage sheets

The XSSFWorkbook class provides different methods for sheet management: createSheet(), cloneSheet(), getNumberOfSheets(), getSheet(), getSheetAt(), getSheetName(), getSheetIndex(). Create or get sheet methods return an XSSFSheet object.

Manage sheet content

Once you have the XSSFSheet object, either from create or get, you can manage rows in it. Some of the methods are: createRow(), getRow(), removeRow(), getLastRowNum(), getPhysicalNumberOfRows(), rowIterator(). Create or get row methods return an XSSFRow object. Inside a row you can manage cells. Some of the methods are: createCell(), getCell(), removeCell(), getLastCellNum(), getPhysicalNumberOfCells(), cellIterator(). Create or get cell methods return an XSSFCell object. Some of its methods are: setCellComment(), setCellFormula(), setCellStyle(), setCellType(), setCellValue().
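
A minimal sketch putting these calls together (class and file names are illustrative):

import java.io.FileOutputStream;

import org.apache.poi.xssf.usermodel.XSSFCell;
import org.apache.poi.xssf.usermodel.XSSFRow;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class MinimalPoiExample {

	public static void main(String[] args) throws Exception {
		// Create a workbook with one sheet, one row and one cell
		XSSFWorkbook workBook = new XSSFWorkbook();
		XSSFSheet sheet = workBook.createSheet("Example");
		XSSFRow row = sheet.createRow(0);
		XSSFCell cell = row.createCell(0);
		cell.setCellValue("Hello POI");
		// Save to disk and release resources
		try (FileOutputStream out = new FileOutputStream("example.xlsx")) {
			workBook.write(out);
		}
		workBook.close();
	}
}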

Manage Excel documents

Below is an example of a simple ExcelWriter class that writes to an Excel file. This class is used in SampleExcelApp to show how to write text. Reading from Excel is shown in SampleExcelAppTest, which verifies the correct saving of the document.

ExcelWriter

package com.automationrhapsody.apachepoi;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.poi.ss.usermodel.CellType;
import org.apache.poi.ss.usermodel.FillPatternType;
import org.apache.poi.ss.usermodel.IndexedColors;
import org.apache.poi.xssf.usermodel.XSSFCell;
import org.apache.poi.xssf.usermodel.XSSFCellStyle;
import org.apache.poi.xssf.usermodel.XSSFRow;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class ExcelWriter {

	private final XSSFWorkbook workBook;

	private Map<String, Integer> nextRows = new HashMap<>();
	private String currentSheet;
	private boolean isSaved;

	public ExcelWriter() {
		workBook = new XSSFWorkbook();
	}

	public void writeAndClose(File excelFile) {
		if (isSaved) {
			throw new IllegalArgumentException("Workbook already saved!");
		}
		// Use try-with-resources so the output stream is always closed
		try (FileOutputStream out = new FileOutputStream(excelFile)) {
			workBook.write(out);
			workBook.close();
			isSaved = true;
		} catch (IOException ioe) {
			// TODO log
		}
	}

	public void switchToSheet(String sheetName) {
		currentSheet = sheetName;
		if (workBook.getSheet(sheetName) == null) {
			workBook.createSheet(currentSheet);
			nextRows.put(currentSheet, 0);
		}
	}

	public void writeRow(String... values) {
		XSSFSheet sheet = workBook.getSheet(currentSheet);
		int nextRow = nextRows.get(currentSheet);
		XSSFRow row = sheet.createRow(nextRow);

		for (int i = 0; i < values.length; i++) { 
			XSSFCell cell = row.createCell(i);
			cell.setCellType(CellType.STRING);
			cell.setCellValue(values[i]);
		}

		nextRows.put(currentSheet, nextRow + 1);
	}

	public void setCellColour(int rowNumber, int cellNumber,
			IndexedColors colour) {
		XSSFCellStyle style = workBook.createCellStyle();
		style.setFillForegroundColor(colour.getIndex());
		style.setFillPattern(FillPatternType.SOLID_FOREGROUND);

		int nextRow = nextRows.get(currentSheet);
		if (rowNumber > nextRow) {
			// TODO log or exception?
			rowNumber = nextRow;
		}
		XSSFSheet sheet = workBook.getSheet(currentSheet);
		int lastCell = sheet.getRow(rowNumber - 1).getLastCellNum();
		if (cellNumber > lastCell) {
			// TODO log or exception?
			cellNumber = lastCell;
		}

		sheet.getRow(rowNumber - 1).getCell(cellNumber - 1)
			.setCellStyle(style);
	}
}

SampleExcelApp

package com.automationrhapsody.apachepoi;

import java.io.File;

import org.apache.poi.ss.usermodel.IndexedColors;

public class SampleExcelApp {

	public static void main(String[] args) {
		String sheetName = "SheetName";

		ExcelWriter excelWriter = new ExcelWriter();
		excelWriter.switchToSheet(sheetName);
		excelWriter.writeRow("A1-blue", "B1", "C1");
		excelWriter.writeRow("A2", "B2", "C2");
		excelWriter.setCellColour(1, 1, IndexedColors.BLUE);

		excelWriter.switchToSheet("NewSheetName");
		excelWriter.writeRow("A1", "B1");
		excelWriter.writeRow("A2", "B2-red");
		excelWriter.setCellColour(2, 2, IndexedColors.RED);

		excelWriter.switchToSheet(sheetName);
		excelWriter.writeRow("A3", "B3", "C3");
		excelWriter.writeRow("A4", "B4", "C4");

		File excelFile = new File("testReport.xlsx");
		excelWriter.writeAndClose(excelFile);
	}
}

SampleExcelAppTest

package com.automationrhapsody.apachepoi;

import java.io.File;

import org.apache.poi.ss.usermodel.IndexedColors;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.junit.BeforeClass;
import org.junit.Test;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

public class SampleExcelAppTest {

	private static final File FILE = new File("testReport.xlsx");

	private static XSSFWorkbook workbookUnderTest;

	@BeforeClass
	public static void beforeClass() throws Exception {
		if (FILE.exists()) {
			FILE.delete();
		}

		SampleExcelApp.main(null);

		workbookUnderTest = new XSSFWorkbook(FILE);
	}

	@Test
	public void testNumberOfSheets() {
		assertThat(workbookUnderTest.getNumberOfSheets(), is(2));
	}

	@Test
	public void testSheetName() {
		assertThat(workbookUnderTest.getSheetName(0), is("SheetName"));
		assertThat(workbookUnderTest.getSheetName(1), is("NewSheetName"));
	}

	@Test
	public void testFirstSheetContent() {
		XSSFSheet sheet = workbookUnderTest.getSheetAt(0);
		assertThat(sheet.getLastRowNum(), is(3));
		assertThat(sheet.getRow(3).getLastCellNum(), is((short) 3));
		assertThat(sheet.getRow(3).getCell(2).getStringCellValue(), 
				is("C4"));

		assertThat(sheet.getRow(0).getCell(0).getCellStyle()
			.getFillForegroundColor(), is(IndexedColors.BLUE.getIndex()));
		assertThat(sheet.getRow(1).getCell(1).getCellStyle()
			.getFillForegroundColor(), 
				is(IndexedColors.AUTOMATIC.getIndex()));
	}

	@Test
	public void testSecondSheetContent() {
		XSSFSheet sheet = workbookUnderTest.getSheetAt(1);
		assertThat(sheet.getLastRowNum(), is(1));
		assertThat(sheet.getRow(1).getLastCellNum(), is((short) 2));
		assertThat(sheet.getRow(1).getCell(1).getStringCellValue(), 
				is("B2-red"));

		assertThat(sheet.getRow(1).getCell(1).getCellStyle()
			.getFillForegroundColor(), is(IndexedColors.RED.getIndex()));	
		assertThat(sheet.getRow(0).getCell(0).getCellStyle()
			.getFillForegroundColor(), 
				is(IndexedColors.AUTOMATIC.getIndex()));
	}
}

Conclusion

Apache POI is a very powerful toolkit for managing MS documents, especially Excel, which might be needed in your test automation for reporting of data-driven testing.

Manage and automatically select needed WebDriver in Java 8 Selenium project

Post summary: Example code showing how to efficiently manage and automatically select the needed local WebDriver using a Java 8 method reference used as a lambda expression.

Code examples in the current post can be found in the GitHub selenium-samples-java/design-patterns repository.

Java 8 features

In this example, the Java 8 features lambda expressions and method references are used. More on Java 8 features can be found in the Java 8 features – Lambda expressions, Interface changes, Stream API, DateTime API post.

Functional interface

Before explaining lambdas, it is necessary to understand the idea of a functional interface, as they are leveraged for use with lambda expressions. A functional interface is an interface that has only one abstract method that is to be implemented. A functional interface may or may not have default or static methods (again a new Java 8 feature). Although not mandatory, a good practice is to annotate the functional interface with @FunctionalInterface.
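
An illustrative example of a hypothetical functional interface, not part of the project code:

@FunctionalInterface
public interface StringTransformer {

	// The single abstract method that a lambda expression will implement
	String transform(String input);

	// Default methods are allowed and do not break the functional property
	default String transformTwice(String input) {
		return transform(transform(input));
	}
}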

Lambda expressions

There is no such term in Java, but you can think of a lambda expression as an anonymous method. A lambda expression is a piece of code that provides an inline implementation of a functional interface, eliminating the need for anonymous classes. Lambda expressions facilitate functional programming and ease development by reducing the amount of code needed.

Method reference

Sometimes when using a lambda expression all you do is call a method by name. A method reference provides an easy way to call the method, making the code more readable.

Managing WebDriver

The proposed solution for managing WebDriver consists of an enumeration called Browser and a class called WebDriverFactory. Another important thing is that the web driver binaries should be placed in a folder named webdrivers and named following a special pattern.

Browser enum

The code is shown below:

package com.automationrhapsody.designpatterns;

import java.util.Arrays;
import java.util.function.Supplier;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

public enum Browser {
	FIREFOX("gecko", FirefoxDriver::new),
	CHROME("chrome", ChromeDriver::new),
	IE("ie", InternetExplorerDriver::new);

	private String name;
	private Supplier<WebDriver> driverSupplier;

	Browser(String name, Supplier<WebDriver> driverSupplier) {
		this.name = name;
		this.driverSupplier = driverSupplier;
	}

	public String getName() {
		return name;
	}

	public WebDriver getDriver() {
		return driverSupplier.get();
	}

	public static Browser fromString(String value) {
		for (Browser browser : values()) {
			if (value != null && value.toLowerCase().equals(browser.getName())) {
				return browser;
			}
		}
		System.out.println("Invalid driver name passed as 'browser' property. "
			+ "One of: " + Arrays.toString(values()) + " is expected.");
		return FIREFOX;
	}
}

The enumeration's constructor has the Supplier functional interface as a parameter. When the constructor is called, the method reference FirefoxDriver::new is passed as a lambda expression whose purpose is to instantiate a new Firefox driver. If only a lambda expression were used it would be: () -> new FirefoxDriver(). Notice that the method reference is much shorter and easier to read. The getDriver() method invokes Supplier's get() method, which is implemented by the lambda expression, so the lambda expression is executed, hence instantiating a new web driver. With this approach the Firefox web driver object is created only when getDriver() is called.

WebDriverFactory

Code is:

package com.automationrhapsody.designpatterns;

import java.io.File;

import org.openqa.selenium.WebDriver;

class WebDriverFactory {

	private static final String WEB_DRIVER_FOLDER = "webdrivers";

	public static WebDriver createWebDriver() {
		Browser browser = Browser.fromString(System.getProperty("browser"));
		String arch = System.getProperty("os.arch").contains("64") ? "64" : "32";
		String os = System.getProperty("os.name").toLowerCase().contains("win") 
				? "win.exe" : "linux";
		String driverFileName = browser.getName() + "driver-" + arch + "-" + os;
		String driverFilePath = driversFolder(new File("").getAbsolutePath());
		System.setProperty("webdriver." + browser.getName() + ".driver", 
				driverFilePath + driverFileName);
		return browser.getDriver();
	}

	private static String driversFolder(String path) {
		File file = new File(path);
		for (String item : file.list()) {
			if (WEB_DRIVER_FOLDER.equals(item)) {
				return file.getAbsolutePath() + "/" + WEB_DRIVER_FOLDER + "/";
			}
		}
		return driversFolder(file.getParent());
	}
}

This code recursively searches for a folder named webdrivers in the project. This is done because in a multi-module project the root folder differs when running from the IDE and from Maven, and the web drivers cannot otherwise be found from both simultaneously. Once the folder is found, the proper web driver is selected based on OS and architecture. The code reads the browser system property, which can be passed from outside, making the selection of the web driver easy to configure. The important part is to have web drivers named with the special naming convention.
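
A usage sketch (the URL is illustrative), with the browser selected by running tests with, for example, -Dbrowser=chrome:

WebDriver driver = WebDriverFactory.createWebDriver();
driver.get("https://automationrhapsody.com/");
driver.quit();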

Web drivers naming convention

In order for the code above to work, the web drivers should be placed in the webdrivers folder in the project and their names should match the pattern {DRIVER_NAME}-{ARCHITECTURE}-{OS}, e.g. geckodriver-64-win.exe for Windows 64-bit and geckodriver-64-linux for Linux 64-bit.
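
Following this convention, a populated folder might look like the listing below; the file names are derived from the pattern and the Browser enum names, not an exhaustive list:

webdrivers/
	chromedriver-64-win.exe
	chromedriver-64-linux
	geckodriver-64-win.exe
	geckodriver-64-linux
	iedriver-64-win.exe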

Conclusion

The proposed solution is a very elegant way to manage your web drivers and select the proper one just by passing the -Dbrowser={BROWSER} Java system property.

Run Dropwizard application in Docker with templated configuration using environment variables

Post summary: How to run Dropwizard application inside a Docker container with configuration template file which later gets replaced by environment variables.

In the Build a RESTful stub server with Dropwizard post I have described how to build a Dropwizard application and run it. In the current post, I will show how this application can be put into a Docker container and run with different configurations, based on different environment variables. The code sample here can be found as a buildable and runnable project in the GitHub sample-dropwizard-rest-stub repository.

Dropwizard templated config

Dropwizard supports replacing variables in the config file with environment variables if such are defined. The first step is to make the config file with a special notation.

version: ${ENV_VARIABLE_VERSION:- 0.0.2}

Dropwizard substitution is based on Apache's StrSubstitutor library. ${ENV_VARIABLE_VERSION:- 0.0.2} means that Dropwizard will search for an environment variable with the name ENV_VARIABLE_VERSION and replace the placeholder with its value. If no variable is configured then it will be replaced with the default value of 0.0.2. If a default is not needed then just ${ENV_VARIABLE_VERSION} can be used.

The next step is to make Dropwizard substitute the config file with environment variables on its startup, before the file is read. This is done with the following code:

@Override
public void initialize(Bootstrap<RestStubConfig> bootstrap) {
	bootstrap.setConfigurationSourceProvider(new SubstitutingSourceProvider(
			bootstrap.getConfigurationSourceProvider(), 
			new EnvironmentVariableSubstitutor(false)));
}

EnvironmentVariableSubstitutor is used with false in order to suppress the throwing of UndefinedEnvironmentVariableException in case an environment variable is not defined. There is an approach to pack the config.yml file into the JAR file and use it from there; then bootstrap.getConfigurationSourceProvider() should be changed to path -> Thread.currentThread().getContextClassLoader().getResourceAsStream(path).
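
A sketch of that classpath-based variant of the initialize() method would then be:

@Override
public void initialize(Bootstrap<RestStubConfig> bootstrap) {
	// Read config.yml from the classpath instead of the file system,
	// still substituting environment variables before the file is parsed
	bootstrap.setConfigurationSourceProvider(new SubstitutingSourceProvider(
			path -> Thread.currentThread().getContextClassLoader()
					.getResourceAsStream(path),
			new EnvironmentVariableSubstitutor(false)));
}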

Docker

Docker is a platform for building software inside containers which then can be deployed in different environments. A container is like a virtual machine, with the significant difference that it does not run a full operating system. In this way the host's resources are optimized: the container consumes only as much memory as needed by the application, while a virtual machine consumes memory to run an operating system as well. Containers have resource (CPU, RAM, HDD) isolation so that the application sees the container as a separate operating system.

Dockerfile

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession. In the given example the Dockerfile looks like this:

FROM openjdk:8u121-jre-alpine
MAINTAINER Automation Rhapsody https://automationrhapsody.com/

WORKDIR /var/dropwizard-rest-stub

ADD target/sample-dropwizard-rest-stub-1.0-SNAPSHOT.jar /var/dropwizard-rest-stub/dropwizard-rest-stub.jar
ADD config-docker.yml /var/dropwizard-rest-stub/config.yml

EXPOSE 9000 9001

ENTRYPOINT ["java", "-jar", "dropwizard-rest-stub.jar", "server", "config.yml"]

The Dockerfile starts with FROM openjdk:8u121-jre-alpine, which instructs Docker to use this already prepared image as a base image for the container. At https://hub.docker.com/ there are numerous images maintained by different organizations or individuals that provide different frameworks and tools. They can save you a lot of work by allowing you to skip the configuration and installation of general things. In the given case openjdk:8u121-jre-alpine has OpenJDK JRE 1.8.121 already installed with minimalist Linux libraries. MAINTAINER documents who the author of this file is. WORKDIR sets the working directory of the container. Then ADD copies the JAR file and config.yml to the container. ENTRYPOINT configures the container to be run as an executable; this instruction translates to the java -jar dropwizard-rest-stub.jar server config.yml command. More info can be found at the Dockerfile reference link.

Build Docker image

The Docker image can be built with the following command:

docker build -t dropwizard-rest-stub .

dropwizard-rest-stub is the name (tag) of the image; it is later used to run a container from this image. Note that the dot at the end is mandatory, as it specifies the build context (the current directory).

Run Docker container

docker run -it -p 9000:9000 -p 9001:9001 -e ENV_VARIABLE_VERSION=1.1.1 dropwizard-rest-stub

The name of the image, dropwizard-rest-stub, is put at the end. With -p 9000:9000, port 9000 from the guest machine is exposed to the host machine, otherwise the container will not be accessible. With -e ENV_VARIABLE_VERSION=1.1.1, an environment variable is set. If not passed, it will be substituted with the default 0.0.2 in the config.yml file as described above. In case of many environment variables, it is more comfortable to put them in a file with one KEY=VALUE pair per line, then use this file with --env-file= instead of specifying each variable individually.

Conclusion

Dropwizard has a very easy mechanism for making configuration dependent on environment variables. This makes Dropwizard applications extremely suitable to be built into Docker containers and run in different environments just by changing the environment variables being passed to the container.


Data driven testing with JUnit and Gradle

Last Updated on by

Post summary: How to do data-driven testing with JUnit and Gradle.

In the Data driven testing with JUnit parameterized tests post, I've shown how to create a data-driven JUnit test. It should be annotated with @RunWith(Parameterized.class).

Older Gradle and Parameterized.class

Gradle cannot run JUnit tests annotated with @RunWith(Parameterized.class). There is an official Gradle bug which states the issue is resolved in Gradle 2.12, so if you are using an older Gradle then the current post is suitable for you.

Data-Driven JUnit tests

There is a library called junit-dataprovider which is easy to use. What you have to do to use it is:

  1. Annotate the test class
  2. Define test data
  3. Create test and use test data

Annotate the test class

The class needs to be run with a specialized runner in order to be treated as a data-driven one. The runner is com.tngtech.java.junit.dataprovider.DataProviderRunner. The class looks like this:

import com.tngtech.java.junit.dataprovider.DataProviderRunner;
import org.junit.runner.RunWith;

@RunWith(DataProviderRunner.class)
public class LocatorDataProviderTest {
}

Define test data

Test data is seeded from a static method: public static Object[] dataProvider(). This method returns an array of Object arrays, where each inner array is one row with input and expected output test data. The method is annotated with @DataProvider. Here is how test data is defined:

@DataProvider
public static Object[] dataProvider() {
	return new Object[][] {
		{-1, -1, new Point(1, 1)},
		{-1, 0, new Point(1, 0)},
		{-1, 1, new Point(1, 1)},

		{0, -1, new Point(0, 1)},
		{0, 0, MOCKED_POINT},
		{0, 1, MOCKED_POINT},

		{1, -1, new Point(1, 1)},
		{1, 0, MOCKED_POINT},
		{1, 1, MOCKED_POINT}
	};
}

Create test and use test data

In order to use the test data in some test method, it should be annotated with @UseDataProvider("dataProvider"), where "dataProvider" is the name of the static method which generates the test data. Another mandatory requirement is that the test method should have the same number and types of arguments as each row of the test data array. Here is how the test method looks:

@Test
@UseDataProvider("dataProvider")
public void testLocateResults(int x, int y, Point expected) {
	assertTrue(PointUtils.arePointsEqual(expected, 
				locatorUnderTest.locate(x, y)));
}

Putting it all together

Combining all steps into one class leads to the code below:

import com.automationrhapsody.junit.utils.PointUtils;
import com.tngtech.java.junit.dataprovider.DataProvider;
import com.tngtech.java.junit.dataprovider.DataProviderRunner;
import com.tngtech.java.junit.dataprovider.UseDataProvider;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertTrue;
import static org.mockito.Matchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

@RunWith(DataProviderRunner.class)
public class LocatorDataProviderTest {

	private static final Point MOCKED_POINT = new Point(11, 11);

	private LocatorService locatorServiceMock = mock(LocatorService.class);

	private Locator locatorUnderTest;

	@DataProvider
	public static Object[] dataProvider() {
		return new Object[][] {
			{-1, -1, new Point(1, 1)},
			{-1, 0, new Point(1, 0)},
			{-1, 1, new Point(1, 1)},

			{0, -1, new Point(0, 1)},
			{0, 0, MOCKED_POINT},
			{0, 1, MOCKED_POINT},

			{1, -1, new Point(1, 1)},
			{1, 0, MOCKED_POINT},
			{1, 1, MOCKED_POINT}
		};
	}

	@Before
	public void setUp() {
		when(locatorServiceMock.geoLocate(any(Point.class)))
				.thenReturn(MOCKED_POINT);

		locatorUnderTest = new Locator(locatorServiceMock);
	}

	@Test
	@UseDataProvider("dataProvider")
	public void testLocateResults(int x, int y, Point expected) {
		assertTrue(PointUtils.arePointsEqual(expected, 
				locatorUnderTest.locate(x, y)));
	}
}

Benefits

Using junit-dataprovider has one huge benefit over JUnit's Parameterized runner: a data provider is used only by the test methods annotated with its name, while JUnit's Parameterized runner runs each and every test method with the given data provider. In one test class, you can define several data providers as different static methods and use them in different test methods, as in the sketch below. This is not possible with JUnit's Parameterized runner.
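
For illustration, here is a minimal sketch of a class with two data providers bound to different test methods (class, method names, and data values are made up for the example):

import com.tngtech.java.junit.dataprovider.DataProvider;
import com.tngtech.java.junit.dataprovider.DataProviderRunner;
import com.tngtech.java.junit.dataprovider.UseDataProvider;

import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertTrue;

@RunWith(DataProviderRunner.class)
public class MultipleProvidersTest {

	@DataProvider
	public static Object[][] positiveNumbers() {
		return new Object[][] {{1}, {2}, {3}};
	}

	@DataProvider
	public static Object[][] negativeNumbers() {
		return new Object[][] {{-1}, {-2}, {-3}};
	}

	@Test
	@UseDataProvider("positiveNumbers")
	public void testPositive(int number) {
		assertTrue(number > 0);
	}

	@Test
	@UseDataProvider("negativeNumbers")
	public void testNegative(int number) {
		assertTrue(number < 0);
	}
}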

Conclusion

junit-dataprovider is a very nice library which makes JUnit 4 data-driven testing easy. Even if you do not have issues with Gradle, I would still recommend using it instead of the standard Parameterized runner, because it gives you the flexibility to bind a data provider method to a specific unit test method.


Coloured log files in Linux

Last Updated on by

Post summary: How to colour your log files for better perception under Linux.

I'm far away from being a Linux guru and honestly, I like it that way. In order to be effective as a QA, you need minimal knowledge of how to do certain things under Linux. This post is devoted to working with logs.

Chaining commands

Linux offers a possibility to combine several commands by chaining them. In the current post, I will use just one of them, the PIPE (|) operator. By using it, the output of one command is used as input for another.

I would strongly recommend reading the following post if you are interested in chaining Linux commands: 10 Useful Chaining Operators in Linux with Practical Examples.

Useful commands

The commands below are the ones I use on a daily basis when working with logs. I will show basic usage; if you need more detail on a certain command, you can type man <command>, e.g. man cat, in a Linux console and it will display more information.

grep

It is used to search in text files or print lines matching some pattern. Usage: grep text filename.log. If text contains spaces, it should be wrapped in single quotes (') or double quotes ("). If text contains a single quote, then you have to wrap it with double quotes and vice versa.

cat

Prints file content to standard output. Usage: cat filename.log. You can concatenate several files: cat file1.log file2.log. A drawback of this command shows when you have large files. It combines very well with grep to search the output of the file: cat filename.log | grep text.

zcat

Prints the content of a zipped file. Usage: zcat filename.gz. Combines with grep: zcat filename.gz | grep text.

tail

Prints the last 10 lines of a file. Usage: tail filename.log. The most valuable tail usage is with the -f option: tail -f filename.log. This monitors the file in real time and outputs all new text appended to the file. You can also monitor several files: tail -f file1.log file2.log.

less

Used for paging through a file. It shows one page and with the up and down arrow keys you can scroll through the file. Usage: less filename.txt. In order to exit, just type q. Valuable with this command is that you can type a search term /text and then with n go to the next appearance and with N go to the previous one.

Colours

The commands above are nice, but using colours aids a much better perception of the information in the files. In order to use colours, the perl -pe command will be chained to colour the output of the commands described above. The syntax is: perl -pe 's/^.*INFO.*$/\e[0;36;40m$&\e[0m/g'. It is quite a complex expression and I will try to explain it in detail.

Match text to be highlighted

^.*INFO.*$ is a regular expression that matches the text to be highlighted. The character ^ means from the beginning of the string or line, the character $ means to the end of the string or line. The group .* matches any sequence of characters. So this regular expression inspects every string or line and matches those that contain INFO.

Text effects

\e[0;36;40m is the colouring part of the expression. 0 is the value of the ANSI escape code. Possible values for the escape code are shown in the table below. Note that not all of them are supported by all operating systems.

Code Effect
0 Reset / Normal
1 Bold or increased intensity
2 Faint (decreased intensity)
3 Italic: on
4 Underline: Single
5 Blink: Slow
6 Blink: Rapid
7 Image: Negative
8 Conceal
9 Crossed-out

More codes can be found in ANSI escape code wiki.

Text colour

36 from \e[0;36;40m is the colour code of the text. The actual colour differs based on the escape code. Possible combinations of escape and colour codes are:

Code Colour Code Colour
0;30 Black 1;30 Dark Grey
0;31 Red 1;31 Light Red
0;32 Green 1;32 Lime
0;33 Dark Yellow 1;33 Yellow
0;34 Blue 1;34 Light Blue
0;35 Purple 1;35 Magenta
0;36 Dark Cyan 1;36 Cyan
0;37 Light Grey 1;37 White

Background colour

40m from \e[0;36;40m is the colour code of the background. Background colours are:

Code Colour
40m Black
41m Red
42m Green
43m Yellow
44m Blue
45m Purple
46m Cyan
47m Light Grey

Sample colour scheme for logs

One possible colour scheme I like is: cat application.log | perl -pe 's/^.*FATAL.*$/\e[1;37;41m$&\e[0m/g; s/^.*ERROR.*$/\e[1;31;40m$&\e[0m/g; s/^.*WARN.*$/\e[0;33;40m$&\e[0m/g; s/^.*INFO.*$/\e[0;36;40m$&\e[0m/g; s/^.*DEBUG.*$/\e[0;37;40m$&\e[0m/g' which produces the following output:

[Image: coloured log output in a Linux console]

Conclusion

Having coloured logs makes it much easier to investigate issues. Linux provides tooling for better visualisation, so it is good to take advantage of it.


Mutation testing for Java with PITest

Last Updated on by

Post summary: Introduction to mutation testing and examples with PITest tool for Java.

Mutation testing

Mutation testing is a form of white-box testing. It is used to design new unit tests and evaluate the quality of the existing ones. Mutation testing involves modifying program code in small ways, based on well-defined mutation operators that either mimic typical programming errors (such as using the wrong operator or variable name) or force the creation of valuable tests (such as dividing each expression by zero). Each mutated version is called a mutant. Existing unit tests are run against this mutant. If some unit test fails, the mutant is killed. If no unit test fails, the mutant survives. Test suites are measured by the percentage of mutants they kill. New tests can be designed to kill additional mutants. The purpose of mutation testing is to help the tester develop effective tests or locate weaknesses in the test data used in the existing tests.
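
As a small, hypothetical illustration of what a mutation operator does, consider a conditionals-boundary mutator that changes a comparison operator:

// Original code
public boolean isAdult(int age) {
	return age >= 18;
}

// Mutant: the conditionals-boundary operator changes >= to >
public boolean isAdult(int age) {
	return age > 18;
}

// A test with age 17 or 21 passes against both versions, so the mutant
// survives; only a test with the boundary value 18 kills it.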

It is not very often that I get surprised by discovering a new testing technique I've never heard about, so I must give credit to Alexander Todorov since I learned this one from his presentation.

PITest

Mutation testing can be done manually by changing program code and running the tests, but this is not really effective and can lead to serious problems if you commit mutated code by mistake. The most effective and recommended way of doing mutation testing is by using tools. PITest is a tool for mutation testing in Java. It seems to be fast-growing and has a big community.

Integrate PITest

The examples given in the current post can be found in the GitHub sample-dropwizard-rest-stub repository. It is very easy to use PITest. The first step is to add it to the pom.xml file:

<plugin>
	<groupId>org.pitest</groupId>
	<artifactId>pitest-maven</artifactId>
	<version>1.1.10</version>
	<configuration>
		<targetClasses>
			<param>com.automationrhapsody.reststub.persistence*</param>
		</targetClasses>
		<targetTests>
			<param>com.automationrhapsody.reststub.persistence*</param>
		</targetTests>
	</configuration>
</plugin>

The example given above is the most simple one. PITest provides various configurations, very well described on the PITest Maven Quick Start page.

Run PITest

Once configured, it can be run with mvn org.pitest:pitest-maven:mutationCoverage or, if you want to ensure a clean build every time, with mvn clean test org.pitest:pitest-maven:mutationCoverage.

PITest results

PITest provides very understandable reports. Along with the line code coverage, it measures the mutation coverage. There are statistics on package level.

[Image: PITest package-level report]

PersonDB is the only class that has been unit tested. It has 91% line coverage and one mutation that was not killed. Opening the PersonDB class gives more details on what is not covered by tests and what the mutation is:

[Image: PITest class-level report]

PITest has negated the condition on line 44, making the mutated code: PERSONS.get(person.getId()) == null. Unit tests passed despite this mutation. Full reports can be found in the PITest report example.

Acting on PITest results

The current results indicate that the unit tests are not good enough because of the survived mutation. They are also insufficient, as one line of code is not tested at all. The first improvement is to kill the mutation by changing line 37 of PersonDBTest.java from PersonDB.save(person); to assertEquals("Added Person with id=11", PersonDB.save(person)); as in the sketch below. When PITest is run again, the results show that all mutations are killed:
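
A sketch of the improved test might look like this (the exact Person setup depends on the code in the repository):

@Test
public void testSave() {
	Person person = new Person();
	person.setId(11);
	// Asserting on the return value instead of just calling save()
	// kills the negated-conditional mutant inside PersonDB.save()
	assertEquals("Added Person with id=11", PersonDB.save(person));
}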

[Image: PITest class-level report with no surviving mutations]

Still, there is one line of code not covered. This will require adding a new unit test to cover the update person functionality.

PITest performance

Doing mutation testing requires significant resources to run a large number of unit tests. The examples given above run very fast, but they are far away from reality. I was curious how this works on a real project, so I ran it on a relatively small one. It has very good unit tests though, with 95% line coverage – 2291 out of 2402 lines. Still, PITest found only 90% mutation coverage (849/940). 51 out of 940 mutations survived and 40 had no unit test coverage, which gives room for improvement. The total run took 3 mins 11 secs. See the results below:

PIT >> INFO : Found  464 tests
PIT >> INFO : Calculated coverage in 22 seconds.
PIT >> INFO : Created  132 mutation test units

- Timings
==================================================================
> scan classpath : < 1 second
> coverage and dependency analysis : 22 seconds
> build mutation tests : < 1 second
> run mutation analysis : 2 minutes and 48 seconds
--------------------------------------------------------------------------------
> Total : 3 minutes and 11 seconds
--------------------------------------------------------------------------------
==================================================================
- Statistics
==================================================================
>> Generated 940 mutations Killed 849 (90%)
>> Ran 2992 tests (3.18 tests per mutation)
==================================================================
- Mutators
==================================================================
> org.pitest.mutationtest.engine.gregor.mutators.ConditionalsBoundaryMutator
>> Generated 18 Killed 9 (50%)
> KILLED 7 SURVIVED 7 TIMED_OUT 2 NON_VIABLE 0
> MEMORY_ERROR 0 NOT_STARTED 0 STARTED 0 RUN_ERROR 0
> NO_COVERAGE 2
--------------------------------------------------------------------------------
> org.pitest.mutationtest.engine.gregor.mutators.VoidMethodCallMutator
>> Generated 115 Killed 96 (83%)
> KILLED 81 SURVIVED 12 TIMED_OUT 15 NON_VIABLE 0
> MEMORY_ERROR 0 NOT_STARTED 0 STARTED 0 RUN_ERROR 0
> NO_COVERAGE 7
--------------------------------------------------------------------------------
> org.pitest.mutationtest.engine.gregor.mutators.ReturnValsMutator
>> Generated 503 Killed 465 (92%)
> KILLED 432 SURVIVED 16 TIMED_OUT 33 NON_VIABLE 0
> MEMORY_ERROR 0 NOT_STARTED 0 STARTED 0 RUN_ERROR 0
> NO_COVERAGE 22
--------------------------------------------------------------------------------
> org.pitest.mutationtest.engine.gregor.mutators.MathMutator
>> Generated 10 Killed 9 (90%)
> KILLED 8 SURVIVED 0 TIMED_OUT 1 NON_VIABLE 0
> MEMORY_ERROR 0 NOT_STARTED 0 STARTED 0 RUN_ERROR 0
> NO_COVERAGE 1
--------------------------------------------------------------------------------
> org.pitest.mutationtest.engine.gregor.mutators.NegateConditionalsMutator
>> Generated 294 Killed 270 (92%)
> KILLED 254 SURVIVED 16 TIMED_OUT 15 NON_VIABLE 0
> MEMORY_ERROR 0 NOT_STARTED 0 STARTED 0 RUN_ERROR 1
> NO_COVERAGE 8
--------------------------------------------------------------------------------

Case study

I used PITest on a production project written in Java 8, extensively using Stream APIs and lambda expressions. The initial run of 606 existing test cases gave 90% mutation coverage (846/940) and 95% line coverage (2291/2402).

Note that in the statistics given above PITest reports 464 tests. This is because some of them are data-driven tests: PITest counts the test methods, while JUnit counts every data row as a separate test, giving the total number of 606. Understand how to make JUnit data-driven tests in the Data driven testing with JUnit parameterized tests post.

After analysis and adding new tests, the total number of test cases was increased to 654, which is almost an 8% increase. The PITest run then showed 97% mutation coverage (911/939) and 97% line coverage (2332/2403). During the analysis, no bugs in the code were found.

Conclusion

Mutation testing is a good additional technique to make your unit tests better. It should not be the primary technique though, as tests may end up being written just to kill the mutations instead of actually testing the functionality. In projects with well-written unit tests, mutation testing does not bring much value, but still, it is a very good addition to your testing arsenal. PITest is a very good tool for doing mutation testing in Java, and I would say give it a try.


Mock/Stub REST API with WireMock for better unit testing

Last Updated on by

Post summary: Examples how to use WireMock to stub (mock is also a possible term) a REST API in order to make unit testing better.

The code shown in examples below is available in GitHub java-samples/wiremock repository.

WireMock

WireMock is a simulator for HTTP-based APIs. Some might consider it a service virtualization tool or a mock server. It enables you to stay productive when an API you depend on doesn’t exist or isn’t complete. It supports testing of edge cases and failure modes that the real API won’t reliably produce. And because it’s fast it can reduce your build time from hours down to minutes.

When to use it

One case where WireMock is very helpful is when building a REST API client. The Create simple REST API client using Jersey post describes a way to achieve this with Jersey. In most cases a real REST API cannot be forced to fail with certain errors, so WireMock is an excellent addition to standard functional tests to verify that the client is working correctly in corner cases. It is also mandatory for unit testing, because it eliminates dependencies on external services. The mock server is extremely fast and under complete control. Another case where WireMock helps is if you need to create API tests but the API is not ready yet or not working. WireMock can be used to stub the service in order to build the testing framework and structure. Once the real server is ready, the tests just have to be elaborated and details cleared up.

How to use it

WireMock is used in JUnit as a rule. More info on JUnit rules can be found in the Use JUnit rules to debug failed API tests post. There are WireMockClassRule and WireMockRule. The class rule is the most appropriate: there is no need to create a mock server for each and every test, and additional logic would be needed for port collision avoidance. In case you use another unit testing framework, there is WireMockServer, which can be started before the tests and stopped afterward. The code given below tests the REST API client from the Create simple REST API client using Jersey post. First, the JUnit class rule is created:

public class JerseyPersonRestClientTest {

	private static final int WIREMOCK_PORT = 9999;

	@ClassRule
	public static final WireMockClassRule WIREMOCK_RULE
		= new WireMockClassRule(WIREMOCK_PORT);

	private JerseyPersonRestClient clientUnderTest;

	@Before
	public void setUp() throws Exception {
		clientUnderTest
			= new JerseyPersonRestClient("http://localhost:" + WIREMOCK_PORT);
	}
}

The port should be free, otherwise a com.github.tomakehurst.wiremock.common.FatalStartupException: java.lang.RuntimeException: java.net.BindException: Address already in use: bind exception is thrown.
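
If a fixed port turns out to be a problem, WireMock can also bind to a free port chosen at startup. A minimal sketch, assuming a WireMock version that supports wireMockConfig().dynamicPort() (the class name is made up):

import com.github.tomakehurst.wiremock.junit.WireMockClassRule;

import org.junit.Before;
import org.junit.ClassRule;

import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig;

public class JerseyPersonRestClientDynamicPortTest {

	@ClassRule
	public static final WireMockClassRule WIREMOCK_RULE
		= new WireMockClassRule(wireMockConfig().dynamicPort());

	private JerseyPersonRestClient clientUnderTest;

	@Before
	public void setUp() {
		// port() returns the port WireMock actually bound to at startup
		clientUnderTest = new JerseyPersonRestClient(
				"http://localhost:" + WIREMOCK_RULE.port());
	}
}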

Usage is very simple. There are several methods which are important. The stubFor() method initializes the stub. The get() method notifies that the stub is called with an HTTP GET request. The urlMatching() method uses a regular expression to match which API path is invoked; then willReturn() returns aResponse() withBody(). There are several with() methods which give a variety of options for testing. The complete test is below:

@Test
public void testGet_WithBody_PersonJson() {
	String personString = "{\"firstName\":\"FN1\",\"lastName\":\"LN1\"}";
	stubFor(get(urlMatching(".*/person/get/.*"))
		.willReturn(aResponse().withBody(personString)));

	Person actual = clientUnderTest.get(1);

	assertEquals("FN1", actual.getFirstName());
	assertEquals("LN1", actual.getLastName());
}

This is the very straightforward case where the client should work, but when you start to elaborate on the with() scenarios you can sometimes catch an issue with the code being tested. The test below passes even when the API returns HTTP response code 500 – Internal Server Error, which hints that the client might need to add some verification of response codes as well:

@Test
public void testGet_WithStatus() {
	String personString = "{\"firstName\":\"FN1\",\"lastName\":\"LN1\"}";
	stubFor(get(GET_URL)
		.willReturn(aResponse()
		.withStatus(500)
		.withBody(personString)));

	Person actual = clientUnderTest.get(1);

	assertEquals("FN1", actual.getFirstName());
	assertEquals("LN1", actual.getLastName());
}

WireMock stateful behaviour

You can configure WireMock to respond with a series of different responses, hence keeping an internal state. This is useful when you want to perform tests with more steps or some end-to-end scenario. On the first request, WireMock can respond with one response; on the second request, it can respond with a totally different response, as in the sketch below. See more in the WireMock stateful behaviour documentation.
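
A minimal sketch of such a stateful stub, reusing the person endpoint from above (Scenario.STARTED comes from com.github.tomakehurst.wiremock.stubbing.Scenario; scenario and state names are made up):

// First request: the person does not exist yet, respond with 404
// and move the scenario to a new state
stubFor(get(urlMatching(".*/person/get/1"))
	.inScenario("Person lifecycle")
	.whenScenarioStateIs(Scenario.STARTED)
	.willReturn(aResponse().withStatus(404))
	.willSetStateTo("Person created"));

// Second request: the scenario is in the new state, respond with the person
stubFor(get(urlMatching(".*/person/get/1"))
	.inScenario("Person lifecycle")
	.whenScenarioStateIs("Person created")
	.willReturn(aResponse()
		.withBody("{\"firstName\":\"FN1\",\"lastName\":\"LN1\"}")));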

Difference between stub and mock

In the Mock JUnit tests with Mockito example post, I've shown how to use Mockito to mock given classes and control their behavior in order to control and eliminate dependencies. Mockito is not suitable for this particular case, because if you mock JerseyPersonRestClient's get() method it will just return an object; there is no testing whatsoever. Stubbing with WireMock, on the other hand, tests all the code for invoking the service, getting a response (controlled by you) and deserializing this response from the network stream to a Java object. It is a much more adequate and closer-to-reality testing.

Conclusion

WireMock is a very powerful framework for API stubbing in order to make your tests better, and it is a must for unit testing a REST API client.


Create simple REST API client using Jersey

Last Updated on by

Post summary: Code examples how to create REST API client using Jersey.

In the current post, I will give code examples how to build REST API client using Jersey. The code shown in examples below is available in GitHub java-samples/wiremock repository.

REST API client

A REST API client is needed when you want to consume a given REST API, either for production usage or for testing this API. In the latter case, the client does not need to be very sophisticated, since it is used just for testing the API with Java code. In the current post, I will show how to create a REST API client for the Persons functionality of the Dropwizard Rest Stub described in the Build a RESTful stub server with Dropwizard post.

Jersey 2 and Jackson

Jersey is a framework which makes building RESTful services easier. It is one of the most used such frameworks nowadays. Jackson is a JSON parser for Java, also one of the most used ones. The first step is to import the libraries you are going to use:

<dependency>
	<groupId>org.glassfish.jersey.core</groupId>
	<artifactId>jersey-client</artifactId>
	<version>2.25.1</version>
</dependency>
<dependency>
	<groupId>org.glassfish.jersey.media</groupId>
	<artifactId>jersey-media-json-jackson</artifactId>
	<version>2.25.1</version>
</dependency>
<dependency>
	<groupId>com.fasterxml.jackson.core</groupId>
	<artifactId>jackson-databind</artifactId>
	<version>2.8.7</version>
</dependency>
<dependency>
	<groupId>com.fasterxml.jackson.core</groupId>
	<artifactId>jackson-annotations</artifactId>
	<version>2.8.7</version>
</dependency>

It is not mandatory, but it is good practice to create an interface for this client and then do implementations. The idea is that you may have different implementations and switch between them.

import com.automationrhapsody.wiremock.model.Person;

import java.util.List;

public interface PersonRestClient {

	List<Person> getAll();

	Person get(int id);

	String save(Person person);

	String remove();
}

Now you can start with the implementation. In the given example, the constructor takes the host, which also includes port and scheme. Then it creates a ClientConfig object with specific properties. The full list is shown in the ClientProperties Javadoc. In the example, I set up timeouts only. Next, a WebTarget object is created to query the different API endpoints. It could not be simpler than that:

import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;

import org.glassfish.jersey.client.ClientConfig;
import org.glassfish.jersey.client.ClientProperties;
import org.glassfish.jersey.filter.LoggingFilter;

public class JerseyPersonRestClient implements PersonRestClient {

	private final WebTarget webTarget;

	public JerseyPersonRestClient(String host) {
		ClientConfig clientConfig = new ClientConfig()
				.property(ClientProperties.READ_TIMEOUT, 30000)
				.property(ClientProperties.CONNECT_TIMEOUT, 5000);

		webTarget = ClientBuilder
				.newClient(clientConfig)
				.register(new LoggingFilter())
				.target(host);
	}
}

Once WebTarget is instantiated, it is used to query all the endpoints. I will show an implementation of one GET and one POST endpoint consumption:

@Override
public List<Person> getAll() {
	Person[] persons = webTarget
		.path(ENDPOINT_GET_ALL)
		.request()
		.get()
		.readEntity(Person[].class);
	return Arrays.stream(persons).collect(Collectors.toList());
}

@Override
public String save(Person person) {
	if (person == null) {
		throw new RuntimeException("Person entity should not be null");
	}
	return webTarget
		.path(ENDPOINT_SAVE)
		.request()
		.post(Entity.json(person))
		.readEntity(String.class);
}
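
The single-person GET endpoint can be consumed in a similar way. A sketch, assuming the stub's /person/get/{id} path:

@Override
public Person get(int id) {
	return webTarget
		.path("person/get/" + id)
		.request()
		.get()
		.readEntity(Person.class);
}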

The full code can be seen in GitHub repo: JerseyPersonRestClient full code.

jersey-media-json-jackson

This is the bonding between Jersey and Jackson. It should be used, otherwise Jersey's readEntity(Class<T> entityType) method throws:

Exception in thread "main" org.glassfish.jersey.message.internal.MessageBodyProviderNotFoundException: MessageBodyReader not found for media type=application/json, type=class …

or

Exception in thread "main" org.glassfish.jersey.message.internal.MessageBodyProviderNotFoundException: MessageBodyWriter not found for media type=application/json, type=class …

Client builder

In the code, there is a class called PersonRestClientBuilder. In the current case it does not do many things, but in reality it might turn out that a lot of configuration or input is needed to build a REST API client instance. This is where such a builder becomes very useful:

public class PersonRestClientBuilder {

	private String host;

	public PersonRestClientBuilder setHost(String host) {
		this.host = host;
		return this;
	}

	public PersonRestClient build() {
		return new JerseyPersonRestClient(host);
	}
}
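
Usage of the builder then boils down to a few lines (host and port are just an example):

PersonRestClient client = new PersonRestClientBuilder()
	.setHost("http://localhost:9000")
	.build();
List<Person> allPersons = client.getAll();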

Unit testing

It is common and best practice that each piece of code is covered by unit tests. In the Mock JUnit tests with Mockito example post, I've described how Mockito can be used. The problem in the current example is that if we use Mockito we have to mock the readEntity() method to return some response objects. This is way too much mocking and will not do adequate testing; actually, it does not test at all. We want to test that our REST API client successfully communicates over the wire. In order to do proper testing, we need to use a library called WireMock. In the Mock/Stub REST API with WireMock for better unit testing post, I add more details on how to use it.

Conclusion

REST API consuming or testing requires building a client. Jersey is a perfect candidate to be used as an underlying framework. WireMock can be used for unit testing the REST API client you have created.


Amazon S3 file upload with cURL and Java code

Last Updated on by

Post summary: Working Java code to upload a file to Amazon S3 bucket with cURL.

This post gives a solution to a very rare use case where you want to use cURL from Java code to upload a file to an Amazon S3 bucket.

Amazon S3

Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web.

Upload to Amazon S3 with Java

Amazon S3 has very good documentation on how to use their Java client in order to upload or retrieve files. The ways described in the manual are:

  • Upload in a single operation – a few lines of code to instantiate an AmazonS3Client object and to upload the file in one chunk.
  • Upload using multipart upload API – provides the ability to upload a file in several chunks. It is possible to use low-level and high-level operations on the upload.
  • Upload using pre-signed URLs – with this approach you can upload to someone else's bucket without having the access key and secret key.

More on each of the approaches can be found in Amazon S3 upload object manual.

Upload with cURL

cURL is a widely used and powerful tool for data transfer. It supports various protocols and functionalities. In the Windows world it is not widely used, but there are cURL implementations for Windows which can be used. Uploading to an Amazon S3 bucket with cURL is pretty easy. There are bash scripts that can do it; one such is described in the File Upload on Amazon S3 server using CURL request post. It is the basis I took to create the Java code for the upload.

Upload with cURL and Java

Upload with Java and cURL is a pretty rare case. The benefits of using this approach are memory and CPU optimization. If an upload is done through Java code, the file to be uploaded is read and stored in the heap. The reading is optimized and parts already uploaded are removed from the heap. Still, reading and removing the no longer needed file parts requires memory to keep them and CPU for garbage collection, especially when a huge amount of data is to be transferred. In some cases where resources and performance are absolutely important, this memory and CPU usage can be critical. cURL also uses memory to upload the file, but this is no longer a problem of the JVM, rather a problem of the OS. The upload Java code is:

import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.nio.file.Path;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

import org.apache.commons.io.IOUtils;

public class AmazonS3CurlUploader {

	private static final String ALGORITHM = "HmacSHA1";
	private static final String CONTENT_TYPE = "application/octet-stream";
	private static final String ENCODING = "UTF8";

	public boolean upload(Path localFile, String s3Bucket, String s3FileName,
			String s3AccessKey, String s3SecretKey) {
		boolean result;
		try {
			Process cURL = createCurlProcess(localFile, s3Bucket,
					s3FileName, s3AccessKey, s3SecretKey);
			cURL.waitFor();
			String response = IOUtils.toString(cURL.getInputStream())
					+ IOUtils.toString(cURL.getErrorStream());
			result = response.contains("HTTP/1.1 200 OK");
		} catch (IOException | InterruptedException e) {
			// Exception handling goes here!
			result = false;
		}
		return result;
	}

	private Process createCurlProcess(Path file, String bucket, String fileName,
			String accessKey, String secretKey) throws IOException {
		String dateFormat = ZonedDateTime.now()
				.format(DateTimeFormatter.RFC_1123_DATE_TIME);
		String relativePath = "/" + bucket + "/" + fileName;
		String stringToSign = "PUT\n\n" + CONTENT_TYPE + "\n"
				+ dateFormat + "\n" + relativePath;
		String signature = Base64.getEncoder()
				.encodeToString(hmacSHA1(stringToSign, secretKey));

		return new ProcessBuilder(
				"curl", "-X", "PUT",
				"-T", file.toString(),
				"-H", "Host: " + bucket + ".s3.amazonaws.com",
				"-H", "Date: " + dateFormat,
				"-H", "Content-Type: " + CONTENT_TYPE,
				"-H", "Authorization: AWS " + accessKey + ":" + signature,
				"http://" + bucket + ".s3.amazonaws.com/" + fileName)
				.start();
	}

	private byte[] hmacSHA1(String data, String key) {
		try {
			Mac mac = Mac.getInstance(ALGORITHM);
			mac.init(new SecretKeySpec(key.getBytes(ENCODING), ALGORITHM));
			return mac.doFinal(data.getBytes(ENCODING));
		} catch (NoSuchAlgorithmException | InvalidKeyException
				| UnsupportedEncodingException e) {
			return new byte[] {};
		}
	}
}
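
Usage might look like this (file path, bucket name, and credentials are placeholders):

import java.nio.file.Paths;

public class AmazonS3CurlUploaderExample {

	public static void main(String[] args) {
		AmazonS3CurlUploader uploader = new AmazonS3CurlUploader();
		boolean uploaded = uploader.upload(
				Paths.get("file-to-upload.txt"),
				"my-bucket", "uploads/file-to-upload.txt",
				"MY_ACCESS_KEY", "MY_SECRET_KEY");
		System.out.println("Upload successful: " + uploaded);
	}
}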

Conclusion

Upload to Amazon S3 with cURL from Java code is a rare case which could be beneficial when memory and CPU usage by the JVM are crucial. Delegating the file upload to cURL does not disturb the JVM heap or the garbage collection process.


Mock static methods in JUnit with PowerMock example

Last Updated on by

Post summary: Examples how to mock static methods in JUnit tests with PowerMock.

This post is part of PowerMock series examples. The code shown in examples below is available in GitHub java-samples/junit repository.

In the Mock JUnit tests with Mockito example post, I have shown how and why to use the Mockito Java mocking framework to create good unit tests. There are several things that Mockito does not support, and one of them is mocking of static methods. It is not that common to encounter such a situation in real life, but the moment you encounter it, Mockito is not able to solve the task. This is where PowerMock comes to the rescue.

PowerMock

PowerMock is a framework that extends other mock libraries giving them more powerful capabilities. PowerMock uses a custom classloader and bytecode manipulation to enable mocking of static methods, constructors, final classes and methods, private methods, removal of static initializers and more.

Example class for unit test

We are going to unit test a class called LocatorService that internally uses a static method from the utility class Utils. The method randomDistance(int distance) in Utils returns a random variable, hence it has no predictable behavior and the only way to test it is by mocking it:

public class LocatorService {

	public Point generatePointWithinDistance(Point point, int distance) {
		return new Point(point.getX() + Utils.randomDistance(distance), 
			point.getY() + Utils.randomDistance(distance));
	}
}

And Utils class is:

import java.util.Random;

public final class Utils {

	private static final Random RAND = new Random();

	private Utils() {
		// Utilities class
	}

	public static int randomDistance(int distance) {
		return RAND.nextInt(distance + distance) - distance;
	}
}

Nota bene: it is good code design practice to make utility classes final and with a private constructor.

Using PowerMock

In order to use PowerMock, three things have to be done:

  1. Import PowerMock into the project
  2. Annotate unit test class
  3. Mock the static class

Import PowerMock into the project

In case of using Maven, the import statements are:

<dependency>
	<groupId>org.powermock</groupId>
	<artifactId>powermock-module-junit4</artifactId>
	<version>1.6.5</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.powermock</groupId>
	<artifactId>powermock-api-mockito</artifactId>
	<version>1.6.5</version>
	<scope>test</scope>
</dependency>

Nota bene: there is a possibility of version mismatch between PowerMock and Mockito. I’ve received: java.lang.NoSuchMethodError: org.mockito.mock.MockCreationSettings.isUsingConstructor()Z exception when using PowerMock 1.6.5 with Mockito 1.9.5, so I had to upgrade to Mockito 1.10.19.

Annotate JUnit test class

Two annotations are needed. One is to run the unit test with PowerMockRunner: @RunWith(PowerMockRunner.class). The other is to prepare the Utils class for testing: @PrepareForTest({Utils.class}). The final code is:

import org.junit.runner.RunWith;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest({Utils.class})
public class LocatorServiceTest {
}

Mock static class

Explicit mocking of the static class should be made in order to be able to use the standard Mockito when().thenReturn() construction:

int distance = 111;
PowerMockito.mockStatic(Utils.class);
when(Utils.randomDistance(anyInt())).thenReturn(distance);
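
Besides stubbing, PowerMockito can also verify that the static method was actually invoked. A sketch in the PowerMock 1.x style, where verifyStatic() is followed by the static call to verify, placed after the code under test has been exercised:

// generatePointWithinDistance() calls Utils.randomDistance() twice,
// once for each coordinate, hence times(2)
PowerMockito.verifyStatic(Mockito.times(2));
Utils.randomDistance(anyInt());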

Putting it all together

The final JUnit test class is shown below. The code in the tests verifies the logic in LocatorService: if a point is given, a new point is returned by adding a random distance to its X and Y coordinates. By removing the random element with mocking, the code can be tested with specific values.

package com.automationrhapsody.junit;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

import static org.junit.Assert.assertTrue;
import static org.mockito.Matchers.anyInt;
import static org.mockito.Mockito.when;

@RunWith(PowerMockRunner.class)
@PrepareForTest({Utils.class})
public class LocatorServiceTest {

	private LocatorService locatorServiceUnderTest;

	@Before
	public void setUp() {
		PowerMockito.mockStatic(Utils.class);

		locatorServiceUnderTest = new LocatorService();
	}

	@Test
	public void testGeneratePointWithinDistance() {
		int distance = 111;

		when(Utils.randomDistance(anyInt())).thenReturn(distance);

		Point input = new Point(11, 11);
		Point expected = new Point(input.getX() + distance, 
				input.getY() + distance);

		assertTrue(arePointsEqual(expected, 
			locatorServiceUnderTest.generatePointWithinDistance(input, 1)));
	}

	public static boolean arePointsEqual(Point p1, Point p2) {
		return p1.getX() == p2.getX()
			&& p1.getY() == p2.getY();
	}
}

Conclusion

PowerMock is a powerful addition to standard mocking libraries such as Mockito. Using it has some specifics, but once you understand them it is easy and fun to use. Keep in mind that if you encounter a need to use PowerMock, that can mean the code under test is not well designed. In my experience, it is possible to have very good unit tests with more than 85% coverage without any PowerMock usage. Still, there are some exceptional cases where PowerMock can be put into operation.
