Soft assertions that do not fail JUnit test

Post summary: Code examples of how to use assertions that do not fail the unit test immediately.

Code shown in examples below is available in GitHub java-samples/jersey1 repository.

Unit vs Functional testing

The unit testing paradigm states that each test exercises a particular code behaviour. So in a perfect world one unit test would have one assertion which defines the unit test result – either passed or failed. This is why unit testing frameworks provide only asserts which stop further execution of the current test method. In functional testing one test usually verifies several conditions. Not debating whether this is good or bad, assume you are doing GUI testing: once you have opened a particular page you'd better do as much verification as possible to reduce the risk of bugs. Opening this page over and over for each single check is not the most efficient way of testing. This is why when you run functional tests you need some kind of assert that indicates whether a check passed or failed, but lets the test continue if no critical issue is present. Those are generally called "soft" asserts.

Soft assertions and JUnit

TestNG provides the org.testng.asserts.SoftAssert class for soft asserts, as it is more oriented towards functional testing. JUnit is a unit testing framework, so it does not provide any soft assertions. In order to create such behaviour, additional libraries are needed.

AssertJ

AssertJ is a library providing fluent assertions. It is very similar to Hamcrest, which comes by default with JUnit. Along with all the asserts, AssertJ provides soft assertions with its SoftAssertions class inside the org.assertj.core.api package.

Usage

Below is a functional test run against the Dropwizard stub described in Build a RESTful stub server with Dropwizard post. The important part is to instantiate a new SoftAssertions object before the test verifications and to call its assertAll() method in the end to collect the results. The best way to do this is to use JUnit's @Before and @After annotated methods.

package com.automationrhapsody.jersey1;

import com.automationrhapsody.jersey1.model.Person;
import com.automationrhapsody.jersey1.rules.PersonServiceJerseyClient;

import java.util.List;

import org.assertj.core.api.SoftAssertions;
import org.junit.After;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Test;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.core.Is.is;

public class PersonServiceTest {

	@ClassRule
	public static final PersonServiceJerseyClient CLIENT 
		= new PersonServiceJerseyClient();

	private SoftAssertions softAssertions;

	private Person person;

	@Before
	public void setUp() {
		person = new Person();
		person.setId(123);
		person.setFirstName("First Name");
		person.setLastName("Last Name");
		person.setEmail("Email");

		softAssertions = new SoftAssertions();
	}

	@After
	public void tearDown() {
		softAssertions.assertAll();
	}

	@Test
	public void testAllOperations() {
		String saveResult = CLIENT.save(person);
		assertThat(saveResult, is("Added Person with id=123"));

		Person actual = CLIENT.get(person.getId());
		softAssertions
			.assertThat(actual.getId()).isEqualTo(person.getId());
		softAssertions
			.assertThat(actual.getFirstName()).isEqualTo(person.getFirstName());
		softAssertions
			.assertThat(actual.getLastName()).isEqualTo(person.getLastName());
		softAssertions
			.assertThat(actual.getEmail()).isEqualTo(person.getEmail());

		String result = CLIENT.remove();
		assertThat(result, is("Last person remove. Total count: 4"));
	}
}

Conclusion

Soft assertions are needed in case functional tests are being run with JUnit. Since such are not available out of the box, because JUnit is targeted at unit tests, soft assertions can be used from external libraries such as AssertJ.

Verify static method was called with PowerMock

Post summary: How to verify that a static method was called during a unit test with PowerMock.

In Mock static methods in JUnit with PowerMock example post I have given information about PowerMock and how to mock a static method. In the current post I will demonstrate how to verify that a given static method was called during execution of a unit test.

Example class for unit test

Code shown in examples below is available in GitHub java-samples/junit repository. We are going to unit test a class called LocatorService that internally uses a static method from the utilities class Utils. The method randomDistance(int distance) in Utils returns a random value, hence it has no predictable behaviour, and the only way to test code that uses it is by mocking it:

public class LocatorService {

	public Point generatePointWithinDistance(Point point, int distance) {
		return new Point(point.getX() + Utils.randomDistance(distance), 
			point.getY() + Utils.randomDistance(distance));
	}
}

And Utils class is:

import java.util.Random;

public final class Utils {

	private static final Random RAND = new Random();

	private Utils() {
		// Utilities class
	}

	public static int randomDistance(int distance) {
		return RAND.nextInt(distance + distance) - distance;
	}
}

Nota bene: it is a good code design practice to make utilities classes final and with a private constructor.

Verify static method call

This is the full code. Additional details are shown below it.

package com.automationrhapsody.junit;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.internal.verification.VerificationModeFactory;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest(Utils.class)
public class LocatorServiceTest {

	private LocatorService locatorServiceUnderTest;

	@Before
	public void setUp() {
		PowerMockito.mockStatic(Utils.class);

		locatorServiceUnderTest = new LocatorService();
	}

	@Test
	public void testStaticMethodCall() {
		locatorServiceUnderTest
			.generatePointWithinDistance(new Point(11, 11), 1);
		locatorServiceUnderTest
			.generatePointWithinDistance(new Point(11, 11), 234);

		PowerMockito.verifyStatic(VerificationModeFactory.times(2));
		Utils.randomDistance(1);

		PowerMockito.verifyStatic(VerificationModeFactory.times(2));
		Utils.randomDistance(234);

		PowerMockito.verifyNoMoreInteractions(Utils.class);
	}
}

Explanation

The class containing the static method should be prepared for mocking with the PowerMockito.mockStatic(Utils.class) code. Then the call to the static method is done inside the locatorServiceUnderTest.generatePointWithinDistance() method. In this test it is intentionally called 2 times with different distances (1 and 234) in order to show the verification, which consists of two parts. The first part is PowerMockito.verifyStatic(VerificationModeFactory.times(2)), which tells PowerMock to verify that a static method was called 2 times. The second part is Utils.randomDistance(1), which tells exactly which static method should be verified. Instead of 1 in the brackets you can use anyInt() or anyObject(); 1 is used to make the verification explicit. As you can see, there is a second verification that the randomDistance() method was called with 234 as well: PowerMockito.verifyStatic(VerificationModeFactory.times(2)); Utils.randomDistance(234);. A variant using a matcher is sketched below.
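
If the exact arguments are not important, the two verifications can be collapsed into one with a matcher. A minimal sketch, assuming org.mockito.Matchers.anyInt is statically imported (each generatePointWithinDistance() call invokes randomDistance() twice, so 4 calls in total):

// Verify randomDistance() was called 4 times with any int argument
PowerMockito.verifyStatic(VerificationModeFactory.times(4));
Utils.randomDistance(anyInt());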

Conclusion

PowerMock provides additional power to the Mockito mocking library, which is described in Mock JUnit tests with Mockito example post. In the current post I have shown how to verify that a static method was called. It is very specific, as the verification actually consists of two steps.

Manage Microsoft Excel files in Java with Apache POI

Post summary: Code examples of how to manage Microsoft Excel documents with the Apache POI Java library.

Code examples in the current post can be found in GitHub java-samples/apache-poi repository.

The two most important use cases where MS Excel documents are useful in automation testing are:

  • Data driven testing where test data is input from an Excel file
  • Output test results to an Excel file for stakeholders

So far I have not needed to use Excel in my automation. Where data driven testing was needed I either used a JUnit data provider (see more in Data driven testing with JUnit parameterized tests and Data driven testing with JUnit and Gradle posts) or fed data in CSV format and read it with the Apache Commons CSV library. My colleague Petar Yordanov introduced me to Apache POI, which is very powerful for managing Excel and other MS format documents.

Apache POI

The Apache POI Project's mission is to create and maintain Java APIs for manipulating various file formats based upon the Office Open XML standards (OOXML) and Microsoft's OLE 2 Compound Document format (OLE2). In short, you can read and write MS Excel files using Java. In addition, you can read and write MS Word and MS PowerPoint files using Java. Apache POI is your Java Excel solution (for Excel 97-2008). See more on the Apache POI home page. Full details on supported POI formats can be found in the Apache POI Component Overview. From this page you can navigate to more details on how to work with a specific type of document. Full details on Excel management can be found in the Apache POI Spreadsheet Guide.

Usage

Include in Maven project

In the current example I will use the poi-ooxml library, which is for the XML-based formats introduced in Microsoft Office 2007.

<dependency>
	<groupId>org.apache.poi</groupId>
	<artifactId>poi-ooxml</artifactId>
	<version>3.16</version>
</dependency>

Creating a Workbook

An Excel file is actually an XSSFWorkbook object from the org.apache.poi.xssf.usermodel package. A workbook can be created empty or from an existing file:

// Create empty workbook
XSSFWorkbook newWorkBook = new XSSFWorkbook();
// Create workbook from existing file
XSSFWorkbook existingWorkBook = new XSSFWorkbook(new File("fileName.xlsx"));

Manage sheets

The XSSFWorkbook class provides different methods for sheet management: createSheet(), cloneSheet(), getNumberOfSheets(), getSheet(), getSheetAt(), getSheetName(), getSheetIndex(). Create or get sheet methods return an XSSFSheet object.

Manage sheet content

Once you have the XSSFSheet object, either from create or get, you can manage rows in it. Some of the methods are: createRow(), getRow(), removeRow(), getLastRowNum(), getPhysicalNumberOfRows(), rowIterator(). Create or get row methods return an XSSFRow object. Inside a row you can manage cells. Some of the methods are: createCell(), getCell(), removeCell(), getLastCellNum(), getPhysicalNumberOfCells(), cellIterator(). Create or get cell methods return an XSSFCell object. Some of its methods are: setCellComment(), setCellFormula(), setCellStyle(), setCellType(), setCellValue(). An example of reading sheet content with these methods is shown below.
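
Below is a minimal sketch of reading data back from an existing file with those methods; the file name is a placeholder and all cells are assumed to hold strings:

import java.io.File;

import org.apache.poi.xssf.usermodel.XSSFCell;
import org.apache.poi.xssf.usermodel.XSSFRow;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class ExcelReaderSample {

	public static void main(String[] args) throws Exception {
		try (XSSFWorkbook workBook = new XSSFWorkbook(new File("fileName.xlsx"))) {
			XSSFSheet sheet = workBook.getSheetAt(0);
			// getLastRowNum() is zero-based, hence <=
			for (int i = 0; i <= sheet.getLastRowNum(); i++) {
				XSSFRow row = sheet.getRow(i);
				// getLastCellNum() is a one-based count, hence <
				for (int j = 0; j < row.getLastCellNum(); j++) {
					XSSFCell cell = row.getCell(j);
					System.out.println(cell.getStringCellValue());
				}
			}
		}
	}
}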

Manage Excel documents

Below are shown examples of a simple ExcelWriter class that writes to an Excel file. This class is used in SampleExcelApp to show how to write text. Reading from Excel is shown in SampleExcelAppTest, verifying the correct saving of the document.

ExcelWriter

package com.automationrhapsody.apachepoi;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.poi.ss.usermodel.CellType;
import org.apache.poi.ss.usermodel.FillPatternType;
import org.apache.poi.ss.usermodel.IndexedColors;
import org.apache.poi.xssf.usermodel.XSSFCell;
import org.apache.poi.xssf.usermodel.XSSFCellStyle;
import org.apache.poi.xssf.usermodel.XSSFRow;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class ExcelWriter {

	private final XSSFWorkbook workBook;

	private Map<String, Integer> nextRows = new HashMap<>();
	private String currentSheet;
	private boolean isSaved;

	public ExcelWriter() {
		workBook = new XSSFWorkbook();
	}

	public void writeAndClose(File excelFile) {
		if (isSaved) {
			throw new IllegalArgumentException("Workbook already saved!");
		}
		try {
			workBook.write(new FileOutputStream(excelFile));
			workBook.close();
			isSaved = true;
		} catch (IOException ioe) {
			// TODO log
		}
	}

	public void switchToSheet(String sheetName) {
		currentSheet = sheetName;
		if (workBook.getSheet(sheetName) == null) {
			workBook.createSheet(currentSheet);
			nextRows.put(currentSheet, 0);
		}
	}

	public void writeRow(String... values) {
		XSSFSheet sheet = workBook.getSheet(currentSheet);
		int nextRow = nextRows.get(currentSheet);
		XSSFRow row = sheet.createRow(nextRow);

		for (int i = 0; i < values.length; i++) { 
			XSSFCell cell = row.createCell(i);
			cell.setCellType(CellType.STRING);
			cell.setCellValue(values[i]);
		}

		nextRows.put(currentSheet, nextRow + 1);
	}

	public void setCellColour(int rowNumber, int cellNumber,
			IndexedColors colour) {
		XSSFCellStyle style = workBook.createCellStyle();
		style.setFillForegroundColor(colour.getIndex());
		style.setFillPattern(FillPatternType.SOLID_FOREGROUND);

		int nextRow = nextRows.get(currentSheet);
		if (rowNumber > nextRow) {
			// TODO log or exception?
			rowNumber = nextRow;
		}
		XSSFSheet sheet = workBook.getSheet(currentSheet);
		int lastCell = sheet.getRow(rowNumber - 1).getLastCellNum();
		if (cellNumber > lastCell) {
			// TODO log or exception?
			cellNumber = lastCell;
		}

		sheet.getRow(rowNumber - 1).getCell(cellNumber - 1)
			.setCellStyle(style);
	}
}

SampleExcelApp

package com.automationrhapsody.apachepoi;

import java.io.File;

import org.apache.poi.ss.usermodel.IndexedColors;

public class SampleExcelApp {

	public static void main(String[] args) {
		String sheetName = "SheetName";

		ExcelWriter excelWriter = new ExcelWriter();
		excelWriter.switchToSheet(sheetName);
		excelWriter.writeRow("A1-blue", "B1", "C1");
		excelWriter.writeRow("A2", "B2", "C2");
		excelWriter.setCellColour(1, 1, IndexedColors.BLUE);

		excelWriter.switchToSheet("NewSheetName");
		excelWriter.writeRow("A1", "B1");
		excelWriter.writeRow("A2", "B2-red");
		excelWriter.setCellColour(2, 2, IndexedColors.RED);

		excelWriter.switchToSheet(sheetName);
		excelWriter.writeRow("A3", "B3", "C3");
		excelWriter.writeRow("A4", "B4", "C4");

		File excelFile = new File("testReport.xlsx");
		excelWriter.writeAndClose(excelFile);
	}
}

SampleExcelAppTest

package com.automationrhapsody.apachepoi;

import java.io.File;

import org.apache.poi.ss.usermodel.IndexedColors;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.junit.BeforeClass;
import org.junit.Test;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

public class SampleExcelAppTest {

	private static final File FILE = new File("testReport.xlsx");

	private static XSSFWorkbook workbookUnderTest;

	@BeforeClass
	public static void beforeClass() throws Exception {
		if (FILE.exists()) {
			FILE.delete();
		}

		SampleExcelApp.main(null);

		workbookUnderTest = new XSSFWorkbook(FILE);
	}

	@Test
	public void testNumberOfSheets() {
		assertThat(workbookUnderTest.getNumberOfSheets(), is(2));
	}

	@Test
	public void testSheetName() {
		assertThat(workbookUnderTest.getSheetName(0), is("SheetName"));
		assertThat(workbookUnderTest.getSheetName(1), is("NewSheetName"));
	}

	@Test
	public void testFirstSheetContent() {
		XSSFSheet sheet = workbookUnderTest.getSheetAt(0);
		assertThat(sheet.getLastRowNum(), is(3));
		assertThat(sheet.getRow(3).getLastCellNum(), is((short) 3));
		assertThat(sheet.getRow(3).getCell(2).getStringCellValue(), 
				is("C4"));

		assertThat(sheet.getRow(0).getCell(0).getCellStyle()
			.getFillForegroundColor(), is(IndexedColors.BLUE.getIndex()));
		assertThat(sheet.getRow(1).getCell(1).getCellStyle()
			.getFillForegroundColor(), 
				is(IndexedColors.AUTOMATIC.getIndex()));
	}

	@Test
	public void testSecondSheetContent() {
		XSSFSheet sheet = workbookUnderTest.getSheetAt(1);
		assertThat(sheet.getLastRowNum(), is(1));
		assertThat(sheet.getRow(1).getLastCellNum(), is((short) 2));
		assertThat(sheet.getRow(1).getCell(1).getStringCellValue(), 
				is("B2-red"));

		assertThat(sheet.getRow(1).getCell(1).getCellStyle()
			.getFillForegroundColor(), is(IndexedColors.RED.getIndex()));
		assertThat(sheet.getRow(0).getCell(0).getCellStyle()
			.getFillForegroundColor(), 
				is(IndexedColors.AUTOMATIC.getIndex()));
	}
}

Conclusion

Apache POI is a very powerful toolkit for managing MS documents, especially Excel, which might be needed in your test automation for reporting or data driven testing.

Manage and automatically select needed WebDriver in Java 8 Selenium project

Post summary: Example code of how to efficiently manage and automatically select the needed local WebDriver using a Java 8 method reference used as a lambda expression.

Code examples in the current post can be found in GitHub selenium-samples-java/design-patterns repository.

Java 8 features

In this example the lambda expression and method reference Java 8 features are used.

Functional interface

Before explaining lambda it is necessary to understand the idea of a functional interface, as functional interfaces are leveraged for use with lambda expressions. A functional interface is an interface that has only one abstract method that is to be implemented. A functional interface may or may not have default or static methods (again a new Java 8 feature). Although not mandatory, it is a good practice to annotate a functional interface with @FunctionalInterface. The java.util.function.Supplier interface used later in this post is an example; its definition is shown below.
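
For reference, this is essentially how java.util.function.Supplier is defined in the JDK (Javadoc omitted):

@FunctionalInterface
public interface Supplier<T> {

	// The single abstract method to be implemented
	T get();
}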

Lambda expressions

There is no such term in Java, but you can think of a lambda expression as an anonymous method. A lambda expression is a piece of code that provides an inline implementation of a functional interface, eliminating the need for anonymous classes. Lambda expressions facilitate functional programming and ease development by reducing the amount of code needed.

Method reference

Sometimes when using a lambda expression all you do is call a method by name. A method reference provides an easy way to call the method, making the code more readable. The snippet below shows the evolution from an anonymous class through a lambda expression to a method reference.
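
A small sketch of the same Supplier<WebDriver> implemented three ways, from most verbose to most readable (FirefoxDriver matches the Browser enum shown below):

// Anonymous class implementing the functional interface
Supplier<WebDriver> anonymousClass = new Supplier<WebDriver>() {
	@Override
	public WebDriver get() {
		return new FirefoxDriver();
	}
};
// Lambda expression providing the same inline implementation
Supplier<WebDriver> lambda = () -> new FirefoxDriver();
// Method reference to the constructor
Supplier<WebDriver> methodReference = FirefoxDriver::new;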

Managing WebDriver

The proposed solution for managing WebDriver has an enumeration called Browser and a class called WebDriverFactory. Another important thing is that the web drivers should be placed in a folder named webdrivers and named following a special pattern.

Browser enum

Code is shown below:

package com.automationrhapsody.designpatterns;

import java.util.Arrays;
import java.util.function.Supplier;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

public enum Browser {
	FIREFOX("gecko", FirefoxDriver::new),
	CHROME("chrome", ChromeDriver::new),
	IE("ie", InternetExplorerDriver::new);

	private String name;
	private Supplier<WebDriver> driverSupplier;

	Browser(String name, Supplier<WebDriver> driverSupplier) {
		this.name = name;
		this.driverSupplier = driverSupplier;
	}

	public String getName() {
		return name;
	}

	public WebDriver getDriver() {
		return driverSupplier.get();
	}

	public static Browser fromString(String value) {
		for (Browser browser : values()) {
			if (value != null && value.toLowerCase().equals(browser.getName())) {
				return browser;
			}
		}
		System.out.println("Invalid driver name passed as 'browser' property. "
			+ "One of: " + Arrays.toString(values()) + " is expected.");
		return FIREFOX;
	}
}

The enumeration's constructor has the Supplier functional interface as a parameter. When the constructor is called, the method reference FirefoxDriver::new is passed as a lambda expression whose purpose is to instantiate a new Firefox driver. If a plain lambda expression were used it would be: () -> new FirefoxDriver(). Notice that the method reference is much shorter and easier to read. The getDriver() method invokes Supplier's get() method, which is implemented by the lambda expression, so the lambda expression is executed, hence instantiating a new web driver. With this approach the Firefox web driver object is created only when the getDriver() method is called.

WebDriverFactory

Code is:

package com.automationrhapsody.designpatterns;

import java.io.File;

import org.openqa.selenium.WebDriver;

class WebDriverFactory {

	private static final String WEB_DRIVER_FOLDER = "webdrivers";

	public static WebDriver createWebDriver() {
		Browser browser = Browser.fromString(System.getProperty("browser"));
		String arch = System.getProperty("os.arch").contains("64") ? "64" : "32";
		String os = System.getProperty("os.name").toLowerCase().contains("win") 
				? "win.exe" : "linux";
		String driverFileName = browser.getName() + "driver-" + arch + "-" + os;
		String driverFilePath = driversFolder(new File("").getAbsolutePath());
		System.setProperty("webdriver." + browser.getName() + ".driver", 
				driverFilePath + driverFileName);
		return browser.getDriver();
	}

	private static String driversFolder(String path) {
		File file = new File(path);
		for (String item : file.list()) {
			if (WEB_DRIVER_FOLDER.equals(item)) {
				return file.getAbsolutePath() + "/" + WEB_DRIVER_FOLDER + "/";
			}
		}
		return driversFolder(file.getParent());
	}
}

This code recursively searches for a folder named webdrivers in the project. This is done because a multi-module project run from the IDE and run from Maven has different root folders, and finding the web drivers would otherwise not be possible from both simultaneously. Once the folder is found, the proper web driver is selected based on OS and architecture. The code reads the browser system property, which can be passed from outside, hence making the selection of web driver easy to configure. The important part is to have web drivers following a special naming convention.

Web drivers naming convention

In order for the code above to work, web drivers should be placed in the webdrivers folder in the project and their names should match the pattern {DRIVER_NAME}-{ARCHITECTURE}-{OS}, e.g. geckodriver-64-win.exe for Windows 64 bit and geckodriver-64-linux for Linux 64 bit.

Conclusion

The proposed solution is a very elegant way to manage your web drivers and select the proper one just by passing the -Dbrowser={BROWSER} Java system property.

Run Dropwizard application in Docker with templated configuration using environment variables

Post summary: How to run a Dropwizard application inside a Docker container with a configuration template file which later gets replaced by environment variables.

In Build a RESTful stub server with Dropwizard post I have described how to build a Dropwizard application and run it. In the current post I will show how this application can be put into a Docker container and run with different configurations, based on different environment variables. The code sample here can be found as a buildable and runnable project in GitHub sample-dropwizard-rest-stub repository.

Dropwizard templated config

Dropwizard supports replacing variables from the config file with environment variables if such are defined. The first step is to make the config file with a special notation.

version: ${ENV_VARIABLE_VERSION:-0.0.2}

Dropwizard substitution is based on Apache's StrSubstitutor library. ${ENV_VARIABLE_VERSION:-0.0.2} means that Dropwizard will search for an environment variable with the name ENV_VARIABLE_VERSION and replace the placeholder with its value. If no such variable is configured then it will use the default value of 0.0.2. If a default is not needed then just ${ENV_VARIABLE_VERSION} can be used. A slightly fuller config fragment is sketched below.
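
A sketch of a config fragment showing both forms; the APP_PORT variable and the server section values are hypothetical:

server:
  applicationConnectors:
    - type: http
      # Defaults to 9000 if APP_PORT is not set
      port: ${APP_PORT:-9000}
# No default value here
version: ${ENV_VARIABLE_VERSION}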

The next step is to make Dropwizard substitute the config file with environment variables on its startup, before the file is read. This is done with the following code:

@Override
public void initialize(Bootstrap<RestStubConfig> bootstrap) {
	bootstrap.setConfigurationSourceProvider(new SubstitutingSourceProvider(
			bootstrap.getConfigurationSourceProvider(), 
			new EnvironmentVariableSubstitutor(false)));
}

EnvironmentVariableSubstitutor is used with false in order to suppress throwing of UndefinedEnvironmentVariableException in case an environment variable is not defined. There is also an approach to pack the config.yml file into the JAR file and use it from there; then bootstrap.getConfigurationSourceProvider() should be changed to path -> Thread.currentThread().getContextClassLoader().getResourceAsStream(path), as sketched below.
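
A sketch of that classpath-based variant, assuming config.yml is packed inside the JAR:

@Override
public void initialize(Bootstrap<RestStubConfig> bootstrap) {
	bootstrap.setConfigurationSourceProvider(new SubstitutingSourceProvider(
			// Read the config from the classpath instead of the file system
			path -> Thread.currentThread().getContextClassLoader()
					.getResourceAsStream(path),
			new EnvironmentVariableSubstitutor(false)));
}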

Docker

Docker is a platform for building software inside containers which can then be deployed on different environments. A container is like a virtual machine, with the significant difference that it does not boot a full operating system. In this way the host's resources are optimised: a container consumes only as much memory as needed by the application, while a virtual machine consumes memory to run the operating system as well. Containers have resource (CPU, RAM, HDD) isolation so that the application sees the container as a separate operating system.

Dockerfile

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession. In the given example the Dockerfile looks like this:

FROM openjdk:8u121-jre-alpine
MAINTAINER Automation Rhapsody http://automationrhapsody.com/

WORKDIR /var/dropwizard-rest-stub

ADD target/sample-dropwizard-rest-stub-1.0-SNAPSHOT.jar /var/dropwizard-rest-stub/dropwizard-rest-stub.jar
ADD config-docker.yml /var/dropwizard-rest-stub/config.yml

EXPOSE 9000 9001

ENTRYPOINT ["java", "-jar", "dropwizard-rest-stub.jar", "server", "config.yml"]

The Dockerfile starts with FROM openjdk:8u121-jre-alpine, which instructs Docker to use this already prepared image as a base image for the container. At https://hub.docker.com/ there are numerous images maintained by different organisations or individuals that provide different frameworks and tools. They can save you a lot of work, allowing you to skip configuration and installation of general things. In the given case openjdk:8u121-jre-alpine has OpenJDK JRE 1.8.121 already installed with minimalist Linux libraries. MAINTAINER documents who the author of this file is. WORKDIR sets the working directory of the container. Then ADD copies the JAR file and config.yml to the container. ENTRYPOINT configures the container to be run as an executable. This instruction translates to the java -jar dropwizard-rest-stub.jar server config.yml command. More info can be found in the Dockerfile reference link.

Build and run Docker container

The Docker image can be built with the following command:

docker build -t dropwizard-rest-stub .

dropwizard-rest-stub is the name of the image; it is later used to run a container from it. Note that the dot at the end is mandatory. Then the container is run with:

docker run -it -p 9000:9000 -p 9001:9001 -e ENV_VARIABLE_VERSION=1.1.1 dropwizard-rest-stub

The name of the image, dropwizard-rest-stub, is put at the end. With -p 9000:9000, port 9000 from the container is exposed to the host machine; otherwise the container would not be accessible. With -e ENV_VARIABLE_VERSION=1.1.1 an environment variable is set. If not passed, it is substituted with 0.0.2 as described above.

Conclusion

Dropwizard has a very easy mechanism for making configuration dependent on environment variables. This makes Dropwizard applications extremely suitable to be built into Docker containers and run on different environments just by changing the environment variables being passed to the container.

Data driven testing with JUnit and Gradle

Post summary: How to do data driven testing with JUnit and Gradle.

In Data driven testing with JUnit parameterized tests post I've shown how to create a data driven JUnit test. It should be annotated with @RunWith(Parameterized.class).

Older Gradle and Parameterized.class

Gradle cannot run JUnit tests annotated with @RunWith(Parameterized.class). There is an official Gradle bug which states the issue is resolved in Gradle 2.12, so if you are using an older Gradle then the current post is suitable for you.

Data Driven JUnit tests

There is a library called junit-dataprovider which is easy to use. What you have to do to use it is:

  1. Annotate the test class
  2. Define test data
  3. Create test and use test data

Annotate the test class

The class needs to be run with a specialised runner in order to be treated as a data driven one. The runner is: com.tngtech.java.junit.dataprovider.DataProviderRunner. The class looks like:

import com.tngtech.java.junit.dataprovider.DataProviderRunner;
import org.junit.runner.RunWith;

@RunWith(DataProviderRunner.class)
public class LocatorDataProviderTest {
}

Define test data

Test data is seeded from a static method: public static Object[] dataProvider(). This method returns an array of Object arrays, where each inner array is one row with input and expected output test data. The method is annotated with @DataProvider. Here is how test data is defined:

@DataProvider
public static Object[] dataProvider() {
	return new Object[][] {
		{-1, -1, new Point(1, 1)},
		{-1, 0, new Point(1, 0)},
		{-1, 1, new Point(1, 1)},

		{0, -1, new Point(0, 1)},
		{0, 0, MOCKED_POINT},
		{0, 1, MOCKED_POINT},

		{1, -1, new Point(1, 1)},
		{1, 0, MOCKED_POINT},
		{1, 1, MOCKED_POINT}
	};
}

Create test and use test data

In order to use the test data in some test method it should be annotated with @UseDataProvider("dataProvider"), where "dataProvider" is the name of the static method which generates the test data. Another requirement is that the test method should have the same number and type of arguments as each row of the test data array. Here is how the test method looks:

@Test
@UseDataProvider("dataProvider")
public void testLocateResults(int x, int y, Point expected) {
	assertTrue(PointUtils.arePointsEqual(expected, 
				locatorUnderTest.locate(x, y)));
}

Putting it all together

Combining all steps into one class leads to the code below:

import com.automationrhapsody.junit.utils.PointUtils;
import com.tngtech.java.junit.dataprovider.DataProvider;
import com.tngtech.java.junit.dataprovider.DataProviderRunner;
import com.tngtech.java.junit.dataprovider.UseDataProvider;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertTrue;
import static org.mockito.Matchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

@RunWith(DataProviderRunner.class)
public class LocatorDataProviderTest {

	private static final Point MOCKED_POINT = new Point(11, 11);

	private LocatorService locatorServiceMock = mock(LocatorService.class);

	private Locator locatorUnderTest;

	@DataProvider
	public static Object[] dataProvider() {
		return new Object[][] {
			{-1, -1, new Point(1, 1)},
			{-1, 0, new Point(1, 0)},
			{-1, 1, new Point(1, 1)},

			{0, -1, new Point(0, 1)},
			{0, 0, MOCKED_POINT},
			{0, 1, MOCKED_POINT},

			{1, -1, new Point(1, 1)},
			{1, 0, MOCKED_POINT},
			{1, 1, MOCKED_POINT}
		};
	}

	@Before
	public void setUp() {
		when(locatorServiceMock.geoLocate(any(Point.class)))
				.thenReturn(MOCKED_POINT);

		locatorUnderTest = new Locator(locatorServiceMock);
	}

	@Test
	@UseDataProvider("dataProvider")
	public void testLocateResults(int x, int y, Point expected) {
		assertTrue(PointUtils.arePointsEqual(expected, 
				locatorUnderTest.locate(x, y)));
	}
}

Benefits

Using junit-dataprovider has one huge benefit over JUnit's Parameterized runner: a data provider is used only for the test methods annotated with its name, while JUnit's Parameterized runner runs each and every test method with the given data provider. In one test class you can define several data providers as different static methods and use them in different test methods, as sketched below. This is not possible with JUnit's Parameterized runner.
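
A minimal sketch of two providers bound to different test methods in one class; the provider names and assertions are hypothetical:

import com.tngtech.java.junit.dataprovider.DataProvider;
import com.tngtech.java.junit.dataprovider.DataProviderRunner;
import com.tngtech.java.junit.dataprovider.UseDataProvider;

import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertTrue;

@RunWith(DataProviderRunner.class)
public class MultipleProvidersTest {

	@DataProvider
	public static Object[] smallNumbers() {
		return new Object[][] {{1}, {2}};
	}

	@DataProvider
	public static Object[] bigNumbers() {
		return new Object[][] {{1000}, {2000}};
	}

	@Test
	@UseDataProvider("smallNumbers")
	public void testSmall(int value) {
		assertTrue(value < 100);
	}

	@Test
	@UseDataProvider("bigNumbers")
	public void testBig(int value) {
		assertTrue(value >= 1000);
	}
}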

Conclusion

junit-dataprovider is a very nice library which makes JUnit 4 data driven testing easy. Even if you do not have issues with Gradle I would still recommend using it instead of the standard Parameterized runner, because it gives you the flexibility to bind a data provider method to a specific unit test method.

Coloured log files in Linux

Post summary: How to colour your log files for better perception under Linux.

I'm far away from being a Linux guru and honestly I like it that way. In order to be effective as QA you need to have minimal knowledge of how to do certain things under Linux. This post is devoted to working with logs.

Chaining commands

Linux offers a possibility to combine several commands by chaining them. In the current post I will use just one of them, the pipe (|) operator. By using it the output of one command is used as input for the other.

I would strongly recommend reading the following post if you are interested in chaining Linux commands: 10 Useful Chaining Operators in Linux with Practical Examples.

Useful commands

The commands below are ones I use on a daily basis when working with logs. I will show basic usage; if you need more details on a certain command you can type man <command>, e.g. man cat, in the Linux console and it will display more information.

grep

It is used to search in text files or print lines matching some pattern. Usage is: grep text filename.log. If text contains spaces it should be wrapped in single quotes (') or double quotes ("). If text contains a single quote, then you have to wrap it in double quotes and vice versa.

cat

Prints file content on the standard output. Usage is: cat filename.log. You can concatenate several files: cat file1.log file2.log. The drawback of using this command shows with large files. It combines very well with grep to search the output of the file: cat filename.log | grep text.

zcat

Prints the content of a zipped file. Usage: zcat filename.gz. Combines with grep: zcat filename.gz | grep text.

tail

Prints the last 10 lines of a file. Usage: tail filename.log. The most valuable tail usage is with the -f option: tail -f filename.log. This monitors the file in real time and outputs all new text appended to the file. You can also monitor several files: tail -f file1.log file2.log.

less

Used for paging through a file. It shows one page and with the arrow up and down keys you can scroll through the file. Usage: less filename.txt. In order to exit just type q. What is valuable with this command is that you can type a search term /text and then with n go to the next appearance and with N go to the previous one.

Colours

The commands above are nice, but using colours aids much better perception of the information in the files. In order to use colours the perl -pe command will be used, chained to colour the output of the commands described above. The syntax is: perl -pe 's/^.*INFO.*$/\e[0;36;40m$&\e[0m/g'. It is a quite complex expression and I will try to explain it in detail.

Match text to be highlighted

^.*INFO.*$ is a regular expression that matches the text to be highlighted. The character ^ means from the beginning of the string or line, the character $ means to the end of the string or line. The group .* matches any characters. So this regular expression means: inspect every string or line and match those that contain INFO.

Text effects

\e[0;36;40m is the colouring part of the expression. 0 is the value of the ANSI escape code. Possible values for the escape code are shown in the table below. Note that not all of them are supported by all OS.

Code Effect
0 Reset / Normal
1 Bold or increased intensity
2 Faint (decreased intensity)
3 Italic: on
4 Underline: Single
5 Blink: Slow
6 Blink: Rapid
7 Image: Negative
8 Conceal
9 Crossed-out

More codes can be found in ANSI escape code wiki.

Text colour

36 from \e[0;36;40m is the colour code of the text. The colour is different depending on the escape code. Possible combinations of escape and colour codes are:

Code Colour Code Colour
0;30 Black 1;30 Dark Grey
0;31 Red 1;31 Light Red
0;32 Green 1;32 Lime
0;33 Dark Yellow 1;33 Yellow
0;34 Blue 1;34 Light Blue
0;35 Purple 1;35 Magenta
0;36 Dark Cyan 1;36 Cyan
0;37 Light Grey 1;37 White

Background colour

40m from \e[0;36;40m is the colour code of the background. Background colours are:

Code Colour
40m Black
41m Red
42m Green
43m Yellow
44m Blue
45m Purple
46m Cyan
47m Light Grey

Sample colour scheme for logs

One possible colour scheme I like is: cat application.log | perl -pe 's/^.*FATAL.*$/\e[1;37;41m$&\e[0m/g; s/^.*ERROR.*$/\e[1;31;40m$&\e[0m/g; s/^.*WARN.*$/\e[0;33;40m$&\e[0m/g; s/^.*INFO.*$/\e[0;36;40m$&\e[0m/g; s/^.*DEBUG.*$/\e[0;37;40m$&\e[0m/g' which will produce the following output:

[Image: coloured log file output]

Conclusion

Having coloured logs makes it much easier to investigate them. Linux provides tooling for better visualisation, so it is good to take advantage of it.

Mutation testing for Java with PITest

Post summary: Introduction to mutation testing and examples with the PITest tool for Java.

Mutation testing

Mutation testing is a form of white-box testing. It is used to design new unit tests and evaluate the quality of the existing ones. Mutation testing involves modifying the program code in small ways, based on well-defined mutation operators that either mimic typical programming errors (such as using the wrong operator or variable name) or force the creation of valuable tests (such as dividing each expression by zero). Each mutated version is called a mutant. Existing unit tests are run against this mutant. If some unit test fails then the mutant is killed. If no unit test fails then the mutant survived. Test suites are measured by the percentage of mutants that they kill. New tests can be designed to kill additional mutants. The purpose of mutation testing is to help the tester develop effective tests or locate weaknesses in the test data used in the existing tests. A tiny illustration of a mutant is shown below.
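
A tiny illustration with a hypothetical max() method and the negate-conditionals operator:

// Original code
public int max(int a, int b) {
	return a > b ? a : b;
}

// Mutant: the conditional is negated; a test suite that only calls
// max(2, 2) would not kill it, while asserting max(2, 1) == 2 would
public int max(int a, int b) {
	return a <= b ? a : b;
}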

It is not very often that I get surprised by discovering a new testing technique I've never heard about, so I must give credit to Alexander Todorov, since I learned this one from his presentation.

PITest

Mutation testing can be done manually by changing program code and running the tests, but this is not really effective and can lead to serious problems where you commit mutated code by mistake. The most effective and recommended way of doing mutation testing is by using tools. PITest is a tool for mutation testing in Java. It seems to be growing fast and has a big community.

Integrate PITest

Examples given in the current post can be found in GitHub sample-dropwizard-rest-stub repository. It is very easy to use PITest. The first step is to add it to the pom.xml file:

<plugin>
	<groupId>org.pitest</groupId>
	<artifactId>pitest-maven</artifactId>
	<version>1.1.10</version>
	<configuration>
		<targetClasses>
			<param>com.automationrhapsody.reststub.persistence*</param>
		</targetClasses>
		<targetTests>
			<param>com.automationrhapsody.reststub.persistence*</param>
		</targetTests>
	</configuration>
</plugin>

The example given above is the simplest one. PITest provides various configurations, very well described on the PITest Maven Quick Start page.

Run PITest

Once configured it can be run with: mvn org.pitest:pitest-maven:mutationCoverage, or if you want to ensure a clean build every time: mvn clean test org.pitest:pitest-maven:mutationCoverage.

PITest results

PITest provides very understandable reports. Along with the line code coverage it measures the mutation coverage. There are statistics at package level.

[Image: PITest package-level report]

PersonDB is the only class that has been unit tested. It has 91% line coverage and 1 mutation that was not killed. Opening the PersonDB class gives more details on what is not covered by tests and what the mutation is:

[Image: PITest class-level report]

PITest has negated the condition on line 44, making the mutated code: PERSONS.get(person.getId()) == null. Unit tests passed despite this mutation. Full reports can be found in the PITest report example.

Acting on PITest results

The current results indicate that the unit tests are not good enough because of the survived mutation. They are also insufficient, as one line of code is not tested at all. The first improvement is to kill the mutation by changing line 37 of PersonDBTest.java from PersonDB.save(person); to assertEquals("Added Person with id=11", PersonDB.save(person));. When PITest is run again, the results show all mutations are killed.

[Image: PITest class report with all mutations killed]

Still there is one line of code not covered. This will require adding a new unit test to cover the update person functionality.

PITest performance

Doing mutation testing requires significant resources to run a large amount of unit tests. The examples given above work very fast, but they are far away from reality. I was curious how this works on a real project, so I ran it on a relatively small one. It has very good unit tests though, with 95% line coverage – 2291 out of 2402 lines. Still, PITest found only 90% mutation coverage (849/940). 51 out of 940 mutations survived and 40 had no unit test coverage, which gives room for improvement. The total run took 3 mins 11 secs. See the results below:

PIT >> INFO : Found  464 tests
PIT >> INFO : Calculated coverage in 22 seconds.
PIT >> INFO : Created  132 mutation test units

- Timings
==================================================================
> scan classpath : < 1 second
> coverage and dependency analysis : 22 seconds
> build mutation tests : < 1 second
> run mutation analysis : 2 minutes and 48 seconds
--------------------------------------------------------------------------------
> Total : 3 minutes and 11 seconds
--------------------------------------------------------------------------------
==================================================================
- Statistics
==================================================================
>> Generated 940 mutations Killed 849 (90%)
>> Ran 2992 tests (3.18 tests per mutation)
==================================================================
- Mutators
==================================================================
> org.pitest.mutationtest.engine.gregor.mutators.ConditionalsBoundaryMutator
>> Generated 18 Killed 9 (50%)
> KILLED 7 SURVIVED 7 TIMED_OUT 2 NON_VIABLE 0
> MEMORY_ERROR 0 NOT_STARTED 0 STARTED 0 RUN_ERROR 0
> NO_COVERAGE 2
--------------------------------------------------------------------------------
> org.pitest.mutationtest.engine.gregor.mutators.VoidMethodCallMutator
>> Generated 115 Killed 96 (83%)
> KILLED 81 SURVIVED 12 TIMED_OUT 15 NON_VIABLE 0
> MEMORY_ERROR 0 NOT_STARTED 0 STARTED 0 RUN_ERROR 0
> NO_COVERAGE 7
--------------------------------------------------------------------------------
> org.pitest.mutationtest.engine.gregor.mutators.ReturnValsMutator
>> Generated 503 Killed 465 (92%)
> KILLED 432 SURVIVED 16 TIMED_OUT 33 NON_VIABLE 0
> MEMORY_ERROR 0 NOT_STARTED 0 STARTED 0 RUN_ERROR 0
> NO_COVERAGE 22
--------------------------------------------------------------------------------
> org.pitest.mutationtest.engine.gregor.mutators.MathMutator
>> Generated 10 Killed 9 (90%)
> KILLED 8 SURVIVED 0 TIMED_OUT 1 NON_VIABLE 0
> MEMORY_ERROR 0 NOT_STARTED 0 STARTED 0 RUN_ERROR 0
> NO_COVERAGE 1
--------------------------------------------------------------------------------
> org.pitest.mutationtest.engine.gregor.mutators.NegateConditionalsMutator
>> Generated 294 Killed 270 (92%)
> KILLED 254 SURVIVED 16 TIMED_OUT 15 NON_VIABLE 0
> MEMORY_ERROR 0 NOT_STARTED 0 STARTED 0 RUN_ERROR 1
> NO_COVERAGE 8
--------------------------------------------------------------------------------

Case study

I used PITest on a production project written in Java 8, extensively using Stream APIs and lambda expressions. The initial run of 606 existing test cases gave 90% mutation coverage (846/940) and 95% line coverage (2291/2402).

Note that in the statistics given above PITest counts 464 existing tests. This is because some of them are data driven tests; JUnit calculates the total number as 606 because every data row is counted as a test. Learn how to make JUnit data driven tests in Data driven testing with JUnit parameterized tests post.

After analysis and adding new tests the total number of test cases was increased to 654, which is almost an 8% increase. The PITest run then showed 97% mutation coverage (911/939) and 97% line coverage (2332/2403). During the analysis no bugs in the code were found.

Conclusion

Mutation testing is a good additional technique to make your unit tests better. It should not be the primary technique though, as tests would then be written just to kill the mutations instead of actually testing the functionality. In projects with well written unit tests mutation testing does not bring much value, but it is still a very good addition to your testing arsenal. PITest is a very good tool for doing mutation testing in Java, so I would say give it a try.

Mock/Stub REST API with WireMock for better unit testing

Post summary: Examples of how to use WireMock to stub (mocking is also possible as a term) a REST API in order to make unit testing better.

Code shown in examples below is available in GitHub java-samples/wiremock repository.

WireMock

WireMock is a simulator for HTTP-based APIs. Some might consider it a service virtualisation tool or a mock server. It enables you to stay productive when an API you depend on doesn’t exist or isn’t complete. It supports testing of edge cases and failure modes that the real API won’t reliably produce. And because it’s fast it can reduce your build time from hours down to minutes.

When to use it

One case where WireMock is very helpful is when building a REST API client. Create simple REST API client using Jersey post describes a way to achieve this with Jersey. In most cases a REST API might not be forced to fail with certain errors, so WireMock is an excellent addition to standard functional tests to verify that the client is working correctly in corner cases. It is also mandatory for unit testing, because it eliminates dependencies on external services. The mock server is extremely fast and under complete control. Another case where WireMock helps is if you need to create API tests, but the API is not ready yet or not working. WireMock can be used to stub the service in order to build the testing framework and structure. Once the real server is ready, the tests will just be elaborated and details cleared up.

How to use it

WireMock is used in JUnit as a rule. More info on JUnit rules can be found in Use JUnit rules to debug failed API tests post. There are WireMockClassRule and WireMockRule. The most appropriate is the class rule: there is no need to create a mock server for each and every test, which would also require additional logic for port collision avoidance. In case you use another unit testing framework there is WireMockServer, which can be started before tests and stopped afterwards. The code given below tests the REST API client from Create simple REST API client using Jersey post. First the JUnit class rule is created.

public class JerseyPersonRestClientTest {

	private static final int WIREMOCK_PORT = 9999;

	@ClassRule
	public static final WireMockClassRule WIREMOCK_RULE
			= new WireMockClassRule(WIREMOCK_PORT);

	private JerseyPersonRestClient clientUnderTest;

	@Before
	public void setUp() throws Exception {
		clientUnderTest
				= new JerseyPersonRestClient("http://localhost:" + WIREMOCK_PORT);
	}
}

The port should be free, otherwise a com.github.tomakehurst.wiremock.common.FatalStartupException: java.lang.RuntimeException: java.net.BindException: Address already in use: bind exception is thrown. A way to avoid hard-coding the port is sketched below.
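
A sketch of a collision-free alternative, assuming WireMock 2.x, where WireMock picks a free port and the client is pointed to it:

@ClassRule
public static final WireMockClassRule WIREMOCK_RULE = new WireMockClassRule(
		WireMockConfiguration.wireMockConfig().dynamicPort());

@Before
public void setUp() throws Exception {
	// port() returns the port WireMock actually bound to
	clientUnderTest = new JerseyPersonRestClient(
			"http://localhost:" + WIREMOCK_RULE.port());
}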

Usage is very simple. There are several methods which are important. The method stubFor() initialises the stub. The method get() notifies that the stub is called with an HTTP GET request. The method urlMatching() uses a regular expression to match which API path is invoked, then willReturn() returns aResponse() withBody(). There are several with() methods which give a variety of options for testing. The complete test is below:

@Test
public void testGet_WithBody_PersonJson() {
	String personString = "{\"firstName\":\"FN1\",\"lastName\":\"LN1\"}";
	stubFor(get(urlMatching(".*/person/get/.*"))
			.willReturn(aResponse().withBody(personString)));

	Person actual = clientUnderTest.get(1);

	assertEquals("FN1", actual.getFirstName());
	assertEquals("LN1", actual.getLastName());
}

This is a very straightforward case where the client should work, but when you start to elaborate on the with() scenarios you can sometimes catch an issue with the code being tested. The test below passes even in the case where the API returns HTTP response code 500 – Internal Server Error, so the client might need to add some verification of response codes as well:

@Test
public void testGet_WithStatus() {
	String personString = "{\"firstName\":\"FN1\",\"lastName\":\"LN1\"}";
	// GET_URL is the URL matcher for the person GET endpoint,
	// defined elsewhere in the test class
	stubFor(get(GET_URL)
			.willReturn(aResponse()
					.withStatus(500)
					.withBody(personString)));

	Person actual = clientUnderTest.get(1);

	assertEquals("FN1", actual.getFirstName());
	assertEquals("LN1", actual.getLastName());
}

Difference between stub and mock

In Mock JUnit tests with Mockito example post I've shown how to use Mockito to mock given classes and control their behaviour in order to control and eliminate dependencies. Mockito is not suitable for this particular case, because if you mock JerseyPersonRestClient's get() method it will just return an object; there is no testing whatsoever. Stubbing with WireMock, on the other hand, tests all the code for invoking the service, getting a response (controlled by you) and deserialising this response from the network stream to a Java object. It is much more adequate and close-to-reality testing.

Conclusion

WireMock is a very powerful framework for API stubbing in order to make your tests better, and it is a must for unit testing a REST API client.

Create simple REST API client using Jersey

Post summary: Code examples of how to create a REST API client using Jersey.

In the current post I will give code examples of how to build a REST API client using Jersey. Code shown in the examples below is available in GitHub java-samples/wiremock repository.

REST API client

A REST API client is needed when you want to consume a given REST API, either for production usage or for testing this API. In the latter case the client does not need to be very sophisticated, since it is used just for testing the API with Java code. In the current post I will show how to create a REST API client for the Persons functionality of the Dropwizard Rest Stub described in Build a RESTful stub server with Dropwizard post.

Jersey 2 and Jackson

Jersey is a framework which allows easier building of RESTful services. It is one of the most used such frameworks nowadays. Jackson is a JSON parser for Java. It is also one of the most used ones. The first step is to import the libraries you are going to use:

<dependency>
	<groupId>org.glassfish.jersey.core</groupId>
	<artifactId>jersey-client</artifactId>
	<version>2.25.1</version>
</dependency>
<dependency>
	<groupId>org.glassfish.jersey.media</groupId>
	<artifactId>jersey-media-json-jackson</artifactId>
	<version>2.25.1</version>
</dependency>
<dependency>
	<groupId>com.fasterxml.jackson.core</groupId>
	<artifactId>jackson-databind</artifactId>
	<version>2.8.7</version>
</dependency>
<dependency>
	<groupId>com.fasterxml.jackson.core</groupId>
	<artifactId>jackson-annotations</artifactId>
	<version>2.8.7</version>
</dependency>

It is not mandatory, but it is good practice to create an interface for this client and then do implementations. The idea is you may have different implementations and switch between them.

import com.automationrhapsody.wiremock.model.Person;

import java.util.List;

public interface PersonRestClient {

	List<Person> getAll();

	Person get(int id);

	String save(Person person);

	String remove();
}

Now you can start with the implementation. In the given example the constructor takes the host, which also includes port and scheme. Then it creates a ClientConfig object with specific properties. The full list is shown in the ClientProperties javadoc. In the example I set up timeouts only. Next is to create the WebTarget object used to query the different API endpoints. It could not be simpler than that:

import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;

import org.glassfish.jersey.client.ClientConfig;
import org.glassfish.jersey.client.ClientProperties;
import org.glassfish.jersey.filter.LoggingFilter;

public class JerseyPersonRestClient implements PersonRestClient {

	private final WebTarget webTarget;

	public JerseyPersonRestClient(String host) {
		ClientConfig clientConfig = new ClientConfig()
				.property(ClientProperties.READ_TIMEOUT, 30000)
				.property(ClientProperties.CONNECT_TIMEOUT, 5000);

		webTarget = ClientBuilder
				.newClient(clientConfig)
				.register(new LoggingFilter())
				.target(host);
	}
}

Once WebTarget is instantiated it will be used to query all the endpoints. I will show the implementation of one GET and one POST endpoint consumption:

@Override
public List<Person> getAll() {
	Person[] persons = webTarget
		.path(ENDPOINT_GET_ALL)
		.request()
		.get()
		.readEntity(Person[].class);
	return Arrays.stream(persons).collect(Collectors.toList());
}

@Override
public String save(Person person) {
	if (person == null) {
		throw new RuntimeException("Person entity should not be null");
	}
	return webTarget
		.path(ENDPOINT_SAVE)
		.request()
		.post(Entity.json(person))
		.readEntity(String.class);
}

Full code can be seen in GitHub repo: JerseyPersonRestClient full code.

jersey-media-json-jackson

This is the bonding between Jersey and Jackson. It should be used, otherwise Jersey's readEntity(Class var1) method throws:

Exception in thread "main" org.glassfish.jersey.message.internal.MessageBodyProviderNotFoundException: MessageBodyReader not found for media type=application/json, type=class …

or

Exception in thread "main" org.glassfish.jersey.message.internal.MessageBodyProviderNotFoundException: MessageBodyWriter not found for media type=application/json, type=class …

Client builder

In the code there is a class called PersonRestClientBuilder. In the current case it does not do many things, but in reality it might turn out that a lot of configuration or input is provided to build a REST API client instance. This is where such a builder becomes very useful. A usage sketch is shown after the code:

public class PersonRestClientBuilder {

	private String host;

	public PersonRestClientBuilder setHost(String host) {
		this.host = host;
		return this;
	}

	public PersonRestClient build() {
		return new JerseyPersonRestClient(host);
	}
}
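
A minimal usage sketch; the host value is a placeholder:

PersonRestClient client = new PersonRestClientBuilder()
		.setHost("http://localhost:9000")
		.build();
List<Person> allPersons = client.getAll();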

Unit testing

It is common and best practice that each piece of code is covered by unit tests. In Mock JUnit tests with Mockito example post I've described how Mockito can be used. The problem in the current example is that if we use Mockito we have to mock the readEntity() method to return some response objects. This is way too much mocking and will not do adequate testing; actually it does no testing at all. We want to test that our REST API client successfully communicates over the wire. In order to do proper testing we need to use a library called WireMock. In a subsequent post I will add more details on how to use it.

Conclusion

REST API consuming or testing requires building a client. Jersey is a perfect candidate to be used as the underlying framework. WireMock can be used for unit testing the REST API client you have created.

Amazon S3 file upload with cURL and Java code

Post summary: Working Java code to upload a file to an Amazon S3 bucket with cURL.

This post gives a solution to a very rare use case where you want to use cURL from Java code to upload a file to an Amazon S3 bucket.

Amazon S3

Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web.

Upload to Amazon S3 with Java

Amazon S3 has very good documentation on how to use their Java client in order to upload or retrieve files. The ways described in the manual are listed below; a sketch of the first one follows the list.

  • Upload in a single operation – a few lines of code to instantiate an AmazonS3Client object and to upload the file in one chunk.
  • Upload using the multipart upload API – provides the ability to upload a file in several chunks. It is possible to use low and high level operations on the upload.
  • Upload using pre-signed URLs – with this approach you can upload to someone else's bucket without having the access key and secret key.

More on each of the approaches can be found in Amazon S3 upload object manual.
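
A sketch of the single-operation upload, assuming the AWS SDK for Java v1 is on the classpath; the bucket, key, file and credentials are placeholders:

import java.io.File;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

public class AmazonS3SdkUploader {

	public void upload() {
		AmazonS3 s3Client = new AmazonS3Client(
				new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
		// Single-operation upload: the whole file goes in one chunk
		s3Client.putObject("my-bucket", "reports/report.zip",
				new File("report.zip"));
	}
}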

Upload with cURL

cURL is a widely used and powerful tool for data transfer. It supports various protocols and functionalities. In the Windows world it is not widely used, but there are cURL implementations for Windows which can be used. Uploading to an Amazon S3 bucket with cURL is pretty easy. There are bash scripts that can do it. One such script is described in File Upload on Amazon S3 server using CURL request post. It is the basis I took to create the Java code for the upload.

Upload with cURL and Java

Upload with Java and cURL is a pretty rare case. The benefit of using this approach is memory and CPU optimisation. If the upload is done through Java code, the file to be uploaded is read and stored in the heap. This reading is optimised and already uploaded parts are removed from the heap. Still, reading and removing the no longer needed file parts requires memory to keep them and CPU for garbage collection, especially when a huge amount of data is to be transferred. In some cases where resources and performance are absolutely important this memory and CPU usage can be critical. cURL also uses memory to upload the file, but this is no longer a problem of the JVM, rather a problem of the OS. The upload Java code is:

import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.nio.file.Path;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

import org.apache.commons.io.IOUtils;

public class AmazonS3CurlUploader {

	private static final String ALGORITHM = "HmacSHA1";
	private static final String CONTENT_TYPE = "application/octet-stream";
	private static final String ENCODING = "UTF8";

	public boolean upload(Path localFile, String s3Bucket, String s3FileName,
			String s3AccessKey, String s3SecretKey) {
		boolean result;
		try {
			Process cURL = createCurlProcess(localFile, s3Bucket,
					s3FileName, s3AccessKey, s3SecretKey);
			cURL.waitFor();
			String response = IOUtils.toString(cURL.getInputStream())
					+ IOUtils.toString(cURL.getErrorStream());
			result = response.contains("HTTP/1.1 200 OK");
		} catch (IOException | InterruptedException e) {
			// Exception handling goes here!
			result = false;
		}
		return result;
	}

	private Process createCurlProcess(Path file, String bucket, String fileName,
			String accessKey, String secretKey) throws IOException {
		String dateFormat = ZonedDateTime.now()
				.format(DateTimeFormatter.RFC_1123_DATE_TIME);
		String relativePath = "/" + bucket + "/" + fileName;
		String stringToSign = "PUT\n\n" + CONTENT_TYPE + "\n"
				+ dateFormat + "\n" + relativePath;
		String signature = Base64.getEncoder()
				.encodeToString(hmacSHA1(stringToSign, secretKey));

		return new ProcessBuilder(
				"curl", "-X", "PUT",
				"-T", file.toString(),
				"-H", "Host: " + bucket + ".s3.amazonaws.com",
				"-H", "Date: " + dateFormat,
				"-H", "Content-Type: " + CONTENT_TYPE,
				"-H", "Authorization: AWS " + accessKey + ":" + signature,
				"http://" + bucket + ".s3.amazonaws.com/" + fileName)
				.start();
	}

	private byte[] hmacSHA1(String data, String key) {
		try {
			Mac mac = Mac.getInstance(ALGORITHM);
			mac.init(new SecretKeySpec(key.getBytes(ENCODING), ALGORITHM));
			return mac.doFinal(data.getBytes(ENCODING));
		} catch (NoSuchAlgorithmException | InvalidKeyException
				| UnsupportedEncodingException e) {
			return new byte[] {};
		}
	}
}
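
A hypothetical invocation of the uploader (all values below are placeholders, not real credentials) might be:

import java.nio.file.Paths;

public class UploadExample {

	public static void main(String[] args) {
		AmazonS3CurlUploader uploader = new AmazonS3CurlUploader();
		// Placeholder bucket, key and credentials for illustration only
		boolean success = uploader.upload(Paths.get("/tmp/file.bin"),
				"my-bucket", "folder/file.bin",
				"ACCESS_KEY", "SECRET_KEY");
		System.out.println("Upload successful: " + success);
	}
}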

Conclusion

Upload to Amazon S3 with cURL from Java code is a rare case, which could be beneficial in cases where memory and CPU usage by the JVM is crucial. Delegating the file upload to cURL does not disturb the JVM heap and the garbage collection process.

Read more...

Mock static methods in JUnit with PowerMock example

Last Updated on by

Post summary: Examples how to mock static methods in JUnit tests with PowerMock.

In Mock JUnit tests with Mockito example post I have shown how and why to use the Mockito Java mocking framework to create good unit tests. There are several things that Mockito is not supporting, and one of them is mocking of static methods. It is not that common to encounter such a situation in real life, but the moment you encounter it Mockito is not able to solve the task. This is where PowerMock comes to the rescue.

PowerMock

PowerMock is a framework that extends other mock libraries giving them more powerful capabilities. PowerMock uses a custom classloader and bytecode manipulation to enable mocking of static methods, constructors, final classes and methods, private methods, removal of static initialisers and more.

Example class for unit test

Code shown in the examples below is available in GitHub java-samples/junit repository. We are going to unit test a class called LocatorService that internally uses a static method from the utilities class Utils. The method randomDistance(int distance) in Utils returns a random value, hence it has no predictable behaviour and the only way to test code that uses it is by mocking it:

public class LocatorService {

	public Point generatePointWithinDistance(Point point, int distance) {
		return new Point(point.getX() + Utils.randomDistance(distance), 
			point.getY() + Utils.randomDistance(distance));
	}
}

And Utils class is:

import java.util.Random;

public final class Utils {

	private static final Random RAND = new Random();

	private Utils() {
		// Utilities class
	}

	public static int randomDistance(int distance) {
		return RAND.nextInt(distance + distance) - distance;
	}
}

Nota bene: it is good code design practice to make utility classes final and with a private constructor.
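
The Point class itself is not listed in the post; a minimal version consistent with how it is used in the examples in this post would be:

public class Point {

	private final int x;
	private final int y;

	public Point(int x, int y) {
		this.x = x;
		this.y = y;
	}

	public int getX() {
		return x;
	}

	public int getY() {
		return y;
	}
}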

Using PowerMock

In order to use PowerMock three things have to be done:

  1. Import PowerMock into project
  2. Annotate unit test class
  3. Mock the static class

Import PowerMock into project

In case of using Maven the import statements are:

<dependency>
	<groupId>org.powermock</groupId>
	<artifactId>powermock-module-junit4</artifactId>
	<version>1.6.5</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.powermock</groupId>
	<artifactId>powermock-api-mockito</artifactId>
	<version>1.6.5</version>
	<scope>test</scope>
</dependency>

Nota bene: there is a possibility of version mismatch between PowerMock and Mockito. I’ve received a java.lang.NoSuchMethodError: org.mockito.mock.MockCreationSettings.isUsingConstructor()Z exception when using PowerMock 1.6.5 with Mockito 1.9.5, so I had to upgrade to Mockito 1.10.19.

Annotate JUnit test class

Two annotations are needed. One is to run the unit test with PowerMockRunner: @RunWith(PowerMockRunner.class). The other is to prepare the Utils class for testing: @PrepareForTest({Utils.class}). The final code is:

import org.junit.runner.RunWith;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest({Utils.class})
public class LocatorServiceTest {
}

Mock static class

Explicit mocking of the static class should be made in order to be able to use the standard Mockito when().thenReturn() construction:

int distance = 111;
PowerMockito.mockStatic(Utils.class);
when(Utils.randomDistance(anyInt())).thenReturn(distance);

Putting it all together

The final JUnit test class is shown below. The code in the test verifies the logic in LocatorService: if a point is given then a new point is returned by adding a random distance to its X and Y coordinates. By removing the random element with mocking, the code can be tested with specific values.

package com.automationrhapsody.junit;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

import static org.junit.Assert.assertTrue;
import static org.mockito.Matchers.anyInt;
import static org.mockito.Mockito.when;

@RunWith(PowerMockRunner.class)
@PrepareForTest({Utils.class})
public class LocatorServiceTest {

	private LocatorService locatorServiceUnderTest;

	@Before
	public void setUp() {
		PowerMockito.mockStatic(Utils.class);

		locatorServiceUnderTest = new LocatorService();
	}

	@Test
	public void testGeneratePointWithinDistance() {
		int distance = 111;

		when(Utils.randomDistance(anyInt())).thenReturn(distance);

		Point input = new Point(11, 11);
		Point expected = new Point(input.getX() + distance, 
				input.getY() + distance);

		assertTrue(arePointsEqual(expected, 
			locatorServiceUnderTest.generatePointWithinDistance(input, 1)));
	}

	public static boolean arePointsEqual(Point p1, Point p2) {
		return p1.getX() == p2.getX()
			&& p1.getY() == p2.getY();
	}
}

Conclusion

PowerMock is a powerful addition to standard mocking libraries such as Mockito. Using it has some specifics, but once you understand them it is easy and fun to use. Keep in mind that if you encounter a need to use PowerMock that can mean the code under test is not well designed. In my experience it is possible to have very good unit tests with more than 85% coverage without any PowerMock usage. Still there are some exceptional cases where PowerMock can be put into operation.

Read more...

Data driven testing with JUnit parameterized tests

Last Updated on by

Post summary: How to do data driven testing with JUnit parameterized tests.

In Mock JUnit tests with Mockito example post I have introduced Mockito and showed how to use it for proper unit testing. In the current post I will show how to improve test coverage by adding more scenarios. One solution is to copy and then paste a single unit test and change the input and expected output values, but this is a failure-prone approach. A smarter approach is needed – data driven testing.

Data Driven Testing

Term from Wikipedia is: Data-driven testing (DDT) is a term used in the testing of computer software to describe testing done using a table of conditions directly as test inputs and verifiable outputs as well as the process where test environment settings and control are not hard-coded.

This is exactly what is needed to improve test coverage – test with different scenarios and different input data without hard-coding the scenario itself, but just feeding different input and expected output data to it.

Parameterized JUnit tests

JUnit supports running a test or several tests with a given data table. Several things have to be done in order to achieve this:

  1. Annotate the test class
  2. Define test data
  3. Define variables to store the test data and read it
  4. Use tests data in tests

Nota bene: Every JUnit test (method annotated with @Test) is executed with each row of the test data set. If you have 3 tests and 12 data rows this will result in 36 tests.

Annotate the class

The class needs to be run with a specialised runner in order to be treated as a data driven one. The runner is org.junit.runners.Parameterized. The class looks like:

import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class LocatorParameterizedTest {
}

Define test data

Test data is seeded from a static method: public static Iterable<Object[]> data(). This method returns a collection of Object arrays where each array is one row with input and expected output test data. The method is annotated with @Parameterized.Parameters. The annotation may accept a name argument which can display data from each row by its index: name = "{index}: Test with X={0}, Y={1}, result is: {2}", where {index} is the current test sequence and {0} is the first element from the Object array. Here is how test data is defined:

@Parameterized.Parameters(name = "{index}: Test with X={0}, Y={1}, result: {2}")
public static Iterable<Object[]> data() {
	return Arrays.asList(new Object[][] {
		{-1, -1, new Point(1, 1)},
		{-1, 0, new Point(1, 0)},
		{-1, 1, new Point(1, 1)},
	});
}

Define variables to store the test data and read it

Private fields are needed to store every index from the Object array representing a test data row. Those variables are set in the constructor of the class. Note that the constructor must have the same number of parameters as there are elements in each data row. If there is a difference, running the test fails with a java.lang.IllegalArgumentException: wrong number of arguments exception. The code is:

private final int x;
private final int y;
private final Point expected;

public LocatorParameterizedTest(int x, int y, Point expected) {
	this.x = x;
	this.y = y;
	this.expected = expected;
}

Use tests data in tests

Once read, test data is accessed in tests by using the private fields that were set through the constructor:

@Test
public void testLocateLocalResult() {
	assertTrue(arePointsEqual(expected, locatorUnderTest.locate(x, y)));
}

private boolean arePointsEqual(Point p1, Point p2) {
	return p1.getX() == p2.getX()
		&& p1.getY() == p2.getY();
}

Putting it all together

Combining all steps into one class leads to the unit test shown below. For comparison, the original test class with just two tests, as described in Mock JUnit tests with Mockito example post, is shown after it:

Data driven test with 12 cases

import java.util.Arrays;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

import static org.junit.Assert.assertTrue;
import static org.mockito.Matchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

@RunWith(Parameterized.class)
public class LocatorParameterizedTest {

	private static final Point MOCKED_POINT = new Point(11, 11);

	private LocatorService locatorServiceMock = mock(LocatorService.class);

	private Locator locatorUnderTest;

	@Parameterized.Parameters(name 
		= "{index}: Test with X={0}, Y={1}, result: {2}")
	public static Iterable<Object[]> data() {
		return Arrays.asList(new Object[][] {
			{-1, -1, new Point(1, 1)},
			{-1, 0, new Point(1, 0)},
			{-1, 1, new Point(1, 1)},

			{0, -1, new Point(0, 1)},
			{0, 0, MOCKED_POINT},
			{0, 1, MOCKED_POINT},

			{1, -1, new Point(1, 1)},
			{1, 0, MOCKED_POINT},
			{1, 1, MOCKED_POINT}
		});
	}

	private final int x;
	private final int y;
	private final Point expected;

	public LocatorParameterizedTest(int x, int y, Point expected) {
		this.x = x;
		this.y = y;
		this.expected = expected;
	}

	@Before
	public void setUp() {
		when(locatorServiceMock.geoLocate(any(Point.class)))
			.thenReturn(MOCKED_POINT);

		locatorUnderTest = new Locator(locatorServiceMock);
	}

	@Test
	public void testLocateResults() {
		assertTrue(arePointsEqual(expected, 
			locatorUnderTest.locate(x, y)));
	}

	private boolean arePointsEqual(Point p1, Point p2) {
		return p1.getX() == p2.getX()
			&& p1.getY() == p2.getY();
	}
}

Simple test with 2 cases

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.mockito.Matchers.any;
import static org.mockito.Mockito.when;

@RunWith(MockitoJUnitRunner.class)
public class LocatorTest {

	private static final Point TEST_POINT = new Point(11, 11);

	@Mock
	private LocatorService locatorServiceMock;

	private Locator locatorUnderTest;

	@Before
	public void setUp() {
		when(locatorServiceMock.geoLocate(any(Point.class)))
			.thenReturn(TEST_POINT);

		locatorUnderTest = new Locator(locatorServiceMock);
	}

	@Test
	public void testLocateWithServiceResult() {
		assertEquals(TEST_POINT, locatorUnderTest.locate(1, 1));
	}

	@Test
	public void testLocateLocalResult() {
		Point expected = new Point(1, 1);
		assertTrue(arePointsEqual(expected, 
			locatorUnderTest.locate(-1, -1)));
	}

	private boolean arePointsEqual(Point p1, Point p2) {
		return p1.getX() == p2.getX()
			&& p1.getY() == p2.getY();
	}
}

Full example can be found in LocatorParameterizedTest.java class.

Better alternatives

The standard JUnit data provider is not very flexible. The defined data set is used for the whole test class, thus every test method in this class will be run with each of the data set rows. If you have 4 rows and 3 test methods then this will result in 12 tests being run. TestNG provides a much better data provider where a data set is defined and can be applied to an individual test method only. More details can be found on the TestNG data provider page. This data provider is available for JUnit via an external Java library called junit-dataprovider, sketched below. More details on how to use this data provider can be found in Data driven testing with JUnit and Gradle post.
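
A hedged sketch of how a test with junit-dataprovider might look, following the library’s documented annotations (the data set and test body are shortened for illustration):

import com.tngtech.java.junit.dataprovider.DataProvider;
import com.tngtech.java.junit.dataprovider.DataProviderRunner;
import com.tngtech.java.junit.dataprovider.UseDataProvider;

import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertNotNull;

@RunWith(DataProviderRunner.class)
public class LocatorDataProviderTest {

	@DataProvider
	public static Object[][] locateData() {
		return new Object[][] {
			{-1, -1, new Point(1, 1)},
			{-1, 0, new Point(1, 0)}
		};
	}

	// The data set is applied to this method only, not to the whole class
	@Test
	@UseDataProvider("locateData")
	public void testLocate(int x, int y, Point expected) {
		assertNotNull(expected); // Real verifications as in the examples above
	}
}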

Conclusion

Data driven testing is a powerful instrument. With the current post I showed how easy it is to do it with JUnit as well as what alternatives are available.

Read more...

Introduction to Postman with examples

Last Updated on by

Post summary: This post is demonstrating different Postman features with examples.

All examples shown in this post are available at the Postman Examples link and can be imported in Postman. Environments are also used in the attached examples and are available at Admin environment and User environment. In order to run all the examples you need to download and run the Dropwizard stub described in Build a RESTful stub server with Dropwizard post and available in the sample-dropwizard-rest-stub GitHub repo; otherwise you can just see the Postman code and screenshots.

Postman

Postman is a Chrome add-on and Mac application which is used to fire requests to an API. It is very lightweight and fast. Requests can be organised in groups, and tests can be created with verifications of certain conditions on the response. With its features it is a very good and convenient API tool. It is possible to make different kinds of HTTP requests – GET, POST, PUT, PATCH and DELETE. It is possible to add headers to the requests. In the current post I will write about its more interesting features: Variables, Pre-Request Script, Environments and Tests.

Variables

There are two types of variables – global and environment. Global variables are for all requests; environment variables are defined per specific environment, which can be selected from a drop-down, or no environment can be selected. Environments will be discussed in detail later in the current post. Global variables are editable via a small eye-shaped icon in the top right corner. Once defined, variables can be used in a request by surrounding their name with double curly brackets: {{VARIABLE_NAME}}.

Pre-Request Script

Postman allows users to do some JavaScript coding with which to manipulate the data being sent with the request. One such example is when testing an API with security, as explained in How to implement secure REST API authentication over HTTP post – a SHA256 hash (built from apiKey + secretKey + timestamp in seconds) is sent as a request parameter with the request. Calculating the SHA256 hash is done with the following pre-request script and the result is then stored as the global variable token.

var timeInSeconds = parseInt((new Date()).getTime() / 1000);
var sigString = postman.getGlobalVariable("apiKey") + 
	postman.getGlobalVariable("secretKey") + timeInSeconds
var token = CryptoJS.SHA256(sigString);
postman.setGlobalVariable("token", token);

Here the CryptoJS library is used to create the SHA256 hash. All available libraries in Postman are described on the Postman Testing Sandbox page. The global variable {{token}} is then sent as the token request parameter.

Environments

The code shown above works fine with just one set of credentials because they are stored as global variables. If you need to switch between different credentials this is where environments come into play. By switching the environment, and with no change in the request, you can send different parameters to the API. Environments are managed from the Settings icon in the top right corner which opens a menu with a "Manage Environments" link.

Postman supports so called shared environments, which means the whole team can use one and the same credentials managed centrally. It requires signing in and some paid plan though, but it might be a good investment in the long run.

In order to use environments the pre-request script has to be changed to:

var timeInSeconds = parseInt((new Date()).getTime() / 1000);
var sigString = environment.apiKey + environment.secretKey + timeInSeconds
var token = CryptoJS.SHA256(sigString);
postman.setEnvironmentVariable("token", token);

Both apiKey and secretKey are read from the environment. The environment can be changed from the top right corner.

Nota bene: There is a specific behaviour (I would not call it a bug as it makes sense) in Postman. If you select "No Environment" and fire the request above, Postman will automatically create an environment with the name "No Environment". This is because it actually needs an environment to store the variable into. This might be very confusing the first time.

Post-Request Script

There is no such term defined in Postman. The idea is that in many cases you will need to do something with the response and extract a variable from it in order to use it at a later stage. This can be done in the "Tests" tab. The example given below takes all persons with an API call, then processes the response and selects one id at random, which is stored as a global variable and used in the next request. You can put whatever JavaScript code you like in order to fulfil your logic.

var jsonData = JSON.parse(responseBody);
var size = jsonData.length;
var index = parseInt(Math.random() * size);
postman.setGlobalVariable("userId", jsonData[index].id);

Then in a subsequent request you can use a GET call to URL: http://localhost:9000/person/get/{{userId}}

Tests

After a response is received Postman has the functionality to make verifications on it. This is done in the "Tests" tab. Below is an example of different verifications. The most interesting part is that in case of a JSON response it can be parsed to an array and then elements accessed by index and value, e.g. jsonData[0].id, or even iterated as shown below. The format is: tests["TEST_NAME"] = BOOLEAN_CONDITION.

tests["Status code is 200"] = responseCode.code === 200;

tests["Response time is less than 200ms"] = responseTime < 200;

var expected = "email1@email.na";
tests["Body contains string: " + expected] = responseBody.has(expected);

var jsonData = JSON.parse(responseBody);
var expectedCount = 4
tests["Response count is: " + expectedCount] = jsonData.length === expectedCount;

for(var i=1; i<=expectedCount; i++) {
	tests["Verify id is: " + i] = jsonData[i-1].id === i;
}

Nota bene: if you use the responseTime verification you have to know that it measures just the TTFB (time to first byte), it does not measure the time needed to transfer the data. If you have an API with big responses or the network is slow, you may fire the request, wait a lot and then Postman shows a very small response time, which might be confusing.

Run from command line

In order to run Postman tests from the command line as part of some CI process there is a separate tool called Newman. It requires NodeJS to be installed and runs in a NodeJS environment. It is very well described in How to write powerful automated API tests with Postman, Newman and Jenkins.

Code reuse between requests

It is very convenient for some piece of code to be re-used between requests to prevent copy/pasting it. Postman does not yet support code re-use between requests. The good thing is that there is a workaround for this. It is possible to do it by defining a helper function with verifications, which is saved as a global variable in the first request from your test scenario:

postman.setGlobalVariable("loadHelpers", function loadHelpers() {
	let helpers = {};

	helpers.verifyCount = function verifyCount(expectedCount) {
		var jsonData = JSON.parse(responseBody);
		tests["Response count is: " + expectedCount] 
			= jsonData.length === expectedCount;
	}

	// ...additional helpers

	return helpers;
} + '; loadHelpers();');

Then in other requests the helpers are taken from the global variables and the verification functions can be used:

var helpers = eval(globals.loadHelpers);
helpers.verifyCount(4);

See more in Reusing pre-request scripts across requests in a collection issue thread.

Conclusion

Postman is a very nice tool to use when developing your API or manually testing it. I would definitely recommend Postman; I use it on a daily basis for probing the API. For serious API functional test automation I would say Postman is not ready yet and you’d better go for another approach. The good thing is that there is a big community around it which is growing, and new features are being added.

Read more...

JSON format to register service with Eureka

Last Updated on by

Post summary: What JSON data is needed to register service node with Eureka server.

Eureka is a REST based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers.

Jersey 1 vs Jersey 2

As of now the Eureka client works only with Jersey 1. There is a PR to create a Jersey 2 client, but at the time this post is created it is still not developed. Jersey 1 and Jersey 2 are mutually exclusive. If your product works with Jersey 2 then in order to register with Eureka you have to write your own client.

Registering with Eureka

The Eureka documentation gives an example of how to register a server or service node with the Eureka server. The given example is an XML one and there is no JSON example. The XML example cannot be converted to JSON in a straightforward manner because JSON does not support attributes as XML does.

Solution

Use following JSON in order to register with Eureka server with custom REST client:

{
	"instance": {
		"hostName": "WKS-SOF-L011",
		"app": "com.automationrhapsody.eureka.app",
		"vipAddress": "com.automationrhapsody.eureka.app",
		"secureVipAddress": "com.automationrhapsody.eureka.app"
		"ipAddr": "10.0.0.10",
		"status": "STARTING",
		"port": {"$": "8080", "@enabled": "true"},
		"securePort": {"$": "8443", "@enabled": "true"},
		"healthCheckUrl": "http://WKS-SOF-L011:8080/healthcheck",
		"statusPageUrl": "http://WKS-SOF-L011:8080/status",
		"homePageUrl": "http://WKS-SOF-L011:8080",
		"dataCenterInfo": {
			"@class": "com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo", 
			"name": "MyOwn"
		}
	}
}
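
A hedged sketch of how a custom Jersey 2 (JAX-RS 2) client might send this registration call, assuming the Eureka server is on localhost:8080 and registrationJson holds exactly the payload above:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class EurekaRegistrationClient {

	public void register(String registrationJson) {
		Client client = ClientBuilder.newClient();
		// POST to /eureka/v2/apps/appID registers the instance
		Response response = client.target("http://localhost:8080")
				.path("eureka/v2/apps/com.automationrhapsody.eureka.app")
				.request()
				.post(Entity.entity(registrationJson,
						MediaType.APPLICATION_JSON_TYPE));
		System.out.println("Eureka registration status: "
				+ response.getStatus());
	}
}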

Implementation details

An important part which is not clear enough in the standard documentation is: "securePort": {"$": "8443", "@enabled": "true"}. Note that "securePort": "8443" would also work, but this will just set the port number without enabling it. By default the secure port is disabled unless the register call enables it.

I would recommend using one and the same value for "vipAddress" and "secureVipAddress". When searching for an HTTP endpoint the Eureka server uses "vipAddress", and if you switch your client to search for an HTTPS port the Eureka server will then search for "secureVipAddress". If both are different this may lead to confusion as to why no endpoint is returned although there is one with HTTPS enabled.

An implementation is needed for the DataCenterInfo interface. One such implementation is:

private static final class DefaultDataCenterInfo implements DataCenterInfo {
	private final Name name;

	private DefaultDataCenterInfo(Name name) {
		this.name = name;
	}

	@Override
	public Name getName() {
		return name;
	}

	public static DataCenterInfo myOwn() {
		return new DefaultDataCenterInfo(Name.MyOwn);
	}
}

As seen in the JSON example the application status is STARTING. It is good practice to keep the STARTING state until the application is fully started. This prevents the current, not yet ready, node from being returned for usage by the Eureka server. Once the application is fully started then with a subsequent call you can change the status to UP. This is done with a REST PUT call to /eureka/v2/apps/appID/instanceID/status?value=UP. InstanceID is basically the hostname. AppID is the one registered with "app" in the JSON. It is also a good idea to have "app" the same as "vipAddress" to minimise confusion.
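
With the same JAX-RS 2 client a sketch of this status update call, using the host and the names from the JSON above, might be:

Response response = ClientBuilder.newClient()
		.target("http://localhost:8080")
		.path("eureka/v2/apps/com.automationrhapsody.eureka.app")
		.path("WKS-SOF-L011/status")
		.queryParam("value", "UP")
		.request()
		.put(Entity.text(""));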

Conclusion

Eureka is a really nice tool. It has a default client which can be used out of the box. The problem is that this client uses Jersey 1. If you need a Jersey 2 client, by the time of this post you have to make it on your own. This post gives a basic direction on how to do this. Since the official documentation is lacking details on how to register a node with JSON, this post gives more clarity. The most important part is: "securePort": {"$": "8443", "@enabled": "true"}.

Read more...

Unmarshal/Convert JSON data to JAXBElement object

Last Updated on by

Post summary: How to marshal and unmarshal JAXBElement to JSON with Jackson.

This post gives a solution for the following use case.

Use case

XML document -> POJO containing JAXBElement -> JSON -> POJO containing JAXBElement.

For some reason there is a POJO which contains a JAXBElement. This usually happens when mixing SOAP and REST services with XML and JSON. This POJO is easily converted to JSON data. Then from this JSON data a POJO containing JAXBElement has to be unmarshalled.

Problem

By default Jackson’s ObjectMapper is unable to unmarshal JSON data into a JAXBElement object. An exception is thrown:

No suitable constructor found for type [simple type, class javax.xml.bind.JAXBElement]: can not instantiate from JSON object (missing default constructor or creator, or perhaps need to add/enable type information?)

Solution

Although in some places it is recommended to use com.fasterxml.jackson.module.jaxb.JaxbAnnotationModule, it might not work. The solution is to create a custom MixIn and register it with ObjectMapper. The MixIn class is:

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;

import javax.xml.bind.JAXBElement;
import javax.xml.namespace.QName;

@JsonIgnoreProperties(value = {"globalScope", "typeSubstituted", "nil"})
public abstract class JAXBElementMixIn<T> {

	@JsonCreator
	public JAXBElementMixIn(@JsonProperty("name") QName name,
			@JsonProperty("declaredType") Class<T> declaredType,
			@JsonProperty("scope") Class scope,
			@JsonProperty("value") T value) {
	}
}

ObjectMapper is instantiated with the following code:

import com.fasterxml.jackson.databind.ObjectMapper;

ObjectMapper objectMapper = new ObjectMapper();
objectMapper.addMixIn(JAXBElement.class, JAXBElementMixIn.class);
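
A minimal round trip with the so configured ObjectMapper, assuming a JAXBElement wrapping a String, could then be:

import javax.xml.bind.JAXBElement;
import javax.xml.namespace.QName;

import com.fasterxml.jackson.core.type.TypeReference;

JAXBElement<String> element = new JAXBElement<>(
		new QName("name"), String.class, "value");
// Marshalling to JSON works out of the box
String json = objectMapper.writeValueAsString(element);
// Unmarshalling back works only because of the registered MixIn
JAXBElement<String> restored = objectMapper.readValue(json,
		new TypeReference<JAXBElement<String>>() {});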

Conclusion

Jackson’s ObjectMapper does not support JSON to JAXBElement conversion by default. This is solved by creating a custom MixIn, as described in the current post, and registering it with ObjectMapper.

Read more...

MD5, SHA-1, SHA-256 and SHA-512 speed performance

Last Updated on by

Post summary: Speed performance comparison of MD5, SHA-1, SHA-256 and SHA-512 cryptographic hash functions in Java.

For Implement secure API authentication over HTTP with Dropwizard post a one-way hash function was needed. Several factors are important when choosing a hash algorithm: security, speed and purpose of use.

Security

MD5 and SHA-1 are compromised. Those shall not be used unless their speed is several times faster than SHA-256 or SHA-512 and security is not that important. The ones that remain are SHA-256 and SHA-512. They are from the SHA-2 family and are much more secure. SHA-256 is computed with 32 bit words, SHA-512 with 64 bit words.

Hash implementations

For generating cryptographic hashes in Java there is the Apache Commons Codec library which is very convenient.

Speed performance

In order to test the speed the following sample code is used:

import java.util.UUID;

import org.apache.commons.codec.digest.DigestUtils;
import org.apache.commons.lang.time.StopWatch;

public class Test {

	private static final int TIMES = 1_000_000;
	private static final String UUID_STRING = UUID.randomUUID().toString();

	public static void main(String[] args) {
		System.out.println(generateStringToHash());
		System.out.println("MD5: " + md5());
		System.out.println("SHA-1: " + sha1());
		System.out.println("SHA-256: " + sha256());
		System.out.println("SHA-512: " + sha512());
	}

	public static long md5() {
		StopWatch watch = new StopWatch();
		watch.start();
		for (int i = 0; i < TIMES; i++) {
			DigestUtils.md5Hex(generateStringToHash());
		}
		watch.stop();
		System.out.println(DigestUtils.md5Hex(generateStringToHash()));
		return watch.getTime();
	}

	public static long sha1() {
		...
		System.out.println(DigestUtils.sha1Hex(generateStringToHash()));
		return watch.getTime();
	}

	public static long sha256() {
		...
		System.out.println(DigestUtils.sha256Hex(generateStringToHash()));
		return watch.getTime();
	}

	public static long sha512() {
		...
		System.out.println(DigestUtils.sha512Hex(generateStringToHash()));
		return watch.getTime();
	}

	public static String generateStringToHash() {
		return UUID.randomUUID().toString() + System.currentTimeMillis();
	}
}

Several measurements were done, in two groups – one with a smaller length string to hash and one with a longer one. Each group had the following variations of the generateStringToHash() method:

  • cached UUID – no extra time should be consumed
  • cached UUID + current system time – in this case time is consumed to get system time
  • new UUID + current system time – in this case time is consumed for generating the UUID and to get system time

Raw results

Five measurements were made for each case and the average value calculated. Time is in milliseconds per 1 000 000 calculations. The system is a 64-bit Windows 10 with a 1 core Intel i7 2.60GHz and 16GB RAM.

  • generateStringToHash() with: return UUID_STRING;

Data to encode is ~36 characters in length (f5cdcda7-d873-455f-9902-dc9c7894bee0). UUID is cached and time stamp is not taken. No additional time is wasted.

Hash #1 (ms) #2 (ms) #3 (ms) #4 (ms) #5 (ms) Average per 1M (ms)
MD5 649 623 621 624 620 627.4
SHA-1 608 588 630 600 594 604
SHA-256 746 724 741 720 758 737.8
SHA-512 1073 1055 1050 1052 1052 1056.4
  • generateStringToHash() with: return UUID_STRING + System.currentTimeMillis();

Data to encode is ~49 characters in length (aa096640-21d6-4f44-9c49-4115d3fa69381468217419114). UUID is cached.

Hash #1 (ms) #2 (ms) #3 (ms) #4 (ms) #5 (ms) Average per 1M (ms)
MD5 751 789 745 806 737 765.6
SHA-1 768 765 694 763 751 748.2
SHA-256 842 876 848 839 850 851
SHA-512 1161 1152 1164 1154 1163 1158.8
  • generateStringToHash() with: return UUID.randomUUID().toString() + System.currentTimeMillis();

Data to encode is ~49 characters in length (1af4a3e1-1d92-40e7-8a74-7bb7394211e01468216765464). New UUID is generated on each calculation so time for its generation is included in total time.

Hash #1 (ms) #2 (ms) #3 (ms) #4 (ms) #5 (ms) Average per 1M (ms)
MD5 1505 1471 1518 1463 1487 1488.8
SHA-1 1333 1309 1323 1326 1334 1325
SHA-256 1505 1496 1507 1498 1516 1504.4
SHA-512 1834 1827 1833 1836 1857 1837.4
  • generateStringToHash() with: return UUID_STRING + UUID_STRING;

Data to encode is ~72 characters in length (57149cb6-991c-4ffd-9c98-d823ee8a61f757149cb6-991c-4ffd-9c98-d823ee8a61f7). UUID is cached and time stamp is not taken. No additional time is wasted.

Hash #1 (ms) #2 (ms) #3 (ms) #4 (ms) #5 (ms) Average per 1M (ms)
MD5 856 824 876 811 828 839
SHA-1 921 896 970 904 893 916.8
SHA-256 1145 1137 1241 1141 1177 1168.2
SHA-512 1133 1131 1116 1102 1110 1118.4
  • generateStringToHash() with: return UUID_STRING + UUID_STRING + System.currentTimeMillis();

Data to encode is ~85 characters in length (759529c5-1f57-4167-b289-899c163c775e759529c5-1f57-4167-b289-899c163c775e1468218673060). UUID is cached.

Hash #1 (ms) #2 (ms) #3 (ms) #4 (ms) #5 (ms) Average per 1M (ms)
MD5 1029 1035 1034 1012 1037 1029.4
SHA-1 1008 1016 1027 1007 990 1009.6
SHA-256 1254 1249 1290 1259 1248 1260
SHA-512 1228 1221 1232 1230 1226 1227.4
  • generateStringToHash() with: final String randomUuid = UUID.randomUUID().toString();
    return randomUuid + randomUuid + System.currentTimeMillis();

Data to encode is ~85 characters in length (2734b31f-16db-4eba-afd5-121d0670ffa72734b31f-16db-4eba-afd5-121d0670ffa71468217683040). New UUID is generated on each calculation so time for its generation is included in total time.

Hash #1 (ms) #2 (ms) #3 (ms) #4 (ms) #5 (ms) Average per 1M (ms)
MD5 1753 1757 1739 1751 1691 1738.2
SHA-1 1634 1634 1627 1634 1633 1632.4
SHA-256 1962 1956 1988 1988 1924 1963.6
SHA-512 1909 1946 1936 1929 1895 1923

Aggregated results

Results from all iterations are aggregated and compared in the table below. There are 6 main cases. They are listed below and referenced in the table:

  • Case 1 – 36 characters length string, UUID is cached
  • Case 2 – 49 characters length string, UUID is cached and system time stamp is calculated each iteration
  • Case 3 – 49 characters length string, new UUID is generated on each iteration and system time stamp is calculated each iteration
  • Case 4 – 72 characters length string, UUID is cached
  • Case 5 – 85 characters length string, UUID is cached and system time stamp is calculated each iteration
  • Case 6 – 85 characters length string, new UUID is generated on each iteration and system time stamp is calculated each iteration

All times below are per 1 000 000 calculations:

Hash Case 1 (ms) Case 2 (ms) Case 3 (ms) Case 4 (ms) Case 5 (ms) Case 6 (ms)
MD5 627.4 765.6 1488.8 839 1029.4 1738.2
SHA-1 604 748.2 1325 916.8 1009.6 1632.4
SHA-256 737.8 851 1504.4 1168.2 1260 1963.6
SHA-512 1056.4 1158.8 1837.4 1118.4 1227.4 1923

Compare results

Some conclusions from the results, based on the two cases with a short string (36 and 49 chars) and a longer string (72 and 85 chars):

  • SHA-256 is 31% faster than SHA-512, but only when hashing small strings. When strings are longer SHA-512 is 2.9% faster.
  • Time to get a system time stamp is ~121.6 ms per 1M iterations.
  • Time to generate a UUID is ~670.4 ms per 1M iterations.
  • SHA-1 is the fastest hashing function with ~587.9 ms per 1M operations for short strings and 881.7 ms per 1M for longer strings.
  • MD5 is 7.6% slower than SHA-1 for short strings and 1.3% for longer strings.
  • SHA-256 is 15.5% slower than SHA-1 for short strings and 23.4% for longer strings.
  • SHA-512 is 51.7% slower than SHA-1 for short strings and 20% for longer.

Hash sizes

Important data to consider is the hash size that is produced by each function:

  • MD5 produces 32 chars hash – 5f3a47d4c0f703c5d83265c3669f95e6
  • SHA-1 produces 40 chars hash – 2c5a70165585bd4409aedeea289628fa6074e17e
  • SHA-256 produces 64 chars hash – b6ba4d0a53ddc447b25cb32b154c47f33770d479869be794ccc94dffa1698cd0
  • SHA-512 produces 128 chars hash – 54cdb8ee95fa7264b7eca84766ecccde7fd9e3e00c8b8bf518e9fcff52ad061ad28cae49ec3a09144ee8f342666462743718b5a73215bee373ed6f3120d30351

Purpose of use

In the specific case this research was made for, the hashed string will be passed as an API request parameter. It is constructed from API key + secret key + current time in seconds. So if the API key is something like 15-20 chars, the secret key is 10-15 chars and the time is 10 chars, the total length of the string to hash is 35-45 chars. Since the hash is being passed as a request param it is better for it to be as short as possible.

Select hash function

Based on all the data so far SHA-256 is selected. It is from the secure SHA-2 family. It is much faster than SHA-512 with shorter strings and it produces a 64 chars hash.

Conclusion

The current post gives a comparison of the MD5, SHA-1, SHA-256 and SHA-512 cryptographic hash functions. The important thing is that the comparison is very dependent on the specific implementation (Apache Commons Codec) and the specific purpose of use (generate a secure token to be sent with an API call). It is good for MD5 and SHA-1 to be avoided as they are compromised and not secure. Still, if their speed for a given context is several times faster than the secure SHA-2 ones and security is not that important, they can be chosen. When choosing a cryptographic hash function everything is up to the context of usage, and benchmark tests for this context are needed.

Read more...

Implement secure API authentication over HTTP with Dropwizard

Last Updated on by

Post summary: Reference implementation of the authentication mechanism suggested in How to implement secure REST API authentication over HTTP post.

API authentication mechanism

The suggested authentication mechanism consists of the following steps:

  • A secret key that is known only by the API consumer and the API provider is needed along with the API key.
  • The secret key is used to one way hash a token which is sent to the server along with the API key in the API call.
  • The token consists of: API key + secret key + current time in seconds, which then gets hashed, preferably with the SHA-256 algorithm.
  • The server recreates all the tokens locally for every second for some time in the future, preferably not too long – 30~120 seconds.
  • The server recreates all the tokens for 30~120 seconds in the past, to take into account the time needed for the request to reach the server.
  • The server compares each of the tokens with the received one.
  • If there is a match the consumer is authenticated and a response is returned.

Dropwizard implementation

The Dropwizard stub introduced in Build a RESTful stub server with Dropwizard post will be used to create the authentication. The full example can be found in the GitHub sample-dropwizard-rest-stub repository. The implementation consists of the following steps:

  • Implement the javax.ws.rs.container.ContainerRequestFilter interface. The implementation will inspect every request and verify the authentication.
  • Create a custom annotation.
  • Annotate the request filter and the Dropwizard resource (API service) on which authentication should be applied.
  • Register the request filter implementation class into the Dropwizard Jersey environment.

Create custom annotation

Starting with the easiest step, creating a custom annotation is pretty easy. It could be applied to a class (ElementType.TYPE) or to a method (ElementType.METHOD). It should live as long as the program runs (RetentionPolicy.RUNTIME). In order to make it possible for the annotated request filter to be applied on a specific resource only, the @NameBinding annotation is a must in Jersey. If not specified the request filter will apply to all resources. The needed annotation is:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.ws.rs.NameBinding;

@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(value = RetentionPolicy.RUNTIME)
@NameBinding
public @interface Authenticator {
}

ContainerRequestFilter implementation

The container request filter is applied to incoming requests. If used with the @NameBinding annotation it is applied only where needed, if not it is applied globally. It is mandatory to override the filter() method:

import com.automationrhapsody.reststub.persistence.AuthDB;

import java.io.IOException;
import java.util.List;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import org.apache.commons.codec.digest.DigestUtils;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;

@Authenticator
public class AuthenticateFilter implements ContainerRequestFilter {

	private static final String PARAM_API_KEY = "apiKey";
	private static final String PARAM_TOKEN = "token";
	private static final long MILLISECONDS_IN_SECOND = 1000L;
	private static final int TTL_SECONDS = 60;

	@Override
	public void filter(ContainerRequestContext context) throws IOException {
		final String apiKey = extractParam(context, PARAM_API_KEY);
		if (StringUtils.isEmpty(apiKey)) {
			context.abortWith(responseMissingParameter(PARAM_API_KEY));
		}

		final String token = extractParam(context, PARAM_TOKEN);
		if (StringUtils.isEmpty(token)) {
			context.abortWith(responseMissingParameter(PARAM_TOKEN));
		}

		if (!authenticate(apiKey, token)) {
			context.abortWith(responseUnauthorized());
		}
	}
}

As seen above two GET parameters are mandatory in the request: "apiKey" and "token". Those are first extracted and verified. If one of them is missing a BAD_REQUEST (HTTP status code 400) response is returned with an error message. The methods that extract the params and build the error response are:

private String extractParam(ContainerRequestContext context, String param) {
	final UriInfo uriInfo = context.getUriInfo();
	final List<String> user = uriInfo.getQueryParameters().get(param);
	return CollectionUtils.isEmpty(user) ? null : String.valueOf(user.get(0));
}

private Response responseMissingParameter(String name) {
	return Response.status(Response.Status.BAD_REQUEST)
		.type(MediaType.TEXT_PLAIN_TYPE)
		.entity("Parameter '" + name + "' is required.")
		.build();
}

If both are present then the code tries to authenticate the call by rebuilding all the hashes for 60 seconds in the past, because the request cannot arrive instantly, it takes some time. If the network is slower this time can be increased. It also rebuilds all hashes for 60 seconds in the future, this is the token’s time to live. The server has access to the secret key for any given API key. In the example above they are stored in a fake DB provider and obtained by AuthDB.getSecretKey(apiKey):

private boolean authenticate(String apiKey, String token) {
	final String secretKey = AuthDB.getSecretKey(apiKey);

	// No need to calculate digest in case of wrong apiKey
	if (StringUtils.isEmpty(secretKey)) {
		return false;
	}

	final long nowSec = System.currentTimeMillis() / MILLISECONDS_IN_SECOND;
	long startTime = nowSec - TTL_SECONDS;
	long endTime = nowSec + TTL_SECONDS;
	for (; startTime < endTime; startTime++) {
		final String toHash = apiKey + secretKey + startTime;
		final String hash = DigestUtils.sha256Hex(toHash);
		if (hash.equals(token)) {
			return true;
		}
	}

	return false;
}

As seen above the server uses the SHA-256 cryptographic algorithm. It is the best solution in terms of speed and security. In MD5, SHA-1, SHA-256 and SHA-512 speed performance post a comparison between MD5, SHA-1, SHA-256 and SHA-512 is made. If the authentication cannot be verified then an UNAUTHORIZED (HTTP status code 401) response is returned:

private Response responseUnauthorized() {
	return Response.status(Response.Status.UNAUTHORIZED)
		.type(MediaType.TEXT_PLAIN_TYPE)
		.entity("Unauthorized")
		.build();
}

This was the hardest part. Now this filter has to be registered with Jersey and applied to the needed resources (services). See more on the ContainerRequestFilter interface and the @NameBinding annotation on the Jersey filters and interceptors page.

Apply authentication filter on a resource

Indicating that a given resource should be checked for authentication is done with the custom @Authenticator annotation created previously. If needed just for a specific API call it can also be applied at the method level:

import com.automationrhapsody.reststub.data.Book;
import com.automationrhapsody.reststub.filters.Authenticator;
import com.automationrhapsody.reststub.persistence.BookDB;
import com.codahale.metrics.annotation.Timed;

import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Authenticator
@Path("/secure/books")
public class BooksSecureService {

	@GET
	@Timed
	@Produces(MediaType.APPLICATION_JSON)
	public List<Book> getBooks() {
		return BookDB.getAll();
	}
}

Register in Dropwizard Jersey

The last step is to register the request filter and the resource with Dropwizard’s Jersey:

@Override
public void run(RestStubConfig config, Environment env) {

	env.jersey().register(BooksSecureService.class);
	env.jersey().register(AuthenticateFilter.class);

}
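
To consume the secured resource, the client has to construct the token in exactly the same way as the server. Below is a hedged client-side sketch, assuming the stub runs on localhost:9000 and using placeholder key values:

import java.net.URL;
import java.util.Scanner;

import org.apache.commons.codec.digest.DigestUtils;

public class SecureBooksClient {

	public static void main(String[] args) throws Exception {
		String apiKey = "apiKeyValue";       // placeholder, must match AuthDB
		String secretKey = "secretKeyValue"; // placeholder, must match AuthDB
		long nowSeconds = System.currentTimeMillis() / 1000;
		String token = DigestUtils.sha256Hex(apiKey + secretKey + nowSeconds);

		URL url = new URL("http://localhost:9000/secure/books"
				+ "?apiKey=" + apiKey + "&token=" + token);
		// Print the JSON list of books returned by the secured endpoint
		try (Scanner scanner = new Scanner(url.openStream())) {
			while (scanner.hasNextLine()) {
				System.out.println(scanner.nextLine());
			}
		}
	}
}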

Conclusion

This is an easy to implement and relatively secure way to provide API authentication over the HTTP protocol with Dropwizard. For mission critical applications a stricter consideration and review of this authentication mechanism is definitely needed.

Read more...

How to implement secure REST API authentication over HTTP

Last Updated on by

Post summary: How to implement secure API authentication even over HTTP.

Important: this post is not a complete and expert guide on API security. It is mainly done to test the Postman pre-request hook that is described in Introduction to Postman with examples post. It does not go into all the details about API security, SSL certificates, encrypting the data, etc. It gives basic information on how you can protect your API’s consumers against their network traffic being sniffed and credentials, apiKeys, session keys, etc. being stolen.

Authentication vs. Authorisation

Authentication is defined as “who you are”. It deals with usernames and passwords. Authorisation is defined as “what you can do”. It deals with permissions. Before dealing with permissions the application must know the user, so authorisation comes after authentication.

Basic Authentication

As it is stated, it is very basic. The idea is to send a Base64 encoded username and password in the header of the request in the following format:

Authorization: Basic dXNlcjpwYXNz

The server decodes the username and password and uses them to authenticate and authorise the user. The problem with Basic authentication is that it must be used only over HTTPS, where the network traffic is encrypted. Over HTTP the request can be easily sniffed. Base64 is reversible and there are numerous tools on the web where you can put dXNlcjpwYXNz and they will return user:pass as plain text.

Nota bene: Never use this one without HTTPS.
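
The encoding itself is a one-liner in Java, and decoding it back is just as trivial, which illustrates why plain HTTP is not an option here:

import java.util.Base64;

String header = "Basic " + Base64.getEncoder()
		.encodeToString("user:pass".getBytes());
System.out.println(header);  // Prints: Basic dXNlcjpwYXNz

// And just as easily decoded back to plain text
String decoded = new String(Base64.getDecoder().decode("dXNlcjpwYXNz"));
System.out.println(decoded); // Prints: user:pass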

OAuth and OAuth 2.0

OAuth is an authorisation protocol. It is intended mainly for the web, but can be used in API authorisation. The idea is that authentication and authorisation are done by a third party like Microsoft, Google, Facebook, Twitter, etc. This is easy for the API as it does not have to deal with user data. The customer logs in to the third party and an access token is issued. The token has some validity which is not too long, but not too short, usually 1 or 2 days. The API can obtain user details from the third party by this token. The user authenticates itself to the API with this access token by sending it in the request header:

Authorization: Bearer 66408bd9-2bc0-40c3-9823-e9bec390532a

The problem with OAuth is that it also must be used over HTTPS. Over HTTP the traffic can be sniffed and the token can be stolen. Although the token has some expiry time, it is long enough for a hacker to use the API on your behalf.

Nota bene: Never use this one without HTTPS.

API keys

API keys have become the standard when consuming an API. An API key is some random hash which uniquely identifies the consumer. API keys have numerous benefits over the username/password mechanism. Again, in case of HTTP the network traffic can be sniffed and the API key stolen.

HTTPS

Reading the post so far, it turns out there is not a single API authentication protocol that is secure if not used over HTTPS. In the current post a solution is proposed. It is commonly used in public APIs, and it may even exist as a standard whose name I’m just not aware of. It provides secure API authentication even over HTTP.

Implement API security over HTTP

In short, in order to have security over HTTP the following steps should be done (a client-side sketch is shown after the list):

  • A secret key that is known only by the API consumer and the API provider is needed along with the API key.
  • The secret key is used to one way hash a token which is sent to the server along with the API key in the API call.
  • The token consists of: API key + secret key + current time in seconds, which then gets hashed, preferably with the SHA-256 algorithm.
  • The server recreates all the tokens locally for every second for some time in the future, preferably not too long – 30~120 seconds.
  • The server recreates all the tokens for 30~120 seconds in the past, to take into account the time needed for the request to reach the server.
  • The server compares each of the tokens with the received one.
  • If there is a match the consumer is authenticated and a response is returned.
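
A minimal client-side sketch of the token creation, using the Apache Commons Codec library for the SHA-256 hash (the key values are placeholders):

import org.apache.commons.codec.digest.DigestUtils;

String apiKey = "apiKeyValue";       // known by consumer and provider
String secretKey = "secretKeyValue"; // known only by consumer and provider
long nowSeconds = System.currentTimeMillis() / 1000;

// The token changes every second; the secret key itself never travels
String token = DigestUtils.sha256Hex(apiKey + secretKey + nowSeconds);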

Cryptographic hash algorithms

The most used hash algorithms nowadays are: MD5, SHA-1, SHA-256 and SHA-512. MD5 and SHA-1 are too weak and are not recommended. SHA-512 takes more time to compute the hashes. SHA-256 is the most appropriate solution in terms of security and speed. In MD5, SHA-1, SHA-256 and SHA-512 speed performance post all 4 algorithms have been tested and compared with Apache’s Commons Codec implementation.

Hash with Salt

The time stamp in the hashed token is used as so called salt, random data that is used to differentiate the hashed data against dictionary attacks. If just API key + secret key are hashed, then the hash will always be one and the same. An intruder can take the hash and just use it. Using a time stamp makes the hash always different. Another function of the time stamp is to set an expiration time on the token, so even if stolen it cannot be used for a long period of time.

Conclusion

There are already established standards to secure an API, but all of them are effective only over HTTPS. The current post gives a proposal for secure API authentication which is very simple and relatively safe even over HTTP. The con of this method is that the server has to recalculate the hash many times, which under massive load would require some caching. In Implement secure API authentication over HTTP with Dropwizard post there is a reference implementation of the proposed solution with Dropwizard.

Read more...

Assert Mockito mock method arguments if no equals() method implemented

Last Updated on by

Post summary: How to assert that a mock method is called with a specific object as argument in case no equals() method is implemented on the argument object.

Mock JUnit tests with Mockito example post introduces Mockito as a Java mocking framework. Code shown in the examples below is available in GitHub java-samples/junit repository. Mockito makes it possible to verify whether a mock method has been called and with what specific object:

verify(locatorServiceMock, times(1)).geoLocate(new Point(1, 1));

The code above verifies that the mock’s geoLocate() method was called with an argument object with coordinates (1, 1).

Missing equals() method

Internally Mockito uses the Point class’s equals() method to compare the object that has been passed to the method as an argument with the object configured as expected in the verify() method. If equals() is not overridden then java.lang.Object’s equals() is used, which compares only the references, i.e. whether both variables point to one and the same object in the heap. In the current example the Point class has no equals() method implemented. When providing the expected value a new object is created, the references are not one and the same, so Mockito will fail the verification.

Override equals()

In order to make the verification work, the simplest solution is to implement an equals() method in the Point class. Personally I’m not a big fan of changing production code for the sake of testing. Maybe there is a valid reason for a developer to have designed the current class in such a manner. A more realistic scenario is that the Point class comes from some external library which there is no control over, so overriding the equals() method is not possible at all.

Use Mockito argThat matcher

Mockito provides a method called argThat() in the org.mockito.Matchers class. It accepts an object from a class that implements the org.hamcrest.Matcher<T> interface. The actual equals implementation is done in its matches() method:

private class PointMatcher extends ArgumentMatcher<Point> {
	private final Point expected;

	public PointMatcher(Point expected) {
		this.expected = expected;
	}

	@Override
	public boolean matches(Object obj) {
		if (!(obj instanceof Point)) {
			return false;
		}
		Point actual = (Point) obj;

		return actual.getX() == expected.getX()
			&& actual.getY() == expected.getY();
	}
}

Once implemented this class can be used in tests:

verify(locatorServiceMock, times(1))
	.geoLocate(argThat(new PointMatcher(new Point(1, 1))));

Conclusion

The examples above show how to implement or change the equals() method behaviour for a specific class in unit tests, so that Mockito can verify that an object from this class is provided as an argument for a mock’s method call.

Read more...