Code coverage of manual or automated tests with OpenCover for .NET applications


Post summary: Examples of how to do code coverage of manual or automated functional tests with the OpenCover tool for .NET applications

Code coverage

This post shows how to do code coverage of .NET applications with OpenCover. The theory on what code coverage is and why it is needed can be found in the What about code coverage post.


OpenCover is an open-source code coverage tool for .NET 2.0 and above applications, for Windows only. With OpenCover, instrumentation of the code is not needed. The application is started through OpenCover, which collects the coverage results. What is mandatory, though, is a PDB file along with the executables and assemblies, so the application under test should be built in Debug mode. If a PDB file is not found, no coverage data will be gathered.

The latest version can be downloaded from “Releases” in GitHub. There are an installer and a zip archive. If the installer is used, by default OpenCover is installed in C:\Users\{USER_ACCOUNT}\AppData\Local\Apps\OpenCover. If you want to change this, click the “Advanced” button during installation and then select “Install for all users on this machine”.


OpenCover produces results in a raw format, which is not meant for humans. ReportGenerator is used to convert XML reports generated by OpenCover, PartCover, Visual Studio or NCover into human-readable reports in various formats. A usage guide can be found on its home page; the most useful options are:

  • -reports:<report>[;<report>][;<report>] – coverage reports that should be parsed, semicolon separated, wildcards are allowed
  • -targetdir:<directory> – directory where the generated report should be saved
  • -sourcedirs:<directory>[;<directory>][;<directory>] – directories which contain the corresponding source code, optional, semicolon separated
  • -classfilters:<(+|-)filter>[;<(+|-)filter>][;<(+|-)filter>] – list of classes that should be included or excluded in the report, optional, wildcards are allowed.

How to use OpenCover

A usage guide can be found in the OpenCover usage reference online. Also, along with the OpenCover installation there is a Usage.rtf file which holds all the information about the tool. The most useful options are listed below:

  • -target:<target application> – path to the application executable file or name of the service
  • -filter:<filter list> – list of filters to apply to selectively include or exclude assemblies and classes from the coverage results
  • -output:<output file> – path to the output XML file; if empty, results.xml will be created in the current directory
  • -register[:user] – register and de-register the code coverage profiler
  • -targetargs:<target arguments> – arguments to be passed to the target process
  • -targetdir:<target directory> – path to the target directory or alternative path to PDB files

Hands on examples on manual code coverage

In order to try it, you need to check out the code samples from the GitHub SampleApp or SampleAppPlus repository to C:\. Telerik Testing Framework needs to be installed, as it copies lots of assemblies into the GAC. OpenCover and ReportGenerator should also be installed to C:\.

With the current setup, the command to start SampleAppPlus.exe through OpenCover is:

C:\OpenCover\OpenCover.Console.exe -target:"C:\SampleAppPlus\SampleAppPlus\bin\Debug\SampleAppPlus.exe" -output:C:\SampleAppPlus\CoverageReports\SampleAppPlus.results.xml -register:user

Now the application is started and manual functional tests can be executed. Once the application is stopped, the coverage results are saved in the SampleAppPlus.results.xml file.

Hands on examples on automated code coverage

Since automation is the future of QA, and we have already created automated tests for both SampleApp and SampleAppPlus, we want to measure how our tests perform on code coverage. Automated tests run the application, attach to it and manipulate it. So it seems natural to just use the command from the manual example and start the application with it. It will not work though, since that command starts and returns the OpenCover process, not the underlying SampleAppPlus one. Extra code is needed in the SampleAppPlus.Tests.Framework\Tests\BaseTest.cs file. Instead of:

Application appWhite = Application.Launch(applicationPath);

following code has to be added:

Process sampleAppPlus = StartProcess();
Application appWhite = Application.Attach(sampleAppPlus);

where StartProcess() method is:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;

private Process StartProcess()
{
	string appName = "SampleAppPlus";
	string openCover = @"C:\OpenCover\OpenCover.Console.exe";
	long timeStamp = (long)(DateTime.UtcNow
						- new DateTime(1970, 1, 1)).TotalMilliseconds;

	// Build the OpenCover arguments; paths match the manual example above
	List<string> arguments = new List<string>();
	arguments.Add(@"-target:C:\SampleAppPlus\SampleAppPlus\bin\Debug\"
					+ appName + ".exe");
	arguments.Add(@"-output:C:\SampleAppPlus\CoverageReports\" + appName + "."
					+ timeStamp + ".xml");
	arguments.Add("-register:user");

	Process process = new Process();
	process.StartInfo.FileName = openCover;
	process.StartInfo.Arguments = string.Join(" ", arguments);
	process.Start();

	// Wait 5 seconds for OpenCover to start the application under test
	Thread.Sleep(5000);

	// Return the application process, not the OpenCover one
	return Process.GetProcesses().First(proc => appName == proc.ProcessName);
}

The OpenCover arguments “-target”, “-output” and “-register” are used. Note that the -output file name is always different, as the current Unix time is added to it; this prevents overwriting of files. The idea is to run OpenCover, which will start SampleAppPlus. Wait 5 seconds to ensure the application is up, then get all Windows processes with Process.GetProcesses(), iterate them, and find and return the needed SampleAppPlus process, which TestStack.White and Telerik Testing Framework will attach to.

Create report

Once the tests are run and the OpenCover XML report files are generated, it is time to generate human-readable reports. This is done with ReportGenerator using the command:

C:\ReportGenerator\bin\ReportGenerator.exe -reports:C:\SampleAppPlus\CoverageReports\SampleAppPlus.*.xml -targetdir:C:\SampleAppPlus\CoverageReports\html -sourcedirs:C:\SampleAppPlus\ -classfilters:-SampleAppPlus.Properties.*


Inspect report

The coverage report file from the examples above can be found in the OpenCover code coverage report. Inspecting the report, there is missed code in the SampleAppPlus.MainWindow class – the else branch of the if ((bool)openFileDialog.ShowDialog()) condition is not covered. The documentation of this method states that it returns false if the Cancel button of the dialog window is clicked. In order to increase the coverage, a test that clicks the Cancel button and verifies no upload is done should be added to the test suite.

Code coverage for IIS web application or Windows service

The examples above show how to run a normal Windows application. This is valid for both UI and console applications, as they are started with a single EXE file. OpenCover can also work for IIS web applications, Silverlight applications and Windows service applications. More details can be found in the documentation accompanying the OpenCover installation.

Although it is possible to connect to a running service, I have done code coverage on a Windows service in the manner suggested in the documentation: run the service as a console application. Since debugging a running Windows service is not that straightforward a task, developers have most likely already implemented a switch to start the service as a console application. If not, you will ease their lives by asking them to do so.


OpenCover is the only open-source code coverage tool for .NET applications. It is really powerful and easy to use. No code instrumentation is needed; just build the code in Debug mode to have the PDB files and run the application through OpenCover. It can also be used for measuring code coverage of unit tests.


Code coverage with JaCoCo offline instrumentation with Maven


Post summary: Tutorial how to do code coverage with offline instrumentation with JaCoCo and Maven.

Offline instrumentation

The Code coverage of manual or automated tests with JaCoCo post describes how to do code coverage with JaCoCo. This is applicable in many cases. Still, there could be a case where the application under test does not support Java agents. In this case JaCoCo offers offline instrumentation of the code. Once instrumented, the code can be run and its coverage measured. This post describes how to do it.

JaCoCo for Maven

The JaCoCo Maven plugin is discussed in the Automated code coverage of unit tests with JaCoCo and Maven post. In order to get the code instrumented, the “instrument” task should be added to the Maven configuration:
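The original XML snippet did not survive formatting; a minimal plugin configuration along these lines should work, where the version number and execution id are illustrative:

```xml
<plugin>
	<groupId>org.jacoco</groupId>
	<artifactId>jacoco-maven-plugin</artifactId>
	<version>0.7.9</version>
	<executions>
		<execution>
			<id>jacoco-instrument</id>
			<!-- instrumentation runs during the "test" phase -->
			<phase>test</phase>
			<goals>
				<goal>instrument</goal>
			</goals>
		</execution>
	</executions>
</plugin>
```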


In the current example this task will be executed when “mvn test” is called. It can be configured to be called on “package” or “install” by changing the “phase” element.

Make it configurable

Instrumentation MUST NOT be done on every build. You do not want to release instrumented code: first, because this is bad practice, and second, because the code will not run unless jacocoagent.jar is in the classpath. This is why instrumentation should be disabled by default with a jacoco.skip.instrument=true pom.xml property, which can be overridden when needed with the “mvn clean test -Djacoco.skip.instrument=false” command. Another option is a separate pom-offline.xml file, building with it when needed.
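A sketch of how the property could be wired up in pom.xml – the property name matches the command above, while binding it to the plugin’s “skip” parameter is an assumption about how the original was configured:

```xml
<properties>
	<!-- instrumentation is off by default -->
	<jacoco.skip.instrument>true</jacoco.skip.instrument>
</properties>

<!-- inside the jacoco-maven-plugin executions: -->
<execution>
	<id>jacoco-instrument</id>
	<phase>test</phase>
	<goals>
		<goal>instrument</goal>
	</goals>
	<configuration>
		<!-- overridden with -Djacoco.skip.instrument=false -->
		<skip>${jacoco.skip.instrument}</skip>
	</configuration>
</execution>
```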

Get sample application

The current post uses the sample application first introduced in the Build a RESTful stub server with Dropwizard post. It can be found in the GitHub sample-dropwizard-rest-stub repository. For the current tutorial it has been downloaded to C:\sample-dropwizard-rest-stub. This application gets packaged into a single JAR file by “mvn clean package”. If instrumentation is put in the “package” phase it does not work, as packaging happens before instrumentation. This is why the “test” phase is the correct one for the current example, as “package” includes test by default.

Instrument the code

Once the application is downloaded, it can be built with instrumentation using the “mvn clean package -Djacoco.skip.instrument=false” command. It can easily be checked whether a given class has been instrumented by opening it with some kind of decompiler. The image below shows an instrumented class on the right-hand side vs a non-instrumented one on the left-hand side.


Run it with JaCoCo agent in the class path

If not instrumented, the sample application is started with the “java -jar target/sample-dropwizard-rest-stub-1.0-SNAPSHOT.jar server config.yml” command. In the case of instrumented code this command will throw an exception:

Exception in thread "main" java.lang.NoClassDefFoundError: org/jacoco/agent/rt/internal_773e439/Offline

This is because jacocoagent.jar is not in the classpath.

Adding the JaCoCo agent to the classpath varies from case to case. In this particular tutorial the application is a single JAR run by the “java -jar” command. In order to add something to the classpath the “java -cp” command should be used. The problem is that “-jar” and “-cp” are mutually exclusive. The only way to do it is with the following command:

java -Djacoco-agent.output=tcpserver -cp C:\JaCoCo\jacocoagent.jar;target/sample-dropwizard-rest-stub-1.0-SNAPSHOT.jar com.automationrhapsody.reststub.RestStubApp server config.yml

Where “-Djacoco-agent.output=tcpserver” is the configuration that makes the JaCoCo agent report on a TCP port; more about the JaCoCo offline settings can be found in the JaCoCo documentation. “C:\JaCoCo\jacocoagent.jar” is the location of the JaCoCo agent JAR file. “com.automationrhapsody.reststub.RestStubApp” is the main class to be run from the target/sample-dropwizard-rest-stub-1.0-SNAPSHOT.jar file.


Now that we have the application running, it is time to run all the tests we have, both automated and manual. For manual tests it is important to have a documented scenario which is reproducible on each regression testing, not just some random clicking. The idea is that we want to measure what coverage our tests achieve.

Import coverage results

In order to import the results, Eclipse with the JaCoCo plugin installed from the marketplace is needed. See the Code coverage of manual or automated tests with JaCoCo post for more details on how to install the plugin.

Open Eclipse and import the C:\sample-dropwizard-rest-stub project as a Maven one.

Import the results into Eclipse. This is done from File -> Import -> Coverage Session -> select the “Agent address” radio button but leave the defaults -> enter some name and select the code under test.


Once imported, the results can be seen and the code gets highlighted.

In case no results are imported, delete the “target\classes” folder and rebuild with “mvn compile”.

Export to HTML and analyse

See the Code coverage of manual or automated tests with JaCoCo post for how to export and analyse.


Although rarely needed, offline instrumentation is a way to measure code coverage with JaCoCo.


Automated code coverage of unit tests with JaCoCo and Maven


Post summary: Tutorial how to setup code coverage with JaCoCo on unit tests to be done on each Maven build.

Code coverage

There are two main streamlines in code coverage. One is running code coverage on each build, measuring unit tests coverage. Once configured, this needs no manual intervention. Depending on the tools, there is even an option to fail the build if code coverage doesn’t reach the desired threshold. The current post is dedicated to this aspect. The other is running code coverage on automated or even manual functional test scenarios. The latter is described in the Code coverage of manual or automated tests with JaCoCo post.

Unit tests code coverage

The theory described in the What about code coverage post is applicable to unit tests code coverage, but the real benefits of unit tests code coverage are:

  • Unit tests code coverage can be automated on every build
  • Build can be configured to fail if specific threshold is not met

JaCoCo for Maven

There is a JaCoCo plugin that is used with Maven builds. More details on what goals can be accomplished with it can be seen on the JaCoCo Maven plugin page. The very minimum to make it work is to set up the “prepare-agent” and “report” goals. “report” is good to be bound to the “test” Maven phase. You can bind it to other phases, like install or package; however, code coverage is done for tests. The XML block to be added to the pom.xml file is:
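The original XML block was lost in formatting; a minimal configuration in this spirit, where the version number and execution id are illustrative, is:

```xml
<plugin>
	<groupId>org.jacoco</groupId>
	<artifactId>jacoco-maven-plugin</artifactId>
	<version>0.7.9</version>
	<executions>
		<execution>
			<goals>
				<!-- sets the JVM argument pointing to the JaCoCo agent -->
				<goal>prepare-agent</goal>
			</goals>
		</execution>
		<execution>
			<id>jacoco-report</id>
			<!-- generate the HTML report during "mvn test" -->
			<phase>test</phase>
			<goals>
				<goal>report</goal>
			</goals>
		</execution>
	</executions>
</plugin>
```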


Once this is configured, it can be run with the “mvn clean test” command. Once the build succeeds, the JaCoCo report can be found in the “/target/site/jacoco/index.html” file. A similar report can be found in the HTML JaCoCo report.

Add coverage thresholds

Just adding a unit tests code coverage report is good, but it can be improved further. A good practice is to add code coverage thresholds: if a predefined code coverage percentage is not reached, the build will fail. This practice could have the negative effect of writing unit tests just for the sake of code coverage, but then again it is up to the development team to keep the quality up. Adding thresholds is done with the “check” Maven task:

			<execution>
				<id>jacoco-check</id>
				<goals>
					<goal>check</goal>
				</goals>
				<configuration>
					<rules>
						<rule implementation="org.jacoco.maven.RuleConfiguration">
							<element>BUNDLE</element>
							<limits>
								<limit implementation="org.jacoco.report.check.Limit">
									<counter>INSTRUCTION</counter>
									<value>COVEREDRATIO</value>
									<minimum>0.60</minimum>
								</limit>
							</limits>
						</rule>
					</rules>
				</configuration>
			</execution>

The “check” task should be added along with “prepare-agent”. “report” is not mandatory but could also be added in order to inspect where code coverage is not sufficient. Note that the “implementation” attribute is mandatory only for Maven 2. If Maven 3 is used, the attributes can be removed.

JaCoCo check rules

In order for the “check” task to work, at least one “rule” element should be added, as shown above. A rule is defined for a particular scope. Available scopes are: BUNDLE, PACKAGE, CLASS, SOURCEFILE or METHOD. More details can be found in the JaCoCo check Maven task documentation. In the current example the rule is for BUNDLE, which means the whole code under analysis. PACKAGE is as far as I would go. Even with it, there could be unpleasant surprises, like utility packages or POJO packages (data objects without business logic inside), where objects do not have essential business logic but the PACKAGE rule will still require the given code coverage for the package. This means you will have to write unit tests just to satisfy the coverage demand.

JaCoCo check limits

Although not mandatory, it is good for each rule to have at least one “limit” element, as shown above.

Available limit counters are: INSTRUCTION, LINE, BRANCH, COMPLEXITY, METHOD, CLASS. These are the values measured in the report. Some of them are JaCoCo-specific, others are in accordance with general code coverage theory. See more details on counters on the JaCoCo counters page. Check the sample report at HTML JaCoCo report to see how counters are displayed.

Available limit values are: TOTALCOUNT, COVEREDCOUNT, MISSEDCOUNT, COVEREDRATIO, MISSEDRATIO. These are the magnitudes used for measuring. COUNT values are bound to the current implementation and cannot be related to industry standards; this is why RATIO values are much more suitable.

The actual measuring value is set in the “minimum” or “maximum” elements. Minimum is generally used with COVERED values, maximum with MISSED values. Note that in the case of RATIO this should be a double value less than 1, e.g. 0.60 in the example above means 60%.

With the given counters and values there could be lots of combinations. The very minimum is to use INSTRUCTION with COVEREDRATIO, as shown in the example above. Still, if you want to be really precise, several limits with different counters can be used. Below is an example for Maven 3 defining that every class should have been covered in unit tests:
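The XML example was lost in formatting; one way to express “every class must have some coverage” in Maven 3 style (no implementation attributes) is a CLASS counter with a MISSEDCOUNT of 0 – the exact combination of limits here is an assumption:

```xml
<rules>
	<rule>
		<element>BUNDLE</element>
		<limits>
			<!-- at least 60% of instructions covered -->
			<limit>
				<counter>INSTRUCTION</counter>
				<value>COVEREDRATIO</value>
				<minimum>0.60</minimum>
			</limit>
			<!-- no class may be left completely uncovered -->
			<limit>
				<counter>CLASS</counter>
				<value>MISSEDCOUNT</value>
				<maximum>0</maximum>
			</limit>
		</limits>
	</rule>
</rules>
```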


Configurations are good practice

Hard-coding is never a good practice anywhere. The example above has a hard-coded value of 60% instruction code coverage; otherwise the build will fail. This is not the proper way to do it. The proper way is to define a property and then use it. The “properties” element is defined at the root level:
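The property snippet was stripped from the original; a reconstruction consistent with the command mentioned below would look like this:

```xml
<properties>
	<!-- default threshold, overridable with -Djacoco.percentage.instruction=... -->
	<jacoco.percentage.instruction>0.60</jacoco.percentage.instruction>
</properties>

<!-- then referenced inside the limit instead of the hard-coded value: -->
<limit>
	<counter>INSTRUCTION</counter>
	<value>COVEREDRATIO</value>
	<minimum>${jacoco.percentage.instruction}</minimum>
</limit>
```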




The benefit of a property is that it can be easily overridden at runtime; just specify a new value as a system property: “mvn clean test -Djacoco.percentage.instruction=0.20”. In this way there could be a general value of 0.60 in pom.xml, but until it is reached it can be overridden with 0.20.

Experiment with JaCoCo checks

The sample application to experiment with is the one introduced in the Build a RESTful stub server with Dropwizard post. It can be found in the GitHub sample-dropwizard-rest-stub repository. It has a very small amount of code in it and just one unit test with very low code coverage. This gives a good opportunity to try different combinations of counters and values to see how exactly JaCoCo works.


Code coverage of unit tests with thresholds is a good practice. This tutorial gives an introduction on how to implement and use them, so apply them in your projects.


Code coverage of manual or automated tests with JaCoCo


Post summary: Tutorial how to do code coverage on automated or even manual functional tests with JaCoCo.

Code coverage is a way to check what part of the code your tests are exercising. JaCoCo is a free code coverage library for Java.

Code coverage

There are two main streamlines in code coverage. One is running code coverage on each build, measuring unit tests coverage. Once configured, this needs no manual intervention. Depending on the tools, there is even an option to fail the build if code coverage doesn’t reach the desired threshold. This is well described in the Automated code coverage of unit tests with Maven and JaCoCo post. The other aspect is running code coverage on automated or even manual functional test scenarios. This post is dedicated to the latter. I have already created the What about code coverage post on the theory behind this type of code coverage. The current post is about getting it done in practice.


The idea is pretty simple: the application is started with a code coverage tool attached to it, the tests are executed, and the results are gathered. That is it. Stated that way, it doesn’t sound that bizarre to run code coverage on manual tests. Running it makes sense only if the tests are well-documented, repeatable scenarios that are usually run during regression testing.

JaCoCo Java agent

A Java agent is a powerful mechanism providing the ability to instrument and change classes at runtime. The Java agent library is passed as a JVM parameter when running a given application with “-javaagent:{path_to_jar}”. The JaCoCo tool is implemented as a Java agent. More details about Java agents can be found in the java.lang.instrument package documentation.

In case the application under test does not support plugging agents into the JVM, coverage can be measured with offline instrumentation, described in the Code coverage with JaCoCo offline instrumentation with Maven post.


Some restrictions that have to be taken into consideration:

  • The JaCoCo plugin is only for the Eclipse IDE, hence Eclipse should be used in order to get the report
  • Imported execution data must be based on the exact same class files that are also used within the Eclipse IDE, hence the application should be run in Eclipse; it is not possible to build it and run it separately, as the class files will differ
  • Eclipse is not capable of shutting down the JVM, it directly kills it, hence the only way to get results is to start the JaCoCo agent in “tcpserver” output mode
  • JaCoCo agent version 0.7.4 and Eclipse EclEmma plugin 2.3.2 are used; those are compatible, as 0.7.5 introduces a change in the data format

Install Eclipse plugin

Having stated the restrictions, it is time to start. Installation is done through Eclipse -> Help -> Eclipse Marketplace… -> search for “jacoco” -> Install -> restart Eclipse.


Run the application

As stated in the restrictions, in order for code coverage to work the application should be started in Eclipse. I will use the sample application introduced in the Build a RESTful stub server with Dropwizard post. It can be found in the GitHub sample-dropwizard-rest-stub repository. For this tutorial it is checked out to: C:\sample-dropwizard-rest-stub.

Download the JaCoCo 0.7.4 agent jar. For this tutorial it is saved to C:\JaCoCo.

The project has to be imported to Eclipse from File -> Import -> Existing Maven Projects. Once imported, a Run configuration has to be created from Run -> Run Configurations… -> double-click Java Application. This opens a new window to define the configuration. The properties are stated below:

  • Name (Main tab): RestStubApp (could be whatever name)
  • Project (Main tab): sample-dropwizard-rest-stub
  • Main class (Main tab): com.automationrhapsody.reststub.RestStubApp
  • Program arguments (Arguments tab): server config.yml
  • VM arguments (Arguments tab): -javaagent:C:\JaCoCo\jacocoagent.jar=output=tcpserver (started in “tcpserver” output mode)
  • Working directory -> Other (Arguments tab): C:\sample-dropwizard-rest-stub


Once configuration is created run it.

Apart from the given sample application, it should be possible to run any application in Eclipse. This assumption is based on the fact that developers need to debug their application, so they will have a way to run it. The important part is to include the -javaagent: part in the VM arguments section.


Now that we have the application running, it is time to run all the tests we have, both automated and manual. For manual tests it is important to have a documented scenario which is reproducible on each regression testing, not just some random clicking. The idea is that we want to measure what coverage our tests achieve.

Import coverage results

After all the tests have passed, it is time to import the results into Eclipse. This is done from File -> Import -> Coverage Session -> select the “Agent address” radio button but leave the defaults -> enter some name and select the code under test.


Once imported, the results can be seen and the code gets highlighted.


Export to HTML

Now the result can be exported. There are several options: HTML, zipped HTML, XML, CSV or a JaCoCo execution data file. Export is done from: File -> Export -> Coverage Report -> select session and location.



Here comes the hard part. The code could be quite complex and not that easy to understand. In the current tutorial the code is pretty simple. Inspecting the HTML JaCoCo report, it can easily be noticed that the addPerson() method has not been called. This leads to the conclusion that one test has been missed: invoking the add person endpoint. Another finding is that ProductServlet hasn’t been tested with an empty product id.


It is pretty easy to measure the code coverage. Whether to do it is another discussion, held in the What about code coverage post. If there is time, it might be beneficial. But do not make code coverage an important KPI, as this could lead to aimless tests with only one purpose – a higher percentage. Code coverage should be done in order to optimise tests and search for gaps.


What about code coverage


Post summary: Code coverage is a measurement of what percentage of the program source code is being executed by your tests. Code coverage is nice to have, but in no case should you make code coverage the ultimate goal!

Code coverage is a useful metric to check what part of the code your tests are exercising. It really depends on the tools used for gathering code coverage metrics, but in general they work in a similar fashion.

How it works

One approach is to instrument the application under test. Instrumentation means modifying the original executables by adding metadata, so the code coverage tool is able to understand which line of code is being executed. The other option is to run the application through the code coverage tool. Either way, once the application is running, the tests are executed.

What tests to run

Tests that can be run to measure code coverage can be unit tests, functional automation tests or even manual tests. It doesn’t matter which; the important part is to see how our tests impact the application under test.


Once the tests are finished, code coverage metrics are obtained from the code coverage tool. To provide detailed information, most of the tools take the original source code and generate a visual report with colour information about which lines are executed and which are not. There are different levels of coverage: method, line, code block, etc. I personally prefer lines, and further in this post I’ll mean lines of code executed.


Code coverage information is equally useful for QAs and developers. QAs analyse code that has not been executed during tests to identify what test conditions they are missing and improve their tests. Developers analyse the results to identify and remove dead or unused code.

When to do it

Code coverage is a nice-to-have activity. We can live with or without it. It is useful for big, mature products where there are automated test suites. You can also run it against your test code in order to optimise it. Removing dead code optimises the product and makes its maintenance easier. I would say doing it too often should be avoided. Everything depends on the context, but for me the best cadence is once or twice a year.

What does code coverage percentage mean

In one word – nothing. You may have 30% code coverage but cover the most important user functionality with a low bug rate, or you may have 90% with dummy tests made especially to exercise some code without the idea of actually testing it. I was lucky to work on a project where developers kept the code tidy and clean, and my tests easily accomplished 81% just by verifying all user requirements. I would say 80-85% is the maximum you may get.


Do not ever put code coverage as an important measurement or key performance indicator (KPI) in your testing strategy. Doing so and making people try to increase the code coverage will in most cases result in dummy tests made especially to make this percentage go up. Code coverage is in most cases an insignificant aspect of your testing strategy.

How to do it

A tutorial on how to do code coverage for Java with JaCoCo can be found in the Code coverage of manual or automated tests with JaCoCo post. A tutorial on how to do code coverage for .NET with OpenCover can be found in the Code coverage of manual or automated tests with OpenCover for .NET applications post.


Code coverage is an interesting aspect of testing. It might enhance your tests if done wisely, or it can ruin them. Remember, test scenarios should be extracted from user requirements and features. Code coverage data should be used to check whether you have some blind spots in reading the requirements. It can help developers remove dead or unused code.