.NET Core code coverage on Linux with MiniCover

Post summary: How to run code coverage of your unit tests as part of your build on Linux build agents.

Code below can be found in the GitHub SampleDotNetCore2RestStub repository. In the Code coverage of .NET Core unit tests with OpenCover post I have shown how to do code coverage with OpenCover. The commands shown in that post can be made part of your CI or CD build. There is a "but" though: this works only on Windows. If your build machines run Linux you need an alternative. In this post I'm going to show that alternative.

MiniCover

MiniCover is a lightweight code coverage tool for .NET Core on Linux. It is still at an early stage and the community is not big yet, but I really hope this is going to change soon, as it looks like a very promising tool.

Include in project

In order to use MiniCover it has to be installed as a .NET CLI tool. This is done with the following code:

<ItemGroup>
	<DotNetCliToolReference Include="MiniCover" Version="2.0.0-ci-*" />
</ItemGroup>

In order to keep your original projects intact, the best approach is to create a separate tools project and add the reference to its tools.csproj file, which will look like this:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="MiniCover" Version="2.0.0-ci-*" />
  </ItemGroup>

</Project>

Commands

At this stage the following command-line options are available:

  • instrument – Instrument assemblies
  • uninstrument – Uninstrument assemblies
  • reset – Reset hits count
  • report – Outputs coverage report
  • htmlreport – Write HTML report to folder
  • xmlreport – Write an NCover-formatted XML report to folder

Run coverage

In the case of a project structure where your code is in the src folder and your tests are in the test folder, the following bash script can be used directly. It accepts the threshold coverage percentage as a parameter; if not provided, it uses 80% by default. The script restores NuGet packages and builds the projects. It navigates to the tools folder and restores NuGet packages again. This is very important, as it is the only way to fetch the MiniCover NuGet package. Inside the tools folder it instruments the assemblies and resets any previous statistics. The script then navigates to the root folder and runs all tests inside every project in the test folder. Afterwards it navigates back to the tools folder and uninstruments all assemblies. Although this operation is safe, I would recommend running one more build or publish before the assemblies go to production. In the end the script generates the reports.

if [ ! -z "$1" ]; then
  if [ "$1" -lt 0 ] || [ "$1" -gt 100 ]; then
    echo "Threshold should be between 0 and 100, falling back to 80"
    threshold=80
  else
    threshold=$1
  fi
else
  threshold=80
fi

dotnet restore
dotnet build

cd tools
dotnet restore

# Instrument assemblies inside 'test' folder to detect hits for source files inside 'src' folder
dotnet minicover instrument --workdir ../ --assemblies test/**/bin/**/*.dll --sources src/**/*.cs 

# Reset hits count in case minicover was run for this project
dotnet minicover reset

cd ..

for project in test/**/*.csproj; do dotnet test --no-build $project; done

cd tools

# Uninstrument assemblies, it's important if you're going to publish or deploy build outputs
dotnet minicover uninstrument --workdir ../

# Create HTML reports inside folder coverage-html
# This command returns failure if the coverage is lower than the threshold
dotnet minicover htmlreport --workdir ../ --threshold $threshold

# Print console report
# This command returns failure if the coverage is lower than the threshold
dotnet minicover report --workdir ../ --threshold $threshold

# Create NCover report
dotnet minicover xmlreport --workdir ../ --threshold $threshold

cd ..

Reports

There are three types of reports: console, HTML and NCover XML.

Console report

The console report dumps results to the console and returns 1 if the given threshold is not met, which fails the CI/CD build. In the example below codeCoverage.sh was called with argument 40, which means the threshold is 40%.

HTML report

The HTML report can also fail the build and gives summary information similar to the console report, but additionally gives detailed coverage information for each class. An example report can be found in MiniCover HTML report. I have to praise myself here, as the summary file shown below is something I contributed, because I like the project very much.

NCover report

The NCover report creates an XML file in NCover's format. The beauty of it is that you can additionally use ReportGenerator on a Windows machine and convert the XML to a nice HTML report. Assuming ReportGenerator is extracted to C:\, the command is shown below. The report can be found in MiniCover ReportGenerator report.

C:\ReportGenerator\ReportGenerator.exe
	-reports:coverage.xml
	-targetdir:coverage

Compare with OpenCover

If you compare the OpenCover .NET Core report and the MiniCover ReportGenerator report you will notice some differences in the metrics. The first is that MiniCover does not support branch coverage. This is not that bad after all; if your code is nicely indented, line coverage is sufficient. For example, if your ternary operator is not on one line but on three, and you have not tested one of the conditions, then line coverage will report an untested line. If the ternary operator is on one line though, line coverage will miss this testing gap. Another difference is in Coverable lines and Covered lines. OpenCover counts opening and closing brackets as such, so its numbers are bigger. Because of this conceptual difference the Line coverage percentage differs slightly: MiniCover (35%) is more generous and gives a higher percentage than OpenCover (33.6%).
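
A hypothetical C# snippet illustrates the point (the method names and values are made up for this example):

// On one line: line coverage marks the line as covered even if
// tests exercise only one of the two branches.
public decimal GetDiscountOneLine(bool isVip)
{
	return isVip ? 0.2m : 0.0m;
}

// On three lines: an unexercised branch now shows up as an uncovered line.
public decimal GetDiscountMultiLine(bool isVip)
{
	return isVip
		? 0.2m  // covered only when a test passes isVip == true
		: 0.0m; // covered only when a test passes isVip == false
}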

Conclusion

MiniCover is a very nice and compact tool that can be put in place in your continuous integration or continuous delivery pipeline to measure code coverage on each build. Its most important advantage is that it is designed for and works on Linux.

Code coverage of .NET Core unit tests with OpenCover

Post summary: Examples of how to measure code coverage of .NET Core unit tests with OpenCover.

The examples below are based on the GitHub SampleDotNetCore2RestStub repository. They use code from the .NET Core integration testing and mock dependencies post. Those are integration tests, because they test more than one application module at a time, but they are run with a unit testing framework, hence the title of this post.

Code coverage

This post shows how to do code coverage of .NET Core unit tests with OpenCover. Theory on what code coverage is and why it is needed can be found in the What about code coverage post.

OpenCover

OpenCover is an open-source code coverage tool for .NET 2.0 and above applications, for Windows only. You can read more details about OpenCover in the Code coverage of manual or automated tests with OpenCover for .NET applications post, or you can visit the OpenCover Wiki page.

Run OpenCover

In order to make these examples work you need to check out the SampleDotNetCore2RestStub repository to C:\ and run all commands from the project root folder C:\SampleDotNetCore2RestStub. OpenCover and ReportGenerator should be installed on C:\ as well. If you have different paths, just adjust them in the commands shown below.

C:\OpenCover\OpenCover.Console.exe
	-target:"c:\Program Files\dotnet\dotnet.exe"
	-targetargs:"test"
	-output:coverage.xml
	-oldStyle
	-filter:"+[SampleDotNetCore2RestStub*]* -[SampleDotNetCore2RestStub*Test*]*"
	-register:user

Enable .NET Core debug output

If you run the command above you will get the following message:

Committing…
No results, this could be for a number of reasons. The most common reasons are:
1) missing PDBs for the assemblies that match the filter please review the
output file and refer to the Usage guide (Usage.rtf) about filters.
2) the profiler may not be registered correctly, please refer to the Usage
guide and the -register switch.

Note: the error with red text shown on the image above occurs because with -targetargs:"test" dotnet.exe tries to run tests inside all projects, but src\SampleDotNetCore2RestStub simply does not have tests. You can refine which test project gets run by changing to: -targetargs:"test test\SampleDotNetCore2RestStub.Integration.Test\SampleDotNetCore2RestStub.Integration.Test.csproj".

The "no results" message appears because debug output is not enabled for the .NET Core project, so OpenCover does not have the data it needs to work. Change the src\SampleDotNetCore2RestStub\SampleDotNetCore2RestStub.csproj file by adding <DebugType>full</DebugType>:

<PropertyGroup>
	<OutputType>Exe</OutputType>
	<TargetFramework>netcoreapp2.0</TargetFramework>
	<DebugType>full</DebugType>
</PropertyGroup>

Now running the command gives proper output:

Committing…
Visited Classes 5 of 12 (41.67)
Visited Methods 17 of 36 (47.22)
Visited Points 43 of 123 (34.96)
Visited Branches 18 of 44 (40.91)

==== Alternative Results (includes all methods including those without corresponding source) ====
Alternative Visited Classes 5 of 12 (41.67)
Alternative Visited Methods 20 of 43 (46.51)

Generate report

ReportGenerator is used to convert XML reports generated by OpenCover, PartCover, Visual Studio or NCover into human-readable reports in various formats. To generate a report use the following command:

C:\ReportGenerator\ReportGenerator.exe
	-reports:coverage.xml
	-targetdir:coverage

Inspect report

The report can be found in my examples: OpenCover .NET Core report. You can see which code is covered during testing and which is not.

Conclusion

In this post I have shown how to run code coverage with OpenCover on .NET Core unit tests.

Useful .NET Core SDK CLI commands

Post summary: Some useful .NET Core SDK CLI commands.

The commands in this post are extracted from the Build a REST API with .NET Core 2 and run it on Docker Linux container and .NET Core integration testing and mock dependencies posts, where I have used them in a real project.

All commands

All available commands are accessed with: dotnet --help.

Initialise .NET projects

Creating new projects is done with: dotnet new [options]. I have used the following commands:

  • Create new console application project: dotnet new console -o <ProjectName>
  • Create new MS Test project: dotnet new mstest -o <ProjectName>
  • Create new solution file: dotnet new sln --name <SolutionName>

To list all available project types use: dotnet new --help.

Custom templates

The .NET SDK allows you to create a custom project template and then install it to the SDK with the command: dotnet new -i <TEMPLATE_FOLDER>. Once installed, you can create new projects out of your custom template. This is valuable in big organisations where cohesion is needed between similar project types. See more about templates in the Custom templates for dotnet new article.

Manage dependencies

Two types of dependencies are available, to a NuGet package or to another .NET project.

  • Add reference to a NuGet package: dotnet add package <NuGetPackageName>
  • Add reference to another .NET project: dotnet add reference <PathToProjectFile>

Similar commands are available for removing dependencies with: dotnet remove.

Project dependencies can be shown with: dotnet list reference <PathToProjectFile>.

Actions on a project

  • Build a project: dotnet build
  • Run a project: dotnet run
  • Run tests of a project: dotnet test
  • Publish project artefacts: dotnet publish

All commands have a bunch of configuration options that can be provided. More details on each command can be obtained by adding --help at the end.

Manage solution file

  • Create new solution file: dotnet new sln --name <SolutionName>
  • Add project to solution: dotnet sln <SolutionFileName> add <PathToProjectFile>
  • Remove project from solution: dotnet sln <SolutionFileName> remove <PathToProjectFile>
  • List projects in a solution: dotnet sln <SolutionFileName> list

Conclusion

This post showed some useful .NET SDK CLI commands that make management of .NET projects easy without Visual Studio 2017. More practical examples can be found in the posts where those commands are actually used: Build a REST API with .NET Core 2 and run it on Docker Linux container and .NET Core integration testing and mock dependencies.

.NET Core integration testing and mock dependencies

Post summary: How to do integration testing on .NET Core application and stub or mock some inconvenient dependencies.

Code below can be found in the GitHub SampleDotNetCore2RestStub repository. In the Build a REST API with .NET Core 2 and run it on Docker Linux container post I have shown how to create a .NET Core application. In this post I will show how to do integration testing on the same application. The post is about a REST API, but the principles apply to a web UI as well; the difference is that the response will be HTML, which is slightly harder to process than JSON.

Refactor project structure

Currently there is only one project, which contains the .NET Core application. Since this is going to grow, it has to be refactored and structured properly.

  • The SampleDotNetCore2RestStub folder which contains the API is moved to the src folder.
  • A solution file is created with dotnet new sln --name SampleDotNetCore2RestStub. Note that the .sln extension is omitted, as it is added automatically. Although everything in the example is done with open-source tools, it is good to have a solution file to keep compatibility with Visual Studio 2017.
  • The API project file is added to the solution file with:
    dotnet sln SampleDotNetCore2RestStub.sln add src/SampleDotNetCore2RestStub/SampleDotNetCore2RestStub.csproj.
  • In order to test that moving the files did not affect the functionality, the API can be run with: dotnet run --project src/SampleDotNetCore2RestStub/SampleDotNetCore2RestStub.csproj.

Add test project

It is time to create the integration tests project. We speak of integration tests, but they will be run with the unit testing framework MSTest. I have no particular preference for it; it simply comes by default with .NET Core, along with xUnit, and I do not want to change it.

  • Create test folder: mkdir test.
  • Navigate to it: cd test.
  • MSTest project is created with: dotnet new mstest -o SampleDotNetCore2RestStub.Integration.Test.
  • Navigate to test project: cd SampleDotNetCore2RestStub.Integration.Test.
  • Run the unit tests: dotnet test. By default there is one dummy test that passes.
  • Go to root folder: cd .. and cd ..
  • Add test project to solution file: dotnet sln SampleDotNetCore2RestStub.sln add test/SampleDotNetCore2RestStub.Integration.Test/SampleDotNetCore2RestStub.Integration.Test.csproj.

Open with Visual Studio Code

Once refactored and opened in Visual Studio Code, the project has the following structure:

Unit vs Integration testing

I would not like to focus on theory and terminology, as this post is not intended for that, but I have to do some theoretical setup before proceeding with the code. Generally speaking, the term integration testing is used in two cases: one is when different systems are interconnected and tested together; the other is when different components of one system are grouped and tested together. In this post the term integration testing refers to the latter. In unit testing each separate class is tested in isolation. In order to do so, all external dependencies, like database, file system, web requests and responses, etc., are mocked. This makes tests run very fast, but carries a high risk of false positives because of the mocking. When mocking a dependency there is always an assumption about how it works and how it is used. The mocked behaviour might be significantly different from the actual one, and then the unit test is compromised. On the other hand, integration testing verifies that different parts of the application work correctly when grouped together. It is much slower than unit testing, because more, and real, resources are being used. Some parts of the application can still be mocked, which can decrease execution time. In this post I will show how to run the full application with only the database mocked.
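
A minimal sketch of such a false positive, using the Moq library (introduced later in this post) against this project's repository classes; the wrapper class and method are hypothetical:

using Moq;
using SampleDotNetCore2RestStub.Models;
using SampleDotNetCore2RestStub.Repositories;

public class MockAssumptionExample
{
	public void FalsePositive()
	{
		// The mock assumes a missing person yields null...
		var mock = new Mock<IPersonRepository>();
		mock.Setup(x => x.GetById(It.IsAny<int>())).Returns((Person)null);

		// ...but the real PersonRepository indexes a Dictionary and throws
		// KeyNotFoundException for an unknown id. Code unit tested only
		// against the mock can pass its tests and still fail at runtime.
		var person = mock.Object.GetById(42); // returns null, no exception
	}
}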

The Test Host

One way to run the fully assembled application is by building and deploying it. Then the application uses real resources to work. Functional testing should also be done during testing, but it is not part of this post. A more interesting scenario is to run the fully assembled, or partially mocked, application in memory, without deployment, and run tests against it. This approach has benefits: since the application runs locally, its response time is very low, which speeds up tests; and some parts, like the database connection, can be mocked, speeding up tests further. The .NET Core Test Host is a tool that can host web or API .NET Core applications, serving requests and responses. It eliminates the need for a testing environment.

Add dependencies

In order to use the test host, a dependency to its NuGet package should be added. Navigate to test/SampleDotNetCore2RestStub.Integration.Test and add the dependency:

dotnet add package Microsoft.AspNetCore.TestHost

The SampleDotNetCore2RestStub.Integration.Test project should depend on SampleDotNetCore2RestStub in order to use its code. This is done with:

dotnet add reference ../../src/SampleDotNetCore2RestStub/SampleDotNetCore2RestStub.csproj

Create first test

The existing UnitTest1 class will be changed to start the application inside the test host and make a request.

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Newtonsoft.Json;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	[TestClass]
	public class PersonsTest
	{
		private TestServer _server;
		private HttpClient _client;

		[TestInitialize]
		public void TestInitialize()
		{
			_server = new TestServer(new WebHostBuilder()
				.UseStartup<Startup>());
			_client = _server.CreateClient();
		}

		[TestMethod]
		public async Task GetPerson()
		{
			var response = await _client.GetAsync("/person/get/1");
			response.EnsureSuccessStatusCode();

			var result = await response.Content.ReadAsStringAsync();
			var person = JsonConvert.DeserializeObject<Person>(result);

			Assert.AreEqual("LN1", person.LastName);
		}
	}
}

TestServer takes an instance of IWebHostBuilder. Startup from UseStartup<Startup> is the same class that is used to run the application, but here it runs inside the TestServer instance. The CreateClient() method returns an instance of the standard HttpClient, with which the request to the /person/get/1 endpoint is made. EnsureSuccessStatusCode() throws an exception if the response code is not in the 200-299 range. The response is then taken as a string and deserialized to a Person object with Newtonsoft.Json, which comes bundled with ASP.NET Core.

Tests can be run from the test\SampleDotNetCore2RestStub.Integration.Test folder with the command: dotnet test. If you type dotnet test from the root folder, it will search for tests inside all projects.

Debug tests in Visual Studio Code

Before proceeding any further with the code, it should be possible to debug unit tests inside VS Code. It is not as easy as with VS 2017, but still manageable. First you need to run your test from the command prompt in debug mode:

set VSTEST_HOST_DEBUG=1
dotnet test

Once this is done there is a message with a specific process ID:

Starting test execution, please wait...
Host debugging is enabled. Please attach debugger to testhost process to continue.
Process Id: 16032, Name: dotnet

Now from Visual Studio Code you have to attach to the given process, 16032 in the current example. This is done from the Debug View by selecting the .NET Core Attach launch configuration. If it does not exist, add it. Running this configuration shows a list of all processes with the name dotnet. Select the proper one, 16032 in the current example.
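
For reference, a minimal attach configuration in .vscode/launch.json might look like this (assuming the C# extension for VS Code is installed):

{
	"name": ".NET Core Attach",
	"type": "coreclr",
	"request": "attach",
	"processId": "${command:pickProcess}"
}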

Create PersonServiceClient and BaseTest

Tests should be easy to write, read and maintain, so a PersonServiceClient class is created. It exposes methods that hit the endpoints and return the result. Since testing is not only about the happy path, it should be possible to cover negative scenarios. You may want to hit the API with invalid data and verify it returns a BadRequest (400) HTTP response code, or an Unauthorized (401) HTTP response code, etc. In order to fulfil this requirement, a separate class ApiResponse<T> is created. It stores the response code along with the response content as a string. In case the response string can be deserialized to an object of the given generic type T, that object is also stored in the ApiResponse.

The client is instantiated as a protected variable in the BaseTest constructor. PersonsTest extends BaseTest and has access to the PersonServiceClient.

ApiResponse

using System.Net;

namespace SampleDotNetCore2RestStub.Integration.Test.Client
{
	public class ApiResponse<T>
	{
		public HttpStatusCode StatusCode { get; set; }
		public T Result { get; set; }
		public string ResultAsString { get; set; }
	}
}

PersonServiceClient

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Integration.Test.Client
{
	public class PersonServiceClient
	{
		private readonly HttpClient _httpClient;

		public PersonServiceClient(HttpClient httpClient)
		{
			_httpClient = httpClient;
		}

		public async Task<ApiResponse<Person>> GetPerson(string id)
		{
			var person = await GetAsync<Person>($"/person/get/{id}");
			return person;
		}

		public async Task<ApiResponse<List<Person>>> GetPersons()
		{
			var persons = await GetAsync<List<Person>>("/person/all");
			return persons;
		}

		public async Task<ApiResponse<string>> Version()
		{
			var version = await GetAsync<string>("api/version");
			return version;
		}

		private async Task<ApiResponse<T>> GetAsync<T>(string path)
		{
			var response = await _httpClient.GetAsync(path);
			var value = await response.Content.ReadAsStringAsync();
			var result = new ApiResponse<T>
			{
				StatusCode = response.StatusCode,
				ResultAsString = value
			};

			try
			{
				result.Result = JsonConvert.DeserializeObject<T>(value);
			}
			catch (Exception)
			{
				// Nothing to do
			}

			return result;
		}
	}
}

BaseTest

using System.Net.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using SampleDotNetCore2RestStub.Integration.Test.Client;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	public abstract class BaseTest
	{
		protected PersonServiceClient PersonServiceClient;

		public BaseTest()
		{
			var server = new TestServer(new WebHostBuilder()
				.UseStartup<Startup>());
			var httpClient = server.CreateClient();
			PersonServiceClient = new PersonServiceClient(httpClient);
		}
	}
}

PersonsTest

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Newtonsoft.Json;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	[TestClass]
	public class PersonsTest : BaseTest
	{
		[TestMethod]
		public async Task GetPerson()
		{
			var response = await PersonServiceClient.GetPerson("1");

			Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
			Assert.AreEqual("LN1", response.Result.LastName);
		}

		[TestMethod]
		public async Task GetPersons()
		{
			var response = await PersonServiceClient.GetPersons();

			Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
			Assert.AreEqual(4, response.Result.Count);
			Assert.AreEqual("LN1", response.Result[0].LastName);
		}
	}
}

Stub the database

So far there is an integration test that starts the application with its actual external dependencies and makes requests against it. The current API service does not connect to a real database, because that would make running the API harder. Instead there is a fake PersonRepository which stores data in memory. In reality a real repository would connect to a database with a connection string from appsettings.json and perform CRUD operations on it. Database operations might slow down the application response time, or the test might not have full control over the data in the database, which makes testing harder. In order to solve those two issues, the database can be stubbed to serve test data. Actually, anything that is not convenient can be stubbed following the examples given below.

In order to make stubbing possible and to keep the application structure intact, Startup has to be changed. Registering PersonRepository in the .NET Core IoC container is extracted to a separate virtual method that can be overridden later. All dependencies that are to be stubbed or mocked can be extracted to such methods. Then StartupStub overrides this method and registers the stubbed repository PersonRepositoryStub, in which all database operations are substituted with in-memory equivalents, hence skipping the database calls. It might not be a full and accurate substitution, as long as it serves your testing purpose; after all, PersonRepositoryStub will be used only for testing. BaseTest should be changed to start the application with StartupStub instead of Startup. Finally, PersonsTest should be changed to assert on the new data that is configured in PersonRepositoryStub.

Startup

public void ConfigureServices(IServiceCollection services)
{
	services.AddMvc();
	services.Configure<AppConfig>(Configuration);
	services.AddScoped<AuthenticationFilterAttribute>();

	ConfigureRepositories(services);
}

public virtual void ConfigureRepositories(IServiceCollection services)
{
	services.AddSingleton<IPersonRepository, PersonRepository>();
}

StartupStub

using Microsoft.Extensions.DependencyInjection;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Integration.Test.Mocks
{
	public class StartupStub : Startup
	{
		public override void ConfigureRepositories(IServiceCollection services)
		{
			services.AddSingleton<IPersonRepository, PersonRepositoryStub>();
		}
	}
}

PersonRepositoryStub

using System.Collections.Generic;
using System.Linq;
using SampleDotNetCore2RestStub.Models;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Integration.Test.Mocks
{
	public class PersonRepositoryStub : IPersonRepository
	{
		private Dictionary<int, Person> _persons 
					= new Dictionary<int, Person>();

		public PersonRepositoryStub()
		{
			_persons.Add(1, new Person
			{
				Id = 1,
				FirstName = "Stubed FN1",
				LastName = "Stubed LN1",
				Email = "stubed.email1@email.na"
			});
		}

		public Person GetById(int id)
		{
			return _persons[id];
		}

		public List<Person> GetAll()
		{
			return _persons.Values.ToList();
		}

		public int GetCount()
		{
			return _persons.Count();
		}

		public void Remove()
		{
			if (_persons.Keys.Any())
			{
				_persons.Remove(_persons.Keys.Last());
			}
		}

		public string Save(Person person)
		{
			if (_persons.ContainsKey(person.Id))
			{
				_persons[person.Id] = person;
				return "Updated Person with id=" + person.Id;
			}
			else
			{
				_persons.Add(person.Id, person);
				return "Added Person with id=" + person.Id;
			}
		}
	}
}

BaseTest

using System.Net.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using SampleDotNetCore2RestStub.Integration.Test.Client;
using SampleDotNetCore2RestStub.Integration.Test.Mocks;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	public abstract class BaseTest
	{
		protected PersonServiceClient PersonServiceClient;

		public BaseTest()
		{
			var server = new TestServer(new WebHostBuilder()
				.UseStartup<StartupStub>());
			var httpClient = server.CreateClient();
			PersonServiceClient = new PersonServiceClient(httpClient);
		}
	}
}

PersonsTest

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Newtonsoft.Json;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	[TestClass]
	public class PersonsTest : BaseTest
	{
		[TestMethod]
		public async Task GetPerson()
		{
			var response = await PersonServiceClient.GetPerson("1");

			Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
			Assert.AreEqual("Stubed LN1", response.Result.LastName);
		}

		[TestMethod]
		public async Task GetPersons()
		{
			var response = await PersonServiceClient.GetPersons();

			Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
			Assert.AreEqual(1, response.Result.Count);
			Assert.AreEqual("Stubed LN1", response.Result[0].LastName);
		}
	}
}

Mock the database

Stubbing is an option, but mocking is much better, as you have direct control over the mock itself. The most famous .NET mocking framework is Moq. It is added to the project with the command:

dotnet add package Moq

StartupMock extends Startup and overrides its ConfigureRepositories. It registers the instance of IPersonRepository that is injected via its constructor. BaseTest is changed to use StartupMock in the UseStartup method. The repository mock is instantiated with PersonRepositoryMock = new Mock<IPersonRepository>(). It is injected into the StartupMock constructor with ConfigureServices(services => services.AddSingleton(PersonRepositoryMock.Object)). This is how a mock instance is registered into the IoC container of the .NET Core application under test. Once the mock instance is registered it can be controlled. In BaseTest it is reset to defaults after each test by the BaseTearDown method, which runs after each test because of the [TestCleanup] MSTest attribute. Inside it, PersonRepositoryMock.Reset() resets the mock state.

Test-specific setup can be done for each test. For example, GetPerson_ReturnsCorrectResult has the following setup: PersonRepositoryMock.Setup(x => x.GetById(It.IsAny<int>())).Returns(_person). That means when the mock's GetById method is called with any int value, the _person object is returned. Another example is the GetPerson_ThrowsException test: when the mock's GetById is called, an InvalidOperationException is thrown. In this way you can test exception handling, which is missing in the current demo application. An exception is not that easy to reproduce if you are using repository stubbing.

StartupMock

using Microsoft.Extensions.DependencyInjection;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Integration.Test.Mocks
{
	public class StartupMock : Startup
	{
		private IPersonRepository _personRepository;
		
		public StartupMock(IPersonRepository personRepository)
		{
			_personRepository = personRepository;
		}

		public override void ConfigureRepositories(IServiceCollection services)
		{
			services.AddSingleton(_personRepository);
		}
	}
}

BaseTest

using System.Net.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.Extensions.DependencyInjection;
using Moq;
using SampleDotNetCore2RestStub.Integration.Test.Client;
using SampleDotNetCore2RestStub.Integration.Test.Mocks;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	public abstract class BaseTest
	{
		protected PersonServiceClient PersonServiceClient;
		protected Mock<IPersonRepository> PersonRepositoryMock;

		public BaseTest()
		{
			PersonRepositoryMock = new Mock<IPersonRepository>();

			var server = new TestServer(new WebHostBuilder()
				.UseStartup<StartupMock>()
				.ConfigureServices(services =>
				{
					services.AddSingleton(PersonRepositoryMock.Object);
				}));

			var httpClient = server.CreateClient();
			PersonServiceClient = new PersonServiceClient(httpClient);
		}

		[TestCleanup]
		public void BaseTearDown()
		{
			PersonRepositoryMock.Reset();
		}
	}
}

PersonsTest

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;
using Newtonsoft.Json;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	[TestClass]
	public class PersonsTest : BaseTest
	{
		private readonly Person _person = new Person
		{
			Id = 1,
			FirstName = "Mocked FN1",
			LastName = "Mocked LN1",
			Email = "mocked.email1@email.na"
		};

		[TestMethod]
		public async Task GetPerson_ReturnsCorrectResult()
		{
			PersonRepositoryMock.Setup(x => x.GetById(It.IsAny<int>()))
				.Returns(_person);

			var response = await PersonServiceClient.GetPerson("1");

			Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
			Assert.AreEqual("Mocked LN1", response.Result.LastName);
		}

		[TestMethod]
		[ExpectedException(typeof(InvalidOperationException))]
		public async Task GetPerson_ThrowsException()
		{
			PersonRepositoryMock.Setup(x => x.GetById(It.IsAny<int>()))
				.Throws(new InvalidOperationException());

			var result = await PersonServiceClient.GetPerson("1");
		}

		[TestMethod]
		public async Task GetPersons()
		{
			PersonRepositoryMock.Setup(x => x.GetAll())
				.Returns(new List<Person> { _person });

			var response = await PersonServiceClient.GetPersons();

			Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
			Assert.AreEqual(1, response.Result.Count);
			Assert.AreEqual("Mocked LN1", response.Result[0].LastName);
		}
	}
}

Conclusion

In this post I have shown how to do integration testing on .NET Core applications. This is a very convenient approach which eliminates some of the disadvantages of stubbing or mocking all dependencies in unit testing. Because it uses all the dependencies, integration testing can be much slower; this can be improved by mocking some of them. Integration testing is not a substitute for unit testing, nor for functional testing, but it is a good approach in your testing portfolio that should be considered.

Build a REST API with .NET Core 2 and run it on Docker Linux container

Post summary: Code examples of how to create a RESTful API with .NET Core 2.0 and then run it in a Docker Linux container.

Code below can be found in the GitHub SampleDotNetCore2RestStub repository. This post shows a sample application that can be a very good foundation for a real production application; the project can easily be used as a template for a real API service.

Microsoft and open source

I was doing Java for about 2 years and got back to .NET six months ago. Recently we had to do a project in .NET Core 2.0, a technology I had not heard of before. I was truly amazed how much Microsoft has embraced open source. .NET can now be developed and even run on Linux. This definitely makes it really competitive with Java, whose big advantage was its multi-platform ability. Another benefit is that the documentation is very extensive and there is a huge community out there that makes solving issues really fast and easy.

.NET Core

In short, .NET Core is a cross-platform development platform supporting Windows, macOS and Linux, which can be used in device, cloud, and embedded/IoT scenarios. It is maintained by Microsoft and the .NET community on GitHub. More can be read in the .NET Core Guide.

.NET Core 2.0

The special thing about .NET Core 2.0 is its implementation of .NET Standard 2.0. This makes it possible to use almost 70% of the already existing NuGet packages, which is a big step forward and eases development of .NET applications because of reusability.

Create simple .NET Core project

Making a default .NET Core console application is really simple:

  1. Download and install the .NET Core SDK. For Windows and macOS there are installers available. For Linux it depends on the distribution used; see more at the .NET Core Linux installation guide.
  2. Create the app with the following command: dotnet new console -o ProjectName. Option -o specifies the output folder to be created, which also becomes the project name. If -o is omitted, the project is created in the current folder with the current folder's name.
  3. Run the newly created application with: dotnet run.

Using Visual Studio Code

Once the project is created it can be developed in any text editor. Most convenient is Visual Studio 2017, because it provides lots of tools that make development very fast and efficient. In this tutorial I will be using Visual Studio Code, an open-source multi-platform editor maintained by Microsoft. I admit it is much harder than Visual Studio 2017, but it is free and multi-platform. Once the project folder is imported, hitting Ctrl+F5 runs the project.

ASP.NET Core MVC

ASP.NET Core MVC provides features to build web APIs or web UIs, and it has to be used in order to continue with the current example. The dependency to its NuGet packages is added with the following commands:

dotnet add package Microsoft.AspNetCore
dotnet add package Microsoft.AspNetCore.All

Create REST API

After the project structure is done, it is time to add the classes needed to make the REST API. The functionality is very similar to the one described in the Build a RESTful stub server with Dropwizard post. There is a Person API which can retrieve, save or delete persons. They are kept in an in-memory data structure which mimics a DB layer. The following classes are needed:

  • PersonController – controller that exposes the API endpoints. By extending the Controller class, the runtime makes all endpoints available as long as they have proper routing. In the current example routing is done inside action attributes: [HttpGet("person/get/{id}")]. There are different routing options described in the extensive Routing to Controller Actions documentation. Adding a person is done with POST: [HttpPost("person/save")]. The important bit here is the [FromBody] attribute, which takes the HTTP body and deserialises it to a Person object.
  • Person – this is data model class with properties.
  • PersonRepository – in-memory DB abstraction that keeps the data in a Dictionary. In reality there will be DB layer responsible for managing data.
  • Startup – class with services configuration. Both ConfigureServices and Configure methods are called behind the scenes by the runtime. Any configuration needed goes into those two methods. The current configuration adds MVC to the services and instructs the application to use it. This is not really the Model View Controller pattern, but it is what is needed to enable controllers and get the API running.
  • Program – main program entry point where the web host is built and started. It uses Startup.cs to run the configurations. More details on WebHost can be found in Hosting in ASP.NET Core. That article also shows how external configuration is managed, something that is presented later in this post.

PersonController

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using SampleDotNetCore2RestStub.Models;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Controllers
{
	public class PersonController : Controller
	{
		[HttpGet("person/get/{id}")]
		public Person GetPerson(int id)
		{
			return PersonRepository.GetById(id);
		}

		[HttpGet("person/remove")]
		public string RemovePerson()
		{
			PersonRepository.Remove();
			return "Last person remove. Total count: " 
						+ PersonRepository.GetCount();
		}

		[HttpGet("person/all")]
		public List<Person> GetPersons()
		{
			return PersonRepository.GetAll();
		}

		[HttpPost("person/save")]
		public string AddPerson([FromBody]Person person)
		{
			return PersonRepository.Save(person);
		}
	}
}

Person

namespace SampleDotNetCore2RestStub.Models
{
	public class Person
	{
		public int Id { get; set; }
		public string FirstName { get; set; }
		public string LastName { get; set; }
		public string Email { get; set; }
	}
}

PersonRepository

using System.Collections.Generic;
using System.Linq;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Repositories
{
	public class PersonRepository
	{
		private static Dictionary<int, Person> PERSONS 
								= new Dictionary<int, Person>();

		static PersonRepository()
		{
			PERSONS.Add(1, new Person
			{
				Id = 1,
				FirstName = "FN1",
				LastName = "LN1",
				Email = "email1@email.na"
			});
			PERSONS.Add(2, new Person
			{
				Id = 2,
				FirstName = "FN2",
				LastName = "LN2",
				Email = "email2@email.na"
			});
		}

		public static Person GetById(int id)
		{
			return PERSONS[id];
		}

		public static List<Person> GetAll()
		{
			return PERSONS.Values.ToList();
		}

		public static int GetCount()
		{
			return PERSONS.Count();
		}

		public static void Remove()
		{
			if (PERSONS.Keys.Any())
			{
				PERSONS.Remove(PERSONS.Keys.Last());
			}
		}

		public static string Save(Person person)
		{
			var result = "";
			if (PERSONS.ContainsKey(person.Id))
			{
				result = "Updated Person with id=" + person.Id;
			}
			else
			{
				result = "Added Person with id=" + person.Id;
			}
			PERSONS[person.Id] = person; // indexer adds or updates; Add would throw on an existing key
			return result;
		}
	}
}

Startup

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;

namespace SampleDotNetCore2RestStub
{
	public class Startup
	{
		public Startup(IConfiguration configuration)
		{
			Configuration = configuration;
		}

		public IConfiguration Configuration { get; }

		public void ConfigureServices(IServiceCollection services)
		{
			services.AddMvc();
		}

		public void Configure(IApplicationBuilder app,
					IHostingEnvironment env)
		{
			app.UseMvc();
		}
	}
}

Program

using System;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

namespace SampleDotNetCore2RestStub
{
	public class Program
	{
		public static void Main(string[] args)
		{
			BuildWebHost(args).Run();
		}

		public static IWebHost BuildWebHost(string[] args) =>
			WebHost.CreateDefaultBuilder(args)
				.UseStartup<Startup>()
				.Build();
	}
}

External configuration

The service so far is pretty much useless, as it does not give the opportunity for external configuration. Adding external configuration consists of adding and changing the following files:

    • VersionController – controller that actually shows a fully working configuration. Routing in this controller is handled by [Route("api/[controller]")]. This exposes the /api/version endpoint, because [controller] is a template that stands for the controller name. The controller constructor takes an IOptions object and extracts the Value out of it. The actual object value is injected in Startup.cs.
    • appsettings.json – JSON file with application configurations.
    • AppConfig – data model class that represents JSON configuration as object.
    • Startup – a change is needed to read the appsettings.json file and bind it to an AppConfig object. The configuration is read with: var configurationBuilder = new ConfigurationBuilder().AddJsonFile("appsettings.json", false, true), then it is saved internally with Configuration = configurationBuilder.Build(). The JSON configuration is bound to an AppConfig object with the following line: services.Configure<AppConfig>(Configuration).
    • SampleDotNetCore2RestStub.csproj – a change is needed in the project file to instruct the build process to copy appsettings.json to the output folder. This is where VS 2017 makes it much easier, as it exposes this as a property to change; with VS Code you have to edit the csproj XML.

VersionController

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

namespace SampleDotNetCore2RestStub.Controllers
{
	[Route("api/[controller]")]
	public class VersionController : Controller
	{
		private readonly AppConfig _config;

		public VersionController(IOptions<AppConfig> options)
		{
			_config = options.Value;
		}

		[HttpGet]
		public string Version()
		{
			return _config.Version;
		}
	}
}

appsettings.json

{
	"Version": "1.0"
}

AppConfig

namespace SampleDotNetCore2RestStub
{
	public class AppConfig
	{
		public string Version { get; set; }
	}
}

Startup

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;

namespace SampleDotNetCore2RestStub
{
	public class Startup
	{
		public Startup()
		{
			var configurationBuilder = new ConfigurationBuilder()
				.AddJsonFile("appsettings.json", false, true);

			Configuration = configurationBuilder.Build();
		}

		public IConfiguration Configuration { get; }

		public void ConfigureServices(IServiceCollection services)
		{
			services.AddMvc();
			services.Configure<AppConfig>(Configuration);
		}

		public void Configure(IApplicationBuilder app, 
					IHostingEnvironment env)
		{
			app.UseMvc();
		}
	}
}

csproj

<ItemGroup>
	<None Include="appsettings.json" CopyToOutputDirectory="Always" />
</ItemGroup>

Request filtering

An almost mandatory feature is to have some kind of filtering on the request. The current example provides a very basic implementation of an authentication filter achieved with an attribute. The following files are needed:

  • SecurePersonController – controller that demonstrates the filtering. The controller is no different from the others discussed above. The important bit is [ServiceFilter(typeof(AuthenticationFilterAttribute))], which assigns AuthenticationFilterAttribute to the current controller.
  • AuthenticationFilterAttribute – very basic implementation to illustrate how the filtering works. Request headers are extracted from the HttpContext and checked for the existence of Authorization. If it is not found, an Exception is thrown. In the next section I will show how to handle this exception more gracefully.
  • Startup – AuthenticationFilterAttribute is registered with the runtime via: services.AddScoped<AuthenticationFilterAttribute>(). The .NET Core dependency injection mechanism is used here, which I describe in more detail in a separate section below.

SecurePersonController

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using SampleDotNetCore2RestStub.Attributes;
using SampleDotNetCore2RestStub.Models;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Controllers
{
	[ServiceFilter(typeof(AuthenticationFilterAttribute))]
	public class SecurePersonController : Controller
	{
		[HttpGet("secure/person/all")]
		public List<Person> GetPersons()
		{
			return PersonRepository.GetAll();
		}
	}
}

AuthenticationFilterAttribute

using System;
using System.Linq;
using Microsoft.AspNetCore.Mvc.Filters;

namespace SampleDotNetCore2RestStub.Attributes
{
	public class AuthenticationFilterAttribute : ActionFilterAttribute
	{
		public override void OnActionExecuting(ActionExecutingContext ctx)
		{
			string authKey = ctx.HttpContext.Request
					.Headers["Authorization"].SingleOrDefault();

			if (string.IsNullOrWhiteSpace(authKey))
				throw new Exception();
		}
	}
}

Startup

public void ConfigureServices(IServiceCollection services)
{
	services.AddMvc();
	services.Configure<AppConfig>(Configuration);
	services.AddScoped<AuthenticationFilterAttribute>();
}

If the /secure/person/all endpoint is queried without an Authorization header, the application responds with 500 Internal Server Error. If the header is present, all persons are retrieved.

Middleware

Middleware is software that is assembled into an application pipeline to handle requests and responses. Each component chooses whether to pass the request to the next component in the pipeline or to perform work before that. More on middleware can be found in ASP.NET Core Middleware Fundamentals. In the current example middleware is used to handle exceptions better. In the previous section AuthenticationFilterAttribute was throwing an exception, which was transformed into a 500 Internal Server Error, which is not pretty. In the case of an unauthorised request the application should return 401 Unauthorized. In order to do this the following files are needed:

  • HttpException – custom exception which then will be caught and processed in HttpExceptionMiddleware.
  • HttpExceptionMiddleware – this is where handling happens. Code checks for custom HttpException and if such is thrown pipeline changes HttpContext.Response object with proper values.
  • AuthenticationFilterAttribute – instead of Exception, the filter attribute throws new HttpException(HttpStatusCode.Unauthorized). This way the middleware gets invoked.
  • Startup – the middleware is registered here with app.UseMiddleware<HttpExceptionMiddleware>(). It is extremely important that this stands before app.UseMvc(), otherwise it will not work.

HttpException

using System;
using System.Net;

namespace SampleDotNetCore2RestStub.Exceptions
{
	public class HttpException : Exception
	{
		public int StatusCode { get; }

		public HttpException(HttpStatusCode httpStatusCode)
			: base(httpStatusCode.ToString())
		{
			this.StatusCode = (int)httpStatusCode;
		}
	}
}

HttpExceptionMiddleware

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Features;
using SampleDotNetCore2RestStub.Exceptions;

namespace SampleDotNetCore2RestStub.Middleware
{
	public class HttpExceptionMiddleware
	{
		private readonly RequestDelegate _next;

		public HttpExceptionMiddleware(RequestDelegate next)
		{
			_next = next;
		}

		public async Task Invoke(HttpContext context)
		{
			try
			{
				await _next.Invoke(context);
			}
			catch (HttpException httpException)
			{
				context.Response.StatusCode = httpException.StatusCode;
				var feature = context.Features.Get<IHttpResponseFeature>();
				feature.ReasonPhrase = httpException.Message;
			}
		}
	}
}

AuthenticationFilterAttribute

public override void OnActionExecuting(ActionExecutingContext context)
{
	string authKey = context.HttpContext.Request
			.Headers["Authorization"].SingleOrDefault();

	if (string.IsNullOrWhiteSpace(authKey))
		throw new HttpException(HttpStatusCode.Unauthorized);
}

Startup

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
	app.UseMiddleware<HttpExceptionMiddleware>();
	app.UseMvc();
}

Dependency Injection

So far there is a running service with basic functionality. It is missing a very important bit though, something that should have been considered and added earlier. Actually, it was added, but only when registering AuthenticationFilterAttribute; here I will go into more detail. Dependency injection (DI) is a technique for achieving loose coupling between objects and their dependencies. Rather than directly instantiating objects or using static references, the objects a class needs are provided to the class in some fashion. ASP.NET Core provides its own dependency injection mechanism; read more in Introduction to Dependency Injection in ASP.NET Core. The code will now get refactored to match this pattern.

  • IPersonRepository – all database operations are declared in this interface.
  • PersonRepository – implements all methods of the IPersonRepository interface. It still has no real interaction with a database; data is kept in a dictionary. The refactoring here is that all static methods are removed, so in order to use this class you need an instance of it. Sample data is populated on object creation in its constructor.
  • SecurePersonController – an instance of an implementation of IPersonRepository is passed through the constructor and used internally. By using interfaces, a level of abstraction is achieved where multiple implementations may be used for the same interface.
  • PersonController – same as SecurePersonController.
  • Startup – this is where DI is used to register that PersonRepository is the implementation of IPersonRepository: services.AddSingleton<IPersonRepository, PersonRepository>().

Three different object life scopes are available in .NET Core DI. It is important to know the difference in order to use them properly. If object creation is an expensive operation, misuse of the DI lifetime scopes might be crucial for performance; a short sketch after the list illustrates all three:

  • AddSingleton – only one instance is created for the whole application. In the example above PersonRepository needs one instance, because the sample data is initialised in the constructor.
  • AddScoped – one instance is created per HTTP request scope.
  • AddTransient – an instance is created every time one is needed. Let's say there are 3 places where an object is needed and an HTTP request comes to the application: AddTransient will create 3 different objects, while AddScoped will create just one that is used for the whole HTTP request scope.
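
A minimal sketch of all three registrations (ISomeService and SomeService are hypothetical; the other two registrations are the ones used in this post):

public void ConfigureServices(IServiceCollection services)
{
	// One instance for the whole application lifetime
	services.AddSingleton<IPersonRepository, PersonRepository>();

	// One instance per HTTP request scope
	services.AddScoped<AuthenticationFilterAttribute>();

	// A new instance every time the dependency is resolved
	services.AddTransient<ISomeService, SomeService>(); // hypothetical service
}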

IPersonRepository

using System.Collections.Generic;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Repositories
{
	public interface IPersonRepository
	{
		Person GetById(int id);
		List<Person> GetAll();
		int GetCount();
		void Remove();
		string Save(Person person);
	}
}

PersonRepository

using System.Collections.Generic;
using System.Linq;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Repositories
{
	public class PersonRepository : IPersonRepository
	{
		private Dictionary<int, Person> _persons 
						= new Dictionary<int, Person>();

		public PersonRepository()
		{
			_persons.Add(1, new Person
			{
				Id = 1,
				FirstName = "FN1",
				LastName = "LN1",
				Email = "email1@email.na"
			});
			_persons.Add(2, new Person
			{
				Id = 2,
				FirstName = "FN2",
				LastName = "LN2",
				Email = "email2@email.na"
			});
		}

		public Person GetById(int id)
		{
			return _persons[id];
		}

		public List<Person> GetAll()
		{
			return _persons.Values.ToList();
		}

		public int GetCount()
		{
			return _persons.Count();
		}

		public void Remove()
		{
			if (_persons.Keys.Any())
			{
				_persons.Remove(_persons.Keys.Last());
			}
		}

		public string Save(Person person)
		{
			if (_persons.ContainsKey(person.Id))
			{
				_persons[person.Id] = person;
				return "Updated Person with id=" + person.Id;
			}
			else
			{
				_persons.Add(person.Id, person);
				return "Added Person with id=" + person.Id;
			}
		}
	}
}

SecurePersonController

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using SampleDotNetCore2RestStub.Attributes;
using SampleDotNetCore2RestStub.Models;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Controllers
{
	[ServiceFilter(typeof(AuthenticationFilterAttribute))]
	public class SecurePersonController : Controller
	{
		private readonly IPersonRepository _personRepository;

		public SecurePersonController(IPersonRepository personRepository)
		{
			_personRepository = personRepository;
		}

		[HttpGet("secure/person/all")]
		public List<Person> GetPersons()
		{
			return _personRepository.GetAll();
		}
	}
}

Startup

public void ConfigureServices(IServiceCollection services)
{
	services.AddMvc();
	services.Configure<AppConfig>(Configuration);
	services.AddScoped<AuthenticationFilterAttribute>();
	services.AddSingleton<IPersonRepository, PersonRepository>();
}

Docker file

The Docker file that packs the application is shown below:

FROM microsoft/dotnet:2.0-sdk
COPY pub/ /root/
WORKDIR /root/
ENV ASPNETCORE_URLS="http://*:80"
EXPOSE 80/tcp
ENTRYPOINT ["dotnet", "SampleDotNetCore2RestStub.dll"]

The Docker image used is microsoft/dotnet:2.0-sdk. Everything from the pub folder is copied to the container's root folder. ASPNETCORE_URLS is used to set the URLs that the server listens on by default. The current config runs and exposes the application at port 80 in the container. ENTRYPOINT configures the command that is run when the container is started.

Build, package and run Docker

The application is built and published in Release mode into the pub folder with the following command:

dotnet publish --configuration=Release -o pub

The Docker image is built with the tag netcore-rest using the following command:

docker build . -t netcore-rest

The Docker container is run, exposing port 80 from the container as port 9000 on the host, with the following command:

docker run -e Version=1.1 -p 9000:80 netcore-rest

Notice the -e Version=1.1, which sets an environment variable to be used inside the container. The intention is to use this variable in the application. This is enabled by modifying the Startup.cs file and adding AddEnvironmentVariables():

public Startup()
{
	var configurationBuilder = new ConfigurationBuilder()
		.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
		.AddEnvironmentVariables();

	Configuration = configurationBuilder.Build();
}

If /api/version is invoked now, it returns 1.1.

Docker optimisation

When the container is packed with microsoft/dotnet:2.0-sdk it gets to a size of 1.7GB, which is quite a lot. There is a much leaner image, microsoft/dotnet:2.0-runtime, but it requires all runtime assemblies to be present in the pub folder. This can be done by changing the csproj file and adding PublishWithAspNetCoreTargetManifest = false:

<PropertyGroup>
	<OutputType>Exe</OutputType>
	<TargetFramework>netcoreapp2.0</TargetFramework>
	<PublishWithAspNetCoreTargetManifest>false</PublishWithAspNetCoreTargetManifest> 
</PropertyGroup>
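
The base image of the Dockerfile is then swapped for the runtime one; everything else stays the same as in the Dockerfile shown earlier:

FROM microsoft/dotnet:2.0-runtime
COPY pub/ /root/
WORKDIR /root/
ENV ASPNETCORE_URLS="http://*:80"
EXPOSE 80/tcp
ENTRYPOINT ["dotnet", "SampleDotNetCore2RestStub.dll"]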

This makes the pub folder about 37MB and brings the image size down to 258MB. The problem with this proposal is that it might not be very reliable, as some assemblies might not get copied or might not be the correct version.

Since Docker keeps layers in the repository, the proposed optimisation might turn out not to be an actual optimisation. It can consume much more space in the repository, since the layer that changes and is always saved is 258MB. Layers with the OS might not change often, if they change at all.

Testing

How the given application can be integration tested is described in the .NET Core integration testing and mock dependencies post.

Conclusion

In this tutorial I have shown how to create an API from scratch with the .NET Core 2.0 SDK on any platform. It is very easy to run a .NET Core application and even to run it in Docker with a Linux container.


Partial JSON deserialize by JsonPath with Json.NET


Post summary: Code examples showing how to deserialize only part of a big JSON file by JsonPath when using Newtonsoft Json.NET.

Code shown in the examples below is available in the GitHub DotNetSamples/JsonPathConverter repository.

Use case description

Imagine you have a big JSON which you want to deserialize into a C# object.

{
  "node1": {
    "node1node1": "node1node1value",
    "node1node2": [ "value1", "value2" ],
    "node1node3": {
      "node1node3node1": "node1node3node1value"
    }
  },
  "node2": true,
  "node3": {
    "node3node1": "node3SubNode1Value",
    "node3node2": {
      "node3node2node1": {
        "node3node2node1node1": [ 1, 2, 3 ]
      },
      "node3node2node2": "node3node2node1value"
    }
  },
  "node4": "{\"node4node1\": \"n4n1value\", \"node4node2\": \"n4n1value\"}"
}

The file above is actually pretty small and used for demo purposes. In practice you can stumble upon terrifyingly big JSON files. Newtonsoft.Json, or Json.NET, is the de facto JSON standard for .NET, so it is being used to parse the JSON file. In order to deserialize this JSON to a C# object you need a model class that represents the JSON nodes. With immense effort you could create such a class, but why bother if you are going to use just a fraction of all the JSON data. This is where JsonPath comes into play. Json.NET allows you to query JSON by JsonPath, so one option is to manually query the JSON, find the data you need and assign it to your C# object (a rough sketch of this is shown below). This is not an elegant solution. Since querying by JsonPath is possible, it can be used in a JsonConverter that will automatically do the job. What is needed is a custom JsonPathConverter and a model class to deserialize to; both are described below.
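
For illustration, the manual querying could look roughly like this (the file name is hypothetical):

using System.IO;
using Newtonsoft.Json.Linq;

JObject jObject = JObject.Parse(File.ReadAllText("jsonFile.json"));
// Query single values directly by JsonPath
bool node2 = (bool)jObject.SelectToken("node2");
string deepValue = (string)jObject.SelectToken("node3.node3node2.node3node2node2");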

JSON model class

It is easier to describe the JSON model first. Below is the code for a JSON model class that collects only the data we need.

using System.Collections.Generic;
using Newtonsoft.Json;

namespace JsonPathConverter
{
	[JsonConverter(typeof(JsonPathConverter))]
	public class JsonModel
	{
		[JsonProperty("node1.node1node2")]
		public IList<string> Node1Array { get; set; }

		[JsonProperty("node2")]
		public bool Node2 { get; set; }

		[JsonProperty("node3.node3node2.node3node2node1.node3node2node1node1")]
		public IList<int> Node3Array { get; set; }

		[JsonConverter(typeof(JsonPathConverter))]
		[JsonProperty("node4")]
		public NestedJsonModel Node4 { get; set; }
	}

	public class NestedJsonModel
	{
		[JsonProperty("node4node2")]
		public string NestedNode2 { get; set; }
	}
}

The JSON model class is annotated with [JsonConverter(typeof(JsonPathConverter))], which tells Json.NET to use the JsonPathConverter class to do the conversion. JsonPathConverter is implemented in such a way that JsonProperty is mandatory for each property in order to be parsed: [JsonProperty("node1.node1node2")].

JSON as a string

You may have noticed already the weird case where node4 in the JSON file actually has a string value which is an escaped JSON string. This is unusual and may not be good programming practice, but I've encountered it in production code, so the examples given here cover this weirdo as well. There is a special NestedJsonModel class which this JSON string is deserialized to.

JsonPathConverter

The code below extends the JsonConverter abstract class and implements the needed methods.

using System;
using System.Linq;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public class JsonPathConverter : JsonConverter
{
	public override bool CanWrite => false;

	public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
	{
		var jObject = JObject.Load(reader);
		var targetObj = Activator.CreateInstance(objectType);

		foreach (var prop in objectType.GetProperties().Where(p => p.CanRead && p.CanWrite))
		{
			var jsonPropertyAttr = prop.GetCustomAttributes(true).OfType<JsonPropertyAttribute>().FirstOrDefault();
			if (jsonPropertyAttr == null)
			{
				throw new JsonReaderException($"{nameof(JsonPropertyAttribute)} is mandatory when using {nameof(JsonPathConverter)}");
			}

			var jsonPath = jsonPropertyAttr.PropertyName;
			var token = jObject.SelectToken(jsonPath);

			if (token != null && token.Type != JTokenType.Null)
			{
				var jsonConverterAttr = prop.GetCustomAttributes(true).OfType<JsonConverterAttribute>().FirstOrDefault();
				object value;
				if (jsonConverterAttr == null)
				{
					serializer.Converters.Clear();
					value = token.ToObject(prop.PropertyType, serializer);
				}
				else
				{
					value = JsonConvert.DeserializeObject(token.ToString(), prop.PropertyType,
						(JsonConverter)Activator.CreateInstance(jsonConverterAttr.ConverterType));
				}
				prop.SetValue(targetObj, value, null);
			}
		}

		return targetObj;
	}

	public override bool CanConvert(Type objectType)
	{
		return true;
	}

	public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
	{
		throw new NotImplementedException();
	}
}

Deserialization work is done in the public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer) method. The JSON is loaded into a Newtonsoft JObject and an instance of the result object is created. All properties of this result object are iterated in a foreach loop. It is important to note that properties should have both get and set in order to be considered in deserialization: objectType.GetProperties().Where(p => p.CanRead && p.CanWrite). If you have properties with just get or just set they will be ignored. The JsonPropertyAttribute for each property is taken. If there is no such attribute, an exception is thrown. This part can be changed so that the JsonPath falls back to the property name: var jsonPath = jsonPropertyAttr == null ? prop.Name : jsonPropertyAttr.PropertyName. This is tricky though, as C# is case sensitive and it might not work: the property could start with a capital letter while the JSON itself is lower case. Once there is a JsonPath defined, the JObject is queried with jObject.SelectToken(jsonPath). This should return a valid token. In case of a valid token, the result object property is checked for a JsonConverterAttribute. If such exists, the JSON is deserialized with this newly found JsonConverter instance. If there is no converter attached to this property, all existing converters are cleared and the token is converted into an object. The clearing part is important, as otherwise a recursive call would throw an exception.

Usage

Once the job above is done, usage is pretty easy:

var fileContent = File.ReadAllText("jsonFile.json");
var result = JsonConvert.DeserializeObject<JsonModel>(fileContent);

result.Node1Array.Should().BeEquivalentTo(new List<string> {"value1", "value2"});
result.Node2.Should().Be(true);
result.Node3Array.Should().BeEquivalentTo(new List<int> { 1, 2, 3 });
result.Node4.NestedNode2.Should().Be("n4n1value");

Conclusion

In this post I have shown how to partially deserialize JSON by JsonPath, picking only the data that you need.


Soft assertions for C# unit testing frameworks (MSTest, NUnit, xUnit.net)


Post summary: Code example of a very easy and useful custom implementation of soft assertions in C# unit testing frameworks such as MSTest, NUnit or xUnit.net.

Code shown in the examples below is available in the GitHub DotNetSamples/SoftAssertions repository.

Unit vs Functional testing

The unit testing paradigm states that each test exercises a particular code behaviour. So in a perfect world one unit test would have one assertion which defines the unit test result – either passed or failed. This is why unit testing frameworks provide only asserts which stop further execution of the current test method. In functional testing one test usually verifies several conditions. I'm not debating whether this is good or bad. Assume you are doing GUI testing; once you have a particular page opened, you'd better do as much verification as possible to reduce the risk of bugs. Having this page opened over and over for each single check is not the most efficient way of testing. This is why when you run functional tests you need some kind of assert that indicates whether a check passed or failed but lets the test continue if no critical issue is present. Those are generally called "soft" asserts.

Soft assertions code

The following code is an implementation of soft assertions:

using System.Collections.Generic;
using System.Linq;
using FluentAssertions;

public class SoftAssertions
{
	private readonly List<SingleAssert> 
		_verifications = new List<SingleAssert>();

	public void Add(string message, string expected, string actual)
	{
		_verifications.Add(new SingleAssert(message, expected, actual));
	}

	public void Add(string message, bool expected, bool actual)
	{
		Add(message, expected.ToString(), actual.ToString());
	}

	public void Add(string message, int expected, int actual)
	{
		Add(message, expected.ToString(), actual.ToString());
	}

	public void AddTrue(string message, bool actual)
	{
		_verifications
			.Add(new SingleAssert(message, true.ToString(), actual.ToString()));
	}

	public void AssertAll()
	{
		var failed = _verifications.Where(v => v.Failed).ToList();
		failed.Should().BeEmpty();
	}

	private class SingleAssert
	{
		private readonly string _message;
		private readonly string _expected;
		private readonly string _actual;

		public bool Failed => _expected != _actual;

		public SingleAssert(string message, string expected, string actual)
		{
			_message = message;
			_expected = expected;
			_actual = actual;
		}

		public override string ToString()
		{
			return $"'{_message}' assert was expected to be '{_expected}' " +
				$"but was '{_actual}'";
		}
	}
}

Soft assertions details

The actual assertion is handled by the SingleAssert class. It contains a message to be displayed to the user in case of failure, as well as the expected and actual values. They are stored as strings. All asserts during testing are stored in a List<SingleAssert>. There are several methods that add an assert, accepting bool, string and int. You can extend the class and add as many as you want, as shown in the sketch below. It is mandatory to call the AssertAll() method so the asserts can be evaluated. Evaluation consists of filtering out passed asserts, leaving only the failed ones: var failed = _verifications.Where(v => v.Failed).ToList(). Then the list of failed asserts is checked for emptiness: failed.Should().BeEmpty(). In this case the FluentAssertions framework is used, but the code can be changed to whatever suits your particular needs.
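
For example, an overload for double values can be added in the same fashion (a sketch that follows the pattern of the class above):

public void Add(string message, double expected, double actual)
{
	Add(message, expected.ToString(), actual.ToString());
}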

Soft assertions usage

Usage is pretty straightforward. The SoftAssertions object should be created before each test and asserted after each test:

[TestClass]
public class UnitTest
{
	private SoftAssertions _softAssertions;

	[TestInitialize]
	public void SetUp()
	{
		_softAssertions = new SoftAssertions();
	}

	[TestCleanup]
	public void TearDown()
	{
		_softAssertions.AssertAll();
	}

	[TestMethod]
	public void TestMixedSoftAssertions()
	{
		_softAssertions.Add("Passing bool Add assertion", true, true);
		_softAssertions.Add("Failing bool Add assertion", true, false);
		_softAssertions
			.Add("Passing string Add assertion", "SameString", "SameString");
		_softAssertions
			.Add("Failing string Add assertion", "SameString", "OtherString");
		_softAssertions.Add("Passing int Add assertion", 1, 1);
		_softAssertions.Add("Failing int Add assertion", 1, 2);
		_softAssertions.AddTrue("Passing AddTrue assertion", true);
		_softAssertions.AddTrue("Failing AddTrue assertion", false);
	}
}
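
The same pattern applies in NUnit; only the attributes change (a sketch):

using NUnit.Framework;

[TestFixture]
public class NUnitSoftAssertionsTest
{
	private SoftAssertions _softAssertions;

	[SetUp]
	public void SetUp()
	{
		_softAssertions = new SoftAssertions();
	}

	[TearDown]
	public void TearDown()
	{
		_softAssertions.AssertAll();
	}

	[Test]
	public void TestSoftAssertions()
	{
		_softAssertions.Add("Failing int Add assertion", 1, 2);
	}
}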

Soft assertions result

The result of the test shown above is: Result Message: Expected collection to be empty, but found {‘Failing bool Add assertion’ assert was expected to be ‘True’ but was ‘False’, ‘Failing string Add assertion’ assert was expected to be ‘SameString’ but was ‘OtherString’, ‘Failing int Add assertion’ assert was expected to be ‘1’ but was ‘2’, ‘Failing AddTrue assertion’ assert was expected to be ‘True’ but was ‘False’}.

This comes out of the box because FluentAssertions is used. Otherwise you would have to implement some other output and assertion mechanism.

Other soft assertions

A custom implementation of soft assertions is also available in the NTestsRunner framework, but it is more complex and demands a special approach for writing tests.

Conclusion

Soft assertions are very useful in functional testing. With this simple class you can directly have them in your functional tests.


Convert NUnit 3 to NUnit 2 results XML file


Post summary: Examples of how to convert an NUnit 3 result XML file into an NUnit 2 result XML file.

Although NUnit 3 was officially released in November 2015, there are still CI tools that do not support parsing NUnit 3 result XML files. In this post I will show how to convert between the formats so CI tools can read NUnit 2 format.

NUnit 3 console runner

The easiest way is if you are using the NUnit 3 console runner. It can be provided with an option: --result=TestResult.xml;format=nunit2.

Nota bene: It is mandatory for this to work to have nunit-v2-result-writer in the NuGet packages directory, otherwise an error will be shown: Unknown result format: nunit2.
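
For example (the test assembly name is hypothetical):

nunit3-console.exe MyTests.dll "--result=TestResult.xml;format=nunit2"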

Convert NUnit 3 to NUnit 2

If tests are being run in some way other than the NUnit 3 console runner then the solution below is needed. There is no program or tool that can do this conversion, so a custom one is needed. This is a PowerShell script that uses the nunit-v2-result-writer assemblies and their functionality to convert the XML files:

$assemblyNunitEngine = 'nunit.engine.api.dll';
$assemblyNunitWriter = 'nunit-v2-result-writer.dll';
$inputV3Xml = 'TestResult.xml';
$outputV2Xml = 'TestResultV2.xml';

Add-Type -Path $assemblyNunitEngine;
Add-Type -Path $assemblyNunitWriter;
$xmldoc = New-Object -TypeName System.Xml.XmlDataDocument;
$fs = New-Object -TypeName System.IO.FileStream -ArgumentList $inputV3Xml,'Open','Read';
$xmldoc.Load($fs);
$xmlnode = $xmldoc.GetElementsByTagName('test-run').Item(0);
$writer = New-Object -TypeName NUnit.Engine.Addins.NUnit2XmlResultWriter;
$writer.WriteResultFile($xmlnode, $outputV2Xml);

Important here is to give the proper paths to the nunit.engine.api.dll, nunit-v2-result-writer.dll and NUnit 3 TestResult.xml files. The PowerShell script above is equivalent to the following C# code:

using System.IO;
using System.Xml;
using NUnit.Engine.Addins;

public class NUnit3ToNUnit2Converter
{
	public static void Main(string[] args)
	{
		var xmldoc = new XmlDataDocument();
		var fileStream 
			= new FileStream("TestResult.xml", FileMode.Open, FileAccess.Read);
		xmldoc.Load(fileStream);
		var xmlnode = xmldoc.GetElementsByTagName("test-run").Item(0);

		var writer = new NUnit2XmlResultWriter();
		writer.WriteResultFile(xmlnode, "TestResultV2.xml");
	}
}

File samples

Here NUnitFileSamples.zip is a collection of several NUnit result files. Those with V3 are in NUnit 3 format, those with V2_NUnit are generated with the --result=TestResult.xml;format=nunit2 option and those with V2_Converted are converted with the code above.

Conclusion

Although a little inconvenient, it is possible to convert NUnit 3 to NUnit 2 result XML files using a PowerShell script and the nunit-v2-result-writer assemblies.


Code coverage of manual or automated tests with OpenCover for .NET applications


Post summary: Examples of how to do code coverage of manual or automated functional tests with the OpenCover tool for .NET applications.

Code coverage

This topic is how to do code coverage of .NET applications with OpenCover. Theory on what code coverage is and why it is needed can be found in the What about code coverage post.

OpenCover

OpenCover is an open source code coverage tool for .NET 2.0 and above applications, for Windows only. With OpenCover, instrumentation of the code is not needed. The application is started through OpenCover and it collects coverage results. What is mandatory though is a PDB file along with the executables and assemblies, so the application under test should be built in Debug mode. If a PDB file is not found then no coverage data will be gathered.

The latest version can be downloaded from Releases in GitHub. There is an installer and a zip archive. If the installer is used, by default OpenCover is installed in C:\Users\{USER_ACCOUNT}\AppData\Local\Apps\OpenCover. If you want to change this, click the Advanced button during installation and then select Install for all users on this machine.

How to use OpenCover

A usage guide can be found in the OpenCover usage reference online. Also, along with the OpenCover installation there is a Usage.rtf file which holds all the information about the tool. The most useful commands are listed below:

  • -target: – path to application executable file or name of service
  • -filter: – list of filters to apply to selectively include or exclude assemblies and classes from coverage results
  • -output: – path to output XML file, if empty then results.xml will be created in current directory
  • -register[:user] – register and de-register the code coverage profiler
  • -targetargs: – arguments to be passed to the target process
  • -targetdir: – path to the target directory or alternative path to PDB files

ReportGenerator

OpenCover produces results in a raw format, which is not meant for humans. ReportGenerator is used to convert XML reports generated by OpenCover, PartCover, Visual Studio or NCover into human-readable reports in various formats. A usage guide can be found on its home page; the most useful commands are:

  • -reports: – coverage reports that should be parsed, semicolon separated, wildcards are allowed
  • -targetdir: – directory where the generated report should be saved
  • -sourcedirs:<dir>[;<dir>][;<dir>] – directories which contain the corresponding source code, optional, semicolon separated
  • -classfilters:<(+|-)filter>[;<(+|-)filter>][;<(+|-)filter>] – list of classes that should be included or excluded in the report, optional, wildcards are allowed.

Hands on examples on manual code coverage

In order to try it you need to check out the code samples from the GitHub SampleApp or SampleAppPlus repository to C:\. Telerik Testing Framework needs to be installed as it copies lots of assemblies into the GAC. OpenCover and ReportGenerator should also be installed to C:\.

With the current setup, the command to start SampleAppPlus.exe from OpenCover is:

C:\OpenCover\OpenCover.Console.exe
	-target:"C:\SampleAppPlus\SampleAppPlus\bin\Debug\SampleAppPlus.exe"
	-output:C:\SampleAppPlus\CoverageReports\SampleAppPlus.results.xml
	-register:user

Now the application is started and manual functional tests can be executed. Once the application is stopped, coverage results are saved in the SampleAppPlus.results.xml file.

Hands on examples on automated code coverage

Since automation is the future of QA and we have already created automated tests for both SampleApp and SampleAppPlus, we want to measure how our tests perform on code coverage. Automated tests run the application, attach to it and manipulate it. So it seems obvious just to use the command from the manual example and start the application with it. It will not work though, since the command starts and returns the OpenCover process, not the underlying SampleAppPlus one. Extra code is needed in the SampleAppPlus.Tests.Framework\Tests\BaseTest.cs file. Instead of:

Application appWhite = Application.Launch(applicationPath);

the following code has to be added:

Process sampleAppPlus = StartProcess();
Application appWhite = Application.Attach(sampleAppPlus);

where the StartProcess() method is:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;

private Process StartProcess()
{
	string appName = "SampleAppPlus";
	string openCover = @"C:\OpenCover\OpenCover.Console.exe";
	long timeStamp = (long)(DateTime.UtcNow 
						- new DateTime(1970, 1, 1)).TotalMilliseconds;

	List<string> arguments = new List<string>();
	arguments.Add(@"-target:C:\SampleAppPlus\SampleAppPlus\bin\Debug\"
					+ appName + ".exe");
	arguments.Add(@"-output:C:\SampleAppPlus\CoverageReports\SampelAppPlus."
					+ timeStamp + ".xml");
	arguments.Add("-register:user");

	Process process = new Process();
	process.StartInfo.FileName = openCover;
	process.StartInfo.Arguments = string.Join(" ", arguments);
	process.Start();

	Thread.Sleep(5000);
	return Process.GetProcesses().First(proc => appName == proc.ProcessName);
}

The OpenCover arguments -target, -output and -register are used. Note that the -output file name is always different, as the current Unix time is added to it; this is to prevent overwriting of files. The idea is to run OpenCover, which will start SampleAppPlus. Wait 5 seconds to ensure the application is up, then get all Windows processes with Process.GetProcesses(), iterate them, and find and return the needed SampleAppPlus process which TestStack White and Telerik Testing Framework will attach to.

Create report

Once the tests are run and the OpenCover XML report files are generated, it is time to generate human-readable reports. This is done with ReportGenerator with the command:

C:\ReportGenerator\bin\ReportGenerator.exe
	-reports:C:\SampleAppPlus\CoverageReports\SampleAppPlus.*.xml
	-targetdir:C:\SampleAppPlus\CoverageReports\html
	-sourcedirs:C:\SampleAppPlus\
	-classfilters:-SampleAppPlus.Properties.*


Inspect report

The coverage report file from the examples above can be found in the OpenCover code coverage report. Inspecting the report, there is missed code in the SampleAppPlus.MainWindow class – the else branch of the if ((bool)openFileDialog.ShowDialog()) condition is not covered. The documentation of this method states that it returns false if the Cancel button of the dialog window is clicked. In order to increase the coverage, a test that clicks the Cancel button and verifies no upload is done should be added to the test suite.

Code coverage for IIS web application or Windows service

The examples above show how to run a normal Windows application. This is valid for both UI and console applications, as they are started with a single EXE file. OpenCover can also work for IIS web applications, Silverlight applications and Windows service applications. More details can be found in the documentation accompanying the OpenCover installation.

Although it is possible to connect to a running service, I have done code coverage on a Windows service in the manner suggested in the documentation – run the service as a console application. Since debugging a running Windows service is not that straightforward a task, developers have most likely already implemented a switch to start the service as a console application. If not, you will ease their lives by asking them to do so.

Conclusion

OpenCover is the only open source code coverage tool for .NET applications. It is really powerful and easy to use. No code instrumentation is needed; just build the code in Debug mode to have PDB files and run the application through OpenCover. It can also be used for measuring code coverage of unit tests.


FIX messages simulator


Post summary: General thoughts on how to test FIX messages with a FIX simulator.

FIX stands for Financial Information eXchange and is probably the most used protocol for exchanging electronic messages in the financial world.

FIX sessions

In order to exchange data, systems in the financial world use FIX messages. The first important step before exchanging messages is to establish a connection with each other. There are two roles for the counterparties:

  • Acceptor – acts as a server. Stays and listens for clients that will connect
  • Initiator – acts as a client. It is the active participant in the connection

Once the roles are defined, a FIX session should be established between both. They can maintain as many different sessions as they need. A session is uniquely identified by the FIX protocol version and the IDs of both counterparties.

A session is started by the initiator sending a Logon request and receiving a Logon acknowledgement. The session is kept alive by both sides; they send Heartbeat messages to each other. The session is ended by a Logout message sent by one of the counterparties; the other acknowledges the Logout.

FIX messages

Once all the ceremony of setting up a session is done and the session is alive, both counterparties can start exchanging FIX messages with data. A FIX message, in short, is a string containing key-value pairs in the format <TagNumber>=<Value>, separated by the SOH (\u0001) character. Each tag has a name and, in some cases, a vast specification of what values it accepts, what each value means, etc. FIXimate is a pretty good online tool which gives information about tags. Each tag is represented by its integer number in the message. Tags are grouped into three parts of the message:

  • Header – contains information needed for session identification
  • Body – contains business data
  • Trailer – checksum for message validation

Example where SOH is replaced with |

8=FIX.4.2|9=76|35=A|49=Initiator|56=Acceptor|34=1|52=20150321-15:39:28.762|98=0|108=30|141=Y|10=187|
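
Given this format, splitting a raw message into tag/value pairs is straightforward. Below is a minimal C# sketch, not a full FIX parser – repeating groups, for example, need special handling:

using System;
using System.Collections.Generic;

public static class FixParser
{
	public static Dictionary<string, string> Parse(string rawMessage)
	{
		var tags = new Dictionary<string, string>();
		// Tag=Value pairs are separated by the SOH (\u0001) character
		foreach (string pair in rawMessage.Split(
			new[] { '\u0001' }, StringSplitOptions.RemoveEmptyEntries))
		{
			string[] parts = pair.Split(new[] { '=' }, 2);
			if (parts.Length == 2)
			{
				tags[parts[0]] = parts[1];
			}
		}
		return tags;
	}
}

Parsing the Logon message above would, for example, yield "A" for tag "35".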

Testing of FIX message scenarios

Depending on the business logic behind them, actual testing of FIX messages can be a pretty complicated task. Each message alone can have tens of tags with data. Each tag is meaningful, and the system can react in a different manner based on its value. Each message can depend on a previous message. Combinations and complex business logic can make thorough testing a very difficult task.

How to do it

There are numerous FIX simulators available out there. They will work for most cases, but since FIX communication can be really boutique, in some cases an external tool is not a good idea. An internal tool is what I suggest, and I propose a solution in the current post.

Visualise the message

This is maybe the hardest part of everything: how to show the messages so they are easy to comprehend and edit. Obviously editing a long string of user-unfriendly data is not the best solution. I admit it is not the perfect solution, but I couldn't think of anything better than good old-fashioned Excel. In short, each column is a specific tag. Each row is a single FIX message, and you fill in the tag values for this particular FIX message. There can be separation rows to delimit different test scenarios.

Convert the messages

Once the Excel test cases are ready, they are exported with a macro to a special XML format. The XML is then read by the FIX simulator and converted to real FIX messages.

Send the messages

The FIX messages are then sent on the wire. In order to send them you need a FIX engine. Very popular is the QuickFIX engine; it has Java and .NET versions. There are example applications which help with better understanding how to use the engine.

Conclusion

Testing FIX messages based on business rules is definitely a very important task. It also is not the simplest task you might think of as a QA. Still, it is achievable. I would say creating a custom solution will initially take some time, but in the long run it will pay off, as you will have total control over the features you have and need.


What about code coverage


Post summary: Code coverage is a measurement of what percentage of the program source code is being executed by your tests. Code coverage is nice to have, but in no case make code coverage the ultimate goal!

Code coverage is a useful metric to check what part of the code your tests are exercising. It really depends on the tools used for gathering code coverage metrics, but in general they work in a similar fashion.

How it works

One approach is to instrument the application under test. Instrumentation means modifying the original executables by adding metadata so the code coverage tool is able to understand which line of code is being executed. The other option is to run the application through the code coverage tool. In both cases, once the application is running, the tests are executed.

What tests to run

Tests that can be run to measure code coverage can be unit tests, functional automation tests or even manual tests. It doesn't matter; the important part is to see how our tests are impacting the application under test.

Results

Once the tests are finished, code coverage metrics are obtained from the code coverage tool. To give detailed information, most of the tools take the original source code and generate a visual report with colour information about which lines were executed and which were not. There are different levels of coverage: method, line, code block, etc. I personally prefer lines, and when speaking further I'll mean lines of code executed.

Benefits

Code coverage information is equally useful for QA and developers. QA analyse the code that has not been executed during tests to identify what test conditions they are missing and improve their tests. Developers analyse the results to identify and remove dead or unused code.

When to do it

Code coverage of unit tests is good to be done on each build. Those runs can be scheduled in continuous integration jobs and run unattended. Code coverage of automated or manual tests is more of a nice-to-have activity. We can live with or without it. It is useful for big, mature products where there are automation test suites. You can also run it against your test code in order to optimise it. Removing dead code optimises the product and makes its maintenance easier. I would say doing it too often should be avoided. Everything depends on the context, but for me the best is once or twice a year.

What does the code coverage percentage mean

In one word – nothing. You may have 30% code coverage that covers the most important user functionality and keeps the bug rate low, and you may have 90% with dummy tests made especially to exercise some code without the idea of actually testing it. I was lucky to work on a project where developers kept the code tidy and clean, and my tests easily accomplished 81% just by verifying all user requirements. I would say 80-85% is the maximum you may get.

Pitfalls

I would not recommend putting code coverage as an important measurement or key performance indicator (KPI) in your testing strategy. Doing so and making people try to increase the code coverage will in most cases result in dummy tests made especially to push this percentage up. Code coverage is in most cases an insignificant aspect of your testing strategy.

How to do it

Practical tutorials on how to do code coverage with Java and C# can be found on my blog.

Conclusion

Code coverage is an interesting aspect of testing. It might enhance your tests if done wisely, or it can ruin them. Remember that test scenarios should be extracted from user requirements and features. Code coverage data should be used to see if you have some blind spots in reading the requirements. It can help developers remove dead or unused code.


Advanced WPF automation – memory usage


Post summary: Highlight of an eventual memory issue when using Telerik Testing Framework and TestStack White for desktop automation.

Memory is an important aspect. When you have several test cases it is not a problem, but on large projects with many tests memory turns out to be a serious issue.

Reference

This post is part of the Advanced WPF desktop automation with Telerik Testing Framework and TestStack White series. The sample application can be found in GitHub.

Problem

Like every demo of a certain technology, automating WPF applications looks cool. And like with every technology, problems occur when you start to use it at a large scale. The problem with WPF automation with Telerik Testing Framework and TestStack White is memory. When your tests' number grows, the frameworks start to use too much memory. By too much I mean over 1GB, which might not seem a lot, but for a single process it actually is. Increasing the RAM of the test machine is only a temporary solution and is not one that can be scaled.

Why so much memory

I have a project with 580 tests and 7300 verification points spread over 50 test classes. I've spent lots of hours debugging and profiling with several .NET profiling tools. In the end, all profilers show that the large amount of memory used is in unmanaged objects. So generally there is nothing you can do. It seems like some of the frameworks, or both, have memory issues and do not free all the memory they use.

Solution

The solution is pretty simple to suggest but harder to implement – run each test class in a separate process, as sketched below. I'm using a much more enhanced version of NTestsRunner. It runs each test class in a separate Windows process. Once a test class is finished, results are serialised in the results directory, the process is exited and all memory and objects used for its tests are released.
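
A simplified sketch of the idea, assuming a hypothetical TestClassRunner.exe that runs a single test class given as an argument and serialises its results:

using System.Diagnostics;

public static class PerProcessRunner
{
	public static void Run(string[] testClassNames)
	{
		foreach (string testClass in testClassNames)
		{
			// TestClassRunner.exe is hypothetical; it runs one test class
			// and serialises its results to the results directory
			Process process = Process.Start("TestClassRunner.exe", testClass);
			// When the process exits, all memory it used is released
			process.WaitForExit();
		}
	}
}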

Conclusion

Memory can be a crucial factor in an automation project. Be prepared to have a solution for it. At this point I'm not planning to put running tests in a separate process into NTestsRunner. If there is demand, it is a pretty easy task to do.


Complete guide to email verifications with Automation SMTP Server


Post summary: How to do complete email verification with your own Automation SMTP server.

SMTP is a protocol initially defined in 1982 and still used nowadays. In order to automate an application which sends out emails, you need an SMTP server which reads the messages and saves them to disk for further processing. Note that this is only the case when your application sends emails.

Windows SMTP server

One option is to use the SMTP server provided by Windows. There are two problems here. The first is that from Vista onwards the SMTP server is no longer supported. There is an SMTP server in Windows Server distributions, but the licence for them is more expensive. The second problem comes from the configuration of the server. You might have several machines, and configurations should be maintained on all of them. It is a feasible option to use the Windows SMTP server, but the current post is not dedicated to it.

Automation SMTP Server

What I offer in this post is your own Automation SMTP Server. It is located in the following GitHub project. The solution is actually a mixture of two open source projects. For the server I use Antix SMTP Server For Developers, which is a really good SMTP server. It is a Windows application and is more suitable for manual SMTP testing rather than automation. I've extracted the SMTP core, with some modifications, as a console application which saves emails as EML files on disk. For reading of emails I use the source code of the Easily Retrieve Email Information from .EML Files article, with several modifications. What you need to do in order to make successful email verification is download the executable from GitHub and follow the instructions below. More info can be found on its home page, Automation SMTP Server.

Automation SMTP Server usage

In the GitHub AutomationSMTPServer repository there is an example that shows how to use Automation SMTP Server. The server should be added as a reference to your automation project. Since it is a reference, it gets copied into the compiled executables folder.

Delete recent emails

Before doing anything in your tests it is good to delete old emails. Automation SMTP Server saves mail into a folder named "temp". This is how it works and cannot be changed.

private string currentDir =
	Directory.GetCurrentDirectory() + Path.DirectorySeparatorChar;
private string mailsDir = currentDir + "temp";

if (Directory.Exists(mailsDir))
{
	Directory.Delete(mailsDir, true);
}

Start Automation SMTP Server

The server is a console application. It receives emails and saves them to disk. If the counterparty sends a QUIT message to disconnect, the server gets restarted to wait for the next connection. The server should be started as a process, and the port should be provided as an argument. If not provided, the port can be configured in the SMTP Server config file. If not configured there either, the server prints a message and takes 25 as the default port.

Process smtpServer = new Process();
smtpServer.StartInfo.FileName = currentDir + "AutomationSMTPServer.exe";
smtpServer.StartInfo.Arguments = "25";
smtpServer.Start();

Send emails

This is the point where your application under test is sending emails which you will later verify.

Read emails

Once emails have been sent out from the application under test, you are ready to read and process them.

string[] files = Directory.GetFiles(mailsDir);
List<EMLFile> mails = new List<EMLFile>();

foreach (string file in files)
{
	EMLFile mail = new EMLFile(file);
	mails.Add(mail);
	File.Delete(file);
}

Verify emails

Here you can use the EMLFile class, which parses the EML file and represents it as an object so you can do operations on it. Once you have the mail as an object, you can access all its attributes and verify some of them. It all depends on your testing strategy. Another option is to define an expected EML file, read it and compare actual and expected. The EMLFile class has a predefined Equals method which compares all the attributes of the emails.

bool compare1 = mails[0].Equals(mails[1]);
bool compare2 = mails[0].Equals(mails[2]);
bool compare3 = mails[1].Equals(mails[2]);

Stop Automation SMTP Server

This part is important. If not stopped, the server will continue to work and will block the port. Its architecture is defined in such a manner that the only way to stop it is to terminate the console application. In the case where you have started it from C# code as a process, the way to stop it is to kill the process.

smtpServer.Kill();

Conclusion

Proper email verification can be a challenge. In case your application under test sends emails, I would say it is crucial to have correct email testing, as mail is what customers receive. And in the end it is all about customers! So give it a try and enjoy this easy way of email verification.


Extract and verify text from PDF with C#


Post summary: How to extract text from PDF in C#.

PDF verification is a pretty rare case in automation testing. Still, it can happen.

iTextSharp

iTextSharp is a library that allows you to manipulate PDF files. We need a very small part of this library. It has a built-in reader that iterates through the pages and returns only the text.

using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;
using System.Text;

namespace PDFExtractor
{
	public class PDFExtractor
	{
		public static string ExtractTextFromPDF(string pdfFileName)
		{
			StringBuilder result = new StringBuilder();
			// Create a reader for the given PDF file
			using (PdfReader reader = new PdfReader(pdfFileName))
			{
				// Read pages
				for (int page = 1; page <= reader.NumberOfPages; page++)
				{
					SimpleTextExtractionStrategy strategy =
						new SimpleTextExtractionStrategy();
					string pageText =
						PdfTextExtractor.GetTextFromPage(reader, page, strategy);
					result.Append(pageText);
				}
			}
			return result.ToString();
		}
	}
}

Verification

Once extracted, the text can be verified against the expected one as described in the Text verification post.
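
Usage could then look like this; the file name and the expected phrase are hypothetical:

string pdfText = PDFExtractor.ExtractTextFromPDF("invoice.pdf");
Assert.IsTrue(pdfText.Contains("Thank you for your order"));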


Text verification


Post summary: Verify actual text against expected by ignoring what is not relevant during the compare.

In automation testing there is no definitive way of deciding which text verification is best to be done. One strategy is to check that an expected word or phrase exists in the actual text shown in the application under test. Another strategy is to prepare a large amount of text to verify. The latter strategy is expensive in terms of effort for preparation and maintenance. The first strategy might not be sufficient for correct verifications.

In between

What I suggest here is something in between: not too much, but not too little. The problem with a paragraph of text to be verified is that it might contain data we do not have control over, e.g. date, time, unique values, etc.

Example

Imagine an e-commerce website. When you place an order there is an order confirmation page. You want to verify not only that you are on this page but also that the text is correct as per the specification. Most likely the text will contain data you do not have control over – order number and date. Breaking verification into small chunks is an option. Another option is to manipulate the actual text. A third option is to define the expected text with special strings that will get ignored during the compare.

Actual vs Expected

Actual text could be: “Order 123456 has been successfully placed on 01.01.1970! Thank you for your order. ”
Expected text could be: “Order ~SKIP~ has been successfully placed on ~SKIP~! Thank you for your order. ”
And then you can compare both where ~SKIP~ will be ignored during compare.

Compare code

The code to do the compare shown above is also incorporated in NTestsRunner:

using System.Text.RegularExpressions;

public static class ExtensionMethods
{
	public const string IgnoreDuringCompare = "~SKIP~";

	public static bool EqualsWithIgnore(this string value1, string value2)
	{
		string regexPattern = "(.*?)";
		// If a value is null, set it to empty
		value1 = value1 ?? string.Empty;
		value2 = value2 ?? string.Empty;
		string input = string.Empty;
		string pattern = string.Empty;
		// Unify new line symbols
		value1 = value1.Replace("\r\n", "\n");
		value2 = value2.Replace("\r\n", "\n");
		// If neither value contains the ignore string, compare directly
		if (!value1.Contains(IgnoreDuringCompare) &&
			!value2.Contains(IgnoreDuringCompare))
		{
			return value1.Equals(value2);
		}
		else if (value1.Contains(IgnoreDuringCompare))
		{
			pattern = Regex.Escape(value1).Replace(IgnoreDuringCompare, regexPattern);
			input = value2;
		}
		else if (value2.Contains(IgnoreDuringCompare))
		{
			pattern = Regex.Escape(value2).Replace(IgnoreDuringCompare, regexPattern);
			input = value1;
		}

		Match match = Regex.Match(input, pattern);
		return match.Success;
	}
}

Use in tests

In your tests you will do something like:

string actual = OrderConfirmationPage.GetConfirmationText();
string expected = "Order " + ExtensionMethods.IgnoreDuringCompare +
	" has been successfully placed on " + ExtensionMethods.IgnoreDuringCompare +
	"! Thank you for your order. ";
Assert.IsTrue(actual.EqualsWithIgnore(expected));

Conclusion

It might take a little bit more effort to prepare the expected strings, but verification will be more complete and correct than just expecting a word or a phrase.


Advanced WPF automation – read dependency property


Post summary: What a dependency property in .NET is and how to read it with Telerik Testing Framework.

In this post I'll show an advanced way of getting more details from an object and making your automation more sophisticated.

Reference

This post is part of the Advanced WPF desktop automation with Telerik Testing Framework and TestStack White series. The sample application can be found in the GitHub SampleAppPlus repository.

Dependency property

Dependency properties are an easy way to extend the standard functionality available in the .NET framework. In SampleAppPlus there is a CustomControl defined. The purpose of this control is to store text and visualise this text as an image. The text is stored in a dependency property.

public partial class CustomControl : UserControl
{
	public static readonly DependencyProperty MessageProperty =
			DependencyProperty.Register("Message",
									typeof(string), typeof(CustomControl),
									new PropertyMetadata(OnChange));

	...

	public string Message
	{
		get { return (string)GetValue(MessageProperty); }
		set { SetValue(MessageProperty, value); }
	}

	...

}

Read dependency property

In order to be able to properly automate something, you have to know the internal structure of the application. Generally you will try to locate and read the element, and it will not work in the way you are used to working with elements. At this point you have to inspect the source code of the application under test and see how it is done internally. Most importantly, if a dependency property is used you should know its name. Once you know the name, reading is easy.

public class MainWindow : XamlElementContainer
{
	...

	private UserControl CustomControl_Image
	{
		get
		{
			return Get<UserControl>(mainPath + "CustomControl[0]");
		}
	}

	public Verification VerifyCustomImageText(string expected)
	{
		string actual =
			CustomControl_Image.GetAttachedProperty<string>("", "Message");
		return BaseTest.VerifyText(expected, actual);
	}
}

GetAttachedProperty

GetAttachedProperty is a powerful method. Along with reading dependency properties, you can read much more. In some cases WPF elements are nested in each other or in tooltip windows. In other cases some object is bound to a WPF element. In such situations you can try to access the elements, and the method will return a FrameworkElement object. From this object you can again use GetAttachedProperty to access some class-specific property. In all cases you will need access to the code of the application under test to see how it works internally.

FrameworkElement tooltip = wpfElement.
	GetAttachedProperty<FrameworkElement>("", "ToolTip");
string value = tooltip.GetAttachedProperty<string>("", "SomeSpecificProperty");

Conclusion

Once you get stuck with normal processing of elements, you can always try GetAttachedProperty. I would say definitely give it a try.


Advanced WPF automation – working with WinForms grid


Post summary: Example of how to work with a WinForms grid with TestStack White.

TestStack White is a really powerful framework. It works on top of the Windows UI Automation framework, hiding its complexity. If White is not able to locate an element, you have access to the underlying UI Automation and you can do almost anything you need.

Reference

This post is part of the Advanced WPF desktop automation with Telerik Testing Framework and TestStack White series. The sample application can be found in GitHub.

MainGrid

For single responsibility separation, the grid logic is in a separate class, MainGrid.cs. The constructor takes a White.Core.UIItems.WindowItems.Window object. Inside the window we search for an element with control type ControlType.Table. It is the only one of its kind. If there were more, we would have to narrow down the SearchCriteria.

public class MainGrid
{
	private Table table;
	public MainGrid(Window window)
	{
		SearchCriteria search = SearchCriteria.ByControlType(ControlType.Table);
		table = window.Get<Table>(search);
	}

	public string GetCellText(int index)
	{
		TableCell cell = GetCell(index);
		string value = cell.Value as string;
		return value;
	}

	public void ClickAtRow(int row)
	{
		TableCell cell = GetCell(row);
		Point topLeft = cell.Bounds.TopLeft;
		topLeft.X += 5;
		topLeft.Y += 5;
		Mouse.instance.Click(topLeft);
	}

	private TableCell GetCell(int index)
	{
		TableRows rows = table.Rows;
		TableCells cells = rows[index - 1].Cells;
		return cells[0];
	}
}

Access the grid

MainGrid is a property inside the MainWindow page object. On access of the property, a new object is instantiated. This might lead to performance issues if the grid search and instantiation are slow. In this case you can use the Singleton design pattern. A singleton, however, might lead to issues with stale object state which will be hard to debug. It depends on what your priorities are.

public class MainWindow : XamlElementContainer
{
	public static string WINDOW_NAME = "MainWindow";
	private Application app;
	private string mainPath =
		"XamlPath=/Border[0]/AdornerDecorator[0]/ContentPresenter[0]/Grid[0]/";
	public MainWindow(VisualFind find, Application application)
		: base(find)
	{
		app = application;
	}

	private MainGrid MainGrid
	{
		get
		{
			return new MainGrid(app.GetWindowByName(WINDOW_NAME));
		}
	}

	public void ClickTableAtRow(int row)
	{
		MainGrid.ClickAtRow(row);
	}

	public Verification VerifyTableCell(int index, string text)
	{
		return BaseTest.VerifyText(text, MainGrid.GetCellText(index));
	}
}

Conclusion

TestStack White is a powerful framework. It is perfect if you can do the job without it; if you cannot, you are lucky it exists.


Advanced WPF automation – page objects inheritance


Post summary: Re-use of page object code through inheritance.

Inheritance is one of the pillars of object-oriented programming. It is a way to re-use the functionality of already existing objects.

Reference

I've started a series with details of Advanced WPF desktop automation with Telerik Testing Framework and TestStack White. The sample application can be found in the GitHub SampleAppPlus repository.

Abstract class

An abstract class is one that cannot be instantiated. An abstract class may or may not have abstract methods. If one method is marked as abstract, then its containing class should also be marked as abstract. We have two similar windows, with a text box, save and cancel buttons shown on both of them. The AddEditText class follows the Page Objects pattern; it is marked as abstract though. Of the three elements, it has an implementation for all except TextBox_Text.

public abstract class AddEditText : XamlElementContainer
{
	protected string mainPath =
		"XamlPath=/Border[0]/AdornerDecorator[0]/ContentPresenter[0]/Grid[0]/";
	public AddEditText(VisualFind find) : base(find) { }

	protected abstract TextBox TextBox_Text { get; }
	private Button Button_Save
	{
		get
		{
			return Get<Button>(mainPath + "Button[0]");
		}
	}
	private Button Button_Cancel
	{
		get
		{
			return Get<Button>(mainPath + "Button[1]");
		}
	}

	public void EnterText(string text)
	{
		TextBox_Text.Clear();
		TextBox_Text.User.TypeText(text, 50);
	}

	public void ClickSaveButton()
	{
		Button_Save.User.Click();
		Thread.Sleep(500);
	}

	public void ClickCancelButton()
	{
		Button_Cancel.User.Click();
	}
}

Add Text page object

The only thing we have to do in the Add Text window is to implement the TextBox_Text property. All other functionality has already been implemented in the AddEditText class.

public class AddText : AddEditText
{
	public static string WINDOW_NAME = "Add Text";
	public AddText(VisualFind find) : base(find) { }

	protected override TextBox TextBox_Text
	{
		get
		{
			return Get<TextBox>(mainPath + "TextBox[0]");
		}
	}
}

Edit Text page object

In the Edit Text page object we also have to implement the TextBox_Text property. In addition, on this window there is one more element which needs to be defined.

public class EditText : AddEditText
{
	public static string WINDOW_NAME = "Edit Text";
	public EditText(VisualFind find) : base(find) { }

	private TextBlock TextBlock_CurrentText
	{
		get
		{
			return Get<TextBlock>(mainPath + "TextBlock[0]");
		}
	}

	protected override TextBox TextBox_Text
	{
		get
		{
			return Get<TextBox>(mainPath + "TextBox[1]");
		}
	}

	public Verification VerifyCurrentText(string text)
	{
		return BaseTest.VerifyText(text, TextBlock_CurrentText.Text);
	}
}

Conclusion

Inheritance is a powerful tool. We as automation engineers should use it whenever possible.


Advanced WPF desktop automation


Post summary: In this series of posts I'll expand the examples and ideas started in the Automation of WPF applications series.

Telerik Testing Framework and TestStack White are powerful tools for desktop automation. You can automate almost everything with a combination of those frameworks. This series of posts gives more details on how to automate more complex applications.

Reference

Code samples are located in the GitHub SampleAppPlus repository. Telerik Testing Framework requires installation, as it copies lots of assemblies into the GAC.

There is SampleAppPlus, which is actually a dummy application with the only purpose of being used to demonstrate automation principles. With this application you can upload an image file. Once uploaded, the image is visualised. The image path is listed in a table. The image path is also visualised as an image in a custom control at the bottom of the main window. The user is able to add more text, which is added to the table, as well as edit already existing text. Add and edit are reflected on the custom image element.

Topics

  • Page objects inheritance of similar windows
  • Working with WinForms grid
  • Windows themes and XamlPath
  • Read dependency property
  • NTestsRunner in action
  • Extension methods
  • Memory usage

Page objects inheritance

It is common to have similar windows in an application. Each window is modelled as a page object in the automation code. If the windows are also similar in terms of internal structure, it is efficient to re-use the similar part and avoid duplication. Re-use is achieved with inheritance. The given SampleAppPlus application has very similar windows for adding and editing text. The code examples show how to optimise your effort and re-use what can be re-used. More details can be found in the Advanced WPF automation – page objects inheritance post.

Working with WinForms grid

As mentioned before, Telerik Testing Framework is not very good with WinForms elements. This is the main reason to use TestStack White. It is not very likely to have WinForms elements in a WPF application, but in order to complete the big picture I've added such a grid to the SampleAppPlus application. The code examples show how to manage a WinForms grid. More details can be found in the Advanced WPF automation – working with WinForms grid post.

Windows themes and XamlPath

In the given examples, elements are located with exact XamlPath find expressions. This approach has a serious problem related to Windows themes. For complex user interfaces, the XamlPath can be different on a different theme. The Windows Classic theme sometimes produces a different XamlPath in comparison with the standard Windows themes. Yes, it is no longer available from Windows 8 onwards, but Server editions work only with the Windows Classic theme. So one and the same test could differ between themes. I couldn't find a way to automatically detect which theme is current. The solution is to have different XamlPaths for both the standard and classic themes. Once you have them, you can switch them manually with some configuration, or you can try to automate the switch by locating an element which you know is different and saving a variable based on its location result.

Read dependency property

A dependency property is a way in C# to extend the standard provided functionality. It can happen in a real application that developers use such functionality. The given SampleAppPlus application has a special element with a dependency property. The code examples show how to extract the property value and use it in your tests. More details can be found in the Advanced WPF automation – read dependency property post.

NTestsRunner in action

I've introduced NTestsRunner, which is a custom way of running functional automated tests. The code samples show how to use it and create good tests that are run only with this tool.

Extension methods

Extension methods are one extremely good feature of the .NET framework. I personally like them very much, and I assume everyone writing code in C# is aware of them. Still, the code examples show how they can be used.

Memory usage

Memory is not a problem on small projects. But when the number of tests continues to grow, it actually becomes a problem. More details can be found in the Advanced WPF automation – memory usage post.


Multilingual automation testing with enumerations


Post summary: Solution for automated testing of multilingual sites by using string values in all supported languages for enumerations.

In the Efficiently use of enumerations with string values in C# post I've described how you can add text to an enumeration element and then use it. The current post is an elaboration, with code samples for testing multilingual applications.

The challenge

Multilingual automation is always a challenge. If you use text to locate elements or verify conditions, then trying to run the test with a different language will fail. Enumerations with language-dependent string values are a pretty good solution. How to do it is described below.

Define attribute

The StringValue class extends System.Attribute. It has two properties, for the text and the language. It should have AllowMultiple = true in order to be applied as many times as there are languages.

namespace System
{
	[AttributeUsage(AttributeTargets.Field, AllowMultiple = true)]
	public class StringValue : Attribute
	{
		public string Value { get; private set; }
		public string Lang { get; private set; }

		public StringValue(string lang, string value)
		{
			Lang = lang;
			Value = value;
		}
	}
}

Read attribute

With reflection, read all StringValue attributes. Iterate them and return the one that matches the language given as a parameter.

using System.Reflection;

namespace System
{
	public static class ExtensionMethods
	{
		public static string GetStringValue(this Enum value, string lang)
		{
			string stringValue = value.ToString();
			Type type = value.GetType();
			FieldInfo fieldInfo = type.GetField(value.ToString());
			StringValue[] attrs = fieldInfo.
				GetCustomAttributes(typeof(StringValue), false) as StringValue[];
			foreach (StringValue attr in attrs)
			{
				if (attr.Lang == lang)
				{
					return attr.Value;
				}
			}
			return stringValue;
		}
	}
}

Apply to enumerations

All supported languages can be defined as string constants. It would be pretty cool if we could define an enumeration with the languages and pass it to the StringValue constructor as the language, but that is not possible, as it is not a compile-time constant.

public class Constants
{
	public const string LangEn = "en";
	public const string LangFr = "fr";
	public const string LangDe = "de";
}

public enum Messages
{
	[StringValue(Constants.LangEn, "Problem occured, try again later")]
	[StringValue(Constants.LangFr, "Problème survenu, réessayer plus tard")]
	[StringValue(Constants.LangDe, "Problem aufgetreten, " +
		"versuchen Sie es später erneut")]
	ProblemOccured,
	[StringValue(Constants.LangEn, "Successfully done")]
	[StringValue(Constants.LangFr, "Fait avec succès")]
	[StringValue(Constants.LangDe, "Erfolgreich durchgeführt")]
	Success
}

Use in code

Somewhere at the top level of your tests you should have a property or field, most likely read from configuration, which defines for which locale the current test run is.

string lang = Constants.LangFr;

This is then used to read the correct text value for a given enumeration element.

Assert.AreEqual(Messages.ProblemOccured.GetStringValue(lang), 
	App.MessageBox.GetText());

Conclusion

Multilingual testing is a challenge. Be smart and use all the tricks you can get. In this post I've revealed a pretty good trick to do the automation. The challenge with this approach will be the initial setup of the enumerations with all the translations.
