Partial JSON deserialize by JsonPath with Json.NET

Post summary: Code examples showing how to deserialize only part of a big JSON file by JsonPath when using Newtonsoft Json.NET.

Code shown in the examples below is available in the GitHub DotNetSamples/JsonPathConverter repository.

Use case description

Imagine you have a big JSON which you want to deserialize into a C# object.

{
  "node1": {
    "node1node1": "node1node1value",
    "node1node2": [ "value1", "value2" ],
    "node1node3": {
      "node1node3node1": "node1node3node1value"
    }
  },
  "node2": true,
  "node3": {
    "node3node1": "node3SubNode1Value",
    "node3node2": {
      "node3node2node1": {
        "node3node2node1node1": [ 1, 2, 3 ]
      },
      "node3node2node2": "node3node2node1value"
    }
  },
  "node4": "{\"node4node1\": \"n4n1value\", \"node4node2\": \"n4n1value\"}"
}

The file above is actually pretty small and used for demo purposes. In practice you can stumble upon terrifyingly big JSON files. Newtonsoft.Json, or Json.NET, is the de facto JSON standard for .NET, so it is used here to parse the JSON file. In order to deserialize this JSON to a C# object you need a model class that represents the JSON nodes. With some effort you can create such a class, but why bother if you are going to use just a fraction of all the JSON data. This is where JsonPath comes into play. Json.NET allows you to query JSON by JsonPath, so one option is to manually query the JSON, find the data you need and assign it to your C# object. This is not an elegant solution. Since querying by JsonPath is possible, it can be used in a JsonConverter that does the job automatically. What is needed is a custom JsonPathConverter and a model class to deserialize into, both described below.
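
As a quick illustration of the manual approach (a sketch, not code from the repository), JObject.SelectToken can pull out individual values by JsonPath:

using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json.Linq;

var jObject = JObject.Parse(File.ReadAllText("jsonFile.json"));
// Query by JsonPath and convert the tokens to .NET types
IList<string> node1Array = jObject.SelectToken("node1.node1node2").ToObject<List<string>>();
bool node2 = jObject.SelectToken("node2").Value<bool>();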

JSON model class

It is easier to describe the JSON model first. Below is the code for the JSON model class that will collect only the data we need.

using System.Collections.Generic;
using Newtonsoft.Json;

namespace JsonPathConverter
{
	[JsonConverter(typeof(JsonPathConverter))]
	public class JsonModel
	{
		[JsonProperty("node1.node1node2")]
		public IList<string> Node1Array { get; set; }

		[JsonProperty("node2")]
		public bool Node2 { get; set; }

		[JsonProperty("node3.node3node2.node3node2node1.node3node2node1node1")]
		public IList<int> Node3Array { get; set; }

		[JsonConverter(typeof(JsonPathConverter))]
		[JsonProperty("node4")]
		public NestedJsonModel Node4 { get; set; }
	}

	public class NestedJsonModel
	{
		[JsonProperty("node4node2")]
		public string NestedNode2 { get; set; }
	}
}

The JSON model class is annotated with [JsonConverter(typeof(JsonPathConverter))], which tells Json.NET to use the JsonPathConverter class to do the conversion. JsonPathConverter is implemented in such a way that JsonProperty is mandatory for each property in order for it to be parsed: [JsonProperty("node1.node1node2")].

JSON as a string

You may have noticed the odd case where node4 in the JSON file actually has a string value which is an escaped JSON string. This is unusual and may not be good programming practice, but I've encountered it in production code, so the examples given here cover this case as well. There is a dedicated NestedJsonModel class which this JSON string is deserialized into.

JsonPathConverter

The code below extends the JsonConverter abstract class and implements the needed methods.

using System;
using System.Linq;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public class JsonPathConverter : JsonConverter
{
	public override bool CanWrite => false;

	public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
	{
		var jObject = JObject.Load(reader);
		var targetObj = Activator.CreateInstance(objectType);

		foreach (var prop in objectType.GetProperties().Where(p => p.CanRead && p.CanWrite))
		{
			var jsonPropertyAttr = prop.GetCustomAttributes(true).OfType<JsonPropertyAttribute>().FirstOrDefault();
			if (jsonPropertyAttr == null)
			{
				throw new JsonReaderException($"{nameof(JsonPropertyAttribute)} is mandatory when using {nameof(JsonPathConverter)}");
			}

			var jsonPath = jsonPropertyAttr.PropertyName;
			var token = jObject.SelectToken(jsonPath);

			if (token != null && token.Type != JTokenType.Null)
			{
				var jsonConverterAttr = prop.GetCustomAttributes(true).OfType<JsonConverterAttribute>().FirstOrDefault();
				object value;
				if (jsonConverterAttr == null)
				{
					serializer.Converters.Clear();
					value = token.ToObject(prop.PropertyType, serializer);
				}
				else
				{
					value = JsonConvert.DeserializeObject(token.ToString(), prop.PropertyType,
						(JsonConverter)Activator.CreateInstance(jsonConverterAttr.ConverterType));
				}
				prop.SetValue(targetObj, value, null);
			}
		}

		return targetObj;
	}

	public override bool CanConvert(Type objectType)
	{
		return true;
	}

	public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
	{
		throw new NotImplementedException();
	}
}

Deserialization work is done in the public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer) method. The JSON is loaded into a Newtonsoft JObject and an instance of the result object is created. All properties of this result object are iterated in a foreach loop. It is important to note that properties should have both get and set in order to be considered for deserialization: objectType.GetProperties().Where(p => p.CanRead && p.CanWrite). If you have properties with just get or just set they will be ignored. The JsonPropertyAttribute for each property is read. If there is no such attribute an exception is thrown. This part can be changed so that the JsonPath falls back to the property name: var jsonPath = jsonPropertyAttr == null ? prop.Name : jsonPropertyAttr.PropertyName. This is tricky though, as C# is case sensitive and it might not work: the property could start with a capital letter while the JSON node is lower case. Once the JsonPath is defined, the JObject is queried with jObject.SelectToken(jsonPath). This should return a valid token. In case of a valid token the result object property is checked for a JsonConverterAttribute. If such exists then the JSON is deserialized with this newly found JsonConverter instance. If there is no converter attached to the property then all existing converters are cleared and the token is converted into an object. The clearing part is important, as otherwise a recursive call would throw an exception.

Usage

Once the work above is done, usage is pretty easy:

var fileContent = File.ReadAllText("jsonFile.json");
var result = JsonConvert.DeserializeObject<JsonModel>(fileContent);

result.Node1Array.Should().BeEquivalentTo(new List<string> {"value1", "value2"});
result.Node2.Should().Be(true);
result.Node3Array.Should().BeEquivalentTo(new List<int> { 1, 2, 3 });
result.Node4.NestedNode2.Should().Be("n4n1value");

Conclusion

In this post I have shown how to partially deserialize JSON by JsonPath, picking only the data that you need.

Soft assertions for C# unit testing frameworks (MSTest, NUnit, xUnit.net)

Post summary: Code example of an easy and useful custom implementation of soft assertions for C# unit testing frameworks such as MSTest, NUnit or xUnit.net.

Code shown in the examples below is available in the GitHub DotNetSamples/SoftAssertions repository.

Unit vs Functional testing

The unit testing paradigm states that each test exercises a particular code behaviour. So in a perfect world one unit test would have one assertion which defines the unit test result – either passed or failed. This is why unit testing frameworks provide only asserts which stop further execution of the current test method. In functional testing one test usually verifies several conditions. Not debating whether this is good or bad, assume you are doing GUI testing: once you have opened a particular page, you'd better do as much verification as possible to reduce the risk of bugs. Having this page opened over and over for each single check is not the most efficient way of testing. This is why, when you run functional tests, you need some kind of assert that indicates whether the check passed or failed but lets the test continue if no critical issue is present. Those are generally called "soft" asserts.

Soft assertions code

The following code is an implementation of soft assertions:

using System.Collections.Generic;
using System.Linq;
using FluentAssertions;

public class SoftAssertions
{
	private readonly List<SingleAssert> 
		_verifications = new List<SingleAssert>();

	public void Add(string message, string expected, string actual)
	{
		_verifications.Add(new SingleAssert(message, expected, actual));
	}

	public void Add(string message, bool expected, bool actual)
	{
		Add(message, expected.ToString(), actual.ToString());
	}

	public void Add(string message, int expected, int actual)
	{
		Add(message, expected.ToString(), actual.ToString());
	}

	public void AddTrue(string message, bool actual)
	{
		_verifications
			.Add(new SingleAssert(message, true.ToString(), actual.ToString()));
	}

	public void AssertAll()
	{
		var failed = _verifications.Where(v => v.Failed).ToList();
		failed.Should().BeEmpty();
	}

	private class SingleAssert
	{
		private readonly string _message;
		private readonly string _expected;
		private readonly string _actual;

		public bool Failed => _expected != _actual;

		public SingleAssert(string message, string expected, string actual)
		{
			_message = message;
			_expected = expected;
			_actual = actual;
		}

		public override string ToString()
		{
			return $"'{_message}' assert was expected to be '{_expected}' " +
				$"but was '{_actual}'";
		}
	}
}

Soft assertions details

The actual assertion is handled by the SingleAssert class. It contains a message to be displayed to the user in case of failure, as well as expected and actual values. They are stored as strings. All asserts made during a test are stored in a List<SingleAssert>. There are several methods that add an assert – they accept bool, string and int. You can extend the class and add as many as you want. It is mandatory to call the AssertAll() method so the asserts can be evaluated. Evaluation consists of filtering out passed asserts, leaving only the failed ones: var failed = _verifications.Where(v => v.Failed).ToList(). Then the list of failed asserts is checked for being empty: failed.Should().BeEmpty(). In this case the FluentAssertions framework is used, but the code can be changed to suit your particular needs.
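
If more types are needed, one possible extension (a sketch, not part of the repository code) is a generic Add method inside the same class that converts any value to a string before storing it:

public void Add<T>(string message, T expected, T actual)
{
	// Store values as strings, consistent with the other Add overloads
	_verifications.Add(new SingleAssert(message, expected?.ToString(), actual?.ToString()));
}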

Soft assertions usage

Usage is pretty straightforward. A SoftAssertions object should be created before each test and asserted after each test:

[TestClass]
public class UnitTest
{
	private SoftAssertions _softAssertions;

	[TestInitialize]
	public void SetUp()
	{
		_softAssertions = new SoftAssertions();
	}

	[TestCleanup]
	public void TearDown()
	{
		_softAssertions.AssertAll();
	}

	[TestMethod]
	public void TestMixedSoftAssertions()
	{
		_softAssertions.Add("Passing bool Add assertion", true, true);
		_softAssertions.Add("Failing bool Add assertion", true, false);
		_softAssertions
			.Add("Passing string Add assertion", "SameString", "SameString");
		_softAssertions
			.Add("Failing string Add assertion", "SameString", "OtherString");
		_softAssertions.Add("Passing int Add assertion", 1, 1);
		_softAssertions.Add("Failing int Add assertion", 1, 2);
		_softAssertions.AddTrue("Passing AddTrue assertion", true);
		_softAssertions.AddTrue("Failing AddTrue assertion", false);
	}
}

Soft assertions result

The result of the test shown above is: Result Message: Expected collection to be empty, but found {'Failing bool Add assertion' assert was expected to be 'True' but was 'False', 'Failing string Add assertion' assert was expected to be 'SameString' but was 'OtherString', 'Failing int Add assertion' assert was expected to be '1' but was '2', 'Failing AddTrue assertion' assert was expected to be 'True' but was 'False'}.

This comes out of the box because FluentAssertions is used. Otherwise you would have to produce the output and do the final assertion yourself.

Other soft assertions

A custom implementation of soft assertions is also available in the NTestsRunner framework, but it is more complex and demands a special approach to writing tests.

Conclusion

Soft assertions are very useful in functional testing. With this simple class you can directly have them in your functional tests.

Convert NUnit 3 to NUnit 2 results XML file

Post summary: Examples how to convert NUnit 3 result XML file into NUnit 2 result XML file.

Although NUnit 3 was officially released in November 2015, there are still CI tools that do not provide support for parsing NUnit 3 result XML files. In this post I will show how to convert between the formats so such CI tools can read the NUnit 2 format.

NUnit 3 console runner

The easiest way is if you are using the NUnit 3 console runner. It can be provided with an option: --result=TestResult.xml;format=nunit2.

Nota bene: For this to work it is mandatory to have the nunit-v2-result-writer package in the NuGet packages directory, otherwise an error will be shown: Unknown result format: nunit2.
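
A typical invocation then looks like this (the test assembly name is made up):

nunit3-console.exe MyTests.dll --result=TestResult.xml;format=nunit2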

Convert NUnit 3 to NUnit 2

If tests are being run in some way other than the NUnit 3 console runner then the solution below is needed. There is no ready program or tool that can do this conversion, so a custom one is needed. This is a PowerShell script that uses the nunit-v2-result-writer assemblies and uses their functionality to convert the XML files:

$assemblyNunitEngine = 'nunit.engine.api.dll';
$assemblyNunitWriter = 'nunit-v2-result-writer.dll';
$inputV3Xml = 'TestResult.xml';
$outputV2Xml = 'TestResultV2.xml';

Add-Type -Path $assemblyNunitEngine;
Add-Type -Path $assemblyNunitWriter;
$xmldoc = New-Object -TypeName System.Xml.XmlDataDocument;
$fs = New-Object -TypeName System.IO.FileStream -ArgumentList $inputV3Xml,'Open','Read';
$xmldoc.Load($fs);
$xmlnode = $xmldoc.GetElementsByTagName('test-run').Item(0);
$writer = New-Object -TypeName NUnit.Engine.Addins.NUnit2XmlResultWriter;
$writer.WriteResultFile($xmlnode, $outputV2Xml);

The important thing here is to give the proper paths to the nunit.engine.api.dll, nunit-v2-result-writer.dll and NUnit 3 TestResult.xml files. The PowerShell script above is equivalent to the following C# code:

using System.IO;
using System.Xml;
using NUnit.Engine.Addins;

public class NUnit3ToNUnit2Converter
{
	public static void Main(string[] args)
	{
		var xmldoc = new XmlDataDocument();
		var fileStream 
			= new FileStream("TestResult.xml", FileMode.Open, FileAccess.Read);
		xmldoc.Load(fileStream);
		var xmlnode = xmldoc.GetElementsByTagName("test-run").Item(0);

		var writer = new NUnit2XmlResultWriter();
		writer.WriteResultFile(xmlnode, "TestResultV2.xml");
	}
}

File samples

NUnitFileSamples.zip is a collection of several NUnit result files. Those with V3 are in NUnit 3 format, those with V2_NUnit are generated with the --result=TestResult.xml;format=nunit2 option and those with V2_Converted are converted with the code above.

Conclusion

Although a little inconvenient, it is possible to convert NUnit 3 to NUnit 2 result XML files using a PowerShell script and the nunit-v2-result-writer assemblies.

Code coverage of manual or automated tests with OpenCover for .NET applications

Post summary: Examples of how to do code coverage of manual or automated functional tests with the OpenCover tool for .NET applications.

Code coverage

This post is about how to do code coverage on .NET applications with OpenCover. Theory on what code coverage is and why it is needed can be found in the What about code coverage post.

OpenCover

OpenCover is an open source code coverage tool for .NET 2.0 and above applications, for Windows only. With OpenCover, instrumentation of the code is not needed. The application is started through OpenCover and it collects the coverage results. What is mandatory though is a PDB file along with the executables and assemblies, so the application under test should be built in Debug mode. If the PDB file is not found then no coverage data will be gathered.

The latest version can be downloaded from "Releases" in GitHub. There is an installer and a zip archive. If the installer is used, by default OpenCover is installed in C:\Users\{USER_ACCOUNT}\AppData\Local\Apps\OpenCover. If you want to change this, click the "Advanced" button during installation and then select "Install for all users on this machine".

ReportGenerator

OpenCover produces results in a raw format, which is not meant for humans. ReportGenerator is used to convert XML reports generated by OpenCover, PartCover, Visual Studio or NCover into human readable reports in various formats. A usage guide can be found on its home page; the most useful options are:

  • -reports:<reports>[;<reports>] – coverage reports that should be parsed, semicolon separated, wildcards are allowed
  • -targetdir:<directory> – directory where the generated report should be saved
  • -sourcedirs:<directory>[;<directory>] – directories which contain the corresponding source code, optional, semicolon separated
  • -classfilters:<(+|-)filter>[;<(+|-)filter>] – list of classes that should be included or excluded in the report, optional, wildcards are allowed.

How to use OpenCover

A usage guide can be found in the OpenCover usage reference online. Also, along with the OpenCover installation there is a Usage.rtf file which holds all the information about the tool. The most useful options are listed below:

  • -target:<target> – path to application executable file or name of service
  • -filter:<filters> – list of filters to apply to selectively include or exclude assemblies and classes from coverage results
  • -output:<path> – path to output XML file, if empty then results.xml will be created in current directory
  • -register[:user] – register and de-register the code coverage profiler
  • -targetargs:<arguments> – arguments to be passed to the target process
  • -targetdir:<path> – path to the target directory or alternative path to PDB files

Hands on examples on manual code coverage

In order to try this out you need to check out the code samples from the GitHub SampleApp or SampleAppPlus repository to C:\. Telerik Testing Framework needs to be installed as it copies lots of assemblies into the GAC. OpenCover and ReportGenerator should also be installed to C:\.

With the current setup the command to start SampleAppPlus.exe through OpenCover is:

C:\OpenCover\OpenCover.Console.exe -target:"C:\SampleAppPlus\SampleAppPlus\bin\Debug\SampleAppPlus.exe" -output:C:\SampleAppPlus\CoverageReports\SampelAppPlus.results.xml -register:user

Now the application is started and manual functional tests can be executed. Once the application is stopped, coverage results are saved in the SampelAppPlus.results.xml file.

Hands on examples on automated code coverage

Since automation is the future of QA and we have already created automated tests for both SampleApp and SampleAppPlus, we want to measure how our tests perform on code coverage. Automated tests run the application, attach to it and manipulate it. So it seems natural just to use the command from the manual example and start the application with it. It will not work though, since the command starts and returns the OpenCover process, not the underlying SampleAppPlus one. Extra code is needed in the SampleAppPlus.Tests.Framework\Tests\BaseTest.cs file. Instead of:

Application appWhite = Application.Launch(applicationPath);

following code has to be added:

Process sampleAppPlus = StartProcess();
Application appWhite = Application.Attach(sampleAppPlus);

where StartProcess() method is:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;

private Process StartProcess()
{
	string appName = "SampleAppPlus";
	string openCover = @"C:\OpenCover\OpenCover.Console.exe";
	long timeStamp = (long)(DateTime.UtcNow 
						- new DateTime(1970, 1, 1)).TotalMilliseconds;

	List<string> arguments = new List<string>();
	arguments.Add(@"-target:C:\SampleAppPlus\SampleAppPlus\bin\Debug\"
					+ appName + ".exe");
	arguments.Add(@"-output:C:\SampleAppPlus\CoverageReports\SampelAppPlus."
					+ timeStamp + ".xml");
	arguments.Add("-register:user");

	Process process = new Process();
	process.StartInfo.FileName = openCover;
	process.StartInfo.Arguments = string.Join(" ", arguments);
	process.Start();

	Thread.Sleep(5000);
	return Process.GetProcesses().First(proc => appName == proc.ProcessName);
}

The OpenCover arguments "-target", "-output" and "-register" are used. Note that the -output file is always different because the current Unix time is added to the file name; this is to prevent overwriting the file. The idea is to run OpenCover which will start SampleAppPlus. Wait 5 seconds to ensure the application is up, then get all Windows processes with Process.GetProcesses(), iterate them, find and return the needed SampleAppPlus process which TestStack White and Telerik Testing Framework will attach to.

Create report

Once the tests are run and the OpenCover XML report files are generated it is time to generate human readable reports. This is done with ReportGenerator with the command:

C:\ReportGenerator\bin\ReportGenerator.exe -reports:C:\SampleAppPlus\CoverageReports\SampelAppPlus.*.xml -targetdir:C:\SampleAppPlus\CoverageReports\html -sourcedirs:C:\SampleAppPlus\ -classfilters:-SampleAppPlus.Properties.*

Inspect report

The coverage report file from the examples above can be found in the OpenCover code coverage report. Inspecting the report, there is missed code in the SampleAppPlus.MainWindow class – the else branch of the if ((bool)openFileDialog.ShowDialog()) condition is not covered. The documentation of this method states that it returns false if the Cancel button of the dialog window is clicked. In order to increase the coverage, a test that clicks the Cancel button and verifies no upload is done should be added to the test suite.

Code coverage for IIS web application or Windows service

The examples above show how to run a normal Windows application. This is valid for both UI and console applications as they are started with a single EXE file. OpenCover can also work for IIS web applications, Silverlight applications and Windows service applications. More details can be found in the documentation accompanying the OpenCover installation.

Although it is possible to connect to a running service, I have done code coverage on a Windows service in the manner suggested in the documentation – run the service as a console application. Since debugging a running Windows service is not that straightforward a task, developers have most likely already implemented a switch to start the service as a console application. If not, you will make their lives easier by asking them to do so.
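
A common pattern for such a switch (a sketch with made-up names, not code from the post's samples) is to check whether the process is running interactively and bypass the service host:

using System;
using System.ServiceProcess;

public static class Program
{
	public static void Main(string[] args)
	{
		var service = new MyService(); // hypothetical class deriving from ServiceBase
		if (Environment.UserInteractive)
		{
			// Started from a console, e.g. through OpenCover - run the work directly
			service.StartAsConsole(args); // hypothetical method exposing the OnStart logic
			Console.WriteLine("Press Enter to stop...");
			Console.ReadLine();
			service.StopAsConsole(); // hypothetical method exposing the OnStop logic
		}
		else
		{
			ServiceBase.Run(service);
		}
	}
}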

Conclusion

OpenCover is the only open source tool for code coverage of .NET applications. It is really powerful and easy to use. No code instrumentation is needed – just build the code in Debug mode to have PDB files and run the application through OpenCover. It can also be used for measuring code coverage of unit tests.

FIX messages simulator

Post summary: General thoughts how to test FIX messages with a FIX simulator.

FIX stands for Financial Information eXchange and is probably the most widely used protocol for exchanging electronic messages in the financial world.

FIX sessions

In order to exchange data, systems in the financial world use FIX messages. The first important step before exchanging messages is to establish a connection with each other. There are two roles for the counterparties:

  • Acceptor – acts as a server. Stays and listens for clients that will connect
  • Initiator – acts as a client. It is the active participant in the connection

Once the roles are defined, a FIX session should be established between both. They can maintain as many different sessions as they need. A session is uniquely identified by the FIX protocol version and the IDs of both counterparties.

A session is started by the initiator by sending a Logon request and receiving a Logon acknowledgement. The session is kept alive by both sides sending Heartbeat messages to each other. A session is ended by a Logout message sent from one of the counterparties. The other acknowledges the Logout.

FIX messages

Once all the ceremony of setting up a session is done and the session is alive, both counterparties can start exchanging FIX messages with data. A FIX message, in short, is a string containing key-value pairs in the format <TagNumber>=<Value> separated by the SOH (\u0001) character. Each tag has a name and in some cases a vast specification of what values it accepts, what each value means, etc. FIXimate is a pretty good online tool which gives information about tags. Each tag is represented by its integer number in the message. Tags are grouped into three parts of the message:

  • Header – contains information needed for session identification
  • Body – contains business data
  • Trailer – checksum for message validation

Example where SOH is replaced with |

8=FIX.4.2|9=76|35=A|49=Initiator|56=Acceptor|34=1|52=20150321-15:39:28.762|98=0|108=30|141=Y|10=187|
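
The checksum in the trailer (tag 10) is simply the sum of all bytes of the message up to and including the SOH before tag 10, modulo 256, formatted as three digits. A minimal sketch of computing it, not tied to any FIX engine:

using System.Linq;
using System.Text;

public static class FixChecksum
{
	// messageWithoutChecksum must already end with SOH, e.g. "8=FIX.4.2\u00019=76\u0001...141=Y\u0001"
	public static string Calculate(string messageWithoutChecksum)
	{
		int sum = Encoding.ASCII.GetBytes(messageWithoutChecksum).Sum(b => b);
		return (sum % 256).ToString("D3");
	}
}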

Testing of FIX message scenarios

Depending on the business logic behind it, actual testing of FIX messages can be a pretty complicated task. Each message alone can have tens of tags with data. Each tag is meaningful and the system can react in a different manner based on its value. Each message can depend on the previous message. The combinations and complex business logic can make thorough testing a very difficult task.

How to do it

There are numerous FIX simulators available out there. They will work for most of the cases, but since FIX communication can be really bespoke, in some cases an external tool is not a good idea. An internal tool is what I suggest, and I propose a solution in the current post.

Visualise the message

This is maybe the hardest part of all: how to show the messages so they are easy to comprehend and edit. Obviously editing a long string with user-unfriendly data is not the best solution. I admit it is not the perfect solution, but I couldn't think of anything better than good old-fashioned Excel. In short, each column is a specific tag. Each row is a single FIX message and you fill in the tag values for this particular FIX message. There can be separator rows to mark different test scenarios.

Convert the messages

Once the Excel test cases are ready they are exported with a macro to a special XML format. The XML is then read by the FIX simulator and converted to real FIX messages.

Send the messages

FIX messages are then sent over the wire. In order to send them you need a FIX engine. A very popular one is the QuickFIX engine. It has Java and .NET versions. There are example applications which help in better understanding how to use the engine.

Conclusion

Testing FIX messages based on business rules is definitely a very important task. It is also not the simplest task you might face as a QA. Still, it is achievable. I would say creating a custom solution will initially take some time, but in the long run it will pay off as you will have total control over the features you have and need.

What about code coverage

Post summary: Code coverage is a measurement of what percentage of the program source code is executed by your tests. Code coverage is nice to have, but in no case should you make code coverage the ultimate goal!

Code coverage is a useful metric to check what part of the code your tests are exercising. It really depends on the tools used for gathering code coverage metrics, but in general they work in a similar fashion.

How it works

One approach is to instrument the application under test. Instrumentation means modifying the original executables by adding metadata so the code coverage tool is able to understand which line of code is being executed. The other option is to run the application through the code coverage tool. Either way, once the application is running the tests are executed.

What tests to run

Tests run to measure code coverage can be unit tests, functional automation tests or even manual tests. It doesn't matter which – the important part is to see how our tests are exercising the application under test.

Results

Once the tests are finished, code coverage metrics are obtained from the code coverage tool. To give detailed information, most of the tools take the original source code and generate a visual report with colour information showing which lines are executed and which are not. There are different levels of coverage – method, line, code block, etc. I personally prefer lines, and further on I'll mean lines of code executed.

Benefits

Code coverage information is equally useful for QA and developers. QA analyse code that has not been executed during tests to identify what test conditions they are missing and improve their tests. Developers analyse the results to identify and remove dead or unused code.

When to do it

Code coverage is a nice to have activity. We can live with or without it. It is useful for big, mature products where there are automation test suites. You can also run it against your test code in order to optimise it. Removing dead code optimises the product and makes its maintenance easier. I would say doing it too often should be avoided. Everything depends on the context, but for me the best cadence is once or twice a year.

What does the code coverage percentage mean

In one word – nothing. You may have 30% code coverage but cover the most important user functionality with a low bug rate, and you may have 90% with dummy tests made especially to exercise some code without the idea of actually testing it. I was lucky to work on a project where developers kept the code tidy and clean, and my tests easily reached 81% just by verifying all user requirements. I would say 80-85% is the maximum you can get.

Pitfalls

Do not ever make code coverage an important measurement or key performance indicator (KPI) in your testing strategy. Doing so and making people try to increase the code coverage will in most cases result in dummy tests made especially to push this percentage up. Code coverage is in most cases an insignificant aspect of your testing strategy.

How to do it

A tutorial on how to do code coverage for Java with JaCoCo can be found in the Code coverage of manual or automated tests with JaCoCo post. A tutorial on how to do code coverage for .NET with OpenCover can be found in the Code coverage of manual or automated tests with OpenCover for .NET applications post.

Conclusion

Code coverage is an interesting aspect of testing. It might enhance your tests if done wisely, or it can ruin them. Remember that test scenarios should be extracted from user requirements and features. Code coverage data should be used to see if you have some blind spots in reading the requirements. It can help developers remove dead or unused code.

Advanced WPF automation – memory usage

Post summary: Highlighting a possible memory issue when using Telerik Testing Framework and TestStack White for desktop automation.

Memory is an important aspect. When you have several test cases it is not a problem, but on large projects with many tests memory turns out to be a serious issue.

Reference

This post is part of the Advanced WPF desktop automation with Telerik Testing Framework and TestStack White series. The sample application can be found in GitHub.

Problem

Like every demo of a given technology, automating WPF applications looks cool. And, like with every technology, problems occur when you start to use it at a large scale. The problem with WPF automation with Telerik Testing Framework and TestStack White is memory. When the number of your tests grows, the frameworks start to use too much memory. By too much I mean over 1GB, which might not seem a lot but for a single process actually is. Increasing the RAM of the test machine is only a temporary solution and is not one that scales.

Why so much memory

I have a project with 580 tests and 7300 verification points spread across 50 test classes. I've spent lots of hours debugging and profiling with several .NET profiling tools. In the end all profilers show that the large amount of memory used is in unmanaged objects, so generally there is nothing you can do. It seems like one of the frameworks, or both, have memory issues and do not free all the memory they use.

Solution

The solution is pretty simple to suggest but harder to implement – run each test class in a separate process. I'm using a much more enhanced version of NTestsRunner. It runs each test class in a separate Windows process. Once a test class is finished its results are serialised to the results directory, the process exits and all memory and objects used for this test are released.

Conclusion

Memory can be a crucial factor in an automation project. Be prepared to have a solution for it. At this point I'm not planning to add running tests in a separate process to NTestsRunner. If there is demand, it is a pretty easy task to do.

Complete guide to email verifications with Automation SMTP Server

Post summary: How to do complete email verification with your own Automation SMTP server.

SMTP is a protocol initially defined in 1982 and still used nowadays. In order to automate an application which sends out emails, you need an SMTP server which reads the messages and saves them to disk for further processing. Note that this is only needed in case your application sends emails.

Windows SMTP server

One option is to use the SMTP server provided by Windows. There are two problems here. The first is that since Vista the SMTP server is no longer supported. There is an SMTP server in the Windows Server distributions, but the licence for them is more expensive. The second problem comes from the configuration of the server. You might have several machines and the configuration should be maintained on all of them. Using the Windows SMTP server is a feasible option, but the current post is not dedicated to it.

Automation SMTP Server

What I offer in this post is your own Automation SMTP Server. It is located in the following GitHub project. The solution is actually a mixture of two open source projects. For the server I use Antix SMTP Server For Developers, which is a really good SMTP server. It is a Windows application and is more suitable for manual SMTP testing rather than automation. I've extracted the SMTP core, with some modifications, as a console application which saves emails as EML files on disk. For reading the emails I use the source code from the Easily Retrieve Email Information from .EML Files article, with several modifications. What you need to do in order to do successful email verification is download the executable from GitHub and follow the instructions below. More info can be found on its home page, Automation SMTP Server.

Automation SMTP Server usage

In the GitHub AutomationSMTPServer repository there is an example that shows how to use Automation SMTP Server. The server should be added as a reference to your automation project. Since it is a reference it gets copied into the compiled executables folder.

Delete recent emails

Before doing anything in your tests it is good to delete old emails. Automation SMTP Server saves mail into a folder named "temp". This is how it works and it cannot be changed.

private string currentDir =
	Directory.GetCurrentDirectory() + Path.DirectorySeparatorChar;
private string mailsDir = currentDir + "temp";

if (Directory.Exists(mailsDir))
{
	Directory.Delete(mailsDir, true);
}

Start Automation SMTP Server

The server is a console application. It receives emails and saves them to disk. If the counterparty sends a QUIT message to disconnect, the server gets restarted to wait for the next connection. The server should be started as a process. The port should be provided as an argument. If not provided, it can be configured in the SMTP Server config file. If not configured there either, it prints a message and takes 25 as the default port.

Process smtpServer = new Process();
smtpServer.StartInfo.FileName = currentDir + "AutomationSMTPServer.exe";
smtpServer.StartInfo.Arguments = "25";
smtpServer.Start();

Send emails

This is the point where your application under test is sending emails which you will later verify.

Read emails

Once emails have been sent out from the application under test you are ready to read and process them.

string[] files = Directory.GetFiles(mailsDir);
List<EMLFile> mails = new List<EMLFile>();

foreach (string file in files)
{
	EMLFile mail = new EMLFile(file);
	mails.Add(mail);
	File.Delete(file);
}

Verify emails

Here you can use the EMLFile class, which parses the EML file and represents it as an object so you can do operations on it. Once you have the mail as an object you can access all its attributes and verify some of them. It all depends on your testing strategy. Another option is to define an expected EML file, read it and compare actual and expected. The EMLFile class has a predefined Equals method which compares all the attributes of the emails.

bool compare1 = mails[0].Equals(mails[1]);
bool compare2 = mails[0].Equals(mails[2]);
bool compare3 = mails[1].Equals(mails[2]);

Stop Automation SMTP Server

This part is important. If not stopped, the server will continue to work and will block the port. Its architecture is defined in such a manner that the only way to stop it is to terminate the console application. In the case where you have started it from C# code as a process, the way to stop it is to kill the process.

smtpServer.Kill();

Conclusion

Proper email verification can be a challenge. In case your application under test sends emails, I would say it is crucial to have correct email testing as the mail is what customers receive. And in the end it is all about the customers! So give it a try and enjoy this easy way of email verification.

Extract and verify text from PDF with C#

Post summary: How to extract text from PDF in C#.

PDF verification is a pretty rare case in automation testing. Still, it can happen.

iTextSharp

iTextSharp is a library that allows you to manipulate PDF files. We need only a very small part of this library. It has a built-in reader that iterates through the pages and returns only the text.

using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;
using System.Text;

namespace PDFExtractor
{
	public class PDFExtractor
	{
		public static string ExtractTextFromPDF(string pdfFileName)
		{
			StringBuilder result = new StringBuilder();
			// Create a reader for the given PDF file
			using (PdfReader reader = new PdfReader(pdfFileName))
			{
				// Read pages
				for (int page = 1; page <= reader.NumberOfPages; page++)
				{
					SimpleTextExtractionStrategy strategy =
						new SimpleTextExtractionStrategy();
					string pageText =
						PdfTextExtractor.GetTextFromPage(reader, page, strategy);
					result.Append(pageText);
				}
			}
			return result.ToString();
		}
	}
}

Verification

Once extracted, the text can be verified against the expected one as described in the Text verification post.
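
A minimal usage sketch (the file name and expected phrase are made up):

string text = PDFExtractor.ExtractTextFromPDF(@"C:\Temp\invoice.pdf");
Assert.IsTrue(text.Contains("Thank you for your order"));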

Text verification

Post summary: Verify actual text against expected text, ignoring what is not relevant during the compare.

In automation testing there is no definitive way to say what text verification is best. One strategy is to check that an expected word or phrase exists in the actual text shown in the application under test. Another strategy is to prepare a large amount of text to verify against. The latter strategy is expensive in terms of effort for preparation and maintenance. The first strategy might not be sufficient to do correct verifications.

In between

What I suggest here is something in between. Not too much but not too little. The problem with a paragraph of text to be verified is that it might contain data we do not have control over, e.g. date, time, unique values, etc.

Example

Imagine an e-commerce website. When you place an order there is an order confirmation page. You want to verify not only that you are on this page but also that the text is correct as per the specification. Most likely the text will contain data you do not have control over – order number and date. Breaking the verification into small chunks is an option. Another option is to manipulate the actual text. A third option is to define the expected text with special strings that get ignored during the compare.

Actual vs Expected

Actual text could be: "Order 123456 has been successfully placed on 01.01.1970! Thank you for your order. "
Expected text could be: "Order ~SKIP~ has been successfully placed on ~SKIP~! Thank you for your order. "
And then you can compare both, where ~SKIP~ will be ignored during the compare.

Compare code

The code to do the compare shown above is also incorporated in NTestsRunner:

using System.Text.RegularExpressions;

public const string IgnoreDuringCompare = "~SKIP~";

public static bool EqualsWithIgnore(this string value1, string value2)
{
	string regexPattern = "(.*?)";
	// If value is null set it to empty
	value1 = value1 ?? string.Empty;
	value2 = value2 ?? string.Empty;
	string input = string.Empty;
	string pattern = string.Empty;
	// Unify new lines symbols
	value1 = value1.Replace("\r\n", "\n");
	value2 = value2.Replace("\r\n", "\n");
	// If neither contains the ignore string then compare directly
	if (!value1.Contains(IgnoreDuringCompare) &&
		!value2.Contains(IgnoreDuringCompare))
	{
		return value1.Equals(value2);
	}
	else if (value1.Contains(IgnoreDuringCompare))
	{
		pattern = Regex.Escape(value1).Replace(IgnoreDuringCompare, regexPattern);
		input = value2;
	}
	else if (value2.Contains(IgnoreDuringCompare))
	{
		pattern = Regex.Escape(value2).Replace(IgnoreDuringCompare, regexPattern);
		input = value1;
	}

	Match match = Regex.Match(input, pattern);
	return match.Success;
}

Use in tests

In your tests you will do something like:

string actual = OrderConfirmationPage.GetConfirmationText();
string expected = "Order " + ExtensionMethods.IgnoreDuringCompare +
	" has been successfully placed on " + ExtensionMethods.IgnoreDuringCompare +
	"! Thank you for your order. ";
Assert.IsTrue(actual.EqualsWithIgnore(expected));

Conclusion

It might take a little bit more effort to prepare the expected strings, but the verification will be more complete and correct than just expecting a word or a phrase.

Advanced WPF automation – read dependency property

Post summary: What a dependency property is in .NET and how to read it with Telerik Testing Framework.

In this post I'll show an advanced way of getting more details from an object to make your automation more sophisticated.

Reference

This post is part of the Advanced WPF desktop automation with Telerik Testing Framework and TestStack White series. The sample application can be found in the GitHub SampleAppPlus repository.

Dependency property

Dependency properties are an easy way to extend the functionality available in the .NET framework. In SampleAppPlus there is a CustomControl defined. The purpose of this control is to store text and visualise this text as an image. The text is stored in a dependency property.

public partial class CustomControl : UserControl
{
	public static readonly DependencyProperty MessageProperty =
			DependencyProperty.Register("Message",
									typeof(string), typeof(CustomControl),
									new PropertyMetadata(OnChange));

	...

	public string Message
	{
		get { return (string)GetValue(MessageProperty); }
		set { SetValue(MessageProperty, value); }
	}

	...

}

Read dependency property

In order to be able to properly automate something you have to know the internal structure of the application. Generally you will try to locate and read the element and it will not work in the way you are used to working with elements. At this point you have to inspect the source code of the application under test and see how it is done internally. Most importantly, if a dependency property is used you should know its name. Once you know the name, reading it is easy.

public class MainWindow : XamlElementContainer
{
	...

	private UserControl CustomControl_Image
	{
		get
		{
			return Get<UserControl>(mainPath + "CustomControl[0]");
		}
	}

	public Verification VerifyCustomImageText(string expected)
	{
		string actual =
			CustomControl_Image.GetAttachedProperty<string>("", "Message");
		return BaseTest.VerifyText(expected, actual);
	}
}

GetAttachedProperty

GetAttachedProperty is a powerful method. Along with reading dependency properties you can read much more. In some cases WPF elements are nested in each other or in tooltip windows. In other cases some object is bound to a WPF element. In such situations you can try to access the elements and the method will return a FrameworkElement object. From this object you can again call GetAttachedProperty to access some class-specific property. In all cases you will need access to the application under test's code to see how it works internally.

FrameworkElement tooltip = wpfElement.
	GetAttachedProperty<FrameworkElement>("", "ToolTip");
string value = tooltip.GetAttachedProperty<string>("", "SomeSpecificProperty");

Conclusion

GetAttachedProperty is a powerful method. Once you get stuck with normal processing of elements you can always try it. I would say definitely give it a try.

Advanced WPF automation – working with WinForms grid

Post summary: Example how to work with WinForms grid with TestStack White.

TestStack White is a really powerful framework. It works on top of the Windows UI Automation framework, hiding its complexity. If White is not able to locate an element, you have access to the underlying UI Automation and you can do almost anything you need.

Reference

This post is part of the Advanced WPF desktop automation with Telerik Testing Framework and TestStack White series. The sample application can be found in GitHub.

MainGrid

For single responsibility separation the grid logic is in a separate class, MainGrid.cs. The constructor takes a White.Core.UIItems.WindowItems.Window object. Inside the window we search for an element with control type ControlType.Table. It is the only one of its kind. If there were more we would have to narrow down the SearchCriteria.

public class MainGrid
{
	private Table table;
	public MainGrid(Window window)
	{
		SearchCriteria search = SearchCriteria.ByControlType(ControlType.Table);
		table = window.Get<Table>(search);
	}

	public string GetCellText(int index)
	{
		TableCell cell = GetCell(index);
		string value = cell.Value as string;
		return value;
	}

	public void ClickAtRow(int row)
	{
		TableCell cell = GetCell(row);
		Point topLeft = cell.Bounds.TopLeft;
		topLeft.X += 5;
		topLeft.Y += 5;
		Mouse.instance.Click(topLeft);
	}

	private TableCell GetCell(int index)
	{
		TableRows rows = table.Rows;
		TableCells cells = rows[index - 1].Cells;
		return cells[0];
	}
}

Access the grid

MainGrid is a property inside the MainWindow page object. On each access of the property a new object is instantiated. This might lead to performance issues if the grid search and instantiation are slow. In this case you can use the Singleton design pattern, but a singleton might lead to issues with stale object state which will be hard to debug. It depends what your priorities are. A possible cached variant is sketched after the code below.

public class MainWindow : XamlElementContainer
{
	public static string WINDOW_NAME = "MainWindow";
	private Application app;
	private string mainPath =
		"XamlPath=/Border[0]/AdornerDecorator[0]/ContentPresenter[0]/Grid[0]/";
	public MainWindow(VisualFind find, Application application)
		: base(find)
	{
		app = application;
	}

	private MainGrid MainGrid
	{
		get
		{
			return new MainGrid(app.GetWindowByName(WINDOW_NAME));
		}
	}

	public void ClickTableAtRow(int row)
	{
		MainGrid.ClickAtRow(row);
	}

	public Verification VerifyTableCell(int index, string text)
	{
		return BaseTest.VerifyText(text, MainGrid.GetCellText(index));
	}
}
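
As mentioned above, one way to avoid re-creating the grid on every access is to cache it in a field. This is a sketch of such a variant (not in the sample repository), with the stale-state trade-off described earlier:

private MainGrid mainGrid;

private MainGrid MainGrid
{
	get
	{
		// Create the grid only on first access and reuse it afterwards
		if (mainGrid == null)
		{
			mainGrid = new MainGrid(app.GetWindowByName(WINDOW_NAME));
		}
		return mainGrid;
	}
}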

Conclusion

TestStack White is a powerful framework. It would be perfect if you could do the job without it, but if you cannot, you are lucky it exists.

Advanced WPF automation – page objects inheritance

Post summary: Re-use of page object code through inheritance.

Inheritance is one of the pillars of object oriented programming. It is a way to re-use functionality of already existing objects.

Reference

I've started a series with details of Advanced WPF desktop automation with Telerik Testing Framework and TestStack White. The sample application can be found in the GitHub SampleAppPlus repository.

Abstract class

An abstract class is one that cannot be instantiated. An abstract class may or may not have abstract methods. If one method is marked as abstract then its containing class must also be marked as abstract. We have two similar windows with a text box, a save button and a cancel button shown on both of them. The AddEditText class follows the Page Objects pattern, but it is marked as abstract. It has an implementation for all the elements except "TextBox_Text".

public abstract class AddEditText : XamlElementContainer
{
	protected string mainPath =
		"XamlPath=/Border[0]/AdornerDecorator[0]/ContentPresenter[0]/Grid[0]/";
	public AddEditText(VisualFind find) : base(find) { }

	protected abstract TextBox TextBox_Text { get; }
	private Button Button_Save
	{
		get
		{
			return Get<Button>(mainPath + "Button[0]");
		}
	}
	private Button Button_Cancel
	{
		get
		{
			return Get<Button>(mainPath + "Button[1]");
		}
	}

	public void EnterText(string text)
	{
		TextBox_Text.Clear();
		TextBox_Text.User.TypeText(text, 50);
	}

	public void ClickSaveButton()
	{
		Button_Save.User.Click();
		Thread.Sleep(500);
	}

	public void ClickCancelButton()
	{
		Button_Cancel.User.Click();
	}
}

Add Text page object

The only thing we have to do for the Add Text window is to implement the "TextBox_Text" property. All other functionality has already been implemented in the AddEditText class.

public class AddText : AddEditText
{
	public static string WINDOW_NAME = "Add Text";
	public AddText(VisualFind find) : base(find) { }

	protected override TextBox TextBox_Text
	{
		get
		{
			return Get<TextBox>(mainPath + "TextBox[0]");
		}
	}
}

Edit Text page object

In the Edit Text page object we also have to implement the "TextBox_Text" property. On this window there is one more element which needs to be defined.

public class EditText : AddEditText
{
	public static string WINDOW_NAME = "Edit Text";
	public EditText(VisualFind find) : base(find) { }

	private TextBlock TextBlock_CurrentText
	{
		get
		{
			return Get<TextBlock>(mainPath + "TextBlock[0]");
		}
	}

	protected override TextBox TextBox_Text
	{
		get
		{
			return Get<TextBox>(mainPath + "TextBox[1]");
		}
	}

	public Verification VerifyCurrentText(string text)
	{
		return BaseTest.VerifyText(text, TextBlock_CurrentText.Text);
	}
}

Conclusion

Inheritance is a powerful tool. We as automation engineers should use it whenever possible.

Advanced WPF desktop automation

Post summary: In this series of posts I'll expand on the examples and ideas started in the Automation of WPF applications series.

Telerik Testing Framework and TestStack White are powerful tools for desktop automation. You can automate almost everything with a combination of those frameworks. This series of posts gives more details on how to automate more complex applications.

Reference

Code samples are located in the GitHub SampleAppPlus repository. Telerik Testing Framework requires installation as it copies lots of assemblies into the GAC.

There is SampleAppPlus, which is actually a dummy application with the only purpose of demonstrating automation principles. With this application you can upload an image file. Once uploaded, the image is visualised. The image path is listed in a table. The image path is also visualised as an image in a custom control at the bottom of the main window. The user is able to add more text, which is added to the table, as well as edit already existing text. Add and edit are reflected in the custom image element.

Topics

  • Page objects inheritance of similar windows
  • Working with WinForms grid
  • Windows themes and XamlPath
  • Read dependency property
  • NTestsRunner in action
  • Extension methods
  • Memory usage

Page objects inheritance

It is common to have similar windows in an application. Each window is modelled as a page object in the automation code. If windows are also similar in terms of internal structure it is efficient to re-use the similar parts and avoid duplication. Re-use is achieved with inheritance. The given SampleAppPlus application has very similar windows for adding and editing text. The code examples show how to optimise your effort and re-use what can be re-used. More details can be found in the Advanced WPF automation – page objects inheritance post.

Working with WinForms grid

As mentioned before, Telerik Testing Framework is not very good with WinForms elements. This is the main reason to use TestStack White. It is not very likely to have WinForms elements in a WPF application, but in order to complete the big picture I've added such a grid in the SampleAppPlus application. The code examples show how to manage a WinForms grid. More details can be found in the Advanced WPF automation – working with WinForms grid post.

Windows themes and XamlPath

In the given examples elements are located with an exact XamlPath find expression. This approach has a serious problem related to Windows themes. For complex user interfaces the XamlPath can be different on different themes. The Windows Classic theme sometimes produces a different XamlPath in comparison with the standard Windows themes. Yes, it is no longer available from Windows 8 onwards, but Server editions work only with the Windows Classic theme. So one and the same tests could behave differently. I couldn't find a way to automatically detect which theme is the current one. The solution is to have different XamlPaths for both standard and classic themes. Once you have them you can switch them manually with some configuration, or you can try to automate the switch by locating an element for which you know the path is different and saving a variable based on the result of locating it.
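
A rough sketch of automating that switch, reusing the Get<T> helper from the page objects in this series – both paths and the probed element are made up and have to be captured for the real application:

private static string themePath;

protected string ThemePath
{
	get
	{
		if (themePath == null)
		{
			// Hypothetical XamlPath variants, one captured per theme
			string standardPath = "XamlPath=/Border[0]/AdornerDecorator[0]/ContentPresenter[0]/Grid[0]/";
			string classicPath = "XamlPath=/Border[0]/ContentPresenter[0]/Grid[0]/";
			try
			{
				// Probe an element that resolves only with the standard theme path
				themePath = Get<Button>(standardPath + "Button[0]") != null
					? standardPath
					: classicPath;
			}
			catch (Exception)
			{
				themePath = classicPath;
			}
		}
		return themePath;
	}
}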

Read dependency property

A dependency property is a way in C# to extend the standard provided functionality. It can happen in a real application that developers use such functionality. The given SampleAppPlus application has a special element with a dependency property. The code examples show how to extract the property value and use it in your tests. More details can be found in the Advanced WPF automation – read dependency property post.

NTestsRunner in action

I've introduced NTestsRunner, which is a custom way of running functional automated tests. The code samples show how to use it and create good tests that are run only with this tool.

Extension methods

Extension methods are an extremely good feature of the .NET framework. I personally like them very much. I assume everyone writing code in C# is aware of them. Still, the code examples show how they can be used.

Memory usage

Memory is not a problem on small projects, but when the number of tests continues to grow it actually becomes one. More details can be found in the Advanced WPF automation – memory usage post.

Multilingual automation testing with enumerations

Post summary: Solution for automated testing of multilingual sites by using string values in all supported languages for enumerations.

In the Efficiently use of enumerations with string values in C# post I've described how you can add text to an enumeration element and then use it. The current post is an elaboration, with code samples for testing multilingual applications.

The challenge

Multilingual automation is always a challenge. If you use text to locate elements or verify a condition, then trying to run the test with a different language will fail. Enumerations with language dependent string values are a pretty good solution. How to do it is described below.

Define attribute

The StringValue class extends System.Attribute. It has two properties, for text and language. It should have AllowMultiple = true in order to be applied as many times as there are languages.

namespace System
{
	[AttributeUsage(AttributeTargets.Field, AllowMultiple = true)]
	public class StringValue : Attribute
	{
		public string Value { get; private set; }
		public string Lang { get; private set; }

		public StringValue(string lang, string value)
		{
			Lang = lang;
			Value = value;
		}
	}
}

Read attribute

With reflection, read all StringValue attributes. Iterate them and return the one that matches the language given as a parameter.

using System.Reflection;

namespace System
{
	public static class ExtensionMethods
	{
		public static string GetStringValue(this Enum value, string lang)
		{
			string stringValue = value.ToString();
			Type type = value.GetType();
			FieldInfo fieldInfo = type.GetField(value.ToString());
			StringValue[] attrs = fieldInfo.
				GetCustomAttributes(typeof(StringValue), false) as StringValue[];
			foreach (StringValue attr in attrs)
			{
				if (attr.Lang == lang)
				{
					return attr.Value;
				}
			}
			return stringValue;
		}
	}
}

Apply to enumerations

All supported languages can be defined as string constants. It would be pretty cool if we could define an enumeration with the languages and pass it to the StringValue constructor as the language, but that is not possible as it is not a compile-time constant.

public class Constants
{
	public const string LangEn = "en";
	public const string LangFr = "fr";
	public const string LangDe = "de";
}

public enum Messages
{
	[StringValue(Constants.LangEn, "Problem occured, try again later")]
	[StringValue(Constants.LangFr, "Problème survenu, réessayer plus tard")]
	[StringValue(Constants.LangDe, "Problem aufgetreten, " +
		"versuchen Sie es später erneut")]
	ProblemOccured,
	[StringValue(Constants.LangEn, "Successfully done")]
	[StringValue(Constants.LangFr, "Fait avec succès")]
	[StringValue(Constants.LangDe, "Erfolgreich durchgeführt")]
	Success
}

Use in code

Somewhere at the top level of your tests you should have a property or field, most likely read from configuration, that defines for which locale the current test run is.

string lang = Constants.LangFr;

This is then used to read the correct text value for the given enumeration element.

Assert.AreEqual(Messages.ProblemOccured.GetStringValue(lang), 
	App.MessageBox.GetText());

Conclusion

Multilingual testing is a challenge. Be smart and use all the tricks you can get. In this post I’ve revealed a pretty good trick for the automation. The challenge with this approach is the initial set up of the enumerations with all the translations.


Efficiently use of enumerations with string values in C#

Last Updated on by

Post summary: Using enumerations or specialised classes makes your automation tests easy to understand and maintain. Code samples show how to define and read string values on enumeration elements.

When you do automation tests and have to pass a value to a method, it is easy and natural to just use strings. There are many cases where a string is the correct solution. There are also many cases where a string can be a solution, but an enumeration or a specialised class is a better and more efficient one.

Why not strings

Take the following example – a web application with a drop down that has several options. We are using the Page objects pattern to model the page. The page object has a method which accepts the option to be selected. A string seems like a natural solution but is wrong. Although a string will work, an enumeration is the right solution. The drop down has a limited, already defined set of options that can be selected. Exposing just a string may cause misinterpretations for the consumer of your method. It is much easier to limit the consumer to several enumeration values. This way the consumer knows what data to provide and the code automatically stays clean of magic strings. If changes are needed, they are done only in the enumeration, making the code easier to maintain.
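
A minimal sketch of the idea follows. The class and member names are illustrative only and not taken from a real application; the mapping from an enumeration element to the visible text is what the rest of this post builds up.

public enum SortOption
{
	PriceAscending,
	PriceDescending,
	Newest
}

public class SearchResultsPage
{
	// Enumeration limits the consumer to the options the drop down really has
	public void SelectSortOption(SortOption option)
	{
		// Map the element to the visible text (e.g. with GetStringValue() shown later)
		// and select that text in the drop down.
	}

	// A string parameter would invite magic strings and typos in the tests:
	// public void SelectSortOption(string option) { ... }
}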

Problem with enumerations in C#

Using plain enumerations for the example given above will not work. Unlike in Java, enumerations in C# are wrappers around int or another numeric type. You are not able to attach text to an enumeration element out of the box.

Using string values with enumerations

The only way to use string values with enumerations is by adding them as an attribute to each enumeration element. It takes several steps to accomplish this.

  1. Create the attribute class that will be applied to enumeration element
  2. Create extension method that is responsible for reading string value from enumeration element
  3. Apply string value attribute to enumeration element
  4. Use in code

Below are code samples showing how to use string values with enumerations in C#. Defining and reading of the attribute is functionality built into NTestsRunner.

Define attribute

The first step is to create a class that extends System.Attribute. It has only one string property to hold the text. The text is passed in the constructor. Note that this class is defined in the System namespace so it is available by default, skipping the need to import a namespace you might not be aware of.

namespace System
{
	public class StringValue : Attribute
	{
		public string Value { get; private set; }

		public StringValue(string value)
		{
			Value = value;
		}
	}
}

Read the attribute

C# provides so-called extension methods, a great way to add new functionality to an existing type without creating a new derived type. Reading the string value from an enumeration element is done with the GetStringValue extension method. With reflection all StringValue custom attributes of the element are obtained. If any are found, the text of the first one is returned. If not, the string representation of the element is returned.

using System.Reflection;

namespace System
{
	public static class ExtensionMethods
	{
		public static string GetStringValue(this Enum value)
		{
			string stringValue = value.ToString();
			Type type = value.GetType();
			FieldInfo fieldInfo = type.GetField(value.ToString());
			StringValue[] attrs = fieldInfo.
				GetCustomAttributes(typeof(StringValue), false) as StringValue[];
			if (attrs.Length > 0)
			{
				stringValue = attrs[0].Value;
			}
			return stringValue;
		}
	}
}

Apply to enumerations

Once the StringValue class is ready, it can be applied as an attribute to any enumeration.

public enum Messages
{
	[StringValue("Problem occured, try again later")]
	ProblemOccured,
	[StringValue("Successfully done")]
	Success
}

Use in code

In code the string value is obtained from an enumeration element with the GetStringValue method.

Assert.AreEqual(Messages.ProblemOccured.GetStringValue(), App.MessageBox.GetText());

Conclusion

Using enumerations is mandatory for readable and maintainable automation. Working effectively with them will increase your value as an automation specialist.


NTestsRunner for functional automated tests

Last Updated on by

Post summary: NTestsRunner implementation details and features.

In a previous post I’ve described unit testing frameworks and why they are not suitable for running functional automated tests. I introduced NTestsRunner – a very simple runner that can be used for running your automation tests. This post is dedicated to the implementation details of NTestsRunner.

Verifications

It is important in functional testing to be able to place several verification points in one test. For this purpose the abstract class Verification is implemented. It has two properties that store details about the verification and the time it was taken. The constructor receives comma separated values. If zero values are passed, the result is an empty string. If one value is passed, it becomes the result. If more than one is passed, the first one is taken as a formatting string and the others are used to build up the result. The logic is similar to the string.Format(String, Object[]) method.

public abstract class Verification
{
	public string Result { get; private set; }
	public DateTime ExecutedAt { get; private set; }

	public Verification(params object[] args)
	{
		...
	}
}
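
The constructor body is elided above. Based on the description, a possible implementation could look like the sketch below. This is my reading of the behaviour, not the actual NTestsRunner code, and it assumes using System; is in place.

public Verification(params object[] args)
{
	ExecutedAt = DateTime.Now;
	if (args == null || args.Length == 0)
	{
		// No values passed - result is an empty string
		Result = string.Empty;
	}
	else if (args.Length == 1)
	{
		// A single value becomes the result as-is
		Result = args[0].ToString();
	}
	else
	{
		// First value is the formatting string, the rest are its arguments
		object[] values = new object[args.Length - 1];
		Array.Copy(args, 1, values, 0, values.Length);
		Result = string.Format(args[0].ToString(), values);
	}
}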

Passed or Failed

In automation a verification may have two outcomes – passed or failed. This is why two concrete classes extend Verification: VerificationPassed and VerificationFailed. They do not add any other functionality and use the parent class constructor. This is an example of how to instantiate objects from those classes:

string value = "number";
int number = 1;
Verification result =
	new VerificationFailed("This is formatting string {0} {1}. ",
		value,
		number);

Test case result

A test case is generally a set of conditions to verify whether a given scenario works as per the user requirements. In the automation world a test case is a test method with several verification points inside. In NTestsRunner the TestCaseResult class represents the idea of a test case. It has properties for name, time to run and a list of all verifications, with counts of passed and failed ones.

public class TestCaseResult
{
	public List<Verification> Verifications = new List<Verification>();
	public string Name { get; set; }
	public int VerificationsFailed { get; set; }
	public int VerificationsPassed { get; set; }
	public TimeSpan Time { get; set; }
}

Test plan result

TestPlanResult in NTestsRunner has nothing to do with the test plan term from the QA world. Here it is a representation of a test class with test methods inside. It has properties for name and time to run. There is also a list of all TestCaseResults, i.e. the test methods in that class, as well as counters for passed and failed test cases and for all passed and failed verifications inside all TestCaseResults.

public class TestPlanResult
{
	public List<TestCaseResult> TestCases = new List<TestCaseResult>();
	public string Name { get; set; }
	public int TestCasesPassed { get; private set; }
	public int TestCasesFailed { get; private set; }
	public int VerificationsPassed { get; private set; }
	public int VerificationsFailed { get; private set; }
	public TimeSpan Time { get; private set; }

	public void Count()
	{
		...
	}
}
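
The Count() body is also elided. Judging by the counters described above, it presumably aggregates the results of the contained test cases, roughly like this (a sketch, not the actual implementation):

public void Count()
{
	TestCasesPassed = 0;
	TestCasesFailed = 0;
	VerificationsPassed = 0;
	VerificationsFailed = 0;
	Time = TimeSpan.Zero;

	foreach (TestCaseResult testCase in TestCases)
	{
		// A test case counts as failed if it has at least one failed verification
		if (testCase.VerificationsFailed > 0)
		{
			TestCasesFailed++;
		}
		else
		{
			TestCasesPassed++;
		}
		VerificationsPassed += testCase.VerificationsPassed;
		VerificationsFailed += testCase.VerificationsFailed;
		Time += testCase.Time;
	}
}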

Class and method attributes

In order to make a class a test class, it should have the [TestClass] attribute. To make a method a test one, it should have the [TestMethod] attribute. Just the attribute is not enough though. The method should also have a special signature required by NTestsRunner.

Test method signature

In order to run without exceptions, a test method needs to conform to two rules:

  1. To have the [TestMethod] attribute
  2. To receive a List<Verification> verifications parameter in its signature, i.e.
    [TestMethod]
    public void TestMethod1(List<Verification> verifications)
    

Configurations

Configuration options can be found on the NTestsRunner home page.

Execution

Once an NTestsRunner object is instantiated and configured, tests are run with its Execute() method. Inside this method all classes from the calling assembly (the one that holds the tests) are taken. If TestsToExecute is configured, only the classes whose names match the given values are taken; otherwise all classes with the [TestClass] attribute are taken. Methods from each class are taken in order of appearance in the class. If a method has the [TestMethod] attribute, it is executed by passing a List<Verification> object to it. Inside the method verifications are collected into that list and stored in a TestCaseResult object. After the method is run, the TestCaseResult is added to its parent TestPlanResult, which is added to the list of all results. In the end the results are saved as XML and HTML.
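
A simplified sketch of such an execution loop with reflection is shown below. It is illustrative only, not the actual NTestsRunner code, and it assumes the [TestClass] and [TestMethod] attributes are classes named TestClassAttribute and TestMethodAttribute.

Assembly testsAssembly = Assembly.GetCallingAssembly();
foreach (Type type in testsAssembly.GetTypes())
{
	// Skip everything that is not marked as a test class
	if (!type.IsDefined(typeof(TestClassAttribute), false))
	{
		continue;
	}
	object instance = Activator.CreateInstance(type);
	foreach (MethodInfo method in type.GetMethods())
	{
		if (!method.IsDefined(typeof(TestMethodAttribute), false))
		{
			continue;
		}
		// The runner owns the list that collects all verifications for this test
		List<Verification> verifications = new List<Verification>();
		method.Invoke(instance, new object[] { verifications });
		// ... wrap verifications into a TestCaseResult and add it to the TestPlanResult
	}
}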

Results in jUnit XML

In order to integrate with CI tools such as Jenkins or Bamboo, results are exported to an XML file after execution has finished. The file is named Results.xml and is located in the test results folder. The XML format follows junit-4.xsd.

Results in HTML

Test results are also saved as an HTML report for better readability. The file is named Results.html and is located in the test results folder.

Usage

In order to use NTestsRunner a console application project is needed. This project will hold the test classes, like the one below. Take into consideration that this is a very simplified usage pattern. In reality the Page objects design pattern will be used and the page objects will make the verifications and return them.

[TestClass]
public class TestClass1
{
	[TestMethod]
	public void TestMethod1(List<Verification> verifications)
	{
		// Do some actions
		verifications.Add(new VerificationFailed("There is error"));
		// Do some actions
		verifications.Add(new VerificationPassed("Everything is OK"));
	}
}

In its Main method a new instance of NTestsRunner is created, configurations are done and the test execution is started. It is that simple to use.

class Program
{
	static void Main(string[] args)
	{
		NTestsRunnerSettings settings = new NTestsRunnerSettings();
		settings.TestResultsDir = @"C:\temp";
		settings.MaxTestCaseRuntimeMinutes = 2;
		settings.TestsToExecute.Add("TestClass1");
		settings.PreventScreenLock = true;

		NTestsRunner runner = new NTestsRunner(settings);
		runner.Execute();
	}
}

Pros and cons

NTestsRunner has its pros and cons.
Pros are:

  • Pretty easy to use
  • Open source and can be customised to your specific needs
  • Gives you the ability to make several verifications in one test; a failed verification doesn’t break the current test method
  • Tests are stored in a console application that can be easily run
  • Results are saved in jUnit XML for CI integration
  • Results are saved in HTML

Cons are:

  • Test methods must have a specific signature
  • It is not easy to migrate existing tests to the new format

Conclusions

NTestsRunner is a pretty good tool that is very easy to use and made especially for running functional automated tests. You can definitely give it a try.


Running functional automation tests

Last Updated on by

Post summary: Unit testing frameworks are not very suitable for running functional tests. NTestsRunner is an alternative way of running functional automated tests.

Unit testing

Unit testing is focused on testing code at a low level. Methods, sets of methods or modules are tested by writing test code which invokes them with specific arguments. In unit testing all external dependencies (database, file system, network, etc.) are removed. Those resources are simulated with so-called mock objects, which are controlled by the test designer and have predictable behaviour. Running a piece of code without external dependencies happens almost immediately, so unit tests execute in a very short amount of time. A set of unit tests taking longer than 5 minutes is generally considered not well designed. Unit tests are strictly focused: one test verifies only one condition and is not related in any way to other tests.

Unit testing frameworks

Unit testing frameworks conform to the purpose and design of unit tests. Tests should not depend on each other. For this reason some unit testing frameworks execute tests in random order (xUnit.net or MS Unit Testing Framework), others like NUnit in alphabetic order of the test method names. Checking for given conditions is done with assertions. If one assert fails, the current test execution is stopped and the test is marked as failed.

Functional testing

Functional testing is focused on ensuring that the software product works as per the user requirements. Real life software has external dependencies, and most software products have some kind of user interface. Automation tests are focused on verifying that the UI works correctly. Transferring and rendering data to the UI takes time, database operations take time, file and network operations also take time, etc. In general functional tests are more complex and take much longer to execute. In order to be efficient, several conditions are checked in each test. Functional tests can be manual and automated; in the current post when I mention functional tests I mean only automated ones.

Requirement for running functional tests – many verifications

In a perfect situation one functional test should verify only one condition. In reality, because of the many external dependencies, test execution time is large, and time matters. In order to shorten this time we make several checks in one test. For example, in an e-commerce web site an order is placed. On the order confirmation page we verify that there is an order number, that the address is the same one used during checkout and that the user’s email is correct. We can also verify inside the user’s mailbox that the received mail is correct, and check in the database for some properties of the order. If we had to place one order for each check, the tests would take a significantly longer time. To be efficient we do all checks with one order in one test case, as sketched below. We need a framework which allows multiple verifications in one test. Furthermore, if a verification fails, test execution should continue.
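
To illustrate the idea, a test with several verification points can collect failures instead of stopping at the first one, roughly like the sketch below. The confirmationPage object and the expected values are placeholders for whatever your framework provides.

// Collect failures instead of stopping at the first failed check
List<string> failures = new List<string>();

string orderNumber = confirmationPage.GetOrderNumber();
if (string.IsNullOrEmpty(orderNumber))
{
	failures.Add("Order number is missing on the confirmation page");
}

string shippingAddress = confirmationPage.GetShippingAddress();
if (!expectedAddress.Equals(shippingAddress))
{
	failures.Add("Shipping address differs from the one used during checkout");
}

// ... further checks against the received email and the database

// Report everything that went wrong at the end of the test
Assert.AreEqual(0, failures.Count, string.Join("; ", failures));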

Requirement for running functional tests – controlled sequence

Something that is really good to avoid, but sometimes cannot be, is test dependency. Sometimes one test needs another to have done something before continuing. As I said, this should be avoided, but in order to be efficient you sometimes have to trade off good test design against execution time. For example, you may want to cancel and then refund an order placed in an e-commerce web site. Generally it is best to place a new order for this test, but if placing an order takes too much time, an option is to reuse an existing order from a previous test. We need to be able to control the test case execution order.

NTestsRunner

In order to have control over the tests and use many verifications I’ve created NTestsRunner. Its code is in the GitHub NTestsRunner repository. It is a .NET library; you create a console application with your tests and use the library within it. Tests are annotated in a similar fashion as with unit testing frameworks, they are executed in sequential order and can contain many verifications. Results are saved as HTML and as XML in jUnit format. NTestsRunner is described in more detail in the NTestsRunner for functional automated tests post.


WPF automation – running the tests

Last Updated on by

Post summary: How to sell your automation to management. Guide for running the tests unattended.

References

This post is part of the Automation of WPF applications with Telerik Testing Framework and TestStack White series. The sample application can be found in the GitHub SampleApp repository.

Test frameworks need mouse and keyboard

As you have noticed when running the examples, it is not possible to do anything else while the tests are running, because both Telerik Testing Framework and TestStack White use the mouse and keyboard to click and type text. This is how both frameworks are designed and work. It doesn’t seem very effective if you are wasting your time watching tests run on your workstation instead of doing something productive. This is not a good argument when advocating your automation.

The essence of automation is being effective

This topic is not about automation vs. manual testing, so I will not go in that direction. I’ll just say that there are still companies lacking the management willpower to support and embrace automation. So we need to be good salesmen!

How to drive your automation to success

As I said, we need to be wise when promoting our automation in order to get more time and resources. The sales strategy is pretty easy and straightforward, not requiring huge investments:

  • Frameworks that are free to use and pretty easy to work with.
  • Some enthusiasm to make the first automation. Remember, be smart and first automate the most repetitive scenarios. This will show real results and help you buy time for further automation.
  • A virtual (or real) machine with the tests running on it during the night
  • Mail with the results sent to management in the morning

Run the tests unattended

You get the machine, set up the framework and set up scheduled tasks for deploying the latest application under test and the latest test code. You run the tests and get into trouble: the tests do not run! This is because there are special requirements for scheduling an unattended run. There must be an active Windows session in order for the mouse and keyboard to be used. Once the session is interrupted, the tests stop. There is a detailed KB article on the topic with several possible solutions.

Working solution

You can try the solutions in the article to see which works best for you. For me the solution is to have a remote desktop inside a remote desktop. This requires a Windows Server installation, as only the server edition provides two simultaneous remote desktop sessions. There are unofficial patches for non-server versions which I haven’t tried and cannot comment on. Two local users are needed on the Windows Server, with no desktop locking or screen saver (domain users will most likely have their desktop locked after a while). Log in from your machine to the test machine with the first user. From the first user’s session, log in to the same test machine with the second user. The tests are started from the second user’s session. Once the tests are started you can freely close the remote desktop (do not log out!). You can create a scheduled task which runs only when the user is logged in and just wait for the results in the morning. If two accounts are too much overhead, there is an option to use software that prevents your computer from locking. See the prevent screen lock thread to check whether something works for you.
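
If you go the software route, one generic option is to tell Windows that the display is required while the tests run, via the Win32 SetThreadExecutionState API. The sketch below is a common approach, not necessarily what NTestsRunner’s PreventScreenLock setting does; note that it keeps the display awake but may not override a domain lock policy.

using System;
using System.Runtime.InteropServices;

public static class ScreenLockPreventer
{
	[Flags]
	private enum ExecutionState : uint
	{
		EsSystemRequired = 0x00000001,
		EsDisplayRequired = 0x00000002,
		EsContinuous = 0x80000000
	}

	[DllImport("kernel32.dll", SetLastError = true)]
	private static extern ExecutionState SetThreadExecutionState(ExecutionState esFlags);

	// Call before the tests start to keep the display on
	public static void Prevent()
	{
		SetThreadExecutionState(ExecutionState.EsContinuous
			| ExecutionState.EsDisplayRequired
			| ExecutionState.EsSystemRequired);
	}

	// Call after the tests finish to restore normal behaviour
	public static void Restore()
	{
		SetThreadExecutionState(ExecutionState.EsContinuous);
	}
}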

Conclusions

Automation is an exciting field of career development for test engineers. This blog is dedicated to automation testing and you will find very useful and interesting topics in it. I would definitely encourage you to give it a try. Good luck!


WPF automation – using the elements

Last Updated on by

Post summary: Use the already created Page Objects and build up the test framework.

References

This post is part of the Automation of WPF applications with Telerik Testing Framework and TestStack White series. The sample application can be found in the GitHub SampleApp repository.

Page Objects holder

Below is App.cs, which is a representation of the application under test.

using ArtOfTest.WebAii.Wpf;
using White.Core;
using White.Core.UIItems.WindowItems;

namespace SampleApp.Tests.Framework.Elements
{
	public class App
	{
		public WpfApplication ApplicationWebAii { get; private set; }
		public Application ApplicationWhite { get; private set; }

		public App(WpfApplication webAiiApp, Application whiteApp)
		{
			ApplicationWebAii = webAiiApp;
			ApplicationWhite = whiteApp;
		}

		public MainWindow MainWindow
		{
			get
			{
				return new MainWindow(ApplicationWebAii
					.WaitForWindow(MainWindow.WINDOW_NAME).Find);
			}
		}

		public OpenFile OpenFile
		{
			get
			{
				return new OpenFile(GetWindowByName("Open"));
			}
		}

		public MessageBox MessageBox
		{
			get
			{
				return new MessageBox(GetWindowByName(""));
			}
		}

		private Window GetWindowByName(string windowName)
		{
			// Workaround as method GetWindow(string title) is not working
			foreach (Window window in ApplicationWhite.GetWindows())
			{
				if (windowName.Equals(window.Name))
				{
					return window;
				}
			}
			return null;
		}
	}
}

The constructor takes an instance of Telerik Testing Framework’s application (WpfApplication) and TestStack White’s application (Application). Those are stored inside the App instance.

Access the Page Objects

Each window in the real application is represented by a property in the App class. When accessed, a new object of the corresponding page object class is created and its elements can be used.

WPF page objects require a VisualFind in order to be instantiated. It is obtained by first locating the window with Telerik’s

public WpfWindow WaitForWindow(string caption);

From the located window we need only the VisualFind, which is used internally to locate elements on that particular window.

WinForms page objects require White’s Window instance in order to be instantiated. The window is located by

public virtual Window GetWindow(string title);

I found this method not always working, so I’ve made a workaround method

private Window GetWindowByName(string windowName);
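
For completeness, the page object constructors consuming these look roughly like the sketch below. The actual classes are part of the sample framework from the previous posts in this series, so treat the bodies and the VisualFind namespace here as assumptions for illustration only.

using ArtOfTest.WebAii.Wpf; // namespace assumed, adjust to where VisualFind lives in your Telerik version
using White.Core.UIItems.WindowItems;

// WPF page object - built from Telerik's VisualFind for the located window
public class MainWindow
{
	private readonly VisualFind find;

	public MainWindow(VisualFind find)
	{
		this.find = find;
	}

	// Element properties and actions use this.find to locate controls
}

// WinForms dialog page object - built from White's Window
public class OpenFile
{
	private readonly Window window;

	public OpenFile(Window window)
	{
		this.window = window;
	}
}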

New Page Objects vs. Cached Page Objects

In the example above, every time an action is required a new page object is instantiated. In some cases instantiating the object may take longer, or you might need some properties of the object preserved during the tests. In such cases you may use the Singleton design pattern and instantiate only one object.

private MessageBox messageBox = null;
public MessageBox MessageBox
{
	get
	{
		if (messageBox == null)
		{
			messageBox = new MessageBox(GetWindowByName(""));
		}
		return messageBox;
	}
}

Both approaches have pros and cons. With new page objects you always work with a fresh instance without any previous state, but instantiating the objects may take more time and you cannot carry state between actions. Cached objects may be much faster, but having internal state may lead to unexpected bugs in your automation.

Base test

Finally, to make it all work we need an instance of App. The instance is created in the BaseTest.cs class.

using ArtOfTest.WebAii.Core;
using SampleApp.Tests.Framework.Elements;
using White.Core;

namespace SampleApp.Tests.Framework.Tests
{
	public class BaseTest
	{
		protected App App { get; set; }
		private string applicationPath =
			"C:\\SampleApp\\SampleApp\\bin\\Debug\\SampleApp.exe";

		protected void Start()
		{
			if (App == null)
			{
				Application appWhite = Application.Launch(applicationPath);
				Manager manager = new Manager(false);
				manager.Start();
				App = new App(
					manager.ConnectToApplication(appWhite.Process), appWhite);
			}
		}

		protected void Stop()
		{
			if (App != null && App.ApplicationWhite != null)
			{
				App.ApplicationWhite.Kill();
			}
			App = null;
		}
	}
}

All tests inherit from the base test class. Initialise and clean up code is added in the base test. In our case the Start() method is the initialiser. It must be called in order to instantiate the App class. The App property is protected, so every extending class has access to it.

Initialise the frameworks

In order to start the application under test we need the full path to the exe file. In this example it is hard coded, but in real life it will be configurable. Start the application with White’s

public static Application Launch(string executable);

Once started, connect to it with the Telerik framework by creating a Manager and using its

public WpfApplication ConnectToApplication(Process proc, string pid = null);

The process is obtained from White’s Application.Process property. The opposite launch order does not work; White is not able to Attach to an already running process.

Use page objects

Once the Start() method is called, the application under test is started and both frameworks are connected to it, you can simply do this in your test:

App.MainWindow.ClickBrowseButton();

This will find and create a new instance of MainWindow and then find and click the Browse button. Your framework defines the actions on elements, which are later used in the actual tests. Once all the work on the framework has been done, it is that simple to build your tests.

Clean up

The Stop() method is called at the end of the test in order to close the application under test by killing the underlying process.

The tests

This is a unit test created with the MS Unit Testing Framework in order to demonstrate real testing of the application.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using SampleApp.Tests.Framework.Tests;

namespace SampleApp.Tests
{
	[TestClass]
	public class UnitTest1 : BaseTest
	{
		[TestInitialize]
		public void Initialise()
		{
			Start();
		}

		[TestMethod]
		public void OpenFile_OnCancel_GivesMessage()
		{
			App.MainWindow.ClickBrowseButton();
			App.OpenFile.ClickCancelButton();
			Assert.AreEqual("Problem occured, try again later",
				App.MessageBox.GetText());
			App.MessageBox.ClickOkButton();
		}

		[TestMethod]
		public void OpenFile_OnAttachFile_GivesMessageAndFileIsShown()
		{
			string filePath = @"C:\SampleApp\SampleApp\bin\Debug\HappyFace.jpg";
			App.MainWindow.ClickBrowseButton();
			App.OpenFile.EnterFileName(filePath);
			App.OpenFile.ClickOpenButton();
			Assert.AreEqual("Successfully done", App.MessageBox.GetText());
			App.MessageBox.ClickOkButton();
			Assert.AreEqual(filePath, App.MainWindow.GetFilePathAtIndex(1));
		}

		[TestCleanup]
		public void CleanUp()
		{
			Stop();
		}
	}
}

Unit testing frameworks

Unit testing frameworks are designed to run tests in random order. Before each test the method annotated with [TestInitialize] is run, which in our case starts the application. After each test the method annotated with [TestCleanup] is run, which in our case stops the application. For this simple application running the tests with a unit testing framework is OK. However, we are not doing unit tests but functional ones, so for bigger and more complex tests unit testing frameworks are not very convenient. I’ve created a very simple tests runner; this post describes the need for such a runner.

This post shows how to build up the framework based on page objects. The next post is WPF automation – running the tests.
