How to use AWS Transcribe in real-time with React and .NET Core


Post summary: Practical code example of how to use AWS Transcribe from an application with a React frontend and a .NET Core backend.

AWS Transcribe

Amazon Transcribe makes it easy for developers to add speech to text capabilities to their applications. Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately. Amazon Transcribe can be used to transcribe customer service calls, automate subtitling, and generate metadata for media assets to create a fully searchable archive. You can use Amazon Transcribe Medical to add medical speech to text capabilities to clinical documentation applications.

Real-time usage

Streaming Transcription utilizes HTTP/2's implementation of bidirectional streams to handle streaming audio and transcripts between your application and the Amazon Transcribe service. Bidirectional streams allow your application to send and receive data at the same time, resulting in quicker, more reactive results. Read more, along with a Java example, in the Amazon Transcribe now supports real-time transcriptions article. The way to achieve bidirectional communication in the browser is to use a WebSocket.

How to use AWS Transcribe

The first thing that is needed is an AWS account with sufficient Transcribe privileges; read more in the How Amazon Transcribe Works with IAM article. Once you have this, generate an AWS AccessKey and SecretKey. With the AccessKey, SecretKey, and Region, a special pre-signed URL is generated, which is a rather complex process. A WebSocket connection is then opened with this pre-signed URL. A full explanation of the process can be found in the Using Amazon Transcribe Streaming with WebSockets article. Another very useful example of how to generate the pre-signed URL and open a WebSocket connection is the amazon-archives/amazon-transcribe-websocket-static GitHub repo.

Issues with generating pre-signed URL in the browser

So far so good, things are starting to make sense. In order to generate a pre-signed URL in the browser, the AWS AccessKey and SecretKey are needed. One option is to make the users provide them every time they want to use the application, which is not really user-friendly. Another option is to embed them so the web application can generate the URL automatically, which exposes the AWS credentials and is not really an option either. The solution is to have a backend application, which can even be a Lambda function, calculate the pre-signed URL.

Generate the pre-signed URL in C#

Below is an example .NET Core controller that generates the pre-signed URL in C#; a sketch of the Config class it depends on follows after the controller code:

public class PresignedUrlController : ControllerBase
{
	private const string Service = "transcribe";
	private const string Path = "/stream-transcription-websocket";
	private const string Scheme = "AWS4";
	private const string Algorithm = "HMAC-SHA256";
	private const string Terminator = "aws4_request";
	private const string HmacSha256 = "HMACSHA256";

	private readonly string _region;
	private readonly string _awsAccessKey;
	private readonly string _awsSecretKey;

	public PresignedUrlController(IOptions<Config> options)
	{
		_region = options.Value.AWS_SPEECH_REGION;
		_awsAccessKey = options.Value.AWS_SPEECH_ACCESS_KEY;
		_awsSecretKey = options.Value.AWS_SPEECH_SECRET_KEY;
	}

	[HttpPost]
	public ActionResult<string> GetPresignedUrl()
	{
		return GenerateUrl();
	}

	private string GenerateUrl()
	{
		var host = $"transcribestreaming.{_region}.amazonaws.com:8443";
		var dateNow = DateTime.UtcNow;
		var dateString = dateNow.ToString("yyyyMMdd");
		var dateTimeString = dateNow.ToString("yyyyMMddTHHmmssZ");
		var credentialScope = $"{dateString}/{_region}/{Service}/{Terminator}";
		var query = GenerateQueryParams(dateTimeString, credentialScope);
		var signature = GetSignature(host, dateString, dateTimeString, credentialScope);
		return $"wss://{host}{Path}?{query}&X-Amz-Signature={signature}";
	}

	private string GenerateQueryParams(string dateTimeString, string credentialScope)
	{
		var credentials = $"{_awsAccessKey}/{credentialScope}";
		var result = new Dictionary<string, string>
		{
			{"X-Amz-Algorithm", "AWS4-HMAC-SHA256"},
			{"X-Amz-Credential", credentials},
			{"X-Amz-Date", dateTimeString},
			{"X-Amz-Expires", "30"},
			{"X-Amz-SignedHeaders", "host"},
			{"language-code", "en-US"},
			{"media-encoding", "pcm"},
			{"sample-rate", "44100"}
		};
		return string.Join("&", result.Select(x => $"{x.Key}={Uri.EscapeDataString(x.Value)}"));
	}

	private string GetSignature(string host, string dateString, string dateTimeString, string credentialScope)
	{
		var canonicalRequest = CanonicalizeRequest(Path, host, dateTimeString, credentialScope);
		var canonicalRequestHashBytes = ComputeHash(canonicalRequest);

		// construct the string to be signed
		var stringToSign = new StringBuilder();
		stringToSign.AppendFormat("{0}-{1}\n{2}\n{3}\n", Scheme, Algorithm, dateTimeString, credentialScope);
		stringToSign.Append(ToHexString(canonicalRequestHashBytes, true));

		var kha = KeyedHashAlgorithm.Create(HmacSha256);
		kha.Key = DeriveSigningKey(HmacSha256, _awsSecretKey, _region, dateString, Service);

		// compute the final signature for the request, place into the result and return to the 
		// user to be embedded in the request as needed
		var signature = kha.ComputeHash(Encoding.UTF8.GetBytes(stringToSign.ToString()));
		var signatureString = ToHexString(signature, true);
		return signatureString;
	}

	private string CanonicalizeRequest(string path, string host, string dateTimeString, string credentialScope)
	{
		var canonicalRequest = new StringBuilder();
		canonicalRequest.AppendFormat("{0}\n", "GET");
		canonicalRequest.AppendFormat("{0}\n", path);
		canonicalRequest.AppendFormat("{0}\n", GenerateQueryParams(dateTimeString, credentialScope));
		canonicalRequest.AppendFormat("{0}\n", $"host:{host}");
		canonicalRequest.AppendFormat("{0}\n", "");
		canonicalRequest.AppendFormat("{0}\n", "host");
		canonicalRequest.Append(ToHexString(ComputeHash(""), true));
		return canonicalRequest.ToString();
	}

	private static string ToHexString(byte[] data, bool lowercase)
	{
		var sb = new StringBuilder();
		for (var i = 0; i < data.Length; i++)
		{
			sb.Append(data[i].ToString(lowercase ? "x2" : "X2"));
		}
		return sb.ToString();
	}

	private static byte[] DeriveSigningKey(string algorithm, string awsSecretAccessKey, string region, string date, string service)
	{
		char[] ksecret = (Scheme + awsSecretAccessKey).ToCharArray();
		byte[] hashDate = ComputeKeyedHash(algorithm, Encoding.UTF8.GetBytes(ksecret), Encoding.UTF8.GetBytes(date));
		byte[] hashRegion = ComputeKeyedHash(algorithm, hashDate, Encoding.UTF8.GetBytes(region));
		byte[] hashService = ComputeKeyedHash(algorithm, hashRegion, Encoding.UTF8.GetBytes(service));
		return ComputeKeyedHash(algorithm, hashService, Encoding.UTF8.GetBytes(Terminator));
	}

	private static byte[] ComputeKeyedHash(string algorithm, byte[] key, byte[] data)
	{
		var kha = KeyedHashAlgorithm.Create(algorithm);
		kha.Key = key;
		return kha.ComputeHash(data);
	}

	private static byte[] ComputeHash(string data)
	{
		return HashAlgorithm.Create("SHA-256").ComputeHash(Encoding.UTF8.GetBytes(data));
	}
}
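
The Config class injected through IOptions<Config> is not shown in the post. A minimal sketch of it, with property names taken from the constructor above and registration assumed to happen in Startup.cs via services.Configure<Config>(Configuration), could look like this:

public class Config
{
	// values can come from appsettings.json or from environment variables
	public string AWS_SPEECH_REGION { get; set; }
	public string AWS_SPEECH_ACCESS_KEY { get; set; }
	public string AWS_SPEECH_SECRET_KEY { get; set; }
}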

Use the pre-signed URL in a React application

One of the examples above shows raw code for opening and using a WebSocket. The code below shows how to do the same in React with TypeScript. Note that there is a WebSocket closer configured to 15 seconds with setTimeout(). It is important to have some kind of a breaker, because leaving the socket open can generate a significant AWS bill.

import React from 'react';
import { EventStreamMarshaller, Message } from '@aws-sdk/eventstream-marshaller';
import { toUtf8, fromUtf8 } from '@aws-sdk/util-utf8-node';
import mic from 'microphone-stream';
import Axios from 'axios';

const sampleRate = 44100;
const eventStreamMarshaller = new EventStreamMarshaller(toUtf8, fromUtf8);

export default () => {
  const [webSocket, setWebSocket] = React.useState<WebSocket>();
  const [inputSampleRate, setInputSampleRate] = React.useState<number>();

  const streamAudioToWebSocket = async (userMediaStream: any) => {
    const micStream = new mic();

    micStream.on('format', (data: any) => {
      setInputSampleRate(data.sampleRate);
    });

    micStream.setStream(userMediaStream);

    const url = await Axios.post<string>('http://localhost:3016/url');

    //open up our WebSocket connection
    const socket = new WebSocket(url.data);
    socket.binaryType = 'arraybuffer';

    socket.onopen = () => {
      micStream.on('data', (rawAudioChunk: any) => {
        // the audio stream is raw audio bytes. Transcribe expects PCM with additional metadata, encoded as binary
        const binary = convertAudioToBinaryMessage(rawAudioChunk);
        if (socket.readyState === socket.OPEN) {
          socket.send(binary);
        }
      });
    };

    socket.onmessage = (message: MessageEvent) => {
      const messageWrapper = eventStreamMarshaller.unmarshall(Buffer.from(message.data));
      const messageBody = JSON.parse(String.fromCharCode.apply(String, messageWrapper.body as any));
      if (messageWrapper.headers[':message-type'].value === 'event') {
        handleEventStreamMessage(messageBody);
      } else {
        console.error(messageBody.Message);
        stop(socket);
      }
    };

    socket.onerror = () => {
      stop(socket);
    };

    socket.onclose = () => {
      micStream.stop();
    };

    setWebSocket(socket);

    setTimeout(() => {
      stop(socket);
    }, 15000);

    console.log('Amazon started');
  };

  const convertAudioToBinaryMessage = (audioChunk: any): any => {
    const raw = mic.toRaw(audioChunk);

    if (raw == null) return;

    // downsample and convert the raw audio bytes to PCM
    const downsampledBuffer = downsampleBuffer(raw, inputSampleRate, sampleRate);
    const pcmEncodedBuffer = pcmEncode(downsampledBuffer);

    // add the right JSON headers and structure to the message
    const audioEventMessage = getAudioEventMessage(Buffer.from(pcmEncodedBuffer));

    //convert the JSON object + headers into a binary event stream message
    const binary = eventStreamMarshaller.marshall(audioEventMessage);

    return binary;
  };

  const getAudioEventMessage = (buffer: Buffer): Message => {
    // wrap the audio data in a JSON envelope
    return {
      headers: {
        ':message-type': {
          type: 'string',
          value: 'event',
        },
        ':event-type': {
          type: 'string',
          value: 'AudioEvent',
        },
      },
      body: buffer,
    };
  };

  const pcmEncode = (input: any) => {
    var offset = 0;
    var buffer = new ArrayBuffer(input.length * 2);
    var view = new DataView(buffer);
    for (var i = 0; i < input.length; i++, offset += 2) {
      var s = Math.max(-1, Math.min(1, input[i]));
      view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7fff, true);
    }
    return buffer;
  };

  const downsampleBuffer = (buffer: any, inputSampleRate: number = 44100, outputSampleRate: number = 16000) => {
    if (outputSampleRate === inputSampleRate) {
      return buffer;
    }

    var sampleRateRatio = inputSampleRate / outputSampleRate;
    var newLength = Math.round(buffer.length / sampleRateRatio);
    var result = new Float32Array(newLength);
    var offsetResult = 0;
    var offsetBuffer = 0;

    while (offsetResult < result.length) {
      var nextOffsetBuffer = Math.round((offsetResult + 1) * sampleRateRatio);

      var accum = 0,
        count = 0;

      for (var i = offsetBuffer; i < nextOffsetBuffer && i < buffer.length; i++) {
        accum += buffer[i];
        count++;
      }

      result[offsetResult] = accum / count;
      offsetResult++;
      offsetBuffer = nextOffsetBuffer;
    }

    return result;
  };

  const handleEventStreamMessage = (messageJson: any) => {
    const results = messageJson.Transcript.Results;
    if (results.length > 0) {
      if (results[0].Alternatives.length > 0) {
        const transcript = decodeURIComponent(escape(results[0].Alternatives[0].Transcript));
        // if this transcript segment is final, add it to the overall transcription
        if (!results[0].IsPartial) {
          const text = transcript.toLowerCase().replace('.', '').replace('?', '').replace('!', '');
          console.log(text);
        }
      }
    }
  };

  const start = () => {
    // first we get the microphone input from the browser (as a promise)...
    window.navigator.mediaDevices
      .getUserMedia({
        video: false,
        audio: true,
      })
      // ...then we convert the mic stream to binary event stream messages when the promise resolves
      .then(streamAudioToWebSocket)
      .catch(() => {
        console.error('Please check that your microphone is working and try again.');
      });
  };

  const stop = (socket: WebSocket) => {
    if (socket) {
      socket.close();
      setWebSocket(undefined);
      console.log('Amazon stopped');
    }
  };

  return (
    <div className="App">
      <button onClick={() => (webSocket ? stop(webSocket) : start())}>{webSocket ? 'Stop' : 'Start'}</button>
    </div>
  );
};

TypeScript declaration

The microphone-stream module does not ship with TypeScript type definitions. In order to satisfy the compiler, create a file named microphone-stream.d.ts containing the single line declare module 'microphone-stream'.

Conclusion

AWS Transcribe is a really easy-to-use service, which does not require significant implementation effort. Comparing it with Google Speech-To-Text, which I have also used, the latter is much harder to make work.


AWS examples in C# – structured logging in .NET Core and AWS Lambda


Post summary: Code examples of how to do structured logging in .NET Core and C# AWS Lambda.

This post is part of AWS examples in C# – working with SQS, DynamoDB, Lambda, ECS series. The code used for this series of blog posts is located in aws.examples.csharp GitHub repository.

Structured logging

In the general case, log files are big text files with no structure in them. This makes it hard to search and analyze them. The idea of structured logging is to write log files in a given format, such as JSON or XML, so they can later be machine processed, e.g. for search or big data analysis. In the current post, I will describe how to log in JSON format. For example, one log entry is:

[12:17:16 INF] New Movie published with Die Hard, {"Title": "Die Hard", "Genre": "Action", "$type": "Movie"}

This is simple text with the Movie object appended as JSON. In order to process it, a parser has to be implemented that extracts the date-time and the log level from the square brackets; searching into the content itself is hard, because it has to be done with lots of regular expressions. With structured logging, the same entry looks like this:

{
	"@t": "2020-03-29T10:21:24.2907688Z",
	"@mt": "New Movie published with {Title}, {@Content}",
	"Title": "Die Hard",
	"Content": {
		"Title": "Die Hard",
		"Genre": "Action",
		"$type": "Movie"
	},
	"SourceContext": "SqsWriter.Controllers.PublishController",
	"ActionId": "20de3310-ebce-48d5-8f9c-28914e199937",
	"ActionName": "SqsWriter.Controllers.PublishController.PublishMovie (SqsWriter)",
	"RequestId": "0HLUJPBNDGPSJ:00000001",
	"RequestPath": "/api/publish/movie",
	"SpanId": "|7bab219a-415f5c121e1756f5.",
	"TraceId": "7bab219a-415f5c121e1756f5",
	"ParentId": "",
	"ConnectionId": "0HLUJPBNDGPSJ"
}

There is much more data in the latter, and it is also in JSON format, which makes it easy to search with JsonPath.

JSON Path

JsonPath is a way to navigate into a JSON document, very similar to XPath for XML. With the JSONPath Online Evaluator it is easy to try out some expressions. Both $.Content.Title and $.Title resolve to Die Hard. So it is very easy for a tool that understands JsonPath to query logs where some JSON entity matches a given rule, e.g. $.Title equals Die Hard.
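
As an illustration, here is a minimal C# sketch of evaluating those JsonPath expressions against such a log entry, using Newtonsoft.Json (the library choice and the file name are my assumptions, not part of the original setup):

using System;
using System.IO;
using Newtonsoft.Json.Linq;

public static class LogQueryExample
{
	public static void Main()
	{
		// one structured log entry in compact JSON format, read from a file
		var entry = JObject.Parse(File.ReadAllText("log-entry.json"));

		// both JsonPath expressions resolve to "Die Hard"
		Console.WriteLine(entry.SelectToken("$.Content.Title"));
		Console.WriteLine(entry.SelectToken("$.Title"));
	}
}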

AWS CloudWatch

CloudWatch provides different monitoring functions, one of them being logging. By default, all AWS services log into CloudWatch. CloudWatch provides a feature called Insights, which is able to search JSON log files.

Serilog

Serilog is a .NET library that provides logging capabilities. Its main benefit is that it can log in JSON format. Serilog can be very easily integrated into any C# project. Two NuGet packages are needed: Serilog and Serilog.AspNetCore.

Configure Serilog for .NET Core application

In the case of a .NET Core microservice, Serilog is plugged in via Program.cs.

using Serilog;
using Serilog.Events;
using Serilog.Formatting.Compact;

public static void Main(string[] args)
{
	Log.Logger = new LoggerConfiguration()
		.MinimumLevel.Debug()
		.MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
		.Enrich.FromLogContext()
		.WriteTo.Console(new CompactJsonFormatter())
		.CreateLogger();

	var webHost = WebHost.CreateDefaultBuilder(args)
		.UseStartup<Startup>()
		.UseSerilog()
		.UseUrls("http://*:5100")
		.Build();

	webHost.Run();
}

Configure Serilog for AWS C# Lambda function

In the case of a C# Lambda function, logging to the console is enough, since all console output is sent to CloudWatch. I have created a proxy class called Logger, which instantiates a Serilog logger. The custom logger is then used in the lambda function itself.

Logger

public interface ILogger
{
	void LogInformation(string messageTemplate, params object[] arguments);
}

public class Logger : ILogger
{
	private static Serilog.Core.Logger _logger;

	public Logger()
	{
		_logger = new LoggerConfiguration()
			.MinimumLevel.Debug()
			.WriteTo.Console(new CompactJsonFormatter())
			.CreateLogger();
	}

	public void LogInformation(string messageTemplate, params object[] arguments)
	{
		_logger.Information(messageTemplate, arguments);
	}
}

MoviesFunction

private readonly ISqsWriter _sqsWriter;
private readonly IDynamoDbWriter _dynamoDbWriter;
private readonly ILogger _logger;

public MoviesFunction() : this(null, null, null) { }

public MoviesFunction(ISqsWriter sqsWriter, IDynamoDbWriter dynamoDbWriter, ILogger logger)
{
	_sqsWriter = sqsWriter ?? new SqsWriter();
	_dynamoDbWriter = dynamoDbWriter ?? new DynamoDbWriter();
	_logger = logger ?? new Logger();
}

public async Task MoviesFunctionHandler(DynamoDBEvent dynamoEvent, ILambdaContext context)
{
	foreach (var record in dynamoEvent.Records)
	{
		var title = record.Dynamodb.NewImage["Title"].S;
		var logEntry = new LogEntry
		{
			Message = $"Movie '{title}' processed by lambda",
			DateTime = DateTime.Now
		};
		_logger.LogInformation("MoviesFunctionHandler invoked with {Title}", title);

		await _sqsWriter.WriteLogEntryAsync(logEntry);
		await _dynamoDbWriter.PutLogEntryAsync(logEntry);
	}
}

GetMovie

public async Task<APIGatewayProxyResponse> GetMovie(APIGatewayProxyRequest request, ILambdaContext context)
{
	var title = WebUtility.UrlDecode(request.PathParameters["title"]);
	_logger.LogInformation("GetMovie invoked with {Title}", title);

	var document = await _dynamoDbReader.GetDocumentAsync(TableName, title);
	if (document == null)
	{
		_logger.LogInformation("GetMovie produced no results for {Title}", title);
		return new APIGatewayProxyResponse { StatusCode = (int)HttpStatusCode.NotFound };
	}

	var movie = new Movie
	{
		Title = document["Title"],
		Genre = (MovieGenre)int.Parse(document["Genre"])
	};
	_logger.LogInformation("GetMovie result is {Title}, {@Content}", movie.Title, movie);

	return new APIGatewayProxyResponse
	{
		StatusCode = (int)HttpStatusCode.OK,
		Body = _jsonConverter.SerializeObject(movie)
	};
}

AWS CloudWatch Insights

CloudWatch Logs Insights enables interactive search and analysis of log data in Amazon CloudWatch Logs. Queries can be performed to respond to operational issues more efficiently and effectively. Queries are written in a purpose-built query language with a few simple but powerful commands. A short example is to search for all logs in which the FirstName field equals Bruce. Before running it, the log groups to be searched are selected above the query editor.

fields @@mt
| sort @timestamp desc
| limit 20
| filter FirstName = 'Bruce'

An extensive guide on query language can be found on CloudWatch Logs Insights Query Syntax page.

Conclusion

Structured logging can bring a lot of benefits when debugging an application and analyzing its behavior. It is easy to set up and use.


AWS examples in C# – AWS CLI commands


Post summary: Important AWS CLI commands used in AWS examples in C#.

This post is part of AWS examples in C# – working with SQS, DynamoDB, Lambda, ECS series. The code used for this series of blog posts is located in aws.examples.csharp GitHub repository.

Introduction

In AWS examples in C# – run the solution post, I have described how to install/uninstall the current examples. In the current post, I am going to show the individual commands used in detail. The configuration parameters in the commands below are given in capital letters and start with a dollar sign, e.g. $CONFIGURATION_PARAMETER. Each AWS command has its code representation in the SDK for the desired programming language.

AWS Command Line Interface

The AWS Command Line Interface (CLI) is a unified tool to manage AWS services. With just one tool to download and configure, multiple AWS services can be controlled from the command line and automated through scripts. The full list of services that can be controlled is given on the AWS Command Line Interface reference page. Each service has a subpage with a list of all available commands. All commands return JSON as a response. In a subsequent post, I will describe how to manage the JSON in the command line. All operations in the current post are done after the AWS credentials are set as environment variables:

export AWS_ACCESS_KEY_ID=KIA57FV4.....
export AWS_SECRET_ACCESS_KEY=mSgsxOWVh...
export AWS_DEFAULT_REGION=us-east-1

SQS operations

The full list can be found in aws sqs CLI reference page. More information about SQS can be found in AWS examples in C# – create a service working with SQS post.

Create

Initially, all queues are listed with list-queues, in order to check if the queue already exists.

aws sqs list-queues

The queue is created with the create-queue command; the command returns the queue URL.

aws sqs create-queue --queue-name $QUEUE_NAME
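
As noted in the introduction, each CLI command has its SDK counterpart. A minimal C# sketch of the same call with the AWSSDK.SQS package (the queue name is a placeholder) could be:

using System;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public static class CreateQueueExample
{
	public static async Task Main()
	{
		// credentials and region are picked up from the environment, as with the CLI
		var sqsClient = new AmazonSQSClient();
		var response = await sqsClient.CreateQueueAsync(new CreateQueueRequest
		{
			QueueName = "my-queue" // corresponds to $QUEUE_NAME
		});
		// just like the CLI command, the response contains the queue URL
		Console.WriteLine(response.QueueUrl);
	}
}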

After the queues are created, the redrive policy has to be set up. The ARN of the dead-letter queue can be obtained with the get-queue-attributes command by providing the queue URL.

aws sqs get-queue-attributes \
	--queue-url $DEAD_LETTER_QUEUE_URL \
	--attribute-names QueueArn

The redrive policy is set with the set-queue-attributes command; for readability, the unescaped value of the --attributes parameter is shown after the command.

aws sqs set-queue-attributes \
	--queue-url $QUEUE_URL \
	--attributes "{\"RedrivePolicy\":\"{\\\"maxReceiveCount\\\":\\\"3\\\",\\\"deadLetterTargetArn\\\":\\\"$DEAD_LETTER_QUEUE_ARN\\\"}\",\"ReceiveMessageWaitTimeSeconds\":\"$LONG_POLLING_TIMEOUT\"}"

Delete

In order to delete the queue, its URL is needed. The URL is obtained with get-queue-url command.

aws sqs get-queue-url --queue-name $QUEUE_NAME

Deletion happens with delete-queue command.

aws sqs delete-queue --queue-url $QUEUE_URL

DynamoDB operations

The full list can be found in aws dynamodb CLI reference page. More information about DynamoDB can be found in AWS examples in C# – create a service working with DynamoDB post.

Create

The table data is obtained with describe-table command.

aws dynamodb describe-table --table-name $TABLE_NAME

If the table does not exist, it is created with the create-table command, which carries all the data needed; its SDK counterpart is sketched after the command. See more about the table attributes in AWS examples in C# – create a service working with DynamoDB post.

aws dynamodb create-table \
	--table-name $TABLE_NAME \
	--attribute-definitions 'AttributeName=FirstName,AttributeType=S' 'AttributeName=LastName,AttributeType=S' \
	--key-schema 'AttributeName=FirstName,KeyType=HASH' 'AttributeName=LastName,KeyType=RANGE' \
	--provisioned-throughput 'ReadCapacityUnits=5,WriteCapacityUnits=5' \
	--stream-specification 'StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES'
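
The SDK counterpart from the AWSSDK.DynamoDBv2 package could look like the sketch below; the table name is a placeholder, while the attribute and key definitions mirror the command above:

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class CreateTableExample
{
	public static async Task Main()
	{
		var client = new AmazonDynamoDBClient();
		await client.CreateTableAsync(new CreateTableRequest
		{
			TableName = "Actors", // corresponds to $TABLE_NAME
			AttributeDefinitions = new List<AttributeDefinition>
			{
				new AttributeDefinition("FirstName", ScalarAttributeType.S),
				new AttributeDefinition("LastName", ScalarAttributeType.S)
			},
			KeySchema = new List<KeySchemaElement>
			{
				new KeySchemaElement("FirstName", KeyType.HASH),
				new KeySchemaElement("LastName", KeyType.RANGE)
			},
			ProvisionedThroughput = new ProvisionedThroughput(5, 5),
			StreamSpecification = new StreamSpecification
			{
				StreamEnabled = true,
				StreamViewType = StreamViewType.NEW_AND_OLD_IMAGES
			}
		});
	}
}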

Delete

The table is deleted by name with delete-table command.

aws dynamodb delete-table --table-name $TABLE_NAME

IAM roles operations

The full list can be found in aws iam CLI reference page.

Create

Roles are listed with list-roles command to check if the role exists.

aws iam list-roles

The role is created with create-role command.

aws iam create-role \
	--role-name $ROLE_NAME \
	--assume-role-policy-document file://assume-role-policy-document.json

This is the only case in the current examples where an additional JSON document is needed alongside a command. It is not possible to pass this JSON inline as it is with the aws sqs set-queue-attributes command. This JSON defines which services are allowed to assume the role.

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Principal": {
				"Service": [
					"lambda.amazonaws.com",
					"ec2.amazonaws.com",
					"ecs.amazonaws.com",
					"ecs-tasks.amazonaws.com",
					"batch.amazonaws.com"
				]
			},
			"Action": "sts:AssumeRole"
		}
	]
}

Policies are listed with the list-policies command to get the policy ARN. Basically, to make things easier, the existing AdministratorAccess policy is used via its ARN.

aws iam list-policies

Attach the policy to the role with attach-role-policy command.

aws iam attach-role-policy \
	--role-name $ROLE_NAME \
	--policy-arn $POLICY_ARN

Delete

List policies with list-policies command to get the ARN, then detach the policy from the role.

aws iam detach-role-policy \
	--role-name $ROLE_NAME \
	--policy-arn $POLICY_ARN

After the policy is detached, the role is deleted with the delete-role command.

aws iam delete-role --role-name $ROLE_NAME

AWS Lambda operations

The full list can be found in aws lambda CLI reference page.

Create

List functions with list-functions command to check if the function exists.

aws lambda list-functions

Creating a function is done with the create-function command, which takes many arguments, most of them self-explanatory. The timeout is important: the lambda function execution is suspended after the timeout passes. In the current examples it is set to 30 seconds, since I found that a cold start could sometimes take up to 15 seconds. The lambda configurations are described in AWS examples in C# – create basic Lambda function post. The SDK counterpart of the command is sketched after it.

aws lambda create-function \
	--function-name $FUNCTION_NAME \
	--runtime dotnetcore2.1 \
	--role $ROLE_ARN \
	--handler $HANDLER_STRING_WITH_NAMESPACE_CLASS_METHOD \
	--environment "Variables={AWS_SQS_QUEUE_NAME=$QUEUE_NAME, AWS_SQS_IS_FIFO=$IS_QUEUE_FIFO}" \
	--timeout $FUNCTION_TIMEOUT \
	--zip-file fileb://$PATH_TO_ZIP_FILE
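
A minimal sketch of the same call with the AWSSDK.Lambda package; all concrete values are placeholders standing in for the $-parameters of the command:

using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Amazon.Lambda;
using Amazon.Lambda.Model;

public static class CreateFunctionExample
{
	public static async Task Main()
	{
		var client = new AmazonLambdaClient();
		await client.CreateFunctionAsync(new CreateFunctionRequest
		{
			FunctionName = "MyFunction",                    // $FUNCTION_NAME
			Runtime = Runtime.Dotnetcore21,
			Role = "arn:aws:iam::123456789012:role/MyRole", // $ROLE_ARN
			Handler = "Assembly::Namespace.Class::Method",  // $HANDLER_STRING_WITH_NAMESPACE_CLASS_METHOD
			Timeout = 30,                                   // $FUNCTION_TIMEOUT
			Environment = new Amazon.Lambda.Model.Environment
			{
				Variables = new Dictionary<string, string>
				{
					{ "AWS_SQS_QUEUE_NAME", "my-queue" },   // $QUEUE_NAME
					{ "AWS_SQS_IS_FIFO", "false" }          // $IS_QUEUE_FIFO
				}
			},
			Code = new FunctionCode
			{
				ZipFile = new MemoryStream(File.ReadAllBytes("function.zip")) // $PATH_TO_ZIP_FILE
			}
		});
	}
}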

Once the function is created, it can be linked to an event source, such as DynamoDB. This happens via the DynamoDB stream ARN. Once a record is inserted, updated, or deleted in DynamoDB, the lambda function is called with this event.

aws lambda create-event-source-mapping \
	--function-name $FUNCTION_NAME \
	--event-source-arn $DYNAMODB_STREAM_ARN \
	--starting-position LATEST

In case the function already exists but its code has to be updated, this is done with the update-function-code command.

aws lambda update-function-code \
	--function-name $FUNCTION_NAME \
	--zip-file fileb://$PATH_TO_ZIP_FILE

Along with the code, function configuration can be updated as well with update-function-configuration command.

aws lambda update-function-configuration \
	--function-name $FUNCTION_NAME \
	--role $ROLE_ARN \
	--handler $HANDLER_STRING_WITH_NAMESPACE_CLASS_METHOD \
	--environment "Variables={AWS_SQS_QUEUE_NAME=$QUEUE_NAME, AWS_SQS_IS_FIFO=$IS_QUEUE_FIFO}" \
	--timeout $FUNCTION_TIMEOUT

Delete

In order to delete, the event source UUID has to be obtained first; this is done with the list-event-source-mappings command.

aws lambda list-event-source-mappings --function-name $FUNCTION_NAME

Then the event source mapping is deleted with the delete-event-source-mapping command.

aws lambda delete-event-source-mapping --uuid $EVENT_SOURCE_UUID

And finally, the function itself is deleted with delete-function command.

aws lambda delete-function --function-name $FUNCTION_NAME

ECS (Elastic Container Service) operations

The full list can be found in aws ecs CLI reference page.

Create

Before doing anything with ECR, a docker login command should be generated with get-login, so Docker is authenticated with AWS ECR. With eval, the generated docker login command is directly executed.

eval $(aws ecr get-login --no-include-email)

Clusters are first listed, in order to evaluate if the application is already deployed.

aws ecs list-clusters

The cluster is created with the create-cluster command. A cluster consists of services.

aws ecs create-cluster --cluster-name $CLUSTER_NAME

The existing task definition is described with the describe-task-definition command, to evaluate whether it is already published. Task definitions are essentially Docker configurations.

aws ecs describe-task-definition --task-definition $TASK_DEFINITION_NAME

Task definition is created with register-task-definition command.

aws ecs register-task-definition \
	--family $TASK_DEFINITION_NAME \
	--execution-role-arn $ROLE_ARN \
	--network-mode awsvpc \
	--container-definitions $CONTAINER_DEFINITIONS \
	--requires-compatibilities "FARGATE" \
	--cpu "256" \
	--memory "512"

$CONTAINER_DEFINITIONS is a Docker configuration which defines the task definition:

name=$TASK_DEFINITION_NAME,\
image=$IMAGE_TAG,\
environment=[\
	{name=AwsQueueIsFifo,value=$_IS_QUEUE_FIFO},\
	{name=AwsRegion,value=$REGION},\
	{name=AwsQueueName,value=$QUEUE_NAME},\
	{name=AwsAccessKey,value=$AWS_ACCESS_KEY},\
	{name=AwsSecretKey,value=$AWS_SECRET_KEY},\
	{name=AwsQueueAutomaticallyCreate,value=$AWS_QUEUE_AUTO_CREATE},\
	{name=AwsQueueLongPollTimeSeconds,value=$AWS_POLL_TIME_SECONDS}\
],\
logConfiguration={\
	logDriver=awslogs,\
	options={\
		awslogs-group=ecs/$SERVICE_NAME,\
		awslogs-region=$REGION,\
		awslogs-stream-prefix=ecs\
	}\
}

Before creating a service, existing ones are listed with the describe-services command. A service has one or more running instances of a task definition; this is how a service scales.

aws ecs describe-services \
	--cluster $CLUSTER_NAME \
	--services $SERVICE_NAME

Creating a service is done with create-service command. $TASK_REVISION is the result of the register-task-definition command. $SUBNET_ID is returned by aws ec2 describe-subnets command.

aws ecs create-service --cluster $CLUSTER_NAME \
	--service-name $SERVICE_NAME \
	--task-definition "$TASK_DEFINITION_NAME:$TASK_REVISION" \
	--desired-count 1 \
	--launch-type "FARGATE" \
	--network-configuration "awsvpcConfiguration={subnets=[$SUBNET_ID],securityGroups=[$SECURITY_GROUP_ID],assignPublicIp=ENABLED}")

Updating the service is done with the very similar update-service command.

aws ecs update-service --cluster $CLUSTER_NAME \
	--service $SERVICE_NAME \
	--task-definition "$TASK_DEFINITION_NAME:$TASK_REVISION" \
	--desired-count 1 \
	--network-configuration "awsvpcConfiguration={subnets=[$SUBNET_ID],securityGroups=[$SECURITY_GROUP_ID],assignPublicIp=ENABLED}")

Delete

In order to delete task definitions, they should be first listed with list-task-definitions command, so the task definition version is available.

aws ecs list-task-definitions

Removing the task definition is done with the deregister-task-definition command. Note that the command does what it says: it deregisters, it does not delete. The task definition is kept in history with status INACTIVE.

aws ecs deregister-task-definition --task-definition "$TASK_DEFINITION_VERSION"

Deleting the service is done with the delete-service command; the --force parameter also stops the running tasks.

aws ecs delete-service \
	--cluster $CLUSTER_NAME \
	--service $SERVICE_NAME \
	--force

In the end, the whole cluster is deleted with delete-cluster command.

aws ecs delete-cluster --cluster $CLUSTER_NAME

ECR (Elastic Container Registry) operations

The full list can be found in aws ecr CLI reference page.

Delete

The repository is created by Docker when the image is pushed to it. The repository and the images inside it are deleted with the delete-repository command.

aws ecr delete-repository \
	--repository-name $REPOSITORY_NAME \
	--force

EC2 (Elastic Compute Cloud) operations

The full list can be found in aws ec2 CLI reference page.

Create

EC2 is responsible for security groups, which expose the service to the world by applying firewall rules. Before creating the group, its presence is first checked with the describe-security-groups command.

aws ec2 describe-security-groups

The security group is created with create-security-group command.

aws ec2 create-security-group \
	--description $SECURITY_GROUP_DESCRIPTION \
	--group-name $SECURITY_GROUP_NAME

Inbound rules are defined with the authorize-security-group-ingress command, where ip_permission is a bash function generating the JSON, for better reuse.

aws ec2 authorize-security-group-ingress \
	--group-id $SECURITY_GROUP_ID \
	--ip-permissions "[$(ip_permission $SERVICE_PORT)]"

The function generating the firewall rule JSON; $1 is the argument passed to the function.

function ip_permission() {
	echo "{\"IpProtocol\": \"tcp\", \"FromPort\": $1, \"ToPort\": $1, \"IpRanges\": [{\"CidrIp\": \"0.0.0.0/0\", \"Description\": \"Port $1\"}]}"
}

Subnets are listed with the describe-subnets command. Each subnet belongs to a single availability zone.

aws ec2 describe-subnets

Finally, in order to report the IP of the deployed service, describe-network-interfaces command is used.

aws ec2 describe-network-interfaces --filters "Name=network-interface-id,Values=$networkInterfaceId"

Delete

A security group is deleted by name with delete-security-group command.

aws ec2 delete-security-group --group-name $SECURITY_GROUP

CloudWatch operations

The full list can be found in aws logs CLI reference page.

Delete

CloudWatch logs are created by default from the services. Deleting the logs is done with delete-log-group command. Note that I am using Git Bash on Windows and MSYS_NO_PATHCONV=1 is mandatory because the log group name starts with /.

MSYS_NO_PATHCONV=1 aws logs delete-log-group --log-group-name ecs/$SERVICE_NAME

Conclusion

The AWS command-line interface provides tooling to handle all the needed operations of the AWS services. It is the preferred way to manage services, compared to the web user interface.


AWS examples in C# – introduction to Serverless framework


Post summary: Introduction to Serverless framework and .NET code example of a lambda function with API Gateway.

This post is part of AWS examples in C# – working with SQS, DynamoDB, Lambda, ECS series. The code used for this series of blog posts is located in aws.examples.csharp GitHub repository.

When speaking about Serverless there are two concepts and terms that need to be clarified.

Serverless architecture

Serverless architecture is a cloud application architecture concept that enables shifting more of your operational responsibilities to the cloud. In the current examples, AWS is used, but this is a valid concept for Azure and Google Cloud as well. Serverless allows building and running applications and services without thinking about servers.

Serverless framework

Serverless framework is a toolset that makes deployment of serverless applications to different cloud providers extremely easy and streamlined. It supports the following cloud providers: AWS, Google Cloud, Azure, OpenWhisk, and Kubeless, and the following programming languages: Node.js, Go, Python, Swift, Java, PHP, and Ruby.

AWS Lambda

AWS Lambda allows easy ramp-up of service without all the hassle to manage servers and environments. The ready code is uploaded to Lambda and automatically run. More detailed information and design considerations about AWS Lambda can be found in AWS examples in C# – working with Lambda functions post.

CloudFormation

AWS CloudFormation provides infrastructure as code (IaC) capabilities. It defines a common language to model and provision AWS application resources. AWS resources and applications are described in YAML or JSON files, which are then provisioned by CloudFormation. This gives a single source of truth. The Serverless framework builds on this when creating the underlying AWS resources: it translates its custom YAML format into JSON CloudFormation templates, which can be found in the .serverless folder of DynamoDbServerless after the deployment to AWS is done.

Create a Serverless application

Before creating the application, Serverless needs to be installed. Installation is possible as a standalone binary or as a Node.js package. I prefer the latter because it is much simpler.

npm install -g serverless

Creating an empty application is done with the following command:

sls create --template aws-csharp --path MyService

The command outputs a nice message:

$ sls create --template aws-csharp --path MyService
Serverless: Generating boilerplate...
Serverless: Generating boilerplate in "C:\MyService"
 _______                             __
|   _   .-----.----.--.--.-----.----|  .-----.-----.-----.
|   |___|  -__|   _|  |  |  -__|   _|  |  -__|__ --|__ --|
|____   |_____|__|  \___/|_____|__| |__|_____|_____|_____|
|   |   |             The Serverless Application Framework
|       |                           serverless.com, v1.61.3
 -------'

Serverless: Successfully generated boilerplate for template: "aws-csharp"

Unlike the default lambda project created by the AWS tools (see AWS examples in C# – create basic Lambda function post for more details), the Serverless project is not split into src and test. It is a good idea to split the project manually in order to add tests. The configuration of a Serverless project is done in the serverless.yml file. The default one is very simple; it states the provider and runtime, which are aws (Amazon) and dotnetcore2.1. The handler shows which method is called when the lambda is invoked. In the default example, the handler is CsharpHandlers::AwsDotnetCsharp.Handler::Hello, which means CsharpHandlers is the assembly name, configured in the aws-csharp.csproj file, AwsDotnetCsharp is the namespace, Handler is the class name, and Hello is the method. A sketch of that class is shown after the configuration below.

service: myservice

provider:
  name: aws
  runtime: dotnetcore2.1

package:
  individually: true

functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Handler::Hello

    package:
      artifact: bin/release/netcoreapp2.1/hello.zip
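
For reference, the Handler class that the handler string points to looks roughly like the sketch below; this is based on the aws-csharp template and on the invocation response shown later in this post, so the exact generated code may differ slightly:

using Amazon.Lambda.Core;

// tells the Lambda runtime which serializer to use for the input and output JSON
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace AwsDotnetCsharp
{
	public class Handler
	{
		// referenced as CsharpHandlers::AwsDotnetCsharp.Handler::Hello in serverless.yml
		public Response Hello(Request request)
		{
			return new Response("Go Serverless v1.0! Your function executed successfully!", request);
		}
	}

	public class Response
	{
		public string Message { get; set; }
		public Request Request { get; set; }

		public Response(string message, Request request)
		{
			Message = message;
			Request = request;
		}
	}

	public class Request
	{
		public string Key1 { get; set; }
		public string Key2 { get; set; }
		public string Key3 { get; set; }
	}
}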

After the application is created, it has to be built with build.cmd or build.sh scripts.

Before deployment, the AWS credentials should be set:

export AWS_ACCESS_KEY_ID=KIA57FV4.....
export AWS_SECRET_ACCESS_KEY=mSgsxOWVh...
export AWS_DEFAULT_REGION=us-east-1

Deployment happens with the sls deploy --region $AWS_DEFAULT_REGION command.

Testing of the function can be done with the sls invoke -f hello --region $AWS_DEFAULT_REGION command. The result of the invocation is:

{
	"Message": "Go Serverless v1.0! Your function executed successfully!",
	"Request": {
		"Key1": null,
		"Key2": null,
		"Key3": null
	}
}

Finally, the stack can be deleted with the sls remove --region $AWS_DEFAULT_REGION command.

Use API Gateway

The default Hello World example is pretty simple. In the current examples, I have elaborated a bit on Lambda usage with API Gateway in order to expose the functions as a RESTful service. Two lambda functions are defined inside the functions node, for movies and actors. Each has a handler node, which defines where the code that the lambda function executes is located. With the events/http node, an API Gateway endpoint is created. Each endpoint has a path and a method. Movies uses the GET method, Actors works via the POST method. Both functions’ handler methods receive an APIGatewayProxyRequest object and return an APIGatewayProxyResponse object. The payload is found in the Body string property of the APIGatewayProxyRequest object.

The Actors function queries the Actors table. More details on the query and how it is constructed in the BuildQueryRequest method can be found in the Querying using the low-level interface section of AWS examples in C# – basic DynamoDB operations post.

The Movies lambda function gets the JSON document from the Movies table. More details on getting the data can be found in the Get item using document interface section of AWS examples in C# – basic DynamoDB operations post.

The Actors lambda function has two more security features defined. One is an API key, defined with the private: true setting. The request should have an x-api-key header with the value returned by the deployment command; otherwise, an HTTP status code 403 (Forbidden) is returned by API Gateway. See the Run the project in AWS section in AWS examples in C# – run the solution post on how to obtain the proper value of the aws-examples-csharp-api-key API key. The example given here is not really scalable; more details on how to properly manage API keys can be found in the Managing secrets, API keys and more with Serverless article.

The other security feature is a lambda authorizer, configured with the authorizer: authorizer setting. The authorizer lambda function is not really doing anything significant in the examples; it uses the UserManager class to check whether the Authorization header has the proper value. In real life, this would be a database call to get the user authorizations, not a dummy token as in the examples. The request should have an Authorization header with the value Bearer validToken; otherwise, an HTTP status code 401 (Unauthorized) is returned by API Gateway. A sketch of calling the secured endpoint with both headers follows.
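
For illustration, here is a minimal C# sketch of calling the protected actors endpoint with both headers; the endpoint URL and the API key value are placeholders, while the token is the dummy one from the examples:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class ActorsSearchClient
{
	public static async Task Main()
	{
		using (var client = new HttpClient())
		{
			// the API key value is printed by the deployment command
			client.DefaultRequestHeaders.Add("x-api-key", "<aws-examples-csharp-api-key value>");
			// checked by the lambda authorizer
			client.DefaultRequestHeaders.Add("Authorization", "Bearer validToken");

			var body = new StringContent("{\"FirstName\":\"Bruce\"}", Encoding.UTF8, "application/json");
			var response = await client.PostAsync(
				"https://<api-id>.execute-api.us-east-1.amazonaws.com/dev/actors/search", body);
			Console.WriteLine(await response.Content.ReadAsStringAsync());
		}
	}
}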

serverless.yml

service: DynamoDbServerless

provider:
  name: aws
  runtime: dotnetcore2.1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - dynamodb:Query
      Resource: ${env:actorsTableArn}
    - Effect: "Allow"
      Action:
        - dynamodb:DescribeTable
        - dynamodb:GetItem
      Resource: ${env:moviesTableArn}
  apiKeys:
    - aws-examples-csharp-api-key

package:
  individually: true
  artifact: bin/release/netcoreapp2.1/DynamoDbServerless.zip

functions:
  movies:
    handler: DynamoDbServerless::DynamoDbServerless.Handlers.MoviesHandler::GetMovie
    events:
      - http:
          path: movies/title/{title}
          method: get

  actors:
    handler: DynamoDbServerless::DynamoDbServerless.Handlers.ActorsHandler::QueryActors
    events:
      - http:
          path: actors/search
          method: post
          private: true
          authorizer: authorizer

  authorizer:
    handler: DynamoDbServerless::DynamoDbServerless.Handlers.AuthorizationHandler::Authorize

ActorsFunction

public async Task<APIGatewayProxyResponse> QueryActors(APIGatewayProxyRequest request, ILambdaContext context)
{
	context.Logger.LogLine($"Query request: {_jsonConverter.SerializeObject(request)}");

	var requestBody = _jsonConverter.DeserializeObject<ActorsSearchRequest>(request.Body);
	if (string.IsNullOrEmpty(requestBody.FirstName))
	{
		return new APIGatewayProxyResponse
		{
			StatusCode = (int)HttpStatusCode.BadRequest,
			Body = "FirstName is mandatory"
		};
	}
	var queryRequest = BuildQueryRequest(requestBody.FirstName, requestBody.LastName);

	var response = await _dynamoDbReader.QueryAsync(queryRequest);
	context.Logger.LogLine($"Query result: {_jsonConverter.SerializeObject(response)}");

	var queryResults = BuildActorsResponse(response);

	return new APIGatewayProxyResponse
	{
		StatusCode = (int)HttpStatusCode.OK,
		Body = _jsonConverter.SerializeObject(queryResults)
	};
}

MoviesFunction

public async Task<APIGatewayProxyResponse> GetMovie(APIGatewayProxyRequest request, ILambdaContext context)
{
	context.Logger.LogLine($"Query request: {_jsonConverter.SerializeObject(request)}");

	var title = WebUtility.UrlDecode(request.PathParameters["title"]);
	var document = await _dynamoDbReader.GetDocumentAsync(TableName, title);
	context.Logger.LogLine($"Query response: {_jsonConverter.SerializeObject(document)}");

	if (document == null)
	{
		return new APIGatewayProxyResponse { StatusCode = (int)HttpStatusCode.NotFound };
	}

	var movie = new Movie
	{
		Title = document["Title"],
		Genre = (MovieGenre)int.Parse(document["Genre"])
	};
	return new APIGatewayProxyResponse
	{
		StatusCode = (int)HttpStatusCode.OK,
		Body = _jsonConverter.SerializeObject(movie)
	};
}

AuthorizationHandler

public async Task<APIGatewayCustomAuthorizerResponse> Authorize(APIGatewayCustomAuthorizerRequest request, ILambdaContext context)
{
	context.Logger.LogLine($"Query request: {_jsonConverter.SerializeObject(request)}");

	var userInfo = await _userManager.Authorize(request.AuthorizationToken?.Replace("Bearer ", string.Empty));

	return new APIGatewayCustomAuthorizerResponse
	{
		PrincipalID = userInfo.UserId,
		PolicyDocument = new APIGatewayCustomAuthorizerPolicy
		{
			Version = "2012-10-17",
			Statement = new List<APIGatewayCustomAuthorizerPolicy.IAMPolicyStatement>
			{
				new APIGatewayCustomAuthorizerPolicy.IAMPolicyStatement
				{
					Action = new HashSet<string> {"execute-api:Invoke"},
					Effect = userInfo.Effect.ToString(),
					Resource = new HashSet<string> { request.MethodArn }
				}
			}
		}
	};
}

UserManager

public interface IUserManager
{
	Task<UserInfo> Authorize(string token);
}

public class UserManager : IUserManager
{
	private const string ValidToken = "validToken";
	private const string UserId = "usedId";

	public async Task<UserInfo> Authorize(string token)
	{
		var userInfo = new UserInfo
		{
			UserId = UserId,
			Effect = token == ValidToken ? EffectType.Allow : EffectType.Deny
		};

		return userInfo;
	}
}

Conclusion

The Serverless framework makes deployment and maintenance of lambdas very easy. It also supports different cloud providers.


AWS examples in C# – create basic Lambda function


Post summary: Code examples on how to create AWS Lambda function in .NET Core.

This post is part of AWS examples in C# – working with SQS, DynamoDB, Lambda, ECS series. The code used for this series of blog posts is located in aws.examples.csharp GitHub repository.

AWS Lambda

AWS Lambda allows easy ramp-up of service without all the hassle to manage servers and environments. The ready code is uploaded to Lambda and automatically run. More detailed information and design considerations about AWS Lambda can be found in AWS examples in C# – working with Lambda functions post.

Create Lambda

Amazon provides the Amazon.Lambda.Templates NuGet package, which contains a lot of templates for Lambda functions. The package is installed with dotnet new -i Amazon.Lambda.Templates. Once the templates are installed, a new empty function is created with:

dotnet new lambda.EmptyFunction --name MyFunction

Two projects are created: src and test. The source project targets the netcoreapp2.1 runtime and references the Amazon.Lambda.Core and Amazon.Lambda.Serialization.Json NuGet packages. It has only one simple function that takes a string input and converts it to uppercase, roughly as sketched below. The test project has a unit test that tests this function.
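
The generated function looks roughly like this sketch (the namespace follows the --name parameter; the exact template code may differ slightly):

using Amazon.Lambda.Core;

// tells the Lambda runtime which serializer to use for the input and output JSON
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace MyFunction
{
	public class Function
	{
		// takes a string input and converts it to uppercase
		public string FunctionHandler(string input, ILambdaContext context)
		{
			return input?.ToUpper();
		}
	}
}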

Once the function is ready, it can be deployed to AWS Lambda and tested. In order to deploy, the Amazon Lambda tools should be installed with dotnet tool install -g Amazon.Lambda.Tools. Once the tool is installed, an IAM role should be created, e.g. with the name MyRole. Before deploying the lambda, environment variables with the AWS access information should be set:

export AwsAccessKey=KIA57FV4.....
export AwsSecretKey=mSgsxOWVh...
export AwsRegion=us-east-1

Then lambda is deployed with:

dotnet lambda deploy-function MyFunction \
	--function-role MyRole \
	--project-location src/MyFunction \
	--region $AwsRegion \
	--aws-access-key-id $AwsAccessKey \
	--aws-secret-key $AwsSecretKey

The function can be tested with:

dotnet lambda invoke-function MyFunction \
	--payload "Just Testing the Payload" \
	--project-location src/MyFunction \
	--region $AwsRegion \
	--aws-access-key-id $AwsAccessKey \
	--aws-secret-key $AwsSecretKey

The result is shown:

Amazon Lambda Tools for .NET Core applications (3.3.1)
Project Home: https://github.com/aws/aws-extensions-for-dotnet-cli, https://github.com/aws/aws-lambda-dotnet

Payload:
"JUST TESTING THE PAYLOAD"

Log Tail:
START RequestId: 3f1844e0-7437-4219-a391-621a55dec0e9 Version: $LATEST
END RequestId: 3f1844e0-7437-4219-a391-621a55dec0e9
REPORT RequestId: 3f1844e0-7437-4219-a391-621a55dec0e9  Duration: 925.38 ms    Billed Duration: 1000 ms Memory Size: 256 MB     Max Memory Used: 62 MB  Init Duration: 196.85 ms

The duration in the example above is 925.38ms and the billed duration is 1000ms. This is the first run, the cold start, which took almost a second for a very simple function. On the next run, the duration was 0.28ms.

The lambda can also be deployed and tested manually; see how in the Run a “Hello, World!” with AWS Lambda article.

Listen to DynamoDB events

In AWS examples in C# – create a service working with DynamoDB post, I have described DynamoDB in more detail; its streams are very well integrated with AWS Lambda. In the current examples, the lambda functions are designed to process DynamoDB stream events. The DynamoDB stream ARN (Amazon Resource Name) is defined as an event source for the lambda. The lambda then receives a DynamoDBEvent object, which is defined in the Amazon.Lambda.DynamoDBEvents NuGet package. There is not much business logic in the lambda function: once the event object is received, it is read, logged to AWS CloudWatch with ILambdaContext, and its data is written to another DynamoDB table and to an SQS queue. For this purpose, references to the AWSSDK.DynamoDBv2 and AWSSDK.SQS NuGet packages are made.

In the example code, I have created SQS and DynamoDB proxies, DynamoDbWriter and SqsWriter, so that functionalities are isolated and only what is needed is exposed. In both proxies, the configuration is done with Region, which is read from the AWS_REGION environment variable. This variable is always present in the AWS Lambda environment. In this case, there is no need for authentication with the AWSCredentials class, as everything happens inside AWS.

The lambda function has two constructors: one receives instances of both proxies, and if none are passed, it instantiates them. Automatic dependency injection is not used with lambda. This constructor is used by the unit tests to pass mocked objects. The parameterless constructor is needed by AWS Lambda to instantiate the function class.

MoviesFunction

public class MoviesFunction
{
	private readonly ISqsWriter _sqsWriter;
	private readonly IDynamoDbWriter _dynamoDbWriter;

	public MoviesFunction() : this(null, null) { }

	public MoviesFunction(ISqsWriter sqsWriter, IDynamoDbWriter dynamoDbWriter)
	{
		_sqsWriter = sqsWriter ?? new SqsWriter();
		_dynamoDbWriter = dynamoDbWriter ?? new DynamoDbWriter();
	}

	public async Task FunctionHandler(DynamoDBEvent dynamoEvent, ILambdaContext context)
	{
		context.Logger.LogLine($"Beginning to process {dynamoEvent.Records.Count} records...");

		foreach (var record in dynamoEvent.Records)
		{
			context.Logger.LogLine($"Event ID: {record.EventID}");
			context.Logger.LogLine($"Event Name: {record.EventName}");

			var streamRecordJson = _dynamoDbWriter.SerializeStreamRecord(record.Dynamodb);
			context.Logger.LogLine($"DynamoDB Record:{streamRecordJson}");
			context.Logger.LogLine(streamRecordJson);

			var logEntry = new LogEntry
			{
				Message = $"Movie '{record.Dynamodb.NewImage["Title"].S}' processed by lambda",
				DateTime = DateTime.Now
			};
			await _sqsWriter.WriteLogEntryAsync(logEntry);
			await _dynamoDbWriter.PutLogEntryAsync(logEntry);
		}

		context.Logger.LogLine("Stream processing complete.");
	}
}

DynamoDbWriter

public interface IDynamoDbWriter
{
	Task PutLogEntryAsync(LogEntry logEntry);
	string SerializeStreamRecord(StreamRecord streamRecord);
}

public class DynamoDbWriter : IDynamoDbWriter
{
	private static readonly string Region = Environment.GetEnvironmentVariable("AWS_REGION") ?? "us-east-1";

	private readonly IAmazonDynamoDB _dynamoDbClient;
	private readonly JsonSerializer _jsonSerializer;

	public DynamoDbWriter()
	{
		var dynamoDbConfig = new AmazonDynamoDBConfig
		{
			RegionEndpoint = RegionEndpoint.GetBySystemName(Region)
		};
		_dynamoDbClient = new AmazonDynamoDBClient(dynamoDbConfig);
		_jsonSerializer = new JsonSerializer();
	}

	public async Task PutLogEntryAsync(LogEntry logEntry)
	{
		var request = new PutItemRequest
		{
			TableName = "LogEntries",
			Item = new Dictionary<string, AttributeValue>
			{
				{"Message", new AttributeValue {S = logEntry.Message}},
				{"DateTime", new AttributeValue {S = logEntry.ToString()}}
			}
		};

		await _dynamoDbClient.PutItemAsync(request);
	}

	public string SerializeStreamRecord(StreamRecord streamRecord)
	{
		using (var writer = new StringWriter())
		{
			_jsonSerializer.Serialize(writer, streamRecord);
			return writer.ToString();
		}
	}
}

SqsWriter

public interface ISqsWriter
{
	Task WriteLogEntryAsync(LogEntry logEntry);
}

public class SqsWriter : ISqsWriter
{
	private static readonly string QueueName = Environment.GetEnvironmentVariable("AWS_SQS_QUEUE_NAME");
	private static readonly bool IsQueueFifo = bool.Parse(Environment.GetEnvironmentVariable("AWS_SQS_IS_FIFO") ?? "false");
	private static readonly string Region = Environment.GetEnvironmentVariable("AWS_REGION") ?? "us-east-1";

	private readonly IAmazonSQS _sqsClient;

	public SqsWriter()
	{
		var sqsConfig = new AmazonSQSConfig
		{
			RegionEndpoint = RegionEndpoint.GetBySystemName(Region)
		};
		_sqsClient = new AmazonSQSClient(sqsConfig);
	}

	public async Task WriteLogEntryAsync(LogEntry logEntry)
	{
		var queueUrl = await _sqsClient.GetQueueUrlAsync(QueueName);
		var sendMessageRequest = new SendMessageRequest
		{
			QueueUrl = queueUrl.QueueUrl,
			MessageBody = JsonConvert.SerializeObject(logEntry),
			MessageAttributes = SqsMessageTypeAttribute.CreateAttributes(typeof(LogEntry).Name)
		};
		if (IsQueueFifo)
		{
			sendMessageRequest.MessageGroupId = typeof(LogEntry).Name;
			sendMessageRequest.MessageDeduplicationId = Guid.NewGuid().ToString();
		}

		await _sqsClient.SendMessageAsync(sendMessageRequest);
	}
}

Conclusion

As shown in the current post, it is very easy to create a lambda function, deploy it, and run it. The current post shows a lambda function that listens to DynamoDB events and processes those events by writing into another DynamoDB table and an SQS queue.


AWS examples in C# – working with Lambda functions


Post summary: Introduction to AWS Lambda functions.

This post is part of AWS examples in C# – working with SQS, DynamoDB, Lambda, ECS series. The code used for this series of blog posts is located in aws.examples.csharp GitHub repository.

AWS Lambda

AWS Lambda allows easy ramp-up of service without all the hassle to manage servers and environments. The ready code is uploaded to Lambda and automatically run. AWS Lambda automatically scales applications by running code in response to each trigger. The code runs in parallel and processes each trigger individually, scaling precisely with the size of the workload.

Main concepts

There are several terms that need to be briefly explained to get some understanding of what AWS Lambda offers.

  • Function – A code written in a programming language, for supported runtime, that does some computational work.
  • Runtime – Allows running functions in different programming languages; supported languages are Node.js, Python, Ruby, Java, Go, and .NET.
  • Event – A JSON formatted document that contains data for a function to process, which is converted to object by the runtime and passed to the function.
  • Concurrency – The number of requests that your function is serving at any given time. If a function is invoked while it is already executing another request, another instance is provisioned, increasing the function’s concurrency.
  • Trigger – A resource or configuration that invokes a Lambda function. This includes AWS services, applications, and event source mappings.
  • Event source mapping – A resource in Lambda that reads items from a stream or queue and invokes a function.

AWS Lambda applications

Lambda is the actual name of serverless functions in AWS. Along with lambda functions, AWS also supports the concept of an application, which is a combination of Lambda functions, event sources, and other resources that work together to perform tasks. AWS CloudFormation is used to collect the application’s components into a single package that can be deployed and managed as one resource. Applications make Lambda projects portable.

CloudFormation

AWS CloudFormation provides infrastructure as code (IaC) capabilities. It defines a common language to model and provision AWS application resources. AWS resources and applications are described in YAML or JSON files, which are then provisioned by CloudFormation. This gives a single source of truth.

API Gateway

API Gateway is a fully managed service that makes it easy to create, publish, maintain, monitor, and secure APIs. APIs act as the “front door” for applications to access data, business logic, or functionality from backend services. It is very easy to create RESTful APIs and WebSocket APIs with API Gateway. It supports traffic management, CORS support, authorization, access control, throttling, monitoring, and API version management.

API Keys

API keys are the way to create usage plans, so APIs can be offered to customers as a product with predefined request rates and quotas. A usage plan is created in AWS and has a throttling limit, which is basically the request rate limit applied to each API key added to the usage plan. A quota is also configured on the usage plan and applied to its API keys; this is the maximum number of requests with a given API key that can be submitted within a specified time interval. API keys can be provided to API Gateway in the X-API-Key header, which is what is shown in the current examples. Another way to work with API keys is with a lambda authorizer function, which returns the API key as part of the authorization response. API keys can be created or imported from a file. It is important to note that API keys are not used to manage authentication and authorization.
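
As an illustration of the X-API-Key header, below is a minimal sketch of a client call with HttpClient; the URL and the key are placeholders, not values from the examples.

using System.Net.Http;
using System.Threading.Tasks;

public static class ApiGatewayCaller
{
	public static async Task<string> GetAsync()
	{
		using var client = new HttpClient();
		// API Gateway checks the key against the usage plan before invoking the backend.
		client.DefaultRequestHeaders.Add("X-API-Key", "<api-key-from-usage-plan>");
		var response = await client.GetAsync("https://<api-id>.execute-api.us-east-1.amazonaws.com/Prod/movies");
		response.EnsureSuccessStatusCode();
		return await response.Content.ReadAsStringAsync();
	}
}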

Access control

Access control to a REST API in API Gateway can be done with several mechanisms:

  • Resource policies
  • Standard AWS IAM roles and policies
  • IAM tags can be used together with IAM policies to control access
  • Endpoint policies for interface VPC endpoints
  • Lambda authorizers
  • Amazon Cognito user pools

Lambda (custom) authorizers

In the examples given, a lambda (formerly known as custom) authorizer is used. API Gateway uses a dedicated Lambda function to do the authorization. More details on how to use authorizers can be found in AWS examples in C# – introduction to Serverless framework post.

CloudWatch

CloudWatch is a monitoring and observability service. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing a unified view of AWS resources, applications, and services. CloudWatch can be used to detect anomalous behavior, set alarms, visualize logs and metrics side by side, take automated actions, and troubleshoot issues. By default, AWS Lambda logs to CloudWatch. This makes it very easy to trace lambda function issues.

Design considerations

There are some specifics that have to be taken into consideration when using lambdas. One of the benefits of lambdas is being cost-effective. Users can select what amount of RAM to set for the lambda function when it is created. This is done with --memory-size in the aws lambda create-function command, see more in AWS examples in C# – deploy with AWS CLI commands post. The default value is 128MB, and CPU is allocated proportionally. Sometimes defining too little memory can end up in unexpected performance issues. This should be monitored and optimized based on the specific programming language and code. Lambdas are paid per 100ms of execution time, so this also should be taken into consideration when tweaking the memory setting. In terms of cost-effectiveness, it is more expensive to add more RAM in order to optimize from 100ms down to 50ms execution time, because the 100ms on the higher amount of RAM is still paid for. It has to be analyzed how much this makes sense for the end-users. Another consideration is that API Gateway adds an additional delay to the total request time. CloudWatch logs cost money, so awareness is needed about how much data a lambda function is logging. More pitfalls with more details on using lambdas can be found in Serverless Pitfalls: Issues With Running a Startup on AWS Lambda article.

Still, the main consideration for lambda performance is the so-called cold start. If the function has not been run for a while, then the first request needs some time to go through. I’ve seen up to 4 seconds when experimenting, although I have not measured it precisely. Theoretically, there is an option to ping the API at certain intervals to keep it “warm”. In practice, for heavy loads, AWS runs parallel instances of the lambda in order to handle the traffic, and each new instance will have a cold start.

Create lambda

Practical examples of how to create AWS Lambda functions are available in the following posts:

Conclusion

AWS Lambda is a very convenient and easy way to create running applications with minimal overhead. There are certain design considerations, such as the lambda cold start, that have to be taken into account when deciding on lambda usage.

AWS examples in C# – create a service working with DynamoDB

Last Updated on by

Post summary: Introduction to NoSQL, introduction to DynamoDB and what are its basic features and capabilities.

This post is part of AWS examples in C# – working with SQS, DynamoDB, Lambda, ECS series. The code used for this series of blog posts is located in aws.examples.csharp GitHub repository. In the current post, I give an overview of DynamoDB and what it can be used for.

NoSQL database

NoSQL database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases (RDBMS). There are several types of NoSQL databases:

  • Key-value stores – every single item in the database is stored as an attribute name (or ‘key’), together with its value.
  • Document databases – pair each key with a complex data structure known as a document, usually a JSON document. Documents can contain many different key-value pairs, key-array pairs, or even nested documents.
  • Graph stores – used to store information about networks of data. Data is organized in the form of nodes and connections between the nodes.
  • Wide-column stores – store columns of data together, instead of rows. It can query large data volumes faster than conventional relational databases.

A very good article on the NoSQL topic is NoSQL Databases Explained.

AWS DynamoDB

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications.

DynamoDB tables

DynamoDB stores data in tables. The data is represented as items, which have attributes. When a table is created, along with its name, a primary key should be provided. The primary key can consist of only a partition key (HASH), which is mandatory. DynamoDB uses an internal hash function to evenly distribute data items across partitions, based on their partition key values. The primary key can also consist of a partition key and a sort key (RANGE), which is complementary to the partition key. DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.

Secondary indexes

DynamoDB offers the possibility to define so-called secondary indexes. There are two types – global and local. A global secondary index is one that has a partition (HASH) key different from the HASH key of the table; each table has a limit of 20 global secondary indexes. A local secondary index is one that has the same partition key but a different sort key; up to 5 local secondary indexes per table are allowed. Properly managing those indexes is the key to using DynamoDB efficiently as a storage unit; a sketch of defining one is shown below.
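
Secondary indexes are not used in the examples from this series, but for illustration here is a hedged sketch of how a global secondary index could be declared as part of a CreateTableRequest with the AWS SDK for .NET; the index and attribute names are made up.

var request = new CreateTableRequest
{
	TableName = "Movies",
	AttributeDefinitions = new List<AttributeDefinition>
	{
		new AttributeDefinition { AttributeName = "Title", AttributeType = "S" },
		new AttributeDefinition { AttributeName = "Genre", AttributeType = "S" }
	},
	KeySchema = new List<KeySchemaElement>
	{
		new KeySchemaElement { AttributeName = "Title", KeyType = "HASH" }
	},
	GlobalSecondaryIndexes = new List<GlobalSecondaryIndex>
	{
		new GlobalSecondaryIndex
		{
			// The index has its own HASH key, different from the table's one.
			IndexName = "GenreIndex",
			KeySchema = new List<KeySchemaElement>
			{
				new KeySchemaElement { AttributeName = "Genre", KeyType = "HASH" }
			},
			Projection = new Projection { ProjectionType = ProjectionType.ALL },
			ProvisionedThroughput = new ProvisionedThroughput { ReadCapacityUnits = 5, WriteCapacityUnits = 5 }
		}
	},
	ProvisionedThroughput = new ProvisionedThroughput { ReadCapacityUnits = 5, WriteCapacityUnits = 5 }
};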

Streams

DynamoDB Streams is an optional feature that captures data modification events in DynamoDB tables. The data about different DynamoDB events appears in the stream in near-real-time, and in the order that the events occurred. Each event is represented by a stream record whenever an item is added, updated, or deleted. Stream records can be configured as to what data they hold: they can have both the old and the new item, only one of them, or even only the keys. Stream records have a lifetime of 24 hours, after which they are automatically removed from the stream. Streams are used together with AWS Lambda to create trigger code that executes automatically whenever an event appears in a stream.

Read/Write Capacity Mode

Amazon DynamoDB has two read/write capacity modes for processing reads and writes on your tables: provisioned, which is the default, free-tier eligible mode, and on-demand. The read/write capacity mode controls how charges are applied to read and write throughput and how capacity is managed. The capacity mode is set when the table is created and can be changed later. The provisioned mode is recommended for known workloads, while the on-demand mode is recommended for unpredictable and unknown workloads. DynamoDB provides auto-scaling capabilities, so the table’s provisioned capacity is adjusted automatically in response to traffic changes.

Understanding the concept of read and write capacity units is tricky. One write capacity unit covers up to 1KB of data written per second. If the write is done in a transaction, though, then the capacity unit count doubles. For example, if there is 2KB of data to be written per second, then the table definition needs 2 write capacity units; if the write is done in a transaction, then 4 capacity units have to be defined. Read capacity units are similar, with the difference that there are two flavors of reading – strongly consistent reads and eventually consistent reads. An eventually consistent read means that the data returned by DynamoDB might not be up to date and some write operation might not have been reflected in it yet. If the data is guaranteed to be propagated to all DynamoDB nodes and to be up to date, then a strongly consistent read is needed. One read capacity unit gives one strongly consistent read or two eventually consistent reads for data up to 4KB. Transactions double the count of read units needed, hence two units are required to read data up to 4KB transactionally. For example, if the data to be read is 8KB, then 2 read capacity units are required to sustain one strongly consistent read per second, 1 read capacity unit in case of eventually consistent reads, or 4 read capacity units for a transactional read request.
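
As a back-of-the-envelope illustration of the arithmetic above, here is a small helper sketch that rounds item sizes up to the 1KB write and 4KB read boundaries; it only mirrors the rules described in this section and is not part of the series code.

using System;

public static class CapacityCalculator
{
	// One write capacity unit covers an item of up to 1KB per second; transactions double the cost.
	public static int WriteUnits(double itemSizeKb, bool transactional = false)
	{
		var units = (int)Math.Ceiling(itemSizeKb / 1);
		return transactional ? units * 2 : units;
	}

	// One read capacity unit covers one strongly consistent read of up to 4KB per second,
	// or two eventually consistent reads; transactions double the cost.
	public static int ReadUnits(double itemSizeKb, bool stronglyConsistent = true, bool transactional = false)
	{
		var units = (int)Math.Ceiling(itemSizeKb / 4);
		if (!stronglyConsistent)
		{
			units = (int)Math.Ceiling(units / 2.0);
		}
		return transactional ? units * 2 : units;
	}
}

// Matches the examples in the text: ReadUnits(8) is 2, ReadUnits(8, false) is 1,
// ReadUnits(8, true, true) is 4, WriteUnits(2) is 2, WriteUnits(2, true) is 4.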

If the application exceeds the provisioned throughput capacity on a table or index, it is subject to request throttling. Throttling prevents the application from consuming too many capacity units. When a request is throttled, it fails with an HTTP 400 code (Bad Request) and a ProvisionedThroughputExceededException. The AWS SDKs have built-in support for retrying throttled requests, so no custom logic is needed.
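
If the default retry behavior is not sufficient, it can be tuned on the client configuration. A minimal sketch, assuming the AWSSDK.DynamoDBv2 package and the default credentials chain, is shown below.

var dynamoDbConfig = new AmazonDynamoDBConfig
{
	RegionEndpoint = RegionEndpoint.USEast1,
	// How many times the SDK retries a request that failed with a retryable error,
	// such as ProvisionedThroughputExceededException.
	MaxErrorRetry = 5
};
var client = new AmazonDynamoDBClient(dynamoDbConfig);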

Different programmatic interfaces

Every AWS SDK provides one or more programmatic interfaces for working with Amazon DynamoDB. These interfaces range from simple low-level DynamoDB wrappers to object-oriented persistence layers. The available interfaces vary depending on the AWS SDK and programming language that you use. For C#, the available interfaces are the low-level interface, the document interface, and the object persistence interface. In AWS examples in C# – basic DynamoDB operations post I have given detailed code examples of all of them.

Low-level interface

The low-level interface lets the consumer manage all the details and do the data mapping. Data is mapped manually to its proper data type. Supported data types are:

  • B – binary value, a MemoryStream
  • BS – list of MemoryStream objects
  • S – string
  • SS – list of string objects
  • N – number converted into a string
  • NS – list of number strings
  • BOOL – boolean
  • L – list of AttributeValue objects
  • M – map, dictionary of AttributeValue objects
  • NULL – if set to true, then this is a null value

If the low-level interface is used for querying, then a KeyConditionExpression is used to query the data. It is called a query, but it is not actually a query in the RDBMS way of thinking, as the HASH key can only be used with an equality operator. For the RANGE key, there is a variety of operators to be used, such as the following (a short sketch using one of them is shown after the list):

  • sortKeyName = :sortkeyval – true if the sort key value is equal to :sortkeyval
  • sortKeyName < :sortkeyval – true if the sort key value is less than :sortkeyval
  • sortKeyName <= :sortkeyval – true if the sort key value is less than or equal to :sortkeyval
  • sortKeyName > :sortkeyval – true if the sort key value is greater than :sortkeyval
  • sortKeyName >= :sortkeyval – true if the sort key value is greater than or equal to :sortkeyval
  • sortKeyName BETWEEN :sortkeyval1 AND :sortkeyval2 – true if the sort key value is greater than or equal to :sortkeyval1, and less than or equal to :sortkeyval2
  • begins_with ( sortKeyName, :sortkeyval ) – true if the sort key value begins with a particular operand. (You cannot use this function with a sort key that is of type Number.) Note that the function name begins_with is case-sensitive.
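
A hedged sketch of a QueryRequest using a range operator from the list is shown below, assuming an Actors table with FirstName as the HASH key and LastName as the RANGE key, as used later in the series; _client is a placeholder for an IAmazonDynamoDB instance.

var request = new QueryRequest("Actors")
{
	// The HASH key must use equality; the RANGE key can use BETWEEN, begins_with, etc.
	KeyConditionExpression = "FirstName = :firstName AND LastName BETWEEN :from AND :to",
	ExpressionAttributeValues = new Dictionary<string, AttributeValue>
	{
		{":firstName", new AttributeValue { S = "Bruce" }},
		{":from", new AttributeValue { S = "A" }},
		{":to", new AttributeValue { S = "M" }}
	}
};
var response = await _client.QueryAsync(request);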

Document interface

The document programming interface returns the full document by its unique HASH key. The document is actually a JSON.

{
	"Title": {
		"Value": "Die Hard",
		"Type": 0
	},
	"Genre": {
		"Value": "0",
		"Type": 1
	}
}

Object persistence interface

With the object persistence model, client classes are mapped to DynamoDB tables. There are several attributes that can be applied to database model classes, such as DynamoDBTable, DynamoDBHashKey, DynamoDBRangeKey, DynamoDBProperty, DynamoDBIgnore, etc. To save the client-side objects to the tables, the object persistence model provides the DynamoDBContext class, an entry point to DynamoDB. This class provides a connection to DynamoDB and enables you to access tables and perform various CRUD operations.

Architectural constraints

Understanding DynamoDB’s nature is important in order to design a service that works with it. It is important to define the table capacity cost-efficiently. If too little capacity is defined, then consumers can get HTTP 400 responses; the other extreme is generating way too much cost. Another aspect is reading the data. DynamoDB does not provide a way to search for data; the application that uses DynamoDB has to have a proper way to access the data by key.

Using DynamoDB in a service

DynamoDB can be used in a service in a straightforward way, such as in SqsReader or in the ActorsServerlessLambda and MoviesServerlessLambda functions, see the bigger picture in AWS examples in C# – working with SQS, DynamoDB, Lambda, ECS post. An AmazonDynamoDBClient is instantiated and used with one of the programming interfaces described above.

Another important usage is to subscribe to and process stream events. This is done in both ActorsLambdaFunction and MoviessLambdaFunction. See more details about Lambda usage in AWS examples in C# – working with Lambda functions post.
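
The stream-processing lambdas from the repository are not reproduced here, but a minimal sketch of a handler for DynamoDB stream events, assuming the Amazon.Lambda.Core and Amazon.Lambda.DynamoDBEvents packages, could look roughly like this:

using Amazon.Lambda.Core;
using Amazon.Lambda.DynamoDBEvents;

public class StreamFunction
{
	public void Handler(DynamoDBEvent dynamoDbEvent, ILambdaContext context)
	{
		foreach (var record in dynamoDbEvent.Records)
		{
			// EventName is INSERT, MODIFY or REMOVE; NewImage and OldImage availability
			// depends on the StreamViewType configured on the table.
			context.Logger.LogLine($"{record.EventName}: {record.Dynamodb.Keys.Count} key attribute(s)");
		}
	}
}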

More information on how to run the solution can be found in AWS examples in C# – run the solution post.

Conclusion

In the current post, I have given a basic overview of DynamoDB. It is important to understand its specifics in order to use it efficiently.

AWS examples in C# – basic DynamoDB operations

Last Updated on by

Post summary: Code examples with DynamoDB write and read operations.

This post is part of AWS examples in C# – working with SQS, DynamoDB, Lambda, ECS series. The code used for this series of blog posts is located in aws.examples.csharp GitHub repository. In the current post, I give practical code examples of how to work with DynamoDB.

Instantiate Amazon DynamoDB client

In the current examples, in the SqsReader project, a configuration class called AppConfig is used. Its values are injected from the environment variables by the .NET Core framework in the Startup class. In order to work with DynamoDB, a client is needed. The DynamoDB client interface is called IAmazonDynamoDB and comes from the AWS C# SDK; the NuGet package is called AWSSDK.DynamoDBv2. The concrete AWS client implementation is AmazonDynamoDBClient, and an object is instantiated in the DynamoDbClientFactory class and used as a singleton. RegionEndpoint is used to instantiate AmazonDynamoDBConfig. The AwsCredentials class extends the AWS abstract AWSCredentials class and is used to manage the credentials.

DynamoDbClientFactory.cs

public static AmazonDynamoDBClient CreateClient(AppConfig appConfig)
{
	var dynamoDbConfig = new AmazonDynamoDBConfig
	{
		RegionEndpoint = RegionEndpoint.GetBySystemName(appConfig.AwsRegion)
	};
	var awsCredentials = new AwsCredentials(appConfig);
	return new AmazonDynamoDBClient(awsCredentials, dynamoDbConfig);
}

AwsCredentials.cs

public class AwsCredentials : AWSCredentials
{
	private readonly AppConfig _appConfig;

	public AwsCredentials(AppConfig appConfig)
	{
		_appConfig = appConfig;
	}

	public override ImmutableCredentials GetCredentials()
	{
		return new ImmutableCredentials(_appConfig.AwsAccessKey,
						_appConfig.AwsSecretKey, null);
	}
}

AppConfig.cs

public class AppConfig
{
	public string AwsRegion { get; set; }
	public string AwsAccessKey { get; set; }
	public string AwsSecretKey { get; set; }
}

Creating tables

A custom DatabaseClient class is created, which uses and exposes just a few methods of IAmazonDynamoDB. It checks if the table is created and, if it is not, creates it. Afterward, it waits for the table to reach the ACTIVE status. Movies and Actors tables creation is done in separate classes with CreateTableRequest, which needs the table name. KeySchema specifies the attributes that build the primary key for a table or an index. The attributes must also be defined in the AttributeDefinitions list. KeyType has two possible values – HASH and RANGE.

In the case of the Movies table, there is only a HASH key, which is always mandatory and unique; this means no two items can have the same partition key value, and a second insert overwrites the first one. In the case of the Actors table, along with the partition key, there is also a sort key with KeyType of RANGE, which is complementary to the HASH. I have not used secondary indexes in the current example, but DynamoDB provides this functionality; they can be defined with the GlobalSecondaryIndexes and LocalSecondaryIndexes elements of the CreateTableRequest.

A stream is defined with the StreamSpecification element in the CreateTableRequest, and its StreamViewType is NEW_AND_OLD_IMAGES. This means that in case of add, update or delete, the DynamoDBEvent, which is later used in a lambda, holds both the new and the old values of the item. ProvisionedThroughput is used to set the read and write capacity; in the current example, it is 5 capacity units for reading and the same for writing. The ProvisionedThroughput is needed because the default BillingMode is PROVISIONED. It can be changed to PAY_PER_REQUEST, and then ProvisionedThroughput should not be specified. A more detailed explanation of each parameter can be found in AWS examples in C# – create a service working with DynamoDB post.

MoviesRepository.cs

public async Task CreateTableAsync()
{
	var request = new CreateTableRequest
	{
		TableName = TableName,
		KeySchema = new List<KeySchemaElement>
		{
			new KeySchemaElement
			{
				AttributeName = "Title",
				KeyType = "HASH"
			}
		},
		AttributeDefinitions = new List<AttributeDefinition>
		{
			new AttributeDefinition
			{
				AttributeName = "Title",
				AttributeType = "S"
			}
		},
		ProvisionedThroughput = new ProvisionedThroughput
		{
			ReadCapacityUnits = 5,
			WriteCapacityUnits = 5
		},
		StreamSpecification = new StreamSpecification
		{
			StreamEnabled = true,
			StreamViewType = StreamViewType.NEW_AND_OLD_IMAGES
		}
	};

	await _client.CreateTableAsync(request);
}

ActorsRepository.cs

public async Task CreateTableAsync()
{
	var request = new CreateTableRequest
	{
		TableName = TableName,
		KeySchema = new List<KeySchemaElement>
		{
			new KeySchemaElement
			{
				AttributeName = "FirstName",
				KeyType = "HASH"
			},
			new KeySchemaElement
			{
				AttributeName = "LastName",
				KeyType = "RANGE"
			}
		},
		AttributeDefinitions = new List<AttributeDefinition>
		{
		   new AttributeDefinition
			{
				AttributeName = "FirstName",
				AttributeType = "S"
			},
			new AttributeDefinition
			{
				AttributeName = "LastName",
				AttributeType = "S"
			}
		},
		ProvisionedThroughput = new ProvisionedThroughput
		{
			ReadCapacityUnits = 5,
			WriteCapacityUnits = 5
		},
		StreamSpecification = new StreamSpecification
		{
			StreamEnabled = true,
			StreamViewType = StreamViewType.NEW_AND_OLD_IMAGES
		}
	};

	await _client.CreateTableAsync(request);
}

DatabaseClient.cs

private const string StatusUnknown = "UNKNOWN";
private const string StatusActive = "ACTIVE";

private readonly IAmazonDynamoDB _client;

public DatabaseClient(IAmazonDynamoDB client)
{
	_client = client;
}

public async Task CreateTableAsync(CreateTableRequest createTableRequest)
{
	var status = await GetTableStatusAsync(createTableRequest.TableName);
	if (status != StatusUnknown)
	{
		return;
	}

	await _client.CreateTableAsync(createTableRequest);

	await WaitUntilTableReady(createTableRequest.TableName);
}

public async Task PutItemAsync(PutItemRequest putItemRequest)
{
	await _client.PutItemAsync(putItemRequest);
}

private async Task<string> GetTableStatusAsync(string tableName)
{
	try
	{
		var response = await _client.DescribeTableAsync(new DescribeTableRequest
		{
			TableName = tableName
		});
		return response?.Table.TableStatus;
	}
	catch (ResourceNotFoundException)
	{
		return StatusUnknown;
	}
}

private async Task WaitUntilTableReady(string tableName)
{
	var status = await GetTableStatusAsync(tableName);
	for (var i = 0; i < 10 && status != StatusActive; ++i)
	{
		await Task.Delay(500);
		status = await GetTableStatusAsync(tableName);
	}
}

Different programmatic interfaces

In AWS examples in C# – create a service working with DynamoDB post, I have explained in more detail the three different programmatic interfaces that DynamoDB offers: a low-level interface, a document interface, and an object persistence interface.

Writing using the low-level interface

The low-level interface lets the consumer manage all the details and do the data mapping. Here is an example of how to create an Actor using the low-level interface. Data is mapped manually to its proper data type. In this case, actor.FirstName and actor.LastName are assigned to the S property of the AttributeValue, which is the string type.

private readonly IDatabaseClient _client;
public async Task SaveActorAsync(Actor actor)
{
	var request = new PutItemRequest
	{
		TableName = TableName,
		Item = new Dictionary<string, AttributeValue>
		{
			{"FirstName", new AttributeValue {S = actor.FirstName}},
			{"LastName", new AttributeValue {S = actor.LastName}}
		}
	};
	await _client.PutItemAsync(request);
}

The full code is in ActorsRepository.cs.

Writing using the object persistence interface

With the object persistence interface, client classes are mapped to DynamoDB tables. The example given below comes from the original AWS documentation and shows explicit mapping. With DynamoDBTable the mapping to the table is created, then DynamoDBHashKey and DynamoDBRangeKey annotate the keys. With DynamoDBProperty a specific name can be given, so it can differ from the table field name. Title is directly mapped to the Title field in the database table. The DynamoDBIgnore attribute excludes this particular property from being written to and read from the table.


[DynamoDBTable("ProductCatalog")]
public class Book
{
	[DynamoDBHashKey]
	public int Id { get; set; }

	public string Title { get; set; }

	[DynamoDBRangeKey]
	public int ISBN { get; set; }

	[DynamoDBProperty("Authors")]
	public List<string> BookAuthors { get; set; }

	[DynamoDBIgnore]
	public string CoverPage { get; set; }
}

To save the client-side objects to the tables, the object persistence model provides the DynamoDBContext class, an entry point to DynamoDB. This class provides a connection to DynamoDB and enables you to access tables and perform various CRUD operations. The current examples are slightly different since the Movie model is very simple: there are no DynamoDB attributes on it, so DynamoDBContext uses its default mapping features to map it. The movie has two properties: Title, which is a string and is the HASH key in the table, and Genre, which is an enum, practically an integer.


public enum MovieGenre
{
	[EnumMember(Value = "Action Movie")]
	Action,
	[EnumMember(Value = "Drama Movie")]
	Drama
}

public class Movie
{
	public string Title { get; set; }

	[JsonConverter(typeof(StringEnumConverter))]
	public MovieGenre Genre { get; set; }
}

Since there is no DynamoDBTable attribute on the model, DynamoDBContext tries to map it by default to a table with the name Movie, but such a table does not exist. This is why a DynamoDBOperationConfig is needed to map to the correct table name.


private readonly IDynamoDBContext _context;

public async Task SaveMovieAsync(Movie movie)
{
	var operationConfig = new DynamoDBOperationConfig
	{
		OverrideTableName = "Movies"
	};
	await _context.SaveAsync(movie, operationConfig);
}

The full code is in MoviesRepository.cs. The object persistence interface is a wide topic; full details can be found in the .NET: Object Persistence Model page.

Querying using the low-level interface

An example is given of a query request for the Actors table, which has FirstName as a HASH key and LastName as a RANGE key. Important here is the KeyConditionExpression, which holds the actual query. It is called a query, but it is not actually a query in the RDBMS way of thinking, as the HASH key can only be used with an equality operator. For the RANGE key, there is a variety of operators to be used; in the example given, the equality operator is used as well. To add a value to the value placeholder, :FirstName in the example, ExpressionAttributeValues is used. The dictionary key is the placeholder name, and AttributeValue is the value mapped to a specific value type, in the example S, for a string. It is also possible to give a placeholder for the table field name, which is then replaced with the actual field name from the ExpressionAttributeNames dictionary, such as #LastName.

private static QueryRequest BuildQueryRequest(string firstName, string lastName)
{
	var request = new QueryRequest("Actors")
	{
		KeyConditionExpression = "FirstName = :FirstName"
	};
	request.ExpressionAttributeValues.Add(":FirstName", new AttributeValue
	{
		S = firstName
	});

	if (!string.IsNullOrEmpty(lastName))
	{
		request.KeyConditionExpression += " AND #LastName = :LastName";
		request.ExpressionAttributeNames.Add("#LastName", "LastName");
		request.ExpressionAttributeValues.Add(":LastName", new AttributeValue
		{
			S = lastName
		});
	}

	return request;
}

The full code is in ActorsHandler.cs. More about DynamoDB queries can be found in DynamoDB API_Query page.

Get item using document interface

The document programming interface returns the full document by its unique HASH key. The table is accessed with public static Table LoadTable(IAmazonDynamoDB ddbClient, TableConfig config) and then the document is loaded with public Task<Document> GetItemAsync(Primitive hashKey). In current examples, a proxy class is defined, which isolates the IAmazonDynamoDB operations:

GetDocumentAsync


public async Task<Document> GetDocumentAsync(string tableName, string documentKey)
{
	var table = Table.LoadTable(_dynamoDbClient, new TableConfig(tableName));
	return await table.GetItemAsync(new Primitive(documentKey));
}

GetDocumentAsync

var document = await _dynamoDbReader.GetDocumentAsync(TableName, title);

var movie = new Movie
{
	Title = document["Title"],
	Genre = (MovieGenre)int.Parse(document["Genre"])
};

The document is actually a JSON.

{
	"Title": {
		"Value": "Die Hard",
		"Type": 0
	},
	"Genre": {
		"Value": "0",
		"Type": 1
	}
}

The full code is in MoviesHandler.cs.

Conclusion

In the current post, I have given practical code examples of how to do the basic DynamoDB operations in C#. This post is complementary to AWS examples in C# – create a service working with DynamoDB post.

AWS examples in C# – create a service working with SQS

Last Updated on by

Post summary: A basic overview of AWS SQS, how to write a message to it, and how to make a consumer that constantly polls the queue for new messages.

This post is part of AWS examples in C# – working with SQS, DynamoDB, Lambda, ECS series. The code used for this series of blog posts is located in aws.examples.csharp GitHub repository.

Event-driven architecture

I would like to briefly touch on the topic of event-driven architecture, since message service providers, such as SQS or RabbitMQ, are the basis of its implementation. This is a software architecture paradigm promoting the production, detection, consumption of, and reaction to events. An event is a significant change in the state of an object, in which someone might be interested. All communication happens asynchronously and systems are loosely coupled. An event-driven system typically consists of event emitters, event consumers, and event channels. Emitters have the responsibility to detect, gather, and transfer events. Emitters do not know the consumers of the events; they do not even know if a consumer exists. Consumers have the responsibility of applying a reaction as soon as an event is presented in a dedicated channel. This leads to the pattern commonly known as eventual consistency, which pushes the complexity of consistency to the application tier, the biggest challenge to solve in an event-driven architecture.

Apart from SQS, there is an even more sophisticated service from AWS called EventBridge, which makes it easy to build event-driven applications because it takes care of event ingestion and delivery, security, authorization, and error handling. It is basically a serverless event bus that makes it easy to connect applications together using data from your own applications, integrated Software-as-a-Service (SaaS) applications, and AWS services.

AWS SQS

SQS stands for Simple Queue Service; it is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work.

Types of queues

SQS offers two types of message queues:

  • Standard queues – they offer maximum throughput, best-effort ordering, and at-least-once delivery. This means there is no guaranteed order and messages can be duplicated.
  • FIFO queues – they are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.

Dead-letter queue

In addition to those, there is a special type of queue, called a dead-letter queue. They are used mainly for debugging and failure-proofing applications. If a message cannot be successfully processed after several retries from one of the source queues above, it ends up in the dead-letter queue, from which it can be analyzed and returned to the source queue for reprocessing.

Message processing

It is important to know how SQS operates in order to make good architectural decisions. When a message is published to the queue, it becomes visible. When some consumer reads the message, the message becomes not visible but is still present in the queue; its status now is in-flight. There is a visibility timeout, which is 30 seconds by default; the maximum value is 12 hours. After the visibility timeout passes, the message becomes visible again to be read by consumers. In case there is no dead-letter queue, this process happens over and over until the message retention period is reached, after which the message gets automatically deleted. The retention period default value is 4 days; the maximum value is 14 days. In case of a dead-letter queue, after the message fails to be processed more than the maximum receive count times, it goes to the dead-letter queue and stays there for the dead-letter queue’s message retention period. See more info on SQS on the How Amazon SQS Works page.
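
If a consumer needs more time than the visibility timeout to process a particular message, the timeout can be extended for that single in-flight message. Below is a minimal sketch with the AWS SDK for .NET; _sqsClient, queueUrl, and message are placeholders for the objects used throughout this series.

// Give this in-flight message 5 more minutes before it becomes visible to other consumers again.
await _sqsClient.ChangeMessageVisibilityAsync(new ChangeMessageVisibilityRequest
{
	QueueUrl = queueUrl,
	ReceiptHandle = message.ReceiptHandle,
	VisibilityTimeout = 300
});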

Architectural approaches

One queue or many queues

Since many event emitters can write messages to the queue, it gets tricky to process the messages properly. One option is to have a separate queue for separate types of messages; another option is to put some metadata into the messages. I have decided to go for the solution with one queue, because I have just one consumer which knows which message processor to call, and this simplifies the code. In the case of many SQS queues, there should be many consumers defined in the code, which is better split into multiple micro-services, one for each SQS queue.

Dead-letter

I would say a dead-letter queue with the maximum retention period of 14 days is a good idea. In this case, messages can be quarantined, which does not slow down the normal queue operations. In the case of no dead-letter queue and default timeouts, if a message cannot be processed, then it will appear every 30 seconds for a period of 4 days; this makes 2880 times a day, or 11520 times in total. Now imagine there are thousands of messages like this one. I have decided to go for a dead-letter queue with the default retention period.

Long polling

Long polling is another aspect that has to be considered. It can be enabled in two ways. One is on a queue level, by setting ReceiveMessageWaitTimeSeconds when creating the queue; it can be from 1 to 20 seconds. The other way to enable it is when messages are read from the queue; there is a WaitTimeSeconds setting in the request, which can be from 1 to 20 seconds. In case both options are combined, then WaitTimeSeconds takes precedence.

Unknown messages

Another architectural decision, in case there is only one queue, is what to do with unknown messages. In the case of no dead-letter queue, such messages are best deleted; otherwise, they will keep showing up for the queue’s retention period. I throw an error in the logs, and after 3 unsuccessful attempts, which is the maximum receive count I have configured, the message goes to the dead-letter queue.

Standard vs. FIFO queues

SQS is able to handle a high number of messages, theoretically an unlimited number of messages per second. A standard SQS queue does not maintain the order of messages, and it is also possible that messages are delivered more than once. For this reason, AWS offers FIFO (First-In-First-Out) queues, which preserve message order and ensure exactly-once processing. The limitation of the FIFO queue is its number of transactions per second, which is 300 messages per second, or 3000 if they are in batched mode.

SQS queue operations at a glance

In the AWS examples in C# – basic SQS queue operations post, the operations briefed below are described in more detail:

  • Create queue with dead-letter queue
  • Read messages from the queue
  • Write a message to the queue (comes in two flavors)
  • Delete messages from the queue
  • Move messages from dead-letter to source queue

Creating SQS message consumer

In order to read the messages, there should be a consumer that constantly polls the queue and processes the messages. ProcessMessageAsync uses the strategy design pattern to get the proper message processor based on the MessageType attribute. Processors are stored in _messageProcessors, which is an IEnumerable<IMessageProcessor> and is injected by .NET Core dependency injection. If a processor is found, it is invoked; if not, an error is shown in the logs. This logic can be subject to change if unknown messages are tolerated in the queue. In the ProcessAsync method there is a while loop, which constantly reads for messages via _sqsClient, the SqsClient class described in the previous sections. SQS returns the response when there are some messages, or when the WaitTimeSeconds set when reading the message has expired, or when ReceiveMessageWaitTimeSeconds, configured by the AwsQueueLongPollTimeSeconds environment variable, has expired. This while loop is a little tricky to unit test though, as it consumes the main thread, and the mocked object should be instructed to wait. Everything is controlled by a CancellationTokenSource; when it is canceled, the consumption is stopped.

ProcessMessageAsync

private async Task ProcessMessageAsync(Message message)
{
	try
	{
		var messageType = message.MessageAttributes.GetMessageTypeAttributeValue();
		if (messageType == null)
		{
			throw new Exception($"No 'MessageType' attribute present in message {JsonConvert.SerializeObject(message)}");
		}

		var processor = _messageProcessors.SingleOrDefault(x => x.CanProcess(messageType));
		if (processor == null)
		{
			throw new Exception($"No processor found for message type '{messageType}'");
		}

		await processor.ProcessAsync(message);
		await _sqsClient.DeleteMessageAsync(message.ReceiptHandle);
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, $"Cannot process message [id: {message.MessageId}, receiptHandle: {message.ReceiptHandle}, body: {message.Body}] from queue {_sqsClient.GetQueueName()}");
	}
}

ProcessAsync

private async void ProcessAsync()
{
	try
	{
		while (!_tokenSource.Token.IsCancellationRequested)
		{
			var messages = await _sqsClient.GetMessagesAsync(_tokenSource.Token);
			messages.ForEach(async x => await ProcessMessageAsync(x));
		}
	}
	catch (OperationCanceledException)
	{
		//operation has been canceled but it shouldn't be propagated
	}
}

StartConsuming

public void StartConsuming()
{
	if (!IsConsuming())
	{
		_tokenSource = new CancellationTokenSource();
		ProcessAsync();
	}
}

private bool IsConsuming()
{
	return _tokenSource != null && !_tokenSource.Token.IsCancellationRequested;
}
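
The repository’s exact stop logic is not shown in this post, but since everything is controlled by the CancellationTokenSource, a matching StopConsuming method could be as simple as the sketch below.

public void StopConsuming()
{
	if (IsConsuming())
	{
		// Canceling the token makes GetMessagesAsync throw OperationCanceledException,
		// which ends the while loop in ProcessAsync.
		_tokenSource.Cancel();
	}
}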

Message processors

In the current example, I have taken the architectural design decision to have one queue and different messages in it. For each different type of message, there is a relevant processor. With the strategy design pattern, the appropriate message processor is picked based on the MessageType attribute. Processors implement a very simple interface, IMessageProcessor. In the current example, they take the message body, deserialize it to an object, and save this object to DynamoDB. A sample implementation is shown below:

IMessageProcessor

public interface IMessageProcessor
{
	bool CanProcess(string messageType);
	Task ProcessAsync(Message message);
}

ActorMessageProcessor

public bool CanProcess(string messageType)
{
	return messageType == typeof(Actor).Name;
}

public async Task ProcessAsync(Message message)
{
	var actor = JsonConvert.DeserializeObject<Actor>(message.Body);
	await _actorsRepository.SaveActorAsync(actor);
	_logger.LogInformation($"ActorMessageProcessor invoked with: {message.Body}");
}

AWS ECS and AWS ECR

ECS stands for Elastic Container Service; it is a fully managed container orchestration service. Containers can be run in clusters using AWS Fargate, which is a serverless compute engine for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.

ECR stands for Elastic Container Registry; it is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. ECR is integrated with ECS, eliminating the need to operate your own container repositories or worry about scaling the underlying infrastructure.

SqsWriter and SqsReader

SqsWriter is a .NET Core 3.0 application, that is dockerized and run in AWS ECS with Fargate, and its container images are stored in ECR. It exposes an API that can be used to publish Actor or Movie objects as messages with separate MessageType attributes in the SQS queue.

SqsReader is a .NET Core 3.0 application, that is dockerized and run in AWS ECS with Fargate, and its container images are stored in ECR. It has a consumer that listens to the SQS queue and processes the messages by writing them into the appropriate AWS DynamoDB tables. It also exposes an API to stop or start processing, as well as to reprocess the dead-letter queue or simply get the queue status.

More information on how to run the solution can be found in AWS examples in C# – run the solution post.

Conclusion

In the current post, I have given some concepts of event-driven architecture and how SQS fits into it. Also, I have described some architectural considerations when using SQS queues, such as dead-letter queues and one queue with different message types versus several queues. In the end, I have given practical code on how to make a consumer for the SQS queue.

AWS examples in C# – basic SQS queue operations

Last Updated on by

Post summary: Code examples of how to perform basic SQS queue operations like reading, writing, deleting, creating a queue, etc.

This post is part of AWS examples in C# – working with SQS, DynamoDB, Lambda, ECS series. The code used for this series of blog posts is located in aws.examples.csharp GitHub repository. In the current post, I put into practice examples of the basic SQS operations; a more detailed description of their usage is available in AWS examples in C# – create a service working with SQS post.

Instantiate Amazon SQS client

In the current examples, I use a configuration class called AppConfig. Its values are injected from the environment variables by the .NET Core framework in the Startup class. In order to work with SQS, a client is needed. The SQS client interface is called IAmazonSQS and comes from the AWS C# SDK. The NuGet package is called AWSSDK.SQS, which in the current example comes as a sub-reference from the Automationrhapsody.Aws.Examples.Models NuGet package. The concrete AWS client implementation is AmazonSQSClient, and a singleton object is instantiated in the SqsClientFactory class, where RegionEndpoint is used to instantiate AmazonSQSConfig. I use the AwsCredentials class, which extends the AWS abstract AWSCredentials class, in order to manage the credentials.

SqsClientFactory.cs

public static AmazonSQSClient CreateClient(AppConfig appConfig)
{
	var sqsConfig = new AmazonSQSConfig
	{
		RegionEndpoint = RegionEndpoint.GetBySystemName(appConfig.AwsRegion)
	};
	var awsCredentials = new AwsCredentials(appConfig);
	return new AmazonSQSClient(awsCredentials, sqsConfig);
}

AwsCredentials.cs

public class AwsCredentials : AWSCredentials
{
	private readonly AppConfig _appConfig;

	public AwsCredentials(AppConfig appConfig)
	{
		_appConfig = appConfig;
	}

	public override ImmutableCredentials GetCredentials()
	{
		return new ImmutableCredentials(_appConfig.AwsAccessKey,
						_appConfig.AwsSecretKey, null);
	}
}

AppConfig.cs

public class AppConfig
{
	private const string FifoSuffix = ".fifo";
	private string _queueName;

	public string AwsRegion { get; set; }
	public string AwsAccessKey { get; set; }
	public string AwsSecretKey { get; set; }
	public string AwsQueueName
	{
		get => AwsQueueIsFifo ? _queueName + FifoSuffix : _queueName;
		set => _queueName = value;
	}
	public string AwsDeadLetterQueueName
	{
		get
		{
			var deadLetter = _queueName + "-exceptions";
			return AwsQueueIsFifo ? deadLetter + FifoSuffix : deadLetter;
		}
	}

	public bool AwsQueueAutomaticallyCreate { get; set; }
	public bool AwsQueueIsFifo { get; set; }
	public int AwsQueueLongPollTimeSeconds { get; set; }
}

Startup.cs

public Startup()
{
	// The configuration is built from the environment variables, so AppConfig values can be bound from it.
	var configurationBuilder = new ConfigurationBuilder()
		.AddEnvironmentVariables();
	Configuration = configurationBuilder.Build();
}

public IConfiguration Configuration { get; }

Local SqsClient dependencies

This sample code shows what external dependencies the SqsClient class needs. They are injected into the constructor by .NET Core dependency injection.

private readonly AppConfig _appConfig;
private readonly IAmazonSQS _sqsClient;
private readonly ILogger<SqsClient> _logger;
private readonly ConcurrentDictionary<string, string> _queueUrlCache;

public SqsClient(IOptions<AppConfig> awsConfig, 
	IAmazonSQS sqsClient, ILogger<SqsClient> logger)
{
	_appConfig = awsConfig.Value;
	_sqsClient = sqsClient;
	_logger = logger;
	_queueUrlCache = new ConcurrentDictionary<string, string>();
}

Create SQS queue and dead-letter queue

Queues can be created programmatically, something that will be described in the current post. Another option is to create them from the AWS CLI, see more information in AWS examples in C# – deploy with AWS CLI commands post.

Once the client is in place, the queue and the dead-letter queue are created with the code below. The code snippet also enables long polling for the queue, which allows reducing costs while letting consumers receive messages as soon as they arrive in the queue. Basically, SQS waits until a message is available in the queue before sending a response.

public async Task CreateQueueAsync()
{
	const string arnAttribute = "QueueArn";

	try
	{
		var createQueueRequest = new CreateQueueRequest();
		if (_appConfig.AwsQueueIsFifo)
		{
			createQueueRequest.Attributes.Add("FifoQueue", "true");
		}

		createQueueRequest.QueueName = _appConfig.AwsQueueName;
		var createQueueResponse = await _sqsClient.CreateQueueAsync(createQueueRequest);
		createQueueRequest.QueueName = _appConfig.AwsDeadLetterQueueName;
		var createDeadLetterQueueResponse = await _sqsClient.CreateQueueAsync(createQueueRequest);

		// Get the ARN of the dead-letter queue and configure the main queue to deliver messages to it
		var attributes = await _sqsClient.GetQueueAttributesAsync(new GetQueueAttributesRequest
		{
			QueueUrl = createDeadLetterQueueResponse.QueueUrl,
			AttributeNames = new List<string> { arnAttribute }
		});
		var deadLetterQueueArn = attributes.Attributes[arnAttribute];

		// RedrivePolicy on main queue to deliver messages to dead letter queue if they fail processing after 3 times
		var redrivePolicy = new
		{
			maxReceiveCount = "3",
			deadLetterTargetArn = deadLetterQueueArn
		};
		await _sqsClient.SetQueueAttributesAsync(new SetQueueAttributesRequest
		{
			QueueUrl = createQueueResponse.QueueUrl,
			Attributes = new Dictionary<string, string>
			{
				{"RedrivePolicy", JsonConvert.SerializeObject(redrivePolicy)},
				// Enable Long polling
				{"ReceiveMessageWaitTimeSeconds", _appConfig.AwsQueueLongPollTimeSeconds.ToString()}
			}
		});
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, $"Error when creating SQS queue {_appConfig.AwsQueueName} and {_appConfig.AwsDeadLetterQueueName}");
	}
}

Read messages from the SQS queue

Reading is done with the code below, where _queueUrlCache is a ConcurrentDictionary<string, string>. The queue URL is cached for better performance in the GetQueueUrl method.

GetMessagesAsync

public async Task<List<Message>> GetMessagesAsync(string queueName, CancellationToken cancellationToken = default)
{
	var queueUrl = await GetQueueUrl(queueName);

	try
	{
		var response = await _sqsClient.ReceiveMessageAsync(new ReceiveMessageRequest
		{
			QueueUrl = queueUrl,
			WaitTimeSeconds = _appConfig.AwsQueueLongPollTimeSeconds,
			AttributeNames = new List<string> { "ApproximateReceiveCount" },
			MessageAttributeNames = new List<string> { "All" }
		}, cancellationToken);

		if (response.HttpStatusCode != HttpStatusCode.OK)
		{
			throw new AmazonSQSException($"Failed to GetMessagesAsync for queue {queueName}. Response: {response.HttpStatusCode}");
		}

		return response.Messages;
	}
	catch (TaskCanceledException)
	{
		_logger.LogWarning($"Failed to GetMessagesAsync for queue {queueName} because the task was canceled");
		return new List<Message>();
	}
	catch (Exception)
	{
		_logger.LogError($"Failed to GetMessagesAsync for queue {queueName}");
		throw;
	}
}

GetQueueUrl

private async Task<string> GetQueueUrl(string queueName)
{
	if (string.IsNullOrEmpty(queueName))
	{
		throw new ArgumentException("Queue name should not be blank.");
	}

	if (_queueUrlCache.TryGetValue(queueName, out var result))
	{
		return result;
	}

	try
	{
		var response = await _sqsClient.GetQueueUrlAsync(queueName);
		return _queueUrlCache.AddOrUpdate(queueName, response.QueueUrl, (q, url) => url);
	}
	catch (QueueDoesNotExistException ex)
	{
		throw new InvalidOperationException($"Could not retrieve the URL for the queue '{queueName}' as it does not exist or you do not have access to it.", ex);
	}
}

Write a message to the SQS queue

The current example writes a single message to the queue. Writing comes in two flavors: with a generic method accepting an object instance, or with a method accepting the message text and the message type. The AWS SDK also offers a method called SendMessageBatchAsync, which can send a group of messages; because of the nature of the example application it is not needed here, but a sketch of it is shown below.
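
A hedged sketch of how SendMessageBatchAsync could be used is given below; it is not part of the example application, and messages and queueUrl are placeholders. Each entry needs an id that is unique within the batch, and a single batch can contain up to 10 messages.

var entries = messages
	.Select((body, index) => new SendMessageBatchRequestEntry
	{
		Id = index.ToString(), // must be unique within the batch
		MessageBody = body
	})
	.ToList();

await _sqsClient.SendMessageBatchAsync(new SendMessageBatchRequest
{
	QueueUrl = queueUrl,
	Entries = entries
});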

In the case of a FIFO queue, there are two more values to be set. One is the MessageGroupId, so messages from the same group are always processed one by one, in order. In the current example, messages are grouped by type. The other mandatory value is the MessageDeduplicationId, which is used by SQS for deduplication of sent messages. If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren’t delivered during the 5-minute deduplication interval.

PostMessageAsync<T>

public async Task PostMessageAsync<T>(string queueName, T message)
{
	var queueUrl = await GetQueueUrl(queueName);

	try
	{
		var sendMessageRequest = new SendMessageRequest
		{
			QueueUrl = queueUrl,
			MessageBody = JsonConvert.SerializeObject(message),
			MessageAttributes = SqsMessageTypeAttribute.CreateAttributes<T>()
		};
		if (_appConfig.AwsQueueIsFifo)
		{
			sendMessageRequest.MessageGroupId = typeof(T).Name;
			sendMessageRequest.MessageDeduplicationId = Guid.NewGuid().ToString();
		}

		await _sqsClient.SendMessageAsync(sendMessageRequest);
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, $"Failed to PostMessagesAsync to queue '{queueName}'. Exception: {ex.Message}");
		throw;
	}
}

PostMessageAsync

public async Task PostMessageAsync(string queueName, string messageBody, string messageType)
{
	var queueUrl = await GetQueueUrl(queueName);

	try
	{
		var sendMessageRequest = new SendMessageRequest
		{
			QueueUrl = queueUrl,
			MessageBody = messageBody,
			MessageAttributes = SqsMessageTypeAttribute.CreateAttributes(messageType)
		};
		if (_appConfig.AwsQueueIsFifo)
		{
			sendMessageRequest.MessageGroupId = messageType;
			sendMessageRequest.MessageDeduplicationId = Guid.NewGuid().ToString();
		}

		await _sqsClient.SendMessageAsync(sendMessageRequest);
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, $"Failed to PostMessagesAsync to queue '{queueName}'. Exception: {ex.Message}");
		throw;
	}
}

Distinguishing messages in the queue

Since many event emitters can write messages to the queue, it gets tricky to process the messages properly. One option is to have a separate queue for separate types of messages; another option is to put some metadata into the messages. I have decided to go for the solution with one queue, because I have just one consumer which knows which message processor to call. In the case of many consumers, it is recommended to have several SQS queues, so a consumer does not need to read and disregard messages that are not meant for it, which is not optimal.

Every message gets additional MessageAttributes added to it. In the example above, this is done with the SqsMessageTypeAttribute.CreateAttributes(messageType) extension method, available in the Automationrhapsody.Aws.Examples.Models NuGet package, which is also part of the examples code and is located in the Models project. What this method does is add a MessageType string attribute, where the value is typeof(T).Name.

public static class SqsMessageTypeAttribute
{
	private const string AttributeName = "MessageType";

	public static string GetMessageTypeAttributeValue(this Dictionary<string, MessageAttributeValue> attributes)
	{
		return attributes.SingleOrDefault(x => x.Key == AttributeName).Value?.StringValue;
	}

	public static Dictionary<string, MessageAttributeValue> CreateAttributes<T>()
	{
		return CreateAttributes(typeof(T).Name);
	}

	public static Dictionary<string, MessageAttributeValue> CreateAttributes(string messageType)
	{
		return new Dictionary<string, MessageAttributeValue>
		{
			{
				AttributeName, new MessageAttributeValue
				{
					DataType = nameof(String),
					StringValue = messageType
				}
			}
		};
	}
}

Delete message from the queue

Once the message is processed, it should be removed from the queue. This is done with the following method:

public async Task DeleteMessageAsync(string queueName, string receiptHandle)
{
	var queueUrl = await GetQueueUrl(queueName);

	try
	{
		var response = await _sqsClient.DeleteMessageAsync(queueUrl, receiptHandle);

		if (response.HttpStatusCode != HttpStatusCode.OK)
		{
			throw new AmazonSQSException($"Failed to DeleteMessageAsync with for [{receiptHandle}] from queue '{queueName}'. Response: {response.HttpStatusCode}");
		}
	}
	catch (Exception)
	{
		_logger.LogError($"Failed to DeleteMessageAsync from queue {queueName}");
		throw;
	}
}

Reprocess messages from dead-letter queue

If there is a problem with message processing, messages are moved to the dead-letter queue. There might be a specific bug in the consumer application for this particular type of message. The bug might be fixed, a new version deployed, and now all those messages should be reprocessed. Moving messages from the dead-letter queue to the source queue is done with the following code:

public async Task RestoreFromDeadLetterQueueAsync(CancellationToken cancellationToken = default)
{
	var deadLetterQueueName = _appConfig.AwsDeadLetterQueueName;

	try
	{
		var token = new CancellationTokenSource();
		while (!token.Token.IsCancellationRequested)
		{
			var messages = await GetMessagesAsync(deadLetterQueueName, cancellationToken);
			if (!messages.Any())
			{
				token.Cancel();
				continue;
			}

			messages.ForEach(async message =>
			{
				var messageType = message.MessageAttributes.GetMessageTypeAttributeValue();
				if (messageType != null)
				{
					await PostMessageAsync(_appConfig.AwsQueueName, message.Body, messageType);
					await DeleteMessageAsync(deadLetterQueueName, message.ReceiptHandle);
				}
			});
		}
	}
	catch (Exception)
	{
		_logger.LogError($"Failed to ReprocessMessages from queue {deadLetterQueueName}");
		throw;
	}
}

SQS queue operations at a glance

All operations described above can be seen in SqsReader SqsClient class and SqsWriter SqsClient class.

Conclusion

In the current post, I have given code examples of how to perform basic SQS queue operations.

AWS examples in C# – run the solution

Last Updated on by

Post summary: Explanation of how to install and use the solution in AWS examples in C# blog post series.

This post is part of AWS examples in C# – working with SQS, DynamoDB, Lambda, ECS series. The code used for this series of blog posts is located in aws.examples.csharp GitHub repository. In the current post, I give information on how to install and run the project.

Disclaimer

The solution has commands to deploy to the cloud as well as to clean resources. Note that not all resources are cleaned, read more in the Cleanup section. In order to run it in AWS, a valid account is needed. I am not going to describe how to create an account. If an account is present, then there is also knowledge and awareness of how to use it.

Important: current examples generate costs on AWS account. Use cautiously at your own risk!

Restrictions

The project was tested to be working on Linux and Windows. For Windows, it is working only with Git Bash. The project requires a valid AWS account.

Required installations

In order to fully run and enjoy the project, the following needs to be installed:

Configurations

AWS CLI has to be configured in order to run properly. This happens with aws configure. If there is no AWS account, this is not an issue: put some values for the access and secret key and a correct region, like us-east-1.

Import the Postman collection in order to be able to try the examples. The Postman collection is in the aws.examples.csharp.postman_collection.json file in the code. This is an optional step, as there are also cURL examples below.

Run the project in AWS

Running on AWS requires the setting of environment variables:

export AwsAccessKey=KIA57FV4.....
export AwsSecretKey=mSgsxOWVh...
export AwsRegion=us-east-1

Then the solution is deployed to AWS with ./solution-deploy.sh script. Note that the output of the command gives the API Gateway URL and API key, as well as the SqsWriter and SqsReader endpoints. See image below:

Run the project partly in AWS

The most expensive part of the current setup is running the docker containers in EC2 (Elastic Compute Cloud). I have prepared a script called ./solution-deploy-hybrid.sh, which runs the containers locally, while everything else is in AWS. The environment variables from the previous section still have to be set. I believe this is the optimal variant if you want to test and run the code in a “production-like” environment.

Run the project in LocalStack

LocalStack provides an easy-to-use test/mocking framework for developing cloud applications. It spins up a testing environment on the local machine that provides the same functionality and APIs as the real AWS cloud environment. I have experimented with LocalStack, there is a localstack branch in GitHub. The solution can be run with the solution-deploy-localstack.sh command. I cannot really estimate if this is a good alternative, because I am running the free tier in AWS and the most expensive part is ECS, which I skip by running the containers locally instead of in AWS. See the previous section on how to run the hybrid setup.

Usage

There is a Postman collection which allows easy firing of the requests. Another option is to use cURL; examples of all requests with their Postman names are available below.

SqsWriter

SqsWriter is a .NET Core 3.0 application that is dockerized and run in AWS ECS (Elastic Container Service). It exposes an API that can be used to publish Actor or Movie objects. There is also a health check request. After AWS deployment, the proper endpoint is needed. The endpoint can be found in the output of the deployment scripts. See the image above.

PublishActor

curl --location --request POST 'http://localhost:5100/api/publish/actor' \
--header 'Content-Type: application/json' \
--data-raw '{
	"FirstName": "Bruce",
	"LastName": "Willis"
}'

PublishMovie

curl --location --request POST 'http://localhost:5100/api/publish/movie' \
--header 'Content-Type: application/json' \
--data-raw '{
	"Title": "Die Hard",
	"Genre": "Action Movie"
}'

When an Actor or Movie is published, it goes to the SQS queue; SqsReader picks it up from there and processes it. What is visible in the logs is that both LogEntryMessageProcessor and ActorMessageProcessor are invoked. See the screenshot:

SqsWriterHealthCheck

curl --location --request GET 'http://localhost:5100/health'

SqsReader

SqsReader is a .NET Core 3.0 application that is dockerized and run in AWS ECS. It has a consumer that listens to the SQS queue and processes the messages by writing them into the appropriate AWS DynamoDB tables. It also exposes an API to stop or start processing, as well as to reprocess the dead-letter queue or simply get the queue status. After AWS deployment, the proper endpoint is needed. The endpoint can be found in the output of the deployment scripts. See the image above.

ConsumerStart

curl --location --request POST 'http://localhost:5200/api/consumer/start' \
--header 'Content-Type: application/json' \
--data-raw ''

ConsumerStop

curl --location --request POST 'http://localhost:5200/api/consumer/stop' \
--header 'Content-Type: application/json' \
--data-raw ''

ConsumerStatus

curl --location --request GET 'http://localhost:5200/api/consumer/status'

ConsumerReprocess
If this one is invoked with no messages in the dead-letter queue, it takes 20 seconds to finish, because it actually waits for the long polling timeout; see the sketch after the request below.

curl --location --request POST 'http://localhost:5200/api/consumer/reprocess' \
--header 'Content-Type: application/json' \
--data-raw ''
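
The 20 seconds come from SQS long polling: the consumer's receive call waits up to the configured wait time for messages before returning an empty result. Below is a minimal sketch of such a receive call with the AWS SDK for .NET (AWSSDK.SQS package); the class name, queue URL, and exact settings are illustrative, not the ones from the repository.

using System;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public static class LongPollingSketch
{
	public static async Task ReceiveOnceAsync()
	{
		// Credentials and region are resolved from the default AWS configuration
		using var sqs = new AmazonSQSClient();

		var request = new ReceiveMessageRequest
		{
			QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue", // illustrative URL
			MaxNumberOfMessages = 10,
			WaitTimeSeconds = 20 // long polling: the call blocks up to 20 seconds when the queue is empty
		};

		var response = await sqs.ReceiveMessageAsync(request);
		Console.WriteLine($"Received {response.Messages.Count} message(s)");
	}
}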

SqsReaderHealthCheck

curl --location --request GET 'http://localhost:5200/health'

ActorsServerlessLambda

This lambda is managed by the Serverless framework. It is exposed as a REST API via AWS API Gateway. It also has a custom authorizer as well as an API Key attached. Those are described in a further post.

ServerlessActors

In the case of AWS, the API Key and URL are needed; those can be obtained from the deployment command logs. See the screenshot above. Put correct values in the CHANGE_ME and REGION placeholders. The request is:

curl --location --request POST 'https://CHANGE_ME.execute-api.REGION.amazonaws.com/dev/actors/search' \
--header 'Content-Type: application/json' \
--header 'x-api-key: CHANGE_ME' \
--header 'Authorization: Bearer validToken' \
--data-raw '{
    "FirstName": "Bruce",
    "LastName": "Willis"
}'

MoviesServerlessLambda

ServerlessMovies

Put correct values in the CHANGE_ME and REGION placeholders. The request is:

curl --location --request GET 'https://CHANGE_ME.execute-api.REGION.amazonaws.com/dev/movies/title/Die Hard'

Cleanup

Nota bene: This is a very important step, as leaving the solution running in AWS will accumulate costs.

In order to stop and clean up all AWS resources, run the ./solution-delete.sh script.

Nota bene: There is a resource that is not automatically deleted by the scripts. This is a Route53 resource created by AWS Cloud Map. It has to be deleted with the following commands. Note that the id in the delete command comes from the result of the list-namespaces command.

aws servicediscovery list-namespaces
aws servicediscovery delete-namespace --id ns-kneie4niu6pwwela

Verify cleanup

In order to be sure there are no leftovers from the examples, the following AWS services have to be checked:

  • SQS
  • DynamoDB
  • IAM -> Roles
  • EC2 -> Security Groups
  • ECS -> Clusters
  • ECS -> Task Definitions
  • ECR -> Repositories
  • Lambda -> Functions
  • Lambda -> Applications
  • CloudFormation -> Stacks
  • S3
  • CloudWatch -> Log Groups
  • Route 53
  • AWS Cloud Map

On top of that, Billing should be monitored regularly to ensure no unexpected costs are incurred.

Conclusion

This post describes how to run and use the solution described in the AWS examples in C# – working with SQS, DynamoDB, Lambda, ECS series.

Related Posts

Read more...

AWS examples in C# – working with SQS, DynamoDB, Lambda, ECS

Last Updated on by

Post summary: Overview of the AWS examples in C# series.

In several blog posts, I give some practical examples of how to use AWS SQS, DynamoDB, Lambda with C# code. The code used for this series of blog posts is located in aws.examples.csharp GitHub repository.

Introduction

AWS stands for Amazon Web Services, it is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis. AWS is one of the big cloud service providers. The others are Microsoft Azure and Google Cloud. All three cloud service providers have functions that are semantically common but differ in practical implementation. Also, every one of them has its own flavors. I have chosen to use AWS for these examples as it is something I have used before and I am most comfortable with it.

Architectural overview

In order to get a full understanding of the architecture, I have prepared this very basic diagram. It illustrates what services are there and how they communicate.

SqsReader and SqsWriter

Both are .NET Core 3.0 microservices running in docker containers. The images are uploaded to AWS ECR (Elastic Container Registry) and the containers are run in AWS ECS (Elastic Container Service). SqsWriter has a REST endpoint, by which an Actor or Movie can be posted. Both are pushed as a message to AWS SQS (Simple Queue Service). SqsReader is listening to the SQS queue and, in case of a message, it processes it. If the message is of type Actor or Movie, then SqsReader saves it to the respective AWS DynamoDB tables. If the message is a LogEntry, then the message is only output into the SqsReader logs.

ActorsLambdaFunction and MoviesLambdaFunction

Both are .NET Core 2.1 lambda functions run in AWS Lambda. They listen to changes in the Actors and Movies DynamoDB tables and, in case of new entries, they write to the LogEntries DynamoDB table. They also write SQS messages of type LogEntry, which are then read by SqsReader.
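
A minimal sketch of what such a stream-triggered handler can look like, assuming the Amazon.Lambda.Core and Amazon.Lambda.DynamoDBEvents packages (a real function also needs a Lambda JSON serializer configured); the class name is illustrative, and the actual functions are covered in detail later in the series.

using Amazon.Lambda.Core;
using Amazon.Lambda.DynamoDBEvents;

public class DynamoDbStreamHandlerSketch
{
	// Invoked by AWS Lambda for each batch of DynamoDB stream records
	public void FunctionHandler(DynamoDBEvent dynamoDbEvent, ILambdaContext context)
	{
		foreach (var record in dynamoDbEvent.Records)
		{
			// EventName is INSERT, MODIFY or REMOVE
			context.Logger.LogLine($"DynamoDB stream event: {record.EventName}");

			// This is where the real functions write a LogEntry item
			// to DynamoDB and publish a LogEntry message to SQS.
		}
	}
}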

ActorsServerlessLambda and MoviesServerlessLambda

Those are again lambda functions but are fully managed by the Serverless framework. They have a lambda application defined as well as CloudFormation templates. They expose a REST API through AWS API Gateway, by which the Actors table can be queried or a movie can be retrieved from the Movies table.

Posts in the series

This is a long series of posts describing in detail all the pieces of the architectural diagram above. Also, every aspect of the code in the repository is explained in detail in subsequent blog posts. It was a very interesting learning opportunity for me, which I would like to share. Here are the posts in the series:

Future plans

There are several topics I would like to go into as well, but there is no code for them in the GitHub repository yet. Those are:

  • AWS examples in C# – manage with Terraform
  • AWS examples in C# – use AWS Cognito for API Gateway authorizer

Conclusion

This series of posts is intended to give a basic overview of important AWS services and how to use them in C# code.

Related Posts

Read more...

Serialize and deserialize enum values to custom string in C# with Json.NET

Last Updated on by

Post summary: How to serialize and deserialize C# enum values to customs strings.

The code used for this blog post is located in dotnet.core.templates GitHub repository.

Use case description

Enums provide an efficient way to define a set of named integral constants that may be assigned to a variable. They are strongly typed values and are the preferred choice over string constants. In the given example there is an API that provides functionality to save or get movies with a title and a genre. It is not the best example, as the genre can have lots of values, but it still can be presented as an enum. In the same setup, there is a user interface that consumes the get API. In the UI it is more convenient to directly display the genre as text, so control over the naming convention happens in the backend and there is no mapping in the frontend; localization is ignored in the current example.

Serialize and deserialize

Serialization and deserialization to a custom string can be done in two steps. The first is to add an attribute to each enum value which gives the preferred string mapping.

using System.Runtime.Serialization;

public enum MovieGenre
{
	[EnumMember(Value = "Action Movie")]
	Action,
	[EnumMember(Value = "Drama Movie")]
	Drama
}

Then Json.NET is instructed to serialize/deserialize the enum value as a string with [JsonConverter(typeof(StringEnumConverter))] attribute.

using Newtonsoft.Json;
using Newtonsoft.Json.Converters;

public class Movie
{
	public string Title { get; set; }

	[JsonConverter(typeof(StringEnumConverter))]
	public MovieGenre Genre { get; set; }
}

This results in the following JSON:

{
	"Title": "Die Hard",
	"Genre": "Action Movie"
}

In the example above, if [EnumMember(Value = "Action Movie")] is not provided in the enum declaration, then the string representation of the enum value is taken:

{
	"Title": "Die Hard",
	"Genre": "Action"
}
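
For completeness, a minimal sketch of the round trip with Json.NET, using the Movie class and MovieGenre enum defined above (the example class name is arbitrary):

using System;
using Newtonsoft.Json;

public static class MovieSerializationExample
{
	public static void Main()
	{
		var movie = new Movie { Title = "Die Hard", Genre = MovieGenre.Action };

		// Serializes Genre as "Action Movie" because of the EnumMember value
		var json = JsonConvert.SerializeObject(movie, Formatting.Indented);
		Console.WriteLine(json);

		// Deserializes "Action Movie" back to MovieGenre.Action
		var restored = JsonConvert.DeserializeObject<Movie>(json);
		Console.WriteLine(restored.Genre);
	}
}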

Conclusion

Although the current example is not ideal from a use-case point of view, it shows technically how Newtonsoft Json.NET can be instructed to serialize/deserialize enum values to custom strings or just to the string representation of the enum value.

Read more...

Create .NET Core health checks with custom response payload

Last Updated on by

Post summary: How to extend custom .NET Core health checks so the response JSON provides more information.

The code used for this blog post is located in dotnet.core.templates GitHub repository.

Health checks in .NET Core

Health checks in .NET Core are a middleware that provides the possibility to report an application’s health. This allows monitoring of the application and taking corrective actions in case of issues. For example, if an application reports being unhealthy, then the load balancer can exclude it from the infrastructure and appropriate alarms can be raised. More about health checks can be read on the Health checks in ASP.NET Core page.

Adding a health check

In order to add a health check to a .NET Core application, a reference to the Microsoft.AspNetCore.Diagnostics.HealthChecks package has to be added. Health checks themselves are classes implementing the IHealthCheck interface.

public class VersionHealthCheck : IHealthCheck
{
	private readonly AppConfig _config;

	public VersionHealthCheck(IOptions<AppConfig> options)
	{
		_config = options.Value;
	}

	public Task<HealthCheckResult> CheckHealthAsync(HealthCheckContext context,
				CancellationToken cancellationToken = new CancellationToken())
	{
		return Task.FromResult(string.IsNullOrEmpty(_config.Version)
			? HealthCheckResult.Unhealthy()
			: HealthCheckResult.Healthy());
	}
}

Health checks have to be registered in Startup.cs file:

public void ConfigureServices(IServiceCollection services)
{
	services.AddHealthChecks()
		.AddCheck<VersionHealthCheck>("Version Health Check");
}

As a last step, the health checks endpoint has to be configured in the Startup.cs file:

public void Configure(IApplicationBuilder app)
{
	app.UseEndpoints(endpoints =>
	{
		endpoints.MapHealthChecks("/health");
	});
}

Now the health checks report is available at the <HOSTNAME>/health URL. If everything is good, the response is 200 OK with content Healthy. In case of issues, the response is 503 Service Unavailable with content Unhealthy.

Extend the health checks response

As stated above, health checks are mainly intended for machine usage. I have had cases in practice in which just looking at the health check allowed faster problem solving than digging into the logs. For this reason, investing in more explanatory health checks is worth it. Below is a code snippet showing how to include more information in the health check response payload. A new static class HealthCheckExtensions with a MapCustomHealthChecks method can be added.

public static class HealthCheckExtensions
{
	public static IEndpointConventionBuilder MapCustomHealthChecks(
		this IEndpointRouteBuilder endpoints, string serviceName)
	{
		return endpoints.MapHealthChecks("/health", new HealthCheckOptions
		{
			ResponseWriter = async (context, report) =>
			{
				var result = JsonConvert.SerializeObject(
					new HealthResult
					{
						Name = serviceName,
						Status = report.Status.ToString(),
						Duration = report.TotalDuration,
						Info = report.Entries.Select(e => new HealthInfo
						{
							Key = e.Key,
							Description = e.Value.Description,
							Duration = e.Value.Duration,
							Status = Enum.GetName(typeof(HealthStatus),
													e.Value.Status),
							Error = e.Value.Exception?.Message
						}).ToList()
					}, Formatting.None,
					new JsonSerializerSettings
					{
						NullValueHandling = NullValueHandling.Ignore
					});
				context.Response.ContentType = MediaTypeNames.Application.Json;
				await context.Response.WriteAsync(result);
			}
		});
	}
}

The formatting in the code above depends on two additional data classes, HealthInfo and HealthResult.

public class HealthInfo
{
	public string Key { get; set; }
	public string Description { get; set; }
	public TimeSpan Duration { get; set; }
	public string Status { get; set; }
	public string Error { get; set; }
}
public class HealthResult
{
	public string Name { get; set; }
	public string Status { get; set; }
	public TimeSpan Duration { get; set; }
	public ICollection<HealthInfo> Info { get; set; }
}

Registering the endpoint happens with the same code as in the default case, with the difference that the MapCustomHealthChecks extension method is used:

public void Configure(IApplicationBuilder app)
{
	app.UseEndpoints(endpoints =>
	{
		endpoints.MapCustomHealthChecks("Service Name");
	});
}

Now it is possible to have more elaborate health checks, which can, for example, capture an exception and return it as well.

public Task<HealthCheckResult> CheckHealthAsync(HealthCheckContext context,
		CancellationToken cancellationToken = new CancellationToken())
{
	try
	{
		var message = $"Version is healthy: {_config.Version}";
		return Task.FromResult(HealthCheckResult.Healthy(message));
	}
	catch (Exception ex)
	{
		var message = "There is an error with version health check";
		return Task.FromResult(HealthCheckResult.Unhealthy(message, ex));
	}
}

In the case of 503 Service Unavailable, the health check gives more details, which in some cases can be enough to resolve the issue without having to dig into the logs.

{
	"Name": "Service Name",
	"Status": "Unhealthy",
	"Duration": "00:00:00.0159186",
	"Info": [
		{
			"Key": "Version Health Check",
			"Description": "There is an error with version health check",
			"Duration": "00:00:00.0010564",
			"Status": "Unhealthy",
			"Error": "Exception's message text"
		}
	]
}

Conclusion

.NET Core health checks are a convenient way for automatic service monitoring and taking corrective actions. With a small effort, they can be enhanced to be useful for people trying to identify what the issues with the services are.

Related Posts

Read more...

Create project for .NET Core custom template

Last Updated on by

Post summary: How to create a custom .NET Core template, install it and create projects from it.

The code used for this blog post is located in dotnet.core.templates GitHub repository.

.NET Core

In short, .NET Core is a cross-platform development platform supporting Windows, macOS, and Linux, and can be used in device, cloud, and embedded/IoT scenarios. It is maintained by Microsoft and the .NET community on GitHub. More can be read in the .NET Core Guide.

Why .NET Core templates

When you create a new .NET Core project with the dotnet new command, it is possible to select from a list of predefined templates. This is a very handy feature, but the out-of-the-box templates are not really convenient as-is; additional changes are required afterward to make the project fit for purpose. In a situation where many projects with similar structures are created, such as many micro-services, a custom template is very helpful. Users can define a custom template and easily create new projects out of it. In the current post, I will describe how to create an elaborate custom template.

Create template

What has to be done is just to create a project and customize it according to the needs. Once the code is ready, a file named template.json located in a .template.config folder should be added. The file should conform to the http://json.schemastore.org/template JSON schema. See the file below, it is more or less self-explanatory. An important field is identity, which is basically the unique template name; it is not visible to users though. What is visible are name and shortName. ShortName is actually used when creating a project from this template: dotnet new shortName. Guids used in the template are defined in the guids section of the template.json file and they are replaced with a fresh set of guids on each project creation. More details on each and every possible option can be found on the “Runnable-Project”-Templates page.

{
	"$schema": "http://json.schemastore.org/template",
	"author": "Lyudmil Latinov",
	"classifications": [
		"Common",
		"Code"
	],
	"identity": "dotnet.core.micro.service",
	"name": ".NET Core 3.0 micro-service",
	"shortName": "microservice",
	"tags": {
		"language": "C#",
		"type": "item"
	},
	"guids": [
		"9A19103F-16F7-4668-BE54-9A1E7A4F7556"
	]
}

This is it, the template is now ready to be installed and used.

Install template and create project

The template is installed with dotnet new -i <PATH_TO_FOLDER>. Once the template is installed, the dotnet new command lists it along with all predefined templates.

Creating a project is similar to creating one from the standard templates: dotnet new <SHORT_NAME>, dotnet new microservice in this example.

Uninstalling a template is done with the dotnet new -u <PATH_TO_FOLDER> command. There is another command which uninstalls all custom templates from the system: dotnet new --debug:reinit.

Replace project name

A custom template is a very handy thing and I would like to go one step further in making this template more flexible. When a new project is created it is good to have a proper name, which is reflected in the file names, folder names, and the namespaces. For this reason, a custom replace task can be added to the template definition. A placeholder PROJECT_NAME is added to namespaces, Dockerfile, folder names, the solution file name, and .csproj file names. This is done by adding an external required parameter in the symbols section of the template.json file. On project creation, this parameter is provided with a value, which gets replaced in the file contents and file names.

{
	"$schema": "http://json.schemastore.org/template",
	...
	"symbols": {
		"ProjectName": {
			"type": "parameter",
			"replaces": "PROJECT_NAME",
			"FileRename": "PROJECT_NAME",
			"isRequired": true
		}
	}
}

When dotnet new microservice -h is executed, it outputs the possible parameters that are needed to create a project from this template:

.NET Core 3.0 micro-service (C#)
Author: Lyudmil Latinov
Options:
  -P|--ProjectName
                    string - Required

Now, the project cannot be created without providing this parameter. The command dotnet new microservice returns the error Mandatory parameter --ProjectName missing for template .NET Core 3.0 micro-service. The proper command now is dotnet new microservice --ProjectName SampleMicroservice.

Conditional files and features

So far the template is much nicer and looks more realistic. It can be further enhanced. For example, suppose there are two major types of projects needed. One option is to have two separate templates, which means maintaining two templates. Another option, in case of insignificant differences, is to have conditional functionality added based on a parameter during project creation. A boolean parameter is added into the symbols section of the template.json file. Another thing to be done is to exclude files depending on this parameter. This is done in the sources section of the template.json file.

{
	"$schema": "http://json.schemastore.org/template",
	...
	"symbols": {
		...
		"AddHealthChecks": {
			"type": "parameter",
			"datatype": "bool",
			"defaultValue": "false"
		}
	},
	"sources": [
		{
			"modifiers": [
				{
					"condition": "(!AddHealthChecks)",
					"exclude": [
						"src/**/HealthChecks/**",
						"test/**/Client/HealthCheckClient.cs",
						"test/**/Tests/HealthCheckTest.cs"
					]
				}
			]
		}
	]
}

Customize parameters switches

If you add several of those parameters, then .NET gives them automatic short names, which may not be very meaningful, so they can be customized. In the example here, which is located in the dotnet.core.templates GitHub repository, the default parameter names are (dotnet new microservice -h):

.NET Core 3.0 micro-service (C#)
Author: Lyudmil Latinov
Options:
  -P|--ProjectName
                         string - Required

  -A|--AddHealthChecks
                         bool - Optional
                         Default: false / (*) true

  -Ad|--AddSqsPublisher
                         bool - Optional
                         Default: false / (*) true

  -p:A|--AddSqsConsumer
                         bool - Optional
                         Default: false / (*) true


* Indicates the value used if the switch is provided without a value.

Short names like -p:A and -Ad do not seem very convenient. Those can be easily customized by adding a dotnetcli.host.json file into the .template.config folder:

{
	"$schema": "http://json.schemastore.org/dotnetcli.host",
	"symbolInfo": {
		"ProjectName": {
			"longName": "ProjectName",
			"shortName": "pn"
		},
		"AddHealthChecks": {
			"longName": "AddHealthChecks",
			"shortName": "ah"
		},
		"AddSqsPublisher": {
			"longName": "AddSqsPublisher",
			"shortName": "ap"
		},
		"AddSqsConsumer": {
			"longName": "AddSqsConsumer",
			"shortName": "ac"
		}
	}
}

Now the options are much more convenient:

.NET Core 3.0 micro-service (C#)
Author: Lyudmil Latinov
Options:
  -pn|--ProjectName
                         string - Required

  -ah|--AddHealthChecks
                         bool - Optional
                         Default: false / (*) true

  -ap|--AddSqsPublisher
                         bool - Optional
                         Default: false / (*) true

  -ac|--AddSqsConsumer
                         bool - Optional
                         Default: false / (*) true


* Indicates the value used if the switch is provided without a value.

Conclusion

.NET Core provides an easy and extensible way to make project templates, which are stored and maintained under version control and can be used for creating new projects. Those templates can be project-specific, product-specific, or company-specific.

Related Posts

Read more...

.NET Core code coverage on Linux with MiniCover

Last Updated on by

Post summary: How to run code coverage of your unit tests as part of your build on Linux build agents.

The code below can be found in the GitHub SampleDotNetCore2RestStub repository. In the Code coverage of .NET Core unit tests with OpenCover post, I have shown how to do code coverage with OpenCover. The commands shown in that post can be made part of your CI or CD build. There is a catch though: this works only on Windows. If your build machines are on Linux, you need another alternative. In this post, I am going to show this alternative.

MiniCover

MiniCover is a lightweight code coverage tool for .NET Core on Linux. It is at an early stage yet and there is no big community, but I really hope this is going to change soon as it looks like a very promising tool.

Include in project

In order to use MiniCover, it has to be installed as a .NET CLI Tool. This is done with the following code:

<ItemGroup>
	<DotNetCliToolReference Include="MiniCover" Version="2.0.0-ci-*" />
</ItemGroup>

In order to keep your original projects intact, the best approach is to create a tools project and add the reference to its tools.csproj file, which will look like this:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="MiniCover" Version="2.0.0-ci-*" />
  </ItemGroup>

</Project>

Commands

At this stage, the following command line options are available:

  • instrument – Instrument assemblies
  • uninstrument – Uninstrument assemblies
  • reset – Reset hits count
  • report – Outputs coverage report
  • htmlreport – Write HTML report to a folder
  • xmlreport – Write an NCover-formatted XML report to a folder

Run coverage

In case of a project structure where your code is in the src folder and your tests are in the test folder, the following bash script can be used directly. It accepts the threshold coverage percentage as a parameter; if not provided, it uses 80% by default. The script restores NuGet packages and builds the projects. It navigates to the tools folder and restores NuGet packages again. This is very important as it is the only way to get the MiniCover NuGet package. Inside the tools folder, it instruments the assemblies and resets previous statistics. The script navigates back to the root folder and runs all tests inside every project in the test folder. Afterward, the script navigates again to the tools folder and uninstruments all assemblies. Although this operation is safe, I would recommend running one more build or publish before the assemblies go into production. In the end, the script generates the reports.

if [ ! -z $1 ]; then
  if [ $1 -lt 0 ] || [ $1 -gt 100 ]; then
    echo "Threshold should be between 0 and 100"
    threshold=80
  else
    threshold=$1
  fi
else
  threshold=80
fi

dotnet restore
dotnet build

cd tools
dotnet restore

# Instrument assemblies inside 'test' folder to detect hits for source files inside 'src' folder
dotnet minicover instrument --workdir ../ --assemblies test/**/bin/**/*.dll --sources src/**/*.cs 

# Reset hits count in case minicover was run for this project
dotnet minicover reset

cd ..

for project in test/**/*.csproj; do dotnet test --no-build $project; done

cd tools

# Uninstrument assemblies, it's important if you're going to publish or deploy build outputs
dotnet minicover uninstrument --workdir ../

# Create HTML reports inside folder coverage-html
# This command returns failure if the coverage is lower than the threshold
dotnet minicover htmlreport --workdir ../ --threshold $threshold

# Print console report
# This command returns failure if the coverage is lower than the threshold
dotnet minicover report --workdir ../ --threshold $threshold

# Create NCover report
dotnet minicover xmlreport --workdir ../ --threshold $threshold

cd ..

Reports

There are 3 types of reports: Console, HTML and NCover XML.

Console report

The console report dumps results to the console and returns 1 if the given threshold is not met, which basically fails the CI/CD build. In the example below, codeCoverage.sh was called with argument 40, which means the threshold is 40%.

HTML report

The HTML report also fails the build and gives similar summary information as the console report, but it also gives detailed coverage information for each class. An example report can be found in MiniCover HTML report. I have to praise myself, as the summary file that is shown below was something I contributed, because I like the project very much.

NCover report

The NCover report creates an XML file in the NCover format. The beauty of it is that you can additionally use ReportGenerator on a Windows machine and convert the XML to a nice HTML report. Assuming ReportGenerator is extracted to your C:\, the command is shown below. The report can be found in MiniCover ReportGenerator report.

C:\ReportGenerator\ReportGenerator.exe -reports:coverage.xml -targetdir:coverage

Compare with OpenCover

If you check both the OpenCover .Net Core report and the MiniCover ReportGenerator report, you can notice some differences in the metrics. The first is that MiniCover does not support branch coverage. This is not that bad after all; if you have your code nicely indented, line coverage is sufficient. For example, if your ternary operator is not on one line but on three, and you have missed testing one of the conditions, then line coverage will state that there is an untested line. If the ternary operator is on one line though, then line coverage will miss this test problem. Another difference is Coverable lines and Covered lines. OpenCover counts opening and closing brackets as such, so its numbers are bigger. Because of this conceptual difference, the Line coverage percentage differs slightly. MiniCover (35%) is more generous and gives a higher percentage than OpenCover (33.6%).
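
To illustrate the ternary operator point, consider the two equivalent methods below (class and method names are made up for the example). With the compact form, line coverage marks the single line as covered even if tests never exercise the true branch; with the expanded form, the missed branch shows up as an uncovered line in the report.

public static class DiscountCalculator
{
	// One-line ternary: line coverage reports this line as covered
	// even if the isPremium == true case is never tested
	public static decimal GetDiscountCompact(bool isPremium)
	{
		return isPremium ? 0.2m : 0.05m;
	}

	// Same logic on three lines: a missed branch appears as an uncovered line
	public static decimal GetDiscountExpanded(bool isPremium)
	{
		return isPremium
			? 0.2m
			: 0.05m;
	}
}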

Conclusion

MiniCover is a very nice and compact tool that can be put in place during your continuous integration or continuous delivery to measure code coverage on each build. The most important advantage is that it is designed for and works on Linux.

Related Posts

Read more...

Code coverage of .NET Core unit tests with OpenCover

Last Updated on by

Post summary: Examples how to measure code coverage of .NET Core unit tests with OpenCover.

The examples below are based on the GitHub SampleDotNetCore2RestStub repository. The examples use code from the .NET Core integration testing and mock dependencies post. Those are integration tests because they test more than one application module at a time, but they are run with a unit testing framework, which is why the current post title is worded as it is.

Code coverage

This post is about how to do code coverage of .NET Core unit tests with OpenCover. Theory on what code coverage is and why it is needed can be found in the What about code coverage post.

OpenCover

OpenCover is an open-source code coverage tool for .NET 2.0 and above applications, for Windows only. You can read more details about OpenCover in the Code coverage of manual or automated tests with OpenCover for .NET applications post or you can visit the OpenCover Wiki page.

Run OpenCover

In order to make these examples work, you need to check out the SampleDotNetCore2RestStub repository to C:\ and run all commands from the project root folder C:\SampleDotNetCore2RestStub. OpenCover and ReportGenerator should be installed in C:\ as well. If you have different paths, just adjust them in the commands shown below.

C:\OpenCover\OpenCover.Console.exe `
	-target:"c:\Program Files\dotnet\dotnet.exe" `
	-targetargs:"test" `
	-output:coverage.xml `
	-oldStyle `
	-filter:"+[SampleDotNetCore2RestStub*]* -[SampleDotNetCore2RestStub*Test*]*" `
	-register:user

Enable .NET Core debug output

If you run the command above you will get the following message:

Committing…
No results, this could be for a number of reasons. The most common reasons are:
1) missing PDBs for the assemblies that match the filter please review the
output file and refer to the Usage guide (Usage.rtf) about filters.
2) the profiler may not be registered correctly, please refer to the Usage
guide and the -register switch.

Note: the error with red text shown in the image above is because with -targetargs:"test" dotnet.exe tries to run tests inside all projects, but src\SampleDotNetCore2RestStub simply does not have tests. You can refine which test project gets run by changing it to: -targetargs:"test test\SampleDotNetCore2RestStub.Integration.Test\SampleDotNetCore2RestStub.Integration.Test.csproj".

The message about no results is because debug output is not enabled for the .NET Core project and OpenCover does not have the data it needs to work with. Change the src\SampleDotNetCore2RestStub\SampleDotNetCore2RestStub.csproj file by adding <DebugType>full</DebugType>:

<PropertyGroup>
	<OutputType>Exe</OutputType>
	<TargetFramework>netcoreapp2.0</TargetFramework>
	<DebugType>full</DebugType>
</PropertyGroup>

Now running the command gives proper output:

Committing…
Visited Classes 5 of 12 (41.67)
Visited Methods 17 of 36 (47.22)
Visited Points 43 of 123 (34.96)
Visited Branches 18 of 44 (40.91)

==== Alternative Results (includes all methods including those without corresponding source) ====
Alternative Visited Classes 5 of 12 (41.67)
Alternative Visited Methods 20 of 43 (46.51)

Generate report

ReportGenerator is used to convert XML reports generated by OpenCover, PartCover, Visual Studio or NCover into human-readable reports in various formats. To generate a report, use the following command:

C:\ReportGenerator\ReportGenerator.exe `
	-reports:coverage.xml `
	-targetdir:coverage

Inspect report

The report can be found in my examples: OpenCover .Net Core report. You can see which code is covered during testing and which is not.

Conclusion

In this post, I have shown how to run code coverage with OpenCover on .NET Core unit tests.

Related Posts

Read more...

Useful .NET Core SDK CLI commands

Last Updated on by

Post summary: Some useful .NET Core SDK CLI commands.

The commands in the current post are an extraction from the Build a REST API with .NET Core 2 and run it on Docker Linux container and .NET Core integration testing and mock dependencies posts, where I have used them in a real project.

All commands

All available commands are accessed with: dotnet --help.

Initialise .NET projects

Creating new projects is done with: dotnet new [options]. I have used the following commands:

  • Create new console application project: dotnet new console -o <ProjectName>
  • Create new MS Test project: dotnet new mstest -o <ProjectName>
  • Create new solution file: dotnet new sln --name <SolutionName>

To list all available project types use: dotnet new --help.

Custom templates

.NET SDK allows you to create a custom project template and then install it into the SDK with the command: dotnet new -i <TEMPLATE_FOLDER>. Once installed, you can create a new project out of your custom template. This is valuable in big organizations where cohesion is needed between similar project types. See more about templates in the Custom templates for dotnet new article.

Manage dependencies

Two types of dependencies are available: to a NuGet package or to another .NET project.

  • Add reference to a NuGet package: dotnet add package <NuGetPackageName>
  • Add reference to another .NET project: dotnet add reference <PathToProjectFile>

Similar commands are available for removing dependencies with: dotnet remove.

Project dependencies can be shown with: dotnet list reference <PathToProjectFile>.

Actions on a project

  • Build a project: dotnet build
  • Run a project: dotnet run
  • Run tests of a project: dotnet test
  • Publish project artefacts: dotnet publish

All commands have a bunch of configuration options that can be provided. More details on each command can be obtained by adding --help at the end.

Manage solution file

  • Create new solution file: dotnet new sln --name <SolutionName>
  • Add project to solution: dotnet sln <SolutionFileName> add <PathToProjectFile>
  • Remove project from solution: dotnet sln <SolutionFileName> remove <PathToProjectFile>
  • List projects in a solution: dotnet sln <SolutionFileName> list

Conclusion

This post shows some useful .NET SDK CLI commands that make management of .NET projects easy without Visual Studio 2017. More practical examples can be found in the posts where those commands are actually used: Build a REST API with .NET Core 2 and run it on Docker Linux container and .NET Core integration testing and mock dependencies.

Related Posts

Read more...

.NET Core integration testing and mock dependencies

Last Updated on by

Post summary: How to do integration testing on .NET Core application and stub or mock some inconvenient dependencies.

The code below can be found in the GitHub SampleDotNetCore2RestStub repository. In the Build a REST API with .NET Core 2 and run it on Docker Linux container post I have shown how to create a .NET Core application. In the current post, I will show how to do integration testing on the same application. The post is for a REST API, but the principles here apply to a web UI as well; the difference is that the response will be HTML, which is slightly harder to process compared to JSON.

Refactor project structure

Currently, there is only one project created, which contains the .NET Core application. Since this is going to grow, it has to be refactored and structured properly.

  • SampleDotNetCore2RestStub folder which contains the API is moved to src folder.
  • Solution file is created with dotnet new sln --name SampleDotNetCore2RestStub. Note that the .sln extension is omitted as it is added automatically. Although everything in the example is done with open source tools, it is good to have a solution file to keep compatibility with Visual Studio 2017.
  • API project file is added to solution file with:
    dotnet sln SampleDotNetCore2RestStub.sln add src/SampleDotNetCore2RestStub/SampleDotNetCore2RestStub.csproj.
  • In order to test that the moving of files did not affect the functionality, the API can be run with: dotnet run --project src/SampleDotNetCore2RestStub/SampleDotNetCore2RestStub.csproj.

Add test project

It is time to create the integration tests project. We speak of integration tests, but they will be run with the unit testing framework MSTest. I do not have a particular preference for it; it comes by default with .NET Core, along with xUnit, and I do not want to change it.

  • Create test folder: mkdir test.
  • Navigate to it: cd test.
  • MSTest project is created with: dotnet new mstest -o SampleDotNetCore2RestStub.Integration.Test.
  • Navigate to test project: cd SampleDotNetCore2RestStub.Integration.Test.
  • Run the unit tests: dotnet test. By default, there is one dummy test that passes.
  • Go to root folder: cd .. and cd ..
  • Add test project to solution file: dotnet sln SampleDotNetCore2RestStub.sln add test/SampleDotNetCore2RestStub.Integration.Test/SampleDotNetCore2RestStub.Integration.Test.csproj.

Open with Visual Studio Code

Once refactored and opened in Visual Studio Code, the project has the following structure:

Unit vs Integration testing

I would not like to focus on theory and terminology as this post is not intended for that, but I have to do some theoretical setup before proceeding with the code. Generally speaking, the term integration testing is used in two cases. One is when different systems are interconnected together and tested; the other is when different components of one system are grouped together and tested. In the current post, with the term integration testing I will refer to the latter. In unit testing, each separate class is tested in isolation. In order to do so, all external dependencies, like a database, file system, web requests and responses, etc., are mocked. This makes tests run very fast but carries a very high risk of false positives because of the mocking. When mocking a dependency there is always an assumption about how it works and how it is being used. The mocked behavior might be significantly different from the actual one, and then the unit test is compromised. On the other hand, integration testing verifies that different parts of the application work correctly when grouped together. It is much slower than unit testing because more, and real, resources are being used. Some parts of the application still can be mocked, which can decrease execution time. In the current post, I will show how to run a full application with only the database being mocked.

The Test Host

One way to run the fully assembled application is by building and deploying it. Then the application will use real resources to work. Functional testing should also be done during testing, but it is not part of the current post. A more interesting scenario is to run the fully assembled or partially mocked application in memory, without deployment, and run tests against it. This approach has benefits: since the application is run locally, its response time is very low, which speeds up tests; some parts, like the database connection, can be mocked, which speeds up tests even further. .NET Core Test Host is a tool that can host web or API .NET Core applications, serving requests and responses. It eliminates the need for having a testing environment.

Add dependencies

In order to use the test host, a dependency to its NuGet package should be added. Navigate to test/SampleDotNetCore2RestStub.Integration.Test and add the dependency:

dotnet add package Microsoft.AspNetCore.TestHost

The SampleDotNetCore2RestStub.Integration.Test project should depend on SampleDotNetCore2RestStub in order to use its code. This is done with:

dotnet add reference ../../src/SampleDotNetCore2RestStub/SampleDotNetCore2RestStub.csproj

Create the first test

The existing UnitTest1 class will be changed to start the application inside the test host and make a request.

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Newtonsoft.Json;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	[TestClass]
	public class PersonsTest
	{
		private TestServer _server;
		private HttpClient _client;

		[TestInitialize]
		public void TestInitialize()
		{
			_server = new TestServer(new WebHostBuilder()
				.UseStartup<Startup>());
			_client = _server.CreateClient();
		}

		[TestMethod]
		public async Task GetPerson()
		{
			var response = await _client.GetAsync("/person/get/1");
			response.EnsureSuccessStatusCode();

			var result = await response.Content.ReadAsStringAsync();
			var person = JsonConvert.DeserializeObject<Person>(result);

			Assert.AreEqual("LN1", person.LastName);
		}
	}
}

TestServer uses an instance of IWebHostBuilder. Startup from UseStartup<Startup> is the same class that is used to run the application, but here it is run inside the TestServer instance. The CreateClient() method returns an instance of the standard HttpClient, with which a request to the /person/get/1 endpoint is made. EnsureSuccessStatusCode() throws an exception if the response code is not inside the 200-299 range. The response is then taken as a string and deserialized to a Person object with Newtonsoft.Json, which is now part of .NET Core.

The test can be run from the test\SampleDotNetCore2RestStub.Integration.Test folder with the command: dotnet test. If you type dotnet test from the root folder, it will search for tests inside all projects.

Debug tests in Visual Studio Code

Before proceeding any further with the code, it should be possible to debug unit tests inside VS Code. It is not as easy as with VS 2017, but it is still manageable. First, you need to run your test from the command prompt in debug mode:

set VSTEST_HOST_DEBUG=1
dotnet test

Once this is done there is a message with specific process ID:

Starting test execution, please wait...
Host debugging is enabled. Please attach debugger to testhost process to continue.
Process Id: 16032, Name: dotnet

Now, from Visual Studio Code, you have to attach to the given process, 16032 in the current example. This is done from the Debug View, then select the .NET Core Attach launch configuration. If such a configuration does not exist, add it. Running this configuration shows a list of all processes with the name dotnet. Select the proper one, 16032 in the current example.

Create PersonServiceClient and BaseTest

Tests should be easy to write, read, and maintain, thus a PersonServiceClient class is created. It exposes methods that hit the endpoints and return the result. Since testing is not only about the happy path, it should be possible to have some negative scenarios. You may want to hit the API with invalid data and verify it returns a BadRequest (400) HTTP response code, or an Unauthorized (401) HTTP response code, etc. In order to fulfill this test requirement, a separate class ApiResponse<T> is created. It stores the response code along with the response content as a string. In case the response string can be deserialized to an object of the given generic type T, it is also stored in the ApiResponse object.

The client is instantiated as a protected variable in the BaseTest constructor. PersonsTest extends BaseTest and has access to the PersonServiceClient.

ApiResponse

using System.Net;

namespace SampleDotNetCore2RestStub.Integration.Test.Client
{
	public class ApiResponse<T>
	{
		public HttpStatusCode StatusCode { get; set; }
		public T Result { get; set; }
		public string ResultAsString { get; set; }
	}
}

PersonServiceClient

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Integration.Test.Client
{
	public class PersonServiceClient
	{
		private readonly HttpClient _httpClient;

		public PersonServiceClient(HttpClient httpClient)
		{
			_httpClient = httpClient;
		}

		public async Task<ApiResponse<Person>> GetPerson(string id)
		{
			var person = await GetAsync<Person>($"/person/get/{id}");
			return person;
		}

		public async Task<ApiResponse<List<Person>>> GetPersons()
		{
			var persons = await GetAsync<List<Person>>("/person/all");
			return persons;
		}

		public async Task<ApiResponse<string>> Version()
		{
			var version = await GetAsync<string>("api/version");
			return version;
		}

		private async Task<ApiResponse<T>> GetAsync<T>(string path)
		{
			var response = await _httpClient.GetAsync(path);
			var value = await response.Content.ReadAsStringAsync();
			var result = new ApiResponse<T>
			{
				StatusCode = response.StatusCode,
				ResultAsString = value
			};

			try
			{
				result.Result = JsonConvert.DeserializeObject<T>(value);
			}
			catch (Exception)
			{
				// Nothing to do
			}

			return result;
		}
	}
}

BaseTest

using System.Net.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using SampleDotNetCore2RestStub.Integration.Test.Client;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	public abstract class BaseTest
	{
		protected PersonServiceClient PersonServiceClient;

		public BaseTest()
		{
			var server = new TestServer(new WebHostBuilder()
				.UseStartup<Startup>());
			var httpClient = server.CreateClient();
			PersonServiceClient = new PersonServiceClient(httpClient);
		}
	}
}

PersonsTest

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Newtonsoft.Json;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	[TestClass]
	public class PersonsTest : BaseTest
	{
		[TestMethod]
		public async Task GetPerson()
		{
			var response = await PersonServiceClient.GetPerson("1");

			Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
			Assert.AreEqual("LN1", response.Result.LastName);
		}

		[TestMethod]
		public async Task GetPersons()
		{
			var response = await PersonServiceClient.GetPersons();

			Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
			Assert.AreEqual(4, response.Result.Count);
			Assert.AreEqual("LN1", response.Result[0].LastName);
		}
    }
}

Stub the database

So far there is an integration test that starts the application with its actual external dependencies and makes requests against it. The current API service does not connect to a real database, because that would make running the API harder. Instead, there is a fake PersonRepository which stores data in memory. In reality, the repository would connect to a database with a given connection string in appsettings.json and perform CRUD operations on it. Database operations might slow down the application response time, or the test might not have full control over the data in the database, which makes testing harder. In order to solve those two issues, the database can be stubbed to serve test data. Actually, anything that is not convenient can be stubbed with the examples given below.

In order to make stubbing possible and to keep the application structure intact, Startup has to be changed. Registering PersonRepository in the .NET Core IoC container is extracted to a separate virtual method that can be overridden later. All dependencies that are to be stubbed or mocked can be extracted to such methods. Then StartupStub overrides this method and registers the stubbed repository PersonRepositoryStub. In it, all database operations are substituted with an in-memory equivalent, hence skipping the database calls. It might not be a full and accurate substitution, as long as it serves your testing purpose. After all, PersonRepositoryStub will be used only for testing. BaseTest should be changed to start the application with StartupStub instead of Startup. Finally, PersonsTest should be changed to assert on the new data that is configured in PersonRepositoryStub.

Startup

public void ConfigureServices(IServiceCollection services)
{
	services.AddMvc();
	services.Configure<AppConfig>(Configuration);
	services.AddScoped<AuthenticationFilterAttribute>();

	ConfigureRepositories(services);
}

public virtual void ConfigureRepositories(IServiceCollection services)
{
	services.AddSingleton<IPersonRepository, PersonRepository>();
}

StartupStub

using Microsoft.Extensions.DependencyInjection;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Integration.Test.Mocks
{
	public class StartupStub : Startup
	{
		public override void ConfigureRepositories(IServiceCollection services)
		{
			services.AddSingleton<IPersonRepository, PersonRepositoryStub>();
		}
	}
}

PersonRepositoryStub

using System.Collections.Generic;
using System.Linq;
using SampleDotNetCore2RestStub.Models;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Integration.Test.Mocks
{
	public class PersonRepositoryStub : IPersonRepository
	{
		private Dictionary<int, Person> _persons 
					= new Dictionary<int, Person>();

		public PersonRepositoryStub()
		{
			_persons.Add(1, new Person
			{
				Id = 1,
				FirstName = "Stubed FN1",
				LastName = "Stubed LN1",
				Email = "stubed.email1@email.na"
			});
		}

		public Person GetById(int id)
		{
			return _persons[id];
		}

		public List<Person> GetAll()
		{
			return _persons.Values.ToList();
		}

		public int GetCount()
		{
			return _persons.Count();
		}

		public void Remove()
		{
			if (_persons.Keys.Any())
			{
				_persons.Remove(_persons.Keys.Last());
			}
		}

		public string Save(Person person)
		{
			if (_persons.ContainsKey(person.Id))
			{
				_persons[person.Id] = person;
				return "Updated Person with id=" + person.Id;
			}
			else
			{
				_persons.Add(person.Id, person);
				return "Added Person with id=" + person.Id;
			}
		}
	}
}

BaseTest

using System.Net.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using SampleDotNetCore2RestStub.Integration.Test.Client;
using SampleDotNetCore2RestStub.Integration.Test.Mocks;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	public abstract class BaseTest
	{
		protected PersonServiceClient PersonServiceClient;

		public BaseTest()
		{
			var server = new TestServer(new WebHostBuilder()
				.UseStartup<StartupStub>());
			var httpClient = server.CreateClient();
			PersonServiceClient = new PersonServiceClient(httpClient);
		}
	}
}

PersonsTest

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Newtonsoft.Json;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	[TestClass]
	public class PersonsTest : BaseTest
	{
		[TestMethod]
		public async Task GetPerson()
		{
			var response = await PersonServiceClient.GetPerson("1");

			Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
			Assert.AreEqual("Stubed LN1", response.Result.LastName);
		}

		[TestMethod]
		public async Task GetPersons()
		{
			var response = await PersonServiceClient.GetPersons();

			Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
			Assert.AreEqual(1, response.Result.Count);
			Assert.AreEqual("Stubed LN1", response.Result[0].LastName);
		}
	}
}

Mock the database

Stubbing is an option, but mocking is much better as you have direct control over the mock itself. The most famous .NET mocking framework is Moq. It is added to the project with the command:

dotnet add package Moq

StartupMock extends Startup and overrides its ConfigureRepositories method. It registers an instance of IPersonRepository which is injected through its constructor. BaseTest is changed to use StartupMock in the UseStartup method. The repository mock is instantiated with PersonRepositoryMock = new Mock<IPersonRepository>(). It is injected into the StartupMock constructor with ConfigureServices(services => services.AddSingleton(PersonRepositoryMock.Object)). This is how the mock instance is registered into the IoC container of the .NET Core application that is being tested. Once the mock instance is registered, it can be controlled. In BaseTest it is reset to defaults after each test with the BaseTearDown method. It is run after each test because of the [TestCleanup] MSTest attribute. Inside it, PersonRepositoryMock.Reset() resets the mock state.

Test specific setup can be done for each test. For example, GetPerson_ReturnsCorrectResult has the following setup: PersonRepositoryMock.Setup(x => x.GetById(It.IsAny<int>())).Returns(_person); That means that when the mock’s GetById method is called with any int value, the _person object is returned. Another example is the GetPerson_ThrowsException test. When the mock’s GetById is called, an InvalidOperationException is thrown. In this way, you can test exception handling, which is missing in the current demo application. The exception is not that easy to reproduce if you are using repository stubbing.

StartupMock

using Microsoft.Extensions.DependencyInjection;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Integration.Test.Mocks
{
	public class StartupMock : Startup
	{
		private IPersonRepository _personRepository;
		
		public StartupMock(IPersonRepository personRepository)
		{
			_personRepository = personRepository;
		}

		public override void ConfigureRepositories(IServiceCollection services)
		{
			services.AddSingleton(_personRepository);
		}
	}
}

BaseTest

using System.Net.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.Extensions.DependencyInjection;
using Moq;
using SampleDotNetCore2RestStub.Integration.Test.Client;
using SampleDotNetCore2RestStub.Integration.Test.Mocks;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	public abstract class BaseTest
	{
		protected PersonServiceClient PersonServiceClient;
		protected Mock<IPersonRepository> PersonRepositoryMock;

		public BaseTest()
		{
			PersonRepositoryMock = new Mock<IPersonRepository>();

			var server = new TestServer(new WebHostBuilder()
				.UseStartup<StartupMock>()
				.ConfigureServices(services =>
				{
					services.AddSingleton(PersonRepositoryMock.Object);
				}));

			var httpClient = server.CreateClient();
			PersonServiceClient = new PersonServiceClient(httpClient);
		}

		[TestCleanup]
		public void BaseTearDown()
		{
			PersonRepositoryMock.Reset();
		}
	}
}

PersonsTest

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.TestHost;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;
using Newtonsoft.Json;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Integration.Test
{
	[TestClass]
	public class PersonsTest : BaseTest
	{
		private readonly Person _person = new Person
		{
			Id = 1,
			FirstName = "Mocked FN1",
			LastName = "Mocked LN1",
			Email = "mocked.email1@email.na"
		};

		[TestMethod]
		public async Task GetPerson_ReturnsCorrectResult()
		{
			PersonRepositoryMock.Setup(x => x.GetById(It.IsAny<int>()))
				.Returns(_person);

			var response = await PersonServiceClient.GetPerson("1");

			Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
			Assert.AreEqual("Mocked LN1", response.Result.LastName);
		}

		[TestMethod]
		[ExpectedException(typeof(InvalidOperationException))]
		public async Task GetPerson_ThrowsException()
		{
			PersonRepositoryMock.Setup(x => x.GetById(It.IsAny<int>()))
				.Throws(new InvalidOperationException());

			var result = await PersonServiceClient.GetPerson("1");
		}

		[TestMethod]
		public async Task GetPersons()
		{
			PersonRepositoryMock.Setup(x => x.GetAll())
				.Returns(new List<Person> { _person });

			var response = await PersonServiceClient.GetPersons();

			Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
			Assert.AreEqual(1, response.Result.Count);
			Assert.AreEqual("Mocked LN1", response.Result[0].LastName);
		}
	}
}

Nicer database mock

As of version 2.1.0 of Microsoft.AspNetCore.TestHost, which is currently in pre-release, there is a method called ConfigureTestServices, which saves us from having a separate StartupMock class. You can directly inject your mocks with the following code:

var server = new TestServer(new WebHostBuilder()
	.UseStartup<Startup>()
	.ConfigureTestServices(services =>
	{
		services.AddSingleton(PersonRepositoryMock.Object);
	}));

Conclusion

In the current post, I have shown how to do integration testing of .NET Core applications. This is a very convenient approach which eliminates some of the disadvantages of stubbing or mocking all dependencies in unit testing. Because all dependencies are used, integration testing can be much slower; this can be improved by mocking some of them. Integration testing is not a substitute for unit testing or functional testing, but it is a good approach in your testing portfolio that should be considered.


Build a REST API with .NET Core 2 and run it on Docker Linux container

Last Updated on by

Post summary: Code examples showing how to create a RESTful API with .NET Core 2.0 and then run it in a Docker Linux container.

The code below can be found in the GitHub SampleDotNetCore2RestStub repository. The current post shows a sample application that can be a very good foundation for a real production application, and the project can easily be used as a template for a real API service.

Microsoft and open source

I was doing Java for about 2 years and got back to .NET six months ago. Recently we had to do a project in .NET Core 2.0, a technology I had not heard of before. I was truly amazed how much Microsoft has embraced open source. .NET can now be developed and even run on Linux. This definitely makes it really competitive with Java, whose main advantage used to be its multi-platform ability. Another benefit is that the documentation is very extensive and there is a huge community out there, which makes solving issues fast and easy.

.NET Core

In short, .NET Core is a cross-platform development platform supporting Windows, macOS, and Linux, and can be used in device, cloud, and embedded/IoT scenarios. It is maintained by Microsoft and the .NET community on GitHub. More can be read in the .NET Core Guide.

.NET Core 2.0

The special thing about .NET Core 2.0 is the implementation of .NET Standard 2.0. This makes it possible to use almost 70% of the already existing NuGet packages, which is a big step forward and eases the development of .NET applications thanks to reusability.

Create simple .NET Core project

Creating a default .NET Core console application is really simple:

  1. Download and install the .NET Core SDK. For Windows and macOS there are installers available. For Linux it depends on the distribution used, see more in the .NET Core Linux installation guide.
  2. Create an application with the following command: dotnet new console -o ProjectName. Option -o specifies the output folder to be created, which also becomes the project name. If -o is omitted, the project is created in the current folder with the current folder's name.
  3. Run the newly created application with: dotnet run. The generated entry point is sketched below.
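
For reference, the entry point produced by the console template looks roughly like the sketch below; ProjectName stands for the hypothetical project name, and the exact template output may differ slightly between SDK versions.

using System;

namespace ProjectName
{
	class Program
	{
		static void Main(string[] args)
		{
			// The default template simply prints a greeting;
			// dotnet run builds and executes this entry point.
			Console.WriteLine("Hello World!");
		}
	}
}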

Using Visual Studio Code

Once the project is created, it can be developed in any text editor. The most convenient is Visual Studio 2017 because it provides lots of tools that make development very fast and efficient. In this tutorial, I will be using Visual Studio Code – an open-source multi-platform editor maintained by Microsoft. I admit it is much harder to work with than Visual Studio 2017, but it is free and multi-platform. Once the project folder is imported, hitting Ctrl+F5 runs the project.

ASP.NET Core MVC

ASP.NET Core MVC provides features to build web APIs or web UIs, and it has to be used in order to continue with the current example. Dependencies to its NuGet packages are added with the following commands:

dotnet add package Microsoft.AspNetCore
dotnet add package Microsoft.AspNetCore.All

Create REST API

After the project structure is done, it is time to add the classes needed to make the REST API. The functionality is very similar to the one described in the Build a RESTful stub server with Dropwizard post. There is a Person API which can retrieve, save, or delete persons. They are kept in an in-memory data structure which mimics a DB layer. The following classes are needed:

  • PersonController – a controller that exposes the API endpoints. By extending the Controller class, the runtime makes all endpoints available as long as they have proper routing. In the current example, routing is done inside action attributes such as [HttpGet("person/get/{id}")]. There are different routing options described in the extensive Routing to Controller Actions documentation. Adding a person is done with POST: [HttpPost("person/save")]. The important bit here is the [FromBody] attribute, which takes the HTTP body and deserializes it to a Person object.
  • Person – the data model class with properties.
  • PersonRepository – an in-memory DB abstraction that keeps the data in a Dictionary. In reality, there would be a DB layer responsible for managing the data.
  • Startup – the class with the services configuration. Both the ConfigureServices and Configure methods are called behind the scenes by the runtime. Any configuration needed goes into those two methods. The current configuration adds MVC to the services and instructs the application to use it. This is not really the Model View Controller pattern, but it is what is needed to enable controllers and get the API running.
  • Program – the main program entry point where the web host is built and started. It uses Startup.cs to run the configurations. More details on WebHost can be found in Hosting in ASP.NET Core. That article also shows how external configuration is managed, something that will be presented later in the current post.

PersonController

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using SampleDotNetCore2RestStub.Models;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Controllers
{
	public class PersonController : Controller
	{
		[HttpGet("person/get/{id}")]
		public Person GetPerson(int id)
		{
			return PersonRepository.GetById(id);
		}

		[HttpGet("person/remove")]
		public string RemovePerson()
		{
			PersonRepository.Remove();
			return "Last person remove. Total count: " 
						+ PersonRepository.GetCount();
		}

		[HttpGet("person/all")]
		public List<Person> GetPersons()
		{
			return PersonRepository.GetAll();
		}

		[HttpPost("person/save")]
		public string AddPerson([FromBody]Person person)
		{
			return PersonRepository.Save(person);
		}
	}
}

Person

namespace SampleDotNetCore2RestStub.Models
{
	public class Person
	{
		public int Id { get; set; }
		public string FirstName { get; set; }
		public string LastName { get; set; }
		public string Email { get; set; }
	}
}

PersonRepository

using System.Collections.Generic;
using System.Linq;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Repositories
{
	public class PersonRepository
	{
		private static Dictionary<int, Person> PERSONS 
								= new Dictionary<int, Person>();

		static PersonRepository()
		{
			PERSONS.Add(1, new Person
			{
				Id = 1,
				FirstName = "FN1",
				LastName = "LN1",
				Email = "email1@email.na"
			});
			PERSONS.Add(2, new Person
			{
				Id = 2,
				FirstName = "FN2",
				LastName = "LN2",
				Email = "email2@email.na"
			});
		}

		public static Person GetById(int id)
		{
			return PERSONS[id];
		}

		public static List<Person> GetAll()
		{
			return PERSONS.Values.ToList();
		}

		public static int GetCount()
		{
			return PERSONS.Count();
		}

		public static void Remove()
		{
			if (PERSONS.Keys.Any())
			{
				PERSONS.Remove(PERSONS.Keys.Last());
			}
		}

		public static string Save(Person person)
		{
			string result;
			if (PERSONS.ContainsKey(person.Id))
			{
				// Overwrite the existing entry; calling Add with an
				// existing key would throw an ArgumentException.
				PERSONS[person.Id] = person;
				result = "Updated Person with id=" + person.Id;
			}
			else
			{
				PERSONS.Add(person.Id, person);
				result = "Added Person with id=" + person.Id;
			}
			return result;
		}
	}
}

Startup

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;

namespace SampleDotNetCore2RestStub
{
	public class Startup
	{
		public Startup(IConfiguration configuration)
		{
			Configuration = configuration;
		}

		public IConfiguration Configuration { get; }

		public void ConfigureServices(IServiceCollection services)
		{
			services.AddMvc();
		}

		public void Configure(IApplicationBuilder app,
					IHostingEnvironment env)
		{
			app.UseMvc();
		}
	}
}

Program

using System;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

namespace SampleDotNetCore2RestStub
{
	public class Program
	{
		public static void Main(string[] args)
		{
			BuildWebHost(args).Run();
		}

		public static IWebHost BuildWebHost(string[] args) =>
			WebHost.CreateDefaultBuilder(args)
				.UseStartup<Startup>()
				.Build();
	}
}

External configuration

The service so far is pretty much useless as it does not give an opportunity for external configuration. Adding external configuration consists of adding and changing the following files:

    • VersionController – a controller that actually shows the full working configuration. Routing in this controller is handled by [Route("api/[controller]")]. This exposes the /api/version endpoint because [controller] is a template that stands for the controller name. The controller constructor takes an IOptions object and extracts its Value. The actual object value is injected in Startup.cs.
    • appsettings.json – a JSON file with the application configuration.
    • AppConfig – a data model class that represents the JSON configuration as an object.
    • Startup – a change is needed to read the appsettings.json file and bind it to an AppConfig object. The configuration is read with var configurationBuilder = new ConfigurationBuilder().AddJsonFile("appsettings.json", false, true) and then saved internally with Configuration = configurationBuilder.Build(). The JSON configuration is bound to an AppConfig object with the following line: services.Configure<AppConfig>(Configuration).
    • SampleDotNetCore2RestStub.csproj – a change is needed in the project file to instruct the build process to copy appsettings.json to the output folder. This is where VS 2017 makes it much easier, as it exposes a property to change; with VS Code you have to edit the csproj XML.

VersionController

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

namespace SampleDotNetCore2RestStub.Controllers
{
	[Route("api/[controller]")]
	public class VersionController : Controller
	{
		private readonly AppConfig _config;

		public VersionController(IOptions<AppConfig> options)
		{
			_config = options.Value;
		}

		[HttpGet]
		public string Version()
		{
			return _config.Version;
		}
	}
}

appsettings.json

{
	"Version": "1.0"
}

AppConfig

namespace SampleDotNetCore2RestStub
{
	public class AppConfig
	{
		public string Version { get; set; }
	}
}

Startup

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;

namespace SampleDotNetCore2RestStub
{
	public class Startup
	{
		public Startup()
		{
			var configurationBuilder = new ConfigurationBuilder()
				.AddJsonFile("appsettings.json", false, true);

			Configuration = configurationBuilder.Build();
		}

		public IConfiguration Configuration { get; }

		public void ConfigureServices(IServiceCollection services)
		{
			services.AddMvc();
			services.Configure<AppConfig>(Configuration);
		}

		public void Configure(IApplicationBuilder app, 
					IHostingEnvironment env)
		{
			app.UseMvc();
		}
	}
}

csproj

<ItemGroup>
	<None Include="appsettings.json" CopyToOutputDirectory="Always" />
</ItemGroup>

Request filtering

An almost mandatory feature is to have some kind of filtering on the request. The current example provides a very basic implementation of an authentication filter achieved with an attribute. The following files are needed:

  • SecurePersonController – a controller that demonstrates the filtering. The controller is no different from the others discussed above. The important bit is [ServiceFilter(typeof(AuthenticationFilterAttribute))], which assigns AuthenticationFilterAttribute to the current controller.
  • AuthenticationFilterAttribute – a very basic implementation to illustrate how it works. The request headers are extracted from HttpContext and checked for the existence of Authorization. If it is not found, an Exception is thrown. In the next section, I will show how to handle this exception more gracefully.
  • Startup – AuthenticationFilterAttribute is registered with the runtime via services.AddScoped<AuthenticationFilterAttribute>(). The .NET Core dependency injection mechanism is used here, which I describe in more detail in a separate section below.

SecurePersonController

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using SampleDotNetCore2RestStub.Attributes;
using SampleDotNetCore2RestStub.Models;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Controllers
{
	[ServiceFilter(typeof(AuthenticationFilterAttribute))]
	public class SecurePersonController : Controller
	{
		[HttpGet("secure/person/all")]
		public List<Person> GetPersons()
		{
			return PersonRepository.GetAll();
		}
	}
}

AuthenticationFilterAttribute

using System;
using System.Linq;
using Microsoft.AspNetCore.Mvc.Filters;

namespace SampleDotNetCore2RestStub.Attributes
{
	public class AuthenticationFilterAttribute : ActionFilterAttribute
	{
		public override void OnActionExecuting(ActionExecutingContext ctx)
		{
			string authKey = ctx.HttpContext.Request
					.Headers["Authorization"].SingleOrDefault();

			if (string.IsNullOrWhiteSpace(authKey))
				throw new Exception();
		}
	}
}

Startup

public void ConfigureServices(IServiceCollection services)
{
	services.AddMvc();
	services.Configure<AppConfig>(Configuration);
	services.AddScoped<AuthenticationFilterAttribute>();
}

If the endpoint /secure/person/all is queried without an Authorization header, the application responds with 500 Internal Server Error. If an Authorization header is present with any value, all persons are retrieved.
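
As a quick illustration, below is a minimal sketch of how the secure endpoint could be exercised with HttpClient. The base address http://localhost:5000 and the demo class are assumptions used only for illustration; they depend on how and where the application is hosted.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class SecureEndpointDemo
{
	public static async Task RunAsync()
	{
		using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") })
		{
			// Without an Authorization header the filter throws,
			// which currently surfaces as 500 Internal Server Error.
			var unauthorized = await client.GetAsync("/secure/person/all");
			Console.WriteLine(unauthorized.StatusCode);

			// With any Authorization value the request passes the filter
			// and all persons are returned as JSON.
			client.DefaultRequestHeaders.Add("Authorization", "any-value");
			var authorized = await client.GetAsync("/secure/person/all");
			Console.WriteLine(await authorized.Content.ReadAsStringAsync());
		}
	}
}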

Middleware

Middleware is software that is assembled into an application pipeline to handle requests and responses. Each component chooses whether to pass the request to the next component in the pipeline or to perform work before that. More on middleware can be found in ASP.NET Core Middleware Fundamentals. In the current example, middleware is used to handle exceptions better. In the previous section, AuthenticationFilterAttribute was throwing an exception which was transformed into 500 Internal Server Error, which is not pretty. When the request is not authorized, the application should return 401 Unauthorized. In order to do this, the following files are needed:

  • HttpException – a custom exception which is then caught and processed in HttpExceptionMiddleware.
  • HttpExceptionMiddleware – this is where the handling happens. The code checks for the custom HttpException and, if such is thrown, the pipeline changes the HttpContext.Response object with the proper values.
  • AuthenticationFilterAttribute – instead of a plain Exception, the filter attribute throws new HttpException(HttpStatusCode.Unauthorized). This way the middleware gets invoked.
  • Startup – the middleware gets registered here with app.UseMiddleware<HttpExceptionMiddleware>(). It is extremely important that this comes before app.UseMvc(), otherwise it will not work.

HttpException

using System;
using System.Net;

namespace SampleDotNetCore2RestStub.Exceptions
{
	public class HttpException : Exception
	{
		public int StatusCode { get; }

		public HttpException(HttpStatusCode httpStatusCode)
			: base(httpStatusCode.ToString())
		{
			this.StatusCode = (int)httpStatusCode;
		}
	}
}

HttpExceptionMiddleware

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Features;
using SampleDotNetCore2RestStub.Exceptions;

namespace SampleDotNetCore2RestStub.Middleware
{
	public class HttpExceptionMiddleware
	{
		private readonly RequestDelegate _next;

		public HttpExceptionMiddleware(RequestDelegate next)
		{
			_next = next;
		}

		public async Task Invoke(HttpContext context)
		{
			try
			{
				await _next.Invoke(context);
			}
			catch (HttpException httpException)
			{
				context.Response.StatusCode = httpException.StatusCode;
				var feature = context.Features.Get<IHttpResponseFeature>();
				feature.ReasonPhrase = httpException.Message;
			}
		}
	}
}

AuthenticationFilterAttribute

public override void OnActionExecuting(ActionExecutingContext context)
{
	string authKey = context.HttpContext.Request
			.Headers["Authorization"].SingleOrDefault();

	if (string.IsNullOrWhiteSpace(authKey))
		throw new HttpException(HttpStatusCode.Unauthorized);
}

Startup

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
	app.UseMiddleware<HttpExceptionMiddleware>();
	app.UseMvc();
}

Dependency Injection

So far there is a running service with basic functionality. It is missing a very important bit though, something that should have been considered and added earlier. Actually, it was added, but only when registering AuthenticationFilterAttribute; here I will go into more detail. Dependency injection (DI) is a technique for achieving loose coupling between objects and their dependencies. Rather than directly instantiating an object or using static references, the objects a class needs are provided to the class in some fashion. ASP.NET Core provides its own dependency injection mechanism, read more in Introduction to Dependency Injection in ASP.NET Core. The code will now get refactored to match this pattern.

  • IPersonRepository – all database operations are declared in this interface.
  • PersonRepository – implements all methods of the IPersonRepository interface. It still does not have real interaction with a database; the data is kept in a dictionary. The refactoring removes all static methods, so in order to use this class you need an instance of it. Sample data is populated on object creation in its constructor.
  • SecurePersonController – an instance of an implementation of IPersonRepository is passed through the constructor and used internally. By using interfaces, a level of abstraction is achieved where multiple implementations may be used for the same interface.
  • PersonController – same as SecurePersonController.
  • Startup – this is where DI is used to register PersonRepository as the implementation of IPersonRepository: services.AddSingleton<IPersonRepository, PersonRepository>().

Three different object lifetime scopes are available in .NET Core DI. It is important to know the difference in order to use them properly. If object creation is an expensive operation, misuse of the DI lifetime scopes can be crucial for performance (see the sketch after this list):

  • AddSingleton – only one instance is created for the whole application. In the example above, PersonRepository needs a single instance because the sample data is initialized in its constructor.
  • AddScoped – one instance is created per HTTP request scope.
  • AddTransient – an instance is created every time it is needed. Let us say there are 3 places where an object is needed and an HTTP request comes into the application. AddTransient will create 3 different objects, while AddScoped will create just one that is used for the current HTTP request scope.
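
To make the difference concrete, below is a minimal ConfigureServices sketch that registers all three lifetimes side by side. ICacheProvider, IRequestContext, IEmailBuilder and their implementations are hypothetical types used only for illustration.

public void ConfigureServices(IServiceCollection services)
{
	// One shared instance for the whole application lifetime.
	services.AddSingleton<ICacheProvider, MemoryCacheProvider>();

	// One instance per HTTP request; every consumer within the same
	// request receives the same object.
	services.AddScoped<IRequestContext, RequestContext>();

	// A new instance is created every time the dependency is resolved,
	// even within a single HTTP request.
	services.AddTransient<IEmailBuilder, EmailBuilder>();
}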

IPersonRepository

using System.Collections.Generic;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Repositories
{
	public interface IPersonRepository
	{
		Person GetById(int id);
		List<Person> GetAll();
		int GetCount();
		void Remove();
		string Save(Person person);
	}
}

PersonRepository

using System.Collections.Generic;
using System.Linq;
using SampleDotNetCore2RestStub.Models;

namespace SampleDotNetCore2RestStub.Repositories
{
	public class PersonRepository : IPersonRepository
	{
		private Dictionary<int, Person> _persons 
						= new Dictionary<int, Person>();

		public PersonRepository()
		{
			_persons.Add(1, new Person
			{
				Id = 1,
				FirstName = "FN1",
				LastName = "LN1",
				Email = "email1@email.na"
			});
			_persons.Add(2, new Person
			{
				Id = 2,
				FirstName = "FN2",
				LastName = "LN2",
				Email = "email2@email.na"
			});
		}

		public Person GetById(int id)
		{
			return _persons[id];
		}

		public List<Person> GetAll()
		{
			return _persons.Values.ToList();
		}

		public int GetCount()
		{
			return _persons.Count();
		}

		public void Remove()
		{
			if (_persons.Keys.Any())
			{
				_persons.Remove(_persons.Keys.Last());
			}
		}

		public string Save(Person person)
		{
			if (_persons.ContainsKey(person.Id))
			{
				_persons[person.Id] = person;
				return "Updated Person with id=" + person.Id;
			}
			else
			{
				_persons.Add(person.Id, person);
				return "Added Person with id=" + person.Id;
			}
		}
	}
}

SecurePersonController

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using SampleDotNetCore2RestStub.Attributes;
using SampleDotNetCore2RestStub.Models;
using SampleDotNetCore2RestStub.Repositories;

namespace SampleDotNetCore2RestStub.Controllers
{
	[ServiceFilter(typeof(AuthenticationFilterAttribute))]
	public class SecurePersonController : Controller
	{
		private readonly IPersonRepository _personRepository;

		public SecurePersonController(IPersonRepository personRepository)
		{
			_personRepository = personRepository;
		}

		[HttpGet("secure/person/all")]
		public List<Person> GetPersons()
		{
			return _personRepository.GetAll();
		}
	}
}

Startup

public void ConfigureServices(IServiceCollection services)
{
	services.AddMvc();
	services.Configure<AppConfig>(Configuration);
	services.AddScoped<AuthenticationFilterAttribute>();
	services.AddSingleton<IPersonRepository, PersonRepository>();
}

Docker file

The Docker file that packs the application is shown below:

FROM microsoft/dotnet:2.0-sdk
COPY pub/ /root/
WORKDIR /root/
ENV ASPNETCORE_URLS="http://*:80"
EXPOSE 80/tcp
ENTRYPOINT ["dotnet", "SampleDotNetCore2RestStub.dll"]

The base image used is microsoft/dotnet:2.0-sdk. Everything from the pub folder is copied into the container's root folder. ASPNETCORE_URLS is used to set the URLs that the server listens on; the current config runs and exposes the application at port 80 in the container. ENTRYPOINT configures the command that is run when the container is started.

Build, package and run Docker

The application is built and published in Release mode into the pub folder with the following command:

dotnet publish --configuration=Release -o pub

The Docker image is built and tagged netcore-rest with the following command:

docker build . -t netcore-rest

The Docker container is run, mapping port 80 in the container to port 9000 on the host, with the following command:

docker run -e Version=1.1 -p 9000:80 netcore-rest

Notice the -e Version=1.1, which sets an environment variable to be used inside the container. The intention is to use this variable in the application. This is enabled by modifying the Startup.cs file and adding AddEnvironmentVariables():

public Startup()
{
	var configurationBuilder = new ConfigurationBuilder()
		.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
		.AddEnvironmentVariables();

	Configuration = configurationBuilder.Build();
}

If /api/version is invoked now, it returns 1.1.

Docker optimisation

When the container is packed with microsoft/dotnet:2.0-sdk, it gets to a size of 1.7GB, which is quite a lot. There is a much leaner image, microsoft/dotnet:2.0-runtime, but it requires all runtime assemblies to be present in the pub folder. This can be done by changing the csproj file and adding PublishWithAspNetCoreTargetManifest = false:

<PropertyGroup>
	<OutputType>Exe</OutputType>
	<TargetFramework>netcoreapp2.0</TargetFramework>
	<PublishWithAspNetCoreTargetManifest>false</PublishWithAspNetCoreTargetManifest> 
</PropertyGroup>

This makes the pub folder about 37MB, while the container size becomes 258MB. The problem with this proposal is that it might not be very reliable, as some assemblies might not get copied or might not be the correct version.

Since Docker keeps layers in the repository, the proposed optimization might turn out not to be an actual optimization. It can consume much more space in the repository, because the layer that changes and is always saved is 258MB, while the layers with the OS rarely change, if they change at all.

Testing

How the given application can be integration tested is described in the .NET Core integration testing and mock dependencies post.

Conclusion

In the current tutorial, I have shown how to create an API from scratch with the .NET Core 2.0 SDK on any platform. It is very easy to run a .NET Core app and even run it in Docker with a Linux container.
