Distributed system observability: Instrument React application with OpenTelemetry

Post summary: Create a React web application using the Material UI design system and instrument the application with OpenTelemetry.

This post is part of the Distributed system observability: complete end-to-end example series. The code used for this series of blog posts is located in the selenium-observability-java GitHub repository.

React

React is a JavaScript library for building user interfaces.

Create React App

Create React App provides a simple way to create React applications from scratch. It also creates and abstracts the whole toolchain needed to develop JavaScript applications, such as webpack and Babel, so the user does not need to bother with configuring them. The application is created with the following command: create-react-app my-app --template typescript.

Project structure

From the projects I have worked on professionally, I am used to a specific project folder structure:

  • src/components – re-usable components, building blocks, used across the application
  • src/containers – components used to build the application, e.g. pages
  • src/helpers – functionality not related to the presentation logic
  • src/stylesheets – CSS files, which hold common and re-usable functionality
  • src/types – TypeScript data models, e.g. models used with API communication

Material UI

Material UI is a React design system that provides ready-to-use components. An official example is shown in create-react-app-with-typescript.

TypeScript

TypeScript is a programming language developed and maintained by Microsoft. It is a strict syntactical superset of JavaScript and adds optional static typing to the language. TypeScript is designed for the development of large applications and transcompiles to JavaScript. TypeScript brings some overhead, but for me this is justified. Because of the static typing, errors are shown at compile time, not at runtime. Also, IntelliSense, the intelligent code completion, kicks in and is of great help.
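
A tiny illustration of the compile-time check (the type and values below are hypothetical, not from the example project):

interface IGreeting {
  name: string
}

const greeting: IGreeting = { name: 'John' }

// greeting.naem does not compile - TypeScript reports the typo at compile time
console.log(greeting.name)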

Code examples

The main file is src/index.tsx. It loads the App component, which uses React Router to define the path handling and loads different components based on the path. In the current example, the /about path is covered by just a very simple page, and all other paths load PersonsPage.

index.tsx

import ReactDOM from 'react-dom'

import App from 'containers/App'
import reportWebVitals from './reportWebVitals'
import './stylesheets/base.scss'

ReactDOM.render(<App />, document.querySelector('#root'))

// If you want to start measuring performance in your app, pass a function
// to log results (for example: reportWebVitals(console.log))
// or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals
reportWebVitals()

App

import { Router, Route, Switch } from 'react-router-dom'
import { createBrowserHistory } from 'history'
import { ThemeProvider } from '@mui/material/styles'
import { CssBaseline } from '@mui/material'

import PersonsPage from 'containers/PersonsPage'

import theme from 'stylesheets/theme'

export default () => (
  <ThemeProvider theme={theme}>
    <CssBaseline />
    <Router history={createBrowserHistory()}>
      <Switch>
        <Route exact path={'/about'}>
          <div>About Page</div>
        </Route>
        <Route>
          <PersonsPage />
        </Route>
      </Switch>
    </Router>
  </ThemeProvider>
)

PersonsPage


import React from 'react'

import { apiFetch } from 'helpers/api'
import { personServiceUrl } from 'helpers/config'
import { IPerson } from 'types/types'

import PersonsList from './PersonsList'

import TracingButton from 'components/TracingButton'
import CreateNewPersonModal from 'containers/CreateNewPersonModal'

import styles from './styles.module.scss'

export default () => {
  const [isModalOpen, setIsModalOpen] = React.useState<boolean>(false)
  const [persons, setPersons] = React.useState<IPerson[]>([])

  const fetchPersons = async () => {
    const persons = await apiFetch<IPerson[]>(`${personServiceUrl}/persons`)
    setPersons(persons)
  }

  return (
    <div className={styles.app}>
      <CreateNewPersonModal open={isModalOpen} onClose={() => setIsModalOpen(false)} />

      <header className={styles.appHeader}>
        <p>Sample Patient Service Frontend</p>
      </header>

      <TracingButton id="test-create-person-button" label={'Create new person'} onClick={() => setIsModalOpen(true)} />

      <TracingButton id="test-fetch-persons-button" label={'Fetch persons'} onClick={fetchPersons} />
      {persons.length > 0 && (
        <React.Fragment>
          <div id="test-persons-count-text">Found {persons.length} persons</div>
          <PersonsList persons={persons} />
        </React.Fragment>
      )}
    </div>
  )
}
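
The apiFetch() helper used above is not listed in this post. Below is a minimal sketch of what such a wrapper around fetch() could look like; the actual implementation is in the GitHub repository:

// src/helpers/api.ts - a minimal sketch, see the GitHub repository for the actual helper
export const apiFetch = async <T>(url: string, init?: RequestInit): Promise<T> => {
  const response = await fetch(url, init)
  if (!response.ok) {
    throw new Error(`API call to ${url} failed with status ${response.status}`)
  }
  return (await response.json()) as T
}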

Proxy

Cross-Origin Resource Sharing (CORS) is an HTTP-header-based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading resources. In order for the frontend to connect to the backend, CORS has to be handled. One option is to instruct the backend to produce CORS headers that allow the frontend URL. Another option is to use Create React App's mechanism for handling CORS by defining a proxy. The file used for this is setupProxy.js. In the current examples, the proxy handles the connections to both the backend and the OpenTelemetry collector.

const { createProxyMiddleware } = require('http-proxy-middleware')

const configureProxy = (path, target) =>
  createProxyMiddleware(path, {
    target: target,
    secure: false,
    pathRewrite: { [`^${path}`]: '' }
  })

module.exports = function (app) {
  app.use(configureProxy('/api/person-service', 'http://localhost:8090'))
  app.use(configureProxy('/api/tracing', 'http://localhost:4318'))
}
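
The personServiceUrl and tracingUrl values used in the code come from a small config helper, which is not listed in this post. A minimal sketch, assuming the values simply point to the proxy paths defined above (the exact values are in the GitHub repository):

// src/helpers/config.ts - a sketch; the values mirror the proxy paths from setupProxy.js
export const personServiceUrl = '/api/person-service'
export const tracingUrl = '/api/tracing/v1/traces'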

WebVitals

The default application has built-in support for Web Vitals. In order to put them into operation, a reporter just needs to be registered in the src/index.tsx file by passing a function reference to reportWebVitals(). The easiest option is to log to the console: reportWebVitals(console.log). This can be enhanced further by creating a reporter which sends the data to Prometheus. Pushing data directly to Prometheus is not possible, though; Prometheus Pushgateway can be used as a metrics cache, from which Prometheus can pull.
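
A sketch of such a custom reporter is shown below; the /api/metrics endpoint is hypothetical and would be backed by the Pushgateway or a small service in front of it:

// src/index.tsx - a sketch of a custom Web Vitals reporter; the endpoint is hypothetical
import reportWebVitals from './reportWebVitals'

reportWebVitals((metric: any) => {
  // each metric has a name (CLS, FID, LCP, etc.), a value and an id
  navigator.sendBeacon('/api/metrics', JSON.stringify(metric))
})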

Docker

The application is Dockerized with NGINX in exactly the same way as described in the Dockerize React application with a Docker multi-staged build post.

Instrumentation

Instrumentation is done with the OpenTelemetry JavaScript libraries. The API calls to the backend use the fetch() method, and OpenTelemetry has a library that instruments all the calls going through fetch(): @opentelemetry/instrumentation-fetch. A WebTracerProvider is instantiated with a Resource that holds the service.name. Span processors are registered with the addSpanProcessor() method; the important one is a SimpleSpanProcessor wrapping the CollectorTraceExporter, which sends the traces to the OpenTelemetry collector. The actual tracer is returned by the provider's getTracer() method and is used for the custom tracing. registerInstrumentations() registers an instance of FetchInstrumentation, which actually traces the API calls.

In case the API responds with a status code greater than 299, this is considered an error and the span is marked as ERROR. This is done in the applyCustomAttributesOnSpan function. Another custom change to the fetch tracking is that the span name is overwritten in order to have a unique name for each API, which allows each individual API to be traced separately. A custom traceSpan() method is defined in order to manually trace individual events in the application, such as a button click. In case of an error in the wrapped function func, the span is also marked as an error.

import { context, trace, Span, SpanStatusCode } from '@opentelemetry/api'
import { WebTracerProvider } from '@opentelemetry/sdk-trace-web'
import { Resource } from '@opentelemetry/resources'
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base'
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector'
import { ZoneContextManager } from '@opentelemetry/context-zone'
import { FetchInstrumentation } from '@opentelemetry/instrumentation-fetch'
import { FetchError } from '@opentelemetry/instrumentation-fetch/build/src/types'
import { registerInstrumentations } from '@opentelemetry/instrumentation'

import { tracingUrl } from 'helpers/config'

const resource = new Resource({ 'service.name': 'person-service-frontend' })
const provider = new WebTracerProvider({ resource })

const collector = new CollectorTraceExporter({ url: tracingUrl })
provider.addSpanProcessor(new SimpleSpanProcessor(collector))
provider.register({ contextManager: new ZoneContextManager() })

const webTracerWithZone = provider.getTracer('person-service-frontend')

registerInstrumentations({
  instrumentations: [
    new FetchInstrumentation({
      propagateTraceHeaderCorsUrls: ['/.*/g'],
      clearTimingResources: true,
      applyCustomAttributesOnSpan: (span: Span, request: Request | RequestInit, result: Response | FetchError) => {
        const attributes = (span as any).attributes
        if (attributes.component === 'fetch') {
          span.updateName(`${attributes['http.method']} ${attributes['http.url']}`)
        }
        if (result.status && result.status > 299) {
          span.setStatus({ code: SpanStatusCode.ERROR })
        }
      }
    })
  ]
})

export function traceSpan<F extends (...args: any) => ReturnType<F>>(name: string, func: F): ReturnType<F> {
  const singleSpan = webTracerWithZone.startSpan(name)
  return context.with(trace.setSpan(context.active(), singleSpan), () => {
    try {
      const result = func()
      singleSpan.end()
      return result
    } catch (error) {
      singleSpan.setStatus({ code: SpanStatusCode.ERROR })
      singleSpan.end()
      throw error
    }
  })
}

Custom instrumentation

import { Button } from '@mui/material'

import { traceSpan } from 'helpers/tracing'

import styles from './styles.module.scss'

interface Props {
  label: string
  id?: string
  secondary?: boolean
  onClick: () => void
}

export default (props: Props) => {
  const onClick = async () => traceSpan(`'${props.label}' button clicked`, props.onClick)

  return (
    <div className={styles.button}>
      <Button id={props.id} variant={'contained'} color={props.secondary ? 'secondary' : 'primary'} onClick={onClick}>
        {props.label}
      </Button>
    </div>
  )
}

Traceability

Traceability between the frontend and the backend is described in the Trace Context W3C standard. In a nutshell, this is done by adding a traceparent header to the HTTP request to the backend. This is done automatically by @opentelemetry/instrumentation-fetch.
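
The traceparent header carries a version, the trace id, the parent span id, and the trace flags. The value below is the illustrative example from the standard itself:

traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01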

React component instrumentation

OpenTelemetry provides a library that can instrument React components and monitor their performance, such as load time. This library is called @opentelemetry/plugin-react-load. I tried it, and it works properly, but it is not in the current examples for two reasons. The first is that I am not really interested in React component lifecycle events. The more important reason is that this plugin works for React class components only. I started my React journey after version 16.8, which was released on 6 Feb 2019. Prior to this version, functional components were stateless and were used just for data visualization purposes. Version 16.8 introduced hooks, which allow state management inside a functional component. I write all my components as functional ones with hooks for state management. I cannot justify whether this is good or bad, I just like it that way. There is one serious drawback: functions in a functional component are re-created every time the component re-renders, so in some cases I had to use the useCallback() hook to preserve a function across re-renders.
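
A minimal illustration of the useCallback() case (a hypothetical component, not from the current examples):

import React from 'react'

export default () => {
  const [count, setCount] = React.useState<number>(0)

  // without useCallback() this function would be re-created on every re-render,
  // changing its identity and re-triggering anything that depends on it
  const increment = React.useCallback(() => setCount((current) => current + 1), [])

  return <button onClick={increment}>Clicked {count} times</button>
}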

Traces output

In order to see a trace, run the examples as described in Distributed system observability: complete end-to-end example with OpenTracing, Jaeger, Prometheus, Grafana, Spring Boot, React and Selenium. Accessing http://localhost:3000/ and clicking the “Fetch persons” button generates a trace in Jaeger.

Conclusion

OpenTelemetry provides libraries to instrument JavaScript applications and to report the traces to an OpenTelemetry collector. Creating an application with React and instrumenting it to collect OpenTelemetry traces is easy. Behind the scenes, the fetch() method is modified to pass the traceparent header in the HTTP request to the backend. This is how tracing between different systems happens.

Distributed system observability: complete end-to-end example with OpenTracing, Jaeger, Prometheus, Grafana, Spring Boot, React and Selenium

Post summary: Code examples and explanations of an end-to-end example showcasing distributed system observability, from the Selenium tests through the React frontend, all the way to the database calls of a Spring Boot application. Examples are implemented with the OpenTracing toolset, and traces are saved in Jaeger. This example also shows a complete observability setup, including tools like Grafana, Prometheus, Loki, and Promtail.

This post is part of the Distributed system observability: complete end-to-end example series. The code used for this series of blog posts is located in the selenium-observability-java GitHub repository.

Introduction

Nowadays, the microservices architecture is very popular. It certainly has its benefits, allowing companies to deliver products to the market faster. It is much easier to manage several small applications, each of them with isolated responsibilities, than one big fat monolithic application. Microservices architecture has its challenges as well. One of those challenges is traceability. What happens in case of an error: where did it occur, which microservices were involved, what was the request flow through the system, where is the stack trace? In a monolithic application, the stack trace is shown in the logs, giving the exact location of the error. In a microservices landscape, errors are in many cases meaningless unless there is full traceability of the request flow.

Observability and distributed tracing

Distributed tracing, also called distributed request tracing, is a method used to profile and monitor applications, especially those built using a microservices architecture. Distributed tracing helps pinpoint where failures occur and what causes poor performance. Logs, metrics, and traces are often known as the three pillars of observability. Further reading on observability can be done in The Three Pillars of Observability article.

OpenTracing

OpenTracing is an API specification and a set of libraries that enable the instrumentation of distributed applications. It is not locked to any particular vendor and allows flexibility just by changing the configuration of already instrumented applications. More details can be found in Instrumenting your application and What is Distributed Tracing?. The current examples are based on the OpenTracing libraries and tools.

End-to-end traceability and observability

In the current examples, I am going to show an end-to-end solution for how observability can be achieved in a distributed system. I have used the mnadeem/boot-opentelemetry-tempo project as a basis and have extended it with a React frontend and Selenium tests to provide a complete end-to-end example. Below is a diagram of the full setup. All applications involved are explained at a high level.

PostgreSQL and pgAdmin

The basic examples used PostgreSQL. I thought of changing it to MySQL, but after some short research, I found that PostgreSQL has some advantages. PostgreSQL is an object-relational database, while MySQL is a purely relational database. This means that Postgres includes features like table inheritance and function overloading, which can be important to certain applications. Postgres also adheres more closely to SQL standards. See more in MySQL vs PostgreSQL — Choose the Right Database for Your Project.

pgAdmin is the default user interface to manage a PostgreSQL database, so it is present in the architecture as well.

Spring Boot backend

Spring Boot is used as the backend. I wanted to get some exposure to the technology, so I created a very basic application in Spring Boot. It uses the PostgreSQL database for reading and writing data. The Spring Boot application is instrumented with the OpenTelemetry Java library and exports the traces in Jaeger format directly to the Jaeger backend. It also writes application log files to the file system. The backend exposes APIs, which are consumed by the frontend. More details on the backend can be found in the Distributed system observability: Instrument Spring Boot application with OpenTelemetry post.

React frontend

I am very experienced with React, so this was the natural choice for the frontend technology. The frontend uses fetch() to consume the backend APIs. It is instrumented with the OpenTelemetry JavaScript libraries to trace all communication happening through fetch() and to export the traces in OpenTelemetry format to the OpenTelemetry collector. The frontend also has manual instrumentation which traces the actions done by end-users. More details on the frontend can be found in the Distributed system observability: Instrument React application with OpenTelemetry post.

OpenTelemetry collector

The OpenTelemetry collector converts the data received from the frontend in OpenTelemetry format into Jaeger format and exports it to the Jaeger backend. The collector also extracts the span metrics, which are read by Prometheus; read more in the Distributed system observability: extract and visualize metrics from OpenTelemetry spans post. Configurations are described in the collector configuration. Local configurations are in otel-config.yaml.

Selenium tests

Selenium was chosen as the web testing framework because of its observability feature. Actually, this was the reason I created the current examples. After getting to know the tracing features of Selenium better, I find them not very useful. Selenium does not provide traceability of the tests, but rather of its internal operations and performance. Having started with the tracing and the whole project, I could not ditch it in the middle, so I had to create a custom way to make Selenium trace the tests. The Selenium tests export tracing information in Jaeger format directly into the Jaeger backend. More details on the tests can be found in the Distributed system observability: Instrument Selenium tests with OpenTelemetry post.

Cypress tests

Cypress is a front-end testing tool built for the modern web. It is most often compared to Selenium. The initial driver of the current post series was Selenium observability. After I got a better understanding of the observability topic, I decided to add examples of Cypress test observability for more completeness. Cypress interacts with the frontend and exports its traces to the OpenTelemetry Collector, which then forwards the traces to Jaeger. More details on the tests can be found in the Distributed system observability: Instrument Cypress tests with OpenTelemetry post.

Jaeger

Jaeger, inspired by Dapper and OpenZipkin, is an open-source distributed tracing system. It is used for monitoring and troubleshooting microservices-based distributed systems. Jaeger collects all the traces and provides search and visualization of the traces. In the original examples, Grafana Tempo was used as a backend, and Jaeger UI, via the jaeger-query module, was used to open the traces. I initially started with it, but Tempo does not provide the possibility to search the traces. I found this rather inconvenient, so I switched completely to Jaeger.

Promtail

Promtail is an agent which ships the contents of the Spring Boot backend logs to a Loki instance. It is usually deployed to every machine that runs applications which need to be monitored. Local configurations are in promtail-local.yaml.

Loki

Grafana Loki is a log aggregation system inspired by Prometheus. It does not index the contents of the logs, but rather a set of labels for each log stream. The log data itself is then compressed and stored in chunks. In the current example, logs are being pushed to Loki by Promtail. Local configurations are in loki-local.yaml.

Prometheus

Prometheus is an open-source monitoring and alerting toolkit. Prometheus collects and stores its metrics as time-series data, i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels. In the current example, Prometheus is monitoring the Spring Boot backend, Loki, Jaeger, and the OpenTelemetry Collector. It pulls the metrics data from those applications at a regular interval and stores it in its database. Alerts can be configured based on the metrics. Local configurations are in prometheus.yaml.

Grafana

Grafana is an open-source solution for running data analytics, pulling up metrics from different data sources, and monitoring applications with the help of customizable dashboards. The tool helps to study, analyze and monitor data over a period of time, technically called time-series analytics. In the current example, Grafana pulls data from Prometheus, Jaeger, and Loki. Local configurations are in grafana-dashboards.yaml and grafana-datasource.yml.

Explore the example

Running the example is very easy. What is needed is Docker Compose and an IDE that can run JUnit tests; I prefer IntelliJ IDEA. Run the examples:

  1. Check out the source code from https://github.com/llatinov/selenium-observability-java
  2. Run: docker-compose build
  3. Run: docker-compose up
  4. Open selenium-tests Maven project and run all the unit tests

Explore the example artifacts:

pgAdmin

pgAdmin is accessible at http://localhost:8005/. In order to log in, use the following credentials: pgadmin4@pgadmin.org / admin. This is needed only if the database records have to be read or modified.

Jaeger

Jaeger is accessible at http://localhost:16686. The home page shows rich search functionality. There is a dropdown with all available services; the operations performed by the selected service can also be filtered.

A trace can be opened from the search results. It shows all the actions for this trace that have been recorded.

Grafana

Grafana is accessible at http://localhost:3001. Different data sources can be accessed from the left-hand side menu; there is a small compass icon, the Explore menu. At the top, there is a dropdown with the available data sources.

Grafana -> Loki

From Grafana, select Loki as a data source. Search for {job="person-service"}; this shows all logs from the Spring Boot backend.

Grafana -> Jaeger

The Jaeger data source can open a trace by its ID. This data source can be used in conjunction with Loki: search for logs in Loki, then open a log entry, which exposes a Jaeger button.

The Jaeger data source can also be opened directly from the dropdown; then type the TraceID.

Grafana -> Prometheus

From Grafana, select Prometheus as a data source. Search for {job="person-service"}; this shows all metrics for the Spring Boot backend.

Prometheus

Prometheus is accessible at http://localhost:9090/. Search for {job="person-service"}; this shows all metrics for the Spring Boot backend.

Further posts with details

This is an introductory post, more details, explanations, and code examples on actual implementation can be found in the following posts:

Conclusion

The microservices architecture is used more and more often. Alongside its advantages, it comes with specific challenges. Observability is one of those challenges and is a very important topic in a distributed software system. In the current example, I have shown end-to-end observability achieved with popular open-source tools. The main objective of my experiments was to be able to trace a Selenium test execution through all the systems involved in the distributed architecture.

How to gather code coverage with Istanbul and Selenium and pitfalls to avoid

Post summary: Istanbul did not seem to be recording code coverage correctly; it turned out that the tests do navigation by changing the URL, which resets the code coverage.

How to use Istanbul for code coverage of Cypress automated tests was explained in detail in Testing with Cypress – Custom logging of errors and JUnit results post.

Code coverage with Istanbul and Selenium

Recently I had to do it again, this time with Selenium. There are several approaches that can be taken to measure code coverage with Selenium. Whichever approach is taken, the first step is to instrument the frontend. How to do it with React and create-react-app is described in the Testing with Cypress – Code coverage with Istanbul post. Coverage is present in the __coverage__ JS frontend variable.

Once the frontend is instrumented, it is important to collect the code coverage after the tests are run. This is where the approaches differ. One option is to use istanbul-middleware. In this case, a Node.js backend has to be created, and the tests should post the coverage results, taken from __coverage__, to that backend. I find this approach inconvenient, so I took an easier one.

Once a test is finished, the code coverage data is collected and saved as a JSON file in a test results folder; afterwards, all the results are used to generate the report. I use C#, and the code to do so is as simple as:

public void CollectCodeCoverage()
{
    var data = ((IJavaScriptExecutor)_webDriver)
        .ExecuteScript("return window.__coverage__");
    if (data != null)
    {
        var jsonString = JsonConvert.SerializeObject(data);
        var fileName = $"{_testResultsFolder}/coverage_{DateTime.Now.Ticks}.json";
        File.WriteAllText(fileName, jsonString);
    }
}

Generating code coverage report

The report is generated with the nyc CLI tool. Once all the JSON files are copied into a folder named .nyc_output, the command to run the report is nyc report --reporter=html. nyc can be installed as a global NPM package or can be added to the frontend project inside package.json.
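
If it is added to the project, the dependency and a helper script could look like this (the version is illustrative):

{
  "scripts": {
    "coverage-report": "nyc report --reporter=html"
  },
  "devDependencies": {
    "nyc": "^15.1.0"
  }
}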

The issues measuring the code coverage

The setup described above is clear and easy to achieve. However, when the tests were run, they did not record coverage that was supposed to be there. I spent several days trying to figure out what the issue was, and finally I was able to understand it. In my tests, I use _webDriver.Navigate().GoToUrl(). This actually visits a new URL, basically invalidating all the coverage results gathered so far. Once the problem was identified, the solution was pretty simple: save the code coverage every time before a new URL is about to be opened.
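
The original tests are in C#, but the idea translates to any language binding. A sketch of the fix in TypeScript with selenium-webdriver (the names are illustrative):

import { writeFileSync } from 'fs'
import { WebDriver } from 'selenium-webdriver'

// save the Istanbul coverage object before navigating away,
// because loading a new URL resets window.__coverage__
async function navigateWithCoverage(driver: WebDriver, url: string): Promise<void> {
  const coverage = await driver.executeScript('return window.__coverage__')
  if (coverage) {
    writeFileSync(`.nyc_output/coverage_${Date.now()}.json`, JSON.stringify(coverage))
  }
  await driver.get(url)
}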

Conclusion

Istanbul is a very good tool to measure the code coverage for web automation tests. In the current post, I have described a pitfall, which should be avoided when using it.

Dockerize React application with a Docker multi-staged build

Post summary: How to build a React application inside a Docker container with a multi-staged build and then run it with NGINX or Caddy.

In the current post, I am not going to compare NGINX vs. Caddy. I will show how to build a React application and package it into a Docker container with each of them. The example code is located in the cypress-testing-framework GitHub repository.

NGINX

NGINX is open-source software for web serving, reverse proxying, caching, load balancing, media streaming, and more. It started out as a web server designed for maximum performance and stability. In addition to its HTTP server capabilities, NGINX can also function as a proxy server for email (IMAP, POP3, and SMTP) and a reverse proxy and load balancer for HTTP, TCP, and UDP servers.

Caddy

Caddy is an open-source, HTTP/2-enabled web server written in Go. It uses the Go standard library for its HTTP functionality. One of Caddy’s most notable features is enabling HTTPS by default.

Building

Docker multi-staged building is used in the current post. I slightly touched on the topic in the Optimize the size of Docker images post. The main idea is to optimize the Docker images, so they become smaller. In the current post, I will show two flavors of builds. One is with the standard NPM package manager and is described in the Build and run with NGINX section.

The other is with the Yarn package manager and is described in the Build and run with Caddy section. The current examples are configured to use Yarn. I personally prefer Yarn, as it has very effective caching for local development and a reliable dependency locking mechanism.

Build and run with NGINX

The following Dockerfile describes building the React application with the NPM package manager and packaging it into an NGINX image.

# ========= BUILD =========
FROM node:8.16.0-alpine as builder

WORKDIR /app

COPY package.json .
COPY package-lock.json .
RUN npm install --production

COPY . .

RUN npm run build

# ========= RUN =========
FROM nginx:1.17

COPY conf/nginx.conf /etc/nginx/nginx.conf
COPY --from=builder /app/build /usr/share/nginx/html

The as builder keyword is used to give a name to the first image. Both package.json and package-lock.json are copied to the already configured work directory /app. Installation of the packages is done with npm install --production, where the --production switch is used to skip the devDependencies. In the current example, Cypress takes a lot of time to install, and it is not needed for a production build. Afterward, all project files are copied to the image; the files configured in .dockerignore are skipped. All source code files are intentionally copied to the image only after the NPM packages installation. Package installation takes time, and the packages need to be installed only if the package.json file has changed. In case of code changes only, the Docker cache is used for the packages layer, which speeds up the build. The build is initiated with npm run build and takes quite some time. Now the build artifacts are ready. The next stage copies the artifacts from the builder image's /app/build folder into the /usr/share/nginx/html folder of the nginx:1.17 image.
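
A minimal .dockerignore for such a build could look like this (the entries are illustrative, not taken from the repository):

node_modules
build

The NGINX configuration file that is copied into the image is shown below.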

worker_processes auto;
worker_rlimit_nofile 8192;

events {
  worker_connections 1024;
}

http {
  include /etc/nginx/mime.types;
  sendfile on;
  tcp_nopush on;

  gzip on;
  gzip_static on;
  gzip_types
    text/plain
    text/css
    text/javascript
    application/json
    application/x-javascript
    application/xml+rss;
  gzip_proxied any;
  gzip_vary on;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;

  server {
    listen 3000;
    server_name localhost;
    root /usr/share/nginx/html;
    auth_basic off;

    location / {
      try_files $uri $uri/ /index.html;
    }

    # 404 if a file is requested (so the main app isn't served)
    location ~ ^.+\..+$ {
      try_files $uri =404;
    }
  }
}

I will not go into NGINX configuration details; the configuration can be checked in detail in the NGINX documentation. Important in the configuration above is that gzip compression is enabled and that NGINX listens on port 3000. Then, with try_files, unknown routes are redirected to index.html, so React can bootstrap the routes.

Build and run with Caddy

The following Dockerfile describes building the React application with the Yarn package manager and packaging it into a Caddy image.

# ========= BUILD =========
FROM node:8.16.0-alpine as builder

WORKDIR /app

RUN npm install yarn -g

COPY package.json .
COPY yarn.lock .
RUN yarn install --production=true

COPY . .

RUN yarn build

# ========= RUN =========
FROM abiosoft/caddy:1.0.3

COPY conf/Caddyfile /etc/Caddyfile
COPY --from=builder /app/build /usr/share/caddy/html

Absolutely the same logic applies here as above. Yarn is installed globally with npm, then the package.json and yarn.lock files are copied. It is very important to copy yarn.lock, otherwise on every run the latest dependencies will be fetched, and there might be inconsistent behavior. Only production dependencies are installed with yarn install --production=true. After the application is built with yarn build, it is copied from the builder image to the abiosoft/caddy:1.0.3 image, in the /usr/share/caddy/html folder. The Caddyfile is copied as well to configure Caddy.

0.0.0.0:3000 {
	gzip
	log / stdout "{method} {path} {status}"
	root /usr/share/caddy/html
	rewrite {
		regexp .*
		to {path} /
	}
}

Caddy is configured to listen on port 3000, gzip compression is enabled, and there is a rewrite rule which redirects unknown paths to the main path, so React can bootstrap the router.

Conclusion

In the current post, I have shown how to build a React application inside a Docker image with both NPM and Yarn and then pack the build artifacts into an NGINX or Caddy Docker image, which can later be run as a container. This process optimizes the Docker image size, and it also does not require the build machine to have Node.js installed, as Node.js is inside the builder image.

Testing with Cypress – Build a React application with Node.js backend

Post summary: A short introduction to the application under test that is created for and used in all Cypress examples. It is a React frontend created with the Create React App package. The backend is a Node.js application running on Express.

This post is part of a Cypress series; you can see all posts from the series in Testing with Cypress – lessons learned in a complete framework. The example code is located in the cypress-testing-framework GitHub repository.

Backend

The backend is a simple Node.js application built with the Express web server. It supports several APIs that can save a person, get a person by id, get all persons, or delete the last person in the collection. You can read the full description in the Build a REST API with Express on Node.js and run it on Docker post.

Frontend

The current post is mainly devoted to the frontend. It describes how the React application is built. In order to make this part easy, Create React App is used. The best thing about it is that you do not need to handle lots of configurations and can just focus on your application. In order to create an application, Create React App has to be installed as a global NPM package with npm install -g create-react-app. The application itself is created with create-react-app my-application-name. Once this is done, you can start building your application. See more details on application creation in How to Create a React App with create-react-app. I have added Bootstrap for better styles and Toastr for nicer notifications. I also use Axios for API calls. I am not going into details about how to work with React, as this is a pretty huge topic and I am not really an expert at it. You can inspect the GitHub repository given above to see how the controllers are structured.

Instrumented for code coverage

After having the application ready, I wanted to add support for code coverage. The tool used to measure code coverage is Istanbul. Because of Create React App, adding the configuration is not straightforward, as there is practically no webpack.config.js file; it is hidden.

One option is to eject the application. Maybe for a big project, where you need full control over the configurations, this is OK, but for this small application I would not want to deal with that.

Another option is to use a package that builds on top of Create React App. One such package is react-app-rewired. It is installed along with istanbul-instrumenter-loader, the actual code coverage plugin. Once those two are installed, the actual configuration is pretty simple. A file named config-overrides.js is created with the following content:

const path = require('path');
const fs = require('fs');

module.exports = function override(config, env) {
  // do stuff with the webpack config...
  config.module.rules.push({
    test: /\.js$|\.jsx$/,
    enforce: 'post',
    use: {
      loader: 'istanbul-instrumenter-loader',
      options: {
        esModules: true
      }
    },
    include: path.resolve(fs.realpathSync(process.cwd()), 'src')
  });
  return config;
};

Also, package.json has to be changed: the default react-scripts start/build/test commands are changed to react-app-rewired start/build/test. In order to verify that code coverage is enabled, open Dev Tools (F12), go to the Console, and search for the __coverage__ variable.
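
A quick check that can be pasted into the browser console (just a verification one-liner, not part of the project):

// prints the number of instrumented source files, or 0 if instrumentation is off
console.log(Object.keys(window.__coverage__ || {}).length)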

Dockerization

In order to make it easy to run, a Dockerfile has been added. It installs Yarn as the package manager, then copies package.json. It is important to copy yarn.lock as well, since the actual dependency versions are in it. If it is not copied, every time an install is run the latest dependencies will be picked, which may lead to instability. Then the installation of dependencies is done with the yarn command, short for yarn install. Finally, all local files are copied. This is done at the end, so installation is not triggered on every file change, but only on a package.json or yarn.lock change.

FROM node:8.16.0-alpine

ENV APP /app
WORKDIR $APP

RUN npm install yarn -g

COPY package.json $APP
COPY yarn.lock $APP
RUN yarn

COPY . .

The docker-compose.yml file is also very simple. It has two services. The first is the backend, which is exposed on port 9000 of the host. This is needed because the Cypress tests directly access the APIs. It uses the image uploaded to the Docker Hub repository: image: llatinov/nodejs-rest-stub. The second service is the frontend. It uses the local Dockerfile: build: .. When the frontend container is started, the yarn start command is executed, and the container is exposed on port 3030 of the host machine. One more thing that is added as configuration is the backend API URL, which can be controlled by setting the API_URL environment variable; it is then assigned to REACT_APP_API_URL, used by the frontend. If no API_URL is provided, the default of http://localhost:9000 is used.

version: '3'

services:
  backend:
    image: llatinov/nodejs-rest-stub
    ports:
      - '9000:3000'
  frontend:
    build: .
    command: yarn start
    environment:
      - REACT_APP_API_URL=${API_URL:-http://localhost:9000}
    ports:
      - '3030:3000'

Run the application

There are several ways to run the application under test in order to try the Cypress examples. One way is to download both repositories, the backend and the frontend, and run them separately.

The second is to run the backend with the Docker command docker run -p 9000:3000 llatinov/nodejs-rest-stub. The command maps port 3000 of the container to port 9000 of the host; this is where the APIs are available. I have uploaded the backend image to the public Docker Hub repository. After the backend is running, the frontend is run with the yarn start command. In this case, the frontend is running on port 3000, so you have to adjust the proper URL in the Cypress configurations.

The third option is to run docker-compose up. This runs the backend on port 9000 and the frontend on port 3030.

Functionality

The application is very simple; it has a few pages where the user can add a person or see the already existing persons in the backend. On each successful action there is a notification, and in case of a network error, a message is shown.

Persons list


Add person


Version page



Conclusion

In order to demonstrate the Cypress examples, a separate React application with a backend was created with the Create React App package. It is also configured to support code coverage with Istanbul.
