Running Automated Tests for a Server-Based Application

Stan Georgian
JavaScript in Plain English
7 min read · Jul 7, 2021


Photo by Sigmund on Unsplash

As a web developer, you want to ensure that the application you are delivering is stable and that it will not behave unexpectedly when used by any user or consumed by other applications.

Performing this check manually would be time-consuming, especially for an extensive application that is continuously changing and updating. To solve this problem, we can write automated test scripts and even combine them with other tools to get the most out of them.

What is Automated Testing?

Test automation is exactly the opposite of manual testing. Instead of manually checking that the code works as expected, you write a script that does it for you and compares the outcome with a predefined value that you expect to receive. Automated testing can also be applied to mobile apps to make sure they work as intended.

Writing automated test scripts can be more time-consuming at first, but it consistently proves to be the better long-term solution. However, not everything needs to be automated; the first step is to identify what does. Everyday, repetitive code with high complexity and multiple possible paths is an excellent candidate for automation.

How to do it

Here is how you can write automated test scripts for your app. We will use three of the most popular types of tests: unit testing, integration testing, and end-to-end testing, and we will write one of each.

Application Overview

Before we check out the test cases, let’s see what we need to test.

Target Application

The application we will use as a demonstration throughout this article is a simple API that queries an S3 bucket and lists all its objects. A single endpoint, /object-names, returns the names of those objects.
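A minimal version of such a server could look like the following sketch (the file name app.js, the S3Manager wiring, and the port fallback are assumptions for illustration, not the article's original code):

```js
// app.js — a minimal sketch of the API described above (file name and export are assumptions)
require('dotenv').config(); // load the S3 credentials from the .env file

const express = require('express');
const S3Manager = require('./s3-manager');

const app = express();
const s3Manager = new S3Manager();

// The single endpoint: respond with the names of all objects in the bucket
app.get('/object-names', async (req, res) => {
  const names = await s3Manager.listObjectNames();
  res.json(names);
});

// Export the app so the tests can use it without starting a real server
module.exports = app;

// Start listening only when the file is run directly, not when it is imported by a test
if (require.main === module) {
  app.listen(process.env.PORT || 3000);
}
```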

As dependencies it has the following:

  • express — to create the back-end server
  • aws-sdk — to communicate with the S3 storage
  • dotenv — to load the environment variables (S3 credentials)
  • jest & supertest — for testing

As the application is quite simple, we'll write unit and integration tests for the code that reads the objects from the S3 storage (the S3Manager class), and an E2E test case for the entire application.

From the S3Manager class, only the listObjectNames() function is worth automating.

The code of this function looks roughly like the following (the bucket-name variable and the exact error message in the sketch are assumptions):

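```js
// listObjectNames() lists the objects in the bucket and returns their keys (names).
// The bucket-name variable and the exact error message are assumptions.
async listObjectNames() {
  try {
    const response = await this.#S3
      .listObjects({ Bucket: process.env.BUCKET_NAME })
      .promise();

    // Keep only the Key (name) of each object
    return response.Contents.map((object) => object.Key);
  } catch (error) {
    // Hide the raw AWS error behind a custom message
    return 'Could not list the objects in the bucket';
  }
}
```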

This function lists the objects in a given bucket and returns an array with the keys (names) of those objects.

this.#S3 accesses the private field #S3, which was instantiated in the constructor with the aws-sdk library and the storage credentials.

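In sketch form, the relevant part of the class looks like this (the credential variable names are assumptions):

```js
// s3-manager.js — the S3 client is created once, in the constructor
const AWS = require('aws-sdk');

class S3Manager {
  // Private field holding the S3 client
  #S3;

  constructor() {
    // Credentials are read from the environment loaded by dotenv
    this.#S3 = new AWS.S3({
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
      region: process.env.AWS_REGION,
    });
  }

  // listObjectNames() from the previous snippet goes here
}

module.exports = S3Manager;
```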

Unit Testing

Unit testing is when we isolate pieces of functionality from one another and test the smallest unit: a function, a class, a module, and so on.

The purpose of unit testing is to identify and fix bugs in our code as early as possible. Unit tests may be a little harder to write because the tested unit has to be isolated from the rest of the code by mocking the other modules it interacts with, internal or external.

Inside a new file named s3-manager.unit.spec.js, we'll start by mocking the external dependencies. As we use the listObjects method from the aws-sdk library, the mock could look like the following sketch:

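```js
// s3-manager.unit.spec.js — mocking aws-sdk (a sketch)

// The only method our test target uses; it returns undefined by default
// because each test case will set the actual value it wants to receive.
const mockedAwsSdk = {
  listObjects: jest.fn().mockReturnValue(undefined),
};

// The mock is registered before S3Manager is imported, so when s3-manager.js
// requires 'aws-sdk' it receives this fake module instead of the real SDK.
jest.mock('aws-sdk', () => ({
  // new AWS.S3(credentials) will simply hand back our fake client
  S3: jest.fn(() => mockedAwsSdk),
}));

const S3Manager = require('./s3-manager');
```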

What is worth mentioning here is that we created a new constant named mockedAwsSdk in which we defined the methods that our test target uses. They return undefined by default because each test case will set the actual value it expects to receive.

At the same time, the mock was declared before importing the S3Manager class. Therefore, when the code from s3-manager.js is executed and imports the aws-sdk library, it will receive this mocked constant instead of the real dependency.

Our target function has a cyclomatic complexity of 2. That's because our code has two possible paths: one in the try block and one in the catch block. Therefore, we need to write two unit tests to check that the expected result is returned for both paths.

For the first path, the try block, we expect an array of strings, and for the second path, the catch block, we expect a custom error message.

These two paths can be validated with a test suite along the following lines (the object keys and the error message mirror the assumptions in the earlier sketch):

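```js
describe('S3Manager.listObjectNames', () => {
  const s3Manager = new S3Manager();

  afterEach(() => {
    // Remove whatever behaviour the previous test case configured
    mockedAwsSdk.listObjects.mockReset();
  });

  it('returns the object names when listObjects succeeds', async () => {
    // The try path: aws-sdk resolves with a hard-coded list of objects
    mockedAwsSdk.listObjects.mockReturnValue({
      promise: () =>
        Promise.resolve({
          Contents: [{ Key: 'first.txt' }, { Key: 'second.txt' }],
        }),
    });

    await expect(s3Manager.listObjectNames()).resolves.toEqual([
      'first.txt',
      'second.txt',
    ]);
  });

  it('returns a custom message when listObjects throws', async () => {
    // The catch path: aws-sdk throws and the function returns the custom message
    mockedAwsSdk.listObjects.mockImplementation(() => {
      throw new Error('S3 is unreachable');
    });

    await expect(s3Manager.listObjectNames()).resolves.toEqual(
      'Could not list the objects in the bucket'
    );
  });
});
```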

In the first test, we mocked the listObjects function from aws-sdk to return a hard-coded array of objects, and we tested that our listObjectNames function correctly loops through this array and extracts the key of each object.

In the second test, we mocked the same function from aws-sdk, but this time to throw an error, and we tested that our target function can catch that error and return a custom message for it.

At the same time, we defined an afterEach hook to reset the mocks that we configure in each test case.

Integration Testing

In integration testing, we test how modules interact with each other. In our situation, we need to test that the S3Manager class correctly uses the aws-sdk library that we have installed. Therefore, we just need to test the try path and check that an array of object names (or an empty array if the storage is empty) is returned.

Inside a new file named s3-manager.integration.spec.js, we'll define a test along the following lines.

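```js
// Integration test (sketch) — no mocks: the real aws-sdk and real credentials are used
require('dotenv').config();

const S3Manager = require('./s3-manager');

describe('S3Manager integration', () => {
  it('returns an array with the names of the objects in the bucket', async () => {
    const s3Manager = new S3Manager();

    const names = await s3Manager.listObjectNames();

    // The bucket may be empty, so we only assert that an array comes back
    expect(Array.isArray(names)).toBe(true);
  });
});
```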

For integration testing, we no longer mock the aws-sdk library, and we’ll allow the code from the file s3-manager.js to import the installed dependency.

As the object storage may be empty, we just test that a value is returned and that value is an array.

E2E Testing

In end-to-end testing, we test that the entire application works from one end to the other.

In our case, we will test that the entire workflow of requesting the bucket contents through a GET request works as expected and that an array of strings is returned.

Inside a new file named s3-manager.e2e.spec.js, we'll define a test along the following lines.

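```js
// E2E test (sketch) — the request travels through the real Express app and the real S3 client
const request = require('supertest');
const app = require('./app'); // the Express app sketched earlier (the path is an assumption)

describe('GET /object-names', () => {
  it('responds with an array of object names', async () => {
    const response = await request(app).get('/object-names');

    expect(response.status).toBe(200);
    expect(Array.isArray(response.body)).toBe(true);
  });
});
```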

If we run jest to execute these test suites, we can check that our code works as designed.

Test Report

Get more from your tests

Having automated tests is always the preferred way: you write them once and execute them automatically every time. But writing and managing sizeable automated test scripts for complex tasks can become challenging.

Nevertheless, the open-source dependencies that we used, like jest and supertest, are here to stay, as they make the test process much more straightforward.

However, when complexity increases, this might not be enough to scale the automation process and integrate it successfully into a DevOps workflow.

When test automation becomes overwhelming, you can bring other, more advanced tools into the process. They can help you manage and strengthen your test suites by collecting more information from them and running them against multiple targets.

One such tool is Perfecto, which acts as a unified cloud platform for test automation. Getting data from a large number of test suites scattered across multiple targets can be challenging. One benefit of Perfecto is that you can configure it to generate smart reports and run the test suite on multiple targets at once.

Conclusion

Writing test automation is standard practice and is the way to go for any web application. Not only does it give you stability, but it also gives you speed. There are plenty of tools that can help you kickstart the automation process and go beyond the basic tests.

However, the most important thing is to choose the tool that best suits your needs and to identify the parts of your application that are worth automating.

More content at plainenglish.io
