
FT-07 · TOOL · First Way: Flow

Test Automation

Manual testing does not scale. How to build a test suite that runs in minutes, catches regressions before they reach production, and gives developers confidence to deploy continuously.

Sources: DevOps Handbook · Growing Object-Oriented Software — Freeman & Pryce · DORA Research

Video Lesson

A video lesson for this topic is in development. The library articles and mission exercises cover the same material in the meantime.

01

Why automate tests?

A manual regression test suite is a liability that grows with the codebase. As code accumulates, manual testing takes longer — until the suite takes longer to run than a sprint, and stops being run at all. Automated tests invert this dynamic: the suite can grow while run time stays bounded.

| Codebase size | Manual regression | Automated suite | Runs without automation |
|---|---|---|---|
| Small (50 files) | 20 min | 45 sec | Rarely |
| Medium (500 files) | 3 hours | 2 min | Weekly |
| Large (5,000 files) | 2 days | 8 min | Monthly |
| XL (50,000 files) | 2 weeks | 15 min | Quarterly |

Manual regression time grows linearly with codebase size. Automated suite time grows sub-linearly. At scale, manual testing is not merely slower; it is impossible.

DORA research identifies comprehensive test automation as one of the highest-leverage technical practices for software delivery performance. Teams with automated test suites deploy more frequently and have lower change failure rates — because every change is tested, not just the ones developers had time to check manually.

02

The test pyramid

Mike Cohn's test pyramid describes the optimal distribution of test types. The shape reflects cost and speed: unit tests are cheap and fast, so have many. E2E tests are expensive and slow, so have few. The ratio — roughly 70% unit, 20% integration, 10% E2E — is a guideline, not a law.

- E2E / Acceptance (~10%): few tests, minutes each
- Integration (~20%): some tests, seconds each
- Unit (~70%): many tests, milliseconds each

Many fast unit tests at the base. Few slow E2E tests at the top. Inverting the pyramid makes the suite slow and fragile.

The anti-pattern is the ice cream cone: mostly E2E tests, few unit tests. This is the natural result of teams that write tests only to satisfy QA, not to drive development. It produces a slow, brittle test suite that developers avoid running.

03

Unit testing

A unit test verifies the behavior of a single function or class in isolation. It runs in milliseconds, has no external dependencies, and tells you exactly what is broken. The Arrange-Act-Assert (AAA) pattern structures every test:

```javascript
// test: computeDiscount()
it('applies 10% discount for premium users', () => {
  // Arrange
  const user = { tier: 'premium' };
  const cart = { total: 100 };

  // Act
  const result = computeDiscount(user, cart);

  // Assert
  expect(result).toBe(90);
});
```

Test this

Business logic and calculations

Edge cases and boundary values

Error conditions and exceptions

Public interfaces and contracts

Do not test

Framework behavior (it's already tested)

Simple getters and setters

Implementation details (test behavior)

Third-party library internals
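The "edge cases and boundary values" item above is worth making concrete. The sketch below assumes a minimal computeDiscount implementation consistent with the earlier example (the article only specifies its premium-user behavior), then exercises the happy path alongside two boundaries:

```javascript
// Assumed implementation, for illustration only: 10% off for premium users.
function computeDiscount(user, cart) {
  const rate = user.tier === 'premium' ? 0.10 : 0;
  return cart.total * (1 - rate);
}

const premium = { tier: 'premium' };
const standard = { tier: 'standard' };

// Boundary and edge cases worth covering alongside the happy path:
const results = [
  computeDiscount(premium, { total: 0 }),   // empty cart
  computeDiscount(premium, { total: 100 }), // happy path
  computeDiscount(standard, { total: 100 }), // non-premium: no discount
];
```

Each case pins down one behavior, so a failing assertion points directly at the broken branch rather than at "discounts are wrong somewhere".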

04

Integration testing

Integration tests verify that components work correctly together — a service and its database, two microservices communicating, a function and a file system. They are slower than unit tests but catch a class of bug that unit tests cannot: integration failures.

When you cannot use a real dependency (slow, expensive, non-deterministic), use a test double:

Stub

Returns a fixed value. Use when you need a dependency to return a specific response.

paymentService.charge() always returns { success: true }

Mock

Records calls and verifies interactions. Use when you need to assert that a dependency was called correctly.

assert emailService.send() was called once with the right args

Fake

A simplified working implementation. Use for databases, queues, filesystems — anything with state.

In-memory database that behaves like Postgres but runs in the test process

Spy

Wraps a real implementation and records calls. Use when you want real behavior but need to verify it happened.

Real logger that also captures log lines for assertions
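To make the stub and spy concrete, here is a hand-rolled sketch with no mocking framework; checkout() and both collaborators are hypothetical names invented for this example:

```javascript
// Function under test (hypothetical): charges the cart, then emails a receipt.
function checkout(cart, paymentService, emailService) {
  const result = paymentService.charge(cart.total);
  if (result.success) {
    emailService.send(cart.customer, 'Order confirmed');
  }
  return result.success;
}

// Stub: returns a fixed value, regardless of input.
const paymentStub = {
  charge: () => ({ success: true }),
};

// Spy: records every call so the test can assert on the interaction.
const emailSpy = {
  calls: [],
  send(to, subject) {
    this.calls.push({ to, subject });
  },
};

const ok = checkout(
  { total: 100, customer: 'a@example.com' },
  paymentStub,
  emailSpy
);
```

A framework such as Jest or Sinon can generate these doubles for you, but hand-rolling them shows there is nothing magic involved: a stub is a canned return value, and a spy is a recorded call list.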

05

Acceptance testing

Acceptance tests verify the system from the user's perspective: does it do what the user expects? They run against the full application stack and test complete user journeys. They are the most expensive test type — slow to run, slow to write, and fragile when UI changes — so use them sparingly.

Behavior-Driven Development (BDD) structures acceptance tests as specifications written in plain language, using the Gherkin syntax:

```gherkin
# checkout.feature
Feature: Checkout
  As a customer
  I want to complete a purchase
  So that I can receive my order

  Scenario: Premium discount at checkout
    Given I have 2 items in my cart
    And I am a premium member
    When I complete the checkout
    Then I should receive a 10% discount
    And an order confirmation email
```

BDD specifications serve a dual purpose: they are both executable tests and living documentation. When the test passes, the specification is verified; when it fails, the specification documents exactly what is broken.
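A tool such as Cucumber.js makes a Gherkin scenario executable by binding each step to a step-definition function. The sketch below hand-rolls that glue against a hypothetical in-memory checkout rather than a real application, purely to show the shape of the binding:

```javascript
// Hand-rolled step definitions for the checkout scenario above.
// In Cucumber.js these would be registered via Given/When/Then;
// the cart and discount logic here are assumed for illustration.
const world = { cart: { items: [], total: 0 }, user: {}, order: null };

const steps = {
  'I have 2 items in my cart': () => {
    world.cart.items = ['book', 'pen'];
    world.cart.total = 100;
  },
  'I am a premium member': () => {
    world.user.tier = 'premium';
  },
  'I complete the checkout': () => {
    const rate = world.user.tier === 'premium' ? 0.10 : 0;
    world.order = {
      total: world.cart.total * (1 - rate),
      emailSent: true, // stands in for the confirmation-email side effect
    };
  },
};

// Execute the scenario's steps in order, as a BDD runner would,
// then the Then steps assert on world.order.
Object.values(steps).forEach((step) => step());
```

The step text is the contract: product owners read the .feature file, while developers maintain the bindings, and both describe the same behavior.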

06

Further reading

DevOps Handbook — Chapter 10

Enable Fast and Reliable Automated Testing. The full treatment of test automation in the context of the deployment pipeline.

Growing Object-Oriented Software — Freeman & Pryce

The book on test-driven development. How to design systems that are testable by construction. Mocks, fakes, and outside-in TDD.

xUnit Test Patterns — Meszaros

The comprehensive reference on test patterns. Test doubles, test organization, and the vocabulary of automated testing.

The Art of Unit Testing — Osherove

Practical guide to writing good unit tests. What makes a test maintainable vs brittle. Test naming and organization.