
Node.js: Building Scalable Microservices

A deep technical guide to Node.js architecture, frameworks, concurrency patterns, and production-grade performance optimization. Based on 26 microservices built in production with ZeroMQ messaging and Redis queues.

Node.js · Express · Fastify · NestJS · TypeScript · ZeroMQ · Redis · Docker

Table of Contents

  1. The Event Loop and Non-Blocking I/O
  2. Framework Comparison: Express, Fastify, NestJS
  3. ES Modules and TypeScript Integration
  4. Clustering and Worker Threads
  5. Stream Processing
  6. Error Handling Patterns
  7. Performance Optimization
  8. Security Best Practices (OWASP)
  9. Dependency Management
  10. Testing with Jest
  11. Modern Node.js (22+ / 25+) Built-in Features

1. The Event Loop and Non-Blocking I/O

The event loop is the heart of Node.js. It is a single-threaded mechanism that orchestrates asynchronous operations through a series of phases: timers, pending callbacks, idle/prepare, poll, check, and close callbacks. Understanding these phases is critical for building microservices that handle thousands of concurrent connections without thread overhead.

Event Loop Phases

Each iteration (or "tick") of the event loop processes callbacks in a fixed order. The poll phase is where most I/O callbacks execute. setTimeout and setInterval fire in the timers phase, while setImmediate executes in the check phase. The process.nextTick queue and the microtask queue (Promises) are drained after every completed callback (since Node.js 11), which gives them higher priority than any phase callback.

// Execution order demonstration
setTimeout(() => console.log('1: timer'), 0);
setImmediate(() => console.log('2: immediate'));
process.nextTick(() => console.log('3: nextTick'));
Promise.resolve().then(() => console.log('4: microtask'));

// Output: 3: nextTick, 4: microtask, 1: timer, 2: immediate
// nextTick and microtasks always run before I/O phases

Avoiding Event Loop Blocking

A single CPU-intensive operation blocks the entire event loop. In production microservices, this manifests as increased latency across all endpoints. Common offenders include JSON parsing of large payloads, synchronous cryptographic operations, and complex regex evaluation. The --max-old-space-size flag controls heap limits but does not prevent blocking. Instead, offload heavy computation to worker threads or child processes.

In production, we monitored event loop lag using monitorEventLoopDelay() from perf_hooks. Any service exceeding 50ms p99 lag triggered an alert, prompting us to identify and extract blocking operations into dedicated worker threads.
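
The monitoring described above can be sketched with perf_hooks; the 50ms threshold and 10-second reporting interval are illustrative values taken from the text, not defaults:

```javascript
import { monitorEventLoopDelay } from 'node:perf_hooks';

// Sample event loop delay every 20ms into a histogram
const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

const monitor = setInterval(() => {
  const p99ms = histogram.percentile(99) / 1e6; // histogram values are nanoseconds
  if (p99ms > 50) {
    console.warn(`Event loop p99 lag ${p99ms.toFixed(1)}ms exceeds 50ms threshold`);
  }
  histogram.reset();
}, 10_000);
monitor.unref(); // don't keep the process alive just for monitoring
```

In a real service the console.warn would be replaced by a metrics gauge or alerting hook.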

libuv and the Thread Pool

Node.js delegates certain operations to libuv's thread pool (default size: 4). DNS lookups via dns.lookup(), file system operations, and some crypto functions (pbkdf2, scrypt, randomBytes) use this pool. When the pool is saturated, operations queue up, causing unexpected latency. Set UV_THREADPOOL_SIZE (max 1024) based on your workload. For microservices performing heavy file I/O, increasing the pool size to 16-32 threads can significantly reduce tail latency.

2. Framework Comparison: Express, Fastify, NestJS

Express

Express remains the most widely adopted Node.js framework. Its middleware pipeline is simple: each middleware calls next() to pass control downstream. However, Express has no built-in schema validation, no native TypeScript support, and its router uses linear matching, which degrades at scale. For microservices handling fewer than 10,000 req/s with simple routing, Express is adequate. Beyond that, consider Fastify.

// Express middleware pipeline
app.use(helmet());
app.use(cors({ origin: config.allowedOrigins }));
app.use(express.json({ limit: '1mb' }));
app.use(requestId());        // attach X-Request-Id
app.use(requestLogger());    // structured logging
app.use('/api/v1', router);
app.use(errorHandler());     // centralized error handling

Fastify

Fastify achieves 2-3x higher throughput than Express by using a radix tree router, JSON schema-based serialization (via fast-json-stringify), and an optimized plugin architecture. Its encapsulation model prevents plugin conflicts and enables genuine modular composition. Schema validation is not optional decoration but a first-class performance feature: Fastify compiles JSON schemas into optimized validation and serialization functions at startup.

// Fastify with schema-based validation and serialization
fastify.route({
  method: 'POST',
  url: '/api/users',
  schema: {
    body: {
      type: 'object',
      required: ['email', 'name'],
      properties: {
        email: { type: 'string', format: 'email' },
        name: { type: 'string', minLength: 2, maxLength: 100 }
      }
    },
    response: {
      201: {
        type: 'object',
        properties: {
          id: { type: 'string', format: 'uuid' },
          email: { type: 'string' },
          createdAt: { type: 'string', format: 'date-time' }
        }
      }
    }
  },
  handler: async (request, reply) => {
    const user = await userService.create(request.body);
    reply.code(201).send(user);
  }
});

NestJS

NestJS provides an opinionated architecture inspired by Angular: modules, controllers, services, guards, interceptors, pipes, and exception filters. It uses decorators and dependency injection, making it ideal for large teams and monorepo setups. Under the hood, NestJS can use either Express or Fastify as its HTTP adapter. For the platform's 26 microservices monorepo, NestJS with Fastify adapter gave us both architectural consistency and high throughput.

// NestJS controller with decorators and DI
@Controller('subscriptions')
@UseGuards(JwtAuthGuard, RolesGuard)
export class SubscriptionController {
  constructor(
    private readonly subscriptionService: SubscriptionService,
    private readonly eventBus: EventBus,
  ) {}

  @Post()
  @Roles(Role.ADMIN)
  @HttpCode(HttpStatus.CREATED)
  async create(@Body() dto: CreateSubscriptionDto): Promise<Subscription> {
    const sub = await this.subscriptionService.create(dto);
    this.eventBus.publish(new SubscriptionCreatedEvent(sub));
    return sub;
  }
}

In production, we started with Express but migrated critical microservices (payments, scheduling, notifications) to NestJS with the Fastify adapter. The migration improved request throughput by 2.4x on the scheduling service, which handles 8,000+ concurrent gym class bookings during peak hours. NestJS's module system let us share validation pipes, auth guards, and logging interceptors across all 16+ services via a shared library package.

3. ES Modules and TypeScript Integration

Node.js supports ES Modules natively since v12 (stable in v16+). Set "type": "module" in package.json to treat all .js files as ESM, or use .mjs extensions. CommonJS interop works through createRequire() or dynamic import(). For new microservices, ESM is the correct choice: it enables tree-shaking in bundlers, provides static analysis, and aligns with the browser module standard.

TypeScript Compilation Strategies

For microservices, choose between tsc (reference implementation, full type checking), esbuild (50-100x faster, no type checking), or swc (Rust-based, 20x faster). The production pattern: use tsc --noEmit for type checking in CI, and esbuild or swc for fast builds. With NestJS monorepos, tsc project references enable incremental builds across packages.

// tsconfig.json for a Node.js microservice (ESM output)
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "declaration": true,
    "declarationMap": true,
    "sourceMap": true,
    "incremental": true,
    "tsBuildInfoFile": "./dist/.tsbuildinfo"
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist", "**/*.spec.ts"]
}

When using ESM with TypeScript, all relative imports must include the .js extension (even though the source files are .ts). This is because TypeScript does not rewrite import specifiers. The "moduleResolution": "NodeNext" setting enforces this requirement at the type level.

In production, we migrated from CommonJS to ESM across 26 microservices incrementally. The key was setting "type": "module" per package and using tsc project references with esbuild for fast builds. CI ran tsc --noEmit on every PR to catch type errors without slowing the build. Total build time for the entire monorepo dropped from 4 minutes to 35 seconds after switching from tsc emit to esbuild.

4. Clustering and Worker Threads

The Cluster Module

The cluster module forks the Node.js process into multiple workers sharing the same server port. The primary process distributes connections using round-robin (default on Linux) or OS-level load balancing. Each worker runs its own event loop and V8 isolate. For HTTP microservices, forking one worker per CPU core saturates the available hardware. However, memory usage scales with the worker count, and workers share no state.

import cluster from 'node:cluster';
import { availableParallelism } from 'node:os';

if (cluster.isPrimary) {
  const numWorkers = availableParallelism();
  console.log(`Primary ${process.pid}: forking ${numWorkers} workers`);

  for (let i = 0; i < numWorkers; i++) cluster.fork();

  cluster.on('exit', (worker, code) => {
    console.error(`Worker ${worker.process.pid} died (code ${code}). Restarting...`);
    cluster.fork(); // auto-restart crashed workers
  });
} else {
  // Each worker runs the full HTTP server
  await startServer();
}

Worker Threads for CPU-Bound Tasks

Unlike clustering (separate processes), worker threads share the same process memory via SharedArrayBuffer and Atomics. Use worker threads for CPU-intensive operations: image processing, PDF generation, cryptographic hashing, or complex data transformations. Communication happens through MessagePort using the structured clone algorithm. Transfer ArrayBuffers instead of copying them for zero-copy performance.

// worker-pool.ts - reusable worker thread pool
import { Worker } from 'node:worker_threads';
import { EventEmitter } from 'node:events';

interface Task {
  data: unknown;
  resolve: (value: unknown) => void;
  reject: (err: Error) => void;
}

class WorkerPool extends EventEmitter {
  private queue: Task[] = [];
  private freeWorkers: Worker[] = [];
  private current = new Map<Worker, Task>(); // tracks each worker's active task

  constructor(private script: string, size: number) {
    super();
    for (let i = 0; i < size; i++) this.addWorker();
  }

  private addWorker() {
    const w = new Worker(this.script);
    w.on('message', (result) => {
      this.current.get(w)?.resolve(result);
      this.current.delete(w);
      this.freeWorkers.push(w);
      this.drain();
    });
    w.on('error', (err) => {
      this.current.get(w)?.reject(err);
      this.current.delete(w);
      this.addWorker(); // replace the crashed worker
    });
    this.freeWorkers.push(w);
  }

  run(data: unknown): Promise<unknown> {
    return new Promise((resolve, reject) => {
      this.queue.push({ data, resolve, reject });
      this.drain();
    });
  }

  private drain() {
    while (this.queue.length && this.freeWorkers.length) {
      const w = this.freeWorkers.pop()!;
      const task = this.queue.shift()!;
      this.current.set(w, task);
      w.postMessage(task.data);
    }
  }
}

In production, our PDF invoice generation service initially blocked the event loop for 200-800ms per document. Moving it to a worker thread pool of 4 workers reduced p99 latency on the main HTTP endpoints from 1200ms to 45ms while maintaining the same throughput for invoice generation.
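
The shared-memory model described in this section can be sketched with a SharedArrayBuffer and Atomics; the inline worker (eval: true) is for brevity only, a real service would load a worker file:

```javascript
import { Worker } from 'node:worker_threads';

const shared = new SharedArrayBuffer(4);
const view = new Int32Array(shared);

// The worker mutates the exact memory the main thread sees - nothing is copied
const worker = new Worker(
  `const { workerData } = require('node:worker_threads');
   const v = new Int32Array(workerData);
   Atomics.add(v, 0, 41);
   Atomics.notify(v, 0);`,
  { eval: true, workerData: shared }
);

// Block (max 2s) until the worker updates the shared counter;
// if the worker already ran, wait() returns 'not-equal' immediately
Atomics.wait(view, 0, 0, 2000);
console.log(Atomics.load(view, 0)); // 41
await worker.terminate();
```

Note that Atomics.wait is permitted on the Node.js main thread (unlike in browsers), which makes this race-free without extra synchronization.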

5. Stream Processing

Streams are Node.js's most powerful abstraction for handling large datasets without loading everything into memory. The four stream types (Readable, Writable, Duplex, Transform) implement backpressure automatically. When a writable stream's internal buffer exceeds highWaterMark, it signals the readable stream to pause, preventing memory exhaustion.
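
Backpressure is visible directly in the low-level write API: write() returns false once the internal buffer passes highWaterMark, and the 'drain' event signals when to resume. A minimal sketch (the temp-file path and sizes are illustrative):

```javascript
import { createWriteStream } from 'node:fs';
import { once } from 'node:events';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

const outPath = join(tmpdir(), 'backpressure-demo.txt');
const out = createWriteStream(outPath, { highWaterMark: 16 * 1024 }); // small buffer

for (let i = 0; i < 1000; i++) {
  const ok = out.write(`line ${i}: ${'x'.repeat(100)}\n`);
  if (!ok) await once(out, 'drain'); // stop producing until the buffer empties
}
out.end();
await once(out, 'finish');
console.log('wrote 1000 lines with bounded memory');
```

Ignoring the false return value never loses data, but it lets the internal buffer grow without bound, which is exactly the memory exhaustion backpressure exists to prevent.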

The pipeline() Function

Always use pipeline() from node:stream/promises instead of .pipe(). The pipeline function handles error propagation, stream cleanup, and backpressure correctly. A broken pipe in the middle of a chain properly destroys all streams and frees resources. With .pipe(), errors on intermediate streams can leave resources dangling.

import { pipeline } from 'node:stream/promises';
import { createReadStream, createWriteStream } from 'node:fs';
import { createGzip } from 'node:zlib';
import { Transform } from 'node:stream';

// CSV processing pipeline: read -> parse -> transform -> compress -> write
let remainder = '';
const csvParser = new Transform({
  transform(chunk, _encoding, callback) {
    // Buffer the trailing partial line: a chunk boundary can split a row
    const lines = (remainder + chunk.toString()).split('\n');
    remainder = lines.pop() ?? '';
    for (const line of lines.filter(Boolean)) {
      const [id, name, amount] = line.split(',');
      this.push(JSON.stringify({ id, name, amount: parseFloat(amount) }) + '\n');
    }
    callback();
  },
  flush(callback) {
    // Emit the final row if the file does not end with a newline
    if (remainder) {
      const [id, name, amount] = remainder.split(',');
      this.push(JSON.stringify({ id, name, amount: parseFloat(amount) }) + '\n');
    }
    callback();
  }
});

await pipeline(
  createReadStream('./transactions.csv', { highWaterMark: 64 * 1024 }),
  csvParser,
  createGzip(),
  createWriteStream('./transactions.json.gz')
);

Async Iterators with Streams

Since Node.js v10, readable streams implement the async iterable protocol. This enables processing streams with for await...of loops, which is cleaner than event listeners for sequential processing. Combined with Readable.from(), you can create streams from any async iterable, bridging the gap between pull-based and push-based data models.

import { createReadStream } from 'node:fs';
import { createInterface } from 'node:readline';

// Process a large log file line by line with constant memory usage
const rl = createInterface({
  input: createReadStream('/var/log/app/access.log'),
  crlfDelay: Infinity
});

const errorCounts = new Map<string, number>();
for await (const line of rl) {
  const match = line.match(/HTTP\/\d\.\d" (\d{3})/);
  if (match && parseInt(match[1]) >= 500) {
    const code = match[1];
    errorCounts.set(code, (errorCounts.get(code) || 0) + 1);
  }
}

In production, the reporting microservice used streams to generate CSV exports of membership data (500,000+ rows). Using pipeline() with a Transform stream and createGzip(), the service streamed compressed data directly to the HTTP response. Memory usage stayed under 50MB regardless of dataset size, compared to 1.2GB when loading everything into memory with JSON.stringify.
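
The Readable.from() bridge mentioned above can be sketched in a few lines: any async generator (for example, paginated database fetches) becomes a pull-based stream, and the stream itself is consumable with for await:

```javascript
import { Readable } from 'node:stream';

// Hypothetical row source: could be paginated DB queries or an API cursor
async function* rows() {
  for (let i = 1; i <= 3; i++) yield `row-${i}\n`;
}

const stream = Readable.from(rows());

let output = '';
for await (const chunk of stream) {
  output += chunk; // chunks arrive in order, pulled on demand (backpressure-aware)
}
console.log(output);
```

Because the generator is only pulled as the consumer demands data, slow consumers automatically throttle the producer.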

6. Error Handling Patterns

Error handling in Node.js microservices requires a layered strategy: operational errors (expected failures like network timeouts or validation errors) must be caught and handled gracefully, while programmer errors (bugs like null reference or type mismatches) should crash the process and let the orchestrator restart it.

Custom Error Hierarchy

// errors/base.ts
export abstract class AppError extends Error {
  abstract readonly statusCode: number;
  abstract readonly isOperational: boolean;

  constructor(message: string, public readonly context?: Record<string, unknown>) {
    super(message);
    this.name = this.constructor.name;
    Error.captureStackTrace(this, this.constructor);
  }
}

export class NotFoundError extends AppError {
  readonly statusCode = 404;
  readonly isOperational = true;
}

export class ValidationError extends AppError {
  readonly statusCode = 400;
  readonly isOperational = true;
  constructor(message: string, public readonly fields: Record<string, string>) {
    super(message, { fields });
  }
}

export class ExternalServiceError extends AppError {
  readonly statusCode = 502;
  readonly isOperational = true;
  constructor(service: string, cause: Error) {
    super(`External service failure: ${service}`, { service, cause: cause.message });
  }
}

Global Error Boundaries

Every microservice must register handlers for uncaughtException, unhandledRejection, and signal events. On an uncaught exception, log the error, flush telemetry, close database connections gracefully, and exit with code 1. Kubernetes or PM2 restarts the process. Never attempt to continue running after an uncaught exception; the process state is unreliable.

// Graceful shutdown handler
const shutdown = async (signal: string, code = 0) => {
  logger.info({ signal }, 'Shutdown signal received');
  server.close();                      // stop accepting connections
  await db.end();                      // close database pool
  await redis.quit();                  // close Redis connection
  await telemetry.flush();             // flush traces/metrics
  process.exit(code);
};

process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));

process.on('uncaughtException', (err) => {
  logger.fatal({ err }, 'Uncaught exception - shutting down');
  shutdown('uncaughtException', 1); // exit code 1 so the orchestrator restarts us
});

process.on('unhandledRejection', (reason) => {
  logger.fatal({ err: reason }, 'Unhandled rejection - shutting down');
  shutdown('unhandledRejection', 1);
});

In production, every microservice used the same AppError hierarchy shared via the monorepo's common package. The centralized error handler mapped isOperational errors to structured JSON responses with correlation IDs, while programmer errors triggered PagerDuty alerts. This pattern caught a critical Redis connection leak in the scheduling service within minutes: the ExternalServiceError rate spiked, the alert fired, and we identified a missing .quit() call in a background job.

7. Performance Optimization

Profiling with V8 Inspector

Start your service with --inspect and connect Chrome DevTools to profile CPU and memory. For production profiling without DevTools, use --cpu-prof and --heap-prof flags to generate V8 profile files, then analyze them with speedscope or 0x. The clinic.js suite (Doctor, Bubbleprof, Flame) automates bottleneck detection.
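
A minimal profiling session with the flags above might look like this (the output directory is an illustrative choice; profile filenames are auto-generated):

```shell
# Generate a V8 CPU profile without attaching DevTools
PROFILE_DIR="$(mktemp -d)"
node --cpu-prof --cpu-prof-dir="$PROFILE_DIR" -e "let s = 0; for (let i = 0; i < 1e7; i++) s += i;"

# A .cpuprofile file now exists; load it in speedscope or Chrome DevTools
ls "$PROFILE_DIR"/*.cpuprofile
```

The same pattern works for --heap-prof, which writes a .heapprofile file on exit.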

Detecting Memory Leaks

Memory leaks in long-running microservices typically come from: unbounded caches (use LRU with max size), event listener accumulation (always call removeListener), closures capturing large scopes, and global arrays that grow over time. Monitor process.memoryUsage() and set up heap snapshot diffing in staging to catch leaks before production.

// Periodic memory monitoring
setInterval(() => {
  const { heapUsed, heapTotal, rss, external } = process.memoryUsage();
  metrics.gauge('nodejs.heap_used', heapUsed);
  metrics.gauge('nodejs.heap_total', heapTotal);
  metrics.gauge('nodejs.rss', rss);
  metrics.gauge('nodejs.external', external);

  // Alert if heap usage exceeds 85% of total
  if (heapUsed / heapTotal > 0.85) {
    logger.warn({ heapUsed, heapTotal }, 'High heap utilization');
  }
}, 30_000);
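
The unbounded-cache leak mentioned above is avoided by capping the cache with LRU eviction. A minimal sketch relying on Map's insertion-order guarantee (production code would typically reach for a library such as lru-cache):

```javascript
// Minimal LRU: Map iterates in insertion order, so the first key is the
// least recently used as long as we re-insert on every access
class LRUCache {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value); // refresh recency
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.maxSize) {
      this.map.delete(this.map.keys().next().value); // evict least recently used
    }
    this.map.set(key, value);
  }
}

const cache = new LRUCache(2);
cache.set('a', 1);
cache.set('b', 2);
cache.get('a');    // 'a' is now most recently used
cache.set('c', 3); // evicts 'b', not 'a'
console.log(cache.get('b')); // undefined
console.log(cache.get('a')); // 1
```

Because size is bounded, heap usage stays flat no matter how many distinct keys flow through the service.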

Key Optimization Techniques

In production, profiling the payments microservice with clinic flame revealed that JSON.stringify on large subscription objects consumed 18% of CPU time. Switching to fast-json-stringify with precompiled schemas reduced serialization time by 12x and cut p95 response time from 120ms to 34ms. We also increased UV_THREADPOOL_SIZE to 16 on the file-export service, which eliminated a mysterious 2-second latency spike that occurred when 4+ concurrent CSV exports saturated the default thread pool.

8. Security Best Practices (OWASP)

Node.js microservices are exposed to the same attack vectors as any HTTP service, plus language-specific risks like prototype pollution and ReDoS. The OWASP Top 10 and the OWASP Node.js Security Cheat Sheet provide the foundation for a defense-in-depth approach that addresses each layer.

OWASP Top 10 in Node.js Context

The OWASP Top 10 maps directly to Node.js patterns: A01 (Broken Access Control) requires proper JWT validation and role guards in every route; A02 (Cryptographic Failures) means using crypto.timingSafeEqual for comparisons and avoiding deprecated algorithms; A03 (Injection) demands parameterized queries (never string concatenation for SQL/NoSQL); A04 (Insecure Design) is mitigated by threat modeling each microservice boundary; A05 (Security Misconfiguration) is prevented with Helmet defaults and strict CORS policies.
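
The A02 point about crypto.timingSafeEqual can be sketched as HMAC webhook-signature verification; the secret and payload below are illustrative:

```javascript
import { createHmac, timingSafeEqual } from 'node:crypto';

function verifySignature(payload, signatureHex, secret) {
  const expected = createHmac('sha256', secret).update(payload).digest();
  const received = Buffer.from(signatureHex, 'hex');
  // timingSafeEqual throws on length mismatch, so check length first
  if (received.length !== expected.length) return false;
  // Constant-time comparison prevents timing side channels
  return timingSafeEqual(expected, received);
}

const secret = 'webhook-secret'; // hypothetical shared secret
const payload = '{"event":"subscription.created"}';
const goodSig = createHmac('sha256', secret).update(payload).digest('hex');

console.log(verifySignature(payload, goodSig, secret));    // true
console.log(verifySignature(payload, 'deadbeef', secret)); // false
```

A plain `===` comparison would leak how many leading bytes matched through response timing, which is exactly what an attacker forging signatures needs.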

In production, every microservice passed through a security checklist aligned with OWASP guidelines before deployment: dependency audit with zero critical/high vulnerabilities, Helmet headers enabled, rate limiting configured, input schemas validated, and secrets stored in Kubernetes sealed secrets. The result: zero security incidents across 16+ services over 3 years of operation.

9. Dependency Management

In a microservices architecture, dependency management is a critical operational concern. Each service carries its own node_modules, and a single vulnerable or outdated transitive dependency can compromise the entire system. A disciplined approach to dependency hygiene prevents supply chain attacks, reduces container image sizes, and ensures reproducible builds.

Lockfiles and Deterministic Installs

Always commit package-lock.json (npm) or pnpm-lock.yaml (pnpm) to version control. Use npm ci (not npm install) in CI/CD pipelines for deterministic, reproducible installs. The ci command removes node_modules before installing, ensuring the lockfile is the single source of truth. For monorepos, pnpm's content-addressable store deduplicates packages across workspaces, reducing disk usage by 60-80%.

# CI pipeline dependency install
npm ci --ignore-scripts        # skip postinstall scripts (security)
npm audit --audit-level=high   # fail on high/critical vulnerabilities
npx --yes license-checker --failOn "GPL;AGPL"  # license compliance

# For pnpm monorepos
pnpm install --frozen-lockfile
pnpm audit --audit-level high
pnpm -r exec depcheck          # find unused dependencies per workspace

Automated Updates and Vulnerability Scanning

Use Dependabot or Renovate to automate dependency updates. Configure grouping rules to batch minor/patch updates weekly and handle major updates individually. Integrate npm audit or snyk test into the CI pipeline to block merges with known vulnerabilities. For production containers, use multi-stage Docker builds and install only production dependencies (npm ci --omit=dev) to minimize the attack surface.

# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      minor-and-patch:
        update-types: ["minor", "patch"]
    open-pull-requests-limit: 10
    reviewers: ["josenobile"]
    labels: ["dependencies"]

Monorepo Dependency Strategy

In a monorepo with multiple microservices, enforce a single version policy for shared dependencies (TypeScript, ESLint, Prettier, testing libraries) to prevent version conflicts. Use workspace protocols ("workspace:*" in pnpm) for internal packages. The syncpack tool validates version consistency across all package.json files in the monorepo.

In production, we used pnpm workspaces for 26 microservices. Renovate opened grouped PRs for minor updates weekly, and CI blocked any PR with npm audit high/critical findings. We maintained a shared @myorg/eslint-config and @myorg/tsconfig package to enforce consistent tooling. When the event-stream supply chain attack hit the npm ecosystem, our --ignore-scripts policy and pinned lockfiles meant zero impact across all services.

10. Testing with Jest

Testing microservices requires a layered approach: unit tests validate individual functions and classes, integration tests verify interactions between components (database, message queue, external APIs), and end-to-end tests confirm the full request-response cycle. Jest is the standard testing framework for Node.js and TypeScript, offering built-in mocking, code coverage, snapshot testing, and parallel test execution.

Unit Testing with Jest and TypeScript

Configure Jest with ts-jest or @swc/jest for TypeScript support. Use @swc/jest for faster test execution (10-20x faster than ts-jest). Organize tests next to source files (*.spec.ts) or in a parallel __tests__ directory. Mock external dependencies using jest.mock() and inject test doubles through constructor injection (NestJS's DI makes this trivial).

// subscription.service.spec.ts
import { Test } from '@nestjs/testing';
import { SubscriptionService } from './subscription.service';
import { SubscriptionRepository } from './subscription.repository';
import { EventBus } from '../events/event-bus';

describe('SubscriptionService', () => {
  let service: SubscriptionService;
  let repo: jest.Mocked<SubscriptionRepository>;
  let eventBus: jest.Mocked<EventBus>;

  beforeEach(async () => {
    const module = await Test.createTestingModule({
      providers: [
        SubscriptionService,
        { provide: SubscriptionRepository, useValue: { create: jest.fn(), findById: jest.fn() } },
        { provide: EventBus, useValue: { publish: jest.fn() } },
      ],
    }).compile();

    service = module.get(SubscriptionService);
    repo = module.get(SubscriptionRepository);
    eventBus = module.get(EventBus);
  });

  it('should create a subscription and publish an event', async () => {
    const dto = { userId: 'u1', planId: 'plan-monthly', gymId: 'gym-123' };
    const expected = { id: 'sub-1', ...dto, status: 'active', createdAt: new Date() };
    repo.create.mockResolvedValue(expected);

    const result = await service.create(dto);

    expect(result).toEqual(expected);
    expect(repo.create).toHaveBeenCalledWith(dto);
    expect(eventBus.publish).toHaveBeenCalledWith(
      expect.objectContaining({ subscriptionId: 'sub-1' })
    );
  });

  it('should throw NotFoundError for non-existent subscription', async () => {
    repo.findById.mockResolvedValue(null);
    await expect(service.findById('invalid')).rejects.toThrow('Subscription not found');
  });
});

Integration Testing

Integration tests verify that microservice components work together correctly with real databases and message queues. Use testcontainers to spin up ephemeral Docker containers (MySQL, Redis, RabbitMQ) for each test suite. NestJS's Test.createTestingModule() bootstraps the full DI container, enabling realistic tests without manual wiring. Run integration tests in a separate Jest project with longer timeouts.

// jest.config.ts - multi-project configuration
export default {
  projects: [
    {
      displayName: 'unit',
      testMatch: ['<rootDir>/src/**/*.spec.ts'],
      transform: { '^.+\\.tsx?$': ['@swc/jest'] },
      moduleNameMapper: { '^(\\.{1,2}/.*)\\.js$': '$1' },
    },
    {
      displayName: 'integration',
      testMatch: ['<rootDir>/test/**/*.integration.ts'],
      transform: { '^.+\\.tsx?$': ['@swc/jest'] },
      testTimeout: 30_000,
      globalSetup: '<rootDir>/test/setup.ts',
      globalTeardown: '<rootDir>/test/teardown.ts',
    },
  ],
  collectCoverageFrom: ['src/**/*.ts', '!src/**/*.spec.ts', '!src/**/index.ts'],
  coverageThreshold: {
    global: { branches: 80, functions: 85, lines: 85, statements: 85 },
  },
};

Testing Patterns for Microservices


In production, we enforced 85% code coverage across all 26 microservices via CI gates. Each service had ~200-400 unit tests running in under 10 seconds (using @swc/jest) and 30-50 integration tests using testcontainers with MySQL and Redis. Contract tests between the payments service and the subscription service caught 3 breaking API changes before they reached staging. The total test suite for the monorepo (4,000+ tests) ran in parallel in 90 seconds on CI.

11. Modern Node.js (22+ / 25+) Built-in Features

Node.js 22+ and 25+ ship with powerful built-in features that eliminate the need for many third-party dependencies. These additions make Node.js a more self-contained platform for building production services.

Built-in Test Runner (node:test)

The node:test module is stable since Node.js 22. It provides a zero-dependency alternative to Jest, with custom reporters, code coverage via --experimental-test-coverage, mock timers, and a --test flag for auto-discovering test files. For microservices that want to minimize dependencies, this is now a viable production choice.

// test/user.test.ts - using built-in node:test
import { describe, it, mock, beforeEach } from 'node:test';
import assert from 'node:assert/strict';

describe('UserService', () => {
  beforeEach(() => mock.restoreAll());

  it('creates a user with valid email', async () => {
    const mockDb = mock.fn(async () => ({ id: '1', email: 'test@example.com' }));
    const service = new UserService({ query: mockDb });
    const user = await service.create({ email: 'test@example.com' });
    assert.equal(user.email, 'test@example.com');
    assert.equal(mockDb.mock.callCount(), 1);
  });

  it('uses mock timers for TTL logic', async (t) => {
    t.mock.timers.enable({ apis: ['setTimeout'] });
    const cache = new TTLCache(60_000);
    cache.set('key', 'value');
    t.mock.timers.tick(61_000);
    assert.equal(cache.get('key'), undefined);
  });
});

// Run: node --test --test-reporter spec
// Coverage: node --test --experimental-test-coverage

Native TypeScript Execution

Node.js 25.2+ has stable type stripping, allowing you to run .ts files directly with node file.ts. No build step, no ts-node, no tsx needed. Node.js strips the type annotations at load time and executes the resulting JavaScript. For development workflows and scripts, this eliminates the compilation step entirely. Note: this performs type stripping only — it does not type-check. Use tsc --noEmit in CI for type safety.

// greeting.ts - runs directly with: node greeting.ts
interface User { name: string; role: 'admin' | 'user'; }

function greet(user: User): string {
  return `Hello, ${user.name} (${user.role})`;
}

console.log(greet({ name: 'Jose', role: 'admin' }));

Built-in SQLite (node:sqlite)

Node.js 22+ includes an experimental built-in node:sqlite module for embedded database operations. No need to install better-sqlite3 or compile native addons. Useful for local caching, embedded configuration stores, CLI tools, and test fixtures.

import { DatabaseSync } from 'node:sqlite';

const db = new DatabaseSync(':memory:');
db.exec('CREATE TABLE metrics (id INTEGER PRIMARY KEY, name TEXT, value REAL)');
const insert = db.prepare('INSERT INTO metrics (name, value) VALUES (?, ?)');
insert.run('cpu_usage', 45.2);
insert.run('memory_mb', 512.0);

const rows = db.prepare('SELECT * FROM metrics WHERE value > ?').all(40);
console.log(rows); // [{ id: 1, name: 'cpu_usage', value: 45.2 }, ...]

Permission Model

Node.js 22+ introduces a permission model for restricting file system, child process, and worker thread access at runtime. Use --permission to enable the model, then grant specific capabilities with --allow-fs-read, --allow-fs-write, --allow-child-process, and --allow-worker. This is a defense-in-depth layer for sandboxing untrusted code or limiting blast radius in production. Note that the model does not restrict network access.

// Run with restricted permissions:
// node --permission --allow-fs-read=/app/config --allow-fs-write=/app/logs app.js

// Attempting to read outside allowed paths throws ERR_ACCESS_DENIED
// Attempting to spawn child processes without --allow-child-process throws ERR_ACCESS_DENIED
// Attempting to create worker threads without --allow-worker throws ERR_ACCESS_DENIED

Watch Mode

node --watch is stable since Node.js 22. It automatically restarts the process when imported files change, eliminating the need for nodemon in development. Combine with --watch-path to restrict which directories are monitored.

# Development with built-in watch mode (no nodemon needed)
node --watch src/server.ts

# Watch specific paths only
node --watch-path=./src --watch-path=./config src/server.ts

Built-in WebSocket Client

Node.js 22+ enables the built-in WebSocket global by default (based on the undici implementation). No need to install the ws package for client-side WebSocket connections. The API matches the browser WebSocket standard.

// Built-in WebSocket client (Node.js 22+, no ws package needed)
const ws = new WebSocket('wss://api.example.com/stream');

ws.addEventListener('open', () => {
  ws.send(JSON.stringify({ subscribe: 'metrics' }));
});

ws.addEventListener('message', (event) => {
  const data = JSON.parse(event.data);
  console.log('Metric:', data);
});

These built-in features significantly reduce the dependency footprint of Node.js projects. In a new microservice, replacing Jest with node:test, nodemon with --watch, and the ws package with the built-in WebSocket eliminates three dependencies and their transitive trees. Combined with native TypeScript execution for development scripts, the toolchain becomes leaner and faster to set up.

Latest Updates (April 2026)

Node.js 24 LTS: The Current Production Standard

Node.js 24.15.0 LTS is the current recommended version, supported through April 30, 2028. Key additions include the stable Permission Model (simplified from --experimental-permission to --permission), native TypeScript execution with type stripping enabled by default for .ts files (no ts-node or build step needed), Undici 7 as the built-in HTTP client with improved protocol support and performance, and V8 13.6 bringing Float16Array and RegExp.escape(). The built-in test runner now automatically waits for subtests to finish, eliminating a common source of flaky tests.

Node.js 24: URLPattern, AsyncLocalStorage, and Test Runner

Node.js 24 also exposes the URLPattern API on the global object, enabling declarative URL routing and matching without external libraries or complex regex. AsyncLocalStorage now defaults to AsyncContextFrame, providing more efficient asynchronous context tracking for request-scoped data like user sessions and tracing IDs. The built-in test runner additionally supports parallel execution by default on multi-core systems, and V8 13.6 adds Atomics.pause() for finer-grained spin-wait control.
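
The AsyncLocalStorage pattern above - request-scoped context without threading parameters through every call - works on earlier Node versions too. A minimal sketch (the requestId field is illustrative):

```javascript
import { AsyncLocalStorage } from 'node:async_hooks';

const requestContext = new AsyncLocalStorage();

function currentRequestId() {
  return requestContext.getStore()?.requestId ?? 'none';
}

async function handler() {
  // Context survives across awaits without being passed as a parameter
  await new Promise((resolve) => setTimeout(resolve, 10));
  return currentRequestId();
}

// Everything inside run() - including async continuations - sees the store
const id = await requestContext.run({ requestId: 'req-123' }, handler);
console.log(id);                 // req-123
console.log(currentRequestId()); // none (outside any run scope)
```

In an HTTP service, a middleware would call run() with a per-request store so loggers and tracers anywhere in the call tree can read the current request ID.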

Node.js 20 End-of-Life: April 30, 2026

Node.js 20 reaches end-of-life on April 30, 2026. After this date, Node 20 will receive no further security patches or bug fixes. Teams still on Node 20 must upgrade to Node.js 24 LTS before the deadline. The upgrade path is straightforward: npm v11 (bundled with Node 24) is 65% faster than npm v9, and most applications require only dependency updates. Validate syntax with node --check and review the Node 24 release notes for removed or deprecated APIs.

The Node 20 EOL on April 30 is the most urgent action item. Upgrade to Node 24 LTS immediately if you have not already. The benefits are significant: native TypeScript execution eliminates the ts-node dependency, the stable Permission Model adds defense-in-depth security, and npm v11 dramatically speeds up installs. For new projects, Node 24 LTS with the built-in test runner, built-in WebSocket, and native .ts execution provides a dramatically leaner toolchain.
