Lighthouse: Web Performance Auditing and CI/CD Integration

A deep technical guide covering Core Web Vitals, performance scoring methodology, accessibility auditing, SEO checks, CI/CD integration, programmatic Node.js API, performance budgets, user flows, timespan mode, Chrome DevTools, and PageSpeed Insights API.

Keywords: Lighthouse, Core Web Vitals, LCP, INP, CLS, WCAG, axe-core, Lighthouse CI, GitHub Actions, GitLab CI, Node.js API, PageSpeed Insights, User Flows

1. Core Web Vitals (LCP, INP, CLS)

Largest Contentful Paint (LCP)

LCP measures loading performance: the time until the largest visible content element (image, heading, video poster) renders. Thresholds: good < 2.5s, needs improvement 2.5-4.0s, poor > 4.0s. Key optimizations: preload critical resources, optimize images (WebP/AVIF), server-side rendering, and CDN caching.

Good: < 2.5s | Needs Improvement: 2.5 - 4.0s | Poor: > 4.0s
<!-- Preload LCP image -->
<link rel="preload" as="image" href="/hero.webp" fetchpriority="high">

<!-- Responsive images with modern formats -->
<picture>
  <source srcset="/hero.avif" type="image/avif">
  <source srcset="/hero.webp" type="image/webp">
  <img src="/hero.jpg" alt="Hero" width="1200" height="600"
       loading="eager" fetchpriority="high" decoding="async">
</picture>

<!-- Inline critical CSS to avoid render-blocking -->
<style>
  /* Critical above-the-fold CSS inlined here */
  body { font-family: system-ui; margin: 0; }
  .hero { min-height: 60vh; }
</style>

<!-- Defer non-critical CSS -->
<link rel="preload" href="/styles.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'">

Interaction to Next Paint (INP)

INP replaced FID in March 2024 as the responsiveness Core Web Vital. It measures the latency of all interactions (clicks, taps, key presses) throughout the page lifecycle and reports the longest one (ignoring a few outliers on interaction-heavy pages). Thresholds: good < 200ms, needs improvement 200-500ms, poor > 500ms.

Good: < 200ms | Needs Improvement: 200 - 500ms | Poor: > 500ms
// Break up long tasks to improve INP
function processLargeDataset(items) {
  const CHUNK_SIZE = 50;
  let index = 0;

  function processChunk() {
    const end = Math.min(index + CHUNK_SIZE, items.length);
    for (let i = index; i < end; i++) {
      renderItem(items[i]);
    }
    index = end;

    if (index < items.length) {
      // Yield to main thread between chunks
      requestAnimationFrame(() => {
        setTimeout(processChunk, 0);
      });
    }
  }
  processChunk();
}

// Use scheduler.yield() (when available) for better INP
async function handleClick(event) {
  const button = event.currentTarget;

  // Step 1: immediate visual feedback
  button.classList.add('loading');

  // Yield to let the browser paint (feature-detect scheduler.yield)
  if (globalThis.scheduler?.yield) {
    await scheduler.yield();
  } else {
    await new Promise((resolve) => setTimeout(resolve, 0));
  }

  // Step 2: expensive computation
  const result = await computeExpensiveResult();
  updateUI(result);
}

// Monitor INP in production with web-vitals
import { onINP } from 'web-vitals';
onINP((metric) => {
  analytics.send('inp', {
    value: metric.value,
    element: metric.attribution?.interactionTarget,
    type: metric.attribution?.interactionType,
  });
});

Cumulative Layout Shift (CLS)

CLS measures visual stability: how much the visible content shifts unexpectedly during page load. Thresholds: good < 0.1, needs improvement 0.1-0.25, poor > 0.25. Main causes: images without dimensions, dynamic content injection, web fonts causing FOIT/FOUT.

Good: < 0.1 | Needs Improvement: 0.1 - 0.25 | Poor: > 0.25
<!-- Always set dimensions on images/videos -->
<img src="/photo.webp" width="800" height="600" alt="Photo">

<!-- Use aspect-ratio for responsive containers -->
<style>
  .video-embed {
    aspect-ratio: 16 / 9;
    width: 100%;
    background: #111;
  }

  /* Reserve space for dynamic ad slots */
  .ad-slot {
    min-height: 250px;
    contain: layout;
  }

  /* Font swap strategy to minimize CLS from web fonts */
  @font-face {
    font-family: 'CustomFont';
    src: url('/fonts/custom.woff2') format('woff2');
    font-display: optional; /* or swap for important fonts */
    size-adjust: 100.5%;    /* match fallback metrics */
    ascent-override: 95%;
    descent-override: 22%;
    line-gap-override: 0%;
  }
</style>

<!-- Preload critical fonts -->
<link rel="preload" href="/fonts/custom.woff2" as="font"
      type="font/woff2" crossorigin>

2. Performance Scoring Methodology

How Lighthouse Scores Work

The Lighthouse performance score is a weighted average of five metrics: FCP (10%), SI (10%), LCP (25%), TBT (30% -- the lab proxy for INP), and CLS (25%). Each metric value maps to a score via a log-normal distribution curve derived from real-world HTTP Archive data. Scores of 90-100 are green (good), 50-89 orange (needs improvement), and 0-49 red (poor).

// Lighthouse scoring weights (v12+)
const SCORING_WEIGHTS = {
  'first-contentful-paint':    0.10,  // FCP  - time to first content
  'speed-index':               0.10,  // SI   - visual loading speed
  'largest-contentful-paint':  0.25,  // LCP  - main content loaded
  'total-blocking-time':       0.30,  // TBT  - lab proxy for INP
  'cumulative-layout-shift':   0.25,  // CLS  - visual stability
};
// Total: 1.00

// Score thresholds (green/orange/red)
const THRESHOLDS = {
  performance:     { green: 90, orange: 50 },
  accessibility:   { green: 90, orange: 50 },
  'best-practices': { green: 90, orange: 50 },
  seo:             { green: 90, orange: 50 },
};

// Metric targets for a score of 90+
// FCP:  < 1.8s   (First Contentful Paint)
// SI:   < 3.4s   (Speed Index)
// LCP:  < 2.5s   (Largest Contentful Paint)
// TBT:  < 200ms  (Total Blocking Time)
// CLS:  < 0.1    (Cumulative Layout Shift)

// Log-normal scoring (simplified):
// score = 1 - LOGNORMAL_CDF(metricValue), with the curve shaped so that
// the HTTP Archive median scores 0.5 and the p10 value scores 0.9
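The log-normal mapping described above can be sketched in a few lines. This is a simplified approximation (not Lighthouse's exact implementation): it uses the standard Abramowitz & Stegun erf approximation and the constant erfc⁻¹(1/5) ≈ 0.9062 so that a metric value equal to the curve's median scores 0.5 and a value equal to its p10 scores 0.9. The p10/median numbers below are illustrative.

```javascript
// Error function approximation (Abramowitz & Stegun 7.1.26, max error ~1.5e-7)
function erf(x) {
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * ax);
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
      0.284496736) * t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-ax * ax));
}

// Map a metric value to a 0-1 score on a log-normal curve where
// value === median scores 0.5 and value === p10 scores 0.9.
function logNormalScore({ median, p10 }, value) {
  if (value <= 0) return 1;
  const INV_ERFC_ONE_FIFTH = 0.9061938024368232; // erfc^-1(1/5)
  const xLogRatio = Math.log(value / median);
  const p10LogRatio = -Math.log(p10 / median); // positive, since p10 < median
  const standardizedX = (xLogRatio * INV_ERFC_ONE_FIFTH) / p10LogRatio;
  return (1 - erf(standardizedX)) / 2;
}

// Illustrative curve values (e.g. mobile LCP: p10 = 2500 ms, median = 4000 ms)
console.log(logNormalScore({ median: 4000, p10: 2500 }, 2000).toFixed(2));
```

Because the curve is log-normal, improvements on an already-fast page move the score far less than the same absolute improvement on a slow page.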
Scores: Performance 100 | Accessibility 100 | Best Practices 100 | SEO 100
Jose's Experience: The josenobile.co homepage achieves 100/100/100/100 across all four Lighthouse categories. The health dashboard reaches 98/100 on desktop performance. I use the proof.sh pipeline to run Lighthouse CI on every deploy, ensuring no regression below these scores.

3. Accessibility Auditing (WCAG AA/AAA, axe-core)

axe-core Engine

Lighthouse uses the axe-core engine (by Deque Systems) to power its accessibility audits. axe-core checks ~80 rules covering WCAG 2.1 Level A and AA criteria. It catches issues like missing alt text, insufficient color contrast, missing form labels, improper ARIA usage, and keyboard traps.

// Run axe-core standalone for deeper analysis
import AxeBuilder from '@axe-core/playwright';
import { chromium } from 'playwright';

const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto('https://josenobile.co/');

const results = await new AxeBuilder({ page })
  .withTags(['wcag2a', 'wcag2aa', 'wcag21aa', 'best-practice'])
  .analyze();

console.log(`Violations: ${results.violations.length}`);
results.violations.forEach(v => {
  console.log(`[${v.impact}] ${v.id}: ${v.description}`);
  console.log(`  Affected: ${v.nodes.length} elements`);
  console.log(`  Fix: ${v.help}`);
});

await browser.close();

// axe-core rule tags mapped to WCAG levels:
// 'wcag2a'   - WCAG 2.0 Level A
// 'wcag2aa'  - WCAG 2.0 Level AA
// 'wcag21a'  - WCAG 2.1 Level A
// 'wcag21aa' - WCAG 2.1 Level AA
// 'wcag22aa' - WCAG 2.2 Level AA
// 'best-practice' - Not WCAG but recommended

Key Accessibility Audits

Lighthouse checks ~60 accessibility audits based on WCAG 2.1 AA criteria. It tests color contrast ratios, ARIA attributes, keyboard navigation, heading hierarchy, image alt text, form labels, and focus management. For WCAG AAA compliance, additional manual checks are needed: enhanced contrast (7:1 for normal text, 4.5:1 for large), no timing dependencies, and sign language for multimedia.

<!-- Skip navigation link -->
<a class="skip-link" href="#main-content">Skip to content</a>

<!-- Proper heading hierarchy -->
<h1>Page Title</h1>
  <h2>Section</h2>
    <h3>Subsection</h3>

<!-- Color contrast: WCAG AA requires 4.5:1 for normal text, 3:1 for large -->
<!-- WCAG AAA requires 7:1 for normal text, 4.5:1 for large text -->
<style>
  :root {
    --text: #e2e8f0;    /* on dark bg #0b0f1a = contrast 12.5:1 (AAA) */
    --muted: #94a3b8;   /* on dark bg = contrast 7.1:1 (AAA for large) */
  }
</style>

<!-- Accessible form with proper labeling -->
<form>
  <label for="email">Email address</label>
  <input type="email" id="email" name="email"
         autocomplete="email"
         aria-describedby="email-help"
         required>
  <span id="email-help">We'll never share your email.</span>

  <!-- Accessible custom select -->
  <label for="lang">Language</label>
  <select id="lang" aria-label="Language">
    <option value="en-US">English</option>
    <option value="es-CO">Espa&ntilde;ol</option>
  </select>
</form>

<!-- ARIA live region for dynamic updates -->
<div aria-live="polite" aria-atomic="true" id="status"></div>
<script>
  function updateStatus(message) {
    document.getElementById('status').textContent = message;
  }
</script>
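The contrast thresholds above can be checked programmatically with the WCAG 2.x relative-luminance formula. A sketch (function names are illustrative, not part of any library):

```javascript
// Relative luminance of a '#rrggbb' color, per the WCAG 2.x definition.
function relativeLuminance(hex) {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    // Linearize the sRGB channel value
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05), range 1-21.
function contrastRatio(hexA, hexB) {
  const [l1, l2] = [relativeLuminance(hexA), relativeLuminance(hexB)]
    .sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

// Classify a ratio against the thresholds for normal-size text
function wcagLevel(ratio) {
  if (ratio >= 7) return 'AAA';
  if (ratio >= 4.5) return 'AA';
  return 'fail';
}

console.log(contrastRatio('#ffffff', '#000000')); // white on black -> 21
```

This is the same computation Lighthouse's color-contrast audit performs for every text node it finds.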

4. SEO Audit (Meta, Structured Data, Robots, Canonical)

Meta Tags and Structured Data

Lighthouse SEO audits verify meta descriptions, canonical URLs, robots directives, hreflang for multilingual sites, structured data (JSON-LD), and mobile-friendliness. Each audit contributes to the SEO score.

<!-- Essential SEO meta tags -->
<title>Lighthouse Guide — Jose Nobile</title>
<meta name="description" content="Deep technical guide to Lighthouse:
  Core Web Vitals, performance scoring, accessibility auditing...">
<link rel="canonical" href="https://josenobile.co/guides/lighthouse/">
<meta name="robots" content="index,follow,max-snippet:-1">

<!-- Hreflang for bilingual content -->
<link rel="alternate" hreflang="en-US"
      href="https://josenobile.co/guides/lighthouse/">
<link rel="alternate" hreflang="es-CO"
      href="https://josenobile.co/guides/lighthouse/">

<!-- Open Graph for social sharing -->
<meta property="og:type" content="article">
<meta property="og:title" content="Lighthouse Guide">
<meta property="og:description" content="Core Web Vitals, CI/CD...">
<meta property="og:url" content="https://josenobile.co/guides/lighthouse/">
<meta name="twitter:card" content="summary_large_image">

Robots, Canonical, and Crawlability

Lighthouse checks that pages are not blocked from indexing, that canonical URLs are valid and self-referencing, that robots.txt does not accidentally block critical resources, and that the page is crawlable. It also verifies that link text is descriptive and that tap targets are properly sized for mobile.

# robots.txt
User-agent: *
Allow: /
Disallow: /api/
Disallow: /admin/
Sitemap: https://josenobile.co/sitemap.xml

# Lighthouse SEO checks include:
# [x] Page has a meta description
# [x] Document has a title element
# [x] Page has a valid canonical URL
# [x] Page is not blocked from indexing (no noindex)
# [x] robots.txt is valid and accessible
# [x] Links have descriptive text (not "click here")
# [x] Hreflang tags are valid
# [x] Document has a valid lang attribute
# [x] Tap targets are sized appropriately (>= 48x48px)
# [x] Font size is legible (>= 12px)

<!-- JSON-LD structured data (TechArticle) -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "TechArticle",
  "headline": "Lighthouse: Web Performance Auditing",
  "author": {
    "@type": "Person",
    "name": "Jose Nobile",
    "url": "https://josenobile.co/"
  },
  "publisher": {"@type": "Person", "name": "Jose Nobile"},
  "datePublished": "2026-03-17",
  "dateModified": "2026-04-23",
  "url": "https://josenobile.co/guides/lighthouse/",
  "about": ["Lighthouse", "Core Web Vitals", "Web Performance"]
}
</script>

5. Best Practices (HTTPS, CSP, SRI)

Security and Modern Web Standards

The Best Practices category checks HTTPS usage, Content Security Policy (CSP), Subresource Integrity (SRI), security headers, JavaScript errors, deprecated APIs, correct image aspect ratios, and modern web standards compliance.

# Security headers (_headers file for Cloudflare Pages)
/*
  Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
  X-Content-Type-Options: nosniff
  X-Frame-Options: DENY
  Referrer-Policy: strict-origin-when-cross-origin
  Permissions-Policy: camera=(), microphone=(), geolocation=()
  Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self'; connect-src 'self' https://api.josenobile.co; frame-ancestors 'none'; base-uri 'self'; form-action 'self'

Subresource Integrity (SRI)

SRI ensures that third-party resources (scripts, stylesheets) have not been tampered with. Lighthouse checks that external scripts include integrity attributes. SRI hashes verify the file content matches what was expected at build time, protecting against CDN compromises and supply-chain attacks.

<!-- SRI for third-party scripts -->
<script
  src="https://cdn.example.com/lib@3.2.1/lib.min.js"
  integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8w"
  crossorigin="anonymous">
</script>

<!-- SRI for stylesheets -->
<link
  rel="stylesheet"
  href="https://cdn.example.com/styles@2.0.0/main.css"
  integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm"
  crossorigin="anonymous">

# Generate SRI hash:
# openssl dgst -sha384 -binary lib.min.js | openssl base64 -A
# Or: shasum -b -a 384 lib.min.js | awk '{print $1}' | xxd -r -p | base64

# Lighthouse best practices checklist:
# [x] HTTPS with valid certificate
# [x] No mixed content (HTTP resources on HTTPS page)
# [x] No console errors in JavaScript
# [x] No deprecated APIs (document.write, etc.)
# [x] Images have correct aspect ratios
# [x] Charset declared early in <head>
# [x] No vulnerable JavaScript libraries
# [x] CSP prevents XSS attacks
# [x] SRI on external resources
# [x] Correct doctype declaration
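The openssl one-liner above has a Node equivalent, which is handy when SRI values are generated as part of a JavaScript build step. A sketch (the function name is illustrative):

```javascript
import { createHash } from 'node:crypto';

// Compute an SRI integrity value for raw file contents (Buffer or string).
// sha384 is the commonly recommended algorithm; sha256 and sha512 also work.
function sriFor(content, algorithm = 'sha384') {
  const digest = createHash(algorithm).update(content).digest('base64');
  return `${algorithm}-${digest}`;
}

// Usage in a build script (path is illustrative):
// const integrity = sriFor(readFileSync('./dist/lib.min.js'));
// -> 'sha384-...' ready to drop into the integrity attribute
```

Regenerate the hash whenever the file changes; a stale integrity value makes the browser refuse to load the resource.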

6. Lighthouse CI (@lhci/cli, GitHub Actions, GitLab CI)

Lighthouse CI Configuration

Lighthouse CI (LHCI) uses @lhci/cli to run audits automatically in your CI/CD pipeline, track scores over time, and fail builds when scores drop below thresholds. It supports assertions for enforcing performance budgets and can upload results to temporary storage or a self-hosted LHCI server.

// lighthouserc.js - @lhci/cli configuration
module.exports = {
  ci: {
    collect: {
      url: [
        'http://localhost:8080/',
        'http://localhost:8080/health/',
        'http://localhost:8080/guides/lighthouse/',
      ],
      numberOfRuns: 3,       // Run 3 times for median
      settings: {
        preset: 'desktop',
        chromeFlags: '--no-sandbox --headless',
        onlyCategories: ['performance', 'accessibility', 'best-practices', 'seo'],
      },
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['error', { minScore: 0.9 }],
        'categories:best-practices': ['error', { minScore: 0.9 }],
        'categories:seo': ['error', { minScore: 0.9 }],
        'largest-contentful-paint': ['warn', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
        'total-blocking-time': ['warn', { maxNumericValue: 200 }],
        'first-contentful-paint': ['warn', { maxNumericValue: 1800 }],
      },
    },
    upload: {
      target: 'temporary-public-storage',
      // Or self-hosted: target: 'lhci', serverBaseUrl: 'https://lhci.example.com'
    },
  },
};

GitHub Actions Integration

Run Lighthouse CI in GitHub Actions with score reporting as PR comments. The pipeline serves the site locally, runs audits, and posts results.

# .github/workflows/lighthouse.yml
name: Lighthouse CI
on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Install dependencies
        run: npm ci

      - name: Serve site
        run: npx serve -s . -l 8080 &

      - name: Wait for server
        run: npx wait-on http://localhost:8080

      - name: Run Lighthouse CI
        run: |
          npm install -g @lhci/cli@0.14.x
          lhci autorun
        env:
          LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}

      - name: Upload Lighthouse results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: lighthouse-results
          path: .lighthouseci/

GitLab CI Integration

For GitLab, Lighthouse CI runs in a Docker container with Chromium installed via apt. Results are stored as job artifacts for inspection and historical comparison.

# .gitlab-ci.yml
lighthouse:
  stage: test
  image: node:20-slim
  before_script:
    - apt-get update && apt-get install -y chromium
    - npm install -g @lhci/cli@0.14.x serve wait-on
    - export CHROME_PATH=$(which chromium)
  script:
    - serve -s . -l 8080 &
    - wait-on http://localhost:8080
    - lhci autorun --config=lighthouserc.js
  artifacts:
    when: always
    paths:
      - .lighthouseci/
  only:
    - merge_requests
    - main
Jose's Experience: I integrated Lighthouse CI into the proof.sh pipeline for josenobile.co, achieving and maintaining 100/100/100/100 scores on the homepage and 98/100 performance on the health dashboard (desktop). For the health dashboard, I optimized the Lighthouse performance score from 44 to 98 by implementing lazy loading, code splitting, image optimization, and critical CSS inlining.

7. Programmatic Usage (Node.js API)

Custom Lighthouse Automation

The Lighthouse Node.js API enables building custom audit tools, dashboards, and monitoring systems. You control Chrome launch, audit configuration, and result processing programmatically.

import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';
import { writeFileSync } from 'fs';

async function auditPage(url) {
  const chrome = await chromeLauncher.launch({
    chromeFlags: ['--headless', '--no-sandbox'],
  });

  const result = await lighthouse(url, {
    port: chrome.port,
    output: ['json', 'html'],
    onlyCategories: ['performance', 'accessibility', 'best-practices', 'seo'],
    // Settings overrides go directly on the flags object
    formFactor: 'desktop',
    screenEmulation: { disabled: true },
    throttling: {
      rttMs: 40,
      throughputKbps: 10240,
      cpuSlowdownMultiplier: 1,
    },
  });

  await chrome.kill();

  const scores = {
    url,
    timestamp: new Date().toISOString(),
    performance: result.lhr.categories.performance.score * 100,
    accessibility: result.lhr.categories.accessibility.score * 100,
    bestPractices: result.lhr.categories['best-practices'].score * 100,
    seo: result.lhr.categories.seo.score * 100,
    metrics: {
      fcp: result.lhr.audits['first-contentful-paint'].numericValue,
      lcp: result.lhr.audits['largest-contentful-paint'].numericValue,
      tbt: result.lhr.audits['total-blocking-time'].numericValue,
      cls: result.lhr.audits['cumulative-layout-shift'].numericValue,
      si: result.lhr.audits['speed-index'].numericValue,
    },
  };

  writeFileSync(`report-${Date.now()}.html`, result.report[1]);
  return scores;
}

// Audit multiple pages sequentially (parallel Chrome instances compete
// for CPU and skew timing metrics)
const urls = [
  'https://josenobile.co/',
  'https://josenobile.co/health/',
  'https://josenobile.co/contact/',
];

const results = [];
for (const url of urls) {
  results.push(await auditPage(url));
}
console.table(results.map(r => ({
  url: r.url,
  perf: r.performance,
  a11y: r.accessibility,
  bp: r.bestPractices,
  seo: r.seo,
})));

8. Performance Budgets (Size and Timing)

Defining and Enforcing Budgets

Performance budgets set limits on page weight (size budgets), resource counts, and metric values (timing budgets). Lighthouse CI enforces these budgets in CI/CD, failing builds that exceed limits. Budgets prevent performance regressions as features are added.

// budget.json - Lighthouse performance budget
[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 3000 },
      { "metric": "first-contentful-paint", "budget": 1500 },
      { "metric": "largest-contentful-paint", "budget": 2500 }
    ],
    "resourceSizes": [
      { "resourceType": "total", "budget": 300 },
      { "resourceType": "script", "budget": 100 },
      { "resourceType": "stylesheet", "budget": 30 },
      { "resourceType": "image", "budget": 150 },
      { "resourceType": "font", "budget": 50 },
      { "resourceType": "document", "budget": 30 }
    ],
    "resourceCounts": [
      { "resourceType": "total", "budget": 30 },
      { "resourceType": "script", "budget": 5 },
      { "resourceType": "stylesheet", "budget": 2 },
      { "resourceType": "image", "budget": 15 },
      { "resourceType": "font", "budget": 3 }
    ]
  }
]

// lighthouserc.js with budget enforcement
module.exports = {
  ci: {
    collect: { /* ... */ },
    assert: {
      budgetsFile: 'budget.json',
      assertions: {
        'resource-summary:script:size': ['error', { maxNumericValue: 102400 }],
        'resource-summary:total:size': ['warn', { maxNumericValue: 307200 }],
      },
    },
  },
};
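Conceptually, a size-budget assertion is a simple comparison of measured per-type transfer sizes against the resourceSizes budgets above. A sketch (the `measured` shape here is illustrative, not Lighthouse's internal format):

```javascript
// Compare measured resource sizes (KB) against budget.json-style
// resourceSizes entries; return human-readable violations.
function checkSizeBudgets(measured, resourceSizes) {
  const violations = [];
  for (const { resourceType, budget } of resourceSizes) {
    const actualKb = measured[resourceType];
    if (actualKb !== undefined && actualKb > budget) {
      violations.push(
        `${resourceType}: ${actualKb} KB exceeds budget of ${budget} KB`
      );
    }
  }
  return violations;
}

// Example: a page whose scripts have crept past the limit
const budgets = [
  { resourceType: 'script', budget: 100 },
  { resourceType: 'total', budget: 300 },
];
console.log(checkSizeBudgets({ script: 120, total: 280 }, budgets));
```

In CI, a non-empty violations list is what turns into a failed `lhci assert` step.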

Continuous Monitoring Dashboard

Track Lighthouse scores over time with a monitoring dashboard. Store results in a database or use Lighthouse CI Server for built-in historical tracking and comparison.

#!/bin/bash
# proof.sh - Lighthouse audit script with score tracking
set -euo pipefail

URLS=("/" "/health/" "/contact/")
BASE="http://localhost:8080"
REPORT_DIR="./lighthouse-reports"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

mkdir -p "$REPORT_DIR"

for path in "${URLS[@]}"; do
  slug=$(echo "$path" | tr '/' '-' | sed 's/^-//;s/-$//')
  [ -z "$slug" ] && slug="home"

  echo "Auditing ${BASE}${path}..."
  lighthouse "${BASE}${path}" \
    --output=json,html \
    --output-path="$REPORT_DIR/${slug}_${TIMESTAMP}" \
    --chrome-flags="--headless --no-sandbox" \
    --preset=desktop \
    --quiet

  # Extract scores
  PERF=$(jq '.categories.performance.score * 100' "$REPORT_DIR/${slug}_${TIMESTAMP}.report.json")
  A11Y=$(jq '.categories.accessibility.score * 100' "$REPORT_DIR/${slug}_${TIMESTAMP}.report.json")
  BP=$(jq '.categories["best-practices"].score * 100' "$REPORT_DIR/${slug}_${TIMESTAMP}.report.json")
  SEO=$(jq '.categories.seo.score * 100' "$REPORT_DIR/${slug}_${TIMESTAMP}.report.json")

  echo "  Perf: $PERF | A11y: $A11Y | BP: $BP | SEO: $SEO"

  # Fail if any score below 90
  for score in $PERF $A11Y $BP $SEO; do
    if (( $(echo "$score < 90" | bc -l) )); then
      echo "FAIL: Score $score below threshold 90"
      exit 1
    fi
  done
done

echo "All audits passed."

9. User Flows and Timespan Mode

Navigation, Timespan, and Snapshot Modes

Lighthouse user flows capture performance across multi-step interactions, not just cold page loads. There are three modes: Navigation (default cold page load), Timespan (measures a time range of user interaction), and Snapshot (audits the page in its current state without navigation). Together, these modes cover the full user journey.

import { startFlow } from 'lighthouse';
import puppeteer from 'puppeteer';
import { writeFileSync } from 'fs';

const browser = await puppeteer.launch({ headless: true });
const page = await browser.newPage();

// Start a Lighthouse user flow
const flow = await startFlow(page, { name: 'Checkout Flow' });

// Step 1: Navigation mode (cold load)
await flow.navigate('https://shop.example.com/');

// Step 2: Timespan mode (measure interactions over time)
await flow.startTimespan({ stepName: 'Browse products' });
await page.click('.product-card:first-child');
await page.waitForSelector('.product-detail');
await page.click('#add-to-cart');
await page.waitForSelector('.cart-badge');
await flow.endTimespan();

// Step 3: Another navigation
await flow.navigate('https://shop.example.com/cart', {
  stepName: 'Navigate to cart',
});

// Step 4: Snapshot mode (audit current DOM state)
await flow.snapshot({ stepName: 'Cart page state' });

// Step 5: Timespan for checkout interaction
await flow.startTimespan({ stepName: 'Complete checkout' });
await page.click('#checkout-btn');
await page.waitForSelector('#payment-form');
await page.type('#card-number', '4242424242424242');
await page.type('#card-expiry', '12/28');
await page.type('#card-cvc', '123');
await page.click('#submit-payment');
await page.waitForSelector('.confirmation');
await flow.endTimespan();

// Generate the flow report
const report = await flow.generateReport();
writeFileSync('flow-report.html', report);

// Access individual step results
const flowResult = await flow.createFlowResult();
for (const step of flowResult.steps) {
  console.log(`${step.name}: ${step.lhr.categories.performance.score * 100}`);
}

await browser.close();

Timespan Mode Deep Dive

Timespan mode measures CLS and INP during real user interactions, something cold navigation mode cannot capture. It is ideal for single-page applications where route changes happen client-side, infinite scroll behavior, and interactions that trigger heavy DOM updates. Timespan mode collects TBT, CLS, and INP metrics for the measured period.

import { startFlow } from 'lighthouse';
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch({ headless: true });
const page = await browser.newPage();
await page.goto('https://josenobile.co/');

const flow = await startFlow(page, { name: 'SPA Interactions' });

// Measure CLS and INP during SPA navigation
await flow.startTimespan({ stepName: 'Language switch interaction' });

// Simulate user switching language
await page.select('#lang', 'es-CO');
// page.waitForTimeout was removed in recent Puppeteer; use a plain sleep
await new Promise((resolve) => setTimeout(resolve, 500));

// Simulate scrolling (can cause CLS)
await page.evaluate(() => window.scrollTo(0, document.body.scrollHeight));
await new Promise((resolve) => setTimeout(resolve, 1000));

await flow.endTimespan();

// Timespan results include:
// - CLS accumulated during the timespan
// - TBT (Total Blocking Time) during the timespan
// - INP (if interactions occurred)
// - Layout shift events with affected elements

const result = await flow.createFlowResult();
const timespanStep = result.steps[0];
const cls = timespanStep.lhr.audits['cumulative-layout-shift'];
const tbt = timespanStep.lhr.audits['total-blocking-time'];
console.log(`CLS during interaction: ${cls.numericValue}`);
console.log(`TBT during interaction: ${tbt.numericValue}ms`);

await browser.close();

10. Chrome DevTools Lighthouse

Running Lighthouse in DevTools

Chrome DevTools includes a built-in Lighthouse panel for running audits directly in the browser. Open DevTools (F12), navigate to the Lighthouse tab, select categories and device type, and click "Analyze page load." The results appear inline with expandable audit details, filmstrip screenshots, and treemap visualizations. DevTools Lighthouse uses the same engine as the CLI but runs within the browser process.

# Running Lighthouse from Chrome DevTools:
#
# 1. Open Chrome DevTools (F12 or Cmd+Opt+I)
# 2. Navigate to the "Lighthouse" tab
# 3. Select categories: Performance, Accessibility, Best Practices, SEO
# 4. Choose device: Mobile or Desktop
# 5. Click "Analyze page load"
#
# DevTools Lighthouse features:
# - View Treemap: visualize JavaScript bundle sizes
# - View Trace: open in Performance panel for detailed waterfall
# - Filmstrip: screenshot timeline of page load
# - Expandable audits with affected elements highlighted
# - "View Original Trace" button opens Performance panel
#
# Tips for accurate DevTools results:
# - Use Incognito mode (no extensions interfering)
# - Close other tabs (reduces CPU contention)
# - Disable browser extensions
# - Use a consistent network (or enable throttling)
# - Run multiple times and compare (results vary 5-10%)
#
# DevTools also provides real-time CWV overlay:
# 1. Open DevTools > Performance panel
# 2. Check "Web Vitals" in the timeline
# 3. Interact with the page to see INP measurements
# 4. Layout shifts appear as red markers in the timeline
#
# Equivalent CLI flags to approximate the DevTools panel's behavior:
# --throttling-method=devtools   (uses DevTools throttling)
# --screenEmulation.disabled     (use the actual viewport)
# --form-factor=desktop          (desktop scoring)

11. PageSpeed Insights API

Using the PageSpeed Insights API

The PageSpeed Insights (PSI) API combines Lighthouse lab data with Chrome User Experience Report (CrUX) field data. It provides both lab scores (what Lighthouse measures in a controlled environment) and field data (real-world performance from actual Chrome users). The API is free, requires an API key, and returns JSON results for any public URL.

// PageSpeed Insights API - fetch lab + field data
const API_KEY = process.env.PSI_API_KEY;

async function fetchPSI(url, strategy = 'desktop') {
  const endpoint = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';
  const params = new URLSearchParams({
    url,
    key: API_KEY,
    strategy,                          // 'mobile' or 'desktop'
  });
  // The API expects a separate category param per category,
  // not a single comma-joined value
  for (const c of ['performance', 'accessibility', 'best-practices', 'seo']) {
    params.append('category', c);
  }

  const res = await fetch(`${endpoint}?${params}`);
  const data = await res.json();

  // Lab data (Lighthouse)
  const lab = data.lighthouseResult;
  console.log('Lab scores:');
  console.log(`  Performance: ${lab.categories.performance.score * 100}`);
  console.log(`  Accessibility: ${lab.categories.accessibility.score * 100}`);
  console.log(`  Best Practices: ${lab.categories['best-practices'].score * 100}`);
  console.log(`  SEO: ${lab.categories.seo.score * 100}`);

  // Field data (CrUX) - real user metrics
  const field = data.loadingExperience;
  if (field && field.metrics) {
    console.log('\nField data (CrUX):');
    const lcp = field.metrics.LARGEST_CONTENTFUL_PAINT_MS;
    const inp = field.metrics.INTERACTION_TO_NEXT_PAINT;
    const cls = field.metrics.CUMULATIVE_LAYOUT_SHIFT_SCORE;

    if (lcp) console.log(`  LCP p75: ${lcp.percentile}ms (${lcp.category})`);
    if (inp) console.log(`  INP p75: ${inp.percentile}ms (${inp.category})`);
    if (cls) console.log(`  CLS p75: ${cls.percentile / 100} (${cls.category})`);
  } else {
    console.log('\nNo field data available (not enough CrUX traffic).');
  }

  return data;
}

// Monitor multiple pages
const pages = [
  'https://josenobile.co/',
  'https://josenobile.co/health/',
];

for (const url of pages) {
  console.log(`\n--- ${url} ---`);
  await fetchPSI(url, 'desktop');
  await fetchPSI(url, 'mobile');
}

PSI API in CI/CD Monitoring

Integrate the PageSpeed Insights API into scheduled CI/CD jobs to monitor production performance without running your own Lighthouse infrastructure. PSI tests the live production URL, providing both lab and real-user field data. This complements local Lighthouse CI by catching issues that only appear in production (CDN configuration, third-party script impact, geographic latency).

# .github/workflows/psi-monitor.yml
name: PageSpeed Insights Monitor
on:
  schedule:
    - cron: '0 6 * * 1'  # Weekly on Monday at 6 AM UTC
  workflow_dispatch:

jobs:
  psi-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run PSI checks
        run: |
          for url in "https://josenobile.co/" "https://josenobile.co/health/"; do
            echo "Checking $url..."
            RESULT=$(curl -s "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=$url&strategy=desktop&key=${{ secrets.PSI_API_KEY }}")
            PERF=$(echo "$RESULT" | jq '.lighthouseResult.categories.performance.score * 100')
            echo "  Performance: $PERF"
            if (( $(echo "$PERF < 90" | bc -l) )); then
              echo "::error::Performance score $PERF below 90 for $url"
              exit 1
            fi
          done

      - name: Store results
        if: always()
        run: |
          mkdir -p psi-results
          date +%Y-%m-%d > psi-results/timestamp.txt

12. Lighthouse 13: Insight-Based Audits (2025-2026)

Lighthouse 13.1.0 and Performance Insights

Lighthouse 13.1.0 (April 2026) ships in Chrome 148 DevTools. The biggest change is the move to insight-based audits: many non-scored audits have been consolidated into unified insights. For example, the layout-shifts, non-composited-animations, and unsized-images audits are now combined into a single cls-culprits-insight audit. Performance scores remain unchanged — this update targets non-scored audit structure only. However, API users should expect structural differences in report outputs as old audit names are retired.

INP as Core Web Vital

Interaction to Next Paint (INP) has fully replaced First Input Delay (FID) as the responsiveness Core Web Vital. INP measures the latency of all user interactions throughout the page lifecycle, not just the first one. The threshold is under 200ms for a "good" score. Lighthouse measures INP in Timespan mode during real user interactions. Optimizing INP requires reducing main-thread blocking time, breaking up long tasks, and yielding to the browser between event handlers. Use scheduler.yield() or setTimeout(0) to split long tasks.
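The yielding pattern just described is commonly wrapped in a small helper. A sketch, assuming nothing beyond the standard scheduler.yield() / setTimeout fallback (the helper and function names are illustrative):

```javascript
// Yield control back to the main thread so the browser can handle
// input and paint; prefer scheduler.yield() where supported.
function yieldToMain() {
  if (globalThis.scheduler?.yield) {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Break a long event-handler task into yielding chunks
async function handleHeavyUpdate(items, processItem) {
  for (const item of items) {
    processItem(item);
    await yieldToMain(); // browser can respond to input between items
  }
}
```

Each yield point caps how long any single task can block, which is exactly what the INP metric penalizes.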

Scoring Weight Evolution 2026

Lighthouse 13 (2026) adjusts the performance scoring weights to reflect real-world user experience data from the Chrome User Experience Report (CrUX). The updated weights are: Total Blocking Time (TBT) at 30% (up from 25%), reflecting its strong correlation with INP in lab environments; Largest Contentful Paint (LCP) at 25% (unchanged); Cumulative Layout Shift (CLS) at 25% (up from 15%); First Contentful Paint (FCP) at 10% (down from 15%); and Speed Index (SI) at 10% (down from 15%). The increase in TBT weight rewards pages that minimize main-thread blocking, while the CLS promotion reflects Google's increased emphasis on visual stability.

For field measurement alignment, teams should adopt web-vitals v4+, which provides the onINP() API for direct Interaction to Next Paint measurement in production. The library also adds attribution builds that identify the exact element and event responsible for poor INP scores. Pairing web-vitals v4 field data with Lighthouse lab scores gives the most complete picture of real user performance -- lab scores catch regressions in CI, field data validates that optimizations translate to real improvement.
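The weighted average described above is straightforward to recompute by hand from per-metric scores. A sketch using the weights stated in this section (metric scores passed in are illustrative 0-1 values):

```javascript
// Performance scoring weights as described in this section
const WEIGHTS = {
  'first-contentful-paint': 0.10,
  'speed-index': 0.10,
  'largest-contentful-paint': 0.25,
  'total-blocking-time': 0.30,
  'cumulative-layout-shift': 0.25,
};

// Combine per-metric 0-1 scores into the 0-100 category score
function overallPerformanceScore(metricScores) {
  let weighted = 0;
  for (const [audit, weight] of Object.entries(WEIGHTS)) {
    weighted += (metricScores[audit] ?? 0) * weight;
  }
  return Math.round(weighted * 100);
}
```

With TBT at 30%, a page that scores perfectly on every other metric but zero on TBT tops out at 70, which is why long main-thread tasks dominate most failing performance scores.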
