Stop Chasing Lighthouse 100: Performance Budgets That Protect UX (and Revenue)
If you don’t put a hard cap on LCP, INP, and JS weight, your UX will drift and your conversion curve will follow it down. Here’s the budget system we deploy to keep teams honest and fast.
Budgets that don’t block merges are wishes. Make them SLOs, wire them to CI, and measure them in the field.
The moment you lose the user
I watched a retail team spend a quarter chasing a Lighthouse 100 on a fiber-connected MacBook while their mobile LCP slid past 4s on prepaid Androids. Checkout conversion dipped 7% and no one tied it to the slow slide. I’ve seen this fail more times than I can count: if you don’t set a performance budget that matches what your users actually feel, UX drifts and revenue follows.
This is the system we deploy at GitPlumbers when a team needs to stop the drift and keep UX consistent—without turning every sprint into perf whack‑a‑mole.
Make the budget about UX SLOs, not lab glory
Set budgets as user-facing SLOs anchored on Core Web Vitals and time-to-meaningful metrics, at the 75th percentile:
- LCP (Largest Contentful Paint): p75 < 2.5s (mobile, real 4G). If you’re ad-heavy or image-led, push 2.0s.
- INP (Interaction to Next Paint): p75 < 200ms (don’t let JS work steal clicks).
- CLS (Cumulative Layout Shift): p75 < 0.1 (no jumpy layouts).
- TTFB: p75 < 500ms (edge cache or fix backend latency).
Segment budgets by what your business cares about:
- Device: low-end Android vs flagship iPhone.
- Network: 4G/5G vs Wi‑Fi.
- Region: edge PoPs vs centralized DC.
- Journey: home, PLP, PDP, cart, checkout—weighted by revenue.
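To keep segments enforceable rather than aspirational, check the budgets into the repo so the CI gate and the dashboards read the same numbers. A minimal sketch in TypeScript (journey names, device classes, and thresholds here are illustrative, not prescriptive):

```typescript
// budgets.ts: p75 targets in ms, segmented by journey and device class.
// Set the actual values from your own baseline, not from this sketch.
type Metric = 'LCP' | 'INP' | 'TTFB';
type Device = 'low-end-android' | 'flagship';

const budgets: Record<string, Record<Device, Partial<Record<Metric, number>>>> = {
  checkout: {
    'low-end-android': {LCP: 2500, INP: 200, TTFB: 500},
    'flagship':        {LCP: 2000, INP: 150, TTFB: 400},
  },
  pdp: {
    'low-end-android': {LCP: 2800, INP: 200},
    'flagship':        {LCP: 2200, INP: 150},
  },
};

export function withinBudget(
  journey: string, device: Device, metric: Metric, p75: number,
): boolean {
  const budget = budgets[journey]?.[device]?.[metric];
  return budget === undefined || p75 <= budget; // no budget set => not gated
}
```

The same object can drive both a CI assertion and a Grafana threshold, so the two never drift apart.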
Tie budgets to KPIs. Real numbers I’ve seen:
- Moving p75 LCP from ~3.8s to ~2.4s on mobile increased add-to-cart by 4–6% for a DTC brand in Q4.
- Tightening INP from ~280ms to ~140ms reduced support tickets about “buttons not working” by 18% at a B2B SaaS.
If the metric doesn’t change a user outcome (conversion, engagement, support load), it isn’t a budget—it’s a hobby.
Baseline first: field, then lab
You can’t budget what you can’t measure. Start with field data, validate with lab:
- RUM: instrument with `web-vitals` and Navigation Timing to collect LCP/INP/CLS and ship them to your analytics or a time-series DB.
- CrUX: pull the Chrome UX Report for your origin to benchmark against the web.
- Synthetic: use SpeedCurve, Calibre, or WebPageTest for consistent lab baselines per page group.
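If you pull CrUX programmatically, the `records:queryRecord` endpoint returns per-metric percentiles. A sketch of extracting p75 and comparing it to budget, using a hard-coded object in the documented response shape (the values are invented for illustration; in production this would come from a POST to `https://chromeuxreport.googleapis.com/v1/records:queryRecord` with your API key):

```typescript
// The slice of a CrUX queryRecord response we care about.
interface CruxRecord {
  record: {
    key: {origin: string};
    metrics: Record<string, {percentiles: {p75: number | string}}>;
  };
}

// Illustrative payload; real data comes from the CrUX API.
const sample: CruxRecord = {
  record: {
    key: {origin: 'https://example.com'},
    metrics: {
      largest_contentful_paint: {percentiles: {p75: 2740}},
      interaction_to_next_paint: {percentiles: {p75: 180}},
    },
  },
};

// CrUX reports some percentiles as strings, so normalize to a number.
export function p75(res: CruxRecord, metric: string): number {
  return Number(res.record.metrics[metric]?.percentiles.p75 ?? NaN);
}

console.log('LCP p75:', p75(sample, 'largest_contentful_paint'), 'ms');
```

Here 2740ms against a 2500ms budget tells you the gap you need to close before the budget is credible.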
Minimal RUM wiring:

```ts
// rum.ts
import {onLCP, onINP, onCLS, type Metric} from 'web-vitals';

function sendToAnalytics(metric: Metric) {
  navigator.sendBeacon('/rum', JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
    id: metric.id,
    url: location.pathname,
    dt: Date.now(),
  }));
}

onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);
```

And if you want to see what the browser sees:

```ts
// Quick LCP probe
new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const last = entries[entries.length - 1];
  console.log('LCP candidate', last.startTime.toFixed(0), 'ms', last);
}).observe({type: 'largest-contentful-paint', buffered: true});
```

Now you know where you are. Draft budgets that are ambitious but credible—e.g., if mobile LCP p75 is 3.6s, set 3.0s in 4 weeks and 2.5s next quarter.
Enforce in CI/CD or it won’t stick
Budgets that don’t block merges are wishes. We use lighthouse-ci and bundle size guards to stop regressions before they ship.
Lighthouse budgets file (`budgets.json`):

```json
[
  {
    "path": "/*",
    "timings": [
      {"metric": "interactive", "budget": 4000},
      {"metric": "first-contentful-paint", "budget": 1800},
      {"metric": "largest-contentful-paint", "budget": 2500}
    ],
    "resourceSizes": [
      {"resourceType": "total", "budget": 300},
      {"resourceType": "script", "budget": 170},
      {"resourceType": "image", "budget": 120}
    ],
    "resourceCounts": [
      {"resourceType": "third-party", "budget": 10}
    ]
  }
]
```

LHCI config and GitHub Actions gate:
`.lighthouserc.json`:

```json
{
  "ci": {
    "collect": {
      "numberOfRuns": 3,
      "url": ["https://preview.example.com/", "https://preview.example.com/product/123"]
    },
    "assert": {
      "budgetsFile": "./budgets.json",
      "assertions": {"categories:performance": ["warn", {"minScore": 0.85}]}
    },
    "upload": {"target": "temporary-public-storage"}
  }
}
```

```yaml
# .github/workflows/perf.yml
name: perf-budgets
on: [pull_request]
jobs:
  lhci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: {node-version: '20'}
      - run: npm ci
      - run: npx @lhci/cli autorun
```

Cap JavaScript size with `size-limit` so someone’s “quick util” doesn’t add 150kB:
In `package.json`:

```json
{
  "scripts": {"size": "size-limit"},
  "size-limit": [
    {"path": "dist/app.*.js", "limit": "170 KB"},
    {"path": "dist/vendor.*.js", "limit": "120 KB"}
  ]
}
```

Run it in CI and fail the PR if limits are exceeded.
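If you use the workflow above, the size gate is one extra step (this assumes the `size` script from the snippet):

```yaml
# Add to .github/workflows/perf.yml, after `npm ci`
      - run: npm run size   # non-zero exit on any size-limit violation fails the PR
```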
Optimize to the budget: changes that move LCP/INP
I’ve wasted weeks shaving 20ms off CSS when 800ms was hiding in images and JS parse. Work where the budget bleeds.
Images (usually 40–70% of weight)
- Serve AVIF/WebP with fallbacks; target ~0.8–1.5MP for hero images.
- Use `srcset` and `sizes`; no 2x retina bombs on 360px screens.
- Lazy-load below the fold: `loading="lazy"` and `decoding="async"`.
- Prioritize the hero: `<img fetchpriority="high">` or Next.js `priority`.
- Measured outcome: cutting a hero JPG from 600kB to 180kB improved mobile LCP p75 by ~700ms for a media site.
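Put together, a hero that follows these rules might look like this (file names and widths are illustrative):

```html
<!-- Hero: high priority, modern formats, sized to the viewport -->
<picture>
  <source type="image/avif" srcset="/hero-480.avif 480w, /hero-960.avif 960w" sizes="100vw">
  <source type="image/webp" srcset="/hero-480.webp 480w, /hero-960.webp 960w" sizes="100vw">
  <img src="/hero-960.jpg" srcset="/hero-480.jpg 480w, /hero-960.jpg 960w"
       sizes="100vw" width="960" height="540"
       fetchpriority="high" decoding="async" alt="Hero">
</picture>

<!-- Below the fold: lazy, and explicitly sized so it can't shift layout -->
<img src="/card.jpg" loading="lazy" decoding="async" width="400" height="300" alt="Card">
```

The explicit `width`/`height` attributes matter for CLS as much as the formats matter for LCP.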
JavaScript execution (INP killer)
- Ship less JS. Code-split routes and heavy components with `import()`.
- Prefer `esbuild`/`rollup` to cut vendor cruft; audit with `webpack-bundle-analyzer`.
- Defer non-critical scripts with `defer`; delay hydration until visible with `IntersectionObserver`.
- Avoid synchronous `localStorage` loops and `JSON.parse` of megabyte payloads on the critical path.
- Outcome: splitting a 320kB app bundle into route chunks reduced INP p75 from 260ms to 160ms at a marketplace.
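The "hydrate when visible" idea is small enough to sketch. This helper is hypothetical and framework-agnostic, with the observer injectable so the logic can be tested outside a browser; the hydrate callback is where you'd `import()` the island's chunk:

```typescript
// Defer hydration until the element is visible. The observer factory is
// injectable for testability; the default uses the real IntersectionObserver.
type ObserveVisible = (el: unknown, onVisible: () => void) => void;

const domObserver: ObserveVisible = (el, onVisible) => {
  // Cast via globalThis so this file also compiles without DOM typings.
  const IO = (globalThis as any).IntersectionObserver;
  const io = new IO((entries: any[]) => {
    if (entries.some((e) => e.isIntersecting)) {
      io.disconnect();
      onVisible();
    }
  });
  io.observe(el);
};

export function hydrateWhenVisible(
  el: unknown,
  hydrate: () => void,
  observe: ObserveVisible = domObserver,
): void {
  let done = false;
  observe(el, () => {
    if (!done) {
      done = true;
      hydrate(); // e.g. dynamic import() of the island, then framework hydrate
    }
  });
}
```

The guard matters: observers can fire more than once, and double hydration is its own INP bug.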
Critical CSS and fonts
- Inline critical CSS (~5–8kB) and defer the rest.
- `font-display: swap;` and preconnect to font CDNs; self-host if feasible.
- Budget: first text paint under 1.8s on mobile without FOIT.
Third‑party scripts
- Maintain an allowlist with owners and ROI. Load async by default; lazy-load below-the-fold tags.
- Replace legacy tag managers that inject sync scripts.
- Outcome: removing two dormant trackers (180kB each) cut total bytes by 360kB and improved LCP p75 by ~400ms.
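For tags that survive the allowlist, one pattern is to load them only on first interaction, with a generous fallback timeout, so they never race the critical path. A hypothetical helper, with the DOM injection parameterized for testability (in a browser the `inject` function would append an async `<script>` tag):

```typescript
// Delay a third-party tag until first interaction (or a fallback timeout)
// so it can't compete with hydration and first paint.
export function loadAfterInteraction(
  src: string,
  inject: (src: string) => void,
  timeoutMs = 5000,
): () => void {
  let loaded = false;
  const fire = () => {
    if (loaded) return;
    loaded = true;
    clearTimeout(timer);
    inject(src); // in the browser: create and append <script async src=...>
  };
  const timer = setTimeout(fire, timeoutMs);
  // In the browser, also: addEventListener('pointerdown', fire, {once: true});
  return fire; // expose the trigger to wire to interaction events
}
```

The timeout is the escape hatch for tags marketing genuinely needs: they still load, just never ahead of the user.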
Backend + CDN for TTFB
- Cache HTML for anonymous traffic with short TTLs and a cookie bypass.
- Push API responses to edge (KV/Global Tables) where possible; compress JSON.
- Aim for p75 TTFB < 500ms globally; < 300ms in primary regions.
Express example for caching and compression:

```ts
import express from 'express';
import compression from 'compression';

const app = express();
app.use(compression());
app.set('etag', 'strong');

// renderedHtml and products are produced elsewhere in the app
app.get('/', (req, res) => {
  res.set('Cache-Control', 'public, max-age=60, s-maxage=300');
  res.send(renderedHtml);
});

app.get('/api/products', (req, res) => {
  res.set('Cache-Control', 'public, max-age=30, s-maxage=120');
  res.json(products);
});

app.listen(3000);
```

Resource hints done right
- `preload` the LCP image or CSS only when truly critical.
- `preconnect` to origins you hit before interaction (CDN, API, fonts).
- Don’t `prefetch` everything—keep it surgical to avoid network contention.
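In markup, the surgical version of those hints is short (hosts and file names are illustrative):

```html
<head>
  <!-- Only origins hit before first interaction -->
  <link rel="preconnect" href="https://cdn.example.com" crossorigin>
  <link rel="preconnect" href="https://fonts.example.com" crossorigin>
  <!-- Preload exactly one thing: the LCP image -->
  <link rel="preload" as="image" href="/hero-960.avif" fetchpriority="high">
</head>
```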
Production monitoring and alerting like you mean it
Budgets aren’t real until they’re monitored in prod and tied to a pager or at least a red CI badge.
- Stream `web-vitals` metrics to your TSDB and dashboard them in Grafana. Alert on burn rates: e.g., if mobile LCP p75 > 2.5s for 30m with >5k samples, page the on-call for the owning team.
- Correlate with business: join vitals with conversion/engagement to see revenue sensitivity.
- Segment dashboards by device/network/region; don’t let “All Users” hide pain.
Prometheus-style alert idea (pseudo):
```yaml
# INP p75 burn (pseudo-PromQL)
alert: HighINP75
expr: quantile_over_time(0.75, web_vitals_inp_ms{device="android"}[30m]) > 200
for: 30m
labels: {severity: page, team: web}
annotations: {summary: "INP p75 > 200ms on Android for 30m"}
```

Release policy that actually works:
- Block deploys if last 24h p75 in prod has no headroom (e.g., >2.4s LCP when budget is 2.5s). Fix before adding risk.
- Progressive delivery: ship canaries to 5%, measure field vitals, then ramp.
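A canary gate can literally be a function of field vitals: within budget, not meaningfully worse than baseline, and enough samples to trust. A sketch with illustrative thresholds:

```typescript
interface CohortVitals {lcpP75: number; inpP75: number; samples: number;}

// Ramp only if the canary cohort is within budget AND not meaningfully
// worse than baseline; refuse to decide on thin data.
export function canRamp(
  canary: CohortVitals,
  baseline: CohortVitals,
  budget = {lcp: 2500, inp: 200},
  minSamples = 5000,
  regressionTolerance = 1.05, // allow 5% noise between cohorts
): boolean {
  if (canary.samples < minSamples) return false;
  if (canary.lcpP75 > budget.lcp || canary.inpP75 > budget.inp) return false;
  return (
    canary.lcpP75 <= baseline.lcpP75 * regressionTolerance &&
    canary.inpP75 <= baseline.inpP75 * regressionTolerance
  );
}
```

The minimum-sample check is the part teams skip and regret: a 5% canary on low traffic can look green for hours on noise alone.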
What I’ve seen fail (and what works)
What fails:
- Chasing a Lighthouse 100 on a Mac and calling it done.
- Setting budgets in kB only; users feel time, not bytes.
- Ignoring third-party bloat because “marketing needs it.” Make them own a budget.
- Vibe coding performance fixes without measuring real user impact. AI-generated code loves to add dependencies; it rarely removes them.
What works:
- Budgets as SLOs at p75 mobile, enforced in CI, validated by RUM.
- Weekly perf review: top regressions, owners, fixes, and a 30‑day trend.
- A “no new JS without a delete” rule for the app shell.
- Quarterly ratchets: when you hold green for 60 days, tighten by 10–15%.
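The ratchet itself is mechanical enough to encode (a sketch of the rule above):

```typescript
// Tighten a budget 10-15% once it has held green for 60 days; otherwise hold.
export function nextBudget(current: number, greenDays: number, tighten = 0.10): number {
  if (greenDays < 60) return current;
  return Math.round(current * (1 - tighten));
}

console.log(nextBudget(2500, 75)); // 2250
console.log(nextBudget(2500, 30)); // 2500
```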
If you need a partner that doesn’t just drop a report and bail, GitPlumbers embeds with your team, stands up the gates, and leaves you with dashboards, docs, and savings you can measure.
Key takeaways
- Tie budgets to user-facing SLOs (p75 LCP, INP, CLS) by segment (mobile, region) and track them like uptime.
- Enforce budgets in CI/CD with `lighthouse-ci` and size checks so regressions never hit prod.
- Optimize to the budget: prioritize image weight, JS execution time, and third‑party scripts before micro-optimizing CSS.
- Wire RUM to verify real users meet the budget; gate releases on field data, not just lab scores.
- Iterate: budgets should get tighter as teams pay down tech debt—and looser only when business impact is proven.
Implementation checklist
- Define UX SLOs: p75 LCP < 2.5s, INP < 200ms, CLS < 0.1 for primary user journeys.
- Segment by device/network/region; weight by traffic and revenue.
- Baseline with RUM + synthetic (CrUX, SpeedCurve/WebPageTest).
- Create CI gates: Lighthouse budgets + bundle size limits; fail PRs on violations.
- Optimize high-ROI areas: images, JS execution, third-party control, caching.
- Monitor in prod with `web-vitals` and burn-rate alerts on SLOs.
- Review budgets quarterly; ratchet once stability is proven.
Questions we hear from teams
- What’s a realistic starting budget for a typical React storefront?
- Mobile p75: LCP ≤ 2.5s, INP ≤ 200ms, CLS ≤ 0.1, TTFB ≤ 500ms. Script ≤ 170kB gz, images ≤ 120kB on landing, ≤ 300kB total. Segment by device/network and ratchet quarterly.
- How do we handle regional performance differences?
- Set region-specific TTFB and LCP targets. Use a CDN with edge caching and measure per region in RUM. If a region can’t meet global budgets, either invest in edge/replication or explicitly set a looser regional budget with a business owner sign-off.
- We’re an SPA. Do budgets change?
- Budgets include route transitions: measure soft navigations with the `web-vitals` SPA APIs and set LCP/INP budgets per route. Split the app shell from page chunks and cap JS execution during transitions.
- What about third-party scripts we ‘can’t remove’?
- Give each a budget and an owner. Load async, defer or lazy below the fold, and require a business justification for sync/early load. If it blows the budget, either pay the performance tax knowingly or replace it.
Ready to modernize your codebase?
Let GitPlumbers help you transform AI-generated chaos into clean, scalable applications.
