# Lighthouse & Core Web Vitals — Interpretation Standards (v0.0)

Provides clear guidance on how to interpret Lighthouse scores and Core Web Vitals, preventing over-optimization, misaligned priorities, and wasted development effort.

Do:
– Focus on LCP, CLS, and interaction responsiveness (INP)
– Use Lighthouse to identify bottlenecks
– Validate performance with real user data (Search Console)

Don’t:
– Chase a Lighthouse score of 100
– Optimize metrics that do not impact real UX
– Introduce complexity just to improve lab scores

Verify:
– Core Web Vitals are in “good” range
– No regressions in real user performance
– Page still feels fast and stable

## Core Principle

Lighthouse is a diagnostic tool; Core Web Vitals represent real user experience. We optimize for users first, metrics second, and scores last.

## Lighthouse vs. Core Web Vitals

### Lighthouse (Lab Data)

– simulated environment
– single-run variability
– useful for debugging
– not representative of all users

Use for:
– identifying performance issues
– spotting blocking resources
– testing improvements locally

### Core Web Vitals (Field Data)

– real user data
– aggregated over time
– used in search evaluation
– reflects actual experience

Primary metrics:

– Largest Contentful Paint (LCP)
– Cumulative Layout Shift (CLS)
– Interaction to Next Paint (INP), which replaced First Input Delay (FID) as the responsiveness metric

Field data takes priority over lab results.
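The distinction matters in practice: field sources such as the Chrome UX Report aggregate a metric across many real visits and report the 75th percentile, while a lab run yields a single number. A minimal sketch of that aggregation, using the nearest-rank percentile method and invented sample values for illustration:

```python
import math

def p75(samples: list[float]) -> float:
    """75th percentile — the aggregation used for Core Web Vitals field data."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # Nearest-rank method: the value at ceil(0.75 * n), 1-indexed.
    rank = math.ceil(0.75 * len(ordered))
    return ordered[rank - 1]

# One fast lab run can look fine while the field p75 does not:
lab_lcp_ms = 1900.0                      # single simulated run, under the 2.5 s target
field_lcp_ms = [1200, 1800, 2600, 3100]  # hypothetical real-user samples;
                                         # their p75 exceeds the 2.5 s target
```

This is why a clean local Lighthouse run does not guarantee healthy field vitals: slow real-world visits dominate the p75 even when the median visit is fast.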

## Performance Targets

Target ranges:

– LCP: < 2.5 seconds
– CLS: < 0.1
– INP: < 200 milliseconds

If these are met:

→ performance is acceptable
→ further optimization should be intentional, not automatic
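These ranges follow Google's published thresholds, which also define a "poor" boundary for each metric (4 s for LCP, 0.25 for CLS, 500 ms for INP). A small classifier makes the interpretation mechanical; the function name and structure here are illustrative, not an official API:

```python
# (good upper bound, poor lower bound) per Google's published thresholds.
THRESHOLDS = {
    "LCP": (2500, 4000),   # milliseconds
    "CLS": (0.1, 0.25),    # unitless layout-shift score
    "INP": (200, 500),     # milliseconds
}

def rate(metric: str, value: float) -> str:
    """Classify a value as 'good', 'needs improvement', or 'poor'."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"
```

Anything rated "good" meets the target; further optimization beyond that point should be a deliberate decision, not a default.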

## Score Interpretation

The Lighthouse performance score is a composite of several weighted metric scores.

It can be affected by:

– network conditions
– device simulation
– test variability
– non-critical optimizations

Guideline:

A score below 100 is not a problem if Core Web Vitals are healthy.
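Because of run-to-run variability, Lighthouse's own guidance is to run the audit several times and compare medians rather than trust any single score. A sketch of that practice, with hypothetical run scores:

```python
import statistics

def representative_score(run_scores: list[float]) -> float:
    """Reduce single-run noise by taking the median of repeated runs,
    per Lighthouse's variance guidance."""
    return statistics.median(run_scores)

# Hypothetical performance scores from five runs of the same page:
runs = [88, 92, 85, 90, 91]
```

Comparing medians before and after a change avoids reacting to noise that a single run would present as a real difference.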

## Optimization Strategy

Prioritize:

1. Real bottlenecks
2. User-perceived performance
3. Stability across pages

Avoid:

– micro-optimizing insignificant gains
– rewriting working systems for marginal score increases
– introducing risk for cosmetic improvements

## Common Failure Modes

– chasing a perfect Lighthouse score
– ignoring field data
– over-optimizing JavaScript unnecessarily
– breaking UX to improve metrics
– adding complexity for minimal gains

## Engineering Guidance

When reviewing performance:

1. Check Core Web Vitals first
2. Use Lighthouse to identify issues
3. Confirm findings with real-world data
4. Fix only meaningful problems
5. Re-test after changes
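Re-testing can be made mechanical with a small regression gate on field data. The 5% tolerance below is a hypothetical project choice, not a standard; the check assumes a metric where higher is worse (LCP, INP):

```python
def regresses(before_p75: float, after_p75: float, tolerance: float = 0.05) -> bool:
    """Flag a regression when the field p75 worsens by more than the
    tolerance (5% here — a project choice, not an official threshold)."""
    return after_p75 > before_p75 * (1 + tolerance)
```

Such a gate keeps the review focused on real user data: a change that nudges the Lighthouse score but worsens field p75 fails, while one that leaves field data stable passes.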

## Definition of Done

Performance work is complete when:

– Core Web Vitals are in acceptable ranges
– no regressions in real user data
– no unnecessary complexity introduced
– page remains stable and fast

## Strategic Notes

Many teams optimize for scores.

A perfect score with worse UX is failure.
A lower score with strong UX is success.

## Priority

High


Go_to: Glossary

## Ownership

– Engineering (implementation & performance)
– SEO (standards & validation)

Stephen and Lucent