

Core Web Vitals in 2026: Complete Guide to LCP, CLS, INP

Core Web Vitals (CWV) are Google's standardised page experience metrics that measure how fast, stable, and responsive a webpage feels to a real user. In 2026, the three metrics are Largest Contentful Paint (LCP, target 2.5 seconds or less), Cumulative Layout Shift (CLS, target 0.1 or less), and Interaction to Next Paint (INP, target 200 ms or less, having replaced First Input Delay in March 2024).

CWV are confirmed Google ranking signals as part of the Page Experience signal cluster, and they correlate strongly with conversion: industry data consistently shows 5 to 25 percent conversion lift from LCP improvements of 1 to 3 seconds.

The honest framing on CWV in 2026: they are necessary but not sufficient. Sites with poor CWV will struggle to rank competitively in commercial categories, while sites with perfect CWV but weak content, thin authority, or bad UX will not rank either. Treat CWV as the technical foundation that lets your content and authority work, not as a magic SEO lever.

The biggest practical issues in 2026 are JavaScript-heavy frontends causing INP failures, page builders (Elementor, Divi) producing layout shifts, third-party scripts (analytics, chat widgets, ad tags) blocking the main thread, and unoptimised images dominating LCP. The fixes are well understood: image optimisation, render-blocking asset elimination, third-party script management via deferred loading or facade patterns, and a CDN.

This guide covers what each metric measures, the 2026 thresholds, why they matter for rankings and conversion, common failure modes per metric, platform-specific patterns (Webflow, WordPress, Shopify, Next.js), measurement tools, mobile reality, an implementation roadmap, and red flags in any vendor proposal.
What Core Web Vitals are in 2026
Core Web Vitals are a subset of Web Vitals, a broader Google initiative to standardise how page performance and user experience are measured. The Core Web Vitals are three metrics chosen because they capture the most consequential aspects of real user experience: loading speed (LCP), visual stability (CLS), and responsiveness to user interaction (INP).
Largest Contentful Paint (LCP) measures how quickly the largest visible content element renders inside the viewport during initial page load. The element is usually a hero image, hero video poster, or large block of text. The good threshold is 2.5 seconds or less, measured at the 75th percentile of real-user data. Slow LCP feels slow because the page appears empty or stuck while the user waits for the main content to appear.
Cumulative Layout Shift (CLS) measures visual stability by quantifying how much elements move during page load and during user interaction. A score of 0.1 or less is good; this is calculated by impact fraction (how much of the viewport is affected) times distance fraction (how far elements move). High CLS feels broken because the user tries to click something and the layout shifts, causing them to click the wrong thing.
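The score arithmetic is easy to sanity-check by hand. A minimal sketch of the formula above (the fractions are made-up illustration values, not measurements from any real page):

```javascript
// CLS contribution of a single layout shift:
// impact fraction (share of the viewport affected, before + after the shift)
// multiplied by distance fraction (how far the content moved, as a share of viewport).
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// Example: a late-loading banner affects 60% of the viewport and pushes
// content down by 25% of the viewport height.
const score = layoutShiftScore(0.6, 0.25);
// ≈ 0.15 — this single shift already exceeds the 0.1 "good" threshold
```

A page's CLS is the sum of such scores within the worst burst of shifts, which is why one badly behaved banner or embed can fail the whole page on its own.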
Interaction to Next Paint (INP) measures responsiveness by tracking the time from when a user interacts (click, tap, key press) to when the page visibly responds. INP replaced First Input Delay (FID) in March 2024 because it captures responsiveness across all interactions, not just the first one. The good threshold is 200 ms or less. High INP feels laggy: the user clicks, nothing happens, they click again, then the action fires twice.
Two supporting metrics matter alongside Core Web Vitals: Time to First Byte (TTFB, target 800 ms or less) measures server response speed, and First Contentful Paint (FCP, target 1.8 seconds or less) measures when any content first appears. TTFB is upstream of LCP, so fixing TTFB usually helps LCP. FCP is sometimes used as a proxy for perceived loading speed.
All Core Web Vitals are measured at the 75th percentile of real-user data, meaning at least 75 percent of page loads must hit the good threshold for the page to be considered passing. This is what Google looks at for ranking, not lab data from synthetic tests. Lab data (Lighthouse, WebPageTest) is useful for debugging and pre-deploy testing, but the ranking signal comes from Chrome User Experience Report (CrUX) data collected from real users.
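The 75th-percentile rule can be sketched in a few lines: collect real-user samples for a page, take the 75th percentile, and compare it against the threshold. This is a simplified illustration; real CrUX aggregation buckets values and segments by device, but the pass/fail logic is the same idea:

```javascript
// Return the value at the given percentile (0–100) of a sample set.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// A page "passes" LCP when its 75th-percentile value meets the 2.5 s threshold.
function passesLcp(lcpSamplesMs) {
  return percentile(lcpSamplesMs, 75) <= 2500;
}

// Nine fast loads and one very slow outlier still pass...
passesLcp([1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000, 9000]); // true
// ...but when more than a quarter of loads are slow, the page fails.
passesLcp([1200, 1400, 2600, 2800, 2900, 3000, 3100, 3200]); // false
```

This is why averages are misleading for CWV: a page can have a good average LCP while its slowest quarter of users, often on mobile, drags the 75th percentile past the threshold.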
Why Core Web Vitals matter: ranking and conversion
CWV impact two business outcomes that compound: search rankings and conversion rate. The mechanisms are different but reinforcing.
Search rankings: CWV are confirmed Google ranking signals since June 2021 as part of the Page Experience update, and remain so in 2026. The signal is not a tiebreaker between two equally relevant pages; it is a meaningful factor for borderline cases and a more significant factor for sites with widespread CWV failures. Google has stated that CWV are not the most important factor (relevance and authority dominate), but they are real, especially for competitive commercial queries where many sites are close in relevance. Sites that fail CWV across many URLs find it harder to rank in those competitive categories.
Conversion lift: industry data consistently shows 5 to 25 percent conversion improvements from 1 to 3 second LCP improvements. Walmart reported 2 percent conversion lift per 1 second of speed improvement. Vodafone reported 8 percent revenue increase from improving LCP by 31 percent. The pattern holds across e-commerce, B2B SaaS, lead generation, and content sites. The mechanism is simple: faster pages reduce bounce, increase pages-per-session, and reduce friction at decision points.
Bounce rate impact compounds: as page load time grows from 1 second to 3 seconds, the probability of a bounce rises by roughly 32 percent; from 1 second to 5 seconds, it rises by roughly 90 percent. The losses compound over time as more users abandon before the page finishes loading.
Crawl efficiency matters for large sites: Googlebot allocates crawl budget based on site responsiveness. Slow sites get crawled less, meaning fewer pages indexed and slower index updates after content changes. For sites with thousands or tens of thousands of pages, CWV failures translate directly to indexation problems.
AI engine visibility correlation: AI answer engines (Claude, Perplexity, ChatGPT search) tend to cite faster sites more often because their crawlers can extract content more reliably. Slow sites with rendering issues get cited less often. As AI engine visibility becomes a meaningful discovery channel, CWV becomes part of AEO too, not just traditional SEO.
Largest Contentful Paint (LCP): common issues and fixes
LCP is usually the most visible Core Web Vital because it directly correlates with the user's perception of "how fast does this page load." It is also the metric that has the largest gap between best and worst performers; well-optimised sites hit 1.5 seconds, while poorly optimised sites can be over 8 seconds.
The most common LCP issues are predictable. The hero image being too large is the single most frequent cause; uncompressed JPGs over 500 KB or oversized images served at full resolution to small viewports dominate LCP. The fix is to convert to WebP or AVIF, compress to 80 to 120 KB for typical hero images, and serve responsive sizes via srcset.
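The srcset mechanics are worth seeing concretely. The sketch below is a simplified version of the browser's selection heuristic, assuming width (`w`) descriptors and `sizes="100vw"`; real browsers also factor in the sizes attribute and caching:

```javascript
// Simplified srcset selection: choose the smallest candidate at least as wide
// as the CSS viewport width × device pixel ratio; fall back to the largest.
function pickSrcsetCandidate(candidateWidths, viewportCssPx, dpr = 1) {
  const needed = viewportCssPx * dpr;
  const sorted = [...candidateWidths].sort((a, b) => a - b);
  return sorted.find(w => w >= needed) ?? sorted[sorted.length - 1];
}

pickSrcsetCandidate([480, 800, 1200, 2000], 390, 3); // 1200 — a 390px-wide 3× phone needs ~1170px
pickSrcsetCandidate([480, 800, 1200, 2000], 360, 1); // 480 — no reason to ship the 2000px hero
```

The point of the exercise: without srcset, that 360px 1× device downloads the full 2000px hero, often 5 to 10 times more bytes than it can display.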
Lazy-loading the LCP element is a self-inflicted wound that happens when teams apply loading="lazy" indiscriminately. Above-the-fold images should not be lazy-loaded; use loading="eager" or remove the attribute entirely on the LCP element. Many CMSs and frameworks now handle this automatically, but legacy sites often have the issue.
Render-blocking JavaScript and CSS are the second most common LCP issues. Synchronous JS in the head delays rendering until the script downloads and executes. Large CSS files block rendering until they finish downloading. The fixes are inlining critical CSS, deferring non-critical JS, removing unused scripts, and minifying everything.
Slow server response (TTFB over 800 ms) cascades into LCP because nothing can render until the server responds. Common causes are slow shared hosting, missing page caching, database bottlenecks on dynamic CMSs, and missing CDN. The fix is hosting upgrade plus caching plus CDN; usually all three together.
Web fonts loading synchronously block text render. Use font-display: swap, preload critical fonts, and subset to needed glyphs. For brand-critical typography, font-display: optional avoids layout shift but may show fallback fonts on slow connections.
Third-party scripts (analytics, chat widgets, ad tags) often load before main content because they are injected in the head with synchronous execution. The fix is deferring or async-loading; chat widgets in particular should load only after user interaction.
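Loading a widget only after the first interaction can be sketched as below. The script URL is a placeholder for your provider's embed script, and the injectable `win`/`doc` parameters are a testing convenience, not part of any widget API:

```javascript
// Load a third-party script only after the user first interacts with the page.
// `scriptSrc` is a placeholder — substitute your widget provider's embed URL.
function loadAfterFirstInteraction(scriptSrc, win = window, doc = document) {
  let loaded = false;
  const load = () => {
    if (loaded) return; // guard: several events may race to trigger the load
    loaded = true;
    const s = doc.createElement('script');
    s.src = scriptSrc;
    s.async = true;
    doc.head.appendChild(s);
  };
  ['pointerdown', 'keydown', 'scroll'].forEach(evt =>
    win.addEventListener(evt, load, { once: true, passive: true })
  );
  return load; // exposed so callers can also trigger loading manually
}
```

The trade-off is a short delay before the widget is usable after the first tap; for chat and support widgets that is almost always acceptable, and it keeps the script entirely off the critical rendering path.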
Cookie banners that are the LCP element happen when the banner is large, animated, or styled in a way that makes it the largest visible element. Optimise the banner itself, ensure it does not block underlying content render, and consider whether a less intrusive design can satisfy compliance.
Hero video autoplay can push the LCP element below fold or replace it entirely. Use a poster image, lazy-load the video itself, and consider whether a static hero image would work better than autoplay video.
Cumulative Layout Shift (CLS): common issues and fixes
CLS is the most fixable Core Web Vital. The issues are predictable, the fixes are well-understood, and most sites can hit good CLS within a few hours of focused work. The challenge is keeping CLS good as new content and features are added.
Images without dimensions are the most common CLS cause. When an img tag has no width and height attributes, the browser does not know how much space to reserve before the image loads, so other content shifts when the image arrives. The fix is to set width and height attributes (or aspect-ratio CSS) on every img element. This is a one-time site-wide audit.
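The audit itself is easy to script. The predicate below works on plain descriptor objects (a shape chosen here for illustration); in a browser you would feed it the attributes of `document.querySelectorAll('img')`, and in a node-based HTML audit the parsed attributes of each img tag:

```javascript
// An image is shift-safe if it has explicit width+height attributes,
// or a CSS aspect-ratio that lets the browser reserve space before load.
function causesShift(img) {
  const hasAttrs = Boolean(img.width && img.height);
  const hasAspectRatio = Boolean(img.aspectRatio);
  return !hasAttrs && !hasAspectRatio;
}

const images = [
  { src: '/hero.webp', width: 1200, height: 630 },
  { src: '/team.jpg' },                       // no reserved space — will shift content
  { src: '/logo.svg', aspectRatio: '4 / 1' }, // CSS aspect-ratio reserves space too
];
const offenders = images.filter(causesShift); // → only /team.jpg
```

Note that width and height attributes do not fix the rendered size; with `height: auto` in CSS they simply give the browser the aspect ratio it needs to reserve the correct space.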
Web fonts swapping in cause text reflow when the custom font replaces the fallback. The text changes width or line height, shifting everything below it. Use font-display: optional to avoid the swap, or match fallback font metrics to the custom font using CSS size-adjust properties, or preload the font so it arrives before render.
Ads or iframes loading late are an issue for media sites and sites with embedded content. Reserve space with min-height or aspect-ratio CSS on ad and iframe containers so the layout does not shift when the content arrives.
Cookie banners pushing content down is a frequent issue. The banner appears at the top of the page and shifts everything down when it loads, then shifts everything up again when the user accepts or dismisses. Use a fixed-position overlay or modal instead of inserting in the document flow.
Dynamic content insertion via JavaScript that adds elements above visible content causes shifts. Insert below the current viewport when possible, or reserve space for the element so it does not push other content.
Embedded videos and social posts (YouTube, Twitter, Instagram) load asynchronously with no reserved space, causing shifts when they arrive. Set explicit aspect-ratio containers around embeds before they load.
Animations using top, left, or width trigger layout reflows that count toward CLS. Use transform and opacity for animations; these run on the GPU compositor without triggering layout, so they do not affect CLS.
Interaction to Next Paint (INP): common issues and fixes
INP is the hardest Core Web Vital to optimise. It replaced First Input Delay in March 2024 because INP captures responsiveness across all interactions, not just the first one. INP observes every interaction (clicks, taps, key presses) over the page's lifetime and reports roughly the worst one: technically the value is the 98th percentile, so on pages with many interactions a single extreme outlier is discarded.
Long-running JavaScript on click is the most common INP issue. Event handlers that run heavy logic synchronously block the main thread, preventing the page from updating. The fix is to break work into smaller tasks with setTimeout or queueMicrotask, use Web Workers for heavy computation, or optimise the algorithm itself.
Large bundles parsing and executing on first interaction are common in JavaScript-heavy frameworks. Hundreds of kilobytes to several megabytes of JavaScript downloaded asynchronously may not have parsed and compiled by the time the user first interacts. Code-splitting, dynamic imports, deferring non-critical JS, and removing unused libraries all help.
Heavy event listeners on document (scroll, resize, input) with expensive logic block interactions. Throttle or debounce listeners, move logic to requestAnimationFrame, and use IntersectionObserver instead of scroll listeners where possible.
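A throttle wrapper for the scroll/resize case can be sketched as below. The injectable `now` parameter is a testing convenience, not part of any standard API, and `updateStickyHeader` is a hypothetical handler name:

```javascript
// Drop calls that arrive within `intervalMs` of the last accepted call,
// so an expensive handler runs at a bounded rate during scroll or resize.
function throttle(fn, intervalMs, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    const t = now();
    if (t - last >= intervalMs) {
      last = t;
      fn(...args);
    }
  };
}

// Typical use: run an expensive scroll handler at most every 100 ms.
// window.addEventListener('scroll', throttle(updateStickyHeader, 100), { passive: true });
```

For visibility checks specifically, IntersectionObserver is better than any throttled scroll listener because the browser does the geometry work off the main thread.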
Forced synchronous layout (also called layout thrashing) happens when JavaScript reads layout properties (offsetWidth, getBoundingClientRect) and writes styles in the same task. The browser must perform layout synchronously to give accurate measurements. Batch DOM reads and writes, use ResizeObserver and IntersectionObserver where possible.
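The batching idea can be sketched with a small helper of our own (not a library API): queue all reads, run them in one phase, then run all writes, so the browser performs a single layout pass instead of one per read/write interleaving.

```javascript
// Run all layout reads before all style writes. Interleaving them forces the
// browser to re-layout on every read; separating the phases avoids that.
function batchDomWork(reads, writes) {
  const measurements = reads.map(read => read()); // phase 1: reads only
  writes.forEach(write => write(measurements));   // phase 2: writes only
  return measurements;
}

// Instead of, per card: const h = card.offsetHeight; card.style.minHeight = h + 'px';
// queue the reads, then apply all writes against the collected measurements:
// batchDomWork(
//   cards.map(c => () => c.offsetHeight),
//   [hs => cards.forEach((c, i) => { c.style.minHeight = hs[i] + 'px'; })]
// );
```

In production code the same discipline is usually enforced by scheduling reads and writes in `requestAnimationFrame`, or by sidestepping measurement entirely with ResizeObserver.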
Third-party scripts blocking the main thread are the most stubborn INP issue. Analytics, tag managers, chat widgets, and customer support tools run heavy JavaScript on the main thread. The fix is deferring to after first interaction, using partial hydration in modern frameworks, or facade patterns where a lightweight placeholder shows until the user interacts with that specific feature.
Large React or Vue re-renders cause INP issues when the component tree re-renders entirely on a small state change. Use React.memo, useMemo, useCallback for expensive components; check React DevTools profiler to identify unnecessary re-renders.
Long tasks from third-party iframes (YouTube embeds, social widgets) cause main thread contention even when the user is not interacting with them. Use facade patterns: a lightweight placeholder until the user clicks play or otherwise interacts with the embed.
How to measure Core Web Vitals correctly
CWV measurement has two distinct types: lab data and field data. Lab data comes from synthetic tests under controlled conditions (Lighthouse, WebPageTest); field data comes from real users (CrUX, RUM tools). Both matter, but they answer different questions.
Field data is the ranking signal. Google uses Chrome User Experience Report (CrUX) data, which aggregates real-user measurements from Chrome users who opt into anonymous performance data sharing. CrUX data is what Search Console reports, what shows in PageSpeed Insights as "Field Data," and what affects rankings. CrUX is updated monthly and only available for sites with sufficient Chrome traffic.
Lab data is the debugging tool. Lighthouse runs synthetic tests with controlled CPU and network throttling, producing reproducible scores. Lab data is useful for testing changes before deploying, comparing alternative implementations, and identifying specific bottlenecks. But lab scores do not directly affect rankings; they are a proxy for what real users experience.
PageSpeed Insights combines both: it shows lab data (Lighthouse) and field data (CrUX) for any URL. This is the fastest way to assess a single page. For site-wide health, use Search Console's Core Web Vitals report, which groups URLs by pattern and shows trends over time.
Real User Monitoring (RUM) tools provide continuous field data from your specific visitors. Tools include New Relic, Datadog RUM, SpeedCurve, and self-hosted via the web-vitals JavaScript library. RUM is essential for sites where CrUX data is sparse (low Chrome traffic) or where you need to track CWV by user segment, geography, or device.
The web-vitals JavaScript library lets you send field data to your own analytics (typically GA4). This gives you real-user CWV data for every page on your site, segmented however you want. The setup is a small JS snippet plus GA4 configuration; total effort is 2 to 4 hours for a typical site.
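A minimal wiring sketch, assuming the official web-vitals npm package and an existing GA4 `gtag()` setup. The event shape below follows Google's commonly recommended pattern for sending Web Vitals to GA4; adapt the field names to your own reporting conventions:

```javascript
// Convert a web-vitals Metric object into a GA4 gtag() event tuple.
// CLS is unitless and tiny, so it is scaled to an integer like the ms metrics.
function toGa4Event(metric) {
  return ['event', metric.name, {
    value: Math.round(metric.name === 'CLS' ? metric.delta * 1000 : metric.delta),
    metric_id: metric.id,        // lets you re-aggregate deltas per page load
    metric_value: metric.value,
    metric_delta: metric.delta,
  }];
}

// Browser wiring (requires the web-vitals package):
// import { onCLS, onINP, onLCP } from 'web-vitals';
// [onCLS, onINP, onLCP].forEach(report => report(m => gtag(...toGa4Event(m))));
```

Reporting the delta rather than the raw value matters because CLS and INP can update over a page's lifetime; summing deltas by `metric_id` on the analytics side reconstructs the final value per page load.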
Platform-specific CWV patterns
Different platforms have different CWV defaults and different common bottlenecks. Knowing the patterns saves time during audits.
Webflow performs strongly out of the box (Lighthouse 85 to 95 on mobile, 85 to 95 percent CWV pass rate). The common bottlenecks are custom code embeds (third-party scripts injected via embed elements), large hero videos, and third-party widgets (chat, calendar, social). The fixes are auditing code embeds, optimising video delivery, and lazy-loading below-fold widgets.
WordPress with a custom theme and minimal plugins performs well (Lighthouse 80 to 92, 60 to 80 percent CWV pass rate). The common bottlenecks are plugin overhead (each active plugin adds JS and CSS), theme bloat (page builders are particularly bad), and unoptimised images. The fixes are minimal plugin sets, custom theme, image plugin, caching, and CDN.
WordPress with page builders (Elementor, Divi, Beaver Builder) typically performs poorly (Lighthouse 40 to 70, 15 to 35 percent CWV pass rate). Page builders generate heavy DOM, render-blocking CSS, and excessive JavaScript. The fix is usually migration to a block-based or custom theme; aggressive caching helps but does not fully solve the problem.
Shopify performs moderately (Lighthouse 50 to 80, 40 to 65 percent CWV pass rate). The common bottlenecks are theme app code, third-party apps (each app injects scripts), and cart and checkout overhead. The fixes are app audits, lazy-loading non-critical sections, and choosing performance-focused themes (Dawn, Spotlight, Studio).
Next.js sites (modern React with Server Components) perform very strongly (Lighthouse 85 to 98, 80 to 95 percent CWV pass rate). The common bottlenecks are client-side hydration (large component trees), large JS bundles, and third-party scripts. The fixes are using Server Components for non-interactive content, image optimisation via next/image, and bundle analysis.
Static site generators (Astro, Hugo, Eleventy) usually perform best (Lighthouse 95 to 100, 90 to 99 percent CWV pass rate). The output is essentially HTML and CSS with minimal JavaScript, so CWV failures are rare and usually caused by third-party scripts. The fix is limiting third-party scripts; the SSG handles everything else.
Mobile vs desktop: where the real fight happens
Google completed its rollout of mobile-first indexing in 2023, making it the default for all sites. This means Google primarily uses mobile crawl data for indexing and ranking, even for sites with primarily desktop visitors. Mobile CWV scores are the ones that affect rankings.
Mobile traffic share globally is 55 to 65 percent in 2026. Even B2B sites with desktop-heavy buyer journeys typically see 30 to 45 percent mobile traffic for top-of-funnel content (blog posts, marketing pages). Sites that perform well on desktop but poorly on mobile see this asymmetry directly impact mobile-derived organic traffic.
Desktop CWV is usually easier than mobile because desktop has faster connections (cable, fibre with low latency), larger screens (less likely to need responsive image swaps), and faster CPUs. Most sites that pass mobile CWV also pass desktop CWV; the reverse is rarely true.
Mobile reality is harsh. Real-world mobile testing should assume 4G connections (10 to 20 Mbps with 100 to 300 ms latency) and mid-tier devices (similar to a 3-year-old Android), not flagship phones on 5G. Lab tests like Lighthouse simulate this in the mobile preset; check that your tests use mobile preset, not desktop.
The largest mobile bottleneck is JavaScript. Mobile CPUs parse and execute JavaScript 4 to 6 times slower than desktop CPUs. A site with 2 MB of JavaScript that runs fine on desktop can be unusable on mobile. JavaScript budget management is the highest-leverage mobile CWV work.
Network and CPU together compound. The combination of slower network and slower CPU on mobile means performance budgets must be much tighter than desktop. A 1.5 MB page that loads in 1.5 seconds on desktop might take 5 to 7 seconds on mobile.
Test on real devices, not just emulators. Chrome DevTools mobile emulation is useful for layout, but real-device testing reveals actual performance. Use BrowserStack, real phones, or Chrome remote debugging for accuracy.
CWV implementation roadmap: from audit to ongoing monitoring
A proper CWV engagement has four phases: baseline, prioritisation, fixes, and monitoring. Trying to skip phases or batch them creates wasted work.
Baseline: capture current state. Pull CrUX data for the top 20 pages by traffic. Run Lighthouse mobile audits and document scores. Record TTFB, LCP, CLS, INP per template type (homepage, product page, blog post, etc.). The baseline is what you compare improvements against later, so it must be specific and saved.
Prioritisation: not all pages are equal. Sort pages by traffic times CWV failure; the highest-impact fixes are pages with high traffic and bad CWV. Templates that affect many pages (e.g., a product detail template that affects 5,000 product pages) get higher priority than one-off pages.
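The traffic-times-failure scoring can be sketched in a few lines; the template names and numbers below are illustrative, not benchmarks:

```javascript
// Rank templates by expected impact: monthly traffic × share of loads failing CWV.
function prioritise(templates) {
  return templates
    .map(t => ({ ...t, impact: t.monthlyTraffic * t.failRate }))
    .sort((a, b) => b.impact - a.impact);
}

const ranked = prioritise([
  { name: 'product detail', monthlyTraffic: 90000, failRate: 0.6 },
  { name: 'homepage',       monthlyTraffic: 40000, failRate: 0.2 },
  { name: 'blog post',      monthlyTraffic: 15000, failRate: 0.9 },
]);
// ranked[0] is 'product detail': 54,000 failing loads/month dwarfs the others,
// even though the blog template has the worse fail rate.
```

The same arithmetic also explains template leverage: a fix to the product detail template applies to every URL it renders, so the effective impact multiplies by page count.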
Fixes: address LCP first because it has the highest user impact and is most visible. Then CLS, which often has quick wins. Then INP, which usually requires more involved code changes. Avoid trying to fix everything at once; sequential fixes let you measure each change's impact.
Monitoring: set up real-user monitoring after fixes. The web-vitals JavaScript library plus GA4 takes 2 to 4 hours and gives continuous CWV data. Set alerts for regressions. Plan monthly Lighthouse spot-checks and quarterly full re-audits because CWV degrades over time as new code and content arrives.
Document standards: define an internal CWV budget for each page template (e.g., LCP under 2.5s on mid-tier mobile). Use the budget as a deploy-blocker for changes that breach it. Without this discipline, gains earned in one quarter are lost in the next as new features land without performance review.
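One common way to enforce such a budget as a deploy-blocker is a Lighthouse CI assertion file. The sketch below uses the `lighthouserc.js` format with thresholds matching the 2026 "good" targets; the URLs are placeholders, and note that lab runs cannot measure INP directly, so Total Blocking Time serves as the usual lab proxy:

```javascript
// lighthouserc.js — fail the CI build when a template breaches its CWV budget.
const budget = {
  ci: {
    collect: {
      url: ['https://example.com/', 'https://example.com/products/sample'], // representative templates
      settings: { preset: 'perf' },
    },
    assert: {
      assertions: {
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift':  ['error', { maxNumericValue: 0.1 }],
        'total-blocking-time':      ['warn',  { maxNumericValue: 200 }], // lab proxy for INP
      },
    },
  },
};
module.exports = budget;
```

Paired with the field monitoring above, this gives both halves of the loop: CI blocks regressions before deploy, and RUM catches anything the lab conditions miss.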
UnFoldMart Core Web Vitals service tiers
UnFoldMart provides CWV optimisation as a standalone service or as part of broader SEO retainers. Pricing varies by site complexity, page count, and the level of implementation involved.
CWV audit (one-time) runs USD 1,500 to 4,500. Scope: single domain, top 10 to 20 pages. Deliverables: full CWV audit (LCP, CLS, INP, TTFB), prioritised fix list, implementation roadmap, before-and-after CrUX baseline. Best for brands that have an internal team or another implementation partner and need expert audit and prioritisation.
CWV audit plus implementation runs USD 4,500 to 18,000. Scope: single domain, top 30 to 50 pages. Deliverables: audit-tier deliverables plus implementation of priority fixes, ongoing CWV monitoring setup, 90-day post-implementation tracking. Best for brands that want both diagnosis and execution from the same partner.
Multi-domain or e-commerce engagements run USD 7,500 to 35,000. Scope: two or more domains, or an e-commerce site with 100 or more pages. Deliverables: multi-domain CWV audit, e-commerce-specific optimisations (cart, checkout, product pages), CDN configuration, image optimisation pipeline. Best for brands operating across multiple sites or transactional sites where conversion impact is significant.
CWV as part of full SEO retainer is included in retainers from USD 5,000 per month and up. Initial audit plus monthly CWV monitoring and quarterly re-audit as part of broader SEO program. No separate charge. Best for brands that want CWV as one component of a strategic SEO program rather than a standalone project.
Red flags in any CWV vendor proposal
CWV is a relatively technical area, which means vendors range from highly competent to outright misleading. Knowing the red flags before evaluating proposals saves time and money.
Watch for vendors who:

- promise specific Lighthouse scores (no vendor can guarantee scores; conditions vary)
- claim "one-click" or "automatic" CWV optimisation (real CWV work is per-page analysis, not a plugin)
- focus only on lab data without addressing field data and CrUX
- ignore INP entirely and mention only LCP and CLS
- give generic recommendations without site-specific analysis
- charge recurring "monthly CWV maintenance" with no defined work
- recommend removing all third-party scripts wholesale
- promise that CWV alone will fix rankings
- have no before-and-after measurement plan
- refuse to share previous case studies with CrUX before-and-after data
Trustworthy vendors approach CWV as a structured engagement: baseline measurement, prioritised fix list with rationale, implementation, and post-implementation tracking against the baseline. The work is real but bounded; vendors who try to make it sound bigger than it is are usually overselling, and vendors who try to make it sound easier than it is are usually selling a plugin.
Ready to fix Core Web Vitals?
Core Web Vitals are necessary infrastructure for any site that wants to rank competitively in 2026 and convert real users effectively. The work is well-understood, the metrics are clear, and the impact compounds across rankings, conversion, and AI engine visibility.
UnFoldMart audits and fixes CWV as a standalone service or as part of broader SEO and technical retainers. If your site is failing CWV on critical pages, the next step is a 30-minute strategy call where we audit your current state, identify the highest-impact fixes, scope the implementation work, and outline the monitoring rhythm that follows.
FAQs
Got Questions? We’ve Got Answers – Clear, Simple, and Straight to the Point
How should we monitor Core Web Vitals on an ongoing basis?
Use a combination of Search Console for field data, the web-vitals library with analytics for real-time tracking, Lighthouse CI for lab testing, and periodic audits. This layered approach helps catch issues before they impact rankings.
Does mobile or desktop performance matter more?
Mobile performance matters most because Google uses mobile-first indexing. Mobile devices are slower and networks less stable, so optimizing for mobile ensures better overall performance and rankings.
What is the most common cause of poor LCP, and how do we fix it?
The most common cause is unoptimized hero images. Fix by compressing images, using modern formats like WebP, serving responsive sizes, and avoiding lazy loading for above-the-fold content. Also optimize scripts, CSS, and server response time.
Which Core Web Vitals data should we trust?
Use field data from Chrome User Experience Report (CrUX) as the source of truth. Search Console shows this data at scale, while PageSpeed Insights provides both lab and field data. Lab tools are useful for debugging, but real-user data drives rankings.
What do LCP, CLS, and INP actually measure?
LCP measures loading speed, CLS measures visual stability, and INP measures responsiveness. Together they represent how fast, stable, and responsive a page feels to users. Google uses these because they closely reflect real user experience.
Still have questions?
No question is too small—let’s talk

Want to Turn Your Brand Into a Scalable Growth Engine?
We help modern businesses unify branding, websites, SEO, and paid media into one performance-driven system designed to scale.

