A technical SEO guide: Measuring and optimizing for Google Core Web Vitals

The first step to a great experience for your users is collecting data about the performance of your website. Google has developed a number of tools over the years to evaluate and report web performance.

Core Web Vitals is one of them: a set of performance signals Google considers essential to a good web experience.

This article will cover the Core Web Vitals, as well as key tips and tools for improving your website performance and delivering a great page experience to users.

The evolution of web performance

Improving site performance is no longer as easy as it once was.

In the past, websites were often held back by bloated resources and laggy connections. You could outperform competitors simply by shrinking a few images, turning on text compression or minifying your stylesheets and JavaScript modules.

Today, connections are faster, and many plugins handle basics like image compression and cache deployment automatically.

Google’s quest for a faster web persists. PageSpeed Insights (PSI), now hosted at pagespeed.web.dev, remains the go-to tool for evaluating an individual page load.

Although many people feel PSI scores are excessively punitive, they’re still the closest thing we have to seeing how Google’s page speed signals weigh and rank sites.

To pass the latest iteration of Google’s page speed test, you’ll need to satisfy the Core Web Vitals assessment.

Understanding Core Web Vitals

Core Web Vitals are a set of metrics folded into Google’s broader page experience signals, introduced in 2021. According to Google, each metric “represents a distinct facet of the user experience, is measurable in the field, and reflects the real-world experience of a critical user-centric outcome.”

Core Web Vitals currently comprise Largest Contentful Paint (LCP), First Input Delay (FID, soon to be replaced by Interaction to Next Paint) and Cumulative Layout Shift (CLS). This article also covers the supporting metrics First Contentful Paint (FCP) and Time to First Byte (TTFB).

Web.dev describes each metric as follows.

First Contentful Paint

“The First Contentful Paint (FCP) metric measures the time from when the page starts loading to when any part of the page’s content is rendered on screen. For this metric, ‘content’ refers to text, images (including background images), <svg> elements and non-white <canvas> elements.”


What it means for SEOs

FCP is easy to understand: as a page loads, certain elements are “painted” (arrive) before others. In this context, “painting” means on-screen rendering.

The FCP is logged once any part of the rendered page – for example, the main navigation bar – has loaded.

Think of it as how quickly the page appears to start loading: the page load isn’t complete, but it has visibly begun.

First Input Delay (FID)

FID measures the time from when a user first interacts with a page (that is, when they click a link or tap a button) to the moment the browser is actually able to begin processing event handlers in response to that interaction.


What it means for SEOs

In March 2024, FID will be replaced by the Interaction to Next Paint metric (INP).

In plain terms: when a user interacts with an element on the page (clicking a link, sorting a list, using faceted navigation), how long before the site begins processing that request?

Interaction to Next Paint

“INP assesses a page’s overall responsiveness to user interactions by observing the latency of all click, tap and keyboard interactions that occur throughout a user’s visit to a page. The final INP value is the longest interaction observed, ignoring outliers.”


What it means for SEOs

As mentioned above, INP will replace FID as a Core Web Vital in March 2024.

INP is a more sophisticated metric: it captures responsiveness across the entire visit rather than just the first interaction, and so carries deeper information.

Time to First Byte (TTFB)

“TTFB measures the time between the request for a resource and when the first byte of a response begins to arrive.”


What it means for SEOs

In plain terms: once a resource (an embedded image, a JavaScript module, a CSS stylesheet, etc.) is requested, how long before the site begins to deliver it?

Imagine you’re on a website and an image embedded in the page has started to load but hasn’t finished. How long did it take for the first byte of that image to travel from the server to your client (the web browser)?

Largest Contentful Paint (LCP)

“The Largest Contentful Paint (LCP) metric reports the render time of the largest image or text block visible within the viewport, relative to when the page first started loading.”


What it means for SEOs

LCP is one of the most important metrics, yet also one of the hardest to satisfy.

The LCP is recorded once the largest piece of content in the viewport – a large text block or an image – has rendered.

You can read it as: how long does it take for the main content of the page to load?

Smaller items may still be loading – items that most users won’t even notice.

But by the time the LCP is logged, the big, obvious chunk of your page has loaded. If that takes too long, you fail the LCP assessment.
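If your LCP element is a hero image, one common, low-risk optimization is to tell the browser to fetch it early. A minimal sketch, assuming a hypothetical image path:

```html
<!-- In the <head>: ask the browser to fetch the hero image early and at
     high priority, instead of waiting to discover it during layout -->
<link rel="preload" as="image" href="/img/hero.jpg" fetchpriority="high">
```

Compressing the image itself, plus server-side fixes like caching and a CDN, help too, since LCP includes the time it takes to deliver those bytes.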

Cumulative Layout Shift (CLS)

“CLS is a measure of the largest burst of layout shift scores for every unexpected layout shift that occurs during the entire lifespan of a page.

A layout shift occurs any time a visible element changes its position from one rendered frame to the next.

A burst of layout shifts, known as a session window, is when one or more individual layout shifts occur in rapid succession with less than 1 second between each shift and a maximum of 5 seconds for the total window duration.

The largest burst is the session window with the maximum cumulative score of all layout shifts within that window.”
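To make the session-window arithmetic concrete, here is a hedged sketch in plain JavaScript. Entries are shaped like the browser’s layout-shift performance entries (startTime in milliseconds, value as the shift score); per web.dev’s definition, shifts under 1 second apart share a window, a window is capped at 5 seconds, and CLS is the highest-scoring window.

```javascript
// Compute CLS from a list of layout-shift entries by grouping them into
// session windows and returning the largest window's summed score.
function cumulativeLayoutShift(entries) {
  let maxBurst = 0;     // highest session-window score seen so far
  let windowScore = 0;  // running score of the current window
  let windowStart = 0;  // startTime of the current window's first shift
  let lastTime = -Infinity;

  for (const { startTime, value } of entries) {
    // Start a new session window when the gap since the previous shift
    // reaches 1s, or the current window would exceed 5s in total.
    if (startTime - lastTime >= 1000 || startTime - windowStart >= 5000) {
      windowScore = 0;
      windowStart = startTime;
    }
    windowScore += value;
    lastTime = startTime;
    maxBurst = Math.max(maxBurst, windowScore);
  }
  return maxBurst;
}

// Two bursts: 0.1 + 0.05 = 0.15, then 0.05 + 0.2 = 0.25. CLS is the larger.
const cls = cumulativeLayoutShift([
  { startTime: 100, value: 0.1 },   // burst 1
  { startTime: 600, value: 0.05 },  // gap < 1s: still burst 1
  { startTime: 3000, value: 0.05 }, // gap >= 1s: burst 2 begins
  { startTime: 3400, value: 0.2 },  // gap < 1s: still burst 2
]);
console.log(cls); // 0.25
```

In the field, these entries come from a PerformanceObserver watching layout-shift entries; the scoring logic is the same.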


What it means for SEOs

In the past, when optimizing pages for speed was simpler, many website owners discovered they could achieve high page speed scores by deferring render-blocking resources.

That was great for speeding up page loads, but it made navigating the web glitchier.

Deferring CSS (which controls the styling of your webpage) allows the page’s content to load before the CSS is applied.

Only once the stylesheet arrives is the page styled – and elements can visibly jump into their final positions.

You’ve felt the result: you load a webpage, go to click a link, the link jumps, and you click the wrong one.

These experiences can be incredibly frustrating, even if they only take a few seconds.
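The classic defense is to reserve space for anything that arrives late, so nothing jumps. A minimal sketch (the paths and dimensions here are made up):

```html
<!-- width and height let the browser reserve the image's box before the
     file arrives, so surrounding links don't shift when it loads -->
<img src="/img/banner.jpg" width="1200" height="630" alt="Banner">

<!-- the CSS-only equivalent, useful for responsive embeds -->
<style>
  .video-embed { aspect-ratio: 16 / 9; width: 100%; }
</style>
```

The same principle applies to ads, iframes and injected banners: give them a fixed slot and they can’t push your content around.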

Because website owners were trying to “game” the system by deferring resources, Google needed a counter-metric: one that weighs gains in page speed against losses in user experience.

Enter Cumulative Layout Shift (CLS). It’s a tricky customer: try to apply speed boosts to your pages without considering your users, and CLS will ruin your day.

CLS scrutinizes your page load for glitches and late-applied CSS rules.

Fail it, and you won’t pass the Core Web Vitals assessment – even if you’ve satisfied all the speed metrics.

Assessing your Core Web Vitals for improved UX and SEO

PageSpeed Insights is one of the best tools for evaluating a webpage’s performance. Its report is divided into two sections: field data (real-user measurements, which power the Core Web Vitals assessment) and lab data (a simulated diagnostic run).

Let’s use an example to make this clear:


https://pagespeed.web.dev/analysis/https-techcrunch-com/zo8d0t4x1p?form_factor=mobile

You can view the TechCrunch homepage page speed metrics and ratings.

You can see from the screenshot that the page failed the Core Web Vitals assessment.

Crucially, the Mobile results tab renders by default – and in a mobile-first indexing world, these are the results that matter.

Select the Origin toggle to see data averaged across your site’s whole domain rather than just the homepage.

You will find the familiar, old numeric page rating further down the page:

What’s the difference between the Core Web Vitals assessment and the old page speed rating?

The new Core Web Vitals assessment is based primarily on data from the field (real users).

The old numeric ratings are based on lab data – simulated mobile crawls that are only an estimate.

Google’s search ranking systems now use the Core Web Vitals assessment; the numeric rating derived from simulated lab data is not used in Google’s ranking algorithm.

The trade-off is that field data, though it’s what counts for ranking, doesn’t offer much diagnostic detail, while the lab data is rich in diagnostics but carries no ranking weight.

In practice, you use the lab opportunities and diagnostics to work out how to pass the field-based Core Web Vitals assessment. And once you’ve made changes, you’ll need to wait for Google to gather fresh field data from your site before the assessment reflects them.

Both the Core Web Vitals assessment and the old page speed rating draw on many of the same metrics: both, for example, look at First Contentful Paint, Largest Contentful Paint and Cumulative Layout Shift. In that sense the two rating systems examine similar things; the difference lies in the depth of detail and the source of the data.

The bottom line: it’s the Core Web Vitals assessment you must pass, and addressing the lab opportunities and diagnostics is your best route there – just remember that the two tests are not directly connected.


PageSpeed Insights: Assessing your CWVs

Now that you know what each metric measures, let’s run through an actual example.

Let’s return to our TechCrunch analysis:


https://pagespeed.web.dev/analysis/https-techcrunch-com/zo8d0t4x1p?form_factor=mobile

In this case, INP is only a small margin away from failing.

CLS is not without problems, but LCP and FCP are the most problematic.

Next, let’s see what PageSpeed Insights says under Opportunities and Diagnostics.

Now we must shift from field data to lab data in order to identify any patterns that could be affecting the Core Web Vitals.

In the upper-right corner, you will see a green boxed sub-navigation.

This can be used to focus on certain Core Web Vitals metrics and narrow down the opportunities.

In this case, the data tells a very clear story even without narrowing.

First, we’re advised to reduce unused JavaScript – JavaScript that is loaded but never executed.

There are also notes on reducing unused CSS – CSS that is loaded but never applied.

It’s also recommended that we eliminate render-blocking resources. These are usually JavaScript modules or CSS stylesheets.

To stop them blocking the page render, render-blocking resources need to be deferred – which, as we’ve already seen, can disrupt the CLS rating.

It’s therefore advisable to start constructing a critical CSS (and critical JavaScript) rendering path. This lets you inline the CSS and JavaScript required above the fold while deferring everything else.

This approach lets site owners balance page-load demands against CLS. It’s not a simple task and usually requires the assistance of a senior developer.
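As a rough sketch of what that critical rendering path looks like in a page’s head (the file paths and selectors here are hypothetical):

```html
<head>
  <!-- 1. Inline only the rules needed to style above-the-fold content -->
  <style>
    header, .hero { /* critical rules extracted at build time */ }
  </style>

  <!-- 2. Load the full stylesheet without blocking the first render -->
  <link rel="preload" href="/css/main.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>

  <!-- 3. Defer non-critical JavaScript so it doesn't block parsing -->
  <script src="/js/app.js" defer></script>
</head>
```

Because the above-the-fold content is styled immediately, deferring everything else doesn’t produce the layout shifts that CLS penalizes.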

Since we found unused CSS and JavaScript, it’s also worth performing a JavaScript code audit to determine whether JavaScript could be used more intelligently.


Now let’s focus on the diagnostics. Note that Google intentionally throttles these lab tests to a slow 4G connection, which is why items such as main-thread work look so long (17 seconds).

This is done to accommodate users who have slow or low bandwidth devices, which are very common.

I’d like to draw your attention to the “Minimize main-thread work” entry. This single entry is a treasure trove of information.

By default, most of a browser’s rendering and script-execution tasks are pushed through its main thread – a single processing thread. This can create significant page-load bottlenecks.

Even if your JavaScript is perfectly minified and delivered to the user’s browser quickly, it will by default join the queue for that single thread, meaning only one script can execute at a time.

Sending JavaScript quickly to your users is like shooting a firehose through a brick wall that has a centimeter-wide gap.

It’s not going to all go through, even if you did a great job!

Google increasingly frames client-side responsiveness as our responsibility. Like it or not, that’s how it is (so get to know it).

In frustration, you might ask: “Why does it work like this? Even mobile browsers have multiple processing threads now. There’s no reason for this to be so awkward.”

Actually, there is a reason: some scripts depend on the output of other scripts in order to execute.

In all likelihood, most websites would break if browsers suddenly started processing JavaScript in parallel and out of order.

So sequential script execution is the default behavior of modern web browsers for good reason. Why do I keep stressing the word “default”?

Because there are alternatives. The first is to process scripts on the user’s behalf, sparing the client’s web browser from doing so. This is known as server-side rendering (SSR).

This is a powerful tool for untangling JavaScript execution knots on the client side, but it’s also expensive.

Your server must now execute script requests for all of your users – and do it faster than the average user’s browser would. Take a minute to absorb that.

Not a fan of that idea? Let’s look at JavaScript parallelization. The idea is to use web workers to run scripts that don’t depend on each other in parallel, off the main thread, while order-dependent scripts still execute sequentially.

Forcing JavaScript to execute in parallel isn’t something to do indiscriminately, but in many cases integrating technology like this can reduce the need for SSR.

It will be difficult to implement and will require (you guessed it!) the time of an experienced web developer.

You might also be able to get help from the same person who audits your JavaScript code. Combine JavaScript parallelization with a critical JavaScript rendering path and you can really fly.

Here’s what I find most interesting about this example:

JavaScript execution accounts for 12 seconds, while main-thread work accounts for 17.

Does that mean 12 of the 17 seconds of main-thread work are JavaScript execution? Very likely.

We already know that JavaScript will be pushed to the main thread.

It’s also the default behavior of WordPress, the CMS this site runs on.

Since this site runs WordPress, it’s likely that nearly all of those 12 seconds of JavaScript time sit inside the 17 seconds of main-thread work.

This is a very useful insight: it shows that the majority of the main thread’s processing time is spent executing JavaScript. Given the number of scripts the page references, that’s hardly surprising.

Taking the crusade to Chrome DevTools

Remove the training wheels and get to work!

Open a fresh instance of Chrome as a Guest user, so that no plugins, extensions or cached clutter can skew your findings.

Remember: perform these actions in a clean Chrome Guest profile.

You’ll need to load the website you wish to analyze. In our case, it’s TechCrunch.

Accept cookies where needed. Once the page has loaded, open Chrome DevTools (right-click the page and select Inspect).

Navigate to the Performance tab and tick the Screenshots checkbox.

Reload the page while recording; DevTools will then generate a performance report.

Here is where you need to breathe deeply and not panic.

You can see the thin green boxed pane above that shows requests over time.

You can use your mouse to drag the box and select a specific time period. The rest of the analysis and page will adapt automatically.

The area I selected manually is covered by a semitransparent blue box.

This is where the page loads and I am interested in looking at it.

In this case, I’ve selected roughly the time range from 32 ms to 2.97 seconds. Let’s focus our attention on the main thread.

Remember how I said earlier that the main thread is the bottleneck for most rendering tasks?

We’re now looking inside that thread. You can see there are a lot of scripting tasks, shown in yellow.

As time passes, dark yellow chunks accumulate on the top two rows; these represent individual script executions and their processing time. Click any chunk for a detailed readout.

This is a powerful image, but you can find one that’s even more powerful in the summary section.

The data is summarized in a simple, easily digestible visual: a doughnut chart.

As you can see, scripting (script execution) is responsible for most of the page load time. So our earlier hypothesis – formed from Google’s combination of field and lab data – that JavaScript execution was bottlenecking the main thread appears to be accurate.

This is a major issue that will be faced by many people in 2023. There are few solutions available today.

The creation of critical JavaScript rendering paths is complex. JavaScript code audits require expertise, and JavaScript parallelization is not easy to implement.

Let’s now look at the call tree.

I find the Call Tree more useful than Bottom-Up.

The data is the same, but the Call Tree groups tasks into themed buckets such as Evaluate Script (script execution).

Within each bucket, the individual scripts are listed along with how much load time each accounted for: 11% of the time went to pubads_impl.jsm and 6% to opus.js.

Even without knowing what these modules are, this is often where optimization begins.


Other tools to measure and optimize Core Web Vitals performance

Congratulations if you’ve made it this far. In this deep dive, we used only PageSpeed Insights and Chrome DevTools.

You really can travel that light. That said, other tools can also be of great assistance.

Previously, improving your page speed rating was as easy as compressing a few images before uploading them. Nowadays? It’s a complicated Core Web Vitals crusade. Be prepared to engage fully; anything less will end in failure.

The article “Measuring for Google Core Web Vitals and optimizing it: A technical guide” appeared first on Search Engine Land.
