In the latest Google Search Off The Record Podcast, John Mueller was joined by Annie and Vivek.
Annie is a tech lead on the Chrome Web Platform Team, where she leads the team that develops new Core Web Vitals metrics.
Vivek is also on the core web vitals team, as a product manager.
Finally, Martin Splitt, a JavaScript developer on the search relations team, also joins John Mueller.
Together, these individuals offer us great insight into Google’s latest Core Web Vitals metrics.
SEO Insight #1: Page Speed is Highly Multi-Faceted
Martin posed a question about why managing speed is so difficult and why it’s such a significant topic in relation to search.
Vivek explained that speed matters a lot in Google’s own services. As they’ve worked with their own partners around the web, they’ve come to the realization that speed matters to other businesses as well.
Page speed is difficult even for the Google team to wrap their minds around, because it encompasses so many facets.
Things such as server speed, how much JavaScript is being loaded, or how large images are on the page all impact overall page speed.
Because many different parts contribute to the entire feeling of page experience, there have been several long-running debates about which parts of page experience really matter, and which technologies can make a real difference.
SEO Insight #2: The Team Began With a Very Open Approach to Page Speed
Vivek details that during their quest to tackle page speed, they began with a very open approach.
They did not limit themselves to any specific types of web pages – they examined as many different types of pages as they could.
They tried to understand what the common elements were that contributed to an amazing experience – the thing that causes users to go back to a website over and over again.
This is how they established the three Core Web Vitals: Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS).
They asked themselves about what the user really sees. What do they experience? And what are the things they encounter on a regular basis?
From the answers to these questions, they tried to derive what they wanted browser and web technologies to do.
For a long time, they didn’t realize that they could ultimately make the user experience measurable and repeatable.
SEO Insight #3: Why Do LCP, CLS, and FID Matter As Opposed to the Many Others Google Has Looked Into?
Vivek answered this question from Martin in the following way:
LCP, or Largest Contentful Paint, measures how long it takes for the user to physically see the most meaningful part of the page’s content. How quickly can they get to that content?
FID, or First Input Delay, describes how long it takes for a page to respond to the user’s first interaction. It applies to any page with interactive elements of some kind, such as buttons or forms.
Sometimes, JavaScript can get in the way of the page responding to that input. FID numbers help identify these issues as well as what’s causing them.
CLS, or Cumulative Layout Shift, is all about page stability. If content on a page moves around while it loads, it can be extremely distracting and disruptive. These shifts raise a page’s CLS score and need to be addressed.
Google really wants to ensure that any objects on the page remain where they are and where the users themselves expect them to stay.
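For readers who want to see these numbers directly, here is a minimal sketch, not Google’s official tooling, of how all three metrics can be observed in the browser with the PerformanceObserver API (the open-source web-vitals library handles edge cases this sketch ignores, such as background tabs and back/forward navigations):

```javascript
// Minimal sketch: observing the three Core Web Vitals in the browser.

// Largest Contentful Paint: render time of the largest element seen so far.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const latest = entries[entries.length - 1];
  console.log('LCP candidate:', latest.startTime, 'ms', latest.element);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// First Input Delay: gap between the first interaction and its handler running.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('FID:', entry.processingStart - entry.startTime, 'ms');
  }
}).observe({ type: 'first-input', buffered: true });

// Cumulative Layout Shift: running sum of shifts not caused by user input.
let clsScore = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) clsScore += entry.value;
  }
  console.log('CLS so far:', clsScore);
}).observe({ type: 'layout-shift', buffered: true });
```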
SEO Insight #4: What Happened to Time to First Byte and the Google Speed Index?
Martin asked Annie this question, referencing his past experiences as a web developer and how people were always concerned with Time to First Byte (TTFB) and the Speed Index (SI).
Annie explained that there are certain weaknesses with these numbers.
Time to First Byte isn’t something that the user sees; it’s a technical measure of how long it took for the user to receive some bytes. It says nothing about the page itself or when it becomes visible to the user.
It was a similar case with the onload event, the most popular page load metric at the time. The problem was that web pages are made out of code that can do anything, and many pages don’t really start loading their content until that onload event has already fired.
So onload, again, doesn’t qualify as a real user experience metric.
She was excited about the inception of Speed Index, considering it a major breakthrough, because it takes into account the average time at which pixels were physically painted on the page, making it a better measurement of what the user sees.
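To illustrate the difference, here is a minimal sketch of reading Time to First Byte and the onload time from the browser’s Navigation Timing API; both numbers are easy to obtain, but neither says anything about what the user actually sees:

```javascript
// Minimal sketch: TTFB and onload time via the Navigation Timing API.
// These are purely technical timestamps, not user-experience metrics.
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  const ttfbMs = nav.responseStart - nav.startTime;    // first response bytes arrived
  const onloadMs = nav.loadEventStart - nav.startTime; // onload event began
  console.log('TTFB:', ttfbMs, 'ms; onload:', onloadMs, 'ms');
}
```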
SEO Insight #5: What Does Google Try to Make Happen With Core Web Vitals?
Martin asked Annie what kind of breakthrough Core Web Vitals really represents, joking that when Speed Index came out he thought web developers would never need any new metrics.
Annie explained that there are two really big goals that Google has for core web vitals.
The first is a focus on the actual user experience: what the user sees and when they see it, something Time to First Byte never captured and Speed Index was the first to address.
The second goal is harder: knowing what real users actually see when they load the page in the field, not just an approximation from a lab test.
The main issue with Speed Index is that it works pixel by pixel across the screen. That raises security and privacy concerns, along with performance challenges, which is why it can’t be implemented as a performance API in the browser.
Therefore, they had to find a metric that could be used in a real user monitoring context. Largest Contentful Paint (LCP) was the major breakthrough in that regard.
SEO Insight #6: Why Is There Such a Big Difference Between What Different Tools Measure?
Martin asked Annie a weighty question in this regard: why is there such a big difference between tools and how they measure these things, for example between Chrome DevTools with Lighthouse, WebPageTest, and what Google Search Console shows in the Page Experience report?
These are two different ways of measuring: lab data and real user metrics (field data). What are the advantages and disadvantages of each?
Annie explained that lab data means you tell a computer to load the page and it gives you back a set of numbers.
Lab data can tell you a lot about worst-case scenarios: how does this website load on 3G or slow 2G? What happens on the worst device, one so bad you can’t even buy it on the market anymore?
Lab data can also give you many details: how long did every single request take? How many bytes were in all of the images? What were the blocking times of individual JavaScript files?
Lab data should not be ignored because it can really help you dig into what’s happening physically on the page.
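As a rough illustration of that kind of per-request detail, the Resource Timing API exposes the duration and transferred bytes of every request on a page; lab tools surface the same data in a much richer form, but this sketch shows where it comes from:

```javascript
// Minimal sketch: per-request details from the Resource Timing API.
for (const res of performance.getEntriesByType('resource')) {
  console.log(
    res.name,
    Math.round(res.duration), 'ms,',
    res.transferSize, 'bytes transferred'
  );
}
```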
Lighthouse measures from whatever device it runs on, so what you see on your own computer, in WebPageTest, and in Lighthouse are all individual settings. Every time the page is loaded you will get a different result: a different LCP, FID, and CLS.
Your real users, by contrast, make up a distribution: some will see the page load really fast, and others will see it load really slowly.
Generally there’s a long tail: most people get a fairly fast load, and the tail trails off into slower and slower experiences.
In Core Web Vitals, Google is measuring what the 75th percentile of overall users have seen.
If three quarters of the people using the website get a fast load time, that is considered a pretty good result.
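As a simplified sketch of that scoring approach, the snippet below takes a set of hypothetical field LCP samples, computes the 75th percentile, and compares it against the published 2.5-second "good" threshold for LCP; the helper is illustrative and not the exact methodology Google’s tooling uses:

```javascript
// Minimal sketch: scoring a field metric at the 75th percentile.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}

// Hypothetical LCP samples (in milliseconds) collected from real users.
const lcpSamplesMs = [1200, 1800, 1900, 2100, 2300, 2600, 3400, 5200];
const p75 = percentile(lcpSamplesMs, 75);

console.log('75th percentile LCP:', p75, 'ms');
console.log(p75 <= 2500 ? 'Good' : 'Needs improvement or poor');
```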
SEO Insight #7: What Happens When Everyone Has a Fast Connection? Will All of This Go Away?
John posed this thoughtful question to Annie: do site owners still need to worry about Core Web Vitals if their users all have top-tier phones, high-performance devices, and fast connections?
Annie answered with a very interesting perspective: different types of pages, based on audience or based on content, will have different values.
You still want to check your LCP, CLS, and FID, because they will tell you whether your content is actually showing up quickly for the users viewing your page.
Sometimes, it’s the case that the content doesn’t show up fast.
For some sites and audiences, it’s just much easier to load swiftly. Other sites are serving different types of audiences who might have slower connections and slower devices.
These types of sites really need to make sure they are serving those audiences well. Users want to see content load quickly, regardless of the device they are on.
Want to hear more? You can listen to the entire podcast here:
Listen to the Podcast
Let’s Talk Core Web Vitals Transcript
Welcome, everyone to the next episode of the Search Off the Record Podcast. Our plan is to talk a bit about what’s happening at Google search, how things work behind the scenes, and who knows, maybe have some fun along the way. My name is John Mueller. I’m a search advocate on the search relations team here at Google in Switzerland. I’m joined today by Martin, also on the search relations team, as well as Annie and Vivek, who both work on different aspects of the core web vitals. Annie and Vivek, could you introduce yourselves briefly?
Annie 0:42
Hi, I’m Annie. I work on the Chrome Web platform team. And I’m the tech lead of a team that develops new core vitals metrics.
Vivek 0:49
And Hi, I’m Vivek. It’s good to be with you, John. I’m the product manager for the team that develops the core web vitals metrics.
John 0:55
So awesome. Good to have you all here.
Martin 0:57
Yeah. I mean, I’m really, really excited to actually have you on this podcast. Because I think for pretty much ever since the inception of the web, people have been struggling with performance and speed and how to measure it, what to look for, how to improve it, how to build the fastest best web, there’s so many case studies out in the world, they’re telling us how important it is.
But why is this so hard? I mean, you’ve been in the trenches with this for a long time. And you have probably thought long and hard about this. Why is this entire topic of website speeds so hard? And why does it matter so much?
Vivek 1:39
Yeah, that’s a really great question. I mean, I think at Google, we’ve known for a really long time that speed is super important. It matters a lot in our own services. And the more we work with partners on the web, we realize just how much speed matters to their businesses as well. Like you mentioned, for a really long time, the web community has been trying to tackle this question of performance.
And it’s just been really difficult to wrap our arms around it, because it’s so multifaceted. Is it how fast your server is, or how much JavaScript you’re loading, or how large your images are on your page? There’s just so many different parts that contribute to that qualitative feeling that a user gets that, oh, this page is fast, or this page is stable. And so for a long time, we’ve had these long running debates about which parts of the user experience really matters and which parts of the underlying browser technology or web technology can make a big difference. And so I think what we did with core web vitals is first we took a very open approach to it, we looked at as many different web pages and different applications on the web as we could.
And we tried to understand what are the common elements that contribute to that great feeling and that great experience that users get. And that’s why we came up with the three core web vitals, the Largest Contentful Paint, First Input Delay and Cumulative Layout Shift. And what we did with these metrics was really take a user focused perspective: What does the user see? What do they experience? And what do they encounter? And then from that, try to derive what we actually want the browser and the web technologies to do. And so for a long time, we didn’t realize that we could even do this, that we could quantify the user experience and make it measurable in a repeatable way. But we were really happy with a couple of great breakthrough ideas that we developed in the open with the ecosystem to achieve exactly that.
Martin 3:12
Cool. I’m sure most people know this, but I just…one…this is sort of short, concise way and the definition. In one sentence each: What are these three metrics that you just mentioned, Largest Contentful Paint, Cumulative Layout Shift, and First Input Delay. Why them? And not any of the many, many others that we have looked into?
Vivek 3:35
That’s a great question. So Largest Contentful Paint basically measures how long it takes for the user to see the most meaningful part of the page’s content. A user obviously navigates to a page to look at something, read something, interact with something. So how quickly can we get the user that bit of content? First input delay is how long it takes for the page to respond to a user’s input.
So this is for a page that has an interactive element of some kind, maybe you’re clicking on a button, or maybe you’re submitting a form. And sometimes JavaScript and other things can get in the way of making the page respond to that user input, it’s really important that the page feels responsive. And then cumulative layout shift is all about page stability.
Once a user is looking at a page, maybe their eyes started scanning the text or looking at an image, it’s really disruptive for users to have content on the page move around. So we really want to make sure that when pages load and when elements are painted on the screen, they kind of stay where the user expects them to stay. And that really helps the user engage with the page quickly.
Martin 4:28
All right, okay, cool. That was clear and short. Congratulations. I think I couldn’t have summarized this shorter. I’ll steal all of these when I’m asked the next time. So I’ve been a web developer for quite a long time and I remember at first we were basically just like everyone was totally psyched about Time to First Byte and that was the biggest most important thing that you would ever look into and then came Speed Index. Do you remember the Speed Index?
Annie 4:53
Oh, yeah, yeah, I was so excited when they developed Speed Index because all of the loading metrics up to that point were very technical. Time to First Byte isn’t something the user sees, Time to First Byte is a technical measure of how long it took the user to receive some bytes. But then when is the web page visible, we still don’t know.
And it’s the same thing with the onload event, which was the most popular page load event. And the problem with that is, again, it’s a technical point in time, it sounds great, right? All the HTML on the page has been parsed, and the sub resources are loaded. But because web pages are made out of code, and you can do anything you want, a lot of web pages don’t even start really loading until that onload event happens. So that event, again, isn’t a user experience. That was the case until Speed Index came along, this major breakthrough that I was so excited about. It’s the average time at which pixels were painted on the page. That’s about what the user sees. And that was just a really big breakthrough.
Martin 5:43
And would you say that core web vitals are like a different kind of breakthrough? Because I mean, hypothetically, as far as I was concerned, a developer back when Speed Index came out, I was like, Oh, this is the end-all, know-all metric. We don’t need any new metrics. And boy, was I wrong. Why did that happen?
Annie 6:05
Okay, so there’s two really big things that we’re trying to make happen with core web vitals. The first is a focus on the actual user experience. If we’re looking at how long it takes to load a page, we want to look at what did the user see? When did the user see something? And Speed Index obviously accomplishes that. The second thing is a bit harder.
What we really want to look at –not just what does the user see, theoretically, but what does the real user see when your users are actually loading your web pages? When do they see something happen? When do they actually see that the page is loaded? And the problem with the Speed Index is like it goes pixel by pixel.
So there are very many security and privacy reasons and also performance reasons that we can’t implement this as a performance API in Chrome or other browsers. So we needed to find a metric that we could use in a real user monitoring context. And that’s where Largest Contentful Paint is a really big breakthrough.
Martin 6:58
Right, right. Right. Right. Right. But I think you mentioned something very important that I get asked so many times out there. Why is there such a big difference between what I measure in my Chrome dev tools with lighthouse versus what web page test measures versus what I see in Search Console in the Chrome Web vitals report, or in the page experience report?
So you already said like real user metrics and Speed Index wasn’t that so there are different ways of measuring these things and different places in which to measure right? So this lab data and real user metrics? Could you explain for us very shortly what the advantages and disadvantages of each of these two are?
Annie 7:42
Yep. So lab data is basically anytime either you have a computer sitting in front of you, or you literally like when you go to a web page test, you’re loading in like a lab, but it’s a computer, you tell the computer: load this web page and give me a bunch of numbers. And the great thing about lab data is it can tell you a lot of worst case scenarios. What if the network was like 3G? Or like slow 2G? What if it was the worst device you can’t even buy on the market anymore? What if all these things went wrong, you can still see how your page would load. And it can give you a ton of details? How long did every single request take?
How many bytes were in all of the images, all of the blocking times, what was the longest individual JavaScript. Like you can get just dozens and dozens and dozens of details about your particular page. So the lab data can really help you dig into what’s happening. But there’s that question of what is happening. As you said, like, you might get a different result from Lighthouse versus WebPageTest, because they’ve got a different device, right?
Like what you see on your computer, what you see in WebPageTest, and what you see in Lighthouse, those are all like individual settings. Basically, every time a web page is loaded, you’re going to get a new result like a different page load time, a different LCP, FID and CLS. And what we really want people to focus on is what their real users are seeing. You got a bunch of users for your website and they make up a distribution.
Some people see that it loads really fast, some people see it load really slow. Generally, there’s a long tail where most people are pretty fast, and then it gets slower and slower over time. With the core web vitals, we’re measuring what the 75th percentile of users have seen. So if three quarters of the people that use your website get a fast load time, we say that’s pretty good.
Martin 9:27
Okay, that sounds really, really cool. I think, for me as a developer, lab testing is also very nice, because I can do that repeatedly while I’m developing the page, and then just like see how my changes impact these things, which I can’t really do if I had to, like, update the real world production version, and then just wait a couple of days until I get the real world data back.
Okay, so that’s cool. That makes sense to me so far. Is there anything else that you would want people to understand? Like, for instance, one question that I have: it looks very much like these core web vitals, kinda, that’s the perception from some people, fell from the heavens, and are now happening behind the scenes. But I don’t think that’s true, right? I think it’s being discussed very much in the open.
Annie 10:14
Yeah, we developed these as open standards, we talk about them in the web performance working group. And the original core web vitals like some of them come from ideas that are many, many, many years old. I remember being at Velocity 10 years ago, when people were like, “Well, can Chrome show when content was painted?”
And basically, it took years and years for Chrome to be able to be fast enough that it could show like when an individual element was painted, and then we made LCP, based on that, but these ideas are like over a decade old at some points.
John 10:46
So if I have kind of a high end website, a lot of high end users with high end phones, or high end devices, in general, with fast connections, do I still need to worry about all of this? Or is it basically like, at some point, you will have a fast internet connection? And you don’t have to worry about all of these details anymore? Or how do you see that happening?
Annie 11:07
So what’s really interesting is different types of pages, either based on their audience or based on their content, are going to have different values, even if they never look at the metrics. So what you want to do is look and see, like, what is your Largest Contentful Paint and your Cumulative Layout Shift, and check on your First Input Delay.
And if that’s correct, right, all your users have fast devices, then maybe your content will show up fast. I’m pretty surprised that you know, I have a fast device, and sometimes content does not show up fast. So you should definitely check your numbers. But for some sites, like yeah, it’s easier to be fast. And for some sites, they’re serving audiences that have slower connections or slower devices.
And I think that they really need to make sure that they’re serving those audiences. People want to see the content load fast, no matter what kind of device they have.
Vivek 11:51
Yeah, I think that’s a really important point, I think the other aspect is that your audience and your users, they have finite attention and time, and they’re not necessarily going to go and sample every single web business in your industry evenly, they’re going to gravitate towards the ones that give them a great experience.
And certainly that has to do with the content, or the products you’re selling or the aspects of the business that are unique to you. But if they’re interacting with you primarily through the web, then there’s always going to be an upside to building a faster site.
And that’s something that we want to encourage all web properties and web businesses to really take seriously. And I think the core web vitals are a great way of shining a light on that particular aspect of the user experience.
Martin 12:25
And what would I, as a site owner, do if I think I, let’s say, I think I found a bug in one of the metrics where my site’s clearly performing well, and I tested with real world users, but my numbers come back very differently. Where can I turn? Where can I give feedback on how much I think these metrics actually reflect the reality?
Annie 12:47
So it depends on how deep you want to go with the feedback. We have an email list where we welcome any and all feedback, [email protected], you just explain as much as you can about the problem. And then you send it and like our commitment is that we were going to review all that. And we’ll keep it in mind as we’re modifying the current metrics and making new metrics. If you’re like, “No, I am exactly sure what is happening.
And this is incorrect,” you can make what’s called a reduced test case, that’s just a web page that has nothing on it, but the problem you’re seeing. So let’s say that you think that the largest content on your page is this image. And you see that it’s loading in 1.2 seconds. And then the LCP is being reported by the performance API observer as like 2.8 seconds.
So that’s a lot of like technical work to figure that out. But if you’re willing to do that work and take everything off of the page, except for that one image, and like, you know, write a little JavaScript that has a performance observer that shows the problem, then you can file a bug at crbug.com, there’s a template for core web vitals, file the bug, put the reduced test case in there, and then we’ll take a look and try to really understand what’s going on.
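To make the reduced test case idea concrete, here is a minimal sketch of what such a page might look like; the image file name is a placeholder for whatever element you believe should be the largest content on your page:

```html
<!-- Minimal sketch of a reduced test case: a page containing only the image
     in question, plus a PerformanceObserver that logs what LCP reports.
     "hero.jpg" is a placeholder, not a real asset. -->
<!DOCTYPE html>
<html>
  <body>
    <img src="hero.jpg" alt="Largest content under test">
    <script>
      new PerformanceObserver((list) => {
        for (const entry of list.getEntries()) {
          console.log('LCP element:', entry.element,
                      'reported at', entry.startTime, 'ms');
        }
      }).observe({ type: 'largest-contentful-paint', buffered: true });
    </script>
  </body>
</html>
```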
John 13:56
So cool. I think when we talk with people externally, one of the topics that always comes up is kind of figuring out how to prioritize this kind of work, where the SEOs will come to us and say, actually, so how important is core web vitals anyway? And usually we just wave our hands, because I think that’s what we do on the team. But from your point of view, how would you decide how to prioritize working on performance, on core web vitals, versus working on a feature or other parts of a website?
Vivek 14:29
That’s a really great question and really important decision that I think a lot of web businesses need to make. And this is definitely true in the last year that we’ve seen a lot of disruption, a lot of changes to how businesses operate, with businesses becoming increasingly reliant on the web in a way they never have before. I think that our first and foremost position is that this is a decision every business has to make for themselves.
And our goal is to sort of inform that decision as best we can. There definitely is a point for every online business where your site is unbelievably fast, your user experience is rock solid, your site is responsive and stable. And there may be other aspects of your business you should focus on, maybe you should be focusing on pricing. Or maybe you should be focusing on new product development.
Or maybe you should be building new features. And we want to make sure that web businesses have the ability to know when they’ve hit that point, but also know when they haven’t. We’ve kind of observed that kind of sawtooth pattern of performance, kind of getting worse by attrition over a long period of time, and then sites panicking and investing heavily in fixing their performance issues and recovering.
And we want to kind of make sure that we can make that more of an ongoing process so that this trade off is real and present with every release of your site or with every release of your web experience. And so I think the core web vitals approach there is: here are the metrics, and here are some thresholds that we found yield really great user experiences across the board. And there may come a time where there are other priorities in your business, like responding to current events, or responding to business emergencies, or responding to putting things on sale for the holidays, that sort of thing.
And we definitely don’t want to take away from that, or even suggest that those aren’t important. It’s primarily about being able to say, with every release, I’ve launched this new feature, or I’m prototyping something new, perhaps I’m rolling out an A/B test for a new feature.
And among the various metrics, including conversion and revenue, that I might be concerned about performance, hopefully is one of them as well. So at least you’ll know when you’re making that trade off, and can make it consciously in a way that suits your business.
Martin 16:14
That’s really cool. I think a performance culture is something that many, many IT departments and/or companies are lacking. And it has been traditionally hard to advocate for this. Because if something is clearly broken, like it doesn’t work, then everyone’s like, “Oh, we need to fix this.”
But with performance, and really, really poorly performing websites are kind of broken, it’s just very, very hard to drive that point, or has been very hard to drive that point home. And I do hope that core web vitals provide people out there who are willing to build a performance culture in the department or company to actually have the possibility to do so. So I really like this approach.