During the Q&A portion of John Mueller’s hangout on 09/17/2021, one webmaster was concerned about the regrouping of approximately 30,000 pages on their site.
This regrouping brought the average LCP for those pages up by approximately 0.9 seconds, to 3.4 seconds, even though the product pages had previously been averaging around 2.5 seconds, right at the threshold.
The webmaster wanted to know how to get more of their pages under the 2.5-second loading threshold.
John explained that Google doesn’t have a clear or exact definition of how they do grouping. This is something that evolves over time and depends on the data that they have.
If they have a lot of data for different kinds of pages on a site, it is easier for them to do more fine-grained grouping for those pages.
The question and answer occur at the 35:40 mark in the video:
John Mueller 09/17/2021 Hangout Transcript
John 35:40 (Question)
Next question. I have here: Google recently regrouped together 30,000 pages on our site that are noticeably different pages for core web vitals scores.
This has brought the average LCP for these pages up to 3.4 seconds, despite the fact that our product pages were averaging 2.5 seconds before the regrouping. We're working to get these pages to 2.5 seconds, below the threshold, but our tactics now seem too insignificant to get us to the score we need to hit. Is the grouping set and then a score average is taken, or is a score taken and then the grouping is set? This will help us establish whether getting these product pages under 2.5 seconds will help to solve the problem or not.
John 36:29 (Answer)
So my understanding is with grouping, and in general, we don't have any clear or exact definition of how we do grouping, because that's something that has to evolve over time, a little bit, depending on the amount of data that we have for a website. So if we have a lot of data for a lot of different kinds of pages on a website, it's a lot easier for our systems to say we will do grouping slightly more fine-grained versus as rough as before.
Whereas if we don’t have a lot of data, we end up maybe even going to a situation where we take the whole website as one group. So that’s kind of the one thing. The other thing is, the data that we collect is based on field data, you see that in Search Console as well, which means it’s not so much that we would take the average of individual pages and just average them by number of pages.
But, rather, what would happen in practice is that it’s more of a traffic weighted average, in the sense that some pages will have a lot more traffic and we’ll have more data there, and other pages will have less traffic and we won’t have as much data there. So that might be something where you’re seeing these kinds of differences. If a lot of people are going to your homepage and not so many to individual products, then it might be that that homepage is weighted a little bit higher just because we have more data.
So that’s kind of the direction I would go there. And in practice, that means instead of focusing so much on individual pages, I would tend to look at things like your Google Analytics or other analytics that you have to figure out which pages or which page types are getting a lot of traffic.
And then by optimizing those pages, you’re, essentially, trying to improve the user experience for the users on your website. And that’s something that we would try to pick up for the core web vitals scoring there.
John 38:34
So, essentially, less of averaging across the number of pages and more averaging across the traffic of what people actually see when they come to your website. And I think that also goes into the second question, like what logic does Google use to group these pages? Essentially, first of all, we have to have data. And if we don't have data, field data that we can use there, then we can't, kind of, group pages in a more fine-grained way.
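To make the traffic-weighted averaging John describes more concrete, here is a minimal Python sketch comparing a simple per-page average with a traffic-weighted average. The page names, LCP values, and visit counts are hypothetical and purely illustrative; they are not taken from the hangout or from any Google data.

```python
# Minimal sketch: simple per-page average vs. traffic-weighted average LCP.
# All page names, LCP values, and visit counts below are made up for illustration.

pages = [
    # (page, lcp_seconds, visits)
    ("homepage",  3.6, 50_000),
    ("product-a", 2.4,  2_000),
    ("product-b", 2.5,  1_500),
    ("product-c", 2.3,  1_000),
]

# Simple average: every page counts the same, regardless of traffic.
simple_avg = sum(lcp for _, lcp, _ in pages) / len(pages)

# Traffic-weighted average: pages with more visits (more field data)
# pull the group score toward their own LCP.
total_visits = sum(visits for _, _, visits in pages)
weighted_avg = sum(lcp * visits for _, lcp, visits in pages) / total_visits

print(f"Simple average LCP:           {simple_avg:.2f}s")
print(f"Traffic-weighted average LCP: {weighted_avg:.2f}s")
```

With these made-up numbers, the simple average comes out to 2.70 seconds, while the traffic-weighted average is pulled up to roughly 3.50 seconds by the slow, heavily visited homepage, mirroring John's point that high-traffic pages carry more weight in a group's score.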