During a hangout's question-and-answer segment, an SEO professional asked John Mueller whether Google can crawl content behind View More buttons.
They explained that they had recently redesigned their website and changed the way they list their blog posts and other pages: instead of numbered pagination (page one, two, three, four), the site now uses a View More button.
Does Google still crawl the ones that are not shown on the main blog page?
What is the best practice here?
John explained that, on the one hand, it depends a bit on how the button is implemented. For example, a View More button could be implemented as a button that does something with JavaScript.
Google is not able to crawl through that type of button to see the additional content.
On the other hand, you could also implement a View More button essentially as a text link to page two of those results, or from page two to page three.
And if this is implemented as a text link, Google would follow it as a link, even if it doesn’t have a label that says page two.
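To make the distinction concrete, here is a minimal sketch of why the implementation matters to a crawler. The markup strings and the `loadMore()` handler are hypothetical; the point is that link discovery works by extracting `href` attributes from `<a>` tags, so a JavaScript-only button exposes nothing to follow while a text link does:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags, the way a crawler discovers URLs."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical markup: a View More control wired up purely with JavaScript...
js_button = '<button onclick="loadMore()">View More</button>'
# ...versus the same control implemented as a plain text link to page two.
text_link = '<a href="/blog/page/2">View More</a>'

for label, markup in [("JS button", js_button), ("text link", text_link)]:
    parser = LinkExtractor()
    parser.feed(markup)
    print(label, "->", parser.links)
# JS button -> []
# text link -> ['/blog/page/2']
```

Note that the anchor version is discoverable even though its visible label says "View More" rather than "page two", which matches John's point that the label does not matter, only the link itself.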
This is the first thing to double check: is this something that can actually be crawled or not?
If it cannot be crawled, then Google would usually focus primarily on the blog posts that are linked directly from those pages. Google would likely keep the old blog posts in its index, because it has already seen and indexed them at some point.
However, Google will likely focus on the ones that are currently there.
One way to mitigate this is to cross-link your blog posts as well.
Sometimes this is done with category pages, or the tag pages that people add.
Sometimes blogs have a mechanism for linking to related blog posts. And all these kinds of mechanisms add more internal linking to a site.
This makes it possible that, even if Google initially sees only the first page of results from the blog, it would still be able to crawl the rest of the site.
One way John suggests the SEO pro can double-check this is to use a local crawler; various third-party crawling tools are available.
If you crawl your site and it picks up only five blog posts, then those are probably the only posts that are findable.
On the other hand, if it goes through those five blog posts and then finds many more beyond them, you can be fairly sure that Googlebot will be able to crawl the rest of the site as well.
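The local-crawl check can be sketched as a breadth-first traversal of internal links. The site below is an invented, in-memory stand-in for real pages (a real tool would fetch URLs over HTTP): the blog index links directly to only two posts and hides the rest behind a JavaScript button, but a related-posts link on post one still makes a third post findable, illustrating how cross-linking rescues crawlability:

```python
from collections import deque
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collects href values from <a> tags in a page's HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical in-memory site, mapping each path to its HTML.
site = {
    "/blog": '<a href="/post-1">Post 1</a><a href="/post-2">Post 2</a>'
             '<button onclick="loadMore()">View More</button>',
    "/post-1": '<a href="/post-3">Related: Post 3</a>',  # cross-link
    "/post-2": "",
    "/post-3": "",  # reachable only via the related-posts link
}

def crawl(start):
    """Breadth-first crawl that follows only real <a href> links."""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        parser = LinkParser()
        parser.feed(site.get(page, ""))
        for url in parser.links:
            if url not in seen:
                seen.add(url)
                queue.append(url)
    return seen

print(sorted(crawl("/blog")))
# ['/blog', '/post-1', '/post-2', '/post-3']
```

If the related-posts link on `/post-1` were removed, the crawl would find only the pages linked from the index, which is exactly the shortfall a local crawler run would reveal.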
This happens at approximately the 39:51 mark in the video.
John Mueller Hangout Transcript
John (Submitted Question) 39:51
Let’s see a question about crawling: I recently redesigned my website and changed the way I list my blog posts and other pages from pages one, two, three, four to a View More button. Can Google still crawl the ones that are not shown on the main blog page? What is the best practice? If not, let’s say those pages are not important when it comes to search and traffic, would the whole site as a whole be affected when it comes to how relevant it is for the topic for Google?
John (Answer) 40:07
So on the one hand, it depends a bit on how you have that implemented. A View More button could be implemented as a button that does something with JavaScript. And those kinds of buttons, we would not be able to crawl through and actually see more content there. On the other hand, you could also implement a View More button, essentially, as a link to kind of page two of those results, or from page two to page three.
And if it’s implemented as a link, we would follow it as a link, even if it doesn’t have a label that says page two. So that’s, I think the first thing to double check, is it actually something that can be crawled or not. And with regards to kind of, like, if it can’t be crawled, then usually what would happen here is we would focus primarily on the blog posts that would be linked directly from those pages. And it’s it I mean, it’s something where we probably would keep the old blog posts in our index, because we’ve seen them and index them at some point.
But we will probably focus on the ones that are currently there. One way you can help to mitigate this is if you cross link your blog posts as well. So sometimes that is done with category pages, or kind of these tag pages that people add. Sometimes blogs have a mechanism for linking to related blog posts. And all of those kinds of mechanisms add more internal linking to a site. And that makes it possible that, even if we initially just see the first page of the results from your blog, we would still be able to crawl to the rest of your website.
And one way you can double check this is to use a local crawler. There are various third party crawling tools available. And if you crawl your website, and you see that, oh, it only picks up five blog posts. Then probably like, those are the five blog posts that are findable. On the other hand, if it goes through those five blog posts and then finds a bunch more and a bunch more, then you can be pretty sure that Googlebot will be able to crawl the rest of the site as well.