In today's SEO Insights feature, we're going over several insights from John Mueller's Google Search Central office-hours hangout held on July 9, 2021.
Topics include: traffic fluctuations after a site revamp, why link exchanges are still against Google's webmaster guidelines, how SEO value passes from an old URL to a new one through a 301 redirect, why featured snippets can vary between locations, and much more.
If you want to catch John's latest hangout, he holds them on most Fridays at 7:00 a.m. PT (sometimes 12:00 a.m. PT).
Without further ado, let’s get on with the SEO insights!
SEO Insight #1: Traffic Fluctuates After a Site Revamp
One webmaster was concerned about their traffic fluctuating wildly after a site revamp: a site would rank on the first page, then jump to the 10th page, then land on the 5th page.
John explained that there is nothing on Google's side that says a revamped website must change its ranking. Since the webmaster saw this across three websites, it sounds like they are doing something unique with their revamp process. His tips for keeping a revamp stable include the following:
- Don't change the URL structure, keep the internal linking the same as much as possible, and keep the content and layouts as consistent as possible.
- So long as those technical elements are aligned, the only thing Google sees is that the website is a bit faster now because of faster infrastructure.
- If those factors are not aligned, Google sees it as a brand-new website: it recrawls the site from the start and has to re-learn it all over again.
That’s where the wild fluctuations in ranking are coming from.
SEO Insight #2: Link Exchanges Are Against Google’s Guidelines
Another webmaster was concerned about link exchanges and how they would impact the site. They asked whether or not link exchanges are considered spam, because they have been doing outreach to other websites, and the other sites commonly ask for a link back when they do so.
John explained that link exchanges are against their webmaster guidelines: just don't do them. Google's algorithms try to ignore these links, and if the majority of a site's links come from exchanges, the webspam team may apply a manual action. It doesn't matter if the link is topically relevant, either; if it's a link exchange, it's still against the guidelines.
SEO Insight #3: SEO Value Passes From an Old URL to the New One Through a 301 Redirect
This is nothing new. The webmaster asked whether creating a 301 redirect from an old URL to a new one would result in the SEO value of the old URL passing to the new one.
John confirmed that yes, a 301 redirect forwards many of the signals from the old URL to the new one, though the content on the new page still has to be relevant in its own right.
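As a quick illustration (not from the hangout; the paths and domain are hypothetical), a permanent redirect can be declared in an Apache .htaccess file like this:

```apache
# .htaccess (mod_alias): permanently redirect the retired URL to its replacement
Redirect 301 /old-page/ https://www.example.com/new-page/
```

Equivalent rules exist for nginx (`return 301 …;` inside a location block) and most CMSs; what matters for Google is that the server answers with an HTTP 301 status and a Location header pointing at the new URL.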
SEO Insight #4: Featured Snippets Can Vary Between Locations
This webmaster was wondering why certain featured snippets are different based on their locations. For example, one featured snippet would be one thing in India, while a featured snippet for the same query would be different in the United States.
John said that a featured snippet is essentially just a search result, and it's common for search results to differ across locations. From Google's perspective, that's normal.
SEO Insight #5: No Ranking Issues Because of Being Indexed on Googlebot Desktop Instead of Mobile-First
Another webmaster asked whether their site would face ranking issues because it is still being indexed by Googlebot Desktop, and whether another batch of sites will be moved to mobile-first indexing soon.
John explained that they shouldn't see any ranking issues from being indexed by Googlebot Desktop. With a responsive site, both versions are indexed the same.
SEO Insight #6: URLs Should Always Have a Self-referenced Canonical to the Clean URL
A webmaster was concerned about URL parameters and how to declare them to Google. Their setup uses different content silos based on URL parameters, and they wanted to know the most effective way to declare these URLs to Google, or whether it would be OK to let Google decide which URL to treat as canonical.
John said they should always set a rel=canonical pointing to the clean URL without parameters, if their setup supports it.
These are John's answers to this question:
“All of that, I think, helps us to pick that version as well. It's not guaranteed that we would, but it kind of helps us. And the other thing is also that if you have all of these different parameter versions, and all of them were indexable, we would look at them and say, oh, they're very similar, but they're different. Then we would end up indexing a lot of separate URLs for your website. And that means we would kind of dilute the value of your website across all of these different URLs.
And that kind of means, well, we have more different URLs to index, and we could show those individually. But that also means for very competitive types of queries, we'd say, well, none of these URLs are really strong. And then that would be a little bit of a disadvantage for the more competitive queries versus the more targeted queries.”
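To make the canonical advice concrete (a hedged sketch; the URLs and parameter names are invented for illustration), each parameterized variant would carry a canonical tag pointing at the clean URL:

```html
<!-- Served on https://www.example.com/widgets/?sort=price&color=blue -->
<!-- The canonical points at the clean, parameter-free URL -->
<link rel="canonical" href="https://www.example.com/widgets/" />
```

On the clean URL itself, the same tag simply points to itself, which is the "self-referenced canonical" the insight's heading describes.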
John Mueller Google Search Central Office Hours Transcript
Below is the transcript to the John Mueller Google Search Central Office Hours from July 9, 2021.
You can also watch the video here:
All right. Welcome, everyone, to today's Google Search Central SEO office-hours hangout. My name is John Mueller. I'm a Search Advocate at Google in Switzerland. Part of what we do are these office-hour hangouts, where people can join in and ask their questions around search. We have a bunch of stuff already submitted on YouTube, which we can go through, but we also have a bunch of people raising their hands already. So maybe we'll go through some live questions first, and then see what has been submitted on YouTube.
I have a question related to website revamping and redesigning. Over the last year I've done lots of website transformations, from WordPress to PHP, from one platform to another and back to WordPress, many times. In the last six months I have revamped three websites in total. And for every website, before revamping, the traffic was roughly 10,000 to 15,000 a month.
And after revamping the website, suppose I did the revamp in January, then from February onward the traffic drops from 10,000 or 15,000 down to almost zero, maybe 50 or 60. I have checked everything: the URLs, the meta tags, meta titles, meta descriptions, whatever is required for the on-page factors, the 301 redirects and all. But I am still facing these kinds of issues with the last three websites from the last six months. So I don't know what mistake I am making with every website. Before, in 2020, these issues generally didn't happen, but in 2021 I am facing them.
And because of this, the website is bouncing around in Google: sometimes its ranking is on the first page, sometimes it is on the 10th page, sometimes the fifth page, sometimes the first page again. This is the major issue I am facing right now for a couple of websites. So I just wanted to know from you what I can do while revamping a website.
Okay. So in general, these kinds of things I would look at on a per-website basis. There is nothing on our side that says, if a website is revamped, then we must change its ranking. So that's one thing, first off: if you're seeing this with three websites, it sounds like maybe you're doing something unique with the revamp process, and not that there's something on Google's side that would be blocking revamps. In general, with revamps there are sometimes a few things that come together, and it's sometimes tricky to figure out exactly what is happening.
But the main things I would watch out for when doing your revamp are to make sure that the URLs stay the same as much as possible, so that you don't change the URL structure; that the internal linking stays the same as much as possible; and that the content and the layout on the pages stay the same as much as possible. If those technical elements are essentially aligned, then from our side, the only thing that we see is that maybe the website is a little bit faster now because you're using a faster infrastructure. But if, on the other hand, those factors don't align (the URLs change, the layout changes, the content changes, or you don't have redirects from the old URLs to the new ones), then those are essentially aspects that tell us we have to treat this as a new website, because essentially we crawl from the start and there's completely different content, or it's a completely different setup or layout, or the URLs are completely different.
So from our side, we would essentially say, oh, it's a new website; we will start over and try to understand it again. That's something I would watch out for there. The other thing is that we also make other kinds of ranking changes across the web, and sometimes when you do a revamp, the timing aligns exactly with when we make a core update or a bigger ranking change. And then it's really hard to recognize: is this issue because of a technical change that I made, or is it because Google just generally would have understood my website differently anyway?
Is it something that you did with the revamp, or is it something that Google changed? I think figuring that out is a good first step. And to do some of that, it's really useful to double-check all of the technical details: get a map of all of the old URLs, check them in archive.org to see what they looked like, confirm what they look like now, and use the different testing tools to make sure that it's all crawlable and indexable. All of those things.
Yeah, John, everything you said is all fine on my side, except one thing: I have changed the structure of the website. Initially it was plain text; now it is in folds, or you could say there are multiple sections on the website. So I guess this might be one factor affecting me, right?
Yeah. It's something where changing the structure of a website will affect how search looks at it, and it can have a positive effect, too. So with the previous revamps that you did, if you went from a one-page website to a multi-page website, it might be that that was a good change for those websites. But it might be that the same change for the current website you're working on does not make so much sense. So it changes something.
Suppose there are multiple paragraphs on a page, okay. And, for instance, I move the third paragraph to the second position, and the second paragraph to the third position. Does it really matter for ranking purposes?
Not usually, but it can affect a little bit, because we do try to understand the context of the text that you have on the pages. And if you move one paragraph from something that is very prominent to kind of an area where it’s like, oh, this is a side note, then that could affect how we see that information.
Now, generally, these kinds of things happen with the FAQ section. On one page I have a total of seven or eight FAQs, and I just shifted a few FAQs from, say, the first position to the sixth or seventh position. Does it really matter?
I think that's perfectly fine; that's not something that we would look at. This would matter more if you changed something like a heading of a page and moved it down to the footer or something like that. That's where I could imagine our systems saying, oh, it's not important anymore.
And does an image change matter? In your view, suppose while revamping I changed my image: earlier it was 30 kB, and now a 15 kB image is uploaded, and the image is also different from before. Does it really hamper…?
For normal web search, it wouldn't matter. For Image Search, if you want those images in Image Search and you're getting traffic from Image Search, then of course changing the images would matter.
Actually, my question is related to link exchanges: to what extent is it permissible to exchange links without them being considered spam? Let's say I have been doing outreach to the websites my competitors have taken links from. As a human tendency, we want something in return, so whenever I reach out, they also ask for a link in exchange. So what are the best practices when it comes to backlinks and exchanging backlinks?
So link exchanges, where both sides say, you link to me and therefore I will link back to you, are essentially against our webmaster guidelines. Our algorithms would look at that, try to understand what is happening, and try to ignore those links. And if the webspam team were to look at it, they would also say, well, this is not okay, and if the majority of the links to your website are like this, they might apply a manual action. So that's something I would avoid.
Even if it is topically relevant?
It doesn't matter if it's topically relevant, or if it's a useful link. If you're doing this systematically, then we think that's a bad idea, because from our point of view those are not natural links to your website; they're only there because you're doing this deal with the other side.
So what comes under natural backlinks? Like, we have to create quality content, and naturally people will give us links, and only that counts as natural?
That's essentially the idea behind natural linking. From our point of view, it's fine to contact people and tell them, by the way, I have this great content, and maybe it's something that you would appreciate for your website as well; that's generally fine. But anything beyond that, where you're saying, well, you must link to me like this, or you must pay me money, or I will pay you money for this link, or I will exchange something for this link:
that's all something that, from our point of view, would make this unnatural. So providing it to people and promoting it, saying, here's great content, and they link to you, that's perfectly fine. Providing it and saying, I will do an exchange if you give me a link, that's something that we would consider unnatural.
So I want to ask you two questions. The first one: suppose there is a URL on my website which is performing well in Google Search, getting traffic and all those things, but for some unfortunate reason it became a 404. So what I did is create a new URL, 301-redirect the old one to the new one, and reindex the new one. So I want to ask you, will my new URL be able to perform the same way as the old one?
It's possible, yeah. It's something where, with the redirect, you're forwarding a lot of the signals from the old URL to the new one, so that's a good starting point. But there's a lot more in play than just what you're forwarding with the redirect: the content itself has to be relevant, things on the page have to be appropriate for search; all of that has to align, too. But the redirect is essentially a good sign for us that says, well, this URL is replacing this old one, and ideally we can just forward everything to the new URL.
Okay, okay. And what if I change the situation a little bit and make some good changes in my new content, and then redirect? Will the outcome be the same as what you just told me, or will there be some difference?
It's essentially the same: you're changing the URL, and we can forward some of the signals there, but the content on the URL is also what is important.
Okay, my second question: why is it that featured snippets are different for two locations? Suppose I am searching a query in India and the featured snippet is one thing, but for the US location the featured snippet is different. Why is that? What is the reason behind this?
A featured snippet is essentially a snippet from a search result, and sometimes search results are different across different locations. So from our point of view, that's completely normal. Sometimes you see it across different countries; sometimes you see it even within the same country, slightly differently. Sometimes you also see differences when you search again half an hour later. So these things do change, and they're essentially normal rankings from our point of view.
So one possible case is that the website ranking in the featured snippet in India is not ranking in the US at all. But what I am saying is, my website which is ranking in the featured snippet in India is also ranking in the US, so don't you think the featured snippet would be the same? If not necessarily, is there any best practice to acquire the featured snippet for the US location as well?
No, we don't have any guidelines as to what you need to do to be visible as a featured snippet.
So I have a question with regards to FAQ schema. How would you detect that the question is the same across different sites? How would you differentiate between sites? We have overlapping countries with our websites on the same top-level domain, but separated into different subfolders. All of the websites have hreflang tags, which tell you, for example, that this is the Canadian English site and this is the US English site. When we publish English content from our sort of master site, it will automatically be pushed to all of our English websites. So we are looking into creating some sort of FAQ page or FAQ component that our different webmasters can use, but we cannot be sure that they won't put the same question on the different pages. So would you be able to detect this?
I am pretty sure we would be able to detect it, because it's the same piece of text across multiple sites. But it sounds like something that makes sense for your situation. Especially if these are country-specific sites, we would use something like hreflang or other methods to try to figure out which one to show appropriately. And sometimes for different countries you have the same questions. I think that's perfectly fine.
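For readers unfamiliar with the setup being described, country-specific hreflang annotations in the head of each version typically look something like this (the URLs are hypothetical, not from the questioner's site):

```html
<!-- Each language/country version lists itself and its alternates -->
<link rel="alternate" hreflang="en-us" href="https://www.example.com/en-us/page/" />
<link rel="alternate" hreflang="en-ca" href="https://www.example.com/en-ca/page/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/en-us/page/" />
```

These annotations help Google pick the appropriate country version to show, which is the mechanism John refers to here.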
Yeah, yeah. Because we are an industrial company, so we are pushing the same products all over the world. So we would probably be able to use the FAQ and possibly get the FAQ rich results in the search results page, even though we would have the same question on the US English and Canadian English sites?
Yeah, I think that's perfectly fine. Because if these are international sites, it's not that we would show them on the same search results page, so it's not that it would look weird; essentially, we show either the Canadian version or the French version or whatever. And if they have the same questions, that's generally fine. For internationalization, a good practice is to make sure that things are really localized, not just in the right language, but essentially those are small tweaks. That's not something that, from the rich-results side, we would say is problematic. So I would go for it. All right. Thank you, Maurice.
Well, I was just going to comment: the FAQ guidance says you can only have it on one page. We also have a lot of sites with global English and American English, with the same questions, possibly slightly varied. So that's okay, then?
Yeah, that's generally fine. What's problematic for us is if you have the same questions and answers across the whole site, or if you have, say, a hotel site where every hotel listing has, I don't know, "What is the cancellation period?", and you have the same questions and answers everywhere. Then that starts to look kind of annoying.
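For context, the FAQ markup being discussed is schema.org FAQPage structured data. A minimal sketch looks like the following (the question and answer are invented for illustration):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is the cancellation period?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Bookings can be cancelled up to 24 hours before arrival."
    }
  }]
}
</script>
```

Per John's comment above, repeating an identical block like this across every listing page is exactly what starts to look problematic.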
Ideally, we'd like to put it on individual pages; so for a cocktail we'd have the FAQ directly on the recipe page, but at the moment we just put the text on the page. But to my actual question: we've seen some really weird results immediately after the July core update across all our sites. These are big beverage sites; it's one of the bigger drinks companies. For example, on the rum brand we can see on desktop that we've basically lost all search visibility, and all of the cocktail pages appear to have dropped from the index, because if we do a site: query and look at what pages are there, we can see all the other pages, you know, the product pages, but we can't see the cocktails.
And if you look in SEMrush, for example, at say "mojito", which is a very popular rum cocktail, we could see that before, we were ranking on page one quite often, and then for all of the related terms it just dropped to zero. Now, obviously we know there are some problems with our cocktail recipe markup, but we were thinking: if the algorithm changed, we'd expect to see them drop lower down the page, or to page two, but not a complete wipeout. And we've actually done research, because we also have a vermouth brand, and the same thing happened there.
But ironically, in the German locale we don't see the same effect. And we've also looked at some of the competitors, like Jack Daniel's. Some sites in that area, Epicurious for example, have dropped completely, and another big player in that area, a cocktail review site, has dropped to around page two. But it seems to be okay on mobile; we're still ranking quite well on mobile, which is probably about 60 to 80% of our traffic.
But it's just really weird that it's completely gone. We could understand if the algorithm has changed and our sites aren't as good anymore, but we wouldn't expect them to completely go. And I know there was someone else who had a similar problem; theirs was a blinds site. So there seems to be something really weird happening with indexing. I mean, the pages still show as indexed in Search Console, but they're just not turning up at all, not like they've dropped to position 13 or 15. So really, that's the question we were trying to raise. And it does appear to be across the industry, recipe-related. Someone suggested it's because we have the FAQ text on there but not the actual FAQ markup. So this is the problem we're facing.
Yeah. So I looked into it a bit with the other site that posted the details in the forums. And it turns out we have a system that tries to recognize soft 404s across desktop and mobile separately. Essentially, what happens is that sometimes we see pages that on desktop look like a 404 page, so we say, oh, this is a soft 404 on desktop, we don't need to index it; and on mobile it looks like a normal page, so we'll actually index it there. And that's where you would see this difference on the indexing side: if on desktop or mobile you do a site: query for the URL and it shows up normally, and on the other device you do a site: query and it doesn't show at all, then that's essentially a sign that for that device type our systems have somehow decided this is a soft 404 page. And the tricky thing here is, I mean, on the one hand, this is fairly new, so I was a bit surprised that we do this.
On the other hand, in Search Console, we do show soft 404s, but we show it for the mobile version. So if on the mobile version everything is okay from your side, then in Search Console it looks like, oh, it's indexed normally; what is your problem? Whereas on desktop, if we see it as a soft 404 there, you would not be able to see that directly in Search Console. So that's something new that has started happening, maybe a month or so ago; I don't have the exact time frame.
But it is something where, at least with the other site with the blinds, I think a lot of those examples were really useful for the team to figure out how we need to improve this classifier. So if you have examples there, where you can maybe email me a bunch of URLs, also from the other sites where you saw this, then that would be super useful to pass on to the team. Yeah, because they are actually returning 200 when we request them. Okay, I'll get a list. A soft 404 is kind of in that direction: they return 200, and our systems look at the page and say, oh, for whatever reason, we think this is an error page, or there's no content here.
And we've got quite poor Core Web Vitals scores, because unfortunately there are big, flashy images. Okay, well, I'll send you a few links. And some competitors, like, say, Jack Daniel's, I think they've also got the same problem.
Yeah. All right, Jess. Sure. Cool. All right, Andre.
I looked at the implementation of JSON-LD on webpages, because Google now supports the rich snippets for job offers. I think I asked about that four years ago; I don't know how long it has been in there. And I have some questions about specific properties. Let me have a look: experience requirements and education requirements. That's something that I can place, but I see in the rich snippets they are not displayed. Or is it possible that, for example, in the US, there are other rich snippets that already show these properties?
I don't know if they would be shown as rich snippets or as part of those job-search pages. I think for Google Jobs we have those kinds of interstitial pages; I don't know offhand exactly how it all looks. But I don't think we would show it as a rich snippet in the search results. Rather, there is a special layout in the search results specifically for jobs content, and it's tied in from there.
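For reference, the properties Andre mentions map to schema.org JobPosting fields. A minimal hedged sketch (all values are hypothetical, not from the hangout) would be:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "JobPosting",
  "title": "Software Engineer",
  "datePosted": "2021-07-09",
  "hiringOrganization": { "@type": "Organization", "name": "Example GmbH" },
  "jobLocation": {
    "@type": "Place",
    "address": { "@type": "PostalAddress", "addressLocality": "Zurich", "addressCountry": "CH" }
  },
  "experienceRequirements": {
    "@type": "OccupationalExperienceRequirements",
    "monthsOfExperience": 24
  },
  "educationRequirements": {
    "@type": "EducationalOccupationalCredential",
    "credentialCategory": "bachelor degree"
  }
}
</script>
```

Whether the experience and education fields surface visibly is up to Google's job-search UI, as John notes.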
I mean, I could show it to you. And I would still have a question about the link to the website: how does Google check if there's the possibility of an online application on the website? Maybe you can tell us something about that.
How do we check that? I don't know. I don't know if we do. I think that is something that we do try to figure out, and my assumption (I'm not involved with the jobs side directly) is that we have a mix of manual and automatic checks for these kinds of things. If it were completely automatic, I don't think it would really be possible. So it has to be this kind of mix.
Okay. Do you have any tips for us on how to optimize here for good results? Or is it just something that is based on the location of the job offer?
I think mostly the location, but I actually don't know any of the details there. Yeah, sorry. Maybe what I would do in your situation is find someone who's been using it for longer, maybe in the US or one of the countries where it was rolled out earlier, and try to pick their mind and get their insight on what worked well and what didn't. Because it's also something where, from my side, it's not like there's a secret ranking trick I could tell you, like, oh, you have to put the keyword five times and then it ranks well. But someone who has been doing it for a while might be able to give you some of these tips.
Okay, that's it for now. Okay, cool. We still have a bunch of questions lined up. I think you're ready to go, Michael, but let me get some of the YouTube questions in, just so that those folks also get an answer. I'll run through fairly quickly. Let's see: our site is still being indexed by Googlebot Desktop, and the site is facing indexing and ranking issues because of that. When will the next batch be moved to mobile-first indexing?
I don't have a timing on the next batch, but you would not be seeing ranking issues because a site is still on Googlebot Desktop. We expect sites to be indexable by both desktop and mobile kinds of users, so from the indexing side, your content should be available. If you're seeing ranking issues,
that wouldn't be from mobile-first indexing or not. Maurice's question, which we talked about briefly: after redirecting and changing a CDN, we're seeing a drastic drop in crawl rate. Yes, that's very reasonable. If you change your website's infrastructure, then we will change our crawling: first to be a little bit conservative and make sure we don't cause problems, and then later on we automatically ramp up again. So if you change to a different CDN, that's a significant change in the infrastructure; we recognize that change, hold off crawling for a while, and then ramp up again if we think everything is fast. If a website is migrated to React from HTML/PHP, does it negatively affect the whole website?
With regards to automatically ranking for all of your localized content: that does not happen. On the one hand, you have to provide clean internal linking within your website, we have to be able to crawl and index all of those pages, and then we essentially use hreflang to swap out the right one. So just automatically moving everything into a lot of different languages does not multiply the visibility of your website; it means a lot of extra work for your website. Sometimes it's not worth it; sometimes it is worth it. But you still have a lot of things to do other than just add hreflang.
Our website had noindex, nofollow tags in the source code, which we removed in April. I think I provided a short answer on this on Twitter just before, and I double-checked the website. On the one hand, April is a few months ago, but relatively speaking it's still fairly recent, and we have to recrawl the whole website to understand that these noindex tags were dropped. And the main difficulty I saw when I looked at the website is that the website is very slow, which essentially means that we're crawling much less than we would like to. And that means it will take much longer to see that the noindex has been dropped. So if there's a way that you can significantly speed things up, that would be really helpful. With regards to speed, I don't mean the speed that you would see in things like PageSpeed Insights.
But even just requesting URLs from the server takes three or four seconds on average, and that makes crawling a lot slower than it needs to be. Will removing a page using the removal tool also remove it from the Core Web Vitals report? No, probably not. The URL removal tool just hides the URL in the search results; it doesn't change indexing, and you would continue to see that URL in other reports in Search Console. That includes things like the structured data reports, the AMP reports, the Core Web Vitals reports: all of those places where indexed content is used, you would still see that URL. How long does it take for Google to drop pages from the index when bots encounter a 5xx status code? Due to problems with the server being unresponsive for about three days, Google de-indexed nearly one quarter of my content without even showing any errors in Search Console. How long will it take for my content to be back? So usually, we recommend a 503 status code for somewhere up to a day. Sometimes it's okay if it takes maybe two days to resolve.
But if we're talking about an error timeframe of more than two days (and three days is close, but still a little bit longer), then we would start dropping things from the index. And usually what happens is we would probably drop the things that are most visible in the search results first. That's not because we want to cause any problems, but more because those URLs tend to be the ones that we crawl the most. So if the URLs that we crawl the most see a lot of errors, we will drop them from the index. On the other hand, that's also an advantage for you, because it means the URLs that we think we would show the most are also the ones we would recrawl the most.
So we would try to pick those up fairly quickly, and my guess is that after a week or so, for the most part, that will settle back down again and everything will be normal. There is no long-lasting grudge against the website if it shows errors like this; these things happen. We drop pages out of the index mostly so that we reduce the load on your server as well, and we pick them up again as quickly as we can afterwards.

Is implementing server-side rendering a good strategy for improving PageSpeed and keyword rankings? Implementing anything that improves PageSpeed is usually a good strategy, especially now that we use the Core Web Vitals as a ranking factor. So that sounds like a good thing.
If you're implementing it in a way that does improve the speed, that is. Obviously, implementing server-side rendering is not a magic bullet that will automatically make your website faster. But if you can implement it in a way that makes your website faster, then go for it. Server-side rendering does not automatically improve keyword rankings. It basically takes the existing content you have and just provides it in a different way to users and to search engines. Because of that, it's not a magic SEO bullet that will catapult your site to position one. Rather, it's a technical change that could make things faster for users. There's a lot of work involved with server-side rendering, so I wouldn't just assume you can tick a checkbox somewhere in your CMS and suddenly everything is server-side rendered.
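The core difference John is describing can be sketched in a few lines: with client-side rendering, the server ships an empty shell and JavaScript fills in the content later, while with server-side rendering the same content arrives as finished HTML. The product data and markup below are hypothetical illustrations, not anything from the hangout.

```python
# Sketch: the same page content, delivered client-side vs server-side.
# PRODUCT and the markup are made-up examples.
PRODUCT = {"name": "Blue Widget", "price": "19.99"}

def client_side_shell():
    # Client-side rendering: the initial HTML contains no content;
    # the browser must download and run JS before anything is visible.
    return '<div id="app"></div><script src="/bundle.js"></script>'

def server_side_render(product):
    # Server-side rendering: the same content is already present in
    # the HTML response, so it can be painted (and read) immediately.
    return (
        f'<div id="app"><h1>{product["name"]}</h1>'
        f'<p>${product["price"]}</p></div>'
    )

print(server_side_render(PRODUCT))
```

As John notes, the content is identical either way; SSR only changes how (and how fast) it reaches users and search engines.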
We have three systems in our website environment: WordPress, React, and Vue.js. We're rebuilding the URL structure for our CMS; however, we're having a conflict with the existing URL paths for Vue.js, so we'd be changing the URLs a little bit. Would that hurt our crawl budget? What's the best practice to adopt in a scenario like this, considering these are the ideal URL paths for user experience? So, having different frameworks and different infrastructures on a shared domain is perfectly fine; that's not something that would cause any problems. With regards to URLs, the main thing I would watch out for here is what I mentioned before: if you're changing things like the URL structure, then that's something that will be reflected in Search, and that takes a bit of time to be reprocessed.
Or does it consider the whole web page to measure Core Web Vitals metrics? I would double-check the Core Web Vitals documentation that we have on, I think, the web.dev website; there's a ton of information there on how we calculate and measure these things, and I don't think me trying to explain that in five seconds would really be useful. So I would double-check the information that we have there. Also keep in mind that for Core Web Vitals and page experience, we use the live field data from users, the sort of things that people actually see. We don't use lab tests, where you would theoretically test the page.

Then, Core Web Vitals: is that concerning SEO landing pages only, or is it also important for SEM pages, so ads landing pages? I don't know what the quality score considerations are with regards to ads landing pages, so I don't know. I would not be surprised if at some point, or maybe already, Google Ads is using this as a part of the quality evaluation, but I have no insight there.

We're trying to improve our Discover integration, and we have some doubts regarding the max-image-preview size. What would happen if we put the meta tag on a page, but for users on mobile devices we serve a hero image smaller than 1200 pixels?
Would that mean we need to serve big images for every user, regardless of the form factor? Wouldn't that go against the Core Web Vitals recommendations? Yes, that's tricky and problematic. The solution that you probably need to look into is responsive images. In particular, there's a picture element that you can use in HTML, where you can specify different resolutions of images. With that, we can recognize the higher-resolution image and the normal-resolution images; for users, it'll automatically show the appropriate version, and for Search and for Discover, we can pick up the larger version and show that as a preview.

We're serving the same URLs to visitors to our public site and to logged-in users of our software-as-a-service, with two separate apps serving the two, meaning a very different user experience and HTML for the same URLs. Will Google somehow distinguish between the Core Web Vitals field data, or just simply blend them? So if we can't recognize that these are clearly separate URLs, then we will treat them as the same thing with regards to Core Web Vitals, and from the field data point of view we will probably mix them together. It's a little bit different.
If you have completely separate URLs, then probably what would happen is we would recognize that these are different parts of a website, and we would potentially group them separately. You see that in Search Console, in that we try to group similar content together; we look at that group and use the aggregated data for that group of content. And if you could provide a way to make it super clear for us to recognize that this is the internal or private content and this is the public content, then potentially our systems would be able to group those separately and treat them separately. Whereas as it is, essentially everything looks the same to us.
And from a user point of view, it's like they happen to be logged in, and therefore they see, say, 20 times as much content and a much slower page. Then we would probably mix that in together with the other field data that we have, and it would look like some users have a really fast experience, some users have a really slow experience, and on average it ends up somewhere in between. I don't have the exact details offhand, but I believe we use percentiles with regards to figuring out which number is relevant for this. So I would double-check the Chrome and web.dev information on Core Web Vitals just to be sure.
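John's percentile remark can be made concrete: the web.dev documentation describes evaluating Core Web Vitals field data at the 75th percentile of real user measurements. Here's a minimal sketch of that calculation; the LCP samples are made-up example values in seconds, not real field data.

```python
# Sketch: Core Web Vitals field data is judged at a percentile of real
# user measurements (web.dev documents the 75th percentile, "p75").
# The LCP samples below are hypothetical, in seconds.
from statistics import quantiles

lcp_samples = [1.2, 1.4, 1.6, 1.9, 2.1, 2.3, 2.6, 3.0, 3.4, 4.8]

def p75(samples):
    # quantiles(n=4) returns the three quartile cut points;
    # index 2 is the 75th percentile.
    return quantiles(samples, n=4)[2]

print(f"p75 LCP: {p75(lcp_samples):.2f}s")  # "good" LCP threshold is 2.5s
```

Because the slowest quarter of visits drives p75, a site can serve most users quickly yet still miss the threshold, which matches John's point about fast and slow experiences being folded together.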
But it is something where, if you have significantly different content on the same website, especially if you're using the same URLs, then I would expect our systems to not be able to differentiate clearly between them.

How's the integration between Google Search and AdSense regarding page speed or page experience? I don't know; it's hard for me to say, as I don't know everything that's happening on the AdSense side. I do know from the Chrome side that there are different teams involved with regards to page experience overall, and they do work with some of the bigger sites out there that are providing content or working on embeds, to try to help them improve their Core Web Vitals scores overall. These are not teams directly involved in Search.
So it's not that we're trying to help them improve their search rankings; we're trying to help them improve the web overall. My guess is that if AdSense is something that is significantly slowing things down, and we see that as something that is shared across the web overall, then someone on the Chrome or the partnerships side would try to get in touch and help them improve things. But also, they have a lot of information available already in our public documentation that can improve things on their side, too.

I was using my homepage as a landing page, which ranked for some keywords, and now I need to use my homepage as a homepage, which isn't the landing page. When I redirect, the rankings are transferred to the new page; however, when I disable the redirect, the rankings decrease again. How can I redirect my old landing page and use the homepage for something else?
I guess you kind of saw the answer there yourself, in that you can't both redirect and not redirect at the same time. With the redirect, we forward the signals; without a redirect, we keep the pages separate. So this is something where, if you're significantly changing your site, it takes a while for us to understand that, and sometimes the new structure of your site is better, sometimes it's not so good. Okay, wow, running through all the questions. Maybe I'll jump back to some of the live questions here as well, and I have a bit more time afterwards too, so if any of you want to stick around and ask more questions, we can definitely do that. Let's see, Michael, you're up first. Hi, John.
So I've noticed a lot of sites now include a short summary or list of key points at the top of their content. Those bulleted points, generally under the headline, duplicate, often verbatim, the content that the reader will later find in the article. Is this helpful to Google? Or is there any added value to this practice?
I don't think it changes anything for us. It seems like it's probably more of a usability thing, or maybe it encourages people to read on and find more information, but I don't think it would change anything from Google's point of view. The main practical effect I might expect is that if this is right below the heading, it's possible that we would pull this in as a part of the description, which might be a good thing or might be a bad thing, depending on what else would be the top text on the page.
I mean, I was wondering if people did this just because people are generally lazy and they'll read the summary. But for me as a practice, when I'm reading it, it's like, hey, I read this bullet point, and then two paragraphs later I'm reading what I've already read. That's not a great user experience.
I don’t know. People try different things out. Some of them end up working, some of them maybe not.
Yeah. But as far as Google's concerned, it doesn't make a difference, other than the possibility that it gets pulled in as the description.
Thanks. Janet, I think you're muted.

Oh, no, we can't hear you. There should be a microphone symbol at the bottom.
A friend of mine asked me to help him with his website; it vanished from Google. So the first thing I did was connect Search Console and look at a site: search for the website name. It doesn't appear anywhere, and I couldn't see any manual actions. So I wanted to ask for reconsideration, because I think Google has penalized it, I don't know. How do I do this without a manual action? Because I can't see a URL anywhere.
You can only do a reconsideration request if there is an active manual action, because that's the only case where it would be looked at. You're welcome to drop the URL in the chat here, and I can take a quick look after the recording stops to see what is happening there. But in practice, if none of the website is appearing, there are two common possibilities. On the one hand, there might be something technical happening on the website: maybe it's returning an error code or something like that, and it looks normal in a browser but doesn't look normal to the search engine crawler. That can happen from time to time.
Another thing is the URL removal tool, where you can remove your whole website and essentially hide it in the search results. This is something that I think is not visible directly in the manual actions section, and maybe not that visible on the dashboard in Search Console. But we've noticed that sometimes people use the tool to remove an alternate version of their website, and they don't realize that the tool removes all versions of the website. So if you have www and non-www versions, for example, and you want to remove the non-www version with that tool, then it removes all of those versions.
Then in Search Console, it looks like everything is normal, but in the search results, everything is removed. So those are the common variations there. But I'm happy to take a quick look to see if I see anything super obvious. It might also be that something is super complicated, but maybe there's an easy answer.
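One of the first checks John alludes to above is whether the server is quietly returning an error status to crawlers even though the page looks fine in a browser. As a hedged sketch, the helper below just classifies HTTP status codes the way the discussion describes; the function name and wording are our own, and you'd pair it with any HTTP client (urllib, curl) to fetch real pages.

```python
# Sketch: rough interpretation of an HTTP status code from an indexing
# standpoint, per the behavior described in this hangout. This is an
# illustrative helper, not a Google tool.

def crawler_view(status: int) -> str:
    """Classify an HTTP status the way a crawler would roughly treat it."""
    if 200 <= status < 300:
        return "ok: content can be crawled and indexed"
    if status in (301, 302, 307, 308):
        return "redirect: signals forwarded to the target URL"
    if status == 404:
        return "not found: page will be dropped from the index"
    if 500 <= status < 600:
        return "server error: persistent errors lead to deindexing"
    return "other: check the response manually"

print(crawler_view(503))
```

Per John's earlier answer, a 503 is fine for up to a day or two; it's only when 5xx errors persist for longer that pages start dropping out of the index.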
Okay. So when I put the site in Search Console and connected it, will it show what has been happening in the past?
A little bit, yes. The performance report, and I believe the index coverage reports, take maybe half a week or up to a week to be updated when you verify the site for the first time, and then they will show the older data as well.
Okay. Thank you. I will put my URL in the chat. Fantastic. Thank you. Okay.
All right, Jason.
Cool. Hi, John. So I've got a real fun one for you, and I also asked this one via Ask Googlebot on Twitter. About a month ago, you answered a question about the maximum indexable page limits and talked a little bit about static versus dynamic pages. So I'm curious about your take on something. As part of our consumer journey, we use URL parameters to understand what content a consumer was interested in and clicked on.
Subsequent to that, we dynamically update page content, including both textual content that could be served to the consumer as well as links to other areas of our website. And we do that all dynamically, based on the interest that's been expressed. The consumer experience is very similar, right?
But it is, again like I mentioned, different, with links that go to different content silos based on the URL params. So what's your guidance here? Should we rel=canonical to the naked URL with no parameters, or is it okay to let Googlebot decide which URL is canonical? And also, may I reserve a follow-up?
I would generally recommend trying to rel=canonical to the clean URL if you can, just because it makes it a lot easier for us to understand that this is the URL that you actually want to have indexed. It doesn't have to be the clean URL; it can be one of the parameter versions as well. But essentially, it provides some guidance for the situation where we look at the URL and recognize, oh, it's 99% the same as the other URL that we saw; which one of these should we index? You probably have a preference with regards to which one should be indexed, and letting us know about that preference with the rel=canonical, with things like internal linking,
all of that, I think, helps us to pick that version. It's not guaranteed that we would, but it helps us. The other thing is that if you have all of these different parameter versions, and all of them were indexable, then we would look at them and say, oh, they're very similar, but they're different, and we would end up indexing a lot of separate URLs for your website. And that means we would dilute the value of your website across all of these different URLs.
And that means, well, we have more different URLs to index, and we could show those individually. But it also means that for very competitive types of queries, we'd say none of these URLs are really strong. That would be a bit of a disadvantage for the more competitive queries versus the more targeted queries.
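The advice above (point parameterized URLs at a clean canonical so signals consolidate instead of diluting across near-duplicates) can be sketched in a few lines. The URLs below are hypothetical; the only assumption is that the tracking parameters don't change the indexable content.

```python
# Sketch: derive the clean canonical URL from a parameterized one by
# stripping the query string and fragment. Example URLs are made up.
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url: str) -> str:
    """Strip query string and fragment to get the clean canonical form."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))

tag = (
    '<link rel="canonical" '
    f'href="{canonical_url("https://example.com/widgets?src=ad&color=blue")}" />'
)
print(tag)  # <link rel="canonical" href="https://example.com/widgets" />
```

As John notes, this is a hint rather than a guarantee; consistent internal linking to the clean URL reinforces the same preference.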
Okay, so I guess this isn't the follow-up, but would there be any kind of, I don't know, penalty associations with that at all, based on what I have outlined? Or is it just the algorithmic aspect of it that you're thinking through on this?
I think for the most part it would be the algorithmic side. The penalty association, to me, would come in more if these are essentially doorway pages. If it's essentially the same content and you're swapping out city names or something like that, then I could imagine that our systems would look at this and say, oh, it's a collection of doorway pages, and algorithmically we might just pick one of these pages, show that, and ignore the parameter in the future.
If it were really excessive, then it's possible that the webspam team would look at it and say, oh, they're trying to rank with all of these doorway pages, therefore we have to take a manual action. But it's hard for me to recognize whether you're talking about doorway pages, or whether you're just saying, well, these are personalized, and that's slightly different content for different users, because maybe they came in with an ad or from a different site or something like that. It's sometimes tricky to draw that line.
Fair enough. The unrelated follow-up, then; thank you for the clarification, much appreciated. So we had a malformed 301 when we migrated content from a subdomain to www a little over a year ago. The issue with the 301 was noticed about mid-to-late May, and we've noticed differences in our logs for how Googlebot is crawling that subdomain. So we are assuming that it's working as it should now, and we've validated it several times. But we're still seeing something like 25,000 URLs showing in a site: search for that old subdomain. So I guess the question is: are there any next steps for how we get the www version of those pages into the index?
It's possible that that is working as expected and that we've already moved most of the URLs over. The tricky part is that with a site: query, when you're specifically searching for something that you've moved, our systems understand that the content used to be there, and we will try to show it to you anyway, even if we've already moved things over in indexing. So sometimes when you do a site: query for an old site, or a section of a site that has moved, you still see a significant number of URLs.
But if you look at the cached version of some of those pages, you'll see that the new URL is actually the one being indexed. So I would double-check that you're not searching for, or trying to find, something that doesn't exist anymore while our systems are just trying to be helpful. Check whether the cached page is actually the new one, and if it is, I would just let that go. Cool, thank you. All right, let me pause the recording here.
Like I mentioned, you're all welcome to hang around a little bit longer, and we can go through some more of the questions. It looks like lots of people still have raised hands, so there's lots of stuff to do, but it's always good to limit the recordings to a certain length. Thank you all for dropping in, and thanks for all of the questions that were submitted; I hope some of the answers were useful. If you're watching this on YouTube, you're welcome to join us for one of the future sessions as well. And with that, let me pause here.
Catch John Mueller’s Hangout on Most Fridays at 7:00 a.m. PT
You can catch John Mueller’s Google Search Central Office Hours Hangouts on most Fridays at 7:00 a.m. Pacific Time.
If you’re interested in watching past hangouts, you can catch them on the Google Search Central YouTube channel as well.
As always, we’ll be here covering John’s hangouts every week with our own SEO insights. Stay tuned!