During the Search Off The Record podcast, Googlers John Mueller, Gary Illyes, and Lizzi Sassman discussed all things Sitemaps.
In the discussion, they dove right into how sitemaps can help Google discover the content on your site, and how you can take advantage of sitemaps for better crawling and indexing.
The conversation also covers how building a sitemap generator led to John Mueller getting started at Google, along with a number of other great tidbits.
Without further ado, let’s get into it and learn more about sitemaps!
Sitemaps and the Early Days of Crawling
John said that Google's initial sitemaps initiative was really about understanding the web a little better and making it easier to crawl and find all of the content on it. He also said that, at the time, regardless of whether sitemap files actually helped with visibility, creating one forced you to look at your website and think about all of the URLs that Google could find. Why is it not finding this part? What's up with all of these parameters, and upper and lowercase? When you crawl your own website, it suddenly looks like an infinite mess.
But when you see that for the first time, you realize that this is something you can control, something a site owner can work on to make the site easier for search engines to crawl. He was then asked whether he made changes to his own website based on that learning exercise, and if so, what kind of changes.
URL Parameters and Crawling
And they said that they don’t know the details of what they changed on the website. But things like URL parameters were super common. And kind of understanding that using random URL parameters like session IDs and URL parameters at the time was super common to have that you just have this really wrong long number as a parameter attached. And every user gets a different number. And that’s something that, in the early days, you would look at a website and say, well, it is how it is.
And like, I’m not supposed to understand all of these things. But when you crawl it, you realize that actually, this makes it pretty much impossible to crawl the website properly. Unless a search engine can figure that out. And if you can figure it out for the search engine, it makes it a little bit easier. They also said that they noticed this on their website, but also when they made the generator, like other people were using it, and they would contact me and say, Well, I ran your tool on my website, and it’s not stopping. And then you kind of are forced to look at other people’s websites and try them out as well.
And then you notice that these kinds of crawling issues, they’re just everywhere. They believe that a lot of that has gotten significantly better, because people use more common CMS systems. And they don’t generate this kind of messy website anymore. But at least back in the early days, it was super common to have a website that was pretty much impossible to crawl.
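To make the session ID problem concrete, here is a hypothetical sketch; the domain, path, and parameter names are invented. The same page is reachable under an effectively unlimited set of URLs, and one common way to consolidate them is a canonical link in the page's HTML (or dropping session identifiers from URLs entirely, for example by keeping session state in cookies).

```html
<!-- The same product page, which a crawler sees as three different URLs: -->
<!--   https://example.com/products/widget?sessionid=8f3a91c2 -->
<!--   https://example.com/products/widget?sessionid=77b0d4e5 -->
<!--   https://example.com/products/widget?SESSIONID=77B0D4E5 -->

<!-- One common mitigation: declare a single canonical version in the page <head>. -->
<link rel="canonical" href="https://example.com/products/widget">
```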
Using Sitemaps to Make a Difference on Your Site
They said that if you want to make a difference for your website and make it easier for Google to crawl, then sitemap files are definitely something you should look into. Asked about the most common mistakes people make with sitemap files, they said the biggest one is probably not including all of the pages of the website: the sitemap file lists only the 10 or 20 most important pages, even though the site has thousands.
All of those other pages are simply left out, and that can hurt, because a page that isn't in your sitemap file has to be discovered through links alone, which can take much longer. So if you have a page that you want people to be able to find, make sure it's in your sitemap file.
Common Sitemap Mistakes
Another common mistake is not updating the sitemap file when the website changes. If you add a new page, remove a page, or change a page's URL, you need to update the sitemap file as well. Otherwise, Google may keep trying to crawl the old URL and get a 404 error because the page no longer exists, or it may take longer to discover the new page because the sitemap doesn't mention it. Those are two really common mistakes people make with sitemap files.
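For reference, here is a minimal sketch of what such a sitemap file looks like; the URLs and dates are hypothetical, and the format follows the sitemaps.org protocol that Google and other search engines support.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want search engines to know about. -->
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2022-05-10</lastmod>
  </url>
  <url>
    <loc>https://example.com/blog/launch-announcement</loc>
    <lastmod>2022-04-02</lastmod>
  </url>
  <!-- ...and so on for the rest of the site, not just the top 10-20 pages. -->
</urlset>
```

Most CMS platforms and sitemap generators can produce and refresh a file like this automatically, which covers both of the mistakes above.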
Gary pointed out that a small business owner has little reason to go through a prioritization exercise for their pages: you publish your pages, and all of them are important to you. He also said that crawl budget was rarely talked about before around 2013; it only became a thing after a few larger sites on the internet made it one.
The idea behind priority is understandable, but if you're generating these files for any larger website, you have to fill in the values automatically, and you don't necessarily know the relative priority of some random blog post. At some point you either say everything is important, or you create an artificial priority structure for your website.
But you can't really determine it yourself, and at that point the data isn't all that useful; John noted that Google's documentation now says it doesn't use the priority field from sitemap files. The last modification date, by contrast, is an absolute value you can supply. With change frequency, you don't really know in advance how often a page will change; it's more that search engines could track over time how often a page changes on average and use that to decide how often to recrawl it. So why would a site owner specify it directly? It's much more tempting to say a page could change every day, even if it doesn't.
Listen to the Podcast
Let’s Talk Sitemaps – Transcript
John 0:10
Hello, and welcome to another episode of Search Off the Record, a podcast coming to you from the Google Search team, discussing all things Search and having some fun along the way. My name is John, and I'm joined today by Lizzi and Gary from the Search Relations team, of which I'm also a part. Say hi, Lizzi.
Lizzi Sassman 0:31
Hi Lizzi.
John 0:32
Say Hi, Gary.
Gary 0:33
No, I don’t want to.
John 0:34
No. Anyway, Gary, it’s great to be back here. But why am I here?
Gary 0:42
Well, we were told that you don't like us anymore, and I kind of wanted to twist your arm into coming back and, you know, show your face. Wait, that's the wrong format. I was thinking that, I don't even know when, maybe half a year ago or so, we had an episode about robots.txt. And it was a fun episode. And then I was thinking about what else is similar to robots.txt, and we ended up with sitemaps. And I happen to know that you were involved in Sitemaps in some way, back in the days, like 1902 A.D., and maybe you could talk about your experience with sitemaps. And then we probably just deep dive into certain parts of sitemaps.
John 1:29
Cool. Okay. So I guess you could say that Sitemaps is how I snuck into Google, which may be a good thing. Who knows, we'll see what happens. I don't really know much about the story internally with regards to Sitemaps, way in the beginning, but I kind of saw it externally. So I was active at my own software company and kind of interested in the web, and then somehow got interested in SEO. And then, as it happens, Google happened to launch Sitemaps right around that time. And I thought, well, this is a cool way to sneak into Google. So I started trying to look into that a little bit. And I noticed that there were no sitemap generators in the early days, or no easily usable sitemap generators. So I made one of the first sitemap generators for Windows at the time.
Gary 2:27
I think Sitemaps was launched in 2005. Maybe in the very, very beginning it was a Google initiative, and then later it became supported by other search engines as well. Right around that time, Google also published a script to create sitemaps. How was your generator different from that script?
John 2:48
Well, I made a generator for Windows, so instead of having to run some obscure script in this weird programming language that nobody knew about at the time, called Python, I made something that was more usable by, I don't know, average site owners, or at least I thought it would be. So basically, you enter your website, and then it goes off and crawls your website based on some settings that you provide. And then in the end, when it knows about everything, it generates a sitemap file for you. I thought that was pretty neat at the time. And the Sitemaps project was, I think, launched between different teams in Mountain View, in Kirkland, and in Zurich. So there was definitely a big team in Zurich at the time, which I didn't really realize. Like, I don't know, as someone external to Google, you just see, oh, Google, and you have no idea what is actually behind it. But it was interesting, because at some point I got invited to chat with the team in Zurich, and it was interesting to meet some of the people that were active there. And I think one of the people that I met there still works in Zurich. So that was very cool. I think the initial initiative from the Sitemaps side, or from Google, was really about kind of understanding the web a little bit better and making it easier to crawl and find all of the content on the web.
Lizzi Sassman 4:14
Did you feel like that helped at the time, like when you started using a sitemap, like did that help with your early SEO experiences trying to get your software company site found?
John 4:27
To some extent. I thought especially the process leading up to that was really helpful. I don't know if SEOs did it at the time, but that process of crawling your website was really eye-opening, because in the early days you're like, well, Google is this big magic black box, and nobody really knows what it's actually doing. And then when you crawl your website yourself, you realize, oh, there are actually a lot of technical details that are involved with crawling, and there are a lot of things that you can do right or that you can do wrong on your website. And that, I thought, was really interesting. So regardless of whether or not sitemap files actually helped with the visibility of your website, that step of creating a sitemap file forced you to look at your website and think about, well, what are all of the URLs that Google could find? And why is it not finding this part? And what's up with all of these parameters, and upper and lowercase, and all of these things, where when you crawl your website, it's suddenly this infinite mess. And when you see that for the first time, you realize, well, actually, this is something that I can control. And this is something that a site owner can kind of work on to make it easier for search engines to crawl.
Lizzi Sassman 5:42
So then did you make some changes on your website based off of those learning exercises? And if so, what kind of changes did you make?
John 5:50
I don’t know the details of what what I changed on the website. But things like URL parameters were were super common. And kind of understanding that using random URL parameters, like like session IDs and URL parameters at the time was, I don’t know, is super common to have that you just have this really wrong–long number as a parameter attached. And every user gets a different number. And that’s something that in the early days, you would look at a website and say, well, it is how it is. And like, I’m not supposed to understand all of these things. But when you crawl it, you realize well actually, this makes it pretty much impossible to crawl the website properly. Unless a search engine can figure that out. And if you can figure it out for the search engine, it makes it a little bit easier. I noticed this on on our website, but also when I made the generator, like other people were using it, and they would contact me and like say, Well, I ran your tool on my website, and it’s not stopping. And then you kind of are forced to look at other people’s websites and try them out as well. And then you notice that, that these kinds of crawling issues, they’re just everywhere. I think a lot of that has gotten significantly better, because people use more common CMS systems. And they don’t generate this kind of messy website anymore. But at least back in the early days, it was super common to have a website that was pretty much impossible to crawl.
Gary 7:17
I think the session ID, that was one of those things that was pretty much transparent to a human eye. Basically, it was so prevalent on the internet that you didn't even notice it in the URL as a human, and you were just assuming that it's not there. But for a search engine, or for any crawler, basically that meant that there's an infinite number of URLs, well, a pseudo-infinite number of URLs, on the site. And crawlers are happy to crawl those URLs, so Sitemaps probably were a very eye-opening thing for webmasters and site owners and developers in general. Yeah. What about different tags that can show up in a sitemap? Because I'm fairly certain that most people who dealt with sitemaps know that you have the loc tag, where you put the URL, and then you have a bunch of other tags, like priority and change frequency, that are basically covered in myth. Some people think that search engines use them, some other people think that search engines don't use them. How were those with your generator?
John 8:23
Taking a step back, the sitemap files themselves are basically text files, and you can look at them in a text editor, which, at the time, was kind of interesting for me to see. I expected to see some, I don't know, machine-language file. But XML is essentially like HTML pages: you have different tags and different content in there. And the main tag there for sitemap files is really the URL, you specify the URL, I don't even know what they're all called nowadays, or what they're still called. But there are also extra fields that you can add, which I think are optional, like the last modification date, the change frequency, and the priority. And I'm sure I'm forgetting something, but something along those lines. And the interesting thing, I think, is the assumption I have from the Sitemaps side is that Google wanted to understand a little bit better which pages are changing, how frequently, and which pages you think are important. And that's kind of what the change frequency and the priority data in the sitemap file are for. But it feels like that was something that was more like wishful thinking, like maybe we can learn more about the web like this. Because in practice, of course, if you give people a field that says priority, they're gonna say, like, my website is the most important and all of my pages are the most important. And using that as a way to understand more about the website is then really hard, because people are just biased and they think their stuff is the most important.
Lizzi Sassman 9:50
Well, but is the priority supposed to be like on the web or within the context of your own site? Because I guess that would be a good exercise to prioritize within your own site, which ones are the things that change more often? So why wouldn’t you actually go through that exercise? Unless you’re thinking like, Oh, this is like me, my website compared to your John Mueller website? I think mine is priority number one.
Gary 10:15
I think you’re being way too rational.
Lizzi Sassman 10:17
Okay. I’m not accounting for other things on the internet?
Gary 10:22
I mean, the internet itself is not a rational place. And if you are a small business owner, for example, then why would you want to do that exercise? Basically, you just want to say that, well, I publish these pages, and all these pages are important to me.
Lizzi Sassman 10:38
I guess, is crawl budget a thing here?
Gary 10:41
don’t even know about crawl budget, there’s a few bigger entities on the internet who made crawl budget a thing. But before, I don’t want to say a stupid date, but I will say 2013. I rarely ever heard of crawl budget, and then suddenly, it came to be and then we started talking about it. Because reasons.
John 11:01
I think the idea behind priority is kind of understandable. But at the same time, if you're making these files for any larger website, you have to automatically fill out these values. And you don't necessarily know, like, what is the relative priority of this random blog post that I have. And at some point, you just say, well, everything is important, or you create this kind of artificial structure of priority for your website. But you can't really determine it yourself. And at that point, that data is not really that useful. And I think even in our documentation, we now say, like, we don't use priority from a sitemap file.
Gary 11:44
This is true. It also goes for change frequency, I think, where you can't actually expect to know when your page will actually change, like, how often it should change. Because if you have the terms of service, for example, or if we go to our documentation at developers.google.com/search, there are pages that we haven't touched for two years now, because we just had no reason to touch them. But when we published those pages, we wouldn't have known that we were not going to touch them for two years.
Lizzi Sassman 12:14
Okay, so there’s the change frequency thing but then there’s also the last mod thing. I mean,
John 12:19
I mean, the last modification date is something that, I would say, there is an absolute value that you can supply there. And that's something that the script can look at. And if it looks at your pages and says, well, I updated this page one year ago, or last week, it's a real date that you can supply. Whereas with the change frequency, you don't really know in advance how often it'll change. And it's more that search engines could, over time, track kind of how often this page changes on average. And they could use that to determine how often to recrawl it. So at that point, why would a site owner specify that directly? Because it's much more tempting to say, well, this page could change every day, even if it doesn't.
Gary 13:01
But also, with last mod, I think we are not doing a great job explaining when you should update that tag, because it should be for something like a significant update, like when you're updating the content itself, not some head tag or element in the HTML.
John 13:18
I disagree. Okay.
Gary 13:21
I know that some search engines use it, like, for example, Bing. I know that Bing is using it, and Google doesn’t use it, because reasons. And one of the reasons is that it’s highly unreliable, because people want search engines to believe that their page changed. So it should be crawled. But in reality, the page didn’t even change, for example, or it changed just a little.
John 13:48
I think it’s it’s trickier in that regard. Yeah. I mean, it’s still like you can pull out the primary content and say, like this content changed, but at the same time, you could change something in your heading or something in your footer or in the sidebar that has links to other pieces of content. And technically, that’s, that’s a change on your page. And technically, that’s something that search engines could find value in. So maybe the issue is more that there’s difference of opinion on what the date should be. And then at that point, like, well, if people mean different things with the same value, like what can search engines do with it?
Gary 14:25
Well that’s fair. Well, and
Lizzi Sassman 14:26
Well, and you bring up a good point, like, significant changes to search engines or to users. Because maybe that is different, what would be considered significant or an interesting change, like just changing a link, or, oh, we added this reference or something. This could be a new page for a search engine to identify. But for a user, it's just like, well, that's another link. Okay.
John 14:46
I mean, it could be something like adding structured data where the user doesn’t see any change at all. But for the site owner, it’s really important because suddenly, you’re providing information for search engines that they could show in a different snippet, for example.
Gary 15:01
All right, fair point, I'll buy in. But kind of this discussion
John 15:05
of what is actually a change that should be flagged with a date, and doing that in an automated way across a larger website, I imagine that's pretty tricky.
Lizzi Sassman 15:16
Well, the change frequency or the last mod one? Because the last mod seems like that could be okay, because it's in the past.
John 15:22
I think with the change frequency, you can't really know ahead of time. But last modification, even that feels like something where people might say, oh, well, the last time I edited this article, or the last time the HTML changed.
Gary 15:34
Right, I imagine that you’re updating something in the in the heads for the whole site, like you’re injecting verification tech, for example. And then it propagates across all your pages, and you have 2 million pages and suddenly all change–all the last mod tags are updated to basically now. Is that useful? I doubt it.
John 15:54
I don’t know. Or you changed your copyright date? Like, at the end of the year, like copyright 2022.
Gary 16:00
We’ve actually seen that. I remember, someone from the sitemap team, back in the days, was complaining, that was a real issue that when New Year’s hit the large portion of the change frequencies updated to January 1.
John 16:13
So if it was an issue, that means it was used.
Gary 16:17
can’t confirm or deny anything without the explicit approval of the secretary.
John 16:22
Another thing that kind of came out with Sitemaps: I thought, like, two kind of semi-related things were pretty cool at the time. So the standard was kind of announced, or the beta, I don't know how they framed it in the early days. But they also created this kind of console thing where site owners could go and verify their site and add sitemap files. Webmaster Tools. Webmaster Tools, yeah. The early Webmaster Tools.
Lizzi Sassman 16:51
Google Sitemaps. Did it have the word "tool" or "console" in the name?
Gary 16:56
I think it was called Google Sitemaps.
Lizzi Sassman 16:59
And people just knew that this was, like, a thing that you could use? You didn't need the word "tool" in the name?
Gary 17:04
You know how we are really good at picking terms that are ambiguous? Oh, yeah. Okay. So Google Sitemaps.
Lizzi Sassman 17:12
Excellent, excellent name for many things. The tool, probably also the docs and the help group.
John 17:19
Yeah, the help group was the other thing that came out at the time, because it was positioned kind of as a beta for site owners, and they wanted to get their feedback, I guess. So they created a help group for site owners, specifically around sitemaps. And I got involved in that at the time as well, kind of helping people, pre-Google. Yeah, pre-Google, helping people to figure things out. Oh, you were a Bionic Poster. Right. That was, I think, before that. And at some point it migrated from being a group about Sitemaps to being the Webmaster Help group, or something like that. That was pretty fun. I guess in the early days there wasn't a ton of documentation from Google's side about how to make websites, so there was lots of guessing and people trying to make tests. It was interesting.
Lizzi Sassman 18:11
Did it just start out with how to use sitemaps? And then kind of grow from there?
John 18:15
So I think the main problem there was because there was no other Google official discussion forum for these kinds of SEO topics. Everyone went to the sitemaps group and was like, why is my website not being indexed? And luckily, we solved that problem. Right, Gary?
Gary 18:34
I don’t want to talk about it. It’s still trauma. For me. I still have PTSD. I mean, other parts of Google or other search engines of Google, like Google News, they had similar problems. They didn’t have documentation or documentation was not great. And that’s how I got involved in new Sitemaps as well, because I don’t know if you know, but Sitemaps can have extensions, because it’s an XML file. And it’s extensible.
John 19:07
Oh, wait, so a news sitemap is different from a normal sitemap? I thought it was just smaller. What? No? What? Well, I thought that was just a limit of, like, the number of pages that you could include.
Lizzi Sassman 19:19
Wait, you know the answer to this? We've been trying to track down why there's a discrepancy. We thought maybe there's a discrepancy. Whether there is a discrepancy or not, we don't know. Now we're trying to find out.
Gary 19:29
No, I’m asking you, John.
John 19:31
I was never involved with the news side of things. You would know.
Lizzi Sassman 19:36
Well, you seem to know something. Yeah.
John 19:38
I just know. It’s like smaller file. Maybe I knew more about this in the past. And you’re kind of making me worried that I’m forgetting things, but I don’t really know the details of what what otherwise is kind of special around new sitemaps. So
Gary 19:54
let’s go back to too sitemap extensions, because those are one of the exciting things things that you can do with sitemaps. Basically, you have the base sitemap, and then you can extend it with a new namespace like XML namespace. And then it becomes an image sitemap or a video sitemap or new sitemap. And I’m pretty certain that there are a bunch of more different sitemaps as well, that we didn’t talk about. But those seem to have been very popular. Also, in the earlier days of sitemaps. I think video sitemaps, for example, came to be around like 2008, 2009, when universal search was launched, and then video became more prominent on search result pages. And then we started adding, because that was a Google thing. Like it was a Google sitemap extension. We could just add tags to it whenever we wanted, which I can’t decide if it was a good thing or a bad thing. Definitely good. Okay, wasa good thing? Now, it’s a bad thing.
Lizzi Sassman 20:53
Why is that a bad thing?
Gary 20:55
Well, you would know, because you maintain our documentation, and some of these sitemap extensions have these kilometre-long, no, I'm sorry, half-mile-long tables with tags and attributes.
Lizzi Sassman 21:09
Yes, I do know about that. Do you think that they need to be that long?
Gary 21:13
I’m fairly certain that they shouldn’t be that long. What
Lizzi Sassman 21:17
What makes you say that? You just have a gut feeling? Things should be shorter, more succinct, and if they're too long...
Gary 21:27
I think it would be worth looking into those tags and attributes and seeing if they are still useful, because some of them have been replaced, or not replaced, complemented, let's say, with schema.org annotations, like, what's it called, what's the name?
Lizzi Sassman 21:49
Structured data, markup, schema, these are all fair words to be using.
Gary 21:52
So some of those things, some of those tags and attributes, have a structured data counterpart. And if they have a structured data counterpart, then maybe it's better to supply them in the structured data, because then all the parsing is in one place instead of two different places. Because usually when you have it in two different places, then sometimes you might end up with conflicts. For example, the sitemap is generated offline, not when the page is rendered, so it might have a different value for a tag. But technically, the structured data on the page should always be the up-to-date version, I guess. That makes sense, because that happens when you actually render the page or pull the data from your database for the page. So maybe some of those tags could go.
Lizzi Sassman 22:43
So maybe we should go check with the video team and see if all they need is the structured data, and see if there's some, like, tidying that we could do in the sitemap extensions?
Gary 22:54
I mean, the video team, the Google Images team, and then probably also the Sitemaps team, because we also have to figure out how, if at all, it is possible to deprecate these tags.
John 23:05
like that’s some somewhat of a longer process anyway. Especially if, I imagine, like it won’t be that the sitemap file will stop working, it’s just like, we will primarily pull the data from from the markup on the page, then. And then it’s more like we work together with the team. And then we work together with the ecosystem to let them know about the change early on, so that they can update if they want to, because I imagine a lot of the sitemap generators out there, they haven’t been touched in many years, because they just work. And it’s like, why would you change it if it’s working?
Gary 23:40
What do you think about the future of sitemaps? Should we transform them into JSON objects, for example? Because everyone loves JSON.
John 23:48
Everybody likes JSON? I don’t know how Jason feels about that.
Gary 23:52
Not that Jason. Oh, yeah. Wrong Jason.
John 23:56
I don’t know, I kind of feel on the one hand, XML is a really ancient format. So it’s kind of weird to keep using, but it kind of just works for for this purpose. So especially about informing search engines, or anyone who’s interested in a website, what the pages are on the website. I don’t know, like what the future will be like. There’s the initiative, I think, from Bing and some other search engines about index now, where you submit individual pages. There’s the indexing API from our side, where you also submit individual pages. Maybe at some point, things will transition in that direction. But I don’t know I still kind of find the process of crawling websites useful to understand the websites a little bit better. So I don’t want to move to a model of people don’t understand what their website is actually like when it’s crawled. And they just submit pages whenever they think, like this page is interesting. It should still be something that is crawlable. And that kind of maps to what users see as well, because if a website is crawlable, then users can also click around and find the content. And that’s ultimately kind of the important part, you guide people to part of a website. And they should be able to dig deeper from there and find out more. So Gary, what do you think about the future of sitemaps?
Gary 25:15
I’m very fond of sitemaps. But I also want to see things evolve a little. But I also don’t like JSON, because JSON is weird.
John 25:27
I think they’re, they’re like two possible directions that could happen. On the one hand, you could just submit a text file of all of the URLs from your site, where basically we say, well, all of these attributes haven’t been that super useful. Like, you should just give us a list of the URLs. And that might be one approach. The other approach that, I don’t know if index now uses this, or indexing API could be where you actually submit the pages themselves, kind of directly. So it’s not that search engines would have to crawl your web page to find the information there. But rather, that the information is together with the submission. And my feeling is that will be trickier because people, I don’t know, it adds an extra layer of complexity. And it makes it so that it’s easier to get those two sides out of sync. You submit something to a search engine, and you have something different on your website for accidental reasons, or for spammy reasons, or whatever. But kind of that disconnect feels kind of tricky.
Lizzi Sassman 26:30
Tricky for them or tricky for search engines?
John 26:33
I think both sides, because a search engine, or anyone who's kind of consuming this, still has to look at the pages to confirm that, actually, this is reasonable. And at that point, you're crawling the page. So what is the difference?
Lizzi Sassman 26:47
We do get a lot of people writing in who seem to think that this is how it should work, that they should just be able to send us this URL that we haven't indexed. And, like, there should just be a box for them to upload: here's this URL, Google, please know about it. But it seems like it's more complex than that, and they might not know about all these other things that they should be thinking about.
Gary 27:06
I mean, that could be a good topic for a future episode, where we talk about what gets into our crawl queues and what does not, because it is way more complicated than just submitting a sitemap. Basically, with a sitemap, you're just telling search engines, any search engine, that your URLs are here, and they can do whatever they want with them. You're not instructing that you want these crawled. Or not crawled. Well, with sitemaps you cannot say "not crawled"; you use robots.txt for that, right? Oh, look, we made a complete loop. Full circle.
John 27:42
And that’s it for this episode. Thanks for joining us here, folks. Next time, on Search Off the Record we’ll be talking about the future of the web with Alan Kent. We’ve been having fun with these episodes. And I hope you the listener have found them both entertaining and insightful as well. Feel free to drop me a note on Twitter or chat with us at one of the next virtual or in person events that we go to if you have any thoughts. And of course, don’t forget to like and subscribe. Thank you and goodbye. Bye now.
Gary 28:15
Good bye.