SEMrush recently published an article titled Why Budweiser Gets An “F” In SEO.
A brand as massive as Budweiser should have no problem ranking in search engines for all manner of relevant terms. However, based on the comments from Ryan Johnson, that clearly isn’t the case.
Following are some of the items identified after completing a rapid-fire SEO website audit of budweiser.com to see what other sorts of issues might be causing them problems:
- XML sitemaps
- Internal Redirects
- URL Canonicalisation
- <title> tags
- <hX> tags
- Structured Markup
- Load Time Performance
Budweiser have a lot of sub-domains configured, which in and of itself isn’t a problem. However, it does become a problem when settings aren’t configured properly. In a few seconds the following list of sub-domains showed up:
Many of the sub-domains are development versions of the site, such as the qa.* or new.* variants. In an ideal world only the primary website would be crawled and indexed by search engines. Having so many copies of budweiser.com indexed poses a duplicate content issue for the site and could lead to problems down the road.
robots.txt files are used to control what content spiders are allowed to crawl, but they don’t control indexing (a common misconception). The robots.txt files used across many of the sub-domains listed above are incorrectly configured.
A few issues that appeared at a glance:
- multiple blocks for the same user-agent
- attempting to disallow a domain instead of a URL on the current domain
- incorrect usage of the * and specific spider user agent blocks
In the first issue above, since the directives are split across multiple blocks, a spider could pick either block of directives without combining them, leaving half of the URLs that were intended to be blocked available for crawling.
The second issue is a massive problem: Budweiser have disallow directives that attempt to block an entire domain from being crawled, which isn’t a supported feature of the Robots Exclusion Protocol. As such, the domains they had intended to block remain available for crawling, except for the URLs correctly specified within the robots.txt file.
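To illustrate the difference (a hypothetical sketch, not the live file’s contents), a disallow value must be a URL path relative to the host serving the robots.txt file, so a full domain in a Disallow line simply blocks nothing:

```
# Ineffective: Disallow takes a path, not a domain or full URL
User-agent: *
Disallow: http://qa.example.com/

# Effective: served as qa.example.com/robots.txt on that sub-domain itself
User-agent: *
Disallow: /
```

The second form has to live on the sub-domain being blocked, since each host is governed only by its own robots.txt file.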
Spiders that honour robots.txt pick the most specific user-agent block for their crawler, falling back to less specific blocks, then to the wildcard block, and if nothing is present they’ll assume the website is fully available for crawling.
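That selection behaviour, and the fact that directives are never merged across blocks, can be demonstrated with Python’s standard library robots.txt parser (the rules and URLs below are invented for illustration):

```python
from urllib.robotparser import RobotFileParser

# A simplified version of the pattern described above: a wildcard block
# plus a named block. A spider uses the single best-matching block only.
rules = """\
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /staging/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# Googlebot matches its named block, so the wildcard's /private/ rule
# does not apply to it at all.
print(rp.can_fetch("Googlebot", "http://example.com/private/page.html"))  # True
print(rp.can_fetch("Googlebot", "http://example.com/staging/page.html"))  # False

# A crawler with no named block falls back to the wildcard rules.
print(rp.can_fetch("Bingbot", "http://example.com/private/page.html"))    # False
```

Note that Googlebot is free to crawl /private/ here, which is exactly the kind of surprise that split or duplicated blocks produce.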
The Budweiser robots.txt file has a block for Googlebot-Image but no disallow directive. That may be considered invalid, with the block ignored entirely, causing Googlebot-Image to fall back to a less specific block if one exists. In this particular instance, since the less specific blocks allow images to be crawled, it is unlikely to be causing a problem, but it should be corrected as a matter of hygiene.
Following on from the above, there are two Google-specific blocks defined within the robots.txt:
Google’s support documentation listing their crawlers doesn’t mention a crawler named ‘Google’. All of Google’s spiders that support falling back to a less specific user agent fall back to ‘Googlebot’. Given this documented behaviour, all directives specified in the block for the user agent ‘Google’ will be ignored and Google’s crawlers will fall back to the wildcard * entry.
Next on the agenda are disallow directives in the wildcard block that aren’t present in the name-specific blocks. Again, while not a problem in and of itself, after reviewing the content of the blocks it’s clear that Budweiser’s intention and what is actually happening aren’t in sync.
Good news, Budweiser are generating an XML sitemap.
Even more good news, it is linked from robots.txt for easy discovery by all relevant bots.
Bad news, crawling www.budweiser.com returned 81 web pages, however only 20 of those pages are listed within the XML sitemap to help search engines discover, crawl and index the Budweiser content.
More bad news, the XML sitemap links to broken URLs.
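Auditing this sort of gap is straightforward to script. As a minimal sketch (all URLs below are placeholders, not Budweiser’s real sitemap), parse the sitemap’s <loc> entries and diff them against the URLs found by a crawl:

```python
import xml.etree.ElementTree as ET

# Placeholder sitemap content standing in for a fetched sitemap.xml.
sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/</loc></url>
  <url><loc>http://www.example.com/our-beers.html</loc></url>
</urlset>"""

# The sitemap protocol uses a namespace, so findall needs a prefix map.
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
listed = {loc.text for loc in ET.fromstring(sitemap_xml).findall("sm:url/sm:loc", ns)}

# URLs discovered by crawling the site (placeholder data).
crawled = {
    "http://www.example.com/",
    "http://www.example.com/our-beers.html",
    "http://www.example.com/history.html",
}

# Pages the crawler found that the sitemap never mentions.
missing_from_sitemap = crawled - listed
print(sorted(missing_from_sitemap))  # ['http://www.example.com/history.html']
```

The reverse diff, `listed - crawled`, surfaces the other failure mode mentioned above: sitemap entries that point at broken or orphaned URLs.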
Crawling www.budweiser.com with a tool identified over 6,500 internal redirects within the site.
Each time Google processes a redirect, a small amount of the equity that Google would have passed to the linked URL, somewhere in the 10-20% range, is needlessly lost.
While Budweiser are correctly using HTTP 301 permanent redirects, they should simply update their internal links to point directly to the intended URL. In time the site will recover the lost equity and it has the added benefit of speeding the site up slightly for users as well.
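As a rough, purely illustrative model of that loss (assuming a 15% per-hop figure, the midpoint of the 10-20% range cited above):

```python
# Illustrative only: assume each redirect hop loses ~15% of the equity
# that would otherwise pass through the link.
loss_per_hop = 0.15
equity = 1.0

direct_link = equity                               # no redirect: full equity
one_redirect = equity * (1 - loss_per_hop)         # one hop
two_redirects = equity * (1 - loss_per_hop) ** 2   # chained redirects compound

print(round(one_redirect, 4))   # 0.85
print(round(two_redirects, 4))  # 0.7225
```

Multiplied across 6,500+ internal redirects, even a single hop per link adds up, which is why pointing internal links straight at the final URL is the cheap fix.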
URL canonicalisation is a process where one true URL is defined for a given resource.
To provide an example, search engines might find thousands of links to the home page of www.budweiser.com with marketing campaign tracking codes, which they consider completely separate URLs by default. Correctly configuring the rel=”canonical” link element or HTTP response header provides a mechanism to instruct search engines to merge all of the equity split over thousands of URLs into the true home page URL, boosting its strength and capacity to rank in the search results.
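As a sketch, every campaign-tagged variant of the home page would carry the same canonical reference in its document head (the tracking parameter shown is illustrative):

```
<!-- Served on http://www.budweiser.com/?utm_campaign=summer and every
     other tracked variant, all pointing at the one true URL -->
<link rel="canonical" href="http://www.budweiser.com/" />
```

For non-HTML resources the equivalent HTTP response header form is `Link: <http://www.budweiser.com/>; rel="canonical"`.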
Budweiser are canonicalising the content throughout their site inconsistently: some URLs include a rel=”canonical” link element while others don’t. Crawling www.budweiser.com yielded 81 web pages, however only 37 of them appeared to have the rel=”canonical” element specified.
Additionally, there were examples where Budweiser have a rel=”canonical” value specifying the wrong URL. For example, the brewery locations page, www.budweiser.com/our-brand/brewery-locations.html, has a canonical value of http://www.budweiser.com/our-brand/brew-location.html, which produces a 404 error.
Since approximately 50% of the site doesn’t have a rel=”canonical” value specified, and in some instances it is incorrectly configured, it’s possible a lot of equity or PageRank is being squandered through poor configuration.
The rel=”nofollow” meta tag or link attribute instructs Google to drop any links affected by the nofollow directive from their link graph. This is commonly used for links to third party websites that might be untrusted (ie, submitted via user generated content) or for advertising.
By removing the affected links from the link graph, those links inherently cannot play a role in boosting the search engine rankings of the linked URL, since no equity or PageRank can flow through a link that has been removed from the graph.
Simplistically, when Google calculates how much PageRank or equity flows through a URL, they take the equity of the linking URL and divide it equally among its outbound links.
In years gone by, applying a rel=”nofollow” to an internal link meant that the equity Google had originally allocated to that link would be reallocated across all other equity-passing outbound links, increasing the amount of equity flowing through them. This technique of maximising the equity flowing through specific links within a site became known as PageRank sculpting.
Several years ago Google changed how internal rel=”nofollow” links are handled: instead of reallocating the equity of the affected outbound links to all other equity-passing outbound links, that equity now simply vanishes or evaporates.
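The change described above is easy to see with a toy calculation (the numbers are illustrative, not Google’s actual weightings):

```python
# A page with 1.0 units of equity, 10 outbound links, 2 of them nofollowed.
page_equity = 1.0
outlinks = 10
nofollowed = 2

# Old behaviour: the nofollowed links' share was redistributed,
# so the 8 followed links split all of the equity between them.
old_per_link = page_equity / (outlinks - nofollowed)

# Current behaviour: equity is still divided across all 10 links,
# but the share assigned to nofollowed links simply evaporates.
new_per_link = page_equity / outlinks
evaporated = new_per_link * nofollowed

print(old_per_link)  # 0.125
print(new_per_link)  # 0.1
print(evaporated)    # 0.2
```

Under the current behaviour, 20% of this hypothetical page’s equity is lost outright, which is why internal nofollow links like Budweiser’s below are pure downside.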
On quick inspection, Budweiser have internal rel=”nofollow” links pointing to the following URLs (maybe more):
The <title> tag is a strong indicator to Google about the content they should expect to find in a given page and is displayed prominently in the search results for users to evaluate whether or not a given URL would yield the content they are looking for.
Broadly speaking, the <title> tags in use throughout the Budweiser website are okay. For example, reviewing the <title> tags used throughout the Budweiser Clydesdales blog shows that they lead with a descriptive title of the page, they aren’t bloated, nor are they keyword stuffed.
However, there are a number of high priority pages that could be improved, such as:
The rules of headings are pretty basic, no rocket science needed:
- use one <h1> tag per page that describes the primary content of the page
- if you need more headings, use <h2> through <h6>
- nest heading tags as needed to give hierarchy to the document
- use descriptive headings to help users and search engines alike understand it better
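Applying those rules, a product page might be structured along these lines (the content is illustrative):

```
<h1>Budweiser Chelada</h1>
  <h2>Ingredients</h2>
  <h2>Nutritional Information</h2>
    <h3>Serving Size</h3>
```

One descriptive <h1>, with <h2> and <h3> tags nested beneath it to express the document hierarchy.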
These simple rules are being broken throughout the Budweiser website:
- first <h1> is actually wrapped around the logo
- there are multiple <h1> tags in each page
- there is no nesting of <hX> tags to create hierarchy if and when needed
- <h1> tags for the primary body copy area often aren’t descriptive or relevant to the content
Fortunately, this is a fast and simple problem to correct throughout the Budweiser website.
Completing in-depth* keyword research using Ubersuggest highlights a variety of topics users want information on related to the Budweiser brand, which is great news.
Unfortunately the Budweiser website suffers from an all too common condition of being brochureware, in that it looks good but has no real substance or content to help search engines.
Take for instance the Budweiser product Chelada. As a consumer, it’d be a reasonable expectation to head to Google, type in ‘chelada’ and find Budweiser within the top 5-10 positions, but no. No problem, the consumer refines their query to ‘chelada beer’, still nothing. More refinement, ‘chelada beer budweiser’, and even with the Budweiser keyword, budweiser.com still doesn’t have a position 1 ranking.
Reviewing the Chelada web page, it’s clear why: Budweiser are giving Google nothing to work with. The only way they could reasonably have given Google less would be to delete the page from their website entirely.
As an immediate step, Budweiser should perform keyword research for all of their products and build out the relevant content consumers are seeking. If they do a good job of this, they could expect the website to bounce to the top of the search results.
* submit it once with the keyword ‘budweiser’ and scan the results
Structured markup such as schema.org allows a publisher to provide rich metadata about the content on the page, which search engines like Google use to augment the search results. Common, highly visible use cases are elements like reviews that can produce star ratings in the search results.
Quickly clicking through budweiser.com, it appears they have a few opportunities for this:
- brewery locations
- beer nutritional information
With respect to the first point, the brewery page indicates that there are 12 brewery locations across the United States, however no additional information is provided about those locations. It’d be practical and helpful to users to provide each location’s address, phone number, opening hours, whether it offers tours, sells products and so forth. Some of this information could be marked up using the LocalBusiness schema.org type.
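As a sketch, each brewery location page could embed a LocalBusiness object as JSON-LD (every value below is a placeholder, not real location data):

```
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Budweiser Brewery Tour (example)",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Street",
    "addressLocality": "St. Louis",
    "addressRegion": "MO"
  },
  "telephone": "+1-555-0100",
  "openingHours": "Mo-Su 10:00-16:00"
}
</script>
```

The same properties could equally be expressed inline with microdata; the point is that the address, phone number and hours become machine-readable rather than locked in page copy.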
The extensive keyword research performed earlier clearly indicated that consumers are interested in the nutritional information of the Budweiser products. If Budweiser were to provide detailed nutritional information, it’d help Google, help users and also allow them to mark up that information using the NutritionInformation schema.org object, which may lead to interesting universal objects appearing in the search results.
Load Time Performance
Users don’t like slow websites.
Assessing the Budweiser website with a variety of performance testing tools such as:
- Google PageSpeed Insights
- GTmetrix
reveals a common theme: budweiser.com could do with some serious attention.
This fast-paced SEO audit has identified a variety of technical and on-site issues that are holding Budweiser back in the search results. No doubt if a more structured and rigorous audit were completed the list would be even longer, but the above items certainly represent an excellent starting point for improving the search engine rankings of budweiser.com.