
Source: https://support.google.com/webmasters/topic/4598733?hl=en&ref_topic=6001981

Ajax-enhanced sites

Design AJAX-powered sites for accessibility

Many webmasters have discovered the advantages of using AJAX to improve the user experience on their sites, creating dynamic pages that act as powerful web applications. But like Flash, AJAX can make a site difficult for search engines to index if the technology is not implemented carefully. There are two main search engine issues around AJAX: Making sure that search engine bots can see your content, and making sure they can see and follow your navigation.

While Googlebot is great at understanding the structure of HTML links, it can have difficulty finding its way around sites which use JavaScript for navigation. We’re working on doing a better job of understanding JavaScript, but your best bet for creating a site that’s crawlable by Google and other search engines is to provide HTML links to your content.

Design for accessibility
We encourage webmasters to create pages for users, not just search engines. When you’re designing your AJAX site, think about the needs of your users, including those who may not be using a JavaScript-capable browser (for example, people who use screen readers or mobile devices). One of the easiest ways to test your site’s accessibility is to preview it in your browser with JavaScript turned off, or to view it in a text-only browser such as Lynx. Viewing a site as text-only can also help you identify other content which may be hard for Googlebot to see, such as text embedded in images or Flash.

Avoid iFrames – or link to their content separately
Content displayed via iFrames may not be indexed and available to appear in Google’s search results. We recommend that you avoid the use of iFrames to display content. If you do include iFrames, make sure to provide additional text-based links to the content they display, so that Googlebot can crawl and index this content.

Develop with progressive enhancement
If you’re starting from scratch, a good approach is to build your site’s structure and navigation using only HTML. Then, once you have the site’s pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses.

Of course, you’ll likely have links requiring JavaScript for Ajax functionality. Web developer Jeremy Keith labeled this technique Hijax, and it’s a way to help AJAX and static links coexist.

When creating your links, format them so they’ll offer a static link as well as calling a JavaScript function. That way you’ll have the AJAX functionality for JavaScript users, while non-JavaScript users can ignore the script and follow the link. For example:

<a href="ajax.html?foo=32" onClick="navigate('ajax.html#foo=32');
 return false">foo 32</a>

Note that the static link’s URL has a parameter (?foo=32) instead of a fragment (#foo=32), which is used by the AJAX code. This is important, as search engines understand URL parameters but often ignore fragments. Since you now offer static links, users and search engines can link to the exact content they want to share or reference.
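On the server side, the static URL with the ?foo parameter should return the same content that the AJAX code renders for the #foo fragment. A minimal sketch of that idea (the `handle_request` and `render_state` names are hypothetical helpers, not part of any Google API):

```python
from urllib.parse import urlparse, parse_qs

def render_state(state_id):
    # Hypothetical renderer: returns the full HTML for a given app state.
    return "<html><body>Content for state %s</body></html>" % state_id

def handle_request(url):
    """Serve, for a static URL like ajax.html?foo=32, the same content
    that the client-side AJAX code would render for ajax.html#foo=32."""
    query = parse_qs(urlparse(url).query)
    state = query.get("foo", ["0"])[0]
    return render_state(state)
```

With this in place, a non-JavaScript user or a crawler following the static link gets the same state-32 content that a JavaScript user reaches via the fragment.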

While we’re constantly improving our crawling capability, using HTML links remains a strong way to help us (as well as other search engines, mobile devices and users) better understand your site’s structure.

Follow the guidelines
In addition to the tips described here, we encourage you to also check out our Webmaster Guidelines for more information about what can make a site good for Google and your users. The guidelines also point out some practices to avoid, including sneaky JavaScript redirects. A general rule to follow is that while you can provide users different experiences based on their capabilities, the content should remain the same. For example, imagine we’ve created a page for Wysz’s Hamster Farm. The top of the page has a heading of “Wysz’s Hamster Farm,” and below it is an AJAX-powered slideshow of the latest hamster arrivals. Turning JavaScript off on the same page shouldn’t surprise a user with additional text reading:

Wysz’s Hamster Farm — hamsters, best hamsters, cheap hamsters, free hamsters, pets, farms, hamster farmers, dancing hamsters, rodents, hampsters, hamsers, best hamster resource, pet toys, dancing lessons, cute, hamster tricks, pet food, hamster habitat, hamster hotels, hamster birthday gift ideas and more!

A better implementation would display the same text whether JavaScript was enabled or not, and in the best scenario, offer an HTML version of the slideshow to non-JavaScript users.

AJAX: Frequently asked questions

This FAQ answers the most common questions about AJAX crawling.

Your site should use the #! syntax in all URLs that have adopted the AJAX crawling scheme. Googlebot will not follow hyperlinks in the _escaped_fragment_ format.

See the sample AJAX application at http://gwt.google.com/samples/Showcase/Showcase.html. If you click on any of the links on the left, you’ll see that the URL will contain a #! hash fragment, and the application will navigate to the state corresponding to this hash fragment. If you change the #! (for example http://gwt.google.com/samples/Showcase/Showcase.html#!CwRadioButton) to ?_escaped_fragment_= (for example, http://gwt.google.com/samples/Showcase/Showcase.html?_escaped_fragment_=CwRadioButton), the site will return an HTML snapshot.
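The #! to ?_escaped_fragment_= rewrite described above is mechanical, so a server can compute the "ugly" URL a crawler will request from any "pretty" URL. A small sketch of the mapping (the function name is ours; special characters in the fragment are percent-encoded):

```python
from urllib.parse import quote

def to_escaped_fragment(pretty_url):
    """Map a pretty #! URL to the ?_escaped_fragment_= form that the
    crawler requests under the AJAX crawling scheme."""
    if "#!" not in pretty_url:
        return pretty_url  # not an AJAX-crawlable URL; leave unchanged
    base, fragment = pretty_url.split("#!", 1)
    # Append to an existing query string if the base URL already has one.
    sep = "&" if "?" in base else "?"
    return base + sep + "_escaped_fragment_=" + quote(fragment, safe="=")
```

For example, the Showcase URL above maps to its ?_escaped_fragment_=CwRadioButton snapshot URL.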

In the near term, your pages may not appear properly in Google’s search results pages. However, we are continuously working to make Googlebot behave more like a browser. As features required by your site are implemented, Googlebot may start to index your pages properly without help. However, this AJAX crawling scheme provides a solution for sites that are already using AJAX and wish to ensure that their content is properly indexed today. We expect that it will be a good solution for anyone who already has HTML snapshots of their pages or who chooses to use a headless browser to get such HTML snapshots.

The answer to this question depends entirely on how frequently your app’s content changes. If it changes frequently, you should always construct a fresh HTML snapshot in response to a crawler request. On the other hand, consider a library archive whose inventory does not change on a regular basis. To keep the server from having to produce the same HTML snapshots over and over, you could create all relevant HTML snapshots once, possibly offline, and then save them for future reference. You could also respond to Googlebot with a 304 (Not modified) HTTP status code.
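The caching choice above can be sketched as a small lookup with an age threshold. This is an illustrative helper (the names and the 24-hour staleness limit are our assumptions, not a Google requirement):

```python
import time

# Hypothetical snapshot cache: maps an _escaped_fragment_ state to
# (html, generated_at). For content that rarely changes, snapshots can
# be generated once, possibly offline, and reused.
SNAPSHOT_CACHE = {}
MAX_AGE_SECONDS = 24 * 3600  # acceptable staleness; tune for your content

def get_snapshot(state, generate):
    """Return a cached HTML snapshot for `state`, regenerating it with
    the caller-supplied `generate` function only when the cache entry
    is missing or older than MAX_AGE_SECONDS."""
    entry = SNAPSHOT_CACHE.get(state)
    now = time.time()
    if entry and now - entry[1] < MAX_AGE_SECONDS:
        return entry[0]
    html = generate(state)
    SNAPSHOT_CACHE[state] = (html, now)
    return html
```

A frequently changing app would instead set a very small MAX_AGE_SECONDS (or skip the cache entirely) so every crawler request gets a fresh snapshot.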

Maybe it should! You can greatly speed up your application by using hash fragments, because hash fragments are handled by the browser on the client side and do not cause the entire page to refresh. Additionally, they will allow you to make history work in your application (the infamous “browser back button”). Various AJAX frameworks support hash fragments. For example, see Really Simple History, jQuery’s history plugin, Google Web Toolkit’s history mechanism, or ASP.NET AJAX’s support for history management.

If, however, it is not feasible to structure your app to use hash fragments, you can do the following: use a special token in your hash fragments (that is, everything after the # sign in a URL). Hash fragments that represent unique page states must begin with an exclamation mark. For example, if your AJAX app contains a URL like this:

www.example.com/ajax.html#mystate

it should now become this:

www.example.com/ajax.html#!mystate

When your site adopts the scheme, your site will be considered “AJAX crawlable.” This means that the crawler will see the content of your app if your site supplies HTML snapshots.

A URL in the _escaped_fragment_ format is a temporary URL that should never be seen by the end user. In all contexts that the user will see, the pretty URL (with #! instead of _escaped_fragment_) should be used: in normal app interaction, in Sitemaps, in hyperlinks, in redirects, and in any other situation where the user might see the URL. For the same reason, search results are pretty URLs rather than ugly URLs.

Cloaking is serving users content that differs from the content served to search engines. This is generally done with the intent of boosting one’s ranking in search results. Cloaking has always been (and will always be) an important issue for search engines, and it’s important to note that making AJAX applications crawlable is by no means an invitation to make cloaking easier. For this reason, the HTML snapshot must contain the same content as the end user would see in a browser. If this is not the case, it may be considered cloaking. See this answer for more details.

Google does index many rich media file types, and we’re continually working on improving our crawling and indexing. However, Googlebot may not be able to see all the content of a Flash or other rich media application (just like it cannot crawl all dynamic content on your site), so it can be useful to use this scheme to provide Googlebot with additional content. Again, the HTML snapshot must contain the same content as the end user would see in a browser. Google reserves the right to exclude sites from its index that are considered to use cloaking.

When your site adopts the AJAX crawling scheme, the Google crawler will crawl every hash fragment URL it encounters. If you have hash fragment URLs that should not be crawled, we suggest that you add a wildcard Disallow directive to your robots.txt file. For example, you can use a convention in your hash fragments that should not be crawled and then exclude all URLs that match it in your robots.txt file. Suppose all your non-indexable states are of the form #DONOTCRAWLmyfragment. Then you could prevent Googlebot from crawling these pages by adding the following to your robots.txt:

Disallow: /*_escaped_fragment_=DONOTCRAWL

#! is an infrequently used token in existing hash fragments; however, it is not disallowed by the URL specification. What happens if your application uses #! but does not want to adopt the new AJAX crawling scheme? One approach you can take is to add a directive in your robots.txt to indicate this to the crawler.

Disallow: /*_escaped_fragment_

Please note that this means that if your application contains only this URL: www.example.com/index.html#!mystate, then this URL will not be crawled. If your application additionally contains the bare URL www.example.com/ajax.html, this URL will be crawled.
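The Disallow examples above match against the crawler's _escaped_fragment_ request rather than the #! URL itself. A rough sketch of how a single-wildcard Disallow pattern matches a URL's path and query (our simplification, not Google's actual matcher):

```python
from urllib.parse import urlparse

def blocked_by_disallow(url, pattern="/*_escaped_fragment_"):
    """Rough approximation: does a Disallow pattern with one '*'
    wildcard match this URL's path plus query string?"""
    parsed = urlparse(url)
    path = parsed.path + ("?" + parsed.query if parsed.query else "")
    prefix, _, suffix = pattern.partition("*")
    # Blocked when the path starts with the literal prefix and the
    # remainder contains the text that follows the wildcard.
    return path.startswith(prefix) and suffix in path[len(prefix):]
```

So the crawler's request for www.example.com/index.html?_escaped_fragment_=mystate matches `Disallow: /*_escaped_fragment_`, while the bare www.example.com/ajax.html does not.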

One side-effect of the current practice of providing static content to search engines is that webmasters have made their applications more accessible to users with disabilities. This scheme takes accessibility to a new level: without manual intervention, webmasters can use a headless browser to create HTML snapshots, which contain all the relevant content and are usable by screen readers. This means that it is now easier to keep the static content up-to-date, as less manual work is required. In other words, webmasters now have an even greater incentive to make their applications accessible to those with disabilities.

Use <link rel="canonical" href="http://example.com/ajax.html#!foo=123" /> (don’t use <link rel="canonical" href="http://example.com/ajax.html?_escaped_fragment_=foo=123" />).

Your Sitemap should include the version you prefer displayed in search results, so it should be http://example.com/ajax.html#!foo=123.

It’s common for sites to want the same URLs for Product Search and Web Search. Generally, the #! version of the URL should be treated as the “canonical” version that should be used in all contexts. The _escaped_fragment_ URL is considered a temporary URL that end users should never see.

If “doesn’t work” means that HtmlUnit does not return the snapshot you were expecting to see, it’s very likely that the culprit is that you didn’t give it enough time to execute the JavaScript and/or XHR requests. To fix this, try any or all of the following:

  • Use NicelyResynchronizingAjaxController. This will cause HtmlUnit to wait for any outstanding XHR calls.
  • Bump up the wait time for waitForBackgroundJavaScript and/or waitForBackgroundJavaScriptStartingBefore.

This will very likely fix your problem. If it doesn’t, you can also try the FAQ for HtmlUnit here: http://htmlunit.sourceforge.net/faq.html. HtmlUnit also has a user forum.

Images and video

Google Image best practices

Google Images is a way to visually discover information on the web. New features, such as image captions, prominent badges, and AMP results, let users quickly explore information with more context around images.

By adding more context around images, results can become much more useful, which can lead to higher quality traffic to your site. You can aid in the discovery process by making sure that your images and your site are optimized for Google Images. Follow our guidelines to increase the likelihood that your content will appear in Google Images search results.

Opt out of image search inline linking

If you choose, you can prevent the full-sized image from appearing in Google’s image search results page by opting out of inline linking in Google image search results.

To opt out of inline linking:

  1. When your image is requested, examine the HTTP referrer header in the request.
  2. If the request is coming from a Google domain, reply with HTTP 200 or 204 and no content.

Google will still crawl your page and see the image, but will display a thumbnail image generated at crawl time in search results. This opt-out is possible at any time, and does not require re-processing of a website’s images. This behavior is not considered image cloaking and will not result in manual actions.
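The two-step opt-out above amounts to a simple referrer check on your image server. A minimal sketch (the function name and the substring-based Google-domain check are our assumptions; a production server would inspect the actual Referer header of the HTTP request):

```python
GOOGLE_REFERRER_HINTS = ("google.",)  # assumption: simple substring check

def image_response(referrer_host, image_bytes):
    """Step 1: examine the referrer host of the image request.
    Step 2: if it comes from a Google domain, reply 204 with no
    content; otherwise serve the image normally."""
    if referrer_host and any(h in referrer_host for h in GOOGLE_REFERRER_HINTS):
        return 204, b""
    return 200, image_bytes
```

With this in place, Google's image search results page gets an empty response (so no full-sized inline image), while normal visitors to your pages still receive the image.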

You can also prevent the image from appearing in search results entirely.

Create a great user experience

To boost your content’s visibility in Google Images, focus on the user by providing a great user experience: make pages primarily for users, not for search engines. Here are some tips:

  • Provide good context: Make sure that your visual content is relevant to the topic of the page. We suggest that you display images only where they add original value to the page. We particularly discourage pages where neither the images nor the text are original content.
  • Optimize placement: Whenever possible, place images near relevant text. When it makes sense, consider placing the most important image near the top of the page.
  • Don’t embed important text inside images: Avoid embedding text in images, especially important text elements like page headings and menu items, because not all users can access them (and page translation tools won’t work on images). To ensure maximum accessibility of your content, keep text in HTML and provide alt text for images.
  • Create informative and high quality sites: Good content on your webpage is just as important as visual content for Google Images – it provides context and makes the result more actionable. Page content may be used to generate a text snippet for the image, and Google considers the page content quality when ranking images.
  • Create device-friendly sites: Users search on Google Images more on mobile devices than on desktop. For this reason, it is important that you design your site for all device types and sizes. Use the mobile friendly testing tool to test how well your pages work on mobile devices, and get feedback on what needs to be fixed.
  • Create good URL structure for your images: Google uses the URL path as well as the file name to help it understand your images. Consider organizing your image content so that URLs are constructed logically.

Check your page title and description

Google Images automatically generates a title and snippet to best explain each result and how it relates to the user query. This helps users decide whether or not to click on a result.

We use a number of different sources for this information, including descriptive information in the title and meta tags for each page.

You can help us improve the quality of the title and snippet displayed for your pages by following Google’s title and snippet guidelines.

Add structured data

If you include structured data, Google Images can display your images as rich results, including a prominent badge, which give users relevant information about your page and can drive better targeted traffic to your site. Google Images supports structured data for several content types.

Follow the general structured data guidelines as well as any guidelines specific to your structured data type; otherwise your structured data might be ineligible for rich result display in Google Images. For each supported structured data type, the image attribute is required for your content to be eligible for a badge and rich result in Google Images.

Optimize for speed

Images are often the largest contributor to overall page size, which can make pages slow and expensive to load. Make sure to apply the latest image optimization and responsive image techniques to provide a high quality and fast user experience.

On Google Images, the AMP logo helps users identify pages that load quickly and smoothly. Consider turning your image host page (the page the user lands on after clicking a result in Google Images) into an AMP page to decrease its load time.

Analyze your site speed with PageSpeed Insights and visit our Web Fundamentals page to learn about best practices and techniques to improve website performance.

Add good quality photos

High-quality photos appeal to users more than blurry, unclear images. Also, sharp images are more appealing to users in the result thumbnail and increase the likelihood of getting traffic from users.

Include descriptive titles, captions, filenames, and text for images

Google extracts information about the subject matter of the image from the content of the page, including captions and image titles. Wherever possible, make sure images are placed near relevant text and on pages that are relevant to the image subject matter.

Likewise, the filename can give Google clues about the subject matter of the image. For example, my-new-black-kitten.jpg is better than IMG00023.JPG.

Use descriptive alt text

Alt text (text that describes an image) improves accessibility for people who can’t see images on web pages, including users who use screen readers or have low-bandwidth connections.

Google uses alt text along with computer vision algorithms and the contents of the page to understand the subject matter of the image. Also, alt text in images is useful as anchor text if you decide to use an image as a link.

When choosing alt text, focus on creating useful, information-rich content that uses keywords appropriately and is in context of the content of the page. Avoid filling alt attributes with keywords (keyword stuffing) as it results in a negative user experience and may cause your site to be seen as spam.

  • Bad (missing alt text): <img src="puppy.jpg" alt=""/>
  • Bad (keyword stuffing): <img src="puppy.jpg" alt="puppy dog baby dog pup pups puppies doggies pups litter puppies dog retriever  labrador wolfhound setter pointer puppy jack russell terrier puppies dog food cheap dogfood puppy food"/>
  • Better: <img src="puppy.jpg" alt="puppy"/>
  • Best: <img src="puppy.jpg" alt="Dalmatian puppy playing fetch"/> 

We recommend testing your content by auditing for accessibility and using a slow network connection emulator.

Help us discover all your images

Use an image sitemap

Images are an important source of information about the content on your site. You can give Google additional details about your images, and provide the URLs of images that we might not otherwise discover, by adding information to an image sitemap.

Image sitemaps can contain URLs from other domains, unlike regular sitemaps, which enforce cross-domain restrictions. This allows you to use CDNs (content delivery networks) to host images. We encourage you to verify the CDN’s domain name in Search Console so that we can inform you of any crawl errors that we may find.

Supported image formats

Google Images supports images in the following formats: BMP, GIF, JPEG, PNG, WebP, and SVG.

You can also inline images as Data URIs. Data URIs provide a way to include a file, such as an image, inline by setting the src of an img element as a Base64 encoded string using the following format:

<img src="data:image/svg+xml;base64,[data]">

While inlining images can reduce HTTP requests, you should carefully judge when to use them since it can considerably increase the size of the page. For more on this, refer to the section on pros and cons of inlining images on our Web Fundamentals page.
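Building a data URI in the format shown above is a one-line Base64 encoding job. A small helper as a sketch (the function name is ours; note that Base64 grows the payload by roughly a third, which is part of the page-size trade-off just mentioned):

```python
import base64

def to_data_uri(image_bytes, mime_type="image/png"):
    """Build a Base64 data URI for inlining an image, in the
    data:[mime];base64,[data] format shown above."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return "data:%s;base64,%s" % (mime_type, encoded)
```

For example, `to_data_uri(svg_bytes, "image/svg+xml")` produces a string you can place directly in an img element's src attribute.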

Responsive images

Designing responsive web pages leads to better user experience, since users use them across a plethora of device types. Refer to our Web Fundamentals on Images to learn about the best practices for handling images on your website.

Webpages can use the <img> tag’s srcset attribute or the <picture> element to specify responsive images. However, some browsers and crawlers do not understand these attributes, so we recommend that you always specify a fallback URL via the img src attribute.

The srcset attribute lets you specify different versions of the same image for different screen sizes.

Example: <img srcset>

<img srcset="example-320w.jpg 320w,
             example-480w.jpg 480w,
             example-800w.jpg 800w"
     sizes="(max-width: 320px) 280px,
            (max-width: 480px) 440px,
            800px"
     src="example-800w.jpg" alt="responsive web!">
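The srcset value in the example above follows a regular pattern ("file width-descriptor" pairs), so it is easy to generate. A tiny helper as an illustration (the function and the `-NNNw.jpg` naming convention are ours, mirroring the example, not a Google API):

```python
def build_srcset(basename, widths):
    """Assemble a srcset attribute value from a base file name and a
    list of pixel widths, e.g. 'example-320w.jpg 320w, ...'."""
    return ", ".join("%s-%dw.jpg %dw" % (basename, w, w) for w in widths)
```

Calling `build_srcset("example", [320, 480, 800])` reproduces the srcset value from the example above.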

The <picture> element is a container that is used to group different <source> versions of the same image. It offers a fallback approach so the browser can choose the right image depending on device capabilities, like pixel density and screen size. The picture element also comes in handy for using new image formats with built-in graceful degradation for clients that may not yet support the new formats.

We recommend that you always provide a fallback <img> element with a src attribute when using the <picture> element, as in the following example:

Example: <picture>

<picture>
  <source type="image/svg+xml" srcset="pyramid.svg">
  <source type="image/webp" srcset="pyramid.webp"> 
  <img src="pyramid.png" alt="large PNG image...">
</picture>

Optimize for SafeSearch

SafeSearch is a setting in your account that specifies whether to show or block explicit images, videos, and websites in Google Search results. You should help Google understand the nature of your images in order to apply SafeSearch settings to your images if appropriate.

Group adult-only images in a common URL location

If your site contains adult images, we strongly recommend grouping the images separately from other images on your website. For example: http://www.example.com/adult/image.jpg.

Add metadata to adult pages

Our algorithms use a variety of signals to decide whether an image or a whole page should be filtered from the results when the user’s SafeSearch filter is turned on. In the case of images, some of these signals are generated using machine learning, but the SafeSearch algorithms also look at simpler things such as where the image was used previously and the context in which the image was used.

One of the strongest signals is self-marked adult pages. If you publish adult content, we recommend that you add one of the following meta tags to your pages:

<meta name="rating" content="adult" />
<meta name="rating" content="RTA-5042-1996-1400-1577-RTA" />

Many users prefer not to have adult content included in their search results (especially if kids use the same device). When you provide one of these meta tags, it helps to provide a better user experience because users don’t see results which they don’t want to or expect to see.

As with all algorithms, sometimes it may happen that SafeSearch filters content incorrectly. If you think your images or pages are mistakenly being filtered by SafeSearch, please let us know using the SafeSearch form.

And finally…

Please read our SEO Starter Guide which contains lots of useful information to rank better, and if you have more questions please post them in the Webmaster Help Forum.

Video best practices

Get your videos found by Google Search

Of the billions of Google searches done every day, many are looking for video content. Following the best practices listed here (as well as our usual Webmaster Guidelines) can increase the likelihood that your videos will be returned in search results.

Video results in Google Search appear both in combined Search results and in Video Search results. When a user clicks the video result they will be taken to your page, where they can watch your video.

 

How Google crawls a video

In order to expose a video in search results, Google must understand something about the video. Google can extract information about a video in the following ways:

  • Google can crawl the video (if in a supported video encoding) and extract a thumbnail and preview. Google can also extract some limited meaning from the audio and video of the file.
  • Google can extract information from the page hosting the video, including the page text and meta tags.
  • Google can use structured data (VideoObject) or a video sitemap associated with the video.

YouTube content: YouTube videos are always crawlable. However, it is still helpful if you provide a video sitemap or structured data to help Google find an embedded YouTube video on your page. Sitemaps and structured data also let you provide additional information about the video.

About Video Search results

How, or if, your video shows up on search depends on how much information you provide to Google. Google requires two pieces of information to expose your video in search results: a thumbnail image and a link to the actual video file. However, the more information you provide, the better your search result experience.

Here are the two basic levels of video search appearance:

  • Basic appearance: If you provide only the absolute minimum information (a thumbnail image and a link to a video file), your video can appear in combined search results and video search results with a thumbnail image and a link, but without enhanced features such as video preview or content parsing.


    Example basic video result

  • Enhanced appearance: If you provide more information, Google can provide more features for your video, such as video preview, video length, video date and provider information, the ability to restrict search results based on the user’s country or search device, and more.

    Example enhanced desktop video result

    Example enhanced mobile video result

 

Best practices

Minimum requirements for a video search result:

If you want your video to be eligible for search results:

  • Google must be able to find the video. Videos are identified in the page by the presence of an HTML tag, for example: <video>, <embed>, or <object>. Ensure that the page doesn’t require complex user actions or specific URL fragments to load, or Google might not find it. Tip: Although we can find videos embedded in a page through natural crawling, you can help us find your videos by posting a video sitemap.
  • You must provide a high-quality thumbnail image for the video.
  • Make sure that each video lives on a publicly available page where users can watch it. The page must not require a login, and must not be blocked by robots.txt or noindex (it must be accessible to Google).
  • The video content should apply specifically to the content of its host page. For example, if you have a recipe page for peach pie, don’t embed a video about pastries in general.
  • Ensure that any information you provide in a video sitemap or video markup is consistent with the actual video content.

For best results:

If you take these extra steps, Google can provide better search results for your video:

Provide a high-quality thumbnail for your video

To be eligible to appear in Google Video Search results, a video must have a thumbnail image that can be shown in search results.

You can provide (or enable) a thumbnail in several ways:

  • If using the <video> HTML tag, specify the poster attribute.
  • In a video sitemap, specify <video:thumbnail_loc>
  • In structured data, specify VideoObject.thumbnailUrl
  • Provide a video in a crawlable format and we can generate a thumbnail for you.

Preferred formats: JPG, PNG

Size: From 160×90 to 1920×1080 pixels

Location: The preview thumbnail must be accessible by Googlebot (that is, not blocked by robots.txt or a login requirement).
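The format and size constraints above can be checked programmatically before you publish a thumbnail. A small validator as a sketch (the function name is ours, and treating the size range as independent width and height bounds from 160×90 to 1920×1080 is one reading of the constraint):

```python
def thumbnail_ok(width, height, fmt):
    """Check a thumbnail against the constraints listed above:
    JPG or PNG, and between 160x90 and 1920x1080 pixels."""
    if fmt.upper() not in ("JPG", "JPEG", "PNG"):
        return False
    return 160 <= width <= 1920 and 90 <= height <= 1080
```

For example, a 1280×720 PNG passes, while a 100×60 image or a GIF does not.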

Make your video crawlable

If Google can crawl your video, we can generate a thumbnail image for you, enable video preview, and provide other features.

To make your video crawlable:

  • The video must be in a supported format.
  • The video host page and streaming file bytes must not be blocked to Google. (Blocked means the page or file is behind a paywall or a login, marked noindex, or blocked by robots.txt.)
  • The video host page and the server streaming the actual video must have the bandwidth to be crawled. So if your landing page at example.com/puppies.html has an embedded puppies video served by somestreamingservice.com, both example.com and somestreamingservice.com must be unblocked and have available server load.

Supported video encodings

Google can crawl the following video file types: .3g2, .3gp2, .3gp, .3gpp, .asf, .avi, .divx, .f4v, .flv, .m2v, .m3u8, .m4v, .mkv, .mov, .mp4, .mpe, .mpeg, .mpg, .ogv, .qvt, .ram, .rm, .vob, .webm, .wmv, .xap
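The list above is easy to turn into a publish-time check for your own pipeline. A small sketch (the function name and set constant are ours; it only inspects the file extension, not the actual encoding inside the file):

```python
import os

# File extensions from the supported list above.
CRAWLABLE_VIDEO_EXTENSIONS = {
    ".3g2", ".3gp2", ".3gp", ".3gpp", ".asf", ".avi", ".divx", ".f4v",
    ".flv", ".m2v", ".m3u8", ".m4v", ".mkv", ".mov", ".mp4", ".mpe",
    ".mpeg", ".mpg", ".ogv", ".qvt", ".ram", ".rm", ".vob", ".webm",
    ".wmv", ".xap",
}

def is_crawlable_video(filename):
    """True when the file's extension appears in the supported list."""
    return os.path.splitext(filename.lower())[1] in CRAWLABLE_VIDEO_EXTENSIONS
```

Note the caveat: a file named .mp4 that actually contains an unsupported codec would pass this check but could still fail to be crawled.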

Describe your video using structured data or a video sitemap

You can provide additional information about your video to Google using structured data, a video sitemap, or both. Providing this extra information can enable more features in search results and help us understand and rank your video better.

Both techniques can expose the same information to Google, but a video sitemap can be useful for helping Google find new content or updated content more quickly, while structured data might be more familiar to some people than sitemaps, and be more consistent with their website’s use of structured data to enable rich results. You can use both techniques for your website, but if you do, be sure that your data is consistent in both places.

Structured data

Add structured data describing your video on the hosting page. Structured data is information that you provide according to a well-defined format using either tags or JSON. When Google crawls the page, it can read and understand that format to extract information about your video.

There are several formats that you can use, but Google strongly recommends using schema.org’s VideoObject syntax in JSON-LD format.

Schema.org VideoObject (Recommended)

Embed code for VideoObject on the page. The VideoObject is associated with the embedded video that has a matching source URL.

Learn how to embed a VideoObject description in your page for each video.

VideoObject JSON-LD example

<html>
<head>
  <title>Schnitzel in an hour</title>
</head>
<body>
  <script type="application/ld+json">
  {
    "@context": "http://schema.org",
    "@type": "VideoObject",
    "name": "Schnitzel Stories",
    "description": "How to make fantastic schnitzel in just one hour",
    "thumbnailUrl": "https://example.com/imgs/schnitzel-small.jpg",
    "uploadDate": "2015-02-05T08:00:00+08:00",
    "duration": "PT1M33S",
    "contentUrl": "https://streamserver.example.com/schnitzel.mp4"
  }
  </script>
  <h1>Everybody loves schnitzel</h1>

  ... omitted schnitzel-related page content...

  <video width="420"
      src="https://streamserver.example.com/schnitzel.mp4"
      poster="https://example.com/imgs/schnitzel-small.jpg">
  </video>
</body>
</html>

 

Simple VideoObject or TV/Movie rich result?

If you are simply describing a television show or movie with information such as reviews or cast information, or if your video requires complex actions such as purchase or rental, you should implement the TV or Movie structured data type on your website. Using TV or Movie structured data enables a rich result in Search that can include ratings, reviews, and actor information, as well as links to free or paid streaming services. Rich results are shown in the combined search results pane only.

Open Graph Protocol

As an alternative to the schema.org VideoObject syntax, Google can also process some Open Graph Protocol metadata. The tags should describe the primary and most prominent video on the page.
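For illustration, Open Graph video metadata placed in the page `<head>` might look like the following sketch (the URLs and titles are placeholders, reusing the schnitzel example from above):

```html
<!-- Open Graph video tags; URLs are hypothetical placeholders -->
<meta property="og:type" content="video.other" />
<meta property="og:title" content="Schnitzel Stories" />
<meta property="og:description" content="How to make fantastic schnitzel in just one hour" />
<meta property="og:image" content="https://example.com/imgs/schnitzel-small.jpg" />
<meta property="og:video" content="https://streamserver.example.com/schnitzel.mp4" />
<meta property="og:video:type" content="video/mp4" />
```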

Video sitemap

A video sitemap is an XML sitemap that Google uses to find videos on your site, and can also provide information about a video to Google. A video sitemap entry can describe a video the same way as a VideoObject structured data element. The advantage of using a video sitemap is that it also helps Google find new or updated videos, and that it can describe many videos in one file rather than requiring Google to crawl each page and discover changes individually.

Learn how to create a video sitemap.
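As a sketch, a minimal video sitemap with a single entry might look like this (the URLs and text are placeholders; see the linked documentation for the full set of required and optional tags):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>http://www.example.com/videos/some_video_landing_page.html</loc>
    <video:video>
      <!-- Thumbnail, title, description, and a content or player location are required -->
      <video:thumbnail_loc>http://www.example.com/thumbs/123.jpg</video:thumbnail_loc>
      <video:title>Grilling steaks for summer</video:title>
      <video:description>Bob shows you how to grill steaks perfectly every time</video:description>
      <video:content_loc>http://streamserver.example.com/video123.mp4</video:content_loc>
    </video:video>
  </url>
</urlset>
```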

Updating your content

You can tell Google when a video has changed, depending on how you helped us find or read your content. If you simply swap out the video URL or source file with no other changes, Google might not notice the change.

  • Structured data: When your on-page video structured data changes, Google will see the change the next time it crawls the page. You can notify Google of a changed page by using a normal sitemap or a video sitemap.
  • Video sitemaps and mRSS: When you post a video sitemap, Google will periodically recrawl it and update search results with any changed video data. You can also resubmit a sitemap or notify Google of a changed sitemap to request an immediate recrawl. Find out more about submitting sitemaps and using HTTP requests to update sitemaps.
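For example, one way to request a recrawl of an updated sitemap is a plain HTTP GET request to Google's sitemap ping endpoint; the sitemap URL below is a placeholder and should be URL-encoded if it contains special characters:

```
https://www.google.com/ping?sitemap=https://example.com/video-sitemap.xml
```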

Removing a video

We recommend the following options to remove a video from your site:

  • Return a 404 (Not found) HTTP status code for any landing page that contains a removed or expired video. In addition to the 404 response code, you can still return the HTML of the page to make the change transparent to most users.
  • Indicate an expiration date in your schema.org structured data, video sitemap (use the <video:expiration_date> element), or mRSS feed (<dcterms:valid> tag). Here is an example of a video sitemap with a video that expired in November 2009:
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:video="http://www.google.com/schemas/sitemap-video/1.1"> 
      <url> 
        <loc>http://www.example.com/videos/some_video_landing_page.html</loc>
        <video:video>
          <video:thumbnail_loc>
              http://www.example.com/thumbs/123.jpg
          </video:thumbnail_loc> 
          <video:title>
              Grilling steaks for summer
          </video:title>
          <video:description>
              Bob shows you how to grill steaks perfectly every time
          </video:description>
          <video:player_loc>
              http://www.example.com/videoplayer?video=123
          </video:player_loc>
          <video:expiration_date>2009-11-05T19:20:30+08:00</video:expiration_date>
        </video:video> 
      </url> 
    </urlset>

Avoid using complex video loading conditionals

When designing your site, configure your video pages without any overly complex user interaction or conditions required to load a video. For instance, if you use complicated JavaScript to embed video objects only under certain circumstances (for example, based on hash tags in the URL), it’s possible that we will not find all your videos. This is especially important if you aren’t using a sitemap to list the video.

Create a great user experience on your video pages

In addition to having great video, you should think about the design of the HTML pages around your content. For example, consider the following:

  • Create a standalone landing page for each video, where you can gather all its related information. If you do this, be sure to provide unique information—such as descriptive titles and captions—on each page.
  • Make it as easy as possible for users to find and play the videos on each landing page. The presence of a prominent, embedded video player using widely supported video formats can make your videos more attractive to users and easier for Google to index.

Restrict users by platform

You can restrict search results for your video based on the searcher’s platform. Platforms include computer browsers, mobile device browsers, and television browsers.

Restrict by platform using a video sitemap

If your video does not have any platform restrictions, you should omit the platform restriction tag.

In video sitemaps, the <video:platform> tag can be used to allow or prevent the video from appearing in search results from specified devices. Only one <video:platform> tag is allowed per video entry. The tag has a required relationship attribute that specifies whether the platforms listed are excluded or required.

Example

In this video sitemap example, the video will appear only on desktop and mobile browsers.

<url>
  <loc>http://www.example.com/videos/some_video_landing_page.html</loc>
  <video:video>
    <video:thumbnail_loc>
        http://www.example.com/thumbs/123.jpg
    </video:thumbnail_loc>
    <video:title>Grilling steaks for summer</video:title>
    <video:description>
        Bob shows you how to get perfectly done steaks every time
    </video:description>
    <video:player_loc>
        http://www.example.com/videoplayer?video=123
    </video:player_loc>
    <video:platform relationship="allow">web mobile</video:platform>
  </video:video>
</url>

Restrict by platform using structured data or mRSS

There is no platform restriction tag for VideoObject or mRSS feeds.

Restrict users by country

You can restrict search results for your video based on the searcher’s location. If your video does not have any country restrictions, you should omit the country restriction tags.

Restrict by country using a video sitemap

In a video sitemap, the <video:restriction> tag can be used to allow or deny the video from appearing in specific countries. Only one <video:restriction> tag is allowed per video entry.

The <video:restriction> tag should contain one or more space-delimited ISO 3166 country codes. The required relationship attribute specifies the type of restriction.

  • relationship="allow" – The video will appear only for the specified countries. If no countries are specified, the video will not appear anywhere.
  • relationship="deny" – The video will appear everywhere except for the specified countries. If no countries are specified, the video will appear everywhere.

In this video sitemap example, the video will appear only in Canada and Mexico.

   <url> 
     <loc>http://www.example.com/videos/some_video_landing_page.html</loc>
     <video:video>
       <video:thumbnail_loc>
           http://www.example.com/thumbs/123.jpg
       </video:thumbnail_loc> 
       <video:title>Grilling steaks for summer</video:title>
       <video:description>
           Bob shows you how to get perfectly done steaks every time
       </video:description>
       <video:player_loc>
           http://www.example.com/player?video=123
       </video:player_loc>
       <video:restriction relationship="allow">ca mx</video:restriction> 
     </video:video> 
   </url>

Restrict by country using structured data

If you use VideoObject structured data to describe a video, set the VideoObject.regionsAllowed property to specify which regions can get the video search result. If you omit this property, all regions can see the video in search results.
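As a sketch, the regionsAllowed property can be added to VideoObject markup like the JSON-LD example earlier on this page (the values here are illustrative):

```json
{
  "@context": "http://schema.org",
  "@type": "VideoObject",
  "name": "Schnitzel Stories",
  "description": "How to make fantastic schnitzel in just one hour",
  "thumbnailUrl": "https://example.com/imgs/schnitzel-small.jpg",
  "uploadDate": "2015-02-05T08:00:00+08:00",
  "contentUrl": "https://streamserver.example.com/schnitzel.mp4",
  "regionsAllowed": ["CA", "MX"]
}
```

With this markup, the video search result would be shown only in Canada and Mexico, mirroring the video sitemap example below.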

Restrict by country using mRSS

Videos in mRSS feeds can specify country restrictions by using the media:restriction tag with the required type attribute set to country. The media:restriction tag also requires a relationship attribute set to allow or deny, and accepts a space-delimited list of ISO 3166 country codes.

In this example mRSS entry, the video will appear everywhere except for the United States and Canada.

  <item xmlns:media="http://search.yahoo.com/mrss/" xmlns:dcterms="http://purl.org/dc/terms/">
    <link>http://www.example.com/examples/mrss/example.html</link>
    <media:content url="http://www.example.com/examples/mrss/example.mp4"
                   fileSize="405321" type="video/x-flv" height="240"
                   width="320" duration="120" medium="video"
                   isDefault="true">
      <media:title>Grilling Steaks for Summer</media:title>
      <media:description>
          Get perfectly done steaks every time
      </media:description>
      <media:thumbnail
          url="http://www.example.com/examples/mrss/example.png"
          height="120" width="160"/>
    </media:content>
    <media:restriction relationship="deny" type="country">us ca</media:restriction>
  </item>

Read more about using mRSS feeds for Google Video Search or about the media:restriction tag in the mRSS specification.

Which URL is which?

There are several URLs that can be associated with a video file on the page. Here is a summary of most of them:

Diagram of URLs in a page

1. The URL of the page hosting the video.
   Tags:
     • <loc> (video sitemap tag)
   Example:

   <loc>https://example.com/news/worlds-biggest-cat.html</loc>

2. The URL of the custom player. This is often the src value for an <iframe> or <embed> tag on the page.
   Tags:
     • VideoObject.embedUrl (structured data)
     • <video:player_loc> (video sitemap tag)
     • <iframe src="..."> (HTML tag)
   Example:

   <video:player_loc>https://archive.example.org/cats/1234</video:player_loc>

3. The URL of the actual content bytes, either on the local site or on a streaming service.
   Tags:
     • <video src="..."> (HTML tag)
     • <embed src="..."> (HTML tag)
     • <video:content_loc> (video sitemap tag)
     • VideoObject.contentUrl (structured data)
   Example:

   <video src="videos.example.com/cats/1234.mp4">

When including structured data, a video sitemap, or a sitemap alternate, you should point to the embedded player or file bytes, as appropriate for the field.

Block a video from Google Search results

If you want to hide a video from Google Search results, there are a few methods to do so:

  • Put up some kind of login screen for the host page and video file.
  • Add a country restriction in a video sitemap for the video, and specify an empty allow list:
    <video:restriction relationship="allow"></video:restriction>
  • Use robots.txt to block the source video and/or the host page. If the video and host page are the same site, block the source file URL (the contentUrl address) and the host page URL. If the video is hosted on a different CDN, just block the host/player page.
  • Return a noindex X-Robots-Tag HTTP response header for the host page and the video file (if the file is hosted on your site).

Note that none of these methods prevent another page from linking to your video or page.
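For example, the robots.txt option above could be sketched as follows, assuming a hypothetical host page and a same-site source file:

```
# robots.txt -- block crawling of a hypothetical video page and its source file
User-agent: Googlebot
Disallow: /videos/private-video.html
Disallow: /media/private-video.mp4
```

For the noindex option, the server can instead send an `X-Robots-Tag: noindex` HTTP response header for both URLs.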

Common video indexing mistakes

These are some of the most common video indexing mistakes we have seen and how we suggest you resolve them to increase the likelihood that your videos will be shown in search results. You should also take a look at our Webmaster Guidelines.

Blocking resources with robots.txt

A common practice is to use robots.txt to prevent search engines from crawling JavaScript, video, and image files. In order for Google to index a video, we must be able to see the thumbnail specified in your structured data or sitemap, the page the video is on, the video itself, and any JavaScript or other resources needed to load the video. Make sure that your robots.txt rules do not block any of these video-related resources.

If you are using video sitemaps or mRSS, make sure that Google can access any sitemap or mRSS feed that you submit. If these are blocked by robots.txt, we will not be able to read them.

Learn more about robots.txt.

Low-quality thumbnail images

We accept thumbnails of any image format, but find that .png and .jpg images work best. Images must be at least 160×90 pixels, and no more than 1920×1080 pixels.

Duplicate thumbnails, titles, or descriptions

Using the same thumbnail, title, or description for different videos can affect video indexing and can be confusing to users. Make sure that the data for each video is unique. For episodic content, a common problem is multiple videos with the same title-screen thumbnail.

Setting an expiration date in the past

When Google sees a video with an expiration date in the past, we will not include the video in any search results. This includes expiration dates from sitemaps, on-page structured data, and the meta expiration tag in the site header. While this is useful if your video is truly no longer available after the expiration date, it’s easy to accidentally set the date to the past for an available video. Make sure that your expiration dates are correct for each video. If a video does not expire, do not include any expiration information.

Listing removed videos

When an embedded video has been removed from a page, some sites use a Flash player to tell users that the video is no longer available. This can be problematic for search engines, and therefore, we recommend the following options:

  • Return a 404 (Not found) HTTP status code for any landing page that contains a removed or expired video. In addition to the 404 response code, you can still return the HTML of the page to make this transparent to most users.
  • Indicate expiration dates in on-page structured data, video sitemaps (use the <video:expiration_date> element), or mRSS feeds (<dcterms:valid> tag) submitted to Google.

Complex JavaScript and URL fragments

When designing your site, it’s important to configure your video pages without any overly complex JavaScript. If you use overly complicated JavaScript to embed video objects only under certain circumstances, it’s possible that we will not correctly index your videos. URLs for content or landing pages that require ‘hash marks’ or fragment identifiers are not supported. Also, using Flash on the page can prevent efficient indexing. For best results, show your video title and description in plain HTML markup rather than using Flash.

If you are using on-page structured data, the structured data should be present without running Flash or other embedded players.

Small, hidden, or difficult to find videos

Make sure that your videos are visible and easy to find on your video pages. Google suggests using a standalone page for each video with a descriptive title or description unique to each individual video. Videos should be prominent on the page and should not be hidden or difficult to find.

Rich media file best practices

What types of files can Google index?

Google can index most types of pages and files. Here are a few details about some specific rich media types:

General best practices

If you do plan to use rich media on your site, here are some recommendations that can help prevent problems.

  • Try to use rich media only where it is needed. We recommend that you use HTML for content and navigation.
  • Provide text versions of pages. If you use a non-HTML splash screen on the home page, make sure to include a regular HTML link on that front page to a text-based page where a user (or Googlebot) can navigate throughout your site without the need for rich media.

In general, search engines are text based. This means that in order to be crawled and indexed, your content needs to be in text format. (Google can now index text content contained in Flash files, but other search engines may not.)

This doesn’t mean that you can’t include rich media content such as Flash, Silverlight, or videos on your site; it just means that any content you embed in these files should also be available in text format or it might not be accessible to all search engines. The examples below focus on the most common types of non-text content, but the guidelines are similar for any other types: Provide text equivalents for all non-text files. (Also note that Flash no longer works on most mobile browsers.)

This will not only increase Googlebot’s ability to successfully crawl and index your content; it will also make your content more accessible. Many people, for example users with visual impairments, who use screen readers, or have low bandwidth connections, cannot see images on web pages, and providing text equivalents widens your audience.

Video

See video best practices.

IFrames

IFrames are sometimes used to display content on web pages. Content displayed via iFrames may not be indexed and available to appear in Google’s search results. We recommend that you avoid the use of iFrames to display content. If you do include iFrames, make sure to provide additional text-based links to the content they display, so that Googlebot can crawl and index this content.

Flash

Googlebot can index almost any text a user can see as they interact with any Flash SWF file on your site, and can use that text to generate a snippet or match query terms in Google searches. Additionally, Googlebot can also discover URLs in SWF files (for example, links to other pages on your site) and follow those links.

We’ll crawl and index this content in the same way that we crawl and index other content on your site—you don’t need to take any special action. However, we don’t guarantee that we’ll crawl or index all the content, Flash or otherwise.

When a SWF file loads content from some other file—whether it’s text, HTML, XML, another SWF, etc.—Google can index this external content too, and associate it with the parent SWF file and any documents that embed it.

We’re continually working to improve our indexing of Flash files, but there are some limitations. For example, we’re currently unable to index the bidirectional language content (for example, Hebrew or Arabic) in Flash files.

Note that while Google can index the content of Flash files, other search engines may not be able to. Therefore, we recommend that you use rich-media technologies like Flash primarily for decorative purposes, and instead use HTML for content and navigation. This makes your site more crawler-friendly, and also makes it accessible to a larger audience including, for example, readers with visual impairments that require the use of screen readers, users of old or non-standard browsers, and users with limited or low-bandwidth connections such as a cellphone or mobile device. An added bonus? Using HTML for navigation will allow users to bookmark content and send direct links in email.

You could also consider using sIFR (Scalable Inman Flash Replacement). sIFR (an open-source project) lets webmasters replace text elements with Flash equivalents. Using this technique, content and navigation is displayed by an embedded Flash object but, because the content is contained in the HTML source, it can be read by non-Flash users (including search engines).

International

Managing multi-regional and multilingual sites

If your site offers different content to users in different languages, countries, or regions, you can optimize Google Search results for your site.

Background:

  • A multilingual website is any website that offers content in more than one language. For example, a Canadian business with English and French versions of its site. Google Search tries to find pages that match the language of the searcher.
  • A multi-regional website is one that explicitly targets users in different countries. For example, a product manufacturer that ships to both Canada and the United States. Google Search tries to find the right locale page for the searcher.

Some sites are both multi-regional and multilingual: for example, a site might have different versions for the USA and for Canada, and both French and English versions of the Canadian content.

Managing multilingual versions of your site

If you have identical content in multiple languages on your site, here are some tips for helping users (and Google Search) find the right page:

Use different URLs for different language versions

Google recommends using different URLs for each language version of a page rather than using cookies or browser settings to adjust the content language on the page.

If you use different URLs for different languages, use hreflang annotations to help Google search results link to the correct language version of a page.

If you prefer to dynamically change content or reroute the user based on language settings, be aware that Google might not find and crawl all your variations. This is because the Googlebot crawler usually originates from the USA. In addition, the crawler sends HTTP requests without setting Accept-Language in the request header.

Tell Google about your different language versions

Google supports several different methods for labeling language or region variants of a page, including hreflang annotations and sitemaps. Mark your pages appropriately.

Make sure the page language is obvious

Google uses the visible content of your page to determine its language. We don’t use any code-level language information such as lang attributes, or the URL. You can help Google determine the language correctly by using a single language for content and navigation on each page, and by avoiding side-by-side translations.

Translating only the boilerplate text of your pages while keeping the bulk of your content in a single language (as often happens on pages featuring user-generated content) can create a bad user experience if the same content appears multiple times in search results with various boilerplate languages.

Use robots.txt to block search engines from crawling automatically translated pages on your site. Automated translations don’t always make sense and could be viewed as spam. More importantly, a poor or artificial-sounding translation can harm your site’s perception.
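For example, if automatically translated pages live under a hypothetical /auto-translated/ directory, a robots.txt rule to keep them out of crawling might look like:

```
# Block all crawlers from the hypothetical auto-translated section
User-agent: *
Disallow: /auto-translated/
```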

Let the user switch the page language

If you have multiple versions of a page:

  • Consider adding hyperlinks to other language versions of a page. That way users can click to choose a different language version of the page.
  • Avoid automatic redirection based on the user’s perceived language. These redirections could prevent users (and search engines) from viewing all the versions of your site.

Use language-specific URLs

It’s fine to use localized words in the URL, or to use an Internationalized Domain Name (IDN). However, be sure to use UTF-8 encoding in the URL (in fact, we recommend using UTF-8 wherever possible) and remember to escape the URLs properly when linking to them.
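For example, a German localized URL containing “ü” would be percent-encoded from its UTF-8 bytes (0xC3 0xBC) when linked; the URL here is hypothetical:

```
https://example.com/de/über-uns
→ https://example.com/de/%C3%BCber-uns
```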

Targeting site content to a specific country (geotargeting)

You can target your website or parts of it to users in a single specific country speaking a specific language. This can improve your page rankings in the target country, but at the expense of results in other locales/languages.

To geotarget your site on Google:

  • Page or site level: Use locale-specific URLs for your site or page.
  • Page level: Use hreflang or sitemaps to tell Google which pages apply to which locations or languages.
  • Site level: If your site has a generic top-level domain (for example, .com, .org, or .eu), specify your site’s target locale using the International Targeting report. Don’t use this tool if your site targets more than a single country. For example, it would make sense to set the target as Canada for a site about restaurants in Montreal; it would not make sense to set the target as Canada if the site also targets French speakers in France, Canada, and Mali.

Remember that geotargeting isn’t an exact science, so it’s important to consider users who land on the “wrong” version of your site. One way to do this could be to show links on all pages for users to select their region and/or language of choice.

Using locale-specific URLs

Consider using a URL structure that makes it easy to geotarget your site, or parts of it, to different regions. The following table describes your options:

Country-specific domain (example.de)
  Pros:
    • Clear geotargeting
    • Server location irrelevant
    • Easy separation of sites
  Cons:
    • Expensive (can have limited availability)
    • Requires more infrastructure
    • Strict ccTLD requirements (sometimes)

Subdomains with gTLD (de.example.com)
  Pros:
    • Easy to set up
    • Can use Search Console geotargeting
    • Allows different server locations
    • Easy separation of sites
  Cons:
    • Users might not recognize geotargeting from the URL alone (is “de” the language or country?)

Subdirectories with gTLD (example.com/de/)
  Pros:
    • Easy to set up
    • Can use Search Console geotargeting
    • Low maintenance (same host)
  Cons:
    • Users might not recognize geotargeting from the URL alone
    • Single server location
    • Separation of sites harder

URL parameters (site.com?loc=de)
  Not recommended.
  Cons:
    • URL-based segmentation difficult
    • Users might not recognize geotargeting from the URL alone
    • Geotargeting in Search Console is not possible

How does Google determine a target locale?

Google relies on a number of signals to determine the best target audience for a page:

  • A target locale specified using Search Console’s International Targeting report. If you use a generic top-level domain (gTLD) and use a hosting provider in another country, we recommend using Search Console to tell us which country your site should be associated with (if you want to geotarget your site).
  • Country-code top-level domain names (ccTLDs). These are tied to a specific country (for example .de for Germany, .cn for China), and therefore are a strong signal to both users and search engines that your site is explicitly intended for a certain country. (Some countries have restrictions on who can use ccTLDs, so be sure to do your research first.) We also treat some vanity ccTLDs (such as .tv, .me, etc.) as gTLDs, as we’ve found that users and webmasters frequently see these as being more generic than country-targeted (we don’t have a complete list of such vanity ccTLDs that we treat as gTLDs, because such a list would change over time). See Google’s list of gTLDs.
  • hreflang statements, whether in tags, headers, or sitemaps.
  • Server location (through the IP address of the server). The server location is often physically near your users and can be a signal about your site’s intended audience. Some websites use distributed content delivery networks (CDNs) or are hosted in a country with better webserver infrastructure, so it is not a definitive signal.
  • Other signals. Other sources of clues as to the intended audience of your site can include local addresses and phone numbers on the pages, the use of local language and currency, links from other local sites, and/or the use of Google My Business (where available).

What Google doesn’t do:

  • Google crawls the web from different locations around the world. We do not attempt to vary the crawler source used for a single site in order to find any possible variations in a page. Therefore, any locale or language variations that your site exposes should be communicated to Google explicitly through the methods shown here (such as hreflang entries, ccTLDs, or explicit links).
  • Google ignores locational meta tags (like geo.position or distribution) or geotargeting HTML attributes.

Handling duplicate pages with multilingual/multi-regional sites

If you provide similar or duplicate content on different URLs in the same language as part of a multi-regional site (for instance, if both example.de/ and example.com/de/ show similar German language content), you should pick a preferred version and use the rel=canonical element and hreflang tags to make sure that the correct language or regional URL is served to searchers.
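As a sketch, if example.de/ is chosen as the preferred German version, the duplicate example.com/de/ page could point its canonical at it, while hreflang annotations on the preferred page link the language variants (the URLs are illustrative):

```html
<!-- On http://example.com/de/ (the duplicate German page) -->
<link rel="canonical" href="http://example.de/" />

<!-- On http://example.de/ (the preferred German page) -->
<link rel="alternate" hreflang="de" href="http://example.de/" />
<link rel="alternate" hreflang="en" href="http://example.com/en/" />
```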

 

Generic top-level domains

Generic top-level domains (gTLDs) are domains that aren’t associated with specific locations. If your site has a generic top-level domain such as .com, .org, or any of the domains listed below, and wants to target users in a particular geographic location, you should explicitly set a country target using one of the methods described previously.

Google treats the following as gTLDs that can be geotargeted in Search Console:

  • Generic Top Level Domains (gTLDs): Unless a top level domain is registered as a country code top level domain (ccTLD) with ICANN, Google will treat any TLD that resolves through the IANA DNS root zone as a gTLD. Examples:
    • .com
    • .org
    • .edu
    • .gov
    • and many more…
  • Generic regional top-level domains: Although these domains are associated with a geographical region, they are generally treated as generic top-level domains (much like .com or .org):
    • .eu
    • .asia
  • Generic Country Code Top Level Domains (ccTLDs): Google treats some ccTLDs (such as .tv, .me, etc.) as gTLDs, as we’ve found that users and webmasters frequently see these as more generic than country-targeted. Here is a list of those ccTLDs (this list may change).
    • .ad
    • .as
    • .bz
    • .cc
    • .cd
    • .co
    • .dj
    • .fm
    • .io
    • .la
    • .me
    • .ms
    • .nu
    • .sc
    • .sr
    • .su
    • .tv
    • .tk
    • .ws

Tell Google about localized versions of your page

Use hreflang or sitemaps for language- or region-specific pages

If you have multiple versions of a page for different languages or regions, tell Google about these different variations. Doing so will help Google Search point users to the most appropriate version of your page by language or region.

Note that even without taking action, Google might still find alternate language versions of your page, but it is usually best for you to explicitly indicate your language- or region-specific pages.

Some example scenarios where indicating alternate pages is recommended:

  • If you keep the main content in a single language and translate only the template, such as the navigation and footer. Pages that feature user-generated content, like forums, typically do this.
  • If your content has small regional variations with similar content, in a single language. For example, you might have English-language content targeted to the US, GB, and Ireland.
  • If your site content is fully translated into multiple languages. For example, you have both German and English versions of each page.

Localized versions of a page are only considered duplicates if the main content of the page remains untranslated.

Methods for indicating your alternate pages

There are three ways to indicate multiple language/locale versions of a page to Google:

HTML tags

Add <link rel="alternate" hreflang="lang_code"... > elements to your page header to tell Google all of the language and region variants of a page. This is useful if you don’t have a sitemap or the ability to specify HTTP response headers for your site.

Each variation of the page should include a set of <link> elements in the <head> element, one link for each page variant including itself. The set of links is identical for every version of the page. See the additional guidelines.

Here is the syntax of each link element:

<link rel="alternate" hreflang="lang_code" href="url_of_page" />

lang_code
A supported language/region code targeted by this version of the page, or x-default to match any language not explicitly listed by an hreflang tag on the page.
url_of_page
The fully-qualified URL for the version of this page for the specified language/region.

Example

Example Widgets, Inc has a website that serves users in the USA, Great Britain, and Germany. The following URLs contain substantially the same content, but with regional variations:

  • http://en.example.com/page.html – Generic English language homepage that contains information about fees for shipping internationally from the USA.
  • http://en-gb.example.com/page.html – UK homepage that displays prices in pounds sterling.
  • http://en-us.example.com/page.html – US homepage that displays prices in US dollars.
  • http://de.example.com/page.html – German language homepage.
  • http://www.example.com/ – Default page that doesn’t target any language or locale; has selectors to let users pick their language and region.

Note that the language-specific subdomains in these URLs (en, en-gb, en-us, de) are not used by Google to determine the target audience for the page; you must explicitly map the target audience.

Here is the HTML that should be pasted into the <head> section of all the pages listed above. It would direct US, UK, generic English speakers, and German speakers to localized pages, and all others to a generic homepage. Google Search returns the appropriate result for the user, according to their browser settings.

<head>
  <title>Widgets, Inc</title>
  <link rel="alternate" hreflang="en-gb"
        href="http://en-gb.example.com/page.html" />
  <link rel="alternate" hreflang="en-us"
        href="http://en-us.example.com/page.html" />
  <link rel="alternate" hreflang="en"
        href="http://en.example.com/page.html" />
  <link rel="alternate" hreflang="de"
        href="http://de.example.com/page.html" />
  <link rel="alternate" hreflang="x-default"
        href="http://www.example.com/" />
</head>

 

HTTP Headers

You can return an HTTP header with your page’s GET response to tell Google about all of the language and region variants of a page. This is useful for non-HTML files (like PDFs).

Here is the format of the header:

Link: <url1>; rel="alternate"; hreflang="lang_code_1", <url2>; rel="alternate"; hreflang="lang_code_2", ...

<url_x>
The fully-qualified URL of the alternate page corresponding to the locale string assigned to the associated hreflang attribute. The URL must include surrounding < > marks. Example: <https://www.google.com>
lang_code_x
The supported language/region code targeted by this version of the page, or x-default to match any language not explicitly listed by an hreflang tag on the page.

You must specify a set of <url>, rel="alternate", and hreflang values for every version of the page including the requested version, separated by a comma as shown in the example below. The Link: header returned for every version of a page is identical. See the additional guidelines.
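Assembling this comma-separated header value can be sketched as follows; the helper name and sample URLs are our own, not part of any Google API:

```python
def hreflang_link_header(variants):
    """Build the Link: header value: one <url>; rel="alternate";
    hreflang="code" entry per variant, separated by commas.
    `variants` maps hreflang codes to fully-qualified URLs."""
    return ", ".join(
        f'<{url}>; rel="alternate"; hreflang="{code}"'
        for code, url in variants.items()
    )
```

The same header value is then attached to the GET response of every variant, so each version announces the full set.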

Example

Here is an example Link: header returned by a site that has three versions of a PDF file: one for English speakers, one for German speakers from Switzerland, and one for all other German speakers:

Link: <http://example.com/file.pdf>; rel="alternate"; hreflang="en",
      <http://de-ch.example.com/file.pdf>; rel="alternate"; hreflang="de-ch",
      <http://de.example.com/file.pdf>; rel="alternate"; hreflang="de"

 

Sitemap

You can use a Sitemap to tell Google all of the language and region variants for each URL. To do so, add a <loc> element specifying a single URL, with child <xhtml:link> entries listing every language/locale variant of the page including itself. Therefore if you have 3 versions of a page, your sitemap will have 3 entries, each with 3 identical child entries.

Sitemap rules:

  • Specify the xhtml namespace as follows:
    xmlns:xhtml="http://www.w3.org/1999/xhtml"
  • Create a separate <url> element for each URL.
  • Each <url> element must include a <loc> child indicating the page URL.
  • Each <url> element must have a child element <xhtml:link rel="alternate" hreflang="supported_language-code"> that lists every alternate version of the page, including itself.  The order of these child <xhtml:link> elements doesn’t matter, though you might want to keep them in the same order to make them easier for you to check for mistakes.
  • See the additional guidelines.

Example

Here is an English language page targeted at English speakers worldwide, with equivalent versions of this page targeted at German speakers worldwide and German speakers located in Switzerland. Here are all the URLs present on your site:

  • www.example.com/english/page.html, targeted at English speakers.
  • www.example.com/deutsch/page.html, targeted at German speakers.
  • www.example.com/schweiz-deutsch/page.html, targeted at German speakers in Switzerland.

Here is the sitemap for those three pages:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
  xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://www.example.com/english/page.html</loc>
    <xhtml:link 
               rel="alternate"
               hreflang="de"
               href="http://www.example.com/deutsch/page.html"/>
    <xhtml:link 
               rel="alternate"
               hreflang="de-ch"
               href="http://www.example.com/schweiz-deutsch/page.html"/>
    <xhtml:link 
               rel="alternate"
               hreflang="en"
               href="http://www.example.com/english/page.html"/>
  </url>
  <url>
    <loc>http://www.example.com/deutsch/page.html</loc>
    <xhtml:link 
               rel="alternate"
               hreflang="de"
               href="http://www.example.com/deutsch/page.html"/>
    <xhtml:link 
               rel="alternate"
               hreflang="de-ch"
               href="http://www.example.com/schweiz-deutsch/page.html"/>
    <xhtml:link 
               rel="alternate"
               hreflang="en"
               href="http://www.example.com/english/page.html"/>
  </url>
  <url>
    <loc>http://www.example.com/schweiz-deutsch/page.html</loc>
    <xhtml:link 
               rel="alternate"
               hreflang="de"
               href="http://www.example.com/deutsch/page.html"/>
    <xhtml:link 
               rel="alternate"
               hreflang="de-ch"
               href="http://www.example.com/schweiz-deutsch/page.html"/>
    <xhtml:link 
               rel="alternate"
               hreflang="en"
               href="http://www.example.com/english/page.html"/>
  </url>
</urlset>
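Since each <url> entry repeats the full alternate set, a sitemap like the one above is straightforward to generate programmatically. A minimal sketch (the helper name is our own assumption):

```python
def hreflang_sitemap(variants):
    """Emit a sitemap in which every <url> entry repeats the full set
    of <xhtml:link> alternates, including the page itself.
    `variants` maps hreflang codes to fully-qualified URLs."""
    links = "".join(
        f'    <xhtml:link rel="alternate" hreflang="{code}" href="{url}"/>\n'
        for code, url in variants.items()
    )
    urls = "".join(
        f"  <url>\n    <loc>{url}</loc>\n{links}  </url>\n"
        for url in variants.values()
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"\n'
        '  xmlns:xhtml="http://www.w3.org/1999/xhtml">\n'
        f"{urls}</urlset>\n"
    )
```

With three variants this yields three <url> entries, each carrying three identical <xhtml:link> children, matching the example above.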

Guidelines for all methods

  • Each language version must list itself as well as all other language versions.
  • Alternate URLs must be fully-qualified, including the transport method (http/https), so:
    https://example.com/foo, not //example.com/foo or /foo
  • Alternate URLs do not need to be in the same domain.
  • If you have several alternate URLs targeted at users with the same language but in different locales, it’s a good idea also to provide a catchall URL for geographically unspecified users of that language. For example, you may have specific URLs for English speakers in Ireland (en-ie), Canada (en-ca), and Australia (en-au), but should also provide a generic English (en) page for searchers in, say, the US, UK, and all other English-speaking locations. It can be one of the specific pages, if you choose.
  • If two pages don’t both point to each other, the tags will be ignored. This is so that someone on another site can’t arbitrarily create a tag naming itself as an alternative version of one of your pages.
  • If it becomes difficult to maintain a complete set of bidirectional links for every language, you can omit some languages on some pages; Google will still process the ones that point to each other. However, it is important to link newly expanded language pages bidirectionally to the originating/dominant language(s). For example, if your site was originally created in French with URLs on .fr, it’s more important to bidirectionally link newer Mexican (.mx) and Spanish (.es) pages to your strong .fr presence, rather than to bidirectionally link your new Spanish language variant pages (.mx and .es) to each other.
  • Consider adding a fallback page for unmatched languages, especially on language/country selectors or auto-redirecting homepages. Use the x-default value:
    <link rel="alternate" href="http://example.com/" hreflang="x-default" />
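The bidirectional-link requirement in the guidelines above lends itself to an automated check. Here is a minimal sketch; the data shape is our own assumption, not a Google API:

```python
def missing_return_links(annotations):
    """`annotations` maps each page URL to the set of alternate URLs
    it declares via hreflang. Returns (x, y) pairs where page x lists
    page y but y does not list x back; such pairs will be ignored."""
    missing = []
    for page, alternates in annotations.items():
        for alt in alternates:
            if alt != page and page not in annotations.get(alt, set()):
                missing.append((page, alt))
    return missing
```

Running a check like this after each deployment helps catch the "missing return links" mistake described in the Troubleshooting section.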

Supported language/region codes

The value of the hreflang attribute identifies the language (in ISO 639-1 format) and optionally a region (in ISO 3166-1 Alpha 2 format) of an alternate URL. (The language need not be related to the region.) For example:

  • de: German language content, independent of region
  • en-GB: English language content, for GB users
  • de-ES: German language content, for users in Spain

For language script variations, the proper script is derived from the country. For example, when using zh-TW for users in Taiwan, the language script is automatically derived (in this example: Chinese-Traditional). You can also specify the script itself explicitly using ISO 15924, like this:

  • zh-Hant: Chinese (Traditional)
  • zh-Hans: Chinese (Simplified)

Alternatively, you can also specify a combination of script and region—for example, use zh-Hans-TW to specify Chinese (Simplified) for Taiwanese users.

Use the x-default tag for unmatched languages

The reserved value hreflang="x-default" is used when no other language/region matches the user’s browser setting. This value is optional, but recommended, as a way for you to control the page when no languages match. A good use is to target your site’s homepage where there is a clickable map that enables the user to select their country.

Troubleshooting

Common Mistakes

Here are the most common mistakes with hreflang usage:

  • Missing return links: If page X links to page Y, page Y must link back to page X. If this is not the case for all pages that use hreflang annotations, those annotations may be ignored or not interpreted correctly.
  • Incorrect language codes: Make sure that all language codes you use identify the language (in ISO 639-1 format) and optionally the region (in ISO 3166-1 Alpha 2 format) of an alternate URL. Specifying the region alone is not valid.
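A quick structural check for hreflang values can catch the region-only mistake. This sketch validates the shape only (language, optional script, optional region, or x-default); it does not verify membership in the actual ISO code lists:

```python
import re

# Shape: ISO 639-1 language, optional ISO 15924 script (4 letters),
# optional ISO 3166-1 Alpha 2 region, or the reserved x-default value.
HREFLANG_RE = re.compile(
    r"^(x-default|[a-z]{2}(-[A-Za-z]{4})?(-[A-Za-z]{2})?)$"
)

def is_valid_hreflang(code):
    """Structural check only; a region alone (e.g. 'GB') is rejected."""
    return bool(HREFLANG_RE.match(code))
```

A full validator would additionally check each part against the ISO 639-1, ISO 15924, and ISO 3166-1 registries.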

Debugging hreflang errors

You can use the International Targeting report to debug the most common problems. Make sure that Google has had time to crawl your pages, then visit the Language tab on the report to see if any errors were detected.

There are also many third-party hreflang checking tools available. (These tools are not maintained or checked by Google.)

How Google crawls locale-adaptive pages

If your site has locale-adaptive pages (that is, your site returns different content based on the perceived country or preferred language of the visitor), Google might not crawl, index, or rank all your content for different locales. This is because the default IP addresses of the Googlebot crawler appear to be based in the USA. In addition, the crawler sends HTTP requests without setting Accept-Language in the request header.

IMPORTANT: We recommend using separate locale URL configurations and annotating them with rel=alternate hreflang annotations.

 

Geo-distributed crawling

Googlebot crawls with IP addresses based outside the USA, in addition to the US-based IP addresses.

As we have always recommended, when Googlebot appears to come from a certain country, treat it like you would treat any other user from that country. This means that if you block USA-based users from accessing your content, but allow visitors from Australia to see it, your server should block a Googlebot that appears to be coming from the USA, but allow access to a Googlebot that appears to come from Australia.

Other considerations

  • Googlebot uses the same user-agent string for all crawling configurations. Learn more about the user-agent strings used by Google crawlers in our Help Center.
  • You can verify Googlebot geo-distributed crawls using reverse DNS lookups.
  • Make sure that your site applies the robots exclusion protocol consistently for every locale. This means that robots meta tags and the robots.txt file should specify the same directives in each locale.

Mobile

Mobile viewing on feature phones

Looking for transcoding on smartphones? See the Web Light section below.

Google Web Search on feature phones allows users to search all the content in the Google index for desktop web browsers. Because this content isn’t written specifically for feature phones and might not display properly on them, web search results are viewed through our transcoder, which analyzes the original HTML code and converts it to a mobile-ready format. To ensure that the highest quality and most usable web page is displayed on your mobile phone or device, Google may resize, adjust, or convert images, text formatting, and/or certain aspects of web page functionality.

If you do not want Google to transcode your web page on feature phones, you may request that Google redirect the user to an alternate page whenever the user attempts to view the page through the transcoder. You can do so by including the following line in the <HEAD> section of the HTML file for your page:

<link rel="alternate" media="handheld" href="alternate_page.htm" />

The alternate page should be a mobile-optimized version of the original page or a message informing the user that the site is not available on the phone.

Web Light: Faster and lighter mobile pages from search

Information for website owners

Google shows faster, lighter pages to people searching on slow mobile clients. To do this, we transcode (convert) web pages on the fly into a version optimized for slow clients, so that these pages load faster while saving data. This technology is called Web Light. Web Light pages preserve a majority of the relevant content and provide a link for users to view the original page. Our experiments show that optimized pages load four times faster than the original page and use 80% fewer bytes. Because these pages load so much faster, we also saw a 50% increase in traffic to these pages.

Here’s an example of a web page being loaded with and without transcoding.

See the Web Light version of a web page

You can preview a Web Light version of a non-AMP webpage on your mobile device or desktop.

Some pages are currently not transcoded, including video sites, pages that require cookies (such as personalized sites), and other websites that are technically challenging to transcode. In these cases, you will see a “not transcoded” notification when you request the transcoded page.

Compare load times

You can see a side-by-side load comparison between a Web Light page and a non-transcoded page (this test takes a few minutes).

Opting out of Web Light

If you do not want your pages to be transcoded, set the HTTP header “Cache-Control: no-transform” in your page response. If Googlebot sees this header, your page will not be transcoded.
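Sending the opt-out header from a minimal WSGI application might look like this. This is a sketch, not the only way to set the header; most web servers and frameworks can add it via configuration instead:

```python
def app(environ, start_response):
    # "no-transform" tells Googlebot (and intermediary proxies) not to
    # rewrite this response, which opts the page out of Web Light.
    start_response("200 OK", [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Cache-Control", "no-transform"),
    ])
    return [b"<html><body>Not transcoded</body></html>"]

# To serve locally:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

Equivalently, the header can be set globally in Apache (`Header set Cache-Control "no-transform"`) or the corresponding nginx `add_header` directive.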

FAQ

General

When will a user see these Web Light pages?

  • Users will see these pages only if Google has detected that they are on a slow client.

Will Google Analytics work on my page?

  • Yes. Note that we support only page view statistics to keep the pages small and fast-loading: for example, event tracking is not reported. Also we currently support only Universal Analytics (the analytics.js JavaScript library).
    Metrics for the transcoded version of a page are shown in Analytics with googleweblight.com appended to the page’s host name. So, if you have a page at example.com/mypage, metrics for the untranscoded page are shown as example.com/mypage, and metrics for the transcoded page are shown as example.com.googleweblight.com/mypage.

Will my pages be transcoded for users on faster clients?

  • Pages will not be transcoded if the user is using a fast client.

Will my pages be transcoded for users doing searches from desktop computers or tablets?

  • Pages will be transcoded only for users on mobile phones, not desktops or tablets.

What browsers are supported? 

  • Pages are currently transcoded for searches from the Chrome browser and the Android browser (version 2.3+), as well as Google Go.

Do you cache my transcoded page?

  • Pages are generally transcoded when the user requests them, from the current version of the page. Google caches the main content for up to 24 hours. Other resources such as CSS, JS and images could be cached longer.

Do you transcode just the page linked by search results, or do you transcode the whole site?

We transcode the page and any pages that the user clicks to from within that page, unless the page is non-transcodable or opted out from transcoding.

Ads and Revenue

How will this affect my ad revenue? 

  • We currently support ads from several networks and are working to include more. Our experiments show that transcoded sites get 50% more traffic than non-transcoded sites and we expect that this will help monetize your site.

What ad networks are currently supported? 

  • As of October 2018, we support Sovrn, Zedo, AdSense, and Google Publisher Tags (GPT). We are working to support Google Ad Manager and more ad networks as well.

How many ads are shown on a single page?

  • To keep the page size down and make pages load faster, we limit the number of ads shown on a single page. Currently this limit is set at 3.

I use multiple ad networks. How does Google select which ads are shown?

  • Ads are currently chosen in the order in which they are requested by the original page.

I’m an ad network and my network is not supported. How can I get added?

  • See “Ad network support for Web Light pages in Google Search” below for how to participate.

Opt-out

What happens if I opt out of Web Light?

  • If you opt out, Google will not transcode your page for users on slow devices. Please note that traffic to your site from search users on slow devices may decrease, as they would need to spend more time loading your pages.

I didn’t opt out, why is my page not transcoded?

  • Due to technical limitations, some pages cannot currently be transcoded. These pages will also be labeled as non-transcoded in search results. This includes:
    • Sites that require cookies (e.g. personalized sites or sites that require you to log in before using them)
    • Sites that use a significant amount of data (e.g. video sites)
    • Other sites that are technically difficult to transcode.

Ad network support for Web Light pages in Google Search

Information for ad network providers

Google shows faster and lighter pages to people searching on slow mobile connections. To do this, we transcode (convert) web pages on the fly into a version optimized for slow networks, so that these pages load faster while saving data. These optimized pages preserve a majority of the relevant content and provide a link for users to view the original page. We call this technology “Web Light.” Our experiments show that optimized pages load four times faster than the original page and use 80% fewer bytes. Because these pages load so much faster, we also saw a 50% increase in traffic to these pages.

Google displays ads on these transcoded pages so that publishers can continue to monetize their traffic. At the outset we support several 3rd party ad networks, including Sovrn and Zedo. We are interested in supporting ads from other networks as well. Read on if you run an ad network and want your ads to be supported on Web Light pages.

Our approach

When Google optimizes pages for users on a slow network, several steps may be required. If the original page is not mobile-friendly, we convert the content into a mobile-friendly format. We also reduce the page size so that the page loads faster than the original. We do this by removing unnecessary JavaScript and CSS, compressing images, and doing other performance optimizations.

After the content has been optimized, we place the publisher’s ads back into the Web Light page. We do this by detecting the existing ads on a publisher’s page. If the original page was mobile-friendly, we can include the ad tags directly. If the original page was not mobile-friendly, we need to modify the ad tag’s parameters to request a mobile ad from the appropriate ad network.

For more details, please see our FAQ.

Participate

If you’re an ad network and interested in being included in Web Light pages, please contact us by filling out this form. We will need the following information:

  • The format of your ad tags on a publisher’s page
    Provide us with examples of your ad tags, or with a way to identify your ad tags in the HTML of a publisher’s page.
  • Parameters to request a mobile-friendly version of your ad
    If the page that we’re optimizing is not mobile-friendly, we may need to modify the ad tag. Please provide documentation on the ad tag parameters that would need to be modified so that your servers return a mobile-friendly ad format.
  • Supported formats
    We currently display ads only in the following formats: 320×50 and 300×250.
  • Additional considerations
    To help Web Light pages load quickly, please make sure that your JavaScript bundle is not too large. It is also preferable that you support HTTPS requests.

Google Discover

Optimize your content for Discover

Google can present a summary of your page as a card shown to users in Discover, which is a scrollable list of topics that users can browse on their mobile devices. Tapping a card will send the user to the page that is the source of the Discover entry.

Discover shows users a mix of content based on their interactions with Google products or content that they choose to follow directly. And we’re not limited to what’s published today — if we think that a user would find earlier content interesting, then Discover will show it.

Discover also features videos, sports scores, entertainment updates (such as a new movie release), stock prices, event information (such as the nominees for a major award ceremony, or the lineup of an upcoming music festival), and more. Discover is a content hub for all of your interests.

Optimize your content for Discover

Discover content is algorithmically ranked by what Google thinks a user would find most interesting. Content ranking is powered by the strength of the match between an article’s content and a user’s interests, so there aren’t any methods for boosting the ranking of your pages other than posting content that you think users will find interesting.

Your pages are eligible to appear in Discover cards simply if they are indexed by Google and meet Google News content policies. No special tags or structured data are required. Google ranks Discover content algorithmically based on content quality and the strength of the match between page content and user interests.

The two best ways to boost the ranking and performance of your Discover content are (1) to post content that you think users would find interesting and (2) to use high-quality images in your content. Publishers experience a 5% increase in clickthrough rate, a 3% increase in time spent on their pages, and a 3% increase in user satisfaction when Discover cards feature large images instead of thumbnail images.

To enable large images in your Discover results:

  • Use large, high-quality images that are at least 1,200 px wide, and
  • Ensure that Google has the rights to display your high-quality images to users, either by using AMP or by filling out this form to express your interest in our opt-in program.

Resources for developing mobile-friendly pages

Users now browse websites from a mobile device as often as (or more often than) from a desktop computer. You should design your website to be viewable on mobile devices.

Mobile devices generally fall into two classes: smartphones and feature phones. Smartphones are essentially mini desktop computers with small screens, and include Android and iPhone devices (we include tablet devices in this classification for design purposes). Feature phones are less capable devices, and are handled differently by Google than smartphones.

For both types of devices:

  • SERP and any pages linked by SERP are transcoded globally on any speed network unless either:
    • The page is already “feature-phone ready”,
      or
    • The page has a <link rel="alternate" media="handheld" href="alternate_page.htm" /> tag.
  • All images in SERP and any pages linked by SERP are resized unless the user specifically opts out.
  • PageSpeed Insights

Feature phone markup

Feature phone web pages come in several markup dialects, including WML, XHTML Basic, XHTML MP, and cHTML. Your choice will vary according to your target market.

When developing, consult the relevant feature phone markup standards, validators, and emulators for your chosen dialect.

Detect and get rid of unwanted sneaky mobile redirects

Sneaky mobile redirects occur when a site redirects users on a mobile device to other content not made available to a search engine crawler. These sneaky redirects are a violation of Google Webmaster Guidelines. To ensure quality search results for our users, the Google Search Quality team can take action on such sites, including removal of URLs from our index.

In many cases, it is okay to show slightly different content on different devices. For example, optimizing for the smaller space of a smartphone screen can mean that some content, like images, need to be modified. Similarly, for mobile-only redirects, redirecting mobile users to improve their mobile experience (like redirecting mobile users from example.com/url1 to m.example.com/url1) is often beneficial to them. However, redirecting mobile users sneakily to different content is bad for the user experience.

Sneaky mobile redirects can be created intentionally by a site owner, but we’ve also seen situations where mobile-only sneaky redirects happen without the site owner knowing. The following are examples of configurations that can cause sneaky mobile redirects:

  • Adding code that creates redirection rules for mobile users
  • Using a script or element to display ads and monetize content that redirect mobile users
  • A script or element added by hackers that redirects your mobile users to malicious sites

Detecting sneaky redirects on your site

To check for sneaky mobile redirects on your site, walk through the following steps:

  1. Check if you are redirected when you navigate to your site on your smartphone. We recommend you check the mobile user experience of your site by visiting your pages from Google search results with a smartphone. When debugging, mobile emulation in desktop browsers is handy because you can test many different devices. You can, for example, view pages as a mobile device straight from your browser in Chrome, Firefox or Safari (for the latter, make sure you have enabled the “Show Develop menu in menu bar” feature).
  2. Listen to your users. Your users may see your site differently than you do. It’s always important to pay attention to user complaints, so you can hear of any issues related to the mobile user experience.
  3. Monitor your mobile users in your site’s analytics data. Unusual mobile user activity can be detected in your website’s analytics data. For example, keep an eye on the average time mobile users spend on your site: if your mobile users suddenly start spending much less time on your site than they used to, there might be an issue related to mobile redirections.

    Monitoring for any large changes in your mobile user activity can help you proactively identify sneaky mobile redirects. You can set up Google Analytics alerts that will warn you of sharp drops in average time spent on your site by mobile users or drops in mobile users. While these alerts do not necessarily mean that you have mobile sneaky redirects, it’s something worth investigating.

Instructions for removing sneaky mobile redirects

  1. Make sure that your site is not hacked. Check the Security Issues tool in Search Console to see if Google has detected any problems on your site. If your site has been hacked, review our guide for hacked sites.
  2. Audit third-party scripts/elements on your site. If your site is not hacked, we recommend you take the time to investigate whether third-party scripts or elements are causing the redirects. You can follow these steps:
    1. One by one remove any third-party scripts or elements you do not control from the redirecting page(s).
    2. Check your site on a mobile device or through emulation between each script or element removal and see if the redirect stops.
    3. If you think a particular script or element is responsible for the sneaky redirect, consider removing it from your site, and debugging the issue with the script or element provider.

Avoiding sneaky mobile redirects in the future

To diminish the risk of unknowingly redirecting your own users, be sure to choose advertisers who are transparent about how they handle user traffic. If you are interested in building trust in the online advertising space, you should research industry-wide best practices when participating in ad networks. For example, the Trustworthy Accountability Group (Interactive Advertising Bureau) Inventory Quality Guidelines are a good place to start. There are many ways to monetize your content with mobile solutions that provide a high quality user experience. Be sure to use them.

Other content considerations

Meta tags that Google understands

Meta tags are a great way for webmasters to provide search engines with information about their sites. Meta tags can be used to provide information to all sorts of clients, and each system processes only the meta tags it understands and ignores the rest. Meta tags are added to the <head> section of your HTML page and generally look like this:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="Description" CONTENT="Author: A.N. Author, Illustrator: P. Picture, Category: Books, Price:  £9.24, Length: 784 pages">
    <meta name="google-site-verification" content="+nxGUDJ4QpAZ5l9Bsjdi102tLVC21AIh5d1Nl23908vVuFHs34="/>
    <title>Example Books - high-quality used books for children</title>
    <meta name="robots" content="noindex,nofollow">
  </head>
</html>

Google understands the following meta tags (and related items):

<meta name="description" content="A description of the page" /> This tag provides a short description of the page. In some situations this description is used as a part of the snippet shown in the search results. More information
<title>The Title of the Page</title> While technically not a meta tag, this tag is often used together with the “description”. The contents of this tag are generally shown as the title in search results (and of course in the user’s browser). More information
<meta name="robots" content="..., ..." />
<meta name="googlebot" content="..., ..." />
These meta tags can control the behavior of search engine crawling and indexing. The robots meta tag applies to all search engines, while the “googlebot” meta tag is specific to Google. The default values are “index, follow” (the same as “all”) and do not need to be specified. We understand the following values (when specifying multiple values, separate them with a comma):

  • noindex – Prevents the page from being indexed.
  • nofollow – Prevents the Googlebot from following links from this page.
  • nosnippet – Prevents a text snippet or video preview from being shown in the search results. For video, a static image will be shown instead, if possible.
  • noarchive – Prevents Google from showing the Cached link for a page.
  • unavailable_after:[date] – Lets you specify the exact time and date you want to stop crawling and indexing of this page.
  • noimageindex – Lets you specify that you do not want your page to appear as the referring page for an image that appears in Google search results.
  • none – Equivalent to noindex, nofollow.

You can now also specify this information in the header of your pages using the “X-Robots-Tag” HTTP header directive. This is particularly useful if you wish to limit indexing of non-HTML files like graphics or other kinds of documents. More information about robots meta tags

<meta name="google" content="nositelinkssearchbox" /> When users search for your site, Google Search results sometimes display a search box specific to your site, along with other direct links to your site. This meta tag tells Google not to show the sitelinks search box. Learn more about sitelinks search box.
<meta name="google" content="notranslate" /> When we recognize that the contents of a page are not in the language that the user is likely to want to read, we often provide a link to a translation in the search results. In general, this gives you the chance to provide your unique and compelling content to a much larger group of users. However, there may be situations where this is not desired. This meta tag tells Google that you don’t want us to provide a translation for this page.
<meta name="google-site-verification" content="..." /> You can use this tag on the top-level page of your site to verify ownership for Search Console. Please note that while the values of the “name” and “content” attributes must match exactly what is provided to you (including upper and lower case), it doesn’t matter if you change the tag from XHTML to HTML or if the format of the tag matches the format of your page. More information
<meta http-equiv="Content-Type" content="...; charset=..." />
<meta charset="..." >
This defines the page’s content type and character set. Make sure that you surround the value of the content attribute with quotes – otherwise the charset attribute may be interpreted incorrectly. We recommend using Unicode/UTF-8 where possible. More information
<meta http-equiv="refresh" content="...;url=..." /> This meta tag sends the user to a new URL after a certain amount of time, and is sometimes used as a simple form of redirection. However, it is not supported by all browsers and can be confusing to the user. The W3C recommends that this tag not be used. We recommend using a server-side 301 redirect instead.
<meta name="viewport" content="..."> This tag tells the browser how to render a page on a mobile device. Presence of this tag indicates to Google that the page is mobile friendly. Read more about how to configure the viewport meta tag.
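As a sketch of the server-side 301 redirect recommended above (assuming an Apache server; the paths are hypothetical):

```
# .htaccess: permanent redirect, preferred over a meta refresh
Redirect 301 /old-page.html http://www.example.com/new-page.html
```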

Other points to note:

  • Google can read both HTML and XHTML-style meta tags, regardless of the code used on the page.
  • With the exception of google-site-verification, case is generally not important in meta tags.

This is not an exhaustive list of available meta tags, and you should feel free to use unlisted meta tags if they are important to your site. Just remember that Google will ignore meta tags it doesn’t recognize.

Keep a simple URL structure

A site’s URL structure should be as simple as possible. Consider organizing your content so that URLs are constructed logically and in a manner that is most intelligible to humans (when possible, readable words rather than long ID numbers). For example, if you’re searching for information about aviation, a URL like http://en.wikipedia.org/wiki/Aviation will help you decide whether to click that link. A URL like http://www.example.com/index.php?id_sezione=360&sid=3a5ebc944f41daa6f849f730f1 is much less appealing to users.

Consider using punctuation in your URLs. The URL http://www.example.com/green-dress.html is much more useful to us than http://www.example.com/greendress.html. We recommend that you use hyphens (-) instead of underscores (_) in your URLs.
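As an illustration, readable hyphen-separated URLs can be generated from page titles; this helper is a minimal sketch, not part of any Google guideline:

```python
import re

def make_slug(title):
    """Build a readable, hyphen-separated URL slug from a page title."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # runs of non-alphanumerics become one hyphen
    return slug.strip("-")

print(make_slug("Green Dress"))  # green-dress
```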

Overly complex URLs, especially those containing multiple parameters, can cause problems for crawlers by creating unnecessarily high numbers of URLs that point to identical or similar content on your site. As a result, Googlebot may consume much more bandwidth than necessary, or may be unable to completely index all the content on your site.

Common causes of this problem

Unnecessarily high numbers of URLs can be caused by a number of issues. These include:

  • Additive filtering of a set of items. Many sites provide different views of the same set of items or search results, often allowing the user to filter this set using defined criteria (for example: show me hotels on the beach). When filters can be combined in an additive manner (for example: hotels on the beach and with a fitness center), the number of URLs (views of data) on the site explodes. Creating a large number of slightly different lists of hotels is redundant, because Googlebot needs to see only a small number of lists from which it can reach the page for each hotel. For example:
    • Hotel properties at “value rates”:
      http://www.example.com/hotel-search-results.jsp?Ne=292&N=461
    • Hotel properties at “value rates” on the beach:
      http://www.example.com/hotel-search-results.jsp?Ne=292&N=461+4294967240
    • Hotel properties at “value rates” on the beach and with a fitness center:
      http://www.example.com/hotel-search-results.jsp?Ne=292&N=461+4294967240+4294967270
  • Dynamic generation of documents. This can result in small changes because of counters, timestamps, or advertisements.
  • Problematic parameters in the URL. Session IDs, for example, can create massive amounts of duplication and a greater number of URLs.
  • Sorting parameters. Some large shopping sites provide multiple ways to sort the same items, resulting in a much greater number of URLs. For example:
    http://www.example.com/results?search_type=search_videos&search_query=tpb&search_sort=relevance
       &search_category=25
  • Irrelevant parameters in the URL, such as referral parameters. For example:
    http://www.example.com/search/noheaders?click=6EE2BF1AF6A3D705D5561B7C3564D9C2&clickPage=
       OPD+Product+Page&cat=79
    http://www.example.com/discuss/showthread.php?referrerid=249406&threadid=535913
    http://www.example.com/products/products.asp?N=200063&Ne=500955&ref=foo%2Cbar&Cn=Accessories.
  • Calendar issues. A dynamically generated calendar might generate links to future and previous dates with no restrictions on start or end dates. For example:
    http://www.example.com/calendar.php?d=13&m=8&y=2011
    http://www.example.com/calendar/cgi?2008&month=jan
  • Broken relative links. Broken relative links can often cause infinite spaces. Frequently, this problem arises because of repeated path elements. For example:
    http://www.example.com/index.shtml/discuss/category/school/061121/html/interview/
      category/health/070223/html/category/business/070302/html/category/community/070413/html/FAQ.htm

Steps to resolve this problem

To avoid potential problems with URL structure, we recommend the following:

  • Consider using a robots.txt file to block Googlebot’s access to problematic URLs. Typically, you should consider blocking dynamic URLs, such as URLs that generate search results, or URLs that can create infinite spaces, such as calendars. Using wildcard patterns (such as * and $) in your robots.txt file can allow you to easily block large numbers of URLs.
  • Wherever possible, avoid the use of session IDs in URLs. Consider using cookies instead. Check our Webmaster Guidelines for additional information.
  • Whenever possible, shorten URLs by trimming unnecessary parameters.
  • If your site has an infinite calendar, add a nofollow attribute to links to dynamically created future calendar pages.
  • Check your site for broken relative links.
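The parameter-trimming step above can be sketched as a small canonicalization pass; the set of parameters to drop is a hypothetical example, not a Google-defined list:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Hypothetical query parameters that do not change the page content.
IRRELEVANT_PARAMS = {"sessionid", "sid", "ref", "referrerid", "click"}

def trim_url(url):
    """Drop known-irrelevant query parameters, keeping the rest in order."""
    parts = urlsplit(url)
    kept = [(key, value)
            for key, value in parse_qsl(parts.query, keep_blank_values=True)
            if key.lower() not in IRRELEVANT_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(trim_url("http://www.example.com/discuss/showthread.php?referrerid=249406&threadid=535913"))
# http://www.example.com/discuss/showthread.php?threadid=535913
```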

Use rel="nofollow" for specific links

“Nofollow” provides a way for webmasters to tell search engines “Don’t follow links on this page” or “Don’t follow this specific link.”

Originally, the nofollow attribute appeared in the page-level meta tag, and instructed search engines not to follow (i.e., crawl) any outgoing links on the page. For example:

 <meta name="robots" content="nofollow" />

Before nofollow was used on individual links, preventing robots from following individual links on a page required a great deal of effort (for example, redirecting the link to a URL blocked in robots.txt). That’s why the nofollow attribute value of the rel attribute was created. This gives webmasters more granular control: instead of telling search engines and bots not to follow any links on the page, it lets you easily instruct robots not to crawl a specific link. For example:

 <a href="signin.php" rel="nofollow">sign in</a>

How does Google handle nofollowed links?

In general, we don’t follow them. This means that Google does not transfer PageRank or anchor text across these links. Essentially, using nofollow causes us to drop the target links from our overall graph of the web. However, the target pages may still appear in our index if other sites link to them without using nofollow, or if the URLs are submitted to Google in a Sitemap. Also, it’s important to note that other search engines may handle nofollow in slightly different ways.

What are Google’s policies and some specific examples of nofollow usage?

Here are some cases in which you might want to consider using nofollow:

  • Untrusted content: If you can’t or don’t want to vouch for the content of pages you link to from your site — for example, untrusted user comments or guestbook entries — you should nofollow those links. This can discourage spammers from targeting your site, and will help keep your site from inadvertently passing PageRank to bad neighborhoods on the web. In particular, comment spammers may decide not to target a specific content management system or blog service if they can see that untrusted links in that service are nofollowed. If you want to recognize and reward trustworthy contributors, you could decide to automatically or manually remove the nofollow attribute on links posted by members or users who have consistently made high-quality contributions over time.
  • Paid links: A site’s ranking in Google search results is partly based on analysis of those sites that link to it. In order to prevent paid links from influencing search results and negatively impacting users, we urge webmasters to use nofollow on such links. Search engine guidelines require machine-readable disclosure of paid links in the same way that consumers online and offline appreciate disclosure of paid relationships (for example, a full-page newspaper ad may be headed by the word “Advertisement”). More information on Google’s stance on paid links.
  • Crawl prioritization: Search engine robots can’t sign in or register as a member on your forum, so there’s no reason to invite Googlebot to follow “register here” or “sign in” links. Using nofollow on these links enables Googlebot to crawl other pages you’d prefer to see in Google’s index. However, a solid information architecture — intuitive navigation, user- and search-engine-friendly URLs, and so on — is likely to be a far more productive use of resources than focusing on crawl prioritization via nofollowed links.

Tag site for child-directed treatment

Visit the Tag for Child Directed Treatment page to tag a site or service that you would like Google to treat as child-directed in whole or in part for the purposes of the Children’s Online Privacy Protection Act (COPPA). If you have not already added a site to Search Console, you must first add the site and verify ownership.

Keep in mind the following:

  • You can tag an entire domain or portions of a domain (subdomain or subdirectory) for treatment as child-directed.
  • Any pages beneath a domain or directory are also covered by the tag.
  • It may take some time for this designation to take effect in applicable Google services.
  • Google may limit the number of domains or sub-domains you may include at any time.

For finer control over how your content is treated, you can also tag individual ad units for treatment as child-directed. See the help center for your product to learn more about proper tagging.

Browser compatibility

Users typically view your website using a browser. Each browser interprets your website code in a slightly different manner, which means that it may appear differently to visitors using different browsers. In general, you should avoid relying on browser-specific behavior, such as expecting a browser to correctly detect a content-type or encoding when you did not specify one. In addition, there are some steps you can take to make sure your site doesn’t behave in unexpected ways.

Test your site in as many browsers as possible

Once you’ve created your web design, you should review your site’s appearance and functionality on multiple browsers to make sure that all your visitors are getting the experience you worked so hard to design. Ideally, you should start testing as early in your site development process as possible. Different browsers – and even different versions of the same browser – can see your site differently. You can use services such as Google Analytics to get a good idea of the most popular browsers used to view your site.

Write good, clean HTML

While your site may appear correctly in some browsers even if your HTML is not valid, there’s no guarantee that it will appear correctly in all browsers – or in all future browsers. The best way to make sure that your page looks the same in all browsers is to write your page using valid HTML and CSS, and then test it in as many browsers as possible. Clean, valid HTML is a good insurance policy, and using CSS separates presentation from content and can help pages render and load faster. Validation tools, such as the free online HTML and CSS validators provided by the W3C, are useful for checking your site, and tools such as HTML Tidy can help you quickly and easily clean up your code. (Although we do recommend using valid HTML, it’s not likely to be a factor in how Google crawls and indexes your site.)

Specify your character encoding

To help browsers render the text on your page, you should always specify an encoding for your document. This encoding should appear at the top of the document (or frame) as some browsers won’t recognize charset declarations that appear deep in the document. In addition, you should make sure that your web server is not sending conflicting HTTP headers. A header such as content-type: text/html; charset=ISO-8859-1 will override any charset declarations in your page.
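For example, a consistent UTF-8 setup declares the same charset in the server’s HTTP header and near the top of the document:

```
HTTP header:   Content-Type: text/html; charset=UTF-8

In the page:   <meta charset="UTF-8">
```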

Consider accessibility

Not all users may have JavaScript enabled in their browsers. In addition, technologies such as Flash and ActiveX may not render well (or at all) in every browser. We recommend following our guidelines for using Flash and other rich media, and testing your site in a text-only browser such as Lynx. As a bonus, providing text-only alternatives to rich-media content and functionality will make it easier for search engines to crawl and index your site, and also make your site more accessible to users who use alternative technologies such as screenreaders.

Create good titles and snippets in Search Results

Google’s generation of page titles and descriptions (or “snippets”) is completely automated and takes into account both the content of a page as well as references to it that appear on the web. The goal of the snippet and title is to best represent and describe each result and explain how it relates to the user’s query.

We use a number of different sources for this information, including descriptive information in the title and meta tags for each page. We may also use publicly available information, or create rich results based on markup on the page.

While we can’t manually change titles or snippets for individual sites, we’re always working to make them as relevant as possible. You can help improve the quality of the title and snippet displayed for your pages by following the general guidelines below.

Create descriptive page titles

Titles are critical to giving users a quick insight into the content of a result and why it’s relevant to their query. It’s often the primary piece of information used to decide which result to click on, so it’s important to use high-quality titles on your web pages.

Here are a few tips for managing your titles:

  • As explained above, make sure every page on your site has a title specified in the <title> tag.
  • Page titles should be descriptive and concise. Avoid vague descriptors like "Home" for your home page, or "Profile" for a specific person’s profile. Also avoid unnecessarily long or verbose titles, which are likely to get truncated when they show up in the search results.
  • Avoid keyword stuffing. It’s sometimes helpful to have a few descriptive terms in the title, but there’s no reason to have the same words or phrases appear multiple times. A title like "Foobar, foo bar, foobars, foo bars" doesn’t help the user, and this kind of keyword stuffing can make your results look spammy to Google and to users.
  • Avoid repeated or boilerplate titles. It’s important to have distinct, descriptive titles for each page on your site. Titling every page on a commerce site “Cheap products for sale”, for example, makes it impossible for users to distinguish one page from another. Long titles that vary by only a single piece of information (“boilerplate” titles) are also bad; for example, a standardized title like "<band name> - See videos, lyrics, posters, albums, reviews and concerts" contains a lot of uninformative text. One solution is to dynamically update the title to better reflect the actual content of the page: for example, include the words “video”, “lyrics”, etc., only if that particular page contains video or lyrics. Another option is to just use "<band name>" as a concise title and use the meta description (see below) to describe your site’s content.
  • Brand your titles, but concisely. The title of your site’s home page is a reasonable place to include some additional information about your site—for instance, "ExampleSocialSite, a place for people to meet and mingle." But displaying that text in the title of every single page on your site hurts readability and will look particularly repetitive if several pages from your site are returned for the same query. In this case, consider including just your site name at the beginning or end of each page title, separated from the rest of the title with a delimiter such as a hyphen, colon, or pipe, like this:
    <title>ExampleSocialSite: Sign up for a new account.</title>
  • Be careful about disallowing search engines from crawling your pages. Using the robots.txt protocol on your site can stop Google from crawling your pages, but it may not always prevent them from being indexed. For example, Google may index your page if we discover it by following a link from someone else’s site. To display it in search results, Google will need to display a title of some kind and because we won’t have access to any of your page content, we will rely on off-page content such as anchor text from other sites. (To truly block a URL from being indexed, you can use the “noindex” directive.)

Why the search result title might differ from the page’s <title> tag

If we’ve detected that a particular result has one of the above issues with its title, we may try to generate an improved title from anchors, on-page text, or other sources. However, sometimes even pages with well-formulated, concise, descriptive titles will end up with different titles in our search results to better indicate their relevance to the query. There’s a simple reason for this: the title tag as specified by a webmaster is limited to being static, fixed regardless of the query.

When we know the user’s query, we can often find alternative text from a page that better explains why that result is relevant. Using this alternative text as a title helps the user, and it also can help your site. Users are scanning for their query terms or other signs of relevance in the results, and a title that is tailored for the query can increase the chances that they will click through.

If you’re seeing your pages appear in the search results with modified titles, check whether your titles have one of the problems described above. If not, consider whether the alternate title is a better fit for the query. If you still think the original title would be better, let us know in our Webmaster Help Forum.

How snippets are created

Snippets are automatically created from page content. Snippets are designed to emphasize the page content that best relates to a user’s specific search: this means that a page might show different snippets for different searches.

Site owners have two main ways to suggest content for the snippets that we create: rich results and meta description tags.

Preventing snippet creation

You can, alternatively, prevent snippets from being created and shown for your site in Search results. Use the <meta name="robots" content="nosnippet"> tag to prevent Google from displaying a snippet for your page in Search results.

Create good meta descriptions

Google will sometimes use the <meta> description tag from a page to generate a search results snippet, if we think it gives users a more accurate description than would be possible purely from the on-page content. A meta description tag should generally inform and interest users with a short, relevant summary of what a particular page is about. It acts as a pitch that convinces the user that the page is exactly what they’re looking for. There’s no limit on how long a meta description can be, but the search result snippets are truncated as needed, typically to fit the device width.

  • Make sure that every page on your site has a meta description.
  • Differentiate the descriptions for different pages. Identical or similar descriptions on every page of a site aren’t helpful when individual pages appear in the web results. In these cases we’re less likely to display the boilerplate text. Wherever possible, create descriptions that accurately describe the specific page. Use site-level descriptions on the main home page or other aggregation pages, and use page-level descriptions everywhere else. If you don’t have time to create a description for every single page, try to prioritize your content: At the very least, create a description for the critical URLs like your home page and popular pages.
  • Include clearly tagged facts in the description. The meta description doesn’t just have to be in sentence format; it’s also a great place to include information about the page. For example, news or blog postings can list the author, date of publication, or byline information. This can give potential visitors very relevant information that might not be displayed in the snippet otherwise. Similarly, product pages might have the key bits of information—price, age, manufacturer—scattered throughout a page. A good meta description can bring all this data together. For example, the following meta description provides detailed information about a book.
    <meta name="Description" content="Written by A.N. Author, 
    Illustrated by V. Gogh, Price: $17.99, 
    Length: 784 pages">

    In this example, information is clearly tagged and separated.

  • Programmatically generate descriptions. For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions can be impossible. In the latter case, however, programmatic generation of the descriptions is appropriate and encouraged. Good descriptions are human-readable and diverse. Page-specific data is a good candidate for programmatic generation. Keep in mind that meta descriptions made up of long strings of keywords don’t give users a clear idea of the page’s content, and are less likely to be displayed in place of a regular snippet.
  • Use quality descriptions. Finally, make sure your descriptions are truly descriptive. Because the meta descriptions aren’t displayed in the pages the user sees, it’s easy to let this content slide. But high-quality descriptions can be displayed in Google’s search results, and can go a long way to improving the quality and quantity of your search traffic.
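The programmatic approach above can be sketched as follows; the field names and template are illustrative assumptions, not a prescribed format:

```python
from html import escape

def build_description(book):
    """Assemble a meta description with clearly tagged facts from structured data."""
    fields = [("Written by", "author"), ("Price", "price"), ("Length", "length")]
    facts = ", ".join("{}: {}".format(label, book[key])
                      for label, key in fields if key in book)
    return '<meta name="description" content="{}">'.format(escape(facts, quote=True))

print(build_description({"author": "A.N. Author", "price": "$17.99", "length": "784 pages"}))
# <meta name="description" content="Written by: A.N. Author, Price: $17.99, Length: 784 pages">
```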

Create good titles and snippets in Search Results

Google’s generation of page titles and descriptions (or “snippets”) is completely automated and takes into account both the content of a page as well as references to it that appear on the web. The goal of the snippet and title is to best represent and describe each result and explain how it relates to the user’s query.

We use a number of different sources for this information, including descriptive information in the title and meta tags for each page. We may also use publicly available information, or create rich results based on markup on the page.

While we can’t manually change titles or snippets for individual sites, we’re always working to make them as relevant as possible. You can help improve the quality of the title and snippet displayed for your pages by following the general guidelines below.

Create descriptive page titles

Titles are critical to giving users a quick insight into the content of a result and why it’s relevant to their query. It’s often the primary piece of information used to decide which result to click on, so it’s important to use high-quality titles on your web pages.

Here are a few tips for managing your titles:

  • As explained above, make sure every page on your site has a title specified in the <title> tag.
  • Page titles should be descriptive and concise. Avoid vague descriptors like "Home" for your home page, or "Profile" for a specific person’s profile. Also avoid unnecessarily long or verbose titles, which are likely to get truncated when they show up in the search results.
  • Avoid keyword stuffing. It’s sometimes helpful to have a few descriptive terms in the title, but there’s no reason to have the same words or phrases appear multiple times. A title like "Foobar, foo bar, foobars, foo bars" doesn’t help the user, and this kind of keyword stuffing can make your results look spammy to Google and to users.
  • Avoid repeated or boilerplate titles. It’s important to have distinct, descriptive titles for each page on your site. Titling every page on a commerce site “Cheap products for sale”, for example, makes it impossible for users to distinguish one page differs another. Long titles that vary by only a single piece of information (“boilerplate” titles) are also bad; for example, a standardized title like "<band name> - See videos, lyrics, posters, albums, reviews and concerts" contains a lot of uninformative text. One solution is to dynamically update the title to better reflect the actual content of the page: for example, include the words “video”, “lyrics”, etc., only if that particular page contains video or lyrics. Another option is to just use "<band name>" as a concise title and use the meta description (see below) to describe your site’s content.
  • Brand your titles, but concisely. The title of your site’s home page is a reasonable place to include some additional information about your site—for instance, "ExampleSocialSite, a place for people to meet and mingle." But displaying that text in the title of every single page on your site hurts readability and will look particularly repetitive if several pages from your site are returned for the same query. In this case, consider including just your site name at the beginning or end of each page title, separated from the rest of the title with a delimiter such as a hyphen, colon, or pipe, like this:
    <title>ExampleSocialSite: Sign up for a new account.</title>
  • Be careful about disallowing search engines from crawling your pages. Using the robots.txt protocol on your site can stop Google from crawling your pages, but it may not always prevent them from being indexed. For example, Google may index your page if we discover it by following a link from someone else’s site. To display it in search results, Google will need to display a title of some kind and because we won’t have access to any of your page content, we will rely on off-page content such as anchor text from other sites. (To truly block a URL from being indexed, you can use the “noindex” directive.)

Why the search result title might differ from the page’s <title> tag

If we’ve detected that a particular result has one of the above issues with its title, we may try to generate an improved title from anchors, on-page text, or other sources. However, sometimes even pages with well-formulated, concise, descriptive titles will end up with different titles in our search results to better indicate their relevance to the query. There’s a simple reason for this: the title tag as specified by a webmaster is limited to being static, fixed regardless of the query.

When we know the user’s query, we can often find alternative text from a page that better explains why that result is relevant. Using this alternative text as a title helps the user, and it also can help your site. Users are scanning for their query terms or other signs of relevance in the results, and a title that is tailored for the query can increase the chances that they will click through.

If you’re seeing your pages appear in the search results with modified titles, check whether your titles have one of the problems described above. If not, consider whether the alternate title is a better fit for the query. If you still think the original title would be better, let us know in our Webmaster Help Forum.

How snippets are created

Snippets are automatically created from page content. Snippets are designed to emphasize the page content that best relates to a user’s specific search: this means that a page might show different snippets for different searches.

Site owners have two main ways to suggest content for the snippets that we create: rich results and meta description tags.

Preventing snippet creation

You can, alternatively, prevent snippets from being created and shown for your site in Search results. Use the <meta name=”nosnippet”> tag to prevent Google from displaying a snippet for your page in Search results.

Create good meta descriptions

Google will sometimes use the <meta> description tag from a page to generate a search results snippet, if we think it gives users a more accurate description than would be possible purely from the on-page content. A meta description tag should generally inform and interest users with a short, relevant summary of what a particular page is about. They are like a pitch that convince the user that the page is exactly what they’re looking for. There’s no limit on how long a meta description can be, but the search result snippets are truncated as needed, typically to fit the device width.

  • Make sure that every page on your site has a meta description.
  • Differentiate the descriptions for different pages. Identical or similar descriptions on every page of a site aren’t helpful when individual pages appear in the web results. In these cases we’re less likely to display the boilerplate text. Wherever possible, create descriptions that accurately describe the specific page. Use site-level descriptions on the main home page or other aggregation pages, and use page-level descriptions everywhere else. If you don’t have time to create a description for every single page, try to prioritize your content: At the very least, create a description for the critical URLs like your home page and popular pages.
  • Include clearly tagged facts in the description. The meta description doesn’t just have to be in sentence format; it’s also a great place to include information about the page. For example, news or blog postings can list the author, date of publication, or byline information. This can give potential visitors very relevant information that might not be displayed in the snippet otherwise. Similarly, product pages might have the key bits of information—price, age, manufacturer—scattered throughout a page. A good meta description can bring all this data together. For example, the following meta description provides detailed information about a book.
    <meta name="Description" content="Written by A.N. Author, 
    Illustrated by V. Gogh, Price: $17.99, 
    Length: 784 pages">

    In this example, information is clearly tagged and separated.

  • Programmatically generate descriptions. For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions can be impossible. In the latter case, however, programmatic generation of descriptions is appropriate and encouraged. Good descriptions are human-readable and diverse. Page-specific data is a good candidate for programmatic generation. Keep in mind that meta descriptions comprised of long strings of keywords don’t give users a clear idea of the page’s content, and are less likely to be displayed in place of a regular snippet.
  • Use quality descriptions. Finally, make sure your descriptions are truly descriptive. Because the meta descriptions aren’t displayed in the pages the user sees, it’s easy to let this content slide. But high-quality descriptions can be displayed in Google’s search results, and can go a long way to improving the quality and quantity of your search traffic.
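The programmatic-generation approach described above can be sketched in a few lines. This is only an illustration, not an official tool: the record fields (author, illustrator, price, pages) are hypothetical and should be adapted to your own database schema.

```python
from html import escape

def build_meta_description(product: dict) -> str:
    """Assemble a short, clearly tagged meta description from page-specific data.

    The field names used here are hypothetical examples; map them to
    whatever your content database actually stores.
    """
    parts = [
        f"Written by {product['author']}",
        f"Illustrated by {product['illustrator']}",
        f"Price: {product['price']}",
        f"Length: {product['pages']} pages",
    ]
    # Escape the text so it stays safe inside an HTML attribute value.
    content = escape(", ".join(parts), quote=True)
    return f'<meta name="description" content="{content}">'

book = {
    "author": "A.N. Author",
    "illustrator": "V. Gogh",
    "price": "$17.99",
    "pages": 784,
}
print(build_meta_description(book))
# Prints:
# <meta name="description" content="Written by A.N. Author, Illustrated by V. Gogh, Price: $17.99, Length: 784 pages">
```

Generating the tag from structured fields like this keeps each description unique and human-readable, rather than a long string of keywords.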

Duplicate content

Duplicate content generally refers to substantive blocks of content within or across domains that either completely match other content or are appreciably similar. Mostly, this is not deceptive in origin. Examples of non-malicious duplicate content could include:

  • Discussion forums that can generate both regular and stripped-down pages targeted at mobile devices
  • Store items shown or linked via multiple distinct URLs
  • Printer-only versions of web pages

If your site contains multiple pages with largely identical content, there are a number of ways you can indicate your preferred URL to Google. (This is called “canonicalization”.) See our documentation on canonicalization for more information.

However, in some cases, content is deliberately duplicated across domains in an attempt to manipulate search engine rankings or win more traffic. Deceptive practices like this can result in a poor user experience, when a visitor sees substantially the same content repeated within a set of search results.

Google tries hard to index and show pages with distinct information. This filtering means, for instance, that if your site has a “regular” and “printer” version of each article, and neither of these is blocked with a noindex meta tag, we’ll choose one of them to list. In the rare cases in which Google perceives that duplicate content may be shown with intent to manipulate our rankings and deceive our users, we’ll also make appropriate adjustments in the indexing and ranking of the sites involved. As a result, the ranking of the site may suffer, or the site might be removed entirely from the Google index, in which case it will no longer appear in search results.

There are some steps you can take to proactively address duplicate content issues, and ensure that visitors see the content you want them to.

    • Use 301s: If you’ve restructured your site, use 301 redirects (“RedirectPermanent”) to redirect users, Googlebot, and other spiders to the new URLs. (In Apache, you can do this with an .htaccess file; in IIS, you can do this through the administrative console.)
    • Be consistent: Try to keep your internal linking consistent. For example, don’t link to http://www.example.com/page/ and http://www.example.com/page and http://www.example.com/page/index.htm.
    • Use top-level domains: To help us serve the most appropriate version of a document, use top-level domains whenever possible to handle country-specific content. We’re more likely to know that http://www.example.de contains Germany-focused content, for instance, than http://www.example.com/de or http://de.example.com.
    • Syndicate carefully: If you syndicate your content on other sites, Google will always show the version we think is most appropriate for users in each given search, which may or may not be the version you’d prefer. However, it is helpful to ensure that each site on which your content is syndicated includes a link back to your original article. You can also ask those who use your syndicated material to use the noindex meta tag to prevent search engines from indexing their version of the content.
    • Use Search Console to tell us how you prefer your site to be indexed: You can tell Google your preferred domain (for example, http://www.example.com or http://example.com).
    • Minimize boilerplate repetition: For instance, instead of including lengthy copyright text on the bottom of every page, include a very brief summary and then link to a page with more details. In addition, you can use the Parameter Handling tool to specify how you would like Google to treat URL parameters.
    • Avoid publishing stubs: Users don’t like seeing “empty” pages, so avoid placeholders where possible. For example, don’t publish pages for which you don’t yet have real content. If you do create placeholder pages, use the noindex meta tag to block these pages from being indexed.
    • Understand your content management system: Make sure you’re familiar with how content is displayed on your web site. Blogs, forums, and related systems often show the same content in multiple formats. For example, a blog entry may appear on the home page of a blog, in an archive page, and in a page of other entries with the same label.
    • Minimize similar content: If you have many pages that are similar, consider expanding each page or consolidating the pages into one. For instance, if you have a travel site with separate pages for two cities, but the same information on both pages, you could either merge the pages into one page about both cities or you could expand each page to contain unique content about each city.
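As a sketch of the 301 approach from the first bullet, an Apache .htaccess file might contain rules like the following. The paths and hostnames are placeholder examples:

```apache
# Permanently redirect a restructured URL to its new location (mod_alias).
Redirect permanent /old-page/ https://www.example.com/new-page/

# Collapse the bare domain onto the www host (mod_rewrite),
# so each page is reachable at a single canonical URL.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

A 301 tells crawlers the move is permanent, so they transfer indexing signals to the new URL instead of treating the two addresses as duplicates.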

Google does not recommend blocking crawler access to duplicate content on your website, whether with a robots.txt file or other methods. If search engines can’t crawl pages with duplicate content, they can’t automatically detect that these URLs point to the same content and will therefore effectively have to treat them as separate, unique pages. A better solution is to allow search engines to crawl these URLs, but mark them as duplicates by using the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. In cases where duplicate content leads to us crawling too much of your website, you can also adjust the crawl rate setting in Search Console.
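For example, a printer-friendly duplicate can point search engines at the regular version by adding a rel="canonical" link element to its <head>. The URLs here are placeholders:

```html
<!-- On http://www.example.com/page/printer, declare the preferred URL: -->
<link rel="canonical" href="http://www.example.com/page/">
```

Because the duplicate stays crawlable, Google can see both pages, confirm they match, and consolidate them onto the canonical URL.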

Duplicate content on a site is not grounds for action on that site unless it appears that the intent of the duplicate content is to be deceptive and manipulate search engine results. If your site suffers from duplicate content issues, even if you don’t follow the advice listed above, we do a good job of choosing a version of the content to show in our search results.

However, if our review indicates that you engaged in deceptive practices and your site has been removed from our search results, review your site carefully against our Webmaster Guidelines. Once you’ve made your changes and are confident that your site no longer violates our guidelines, submit your site for reconsideration.

In rare situations, our algorithm may select a URL from an external site that is hosting your content without your permission. If you believe that another site is duplicating your content in violation of copyright law, you may contact the site’s host to request removal. In addition, you can request that Google remove the infringing page from our search results by filing a request under the Digital Millennium Copyright Act.

Make your links crawlable

Google can follow links only if they are an <a> tag with an href attribute. Links that use other formats won’t be followed by Google’s crawlers. Google cannot follow <a> links that lack an href attribute, or other tags that act as links only because of script events. Here are examples of links that Google can and can’t follow:

Can follow:

  • <a href="https://example.com">
  • <a href="/relative/path/file">

Can’t follow:

  • <a routerLink="some/path">
  • <span href="https://example.com">
  • <a onclick="goto('https://example.com')">

Announce mobile billing charges clearly

If your site incurs mobile usage charges, you should clearly tell your users before any charges are applied.


Here are some best practices to ensure that your users are sufficiently informed about any mobile charges that they might incur:

  • Display billing information. Tell users which actions will incur charges before they are charged.
  • Billing information should be visible and obvious to the user. Don’t hide or obfuscate crucial billing information. Ensure that the information is visible on all types of devices.
  • Make sure that the fee structure is clearly understandable. Include information about fee amounts and billing frequency; for example, is the fee charged daily, weekly, or monthly?

If your site or pages are triggering warnings in Chrome, check out your Security Issues report in Search Console for more information and example pages. After you have fixed the issue on your site, visit the report to request a review of your fixes. A successful review should halt the warnings.