Robot Exclusion Profile

From Microformats Wiki

This document represents a draft microformat specification. Although drafts are somewhat mature in the development process, the stability of this document cannot be guaranteed, and implementers should be prepared to keep abreast of future developments and changes. Watch this wiki page, or follow discussions on the #microformats Freenode IRC channel to stay up-to-date.

Draft Specification 2005-06-18



Per the public domain release on the author's and contributors' user pages (Peter Janes, Ryan King, Tantek Çelik), this specification is released into the public domain.

Public Domain Contribution Requirement. Since the author(s) released this work into the public domain, in order to maintain this work's public domain status, all contributors to this page agree to release their contributions to this page to the public domain as well. Contributors may indicate their agreement by adding the public domain release template to their user page per the Voluntary Public Domain Declarations instructions. Unreleased contributions may be reverted/removed.


The author neither holds nor intends to apply for any patents on anything required to implement this specification.


The Robot Exclusion Profile is a reworking of the meta-robots tag (and less-standard extensions) as a microformat.


The meta-robots tag is used to provide page-specific direction to web crawlers. Though useful in many cases, its page-wide scope means it cannot be used to restrict crawlers from indexing only certain sections of a document. Several attempts have been made to create more granular solutions through various methods, but each has perceived shortcomings that limit its use; the Robot Exclusion Profile defines a microformat that can be applied to any element or set of elements in a page.

Like other microformats such as hCalendar, the Robot Exclusion Profile defines a set of class names that may be applied to (X)HTML elements. Since class can be applied to almost every (X)HTML element, authors may be as specific or as general as they wish in its application. This differs from the similarly purposed rel="nofollow" attribute, which may only be applied to (and does not refer to the content of) a specific inline link. (It is interesting to note that this behavior is entirely encompassed by the use of class="robots-nofollow" on the same element.) Classes are also additive, so multiple values can be specified at once, e.g. class="robots-nofollow robots-noindex". For robot exclusion in particular, this allows authors to specify multiple rules for an element without adding unnecessary extra markup.


Profile URI (obviously placeholder)

The classes defined by the Robot Exclusion Profile should be considered meaningless when the profile URI is not present in the document <head>'s profile attribute.
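As a minimal sketch of how a crawler might enforce this rule, the following Python fragment (standard library only) checks whether a document's <head> declares the profile before any robots-* classes are honored. The PROFILE_URI value here is a hypothetical placeholder, since the specification's own URI is not yet assigned; note that the profile attribute may contain several space-separated URIs.

```python
from html.parser import HTMLParser

# Hypothetical placeholder; the spec has not yet assigned a real profile URI.
PROFILE_URI = "http://example.com/profiles/robots-exclusion"

class ProfileChecker(HTMLParser):
    """Records whether the document <head> declares the robots-exclusion profile."""

    def __init__(self):
        super().__init__()
        self.has_profile = False

    def handle_starttag(self, tag, attrs):
        if tag == "head":
            # profile may hold multiple space-separated URIs
            profiles = (dict(attrs).get("profile") or "").split()
            if PROFILE_URI in profiles:
                self.has_profile = True

def declares_profile(html):
    checker = ProfileChecker()
    checker.feed(html)
    return checker.has_profile
```

Only when declares_profile returns True should a consumer give the robots-* class names any meaning.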

XMDP Profile

<dl class="profile">
 <dt id="robots-nofollow">robots-nofollow</dt>
  <dd>Informs robots that links contained by the element are not to be followed.</dd>
 <dt id="robots-follow">robots-follow</dt>
  <dd>Informs robots that links contained by the element are to be followed.</dd>
 <dt id="robots-noindex">robots-noindex</dt>
  <dd>Informs robots that the content of the element is not to be included as part of the page.</dd>
 <dt id="robots-index">robots-index</dt>
  <dd>Informs robots that the content of the element is to be included as part of the page.</dd>
 <dt id="robots-noanchortext">robots-noanchortext</dt>
  <dd>Informs robots that the link target document is not to be indexed under the anchor text.</dd>
 <dt id="robots-anchortext">robots-anchortext</dt>
  <dd>Informs robots that the link target document is to be indexed under the anchor text.</dd>
 <dt id="robots-noarchive">robots-noarchive</dt>
  <dd>Informs caching robots that the content of the element is not to be included in their cached copy.</dd>
 <dt id="robots-archive">robots-archive</dt>
  <dd>Informs caching robots that the content of the element is to be included in their cached copy.</dd>
</dl>


Removing page content:

<head profile="">
<div class="robots-noindex">There once was a man from Nantucket…</div>
<p>This page is not about <span class="robots-noindex">pornography</span>.</p>
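To illustrate how an indexer might honor robots-noindex, here is a sketch using Python's standard-library html.parser that collects a page's indexable text while skipping any subtree marked with the class. It is deliberately simplified (void elements and malformed nesting are not handled), not a definitive implementation:

```python
from html.parser import HTMLParser

class NoindexStripper(HTMLParser):
    """Collects page text, skipping any subtree marked class="robots-noindex"."""

    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # > 0 while inside a robots-noindex subtree
        self.parts = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.skip_depth or "robots-noindex" in classes:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.parts.append(data)

def indexable_text(html):
    stripper = NoindexStripper()
    stripper.feed(html)
    return "".join(stripper.parts)
```

Run against the second example above, this would index the sentence with the word "pornography" removed, which is exactly the author's intent.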

Showing nofollow in conjunction with Vote Links, and applying it in parallel with rel="nofollow":

<head profile="">
<p class="robots-nofollow">This is <a href="">a bogus link</a>
and so is <a href="">this</a>.</p>

<p>I don't like <a rel="nofollow" rev="vote-against" class="robots-nofollow"
                   href="">this page</a>
but I do like <a rev="vote-for" href="">this one</a>.</p>
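A crawler processing the examples above might collect only the links it is permitted to follow, treating both rel="nofollow" on the anchor itself and robots-nofollow on any ancestor as exclusions. A minimal sketch (standard library only; example URLs are hypothetical):

```python
from html.parser import HTMLParser

class FollowableLinks(HTMLParser):
    """Collects hrefs a crawler may follow, skipping links inside
    robots-nofollow subtrees and links carrying rel="nofollow"."""

    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # > 0 while inside a robots-nofollow subtree
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if self.skip_depth or "robots-nofollow" in (a.get("class") or "").split():
            self.skip_depth += 1
        if tag == "a" and not self.skip_depth \
                and "nofollow" not in (a.get("rel") or "").split():
            href = a.get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1
```

Note that an anchor bearing class="robots-nofollow" itself is excluded by the first branch, which is the "entirely encompassed" behavior described earlier.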

Preventing images from being stored by search engines, forcing them to be retrieved from the originating website:

<head profile="">
<p><img src="example.png" class="robots-noarchive" alt="Private image" /></p>

A consequence of this is that the short summaries modern search engines display alongside result links also exclude robots-noarchive content. We suggest replacing small excluded segments with an ellipsis [...]. Unarchived segments comparable in size to the segments a search engine normally uses for summaries can simply be omitted. A display of an entire cached document that contains unarchived segments should likewise include some marker showing where text has been elided, regardless of the segment's size.
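The ellipsis suggestion above can be sketched as follows: a caching robot emits the page text but substitutes "[...]" for each robots-noarchive subtree. As with the earlier sketches, this uses only Python's standard-library html.parser and ignores edge cases such as malformed nesting:

```python
from html.parser import HTMLParser

class ArchiveFilter(HTMLParser):
    """Emits page text, replacing each robots-noarchive subtree with an ellipsis."""

    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # > 0 while inside a robots-noarchive subtree
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if self.skip_depth:
            self.skip_depth += 1
        elif "robots-noarchive" in (dict(attrs).get("class") or "").split():
            self.skip_depth = 1
            self.parts.append("[...]")   # mark the elision, per the suggestion above

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.parts.append(data)

def cached_text(html):
    flt = ArchiveFilter()
    flt.feed(html)
    return "".join(flt.parts)
```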

A more complex example is available which also shows how the robots metadata may be visualized.




