This module provides a single class, RobotFileParser, which answers
questions about whether or not a particular user agent can fetch a URL on
the Web site that published the robots.txt file. For more details on
the structure of robots.txt files, see
http://www.robotstxt.org/wc/norobots.html.

class RobotFileParser()
This class provides a set of methods to read, parse and answer questions
about a single robots.txt file.

set_url(url)
Sets the URL referring to a robots.txt file.

read()
Reads the robots.txt URL and feeds it to the parser.

parse(lines)
Parses the lines argument, a list of lines taken from a robots.txt file.
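
As a minimal sketch, a robots.txt file obtained by some other means can be
handed to parse() directly as a list of lines; the rules and the example.com
URLs below are made up purely for illustration:

>>> import robotparser
>>> rp = robotparser.RobotFileParser()
>>> rp.parse([
...     "User-agent: *",
...     "Disallow: /private/",
... ])
>>> rp.can_fetch("*", "http://www.example.com/private/page.html")
False
>>> rp.can_fetch("*", "http://www.example.com/index.html")
True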

can_fetch(useragent, url)
Returns True if the useragent is allowed to fetch the url according to the
rules contained in the parsed robots.txt file.

mtime()
Returns the time the robots.txt file was last fetched. This is useful for
long-running web spiders that need to check for new robots.txt files
periodically.

modified()
Sets the time the robots.txt file was last fetched to the current time.
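
As a rough sketch of the periodic-refresh idea described above (the
example.com URL and the one-day refresh interval are arbitrary choices, not
part of the module), a long-running spider might pair mtime() and modified()
like this:

>>> import time
>>> import robotparser
>>> rp = robotparser.RobotFileParser()
>>> rp.set_url("http://www.example.com/robots.txt")
>>> rp.read()
>>> rp.modified()                       # remember when we fetched it
>>> # ... later, in the spider's main loop ...
>>> if time.time() - rp.mtime() > 24 * 3600:
...     rp.read()                       # robots.txt may have changed
...     rp.modified()
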
The following example demonstrates basic use of the RobotFileParser class.
>>> import robotparser
>>> rp = robotparser.RobotFileParser()
>>> rp.set_url("http://www.musi-cal.com/robots.txt")
>>> rp.read()
>>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
False
>>> rp.can_fetch("*", "http://www.musi-cal.com/")
True