.. _topics-request-response:

======================
Requests and Responses
======================

.. module:: scrapy.http
   :synopsis: Request and Response classes

Scrapy uses :class:`Request` and :class:`Response` objects for crawling web
sites.

Typically, :class:`Request` objects are generated in the spiders and pass
across the system until they reach the Downloader, which executes the request
and returns a :class:`Response` object which travels back to the spider that
issued the request.

Both :class:`Request` and :class:`Response` classes have subclasses which add
functionality not required in the base classes. These are described below in
:ref:`topics-request-response-ref-request-subclasses` and
:ref:`topics-request-response-ref-response-subclasses`.


Request objects
===============

.. class:: Request(url[, callback, method='GET', headers, body, cookies, meta, encoding='utf-8', priority=0, dont_filter=False, errback])

    A :class:`Request` object represents an HTTP request, which is usually
    generated in the Spider and executed by the Downloader, thus generating a
    :class:`Response`.

    :param url: the URL of this request
    :type url: string

    :param callback: the function that will be called with the response of
       this request (once it's downloaded) as its first parameter. For more
       information see
       :ref:`topics-request-response-ref-request-callback-arguments` below.
       If a Request doesn't specify a callback, the spider's
       :meth:`~scrapy.spider.Spider.parse` method will be used.
       Note that if exceptions are raised during processing, errback is
       called instead.
    :type callback: callable

    :param method: the HTTP method of this request. Defaults to ``'GET'``.
    :type method: string

    :param meta: the initial values for the :attr:`Request.meta` attribute.
       If given, the dict passed in this parameter will be shallow copied.
    :type meta: dict

    :param body: the request body. If a ``unicode`` is passed, then it's
       encoded to ``str`` using the `encoding` passed (which defaults to
       ``utf-8``). If ``body`` is not given, an empty string is stored.
       Regardless of the type of this argument, the final value stored will
       be a ``str`` (never ``unicode`` or ``None``).
    :type body: str or unicode

    :param headers: the headers of this request. The dict values can be
       strings (for single valued headers) or lists (for multi-valued
       headers). If ``None`` is passed as value, the HTTP header will not be
       sent at all.
    :type headers: dict

    :param cookies: the request cookies. These can be sent in two forms.

        1. Using a dict::

            request_with_cookies = Request(url="http://www.example.com",
                                           cookies={'currency': 'USD', 'country': 'UY'})

        2. Using a list of dicts::

            request_with_cookies = Request(url="http://www.example.com",
                                           cookies=[{'name': 'currency',
                                                     'value': 'USD',
                                                     'domain': 'example.com',
                                                     'path': '/currency'}])

        The latter form allows for customizing the ``domain`` and ``path``
        attributes of the cookie. This is only useful if the cookies are saved
        for later requests.

        When some site returns cookies (in a response) those are stored in the
        cookies for that domain and will be sent again in future requests.
        That's the typical behaviour of any regular web browser. However, if,
        for some reason, you want to avoid merging with existing cookies, you
        can instruct Scrapy to do so by setting the ``dont_merge_cookies`` key
        to ``True`` in :attr:`Request.meta`.
        Example of request without merging cookies::

            request_with_cookies = Request(url="http://www.example.com",
                                           cookies={'currency': 'USD', 'country': 'UY'},
                                           meta={'dont_merge_cookies': True})

        For more info see :ref:`cookies-mw`.
    :type cookies: dict or list

    :param encoding: the encoding of this request (defaults to ``'utf-8'``).
       This encoding will be used to percent-encode the URL and to convert the
       body to ``str`` (if given as ``unicode``).
    :type encoding: string

    :param priority: the priority of this request (defaults to ``0``).
       The priority is used by the scheduler to define the order used to
       process requests. Requests with a higher priority value will execute
       earlier. Negative values are allowed in order to indicate relatively
       low priority.
    :type priority: int

    :param dont_filter: indicates that this request should not be filtered by
       the scheduler. This is used when you want to perform an identical
       request multiple times, to ignore the duplicates filter. Use it with
       care, or you will get into crawling loops. Defaults to ``False``.
    :type dont_filter: boolean

    :param errback: a function that will be called if any exception was
       raised while processing the request. This includes pages that failed
       with 404 HTTP errors and such. It receives a `Twisted Failure`_
       instance as first parameter.
    :type errback: callable

.. attribute:: Request.url

    A string containing the URL of this request.

    Keep in mind that this attribute contains the escaped URL, so it can
    differ from the URL passed in the constructor.

    This attribute is read-only. To change the URL of a Request use
    :meth:`replace`.

.. attribute:: Request.method

    A string representing the HTTP method in the request. This is guaranteed
    to be uppercase. Example: ``"GET"``, ``"POST"``, ``"PUT"``, etc.

.. attribute:: Request.headers

    A dictionary-like object which contains the request headers.

.. attribute:: Request.body

    A str that contains the request body.

    This attribute is read-only. To change the body of a Request use
    :meth:`replace`.

.. attribute:: Request.meta

    A dict that contains arbitrary metadata for this request. This dict is
    empty for new Requests, and is usually populated by different Scrapy
    components (extensions, middlewares, etc). So the data contained in this
    dict depends on the extensions you have enabled.

    See :ref:`topics-request-meta` for a list of special meta keys
    recognized by Scrapy.

    This dict is `shallow copied`_ when the request is cloned using the
    ``copy()`` or ``replace()`` methods, and can also be accessed, in your
    spider, from the ``response.meta`` attribute.

    .. _shallow copied: https://docs.python.org/2/library/copy.html

.. method:: Request.copy()

   Return a new Request which is a copy of this Request. See also:
   :ref:`topics-request-response-ref-request-callback-arguments`.

.. method:: Request.replace([url, method, headers, body, cookies, meta, encoding, dont_filter, callback, errback])

   Return a Request object with the same members, except for those members
   given new values by whichever keyword arguments are specified. The
   attribute :attr:`Request.meta` is copied by default (unless a new value
   is given in the ``meta`` argument). See also
   :ref:`topics-request-response-ref-request-callback-arguments`.
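As a brief illustration (the URL, header, meta key and values below are
arbitrary examples rather than anything Scrapy requires), a request can be
created with some of the arguments described above, and :meth:`replace` can
then be used to derive a modified copy::

    import scrapy

    # build a request with custom headers, metadata and priority
    request = scrapy.Request(
        url="http://www.example.com/page.html",
        headers={'Accept-Language': 'en'},
        meta={'page_kind': 'listing'},   # arbitrary key, simply carried along
        priority=10,
    )

    # same URL, headers and meta, but sent as a POST with a body
    post_request = request.replace(method='POST', body='a=1&b=2')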
.. _topics-request-response-ref-request-callback-arguments:

Passing additional data to callback functions
---------------------------------------------

The callback of a request is a function that will be called when the response
of that request is downloaded. The callback function will be called with the
downloaded :class:`Response` object as its first argument.

Example::

    def parse_page1(self, response):
        return scrapy.Request("http://www.example.com/some_page.html",
                              callback=self.parse_page2)

    def parse_page2(self, response):
        # this would log http://www.example.com/some_page.html
        self.logger.info("Visited %s", response.url)

In some cases you may be interested in passing arguments to those callback
functions so you can receive the arguments later, in the second callback. You
can use the :attr:`Request.meta` attribute for that.

Here's an example of how to pass an item using this mechanism, to populate
different fields from different pages::

    def parse_page1(self, response):
        item = MyItem()
        item['main_url'] = response.url
        request = scrapy.Request("http://www.example.com/some_page.html",
                                 callback=self.parse_page2)
        request.meta['item'] = item
        return request

    def parse_page2(self, response):
        item = response.meta['item']
        item['other_url'] = response.url
        return item

.. _topics-request-meta:

Request.meta special keys
=========================

The :attr:`Request.meta` attribute can contain any arbitrary data, but there
are some special keys recognized by Scrapy and its built-in extensions.

Those are:

* :reqmeta:`dont_redirect`
* :reqmeta:`dont_retry`
* :reqmeta:`handle_httpstatus_list`
* :reqmeta:`handle_httpstatus_all`
* ``dont_merge_cookies`` (see ``cookies`` parameter of :class:`Request` constructor)
* :reqmeta:`cookiejar`
* :reqmeta:`dont_cache`
* :reqmeta:`redirect_urls`
* :reqmeta:`bindaddress`
* :reqmeta:`dont_obey_robotstxt`
* :reqmeta:`download_timeout`
* :reqmeta:`download_maxsize`
* :reqmeta:`proxy`

.. reqmeta:: bindaddress

bindaddress
-----------

The outgoing IP address to use for performing the request.

.. reqmeta:: download_timeout

download_timeout
----------------

The amount of time (in secs) that the downloader will wait before timing out.
See also: :setting:`DOWNLOAD_TIMEOUT`.

.. _topics-request-response-ref-request-subclasses:

Request subclasses
==================

Here is the list of built-in :class:`Request` subclasses. You can also
subclass it to implement your own custom functionality.

FormRequest objects
-------------------

The FormRequest class extends the base :class:`Request` with functionality for
dealing with HTML forms. It uses `lxml.html forms`_ to pre-populate form
fields with form data from :class:`Response` objects.

.. _lxml.html forms: http://lxml.de/lxmlhtml.html#forms

.. class:: FormRequest(url, [formdata, ...])

    The :class:`FormRequest` class adds a new argument to the constructor. The
    remaining arguments are the same as for the :class:`Request` class and are
    not documented here.

    :param formdata: is a dictionary (or iterable of (key, value) tuples)
       containing HTML Form data which will be url-encoded and assigned to the
       body of the request.
    :type formdata: dict or iterable of tuples

    The :class:`FormRequest` objects support the following class method in
    addition to the standard :class:`Request` methods:

    .. classmethod:: FormRequest.from_response(response, [formname=None, formnumber=0, formdata=None, formxpath=None, clickdata=None, dont_click=False, ...])

       Returns a new :class:`FormRequest` object with its form field values
       pre-populated with those found in the HTML ``<form>`` element contained
       in the given response.
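As a rough sketch (the site URL, form field names and spider below are made up
for illustration), :meth:`FormRequest.from_response` is typically used to
submit a form that the server pre-fills, such as a login form, while only
overriding the fields you care about::

    import scrapy

    class LoginSpider(scrapy.Spider):
        name = 'login_example'
        start_urls = ['http://www.example.com/users/login.php']

        def parse(self, response):
            # other <form> fields (e.g. hidden session tokens) are picked up
            # from the response automatically
            return scrapy.FormRequest.from_response(
                response,
                formdata={'username': 'john', 'password': 'secret'},
                callback=self.after_login,
            )

        def after_login(self, response):
            self.logger.info("Logged in, landed on %s", response.url)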