======================================
Scrapy documentation quick start guide
======================================

This file provides a quick guide on how to compile the Scrapy documentation.


Set up the environment
----------------------

To compile the documentation, you need the following Python libraries:

 * Sphinx
 * docutils
 * Jinja

If you have setuptools available, the following command will install all of
them (since Sphinx requires both docutils and Jinja)::

    easy_install Sphinx

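If you don't have setuptools, the same libraries can also be installed with
pip; this is just an alternative, assuming pip is available on your system::

    pip install Sphinx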

Compile the documentation
-------------------------

To compile the documentation (to classic HTML output), run the following
command from this directory::

    make html

The documentation will be generated (in HTML format) inside the
``build/html`` directory.
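As a quick sanity check that the build produced output, you can look for the
generated index page (a minimal check, nothing more)::

    ls build/html/index.html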


View the documentation
----------------------

To view the documentation, run the following command::

    make htmlview

This command will fire up your default browser and open the main page of your
(previously generated) HTML documentation.
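If ``make htmlview`` doesn't work in your environment, you can open the
generated index page yourself; for example, using Python's standard
``webbrowser`` module (just one possible way, assuming the build above
succeeded)::

    python -c "import webbrowser; webbrowser.open('build/html/index.html')"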


Start over
----------

To clean up all generated documentation files and start from scratch, run::

    make clean

Keep in mind that this command won't touch any documentation source files.
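
A typical full rebuild after cleaning is then just the two commands above
combined::

    make clean
    make html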