======================================
Scrapy documentation quick start guide
======================================

This file provides a quick guide on how to compile the Scrapy documentation.


Setup the environment
---------------------

To compile the documentation you need the Sphinx Python library. To install
it and all its dependencies run::

    pip install 'Sphinx >= 1.3'


Compile the documentation
-------------------------

To compile the documentation (to classic HTML output) run the following
command from this dir::

    make html

Documentation will be generated (in HTML format) inside the ``build/html``
dir.


View the documentation
----------------------

To view the documentation run the following command::

    make htmlview

This command will fire up your default browser and open the main page of
your (previously generated) HTML documentation.


Start over
----------

To clean up all generated documentation files and start from scratch run::

    make clean

Keep in mind that this command won't touch any documentation source files.


Recreating documentation on the fly
-----------------------------------

To recreate the documentation automatically whenever you make changes,
install watchdog (``pip install watchdog``) and then run::

    make watch
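

Putting it together
-------------------

For reference, here is a sketch of a complete first-time build and preview,
chaining the commands above into a single shell session. The clone location
and the ``docs/`` path are assumptions about a typical checkout, not part of
the build commands themselves::

    # assumed layout: documentation sources live in docs/ of a fresh checkout
    git clone https://github.com/scrapy/scrapy.git
    cd scrapy/docs
    pip install 'Sphinx >= 1.3'   # documentation build dependency
    make html                     # generate HTML into build/html
    make htmlview                 # open the generated docs in your default browser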