mirror of https://github.com/scrapy/scrapy.git synced 2025-02-23 08:44:13 +00:00

Adds HtmlCSSSelector and XmlCSSSelector classes, cssselect as optional dependency.

Ported .get() from _Element and .text_content() from HTMLMixin

Add CSS selectors to scrapy shell

Documenting CSS Selectors: Constructing selectors

Documenting CSS Selectors: Using Selectors

Make CSS Selectors a default feature.

Adds XPath powers to CSS Selectors and some syntactic sugar.

Removes methods copied over from lxml.html.HtmlMixin.

Updating docs to use new CSS Selector super powers.

Documenting CSS Selectors: Regular Expressions

Moving section after Nesting section, since it mentions it.

Documenting CSS Selectors: Nesting Selectors

Fix XPath specificity in lxml.selector.CSSSelectorMixin.text

Cleaning up unused stuff from cssel.py

Changing the behavior of lxml.selector.CSSSelectorMixin.text.

Concatenating all of the descendant text nodes is more useful
than returning it in pieces (there's xpath() if you need that).

Documenting CSS Selectors: CSS Selector objects

Documenting CSS Selectors: CSSSelectorList objects

Documenting CSS Selectors: HtmlCSSSelector objects

Documenting CSS Selectors: XmlCSSSelector objects

Fixing some documentation typos and errors

Enforcing the 80-char width lines

Tidying up CSS selectors and CSSSelectorMixin objects

Adding some missing references in documentation.

Fixing lxml.selector.CSSSelectorList.text
This commit is contained in:
Capi Etheriel 2012-09-30 14:55:55 -03:00 committed by Daniel Graña
parent 8bf3284ebf
commit bc17e9d412
5 changed files with 325 additions and 77 deletions


@ -6,22 +6,24 @@ Selectors
When you're scraping web pages, the most common task you need to perform is
to extract data from the HTML source. There are several libraries available to
achieve this:
* `BeautifulSoup`_ is a very popular screen scraping library among Python
programmers which constructs a Python object based on the structure of the
HTML code and also deals with bad markup reasonably well, but it has one
drawback: it's slow.
* `lxml`_ is an XML parsing library (which also parses HTML) with a pythonic
API based on `ElementTree`_ (which is not part of the Python standard
library).
Scrapy comes with its own mechanism for extracting data. They're called
selectors because they "select" certain parts of the HTML document specified
either by `XPath`_ or `CSS`_ expressions.
`XPath`_ is a language for selecting nodes in XML documents, which can also be
used with HTML. `CSS`_ is a language for applying styles to HTML documents. It
defines selectors to associate those styles with specific HTML elements.
Both `lxml`_ and Scrapy Selectors are built over the `libxml2`_ library, which
means they're very similar in speed and parsing accuracy.
@ -31,14 +33,16 @@ small and simple, unlike the `lxml`_ API which is much bigger because the
`lxml`_ library can be used for many other tasks, besides selecting markup
documents.
For a complete reference of the selectors API see :ref:`XPath selector
reference <topics-xpath-selectors-ref>` and :ref:`CSS selector reference
<topics-css-selectors-ref>`.
.. _BeautifulSoup: http://www.crummy.com/software/BeautifulSoup/
.. _lxml: http://codespeak.net/lxml/
.. _ElementTree: http://docs.python.org/library/xml.etree.elementtree.html
.. _libxml2: http://xmlsoft.org/
.. _XPath: http://www.w3.org/TR/xpath
.. _CSS: http://www.w3.org/TR/selectors
Using selectors
===============
@ -46,24 +50,33 @@ Using selectors
Constructing selectors
----------------------
There are four types of selectors bundled with Scrapy. Those are:
* :class:`~scrapy.selector.HtmlXPathSelector` - for working with HTML
documents using XPath.
* :class:`~scrapy.selector.XmlXPathSelector` - for working with XML documents
using XPath.
* :class:`~scrapy.selector.HtmlCSSSelector` - for working with HTML documents
using CSS selectors.
* :class:`~scrapy.selector.XmlCSSSelector` - for working with XML documents
using CSS selectors.
.. highlight:: python
All of them share the same selector API, and are constructed with a Response
object as their first parameter. This is the Response they're going to be
"selecting".
Example::
hxs = HtmlXPathSelector(response) # an HTML XPath selector
xxs = XmlXPathSelector(response) # an XML XPath selector
hcs = HtmlCSSSelector(response) # an HTML CSS selector
Using selectors
---------------
To explain how to use the selectors we'll use the `Scrapy shell` (which
provides interactive testing) and an example page located in the Scrapy
@ -84,24 +97,28 @@ First, let's open the shell::
scrapy shell http://doc.scrapy.org/en/latest/_static/selectors-sample1.html
Then, after the shell loads, you'll have some selectors already instantiated
and ready to use.
Since we're dealing with HTML, we can use either the
:class:`~scrapy.selector.HtmlXPathSelector` object which is found, by default,
in the ``hxs`` shell variable, or the equivalent
:class:`~scrapy.selector.HtmlCSSSelector` found in the ``hcs`` shell variable.
Note that CSS selectors can only select element nodes, while XPath selectors
can select any nodes, including text and comment nodes. There are some methods
to augment CSS selectors with XPath as we'll see below.
.. highlight:: python
So, by looking at the :ref:`HTML code <topics-selectors-htmlcode>` of that
page, let's construct an XPath (using an HTML selector) for selecting the text
inside the title tag::
>>> hxs.select('//title/text()')
[<HtmlXPathSelector (text) xpath=//title/text()>]
As you can see, the select() method returns an XPathSelectorList, which is a
list of new selectors. This API can be used quickly for extracting nested data.
To actually extract the textual data, you must call the selector ``extract()``
method, as follows::
@ -109,11 +126,22 @@ method, as follows::
>>> hxs.select('//title/text()').extract()
[u'Example website']
Now notice that CSS selectors can't select text nodes. There are some
methods that allow enhancing CSS selectors, such as ``text`` and ``get``::
>>> hcs.select('title').text()
[<HtmlCSSSelector xpath='text()' data=u'Example website'>]
>>> hcs.select('title').text().extract()
[u'Example website']
Now we're going to get the base URL and some image links::
>>> hxs.select('//base/@href').extract()
[u'http://example.com/']
>>> hcs.select('base').get('href').extract()
[u'http://example.com/']
>>> hxs.select('//a[contains(@href, "image")]/@href').extract()
[u'image1.html',
u'image2.html',
@ -121,6 +149,13 @@ Now we're going to get the base URL and some image links::
u'image4.html',
u'image5.html']
>>> hcs.select('a[href*=image]').get('href').extract()
[u'image1.html',
u'image2.html',
u'image3.html',
u'image4.html',
u'image5.html']
>>> hxs.select('//a[contains(@href, "image")]/img/@src').extract()
[u'image1_thumb.jpg',
u'image2_thumb.jpg',
@ -128,32 +163,21 @@ Now we're going to get the base URL and some image links::
u'image4_thumb.jpg',
u'image5_thumb.jpg']
>>> hcs.select('a[href*=image] img').get('src').extract()
[u'image1_thumb.jpg',
u'image2_thumb.jpg',
u'image3_thumb.jpg',
u'image4_thumb.jpg',
u'image5_thumb.jpg']
.. _topics-selectors-nesting-selectors:
Nesting selectors
-----------------
The ``select()`` selector method returns a list of selectors of the same type
(XPath or CSS), so you can call the ``select()`` for those selectors too.
Here's an example::
>>> links = hxs.select('//a[contains(@href, "image")]')
>>> links.extract()
@ -173,6 +197,40 @@ The ``select()`` selector method returns a list of selectors, so you can call th
Link number 3 points to url [u'image4.html'] and image [u'image4_thumb.jpg']
Link number 4 points to url [u'image5.html'] and image [u'image5_thumb.jpg']
The CSSSelectorList ``select`` method will accept CSS selectors, as expected,
but it also provides an ``xpath`` method that accepts XPath selectors to
augment the CSS selectors. Here's an example::
>>> links = hcs.select('a[href*=image]')
>>> for index, link in enumerate(links):
...     args = (index, link.get('href').extract(), link.xpath('img/@src').extract())
...     print 'Link number %d points to url %s and image %s' % args
Link number 0 points to url [u'image1.html'] and image [u'image1_thumb.jpg']
Link number 1 points to url [u'image2.html'] and image [u'image2_thumb.jpg']
Link number 2 points to url [u'image3.html'] and image [u'image3_thumb.jpg']
Link number 3 points to url [u'image4.html'] and image [u'image4_thumb.jpg']
Link number 4 points to url [u'image5.html'] and image [u'image5_thumb.jpg']
Using selectors with regular expressions
----------------------------------------
Selectors (both CSS and XPath) also have a ``re()`` method for extracting data
using regular expressions. However, unlike using the ``select()`` method, the
``re()`` method does not return a list of
:class:`~scrapy.selector.XPathSelector` objects, so you can't construct nested
``.re()`` calls.
Here's an example used to extract image names from the :ref:`HTML code
<topics-selectors-htmlcode>` above::
>>> hxs.select('//a[contains(@href, "image")]/text()').re(r'Name:\s*(.*)')
[u'My image 1',
u'My image 2',
u'My image 3',
u'My image 4',
u'My image 5']
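Outside of Scrapy, the same pattern extraction can be sketched with the
standard library ``re`` module (the text strings below are hypothetical
stand-ins for the extracted link text):

```python
import re

# Hypothetical text content of the links in the example page
link_texts = ['Name: My image 1', 'Name: My image 2']

# Same pattern as the selector example: capture everything after "Name:"
names = [re.search(r'Name:\s*(.*)', t).group(1) for t in link_texts]
# names == ['My image 1', 'My image 2']
```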
.. _topics-selectors-relative-xpaths:
Working with relative XPaths
@ -212,16 +270,19 @@ XPath specification.
.. _topics-selectors-ref:
Built-in Selectors reference
============================
.. module:: scrapy.selector
:synopsis: Selectors classes
There are four types of selectors bundled with Scrapy:
:class:`HtmlXPathSelector` and :class:`XmlXPathSelector`,
:class:`HtmlCSSSelector` and :class:`XmlCSSSelector`. All of them implement the
same :class:`XPathSelector` interface. The only differences are the selector
syntax and whether it is used to process HTML data or XML data.
.. _topics-xpath-selectors-ref:
XPathSelector objects
---------------------
@ -232,13 +293,13 @@ XPathSelector objects
certain parts of its content.
``response`` is a :class:`~scrapy.http.Response` object that will be used
for selecting and extracting data.
.. method:: select(xpath)
Apply the given XPath relative to this XPathSelector and return a list
of :class:`XPathSelector` objects (ie. a :class:`XPathSelectorList`)
with the result.
``xpath`` is a string containing the XPath to apply
@ -269,8 +330,8 @@ XPathSelector objects
.. method:: __nonzero__()
Returns ``True`` if there is any real content selected by this
:class:`XPathSelector` or ``False`` otherwise. In other words, the
boolean value of an XPathSelector is given by the contents it selects.
XPathSelectorList objects
-------------------------
@ -282,11 +343,12 @@ XPathSelectorList objects
.. method:: select(xpath)
Call the :meth:`XPathSelector.select` method for all
:class:`XPathSelector` objects in this list and return their results
flattened, as a new :class:`XPathSelectorList`.
``xpath`` is the same argument as the one in
:meth:`XPathSelector.select`
.. method:: re(regex)
@ -298,16 +360,16 @@ XPathSelectorList objects
.. method:: extract()
Call the :meth:`XPathSelector.extract` method for all
:class:`XPathSelector` objects in this list and return their results
flattened, as a list of unicode strings.
.. method:: extract_unquoted()
Call the :meth:`XPathSelector.extract_unquoted` method for all
:class:`XPathSelector` objects in this list and return their results
flattened, as a list of unicode strings. This method should not be
applied to all kinds of XPathSelectors. For more info see
:meth:`XPathSelector.extract_unquoted`.
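As a rough illustration of what unquoting entities means, the standard
library can unescape the basic XML entities (a sketch only;
``extract_unquoted`` operates on selected text nodes, not raw strings):

```python
from xml.sax.saxutils import unescape

# "&amp;", "&lt;" and "&gt;" are entities; unescaping yields the
# literal characters they encode
unescape('Free &amp; open source: &lt;scrapy&gt;')
# -> 'Free & open source: <scrapy>'
```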
HtmlXPathSelector objects
@ -316,7 +378,8 @@ HtmlXPathSelector objects
.. class:: HtmlXPathSelector(response)
A subclass of :class:`XPathSelector` for working with HTML content. It uses
the `libxml2`_ HTML parser. See the :class:`XPathSelector` API for more
info.
.. _libxml2: http://xmlsoft.org/
@ -324,8 +387,9 @@ HtmlXPathSelector examples
~~~~~~~~~~~~~~~~~~~~~~~~~~
Here's a couple of :class:`HtmlXPathSelector` examples to illustrate several
concepts. In all cases, we assume there is already an
:class:`HtmlXPathSelector` instantiated with a :class:`~scrapy.http.Response`
object like this::
x = HtmlXPathSelector(html_response)
@ -343,7 +407,7 @@ instantiated with a :class:`~scrapy.http.Response` object like this::
3. Iterate over all ``<p>`` tags and print their class attribute::
for node in x.select("//p"):
... print node.select("@class").extract()
4. Extract textual data from all ``<p>`` tags without entities, as a list of
unicode strings::
@ -366,13 +430,14 @@ XmlXPathSelector examples
~~~~~~~~~~~~~~~~~~~~~~~~~
Here's a couple of :class:`XmlXPathSelector` examples to illustrate several
concepts. In both cases we assume there is already an :class:`XmlXPathSelector`
instantiated with a :class:`~scrapy.http.Response` object like this::
x = XmlXPathSelector(xml_response)
1. Select all ``<product>`` elements from an XML response body, returning a list
of :class:`XPathSelector` objects (ie. a :class:`XPathSelectorList`
object)::
x.select("//product")
@ -426,3 +491,148 @@ of relevance, are:
though.
.. _Google Base XML feed: http://base.google.com/support/bin/answer.py?hl=en&answer=59461
.. _topics-css-selectors-ref:
CSSSelector objects
--------------------
.. class:: CSSSelectorMixin(object)
A :class:`CSSSelectorMixin` object is a mixin for either
:class:`XmlXPathSelector` or :class:`HtmlXPathSelector` to select element
nodes using CSS selectors syntax. As a mixin, it is not meant to be used on
its own, but as a secondary parent class. See :class:`XmlCSSSelector` and
:class:`HtmlCSSSelector` for implementations.
.. method:: select(css)
Apply the given CSS selector relative to this CSSSelectorMixin and
return a list of :class:`CSSSelectorMixin` objects (ie. a
:class:`CSSSelectorList`) with the result.
``css`` is a string containing the CSS selector to apply.
.. method:: xpath(xpath)
Apply the given XPath relative to this CSSSelectorMixin and return a list
of :class:`CSSSelectorMixin` objects (ie. a :class:`CSSSelectorList`)
with the result.
``xpath`` is a string containing the XPath to apply.
.. method:: get(attr)
Get the attribute relative to this CSSSelectorMixin and return a list
of :class:`CSSSelectorMixin` objects (ie. a :class:`CSSSelectorList`)
with the result (usually with one element only).
``attr`` is a string containing the attribute name to get.
.. method:: text(all=False)
Get the children text nodes relative to this CSSSelectorMixin or, if
``all`` is True, a string node concatenating all of the descendant text
nodes relative to this CSSSelectorMixin, and return a list of
:class:`CSSSelectorMixin` objects (ie. a :class:`CSSSelectorList`) with
the result.
``all`` is a boolean to either select children text nodes (False) or
select a string node concatenating all of the descendant text nodes.
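The difference between the two modes can be sketched with the standard
library (a rough analogy only: ``all=False`` maps to the XPath ``text()``
node test, ``all=True`` to ``string()``):

```python
import xml.etree.ElementTree as ET

root = ET.fromstring('<p>Hello <b>brave</b> world</p>')

# all=False analogue: the element's own text pieces, split by child elements
pieces = [t for t in [root.text] + [child.tail for child in root] if t]
# pieces == ['Hello ', ' world']

# all=True analogue: every descendant text node concatenated in order
whole = ''.join(root.itertext())
# whole == 'Hello brave world'
```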
CSSSelectorList objects
-----------------------
.. class:: CSSSelectorList
The :class:`CSSSelectorList` class is a subclass of :class:`XPathSelectorList`
which overrides and adds methods to match those of
:class:`CSSSelectorMixin`.
.. method:: xpath(xpath)
Call the :meth:`CSSSelectorMixin.xpath` method for all
:class:`CSSSelectorMixin` objects in this list and return their results
flattened, as a new :class:`CSSSelectorList`.
``xpath`` is the same argument as the one in
:meth:`CSSSelectorMixin.xpath`
.. method:: get(attr)
Call the :meth:`CSSSelectorMixin.get` method for all
:class:`CSSSelectorMixin` objects in this list and return their results
flattened, as a new :class:`CSSSelectorList`.
``attr`` is the same argument as the one in :meth:`CSSSelectorMixin.get`
.. method:: text(all=False)
Call the :meth:`CSSSelectorMixin.text` method for all
:class:`CSSSelectorMixin` objects in this list and return their results
flattened, as a new :class:`CSSSelectorList`.
``all`` is the same argument as the one in :meth:`CSSSelectorMixin.text`
HtmlCSSSelector objects
-----------------------
.. class:: HtmlCSSSelector(response)
A subclass of :class:`CSSSelectorMixin` and :class:`HtmlXPathSelector` for
working with HTML content using CSS selectors.
HtmlCSSSelector examples
~~~~~~~~~~~~~~~~~~~~~~~~
Here's a couple of :class:`HtmlCSSSelector` examples to illustrate several
concepts. In all cases, we assume there is already an :class:`HtmlCSSSelector`
instantiated with a :class:`~scrapy.http.Response` object like this::
x = HtmlCSSSelector(html_response)
1. Select all ``<h1>`` elements from an HTML response body, returning a list of
:class:`HtmlCSSSelector` objects (ie. a :class:`CSSSelectorList` object)::
x.select("h1")
2. Extract the text of all ``<h1>`` elements from an HTML response body,
returning a list of unicode strings::
x.select("h1").extract() # this includes the h1 tag
x.select("h1").text().extract() # this excludes the h1 tag
3. Iterate over all ``<p>`` tags and print their class attribute::
for node in x.select("p"):
... print node.get("class").extract()
XmlCSSSelector objects
----------------------
.. class:: XmlCSSSelector(response)
A subclass of :class:`CSSSelectorMixin` and :class:`XmlXPathSelector` for
working with XML content using CSS selectors.
XmlCSSSelector examples
~~~~~~~~~~~~~~~~~~~~~~~
Here's a couple of :class:`XmlCSSSelector` examples to illustrate several
concepts. In both cases we assume there is already an :class:`XmlCSSSelector`
instantiated with a :class:`~scrapy.http.Response` object like this::
x = XmlCSSSelector(xml_response)
1. Select all ``<product>`` elements from an XML response body, returning a list
of :class:`XmlCSSSelector` objects (ie. a :class:`CSSSelectorList` object)::
x.select("product")
2. Extract all prices from a `Google Base XML feed`_ which requires registering
a namespace::
x.register_namespace("g", "http://base.google.com/ns/1.0")
x.xpath("//g:price").extract()
.. _Google Base XML feed: http://base.google.com/support/bin/answer.py?hl=en&answer=59461
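For comparison, working with a namespaced feed outside Scrapy can be
sketched with the standard library (the feed snippet below is made up for
illustration):

```python
import xml.etree.ElementTree as ET

feed = ('<rss xmlns:g="http://base.google.com/ns/1.0">'
        '<g:price>9.99</g:price><g:price>5.00</g:price></rss>')
root = ET.fromstring(feed)

# ElementTree addresses namespaced tags with the {uri}tag form
prices = [e.text for e in root.iter('{http://base.google.com/ns/1.0}price')]
# prices == ['9.99', '5.00']
```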


@ -24,3 +24,5 @@ else:
    from scrapy.selector.libxml2sel import *
else:
    from scrapy.selector.lxmlsel import *

from scrapy.selector.csssel import *

scrapy/selector/csssel.py Normal file

@ -0,0 +1,32 @@
from cssselect import GenericTranslator, HTMLTranslator
from scrapy.utils.python import flatten
from scrapy.selector import HtmlXPathSelector, XmlXPathSelector, XPathSelectorList
class CSSSelectorList(XPathSelectorList):

    def xpath(self, xpath):
        return self.__class__(flatten([x.xpath(xpath) for x in self]))

    def get(self, attr):
        return self.__class__(flatten([x.get(attr) for x in self]))

    def text(self, all=False):
        return self.__class__(flatten([x.text(all) for x in self]))


class CSSSelectorMixin(object):

    def select(self, css):
        return CSSSelectorList(super(CSSSelectorMixin, self).select(self.translator.css_to_xpath(css)))

    def xpath(self, xpath):
        return CSSSelectorList(super(CSSSelectorMixin, self).select(xpath))

    def text(self, all=False):
        return self.xpath('string()') if all else self.xpath('text()')

    def get(self, attr):
        return self.xpath('@' + attr)


class XmlCSSSelector(CSSSelectorMixin, XmlXPathSelector):
    translator = GenericTranslator()


class HtmlCSSSelector(CSSSelectorMixin, HtmlXPathSelector):
    translator = HTMLTranslator()
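To illustrate what ``translator.css_to_xpath`` produces, here is a toy
translation covering only the ``tag`` and ``tag[attr*=value]`` patterns (a
sketch only; the real cssselect translators handle the full selector
grammar):

```python
import re

def css_to_xpath_sketch(css):
    # Toy translator: handles only "tag" and "tag[attr*=value]"
    m = re.match(r'^(\w+)(?:\[(\w+)\*=(\w+)\])?$', css)
    tag, attr, value = m.groups()
    xpath = 'descendant-or-self::' + tag
    if attr:
        # [attr*=value] means "attribute contains substring"
        xpath += "[contains(@%s, '%s')]" % (attr, value)
    return xpath

css_to_xpath_sketch('a[href*=image]')
# -> "descendant-or-self::a[contains(@href, 'image')]"
```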


@ -11,7 +11,7 @@ from w3lib.url import any_to_uri
from scrapy.item import BaseItem
from scrapy.spider import BaseSpider
from scrapy.selector import XPathSelector, XmlXPathSelector, HtmlXPathSelector, XmlCSSSelector, HtmlCSSSelector
from scrapy.utils.spider import create_spider_for_request
from scrapy.utils.misc import load_object
from scrapy.utils.response import open_in_browser
@ -97,8 +97,12 @@ class Shell(object):
        self.vars['response'] = response
        self.vars['xxs'] = XmlXPathSelector(response) \
            if isinstance(response, XmlResponse) else None
        self.vars['xcs'] = XmlCSSSelector(response) \
            if isinstance(response, XmlResponse) else None
        self.vars['hxs'] = HtmlXPathSelector(response) \
            if isinstance(response, HtmlResponse) else None
        self.vars['hcs'] = HtmlCSSSelector(response) \
            if isinstance(response, HtmlResponse) else None
        if self.inthread:
            self.vars['fetch'] = self.fetch
        self.vars['view'] = open_in_browser


@ -122,6 +122,6 @@ try:
except ImportError:
from distutils.core import setup
else:
setup_args['install_requires'] = ['Twisted>=10.0.0', 'w3lib>=1.2', 'queuelib', 'lxml', 'pyOpenSSL', 'cssselect>0.8']
setup(**setup_args)