mirror of
https://github.com/scrapy/scrapy.git
synced 2025-02-23 07:23:40 +00:00
* multiple projects
* uploading scrapy projects as Python eggs
* scheduling spiders using a JSON API

Documentation is added along with the code. Closes #218.

--HG--
rename : debian/scrapy-service.default => debian/scrapyd.default
rename : debian/scrapy-service.dirs => debian/scrapyd.dirs
rename : debian/scrapy-service.install => debian/scrapyd.install
rename : debian/scrapy-service.lintian-overrides => debian/scrapyd.lintian-overrides
rename : debian/scrapy-service.postinst => debian/scrapyd.postinst
rename : debian/scrapy-service.postrm => debian/scrapyd.postrm
rename : debian/scrapy-service.upstart => debian/scrapyd.upstart
rename : extras/scrapy.tac => extras/scrapyd.tac
26 lines
973 B
Plaintext
Source: scrapy
Section: python
Priority: optional
Maintainer: Insophia Team <info@insophia.com>
Build-Depends: debhelper (>= 7.0.50), python (>=2.5), python-twisted
Standards-Version: 3.8.4
Homepage: http://scrapy.org/

Package: scrapy
Architecture: all
Depends: ${python:Depends}, python-libxml2, python-twisted, python-openssl
Description: Python web crawling and scraping framework
 Scrapy is a fast high-level screen scraping and web crawling framework,
 used to crawl websites and extract structured data from their pages.
 It can be used for a wide range of purposes, from data mining to
 monitoring and automated testing.
Package: scrapyd
Architecture: all
Depends: scrapy, python-setuptools
Description: Scrapy Service
 The Scrapy service allows you to deploy your Scrapy projects by building
 Python eggs of them and uploading them to the Scrapy service using a JSON API
 that you can also use for scheduling spider runs. It also supports
 multiple projects.
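The JSON API mentioned in the scrapyd description is driven over plain HTTP. As a minimal sketch of what a schedule request looks like (the `/schedule.json` endpoint name follows Scrapyd's documented API; the host, project, and spider names below are placeholders):

```python
# Sketch: compose a POST request to Scrapyd's /schedule.json endpoint.
# The endpoint name is per Scrapyd's documented API; SCRAPYD, "myproject",
# and "myspider" are placeholder values, not names from this package.
from urllib.parse import urlencode

SCRAPYD = "http://localhost:6800"

def schedule_request(project, spider, **settings):
    """Return the URL and form-encoded body for scheduling a spider run."""
    params = {"project": project, "spider": spider, **settings}
    return f"{SCRAPYD}/schedule.json", urlencode(params)

url, body = schedule_request("myproject", "myspider")
print(url)   # http://localhost:6800/schedule.json
print(body)  # project=myproject&spider=myspider
```

In practice the same request is a one-liner with curl (`curl http://localhost:6800/schedule.json -d project=myproject -d spider=myspider`); the running service replies with a JSON object containing the job id.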