mirror of https://github.com/scrapy/scrapy.git synced 2025-02-26 11:03:45 +00:00

5 Commits

Author SHA1 Message Date
Jochen Maes
47a7f154ab Add listjobs.json to Scrapyd API
You can use listjobs.json with project=<projectname> to get a list of the jobs currently running for that project.
It returns a list of jobs with the spider name and job id.

Signed-off-by: Jochen Maes <jochen.maes@sejo.be>
---
 scrapyd/webservice.py |    9 +++++++++
 scrapyd/website.py    |    1 +
 2 files changed, 10 insertions(+), 0 deletions(-)
2011-03-09 14:22:10 -02:00
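A minimal sketch of querying the endpoint described in this commit (assuming Scrapyd on its default port 6800; the host URL and the exact response fields beyond spider name and job id are assumptions based on later Scrapyd versions, not confirmed by the commit itself):

```python
import json
from urllib.request import urlopen


def list_jobs(project, host="http://localhost:6800"):
    """Query Scrapyd's listjobs.json for the given project."""
    url = f"{host}/listjobs.json?project={project}"
    with urlopen(url) as resp:
        return json.load(resp)


# Illustrative response body: per the commit message, each job
# entry carries the spider name and its unique job id.
sample = '{"status": "ok", "running": [{"spider": "example", "id": "2f16646cfcbe11e1b0090800272a6d06"}]}'
data = json.loads(sample)
for job in data["running"]:
    print(job["spider"], job["id"])
```

In practice `list_jobs("myproject")` would be called against a running Scrapyd instance; the `sample` payload above only illustrates the shape of the data.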
Pablo Hoffman
fa644f7a5e Some simplifications to Scrapyd architecture and internals:
- launcher no longer knows about egg storage
- removed the get_spider_list_from_eggfile() function and replaced it with a
  simpler get_spider_list() which doesn't receive an egg file as argument
- changed the "egg runner" name to just "runner" to reflect the fact that it
  doesn't necessarily run eggs (though it does in the default case)

--HG--
rename : scrapyd/eggrunner.py => scrapyd/runner.py
2010-12-27 16:22:32 -02:00
Pablo Hoffman
831dc818d6 scrapyd: added more information to the web UI homepage 2010-11-30 18:43:59 -02:00
Pablo Hoffman
5c4f562ec4 scrapyd: changed keys used in poller message to _project, _spider, _job, and added link to log file in web ui 2010-11-30 13:03:20 -02:00
Pablo Hoffman
df54ed0041 Some Scrapyd enhancements:
* added minimal web ui
* return unique id per job (spider scheduled)
* store one log per spider run (job) and rotate them, keeping the last N logs (where N is configurable through settings)
2010-11-30 02:26:31 -02:00
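The log-rotation setting introduced in this commit can be sketched in a Scrapyd config file (a minimal example; the section name follows Scrapyd's scrapyd.conf convention, and the setting name `logs_to_keep` is an assumption based on later Scrapyd documentation, not stated in the commit message):

```ini
# scrapyd.conf -- keep only the last N log files per spider
[scrapyd]
logs_to_keep = 5
```

With a setting like this, each scheduled run (job) writes its own log file, and older logs beyond the configured count are rotated out.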