Releases: scrapy/scrapyd
1.4.1
1.4.0
Added
- Add `item_url` and `log_url` to the response from the listjobs.json webservice. (@mxdev88)
- Scrapy 2.8 support. Scrapyd sets `LOG_FILE` and `FEEDS` command-line arguments, instead of `SCRAPY_LOG_FILE` and `SCRAPY_FEED_URI` environment variables.
- Python 3.11 support.
- Python 3.12 support. Use `packaging.version.Version` instead of `distutils.LooseVersion`. (@pawelmhm)
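The new listjobs.json fields can be read straight out of the JSON response. A minimal sketch, using the field names from this changelog and an illustrative, hand-written response rather than real server output:

```python
import json

# Hypothetical listjobs.json response; in practice you would fetch this
# from http://<scrapyd-host>:6800/listjobs.json?project=<project>.
# The item_url and log_url fields added in 1.4.0 are relative URLs to
# the job's scraped items and log file.
sample = json.loads("""
{
  "status": "ok",
  "finished": [
    {"project": "myproject", "spider": "myspider",
     "id": "ab12cd34ef56",
     "log_url": "/logs/myproject/myspider/ab12cd34ef56.log",
     "item_url": "/items/myproject/myspider/ab12cd34ef56.jl"}
  ]
}
""")

for job in sample["finished"]:
    # Join these relative URLs against the Scrapyd base URL before fetching.
    print(job["id"], job["log_url"], job["item_url"])
```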
Changed
- Rename environment variables to avoid spurious Scrapy deprecation warnings.
  - `SCRAPY_EGG_VERSION` to `SCRAPYD_EGG_VERSION`
  - `SCRAPY_FEED_URI` to `SCRAPYD_FEED_URI`
  - `SCRAPY_JOB` to `SCRAPYD_JOB`
  - `SCRAPY_LOG_FILE` to `SCRAPYD_LOG_FILE`
  - `SCRAPY_SLOT` to `SCRAPYD_SLOT`
  - `SCRAPY_SPIDER` to `SCRAPYD_SPIDER`
::: attention
These are undocumented and unused, and may be removed in future versions. If you use these environment variables, please report your use in an issue.
:::
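Code that read the old variable names needs to follow the rename. A minimal sketch (keeping in mind these variables are undocumented, per the note above), which prefers the new `SCRAPYD_JOB` name and falls back to the legacy `SCRAPY_JOB` for older Scrapyd versions:

```python
import os

def current_job_id():
    # Inside a spider process launched by Scrapyd, the job id is exposed
    # via an environment variable: SCRAPYD_JOB as of this release,
    # SCRAPY_JOB in earlier versions.
    return os.environ.get("SCRAPYD_JOB") or os.environ.get("SCRAPY_JOB")

# Simulate a Scrapyd-launched process for illustration.
os.environ["SCRAPYD_JOB"] = "ab12cd34ef56"
print(current_job_id())  # → ab12cd34ef56
```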
Removed
- Scrapy 1.x support.
- Python 3.6 support.
- Unmaintained files (Debian packaging) and unused code (`scrapyd/script.py`).
Fixed
- Print Scrapyd's version instead of Twisted's version with the `--version` (`-v`) flag. (@niuguy)
- Override Scrapy's `LOG_STDOUT` setting to `False`, to suppress logging output for the listspiders.json webservice. (@Lucioric2000)
1.3.0
Added
- Support for HTTP authentication in the Scrapyd server.
- Jobs website shortcut to cancel a job using the cancel.json webservice.
- Make the project argument to listjobs.json optional, so that all jobs can easily be queried.
- Python 3.7, 3.8, 3.9, 3.10 support.
- Configuration option for job storage class.
- Configuration option for egg storage class.
- Improved HTTP headers in the webservice.
- Improved test coverage.
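HTTP authentication is enabled through the Scrapyd configuration file. A minimal sketch, assuming the `username`/`password` option names introduced with this feature:

```ini
[scrapyd]
# Enable HTTP basic authentication for the webservice and website.
# Option names assumed from this release's notes; values are placeholders.
username = myuser
password = mypassword
```

With both options set, clients must send matching basic-auth credentials on every request; leaving them unset keeps the server open as before.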
Removed
- Python 2 support.
- Python 3.3 support (although never officially supported).
- Python 3.4 support.
- Python 3.5 support.
- PyPy 2 support.
- Documentation for Ubuntu installs, as Zyte no longer maintains the Ubuntu repo.
Fixed
- Scrapyd now respects the Scrapy `TWISTED_REACTOR` setting.
- Replaced deprecated `SafeConfigParser` with `ConfigParser`.
1.2.1
1.2.0
The highlight of this release is the long-awaited Python 3 support.
The new Scrapy requirement is version 1.0 or higher.
Python 2.6 is no longer supported by Scrapyd.
Some unused SQLite utilities are now deprecated
and will be removed from a later Scrapyd release.
Instantiating them or subclassing from them
will trigger a deprecation warning.
These are located under `scrapyd.sqlite`:
- `SqliteDict`
- `SqlitePickleDict`
- `SqlitePriorityQueue`
- `PickleSqlitePriorityQueue`
Added
- Include the run's PID in the listjobs webservice.
- Include full tracebacks from Scrapy when failing to get the spider list.
  This will lead to noisier webservice output
  but will make debugging deployment problems much easier.
- Include start/finish time in the daemon's joblist page.
- Twisted 16 compatibility.
- Python 3 compatibility.
- Make console script executable.
- Project version argument in the schedule webservice.
- Configuration option for website root class.
- Optional jobid argument to the schedule webservice.
- Contribution documentation.
- Daemon status webservice.
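The new schedule webservice arguments are passed as form-encoded POST data. A minimal sketch of building such a request body, assuming the conventional `_version` and `jobid` parameter names (project and spider names here are placeholders):

```python
from urllib.parse import urlencode

# Parameters for a POST to http://<scrapyd-host>:6800/schedule.json.
# _version selects a specific uploaded project version; jobid lets the
# caller choose the job id instead of receiving a generated one.
params = {
    "project": "myproject",
    "spider": "myspider",
    "_version": "r123",
    "jobid": "my-custom-job",
}
body = urlencode(params)
print(body)
```

The encoded body would then be sent with `Content-Type: application/x-www-form-urlencoded`, e.g. via `urllib.request` or `curl -d`.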
Removed
- Scrapyd's bind_address now defaults to 127.0.0.1 instead of 0.0.0.0,
  to listen only for connections from the local host.
- Scrapy < 1.0 compatibility.
- Python < 2.7 compatibility.
Fixed
- Poller race condition for concurrently accessed queues
1.1.1
Removed
- Disabled the bdist_wheel command in setup to define dynamic requirements,
  in spite of the pip 7 wheel caching bug.
Fixed
- Use the correct type adapter for sqlite3 blobs.
  On some systems, a wrong type adapter leads to incorrect buffer reads/writes.
- FEED_URI was always overridden by Scrapyd.
- Specified maximum versions for requirements that became incompatible.
- Marked package as zip-unsafe because twistd requires a plain `txapp.py`.
- Don't install zipped Scrapy in the py26 CI env,
  because its setup doesn't include the `scrapy/VERSION` file.
Added
- Enabled some missing tests for the sqlite queues.
- Enabled CI tests for Python 2.6, because it was supported by the 1.1 release.
- Document missing config options and include them in default_scrapyd.conf.
- Note the spider queue's `priority` argument in the scheduler's doc.
1.0.2
setup script
- Specified maximum versions for requirements that became incompatible.
- Marked package as zip-unsafe because twistd requires a plain `txapp.py`
documentation
- Updated broken links and references to wrong versions of Scrapy
- Warn that Scrapyd 1.0 is falling out of support
1.1.0
Features & Enhancements
- Outsource scrapyd-deploy command to scrapyd-client (#92, #90)
- Look for a .scrapyd.conf file in the user's home directory (~/.scrapyd.conf) (#58)
- Add the nodename to identify the process that is working on the job (#42)
- Allow remote items store (#48)
- Debian sysvinit script (#41)
- Add 'start_time' field in webservice for running jobs (#24)
Bugfixes
- Updating integration test script (#98)
- Changed scripts to be installed using entry_points (#89)
- Fix bug with --list-projects option in scrapyd-deploy (#88)
- Update api.rst (#79)
- Renovate scrapy upstart job a bit (#57)
- Sanitize version names when creating egg paths (#72)
- Use w3lib to generate feed uris (#73)
- Copy txweb/JsonResource import from scrapy (#62)
- Travis.yml: remove deprecated --use-mirrors pip option (b3cdc61)
- Make scrapyd package zip unsafe because the scrapyd command requires the txapp.py unpacked to run (f27c054, #49)
- Check if a spider exists before scheduling it (with sqlite cache) (#8, #17)
- Fixing typo "mulitplied" (#51)
- Fix GIT versioning for projects without annotated tags (#47)
- Fix release notes: 1.0 is already released (6c8dcfb)
- Correcting HTML tags in scrapyd website monitor (#38)
- Update index.rst (#37)
- Added missing anchor closing tags (#35)
- Removed python 2.6/lucid env from travis (#32)
- Changed the links to the new documentation page (#33)
- Fix (at least) windows problem (#19)
- Remove reference to 'scrapy server' command (f599b60, #25)
- Made Scrapyd package name lowercase (1adfc31)