mirror of https://github.com/scrapy/scrapy.git synced 2025-02-06 08:49:32 +00:00

chore: fix some typos in comments (#6317)

Signed-off-by: TechVest <techdashen@qq.com>
TechVest 2024-04-17 16:56:26 +08:00 committed by GitHub
parent 1d11ea3a54
commit 5f67c01d1d
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
2 changed files with 3 additions and 3 deletions


@@ -1,7 +1,7 @@
 # .git-blame-ignore-revs
 # adding black formatter to all the code
 e211ec0aa26ecae0da8ae55d064ea60e1efe4d0d
-# re applying black to the code with default line length
+# reapplying black to the code with default line length
 303f0a70fcf8067adf0a909c2096a5009162383a
-# reaplying black again and removing line length on pre-commit black config
+# reapplying black again and removing line length on pre-commit black config
 c5cdd0d30ceb68ccba04af0e71d1b8e6678e2962
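The file in this hunk lists formatting-only commits for git blame to skip. A sketch of how such a file is typically wired up (standard git options; the file name here comes from the diff above):

```shell
# Point git blame at the ignore list once, for the whole repository:
git config blame.ignoreRevsFile .git-blame-ignore-revs

# Or pass it per invocation instead of configuring it:
git blame --ignore-revs-file .git-blame-ignore-revs scrapy/spiders/__init__.py
```

With either form, blame attributes lines past the listed reformatting commits to the change that actually introduced them.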


@@ -116,7 +116,7 @@ Reduce log level
 When doing broad crawls you are often only interested in the crawl rates you
 get and any errors found. These stats are reported by Scrapy when using the
 ``INFO`` log level. In order to save CPU (and log storage requirements) you
-should not use ``DEBUG`` log level when preforming large broad crawls in
+should not use ``DEBUG`` log level when performing large broad crawls in
 production. Using ``DEBUG`` level when developing your (broad) crawler may be
 fine though.
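The advice in this hunk corresponds to Scrapy's ``LOG_LEVEL`` setting; a minimal sketch of the production configuration it recommends:

```python
# settings.py (Scrapy project settings)
# Keep the log level at INFO for large broad crawls: crawl-rate stats and
# errors are still reported, without DEBUG's per-request noise that costs
# CPU and log storage. Drop to "DEBUG" only while developing the crawler.
LOG_LEVEL = "INFO"
```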