Settings¶
The Scrapy settings allow you to customize the behaviour of all Scrapy components, including the core, extensions, pipelines and spiders themselves.
The infrastructure of the settings provides a global namespace of key-value mappings that the code can use to pull configuration values from. The settings can be populated through different mechanisms, which are described below.
The settings are also the mechanism for selecting the currently active Scrapy project (in case you have many).
For a list of available built-in settings see: Built-in settings reference.
Designating the settings¶
When you use Scrapy, you have to tell it which settings you're using. You can do this by using an environment variable, SCRAPY_SETTINGS_MODULE.
The value of SCRAPY_SETTINGS_MODULE should be in Python path syntax, e.g. myproject.settings. Note that the settings module should be on the Python import search path.
Populating the settings¶
Settings can be populated using different mechanisms, each of which has a different precedence. Here is the list of them in decreasing order of precedence:
- Command line options (most precedence)
- Settings per-spider
- Project settings module
- Default settings per-command
- Default global settings (less precedence)
The population of these settings sources is taken care of internally, but a manual handling is possible using API calls. See the Settings API topic for reference.
These mechanisms are described in more detail below.
1. Command line options¶
Arguments provided by the command line are the ones that take most precedence, overriding any other options. You can explicitly override one (or more) settings using the -s (or --set) command line option.
Example:
scrapy crawl myspider -s LOG_FILE=scrapy.log
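Multiple settings can be overridden in a single invocation by repeating the option; a sketch (the spider name and the delay value are illustrative):
scrapy crawl myspider -s LOG_FILE=scrapy.log -s DOWNLOAD_DELAY=2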
2. Settings per-spider¶
Spiders (see the Spiders chapter for reference) can define their own settings that will take precedence and override the project ones. They can do so by setting their custom_settings attribute:
class MySpider(scrapy.Spider):
name = 'myspider'
custom_settings = {
'SOME_SETTING': 'some value',
}
3. Project settings module¶
The project settings module is the standard configuration file for your Scrapy project. It's where most of your custom settings will be populated. For a standard Scrapy project, this means you'll be adding or changing the settings in the settings.py file created for your project.
4. Default settings per-command¶
Each Scrapy tool command can have its own default settings, which override the global default settings. Those custom command settings are specified in the default_settings attribute of the command class.
5. Default global settings¶
The global defaults are located in the scrapy.settings.default_settings module and documented in the Built-in settings reference section.
How to access settings¶
In a spider, the settings are available through self.settings:
class MySpider(scrapy.Spider):
name = 'myspider'
start_urls = ['http://example.com']
def parse(self, response):
print("Existing settings: %s" % self.settings.attributes.keys())
Note
The settings attribute is set in the base Spider class after the spider is initialized. If you want to use the settings before the initialization (e.g., in your spider's __init__() method), you'll need to override the from_crawler() method.
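A minimal sketch of that pattern; the base from_crawler() builds the spider and binds the crawler and its settings, after which they can be read (MYPROJECT_API_KEY is a hypothetical setting name):
import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        # The base implementation creates the spider and attaches the crawler.
        spider = super(MySpider, cls).from_crawler(crawler, *args, **kwargs)
        # Settings are now available, before crawling starts.
        spider.api_key = crawler.settings.get('MYPROJECT_API_KEY')  # hypothetical setting
        return spider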
Settings can be accessed through the scrapy.crawler.Crawler.settings attribute of the Crawler that is passed to the from_crawler method in extensions, middlewares and item pipelines:
class MyExtension(object):
def __init__(self, log_is_enabled=False):
if log_is_enabled:
print("log is enabled!")
@classmethod
def from_crawler(cls, crawler):
settings = crawler.settings
return cls(settings.getbool('LOG_ENABLED'))
The settings object can be used like a dict (e.g., settings['LOG_ENABLED']), but it's usually preferred to extract the setting in the format you need it to avoid type errors, using one of the methods provided by the Settings API.
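For instance, the typed getters of the Settings API convert string values (such as those set on the command line) into the type you need; a brief sketch, assuming a settings object is already in scope (e.g. self.settings in a spider):
log_enabled = settings.getbool('LOG_ENABLED')     # '0'/'1'/'True'/'False' -> bool
delay = settings.getfloat('DOWNLOAD_DELAY', 0.0)  # -> float, with a default
modules = settings.getlist('SPIDER_MODULES')      # comma-separated string -> list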
Rationale for setting names¶
Setting names are usually prefixed with the component that they configure. For example, proper setting names for a fictional robots.txt extension would be ROBOTSTXT_ENABLED, ROBOTSTXT_OBEY, ROBOTSTXT_CACHEDIR, etc.
Built-in settings reference¶
Here's a list of all available Scrapy settings, in alphabetical order, along with their default values and the scope where they apply.
The scope, where available, shows where the setting is being used, if it's tied to any particular component. In that case the module of that component will be shown, typically an extension, middleware or pipeline. It also means that the component must be enabled in order for the setting to take any effect.
BOT_NAME¶
Default: 'scrapybot'
The name of the bot implemented by this Scrapy project (also known as the project name). This will be used to construct the User-Agent by default, and also for logging.
It's automatically populated with your project name when you create your project with the startproject command.
CONCURRENT_REQUESTS_PER_DOMAIN¶
Default: 8
The maximum number of concurrent (i.e. simultaneous) requests that will be performed to any single domain.
See also: the AutoThrottle extension and its AUTOTHROTTLE_TARGET_CONCURRENCY option.
CONCURRENT_REQUESTS_PER_IP¶
Default: 0
The maximum number of concurrent (i.e. simultaneous) requests that will be performed to any single IP. If non-zero, the CONCURRENT_REQUESTS_PER_DOMAIN setting is ignored, and this one is used instead. In other words, concurrency limits will be applied per IP, not per domain.
This setting also affects DOWNLOAD_DELAY and the AutoThrottle extension: if CONCURRENT_REQUESTS_PER_IP is non-zero, download delay is enforced per IP, not per domain.
DEFAULT_REQUEST_HEADERS¶
Default:
{
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en',
}
The default headers used for Scrapy HTTP Requests. They're populated in the DefaultHeadersMiddleware.
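To change these defaults project-wide, redefine the dict in your settings.py; a sketch (the Accept-Language value shown is an illustrative choice, not a Scrapy default):
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.8',  # replaces the default 'en'
}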
DEPTH_PRIORITY¶
Default: 0
An integer that is used to adjust the request priority based on its depth:
- if zero (default), no priority adjustment is made from depth
- a positive value will decrease the priority, i.e. higher depth requests will be processed later; this is commonly used when doing breadth-first crawls (BFO)
- a negative value will increase the priority, i.e. higher depth requests will be processed sooner (DFO)
See also: Does Scrapy crawl in breadth-first or depth-first order? about tuning Scrapy for BFO or DFO.
Note
This setting adjusts priority in the opposite way compared to other priority settings REDIRECT_PRIORITY_ADJUST and RETRY_PRIORITY_ADJUST.
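As a sketch of the breadth-first setup mentioned above, combining DEPTH_PRIORITY with FIFO scheduler queues in settings.py (queue class paths as in Scrapy's FAQ on crawl order; verify them against your Scrapy version):
# Process shallow requests first (breadth-first order).
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'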
DOWNLOADER_HTTPCLIENTFACTORY¶
Default: 'scrapy.core.downloader.webclient.ScrapyHTTPClientFactory'
Defines a Twisted protocol.ClientFactory class to use for HTTP/1.0 connections (for HTTP10DownloadHandler).
Note
HTTP/1.0 is rarely used nowadays so you can safely ignore this setting, unless you use Twisted<11.1, or if you really want to use HTTP/1.0 and override DOWNLOAD_HANDLERS_BASE for http(s) scheme accordingly, i.e. to 'scrapy.core.downloader.handlers.http.HTTP10DownloadHandler'.
DOWNLOADER_CLIENTCONTEXTFACTORY¶
Default: 'scrapy.core.downloader.contextfactory.ScrapyClientContextFactory'
Represents the classpath to the ContextFactory to use. Here, "ContextFactory" is a Twisted term for SSL/TLS contexts, defining the TLS/SSL protocol version to use, whether to do certificate verification, or even enable client-side authentication (and various other things).
Note
Scrapy default context factory does NOT perform remote server certificate verification. This is usually fine for web scraping.
If you do need remote server certificate verification enabled, Scrapy also has another context factory class that you can set, 'scrapy.core.downloader.contextfactory.BrowserLikeContextFactory', which uses the platform's certificates to validate remote endpoints. This is only available if you use Twisted>=14.0.
If you do use a custom ContextFactory, make sure it accepts a method parameter at init (this is the OpenSSL.SSL method mapping DOWNLOADER_CLIENT_TLS_METHOD).
DOWNLOADER_CLIENT_TLS_METHOD¶
Default: 'TLS'
Use this setting to customize the TLS/SSL method used by the default HTTP/1.1 downloader.
This setting must be one of these string values:
- 'TLS': maps to OpenSSL's TLS_method() (a.k.a. SSLv23_method()), which allows protocol negotiation, starting from the highest supported by the platform; default, recommended
- 'TLSv1.0': this value forces HTTPS connections to use TLS version 1.0; set this if you want the behavior of Scrapy<1.1
- 'TLSv1.1': forces TLS version 1.1
- 'TLSv1.2': forces TLS version 1.2
- 'SSLv3': forces SSL version 3 (not recommended)
Note
This feature needs Twisted >= 11.1.
DOWNLOADER_MIDDLEWARES_BASE¶
Default:
{
'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,
'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,
'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware': 830,
'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,
}
A dict containing the downloader middlewares enabled by default in Scrapy, and their orders. Low orders are closer to the engine, high orders are closer to the downloader. You should never modify this setting in your project; modify DOWNLOADER_MIDDLEWARES instead. For more info see Activating a downloader middleware.
DOWNLOAD_DELAY¶
Default: 0
The amount of time (in secs) that the downloader should wait before downloading consecutive pages from the same website. This can be used to throttle the crawling speed to avoid hitting servers too hard. Decimal numbers are supported. Example:
DOWNLOAD_DELAY = 0.25 # 250 ms of delay
This setting is also affected by the RANDOMIZE_DOWNLOAD_DELAY setting (which is enabled by default). By default, Scrapy doesn't wait a fixed amount of time between requests, but uses a random interval between 0.5 * DOWNLOAD_DELAY and 1.5 * DOWNLOAD_DELAY.
When CONCURRENT_REQUESTS_PER_IP is non-zero, delays are enforced per IP address instead of per domain.
You can also change this setting per spider by setting the download_delay spider attribute.
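A brief sketch of the per-spider attribute (the spider name and delay value are illustrative):
import scrapy

class PoliteSpider(scrapy.Spider):
    name = 'polite'        # hypothetical spider
    download_delay = 1.5   # seconds between requests to the same site

    def parse(self, response):
        pass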
DOWNLOAD_HANDLERS_BASE¶
Default:
{
'file': 'scrapy.core.downloader.handlers.file.FileDownloadHandler',
'http': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',
'https': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',
's3': 'scrapy.core.downloader.handlers.s3.S3DownloadHandler',
'ftp': 'scrapy.core.downloader.handlers.ftp.FTPDownloadHandler',
}
A dict containing the default request download handlers enabled in Scrapy. You should never modify this setting in your project, modify DOWNLOAD_HANDLERS instead.
You can disable any of these download handlers by assigning None to their URI scheme in DOWNLOAD_HANDLERS. E.g., to disable the built-in FTP handler (with no replacement), place this in your settings.py:
DOWNLOAD_HANDLERS = {
'ftp': None,
}
DOWNLOAD_TIMEOUT¶
Default: 180
The amount of time (in secs) that the downloader will wait before timing out.
Note
This timeout can be set per spider using the download_timeout spider attribute and per-request using the download_timeout Request.meta key.
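A minimal sketch of the per-request form (the spider name and URL are placeholders):
import scrapy

class SlowSiteSpider(scrapy.Spider):
    name = 'slowsite'  # hypothetical spider

    def start_requests(self):
        # Allow this endpoint more time than the global DOWNLOAD_TIMEOUT.
        yield scrapy.Request('http://example.com/slow',
                             meta={'download_timeout': 300},
                             callback=self.parse)

    def parse(self, response):
        self.logger.info('downloaded %s', response.url)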
DOWNLOAD_MAXSIZE¶
Default: 1073741824 (1024MB)
The maximum response size (in bytes) that the downloader will download.
If you want to disable it set to 0.
Note
This size can be set per spider using the download_maxsize spider attribute and per-request using the download_maxsize Request.meta key.
This feature needs Twisted >= 11.1.
DOWNLOAD_WARNSIZE¶
Default: 33554432 (32MB)
The response size (in bytes) that the downloader will start to warn.
If you want to disable it set to 0.
Note
This size can be set per spider using the download_warnsize spider attribute and per-request using the download_warnsize Request.meta key.
This feature needs Twisted >= 11.1.
DUPEFILTER_CLASS¶
Default: 'scrapy.dupefilters.RFPDupeFilter'
The class used to detect and filter duplicate requests.
The default (RFPDupeFilter) filters based on request fingerprint using the scrapy.utils.request.request_fingerprint function. In order to change the way duplicates are checked you could subclass RFPDupeFilter and override its request_fingerprint method. This method should accept a Request object and return its fingerprint (a string).
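A minimal sketch of such a subclass; ignoring the URL fragment when fingerprinting is an illustrative policy, not Scrapy's default behaviour:
from scrapy.dupefilters import RFPDupeFilter
from scrapy.utils.request import request_fingerprint

class FragmentBlindDupeFilter(RFPDupeFilter):
    """Treat URLs that differ only in their #fragment as duplicates."""

    def request_fingerprint(self, request):
        # Request.replace() returns a copy of the request with the modified URL.
        url = request.url.split('#')[0]
        return request_fingerprint(request.replace(url=url))
You would then point DUPEFILTER_CLASS at the class path, e.g. DUPEFILTER_CLASS = 'myproject.dupefilters.FragmentBlindDupeFilter' (a hypothetical module path).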
DUPEFILTER_DEBUG¶
Default: False
By default, RFPDupeFilter only logs the first duplicate request. Setting DUPEFILTER_DEBUG to True will make it log all duplicate requests.
EDITOR¶
Default: depends on the environment
The editor to use for editing spiders with the edit command. It defaults to the EDITOR environment variable, if set. Otherwise it defaults to vi (on Unix systems) or the IDLE editor (on Windows).
EXTENSIONS_BASE¶
Default:
{
'scrapy.extensions.corestats.CoreStats': 0,
'scrapy.extensions.telnet.TelnetConsole': 0,
'scrapy.extensions.memusage.MemoryUsage': 0,
'scrapy.extensions.memdebug.MemoryDebugger': 0,
'scrapy.extensions.closespider.CloseSpider': 0,
'scrapy.extensions.feedexport.FeedExporter': 0,
'scrapy.extensions.logstats.LogStats': 0,
'scrapy.extensions.spiderstate.SpiderState': 0,
'scrapy.extensions.throttle.AutoThrottle': 0,
}
A dict containing the extensions available by default in Scrapy, and their orders. This setting contains all stable built-in extensions. Keep in mind that some of them need to be enabled through a setting.
For more information see the extensions user guide and the list of available extensions.
FEED_TEMPDIR¶
The Feed Temp dir allows you to set a custom folder to save crawler temporary files before uploading with FTP feed storage and Amazon S3.
ITEM_PIPELINES¶
Default: {}
A dict containing the item pipelines to use, and their orders. Order values are arbitrary, but it is customary to define them in the 0-1000 range. Lower orders process before higher orders.
Example:
ITEM_PIPELINES = {
'mybot.pipelines.validate.ValidateMyItem': 300,
'mybot.pipelines.validate.StoreMyItem': 800,
}
LOG_FORMAT¶
Default: '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
String for formatting log messages. Refer to the Python logging documentation for the whole list of available placeholders.
LOG_DATEFORMAT¶
Default: '%Y-%m-%d %H:%M:%S'
String for formatting date/time, expansion of the %(asctime)s placeholder in LOG_FORMAT. Refer to the Python datetime documentation for the whole list of available directives.
MEMDEBUG_NOTIFY¶
Default: []
When memory debugging is enabled a memory report will be sent to the specified addresses if this setting is not empty, otherwise the report will be written to the log.
Example:
MEMDEBUG_NOTIFY = ['[email protected]']
MEMUSAGE_ENABLED¶
Default: False
Scope: scrapy.extensions.memusage
Whether to enable the memory usage extension that will shutdown the Scrapy process when it exceeds a memory limit, and also notify by email when that has happened.
MEMUSAGE_LIMIT_MB¶
Default: 0
Scope: scrapy.extensions.memusage
The maximum amount of memory to allow (in megabytes) before shutting down Scrapy (if MEMUSAGE_ENABLED is True). If zero, no check will be performed.
MEMUSAGE_CHECK_INTERVAL_SECONDS¶
New in version 1.1.
Default: 60.0
Scope: scrapy.extensions.memusage
The Memory usage extension checks the current memory usage, versus the limits set by MEMUSAGE_LIMIT_MB and MEMUSAGE_WARNING_MB, at fixed time intervals.
This sets the length of these intervals, in seconds.
MEMUSAGE_NOTIFY_MAIL¶
Default: False
Scope: scrapy.extensions.memusage
A list of emails to notify if the memory limit has been reached.
Example:
MEMUSAGE_NOTIFY_MAIL = ['[email protected]']
MEMUSAGE_REPORT¶
Default: False
Scope: scrapy.extensions.memusage
Whether to send a memory usage report after each spider has been closed.
MEMUSAGE_WARNING_MB¶
Default: 0
Scope: scrapy.extensions.memusage
The maximum amount of memory to allow (in megabytes) before sending a warning email notifying about it. If zero, no warning will be produced.
RANDOMIZE_DOWNLOAD_DELAY¶
Default: True
If enabled, Scrapy will wait a random amount of time (between 0.5 * DOWNLOAD_DELAY and 1.5 * DOWNLOAD_DELAY) while fetching requests from the same website.
This randomization decreases the chance of the crawler being detected (and subsequently blocked) by sites which analyze requests looking for statistically significant similarities in the time between their requests.
The randomization policy is the same used by the wget --random-wait option.
If DOWNLOAD_DELAY is zero (default) this option has no effect.
REACTOR_THREADPOOL_MAXSIZE¶
Default: 10
The maximum limit for the Twisted Reactor thread pool size. This is a common multi-purpose thread pool used by various Scrapy components: the threaded DNS resolver, BlockingFeedStorage and S3FilesStore, just to name a few. Increase this value if you're experiencing problems with insufficient blocking IO.
REDIRECT_MAX_TIMES¶
Default: 20
Defines the maximum number of times a request can be redirected. After this maximum the request's response is returned as is. We used Firefox default value for the same task.
REDIRECT_PRIORITY_ADJUST¶
Default: +2
Scope: scrapy.downloadermiddlewares.redirect.RedirectMiddleware
Adjust redirect request priority relative to original request:
- a positive priority adjust (default) means higher priority.
- a negative priority adjust means lower priority.
RETRY_PRIORITY_ADJUST¶
Default: -1
Scope: scrapy.downloadermiddlewares.retry.RetryMiddleware
Adjust retry request priority relative to original request:
- a positive priority adjust means higher priority.
- a negative priority adjust (default) means lower priority.
ROBOTSTXT_OBEY¶
Default: False
Scope: scrapy.downloadermiddlewares.robotstxt
If enabled, Scrapy will respect robots.txt policies. For more information see RobotsTxtMiddleware.
Note
While the default value is False for historical reasons, this option is enabled by default in the settings.py file generated by the scrapy startproject command.
SCHEDULER_DEBUG¶
Default: False
Setting this to True will log debug information about the requests scheduler. This currently logs (only once) if the requests cannot be serialized to disk. A stats counter (scheduler/unserializable) tracks the number of times this happens.
Example entry in logs:
1956-01-31 00:00:00+0800 [scrapy] ERROR: Unable to serialize request:
<GET http://example.com> - reason: cannot serialize <Request at 0x9a7c7ec>
(type Request)> - no more unserializable requests will be logged
(see 'scheduler/unserializable' stats counter)
SPIDER_CONTRACTS_BASE¶
Default:
{
'scrapy.contracts.default.UrlContract' : 1,
'scrapy.contracts.default.ReturnsContract': 2,
'scrapy.contracts.default.ScrapesContract': 3,
}
A dict containing the Scrapy contracts enabled by default in Scrapy. You should never modify this setting in your project, modify SPIDER_CONTRACTS instead. For more info see Spiders Contracts.
You can disable any of these contracts by assigning None to their class path in SPIDER_CONTRACTS. E.g., to disable the built-in ScrapesContract, place this in your settings.py:
SPIDER_CONTRACTS = {
'scrapy.contracts.default.ScrapesContract': None,
}
SPIDER_LOADER_CLASS¶
Default: 'scrapy.spiderloader.SpiderLoader'
The class that will be used for loading spiders, which must implement the SpiderLoader API.
SPIDER_MIDDLEWARES¶
Default: {}
A dict containing the spider middlewares enabled in your project, and their orders. For more info see Activating a spider middleware.
SPIDER_MIDDLEWARES_BASE¶
Default:
{
'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': 50,
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': 500,
'scrapy.spidermiddlewares.referer.RefererMiddleware': 700,
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware': 800,
'scrapy.spidermiddlewares.depth.DepthMiddleware': 900,
}
A dict containing the spider middlewares enabled by default in Scrapy, and their orders. Low orders are closer to the engine, high orders are closer to the spider. For more info see Activating a spider middleware.
SPIDER_MODULES¶
Default: []
A list of modules where Scrapy will look for spiders.
Example:
SPIDER_MODULES = ['mybot.spiders_prod', 'mybot.spiders_dev']
TELNETCONSOLE_PORT¶
Default: [6023, 6073]
The (min, max) range of ports to use for the telnet console. If set to None or 0, a dynamically assigned port is used. For more info see Telnet Console.
TEMPLATES_DIR¶
Default: templates dir inside scrapy module
The directory where to look for templates when creating new projects with the startproject command and new spiders with the genspider command.
The project name must not conflict with the name of custom files or directories in the project subdirectory.
URLLENGTH_LIMIT¶
Default: 2083
Scope: spidermiddlewares.urllength
The maximum URL length to allow for crawled URLs. For more information about the default value for this setting see: http://www.boutell.com/newfaq/misc/urllength.html
USER_AGENT¶
默认值:"Scrapy/VERSION (+http://scrapy.org)"
The default User-Agent to use when crawling, unless overridden.
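A sketch of overriding it in settings.py (the bot name and contact URL are placeholders to adapt):
USER_AGENT = 'mybot (+http://www.example.com/bot-info)'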
Settings documented elsewhere:¶
The following settings are documented elsewhere, please check each specific case to see how to enable and use them.
- AJAXCRAWL_ENABLED
- AUTOTHROTTLE_DEBUG
- AUTOTHROTTLE_ENABLED
- AUTOTHROTTLE_MAX_DELAY
- AUTOTHROTTLE_START_DELAY
- AUTOTHROTTLE_TARGET_CONCURRENCY
- CLOSESPIDER_ERRORCOUNT
- CLOSESPIDER_ITEMCOUNT
- CLOSESPIDER_PAGECOUNT
- CLOSESPIDER_TIMEOUT
- COMMANDS_MODULE
- COMPRESSION_ENABLED
- COOKIES_DEBUG
- COOKIES_ENABLED
- FEED_EXPORTERS
- FEED_EXPORTERS_BASE
- FEED_EXPORT_ENCODING
- FEED_EXPORT_FIELDS
- FEED_FORMAT
- FEED_STORAGES
- FEED_STORAGES_BASE
- FEED_STORE_EMPTY
- FEED_URI
- FILES_EXPIRES
- FILES_RESULT_FIELD
- FILES_STORE
- FILES_STORE_S3_ACL
- FILES_URLS_FIELD
- HTTPCACHE_ALWAYS_STORE
- HTTPCACHE_DBM_MODULE
- HTTPCACHE_DIR
- HTTPCACHE_ENABLED
- HTTPCACHE_EXPIRATION_SECS
- HTTPCACHE_GZIP
- HTTPCACHE_IGNORE_HTTP_CODES
- HTTPCACHE_IGNORE_MISSING
- HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS
- HTTPCACHE_IGNORE_SCHEMES
- HTTPCACHE_POLICY
- HTTPCACHE_STORAGE
- HTTPERROR_ALLOWED_CODES
- HTTPERROR_ALLOW_ALL
- HTTPPROXY_AUTH_ENCODING
- IMAGES_EXPIRES
- IMAGES_MIN_HEIGHT
- IMAGES_MIN_WIDTH
- IMAGES_RESULT_FIELD
- IMAGES_STORE
- IMAGES_STORE_S3_ACL
- IMAGES_THUMBS
- IMAGES_URLS_FIELD
- MAIL_FROM
- MAIL_HOST
- MAIL_PASS
- MAIL_PORT
- MAIL_SSL
- MAIL_TLS
- MAIL_USER
- METAREFRESH_ENABLED
- METAREFRESH_MAXDELAY
- REDIRECT_ENABLED
- REDIRECT_MAX_TIMES
- REFERER_ENABLED
- RETRY_ENABLED
- RETRY_HTTP_CODES
- RETRY_TIMES
- TELNETCONSOLE_HOST
- TELNETCONSOLE_PORT