AutoThrottle extension¶
This extension automatically throttles crawling speed based on the load of both the Scrapy server and the website you are crawling.
Design goals¶
- be nicer to sites instead of using the default download delay of zero.
- automatically adjust Scrapy to the optimum crawling speed, so the user doesn't have to tune the download delays to find the optimum one. The user only needs to specify the maximum concurrent requests allowed, and the extension does the rest.
How it works¶
The AutoThrottle extension adjusts download delays dynamically so that the spider sends AUTOTHROTTLE_TARGET_CONCURRENCY concurrent requests on average to each remote website.
It uses download latency to compute the delays. The main idea is the following: if a server needs latency seconds to respond, a client should send a request every latency/N seconds to have N requests processed in parallel.
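As a rough illustration of that rule, here is a minimal sketch of the target-delay computation (a hypothetical helper, not part of Scrapy's API):

```python
def target_download_delay(latency, target_concurrency):
    """If the server takes `latency` seconds to respond, sending one request
    every latency / N seconds keeps roughly N requests in flight."""
    return latency / target_concurrency

# e.g. a 0.6 s response latency with a target concurrency of 2.0
# suggests sending one request every 0.3 seconds
print(target_download_delay(0.6, 2.0))  # 0.3
```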
Instead of adjusting the delays one can just set a small fixed download delay and impose hard limits on concurrency using the CONCURRENT_REQUESTS_PER_DOMAIN or CONCURRENT_REQUESTS_PER_IP options. That will provide a similar effect, but there are some important differences:
- because the download delay is small, there will be occasional bursts of requests;
- non-200 (error) responses can often be returned faster than regular responses, so with a small download delay and a hard concurrency limit the crawler will be sending requests to the server faster when the server starts to return errors. But this is the opposite of what a crawler should do: in case of errors it makes more sense to slow down, since these errors may be caused by the high request rate.
AutoThrottle doesn’t have these issues.
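For comparison, a minimal settings.py sketch of that fixed-delay, hard-limit approach might look like the following (the values are illustrative, not recommendations):

```python
# settings.py -- hard-limit approach, without AutoThrottle (illustrative values)
DOWNLOAD_DELAY = 0.25                 # small fixed delay between requests, in seconds
CONCURRENT_REQUESTS_PER_DOMAIN = 8    # hard cap on concurrent requests per domain
# CONCURRENT_REQUESTS_PER_IP = 8      # or cap per IP instead of per domain
```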
Throttling algorithm¶
The throttling algorithm adjusts download delays based on the following rules:
- spiders always start with a download delay of AUTOTHROTTLE_START_DELAY;
- when a response is received, the target download delay is calculated as latency / N, where latency is the latency of the response and N is AUTOTHROTTLE_TARGET_CONCURRENCY;
- the download delay for the next requests is set to the average of the previous download delay and the target download delay;
- latencies of non-200 responses are not allowed to decrease the delay;
- the download delay can't become less than DOWNLOAD_DELAY or greater than AUTOTHROTTLE_MAX_DELAY.
Note
The AutoThrottle extension honours the standard Scrapy settings for concurrency and delay. This means that it will never set a download delay lower than DOWNLOAD_DELAY or a concurrency higher than CONCURRENT_REQUESTS_PER_DOMAIN (or CONCURRENT_REQUESTS_PER_IP, depending on which one you use).
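Putting the rules together, a simplified sketch of the adjustment step could look like this; it is not the extension's actual source code, and the parameter names are illustrative:

```python
def next_download_delay(prev_delay, latency, status,
                        target_concurrency, min_delay, max_delay):
    """Simplified model of the throttling rules described above.

    min_delay stands for DOWNLOAD_DELAY and max_delay for AUTOTHROTTLE_MAX_DELAY.
    """
    # target delay: one request every latency / N seconds
    target_delay = latency / target_concurrency
    # move the delay towards the target by averaging with the previous delay
    new_delay = (prev_delay + target_delay) / 2.0
    # latencies of non-200 responses are not allowed to decrease the delay
    if status != 200 and new_delay < prev_delay:
        return prev_delay
    # clamp between DOWNLOAD_DELAY and AUTOTHROTTLE_MAX_DELAY
    return min(max(new_delay, min_delay), max_delay)
```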
In Scrapy, the download latency is measured as the time elapsed between establishing the TCP connection and receiving the HTTP headers.
Note that these latencies are very hard to measure accurately in a cooperative multitasking environment, because Scrapy may be busy processing a spider callback, for example, and unable to attend to downloads. However, these latencies should still give a reasonable estimate of how busy Scrapy (and ultimately, the server) is, and this extension builds on that premise.
Settings¶
The settings used to control the AutoThrottle extension are:
- AUTOTHROTTLE_ENABLED
- AUTOTHROTTLE_START_DELAY
- AUTOTHROTTLE_MAX_DELAY
- AUTOTHROTTLE_TARGET_CONCURRENCY
- AUTOTHROTTLE_DEBUG
- CONCURRENT_REQUESTS_PER_DOMAIN
- CONCURRENT_REQUESTS_PER_IP
- DOWNLOAD_DELAY
For more information see Throttling algorithm.
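A typical way to enable the extension from a project's settings.py looks roughly like this (the values are illustrative, not recommendations):

```python
# settings.py -- enabling AutoThrottle (illustrative values)
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 5.0         # initial download delay, in seconds
AUTOTHROTTLE_MAX_DELAY = 60.0          # maximum delay in case of high latencies
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0  # average concurrent requests per remote site
AUTOTHROTTLE_DEBUG = False             # set to True to log throttling stats for every response
```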
AUTOTHROTTLE_TARGET_CONCURRENCY¶
New in version 1.1.
Default: 1.0
Average number of requests Scrapy should be sending in parallel to remote websites.
By default, AutoThrottle adjusts the delay to send a single concurrent request to each of the remote websites. Set this option to a higher value (e.g. 2.0) to increase the throughput and the load on remote servers. A lower AUTOTHROTTLE_TARGET_CONCURRENCY value (e.g. 0.5) makes the crawler more conservative and polite.
Note that the CONCURRENT_REQUESTS_PER_DOMAIN and CONCURRENT_REQUESTS_PER_IP options are still respected when the AutoThrottle extension is enabled. This means that if AUTOTHROTTLE_TARGET_CONCURRENCY is set to a value higher than CONCURRENT_REQUESTS_PER_DOMAIN or CONCURRENT_REQUESTS_PER_IP, the crawler won't reach this number of concurrent requests.
At any given time point Scrapy can be sending more or fewer concurrent requests than AUTOTHROTTLE_TARGET_CONCURRENCY; it is a suggested value the crawler tries to approach, not a hard limit.
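For example, with a combination like the one below (illustrative values), the per-domain cap keeps the effective concurrency at 4 or less even though the target is higher:

```python
# settings.py -- the hard per-domain limit still applies (illustrative values)
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_TARGET_CONCURRENCY = 8.0
CONCURRENT_REQUESTS_PER_DOMAIN = 4     # effective concurrency stays at or below 4
```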
AUTOTHROTTLE_DEBUG¶
Default: False
Enable AutoThrottle debug mode, which will display stats on every response received, so you can see how the throttling parameters are being adjusted in real time.