Scrapy Crawling

To execute your spider, run the following command inside the first_scrapy directory:

scrapy crawl first

where first is the spider name that was specified when the spider was created.

Once the spider crawls, you will see output like the following:

2016-08-09 18:13:07-0400 [scrapy] INFO: Scrapy started (bot: tutorial)
2016-08-09 18:13:07-0400 [scrapy] INFO: Optional features available: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Overridden settings: {}
2016-08-09 18:13:07-0400 [scrapy] INFO: Enabled extensions: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Enabled downloader middlewares: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Enabled spider middlewares: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Enabled item pipelines: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Spider opened
2016-08-09 18:13:08-0400 [scrapy] DEBUG: Crawled (200)
<GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
2016-08-09 18:13:09-0400 [scrapy] DEBUG: Crawled (200)
<GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
2016-08-09 18:13:09-0400 [scrapy] INFO: Closing spider (finished)

As you can see in the output, for each URL there is a log line indicating that the URL is a start URL and has no referrer (referer: None). Next, you should see two new files named Books.html and Resources.html created in your first_scrapy directory.
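Files like these are typically produced by a parse() callback that writes each response body to disk. A minimal sketch, assuming a Scrapy Response object with .url and .body attributes; the helper name filename_from_url is hypothetical:

```python
def filename_from_url(url):
    """Derive an output filename from the last path segment of a URL."""
    # e.g. ".../Python/Books/" -> "Books.html"
    return url.rstrip("/").rsplit("/", 1)[-1] + ".html"


# Inside a Scrapy spider, the callback could save each page like this:
def parse(self, response):
    filename = filename_from_url(response.url)
    with open(filename, "wb") as f:
        f.write(response.body)  # response.body is raw bytes
```

With the two start URLs from the log, this would yield exactly Books.html and Resources.html.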

Scrapy Extracting Items

To extract data from web pages, Scrapy uses a technique called selectors, based on XPath and CSS expressions. Here is an example of an XPath expression: /html/head/title selects the <title> element inside the <head> element of an HTML document ...
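Scrapy evaluates full XPath through its selector machinery, but the shape of an expression like /html/head/title can be illustrated with Python's built-in ElementTree, which supports a limited XPath subset (the sample HTML below is invented for the demonstration):

```python
import xml.etree.ElementTree as ET

# A tiny, well-formed HTML document to query (invented example).
html = "<html><head><title>Demo Page</title></head><body><p>hello</p></body></html>"

root = ET.fromstring(html)  # root corresponds to the <html> element
# "head/title" relative to <html> mirrors the absolute path /html/head/title.
title = root.find("head/title")
print(title.text)  # Demo Page
```

Note that ElementTree requires well-formed markup; for real, messy HTML, Scrapy's own selectors are the right tool.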