No module named scrapy.spider when crawling with Scrapy

I have a spider that crawls a website. My code is as follows:

from scrapy.spider import Spider
from scrapy.selector import Selector

class JustASpider(Spider):
    name = "googlespider"
    allowed_domains = ["google.com"]
    start_urls = ["http://www.google.com/search?hl=en&q=search"]


    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//title/text()').extract()
        print (sites)
        #for site in sites:  (I don't know why you want to loop to extract the title text)
            #print(site)

When I run this with `$ scrapy crawl test.py`, I get the following error:

from scrapy.spider import Spider
ImportError: No module named 'scrapy.spider'

I have also tried several other examples, but they all hit the same error.

SAGILL's answer: No module named scrapy.spider when crawling with Scrapy

Newer Scrapy versions moved the old `scrapy.spider` module, so import the top-level `scrapy` package and subclass `scrapy.Spider` instead. This works for me:

import scrapy


class JustASpider(scrapy.Spider):
    name = "googlespider"
    allowed_domains = ["google.com"]
    start_urls = ["http://www.google.com/search?hl=en&q=search"]

    def parse(self, response):
        # No loop is needed here: extract() already returns the text of
        # every matching <title> node as a list of strings.
        sites = response.xpath('//title/text()').extract()
        print(sites)
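For readers who want to see what `response.xpath('//title/text()').extract()` produces without setting up a Scrapy project, here is a rough stand-in using only the standard library (this is not Scrapy's Selector, just a toy illustration on a hypothetical page):

```python
import xml.etree.ElementTree as ET

# A tiny stand-in document; a real response body would come from the crawl.
html = "<html><head><title>search - Google Search</title></head><body></body></html>"
root = ET.fromstring(html)

# Equivalent, for this toy document, of response.xpath('//title/text()').extract():
# collect the text of every <title> node into a list of strings.
titles = [el.text for el in root.iter("title")]
print(titles)  # ['search - Google Search']
```

Note also that `scrapy crawl` takes the spider's `name` attribute, not a filename, so the spider above would be run as `scrapy crawl googlespider`, not `scrapy crawl test.py`.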