Scrapy usage notes


After downloading Scrapy, open the scrapy command line. Among its command-line options:

A crawl can be started with startproject or genspider: the former creates a project with the directory structure shown below, while the latter creates a Spider from a template; you can also write a class that subclasses Spider yourself to get a Scrapy Spider. The two ways of creating a spider correspond to the two ways of running one, crawl and runspider. The predefined Spider templates are CrawlSpider, XMLFeedSpider, CSVFeedSpider and SitemapSpider. The shell option enters Scrapy's interactive command-line mode (similar to the prompt you get by typing python in a console).
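For reference, these commands look roughly like this (project, spider and file names here are illustrative, not taken from the original):

scrapy startproject example                 # create a project
scrapy genspider follows weibo.com          # create a spider from the default template
scrapy genspider -t crawl users weibo.com   # use a predefined template such as CrawlSpider
scrapy crawl follows                        # run a spider that lives inside a project
scrapy runspider follows.py                 # run a standalone spider file
scrapy shell "https://weibo.com"            # interactive shell for trying out selectors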

Directory structure after startproject: the item classes are defined in items.py, and each class can be used like a dictionary.
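The generated layout looks roughly like this (shown for a project named example, matching the imports below; the exact set of files varies slightly between Scrapy versions):

example/
    scrapy.cfg            # deploy configuration
    example/
        __init__.py
        items.py          # item definitions
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/
            __init__.py
            follows.py    # spiders go here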

# items.py
import scrapy

class FollowItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # username / id from the follow list or follower list
    username = scrapy.Field()   # //a[@usercard!='']/@title
    uid = scrapy.Field()        # //a[@usercard!='']/@usercard
    # URL of the user's profile page
    pagelink = scrapy.Field()   # //a[@usercard!='']/@href

class UserItem(scrapy.Item):
    username = scrapy.Field()
    uid = scrapy.Field()
    # weibo post content
    content = scrapy.Field()

# referencing the items in follows.py
import sys
sys.path.append('D:\develop\workspace\example')
from example.items import FollowItem, UserItem

followitem = FollowItem()
followitem['username'] = selector.css('a[usercard]::attr(title)').extract()
followitem['uid'] = selector.css('a[usercard]::attr(usercard)').extract()
followitem['pagelink'] = selector.css('a[usercard]::attr(href)').extract()

Spider

Two kinds of Spider are covered below: a custom Spider, and a Spider created from a template.

Custom Spider: you need from scrapy import Request, Spider, Selector. Request is used to issue requests; Spider is the parent class of the custom class, from which you inherit some attributes and whose parse() and start_requests() methods you override; Selector is the selector, responsible for extracting the desired parts of the HTML text returned in the response using CSS selectors, XPath or regular expressions. First, subclass Spider and define the spider's name; the name is used to run it from the command line:

class FollowSpider(Spider):
    name = 'follows'
    allowed_domains = ['weibo.com']

Then, after defining the necessary headers and cookies, override the start_requests() method; this is where the Request is issued:

def start_requests(self):
    # url, self.headers and self.cookies are assumed to be defined elsewhere in the spider
    yield Request(url=url, headers=self.headers, cookies=self.cookies)
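The headers and cookies themselves are not shown in the original; purely as a hypothetical sketch, they might be defined as class attributes along these lines (all values are placeholders, not working weibo.com credentials):

class FollowSpider(Spider):
    name = 'follows'
    allowed_domains = ['weibo.com']
    headers = {
        'User-Agent': 'Mozilla/5.0 (placeholder)',
        'Referer': 'https://weibo.com/',
    }
    cookies = {
        # session cookies copied from a logged-in browser; names and values omitted here
    }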

Finally, parse the response in the parse() method:

def parse(self, response):
    # inside a Python string literal, \\ stands for a single backslash
    m = re.search('"html":"(<div class=\\\\"WB_cardwrap.+)\\\\n"}\\)</script>', response.body)
    if m:
        html = '<html><body>' + m.group(1) + '</body></html>'
        selector = Selector(text=html)
        followitem = FollowItem()
        followitem['username'] = selector.css('a[usercard]::attr(title)').extract()
        followitem['uid'] = selector.css('a[usercard]::attr(usercard)').extract()
        followitem['pagelink'] = selector.css('a[usercard]::attr(href)').extract()
        with open('D:\\\\rs.txt', mode='w') as f:
            for itemkey, items in followitem.items():
                f.write('---------------' + itemkey + '----------------\n')
                for item in items:
                    f.write(item.encode('utf-8').replace('\\', '') + '\n')
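Writing the results to a file by hand works, but it is not the only option: a common alternative (not what the original does) is to yield the item from parse() and let Scrapy's feed export write the output. A minimal sketch, reusing the same extraction:

def parse(self, response):
    m = re.search('"html":"(<div class=\\\\"WB_cardwrap.+)\\\\n"}\\)</script>', response.body)
    if m:
        selector = Selector(text='<html><body>' + m.group(1) + '</body></html>')
        followitem = FollowItem()
        followitem['username'] = selector.css('a[usercard]::attr(title)').extract()
        followitem['uid'] = selector.css('a[usercard]::attr(usercard)').extract()
        followitem['pagelink'] = selector.css('a[usercard]::attr(href)').extract()
        yield followitem    # Scrapy collects yielded items

scrapy crawl follows -o follows.json    # feed export then writes the items to a file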

Once the code is complete, run

scrapy crawl follows

to execute the spider.

Creating a Spider from a template
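The original ends here without going into detail. As a rough sketch, genspider can take one of the templates listed earlier via -t; for a CrawlSpider (spider name and link rule below are illustrative), the generated skeleton looks approximately like this:

scrapy genspider -t crawl users weibo.com

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class UsersSpider(CrawlSpider):
    name = 'users'
    allowed_domains = ['weibo.com']
    start_urls = ['http://weibo.com/']

    # follow links matched by the LinkExtractor and pass the responses to parse_item
    rules = (
        Rule(LinkExtractor(allow=r'/u/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        # fill the item from the response here
        return item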

