My first beginner crawler: the simplest script for scraping images from a Baidu Tieba forum

xiaoxiao · 2025-05-31

This script pages through the forum indefinitely and downloads every image it finds. You can change the tieba name inside the URL to scrape whichever tieba you like, but scraping a large tieba is not recommended: with that many threads, the script runs for a very long time before it finishes downloading. The simple code is below. In `url='https://tieba.baidu.com/f?kw=性能测试&ie=utf-8'`, the 性能测试 in the middle is the name of the tieba.
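If you swap in your own tieba name, note that the Chinese characters in the `kw` parameter need to be URL-encoded in the final request. A minimal sketch of building the listing URL with the `requests` library, which encodes `params` values automatically (the tieba name here is just an example):

```python
# -*- coding: utf-8 -*-
import requests

# Build the listing URL for any tieba; requests URL-encodes the Chinese name.
# '性能测试' is only an example -- replace it with the tieba you want.
tieba_name = '性能测试'
resp = requests.get('https://tieba.baidu.com/f',
                    params={'kw': tieba_name, 'ie': 'utf-8'})
print(resp.url)  # the kw value appears percent-encoded in the actual URL
```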

```python
# -*- coding: utf-8 -*-
import os
import re
import requests
from lxml import etree

url = 'https://tieba.baidu.com/f?kw=性能测试&ie=utf-8'

# Fetch the first listing page to read the total thread count
# (the original assigned html = respose.text before ever calling requests.get).
respose = requests.get(url)
selector = etree.HTML(respose.text)
links1 = selector.xpath('//*[@class="red_text"]/text()')
# The thread count is printed with thousands separators, e.g. "12,345".
print(re.sub(',', '', links1[0]))
pagenumber = int(re.sub(',', '', links1[0])) // 50  # 50 threads per listing page

urls = []
for i in range(pagenumber + 1):
    n = i * 50  # Tieba pages via the pn offset: 0, 50, 100, ...
    url1 = url + '&pn=' + str(n)
    print(url1)
    respose = requests.get(url1)
    selector1 = etree.HTML(respose.text)
    # Collect the thread links on this listing page.
    links = selector1.xpath('//div[@class="threadlist_lz clearfix"]/div/a[@rel="noreferrer"]/@href')
    for link in links:
        link = 'http://tieba.baidu.com' + link
        respose = requests.get(link)
        # re.S lets . match newlines, so the pattern can span the whole page.
        url4 = re.findall(r'class="BDE_Image".*?src="(.*?)"', respose.text, re.S)
        urls += url4  # the original re-appended its urls2 buffer every page, duplicating images

print(len(urls))

os.makedirs('D:/zzz', exist_ok=True)  # the save directory must exist before writing
for x, img_url in enumerate(urls, 1):
    result = requests.get(img_url)
    print('正在下载第' + str(x) + '张')  # "downloading image #x"
    with open('D:/zzz/p%s.jpg' % x, 'wb') as f:
        f.write(result.content)
```
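Since `lxml` is already imported, the regex scan over the raw HTML can also be replaced with an XPath query for the same `BDE_Image` class. A small sketch of that alternative; the `thread_url` here is a made-up placeholder, not a real thread:

```python
# -*- coding: utf-8 -*-
import requests
from lxml import etree

# Hypothetical placeholder for one of the thread links collected above.
thread_url = 'http://tieba.baidu.com/p/123456789'
respose = requests.get(thread_url)
selector2 = etree.HTML(respose.text)
# BDE_Image is the class Tieba puts on in-post images (the same marker the regex matches).
image_urls = selector2.xpath('//img[@class="BDE_Image"]/@src')
print(image_urls)
```

XPath is less brittle than the regex if attribute order in the `<img>` tag ever changes, though both depend on Tieba keeping the `BDE_Image` class name.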

(Screenshot: the download progress is printed as the script runs.)

Please credit the original source when reposting: https://www.6miu.com/read-5031028.html
