Common Python Libraries -------- Requests

xiaoxiao  2021-02-28

一、What is Requests

Requests is an HTTP library written in Python, built on top of urllib and released under the Apache2 Licensed open-source protocol. It is easier to use than urllib and simplifies a great deal of code.

二、Installing Requests

pip3 install requests

三、Usage

  1. Basic usage

import requests

res = requests.get("http://www.baidu.com")
print(res.headers)      # response headers
print(res.status_code)  # status code
print(res.text)         # response body
print(res.cookies)      # cookies

  2. The various request methods

import requests

requests.post("http://www.baidu.com")
requests.get("http://www.baidu.com")
requests.put("http://www.baidu.com")
requests.delete("http://www.baidu.com")
requests.head("http://www.baidu.com")
requests.options("http://www.baidu.com")

  3. GET requests

# Passing request parameters
import requests

data = {
    'name': 'germey',
    'age': 22
}
response = requests.get("http://httpbin.org/get", params=data)
print(response.text)
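Under the hood, Requests URL-encodes the params dict into the query string of the final URL. A minimal offline sketch of that encoding step, using the `PreparedRequest` class (normally an internal detail of Requests):

```python
from requests.models import PreparedRequest

# Prepare only the URL to see how a params dict is encoded
req = PreparedRequest()
req.prepare_url("http://httpbin.org/get", {"name": "germey", "age": 22})
print(req.url)  # http://httpbin.org/get?name=germey&age=22
```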

JSON parsing

import json
import requests

response = requests.get("http://httpbin.org/get")
print(type(response.text))        # <class 'str'>
print(response.json())            # parsed into a dict
print(json.loads(response.text))  # equivalent to response.json()

Fetching binary data

import requests

response = requests.get("https://github.com/favicon.ico")
print(response.text)     # decoded str (typically garbled for binary data)
print(response.content)  # raw bytes

Saving binary data

import requests

response = requests.get("https://github.com/favicon.ico")
with open('favicon.ico', 'wb') as f:
    f.write(response.content)
    # no explicit f.close() needed: the with block closes the file

Adding request headers

import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
}
response = requests.get("http://www.baidu.com", headers=headers)
print(response.text)
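If you pass no headers at all, Requests sends its own defaults, whose User-Agent identifies the library rather than a browser (which is why some sites block unmodified requests). A quick offline way to inspect those defaults:

```python
import requests

# The headers Requests sends when you supply none yourself
defaults = requests.utils.default_headers()
print(defaults["User-Agent"])  # e.g. "python-requests/2.31.0" (version varies)
```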

  4. POST requests (no need for urllib's boilerplate)

import requests

data = {'name': 'germey', 'age': '22'}
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
}
response = requests.post("http://httpbin.org/post", data=data, headers=headers)
print(response.json())
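A dict passed as data= is form-encoded into the request body, with a Content-Type of application/x-www-form-urlencoded. This can be seen offline with the (normally internal) `PreparedRequest` class:

```python
from requests.models import PreparedRequest

# Prepare a POST without sending it, to inspect the encoded body
req = PreparedRequest()
req.prepare(method="POST", url="http://httpbin.org/post",
            data={"name": "germey", "age": "22"})
print(req.body)                     # name=germey&age=22
print(req.headers["Content-Type"])  # application/x-www-form-urlencoded
```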

  5. File upload

import requests

files = {'file': open('favicon.ico', 'rb')}
response = requests.post("http://httpbin.org/post", files=files)
print(response.text)
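With files=, Requests builds a multipart/form-data body instead of a form-encoded one. A hedged offline sketch of that (using in-memory bytes in place of an actual favicon.ico on disk, via the internal `PreparedRequest` class):

```python
from requests.models import PreparedRequest

# A (filename, content) tuple stands in for an open file handle
req = PreparedRequest()
req.prepare(method="POST", url="http://httpbin.org/post",
            files={"file": ("favicon.ico", b"fake-icon-bytes")})
print(req.headers["Content-Type"])  # multipart/form-data; boundary=...
```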

  6. Getting cookies

import requests

response = requests.get("https://www.baidu.com")
print(response.cookies)
for key, value in response.cookies.items():
    print(key + '=' + value)

  7. Session persistence (e.g. after logging in to Taobao, reuse the cookies to fetch your Taobao orders)

import requests

s = requests.Session()
s.get('http://httpbin.org/cookies/set/number/123456789')
response = s.get('http://httpbin.org/cookies')
print(response.text)
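What makes this work is that a Session keeps cookies in a `RequestsCookieJar` (`s.cookies`) and resends them on every later request. A minimal offline sketch of that jar, setting a cookie by hand instead of receiving it from a server:

```python
import requests

s = requests.Session()
# Simulate what a Set-Cookie response header would store in the jar
s.cookies.set("number", "123456789")
# Any later s.get(...) would now send this cookie automatically
print(s.cookies.get("number"))  # 123456789
```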

  8. HTTPS certificate verification

# Skip verification and silence the insecure-request warning
import requests
from requests.packages import urllib3

urllib3.disable_warnings()
response = requests.get('https://www.12306.cn', verify=False)
print(response.status_code)

# Point to a client certificate and key
response = requests.get('https://www.12306.cn', cert=('/path/server.crt', '/path/key'))
print(response.status_code)

  9. Setting proxies

# Free proxy IPs can be copied from sites such as Xici
import requests

proxies = {
    "http": "http://127.0.0.1:9743",
    "https": "https://127.0.0.1:9743",
}
response = requests.get("https://www.taobao.com", proxies=proxies)
print(response.status_code)
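Requests picks the proxy whose key matches the URL's scheme. That selection logic can be checked offline with `requests.utils.select_proxy`, an internal helper, so this is a sketch rather than stable public API:

```python
from requests.utils import select_proxy

proxies = {
    "http": "http://127.0.0.1:9743",
    "https": "https://127.0.0.1:9743",
}
# An https:// URL matches the "https" key in the dict
print(select_proxy("https://www.taobao.com", proxies))  # https://127.0.0.1:9743
```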

  10. Timeout settings (in seconds)

import requests
from requests.exceptions import ReadTimeout

try:
    response = requests.get("https://www.taobao.com", timeout=0.1)
    print(response.status_code)
except ReadTimeout:
    print('Timeout')
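timeout can also be a (connect, read) tuple to bound the two phases separately. A sketch against 10.255.255.1, a non-routable address chosen here so the connect phase fails quickly even without internet access; the exact exception type varies by environment, so the catch-all RequestException is used:

```python
import requests
from requests.exceptions import RequestException

try:
    # 0.1 s to establish the connection, 1 s to read the response
    requests.get("http://10.255.255.1/", timeout=(0.1, 1))
    print("connected")
except RequestException as e:
    print("request failed:", type(e).__name__)
```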

  11. Site authentication (some sites require authentication before they can be accessed)

import requests
from requests.auth import HTTPBasicAuth

r = requests.get('http://120.27.34.24:9001', auth=HTTPBasicAuth('user', '123'))
print(r.status_code)
# Shorthand: a plain (user, password) tuple does the same thing
r = requests.get('http://120.27.34.24:9001', auth=('user', '123'))
print(r.status_code)
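All HTTPBasicAuth does is add an `Authorization: Basic` header containing base64("user:password"). This can be verified offline with the internal `PreparedRequest` class, without contacting the server:

```python
from requests.auth import HTTPBasicAuth
from requests.models import PreparedRequest

# Prepare the request without sending it, to inspect the auth header
req = PreparedRequest()
req.prepare(method="GET", url="http://120.27.34.24:9001/",
            auth=HTTPBasicAuth("user", "123"))
print(req.headers["Authorization"])  # Basic dXNlcjoxMjM=
```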

  12. Exception handling (RequestException is the parent exception class, similar to Exception in Java)

import requests
from requests.exceptions import ReadTimeout, ConnectionError, RequestException

try:
    response = requests.get("http://httpbin.org/get", timeout=0.5)
    print(response.status_code)
except ReadTimeout:
    print('Timeout')
except ConnectionError:
    print('Connection error')
except RequestException:
    print('Error')
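The claim that RequestException sits at the top of the hierarchy can be checked directly, which is why catching it last works as a fallback:

```python
from requests.exceptions import (RequestException, ReadTimeout,
                                 ConnectionError, Timeout, HTTPError)

# Every concrete requests exception derives from RequestException,
# so a single `except RequestException` catches them all
print(issubclass(ReadTimeout, Timeout))               # True
print(issubclass(Timeout, RequestException))          # True
print(issubclass(ConnectionError, RequestException))  # True
print(issubclass(HTTPError, RequestException))        # True
```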
