When making HTTP requests with the requests library in Python, you will sometimes run into anti-scraping measures on the target site, such as User-Agent checks, cookie validation, or IP bans. The following strategies can be used to deal with them:
1. Randomize the User-Agent. Send a browser-like User-Agent header with every request; the fake_useragent library can pick a random one for you.

import requests
from fake_useragent import UserAgent
ua = UserAgent()
headers = {'User-Agent': ua.random}
url = 'https://example.com'
response = requests.get(url, headers=headers)
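When fetching many pages, rotating the User-Agent on every request makes the traffic look less uniform. A minimal sketch of that idea, where the URL list is just a placeholder:

import requests
from fake_useragent import UserAgent

ua = UserAgent()
urls = ['https://example.com/page1', 'https://example.com/page2']  # placeholder URLs

for url in urls:
    # pick a fresh random User-Agent for each request
    headers = {'User-Agent': ua.random}
    response = requests.get(url, headers=headers, timeout=10)
    print(url, response.status_code)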
2. Handle cookie validation. Some sites require valid cookies. You can pass them directly via the cookies parameter, or use requests.Session() to keep the session state across requests.

import requests
url = 'https://example.com'
cookies = {'cookie_name': 'cookie_value'}
response = requests.get(url, cookies=cookies)
Or, using requests.Session():
import requests
session = requests.Session()
session.cookies.update({'cookie_name': 'cookie_value'})
url = 'https://example.com'
response = session.get(url)
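A session is especially useful for sites that only set the required cookies after a login form is submitted, since the cookies returned by the login response are reused automatically on later requests. A rough sketch, where the login URL and the form field names (username, password) are assumptions that would need to match the real site:

import requests

session = requests.Session()

# hypothetical login endpoint and form fields; adjust to the target site
login_url = 'https://example.com/login'
payload = {'username': 'your_name', 'password': 'your_password'}
session.post(login_url, data=payload)

# the session now carries any cookies set by the login response
response = session.get('https://example.com/protected_page')
print(response.status_code)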
3. Throttle the request rate. Sending requests too fast is a common trigger for rate limiting and IP bans; add a pause between requests with the time.sleep() function.

import requests
import time
url = 'https://example.com'
response = requests.get(url)
time.sleep(5)  # wait 5 seconds before the next request
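A fixed delay is easy to fingerprint, so a common refinement is to sleep a random amount of time between requests. A small sketch using the standard library's random module; the 2-6 second range is an arbitrary choice:

import random
import time

import requests

urls = ['https://example.com/page1', 'https://example.com/page2']  # placeholder URLs

for url in urls:
    response = requests.get(url, timeout=10)
    print(url, response.status_code)
    # pause a random 2-6 seconds so the request pattern looks less regular
    time.sleep(random.uniform(2, 6))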
4. Use a proxy. If the target site bans your IP, you can route requests through a proxy server with the proxies parameter.

import requests
proxies = {
'http': 'http://proxy_ip:proxy_port',
'https': 'https://proxy_ip:proxy_port'
}
url = 'https://example.com'
response = requests.get(url, proxies=proxies)
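A single proxy can itself get blocked, so crawlers often rotate through a pool of proxies. A minimal sketch; the proxy addresses are placeholders, and a real pool would need its own health checking:

import random

import requests

# placeholder proxy pool; replace with working proxies
proxy_pool = [
    'http://proxy1_ip:port',
    'http://proxy2_ip:port',
]

url = 'https://example.com'
proxy = random.choice(proxy_pool)
proxies = {'http': proxy, 'https': proxy}

try:
    response = requests.get(url, proxies=proxies, timeout=10)
    print(response.status_code)
except requests.exceptions.RequestException as exc:
    # a dead or banned proxy usually surfaces as a connection/proxy error
    print(f'request via {proxy} failed: {exc}')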
5. Automate a real browser with Selenium. For pages that render their content with JavaScript, or run browser checks that plain requests cannot pass, drive a real browser and read the rendered HTML from page_source.

from selenium import webdriver
driver = webdriver.Chrome()
url = 'https://example.com'
driver.get(url)
content = driver.page_source
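Dynamic pages may still be rendering when driver.get() returns, so it is safer to wait for a known element before reading page_source. A sketch assuming Selenium 4 and a headless Chrome; waiting for the <body> tag is just a generic choice, and a real crawler would wait for a more specific element:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

options = webdriver.ChromeOptions()
options.add_argument('--headless')  # run without opening a browser window

driver = webdriver.Chrome(options=options)
try:
    driver.get('https://example.com')
    # wait up to 10 seconds for the page body to be present before reading the HTML
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.TAG_NAME, 'body'))
    )
    content = driver.page_source
    print(len(content))
finally:
    driver.quit()  # always release the browser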
In short, dealing with anti-scraping measures usually means combining several of these strategies and choosing the ones that fit the specific site, so that the crawler stays stable and efficient.