In Python, routing crawler requests through proxy IPs can help you avoid being blocked by the target website. Here are several ways to use proxy IPs in a crawler:
1. The requests library:

```python
import requests

# Replace proxy_ip:port with a working proxy; for a plain HTTP proxy,
# both entries usually use the http:// scheme
proxies = {
    'http': 'http://proxy_ip:port',
    'https': 'http://proxy_ip:port',
}

url = 'https://example.com'  # target URL
response = requests.get(url, proxies=proxies)
print(response.text)
```
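If the proxy requires authentication, requests accepts credentials embedded in the proxy URL. A minimal sketch, assuming a hypothetical user/password and reusing a single Session so the proxy settings apply to every request:

```python
import requests

# Hypothetical credentials and proxy address; substitute your own
authed_proxies = {
    'http': 'http://user:password@proxy_ip:port',
    'https': 'http://user:password@proxy_ip:port',
}

session = requests.Session()
session.proxies.update(authed_proxies)

response = session.get('https://example.com', timeout=10)
print(response.status_code)
```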
2. The Scrapy framework: first, configure the proxy in settings.py:

```python
HTTP_PROXY = 'http://proxy_ip:port'

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.ProxyMiddleware': 100,
}
```

Then, in a file named middlewares.py, define a downloader middleware that applies the proxy:

```python
class ProxyMiddleware:
    def process_request(self, request, spider):
        # Attach the proxy configured in settings.py to every outgoing request
        request.meta['proxy'] = spider.settings.get('HTTP_PROXY')
```
Finally, in your spider code you can read response.meta['proxy'] to see which proxy a request was sent through.
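As a minimal sketch (the spider name and start URL below are placeholders), a spider callback could log that value like this:

```python
import scrapy

class DemoSpider(scrapy.Spider):
    name = 'demo'
    start_urls = ['https://example.com']

    def parse(self, response):
        # The proxy attached by ProxyMiddleware is echoed back on the response
        self.logger.info('Fetched %s via %s', response.url, response.meta.get('proxy'))
```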
3. The http.client module from the standard library:

```python
import http.client

# Replace with your proxy's host/port and the target site's hostname
proxy_host = 'proxy_ip'
proxy_port = 8080
target_host = 'example.com'

# Connect to the proxy, then open a CONNECT tunnel to the target host
conn = http.client.HTTPSConnection(proxy_host, proxy_port)
conn.set_tunnel(target_host, 443)

conn.request('GET', '/')
response = conn.getresponse()
print(response.read().decode())
```
4. The urllib3 library's ProxyManager: first, install urllib3 (it is also installed automatically alongside requests):

```
pip install urllib3
```

Then create a ProxyManager and send requests through it:

```python
import urllib3

# Every request issued through this manager is routed via the proxy
proxy = urllib3.ProxyManager('http://proxy_ip:port')

url = 'https://example.com'  # target URL
response = proxy.request('GET', url)
print(response.data.decode())
```
Note that all of these methods require a working proxy IP, which you can obtain from free or paid proxy providers. When using proxy IPs, make sure you follow the target website's crawling policy and comply with applicable laws and regulations.
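Free proxies in particular fail frequently, so it can be worth checking a proxy before relying on it. A minimal sketch, assuming httpbin.org is used purely as a test endpoint and the proxy URL is a placeholder:

```python
import requests

def proxy_works(proxy_url, timeout=5):
    """Return True if the proxy completes a simple request within the timeout."""
    proxies = {'http': proxy_url, 'https': proxy_url}
    try:
        response = requests.get('https://httpbin.org/ip', proxies=proxies, timeout=timeout)
        return response.ok
    except requests.RequestException:
        return False

print(proxy_works('http://proxy_ip:port'))
```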