I. Scenario:
1. For business reasons, we frequently purchase or replace HTTPS proxies.
2. Purchased proxies sometimes turn out to be in the wrong region or simply unusable, so every batch has to be verified before it can be delivered.
3. At first this was done entirely by hand: enter the proxy details in Proxifier, clear the cache, then visit an IP-lookup site. On busy days there were close to a thousand proxies to test.
II. Idea:
Have a Python crawler fetch an IP-lookup site through each proxy and print the results that come back correctly, one after another, using multiple threads to speed up the crawl.
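Stripped of the threading, the core of the idea is a single requests call routed through one proxy. A minimal sketch, with a placeholder address standing in for a purchased proxy:

# Minimal single-proxy probe; 203.0.113.10:443 is a documentation
# placeholder, not a real proxy.
import requests

proxies = {"https": "https://203.0.113.10:443"}
try:
    r = requests.get("https://whatismyipaddress.com/",
                     proxies=proxies, timeout=10)
    r.raise_for_status()
    print("proxy answered with HTTP", r.status_code)
except requests.RequestException as reason:
    print("proxy failed:", reason)

The full script below wraps exactly this check in a thread per proxy.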
III. Implementation:
# -*- coding: utf-8 -*-
import time

import requests
from bs4 import BeautifulSoup
from threading import Thread, Lock

lock = Lock()
bad = 0  # count of proxies that failed the check
url = "https://whatismyipaddress.com/"
kv = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 '
                    '(KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36'}


class GetProxiesThread(Thread):
    """One worker thread per proxy to be checked."""

    def __init__(self, url, kv, proxy, proxies):
        self.url = url
        self.kv = kv
        self.proxy = proxy
        self.proxies = proxies
        super(GetProxiesThread, self).__init__()

    def run(self):
        check_proxies(self.url, self.kv, self.proxy, self.proxies)


def check_proxies(url, kv, proxy, proxies):
    global bad
    try:
        # timeout keeps a dead proxy from blocking its thread forever
        r = requests.get(url, headers=kv, proxies=proxies, timeout=10)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        soup = BeautifulSoup(r.text, 'lxml')
        # the exit IP and its country as reported by the lookup page
        ipaddress = soup.select('#section_left div a')[0].text
        country = soup.body.table.contents[-2].td.string
        with lock:  # serialize console output across threads
            print("{} ==> {}".format(ipaddress, country))
    except Exception as reason:
        with lock:
            bad += 1
            print(reason)
            print("{} is unavailable!".format(proxy))


def main():
    num = 0
    start = time.time()
    with open('proxieslist.txt', 'r') as f:
        threads = []
        for line in f:
            num += 1
            proxy = line.strip()  # one host:port per line
            proxies = {"https": "https://" + proxy}
            t = GetProxiesThread(url, kv, proxy, proxies)
            threads.append(t)
            t.start()
            time.sleep(0.6)  # stagger the requests a little
        for t in threads:
            t.join()
    print("The total amount: %s" % num)
    print("Not available quantity: %s" % bad)
    print("Elapsed time: %s" % (time.time() - start))


if __name__ == '__main__':
    main()
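The script reads proxieslist.txt from the working directory, one host:port per line. A plausible example file (the addresses below are documentation placeholders, not real proxies):

203.0.113.10:443
203.0.113.11:8080
198.51.100.7:3128

Each worker prints either "exit-IP ==> country" for a working proxy, or the error plus "... is unavailable!" for a dead one, and the final tallies show at a glance how much of a batch has to be sent back.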
IV. Known issues:
1. Writing the results to a text file instead of only the console would be a useful addition; see the sketch after this list.
2. The run sometimes hangs partway through. The most likely culprit is requests.get() blocking indefinitely when a proxy accepts the connection but never responds, which is why the listing above passes timeout=10; the original acquire()/release() locking could also deadlock if an exception was raised while the lock was already held.
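For issue 1, a minimal sketch of the missing file output, assuming a hypothetical results.txt and reusing the script's lock so that lines written by different threads never interleave:

from threading import Lock

lock = Lock()  # in the script above, reuse the existing module-level lock


def record(line, path='results.txt'):  # results.txt is an assumed filename
    # Append one verdict per line while holding the lock, so concurrent
    # threads cannot interleave their writes.
    with lock:
        with open(path, 'a', encoding='utf-8') as out:
            out.write(line + '\n')

# check_proxies() could then call, for example:
# record("{} ==> {}".format(ipaddress, country))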