In Python, you can build a multithreaded crawler with the `ThreadPoolExecutor` or `ProcessPoolExecutor` classes from the `concurrent.futures` module. To assign task priorities, use a `queue.PriorityQueue` to store and order the tasks. Here is a simple example:
```python
import queue
import requests
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor

def process_url(url):
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, 'html.parser')
    # Process the data here, e.g. extract information or store it in a database
    print(f"Processed {url}")

def worker(work_queue):
    while True:
        try:
            # Non-blocking get avoids the race between checking empty() and get()
            priority, url = work_queue.get_nowait()
        except queue.Empty:
            break
        try:
            process_url(url)
        finally:
            work_queue.task_done()

url_queue = queue.PriorityQueue()
# Add tasks to the priority queue. Lower values are dequeued first,
# so negating the priority makes larger numbers mean higher priority.
url_queue.put((-3, 'https://example.com/high-priority-url'))
url_queue.put((-1, 'https://example.com/low-priority-url'))
url_queue.put((-2, 'https://example.com/medium-priority-url'))

num_threads = 3
with ThreadPoolExecutor(max_workers=num_threads) as executor:
    for _ in range(num_threads):
        # Submit workers to the pool instead of spawning raw threads
        executor.submit(worker, url_queue)
    url_queue.join()
```
In this example, a priority queue stores the tasks, and each task's numeric priority determines its position: smaller values are dequeued first. Worker threads pull tasks from the queue and process them, so the execution order follows the priorities you assign.
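One subtlety worth knowing: `queue.PriorityQueue` compares entire tuples, so two tasks with the same priority fall back to comparing their URLs, which is usually not what you want. A common pattern is to add a monotonically increasing counter as a tie-breaker so that equal-priority tasks keep insertion order. A minimal sketch (the URLs are placeholders):

```python
import itertools
import queue

pq = queue.PriorityQueue()
counter = itertools.count()  # tie-breaker: preserves insertion order for equal priorities

for priority, url in [(-1, 'https://example.com/a'),
                      (-3, 'https://example.com/b'),
                      (-1, 'https://example.com/c')]:
    # The counter sits between priority and url, so urls are never compared
    pq.put((priority, next(counter), url))

order = []
while not pq.empty():
    _, _, url = pq.get()
    order.append(url)

print(order)
# → ['https://example.com/b', 'https://example.com/a', 'https://example.com/c']
```

The highest-priority task (`-3`) comes out first, and the two `-1` tasks retain the order in which they were enqueued.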