This article explains what the spider_Worker node is in a Python distributed crawler. The explanation is fairly detailed and has some reference value, so interested readers should read it through to the end!
To rewrite the multi-threaded version as a distributed crawler, the key piece is the cross-platform BaseManager module from multiprocessing.managers. Its main job is to register the task_queue and result_queue as callables exposed over the network: the Master node listens on a port, Worker child nodes connect to it, and the different hosts can then share and synchronize resources through the registered functions. The Master node is responsible for handing out tasks and collecting results, while each Worker takes tasks from the task queue, runs them, and sends what it scrapes back through the result queue (the Master then writes the results to a file and the database).
The spider_Worker node mainly calls the spider() function to process each task; the logic is much the same as in the multi-threaded version. Every time a child node scrapes a link, it sends it straight back to the Master. Also note that only one Master process may run at a time, but several Worker nodes can run in parallel, consuming the task queue simultaneously.
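Before the full scripts, here is a minimal sketch of that shared-queue pattern. It uses Python 2 syntax like the rest of the article and borrows the host, port and authkey values from the scripts below; the worker half is shown as comments because it would run in a separate process or on another host. It is only meant to highlight where the master and worker sides differ.

#coding:utf-8
# Minimal sketch of the shared-queue pattern (Python 2, values borrowed from the scripts below).
from multiprocessing.managers import BaseManager
from Queue import Queue

task_queue = Queue()
result_queue = Queue()

# --- master side: create the queues, expose them by name, listen on a port ---
BaseManager.register('get_task_queue', callable=lambda: task_queue)
BaseManager.register('get_result_queue', callable=lambda: result_queue)
manager = BaseManager(address=('127.0.0.1', 500), authkey='sir')
manager.start()                      # master listens; workers connect from other hosts

# --- worker side (separate process or host): register the names only, then connect ---
# BaseManager.register('get_task_queue')
# BaseManager.register('get_result_queue')
# m = BaseManager(address=('127.0.0.1', 500), authkey='sir')
# m.connect()
# task, result = m.get_task_queue(), m.get_result_queue()

manager.shutdown()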
spider_Master.py
#coding:utf-8
from multiprocessing.managers import BaseManager
from Queue import Queue
import time
import argparse
import MySQLdb
import sys

page = 2
word = 'inurl:login.action'
output = 'test.txt'
page = (page+1) * 10
host = '127.0.0.1'
port = 500
urls = []

class Master():
    def __init__(self):
        # the server must create the two shared queues itself; the worker side does not
        self.task_queue = Queue()
        self.result_queue = Queue()

    def start(self):
        # register get_task_queue / get_result_queue on the network, i.e. expose the two queues;
        # the worker side registers the same names but without the callable argument
        BaseManager.register('get_task_queue', callable=lambda: self.task_queue)
        BaseManager.register('get_result_queue', callable=lambda: self.result_queue)
        manager = BaseManager(address=(host, port), authkey='sir')
        manager.start()                # the master calls start() to listen on the port; the worker calls connect()
        # both master and worker fetch the task and result queues over the network;
        # do not create another pair of local queues here
        task = manager.get_task_queue()
        result = manager.get_result_queue()
        print 'put task'
        for i in range(0, page, 10):
            target = 'https://www.baidu.com/s?wd=%s&pn=%s' % (word, i)
            print 'put task %s' % target
            task.put(target)
        print 'try get result'
        while True:
            try:
                url = result.get(True, 5)   # use a slightly longer timeout when fetching results
                print url
                urls.append(url)
            except:
                break
        manager.shutdown()

if __name__ == '__main__':
    start = time.time()
    server = Master()
    server.start()
    print 'crawled %s records in total' % len(urls)
    print time.time() - start
    with open(output, 'a') as f:
        for url in urls:
            f.write(url[1] + '\n')
    conn = MySQLdb.connect('localhost', 'root', 'root', 'Struct', charset='utf8')
    cursor = conn.cursor()
    for record in urls:
        sql = "insert into s045 values('%s','%s','%s')" % (record[0], record[1], str(record[2]))
        cursor.execute(sql)
    conn.commit()
    conn.close()
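One side note on the storage step: the INSERT above is built with raw string formatting, so a quote in a page title breaks the statement and the code is open to SQL injection. A safer variant, assuming the same Struct database and three-column s045 table (the save_results wrapper name is just for illustration), lets MySQLdb fill the placeholders itself:

#coding:utf-8
# Sketch of a parameterized insert (assumes the same 'Struct' database and 's045' table as above).
import MySQLdb

def save_results(urls):
    conn = MySQLdb.connect('localhost', 'root', 'root', 'Struct', charset='utf8')
    cursor = conn.cursor()
    # the driver escapes each value, so quotes inside titles cannot break the SQL
    cursor.executemany("insert into s045 values (%s, %s, %s)",
                       [(pn, url, title) for pn, url, title in urls])
    conn.commit()
    conn.close()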
spider_Worker.py
#coding:utf-8
import re
import Queue
import time
import requests
from multiprocessing.managers import BaseManager
from bs4 import BeautifulSoup as bs

host = '127.0.0.1'
port = 500

class Worker():
    def __init__(self):
        self.headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0'}

    def spider(self, target, result):
        pn = int(target.split('=')[-1]) / 10 + 1        # recover the result-page number from the pn parameter
        html = requests.get(target, headers=self.headers)
        soup = bs(html.text, "lxml")
        res = soup.find_all(name="a", attrs={'class': 'c-showurl'})
        for r in res:
            try:
                h = requests.get(r['href'], headers=self.headers, timeout=3)
                if h.status_code == 200:
                    url = h.url                          # follow the Baidu redirect to the real URL
                    time.sleep(1)
                    title = re.findall(r'<title>(.*?)</title>', h.content)[0]
                    title = title.decode('utf-8')
                    print 'send spider url:', url
                    result.put((pn, url, title))         # every scraped link goes straight back to the master
                else:
                    continue
            except:
                continue

    def start(self):
        # only register the queue names here; the worker passes no callable,
        # because the real queues live on the master
        BaseManager.register('get_task_queue')
        BaseManager.register('get_result_queue')
        print 'Connect to server %s' % host
        m = BaseManager(address=(host, port), authkey='sir')
        m.connect()                                      # the worker connects instead of calling start()
        task = m.get_task_queue()
        result = m.get_result_queue()
        print 'try get queue'
        while True:
            try:
                target = task.get(True, 1)
                print 'run pages %s' % target
                self.spider(target, result)
            except:
                break

if __name__ == '__main__':
    w = Worker()
    w.start()
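As written, a Worker exits as soon as task.get() times out, which can also happen if it is momentarily starved while the Master is still producing tasks. A common alternative, sketched below under the assumption of a 'STOP' sentinel that is not part of the original scripts (the run_worker() helper is likewise only illustrative), is for the Master to enqueue one sentinel per Worker so that shutdown is explicit:

#coding:utf-8
# Sketch: sentinel-based shutdown. The 'STOP' marker and run_worker() helper are
# illustrative assumptions, not part of the original spider_Master/spider_Worker code.
from multiprocessing.managers import BaseManager

host = '127.0.0.1'
port = 500

def run_worker(spider_func):
    BaseManager.register('get_task_queue')
    BaseManager.register('get_result_queue')
    m = BaseManager(address=(host, port), authkey='sir')
    m.connect()
    task = m.get_task_queue()
    result = m.get_result_queue()
    while True:
        target = task.get()            # block until a task or the shutdown sentinel arrives
        if target == 'STOP':           # the master puts one 'STOP' per worker after the real URLs
            print 'master signalled shutdown'
            break
        spider_func(target, result)

On the master side the only change would be to enqueue one 'STOP' per expected worker after the real targets, e.g. for _ in range(num_workers): task.put('STOP').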
That is all for "What does the spider_Worker node refer to in a Python distributed crawler". Thanks for reading! I hope the content is helpful; for more related knowledge, follow the Yisu Cloud industry news channel!