In this article I will share how crawlers in Python can be classified. Many people are not very familiar with this, so I am sharing it as a reference; I hope you will get something out of it by the end.
1. By purpose, crawlers can be divided into functional crawlers and data-increment (incremental) crawlers.
2. Depending on whether the url and the corresponding page content change, incremental crawlers can be further divided into crawlers where the url changes together with the content, and crawlers where the url stays the same while the content changes (a minimal sketch of both deduplication ideas follows this list).
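In both cases the heart of incremental crawling is remembering what has already been seen. A common approach, and the one used in the two example projects below, is to keep a set of fingerprints in Redis: for crawlers where the url changes, the fingerprint is the url itself; for crawlers where the url stays the same, it is a hash of the parsed data. Here is a minimal sketch of that idea (the key name seen_fingerprints and the sample values are only illustrative, not part of the projects that follow):

import hashlib
from redis import Redis

conn = Redis(host='127.0.0.1', port=6379)

def seen_before(fingerprint, key='seen_fingerprints'):
    # sadd returns 1 when the value is new to the set and 0 when it already exists
    return conn.sadd(key, fingerprint) == 0

# Case 1: the url changes together with the content -> deduplicate on the url
detail_url = 'https://example.com/movie/123.html'
if not seen_before(detail_url):
    print('new url, request it')

# Case 2: the url stays the same but the data changes -> deduplicate on a data hash
record = 'author-name' + 'joke-text'
fingerprint = hashlib.sha256(record.encode()).hexdigest()
if not seen_before(fingerprint):
    print('new record, keep it')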
Example 1: the url changes together with the content. Crawl the latest movies from a movie site; a Redis set of detail-page urls decides whether a detail page still needs to be requested.
# 1. spider file
import scrapy
from movieAddPro.items import MovieaddproItem
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from redis import Redis


class MovieaddSpider(CrawlSpider):
    name = 'movieadd'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://www.4567tv.tv/frim/index1.html']

    link = LinkExtractor(allow=r'.frim/index1-\d+.html')
    rules = (
        Rule(link, callback='parse_item', follow=True),
    )

    # Create the Redis connection object
    conn = Redis(host='127.0.0.1', port=6379)

    # Parse the movie title and the url of its detail page
    def parse_item(self, response):
        li_list = response.xpath('/html/body/div[1]/div/div/div/div[2]/ul/li')
        for li in li_list:
            title = li.xpath('./div/a/@title').extract_first()
            # Build the detail-page url
            detail_url = 'https://www.4567tv.tv' + li.xpath('./div/a/@href').extract_first()
            item = MovieaddproItem()
            item['title'] = title
            # Check whether this detail-page url has already been requested
            ex = self.conn.sadd('movieadd_detail_urls', detail_url)
            if ex == 1:  # detail_url was not yet in the Redis set, so send the request
                print('New data found, crawling......')
                yield scrapy.Request(url=detail_url, callback=self.parse_detail, meta={'item': item})
            else:
                print('No new data to update......')

    def parse_detail(self, response):
        item = response.meta['item']
        desc = response.xpath('/html/body/div[1]/div/div/div/div[2]/p[5]/span[3]/text()').extract_first()
        item['desc'] = desc
        yield item
--------------------------------------------------------------------------------
# 2. pipelines file
import json


class MovieaddproPipeline(object):
    def process_item(self, item, spider):
        dic = {
            'title': item['title'],
            'desc': item['desc']
        }
        print(dic)
        # Reuse the Redis connection created on the spider; serialize the dict first,
        # because newer redis-py versions only accept bytes, str and numbers as values
        conn = spider.conn
        conn.lpush('movieadd_data', json.dumps(dic))
        return item
--------------------------------------------------------------------------------
# 3. items file
import scrapy


class MovieaddproItem(scrapy.Item):
    title = scrapy.Field()
    desc = scrapy.Field()
--------------------------------------------------------------------------------
# 4. settings file
BOT_NAME = 'movieAddPro'

SPIDER_MODULES = ['movieAddPro.spiders']
NEWSPIDER_MODULE = 'movieAddPro.spiders'

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'

ROBOTSTXT_OBEY = False
LOG_LEVEL = 'ERROR'

ITEM_PIPELINES = {
    'movieAddPro.pipelines.MovieaddproPipeline': 300,
}
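Assuming the standard Scrapy project layout above and a Redis server on 127.0.0.1:6379, the first project is started with scrapy crawl movieadd from the project root. The items pushed by the pipeline can then be read back from Redis roughly like this (a sketch, not part of the original project):

from redis import Redis

conn = Redis(host='127.0.0.1', port=6379)
# The pipeline lpush-es every item onto the movieadd_data list
for raw in conn.lrange('movieadd_data', 0, -1):
    print(raw)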
- Example 2: the url stays the same while the content changes. Requirement: crawl the jokes and their authors from Qiushibaike; deduplication is done on a sha256 fingerprint of the parsed data instead of on the url.
# 1. spider file
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from incrementByDataPro.items import IncrementbydataproItem
from redis import Redis
import hashlib


class QiubaiSpider(CrawlSpider):
    name = 'qiubai'
    start_urls = ['https://www.qiushibaike.com/text/']

    rules = (
        Rule(LinkExtractor(allow=r'/text/page/\d+/'), callback='parse_item', follow=True),
        Rule(LinkExtractor(allow=r'/text/$'), callback='parse_item', follow=True),
    )

    # Create the Redis connection object
    conn = Redis(host='127.0.0.1', port=6379)

    def parse_item(self, response):
        div_list = response.xpath('//div[@id="content-left"]/div')
        for div in div_list:
            item = IncrementbydataproItem()
            item['author'] = div.xpath('./div[1]/a[2]/h3/text() | ./div[1]/span[2]/h3/text()').extract_first()
            item['content'] = div.xpath('.//div[@class="content"]/span/text()').extract_first()
            # Build a unique fingerprint from the parsed values
            source = item['author'] + item['content']
            source_id = hashlib.sha256(source.encode()).hexdigest()
            # Store the fingerprint in the Redis set data_id
            ex = self.conn.sadd('data_id', source_id)
            if ex == 1:
                print('This record has not been crawled yet, crawling......')
                yield item
            else:
                print('This record was already crawled, skipping!!!')
--------------------------------------------------------------------------------
# 2. pipelines file
import json
from redis import Redis


class IncrementbydataproPipeline(object):
    conn = None

    def open_spider(self, spider):
        self.conn = Redis(host='127.0.0.1', port=6379)

    def process_item(self, item, spider):
        dic = {
            'author': item['author'],
            'content': item['content']
        }
        print(dic)
        # Serialize before pushing: newer redis-py versions only accept
        # bytes, str and numbers as list values
        self.conn.lpush('qiubaiData', json.dumps(dic))
        return item
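The second project is run the same way (scrapy crawl qiubai). Because the fingerprints live in the Redis set data_id, a repeated run only yields records whose author/content combination has not been seen before; a rough way to check this after a run (again only a sketch):

from redis import Redis

conn = Redis(host='127.0.0.1', port=6379)
print(conn.scard('data_id'))    # distinct records seen so far
print(conn.llen('qiubaiData'))  # records stored by the pipeline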
That is all of the content of "how crawlers in Python are classified". Thanks for reading! I hope it has given you a clearer picture of incremental crawling and that the examples are useful in your own projects.
Original article: https://www.py.cn/jishu/spider/33448.html