This article explains how to crawl news portal sites with Python. It is intended as a practical reference; the full crawler code is given below.
Python's main application areas include:
1. Cloud computing, with OpenStack as a typical example.
2. Web development; many large websites are built with Python.
3. Artificial intelligence; AI built on big-data analysis and deep learning relies heavily on Python.
4. Systems operations; Python with Django/Flask is a standard stack for automated operations.
5. Finance, including quantitative trading and financial analysis.
6. Big-data analysis.
The code in each folder is the news crawler for the corresponding platform.

.py files can be run directly.

.pyd files need one extra step. Taking pengpai_news_spider.pyd as an example:

Download the .pyd file, create a new project, and place the .pyd file inside it.

Create runner.py in the project root and write the following code into it; running it will start the crawl:
import pengpai_news_spider

pengpai_news_spider.main()
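For reference, a slightly fuller runner.py might look like the sketch below. It assumes the same entry point, pengpai_news_spider.main(), and only adds a __main__ guard so the module can also be imported without immediately starting the crawl:

# runner.py -- minimal sketch; assumes pengpai_news_spider.pyd sits next to this file
import pengpai_news_spider

if __name__ == '__main__':
    # call the compiled spider's entry point to start crawling
    pengpai_news_spider.main()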
Baidu News
# -*- coding: utf-8 -*-
# Note: if the search page fails to open, open the Baidu search engine in a browser first.

import requests
from datetime import datetime, timedelta
from lxml import etree
import csv
import os
from time import sleep
from random import randint


def parseTime(unformatedTime):
    # Baidu shows relative times such as "x分钟前" (minutes ago) or "x小时前" (hours ago);
    # convert them to absolute timestamps, otherwise return the string unchanged.
    if '分钟' in unformatedTime:
        minute = unformatedTime[:unformatedTime.find('分钟')]
        minute = timedelta(minutes=int(minute))
        return (datetime.now() - minute).strftime('%Y-%m-%d %H:%M')
    elif '小时' in unformatedTime:
        hour = unformatedTime[:unformatedTime.find('小时')]
        hour = timedelta(hours=int(hour))
        return (datetime.now() - hour).strftime('%Y-%m-%d %H:%M')
    else:
        return unformatedTime


def dealHtml(html):
    results = html.xpath('//div[@class="result-op c-container xpath-log new-pmd"]')

    saveData = []

    for result in results:
        title = result.xpath('.//h4/a')[0]
        title = title.xpath('string(.)').strip()

        summary = result.xpath('.//span[@class="c-font-normal c-color-text"]')[0]
        summary = summary.xpath('string(.)').strip()

        # ./ selects direct children, .// selects direct or indirect descendants
        infos = result.xpath('.//div[@class="news-source"]')[0]
        source, dateTime = infos.xpath(".//span[last()-1]/text()")[0], \
                           infos.xpath(".//span[last()]/text()")[0]

        dateTime = parseTime(dateTime)

        print('Title:', title)
        print('Source:', source)
        print('Time:', dateTime)
        print('Summary:', summary)
        print('\n')

        saveData.append({
            'title': title,
            'source': source,
            'time': dateTime,
            'summary': summary
        })

    # fileName is a module-level global set in doSpider below
    with open(fileName, 'a+', encoding='utf-8-sig', newline='') as f:
        writer = csv.writer(f)
        for row in saveData:
            writer.writerow([row['title'], row['source'], row['time'], row['summary']])


headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36',
    'Referer': 'https://www.baidu.com/s?rtt=1&bsst=1&cl=2&tn=news&word=%B0%D9%B6%C8%D0%C2%CE%C5&fr=zhidao'
}

url = 'https://www.baidu.com/s'

params = {
    'ie': 'utf-8',
    'medium': 0,
    # rtt=4 sorts by time, rtt=1 sorts by focus (relevance)
    'rtt': 1,
    'bsst': 1,
    'rsv_dl': 'news_t_sk',
    'cl': 2,
    'tn': 'news',
    'rsv_bp': 1,
    'oq': '',
    'rsv_btype': 't',
    'f': 8,
}


def doSpider(keyword, sortBy='focus'):
    '''
    :param keyword: search keyword
    :param sortBy: sort order, either 'focus' (by relevance) or 'time' (by time); defaults to 'focus'
    :return:
    '''
    global fileName
    fileName = '{}.csv'.format(keyword)

    if not os.path.exists(fileName):
        with open(fileName, 'w+', encoding='utf-8-sig', newline='') as f:
            writer = csv.writer(f)
            writer.writerow(['title', 'source', 'time', 'summary'])

    params['wd'] = keyword
    if sortBy == 'time':
        params['rtt'] = 4

    response = requests.get(url=url, params=params, headers=headers)

    html = etree.HTML(response.text)

    dealHtml(html)

    # the header bar text contains the total result count; strip the surrounding characters
    total = html.xpath('//div[@id="header_top_bar"]/span/text()')[0]
    total = total.replace(',', '')
    total = int(total[7:-1])

    pageNum = total // 10

    for page in range(1, pageNum):
        print('Page {}\n\n'.format(page))

        headers['Referer'] = response.url
        params['pn'] = page * 10

        response = requests.get(url=url, headers=headers, params=params)

        html = etree.HTML(response.text)

        dealHtml(html)

        sleep(randint(2, 4))


if __name__ == "__main__":
    doSpider(keyword='马保国', sortBy='focus')
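If you want results sorted by time instead, and then spot-check what was written, a minimal sketch like the one below works, assuming it is appended to the same script so that doSpider is in scope (the keyword reuses the example above; the five-row limit is just illustrative):

# usage sketch: run the spider sorted by time, then read back the CSV it produced
import csv

doSpider(keyword='马保国', sortBy='time')           # 'time' switches rtt to 4

with open('马保国.csv', encoding='utf-8-sig', newline='') as f:
    reader = csv.reader(f)
    header = next(reader)                           # ['title', 'source', 'time', 'summary']
    for i, row in enumerate(reader):
        if i >= 5:                                  # show only the first few rows
            break
        print(dict(zip(header, row)))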