This article explains, step by step, how to scrape review data from Ctrip (携程) with Python. It walks through the whole process in detail and should be a useful reference.
Step 1: open the page
View the page source and the network requests behind it.
At this point you have the most important piece of information: the comment request.
Here is the complete code:
import requests
import json
import time
import csv
import re
c=open(r'D:\guifeng.csv','a+',newline='',encoding='utf-8')
fieldnames=['user','time','score','content']
writer=csv.DictWriter(c,fieldnames=fieldnames)
writer.writeheader()
head = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:56.0) Gecko/20100101 Firefox/56.0'}
postUrl = "https://sec-m.ctrip.com/restapi/soa2/12530/json/viewCommentList"
data_1 = {
    "pageid": "290510",
    "viewid": "127481",
    "tagid": "-11",
    "pagenum": "1",
    "pagesize": "10",
    "contentType": "json",
    "SortType": "1",
    "head": {
        "appid": "100013776",
        "cid": "09031010311083118856",
        "ctok": "",
        "cver": "1.0",
        "lang": "01",
        "sid": "8888",
        "syscode": "09",
        "auth": "",
        "extension": [
            {
                "name": "protocal",
                "value": "https"
            }
        ]
    },
    "ver": "7.10.3.0319180000"
}
html = requests.post(postUrl, data=json.dumps(data_1), headers=head).text
html = json.loads(html)
jingqu = '龟峰'
pages = html['data']['totalpage']
datas = []
for j in range(pages):
    data1 = {
        "pageid": "290510",
        "viewid": "127481",
        "tagid": "-11",
        "pagenum": str(j + 1),
        "pagesize": "10",
        "contentType": "json",
        "SortType": "1",
        "head": {
            "appid": "100013776",
            "cid": "09031010311083118856",
            "ctok": "",
            "cver": "1.0",
            "lang": "01",
            "sid": "8888",
            "syscode": "09",
            "auth": "",
            "extension": [
                {
                    "name": "protocal",
                    "value": "https"
                }
            ]
        },
        "ver": "7.10.3.0319180000"
    }
    datas.append(data1)
for k in datas[:50]:
    print('Fetching page ' + k['pagenum'])
    time.sleep(3)
    html1 = requests.post(postUrl, data=json.dumps(k), headers=head).text
    html1 = json.loads(html1)
    comments = html1['data']['comments']
    for i in comments:
        user = i['uid']
        time1 = i['date']
        score = i['score']
        content = i['content']
        content = re.sub(" ", "", content)   # strip spaces
        content = re.sub("\n", "", content)  # strip newlines
        writer.writerow({'user': user, 'time': time1, 'score': score, 'content': content})
c.close()
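The two re.sub calls near the end of the loop strip spaces and newlines from each review before it is written to the CSV (so one review stays on one row). A standalone check on a made-up string:

```python
import re

# Made-up sample text containing spaces and a newline.
content = " 风景 很好\n推荐 游玩 "
content = re.sub(" ", "", content)   # remove all spaces
content = re.sub("\n", "", content)  # remove all newlines
print(content)  # 风景很好推荐游玩
```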
Press F5 to run the script; the resulting CSV file is saved automatically to the D: drive.
Key steps explained
postUrl = "https://sec-m.ctrip.com/restapi/soa2/12530/json/viewCommentList"
This URL is the generic comment endpoint and does not need to be changed.
What you can change:
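For reference, the JSON this endpoint returns is assumed, based on the fields the script reads, to look roughly like the mock below (data.totalpage for pagination, data.comments for the review rows). The sample values are made up, not real response data:

```python
import json

# Mocked response with the (assumed) shape the script relies on.
sample = json.loads('''{
  "data": {
    "totalpage": 5,
    "comments": [
      {"uid": "traveler01", "date": "2020-11-01", "score": 5, "content": "风景很好"}
    ]
  }
}''')

pages = sample['data']['totalpage']
first = sample['data']['comments'][0]
print(pages, first['uid'], first['score'])
```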
c=open(r'D:\guifeng.csv','a+',newline='',encoding='utf-8')
The output path is up to you; here it is written straight to the D: drive.
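If you prefer not to manage the file handle manually, a with block closes the file for you. A minimal sketch with a hypothetical relative path (and 'w' mode so reruns start from a fresh file rather than appending):

```python
import csv
from pathlib import Path

# Hypothetical output path -- change to wherever you want the CSV saved.
out_path = Path("guifeng.csv")

fieldnames = ['user', 'time', 'score', 'content']
with out_path.open('w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    # Demo row; in the real script each review becomes one such row.
    writer.writerow({'user': 'demo', 'time': '2020-01-01', 'score': 5, 'content': 'sample'})
```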
data_1 = {
    "pageid": "290510",
    "viewid": "127481",
    "tagid": "-11",
    "pagenum": "1",
    "pagesize": "10",
    "contentType": "json",
    "SortType": "1",
    "head": {
        "appid": "100013776",
        "cid": "09031010311083118856",
        "ctok": "",
        "cver": "1.0",
        "lang": "01",
        "sid": "8888",
        "syscode": "09",
        "auth": "",
        "extension": [
            {
                "name": "protocal",
                "value": "https"
            }
        ]
    },
    "ver": "7.10.3.0319180000"
}
The specific values come from the parts circled in red in the screenshot of the page's comment request payload.
The same applies here:
data1 = {
    "pageid": "290510",
    "viewid": "127481",
    "tagid": "-11",
    "pagenum": str(j + 1),
    "pagesize": "10",
    "contentType": "json",
    "SortType": "1",
    "head": {
        "appid": "100013776",
        "cid": "09031010311083118856",
        "ctok": "",
        "cver": "1.0",
        "lang": "01",
        "sid": "8888",
        "syscode": "09",
        "auth": "",
        "extension": [
            {
                "name": "protocal",
                "value": "https"
            }
        ]
    },
    "ver": "7.10.3.0319180000"
}
datas.append(data1)
Note:
"pagenum": str(j + 1),
This line must not be changed; it is what steps through the pages.
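Since pagenum is the only field that changes between requests, the payload can be built by a small helper function. A sketch, using the same field values as the script above:

```python
def build_payload(pagenum):
    """Build the POST body for one page; only 'pagenum' varies per request."""
    return {
        "pageid": "290510",
        "viewid": "127481",
        "tagid": "-11",
        "pagenum": str(pagenum),
        "pagesize": "10",
        "contentType": "json",
        "SortType": "1",
        "head": {
            "appid": "100013776",
            "cid": "09031010311083118856",
            "ctok": "",
            "cver": "1.0",
            "lang": "01",
            "sid": "8888",
            "syscode": "09",
            "auth": "",
            "extension": [{"name": "protocal", "value": "https"}],
        },
        "ver": "7.10.3.0319180000",
    }

payloads = [build_payload(j + 1) for j in range(3)]
print([p["pagenum"] for p in payloads])  # ['1', '2', '3']
```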
Finally:
for k in datas[:50]:
    print('Fetching page ' + k['pagenum'])
    time.sleep(3)
The 50 here (the number of pages scraped) can be changed,
and so can the 3-second delay between requests.
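Both knobs can be pulled out as named constants. A minimal sketch; the total page count of 120 is a made-up stand-in for the html['data']['totalpage'] value:

```python
MAX_PAGES = 50      # how many pages to scrape (the datas[:50] slice)
DELAY_SECONDS = 3   # pause between requests (the time.sleep(3) call)

total_pages = 120   # stand-in for html['data']['totalpage']
pages_to_fetch = list(range(1, total_pages + 1))[:MAX_PAGES]
print(len(pages_to_fetch))  # 50
```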
With that, you can successfully scrape the reviews for any attraction on Ctrip.
That is everything in this article on scraping Ctrip reviews with Python. Thanks for reading, and I hope it helps.
Original article: https://my.oschina.net/u/4848094/blog/4745883