This article walks through an example of scraping the NetEase Cloud Music hot songs chart with Python. Hopefully you will take something useful away from it; let's dig in!
First, find the link to the chart you want to download from; the one used here is:
https://music.163.com/discover/toplist?id=3778678
Then change the save path to your own directory. The target folder has to be created in advance; mine, for example, saves into the 网易云热歌榜 folder under 360下载 on the D drive, and with that folder in place the downloads go through.
If the folder has not been created beforehand, the script fails with [Errno 2] No such file or directory.
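If you would rather not create the folder by hand, a minimal sketch along these lines (using the standard-library os module; the path value mirrors the one in the script below) creates it automatically on first run:

import os

path = "D:/360下载/网易云热歌榜/"  # same save directory used in the script below
os.makedirs(path, exist_ok=True)  # create the folder (and any missing parents); no error if it already exists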
Code implementation:
from urllib import request
from bs4 import BeautifulSoup
import re
import requests
import time


class Music(object):
    def __init__(self, baseurl, path):
        # Browser User-Agent so the request is not rejected as a bot
        head = {
            "user-agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
        }
        self.baseurl = baseurl
        self.headers = head
        self.path = path

    def main(self):
        html = self.askurl()
        bs4 = self.analysis(html)
        name1 = self.matching(bs4)
        self.save(name1)

    def askurl(self):
        # Fetch the chart page
        req = request.Request(url=self.baseurl, headers=self.headers)
        response = request.urlopen(req)
        html = response.read().decode("utf-8")
        return html

    def analysis(self, html):
        # The song data is embedded in <textarea> elements on the page
        soup = BeautifulSoup(html, "html.parser")
        bs4 = soup.find_all("textarea")
        bs4 = str(bs4)
        return bs4

    def matching(self, bs4):
        # The [] brackets must be escaped (\[\]) so they match literally;
        # unescaped they are parsed as a character class and the pattern no
        # longer requires empty "tns" and "alias" fields
        rule0 = re.compile(r'"name":"(.*?)","tns":\[\],"alias":\[\]')
        name0 = re.findall(rule0, bs4)
        text = ""  # renamed from "str" to avoid shadowing the built-in
        for i in name0:
            text = text + "," + i
        text = text.replace("\xa0", " ")  # normalize non-breaking spaces
        # Recover the two name fields and the numeric song id from the joined
        # fragments (the trailing "jpg" of picUrl marks the boundary)
        rule1 = re.compile(r'jpg,(.*?),(.*?)","id":(\d*)')
        name1 = re.findall(rule1, text)
        return name1

    def save(self, name1):
        for j in name1:
            print("Downloading: " + j[1] + " - " + j[0] + "...")
            # This endpoint redirects to the playable mp3 for the given song id
            url = "http://music.163.com/song/media/outer/url?id=" + j[2]
            content = requests.get(url=url, headers=self.headers).content
            with open(self.path + j[1] + " - " + j[0] + ".mp3", "wb") as f:
                f.write(content)
            print(j[1] + " - " + j[0] + " done.\n")
            time.sleep(0.5)  # short pause between downloads
        return


if __name__ == "__main__":
    baseurl = "https://music.163.com/discover/toplist?id=3778678"  # chart page to scrape
    path = "D:/360下载/网易云热歌榜/"  # save directory (must exist; keep the trailing slash)
    demo0 = Music(baseurl, path)
    demo0.main()
    print("All downloads finished")
Having read this article, you should now have a working picture of how to scrape the NetEase Cloud Music hot songs chart with Python. For more on related topics, follow the 亿速云 industry news channel. Thanks for reading!