Scraping the Maoyan Movies TOP 100 (http://maoyan.com/board/4?offset=90)
1). Content to scrape: movie title, starring cast, release date, and poster image URL, saved into a MariaDB database;
2). Save all poster images locally as /mnt/maoyan/<movie name>.png
Code:
import os
import re
import pymysql as mysql
from urllib import request
from urllib.request import urlopen

# MariaDB connection settings
u = 'root'
p = 'root'
d = 'python'
sql = 'insert into maoyan_top100 values(%s,%s,%s,%s,%s)'
url = 'http://maoyan.com/board/4?offset='
# Each <dd> block yields 5 groups: rank, image URL, film name, cast, release date
pattern = r'<dd>[\s\S]*?board-index.*?>(\d+)</i>[\s\S]*?<img data-src="(http://.+?)" alt="(.*?)"[\s\S]*?star">[\s]*(.*?)[\s]*?</p>[\s\S]*?releasetime">[\s]*(.*?)[\s]*?</p>'
myAgent = "Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0"

conn = mysql.connect(user=u, passwd=p, db=d, charset='utf8', autocommit=True)
cur = conn.cursor()

def write_to_mysql(items):
    # Insert all tuples scraped from one page in a single batch
    cur.executemany(sql, items)

def save_picture(rank, img_url, film_name):
    # Download the poster and save it as maoyan_images/<rank>_<film name>.jpg
    # (the requirement above mentions /mnt/maoyan/, but this script writes to ./maoyan_images/)
    img_content = urlopen(img_url).read()
    img_name = 'maoyan_images/' + rank + '_' + film_name + '.jpg'
    with open(img_name, 'wb') as f:
        f.write(img_content)

def main():
    # Make sure the image directory exists before writing into it
    os.makedirs('maoyan_images', exist_ok=True)
    for i in range(10):  # 10 pages x 10 films per page = TOP 100
        page_url = url + str(i * 10)
        myrequest = request.Request(page_url, headers={'User-Agent': myAgent})
        page_content = urlopen(myrequest).read().decode('utf-8')
        items = re.findall(pattern, page_content)
        # e.g. [('1', 'http://p1.meituan.net/movie/20803f59291c47e1e116c11963ce019e68711.jpg@160w_220h_1e_1c', '霸王别姬', '主演:张国荣,张丰毅,巩俐', '上映时间:1993-01-01'), ...]
        write_to_mysql(items)
        for item in items:
            save_picture(item[0], item[1], item[2])

if __name__ == '__main__':
    main()
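
Note that the script assumes the maoyan_top100 table already exists and has five columns matching the tuple order returned by re.findall (rank, image URL, film name, cast, release date). A minimal setup sketch is shown below; the column names and lengths are my own assumption, not something from the original post:

import pymysql

# Hypothetical schema: five text columns matching the scraped tuple order.
# `rank` is backtick-quoted because it is a reserved word in newer MariaDB/MySQL.
ddl = """
CREATE TABLE IF NOT EXISTS maoyan_top100 (
    `rank`      VARCHAR(8),
    img_url     VARCHAR(255),
    film_name   VARCHAR(128),
    star        VARCHAR(255),
    releasetime VARCHAR(64)
) DEFAULT CHARSET=utf8
"""

conn = pymysql.connect(user='root', passwd='root', db='python',
                       charset='utf8', autocommit=True)
with conn.cursor() as cur:
    cur.execute(ddl)
conn.close()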
Scrape results:
1) Saved poster images
2) Data in the database
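
To double-check the database side, a small verification script (reusing the same connection settings, and assuming the hypothetical schema sketched above) could look like this:

import pymysql

conn = pymysql.connect(user='root', passwd='root', db='python', charset='utf8')
with conn.cursor() as cur:
    cur.execute('SELECT COUNT(*) FROM maoyan_top100')
    print('rows stored:', cur.fetchone()[0])   # expect 100 for the full TOP 100
    cur.execute('SELECT * FROM maoyan_top100 LIMIT 3')
    for row in cur.fetchall():
        print(row)
conn.close()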