This article walks through how to use Python to crawl and download the ebooks on a Kindle site. The approach is practical, so it is shared here as a reference; hopefully you will take something away from it.
Below is a Python script that downloads the ebooks listed on 看kindle (kankindle.com). It walks the first 13 pages of listings, saves each book into an ebook/ directory, and checks whether a book has already been downloaded so it is not fetched twice.
#!/usr/bin/env python
# coding=utf-8
import os
import re
import urllib2

from bs4 import BeautifulSoup


def download(url):
    """Fetch a book's detail page, extract the download link, and save the file."""
    print 'starting download %s' % url
    response = urllib2.urlopen(url, timeout=30)
    html_data = response.read()
    soup = BeautifulSoup(html_data, 'html.parser')
    print 'start to analyse---------------'
    # The download button lives in an element with class "yanshi_xiazai";
    # the book title is the page's first <h2>.
    title_soup = soup.find_all(class_='yanshi_xiazai')
    name_soup = soup.find_all('h2')
    tag_a = title_soup[0].a.attrs['href']
    link_name = name_soup[0].get_text().strip()

    filename = 'ebook/' + link_name + '.mobi'
    print 'filename is: %s' % filename
    print 'downloading with urllib2 %s' % tag_a
    if os.path.exists(filename):
        # Skip books that were fetched on a previous run.
        print 'already downloaded, ignore'
    else:
        try:
            f = urllib2.urlopen(tag_a, timeout=60)
            data = f.read()
            with open(filename, 'wb') as code:
                code.write(data)
        except Exception, e:
            print e


def get_all_link(url):
    """Scan one listing page and download every book it links to."""
    print 'Starting to get all the list'
    response = urllib2.urlopen(url, timeout=30)
    html_data = response.read()
    soup = BeautifulSoup(html_data, 'html.parser')
    link_soup = soup.find_all('a')
    for each_link in link_soup:
        # Book detail pages are the ones whose URL contains "view".
        if re.search('view', str(each_link)):
            print each_link
            print each_link.attrs['href']
            download(each_link.attrs['href'])


if __name__ == '__main__':
    if not os.path.exists('ebook'):
        os.makedirs('ebook')
    for page in range(1, 14):  # the first 13 listing pages
        # Listing pages are assumed to follow /simple/page/N.
        url = 'http://kankindle.com/simple/page/' + str(page)
        print url
        get_all_link(url)
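The script above is Python 2 code (urllib2, print statements). For readers on Python 3, here is a minimal sketch of the same approach using the third-party requests library in place of urllib2. The selectors (the yanshi_xiazai class, the first h2, and "view" in detail-page URLs) and the listing URL pattern are carried over from the original script, and assume the site's markup has not changed.

#!/usr/bin/env python3
# A minimal Python 3 sketch of the same crawler, assuming the site's markup
# still matches the original script: class "yanshi_xiazai" wraps the download
# link, the first <h2> holds the title, and detail pages contain "view".
import os
import re

import requests
from bs4 import BeautifulSoup

BASE = "http://kankindle.com/simple/page/"  # assumed listing URL pattern


def download(url):
    """Fetch a book's detail page, extract the download link, and save the file."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    book_url = soup.find(class_="yanshi_xiazai").a["href"]
    title = soup.find("h2").get_text().strip()
    filename = os.path.join("ebook", title + ".mobi")
    if os.path.exists(filename):
        # Skip books that were fetched on a previous run.
        print("already downloaded, ignore:", filename)
        return
    print("downloading", book_url, "->", filename)
    data = requests.get(book_url, timeout=60).content
    with open(filename, "wb") as f:
        f.write(data)


def get_all_links(url):
    """Scan one listing page and download every book it links to."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.find_all("a", href=True):
        # Detail pages have "view" in their URL, as in the original script.
        if re.search("view", link["href"]):
            download(link["href"])


if __name__ == "__main__":
    os.makedirs("ebook", exist_ok=True)
    for page in range(1, 14):  # the first 13 listing pages
        get_all_links(BASE + str(page))

The structure is the same as the original: an outer loop over listing pages, a scan for detail-page links, and a per-book download that skips files already on disk.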
That covers how to crawl and download Kindle-site ebooks with Python. Hopefully the content above is of some help and you learned something from it. If you found the article useful, feel free to share it so more people can see it.