When writing a scraper in Python, there are several common ways to store the collected data:
import json

data = {"title": "example", "content": "This is an example."}

# Save the data to a JSON file
with open("data.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False)
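To confirm the file round-trips correctly, the saved JSON can be read back with json.load. This is a minimal sketch reusing the same data dictionary as above:

```python
import json

data = {"title": "example", "content": "This is an example."}

# Write the record, then read it back to verify the round trip
with open("data.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False)

with open("data.json", "r", encoding="utf-8") as f:
    loaded = json.load(f)

print(loaded["title"])  # → example
```

ensure_ascii=False keeps non-ASCII characters (common in scraped pages) readable in the file instead of escaping them.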
First, install the database client libraries:
pip install pymysql pymongo
Then connect to the database and insert the data:
import pymysql
import pymongo

# Connect to a MySQL database and run a parameterized INSERT
conn = pymysql.connect(host="localhost", user="username", password="password", db="database_name")
cursor = conn.cursor()
sql = "INSERT INTO table_name (title, content) VALUES (%s, %s)"
data = ("example", "This is an example.")
cursor.execute(sql, data)
conn.commit()
cursor.close()
conn.close()

# Connect to a MongoDB database and insert a document
client = pymongo.MongoClient("mongodb://localhost:27017/")
db = client["database_name"]
collection = db["collection_name"]
document = {"title": "example", "content": "This is an example."}
collection.insert_one(document)
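The same parameterized-INSERT pattern can be tried without a running MySQL server using Python's built-in sqlite3 module. Note that sqlite3 uses ? placeholders where pymysql uses %s, and the table and column names below are illustrative:

```python
import sqlite3

# In-memory database for demonstration; pass a file path to persist the data
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE pages (title TEXT, content TEXT)")

# sqlite3 uses ? placeholders instead of pymysql's %s; the pattern is the same
sql = "INSERT INTO pages (title, content) VALUES (?, ?)"
cursor.execute(sql, ("example", "This is an example."))
conn.commit()

cursor.execute("SELECT title, content FROM pages")
row = cursor.fetchone()
print(row)  # → ('example', 'This is an example.')
conn.close()
```

Passing values as a tuple rather than formatting them into the SQL string is what protects against SQL injection, which matters when the values come from scraped pages.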
First, install the requests library:
pip install requests
Then send the data to the API:
import requests

url = "https://api.example.com/data"
data = {"title": "example", "content": "This is an example."}
response = requests.post(url, json=data)
if response.status_code == 200:
    print("Data saved successfully.")
else:
    print("Error saving data:", response.status_code)
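API calls can fail for transient reasons (timeouts, 5xx responses), so a small retry wrapper is common. This sketch uses a hypothetical post_with_retry helper that takes a caller-supplied send callable, so the logic can be exercised with a fake transport and no network access; with requests, send would be something like lambda p: requests.post(url, json=p):

```python
def post_with_retry(send, payload, retries=3):
    """Call send(payload) until it returns a 2xx status code or retries run out."""
    last_status = None
    for _ in range(retries):
        response = send(payload)
        last_status = response.status_code
        if 200 <= last_status < 300:
            return response
    raise RuntimeError(f"Error saving data after {retries} attempts: {last_status}")


# Simulated transport: fails once with 503, then succeeds with 200
class FakeResponse:
    def __init__(self, status_code):
        self.status_code = status_code

attempts = []
def fake_send(payload):
    attempts.append(payload)
    return FakeResponse(503 if len(attempts) == 1 else 200)

result = post_with_retry(fake_send, {"title": "example"})
print(result.status_code)  # → 200
```

Checking for the whole 2xx range rather than only 200 also accepts responses like 201 Created, which many APIs return on a successful POST.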
Choose whichever of these methods best fits your needs and the type of data you are scraping.