Scraping data with the Scrapy framework (creating a Scrapy project + parsing data with XPath + saving data via pipelines + middleware)
Contents
I. Creating a Scrapy project
II. Parsing data with XPath
III. Saving data via pipelines
IV. Middleware
I. Creating a Scrapy project
1. Create a folder: C06
Enter the following commands in the terminal:
2. Install Scrapy: pip install scrapy
3. Enter the folder: cd C06
4. Create the project: scrapy startproject C06L02 (the project name)
5. Switch into C06L02: cd C06L02/C06L02
Then switch into spiders: cd spiders
6. Create the spider, giving it a name and a start URL: scrapy genspider app https://product.cheshi.com/rank/2-0-0-0-1/
(For site-wide crawling with the CrawlSpider subclass: scrapy genspider -t crawl app http://seller.cheshi.com/jinan/)
7. Open the newly generated spider file (app.py) and check that the URL is correct
8. Run the spider: scrapy crawl app
9. To silence everything but errors in the log output, add this line to settings.py: LOG_LEVEL = "ERROR"
To ignore the robots.txt protocol, set this in settings.py: ROBOTSTXT_OBEY = False
10. A minimal app.py for a Scrapy project looks like this:
import scrapy

class AppSpider(scrapy.Spider):
    name = "app"
    allowed_domains = ["product.cheshi.com"]
    # note: the attribute is start_urls, not started_urls
    start_urls = ["http://product.cheshi.com/rank/2-0-0-0-1/"]

    def parse(self, response):
        print(response.text)
For site-wide crawling with the CrawlSpider subclass:
import scrapy
from scrapy.linkextractors import LinkExtractor  # note the capital L
from scrapy.spiders import CrawlSpider, Rule

class AppSpider(CrawlSpider):
    name = "app"
    # the domains must match the links the rules extract (seller.cheshi.com)
    allowed_domains = ["seller.cheshi.com"]
    start_urls = ["http://seller.cheshi.com/jinan"]

    rules = (
        Rule(
            LinkExtractor(allow=r"seller.cheshi.com/\d+", deny=r"seller.cheshi.com/\d+/.+"),
            callback="parse_item",
            follow=True,
        ),
    )

    # CrawlSpider uses parse() internally, so the callback must have
    # a different name (parse_item, as referenced in the Rule above)
    def parse_item(self, response):
        print(response.url)
11. User-Agent configuration: in settings.py, expand the commented-out USER_AGENT entry and fill in the value you need.
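Putting steps 9–11 together, the relevant settings.py lines look like this (the User-Agent string is a placeholder; paste in a real browser UA):
# settings.py
LOG_LEVEL = "ERROR"      # show errors only, silencing the default INFO logging
ROBOTSTXT_OBEY = False   # do not honor robots.txt
USER_AGENT = "Mxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder; use a real browser UA string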
II. Parsing data with XPath
Modify the parse function in app.py:
import scrapy

class AppSpider(scrapy.Spider):
    name = "app"
    allowed_domains = ["product.cheshi.com"]
    start_urls = ["http://product.cheshi.com/rank/2-0-0-0-1/"]

    def parse(self, response):
        cars = response.xpath('//ul[@class="condition_list_con"]/li')
        for car in cars:
            title = car.xpath('./div[@class="m_detail"]//a/text()').get()
            price = car.xpath('./div[@class="m_detail"]//b/text()').get()
            print(title, price)  # print each result for now
For paginated crawling, the code becomes:
import scrapy
from ..items import C06L10Item

class AppSpider(scrapy.Spider):
    name = "app"
    allowed_domains = ["book.douban.com"]
    start_urls = ["http://book.douban.com/latest"]

    def parse(self, response):
        books = response.xpath('//ul[@class="chart-dashed-list"]/li')
        for book in books:
            link = book.xpath('.//h2/a/@href').get()
            yield scrapy.Request(url=link, callback=self.parse_details)
        # try span[4] first for the next-page link, falling back to span[3]
        # (the span position shifts between pages)
        next_url = response.xpath('//*[@id="content"]/div/div[1]/div[4]/span[4]/a/@href').get()
        if next_url is None:
            next_url = response.xpath('//*[@id="content"]/div/div[1]/div[4]/span[3]/a/@href').get()
        if next_url is not None:
            next_url = response.urljoin(next_url)
            print(next_url)
            yield scrapy.Request(url=next_url, callback=self.parse)

    def parse_details(self, response):
        item = C06L10Item()
        item["title"] = response.xpath('//*[@id="wrapper"]/h1/span/text()').get()
        item["publisher"] = response.xpath('//*[@id="info"]/a[1]/text()').get()
        yield item
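The spider above imports C06L10Item, whose definition is not shown in the original; judging from the two fields used in parse_details, its items.py would presumably be:
import scrapy

class C06L10Item(scrapy.Item):
    title = scrapy.Field()      # book title
    publisher = scrapy.Field()  # publisher name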
III. Saving data via pipelines
1. Define the data model in items.py:
import scrapy

class C06L04Item(scrapy.Item):
    title = scrapy.Field()
    price = scrapy.Field()
2. Add the following code to app.py:
import scrapy
from ..items import C06L04Item

class AppSpider(scrapy.Spider):
    name = "app"
    allowed_domains = ["product.cheshi.com"]
    start_urls = ["http://product.cheshi.com/rank/2-0-0-0-1/"]

    def parse(self, response):
        cars = response.xpath('//ul[@class="condition_list_con"]/li')
        for car in cars:
            item = C06L04Item()  # create a fresh item for each result
            item["title"] = car.xpath('./div[@class="m_detail"]//a/text()').get()
            item["price"] = car.xpath('./div[@class="m_detail"]//b/text()').get()
            yield item
3. In settings.py, expand the commented-out ITEM_PIPELINES block; no changes to it are needed.
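Once uncommented, the block should look roughly like this (the dotted path follows the project name; C06L04 is assumed here to match the pipeline class):
ITEM_PIPELINES = {
    "C06L04.pipelines.C06L04Pipeline": 300,
}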
4. Modify the code in pipelines.py:
from itemadapter import ItemAdapter

class C06L04Pipeline:
    def process_item(self, item, spider):
        # print(item["title"], item["price"])
        return item
To save the items to a file, use this code instead:
from itemadapter import ItemAdapter

class C06L04Pipeline:
    def __init__(self):
        self.f = open("data.txt", "w")

    def process_item(self, item, spider):
        self.f.write(item["title"] + item["price"] + "\n")
        return item

    def __del__(self):
        self.f.close()
To store the data in MongoDB instead:
from itemadapter import ItemAdapter
import pymongo

class C06L04Pipeline:
    def __init__(self):
        self.client = pymongo.MongoClient("mongodb://localhost:27017")
        self.db = self.client["cheshi"]
        self.col = self.db["cars"]

    def process_item(self, item, spider):
        res = self.col.insert_one(dict(item))
        print(res.inserted_id)
        return item

    def __del__(self):
        print("end")
IV. Middleware
1. Typical uses of middleware: random User-Agent, proxy IPs, driving Selenium, adding cookies
2. Dynamic User-Agent
Expand the commented-out DOWNLOADER_MIDDLEWARES block in settings.py.
Then add the following to middlewares.py (only the modified part is shown):
import random

# inside the generated downloader middleware class in middlewares.py
def process_request(self, request, spider):
    # store only the header value; do not prepend "User-Agent:" to it
    uas = [
        "Mxxxxxxxxxxxxxxxxxxxxxxxx",
        "Mxxxxxxxxxxxxxxxxxxxxxxxx",
        "Mxxxxxxxxxxxxxxxxxxxxxxxx",
        "Mxxxxxxxxxxxxxxxxxxxxxxxx",
    ]
    request.headers["User-Agent"] = random.choice(uas)
    return None
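For reference, the uncommented DOWNLOADER_MIDDLEWARES block in settings.py looks roughly like this (the generated names depend on the project; C06L02 is assumed here):
DOWNLOADER_MIDDLEWARES = {
    "C06L02.middlewares.C06L02DownloaderMiddleware": 543,
}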
3. Proxy IPs
The detailed setup is omitted here; provider documentation spells it out, e.g. Kuaidaili's docs under tunnel proxy → Python → Scrapy.
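As a rough sketch only (the tunnel endpoint, port, and credentials below are placeholders; your provider's docs give the real format), a tunnel proxy is typically attached per request in a downloader middleware via request.meta["proxy"]:
class ProxyDownloaderMiddleware:
    def process_request(self, request, spider):
        # route the request through the provider's tunnel endpoint
        # (placeholder credentials; Scrapy's built-in HttpProxyMiddleware reads this key)
        request.meta["proxy"] = "http://username:password@tunnel.example.com:15818"
        return None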