Preface
Scrapy is an excellent crawler framework, built on the Twisted asynchronous programming framework, and its use of yield is elegant. Through the scheduler and the downloader, Scrapy can be extended programmatically; the plugin ecosystem is rich, and integration with Selenium or Playwright is fairly painless.
Of course, Scrapy by itself is powerless against the AJAX requests inside a page, but combined with mitmproxy it can do almost anything: Scrapy + Playwright simulates user clicks, while mitmproxy captures the traffic in the background. Log in once, and it runs for a whole day.
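The mitmproxy side of that pipeline can be a small addon script run with mitmdump -s capture.py. Here is a minimal sketch of my own; the /api/articles endpoint and the output file name are assumptions for illustration, not from this post:

# capture.py: dump JSON payloads of matching ajax responses (sketch)
from mitmproxy import http

class CaptureArticles:
    def response(self, flow: http.HTTPFlow) -> None:
        # "/api/articles" is a hypothetical endpoint; adjust to the target site
        if "/api/articles" in flow.request.pretty_url:
            with open("captured.jsonl", "ab") as f:
                f.write(flow.response.content + b"\n")

addons = [CaptureArticles()]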
Eventually I used asyncio to glue these tools together and achieved stable, unattended, automated operation: article after article is fed into my Elasticsearch cluster, where the knowledge-factory pipeline turns it into knowledge products.
"Crawlers, data, algorithms, intelligence": that is one technologist's ideal.
Configuration and Running
Installation: pip install scrapy. As long as scrapy.cfg and settings.py exist in the current directory, scrapy can run; a minimal scrapy.cfg is sketched below.
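The smallest scrapy.cfg only has to point at the settings module (the project name ispider here is an assumption, matching the BOT_NAME used later in this post):

[settings]
default = ispider.settings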
Run it from the command line with scrapy crawl ArticleSpider. To run it from within a program, there are three ways:
# Using scrapy.cmdline.execute
from scrapy.cmdline import execute

execute("scrapy crawl ArticleSpider".split())

Using CrawlerRunner:

# Using CrawlerRunner
from twisted.internet import asyncioreactor
asyncioreactor.install()  # install the asyncio reactor before the default reactor is imported
from twisted.internet import reactor

from scrapy.crawler import CrawlerRunner

runner = CrawlerRunner(settings)
d = runner.crawl(ArticleSpider)
d.addBoth(lambda _: reactor.stop())  # stop the reactor once the crawl finishes
reactor.run()

Using CrawlerProcess:
# Using CrawlerProcess
from scrapy.crawler import CrawlerProcess

process = CrawlerProcess(settings)
process.crawl(ArticleSpider)
process.start()
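In the snippets above, settings is a Scrapy Settings object. Inside a Scrapy project it is usually obtained with the standard helper:

from scrapy.utils.project import get_project_settings

settings = get_project_settings()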
Integration with Playwright
A major benefit of Playwright is automated data collection with a headless browser. A headless browser is a special web browser that provides an API for automation. By installing the asyncio reactor, asyncio-based libraries can be integrated to drive the headless browser.
import scrapy
from playwright.async_api import async_playwright

class PlaywrightSpider(scrapy.Spider):
    name = "playwright"
    start_urls = ["data:,"]  # avoid using the default Scrapy downloader

    async def parse(self, response):
        async with async_playwright() as pw:
            browser = await pw.chromium.launch()
            page = await browser.new_page()
            await page.goto("https://example.org")
            title = await page.title()
            return {"title": title}

Using playwright-python directly, as in the example above, bypasses most Scrapy components (middlewares, dupefilter, etc.). It is recommended to use scrapy-playwright for the integration.
Installation: pip install scrapy-playwright, then playwright install (or playwright install firefox chromium to fetch specific browsers). settings.py configuration:
BOT_NAME = "ispider"
SPIDER_MODULES = ["ispider.spider"]
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
DOWNLOAD_HANDLERS = {
    "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
}
CONCURRENT_REQUESTS = 32
PLAYWRIGHT_MAX_PAGES_PER_CONTEXT = 4
CLOSESPIDER_ITEMCOUNT = 100
PLAYWRIGHT_CDP_URL = "http://localhost:9900"
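PLAYWRIGHT_CDP_URL makes scrapy-playwright connect over the Chrome DevTools Protocol to a browser that is already running, instead of launching its own. One way to provide such an endpoint on port 9900 is to start Chromium with remote debugging enabled; a sketch, assuming a chromium binary on PATH:

# launch_browser.py: expose a CDP endpoint for PLAYWRIGHT_CDP_URL (sketch)
import subprocess

subprocess.Popen(["chromium", "--headless=new", "--remote-debugging-port=9900"])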
Spider Definition
import asyncio
import logging
from typing import Generator, Optional

from scrapy import Request, Spider
from scrapy.http import Response

logger = logging.getLogger(__name__)

class ArticleSpider(Spider):
    name = "ArticleSpider"
    custom_settings = {
        # "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
        # "DOWNLOAD_HANDLERS": {
        #     "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        #     "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        # },
        # "CONCURRENT_REQUESTS": 32,
        # "PLAYWRIGHT_MAX_PAGES_PER_CONTEXT": 4,
        # "CLOSESPIDER_ITEMCOUNT": 100,
    }
    start_urls = ["https://blog.csdn.net/nav/lang/javascript"]

    def __init__(self, name=None, **kwargs):
        super().__init__(name, **kwargs)
        logger.debug("ArticleSpider initialized.")

    def start_requests(self):
        for url in self.start_urls:
            yield Request(
                url,
                meta={
                    "playwright": True,
                    "playwright_context": "first",
                    "playwright_include_page": True,
                    "playwright_page_goto_kwargs": {
                        "wait_until": "domcontentloaded",
                    },
                },
            )

    async def parse(self, response: Response, current_page: Optional[int] = None) -> Generator:
        content = response.text
        page = response.meta["playwright_page"]
        context = page.context
        title = await page.title()
        while True:
            # scroll down so the page keeps loading fresh data
            await page.mouse.wheel(delta_x=0, delta_y=200)
            await asyncio.sleep(3)  # time.sleep() would block the event loop

As written, the scroll loop never terminates; a variant with a stop condition is sketched below.
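This is one way parse() could stop scrolling, emit items, and release the page; a sketch only: the scroll budget, the "article a" selector, and the item shape are my assumptions, not from the original post.

    async def parse(self, response: Response, current_page: Optional[int] = None) -> Generator:
        page = response.meta["playwright_page"]
        try:
            for _ in range(10):  # assumed scroll budget instead of an endless loop
                await page.mouse.wheel(delta_x=0, delta_y=2000)
                await asyncio.sleep(1)
            # "article a" is a hypothetical selector for article links
            links = await page.eval_on_selector_all(
                "article a", "els => els.map(e => e.href)"
            )
            for link in links:
                yield {"url": link}
        finally:
            await page.close()  # release the Playwright page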
Reference Links
Scrapy documentation (official)
scrapy-playwright plugin
GerapyPlaywright, a plugin written by Cui Qingcai (崔庆才 / 静觅)