
crawl4ai: A Hands-On Bitcointalk Crawler Project

📌 Project Goal

This project automates the scraping of thread data (the original post and all replies) from a specified Bitcointalk board and extracts structured fields such as title, author, posting time, user rank, activity, and Merit for further analysis or use.

This project is intended for research and learning purposes only.

Core environment:

Python 3.9, Crawl4AI==0.6.3, beautifulsoup4==4.12.3

  • Crawling framework: crawl4ai (asynchronous crawling + browser simulation)

  • HTML parsing: BeautifulSoup (bs4)

  • Async progress display: tqdm.asyncio

  • Storage: JSON files

The project runs with just two Python files: bitcointalk_crawler.py and main.py.
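To set up the environment, something like the following should work (a sketch based on the versions above; crawl4ai-setup installs the Playwright browsers that crawl4ai drives, and tqdm is needed for the progress bars):

pip install crawl4ai==0.6.3 beautifulsoup4==4.12.3 tqdm
crawl4ai-setup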

The complete code is given below:

bitcointalk_crawler.py :

import os
from bs4 import BeautifulSoup
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, JsonCssExtractionStrategy, BrowserConfig, CacheMode
import json
import re
from pathlib import Path
from tqdm.asyncio import tqdm as async_tqdm

# Save directory
SAVE_DIR = "../bitcointalk/"
os.makedirs(SAVE_DIR, exist_ok=True)

# board_url = "https://bitcointalk.org/index.php?board=77"  # Bitcoin > Bitcoin Discussion > Press
# board = "Bitcoin Discussion_Press"
# board_url = "https://bitcointalk.org/index.php?board=74"  # Bitcoin > Bitcoin Discussion > Legal
# board = "Bitcoin Discussion_Legal"
board_url = "https://bitcointalk.org/index.php?board=6"  # URL of the Bitcoin > Development & Technical Discussion sub-board
board = "Bitcoin Development_Technical_Discussion"  # keyword used in the output file names
bitcointalk_page = 346  # number of list pages to crawl per sub-board
# Only the three parameters above need to be changed to run against a different board.
# board_url = "https://bitcointalk.org/index.php?board=8"  # Economy > Trading Discussion
# board = " Economy Trading Discussion"

URL_path = SAVE_DIR + board + "_bitcointalk_urls.json"
DONE_URLS_FILE = SAVE_DIR + board + "bitcointalk_done_urls.json"
RESULTS_FILE = SAVE_DIR + board + "bitcointalk_results.json"

# JavaScript: click "next page" and wait for the topic list to refresh
js_click_next_and_wait = """
(async () => {
    const getTopicTitles = () => {
        return Array.from(document.querySelectorAll('.tborder a'))
            .map(a => a.textContent.trim()).join('||');
    };
    const initialTitles = getTopicTitles();
    const nextButton = Array.from(document.querySelectorAll('#bodyarea #toppages .prevnext a.navPages'))
        .find(a => a.textContent.trim() === '»');
    if (nextButton) nextButton.click();
    while (true) {
        await new Promise(resolve => setTimeout(resolve, 200));
        const currentTitles = getTopicTitles();
        if (currentTitles !== initialTitles) break;
    }
})();
"""

# Schema extraction rules for the topic list
schema = {
    "name": "BitcointalkList",
    "baseSelector": "#bodyarea .tborder a[href*='topic=']",
    "fields": [
        {"name": "title", "selector": "a", "type": "text", "transform": "strip"},
        {"name": "url", "selector": "a", "type": "attr:href"},
    ],
}

# Paginate through the board and collect topic URLs
async def crawl_bitcointalk_dynamic_list(board_url, max_pages: int = 3):
    print("Start collecting URLs")
    browser_config = BrowserConfig(headless=True, java_script_enabled=True)
    async with AsyncWebCrawler(config=browser_config) as crawler:
        all_urls = []
        # url = "https://bitcointalk.org/index.php?board=77"   Bitcoin Discussion > Press
        # url = "https://bitcointalk.org/index.php?board=74"   Bitcoin Discussion > Legal
        session_id = "bitcointalk_session"
        for page in range(max_pages):
            offset = page * 40  # each list page shows 40 topics
            page_url = f"{board_url}.{offset}"
            urls = []
            config = CrawlerRunConfig(
                cache_mode=CacheMode.BYPASS,
                css_selector="#bodyarea .tborder .windowbg a[href*='topic=']",
                extraction_strategy=JsonCssExtractionStrategy(schema),
                # js_code=js_click_next_and_wait if page > 0 else None,
                # js_only=page > 0,
                session_id=session_id,
            )
            result = await crawler.arun(url=page_url, config=config)
            # print("First page result:", result.markdown)
            # print("First page result:", result)
            if result.success:
                html_content = result.html  # raw HTML string
                urls = re.findall(r'href="(https://bitcointalk\.org/index\.php\?topic=\d+\.0)"', html_content)
                for url in urls:
                    all_urls.append(url)
            else:
                print(f"⚠️ Failed to fetch page {page + 1}")
            print(f"✅ {len(all_urls)} URLs collected so far")
            with open(URL_path, "w", encoding="utf-8") as f:
                json.dump(all_urls, f, ensure_ascii=False, indent=2)
        print(f"✅ Finished: {len(all_urls)} URLs in total")
        return all_urls

# Visit each thread page and scrape the detailed content
async def crawl_bitcointalk_post_detail_with_replies(url: str) -> dict:
    """Given a Bitcointalk thread URL, return structured data for the original post and all replies."""
    result_data = {
        "url": url,
        "time": "unknown",
        "title": "",
        "content": "",
        "read_count": -1,
        "author": "unknown",
        "rank": "unknown",
        "activity": 0,
        "merit": 0,
        "replies_count": 0,
        "replies": []
    }
    schema = {
        "name": "Bitcointalk Thread",
        "baseSelector": ".bordercolor .msgcl1",  # each post (original post + replies)
        "fields": [
            {"name": "author", "selector": ".poster_info > b > a", "type": "text"},
            {"name": "author_inf", "selector": ".poster_info .smalltext", "type": "text"},
            {"name": "time", "selector": ".td_headerandpost .smalltext", "type": "text"},
            {"name": "content", "selector": ".td_headerandpost .post", "type": "text"},
        ],
    }
    browser_config = BrowserConfig(headless=True, java_script_enabled=True)
    crawler_config = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS,
        extraction_strategy=JsonCssExtractionStrategy(schema),
        delay_before_return_html=1,
        magic=True,
        simulate_user=True,
        override_navigator=True,
    )
    try:
        # Start the crawler
        async with AsyncWebCrawler(config=browser_config) as crawler:
            result = await crawler.arun(
                url=url,
                config=crawler_config,
                # js_code=js_click_next_and_wait if page > 0 else None,
                # js_only=page > 0,
            )
            soup = BeautifulSoup(result.cleaned_html, "html.parser")
            # Find the <td> that contains the thread title and read count
            topic_td = soup.find("td", string=re.compile(r"Topic:\s*.+?\(Read\s+\d+\s+times\)"))
            title = "unknown"
            read_count = -1
            if topic_td:
                text = topic_td.get_text(strip=True)
                # Match the title and the read count
                match = re.search(r"Topic:\s*(.+?)\s*\(Read\s+(\d+)\s+times\)", text)
                if match:
                    title = match.group(1)            # thread title
                    read_count = int(match.group(2))  # read count
                else:
                    print("❌ Could not match title and read count")
            # Save the results
            result_data["title"] = title
            result_data["read_count"] = read_count

            raw_posts = json.loads(result.extracted_content)
            print(f"✅ Extracted {len(raw_posts)} posts")
            posts = []
            main_content = raw_posts[0].get("content", "")
            for i, raw_post in enumerate(raw_posts):
                post = {}
                author_inf = raw_post.get("author_inf", "")
                rank_match = re.search(r"^(Sr\. Member|Hero Member|Legendary|Full Member|Member|Newbie)", author_inf)
                activity_match = re.search(r"Activity:\s*(\d+)", author_inf)
                merit_match = re.search(r"Merit:\s*(\d+)", author_inf)
                post["author"] = raw_post.get("author", "")
                post["rank"] = rank_match.group(1) if rank_match else "unknown"
                post["activity"] = int(activity_match.group(1)) if activity_match else 0
                post["merit"] = int(merit_match.group(1)) if merit_match else 0
                post["time"] = raw_post.get("time", "unknown")
                # If this is a reply that quotes the original post, strip the quoted part
                if i > 0 and main_content in raw_post.get("content", ""):
                    cleaned_text = raw_post.get("content", "").replace(main_content, "").strip()
                    post["content"] = cleaned_text
                else:
                    post["content"] = raw_post.get("content", "")
                posts.append(post)
            # Merge the original post and its replies
            if raw_posts:
                main_post = posts[0]
                result_data.update({
                    "author": main_post["author"],
                    "time": main_post["time"],
                    "rank": main_post["rank"],
                    "activity": main_post["activity"],
                    "merit": main_post["merit"],
                    "content": main_post["content"],
                    "replies_count": len(posts) - 1,  # number of replies
                    "replies": posts[1:]
                })
            return result_data
    except Exception as e:
        print(f"❌ Scrape failed: {e}")
        return result_data

async def load_urls(URL_path, board_url, pages):
    if os.path.exists(URL_path):
        print(f"✅ URL file already exists, skipping URL collection: {URL_path}")
        with open(URL_path, "r", encoding="utf-8") as f:
            ALL_URLS = json.load(f)
    else:
        ALL_URLS = await crawl_bitcointalk_dynamic_list(board_url, max_pages=pages)  # collect thread URLs and save them to file
    return ALL_URLS

def load_done_urls():
    if Path(DONE_URLS_FILE).exists():
        with open(DONE_URLS_FILE, "r", encoding="utf-8") as f:
            return set(json.load(f))
    return set()

def save_done_urls(done_urls: set):
    with open(DONE_URLS_FILE, "w", encoding="utf-8") as f:
        json.dump(list(done_urls), f, ensure_ascii=False, indent=2)

def append_post(post: dict):
    if not Path(RESULTS_FILE).exists():
        with open(RESULTS_FILE, "w", encoding="utf-8") as f:
            json.dump([post], f, ensure_ascii=False, indent=2)
    else:
        with open(RESULTS_FILE, "r+", encoding="utf-8") as f:
            data = json.load(f)
            data.append(post)
            f.seek(0)
            json.dump(data, f, ensure_ascii=False, indent=2)
            f.truncate()

async def crawl_bitcointalk_by_keywords(pages=bitcointalk_page, board_url=board_url):
    ALL_URLS = await load_urls(URL_path, board_url, pages)
    all_done_urls = load_done_urls()
    new_done_urls = set()
    print("🔍 Bitcointalk - urls - start")
    for URL in async_tqdm(ALL_URLS, desc="📡 Crawling asynchronously"):
        if URL in all_done_urls:
            print(f"⏩ Already done, skipping: {URL}")
            continue
        try:
            print(f"📥 Fetching: {URL}")
            final_post = await crawl_bitcointalk_post_detail_with_replies(URL)
            # ✅ Save the result immediately
            append_post(final_post)
            new_done_urls.add(URL)
            # ✅ Persist progress immediately
            save_done_urls(all_done_urls.union(new_done_urls))
            print(f"✅ Saved: {URL}")
        except Exception as e:
            print(f"❌ Error, skipping: {URL} - {e}")
            continue
    print("🎉 Finished crawling all keywords")

main.py:

import asyncio
from bitcointalk_crawler import crawl_bitcointalk_by_keywords

async def main():
    keywords = ["bitcoin", "crypto"]
    # Crawl Bitcointalk
    print("Start crawling Bitcointalk...")
    await crawl_bitcointalk_by_keywords()

    # # Crawl Twitter
    # print("Start crawling Twitter...")
    # await crawl_twitter_by_keywords(keywords)
    #
    # # Crawl Reddit
    # print("Start crawling Reddit...")
    # reddit_data = await crawl_reddit_by_keywords(keywords, pages)
    # save_data("Reddit", reddit_data)

if __name__ == "__main__":
    asyncio.run(main())

Crawl output files (illustrated below):

  • URL_path: JSON file storing the list of all thread URLs
  • DONE_URLS_FILE: list of URLs already crawled, used to avoid re-scraping
  • RESULTS_FILE: file storing the structured thread content
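With the defaults above, these paths are plain string concatenations in bitcointalk_crawler.py and resolve roughly as follows (illustrative; only URL_path has an underscore between the board name and the suffix, exactly as written in the code):

SAVE_DIR = "../bitcointalk/"
board = "Bitcoin Development_Technical_Discussion"

URL_path = SAVE_DIR + board + "_bitcointalk_urls.json"
# -> ../bitcointalk/Bitcoin Development_Technical_Discussion_bitcointalk_urls.json
DONE_URLS_FILE = SAVE_DIR + board + "bitcointalk_done_urls.json"
# -> ../bitcointalk/Bitcoin Development_Technical_Discussionbitcointalk_done_urls.json
RESULTS_FILE = SAVE_DIR + board + "bitcointalk_results.json"
# -> ../bitcointalk/Bitcoin Development_Technical_Discussionbitcointalk_results.json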

🔁 Crawl Workflow Overview

Step 1: Collect the thread list URLs

Function: crawl_bitcointalk_dynamic_list(board_url, max_pages)

  1. Visit the URL of the specified board (e.g., the technical discussion board).

  2. Simulate pagination to fetch thread links from the first max_pages pages.

  3. Extract thread titles and URLs via CSS selectors.

  4. Further filter the thread links with a regular expression.

  5. Save the results to the file specified by URL_path.

🔗 Example result:

[ "https://bitcointalk.org/index.php?topic=123456.0", "https://bitcointalk.org/index.php?topic=234567.0" ]
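This stage can also be run on its own, for example (a minimal sketch assuming bitcointalk_crawler.py is importable from the working directory):

import asyncio
from bitcointalk_crawler import crawl_bitcointalk_dynamic_list, board_url

# Collect topic URLs from the first 3 list pages; the list is also written to URL_path
urls = asyncio.run(crawl_bitcointalk_dynamic_list(board_url, max_pages=3))
print(f"collected {len(urls)} URLs")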


Step 2: Extract the original post and its replies

Function: crawl_bitcointalk_post_detail_with_replies(url)

For each thread URL:

  1. Open the page with the crawler and wait for it to finish loading.

  2. Extract the raw HTML and parse the title and read count with BeautifulSoup.

  3. Use the JSON CSS extraction strategy to extract every post (floor) in bulk:

    • Author

    • Author info (rank, Merit, Activity)

    • Posting time

    • Post body content

  4. Treat the first post as the original post and the rest as replies.

  5. Package the original post and all replies into a structured dictionary (a standalone usage sketch follows this list).
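A single thread can likewise be fetched in isolation (a minimal sketch; the topic URL below is illustrative):

import asyncio
from bitcointalk_crawler import crawl_bitcointalk_post_detail_with_replies

post = asyncio.run(crawl_bitcointalk_post_detail_with_replies(
    "https://bitcointalk.org/index.php?topic=123456.0"  # illustrative topic URL
))
print(post["title"], post["read_count"], post["replies_count"])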

📌 Key Regex Parsing Logic

Extracting the title and read count:

match = re.search(r"Topic:\s*(.+?)\s*\(Read\s+(\d+)\s+times\)", text)

Extracting the user rank / activity / Merit:

rank_match = re.search(r"^(Sr\. Member|Hero Member|Legendary|Full Member|Member|Newbie)", author_inf)
activity_match = re.search(r"Activity:\s*(\d+)", author_inf)
merit_match = re.search(r"Merit:\s*(\d+)", author_inf)
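For intuition, applying these patterns to a typical poster-info string behaves roughly as follows (the sample string is made up for illustration):

import re

author_inf = "Hero Member Offline Activity: 1234 Merit: 567"  # illustrative sample

rank_match = re.search(r"^(Sr\. Member|Hero Member|Legendary|Full Member|Member|Newbie)", author_inf)
activity_match = re.search(r"Activity:\s*(\d+)", author_inf)
merit_match = re.search(r"Merit:\s*(\d+)", author_inf)

print(rank_match.group(1) if rank_match else "unknown")        # Hero Member
print(int(activity_match.group(1)) if activity_match else 0)   # 1234
print(int(merit_match.group(1)) if merit_match else 0)         # 567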


✅ Success and Failure Handling

  • On success, results are saved right away: append_post() and save_done_urls() write to disk in real time (distilled in the sketch after this list)

  • If a request or parse fails, the error is printed and the crawler moves on to the next URL (the whole run is never interrupted)

  • The log gives clear status markers (✅, ❌, ⏩, etc.)
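The resume behaviour boils down to this pattern inside crawl_bitcointalk_by_keywords (a condensed sketch of the code shown above, not a separate implementation):

all_done_urls = load_done_urls()      # URLs finished in earlier runs
new_done_urls = set()

for URL in ALL_URLS:
    if URL in all_done_urls:          # already crawled -> skip
        continue
    try:
        post = await crawl_bitcointalk_post_detail_with_replies(URL)
        append_post(post)             # write the result immediately
        new_done_urls.add(URL)
        save_done_urls(all_done_urls.union(new_done_urls))  # persist progress
    except Exception as e:
        print(f"❌ Error, skipping: {URL} - {e}")
        continue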

📚 Example Data Output

RESULTS_FILE holds a JSON array; each element is the structured data of one complete thread, ready for downstream NLP, classification, sentiment analysis, and similar processing. A plausible record looks like the sketch below.
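Each element follows the result_data structure built in crawl_bitcointalk_post_detail_with_replies; the values below are illustrative, not real scraped data:

{
  "url": "https://bitcointalk.org/index.php?topic=123456.0",
  "time": "January 01, 2024, 12:00:00 PM",
  "title": "Example topic title",
  "content": "Body of the original post...",
  "read_count": 1024,
  "author": "example_user",
  "rank": "Hero Member",
  "activity": 1234,
  "merit": 567,
  "replies_count": 1,
  "replies": [
    {
      "author": "another_user",
      "rank": "Member",
      "activity": 56,
      "merit": 3,
      "time": "January 01, 2024, 01:23:45 PM",
      "content": "Body of the first reply..."
    }
  ]
}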
