[Data Mining] Learning Association Rule Algorithms: Apriori
Apriori is a classic algorithm in association rule mining, used to discover frequent itemsets and strong association rules in a dataset. Its core idea rests on the prior (downward-closure) property: if an itemset is frequent, then all of its subsets must also be frequent. Using this property, the algorithm mines association rules efficiently through a level-wise, iterative search.
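The downward-closure property can be checked directly on a toy basket dataset; the transactions and item names below are made up purely for illustration:

```python
# Five toy transactions (hypothetical items, for illustration only)
transactions = [
    {"milk", "bread"},
    {"milk", "bread", "butter"},
    {"milk", "butter"},
    {"bread", "butter"},
    {"milk", "bread"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item in `itemset`."""
    hits = sum(1 for t in transactions if set(itemset) <= t)
    return hits / len(transactions)

pair = {"milk", "bread"}
print(support(pair, transactions))  # -> 0.6

# Downward closure: each subset is at least as frequent as the set itself
for item in pair:
    assert support({item}, transactions) >= support(pair, transactions)
```

This is why Apriori can prune the search space: once a k-itemset falls below the support threshold, no superset of it needs to be counted.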
Requirements:
Understand and master the classic association rule algorithm Apriori: understand how it works, be able to implement it, and apply it to mine association rules from a given dataset.
Code implementation:
import pandas as pd
from itertools import combinations
from collections import defaultdict

# Read the data
data = pd.read_csv('实验2-Groceries(1).csv')

# Preprocess: convert the string-formatted itemsets into Python sets
transactions = []
for items in data['items']:
    # Strip the braces and quotes, then split on commas
    items_cleaned = items.strip('{}"').replace('"', '').split(',')
    transactions.append(set(items_cleaned))

print(f"Total transactions: {len(transactions)}")
print(f"First 5 transactions: {transactions[:5]}")


def get_frequent_itemsets(transactions, min_support):
    """Find frequent itemsets with the Apriori algorithm."""
    # First scan: count the support of individual items
    item_counts = defaultdict(int)
    for transaction in transactions:
        for item in transaction:
            item_counts[item] += 1

    # Keep the single items that meet the minimum support
    num_transactions = len(transactions)
    frequent_items = {}
    for item, count in item_counts.items():
        support = count / num_transactions
        if support >= min_support:
            frequent_items[frozenset([item])] = support

    current_frequent = frequent_items
    frequent_itemsets = {}
    k = 1
    while current_frequent:
        frequent_itemsets.update(current_frequent)
        # Generate candidate itemsets of size k + 1
        next_candidates = set()
        items = [item for itemset in current_frequent.keys() for item in itemset]
        unique_items = list(set(items))
        if k == 1:
            # For k = 1, pair up the frequent items directly
            for i in range(len(unique_items)):
                for j in range(i + 1, len(unique_items)):
                    next_candidates.add(frozenset([unique_items[i], unique_items[j]]))
        else:
            # For k > 1, join frequent k-itemsets (relies on the prior property)
            for itemset1 in current_frequent:
                for itemset2 in current_frequent:
                    union_set = itemset1.union(itemset2)
                    if len(union_set) == k + 1:
                        next_candidates.add(union_set)
        # Second scan: count the support of the candidates
        candidate_counts = defaultdict(int)
        for transaction in transactions:
            for candidate in next_candidates:
                if candidate.issubset(transaction):
                    candidate_counts[candidate] += 1
        # Keep the candidates that meet the minimum support
        current_frequent = {}
        for itemset, count in candidate_counts.items():
            support = count / num_transactions
            if support >= min_support:
                current_frequent[itemset] = support
        k += 1
    return frequent_itemsets


def generate_association_rules(frequent_itemsets, min_confidence):
    """Generate association rules from the frequent itemsets."""
    rules = []
    for itemset in frequent_itemsets.keys():
        if len(itemset) < 2:
            continue
        support_itemset = frequent_itemsets[itemset]
        # Enumerate all non-empty proper subsets
        all_subsets = []
        for i in range(1, len(itemset)):
            all_subsets.extend(combinations(itemset, i))
        for subset in all_subsets:
            subset = frozenset(subset)
            remaining = itemset - subset
            if remaining:
                support_subset = frequent_itemsets.get(subset, 0)
                if support_subset > 0:
                    confidence = support_itemset / support_subset
                    if confidence >= min_confidence:
                        rules.append((subset, remaining, support_itemset, confidence))
    return rules


# Set the support and confidence thresholds
min_support = 0.05    # 5% support
min_confidence = 0.3  # 30% confidence

# Find the frequent itemsets
frequent_itemsets = get_frequent_itemsets(transactions, min_support)

# Generate the association rules
rules = generate_association_rules(frequent_itemsets, min_confidence)

# Sort the rules by support
sorted_rules = sorted(rules, key=lambda x: x[2], reverse=True)

# Print the frequent itemsets
print("\nFrequent itemsets (support ≥ {}):".format(min_support))
for itemset, support in frequent_itemsets.items():
    if len(itemset) >= 2:  # show multi-item sets only
        print(f"{set(itemset)}: {support:.3f}")

# Print the association rules
print("\nAssociation rules (confidence ≥ {}):".format(min_confidence))
for rule in sorted_rules[:20]:  # show the top 20 rules
    antecedent, consequent, support, confidence = rule
    print(f"{set(antecedent)} => {set(consequent)} "
          f"(support: {support:.3f}, confidence: {confidence:.3f})")

# Try different support and confidence thresholds
parameters = [
    (0.05, 0.3),   # original parameters
    (0.03, 0.4),   # lower support, higher confidence
    (0.08, 0.25),  # higher support, lower confidence
]
for sup, conf in parameters:
    print(f"\nParameters: min_support={sup}, min_confidence={conf}")
    freq_itemsets = get_frequent_itemsets(transactions, sup)
    rules = generate_association_rules(freq_itemsets, conf)
    print(f"Number of frequent itemsets: {len(freq_itemsets)}")
    print(f"Number of association rules: {len(rules)}")
    if rules:
        # Show the rule with the highest support
        top_rule = max(rules, key=lambda x: x[2])
        print("Rule with the highest support:")
        print(f"{set(top_rule[0])} => {set(top_rule[1])} "
              f"(support: {top_rule[2]:.3f}, confidence: {top_rule[3]:.3f})")
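The confidence formula used in rule generation, confidence(X => Y) = support(X ∪ Y) / support(X), can be checked by hand without the CSV file. The support values below are made-up toy numbers, not results from the Groceries data:

```python
# Hypothetical supports for a toy frequent-itemset table
supports = {
    frozenset(["milk"]): 0.8,
    frozenset(["bread"]): 0.8,
    frozenset(["milk", "bread"]): 0.6,
}

def confidence(antecedent, consequent, supports):
    """confidence(X => Y) = support(X ∪ Y) / support(X)"""
    return supports[frozenset(antecedent | consequent)] / supports[frozenset(antecedent)]

c = confidence({"milk"}, {"bread"}, supports)
print(round(c, 3))  # -> 0.75, i.e. 0.6 / 0.8
```

With min_confidence = 0.3, this toy rule {milk} => {bread} would be kept; raising the threshold above 0.75 would discard it, which is exactly the filtering the parameter sweep at the end of the script explores.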