Latent Semantic Analysis (LSA): unsupervised learning method, non-probabilistic model, discriminative model, linear model, non-parametric model, batch learning
Definition
Input: $X=\left[ \begin{array}{cccc} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \vdots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{array} \right]$, document collection $D=\{ d_1,d_2,\cdots,d_n \}$, word collection $W=\{ \omega_1,\omega_2,\cdots,\omega_m \}$, where $x_{ij}$ is the frequency or weight of word $\omega_i$ in document $d_j$.
Output: $U_k = \left[ \begin{array}{cccc} u_1 & u_2 & \cdots & u_k \end{array} \right]$, where each column $u_i$ represents a topic.
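As a toy illustration (a made-up corpus, not the dataset used below): with $W=\{\text{apple},\text{banana},\text{fruit}\}$ and $D=\{d_1,d_2\}$, the word-document matrix might be $X=\left[ \begin{array}{cc} 2 & 0 \\ 0 & 1 \\ 1 & 1 \end{array} \right]$, where $x_{31}=1$ records that "fruit" appears once in $d_1$.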
Perform a singular value decomposition of the word-document matrix $X$: the left matrix serves as the topic vector space, and the product of the diagonal matrix and the right matrix gives the representation of the documents in that topic vector space.
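A minimal sketch of this idea using NumPy's built-in SVD (the toy matrix and the choice k=2 below are illustrative assumptions, not part of the script that follows):

import numpy as np

# Toy word-document matrix (rows = words, columns = documents); values are made up.
X = np.array([[2., 0., 1.],
              [0., 1., 1.],
              [1., 1., 0.],
              [0., 2., 1.]])

k = 2  # number of topics to keep (assumption for this sketch)

# Thin SVD: X = U diag(s) Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

U_k = U[:, :k]                       # topic vector space: each column is one topic
doc_repr = np.diag(s[:k]) @ Vt[:k]   # Sigma_k V_k^T: each column is a document in topic space

print(U_k.shape)       # (4, 2)
print(doc_repr.shape)  # (2, 3)

Each column of doc_repr expresses one document in the k-dimensional topic space; the hand-rolled implementation later in this post reconstructs the same quantities from the eigendecomposition of $X^TX$.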
Input Space
$X=\left[ \begin{array}{cccc} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \vdots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{array} \right]$
import numpy as np
import pandas as pd
import string
# import nltk
# nltk.download('stopwords')  # offline download: https://download.csdn.net/download/nanxiaotao/89743735; place it under the environment's /nltk_data/corpora/stopwords directory
from nltk.corpus import stopwords
import time
def load_data(file):
    '''Load the data. Download: https://download.csdn.net/download/nanxiaotao/89743739
    INPUT:
    file - (str) path to the data file
    OUTPUT:
    org_topics - (list) original topic labels
    text - (list) list of documents (each a list of tokens)
    words - (list) vocabulary list
    '''
    df = pd.read_csv(file)
    org_topics = df['category'].unique().tolist()  # original topic labels
    df.drop('category', axis=1, inplace=True)
    text = []
    words = []
    for i in df['text'].values:
        t = i.translate(str.maketrans('', '', string.punctuation))  # strip punctuation
        t = [j for j in t.split() if j not in stopwords.words('english')]  # remove stopwords
        t = [j for j in t if len(j) > 3]  # keep only words longer than 3 characters
        text.append(t)
        words.extend(set(t))
    words = list(set(words))  # deduplicate to get the vocabulary
    return org_topics, text, words
def frequency_counter(text, words):
    '''Build the word-document matrix X
    INPUT:
    text - (list) list of documents (each a list of tokens)
    words - (list) vocabulary list
    OUTPUT:
    X - (array) word-document matrix
    '''
    X = np.zeros((len(words), len(text)))
    for i in range(len(text)):
        t = text[i]
        for w in t:
            ind = words.index(w)  # row index of word w
            X[ind][i] += 1  # count one occurrence of word w in document i
    return X
org_topics, text, words = load_data('bbc_text.csv')  # load the data
print('Original Topics:')
print(org_topics)  # print the original topic labels
X = frequency_counter(text, words)  # build the word-document matrix
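Note that frequency_counter calls words.index(w) for every token, which scans the whole vocabulary each time. A possible alternative sketch (frequency_counter_fast is a hypothetical name, not part of the original script) that builds the same matrix with a word-to-row-index dictionary:

def frequency_counter_fast(text, words):
    '''Build the word-document matrix X using a word -> row-index dictionary.'''
    word_index = {w: i for i, w in enumerate(words)}  # O(1) lookups instead of list.index
    X = np.zeros((len(words), len(text)))
    for j, doc in enumerate(text):
        for w in doc:
            X[word_index[w]][j] += 1
    return X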
Algorithm
$X \approx U_k \Sigma_k V_k^T=\left[ \begin{array}{cccc} u_{1} & u_{2} & \cdots & u_{k} \end{array} \right]\left[ \begin{array}{cccc} \sigma_{1}v_{11} & \sigma_{1}v_{21} & \cdots & \sigma_{1}v_{n1} \\ \sigma_{2}v_{12} & \sigma_{2}v_{22} & \cdots & \sigma_{2}v_{n2} \\ \vdots & \vdots & \vdots & \vdots \\ \sigma_{k}v_{1k} & \sigma_{k}v_{2k} & \cdots & \sigma_{k}v_{nk} \end{array} \right]$, where $u_l = \left[ \begin{array}{c} u_{1l} \\ u_{2l} \\ \vdots \\ u_{ml} \end{array} \right]$, $l=1,2,\cdots,k$
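The implementation below does not call an SVD routine directly; it obtains $V_k$ and the singular values from the eigendecomposition of $X^TX$ and then recovers the left singular vectors from the factorization itself: $X^TX = V\Sigma^2V^T$, so $\sigma_l=\sqrt{\lambda_l}$ (with $\lambda_l$ the $l$-th largest eigenvalue of $X^TX$ and $v_l$ the corresponding eigenvector), and $X = U\Sigma V^T \Rightarrow XV = U\Sigma \Rightarrow u_l = \frac{1}{\sigma_l}Xv_l$.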
def do_lsa(X, k, words):
    '''Latent semantic analysis
    INPUT:
    X - (array) word-document matrix
    k - (int) number of topics
    words - (list) vocabulary list
    OUTPUT:
    topics - (list) generated topics
    '''
    w, v = np.linalg.eig(np.matmul(X.T, X))  # eigendecomposition of X^T X
    sort_inds = np.argsort(w)[::-1]  # indices of eigenvalues in descending order
    w = np.sort(w)[::-1]
    V_T = []
    for ind in sort_inds:
        # eigenvectors are the columns of v; normalize each to unit length
        V_T.append(v[:, ind] / np.linalg.norm(v[:, ind]))
    V_T = np.array(V_T)  # rows of V_T are the right singular vectors
    Sigma = np.diag(np.sqrt(w))  # singular values are square roots of the eigenvalues
    U = np.zeros((len(words), k))
    for i in range(k):
        # left singular vectors: u_i = X v_i / sigma_i
        ui = np.matmul(X, V_T.T[:, i]) / Sigma[i][i]
        U[:, i] = ui
    topics = []
    for i in range(k):
        inds = np.argsort(U[:, i])[::-1]  # words with the largest weights on topic i
        topic = []
        for j in range(10):
            topic.append(words[inds[j]])  # take the top 10 words as the topic description
        topics.append(' '.join(topic))
    return topics
k = 5  # set the number of topics to 5
topics = do_lsa(X, k, words)  # run latent semantic analysis
print('Generated Topics:')
for i in range(k):
    print('Topic {}: {}'.format(i + 1, topics[i]))  # print each generated topic
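As an optional cross-check (not part of the original script), the same topics can be extracted with NumPy's built-in SVD instead of the eigendecomposition of $X^TX$; singular vectors are only determined up to sign, so the word orderings may differ from do_lsa:

# Optional cross-check using np.linalg.svd on the X, k, words defined above.
U_svd, s_svd, Vt_svd = np.linalg.svd(X, full_matrices=False)
for i in range(k):
    top_words = [words[j] for j in np.argsort(U_svd[:, i])[::-1][:10]]
    print('SVD Topic {}: {}'.format(i + 1, ' '.join(top_words)))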