KNN for text classification using TF-IDF scores

I have a CSV file (corpus.csv) containing graded abstracts (text) of a corpus in the following format:

Institute,Score,Abstract
----------------------------------------------------------------------
UoM,3.0,Hello,this is abstract one
UoM,3.2,this is abstract two and yet counting.
UoE,3.1,yet another abstract but this is a unique one.
UoE,2.2,please no more abstract.

I am trying to create a KNN classification program in Python that takes a user-input abstract, e.g. "this is a new unique abstract", classifies it against the closest abstracts in the corpus (the CSV), and also returns the predicted score/grade for that abstract. How can I achieve this?

I have the following code:

from sklearn.feature_extraction.text import TfidfVectorizer
from nltk.corpus import stopwords
import numpy as np
import pandas as pd
from csv import reader,writer
import operator as op
import string

#Read data from corpus (skip the header row)
r = reader(open('corpus.csv','r'))
abstract_list = []
score_list = []
institute_list = []
row_count = 0
for row in list(r)[1:]:
    #Abstracts may themselves contain commas, so rejoin everything after the score
    institute,score,abstract = row[0],row[1],','.join(row[2:])
    if len(abstract.split()) > 0:
      institute_list.append(institute)
      score = float(score)
      score_list.append(score)
      #str.translate needs a mapping table in Python 3 to strip punctuation
      abstract = abstract.translate(str.maketrans('','',string.punctuation)).lower()
      abstract_list.append(abstract)
      row_count = row_count + 1

print("Total processed data: ",row_count)

#Vectorize (TF-IDF, ngrams 1-4, English stop words removed) using sklearn
vectorizer = TfidfVectorizer(analyzer='word',ngram_range=(1,4),min_df=0,stop_words='english',sublinear_tf=True)
response = vectorizer.fit_transform(abstract_list)
#get_feature_names() was removed in recent scikit-learn releases
feature_names = vectorizer.get_feature_names_out()

Given the code above, how can I use the features from the TF-IDF computation for KNN classification as described? (Possibly using the sklearn.neighbors KNeighborsClassifier framework.)

P.S. The class for each case is the abstract's corresponding score/grade.

I have a background in deep learning for vision, but I lack a lot of knowledge in text classification, especially with KNN. Any help would be greatly appreciated. Thanks in advance.

szv123_rier's answer: KNN for text classification using TF-IDF scores

KNN is a classification algorithm, which means you must have a class attribute. KNN can take the TF-IDF output as its input matrix (TrainX), but you still need TrainY, a class label for each row of your data. However, you can use a KNN regressor instead and use the scores as the target variable:

from sklearn.feature_extraction.text import TfidfVectorizer
from nltk.corpus import stopwords
import numpy as np
import pandas as pd
from csv import reader,writer
import operator as op
import string
from sklearn import neighbors

#Read data from corpus (skip the header row)
r = reader(open('corpus.csv','r'))
abstract_list = []
score_list = []
institute_list = []
row_count = 0
for row in list(r)[1:]:
    #Abstracts may themselves contain commas, so rejoin everything after the score
    institute,score,abstract = row[0],row[1],','.join(row[2:])
    if len(abstract.split()) > 0:
      institute_list.append(institute)
      score = float(score)
      score_list.append(score)
      #str.translate needs a mapping table in Python 3 to strip punctuation
      abstract = abstract.translate(str.maketrans('','',string.punctuation)).lower()
      abstract_list.append(abstract)
      row_count = row_count + 1

print("Total processed data: ",row_count)

#Vectorize (TF-IDF, ngrams 1-4, English stop words removed) using sklearn
vectorizer = TfidfVectorizer(analyzer='word',ngram_range=(1,4),min_df=0,stop_words='english',sublinear_tf=True)
response = vectorizer.fit_transform(abstract_list)
classes = score_list   #the scores act as the regression target
#get_feature_names() was removed in recent scikit-learn releases
feature_names = vectorizer.get_feature_names_out()

#Fit a 1-nearest-neighbour regressor on the TF-IDF matrix
clf = neighbors.KNeighborsRegressor(n_neighbors=1)
clf.fit(response,classes)
clf.predict(response)

The predict call returns a predicted score for each instance.
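To score a new user-input abstract like the one in the question, you can reuse the fitted vectorizer with transform (not fit_transform) and then call predict; kneighbors also tells you which corpus abstract is closest. A minimal sketch assuming the code above has already run (new_abstract, new_vec and predicted_score are illustrative names, not part of the original code):

#Preprocess the user input the same way as the corpus abstracts
new_abstract = "this is a new unique abstract"
new_abstract = new_abstract.translate(str.maketrans('','',string.punctuation)).lower()

#transform() reuses the vocabulary learned by fit_transform()
new_vec = vectorizer.transform([new_abstract])

#Predicted score/grade for the new abstract
predicted_score = clf.predict(new_vec)
print("Predicted score:", predicted_score[0])

#Nearest corpus abstract (index into the training data used in fit)
distance, index = clf.kneighbors(new_vec, n_neighbors=1)
print("Closest abstract:", abstract_list[index[0][0]], "with score", score_list[index[0][0]])

With n_neighbors=1 the predicted score is simply the score of that closest abstract; a larger n_neighbors would average the scores of several neighbours.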

