Python NLP - sklearn - text classifier with negative and positive labels returns identical unigrams and bigrams

I am trying to build a text classifier that determines whether an abstract indicates an access-to-care research project. I am importing from a dataset with two fields: abstract and accessclass. The abstract is a ~500-word description of the project, and accessclass is 0 for non-access-related projects and 1 for access-related ones. I am still in the development stage, but when I look at the top unigrams and bigrams for the 0 and 1 labels, they are identical, even though the texts differ noticeably in tone. Is something missing from my code? For example, am I accidentally mixing the negative and positive examples together? Any help is appreciated.

import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn import naive_bayes

df = pd.read_excel("accessclasses.xlsx")
df.head()

col = ['accessclass', 'abstract']
df = df[col]
df = df[pd.notnull(df['abstract'])]  # drop rows with a missing abstract
df['category_id'] = df['accessclass'].factorize()[0]
category_id_df = df[['accessclass', 'category_id']].drop_duplicates().sort_values('category_id')
category_to_id = dict(category_id_df.values)
id_to_category = dict(category_id_df[['category_id', 'accessclass']].values)
df.head()

# Unigram + bigram TF-IDF features; English stop words are removed and
# a term must appear in at least min_df=4 documents to be kept.
tfidf = TfidfVectorizer(sublinear_tf=True, min_df=4, norm='l2',
                        encoding='latin-1', ngram_range=(1, 2),
                        stop_words='english')
features = tfidf.fit_transform(df.abstract).toarray()
labels = df.category_id
print(features.shape)

from sklearn.feature_selection import chi2

# For each class, rank all features by their chi-squared correlation
# with membership in that class, then show the top N of each n-gram size.
N = 2
for accessclass, category_id in sorted(category_to_id.items()):
    features_chi2 = chi2(features, labels == category_id)
    indices = np.argsort(features_chi2[0])
    feature_names = np.array(tfidf.get_feature_names_out())[indices]  # get_feature_names() on scikit-learn < 1.0
    unigrams = [v for v in feature_names if len(v.split(' ')) == 1]
    bigrams = [v for v in feature_names if len(v.split(' ')) == 2]
    print("# '{}':".format(accessclass))
    print("  . Most correlated unigrams:\n. {}".format('\n. '.join(unigrams[-N:])))
    print("  . Most correlated bigrams:\n. {}".format('\n. '.join(bigrams[-N:])))
violawu's answer:

I think the problem in your code is that min_df is set to a relatively large value, 4, for such a small dataset. According to the data you posted, the most frequent words are stop words, and those are removed once you use TfidfVectorizer with stop_words='english'. Here they are:

to :  19
and :  11
a :  6
the :  6
are :  6
of :  6
for :  5
is :  4
in :  4
will :  4
access :  4
I :  4
times :  4
healthcare :  3
more :  3
have :  3
with :  3
...

And those are just the unigrams; the bigram counts will be even lower.
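
For reference, a frequency list like the one above can be reproduced with a quick raw-token count over the abstracts (a minimal sketch, assuming the df['abstract'] column from the question):

from collections import Counter

# Raw whitespace-token counts across all abstracts, before any
# stop-word filtering -- this mirrors the list above.
counts = Counter(token for text in df['abstract'] for token in text.split())
for word, n in counts.most_common(20):
    print(word, ': ', n)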

You can fix this in one of two ways (both are sketched after this list):

  • Set the stop_words parameter to None, i.e. stop_words=None
  • Keep the stop-word filter but set min_df lower than 4, e.g. 1 or 2
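
For concreteness, here is what each option looks like applied to the vectorizer from the question (a sketch; only the stop_words and min_df arguments change):

# Option 1: keep the stop words (not recommended; they dominate the rankings)
tfidf = TfidfVectorizer(sublinear_tf=True, min_df=4, norm='l2',
                        encoding='latin-1', ngram_range=(1, 2),
                        stop_words=None)

# Option 2: keep the stop-word filter but lower the document-frequency cutoff
tfidf = TfidfVectorizer(sublinear_tf=True, min_df=1, norm='l2',
                        encoding='latin-1', ngram_range=(1, 2),
                        stop_words='english')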

I recommend the second option, because the first one will just return correlated stop words, which doesn't help at all. I tried it with min_df=1, and the results look like this:

  . Most correlated unigrams:
. times
. access

  . Most correlated bigrams:
. enjoyed watching
. wait times
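
From there, the so-far-unused train_test_split and naive_bayes imports in the question suggest the intended next step. A minimal sketch of that step, assuming MultinomialNB is an acceptable baseline model:

from sklearn.model_selection import train_test_split
from sklearn import naive_bayes

# Hold out a test set and fit a simple Naive Bayes baseline on the
# TF-IDF features built above.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)
clf = naive_bayes.MultinomialNB().fit(X_train, y_train)
print(clf.score(X_test, y_test))  # mean accuracy on the held-out set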