Matcher returns some duplicates

I want the output to be ["good customer service", "great ambience"], but I get ["good customer", "good customer service", "great ambience"], because the pattern also matches "good customer", which doesn't make sense on its own. How do I remove duplicates like these?

import spacy
from spacy.matcher import Matcher
nlp = spacy.load("en_core_web_sm")
doc = nlp("good customer service and great ambience")
matcher = Matcher(nlp.vocab)

# Create a pattern matching an adjective followed by one or more nouns
pattern = [{"POS": "ADJ"}, {"POS": "NOUN", "OP": "+"}]

matcher.add("ADJ_NOUN_PATTERN", [pattern])  # spaCy v3 API: patterns are passed as a list

matches = matcher(doc)
print("Matches:", [doc[start:end].text for match_id, start, end in matches])

lyc5748056 answered: Matcher returns some duplicates

spaCy has a built-in function for exactly this. Check out filter_spans.

The docs say:

When spans overlap, the (first) longest span is preferred over shorter spans.

Example:

from spacy.util import filter_spans

doc = nlp("This is a sentence.")
spans = [doc[0:2], doc[0:2], doc[0:4]]
filtered = filter_spans(spans)

Alternatively, you can post-process the matches by grouping the tuples by start index and keeping only the tuple with the largest end index:


from itertools import groupby

# ...
matches = matcher(doc)
results = [max(list(group), key=lambda x: x[2])
           for key, group in groupby(matches, lambda prop: prop[1])]
print("Matches:", [doc[start:end].text for match_id, start, end in results])
# => Matches: ['good customer service', 'great ambience']

Here groupby(matches, lambda prop: prop[1]) groups the matches by their start index, in this case into [(5488211386492616699, 0, 2), (5488211386492616699, 0, 3)] and [(5488211386492616699, 4, 6)], and max(list(group), key=lambda x: x[2]) grabs from each group the item with the largest end index (here the value 3).

Link to this article: https://www.f2er.com/3118842.html
