What is the best way to find all occurrences of values from one dataframe in another dataframe?

I am working on a Spark cluster and have two dataframes. One contains text; the other is a lookup table. Both tables are huge (M and N can each easily exceed 100,000 entries). What is the best way to match them?

Doing a cross join and then filtering the results on matches seems like a crazy idea, since I would almost certainly run out of memory.
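For reference, the naive plan I am trying to avoid would look something like this (a rough sketch; the instr-based substring match is just for illustration):

from pyspark.sql.functions import expr, collect_list

# Naive approach: materializes M x N rows before filtering -- likely too
# expensive for 100K x 100K entries.
matches = (df1.crossJoin(df2)
              .where(expr('instr(text, fruit_lookup) > 0'))
              .groupBy('text')
              .agg(collect_list('fruit_lookup').alias('extracted_fruits')))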

My dataframes look like this:

df1:

        text
0       i like apples
1       oranges are good
2       eating bananas is healthy
.       ...
.       ...
M       tomatoes are red,bananas are yellow

df2:

        fruit_lookup
0       apples
1       oranges
2       bananas
.       ...
.       ...
N       tomatoes

I want the output dataframe to look like this:

output_df:

        text                                     extracted_fruits
0       i like apples                            ['apples']
1       oranges are good                         ['oranges']
2       eating bananas is healthy                ['bananas']
.       ...
.       ...
M       tomatoes are red,bananas are yellow      ['tomatoes','bananas']
uwycny's answer:

One way is to use CountVectorizerModel, since it should comfortably handle 100K lookup words (the default vocabSize is 262144).

The basic idea is to create a CountVectorizerModel from a custom vocabulary list built from df2 (the lookup table), split df1.text into an array column, transform that column into a SparseVector, and then map the vector back to words:

Edit: in the split step, the regex was adjusted from \s+ to [\s\p{Punct}]+ so that all punctuation is stripped as well. If the lookup should be case-insensitive, change 'text' to lower(col('text')).
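For reference, the single-word version that this edit describes would look roughly like this (a sketch; it only works while every lookup entry is a single word, which is what Edit 2 below fixes):

from pyspark.sql.functions import split, lower, col

# split on runs of whitespace/punctuation; lowercase for case-insensitive lookups
df1_words = df1.withColumn('words_arr', split(lower(col('text')), r'[\s\p{Punct}]+'))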

from pyspark.ml.feature import CountVectorizerModel
from pyspark.sql.functions import split, udf, regexp_replace, lower

df2.show()
+---+------------+
| id|fruit_lookup|
+---+------------+
|  0|      apples|
|  1|     oranges|
|  2|     bananas|
|  3|    tomatoes|
|  4|dragon fruit|
+---+------------+

Edit 2: added the following df1 preprocessing step, which creates an array column containing all N-gram combinations. For each string of L words, N = 2 adds (L-1) items to the array, and N = 3 adds (L-1)+(L-2) items.

# max number of words in a single entry of the lookup table df2
N = 2

# Pre-process the `text` field into all n-grams up to N.
# Example: ngram_str('oranges are good', 3)
#   --> ['oranges', 'are', 'good', 'oranges are', 'are good', 'oranges are good']
def ngram_str(s_t_r, N):
    arr = s_t_r.split()
    L = len(arr)
    # append all n-grams for n = 2..N; each slice arr[j:j+i] stays within
    # the first L elements (the original words), so extending arr is safe
    for i in range(2, N + 1):
        if L - i < 0: break
        arr += [' '.join(arr[j:j + i]) for j in range(L - i + 1)]
    return arr

udf_ngram_str = udf(lambda x: ngram_str(x, N), 'array<string>')

df1_processed = df1.withColumn(
    'words_arr',
    udf_ngram_str(lower(regexp_replace('text', r'[\s\p{Punct}]+', ' ')))
)

Apply the model to the processed df1:

lst = [r.fruit_lookup for r in df2.collect()]

model = CountVectorizerModel.from_vocabulary(lst, inputCol='words_arr', outputCol='fruits_vec')

df3 = model.transform(df1_processed)
df3.show(20, 40)
#+----------------------------------------+----------------------------------------+-------------------+
#|                                    text|                               words_arr|         fruits_vec|
#+----------------------------------------+----------------------------------------+-------------------+
#|                           I like apples|  [i, like, apples, i like, like apples]|      (5,[0],[1.0])|
#|                        oranges are good|[oranges, are, good, oranges are, are...|      (5,[1],[1.0])|
#|               eating bananas is healthy|[eating, bananas, is, healthy, eating...|      (5,[2],[1.0])|
#|     tomatoes are red,bananas are yellow|[tomatoes, are, red, bananas, are, ye...|(5,[2,3],[1.0,1.0])|
#|                                    test|                                  [test]|          (5,[],[])|
#|I have dragon fruit and apples in my bag|[i, have, dragon, fruit, and, apples,...|(5,[0,4],[1.0,1.0])|
#+----------------------------------------+----------------------------------------+-------------------+
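To read the fruits_vec column: (5,[0],[1.0]) is Spark's compact SparseVector notation of (size, indices, values), and the indices are what get mapped back to words below. A small standalone illustration (a hypothetical snippet, not from the original answer):

from pyspark.ml.linalg import SparseVector

# a length-5 vector with value 1.0 at index 0, i.e. the vocabulary word
# at index 0 ('apples') occurred once
v = SparseVector(5, [0], [1.0])
print(v.indices)  # [0]
print(v.values)   # [1.]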

Then you can use model.vocabulary to map fruits_vec back to the fruit names:
vocabulary = model.vocabulary
#['apples', 'oranges', 'bananas', 'tomatoes', 'dragon fruit']

to_match = udf(lambda v: [vocabulary[i] for i in v.indices], 'array<string>')

df_new = df3.withColumn('extracted_fruits', to_match('fruits_vec')).drop('words_arr', 'fruits_vec')
df_new.show(truncate=False)
#+----------------------------------------+----------------------+
#|text                                    |extracted_fruits      |
#+----------------------------------------+----------------------+
#|I like apples                           |[apples]              |
#|oranges are good                        |[oranges]             |
#|eating bananas is healthy               |[bananas]             |
#|tomatoes are red,bananas are yellow     |[bananas, tomatoes]   |
#|test                                    |[]                    |
#|I have dragon fruit and apples in my bag|[apples, dragon fruit]|
#+----------------------------------------+----------------------+

Method 2: Since your dataset is not huge as far as Spark is concerned, the following might also work; per your comment, it handles lookup values that contain multiple words:

from pyspark.sql.functions import expr, collect_set

df1.alias('d1').join(
    df2.alias('d2'),
    expr('d1.text rlike concat("\\\\b", d2.fruit_lookup, "\\\\b")'),
    'left'
).groupby('text') \
 .agg(collect_set('fruit_lookup').alias('extracted_fruits')) \
 .show()
+--------------------+-------------------+
|                text|   extracted_fruits|
+--------------------+-------------------+
|    oranges are good|          [oranges]|
|       I like apples|           [apples]|
|tomatoes are red,...|[tomatoes, bananas]|
|eating bananas is...|          [bananas]|
|                test|                 []|
+--------------------+-------------------+

where \\\\b is a word boundary, so the lookup values only match whole words rather than substrings of larger words.
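To illustrate why the boundaries matter (a hypothetical example, assuming an active spark session): without them, 'apples' would also match inside 'pineapples':

spark.sql(r"""
    SELECT 'i like pineapples' rlike '\\bapples\\b' AS with_boundary,
           'i like pineapples' rlike 'apples'       AS without_boundary
""").show()
#+-------------+----------------+
#|with_boundary|without_boundary|
#+-------------+----------------+
#|        false|            true|
#+-------------+----------------+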

Note: you may need to clean out all punctuation and redundant whitespace in both columns before the dataframe join.
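A minimal sketch of that cleanup, reusing the same [\s\p{Punct}]+ pattern from the first method (the exact normalization is an assumption):

from pyspark.sql.functions import col, lower, regexp_replace, trim

def normalize(c):
    # collapse runs of whitespace/punctuation into a single space, then trim
    return trim(regexp_replace(lower(col(c)), r'[\s\p{Punct}]+', ' '))

df1_clean = df1.withColumn('text', normalize('text'))
df2_clean = df2.withColumn('fruit_lookup', normalize('fruit_lookup'))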
