LSH in the Scala and Python APIs

I am following the post Efficient string matching in Apache Spark to do some string matching with the LSH algorithm. For some reason I get results through the Python API, but not in Scala, and I cannot see what is actually missing from the Scala code.

Here are the two versions of the code:

from pyspark.ml import Pipeline
from pyspark.ml.feature import RegexTokenizer, NGram, HashingTF, MinHashLSH

query = spark.createDataFrame(["Bob Jones"], "string").toDF("text")

db = spark.createDataFrame(["Tim Jones"], "string").toDF("text")

model = Pipeline(stages=[
    RegexTokenizer(
        pattern="", inputCol="text", outputCol="tokens", minTokenLength=1
    ),
    NGram(n=3, inputCol="tokens", outputCol="ngrams"),
    HashingTF(inputCol="ngrams", outputCol="vectors"),
    MinHashLSH(inputCol="vectors", outputCol="lsh")
]).fit(db)

db_hashed = model.transform(db)
query_hashed = model.transform(query)

model.stages[-1].approxSimilarityJoin(db_hashed, query_hashed, 0.75).show()

It returns:

    +--------------------+--------------------+-------+
    |            datasetA|            datasetB|distCol|
    +--------------------+--------------------+-------+
    |[Tim Jones, [t, i...|[Bob Jones, [b, o...|    0.6|
    +--------------------+--------------------+-------+
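The 0.6 in distCol is just the Jaccard distance between the two sets of character 3-grams, which can be checked by hand in plain Scala. This is a self-contained sketch mimicking what the RegexTokenizer (pattern="", lower-cased single characters) and NGram (n=3) stages produce; `charTrigrams` and `jaccardDistance` are hypothetical helpers, not Spark APIs:

```scala
// Mimic the pipeline's tokenization: lower-case, split into single
// characters, then slide a window of 3 tokens joined by spaces.
def charTrigrams(s: String): Set[String] =
  s.toLowerCase.toSeq.map(_.toString).sliding(3).map(_.mkString(" ")).toSet

// Exact Jaccard distance: 1 - |A ∩ B| / |A ∪ B|.
def jaccardDistance(a: Set[String], b: Set[String]): Double =
  1.0 - a.intersect(b).size.toDouble / a.union(b).size

val d = jaccardDistance(charTrigrams("Bob Jones"), charTrigrams("Tim Jones"))
println(d)  // 0.6 — each name yields 7 trigrams, 4 shared, union of 10
```

So the Python result is exactly what the exact Jaccard computation predicts, which makes the empty Scala result all the more puzzling.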

But the Scala version returns nothing. Here is the code:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{HashingTF, MinHashLSH, MinHashLSHModel, NGram, RegexTokenizer}

val tokenizer = new RegexTokenizer().setPattern("").setInputCol("text").setMinTokenLength(1).setOutputCol("tokens")
val ngram = new NGram().setN(3).setInputCol("tokens").setOutputCol("ngrams")
val vectorizer = new HashingTF().setInputCol("ngrams").setOutputCol("vectors")
val lsh = new MinHashLSH().setInputCol("vectors").setOutputCol("lsh")

val pipeline = new Pipeline().setStages(Array(tokenizer, ngram, vectorizer, lsh))
val query = Seq("Bob Jones").toDF("text")
val db = Seq("Tim Jones").toDF("text")
val model = pipeline.fit(db)
val dbHashed = model.transform(db)
val queryHashed = model.transform(query)
model.stages.last.asInstanceOf[MinHashLSHModel].approxSimilarityJoin(dbHashed, queryHashed, 0.75).show
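For context, the min-hash trick that MinHashLSH relies on can be illustrated with a small, self-contained sketch. This is not Spark's implementation — it simulates the idea with MurmurHash3 seeds and hypothetical helpers (`trigrams`, `minhashSignature`): hash every 3-gram under several hash functions, keep the minimum per function, and estimate Jaccard similarity as the fraction of matching minima.

```scala
import scala.util.hashing.MurmurHash3

// Same character-trigram sets as the pipeline's tokenizer + NGram stages.
def trigrams(s: String): Set[String] =
  s.toLowerCase.toSeq.map(_.toString).sliding(3).map(_.mkString(" ")).toSet

// One min-hash value per seed: the minimum hash over all trigrams.
def minhashSignature(grams: Set[String], numHashes: Int): Seq[Int] =
  (0 until numHashes).map(seed => grams.map(g => MurmurHash3.stringHash(g, seed)).min)

val a = minhashSignature(trigrams("Bob Jones"), 256)
val b = minhashSignature(trigrams("Tim Jones"), 256)

// The fraction of agreeing signature positions estimates Jaccard similarity;
// the exact value for these two names is 0.4 (distance 0.6), so the
// estimate should land near 0.4.
val estimated = a.zip(b).count { case (x, y) => x == y }.toDouble / a.size
println(f"estimated Jaccard similarity ≈ $estimated%.2f")
```

Since both names sit at distance 0.6 — under the 0.75 threshold — the join should clearly return the pair in Scala too, just as it does in Python.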

I am using Spark 3.0. I know it is a preview release, but I cannot really test this on other versions, and I doubt there is a bug like this :)

zhuxiwangzhenliang's answer: LSH in the Scala and Python APIs

No good solution yet; if you have one, please email iooj@foxmail.com