PySpark: how to cast all columns to string data type

My main goal is to cast all columns of any df to string, so that comparison becomes easy.
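For context, the kind of comparison I want this to enable is something like the sketch below (just an illustration; subtract compares rows by value, so it needs both sides to have the same schema):

# Once every column on both sides is a string, row-level diffs become straightforward
diff = source_df.subtract(target_df)  # rows in source_df that are missing from target_df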

I have already tried several suggested approaches, but none of them worked:

target_df = target_df.select([col(c).cast("string") for c in target_df.columns])

This gives the error:

pyspark.sql.utils.AnalysisException: "Can't extract value from SDV#155: need struct type but got string;"

The next thing I tried was:

target_df = target_df.select([col(c).cast(StringType()).alias(c) for c in columns_list])

Error:

pyspark.sql.utils.AnalysisException: "Can't extract value from SDV#27: need struct type but got string;"

The next approach was:

for column in target_df.columns:
    target_df = target_df.withColumn(column, target_df[column].cast('string'))

Error:

pyspark.sql.utils.AnalysisException: "Can't extract value from SDV#27: need struct type but got string;"

A few lines of code that run before the cast:

columns_list = source_df.columns.copy()
target_df = target_df.toDF(*columns_list)

Sample df schema I am trying this on:

root
 |-- A: string (nullable = true)
 |-- S: string (nullable = true)
 |-- D: string (nullable = true)
 |-- F: string (nullable = true)
 |-- G: double (nullable = true)
 |-- H: double (nullable = true)
 |-- J: string (nullable = true)
 |-- K: string (nullable = true)
 |-- L: string (nullable = true)
 |-- M: string (nullable = true)
 |-- N: string (nullable = true)
 |-- B: string (nullable = true)
 |-- V: string (nullable = true)
 |-- C: string (nullable = true)
 |-- X: string (nullable = true)
 |-- Y: string (nullable = true)
 |-- U: double (nullable = true)
 |-- I: string (nullable = true)
 |-- R: string (nullable = true)
 |-- T: string (nullable = true)
 |-- Q: string (nullable = true)
 |-- E: double (nullable = true)
 |-- W: string (nullable = true)
 |-- AS: string (nullable = true)
 |-- DSC: string (nullable = true)
 |-- DCV: string (nullable = true)
 |-- WV: string (nullable = true)
 |-- SDV: string (nullable = true)
 |-- SDV.1: string (nullable = true)
 |-- WDV: string (nullable = true)
 |-- FWFV: string (nullable = true)
 |-- ERBVSER: string (nullable = true)
web_css_web_css answered:

As suggested, the error comes from the dot in the column named SDV.1; you have to wrap the name in backticks when selecting that column:

for column in target_df.columns:
    target_df = target_df.withColumn(column, target_df['`{}`'.format(column)].cast('string'))

or, in a single select:

target_df = target_df.select([col('`{}`'.format(c)).cast(StringType()).alias(c) for c in columns_list])
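
To see why the backticks matter, here is a minimal sketch (the toy DataFrame below is an assumption, not your data). Without backticks, Spark parses the dot in SDV.1 as struct-field access on a column named SDV, which is exactly what the "need struct type but got string" message complains about:

from pyspark.sql.functions import col

df = spark.createDataFrame([('a', 'b')], ['SDV', 'SDV.1'])

# df.select(col('SDV.1'))  # raises AnalysisException: Spark tries to extract field "1" from string column SDV
df.select(col('`SDV.1`').cast('string'))  # backticks make Spark treat the whole name as one column
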
Another answer:

I don't think there is anything wrong with your approach:

>>> from pyspark.sql.functions import col
>>> df = spark.createDataFrame([(1,25),(1,20),(1,20),(2,26)],['id','age'])
>>> df.show()
+---+---+
| id|age|
+---+---+
|  1| 25|
|  1| 20|
|  1| 20|
|  2| 26|
+---+---+

>>> df.printSchema()
root
 |-- id: long (nullable = true)
 |-- age: long (nullable = true)

>>> df.select([col(i).cast('string') for i in df.columns]).printSchema()
root
 |-- id: string (nullable = true)
 |-- age: string (nullable = true)

>>> df.select([col(i).cast('string') for i in df.columns]).show()
+---+---+
| id|age|
+---+---+
|  1| 25|
|  1| 20|
|  1| 20|
|  2| 26|
+---+---+
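A side note: the demo above works because these column names contain no dots. If you would rather not backtick-quote every column, one alternative is to rename the columns to strip the dots before casting (a sketch, assuming renaming is acceptable for your comparison):

from pyspark.sql.functions import col

# Rename dotted columns positionally (e.g. 'SDV.1' -> 'SDV_1'); toDF does not resolve
# columns by name, so it is safe even while the names still contain dots
safe_names = [c.replace('.', '_') for c in target_df.columns]
target_df = target_df.toDF(*safe_names)

# Now a plain cast over all columns works without any quoting
target_df = target_df.select([col(c).cast('string') for c in target_df.columns])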