PySpark can't compute a column-wise standard deviation on a Koalas DataFrame

I have a Koalas DataFrame in PySpark. I want to compute the standard deviation across a set of columns. I have tried:

df2['x_std'] = df2[['x_1','x_2','x_3','x_4','x_5','x_6','x_7','x_8','x_9','x_10','x_11','x_12']].std(axis = 1) 

I get the following error:

TypeError: 'DataFrame' object does not support item assignment

I have also tried something similar:

d1 = df2[['x_1','x_12']].std(axis = 1) 

df2['x_std'] = d1 # d1 is a Koalas Series that should get assigned to the new column.

When I run this I get this error:

Cannot combine column argument because it comes from a different dataframe

I'm completely new to Koalas. Can anyone suggest something? Thanks.

Answer by wjw850329:

You can set the option "compute.ops_on_diff_frames" to True and then perform the operation.

import databricks.koalas as ks

# Allow operations that combine columns/Series from different DataFrames
ks.set_option("compute.ops_on_diff_frames", True)

kdf = ks.DataFrame({
    'a': [1, 2, 3, 4, 5, 6],
    'b': [2, 1, 7, 4, 2, 3],
    'c': [3, 7, 1, 4, 6, 5],
    'd': [4, 2, 3, 4, 3, 8],
})

# axis=1 computes the standard deviation across the columns, per row
kdf['dev'] = kdf[['a', 'b', 'c', 'd']].std(axis=1)
print(kdf.sort_index())

   a  b  c  d       dev
0  1  2  3  4  1.290994
1  2  1  7  2  2.708013
2  3  7  1  3  2.516611
3  4  4  4  4  0.000000
4  5  2  6  3  1.825742
5  6  3  5  8  2.081666

This is disallowed by default, though, so I'm not sure it's good practice.
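As a quick sanity check that doesn't need a Spark cluster, the same per-row standard deviation can be computed in plain pandas, whose API Koalas mirrors. This is a minimal sketch using the same four-column layout as the answer's example; the column names and data are illustrative:

```python
import pandas as pd

# Sample data with four numeric columns
pdf = pd.DataFrame({
    'a': [1, 2, 3, 4, 5, 6],
    'b': [2, 1, 7, 4, 2, 3],
    'c': [3, 7, 1, 4, 6, 5],
    'd': [4, 2, 3, 4, 3, 8],
})

# axis=1 takes the sample standard deviation (ddof=1) across columns, per row;
# plain pandas allows assigning the resulting Series back as a new column
pdf['dev'] = pdf[['a', 'b', 'c', 'd']].std(axis=1)
print(pdf)
```

In pandas this item assignment just works; the Koalas restriction exists because combining Series from different DataFrames can trigger an expensive join between two distributed datasets, which is why it is gated behind "compute.ops_on_diff_frames".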
