I have a bunch of JSON documents corresponding to database changes, roughly in the following format:
{
  "key": "abc",
  "timestamp": 1573085110000,
  "cols": [
    {"name": "COL1", "value": "14"},
    {"name": "COL2", "value": "Some Text"}
  ]
}
Loading these into a Spark dataframe yields:
+---+-------------+--------------------+
|key|    timestamp|                cols|
+---+-------------+--------------------+
|abc|1573084821000|  [[COL1,14],[COL...|
|def|1573171513000|  [[COL1,xx],[COL...|
|...|          ...|                 ...|
+---+-------------+--------------------+
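For reference, this is roughly how I'm loading them (the path is made up):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("db-changes").getOrCreate()
import spark.implicits._

// hypothetical location of the json documents; multiLine (Spark 2.2+)
// handles pretty-printed documents that span several lines each
val df = spark.read.option("multiLine", "true").json("/path/to/changes/*.json")
df.show()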
I exploded the cols array, so now the db column names are on rows, urgh:
+---+----+---------+
|key|name|    value|
+---+----+---------+
|abc|COL1|       14|
|abc|COL2|Some Text|
|...| ...|      ...|
+---+----+---------+
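The explode step looks roughly like this (same df as above, field names per the inferred schema):

import org.apache.spark.sql.functions.explode

// one row per (key, name, value) triple
val dt = df
  .select($"key", explode($"cols").as("col"))
  .select($"key", $"col.name".as("name"), $"col.value".as("value"))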
Now, pivot came to mind... so I started writing:
dt.groupBy($"key").pivot("name").agg($"value")
At which point I promptly realized that Spark won't let you aggregate on non-numeric columns like this.
So essentially, given the annoying way the data is defined in the JSON... is there a better way to end up with this:
+---+----+---------+
|key|COL1|     COL2|
+---+----+---------+
|abc|  14|Some Text|
|...| ...|      ...|
+---+----+---------+
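One direction that might work (untested sketch): wrap value in an aggregate that is valid over strings, e.g. first, which assumes there is exactly one value per (key, name) pair:

import org.apache.spark.sql.functions.first

// first() sidesteps the numeric-only restriction; if a (key, name)
// pair ever has multiple values, this silently keeps one of them
val wide = dt.groupBy($"key").pivot("name").agg(first($"value"))
wide.show()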
Need to head home, it's been a long day... probably missing something obvious, ta!