Accessing keys of objects in a nested JSON array with pyspark

I have a JSON file named Class.json and want to run some computations over all of the data, subject to certain conditions.

Class.json

{
  "class": [
    {
      "class_id": "1","data": {
        "lesson3": {
          "id": 3,"schedule": [
            {
              "schedule_id": "1","schedule_date": "2017-07-11","lesson_price": "USD 25","status": "ONGOING"
            },{
              "schedule_id": "2","schedule_date": "2016-09-24","lesson_price": "USD 15","status": "OPEN REGISTRATION"
            }
          ]
        },"lesson4": {
          "id": 4,"schedule": [
            {
              "schedule_date": "2016-12-17","lesson_price": "USD 19"
            },{
              "schedule_date": "2015-11-12","lesson_price": "USD 29"
            },{
              "schedule_id": "3","schedule_date": "2015-11-10","lesson_price": "USD 14","status": "ON SCHEDULE"
            }
          ]
        }
      }
    },{
      "class_id": "2","data": {
        "lesson1": {
          "id": 1,"schedule": [
            {
              "schedule_date": "2017-05-21","lesson_price": "USD 50","status": "CANCELLED"
            }
          ]
        },"lesson2": {
          "id": 2,"schedule": [
            {
              "schedule_date": "2017-06-04","lesson_price": "USD10","status": "FINISHED"
            },{
              "schedule_id": "5","schedule_date": "2018-03-01","lesson_price": "USD12","status": "CLOSED"
            }
          ]
        }
      }
    }
  ]
}

I tried:

df = spark.read.json("class.json",multiLine=True)
df.show()

and it shows:

+--------------------+
|               class|
+--------------------+
|[[1,[,[3,[[US...|
+--------------------+

Then, to get at the array, I tried this: try = df.select("class").map(lambda s: s['data'])

but that raises the error AttributeError: 'DataFrame' object has no attribute 'map'

I also tried df['class'][0]['data'], which only gives Column<b'class[0][data]'>

Goals:

  • Count the schedules with status "ONGOING" whose schedule_date falls before January 2017
  • The average lesson_price before January 2017

How can I do this with pyspark?
