Pyspark split function on hours


 root
  |-- address: string (nullable = true)
  |-- attributes: map (nullable = true)
  |    |-- key: string
  |    |-- value: string (valueContainsNull = true)
  |-- business_id: string (nullable = true)
  |-- categories: string (nullable = true)
  |-- city: string (nullable = true)
  |-- hours: map (nullable = true)
  |    |-- key: string
  |    |-- value: string (valueContainsNull = true)
  |-- is_open: long (nullable = true)
  |-- latitude: double (nullable = true)
  |-- longitude: double (nullable = true)
  |-- name: string (nullable = true)
  |-- postal_code: string (nullable = true)
  |-- review_count: long (nullable = true)
  |-- stars: double (nullable = true)
  |-- state: string (nullable = true)

I am currently working with the Yelp dataset, and my goal is to find the total number of hours a business is open per day/week. From the data I can extract the time range for each day of the week, which looks like [9:0,0:0]. How can I use pyspark to get two columns, one for the opening time showing [9:0] and one for the closing time showing [0:0]?

Here is some code I have been using just to display the business hours in the dataset.

import pyspark.sql.functions as f

# Explode the hours map into one (day, value) row per entry, then split the
# "open-close" string. The split has to happen in a second step, because the
# "hourValue" alias is not visible inside the same select that creates it.
df_hours = df_MappedBusiness.select(
    "business_id", "name", f.explode("hours").alias("hourDay", "hourValue")
).withColumn("split_hours", f.split("hourValue", "-"))

df_hours.show(50, truncate=False)


Expected Output
---------------

+---------+-----------+-------------+------------+-------------+
|hourDay  |hourValue  |split_hours  |open_hours  |close_hours  |
+---------+-----------+-------------+------------+-------------+
|Monday   |9:0-0:0    |[9:0, 0:0]   |[9:0]       |[0:0]        |
+---------+-----------+-------------+------------+-------------+
Answer from hustkazz: Pyspark split function on hours

After calling pyspark.sql.functions.split, you end up with a Column of ArrayType (which in turn contains strings). To access its elements, you use the same syntax as for nested columns, for a plain list, or even for a pandas DataFrame: split(some_column, some_character)[some_index]

Example:

df = spark.createDataFrame(
    [("shop", "Monday", "9:0-0:0"), ("shop", "Tuesday", "12:30-21:30")],
    schema=("shopname", "day_of_week", "opening_hours"))

from pyspark.sql.functions import split

(df
 .withColumn("opens", split(df.opening_hours, "-")[0])
 .withColumn("closes", split(df.opening_hours, "-")[1])
 .show())

+--------+-----------+-------------+-----+------+
|shopname|day_of_week|opening_hours|opens|closes|
+--------+-----------+-------------+-----+------+
|    shop|     Monday|      9:0-0:0|  9:0|   0:0|
|    shop|    Tuesday|  12:30-21:30|12:30| 21:30|
+--------+-----------+-------------+-----+------+

Note that this approach leaves you with two columns of StringType() (the last two columns I added here). You could convert them to something numeric (minutes since midnight, for example), but then you need to watch out for possible negative differences, since "closes at 0:0" actually means closing at midnight, at the end of the day. Anyway, I'll leave that as a challenge.
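
For what it's worth, here is a minimal sketch of that conversion. It builds on the df from the example above (re-adding the opens/closes columns), and the rollover rule — treating a closing time at or before the opening time as running past midnight — is my own assumption, not something the dataset guarantees:

from pyspark.sql.functions import split, col, when, sum as sum_

# Keep the opens/closes columns from the example above
df2 = (df
    .withColumn("opens", split(df.opening_hours, "-")[0])
    .withColumn("closes", split(df.opening_hours, "-")[1]))

# Turn an "H:M" string into minutes since midnight
def minutes_since_midnight(c):
    return split(c, ":")[0].cast("int") * 60 + split(c, ":")[1].cast("int")

hours_open = (df2
    .withColumn("opens_min", minutes_since_midnight(col("opens")))
    .withColumn("closes_min", minutes_since_midnight(col("closes")))
    # Assumption: a closing time at or before the opening time rolls over
    # past midnight (9:0-0:0 means open until 24:00), hence the +1440.
    .withColumn("open_minutes",
                when(col("closes_min") <= col("opens_min"),
                     col("closes_min") + 1440 - col("opens_min"))
                .otherwise(col("closes_min") - col("opens_min"))))

# Weekly total per shop, in hours
(hours_open
 .groupBy("shopname")
 .agg((sum_("open_minutes") / 60).alias("hours_per_week"))
 .show())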


Here is my code for this question. I found the Yelp dataset online and applied the solution to it.

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local").appName("Test").getOrCreate()

import pyspark.sql.functions as f
from pyspark.sql.functions import col, when, lit

df1 = spark.read.json(r"your_data_path")



# One select per weekday: tag the rows with the day name and split the
# "open-close" string into separate open/close columns (null-safe via when).
df_mon=df1.select("business_id","name",lit("Monday").alias("hourday"),when(col("hours.Monday").isNotNull(),f.split("hours.Monday",'-')[0]).alias("OpenHours"),when(col("hours.Monday").isNotNull(),f.split("hours.Monday",'-')[1]).alias("CloseHours"))
df_tue=df1.select("business_id","name",lit("Tuesday").alias("hourday"),when(col("hours.Tuesday").isNotNull(),f.split("hours.Tuesday",'-')[0]).alias("OpenHours"),when(col("hours.Tuesday").isNotNull(),f.split("hours.Tuesday",'-')[1]).alias("CloseHours"))
df_wed=df1.select("business_id","name",lit("Wednesday").alias("hourday"),when(col("hours.Wednesday").isNotNull(),f.split("hours.Wednesday",'-')[0]).alias("OpenHours"),when(col("hours.Wednesday").isNotNull(),f.split("hours.Wednesday",'-')[1]).alias("CloseHours"))
df_thu=df1.select("business_id","name",lit("Thursday").alias("hourday"),when(col("hours.Thursday").isNotNull(),f.split("hours.Thursday",'-')[0]).alias("OpenHours"),when(col("hours.Thursday").isNotNull(),f.split("hours.Thursday",'-')[1]).alias("CloseHours"))
df_fri=df1.select("business_id","name",lit("Friday").alias("hourday"),when(col("hours.Friday").isNotNull(),f.split("hours.Friday",'-')[0]).alias("OpenHours"),when(col("hours.Friday").isNotNull(),f.split("hours.Friday",'-')[1]).alias("CloseHours"))
df_sat=df1.select("business_id","name",lit("Saturday").alias("hourday"),when(col("hours.Saturday").isNotNull(),f.split("hours.Saturday",'-')[0]).alias("OpenHours"),when(col("hours.Saturday").isNotNull(),f.split("hours.Saturday",'-')[1]).alias("CloseHours"))
df_sun=df1.select("business_id","name",lit("Sunday").alias("hourday"),when(col("hours.Sunday").isNotNull(),f.split("hours.Sunday",'-')[0]).alias("OpenHours"),when(col("hours.Sunday").isNotNull(),f.split("hours.Sunday",'-')[1]).alias("CloseHours"))

df_final=df_mon.unionAll(df_tue).unionAll(df_wed).unionAll(df_thu).unionAll(df_fri).unionAll(df_sat).unionAll(df_sun)

df_final.show(10,False)
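
As a side note, the seven per-day selects could also be collapsed by exploding the hours map, much as the code in the question does. A minimal sketch against the same df1 (one difference: explode simply drops days with no entry, whereas the unions above keep them as null rows):

df_final2 = (df1
    .select("business_id", "name", f.explode("hours").alias("hourday", "hourvalue"))
    .withColumn("OpenHours", f.split("hourvalue", "-")[0])
    .withColumn("CloseHours", f.split("hourvalue", "-")[1])
    .drop("hourvalue"))

df_final2.show(10, False)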

Let me know if you have any questions.

