I'm trying to launch a Druid supervisor to ingest Parquet data stored in Hadoop. However, I'm getting the following error and can't find any information about it:
"error": "Could not resolve type id 'index_hadoop' into a subtype of [simple type, class io.druid.indexing.overlord.supervisor.SupervisorSpec]: known type ids = [NoopSupervisorSpec, kafka]\n at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP)
I tried to fix this by adding the Hadoop deep storage, Parquet, and Avro extensions to the extensions load list, but that didn't work.
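For reference, this is roughly what the load-list entry in common.runtime.properties looks like on my nodes (the exact extension ids are my best recollection of the standard Druid extension names, so treat them as an assumption):

```properties
# common.runtime.properties on every Druid node (restart required after editing)
druid.extensions.loadList=["druid-hdfs-storage", "druid-parquet-extensions", "druid-avro-extensions"]
```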
Here is my supervisor JSON config:
{
  "type": "index_hadoop",
  "spec": {
    "dataSchema": {
      "dataSource": "hadoop-batch-timeseries",
      "parser": {
        "type": "parquet",
        "parseSpec": {
          "format": "parquet",
          "flattenSpec": {
            "useFieldDiscovery": true,
            "fields": []
          },
          "timestampSpec": {
            "column": "timestamp",
            "format": "auto"
          },
          "dimensionsSpec": {
            "dimensions": ["installation", "var_id", "value"],
            "dimensionExclusions": [],
            "spatialDimensions": []
          }
        }
      },
      "metricsSpec": [
        {
          "type": "count",
          "name": "count"
        }
      ],
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "DAY",
        "queryGranularity": "NONE",
        "intervals": ["2018-10-01/2018-11-30"]
      }
    },
    "ioConfig": {
      "type": "hadoop",
      "inputSpec": {
        "type": "granularity",
        "dataGranularity": "day",
        "inputFormat": "org.apache.druid.data.input.parquet.DruidParquetInputFormat",
        "inputPath": "/warehouse/tablespace/external/hive/demo.db/integers",
        "filePattern": "*.parquet",
        "pathFormat": "'year'=yyy/'month'=MM/'day'=dd"
      }
    },
    "tuningConfig": {
      "type": "hadoop"
    }
  },
  "hadoopDependencyCoordinates": "3.1.0"
}
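For completeness, I'm posting the spec to the overlord's supervisor API, which is where the SupervisorSpec error above comes from (host, port, and file name here are placeholders, not my actual values):

```shell
# Submit the spec file to the overlord's supervisor endpoint
curl -X POST -H 'Content-Type: application/json' \
  -d @hadoop-batch-timeseries.json \
  http://overlord:8090/druid/indexer/v1/supervisor
```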