SparkSqlJob

A Dataproc job for running Apache Spark SQL queries.

JSON representation
{
  "scriptVariables": {
    string: string,
    ...
  },
  "properties": {
    string: string,
    ...
  },
  "jarFileUris": [
    string
  ],
  "loggingConfig": {
    object (LoggingConfig)
  },

  // Union field queries can be only one of the following:
  "queryFileUri": string,
  "queryList": {
    object (QueryList)
  }
  // End of list of possible types for union field queries.
}
Fields
scriptVariables

map (key: string, value: string)

Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
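As an illustration (not part of the API surface), the mapping above can be expanded into the SET statements Spark SQL would effectively execute; the variable names and values below are hypothetical:

```python
# Hypothetical scriptVariables map; the keys and values are illustrative only.
script_variables = {"env": "prod", "run_date": "2024-01-01"}

# Each entry behaves like a Spark SQL `SET name="value";` statement.
set_statements = [f'SET {name}="{value}";' for name, value in script_variables.items()]

for stmt in set_statements:
    print(stmt)
```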

properties

map (key: string, value: string)

Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
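A minimal sketch of a properties map: the keys shown are standard Spark configuration properties, chosen here purely as examples. All values must be strings, even for numeric settings:

```python
# Illustrative properties map used to configure Spark SQL's SparkConf.
# These keys are standard Spark properties; the values are examples only.
properties = {
    "spark.executor.memory": "4g",
    "spark.sql.shuffle.partitions": "200",
}

# Map values are always strings in this message, even for numeric settings.
all_strings = all(isinstance(v, str) for v in properties.values())
print(all_strings)
```

Note that any of these values may be overwritten if they conflict with settings applied by the Dataproc API, so they should be treated as requests rather than guarantees.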

jarFileUris[]

string

Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

loggingConfig

object (LoggingConfig)

Optional. The runtime log config for job execution.

Union field queries. Required. The sequence of Spark SQL queries to execute, specified as either an HCFS file URI or as a list of queries. queries can be only one of the following:
queryFileUri

string

The HCFS URI of the script that contains SQL queries.

queryList

object (QueryList)

A list of queries.
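Putting the fields together, a complete SparkSqlJob message can be sketched as a plain dictionary before serialization. The bucket name, jar path, and query text below are hypothetical; the key point is that exactly one member of the queries union (queryFileUri or queryList) may be set:

```python
import json

# Sketch of a SparkSqlJob message; URIs and query text are hypothetical.
spark_sql_job = {
    "scriptVariables": {"env": "prod"},
    "properties": {"spark.executor.memory": "4g"},
    "jarFileUris": ["gs://my-bucket/udfs.jar"],  # hypothetical bucket/jar
    # Union field `queries`: queryList is used here; setting queryFileUri
    # instead would be the other valid choice, but never both.
    "queryList": {"queries": ["SELECT * FROM events WHERE env = '${env}'"]},
}

# The union constraint can be checked locally before submitting the job.
union_ok = not ("queryFileUri" in spark_sql_job and "queryList" in spark_sql_job)

print(json.dumps(spark_sql_job, indent=2))
```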
