{
  "scriptVariables": {
    string: string,
    ...
  },
  "properties": {
    string: string,
    ...
  },
  "jarFileUris": [
    string
  ],
  "loggingConfig": {
    object (LoggingConfig)
  },

  // Union field queries can be only one of the following:
  "queryFileUri": string,
  "queryList": {
    object (QueryList)
  }
  // End of list of possible types for union field queries.
}
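As a hedged sketch (not an official client library), the payload above can be assembled with a small helper that also enforces the `queries` union constraint. The bucket path, file name, and variable names below are placeholders.

```python
# Minimal sketch of building a SparkSqlJob payload that matches the
# JSON representation above. All URIs and values are placeholders.

def build_spark_sql_job(query_file_uri=None, query_list=None,
                        script_variables=None, properties=None,
                        jar_file_uris=None):
    """Assemble a SparkSqlJob dict, enforcing the `queries` union field."""
    # Exactly one member of the union may be set.
    if (query_file_uri is None) == (query_list is None):
        raise ValueError("Exactly one of query_file_uri or query_list is required")
    job = {}
    if script_variables:
        job["scriptVariables"] = dict(script_variables)
    if properties:
        job["properties"] = dict(properties)
    if jar_file_uris:
        job["jarFileUris"] = list(jar_file_uris)
    if query_file_uri is not None:
        job["queryFileUri"] = query_file_uri
    else:
        job["queryList"] = {"queries": list(query_list)}
    return job

job = build_spark_sql_job(
    query_file_uri="gs://my-bucket/queries.sql",  # placeholder URI
    script_variables={"run_date": "2025-01-01"},  # placeholder variable
)
```

The union check mirrors the schema comment: a job carrying both `queryFileUri` and `queryList` (or neither) is rejected before submission.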
Fields
scriptVariables
map (key: string, value: string)
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
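Each `scriptVariables` entry behaves like a `SET name="value";` statement executed before the queries, so a query can reference the variable with `${name}` (Spark SQL variable substitution). The following is an illustrative stand-in for that substitution; the table and variable names are hypothetical.

```python
import re

def substitute(query: str, script_variables: dict) -> str:
    """Illustrative stand-in for Spark SQL's ${var} substitution."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: script_variables[m.group(1)],
                  query)

sql = "SELECT * FROM sales WHERE ds = '${run_date}'"
print(substitute(sql, {"run_date": "2025-01-01"}))
# SELECT * FROM sales WHERE ds = '2025-01-01'
```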
properties
map (key: string, value: string)
Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
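As a hedged illustration, `properties` entries become SparkConf settings for the job. The property names below are standard Spark settings; the values, and the idea that an API-managed value wins a conflict (modeled here as a simple merge with API values last), are example assumptions.

```python
# `properties` entries configure Spark SQL's SparkConf.
properties = {
    "spark.sql.shuffle.partitions": "48",  # example value
    "spark.executor.memory": "4g",         # example value
}

# Values set by the Dataproc API may overwrite conflicting user values;
# a merge with API-managed values applied last models that behavior.
api_managed = {"spark.executor.memory": "8g"}  # hypothetical API-set value
effective = {**properties, **api_managed}
print(effective["spark.executor.memory"])
# 8g
```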
jarFileUris[]
string
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
loggingConfig
object (LoggingConfig)
Optional. The runtime log config for job execution.
Union field queries. Required. The sequence of Spark SQL queries to execute, specified as either an HCFS file URI or as a list of queries. queries can be only one of the following:
queryFileUri
string
The HCFS URI of the script that contains SQL queries.
queryList
object (QueryList)
A list of queries.
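The two forms of the `queries` union are mutually exclusive; a sketch of both payload fragments, with a validity check (the bucket path and query strings are placeholders):

```python
# The two mutually exclusive forms of the `queries` union field.
by_file = {"queryFileUri": "gs://my-bucket/etl.sql"}  # placeholder URI
by_list = {"queryList": {"queries": ["SHOW DATABASES;", "SELECT 1;"]}}

def queries_union_is_valid(job: dict) -> bool:
    """Exactly one member of the union may be present."""
    return ("queryFileUri" in job) ^ ("queryList" in job)

print(queries_union_is_valid(by_file), queries_union_is_valid(by_list))
# True True
```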
# SparkSqlJob

A Dataproc job for running [Apache Spark SQL](https://spark.apache.org/sql/) queries.

Last updated 2025-06-20 UTC.