The ML.STANDARD_SCALER function
This document describes the ML.STANDARD_SCALER function, which lets you scale a numerical expression by using z-score standardization.
When used in the TRANSFORM clause, the standard deviation and mean values calculated to standardize the expression are automatically used in prediction.
You can use this function with models that support manual feature preprocessing.
Syntax
ML.STANDARD_SCALER(numerical_expression) OVER()
Arguments
ML.STANDARD_SCALER takes the following argument:

- numerical_expression: the numerical expression to scale.
Output
ML.STANDARD_SCALER returns a FLOAT64 value that represents the scaled numerical expression.
Example
The following example scales a set of numerical expressions to have a mean of 0 and a standard deviation of 1:
SELECT f, ML.STANDARD_SCALER(f) OVER() AS output FROM UNNEST([1, 2, 3, 4, 5]) AS f;
The output looks similar to the following:
+---+---------------------+
| f | output              |
+---+---------------------+
| 1 | -1.2649110640673518 |
| 5 | 1.2649110640673518  |
| 2 | -0.6324555320336759 |
| 4 | 0.6324555320336759  |
| 3 | 0.0                 |
+---+---------------------+
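The values in the output are consistent with z-score standardization computed with the sample standard deviation (divisor n - 1), which is how BigQuery's STDDEV aggregate behaves by default. As a minimal sketch, assuming that formula, the same numbers can be reproduced in Python:

```python
import statistics

values = [1, 2, 3, 4, 5]
mean = statistics.fmean(values)    # 3.0
stdev = statistics.stdev(values)   # sample standard deviation, sqrt(2.5)

# z-score: subtract the mean, then divide by the standard deviation
scaled = [(x - mean) / stdev for x in values]
print(scaled)
```

For the input 1, this yields approximately -1.2649110640673518, matching the first row of the example output above.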
What's next
- For information about feature preprocessing, see Feature preprocessing overview.

