Reference documentation and code samples for the Data Loss Prevention V2 Client class TimespanConfig.
Configuration of the timespan of the items to include in scanning.
Currently only supported when inspecting Cloud Storage and BigQuery.
Generated from protobuf message google.privacy.dlp.v2.StorageConfig.TimespanConfig
Namespace
Google \ Cloud \ Dlp \ V2 \ StorageConfig
Methods
__construct
Constructor.
data
array
Optional. Data for populating the Message object.
↳ start_time
Google\Protobuf\Timestamp
Exclude files, tables, or rows older than this value. If not set, no lower time limit is applied.
↳ end_time
Google\Protobuf\Timestamp
Exclude files, tables, or rows newer than this value. If not set, no upper time limit is applied.
↳ timestamp_field
Google\Cloud\Dlp\V2\FieldId
Specification of the field containing the timestamp of scanned items. Used for data sources like Datastore and BigQuery.
For BigQuery: if this value is not specified and the table was modified between the given start and end times, the entire table will be scanned. If this value is specified, then rows are filtered based on the given start and end times. Rows with a NULL value in the provided BigQuery column are skipped. Valid data types of the provided BigQuery column are: INTEGER, DATE, TIMESTAMP, and DATETIME. If your BigQuery table is partitioned at ingestion time, you can use any of the following pseudo-columns as your timestamp field. When used with Cloud DLP, these pseudo-column names are case sensitive.
- _PARTITIONTIME
- _PARTITIONDATE
- _PARTITION_LOAD_TIME
For Datastore: if this value is specified, then entities are filtered based on the given start and end times. The valid data type of the provided timestamp property is: TIMESTAMP.
See the known issue related to this operation.
↳ enable_auto_population_of_timespan_config
bool
When the job is started by a JobTrigger, we will automatically figure out a valid start_time to avoid scanning files that have not been modified since the last time the JobTrigger executed. This will be based on the time of the execution of the last run of the JobTrigger or the timespan end_time used in the last run of the JobTrigger. For BigQuery, inspect jobs triggered by automatic population will scan data that is at least three hours old when the job starts. This is because streaming buffer rows are not read during inspection, and reading up to the current timestamp would result in skipped rows. See the known issue related to this operation.
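A minimal sketch of constructing a TimespanConfig from the data array described above; the 24-hour window is an arbitrary illustration, and the field names follow the parameter list:

```php
use Google\Cloud\Dlp\V2\StorageConfig\TimespanConfig;
use Google\Protobuf\Timestamp;

// Scan only items modified within the last 24 hours (illustrative window).
$now = time();
$timespanConfig = new TimespanConfig([
    'start_time' => new Timestamp(['seconds' => $now - 24 * 60 * 60]),
    'end_time'   => new Timestamp(['seconds' => $now]),
]);
```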
getStartTime
Exclude files, tables, or rows older than this value.
If not set, no lower time limit is applied.
hasStartTime
clearStartTime
setStartTime
Exclude files, tables, or rows older than this value.
If not set, no lower time limit is applied.
$this
getEndTime
Exclude files, tables, or rows newer than this value.
If not set, no upper time limit is applied.
hasEndTime
clearEndTime
setEndTime
Exclude files, tables, or rows newer than this value.
If not set, no upper time limit is applied.
$this
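The setters return $this, so the accessors above can be chained; a sketch, assuming the fluent style of the generated message classes:

```php
use Google\Cloud\Dlp\V2\StorageConfig\TimespanConfig;
use Google\Protobuf\Timestamp;

$timespanConfig = (new TimespanConfig())
    ->setStartTime(new Timestamp(['seconds' => strtotime('-7 days')]))
    ->setEndTime(new Timestamp(['seconds' => time()]));

// hasStartTime() reports whether the optional field is populated;
// clearStartTime() removes the lower time limit again.
if ($timespanConfig->hasStartTime()) {
    echo $timespanConfig->getStartTime()->getSeconds(), PHP_EOL;
}
$timespanConfig->clearStartTime();
```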
getTimestampField
Specification of the field containing the timestamp of scanned items.
Used for data sources like Datastore and BigQuery.
For BigQuery: if this value is not specified and the table was modified between the given start and end times, the entire table will be scanned. If this value is specified, then rows are filtered based on the given start and end times. Rows with a NULL value in the provided BigQuery column are skipped. Valid data types of the provided BigQuery column are: INTEGER, DATE, TIMESTAMP, and DATETIME.
If your BigQuery table is partitioned at ingestion time, you can use any of the following pseudo-columns as your timestamp field. When used with Cloud DLP, these pseudo-column names are case sensitive.
- _PARTITIONTIME
- _PARTITIONDATE
- _PARTITION_LOAD_TIME
For Datastore: if this value is specified, then entities are filtered based on the given start and end times. The valid data type of the provided timestamp property is: TIMESTAMP.
See the known issue related to this operation.
hasTimestampField
clearTimestampField
setTimestampField
Specification of the field containing the timestamp of scanned items.
Used for data sources like Datastore and BigQuery.
For BigQuery: if this value is not specified and the table was modified between the given start and end times, the entire table will be scanned. If this value is specified, then rows are filtered based on the given start and end times. Rows with a NULL value in the provided BigQuery column are skipped. Valid data types of the provided BigQuery column are: INTEGER, DATE, TIMESTAMP, and DATETIME.
If your BigQuery table is partitioned at ingestion time, you can use any of the following pseudo-columns as your timestamp field. When used with Cloud DLP, these pseudo-column names are case sensitive.
- _PARTITIONTIME
- _PARTITIONDATE
- _PARTITION_LOAD_TIME
For Datastore: if this value is specified, then entities are filtered based on the given start and end times. The valid data type of the provided timestamp property is: TIMESTAMP.
See the known issue related to this operation.
$this
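A sketch of the ingestion-time-partitioned case described above, assuming $timespanConfig is an existing TimespanConfig instance; note that the pseudo-column name is case sensitive:

```php
use Google\Cloud\Dlp\V2\FieldId;

// Use the ingestion-time partition pseudo-column as the timestamp field.
$timespanConfig->setTimestampField(new FieldId([
    'name' => '_PARTITIONTIME',
]));
```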
getEnableAutoPopulationOfTimespanConfig
When the job is started by a JobTrigger, we will automatically figure out a valid start_time to avoid scanning files that have not been modified since the last time the JobTrigger executed. This will be based on the time of the execution of the last run of the JobTrigger or the timespan end_time used in the last run of the JobTrigger.
For BigQuery, inspect jobs triggered by automatic population will scan data that is at least three hours old when the job starts. This is because streaming buffer rows are not read during inspection, and reading up to the current timestamp would result in skipped rows. See the known issue related to this operation.
bool
setEnableAutoPopulationOfTimespanConfig
When the job is started by a JobTrigger, we will automatically figure out a valid start_time to avoid scanning files that have not been modified since the last time the JobTrigger executed. This will be based on the time of the execution of the last run of the JobTrigger or the timespan end_time used in the last run of the JobTrigger.
For BigQuery, inspect jobs triggered by automatic population will scan data that is at least three hours old when the job starts. This is because streaming buffer rows are not read during inspection, and reading up to the current timestamp would result in skipped rows. See the known issue related to this operation.
var
bool
$this
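A minimal sketch of enabling auto-population; attaching the result to a StorageConfig via its setTimespanConfig() setter is shown for context:

```php
use Google\Cloud\Dlp\V2\StorageConfig;
use Google\Cloud\Dlp\V2\StorageConfig\TimespanConfig;

// Let a JobTrigger-started job derive a valid start_time from its
// previous run instead of supplying one explicitly.
$timespanConfig = (new TimespanConfig())
    ->setEnableAutoPopulationOfTimespanConfig(true);

$storageConfig = (new StorageConfig())
    ->setTimespanConfig($timespanConfig);
```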