Specification of the field containing the timestamp of scanned items. Used for data sources like Datastore and BigQuery.

For BigQuery
If this value is not specified and the table was modified between the given start and end times, the entire table will be scanned. If this value is specified, then rows are filtered based on the given start and end times. Rows with a NULL value in the provided BigQuery column are skipped. Valid data types of the provided BigQuery column are: INTEGER, DATE, TIMESTAMP, and DATETIME. If your BigQuery table is partitioned at ingestion time, you can use any of the following pseudo-columns as your timestamp field. When used with Cloud DLP, these pseudo-column names are case sensitive.

- _PARTITIONTIME
- _PARTITIONDATE
- _PARTITION_LOAD_TIME

For Datastore
If this value is specified, then entities are filtered based on the given start and end times. If an entity does not contain the provided timestamp property or contains empty or invalid values, then it is included. Valid data types of the provided timestamp property are: TIMESTAMP.
See the known issue (https://cloud.google.com/sensitive-data-protection/docs/known-issues#bq-timespan) related to this operation.
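For illustration, a minimal PHP sketch (assuming the google/cloud-dlp client library and an ingestion-time partitioned BigQuery table) that points the timespan at the _PARTITIONTIME pseudo-column:

```php
use Google\Cloud\Dlp\V2\FieldId;
use Google\Cloud\Dlp\V2\StorageConfig\TimespanConfig;

// Filter BigQuery rows on the ingestion-time partition pseudo-column.
// Pseudo-column names are case sensitive when used with Cloud DLP.
$timespanConfig = (new TimespanConfig())
    ->setTimestampField(new FieldId(['name' => '_PARTITIONTIME']));
```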
↳ enable_auto_population_of_timespan_config
bool
When the job is started by a JobTrigger, we will automatically figure out a valid start_time to avoid scanning files that have not been modified since the last time the JobTrigger executed. This will be based on the time of the execution of the last run of the JobTrigger or the timespan end_time used in the last run of the JobTrigger.

For BigQuery
Inspect jobs triggered by automatic population will scan data that is at least three hours old when the job starts. This is because streaming buffer rows are not read during inspection, and reading up to the current timestamp would result in skipped rows.
See the known issue (https://cloud.google.com/sensitive-data-protection/docs/known-issues#recently-streamed-data) related to this operation.
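As a brief sketch (assuming the google/cloud-dlp client library), enabling automatic timespan population on the config looks like this:

```php
use Google\Cloud\Dlp\V2\StorageConfig\TimespanConfig;

// Let the service derive a valid start_time from the previous JobTrigger run
// so data that has not changed since then is not rescanned.
$timespanConfig = (new TimespanConfig())
    ->setEnableAutoPopulationOfTimespanConfig(true);
```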
getStartTime
Exclude files, tables, or rows older than this value. If not set, no lower time limit is applied.
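A hedged example (the 7-day window and the google/cloud-dlp client are assumptions for illustration) of bounding a scan with explicit start and end times:

```php
use Google\Cloud\Dlp\V2\StorageConfig\TimespanConfig;
use Google\Protobuf\Timestamp;

// Scan only items modified in the last 7 days.
$now = time();
$timespanConfig = (new TimespanConfig())
    ->setStartTime(new Timestamp(['seconds' => $now - 7 * 24 * 3600])) // exclude older items
    ->setEndTime(new Timestamp(['seconds' => $now]));                  // exclude newer items
```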
getTimestampField
Specification of the field containing the timestamp of scanned items. Used for data sources like Datastore and BigQuery.

For BigQuery
If this value is not specified and the table was modified between the given start and end times, the entire table will be scanned. If this value is specified, then rows are filtered based on the given start and end times. Rows with a NULL value in the provided BigQuery column are skipped.
Valid data types of the provided BigQuery column are: INTEGER, DATE, TIMESTAMP, and DATETIME.
If your BigQuery table is partitioned at ingestion time, you can use any of the following pseudo-columns as your timestamp field. When used with Cloud DLP, these pseudo-column names are case sensitive.

- _PARTITIONTIME
- _PARTITIONDATE
- _PARTITION_LOAD_TIME

For Datastore
If this value is specified, then entities are filtered based on the given start and end times. If an entity does not contain the provided timestamp property or contains empty or invalid values, then it is included.
Valid data types of the provided timestamp property are: TIMESTAMP.
See the known issue (https://cloud.google.com/sensitive-data-protection/docs/known-issues#bq-timespan) related to this operation.
setTimestampField
Specification of the field containing the timestamp of scanned items. Used for data sources like Datastore and BigQuery.

For BigQuery
If this value is not specified and the table was modified between the given start and end times, the entire table will be scanned. If this value is specified, then rows are filtered based on the given start and end times. Rows with a NULL value in the provided BigQuery column are skipped.
Valid data types of the provided BigQuery column are: INTEGER, DATE, TIMESTAMP, and DATETIME.
If your BigQuery table is partitioned at ingestion time, you can use any of the following pseudo-columns as your timestamp field. When used with Cloud DLP, these pseudo-column names are case sensitive.

- _PARTITIONTIME
- _PARTITIONDATE
- _PARTITION_LOAD_TIME

For Datastore
If this value is specified, then entities are filtered based on the given start and end times. If an entity does not contain the provided timestamp property or contains empty or invalid values, then it is included.
Valid data types of the provided timestamp property are: TIMESTAMP.
See the known issue (https://cloud.google.com/sensitive-data-protection/docs/known-issues#bq-timespan) related to this operation.
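For Datastore, a minimal sketch (the property name updated_at is hypothetical; it must hold TIMESTAMP values) of setting the timestamp field:

```php
use Google\Cloud\Dlp\V2\FieldId;
use Google\Cloud\Dlp\V2\StorageConfig\TimespanConfig;

// Filter Datastore entities on a timestamp property. Entities with a missing,
// empty, or invalid value in this property are still included in the scan.
$timespanConfig = new TimespanConfig();
$timespanConfig->setTimestampField(new FieldId(['name' => 'updated_at']));
```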
getEnableAutoPopulationOfTimespanConfig
When the job is started by a JobTrigger, we will automatically figure out a valid start_time to avoid scanning files that have not been modified since the last time the JobTrigger executed. This will be based on the time of the execution of the last run of the JobTrigger or the timespan end_time used in the last run of the JobTrigger.

For BigQuery
Inspect jobs triggered by automatic population will scan data that is at least three hours old when the job starts. This is because streaming buffer rows are not read during inspection, and reading up to the current timestamp would result in skipped rows.
See the known issue (https://cloud.google.com/sensitive-data-protection/docs/known-issues#recently-streamed-data) related to this operation.
Returns
Type
Description
bool
setEnableAutoPopulationOfTimespanConfig
When the job is started by a JobTrigger, we will automatically figure out a valid start_time to avoid scanning files that have not been modified since the last time the JobTrigger executed. This will be based on the time of the execution of the last run of the JobTrigger or the timespan end_time used in the last run of the JobTrigger.

For BigQuery
Inspect jobs triggered by automatic population will scan data that is at least three hours old when the job starts. This is because streaming buffer rows are not read during inspection, and reading up to the current timestamp would result in skipped rows.
See the known issue (https://cloud.google.com/sensitive-data-protection/docs/known-issues#recently-streamed-data) related to this operation.
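Putting it together, a sketch (project, dataset, and table identifiers are placeholders) of attaching an auto-populated TimespanConfig to the StorageConfig of an inspect job that a JobTrigger would run:

```php
use Google\Cloud\Dlp\V2\BigQueryOptions;
use Google\Cloud\Dlp\V2\BigQueryTable;
use Google\Cloud\Dlp\V2\InspectJobConfig;
use Google\Cloud\Dlp\V2\StorageConfig;
use Google\Cloud\Dlp\V2\StorageConfig\TimespanConfig;

// Placeholder table reference; replace with your own identifiers.
$table = (new BigQueryTable())
    ->setProjectId('my-project')
    ->setDatasetId('my_dataset')
    ->setTableId('my_table');

$storageConfig = (new StorageConfig())
    ->setBigQueryOptions((new BigQueryOptions())->setTableReference($table))
    ->setTimespanConfig(
        (new TimespanConfig())->setEnableAutoPopulationOfTimespanConfig(true)
    );

// The timespan travels with the storage config inside the inspect job
// configuration that the JobTrigger executes on each run.
$inspectJob = (new InspectJobConfig())->setStorageConfig($storageConfig);
```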
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Data Loss Prevention V2 Client - Class TimespanConfig (2.6.1)\n\nVersion latestkeyboard_arrow_down\n\n- [2.6.1 (latest)](/php/docs/reference/cloud-dlp/latest/V2.StorageConfig.TimespanConfig)\n- [2.6.0](/php/docs/reference/cloud-dlp/2.6.0/V2.StorageConfig.TimespanConfig)\n- [2.4.1](/php/docs/reference/cloud-dlp/2.4.1/V2.StorageConfig.TimespanConfig)\n- [2.3.0](/php/docs/reference/cloud-dlp/2.3.0/V2.StorageConfig.TimespanConfig)\n- [2.2.3](/php/docs/reference/cloud-dlp/2.2.3/V2.StorageConfig.TimespanConfig)\n- [2.1.0](/php/docs/reference/cloud-dlp/2.1.0/V2.StorageConfig.TimespanConfig)\n- [2.0.0](/php/docs/reference/cloud-dlp/2.0.0/V2.StorageConfig.TimespanConfig)\n- [1.19.0](/php/docs/reference/cloud-dlp/1.19.0/V2.StorageConfig.TimespanConfig)\n- [1.18.0](/php/docs/reference/cloud-dlp/1.18.0/V2.StorageConfig.TimespanConfig)\n- [1.17.0](/php/docs/reference/cloud-dlp/1.17.0/V2.StorageConfig.TimespanConfig)\n- [1.16.0](/php/docs/reference/cloud-dlp/1.16.0/V2.StorageConfig.TimespanConfig)\n- [1.15.1](/php/docs/reference/cloud-dlp/1.15.1/V2.StorageConfig.TimespanConfig)\n- [1.14.0](/php/docs/reference/cloud-dlp/1.14.0/V2.StorageConfig.TimespanConfig)\n- [1.13.2](/php/docs/reference/cloud-dlp/1.13.2/V2.StorageConfig.TimespanConfig)\n- [1.12.0](/php/docs/reference/cloud-dlp/1.12.0/V2.StorageConfig.TimespanConfig)\n- [1.11.0](/php/docs/reference/cloud-dlp/1.11.0/V2.StorageConfig.TimespanConfig)\n- [1.10.2](/php/docs/reference/cloud-dlp/1.10.2/V2.StorageConfig.TimespanConfig)\n- [1.9.0](/php/docs/reference/cloud-dlp/1.9.0/V2.StorageConfig.TimespanConfig)\n- [1.8.6](/php/docs/reference/cloud-dlp/1.8.6/V2.StorageConfig.TimespanConfig) \nReference documentation and code samples for the Data Loss Prevention V2 Client class TimespanConfig.\n\nConfiguration of the timespan of the items to include in scanning.\n\nCurrently only supported when inspecting Cloud Storage and BigQuery.\n\nGenerated from protobuf message `google.privacy.dlp.v2.StorageConfig.TimespanConfig`\n\nNamespace\n---------\n\nGoogle \\\\ Cloud \\\\ Dlp \\\\ V2 \\\\ StorageConfig\n\nMethods\n-------\n\n### __construct\n\nConstructor.\n\n### getStartTime\n\nExclude files, tables, or rows older than this value.\n\nIf not set, no lower time limit is applied.\n\n### hasStartTime\n\n### clearStartTime\n\n### setStartTime\n\nExclude files, tables, or rows older than this value.\n\nIf not set, no lower time limit is applied.\n\n### getEndTime\n\nExclude files, tables, or rows newer than this value.\n\nIf not set, no upper time limit is applied.\n\n### hasEndTime\n\n### clearEndTime\n\n### setEndTime\n\nExclude files, tables, or rows newer than this value.\n\nIf not set, no upper time limit is applied.\n\n### getTimestampField\n\nSpecification of the field containing the timestamp of scanned items.\n\nUsed for data sources like Datastore and BigQuery.\n**For BigQuery**\nIf this value is not specified and the table was modified between the\ngiven start and end times, the entire table will be scanned. If this\nvalue is specified, then rows are filtered based on the given start and\nend times. 
Rows with a `NULL` value in the provided BigQuery column are\nskipped.\nValid data types of the provided BigQuery column are: `INTEGER`, `DATE`,\n`TIMESTAMP`, and `DATETIME`.\nIf your BigQuery table is [partitioned at ingestion\ntime](https://cloud.google.com/bigquery/docs/partitioned-tables#ingestion_time),\nyou can use any of the following pseudo-columns as your timestamp field.\nWhen used with Cloud DLP, these pseudo-column names are case sensitive.\n\n- `_PARTITIONTIME`\n- `_PARTITIONDATE`\n- `_PARTITION_LOAD_TIME` **For Datastore** If this value is specified, then entities are filtered based on the given start and end times. If an entity does not contain the provided timestamp property or contains empty or invalid values, then it is included. Valid data types of the provided timestamp property are: `TIMESTAMP`. See the [known\n issue](https://cloud.google.com/sensitive-data-protection/docs/known-issues#bq-timespan) related to this operation.\n\n### hasTimestampField\n\n### clearTimestampField\n\n### setTimestampField\n\nSpecification of the field containing the timestamp of scanned items.\n\nUsed for data sources like Datastore and BigQuery.\n**For BigQuery**\nIf this value is not specified and the table was modified between the\ngiven start and end times, the entire table will be scanned. If this\nvalue is specified, then rows are filtered based on the given start and\nend times. Rows with a `NULL` value in the provided BigQuery column are\nskipped.\nValid data types of the provided BigQuery column are: `INTEGER`, `DATE`,\n`TIMESTAMP`, and `DATETIME`.\nIf your BigQuery table is [partitioned at ingestion\ntime](https://cloud.google.com/bigquery/docs/partitioned-tables#ingestion_time),\nyou can use any of the following pseudo-columns as your timestamp field.\nWhen used with Cloud DLP, these pseudo-column names are case sensitive.\n\n- `_PARTITIONTIME`\n- `_PARTITIONDATE`\n- `_PARTITION_LOAD_TIME` **For Datastore** If this value is specified, then entities are filtered based on the given start and end times. If an entity does not contain the provided timestamp property or contains empty or invalid values, then it is included. Valid data types of the provided timestamp property are: `TIMESTAMP`. See the [known\n issue](https://cloud.google.com/sensitive-data-protection/docs/known-issues#bq-timespan) related to this operation.\n\n### getEnableAutoPopulationOfTimespanConfig\n\nWhen the job is started by a JobTrigger we will automatically figure out\na valid start_time to avoid scanning files that have not been modified\nsince the last time the JobTrigger executed. This will be based on the\ntime of the execution of the last run of the JobTrigger or the timespan\nend_time used in the last run of the JobTrigger.\n\n**For BigQuery**\nInspect jobs triggered by automatic population will scan data that is at\nleast three hours old when the job starts. This is because streaming\nbuffer rows are not read during inspection and reading up to the current\ntimestamp will result in skipped rows.\nSee the [known\nissue](https://cloud.google.com/sensitive-data-protection/docs/known-issues#recently-streamed-data)\nrelated to this operation.\n\n### setEnableAutoPopulationOfTimespanConfig\n\nWhen the job is started by a JobTrigger we will automatically figure out\na valid start_time to avoid scanning files that have not been modified\nsince the last time the JobTrigger executed. 
This will be based on the\ntime of the execution of the last run of the JobTrigger or the timespan\nend_time used in the last run of the JobTrigger.\n\n**For BigQuery**\nInspect jobs triggered by automatic population will scan data that is at\nleast three hours old when the job starts. This is because streaming\nbuffer rows are not read during inspection and reading up to the current\ntimestamp will result in skipped rows.\nSee the [known\nissue](https://cloud.google.com/sensitive-data-protection/docs/known-issues#recently-streamed-data)\nrelated to this operation."]]