Reference documentation and code samples for the Google Cloud AI Platform V1 Client class ImportRagFilesConfig.
Config for importing RagFiles.
Generated from protobuf message google.cloud.aiplatform.v1.ImportRagFilesConfig
Namespace
Google \ Cloud \ AIPlatform \ V1

Methods
__construct
Constructor.
data
array
Optional. Data for populating the Message object.
↳ gcs_source
GcsSource
Google Cloud Storage location. Supports importing individual files as well as entire Google Cloud Storage directories. Sample formats:
- gs://bucket_name/my_directory/object_name/my_file.txt
- gs://bucket_name/my_directory
↳ google_drive_source
GoogleDriveSource
Google Drive location. Supports importing individual files as well as Google Drive folders.
↳ slack_source
SlackSource
Slack channels with their corresponding access tokens.
↳ jira_source
JiraSource
Jira queries with their corresponding authentication.
↳ share_point_sources
SharePointSources
SharePoint sources.
↳ partial_failure_gcs_sink
GcsDestination
The Cloud Storage path to write partial failures to. Deprecated. Prefer to use import_result_gcs_sink.
↳ partial_failure_bigquery_sink
BigQueryDestination
The BigQuery destination to write partial failures to. It should be a bigquery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be added to this existing table. Deprecated. Prefer to use import_result_bq_sink.
↳ rag_file_transformation_config
RagFileTransformationConfig
Specifies the transformation config for RagFiles.
↳ max_embedding_requests_per_min
int
Optional. The max number of queries per minute that this job is allowed to make to the embedding model specified on the corpus. This value is specific to this job and not shared across other import jobs. Consult the Quotas page on the project to set an appropriate value here. If unspecified, a default value of 1,000 QPM would be used.
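As a quick orientation, here is a minimal sketch of building an ImportRagFilesConfig from the constructor data array above, using a Cloud Storage source and a custom embedding QPM limit. The bucket paths are placeholders, and the data-array style assumes the standard generated protobuf PHP surface.

```php
use Google\Cloud\AIPlatform\V1\GcsSource;
use Google\Cloud\AIPlatform\V1\ImportRagFilesConfig;

// Cloud Storage source: individual files and whole directories are both supported.
$gcsSource = new GcsSource([
    'uris' => [
        'gs://my-bucket/my_directory/object_name/my_file.txt', // single file
        'gs://my-bucket/my_directory',                         // entire directory
    ],
]);

// Populate the config through the constructor data array described above.
$config = new ImportRagFilesConfig([
    'gcs_source' => $gcsSource,
    'max_embedding_requests_per_min' => 1000, // matches the documented default QPM
]);
```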
getGcsSource
Google Cloud Storage location. Supports importing individual files as well as entire Google Cloud Storage directories. Sample formats:
- gs://bucket_name/my_directory/object_name/my_file.txt
- gs://bucket_name/my_directory
hasGcsSource
setGcsSource
Google Cloud Storage location. Supports importing individual files as well as entire Google Cloud Storage directories. Sample formats:
- gs://bucket_name/my_directory/object_name/my_file.txt
- gs://bucket_name/my_directory
$this
getGoogleDriveSource
Google Drive location. Supports importing individual files as well as Google Drive folders.
hasGoogleDriveSource
setGoogleDriveSource
Google Drive location. Supports importing individual files as well as Google Drive folders.
$this
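For the Google Drive path, the source is built from Drive resource IDs. A sketch under the assumption that GoogleDriveSource exposes a repeated resource_ids field with a file/folder resource type, as in the generated V1 protos; the IDs below are placeholders.

```php
use Google\Cloud\AIPlatform\V1\GoogleDriveSource;
use Google\Cloud\AIPlatform\V1\GoogleDriveSource\ResourceId;
use Google\Cloud\AIPlatform\V1\GoogleDriveSource\ResourceId\ResourceType;
use Google\Cloud\AIPlatform\V1\ImportRagFilesConfig;

// A Drive folder and a single Drive file, identified by placeholder resource IDs.
$driveSource = new GoogleDriveSource([
    'resource_ids' => [
        new ResourceId([
            'resource_id'   => 'drive-folder-id-placeholder',
            'resource_type' => ResourceType::RESOURCE_TYPE_FOLDER,
        ]),
        new ResourceId([
            'resource_id'   => 'drive-file-id-placeholder',
            'resource_type' => ResourceType::RESOURCE_TYPE_FILE,
        ]),
    ],
]);

// google_drive_source is part of the import_source oneof, so it replaces any
// previously set gcs_source.
$config = new ImportRagFilesConfig(['google_drive_source' => $driveSource]);
```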
getSlackSource
Slack channels with their corresponding access tokens.
hasSlackSource
setSlackSource
Slack channels with their corresponding access tokens.
$this
getJiraSource
Jira queries with their corresponding authentication.
hasJiraSource
setJiraSource
Jira queries with their corresponding authentication.
$this
getSharePointSources
SharePoint sources.
hasSharePointSources
setSharePointSources
SharePoint sources.
$this
getPartialFailureGcsSink
The Cloud Storage path to write partial failures to.
Deprecated. Prefer to use import_result_gcs_sink.
hasPartialFailureGcsSink
setPartialFailureGcsSink
The Cloud Storage path to write partial failures to.
Deprecated. Prefer to use import_result_gcs_sink.
$this
getPartialFailureBigquerySink
The BigQuery destination to write partial failures to. It should be a bigquery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be added to this existing table.
Deprecated. Prefer to use import_result_bq_sink.
hasPartialFailureBigquerySink
setPartialFailureBigquerySink
The BigQuery destination to write partial failures to. It should be a bigquery table resource name (e.g. "bq://projectId.bqDatasetId.bqTableId"). The dataset must exist. If the table does not exist, it will be created with the expected schema. If the table exists, the schema will be validated and data will be added to this existing table.
Deprecated. Prefer to use import_result_bq_sink.
$this
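Both partial-failure sinks are deprecated in favor of import_result_gcs_sink and import_result_bq_sink, but existing code may still set them. A minimal sketch, assuming the standard GcsDestination and BigQueryDestination messages from this package; bucket and table names are placeholders.

```php
use Google\Cloud\AIPlatform\V1\BigQueryDestination;
use Google\Cloud\AIPlatform\V1\GcsDestination;

// Cloud Storage sink for partial-failure records (deprecated field).
$config->setPartialFailureGcsSink(new GcsDestination([
    'output_uri_prefix' => 'gs://my-bucket/rag-import-failures/',
]));

// ...or a BigQuery table sink (also deprecated). The two sinks form the
// partial_failure_sink oneof, so setting one clears the other.
$config->setPartialFailureBigquerySink(new BigQueryDestination([
    'output_uri' => 'bq://my-project.my_dataset.rag_import_failures',
]));
```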
getRagFileTransformationConfig
Specifies the transformation config for RagFiles.
hasRagFileTransformationConfig
clearRagFileTransformationConfig
setRagFileTransformationConfig
Specifies the transformation config for RagFiles.
$this
getMaxEmbeddingRequestsPerMin
Optional. The max number of queries per minute that this job is allowed to make to the embedding model specified on the corpus. This value is specific to this job and not shared across other import jobs. Consult the Quotas page on the project to set an appropriate value here.
If unspecified, a default value of 1,000 QPM would be used.
int
setMaxEmbeddingRequestsPerMin
Optional. The max number of queries per minute that this job is allowed to make to the embedding model specified on the corpus. This value is specific to this job and not shared across other import jobs. Consult the Quotas page on the project to set an appropriate value here.
If unspecified, a default value of 1,000 QPM would be used.
var
int
$this
getImportSource
string
getPartialFailureSink
string
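To show where this config fits, the sketch below uses the oneof helpers documented above (getImportSource, getPartialFailureSink) and then passes the config to an ImportRagFiles call. The VertexRagDataServiceClient usage and the corpus resource name are illustrative assumptions, not part of this class.

```php
use Google\Cloud\AIPlatform\V1\Client\VertexRagDataServiceClient;
use Google\Cloud\AIPlatform\V1\ImportRagFilesRequest;

// Inspect which oneof members were populated on the config built earlier.
echo $config->getImportSource(), PHP_EOL;       // e.g. "gcs_source"
echo $config->getPartialFailureSink(), PHP_EOL; // e.g. "partial_failure_bigquery_sink"

// Hypothetical import call against an existing RAG corpus.
$client = new VertexRagDataServiceClient();
$request = (new ImportRagFilesRequest())
    ->setParent('projects/my-project/locations/us-central1/ragCorpora/my-corpus')
    ->setImportRagFilesConfig($config);

$operation = $client->importRagFiles($request); // returns a long-running operation
$operation->pollUntilComplete();
```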