[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Package types (3.10.2)\n\nVersion latestkeyboard_arrow_down\n\n- [3.10.2 (latest)](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types)\n- [3.10.0](/python/docs/reference/vision/3.10.0/google.cloud.vision_v1p1beta1.types)\n- [3.9.0](/python/docs/reference/vision/3.9.0/google.cloud.vision_v1p1beta1.types)\n- [3.8.1](/python/docs/reference/vision/3.8.1/google.cloud.vision_v1p1beta1.types)\n- [3.7.4](/python/docs/reference/vision/3.7.4/google.cloud.vision_v1p1beta1.types)\n- [3.6.0](/python/docs/reference/vision/3.6.0/google.cloud.vision_v1p1beta1.types)\n- [3.5.0](/python/docs/reference/vision/3.5.0/google.cloud.vision_v1p1beta1.types)\n- [3.4.5](/python/docs/reference/vision/3.4.5/google.cloud.vision_v1p1beta1.types)\n- [3.3.1](/python/docs/reference/vision/3.3.1/google.cloud.vision_v1p1beta1.types)\n- [3.2.0](/python/docs/reference/vision/3.2.0/google.cloud.vision_v1p1beta1.types)\n- [3.1.4](/python/docs/reference/vision/3.1.4/google.cloud.vision_v1p1beta1.types)\n- [3.0.0](/python/docs/reference/vision/3.0.0/google.cloud.vision_v1p1beta1.types)\n- [2.8.0](/python/docs/reference/vision/2.8.0/google.cloud.vision_v1p1beta1.types)\n- [2.7.3](/python/docs/reference/vision/2.7.3/google.cloud.vision_v1p1beta1.types)\n- [2.6.3](/python/docs/reference/vision/2.6.3/google.cloud.vision_v1p1beta1.types)\n- [2.5.0](/python/docs/reference/vision/2.5.0/google.cloud.vision_v1p1beta1.types)\n- [2.4.4](/python/docs/reference/vision/2.4.4/google.cloud.vision_v1p1beta1.types)\n- [2.3.2](/python/docs/reference/vision/2.3.2/google.cloud.vision_v1p1beta1.types)\n- [2.2.0](/python/docs/reference/vision/2.2.0/google.cloud.vision_v1p1beta1.types)\n- [2.1.0](/python/docs/reference/vision/2.1.0/google.cloud.vision_v1p1beta1.types)\n- [2.0.0](/python/docs/reference/vision/2.0.0/google.cloud.vision_v1p1beta1.types)\n- [1.0.2](/python/docs/reference/vision/1.0.2/google.cloud.vision_v1p1beta1.types)\n- [0.42.0](/python/docs/reference/vision/0.42.0/google.cloud.vision_v1p1beta1.types)\n- [0.41.0](/python/docs/reference/vision/0.41.0/google.cloud.vision_v1p1beta1.types)\n- [0.40.0](/python/docs/reference/vision/0.40.0/google.cloud.vision_v1p1beta1.types)\n- [0.39.0](/python/docs/reference/vision/0.39.0/google.cloud.vision_v1p1beta1.types)\n- [0.38.1](/python/docs/reference/vision/0.38.1/google.cloud.vision_v1p1beta1.types) \nAPI documentation for `vision_v1p1beta1.types` package. 
Classes
-------

### [AnnotateImageRequest](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.AnnotateImageRequest)

Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features.

### [AnnotateImageResponse](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.AnnotateImageResponse)

Response to an image annotation request.

### [BatchAnnotateImagesRequest](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.BatchAnnotateImagesRequest)

Multiple image annotation requests are batched into a single service call.

### [BatchAnnotateImagesResponse](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.BatchAnnotateImagesResponse)

Response to a batch image annotation request.

### [Block](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.Block)

Logical element on the page.

### [BoundingPoly](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.BoundingPoly)

A bounding polygon for the detected image annotation.

### [ColorInfo](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.ColorInfo)

Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image.

### [CropHint](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.CropHint)

Single crop hint that is used to generate a new crop when serving an image.

### [CropHintsAnnotation](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.CropHintsAnnotation)

Set of crop hints that are used to generate new crops when serving images.

### [CropHintsParams](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.CropHintsParams)

Parameters for crop hints annotation request.

### [DominantColorsAnnotation](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.DominantColorsAnnotation)

Set of dominant colors and their corresponding scores.

### [EntityAnnotation](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.EntityAnnotation)

Set of detected entity features.

### [FaceAnnotation](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.FaceAnnotation)

A face annotation object contains the results of face detection.

### [Feature](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.Feature)

Users describe the type of Google Cloud Vision API tasks to perform over images by using *Features*. Each Feature indicates a type of image detection task to perform. Features encode the Cloud Vision API vertical to operate on and the number of top-scoring results to return.
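As a rough, unofficial sketch of how `Feature` is typically combined with the request types above, the snippet below builds an `AnnotateImageRequest` with two features and submits it through `ImageAnnotatorClient` (exposed by the same `vision_v1p1beta1` package, not this `types` module). The local file name is a placeholder, and credential setup and error handling are omitted.

```python
from google.cloud import vision_v1p1beta1

client = vision_v1p1beta1.ImageAnnotatorClient()

# Read local image bytes; "photo.jpg" is a placeholder path.
with open("photo.jpg", "rb") as f:
    content = f.read()

request = vision_v1p1beta1.AnnotateImageRequest(
    image=vision_v1p1beta1.Image(content=content),
    features=[
        # type_ selects the detection vertical; max_results caps the number
        # of top-scoring results returned for that vertical.
        vision_v1p1beta1.Feature(
            type_=vision_v1p1beta1.Feature.Type.LABEL_DETECTION, max_results=5
        ),
        vision_v1p1beta1.Feature(
            type_=vision_v1p1beta1.Feature.Type.DOCUMENT_TEXT_DETECTION
        ),
    ],
)

# BatchAnnotateImagesRequest wraps one or more AnnotateImageRequests
# into a single service call.
response = client.batch_annotate_images(requests=[request])

for label in response.responses[0].label_annotations:
    print(label.description, label.score)
```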
### [Image](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.Image)

Client image to perform Google Cloud Vision API tasks over.

### [ImageContext](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.ImageContext)

Image context and/or feature-specific parameters.

### [ImageProperties](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.ImageProperties)

Stores image properties, such as dominant colors.

### [ImageSource](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.ImageSource)

External image source (Google Cloud Storage image location).

### [LatLongRect](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.LatLongRect)

Rectangle determined by min and max `LatLng` pairs.

### [Likelihood](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.Likelihood)

A bucketized representation of likelihood, which is intended to give clients highly stable results across model upgrades.

### [LocationInfo](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.LocationInfo)

Detected entity location information.

### [Page](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.Page)

Detected page from OCR.

### [Paragraph](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.Paragraph)

Structural unit of text representing a number of words in a certain order.

### [Position](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.Position)

A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.

### [Property](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.Property)

A `Property` consists of a user-supplied name/value pair.

### [SafeSearchAnnotation](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.SafeSearchAnnotation)

Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).

### [Symbol](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.Symbol)

A single symbol representation.

### [TextAnnotation](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.TextAnnotation)

TextAnnotation contains a structured representation of OCR-extracted text. The hierarchy of an OCR-extracted text structure is as follows: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, and so on. Refer to the TextAnnotation.TextProperty message definition for more detail.
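The following hedged sketch walks the OCR hierarchy described above. It assumes `response` is a `BatchAnnotateImagesResponse` obtained from a request that included a `DOCUMENT_TEXT_DETECTION` feature, for example the `response` produced by the earlier sketch.

```python
from google.cloud import vision_v1p1beta1


def print_words(annotation: vision_v1p1beta1.TextAnnotation) -> None:
    """Print every detected word by walking the OCR hierarchy:
    TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol."""
    for page in annotation.pages:
        for block in page.blocks:
            for paragraph in block.paragraphs:
                for word in paragraph.words:
                    # A Word is a sequence of Symbols; join them back together.
                    print("".join(symbol.text for symbol in word.symbols))


# `response` is assumed to come from a request that included a
# DOCUMENT_TEXT_DETECTION feature (see the previous sketch).
print_words(response.responses[0].full_text_annotation)
```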
### [TextDetectionParams](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.TextDetectionParams)

Parameters for text detections. This is used to control TEXT_DETECTION and DOCUMENT_TEXT_DETECTION features.

### [Vertex](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.Vertex)

A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.

### [WebDetection](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.WebDetection)

Relevant information for the image from the Internet.

### [WebDetectionParams](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.WebDetectionParams)

Parameters for web detection request.

### [Word](/python/docs/reference/vision/latest/google.cloud.vision_v1p1beta1.types.Word)

A word representation.
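As a closing illustration, the hedged sketch below uses the `SafeSearchAnnotation` and `Likelihood` types listed above to run safe-search detection on an image referenced by a Cloud Storage URI. The `gs://` URI is a placeholder, and the field names iterated over are the four verticals named in the `SafeSearchAnnotation` description.

```python
from google.cloud import vision_v1p1beta1

client = vision_v1p1beta1.ImageAnnotatorClient()

request = vision_v1p1beta1.AnnotateImageRequest(
    # The gs:// URI is a placeholder; point it at your own object.
    image=vision_v1p1beta1.Image(
        source=vision_v1p1beta1.ImageSource(gcs_image_uri="gs://my-bucket/photo.jpg")
    ),
    features=[
        vision_v1p1beta1.Feature(
            type_=vision_v1p1beta1.Feature.Type.SAFE_SEARCH_DETECTION
        )
    ],
)

annotation = client.batch_annotate_images(requests=[request]).responses[0]
safe = annotation.safe_search_annotation  # SafeSearchAnnotation

# Each field holds a bucketized Likelihood value rather than a raw score.
for vertical in ("adult", "spoof", "medical", "violence"):
    print(vertical, vision_v1p1beta1.Likelihood(getattr(safe, vertical)).name)
```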