class Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1VideoAnnotationResults

Annotation results for a single video.

Attributes

celebrity_recognition_annotations[RW]

Celebrity recognition annotation per video. Corresponds to the JSON property `celebrityRecognitionAnnotations` @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1CelebrityRecognitionAnnotation]

error[RW]

The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). Corresponds to the JSON property `error` @return [Google::Apis::VideointelligenceV1p1beta1::GoogleRpcStatus]

explicit_annotation[RW]

Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame. Corresponds to the JSON property `explicitAnnotation` @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1ExplicitContentAnnotation]

face_annotations[RW]

Deprecated. Please use `face_detection_annotations` instead. Corresponds to the JSON property `faceAnnotations` @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1FaceAnnotation>]

face_detection_annotations[RW]

Face detection annotations. Corresponds to the JSON property `faceDetectionAnnotations` @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1FaceDetectionAnnotation>]

frame_label_annotations[RW]

Label annotations on frame level. There is exactly one element for each unique label. Corresponds to the JSON property `frameLabelAnnotations` @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]

input_uri[RW]

Video file location in [Cloud Storage](https://cloud.google.com/storage/). Corresponds to the JSON property `inputUri` @return [String]

logo_recognition_annotations[RW]

Annotations for logos detected, tracked, and recognized in the video. Corresponds to the JSON property `logoRecognitionAnnotations` @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1LogoRecognitionAnnotation>]

object_annotations[RW]

Annotations for objects detected and tracked in the video. Corresponds to the JSON property `objectAnnotations` @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1ObjectTrackingAnnotation>]

person_detection_annotations[RW]

Person detection annotations. Corresponds to the JSON property `personDetectionAnnotations` @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1PersonDetectionAnnotation>]

segment[RW]

Video segment. Corresponds to the JSON property `segment` @return [Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment]

segment_label_annotations[RW]

Topical label annotations on video level or user-specified segment level. There is exactly one element for each unique label. Corresponds to the JSON property `segmentLabelAnnotations` @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]

segment_presence_label_annotations[RW]

Presence label annotations on video level or user-specified segment level. There is exactly one element for each unique label. Compared to the existing topical `segment_label_annotations`, this field presents more fine-grained, segment-level labels detected in video content and is made available only when the client sets `LabelDetectionConfig.model` to `builtin/latest` in the request. Corresponds to the JSON property `segmentPresenceLabelAnnotations` @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]

shot_annotations[RW]

Shot annotations. Each shot is represented as a video segment. Corresponds to the JSON property `shotAnnotations` @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1VideoSegment>]

shot_label_annotations[RW]

Topical label annotations on shot level. There is exactly one element for each unique label. Corresponds to the JSON property `shotLabelAnnotations` @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]

shot_presence_label_annotations[RW]

Presence label annotations on shot level. There is exactly one element for each unique label. Compared to the existing topical `shot_label_annotations`, this field presents more fine-grained, shot-level labels detected in video content and is made available only when the client sets `LabelDetectionConfig.model` to `builtin/latest` in the request. Corresponds to the JSON property `shotPresenceLabelAnnotations` @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1LabelAnnotation>]

speech_transcriptions[RW]

Speech transcription. Corresponds to the JSON property `speechTranscriptions` @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1SpeechTranscription>]

text_annotations[RW]

OCR text detection and tracking. Annotations for detected text snippets; each snippet has a list of associated frame information. Corresponds to the JSON property `textAnnotations` @return [Array<Google::Apis::VideointelligenceV1p1beta1::GoogleCloudVideointelligenceV1p3beta1TextAnnotation>]
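When a response comes back, a typical pattern is to check `error` first and only then read the annotation arrays, guarding against `nil` for features that were not requested. A minimal sketch of that pattern, using `OpenStruct` stand-ins shaped like this class rather than a live API response (the nested `entity.description` and per-segment `confidence` fields reflect the label annotation types referenced above):

```ruby
require "ostruct"

# Hypothetical results object shaped like VideoAnnotationResults; in real use
# it would come from a completed annotate_video operation.
results = OpenStruct.new(
  error: nil,
  segment_label_annotations: [
    OpenStruct.new(
      entity: OpenStruct.new(description: "dog"),
      segments: [OpenStruct.new(confidence: 0.92), OpenStruct.new(confidence: 0.87)]
    )
  ]
)

if results.error
  # `error` follows the google.rpc.Status shape (code, message, details).
  warn "Annotation failed: #{results.error.message}"
else
  # Guard with `|| []` since unrequested features leave the attribute nil.
  labels = (results.segment_label_annotations || []).map do |annotation|
    [annotation.entity.description, annotation.segments.map(&:confidence).max]
  end
end

labels # => [["dog", 0.92]]
```

The same guard-then-iterate shape applies to the other array attributes (`shot_annotations`, `text_annotations`, `speech_transcriptions`, and so on).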

Public Class Methods

new(**args)
# File lib/google/apis/videointelligence_v1p1beta1/classes.rb, line 6311
def initialize(**args)
  update!(**args)
end

Public Instance Methods

update!(**args)

Update properties of this object

# File lib/google/apis/videointelligence_v1p1beta1/classes.rb, line 6316
def update!(**args)
  @celebrity_recognition_annotations = args[:celebrity_recognition_annotations] if args.key?(:celebrity_recognition_annotations)
  @error = args[:error] if args.key?(:error)
  @explicit_annotation = args[:explicit_annotation] if args.key?(:explicit_annotation)
  @face_annotations = args[:face_annotations] if args.key?(:face_annotations)
  @face_detection_annotations = args[:face_detection_annotations] if args.key?(:face_detection_annotations)
  @frame_label_annotations = args[:frame_label_annotations] if args.key?(:frame_label_annotations)
  @input_uri = args[:input_uri] if args.key?(:input_uri)
  @logo_recognition_annotations = args[:logo_recognition_annotations] if args.key?(:logo_recognition_annotations)
  @object_annotations = args[:object_annotations] if args.key?(:object_annotations)
  @person_detection_annotations = args[:person_detection_annotations] if args.key?(:person_detection_annotations)
  @segment = args[:segment] if args.key?(:segment)
  @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  @segment_presence_label_annotations = args[:segment_presence_label_annotations] if args.key?(:segment_presence_label_annotations)
  @shot_annotations = args[:shot_annotations] if args.key?(:shot_annotations)
  @shot_label_annotations = args[:shot_label_annotations] if args.key?(:shot_label_annotations)
  @shot_presence_label_annotations = args[:shot_presence_label_annotations] if args.key?(:shot_presence_label_annotations)
  @speech_transcriptions = args[:speech_transcriptions] if args.key?(:speech_transcriptions)
  @text_annotations = args[:text_annotations] if args.key?(:text_annotations)
end
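Note that `update!` (and therefore the constructor, which forwards to it) assigns only the keys actually supplied, so attributes absent from `args` are left untouched. A minimal stand-in illustrating the pattern (plain Ruby, not the real gem class; `input_uri` and `segment_label_annotations` are the only attributes modeled here):

```ruby
# Stand-in mimicking the keyword-forwarding constructor and guarded update!
# pattern used by the generated class above.
class SketchResults
  attr_accessor :input_uri, :segment_label_annotations

  def initialize(**args)
    update!(**args)
  end

  # Assign only keys that were actually supplied, so a partial update
  # never clobbers attributes set earlier.
  def update!(**args)
    @input_uri = args[:input_uri] if args.key?(:input_uri)
    @segment_label_annotations = args[:segment_label_annotations] if args.key?(:segment_label_annotations)
  end
end

r = SketchResults.new(input_uri: "gs://bucket/video.mp4")
r.update!(segment_label_annotations: [])
r.input_uri # => "gs://bucket/video.mp4" (unchanged by the second call)
```

This `args.key?` guard is what distinguishes "not supplied" from "explicitly set to nil": passing `input_uri: nil` would clear the attribute, while omitting it leaves the existing value in place.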