AWS is Amazon's cloud platform, full of ready-to-use services, and AWS Rekognition is a simple, quick, and cost-effective way to detect objects, faces, text, and more in both still images and videos. Rekognition Image lets you easily build powerful applications to search, verify, and organize millions of images: after uploading an image you can consume object and scene detection, facial analysis, face comparison, text detection, and content moderation. This guide shows how to do face recognition, object detection, and face comparisons using the AWS Rekognition service from Python.

For the image operations, you pass the input image either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. You would use the Bytes property to pass an image loaded from a local file system, while the S3Object property provides the S3 bucket name and object name. The image must be either a PNG or JPEG formatted file. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

DetectLabels identifies objects and scenes. Bounding boxes are returned for common object labels such as people, cars, furniture, apparel, or pets. Specific labels are returned together with their parents as unique labels in the response (Car, for example, arrives alongside Vehicle and Transportation), which lets the detection algorithm more precisely identify a flower as a tulip while still reporting it as a flower. This operation requires permissions to perform the rekognition:DetectLabels action.

DetectFaces performs facial analysis. You control the response through an array of facial attributes you want to be returned: by default only a small subset of attributes comes back, and requesting ALL adds the rest. For each face the response includes the level of confidence that what the bounding box contains is a face, a value representing the sharpness of the face, and indications of whether or not the face has a beard and whether or not the mouth on the face is open, each with a confidence level in the determination. Landmark coordinates are expressed as ratios of the image dimensions; for example, if the image height is 200 pixels and the y-coordinate of a landmark is at 50 pixels, this value is 0.25. The API is only making a determination of the physical appearance of a person's face. For more information, see FaceDetail in the Amazon Rekognition Developer Guide.

Several face operations also accept a quality bar. Use QualityFilter to set the quality bar for filtering by specifying LOW, MEDIUM, or HIGH. If you specify AUTO, Amazon Rekognition chooses the quality bar; if you specify NONE, no filtering is performed.

CompareFaces matches faces across two images. You pass the input and target images either as base64-encoded image bytes or as references to images in an Amazon S3 bucket. The response provides information about each face in the target image that matches the source image face analyzed by CompareFaces, together with the face in the source image that was used for comparison and a SourceImageOrientationCorrection string field.

DetectText detects text in the input image and converts it into machine-readable text. Every word and line has an identifier (Id), and each detection carries a geometry whose Polygon represents a fine-grained polygon around the detected item. When detection is restricted to regions, any word more than half in a region is kept in the results; if there is more than one region, the word will be compared with all regions of the screen.

DetectModerationLabels detects unsafe content in a specified JPEG or PNG format image. To filter images, use the labels returned by DetectModerationLabels to determine which types of content are appropriate. Similarly, DetectProtectiveEquipment returns an array of Personal Protective Equipment items detected around each body part, plus the ProtectiveEquipmentModelVersion string identifying the model that produced the result.

The sketches that follow illustrate these image operations.
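A minimal sketch of label detection from Python, assuming boto3 is configured with credentials; the bucket name, object key, and thresholds are illustrative placeholders, not values from the documentation:

import boto3

rekognition = boto3.client("rekognition")

# Reference an image stored in S3 ("my-bucket"/"photo.jpg" are placeholders)...
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},
    MaxLabels=10,
    MinConfidence=75.0,
)

# ...or pass raw bytes loaded from the local file system instead.
with open("photo.jpg", "rb") as f:
    response = rekognition.detect_labels(Image={"Bytes": f.read()})

for label in response["Labels"]:
    # Parents carry the label hierarchy, e.g. Tulip -> Flower, Car -> Vehicle.
    parents = [p["Name"] for p in label.get("Parents", [])]
    print(label["Name"], round(label["Confidence"], 1), parents)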
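Facial analysis and face comparison use the same image conventions; a sketch with hypothetical S3 locations:

import boto3

rekognition = boto3.client("rekognition")

# Request the full attribute set rather than the DEFAULT subset.
faces = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "portrait.jpg"}},
    Attributes=["ALL"],
)
for detail in faces["FaceDetails"]:
    print(detail["Confidence"])            # confidence the box contains a face
    print(detail["Quality"]["Sharpness"])  # value representing sharpness of the face
    print(detail["Beard"], detail["MouthOpen"])  # each {'Value': bool, 'Confidence': float}

# Compare the largest face in the source image against faces in the target image.
matches = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "my-bucket", "Name": "source.jpg"}},
    TargetImage={"S3Object": {"Bucket": "my-bucket", "Name": "target.jpg"}},
    SimilarityThreshold=80.0,
    QualityFilter="AUTO",  # or LOW / MEDIUM / HIGH; NONE disables filtering
)
for match in matches["FaceMatches"]:
    print(match["Similarity"], match["Face"]["BoundingBox"])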
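Text detection and image moderation, again with placeholder object keys:

import boto3

rekognition = boto3.client("rekognition")

# Detect words and lines; each detection has an Id and a fine-grained Polygon.
text = rekognition.detect_text(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "sign.jpg"}}
)
for det in text["TextDetections"]:
    print(det["Id"], det["Type"], det["DetectedText"], det["Geometry"]["Polygon"])

# Detect unsafe content and collect the label names to filter on.
moderation = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "upload.jpg"}},
    MinConfidence=60.0,
)
flagged = [label["Name"] for label in moderation["ModerationLabels"]]
print(flagged)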
To search for known faces you first need a collection: CreateCollection creates a collection in an AWS Region. IndexFaces then adds faces from images and reports the number of faces that are indexed into the collection, ListFaces takes the ID of the collection from which to list the faces (for an example, see Listing Faces in a Collection in the Amazon Rekognition Developer Guide), and SearchFacesByImage takes the ID of the collection that contains the faces you want to search for and returns an array of faces that matched the input face, along with the confidence in the match; more specifically, it is an array of metadata for each face match that is found. For these list operations, if the total number of items available is more than the value specified in max-items, a NextToken is provided in the output that you can use to resume pagination.

The video operations are asynchronous. The video must be stored in an Amazon S3 bucket, and the supported file formats are .mp4, .mov, and .avi. StartFaceSearch starts the asynchronous search for faces in a collection that match the faces of persons detected in a stored video. To get the search results, first check that the status value published to the Amazon SNS topic is SUCCEEDED, then call GetFaceSearch, which gets the face search results for an Amazon Rekognition Video face search started by StartFaceSearch. You get the job identifier for the search request from the initial call to StartFaceSearch; use JobId to identify the job in the subsequent call to GetFaceSearch. The response includes the matched faces and the time(s) that faces are matched in the video.

The same start/get pattern covers the other video analyses. GetFaceDetection gets face detection results for an Amazon Rekognition Video analysis started by StartFaceDetection and reports the current status of the face detection job. GetCelebrityRecognition gets the celebrity recognition results for an Amazon Rekognition Video analysis started by StartCelebrityRecognition and returns the array of celebrities recognized in the video; in the still-image counterpart, the CelebrityFaces and UnrecognizedFaces bounding box coordinates represent face locations after Exif metadata is used to correct the image orientation. Use JobId to identify a moderation job in a subsequent call to GetContentModeration, which returns the unsafe content labels detected in the stored video. Amazon Rekognition Video can also detect labels in a video, and unlike the image operation, activity detection is supported for label detection in videos: the start call returns the identifier for the label detection job, and GetLabelDetection returns information about each label detected in the video analysis request and the time the label was detected in the video. GetPersonTracking gets the path tracking results of an Amazon Rekognition Video analysis started by StartPersonTracking; an array element will exist for each time a person's path is tracked, with a bounding box around the detected person and an index for keeping track of the person (the identifier is not stored by Amazon Rekognition).

Segment detection locates technical cues and shot changes. Valid values for the segment types are TECHNICAL_CUE and SHOT, and the start request accepts filters that are specific to technical cues as well as filters for shots. Each result reports the confidence that Amazon Rekognition Video has in the accuracy of the detected segment (the service returns a value between 0 and 100, inclusive) and the end time of the detected segment, in milliseconds, from the start of the video; if the segment is a shot detection, it contains information about the shot detection. If you don't specify MinSegmentConfidence, GetSegmentDetection returns segments with confidence values greater than or equal to 50 percent.

For live video you create a stream processor. The request includes the collection to use for face recognition and the face attributes to detect, Name is idempotent, and the response carries the ARN for the newly created stream processor; Amazon Rekognition Video sends analysis results to Amazon Kinesis Data Streams. Amazon Rekognition Custom Labels adds trainable models: training takes a while to complete, and when describing model versions you can add up to 10 model version names to the list. Finally, since the media sits in S3, it is often useful to generate a presigned URL given a client, its method, and arguments, so the underlying objects can be fetched without AWS credentials.

The sketches that follow illustrate the collection and video workflows.
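A sketch of the collection lifecycle; the collection ID, bucket, keys, and external image ID are assumptions made for the example:

import boto3

rekognition = boto3.client("rekognition")

# Create the collection (one per Region) and index a face into it.
rekognition.create_collection(CollectionId="my-collection")
rekognition.index_faces(
    CollectionId="my-collection",
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "employee.jpg"}},
    ExternalImageId="employee-42",
    DetectionAttributes=["DEFAULT"],
)

# List stored faces, resuming with NextToken while more pages exist.
faces, token = [], None
while True:
    kwargs = {"CollectionId": "my-collection", "MaxResults": 100}
    if token:
        kwargs["NextToken"] = token
    page = rekognition.list_faces(**kwargs)
    faces.extend(page["Faces"])
    token = page.get("NextToken")
    if not token:
        break

# Search the collection with the largest face found in a query image.
result = rekognition.search_faces_by_image(
    CollectionId="my-collection",
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "query.jpg"}},
    FaceMatchThreshold=90.0,
)
for match in result["FaceMatches"]:
    print(match["Similarity"], match["Face"]["FaceId"])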
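A sketch of the stored-video face search; for brevity it polls GetFaceSearch instead of subscribing to the Amazon SNS topic, and the bucket, key, and collection ID are placeholders:

import time
import boto3

rekognition = boto3.client("rekognition")

# Start the asynchronous search; the video must already be in S3 (.mp4/.mov/.avi).
job = rekognition.start_face_search(
    Video={"S3Object": {"Bucket": "my-bucket", "Name": "lobby.mp4"}},
    CollectionId="my-collection",
)
job_id = job["JobId"]

# Poll until the job leaves IN_PROGRESS (production code should use SNS).
while True:
    result = rekognition.get_face_search(JobId=job_id)
    if result["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(5)

if result["JobStatus"] == "SUCCEEDED":
    for person in result["Persons"]:
        # Timestamp is the time, in milliseconds from the start of the video,
        # at which the person was matched.
        for match in person.get("FaceMatches", []):
            print(person["Timestamp"], match["Similarity"], match["Face"]["FaceId"])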
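A segment detection sketch under the same placeholder assumptions; it requests both segment types with per-type confidence filters:

import boto3

rekognition = boto3.client("rekognition")

job = rekognition.start_segment_detection(
    Video={"S3Object": {"Bucket": "my-bucket", "Name": "show.mp4"}},
    SegmentTypes=["TECHNICAL_CUE", "SHOT"],
    Filters={
        "TechnicalCueFilter": {"MinSegmentConfidence": 80.0},
        "ShotFilter": {"MinSegmentConfidence": 80.0},
    },
)

# Once the job has succeeded (check SNS or poll as above), fetch the segments.
segments = rekognition.get_segment_detection(JobId=job["JobId"])
for seg in segments["Segments"]:
    # Confidence lives on the per-type detail (ShotSegment or TechnicalCueSegment).
    detail = seg.get("ShotSegment") or seg.get("TechnicalCueSegment")
    print(seg["Type"], seg["StartTimestampMillis"], seg["EndTimestampMillis"],
          detail["Confidence"])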
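Generating a presigned URL is a generic boto3 feature, shown here for the S3 object holding a video; bucket and key are again placeholders:

import boto3

s3 = boto3.client("s3")

# generate_presigned_url takes the client method name and its arguments and
# returns a time-limited URL that grants access without AWS credentials.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "lobby.mp4"},
    ExpiresIn=3600,  # seconds
)
print(url)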
