Image Moderation

Learn how to filter unsafe images.

It's difficult to control the types of images being shared on your platform. The Image Moderation extension analyzes every image to check whether it is unsafe. After analysis, it classifies the image into one of four categories: Explicit Nudity, Suggestive Nudity, Violence, or Visually Disturbing. Along with the category, you receive a confidence score on a scale of 0 to 100.

"@injected": {
  "extensions": {
    "image-moderation": {
      'unsafe' => 'yes/no',
      'confidence' => '99',
      'category' => 'explicit_nudity/suggestive/violence/visually_disturbing'
    }
  }
}

A confidence value below 50 is likely to be a false positive, so we recommend moderating only when the confidence is higher than 50.

You can then either show a warning or drop the image message altogether. Instagram, for example, blurs sensitive content and shows a warning over it.

At the recipient's end, you can fetch the metadata from the message object by calling the getMetadata() method. Using this metadata, you can determine whether the image is safe or unsafe.

// Extract the Image Moderation output that the extension injects into the
// message metadata under "@injected" -> "extensions" -> "image-moderation".
if let metadata = message.metaData,
   let injectedObject = metadata["@injected"] as? [String: Any],
   let extensionsObject = injectedObject["extensions"] as? [String: Any],
   let imageModerationObject = extensionsObject["image-moderation"] as? [String: Any] {

    // All values are returned as strings.
    let unsafe = imageModerationObject["unsafe"] as? String ?? "no"
    let confidence = imageModerationObject["confidence"] as? String ?? "0"
    let category = imageModerationObject["category"] as? String ?? ""
}
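
Once you have these values, you can decide how to handle the image. The sketch below assumes the unsafe, confidence, and category strings extracted above; showSensitiveContentWarning(), dropImageMessage(), and displayImageNormally() are hypothetical placeholders for your own UI and message-handling logic. Note that the confidence is delivered as a string, so convert it to a number before comparing it with the recommended threshold of 50.

func handleModerationResult(unsafe: String, confidence: String, category: String) {
    // Convert the confidence string before comparing it with the threshold.
    let confidenceValue = Int(confidence) ?? 0

    if unsafe == "yes" && confidenceValue > 50 {
        if category == "explicit_nudity" || category == "suggestive" {
            // Blur the image and show a tap-to-reveal warning (placeholder).
            showSensitiveContentWarning()
        } else {
            // Violence or visually disturbing content: drop the message (placeholder).
            dropImageMessage()
        }
    } else {
        // Safe or low-confidence result: render the image normally (placeholder).
        displayImageNormally()
    }
}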
