Ads Classification

Unsafe Content Detection

Unsafe Content Detection, powered by Google Cloud Vision, classifies potentially offensive content across five categories (see the sketch after this list):

  • Adult: Identifies content containing nudity or sexually explicit material
  • Racy: Identifies content deemed suggestive or containing mature visual elements
  • Medical: Identifies content containing medically graphic imagery, also referred to as gore
  • Violent: Identifies content considered violent and potentially disturbing to end users
  • Spoof: Identifies content that can be considered parody, misleading, or “fake news”
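
As a minimal sketch, the categories above correspond to Cloud Vision's SafeSearch annotation, which can be queried with the Python client library. The bucket URI below is a placeholder, and note that the Vision API names the Violent category "violence":

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

image = vision.Image()
image.source.image_uri = "gs://my-ads-bucket/creative.png"  # placeholder ad image

response = client.safe_search_detection(image=image)
annotation = response.safe_search_annotation

# Each field is a Likelihood enum (VERY_UNLIKELY through VERY_LIKELY);
# the API's "violence" field corresponds to the Violent category above.
for category in ("adult", "racy", "medical", "violence", "spoof"):
    likelihood = getattr(annotation, category)
    print(f"{category}: {vision.Likelihood(likelihood).name}")
```

How these likelihoods are acted on (for example, rejecting any creative rated LIKELY or VERY_LIKELY) is a policy decision left to the integrating application.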

Ad Labels

Ad Labels provides keyword classification for digital ad content, flagging specific products that may be unwanted on a website, such as alcohol or tobacco products.
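
As an illustrative sketch only (not the Ad Labels implementation itself), a comparable keyword-flagging step could be built on Cloud Vision label detection; the blocklist and image URI below are made-up examples:

```python
from google.cloud import vision

# Example blocklist of unwanted product keywords; illustrative only.
UNWANTED_KEYWORDS = {"alcohol", "beer", "wine", "tobacco", "cigarette"}

client = vision.ImageAnnotatorClient()

image = vision.Image()
image.source.image_uri = "gs://my-ads-bucket/creative.png"  # placeholder ad image

response = client.label_detection(image=image)
flagged = [
    label.description
    for label in response.label_annotations
    if label.description.lower() in UNWANTED_KEYWORDS
]
print("Flagged keywords:", flagged if flagged else "none")
```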

Logo Detection

Logo Detection identifies ad imagery containing brand and institution logos that malicious advertisers use to trick end users by mimicking the branding of other companies or organisations.
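
A minimal sketch, assuming this feature maps to Cloud Vision's logo detection endpoint; the image URI is a placeholder, and each returned annotation carries the detected brand name and a confidence score:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

image = vision.Image()
image.source.image_uri = "gs://my-ads-bucket/creative.png"  # placeholder ad image

response = client.logo_detection(image=image)
for logo in response.logo_annotations:
    # description is the detected brand/organisation name; score is confidence (0-1).
    print(f"{logo.description}: {logo.score:.2f}")
```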