Content Classification

Content Classification removes offensive ads, classifies ad content with keyword labels, and detects the use of fake logos.

Unsafe Content Detection

Unsafe Content Detection, powered by Google Cloud Vision, classifies potentially offensive content across five categories (a sketch of the underlying check follows the list):

  • Adult: Identifies content depicting nudity or sexually explicit material
  • Racy: Identifies content deemed suggestive or containing mature visual elements
  • Medical: Identifies content containing medically graphic images, also referred to as gore
  • Violent: Identifies content considered violent and potentially disturbing for end users
  • Spoof: Identifies content that can be considered parody, misleading, or “fake news”
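These five categories correspond directly to the SafeSearch fields returned by the Google Cloud Vision API. As a rough illustration of what such a check involves (a minimal sketch of the raw Vision call, not this product's own interface; the helper name and thresholding policy are assumptions):

```python
# Minimal sketch of a raw Google Cloud Vision SafeSearch check.
# The function name unsafe_content_scores is a hypothetical illustration.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def unsafe_content_scores(image_bytes: bytes) -> dict:
    """Return the SafeSearch likelihood name for each category of an ad image."""
    image = vision.Image(content=image_bytes)
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    return {
        "adult": vision.Likelihood(annotation.adult).name,
        "racy": vision.Likelihood(annotation.racy).name,
        "medical": vision.Likelihood(annotation.medical).name,
        "violent": vision.Likelihood(annotation.violence).name,  # Vision's field is "violence"
        "spoof": vision.Likelihood(annotation.spoof).name,
    }
```

A simple policy on top of this could block any creative that scores LIKELY or VERY_LIKELY in any category.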

Ad Labels

Ad Labels provides keyword classification for digital ad content, flagging specific products that may be unwanted on a website, such as alcohol or tobacco products.
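One plausible way to implement this kind of keyword classification is label detection matched against a publisher-defined blocklist. The sketch below assumes the Google Cloud Vision label detection endpoint; the blocklist terms, threshold, and function name are illustrative, not the product's actual taxonomy:

```python
# Hypothetical sketch: image label detection matched against a
# publisher-defined blocklist of unwanted product categories.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

UNWANTED_PRODUCTS = {"alcohol", "beer", "wine", "tobacco", "cigarette"}  # illustrative

def flag_unwanted_products(image_bytes: bytes, min_score: float = 0.7) -> list[str]:
    """Return detected labels that match the publisher's blocklist."""
    image = vision.Image(content=image_bytes)
    labels = client.label_detection(image=image).label_annotations
    return [
        label.description
        for label in labels
        if label.score >= min_score and label.description.lower() in UNWANTED_PRODUCTS
    ]
```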

Logo Detection

Logo Detection identifies ad imagery that contains brand and institution logos, which malicious advertisers use to trick end users by mimicking the branding of other companies or organisations.
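A common way to catch this kind of impersonation is to detect logos in the creative and compare them against the brands the advertiser is actually entitled to use. The sketch below assumes the Google Cloud Vision logo detection endpoint; the function name and allowlist approach are illustrative assumptions:

```python
# Hypothetical sketch: logo detection checked against an allowlist of
# brands the advertiser is authorized to use.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def suspicious_logos(image_bytes: bytes, authorized_brands: set[str]) -> list[str]:
    """Return detected logos that are not on the advertiser's allowlist."""
    image = vision.Image(content=image_bytes)
    logos = client.logo_detection(image=image).logo_annotations
    return [logo.description for logo in logos
            if logo.description.lower() not in authorized_brands]
```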

Publishers

Get constant visibility into the imagery your ad supply chain displays on your web pages. Protect your end users from seeing offensive ad content and from being exploited by vendors of fake products.

Ad ops teams and ad networks

Content Classification can automate much of the verification work needed to maintain control over the content of ad campaigns.
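As a rough sketch of what that automation might look like, the hypothetical helpers from the earlier sections can be combined into a single verification pass per creative:

```python
# Hypothetical sketch combining the earlier helpers into one automated
# verification pass, as an ad ops pipeline might run per creative.
def verify_creative(image_bytes: bytes, authorized_brands: set[str]) -> dict:
    """Run all three Content Classification checks and return one report."""
    scores = unsafe_content_scores(image_bytes)  # defined in the SafeSearch sketch
    return {
        "unsafe": {cat: level for cat, level in scores.items()
                   if level in ("LIKELY", "VERY_LIKELY")},
        "unwanted_products": flag_unwanted_products(image_bytes),
        "suspicious_logos": suspicious_logos(image_bytes, authorized_brands),
    }
```

A creative whose report comes back empty in all three fields could be approved automatically, leaving only flagged creatives for manual review.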