Anonymizer protects privacy

Anonymizer anonymizes data that contains private information, making it usable for ML models.

• World's first patent in South Korea and the US; PCT application filed

• Supports image, video, audio and text

• Real-time processing via embedded systems

• Datasets continuously scaled up by anonymizing new data

• Anonymization model accuracy scales with dataset size

• Once processed, data cannot be reverted to the original

• Can be used for training new ML models as well as existing ones

Learn more >
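As a rough illustration of the concept only (not Anonymizer's actual method), the sketch below blurs detected faces in an image using OpenCV's bundled Haar cascade; the function name anonymize_faces and the blur settings are illustrative assumptions.

```python
# Conceptual sketch of image anonymization -- illustrative only,
# not Anonymizer's actual pipeline.
import cv2

def anonymize_faces(input_path: str, output_path: str) -> None:
    image = cv2.imread(input_path)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Heavy Gaussian blur removes identifying detail; the original
        # pixels are overwritten, so the change cannot be undone.
        image[y:y + h, x:x + w] = cv2.GaussianBlur(
            image[y:y + h, x:x + w], (51, 51), 0
        )
    cv2.imwrite(output_path, image)
```

Because only the blurred pixels are saved, the output cannot be restored to the original image.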

Obfuscator protects confidentiality

Obfuscator obfuscates data that contains confidential or sensitive information, preventing data leaks while keeping the data usable for ML models.

• World's first patent in South Korea and the US; PCT application filed

• Supports image, video, audio and text

• Real-time processing via embedded systems

• Datasets continuously scaled up by obfuscating new data

• Obfuscation model accuracy scales with dataset size

• Once processed, data cannot be reverted to the original

• Can be used for training new ML models as well as existing ones

Learn more >
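For intuition only, the sketch below shows one simple form of text obfuscation: replacing sensitive patterns with placeholder tokens using regular expressions. The pattern set and the obfuscate function are illustrative assumptions, not Obfuscator's actual technique.

```python
# Conceptual sketch of text obfuscation -- illustrative only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def obfuscate(text: str) -> str:
    for label, pattern in PATTERNS.items():
        # One-way substitution: the original values are discarded.
        text = pattern.sub(f"<{label}>", text)
    return text

print(obfuscate("Contact jane.doe@example.com, card 4111 1111 1111 1111"))
# Contact <EMAIL>, card <CARD>
```

The placeholders keep the sentence structure intact, so the obfuscated text can still feed a training pipeline without exposing the original values.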

Jammer protects data ownership

Jammer jams data to prevent it from being illegally copied for ML training.

• Models trained on jammed data suffer a severe accuracy drop

• Once processed, data cannot be reverted to the original

Learn more >
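As a loose analogy rather than Jammer's actual algorithm, the sketch below adds a small, fixed perturbation to a batch of images; models trained on such jammed copies tend to latch onto the spurious pattern and lose accuracy on clean data. The function name jam_images and the strength value are assumptions.

```python
# Conceptual sketch of data jamming -- illustrative only.
import numpy as np

def jam_images(images: np.ndarray, strength: float = 0.05, seed: int = 0) -> np.ndarray:
    """images: float array in [0, 1] with shape (N, H, W, C)."""
    rng = np.random.default_rng(seed)
    # A fixed random pattern added to every image acts as a misleading
    # feature; the clean originals are never stored with the output.
    pattern = rng.uniform(-1.0, 1.0, size=images.shape[1:])
    return np.clip(images + strength * pattern, 0.0, 1.0)
```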

Watermarker protects data ownership

Watermarker embeds watermarks into data to deter theft, such as unauthorized reselling.

• Once processed, data cannot be reverted to the original

• Can be used for training new ML models as well as existing ones

Learn more >
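As one classic illustration (not necessarily Watermarker's scheme), the sketch below embeds an owner ID into the least significant bits of an image so a resold copy can be traced; embed_watermark and the LSB approach are illustrative assumptions.

```python
# Conceptual sketch of image watermarking -- illustrative only.
import numpy as np

def embed_watermark(image: np.ndarray, owner_id: str) -> np.ndarray:
    """image: uint8 array; owner_id bits overwrite the LSB plane."""
    bits = np.unpackbits(np.frombuffer(owner_id.encode("utf-8"), dtype=np.uint8))
    flat = image.flatten()  # flatten() returns a copy
    if bits.size > flat.size:
        raise ValueError("image too small to hold the watermark")
    # Overwriting the least significant bits is lossy, so the marked
    # copy cannot be reverted to the original.
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)
```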