Deeping Source Inc.

Copyright 2020 Deeping Source Inc. All rights reserved.

contact@deepingsource.io

14F, 312, Teheran-ro, Gangnam-gu, Seoul, 06211, Republic of Korea

Anonymizer

Revitalize valuable but unavailable data

The Long-Unsolved Trade-off:

Data Utility vs. Privacy

De-identified data meet privacy regulations
Privacy regulations such as the GDPR apply to information that can be related to an identifiable person. Anonymous or de-identified data therefore fall outside the scope of these regulations.
Existing technologies make data unusable
Existing de-identifying technologies in the industry severely limit how data can be used. Because they simply detect and delete personal information, the other key attributes needed for machine learning and analytics are erased along with it.

Anonymizer achieves both

and this is what makes the big difference

Then, what is it that makes Anonymizer so different from other de-identifying technologies?

Anonymizer allows companies and ML developers to collect data that remain usable for their target applications while still guaranteeing privacy. It is the only way to achieve both data utility and compliance with privacy regulations.

While removing Personally Identifiable Information (PII), Anonymizer preserves data quality equivalent to the original. Once anonymized, the data become invisible to humans but visible to AI, allowing users to train real ML models while protecting others' privacy.

The big change that only Anonymizer can bring: developing machine learning models without using the original data.

Anonymizer

The Innovation Process

Building a new ML model without using original data

Anonymizer
First, anonymize original data

Anonymizer obfuscates data task-specifically for each user. For instance, a data consumer who wants to build a cat-detecting ML model is provided with anonymized data that contains no private information but retains the key attributes necessary for cat detection.
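To make the workflow concrete, here is a minimal sketch of step 1. It is illustrative only: the `Anonymizer` class, its `task` argument, and the `anonymize` method are hypothetical stand-ins, not the actual Deeping Source API, and the noise transform merely mimics the interface of a task-specific anonymization.

```python
# Illustrative sketch of step 1 -- the Anonymizer class below is a hypothetical
# stand-in, not the real Deeping Source API.
import numpy as np

class Anonymizer:
    """Task-specific anonymizer stand-in: a real anonymizer would remove
    personally identifiable content while keeping the attributes needed
    for the target task (here, cat detection)."""

    def __init__(self, task: str):
        self.task = task

    def anonymize(self, image: np.ndarray) -> np.ndarray:
        # Placeholder transform that only mimics the interface; it does NOT
        # provide any privacy guarantee by itself.
        rng = np.random.default_rng(0)
        return image + rng.normal(scale=0.1, size=image.shape)

anonymizer = Anonymizer(task="cat-detection")
raw_images = np.random.rand(16, 3, 64, 64)              # stand-in for the original data
anonymized_images = np.stack([anonymizer.anonymize(x) for x in raw_images])
np.save("anonymized_images.npy", anonymized_images)     # this is what gets shared
```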

Sharing Anonymized Data
Second, train a new model
With anonymized data provided by Deeping Source, users can train a new ML model (G) whose output is nearly identical to that of a model trained on the original data.
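
Continuing the same sketch, step 2 can be pictured as an ordinary training loop that only ever touches the shared anonymized tensors. The architecture, labels, and file names are placeholders carried over from the previous sketch, not part of the product.

```python
# Sketch of step 2 (same assumptions as above): model G is trained purely on
# the anonymized data shared in step 1; the original images are never loaded.
import numpy as np
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

anonymized = torch.tensor(np.load("anonymized_images.npy"), dtype=torch.float32)
labels = torch.randint(0, 2, (len(anonymized),))          # stand-in "cat" / "no cat" labels

model_g = nn.Sequential(                                   # placeholder architecture for model G
    nn.Conv2d(3, 16, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
optimizer = torch.optim.Adam(model_g.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
loader = DataLoader(TensorDataset(anonymized, labels), batch_size=8, shuffle=True)

for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model_g(x), y)
        loss.backward()
        optimizer.step()

torch.save(model_g.state_dict(), "model_g.pt")
```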
Deploy Model G
Third, deploy the model in actual cases

Trained on anonymized data, model G remains highly effective in real environments where new original data are collected. In other words, as long as the incoming data are anonymized with our Anonymizer, users can build and operate real ML models using anonymized data alone.
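
Finally, a sketch of step 3, reusing the hypothetical `anonymizer` and `model_g` objects from the previous two sketches: newly collected data are anonymized at the point of collection, and model G only ever sees the anonymized result.

```python
# Sketch of step 3 (same hypothetical objects as above): new original data are
# anonymized on collection, and model G runs only on the anonymized result.
import numpy as np
import torch

new_frame = np.random.rand(3, 64, 64)                     # stand-in for a newly captured image
anonymized_frame = anonymizer.anonymize(new_frame)         # anonymize before anything else

model_g.eval()
with torch.no_grad():
    x = torch.tensor(anonymized_frame, dtype=torch.float32).unsqueeze(0)
    prediction = model_g(x).argmax(dim=1).item()
print("cat detected" if prediction == 1 else "no cat")
```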

 

Obfuscator

The safest way to use data with confidential information

Confidential data that needs to be shared

Confidential data contains classified or sensitive information that only authorized persons may fully access, and its leakage can cause critical damage to the organization. As essential information of the organization, this data requires high-level security and careful management when it is shared and used for further purposes, such as building an AI model.

Obfuscator as a solution

A reliable way of sharing confidential data

Obfuscator hides confidential information by making data unreadable while preserving their utility for machine learning.

Only the features needed for the target use remain; all other data components are obfuscated and cannot be reversed back to the original.

In other words, Obfuscator allows users to train ML models without using the original data. Users therefore have nothing to worry about even when the obfuscated data are handed to a third party that is not fully authorized to access the original information.
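
As a rough illustration of this sharing pattern (not the product's actual mechanism), the sketch below uses a simple dimensionality-reducing random projection as a stand-in for the obfuscation step: only the projected features ever leave the organization, and the exact original field values cannot be read back from them directly.

```python
# Hypothetical illustration of the Obfuscator sharing pattern -- the class and
# the random-projection transform are stand-ins, not the actual product.
import numpy as np

class Obfuscator:
    """Stand-in for a task-specific obfuscation: project the records into a
    lower-dimensional space so the exact original values cannot be read back
    directly, while coarse structure useful for modelling remains."""

    def __init__(self, n_features: int, n_components: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.projection = rng.normal(size=(n_features, n_components))

    def obfuscate(self, records: np.ndarray) -> np.ndarray:
        return records @ self.projection

confidential_records = np.random.rand(1000, 32)            # stand-in for confidential data
obfuscator = Obfuscator(n_features=32, n_components=8)
shareable = obfuscator.obfuscate(confidential_records)      # only this is handed to the third party
np.save("shareable_features.npy", shareable)
```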

Obfuscator is the solution for any organization seeking new opportunities through data-driven AI development.
