According to a new study, the user data Apple is collecting through data mining isn’t as anonymous as you might think.
Since last year, the US-based tech giant has been using a new "differential privacy" technique to mine data from its users. The technique adds a calibrated amount of random noise to the data collected from each user, so that no individual's information can be singled out. Although similar techniques are used by many tech companies, the problem the researchers found lies in the way Apple implements it.
The new study comes from researchers at the University of Southern California, Indiana University, and China's Tsinghua University. The researchers were able to study the available Apple code that implements this differential privacy technique on the macOS and iOS operating systems.
"Apple's privacy loss parameters exceed the levels typically considered acceptable by the differential privacy research community," says Aleksandra Korolova, a professor at the University of Southern California. Korolova also has experience working as a research scientist at Google, where she worked on its implementation of differential privacy.
The extent to which a user's data is anonymized under differential privacy is quantified by a parameter called "epsilon": the lower the epsilon, the more noise is added and the stronger the privacy guarantee. The researchers studied the noise that Apple's operating systems inject into a user's data before it is uploaded to the company's servers. Apple keeps its epsilon values secret. The researchers found that Apple's epsilon allowed far more identifiable personal data through than is generally considered acceptable, and, astonishingly, that iOS 10 permitted even more. The company, however, disputed the findings of the report.
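To see how epsilon controls the privacy/noise trade-off, consider the classic randomized-response mechanism, one of the simplest differentially private techniques. This is an illustrative sketch, not Apple's actual algorithm, and the epsilon values used below are hypothetical examples, not the values the study attributes to Apple.

```python
import math
import random

def randomized_response(true_bit: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise report its flip. This satisfies epsilon-differential
    privacy: smaller epsilon -> more noise -> stronger anonymity."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return true_bit if random.random() < p_truth else not true_bit

# Hypothetical epsilon values chosen only to show the contrast.
random.seed(0)
for eps in (0.1, 10.0):
    reports = [randomized_response(True, eps) for _ in range(10_000)]
    print(f"epsilon={eps}: fraction of truthful reports = "
          f"{sum(reports) / len(reports):.3f}")
```

With a small epsilon (0.1), each report is nearly a coin flip, so an observer learns almost nothing about any individual; with a large epsilon (10), the report is almost always truthful, which is exactly the kind of weak setting the researchers criticize.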