The time you spend shopping online and streaming YouTube videos may be fleeting, but the data you leave behind can stay forever.
Big Data, more formally termed “large-scale data”, is data that is collected and stored on a mass scale. With the use of AI-powered analytical tools, organisations can process vast troves of data to identify patterns in records of sales transactions, health information, and even customer service phone conversations.
The insights gleaned from Big Data have turned it into a billion-dollar industry, with many applications in research, government, business, and burgeoning technologies such as the Internet of Things (IoT).
While Big Data is an invaluable resource for making predictions about human behaviour and identifying future trends, it also raises a plethora of data privacy and ethical issues, which recent privacy laws such as the General Data Protection Regulation (GDPR) are trying to address.
Here are four ways in which the GDPR has changed how Big Data is collected and used today:
Most people can remember giving their name or email address away on a website, but many other types of personal data are collected about us in more inconspicuous ways. Information such as your device data, Facebook “likes”, and browsing history is often tracked on a large and rapid scale by many online and third-party services.
By itself, this information seems harmless; however, each piece of data has the potential to be collated and used to identify you, or put to other purposes, without your informed consent.
Laws like the GDPR have attempted to regulate and minimise mass data collection by introducing six legal bases for processing. Before an organisation can collect the personal data of individuals in the EU, it must demonstrate that it meets at least one of these legal bases. In this way, organisations are nudged towards collecting the minimum amount of personal data required to deliver their product or service.
To avoid overcomplicating GDPR compliance, there has been an industry-wide shift towards “real-time data analytics”, where organisations can quickly analyse, use, and dispose of data without the need to store and secure it for long periods of time.
Under the GDPR, individuals in the EU have the right to request a copy of their personal data and transfer it to another organisation so that they can benefit from other online services.
To comply with this requirement, many platforms like Facebook have now given users the option to download a copy of all the personal data collected about them.
Many people willingly share what is considered “sensitive personal data” through social media websites and dating apps. Information such as race, sexuality, and political and religious beliefs potentially reveals more than just the intimate details of our personal lives; it can also be used against us.
Some of the most infamous political advertising campaigns conducted online, such as those enabled by Cambridge Analytica and Facebook during the 2016 US presidential election, have shown how people’s personal data could be exploited to manipulate them.
Another key concern of privacy advocates and lawmakers is how Big Data analysis drives automated decision-making, which could lead to “data-driven discrimination”. Common examples include mortgage applications, policing, company hiring processes, and marketing, where biased data could unfairly discriminate against an individual. To combat these risks, the GDPR restricts organisations from making decisions based solely on automated processing where those decisions significantly affect individuals.
Despite the numerous data privacy scandals and insidious threats to people’s basic rights, the Big Data industry is showing no sign of slowing down.