While many of us worry about our personal data privacy and security on an ongoing basis, these interrelated issues tend to become political hot buttons each time an election nears. A few weeks ago, the FBI reported that widespread election interference and misinformation were coming from Russia and Iran. This time the communication medium was email, but it’s well established that social media has also been used frequently to influence the hearts and minds of American voters. The past decade has seen a stream of interference incidents that call into question our online privacy, our overall information security and the role of technology in a democracy. Beginning in July and continuing through this fall, we’ve seen the leaders of the largest tech companies called before Congress and asked to prove that their services are safe and secure and that they aren’t using our personal data to form an oligopoly.
While we have witnessed that many of their practices were not ideal, we, the public, are not blameless. We sign up for these services in droves without considering the potential for misuse, and we freely agree to terms and conditions that surrender our data in exchange for convenience, simply because we’re too lazy to read the fine print.
So, what should we do to protect ourselves and our democracy? A good first step is knowing exactly what data is being collected and how it’s being used. When it comes down to the basics, most social platforms have two simple goals:

1. Keep users engaged on the platform for as long as possible.
2. Monetize that attention by selling advertisers access to it.
They achieve these goals through algorithms: complex sets of rules and calculations that are modified and adapted over time, based on individual habits, to get increasingly better at maintaining user interest. Put simply, this is how a social platform learns what you like and puts more of that content in front of you to keep your attention.
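To make that feedback loop concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the topic labels, the learning rate, the post format); real ranking systems are vastly more sophisticated, but the basic shape is the same: interactions update per-topic affinities, and the feed is re-ranked by those affinities.

```python
from collections import defaultdict

class FeedRanker:
    """Toy engagement model: per-user topic weights updated by interactions."""

    def __init__(self, learning_rate=0.1):
        self.weights = defaultdict(float)  # topic -> affinity score
        self.lr = learning_rate

    def record_interaction(self, topic, engaged):
        # Nudge the topic weight up on engagement, down on a skip.
        signal = 1.0 if engaged else -1.0
        self.weights[topic] += self.lr * signal

    def rank_feed(self, posts):
        # Surface the posts whose topics the user has engaged with most.
        return sorted(posts, key=lambda p: self.weights[p["topic"]], reverse=True)

ranker = FeedRanker()
ranker.record_interaction("pets", engaged=True)
ranker.record_interaction("politics", engaged=False)
feed = ranker.rank_feed([{"id": 1, "topic": "politics"}, {"id": 2, "topic": "pets"}])
print([p["id"] for p in feed])  # pets first: [2, 1]
```

Every click teaches the ranker a little more about you, which is exactly why the content keeps getting harder to look away from.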
Social networks rely on these algorithms to understand who we are: our interests, behaviors and insecurities. This data is then used to provide advertisers with access to highly valuable, pre-qualified audiences that can be targeted on their platforms and across the web. The more accurate the algorithm, the more relevant the content, and the more likely it is that a desired outcome (e.g., a purchase, charity donation or site visit) is achieved. In an auction environment, where advertisers bid on relevant audiences, these highly accurate algorithms are designed to maximize profit for the social platforms.
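The auction dynamic can be illustrated with a toy second-price auction, a common format in online advertising (the actual mechanics of any given platform are proprietary and more complex; the advertiser names and numbers below are made up). Bids are weighted by a relevance score, which is where the algorithm’s accuracy translates directly into revenue.

```python
def run_ad_auction(bids):
    """Toy second-price auction: bids is a list of (advertiser, bid, relevance).

    Advertisers are ranked by bid * relevance; the winner pays just enough
    to beat the runner-up's score, so better targeting data supports
    higher effective prices.
    """
    ranked = sorted(bids, key=lambda b: b[1] * b[2], reverse=True)
    winner, win_bid, win_rel = ranked[0]
    if len(ranked) > 1:
        _, next_bid, next_rel = ranked[1]
        price = (next_bid * next_rel) / win_rel  # pay the runner-up's score
    else:
        price = 0.0  # reserve price omitted for simplicity
    return winner, round(price, 2)

winner, price = run_ad_auction([
    ("shoe_brand", 2.50, 0.9),   # high bid, well targeted
    ("charity", 2.00, 0.95),     # lower bid, very relevant audience
    ("scam_ad", 3.00, 0.2),      # highest bid, poor relevance
])
print(winner, price)  # shoe_brand wins at roughly 2.11
```

Note that the poorly targeted highest bid loses: the platform earns more over time by rewarding relevance, which is precisely why accurate user data is so valuable.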
Because advertisers pay a premium to access the audiences that accurate algorithms identify, it’s in each social platform’s best interest to hire top talent and create the best formula. In the 2010s this led to a cascade of success for social media companies. Their algorithms became so good that everyone, people and advertisers alike, wanted to be on the platforms. Ad sales produced staggering profits, which were then reinvested in product and feature enhancements. And as mobile applications became more popular, the volume of data collected skyrocketed.
Simultaneously, ad platforms were simplified so that anyone could become an advertiser. Self-serve advertising at this scale was new to the advertising industry; there had always been an intermediary, in the form of agencies or publishers, to gut-check both the program and the data. This shift to self-serve fundamentally changed the landscape of marketing.
Positive Impacts of Curated Content
The relevance and convenience of content curated by social algorithms have made social media an integral part of our lives, helping us maintain social connections and improving our access to information. Social media platforms figure out what you like, find relevant content and place it in your feed, front and center.
Social algorithms have saved consumers time and money, helped researchers find relevant data and increased political engagement across the world. Also, social causes that would have taken years to build momentum can now rely on social media to access wide networks of like-minded people to quickly gain traction, support and donations. And beyond altruistic benefits, most of us have been entertained, uplifted, or at least pleasantly distracted by the latest TikTok challenge, YouTube pet video or Instagram travel photos.
In sum, these algorithms have provided a multitude of benefits and have maximized the advertising dollars that get reinvested in improving the overall user experience. But at what cost?
The Dark Side of Algorithms
As the industry exploded with exponential growth in users and advertisers, oversight became more difficult, and we began seeing abuses by bad actors. The environment became a breeding ground for sensational conspiracy theories, fake news and a flurry of black hat activity. The end goals of these bad actors varied, from money-stealing scams and foreign political meddling to organized troll farms: cells of social advertisers paid to conduct this black hat activity. Abusive entities learned to exploit the algorithms and turned that power against the platforms and their users. Despite questionable or downright harmful aims, the abuse went largely unchecked for some time because it generated high interaction rates, and therefore profits.
The “bubble effect” also became more common as users were shown only content the algorithm was confident they would like. This led to vast online echo chambers in which differing opinions or content rarely made it into a person’s feed. It has been argued that this bubble effect has contributed to increased polarization in countries around the globe.
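A rough simulation suggests why this narrowing compounds. In the sketch below (an assumed toy model, not any platform’s actual logic), a user starts with no preference between two topics; the ranker shows whichever topic it currently believes the user prefers, and engagement feeds that belief back into itself.

```python
import random

random.seed(42)  # fixed seed for a reproducible run

weights = {"A": 0.5, "B": 0.5}  # start with no preference between two topics

for step in range(200):
    # The platform shows the topic it currently believes the user prefers.
    shown = max(weights, key=weights.get)
    # The user engages in proportion to the platform's own estimate,
    # feeding that estimate back into itself.
    if random.random() < weights[shown]:
        weights[shown] += 0.01
    else:
        weights[shown] = max(weights[shown] - 0.005, 0.01)  # floor avoids collapse
    total = weights["A"] + weights["B"]
    weights = {k: v / total for k, v in weights.items()}

print(weights)  # the favored topic has crowded out the other
```

After a couple hundred iterations one topic dominates the feed almost completely, even though the user started out indifferent; the feedback loop, not any genuine preference, decided the outcome.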
Finding a Balance of Power
Algorithms have continued to evolve as data processing power grows exponentially. Today they use predictive modeling to anticipate the content and ads a user will want to see. This is an ethical gray area: some argue that prediction has always been the goal of marketing, while others worry that the potential for abuse is simply too great.
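In practice, “predictive modeling” here usually means something like click-through or engagement prediction. The minimal sketch below uses scikit-learn’s logistic regression on made-up interaction features; the feature names and training data are illustrative assumptions, not any platform’s actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up training data: [past clicks on this topic, seconds viewed yesterday]
X = np.array([[0, 5], [1, 30], [4, 120], [0, 2], [6, 200], [2, 60]])
y = np.array([0, 0, 1, 0, 1, 1])  # did the user engage with similar content?

model = LogisticRegression()
model.fit(X, y)

# Predict the probability that a new piece of content will be engaged with,
# before the user has ever seen it -- the "anticipation" described above.
new_content_features = np.array([[3, 90]])
print(model.predict_proba(new_content_features)[0, 1])
```

The unsettling part is not the math, which is decades old, but the scale and intimacy of the behavioral data being fed into it.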
The power of algorithms also directly influences human behavior. At the macro level, critics note that exposing people to a narrow type of content can change their decisions, beliefs or even their personality. Even on a small scale, our moods can be shifted by the content we are shown, and social media companies are actively testing mood alterations within their algorithms despite negative consumer reactions.
The debate over social media’s positive potential and its potential for abuse will continue into the next decade. This is a new frontier of government regulation and legal debate, with laws like Europe’s GDPR and California’s Consumer Privacy Act taking small steps on a longer journey toward standardized regulation. One thing that is changing rapidly is the public’s knowledge of what goes on behind the scenes. Education and online literacy remain central to stopping abuse.
So, the next time you are about to agree to terms and conditions without opening the privacy statement, stop and consider the value of what you’re willing to give away, and remember that the future of our open democracy requires an informed citizenry.