The Social Dilemma: The Deadly Trap of Social Media
Netflix has been investing heavily in documentary films in recent years, and The Social Dilemma is one of its most impactful. The film is highly relevant today, shedding light on the dark truths of social media. While many of us suspected platforms like Google and Facebook of surveilling their users, the documentary reveals that the surveillance is far more extensive than we imagined. Everyone should watch it to understand the root causes of the unrest spreading across the globe, how social media has become a threat to our very existence, and whether we can overcome this challenge.
There’s a saying: “Data is the new oil.” In the modern era, data has become a tool to exploit users. Social media platforms are essentially hacking into users’ psychology. Compared to hacking computers, hacking the human mind has become the more potent tool. For instance, when a user searches for a product on Google, advertisements for that product appear across platforms like YouTube, Facebook, Twitter, and Snapchat.
All this data is stored and controlled by Artificial Intelligence (AI). While movies like Terminator imagined AI controlling the world in the future, The Social Dilemma suggests that AI is already in control, subtly manipulating our psychology in ways we don’t even notice.
This phenomenon aligns with Edward Tufte’s observation:
“There are only two industries that refer to their customers as ‘users’: illegal drugs and software.”
Key Highlights of The Social Dilemma
The documentary features interviews with former engineers from companies like Facebook, Google, and Apple, who describe designing systems that manipulate users. Features such as the “like” button, notification systems, and Google’s Inbox were built to draw users in more deeply. At first everything seemed positive: engagement rose. But as more features were layered on, such as comment sections, the problems escalated. The turning point came around 2009; in the years that followed, Facebook acquired Instagram (in 2012) and Google kept expanding its data-collection infrastructure.
The engineers were driven to make these platforms as addictive as possible. Even small design choices, such as notification tones and logos, were crafted to maximize user attention. At Google, for instance, a team of roughly 50 designers made decisions that shaped the daily habits of some two billion users worldwide.
The Business Model: Selling Users
Silicon Valley initially focused on selling software and hardware for profit. However, in the last decade, the model shifted to selling users themselves. Social media platforms generate revenue by selling advertising space. Advertisers spend millions to display their ads on users’ screens, essentially treating users as products.
As Tristan Harris, co-founder of the Center for Humane Technology, aptly put it:
“If you are not paying for the product, then you are the product.”
Most people perceive Google as just a search engine and Facebook as a platform to connect with friends. However, these platforms are designed to monopolize user attention. The time users spend on these platforms is monetized through advertisements, which influence their thoughts, behaviors, and even identities.
Surveillance Capitalism
Modern technology has given rise to a new form of capitalism: surveillance capitalism. Companies like Google, Facebook, and YouTube track every action users take online. Even the time spent viewing a photo is recorded, allowing AI to better understand user behavior. These companies know intimate details about their users, from their personality traits to their emotional vulnerabilities.
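The kind of tracking described above, down to how long a user lingers on a photo, amounts to logging timestamped behavioral events. The sketch below is a generic illustration, not any company’s actual telemetry; the class and field names are invented:

```python
import time

class EventLog:
    """Minimal behavioral log: every view becomes a timestamped record."""

    def __init__(self):
        self.events = []

    def record_view(self, user, item, category, dwell_seconds):
        # Even a pause on a photo is captured as a dwell-time measurement.
        self.events.append({"user": user, "item": item, "category": category,
                            "dwell": dwell_seconds, "ts": time.time()})

    def dwell_profile(self, user):
        """Average seconds spent per content category: raw material for
        inferring interests and emotional vulnerabilities."""
        by_cat = {}
        for e in self.events:
            if e["user"] == user:
                by_cat.setdefault(e["category"], []).append(e["dwell"])
        return {c: sum(v) / len(v) for c, v in by_cat.items()}

log = EventLog()
log.record_view("alice", "photo_1", "travel", 2.0)
log.record_view("alice", "photo_2", "fitness", 11.5)  # lingered here
log.record_view("alice", "photo_3", "travel", 1.5)
profile = log.dwell_profile("alice")
# The long pause on one category quietly reveals an interest alice never typed.
```

Nothing here requires the user to click or post; passively observed dwell time alone is enough to build a profile.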
Manipulation through Persuasive Technology
Social media platforms manipulate users in the same way neuroscientists and magicians exploit psychological principles. Persuasive technology labs in Silicon Valley train professionals to apply psychological insights to increase user engagement. Many influential figures in the tech industry have taken these courses, contributing to the massive growth of companies like Facebook and Uber.
Persuasive Technology: The Hidden Hand of Behavioral Manipulation
Persuasive technology refers to the use of technology to subtly influence and alter users’ behavior. Tech giants leverage this to shape user actions to align with their desired outcomes. For instance, platforms like Facebook engage users in endless news feed scrolling. Every time users reach the bottom of their feed, a single refresh resets the process, mirroring what psychology calls Positive Intermittent Reinforcement. Users are drawn to refresh because they are uncertain about what new content will appear—a phenomenon similar to the allure of slot machines in Las Vegas.
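The slot-machine analogy can be made concrete with a toy simulation. Everything below is invented for illustration: each refresh pays off only sometimes, which is precisely the variable-ratio schedule behind positive intermittent reinforcement.

```python
import random

def refresh_feed(rng, hit_rate=0.3):
    """One pull of the lever: sometimes fresh, rewarding content; often nothing."""
    return rng.random() < hit_rate  # True = an engaging post appeared

def session(rng, refreshes=20):
    """Count how many refreshes in a session actually paid off."""
    return sum(refresh_feed(rng) for _ in range(refreshes))

rng = random.Random(42)
hits = session(rng)
# Most refreshes yield nothing, yet the unpredictability itself,
# not the content, is what drives the next pull-down.
```

Psychologically, an unpredictable reward on roughly a third of attempts is far more habit-forming than a guaranteed one, which is why the refresh gesture is so hard to stop performing.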
Using persuasive technology doesn’t just involve conscious product usage; it embeds itself deeply into users’ subconscious, converting behaviors into habitual actions. A phone, always within reach, prompts the user to enable mobile data, unlocking a world of updates, much like pulling a lever on a slot machine to reveal what lies ahead. This isn’t a natural reaction but a behavior artificially crafted to maximize engagement.
Example: Photo Tagging
Consider photo tagging. A notification or email informs you that a friend tagged you in a photo, prompting you to check it. While it’s rare to ignore such alerts, the design is intentional. If the email displayed the photo directly, the process would be more straightforward. However, platforms like Facebook or Gmail prefer users to interact within their ecosystems, increasing engagement and growth—a method known as Growth Hacking. Engineers deliberately craft these methods to exploit user psychology, driving sign-ups, increased activity, and extended engagement, effectively growing the platform’s user base.
The Architects of Persuasive Technology
In Silicon Valley, celebrated growth hackers specialize in persuasive technology, following playbooks refined on social media platforms. These playbooks rely on continuous small-scale experiments, most notably A/B tests. Companies like Google and Facebook run such tests constantly, subtly shaping user behavior through incremental adjustments that compound into highly optimized engagement strategies. Facebook even published research on “massive-scale emotional contagion,” demonstrating that subtle changes to the news feed can sway users’ real-world emotions and behaviors. Political campaigns, for example, might use these techniques to influence voter turnout during midterm elections.
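A hypothetical A/B test of, say, a notification’s wording might look like the sketch below. The variant assignment and click-through rates are invented numbers, not any platform’s real data:

```python
import random

def assign_variant(user_id: int) -> str:
    """Deterministic 50/50 split: the same user always sees the same variant."""
    return "A" if user_id % 2 == 0 else "B"

def simulate_click(rng, variant):
    # Invented click-through rates: variant B's wording is slightly "stickier".
    rate = {"A": 0.10, "B": 0.12}[variant]
    return rng.random() < rate

rng = random.Random(0)
shown = {"A": 0, "B": 0}
clicks = {"A": 0, "B": 0}
for user_id in range(10_000):
    v = assign_variant(user_id)
    shown[v] += 1
    clicks[v] += simulate_click(rng, v)

# Comparing clicks[v] / shown[v] tells the platform which wording harvests
# more attention; ship the winner, then repeat with the next micro-tweak.
```

Each individual test nudges behavior only slightly, but thousands of such tests per year, each shipping its winner, is how the compounding optimization described above happens.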
Facebook’s realization that it could alter user emotions and behaviors without explicit consent exemplifies manipulation. Here, users become test subjects—digital guinea pigs. Unlike medical trials, the experiments don’t aim to cure but to harvest user attention, turning them into “techno-zombies.” The ultimate goal is to serve more advertisements, thereby maximizing revenue.
Who Controls These Algorithms?
The answer lies in Artificial Intelligence (AI). Engineers design these algorithms, setting specific goals for them to achieve. Using Machine Learning, these systems continually refine themselves to deliver optimal results. However, even the creators often don’t fully understand their systems’ inner workings. Algorithms develop an independent “mind,” evolving beyond their original design to maximize outcomes, such as user engagement.
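The “independent mind” the engineers describe can be illustrated with a minimal epsilon-greedy bandit. Nobody tells the algorithm what to show; it is only told to maximize watch time, and it discovers on its own which content category keeps users hooked. The category names and payoff numbers are invented for this sketch:

```python
import random

# Hypothetical average minutes watched per impression, unknown to the algorithm.
TRUE_PAYOFF = {"news": 1.0, "friends": 2.0, "outrage": 4.0}

def run_bandit(rounds=5000, epsilon=0.1, seed=1):
    rng = random.Random(seed)
    totals = {c: 0.0 for c in TRUE_PAYOFF}
    counts = {c: 0 for c in TRUE_PAYOFF}
    for _ in range(rounds):
        if rng.random() < epsilon:
            # Explore: occasionally try a random category.
            choice = rng.choice(list(TRUE_PAYOFF))
        else:
            # Exploit: show whatever has the best average watch time so far
            # (untried categories score infinity, so each gets sampled once).
            choice = max(TRUE_PAYOFF, key=lambda c:
                         totals[c] / counts[c] if counts[c] else float("inf"))
        reward = rng.gauss(TRUE_PAYOFF[choice], 0.5)  # noisy watch time
        totals[choice] += reward
        counts[choice] += 1
    return counts

counts = run_bandit()
# No one programmed "show outrage"; the engagement objective alone steers
# the system toward whichever content happens to hold attention longest.
```

This is the sense in which even the creators “don’t fully understand” the outcome: the objective is simple and legible, but the behavior it produces emerges from the data.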
The Unfair Game
When browsing platforms like Facebook, users unknowingly “play” against an AI that knows everything about them, predicting their next moves. Yet users know little about the system—besides, perhaps, the names of a few viral cat video creators. This imbalance underscores the manipulative nature of these platforms.
Unlike Wikipedia, which provides consistent information for all users, platforms like Facebook and Google customize content based on user profiles. For instance, searching “climate change” might yield results like “Climate change is a hoax” for one user and “Climate change is destroying nature” for another, depending on their browsing history and preferences. This personalization creates isolated realities, fostering polarization and enabling easy manipulation.
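The isolated-realities effect can be sketched as profile-weighted ranking. The headlines, topic weights, and user profiles below are toy values, not any search engine’s real model; the point is only that one query plus two histories yields two different “truths”:

```python
# Toy result pool for the query "climate change": headline -> topic weights.
RESULTS = {
    "Climate change is a hoax":            {"skeptic": 0.9, "science": 0.1},
    "Climate change is destroying nature": {"skeptic": 0.1, "science": 0.9},
}

def rank(results, profile):
    """Order headlines by dot product with the user's interest profile."""
    def score(topics):
        return sum(profile.get(t, 0.0) * w for t, w in topics.items())
    return sorted(results, key=lambda h: score(results[h]), reverse=True)

user_a = {"skeptic": 0.8, "science": 0.2}  # browsing history: skeptic content
user_b = {"skeptic": 0.1, "science": 0.9}  # browsing history: science content

top_a = rank(RESULTS, user_a)[0]
top_b = rank(RESULTS, user_b)[0]
# Same query, opposite top results: each user's feed confirms what the
# profile already says they believe.
```

Because each ranking also generates the clicks that feed the next profile update, the loop is self-reinforcing, which is exactly how the isolated realities deepen over time.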
Emotional Manipulation and Its Consequences
Social media algorithms are designed to exploit users’ emotions, presenting content that elicits strong reactions. Platforms track user engagement with content related to specific emotions, such as sadness or joy, and use this data to serve similar content repeatedly. This “emotional hacking” feeds a continuous cycle of engagement, often leading to increased depression and anxiety among users.
The documentary cites US data showing that since 2009, when social media reached mobile phones at scale, rates of self-harm and suicide among teenagers have risen sharply, with suicides among girls aged 10 to 14 more than doubling. Human interaction, which triggers dopamine release, is increasingly replaced by digital interaction, repurposing our innate need for connection into a driver of addictive behavior.
The Future of Persuasive Technology
The potential of persuasive technology extends beyond social media. Tech visionaries are exploring DNA technology, suggesting that within the next two decades, DNA-based data storage might become commonplace, fundamentally reshaping human interaction with technology.
Ultimately, persuasive technology, powered by AI, continues to influence human behavior in unprecedented ways, often without users’ knowledge. Understanding these dynamics is crucial to reclaiming control over our choices in an increasingly tech-driven world.
Final Thoughts
The Social Dilemma reveals the alarming extent to which social media platforms exploit user psychology for profit. It urges us to reflect on how these technologies shape our lives and challenges us to think critically about their ethical implications.