Joan is Awful: Who Is Affected the Most? (3/5)
Blog 3 is out! After setting the context and asking tough questions, this one dives into who’s most vulnerable in the digital trap — with real-life cases you may relate to.
From Screens to Scars
In our previous blogs, we explored how our behaviors are unconsciously shaped by screens—and how technology subtly alters our decisions, lifestyle, and sense of reality.
But while the what and how are important, today we ask:
Who suffers the most?
Who are the real victims of this silent digital takeover?
Swapped Roles
We taught the machine to laugh and care,
To nod, to listen, to always be there.
But now from humans, we demand no soul —
Just output, speed, and perfect control.
We shaped the code with heart and grace,
Then stripped that heart from the human race.
1. The Intelligent & Tech-Savvy
Ironically, the people who keep up with the latest in technology—the so-called tech-savvy—are often the first to fall prey.
These early adopters become guinea pigs for tech companies. They end up giving more to the system than they receive in return.
Tech companies often provide free software or services—not out of generosity, but to test their products and feed their algorithms. In the case of AI, these users unknowingly become data donors, with no compensation or control over how their information is used. Their curiosity and enthusiasm blind them to the deeper issues.
What’s worse, the more knowledgeable one becomes about tech, the fewer people there are who can warn or protect them—simply because the technology is too new for anyone to understand its full implications. In their rush to try out new platforms, they often accept digital contracts (Terms & Conditions) without reading them—caught in the momentum of endless scrolling.
Example – Cambridge Analytica & Facebook Scandal (2018):
Many users, especially early adopters, were lured into taking “personality quizzes.” These fun-looking apps harvested data—not just from the user but from their entire network. The result was a massive privacy breach, and the data was used to manipulate political opinion. The people who thought they were “too smart” to be manipulated were the ones most blindsided.
Source: Wikipedia
2. The Partially Aware but Dismissive
This group has some awareness that something might be wrong—but they downplay the risks.
Why?
There are several reasons:
Sub-Group A: The “Someone Will Fix It” Crowd
They believe that if things get worse, someone will intervene—a government, a company, or an activist.
Example – Pegasus Spyware (India, 2021):
Many journalists, activists, and even government officials were allegedly targeted using Pegasus—a military-grade spyware. Despite red flags, many ignored cybersecurity hygiene. Their complacency cost them privacy and professional credibility. Few raised their voice early—most assumed “it won’t happen to me.”
Source: Wikipedia
Sub-Group B: “I’m Too Small to Matter”
Students, low-profile users, or people without significant digital presence think they have nothing to lose.
“Who would target me?” they wonder.
Example – SIM Swap Scam in Mumbai:
Multiple middle-class individuals lost lakhs of rupees when fraudsters ported their mobile numbers using fake documents and gained access to OTPs for banking. Victims included students, homemakers, and mid-level professionals—people who never imagined they were “important” enough to be targeted.
Source: News 18
Sub-Group C: The Lazy Trusters
Many of us are simply too lazy to dig deeper. We save passwords in browsers, opt for auto-payments, and allow platforms like YouTube or OTT apps to learn our tastes and spoon-feed content—thinking that minimal usage shields us from danger.
Example – Netflix & YouTube Algorithms:
Over time, users are repeatedly served similar content, and this reinforcement creates echo chambers. For instance, some teenagers who started out watching fitness videos slowly spiraled into toxic body-image loops as the algorithm kept suggesting extreme dieting and gym-culture content.
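The feedback loop behind this can be sketched in a few lines. This is a deliberately toy model with invented category names, not how Netflix or YouTube actually rank content: it simply shows how a recommender that rewards past engagement locks a user into one lane after a single click.

```python
# Toy sketch of a recommendation feedback loop (illustrative only;
# real platforms use far richer ranking signals than click counts).
CATEGORIES = ["fitness", "cooking", "news", "music"]

def recommend(history):
    # Greedy rule: surface the category the user has engaged with most.
    # Real systems are probabilistic, but the self-reinforcing loop is the same.
    return max(CATEGORIES, key=history.count)

history = ["fitness"]  # a single initial click
for _ in range(50):
    history.append(recommend(history))

share = history.count("fitness") / len(history)
print(f"'fitness' share of the feed: {share:.0%}")  # one click dominates the whole feed
```

One early click tips the scales, every subsequent recommendation reinforces it, and the other three categories never appear again. That is the echo chamber in miniature.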
Sub-Group D: High Dopamine Users
Some people live for content. They scroll endlessly, post videos, and measure their self-worth by likes, comments, and shares. Their moods swing with online validation. These users often ignore safety or privacy for the sake of speed and visibility.
Example – TikTok Bans & Suicide Cases (India, USA):
Several creators spiraled into depression when their content stopped receiving validation; some died by suicide. They had become addicted to attention, trapped in the performance loop created by algorithmic judgment.
This partial awareness can also misplace trust.
People start believing a forwarded WhatsApp video more than their closest friends or family. Algorithms feed them distorted versions of reality, and even if their inner circle disagrees, the online narrative appears more convincing.
Joan (Black Mirror: "Joan Is Awful")
In this fictional but highly relatable Netflix episode, Joan discovers that her everyday life is being dramatized and broadcast without her consent by an AI streaming platform.
It’s symbolic of how everyday data—mundane chats, emails, clicks—can be captured, reinterpreted, and manipulated at scale.
The episode mirrors fears around AI deepfakes, data mining, and consent.
Parallel in Real Life – The Clearview AI Case (US):
A facial-recognition firm scraped billions of images from social media without consent. People’s faces were fed into surveillance systems—and sometimes wrongly matched. Lawsuits followed, but the damage to privacy was already done.
Source: The Guardian; Business Insider
Final Thought: Awareness Is No Longer Optional
The more connected we become, the more carefully we must guard our choices, habits, and assumptions.
In this data-driven world, ignorance is not bliss—it’s vulnerability.
Up Next: Institutions and the Algorithm
In the next blog, we’ll move beyond individual vulnerabilities and look at the institutional impact—how businesses, schools, media, and governments are quietly adapting (or failing) in the face of relentless technological acceleration.
🌀 Stay with me as I peel back the next layer of this digital paradox.
Relevant Blogs:
Joan is Awful: Who Owns the Remote? (2/5)
If you missed the earlier posts, we explored how technology, profit-driven content, and perception can turn anyone into a villain—without their knowledge or consent.