Social media companies like Instagram, Facebook, X, and Threads continue to promise safer spaces and stronger protections for their users, yet in 2025 the reality tells a very different story. These platforms repeat the same pledges about improving community safety, removing harmful accounts, and reducing online abuse, but anyone who actually spends time on these apps knows better. Trolls and bullies flourish with little accountability, scammers operate freely, bot networks grow by the thousands, and real users, especially those who value kindness and respect, are left to fend for themselves. This isn't a minor oversight or a random flaw. It is a systemic failure, and it is long overdue for these companies to take real responsibility.
Trolling has evolved far beyond the occasional rude commenter. It now includes coordinated harassment, mass-reporting attacks, targeted pile-ons, and relentless negativity directed at public figures, creators, actors, and even everyday users. People who try to advocate for safety or defend others often become targets themselves. Platforms publicly claim to discourage harassment, but behind the scenes their algorithms reward outrage because outrage drives engagement: the more arguments and toxicity a post generates, the more visibility it gets. Trolls are effectively rewarded while kind voices are buried beneath the negativity. This dynamic has turned social media into an environment where harmful behavior is amplified instead of discouraged.
Scammers and impersonators pose an equally severe threat, particularly in fan communities. Well-known public figures such as the actor Sam Heughan face constant waves of fake accounts that steal their photos, mimic their biographies, copy their captions, and message fans with manipulative schemes. The impersonation problem is more layered than most people realize. Some scammers try to duplicate verified, checkmarked accounts, but many don't bother with a fake checkmark at all and still pose as the real person with alarming confidence. They simply copy photos, register similar usernames, and craft believable stories. These scammers pass themselves off as the real deal without needing verification badges; they rely on stolen identity, emotional manipulation, and the trust fans naturally place in the public figure.
Despite all this, when users report these impersonators, the platforms often respond with the same robotic message: “This account does not violate our community guidelines.” It is alarming that companies with advanced artificial intelligence can detect copyrighted music in seconds but cannot recognize a stolen photo or an account that is clearly pretending to be a well-known actor. Whether it’s outdated tools or a lack of priority, the result is the same—users, fans, and the actors themselves are left vulnerable.
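To put that technology gap in perspective, consider how little code it takes to spot a re-uploaded photo. The sketch below uses the open-source Python libraries Pillow and imagehash; the filenames and the distance threshold are illustrative assumptions, and a real platform would run something far more robust at scale, but the underlying technique, perceptual hashing, has been freely available for years.

```python
# A minimal sketch of near-duplicate photo detection with perceptual hashing.
# Assumes the open-source Pillow and imagehash libraries
# (pip install Pillow imagehash). The filenames below are placeholders.
from PIL import Image
import imagehash

def looks_like_stolen_photo(original_path: str, suspect_path: str,
                            threshold: int = 10) -> bool:
    """Return True if two images are perceptually near-identical.

    phash produces a 64-bit fingerprint that survives resizing,
    recompression, and light filtering; subtracting two hashes gives
    the Hamming distance between the fingerprints. The threshold of
    10 bits is an illustrative choice, not a platform standard.
    """
    original = imagehash.phash(Image.open(original_path))
    suspect = imagehash.phash(Image.open(suspect_path))
    return original - suspect <= threshold

if __name__ == "__main__":
    if looks_like_stolen_photo("original.jpg", "suspect.jpg"):
        print("Suspect image is a near-duplicate of the known photo.")
```

If a hobbyist can approximate this with two free libraries, companies that already fingerprint billions of audio and video clips cannot credibly claim the obstacle is technical.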
And this is where something much bigger needs to happen: the management teams of all actors, including Sam Heughan's team, should request a formal meeting with the founders and executive leadership of the major social media platforms. Impersonation does not just target fans; it violates the actors themselves. It abuses their identity, misrepresents their character, misleads their audience, and exposes them to liability, harassment, and long-term reputational harm. No actor or public figure should be forced to continually reassure fans that they are not the ones sending private messages. And no management team should have to spend its time chasing down hundreds of fake profiles across multiple apps.
A collective meeting between management teams and platform founders would send a powerful message that impersonation is not a casual nuisance—it is a serious issue with legal, emotional, and professional consequences. These teams should present documented evidence of repeated impersonation attempts, the emotional and financial harm caused to fans, and the ongoing damage to their clients’ reputations. They should request mandatory anti-impersonation protocols, stronger identity verification systems, faster takedown times for likeness violations, and dedicated platform liaisons assigned specifically to public figures. Platforms should not only listen—they should implement real and enforceable solutions, because protecting someone’s identity is just as important as protecting their privacy.
Bot networks have also become a major force in online conversations. They no longer simply spam links or post meaningless content. Today's automated systems are sophisticated enough to spread misinformation, fuel rumors, amplify drama, and artificially inflate negativity. Many harmful narratives, including false rumors, composite photos, and exaggerated controversies, begin with clusters of automated accounts rather than real users. These bots manipulate what trends, what gains traction, and what dominates conversations, while real people struggle to be heard above the noise. Platforms know the bot problem exists, yet their interventions remain slow, inconsistent, and ineffective.
Beyond scammers and bots, there is a growing problem of fan groups and online communities that repeatedly circulate unverified stories, half-truths, or outright fabricated rumors. Groups like sis_brasil have become examples of how quickly misinformation spreads when platforms enforce neither accuracy nor accountability. When communities consistently share rumors about public figures without fact-checking, it does more than mislead fans: it harms reputations, fuels unnecessary drama, and creates long-lasting confusion. If platforms can flag medical or political misinformation, they should also be able to identify and address groups that repeatedly distribute unverified stories about celebrities or private individuals. The lack of moderation in these areas allows false narratives to flourish unchecked, and I believe platforms need to act on such groups, because misinformation, especially when it becomes a pattern, can be just as damaging as harassment or impersonation.
Across all major social media apps, the failures are consistent. Impersonation is not taken seriously, even when the target is a verified account. Identity theft slips through weak detection systems. Harassment goes undetected until it becomes extreme. Scammers can create brand-new accounts and instantly send mass DMs to hundreds of people. Misinformation spreads because certain groups face no consequences for repeatedly posting unverified claims. Moderation remains inconsistent at best and nonexistent at worst. Algorithms continue to boost negativity because it generates engagement. Trolls and bullies face little consequence, while honest users risk being punished simply for reporting abuse or correcting false claims.
This issue extends beyond platform mismanagement; it is a global problem. Social media touches nearly every country, every age group, and every community. That's why I believe all countries should hold these platforms accountable and push for stronger international standards. Just as nations enforce laws for privacy, consumer protection, advertising transparency, and digital safety, they should set clear expectations for how social media companies handle impersonation, bullying, mass harassment, misinformation, and bot activity. There is no reason for multi-billion-dollar companies with cutting-edge technology to neglect such basic responsibilities.
Social media itself is not bad. It has enormous potential to connect people, inspire creativity, and build meaningful communities. The problem is not the technology; it is the harmful actors who are allowed to operate freely because the systems meant to stop them are weak or unenforced. When trolls, bullies, scammers, rumor-spreaders, and bots aren't stopped, they turn social media into a hostile place. When platforms fail to enforce safety, negativity becomes normalized. When impersonation and misinformation are ignored, fans, creators, and public figures are put at risk. But if companies and governments alike commit to real accountability, social media could return to what it was always meant to be: a place where people can share their lives, their passions, their creativity, and their stories in harmony.
At the end of the day, social media doesn’t become toxic on its own. It becomes toxic when platforms fail to protect their communities. And it is absolutely possible to change that. If users demand better, if countries demand accountability, and if companies decide that people matter more than metrics, then social media can once again become a safe, welcoming, inspiring place for everyone.

