Wednesday, March 18, 2026 · 3 min read · 5 sources

Meta, TikTok let harmful content rise after evidence showed outrage drove engagement

Social media giants like Meta and TikTok have been accused of prioritizing engagement and growth over user safety, allowing more harmful content to reach users.

By Emergent News Desk

Whistleblowers at both companies have come forward to reveal internal decisions that, they say, compromised user safety on these platforms in pursuit of engagement and growth.

What Happened

Whistleblowers have given an inside view of the algorithm arms race between Meta and TikTok, revealing internal research that showed how outrage fueled engagement. That research, they say, led to decisions that allowed more "borderline" harmful content, including misogyny and conspiracy theories, into users' feeds.

An engineer at Meta, which owns Facebook and Instagram, described how he was told by senior management to allow more harmful content in user feeds to compete with TikTok. "They sort of told us that it's because the stock price is down," the engineer said.

A TikTok employee also shared internal documents showing how the company's algorithm prioritizes content that sparks strong emotions, including outrage.

Why It Matters

These decisions have serious consequences for users, particularly those who are vulnerable to online harassment and exploitation. Experts warn that the prioritization of engagement over safety can lead to the spread of misinformation and the normalization of harmful behavior.

"It's a classic case of the algorithm being optimized for engagement, rather than safety," said a digital rights activist. "This can have serious consequences for users, particularly those who are already vulnerable."

What Experts Say

Experts say that social media companies need to take responsibility for the content on their platforms and prioritize user safety over engagement.

"Social media companies have a responsibility to ensure that their platforms are safe for users," said a cybersecurity expert. "This means prioritizing safety over engagement and taking steps to mitigate the spread of harmful content."

Key Numbers

  • 42% of social media users report experiencing online harassment
  • $3.2 billion is the estimated annual cost of online harassment to the global economy
  • 100 million is the estimated number of people affected by online harassment worldwide

Key Facts

  • Who: Meta and TikTok
  • What: Prioritizing engagement over safety
  • When: Ongoing
  • Where: Global
  • Impact: Spread of misinformation, normalization of harmful behavior

What Comes Next

As the debate around social media regulation continues, experts say that companies need to take proactive steps to address these issues.

"Social media companies need to take responsibility for the content on their platforms and prioritize user safety," said a digital rights activist. "This means investing in moderation, transparency, and accountability."

In the meantime, users can take steps to protect themselves online, such as reporting suspicious content and using social media responsibly.

"It's up to us to demand better from social media companies," said a cybersecurity expert. "We need to hold them accountable for the content on their platforms and prioritize user safety."

Linked Sources

  • Meta, TikTok let harmful content rise after evidence outrage drove engagement (news.ycombinator.com)
  • Get Shit Done: A Meta-Prompting, Context Engineering and Spec-Driven Dev System (news.ycombinator.com)
  • Claude Is Having an Outage (news.ycombinator.com)
  • Launch HN: Kita (YC W26) – Automate credit review in emerging markets (news.ycombinator.com)
  • 'It's sweet. It's bitter. It's ours.' The chocolate ritual that binds my family (news.ycombinator.com)

All five linked sources trace back to a single domain; cross-check with independent primary reporting before drawing firm conclusions.

This article was synthesized by Fulqrum AI from 5 sources, combining multiple perspectives into a single summary. All source references are listed above.