Suggested For You: War

An illustration of a figure holding puppet strings over people in various emotional states, with a war raging in the background.
Image ©2025 ux-qa.com


What is Rage Bait?

"Rage Bait" is the dominant algorithmic construct behind many social media and news feeds.

Rage bait refers to content designed to provoke anger, indignation, or moral outrage, prompting increased engagement in the form of clicks, shares, watch-time, and comments.

Initially a byproduct of engagement-optimized algorithms, it evolved into a core strategy once platforms realized that outrage boosts metrics like time-on-site and virality.


Algorithmic Incentives Behind Rage Bait

Engagement metrics (reactions, comments, shares) are value-agnostic: the algorithm doesn't care whether a response is positive or negative, only how intense it is, so the most provocative content gets prioritized.

Content that provokes negative emotions, in particular anger and moral disgust, is shared at higher rates than neutral or positive content (per studies from NYU, MIT, and others).

The architecture doesn't weigh quality or veracity; it is optimized to show you whatever keeps you glued to your screen.
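
To make that incentive concrete, here is a minimal sketch of value-agnostic ranking in TypeScript. The signal names and weights are hypothetical, and real platform rankers are proprietary and far more complex; the point is only that nothing in the score distinguishes outrage from delight, or falsehood from fact.

// A minimal, illustrative sketch of a value-agnostic engagement score.
// Signal names and weights are invented; no real platform's ranker is shown here.
interface EngagementSignals {
  reactions: number;        // all reaction types counted alike ("Angry" == "Like")
  comments: number;
  shares: number;
  watchTimeSeconds: number;
}

// The score only rewards intensity of response; nothing measures accuracy,
// quality, or the emotion that produced the engagement.
function engagementScore(s: EngagementSignals): number {
  return (
    s.reactions * 1 +
    s.comments * 2 +          // comments imply more time-on-site
    s.shares * 3 +            // shares drive virality
    s.watchTimeSeconds * 0.05
  );
}

// Feed ordering: whatever provoked the strongest response rises to the top.
function rankFeed<T extends { signals: EngagementSignals }>(posts: T[]): T[] {
  return [...posts].sort(
    (a, b) => engagementScore(b.signals) - engagementScore(a.signals)
  );
}

Swap the weights however you like; as long as the score measures only intensity of response, rage bait wins.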


What is Suggested Content?

Suggested content is simply content that is presented to users as being of interest to them specifically.


What is a "Bait and Switch"?

A bait and switch is when one party promises something to another, then delivers something else, often inferior to what was promised.

In this instance, labeling content "For You", or implying that it has somehow been personalized, generally serves the platform: the same content is distributed ad hoc to as many users as possible. A more honest label would be "For Our Advertisers".


How Algorithms Make Suggested Content Dangerous

Outrage Drives Visibility

Algorithms elevate emotionally charged and shocking content because it drives engagement at scale, regardless of truth or context.


Immediacy, Going Viral

Short-form video and push notifications let content reach wide audiences before fact-checking can catch up, intensifying the spread of misinformation.


Bot and State Weaponization

Coordinated disinformation campaigns exploit algorithms, weaponize content, and turn comment sections into elaborate propaganda networks.


Making People Hate Each Other

This online frenzy fuels real-world tension to this day: war, hate crimes, civil unrest, escalations in general hostility, and the breakdown of trust in public institutions. I'm writing this from America, but it is true everywhere.


Scaling Addiction and Starting Wars: Rage Bait and Global Conflict

The Birth of Rage Bait: Facebook and the 2016 Election


In 2016, Facebook re-engineered its News Feed.

Internally, the posts that counted as "engaging" were those that triggered strong emotional responses, especially anger, because they kept users on the platform longer.

"Angry" reactions were introduced in 2016, which drove exponentially more interactions than "Likes", generating the company more revenue.

Donald Trump won the Republican primary and the presidency that year.

Outrage posts, conspiracies, and ideological content outperformed moderate or factual stories. The more division and misinformation thrived, the more time users spent scrolling.

The choice to preserve growth and protect revenue is easy to understand: engagement equaled profit, and anything that complicated that equation was treated as noise. Facebook's leadership was made aware of the harms repeatedly in internal reports.


"Everybody's Doing It": Rage Bait Goes Viral

Other platforms followed Facebook's lead because rage bait works: it drives up the metrics that translate into massive increases in ad revenue.

YouTube shifted to "watch-time" as its key metric, which incentivized rabbit hole recommendations and radicalization loops.

Twitter (pre-X) promoted "Top Tweets," prioritizing viral and divisive content over chronological order, often bringing fringe conversations front and center.

Instagram, under parent company Facebook, dropped its chronological feed in 2016 and began inserting content "Suggested for you".

TikTok eventually took over the mainstream with hyper-individualized, emotionally reactive feeds that surface polarizing content without needing a social graph.

Google News & Discover began surfacing outrage headlines and emotionally loaded clickbait based on user behavior, with each user seeing a different set of triggering headlines than the next.

By 2020, all major tech platforms had adopted some variation of rage-optimized algorithmic architecture, for retention, monetization, or both.


Conflict in the Era of Rage Bait

Rage bait algorithms are a global destabilizer. 

Every conflict is amplified, distorted, and emotionally weaponized by feeds that prioritize provocation, in many cases leading to mass killings and war.

This fundamental architecture is still the model behind virtually every algorithmic feed. It generates enormous profit, and the platforms that run it have no incentive to change.


The Role of UX

UX designers and algorithm engineers are not powerless here.

Design user-controlled feeds that give the viewer control over the algorithm itself.

Build in warnings that flag content designed to induce rage or hatred, and/or silo it into a separate feed altogether with other low-quality content.
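
As a rough sketch of what that could look like, the snippet below assumes a hypothetical rage-likelihood score attached to each post (the classifier itself is out of scope here) and lets the viewer set their own thresholds and behavior; none of these names reflect any existing platform API.

// Hypothetical user-facing feed controls: the viewer, not the platform,
// decides how rage-heavy content is handled. All names are illustrative.
interface FeedPreferences {
  downrankRageBait: boolean;     // push likely rage bait lower in the feed
  rageWarningThreshold: number;  // 0..1 score above which a warning label is shown
  siloRageBait: boolean;         // move flagged content into a separate, opt-in tab
}

interface Post {
  id: string;
  baseScore: number;  // whatever the platform's ranker produced
  rageScore: number;  // 0..1 output of a hypothetical rage/outrage classifier
}

interface RenderedPost extends Post {
  showWarning: boolean;
  silo: "main" | "low-quality";
}

function applyUserControls(posts: Post[], prefs: FeedPreferences): RenderedPost[] {
  return posts
    .map((p): RenderedPost => ({
      ...p,
      // User-chosen downranking instead of platform-chosen amplification.
      baseScore: prefs.downrankRageBait ? p.baseScore * (1 - p.rageScore) : p.baseScore,
      showWarning: p.rageScore >= prefs.rageWarningThreshold,
      silo:
        prefs.siloRageBait && p.rageScore >= prefs.rageWarningThreshold
          ? "low-quality"
          : "main",
    }))
    .sort((a, b) => b.baseScore - a.baseScore);
}

The design choice that matters is not the math; it is that the thresholds and weights live in the user's settings rather than in the platform's revenue model.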

At the present moment, experienced designers can work above and around internal cultural limitations by publishing demonstrable work online, then feeding it back into AI as a reference to lend validity to their position.

Advocating for change is nearly impossible under the weight of rage-induced metrics, but if AI is going to take your job down the line anyway, designers and engineers might as well give it something good to work from.


The Rise of AI Interfaces

Rising from the toxic landscape of rage-optimized scroll feeds, AI interfaces present a rare opportunity to go in a new direction.

Unlike feeds that depend on popularity-driven engagement feedback loops, AI systems, particularly chat-based ones, can respond to a user's intent and curiosity rather than de-personalizing them through a hate machine.

The most obvious concern here is the spread of misinformation to AI. 

The metrics used to prioritize knowledge shouldn't be the same as the metrics used to measure viral potential, given tech's current insistence on low-quality interactions (see: TikTok).

In spite of its many noted drawbacks, and the potential for chaos as AI massively reshapes societies around the world, AI creates an opportunity here.

Forward-looking designers can bypass the outrage filter and deliver context-first discovery experiences that don't promote self-hate or tribal conflict, and that give users some agency over how they receive and interact with information.

If developed transparently and designed with a livable future in mind, AI tools can serve as a counterbalance to the algorithmic manipulation that defines the conflict-ridden era we live in, one that reached its tipping point in 2016.

The future of user experience is about letting go of the feed, and rebuilding the interface altogether. 

Have anything to add? Let us know!

