Free will — the freedom to decide for ourselves the actions we take based on our understanding of right and wrong and the possible consequences — is a value sacred to humanity. But that freedom is quietly being shaped by systems most of us never see.

Algorithmic accountability applies to many domains: an algorithm deciding who gets a loan, who gets flagged by law enforcement, who gets hired. These matter. But the most intimate algorithmic system in most people's lives is simpler and more pervasive than any of those. It's the feed — the endless, personalized stream of content that greets you every time you open your phone or browser.

That feed is not neutral. It is not a window onto the world. It is an engineered environment, optimized at enormous scale, specifically to capture and hold your attention. Doomscrolling isn't a willpower problem. It's an engineering problem. And nobody is currently required to answer for what it's doing to us.

That's what we're here to change.

Algorithmic Accountability

How It Works

Most people think of a social media feed the way they think of a newspaper — a selection of things happening in the world, maybe tailored to their interests. That's not what it is. A feed is a prediction engine. Every time you open the app, it asks one question: what is this specific person most likely to keep watching? Then it answers that question, serves you the content, watches what you do, and immediately updates its model of you. The world it shows you isn't the world. It's a mirror that keeps adjusting until you can't look away.
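
To make that loop concrete, here is a deliberately tiny caricature of it in Python: a linear model, a random catalog, one update per impression. Every name and number is invented, and real systems are vastly larger, but the shape is the same: predict, serve, observe, update.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
true_taste = rng.normal(size=DIM)        # what actually holds this user's attention
user_model = np.zeros(DIM)               # the platform's evolving model of the user
catalog = rng.normal(size=(500, DIM))    # embeddings for 500 pieces of content

for impression in range(2000):
    # 1. Draw a candidate pool, then ask: what will *this* person keep watching?
    pool = catalog[rng.choice(len(catalog), size=20, replace=False)]
    item = pool[int(np.argmax(pool @ user_model))]
    # 2. Serve it and observe behavior (watch time simulated from true taste).
    watch_time = item @ true_taste + rng.normal(scale=0.5)
    # 3. Immediately fold what the user did back into the model of the user.
    error = watch_time - item @ user_model
    user_model += 0.01 * error * item    # one gradient step per impression

# The mirror adjusts: the learned model converges toward the user's actual taste.
print(np.corrcoef(user_model, true_taste)[0, 1])
```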

What makes this different from anything that came before it is the combination of three things working together:

  • Personal targeting. Unlike television, radio, or print — which broadcast the same content to everyone — these systems build a unique profile for each user and serve content tailored specifically to keep you watching. Not the average person. You.

  • Real-time behavioral feedback. Every pause, replay, skip, and scroll is recorded and fed back into the system instantly. The algorithm learns what holds your attention as it holds it, updating its model of you in real time, at no additional cost. (The sketch below shows how such signals become training labels.)

  • Billion-user scale. Previous persuasion systems — advertising, propaganda, even early internet ads — had real costs per person reached, which limited their reach and their refinement. These systems scale to billions of users essentially for free, which means they can be tested, tuned, and optimized to a degree no prior technology has approached.

The result is the most efficient attention-capture machine ever built. And it runs, continuously, on almost every person with a smartphone, tablet, or PC.
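
None of those three ingredients requires a single click from you. A hypothetical event log of pauses, replays, and scroll-pasts is enough to produce a training label; the field names and thresholds in this sketch are invented for illustration.

```python
def implicit_engagement_label(event: dict) -> float:
    """Map raw viewing behavior to a 0..1 'this held their attention' label."""
    if event["ms_watched"] < 300:        # a fast scroll-past: strong negative signal
        return 0.0
    watched_fraction = min(event["ms_watched"] / event["ms_duration"], 1.0)
    replay_bonus = 0.2 if event["replayed"] else 0.0
    return min(watched_fraction + replay_bonus, 1.0)

# Every impression yields a label, so the model can update continuously:
events = [
    {"ms_watched": 150,  "ms_duration": 8000, "replayed": False},  # scrolled past
    {"ms_watched": 8000, "ms_duration": 8000, "replayed": True},   # held attention
]
print([implicit_engagement_label(e) for e in events])  # [0.0, 1.0]
```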

The emotions that keep people watching longest are not calm or neutral ones. They are outrage, fear, anxiety, and tribal solidarity — the feeling that your people are under threat and you need to stay informed. These are also among the most exhausting emotions a person can feel. The algorithm has no interest in your wellbeing. It has an interest in your attention. When those two things conflict — and they do, constantly — attention wins. The hours dissolve. The mood darkens. You didn't choose to feel this way. The system learned that this is what keeps you scrolling.

The same mechanism that drives doomscrolling also shapes what we believe. Recommendation algorithms are designed to serve content similar to what you just engaged with — each post leading to something slightly more intense than the last. Researchers have documented this pattern consistently across YouTube, Facebook, and TikTok. Users who engage with mainstream political content are systematically nudged toward more extreme versions of it — not through any deliberate editorial choice, but because more extreme content tends to generate more engagement. The amplification isn't ideological. It's mechanical.
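
That mechanical quality is easy to demonstrate. In the toy simulation below, the ranker knows nothing about ideology; it only tracks which items earn engagement. Because the simulated engagement probability rises slightly with an item's intensity (an assumption standing in for the documented pattern), pure engagement ranking drifts the feed toward the intense end on its own. All numbers are invented.

```python
import random

random.seed(0)
# 100 items with intensity from 0.0 (mild) to 1.0 (extreme); each starts
# with one smoothing show/click so the ranker has an estimate for all.
items = [{"intensity": i / 99, "shows": 1, "clicks": 1} for i in range(100)]

def engagement_prob(item):
    # Assumed, not measured: more intense content engages a bit more often.
    return 0.2 + 0.6 * item["intensity"]

for _ in range(20000):
    if random.random() < 0.1:            # occasional exploration
        item = random.choice(items)
    else:                                # otherwise rank purely by engagement rate
        item = max(items, key=lambda it: it["clicks"] / it["shows"])
    item["shows"] += 1
    item["clicks"] += random.random() < engagement_prob(item)

top = sorted(items, key=lambda it: it["shows"], reverse=True)[:10]
print("mean intensity of the 10 most-served items:",
      round(sum(it["intensity"] for it in top) / 10, 2))
# Skews heavily toward the intense end, though no one ever chose that.
```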

A user doesn't choose to become more radical. They follow the recommendations. Communities don't choose to become echo chambers — the algorithm learns what the group engages with and narrows the feed accordingly. What begins as curiosity ends, for some, in worldviews they would not have arrived at on their own. The platform bears no responsibility for the journey. That is the problem.

The Scale of It

This is not a niche concern. It is one of the defining features of modern life, and the numbers reflect that.

Over 5 billion people worldwide use social media — roughly 62% of the global population.[¹] The average person spends around 2 hours and 21 minutes per day inside these systems.[²] But averages obscure what's actually happening at the extremes. Surveys consistently find a significant portion of users — across age groups and geographies — reporting 6, 8, or more than 10 hours of daily screen time. For those people, the feed is not a feature of their day. It is their day.

The platforms know this, and it is worth understanding why they have no incentive to change it. Meta alone generated $160.6 billion in advertising revenue in 2024 — virtually all of it from feed-based ads.[³] TikTok, YouTube, and others add hundreds of billions more to a global attention economy now valued in the trillions. The model is straightforward: advertisers pay for impressions, the platform sells them your attention, and the algorithm's job is to maximize the inventory — which means maximizing the time you spend inside the feed. Your attention is not the experience. It is the product being sold.
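
The arithmetic behind that incentive is blunt. With invented but plausible inputs (real ad loads and prices vary widely), attention converts to revenue roughly like this:

```python
daily_users  = 3.0e9    # assumed: people opening a feed on an average day
minutes      = 141      # ~2h21m average daily use (see [2])
ads_per_min  = 1.5      # assumed ad load
cpm_dollars  = 5.00     # assumed price per 1,000 impressions
annual = daily_users * minutes * ads_per_min * (cpm_dollars / 1000) * 365
print(f"~${annual / 1e12:.1f} trillion per year")  # every extra minute per user is new inventory
```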

The human cost of that model is increasingly well documented. Research consistently links heavy social media use to elevated rates of anxiety, depression, and loneliness — in adults as well as adolescents.[⁴] Studies show that time spent on feeds tends to displace in-person social interaction, not supplement it: the more hours spent scrolling, the fewer spent with other people. In 2021, internal Facebook documents leaked to the Wall Street Journal revealed that the company's own researchers had found Instagram was making body image issues worse for one in three teen girls — findings that had been reviewed by senior executives, including Mark Zuckerberg, and not acted upon.[⁵]

This is not an accident of the technology. It is a consequence of the incentive structure. The same algorithm that keeps you watching is the one making the platforms rich. Nobody in that equation is optimizing for your wellbeing — and until recently, nobody outside it was either. The European Union's Digital Services Act, which entered into force for the largest platforms in August 2023 and became fully applicable in February 2024, represents the most serious regulatory attempt yet to impose algorithmic accountability on these companies.[⁶] It is a beginning. In most of the world, including the United States, the regulatory landscape remains largely untouched.

The feed doesn't have to work this way. That's the most important thing to understand. Infinite scroll, autoplay, the absence of a reset button — none of these are technical necessities. They are design decisions, made deliberately, by people optimizing for a specific outcome. They can be unmade by people optimizing for a different one.

We call these dark patterns: design choices that extract value from users without their knowledge or consent. The darkest of them isn't infinite scroll. It's this: platforms discovered that outrage and fear produce more engagement than joy or curiosity, and they built systems to exploit that — using involuntary behavioral signals like pause duration, rewatch rate, and stress scrolling to infer and amplify your emotional state, without ever asking if that's what you wanted. You never clicked a button that said "show me more things that frighten me." The algorithm just learned that fear keeps you watching, and optimized accordingly.

The opposite of a dark pattern is a consensual pattern: a design that operates on signals you knowingly provide. You hit like. You hit save. You follow an account. You tell the feed, explicitly, what you want more of. Personalization built on consensual signals can still be powerful, still be useful, still serve you ads you might actually care about. It just loses the ability to exploit your nervous system without permission.
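
One way to operationalize that distinction is an allowlist: explicit, user-initiated events may reach the training pipeline, and everything else is barred at the door. The event names in this sketch are hypothetical.

```python
CONSENSUAL_EVENTS = {"like", "save", "share", "follow"}   # signals you knowingly send

def training_stream(raw_events):
    """Yield only the signals the user knowingly provided."""
    for event in raw_events:
        if event["type"] in CONSENSUAL_EVENTS:
            yield event
        # Passive telemetry (pause_duration, rewatch, scroll_speed, hover_time)
        # is dropped here, not merely down-weighted: it never reaches the recommender.

log = [
    {"type": "like", "item": 42},
    {"type": "pause_duration", "item": 7, "ms": 4800},   # involuntary; discarded
    {"type": "follow", "account": "cooking_daily"},
]
print(list(training_stream(log)))   # only the like and the follow survive
```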

That distinction — dark versus consensual — is the lens through which we evaluate everything. And it leads to demands that are specific, achievable, and hard to argue against:

The reset button. Every platform must give users the ability to wipe their algorithmic history and start fresh. One tap. No buried settings. This is the most basic form of user control imaginable, and it costs platforms nothing except the potency of the profile they've built on you without asking.
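
As a sketch of how little this asks of a platform, here is the whole mechanism against a toy in-memory store. A real system would have to propagate the wipe through caches and replicas, but the semantics fit in one method.

```python
class ProfileStore:
    """Toy stand-in for a platform's personalization storage."""
    def __init__(self):
        self.profiles = {}   # user_id -> learned taste model
        self.events = {}     # user_id -> raw behavioral log

    def reset_algorithmic_history(self, user_id: str) -> None:
        # One tap: wipe both the model and the history it was built from,
        # returning the user to the un-personalized cold-start state.
        self.profiles[user_id] = None
        self.events[user_id] = []

store = ProfileStore()
store.profiles["u1"] = {"outrage_affinity": 0.93}        # invented profile
store.events["u1"] = ["paused_4.8s_on_item_7"]
store.reset_algorithmic_history("u1")
print(store.profiles["u1"], store.events["u1"])          # None []
```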

Consensual signals only. Feed algorithms may optimize only on explicit user actions — likes, saves, shares, follows. Passive behavioral harvesting — how long you hovered, whether you rewatched, how fast you scrolled — must be barred from training recommendation systems. You keep personalization. You lose the exploitation.

Transparent standards. Platforms must be audited against these criteria by independent third parties, with results made public. We are building toward a certification standard — a clear, tiered rating of how much user control each platform genuinely offers, so that users, advertisers, and regulators can see exactly where each platform stands.

We are not against technology. We are not against advertising. We are not against personalization. We are for a world where the feed works for you — where you can hand-pick an algorithm tuned to morning news, or cricket, or calisthenics, or cooking, and switch between them like playlists. Where your attention is something you choose to give, not something that gets taken. That world is technically possible right now. The only thing standing between you and it is the current incentive structure — and that is exactly what we are here to change.

The solution to the problem of algorithms in general will always start with more insight, control, and power for the user. Whether it's feeds, a bank loan, or law enforcement tracking, it is past time for the user to be heard.


References

[¹] DataReportal, Digital 2024: Global Overview Report (January 2024). https://datareportal.com/reports/digital-2024-global-overview-report

[²] Backlinko, Social Media Users & Growth Statistics (citing GWI Q3 2024 data). https://backlinko.com/social-media-users

[³] Meta Platforms, Q4 & Full Year 2024 Results (January 2025), as reported by Media in Canada. https://mediaincanada.com/2025/01/30/metas-revenues-increased-by-22-in-2024/

[⁴] Hunt, M.G. et al., "No More FOMO: Limiting Social Media Decreases Loneliness and Depression," Journal of Social and Clinical Psychology (2018). https://penntoday.upenn.edu/news/social-media-use-increases-depression-and-loneliness

[⁵] Wells, G., Horwitz, J. & Seetharaman, D., "Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show," The Wall Street Journal (September 14, 2021); summary via CNBC. https://www.cnbc.com/2021/09/14/facebook-documents-show-how-toxic-instagram-is-for-teens-wsj.html

[⁶] Digital Services Act: enforcement timeline for very large online platforms (August 2023 / February 2024). https://www.eu-digital-services-act.com/
