AI Algorithms Are Not the Villains We Think They Are (2/3)

Brian Lenihan – January 27th, 2025

The common narrative around algorithms often casts them as shadowy manipulators, subtly controlling what we see, think, and consume. While it’s easy to blame these systems for the perceived decline of objective journalism and the rise of polarizing media, this perspective oversimplifies a much more nuanced reality. The truth is that algorithms, at their core, are tools—and like any tool, their impact depends on how they are used and the intentions of their creators. To critique algorithms without acknowledging their benefits and the broader human context is to misunderstand the complexity of the problem.

Self-Fulfilling Feedback Loop of Algorithms in Media • This image depicts a circular loop of data streams, social media icons, and engagement metrics, reinforcing the cycle of content amplification. A journalist looks frustrated as serious news is ignored, while clickbait thrives, illustrating the tension between engagement and substance in modern media.

 

Equal Access to Information Is a Reality

One of the foundational arguments against algorithms is the claim that they distort our access to information. However, it is equally valid to argue that algorithms have democratized information like never before. In a pre-digital age, access to news and knowledge was gatekept by publishers and broadcasters, often controlled by a small group of influential individuals or corporations. Today, anyone with a smartphone and an internet connection has access to a near-infinite pool of information.

Algorithms play a crucial role in managing this vast landscape. Without them, the overwhelming volume of available content would make it nearly impossible for users to find relevant information efficiently. Algorithms enable personalization, helping users navigate the sea of information by prioritizing content that aligns with their interests, behaviors, and needs. Far from restricting access, algorithms have expanded it, breaking down traditional barriers to entry for creators and audiences alike.
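To make "prioritizing content that aligns with interests and behaviors" concrete, here is a minimal sketch of a feed-ranking function. It is only an illustration: the field names, weights, and scoring formula are invented for this example and do not describe any real platform's system, which would use learned models rather than a hand-written score.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    topics: set[str]    # topics the piece covers
    engagement: float   # historical click/share rate, scaled to 0..1

def rank_feed(items: list[Item], user_interests: set[str], top_k: int = 5) -> list[Item]:
    """Order items by a blend of topical overlap with the user and past engagement.

    A toy stand-in for 'personalization': score each item, sort, keep the top few.
    The 0.7/0.3 weights are illustrative assumptions, not tuned values.
    """
    def score(item: Item) -> float:
        overlap = len(item.topics & user_interests) / max(len(item.topics), 1)
        return 0.7 * overlap + 0.3 * item.engagement

    return sorted(items, key=score, reverse=True)[:top_k]
```

Even in this toy version, the point stands: the ranking has no content of its own. It simply surfaces items whose topics and track record match what the user has already shown interest in.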

The Media’s Incentives Are Not New

Critics often argue that algorithms reinforce a self-fulfilling feedback loop, driving media outlets to prioritize sensationalism and engagement over substance. While this critique holds some merit, it overlooks a critical point: the economic incentives driving the media are not new. Long before the rise of digital platforms, newspapers relied on eye-catching headlines to sell copies, and television networks competed for viewer ratings to attract advertisers. Sensationalism and profit-driven motives have always been part of the media landscape.

What algorithms have done is make these dynamics more transparent. By optimizing for engagement, algorithms have exposed what audiences are truly drawn to—whether it’s sensational stories, feel-good content, or hard-hitting journalism. If sensationalism dominates, it reflects audience preferences as much as algorithmic design. Algorithms are mirrors, not creators, of societal appetites. The responsibility lies not with the technology but with the collective choices of content creators and consumers.

Algorithms Are Not Indifferent; They Are Adaptive

Another criticism is that algorithms are indifferent, focused solely on mathematical optimization without regard for ethical considerations or public good. However, this perspective ignores the dynamic and adaptive nature of algorithms. These systems can—and often do—incorporate ethical guidelines and constraints set by their designers. For example, platforms like YouTube and Facebook have implemented measures to reduce the spread of misinformation and harmful content, showing that algorithms can be tailored to serve societal goals.

Moreover, algorithms are increasingly being designed to incorporate human feedback. Machine learning models, for instance, can be trained to prioritize credible sources or diverse viewpoints. While these systems are not perfect, they are far from the cold, indifferent mechanisms they are often portrayed as. With proper oversight and iterative improvement, algorithms have the potential to enhance, rather than undermine, public discourse.
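One way to picture "tailoring an algorithm to serve societal goals" is to add a source-credibility term to the ranking objective and cap how much any single source can dominate. The sketch below is a hypothetical illustration of that idea, assuming a credibility score already exists; it is not how YouTube, Facebook, or any other platform actually implements this.

```python
def rank_with_guardrails(items, source_credibility, engagement_weight=0.6,
                         credibility_weight=0.4, per_source_cap=2):
    """Blend engagement with a source-credibility score, then limit how many
    items from any single source appear, as a crude diversity constraint.

    `items` is a list of dicts with 'source' and 'engagement' keys;
    `source_credibility` maps source names to scores in [0, 1].
    All names, keys, and weights here are illustrative assumptions.
    """
    def score(item):
        cred = source_credibility.get(item["source"], 0.5)  # unknown sources get a neutral score
        return engagement_weight * item["engagement"] + credibility_weight * cred

    ranked, per_source_counts = [], {}
    for item in sorted(items, key=score, reverse=True):
        count = per_source_counts.get(item["source"], 0)
        if count < per_source_cap:
            ranked.append(item)
            per_source_counts[item["source"]] = count + 1
    return ranked
```

The design choice worth noticing is that the ethical constraint lives in the objective and the selection rule, both of which are set by people. Changing the weights or the cap changes the behavior; the system is only as indifferent as its designers allow it to be.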

The Divide Between Algorithmic Influence and Human Perception • A surreal image of a human brain connected to streams of binary code, absorbing data. On the other side, a faceless AI entity manipulates media feeds, filtering and distorting information before it reaches the human. The background contrasts raw, unbiased data with viral trends, highlighting the curation problem.

Human Behavior Is the True Driver

The argument that algorithms are the primary force shaping today’s media landscape risks overlooking a fundamental truth: human behavior is at the root of these dynamics. Algorithms do not create demand for sensational or polarizing content; they respond to it. The metrics that algorithms optimize for—clicks, shares, likes—are direct reflections of human choices. If algorithms amplify certain types of content, it is because that content resonates with audiences.
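To see how directly these metrics trace back to human choices, consider a toy calculation of the kind of engagement signal a ranking system might optimize. Every input is a logged human action; the function names and weights are invented for this sketch.

```python
def engagement_rate(impressions: int, clicks: int, shares: int, likes: int) -> float:
    """A simple engagement rate: weighted interactions per impression.

    Each term is a record of something a person chose to do; the calculation
    only aggregates those choices. Weighting shares more heavily than clicks
    is an illustrative assumption, not a standard.
    """
    if impressions == 0:
        return 0.0
    interactions = clicks + 2 * shares + likes
    return interactions / impressions
```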

Blaming algorithms for societal issues is akin to blaming a thermometer for a fever. The technology is not the cause; it is a tool that reflects underlying conditions. Addressing the challenges of media bias and sensationalism requires confronting the deeper cultural and psychological factors that drive these phenomena, rather than scapegoating the technology that brings them to light.

The Positive Potential of Algorithms

It is also important to recognize the immense positive potential of algorithms. These systems have revolutionized industries, from healthcare to education, by enabling personalized experiences and scalable solutions. In the realm of media, algorithms have empowered independent creators, amplified marginalized voices, and made niche content more accessible to global audiences. They have also driven innovations in investigative journalism, data visualization, and storytelling, proving that technology can be a force for good.

To unlock the full potential of algorithms, we must focus on improving their design and governance rather than demonizing them. Transparency, accountability, and user education are key to ensuring that algorithms align with societal values. By working collaboratively to refine these systems, we can harness their power to foster a more informed, equitable, and connected world.

 


Conclusion

Algorithms are not the villains they are often made out to be. They are tools—complex, adaptive, and capable of both harm and benefit. The challenges attributed to algorithms are not solely technological; they are deeply human, rooted in the incentives, behaviors, and values of the societies they serve. Rather than fearing algorithms, we should embrace the opportunity to shape them responsibly. In doing so, we can move beyond the simplistic narrative of blame and toward a future where technology and humanity work in harmony.

 

*This was written with the assistance of ChatGPT as an experiment. Images were generated by ChatGPT in the context of what was written.