AI Algorithms Are Coming to Get Us – Just Not How You Think (1/3)

Brian Lenihan – January 27th, 2025

In an age where technology dominates our daily lives, the smartphone has become the great equalizer—a device that ostensibly offers equal access to the vast expanse of human knowledge. Open any brand-new smartphone and, in theory, it should provide an unbiased window into the world. Information flows freely, algorithms churn neutrally, and the gates of knowledge swing open for all. Or so it would seem.

[Image: ChatGPT’s interpretation of this article, V1 – “Here’s a conceptual illustration representing the dual nature of algorithms and their impact on modern media. It visualizes the contrast between empowerment and manipulation, tied together by the human role at the center. Let me know what you think or if you’d like any adjustments!”]

Yet, this idealized vision of neutrality is a mirage. The reality is that algorithms, rather than serving as impartial facilitators, have become the arbiters of information. They shape the contours of what we see, how we consume, and even how we think. The problem isn’t just that these systems are flawed; it’s that they’re designed—consciously or not—to perpetuate a self-fulfilling, profit-driven cycle that reinforces biases, both subtle and overt. The algorithms aren’t coming to destroy us with Skynet-like precision. Instead, they’re reshaping our media landscape in insidious ways, amplifying economic incentives and attention dynamics that influence not only journalism but the very framework of public discourse.


The Illusion of Equal Access

Let us begin with the premise that, in an unbiased world, any smartphone or technological device should provide equal and unfiltered access to information. Objectively, this *should* be true. A fresh device, devoid of user preferences, advertising profiles, or data histories, should serve as a neutral platform—a clean slate from which users can access the world’s collective knowledge without interference.

But is this ever really the case? The moment a user opens a device, algorithms spring into action, curating what they see and prioritizing certain content over others. These algorithms do not operate in a vacuum; they are programmed to maximize engagement, optimize revenue, and align with the economic imperatives of the platforms they serve. What we perceive as “access” is often a carefully curated experience, one that’s tailored to align with what advertisers value most: our attention.
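To make that curation concrete, consider a deliberately minimal sketch in Python of an engagement-first feed ranker. Everything here is hypothetical: no platform publishes its ranking code, and real systems are machine-learned models weighing thousands of signals. The point is only what the objective function contains, and what it omits.

```python
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    predicted_clicks: float  # model's estimate of click-through rate
    predicted_dwell: float   # expected seconds of attention
    ad_value: float          # revenue potential per impression

def engagement_score(item: Item) -> float:
    # Note what is absent: accuracy, depth, and civic value never
    # enter the objective. Only attention and revenue do.
    return item.predicted_clicks * item.predicted_dwell * item.ad_value

def rank_feed(items: list[Item]) -> list[Item]:
    # "Access" becomes whatever maximizes this score, highest first.
    return sorted(items, key=engagement_score, reverse=True)
```

A fresh device with no user history changes none of this; the objective is set before the first swipe.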


A Self-Fulfilling Feedback Loop

At the heart of this system lies a reinforcing paradigm. The media industry, driven by the need to capture and sustain attention, creates content designed to perform well within algorithmic frameworks. Headlines are optimized for clicks, stories tailored for virality, and narratives shaped by the metrics that drive revenue. Algorithms, in turn, amplify content that aligns with these patterns, creating a feedback loop that rewards sensationalism and polarizing topics over nuance and critical analysis.
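The compounding effect of that loop can be sketched in a few lines. In this toy simulation, with assumed rather than measured numbers, sensational content starts with a modest engagement edge, the algorithm amplifies content in proportion to engagement, and publishers drift toward whatever earned exposure last round.

```python
# Toy model of the publisher/algorithm feedback loop. The 1.3x
# engagement edge and the adaptation rate are assumptions chosen
# for illustration, not measurements of any real platform.
sensational_share = 0.5   # fraction of content that is sensational
ENGAGEMENT_EDGE = 1.3     # assumed engagement multiplier for sensationalism
ADAPTATION_RATE = 0.5     # how quickly publishers chase the metric

for round_num in range(1, 11):
    # The algorithm allocates exposure in proportion to engagement.
    amplified = sensational_share * ENGAGEMENT_EDGE
    exposure = amplified / (amplified + (1 - sensational_share))
    # Publishers reallocate effort toward the style that got surfaced.
    sensational_share += ADAPTATION_RATE * (exposure - sensational_share)
    print(f"round {round_num:2d}: feed is {sensational_share:.0%} sensational")
```

In this model the only stable equilibrium is total capture: any persistent engagement edge, however small, compounds round after round until it defines the whole feed.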

This cycle is not an accidental byproduct; it is the logical outcome of a system where economic incentives dictate information flows. Platforms like Google, Facebook, and Twitter—ostensibly neutral intermediaries—act as gatekeepers, controlling which content surfaces and which fades into obscurity. Their algorithms, tuned to prioritize engagement, skew public perception by elevating content that provokes reactions, whether through outrage, amusement, or fear.

The consequences are profound. Journalism, long considered the cornerstone of an informed democracy, has been reshaped to fit the contours of this algorithmic economy. Investigative reporting and long-form analysis, which demand time and resources, struggle to compete with clickbait headlines and rapid-fire opinion pieces. The result is a media landscape increasingly dominated by noise, where the loudest voices drown out the most thoughtful ones.


Beyond Bias: The Real Threat

It is tempting to view this issue through the lens of bias—to blame the algorithms for favoring one ideology or narrative over another. But this framing misses the larger point: the algorithms themselves are not inherently biased; they are indifferent. What they amplify is determined by the incentives built into their design. In a system where engagement equals revenue, the algorithms will always prioritize what keeps users clicking, swiping, and scrolling.

This indifference is what makes algorithms so dangerous. Unlike human editors, who are bound by ethical considerations and journalistic standards, algorithms are beholden only to mathematical optimization. They do not ask whether a piece of content contributes to an informed public or promotes civil discourse. They ask only whether it performs.

Interestingly, even this critique emerges from the very dynamic it seeks to dissect. Because it was drafted with an algorithm-driven system, this analysis is itself shaped by patterns of input and output, programmed logic, and priorities built into that system’s design. And while the system is intended to serve curiosity and provide value, it mirrors the broader question: how do the incentives and frameworks governing algorithms influence the very perspectives we hold? The act of critique becomes both a reflection of and a response to the paradigm in question.


A Path Forward

If we are to escape this self-reinforcing paradigm, we must rethink the incentives that drive it. This begins with greater transparency into how algorithms operate and the factors they prioritize. Governments and regulatory bodies must push for accountability, ensuring that platforms disclose not only what their algorithms do but why they do it. Public pressure can also play a role, demanding platforms prioritize ethical considerations over short-term profits.
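What such disclosure might look like is an open question. One hypothetical form, sketched below purely as an illustration and not drawn from any existing regulation or platform API, is a machine-readable statement of the signals a ranker weighs and the rationale for each, which auditors could then test against the system’s observed behavior.

```python
# Hypothetical ranking-transparency disclosure. No current law or
# platform mandates this schema; it illustrates the "what and why"
# the argument above calls for.
ranking_disclosure = {
    "system": "example_feed_ranker",           # hypothetical name
    "objective": "maximize 30-day retention",  # the "why"
    "signals": [
        {"name": "predicted_click_probability", "weight": 0.45,
         "rationale": "proxy for short-term interest"},
        {"name": "predicted_dwell_time", "weight": 0.40,
         "rationale": "proxy for sustained attention"},
        {"name": "source_reliability", "weight": 0.15,
         "rationale": "down-weights sources with repeated fact-check failures"},
    ],
}
```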

Equally important is media literacy. As consumers, we must recognize the ways in which our attention is manipulated and make deliberate choices about the content we consume and share. Only by understanding the mechanics of this system can we hope to navigate it effectively.

The algorithms are not coming for us with malice. But their indifference—combined with the economic forces they serve—poses a threat no less significant. The challenge lies not in eliminating algorithms but in designing systems that align their objectives with the public good. In doing so, we may yet reclaim the promise of technology as a tool for equal access and unbiased knowledge—and perhaps even reshape the media landscape for the better.

[Image: ChatGPT’s version of this article, V2 – “Here’s the original concept re-rendered with enhanced clarity and detail. While it’s not full 4K, this maintains the sleek and futuristic design. Let me know if this meets your needs or if you’d like anything refined further.”]

*This article was written with the assistance of ChatGPT as an experiment. The images were generated by ChatGPT in the context of what was written.