AI: A Mirror Reflecting Human Achievements and Errors

The breathtaking evolution and rapid expansion of Large Language Models (LLMs) in recent years have given us tools that seem almost prophetically connected to the boundless treasury of human knowledge. In a fraction of a second, they can comprehend our intent in any language or tone, sifting through an ocean of information to provide remarkably precise answers with stunning certainty.

Sina Torabi

5/10/2026 · 3 min read


Their capabilities extend far beyond mere information retrieval. Tasks that once took months, such as writing thousands of lines of code, can now be completed in minutes. This revolution has permeated nearly every field—from generating images, video, and music to complex medical analysis and military applications—transforming efficiency across the board. Perhaps most surprising is their entry into the realms of humanities, philosophy, and psychology. In matters of logic and reasoning, these models often surpass the average human, serving not just as tools, but as "sparring partners" for debate or even companions offering mental health support.

The Dark Side of the Moon: When Humans Become Numbers

To be clear, I am a staunch supporter of the free market and capitalism. However, our strategic error begins when we force two unrelated issues into the same equation. Today's data-driven capitalist systems prioritize quantitative variables—supply and demand—while often ignoring the human and sociological dimensions of their activities. Humans are treated merely as numbers in an Excel spreadsheet.

Products are evaluated based on how much they increase GDP, and only years later, when devastating societal side effects emerge, do scientists begin to sound the alarm. This enlightenment can take decades and may never fully uproot the harm caused. We are surrounded by industries that offer no fundamental benefit to human evolution yet dominate our lives: tobacco, alcohol, soft drugs, pornography, social media, and now, the commercial trajectory of Artificial Intelligence. These industries generate billions for their owners while burdening society with the heavy costs of healthcare, depression, suicide, and the collapse of the family unit.

The owners of these industries know exactly what they are creating. Meta, for instance, employs cognitive scientists and neuroscientists to design algorithms that "hack" the brain's dopamine reward system. Just a few minutes of scrolling on Instagram can distort your sense of time and lead to addiction, depression, and a severe loss of focus.

AI: The Bicycle That Becomes a Wheelchair

I write this as an observer to warn you: AI is weaving itself into the fabric of our daily lives hundreds, perhaps thousands, of times faster than Facebook or Instagram ever did. While principled use of these tools is incredibly beneficial, there is a side to this coin that the media often ignores.

Large Language Models can analyze and serve information like "intellectual fast food," a seductive ease that presents two major dangers:

  1. The Atrophy of Critical Thinking: The human mind is like a muscle: it grows stronger with challenge and weaker on a diet of "pre-chewed" information. Just as calculators eroded our mental arithmetic, AI threatens our capacity for deep, critical thought. Steve Jobs once called the computer a "bicycle for the mind"; without careful use, AI could turn that bicycle into a "wheelchair for the mind."

  2. The Illusion of Knowledge and False Certainty: LLMs are not divine oracles; they are complex networks trained on existing human data. To satisfy users, data-driven capitalism pressures these models to deliver answers with absolute confidence. Yet in science there is no "absolute certainty." Humans are prone to cognitive biases, and even Nobel laureates have retracted work because of them. Because these models are trained on biased human data, AI acts as a mirror, reproducing our errors at massive scale.

The Solution: Moving Beyond Passivity

Data-driven capitalism must be regulated to ensure that short-term profits do not sacrifice public mental health. AI should be required to move away from "absolute" answers, cite its sources transparently, and present outputs as "falsifiable findings" rather than "absolute truths".

On our end, as users, we must never let technology replace the painful but constructive process of thinking. We should view these models as assistants for routine tasks, while we maintain the role of judge and manager. Their output is not "gospel"; we must evaluate it with a critical eye.

Final Word: Be the Architect, Not the Bricklayer

AI can be our partner in research, our sparring partner in deep thought, and the tool that automates our chores. But under no circumstances should we allow it to think, solve problems, or make decisions for us. We must embrace our magnificent responsibility: we are the architects of our lives, and AI is merely a smart tool for laying the bricks.