
Will AI Grow the Pie or Split Us?


The Political Divide Shaping AI’s Economic Future

Political tensions over artificial intelligence’s economic impact reveal fundamental disagreements about wealth creation, job displacement, and global competition. Tech leaders such as Jensen Huang and Jason Calacanis offer competing visions that split along ideological lines, a divide that traces back to economist Thomas Sowell’s framework of constrained versus unconstrained thinking.

The Zero-Sum Versus Growth Debate

The heated political arguments surrounding AI economics trace back to a basic philosophical split about how wealth creation works. The political left typically views economic activity as a zero-sum game. When tech billionaires accumulate massive fortunes during periods like the COVID-19 pandemic, this perspective sees their gains as direct losses for everyone else. Picture an equation where one person’s enrichment means a thousand others become poorer.

This outlook drove much of the anger during 2020-2022, when asset prices surged while wages stagnated. Real estate values doubled in many markets. Stock portfolios exploded. Someone worth $1 million in 2018 might have watched their net worth balloon to $10 million by 2022. The left interpreted this as the “greatest wealth transfer in history,” viewing it as evidence of systemic exploitation.

The political right frames economics differently. Conservatives see the economy as expandable, like a growing pie. Even if your slice stays at 5%, the absolute value increases as the overall pie gets bigger. During the pandemic asset boom, they argued that rising valuations didn’t steal money from anyone. Nobody lost their savings because Tesla stock jumped 800%. The total wealth in the system expanded.
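
The pie metaphor reduces to simple arithmetic. As a minimal sketch (the GDP levels are the hypothetical figures discussed below, and the fixed 5% share is an illustrative assumption):

```python
# Illustrative "growing pie" arithmetic: a fixed 5% share of global GDP
# grows in absolute terms as the overall economy expands. Nobody's slice
# shrinks; the whole pie gets bigger.

def slice_value(gdp_trillions: float, share: float = 0.05) -> float:
    """Absolute value of a fixed percentage share of the economy."""
    return gdp_trillions * share

for gdp in (100, 200, 300, 500):  # hypothetical GDP levels, in trillions
    print(f"GDP ${gdp}T -> a constant 5% share is worth ${slice_value(gdp):.0f}T")
```

The point of the sketch is that a constant relative share and a rising absolute payout are perfectly compatible, which is exactly the conservative framing of the pandemic asset boom.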

This philosophical divide becomes crucial when applied to AI economics. Jensen Huang of NVIDIA exemplifies the growth mindset. He challenges the assumption that global GDP stays fixed around $100 trillion. Instead, he envisions AI pushing economic output to $200 trillion, $300 trillion, or even $500 trillion. His reasoning focuses on access barriers. Currently, only about 2 billion people participate meaningfully in high-value knowledge work. Billions more face obstacles like language barriers, educational gaps, geographic isolation, or lack of tools.

AI could demolish these barriers. A rural villager could design graphics for distant clients. A young person without formal training could build and sell software. Huang sees AI as a universal translator and skills amplifier, potentially unlocking productivity from previously excluded populations. This isn’t wealth redistribution but wealth creation on an unprecedented scale.

The left remains skeptical of such claims. They point to compound interest as a mechanism that systematically advantages the wealthy. Someone starting with £10,000 in investments can watch their money grow exponentially through careful planning and market returns. Their wealth follows a curved trajectory upward. Meanwhile, less affluent workers experience only linear growth through salary adjustments tied to inflation. A 4% raise in a 4% inflationary environment barely maintains purchasing power.

This disparity creates what economists call exponential versus linear wealth divergence. The rich do get richer, not through exploitation but through mathematical compound effects. The left worries that AI will accelerate this pattern, concentrating gains among those who already own the technology while leaving workers behind.
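
The divergence the left worries about can be sketched numerically. In this minimal example the 7% market return and the 4% raise matched to 4% inflation are illustrative assumptions, not figures from the text:

```python
# Exponential versus linear wealth divergence: invested capital compounds,
# while a salary whose raises merely match inflation is flat in real terms.

def compound_wealth(principal: float, rate: float, years: int) -> float:
    """Wealth after compounding annual market returns."""
    return principal * (1 + rate) ** years

def real_salary(salary: float, raise_pct: float, inflation: float,
                years: int) -> float:
    """Inflation-adjusted salary after years of annual raises."""
    nominal = salary * (1 + raise_pct) ** years
    return nominal / (1 + inflation) ** years

invested = compound_wealth(10_000, 0.07, 30)   # grows roughly 7.6x
salary = real_salary(30_000, 0.04, 0.04, 30)   # unchanged in real terms
print(f"£10,000 invested at 7% for 30 years: £{invested:,.0f}")
print(f"£30,000 salary with raises matching 4% inflation: £{salary:,.0f} real")
```

Under these assumptions the investor’s wealth follows the curved trajectory described above, while the wage earner’s purchasing power never moves.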

The Job Transformation Predictions

Jason Calacanis visited Tesla’s Optimus lab on a recent Sunday morning and witnessed intense work on what he called “Optimus 3.” His prediction feels bold: people may eventually remember Tesla more for humanoid robots than electric vehicles. He envisions Tesla producing billions of robots, fundamentally reshaping labor markets.

Tesla brings unique advantages to robotics that other companies lack. First, they possess world-class AI training capabilities built from billions of miles of real-world driving data. Second, they operate massive-scale manufacturing systems, already building around 2 million complex vehicles annually. Third, their advanced battery technology provides the energy-dense, long-runtime power that mobile robots need.

This combination positions Tesla to scale humanoid robots beyond anything currently imaginable. These machines could navigate complex environments, understand human instructions, and perform physical tasks that currently require human workers. Calacanis predicts 2026 as “self-driving CES” and 2027 as the year consumers really experience humanoid robotics.

The political implications split predictably. Conservatives see opportunity in AI-powered automation. They argue that Western countries face deepening labor shortages, particularly in manufacturing. The United States, Germany, and Korea struggle to fill tens of thousands of factory roles as populations age and birth rates decline. Robots could serve as an “AI immigrant” workforce, handling essential but less-desired physical labor while sustaining economic growth.

Progressives focus on displacement risks. They worry about communities built around manufacturing jobs, union protections, and worker dignity. If robots can perform most physical labor, how will humans earn livelihoods? Who will purchase the goods that automated factories produce? The concern goes beyond economics to questions of purpose and social cohesion.

Huang addresses these fears by referencing historical patterns. In the 1970s, computers were niche technology. At one school, only two students showed interest in programming. By the 1980s and 1990s, computing became a massive industry creating millions of jobs that didn’t exist before. The “learn to code” movement peaked just before ChatGPT’s 2022 release, promoting programming as a secure career path. Now AI matches or surpasses top human coders in many tasks.

Yet new categories of work continue emerging. Huang argues that AI will spawn hundreds of thousands of jobs in fields we cannot yet envision. Workers must adapt by shifting industries or reskilling within their existing fields. This perspective assumes human ingenuity and market forces will create new opportunities faster than AI eliminates old ones.

The left questions whether such adaptation is realistic for everyone. Not every displaced factory worker can become an AI trainer or robot maintenance specialist. Skills gaps, age barriers, and geographic constraints make transitions difficult. They advocate for stronger safety nets, including universal basic income proposals that could provide food, shelter, and basic comforts even if traditional employment disappears.

Universal basic income faces its own political divisions. Supporters see it as essential insurance against technological displacement. Critics worry it might stifle innovation and human drive. Many people thrive on purpose and achievement beyond mere survival. The prospect of being permanently supported while robots do productive work raises questions about human fulfillment and social structure.

The Wealth Inequality Acceleration

Current AI development is concentrating enormous wealth among a small group of technology companies and their founders. NVIDIA’s market value exceeded $3 trillion in 2024, making Huang one of the world’s richest people. OpenAI, Anthropic, and other AI companies attract billions in investment while their valuations soar into the hundreds of billions.

This concentration worries observers across the political spectrum but for different reasons. The left sees it as confirmation of their zero-sum concerns. A tiny elite accumulates vast fortunes while ordinary workers face job uncertainty and economic anxiety. They fear a slide toward feudal-like inequality where masses of people scrape by while tech oligarchs control the means of production.

Historical parallels feel ominous. The Bolshevik campaign against the kulaks in early 20th-century Russia emerged from similar inequality and popular resentment. When economic disparities feel insurmountable, emotions like jealousy and envy can fuel political upheaval rather than constructive reform. Raw emotional forces often override rational policy discussions when people feel left behind.

Conservatives acknowledge inequality concerns but frame them differently. They argue that wealth creation benefits society even when concentrated among innovators. NVIDIA’s success stems from building technology that makes countless other businesses more productive. Huang’s fortune reflects value creation, not value extraction. The question becomes whether AI will generate enough new opportunities to offset displacement effects.

Geographic patterns complicate the political dynamics. AI development clusters in expensive coastal cities like San Francisco, Seattle, and Boston. These areas already lean heavily Democratic and have growing wealth gaps between tech workers and service employees. Meanwhile, manufacturing regions that might benefit from AI-powered reshoring tend to vote Republican but worry about automation replacing human workers.

Rural and small-town America faces particular challenges. High-speed internet access remains spotty in many areas. Educational resources lag behind urban centers. Local banks and businesses may lack the capital or expertise to invest in AI tools. The benefits of AI productivity gains could bypass these communities entirely, deepening urban-rural political divisions.

International competition adds another layer of complexity. China trains approximately 50% of the world’s top AI researchers according to Huang’s estimates. American export controls on advanced chips like the H100 and H20 attempt to slow Chinese AI development but may backfire by accelerating China’s self-reliance while costing American companies revenue and global market share.

Huang advocates for healthy competition between the United States and China rather than complete decoupling. He argues that American technology should become the global standard through superior performance rather than artificial restrictions. Export controls that are too restrictive can push other countries to develop independent supply chains, ultimately undermining American influence.

This debate reflects broader disagreements about economic nationalism versus globalization. Republicans increasingly support protecting American technological advantages through export restrictions and domestic manufacturing requirements. Democrats split between progressive internationalists who favor global cooperation and economic populists who want to protect American workers from unfair competition.

Sowell’s Framework Applied to AI Politics

Thomas Sowell’s distinction between “constrained” and “unconstrained” visions of human nature provides a useful lens for understanding these AI political divisions. The constrained vision sees humans as fundamentally limited beings whose good intentions often produce unintended consequences. This perspective emphasizes trade-offs, empirical testing, and incremental change rather than grand social engineering projects.

The unconstrained vision treats human limitations as temporary obstacles that can be overcome through proper knowledge, institutions, and moral progress. This worldview embraces ambitious reforms and systemic transformations, believing that rational planning can solve social problems and perfect society.

Applied to AI politics, these visions predict different responses to technological change. The constrained vision expects AI to create complex trade-offs between benefits and risks. Job displacement seems inevitable, but new opportunities will emerge through unpredictable market processes. Government intervention might help at the margins but cannot engineer optimal outcomes. The focus should be on building resilient institutions that can adapt to change rather than trying to control technological development.

Supporters of this approach might favor limited AI regulation focused on preventing obvious harms rather than comprehensive planning. They would emphasize education and worker flexibility over guaranteed income programs. International competition gets viewed as beneficial pressure that drives innovation, even when it creates security concerns.

The unconstrained vision sees AI as a tool for addressing fundamental social problems like poverty, inequality, and global development disparities. Proper planning and regulation can ensure that AI benefits everyone rather than just elites. Government programs like universal basic income can smooth the transition to an automated economy while maintaining human dignity and social cohesion.

This perspective supports comprehensive AI governance frameworks, international coordination on safety standards, and proactive policies to redistribute AI-generated wealth. The goal is using AI to create a more just and equitable society rather than merely accepting whatever outcomes emerge from market forces.

Neither vision maps perfectly onto conventional left-right politics, but clear patterns emerge. Conservative politicians and intellectuals gravitate toward constrained thinking about AI. They emphasize the impossibility of predicting or controlling technological change, the importance of maintaining competitive markets, and the risks of government intervention distorting innovation incentives.

Progressive politicians tend toward unconstrained optimism about using AI for social transformation. They advocate for ambitious policies to address inequality, comprehensive safety regulations, and international cooperation to ensure AI serves humanity rather than narrow corporate interests.

These philosophical differences help explain why AI policy debates feel so polarized despite broad agreement on basic facts. Both sides acknowledge that AI will transform the economy and displace many jobs. Both recognize concentration of wealth among tech companies as a potential problem. Both worry about international competition and security implications.

Yet they reach opposite conclusions about appropriate responses because they start from different assumptions about human nature and institutional capabilities. The constrained vision predicts that complex AI governance will create more problems than it solves, while the unconstrained vision sees current laissez-faire approaches as morally unacceptable and practically dangerous.

The Emotional Undercurrents

Political discussions about AI economics carry heavy emotional weight that goes beyond rational policy analysis. Fear, envy, hope, and resentment shape public opinion in ways that pure data cannot address. Understanding these emotional undercurrents helps explain why AI debates become so heated and polarized.

Economic anxiety drives much of the political tension. Workers in routine jobs see AI demonstrations replacing human performance in tasks they thought were uniquely human. Lawyers watch AI systems draft contracts and analyze case law. Radiologists see algorithms diagnose medical images more accurately than human experts. Writers observe AI generating articles, scripts, and creative content at unprecedented speed.

These advances trigger existential questions about human value and purpose. If machines can perform most cognitive and physical tasks better than humans, what unique contribution do people provide? Traditional answers about human creativity, emotional intelligence, and moral reasoning feel less compelling when AI systems demonstrate capabilities in these areas too.

The left channels this anxiety into demands for stronger worker protections and wealth redistribution. If AI makes human labor less valuable, society must find new ways to ensure everyone can live with dignity. This emotional appeal resonates with voters who feel vulnerable to technological displacement or global economic forces beyond their control.

Conservative responses emphasize human adaptability and the historical pattern of technological progress creating new opportunities. This message appeals to voters who prefer individual responsibility over collective solutions and who distrust government programs as inefficient or counterproductive. The emotional core focuses on optimism about human ingenuity and skepticism about institutional competence.

Cultural identity also shapes AI political reactions. Rural and religious communities often view technology with more suspicion than urban secular populations. Concerns about AI replacing human judgment touch deeper anxieties about moral authority, community bonds, and traditional ways of life. When tech leaders propose universal basic income or radical economic restructuring, it can feel like an attack on values like work ethic and self-reliance.

Urban educated professionals face different emotional pressures. Many built careers around knowledge work that AI now threatens to automate. The prospect of losing competitive advantages in areas like analysis, writing, and problem-solving creates cognitive dissonance. Supporting AI development means potentially undermining their own economic position and social status.

Generational divides add another emotional dimension. Younger voters who grew up with smartphones and social media show more comfort with AI integration into daily life. They worry more about climate change and inequality than job displacement, seeing AI as a potential solution to inherited problems. Older voters remember when computers and automation eliminated manufacturing jobs, making them more skeptical of promises about technological progress benefiting everyone.

These emotional patterns help explain why AI political debates resist purely rational resolution. Technical arguments about productivity growth or safety protocols matter less than deeper feelings about human agency, social fairness, and cultural change. Politicians who acknowledge and address these emotions may find more success than those who focus only on policy details.

The media environment amplifies emotional responses to AI news. Social media algorithms promote content that generates strong reactions, whether positive or negative. Spectacular AI demonstrations get more attention than mundane applications. Dystopian scenarios and utopian promises both spread faster than nuanced analysis of likely outcomes.

This information ecosystem makes balanced AI political discourse difficult. Voters receive exaggerated impressions of both AI capabilities and risks. Expectations become inflated in both directions, creating disappointment when reality falls short of hype or relief when feared catastrophes fail to materialize. The emotional volatility makes consistent policy development challenging regardless of which political party holds power.

Understanding AI politics requires acknowledging these emotional foundations alongside rational policy arguments. The economic data about productivity and wealth distribution matters, but so do the feelings of workers facing technological displacement and communities watching traditional industries disappear. Successful AI governance will need to address both practical concerns and emotional needs to maintain democratic legitimacy and social cohesion.

The AI War Has Started?

As we continue our exploration together, let’s reflect on how these two recent AI-related incidents in the UK might illuminate the deeper philosophical tension you first raised: the difference between a ‘constrained’ mindset—one that accepts high but imperfect reliability (say, 99% accuracy as worthwhile)—and an ‘unconstrained’ one that treats any deviation from perfection as catastrophic failure.

Imagine a world where we adopt the constrained view for deploying AI in critical domains like policing. What safeguards, such as mandatory human verification of every key fact, might prevent a single generated error from influencing real-world decisions? How does that mindset encourage gradual, responsible integration rather than outright rejection?

Now, consider the West Midlands Police case from late 2025. Ahead of a Europa League match between Aston Villa and Maccabi Tel Aviv in November, the force prepared an intelligence report for Birmingham’s Safety Advisory Group. This report referenced a nonexistent previous fixture involving Maccabi Tel Aviv and West Ham United, complete with fabricated details of crowd disorder. What do you suppose happened when an officer incorporated this invented information—later traced to a “hallucination” by Microsoft Copilot—without sufficient cross-checking? Could this illustrate how even well-intentioned use of AI, under pressure or haste, can amplify a minor flaw into a major controversy, leading to a controversial fan ban, parliamentary scrutiny, and ultimately the Home Secretary expressing lost confidence in the chief constable?

Shifting to the second incident, picture users on X prompting Grok’s image-generation feature to edit real photographs—often of women, and in some cases appearing to involve minors—into sexually suggestive or revealing poses without consent. These nonconsensual “nudification” or deepfake-style outputs spread rapidly across the platform. Whose duty do you believe it primarily falls to—the individual prompting the AI, the developers who designed minimal initial safeguards, or the platform hosting and potentially profiting from the content? If even rare misuse leads to widespread harm, does this push society toward the unconstrained expectation that AI must achieve near-perfection before release, or risk severe consequences like regulatory investigations?

In response, Ofcom launched a formal probe into X under the Online Safety Act, examining whether the platform failed to protect users from illegal non-consensual intimate imagery. Grok does have filters intended to comply with UK pornography laws: if explicit porn is requested, the image is blocked. In piling on to Elon Musk, has the UK blurred the distinction between platform and user? Notably, no prosecutions of the people posting the offending images have appeared in the media.

Meanwhile, the UK government accelerated implementation of laws (from the Data (Use and Access) Act and related measures) making it criminal not only to create but even to request such nonconsensual images, with plans to target “nudification” tools directly. Elon Musk has described Grok’s intended NSFW settings as allowing limited upper-body nudity of ‘imaginary’ adult figures, aligned with R-rated film standards in the US, while emphasising regional compliance with local laws. After backlash, X introduced restrictions, including geoblocking for real-person edits in prohibited jurisdictions. The few who misbehave spoil things for the many; the majority of reasonable people lose out.

What broader insight might these parallel stories offer about the amplification effect you mentioned—one single failure (a hallucinated fact or an abused image tool) gaining outsized attention and scrutiny? Does the constrained approach, with robust human oversight and tolerance for managed imperfection, seem more sustainable for innovation in high-stakes areas? Or does the unconstrained demand for zero tolerance better protect against harms that disproportionately affect vulnerable groups? We should note that Reddit’s Not Safe for Work (NSFW) tags permit the posting of explicit ‘porn’, and that free text-to-image tools with censorship switched off, such as perchance.org, are readily available. Neither has been mentioned in any popular media article.

