April 5, 2024
Owners and troops defend a textile factory in England from Luddites breaking down the door to destroy equipment. 1816 engraving with modern watercolor.

In a bellicose op-ed, Andy Meek, BGR's trending news editor and a senior contributor at Forbes, takes aim at Sam Altman, OpenAI, and "late-stage capitalism" (whatever that means) for the proliferation of, and investment in, AI research. Undergirding Mr. Meek's opposition to Altman and AI are the feeble foundations of Luddism, economic sophistry, and epistemological double standards.

Mr. Meek accuses those bullish on AI — fetishists, in his language — of harboring “profoundly disgusting anti-human sentiment.” He stakes this accusation on the claim that AI is “largely (but not completely) about replacing people” (emphasis added). Though Mr. Meek admits that AI development is not entirely about replacing people, he does not specify what else it’s about. 

AI, like all other capital, is about maximizing production. 

Far from inimical to the interests of mankind, production is the means by which our material interests are satisfied. As Adam Smith put it so succinctly in The Wealth of Nations, "consumption is the sole end and purpose of all production." We are only able to consume to the extent that we produce. Those who oppose technologies that increase productivity therefore oppose mankind's wellbeing.

If those responsible for maximizing firm productivity are to be believed, generative AI is one such technology. In 2023, IBM's Institute for Business Value conducted its annual survey of 3,000 CEOs from 24 industries and more than 30 countries. The study, CEO decision-making in the age of AI, found that 48 percent and 45 percent of CEOs identify productivity and tech modernization, respectively, as their top priorities, and that 75 percent "believe the organization with the most advanced generative AI will have competitive advantage." In other words, the vast majority of CEOs across countries and industries believe generative AI will determine which firms are most productive. Unsurprisingly, then, half report having already integrated generative AI into their businesses.

Later, Meek points to Altman’s call for trillions of dollars of investment in the AI industry as yet more evidence of AI advocates’ callous disregard for humanity. He invites us to imagine the “far-reaching and compounding good” that would result from directing this money to public projects such as infrastructure, schools, and health care. 

Meek should have chosen his words more carefully; economic growth is far and away the most far-reaching and compounding good mankind has ever experienced, and it is the very phenomenon responsible for modern infrastructure, schools, and health care.

As any development economist will tell you, and as the empirical record substantiates, the productivity gains accompanying the Industrial Revolution are causally responsible for the eighty-percentage-point decrease in the proportion of humans living in absolute poverty since 1820. If the expected value of investment in AI is as great as Altman anticipates, he should have no trouble attracting private capital to such a profitable opportunity. Altman does not deserve public subsidies, but he, OpenAI, and other AI firms have nothing to apologize for in attracting private investors.

Meek need look no further than the thousands of companies building products in the interest of improving public services. Magic School provides intelligent teaching tools to more than a million educators, Viz AI powers care coordination for hundreds of millions of patients at over 1,500 hospitals, and Automotus builds curb-management technology to improve the accessibility and safety of our urban centers, to name a few. This is precisely the "far-reaching and compounding good" that Meek fantasizes about; he just doesn't approve of the capital allocators behind this impact.

Altruistic virtue-signaling and inarticulate hand-waving about “late-stage capitalism” aside, what’s really animating Mr. Meek’s AI antagonism? Though he doesn’t come right out and say it, we can infer his motivation is protectionism. No wonder he describes Google’s and Microsoft’s AI chatbots as “essentially automated plagiarists at scale, eating and then regurgitating other people’s content.” 

Is the way we humans learn and produce fundamentally different?

While the architecture of modern large language models (LLMs) is not perfectly analogous with the human brain or our understanding of human intelligence, we can draw important parallels that highlight their similarities.

During training, an LLM views hundreds of billions of words, comprising just a snippet of the corpus of human knowledge. Like humans, it retains some information well and forgets the rest. Through this process, the model not only learns names, places, and facts, but also builds generalizable competencies for understanding and producing written language. The result is a machine that knows more information and can use that information more productively than the average human.
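
To make the analogy concrete, here is a deliberately toy sketch in Python of the statistical pattern-learning behind next-token prediction. It is an illustrative simplification of our own devising (a bigram word counter, not a neural network, and not any lab's actual training code), but it captures the basic idea: the model ingests text, tallies patterns, and uses what it retained to produce a continuation.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word tends to follow which.
# A drastic stand-in for the pattern-learning an LLM does at vastly
# greater scale -- purely illustrative, not production training code.

corpus = (
    "consumption is the sole end and purpose of all production . "
    "we are only able to consume to the extent that we produce ."
).split()

# "Training": tally how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen during training."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else "<unknown>"

print(predict_next("to"))       # a continuation seen in training, e.g. "consume"
print(predict_next("liberty"))  # never seen, so "<unknown>"
```

A real LLM replaces the counting table with billions of learned parameters and can generalize to sequences it never saw, but the learn-from-exposure principle is the same.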

But this is a difference of degree, not of type. 

In Mr. Meek’s world, we are all plagiarists. If you read Shakespeare and learn the word “malmsey,” you plagiarized. If you listen to a public company’s earnings call and learn something new about their profitability, you plagiarized. If you so much as glance at an image of a cat playing piano that your aging mother texted you, you plagiarized. By Mr. Meek’s lights, merely existing and perceiving stimuli is unethical. 

Unlike Mr. Meek, software engineers do not make the absurd claim that "AI code is not real code" while demanding its boycott. Instead, programmers have embraced these tools as complements that significantly improve their productivity by eliminating menial work. The value proposition is clear: AI tools empower developers to be more creative, not less. This is almost universally accepted and adopted within the programming world, from university students and startups to Fortune 500s and government agencies. It's also worth noting that many in the profession regard their coding as an art, and themselves as artists.

Analogous tools exist for information and creative workflows, and they will continue to improve as billions of dollars of investment flow into new and existing enterprises. Those who are open-minded and eager to incorporate these tools into their workflows will forge new paths in their respective industries: amplifying productivity, improving the experience of the end consumer, and unlocking latent creativity. Those who reflexively oppose AI and refuse to avail themselves of its productive powers are needlessly shooting themselves in the foot, then blaming Sam Altman for pulling the trigger.

The extent to which AI in its various forms, chatbot or otherwise, will function as a substitute for human labor remains to be seen. Whatever the future may hold, concern for one's own job security is reasonable; what's unreasonable is letting parochial interests impede technological and economic progress to mankind's collective detriment.

Mr. Meek ought to reconsider who harbors anti-human sentiment. 

Jack Nicastro

Jack Nicastro is a senior at Dartmouth College majoring in Economics and Philosophy.
He is an Executive Producer with the Foundation for Economic Education, leads Students For Liberty’s Hazlitt House for Journalism and Content Creation, and is Director of Programming of the Dartmouth Libertarians. Jack was a Research Intern at the American Institute for Economic Research.


Samuel Crombie

Samuel Crombie is currently a Product Manager at Microsoft based in Seattle, WA, where he works on AI features for the Edge Browser. Sam graduated from Dartmouth College in 2023 with an AB in Computer Science.
