AI: Hope or Hype? - Project Syndicate (2024)

  • The High Cost of GPT-4o – Angela Huyue Zhang, et al.
  • What the AI Pessimists Are Missing – Michael R. Strain
  • Can AI Improve Family Life? – Anne-Marie Slaughter, et al.
  • Can AI Foster a Global Consciousness? – Jamie Metzl
  • Human Capital for the Age of Generative AI – Simon Johnson, et al.
  • Don’t Believe the AI Hype – Daron Acemoglu
The High Cost of GPT-4o
Angela Huyue Zhang, S. Alex Yang

OpenAI's newest artificial-intelligence tool, GPT-4o, leverages a large and growing user base – drawn in by the promise that the service is free – to crowdsource massive amounts of multimodal data and use it to train its AI model. But much of the data is not owned by its users, and copyright holders will have little recourse.

HONG KONG/LONDON – With the launch of GPT-4o, OpenAI has once again shown itself to be the world’s most innovative artificial-intelligence company. This new multimodal AI tool – which seamlessly integrates text, voice, and visual capabilities – is significantly faster than previous models, greatly enhancing the user experience. But perhaps the most attractive feature of GPT-4o is that it is free – or so it seems.

One does not have to pay a subscription fee to use GPT-4o. Instead, users pay with their data. Like a black hole, GPT-4o increases in mass by sucking up any and all material that gets too close, accumulating every piece of information that users enter, whether in the form of text, audio files, or images.

GPT-4o gobbles up not only users’ own information but also third-party data revealed during interactions with the AI service. Suppose you want a summary of a New York Times article. You take a screenshot and share it with GPT-4o, which reads the screenshot and generates the requested summary within seconds. For you, the interaction is over. But OpenAI is now in possession of all the copyrighted material from the screenshot you provided, and it can use that information to train and enhance its model.

OpenAI is not alone. In the past year, many firms – including Microsoft, Meta, Google, and X (formerly Twitter) – have quietly updated their privacy policies in ways that potentially allow them to collect user data and apply it to train generative AI models. Though leading AI companies have already faced numerous lawsuits in the United States over their unauthorized use of copyrighted content for this purpose, their appetite for data remains as voracious as ever. After all, the more they obtain, the better they can make their models.

The problem for leading AI firms is that high-quality training data has become increasingly scarce. In late 2021, OpenAI was so desperate for more data that it reportedly transcribed over a million hours of YouTube videos, violating the platform’s rules. (Google, YouTube’s parent company, has not pursued legal action against OpenAI, possibly to avoid accountability for its own harvesting of YouTube videos, the copyrights for which are owned by their creators.)

With GPT-4o, OpenAI is trying a different approach, leveraging a large and growing user base – drawn in by the promise of free service – to crowdsource massive amounts of multimodal data. This approach mirrors a well-known tech-platform business model: charge users nothing for services, from search engines to social media, while profiting from app tracking and data harvesting – what Harvard professor Shoshana Zuboff famously called “surveillance capitalism.”

To be sure, users can prohibit OpenAI from using their “chats” with GPT-4o for model training. But the obvious way to do this – on ChatGPT’s settings page – automatically turns off the user’s chat history, causing users to lose access to their past conversations. There is no discernible reason why these two functions should be linked, other than to discourage users from opting out of model training.

If users want to opt out of model training without losing their chat history, they must, first, figure out that there is another way, as OpenAI highlights only the first option. They must then navigate through OpenAI’s privacy portal – a multi-step process. Simply put, OpenAI has made sure that opting out carries significant transaction costs, in the hopes that users will not do it.

Even if users consent to the use of their data for AI training, consent alone would not guard against copyright infringement, because users are providing data that they may not actually own. Their interactions with GPT-4o thus have spillover effects on the creators of the content being shared – what economists call “externalities.” In this sense, consent means little.

While OpenAI’s crowdsourcing activities could lead to copyright violations, holding the company – or others like it – accountable will be no easy feat. AI-generated output rarely looks like the data that informed it, which makes it difficult for copyright holders to know for certain whether their content was used in model training. Moreover, a firm might be able to claim ignorance: users provided the content during interactions with its services, so how can the company know where they got it from?

Creators and publishers have employed a number of methods to keep their content from being sucked into the AI-training black hole. Some have introduced technological solutions to block data scraping. Others have updated their terms of service to prohibit the use of their content for AI training. Last month, Sony Music – one of the world’s largest record labels – sent letters to more than 700 generative-AI companies and streaming platforms, warning them not to use its content without explicit authorization.

But as long as OpenAI can exploit the “user-provided” loophole, such efforts will be in vain. The only credible way to address GPT-4o’s externality problem is for regulators to limit AI firms’ ability to collect and use the data their users share.

What the AI Pessimists Are Missing
Michael R. Strain

The current debate about generative AI focuses disproportionately on the disruption it might unleash. While it is true that technological advances always disrupt legacy industries and existing systems and processes, one must not ignore the opportunities they can create or the risks they can mitigate.

WASHINGTON, DC – Pessimism suffuses current discussions about generative artificial intelligence. A YouGov survey in March found that Americans primarily feel “cautious” or “concerned” about AI, whereas only one in five are “hopeful” or “excited.” Around four in ten are very or somewhat concerned that AI could put an end to the human race.

Such fears illustrate the human tendency to focus more on what could be lost than on what could be gained from technological change. Advances in AI will cause disruption. But creative destruction creates as well as destroys, and that process ultimately is beneficial. Often, the problems created by a new technology can also be solved by it. We are already seeing this with AI, and we will see more of it in the coming years.

Recall the panic that swept through schools and universities when OpenAI first demonstrated that its ChatGPT tool could write in natural language. Many educators raised valid concerns that generative AI would help students cheat on assignments and exams, shortchanging their educations. But the same technology that enables this abuse also enables detection and prevention of it.

Moreover, generative AI can help to improve education quality. The longstanding classroom model of education faces serious challenges. Aptitude and preparation vary widely across students within a given classroom, as do styles of learning and levels of engagement, attention, and focus. In addition, the quality of teaching varies across classrooms.

AI could address these issues by acting as a private tutor for every student. If a particular student learns math best by playing math games, AI can play math games. If another student learns better by quietly working on problems and asking for help when needed, AI can accommodate that. If one student is falling behind while another in the same classroom has already mastered the material and grown bored, AI tutors can work on remediation with the former student and more challenging material with the latter. AI systems will also serve as customized teaching assistants, helping teachers develop lesson plans and shape classroom instruction.

The economic benefits of these applications would be substantial. When every child has a private AI tutor, educational outcomes will improve overall, with less-advantaged students and pupils in lower-quality schools likely benefiting disproportionately. These better-educated students will then grow into more productive workers who can command higher wages. They also will be wiser citizens, capable of brightening the outlook for democracy. Because democracy is a foundation for long-term prosperity, this, too, will have salutary economic effects.

Many commentators worry that AI will undermine democracy by supercharging misinformation and disinformation. They ask us to imagine a “deep fake” of, say, President Joe Biden announcing that the United States is withdrawing from NATO, or perhaps of Donald Trump suffering a medical event. Such a viral video might be so convincing as to affect public opinion in the run-up to the November election.

But while deep fakes of political leaders and candidates for high office are a real threat, concerns about AI-driven risks to democracy are overblown. Again, the same technology that allows for deep fakes and other forms of information warfare can also be deployed to counter them. Such tools are already being introduced. For example, SynthID, a watermarking tool developed by Google DeepMind, imbues AI-generated content with a digital signature that is imperceptible to humans but detectable by software. Three months ago, OpenAI added watermarks to all images generated by ChatGPT.
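As a toy illustration of the general idea (not SynthID’s actual method, which is proprietary and far more robust), a watermark can be hidden in an image’s least-significant bits: invisible to the eye, but trivial for software to check. The sketch below is a simplified, assumption-laden example, not a description of any real product:

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Hide a binary mark in each pixel's least-significant bit.
    Flipping the LSB changes intensity by at most 1/255 of full
    scale, which is imperceptible to the human eye."""
    return (pixels & np.uint8(0xFE)) | mark.astype(np.uint8)

def detect_watermark(pixels: np.ndarray, mark: np.ndarray) -> bool:
    """Software-side check: does the LSB plane match the expected mark?"""
    return bool(np.array_equal(pixels & np.uint8(1), mark.astype(np.uint8)))

rng = np.random.default_rng(seed=42)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # toy grayscale image
mark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)     # secret bit pattern

marked = embed_watermark(image, mark)
print(detect_watermark(marked, mark))   # True: watermark present
print(detect_watermark(image, mark))    # almost surely False: unmarked image
```

Real systems like SynthID must also survive cropping, compression, and re-encoding, which is what makes production watermarking hard; the LSB trick above conveys only the imperceptible-but-detectable principle.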

Will AI weapons create a more dangerous world? It is too early to say. But as with the examples above, the same technology that can create better offensive weapons can also create better defenses. Many experts believe that AI will increase security by mitigating the “defender’s dilemma”: the asymmetry whereby bad actors need to succeed only once, whereas defensive systems must work every time.

In February, Google CEO Sundar Pichai reported that his firm had developed a large language model designed specifically for cyber defense and threat intelligence. “Some of our tools are already up to 70 per cent better at detecting malicious scripts and up to 300 per cent more effective at identifying files that exploit vulnerabilities,” he wrote.

The same logic applies to national-security threats. Military strategists worry that swarms of low-cost, easy-to-make drones could threaten large, expensive aircraft carriers, fighter jets, and tanks – all systems that the US military relies on – if they are controlled and coordinated by AI. But the same underlying technology is already being used to create defenses against such attacks.

Finally, many experts and citizens are concerned about AI displacing human workers. But, as I wrote a few months ago, this common fear reflects a zero-sum mentality that misunderstands how economies evolve. Though generative AI will displace many workers, it also will create new opportunities. Work in the future will look vastly different from work today because generative AI will create new goods and services whose production will require human labor. A similar process happened with previous technological advances. As the MIT economist David Autor and his colleagues have shown, the majority of today’s jobs are in occupations introduced after 1940.

The current debate around generative AI focuses disproportionately on the disruption it might unleash. But technological advances not only disrupt; they also create. There will always be bad actors seeking to wreak havoc with new technologies. Fortunately, there is an enormous financial incentive to counter such risks, as well as to preserve and generate profits.

The personal computer and the internet empowered thieves, facilitated the spread of false information, and led to substantial labor-market disruptions. Yet very few today would turn back the clock. History should inspire confidence – but not complacency – that generative AI will lead to a better world.

Can AI Improve Family Life?
Anne-Marie Slaughter, Avni Patel Thompson

An AI assistant for caregivers would free up time and energy for empathy, creativity, and connection. More importantly, identifying which parts of caregiving can be automated is likely to teach us a great deal about which family functions and activities should remain fully and solely human.

BERLIN/VANCOUVER – The public debate about the future of artificial intelligence often focuses on two main concerns: the technology’s broader impact on humanity, and its immediate effects on individuals. For the most part, people want to know how automation will transform work. Which industries will still be around tomorrow? And whose job is on the line today?

But the debate has overlooked an important pillar of society: the family. If we are going to build AI systems that will help solve, rather than exacerbate, pressing social and economic problems, we should remember that families comprise 89% of American households, and we should consider the complex pressures they face when deciding how to apply the technology.

After all, families in the United States are in desperate need of support. According to the World Economic Forum, America’s $6 trillion care economy is at risk of collapsing, owing to labor shortages, administrative burdens, and a broken market model whereby most families cannot afford the full cost of care and workers are chronically underpaid. Moreover, parenthood has changed: more parents are working, and demands on their time, from caring for children and aging parents to managing information overload and coordinating household tasks, have intensified.

Using AI as a co-pilot for families could save time – and sanity. An AI assistant could decipher school emails and activity schedules or help prepare for an upcoming family trip by making a packing list and confirming travel plans. If augmented by AI, the care robots being developed in Japan and elsewhere could support the privacy and autonomy of those receiving care and enable human caregivers to spend more time establishing emotional connections and providing companionship.

Designing AI to assist with complex human problems such as parenting or elder care requires defining its role. In today’s world, caregiving, and especially parenting, consists of too many mundane tasks that eat into the time available for more meaningful activities. AI could thus function as “anti-tech tech” – a shield from the always-on culture of email, text messages, and endless to-dos. The ideal AI co-pilot would shoulder the bulk of this busywork, allowing families to spend more time together.

But complex human tasks are typically “iceberg” problems, with the majority of the work hidden beneath the surface. An AI co-pilot that handles only the visible labor would do little to alleviate the caregiver’s burden, because completing these tasks requires a full understanding of what needs to be done.

For example, we can build the technology to create calendar entries from an email with the schedule for a youth soccer team (and then delete and recreate them when it inevitably changes a week later). But to free a parent from the invisible load of managing a kid’s sports season, AI would need to understand the various other tasks that lie beneath the surface: looking for field locations, noting jersey colors, signing up for snack duties, and creating the appropriate reminders. If one parent had a scheduling conflict, the AI assistant would have to alert the other parent, and if both had conflicts, it would have to schedule time for a conversation, in recognition of how important it can be for a child to have a parent or loved one at their game.
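To make the “iceberg” point concrete, here is a minimal sketch of what such a co-pilot’s internal task model might look like. Everything in it is hypothetical: the data structure, names, and conflict rule are illustrative assumptions, not a description of any existing product:

```python
from dataclasses import dataclass, field

@dataclass
class IcebergTask:
    """A visible caregiving task plus the hidden work beneath it."""
    visible: str
    hidden: list[str] = field(default_factory=list)

# The soccer-season example from the text, modeled as one iceberg task.
soccer = IcebergTask(
    visible="Create calendar entries from the schedule email",
    hidden=[
        "Look up field locations",
        "Note jersey colors",
        "Sign up for snack duty",
        "Set reminders before each game",
    ],
)

def assign_game(parent_free: dict[str, bool]) -> str:
    """Illustrative conflict rule: send any free parent; if none is free,
    schedule a conversation, because a loved one's presence at the game
    is treated as a value, not just a calendar slot."""
    free = [name for name, ok in parent_free.items() if ok]
    if free:
        return f"Alert {free[0]} to attend the game"
    return "Schedule a conversation between parents to resolve the conflict"

print(f"1 visible task, {len(soccer.hidden)} hidden ones")
print(assign_game({"Parent A": False, "Parent B": True}))
print(assign_game({"Parent A": False, "Parent B": False}))
```

The point of the sketch is that the hidden list, and the values encoded in the fallback rule, are exactly the parts that cannot be scraped from public data.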

The challenge is not coming up with an answer, but rather coming up with the right answer given the complex context, much of which is embedded in parents’ brains. Through careful exploration and curation, this knowledge could one day be converted into data for training specialized family AI models. By contrast, large language models such as GPT-4, Gemini, and Claude are generally trained on public data collected from the internet.

Developing an AI co-pilot for caregivers would undoubtedly test the technology’s technical limits and determine the extent to which it can account for moral considerations and societal values. In a forthcoming paper titled “Computational Frameworks for Care and Caregiving Frameworks for Computing,” the cognitive scientist Brian Christian explores some of the biggest challenges of trying to translate care into the mathematical “reward functions” necessary for machine learning. One example is when a caregiver intervenes on the basis of what they believe to be in a child’s best interests, even if that child disagrees. Christian concludes that “the process of trying to formalize core aspects of the human experience is revealing to us what care really is – and perhaps even how much we have yet to understand about it.”

Like office work, much of family life consists of repetitive and mundane tasks that could be completed by AI. But unlike office work, training such an AI model would require carefully collecting and transmitting the specialized practices of an intimate world. It is worth the effort, though: an AI assistant for caregivers would free up time and energy for empathy, creativity, and connection. More importantly, identifying which parts of caregiving can be performed by AI is likely to teach us a great deal about which family functions and activities should remain fully and solely human.

Can AI Foster a Global Consciousness?
Jamie Metzl

Technological development has shaped religious belief for thousands of years. This suggests that powerful new technologies like artificial intelligence could help people incorporate a greater awareness of how to meet the collective needs of society into their traditional identities.

NEW YORK – Our ancestors long feared the world-ending wrath of angry gods. But it is only recently that we have developed the capacity to do ourselves in, whether from climate change, nuclear weapons, artificial intelligence, or synthetic biology. Although our ability to cause harm on a planetary scale has increased exponentially as a result of our technology, our means of responsibly managing these newfound powers have not. This must change if humanity is to survive and thrive.

Today’s deeply interconnected world demands that we develop a collective consciousness and purpose to address common challenges and ensure that technological advances serve everyone. So far, zero-sum competition between countries and communities has posed an insurmountable obstacle to mitigating global risks. But the same technologies that are ripe for misuse also have the potential to help foster a shared sense of responsibility.

Technological development has shaped religious belief for thousands of years. Domesticating plants and animals made civilization – and thus all religions (other than animism) – possible, while the invention of writing, followed by parchment and later paper, helped these belief systems spread through holy books like the Torah, Bible, Quran, and Bhagavad Gita. The success of Protestantism can be largely attributed to the printing press.

Now companies are building large-language-model chatbots – such as GitaGPT, Quran GPT, and BibleChat – that people can use to receive automated personal advice inspired by traditional religious texts. Given that the Talmud, an interpretation of sacred texts, has itself become sacred in Judaism, it is not inconceivable that AI-generated interpretive texts might someday take on a sanctified status.

This suggests that while powerful technologies like AI can cause harm, they could just as easily have a positive influence on the continued evolution of social traditions and belief systems. Specifically, these technologies could help people incorporate a global consciousness and a greater awareness of how to meet the collective needs of society into their traditional identities. Following in the footsteps of animism, Buddhism, and Unitarianism (movements that, to limited effect, have long tried to expand the concept of collective responsibility), AI systems could help supercharge these efforts in a hyper-connected world.

In 2016, AlphaGo, an algorithm developed by Google’s DeepMind, defeated Go grandmaster Lee Sedol by four games to one in a competition in Seoul. This astounding display of technological prowess underscores AI’s transformative potential. The success of AlphaGo, which had been trained on digitized games played by thousands of Go masters, was, in fact, a profound victory for humanity. It was as if all those human masters were sitting across the table from Lee, their combined wisdom channeled through the algorithm. On that day, a computer program, in many ways, represented the best of us.

Imagine if we instructed a future algorithm to study all of humankind’s recorded religious and secular traditions, and to create a manifesto referencing the best of our cultural and spiritual achievements and devising a plan to improve upon them. Using all that it has learned, this bot might advise us on how to strike the optimal balance between our individual needs as members of smaller communities and our collective needs as humans sharing the planet. One could even imagine its output having the same legitimacy as the Talmud, or as the tablets our ancestors purportedly received atop mountains or dug up in their backyards.

Today many people regularly engage with generative AI bots, whether using the predictive text function in Gmail or querying ChatGPT. Computer systems in cars alert drivers when they veer into another lane, while those in planes warn pilots when they make an error. Soon, seamless natural-language interfaces will plan our vacations, write computer programs based on prompts from people who can’t otherwise code, suggest treatment options to doctors, and recommend planting strategies to farmers. AI systems will end up playing an outsize role in many critical life decisions, so it will make sense for us to program a concern for the common good into them.

In the second game between AlphaGo and Lee, the algorithm made a move that human experts saw as a mistake. By AlphaGo’s own estimate, it was a move a human player would have had only a one-in-10,000 chance of choosing. But it turned out to be an optimal move that no human had previously considered. Instead of undermining human players, AlphaGo ultimately made them better by introducing new ways of playing the game. All of this can be credited to humans, who invented these technologies and Go itself. Humans also created the AlphaZero program, which defeated its predecessor, AlphaGo, after learning the game only by playing against itself.

Technology, in other words, is us, so it must be developed for us.

We should prompt the broader AI systems being trained on the cultural content that humans have generated over thousands of years to help us imagine a better path forward, as AlphaGo and AlphaZero did for Go players.

The clock is ticking to develop a global framework for addressing the dangers we are generating. It’s our move.

Human Capital for the Age of Generative AI
Simon Johnson, Eric Hazan

Shared prosperity can flow from new technology only if its adoption is accompanied by upgraded skills and proactive worker redeployment. In the age of generative AI, employers should be candid about nascent skills gaps, and governments should focus on enabling all workers to upgrade their skills in a timely and appropriate fashion.

WASHINGTON, DC/PARIS – Generative artificial intelligence has captured the world’s imagination because it appears likely to automate tasks that previously required advanced cognitive skills. With it, there is a real prospect that many highly educated and experienced workers may be replaced by algorithms. What happens when machines come for the jobs not of handloom weavers and autoworkers, but of scriptwriters, lawyers, middle managers, and even high-level executives?

One response is to think that skills no longer matter, or even that we should de-emphasize education. On the contrary, while the potential for increased productivity (and higher incomes for all) through human-machine interaction has never been greater, we humans will need to up our game. We must get better at everything the computers struggle with, including understanding context, thinking outside the box, and managing relationships with other humans.

According to a recent report from the McKinsey Global Institute, up to 30% of current work hours in industrialized countries could be automated by 2030, under a scenario of moderate automation. While automation has squeezed workers for decades, generative AI heralds a significant acceleration and gut-wrenching change for many people who assumed their careers were stable.

In the United States and the European Union, the number of people employed as office workers, in manufacturing, and as customer-service representatives will almost certainly decline as generative AI takes hold. (The report considers nine EU countries – the Czech Republic, Denmark, France, Germany, Italy, the Netherlands, Poland, Spain, and Sweden – representing 75% of the European working population, as well as the United Kingdom).

But the news is not all bad. The report estimates that demand for workers in health care, clean energy, and other high-skill professions (such as scientific research and development) is likely to rise in those same countries. Of course, there are other factors at work here, including efforts to achieve net-zero emissions (important for new job creation across all industrialized countries), an aging workforce (particularly in Europe), the continuing expansion of private sector e-commerce, and the strengthening of government-financed infrastructure.

Rather than mass unemployment, the most likely outcome is that many people will soon face pressure to change jobs. Under reasonable assumptions, Europe could experience up to 12 million occupational transitions over the next six years.

While the projected annual occupational transition rate (0.8% of employed people) is lower than the relatively high rate observed in Europe during the COVID-19 pandemic (1.2%), it is twice as high as the pre-pandemic norm (0.4%). In the US, employment transitions over the same period could also reach almost 12 million, although this seems more manageable, as the US already had an elevated pre-pandemic transition rate (1.2%) compared to Europe.

Executives on both sides of the Atlantic are already concerned about existing skill shortages and mismatches in a tight labor market. It is good news for suitably qualified humans if demand for social and emotional skills rises with the new technologies. The more than 1,100 executives that the McKinsey team surveyed in Europe and the US stressed the need not only for advanced information-technology and data-analytics skills but also for more workers who are competent in critical thinking, creativity, and “teaching and training.”

The wage implications are likely to be significant. Demand for labor will shift toward occupations that already have higher wages in both Europe and the US. And there is a real risk of some employment reduction in lower-wage white-collar occupations. These workers will need to acquire new skills to obtain better-paying work. If they can acquire these skills – by themselves, through employers, or with the assistance of government – they will have an opportunity to climb the wage ladder.

But there is a real risk of an even more polarized labor market in which there are more high-wage job openings than qualified workers (further driving up the top wages), and many more workers compete for increasingly limited lower-wage positions (further driving down the lower end of the wage distribution). This outcome would be a disappointing reversal of the reduced wage inequality in the post-pandemic labor market. Fortunately, it is avoidable.

For policymakers, the major takeaway is that human capital matters more than ever for national competitiveness and shared prosperity. Some manual jobs will remain with humans (robots have a relatively hard time with many basic mobility and cleaning tasks). But executives are currently convinced they need to retrain many workers in order to meet all their skill needs. Public policy should encourage employers as much as possible to maintain this disposition and reskill workers rather than replace them.

Significantly faster productivity growth, especially in Europe, and shared prosperity can flow from new technology, but only if its adoption is accompanied by upgraded human skills and more proactive worker redeployment. To achieve this in the age of generative AI, executives should be as candid as possible about nascent skills gaps, and governments should focus on making it as easy as possible for all workers to upgrade their skills in a timely and appropriate fashion.

Don’t Believe the AI Hype
Daron Acemoglu

If you listen to tech industry leaders, business-sector forecasters, and much of the media, you may believe that recent advances in generative AI will soon bring extraordinary productivity benefits, revolutionizing life as we know it. Yet neither economic theory nor the data support such exuberant forecasts.

BOSTON – According to tech leaders and many pundits and academics, artificial intelligence is poised to transform the world as we know it through unprecedented productivity gains. While some believe that machines soon will do everything humans can do, ushering in a new age of boundless prosperity, other predictions are at least more grounded. For example, Goldman Sachs predicts that generative AI will boost global GDP by 7% over the next decade, and the McKinsey Global Institute anticipates that the annual GDP growth rate could increase by 3-4 percentage points between now and 2040. For its part, The Economist expects that AI will create a blue-collar bonanza.

Is this realistic? As I note in a recent paper, the outlook is far more uncertain than most forecasts and guesstimates suggest. Still, while it is basically impossible to predict with any confidence what AI will do in 20 or 30 years, one can say something about the next decade, because most of these near-term economic effects must involve existing technologies and improvements to them.

It is reasonable to suppose that AI’s biggest impact will come from automating some tasks and making some workers in some occupations more productive. Economic theory provides some guidance for assessing these aggregate effects. According to Hulten’s theorem (named for the economist Charles Hulten), aggregate “total factor productivity” (TFP) effects are simply the product of the share of tasks that are automated and the average cost savings on those tasks.
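In symbols, a stylized statement of the theorem as used here (the shorthand symbols are mine, not the paper’s notation):

```latex
\Delta \mathrm{TFP} \approx s \times c,
\qquad
s = \text{share of tasks automated},
\quad
c = \text{average cost savings on those tasks.}
```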

While average cost savings are difficult to estimate and will vary by activity, there have already been some careful studies of AI’s effects on certain tasks. For example, Shakked Noy and Whitney Zhang have examined the impact of ChatGPT on simple writing tasks (such as summarizing documents or writing routine grant proposals or marketing material), while Erik Brynjolfsson, Danielle Li, and Lindsey Raymond have assessed the use of AI assistants in customer service. Taken together, this research suggests that currently available generative-AI tools yield average labor-cost savings of 27% and overall cost savings of 14.4%.

What about the share of tasks that will be affected by AI and related technologies? Using numbers from recent studies, I estimate this to be around 4.6%, implying that AI will increase TFP by only 0.66% over ten years, or by 0.06% annually. Of course, since AI will also drive an investment boom, the increase in GDP growth could be a little larger, perhaps in the 1-1.5% range.
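The back-of-envelope arithmetic behind these figures can be reproduced in a few lines. This is a sketch of the column’s own calculation: the 27%, 14.4%, and 4.6% inputs come from the studies cited above, while the labor-share comment is implied by their ratio rather than stated in the text:

```python
# Hulten-style back-of-envelope, using the estimates cited in the text.
labor_cost_savings = 0.27      # average labor-cost savings on affected tasks
overall_cost_savings = 0.144   # overall cost savings (implies a labor share
                               # of roughly 0.144 / 0.27 ~= 0.53 in these tasks)
task_share = 0.046             # share of tasks affected within ten years

tfp_gain_10yr = task_share * overall_cost_savings
print(f"TFP gain over ten years: {tfp_gain_10yr:.2%}")      # 0.66%
print(f"TFP gain per year:       {tfp_gain_10yr / 10:.3%}") # ~0.066%, i.e. 0.06%
```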

These figures are much smaller than the ones from Goldman Sachs and McKinsey. If you want to get those bigger numbers, you either must boost the productivity gains at the micro level or assume that many more tasks in the economy will be affected. But neither scenario seems plausible. Labor-cost savings far above 27% not only fall out of the range offered by existing studies; they also do not align with the observed effects of other, even more promising technologies. For example, industrial robots have transformed some manufacturing sectors, and they appear to have reduced labor costs by about 30%.

Similarly, we are unlikely to see far more than 4.6% of tasks being taken over, because AI is nowhere close to being able to perform most manual or social tasks (including seemingly simple functions with some social aspects, like accounting). As of 2019, a survey of essentially all US businesses found that only about 1.5% of them had any AI investments. Even if such investments have picked up over the past year and a half, we have a long, long way to go before AI becomes widespread.

Of course, AI could have larger effects than my analysis allows if it revolutionizes the process of scientific discovery or creates many new tasks and products. The recent AI-enabled discoveries of new crystal structures and advances in protein folding do suggest such possibilities. But these breakthroughs are unlikely to be a major source of economic growth within ten years. Even if new discoveries could be tested and turned into actual products much faster, the tech industry is currently focused excessively on automation and monetizing data, rather than on introducing new production tasks for workers.

Moreover, my own estimates could be too high. Early adoption of generative AI has naturally occurred where it performs reasonably well, meaning tasks for which there are objective measures of success, such as writing simple programming subroutines or verifying information. Here, the model can learn on the basis of outside information and readily available historical data.

But many of the 4.6% of tasks that could feasibly be automated within ten years – evaluating applications, diagnosing health problems, providing financial advice – do not have such clearly defined objective measures of success, and often involve complex context-dependent variables (what is good for one patient will not be right for another). In these cases, learning from outside observation is much harder, and generative AI models must rely instead on the behavior of existing workers.

Under these circumstances, there will be less room for major improvements over human labor. Thus, I estimate that about one-quarter of the 4.6% of tasks are of the “harder-to-learn” category and will have lower productivity gains. Once this adjustment is made, the 0.66% TFP growth figure declines to about 0.53%.
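A back-of-envelope way to reconcile the 0.66% and 0.53% figures (my own reading, since the exact discount applied to harder tasks is not spelled out here): if the harder-to-learn quarter of tasks delivers only a fraction $r$ of the baseline savings, then

```latex
0.66\% \times (0.75 + 0.25\,r) \approx 0.53\%
\quad\Longrightarrow\quad
r \approx 0.2,
```

implying that harder-to-learn tasks are assumed to yield roughly a fifth of the baseline productivity gains.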

What about the effects on workers, wages, and inequality? The good news is that, compared to earlier waves of automation – such as those based on robots or software systems – the effects of AI may be more broadly distributed across demographic groups. If so, it will not have as extensive an impact on inequality as earlier automation technologies did (I estimated these effects in my previous work with Pascual Restrepo). However, I find no evidence that AI will reduce inequality or boost wage growth. Some groups – especially white, native-born women – are significantly more exposed and will be negatively affected, and capital will gain more than labor overall.

Economic theory and the available data justify a more modest, realistic outlook for AI. There is little to support the argument that we should not worry about regulation, because AI will be the proverbial rising tide that lifts all boats. AI is what economists call a general-purpose technology. We can do many things with it, and there are certainly better things to do than automate work and boost the profitability of digital advertising. But if we embrace techno-optimism uncritically or let the tech industry set the agenda, much of the potential could be squandered.
