
The Future of AI

Realistic predictions for what the next five years will bring to AI

Organizations will grapple with how to integrate generative AI

The launch of ChatGPT in late 2022 brought enormous attention to generative AI, a category of artificial intelligence systems that create new data, such as written language, images, videos, or audio. Since then, interest and investment have only grown as generative AI’s capabilities improve. Fearful of being left behind, many organizations raced to incorporate generative AI into their workflows. (See Types of Generative AI Applications for details.) But many other organizations are still searching for a good fit for the technology. In a KPMG survey of more than 200 U.S. executives, nearly two-thirds said generative AI would have a high impact on their organization within five years, yet 60% said their organization hadn’t implemented its first generative AI application—and wouldn’t for another couple of years.

Following the initial hype, many organizations are realizing that integrating generative AI into their workflows and products can be difficult. “It has been more work than anticipated,” Sharon Mandell, chief information officer for networking technology company Juniper Networks, told the Wall Street Journal. Juniper Networks is testing generative AI tools but hasn’t committed to any yet, in part because answers produced by the models aren’t always reliable, according to Mandell. Representatives at other companies, including agricultural company Cargill and pharmaceutical company Eli Lilly, also told the Journal that generative AI models produced errors when asked questions about basic company information, such as the members of the company’s executive team or general expense policies. (Generative AI models can often be inaccurate, as we explain in Limitations of Generative AI.)

It can also be challenging for organizations to evaluate the impact of generative AI. For example, after testing a generative AI legal tool called Harvey for more than a year, law firm Paul Weiss LLP told Bloomberg Law that it hasn’t been able to measure specific improvements, such as time saved or efficiency gains, in part because lawyers must review the tool’s output for accuracy. Instead, the firm is evaluating Harvey based on its lawyers’ reviews of the tool—so far, experienced lawyers like using Harvey for brainstorming but don’t find it useful for fact-checking.

The law firm’s use of generative AI mirrors a larger trend: Organizations often find the technology is helpful for small productivity improvements but has yet to make a larger impact. “One of the phrases that you hear sometimes from people in this space is, ‘Demos are easy, but products are hard.’ It’s very easy to put something together that looks really flashy and impressive and looks like it works. But once you start trying to actually use it for real, you start running into all the limitations and all the risks,” Armando Solar-Lezama, associate director and chief operating officer of MIT’s Computer Science and Artificial Intelligence Laboratory, tells MIT Horizon.

Experts say the potential payoff from generative AI is big enough that companies will continue investing in the technology. But organizations may no longer be content with generative AI facilitating only productivity gains. To justify further spending, organizations will likely seek a larger and more concrete return on investment. In a survey of 100 U.S. executives released by KPMG in July 2024, 52% said revenue generation was their primary measure of whether generative AI tools are successful. That is a shift from the first quarter of 2024, when 51% of executives said improved productivity was their top measure of success.

The next several years will be a test for enterprise use of generative AI. Many organizations are still in the beginning stages of implementing the technology. It remains unclear whether and when generative AI will deliver revenue increases or meaningful organization-wide improvements rather than small-scale productivity boosts.

AI will disrupt jobs, causing instability and unknown long-term outcomes

Advancements in AI have long caused unrest about whether the technology will displace workers, making certain jobs obsolete and increasing unemployment. The rise of generative AI is amplifying those concerns. Without a doubt, AI will affect many workers. Researchers from the International Monetary Fund estimate that 40% of jobs worldwide will be affected by AI in some way. That percentage is higher (60%) in advanced economies. In half those affected jobs, AI will improve productivity, while the other half may see AI assume tasks previously performed by human workers, according to the report.

As more organizations adopt AI, especially generative models, workers’ roles are likely to change. “This will affect tasks that are more repetitive, more formulaic, more generic. This can liberate some people who are not good at repetitive tasks. At the same time, there is a threat to people who specialize in the repetitive part,” Zachary Lipton, a professor at Carnegie Mellon focused on the societal impacts of AI, told The New York Times. However, it’s difficult to say with certainty what the ultimate impact on the global workforce will be. Some experts’ predictions are more dire than others: While some say automation could displace millions of jobs, others expect that jobs lost will be counterbalanced by new types of jobs created by AI advancements. Analysts at Goldman Sachs predict that about two-thirds of the more than 900 occupations in the U.S. will be affected by AI—and for those affected, AI could replace as much as half of their workload. Even so, the analysts say that doesn’t necessarily mean layoffs, and they point to prior technological advancements that created new jobs, such as the emergence of the internet creating a need for web designers, software developers, and digital marketers.

In the near term, though, some workers are likely to find their roles eliminated or significantly reduced due to AI. For many people, the technology’s impact on jobs is no longer a matter of debate. In 2023, more than a third (37%) of 750 business leaders surveyed by software firm ResumeBuilder said their companies replaced workers with AI that year. Even more respondents (44%) said their companies would have layoffs in 2024 due to AI. Certain professions will be at more risk than others. Financial services company Klarna has boasted that its AI chatbot performs the same amount of work as 700 full-time customer service representatives, part of a larger trend of AI reducing the number of humans working in customer service. Research studies have also shown a decline of as much as 21% in online freelance job listings for tasks such as copywriting, translation, or basic coding since the launch of ChatGPT. Anecdotally, some freelancers say they lost gigs due to companies turning to generative AI, only for those same companies to later ask them for help fixing the AI model’s output.

Other near-term layoffs will likely be collateral damage as hype around generative AI fuels an investment race in the technology. Already some companies have attributed their layoffs to AI-related restructuring. “Our next stage of growth requires a different mix of skill sets, particularly in AI and early stage product development,” Dropbox CEO Drew Houston said in April 2023, when announcing 500 layoffs. Intuit, which makes financial software like TurboTax and QuickBooks, announced in July 2024 it would lay off 1,800 workers, or roughly 10% of its staff, with plans to replace those employees with others who specialize in AI. Other employees will face similar fates—though whether an organization creates new, AI-related roles or simply eliminates jobs made redundant by AI will depend on the organization and the occupations in question.

AI will make robots more adaptable

Roboticists have struggled to create general-purpose robots able to perform a wide range of tasks or adapt to new circumstances and actions. Most robots follow fixed, step-by-step programming to perform narrow, well-defined tasks in specific environments, such as welding sheets of metal on an assembly line or picking up an object and placing it on a conveyor belt.

Recent advancements in AI have created a shift, as roboticists experiment with new ways to program and train their machines. Some of the same techniques that allow generative AI models to accomplish a broad range of tasks could also allow robots to generalize better. As an example, Solar-Lezama describes building a robot to empty a dishwasher. Traditionally, developers would have to program the task step-by-step with specific details like the location of the silverware drawer, how to open the drawer, and where to place each type of silverware. Pairing robots with generative AI models allows users to communicate high-level goals to the robot in natural language, he says. The robots are embedded with enough general knowledge to recognize, for example, that a silverware drawer is in the kitchen and that forks are stored with other forks—basic details roboticists would typically have to hard-code into a machine. “The ability of AI models to combine vision and natural language and all of this global knowledge about the world means that suddenly there is a potential for robots that are much more capable and much more robust,” Solar-Lezama tells MIT Horizon.
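To make the idea concrete, here is a minimal sketch, not drawn from the article, of how a language model might translate a high-level instruction into calls to a robot's low-level skills. The primitives and the call_language_model() helper are hypothetical placeholders for whatever robot stack and LLM API an implementation actually uses.

```python
import json

# Hypothetical low-level skills the robot already knows how to execute.
ROBOT_PRIMITIVES = [
    "open(container)",            # e.g. open(dishwasher), open(silverware_drawer)
    "pick(object)",
    "place(object, location)",
    "close(container)",
]

PLANNER_PROMPT = """You control a kitchen robot. Available actions:
{actions}
Return only a JSON list of action strings that accomplishes the goal.
Goal: {goal}"""


def call_language_model(prompt: str) -> str:
    """Stand-in for any chat-completion API. Returns a canned plan here so
    the sketch runs offline; a real system would query an LLM instead."""
    return json.dumps([
        "open(dishwasher)",
        "open(silverware_drawer)",
        "pick(fork)",
        "place(fork, silverware_drawer)",
        "close(silverware_drawer)",
        "close(dishwasher)",
    ])


def plan(goal: str) -> list[str]:
    """Ask the model to decompose a natural-language goal into primitive actions."""
    prompt = PLANNER_PROMPT.format(actions="\n".join(ROBOT_PRIMITIVES), goal=goal)
    return json.loads(call_language_model(prompt))


print(plan("Empty the dishwasher and put the silverware away."))
```

The point is the division of labor: the language model contributes general knowledge about kitchens and silverware, while the robot's existing controllers handle the physical execution of each primitive.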

Some developers are working to train robotic systems in the same way that text and image generators are trained—essentially exposing a robot to many examples of a human performing an action. Using this type of approach, researchers with the Toyota Research Institute, Columbia University, and MIT taught a single robot over the course of one afternoon to perform more than 60 different tasks requiring significant dexterity, such as peeling vegetables, flipping pancakes, and using a hand mixer. The researchers trained the robot through dozens to hundreds of demonstrations of a particular task, performed by a human remotely operating the robot. The robot then learned to complete the task on its own by adapting diffusion, a technique popular in text-to-image models, to generate its behaviors. (For more on diffusion models, see How Generative AI Works.) Researchers from Stanford accomplished a similar feat, training a robot to perform tasks such as cooking shrimp, wiping up spilled wine, and high-fiving based on around 50 human demonstrations of each task. Some experts have already coined the term “large behavior models” for models that allow robots to generate physical actions by training on videos or demonstrations of humans performing various tasks.

Training a robot to load a dishwasher previously took engineers at the Toyota Research Institute months of step-by-step programming. With their new approach inspired by generative AI, the researchers can teach the robot this task in a single day. Video © Toyota Research Institute. Used with permission.
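As a rough illustration of the training recipe behind these demonstration-driven “large behavior models,” the sketch below shows a single training step of a diffusion-style policy: the network learns to remove noise from action sequences conditioned on an observation, so that at run time it can turn pure noise into a plausible trajectory. The network architecture, tensor shapes, and noise schedule here are simplified assumptions, not the actual implementation used by the Toyota Research Institute, Columbia, and MIT researchers.

```python
import torch
import torch.nn as nn

T = 100                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # simple linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)


class DenoiserNet(nn.Module):
    """Predicts the noise added to an action sequence, given the observation and step."""

    def __init__(self, obs_dim=32, act_dim=7, horizon=16, hidden=256):
        super().__init__()
        self.horizon, self.act_dim = horizon, act_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim + horizon * act_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, horizon * act_dim),
        )

    def forward(self, obs, noisy_actions, t):
        x = torch.cat([obs, noisy_actions.flatten(1), t.float().unsqueeze(1) / T], dim=1)
        return self.net(x).view(-1, self.horizon, self.act_dim)


def training_step(model, optimizer, obs, actions):
    """One behavior-cloning step with a denoising objective.
    obs: (B, obs_dim); actions: (B, horizon, act_dim) recorded from human teleoperation."""
    t = torch.randint(0, T, (obs.shape[0],))
    noise = torch.randn_like(actions)
    a_bar = alphas_bar[t].view(-1, 1, 1)
    noisy = a_bar.sqrt() * actions + (1 - a_bar).sqrt() * noise   # forward diffusion
    pred = model(obs, noisy, t)
    loss = nn.functional.mse_loss(pred, noise)                    # predict the added noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Example usage with random stand-in data (real data would come from teleop demonstrations):
model = DenoiserNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
obs = torch.randn(8, 32)             # batch of encoded observations (e.g., camera features)
actions = torch.randn(8, 16, 7)      # demonstrated action chunks: 16 steps of 7-DoF commands
print(training_step(model, opt, obs, actions))
```

At inference time, the same network would be applied repeatedly to denoise a randomly initialized action sequence, which is the generative step the researchers adapted from image diffusion models.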

There are several other ways researchers are hoping to incorporate generative AI into the robotics field. A team at Google’s DeepMind created a generative model that translates text and image data into physical behaviors. Roboticists at startup Agility Robotics are incorporating large language models into their machines, allowing them to respond to text and voice prompts. Other companies are building software that uses generative AI to help create the simulations that developers often use to train robots, such as NVIDIA’s Isaac Platform. Some researchers are exploring how to integrate generative AI into robot design. “The idea of using AI not only for the brain of the robot but also its body basically opens up new possibilities of building strong intelligence that can interact with the real world,” Tsun-Hsuan (Johnson) Wang, a doctoral researcher in the Distributed Robotics Lab at MIT’s Computer Science & Artificial Intelligence Laboratory, said in a 2024 TEDx talk.

Rising demand for hardware could limit AI development

As AI models have grown in size and complexity, many companies have developed computer chips purpose-built for AI. This includes custom graphics processing units, or GPUs, whose architectures are better suited for machine learning tasks and training, as well as new types of processors specifically created for use in machine learning systems. (For a more technical look at these types of chips, see Types of AI Hardware.) By relying on AI-specific hardware rather than general-purpose chips, organizations can run their AI systems more cheaply and efficiently—an important feat, especially as organizations look to adopt large, data-hungry generative AI models in various applications. Specialized hardware built for AI and machine learning “can help us unlock the cost efficiency, performance, and energy efficiency that we look for when we scale our workloads,” Sairam Menon, AI product line software engineering manager at Johnson & Johnson, said of the company’s use of AI-specific chips from Amazon Web Services.
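As a small, hypothetical illustration of why the choice of hardware matters in practice, the snippet below times the dense matrix multiplications that dominate neural-network workloads on whatever device is available; on an AI-oriented GPU the same operation typically runs far faster than on a general-purpose CPU, though the actual numbers depend entirely on the machine.

```python
import time
import torch


def time_matmul(device: str, n: int = 4096, repeats: int = 10) -> float:
    """Average seconds per (n x n) matrix multiplication on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    _ = a @ b                                   # warm-up (triggers kernel/library init)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()                # wait for queued GPU work to finish
    return (time.perf_counter() - start) / repeats


print("cpu :", time_matmul("cpu"), "s per matmul")
if torch.cuda.is_available():                   # only time the GPU if one is present
    print("cuda:", time_matmul("cuda"), "s per matmul")
```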

However, state-of-the-art AI hardware is neither cheap to acquire nor readily available. “For many years we got used to every year there’s new hardware that is twice as good as the hardware we had before but for the same price. Now suddenly we’re seeing these advances where we are getting hardware that is twice as good as the one we had before, but it also costs twice as much,” Solar-Lezama tells MIT Horizon. “Suddenly buying a machine with a handful of top-of-the-line GPUs is a major financial commitment, and not the sort of thing that you can buy very easily.” The fast-growing demand for hardware, coupled with increasing costs, could trigger chip shortages. Already some AI developers have raised concerns about whether companies can develop and manufacture hardware quickly enough and at the scale needed to support the generative AI boom. In its 2023 annual report to investors, Microsoft listed a lack of availability of GPUs as a potential risk for the first time. OpenAI CEO Sam Altman has frequently complained about limited chip supply, suggesting there isn’t enough available hardware to meet the company’s demand.

Large technology firms like OpenAI and Microsoft typically have enough capital and influence to weather supply disruptions. Smaller companies, startups, and research organizations often aren’t so lucky, leaving them scrambling to find chips. Those that aren’t able to secure hardware risk falling behind or missing out on the opportunity to build a generative AI application. Experts say the hardware space is ripe for new players and greater competition—as of 2024, NVIDIA’s GPUs serve up to 70% of the AI market, though much of their output is committed to major companies. More competition could in turn reduce the price of hardware, making chips more readily available to a broader range of organizations. No matter what, pressure on hardware developers is unlikely to lessen in the near term.
