This is a sample from MIT Horizon's Introduction to AI module. Contact us for full access to our comprehensive resources.


Benefits of AI

The advantages and opportunities of AI
8 min read

AI systems can recognize and produce natural language

The most high-profile modern AI systems are able to process written and spoken language—a subfield known as natural language processing (NLP). Advances in NLP have enabled a range of applications, including automated translations between languages, virtual assistants such as Amazon’s Alexa and Apple’s Siri that can recognize spoken commands, and customer service chatbots that provide round-the-clock support across many industries.

Like other Amazon Alexa devices, the Echo Dot’s natural language processing software runs on remote cloud computing hardware, meaning the smart speaker can’t interpret spoken language without an internet connection. Photo by Jonathan Borba via Pexels.

Recent developments in large language models (LLMs) such as OpenAI’s GPT-4 have further expanded the reach and capabilities of language-based AI, particularly for chatbots. Wells Fargo's virtual assistant—which runs on a Google-produced LLM—uses conversational language to respond to a user's requests, such as for details about a specific transaction or to monitor ongoing subscriptions. The chatbot handled 20 million customer interactions during its first year of deployment, according to the bank's CIO. The city of New Orleans uses an LLM-based chatbot to field 311 contacts, which are usually information requests or notifications that something needs fixing. By understanding the user’s written text and responding with relevant information, the chatbot answers 46% of resident questions automatically and routes the rest to a human worker. This AI-powered screening process—which accounts for some 2,000 interactions monthly—lets municipal staff focus on more complex requests and reports. (For more about this application, see our impact spotlight.)

Besides recognizing and generating language, AI systems can mine it for latent patterns. Pyrra, a software startup, offers an AI system that tracks more than 30 social media platforms to monitor public discussion of a corporate brand or its executives, or to identify threats of disinformation, hate speech, or violence. Pyrra’s AI model scans for patterns within the language of social media posts to automatically score them for metrics like negative sentiment, reputational risk, and offensive content. The model flags troubling scores for a closer look by humans, alerting an organization’s staff or law enforcement for potential action. Researchers can also use Pyrra’s AI system to better understand patterns in public or political discourse. In either scenario, the system reviews far more social media activity—millions of records daily—than a human analyst could alone.
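The general pattern of scoring text and escalating troubling results to humans can be sketched in a few lines. This is purely a hypothetical illustration—Pyrra’s production models are far more sophisticated than keyword matching, and the terms, weights, and threshold below are invented for the example.

```python
# Hypothetical keyword-based risk scorer for social media posts.
# The terms, weights, and threshold are illustrative assumptions only.

RISK_TERMS = {"fraud": 0.6, "boycott": 0.4, "scam": 0.7, "threat": 0.9}

def risk_score(post: str) -> float:
    """Return a 0-1 risk score based on flagged terms in the post."""
    words = post.lower().split()
    return min(1.0, sum(RISK_TERMS.get(w, 0.0) for w in words))

def flag_for_review(posts: list[str], threshold: float = 0.5) -> list[str]:
    """Return posts whose score exceeds the threshold, for human analysts."""
    return [p for p in posts if risk_score(p) > threshold]

posts = [
    "Loving the new product line!",
    "This company is a scam and a fraud",
]
print(flag_for_review(posts))  # only the second post is flagged
```

The key design idea the sketch preserves is that the model does not take action itself: it only ranks and filters millions of posts down to the few that merit human attention.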

AI systems can interpret and generate imagery

In addition to numbers and text, modern computers can interpret a breadth of visual stimuli. This subfield, called computer vision (CV), is the technology behind facial recognition software, including the systems used by law enforcement as well as the features that let phone owners unlock their devices with their faces. CV techniques also overlap with NLP capabilities, as in optical character recognition, whereby algorithms detect words or even handwritten phrases in images (such as scanned documents) and convert them to machine-readable text that language models can work with.

The implications of computer vision are especially powerful in medicine. A system named EyeArt automatically examines photographs of patients’ eyes for lesions and yields a preliminary diagnosis of diabetic retinopathy in under a minute. Other AI systems work alongside human operators. Caption Health, a subsidiary of GE Healthcare, has developed software that uses computer vision to assist with cardiac ultrasounds. Observing the heart with an ultrasound is an effective way of diagnosing heart conditions, but traditionally only a trained cardiologist could position an ultrasound device to get a proper view. Caption Health’s system, which is authorized by the United States Food and Drug Administration, expands who can do this job: its computer vision guides users toward taking high-quality images. A health clinician could use it to quickly take ultrasound images and send them to a cardiologist for diagnosis.

Computer vision is also a foundational technology for autonomous vehicles, meaning mobile robots that can at least temporarily navigate themselves. Automakers like Tesla, Toyota, and GM have captured international attention with their ongoing efforts to develop self-driving cars. But computer vision already enables a vast number of industrial robots to ferry materials through warehouses, and drones to pilot themselves safely in environments too remote for wireless control signals to reach. For more details, see How Autonomous Vehicles Work.

These images from the publicly available nuScenes data set are used to create better computer vision algorithms—including those used in autonomous vehicles—for identifying cars, objects, and pedestrians in a variety of conditions, such as at night and while it’s raining. (CC BY-NC-SA 4.0)

And just as ChatGPT and other large language models have expanded access to NLP techniques, a similar class of text-to-image generators has boosted existing design applications with computer vision capabilities. Adobe’s suite of generative AI tools, collectively called Firefly, enables users to automate image production, such as letting AI fill in the background of an illustration or create multiple variations of a visual to pick from. IBM is an early Firefly adopter, using the system to create over 1,000 variations of visual assets for a marketing campaign in minutes, a process that normally would have taken days or months. For more about how organizations are using text-to-image generators as well as large language models, see Types of Generative AI Applications.

AI can help make sense of complex conditions

In certain settings, especially those with a modest number of variables or known relationships among them, AI does something a human could do—just more quickly, and with a potentially smaller margin of error. But a computer can also discern patterns in larger and more intricate systems. AI can be useful for analyzing data with many variables in play, including different types of variables that may be interconnected and interdependent. Some AI systems can find patterns in data that would overwhelm a human or a simpler computer program.

This capability is especially valuable for identifying abnormalities before they become obvious problems. Spanish oil company Cepsa uses an AI system from Amazon to perform predictive maintenance: the system analyzes sensor data on temperature, pipe pressure, flow rate, and other factors to identify equipment that needs to be replaced or repaired. Amazon also offers battery-powered temperature and vibration sensors that can be attached to existing equipment and connected to an AI system.
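The core idea behind flagging abnormal sensor readings can be illustrated with a simple statistical check. This is only a sketch of the general pattern—real predictive-maintenance systems like the one described above use learned models over many correlated signals, and the readings and threshold here are invented assumptions.

```python
# Illustrative sketch: flag sensor readings that deviate sharply from
# the rest of the series. The 2-standard-deviation threshold and the
# sample pressure data are assumptions for the example.
import statistics

def anomalous_readings(readings: list[float], z_threshold: float = 2.0) -> list[int]:
    """Return indices of readings more than z_threshold standard
    deviations from the mean of the series."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    if stdev == 0:
        return []  # perfectly steady signal: nothing to flag
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > z_threshold]

pipe_pressure = [101.2, 100.8, 101.0, 101.1, 100.9, 140.5, 101.0, 100.7]
print(anomalous_readings(pipe_pressure))  # prints [5] (the 140.5 spike)
```

A maintenance system would raise an alert on the flagged index so a technician can inspect the equipment before a small fault becomes a failure.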

Though the trend in AI is toward processing larger amounts of data, some applications can make sense of complex systems using relatively little information. Swedish water utility VA SYD uses an AI system to process data from the limited number of flow meters that monitor its 5,000 kilometers of pipeline. “We used the system to simulate different leaks and then evaluated the data,” Simon Granath, a development engineer at VA SYD, said. “We were able to detect leaks as small as 0.5 liters per second—a huge improvement over the previous solution, which provided no means of detecting small leaks at all.” Though the company is adding sensors in some locations as it expands its use of the AI system, according to Granath, “a smarter leakage detection system requires less data from the pipelines, so we can reduce the number of installed meters.”
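One simple way to reason about leak detection from sparse meters is a mass balance: water entering a district that never reaches a downstream meter has likely leaked. The sketch below only illustrates that idea—VA SYD’s actual system is model-based and far more capable, and the meter names, readings, and 0.5 L/s threshold here are assumptions drawn from the figure quoted above.

```python
# Toy mass-balance leak check for a water district.
# All names and numbers are illustrative assumptions.

def leak_estimate(inflow_lps: list[float], outflow_lps: list[float]) -> float:
    """Average difference between metered inflow and outflow (L/s).
    A persistent positive residual suggests water lost to a leak."""
    residuals = [i - o for i, o in zip(inflow_lps, outflow_lps)]
    return sum(residuals) / len(residuals)

inflow  = [12.1, 12.0, 12.2, 12.1]   # district inflow meter, L/s
outflow = [11.5, 11.4, 11.6, 11.5]   # sum of downstream meters, L/s

est = leak_estimate(inflow, outflow)
if est >= 0.5:  # detection threshold quoted in the article
    print(f"possible leak: ~{est:.1f} L/s unaccounted for")
```

Because the residual is averaged over time, small but persistent losses stand out even when individual meter readings are noisy.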

Other industries have used AI to understand the complex patterns of human behavior. Customers at the Royal Bank of Canada can use AI to save money more easily: an AI system analyzes their spending patterns and automatically creates personalized budgets. Meanwhile, major banks use AI from Feedzai to detect fraud by automatically identifying suspicious transactions.

AI can adapt and improve over time

Computers generally perform calculations within parameters set by humans. A human programmer furnishes instructions; algorithms fulfill those instructions literally, and will not improve or adjust without upgraded software providing new instructions.

This is not the case with modern AI systems. Many AI applications use machine learning algorithms that can improve through training, with little or no human intervention. DeepMind, a British AI lab owned by Google’s parent company, used machine learning techniques to create AlphaZero, an AI system that has beaten world-champion AI systems at the games of Go and chess. DeepMind presented the system with no information other than the basic rules of those games; AlphaZero then taught itself better strategies by playing thousands of games against itself. DeepMind used similar techniques to create an AI system that can predict the three-dimensional shape of proteins from their amino acid sequences. This was a major unsolved problem in biology, with huge implications for future research: Andrei Lupas, an evolutionary biologist at the Max Planck Institute for Developmental Biology, told Nature that DeepMind’s AI system “will change medicine. It will change research. It will change bioengineering. It will change everything.”

Getting a machine learning algorithm to improve itself to a desired level usually requires large amounts of data about the problem it’s trying to solve (e.g., computer vision algorithms need to see thousands of image files to improve). Typically, machine learning algorithms are “trained” before deployment, and are no longer improving themselves while in use. In some fields, regular improvement is a necessity. Some cybersecurity systems use machine learning algorithms that must adapt to new threats in order to identify suspicious behavior. Microsoft’s cybersecurity team has developed a way to simulate networks and cybersecurity threats, creating more training data on what future threats might look like to improve its algorithms. In other industries, machine learning algorithms can regularly test different strategies in the real world to improve their performance. For instance, music streaming service Spotify deploys an algorithm that is constantly testing slightly different approaches to personalized song recommendations. Rather than analyzing the larger patterns of activity among its hundreds of millions of users, the system learns from individual users’ listening history to keep them engaged and subscribing.
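One common pattern for “constantly testing slightly different approaches” is an epsilon-greedy bandit: mostly serve the best-performing strategy so far, but occasionally explore an alternative to keep learning. Spotify’s actual system is not public, so the strategies, reward model, and parameters below are invented purely to illustrate the explore/exploit loop.

```python
# Hedged sketch of an epsilon-greedy bandit for recommendation testing.
# Strategy names, success rates, and epsilon are illustrative assumptions.
import random

def choose_strategy(avg_reward: dict[str, float], epsilon: float = 0.1) -> str:
    """Mostly exploit the best strategy so far; explore 10% of the time."""
    if random.random() < epsilon:
        return random.choice(list(avg_reward))
    return max(avg_reward, key=avg_reward.get)

def update(avg_reward, counts, strategy, reward):
    """Incrementally update the running average reward for a strategy."""
    counts[strategy] += 1
    avg_reward[strategy] += (reward - avg_reward[strategy]) / counts[strategy]

avg = {"similar_artists": 0.0, "mood_based": 0.0, "trending": 0.0}
counts = {k: 0 for k in avg}

# Simulated feedback: 1.0 = listener stayed engaged, 0.0 = skipped.
true_rates = {"similar_artists": 0.6, "mood_based": 0.5, "trending": 0.3}
for _ in range(1000):
    s = choose_strategy(avg)
    reward = float(random.random() < true_rates[s])
    update(avg, counts, s, reward)

print(max(avg, key=avg.get))  # the strategy the system has settled on
```

Over many interactions the running averages converge toward each strategy’s true engagement rate, so the system spends most of its traffic on whatever is currently working best while still probing alternatives.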

AI can make predictions

Conventional predictive software relies wholly on models fit to historical data to project future events. AI systems can do more: accommodate higher-dimensional data, generate their own training data, and separate patterns from noise more accurately than statistical models designed by humans. In some scenarios, especially when relationships among variables are unknown, temporary, or volatile, and thus difficult to model well, AI systems are the more accurate resource.

Commercial applications include helping to anticipate customer wants and to personalize shopping experiences, based not simply on prior purchases but on geography, time of day, and potentially dozens of other conditions. Media streaming services including YouTube and Netflix rely on machine learning to suggest new content, with Netflix saying its suggestion software “influences choice for about 80% of hours streamed.”

Predictions are also important in retail, where AI systems can forecast demand and sales to keep inventories efficiently stocked. Previously, managing inventories involved a combination of simple rules and human judgment. Walmart uses AI to predict which products will be in high demand at a given store based on data such as economic trends, weather forecasts, and demographic information. The company’s AI system connects its suppliers, distribution centers, and fulfillment centers with its more than 4,700 stores to preemptively boost inventory before demand hits.

Walmart uses AI not only to predict when stores will need more of certain items, but also to plot the most efficient trucking routes for shipping inventory to those locations. Photo © Walmart. (CC BY 2.0)
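The demand-forecasting idea above—combining signals like weather and local conditions into a stocking prediction—can be sketched as a simple weighted model. Walmart’s systems are vastly more complex than this; the features, weights, and numbers below are invented for the example.

```python
# Minimal illustration of feature-based demand forecasting.
# Feature names, weights, and baseline are illustrative assumptions.

def predict_demand(features: dict[str, float], weights: dict[str, float],
                   baseline: float) -> float:
    """Linear model: baseline demand adjusted by weighted features."""
    return baseline + sum(weights[k] * v for k, v in features.items())

weights = {
    "heat_wave": 300.0,      # extra units when a heat wave is forecast
    "local_event": 150.0,    # nearby festival or game
    "payday_week": 80.0,     # spending rises after payday
}

# Forecast bottled-water demand for one store next week.
features = {"heat_wave": 1.0, "local_event": 0.0, "payday_week": 1.0}
print(predict_demand(features, weights, baseline=500.0))  # prints 880.0
```

In practice the weights would be learned from historical sales rather than set by hand, and the forecast would feed directly into the replenishment orders that move stock toward the store before demand hits.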

AI systems can make predictions in complex real-world environments, as well. U.K. startup Vivacity Labs, which makes systems for controlling stoplights, uses machine learning to predict and avoid traffic jams. Ongoing testing in Manchester has seen a 30% reduction in average journey time on key routes, compared to systems that dynamically adjust stoplights without using modern machine learning techniques.
