What is deep learning?
Published: May 30, 2024, 06:21
Author: McKinsey Global Institute
Deep learning has been around for a while, but most of us never used a deep learning–based tool until the release of OpenAI’s ChatGPT, in late 2022. (And even as we marveled at ChatGPT’s outputs, most of us didn’t know it was using deep learning to generate them.) Like its predecessors DALL-E, Google’s Imagen and PaLM, Stable Diffusion, and others, ChatGPT relies on large deep learning models trained on massive data sets to generate content based on prompts. But unlike its predecessors, ChatGPT works via an open-access API, which means the general public can experience the power of deep learning for the first time.
The world of artificial intelligence and machine learning (for which deep learning is the next evolutionary step) is undergoing a generational transformation, from an idea studied by scientists to a tool used by all kinds of people for all kinds of tasks. McKinsey analysis has shown that between 2015 and 2021, the cost to train an image classification system (which runs on deep learning models) fell by 64 percent. Training times improved by 94 percent in the same period. We’ve also found that generative AI (gen AI) could add the equivalent of up to $4.4 trillion annually to the global economy. These profound changes are all powered by deep learning.
But what actually is deep learning? And how does it make all this possible? Read on to find out.
Learn more about McKinsey Digital.
What is machine learning?

Before we move to deep learning, let’s get the basics down. Machine learning is a form of artificial intelligence that can adapt to a wide range of inputs, including large data sets and human instruction. Machine learning algorithms detect patterns and learn how to make predictions and recommendations by processing data and experiences, rather than by receiving explicit programming instructions. The algorithms also adapt in response to new data and experiences to improve over time.
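To make that distinction concrete, here is a minimal sketch in Python using scikit-learn (neither the language nor the library is named in the article) of an algorithm inferring a decision rule from labeled examples instead of following hand-written if/then instructions. The tiny “spam” data set and its two features are invented purely for illustration.

# Illustrative sketch (not from the article): a model learns a decision rule
# from labeled examples instead of being given explicit if/then instructions.
# The tiny data set below is invented purely for demonstration.
from sklearn.linear_model import LogisticRegression

# Each example: [number of exclamation marks, contains the word "free" (0/1)]
X = [[0, 0], [1, 0], [5, 1], [7, 1], [0, 1], [6, 0]]
y = [0, 0, 1, 1, 0, 1]  # 0 = legitimate message, 1 = spam

model = LogisticRegression()
model.fit(X, y)                  # the algorithm infers the pattern from the data

print(model.predict([[4, 1]]))   # likely predicts 1 (spam)
print(model.predict([[0, 0]]))   # likely predicts 0 (legitimate)

Given new examples, the fitted model keeps making predictions without anyone writing an explicit rule for them, which is the core idea the paragraph above describes.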
The volume and complexity of the data now being generated, too vast for humans to reckon with, have increased the need for machine learning and enhanced its potential. In the years since its widespread deployment, machine learning has had an impact in a number of fields, including medical-imaging analysis and high-resolution weather forecasting.
For more on machine learning, check out our McKinsey Explainer.
Deep learning is a more advanced version of machine learning that is particularly adept at processing a wider range of data resources (text as well as unstructured data, including images), requires even less human intervention, and can often produce more accurate results than traditional machine learning. Deep learning uses neural networks, which are based on the ways neurons interact in the human brain, to ingest and process data through multiple layers of neurons that recognize increasingly complex features of the data. For example, an early layer might recognize something as having a specific shape; building on this knowledge, a later layer might identify the shape as a stop sign. As with machine learning, deep learning uses iteration to self-correct and to improve its prediction capabilities. Once it “learns” what an object looks like, it can recognize the object in a new image.
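For a concrete picture of those stacked layers, the sketch below uses PyTorch (a library chosen here for illustration, not named in the article) to build a small network whose early layers extract simple features from an image and whose later layers combine them into a prediction such as “stop sign or not.” The layer sizes, the two-class task, and the random stand-in “image” are all invented for the example; the final lines show one step of the iterative self-correction described above.

# Minimal sketch of a layered neural network in PyTorch (illustrative only).
# Early layers pick up simple features; later layers combine them into
# higher-level concepts, ending in a prediction such as "stop sign or not".
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),                 # turn a 32x32 RGB image into a flat vector
    nn.Linear(32 * 32 * 3, 128),  # early layer: low-level features (edges, shapes)
    nn.ReLU(),
    nn.Linear(128, 32),           # later layer: combinations of those features
    nn.ReLU(),
    nn.Linear(32, 2),             # output layer: scores for "stop sign" vs. "other"
)

image = torch.rand(1, 3, 32, 32)   # a random stand-in for a real photo
scores = model(image)              # forward pass through all the layers
prediction = scores.argmax(dim=1)  # pick the higher-scoring class
print(prediction)

# Iterative self-correction: compare the prediction with the true label,
# then nudge the weights to reduce the error (one training step).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
label = torch.tensor([1])          # pretend the photo really is a stop sign
loss = nn.CrossEntropyLoss()(scores, label)
loss.backward()
optimizer.step()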
ChatGPT made AI visible—and accessible—to the general public for the first time. ChatGPT and other language models like it are built on deep learning architectures called transformer networks and are trained to generate content in response to prompts. Transformer networks allow gen AI tools to weigh different parts of the input sequence differently when making predictions. Comprising encoder and decoder layers, transformer networks enable gen AI models to learn relationships and dependencies between words in a more flexible way than traditional machine and deep learning models. That’s because transformer networks are trained on huge swaths of the internet (for example, all traffic footage ever recorded and uploaded) rather than on a specific subset of data (certain images of a stop sign, for instance). Foundation models built on transformer network architecture, such as OpenAI’s ChatGPT or Google’s BERT, are able to transfer what they’ve learned from a specific task to a more generalized set of tasks, including generating content (more on foundation models below). At this point, you could ask a model to create a video of a car going through a stop sign.
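The “weighing different parts of the input sequence differently” that transformer networks perform is known as attention. The following is a simplified sketch of scaled dot-product attention in PyTorch; the sequence length, embedding size, and random projection matrices are arbitrary stand-ins, not the configuration of any real model.

# Simplified sketch of the attention step inside a transformer (illustrative).
# Each token produces a query, key, and value; attention weights decide how
# much every other token contributes to that token's updated representation.
import math
import torch
import torch.nn.functional as F

seq_len, d_model = 5, 16            # 5 tokens, 16-dimensional embeddings (arbitrary)
x = torch.rand(seq_len, d_model)    # stand-in token embeddings

W_q = torch.rand(d_model, d_model)  # learned projections (randomly initialized here)
W_k = torch.rand(d_model, d_model)
W_v = torch.rand(d_model, d_model)

Q, K, V = x @ W_q, x @ W_k, x @ W_v
weights = F.softmax(Q @ K.T / math.sqrt(d_model), dim=-1)  # how strongly each token attends to every other token
output = weights @ V                # weighted mix of values: context-aware representations
print(weights.shape, output.shape)  # (5, 5) attention weights, (5, 16) outputs

Stacking many such attention layers, with learned projections and feed-forward layers between them, is what lets transformer models capture long-range relationships between tokens.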
Foundation models can create content, but they don’t know the difference between right and wrong, or even what is and isn’t socially acceptable. When ChatGPT was first created, it required a great deal of human input to learn. OpenAI employed a large number of human workers all over the world to help hone the technology, cleaning and labeling data sets and reviewing and labeling toxic content, then flagging it for removal. This human input is a large part of what has made ChatGPT so revolutionary.
There are three types of artificial neural networks used in deep learning:
For more on deep learning, neural networks, and their use cases, see our executive’s guide to AI.
Foundation models are deep learning models built on transformer network architecture and trained on vast quantities of unstructured, unlabeled data. Foundation models can be used for a wide range of tasks, either out of the box or adapted to specific tasks through fine-tuning. Fine-tuning involves a relatively short period of training on a labeled data set, which is typically much smaller than the data set the model was initially trained on. This additional training allows the model to learn and adapt to the nuances, terminology, and specific patterns found in the smaller data set. Examples of foundation models include DALL-E 2, GPT-4, and Stable Diffusion.
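As an illustration of fine-tuning, the sketch below freezes a model pretrained on a large generic data set and briefly trains a new output layer on a tiny labeled data set. It uses torchvision’s ResNet-18 as a stand-in pretrained model; the article does not prescribe any particular library, model, or task, and the three-class data here is invented.

# Sketch of fine-tuning (illustrative): start from a model pretrained on a large
# generic data set, then briefly train it on a much smaller labeled data set.
# torchvision's ResNet-18 stands in for a pretrained model.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # weights pretrained on ImageNet

for param in model.parameters():   # freeze the pretrained layers
    param.requires_grad = False

model.fc = nn.Linear(512, 3)       # new head for a small three-class task (hypothetical)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A tiny invented "labeled data set" standing in for real fine-tuning data.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))

for epoch in range(3):             # a relatively short period of training
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

Only the small new layer is updated here; in practice, fine-tuning may also update some or all of the pretrained weights, but the pattern of a short training pass over a much smaller labeled data set is the same.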
Large language models are a class of foundation models that can process massive amounts of unstructured text and learn the relationships between words or portions of words, known as tokens. This enables large language models to generate natural-language text and perform tasks such as summarization and knowledge extraction. Google’s Bard chatbot (since renamed Gemini) initially ran on a large language model called LaMDA.
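To show what tokens look like, here is a toy tokenizer that greedily splits words into the longest matching subword pieces from a tiny, invented vocabulary. Real large language models learn far larger vocabularies with algorithms such as byte-pair encoding; this sketch only illustrates the idea that text is broken into word fragments before a model learns relationships between them.

# Toy illustration of tokenization (invented vocabulary, not a real model's).
# Words are greedily split into the longest known subword pieces ("tokens").
VOCAB = ["summar", "iza", "tion", "learn", "ing", "deep", "un", "expect", "ed",
         "a", "e", "i", "o", "u", "s", "t", "n"]

def tokenize(word: str) -> list[str]:
    tokens, i = [], 0
    while i < len(word):
        # take the longest vocabulary entry that matches at position i
        match = max((v for v in VOCAB if word.startswith(v, i)), key=len, default=word[i])
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("summarization"))  # ['summar', 'iza', 'tion']
print(tokenize("deeplearning"))   # ['deep', 'learn', 'ing']
print(tokenize("unexpected"))     # ['un', 'expect', 'ed']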
McKinsey collated more than 400 use cases of machine and deep learning across 19 industries and nine business functions. Based on our analysis, we believe that nearly any industry can benefit from machine and deep learning. Here are a few examples of use cases that cut across several sectors:
Check out deep learning–related job opportunities if you’re interested in working with McKinsey.