<h1 id="sec:timelines">2.5 Speed of AI Development</h1>
<h3 id="introduction">Introduction</h3>
<p>It is comfortable to believe that we are nowhere close to creating AI systems that match or surpass human performance on a wide range of cognitive tasks. However, given the wide range of opinions among experts and current trends in compute and algorithmic efficiency, we do not have strong reasons to rule out the possibility that such AI systems will exist in the near future. Even if development in this direction is slower than the more optimistic projections, the development of AI systems with powerful capabilities on a narrower set of tasks is already happening and is likely to introduce novel risks that will be challenging to manage.
</p>
<p><strong>HLAI is a helpful but flawed milestone for AI development.</strong> When discussing the speed of developments in AI capabilities, it is important to clarify what reference points we are using. Concepts such as HLAI, AGI, or transformative AI, introduced earlier in this chapter, are under-specified and ambiguous in some ways, so it is often more helpful to focus on specific capabilities or types of economic impact. Despite this, there has been intense debate over when AI systems at this level might be achieved, and insight into this question could be valuable for better managing the risks posed by increasingly capable AI systems. In this section, we discuss when we might see general AI systems that can match average human skill across all or nearly all cognitive tasks. This is equivalent to some ways of operationalizing the concept of AGI.
</p>
<h3 id="rapid-hlai">Potential for Rapid Development of HLAI</h3>
<p><strong>HLAI systems are possible.</strong> The human brain is widely regarded by scientists as a physical object that is fundamentally a complex biological machine and yet is able to give rise to a form of general intelligence. This suggests that there is no reason another physical object could not be built with at least the same level of cognitive functioning. While some would argue that an intelligence based on silicon or other materials will be unable to match one built on biological cells, we see no compelling reason to believe that particular materials are required. Such statements seem uncomfortably similar to the claims of vitalists, who argued that living beings are fundamentally different from non-living entities because they contain some non-physical component or have other special properties. Another objection is that copying a biological brain in silicon would be a huge scientific challenge. However, researchers looking to create HLAI do not need to produce an exact copy or "whole brain emulation" of the brain. Airplanes are able to fly without flapping their wings like birds; they function because their creators understood some key underlying principles. Similarly, we might hope to create AI systems that can perform as well as humans through looser forms of imitation rather than exact copying.
</p>
<p><strong>High uncertainty for HLAI timelines.</strong> Opinions on "timelines" (how long it will take to create human-level AI) vary widely among experts. A 2023 survey of over 2,700 AI experts found a wide range of estimates of when HLAI was likely to appear. The combined responses estimated a 10% probability of this happening by 2027, and a 50% probability by 2047. A salient point is that more recent surveys generally indicate shorter timelines, suggesting that many AI researchers have been surprised by the pace of advances in AI capabilities. For example, a similar survey conducted in 2022 yielded a 50% probability of HLAI by 2059. In other words, over a period of just one year, experts brought forward their estimate of when HLAI has a 50% chance of appearing by 12 years. Nonetheless, it is also worth being cautious about extrapolating too much from a short period of rapid progress. In the 1950s and 1960s, many top AI scientists were overly optimistic about what was achievable in the short term, and disappointed expectations contributed to the subsequent "AI Winter."
</p>
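<p>As a rough illustration of how such aggregate survey figures can be read, the sketch below linearly interpolates between the two data points quoted above (10% by 2027 and 50% by 2047) to form a crude cumulative probability curve, and computes the 12-year shift between the 2022 and 2023 median estimates. The interpolation step is a simplifying assumption added here, not part of the survey methodology.
</p>
<pre><code># Illustrative only: turning a few aggregate survey estimates into a rough
# cumulative probability curve by linear interpolation between known points.
# The anchor points are the figures quoted in the text; the interpolation is
# a simplifying assumption.
import numpy as np

# (calendar year, estimated cumulative probability of HLAI by that year)
anchors = [(2027, 0.10), (2047, 0.50)]
xs = [year for year, _ in anchors]
ps = [prob for _, prob in anchors]

for year in (2027, 2037, 2047):
    prob = np.interp(year, xs, ps)   # piecewise-linear interpolation
    print(f"P(HLAI by {year}) ~ {prob:.2f}")

# The 2022 survey put the 50% point at 2059; the 2023 survey put it at 2047.
print("shift in median estimate:", 2059 - 2047, "years")
</code></pre>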
<p><strong>Intense incentives and investment for AGI.</strong> Vast sums of money are being dedicated to building AGI, with leaders in the field having secured billions of dollars. The cost of training GPT-3 has been estimated at around $5 million, while the cost for training GPT-4 was reported to be over $100 million. As of 2024, AI developers are spending billions of dollars on GPUs for training the next generation of AI systems.
</p>
<p>Increasing investment has translated to growing amounts spent on compute; between 2009 and 2024, the cost of compute used to train notable ML models has roughly tripled each year. Moreover, although scaling compute may seem like a relatively simple approach, it has so far proven remarkably effective at improving capabilities over many orders of magnitude of scale. For example, on the task of next-token prediction, the loss has continued to fall as training compute has increased, and this trend has remained consistent as compute has spanned more than a dozen orders of magnitude. These developments have defied the expectations of some skeptics who believed that the approach of scaling would quickly reach its limits and saturate. Additionally, since compute costs are falling, the amount of compute used has grown even faster than spending on it; although spending has been tripling each year, the amount of training compute for notable models has been quadrupling.
</p>
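<p>A minimal numerical sketch of the two trends described above is given below, with entirely made-up constants: a hypothetical power law linking training compute to next-token loss, and annual growth multipliers of roughly 4x for compute and 3x for spending. The functional form and the specific constants are illustrative assumptions, not fitted values.
</p>
<pre><code># Hypothetical power-law scaling: loss(C) = a * C**(-b). The constants a and b
# are made up for illustration; real scaling-law fits differ by model family.
a, b = 10.0, 0.05

def loss(compute_flop):
    return a * compute_flop ** (-b)

for exponent in (18, 20, 22, 24):   # compute spanning several orders of magnitude
    c = 10.0 ** exponent
    print(f"compute = 1e{exponent} FLOP -> loss ~ {loss(c):.3f}")

# Growth multipliers from the text: spending roughly 3x per year, compute
# roughly 4x per year, implying a falling cost per unit of compute.
compute_growth, spend_growth = 4.0, 3.0
years = 5
print("compute multiplier over 5 years:", compute_growth ** years)
print("spending multiplier over 5 years:", spend_growth ** years)
print("implied drop in cost per FLOP:", round((spend_growth / compute_growth) ** years, 3))
</code></pre>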
<p>Improvements in drivers, software, and other elements are also contributing to the training of ever-larger AI models. For example, FlashAttention made the training of transformers more efficient by reorganizing the attention computation to reduce redundant memory reads and writes and make better use of GPU hardware during training.
</p>
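<p>As a concrete illustration, the snippet below uses PyTorch's fused scaled-dot-product attention, which on supported GPUs can dispatch to a FlashAttention-style kernel instead of materializing the full attention matrix. The tensor shapes are arbitrary, and the snippet only demonstrates the API call, not a full training setup.
</p>
<pre><code>import torch
import torch.nn.functional as F

# Arbitrary example shapes: (batch, heads, sequence length, head dimension).
batch, heads, seq_len, head_dim = 2, 8, 1024, 64
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)
v = torch.randn(batch, heads, seq_len, head_dim)

# On CUDA devices with suitable dtypes, PyTorch can route this call to a
# memory-efficient FlashAttention backend; on CPU it falls back to a
# standard implementation, so the snippet still runs.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 1024, 64])
</code></pre>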
<p>Besides increasing compute, another indicator of the growth of AI research is the number of papers published in the field. This metric has also risen rapidly in the past few years, more than doubling from around 128,000 papers in 2017 to around 282,000 in 2022. This suggests that increasing investment is not solely going towards funding ever-larger models, but is also associated with a large increase in the amount of research going into improving AI systems.
</p>
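<p>These publication counts imply a growth rate that can be checked with a couple of lines of arithmetic, sketched below using the approximate figures quoted above.
</p>
<pre><code>import math

# Approximate AI publication counts quoted in the text.
papers_2017, papers_2022 = 128_000, 282_000

annual_growth = (papers_2022 / papers_2017) ** (1 / 5)   # five-year window
doubling_time = math.log(2) / math.log(annual_growth)

print(f"annual growth: {annual_growth:.2f}x per year")   # roughly 1.17x
print(f"doubling time: {doubling_time:.1f} years")       # roughly 4-5 years
</code></pre>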
<h3 id="hlai-obstacles">Obstacles to HLAI</h3>
<p><strong>More conceptual breakthroughs may be needed to achieve HLAI.</strong> Although simply scaling compute has yielded improvements so far, we cannot necessarily rely on this trend to continue indefinitely. Achieving HLAI may require qualitative changes, rather than merely quantitative ones. For example, there may be conceptual breakthroughs required of which we are so far unaware. This possibility adds more uncertainty to projected timelines; whereas we can extrapolate previous patterns to predict how training compute will increase, we do not know what conceptual breakthroughs might be needed, let alone when they might be made.
</p>
<p><strong>High-quality data for training might run out.</strong> The computational operations performed in the training of ML models require data to work with. The more compute used in training, the more data can be processed, and the better the model's capabilities will be. However, as compute being used for training continues to rise, we may reach a point where there is not enough high-quality data to fuel the process. But there are strong incentives for AI developers to find ways to work around this. In the short term, they will find ways to access new sources of training data, for example by paying owners of relevant private datasets. Beyond this, they may try a variety of approaches to reduce the reliance on human-generated data. For example, they may use AI systems to create synthetic or augmented data. Alternatively, AI systems may be able to improve further by competing against themselves through self-play, in a similar way to how AlphaGo learned to play Go at superhuman level.
</p>
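<p>As a toy example of one of these workarounds, the sketch below expands a tiny "human-written" dataset with synthetic variants generated by a deliberately simplistic augmentation rule (random word dropout). Real synthetic-data pipelines are far more sophisticated; this only illustrates the idea of growing a training set without new human labels.
</p>
<pre><code>import random

random.seed(0)

human_data = [
    "the model predicts the next token",
    "scaling compute improves capabilities",
    "high quality data may become scarce",
]

def augment(sentence, drop_prob=0.2):
    """Create a synthetic variant by randomly dropping words (toy rule)."""
    words = sentence.split()
    kept = [w for w in words if random.random() > drop_prob]
    return " ".join(kept) if kept else sentence

# Three synthetic variants per human-written example.
synthetic_data = [augment(s) for s in human_data for _ in range(3)]
print(len(human_data), "human examples ->", len(synthetic_data), "synthetic variants")
</code></pre>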
<p><strong>Investment in AI may drop if financial returns are disappointing.</strong> Although substantial resources are currently being invested in scaling ML models, we do not know how much scaling is required to reach HLAI (even if scaling alone were enough). As companies increase their spending on compute, we do not know whether their revenue from the technology they monetise will increase at the same rate. If the costs of improving the ML models grow more quickly than financial returns, then companies may turn out not to be economically viable, and investment may slow down.
</p>
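<p>The concern about economic viability can be made concrete with a stylized calculation: if training costs grow faster than the revenue they generate, costs eventually overtake revenue. All the starting values and growth multipliers below are hypothetical.
</p>
<pre><code># Stylized and entirely hypothetical: costs growing 3x per year versus revenue
# growing 2x per year, starting from cost well below revenue.
cost, revenue = 0.1, 1.0                 # billions of dollars in year 0 (made up)
cost_growth, revenue_growth = 3.0, 2.0   # assumed annual multipliers

for year in range(1, 11):
    cost *= cost_growth
    revenue *= revenue_growth
    if cost > revenue:
        print(f"costs overtake revenue in year {year}")
        break
</code></pre>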
<h3 id="conclusion">Conclusion</h3>
<p>There is high uncertainty around when HLAI might be achieved. There are strong economic incentives for AI developers to pursue this goal, and advances in deep learning have surprised many researchers in recent years. We should not be confident in ruling out the possibility that HLAI could appear in the coming years.
</p>
<p><strong>AI can be dangerous long before HLAI is achieved.</strong> Although discussions of possible timelines for HLAI are pertinent to understanding when the associated risks might appear, it can be misleading to focus too much on HLAI. This technology does not need to achieve the same level of general intelligence as a human in order to pose a threat. Indeed, systems that are highly proficient in just one area have the potential to cause great harm.</p>