::::{{Quote|''The development of AI will shape the future of power. The nation with the most resilient and productive economic base will be best positioned to seize the mantle of world leadership.''
::::::::::::::::— [[NSCAI Interim Report]], at 9.}}
:::{{Quote|''The development of full artificial intelligence could spell the end of the human race.''
::::::::::— Stephen Hawking, interview with the BBC (Dec. 2014).}}
   
== Definitions ==
'''Artificial intelligence''' ('''AI''') is a branch of [[computer science]] that studies how to develop [[computer]]s that equal or exceed human performance on complex intellectual tasks.

The term has also been defined as

{{Quote|[a]n umbrella term that is used to refer to a set of sciences, theories and techniques dedicated to improving the ability of [[machine]]s to do things requiring [[intelligence]].<ref>[[Unboxing Artificial Intelligence: 10 steps to protect Human Rights]], at 24.</ref>}}

{{Quote|a science and a set of [[computational]] [[technologies]] that are inspired by &mdash; but typically operate quite differently from &mdash; the ways people use their nervous systems and bodies to sense, learn, reason, and take action.<ref>[[One Hundred Year Study on Artificial Intelligence]], at 4.</ref>}}

{{Quote|[t]he [[capability]] of a [[device]] to perform functions that are normally associated with human [[intelligence]] such as reasoning, learning, and self-improvement.<ref>[[ITU]], "Compendium of Approved ITU-T Security Definitions," at 23 (Feb. 2003 ed.) ([https://www.itu.int/itudoc/itu-t/com17/activity/def004_ww9.doc full-text]).</ref>}}

{{Quote|[a]ny artificial system that performs tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance. . . . They may solve tasks requiring human-like perception, cognition, planning, learning, [[communication]], or physical action.<ref>[[U.S. Congress]], H.R. 4625 and S. 2217 (Dec. 12, 2017).</ref>}}

{{Quote|are considered to comprise [[software]] and/or [[hardware]] that can learn to solve complex problems, make predictions or undertake tasks that require human-like [[sensing]] (such as [[vision]], [[speech]], and [[touch]]), [[perception]], [[cognition]], planning, learning, [[communication]], or physical action. Examples are wide-ranging and expanding rapidly. They include, but are not limited to, AI assistants, [[computer vision]] [[system]]s, [[biomedical research]], [[unmanned vehicle system]]s, advanced [[game-playing]] [[software]], and [[facial recognition]] [[system]]s as well as application of AI in both [[Information Technology]] ([[IT]]) and [[Operational Technology]] ([[OT]]).<ref>[[U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools]], at 7-8.</ref>}}

{{Quote|the collection of [[computation]]s that at any time make it possible to assist [[user]]s to perceive, reason, and act. Since it is [[computation]]s that make up AI, the functions of perceiving, reasoning, and acting can be accomplished under the control of the [[computational]] [[device]] (e.g., [[computer]]s or [[robotics]]) in question.}}

'''AI'''

{{Quote|include[s] the following:

:(1) Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to [[data set]]s.

:(2) An artificial system developed in [[computer software]], physical [[hardware]], or another context that solves tasks requiring human-like perception, [[cognition]], planning, learning, [[communication]], or physical action.

:(3) An artificial system designed to think or act like a human, including cognitive architectures and [[neural network]]s.

:(4) A set of techniques, including [[machine learning]], that is designed to approximate a cognitive task.

:(5) An artificial system designed to act rationally, including an [[intelligent]] [[software agent]] or [[embodied]] [[robot]] that achieves goals using perception, planning, reasoning, [[learning]], [[communicating]], [[decision-making]], and acting.<ref>Section 238(g) of the John S. McCain National Defense Authorization Act for Fiscal Year 2019, Pub. L. No. 115-232, 132 Stat. 1636, 1695 (Aug. 13, 2018) (''codified at'' 10 U.S.C. § 2358, note).</ref>}}
 
'''AI''' at a minimum includes

{{Quote|
* Representations of "[[reality]]," [[cognition]], and [[information]], along with associated methods of representation;
* [[Machine learning]];
* Representations of vision and language;
* [[Robotics]]; and
* [[Virtual reality]].<ref>[[Computer Science and Artificial Intelligence]], at 1.</ref>}}

== Brief History of AI ==

Endowing [[computer]]s with human-like [[intelligence]] has been a dream of [[computer]] experts since the dawn of [[electronic computing]]. Although the term "Artificial Intelligence" was not coined until 1956, the roots of the field go back to at least the 1940s,<ref>''See, e.g.,'' Warren S. McCulloch & Walter H. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity," 5 Bull. of Mathematical Biophysics 115 (1943).</ref> and the idea of AI was crystallized in Alan Turing's famous 1950 paper, "[[Computing Machinery and Intelligence]]." Turing's paper posed the question: "Can machines think?" It also proposed a test for answering that question,<ref>Restated in modern terms, the "[[Turing Test]]" (also called the "[[Imitation game]]") puts a human judge in a [[text]]-based [[chat room]] with either another person or a [[computer]]. The human judge can interrogate the other party and carry on a conversation, and then the judge is asked to guess whether the other party is a person or a [[computer]]. If a [[computer]] can consistently fool human judges in this game, then the [[computer]] is deemed to be exhibiting [[intelligence]].</ref> and raised the possibility that a machine might be [[program]]med to learn from experience much as a young child does.
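
Turing's test can be stated procedurally. The following is a minimal, purely illustrative Python sketch of the imitation game's structure; the ''judge'', ''human'', and ''machine'' callables are hypothetical stand-ins, not part of any standard implementation.

<syntaxhighlight lang="python">
import random

def imitation_game(judge, human, machine, rounds=100):
    """Estimate how often the machine fools the judge.

    `human` and `machine` are hypothetical callables mapping a question
    string to an answer string; `judge` maps a transcript (a list of
    (question, answer) pairs) to a verdict: "human" or "machine".
    """
    questions = (
        "What is 12 times 12?",
        "Describe your earliest memory.",
        "Write a short poem about spring.",
    )
    fooled = 0
    for _ in range(rounds):
        # The judge never learns which party is answering.
        talking_to_machine = random.random() < 0.5
        respondent = machine if talking_to_machine else human
        transcript = [(q, respondent(q)) for q in questions]
        # The machine "passes" a round when the judge mistakes it for a person.
        if talking_to_machine and judge(transcript) == "human":
            fooled += 1
    return fooled / rounds
</syntaxhighlight>

On Turing's criterion, a machine that consistently fools judges in such exchanges is deemed to be exhibiting [[intelligence]].
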
The field of Artificial Intelligence (AI) can be traced back to a 1956 workshop organized by John McCarthy, held at Dartmouth College.<ref>''See'' J. McCarthy et al., "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" (Aug. 31, 1955) ([http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html full-text]).</ref> The workshop's goal was to explore how machines could be used to simulate human intelligence. Disciplines that contribute to AI include [[computer science]], economics, [[linguistics]], mathematics, [[statistics]], evolutionary biology, neuroscience, and psychology, among others.

In the ensuing decades, the field of AI went through ups and downs as some AI research problems proved more difficult than anticipated and others proved insurmountable with the [[technologies]] of the time.

=== Waves of AI ===

The [[Defense Advanced Research Projects Agency]] ([[DARPA]]), which has funded [[AI]] [[R&D]] since the 1960s, has described the development of [[AI technologies]] in terms of three waves.<ref>''See'' "DARPA Announces $2 Billion Campaign to Develop Next Wave of AI Technologies" (Sept. 7, 2018) ([https://www.darpa.mil/news-events/2018-09-07 full-text]).</ref> These waves are described by the varying abilities of [[technologies]] in each to ''perceive'' rich, complex, and subtle [[information]]; to ''learn'' within an environment; to ''abstract'' to create new meanings; and to ''reason'' in order to plan and reach decisions.<ref>Arati Prabhakar, former Director of DARPA, "Powerful but Limited: A DARPA Perspective on AI," presentation at National Academies of Sciences, Engineering, and Medicine workshop, Robotics and Artificial Intelligence: Policy Implications for the Next Decade (Dec. 12, 2016) ([https://www.nationalacademies.org/event/12-12-2016/robotics-and-artificial-intelligence-policy-implications-for-the-next-decade full-text]).</ref>

* '''First wave: handcrafted knowledge.''' The first wave of [[AI technologies]] has abilities primarily to perceive and reason, but no learning capability and poor handling of [[uncertainty]]. For such [[technologies]], researchers and engineers create sets of rules to represent [[knowledge]] in well-defined domains for narrowly defined problems. The TurboTax software, an [[expert system]], is one example. Rules are built into the [[application]], which then turns [[input]] [[information]] into tax form [[output]]s, but it has only a rudimentary ability to perceive and no ability to learn (e.g., about a new tax law) or to abstract beyond what it is programmed to know. (A minimal sketch contrasting this rule-based approach with second-wave statistical learning appears after this list.)

* '''Second wave: statistical learning.''' Starting in the 1990s, a second wave of [[AI technologies]] was developed with more nuanced abilities to perceive and learn, some ability to abstract, minimal reasoning ability, and no contextual ability. For these systems, engineers create [[statistical]] models for specific problem domains and train them on [[big data]]. Generally, while such systems are statistically powerful, they can be individually unreliable, especially in the presence of skewed [[training]] data (e.g., a [[facial recognition technology|face recognition system]] trained on a limited range of skin tones can be powerful for similar faces, but highly unreliable for individuals outside of the training spectrum). As noted by [[DARPA]], these [[technologies]] are "dependent on large amounts of high quality [[training data]], do not adapt to changing conditions, offer limited performance guarantees, and are unable to provide users with explanations of their results."<ref>''See'' "DARPA Announces $2 Billion Campaign to Develop Next Wave of AI Technologies" (Sept. 7, 2018) ([https://www.darpa.mil/news-events/2018-09-07 full-text]).</ref> Additional examples of second wave [[AI technologies]] include [[voice recognition]] and [[text analysis]].

* '''Third wave: contextual adaptation.''' The third wave of [[AI technologies]] is oriented toward making it possible for machines to adapt to changing situations (i.e., [[contextual adaptation]]). Engineers create systems that construct explanatory models of real-world phenomena, and "[[AI system]]s learn and reason as they encounter new tasks and situations." Examples of third wave technologies include [[explainable AI]] ([[XAI]]).
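
The contrast between the first two waves can be made concrete. The following Python sketch is purely illustrative and is not drawn from the sources cited here: the first-wave system applies rules a human author wrote by hand (a toy tax schedule), while the second-wave system fits a deliberately crude statistical model to training examples and inherits the blind spots of that data.

<syntaxhighlight lang="python">
# First wave: handcrafted knowledge. A human writes the rules; the system
# cannot learn a new rule (e.g., a changed tax bracket) on its own.
def first_wave_tax(income):
    if income <= 10_000:
        return income * 0.10
    if income <= 40_000:
        return 1_000 + (income - 10_000) * 0.20
    return 7_000 + (income - 40_000) * 0.30

# Second wave: statistical learning. The system derives its own parameters
# from examples; its reliability depends on how well the training data
# covers the inputs it will later see.
def fit_linear(xs, ys):
    """Least-squares fit of y ~ a*x + b (a deliberately simple model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return lambda x: a * x + b

# Skewed training data: only low incomes are represented.
train_x = [5_000, 8_000, 12_000, 20_000]
model = fit_linear(train_x, [first_wave_tax(x) for x in train_x])

print(round(model(15_000)), first_wave_tax(15_000))    # ~2063 vs. 2000.0: close
print(round(model(100_000)), first_wave_tax(100_000))  # ~16532 vs. 25000.0: far off
</syntaxhighlight>

The sketch mirrors DARPA's caveats about the second wave: the fitted model offers no performance guarantee outside the range of its [[training data]] and cannot explain its answers.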

It was not until the late 1990s that research progress in AI began to accelerate, as researchers focused more on sub-problems of AI and the [[application]] of AI to [[real-world]] problems such as [[image recognition]] and medical diagnosis. An early milestone was the 1997 victory of [[IBM]]'s chess-playing [[computer]] Deep Blue over world champion Garry Kasparov. Other significant breakthroughs included [[DARPA]]'s [[Cognitive Agent that Learns and Organizes]] ([[CALO]]), which led to [[Apple]] Inc.'s Siri; [[IBM]]'s question-answering [[computer]] Watson's victory in the TV game show "Jeopardy!"; and the surprising success of [[self-driving car]]s in the [[DARPA]] Grand Challenge competitions in the 2000s.

The current wave of progress and enthusiasm for AI began around 2010, driven by three factors that built upon each other: the [[availability]] of [[big data]] from sources including [[e-commerce]], businesses, [[social media]], science, and government, which provided the raw material for dramatically improved [[machine learning]] approaches and [[algorithm]]s, which in turn relied on the [[capabilities]] of more powerful [[computer]]s.<ref>A more detailed history of AI is available in the Appendix of the AI 100 Report &mdash; [[One Hundred Year Study on Artificial Intelligence]].</ref>

This growth has advanced the state of [[Narrow AI]], which refers to [[algorithm]]s that address specific problem sets like [[game]] playing, [[image recognition]], and [[navigation]]. All current [[AI system]]s fall into the [[Narrow AI]] category. The most prevalent approach to [[Narrow AI]] is [[machine learning]], which involves [[statistical algorithm]]s that replicate human [[cognitive]] tasks by deriving their own procedures through analysis of large training [[data set]]s. During the training process, the [[computer system]] creates its own statistical model to accomplish the specified task in situations it has not previously encountered.
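
As an illustration of that training process, the following sketch (with invented data and labels) derives a classification procedure from labeled examples rather than from hand-written rules, and then applies it to inputs that were absent from the training set.

<syntaxhighlight lang="python">
from collections import Counter

def nearest_neighbor_classifier(examples, k=3):
    """Build a classifier from (feature_vector, label) training pairs."""
    def classify(x):
        # Squared Euclidean distance from x to a training example.
        def dist(example):
            v, _ = example
            return sum((a - b) ** 2 for a, b in zip(v, x))
        # Let the k closest training examples vote on the label.
        nearest = sorted(examples, key=dist)[:k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]
    return classify

# Training set: (width, height) measurements labeled by a human.
training_data = [
    ((1.0, 1.1), "square"), ((2.0, 1.9), "square"), ((3.1, 3.0), "square"),
    ((1.0, 3.0), "tall"),   ((1.5, 4.0), "tall"),   ((2.0, 5.5), "tall"),
]

classify = nearest_neighbor_classifier(training_data)
print(classify((2.5, 2.4)))  # "square" -- a case not in the training set
print(classify((1.2, 3.8)))  # "tall"   -- likewise previously unseen
</syntaxhighlight>

The "statistical model" here is nothing more than proximity voting over the training set; the same train-then-generalize pattern, at vastly larger scale and with far richer models, underlies contemporary [[machine learning]] [[system]]s.
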
Experts generally agree that it will be many decades before the field advances to develop [[General AI]], which refers to [[system]]s capable of human-level [[intelligence]] across a broad range of tasks.<ref>[[Preparing for the Future of Artificial Intelligence]], at 7-9.</ref> Nevertheless, the growing power of [[Narrow AI]] [[algorithm]]s has sparked a wave of commercial interest.
   
 
== Overview ==

"Artificial intelligence is more than the simple [[automation]] of existing [[process]]es: it involves, to greater or lesser degrees, setting an outcome and letting a [[computer program]] find its own way there. It is this creative capacity that gives artificial intelligence its power. But it also challenges some of our assumptions about the role of [[computer]]s and our relationship to them."<ref>[[Artificial Intelligence: Opportunities and Implications for the Future of Decision Making]], at 5.</ref>
   
 
AI attempts to emulate the results of human reasoning by organizing and [[manipulating]] factual and heuristic [[knowledge]]. Areas of AI activity include [[expert system]]s, [[natural language]] understanding, [[speech recognition]], [[vision]], and [[robotics]].

{{Quote|Examples of AI already in use include: [[communicating]] with [[computer]]s in [[natural language]], deriving new insights from [[transport data]], operating [[autonomous]] and [[adaptive]] [[robotic system]]s, managing [[supply chain]]s, and designing more life-like [[video game]]s. Applied AI is already changing business practices across [[financial services]], law, medicine, accounting, tax, [[audit]], architecture, consulting, customer service, manufacturing and [[transport]]. . . . AI could improve the functioning of most [[digital]] operations, products and services. Wherever a [[process]] uses [[digital data]], AI may enable us to use that [[data]] more effectively and in new ways.<ref>[[Growing the Artificial Intelligence Industry in the UK]], at 8.</ref>}}

What has made AI possible is

{{Quote|the confluence of four advancing [[technologies]] . . . vast increases in [[computing power]] and progress in [[machine learning]] techniques . . . breakthroughs in the field of [[machine perception]] . . . [and] improvements in the [[industrial design]] of [[robot]]s.<ref>Jerry Kaplan, "Humans Need Not Apply – A Guide to Wealth and Work in the Age of Artificial Intelligence" 38-39 (2015).</ref>}}

=== Cybersecurity ===

Today's AI has important applications in [[cybersecurity]], and is expected to play an increasing role in both defensive and offensive [[cybermeasure]]s. Currently, designing and operating [[secure system]]s requires significant time and attention from [[expert]]s. Automating this [[expert]] work, partially or entirely, may increase [[security]] across a much broader range of [[system]]s and [[application]]s at dramatically lower cost, and could increase the agility of the Nation's [[cyber-defense]]s. Using AI may help maintain the rapid response required to [[detect]] and react to the landscape of evolving [[threat]]s.

=== Military ===

Challenging issues are raised by the potential use of AI in [[weapon system]]s.<ref>''See generally'' [[Artificial Intelligence and National Security]].</ref> The United States has incorporated [[autonomy]] in certain weapon systems for decades, allowing for greater precision in the use of weapons and safer, more humane military operations. Nonetheless, moving away from direct human control of [[weapon system]]s involves some risks and can raise legal and ethical questions.

"The key to incorporating [[autonomous]] and [[semi-autonomous]] [[weapon system]]s into American defense planning is to ensure that U.S. Government entities are always acting in accordance with international humanitarian law, taking appropriate steps to control proliferation, and working with partners and Allies to develop [[standard]]s related to the development and use of such [[weapon system]]s. The United States has actively participated in ongoing international discussion on [[Lethal Autonomous Weapon Systems]], and anticipates continued robust international discussion of these potential [[weapon system]]s. Agencies across the U.S. Government are working to develop a single, government-wide [[policy]], consistent with international humanitarian law, on [[autonomous]] and [[semi-autonomous]] weapons."

=== Safety ===

Use of [[AI]] to control physical-world [[equipment]] leads to concerns about [[safety]], especially as [[system]]s are exposed to the full complexity of the human environment. A major challenge in [[AI]] safety is building [[system]]s that can safely transition from the 'closed world' of the laboratory into the outside 'open world' where unpredictable things can happen. Adapting gracefully to unforeseen situations is difficult yet necessary for safe operation. Experience in building other types of safety-critical [[system]]s and [[infrastructure]], such as aircraft, power plants, bridges, and vehicles, has much to teach [[AI]] practitioners about [[verification]] and [[validation]], how to build a safety case for a [[technology]], how to [[manage risk]], and how to [[communicate]] with [[stakeholder]]s about [[risk]].

=== Economic impact ===

:::{{Quote|''[B]etween now and 2030, artificial intelligence will . . . increase global gross economic product by $13 trillion.''<ref>[[Artificial Intelligence: A Roadmap for California]], at 4.</ref>}}

AI's central economic effect in the short term will be the [[automation]] of tasks that could not be [[automated]] before. This will likely increase [[productivity]] and create wealth, but it may also affect particular types of jobs in different ways, reducing demand for certain skills that can be [[automated]] while increasing demand for other skills that are complementary to AI. Analysis by the White House Council of Economic Advisers (CEA) suggests that the negative effect of [[automation]] will be greatest on lower-wage jobs, and that there is a risk that AI-driven [[automation]] will increase the wage gap between less-educated and more-educated workers, potentially increasing economic inequality. [[Public policy]] can address these risks, ensuring that workers are retrained and able to succeed in occupations that are complementary to, rather than competing with, [[automation]]. [[Public policy]] can also ensure that the economic benefits created by AI are shared broadly, and assure that AI responsibly ushers in a new age in the global economy.
   
 
== References ==

<references />

== Source ==

* "Brief History of AI", "Cybersecurity", "Economic impact", "Military", and "Safety" sections: [[Preparing for the Future of Artificial Intelligence]].
* "Brief History of AI" section: [[Technology Assessment: Artificial Intelligence: Emerging Opportunities, Challenges, and Implications]], at 15.
   
 
== See also ==

<div style="{{column-count|2}}">
* [[Adversarial AI]]
* [[AI system]]
* [[Artificial general intelligence]]
* [[Artificial Intelligence and National Security]]
* [[Artificial Intelligence and National Security (Belfer)]]
* [[Artificial Intelligence, The Next Digital Frontier?]]
* [[Explainable artificial intelligence]]
* [[General AI]]
* [[Human-machine interface]]
* [[Narrow AI]]
* [[National Security Commission on Artificial Intelligence]]
* [[NISTIR 8332]]
* [[One Hundred Year Study on Artificial Intelligence]]
* [[Preparing for the Future of Artificial Intelligence]]
* [[Recommendation of the Council on Artificial Intelligence]]
* [[Technology Assessment: Artificial Intelligence: Emerging Opportunities, Challenges, and Implications]]
* [[Technology Assessment: Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care]]
</div>

== External resources ==

* Nick Bostrom, "Superintelligence: Paths, Dangers, Strategies" (Oxford Univ. Press, 2014).
* Frank Chen, "AI, Deep Learning, and Machine Learning: A Primer," Andreessen Horowitz (June 10, 2016) ([http://a16z.com/2016/06/10/ai-deep-learning-machines full-text]).
* Kate Crawford, "Artificial Intelligence's White Guy Problem," The New York Times (June 25, 2016) ([http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html full-text]).
 
[[Category:Technology]]
[[Category:Definition]]
[[Category:Cybersecurity]]
[[Category:Military]]
[[Category:AI]]
