The IT Law Wiki

Artificial intelligence


Definitions

Artificial intelligence (AI) is

  • a branch of computer science that studies how to develop computers that equal or exceed human performance on complex intellectual tasks.
  • a science and a set of computational technologies that are inspired by — but typically operate quite differently from — the ways people use their nervous systems and bodies to sense, learn, reason, and take action.[1]
  • "[t]he capability of a device to perform functions that are normally associated with human intelligence such as reasoning, learning, and self-improvement."[2]
  • the collection of computations that at any time make it possible to assist users to perceive, reason, and act. Since it is computations that make up AI, the functions of perceiving, reasoning, and acting can be accomplished under the control of the computational device (e.g., computers or robotics) in question.

AI at a minimum includes

Brief History of AI

Endowing computers with human-like intelligence has been a dream of computer experts since the dawn of electronic computing. Although the term "Artificial Intelligence" was not coined until 1956, the roots of the field go back to at least the 1940s,[4] and the idea of AI was crystallized in Alan Turing's famous 1950 paper, "Computing Machinery and Intelligence." Turing's paper posed the question: "Can machines think?" It also proposed a test for answering that question,[5] and raised the possibility that a machine might be programmed to learn from experience much as a young child does.

In the ensuing decades, the field of AI went through ups and downs as some AI research problems proved more difficult than anticipated and others proved insurmountable with the technologies of the time. It wasn't until the late 1990s that research progress in AI began to accelerate, as researchers focused more on sub-problems of AI and the application of AI to real-world problems such as image recognition and medical diagnosis. An early milestone was the 1997 victory of IBM's chess-playing computer Deep Blue over world champion Garry Kasparov. Other significant breakthroughs included DARPA's Cognitive Agent that Learns and Organizes (CALO), which led to Apple Inc.'s Siri; IBM's question-answering computer Watson's victory in the TV game show "Jeopardy!"; and the surprising success of self-driving cars in the DARPA Grand Challenge competitions in the 2000s.

The current wave of progress and enthusiasm for AI began around 2010, driven by three factors that built upon each other: the availability of big data from sources including e-commerce, businesses, social media, science, and government, which provided raw material for dramatically improved machine learning approaches and algorithms, which in turn relied on the capabilities of more powerful computers.[6] During this period, the pace of improvement surprised AI experts. For example, on a popular image recognition challenge that has a 5% human error rate according to one error measure, the best AI result improved from a 26% error rate in 2011 to 3.5% in 2015.
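The error measure behind figures like those above is not named here; on image-recognition benchmarks of this kind it is commonly a "top-k" error, i.e., the fraction of images whose true label is not among the model's k highest-scoring guesses. A minimal sketch of that computation, using NumPy and invented scores:

```python
import numpy as np

def topk_error(scores, labels, k=5):
    """Fraction of samples whose true label is not among the k highest-scoring classes."""
    topk = np.argsort(scores, axis=1)[:, -k:]       # indices of the k largest scores per row
    hits = np.any(topk == labels[:, None], axis=1)  # did the true label make the top k?
    return 1.0 - hits.mean()

# Two samples over four classes: the first is scored correctly, the second is not.
scores = np.array([[0.9, 0.1, 0.0, 0.0],
                   [0.1, 0.9, 0.0, 0.0]])
labels = np.array([0, 2])
print(topk_error(scores, labels, k=1))  # 0.5: one of the two samples is missed
```

Whether the cited human and machine error rates used exactly this measure is an assumption; the sketch only illustrates the general form of such a metric.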

Overview

AI attempts to emulate the results of human reasoning by organizing and manipulating factual and heuristic knowledge. Areas of AI activity include expert systems, natural language understanding, speech recognition, vision, and robotics.
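The expert-systems approach mentioned above organizes knowledge as explicit facts and if-then rules. A toy forward-chaining rule engine gives the flavor; the rules and fact names below are illustrative inventions, not drawn from any real system:

```python
# Toy forward-chaining rule engine in the expert-system style: facts are
# strings, and each rule derives a new fact once all its premises hold.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # fire rules until no new facts appear
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES)
print(sorted(derived))
```

Note how the second rule fires only after the first has added "possible_flu": chaining inferences through a rule base is the basic mechanism by which classical expert systems emulated human reasoning.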

Cybersecurity

Today's AI has important applications in cybersecurity, and is expected to play an increasing role for both defensive and offensive cybermeasures. Currently, designing and operating secure systems requires significant time and attention from experts. Automating this expert work partially or entirely may increase security across a much broader range of systems and applications at dramatically lower cost, and could increase the agility of the Nation's cyber-defenses. Using AI may help maintain the rapid response required to detect and react to the landscape of evolving threats.

Military

Challenging issues are raised by the potential use of AI in weapon systems. The United States has incorporated autonomy in certain weapon systems for decades, allowing for greater precision in the use of weapons and safer, more humane military operations. Nonetheless, moving away from direct human control of weapon systems involves some risks and can raise legal and ethical questions.

"The key to incorporating autonomous and semi-autonomous weapon systems into American defense planning is to ensure that U.S. Government entities are always acting in accordance with international humanitarian law, taking appropriate steps to control proliferation, and working with partners and Allies to develop standards related to the development and use of such weapon systems. The United States has actively participated in ongoing international discussion on Lethal Autonomous Weapon Systems, and anticipates continued robust international discussion of these potential weapon systems. Agencies across the U.S. Government are working to develop a single, government-wide policy, consistent with international humanitarian law, on autonomous and semi-autonomous weapons."

Safety

Use of AI to control physical-world equipment leads to concerns about safety, especially as systems are exposed to the full complexity of the human environment. A major challenge in AI safety is building systems that can safely transition from the 'closed world' of the laboratory into the outside 'open world' where unpredictable things can happen. Adapting gracefully to unforeseen situations is difficult yet necessary for safe operation. Experience in building other types of safety-critical systems and infrastructure, such as aircraft, power plants, bridges, and vehicles, has much to teach AI practitioners about verification and validation, how to build a safety case for a technology, how to manage risk, and how to communicate with stakeholders about risk.

References

  1. One Hundred Year Study on Artificial Intelligence, at 4.
  2. ITU, "Compendium of Approved ITU-T Security Definitions," at 23 (Feb. 2003 ed.) (full-text).
  3. Computer Science and Artificial Intelligence, at 1.
  4. See, e.g., Warren S. McCulloch & Walter H. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity," 5 Bull. of Mathematical Biophysics 115 (1943).
  5. Restated in modern terms, the "Turing Test" puts a human judge in a text-based chat room with either another person or a computer. The human judge can interrogate the other party and carry on a conversation, and then the judge is asked to guess whether the other party is a person or a computer. If a computer can consistently fool human judges in this game, then the computer is deemed to be exhibiting intelligence.
  6. A more detailed history of AI is available in the Appendix of the AI 100 Report — One Hundred Year Study on Artificial Intelligence.


External resource

  • Kate Crawford, "Artificial Intelligence's White Guy Problem," The New York Times, June 25, 2016 (full-text).
