Artificial intelligence

Definitions

Artificial intelligence (AI) is

  • a branch of computer science that studies how to develop computers that equal or exceed human performance on complex intellectual tasks.
  • a science and a set of computational technologies that are inspired by — but typically operate quite differently from — the ways people use their nervous systems and bodies to sense, learn, reason, and take action.[1]
  • [t]he capability of a device to perform functions that are normally associated with human intelligence such as reasoning, learning, and self-improvement.[2]
  • the collection of computations that at any time make it possible to assist users to perceive, reason, and act. Since it is computations that make up AI, the functions of perceiving, reasoning, and acting can be accomplished under the control of the computational device (e.g., computers or robotics) in question.

AI at a minimum includes

Brief History of AI

Endowing computers with human-like intelligence has been a dream of computer experts since the dawn of electronic computing. Although the term "Artificial Intelligence" was not coined until 1956, the roots of the field go back to at least the 1940s,[4] and the idea of AI was crystallized in Alan Turing's famous 1950 paper, "Computing Machinery and Intelligence." Turing's paper posed the question: "Can machines think?" It also proposed a test for answering that question,[5] and raised the possibility that a machine might be programmed to learn from experience much as a young child does.

In the ensuing decades, the field of AI went through ups and downs as some AI research problems proved more difficult than anticipated and others proved insurmountable with the technologies of the time. It wasn't until the late 1990s that research progress in AI began to accelerate, as researchers focused more on sub-problems of AI and the application of AI to real-world problems such as image recognition and medical diagnosis. An early milestone was the 1997 victory of IBM's chess-playing computer Deep Blue over world champion Garry Kasparov. Other significant breakthroughs included DARPA's Cognitive Agent that Learns and Organizes (CALO), which led to Apple Inc.'s Siri; IBM's question-answering computer Watson's victory in the TV game show "Jeopardy!"; and the surprising success of self-driving cars in the DARPA Grand Challenge competitions in the 2000s.

The current wave of progress and enthusiasm for AI began around 2010, driven by three factors that built upon each other: the availability of big data from sources including e-commerce, businesses, social media, science, and government, which provided raw material for dramatically improved machine learning approaches and algorithms, which in turn relied on the capabilities of more powerful computers.[6] During this period, the pace of improvement surprised AI experts. For example, on a popular image recognition challenge that has a 5% human error rate according to one error measure, the best AI result improved from a 26% error rate in 2011 to a 3.5% error rate in 2015.

Overview

AI attempts to emulate the results of human reasoning by organizing and manipulating factual and heuristic knowledge. Areas of AI activity include expert systems, natural language understanding, speech recognition, vision, and robotics.
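
As a rough illustration of how an expert system organizes factual and heuristic knowledge, the following Python sketch stores facts as data and heuristics as if-then rules, then applies the rules by forward chaining until no new conclusions emerge. The facts, rules, and names are invented for illustration; real expert systems are far larger and more sophisticated.

  # A minimal sketch of the "factual and heuristic knowledge" pattern behind
  # expert systems: facts plus if-then rules, applied by forward chaining.
  # The facts and rules below are invented for illustration only.

  facts = {"fever", "cough"}

  # Each rule pairs a set of required facts with a fact to conclude.
  rules = [
      ({"fever", "cough"}, "flu_suspected"),
      ({"flu_suspected"}, "recommend_rest"),
  ]

  def forward_chain(facts, rules):
      """Fire rules whose conditions hold until nothing new is learned."""
      derived = set(facts)
      changed = True
      while changed:
          changed = False
          for conditions, conclusion in rules:
              if conditions <= derived and conclusion not in derived:
                  derived.add(conclusion)
                  changed = True
      return derived

  print(forward_chain(facts, rules))
  # {'fever', 'cough', 'flu_suspected', 'recommend_rest'}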

What has made AI possible is

the confluence of four advancing technologies . . . vast increases in computing power and progress in machine learning techniques . . . breakthroughs in the field of machine perception . . . [and] improvements in the industrial design of robots.[7]

Cybersecurity

Today's AI has important applications in cybersecurity, and is expected to play an increasing role in both defensive and offensive cyber measures. Currently, designing and operating secure systems requires significant time and attention from experts. Automating this expert work partially or entirely may increase security across a much broader range of systems and applications at dramatically lower cost, and could increase the agility of the Nation's cyber-defenses. Using AI may help maintain the rapid response required to detect and react to the landscape of evolving threats.
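
One way AI can assist cyber-defense is by automating the detection of anomalous activity against a model of normal behavior. The following Python sketch uses a simple statistical threshold as a stand-in for that idea; the data, threshold, and scenario are hypothetical, and real systems rely on far richer models.

  # A minimal sketch of statistical anomaly detection for cyber-defense.
  # The baseline data and threshold are invented for illustration only.

  import statistics

  # Hypothetical hourly counts of failed login attempts under normal conditions
  baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]
  mean = statistics.mean(baseline)
  stdev = statistics.stdev(baseline)

  def is_anomalous(count, threshold=3.0):
      """Flag a count more than `threshold` standard deviations above the mean."""
      return (count - mean) / stdev > threshold

  for count in [5, 7, 48]:
      print(count, "anomalous" if is_anomalous(count) else "normal")
  # 5 normal / 7 normal / 48 anomalous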

Military

Challenging issues are raised by the potential use of AI in weapon systems. The United States has incorporated autonomy in certain weapon systems for decades, allowing for greater precision in the use of weapons and safer, more humane military operations. Nonetheless, moving away from direct human control of weapon systems involves some risks and can raise legal and ethical questions.

"The key to incorporating autonomous and semi-autonomous weapon systems into American defense planning is to ensure that U.S. Government entities are always acting in accordance with international humanitarian law, taking appropriate steps to control proliferation, and working with partners and Allies to develop standards related to the development and use of such weapon systems. The United States has actively participated in ongoing international discussion on Lethal Autonomous Weapon Systems, and anticipates continued robust international discussion of these potential weapon systems. Agencies across the U.S. Government are working to develop a single, government-wide policy, consistent with international humanitarian law, on autonomous and semi-autonomous weapons.

Safety

Use of AI to control physical-world equipment leads to concerns about safety, especially as systems are exposed to the full complexity of the human environment. A major challenge in AI safety is building systems that can safely transition from the 'closed world' of the laboratory into the outside 'open world' where unpredictable things can happen. Adapting gracefully to unforeseen situations is difficult yet necessary for safe operation. Experience in building other types of safety-critical systems and infrastructure, such as aircraft, power plants, bridges, and vehicles, has much to teach AI practitioners about verification and validation, how to build a safety case for a technology, how to manage risk, and how to communicate with stakeholders about risk.
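
A common engineering pattern for managing the transition from the closed world of the laboratory to the open world is a runtime guard: when the system's confidence in its own decision is low, it defers to a conservative default. The Python sketch below illustrates the idea; the model stub, threshold, and action names are hypothetical.

  # A minimal sketch of a confidence-gated safety fallback.
  # The model stub, threshold, and actions are invented for illustration only.

  SAFE_ACTION = "stop"          # conservative default behavior
  CONFIDENCE_THRESHOLD = 0.90   # below this, defer to the safe action

  def model_predict(observation):
      """Stand-in for a learned controller returning (action, confidence)."""
      return ("proceed", 0.62)  # a real system would run inference here

  def guarded_act(observation):
      action, confidence = model_predict(observation)
      if confidence < CONFIDENCE_THRESHOLD:
          return SAFE_ACTION    # unforeseen or uncertain: degrade gracefully
      return action

  print(guarded_act({"sensor": "unfamiliar reading"}))  # -> stop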

Economic impact

AI's central economic effect in the short term will be the automation of tasks that could not be automated before. This will likely increase productivity and create wealth, but it may also affect particular types of jobs in different ways, reducing demand for certain skills that can be automated while increasing demand for other skills that are complementary to AI. Analysis by the White House Council of Economic Advisers (CEA) suggests that the negative effect of automation will be greatest on lower-wage jobs, and that there is a risk that AI-driven automation will increase the wage gap between less-educated and more-educated workers, potentially increasing economic inequality. Public policy can address these risks by ensuring that workers are retrained and able to succeed in occupations that are complementary to, rather than competing with, automation. Public policy can also help ensure that the economic benefits created by AI are broadly shared and that AI responsibly ushers in a new age in the global economy.

References

  1. One Hundred Year Study on Artificial Intelligence, at 4.
  2. ITU, "Compendium of Approved ITU-T Security Definitions," at 23 (Feb. 2003 ed.) (full-text).
  3. Computer Science and Artificial Intelligence, at 1.
  4. See, e.g., Warren S. McCulloch & Walter H. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity," 5 Bull. of Mathematical Biophysics 115 (1943).
  5. Restated in modern terms, the "Turing Test" puts a human judge in a text-based chat room with either another person or a computer. The human judge can interrogate the other party and carry on a conversation, and then the judge is asked to guess whether the other party is a person or a computer. If a computer can consistently fool human judges in this game, then the computer is deemed to be exhibiting intelligence.
  6. A more detailed history of AI is available in the Appendix of the AI 100 Report — One Hundred Year Study on Artificial Intelligence.
  7. Jerry Kaplan, "Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence" 38-39 (2015).

External resources

  • Frank Chen, "AI, Deep Learning, and Machine Learning: A Primer," Andreessen Horowitz (June 10, 2016) (full-text).
  • Kate Crawford, "Artificial Intelligence's White Guy Problem," The New York Times (June 25, 2016) (full-text).
