Artificial intelligence (AI) refers to the construction of a device (or program) with independent reasoning power: an artificial brain. The most widely accepted test for intelligence is the one devised by Alan Turing: (roughly) if a conversation with the device cannot be distinguished from a similar conversation with a human being, then the device can be called intelligent.
AI research has produced a number of excellent tools and products, including handwriting recognition, computerized chess and other strategy games, the Lisp programming language, advanced robotics, basic visual recognition capability, and (as a by-product) open source software and the GNU toolchain. However, despite immense amounts of money and research, and despite all these ancillary products, true artificial intelligence — a sentient computer, capable of initiative and seamless human interaction — has yet to come to fruition, though some argue that a sentient computer might be more appropriately referred to as artificial consciousness than artificial intelligence.
John Searle proposed his "Chinese room" thought experiment to demonstrate that a computer program merely shuffles symbols around according to simple rules of syntax, without the program ever gaining any semantic grasp of what the symbols actually mean. Proponents of "strong AI", who believe an awareness can exist within a purely algorithmic process, have put forward various critiques of Searle's argument. Hubert Dreyfus's critique of artificial intelligence research has been especially enduring. It does not explicitly deny the possibility of strong AI; it merely argues that the fundamental assumptions of AI researchers are either baseless or misguided. Because Dreyfus's critique draws on philosophers such as Heidegger and Merleau-Ponty, it was largely ignored (and lampooned) when it first appeared. However, as the fantastic predictions of early AI researchers (which included the solution of all philosophical problems) continually failed to pan out, his critique has largely been vindicated, and even incorporated into modern AI research.
On the medical level, an artificial brain would need to fulfill the biological functions of the absent organ, and the device itself would no more fall under the current biological definition of life than a kidney dialysis machine does. An example of a fictional character with this kind of prosthetic is Cyborg from the Teen Titans comics. Brains and cognition are not currently well understood, and the scale of computation required for an artificial brain is unknown. The power consumption of computers, however, suggests it would have to be orders of magnitude greater than its biological equivalent: the human brain consumes about 20 W of power, whereas a current supercomputer may draw around 1 MW, some 50,000 times more, suggesting that AI may be a staggeringly energy-inefficient form of intelligence. Critics of brain simulation believe that artificial intelligence can be modeled without imitating nature, drawing an analogy to early attempts to construct flying machines modeled after birds.
In the field of artificial intelligence, machine learning is a set of techniques that make it possible to train a computer model so that it behaves according to some given sample inputs and expected outputs. For example, machine learning can recognize objects in images or perform other complex tasks that would be too complicated to be described with traditional procedural code.
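The idea of training on sample inputs and expected outputs can be made concrete with a deliberately minimal sketch: a 1-nearest-neighbour classifier written in plain Python. This is an illustration of the general pattern only, not how production systems work (those typically use libraries such as scikit-learn and far richer models); the function names and toy data here are invented for the example.

```python
# Minimal sketch of supervised machine learning: "train" on labelled
# examples, then predict labels for new inputs. Here training is just
# memorising the (input, label) pairs, and prediction copies the label
# of the closest memorised input (1-nearest-neighbour).

def train(samples):
    """Store the (input, label) training pairs as the model."""
    return list(samples)

def predict(model, point):
    """Return the label of the training input closest to `point`."""
    def dist2(a, b):
        # Squared Euclidean distance (no sqrt needed for comparison).
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(model, key=lambda pair: dist2(pair[0], point))
    return label

# Toy data: 2-D points with hypothetical labels.
model = train([((0, 0), "dark"), ((0, 1), "dark"),
               ((5, 5), "light"), ((6, 5), "light")])

print(predict(model, (1, 1)))   # closest training point is (0, 1): "dark"
print(predict(model, (5, 4)))   # closest training point is (5, 5): "light"
```

Even this toy version shows the division of labour that defines machine learning: the programmer writes the generic learning rule, while the behaviour on new inputs is determined by the training examples rather than by hand-written case-by-case code.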
Stephen Hawking's view
In a humorous 2014 interview with John Oliver, Stephen Hawking described AI as potentially dangerous.
- ProPublica's series on Machine Bias
- The Rise of the Weaponized AI Propaganda Machine
- How to Keep Your AI From Turning Into a Racist Monster
- Goertzel, Ben (December 2007). "Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil". Artificial Intelligence 171 (18, Special Review Issue): 1161–1173. http://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792. Retrieved April 1, 2009.
- Fox and Hayes quoted in Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis, p. 581. Morgan Kaufmann Publishers.