

- Artificial intelligence (AI) refers to the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.
- The earliest substantial work in artificial intelligence was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing. In 1935, Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The scanner’s actions are dictated by a program of instructions that is also stored in the memory in the form of symbols. This is Turing’s stored-program concept, and implicit in it is the possibility of the machine operating on, and so modifying or improving, its own program. Turing’s conception is now known simply as the universal Turing machine. All modern computers are in essence universal Turing machines.
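- To make the stored-program idea concrete, here is a minimal sketch of a Turing-style machine in Python. The tape alphabet, states, and the little bit-flipping program are invented purely for illustration and are not Turing's own formulation.

```python
# Minimal sketch of a Turing-style machine: a tape (memory), a scanner (head),
# and a program stored as a transition table. All symbols and states here are
# illustrative examples, not a historical reconstruction.

def run_turing_machine(tape, program, state="start", head=0, blank="_", max_steps=1000):
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if head < len(tape) else blank
        # The program maps (state, symbol) -> (symbol to write, move, next state).
        write, move, state = program[(state, symbol)]
        if head >= len(tape):
            tape.append(blank)
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Example program: flip every bit on the tape, then halt when the blank is reached.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", flip_bits))  # -> "01001_"
```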
- In 1957, Frank Rosenblatt of the Cornell Aeronautical Laboratory at Cornell University in Ithaca, New York, began investigating artificial neural networks that he called perceptrons. He made major contributions to the field of AI, both through experimental investigations of the properties of neural networks (using computer simulations) and through detailed mathematical analysis. Rosenblatt was a charismatic communicator, and there were soon many research groups in the United States studying perceptrons. Rosenblatt and his followers called their approach connectionist to emphasize the importance in learning of the creation and modification of connections between neurons. Modern researchers have adopted this term.
- Early milestones in AI: the first AI programs and expert systems.
- Traditional AI has by and large attempted to build disembodied intelligences whose only interaction with the world has been indirect (CYC, for example). Nouvelle AI, on the other hand, attempts to build embodied intelligences situated in the real world—a method that has come to be known as the situated approach. Brooks quoted approvingly from the brief sketches that Turing gave in 1948 and 1950 of the situated approach. By equipping a machine “with the best sense organs that money can buy,” Turing wrote, the machine might be taught “to understand and speak English” by a process that would “follow the normal teaching of a child.” Turing contrasted this with the approach to AI that focuses on abstract activities, such as the playing of chess. He advocated that both approaches be pursued, but until the new AI, little attention was paid to the situated approach.
- Early Foundations (1950s-1960s) and later milestones:
- 1950: Alan Turing’s influential paper, “Computing Machinery and Intelligence,” introduced the Turing Test, a framework for assessing machine intelligence.
- 1950s: The Perceptron by Frank Rosenblatt introduced the concept of a binary classifier that could learn from data, laying the foundation for future neural networks.
- 1960s: The first industrial robots started working, and the ELIZA program demonstrated early conversational capabilities.
- 1970s: The first anthropomorphic robot was built in Japan, and early bacteria identification systems were developed.
- 1980s: Mercedes-Benz tested the first driverless car, and Jabberwacky was released as an early chatbot.
- 1990s: IBM’s Deep Blue defeated Garry Kasparov in a chess match, demonstrating AI’s capabilities in a complex game. Advances in computing power and the availability of large datasets enabled researchers to evolve learning algorithms, setting the stage for modern AI.
- 2000s: New robots, like Honda’s ASIMO and MIT’s Kismet, were developed.
- 2010s: Companies like Google, Facebook, and Netflix began using AI for personalization and other applications.
- Recent years: The rise of deep learning and machine learning has revolutionized AI applications, including image and speech recognition, natural language processing, and autonomous systems.


- Expert systems occupy a type of microworld—for example, a model of a ship’s hold and its cargo—that is self-contained and relatively uncomplicated. For such AI systems, every effort is made to incorporate all the information about some narrow field that an expert (or group of experts) would know, so that a good expert system can often outperform any single human expert. There are many commercial expert systems, including programs for medical diagnosis, chemical analysis, credit authorization, financial management, corporate planning, financial document routing, oil and mineral prospecting, genetic engineering, automobile design and manufacture, camera lens design, computer installation design, airline scheduling, cargo placement, and automatic help services for home computer owners.
- Knowledge and inference.
- The basic components of an expert system are a knowledge base, or KB, and an inference engine. The information to be stored in the KB is obtained by interviewing people who are expert in the area in question.
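- A toy sketch of these two components in Python may make the division of labor clearer. The rules and facts below are invented for illustration; a real expert system would encode hundreds or thousands of rules gathered from domain experts.

```python
# Minimal sketch of an expert system: a knowledge base of if-then production
# rules plus a forward-chaining inference engine. The rules and facts are
# invented for illustration only.

rules = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "order_lumbar_puncture"),
    ({"cough", "fever"}, "suspect_infection"),
]

def infer(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied by known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = infer({"fever", "stiff_neck"}, rules)
print(result)  # now also contains 'suspect_meningitis' and 'order_lumbar_puncture'
```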
- In 1965 the AI researcher Edward Feigenbaum and the geneticist Joshua Lederberg, both of Stanford University, began work on Heuristic DENDRAL (later shortened to DENDRAL), a chemical-analysis expert system. The substance to be analyzed might, for example, be a complicated compound of carbon, hydrogen, and nitrogen. Starting from spectrographic data obtained from the substance, DENDRAL would hypothesize the substance’s molecular structure. DENDRAL’s performance rivaled that of chemists expert at this task, and the program was used in industry and in academia.
- MYCIN
- Work on MYCIN, an expert system for treating blood infections, began at Stanford University in 1972. MYCIN would attempt to diagnose patients based on reported symptoms and medical test results. The program could request further information concerning the patient, as well as suggest additional laboratory tests, to arrive at a probable diagnosis, after which it would recommend a course of treatment. If requested, MYCIN would explain the reasoning that led to its diagnosis and recommendation. Using about 500 production rules, MYCIN operated at roughly the same level of competence as human specialists in blood infections and rather better than general practitioners.
- Nevertheless, expert systems have no common sense or understanding of the limits of their expertise. For instance, if MYCIN were told that a patient who had received a gunshot wound was bleeding to death, the program would attempt to diagnose a bacterial cause for the patient’s symptoms. Expert systems can also act on absurd clerical errors, such as prescribing an obviously incorrect dosage of a drug for a patient whose weight and age data were accidentally transposed.
- The CYC project. CYC is a large experiment in symbolic AI. The project began in 1984 under the auspices of the Microelectronics and Computer Technology Corporation, a consortium of computer, semiconductor, and electronics manufacturers. In 1995 Douglas Lenat, the CYC project director, spun off the project as Cycorp, Inc., based in Austin, Texas. The most ambitious goal of Cycorp was to build a KB containing a significant percentage of the commonsense knowledge of a human being. Millions of commonsense assertions, or rules, were coded into CYC. The expectation was that this “critical mass” would allow the system itself to extract further rules directly from ordinary prose and eventually serve as the foundation for future generations of expert systems.
- Connectionism, or neuronlike computing, developed out of attempts to understand how the human brain works at the neural level and, in particular, how people learn and remember. In 1943 the neurophysiologist Warren McCulloch of the University of Illinois and the mathematician Walter Pitts of the University of Chicago published an influential treatise on neural nets and automatons, according to which each neuron in the brain is a simple digital processor and the brain as a whole is a form of computing machine. As McCulloch put it subsequently, “What we thought we were doing (and I think we succeeded fairly well) was treating the brain as a Turing machine.”
- The simple neural network depicted in the figure illustrates the central ideas of connectionism. Four of the network’s five neurons are for input, and the fifth, to which each of the others is connected, is for output. Each of the neurons is either firing (1) or not firing (0). Each connection leading to N, the output neuron, has a “weight.” What is called the total weighted input into N is calculated by adding up the weights of all the connections leading to N from neurons that are firing. For example, suppose that only two of the input neurons, X and Y, are firing. Since the weight of the connection from X to N is 1.5 and the weight of the connection from Y to N is 2, it follows that the total weighted input to N is 3.5. As shown in the figure, N has a firing threshold of 4. That is to say, if N’s total weighted input equals or exceeds 4, then N fires; otherwise, N does not fire. So, for example, N does not fire if the only input neurons to fire are X and Y, but N does fire if X, Y, and Z all fire. Training the network involves two steps. First, the external agent inputs a pattern and observes the behavior of N. Second, the agent adjusts the connection weights in accordance with the rules:
- If the actual output is 0 and the desired output is 1, increase by a small fixed amount the weight of each connection leading to N from neurons that are firing (thus making it more likely that N will fire the next time the network is given the same pattern);
- If the actual output is 1 and the desired output is 0, decrease by that same small amount the weight of each connection leading to the output neuron from neurons that are firing (thus making it less likely that the output neuron will fire the next time the network is given that pattern as input).
- The external agent (a computer program) goes through this two-step procedure with each pattern in a training sample, and the whole pass is then repeated several times. During these many repetitions, a pattern of connection weights is forged that enables the network to respond correctly to each pattern. The striking thing is that the learning process is entirely mechanical and requires no human intervention or adjustment. The connection weights are increased or decreased automatically by a constant amount, and the same learning procedure applies to different tasks.
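- The arithmetic above is simple enough to restate in code. The sketch below uses the weights quoted for X and Y (1.5 and 2.0) and the firing threshold of 4 from the description; the weights for the other two input neurons are illustrative assumptions.

```python
# Sketch of the threshold neuron N described above. The weights for X and Y
# (1.5 and 2.0) and the threshold (4) come from the text; the weights assumed
# for W and Z are illustrative.

weights = {"W": 1.0, "X": 1.5, "Y": 2.0, "Z": 0.5}
THRESHOLD = 4.0
STEP = 0.1  # the "small fixed amount" used when adjusting weights

def output(firing):
    """N fires (1) if the total weighted input from firing neurons meets the threshold."""
    total = sum(weights[name] for name in firing)
    return 1 if total >= THRESHOLD else 0

def train_step(firing, desired):
    """Apply the two-step rule: adjust only the connections from neurons that fired."""
    actual = output(firing)
    if actual == 0 and desired == 1:
        for name in firing:
            weights[name] += STEP
    elif actual == 1 and desired == 0:
        for name in firing:
            weights[name] -= STEP

print(output({"X", "Y"}))          # 3.5 < 4, so N does not fire -> 0
print(output({"X", "Y", "Z"}))     # 4.0 >= 4, so N fires -> 1
train_step({"X", "Y"}, desired=1)  # nudges the X and Y weights upward by 0.1
```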
- Conjugating verbs
- In one famous connectionist experiment conducted at the University of California at San Diego (published in 1986), David Rumelhart and James McClelland trained a network of 920 artificial neurons, arranged in two layers of 460 neurons, to form the past tenses of English verbs. Root forms of verbs, such as come, look, and sleep, were presented to one layer of neurons, the input layer. A supervisory computer program observed the difference between the actual response at the layer of output neurons and the desired response (came, say) and then mechanically adjusted the connections throughout the network by the procedure described above to give the network a slight push in the direction of the correct response. About 400 different verbs were presented one by one to the network, and the connections were adjusted after each presentation. This whole procedure was repeated about 200 times using the same verbs, after which the network could correctly form the past tense of many unfamiliar verbs as well as of the original verbs. For example, when presented for the first time with guard, the network responded guarded; with weep, wept; with cling, clung; and with drip, dripped (complete with double p). This is a striking example of learning involving generalization. (Sometimes, though, the peculiarities of English were too much for the network, and it formed squawked from squat, shipped from shape, and membled from mail.)
- Another name for connectionism is parallel distributed processing, which emphasizes two important features. First, a large number of relatively simple processors—the neurons—operate in parallel. Second, neural networks store information in a distributed fashion, with each individual connection participating in the storage of many different items of information. The know-how that enabled the past-tense network to form wept from weep, for example, was not stored in one specific location in the network but was spread throughout the entire pattern of connection weights that was forged during training. The human brain also appears to store information in a distributed fashion, and connectionist research is contributing to attempts to understand how it does so.
- Other work on neuronlike computing includes the following:
- Visual perception. Networks can recognize faces and other objects from visual data. For example, neural networks can distinguish whether an animal in a picture is a cat or a dog. Such networks can also distinguish a group of people as separate individuals.
- Language processing. Neural networks can convert handwritten and typewritten material to electronic text. Neural networks also convert speech to printed text and printed text to speech.
- Financial analysis. Neural networks are increasingly being used for loan risk assessment, real estate valuation, bankruptcy prediction, share price prediction, and other business applications.
- Medicine. Medical applications include detecting lung nodules and heart arrhythmias, and predicting adverse drug reactions.
- Telecommunications. Telecommunications applications of neural networks include control of telephone switching networks and echo cancellation on satellite links.
- Nouvelle AI: new foundations
- The approach known as nouvelle AI was pioneered at the MIT AI Laboratory by the Australian Rodney Brooks during the latter half of the 1980s. Nouvelle AI distances itself from strong AI, with its emphasis on human-level performance, in favor of the relatively modest aim of insect-level performance. At a very fundamental level, nouvelle AI rejects symbolic AI’s reliance upon constructing internal models of reality, such as those described in the section Microworld programs. Practitioners of nouvelle AI assert that true intelligence involves the ability to function in a real-world environment.
- A central idea of nouvelle AI is that intelligence, as expressed by complex behavior, “emerges” from the interaction of a few simple behaviors. For example, a robot whose simple behaviors include collision avoidance and motion toward a moving object will appear to stalk the object, pausing whenever it gets too close. Designed by Rodney Brooks and affectionately named for artificial intelligence pioneer Herbert Simon, Herbert the robot employed 30 infrared sensors, a laser scanner, and a magnetic compass to locate soft-drink cans and keep itself oriented as it wandered throughout the MIT Artificial Intelligence Laboratory. After collecting an empty can with its robotic arm, Herbert would return it to a recycling bin.
- Nouvelle systems do not contain a complicated symbolic model of their environment. Instead, information is left “out in the world” until the system needs it. A nouvelle system refers continuously to its sensors rather than to an internal model of the world: it “reads off” the external world whatever information it needs at precisely the time it needs it. (As Brooks insisted, the world is its own best model, always exactly up-to-date and complete in every detail.)
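- As a rough illustration of how a “stalking” behavior can emerge from two simpler behaviors, here is a toy sketch in Python in the spirit of the situated approach. The behavior names, priorities, and numbers are invented for illustration; this is not Brooks’s actual subsumption-architecture code.

```python
# Toy sketch of "emergent" stalking behavior from two simple behaviors, in the
# spirit of nouvelle AI: no internal world model, just the current sensor
# reading at each step. All names and numbers are illustrative.

TOO_CLOSE = 1.0  # collision avoidance takes over below this distance

def avoid_collision(distance_to_object):
    """Higher-priority behavior: pause when too close to the object."""
    if distance_to_object < TOO_CLOSE:
        return 0.0   # stop (a small negative value would back away)
    return None      # no opinion; defer to the lower-priority behavior

def move_toward(distance_to_object):
    """Lower-priority behavior: always advance toward the moving object."""
    return 0.5

def step(distance_to_object):
    """Subsumption-style arbitration: the first behavior with an opinion wins."""
    for behavior in (avoid_collision, move_toward):
        command = behavior(distance_to_object)
        if command is not None:
            return command

# The robot advances while the object is far away and pauses when it gets
# close, which an observer would naturally describe as "stalking".
for distance in (5.0, 3.0, 1.5, 0.8, 1.2):
    print(distance, "->", step(distance))
```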