Artificial intelligence : a new synthesis / (Record no. 3539)

MARC details
000 -LEADER
fixed length control field 09359nam a2200145 4500
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
International Standard Book Number 9788181471901 (pb)
040 ## - CATALOGING SOURCE
Transcribing agency CUS
082 ## - DEWEY DECIMAL CLASSIFICATION NUMBER
Classification number 006.3
Item number NIL/A
100 ## - MAIN ENTRY--PERSONAL NAME
Personal name Nilsson, Nils J.
245 ## - TITLE STATEMENT
Title Artificial intelligence : a new synthesis /
Statement of responsibility, etc. Nils J. Nilsson
260 ## - PUBLICATION, DISTRIBUTION, ETC. (IMPRINT)
Place of publication, distribution, etc. Burlington :
Name of publisher, distributor, etc. Morgan Kaufmann,
Date of publication, distribution, etc. 1998.
300 ## - PHYSICAL DESCRIPTION
Extent xxi, 513 p.
Other physical details ill.
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note <br/>1 Introduction<br/>1.1 What Is AI?<br/>1.2 Approaches to Artificial Intelligence<br/>1.3 Brief History of AI<br/>1.4 Plan of the Book<br/>1.5 Additional Readings and Discussion<br/>Exercises<br/>I Reactive Machines<br/>2 Stimulus-Response Agents<br/>2.1 Perception and Action<br/>2.1.1 Perception<br/>2.1.2 Action<br/>2.1.3 Boolean Algebra<br/>2.1.4 Classes and Forms of Boolean Functions<br/>2.2 Representing and Implementing Action Functions<br/>2.2.1 Production Systems<br/>2.2.2 Networks<br/>2.2.3 The Subsumption Architecture<br/>2.3 Additional Readings and Discussion<br/>Exercises<br/>3 Neural Networks<br/>3.1 Introduction<br/>3.2 Training Single TLUs<br/>3.2.1 TLU Geometry<br/>3.2.2 Augmented Vectors<br/>3.2.3 Gradient Descent Methods<br/>3.2.4 The Widrow-Hoff Procedure<br/>3.2.5 The Generalized Delta Procedure<br/>3.2.6 The Error-Correction Procedure<br/>3.3 Neural Networks<br/>3.3.1 Motivation<br/>3.3.2 Notation<br/>3.3.3 The Backpropagation Method<br/>3.3.4 Computing Weight Changes in the Final Layer<br/>3.3.5 Computing Changes to the Weights in Intermediate Layers<br/>3.4 Generalization, Accuracy, and Overfitting<br/>3.5 Additional Readings and Discussion<br/>Exercises<br/>4 Machine Evolution<br/>4.1 Evolutionary Computation<br/>4.2 Genetic Programming<br/>4.2.1 Program Representation in GP<br/>4.2.2 The GP Process<br/>4.2.3 Evolving a Wall-Following Robot<br/>4.3 Additional Readings and Discussion<br/>Exercises<br/>5 State Machines<br/>5.1 Representing the Environment by Feature Vectors<br/>5.2 Elman Networks<br/>5.3 Iconic Representations<br/>5.4 Blackboard Systems<br/>5.5 Additional Readings and Discussion<br/>Exercises<br/>6 Robot Vision<br/>6.1 Introduction<br/>6.2 Steering an Automobile<br/>6.3 Two Stages of Robot Vision<br/>6.4 Image Processing<br/>6.4.1 Averaging<br/>6.4.2 Edge Enhancement<br/>6.4.3 Combining Edge Enhancement with Averaging<br/>6.4.4 Region Finding<br/>6.4.5 Using Image Attributes Other Than Intensity<br/>6.5 Scene Analysis<br/>6.5.1 Interpreting Lines and Curves in the Image<br/>6.5.2 Model-Based Vision<br/>6.6 Stereo Vision and Depth Information<br/>6.7 Additional Readings and Discussion<br/>Exercises<br/>II Search in State Spaces<br/>7 Agents That Plan<br/>7.1 Memory Versus Computation<br/>7.2 State-Space Graphs<br/>7.3 Searching Explicit State Spaces<br/>7.4 Feature-Based State Spaces<br/>7.5 Graph Notation<br/>7.6 Additional Readings and Discussion<br/>Exercises<br/>8 Uninformed Search<br/>8.1 Formulating the State Space<br/>8.2 Components of Implicit State-Space Graphs<br/>8.3 Breadth-First Search<br/>8.4 Depth-First or Backtracking Search<br/>8.5 Iterative Deepening<br/>8.6 Additional Readings and Discussion<br/>Exercises<br/>9 Heuristic Search<br/>9.1 Using Evaluation Functions<br/>9.2 A General Graph-Searching Algorithm<br/>9.2.1 Algorithm A*<br/>9.2.2 Admissibility of A*<br/>9.2.3 The Consistency (or Monotone) Condition<br/>9.2.4 Iterative-Deepening A*<br/>9.2.5 Recursive Best-First Search<br/>9.3 Heuristic Functions and Search Efficiency<br/>9.4 Additional Readings and Discussion<br/>Exercises<br/>10 Planning, Acting, and Learning<br/>10.1 The Sense/Plan/Act Cycle<br/>10.2 Approximate Search<br/>10.2.1 Island-Driven Search<br/>10.2.2 Hierarchical Search<br/>10.2.3 Limited-Horizon Search<br/>10.2.4 Cycles<br/>10.2.5 Building Reactive Procedures<br/>10.3 Learning Heuristic Functions<br/>10.3.1 Explicit Graphs<br/>10.3.2 Implicit Graphs<br/>10.4 Rewards Instead of Goals<br/>10.5 Additional Readings and Discussion<br/>Exercises<br/>11 Alternative Search Formulations and Applications<br/>11.1 Assignment Problems<br/>11.2 Constructive Methods<br/>11.3 Heuristic Repair<br/>11.4 Function Optimization<br/>Exercises<br/>12 Adversarial Search<br/>12.1 Two-Agent Games<br/>12.2 The Minimax Procedure<br/>12.3 The Alpha-Beta Procedure<br/>12.4 The Search Efficiency of the Alpha-Beta Procedure<br/>12.5 Other Important Matters<br/>12.6 Games of Chance<br/>12.7 Learning Evaluation Functions<br/>12.8 Additional Readings and Discussion<br/>Exercises<br/>III Knowledge Representation and Reasoning<br/>13 The Propositional Calculus<br/>13.1 Using Constraints on Feature Values<br/>13.2 The Language<br/>13.3 Rules of Inference<br/>13.4 Definition of Proof<br/>13.5 Semantics<br/>13.5.1 Interpretations<br/>13.5.2 The Propositional Truth Table<br/>13.5.3 Satisfiability and Models<br/>13.5.4 Validity<br/>13.5.5 Equivalence<br/>13.5.6 Entailment<br/>13.6 Soundness and Completeness<br/>13.7 The PSAT Problem<br/>13.8 Other Important Topics<br/>13.8.1 Language Distinctions<br/>13.8.2 Metatheorems<br/>13.8.3 Associative Laws<br/>13.8.4 Distributive Laws<br/>Exercises<br/>14 Resolution in the Propositional Calculus<br/>14.1 A New Rule of Inference: Resolution<br/>14.1.1 Clauses as wffs<br/>14.1.2 Resolution on Clauses<br/>14.1.3 Soundness of Resolution<br/>14.2 Converting Arbitrary wffs to Conjunctions of Clauses<br/>14.3 Resolution Refutations<br/>14.4 Resolution Refutation Search Strategies<br/>14.4.1 Ordering Strategies<br/>14.4.2 Refinement Strategies<br/>14.5 Horn Clauses<br/>Exercises<br/>15 The Predicate Calculus<br/>15.1 Motivation<br/>15.2 The Language and Its Syntax<br/>15.3 Semantics<br/>15.3.1 Worlds<br/>15.3.2 Interpretations<br/>15.3.3 Models and Related Notions<br/>15.3.4 Knowledge<br/>15.4 Quantification<br/>15.5 Semantics of Quantifiers<br/>15.5.1 Universal Quantifiers<br/>15.5.2 Existential Quantifiers<br/>15.5.3 Useful Equivalences<br/>15.5.4 Rules of Inference<br/>15.6 Predicate Calculus as a Language for Representing Knowledge<br/>15.6.1 Conceptualizations<br/>15.6.2 Examples<br/>15.7 Additional Readings and Discussion<br/>Exercises<br/>16 Resolution in the Predicate Calculus<br/>16.1 Unification<br/>16.2 Predicate-Calculus Resolution<br/>16.3 Completeness and Soundness<br/>16.4 Converting Arbitrary wffs to Clause Form<br/>16.5 Using Resolution to Prove Theorems<br/>16.6 Answer Extraction<br/>16.7 The Equality Predicate<br/>16.8 Additional Readings and Discussion<br/>Exercises<br/>17 Knowledge-Based Systems<br/>17.1 Confronting the Real World<br/>17.2 Reasoning Using Horn Clauses<br/>17.3 Maintenance in Dynamic Knowledge Bases<br/>17.4 Rule-Based Expert Systems<br/>17.5 Rule Learning<br/>17.5.1 Learning Propositional Calculus Rules<br/>17.5.2 Learning First-Order Logic Rules<br/>17.5.3 Explanation-Based Generalization<br/>17.6 Additional Readings and Discussion<br/>Exercises<br/>18 Representing Commonsense Knowledge<br/>18.1 The Commonsense World<br/>18.1.1 What Is Commonsense Knowledge?<br/>18.1.2 Difficulties in Representing Commonsense Knowledge<br/>18.1.3 The Importance of Commonsense Knowledge<br/>18.1.4 Research Areas<br/>18.2 Time<br/>18.3 Knowledge Representation by Networks<br/>18.3.1 Taxonomic Knowledge<br/>18.3.2 Semantic Networks<br/>18.3.3 Nonmonotonic Reasoning in Semantic Networks<br/>18.3.4 Frames<br/>18.4 Additional Readings and Discussion<br/>Exercises<br/>19 Reasoning with Uncertain Information<br/>19.1 Review of Probability Theory<br/>19.1.1 Fundamental Ideas<br/>19.1.2 Conditional Probabilities<br/>19.2 Probabilistic Inference<br/>19.2.1 A General Method<br/>19.2.2 Conditional Independence<br/>19.3 Bayes Networks<br/>19.4 Patterns of Inference in Bayes Networks<br/>19.5 Uncertain Evidence<br/>19.6 D-Separation<br/>19.7 Probabilistic Inference in Polytrees<br/>19.7.1 Evidence Above<br/>19.7.2 Evidence Below<br/>19.7.3 Evidence Above and Below<br/>19.7.4 A Numerical Example<br/>19.8 Additional Readings and Discussion<br/>Exercises<br/>20 Learning and Acting with Bayes Nets<br/>20.1 Learning Bayes Nets<br/>20.1.1 Known Network Structure<br/>20.1.2 Learning Network Structure<br/>20.2 Probabilistic Inference and Action<br/>20.2.1 The General Setting<br/>20.2.2 An Extended Example<br/>20.2.3 Generalizing the Example<br/>20.3 Additional Readings and Discussion<br/>Exercises<br/>IV Planning Methods Based on Logic<br/>21 The Situation Calculus<br/>21.1 Reasoning about States and Actions<br/>21.2 Some Difficulties<br/>21.2.1 Frame Axioms<br/>21.2.2 Qualifications<br/>21.2.3 Ramifications<br/>21.3 Generating Plans<br/>21.4 Additional Readings and Discussion<br/>Exercises<br/>22 Planning<br/>22.1 STRIPS Planning Systems<br/>22.1.1 Describing States and Goals<br/>22.1.2 Forward Search Methods<br/>22.1.3 Recursive STRIPS<br/>22.1.4 Plans with Run-Time Conditionals<br/>22.1.5 The Sussman Anomaly<br/>22.1.6 Backward Search Methods<br/>22.2 Plan Spaces and Partial-Order Planning<br/>22.3 Hierarchical Planning<br/>22.3.1 ABSTRIPS<br/>22.3.2 Combining Hierarchical and Partial-Order Planning<br/>22.4 Learning Plans<br/>22.5 Additional Readings and Discussion<br/>Exercises<br/>V Communication and Integration<br/>23 Multiple Agents<br/>23.1 Interacting Agents<br/>23.2 Models of Other Agents<br/>23.2.1 Varieties of Models<br/>23.2.2 Simulation Strategies<br/>23.2.3 Simulated Databases<br/>23.2.4 The Intentional Stance<br/>23.3 A Modal Logic of Knowledge<br/>23.3.1 Modal Operators<br/>23.3.2 Knowledge Axioms<br/>23.3.3 Reasoning about Other Agents' Knowledge<br/>23.3.4 Predicting Actions of Other Agents<br/>23.4 Additional Readings and Discussion<br/>Exercises<br/>24 Communication among Agents<br/>24.1 Speech Acts<br/>24.1.1 Planning Speech Acts<br/>24.1.2 Implementing Speech Acts<br/>24.2 Understanding Language Strings<br/>24.2.1 Phrase-Structure Grammars<br/>24.2.2 Semantic Analysis<br/>24.2.3 Expanding the Grammar<br/>24.3 Efficient Communication<br/>24.3.1 Use of Context<br/>24.3.2 Use of Knowledge to Resolve Ambiguities<br/>24.4 Natural Language Processing<br/>24.5 Additional Readings and Discussion<br/>Exercises<br/>25 Agent Architectures<br/>25.1 Three-Level Architectures<br/>25.2 Goal Arbitration<br/>25.3 The Triple-Tower Architecture<br/>25.4 Bootstrapping<br/>25.5 Additional Readings and Discussion<br/>Exercises
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Koha item type GN Books
Holdings
Home library: Central Library, Sikkim University
Current library: Central Library, Sikkim University
Shelving location: General Book Section
Date acquired: 24/06/2016
Full call number: 006.3 NIL/A
Accession number: P18904
Date last seen: 24/06/2016
Koha item type: General Books
Withdrawn status / Lost status / Damaged status / Not for loan: (blank)