Artificial Intelligence Topic Headings

November 23, 2018, in Computer

Topic headings for anyone who wants to get into Artificial Intelligence (AI). The outline follows the chapter structure of Russell and Norvig's Artificial Intelligence: A Modern Approach (3rd edition):

Part I: Artificial Intelligence

Chapter 2: Intelligent Agents
2.1. Agents and Environments
2.2. Good Behavior: The Concept of Rationality
      2.2.1. Rationality
      2.2.2. Omniscience, learning, and autonomy
2.3. The Nature of Environments
      2.3.1. Specifying the task environment
      2.3.2. Properties of task environments
2.4. The Structure of Agents
      2.4.1. Agent programs
      2.4.2. Simple reflex agents
      2.4.3. Model-based reflex agents
      2.4.4. Goal-based agents
      2.4.5. Utility-based agents
      2.4.6. Learning agents
      2.4.7. How the components of agent programs work

Part II: Problem-solving

Chapter 3: Solving Problems by Searching
 
3.1. Problem-Solving Agents
      3.1.1. Well-defined problems and solutions
      3.1.2. Formulating problems
3.2. Example Problems
      3.2.1. Toy problems
      3.2.2. Real-world problems
3.3. Searching for Solutions
      3.3.1. Infrastructure for search algorithms
      3.3.2. Measuring problem-solving performance
3.4. Uninformed Search Strategies
      3.4.1. Breadth-first search
      3.4.2. Uniform-cost search
      3.4.3. Depth-first search
      3.4.4. Depth-limited search
      3.4.5. Iterative deepening depth-first search
      3.4.6. Bidirectional search
      3.4.7. Comparing uninformed search strategies
3.5. Informed (Heuristic) Search Strategies
      3.5.1. Greedy best-first search
      3.5.2. A* search: Minimizing the total estimated solution cost
             Conditions for optimality: Admissibility and consistency
             Optimality of A*
      3.5.3. Memory-bounded heuristic search
      3.5.4. Learning to search better
3.6. Heuristic Functions
      3.6.1. The effect of heuristic accuracy on performance
      3.6.2. Generating admissible heuristics from relaxed problems
      3.6.3. Generating admissible heuristics from subproblems: Pattern databases
      3.6.4. Learning heuristics from experience
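
To make the search chapter concrete, here is a minimal A* sketch (cf. §3.5.2) in Python. Everything about the setup is an illustrative assumption: a 4-connected grid, unit step costs, and a small wall set; on such a grid the Manhattan-distance heuristic is admissible and consistent, so A* returns an optimal path.

```python
import heapq

def astar(start, goal, walls, width, height):
    """Return a shortest path from start to goal on a grid, or None."""
    def h(p):  # Manhattan distance: admissible and consistent here
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height) or nxt in walls:
                continue
            g2 = g + 1                           # unit step cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

if __name__ == "__main__":
    # Made-up 4x4 grid with two walls.
    print(astar((0, 0), (3, 3), walls={(1, 1), (2, 1)}, width=4, height=4))
```
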
Chapter 4: Beyond Classical Search
 
4.1. Local Search Algorithms and Optimization Problems
      4.1.1. Hill-climbing search
      4.1.2. Simulated annealing
      4.1.3. Local beam search
      4.1.4. Genetic algorithms
4.2. Local Search in Continuous Spaces
4.3. Searching with Nondeterministic Actions
      4.3.1. The erratic vacuum world
      4.3.2. AND–OR search trees
      4.3.3. Try, try again
4.4. Searching with Partial Observations
      4.4.1. Searching with no observation
      4.4.2. Searching with observations
      4.4.3. Solving partially observable problems
      4.4.4. An agent for partially observable environments
4.5. Online Search Agents and Unknown Environments
      4.5.1. Online search problems
      4.5.2. Online search agents
      4.5.3. Online local search
      4.5.4. Learning in online search
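
A minimal hill-climbing sketch for the 8-queens problem of §4.1.1, using the usual complete-state formulation (one queen per column) plus random restarts when the search reaches a local minimum. The restart policy and board size are illustrative choices, not something the outline prescribes.

```python
import random

def conflicts(board):
    """Count pairs of queens that attack each other; board[c] = row in column c."""
    n, total = len(board), 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            if board[c1] == board[c2] or abs(board[c1] - board[c2]) == c2 - c1:
                total += 1
    return total

def hill_climb(n=8):
    board = [random.randrange(n) for _ in range(n)]
    while True:
        best, best_cost = None, conflicts(board)
        if best_cost == 0:
            return board
        for col in range(n):                 # steepest ascent over all moves
            for row in range(n):
                if row == board[col]:
                    continue
                cand = board[:]
                cand[col] = row
                cost = conflicts(cand)
                if cost < best_cost:
                    best, best_cost = cand, cost
        if best is None:                     # local minimum: random restart
            board = [random.randrange(n) for _ in range(n)]
        else:
            board = best

if __name__ == "__main__":
    print(hill_climb())
```
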
Chapter 5: Adversarial Search
 
5.1. Games
5.2. Optimal Decisions in Games
      5.2.1. The minimax algorithm
      5.2.2. Optimal decisions in multiplayer games
5.3. Alpha–Beta Pruning
      5.3.1. Move ordering
5.4. Imperfect Real-Time Decisions
      5.4.1. Evaluation functions
      5.4.2. Cutting off search
      5.4.3. Forward pruning
      5.4.4. Search versus lookup
5.5. Stochastic Games
      5.5.1. Evaluation functions for games of chance
5.6. Partially Observable Games
      5.6.1. Kriegspiel: Partially observable chess
      5.6.2. Card games
5.7. State-of-the-Art Game Programs
5.8. Alternative Approaches
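
Here is a minimal sketch of minimax with alpha-beta pruning (§§5.2–5.3). The game tree is encoded as nested Python lists whose leaves are utilities for the MAX player; the three-ply example tree below is a classic illustration whose minimax value is 3.

```python
import math

def alphabeta(node, maximizing=True, alpha=-math.inf, beta=math.inf):
    """Minimax value of a nested-list game tree, pruning hopeless branches."""
    if not isinstance(node, list):          # leaf: a utility for MAX
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff: prune remaining siblings
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                   # alpha cutoff
            break
    return value

if __name__ == "__main__":
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]   # classic 3-ply example
    print(alphabeta(tree))                       # 3
```
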
Chapter 6: Constraint Satisfaction Problems
 
6.1. Defining Constraint Satisfaction Problems
      6.1.1. Example problem: Map coloring
      6.1.2. Example problem: Job-shop scheduling
      6.1.3. Variations on the CSP formalism
6.2. Constraint Propagation: Inference in CSPs
      6.2.1. Node consistency
      6.2.2. Arc consistency
      6.2.3. Path consistency
      6.2.4. K-consistency
      6.2.5. Global constraints
      6.2.6. Sudoku example
6.3. Backtracking Search for CSPs
      6.3.1. Variable and value ordering
      6.3.2. Interleaving search and inference
      6.3.3. Intelligent backtracking: Looking backward
6.4. Local Search for CSPs
6.5. The Structure of Problems
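
A minimal backtracking search for the map-coloring CSP of §6.1.1, on the Australia regions used in that example. It deliberately omits the variable-ordering heuristics and inference of §6.3; this is plain depth-first assignment with a consistency check.

```python
NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLORS = ["red", "green", "blue"]

def backtrack(assignment):
    """Assign a color to each region so neighbors differ, or return None."""
    if len(assignment) == len(NEIGHBORS):
        return assignment
    var = next(v for v in NEIGHBORS if v not in assignment)
    for color in COLORS:
        if all(assignment.get(n) != color for n in NEIGHBORS[var]):
            result = backtrack({**assignment, var: color})
            if result:
                return result
    return None     # no consistent color here: backtrack

if __name__ == "__main__":
    print(backtrack({}))
```
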
Part III: Knowledge, reasoning, and planning
Chapter 7: Logical Agents
 
7.1. Knowledge-Based Agents
7.2. The Wumpus World
7.3. Logic
7.4. Propositional Logic: A Very Simple Logic
      7.4.1. Syntax
      7.4.2. Semantics
      7.4.3. A simple knowledge base
      7.4.4. A simple inference procedure
7.5. Propositional Theorem Proving
      7.5.1. Inference and proofs
      7.5.2. Proof by resolution
             Conjunctive normal form
             A resolution algorithm
             Completeness of resolution
      7.5.3. Horn clauses and definite clauses
      7.5.4. Forward and backward chaining
7.6. Effective Propositional Model Checking
      7.6.1. A complete backtracking algorithm
      7.6.2. Local search algorithms
      7.6.3. The landscape of random SAT problems
7.7. Agents Based on Propositional Logic
      7.7.1. The current state of the world
      7.7.2. A hybrid agent
      7.7.3. Logical state estimation
      7.7.4. Making plans by propositional inference
7.8. Summary
Bibliographical and Historical Notes
Exercises
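
To illustrate §7.5.4, a minimal forward-chaining sketch for propositional definite clauses: keep firing any rule whose premises are all known until nothing new can be derived. The tiny rule base is made up for the example.

```python
def forward_chain(rules, facts, query):
    """Rules are (premises, conclusion) pairs; returns True if query is derivable."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)       # fire the rule
                changed = True
    return query in facts

if __name__ == "__main__":
    rules = [(["P", "Q"], "R"), (["R"], "S")]   # made-up knowledge base
    print(forward_chain(rules, facts=["P", "Q"], query="S"))  # True
```
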
Chapter 8: First-Order Logic
 
8.1. Representation Revisited
      8.1.1. The language of thought
      8.1.2. Combining the best of formal and natural languages
8.2. Syntax and Semantics of First-Order Logic
      8.2.1. Models for first-order logic
      8.2.2. Symbols and interpretations
      8.2.3. Terms
      8.2.4. Atomic sentences
      8.2.5. Complex sentences
      8.2.6. Quantifiers
             Universal quantification (∀)
             Existential quantification (∃)
             Nested quantifiers
             Connections between ∀ and ∃
      8.2.7. Equality
      8.2.8. An alternative semantics?
8.3. Using First-Order Logic
      8.3.1. Assertions and queries in first-order logic
      8.3.2. The kinship domain
      8.3.3. Numbers, sets, and lists
      8.3.4. The wumpus world
8.4. Knowledge Engineering in First-Order Logic
      8.4.1. The knowledge-engineering process
      8.4.2. The electronic circuits domain
             Identify the task
             Assemble the relevant knowledge
             Decide on a vocabulary
             Encode general knowledge of the domain
             Encode the specific problem instance
             Pose queries to the inference procedure
             Debug the knowledge base
8.5. Summary
Bibliographical and Historical Notes
Exercises
Chapter 9: Inference in First-Order Logic
 
9.1. Propositional vs. First-Order Inference
      9.1.1. Inference rules for quantifiers
      9.1.2. Reduction to propositional inference
9.2. Unification and Lifting
      9.2.1. A first-order inference rule
      9.2.2. Unification
      9.2.3. Storage and retrieval
9.3. Forward Chaining
      9.3.1. First-order definite clauses
      9.3.2. A simple forward-chaining algorithm
      9.3.3. Efficient forward chaining
             Matching rules against known facts
             Incremental forward chaining
             Irrelevant facts
9.4. Backward Chaining
      9.4.1. A backward-chaining algorithm
      9.4.2. Logic programming
      9.4.3. Efficient implementation of logic programs
      9.4.4. Redundant inference and infinite loops
      9.4.5. Database semantics of Prolog
      9.4.6. Constraint logic programming
9.5. Resolution
      9.5.1. Conjunctive normal form for first-order logic
      9.5.2. The resolution inference rule
      9.5.3. Example proofs
      9.5.4. Completeness of resolution
      9.5.5. Equality
      9.5.6. Resolution strategies
             Practical uses of resolution theorem provers
9.6. Summary
Bibliographical and Historical Notes
Exercises
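
A simplified unification sketch for §9.2.2. Terms are Python tuples, variables are tagged as ("var", name), and the occur check is omitted for brevity; the Knows(John, x) query mirrors the kind of example the chapter uses.

```python
def unify(x, y, subst):
    """Return a substitution (dict) making x and y equal, or None on failure."""
    if subst is None:
        return None
    if x == y:
        return subst
    if isinstance(x, tuple) and x and x[0] == "var":
        return unify_var(x, y, subst)
    if isinstance(y, tuple) and y and y[0] == "var":
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):            # unify compound terms element-wise
            subst = unify(xi, yi, subst)
        return subst
    return None

def unify_var(var, value, subst):
    if var in subst:                        # variable already bound: follow it
        return unify(subst[var], value, subst)
    return {**subst, var: value}

if __name__ == "__main__":
    # Unify Knows(John, x) with Knows(John, Jane): binds x to Jane.
    print(unify(("Knows", "John", ("var", "x")),
                ("Knows", "John", "Jane"), {}))
```
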
Chapter 10: Classical Planning
 
10.1. Definition of Classical Planning
      10.1.1. Example: Air cargo transport
      10.1.2. Example: The spare tire problem
      10.1.3. Example: The blocks world
      10.1.4. The complexity of classical planning
10.2. Algorithms for Planning as State-Space Search
      10.2.1. Forward (progression) state-space search
      10.2.2. Backward (regression) relevant-states search
      10.2.3. Heuristics for planning
10.3. Planning Graphs
      10.3.1. Planning graphs for heuristic estimation
      10.3.2. The Graphplan algorithm
      10.3.3. Termination of Graphplan
10.4. Other Classical Planning Approaches
      10.4.1. Classical planning as Boolean satisfiability
      10.4.2. Planning as first-order logical deduction: Situation calculus
      10.4.3. Planning as constraint satisfaction
      10.4.4. Planning as refinement of partially ordered plans
10.5. Analysis of Planning Approaches
10.6. Summary
Bibliographical and Historical Notes
Exercises
Chapter 11: Planning and Acting in the Real World
 
11.1. Time, Schedules, and Resources
      11.1.1. Representing temporal and resource constraints
      11.1.2. Solving scheduling problems
11.2. Hierarchical Planning
      11.2.1. High-level actions
      11.2.2. Searching for primitive solutions
      11.2.3. Searching for abstract solutions
11.3. Planning and Acting in Nondeterministic Domains
      11.3.1. Sensorless planning
      11.3.2. Contingent planning
      11.3.3. Online replanning
11.4. Multiagent Planning
      11.4.1. Planning with multiple simultaneous actions
      11.4.2. Planning with multiple agents: Cooperation and coordination
11.5. Summary
Bibliographical and Historical Notes
Exercises
Chapter 12: Knowledge Representation
 
12.1. Ontological Engineering
12.2. Categories and Objects
      12.2.1. Physical composition
      12.2.2. Measurements
      12.2.3. Objects: Things and stuff
12.3. Events
      12.3.1. Processes
      12.3.2. Time intervals
      12.3.3. Fluents and objects
12.4. Mental Events and Mental Objects
12.5. Reasoning Systems for Categories
      12.5.1. Semantic networks
      12.5.2. Description logics
12.6. Reasoning with Default Information
      12.6.1. Circumscription and default logic
      12.6.2. Truth maintenance systems
12.7. The Internet Shopping World
      12.7.1. Following links
      12.7.2. Comparing offers
12.8. Summary
Bibliographical and Historical Notes
Exercises
Part IV: Uncertain knowledge and reasoning
Chapter 13: Quantifying Uncertainty
 
13.1. Acting under Uncertainty
      13.1.1. Summarizing uncertainty
      13.1.2. Uncertainty and rational decisions
13.2. Basic Probability Notation
      13.2.1. What probabilities are about
      13.2.2. The language of propositions in probability assertions
      13.2.3. Probability axioms and their reasonableness
13.3. Inference Using Full Joint Distributions
13.4. Independence
13.5. Bayes’ Rule and Its Use
      13.5.1. Applying Bayes’ rule: The simple case
      13.5.2. Using Bayes’ rule: Combining evidence
13.6. The Wumpus World Revisited
13.7. Summary
Bibliographical and Historical Notes
Exercises
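
Bayes' rule (§13.5) is one line of arithmetic: P(cause | effect) = P(effect | cause) P(cause) / P(effect). The sketch below applies it with the commonly quoted stiff-neck/meningitis numbers; treat them as illustrative values, not data.

```python
def bayes(p_e_given_c, p_c, p_e):
    """P(cause | effect) from the likelihood, prior, and evidence probability."""
    return p_e_given_c * p_c / p_e

if __name__ == "__main__":
    # Illustrative numbers: P(s|m)=0.7, P(m)=1/50000, P(s)=0.01.
    print(bayes(0.7, 1 / 50000, 0.01))   # ~0.0014
```
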
Chapter 14: Probabilistic Reasoning
 
14.1. Representing Knowledge in an Uncertain Domain
14.2. The Semantics of Bayesian Networks
      14.2.1. Representing the full joint distribution
             A method for constructing Bayesian networks
             Compactness and node ordering
      14.2.2. Conditional independence relations in Bayesian networks
14.3. Efficient Representation of Conditional Distributions
             Bayesian nets with continuous variables
14.4. Exact Inference in Bayesian Networks
      14.4.1. Inference by enumeration
      14.4.2. The variable elimination algorithm
             Operations on factors
             Variable ordering and variable relevance
      14.4.3. The complexity of exact inference
      14.4.4. Clustering algorithms
14.5. Approximate Inference in Bayesian Networks
      14.5.1. Direct sampling methods
             Rejection sampling in Bayesian networks
             Likelihood weighting
      14.5.2. Inference by Markov chain simulation
             Gibbs sampling in Bayesian networks
             Why Gibbs sampling works
14.6. Relational and First-Order Probability Models
      14.6.1. Possible worlds
      14.6.2. Relational probability models
      14.6.3. Open-universe probability models
14.7. Other Approaches to Uncertain Reasoning
      14.7.1. Rule-based methods for uncertain reasoning
      14.7.2. Representing ignorance: Dempster–Shafer theory
      14.7.3. Representing vagueness: Fuzzy sets and fuzzy logic
14.8. Summary
Bibliographical and Historical Notes
Exercises
Chapter 15: Probabilistic Reasoning over Time
 
15.1. Time and Uncertainty
      15.1.1. States and observations
      15.1.2. Transition and sensor models
15.2. Inference in Temporal Models
      15.2.1. Filtering and prediction
      15.2.2. Smoothing
      15.2.3. Finding the most likely sequence
15.3. Hidden Markov Models
      15.3.1. Simplified matrix algorithms
      15.3.2. Hidden Markov model example: Localization
15.4. Kalman Filters
      15.4.1. Updating Gaussian distributions
      15.4.2. A simple one-dimensional example
      15.4.3. The general case
      15.4.4. Applicability of Kalman filtering
15.5. Dynamic Bayesian Networks
      15.5.1. Constructing DBNs
      15.5.2. Exact inference in DBNs
      15.5.3. Approximate inference in DBNs
15.6. Keeping Track of Many Objects
15.7. Summary
Bibliographical and Historical Notes
Exercises
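
Forward filtering (§15.2.1) in the umbrella world that runs through Chapter 15, as a minimal sketch: each step pushes the belief through the transition model, then reweights by the sensor model and normalizes. With the example's probabilities, two umbrella observations take the rain belief to about 0.818 and then 0.883.

```python
def filter_umbrella(evidence, prior=0.5):
    """Return P(rain_t | u_1..t) after each observation in `evidence`.

    Model: P(rain_t|rain_{t-1})=0.7, P(rain_t|~rain_{t-1})=0.3,
           P(umbrella|rain)=0.9, P(umbrella|~rain)=0.2.
    """
    belief, out = prior, []
    for saw_umbrella in evidence:
        # Predict: push the belief through the transition model.
        predicted = 0.7 * belief + 0.3 * (1 - belief)
        # Update: weight by the sensor model, then normalize.
        p_u_rain = 0.9 if saw_umbrella else 0.1
        p_u_dry = 0.2 if saw_umbrella else 0.8
        num = p_u_rain * predicted
        belief = num / (num + p_u_dry * (1 - predicted))
        out.append(belief)
    return out

if __name__ == "__main__":
    print(filter_umbrella([True, True]))   # ~[0.818, 0.883]
```
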
Chapter 16: Making Simple Decisions
 
16.1. Combining Beliefs and Desires under Uncertainty
16.2. The Basis of Utility Theory
      16.2.1. Constraints on rational preferences
      16.2.2. Preferences lead to utility
16.3. Utility Functions
      16.3.1. Utility assessment and utility scales
      16.3.2. The utility of money
      16.3.3. Expected utility and post-decision disappointment
      16.3.4. Human judgment and irrationality
16.4. Multiattribute Utility Functions
      16.4.1. Dominance
      16.4.2. Preference structure and multiattribute utility
             Preferences without uncertainty
             Preferences with uncertainty
16.5. Decision Networks
      16.5.1. Representing a decision problem with a decision network
      16.5.2. Evaluating decision networks
16.6. The Value of Information
      16.6.1. A simple example
      16.6.2. A general formula for perfect information
      16.6.3. Properties of the value of information
      16.6.4. Implementation of an information-gathering agent
16.7. Decision-Theoretic Expert Systems
16.8. Summary
Bibliographical and Historical Notes
Exercises
Chapter 17: Making Complex Decisions
 
17.1. Sequential Decision Problems
      17.1.1. Utilities over time
      17.1.2. Optimal policies and the utilities of states
17.2. Value Iteration
      17.2.1. The Bellman equation for utilities
      17.2.2. The value iteration algorithm
      17.2.3. Convergence of value iteration
17.3. Policy Iteration
17.4. Partially Observable MDPs
      17.4.1. Definition of POMDPs
      17.4.2. Value iteration for POMDPs
      17.4.3. Online agents for POMDPs
17.5. Decisions with Multiple Agents: Game Theory
      17.5.1. Single-move games
      17.5.2. Repeated games
      17.5.3. Sequential games
17.6. Mechanism Design
      17.6.1. Auctions
      17.6.2. Common goods
17.7. Summary
Bibliographical and Historical Notes
Exercises
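
A minimal value iteration sketch (§17.2): repeatedly apply the Bellman update U(s) ← R(s) + γ max_a Σ_{s'} P(s'|s,a) U(s') until the largest change falls below the standard error bound. The two-state MDP below is invented purely for illustration.

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """P[s][a] is a list of (next_state, probability); R maps state to reward."""
    U = {s: 0.0 for s in states}
    while True:
        U_new, delta = {}, 0.0
        for s in states:
            best = max(sum(p * U[s2] for s2, p in P[s][a]) for a in actions)
            U_new[s] = R[s] + gamma * best           # Bellman update
            delta = max(delta, abs(U_new[s] - U[s]))
        U = U_new
        if delta < eps * (1 - gamma) / gamma:        # standard stopping test
            return U

if __name__ == "__main__":
    # Made-up MDP: "stay" keeps you put, "move" flips state with prob 0.8.
    P = {
        "A": {"stay": [("A", 1.0)], "move": [("B", 0.8), ("A", 0.2)]},
        "B": {"stay": [("B", 1.0)], "move": [("A", 0.8), ("B", 0.2)]},
    }
    R = {"A": 0.0, "B": 1.0}
    print(value_iteration(["A", "B"], ["stay", "move"], P, R))
```
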
Part V: Learning
Chapter 18: Learning from Examples
 
18.1. Forms of Learning
             Components to be learned
             Representation and prior knowledge
             Feedback to learn from
18.2. Supervised Learning
18.3. Learning Decision Trees
      18.3.1. The decision tree representation
      18.3.2. Expressiveness of decision trees
      18.3.3. Inducing decision trees from examples
      18.3.4. Choosing attribute tests
      18.3.5. Generalization and overfitting
      18.3.6. Broadening the applicability of decision trees
18.4. Evaluating and Choosing the Best Hypothesis
      18.4.1. Model selection: Complexity versus goodness of fit
      18.4.2. From error rates to loss
      18.4.3. Regularization
18.5. The Theory of Learning
      18.5.1. PAC learning example: Learning decision lists
18.6. Regression and Classification with Linear Models
      18.6.1. Univariate linear regression
      18.6.2. Multivariate linear regression
      18.6.3. Linear classifiers with a hard threshold
      18.6.4. Linear classification with logistic regression
18.7. Artificial Neural Networks
      18.7.1. Neural network structures
      18.7.2. Single-layer feed-forward neural networks (perceptrons)
      18.7.3. Multilayer feed-forward neural networks
      18.7.4. Learning in multilayer networks
      18.7.5. Learning neural network structures
18.8. Nonparametric Models
      18.8.1. Nearest neighbor models
      18.8.2. Finding nearest neighbors with k-d trees
      18.8.3. Locality-sensitive hashing
      18.8.4. Nonparametric regression
18.9. Support Vector Machines
18.10. Ensemble Learning
      18.10.1. Online Learning
18.11. Practical Machine Learning
      18.11.1. Case study: Handwritten digit recognition
      18.11.2. Case study: Word senses and house prices
18.12. Summary
Bibliographical and Historical Notes
Exercises
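
Univariate linear regression fit by batch gradient descent, as in §18.6.1. The five-point data set is made up so the exact answer, y = 2x + 1, is known in advance; the learning rate and step count are illustrative.

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y ~ w1*x + w0 by gradient descent on the mean squared loss."""
    w0 = w1 = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of (1/n) * sum (w1*x + w0 - y)^2 w.r.t. w0 and w1.
        g0 = sum(2 * (w1 * x + w0 - y) for x, y in zip(xs, ys)) / n
        g1 = sum(2 * (w1 * x + w0 - y) * x for x, y in zip(xs, ys)) / n
        w0 -= lr * g0
        w1 -= lr * g1
    return w0, w1

if __name__ == "__main__":
    xs = [0, 1, 2, 3, 4]
    ys = [1, 3, 5, 7, 9]        # exactly y = 2x + 1
    print(fit_line(xs, ys))     # ~(1.0, 2.0)
```
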
Chapter 19: Knowledge in Learning
 
19.1. A Logical Formulation of Learning
      19.1.1. Examples and hypotheses
      19.1.2. Current-best-hypothesis search
      19.1.3. Least-commitment search
19.2. Knowledge in Learning
      19.2.1. Some simple examples
      19.2.2. Some general schemes
19.3. Explanation-Based Learning
      19.3.1. Extracting general rules from examples
      19.3.2. Improving efficiency
19.4. Learning Using Relevance Information
      19.4.1. Determining the hypothesis space
      19.4.2. Learning and using relevance information
19.5. Inductive Logic Programming
      19.5.1. An example
      19.5.2. Top-down inductive learning methods
      19.5.3. Inductive learning with inverse deduction
      19.5.4. Making discoveries with inductive logic programming
19.6. Summary
Bibliographical and Historical Notes
Exercises
Chapter 20: Learning Probabilistic Models
 
20.1. Statistical Learning
20.2. Learning with Complete Data
      20.2.1. Maximum-likelihood parameter learning: Discrete models
      20.2.2. Naive Bayes models
      20.2.3. Maximum-likelihood parameter learning: Continuous models
      20.2.4. Bayesian parameter learning
      20.2.5. Learning Bayes net structures
      20.2.6. Density estimation with nonparametric models
20.3. Learning with Hidden Variables: The EM Algorithm
      20.3.1. Unsupervised clustering: Learning mixtures of Gaussians
      20.3.2. Learning Bayesian networks with hidden variables
      20.3.3. Learning hidden Markov models
      20.3.4. The general form of the EM algorithm
      20.3.5. Learning Bayes net structures with hidden variables
20.4. Summary
Bibliographical and Historical Notes
Exercises
Chapter 21: Reinforcement Learning
 
21.1. Introduction
21.2. Passive Reinforcement Learning
      21.2.1. Direct utility estimation
      21.2.2. Adaptive dynamic programming
      21.2.3. Temporal-difference learning
21.3. Active Reinforcement Learning
      21.3.1. Exploration
      21.3.2. Learning an action-utility function
21.4. Generalization in Reinforcement Learning
21.5. Policy Search
21.6. Applications of Reinforcement Learning
      21.6.1. Applications to game playing
      21.6.2. Application to robot control
21.7. Summary
Bibliographical and Historical Notes
Exercises
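
A minimal tabular Q-learning sketch (§21.3.2, learning an action-utility function) with epsilon-greedy exploration. The five-state chain environment, its reward, and all hyperparameters are invented for illustration.

```python
import random

def q_learning(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Learn Q on a made-up chain: move right from state 0 to reach state 4."""
    n_states, actions = 5, ["left", "right"]
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:                 # state 4 is terminal
            if random.random() < epsilon:        # epsilon-greedy exploration
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s2 = min(s + 1, n_states - 1) if a == "right" else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            best_next = 0.0 if s2 == n_states - 1 else max(Q[(s2, a2)] for a2 in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

if __name__ == "__main__":
    Q = q_learning()
    # Greedy policy per non-terminal state: should prefer "right" everywhere.
    print({s: max(["left", "right"], key=lambda a: Q[(s, a)]) for s in range(4)})
```
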
Part VI: Communicating, perceiving, and acting
Chapter 22: Natural Language Processing
 
22.1. Language Models
      22.1.1. N-gram character models
      22.1.2. Smoothing n-gram models
      22.1.3. Model evaluation
      22.1.4. N-gram word models
22.2. Text Classification
      22.2.1. Classification by data compression
22.3. Information Retrieval
      22.3.1. IR scoring functions
      22.3.2. IR system evaluation
      22.3.3. IR refinements
      22.3.4. The PageRank algorithm
      22.3.5. The HITS algorithm
      22.3.6. Question answering
22.4. Information Extraction
      22.4.1. Finite-state automata for information extraction
      22.4.2. Probabilistic models for information extraction
      22.4.3. Conditional random fields for information extraction
      22.4.4. Ontology extraction from large corpora
      22.4.5. Automated template construction
      22.4.6. Machine reading
22.5. Summary
Bibliographical and Historical Notes
Exercises
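
A minimal bigram word model with add-one (Laplace) smoothing (§§22.1.2 and 22.1.4). The two-sentence corpus is made up, and the <s>/</s> boundary markers are a common convention rather than anything the outline specifies.

```python
from collections import Counter

def train_bigram(corpus):
    """Return a smoothed conditional probability function P(w2 | w1)."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        words = ["<s>"] + sentence.split() + ["</s>"]
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    vocab = len(unigrams)
    def prob(w2, w1):   # add-one smoothing avoids zero counts
        return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)
    return prob

if __name__ == "__main__":
    prob = train_bigram(["the dog barks", "the cat meows"])   # toy corpus
    print(prob("dog", "the"), prob("cat", "the"))
```
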
Chapter 23: Natural Language for Communication
 
23.1. Phrase Structure Grammars
      23.1.1. The lexicon of E0
      23.1.2. The grammar of E0
23.2. Syntactic Analysis (Parsing)
      23.2.1. Learning probabilities for PCFGs
      23.2.2. Comparing context-free and Markov models
23.3. Augmented Grammars and Semantic Interpretation
      23.3.1. Lexicalized PCFGs
      23.3.2. Formal definition of augmented grammar rules
      23.3.3. Case agreement and subject–verb agreement
      23.3.4. Semantic interpretation
      23.3.5. Complications
23.4. Machine Translation
      23.4.1. Machine translation systems
      23.4.2. Statistical machine translation
23.5. Speech Recognition
      23.5.1. Acoustic model
      23.5.2. Language model
      23.5.3. Building a speech recognizer
23.6. Summary
Bibliographical and Historical Notes
Exercises
Chapter 24: Perception
 
24.1. Image Formation
      24.1.1. Images without lenses: The pinhole camera
      24.1.2. Lens systems
      24.1.3. Scaled orthographic projection
      24.1.4. Light and shading
      24.1.5. Color
24.2. Early Image-Processing Operations
      24.2.1. Edge detection
      24.2.2. Texture
      24.2.3. Optical flow
      24.2.4. Segmentation of images
24.3. Object Recognition by Appearance
      24.3.1. Complex appearance and pattern elements
      24.3.2. Pedestrian detection with HOG features
24.4. Reconstructing the 3D World
      24.4.1. Motion parallax
      24.4.2. Binocular stereopsis
      24.4.3. Multiple views
      24.4.4. Texture
      24.4.5. Shading
      24.4.6. Contour
      24.4.7. Objects and the geometric structure of scenes
24.5. Object Recognition from Structural Information
      24.5.1. The geometry of bodies: Finding arms and legs
      24.5.2. Coherent appearance: Tracking people in video
24.6. Using Vision
      24.6.1. Words and pictures
      24.6.2. Reconstruction from many views
      24.6.3. Using vision for controlling movement
24.7. Summary
Bibliographical and Historical Notes
Exercises
Chapter 25: Robotics
 
25.1. Introduction
25.2. Robot Hardware
      25.2.1. Sensors
      25.2.2. Effectors
25.3. Robotic Perception
      25.3.1. Localization and mapping
      25.3.2. Other types of perception
      25.3.3. Machine learning in robot perception
25.4. Planning to Move
      25.4.1. Configuration space
      25.4.2. Cell decomposition methods
      25.4.3. Modified cost functions
      25.4.4. Skeletonization methods
25.5. Planning Uncertain Movements
      25.5.1. Robust methods
25.6. Moving
      25.6.1. Dynamics and control
      25.6.2. Potential-field control
      25.6.3. Reactive control
      25.6.4. Reinforcement learning control
25.7. Robotic Software Architectures
      25.7.1. Subsumption architecture
      25.7.2. Three-layer architecture
      25.7.3. Pipeline architecture
25.8. Application Domains
