Learning Agents Center: Machine Learning - An Overview of Different Learning Strategies

This document, from the Learning Agents Center at George Mason University, provides an overview of various machine learning strategies, including inductive learning, deductive learning, abductive learning, and multistrategy learning. It discusses the architecture of a knowledge-based agent and the role of learning in improving an agent's competence.


CS 681, Fall 2008
Learning Agents Center and Computer Science Department, George Mason University
Gheorghe Tecuci, [email protected], http://lac.gmu.edu/

• Machine Learning: Introduction
• Inductive Learning Overview
• Analogical Learning
• Deductive Learning
• Abductive Learning
• Multistrategy Learning

What is Machine Learning

Machine Learning is the domain of Artificial Intelligence concerned with building adaptive computer systems that are able to improve their performance through learning from input data, from a user, or from their own problem-solving experience.

Two Complementary Dimensions of Learning

A system is improving its competence if it learns to solve a broader class of problems and to make fewer mistakes in problem solving. A system is improving its efficiency if it learns to solve the problems from its area of competence faster or by using fewer resources.

Learning Strategies

A learning strategy is a basic form of learning characterized by the employment of a certain type of:
• inference (e.g. deduction, induction, abduction, or analogy);
• computational or representational mechanism (e.g. rules, trees, neural networks, etc.);
• learning goal (e.g. learn a concept, discover a formula, acquire new facts, acquire new knowledge about an entity, refine an entity).

The Learning Problem

Given
• a language of instances;
• a language of generalizations;
• a set of positive examples (E1, ..., En) of a concept;
• a set of negative examples (C1, ..., Cm) of the same concept;
• a learning bias;
• other background knowledge.
Determine
• a concept description which is a generalization of the positive examples that does not cover any of the negative examples.
Purpose of concept learning: predict whether an instance is an example of the learned concept.

Generalization (and Specialization) Rules
• Climbing the generalization hierarchy
• Dropping condition
• Extending intervals
• Extending ordered sets of intervals
• Turning constants into variables
• Using feature definitions
• Using inference rules
• Turning occurrences of a variable into variables
• Extending discrete sets

Problem

Language of instances: objects with three attributes: color, shape, and size.
Language of generalizations: object generalizations characterized by a set of colors, a set of shapes, and a set of sizes, as defined by the following generalization hierarchies.
Background knowledge (generalization hierarchies):
• any-color: warm-color (red, orange, yellow); cold-color (blue, green, black)
• any-shape: polygon (triangle, rectangle, square); round (circle, ellipse)
• any-size: large, small

Problem: learn the concept represented by the following examples:
  i1: color orange, shape square,    size large   (+)
  i2: color blue,   shape ellipse,   size small   (-)
  i3: color red,    shape triangle,  size small   (+)
  i4: color green,  shape rectangle, size small   (-)
  i5: color yellow, shape circle,    size large   (+)

Solution: (C color warm-color shape any-shape size any-size), i.e., any warm-colored object, of any shape and any size.
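To make the "climbing the generalization hierarchy" rule and the background knowledge of this example concrete, here is a minimal Python sketch (the encoding and the function names are mine, not the lecture's): the hierarchies are stored as a parent map, and the minimal generalization of two attribute values is their lowest common ancestor.

```python
# A minimal sketch (assumed encoding, not from the lecture) of the color/shape/size
# generalization hierarchies and of the "climbing the generalization hierarchy" rule.

PARENT = {
    # color hierarchy
    "red": "warm-color", "orange": "warm-color", "yellow": "warm-color",
    "blue": "cold-color", "green": "cold-color", "black": "cold-color",
    "warm-color": "any-color", "cold-color": "any-color",
    # shape hierarchy
    "triangle": "polygon", "rectangle": "polygon", "square": "polygon",
    "circle": "round", "ellipse": "round",
    "polygon": "any-shape", "round": "any-shape",
    # size hierarchy
    "large": "any-size", "small": "any-size",
}

def ancestors(value):
    """The chain from a value up to the root of its hierarchy (value included)."""
    chain = [value]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

def minimal_generalization(a, b):
    """The least general value covering both a and b (their lowest common ancestor)."""
    up_a = ancestors(a)
    for v in ancestors(b):        # walk upward from b ...
        if v in up_a:             # ... until reaching a value that also covers a
            return v
    raise ValueError("values belong to different hierarchies")

def generalize_example(e1, e2):
    """Apply the rule attribute by attribute to two (color, shape, size) instances."""
    return tuple(minimal_generalization(x, y) for x, y in zip(e1, e2))

print(generalize_example(("orange", "square", "large"),      # positive example i1
                         ("red", "triangle", "small")))      # positive example i3
# -> ('warm-color', 'polygon', 'any-size')
```

The minimal generalization of the two positive examples i1 and i3 is (warm-color, polygon, any-size), which is exactly the specific boundary S that appears after the first three examples in the illustration that follows.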
The Candidate Elimination Algorithm (cont.)

[Figure: the version space H, shown as a set of points between its upper bound UB (the more general concepts) and its lower bound LB (the more specific concepts).]

As new positive and negative examples are presented to the program, candidate concepts are eliminated from H. This is practically done by updating the set G (the set of the most general elements in H) and the set S (the set of the most specific elements in H).

Version Spaces and the Candidate Elimination Algorithm

This is a concept learning method based on exhaustive search. It was developed by Mitchell and his colleagues.

Let us suppose that we have an example e1 of a concept to be learned. Then any sentence of the representation language which is more general than this example is a plausible hypothesis for the concept. The set H of all the plausible hypotheses for the concept to be learned is called the version space:

H = { h | h is more general than e1 }

That is, H is the set of the concepts covering the example e1. (Intuitively, the version space can be pictured as in the figure above, each hypothesis being a point in the network.) Let S be the set containing the example e1, and let G be the set containing the most general description of the representation language which is more general than e1: S = { e1 }, G = { eg }. Because the "more general than" relation is a partial ordering relation, one may represent the version space H by its boundaries:

H = { h | h is more general than e1 and h is less general than eg }, or H = {S, G}

As new examples and counterexamples are presented to the program, candidate concepts are eliminated from H by updating the set G (the most general elements in H) and the set S (the most specific elements in H). Thus, the version space H is the set of all concept descriptions that are consistent with all the training instances seen so far. When the set H contains only one candidate concept, the desired concept has been found.

The Candidate Elimination Algorithm

1. Initialize S to the first positive example and G to its most general generalization.
2. Accept a new training instance I.
   If I is a positive example then:
   - remove from G all the concepts that do not cover I;
   - generalize the elements in S as little as possible to cover I but remain less general than some concept in G;
   - keep in S the minimally general concepts.
   If I is a negative example then:
   - remove from S all the concepts that cover I;
   - specialize the elements in G as little as possible to uncover I and be more general than at least one element from S;
   - keep in G the maximally general concepts.
3. Repeat 2 until G = S and they contain a single concept C (this is the learned concept).
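The three steps can be turned into a short program for the attribute-and-hierarchy representation of the running example. The following sketch continues the previous snippet (it reuses PARENT, ancestors, and minimal_generalization defined there; all of these names are mine, not the lecture's) and is a simplified illustration rather than a general implementation of the algorithm.

```python
# Continuing the sketch above: a simplified candidate elimination loop for
# conjunctive hypotheses over the same attribute hierarchies.

CHILDREN = {}
for child, parent in PARENT.items():
    CHILDREN.setdefault(parent, []).append(child)

def covers(h, x):
    """A hypothesis covers an instance if each instance value lies under the
    corresponding hypothesis value in its hierarchy."""
    return all(hi in ancestors(xi) for hi, xi in zip(h, x))

def more_general(h1, h2):
    """h1 is as general as, or more general than, h2 (it covers everything h2 covers)."""
    return all(v1 in ancestors(v2) for v1, v2 in zip(h1, h2))

def candidate_elimination(examples, top):
    """examples: list of (instance, is_positive); top: the most general hypothesis."""
    (first, _), rest = examples[0], examples[1:]
    S, G = [first], [top]                                        # step 1
    for x, positive in rest:                                     # step 2
        if positive:
            G = [g for g in G if covers(g, x)]
            S = [tuple(minimal_generalization(si, xi) for si, xi in zip(s, x)) for s in S]
            S = [s for s in S if any(more_general(g, s) for g in G)]
            S = [s for s in S if not any(t != s and more_general(s, t) for t in S)]
        else:
            S = [s for s in S if not covers(s, x)]
            new_G = []
            for g in G:
                if not covers(g, x):
                    new_G.append(g)                              # already excludes x
                    continue
                for i, gi in enumerate(g):                       # minimal specializations of g
                    for child in CHILDREN.get(gi, []):
                        if child not in ancestors(x[i]):         # this child excludes x
                            cand = g[:i] + (child,) + g[i + 1:]
                            if any(more_general(cand, s) for s in S):
                                new_G.append(cand)
            G = [g for g in new_G if not any(h != g and more_general(h, g) for h in new_G)]
    return S, G

examples = [(("orange", "square", "large"), True),
            (("blue", "ellipse", "small"), False),
            (("red", "triangle", "small"), True),
            (("green", "rectangle", "small"), False),
            (("yellow", "circle", "large"), True)]
print(candidate_elimination(examples, top=("any-color", "any-shape", "any-size")))
# -> S = G = [('warm-color', 'any-shape', 'any-size')]
```

On the five training examples of the illustration it converges to S = G = {(warm-color, any-shape, any-size)}, the same concept obtained on the slides that follow.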
The Candidate Elimination Algorithm: Illustration

New training instance: +i3 (color red, shape triangle, size small), a positive example.
Rule for a positive example: remove from G all the concepts that do not cover I; generalize the elements in S as little as possible to cover I but remain less general than some concept in G; keep in S the minimally general concepts.

Before (after i1 and i2):
S: { (C color orange shape square size large) }
G: { (C color warm-color shape any-shape size any-size), (C color any-color shape polygon size any-size), (C color any-color shape any-shape size large) }
After i3:
S: { (C color warm-color shape polygon size any-size) }
G: { (C color warm-color shape any-shape size any-size), (C color any-color shape polygon size any-size) }

The Candidate Elimination Algorithm: Illustration (cont.)

New training instance: -i4 (color green, shape rectangle, size small), a negative example.
Rule for a negative example: remove from S all the concepts that cover I; specialize the elements in G as little as possible to uncover I and be more general than at least one element from S; keep in G the maximally general concepts.

Before:
S: { (C color warm-color shape polygon size any-size) }
G: { (C color warm-color shape any-shape size any-size), (C color any-color shape polygon size any-size) }
After i4: S is unchanged, because its element does not cover i4. In G, (C color warm-color shape any-shape size any-size) already excludes i4 and is kept, while (C color any-color shape polygon size any-size) is specialized to (C color warm-color shape polygon size any-size), which is then dropped because it is not maximally general:
S: { (C color warm-color shape polygon size any-size) }
G: { (C color warm-color shape any-shape size any-size) }

The Candidate Elimination Algorithm: Illustration (cont.)

New training instance: +i5 (color yellow, shape circle, size large), a positive example.
The rule for a positive example is applied again, and step 3 (repeat until G = S and they contain a single concept) terminates the learning.

Before:
S: { (C color warm-color shape polygon size any-size) }
G: { (C color warm-color shape any-shape size any-size) }
After i5:
S = G: { (C color warm-color shape any-shape size any-size) }

Discussion

What happens if there are not enough examples for S and G to become identical? Could we still learn something useful? How could we classify a new instance? When could we be sure that the classification is the same as the one made if the concept were completely learned? Could we be sure that the classification is correct?

Discussion

What happens if there are not enough examples for S and G to become identical? Let us assume that one learns only from the first three examples:
  i1: color orange, shape square,   size large   (+)
  i2: color blue,   shape ellipse,  size small   (-)
  i3: color red,    shape triangle, size small   (+)
The final version space will be:
S: { (C color warm-color shape polygon size any-size) }
G: { (C color warm-color shape any-shape size any-size), (C color any-color shape polygon size any-size) }

Discussion

Assume that the final version space is the one above. How could we classify the following examples, how certain are we about the classification, and why?
  (color blue,   shape circle,  size large):   -
  (color orange, shape square,  size small):   +
  (color red,    shape ellipse, size large):   don't know
  (color blue,   shape polygon, size small):   don't know
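Operationally, these answers follow from the two boundary sets: an instance is certainly positive if it is covered by every element of S (then every hypothesis still in the version space covers it), certainly negative if it is covered by no element of G (then no hypothesis covers it), and otherwise its classification is unknown. Continuing the sketch above (the covers helper and the hypothesis encoding are the hypothetical ones introduced there, not the lecture's):

```python
# A sketch of classification with a partially learned version space; it reuses
# the covers() helper and the hierarchy table from the candidate elimination
# sketch above.

S = [("warm-color", "polygon", "any-size")]
G = [("warm-color", "any-shape", "any-size"),
     ("any-color", "polygon", "any-size")]

def classify(x, S, G):
    """Classify an instance with an incompletely converged version space."""
    if all(covers(s, x) for s in S):
        return "+"              # every hypothesis in the version space covers x
    if not any(covers(g, x) for g in G):
        return "-"              # no hypothesis in the version space covers x
    return "don't know"         # some hypotheses cover x, others do not

for x in [("blue", "circle", "large"), ("orange", "square", "small"),
          ("red", "ellipse", "large"), ("blue", "polygon", "small")]:
    print(x, "->", classify(x, S, G))
# -> "-", "+", "don't know", "don't know", as on the slide
```

A unanimous classification is the same one the completely learned concept would make, whichever element of the version space that concept turns out to be.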
Discussion

What will be the result of the learning algorithm if there are errors in the examples? Let us assume that the 4th example is incorrectly classified:
  i1: color orange, shape square,    size large   (+)
  i2: color blue,   shape ellipse,   size small   (-)
  i3: color red,    shape triangle,  size small   (+)
  i4: color green,  shape rectangle, size small   (+)  (incorrect classification)
  i5: color yellow, shape circle,    size large   (+)
The version space after the first three examples is:
S: { (C color warm-color shape polygon size any-size) }
G: { (C color warm-color shape any-shape size any-size), (C color any-color shape polygon size any-size) }
Continue the learning process from this point.

The Learning Bias

A bias is any basis for choosing one generalization over another, other than strict consistency with the observed training examples. Types of bias:
- restricted hypothesis space bias;
- preference bias.

Restricted Hypothesis Space Bias

The hypothesis space H (i.e. the space containing all the possible concept descriptions) is defined by the generalization language. This language may not be capable of expressing all possible classes of instances; consequently, the hypothesis space in which the concept description is searched is restricted. Some of the restricted spaces investigated:
- logical conjunctions (i.e. the learning system will look for a concept description in the form of a conjunction);
- decision trees;
- three-layer neural networks with a fixed number of hidden units.

Preference Bias: Representation

How could the preference bias be represented? In general, the preference bias may be implemented as an order relationship 'better(f1, f2)' over the hypothesis space H. The system will then choose the "best" hypothesis f according to the "better" relationship. An example of such a relationship is "less-general-than", which produces the least general expression consistent with the data.

Features of the Version Space Method

• In its original form, it learns only conjunctive descriptions.
• However, it can be applied successively to learn disjunctive descriptions.
• It requires an exhaustive set of examples.
• It conducts an exhaustive bidirectional breadth-first search.
• The sets S and G can be very large for complex problems.
• It is very important from a theoretical point of view, clarifying the process of inductive concept learning from examples.
• It has very limited practical applicability because of the combinatorial explosion of the S and G sets.
• It is at the basis of the powerful Disciple multistrategy learning method, which has practical applications.

Exercise

The instance space for a concept learning problem is a set of objects, each object having two features, shape and size. The shape of an object can be ball, brick, cube, or star. The size of an object can be small, medium, or large. An instance is represented by a feature vector with two features; for example, (ball, large) represents a large ball. There is no other background knowledge. Each concept is also represented by a feature vector with the two features shape and size, except that there are two additional values for these features, any-shape and any-size.

Consider the following positive and negative examples of a concept to be learned:
+ (ball, large), - (brick, small), - (cube, large), + (ball, small).
Learn the concept represented by the above examples by applying the candidate elimination algorithm. What will be the result of learning if only the first three examples are available?
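To check a hand-worked answer to this exercise mechanically, the candidate elimination sketch above can be re-pointed at the exercise's flat shape and size hierarchies by rebinding its (hypothetical) PARENT and CHILDREN tables; this is a quick hack that fits a sketch, not a recommended interface.

```python
# Continuing the same sketch: plug in the exercise's flat hierarchies by rebinding
# the PARENT/CHILDREN tables used by the helpers defined earlier.

PARENT = {"ball": "any-shape", "brick": "any-shape", "cube": "any-shape", "star": "any-shape",
          "small": "any-size", "medium": "any-size", "large": "any-size"}
CHILDREN = {}
for child, parent in PARENT.items():
    CHILDREN.setdefault(parent, []).append(child)

examples = [(("ball", "large"), True), (("brick", "small"), False),
            (("cube", "large"), False), (("ball", "small"), True)]
print(candidate_elimination(examples, top=("any-shape", "any-size")))      # all four examples
print(candidate_elimination(examples[:3], top=("any-shape", "any-size")))  # first three only
# -> S = G = [('ball', 'any-size')] with all four examples;
#    S = [('ball', 'large')], G = [('ball', 'any-size')] after only the first three.
```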
Exercise (cont.)

a. What are the sets S and G corresponding to the first example e1?
b. What are the new sets S and G after learning from the negative example c1?
c. Assume that after learning from another example the sets S and G are the following:
S: { [(workstation = mac) & (software = publishing-sw) & (printer = any-printer)] }
G: { [(workstation = mac) & (software = any-software) & (printer = any-printer)],
     [(workstation = any-workstation) & (software = publishing-sw) & (printer = any-printer)] }
What will be the new sets S and G after learning from the following example?
  e3: workstation = sun, software = frame-maker, printer = laserwriter, class +

Reading

Tecuci G., These Lecture Notes (required).
Mitchell T.M., Machine Learning, Chapter 2: Concept Learning and the General-to-Specific Ordering, pp. 20-51, McGraw Hill, 1997 (recommended).
Mitchell T.M., Utgoff P.E., Banerji R., Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics, in Readings in Machine Learning (recommended).
Russell S. and Norvig P., Artificial Intelligence: A Modern Approach, second edition, Prentice Hall, pp. 649-653, 678-686 (recommended).

• Machine Learning: Introduction
• Inductive Learning Overview
• Analogical Learning
• Deductive (Explanation-based) Learning
• Abductive Learning
• Multistrategy Learning

The Explanation-Based Learning Method

Explain: construct an explanation that proves that the training example is an example of the concept to be learned.
Generalize: generalize the found explanation as much as possible so that the proof still holds, and extract from it a concept definition that satisfies the learning goal.

Explain

Background knowledge:
∀x, LIFTABLE(x) & STABLE(x) & OPEN-VESSEL(x) → CUP(x)
∀x ∀y, IS(x, LIGHT) & PART-OF(y, x) & ISA(y, HANDLE) → LIFTABLE(x)
∀x ∀y, PART-OF(y, x) & ISA(y, BOTTOM) & IS(y, FLAT) → STABLE(x)
∀x ∀y, PART-OF(y, x) & ISA(y, CONCAVITY) & IS(y, UPWARD-POINTING) → OPEN-VESSEL(x)

Prove that the training example is a cup, i.e., prove CUP(OBJ1):
CUP(OBJ1)
• OPEN-VESSEL(OBJ1): PART-OF(CONCAVITY1, OBJ1), ISA(CONCAVITY1, CONCAVITY), IS(CONCAVITY1, UPWARD-POINTING)
• STABLE(OBJ1): PART-OF(BOTTOM1, OBJ1), ISA(BOTTOM1, BOTTOM), IS(BOTTOM1, FLAT)
• LIFTABLE(OBJ1): IS(OBJ1, LIGHT), PART-OF(HANDLE1, OBJ1), ISA(HANDLE1, HANDLE)

The leaves of the proof tree are those features of the training example that allow one to recognize it as a cup. By building the proof one isolates the relevant features of the training example.

Discovery of the Relevant Features

[Figure: the ontological (semantic network) representation of the cup example. OBJ1 is LIGHT, has COLOR RED and OWNER EDGAR, and has as parts CONCAVITY1 (ISA CONCAVITY, IS UPWARD-POINTING), BOTTOM1 (ISA BOTTOM, IS FLAT), BODY1 (ISA BODY, IS SMALL), and HANDLE1 (ISA HANDLE, LENGTH 5). The enclosed features are the relevant ones.]
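The Explain and Generalize steps for the cup example can be sketched in a few dozen lines. The snippet below is a hypothetical, much-simplified illustration: its fact and rule encodings and all helper names are mine, and the generalization step just replaces the example's constants with variables instead of performing full goal regression, which happens to be sufficient for this example.

```python
# A sketch of explanation-based learning for the cup example: a tiny backward
# chainer (Explain) proves cup(obj1) and collects the leaf facts, i.e. the
# relevant features; Generalize then turns the example's constants into variables.

FACTS = {
    ("is", "obj1", "light"), ("color", "obj1", "red"), ("owner", "obj1", "edgar"),
    ("part-of", "handle1", "obj1"), ("isa", "handle1", "handle"),
    ("part-of", "bottom1", "obj1"), ("isa", "bottom1", "bottom"), ("is", "bottom1", "flat"),
    ("part-of", "concavity1", "obj1"), ("isa", "concavity1", "concavity"),
    ("is", "concavity1", "upward-pointing"),
    ("part-of", "body1", "obj1"), ("isa", "body1", "body"), ("is", "body1", "small"),
}
RULES = [  # (head, body); strings starting with "?" are variables
    (("cup", "?x"), [("liftable", "?x"), ("stable", "?x"), ("open-vessel", "?x")]),
    (("liftable", "?x"), [("is", "?x", "light"), ("part-of", "?y", "?x"), ("isa", "?y", "handle")]),
    (("stable", "?x"), [("part-of", "?y", "?x"), ("isa", "?y", "bottom"), ("is", "?y", "flat")]),
    (("open-vessel", "?x"), [("part-of", "?y", "?x"), ("isa", "?y", "concavity"),
                             ("is", "?y", "upward-pointing")]),
]

def substitute(literal, bindings):
    return tuple(bindings.get(t, t) for t in literal)

def match(pattern, ground, bindings):
    """Bind the pattern's variables so that it equals the ground literal, if possible."""
    b = dict(bindings)
    for p, g in zip(pattern, ground):
        if p.startswith("?"):
            if b.setdefault(p, g) != g:
                return None
        elif p != g:
            return None
    return b

def prove(goals, bindings, leaves):
    """Backward chaining over a conjunction of goals; yields the leaf facts of each proof.
    Simplification: goals proved via rules are assumed to be ground (true here)."""
    if not goals:
        yield leaves
        return
    goal = substitute(goals[0], bindings)
    for fact in FACTS:                                   # leaf: the goal is an example fact
        b = match(goal, fact, bindings)
        if b is not None:
            yield from prove(goals[1:], b, leaves + [fact])
    for head, body in RULES:                             # internal node: back-chain on a rule
        b = match(head, goal, {})
        if b is not None:
            for lv in prove([substitute(l, b) for l in body], {}, leaves):
                yield from prove(goals[1:], bindings, lv)

EXAMPLE_CONSTANTS = {"obj1", "handle1", "bottom1", "concavity1", "body1"}

def generalize(leaves):
    """Turn the example's constants into variables (obj1 -> ?obj1, handle1 -> ?handle1, ...)."""
    return [tuple("?" + t if t in EXAMPLE_CONSTANTS else t for t in leaf) for leaf in leaves]

relevant = next(prove([("cup", "obj1")], {}, []))
print("Relevant features:", relevant)
print("Learned rule: cup(?obj1) <=", generalize(relevant))
```

Running it prints the nine relevant leaf features (the light weight, the handle, the flat bottom, the upward-pointing concavity), ignoring the irrelevant ones (red color, owner Edgar, small body), and then the generalized rule cup(?obj1) <= is(?obj1, light) & part-of(?handle1, ?obj1) & isa(?handle1, handle) & ..., which is the usual generalized cup definition for this example.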
Exercise

Given
• A training example, the following example of "supports":
  [ book(book1) & material(book1, rigid) & cup(cup1) & material(cup1, rigid) & above(cup1, book1) & touches(cup1, book1) ] => supports(book1, cup1)
• A learning goal: find a sufficient concept definition for "supports", expressed in terms of the features used in the training example.
• Background knowledge:
  ∀x ∀y [on-top-of(y, x) & material(x, rigid) → supports(x, y)]
  ∀x ∀y [above(x, y) & touches(x, y) → on-top-of(x, y)]
  ∀x ∀y ∀z [above(x, y) & above(y, z) → above(x, z)]
Determine
A deductive generalization of the training example that satisfies the learning goal.

Solution

• Training example:
  [ book(book1) & material(book1, rigid) & cup(cup1) & material(cup1, rigid) & above(cup1, book1) & touches(cup1, book1) ] => supports(book1, cup1)
• Background knowledge:
  ∀x ∀y [on-top-of(y, x) & material(x, rigid) → supports(x, y)]
  ∀x ∀y [above(x, y) & touches(x, y) → on-top-of(x, y)]
  ∀x ∀y ∀z [above(x, y) & above(y, z) → above(x, z)]
The proof of supports(book1, cup1) uses the second rule to derive on-top-of(cup1, book1) from above(cup1, book1) and touches(cup1, book1), and then the first rule to derive supports(book1, cup1) from on-top-of(cup1, book1) and material(book1, rigid). Generalizing this proof yields the deductive generalization:
  ∀x ∀y [above(y, x) & touches(y, x) & material(x, rigid) → supports(x, y)]

Discussion

How does this learning method improve the efficiency of the problem solving process?

General Features of Explanation-Based Learning

• It needs only one example.
• It requires complete knowledge about the concept (which makes this learning strategy, in its pure form, impractical).
• It improves the agent's efficiency in problem solving.
• It shows the importance of explanations in learning.

Exercise

Given
• A training example, an example of the concept "LIKES(x, y)":
  HUMAN(John) & HAPPY(John) & AGE(John, 32) => LIKES(John, John)
• A learning goal: find a sufficient concept definition for "LIKES", expressed only in terms of the features used in the training example (i.e. HUMAN, HAPPY, AGE).
• Background knowledge:
  ∀x ∀y, KNOWS(x, y) & PERSON-TYPE(y, nice) → LIKES(x, y)
  ∀z, ANIMATE(z) → KNOWS(z, z)
  ∀u, HUMAN(u) → ANIMATE(u)
  ∀v, FRIENDLY(v) → PERSON-TYPE(v, nice)
  ∀w, HAPPY(w) → PERSON-TYPE(w, nice)
Determine
A deductive generalization of the training example that satisfies the learning goal.

Reading

Tecuci G., These Lecture Notes (required).
Russell S. and Norvig P., Artificial Intelligence: A Modern Approach, second edition, Prentice Hall, pp. 690-694 (recommended).
Mitchell T.M., Machine Learning, Chapter 11: Analytical Learning, pp. 307-333, McGraw Hill, 1997 (recommended).
Mitchell T.M., Keller R.M., Kedar-Cabelli S.T., Explanation-Based Generalization: A Unifying View, Machine Learning 1, pp. 47-80, 1986. Also in Readings in Machine Learning, J.W. Shavlik and T.G. Dietterich (eds.), Morgan Kaufmann, 1990 (recommended).
DeJong G. and Mooney R., Explanation-Based Learning: An Alternative View, Machine Learning 2, 1986. Also in Readings in Machine Learning, J.W. Shavlik and T.G. Dietterich (eds.), Morgan Kaufmann, 1990 (recommended).
Tecuci G. and Kodratoff Y., Apprenticeship Learning in Imperfect Domain Theories, in Kodratoff Y. and Michalski R. (eds.), Machine Learning, vol. 3, Morgan Kaufmann, 1990 (recommended).

Abduction

University Dr. is wet. Raining causes the streets to be wet. Hypothesize that it was raining on University Dr. What are other potential explanations? Provide other examples of abductive reasoning.

Abduction

Definition (Josephson, 2000):
D is a collection of data (facts, observations, givens);
H explains D (would, if true, explain D);
no other hypothesis explains D as well as H does.
Therefore, H is probably correct.

Abstract illustrations:
If B is true and A → B, then hypothesize A.
If A = A1 & A2 & ... & An, and A2 & ... & An is true, then hypothesize A1.
Discussion

Why is abduction a form of learning? Because it discovers (learns) new facts.
What are the basic operations in abductive learning?
- generation of explanatory hypotheses;
- selection of the "best" hypothesis;
- (testing of the best hypothesis).

The Abductive Learning Problem: Illustration

Given
• A surprising observation that is not explained by the background knowledge:
  KILL(John, John)   (John committed suicide)
• Background knowledge:
  ∀x ∀y, BUY(x, y) → POSSESS(x, y)
  ∀x ∀y ∀z, HATE(x, y) & POSSESS(x, z) & WEAPON(z) → KILL(x, y)
  ∀x, GUN(x) → WEAPON(x)
  ∀x, DEPRESSED(x) → HATE(x, x)
  ...
  DEPRESSED(John), AGE(John, 45), BUY(John, obj1), ...
• Learning goal: find an assumption which is consistent with the background knowledge and represents the best explanation of the new observation.
Determine
The "best" assumption satisfying the learning goal: GUN(obj1).

The Abductive Learning Method: Illustration

Build partial explanations of the observation:
KILL(John, John) ← HATE(John, John) & POSSESS(John, obj1) & WEAPON(obj1)
  HATE(John, John) ← DEPRESSED(John): true
  POSSESS(John, obj1) ← BUY(John, obj1): true
  WEAPON(obj1): unknown
If one assumes that "WEAPON(obj1)" is true, then "KILL(John, John)" is explained. Therefore, a possible assumption is "WEAPON(obj1)".

The Abductive Learning Method: Illustration (cont.)

Another partial proof tree: if one assumes that "GUN(obj1)" is true, then "WEAPON(obj1)" holds and "KILL(John, John)" is also explained. Therefore, another possible assumption is "GUN(obj1)". What hypothesis should be adopted: the most specific one or the most general one?
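The construction of partial explanations can be sketched as a small back-chaining hypothesis generator: it proves what it can from the known facts and returns, as candidate assumptions, whatever is left unproven. The code below is an illustrative sketch (the rules are pre-instantiated for John and obj1, and all names are mine), not a general abduction engine.

```python
# A sketch of abductive hypothesis generation for the suicide example above.

FACTS = {"DEPRESSED(John)", "AGE(John,45)", "BUY(John,obj1)"}

RULES = [  # (antecedents, consequent), already instantiated for John and obj1
    (["BUY(John,obj1)"], "POSSESS(John,obj1)"),
    (["DEPRESSED(John)"], "HATE(John,John)"),
    (["GUN(obj1)"], "WEAPON(obj1)"),
    (["HATE(John,John)", "POSSESS(John,obj1)", "WEAPON(obj1)"], "KILL(John,John)"),
]

def explanations(goal, top=True):
    """Return assumption sets, each sufficient to explain `goal` from FACTS and RULES."""
    if goal in FACTS:
        return [frozenset()]                      # already known: nothing to assume
    results = []
    for antecedents, consequent in RULES:
        if consequent == goal:                    # back-chain through a rule
            sets = [frozenset()]
            for a in antecedents:
                sets = [s | t for s in sets for t in explanations(a, top=False)]
            results.extend(sets)
    if not top:
        results.append(frozenset({goal}))         # or simply assume the subgoal itself
    return results

def minimal(sets):
    """Keep only the assumption sets that do not properly contain another one."""
    return [s for s in sets if not any(t < s for t in sets)]

print(minimal(explanations("KILL(John,John)")))
# -> [frozenset({'GUN(obj1)'}), frozenset({'WEAPON(obj1)'})]
```

The two minimal candidates it returns, GUN(obj1) and WEAPON(obj1), are exactly the two partial explanations built on the slides, leaving open the selection question raised there: adopt the most general or the most specific hypothesis?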
Exercise

Change the exercise from the previous slide to represent an abductive learning problem and then solve it.

Partial solution:
Given
• A surprising observation that is not explained by the background knowledge: LIKES(John, John)
• Background knowledge: …
• Learning goal: find an assumption which is consistent with the background knowledge and represents the best explanation of the new observation.
Determine
The "best" assumption satisfying the learning goal.

Then consider the explanation-based learning problem from the previous slide and the abductive learning problem from this exercise, and compare abductive learning with explanation-based learning, based on these problem formulations and their solutions.

Recommended Reading

Tecuci G., These Lecture Notes (required).
Flach P.A. and Kakas A.C. (eds.), Abduction and Induction: Essays on their Relation and Integration, Kluwer Academic Publishers, 2000.
Flach P.A. and Kakas A.C., Abductive and Inductive Reasoning: Background and Issues, in the above volume.
Josephson J.R., Smart Inductive Generalizations are Abductions, in the above volume.
Josephson J.R. and Josephson S.G., Abductive Inference: Computation, Philosophy, Technology, Cambridge University Press, 1994.
O'Rorke P., Morris S., and Schulenburg D., Theory Formation by Abduction: A Case Study Based on the Chemical Revolution, in Shrager J. and Langley P. (eds.), Computational Models of Scientific Discovery and Theory Formation, Morgan Kaufmann, San Mateo, CA, 1990.
Subramanian S. and Mooney R.J., Combining Abduction and Theory Revision, in Michalski R.S. and Tecuci G. (eds.), Proc. of the First International Workshop on Multistrategy Learning, MSL-91, Harpers Ferry, Nov. 7-9, 1991.

• Machine Learning: Introduction
• Inductive Learning Overview
• Analogical Learning
• Deductive Learning
• Abductive Learning
• Multistrategy Learning

Rutherford's Analogy

The hydrogen atom is like our solar system. The Sun has a greater mass than the Earth and attracts it, causing the Earth to revolve around the Sun. The nucleus also has a greater mass than the electron and attracts it. Therefore it is plausible that the electron also revolves around the nucleus.

Learning by Analogy: The Learning Problem

Given:
• A partially known target entity T and a goal concerning it (here, the partially understood structure of the hydrogen atom under study).
• Background knowledge containing known entities (here, knowledge from different domains, including astronomy, geography, etc.).
Find:
• New knowledge about T obtained from a source entity S belonging to the background knowledge (here, that in a hydrogen atom the electron revolves around the nucleus, in a similar way in which a planet revolves around the sun).

Learning by Analogy: The Learning Method

• ACCESS: find a known entity that is analogous with the input entity. In Rutherford's analogy the access step is not necessary, because the source entity is already given (the solar system).
• MATCHING: match the two entities and hypothesize knowledge. One may map the nucleus to the sun and the electron to the planet, allowing one to infer that the electron revolves around the nucleus because the nucleus attracts the electron and the mass of the nucleus is greater than the mass of the electron.
• EVALUATION: test the hypotheses. A specially designed experiment shows that indeed the electron revolves around the nucleus.
• LEARNING: store or generalize the new knowledge. Store that, in a hydrogen atom, the electron revolves around the nucleus; by generalization from the solar system and the hydrogen atom, learn the abstract concept that a central force can cause revolution.

Case Study Discussion: Rutherford's Analogy

"The hydrogen atom is like our solar system." In this case, the fact that S and T are analogous is already known. Therefore, the access part is solved, and the only purpose of the matching function remains that of identifying the correct correspondence between the elements of the solar system and those of the hydrogen atom. This is an example of a special (simpler) form of analogy, "a T is like an S", which is useful mostly in teaching based on analogy.
Case Study Discussion: Potential Matchings

What are the possible matchings between the elements of S and the elements of T?

[Figure: the source S (solar system): the sun, with mass Msun, temperature Tsun, and color yellow, attracts the planet, with mass Mplanet and temperature Tplanet; greater(Msun, Mplanet) and greater(Tsun, Tplanet) hold, and the attraction together with the greater mass causes revolves-around(planet, sun). The target T (hydrogen atom): the nucleus, with mass Mnucleus, attracts the electron, with mass Melectron, and greater(Mnucleus, Melectron) holds.]

There are several possible matchings between the elements of S and the elements of T, and one has to select the best one:

Matching 1: sun ↔ nucleus, planet ↔ electron, Msun ↔ Mnucleus, Mplanet ↔ Melectron, which is supported by the following correspondences:
  mass(sun, Msun) ↔ mass(nucleus, Mnucleus)
  mass(planet, Mplanet) ↔ mass(electron, Melectron)
  greater(Msun, Mplanet) ↔ greater(Mnucleus, Melectron)
  attracts(sun, planet) ↔ attracts(nucleus, electron)

Matching 2: sun ↔ nucleus, planet ↔ electron, Tsun ↔ Mnucleus, Tplanet ↔ Melectron, which is supported by the following correspondences:
  greater(Tsun, Tplanet) ↔ greater(Mnucleus, Melectron)
  attracts(sun, planet) ↔ attracts(nucleus, electron)

Matching 3: sun ↔ electron, planet ↔ nucleus, Msun ↔ Melectron, Mplanet ↔ Mnucleus.

Case Study Discussion: Evaluation

The evaluation phase shows that the hydrogen atom has the features:
• revolves-around(nucleus, electron)
• causes((attracts(nucleus, electron), greater(Mnucleus, Melectron)), revolves-around(nucleus, electron))
and that the hydrogen atom does not have the features:
• color(nucleus, yellow)
• temperature(nucleus, Tn)
• temperature(electron, En)
• greater(Tn, En)
What is, in your opinion, the most critical issue in analogical learning?

Discussion

What is the most critical issue in analogical learning? What kind of features may be transferred from the source to the target so as to make sound analogical inferences?

Case Study: Transfer of Causal Relations

In the source, attracts(sun, planet) & mass(sun, Msun) & mass(planet, Mplanet) & greater(Msun, Mplanet) CAUSE revolves-around(planet, sun). Under the substitution σ = (sun ← nucleus, planet ← electron, Msun ← Mnucleus, Mplanet ← Melectron), this causal relation is transferred to the target: attracts(nucleus, electron) & mass(nucleus, Mnucleus) & mass(electron, Melectron) & greater(Mnucleus, Melectron) CAUSE (?) revolves-around(electron, nucleus).
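One way to make the matching step concrete is to score every one-to-one correspondence by how many source facts it maps onto known target facts; this reproduces the support counts listed above (Matching 1 scores 4, Matchings 2 and 3 score only 2). The sketch below is an illustrative brute-force matcher, not a real analogy engine; the fact encodings and names are mine, paraphrasing the figures above.

```python
# A sketch of the MATCHING step for Rutherford's analogy: enumerate candidate
# correspondences between source and target terms and score each by the number
# of source facts it maps onto target facts.

from itertools import product

SOURCE_FACTS = [
    ("mass", "sun", "Msun"), ("mass", "planet", "Mplanet"),
    ("temperature", "sun", "Tsun"), ("temperature", "planet", "Tplanet"),
    ("greater", "Msun", "Mplanet"), ("greater", "Tsun", "Tplanet"),
    ("attracts", "sun", "planet"), ("color", "sun", "yellow"),
    ("revolves-around", "planet", "sun"),
]
TARGET_FACTS = {
    ("mass", "nucleus", "Mnucleus"), ("mass", "electron", "Melectron"),
    ("greater", "Mnucleus", "Melectron"), ("attracts", "nucleus", "electron"),
}
SOURCE_TERMS = ["sun", "planet", "Msun", "Mplanet", "Tsun", "Tplanet"]
TARGET_TERMS = ["nucleus", "electron", "Mnucleus", "Melectron", None]   # None = unmapped

def score(mapping):
    """Count the source facts whose image under the mapping is a target fact."""
    mapped = [(p,) + tuple(mapping.get(a, a) for a in args) for p, *args in SOURCE_FACTS]
    return sum(f in TARGET_FACTS for f in mapped)

candidates = []
for image in product(TARGET_TERMS, repeat=len(SOURCE_TERMS)):
    chosen = [t for t in image if t is not None]
    if len(chosen) != len(set(chosen)):           # keep the correspondence one-to-one
        continue
    mapping = {s: t for s, t in zip(SOURCE_TERMS, image) if t is not None}
    candidates.append((score(mapping), mapping))

candidates.sort(key=lambda st: -st[0])
print(candidates[0])
# -> (4, {'sun': 'nucleus', 'planet': 'electron', 'Msun': 'Mnucleus', 'Mplanet': 'Melectron'})
```

The highest-scoring correspondence is Matching 1; once it is chosen, the causal relation above can be transferred along it to hypothesize that the electron revolves around the nucleus, and the revolves-around fact itself contributes nothing to the score because it is precisely what is not yet known about the target.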
Problem Solving by Analogy

Analogy means deriving new knowledge about an input entity by transferring it from a known similar entity. How could we define problem solving by analogy?

Problem Solving by Analogy: Definition

Problem solving by analogy is the process of transferring knowledge from past problem-solving episodes to new problems that share significant aspects with the corresponding past experience, and of using the transferred knowledge to construct solutions to the new problems. What could be the overall structure of a problem solving by analogy method?

The Problem Solving by Analogy Method

Let P be a problem to solve. First, look into the knowledge base for a previous problem-solving episode which shares significant aspects with P. Next, transform the past episode to obtain a solution to the current problem.

Transformational Analogy Method: Illustration

Source problem [Figure: points A, B, C, D on a line]:
GIVEN: AB = CD. PROVE: AC = BD.
Proof: AB = CD; BC = BC; AB + BC = BC + CD; AC = BD.

Target problem [Figure: rays from A through B, C, D, E]:
GIVEN: ∠BAC = ∠DAE. PROVE: ∠BAD = ∠CAE.
Applying the substitution σ = (AB ← ∠BAC, CD ← ∠DAE, AC ← ∠BAD, BD ← ∠CAE) to the source proof yields:
∠BAC = ∠DAE; ∠CAD = ∠CAD; ∠BAC + ∠CAD = ∠CAD + ∠DAE; ∠BAD = ∠CAE.

Discussion

How does analogy facilitate the problem solving process? How does the transformational analogy method relate to the generally accepted idea that the relations which are usually imported by analogy from a source concept S to a target concept T are those belonging to causal networks?

Discussion

How does this method relate to the generally accepted idea that the relations which are usually imported by analogy from a source concept S to a target concept T are those belonging to causal networks? Intuition: the relation between a problem and its solution is a kind of cause-effect relationship.

Consider the following problem solving situation:
Previously solved problem: find integer solutions of x^2 + y^2 = z^2.
Problem: find integer solutions of x^3 + y^3 = z^3.
Fermat's last theorem: there are no integer solutions of x^n + y^n = z^n for n > 2.
What does this example suggest?
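The transformational step in the geometry illustration amounts to applying the substitution σ to every statement of the stored derivation. Below is a minimal sketch (the token encoding is mine, and the pairing BC ↦ ∠CAD, which the slide leaves implicit, has to be added to the substitution); as the Fermat example just showed, the transferred derivation is only a plausible candidate that must still be verified in the new domain.

```python
# A sketch of transformational analogy: transfer a stored derivation to a new
# problem by applying the substitution that maps the old problem onto the new one.

# Source episode: given AB = CD, prove AC = BD (statements kept as token lists).
source_derivation = [
    ["AB", "=", "CD"],                            # given
    ["BC", "=", "BC"],                            # reflexivity
    ["AB", "+", "BC", "=", "BC", "+", "CD"],      # add BC to both sides
    ["AC", "=", "BD"],                            # segment addition
]

# Substitution sigma mapping the source problem onto the target problem
# (given ∠BAC = ∠DAE, prove ∠BAD = ∠CAE); BC ↦ ∠CAD is added explicitly.
sigma = {"AB": "∠BAC", "CD": "∠DAE", "BC": "∠CAD", "AC": "∠BAD", "BD": "∠CAE"}

def transfer(derivation, substitution):
    """Apply the substitution term by term to every statement of the derivation."""
    return [[substitution.get(term, term) for term in statement]
            for statement in derivation]

for statement in transfer(source_derivation, sigma):
    print(" ".join(statement))
# ∠BAC = ∠DAE
# ∠CAD = ∠CAD
# ∠BAC + ∠CAD = ∠CAD + ∠DAE
# ∠BAD = ∠CAE
```

The output is exactly the transferred proof shown in the illustration; a problem solver would then have to check that each transferred step is justified in the target domain (here, by the corresponding angle-addition axioms).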