Homework Report
Pablo Silva LZ1LUZ pablosnr@hotmail.com | Artificial Intelligence | December 13, 2013
Whether the student likes the class depends on whether there is a delay at the start of the lesson and on which subject is taught. A delay can happen if the lecturer does not arrive on time, which in turn can be affected by the place of the class and by whether it is winter. Finally, the subject depends only on what the lecturer thinks is good to teach his students.
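The dependencies described above can be sketched as a child-to-parents mapping. This is only an illustrative encoding of the report's structure, not the attached XML model; the variable names follow the report, and the topological-sort helper is just a sanity check that the graph is acyclic.

```python
# The report's Bayesian network structure as a child -> parents mapping.
structure = {
    "Winter": [],
    "Place": [],
    "Lecturer": [],
    "Subject": ["Lecturer"],                       # subject chosen by the lecturer
    "Delay": ["Lecturer", "Winter", "Place"],      # delay affected by all three
    "Agreeableness": ["Delay", "Subject"],         # what the students think
}

def topological_order(parents):
    """Return a topological order of the nodes (parents before children)."""
    order, seen = [], set()
    def visit(v):
        if v in seen:
            return
        for p in parents[v]:
            visit(p)
        seen.add(v)
        order.append(v)
    for v in parents:
        visit(v)
    return order

print(topological_order(structure))
```

Because the recursion terminates, the graph has no directed cycle, which is required for a valid Bayesian network.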
Peter is the official lecturer of the AI classes, so only on special occasions is the lecturer someone else.

P(Winter)
True 0.25
False 0.75

In Hungary, it is winter about 25% of the time.
P(Place)
E504 0.5
Q408 0.5
There are two classes per week: on Mondays in E504 and on Wednesdays in Q408.
Peter seems to like BNs; on the occasions when he was not the one teaching, the classes were about BNs roughly half of the time.
P(Delay | Lecturer, Winter, Place), for the eight parent configurations:
Lecturer=Another, Winter=False, Place=Q408
Lecturer=Another, Winter=False, Place=E504
Lecturer=Another, Winter=True, Place=Q408
Lecturer=Another, Winter=True, Place=E504
Lecturer=Peter Antal, Winter=False, Place=Q408
Lecturer=Peter Antal, Winter=False, Place=E504
Lecturer=Peter Antal, Winter=True, Place=Q408
Lecturer=Peter Antal, Winter=True, Place=E504
Delays occur more often when the classes are in building E504, because the lecturer needs to fetch the projector from far away from that building. Despite this, most of the classes start on time.
In general the students do not complain about the AI classes, only when there is a delay, and even then only a few of them do. The best classes are those that start on time and have BNs as the subject.
Because of the harsh winter, Peter could not come to class, and the substitute lecturer taught search methods because he did not like BNs. What did the students think about the class?

Bad 0.12
Good 0.42
Excellent 0.46
If the class was excellent, what is the chance that the lecturer was Peter Antal?
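A diagnostic query like this one is just Bayes' rule. The sketch below shows the arithmetic with placeholder CPT values; the numbers are illustrative assumptions, not the values of the attached model.

```python
# Hedged sketch of the diagnostic query via Bayes' rule.
# The prior and likelihood values below are hypothetical placeholders.
def bayes(prior, likelihood):
    """P(H=h | E) for each h, from P(H=h) and P(E | H=h)."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(joint.values())
    return {h: joint[h] / z for h in joint}

prior = {"Peter Antal": 0.9, "Another": 0.1}        # hypothetical P(Lecturer)
likelihood = {"Peter Antal": 0.47, "Another": 0.40} # hypothetical P(Excellent | Lecturer)

posterior = bayes(prior, likelihood)
print(posterior)
```

With a strong prior toward Peter Antal, observing an excellent class pushes the posterior even further toward him, which matches the qualitative behaviour discussed later in the report.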
Computing the values using exact inference:

P(Agreeableness=Excellent | Delay)
Delay=True 0.368
Delay=False 0.500
Difference: ~0.132

P(Agreeableness=Excellent | Subject)
Subject=BNs 0.473
Subject=Another 0.447
Difference: ~0.026

As we can see, the value of Delay has a greater impact on the random variable Agreeableness than the value of Subject.
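The sensitivity comparison above can be reproduced directly from the reported query results; the snippet only re-does the subtraction with the report's own numbers.

```python
# Re-computing the report's sensitivity differences from its query results.
p_exc_given_delay = {"true": 0.368, "false": 0.500}      # P(Excellent | Delay)
p_exc_given_subject = {"BNs": 0.473, "Another": 0.447}   # P(Excellent | Subject)

delay_effect = abs(p_exc_given_delay["false"] - p_exc_given_delay["true"])
subject_effect = abs(p_exc_given_subject["BNs"] - p_exc_given_subject["Another"])

print(round(delay_effect, 3), round(subject_effect, 3))  # -> 0.132 0.026
```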
Figure 3 - Learning with 10000 samples

As we can see, the structure resembles the original except for the Winter random variable. The CPTs changed according to the new model, which can be seen in the attached PHAS_LZ1LUZ_HW_AI2013_learning_structure_10000s.xml. A proper examination of whether the two models are the same is beyond the scope of this work, but let us retry some queries:
P(Agreeableness=Excellent) = 0.472 (prev.: 0.464)
P(Agreeableness=Excellent|Delay=true) = 0.365 (prev.: 0.368)
As we can see, even though the learnt model is not exactly identical to the designed one, the queries gave almost the same results, which indicates the resemblance between the two.
OVERCONFIDENCE

P(Delay | Lecturer, Winter, Place), for the eight parent configurations:
Lecturer=Another, Winter=False, Place=Q408
Lecturer=Another, Winter=False, Place=E504
Lecturer=Another, Winter=True, Place=Q408
Lecturer=Another, Winter=True, Place=E504
Lecturer=Peter Antal, Winter=False, Place=Q408
Lecturer=Peter Antal, Winter=False, Place=E504
Lecturer=Peter Antal, Winter=True, Place=Q408
Lecturer=Peter Antal, Winter=True, Place=E504
Rerunning my queries, I got the following results:
P(Agreeableness=Excellent) = 0.469 (prev.: 0.464)
P(Agreeableness=Excellent|Delay=true) = 0.368 (prev.: 0.368)
P(Agreeableness=Excellent|Lecturer=Another,Winter=true,Place=E504) = 0.461 (prev.: 0.455)
P(Lecturer=Peter Antal|Agreeableness=Excellent) = 0.914 (prev.: 0.898)

We can see that the probability of the lecturer being Peter Antal, given that the class was excellent, increases. Before, the Delay variable was already biased so that delays were associated with lecturers other than Peter Antal; with overconfidence, the delay associated with Peter decreased even further, strengthening this association.
UNDERCONFIDENCE
I have changed the Lecturer and Delay variables to make my model underconfident.

P(Lecturer)
Peter Antal 0.5626
Another 0.4374
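One simple way to make a distribution underconfident is to mix it with the uniform distribution, pulling every entry toward 1/k. The sketch below illustrates this idea with a hypothetical original prior; it is not necessarily the exact procedure used to obtain the 0.5626/0.4374 values above.

```python
# Hedged sketch: underconfidence as mixing a distribution with uniform.
def soften(dist, alpha):
    """Mix dist with the uniform distribution.
    alpha=0 keeps dist unchanged; alpha=1 gives the uniform distribution."""
    k = len(dist)
    return {v: (1 - alpha) * p + alpha / k for v, p in dist.items()}

p_lecturer = {"Peter Antal": 0.9, "Another": 0.1}  # hypothetical original prior
print(soften(p_lecturer, 0.5))
```

The result is still a valid probability distribution, just less peaked than the original.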
P(Delay | Lecturer, Winter, Place), for the eight parent configurations:
Lecturer=Another, Winter=False, Place=Q408
Lecturer=Another, Winter=False, Place=E504
Lecturer=Another, Winter=True, Place=Q408
Lecturer=Another, Winter=True, Place=E504
Lecturer=Peter Antal, Winter=False, Place=Q408
Lecturer=Peter Antal, Winter=False, Place=E504
Lecturer=Peter Antal, Winter=True, Place=Q408
Lecturer=Peter Antal, Winter=True, Place=E504
Rerunning my queries, I got the following results:
P(Agreeableness=Excellent) = 0.437 (prev.: 0.464)
P(Agreeableness=Excellent|Delay=true) = 0.361 (prev.: 0.368)
P(Agreeableness=Excellent|Lecturer=Another,Winter=true,Place=E504) = 0.431 (prev.: 0.455)

P(Agreeableness | Lecturer=Another, Subject=Another, Winter=true)
Bad 0.145 (prev.: 0.12)
Good 0.445 (prev.: 0.42)
Excellent 0.410 (prev.: 0.46)
P(Lecturer=Peter Antal|Agreeableness=Excellent) = 0.567 (prev.: 0.898)

We can see that all queries about Agreeableness returned a lower Excellent value. This is because the previous values for Lecturer and Delay were biased: the lecturer was almost always Peter Antal, and delays were rare. When Peter is the lecturer, the chance of a class about BNs is higher and there are fewer delays, and the students seem to like it more when they learn about BNs and when classes start on time. With these two facts made underconfident, the positive effect on Agreeableness became weaker. The last query, about the chance that the lecturer was Peter given that the class was excellent, had a big decrease. Previously the Delay variable was biased so that delays were rarer when Peter lectured, making his classes better, and the Lecturer variable was biased so that Peter taught most of the classes; combined with fewer delays, his classes were nicer. Therefore the effect of Peter on the Agreeableness of the classes is lower now.
We can see that the learnt model has significant differences compared with the original one. The arrows now indicate dependencies that did not exist before. We can make a few queries to discover how much these changes influence the knowledge in the model:

Query                                                                    Original  Learnt (10000 samples)  Learnt (1000 samples)
P(Agreeableness=Excellent)                                               0.464     0.472                   0.476
P(Agreeableness=Excellent|Delay=true)                                    0.368     0.365                   0.391
P(Agreeableness=Excellent|Lecturer=Another,Winter=true,Place=E504)       0.455     0.447                   0.457
Analyzing the results, we can see that the number of samples has a big influence on the outcome. In the structure learnt from 1000 samples, Agreeableness became an independent variable, and because of this its probability tends to be higher, since it is no longer influenced by the evidence.
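The influence of the sample count can be illustrated with a tiny forward-sampling experiment. The sketch below draws samples of the Winter variable, whose prior P(Winter=True) = 0.25 is given in the report; everything else about the snippet (seed, sample sizes) is an arbitrary choice for illustration.

```python
# Hedged sketch: estimating P(Winter=True) = 0.25 by forward sampling.
# Small sample counts typically give noisier estimates, which is why
# structure/parameter learning degrades with few samples.
import random

random.seed(0)

def sample_winter():
    return random.random() < 0.25  # P(Winter=True) = 0.25, as in the report

def estimate(n):
    """Relative frequency of Winter=True over n samples."""
    hits = sum(sample_winter() for _ in range(n))
    return hits / n

e_small = estimate(100)
e_big = estimate(10000)
print(e_small, e_big)  # the 10000-sample estimate is usually much closer to 0.25
```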
Underconfident model: The parameters I used for sample generation and learning:
Sample size: 10000
Prior: CH
Max. parent count: 5
Max. permutations: 80

The learnt structure:
Analyzing the learned structure, we can see that this time there are no isolated nodes, but there are serious problems with the dependencies, and the new structure does not resemble the original one. Let us check how accurate it is by comparing the queries:

Query                                                                    Original underconfident  Learnt underconfident
P(Agreeableness=Excellent)                                               0.437                    0.467
P(Agreeableness=Excellent|Delay=true)                                    0.361                    0.369
P(Agreeableness=Excellent|Lecturer=Another,Winter=true,Place=E504)       0.431                    0.467
Looking at the table, we can see that the queries gave similar answers without big discrepancies, although, as expected, the accuracy is lower than with the original model. Analyzing both tests, we can conclude that the model's probability distributions (CPTs) and the sample size make a big difference when it comes to learning. In the first test, with few samples, the entire structure was degraded; in the second, the dependencies were heavily influenced by the probability distributions.
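The parameter-learning step behind these experiments is, at its core, maximum-likelihood counting: each CPT entry is estimated as a relative frequency in the samples. The toy data below is purely illustrative, not the report's generated sample set.

```python
# Hedged sketch: CPT estimation by relative-frequency (maximum likelihood)
# counting, the reason few samples give unreliable learnt parameters.
from collections import Counter

# Toy (Delay, Agreeableness) observations, purely illustrative.
data = [("true", "Good"), ("false", "Excellent"), ("false", "Excellent"),
        ("false", "Good"), ("true", "Bad"), ("false", "Excellent")]

def mle_cpt(pairs):
    """Estimate P(Agreeableness | Delay) by relative frequency."""
    joint = Counter(pairs)
    marginal = Counter(d for d, _ in pairs)
    return {(d, a): joint[(d, a)] / marginal[d] for d, a in joint}

cpt = mle_cpt(data)
print(cpt[("false", "Excellent")])  # -> 0.75  (3 of the 4 Delay=false rows)
```

With only a handful of rows per parent configuration, each estimated entry can swing wildly, which is consistent with the degraded structures observed above at low sample counts.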