Background
There are three ways to establish a classifier:
1. Model a classification rule directly. Examples include k-Nearest Neighbours (KNN), Decision Trees, Perceptrons and SVMs.
2. Model the probability of class membership given the input data. Examples include Multi-Layered Perceptrons with the Cross-Entropy cost.
3. Make a probabilistic model of the data within each class. Examples include Naive Bayes and Model-Based classifiers.
Of these three methods, we can also say that:
- 1 and 2 are examples of Discriminative classification
- 3 is an example of Generative classification
- 2 and 3 are examples of Probabilistic classification
Bayesian Rule
The Bayesian Rule is:

P(C|X) = P(X|C) P(C) / P(X)

where we could say that:

- P(C|X) is the posterior: the probability of class C given the input X
- P(X|C) is the likelihood: the probability of the input X given class C
- P(C) is the prior probability of class C
- P(X) is the evidence: the probability of the input X
So for example, if we apply this to a Spam Filter, then P(C) would be the probability that the message is Spam, and P(X|C) is the probability that the given word (input) appears in a message, given that the message is Spam. P(X) is just the probability of the word appearing in any message, estimated from the given training data.
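To make this concrete, here is a minimal Python sketch of the rule applied to a one-word spam filter. The word "offer" and all of the counts below are made up for illustration:

```python
# A minimal sketch of the Bayesian Rule for a one-word spam filter.
# All counts are invented for illustration.

spam_messages = 40            # messages labelled Spam
ham_messages = 60             # messages labelled Not-Spam
word_in_spam = 30             # spam messages containing the word "offer"
word_in_ham = 6               # ham messages containing the word "offer"

total = spam_messages + ham_messages

p_c = spam_messages / total                   # P(C): prior probability of Spam
p_x_given_c = word_in_spam / spam_messages    # P(X|C): word appears, given Spam
p_x = (word_in_spam + word_in_ham) / total    # P(X): word appears in any message

# Bayesian Rule: P(C|X) = P(X|C) * P(C) / P(X)
p_c_given_x = p_x_given_c * p_c / p_x
print(f"P(Spam | 'offer') = {p_c_given_x:.3f}")  # 0.833 with these counts
```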
Naive Bayes
For the Bayesian Rule above, we have to extend it so that it can handle more than one input attribute:

P(C|X1, ..., Xn) = P(X1|C) * P(X2|C) * ... * P(Xn|C) * P(C) / P(X1, ..., Xn)

where, if we continued to use the spam filter idea, X1, ..., Xn would be the input, or the words from the training data.
Naive Bayes is so called because it makes the assumption that all the input attributes are independent of one another: for example, one word doesn't affect another in deciding whether or not a message is spam.
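To illustrate the independence assumption, here is a minimal sketch scoring a hypothetical two-word message. The words, per-word probabilities and class priors are all invented for illustration; with real data they would be estimated from training counts:

```python
from math import prod

# Hypothetical per-word likelihoods, as if learned from training data.
p_word_given_spam = {"free": 0.30, "offer": 0.25}  # P(word | Spam)
p_word_given_ham = {"free": 0.02, "offer": 0.05}   # P(word | Not-Spam)
p_spam, p_ham = 0.4, 0.6                            # class priors

message = ["free", "offer"]

# Naive assumption: words are independent given the class, so the
# joint likelihood is just a product of per-word terms.
score_spam = p_spam * prod(p_word_given_spam[w] for w in message)
score_ham = p_ham * prod(p_word_given_ham[w] for w in message)

print(score_spam > score_ham)  # True: classify as Spam
```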
Example - Training
Let's say we have a table recording whether a game of tennis was played under certain circumstances. These could be the outlook of the weather, the temperature, the humidity and the strength of the wind:
| Day | Outlook  | Temperature | Humidity | Wind   | PlayTennis? |
|-----|----------|-------------|----------|--------|-------------|
| 1   | Sunny    | Hot         | High     | Weak   | No          |
| 2   | Sunny    | Hot         | High     | Strong | No          |
| 3   | Overcast | Hot         | High     | Weak   | Yes         |
| 4   | Rain     | Mild        | High     | Weak   | Yes         |
| 5   | Rain     | Cool        | Normal   | Weak   | Yes         |
| 6   | Rain     | Cool        | Normal   | Strong | No          |
| 7   | Overcast | Cool        | Normal   | Strong | Yes         |
| 8   | Sunny    | Mild        | High     | Weak   | No          |
| 9   | Sunny    | Cool        | Normal   | Weak   | Yes         |
| 10  | Rain     | Mild        | Normal   | Weak   | Yes         |
| 11  | Sunny    | Mild        | Normal   | Strong | Yes         |
| 12  | Overcast | Mild        | High     | Strong | Yes         |
| 13  | Overcast | Hot         | Normal   | Weak   | Yes         |
| 14  | Rain     | Mild        | High     | Strong | No          |
So here we have 4 attributes. What we need to do is create "look-up tables" for each attribute, writing in the probability of observing each attribute value given each outcome. In these tables, note that there are 5 cases where a game was not played, and 9 cases where a game was played.
| OUTLOOK  | Play = Yes | Play = No |
|----------|------------|-----------|
| Sunny    | 2/9        | 3/5       |
| Overcast | 4/9        | 0/5       |
| Rain     | 3/9        | 2/5       |

| TEMPERATURE | Play = Yes | Play = No |
|-------------|------------|-----------|
| Hot         | 2/9        | 2/5       |
| Mild        | 4/9        | 2/5       |
| Cool        | 3/9        | 1/5       |

| HUMIDITY | Play = Yes | Play = No |
|----------|------------|-----------|
| High     | 3/9        | 4/5       |
| Normal   | 6/9        | 1/5       |

| WIND   | Play = Yes | Play = No |
|--------|------------|-----------|
| Strong | 3/9        | 3/5       |
| Weak   | 6/9        | 2/5       |
We must also note that P(Play=Yes) = 9/14 and P(Play=No) = 5/14.
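As a sanity check, here is a minimal Python sketch of this training step. It builds the priors and look-up tables from the tennis table above; the variable names (`data`, `priors`, `tables`) are just illustrative choices:

```python
from collections import Counter, defaultdict

# The tennis dataset from the table above:
# (Outlook, Temperature, Humidity, Wind, PlayTennis?)
data = [
    ("Sunny", "Hot", "High", "Weak", "No"),
    ("Sunny", "Hot", "High", "Strong", "No"),
    ("Overcast", "Hot", "High", "Weak", "Yes"),
    ("Rain", "Mild", "High", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Strong", "No"),
    ("Overcast", "Cool", "Normal", "Strong", "Yes"),
    ("Sunny", "Mild", "High", "Weak", "No"),
    ("Sunny", "Cool", "Normal", "Weak", "Yes"),
    ("Rain", "Mild", "Normal", "Weak", "Yes"),
    ("Sunny", "Mild", "Normal", "Strong", "Yes"),
    ("Overcast", "Mild", "High", "Strong", "Yes"),
    ("Overcast", "Hot", "Normal", "Weak", "Yes"),
    ("Rain", "Mild", "High", "Strong", "No"),
]
attributes = ["Outlook", "Temperature", "Humidity", "Wind"]

# Class frequencies: P(Play=Yes) = 9/14, P(Play=No) = 5/14
class_counts = Counter(row[-1] for row in data)
priors = {c: n / len(data) for c, n in class_counts.items()}

# One look-up table per attribute: P(attribute = value | class)
tables = defaultdict(dict)
for i, attr in enumerate(attributes):
    counts = Counter((row[i], row[-1]) for row in data)
    for (value, cls), n in counts.items():
        tables[attr][(value, cls)] = n / class_counts[cls]

print(priors)                                # {'No': 0.357..., 'Yes': 0.642...}
print(tables["Outlook"][("Sunny", "Yes")])   # 0.222... = 2/9
```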
Testing
Now we hit the Testing phase, woo!
For this, say we are given a new instance and want to know if we can play a game or not; we need to look up the results from the tables above. So, the new instance is:
- X = (Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong)
Firstly we look at the probability that we can play the game, so we use the lookup tables to get:
- P(Outlook=Sunny | Play=Yes) = 2/9
- P(Temperature=Cool | Play=Yes) = 3/9
- P(Humidity=High | Play=Yes) = 3/9
- P(Wind=Strong | Play=Yes) = 3/9
- P(Play=Yes) = 9/14
Next we consider the case where we cannot play a game (Play=No). I think I'll leave you to look those up ;)
Then, using those results, you have to multiply the whole lot together. So you multiply all the probabilities for Play=Yes:
- (2/9) * (3/9) * (3/9) * (3/9) * (9/14) = 0.0053
For those of you who could be bothered to look up the Play=No probabilities, you should get an answer of 0.0206, or something along those lines :)
So, given the probabilities, can we play a game or not? Well, the denominator P(X) is the same for both classes, so we can compare the two products directly:
- 0.0053 < 0.0206, so P(Yes | X) < P(No | X)
So the answer is no, you can’t play tennis, mwuhahaha!
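To tie the training and testing steps together, here is a minimal sketch of the testing phase. It assumes the `priors` and `tables` dictionaries from the training sketch above have already been built:

```python
from math import prod

# The new test instance from above.
instance = {"Outlook": "Sunny", "Temperature": "Cool",
            "Humidity": "High", "Wind": "Strong"}

# Score each class: P(C) * product of P(Xi | C), reusing `priors`
# and `tables` from the training sketch.
scores = {}
for cls in priors:
    likelihood = prod(tables[attr][(value, cls)]
                      for attr, value in instance.items())
    scores[cls] = priors[cls] * likelihood

print(scores)                        # {'No': ~0.0206, 'Yes': ~0.0053}
print(max(scores, key=scores.get))   # 'No': we can't play tennis
```

Picking the class with the larger score is exactly the comparison made above, since the shared P(X) denominator cancels out.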
Nice tutorial, very conceptual and made the idea easy to digest.
I have a question:
What would you do if you saw an unseen event? For example, Humidity is Average.
This will yield a probability of 0 for the whole test instance.
Thanks.
Srego
Hey,
Thanks for the comment.
When I wrote this post I should have clearly stated that all of the values for testing and training were discrete. With that principle in mind, if we saw an unseen event, to take your example, Humidity is Average, then quite simply, yes, the probability yielded would be 0, as we never originally trained the classifier on that event. This is because when we use discrete values, the Naive Bayes classifier can be viewed as a linear classifier; that is, every such Naive Bayes classifier corresponds to a hyperplane decision over the original values, with no unseen events occurring.
However, if we instead used continuous values rather than discrete ones, then we would need to map the events onto a curve, meaning that Naive Bayes is no longer linear; we could then accept unseen events, like Humidity is Average.
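To illustrate the continuous case, here is a minimal sketch that assumes the curve is a Gaussian fitted to each attribute within each class, the usual choice for continuous Naive Bayes. The humidity readings below are invented for illustration:

```python
from math import sqrt, pi, exp

def gaussian_likelihood(x, values):
    """P(attribute = x | class), modelling the attribute as a Gaussian
    fitted to the training values observed for that class."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return exp(-((x - mean) ** 2) / (2 * var)) / sqrt(2 * pi * var)

# Hypothetical humidity readings (as percentages) for Play=Yes days.
humidity_yes = [70, 65, 80, 75, 68, 60, 72, 78, 66]

# An unseen value like 71 now gets a non-zero likelihood.
print(gaussian_likelihood(71, humidity_yes))
```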
Hope that helped
Badgerati
Thank you Badgerati.
This helped.
By the way, nice blog; you can be sure that this blog has a follower :D.
It's nice...
but btw, how is it used for numeric data?