So then, time for part 2 of Perceptrons :D
Here I'm going to give a few examples of the algorithm working, then it's onto the next part :)
The Decision Rule
So from Part 1 we learnt that to draw a Perceptron we need input values and weights. These are used to calculate an activation value, which is then checked against some target value, so that we have:
If the activation is greater than the target (a > t), return 1, else return 0.
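To make that concrete, here's a tiny Python sketch of the decision rule (the function name predict is just mine, not anything official):

```python
def predict(x, w, t):
    # activation = weighted sum of the inputs
    a = sum(xi * wi for xi, wi in zip(x, w))
    # decision rule: 1 if the activation beats the target, else 0
    return 1 if a > t else 0
```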
Cats and Dogs Again
So, looking at that graph of Cats and Dogs with the Perceptron drawn through it, we can say that if the activation value is less than the target, the output is 0, so a Cat; otherwise it's 1, so a Dog.
Cat = 0 // Dog = 1
Example
Let's say we have the following values:
- x = [ 1, 0.5, 2 ]
- w = [ 0.2, 0.5, 0.5 ]
- t = 1
Here we are given the target value of 1. So with this, what is the activation value?
Well, using the equation, we have:
- a = (x1*w1) + (x2*w2) + (x3*w3)
a = (1*0.2) + (0.5*0.5) + (2*0.5)
a = 0.2 + 0.25 + 1
a = 1.45
So, with this, what value does the output equal? Well, is a > t? Since 1.45 > 1, the answer is true, so we output 1. However, if we had these values:
- x = [ 1, 0.5, 2 ]
- w = [ 0.2, 0.5, 0 ]
- t = 0.5
Then the activation would be:
- a = (x1*w1) + (x2*w2) + (x3*w3)
a = (1*0.2) + (0.5*0.5) + (2*0)
a = 0.2 + 0.25 + 0
a = 0.45
In this case, a > t is false (0.45 is less than 0.5), so we output 0.
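If you'd rather check those sums in code than by hand, here are both examples run through the predict sketch from earlier:

```python
# Example 1: activation is 1.45, which is > 1, so output 1 (Dog)
print(predict([1, 0.5, 2], [0.2, 0.5, 0.5], 1))    # 1

# Example 2: activation is 0.45, which is not > 0.5, so output 0 (Cat)
print(predict([1, 0.5, 2], [0.2, 0.5, 0], 0.5))    # 0
```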
Perceptron Learning Rule
In order for a Perceptron to learn (to find the position where it best identifies new classes correctly), we need to use this:
new weight = old weight + n * (target - output) * input
n = the learning rate: how fast do you want the Perceptron to learn?
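As a rough sketch (definitely not the full algorithm from the book!), one training update in Python might look like this. Note that target here means the desired output (0 or 1, i.e. Cat or Dog), while t is still the threshold from the decision rule:

```python
def update_weights(x, w, t, target, n=0.1):
    # run the decision rule to get the Perceptron's current output
    output = predict(x, w, t)
    # nudge each weight: new weight = old weight + n * (target - output) * input
    return [wi + n * (target - output) * xi for wi, xi in zip(w, x)]
```

If the output already matches the target, (target - output) is 0 and the weights don't move; otherwise each weight gets nudged in the direction that reduces the error.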
If you want the full algorithm for this, then read the chapter on Perceptrons in Ethem Alpaydin's book "Introduction to Machine Learning".