Wednesday 6 January 2010

Machine Learning – Artificial Neurons / Perceptrons (Part 1)

Quick Perceptron Example

Here we go… a Decision Boundary… again. Yes, you may now kill me :)

Actually, this time it really isn't that bad, because a Perceptron literally IS a decision boundary. Let's take a look at a rounded decision boundary from KNN:

[Figure: a rounded decision boundary produced by KNN]

Now then, let's take a look at a Perceptron for the same example:

[Figure: the Perceptron's straight-line decision boundary for the same example]

As you may already see, a Perceptron is somewhat simpler. In fact, a Perceptron has lower space and time complexity than KNN. Even though KNN is generally more accurate than a Perceptron, sometimes a Perceptron is the better choice because its calculations are faster.

 

The Algorithm

Now at first the algorithm seems fairly complex, when actually it's pretty simple – well… I say simple.

An Artificial Neuron only has one calculation – seriously! This calculation has 2 components – input values xi and weight values wi.

  • xi – these values are the input values. So if we had two pixels, one was black and one was white, then we would have:
    • x1 = 0      - black
    • x2 = 255  - white
  • wi – this is the weight value. There is one weight value per input value, and weights are normally randomly generated numbers between the values of –1 and 1
    (no, not just –1, 0, 1 – we can have decimal numbers too!). See the sketch just below this list.
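To make those values concrete, here is a minimal sketch in Python (the names x and w are my own, chosen purely for illustration):

    import random

    # Input values: one black pixel and one white pixel
    x = [0, 255]  # x1 = 0 (black), x2 = 255 (white)

    # One randomly generated weight per input, between -1 and 1
    # (decimal values are allowed, not just -1, 0 and 1)
    w = [random.uniform(-1, 1) for _ in range(len(x))]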

With these values, we then multiply each input value by its corresponding weight and sum the products – the result is called the activation value:

a = \sum_{i=1}^{n} x_i w_i

Once the activation value a has been calculated, you then compare it to a target value. If the activation is greater than this target then output = 1, else output = 0.
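Continuing the Python sketch from above, the whole calculation is only a couple of lines (the target value of 0 is an assumption I've made for this example – it isn't fixed by the algorithm):

    import random

    # Inputs and weights from the example above
    x = [0, 255]
    w = [random.uniform(-1, 1) for _ in range(len(x))]

    # Activation: sum of each input multiplied by its corresponding weight
    activation = sum(xi * wi for xi, wi in zip(x, w))

    # Compare the activation to the target value
    target = 0.0  # assumed threshold, purely for illustration
    output = 1 if activation > target else 0
    print(activation, output)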

Here is a picture that will hopefully help your understanding (hopefully :S )

[Figure: an artificial neuron – the inputs xi are multiplied by the weights wi, summed into an activation value, then compared against the target to produce the output]

 

In Part 2 you get to see some proper examples as well as the learning rule for Perceptrons.
