The Self-Organizing Maps algorithm falls under the clustering category.
Self-organizing feature maps (SOMs or SOFMs) are a popular neural network method developed by Kohonen for representing multidimensional data in a lower-dimensional space, usually 2D. The algorithm is unsupervised, i.e. it is self-learning or self-trained.
From the SOM tutorial, part 1:
Training occurs in several steps and over many iterations:
- Each node’s weights are initialized.
- A vector is chosen at random from the set of training data and presented to the lattice.
- Every node is examined to calculate which one’s weights are most like the input vector. The winning node is commonly known as the Best Matching Unit (BMU).
- The radius of the neighbourhood of the BMU is now calculated. This is a value that starts large, typically set to the ‘radius’ of the lattice, but diminishes each time-step. Any nodes found within this radius are deemed to be inside the BMU’s neighbourhood.
- Each neighbouring node’s (the nodes found in step 4) weights are adjusted to make them more like the input vector. The closer a node is to the BMU, the more its weights get altered.
- Repeat from step 2 for N iterations.
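The steps above can be sketched in code. This is a minimal illustration, not the tutorial's own implementation: the lattice size, learning rate, decay schedule, and toy 3-D inputs (think RGB colours) are all assumptions chosen for brevity. The comments map back to the numbered steps.

```python
# Minimal SOM training loop sketch. Lattice size, learning rate, and
# iteration count are illustrative assumptions, not values from the text.
import numpy as np

rng = np.random.default_rng(0)

grid_h, grid_w, dim = 10, 10, 3           # 10x10 lattice, 3-D input vectors
n_iters = 500
sigma0 = max(grid_h, grid_w) / 2.0        # initial neighbourhood radius
lr0 = 0.5                                 # initial learning rate
tau = n_iters / np.log(sigma0)            # radius decay time constant

# Step 1: initialise each node's weights randomly.
weights = rng.random((grid_h, grid_w, dim))

# Lattice coordinates of every node, for neighbourhood distances.
ys, xs = np.mgrid[0:grid_h, 0:grid_w]

data = rng.random((100, dim))             # toy training set

for t in range(n_iters):
    # Step 2: pick a training vector at random.
    x = data[rng.integers(len(data))]

    # Step 3: find the Best Matching Unit (BMU) - the node whose
    # weights are most like x (smallest Euclidean distance).
    dists = np.linalg.norm(weights - x, axis=2)
    bmu_y, bmu_x = np.unravel_index(np.argmin(dists), dists.shape)

    # Step 4: the neighbourhood radius starts large and shrinks
    # each time-step (exponential decay here, one common choice).
    sigma = sigma0 * np.exp(-t / tau)
    lr = lr0 * np.exp(-t / n_iters)

    # Step 5: adjust neighbouring nodes' weights towards x; nodes
    # closer to the BMU on the lattice are altered more (Gaussian falloff).
    grid_dist2 = (ys - bmu_y) ** 2 + (xs - bmu_x) ** 2
    influence = np.exp(-grid_dist2 / (2 * sigma ** 2))
    weights += lr * influence[..., None] * (x - weights)
```

After training, nearby nodes on the lattice end up with similar weight vectors, which is what lets a 2D map preserve the topology of the higher-dimensional input space.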
Difficult to understand? No worries. Check the videos below, in which this algorithm is explained simply with an actual example.
Further reading, with very good examples but quite technical articles – http://www.mql5.com/en/articles/283
and http://www.cs.bham.ac.uk/~jxb/NN/l16.pdf [PDF]