Friday 7 April 2017

Links within Links - Pointers: the journey continues. (Wiring an RBM)

Having a renewed interest in using pointers for more than passing arrays and functions between functions, I am now using them to create a Link matrix that preserves symmetry and equality between the weights from i to j and from j to i.

I first decided to have each of the weights for each neuron pointing into a weight matrix. Having set up my array of pointers for each neuron, I then had to award it the right amount of memory, because a double **Array needs sizeof(double *) worth of memory per element, and not sizeof(double)!
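In other words, just the allocation step looks like this (a minimal sketch reusing the Wgt/LWgt names and the size 26 from my test program further down):

int main(){
    const int n = 26;                 // same size as the test program below

    double  *Wgt  = new double[n];    // n doubles: the weight values themselves
    double **LWgt = new double*[n];   // n pointers to double: the links

    // Each slot of LWgt is sizeof(double *) wide, not sizeof(double):
    // LWgt stores addresses, Wgt stores the actual numbers.

    delete[] Wgt;
    delete[] LWgt;
}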

I then had the problem of pointing each Weight to the same Weight as its symmetric opposite, and I found the simplest way was to turn the Weight matrix into another array of pointers that point to their own symmetric weight values.
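Standalone, the trick looks something like this (a toy sketch - the names store and W and the size N are made up here; in the real net the pointer arrays live inside the neurons):

#include <iostream>

int main(){
    const int N = 4;                 // made-up size

    double store[N][N] = {};         // the actual weight values (upper triangle used)

    // The Link matrix: each cell points at the value for its unordered pair,
    // so W[i][j] and W[j][i] are literally the same double.
    double *W[N][N];
    for(int i = 0; i < N; i++)
        for(int j = 0; j < N; j++)
            W[i][j] = (i <= j) ? &store[i][j] : &store[j][i];

    *W[1][3] = 0.5;                  // write through one direction...
    std::cout << *W[3][1] << "\n";   // ...read it back through the other: 0.5
}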

Now I am all sorted and everything points to something else, and I feel complete - I feel whole!

It's all connected and I feel great!

Neuron -> Weight Pointer[n] -> Weight Matrix[n][m] -> Weight Matrix[m][n]

Easy!

Now I need only write to and read from each neuron for updates, learning, etc. In fact I can just isolate one neuron and run everything on that one neuron.
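Roughly, an update for one neuron only has to walk its own pointer arrays - a hypothetical sketch, where everything except LWgt (the members act, bias, in, n and the logistic activation) is invented for illustration:

#include <cmath>
#include <iostream>

// Hypothetical neuron-centred view: everything an update needs is
// reachable through the neuron's own pointer arrays.
struct Neuron {
    double   act;     // this neuron's activation
    double   bias;
    double **LWgt;    // links into the shared weight matrix
    double **in;      // links to the activations of connected neurons
    int      n;       // number of connections

    void update(){
        double sum = bias;
        for(int k = 0; k < n; k++)
            sum += (*LWgt[k]) * (*in[k]);    // read weights through the links
        act = 1.0 / (1.0 + std::exp(-sum));  // logistic activation
    }
};

int main(){
    double w = 0.7, x = 1.0;          // one shared weight, one input activation
    double *wp = &w, *xp = &x;
    Neuron nrn{0.0, 0.1, &wp, &xp, 1};
    nrn.update();                     // act = sigmoid(0.1 + 0.7*1.0)
    std::cout << nrn.act << "\n";     // ~0.689974
}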

Another triumph for the Neuron Centered Algo. [WARNING: Weight Vectors STUNT GROWTH!]


Saturday 1 April 2017

The Wonder of Pointers and their use in Building Neuron Centered Algos.

Well, I have been looking at the wiring on one of my nets and I found it just wasn't up to the requirement of forward and backward updates. I need it to produce values at both the input and the output nodes - this is for CD, contrastive divergence, a clever learning algorithm that uses samples taken at various phases:

Phase 1: clamp the inputs, generate hidden values (sample h given v)
Phase 2: unclamp the inputs, generate input values (sample v' given h)
Phase 3: clamp to v' and generate h'

This process is repeated and the values are used to compute weight changes, much as with backpropagation.
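For the record, a bare-bones CD-1 update built from those three phases might look like this - every name here (NV, NH, W, a, b, cd1) is invented for the sketch, the units are binary, it uses a plain weight matrix rather than my pointer wiring, and bias updates are left out:

#include <cmath>
#include <cstdlib>

const int NV = 6, NH = 4;            // visible and hidden layer sizes (made up)
double W[NV][NH], a[NV], b[NH];      // weights and biases, zero-initialised

double sigmoid(double x){ return 1.0 / (1.0 + std::exp(-x)); }
double coin(double p){ return (std::rand() / (double)RAND_MAX) < p ? 1.0 : 0.0; }

// One CD-1 step on a single training vector v (bias updates omitted).
void cd1(const double *v, double eps){
    double h[NH], v1[NV], h1[NH];

    // Phase 1: clamp the inputs, sample h given v
    for(int j = 0; j < NH; j++){
        double s = b[j];
        for(int i = 0; i < NV; i++) s += v[i] * W[i][j];
        h[j] = coin(sigmoid(s));
    }
    // Phase 2: unclamp, sample v' given h
    for(int i = 0; i < NV; i++){
        double s = a[i];
        for(int j = 0; j < NH; j++) s += W[i][j] * h[j];
        v1[i] = coin(sigmoid(s));
    }
    // Phase 3: clamp to v', compute h' given v' (probabilities this time)
    for(int j = 0; j < NH; j++){
        double s = b[j];
        for(int i = 0; i < NV; i++) s += v1[i] * W[i][j];
        h1[j] = sigmoid(s);
    }
    // Update: eps * (positive statistics - negative statistics)
    for(int i = 0; i < NV; i++)
        for(int j = 0; j < NH; j++)
            W[i][j] += eps * (v[i]*h[j] - v1[i]*h1[j]);
}

int main(){
    double v[NV] = {1,0,1,1,0,0};
    for(int t = 0; t < 100; t++) cd1(v, 0.1);   // nudge W toward reconstructing v
}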

My neuron centered design needed to update itself using weights that had formerly been assigned to the neurons in successive layers.

The answer to this was the use of pointers as weights. I had tried using a function to equate the symmetrical links between neurons, but of course all I needed was these magic pointers. It was as though they were designed for this purpose.

Each neuron has an array of pointers to a weight matrix that holds all the weights of the network. For a restricted Boltzmann machine the pointers will point to the same weights, allowing both the forward and backward propagation, and allowing the weight updates to affect both the forward and backward weights that belong to neurons in adjacent layers.

If I create two pointers to the same weight value and I change either one of them, then this single value is changed. Fantastically simple, and to think I had only been using pointers to pass arrays between functions. They are so much more powerful for this purpose and fully compatible with ANSI C and OpenCL.

Here is my little test program to prove the point:

#include <iostream>

using namespace std;

class neuron{

public:

double  *Wgt;    // this neuron's own weight values
double **LWgt;   // links: pointers into other neurons' weights

void init(){
Wgt  = new double[26];
LWgt = new double*[26];
}

};

int main(){

neuron *node = new neuron[10];

for(int i=0;i<10;i++){
node[i].init();
}

// Wire each of the first five neurons to a weight owned by a
// neuron in the "other layer" (node[i+5]).
for(int i=0;i<5;i++){
node[i].LWgt[23] = &node[i+5].Wgt[12];
}

node[6].Wgt[12]=50.6987;   // These are the initial values in the Weight matrix
node[8].Wgt[12]=0.999923;

// Read the same values back through the links:
cout<<*node[1].LWgt[23]<<" "<<*node[3].LWgt[23]<<"--\n";   // prints: 50.6987 0.999923--

*node[1].LWgt[23]=33.234;  // Here is the pointer to this matrix: see it change
                           // the value on the Weight matrix like magic

cout<<node[6].Wgt[12]<<"---\n";   // prints: 33.234---

return 0;
}

Wiring a Neuron Centered Network just became a whole lot easier!