Saturday 27 January 2024

Seed Recall - The latest developments

OK, so in case you haven't read it, I uploaded a post about what I call Seed Recall here.

Basically, you train a neural network - in this case on a reinforcement learning task - and it quickly finds a solution. Now, if you create a random seed that is used to change the activation functions in that network, you can recall a solution unique to that seed.

I.e. I put seed 0.52 into the NN and train it on the problem. Reload the weights and the seed, and I get the same solution I trained for. And you might say - so what?

Well, the beauty of it is that if I provide another seed, without retraining, I get another unique solution based around the form of the first - a variant of that solution, if you like. This is the essence of 'Seed Recall'.

Now here is the good bit. It turns out there are actually three seeds at play, and each one is useful. The first is the Training seed, which you choose before you train; the run will always train to the same result, guided by that seed.

The second is the Recall seed. This is also set at training time, and it is the seed later used to recall the trained solution.

The third is the Discovery seed. This is set after training and recall are done, when you want to explore the solution space around the recalled solution (a toy sketch of all three follows the list below).

Three seeds to rule them all:

1. Training Seed

2. Recall Seed

3. Discovery Seed
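To make the three roles concrete, here is a minimal toy sketch in NumPy. It is only an illustration, not the actual Seed Recall code: it assumes the seed derives a per-neuron slope that reshapes a tanh activation, so the same seed always reproduces the same activation shape.

import numpy as np

# Toy illustration only - not the actual Seed Recall code. Assumption:
# each seed derives a per-neuron slope that reshapes a tanh activation,
# so the same seed always reproduces the same activation shape.

def seeded_slopes(seed, n_hidden):
    rng = np.random.default_rng(seed)
    return 1.0 + 0.5 * rng.standard_normal(n_hidden)

def forward(x, W1, W2, slopes):
    # Hidden activation modulated by the seed-derived slopes.
    h = np.tanh(slopes * (W1 @ x))
    return W2 @ h

TRAINING_SEED = 42   # fixes the weight init, so training repeats exactly
RECALL_SEED = 52     # fixes the activation shape the network trains under
n_in, n_hidden, n_out = 4, 16, 2

rng = np.random.default_rng(TRAINING_SEED)
W1 = 0.5 * rng.standard_normal((n_hidden, n_in))
W2 = 0.5 * rng.standard_normal((n_out, n_hidden))
# ... training would adjust W1 and W2 here, with RECALL_SEED's slopes active ...

x = np.ones(n_in)
# Recall: same weights + same Recall seed = the solution you trained for.
recalled = forward(x, W1, W2, seeded_slopes(RECALL_SEED, n_hidden))

# Discovery: keep the trained weights, vary only the seed, and each new
# seed gives a structured variant of the recalled solution.
for discovery_seed in (7, 8, 9):
    print(discovery_seed, forward(x, W1, W2, seeded_slopes(discovery_seed, n_hidden)))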

Stay tuned for a GitHub page for this one. Email me for more info.

Sunday 14 January 2024

General GIT usage for forgetful old GITs

create a repository on the GitHub website


git clone https://github.com/yourname/repository


copy your files into repository/


git add eachfile.cc (stage each file you want tracked)


git config --global user.email "youremail" (set this to your GitHub login email)


git commit -a (commit the changes locally; add -m "your message" to set the commit message inline)


git status (check the status of your changes)


git push (push the local commits to the remote server)


For a Vim refresher: Entering Editing Mode in Vim Editor - GeeksforGeeks

Wednesday 5 August 2020

A Kalman Filter for Predicting Randomized Chaos

I downloaded this wonderfully elegant Kalman filter from GitHub: https://github.com/hbcbh1999/kalman-filter It's a really nice and simple implementation using the equations you can find on Wikipedia: https://en.wikipedia.org/wiki/Kalman_filter It looks really awesome when applied to the x dimension of the Arneodo chaotic attractor with added noise.

[Figure: KalmanFilteronChaos - the filter tracking the noisy Arneodo x series]
In [45]:
%matplotlib inline
import numpy as np
In [46]:
class KalmanFilter(object):
    def __init__(self, F = None, B = None, H = None, Q = None, R = None, P = None, x0 = None):

        if(F is None or H is None):
            raise ValueError("Set proper system dynamics.")

        self.n = F.shape[1]   # state dimension
        self.m = H.shape[0]   # measurement dimension

        self.F = F                                         # state transition model
        self.H = H                                         # observation model
        self.B = 0 if B is None else B                     # control-input model
        self.Q = np.eye(self.n) if Q is None else Q        # process noise covariance
        self.R = np.eye(self.m) if R is None else R        # measurement noise covariance
        self.P = np.eye(self.n) if P is None else P        # estimate covariance
        self.x = np.zeros((self.n, 1)) if x0 is None else x0

    def predict(self, u = 0):
        # Project the state estimate and covariance forward one step.
        self.x = np.dot(self.F, self.x) + np.dot(self.B, u)
        self.P = np.dot(np.dot(self.F, self.P), self.F.T) + self.Q
        return self.x

    def update(self, z):
        y = z - np.dot(self.H, self.x)                           # innovation
        S = self.R + np.dot(self.H, np.dot(self.P, self.H.T))    # innovation covariance
        K = np.dot(np.dot(self.P, self.H.T), np.linalg.inv(S))   # Kalman gain
        self.x = self.x + np.dot(K, y)
        I = np.eye(self.n)
        # Joseph-form covariance update for numerical stability.
        self.P = np.dot(np.dot(I - np.dot(K, self.H), self.P),
                        (I - np.dot(K, self.H)).T) + np.dot(np.dot(K, self.R), K.T)
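
For reference, these are the standard predict/update equations from the Wikipedia page, written with the same symbols the class uses (the covariance update is the numerically stable Joseph form):

Predict:  x = F x + B u
          P = F P F^T + Q

Update:   y = z - H x                               (innovation)
          S = H P H^T + R                           (innovation covariance)
          K = P H^T S^-1                            (Kalman gain)
          x = x + K y
          P = (I - K H) P (I - K H)^T + K R K^T     (Joseph-form covariance)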

        
In [47]:
def arneodo(x, y, z, a=-5.5, b=3.5, c=-1):
    '''
    Given:
       x, y, z: a point of interest in three-dimensional space
       a, b, c: parameters defining the Arneodo attractor
    Returns:
       x_dot, y_dot, z_dot: values of the Arneodo system's time
           derivatives at the point x, y, z
    '''
    x_dot = y
    y_dot = z
    z_dot = -a*x - b*y - z + c*(x**3)
    return x_dot, y_dot, z_dot


dt = 0.01
num_steps = 7000

# Need one more for the initial values
xs = np.empty(num_steps + 1)
ys = np.empty(num_steps + 1)
zs = np.empty(num_steps + 1)

# Set initial values
xs[0], ys[0], zs[0] = (0.1, 0, 0.1)

# Step through "time", calculating the derivatives at the current point
# and using them to estimate the next point (simple Euler integration)
for i in range(num_steps):
    x_dot, y_dot, z_dot = arneodo(xs[i], ys[i], zs[i])
    xs[i + 1] = xs[i] + (x_dot * dt)
    ys[i + 1] = ys[i] + (y_dot * dt)
    zs[i + 1] = zs[i] + (z_dot * dt)
    
In [69]:
def example():
    dt = 1.0/60
    # Constant-acceleration kinematic model over [position, velocity, acceleration].
    F = np.array([[1, dt, 0], [0, 1, dt], [0, 0, 1]]).reshape(3, 3)
    # Only the first state component is observed.
    H = np.array([1, 0, 0]).reshape(1, 3)
    Q = np.array([[0.05, 0.05, 0.0], [0.05, 0.05, 0.0], [0.0, 0.0, 0.0]]).reshape(3, 3)
    R = np.array([0.5]).reshape(1, 1)

    # Noisy observations of the Arneodo x series computed above.
    measurements = xs + np.random.normal(0, 2, num_steps + 1)
    kf = KalmanFilter(F = F, H = H, Q = Q, R = R)
    predictions = []

    for z in measurements:
        predictions.append(np.dot(H, kf.predict())[0])
        kf.update(z)

    import matplotlib.pyplot as plt
    plt.plot(range(len(measurements)), measurements, label = 'Measurements')
    plt.plot(range(len(predictions)), np.array(predictions), label = 'Kalman Filter Prediction')
    plt.legend()
    plt.show()
    
In [70]:
if __name__ == '__main__':
    example()