So I took a random data set from the internet and set up the following example:
double[] data
2 inputs:
* data[n-1]
* data[n]
Correct output:
* data[n]
So the correct output is always identical to the second input.
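To make the setup concrete, this is roughly how I build the sample pairs, sketched in Python for brevity (the function name is my own, my real code is C#):

```python
def build_samples(series):
    """Build ([previous value, current value] -> current value) training pairs."""
    inputs, outputs = [], []
    for n in range(1, len(series)):
        inputs.append([series[n - 1], series[n]])  # two inputs: data[n-1], data[n]
        outputs.append([series[n]])                # target = second input
    return inputs, outputs

data = [0.3, 0.5, 0.2, 0.8]
X, y = build_samples(data)
# X = [[0.3, 0.5], [0.5, 0.2], [0.2, 0.8]]
# y = [[0.5], [0.2], [0.8]]
```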
I use the following network and teacher:
Code:
// 2 inputs, 10 hidden neurons, 1 output; bipolar sigmoid with alpha = 2
m_network = new ActivationNetwork(new BipolarSigmoidFunction(2), 2, 10, 1);
// plain backpropagation, learning rate 0.1, no momentum
m_teacher = new BackPropagationLearning(m_network) { LearningRate = 0.1, Momentum = 0 };
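One thing I suspect matters here: the bipolar sigmoid's output is bounded in (-1, 1), so any target value outside that range is simply unreachable for the output neuron. This is the kind of min-max scaling into the sigmoid's range that I mean, as a Python sketch (the function name and the 0.1 margin are my own invention, not part of AForge):

```python
def scale_bipolar(series, margin=0.1):
    """Map a series linearly into [-1 + margin, 1 - margin], so every
    target is reachable by a bipolar sigmoid whose range is (-1, 1)."""
    lo, hi = min(series), max(series)
    span = hi - lo
    return [(-1 + margin) + (2 - 2 * margin) * (v - lo) / span for v in series]

scaled = scale_bipolar([0, 5, 10])
# scaled[0] = -0.9, scaled[1] = 0.0, scaled[2] = 0.9
```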
But the results are very bad. They don't even come close: they usually stabilize about 10% above or below the correct values, and are actually worse than a run that uses only the first (i.e. the wrong) input.
* What am I doing wrong?
* Is this even a problem a neural network can solve?
* Am I using the right network type?
* Am I using the right teacher?
* Is the BipolarSigmoidFunction the right activation function here?
This is what I tried so far:
* Using only one input that is always identical to the output => did not work; the network was not able to figure out that it simply has to pass the input through as the output.
* Using a constant "0" as the first input => did not work either. It looks more promising, but is still far from the correct output.
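As a sanity check on the first attempt (the pure identity mapping), here is a sketch in plain Python rather than AForge showing that a single linear neuron trained by gradient descent learns y = x easily once the data sits in a small range; all names and values are my own:

```python
import random

random.seed(0)
w, b, lr = random.uniform(-0.5, 0.5), 0.0, 0.1
# identity task: target equals input, data already in (-1, 1)
samples = [(-0.8, -0.8), (-0.3, -0.3), (0.2, 0.2), (0.7, 0.7)]

for _ in range(2000):
    for x, t in samples:
        y = w * x + b          # linear neuron
        err = y - t            # squared-error gradient terms
        w -= lr * err * x
        b -= lr * err
# w should approach 1 and b should approach 0
```

If even this fails on the real data, the data pipeline (scaling, pair construction) is the likely culprit rather than the network itself.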