ActivationNetwork


ActivationNetwork

Postby Generator » Thu Aug 24, 2017 9:54 am

I am trying to learn how to work with NNs, so I set up an "easy" example just to see how this works.

So I took a random data set from the internet and set up the following example:

double[] data

2 inputs:
* data[n-1]
* data[n]

Correct Output:
data[n]

So the correct output is always identical to the second input.
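
This is roughly how I build the training pairs (a simplified sketch of my code; data is the normalized series):

Code: Select all
// Build input/output pairs from the normalized series "data".
// input[i]  = { data[i], data[i + 1] }   (previous and current value)
// output[i] = { data[i + 1] }            (identical to the second input)
double[][] input = new double[data.Length - 1][];
double[][] output = new double[data.Length - 1][];

for (int i = 0; i < data.Length - 1; i++)
{
    input[i] = new double[] { data[i], data[i + 1] };
    output[i] = new double[] { data[i + 1] };
}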

I use the following network and teacher:

Code: Select all
// 2 inputs -> hidden layer of 10 neurons -> 1 output,
// all using the bipolar sigmoid activation (alpha = 2)
m_network = new ActivationNetwork(new BipolarSigmoidFunction(2), 2, 10, 1);
m_teacher = new BackPropagationLearning(m_network) { LearningRate = 0.1, Momentum = 0 };
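
The training loop itself looks roughly like this (a sketch; the epoch limit and error threshold are arbitrary values I picked):

Code: Select all
// Run back-propagation epochs until the summed squared error
// is small enough, or a fixed number of epochs has passed.
for (int epoch = 0; epoch < 10000; epoch++)
{
    double error = m_teacher.RunEpoch(input, output);

    if (error < 0.001)
        break;
}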


But the results are very bad. They don't even come close. Usually they stabilize at about 10% above or below the correct results. They are even worse than a run with just the first (the wrong) input.

* What am I doing wrong?
* Is this even a problem an NN is capable of solving?
* Am I using the correct NN?
* Am I using the correct teacher?
* What about the BipolarSigmoidFunction?

This is what I have tried so far:
* Using only 1 input which is always identical to the output => did not work; the network was not able to figure out that it just has to pass the input through as the output.
* Using a constant "0" as the first input => did not work either. It looks more promising, but is still far from the correct output.
Generator
 
Posts: 4
Joined: Wed Aug 23, 2017 6:43 pm

Re: ActivationNetwork

Postby andrew.kirillov » Thu Aug 24, 2017 11:54 am

What sort of input data do you use? You understand that the network's output will be in the range of the activation function's output, right? So if you use a bipolar activation function, which produces values in the [-1, 1] range, then your network will only produce values from that range.
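
If your raw values are not already in that range, you have to scale them first, for example with min-max normalization, something along these lines (a sketch; "rawData" stands for your original, unnormalized series):

Code: Select all
// Min-max scale the raw series into the [-1, 1] range produced
// by the bipolar sigmoid activation function (requires System.Linq).
double min = rawData.Min();
double max = rawData.Max();

double[] data = rawData
    .Select(v => 2.0 * (v - min) / (max - min) - 1.0)
    .ToArray();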

If you have fundamental questions, like "Is this the right task for an ANN?", maybe you should start with the theory of ANNs?
With best regards,
Andrew


Interested in supporting AForge.NET Framework?
andrew.kirillov
Site Admin, AForge.NET Developer
 
Posts: 3453
Joined: Fri Jan 23, 2009 9:12 am
Location: UK

Re: ActivationNetwork

Postby Generator » Thu Aug 24, 2017 12:35 pm

All data is normalized so it fits between -1 and 1.

I have already read about the theory of ANNs, but I could not think of any reason why my example should not work. Since my example does not work, there is obviously something I am overlooking.

This is my first example, where I wanted to test my theoretical knowledge.

As I understand it, the weights connected to the first input (the wrong one) should decay during training, so that only the second input (the correct one) remains relevant.

This seems to be somewhat true, because when I look at the weights in debug mode, they look like this (just an example):
Code: Select all
Weights   {double[2]}   double[]
      [0]   0.0043117559239923   double
      [1]   0.39307293927399745   double
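
This is how I read them out in code (a sketch; Layers[0] is the hidden layer, and each neuron holds one weight per network input):

Code: Select all
// Print the input weights of every hidden-layer neuron.
foreach (var neuron in m_network.Layers[0].Neurons)
{
    double[] w = neuron.Weights;
    Console.WriteLine("w[0] = {0}, w[1] = {1}", w[0], w[1]);
}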

Generator
 
Posts: 4
Joined: Wed Aug 23, 2017 6:43 pm

Re: ActivationNetwork

Postby Generator » Fri Sep 08, 2017 4:36 pm

Now I have reduced the result set so that the network only has to determine whether the next result is rising or falling. This works fine for the first 200 entries, where the network is close to 100% right. But then the correct rate suddenly drops, and it ends up at about 70% correct values.

Any clue why it starts guessing wrong after about 200 perfect guesses?
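
For reference, this is roughly how I count the correct rate (a sketch; I treat the sign of the network's output as the rising/falling decision):

Code: Select all
// Count how often the sign of the network's prediction matches
// the sign of the expected value (rising vs. falling).
int correct = 0;

for (int i = 0; i < input.Length; i++)
{
    double predicted = m_network.Compute(input[i])[0];

    if (Math.Sign(predicted) == Math.Sign(output[i][0]))
        correct++;
}

Console.WriteLine("correct rate: {0:P1}", (double)correct / input.Length);
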
Generator
 
Posts: 4
Joined: Wed Aug 23, 2017 6:43 pm



