Input conversation
8 posts • Page 1 of 1
Input conversation

Andrew,
I am converting all inputs into numbers between -1 and 1. Is that correct for your NeuralNet, or should it be 0 to 1?
Re: Input conversation

Hello,
Converting inputs is not as critical as converting desired outputs. If the outputs are not converted to the range of your sigmoid function, the network will never reach the right answer. The range for outputs is determined by the type of sigmoid function you use (which is what you need to check in your code). If you use SigmoidFunction, its output is in the [0, 1] range; if you use BipolarSigmoidFunction, it is in the [-1, 1] range. The documentation is always the first place to check. Personally, I try to avoid the [0, 1] range for inputs; the [-1, 1] range leads to quicker learning, and there is a reason for this. If you check the theory (and the code), you will find that weight updates are calculated by multiplying a neuron's error value by its inputs. If an input is zero, then there is no update to the corresponding weight of the neuron.
Re: Input conversation

Hi Andrew,
I don't think it is strictly true that all outputs or inputs need to be in the range [0, 1] or [-1, 1], though it could be one reason for wrong testing outputs. I have tested with 9600 inputs and 2 outputs in the range [-1, 1], like [1, 1, 1, -1, -1, ..., 1, -1], or [-0.5, 0.5, -0.5, ..., 0.5], or [0.001, -0.001, ..., 0.001]. In the end, the output I get does not match the desired output. I think it still depends on the number of inputs, because sum += weight * input, and this sum can grow far larger in magnitude than the range [-1, 1]; even the threshold cannot compensate for that amount. For example, assuming that among the 9600 inputs we have 7000 inputs of -1 and 2600 inputs of 1, and the weights are all positive, then the sum falls far below the [-1, 1] range, and after applying BipolarSigmoidFunction the output always becomes -1.
Re: Input conversation
Did I say they must be exactly in this range? What I meant is that they are not supposed to go outside of it. If your sigmoid function cannot produce anything outside [-1, 1] (or whatever its range is), then all expected outputs must be within that range. But it is up to you whether to use the full range or a smaller sub-range like [-0.5, 0.5], etc.
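One way to use such a sub-range, sketched in Python (the linear scaling here is my assumption, not something from the posts): map targets into [-0.5, 0.5] for training and invert the map when reading the network's output.

```python
def to_subrange(y, lo=-1.0, hi=1.0, sub_lo=-0.5, sub_hi=0.5):
    # Linearly map a target from [lo, hi] into [sub_lo, sub_hi]
    return sub_lo + (y - lo) * (sub_hi - sub_lo) / (hi - lo)

def from_subrange(t, lo=-1.0, hi=1.0, sub_lo=-0.5, sub_hi=0.5):
    # Inverse map: take a network output back to the original range
    return lo + (t - sub_lo) * (hi - lo) / (sub_hi - sub_lo)

print(to_subrange(1.0))    # 0.5
print(to_subrange(-1.0))   # -0.5
print(from_subrange(to_subrange(0.3)))  # round-trips to approximately 0.3
```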
Re: Input conversation

Hi Andrew,
Yes, you are right, but what I mean is that with a large number of inputs and with outputs expected in [-1, 1] or [-0.5, 0.5], the output results are not distributed across the range [-1, 1]; the output is almost constant (1 or -1) when I use the sigmoid function. I don't know how to overcome this problem; do you have any suggestions? Even though I have scaled the outputs and inputs many times, I still get stuck on this problem.
Re: Input conversation

For a large number of inputs, you may need to play with the alpha parameter of the sigmoid function. Make it a small value.
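A quick Python check of that suggestion, using the tanh form of the bipolar sigmoid; the weighted sum of -4400 and the alpha of 1/4600 are the rough magnitudes discussed in this thread, not measured values:

```python
import math

def bipolar_sigmoid(x, alpha):
    # Equal to 2/(1+exp(-alpha*x)) - 1; tanh form avoids overflow
    return math.tanh(alpha * x / 2.0)

weighted_sum = -4400.0  # illustrative: on the order of the sums quoted
for alpha in (2.0, 1.0 / 4600.0):
    print(alpha, bipolar_sigmoid(weighted_sum, alpha))
# alpha = 2      -> -1.0 (fully saturated)
# alpha = 1/4600 -> about -0.44 (still on the responsive part of the curve)
```

Shrinking alpha rescales the whole activation curve, so even a sum in the thousands lands on its near-linear region instead of the flat tails, which is why gradients stop vanishing.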
Re: Input conversation

I have 9600 inputs, as mentioned before. When 7000 of them are -1 and 2600 are 1, the sum is approximately -4k; but the sum sometimes has a different value, like -0.7 (approximately 0), when the proportion of -1 and 1 is close to 50:50. I trained 1000 samples, each with 9600 inputs. Inputs and outputs are like [-1, 1, ..., -1] and [-1, 1]; learning rate: 0.1, momentum: 0.1, alpha: 2, neurons in 1st layer: 2, neurons in 2nd layer: 1. After that I tried changing alpha to 1/4600, and the error decreased only from 609 to 608 over 1000 iterations. To make the error decrease to 0, I would need something like 600k iterations, I think. To me, that takes a lot of time for just 1000 samples. If someone wants to increase the number of samples to 10k or 100k with this simple network structure (9600 inputs, 2 outputs, 2 neurons in the 1st layer, 1 neuron in the 2nd layer), are there any better solutions or approaches?
Re: Input conversation
Maybe convolutional neural networks?