What is the difference between multi-layer perceptron and generalized feed forward neural network?
I'm reading this paper: An artificial neural network model for rainfall forecasting in Bangkok, Thailand. The author created six models, two of which have the following architectures:

Model B: a simple multilayer perceptron with a sigmoid activation function and 4 layers, in which the numbers of nodes are 5-10-10-1, respectively.

Model C: a generalized feedforward network with a sigmoid activation function and 4 layers, in which the numbers of nodes are 5-10-10-1, respectively.
In the Results and discussion section of the paper, the author concludes that:

Model C enhanced the performance compared to Models A and B. This suggests that the generalized feedforward network performed better than the simple multilayer perceptron network in this study.

Is there a difference between these two architectures?
machine-learning neural-network deep-learning mlp gfnn
If you are looking for intuition about why it might work better, as reported in the paper, I'll add a link to my answer. – DuttaA, Sep 8 at 10:22
asked Sep 8 at 9:40 by hyTuev · edited Sep 8 at 15:10 by Stephen Rauch
2 Answers
You missed the diagram they provided for the GFNN; here is the diagram from their paper: [diagram not reproduced]

You can clearly see what the GFNN does: unlike in an MLP, the inputs are applied to the hidden layers as well. In an MLP the only way information can travel to a hidden layer is through the previous layers; in a GFNN the input information is directly available to every hidden layer.

I might add that this type of connection is used in ResNet CNNs, which dramatically increased performance compared to other CNN architectures.

answered Sep 8 at 10:20 – DuttaA
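The difference can be sketched numerically. Below is a minimal NumPy forward pass for both wirings, assuming (as one plausible reading of such diagrams) that the GFNN feeds a weighted copy of the raw input into every hidden layer; the names and the exact skip wiring are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, weights):
    """Plain MLP: each layer sees only the previous layer's output."""
    h = x
    for W, b in weights:
        h = sigmoid(h @ W + b)
    return h

def gfnn_forward(x, weights, skip_weights):
    """Generalized feedforward: each layer also receives the raw input."""
    h = x
    for (W, b), V in zip(weights, skip_weights):
        h = sigmoid(h @ W + x @ V + b)
    return h

# The 5-10-10-1 architecture from the question.
sizes = [5, 10, 10, 1]
weights = [(rng.normal(size=(m, n)) * 0.1, np.zeros(n))
           for m, n in zip(sizes[:-1], sizes[1:])]
skip_weights = [rng.normal(size=(5, n)) * 0.1 for n in sizes[1:]]

x = rng.normal(size=(3, 5))  # batch of 3 samples, 5 features
print(mlp_forward(x, weights).shape)                 # (3, 1)
print(gfnn_forward(x, weights, skip_weights).shape)  # (3, 1)
```

Note that in this sketch the GFNN variant has extra parameters (the `skip_weights`), so part of any performance gain could simply come from the larger parameter count, not from the topology alone.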
Thanks for your answer. May I ask here whether the Keras deep learning library is able to do this? I know Keras can create MLPs and I have done some projects with those kinds of models, but this generalized feedforward NN seems awesome. – hyTuev, Sep 8 at 12:32
I don't know about Keras, but it is certainly possible in TensorFlow, which makes me assume it will also be possible in Keras. Anyway, you could ask that as a new question, but it definitely seems possible. – DuttaA, Sep 8 at 12:42
I think the best way to understand it is to read the original paper, A generalized feedforward neural network architecture for classification and regression. Its abstract:

This article presents a new generalized feedforward neural network (GFNN) architecture for pattern classification and regression. The GFNN architecture uses as the basic computing unit a generalized shunting neuron (GSN) model, which includes as special cases the perceptron and the shunting inhibitory neuron. GSNs are capable of forming complex, nonlinear decision boundaries. This allows the GFNN architecture to easily learn some complex pattern classification problems. In this article the GFNNs are applied to several benchmark classification problems, and their performance is compared to the performances of SIANNs and multilayer perceptrons. Experimental results show that a single GSN can outperform both the SIANN and MLP networks.

I should add that the paper is quite old; people usually use the ReLU nonlinearity these days. Also, take a look here.

answered Sep 8 at 9:56, edited Sep 8 at 10:01 – Media
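To make the abstract concrete: a shunting neuron divides a perceptron-like excitatory term by a state-dependent inhibitory term, and that division is what lets even a single unit carve nonlinear decision boundaries. The sketch below is only my reading of that idea, not the paper's exact equations; the functional form, parameter names, and choice of f and g are assumptions:

```python
import numpy as np

def gsn(x, w, w0, c, c0, a=1.0, f=np.tanh, g=np.exp):
    """One generalized shunting neuron (sketch): a perceptron-like
    excitatory term f(w.x + w0) divided by a shunting inhibitory
    term a + g(c.x + c0). Using g = exp keeps the denominator
    strictly positive, so the output stays finite."""
    return f(x @ w + w0) / (a + g(x @ c + c0))

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 5))  # 4 samples, 5 features
w = rng.normal(size=5)       # excitatory weights
c = rng.normal(size=5)       # inhibitory (shunting) weights
y = gsn(x, w, 0.0, c, 0.0)
print(y.shape)  # (4,)
```

Setting the inhibitory weights `c` to zero reduces the unit to a scaled perceptron, which matches the abstract's claim that the perceptron is a special case of the GSN.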
I do not think most people will be able to read the paper because of the Elsevier paywall. – DuttaA, Sep 8 at 10:22

That's why I've provided the second link. – Media, Sep 8 at 10:49

Thank you very much for the answer and the enlightening link to the method they used. – hyTuev, Sep 8 at 12:36