What is the difference between a multi-layer perceptron and a generalized feedforward neural network?

I'm reading this paper: An artificial neural network model for rainfall forecasting in Bangkok, Thailand. The author created 6 models, 2 of which have the following architectures:

Model B: a simple multilayer perceptron with a sigmoid activation function and 4 layers, in which the numbers of nodes are 5-10-10-1, respectively.

Model C: a generalized feedforward network with a sigmoid activation function and 4 layers, in which the numbers of nodes are 5-10-10-1, respectively.

In the Results and discussion section of the paper, the author concludes that:

Model C enhanced the performance compared to Models A and B. This suggests that the generalized feedforward network performed better than the simple multilayer perceptron network in this study.

Is there a difference between these two architectures?

machine-learning neural-network deep-learning mlp gfnn

asked Sep 8 at 9:40 by hyTuev; edited Sep 8 at 15:10 by Stephen Rauch

  • If you are looking for intuition about why it might work better, as given in the paper, I'll add a link to my answer.
    – DuttaA
    Sep 8 at 10:22

2 Answers

Well, you missed the diagram they provided for the GFNN. Here is the diagram from their page:

[diagram: a generalized feedforward network, with direct connections from the input to every hidden layer]

You can clearly see what the GFNN does: unlike in an MLP, the inputs are applied directly to the hidden layers as well. While in an MLP the only way information can travel to the hidden layers is through the previous layers, in a GFNN the input information is directly available to every hidden layer.

I might add that this type of connection is also used in the ResNet CNN, where it increased performance dramatically compared to other CNN architectures.
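
As a concrete illustration, here is a minimal sketch of such input-to-hidden skip connections using the Keras functional API. This is only a sketch of the idea, assuming the 5-10-10-1 layout from the question; it is not the code used in the paper:

    # GFNN-style network: the raw input is concatenated into every hidden
    # layer, so information reaches deep layers directly rather than only
    # through the previous layer.
    from tensorflow import keras
    from tensorflow.keras import layers

    inputs = keras.Input(shape=(5,))                     # 5 input features

    h1 = layers.Dense(10, activation="sigmoid")(inputs)  # first hidden layer

    # The second hidden layer sees both the previous layer and the raw input.
    h2 = layers.Dense(10, activation="sigmoid")(layers.Concatenate()([h1, inputs]))

    # The output layer can likewise receive the input directly.
    outputs = layers.Dense(1)(layers.Concatenate()([h2, inputs]))

    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    model.summary()

Removing the two Concatenate calls turns this back into an ordinary MLP, which is exactly the difference between models B and C described above.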






answered Sep 8 at 10:20 by DuttaA

  • Thanks for your answer. Is it alright if I ask here whether the Keras deep learning library is able to do this? I know that Keras can create MLPs, and I have done some projects with these types of models. But this generalized feedforward NN seems to be awesome.
    – hyTuev
    Sep 8 at 12:32

  • I don't know about Keras, but it is certainly possible in TensorFlow, which makes me assume that it will also be possible in Keras. Anyway, you can ask it as a new question, but it definitely seems possible.
    – DuttaA
    Sep 8 at 12:42

I guess the best way to understand it is to read the original paper, called A generalized feedforward neural network architecture for classification and regression:

This article presents a new generalized feedforward neural network (GFNN) architecture for pattern classification and regression. The GFNN architecture uses as the basic computing unit a generalized shunting neuron (GSN) model, which includes as special cases the perceptron and the shunting inhibitory neuron. GSNs are capable of forming complex, nonlinear decision boundaries. This allows the GFNN architecture to easily learn some complex pattern classification problems. In this article the GFNNs are applied to several benchmark classification problems, and their performance is compared to the performances of SIANNs and multilayer perceptrons. Experimental results show that a single GSN can outperform both the SIANN and MLP networks.

I have to add that the paper is quite old; people usually use the ReLU nonlinearity these days. Also take a look at here.
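
To make the abstract a bit more concrete: a generalized shunting neuron divides a perceptron-like weighted sum by an activity-dependent "shunting" (divisive inhibition) term. Here is a rough sketch of that idea in Python; the parameter names and the exact form of the denominator are my assumptions based on the abstract, not the paper's precise formulation:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def generalized_shunting_neuron(x, w, b, c, d, a):
        """Sketch of a generalized shunting neuron (GSN).

        The numerator is a perceptron-style weighted sum; the denominator is
        a shunting (divisive) term driven by a second set of weights. With
        c = 0 the unit reduces to an ordinary perceptron scaled by a constant.
        """
        excitation = np.dot(w, x) + b                 # perceptron-like part
        inhibition = a + sigmoid(np.dot(c, x) + d)    # shunting part, kept positive
        return excitation / inhibition

    # Toy usage: one GSN with 5 inputs and random parameters.
    rng = np.random.default_rng(0)
    x = rng.normal(size=5)
    print(generalized_shunting_neuron(x, w=rng.normal(size=5), b=0.1,
                                      c=rng.normal(size=5), d=0.0, a=1.0))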






answered Sep 8 at 9:56 by Media; edited Sep 8 at 10:01

  • I do not think most people will be able to read the paper due to the Elsevier membership requirement.
    – DuttaA
    Sep 8 at 10:22

  • That's why I've provided the second link.
    – Media
    Sep 8 at 10:49

  • Thank you very much for the answer and the enlightening link to the method they used.
    – hyTuev
    Sep 8 at 12:36