TY - JOUR
T1 - Bio-Optimization of Deep Learning Network Architectures
AU - Shanmugavadivu, P.
AU - Mary Shanthi Rani, M.
AU - Chitra, P.
AU - Lakshmanan, S.
AU - Nagaraja, P.
AU - Vignesh, U.
N1 - Publisher Copyright:
© 2022 Shanmugavadivu P et al.
PY - 2022
Y1 - 2022
N2 - Deep learning is reaching new heights as a result of its state-of-the-art performance across a variety of fields, including computer vision, natural language processing, time series analysis, and healthcare. Deep learning models are commonly trained with batch and stochastic gradient descent and a small set of optimizers, which can yield subpar model performance; considerable effort is therefore being devoted to improving deep learning's performance through gradient-based optimization methods. The proposed work analyses convolutional neural networks (CNN) and deep neural networks (DNN) with several state-of-the-art optimizers (SGD, RMSprop, Adam, Adadelta, etc.) on different types of datasets to compare their results. The study concludes with a thorough report on the optimizers' performance across a variety of architectures and datasets, which can help researchers choose appropriate optimizers for their frameworks and architectures. The proposed work evaluates eight optimizers on four CNN and DNN architectures, and the experimental results demonstrate improvements in the efficiency of CNN and DNN architectures on various datasets.
AB - Deep learning is reaching new heights as a result of its state-of-the-art performance across a variety of fields, including computer vision, natural language processing, time series analysis, and healthcare. Deep learning models are commonly trained with batch and stochastic gradient descent and a small set of optimizers, which can yield subpar model performance; considerable effort is therefore being devoted to improving deep learning's performance through gradient-based optimization methods. The proposed work analyses convolutional neural networks (CNN) and deep neural networks (DNN) with several state-of-the-art optimizers (SGD, RMSprop, Adam, Adadelta, etc.) on different types of datasets to compare their results. The study concludes with a thorough report on the optimizers' performance across a variety of architectures and datasets, which can help researchers choose appropriate optimizers for their frameworks and architectures. The proposed work evaluates eight optimizers on four CNN and DNN architectures, and the experimental results demonstrate improvements in the efficiency of CNN and DNN architectures on various datasets.
UR - http://www.scopus.com/inward/record.url?scp=85139561942&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85139561942&partnerID=8YFLogxK
U2 - 10.1155/2022/3718340
DO - 10.1155/2022/3718340
M3 - Article
AN - SCOPUS:85139561942
SN - 1939-0114
VL - 2022
JO - Security and Communication Networks
JF - Security and Communication Networks
M1 - 3718340
ER -