TY - GEN
T1 - Meta-learner with sparsified backpropagation
AU - Paithankar, Rohan
AU - Verma, Aayushi
AU - Agnihotri, Manish
AU - Singh, Sanjay
PY - 2019/1/1
Y1 - 2019/1/1
N2 - In today's world, Deep Learning is an area of research with ever-increasing applications. It deals with the use of neural networks to bring improvements in areas like speech recognition, computer vision, natural language processing and several automated systems. Training deep neural networks involves careful selection of appropriate training examples, tuning of hyperparameters and scheduling of step sizes; finding a proper combination of all these is a tedious and time-consuming task. In recent times, a few learning-to-learn models have been proposed that can learn automatically. The time and accuracy of the model are exceedingly important. A technique named meProp was proposed to accelerate Deep Learning with reduced over-fitting; it sparsifies back-propagation to reduce the computational cost. In this paper, we propose applying meProp to learning-to-learn models to focus learning on the most significant parameters, which are consciously chosen. We demonstrate an improvement in the accuracy of the learning-to-learn model with the proposed technique and compare its performance with that of the unmodified learning-to-learn model.
AB - In today's world, Deep Learning is an area of research with ever-increasing applications. It deals with the use of neural networks to bring improvements in areas like speech recognition, computer vision, natural language processing and several automated systems. Training deep neural networks involves careful selection of appropriate training examples, tuning of hyperparameters and scheduling of step sizes; finding a proper combination of all these is a tedious and time-consuming task. In recent times, a few learning-to-learn models have been proposed that can learn automatically. The time and accuracy of the model are exceedingly important. A technique named meProp was proposed to accelerate Deep Learning with reduced over-fitting; it sparsifies back-propagation to reduce the computational cost. In this paper, we propose applying meProp to learning-to-learn models to focus learning on the most significant parameters, which are consciously chosen. We demonstrate an improvement in the accuracy of the learning-to-learn model with the proposed technique and compare its performance with that of the unmodified learning-to-learn model.
UR - http://www.scopus.com/inward/record.url?scp=85070583942&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85070583942&partnerID=8YFLogxK
U2 - 10.1109/CONFLUENCE.2019.8776608
DO - 10.1109/CONFLUENCE.2019.8776608
M3 - Conference contribution
T3 - Proceedings of the 9th International Conference On Cloud Computing, Data Science and Engineering, Confluence 2019
SP - 315
EP - 319
BT - Proceedings of the 9th International Conference On Cloud Computing, Data Science and Engineering, Confluence 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 9th International Conference On Cloud Computing, Data Science and Engineering, Confluence 2019
Y2 - 10 January 2019 through 11 January 2019
ER -