A Strategic Weight Refinement Maneuver for Convolutional Neural Networks

This is just the first two chapters of the thesis; more work on the same thesis will follow over the next two weeks.

Study area: Computer Science

Research area: Improving the prediction accuracy and training time of the Stochastic Gradient Descent (SGD) algorithm on Convolutional Neural Networks (CNNs).

Brief overview: This research explores the optimization techniques available for deep learning networks such as CNNs, and applies a strategic weight refinement maneuver, Guided Stochastic Gradient Descent (GSGD), to the most commonly used CNN optimization techniques. The aim is to significantly improve prediction accuracy over the canonical variants of these optimization techniques. GSGD accounts for the effect of inconsistent data on gradient computation in deep learning networks, performing the gradient computation and weight update using only consistent data. Inconsistent instances in the large training datasets are temporarily set aside and revisited over the next few iterations, in case they become consistent (a minimal sketch of this selection idea appears after the outline below). Because this enhancement delays network training, the research further enhances the CNN-GSGD algorithm by parallelizing its gradient computation to speed up training (see the second sketch below). I already have proof-of-concept program code and favourable results in this research area.

Thesis outline:

Chapter 1: Big Data, Deep Learning, Neural Networks, Convolutional Neural Networks.

Chapter 2: Optimization Algorithms, Gradient Descent Algorithms, Stochastic Gradient Descent Algorithm.

Chapter 3: Based on my published conference paper, which adds a strategic weight refinement maneuver to SGD algorithms for Convolutional Neural Networks. The maneuver acts as a guide for the SGD algorithm and results in improved accuracy rates.

Chapter 4: The same maneuver, now parallelized. Discusses the drawback of the previous chapter: while the maneuver improves accuracy, it lengthens training time. The solution is to parallelize the strategic weight refinement maneuver introduced in Chapter 3. For reference in writing this chapter, I will provide an additional paper in which the same concept was applied to logistic regression; the same approach carries over to Convolutional Neural Networks.

Chapter 5: Overall discussion of the approach presented in the thesis and its benefits. Closing remarks and conclusion. References.
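
The following Python sketch illustrates the consistent-data selection idea described in the overview. It is a minimal sketch under assumed details, not the thesis implementation: the model-specific grad_fn and loss_fn hooks, the smoothed reference loss, and the rho revisit interval are illustrative placeholders.

    import numpy as np

    def gsgd_train(w, batches, grad_fn, loss_fn, lr=0.01, rho=4, epochs=10):
        """GSGD-style sketch: update weights only on 'consistent' batches,
        deferring inconsistent ones and revisiting them every rho steps.
        (Illustrative placeholder logic, not the thesis implementation.)"""
        for _ in range(epochs):
            deferred = []  # (loss, X, y) triples judged inconsistent so far
            # Reference loss that a consistent batch should not exceed.
            ref_loss = float(np.mean([loss_fn(w, X, y) for X, y in batches]))
            for t, (X, y) in enumerate(batches):
                batch_loss = loss_fn(w, X, y)
                if batch_loss <= ref_loss:
                    # Consistent batch: take the gradient step immediately.
                    w = w - lr * grad_fn(w, X, y)
                    ref_loss = 0.9 * ref_loss + 0.1 * batch_loss
                else:
                    # Inconsistent batch: set aside; it may become
                    # consistent after further weight updates.
                    deferred.append((batch_loss, X, y))
                if (t + 1) % rho == 0 and deferred:
                    # Revisit the least-inconsistent deferred batch.
                    deferred.sort(key=lambda item: item[0])
                    _, Xd, yd = deferred.pop(0)
                    if loss_fn(w, Xd, yd) <= ref_loss:
                        w = w - lr * grad_fn(w, Xd, yd)
        return w

For example, with a least-squares model the hooks could be loss_fn = lambda w, X, y: float(np.mean((X @ w - y) ** 2)) and grad_fn = lambda w, X, y: 2 * X.T @ (X @ w - y) / len(y).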
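
The parallel enhancement of Chapter 4 can be pictured as a data-parallel gradient step: partial gradients for several batches are computed concurrently and averaged before a single weight update. This is a hedged sketch assuming a picklable, top-level grad_fn; it is not the thesis's parallel CNN-GSGD implementation, which parallelizes the weight refinement maneuver itself.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def parallel_gradient_step(w, batches, grad_fn, lr=0.01, workers=4):
        """Compute per-batch gradients in separate processes, then apply
        one averaged weight update. grad_fn must be a top-level (picklable)
        function so ProcessPoolExecutor can ship it to the workers."""
        Xs = [X for X, _ in batches]
        ys = [y for _, y in batches]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            grads = list(pool.map(grad_fn, [w] * len(batches), Xs, ys))
        return w - lr * np.mean(grads, axis=0)

For equal-sized batches the averaged update matches one large-batch SGD step, so the speed-up comes purely from computing the partial gradients concurrently.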