Nitesh Malhotra and Aksh Chahal
DEEP LEARNING TECHNIQUES AND THEIR APPLICATIONS
BIOSCIENCE BIOTECHNOLOGY RESEARCH COMMUNICATIONS
Deep learning models use multiple layers, each a composition of linear and non-linear transformations. With the growth in the size of data, and with developments in the field of big data, conventional machine learning techniques have shown their limitations in analysing data at this scale (Chen, 2014). Deep learning techniques have been giving better results in this analysis task. Deep learning has been introduced worldwide as a breakthrough technology because it has differentiated itself from machine learning techniques based on traditional algorithms by exploiting more of the human brain's capabilities. It is useful in modeling complex relationships among data. Instead of working on task-specific algorithms, it is based on learning data representations. This learning can be supervised, unsupervised or semi-supervised (Hoff, 2018).
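The layered composition described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a trained model: the weights are random, and all sizes and names are illustrative; in practice the parameters would be learned from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer: a linear transformation followed by a ReLU non-linearity."""
    return np.maximum(0.0, x @ w + b)

# A three-layer composition: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=(5, 4))           # a batch of 5 input vectors
h = layer(layer(x, w1, b1), w2, b2)   # hidden representations
y = h @ w3 + b3                       # final linear read-out

print(y.shape)  # (5, 2)
```

Each call to `layer` is one linear-plus-non-linear transformation; stacking such calls is exactly the "composition of multiple linear and non-linear transformations" the text refers to.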
In deep learning models, multiple layers composed of non-linear processing units perform feature extraction and transformation. Each layer takes as input the output of the previous layer. Such models are applied to classification problems in a supervised manner and to pattern analysis problems in an unsupervised manner. The multiple layers, which provide high-level abstraction, form a hierarchy of concepts. Most deep learning models are based on artificial neural networks organized layer-wise into deep generative models. The concept behind this distributed representation is that the observed data are generated through the interaction of layered factors, and it is these layered factors that achieve the high-level abstraction. A different degree of abstraction is achieved by varying the number of layers and the size of each layer (Najafabadi et al., 2015).
The abstraction is achieved through learning from the lower levels by exploiting hierarchical exploratory factors. By converting the data into compact intermediate representations of principal components and removing redundancies in the representation through derived layered structures, deep learning methods avoid feature engineering in supervised learning applications. Deep learning algorithms can also be applied to unsupervised learning problems, where unlabeled data is more abundant than labeled data. Deep belief networks are an example of a deep learning model applied to such unsupervised problems (Auer et al., 2018).
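A deep belief network is built by stacking restricted Boltzmann machines (RBMs), each trained without labels. The sketch below trains a single RBM on toy binary data with one step of contrastive divergence (CD-1); the data, sizes, and learning rate are illustrative assumptions, and a real DBN would stack several such layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary data; each RBM learns an unsupervised representation of
# its input, and stacking trained RBMs yields a deep belief network.
data = rng.integers(0, 2, size=(100, 6)).astype(float)

n_visible, n_hidden = 6, 4
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
a = np.zeros(n_visible)   # visible biases
b = np.zeros(n_hidden)    # hidden biases
lr = 0.1

for epoch in range(20):
    # Positive phase: hidden activations given the data.
    h_prob = sigmoid(data @ W + b)
    h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
    # Negative phase (CD-1): one reconstruction step.
    v_prob = sigmoid(h_sample @ W.T + a)
    h_prob2 = sigmoid(v_prob @ W + b)
    # Approximate gradient and parameter update.
    W += lr * (data.T @ h_prob - v_prob.T @ h_prob2) / len(data)
    a += lr * (data - v_prob).mean(axis=0)
    b += lr * (h_prob - h_prob2).mean(axis=0)

features = sigmoid(data @ W + b)   # learned hidden representation
print(features.shape)  # (100, 4)
```

The hidden activations `features` would serve as the visible layer of the next RBM in the stack, which is how the DBN builds its hierarchy of representations from unlabeled data.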
Deep learning algorithms exploit abstract representations of data, in which more abstract representations are built on less abstract ones. Because of this, these models are invariant to local changes in the input data, which is an advantage in many pattern recognition problems. This invariance helps deep learning models extract features from the data, and the abstraction in representation gives these models the ability to separate the different sources of variation in the data. Deep learning models outperform older machine learning models that depend on manually defined features; those older approaches rely on human domain knowledge rather than on the available data, and the design of such models is independent of the system's training.
Researchers have developed many deep learning models that learn better representations from large-scale unlabeled data. Popular deep learning architectures such as Convolutional Neural Networks (CNN), Deep Neural Networks (DNN), Deep Belief Networks (DBN) and Recurrent Neural Networks (RNN) are applied as predictive models in the domains of computer vision and predictive analytics in order to find insights in data. With the increase in the size of data and the necessity of producing fast and accurate results, deep learning models are proving their capabilities in predictive analytics, addressing data analysis and learning problems.
Since various deep learning techniques exist, and each has specific applications owing to its working model, it is necessary to review these models based on how they work and where they are applied. In this paper, we present a review of popular deep learning models focused on artificial neural networks. We discuss ANNs, CNNs, DNNs, DBNs and RNNs, along with their working and applications.
ARTIFICIAL NEURAL NETWORK
An Artificial Neural Network is a computational model inspired by biological neural networks. In a biological neural network, billions of neurons are connected together; each receives electrochemical signals from its neighboring neurons, processes these signals, and either stores them or forwards them to the next neighboring neurons in the network (Yegnarayana, 2018; Garven et al., 2018). This is represented in figure 1 below.
Every biological neuron is connected to its neighboring neurons and communicates with them. Neurons receive inputs from the environment through their dendrites; these inputs create impulses in the form of electrochemical signals that travel quickly through the network, carried onward by the axons. A neuron may store the information, or it may forward it to its neighbors in the network.
Artificial neural networks work similarly to biological neural networks. An ANN is an interconnection of artificial neurons. Every neuron in a layer is connected to all the neurons of the previous and next layers, and each interconnection between neurons is labeled with a weight. Each neuron receives as input the outputs of the neurons of the previous layer, processes this input, and generates an output which is then forwarded to the neurons of the next layer.
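A single artificial neuron as just described can be sketched as a weighted sum of the previous layer's outputs passed through an activation function. The sigmoid activation and all numeric values below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the previous layer's
    outputs, passed through an activation function."""
    return sigmoid(np.dot(inputs, weights) + bias)

# Illustrative values: three outputs from the previous layer and the
# weights labelling the three incoming connections.
prev_outputs = np.array([0.5, -1.0, 2.0])
weights = np.array([0.4, 0.3, -0.2])
out = neuron(prev_outputs, weights, bias=0.1)
print(round(out, 3))  # 0.401
```

The weighted sum here is 0.5(0.4) + (-1.0)(0.3) + 2.0(-0.2) + 0.1 = -0.4, and the sigmoid squashes it into (0, 1) before it is forwarded to the next layer.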
There is an activation function used by each neuron of