In a feed-forward network, data flows in only one direction: from the input layer through the hidden layer to the output layer. In a recurrent neural network, the data also loops back through recurrent connections, and unrolling these loops can be seen as a passage through time.
A feed-forward NN is not well suited to time-series or sequential data, as it takes a fixed-size input and therefore produces a fixed-size output.
A recurrent neural network is a class of artificial neural networks in which the connections between nodes form a directed graph along a temporal sequence. It is well suited to processing sequential data for prediction.
We can describe it as a recursive network, captured by the recursive formula $h_t = f(h_{t-1}, x_t)$, where $h_t$ is the hidden state at time $t$ and $x_t$ is the input at time $t$.
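As a minimal sketch of this recursion (using NumPy, with illustrative names like `W_xh` and `W_hh` that are assumptions, not from the original post):

```python
import numpy as np

def rnn_forward(x_seq, W_xh, W_hh, h0):
    """Run a simple RNN over a sequence of input vectors.

    x_seq : list of input vectors x_t
    W_xh  : input-to-hidden weight matrix
    W_hh  : hidden-to-hidden (recurrent) weight matrix
    h0    : initial hidden state
    """
    h = h0
    states = []
    for x_t in x_seq:
        # h_t = f(W_xh x_t + W_hh h_{t-1}), with f = tanh here
        h = np.tanh(W_xh @ x_t + W_hh @ h)
        states.append(h)
    return states

# Toy usage: 3 time steps, 2-dim inputs, 4-dim hidden state
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(4, 2))
W_hh = rng.normal(size=(4, 4))
x_seq = [rng.normal(size=2) for _ in range(3)]
states = rnn_forward(x_seq, W_xh, W_hh, h0=np.zeros(4))
print(states[-1])  # final hidden state summarizes the whole sequence
```

Note how the same weights are reused at every time step; only the hidden state carries information forward.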
Recurrent Neural Network Theory:
Theorem 1: Any Turing machine (a Turing machine is the abstract machine underlying computers of any kind, capable of running any algorithm) can be simulated by a fully connected recurrent neural network with a sigmoid activation function.
The Universal Approximation Theorem: a feed-forward network with a single hidden layer is sufficient to approximate any continuous function to arbitrary precision (on a compact domain, given enough hidden units).
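In symbols (a standard formulation; the symbols $N$, $\alpha_i$, $w_i$, $b_i$ are not in the original post), the theorem says that for any continuous $f$ on a compact domain $D$ and any $\varepsilon > 0$, there exist hidden-unit weights such that:

```latex
\left|\, f(x) - \sum_{i=1}^{N} \alpha_i \,\sigma\!\left( w_i^{\top} x + b_i \right) \right| < \varepsilon
\quad \text{for all } x \in D,
```

where $\sigma$ is the sigmoid activation of the hidden layer.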
What is the Markov Property?
According to the Markov property, the transition probabilities out of any state always sum to one, and the future state depends only on the present state: the process is memoryless, so it does not depend on earlier states.
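As a small illustration (a hypothetical two-state weather chain, not from the original post), each row of a transition matrix sums to one, and the next state is sampled from the current state alone:

```python
import numpy as np

# Transition matrix for a hypothetical 2-state chain: 0 = sunny, 1 = rainy.
# Row s gives P(next state | current state s); each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
assert np.allclose(P.sum(axis=1), 1.0)  # transition probabilities sum to one

rng = np.random.default_rng(0)
state = 0
trajectory = [state]
for _ in range(10):
    # Memoryless: the next state depends only on the current state.
    state = rng.choice(2, p=P[state])
    trajectory.append(state)
print(trajectory)
```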
Conditional probability - Chain rule:
The chain rule of conditional probability is what lets a neural language model generate a sequence: the probability of the whole sequence factorizes as $P(w_1, \dots, w_n) = \prod_{t=1}^{n} P(w_t \mid w_1, \dots, w_{t-1})$.
Making a Markov assumption, each conditional is truncated to $P(w_t \mid w_1, \dots, w_{t-1}) \approx P(w_t \mid w_{t-K}, \dots, w_{t-1})$, where K is the context length.
In speech recognition, for example, the prediction then depends only on the last K words spoken.
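Putting the chain rule and the Markov assumption together, a sketch of a count-based K-gram model (with a hypothetical toy corpus; a neural language model would replace the count table with a network) might look like:

```python
from collections import Counter, defaultdict

K = 2  # context length: condition each word on the previous K words

corpus = "the cat sat on the mat the cat sat on the hat".split()

# Count how often each word follows each K-word context.
counts = defaultdict(Counter)
for i in range(K, len(corpus)):
    context = tuple(corpus[i - K:i])
    counts[context][corpus[i]] += 1

def prob(word, context):
    # P(word | last K words), estimated from counts.
    c = counts[tuple(context)]
    return c[word] / sum(c.values()) if c else 0.0

print(prob("sat", ["the", "cat"]))  # -> 1.0 in this toy corpus
```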