Introduction to deep learning

In this blog, I will introduce deep learning and cover its basics, such as how deep learning works.

What is Deep Learning?

Deep learning is a branch of machine learning based entirely on artificial neural networks; a neural network is designed to mimic the human brain. In deep learning, we don't need to explicitly program everything. The concept of deep learning is not new; it has been around for decades. It is in demand today because we now have enough processing power and large amounts of data. As processing power has grown exponentially over the last 20 years, machine learning and deep learning have come into the picture.

Deep learning is a particular kind of machine learning that achieves great power and flexibility by learning to represent the world as a nested hierarchy of concepts, with each concept defined in relation to simpler concepts, and more abstract representations computed in terms of less abstract ones.

The human brain contains approximately 100 billion neurons, and each neuron is connected to thousands of its neighbours.

The question is how to recreate these neurons in a computer. So, we create an artificial structure called an artificial neural network, made up of nodes or neurons. Some neurons hold input values and some hold output values, and in between there may be many interconnected neurons in the hidden layers.
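A single artificial node can be sketched in a few lines of plain Python. This is a minimal illustration, not a full implementation; the weights, bias, and inputs below are arbitrary placeholder values, not learned ones.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias (the "pre-activation")
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the result into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Two input values feeding one neuron (weights and bias are made up)
print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))
```

Connecting many such nodes together, layer by layer, gives the artificial neural network described above.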

Generally speaking, deep learning is a machine learning method that takes in an input X and uses it to predict an output Y. As an example, given the stock prices of the past week as input, a deep learning algorithm will try to predict the stock price of the next day.

How does Deep Learning work?

Deep learning algorithms use something called a neural network to find the relationship between a set of inputs and outputs. The basic structure is described below:

A neural network is composed of input, hidden, and output layers, all of which are composed of “nodes”. Input layers take in a numerical representation of the data (e.g., the pixel values of an image), output layers produce the predictions, and hidden layers perform most of the computation in between.
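The layered structure can be sketched as a forward pass in plain Python. This is a toy example with arbitrary, hand-picked weights (a real network learns them from data): two input nodes feed three hidden nodes, which feed one output node.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each node computes a weighted sum of all inputs plus its bias,
    # then applies the activation function
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# 2 input nodes -> 3 hidden nodes -> 1 output node (weights are made up)
x = [0.5, -1.0]  # numerical representation of the data
hidden = layer(x, [[0.1, 0.4], [-0.3, 0.2], [0.5, -0.1]], [0.0, 0.1, -0.2])
output = layer(hidden, [[0.3, -0.6, 0.9]], [0.05])
print(output)  # a single prediction between 0 and 1
```

Passing the input through each layer in turn like this is exactly what "the network passes its inputs all the way to its outputs" means in the next paragraph.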

After the neural network passes its inputs all the way to its outputs, it evaluates how good its prediction was (relative to the expected output) through something called a loss function. As an example, the “Mean Squared Error” loss is the average of the squared differences between predictions and expected outputs:

MSE = (1/n) Σᵢ (ŷᵢ − yᵢ)²

Here ŷ (y hat) represents the prediction and y represents the expected output. The mean over n samples is used when batches of inputs and outputs are processed simultaneously (n represents the sample count).
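Mean squared error is simple enough to write out directly. A minimal sketch in plain Python, using made-up prediction and target values:

```python
def mse(predictions, targets):
    # Average of the squared differences between y hat and y
    n = len(targets)
    return sum((y_hat - y) ** 2 for y_hat, y in zip(predictions, targets)) / n

# Three predictions vs. three expected outputs (values are made up)
print(mse([2.5, 0.0, 2.0], [3.0, -0.5, 2.0]))
```

The smaller this value, the closer the network's predictions are to the expected outputs; training adjusts the weights to drive it down.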

