# Machine Learning in Hyderabad

Machine Learning Course in Hyderabad

Machine learning is an application of artificial intelligence (AI) that gives systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.

Machine learning algorithms are often categorized as supervised or unsupervised.

The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow the computers to learn automatically without human intervention or assistance and adjust their actions accordingly.

Steps involved in fitting a machine learning model to a data set.

  • Suppose you are provided with a data-set that has the area of the house in square feet and the respective price
  • How do you think you will come up with a Machine Learning Model to learn the data and use this model to predict the price of a random house given the area?

You will learn that in the following cards.

Let's get started…

 

House Price Prediction

  • We have a data-set consisting of houses with their area in sq feet and their respective prices
  • Assume that the prices are dependent on the area of the house
  • Let us learn how to represent this idea in Machine Learning parlance
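As a minimal sketch of what such a data set could look like in Python (the numbers below are illustrative, not the course data set):

```python
import numpy as np

# Illustrative data set: area of each house in square feet (x)
# and its price (y). These numbers are made up for demonstration only.
x = np.array([650, 800, 1200, 1500, 2000], dtype=float)            # area in sq feet
y = np.array([70000, 95000, 140000, 175000, 240000], dtype=float)  # price

print(x.shape, y.shape)  # (5,) (5,)
```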

 

ML Notations

  • The input / independent variables are denoted by ‘x’
  • The output / dependent variable(s) are denoted by ‘y’

In our problem, the area in square feet values are ‘x’ and the house prices are ‘y’.

Here, a change in one variable is dependent on a change in another variable. This technique is called Regression.

 

  • The objective is that, given a set of training data, the algorithm needs to come up with a way to map ‘x’ to ‘y’
  • This is denoted by h: X → Y

h(x) is called the hypothesis that does the mapping.
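For the single-variable house price example, h(x) is commonly a straight line, h(x) = θ0 + θ1·x. A minimal sketch in Python (the parameter values are placeholders, not learned from data):

```python
def hypothesis(x, theta0, theta1):
    """Linear hypothesis h(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

# With placeholder parameters, predict the price of a 1000 sq feet house.
print(hypothesis(1000, 20000.0, 110.0))  # 130000.0
```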

 

Why Cost Function?

  • You have learnt how to map the input and output variables through the hypothesis function in the previous example.
  • After defining the hypothesis function, the accuracy of the function has to be determined to gauge its predictive power, i.e., how accurately do the square feet values predict the housing prices?
  • The cost function is the representation of this objective.

Demystifying Cost Function

In the cost function,

  • m – number of observations
  • ŷ – predicted value
  • y – actual value
  • i – index of a single observation

The objective is to minimize the sum of squared errors between the predicted and actual values.
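Using this notation, the cost function described above is the standard squared-error cost for linear regression (written out here, since the original formula is not reproduced in the text):

```latex
J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( \hat{y}^{(i)} - y^{(i)} \right)^2
```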

Cost Function Intuition

Gradient Descent Explained

  • Imagine the error surface as a 3D plot with the parameters theta0 and theta1 on the x and y axes and the error value on the z axis
  • The intuition behind gradient descent is to choose the parameters that make the cost as low as possible
  • Each step down the cost surface is taken in the direction of steepest descent
  • The learning rate (alpha) decides the magnitude of each step, as the sketch below shows
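A minimal sketch of batch gradient descent for the single-variable hypothesis (the function name, step size, and iteration count are my own choices, not from the course material):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, iterations=2000):
    """Batch gradient descent for the hypothesis h(x) = theta0 + theta1 * x.

    x is assumed to be feature-scaled (see the Feature Scaling section),
    otherwise a step size like this would overshoot.
    """
    m = len(y)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iterations):
        predictions = theta0 + theta1 * x      # current hypothesis values
        error = predictions - y                # prediction error per example
        grad0 = error.sum() / m                # dJ/dtheta0
        grad1 = (error * x).sum() / m          # dJ/dtheta1
        theta0 -= alpha * grad0                # step against the gradient
        theta1 -= alpha * grad1
    return theta0, theta1
```

With raw square-feet values a step size like this would make the updates blow up, which is exactly the motivation for the feature scaling discussed later.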

 

Convergence

  • If the learning rate is small, convergence takes a long time
  • If the learning rate is too high, the values overshoot the minimum and may diverge

Choosing the right learning rate is very important.
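One common way to decide when gradient descent has converged (a sketch under the assumption of the squared-error cost above; the tolerance value is arbitrary) is to monitor how much the cost drops between iterations:

```python
import numpy as np

def cost(x, y, theta0, theta1):
    """Squared-error cost J(theta0, theta1)."""
    m = len(y)
    return ((theta0 + theta1 * x - y) ** 2).sum() / (2 * m)

def has_converged(prev_cost, curr_cost, tol=1e-6):
    """Declare convergence when the cost stops decreasing by more than tol."""
    return abs(prev_cost - curr_cost) < tol
```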

Multiple Features

  • For illustration, a single variable has been used so far. In practice, multiple features / independent variables are used to predict a variable.
  • In the first example you saw how housing prices were predicted based on their square feet value. But real problems can get more complex and require multiple features to map the output.

Hypothesis Representation

  • θ0 could be the base price of a house
  • θ1 could be the price per square foot
  • θ2 could be the price per level
  • x1 could be the area of the house in square feet
  • x2 could be the number of floors
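Putting these together, the hypothesis implied by the bullets above takes the standard multivariate linear form (written out here as an assumption, since the original formula is not shown):

```latex
h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2
```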

Why Feature Scaling?

When there are multiple features and the feature values differ widely in magnitude, fitting a model and predicting a value becomes computationally intensive, because gradient descent needs many more iterations to converge.

Scaling comes to our help in these scenarios.

What is Feature Scaling?

  • In scaling, each feature / variable is divided by its range (maximum minus minimum)
  • The result of scaling is a variable whose values span a range of roughly 1
  • This eases the computational intensity to a considerable extent

Mean Normalization

  • In mean normalization, the mean of each variable is subtracted from the variable
  • In many cases, mean normalization and scaling are performed together, as the sketch below shows
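A minimal sketch of both operations on a single NumPy feature (the array values are illustrative, not course data):

```python
import numpy as np

# Illustrative feature: house areas in square feet.
area = np.array([650, 800, 1200, 1500, 2000], dtype=float)

# Feature scaling: divide by the range (max - min).
scaled = area / (area.max() - area.min())

# Mean normalization combined with scaling:
# subtract the mean, then divide by the range.
normalized = (area - area.mean()) / (area.max() - area.min())

print(scaled)      # values spanning a range of 1
print(normalized)  # values centred near 0, spanning a range of 1
```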
