
Supervised and Unsupervised Learning. Classification, Regression and Clustering.


Nowadays, data is growing faster than ever before, and it comes from every sector: business, biology, economics, and more. Technology and artificial intelligence allow us to process the large amounts of information produced across all of these sectors.

Data mining refers to the study of pre-existing databases in order to extract new insights or information from the data. It uses a variety of techniques to discover patterns and establish relationships that help solve problems.


Machine learning uses data mining techniques and other learning algorithms to build a model of what is happening behind the data, so that the model can be used to predict future outcomes.

The main focus of machine learning is the study and design of systems and algorithms that can learn from data.

Supervised Learning and Unsupervised Learning:

Supervised learning is when the algorithm works with both input (x) and output (y) variables.
Using these input and output pairs, the supervised machine learning algorithm produces a function or model that best fits the data.
It is called supervised learning because the algorithm fits the model while knowing the correct outputs.

Examples of supervised learning are classification and regression.
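To make the idea concrete, here is a minimal sketch of supervised learning: given input/output pairs (x, y), we fit a straight line y ≈ a·x + b by ordinary least squares. The data points below are made up purely for illustration; any real dataset with known outputs would play the same role.

```python
# Supervised learning in miniature: fit y = a*x + b to labeled pairs (x, y).
# The "supervision" is that every training input x comes with its known output y.

def fit_line(xs, ys):
    """Return slope a and intercept b minimizing the squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Illustrative training data (made up): inputs with their known outputs.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

a, b = fit_line(xs, ys)

def predict(x):
    """Use the learned model to predict the output for a new input."""
    return a * x + b
```

Once fitted, `predict` can be applied to inputs the algorithm never saw during training, which is exactly the point of building the model.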
Unsupervised learning is when the algorithm works with input data (x) only; there is no output data to work with.
The unsupervised learning algorithm uses the data to model its structure or distribution, which gives more information and insight about the data we are working with.
It is called unsupervised learning because, unlike supervised learning, there are no known outputs. The algorithms are left on their own to discover and identify the structure in the data.
An example of unsupervised learning is clustering.

Differences between Classification, Regression and Clustering:

As we have seen, classification and regression are supervised learning algorithms, while clustering is an unsupervised learning algorithm.
Classification and regression have both input and output variables to work with.
  • Classification: the algorithm identifies the category an object belongs to. Example: given specific input variables, for instance physical measures of a tumor such as radius, shape, etc., the algorithm classifies the tumor into one of two categories, benign or malignant.
  • Regression: the algorithm predicts the value of a continuous variable. Example: given specific input variables, for instance body measurements of participants such as height, waist circumference, etc., the algorithm predicts the weight of each participant.
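A toy sketch of the classification case: assign a tumor to "benign" or "malignant" from two measurements. The numbers, the feature names (radius, roughness), and the nearest-centroid decision rule are all illustrative assumptions, not a real diagnostic model.

```python
# Classification in miniature: predict the class whose training centroid is
# closest to a new point. Training data are labeled, which is what makes
# this supervised.
import math

# Made-up labeled examples: (radius, roughness) measurements per class.
training = {
    "benign":    [(1.0, 0.2), (1.2, 0.3), (0.9, 0.1)],
    "malignant": [(3.0, 0.9), (3.4, 1.1), (2.8, 0.8)],
}

def centroid(points):
    """Coordinate-wise mean of a list of points."""
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(point):
    """Return the label whose centroid is nearest to the point."""
    return min(centroids, key=lambda label: math.dist(point, centroids[label]))

print(classify((1.1, 0.25)))  # a small, smooth tumor -> "benign"
```

Real classifiers (decision trees, SVMs, logistic regression) use more sophisticated decision rules, but all of them learn from labeled examples in this same way.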

Clustering is an unsupervised learning algorithm. The algorithm works with input variables only.
  • Clustering: the algorithm identifies groups of objects that share similar characteristics. Example: given specific input variables, for example measurements taken across the population of a city, the algorithm groups the individuals into sets (clusters) whose members are similar to one another.
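The clustering idea can be sketched with a minimal k-means implementation: the algorithm sees only input points, with no labels, and discovers k groups on its own. The data, the choice of k = 2, and the simple initialization are illustrative assumptions.

```python
# Clustering in miniature: k-means alternates between assigning each point to
# its nearest centroid and moving each centroid to the mean of its cluster.
# No labels are ever provided -- this is what makes it unsupervised.
import math

def kmeans(points, k, iters=20):
    # Initialize centroids with the first k points (a simple common choice;
    # real implementations usually pick them more carefully).
    centroids = [points[i] for i in range(k)]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[j].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return clusters, centroids

# Two made-up groups of points; the algorithm is never told which is which.
points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1),
          (5.0, 5.0), (5.1, 4.9), (4.8, 5.2)]
clusters, centroids = kmeans(points, k=2)
```

After a few iterations the two clusters recover the two groups, even though the algorithm was never told they existed.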
