Complete Machine Learning Course
Description:
Embark on an enriching journey into the world of Machine Learning with our comprehensive course designed to take you from a complete novice to a proficient expert. Whether you’re a budding data scientist, a seasoned programmer looking to delve into AI, or a professional seeking to upgrade your skill set, this course offers the perfect blend of theory and hands-on practice to master the intricacies of Machine Learning.
With a focus on practical applications and real-world projects, you’ll learn the fundamental concepts of Machine Learning algorithms, data preprocessing, model evaluation, and deployment techniques. From understanding regression and classification algorithms to advanced topics like neural networks and deep learning, this course covers it all.
Requirements:
- Basic understanding of programming fundamentals (Python preferred but not mandatory)
- Familiarity with high school-level mathematics (algebra, calculus, probability)
- A computer with internet access for hands-on exercises and projects
Who this course is for:
- Aspiring data scientists eager to unlock the potential of Machine Learning
- Programmers looking to expand their expertise into the realm of AI and data science
- Professionals seeking to enhance their career prospects by mastering cutting-edge technologies
- Students and academics keen on exploring the transformative power of Machine Learning in various domains
Unlock the door to endless possibilities in the realm of AI and data science with our meticulously crafted Machine Learning course. Whether you aim to build intelligent systems, analyze vast datasets, or revolutionize industries, this course equips you with the knowledge and skills to turn your ambitions into reality. Join us and embark on a transformative learning journey that will shape your future in the era of Machine Learning.
Machine Learning Fundamentals
Introduction - Preprocessing and Analysis
Visualization - Principal Component Analysis
-
7. Introduction to PCA
In this lesson we introduce Principal Component Analysis and give a brief background on the technique.
-
8. Introduction to the Dataset
In this lesson we introduce the dataset crabs.csv.
Later we will use PCA to visualize this dataset.
-
9. Initial Visualization
In this lesson we perform a basic exploration of the dataset and an initial visualization.
At the end we discuss the importance of applying dimensionality reduction techniques.
In the following lesson we will apply PCA.
-
10. Using PCA
In this lesson we use Principal Component Analysis to visualize the separation between the classes.
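As a minimal sketch of what this lesson's workflow might look like with scikit-learn (using random synthetic data in place of crabs.csv, whose columns are not listed here):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Synthetic stand-in for crabs.csv: 100 samples with 5 numeric features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# Standardize the features, then project onto the first two principal components
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)

# Share of the total variance captured by each component
print(X_2d.shape, pca.explained_variance_ratio_)
```

With the real dataset, `X_2d` would then be scatter-plotted and colored by class to inspect the separation.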
Visualization - Locally Linear Embedding (LLE)
-
11. Introduction to LLE
In this lesson we introduce the Locally Linear Embedding (LLE) algorithm.
-
12. Locally Linear Embedding Algorithm
In this lesson we introduce the steps of the LLE algorithm.
In future lessons we will apply this method in practice using Python.
-
13. Introduction to the Dataset
In this lesson we introduce the dataset crabs.csv.
Later we will use LLE for the dimensionality reduction and visualization of this dataset.
-
14. Using LLE
In this lesson we use the Locally Linear Embedding technique to reduce the dimensionality of our dataset.
We then visualize the new two-component space in 2 dimensions.
-
15. LLE with 3 Dimensions
In this lesson we again use the Locally Linear Embedding technique, now with 3 components.
At the end we visualize the new three-dimensional space. With this, we end the practice of Locally Linear Embedding.
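A minimal sketch of both LLE lessons with scikit-learn, on random synthetic data standing in for the real dataset:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Synthetic stand-in for the dataset used in the lessons
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))

# n_neighbors sets the size of the local patch used to reconstruct each point
X_2d = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X)
X_3d = LocallyLinearEmbedding(n_neighbors=10, n_components=3).fit_transform(X)
```

The 2- and 3-component embeddings are then what the lessons plot in 2D and 3D.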
Visualization - t-Distributed Stochastic Neighbor Embedding (t-SNE)
-
16. Introduction to t-SNE
In this lesson we give a brief introduction to the t-distributed Stochastic Neighbor Embedding dimensionality reduction technique.
-
17. Dataset
In this lesson we note that we will use the crabs.csv dataset in this section.
This is a dataset we have already worked with. Those who are familiar with it can skip the following lesson and go straight to the next one.
-
18. Introduction to the Dataset
In this lesson we introduce the dataset crabs.csv.
Later we will use t-SNE for the visualization of this dataset.
-
19. t-SNE on Raw Data
In this lesson we apply the t-distributed Stochastic Neighbor Embedding technique to the original crabs dataset. At the end, we visualize the separation between the classes in 2 and 3 dimensions.
-
20. t-SNE on Scaled Data
In this lesson we apply the t-distributed Stochastic Neighbor Embedding technique to the scaled crabs dataset. At the end, we visualize the separation between the classes in 2 and 3 dimensions.
-
21. t-SNE on Standardized Data
In this lesson we apply the t-distributed Stochastic Neighbor Embedding technique to the standardized crabs dataset. At the end, we visualize the separation between the classes in 2 and 3 dimensions.
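The three variants (raw, scaled, standardized) can be sketched with scikit-learn like this, on synthetic data standing in for the crabs features:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))  # stand-in for the crabs features

variants = {
    "raw": X,
    "scaled": MinMaxScaler().fit_transform(X),          # features in [0, 1]
    "standardized": StandardScaler().fit_transform(X),  # zero mean, unit variance
}

# perplexity roughly controls the effective neighborhood size
embeddings = {
    name: TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(data)
    for name, data in variants.items()
}
```

Comparing the three embeddings side by side shows how sensitive t-SNE is to feature scaling.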
Visualization - Multidimensional Scaling (MDS)
-
22. Introduction to MDS
In this lesson we introduce the Multidimensional Scaling dimensionality reduction technique.
-
23. Using MDS with 2 Dimensions
In this lesson we apply the MDS dimensionality reduction technique to the crabs dataset.
At the end, we visualize the separation between classes in 2 dimensions.
-
24. Using MDS with 3 Dimensions
In this lesson we apply the MDS dimensionality reduction technique to the crabs dataset.
At the end, we visualize the separation between classes in 3 dimensions.
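Both MDS lessons might look roughly like this in scikit-learn, on synthetic stand-in data:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))  # stand-in for the crabs features

# MDS places points so that pairwise distances are preserved as well as possible
X_2d = MDS(n_components=2, random_state=0).fit_transform(X)
X_3d = MDS(n_components=3, random_state=0).fit_transform(X)
```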
Visualization - ISOMAP
-
25. Introduction to ISOMAP
In this lesson we give a brief introduction to the ISOMAP dimensionality reduction technique.
-
26. ISOMAP with 2 Dimensions
In this lesson we use the ISOMAP technique to reduce the dimensionality of our dataset.
We then visualize the new two-component space in 2 dimensions.
-
27. ISOMAP with 3 Dimensions
In this lesson we use the ISOMAP technique to reduce the dimensionality of our dataset.
We then visualize the new three-component space in 3 dimensions.
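A minimal ISOMAP sketch with scikit-learn, again on synthetic stand-in data:

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# Isomap preserves geodesic distances measured along a neighborhood graph
X_2d = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
X_3d = Isomap(n_neighbors=10, n_components=3).fit_transform(X)
```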
Visualization - Fisher Discriminant Analysis
-
28. Introduction to Fisher Discriminant Analysis
In this lesson we give a brief introduction to the Fisher Discriminant Analysis dimensionality reduction technique.
-
29. Dataset Information
In this lesson we note that we will use the crabs.csv dataset in this section.
This is a dataset we have already worked with. Those who are familiar with it can skip the following lesson and go straight to the next one.
-
30. Introduction to the Dataset
In this lesson we introduce the dataset crabs.csv.
Later we will use Fisher Discriminant Analysis for the visualization of this dataset and the separation of the crabs.
-
31. Fisher Discriminant Analysis with 2 Dimensions
In this lesson we use the Fisher Discriminant Analysis technique to reduce the dimensionality of our dataset.
We then visualize the new two-component space in 2 dimensions.
-
32. Fisher Discriminant Analysis with 3 Dimensions
In this lesson we use the Fisher Discriminant Analysis technique to reduce the dimensionality of our dataset.
We then visualize the new three-component space in 3 dimensions.
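scikit-learn exposes Fisher's method as `LinearDiscriminantAnalysis`; a sketch on synthetic four-class data (the class count is an assumption made so that a 3-component projection is possible):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Fisher/Linear Discriminant Analysis allows at most n_classes - 1 components,
# so at least 4 classes are needed for a 3-component projection
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=i, size=(30, 5)) for i in range(4)])
y = np.repeat(np.arange(4), 30)

X_2d = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)
X_3d = LinearDiscriminantAnalysis(n_components=3).fit_transform(X, y)
```

Unlike PCA or LLE, this projection is supervised: it uses the class labels `y` to maximize class separation.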
Visualization Final Project - Images
-
33. Images
In this lesson we explain that images can be prepared for dimensionality reduction methods by converting them to arrays of numeric values.
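This idea can be seen directly with scikit-learn's bundled digits dataset:

```python
from sklearn.datasets import load_digits

digits = load_digits()
# Each sample is an 8x8 grayscale image of a handwritten digit
print(digits.images.shape)  # (1797, 8, 8)

# Flattening each image yields a 64-dimensional numeric vector per sample,
# ready for dimensionality reduction methods
X = digits.images.reshape(len(digits.images), -1)
print(X.shape)  # (1797, 64)
```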
-
34. Introduction to the Image Dataset
In this lesson we introduce the digits image dataset.
Later we will use dimensionality reduction techniques for the visualization of this dataset and the separation of the images.
-
35. Fisher Discriminant Analysis
In this lesson we use the Fisher Discriminant Analysis technique to reduce the dimensionality of our dataset.
We visualize the new space with 2 components in 2 dimensions, and then with 3 components in 3 dimensions.
-
36. Locally Linear Embedding
In this lesson we use the Locally Linear Embedding technique to reduce the dimensionality of our dataset.
We visualize the new space with 2 components in 2 dimensions, and then with 3 components in 3 dimensions.
-
37. Principal Component Analysis
In this lesson we use Principal Component Analysis to reduce the dimensionality of our dataset.
We visualize the new space with 2 components in 2 dimensions, and then with 3 components in 3 dimensions.
-
38. ISOMAP
In this lesson we use ISOMAP to reduce the dimensionality of our dataset.
We visualize the new space with 2 components in 2 dimensions, and then with 3 components in 3 dimensions.
Linear Regression
-
39. Introduction to the Dataset
In this lesson we introduce the dataset LifeExpectancy.csv.
Later we will use Linear Regression to predict the life expectancy of different countries in different years.
-
40. Preprocessing
In this lesson we preprocess the dataset so that we can apply linear regression.
We must remove categorical variables and handle missing values.
-
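A sketch of this kind of cleanup with pandas; the toy frame and its column names are illustrative, not the real LifeExpectancy.csv schema:

```python
import numpy as np
import pandas as pd

# Toy stand-in for LifeExpectancy.csv (illustrative columns only)
df = pd.DataFrame({
    "Country": ["A", "B", "C", "D"],        # categorical column
    "Year": [2000, 2001, 2000, 2001],
    "GDP": [1.2, np.nan, 3.4, 2.2],         # contains a missing value
    "LifeExpectancy": [70.0, 72.0, 68.0, 71.0],
})

# Keep numeric columns only, then drop rows that still have missing values
numeric = df.select_dtypes(include="number").dropna()
```

A more careful pipeline might impute missing values instead of dropping rows, but this matches the simple approach described here.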
41. Linear Regression
In this lesson we apply Linear Regression to predict the target variable: life expectancy.
-
42. Metrics
In this lesson we study the fundamental metrics used to evaluate our model.
-
43. Cross Validation
In this lesson we use cross validation to obtain a generalization error as close as possible to the error we would get when applying the model to completely new data.
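Lessons 41-43 could be sketched together like this with scikit-learn, on a synthetic regression problem standing in for the life-expectancy data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import KFold, cross_val_score

# Synthetic linear problem with a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Fit linear regression and predict (lesson 41)
model = LinearRegression().fit(X, y)
pred = model.predict(X)

# Fundamental regression metrics (lesson 42)
mae = mean_absolute_error(y, pred)
mse = mean_squared_error(y, pred)
r2 = r2_score(y, pred)

# 5-fold cross validation for a less optimistic error estimate (lesson 43)
scores = cross_val_score(model, X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0),
                         scoring="r2")
```

Training-set metrics like `r2` above tend to be optimistic; the cross-validated `scores` are closer to performance on new data.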
Ridge Regression
Regression - Understand the Models
-
46. Analysis
In this lesson we perform an initial analysis of the models.
We visualize the weights and discuss the importance of scaling our data.
-
47. Data Scaling
In this lesson we scale our data and evaluate its effect on performance.
-
48. One-Hot Encoding
In this lesson we apply One-Hot Encoding to our categorical variables.
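A one-line sketch of One-Hot Encoding with pandas; the "Status" column is an illustrative example, not necessarily a column of the real dataset:

```python
import pandas as pd

# Illustrative categorical column
df = pd.DataFrame({"Status": ["Developed", "Developing", "Developed"]})

# One column of 0/1 indicators per category
encoded = pd.get_dummies(df, columns=["Status"])
print(list(encoded.columns))  # ['Status_Developed', 'Status_Developing']
```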
-
49. Regularization
In this lesson we apply regularization after the preprocessing (scaling + One-Hot Encoding).
We use the regularized regression models Ridge Regression and Lasso Regression.
-
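A sketch of both regularized models with scikit-learn; the `alpha` values are illustrative (in practice they would be tuned, e.g. by cross validation):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Synthetic data where only features 0 and 3 actually matter
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, 0.0, 0.0, -1.0, 0.0]) + rng.normal(scale=0.1, size=100)

ridge = Ridge(alpha=1.0).fit(X, y)  # L2 penalty shrinks all weights
lasso = Lasso(alpha=0.1).fit(X, y)  # L1 penalty can set some weights exactly to zero
```

Comparing `ridge.coef_` and `lasso.coef_` shows the qualitative difference: Ridge shrinks, Lasso selects.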
50. Final Results
In this lesson we explore and visualize the final results of our models for predicting life expectancy.
Classification
-
51. Introduction to the Dataset
In this lesson we introduce the breast cancer dataset.
Later we will use linear classification to predict whether a patient has a malignant or benign tumor.
-
52. Partition of the Dataset: Train and Test
In this lesson we partition our dataset into training and test sets.
-
53. Preprocessing
In this lesson we preprocess and prepare our data for applying visualization techniques and classification models.
-
54. Principal Component Analysis
-
55. Linear Discriminant Analysis
In this lesson we implement our first classifier: Linear Discriminant Analysis.
We study the results in the classification report, and plot the confusion matrix and the ROC curve.
-
56. Naive Bayes Classifier
In this lesson we implement the Naive Bayes classifier.
We study the results in the classification report, and plot the confusion matrix and the ROC curve.
-
57. Quadratic Classifier
In this lesson we implement the quadratic classifier.
We study the results in the classification report, and plot the confusion matrix and the ROC curve.
-
58. Logistic Regression
In this lesson we implement Logistic Regression for our classification.
We study the results in the classification report, and plot the confusion matrix and the ROC curve.
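These lessons share the same evaluation loop; a sketch of it with Logistic Regression on scikit-learn's bundled breast cancer data (the split and solver settings are illustrative choices):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (classification_report, confusion_matrix,
                             roc_auc_score, roc_curve)
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_score = clf.predict_proba(X_test)[:, 1]  # probability of the positive class

print(classification_report(y_test, y_pred))  # precision, recall, F1 per class
cm = confusion_matrix(y_test, y_pred)         # rows: true class, columns: predicted
fpr, tpr, _ = roc_curve(y_test, y_score)      # points of the ROC curve
auc = roc_auc_score(y_test, y_score)
```

Swapping `LogisticRegression` for `LinearDiscriminantAnalysis`, `GaussianNB`, or `QuadraticDiscriminantAnalysis` reproduces the other classifiers in this section.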
Support Vector Machines for Regression
-
59. Introduction to Support Vector Machines
In this lesson we introduce Support Vector Machines for regression.
-
60. Introduction to the Dataset
In this lesson we introduce the energy dataset.
Later we will use Support Vector Machines to predict the energy consumption of household appliances.
-
61. Partition of the Dataset - Target Variable
In this lesson we begin the partition of our dataset.
We partition the target variable into training and test sets.
-
62. Partition of the Dataset - Time Series Windows
In this lesson we generate the data matrix that we will use to make the predictions.
We use the four previous time steps as features for the predictions.
-
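The windowing step described above can be sketched in a few lines of NumPy, using a toy series in place of the energy data:

```python
import numpy as np

# Toy consumption series standing in for the energy data
series = np.arange(10, dtype=float)  # [0, 1, ..., 9]

window = 4  # the 4 previous time steps become the features
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]  # the value to predict one step ahead

print(X[0], y[0])  # [0. 1. 2. 3.] 4.0
```

Each row of `X` holds four consecutive past values and `y` holds the value that follows them.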
63. Support Vector Machine - Linear Kernel
In this lesson we use a Support Vector Machine (SVM) with a linear kernel to predict the energy consumption of household appliances. We use the four previous time steps as features for the predictions.
-
64. Support Vector Machine - Polynomial Kernel
In this lesson we use a Support Vector Machine (SVM) with a polynomial kernel to predict the energy consumption of household appliances. We use the four previous time steps as features for the predictions.
-
65. Support Vector Machine - Radial Basis Function (RBF) Kernel
In this lesson we use a Support Vector Machine (SVM) with an RBF kernel to predict the energy consumption of household appliances. We use the four previous time steps as features for the predictions.
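The three kernel variants can be sketched with scikit-learn's `SVR`, on synthetic windows (hyperparameters left at defaults for brevity):

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic windows: each sample holds the 4 previous time steps
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = X.sum(axis=1) + rng.normal(scale=0.1, size=100)

models = {
    "linear": SVR(kernel="linear"),
    "poly": SVR(kernel="poly", degree=2),
    "rbf": SVR(kernel="rbf"),
}
predictions = {name: m.fit(X, y).predict(X) for name, m in models.items()}
```

In practice `C`, `epsilon`, and the kernel parameters would be tuned per kernel before comparing the results.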
Support Vector Machines for Classification
-
66. Introduction to Support Vector Machines
In this lecture we briefly introduce Support Vector Machines (SVM) for classification.
-
67. Introduction to the Dataset
In this lesson we introduce the arxivs dataset that we will work with throughout this section.
Later we will use Support Vector Machines to predict which category each document belongs to.
-
68. Partition of the Dataset
In this lesson we perform the partition of the dataset.
We divide it into two partitions: training and test.
-
69. Transformation to a Data Matrix
In this lesson we transform our unstructured dataset into a data matrix, allowing us to apply learning models.
-
70. Dimensionality Reduction
In this lecture we apply dimensionality reduction.
-
71. Support Vector Machine - Linear Kernel
In this lesson we use a Support Vector Machine (SVM) with a linear kernel to predict which category (class) each document (sample) belongs to.
-
72. Support Vector Machine - Polynomial Kernel
In this lesson we use a Support Vector Machine (SVM) with a polynomial kernel to predict which category (class) each document (sample) belongs to.
-
73. Support Vector Machine - Radial Basis Function (RBF) Kernel
In this lesson we use a Support Vector Machine (SVM) with an RBF kernel to predict which category (class) each document (sample) belongs to.
Neural Networks
Neural Networks for Regression
-
75. Dataset Information
In this lesson we note that we will use the Energy.csv dataset in this section.
This is a dataset we have already worked with. Those who are familiar with it can skip the following lesson and go straight to the next one.
-
76. Introduction to the Dataset
In this lesson we introduce the energy dataset.
Later we will use the MLP neural network to predict the energy consumption of household appliances.
-
77. Partition of the Dataset - Target Variable
In this lesson we begin the partition of our dataset.
We partition the target variable into training and test sets.
-
78. Partition of the Dataset - Time Series Windows
In this lesson we generate the data matrix that we will use to make the predictions.
We use the four previous time steps as features for the predictions.
-
79. Multilayer Perceptron Neural Network
In this lesson we use a Multilayer Perceptron (MLP) Neural Network to predict the energy consumption of household appliances. We use the four previous time steps as features for the predictions.
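A minimal MLP regression sketch with scikit-learn on synthetic windows; the hidden-layer sizes are illustrative choices, not the course's actual architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic windows: each sample holds the 4 previous time steps
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X.sum(axis=1)

# Two hidden layers of 32 units each
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(X, y)
pred = mlp.predict(X)
```

As with the SVM sections, the real workflow would fit on the training windows and evaluate the predictions on the held-out test windows.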