Merge branch 'main' into release
reveurmichael committed Jan 8, 2024
2 parents e73c281 + dfed683 commit 8154476
Showing 2 changed files with 132 additions and 19 deletions.
126 changes: 110 additions & 16 deletions open-machine-learning-jupyter-book/ml-fundamentals/ml-summary.ipynb
@@ -31,7 +31,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Machine Learning Landscape : Discriminative Models\n",
"## Discriminative Models and Generative Models "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Machine Learning Landscape : Discriminative Models\n",
"\n",
"Most of supervised machine learning can be looked at using the following framework: \n",
"You have a set of training points $(x_i, y_i)$, and you want to find a function f that \"fits the data well\", \n",
@@ -65,52 +72,139 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## How to conceive a \"new\" Machine Learning algorithm : Discriminative Models\n",
"### Machine Learning Landscape : Generative Models \n",
"\n",
"Generative models attempt to capture the overall distribution characteristics of data, including the relationships between variables. These models can generate new data instances that are statistically similar to the original data. Many generative models learn a latent space that represents complex data structures in a more concise form than the original data space.\n",
"\n",
"- [How about perpendicular distance instead of vertical distance for Linear Regression?](https://math.stackexchange.com/questions/1530298/variant-of-linear-regression-using-perpendicular-distance-instead-of-vertical)\n",
"In generative models, **Defining the Model Structure** involves choosing a model that can represent or approximate the data generation process. Unlike a direct mapping from input to output, this model aims to capture the entire distribution of data. For example, \n",
"- Variational Autoencoders (VAEs) learn the high-dimensional distribution of data through a latent space. \n",
"- Generative Adversarial Networks (GANs) generate realistic data samples through an adversarial process.\n",
"\n",
"- [How about Logistic Regression with Kernel Trick?](https://www.quora.com/How-can-one-use-kernels-utilizing-the-kernel-trick-in-logistic-regression)\n",
"**Defining the Loss Function** tends to be more complex in generative models than in supervised learning, as the goal is not just to minimize prediction error. For instance, \n",
"- GANs use an adversarial loss, where the generator aims to maximize the misjudgment rate of the discriminator, which tries to distinguish between real and generated samples. \n",
"- In VAEs, the loss function includes a reconstruction error (to make generated samples as close to real data as possible) and regularization of the latent space.\n",
"\n",
"- [How about a Neural Network with Kernel Trick]\n",
"\n",
"- [How about a Neural Network not by Layers but inter-connected]\n",
"\n",
"- [How about horizontal or vertical lines - then we have Decision Tree]\n",
"Depending on the amount of data and model usage you choose, here are some generative models:\n",
"- Gaussian Mixture Models (GMM): Models the overall distribution of data by combining multiple Gaussian distributions, commonly used for clustering and density estimation.\n",
"- Hidden Markov Models (HMM): Describes sequence data with hidden states, where each state depends on the previous one, suitable for speech recognition and natural language processing.\n",
"- Generative Adversarial Networks (GANs): Consists of two parts: a generator that creates data and a discriminator that evaluates its authenticity, mainly used for generating realistic images and videos.\n",
"- Variational Autoencoders (VAEs): Combines encoders and decoders to learn the latent representation of data, used for image generation and feature learning.\n",
"- Naive Bayes Classifiers: Based on Bayes' theorem and assumes independence among features, commonly used for text classification and spam detection."
]
},
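{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of this idea (the two-cluster synthetic data and the hyperparameters below are illustrative assumptions, not from the text), a Gaussian Mixture Model can be fit with scikit-learn and then asked to sample new points that are statistically similar to the training data:\n",
"\n",
"```python\n",
"# Fit a Gaussian Mixture Model to data, then sample new points\n",
"# from the learned distribution.\n",
"import numpy as np\n",
"from sklearn.mixture import GaussianMixture\n",
"\n",
"rng = np.random.default_rng(0)\n",
"# Synthetic data drawn from two clusters (a stand-in for real data).\n",
"X = np.vstack([\n",
"    rng.normal(loc=[0, 0], scale=0.5, size=(200, 2)),\n",
"    rng.normal(loc=[3, 3], scale=0.5, size=(200, 2)),\n",
"])\n",
"\n",
"# The generative model learns means, covariances, and mixture weights.\n",
"gmm = GaussianMixture(n_components=2, random_state=0).fit(X)\n",
"\n",
"# Generate 5 new points that are statistically similar to the data.\n",
"X_new, _ = gmm.sample(5)\n",
"print(X_new)\n",
"```"
]
},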
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Comparison with Discriminative Models\n",
"\n",
"- [How about counting numbers - then we have KNN]\n",
"Generative models focus on modeling while discriminative models focus on solutions. Thus, we can use generative algorithms to generate new data points. Discriminative algorithms cannot serve this purpose. Discriminative algorithms usually perform better in classification tasks. And the real strength of generative algorithms is their ability to express complex relationships between variables.\n",
"\n",
"- [How about counting numbers with Kernel trick](https://stats.stackexchange.com/questions/44166/kernelised-k-nearest-neighbour)\n"
"Generative algorithms converge faster than discriminative algorithms. Therefore, we prefer generative models when we have a small training dataset.Although generative models converge faster, they converge to higher asymptotic errors. On the contrary, discriminative models converge to smaller asymptotic errors. Therefore, as the number of training samples increases, the error rate of discriminative models decreases."
]
},
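{
"cell_type": "markdown",
"metadata": {},
"source": [
"This convergence claim can be illustrated with a hedged sketch that compares a generative classifier (Gaussian Naive Bayes) against a discriminative one (logistic regression) at growing training-set sizes; the dataset and model choices are assumptions made for illustration:\n",
"\n",
"```python\n",
"# Compare a generative classifier (GaussianNB) with a discriminative one\n",
"# (LogisticRegression) as the amount of training data grows.\n",
"from sklearn.datasets import make_classification\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.naive_bayes import GaussianNB\n",
"\n",
"X, y = make_classification(n_samples=2000, n_features=10, random_state=0)\n",
"X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)\n",
"\n",
"for n in [20, 100, 1000]:\n",
"    nb = GaussianNB().fit(X_tr[:n], y_tr[:n])\n",
"    lr = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])\n",
"    print(n, round(nb.score(X_te, y_te), 3), round(lr.score(X_te, y_te), 3))\n",
"```"
]
},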
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Machine Learning Landscape : Generative Models "
"## LDA \n",
"\n",
"LDA is a supervised learning dimensionality reduction technique, meaning that each sample in its dataset has a class label output. This is different from PCA, an unsupervised dimensionality reduction technique that does not consider the class label output of samples. The idea of LDA can be summarized in one sentence: \"Minimize within-class variance and maximize between-class variance after projection.\" What does this mean? We want to project the data onto a lower dimension such that the projection points of each class are as close as possible to each other, while the distances between the centers of different classes are maximized as much as possible.\n",
"\n",
":::{figure} https://static-1300131294.cos.ap-shanghai.myqcloud.com/data/ml-fundamental/LDA.png\n",
"---\n",
"name: LDA example\n",
"---\n",
"LDA example\n",
":::\n"
]
},
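{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch contrasting LDA with PCA on the Iris dataset (the dataset choice is an illustrative assumption): LDA uses the class labels to pick its projection directions, while PCA ignores them.\n",
"\n",
"```python\n",
"# Project the Iris data to 2-D with LDA (label-aware) and PCA (label-blind).\n",
"from sklearn.datasets import load_iris\n",
"from sklearn.decomposition import PCA\n",
"from sklearn.discriminant_analysis import LinearDiscriminantAnalysis\n",
"\n",
"X, y = load_iris(return_X_y=True)\n",
"\n",
"# LDA: supervised; chooses directions that separate the classes.\n",
"X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)\n",
"\n",
"# PCA: unsupervised; chooses directions of maximum variance, ignoring y.\n",
"X_pca = PCA(n_components=2).fit_transform(X)\n",
"\n",
"print(X_lda[:3])\n",
"print(X_pca[:3])\n",
"```"
]
},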
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## LDA "
"## Unsupervised learning\n",
"\n",
"Unsupervised learning is a type of machine learning that deals with unlabeled or unannotated data. The goal of this learning approach is to explore the intrinsic structure and patterns in data, rather than predicting or classifying known outputs.\n",
"\n",
"### Key Concepts and Techniques\n",
"\n",
"1. Unlabeled Data: At the heart of unsupervised learning is the use of data that does not come with pre-defined labels or categories. The algorithms are designed to identify patterns and structures without external guidance or annotations.\n",
"\n",
"2. Clustering: This technique involves grouping data points based on similarity measures. Clustering algorithms, like K-means, are used extensively for segmenting data into distinct groups, each representing a specific characteristic or feature within the data.\n",
"\n",
"3. Dimensionality Reduction: Techniques such as Principal Component Analysis (PCA) are employed to reduce the number of variables under consideration. This process simplifies the dataset while retaining its essential characteristics, facilitating easier visualization and analysis.\n",
"\n",
"4. Association Rules: Used predominantly in large datasets to find interesting relationships between variables. Market basket analysis is a classic example, revealing product purchasing patterns in retail.\n",
"\n",
"### Applications of Unsupervised Learning\n",
"Unsupervised learning has a broad range of applications:\n",
"\n",
"- Market Segmentation: Identifying distinct customer clusters for targeted marketing strategies.\n",
"- Recommendation Systems: Suggesting products or services to users based on their historical preferences.\n",
"- Anomaly Detection: Recognizing unusual patterns that could indicate fraudulent activity or system faults.\n",
"- Social Network Analysis: Uncovering structures within social platforms, such as community clusters."
]
},
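{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a small illustration of the clustering technique above (synthetic data and parameters are assumptions for the sketch), K-means can group unlabeled points without any class labels:\n",
"\n",
"```python\n",
"# Cluster unlabeled data with K-means.\n",
"from sklearn.cluster import KMeans\n",
"from sklearn.datasets import make_blobs\n",
"\n",
"# Synthetic unlabeled data containing three hidden groups.\n",
"X, _ = make_blobs(n_samples=300, centers=3, random_state=0)\n",
"\n",
"kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)\n",
"print(kmeans.labels_[:10])      # cluster assignment for the first 10 points\n",
"print(kmeans.cluster_centers_)  # learned group centers\n",
"```"
]
},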
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Unsupervised learning"
"## Semi-supervised learning\n",
"\n",
"Semi-Supervised Learning is a hybrid approach in machine learning that utilizes both labeled and unlabeled data for training. This approach is particularly useful when acquiring a fully labeled dataset is costly or impractical, but unlabeled data is abundant. Semi-supervised learning bridges the gap between supervised and unsupervised learning, leveraging the strengths of both to improve learning accuracy and efficiency.\n",
"\n",
"### Key Principles and Methods\n",
"1. Combining Labeled and Unlabeled Data: The core of semi-supervised learning is the combination of a small amount of labeled data with a large amount of unlabeled data during the training process.\n",
"\n",
"2. Self-training: A common technique where a model initially trained on a small labeled dataset is used to label the unlabeled data. The model is then retrained on this newly labeled dataset.\n",
"\n",
"3. Co-training: This involves training two separate models on different views of the data and then using each model to label the unlabeled data for the other model.\n",
"\n",
"4. Graph-based Methods: These methods use graph structures to represent data, exploiting the relationships between labeled and unlabeled points to propagate labels through the graph.\n",
"\n",
"### Applications and Use Cases\n",
"\n",
"Semi-supervised learning is widely applicable in scenarios where labeled data is scarce or expensive to obtain:\n",
"\n",
"- Natural Language Processing (NLP): For tasks like sentiment analysis and language translation where labeled data can be limited.\n",
"- Image and Video Recognition: Where labeling large datasets of images or videos is labor-intensive.\n",
"- Medical Diagnosis: In fields where labeled data requires expert knowledge and time-consuming annotation.\n",
"\n",
"### Semi-Supervised SVM\n",
"\n",
"In semi-supervised learning, semi-supervised support vector machines are more widely used methods.\n",
"\n",
"Semi-Supervised Support Vector Machine (S3VM) is an extension of the traditional Support Vector Machine that combines a small amount of labeled data with a large volume of unlabeled data for training. The core idea of S3VM is to find the optimal separating hyperplane while utilizing the distribution information of the unlabeled data to guide the process. This approach aims to maximize the margin between labeled data points while ensuring a reasonable classification of the unlabeled data along this boundary. S3VM is particularly effective in scenarios where labeled data is scarce, but it involves a more complex optimization problem and may require more computational resources. Despite these challenges, S3VM has shown significant potential in enhancing classification performance, especially in situations where there is an abundance of unlabeled data available.\n",
"\n"
]
},
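{
"cell_type": "markdown",
"metadata": {},
"source": [
"Exact S3VM solvers are not part of scikit-learn, but the self-training scheme described above can be sketched with `SelfTrainingClassifier` wrapping an SVM; the data and the fraction of hidden labels are illustrative assumptions:\n",
"\n",
"```python\n",
"# Self-training with an SVM base learner: a small labeled set plus many\n",
"# unlabeled points (marked with -1, scikit-learn's convention).\n",
"import numpy as np\n",
"from sklearn.datasets import make_classification\n",
"from sklearn.semi_supervised import SelfTrainingClassifier\n",
"from sklearn.svm import SVC\n",
"\n",
"X, y = make_classification(n_samples=500, random_state=0)\n",
"rng = np.random.default_rng(0)\n",
"y_partial = y.copy()\n",
"y_partial[rng.random(len(y)) < 0.9] = -1  # hide 90% of the labels\n",
"\n",
"# probability=True lets the wrapper threshold confident pseudo-labels.\n",
"model = SelfTrainingClassifier(SVC(probability=True)).fit(X, y_partial)\n",
"print(model.score(X, y))  # accuracy against the true labels\n",
"```"
]
},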
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Semi-supervised learning\n",
"## How to conceive a \"new\" Machine Learning algorithm\n",
"\n",
"- Using Perpendicular Distance Instead of Vertical Distance in Linear Regression\n",
" - Modified linear regression model. This approach may require redefining the loss function to minimize the perpendicular distance from data points to the regression line, instead of the traditional vertical error. This variant of traditional linear regression might be more suitable for certain types of data distributions.\n",
"\n",
"- Logistic Regression with Kernel Trick\n",
" - Kernel logistic regression. By applying the kernel trick, logistic regression can be extended to handle non-linear relationships. The kernel trick involves mapping data into a higher-dimensional space where linearly inseparable data can become separable.\n",
"\n",
"- Neural Network with Kernel Trick\n",
" - Kernelized neural network. This is a theoretical concept where certain layers or operations in a neural network might be enhanced using kernel functions to better capture non-linear patterns in data. This approach could increase the complexity and computational demands of the model.\n",
"\n",
"- Neural Network with Non-Hierarchical Structure\n",
" - Graph Neural Networks (GNN) or other non-traditional structured neural networks. These networks do not follow the typical layered structure but are connected in different ways, such as based on the graph structure of the data.\n",
"\n",
"- Horizontal or Vertical Lines - Decision Trees\n",
" - Decision trees. This is a rule-based learning method that makes predictions by building a series of decisions based on features. Decision trees create splits in feature space, which can be seen as horizontal or vertical lines.\n",
"\n",
"- Counting Numbers - K-Nearest Neighbors (KNN)\n",
" - K-Nearest Neighbors algorithm. This is an instance-based learning method that predicts the classification of a sample point by looking at its K nearest neighbors. It is based on the principle of similarity, where similar samples tend to have similar outputs.\n",
"\n",
"[Semi-supervised learning](https://www.baeldung.com/cs/svm-vs-neural-network)"
"- K-Nearest Neighbors with Kernel Trick\n",
" - Related Method: Kernelized K-Nearest Neighbors. This method extends KNN by measuring similarity in a high-dimensional space, allowing the algorithm to recognize non-linear relationships in the original feature space.\n"
]
},
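{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the first idea in the list above, assuming the standard SVD-based formulation of total least squares (orthogonal regression), which minimizes perpendicular rather than vertical distances:\n",
"\n",
"```python\n",
"# Orthogonal (total least squares) regression via the SVD.\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(0)\n",
"x = rng.normal(size=100)\n",
"y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=100)\n",
"\n",
"# Center the data; the best-fit line passes through the centroid.\n",
"A = np.column_stack([x - x.mean(), y - y.mean()])\n",
"\n",
"# The first right singular vector is the direction of the line that\n",
"# minimizes the total squared perpendicular distance to the points.\n",
"_, _, Vt = np.linalg.svd(A)\n",
"direction = Vt[0]\n",
"slope = direction[1] / direction[0]\n",
"intercept = y.mean() - slope * x.mean()\n",
"print(slope, intercept)  # close to the true slope 2.0 and intercept 1.0\n",
"```"
]
},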
{
@@ -24,15 +24,34 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# parameter-optimization\n"
"# Parameter Optimization\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{tableofcontents}\n",
"```"
"In machine learning, parameter optimization is a critical process that involves fine-tuning the parameters of a model to minimize a predefined loss function. This optimization is essential for enhancing the model's ability to accurately make predictions. Two fundamental concepts in this process are the loss function and gradient descent."
]
},
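{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal gradient descent sketch on a one-parameter squared loss (the loss, starting point, and learning rate are illustrative assumptions):\n",
"\n",
"```python\n",
"# Gradient descent on L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).\n",
"w = 0.0    # initial parameter\n",
"lr = 0.1   # learning rate (step size)\n",
"for _ in range(50):\n",
"    grad = 2 * (w - 3)\n",
"    w -= lr * grad  # step against the gradient\n",
"print(w)  # converges toward the minimizer w = 3\n",
"```"
]
},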
{
"cell_type": "markdown",
"metadata": {},
"source": [
":::{seealso}\n",
"<div class=\"yt-container\">\n",
" <iframe src=\"https://www.youtube.com/watch?v=JXQT_vxqwIs\" allowfullscreen></iframe>\n",
"</div>\n",
"Click the video above for a quick introduction to this section.\n",
":::"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
":::{tableofcontents}\n",
":::"
]
}
],