From 4be82f6033397958f25724c6ac517faaf4c6c390 Mon Sep 17 00:00:00 2001 From: Fly <2946399650fly@gmail> Date: Sat, 23 Dec 2023 21:21:43 +0800 Subject: [PATCH 1/5] add Neural Network section --- open-machine-learning-jupyter-book/_toc.yml | 1 + .../deep-learning/nn.ipynb | 610 ++++++++++++++++++ 2 files changed, 611 insertions(+) create mode 100644 open-machine-learning-jupyter-book/deep-learning/nn.ipynb diff --git a/open-machine-learning-jupyter-book/_toc.yml b/open-machine-learning-jupyter-book/_toc.yml index c0462654a..7af8fe2fa 100644 --- a/open-machine-learning-jupyter-book/_toc.yml +++ b/open-machine-learning-jupyter-book/_toc.yml @@ -95,6 +95,7 @@ parts: numbered: True chapters: - file: deep-learning/dl-overview + - file: deep-learning/nn - file: deep-learning/cnn/cnn sections: - file: deep-learning/cnn/cnn-vgg diff --git a/open-machine-learning-jupyter-book/deep-learning/nn.ipynb b/open-machine-learning-jupyter-book/deep-learning/nn.ipynb new file mode 100644 index 000000000..b06b0ef54 --- /dev/null +++ b/open-machine-learning-jupyter-book/deep-learning/nn.ipynb @@ -0,0 +1,610 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 2, + "id": "cc99d119-6597-4d49-b013-f31694aed38e", + "metadata": { + "tags": [ + "hide-cell" + ] + }, + "outputs": [], + "source": [ + "# Install the necessary dependencies\n", + "\n", + "import os\n", + "import sys \n", + "!{sys.executable} -m pip install --quiet pandas scikit-learn numpy matplotlib jupyterlab_myst ipython imageio scikit-image requests ucimlrepo seaborn keras\n", + "# Neural Networks" + ] + }, + { + "cell_type": "markdown", + "id": "52e69c41-73fc-43dd-a615-9db97592b588", + "metadata": { + "tags": [ + "remove-cell" + ] + }, + "source": [ + "---\n", + "license:\n", + " code: MIT\n", + " content: CC-BY-4.0\n", + "github: https://github.com/ocademy-ai/machine-learning\n", + "venue: By Ocademy\n", + "open_access: true\n", + "bibliography:\n", + " - https://raw.githubusercontent.com/ocademy-ai/machine-learning/main/open-machine-learning-jupyter-book/references.bib\n", + "---" + ] + }, + { + "cell_type": "markdown", + "id": "ec28bbd0-5ff6-48ab-bd15-f92783e6a183", + "metadata": {}, + "source": [ + "# Neural Networks" + ] + }, + { + "cell_type": "markdown", + "id": "85c9021c-c560-485d-91f2-1642c6402ac5", + "metadata": {}, + "source": [ + "Neural Networks are the functional unit of [Deep Learning](https://press.ocademy.cc/deep-learning/dl-overview.html) and are known to mimic the behavior of the human brain to solve complex data-driven problems.\n", + "The input data is processed through different layers of artificial neurons stacked together to produce the desired output.\n", + "From speech recognition and person recognition to healthcare and marketing, Neural Networks have been used in a varied set of domains." + ] + }, + { + "cell_type": "markdown", + "id": "cc2d8e9d-ffb2-4562-a180-92e5e9a87d79", + "metadata": {}, + "source": [ + "## Key Components of the Neural Network Architecture" + ] + }, + { + "cell_type": "markdown", + "id": "460b8583-75ed-47ff-bea0-18913ac02aba", + "metadata": {}, + "source": [ + "The Neural Network architecture is made of individual units called neurons that mimic the biological behavior of the brain. \n", + "Here are the various components of a neuron." + ] + }, + { + "cell_type": "markdown", + "id": "11568845-e197-4e99-be19-27509431a4c1", + "metadata": {}, + "source": [ + "
Image: Neuron in Artificial Neural Network" + ] + }, + { + "cell_type": "markdown", + "id": "bd6cd958-b6af-4f18-ad97-83307a697c7c", + "metadata": {}, + "source": [ + "### Input\n", + "It is the set of features that are fed into the model for the learning process. For example, the input in object detection can be an array of pixel values pertaining to an image.\n", + "\n", + "### Weight\n", + "Its main function is to give importance to those features that contribute more towards the learning. It does so by introducing scalar multiplication between the input value and the weight matrix. For example, a negative word would impact the decision of the sentiment analysis model more than a pair of neutral words.\n", + "\n", + "### Transfer function\n", + "The job of the transfer function is to combine multiple inputs into one output value so that the activation function can be applied. It is done by a simple summation of all the inputs to the transfer function. " + ] + }, + { + "cell_type": "markdown", + "id": "53f32b90-5cc1-41fb-93a5-7776d9b3d9c2", + "metadata": {}, + "source": [ + "### Activation Function\n", + "It introduces non-linearity in the working of perceptrons to consider varying linearity with the inputs. Without this, the output would just be a linear combination of input values and would not be able to introduce non-linearity in the network.\n", + "In the realm of deep learning, several common activation functions are widely used due to their impact on network training and performance. Here are some prevalent activation functions:\n", + "#### 1. ReLU (Rectified Linear Activation):\n", + "ReLU is one of the most commonly used activation functions. It sets negative input values to zero and keeps positive values unchanged:\n", + "\n", + "$$\n", + "f(x) = \\max(0, x) \n", + "$$\n", + "\n", + "ReLU effectively mitigates the vanishing gradient problem and computes faster. However, it can cause neurons to \"die\" by setting negative outputs to zero.\n", + "#### 2. Sigmoid Function:\n", + "The sigmoid function maps inputs to the range (0, 1):\n", + "\n", + "$$\n", + "f(x) = \\frac{1}{1 + e^{-x}}\n", + "$$\n", + "\n", + "It's often used in binary classification problems at the output layer but suffers from the vanishing gradient problem in deep neural networks.\n", + "#### 3. Tanh Function:\n", + "Tanh function maps inputs to the range (-1, 1):\n", + "\n", + "$$\n", + "f(x) = \\frac{e^x - e^{-x}}{e^x + e^{-x}}\n", + "$$\n", + "\n", + "Similar to the sigmoid function but outputs in the range (-1, 1), aiding zero-centered data. However, it also faces the issue of vanishing gradients.\n", + "#### 4. Leaky ReLU:\n", + "Leaky ReLU is an improvement over ReLU, allowing a small slope for negative input values:\n", + "\n", + "$$\n", + "f(x) = \\begin{cases} x & \\text{if } x > 0 \\\\ \\alpha x & \\text{otherwise} \\end{cases}\n", + "$$\n", + "\n", + "Where $\\alpha$ is a small positive number. This function addresses the \"neuron death\" problem in ReLU.\n", + "#### 5. ELU (Exponential Linear Unit):\n", + "ELU is similar to Leaky ReLU but allows a slightly negative slope for negative values and tends to zero-center:\n", + "\n", + "$$\n", + "f(x) = \\begin{cases} x & \\text{if } x > 0 \\\\ \\alpha (e^x - 1) & \\text{otherwise} \\end{cases}\n", + "$$\n", + " \n", + "ELU helps reduce the vanishing gradient problem and improves training stability in some scenarios.\n", + "\n", + "Each activation function has its advantages and drawbacks. 
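"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "activation-demo-lead-in",
   "metadata": {},
   "source": [
    "To make these formulas concrete, the cell below sketches all five functions directly in NumPy. It is only an illustration: the sample inputs and the $\\alpha$ values are arbitrary choices, and this cell is independent of the classifier trained later in this chapter."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "activation-demo-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Sample pre-activation values to pass through each function\n",
    "z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])\n",
    "\n",
    "def relu(x):\n",
    "    # max(0, x), element-wise\n",
    "    return np.maximum(0.0, x)\n",
    "\n",
    "def sigmoid(x):\n",
    "    # squashes inputs into (0, 1)\n",
    "    return 1.0 / (1.0 + np.exp(-x))\n",
    "\n",
    "def leaky_relu(x, alpha=0.01):\n",
    "    # small slope alpha for negative inputs instead of zero\n",
    "    return np.where(x > 0, x, alpha * x)\n",
    "\n",
    "def elu(x, alpha=1.0):\n",
    "    # smooth exponential curve for negative inputs\n",
    "    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))\n",
    "\n",
    "for name, fn in [(\"ReLU\", relu), (\"Sigmoid\", sigmoid), (\"Tanh\", np.tanh),\n",
    "                 (\"Leaky ReLU\", leaky_relu), (\"ELU\", elu)]:\n",
    "    print(f\"{name:>10}: {np.round(fn(z), 3)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "activation-demo-wrap-up",
   "metadata": {},
   "source": [
    "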
Their performance can vary in different network architectures and tasks. Research in deep learning continually explores new activation functions to enhance training effectiveness and network performance." + ] + }, + { + "cell_type": "markdown", + "id": "5137abb6-81fa-4ab6-aa57-28d215074735", + "metadata": {}, + "source": [ + "### Bias\n", + "The role of bias is to shift the value produced by the activation function. Its role is similar to the role of a constant in a linear function.\n", + "\n", + "When multiple neurons are stacked together in a row, they constitute a layer, and multiple layers piled next to each other are called a multi-layer neural network.\n", + "\n", + "We've described the main components of this type of structure below." + ] + }, + { + "cell_type": "markdown", + "id": "19aeaa65-4f77-47cf-8c98-9ce0ba304908", + "metadata": {}, + "source": [ + "
Image: Multi-layer neural network" + ] + }, + { + "cell_type": "markdown", + "id": "bfb44613-7e13-41a9-a621-bec3327a9ed2", + "metadata": {}, + "source": [ + "### Input Layer\n", + "\n", + "The data that we feed to the model is loaded into the input layer from external sources like a CSV file or a web service. It is the only visible layer in the complete Neural Network architecture that passes the complete information from the outside world without any computation.\n", + "\n", + "### Hidden Layers\n", + "\n", + "The hidden layers are what makes deep learning what it is today. They are intermediate layers that do all the computations and extract the features from the data.\n", + "\n", + "There can be multiple interconnected hidden layers that account for searching different hidden features in the data. For example, in image processing, the first hidden layers are responsible for higher-level features like edges, shapes, or boundaries. On the other hand, the later hidden layers perform more complicated tasks like identifying complete objects (a car, a building, a person).\n", + "\n", + "### Output Layer\n", + "\n", + "The output layer takes input from preceding hidden layers and comes to a final prediction based on the model’s learnings. It is the most important layer where we get the final result.\n", + "\n", + "In the case of classification/regression models, the output layer generally has a single node. However, it is completely problem-specific and dependent on the way the model was built." + ] + }, + { + "cell_type": "markdown", + "id": "98b6b84a-b5cd-4cd9-89b6-3945ba0505ed", + "metadata": {}, + "source": [ + "## Standard Neural Networks" + ] + }, + { + "cell_type": "markdown", + "id": "b4e9a68e-2b30-44b6-b651-d80e871f5ddc", + "metadata": {}, + "source": [ + "The following are several standard types of neural networks" + ] + }, + { + "cell_type": "markdown", + "id": "6a8abe53-1148-41cc-9651-2bcb0a3e916e", + "metadata": {}, + "source": [ + "### The Perceptron\n", + "Perceptron is the simplest Neural Network architecture.\n", + "\n", + "It is a type of Neural Network that takes a number of inputs, applies certain mathematical operations on these inputs, and produces an output. It takes a vector of real values inputs, performs a linear combination of each attribute with the corresponding weight assigned to each of them.\n", + "\n", + "The weighted input is summed into a single value and passed through an activation function. \n", + "\n", + "These perceptron units are combined to form a bigger Artificial Neural Network architecture." + ] + }, + { + "cell_type": "markdown", + "id": "02af3b8a-5251-4fff-8d16-87fbf1d7a87b", + "metadata": {}, + "source": [ + "### Feed-Forward Networks\n", + "Perceptron represents how a single neuron works.\n", + "\n", + "But—\n", + "\n", + "What about a series of perceptrons stacked in a row and piled in different layers? How does the model learn then?\n", + "\n", + "It is a multi-layer Neural Network, and, as the name suggests, the information is passed in the forward direction—from left to right.\n", + "\n", + "In the forward pass, the information comes inside the model through the input layer, passes through the series of hidden layers, and finally goes to the output layer. This Neural Networks architecture is forward in nature—the information does not loop with two hidden layers.\n", + "\n", + "The later layers give no feedback to the previous layers. The basic learning process of Feed-Forward Networks remain the same as the perceptron." 
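   ]
  },
  {
   "cell_type": "markdown",
   "id": "ffn-forward-demo-lead-in",
   "metadata": {},
   "source": [
    "To make the forward pass concrete, the next cell pushes a single input vector through a tiny two-layer network in NumPy. This is only a sketch: the layer sizes are arbitrary and the weights are random rather than learned, so the output is meaningful only as an illustration of the mechanics."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ffn-forward-demo-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "# One input example with 3 features\n",
    "x = rng.normal(size=3)\n",
    "\n",
    "# Layer 1: 3 inputs -> 4 hidden units; layer 2: 4 hidden units -> 1 output\n",
    "W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)\n",
    "W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)\n",
    "\n",
    "def relu(v):\n",
    "    return np.maximum(0.0, v)\n",
    "\n",
    "def sigmoid(v):\n",
    "    return 1.0 / (1.0 + np.exp(-v))\n",
    "\n",
    "# Forward pass: weighted sum plus bias (the transfer function), then activation\n",
    "h = relu(W1 @ x + b1)         # hidden layer\n",
    "y_hat = sigmoid(W2 @ h + b2)  # output layer, a probability-like value\n",
    "print(y_hat)"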
+ ] + }, + { + "cell_type": "markdown", + "id": "84f7dc96-ab00-4b00-827a-5790918ed462", + "metadata": {}, + "source": [ + "### Residual Networks (ResNet)\n", + "Now that you know more about the Feed-Forward Networks, one question might have popped up in your head—how to decide on the number of layers in our neural network architecture?\n", + "\n", + "A naive answer would be: The greater the number of hidden layers, the better is the learning process.\n", + "\n", + "More layers enrich the levels of features.\n", + "\n", + "But—\n", + "\n", + "Is that so?\n", + "\n", + "Very deep Neural Networks are extremely difficult to train due to vanishing and exploding gradient problems.\n", + "\n", + "ResNets provide an alternate pathway for data to flow to make the training process much faster and easier.\n", + "\n", + "This is different from the feed-forward approach of earlier Neural Networks architectures. \n", + "\n", + "The core idea behind ResNet is that a deeper network can be made from a shallow network by copying weight from the shallow counterparts using identity mapping.\n", + "\n", + "The data from previous layers is fast-forwarded and copied much forward in the Neural Networks. This is what we call skip connections first introduced in Residual Networks to resolve vanishing gradients." + ] + }, + { + "cell_type": "markdown", + "id": "c59bd0fe-75ec-473a-aacc-cac0e6eccb35", + "metadata": {}, + "source": [ + "## Code\n", + "Now, let's train a neural network model for Heart Disease Classification as an example to help you better understand" + ] + }, + { + "cell_type": "code", + "execution_count": 171, + "id": "7aaffb2a-8b9c-49e4-944d-884969408638", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Epoch 1/100\n", + "25/25 [==============================] - 1s 1ms/step - loss: 0.6931 - accuracy: 0.5331\n", + "Epoch 2/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.6928 - accuracy: 0.5331\n", + "Epoch 3/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.6926 - accuracy: 0.5331\n", + "Epoch 4/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.6905 - accuracy: 0.5331\n", + "Epoch 5/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.6767 - accuracy: 0.5372\n", + "Epoch 6/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.6246 - accuracy: 0.8347\n", + "Epoch 7/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.5233 - accuracy: 0.8719\n", + "Epoch 8/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.4264 - accuracy: 0.8719\n", + "Epoch 9/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.3577 - accuracy: 0.8760\n", + "Epoch 10/100\n", + "25/25 [==============================] - 0s 2ms/step - loss: 0.3254 - accuracy: 0.8802\n", + "Epoch 11/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.3170 - accuracy: 0.8843\n", + "Epoch 12/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.3123 - accuracy: 0.8843\n", + "Epoch 13/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.3085 - accuracy: 0.8843\n", + "Epoch 14/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.3047 - accuracy: 0.8926\n", + "Epoch 15/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2998 - accuracy: 0.8926\n", + "Epoch 16/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 
0.2968 - accuracy: 0.8967\n", + "Epoch 17/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2942 - accuracy: 0.9008\n", + "Epoch 18/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2920 - accuracy: 0.9008\n", + "Epoch 19/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2902 - accuracy: 0.9008\n", + "Epoch 20/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2871 - accuracy: 0.9008\n", + "Epoch 21/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2850 - accuracy: 0.9008\n", + "Epoch 22/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2832 - accuracy: 0.9050\n", + "Epoch 23/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2822 - accuracy: 0.9008\n", + "Epoch 24/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2796 - accuracy: 0.9008\n", + "Epoch 25/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2787 - accuracy: 0.9050\n", + "Epoch 26/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2743 - accuracy: 0.9091\n", + "Epoch 27/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2740 - accuracy: 0.9091\n", + "Epoch 28/100\n", + "25/25 [==============================] - 0s 2ms/step - loss: 0.2714 - accuracy: 0.9091\n", + "Epoch 29/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2710 - accuracy: 0.9132\n", + "Epoch 30/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2681 - accuracy: 0.9091\n", + "Epoch 31/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2674 - accuracy: 0.9091\n", + "Epoch 32/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2656 - accuracy: 0.9132\n", + "Epoch 33/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2642 - accuracy: 0.9132\n", + "Epoch 34/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2627 - accuracy: 0.9132\n", + "Epoch 35/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2609 - accuracy: 0.9132\n", + "Epoch 36/100\n", + "25/25 [==============================] - 0s 2ms/step - loss: 0.2603 - accuracy: 0.9132\n", + "Epoch 37/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2580 - accuracy: 0.9132\n", + "Epoch 38/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2578 - accuracy: 0.9132\n", + "Epoch 39/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2570 - accuracy: 0.9132\n", + "Epoch 40/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2546 - accuracy: 0.9132\n", + "Epoch 41/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2528 - accuracy: 0.9132\n", + "Epoch 42/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2515 - accuracy: 0.9132\n", + "Epoch 43/100\n", + "25/25 [==============================] - 0s 2ms/step - loss: 0.2488 - accuracy: 0.9132\n", + "Epoch 44/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2470 - accuracy: 0.9132\n", + "Epoch 45/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2451 - accuracy: 0.9132\n", + "Epoch 46/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2434 - accuracy: 0.9174\n", + "Epoch 47/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2401 - accuracy: 
0.9256\n", + "Epoch 48/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2393 - accuracy: 0.9256\n", + "Epoch 49/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2367 - accuracy: 0.9256\n", + "Epoch 50/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2360 - accuracy: 0.9256\n", + "Epoch 51/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2340 - accuracy: 0.9256\n", + "Epoch 52/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2313 - accuracy: 0.9256\n", + "Epoch 53/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2300 - accuracy: 0.9256\n", + "Epoch 54/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2277 - accuracy: 0.9256\n", + "Epoch 55/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2264 - accuracy: 0.9256\n", + "Epoch 56/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2283 - accuracy: 0.9256\n", + "Epoch 57/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2292 - accuracy: 0.9256\n", + "Epoch 58/100\n", + "25/25 [==============================] - 0s 2ms/step - loss: 0.2223 - accuracy: 0.9298\n", + "Epoch 59/100\n", + "25/25 [==============================] - 0s 2ms/step - loss: 0.2191 - accuracy: 0.9298\n", + "Epoch 60/100\n", + "25/25 [==============================] - 0s 2ms/step - loss: 0.2168 - accuracy: 0.9298\n", + "Epoch 61/100\n", + "25/25 [==============================] - 0s 2ms/step - loss: 0.2152 - accuracy: 0.9339\n", + "Epoch 62/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2130 - accuracy: 0.9339\n", + "Epoch 63/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2123 - accuracy: 0.9339\n", + "Epoch 64/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2102 - accuracy: 0.9339\n", + "Epoch 65/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2091 - accuracy: 0.9339\n", + "Epoch 66/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2074 - accuracy: 0.9339\n", + "Epoch 67/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2073 - accuracy: 0.9298\n", + "Epoch 68/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2018 - accuracy: 0.9339\n", + "Epoch 69/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.2012 - accuracy: 0.9339\n", + "Epoch 70/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1999 - accuracy: 0.9339\n", + "Epoch 71/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1983 - accuracy: 0.9380\n", + "Epoch 72/100\n", + "25/25 [==============================] - 0s 2ms/step - loss: 0.1961 - accuracy: 0.9380\n", + "Epoch 73/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1950 - accuracy: 0.9380\n", + "Epoch 74/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1925 - accuracy: 0.9380\n", + "Epoch 75/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1906 - accuracy: 0.9380\n", + "Epoch 76/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1894 - accuracy: 0.9380\n", + "Epoch 77/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1896 - accuracy: 0.9380\n", + "Epoch 78/100\n", + "25/25 [==============================] - 0s 2ms/step - loss: 0.1847 - accuracy: 0.9380\n", + "Epoch 
79/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1820 - accuracy: 0.9421\n", + "Epoch 80/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1798 - accuracy: 0.9421\n", + "Epoch 81/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1774 - accuracy: 0.9421\n", + "Epoch 82/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1766 - accuracy: 0.9421\n", + "Epoch 83/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1740 - accuracy: 0.9421\n", + "Epoch 84/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1742 - accuracy: 0.9421\n", + "Epoch 85/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1701 - accuracy: 0.9421\n", + "Epoch 86/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1685 - accuracy: 0.9421\n", + "Epoch 87/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1660 - accuracy: 0.9421\n", + "Epoch 88/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1653 - accuracy: 0.9463\n", + "Epoch 89/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1642 - accuracy: 0.9421\n", + "Epoch 90/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1616 - accuracy: 0.9463\n", + "Epoch 91/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1592 - accuracy: 0.9463\n", + "Epoch 92/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1577 - accuracy: 0.9463\n", + "Epoch 93/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1551 - accuracy: 0.9463\n", + "Epoch 94/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1498 - accuracy: 0.9504\n", + "Epoch 95/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1481 - accuracy: 0.9545\n", + "Epoch 96/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1444 - accuracy: 0.9545\n", + "Epoch 97/100\n", + "25/25 [==============================] - 0s 2ms/step - loss: 0.1418 - accuracy: 0.9545\n", + "Epoch 98/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1400 - accuracy: 0.9545\n", + "Epoch 99/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1388 - accuracy: 0.9545\n", + "Epoch 100/100\n", + "25/25 [==============================] - 0s 1ms/step - loss: 0.1371 - accuracy: 0.9545\n", + "2/2 [==============================] - 0s 2ms/step\n", + "accuracy of the model: 0.8032786885245902\n" + ] + }, + { + "data": { + "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAhsAAAGdCAYAAAC7JrHlAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjYuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8o6BhiAAAACXBIWXMAAA9hAAAPYQGoP6dpAAARP0lEQVR4nO3cf3SVhX3H8e8lIZcfChKQAK0KrVpXp2gDpFqtMpnUtij9gbXrJtpTPZ6unI6grWzrXNuzYietnYLaOjvddrri1vmrp2pX/NV1KCyKzhXbWmhJlQQQTUyKIZC7P7pmy9FCA3zzRHi9zsk53Oe5uefzT3LePPe5KVUqlUoAACQZUvQAAODAJjYAgFRiAwBIJTYAgFRiAwBIJTYAgFRiAwBIJTYAgFRiAwBIVV30gF8bfvInip4AJHlxzbKiJwBJhv0WJeHKBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQSmwAAKnEBgCQqrroARyYLpl3WlzywdPjqEm1ERGxbn1LfOFr98Z3f/DDiIgo11TH1Y3vj3mz66NcUx3fW7UuPvmFFbF528tFzgb2wo3Lr4+bbljW59jkKVPirm/fV9AiBhuxQYrnWl+Kz1x/Vzy7cUuUohR/OKch/vnaS+PtF1wd69a3xF9f/oE457Tj4yOfuiXaO7bHtVeeH9/80sfi9y6+tujpwF5489HHxNf+9u96H1dVVxW4hsFGbJDiO4883efxXy6/Jy6Zd1rMOHFKPLf5pbho7ilx0Z/eGg+v+XFERFx61T/Gk3d8JmacMDlW/9fPClgM7IvqqqoYd/jhRc9gkHLPBumGDCnFvNn1MXJ4TTz21IY4+XeOjJqh1fHAoz/qfc6Pf9YaGzdti4YTpxS4FNhbP9/485h15mnx7tlnxeJPLYpNzz9f9CQGkX5f2di6dWt8/etfj1WrVkVLS0tEREyYMCFOPfXUuOiii+JwZcv/Ov7oSfHQbYtiWE11dGzvig8tujmeWd8SU499Y3Tt6I62ju19nr/5hfaoGzuqoLXA3jrhxBPj83+1JCZPnhJbtmyJr964PC6+8CPxrbvuiZEjDyl6HoNAv2JjzZo1MXv27BgxYkTMmjUrjj322IiIaG1tjeuuuy6uvvrquP/++2PatGm7fZ2urq7o6urqc6zSsytKQ7zHdyD58c9ao+GCJTH6kOHxvlknx82f+6M4+2N/U/QsYD877fQzev997FuOixNOnBrn/P7MuP++e+P9H5hX4DIGi37FxoIFC2LevHlx0003RalU6nOuUqnEZZddFgsWLIhVq1bt9nWWLFkSn/3sZ/scq6qbHkMnzujPHAa57p27Yn3z1oiIeGJdc9Qff2T88YfPjH/57uNRrhkaow8Z3ufqxvixo6L1hfai5gL7yahRo+KooyZH88aNRU9hkOjXPRtPPvlkLFy48FWhERFRKpVi4cKFsXbt2j2+zuLFi6Otra3PV3VdfX+m8Do0pFSKck11PLFuY+zo3hkzG97Se+6Yo8bHkRNr47GnNhS4ENgfftnZGc3NzW4YpVe/rmxMmDAhVq9eHccdd9xrnl+9enXU1dXt8XXK5XKUy+U+x7yFcmD53IJz4/4f/Hc0b3oxDh05LD50zrR457RjYs7Hb4j2jlfi1jtXxRcXvT+2tXXGy52vxJc/PS8efXK9T6LA69CXrvlinHHmzJg4aVJs2bw5blx+fVRVDYlz3v3eoqcxSPQrNi6//PK49NJLo6mpKc4666zesGhtbY2VK1fGzTffHEuXLk0ZyuvL4bWHxC2fvzAmjBsVbR2vxNM/eS7mfPyGeOCxZyIi4lNLvxU9PZX4p6Uf+9Uf9fqPdfHJJSsKXg3sjdbWlrjyisZ46aWXYkxtbZz8tvr4h2/cHrW1tUVPY5AoVSqVSn++YcWKFXHttddGU1NT7Nq1KyIiqqqqor6+PhobG+P888/fqyHDT/7EXn0fMPi9uGbZnp8EvC4N+y0uW/Q7Nn6tu7s7tm791c1/48aNi6FDh+7Ny/QSG3DgEhtw4PptYmOv/4Lo0KFDY+LEiXv77QDAQcJfEAUAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUokNACCV2AAAUpUqlUql6BERES3t3UVPAJL8yR1PFz0BSPLN+Sfv8TmubAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbA
AAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJCquugBHBw+dO7Z0bLp+Vcdn/vBC2Lhp/+8gEXA3jqubmTMOb4upowdEbUjhsbSB9bHfza39Z4fPaw6/qB+UpwwaVSMrKmKda0dcetjv4iWl7sKXE2RxAYD4qu3fTN27erpfbzhpz+JRZ+4JM6cdXaBq4C9May6Kn7+4vZ46NkXYtHMN73q/KKZb4pdlUosfWB9bO/eFe956/j4s7OPjsvvWhddO3te4xU50HkbhQFx2JjaGDtuXO/Xqn9/ON7wxiPipLdNL3oa0E9rn2uP25/YFGs2tr3q3MRR5Th2/Mi45dHmWP/CL2NTe1fc8mhz1FSV4tQpYwpYy2AgNhhw3d3d8W/3fjvOOfd9USqVip4D7EfVQ371M939/65kViJiZ08ljhs/sqBVFE1sMOC+/9DK6Oh4Oc5579yipwD72fNtr8SWjh1xwdsmxciaqqgaUopzf3d8jB1ZE4cNH1r0PAqy32Ojubk5PvrRj+72OV1dXdHe3t7nq6vLjUMHi+/c/a8x45TTYtzh44ueAuxnuyoRX35wfUwcVY5bPnxi/P1HpsZbJxwaT/yiLSqVotdRlP0eG9u2bYvbbrttt89ZsmRJjB49us/X9V/+4v6ewiDUsun5aFr9aLx37geKngIk2bBte1x5z4/i4m88GZfd/nRc/b2fxqHl6mjt8J/Kg1W/P41y99137/b8+vXr9/gaixcvjsbGxj7HXuzyjs7B4N577ojDxtTG29/xzqKnAMm2d/dERE9MOLQcbxo7Im5fu6noSRSk37Exd+7cKJVKUdnN9bA93fRXLpejXC73OfbL9u7+TuF1pqenJ+69585413vOi+pqn7qG16ty9ZCYcOj//Q4ff2hNHDVmeHTs2BkvdHZHw1GHxcuv7IytnTviiDHD46IZb4g1zW3x1PMvF7iaIvX7N/7EiRPjhhtuiPPOO+81z69duzbq6+v3eRgHnqbVq6K1ZVO8+9z3FT0F2AdvHjsi/uJdx/Q+vnD6GyMi4uFnX4gbf7AxxgwfGhdOf0OMHlYdL27fGd//6bb41lMtRc1lEOh3bNTX10dTU9NvjI09XfXg4DX97e+Ih9c8XfQMYB/9sLUjLrjtid94/r5ntsR9z2wZwEUMdv2OjSuuuCI6Ozt/4/mjjz46HnzwwX0aBQAcOPodG6effvpuz48cOTLOOOOMvR4EABxYfAQEAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEglNgCAVGIDAEhVqlQqlaJHcHDp6uqKJUuWxOLFi6NcLhc9B9iP/HzzWsQGA669vT1Gjx4dbW1tMWrUqKLnAPuRn29ei7dRAIBUYgMASCU2AIBUYoMBVy6X46qrrnLzGByA/HzzWtwgCgCkcmUDAEglNgCAVGIDAEglNgCAVGKDAbV8+fKYPHlyDBs2LBoaGmL16tVFTwL2g0ceeSTmzJkTkyZNilKpFHfeeWfRkxhExAYDZsWKFdHY2BhXXXVVPP744zF16tSYPXt2bN68uehpwD7q7OyMqVOnxvLly4uewiDko68MmIaGhpg+fXosW7YsIiJ6enriiCOOiAULFsSVV15Z8DpgfymVSnHHHXfE3Llzi57CIOHKBgNix44d0dTUFLNmzeo9NmTIkJg1a1asWrWqwGUAZBMbDIitW7fGrl27oq6urs/xurq6aGlpKWgVAANBbAAAqcQGA2LcuHFRVVUVra2tfY63trbGhAkTCloFwEAQGwyImpqaqK+vj5UrV/Ye6+npiZUrV8Ypp5xS4DIAslUXPYCDR2NjY8yfPz+mTZsWM2bMiK985SvR2dkZF198cdHTgH3U0dERzz77bO/jDRs2xNq1a6O2tjaOPPLIApcxGPjoKwNq2bJlcc0110RLS0ucdNJJcd1110VDQ0PRs4B99NBDD8XMmTNfdXz+/Plx6623DvwgBhWxAQCkcs8GAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqcQGAJBKbAAAqf4HoLMovbq5LyEAAAAASUVORK5CYII=", + "text/plain": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "import numpy as np\n", + "import pandas as pd\n", + "import seaborn as sns\n", + "import keras\n", + "from sklearn.model_selection import train_test_split\n", + "from sklearn.preprocessing import StandardScaler\n", + "from sklearn.metrics import confusion_matrix\n", + "from sklearn.metrics import accuracy_score\n", + "from keras.models import Sequential\n", + "from keras.layers import Dense\n", + "from ucimlrepo import fetch_ucirepo \n", + "%matplotlib inline\n", + "heart_disease = fetch_ucirepo(id=45)\n", + "pd.options.mode.chained_assignment = None\n", + "X = heart_disease.data.features \n", + "y = heart_disease.data.targets\n", + "y[y != 0] = 1\n", + "chest_pain = pd.get_dummies(X['cp'], prefix='cp', drop_first=True)\n", + "X = pd.concat([X, chest_pain], axis=1)\n", + "X.drop(['cp'], axis=1, inplace=True)\n", + "\n", + "sp = pd.get_dummies(X['slope'], prefix='slope')\n", + "th = pd.get_dummies(X['thal'], prefix='thal')\n", + "rest_ecg = pd.get_dummies(X['restecg'], prefix='restecg')\n", + "\n", + "frames = [X, sp, th, rest_ecg]\n", + "X = pd.concat(frames, axis=1)\n", + "X.drop(['slope', 'thal', 'restecg'], axis=1, inplace=True)\n", + "\n", + "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)\n", + "sc = StandardScaler()\n", + "X_train = sc.fit_transform(X_train)\n", + "X_test = sc.transform(X_test)\n", + "X_train = pd.DataFrame(X_train).fillna(method='ffill')\n", + "X_test = pd.DataFrame(X_test).fillna(method='ffill')\n", + "classifier = Sequential()\n", + "classifier.add(Dense(units=11, kernel_initializer='uniform', activation='relu', input_dim=21))\n", + "classifier.add(Dense(units=10, kernel_initializer='uniform', activation='relu'))\n", + "classifier.add(Dense(units=10, kernel_initializer='uniform', activation='relu'))\n", + "classifier.add(Dense(units=5, kernel_initializer='uniform', activation='relu'))\n", + "classifier.add(Dense(units=1, kernel_initializer='uniform', activation='sigmoid'))\n", + "classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])\n", + "classifier.fit(X_train, y_train, batch_size=10, epochs=100)\n", + "y_pred = classifier.predict(X_test)\n", + "cm = confusion_matrix(y_test, y_pred.round())\n", + "sns.heatmap(cm,annot=True,cmap=\"Blues\",fmt=\"d\",cbar=False)\n", + "ac=accuracy_score(y_test, y_pred.round())\n", + "print('accuracy of the model: ',ac)" + ] + }, + { + "cell_type": "markdown", + "id": "fb10720d-9756-4ce8-a284-18a882e11f64", + "metadata": {}, + "source": [ + "## Your turn! 🚀\n", + "\n", + "TBD." + ] + }, + { + "cell_type": "markdown", + "id": "fc673adb-1d3a-4ea5-b9b6-c84b45953be0", + "metadata": {}, + "source": [ + "## Self study\n", + "\n", + "TBD." + ] + }, + { + "cell_type": "markdown", + "id": "7f5b2350-2265-4919-9503-8f191fb33a52", + "metadata": {}, + "source": [ + "## Acknowledgments\n", + "\n", + "Thanks to [Pragati Baheti](https://www.v7labs.com/authors/pragati-baheti) and [Rajesh kumar jha](https://www.kaggle.com/rajeshjnv) for creating the open-source course [The Essential Guide to Neural Network Architectures](https://www.v7labs.com/blog/neural-network-architectures-guide#standard-neural-networks) and [Heart Disease Classification - Neural Network](https://www.kaggle.com/code/rajeshjnv/heart-disease-classification-neural-network#Loading-appropriate-libraries). 
It inspires the majority of the content in this chapter.\n" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "open-machine-learning-jupyter-book", + "language": "python", + "name": "open-machine-learning-jupyter-book" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.16" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From c66df0f75a90fa7133aaee3f938831a73b8b337f Mon Sep 17 00:00:00 2001 From: Fly <2946399650fly@gmail> Date: Sun, 24 Dec 2023 11:13:03 +0800 Subject: [PATCH 2/5] remove neural network content in ml-fundamentals --- .../neural-network/autoencoders.md | 0 .../convolutional-neural-networks.md | 0 .../neural-network/introduction.md | 5 - .../neural-network/neural-network-overview.md | 0 .../neural-network/nn-basics.md | 193 ------------------ .../neural-network/nn-hands-on.md | 151 -------------- .../neural-network/nn-implementation.md | 56 ----- .../recurrent-neural-networks.md | 0 8 files changed, 405 deletions(-) delete mode 100644 open-machine-learning-jupyter-book/ml-fundamentals/neural-network/autoencoders.md delete mode 100644 open-machine-learning-jupyter-book/ml-fundamentals/neural-network/convolutional-neural-networks.md delete mode 100644 open-machine-learning-jupyter-book/ml-fundamentals/neural-network/introduction.md delete mode 100644 open-machine-learning-jupyter-book/ml-fundamentals/neural-network/neural-network-overview.md delete mode 100644 open-machine-learning-jupyter-book/ml-fundamentals/neural-network/nn-basics.md delete mode 100644 open-machine-learning-jupyter-book/ml-fundamentals/neural-network/nn-hands-on.md delete mode 100644 open-machine-learning-jupyter-book/ml-fundamentals/neural-network/nn-implementation.md delete mode 100644 open-machine-learning-jupyter-book/ml-fundamentals/neural-network/recurrent-neural-networks.md diff --git a/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/autoencoders.md b/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/autoencoders.md deleted file mode 100644 index e69de29bb..000000000 diff --git a/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/convolutional-neural-networks.md b/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/convolutional-neural-networks.md deleted file mode 100644 index e69de29bb..000000000 diff --git a/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/introduction.md b/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/introduction.md deleted file mode 100644 index 093b11fce..000000000 --- a/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/introduction.md +++ /dev/null @@ -1,5 +0,0 @@ -# Introduction - -```{tableofcontents} - -``` diff --git a/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/neural-network-overview.md b/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/neural-network-overview.md deleted file mode 100644 index e69de29bb..000000000 diff --git a/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/nn-basics.md b/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/nn-basics.md deleted file mode 100644 index b7e91be8b..000000000 --- a/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/nn-basics.md +++ /dev/null @@ -1,193 +0,0 @@ -# Neural network basics - -```{admonition} Tip -:class: 
tip -Tensorflow Playground: https://playground.tensorflow.org/ -``` - -## Logistic Regression is ... an artificial neuron - -### Recall logistic regression - -```{figure} ../../../images/nn/artificial_neuron.png ---- -name: 'artificial_neuron' -width: 90% ---- -artificial neuron. [source](https://towardsdatascience.com/beginners-crash-course-to-deep-learning-and-cnns-a32f35234038) -``` - -### How the output varies with different weights - -```{figure} ../../../images/nn/1_KNZZYteeBqkJViS1_LT1CQ.gif ---- -name: '1_KNZZYteeBqkJViS1_LT1CQ' -width: 90% ---- -how the output varies with input. [source](https://towardsai.net/p/machine-learning/introduction-to-neural-networks-and-their-key-elements-part-c-activation-functions-layers-ea8c915a9d9) -``` - -## Two neurons - -```{figure} ../../../images/nn/1sdfasdfadffa.gif ---- -name: '1sdfasdfadffa' -width: 90% ---- -two artificial neurons. [source](https://towardsdatascience.com/beginners-crash-course-to-deep-learning-and-cnns-a32f35234038) -``` - -## Hidden layers - - -```{figure} ../../../images/nn/Feed-Forward-Neural-Network.gif ---- -name: 'Feed-Forward-Neural-Network' -width: 90% ---- -A feed forward neural network with one hidden layer [source](https://machinelearningknowledge.ai/animated-explanation-of-feed-forward-neural-network-architecture/) -``` - -## Predict & Forward propagation (ideal case) - -```{figure} ../../../images/nn/predict.gif ---- -name: 'predictnn' -width: 90% ---- -predict. [source](https://www.youtube.com/watch?v=aircAruvnKk) -``` - -## Well, what we usually get ... is trash - -```{figure} ../../../images/nn/trash.gif ---- -name: 'trashnn' -width: 90% ---- -trash. [source](https://www.youtube.com/watch?v=aircAruvnKk) -``` - -## So, how to have a better model? - - -### Overview of backpropagation - -```{figure} ../../../images/nn/Backpropagation.gif ---- -name: 'Backpropagation' -width: 90% ---- -Backpropagation. [source](https://machinelearningknowledge.ai/animated-explanation-of-feed-forward-neural-network-architecture/) -``` - - -### Train with sample dataset - -```{figure} ../../../images/nn/train.gif ---- -name: 'trainnn' -width: 90% ---- -train. [source](https://www.youtube.com/watch?v=aircAruvnKk) -``` - - -### Each training data has his/her own saying ... - - -```{figure} ../../../images/nn/bp.gif ---- -name: 'bp' -width: 90% ---- -Each training data has his/her own saying. [source](https://www.youtube.com/watch?v=aircAruvnKk) -``` - -### Let's summarize their opinions ... in a democratic way - -```{figure} ../../../images/nn/average.gif ---- -name: 'average' -width: 90% ---- -Average. [source](https://www.youtube.com/watch?v=aircAruvnKk) -``` - -### Let's propagate Forward and Backward ... with many epochs - - -```{figure} ../../../images/nn/epoch.gif ---- -name: 'epochnn' -width: 90% ---- -Epoch. [source](https://www.youtube.com/watch?v=aircAruvnKk) -``` - - -## Finally, we have a neural network model that works well! - -```{figure} ../../../images/nn/dog.gif ---- -name: 'dog' -width: 90% ---- -A neural network for dog/cat classification. [source](https://medium.com/the-21st-century/solution-to-failing-convolutional-neural-networks-ff8857b2eaf0) -``` - -## Activation functions - -```{figure} ../../../images/nn/activation_functions.gif ---- -name: 'activation_functions' -width: 90% ---- -activation_functions. [source](https://theffork.com/activation-functions-in-neural-networks/) -``` - -### Why using activation function at all? -- To introduce **non-linarities**. 
-- Without them, our Neural Network would be a simple linear model! - -## Output layer - -- Regression tasks require linear activation functions -- Classification tasks requires softmax/sigmoid -- Softmax turns numbers into probabilities that sums to 1 - - - -## Neural network vs human brain - -```{figure} ../../../images/nn/nn-872d.gif ---- -name: 'nn-872d' -width: 90% ---- -Human brain neurons. [source](https://www.kdnuggets.com/2019/10/introduction-artificial-neural-networks.html) -``` - - -## Conclusion - -### All in all, neural network is nothing more than -- multiple linear regressions stacked together -- non-linear functions: the activation functions - -### Mathematically speaking - -Neural networks are universal approximations: with just one hidden layer, they can approximate any continuous function with arbitrary precision. - -This does not guarantee that you can easily find these optimal parameters of your model! - -It may require extremely large sample size or computing power. - - -### Tips -- First layer needs the size of your input -- Last layer's number of neurons equals the output dimension -- Last layer's activation is Linear (regression) or softmax/sigmoid (classification) -- Almost always start with the relu activation function - if it is not the last layer - - diff --git a/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/nn-hands-on.md b/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/nn-hands-on.md deleted file mode 100644 index 78c4a699c..000000000 --- a/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/nn-hands-on.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -jupytext: - cell_metadata_filter: -all - formats: md:myst - text_representation: - extension: .md - format_name: myst - format_version: 0.13 - jupytext_version: 1.11.5 -kernelspec: - display_name: Python 3 - language: python - name: python3 ---- - - -# Hands on neural network - -## Neural network with Tensorflow : Face recognition - -```{code-cell} -import numpy as np -import matplotlib.pyplot as plt -import pandas as pd - -# Load data -from sklearn.datasets import fetch_lfw_people -faces = fetch_lfw_people(min_faces_per_person=200, resize=0.25) - -# 766 images of 31 * 23 pixel black & white -print(faces.images.shape) -``` - -```{code-cell} -# 2 different target classes -np.unique(faces.target) -``` -Let's visualize some faces: - - -```{code-cell} -fig = plt.figure(figsize=(13,10)) -for i in range(15): - plt.subplot(5, 5, i + 1) - plt.title(faces.target_names[faces.target[i]], size=12) - plt.imshow(faces.images[i], cmap=plt.cm.gray) - plt.xticks(()); plt.yticks(()) -``` - -### Minimal preprocessing - -```{code-cell} -# Flatten our 766 images -X = faces.images.reshape(766, 31*23) -X.shape -``` - - - -```{code-cell} -y = faces.target -y.shape -``` - - - - -```{code-cell} -# Train test split -from sklearn.model_selection import train_test_split -X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=3) -``` - - - - -```{code-cell} -# Standardize -from sklearn.preprocessing import StandardScaler -scaler = StandardScaler() -X_train = scaler.fit_transform(X_train) -``` - -### Simple model with two hidden layers - -```{code-cell} -from tensorflow.keras.models import Sequential -from tensorflow.keras import layers - -# Model definition -model = Sequential() -model.add(layers.Dense(20, activation='relu', input_dim=713)) -model.add(layers.Dense(10, activation='relu')) -model.add(layers.Dense(1, activation='sigmoid')) -model.summary() -``` - - - 
-```{code-cell} -model.compile( - optimizer='adam', - loss='binary_crossentropy', - metrics = ['accuracy']) - -model.fit(X_train, y_train, batch_size=16, epochs=20) -``` - - -### Evaluate performance - -```{code-cell} -model.evaluate(scaler.transform(X_test), y_test) -# returns [loss, metrics] -``` - -Is it good? What's our baseline? - - - - -```{code-cell} -pd.Series(y).value_counts() - -``` - - - - -```{code-cell} -# Baseline score -530 / (530+236) -``` - - - -### Let's check our predictions! - - -```{code-cell} -# Predicted probabilities -model.predict(scaler.transform(X_test)) -``` - - - - -## Linear regression with Tensorflow - -## Logistic regression with Tensorflow - diff --git a/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/nn-implementation.md b/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/nn-implementation.md deleted file mode 100644 index 154240953..000000000 --- a/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/nn-implementation.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -jupytext: - cell_metadata_filter: -all - formats: md:myst - text_representation: - extension: .md - format_name: myst - format_version: 0.13 - jupytext_version: 1.11.5 -kernelspec: - display_name: Python 3 - language: python - name: python3 ---- - -# Neural network implementation from scratch - - -
- - - - - diff --git a/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/recurrent-neural-networks.md b/open-machine-learning-jupyter-book/ml-fundamentals/neural-network/recurrent-neural-networks.md deleted file mode 100644 index e69de29bb..000000000 From 4bacc38d3f229cd00eab8ed2c88046e178c3c9cc Mon Sep 17 00:00:00 2001 From: Fly <2946399650fly@gmail> Date: Wed, 27 Dec 2023 14:24:10 +0800 Subject: [PATCH 3/5] update --- .../deep-learning/nn.ipynb | 58 ++++++++----------- 1 file changed, 23 insertions(+), 35 deletions(-) diff --git a/open-machine-learning-jupyter-book/deep-learning/nn.ipynb b/open-machine-learning-jupyter-book/deep-learning/nn.ipynb index b06b0ef54..35dcc4140 100644 --- a/open-machine-learning-jupyter-book/deep-learning/nn.ipynb +++ b/open-machine-learning-jupyter-book/deep-learning/nn.ipynb @@ -88,14 +88,11 @@ "id": "bd6cd958-b6af-4f18-ad97-83307a697c7c", "metadata": {}, "source": [ - "### Input\n", - "It is the set of features that are fed into the model for the learning process. For example, the input in object detection can be an array of pixel values pertaining to an image.\n", + "**Input:** It is the set of features that are fed into the model for the learning process. For example, the input in object detection can be an array of pixel values pertaining to an image.\n", "\n", - "### Weight\n", - "Its main function is to give importance to those features that contribute more towards the learning. It does so by introducing scalar multiplication between the input value and the weight matrix. For example, a negative word would impact the decision of the sentiment analysis model more than a pair of neutral words.\n", + "**Weight:** Its main function is to give importance to those features that contribute more towards the learning. It does so by introducing scalar multiplication between the input value and the weight matrix. For example, a negative word would impact the decision of the sentiment analysis model more than a pair of neutral words.\n", "\n", - "### Transfer function\n", - "The job of the transfer function is to combine multiple inputs into one output value so that the activation function can be applied. It is done by a simple summation of all the inputs to the transfer function. " + "**Transfer function:** The job of the transfer function is to combine multiple inputs into one output value so that the activation function can be applied. It is done by a simple summation of all the inputs to the transfer function. " ] }, { @@ -103,10 +100,10 @@ "id": "53f32b90-5cc1-41fb-93a5-7776d9b3d9c2", "metadata": {}, "source": [ - "### Activation Function\n", - "It introduces non-linearity in the working of perceptrons to consider varying linearity with the inputs. Without this, the output would just be a linear combination of input values and would not be able to introduce non-linearity in the network.\n", + "**Activation Function:** It introduces non-linearity in the working of perceptrons to consider varying linearity with the inputs. Without this, the output would just be a linear combination of input values and would not be able to introduce non-linearity in the network.\n", "In the realm of deep learning, several common activation functions are widely used due to their impact on network training and performance. Here are some prevalent activation functions:\n", - "#### 1. ReLU (Rectified Linear Activation):\n", + "\n", + "**1. ReLU (Rectified Linear Activation):**\n", "ReLU is one of the most commonly used activation functions. 
It sets negative input values to zero and keeps positive values unchanged:\n", "\n", "$$\n", @@ -114,7 +111,8 @@ "$$\n", "\n", "ReLU effectively mitigates the vanishing gradient problem and computes faster. However, it can cause neurons to \"die\" by setting negative outputs to zero.\n", - "#### 2. Sigmoid Function:\n", + "\n", + "**2. Sigmoid Function:**\n", "The sigmoid function maps inputs to the range (0, 1):\n", "\n", "$$\n", @@ -122,7 +120,8 @@ "$$\n", "\n", "It's often used in binary classification problems at the output layer but suffers from the vanishing gradient problem in deep neural networks.\n", - "#### 3. Tanh Function:\n", + "\n", + "**3. Tanh Function:**\n", "Tanh function maps inputs to the range (-1, 1):\n", "\n", "$$\n", @@ -130,7 +129,8 @@ "$$\n", "\n", "Similar to the sigmoid function but outputs in the range (-1, 1), aiding zero-centered data. However, it also faces the issue of vanishing gradients.\n", - "#### 4. Leaky ReLU:\n", + "\n", + "**4. Leaky ReLU:**\n", "Leaky ReLU is an improvement over ReLU, allowing a small slope for negative input values:\n", "\n", "$$\n", @@ -138,7 +138,8 @@ "$$\n", "\n", "Where $\\alpha$ is a small positive number. This function addresses the \"neuron death\" problem in ReLU.\n", - "#### 5. ELU (Exponential Linear Unit):\n", + "\n", + "**5. ELU (Exponential Linear Unit):**\n", "ELU is similar to Leaky ReLU but allows a slightly negative slope for negative values and tends to zero-center:\n", "\n", "$$\n", @@ -155,7 +156,7 @@ "id": "5137abb6-81fa-4ab6-aa57-28d215074735", "metadata": {}, "source": [ - "### Bias\n", + "**Bias:**\n", "The role of bias is to shift the value produced by the activation function. Its role is similar to the role of a constant in a linear function.\n", "\n", "When multiple neurons are stacked together in a row, they constitute a layer, and multiple layers piled next to each other are called a multi-layer neural network.\n", @@ -176,18 +177,15 @@ "id": "bfb44613-7e13-41a9-a621-bec3327a9ed2", "metadata": {}, "source": [ - "### Input Layer\n", - "\n", + "**Input Layer:**\n", "The data that we feed to the model is loaded into the input layer from external sources like a CSV file or a web service. It is the only visible layer in the complete Neural Network architecture that passes the complete information from the outside world without any computation.\n", "\n", - "### Hidden Layers\n", - "\n", + "**Hidden Layers:**\n", "The hidden layers are what makes deep learning what it is today. They are intermediate layers that do all the computations and extract the features from the data.\n", "\n", "There can be multiple interconnected hidden layers that account for searching different hidden features in the data. For example, in image processing, the first hidden layers are responsible for higher-level features like edges, shapes, or boundaries. On the other hand, the later hidden layers perform more complicated tasks like identifying complete objects (a car, a building, a person).\n", "\n", - "### Output Layer\n", - "\n", + "**Output Layer:**\n", "The output layer takes input from preceding hidden layers and comes to a final prediction based on the model’s learnings. It is the most important layer where we get the final result.\n", "\n", "In the case of classification/regression models, the output layer generally has a single node. However, it is completely problem-specific and dependent on the way the model was built." 
@@ -214,7 +212,7 @@ "id": "6a8abe53-1148-41cc-9651-2bcb0a3e916e", "metadata": {}, "source": [ - "### The Perceptron\n", + "**The Perceptron:**\n", "Perceptron is the simplest Neural Network architecture.\n", "\n", "It is a type of Neural Network that takes a number of inputs, applies certain mathematical operations on these inputs, and produces an output. It takes a vector of real values inputs, performs a linear combination of each attribute with the corresponding weight assigned to each of them.\n", @@ -229,7 +227,7 @@ "id": "02af3b8a-5251-4fff-8d16-87fbf1d7a87b", "metadata": {}, "source": [ - "### Feed-Forward Networks\n", + "**Feed-Forward Networks:**\n", "Perceptron represents how a single neuron works.\n", "\n", "But—\n", @@ -248,7 +246,7 @@ "id": "84f7dc96-ab00-4b00-827a-5790918ed462", "metadata": {}, "source": [ - "### Residual Networks (ResNet)\n", + "**Residual Networks (ResNet):**\n", "Now that you know more about the Feed-Forward Networks, one question might have popped up in your head—how to decide on the number of layers in our neural network architecture?\n", "\n", "A naive answer would be: The greater the number of hidden layers, the better is the learning process.\n", @@ -562,17 +560,7 @@ "source": [ "## Your turn! 🚀\n", "\n", - "TBD." - ] - }, - { - "cell_type": "markdown", - "id": "fc673adb-1d3a-4ea5-b9b6-c84b45953be0", - "metadata": {}, - "source": [ - "## Self study\n", - "\n", - "TBD." + "Try the exercises about [neural networks classify 15 fruits](../../assignments/deep-learning/nn-classify-15-fruits-assignment.ipynb) and [neural networks for classification](../../assignments/deep-learning/nn-for-classification-assignment.ipynb)." ] }, { @@ -590,7 +578,7 @@ "kernelspec": { "display_name": "open-machine-learning-jupyter-book", "language": "python", - "name": "open-machine-learning-jupyter-book" + "name": "python3" }, "language_info": { "codemirror_mode": { From a2996e194b04d7dd545518c74f1200839d6b774d Mon Sep 17 00:00:00 2001 From: Fly <2946399650fly@gmail> Date: Sat, 30 Dec 2023 16:46:34 +0800 Subject: [PATCH 4/5] update according to comments --- .../deep-learning/cnn/cnn.ipynb | 15 +- .../deep-learning/nn.ipynb | 408 +++++++----------- 2 files changed, 164 insertions(+), 259 deletions(-) diff --git a/open-machine-learning-jupyter-book/deep-learning/cnn/cnn.ipynb b/open-machine-learning-jupyter-book/deep-learning/cnn/cnn.ipynb index ff04796c5..29bb544d7 100644 --- a/open-machine-learning-jupyter-book/deep-learning/cnn/cnn.ipynb +++ b/open-machine-learning-jupyter-book/deep-learning/cnn/cnn.ipynb @@ -39,7 +39,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -47,7 +46,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -95,7 +93,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -107,7 +104,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -203,7 +199,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -546,7 +541,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -562,7 +556,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -698,7 +691,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -706,7 +698,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -866,7 +857,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": 
{}, "source": [ @@ -876,7 +866,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -889,7 +878,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -903,7 +891,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -929,7 +916,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.18" + "version": "3.9.13" } }, "nbformat": 4, diff --git a/open-machine-learning-jupyter-book/deep-learning/nn.ipynb b/open-machine-learning-jupyter-book/deep-learning/nn.ipynb index 35dcc4140..7818c0244 100644 --- a/open-machine-learning-jupyter-book/deep-learning/nn.ipynb +++ b/open-machine-learning-jupyter-book/deep-learning/nn.ipynb @@ -58,6 +58,14 @@ "From speech recognition and person recognition to healthcare and marketing, Neural Networks have been used in a varied set of domains." ] }, + { + "cell_type": "markdown", + "id": "cae8690c-88aa-4980-bff6-4b315ced4ca8", + "metadata": {}, + "source": [ + "

Image: Neurons in human brain
" + ] + }, { "cell_type": "markdown", "id": "cc2d8e9d-ffb2-4562-a180-92e5e9a87d79", @@ -80,7 +88,7 @@ "id": "11568845-e197-4e99-be19-27509431a4c1", "metadata": {}, "source": [ - "
Image: Neuron in Artificial Neural Network" + "

Image: Neuron in Artificial Neural Network
" ] }, { @@ -95,12 +103,27 @@ "**Transfer function:** The job of the transfer function is to combine multiple inputs into one output value so that the activation function can be applied. It is done by a simple summation of all the inputs to the transfer function. " ] }, + { + "cell_type": "markdown", + "id": "d87d58f8-445e-4381-aeb2-931dfc397992", + "metadata": {}, + "source": [ + "**Activation Function:** It introduces non-linearity in the working of perceptrons to consider varying linearity with the inputs. Without this, the output would just be a linear combination of input values and would not be able to introduce non-linearity in the network." + ] + }, + { + "cell_type": "markdown", + "id": "37a9359d-ade4-4218-aad8-eaba6712f77e", + "metadata": {}, + "source": [ + "

Image: Different Activation Functions
" + ] + }, { "cell_type": "markdown", "id": "53f32b90-5cc1-41fb-93a5-7776d9b3d9c2", "metadata": {}, "source": [ - "**Activation Function:** It introduces non-linearity in the working of perceptrons to consider varying linearity with the inputs. Without this, the output would just be a linear combination of input values and would not be able to introduce non-linearity in the network.\n", "In the realm of deep learning, several common activation functions are widely used due to their impact on network training and performance. Here are some prevalent activation functions:\n", "\n", "**1. ReLU (Rectified Linear Activation):**\n", @@ -169,7 +192,7 @@ "id": "19aeaa65-4f77-47cf-8c98-9ce0ba304908", "metadata": {}, "source": [ - "
Image: Multi-layer neural network" + "

Image: Multi-layer neural network
" ] }, { @@ -212,7 +235,7 @@ "id": "6a8abe53-1148-41cc-9651-2bcb0a3e916e", "metadata": {}, "source": [ - "**The Perceptron:**\n", + "### The Perceptron\n", "Perceptron is the simplest Neural Network architecture.\n", "\n", "It is a type of Neural Network that takes a number of inputs, applies certain mathematical operations on these inputs, and produces an output. It takes a vector of real values inputs, performs a linear combination of each attribute with the corresponding weight assigned to each of them.\n", @@ -227,7 +250,7 @@ "id": "02af3b8a-5251-4fff-8d16-87fbf1d7a87b", "metadata": {}, "source": [ - "**Feed-Forward Networks:**\n", + "### Feed-Forward Networks\n", "Perceptron represents how a single neuron works.\n", "\n", "But—\n", @@ -246,7 +269,7 @@ "id": "84f7dc96-ab00-4b00-827a-5790918ed462", "metadata": {}, "source": [ - "**Residual Networks (ResNet):**\n", + "### Residual Networks (ResNet)\n", "Now that you know more about the Feed-Forward Networks, one question might have popped up in your head—how to decide on the number of layers in our neural network architecture?\n", "\n", "A naive answer would be: The greater the number of hidden layers, the better is the learning process.\n", @@ -279,278 +302,173 @@ }, { "cell_type": "code", - "execution_count": 171, - "id": "7aaffb2a-8b9c-49e4-944d-884969408638", + "execution_count": 3, + "id": "ed7d3c46-b1dd-43c2-9e72-09c929bccba2", "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Epoch 1/100\n", - "25/25 [==============================] - 1s 1ms/step - loss: 0.6931 - accuracy: 0.5331\n", - "Epoch 2/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.6928 - accuracy: 0.5331\n", - "Epoch 3/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.6926 - accuracy: 0.5331\n", - "Epoch 4/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.6905 - accuracy: 0.5331\n", - "Epoch 5/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.6767 - accuracy: 0.5372\n", - "Epoch 6/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.6246 - accuracy: 0.8347\n", - "Epoch 7/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.5233 - accuracy: 0.8719\n", - "Epoch 8/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.4264 - accuracy: 0.8719\n", - "Epoch 9/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.3577 - accuracy: 0.8760\n", - "Epoch 10/100\n", - "25/25 [==============================] - 0s 2ms/step - loss: 0.3254 - accuracy: 0.8802\n", - "Epoch 11/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.3170 - accuracy: 0.8843\n", - "Epoch 12/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.3123 - accuracy: 0.8843\n", - "Epoch 13/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.3085 - accuracy: 0.8843\n", - "Epoch 14/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.3047 - accuracy: 0.8926\n", - "Epoch 15/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.2998 - accuracy: 0.8926\n", - "Epoch 16/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.2968 - accuracy: 0.8967\n", - "Epoch 17/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.2942 - accuracy: 0.9008\n", - "Epoch 18/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.2920 - accuracy: 0.9008\n", - "Epoch 
@@ -279,278 +302,173 @@ }, { "cell_type": "code", - "execution_count": 171, - "id": "7aaffb2a-8b9c-49e4-944d-884969408638", + "execution_count": 3, + "id": "ed7d3c46-b1dd-43c2-9e72-09c929bccba2", "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Epoch 1/100\n", - "25/25 [==============================] - 1s 1ms/step - loss: 0.6931 - accuracy: 0.5331\n", [... stale per-epoch training log for epochs 2-99 elided: loss falls from 0.6928 to 0.1388 while accuracy climbs from 0.5331 to 0.9545 ...] - "Epoch 100/100\n", - "25/25 [==============================] - 0s 1ms/step - loss: 0.1371 - accuracy: 0.9545\n", - "2/2 [==============================] - 0s 2ms/step\n", - "accuracy of the model: 0.8032786885245902\n" - ] - }, - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAhsAAAGdCAYAAAC7JrHl... [base64-encoded PNG of the superseded confusion-matrix heatmap elided] ...", - "text/plain": [ - "<Figure size 640x480 with 1 Axes>
" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], + "outputs": [], "source": [ + "# Import necessary libraries\n", "import numpy as np\n", "import pandas as pd\n", "import seaborn as sns\n", "import keras\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.preprocessing import StandardScaler\n", - "from sklearn.metrics import confusion_matrix\n", - "from sklearn.metrics import accuracy_score\n", + "from sklearn.metrics import confusion_matrix, accuracy_score\n", "from keras.models import Sequential\n", "from keras.layers import Dense\n", - "from ucimlrepo import fetch_ucirepo \n", - "%matplotlib inline\n", + "from ucimlrepo import fetch_ucirepo\n", + "%matplotlib inline" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "a3dc3e5b-a784-449d-b93f-336139d6d1f6", + "metadata": {}, + "outputs": [], + "source": [ + "# Fetch dataset from UCI ML Repository (Heart Disease dataset)\n", "heart_disease = fetch_ucirepo(id=45)\n", + "\n", + "# Suppress warnings about chained assignments in Pandas\n", "pd.options.mode.chained_assignment = None\n", + "\n", + "# Preprocess the dataset\n", + "# Extract features and targets\n", "X = heart_disease.data.features \n", "y = heart_disease.data.targets\n", - "y[y != 0] = 1\n", + "\n", + "# Convert multiclass labels to binary (0 or 1)\n", + "y[y != 0] = 1" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "ae66405b-04c8-4072-a09b-32a8f047ec8a", + "metadata": {}, + "outputs": [], + "source": [ + "# Perform one-hot encoding on categorical variables\n", "chest_pain = pd.get_dummies(X['cp'], prefix='cp', drop_first=True)\n", "X = pd.concat([X, chest_pain], axis=1)\n", "X.drop(['cp'], axis=1, inplace=True)\n", "\n", + "# More one-hot encoding\n", "sp = pd.get_dummies(X['slope'], prefix='slope')\n", "th = pd.get_dummies(X['thal'], prefix='thal')\n", - "rest_ecg = pd.get_dummies(X['restecg'], prefix='restecg')\n", - "\n", + "rest_ecg = pd.get_dummies(X['restecg'], prefix='restecg')" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "a165dfaa-9c7e-4b77-9678-4692ee8eb174", + "metadata": {}, + "outputs": [], + "source": [ + "# Concatenate encoded columns\n", "frames = [X, sp, th, rest_ecg]\n", "X = pd.concat(frames, axis=1)\n", - "X.drop(['slope', 'thal', 'restecg'], axis=1, inplace=True)\n", + "X.drop(['slope', 'thal', 'restecg'], axis=1, inplace=True)" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "0df1f6ea-0162-4ee2-9579-e8057781179f", + "metadata": {}, + "outputs": [], + "source": [ + "# Split dataset into training and testing sets\n", + "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)\n", "\n", - "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)\n", + "# Feature scaling using StandardScaler\n", "sc = StandardScaler()\n", "X_train = sc.fit_transform(X_train)\n", "X_test = sc.transform(X_test)\n", + "\n", + "# Handle missing values by forward filling\n", "X_train = pd.DataFrame(X_train).fillna(method='ffill')\n", - "X_test = pd.DataFrame(X_test).fillna(method='ffill')\n", + "X_test = pd.DataFrame(X_test).fillna(method='ffill')" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "824e4151-5dab-41cf-8c8d-b53f0448fa79", + "metadata": {}, + "outputs": [], + "source": [ + "# Build the neural network model\n", "classifier = Sequential()\n", "classifier.add(Dense(units=11, kernel_initializer='uniform', activation='relu', input_dim=21))\n", 
"classifier.add(Dense(units=10, kernel_initializer='uniform', activation='relu'))\n", "classifier.add(Dense(units=10, kernel_initializer='uniform', activation='relu'))\n", "classifier.add(Dense(units=5, kernel_initializer='uniform', activation='relu'))\n", - "classifier.add(Dense(units=1, kernel_initializer='uniform', activation='sigmoid'))\n", + "classifier.add(Dense(units=1, kernel_initializer='uniform', activation='sigmoid'))" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "6afa0b7b-80b0-431a-9f09-ee31aae1ef08", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "2/2 [==============================] - 0s 2ms/step\n" + ] + } + ], + "source": [ + "# Compile the model\n", "classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])\n", - "classifier.fit(X_train, y_train, batch_size=10, epochs=100)\n", - "y_pred = classifier.predict(X_test)\n", + "\n", + "# Train the model\n", + "classifier.fit(X_train, y_train, batch_size=10, epochs=100, verbose=0)\n", + "\n", + "# Predict on the test set\n", + "y_pred = classifier.predict(X_test)" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": "ecb94414-444d-4790-876e-2b96f1a711c7", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "accuracy of the model: 0.7868852459016393\n" + ] + }, + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAhsAAAGdCAYAAAC7JrHlAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjYuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8o6BhiAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAQdklEQVR4nO3cf3DXhX3H8fc3Ab4JIPEQTMDB1OKP1p7IgoIoWicnbndMt5vaW69FXWXO+uOMuprbD/xRhxU9LQNqRVG7tpOrdzqZblhRpl1ROBTRqXX+2MmYCY0ikYBRw3d/7MqagWiUdz4xPB53+eP7+Xzyvdc/uXvm8/0kpUqlUgkAgCRVRQ8AAPo3sQEApBIbAEAqsQEApBIbAEAqsQEApBIbAEAqsQEApBIbAECqAUUP+LXaCRcWPQFIsmn1/KInAElqPkFJuLMBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSGwBAKrEBAKQSG6Q474zjY9WS5mh9Ym60PjE3Vtx9WZxy3Jd2nD/3j46LZYsuidYn5sa2Z+ZH3dDaAtcCe9Idi26L8UccFjfMua7oKfQRYoMUG1rfib/+u3+MKV+7IY772txYserl+OnNs+KLBzdERMTgmoHxs1+8EHMXP1zwUmBPev65dXHvT++JQw89rOgp9CEDih5A//TQ4893e33VgqVx3hnHxzFHHhQvvtYS83+yIiIipjYeUsA6IMPWjo5o/vYVMfvq78SiH3y/6Dn0Ie5skK6qqhRnTG+MIbWD4ql1rxc9B0jyt9+5Jk444cSYfOyUoqfQx/T4zkZbW1ssXrw4Vq5cGS0tLRER0dDQEFOmTImzzz47Ro4cucdH8vl0xLjRseLuy6Jm0IDYsq0zzrpsUbz0WkvRs4AE//zQg/Hiiy/ET5bcW/QU+qAexcbq1atj+vTpMXjw4Jg2bVoceuihERHR2toa8+bNi+uvvz6WLVsWEydO3O37dHZ2RmdnZ7djle1dUaqq7uF8+rKX/7M1Jn11TtQNrY0/nDYhFl3z9Tjlm98THNDPtLz5Ztxw/XXxg0WLo1wuFz2HPqhUqVQqn/TiyZMnx/jx4+PWW2+NUqnU7VylUonzzz8/1q1bFytXrtzt+1x11VVx9dVXdztWXX90DBx1TA+m83nz4K0Xxmvr2+Ki6+7ZcWxq4yHx8O2XRMPUK2Lzlm0FriPTptXzi55AokeXPxKXXvytqK7+v18Yu7q6olQqRVVVVax+5rlu5+hfaj7BbYse3dl49tln46677topNCIiSqVSXHrppTFhwoSPfZ/m5uZoamrqdmz/qd/uyRQ+h6pKpSgP8kwy9DeTJk+Oe+9f2u3Y7L9sjgMPPjjO+dPzhAY9i42GhoZYtWpVHH744bs8v2rVqqivr//Y9ymXyzvdavMRSv9yzUV/EMv+7d9j/ZubYp8hNXHW702MEyYeEjMuWBgREfX77RP1+w2LL4wdERERXz5kdLzb8V6sb9kUm9q3Fjkd6KEhQ4bGIYcc2u1Y7eDBsW/dvjsdZ+/Uo9i4/PLLY9asWbFmzZo4+eSTd4RFa2trLF++PBYtWhQ33nhjylA+X0YOHxp3XPuNaBgxLDZveS+e/48NMeOChfHoUy9FRMQ3/3hq/NX5v7/j+kcWXxoREef9zd/Hj
[... remainder of the base64-encoded PNG of the new confusion-matrix heatmap elided ...] ...AAAASUVORK5CYII=", "text/plain": [ "<Figure size 640x480 with 1 Axes>
" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "# Create confusion matrix\n", "cm = confusion_matrix(y_test, y_pred.round())\n", - "sns.heatmap(cm,annot=True,cmap=\"Blues\",fmt=\"d\",cbar=False)\n", - "ac=accuracy_score(y_test, y_pred.round())\n", - "print('accuracy of the model: ',ac)" + "\n", + "# Visualize confusion matrix using seaborn heatmap\n", + "sns.heatmap(cm, annot=True, cmap=\"Blues\", fmt=\"d\", cbar=False)\n", + "\n", + "# Calculate and print accuracy score\n", + "ac = accuracy_score(y_test, y_pred.round())\n", + "print('accuracy of the model: ', ac)" ] }, { @@ -578,7 +496,7 @@ "kernelspec": { "display_name": "open-machine-learning-jupyter-book", "language": "python", - "name": "python3" + "name": "open-machine-learning-jupyter-book" }, "language_info": { "codemirror_mode": { From 0295ddf2b3aac4aba9005191f2eb103608be6014 Mon Sep 17 00:00:00 2001 From: Fly <2946399650fly@gmail> Date: Sun, 31 Dec 2023 16:35:15 +0800 Subject: [PATCH 5/5] update --- .../deep-learning/nn.ipynb | 75 +++++++++++++++---- 1 file changed, 59 insertions(+), 16 deletions(-) diff --git a/open-machine-learning-jupyter-book/deep-learning/nn.ipynb b/open-machine-learning-jupyter-book/deep-learning/nn.ipynb index 7818c0244..412ddb456 100644 --- a/open-machine-learning-jupyter-book/deep-learning/nn.ipynb +++ b/open-machine-learning-jupyter-book/deep-learning/nn.ipynb @@ -297,17 +297,18 @@ "metadata": {}, "source": [ "## Code\n", - "Now, let's train a neural network model for Heart Disease Classification as an example to help you better understand" + "Now, let's train a neural network model for Heart Disease Classification as an example to help you better understand.\n", + "\n", + "First, let's import the necessary libraries." ] }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 1, "id": "ed7d3c46-b1dd-43c2-9e72-09c929bccba2", "metadata": {}, "outputs": [], "source": [ - "# Import necessary libraries\n", "import numpy as np\n", "import pandas as pd\n", "import seaborn as sns\n", @@ -321,9 +322,17 @@ "%matplotlib inline" ] }, + { + "cell_type": "markdown", + "id": "214903a7-3974-4428-b550-2a053adde3b3", + "metadata": {}, + "source": [ + "Let's start by importing the dataset from the UCI ML Repository. The UCI ML Repository is a comprehensive resource for machine learning datasets. Within this repository is a dataset specifically focused on heart disease, containing diverse data from multiple cardiac patients.However, it encompasses various types of heart conditions. For our task of building a binary classification neural network, we'll unify the heart disease types into a single category." + ] + }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 2, "id": "a3dc3e5b-a784-449d-b93f-336139d6d1f6", "metadata": {}, "outputs": [], @@ -343,9 +352,17 @@ "y[y != 0] = 1" ] }, + { + "cell_type": "markdown", + "id": "8e9ee95f-8cc1-47c6-9a01-7bf9deab1382", + "metadata": {}, + "source": [ + "Now, let's perform specific encoding on the 'cp', 'slope', 'thal', and 'restecg' columns in the dataset to facilitate the handling of categorical data by the neural network model. 
" + ] + }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 3, "id": "ae66405b-04c8-4072-a09b-32a8f047ec8a", "metadata": {}, "outputs": [], @@ -358,25 +375,25 @@ "# More one-hot encoding\n", "sp = pd.get_dummies(X['slope'], prefix='slope')\n", "th = pd.get_dummies(X['thal'], prefix='thal')\n", - "rest_ecg = pd.get_dummies(X['restecg'], prefix='restecg')" + "rest_ecg = pd.get_dummies(X['restecg'], prefix='restecg')\n", + "\n", + "# Concatenate encoded columns\n", + "frames = [X, sp, th, rest_ecg]\n", + "X = pd.concat(frames, axis=1)\n", + "X.drop(['slope', 'thal', 'restecg'], axis=1, inplace=True)" ] }, { - "cell_type": "code", - "execution_count": 7, - "id": "a165dfaa-9c7e-4b77-9678-4692ee8eb174", + "cell_type": "markdown", + "id": "9e3ac8a7-bcb9-4a3a-9198-6d2023763ba4", "metadata": {}, - "outputs": [], "source": [ - "# Concatenate encoded columns\n", - "frames = [X, sp, th, rest_ecg]\n", - "X = pd.concat(frames, axis=1)\n", - "X.drop(['slope', 'thal', 'restecg'], axis=1, inplace=True)" + "Let's proceed with the final steps in data processing: splitting the data into training and testing sets, normalizing the data, and filling missing values with preceding data." ] }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 4, "id": "0df1f6ea-0162-4ee2-9579-e8057781179f", "metadata": {}, "outputs": [], @@ -394,9 +411,19 @@ "X_test = pd.DataFrame(X_test).fillna(method='ffill')" ] }, + { + "cell_type": "markdown", + "id": "649a025e-84b1-4bcb-b008-05cbf1a551a9", + "metadata": {}, + "source": [ + "Okay, we can now build the neural network structure and choose the activation function for each layer.\n", + "\n", + "From my code, you can see I've constructed an input layer with 11 units, limiting the input features to 21. There are three hidden layers, and an output layer with just one unit, using the sigmoid activation function. It will directly output the probability of a patient having heart disease." + ] + }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 5, "id": "824e4151-5dab-41cf-8c8d-b53f0448fa79", "metadata": {}, "outputs": [], @@ -410,6 +437,14 @@ "classifier.add(Dense(units=1, kernel_initializer='uniform', activation='sigmoid'))" ] }, + { + "cell_type": "markdown", + "id": "6ca77548-cbe4-4daf-86e1-749f8aee3a66", + "metadata": {}, + "source": [ + "It is time for training our neural network." + ] + }, { "cell_type": "code", "execution_count": 10, @@ -435,6 +470,14 @@ "y_pred = classifier.predict(X_test)" ] }, + { + "cell_type": "markdown", + "id": "399434e5-aa76-4926-8c2f-f1394c427a6f", + "metadata": {}, + "source": [ + "Let's examine the model's confusion matrix and accuracy to assess its performance." + ] + }, { "cell_type": "code", "execution_count": 11,