diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/README.md" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/README.md" index b28a0c16..019411d7 100644 --- "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/README.md" +++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/README.md" @@ -1,7 +1,51 @@ # 赛题一:遗失宠物的智能寻找 -伴随着人们物质生活水平的提高,越来越多的家庭选择饲养小动物,它们不仅是家庭的宠物,很多人也将他们视作家庭成员之一,给予百般宠爱和呵护。然而即便如此,意外还是难免发生——宠物的意外走失给很多宠物家庭带来了困扰,满大街发传单、贴广告不仅浪费时间,成效也甚微。 +* **作品介绍:** -如果可以通过人工智能的方式解决这一难题,相信将成为不少养宠家庭的福音——“狗脸识别”、智能追踪设备、“离家出走”路线预测…… + 伴随着人们物质生活水平的提高,宠物狗在人们生活中扮演着越来越重要的角色,它们不仅是家庭的宠物,很多人也将他们视作家庭成员之一,给予百般宠爱和呵护。然而即便如此,意外还是难免发生——宠物的意外走失给很多宠物家庭带来了困扰,满大街发传单、贴广告不仅浪费时间,成效也甚微。在实际的生活中,宠物的种类多种多样,每种宠物也都有着不少的品种。以宠物狗为例,有很多品种的狗狗长相都相当接近,连经常接触狗的专家也不太好分辨。所以,当宠物丢失的时候,确认宠物的品种就变成了一大难题。所以为了提高辨别宠物种类的能力,本项目以常见宠物种类——狗做为研究对象,使用[斯坦福的犬类数据集](http://vision.stanford.edu/aditya86/ImageNetDogs/)(包括常见的120个品种的20580张狗的图片)进行训练,得到了一个识别率颇高的犬类分类模型。该模型使用四种网络对宠物狗图片进行特征提取,然后将提取到的特征进行融合,使用DNN网络进行分类。实验证明,该方法能有效地对宠物狗进行分类。 + +* **作品截图:** + + 1.预测结果和实际结果的对比(每一个数字代表宠物狗的一个种类) + + ![1](https://github.com/ZxfBugProgrammer/aws-hackathon-2020/blob/master/1%20%E9%81%97%E5%A4%B1%E5%AE%A0%E7%89%A9%E7%9A%84%E6%99%BA%E8%83%BD%E5%AF%BB%E6%89%BE/%E7%8B%97%E7%8B%97%E7%A7%8D%E7%B1%BB%E8%AF%86%E5%88%AB%20-%20Sunburst/src/1.png) + + 2.预测的正确率 + + ![2](https://github.com/ZxfBugProgrammer/aws-hackathon-2020/blob/master/1%20%E9%81%97%E5%A4%B1%E5%AE%A0%E7%89%A9%E7%9A%84%E6%99%BA%E8%83%BD%E5%AF%BB%E6%89%BE/%E7%8B%97%E7%8B%97%E7%A7%8D%E7%B1%BB%E8%AF%86%E5%88%AB%20-%20Sunburst/src/2.png) + + 3.以概率的形式输出预测结果 + + ![3](https://github.com/ZxfBugProgrammer/aws-hackathon-2020/blob/master/1%20%E9%81%97%E5%A4%B1%E5%AE%A0%E7%89%A9%E7%9A%84%E6%99%BA%E8%83%BD%E5%AF%BB%E6%89%BE/%E7%8B%97%E7%8B%97%E7%A7%8D%E7%B1%BB%E8%AF%86%E5%88%AB%20-%20Sunburst/src/3.png) + +* **安装、编译指南:** + + 1.下载[斯坦福的犬类数据集](http://vision.stanford.edu/aditya86/ImageNetDogs/)(下载images.tar文件)存放到 “狗狗种类识别 - Sunburst” 文件夹下,并进行解压。解压之后会在该目录下生成“Images”文件夹。 + + 2.安装代码运行所需要的环境: + + ```python + numpy + pandas + tensorflow-gpu==1.15.1 + matplotlib + sklearn + tqdm + keras==2.3.1 + ``` + 3.使用juputer note打开code.ipynb,依次运行每个单元格进行训练 + + 4.传入新的图片数据,调用相关函数进行预测 + +* 团队介绍: + + 团队名称:Sunburst + + 团队成员:赵现锋 + + 联系方式:17864211005 + +* 使用到的 AWS 技术: + + 使用Amazon SageMaker提供的GPU对模型进行训练。 -请以“遗失宠物的智能寻找”为主题,利用人工智能技术完成一款产品(软件或硬件)的开发。 diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/\347\213\227\347\213\227\347\247\215\347\261\273\350\257\206\345\210\253 - Sunburst/code.ipynb" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/\347\213\227\347\213\227\347\247\215\347\261\273\350\257\206\345\210\253 - Sunburst/code.ipynb" new file mode 100644 index 00000000..05464de5 --- /dev/null +++ "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/\347\213\227\347\213\227\347\247\215\347\261\273\350\257\206\345\210\253 - Sunburst/code.ipynb" @@ -0,0 +1,667 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "%matplotlib inline" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [ + { + "name": 
"stderr", + "output_type": "stream", + "text": [ + "Using TensorFlow backend.\n" + ] + } + ], + "source": [ + "#引入需要的包\n", + "import numpy as np\n", + "import pandas as pd\n", + "import os\n", + "import tensorflow as tf\n", + "import matplotlib.pyplot as plt \n", + "import matplotlib.image as mpimg\n", + "from sklearn.model_selection import train_test_split\n", + "from tqdm import tqdm\n", + "np.random.seed(42)\n", + "\n", + "import keras\n", + "from keras.preprocessing.image import ImageDataGenerator\n", + "from keras.models import Model\n", + "from keras.layers import BatchNormalization, Dense, GlobalAveragePooling2D, Lambda, Dropout, InputLayer, Input\n", + "from keras.utils import to_categorical\n", + "from tensorflow.keras.preprocessing.image import load_img" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "120\n" + ] + }, + { + "data": { + "text/plain": [ + "['affenpinscher',\n", + " 'afghan_hound',\n", + " 'african_hunting_dog',\n", + " 'airedale',\n", + " 'american_staffordshire_terrier',\n", + " 'appenzeller',\n", + " 'australian_terrier',\n", + " 'basenji',\n", + " 'basset',\n", + " 'beagle']" + ] + }, + "execution_count": 3, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "data_dir = './Images'\n", + "dog_breeds = []\n", + "for name in os.listdir(data_dir):\n", + " dog_breeds.append(name.split('-')[1].lower())\n", + "dog_breeds = sorted(dog_breeds)\n", + "n_classes = len(dog_breeds)\n", + "print(n_classes)\n", + "dog_breeds[:10]" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "class_to_num = dict(zip(dog_breeds, range(n_classes)))" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "20580\n" + ] + } + ], + "source": [ + "data_size = sum([len(files) for root,dirs,files in os.walk(data_dir)])\n", + "print(data_size)" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [], + "source": [ + "def images_to_array(data_dir, img_size = (224,224,3)):\n", + " '''\n", + " 1- 从数据集的目录中读取图像\n", + " 2- 将图像大小进行调整,并存放在一个numpy数组里\n", + " 3- 读取图像对应的品种信息\n", + " 4- 将品种信息转化为One hot编码\n", + " 5- 将图像信息和品种信息进行随机打乱\n", + " '''\n", + " X = np.zeros([data_size, img_size[0], img_size[1], img_size[2]], dtype=np.uint8)\n", + " y = np.zeros([data_size,1], dtype=np.uint8)\n", + " #read data and lables.\n", + " cnt_dir = 0\n", + " cnt_file = 0\n", + " for root,dirs,files in os.walk(data_dir):\n", + " for now_file in files:\n", + " img_dir = os.path.join(root, now_file)\n", + " img_pixels = load_img(img_dir, target_size=img_size)\n", + " X[cnt_file] = img_pixels\n", + "\n", + " image_breed = root.split('-')[1].lower()\n", + " y[cnt_file] = class_to_num[image_breed]\n", + " cnt_file += 1\n", + " print(\"%3d / %3d , %5d / %5d done! \"%(cnt_dir,n_classes,cnt_file,data_size),end='\\r')\n", + " cnt_dir += 1\n", + " print(\"%3d / %3d , %5d / %5d done! \"%(cnt_dir,n_classes,cnt_file,data_size),end='\\r')\n", + " \n", + " print()\n", + " #One hot encoder\n", + " print('To one hot encoding! ',end='\\r')\n", + " y = to_categorical(y)\n", + " print('one hot encoding done! ',end='\\r')\n", + " #shuffle \n", + " print('shuffling! ',end='\\r')\n", + " ind = np.random.permutation(data_size)\n", + " X = X[ind]\n", + " y = y[ind]\n", + " print('shuffling done! 
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "120\n"
+     ]
+    },
+    {
+     "data": {
+      "text/plain": [
+       "['affenpinscher',\n",
+       " 'afghan_hound',\n",
+       " 'african_hunting_dog',\n",
+       " 'airedale',\n",
+       " 'american_staffordshire_terrier',\n",
+       " 'appenzeller',\n",
+       " 'australian_terrier',\n",
+       " 'basenji',\n",
+       " 'basset',\n",
+       " 'beagle']"
+      ]
+     },
+     "execution_count": 3,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "# Collect the breed names from the dataset's directory names\n",
+    "data_dir = './Images'\n",
+    "dog_breeds = []\n",
+    "for name in os.listdir(data_dir):\n",
+    "    dog_breeds.append(name.split('-')[1].lower())\n",
+    "dog_breeds = sorted(dog_breeds)\n",
+    "n_classes = len(dog_breeds)\n",
+    "print(n_classes)\n",
+    "dog_breeds[:10]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Map each breed name to a numeric class id\n",
+    "class_to_num = dict(zip(dog_breeds, range(n_classes)))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "20580\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Count every image file under the dataset directory\n",
+    "data_size = sum([len(files) for root,dirs,files in os.walk(data_dir)])\n",
+    "print(data_size)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def images_to_array(data_dir, img_size = (224,224,3)):\n",
+    "    '''\n",
+    "    1- Read the images from the dataset directory\n",
+    "    2- Resize them and store them in a numpy array\n",
+    "    3- Read the breed label of each image\n",
+    "    4- One-hot encode the breed labels\n",
+    "    5- Shuffle the images together with their labels\n",
+    "    '''\n",
+    "    X = np.zeros([data_size, img_size[0], img_size[1], img_size[2]], dtype=np.uint8)\n",
+    "    y = np.zeros([data_size,1], dtype=np.uint8)\n",
+    "    # Read the data and labels.\n",
+    "    cnt_dir = 0\n",
+    "    cnt_file = 0\n",
+    "    # Note: os.walk also yields the root directory, so cnt_dir ends at n_classes + 1.\n",
+    "    for root,dirs,files in os.walk(data_dir):\n",
+    "        for now_file in files:\n",
+    "            img_dir = os.path.join(root, now_file)\n",
+    "            img_pixels = load_img(img_dir, target_size=img_size)\n",
+    "            X[cnt_file] = img_pixels\n",
+    "\n",
+    "            image_breed = root.split('-')[1].lower()\n",
+    "            y[cnt_file] = class_to_num[image_breed]\n",
+    "            cnt_file += 1\n",
+    "            print(\"%3d / %3d , %5d / %5d done! \"%(cnt_dir,n_classes,cnt_file,data_size),end='\\r')\n",
+    "        cnt_dir += 1\n",
+    "        print(\"%3d / %3d , %5d / %5d done! \"%(cnt_dir,n_classes,cnt_file,data_size),end='\\r')\n",
+    "    \n",
+    "    print()\n",
+    "    # One-hot encode the labels\n",
+    "    print('To one hot encoding!       ',end='\\r')\n",
+    "    y = to_categorical(y)\n",
+    "    print('one hot encoding done!     ',end='\\r')\n",
+    "    # Shuffle\n",
+    "    print('shuffling!                 ',end='\\r')\n",
+    "    ind = np.random.permutation(data_size)\n",
+    "    X = X[ind]\n",
+    "    y = y[ind]\n",
+    "    print('shuffling done!            ',end='\\r')\n",
+    "    print('To one hot encoding and shuffling done ')\n",
+    "    print('Output Data Size : ' ,X.shape)\n",
+    "    print('Output Label Size : ' ,y.shape)\n",
+    "    return X, y"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "121 / 120 , 20580 / 20580 done! \n",
+      "To one hot encoding and shuffling done \n",
+      "Output Data Size :  (20580, 331, 331, 3)\n",
+      "Output Label Size :  (20580, 120)\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Use an image size of (331,331,3), the input resolution expected by NASNetLarge\n",
+    "img_size = (331,331,3)\n",
+    "X, y = images_to_array(data_dir, img_size)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "train Data Size :  14406\n",
+      "test Data Size :  6174\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Hold out 30% of the data for testing\n",
+    "X_train,X_test,y_train,y_test= train_test_split(X,y,test_size=0.3,random_state=0)\n",
+    "print(\"train Data Size : \",X_train.shape[0])\n",
+    "print(\"test Data Size : \",X_test.shape[0])"
+   ]
+  },
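+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Editor's sketch (not part of the original run): sanity-check the loaded\n",
+    "# data by showing one training image with its decoded breed label; this\n",
+    "# also puts the matplotlib imports above to use.\n",
+    "plt.imshow(X_train[0])\n",
+    "plt.title(dog_breeds[int(y_train[0].argmax())])\n",
+    "plt.axis('off')\n",
+    "plt.show()"
+   ]
+  },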
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Free the memory held by the full arrays; only the split sets are needed now\n",
+    "del X,y"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def get_features(model_name, data_preprocessor, input_size, data):\n",
+    "    '''\n",
+    "    1- Build a feature extractor from a pretrained network\n",
+    "    2- Return the features it extracts from the data\n",
+    "    '''\n",
+    "    # Prepare the pipeline.\n",
+    "    input_layer = Input(input_size)\n",
+    "    preprocessor = Lambda(data_preprocessor)(input_layer)\n",
+    "    base_model = model_name(weights='imagenet', include_top=False,\n",
+    "                            input_shape=input_size)(preprocessor)\n",
+    "    avg = GlobalAveragePooling2D()(base_model)\n",
+    "    feature_extractor = Model(inputs = input_layer, outputs = avg)\n",
+    "    # Extract the features.\n",
+    "    feature_maps = feature_extractor.predict(data, batch_size=128, verbose=1)\n",
+    "    print('Feature maps shape: ', feature_maps.shape)\n",
+    "    return feature_maps"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\n",
+      "Instructions for updating:\n",
+      "If using Keras pass *_constraint arguments to layers.\n",
+      "WARNING:tensorflow:From /home/mist/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4070: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.\n",
+      "\n",
+      "WARNING:tensorflow:From /home/mist/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4074: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.\n",
+      "\n",
+      "WARNING:tensorflow:From /home/mist/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n",
+      "\n",
+      "14406/14406 [==============================] - 52s 4ms/step\n",
+      "Feature maps shape:  (14406, 2048)\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Use InceptionV3 as a feature extractor\n",
+    "from keras.applications.inception_v3 import InceptionV3, preprocess_input\n",
+    "inception_preprocessor = preprocess_input\n",
+    "inception_features = get_features(InceptionV3,\n",
+    "                                  inception_preprocessor,\n",
+    "                                  img_size, X_train)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "14406/14406 [==============================] - 71s 5ms/step\n",
+      "Feature maps shape:  (14406, 2048)\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Use Xception as a feature extractor\n",
+    "from keras.applications.xception import Xception, preprocess_input\n",
+    "xception_preprocessor = preprocess_input\n",
+    "xception_features = get_features(Xception,\n",
+    "                                 xception_preprocessor,\n",
+    "                                 img_size, X_train)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "14406/14406 [==============================] - 192s 13ms/step\n",
+      "Feature maps shape:  (14406, 4032)\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Use NASNetLarge as a feature extractor\n",
+    "from keras.applications.nasnet import NASNetLarge, preprocess_input\n",
+    "nasnet_preprocessor = preprocess_input\n",
+    "nasnet_features = get_features(NASNetLarge,\n",
+    "                               nasnet_preprocessor,\n",
+    "                               img_size, X_train)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 14,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.7/inception_resnet_v2_weights_tf_dim_ordering_tf_kernels_notop.h5\n",
+      "219062272/219055592 [==============================] - 144s 1us/step\n",
+      "14406/14406 [==============================] - 99s 7ms/step\n",
+      "Feature maps shape:  (14406, 1536)\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Use InceptionResNetV2 as a feature extractor\n",
+    "from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input\n",
+    "inc_resnet_preprocessor = preprocess_input\n",
+    "inc_resnet_features = get_features(InceptionResNetV2,\n",
+    "                                   inc_resnet_preprocessor,\n",
+    "                                   img_size, X_train)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 15,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Free the memory held by the training images\n",
+    "del X_train"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Final feature maps shape (14406, 9664)\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Fuse the four feature vectors into one 9664-dimensional vector per image\n",
+    "final_features = np.concatenate([inception_features,\n",
+    "                                 xception_features,\n",
+    "                                 nasnet_features,\n",
+    "                                 inc_resnet_features,], axis=-1)\n",
+    "print('Final feature maps shape', final_features.shape)"
+   ]
+  },
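+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Editor's sketch (not part of the original run): feature extraction costs\n",
+    "# several minutes of GPU time, so optionally cache the fused features\n",
+    "# (~0.5 GB on disk) and reload them with np.load when re-tuning the classifier.\n",
+    "np.save('final_features.npy', final_features)\n",
+    "np.save('y_train.npy', y_train)"
+   ]
+  },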
+  {
+   "cell_type": "code",
+   "execution_count": 17,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from keras.callbacks import EarlyStopping\n",
+    "# Prepare the callbacks: stop when val_loss stalls and keep the best weights\n",
+    "EarlyStop_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)\n",
+    "my_callback=[EarlyStop_callback]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 18,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "WARNING:tensorflow:Large dropout rate: 0.7 (>0.5). In TensorFlow 2.x, dropout() uses dropout rate instead of keep_prob. Please ensure that this is intended.\n",
+      "Train on 12965 samples, validate on 1441 samples\n",
+      "Epoch 1/60\n",
+      "12965/12965 [==============================] - 5s 396us/step - loss: 1.3162 - accuracy: 0.7753 - val_loss: 0.1831 - val_accuracy: 0.9403\n",
+      "Epoch 2/60\n",
+      "12965/12965 [==============================] - 1s 52us/step - loss: 0.1698 - accuracy: 0.9440 - val_loss: 0.1643 - val_accuracy: 0.9480\n",
+      "Epoch 3/60\n",
+      "12965/12965 [==============================] - 1s 52us/step - loss: 0.1435 - accuracy: 0.9530 - val_loss: 0.1621 - val_accuracy: 0.9452\n",
+      "Epoch 4/60\n",
+      "12965/12965 [==============================] - 1s 52us/step - loss: 0.1252 - accuracy: 0.9602 - val_loss: 0.1635 - val_accuracy: 0.9507\n",
+      "Epoch 5/60\n",
+      "12965/12965 [==============================] - 1s 53us/step - loss: 0.1159 - accuracy: 0.9603 - val_loss: 0.1600 - val_accuracy: 0.9473\n",
+      "Epoch 6/60\n",
+      "12965/12965 [==============================] - 1s 51us/step - loss: 0.1015 - accuracy: 0.9665 - val_loss: 0.1630 - val_accuracy: 0.9500\n",
+      "Epoch 7/60\n",
+      "12965/12965 [==============================] - 1s 51us/step - loss: 0.0930 - accuracy: 0.9679 - val_loss: 0.1610 - val_accuracy: 0.9473\n",
+      "Epoch 8/60\n",
+      "12965/12965 [==============================] - 1s 53us/step - loss: 0.0819 - accuracy: 0.9728 - val_loss: 0.1624 - val_accuracy: 0.9459\n",
+      "Epoch 9/60\n",
+      "12965/12965 [==============================] - 1s 52us/step - loss: 0.0730 - accuracy: 0.9749 - val_loss: 0.1598 - val_accuracy: 0.9452\n",
+      "Epoch 10/60\n",
+      "12965/12965 [==============================] - 1s 51us/step - loss: 0.0716 - accuracy: 0.9762 - val_loss: 0.1578 - val_accuracy: 0.9514\n",
+      "Epoch 11/60\n",
+      "12965/12965 [==============================] - 1s 52us/step - loss: 0.0642 - accuracy: 0.9782 - val_loss: 0.1698 - val_accuracy: 0.9410\n",
+      "Epoch 12/60\n",
+      "12965/12965 [==============================] - 1s 52us/step - loss: 0.0630 - accuracy: 0.9799 - val_loss: 0.1689 - val_accuracy: 0.9473\n",
+      "Epoch 13/60\n",
+      "12965/12965 [==============================] - 1s 51us/step - loss: 0.0589 - accuracy: 0.9800 - val_loss: 0.1635 - val_accuracy: 0.9486\n",
+      "Epoch 14/60\n",
+      "12965/12965 [==============================] - 1s 51us/step - loss: 0.0542 - accuracy: 0.9806 - val_loss: 0.1768 - val_accuracy: 0.9445\n",
+      "Epoch 15/60\n",
+      "12965/12965 [==============================] - 1s 51us/step - loss: 0.0515 - accuracy: 0.9825 - val_loss: 0.1629 - val_accuracy: 0.9528\n",
+      "Epoch 16/60\n",
+      "12965/12965 [==============================] - 1s 52us/step - loss: 0.0462 - accuracy: 0.9856 - val_loss: 0.1656 - val_accuracy: 0.9452\n",
+      "Epoch 17/60\n",
+      "12965/12965 [==============================] - 1s 51us/step - loss: 0.0433 - accuracy: 0.9862 - val_loss: 0.1761 - val_accuracy: 0.9417\n",
+      "Epoch 18/60\n",
+      "12965/12965 [==============================] - 1s 52us/step - loss: 0.0436 - accuracy: 0.9869 - val_loss: 0.1703 - val_accuracy: 0.9431\n",
+      "Epoch 19/60\n",
+      "12965/12965 [==============================] - 1s 52us/step - loss: 0.0403 - accuracy: 0.9875 - val_loss: 0.1730 - val_accuracy: 0.9452\n",
+      "Epoch 20/60\n",
+      "12965/12965 [==============================] - 1s 52us/step - loss: 0.0377 - accuracy: 0.9888 - val_loss: 0.1744 - val_accuracy: 0.9466\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Build the DNN classification head\n",
+    "dnn = keras.models.Sequential([\n",
+    "    InputLayer(final_features.shape[1:]),\n",
+    "    Dropout(0.7),\n",
+    "    Dense(n_classes, activation='softmax')\n",
+    "])\n",
+    "\n",
+    "dnn.compile(optimizer='adam',\n",
+    "            loss='categorical_crossentropy',\n",
+    "            metrics=['accuracy'])\n",
+    "\n",
+    "# Train the DNN on the fused features\n",
+    "h = dnn.fit(final_features, y_train,\n",
+    "            batch_size=512,\n",
+    "            epochs=60,\n",
+    "            validation_split=0.1,\n",
+    "            callbacks=my_callback)"
+   ]
+  },
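+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Editor's sketch (not part of the original run): persist the trained\n",
+    "# classification head so predictions can be served without retraining;\n",
+    "# reload it later with keras.models.load_model('dog_breed_dnn.h5').\n",
+    "dnn.save('dog_breed_dnn.h5')"
+   ]
+  },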
+  {
+   "cell_type": "code",
+   "execution_count": 19,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "6174/6174 [==============================] - 28s 5ms/step\n",
+      "Feature maps shape:  (6174, 2048)\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Extract the test-set features with the same four networks\n",
+    "inception_features = get_features(InceptionV3, inception_preprocessor, img_size, X_test)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 20,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "6174/6174 [==============================] - 37s 6ms/step\n",
+      "Feature maps shape:  (6174, 2048)\n"
+     ]
+    }
+   ],
+   "source": [
+    "xception_features = get_features(Xception, xception_preprocessor, img_size, X_test)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 21,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "6174/6174 [==============================] - 97s 16ms/step\n",
+      "Feature maps shape:  (6174, 4032)\n"
+     ]
+    }
+   ],
+   "source": [
+    "nasnet_features = get_features(NASNetLarge, nasnet_preprocessor, img_size, X_test)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 22,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "6174/6174 [==============================] - 55s 9ms/step\n",
+      "Feature maps shape:  (6174, 1536)\n"
+     ]
+    }
+   ],
+   "source": [
+    "inc_resnet_features = get_features(InceptionResNetV2, inc_resnet_preprocessor, img_size, X_test)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 23,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Final feature maps shape (6174, 9664)\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Fuse the test-set features\n",
+    "test_features = np.concatenate([inception_features,\n",
+    "                                xception_features,\n",
+    "                                nasnet_features,\n",
+    "                                inc_resnet_features],axis=-1)\n",
+    "print('Final feature maps shape', test_features.shape)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 24,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Predict breed probabilities for the test set\n",
+    "y_pred = dnn.predict(test_features, batch_size=128)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 39,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Convert probabilities and one-hot labels back to class indices\n",
+    "y_result = y_pred.argmax(axis=1)\n",
+    "y_test_result = y_test.argmax(axis=1)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 40,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[ 85  62  40 ...  39 114  67]\n",
+      "[ 85  62  40 ...  39 114  67]\n"
+     ]
+    }
+   ],
+   "source": [
+    "print(y_result)\n",
+    "print(y_test_result)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 42,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "0.9442824748947198\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Overall test accuracy\n",
+    "print((y_result == y_test_result.astype(int)).sum()/y_test_result.shape[0])"
+   ]
+  },
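+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Editor's sketch (not part of the original run): class indices are hard\n",
+    "# to read, so map a few test predictions back to breed names via the\n",
+    "# sorted dog_breeds list used to build class_to_num.\n",
+    "for pred_idx, true_idx in zip(y_result[:5], y_test_result[:5]):\n",
+    "    print('predicted:', dog_breeds[int(pred_idx)], '| actual:', dog_breeds[int(true_idx)])"
+   ]
+  },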
+  {
+   "cell_type": "code",
+   "execution_count": 43,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[[1.2796981e-05 1.3250595e-06 7.1861774e-07 ... 1.1318945e-07\n",
+      "  2.0019799e-07 1.4043926e-06]\n",
+      " [1.1272825e-06 9.4958580e-07 1.1536797e-06 ... 5.3315927e-07\n",
+      "  4.4875711e-07 8.8954891e-07]\n",
+      " [1.6093598e-07 8.2706998e-07 1.0283359e-06 ... 2.5320762e-07\n",
+      "  2.4572310e-07 9.4702358e-08]\n",
+      " ...\n",
+      " [8.5133320e-07 4.0933867e-07 6.8925914e-07 ... 7.4626388e-07\n",
+      "  1.1052952e-05 6.5475359e-07]\n",
+      " [9.8053363e-07 8.3240536e-07 1.5329607e-06 ... 8.9162114e-07\n",
+      "  1.3118838e-06 1.4811222e-06]\n",
+      " [1.6023027e-06 2.2533499e-07 2.5402828e-06 ... 4.4936533e-06\n",
+      "  2.0640696e-06 9.9410147e-07]]\n"
+     ]
+    }
+   ],
+   "source": [
+    "print(y_pred)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.7.7"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/\347\213\227\347\213\227\347\247\215\347\261\273\350\257\206\345\210\253 - Sunburst/src/1.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/\347\213\227\347\213\227\347\247\215\347\261\273\350\257\206\345\210\253 - Sunburst/src/1.png"
new file mode 100644
index 00000000..442c41b6
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/\347\213\227\347\213\227\347\247\215\347\261\273\350\257\206\345\210\253 - Sunburst/src/1.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/\347\213\227\347\213\227\347\247\215\347\261\273\350\257\206\345\210\253 - Sunburst/src/2.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/\347\213\227\347\213\227\347\247\215\347\261\273\350\257\206\345\210\253 - Sunburst/src/2.png"
new file mode 100644
index 00000000..9c18441e
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/\347\213\227\347\213\227\347\247\215\347\261\273\350\257\206\345\210\253 - Sunburst/src/2.png" differ
diff --git "a/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/\347\213\227\347\213\227\347\247\215\347\261\273\350\257\206\345\210\253 - Sunburst/src/3.png" "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/\347\213\227\347\213\227\347\247\215\347\261\273\350\257\206\345\210\253 - Sunburst/src/3.png"
new file mode 100644
index 00000000..893f873a
Binary files /dev/null and "b/1 \351\201\227\345\244\261\345\256\240\347\211\251\347\232\204\346\231\272\350\203\275\345\257\273\346\211\276/\347\213\227\347\213\227\347\247\215\347\261\273\350\257\206\345\210\253 - Sunburst/src/3.png" differ