Commit 70c03c4

pic

weslynn committed Dec 24, 2019
1 parent 44aefc0 commit 70c03c4
Showing 4 changed files with 46 additions and 46 deletions.
4 changes: 2 additions & 2 deletions DNN深度神经网络/README.md
@@ -120,7 +120,7 @@ Caffe model visualization: http://ethereon.github.io/netscope/#/editor

* LeNet, the most classic CNN architecture

-<a href="https://github.com/weslynn/graphic-deep-neural-network/blob/master/object%20classification%20%E7%89%A9%E4%BD%93%E5%88%86%E7%B1%BB/LeNet.md"> <img src="https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/lenet-org.jpg" width="705"> </a>
+<a href="https://github.com/weslynn/graphic-deep-neural-network/blob/master/object%20classification%20%E7%89%A9%E4%BD%93%E5%88%86%E7%B1%BB/LeNet.md"> <img src="https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/basicpic/lenet-org.jpg" width="705"> </a>

<a href="https://github.com/weslynn/graphic-deep-neural-network/blob/master/object%20classification%20%E7%89%A9%E4%BD%93%E5%88%86%E7%B1%BB/LeNet.md"> <img src="https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/modelpic/lenet.png" width="600"> </a>

@@ -148,7 +148,7 @@ Caffe model visualization: http://ethereon.github.io/netscope/#/editor
### AlexNet [detail](https://github.com/weslynn/graphic-deep-neural-network/blob/master/object%20classification%20%E7%89%A9%E4%BD%93%E5%88%86%E7%B1%BB/AlexNet.md) Alex Krizhevsky, Geoffrey Hinton
* AlexNet: in 2012, Alex Krizhevsky used AlexNet to take first place in that year's ImageNet image-classification competition (ILSVRC 2012) with a top-5 error rate of 15.3%, ten percentage points lower than the previous year's winner and far ahead of that year's runner-up.

-<a href="https://github.com/weslynn/graphic-deep-neural-network/blob/master/object%20classification%20%E7%89%A9%E4%BD%93%E5%88%86%E7%B1%BB/AlexNet.md"> <img src="https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/alexnet-org.jpg" width="805"></a>
+<a href="https://github.com/weslynn/graphic-deep-neural-network/blob/master/object%20classification%20%E7%89%A9%E4%BD%93%E5%88%86%E7%B1%BB/AlexNet.md"> <img src="https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/basicpic/alexnet-org.jpg" width="805"></a>

<a href="https://github.com/weslynn/graphic-deep-neural-network/blob/master/object%20classification%20%E7%89%A9%E4%BD%93%E5%88%86%E7%B1%BB/AlexNet.md"> <img src="https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/modelpic/alexnet.png" width="700"></a>

62 changes: 31 additions & 31 deletions GAN对抗生成网络/README.md

@@ -1,7 +1,7 @@
# GAN: Generative Adversarial Networks


-![GAN](https://github.com/weslynn/graphic-deep-neural-network/blob/master/map/Art&Ganpic.png)
+![GAN](https://github.com/weslynn/graphic-deep-neural-network/blob/master/map/Art&pic/ganpic.png)

-----------------------------------

@@ -30,13 +30,13 @@

The goal of a GAN is for the data generated by G to be, from D's perspective, as close to the real data as possible. The objective function is as follows:

-![basictarget](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/basictarget.png)
+![basictarget](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/basictarget.png)

From the discriminator D's perspective, it wants to distinguish real samples from fake ones as well as possible, so it wants D(x) to be as large as possible and D(G(z)) as small as possible, i.e. V(D,G) as large as possible. From the generator G's perspective, it wants to fool D, i.e. it wants D(G(z)) to be as large as possible, which means V(D,G) as small as possible. The two models compete against each other until a global optimum is reached.
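
For reference, the minimax objective shown in the image above, as given in Goodfellow et al. (2014), can be written as:

```latex
\min_G \max_D V(D,G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)]
  + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
```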

In terms of data distributions: the initial noise distribution, after G is repeatedly refined, produces a distribution that matches the target data distribution:

-![data](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/data.png)
+![data](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/data.png)


[1] Ian Goodfellow. "Generative Adversarial Networks." arXiv preprint arXiv:1406.2661v1 (2014). [pdf](https://arxiv.org/pdf/1406.2661v1.pdf)
@@ -132,7 +132,7 @@ Much of GAN research is a form of research into generative modeling,
2. Sampling: model the data distribution and sample from it, generating new data that follows the original distribution.


-![gang](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/gang.jpg)
+![gang](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/gang.jpg)


----------------------------------------------
@@ -157,7 +157,7 @@ Much of GAN research is a form of research into generative modeling,

In a standard GAN, the generated data typically comes from a single continuous noise vector z. In the semi-supervised CGAN, a class label c is added. InfoGAN finds a latent code for the GAN, making the GAN's data generation interpretable.

-![ganmodule](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/ganmodule.png)
+![ganmodule](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/ganmodule.png)
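
As a minimal sketch of the conditioning idea above (PyTorch; the layer sizes are illustrative assumptions, not any paper's exact architecture), the generator simply receives the class label concatenated with the noise vector:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """CGAN-style generator: condition by concatenating noise z with a one-hot label c."""
    def __init__(self, z_dim=100, n_classes=10, img_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_classes, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),
        )

    def forward(self, z, c_onehot):
        # The label steers generation; z supplies the remaining variation.
        return self.net(torch.cat([z, c_onehot], dim=1))

z = torch.randn(8, 100)
c = torch.eye(10)[torch.randint(0, 10, (8,))]  # random one-hot labels
fake = ConditionalGenerator()(z, c)            # -> (8, 784)
```

InfoGAN additionally maximizes the mutual information between part of this latent code and the generated output, which is what makes the code interpretable.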



@@ -192,7 +192,7 @@ Invertible Conditional GANs for image editing

An encoder network extracts a feature vector z from the input image; z is then concatenated with the target attribute vector y and fed into the generator network to produce the output image. The network structure is shown below:

-![icgan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/icgan.png)
+![icgan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/icgan.png)


https://arxiv.org/pdf/1611.06355.pdf
@@ -304,7 +304,7 @@ https://github.com/martinarjovsky/WassersteinGAN
## WGAN-GP
Regularization and Normalization of the Discriminator:

-![wgangp](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/wgangp.png)
+![wgangp](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/wgangp.png)

WGAN-GP:
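
A rough sketch of the gradient-penalty term (PyTorch; `critic` is a placeholder for any discriminator network, and λ = 10 is the paper's default weight):

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP: penalize the critic's gradient norm at points interpolated
    between real and fake samples (assumes 4-D image batches)."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    grads, = torch.autograd.grad(
        outputs=scores, inputs=x_hat,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```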

@@ -330,7 +330,7 @@ https://www.leiphone.com/news/201704/pQsvH7VN8TiLMDlK.html



-![face](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/face.jpg)
+![face](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/face.jpg)



@@ -369,10 +369,10 @@ Batch Normalization

This was an important early attempt to apply CNNs to unsupervised learning. The architecture greatly stabilized GAN training, so much so that it remained the standard GAN architecture for quite a long time, and it opened an important chapter for the exploration that followed.

-![dcgan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/dcgang.jpg)
+![dcgan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/dcgang.jpg)


-![dcganr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/dcganr.jpg)
+![dcganr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/dcganr.jpg)
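
A minimal sketch of a DCGAN-style generator (PyTorch; the channel sizes and 32x32 output are illustrative, not the paper's exact configuration): fractionally-strided convolutions with batch norm and ReLU, and a tanh output, as the figures above show.

```python
import torch.nn as nn

# Noise of shape (N, 100, 1, 1) is projected to 4x4 and progressively upsampled.
dcgan_generator = nn.Sequential(
    nn.ConvTranspose2d(100, 512, 4, 1, 0, bias=False),  # 1x1  -> 4x4
    nn.BatchNorm2d(512), nn.ReLU(True),
    nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),  # 4x4  -> 8x8
    nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),  # 8x8  -> 16x16
    nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 3, 4, 2, 1, bias=False),    # 16x16 -> 32x32
    nn.Tanh(),
)
```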


## ImprovedDCGAN
@@ -399,7 +399,7 @@ Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
As the name suggests, PGGAN uses a progressive structure to move from low resolution to high resolution, which makes it possible to train a high-resolution model smoothly. The paper also offers its own insights and tricks on regularization and normalization that are worth studying. Of course, because training is progressive, it effectively trains many models in sequence, so PGGAN is slow.


-![progan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/progan.gif)
+![progan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/progan.gif)
Paper: https://arxiv.org/pdf/1710.10196.pdf
Code: https://github.com/tkarras/progressive_growing_of_gans

@@ -420,13 +420,13 @@ CelebA-HQ dataset
## SAGAN Ian Goodfellow
Because of the limited local receptive field of convolution, generating regions with long-range dependencies is problematic, and making the convolutional network deeper costs too many parameters. SAGAN therefore introduces self-attention into the generator (and the discriminator), using information from all feature positions to generate image details, while allowing the discriminator to verify that two features far apart in the image are consistent with each other, thereby capturing global information.
It raised the Inception Score (IS) from 36.8 to 52.52 and lowered the Fréchet Inception Distance (FID) from 27.62 to 18.65.
-![sagan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/sagan.jpg)
+![sagan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/sagan.jpg)

-![sagan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/sagan.png)
+![sagan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/sagan.png)

SAGAN uses an attention mechanism; the highlighted regions are the positions the attention attends to.

-![saganr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/saganr.jpg)
+![saganr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/saganr.jpg)
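
A condensed sketch of the self-attention block (PyTorch, following the SAGAN formulation; the channel-reduction factor of 8 is the paper's choice, and γ starts at 0 so the block initially acts as an identity):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over the spatial positions of a feature map."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual scale

    def forward(self, x):
        n, c, h, w = x.size()
        q = self.query(x).view(n, -1, h * w).permute(0, 2, 1)  # (n, hw, c/8)
        k = self.key(x).view(n, -1, h * w)                     # (n, c/8, hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)              # (n, hw, hw)
        v = self.value(x).view(n, -1, h * w)                   # (n, c, hw)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(n, c, h, w)
        return self.gamma * out + x
```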



@@ -454,9 +454,9 @@ BigGAN is one of the highest-quality image-generation models on ImageNet. The BigGAN …

This paper presents generated natural-scene images at resolutions of 128, 256, and 512. Generating natural scenes is much harder than generating CelebA faces.

-![biggan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/biggan.png)
+![biggan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/biggan.png)

-![bigganr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/bigganr.png)
+![bigganr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/bigganr.png)

Github: https://github.com/AaronLeong/BigGAN-pytorch

@@ -484,10 +484,10 @@ StyleGAN first focuses on ProGAN's generator network; it finds that the progressive layers' …
  3. Fine (high resolution): resolutions from 64^2 to 1024^2, affecting color (eyes, hair, and skin) and micro-features;


-![stylegan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/stylegan.png)
+![stylegan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/stylegan.png)

-![stylegan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/stylegan.gif)
-![styleganr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/styleganr.jpg)
+![stylegan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/stylegan.gif)
+![styleganr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/styleganr.jpg)


![stylegan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/modelpic/gan/stylegan.png)
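
These per-resolution styles are injected through adaptive instance normalization (AdaIN). A compact sketch (PyTorch; the 512-dimensional style vector w follows the paper, the rest is illustrative):

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive instance normalization: a style vector w sets per-channel
    scale and bias after instance-normalizing the feature map."""
    def __init__(self, channels, w_dim=512):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.affine = nn.Linear(w_dim, channels * 2)  # per-channel scale and bias

    def forward(self, x, w):
        scale, bias = self.affine(w).chunk(2, dim=1)
        return (1 + scale[:, :, None, None]) * self.norm(x) + bias[:, :, None, None]
```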
@@ -588,7 +588,7 @@ Github: https://github.com/google/compare_gan
- [Paired two-domain data](#1-Paired-Image-to-Image-Translation)
- [Unpaired two-domain data](#2-Unpaired-Image-to-Image-Translation)

-![compare](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/compare.png)
+![compare](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/compare.png)


|Title| Co-authors| Publication|Links|
@@ -640,12 +640,12 @@ https://arxiv.org/pdf/1611.07004v1.pdf

Pix2Pix makes a small change to the traditional CGAN: instead of random noise, the input is an image supplied by the user:

-![pix2pix](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/pix2pix.png)
+![pix2pix](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/pix2pix.png)


Using pix2pix for paired image translation (labels to street scene, aerial to map, day to night, etc.) yields fairly clean results.

-![pix2pixr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/pix2pixr.png)
+![pix2pixr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/pix2pixr.png)
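
The training objective pairs the conditional-GAN loss with an L1 reconstruction term; a sketch of the generator side (PyTorch; `G` and `D` are placeholders, and λ = 100 is the paper's default weight):

```python
import torch
import torch.nn.functional as F

def pix2pix_generator_loss(D, G, x, y, lam=100.0):
    """Fool the conditional discriminator while staying close to the target in L1."""
    fake = G(x)                 # translated image
    logits = D(x, fake)         # discriminator judges the (input, output) pair
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + lam * F.l1_loss(fake, y)
```

The L1 term keeps the output globally faithful to the ground truth, while the adversarial term sharpens high-frequency detail.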

Code:

@@ -679,9 +679,9 @@ Ming-Yu Liu has been involved in many well-known CV projects, including vid2vid and pix2p…
2. Loss design
3. Training with instance-map images.

-![pix2pixhd](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/pix2pixhd.png)
+![pix2pixhd](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/pix2pixhd.png)

-![pix2pixhd](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/pix2pixhd.gif)
+![pix2pixhd](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/pix2pixhd.gif)

Official code: https://github.com/NVIDIA/pix2pixHD

@@ -733,9 +733,9 @@ https://arxiv.org/abs/1703.10593

CycleGAN translates images between two domains in both directions. A traditional GAN generates one way only; CycleGAN generates both ways, combining an A→B one-way GAN with a B→A one-way GAN. The network forms a ring, hence the name Cycle. The idea is that if the B generated from A is correct, then the A regenerated from that B should be correct as well. The two input images can be any two images, i.e. unpaired.

-![CycleGan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/cyclegan.png)
+![CycleGan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/cyclegan.png)

-![CycleGanr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/cyclegan.jpg)
+![CycleGanr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/cyclegan.jpg)
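
A sketch of the cycle-consistency term that enforces this round-trip idea (PyTorch; λ = 10 is the paper's default, and the two adversarial losses are omitted):

```python
import torch.nn.functional as F

def cycle_consistency_loss(G_ab, G_ba, real_a, real_b, lam=10.0):
    """A -> B -> A and B -> A -> B should both reconstruct the original inputs."""
    rec_a = G_ba(G_ab(real_a))  # A -> fake B -> reconstructed A
    rec_b = G_ab(G_ba(real_b))  # B -> fake A -> reconstructed B
    return lam * (F.l1_loss(rec_a, real_a) + F.l1_loss(rec_b, real_b))
```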

Official PyTorch code (CycleGAN, pix2pix): https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix

@@ -758,9 +758,9 @@ https://github.com/NVlabs/FUNIT
It mainly addresses two problems: few-shot translation and translation to unseen domains.
Humans can imagine and extrapolate a new species after seeing only a few samples; the key is that within a broad class of species, information can be transferred between members.

-![FUNIT](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/FUNIT.png)
+![FUNIT](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/FUNIT.png)

-![FUNITr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/FUNITr.png)
+![FUNITr](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/FUNITr.png)

Demo: http://nvidia-research-mingyuliu.com/petswap

@@ -769,7 +769,7 @@

StarGAN was introduced to handle translation among multiple domains. Earlier models such as CycleGAN can only translate between two domains, so covering C domains requires learning C×(C−1) models, whereas StarGAN needs to learn only one. For example, with C = 10 domains, pairwise translation would require 90 models, while StarGAN trains a single generator conditioned on the target domain.

-![starGan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/stargan.png)
+![starGan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/stargan.png)



@@ -903,7 +903,7 @@ Semantic Image Synthesis with Spatially-Adaptive Normalization (CVPR 2019).

In the field of synthesizing images from semantic layouts, this is currently the strongest method.

-![gaugan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/gaugan.jpg)
+![gaugan](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/gaugan.jpg)


Paper: https://arxiv.org/abs/1903.07291
@@ -1519,7 +1519,7 @@ https://github.com/ycjing/Neural-Style-Transfer-Papers

Contains the papers, source code, and pre-trained models for the style-transfer survey. [Chinese](https://mp.weixin.qq.com/s?__biz=MzIwMTc4ODE0Mw==&mid=2247489172&idx=1&sn=42f567fb57d2886da71a07dd16388022&chksm=96e9c914a19e40025bf88e89514d5c6f575ee94545bd5d854c01de2ca333d4738b433d37d1f5#rd)

-![neuralstyle](https://github.com/weslynn/graphic-deep-neural-network/blob/master/ganpic/overview.jpg)
+![neuralstyle](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/ganpic/overview.jpg)

### Neural Style Transfer

12 changes: 6 additions & 6 deletions GNN图神经网络/README.md

@@ -24,7 +24,7 @@ A Comprehensive Survey of Graph Embedding: Problems, Techniques and Applications
To learn on graphs, the vertex data, edge data, and subgraph data must first be reduced in dimensionality. This is graph embedding.


-![graphembeding](https://github.com/weslynn/graphic-deep-neural-network/blob/master/gnnpic/graphembeding.jpg)
+![graphembeding](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/gnnpic/graphembeding.jpg)


As shown in Figure 1: each element of the feature map extracted from an image can be understood as a weighted sum of the corresponding pixel and its surrounding pixels on the image (followed by an activation).
@@ -33,12 +33,12 @@



-![image-graphhd](https://github.com/weslynn/graphic-deep-neural-network/blob/master/gnnpic/image-graphhd.jpg)
+![image-graphhd](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/gnnpic/image-graphhd.jpg)


## Graph Embedding

-![graphembedingpaper](https://github.com/weslynn/graphic-deep-neural-network/blob/master/gnnpic/graphembedingpaper.jpg)
+![graphembedingpaper](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/gnnpic/graphembedingpaper.jpg)



@@ -95,7 +95,7 @@ DeepWalk and node2vec randomly initialize node embeddings to train the model. Since their …
The sequences DeepWalk and node2vec generate via random walks implicitly preserve higher-order proximity between nodes; because of their randomness, these walks connect nodes at varying distances. Factorization-based methods such as GF and HOPE, by contrast, explicitly preserve distances between nodes by modeling them in the objective function. Walklets combines explicit modeling with the random-walk idea: it modifies DeepWalk's random-walk strategy by skipping some nodes in the graph. This is done for several skip lengths, similar to the multi-scale factorization in GraRep, and the node sequences obtained from these walks are used to train a model similar to DeepWalk.


-![DeepWalkmore](https://github.com/weslynn/graphic-deep-neural-network/blob/master/gnnpic/DeepWalkmore.jpg)
+![DeepWalkmore](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/gnnpic/DeepWalkmore.jpg)
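
A toy sketch of the DeepWalk recipe (plain Python plus gensim's Word2Vec; the graph and all hyperparameters are illustrative assumptions):

```python
import random
from gensim.models import Word2Vec

# Toy adjacency list standing in for a real graph.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def random_walk(g, start, length=10):
    """Uniform random walk; the node sequence is treated like a sentence."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(g[walk[-1]]))
    return [str(n) for n in walk]

walks = [random_walk(graph, n) for n in graph for _ in range(20)]
# Skip-gram over the walks yields node embeddings (DeepWalk). Walklets would
# additionally skip nodes in each walk to capture proximity at multiple scales.
model = Word2Vec(walks, vector_size=32, window=5, min_count=0, sg=1)
print(model.wv["0"][:5])
```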



@@ -112,7 +112,7 @@ The sequences DeepWalk and node2vec generate via random walks implicitly preserve inter-node …
- Graph Spatial-temporal Networks


-![notDeepwalk](https://github.com/weslynn/graphic-deep-neural-network/blob/master/gnnpic/notDeepwalk.jpg)
+![notDeepwalk](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/gnnpic/notDeepwalk.jpg)

### 3.1. Graph Auto-Encoders

Expand Down Expand Up @@ -159,7 +159,7 @@ Bruna等人(2013)首次提出了对GCNs的突出研究,该研究基于频谱
- Edge-level outputs relate to edge classification and link prediction. To predict edge labels and connection strengths, an additional function takes the latent representations of two nodes from the graph convolution module as input.
- Graph-level outputs relate to graph classification. To obtain a compact graph-level representation, pooling modules are used to coarsen the graph into subgraphs or to sum/average node representations.

-![gcn](https://github.com/weslynn/graphic-deep-neural-network/blob/master/gnnpic/gcn.png)
+![gcn](https://github.com/weslynn/graphic-deep-neural-network/blob/master/pic/gnnpic/gcn.png)
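
The layer-wise propagation rule from Kipf & Welling's GCN, which the figure illustrates, can be sketched as follows (PyTorch; dense matrices and the tiny two-node graph are illustrative):

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One GCN layer: H' = relu(A_hat @ H @ W), with A_hat the symmetrically
    normalized adjacency matrix including self-loops."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return torch.relu(self.linear(a_hat @ h))

def normalize_adjacency(a):
    a_tilde = a + torch.eye(a.size(0))        # add self-loops
    d_inv_sqrt = a_tilde.sum(1).pow(-0.5)     # D^(-1/2)
    return d_inv_sqrt[:, None] * a_tilde * d_inv_sqrt[None, :]

a = torch.tensor([[0., 1.], [1., 0.]])
h = torch.randn(2, 8)                            # 2 nodes, 8 features each
out = GCNLayer(8, 4)(normalize_adjacency(a), h)  # -> (2, 4)
```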


The concept of the GCN was first proposed at ICLR 2017 (the paper was written in 2016).