Lecture 12: Visualizing and Understanding

Ways to look inside a CNN:

- Filters of the first layer
- Features of the last layer: t-SNE
- Visualizing Activations
- Occlusion Experiments
- Saliency Maps: backpropagate the gradient of the class score to the input image
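The saliency-map idea above is just the absolute gradient of the class score with respect to each pixel. A minimal sketch, using a toy linear "class score" and numerical differentiation in place of a real CNN backward pass (the weights `w` and score function are assumptions for illustration):

```python
import numpy as np

def saliency_map(image, score_fn, eps=1e-4):
    # Saliency = |dS_c/dI|: how strongly each pixel affects the class score.
    # Here the gradient is estimated numerically; with a real CNN it comes
    # from a single backward pass to the input.
    grad = np.zeros_like(image)
    it = np.nditer(image, flags=["multi_index"])
    while not it.finished:
        idx = it.multi_index
        orig = image[idx]
        image[idx] = orig + eps
        plus = score_fn(image)
        image[idx] = orig - eps
        minus = score_fn(image)
        image[idx] = orig                      # restore the pixel
        grad[idx] = (plus - minus) / (2 * eps)
        it.iternext()
    return np.abs(grad)

# Toy "class score": a weighted sum, so the saliency equals |weights|.
w = np.array([[1.0, -2.0], [0.5, 0.0]])
img = np.random.rand(2, 2)
sal = saliency_map(img, lambda x: float((w * x).sum()))
```

Because the toy score is linear, the central difference recovers the weights exactly; for a real network the map highlights which pixels the class score is most sensitive to.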

Intermediate features via guided backprop: at each ReLU, pass the gradient only where both the forward activation and the upstream gradient are positive, i.e. `grad *= (y > 0) & (grad > 0)`.

- Pick a single intermediate neuron
- Compute the gradient of the neuron's value with respect to the image pixels

Images come out nicer if you only backprop positive gradients through each ReLU. This finds the part of an image that a neuron responds to; it operates on an input image, so one must be given.
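The "only backprop positive gradients through each ReLU" rule can be sketched as a single masking step (the function name is an assumption; this is just the per-ReLU gradient rule, not a full network):

```python
import numpy as np

def relu_backward_guided(upstream_grad, forward_input):
    # Plain ReLU backprop zeroes the gradient where the forward input was <= 0.
    # Guided backprop additionally zeroes NEGATIVE upstream gradients, so only
    # "positive influence" paths survive -- this is what cleans up the images.
    mask = (forward_input > 0) & (upstream_grad > 0)
    return upstream_grad * mask

x = np.array([-1.0, 2.0, 3.0, 0.5])   # forward inputs to the ReLU
g = np.array([ 1.0, -1.0, 2.0, 0.0])  # gradients arriving from above
out = relu_backward_guided(g, x)       # only index 2 survives both masks
```

Index 0 is killed by the forward mask, index 1 by the negative upstream gradient, and index 3 carries no gradient, so only the entry where both conditions hold propagates.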

Visualizing CNN features: Gradient Ascent. Generate a synthetic image that maximally activates a neuron; this probes the model itself, so no input image is needed.

$$I^* = \arg\max_I f(I) + R(I) \;\Rightarrow\; \arg\max_I S_c(I) - \lambda \|I\|_2^2$$

1. Initialize the image to zeros
2. Forward the image to compute current scores
3. Backprop to get the gradient of the neuron value with respect to the image pixels
4. Make a small update to the image (repeat steps 2–4)
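The steps above can be sketched as a plain gradient-ascent loop on the objective $S_c(I) - \lambda \|I\|_2^2$. A fixed random linear "score" stands in for the CNN class score (an assumption for illustration), so the gradient is available in closed form; with a real network step 3 is a backward pass:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))   # toy linear score weights (assumption)
lam, lr, steps = 0.1, 0.05, 1000

I = np.zeros((8, 8))                                # 1. initialize image to zeros
for _ in range(steps):
    score = (W * I).sum() - lam * (I ** 2).sum()    # 2. forward: current score
    grad = W - 2 * lam * I                          # 3. gradient w.r.t. pixels
    I += lr * grad                                  # 4. small update to the image

# For this quadratic objective the optimum is I* = W / (2 * lam),
# and the loop converges to it geometrically.
```

The L2 regularizer $\lambda\|I\|_2^2$ is what keeps the iterate bounded; without it the linear score would grow without limit.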

Adversarial Examples: an input image can be given.

$$x_{adv} = x + \arg\min_\delta \|\delta\| \quad \text{s.t.} \quad \mathrm{Pred}(x_{adv}) \neq \mathrm{Pred}(x)$$

DeepDream: an input image can be given.
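For a toy linear classifier $\mathrm{Pred}(x) = \mathrm{sign}(w \cdot x)$ the minimum-norm $\delta$ in the formulation above has a closed form: step along $w$ just past the decision boundary. This is a stand-in sketch (the classifier and function name are assumptions); with a real CNN, $\delta$ is found iteratively with gradient-based attacks:

```python
import numpy as np

def minimal_flip(x, w, margin=1e-3):
    # Smallest-norm perturbation flipping sign(w . x): project x onto the
    # hyperplane w . x = 0, then take a tiny extra step to cross it.
    delta = -(w @ x) / (w @ w) * w
    return delta * (1.0 + margin)

w = np.array([1.0, -2.0, 0.5])   # toy classifier weights (assumption)
x = np.array([3.0, 1.0, 1.0])    # clean input, predicted sign(w @ x) = +1
d = minimal_flip(x, w)
x_adv = x + d
assert np.sign(w @ x_adv) != np.sign(w @ x)   # prediction flipped
```

The point of the formulation is that the flipping perturbation can be far smaller than the input itself, so $x_{adv}$ looks unchanged to a human.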

DeepDream steps:

1. Forward: compute activations at the chosen layer
2. Set the gradient of the chosen layer equal to its activation
3. Backward: compute the gradient on the image
4. Update the image

Feature Inversion: recover the original image from its features. Given a CNN feature vector $\Phi_0$ for an image, find a new image $x$ that matches the given feature vector and "looks natural" (image prior regularization):

$$x^* = \arg\min_x \ell(\Phi(x), \Phi_0) + \lambda R(x) = \arg\min_x \|\Phi(x) - \Phi_0\|^2 + \lambda \int \|\nabla f\|^{\beta} \, dr$$

Texture Synthesis: match the Gram matrix of the feature maps.

Neural Style Transfer: the original neural style method optimizes iteratively for each pair of content and style images, which is too slow. Fast Style Transfer instead trains a dedicated network that generates images in one particular style (either train the network so its outputs satisfy the objective of the original neural-style optimization, or fuse the image at multiple scales to generate the stylized result directly). A single model can also produce multiple styles: learn a scale and shift for each style, then select one set of scale/shift parameters and generate the corresponding style with the same shared model.
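The Gram matrix used by both texture synthesis and style transfer is just the channel-by-channel correlation of a feature map, with the spatial layout summed away. A minimal sketch (the `(C, H, W)` layout and the spatial normalization are common conventions, assumed here):

```python
import numpy as np

def gram_matrix(features):
    # features: (C, H, W) activation volume from some conv layer.
    # G[i, j] = sum over spatial positions of F_i * F_j: it keeps pairwise
    # channel correlations (texture) and discards spatial arrangement.
    C, H, W = features.shape
    F = features.reshape(C, H * W)
    return F @ F.T / (H * W)          # normalize by spatial size (a common choice)

feats = np.random.rand(4, 8, 8)
G = gram_matrix(feats)
# G is (C, C), symmetric, and unchanged by any spatial shuffle of the features.
```

That shuffle-invariance is exactly why matching Gram matrices reproduces a style or texture without copying the source image's layout.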