Rating: 3.0

Full explanation [here](https://zenhack.it/writeups/Confidence2019/neuralflag/)!

```python
import numpy as np
import keras
import keras.backend as K
import matplotlib.pyplot as plt

features = 50 * 11  # the model takes 11 x 50 input features

# "model.h5" is a placeholder name: load the model shipped with the challenge.
model = keras.models.load_model("model.h5")

# Start far away from the data, so the gradient has room to steer us.
x0 = np.ones((1, 11, 50), dtype=float) * 100

eps = 1
target = np.array([0, 1])

# Take the gradient w.r.t. the last dense layer (the logits). Why?
# Because the saturated softmax flattens everything to zero
# (try and see: just swap in the commented-out loss definition below).
logits = model.layers[-2].output  # assumes the softmax is a separate final layer
loss = K.categorical_crossentropy(K.constant(target.reshape(1, -1), dtype='float32'),
                                  logits, from_logits=True)

# If you want to find the 0 gradient, just plug this loss inside the derivative:
# loss = keras.losses.categorical_crossentropy(K.constant(target, dtype='float32'), model.output)

grads = K.gradients(loss, model.input)

session = K.get_session()

prediction = model.predict(x0)
while np.argmax(prediction) != 1:
    # Thank you Keras + TensorFlow!
    # The [0][0] is ugly, but it is needed to get the value out as a plain array.
    gradient = session.run(grads, feed_dict={model.input: x0})[0][0]

    # We do not take the sign of the gradient, because we need to land on the
    # flag point exactly. Plain FGSM perturbs the image almost uniformly,
    # since sign() flattens the gradient to barely +/-1; here each pixel must
    # move according to the value of the gradient at its own position.
    fsgm = eps * gradient

    # The gradient always points in the direction of steepest ascent, but we
    # need to minimize the loss. Hence, we subtract it.
    x0 = x0 - fsgm

    # We do not need to clip, as we never save the result as a regular image.
    prediction = model.predict(x0)
    print(prediction)

# Rescale the output to [0, 1] for visualization purposes.
img = x0[0]
plt.imshow((img - img.min()) / (img.max() - img.min()))
plt.show()
```
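The "softmax flattens everything to zero" remark can be checked with a few lines of plain NumPy, independently of the challenge model: when the softmax is saturated (one class probability essentially 1), its Jacobian, whose entries are `p[i] * (delta_ij - p[j])`, is numerically zero, so any gradient chained through it vanishes. This is a standalone sketch with made-up logit values, not part of the original exploit.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    e = np.exp(z - z.max())
    return e / e.sum()

# Saturated logits, like the ones a confident network produces.
z = np.array([30.0, -30.0])
p = softmax(z)

# Softmax Jacobian: J[i, j] = p[i] * (delta_ij - p[j])
J = np.diag(p) - np.outer(p, p)
print(np.abs(J).max())  # numerically zero: gradients chained through it vanish
```

With logits around +/-30 the largest Jacobian entry is on the order of `1e-26`, which is why differentiating the cross-entropy through `model.output` yields a useless gradient, while differentiating against the pre-softmax layer does not.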

