I've been trying to get a simple CNN to train for the last 3 days.
First, I set up an input pipeline/queue that reads images from a directory tree and prepares batches.
I got the code for that at this link. So I now have train_image_batch and train_label_batch that I need to feed to my CNN.
train_image_batch, train_label_batch = tf.train.batch(
    [train_image, train_label],
    batch_size=BATCH_SIZE
    # ,num_threads=1
)
And I can't figure out how. I'm using the CNN code given at this link.
# Input Layer
input_layer = tf.reshape(train_image_batch, [-1, IMAGE_HEIGHT, IMAGE_WIDTH, NUM_CHANNELS])
# Convolutional Layer #1
conv1 = new_conv_layer(input_layer, NUM_CHANNELS, 5, 32, 2)
# Pooling Layer #1
pool1 = new_pooling_layer(conv1, 2, 2)
Printing input_layer shows this:
Tensor("Reshape:0", shape=(5, 120, 120, 3), dtype=uint8)
The next line, conv1 = new_conv_layer(...), crashes with a TypeError. The body of new_conv_layer is given below:
def new_conv_layer(input,              # The previous layer.
                   num_input_channels, # Num. channels in prev. layer.
                   filter_size,        # Width and height of each filter.
                   num_filters,        # Number of filters.
                   stride):

    # Shape of the filter-weights for the convolution.
    # This format is determined by the TensorFlow API.
    shape = [filter_size, filter_size, num_input_channels, num_filters]

    # Create new weights aka. filters with the given shape.
    weights = tf.Variable(tf.truncated_normal(shape, stddev=0.05))

    # Create new biases, one for each filter.
    biases = tf.Variable(tf.constant(0.05, shape=[num_filters]))

    # Create the TensorFlow operation for convolution.
    # The first and last stride must always be 1,
    # because the first is for the image-number and
    # the last is for the input-channel.
    # E.g. strides=[1, 2, 2, 1] means that the filter
    # is moved 2 pixels across the x- and y-axis of the image.
    # The padding is set to 'SAME', which means the input image
    # is padded with zeroes so the size of the output is the same.
    layer = tf.nn.conv2d(input=input,
                         filter=weights,
                         strides=[1, stride, stride, 1],
                         padding='SAME')

    # Add the biases to the results of the convolution.
    # A bias-value is added to each filter-channel.
    layer += biases

    # Rectified Linear Unit (ReLU).
    # It calculates max(x, 0) for each input pixel x.
    # This adds some non-linearity to the formula and allows us
    # to learn more complicated functions.
    layer = tf.nn.relu(layer)

    # Note that ReLU is normally executed before the pooling,
    # but since relu(max_pool(x)) == max_pool(relu(x)) we can
    # save 75% of the relu-operations by max-pooling first.

    # We return both the resulting layer and the filter-weights
    # because we will plot the weights later.
    return layer, weights
Precisely, it crashes at tf.nn.conv2d with this error:
TypeError: Value passed to parameter 'input' has DataType uint8 not in list of allowed values: float16, float32
The image from your input pipeline is of type 'uint8'; you need to convert it to 'float32'. You can do that right after the image jpeg decoder:
image = tf.image.decode_jpeg(...
image = tf.cast(image, tf.float32)
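As a side illustration (not part of the original answer), the effect of that cast can be sketched with NumPy alone, without building a TF graph: tf.cast(image, tf.float32) keeps the pixel values unchanged and only converts the dtype from uint8 to the float32 that tf.nn.conv2d's kernels require. The array below is a made-up stand-in for a decoded JPEG:

```python
import numpy as np

# A fake 2x2 RGB "image" with uint8 pixels (0-255), standing in for
# the output of tf.image.decode_jpeg.
image_u8 = np.array([[[0, 128, 255]] * 2] * 2, dtype=np.uint8)

# NumPy analogue of tf.cast(image, tf.float32): same values, float dtype.
image_f32 = image_u8.astype(np.float32)

print(image_f32.dtype)      # float32
print(image_f32[0, 0])      # values preserved: 0., 128., 255.
```

In your code the cast must happen before tf.reshape builds input_layer, so that the tensor reaching the conv layer is already float32.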