By ethamine

2018-12-06 15:25:38 8 Comments

I'm trying to train a model from scratch using TensorFlow, Keras and ImageDataGenerator, but it does not go as expected. I use the generator just for loading the images, so no data augmentation is applied. There are two folders with train and test data, and each folder has 36 subfolders filled with images. I get the following output:

Using TensorFlow backend.
Found 13268 images belonging to 36 classes.
Found 3345 images belonging to 36 classes.
Epoch 1/2
1/3 [=========>....................] - ETA: 0s - loss: 15.2706 - acc: 0.0000e+00
3/3 [==============================] - 1s 180ms/step - loss: 14.7610 - acc: 0.0667 - val_loss: 15.6144 - val_acc: 0.0312
Epoch 2/2
1/3 [=========>....................] - ETA: 0s - loss: 14.5063 - acc: 0.1000
3/3 [==============================] - 0s 32ms/step - loss: 15.5808 - acc: 0.0333 - val_loss: 15.6144 - val_acc: 0.0312

Even though the output looks OK, the model apparently does not train at all. I've tried different numbers of epochs and steps and larger datasets - almost nothing changes. Each epoch takes around half a second to train, even with over 60k images! The weird thing is that when I tried saving the generated images to their respective folders (via save_to_dir), only around 500-600 of them get written before it most likely stops.

from tensorflow.python.keras.applications import ResNet50
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, GlobalAveragePooling2D, Conv2D, Dropout
from tensorflow.python.keras.applications.resnet50 import preprocess_input
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
import keras
import os

if __name__ == '__main__':
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

    image_size = 28
    img_rows = 28
    img_cols = 28
    num_classes = 36

    data_generator = ImageDataGenerator()

    train_generator = data_generator.flow_from_directory(
        directory="/final train 1 of 5/",
        save_to_dir="/image generator output/train/",
        target_size=(image_size, image_size),
        color_mode='grayscale',  # match the single-channel input_shape below
        batch_size=10,
        class_mode='categorical')

    validation_generator = data_generator.flow_from_directory(
        directory="/final test 1 of 5/",
        save_to_dir="/image generator output/test/",
        target_size=(image_size, image_size),
        color_mode='grayscale',
        class_mode='categorical')

    model = Sequential()
    model.add(Conv2D(20, kernel_size=(3, 3),
                     input_shape=(img_rows, img_cols, 1)))
    model.add(Conv2D(20, kernel_size=(3, 3), activation='relu'))
    model.add(Flatten())  # flatten the feature maps before the dense classifier
    model.add(Dense(100, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))

    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',  # adam/sgd
                  metrics=['accuracy'])

    model.fit_generator(train_generator,
                        steps_per_epoch=3,
                        epochs=2,
                        validation_data=validation_generator,
                        validation_steps=1)


It seems like something silently fails and cripples the training process.


@ethamine 2018-12-11 11:20:24

As @today suggested, the problem was that the images were not normalized.

Passing rescale=1/255 to ImageDataGenerator solved it.
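For reference, rescale simply multiplies every pixel by the given factor before the batch is yielded. A minimal sketch of the same arithmetic in plain Python (with sample values standing in for real pixel data):

```python
# Raw grayscale pixel values in 0..255, as flow_from_directory yields them
# when no rescaling is configured (sample values stand in for real images).
pixels = [0, 37, 128, 254, 255]

# rescale=1/255 multiplies every pixel by that factor, mapping the inputs
# into [0, 1] so the first layers of the network see well-scaled values.
rescaled = [p * (1 / 255) for p in pixels]

print(rescaled[0], rescaled[-1])  # 0.0 1.0
```

Without this, raw 0-255 inputs push the early activations into a range where training with default learning rates tends to stall, which matches the stuck loss seen above.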

@today 2018-12-06 15:40:37

The problem is that you are misunderstanding the steps_per_epoch argument of fit_generator. Let's take a look at the documentation:

steps_per_epoch: Integer. Total number of steps (batches of samples) to yield from generator before declaring one epoch finished and starting the next epoch. It should typically be equal to the number of samples of your dataset divided by the batch size. Optional for Sequence: if unspecified, will use the len(generator) as a number of steps.

So basically, it determines how many batches are generated in each epoch. Since, by definition, an epoch means going over the whole training data, we must set this argument to the total number of samples divided by the batch size. So in your example it would be steps_per_epoch = 13268 // 10. Of course, as mentioned in the docs, you can leave it unspecified and Keras will infer it automatically.

Further, the same thing applies to the validation_steps argument as well.
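Concretely, with the image counts from the logs above (13268 training and 3345 validation images) and an assumed batch size of 10, the two arguments work out as:

```python
import math

num_train = 13268  # "Found 13268 images belonging to 36 classes."
num_val = 3345     # "Found 3345 images belonging to 36 classes."
batch_size = 10    # assumed batch size, for illustration

# One epoch must cover every sample once, so round up to whole batches.
steps_per_epoch = math.ceil(num_train / batch_size)
validation_steps = math.ceil(num_val / batch_size)

print(steps_per_epoch)   # 1327 batches per training epoch
print(validation_steps)  # 335 validation batches
```

Note that 1327 matches the step count Keras inferred in the follow-up comment below once the hard-coded values were removed.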

@ethamine 2018-12-10 11:47:31

Thank you for the response, but apparently the problem lies elsewhere. I had just copy-pasted those values from a tutorial on Kaggle. After I removed them, it calculated 1327 steps, but the training was still very quick and not fruitful, resulting in: 1327/1327 [==============================] - 26s 20ms/step - loss: 15.6347 - acc: 0.0300 - val_loss: 15.9109 - val_acc: 0.0129. Why could the accuracy be that low? It was around 40% with Apple's CreateML tool on the exact same data (afaik they use transfer learning, though).

@today 2018-12-10 13:40:05

@ethamine I don't know about the specific task and dataset you are working on. One reason might be that you are not normalizing the images. Pass rescale=1/255. as an argument to ImageDataGenerator and see if it helps. Furthermore, depending on the number of images and the complexity of the task, your network might be too small and therefore underfit.
