By Asif Mohammed


2017-01-28 07:58:17 8 Comments

I am trying to train a model that classifies images. The problem I have is that the images come in different sizes. How should I format my images, or adapt my model architecture?

2 Answers

@sunside 2017-01-28 23:31:24

You didn't say what architecture you're talking about. Since you said you want to classify images, I'm assuming it's a partly convolutional, partly fully connected network like AlexNet, GoogLeNet, etc. In general, the answer to your question depends on the network type you are working with.

If, for example, your network only contains convolutional units - that is to say, does not contain fully connected layers - it can be invariant to the input image's size. Such a network could process the input images and in turn return another image ("convolutional all the way"); you would have to make sure that the output matches what you expect, since you have to determine the loss in some way, of course.

If you are using fully connected units though, you're in for trouble: here you have a fixed number of learned weights your network has to work with, so varying inputs would require a varying number of weights - and that's not possible.
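
To make the distinction concrete, here is a minimal numpy sketch (illustrative only, not tied to any framework): a convolution slides the same small weight kernel over inputs of any size, and a global average pool can then collapse the resulting feature map to a fixed-length output - unlike a dense layer, whose weight matrix pins down the input dimension.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution; the same kernel works for any input size."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.random.rand(3, 3)
for shape in [(32, 32), (48, 64)]:
    fmap = conv2d_valid(np.random.rand(*shape), kernel)
    # Global average pooling collapses any spatial size into a single
    # value per feature map, so a classifier head placed after it always
    # sees a fixed-length vector, regardless of the input image size.
    pooled = fmap.mean()
```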

If that is your problem, here are some things you can do:

  • Don't care about squashing the images. The network might learn to make sense of the content anyway; do scale and perspective mean anything to the content in your case?
  • Center-crop the images to a specific size. If you fear losing data, do multiple crops and use them to augment your input data, so that each original image is split into N different images of the correct size.
  • Pad the images with a solid color to a square size, then resize.
  • Do a combination of the above.

The padding option might introduce an additional error source into the network's prediction, as the network might (read: likely will) become biased toward images that contain such a padded border. If you need some ideas, have a look at the Images section of the TensorFlow documentation; there are functions like resize_image_with_crop_or_pad that take care of the heavy lifting.
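
For illustration, here is a rough numpy sketch of the crop-or-pad idea (the `crop_or_pad` name and implementation below are mine, not TensorFlow's): center-crop when the image is too large, zero-pad when it is too small.

```python
import numpy as np

def crop_or_pad(img, target_h, target_w):
    """Center-crop or zero-pad an image to (target_h, target_w)."""
    h, w = img.shape[:2]
    # Crop away the excess, centered, if the image is too large.
    top = max((h - target_h) // 2, 0)
    left = max((w - target_w) // 2, 0)
    img = img[top:top + min(h, target_h), left:left + min(w, target_w)]
    # Pad with zeros (black), centered, if the image is too small.
    h, w = img.shape[:2]
    pt, pl = (target_h - h) // 2, (target_w - w) // 2
    out = np.zeros((target_h, target_w) + img.shape[2:], dtype=img.dtype)
    out[pt:pt + h, pl:pl + w] = img
    return out
```

Note that the zero padding is exactly the "solid border" the network may become biased toward.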

As for simply not caring about squashing, here's a piece of the preprocessing pipeline of the famous Inception network:

# This resizing operation may distort the images because the aspect
# ratio is not respected. We select a resize method in a round robin
# fashion based on the thread number.
# Note that ResizeMethod contains 4 enumerated resizing methods.

# We select only 1 case for fast_mode bilinear.
num_resize_cases = 1 if fast_mode else 4
distorted_image = apply_with_random_selector(
    distorted_image,
    lambda x, method: tf.image.resize_images(x, [height, width], method=method),
    num_cases=num_resize_cases)

They're totally aware of it and do it anyway.

Depending on how far you want or need to go, there is a paper called Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition that handles inputs of arbitrary sizes by processing them in a very special way.

@CMCDragonkai 2018-03-15 04:28:12

This topic becomes far more complicated when you're dealing with object detection and instance segmentation, because anchor box sizes, which are also hyperparameters, need to be adjusted if your dataset has high variance in image sizes.

@HelloGoodbye 2018-04-30 02:19:38

Aspect ratios play a pretty important role for a network that is to distinguish between circles and ellipses.

@sunside 2018-04-30 07:44:11

That is true, but you can provide the original aspect ratio itself as an input to the network and/or make use of learned transformations (e.g. Spatial Transformer Networks). During training, padded batching might work, but it is essentially the same as aspect-correct resizing into a bigger frame.

@sunside 2018-04-30 07:52:04

Another general observation is that batches do not necessarily have to have the same dimensions; the first batch could deal with 4:3 images, the second with 16:9 etc, as long as the dense layers are taken care of.
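
That observation can be sketched as a simple bucketing step (a numpy sketch; `bucket_by_shape` is a hypothetical helper, not a library function): group the dataset by spatial size so each batch is internally uniform, even though batches differ from one another.

```python
from collections import defaultdict

import numpy as np

def bucket_by_shape(images):
    """Group images so that every batch shares one spatial size.
    Different batches may have different sizes, which is fine as long
    as the network itself can handle variable input dimensions."""
    buckets = defaultdict(list)
    for img in images:
        buckets[img.shape[:2]].append(img)
    return [np.stack(batch) for batch in buckets.values()]
```

In practice you would also split large buckets into mini-batches and shuffle the bucket order between epochs.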

@Jonny Vu 2019-01-25 07:50:45

Has anyone trained a classification model using an ROI pooling layer (deepsense.ai/region-of-interest-pooling-explained)?

@aspiring1 2019-12-03 06:11:07

@CMCDragonkai: But don't object detection algorithms such as YOLO already resize the images to (416, 416), i.e. multiples of 32 (in YOLO's case)? And I believe anchor sizes depend on the objects in the images, i.e. the sizes of the bounding boxes; resizing will change them, but during training they remain constant. As per YOLO, the 3 anchor sizes are decided using k-means.

@Tobitor 2020-04-28 20:32:06

What do you mean by "Don't care about squashing the images. A network might learn to make sense of the content anyway"? I have some license plate data where the sizes are very diverse, from 100x20 to 1100x200. I put the images in the center of a square, but I am wondering whether that is necessary. Do I understand your point about squashing correctly - that I can resize all images to, e.g., 256x256 and the distortion will have no effect? In my case, I don't see why the deformation should matter.

@sunside 2020-04-29 17:10:07

@Tobitor, always make the inputs of the network as close to the actual (test, or inference-time) data as you can. If all your images are much wider than they are tall, you should also model your network to process them that way. That said, if you cannot possibly say what your "usage" data will look like, you have to make some sacrifices during training. And in that case, resizing an image from 1000x200 to 256x256 is generally okay (imagine looking at that license plate at a 60-degree angle - it's very roughly square now).

@Tobitor 2020-04-30 09:33:10

Ok, thanks a lot! :-) Another possibility would certainly be to keep the images wider than they are tall, for example 100x20 - or is it in general better to have squares? I think such a change would also be advantageous because the network would have less data to process, allowing faster computation than with images of, say, 1200x600. Furthermore, I think such images would be better processed by the network than images with a huge black frame. What do you think about that?

@sunside 2020-05-25 12:23:55

@Tobitor There is no requirement at all for images to be square; it just happens to be the least bad tradeoff if you don't know the actual image sizes during inference. :^) As for size, the smaller the better, but the images need to be big enough to still capture the finest required details. Generally speaking, keep in mind that if you, as a human expert, cannot possibly determine what's in the image, the network won't be able to either.

@Pranay Mukherjee 2018-03-04 14:27:48

Try adding a spatial pyramid pooling layer after your last convolutional layer, so that the FC layers always receive constant-dimensional vectors as input. During training, train on images from the entire dataset at one particular image size for an epoch, then switch to a different image size for the next epoch and continue training.
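
Here is a minimal numpy sketch of the pooling step (illustrative only; a real SPP layer would max-pool each channel of the conv output inside your framework's graph, and assumes the feature map is at least as large as the finest pyramid level): for levels 1, 2 and 4, the output is always 1 + 4 + 16 = 21 values per channel, no matter the spatial size of the input.

```python
import numpy as np

def spp(feature_map, levels=(1, 2, 4)):
    """Spatial pyramid pooling sketch: max-pool the feature map into an
    n x n grid for each pyramid level, then concatenate. The output
    length depends only on `levels`, never on the input size."""
    h, w = feature_map.shape
    pooled = []
    for n in levels:
        # Bin edges that cover the whole map even when h, w
        # are not divisible by n.
        hs = np.linspace(0, h, n + 1).astype(int)
        ws = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                pooled.append(feature_map[hs[i]:hs[i + 1],
                                          ws[j]:ws[j + 1]].max())
    return np.array(pooled)
```

Because the vector length is fixed, the same FC head works across the per-epoch image-size switches described above.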

@Matthieu 2019-06-16 21:26:20

Could you elaborate a bit on what is "spatial pyramid pooling" compared to regular pooling?

@Asif Mohammed 2019-08-30 04:35:53

Please read Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition in blog.acolyer.org/2017/03/21/convolution-neural-nets-part-2 @Matthieu
