In the function `atrous_spatial_pyramid_pooling` (line 21, deeplab_model.py), there is an `image_level_features` block (lines 54-61):

```
# (b) the image-level features
with tf.variable_scope("image_level_features"):
    # global average pooling
    image_level_features = tf.reduce_mean(inputs, [1, 2], name='global_average_pooling', keepdims=True)
    # 1x1 convolution with 256 filters (and batch normalization)
    image_level_features = layers_lib.conv2d(image_level_features, depth, [1, 1], stride=1, scope='conv_1x1')
    # bilinearly upsample features
    image_level_features = tf.image.resize_bilinear(image_level_features, inputs_size, name='upsample')
```

I think `image_level_features` is the same size as `inputs`, since it is just a `reduce_mean` with `keepdims`. Also, `inputs_size = tf.shape(inputs)[1:3]`.

=> Then they are the same size, so why should one do the `tf.image.resize_bilinear(image_level_features, inputs_size)` at all?
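
To make the question concrete, here is a minimal, self-contained shape check of the two steps in question (not the repo's code): the 1x65x65x2048 feature map is just a made-up example, it assumes TF 1.x like the snippet above, and the 1x1 conv is omitted since it does not change the spatial size.

```
import tensorflow as tf  # assumes TF 1.x, matching the model code above

# Hypothetical NHWC feature map: batch 1, 65x65 spatial, 2048 channels.
inputs = tf.random_uniform([1, 65, 65, 2048])
inputs_size = tf.shape(inputs)[1:3]  # the spatial dimensions [H, W] of `inputs`

# The pooling step from the snippet: mean over the H and W axes, keeping rank 4.
image_level_features = tf.reduce_mean(inputs, [1, 2],
                                      name='global_average_pooling', keepdims=True)

# The upsampling step: bilinearly resize to the spatial size of `inputs`.
upsampled = tf.image.resize_bilinear(image_level_features, inputs_size, name='upsample')

with tf.Session() as sess:
    pooled_val, upsampled_val = sess.run([image_level_features, upsampled])
    print(pooled_val.shape)     # shape after reduce_mean with keepdims=True
    print(upsampled_val.shape)  # shape after resize_bilinear to inputs_size
```

Comparing the two printed shapes should show directly whether the pooled tensor already has the same spatial size as `inputs` or not.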