
Why don't we need to convert BGR to RGB and resize the input image? #1

@kid-pc-chen

Description

Thanks for sharing this tutorial.
However, I still have a few questions; I would appreciate it if you could elaborate a little more.

  1. You mentioned that “OpenCV uses BGR whereas TensorFlow uses RGB” in https://medium.com/greppy/object-detection-using-a-ssd-mobilenet-coco-model-with-opencv-3-3-tensorflow-1-4-in-c-and-xcode-28b3e1d955db, but I couldn’t find any code in your project that converts the input image to RGB (a sketch of the preprocessing I expected is shown after this list).
  2. Could you please tell me how you figured out the input and output tensor names of the COCO graphs? (Below the list I’ve also sketched how I would try to inspect them myself.)
  3. Why don’t we need to resize the input image (e.g. 300x300 for SSD, 299x299 for Faster RCNN) to match the input tensor? I couldn’t find any resizing code in your project either; the same preprocessing sketch below shows the resize I expected.
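
To make questions 1 and 3 concrete, this is roughly the preprocessing I expected to find somewhere before the image is handed to the graph. It is only a sketch of my assumption, using OpenCV's cvtColor and resize; the `frame` argument and the 300x300 size are placeholders based on the SSD-MobileNet export settings, not code taken from your project:

```cpp
#include <opencv2/opencv.hpp>

// Sketch of the preprocessing I expected before feeding the image to the graph.
// Assumes "frame" is a cv::Mat read with cv::imread or grabbed from a
// cv::VideoCapture, which OpenCV stores in BGR channel order.
cv::Mat preprocess(const cv::Mat& frame)
{
    cv::Mat rgb, resized;

    // OpenCV loads images as BGR; I assumed the COCO model expects RGB.
    cv::cvtColor(frame, rgb, cv::COLOR_BGR2RGB);

    // SSD-MobileNet was exported with a fixed 300x300 resizer, so I assumed
    // the image would be resized to that shape before building the tensor.
    cv::resize(rgb, resized, cv::Size(300, 300));

    return resized;
}
```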
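
For question 2, this is how I would have tried to discover the tensor names myself: load the frozen GraphDef with the TensorFlow C++ API and print every node. This is only my guess at an approach (the .pb filename is a placeholder); with the COCO detection exports I believe it would list names like "image_tensor", "detection_boxes", "detection_scores", "detection_classes" and "num_detections", but please correct me if that is not how you found them:

```cpp
#include <iostream>

#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/platform/env.h"

int main()
{
    // Load the frozen graph (placeholder path) into a GraphDef protobuf.
    tensorflow::GraphDef graph_def;
    tensorflow::Status status = tensorflow::ReadBinaryProto(
        tensorflow::Env::Default(), "frozen_inference_graph.pb", &graph_def);
    if (!status.ok()) {
        std::cerr << "Failed to load graph: " << status.ToString() << std::endl;
        return 1;
    }

    // Print every node name and its op; the input/output tensor names
    // should appear among these entries.
    for (const auto& node : graph_def.node()) {
        std::cout << node.name() << " (" << node.op() << ")" << std::endl;
    }
    return 0;
}
```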
