
Overview

The hypothesis that image datasets gathered online "in the wild" can produce biased object recognizers, e.g. ones that prefer professional photography or certain viewing angles, is studied. A new "in the lab" data collection infrastructure is proposed, consisting of a drone that captures images as it circles around objects. Its inexpensive and easily replicable nature may also lead to a scalable data collection effort by the vision community. The procedure's usefulness is demonstrated by creating a dataset of Objects Obtained With fLight (OOWL). Currently, OOWL contains 120,000 images of 500 objects and is the largest "in the lab" image dataset available when both the number of classes and the number of objects per class are considered.

OOWL "In the Lab" [Online Preview]


OOWL "In the Wild"


Demo Video

Publications

CVPR 2019

PIEs: Pose Invariant Embeddings
Chih-Hui Ho, Pedro Morgado, Amir Persekian, Nuno Vasconcelos
Website Paper Supplementary material BibTex Poster Code

Catastrophic Child’s Play: Easy to Perform, Hard to Defend Adversarial Attacks
Chih-Hui Ho*, Brandon Leung*, Erik Sandström, Yen Chang, Nuno Vasconcelos (*Indicates equal contribution)
Website Paper Supplementary Material BibTex Poster Turk Dataset

Summer Research Internship Program (SRIP)

2020

Project

Prior work on single-view 3D reconstruction mostly relies on 3D model supervision, which is impractical for real-world applications. Recently, some works have focused on single-view 3D reconstruction without 3D supervision. Inspired by this direction, we investigate whether noisy 3D CAD models obtained from commercial scanning devices can assist the problem of single-view 3D reconstruction.

We also formed a paper reading group that covers most of the recent work on single-view 3D reconstruction. Please check our paper reading list.

Members

Sean Kamano, Jake Pollard, Edward Yang, Brandon Leung, Chih-Hui Ho

2019

Project

To investigate the problem of 3D reconstruction of real-world objects, a new dataset containing 343 object scans was curated. A turntable along with a camera is used to capture images of an object from multiple views. These views are then imported into Agisoft to construct real-world 3D models. In addition, to further extend the drone data collection framework, we researched various algorithms, including using the OpenCV CSRT tracker to make an Intel drone autonomously circle objects.
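The autonomous circling behavior described above can be sketched as a simple proportional controller that uses a tracker's bounding box to keep the object centered in the frame while the drone orbits. This is only an illustrative sketch under assumptions: the function name, the gain value, and the control law itself are hypothetical, and the actual system runs OpenCV's CSRT tracker on live drone video.

```python
# A minimal sketch of the tracking-based circling logic, assuming a
# proportional control law. All names and gains here are illustrative;
# the real system uses OpenCV's CSRT tracker on the drone's video feed.

def yaw_correction(bbox, frame_width, gain=0.002):
    """Proportional yaw command that keeps the tracked object centered.

    bbox: (x, y, w, h) bounding box, e.g. from an OpenCV tracker update.
    Returns a signed yaw rate: negative turns left, positive turns right.
    """
    x, _, w, _ = bbox
    bbox_center_x = x + w / 2.0
    frame_center_x = frame_width / 2.0
    # The error is the horizontal offset of the object from image center;
    # commanding yaw proportional to it re-centers the object as the
    # drone translates sideways around it.
    error = bbox_center_x - frame_center_x
    return gain * error

# Example: object slightly right of center in a 640-pixel-wide frame.
cmd = yaw_correction((340, 100, 80, 80), 640)
print(round(cmd, 3))  # prints 0.12 (a small rightward yaw)
```

In practice the bounding box would come from `cv2.TrackerCSRT_create()` updated on each frame, with the yaw command sent to the drone's flight controller.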

We also formed a paper reading group that covers most of the recent work on 3D representations, including voxels, point clouds, meshes, and primitives. Please check our paper reading list.

Members

Sean Kamano, Po Hsiang Huang, Dacheng Li, Jayi Wang, Yixuan Huang, Brandon Leung, Chih-Hui Ho, Amir Persekian

2018

Project

The OOWL and OOWL In the Wild datasets were collected; they were later used in Catastrophic Child’s Play: Easy to Perform, Hard to Defend Adversarial Attacks and PIEs: Pose Invariant Embeddings, respectively.

Members

Brandon Leung, Pedro Morgado, Bo Liu, Chih-Hui Ho, Yen Chang, Erik Sandstrom, David Orozco, Amir Persekian

People

Current OOWLers

Brandon Leung, Pedro Morgado, Bo Liu, Chih-Hui Ho, Nuno Vasconcelos

OOWL Alumni

Yen Chang, Erik Sandstrom, David Orozco, Amir Persekian, Sean Kamano, Po Hsiang Huang, Dacheng Li, Jayi Wang, Yixuan Huang

Acknowledgements

This work was partially funded by NSF awards IIS-1546305 and IIS-1637941, a gift from Northrop Grumman, and NVIDIA GPU donations.