Object Extraction Using Probabilistic Maps of Color, Depth, and Near-Infrared Information

Muhammad Attamimi, Kelvin Liusiani, Astria Nur Irfansyah, Hendra Kusuma, Djoko Purwanto

Abstract


Object extraction is an important and challenging task in the computer vision and robotics fields: the goal is to extract an object from a scene using any available cues. The scenario discussed in this study was object extraction that considers the Space of Interest (SOI), i.e., the three-dimensional region where the object probably exists. To accomplish this task, an object extraction method based on probabilistic maps of multiple cues was proposed. Thanks to the Kinect V2 sensor, multiple cues such as color, depth, and near-infrared information can be acquired simultaneously. The SOI was modeled with a simple probabilistic model that considers the geometry of possible objects and the reachability of the system, both derived from depth information. Color and near-infrared information were modeled with Gaussian mixture models (GMMs). All of the models were combined to generate the probabilistic maps used to extract the object from the scene. To validate the proposed object extraction method, several experiments were conducted to investigate the best combination of the cues used in this study.

Keywords: color information, depth information, near-infrared information, object extraction, probabilistic maps.
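
The abstract only summarizes the pipeline; the sketch below illustrates the general idea, assuming scikit-learn's GaussianMixture for the color/near-infrared appearance model, a simple depth-interval prior standing in for the SOI, and a multiplicative rule for combining the cue maps. The function names, thresholds, and combination rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (illustrative assumptions, not the authors' implementation):
# combine a GMM-based color/NIR likelihood map with a depth-based SOI prior
# into one probabilistic map, then threshold it to extract an object mask.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_appearance_gmm(samples, n_components=5):
    """Fit a GMM to per-pixel feature vectors (e.g., RGB + NIR) of the object."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(samples)
    return gmm

def appearance_map(gmm, image_features):
    """Per-pixel likelihood under the appearance model, rescaled to [0, 1]."""
    h, w, c = image_features.shape
    log_lik = gmm.score_samples(image_features.reshape(-1, c)).reshape(h, w)
    lik = np.exp(log_lik - log_lik.max())  # subtract max to avoid overflow
    return lik / (lik.max() + 1e-12)

def soi_map(depth, z_near, z_far):
    """Hypothetical SOI prior: 1 inside the reachable depth interval, 0 outside.
    A smoother falloff at the borders could be used instead."""
    return ((depth >= z_near) & (depth <= z_far)).astype(float)

def extract_object(image_features, depth, gmm, z_near=0.5, z_far=1.5, thresh=0.3):
    """Combine the cue maps multiplicatively and threshold the result."""
    p = appearance_map(gmm, image_features) * soi_map(depth, z_near, z_far)
    return p > thresh
```

In such a sketch, the appearance GMM would be fit on pixel samples drawn from the object of interest, while the depth interval approximates the reachable SOI; the study itself investigates which combination of the color, depth, and near-infrared cues works best.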







DOI: https://doi.org/10.12962/j25796216.v4.i1.106



This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.