Updated on 2024/04/11

WADA Toshikazu
Other name(s)
T. Wada
Name of department
Faculty of Systems Engineering, Intelligent Informatics
Job title
Professor
Concurrent post
Director, College of Engineering and Natural Sciences, Informatics Division (Professor)

Education

  • Tokyo Institute of Technology   Graduate School, Division of Integrated Science and Engineering   Department of Applied Electronics  

  • Tokyo Institute of Technology   Interdisciplinary Science and Engineering   Department of Electronic Systems

  • Okayama University   Faculty of Engineering   Department of Electrical Engineering  

  • Okayama University   Faculty of Engineering   Department of Electrical Engineering

Degree

  • Doctor

Academic & Professional Experience

  • 2002.09

    Wakayama University   Faculty of Systems Engineering

  • 1999
    -
    2000

    Visiting Associate Professor, University of British Columbia, Canada

  • 1998
    -
    2002

    Associate Professor, Graduate School of Informatics, Kyoto University

  • 1997

    Kyoto University   Faculty of Engineering

  • 1995
    -
    1996

    Okayama University   Faculty of Engineering

  • 1994

    Okayama University   The Graduate School of Natural Science and Technology

  • 1990
    -
    1993

    Okayama University   Faculty of Engineering


Association Memberships

  • The Institute of Electronics, Information and Communication Engineers (IEICE)

  • IEEE

  • Information Processing Society of Japan (IPSJ)

Research Areas

  • Informatics / Perceptual information processing

  • Informatics / Human interfaces and interactions

  • Informatics / Intelligent informatics

  • Informatics / Intelligent robotics

  • Informatics / Information theory

  • Informatics / Database science


Classes (including Experimental Classes, Seminars, Graduation Thesis Guidance, Graduation Research, and Topical Research)

  • 2023   Data Structures and Algorithms   Specialized Subjects

  • 2023   Pattern Recognition Seminar   Specialized Subjects

  • 2023   Seminar on Algorithms and Data Structure   Specialized Subjects

  • 2022   Pattern Recognition Advanced   Specialized Subjects

  • 2022   Intelligent Informatics Seminar   Specialized Subjects

  • 2022   Graduation Research   Specialized Subjects

  • 2022   Introduction to Current Systems Engineering B   Specialized Subjects

  • 2022   Computer Systems C   Specialized Subjects

  • 2022   Programming Language Systems   Specialized Subjects

  • 2022   Pattern Recognition Seminar   Specialized Subjects

  • 2022   Data Structures and Algorithms   Specialized Subjects

  • 2022   Introductory Seminar in Systems Engineering   Specialized Subjects

  • 2022   Seminar on Algorithms and Data Structure   Specialized Subjects

  • 2021   Programming Language Systems   Specialized Subjects

  • 2021   Intelligent Informatics Seminar   Specialized Subjects

  • 2021   Computer Systems C   Specialized Subjects

  • 2021   Data Structures and Algorithms   Specialized Subjects

  • 2021   Graduation Research   Specialized Subjects

  • 2021   Graduation Research   Specialized Subjects

  • 2021   Introduction to Current Systems Engineering B   Specialized Subjects

  • 2021   Pattern Recognition Seminar   Specialized Subjects

  • 2021   Seminar on Algorithms and Data Structure   Specialized Subjects

  • 2020   Graduation Research   Specialized Subjects

  • 2020   Graduation Research   Specialized Subjects

  • 2020   Introduction to Current Systems Engineering B   Specialized Subjects

  • 2020   Seminar on Algorithms and Data Structure   Specialized Subjects

  • 2020   Computer Systems C   Specialized Subjects

  • 2020   Programming Language Systems   Specialized Subjects

  • 2020   Data Structures and Algorithms   Specialized Subjects

  • 2020   Intelligent Informatics Seminar   Specialized Subjects

  • 2020   Pattern Recognition Seminar   Specialized Subjects

  • 2019   Introduction to Current Systems EngineeringⅡ   Specialized Subjects

  • 2019   Data Structures and Algorithms   Specialized Subjects

  • 2019   Intelligent Informatics Seminar   Specialized Subjects

  • 2019   Pattern Recognition Seminar   Specialized Subjects

  • 2019   Introduction to Majors 1   Specialized Subjects

  • 2019   Introduction to Majors 1   Specialized Subjects

  • 2019   Introductory Seminar in Systems Engineering   Specialized Subjects

  • 2019   Computer Systems Ⅱ   Specialized Subjects

  • 2019   System Software   Specialized Subjects

  • 2019   Seminar of Algorithm and Data StructureⅠ   Specialized Subjects

  • 2018   Introduction to Majors 1   Specialized Subjects

  • 2018   Graduation Research   Specialized Subjects

  • 2018   Data Structures and Algorithms   Specialized Subjects

  • 2018   Intelligent Informatics Seminar   Specialized Subjects

  • 2018   Pattern Recognition Seminar   Specialized Subjects

  • 2018   Introduction to Majors 1   Specialized Subjects

  • 2018   Computer Systems Ⅱ   Specialized Subjects

  • 2018   System Software   Specialized Subjects

  • 2018   Seminar of Algorithm and Data StructureⅠ   Specialized Subjects

  • 2017   Introduction to Current Systems EngineeringⅡ   Specialized Subjects

  • 2017   Introduction to Majors 1   Specialized Subjects

  • 2017   Introduction to Majors 1   Specialized Subjects

  • 2017   Computer Systems Ⅱ   Specialized Subjects

  • 2017   System Software   Specialized Subjects

  • 2017   Intelligent Informatics Seminar   Specialized Subjects

  • 2017   NA   Specialized Subjects

  • 2017   NA   Specialized Subjects

  • 2017   Graduation Research   Specialized Subjects

  • 2017   Introduction to Current Systems EngineeringⅡ   Specialized Subjects

  • 2017   Computer Systems Ⅱ   Specialized Subjects

  • 2017   Seminar of Algorithm and Data StructureⅠ   Specialized Subjects

  • 2017   Pattern Recognition Seminar   Specialized Subjects

  • 2017   Data Structures and Algorithms   Specialized Subjects

  • 2016   Introduction to Majors 1   Specialized Subjects

  • 2016   Introduction to Majors 1   Specialized Subjects

  • 2016   Graduation Research   Specialized Subjects

  • 2016   Computer and Communication Sciences Seminar   Specialized Subjects

  • 2016   Seminar of Algorithm and Data StructureⅠ   Specialized Subjects

  • 2016   Data Structures and Algorithms   Specialized Subjects

  • 2016   Introductory Seminar in Systems Engineering   Specialized Subjects

  • 2016   Computer Systems Ⅱ   Specialized Subjects

  • 2016   System Software   Specialized Subjects

  • 2015   Graduation Research   Specialized Subjects

  • 2015   Computer Systems Ⅱ   Specialized Subjects

  • 2015   Introduction to Majors 1   Specialized Subjects

  • 2015   Introduction to Majors 1   Specialized Subjects

  • 2015   Data Structures and Programming Techniques   Specialized Subjects

  • 2015   Seminar of Algorithm and Data StructureⅠ   Specialized Subjects

  • 2015   Graduation Research   Specialized Subjects

  • 2015   Computer and Communication Sciences Seminar   Specialized Subjects

  • 2015   System Software   Specialized Subjects

  • 2014   Graduation Research   Specialized Subjects

  • 2014   Graduation Research   Specialized Subjects

  • 2014   Computer Systems Ⅱ   Specialized Subjects

  • 2014   System Software   Specialized Subjects

  • 2014   Seminar of Algorithm and Data StructureⅠ   Specialized Subjects

  • 2014   Data Structures and Programming Techniques   Specialized Subjects

  • 2014   Computer and Communication Sciences Seminar   Specialized Subjects

  • 2014   Introduction to Computer and Communication Sciences   Specialized Subjects

  • 2014   Introduction to information science   Liberal Arts and Sciences Subjects

  • 2014   Introductory Seminar   Liberal Arts and Sciences Subjects

  • 2013   System Software   Specialized Subjects

  • 2013   Graduation Research   Specialized Subjects

  • 2013   Computer Systems Ⅱ   Specialized Subjects

  • 2013   System Software   Specialized Subjects

  • 2013   Seminar of Algorithm and Data StructureⅠ   Specialized Subjects

  • 2013   Data Structures and Programming Techniques   Specialized Subjects

  • 2013   Computer and Communication Sciences Seminar   Specialized Subjects

  • 2013   Introduction to Computer and Communication Sciences   Specialized Subjects

  • 2013   Introduction to information science   Liberal Arts and Sciences Subjects

  • 2012   Graduation Research   Specialized Subjects

  • 2012   Introduction to Computer and Communication Sciences   Specialized Subjects

  • 2012   Computer and Communication Sciences Experiment   Specialized Subjects

  • 2012   Introductory Seminar   Liberal Arts and Sciences Subjects

  • 2012   Data Structures and Programming Techniques   Specialized Subjects

  • 2012   Seminar of Algorithm and Data StructureⅠ   Specialized Subjects

  • 2012   Computer and Communication Sciences Experiment   Specialized Subjects

  • 2012   Computer and Communication Sciences Seminar   Specialized Subjects

  • 2012   Introduction to information science   Liberal Arts and Sciences Subjects

  • 2012   System Software   Specialized Subjects

  • 2011   Graduation Research   Specialized Subjects

  • 2011   Graduation Research   Specialized Subjects

  • 2011   Graduation Research   Specialized Subjects

  • 2011   Computer and Communication Sciences Experiment   Specialized Subjects

  • 2011   Seminar of Algorithm and Data StructureⅠ   Specialized Subjects

  • 2011   Computer and Communication Sciences Seminar   Specialized Subjects

  • 2011   Introductory Seminar   Liberal Arts and Sciences Subjects

  • 2011   Modern Information Technology   Specialized Subjects

  • 2011   System Software   Specialized Subjects

  • 2011   Data Structures and Programming Techniques   Specialized Subjects

  • 2011   Introduction to Computer and Communication Sciences   Specialized Subjects

  • 2011   Introduction to information science   Liberal Arts and Sciences Subjects

  • 2010   NA   Specialized Subjects

  • 2010   Graduation Research   Specialized Subjects

  • 2010   Introduction to Computer and Communication Sciences   Specialized Subjects

  • 2010   System Software   Specialized Subjects

  • 2010   Data Structures and Programming Techniques   Specialized Subjects

  • 2010   Computer and Communication Sciences Experiment   Specialized Subjects

  • 2010   Computer and Communication Sciences Seminar   Specialized Subjects

  • 2010   Introduction to information science   Liberal Arts and Sciences Subjects

  • 2010   Introductory Seminar   Specialized Subjects

  • 2009   Computer and Communication Sciences Seminar   Specialized Subjects

  • 2009   Computer and Communication Sciences Experiment   Specialized Subjects

  • 2009   NA   Specialized Subjects

  • 2009   Data Structures and Programming Techniques   Specialized Subjects

  • 2009   System Software   Specialized Subjects

  • 2009   Introduction to Computer and Communication Sciences   Specialized Subjects

  • 2009   Graduation Research   Specialized Subjects

  • 2008   NA   Specialized Subjects

  • 2008   Computer and Communication Sciences Seminar   Specialized Subjects

  • 2008   Computer and Communication Sciences Experiment   Specialized Subjects

  • 2008   Data Structures and Programming Techniques   Specialized Subjects

  • 2008   System Software   Specialized Subjects

  • 2008   Introduction to Computer and Communication Sciences   Specialized Subjects

  • 2008   Graduation Research   Specialized Subjects

  • 2007   NA   Specialized Subjects

  • 2007   Computer and Communication Sciences Seminar   Specialized Subjects

  • 2007   Computer and Communication Sciences Experiment   Specialized Subjects

  • 2007   Data Structures and Programming Techniques   Specialized Subjects

  • 2007   System Software   Specialized Subjects

  • 2007   Introduction to Computer and Communication Sciences   Specialized Subjects

  • 2007   Graduation Research   Specialized Subjects


Independent study

  • 2007   Improving C Programming Skills: Mastering the Basics

  • 2007   Improving C Programming Skills: Reviewing the Basics and Raising Proficiency

  • 2007   Improving C Programming Skills: Writing Appropriate Conditional Branches

  • 2007   Improving C Programming Skills: Reviewing C and Raising Proficiency through Exercises

Classes

  • 2022   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2022   Systems Engineering Global Seminar Ⅰ   Doctoral Course

  • 2022   Systems Engineering Advanced Research   Doctoral Course

  • 2022   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2022   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2022   Systems Engineering Project SeminarⅡB   Master's Course

  • 2022   Systems Engineering Project SeminarⅡA   Master's Course

  • 2022   Systems Engineering Project SeminarⅠB   Master's Course

  • 2022   Systems Engineering Project SeminarⅠA   Master's Course

  • 2022   Pattern Recognition Advanced   Master's Course

  • 2022   Systems Engineering SeminarⅡB   Master's Course

  • 2022   Systems Engineering SeminarⅡA   Master's Course

  • 2022   Systems Engineering SeminarⅠB   Master's Course

  • 2022   Systems Engineering SeminarⅠA   Master's Course

  • 2021   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2021   Systems Engineering Global Seminar Ⅰ   Doctoral Course

  • 2021   Systems Engineering Advanced Research   Doctoral Course

  • 2021   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2021   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2021   Systems Engineering Project SeminarⅡB   Master's Course

  • 2021   Systems Engineering Project SeminarⅡA   Master's Course

  • 2021   Systems Engineering Project SeminarⅠB   Master's Course

  • 2021   Systems Engineering Project SeminarⅠA   Master's Course

  • 2021   Pattern Recognition Advanced   Master's Course

  • 2021   Systems Engineering SeminarⅡB   Master's Course

  • 2021   Systems Engineering SeminarⅡA   Master's Course

  • 2021   Systems Engineering SeminarⅠB   Master's Course

  • 2021   Systems Engineering SeminarⅠA   Master's Course

  • 2020   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2020   Systems Engineering Global Seminar Ⅰ   Doctoral Course

  • 2020   Systems Engineering Advanced Research   Doctoral Course

  • 2020   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2020   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2020   Systems Engineering Project SeminarⅡB   Master's Course

  • 2020   Systems Engineering Project SeminarⅡA   Master's Course

  • 2020   Systems Engineering Project SeminarⅠB   Master's Course

  • 2020   Systems Engineering Project SeminarⅠA   Master's Course

  • 2020   Pattern Recognition Advanced   Master's Course

  • 2020   Systems Engineering SeminarⅡB   Master's Course

  • 2020   Systems Engineering SeminarⅡA   Master's Course

  • 2020   Systems Engineering SeminarⅠB   Master's Course

  • 2020   Systems Engineering SeminarⅠA   Master's Course

  • 2019   Pattern Recognition Advanced   Master's Course

  • 2019   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2019   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2019   Systems Engineering Advanced Research   Doctoral Course

  • 2019   Systems Engineering Advanced Research   Doctoral Course

  • 2019   Systems Engineering SeminarⅡB   Master's Course

  • 2019   Systems Engineering SeminarⅡA   Master's Course

  • 2019   Systems Engineering SeminarⅠB   Master's Course

  • 2019   Systems Engineering SeminarⅠA   Master's Course

  • 2019   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2019   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2019   Systems Engineering Project SeminarⅡB   Master's Course

  • 2019   Systems Engineering Project SeminarⅡA   Master's Course

  • 2019   Systems Engineering Project SeminarⅠB   Master's Course

  • 2019   Systems Engineering Project SeminarⅠA   Master's Course

  • 2018   Systems Engineering Global Seminar Ⅰ   Doctoral Course

  • 2018   Systems Engineering Advanced Research   Doctoral Course

  • 2018   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2018   Systems Engineering Project SeminarⅡB   Master's Course

  • 2018   Systems Engineering Project SeminarⅡA   Master's Course

  • 2018   Systems Engineering Project SeminarⅠB   Master's Course

  • 2018   Systems Engineering Project SeminarⅠA   Master's Course

  • 2018   Systems Engineering SeminarⅡB   Master's Course

  • 2018   Systems Engineering SeminarⅡA   Master's Course

  • 2018   Systems Engineering SeminarⅠB   Master's Course

  • 2018   Systems Engineering SeminarⅠA   Master's Course

  • 2018   Pattern Recognition Advanced   Master's Course

  • 2017   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2017   Systems Engineering Global Seminar Ⅰ   Doctoral Course

  • 2017   Systems Engineering Advanced Research   Doctoral Course

  • 2017   Systems Engineering Advanced Research   Doctoral Course

  • 2017   Systems Engineering Project SeminarⅡB   Master's Course

  • 2017   Systems Engineering Project SeminarⅡA   Master's Course

  • 2017   Systems Engineering Project SeminarⅠB   Master's Course

  • 2017   Systems Engineering Project SeminarⅠA   Master's Course

  • 2017   Pattern Recognition Advanced   Master's Course

  • 2017   Systems Engineering SeminarⅡB   Master's Course

  • 2017   Systems Engineering SeminarⅡA   Master's Course

  • 2017   Systems Engineering SeminarⅠB   Master's Course

  • 2017   Systems Engineering SeminarⅠA   Master's Course

  • 2016   Information and Communications Technology for Civilized Society   Master's Course

  • 2016   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2016   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2016   Systems Engineering Advanced Research   Doctoral Course

  • 2016   Systems Engineering Advanced Research   Doctoral Course

  • 2016   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2016   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2016   Systems Engineering Project SeminarⅡB   Master's Course

  • 2016   Systems Engineering Project SeminarⅡA   Master's Course

  • 2016   Systems Engineering Project SeminarⅠB   Master's Course

  • 2016   Systems Engineering Project SeminarⅠA   Master's Course

  • 2016   Pattern Recognition Advanced   Master's Course

  • 2016   Systems Engineering SeminarⅡB   Master's Course

  • 2016   Systems Engineering SeminarⅡA   Master's Course

  • 2016   Systems Engineering SeminarⅠB   Master's Course

  • 2016   Systems Engineering SeminarⅠA   Master's Course

  • 2015   Systems Engineering Advanced Seminar Ⅱ  

  • 2015   Systems Engineering Advanced Seminar Ⅰ  

  • 2015   Systems Engineering Advanced Research  

  • 2015   Systems Engineering SeminarⅡA  

  • 2015   Systems Engineering SeminarⅠA  

  • 2015   Systems Engineering Project SeminarⅡA  

  • 2015   Systems Engineering Project SeminarⅠA  

  • 2015   Systems Engineering Global Seminar Ⅰ  

  • 2015   Pattern Recognition Advanced  

  • 2015   Systems Engineering Advanced Seminar Ⅱ  

  • 2015   Systems Engineering Advanced Seminar Ⅰ  

  • 2015   Systems Engineering Advanced Research  

  • 2015   Systems Engineering SeminarⅡB  

  • 2015   Systems Engineering SeminarⅠB  

  • 2015   Systems Engineering Project SeminarⅡB  

  • 2015   Systems Engineering Project SeminarⅠB  

  • 2015   Systems Engineering Global Seminar Ⅰ  

  • 2014   Systems Engineering Advanced Research  

  • 2014   Systems Engineering Advanced Research  

  • 2014   Systems Engineering Advanced Seminar Ⅱ  

  • 2014   Systems Engineering Advanced Seminar Ⅱ  

  • 2014   Systems Engineering Advanced Seminar Ⅰ  

  • 2014   Systems Engineering Advanced Seminar Ⅰ  

  • 2014   Systems Engineering Project SeminarⅡB  

  • 2014   Systems Engineering Project SeminarⅡA  

  • 2014   Systems Engineering Project SeminarⅠB  

  • 2014   Systems Engineering Project SeminarⅠA  

  • 2014   Pattern Recognition Advanced  

  • 2014   Systems Engineering SeminarⅡB  

  • 2014   Systems Engineering SeminarⅡA  

  • 2014   Systems Engineering SeminarⅠB  

  • 2014   Systems Engineering SeminarⅠA  

  • 2013   Systems Engineering Advanced Research  

  • 2013   Systems Engineering Advanced Research  

  • 2013   Systems Engineering Advanced Seminar Ⅱ  

  • 2013   Systems Engineering Advanced Seminar Ⅱ  

  • 2013   Systems Engineering Advanced Seminar Ⅰ  

  • 2013   Systems Engineering Advanced Seminar Ⅰ  

  • 2013   Systems Engineering Project SeminarⅡB  

  • 2013   Systems Engineering Project SeminarⅡA  

  • 2013   Systems Engineering Project SeminarⅠB  

  • 2013   Systems Engineering Project SeminarⅠA  

  • 2013   Pattern Recognition Advanced  

  • 2013   Systems Engineering SeminarⅡB  

  • 2013   Systems Engineering SeminarⅡA  

  • 2013   Systems Engineering SeminarⅠB  

  • 2013   Systems Engineering SeminarⅠA  

  • 2012   Systems Engineering Advanced Seminar Ⅱ  

  • 2012   Systems Engineering Advanced Seminar Ⅰ  

  • 2012   Systems Engineering Advanced Research  

  • 2012   Systems Engineering SeminarⅡA  

  • 2012   Systems Engineering SeminarⅠA  

  • 2012   Systems Engineering Project SeminarⅡA  

  • 2012   Systems Engineering Project SeminarⅠA  

  • 2012   Pattern Recognition Advanced  

  • 2012   Systems Engineering Advanced Seminar Ⅱ  

  • 2012   Systems Engineering Advanced Seminar Ⅰ  

  • 2012   Systems Engineering Advanced Research  

  • 2012   Systems Engineering SeminarⅡB  

  • 2012   Systems Engineering SeminarⅠB  

  • 2012   Systems Engineering Project SeminarⅡB  

  • 2012   Systems Engineering Project SeminarⅠB  

  • 2011   Kii Peninsula Study Ⅰ  

  • 2011   Systems Engineering Project SeminarⅡB  

  • 2011   Systems Engineering Project SeminarⅡA  

  • 2011   Systems Engineering Project SeminarⅠB  

  • 2011   Systems Engineering Project SeminarⅠA  

  • 2011   Systems Engineering Advanced Research  

  • 2011   Systems Engineering Advanced Research  

  • 2011   NA  

  • 2011   NA  

  • 2011   Systems Engineering Advanced Seminar Ⅱ  

  • 2011   Systems Engineering Advanced Seminar Ⅱ  

  • 2011   Systems Engineering Advanced Seminar Ⅰ  

  • 2011   Systems Engineering Advanced Seminar Ⅰ  

  • 2011   Pattern Recognition Advanced  

  • 2010   NA   Master's Course

  • 2010   NA   Master's Course

  • 2010   NA   Master's Course

  • 2010   NA   Master's Course

  • 2010   Pattern Recognition Advanced   Master's Course

  • 2010   Systems Engineering Advanced Research   Doctoral Course

  • 2010   NA   Doctoral Course

  • 2010   Systems Engineering Advanced Research   Doctoral Course

  • 2010   Introduction to information science   Master's Course

  • 2009   Pattern Recognition Advanced   Master's Course

  • 2009   NA   Master's Course

  • 2009   NA   Master's Course

  • 2009   NA   Master's Course

  • 2009   NA   Master's Course

  • 2008   Pattern Recognition Advanced   Master's Course

  • 2008   NA   Master's Course

  • 2008   NA   Master's Course

  • 2008   NA   Master's Course

  • 2008   NA   Master's Course

  • 2007   Pattern Recognition Advanced   Master's Course

  • 2007   NA   Master's Course

  • 2007   NA   Master's Course

  • 2007   NA   Master's Course

  • 2007   NA   Master's Course


Satellite Courses

  • 2016   Information and Communications Technology for Civilized Society  

  • 2011   Kii Peninsula Study Ⅰ  

  • 2010   NA  

Research Interests

  • Data Mining

  • Computer Vision, Pattern Recognition, Image Understanding, Data Mining

  • Pattern Recognition

  • Intelligent Robots

  • Computer Vision

  • Image Understanding


Published Papers

  • A Study of Exploiting Hard Examples in Deep Learning

    HIGASHI Ryota, WADA Toshikazu

    The IEICE Transactions on Information and Systems, Part D (Japanese edition) ( The Institute of Electronics, Information and Communication Engineers )  J106-D ( 1 ) 68 - 69   2023.01  [Refereed]

     View Summary

    In discriminative models, Hard Examples (HEs), i.e., samples close to the decision boundary, are important. In DNNs, however, training on HEs alone can fail: the feature space cannot be reconstructed, and the HEs move away from the decision boundary.

    DOI

  • Pruning Ratio Optimization with Layer-Wise Pruning Method for Accelerating Convolutional Neural Networks.

    Koji Kamma, Sarimu Inoue, Toshikazu Wada

    IEICE Transactions on Information & Systems   105-D ( 1 ) 161 - 169   2022  [Refereed]

    DOI

  • REAP: A Method for Pruning Convolutional Neural Networks with Performance Preservation

    Koji Kamma, Toshikazu Wada (Part: Last author )

    IEICE Transactions on Information and Systems   E104.D ( 1 ) 194 - 202   2021.01  [Refereed]

    DOI

  • Neural Behavior-Based Approach for Neural Network Pruning

    Koji KAMMA, Yuki ISODA, Sarimu INOUE, Toshikazu WADA (Part: Last author )

    IEICE Transactions on Information and Systems   E103.D ( 5 ) 1135 - 1143   2020.05  [Refereed]

    DOI

  • Behavior-Based Compression for Convolutional Neural Networks.

    Koji Kamma, Yuki Isoda, Sarimu Inoue, Toshikazu Wada

    Image Analysis and Recognition - 16th International Conference ( Springer )    427 - 439   2019  [Refereed]

    DOI

  • Reconstruction Error Aware Pruning for Accelerating Neural Networks.

    Koji Kamma, Toshikazu Wada

    Advances in Visual Computing - 14th International Symposium on Visual Computing ( Springer )    59 - 72   2019  [Refereed]

    DOI

  • Local trilateral upsampling for thermal image.

    Aleksandar S. Cvetkovic, Toshikazu Wada

    2017 IEEE International Conference on Acoustics, Speech and Signal Processing(ICASSP) ( IEEE )    1577 - 1581   2017  [Refereed]

    DOI

  • Semi-Supervised Feature Transformation for Tissue Image Classification

    Kenji Watanabe, Takumi Kobayashi, Toshikazu Wada

    PLOS ONE ( PUBLIC LIBRARY SCIENCE )  11 ( 12 )   2016.12  [Refereed]

     View Summary

    Various systems have been proposed to support biological image analysis, with the intent of decreasing false annotations and reducing the heavy burden on biologists. These systems generally comprise a feature extraction method and a classification method. Task-oriented feature extraction methods leverage characteristic images for each problem and are very effective at improving classification accuracy. However, such methods are difficult to apply to versatile tasks in practice, because few biologists specialize in Computer Vision and/or Pattern Recognition and can design task-oriented methods. Thus, to improve the usability of these supporting systems, it would be useful to develop a method that automatically transforms general-purpose image features into a form effective for the task of interest. In this paper, we propose a semi-supervised feature transformation method, formulated as a natural coupling of principal component analysis (PCA) and linear discriminant analysis (LDA) in the framework of graph embedding. Compared with other feature transformation methods, our method showed favorable classification performance in biological image analysis.

    DOI
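The PCA-LDA coupling described in the summary above can be illustrated with a small sketch. This is a hypothetical toy formulation (plain scatter matrices combined in a generalized eigenproblem), not the paper's graph-embedding derivation; the function name and the mixing weight `alpha` are assumptions for illustration only.

```python
import numpy as np

def semi_supervised_transform(X_lab, y_lab, X_unlab, n_components=2, alpha=0.5):
    """Toy coupling of LDA (labeled data) and PCA (all data) via an eigenproblem."""
    X_all = np.vstack([X_lab, X_unlab])
    Xc = X_all - X_all.mean(axis=0)
    cov_all = Xc.T @ Xc / len(X_all)             # PCA scatter over all samples

    mean = X_lab.mean(axis=0)
    d = X_lab.shape[1]
    Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
    for c in np.unique(y_lab):
        Xc_ = X_lab[y_lab == c]
        mc = Xc_.mean(axis=0)
        Sw += (Xc_ - mc).T @ (Xc_ - mc)          # within-class scatter (labeled)
        diff = (mc - mean)[:, None]
        Sb += len(Xc_) * (diff @ diff.T)         # between-class scatter (labeled)

    # Maximize (between-class + unsupervised variance) against within-class scatter.
    A = alpha * Sb + (1 - alpha) * cov_all
    B = Sw + 1e-6 * np.eye(d)                    # regularize for invertibility
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:n_components]].real  # project with X @ W
```

Unlabeled samples only contribute to the PCA term, which is what lets the transformation exploit data without annotations.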

  • Anomaly Monitoring of Time Series Data based on Gaussian Process Regression

    WADA Toshikazu, OZAKI Shinsaku, MAEDA Shunji, SHIBUYA Hisae

    The IEICE transactions on information and systems (Japanese edition) ( The Institute of Electronics, Information and Communication Engineers )  96 ( 12 ) 3068 - 3078   2013.12  [Refereed]

     View Summary

    This paper proposes a monitoring method for anomaly-sign detection based on Gaussian Process Regression (GPR). Anomaly-sign detection is required for health monitoring of plants, living bodies, vehicles, and so on, and has a wide range of applications. It is inherently an online problem: anomaly signs must be detected from accumulated examples while normal examples keep being accumulated. Example-based detection can be realized by estimating sensor values with GPR, a nonlinear regression method, and examining the deviation between the estimated and measured values. However, GPR requires inverting the Gram matrix composed of kernel-function values over pairs of accumulated examples, which costs on the order of the cube of the number of examples. GPR using all examples accumulated online therefore slows down rapidly over time and cannot be applied to problems requiring real-time performance. To solve this problem, we first propose a method that dynamically narrows down the example set used for regression (the Active Set) according to the input, accelerating the regression while preserving accuracy. Next, to monitor anomalies ranging from instantaneous ones to long-term trends, we propose the Spectro Anomaly Gram, which represents anomaly scores at various temporal resolutions, enabling monitoring from instantaneous to global anomalies.
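The deviation-based monitoring idea above can be sketched with scikit-learn's GPR; this is a minimal illustration only, assuming a synthetic sine signal, and it does not reproduce the paper's Active-Set acceleration or the Spectro Anomaly Gram. The 3-sigma threshold is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t_train = np.linspace(0, 10, 100)[:, None]
y_train = np.sin(t_train).ravel() + 0.05 * rng.standard_normal(100)  # normal history

# Fit GPR on accumulated normal examples.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(t_train, y_train)

# Score a new measurement by its standardized deviation from the GPR estimate.
t_new = np.array([[10.5]])
y_meas = 2.0                                    # a suspicious new reading
y_pred, y_std = gpr.predict(t_new, return_std=True)
score = abs(y_meas - y_pred[0]) / y_std[0]      # deviation in predictive sigmas
print("anomaly sign" if score > 3.0 else "normal")
```

The cubic cost mentioned in the summary comes from the Gram-matrix inversion inside `fit`, which is why restricting regression to an input-dependent Active Set matters for online use.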

  • Recognition and Detection: An Image Dissimilarity Measure Robust against Degradation

    大池 洋史, 岡 藍子, 和田 俊和

    Image Laboratory ( Japan Industrial Publishing )  23 ( 7 ) 29 - 36   2012.07

  • Classification Based Character Segmentation Using Local Features

    OTA Takahiro, WADA Toshikazu

    The IEICE transactions on information and systems (Japanese edition) ( The Institute of Electronics, Information and Communication Engineers )  95 ( 4 ) 1004 - 1013   2012.04  [Refereed]

     View Summary

    This paper proposes a fast and accurate character segmentation method suited to OCR (Optical Character Reader) systems for print inspection of industrial ink jet printers (IIJP). We previously proposed CCS (Classification based Character Segmentation), which builds a fast OCR by imitating the behavior of an arbitrary classifier with a linear regression tree and performs character recognition and segmentation in a top-down manner. However, CCS assumes that printed characters are arranged horizontally at equal intervals and cannot handle variations in the size and two-dimensional position of individual characters. To solve this problem, we propose MCCS (Modified CCS), which first extracts character region candidates bottom-up using local features and then performs a combinatorial optimization that maximizes the sum of recognition scores over these regions. As local features, we use Fast-Hessian-Affine regions, developed in this work. Experiments first show, by comparing various local features, that Fast-Hessian-Affine regions are well suited to character segmentation, and then demonstrate the effectiveness of the proposed method on prints with diverse two-dimensional variations.

  • A Study on Image Degradation Robust Dissimilarity Measures

    OKA Aiko, WADA Toshikazu

    The IEICE transactions on information and systems ( The Institute of Electronics, Information and Communication Engineers )  94 ( 8 ) 1269 - 1278   2011.08  [Refereed]

     View Summary

    This paper proposes a dissimilarity measure between images suited to retrieval and matching of degraded images. In the observation process, images are often affected by degradations such as shading changes, occlusion, blur, and resolution reduction. To realize retrieval and matching insensitive to such degradations, previous work has focused on features that are insensitive to degradation. However, no feature is invariant to all degradations, and features alone cannot mitigate or remove the effects of every degradation. Instead of changing the features, this work makes the measure itself insensitive to degradation. We model degradation as dropout or contamination of terms in an orthogonal expansion of the image. Under this model, a degraded image is close to a compared image if the expansion coefficients of the non-dropped parts match. Based on this idea, we first adjust the intensity values of the degraded image so that, for an arbitrary orthogonal expansion, the number of matching expansion coefficients between the two images becomes as large as possible. Next, a measure that gives smaller values to vector pairs with more matching elements is applied to the pair of expansion-coefficient vectors of the intensity-adjusted images. Experiments using the resulting dissimilarity show that matching accuracy improves substantially over conventional methods.
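The term-dropout degradation model above can be illustrated with a toy sketch using a 2-D DCT as the orthogonal expansion: dissimilarity is scored by the fraction of non-matching coefficients, so dropped terms hurt less than a plain L2 distance would. The paper's intensity-adjustment step is omitted and the function name is illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def robust_dissimilarity(img_a, img_b, tol=1e-6):
    # Expand both images in an orthogonal basis (2-D DCT) and count
    # coefficients that agree; smaller value when more terms match.
    ca, cb = dctn(img_a, norm="ortho"), dctn(img_b, norm="ortho")
    matched = np.abs(ca - cb) < tol
    return 1.0 - matched.mean()

rng = np.random.default_rng(0)
img = rng.random((8, 8))
coeffs = dctn(img, norm="ortho")
coeffs[4:, :] = 0.0                         # drop half of the expansion terms
degraded = idctn(coeffs, norm="ortho")      # degraded version of img
other = rng.random((8, 8))                  # unrelated image

# The degraded image stays closer to img than an unrelated image does.
print(robust_dissimilarity(img, degraded) < robust_dissimilarity(img, other))  # → True
```

Because the surviving coefficients of the degraded image still match exactly, the measure stays small under dropout while an unrelated image scores near the maximum.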

  • Revisiting Similarity and Dissimilarity Measures : For Weathering the Info-plosion Era

    WADA Toshikazu, OKA Aiko, ASAMI Tetsuya

    The Journal of the Institute of Electronics, Information and Communication Engineers ( The Institute of Electronics, Information and Communication Engineers )  94 ( 8 ) 689 - 694   2011.08

     View Summary

    Although multimedia information retrieval has been studied extensively, recent progress in image local features that are invariant to various changes has advanced query-by-image retrieval of similar images. This article focuses not on features but on how to define the similarity measure between them, as a foundational technology for similar-image retrieval in the info-plosion era. In particular, through a retrieval task using degraded images as keys, we introduce and discuss recent studies on similarity measures that yield far better retrieval results than commonly used measures such as normalized cross-correlation.

  • Classifier Acceleration by Imitation

    OTA Takahiro, WADA Toshikazu, NAKAMURA Takayuki

    The IEICE transactions on information and systems ( The Institute of Electronics, Information and Communication Engineers )  94 ( 7 ) 1071 - 1078   2011.07

     View Summary

    In general, more accurate classifiers are slower. This paper proposes "Classifier Molding," a method that accelerates an arbitrary classifier by imitating it with a linear regression tree. Molding requires an accurate classifier and a large amount of training data. We applied it to the Compound Similarity Method (CSM), which is accurate but slow, for character recognition of prints produced by industrial ink jet printers (IIJP). First, training data for CSM were generated automatically by perturbing reference patterns to mimic the printing characteristics of IIJPs; next, a large number of input-output pairs of the trained CSM were collected, and a linear regression tree was built from these pairs. Because the regression tree involves only linear computation and binary search, it is extremely fast and can be regarded as a fast classifier that imitates the behavior of CSM. Exploiting this speed, we also developed CCS (Classification based Character Segmentation), which adopts a combinatorial optimization framework to search the image efficiently and extracts, as character positions, the point sequence that maximizes the total similarity. Experiments show that CSM was accelerated 1500 times without losing accuracy, and that all segmentation errors of the conventional connected-component-based method were avoided by CCS, demonstrating the effectiveness of the proposed method.
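The imitation step above (train a slow classifier, collect its input-output pairs, then fit a fast tree that mimics it) can be sketched as follows. This is only an analogy: the paper imitates CSM with a linear regression tree, whereas here a plain decision tree imitates an RBF-kernel SVM on synthetic data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # ring-shaped classes

slow = SVC(kernel="rbf").fit(X, y)                    # accurate but slow classifier

# Collect a large set of input-output pairs from the slow classifier,
# then fit a fast tree on those pairs so it imitates the slow one.
X_probe = rng.standard_normal((5000, 2))
fast = DecisionTreeClassifier(max_depth=8).fit(X_probe, slow.predict(X_probe))

agree = (fast.predict(X_probe) == slow.predict(X_probe)).mean()
```

Prediction with the tree needs only a short sequence of comparisons per sample, which is the source of the speedup the paper reports.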

  • Classification based character segmentation guided by Fast-Hessian-Affine regions.

    Takahiro Ota, Toshikazu Wada

    First Asian Conference on Pattern Recognition(ACPR) ( IEEE )    249 - 253   2011  [Refereed]

    DOI

  • Object Tracking with Extended K-means Tracker

    Yiqiang Qi, Haiyuan Wu, Toshikazu Wada, Qian Chen

    Second China-Japan-Korea Joint Workshop on Pattern Recognition (CJKPR 2010)   1   216 - 220   2010.09  [Refereed]

  • Visual Object Tracking Using Positive and Negative Examples

    Toshikazu Wada

    ROBOTICS RESEARCH ( SPRINGER-VERLAG BERLIN )  66   189 - 199   2010  [Refereed]

     View Summary

    This paper presents a robust, real-time object tracking method for image sequences. Existing visual object detection and tracking algorithms usually use apparent features of the target object as a model and detect objects by thresholding a similarity or dissimilarity measure against that model; they sometimes fail because of an inadequate threshold. In this paper, we propose a detection and tracking method using positive (object) and negative (non-object) models. This method requires no threshold, because the object region can be detected simply by comparing the similarities or dissimilarities with the positive and negative models. If we employ distance as the dissimilarity measure, the problem can be formalized as a nearest neighbor (NN) classification problem. We first show an example of NN-classification-based color object detection, which detects multiple objects in real time. Next, we extend the method to tracking. For this extension, we have to measure the distinctiveness of the object, because the object detected by NN classification sometimes disappears. By finding a high-distinctiveness image region around the tracked region in the previous image frame, we can keep tracking even when the object cannot be detected by NN classification. We implemented this method on an active stereo camera system and confirmed that video-rate detection and tracking, as well as 3D position measurement, can be realized using only a standard PC.
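    The threshold-free decision rule described above can be sketched in a few lines: a pixel is classified by whichever model (positive or negative) contains its nearest prototype. The color prototypes below are made-up illustrative values, not data from the paper.

```python
import numpy as np

# Hypothetical positive (object) and negative (background) color models,
# e.g. RGB prototypes collected beforehand.
positive = np.array([[200.0, 40.0, 40.0], [180.0, 60.0, 50.0]])   # reddish object
negative = np.array([[30.0, 30.0, 30.0], [220.0, 220.0, 220.0]])  # dark/bright bg

def is_object(pixel):
    """Threshold-free decision: a pixel belongs to the object iff its
    nearest model point is positive, i.e. 1-NN classification in color space."""
    d_pos = np.min(np.linalg.norm(positive - pixel, axis=1))
    d_neg = np.min(np.linalg.norm(negative - pixel, axis=1))
    return d_pos < d_neg

print(is_object(np.array([190.0, 50.0, 45.0])))  # near the object model
print(is_object(np.array([35.0, 25.0, 30.0])))   # near the background model
```

    The decision boundary is implied by the two point sets rather than by a hand-tuned threshold, which is the property the abstract emphasizes.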

  • Classifier Acceleration by Imitation.

    Takahiro Ota, Toshikazu Wada, Takayuki Nakamura

    Computer Vision - ACCV 2010 - 10th Asian Conference on Computer Vision ( Springer )    653 - 664   2010  [Refereed]

    DOI

  • Scalable and robust multi-people head tracking by combining distributed multiple sensors.

    Yusuke Matsumoto, Toshikazu Wada, Shuichi Nishio, Takahiro Miyashita, Norihiro Hagita

    Intelligent Service Robotics   3 ( 1 ) 29 - 36   2010  [Refereed]

    DOI

  • K-D Decision Tree: An Accelerated and Memory Efficient Nearest Neighbor Classifier.

    Tomoyuki Shibata, Toshikazu Wada

    IEICE Transactions   93-D ( 7 ) 1670 - 1681   2010  [Refereed]

    DOI

  • Approximate nearest neighbor search on HDD.

    Noritaka Himei, Toshikazu Wada

    12th IEEE International Conference on Computer Vision Workshops ( IEEE Computer Society )    2101 - 2108   2009  [Refereed]

    DOI

  • Extension of SIFT Features and Its Application to Symmetric Planar Object Detection

    Yusuke Sano, Haiyuan Wu, Toshikazu Wada, Qian Chen

    The IEICE Transactions on Information and Systems (Japanese Edition)   J92-D ( 8 ) 1176 - 1185   2009  [Refereed]

  • Robot Body Guided Camera Calibration: Calibration Using an Arbitrary Circle.

    Qian Chen, Haiyuan Wu, Toshikazu Wada

    Proceedings of the Fifth International Conference on Image and Graphics (ICIG) ( IEEE Computer Society )    39 - 44   2009  [Refereed]

    DOI

  • Accelerating Face Detection by Using Depth Information

    Haiyuan Wu, Kazumasa Suzuki, Toshikazu Wada, Qian Chen

    ADVANCES IN IMAGE AND VIDEO TECHNOLOGY, PROCEEDINGS ( SPRINGER-VERLAG BERLIN )  5414   657 - 667   2009  [Refereed]

     View Summary

    When face sizes are not known in advance, all possible sizes must be assumed, and a face detector must classify many (often ten or more) sub-image regions at every location in an image. This makes face detection slow and raises the false positive rate. This paper explores the use of depth information to accelerate face detection while reducing the false positive rate at the same time. In detail, we use the depth information to determine the size of the sub-image region that needs to be classified at each pixel. This reduces the number of sub-image regions to be classified from many to one per position (pixel) in an image. Since most unnecessary classifications are effectively avoided, both the processing time for face detection and the likelihood of false positives are greatly reduced. We also propose a fast algorithm for estimating the depth information used to determine the size of the sub-image regions to be classified.
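    The depth-to-window-size relation can be sketched with a pinhole camera model: a face of physical width W at depth Z seen through focal length f (in pixels) projects to roughly f·W/Z pixels, so a depth estimate fixes the single window size to classify. The face width and focal length below are illustrative assumptions, not values from the paper.

```python
# Pinhole-model sketch: window size (pixels) = focal_px * face_width_m / depth_m.
# face_width_m and focal_px are illustrative defaults, not the paper's values.
def window_size(depth_m, face_width_m=0.16, focal_px=800.0):
    return focal_px * face_width_m / depth_m

for z in (0.5, 1.0, 2.0, 4.0):
    print(f"depth {z:.1f} m -> window {window_size(z):.0f} px")
```

    Halving the depth doubles the window size, which is why a per-pixel depth map collapses the usual multi-scale scan to a single classification per position.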

  • A High-Dimensional Linear Mapping Computation Method Based on Mahalanobis Distance Minimization: M3

    Aiko Oka, Toshikazu Wada

    The IEICE Transactions on Information and Systems (Japanese Edition)   J92-D ( 8 ) 1094 - 1103   2009  [Refereed]

  • Principal Component Hashing: An Accelerated Approximate Nearest Neighbor Search.

    Yusuke Matsushita, Toshikazu Wada

    Advances in Image and Video Technology (PSIVT) ( Springer )    374 - 385   2009  [Refereed]

    DOI

  • High-Performance Active Camera Head Control Using PaLM-Tree.

    Takayuki Nakamura, Yoshio Sakata, Toshikazu Wada, Haiyuan Wu

    JRM   21 ( 6 ) 720 - 725   2009  [Refereed]

    DOI

  • Mahalanobis distance Minimization Mapping: M3.

    Aiko Oka, Toshikazu Wada

    12th IEEE International Conference on Computer Vision Workshops ( IEEE Computer Society )    93 - 100   2009  [Refereed]

    DOI

  • Adaptive Alignment of Non-target Cluster Centers to Background : Improvement of Robustness and Processing Speed for K-means tracker

    OIKE Hiroshi, WU Haiyuan, WADA Toshikazu

    The IEICE transactions on information and systems ( The Institute of Electronics, Information and Communication Engineers )  91 ( 9 ) 2418 - 2421   2008.09  [Refereed]

     View Summary

    This paper describes a method that adaptively determines the number and placement of non-target cluster centers according to the background, so that the K-means tracker can track targets more stably and faster. The proposed method scans the pixels on the contour of the search-area ellipse obtained by the K-means tracker and, based on the distances to the target and non-target cluster centers, determines the number and placement of non-target cluster centers favorable for tracking. This simultaneously solves two problems of the conventional placement of non-target cluster centers in the K-means tracker: the multiple selection of pixels with similar features, and the missed selection of background pixels whose features are close to the target. Comparative experiments with the conventional K-means tracker confirmed that the proposed method tracks stably and efficiently.

  • High-Speed Binocular Active Camera System for Capturing Good Image of a Moving Object

    OIKE Hiroshi, WU Haiyuan, HUA Chunsheng, WADA Toshikazu

    The IEICE transactions on information and systems ( The Institute of Electronics, Information and Communication Engineers )  91 ( 5 ) 1393 - 1405   2008.05  [Refereed]

     View Summary

    This paper proposes a high-speed binocular active camera system that uses two independently operating active cameras to track a fast-moving object (about 150 degrees/second) and to capture clear images of the target. Here, a clear image is defined as an image of the target captured at an appropriate size without motion blur or defocus. Fixed Viewpoint Pan-Tilt-Zoom (FV-PTZ) cameras are used as the active cameras, and the K-means tracker algorithm is used for target tracking in the image. To gaze at the tracked target, the proposed system coordinates the two active cameras like a pair of human eyes, controlling them so that their lines of sight intersect at a single point on the target. To realize this, we introduce the notion of reliability into the K-means tracker and propose a control method for the two active cameras based on relative reliability. Using this reliability together with the epipolar constraint, the system performs consistent pan-tilt control while maintaining the tracking performance of each camera, and by simultaneously controlling zoom and focus, it keeps capturing clearer images of the target.

  • RK-means clustering: K-means with reliability

    Chunsheng Hua, Qian Chen, Haiyuan Wu, Toshikazu Wada

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS ( IEICE-INST ELECTRONICS INFORMATION COMMUNICATIONS ENG )  E91D ( 1 ) 96 - 104   2008.01  [Refereed]

     View Summary

    This paper presents the RK-means clustering algorithm, developed for reliable data grouping by introducing a new reliability evaluation into the K-means clustering algorithm. The conventional K-means clustering algorithm has two shortfalls: 1) the clustering result becomes unreliable if the assumed number of clusters is incorrect; 2) during the update of a cluster center, all data points belonging to that cluster are weighted equally, without considering how distant they are from the cluster center. We introduce a new reliability evaluation into the K-means clustering algorithm by considering the triangular relationship among each data point and its two nearest cluster centers. We applied the proposed algorithm to object tracking in video sequences and confirmed its effectiveness and advantages.

    DOI

  • A general algorithm to recover external camera parameters from pairwise camera calibrations

    Jaume Verges-Llahi, Toshikazu Wada

    IMAGE ANALYSIS AND RECOGNITION, PROCEEDINGS ( SPRINGER-VERLAG BERLIN )  5112   294 - +   2008  [Refereed]

     View Summary

    This paper presents a general constructive algorithm to recover external camera parameters from a set of pairwise partial camera calibrations embedded in a structure named the Camera Dependency Graph (CDG) [1], which encompasses both the feasibility and the reliability of each calibration. An edge in the CDG and its weight account, respectively, for the existence and the quality of the essential matrix between the two views it connects. Any triplet of cameras sharing visible points forms a triangle in the CDG, which permits computing the relative scale between any two of its edges. The algorithm first selects from the CDG the set of feasible paths that are shortest in terms of reliability and are also connected by a sequence of triangles. The global external parameters of the camera arrangement are computed in a two-step process that aggregates partial calibrations, represented by triangles, along the paths connecting pairs of views, taking the relative scales between triangles into account, until the parameters between the extremes of each path are recovered. Finally, the scales of the whole set of paths are referred to one canonical value corresponding to the CDG edge serving as the global scale. Initial experimental results on simulated data demonstrate the usefulness and accuracy of the scheme, which can be applied either alone or as the initial approximation for other calibration methods.

  • Training High Dimension Ternary Features with GA in Boosting Cascade Detector for Object Detection

    Qian Chen, Kazuyuki Masada, Haiyuan Wu, Toshikazu Wada

    2008 8TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE & GESTURE RECOGNITION (FG 2008), VOLS 1 AND 2 ( IEEE )    172 - 177   2008  [Refereed]

     View Summary

    Viola et al. have introduced a fast object detection scheme based on a boosted cascade of Haar-like features. In this paper we introduce a novel ternary feature that significantly enriches the diversity and flexibility over Haar-like features. We also introduce a new genetic algorithm (GA) based method for training effective ternary features through iterations of feature generation and selection. Experimental results showed that the rejection rate can reach 98.5% with only 16 features at the first layer of the constructed cascade detector, which indicates the high performance of our method for generating effective features. We confirmed that the training time can be significantly shortened compared with Viola's method, while the performance of the resulting cascade detector is comparable to previous methods.

  • K-means Clustering Based Pixel-wise Object Tracking

    Hua Chunsheng, Wu Haiyuan, Chen Qian, Wada Toshikazu

    IPSJ Online Transactions ( Information Processing Society of Japan )  1 ( 1 ) 66 - 79   2008  [Refereed]

     View Summary

    This paper presents a robust pixel-wise object tracking algorithm based on K-means clustering. To achieve robust object tracking under complex conditions (such as wiry objects or cluttered backgrounds), a new reliability-based K-means clustering algorithm is applied to remove noise background pixels (which are neither similar to the target nor to the background samples) from the target object. Based on the triangular relationship among an unknown pixel and its two nearest cluster centers (target and background), a normal pixel (target or background) is assigned a high reliability value and correctly classified, while noise pixels are given low reliability values and ignored. A radial sampling method is also introduced to improve both the processing speed and the robustness of the algorithm. Using the proposed algorithm, we built a video-rate object tracking system. Through extensive experiments, the effectiveness and advantages of this reliability-based K-means tracking algorithm are confirmed.

    DOI

  • K-means Clustering Based Pixel-wise Object Tracking

    Hua Chunsheng, Wu Haiyuan, Chen Qian, Wada Toshikazu

    Information and Media Technologies ( Information and Media Technologies Editorial Board )  3 ( 4 ) 820 - 833   2008  [Refereed]

     View Summary

    This paper presents a robust pixel-wise object tracking algorithm based on K-means clustering. To achieve robust object tracking under complex conditions (such as wiry objects or cluttered backgrounds), a new reliability-based K-means clustering algorithm is applied to remove noise background pixels (which are neither similar to the target nor to the background samples) from the target object. Based on the triangular relationship among an unknown pixel and its two nearest cluster centers (target and background), a normal pixel (target or background) is assigned a high reliability value and correctly classified, while noise pixels are given low reliability values and ignored. A radial sampling method is also introduced to improve both the processing speed and the robustness of the algorithm. Using the proposed algorithm, we built a video-rate object tracking system. Through extensive experiments, the effectiveness and advantages of this reliability-based K-means tracking algorithm are confirmed.

    DOI

  • GA Based Feature Generation for Training Cascade Object Detector

    Kazuyuki Masada, Qian Chen, Haiyuan Wu, Toshikazu Wada

    19TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOLS 1-6 ( IEEE )    1614 - 1617   2008  [Refereed]

     View Summary

    Viola et al. have introduced a fast object detection scheme based on a boosted cascade of Haar-like features. In this paper we introduce a novel ternary feature that significantly enriches the diversity and flexibility over Haar-like features. We also introduce a new genetic algorithm based method for training effective ternary features. Experimental results showed that the rejection rate can reach 98.5% with only 16 features at the first layer of the cascade detector. We confirmed that the training time can be significantly shortened, while the performance of the resulting cascade detector is comparable to previous methods.

  • Tracking a Firefly-A Stable Likelihood Estimation for Variable Appearance Object Tracking

    Yoshihiko Tsukamoto, Yusuke Matsumoto, Toshikazu Wada

    19TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOLS 1-6 ( IEEE )    932 - 935   2008  [Refereed]

     View Summary

    The particle filter estimates a probability distribution of the target object's state using sampled hypotheses and their weights. This method is more expressive than existing methods such as Kalman filtering, because the object state is represented as a multi-modal distribution. However, the method cannot be directly applied to tracking objects with temporally variable appearance, for example a firefly or a flickering neon sign. To solve this problem, we propose a particle filter for variable-appearance objects, which estimates a unique state parameter independent of the target's position. Our method decomposes the state space into disjoint parameter spaces: an object position and posture space, and an appearance parameter space. In the appearance parameter space, the likelihood of each hypothesis is evaluated at the position parameters generated in the other space, and the best-explaining parameter is determined. Based on this parameter, the likelihood in the position and posture space is evaluated. By letting the parameter estimations in the two spaces interact, we can successfully track a blinking firefly in darkness.

  • Adaptive Selection of Non-Target Cluster Centers for K-means Tracker

    Hiroshi Oike, Haiyuan Wu, Toshikazu Wada

    19TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOLS 1-6 ( IEEE )    3938 - 3941   2008  [Refereed]

     View Summary

    Hua et al. have proposed a stable and efficient tracking algorithm called the "K-means tracker" [2, 3, 5]. This paper describes an adaptive non-target cluster center selection method that replaces the one used in the K-means tracker, where non-target cluster centers are selected at fixed intervals. Non-target cluster centers are selected from the ellipse that defines the target search area in the K-means tracker by checking whether they have a significant effect on the pixel classification and are dissimilar to all already-selected non-target cluster centers. This ensures that all important non-target cluster centers are picked up while avoiding the selection of redundant non-target clusters. Through comparative object tracking experiments, we confirmed that both the robustness and the processing speed are improved by our method.

  • An Occlusion-Robust Weight Integration Method for CONDENSATION Using Multiple Cameras

    Yusuke Matsumoto, Takekazu Kato, Toshikazu Wada

    IPSJ Transactions on Computer Vision and Image Media (CVIM)     100 - 114   2007.06  [Refereed]

  • Object tracking with target and background samples

    Chunsheng Hua, Haiyuan Wu, Qian Chen, Toshikazu Wada

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS ( IEICE-INST ELECTRONICS INFORMATION COMMUNICATIONS ENG )  E90D ( 4 ) 766 - 774   2007.04  [Refereed]

     View Summary

    In this paper, we present a general object tracking method based on a newly proposed pixel-wise clustering algorithm. Tracking an object in a cluttered environment is challenging because the target object may have a concave shape or apertures (e.g. a hand or a comb). In those cases, it is difficult to separate the target from the background completely by simply modifying the shape of the search area. Our algorithm solves the problem by 1) describing the target object as a set of pixels and 2) using a K-means based algorithm to detect all target pixels. To realize stable and reliable detection of target pixels, we first use a 5D feature vector to describe both the color ("Y, U, V") and the position ("x, y") of each pixel uniformly. This enables simultaneous adaptation to both color and geometric features during tracking. Second, we use a variable ellipse model to describe the shape of the search area and to model the surrounding background. This guarantees stable object tracking under various geometric transformations. Robust tracking is realized by classifying the pixels within the search area into "target" and "background" groups with a K-means clustering based algorithm that uses "positive" and "negative" samples. We also propose a method that detects tracking failure and recovers from it by making use of both the "positive" and "negative" samples. This makes our method a more reliable tracking algorithm, because it can rediscover the target once it has been lost. Through extensive experiments under various environments and conditions, the effectiveness and efficiency of the proposed algorithm are confirmed.

    DOI

  • High-Speed Object Tracking System using Active Camera

    OIKE Hiroshi, WU Haiyuan, HUA Chunsheng, WADA Toshikazu

    Transactions of the Institute of Systems, Control and Information Engineers ( THE INSTITUTE OF SYSTEMS, CONTROL AND INFORMATION ENGINEERS (ISCIE) )  20 ( 3 ) 114 - 121   2007.03  [Refereed]

     View Summary

    This paper presents a new method for tracking a moving target by controlling a pan-tilt camera. Our method can capture motion-blur-free images of a moving target, because it automatically synchronizes the camera motion with the target motion. We employ a Fixed Viewpoint Pan-Tilt-Zoom (FV-PTZ) camera as the active camera, which produces no motion parallax under camera rotation. For target tracking in image space, we use the K-means tracker algorithm. For pan-tilt control, we employ the PID control scheme: by assigning the P component to the angular velocity of the object, the I and D components correspond to the angular position and the angular acceleration, respectively. This makes PID control suitable for controlling the angular speed and position of the pan-tilt unit simultaneously, and PID-based pan-tilt control is effective for motion synchronization between the target and the camera.
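    A generic discrete PID step, of the kind used for such pan-tilt control, can be sketched as follows. The gains, time step, and first-order plant below are illustrative assumptions, not values from the paper; the point is only how the P, I, and D terms combine into one command.

```python
# Minimal discrete PID sketch (illustrative gains, not the paper's): the
# command combines the current error (P), its accumulated sum (I), and its
# rate of change (D), letting one controller shape position and speed together.
def make_pid(kp, ki, kd, dt):
    state = {"i": 0.0, "prev": 0.0}
    def step(error):
        state["i"] += error * dt
        d = (error - state["prev"]) / dt
        state["prev"] = error
        return kp * error + ki * state["i"] + kd * d
    return step

pid = make_pid(kp=1.2, ki=0.5, kd=0.05, dt=1 / 30)
angle, target = 0.0, 10.0  # degrees: drive the camera angle toward the target
for _ in range(600):       # simulate 20 s of a simple rate-controlled axis
    angle += pid(target - angle) * (1 / 30)
print(f"final angle: {angle:.2f}")
```

    With stable gains the axis settles on the target; in the actual system the error would come from the K-means tracker's image-space measurement rather than a simulated plant.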

    DOI

  • Nearest first traversing graph for simultaneous object tracking and recognition

    Junya Sakagaito, Toshikazu Wada

    2007 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-8 ( IEEE )    1575 - +   2007  [Refereed]

     View Summary

    This paper presents a new method for simultaneous object tracking and recognition using an object image database. This application requires two searches: a search for the object appearance stored in the database, and a search for the pose parameters (position, scale, orientation, and so on) of the tracked object in each image frame. To simplify this problem, we propose pose parameter embedding (PPE), a method that transforms the original problem into an appearance search problem. The nearest neighbor (NN) appearance search in this problem has a special property: gradually changing queries are given. For this problem, graph-based NN search is suitable, because the preceding search result can be used as the starting point of the next search. A Delaunay graph can be used for this search; however, both the graph construction cost and the degree (mean number of edges connected to a vertex) increase drastically in high-dimensional space. Instead, we propose the nearest first traversing graph (NFTG) to avoid these problems. Based on these two techniques, we successfully realized video-rate tracking and recognition.
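    The benefit of graph-based NN search for gradually changing queries can be sketched with greedy traversal: start from the previous answer and hop to whichever neighbor is closer to the new query until no neighbor improves. The 2D grid graph below is a toy stand-in; the paper's NFTG construction is not reproduced here.

```python
import numpy as np

# Toy database: 25 points on a 5x5 grid, connected 4-neighborhood.
points = np.array([[float(i), float(j)] for i in range(5) for j in range(5)])

def neighbors(idx):
    i, j = divmod(idx, 5)
    return [a * 5 + b for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
            if 0 <= a < 5 and 0 <= b < 5]

def greedy_nn(query, start):
    """Greedy best-first walk: move to the closest neighbor until stuck."""
    cur = start
    while True:
        best = min(neighbors(cur) + [cur],
                   key=lambda k: np.linalg.norm(points[k] - query))
        if best == cur:
            return cur
        cur = best

prev = greedy_nn(np.array([0.2, 0.1]), start=12)   # first query, arbitrary start
nxt = greedy_nn(np.array([0.9, 1.2]), start=prev)  # next query warm-starts
print(points[prev], points[nxt])
```

    Because consecutive queries are close, the warm start means the second search visits only a few nodes, which is what makes the graph-based scheme attractive at video rate.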

  • A noise-insensitive object tracking algorithm

    Chunsheng Hua, Qian Chen, Haiyuan Wu, Toshikazu Wada

    COMPUTER VISION - ACCV 2007, PT I, PROCEEDINGS ( SPRINGER-VERLAG BERLIN )  4843   565 - 575   2007  [Refereed]

     View Summary

    In this paper, we present a noise-insensitive pixel-wise object tracking algorithm whose kernel is a new reliable data grouping algorithm that introduces a reliability evaluation into the existing K-means clustering (called RK-means clustering). RK-means clustering addresses two problems of the existing K-means clustering algorithm: 1) unreliable clustering results when noise data exist; 2) bad or wrong clustering results caused by an incorrectly assumed number of clusters. The first problem is solved by evaluating the reliability of classifying an unknown data vector according to the triangular relationship among it and its two nearest cluster centers; noise data are ignored by being assigned low reliability. The second problem is solved by introducing a new group merging method that detects pairs of "too near" data groups by checking their variance and average reliability, and then combines them. We developed a video-rate object tracking system (called the RK-means tracker) with the proposed algorithm. Extensive experiments tracking various objects in cluttered environments confirmed its effectiveness and advantages.

  • An occlusion robust likelihood integration method for multi-camera people head tracking - art. no. 67640E

    Yusuke Matsumoto, Takekazu Kato, Toshikazu Wada

    INTELLIGENT ROBOTS AND COMPUTER VISION XXV: ALGORITHMS, TECHNIQUES, AND ACTIVE VISION ( SPIE-INT SOC OPTICAL ENGINEERING )  6764   E7640 - E7640   2007  [Refereed]

     View Summary

    This paper presents a novel method for human head tracking using multiple cameras. Most existing methods estimate the 3D target position from 2D tracking results at different viewpoints. This framework is easily affected by inconsistent tracking results on the 2D images, which leads to 3D tracking failure. To solve this problem, an extension of CONDENSATION using multiple images has been proposed. The method generates many hypotheses on a target (human head) in 3D space and estimates the likelihood of each hypothesis by integrating viewpoint-dependent likelihood values of 2D hypotheses projected onto the image planes. In theory, viewpoint-dependent likelihood values should be integrated by multiplication; however, multiplication is easily affected by occlusions. We therefore investigate this problem, propose a novel likelihood integration method, and implement a prototype system consisting of six PC-camera pairs. We confirmed the method's robustness against occlusions.

  • Clear image capture - Active cameras system for tracking a high-speed moving object

    Hiroshi Oike, Haiyuan Wu, Chunsheng Hua, Toshikazu. Wada

    ICINCO 2007: PROCEEDINGS OF THE FOURTH INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, VOL SPSMC ( INSTICC-INST SYST TECHNOLOGIES INFORMATION CONTROL & COMMUNICATION )    94 - 102   2007  [Refereed]

     View Summary

    In this paper, we propose a high-performance object tracking system for obtaining high-quality images of a high-speed moving object at video rate by controlling a pair of active cameras, each consisting of a camera with a zoom lens mounted on a pan-tilt unit. Here, a "high-quality image" means that the object image is in focus and not blurred, the size of the object in the image remains unchanged, and the object is located at the image center. To achieve our goal, we use the K-means tracker algorithm for tracking objects in the image sequences captured by the active cameras. We use the results of the K-means tracker to control the angular position and speed of each pan-tilt-zoom unit with a PID control scheme. With two cameras, the binocular stereo vision algorithm can be used to obtain the 3D position and velocity of the object; these results are used to adjust the focus and zoom. Moreover, our system makes the two cameras gaze at a single point in 3D space. However, the system may become unstable when the time response deteriorates due to excessive interference between the mutual control loops or strict restriction of the camera action. To solve these problems, we introduce the concept of reliability into the K-means tracker and propose a method for controlling the active cameras using relative reliability. We developed a prototype system and confirmed through extensive experiments that it can obtain focused, motion-blur-free images of a high-speed moving object at video rate.

  • Tracking iris contour with a 3D eye-model for gaze estimation

    Haiyuan Wu, Yosuke Kitagawa, Toshikazu Wada, Takekazu Kato, Qian Chen

    COMPUTER VISION - ACCV 2007, PT I, PROCEEDINGS ( SPRINGER-VERLAG BERLIN )  4843   688 - 697   2007  [Refereed]

     View Summary

    This paper describes a sophisticated method to track the iris contour and to estimate the eye gaze of blinking eyes with a monocular camera. A 3D eye model consisting of eyeballs, iris contours, and eyelids is designed to describe the geometrical properties and movements of eyes. Both the iris contours and the eyelid contours are tracked using this eye model and a particle filter. The algorithm can detect "pure" iris contours because it distinguishes iris contours from eyelid contours. The eye gaze is described by the movement parameters of the 3D eye model, which are estimated by the particle filter during tracking. Other distinctive features of this algorithm are: 1) it does not require any special light source (e.g. an infrared illuminator), and 2) it operates at video rate. Through extensive experiments on real video sequences, we confirmed the robustness and effectiveness of our method.

  • An occlusion robust likelihood integration method for multi-camera people head tracking

    Yusuke Matsumoto, Takekazu Kato, Toshikazu Wada

    INSS 07: PROCEEDINGS OF THE FOURTH INTERNATIONAL CONFERENCE ON NETWORKED SENSING SYSTEMS ( IEEE )    235 - +   2007  [Refereed]

     View Summary

    This paper presents a novel method for human head tracking using multiple cameras. Most existing methods estimate the 3D target position from 2D tracking results at different viewpoints. This framework is easily affected by inconsistent tracking results on the 2D images, which leads to 3D tracking failure. To solve this problem, an extension of CONDENSATION using multiple images has been proposed. The method generates many hypotheses on a target (human head) in 3D space and estimates the likelihood of each hypothesis by integrating viewpoint-dependent likelihood values of 2D hypotheses projected onto the image planes. In theory, viewpoint-dependent likelihood values should be integrated by multiplication; however, multiplication is easily affected by occlusions. We therefore investigate this problem, propose a novel likelihood integration method, and implement a prototype system consisting of six PC-camera pairs. We confirmed the method's robustness against occlusions.

  • Detection and tracking using multi color target models

    Hiroshi Oike, Toshikazu Wada, Takeo Iizuka, Haiyuan Wu, Takahiro Miyashita, Norihiro Hagita

    INTELLIGENT ROBOTS AND COMPUTER VISION XXV: ALGORITHMS, TECHNIQUES, AND ACTIVE VISION ( SPIE-INT SOC OPTICAL ENGINEERING )  6764   2007  [Refereed]

     View Summary

    This paper presents an object detection and tracking algorithm that can adapt to object color shift. In this algorithm, we train and build multiple target models using colors under different illumination conditions. Each model is called a Color Distinctiveness look-up Table (CDT). Color distinctiveness is a value integrating 1) similarity with target colors and 2) dissimilarity with non-target colors, representing how distinctively a color can be classified as a target pixel. Color distinctiveness can be used for pixel-wise target detection, because it takes the value 0.5 for colors on the decision boundary of a nearest neighbor classifier in color space. It can also be used for target tracking by continuously finding the most distinctive region. By selecting the CDT most suitable for the camera direction, lighting condition, and camera parameters, the system can adapt to target and background color changes. We implemented this algorithm on a pan-tilt stereo camera system. Through experiments with this system, we confirmed that the algorithm is robust against color shift caused by illumination change and can measure the target's 3D position at video rate.

    DOI

  • Background subtraction under varying illumination

    Takashi Matsuyama, Toshikazu Wada, Hitoshi Habe, Kazuya Tanahashi

    Systems and Computers in Japan   37 ( 4 ) 77 - 88   2006.04  [Refereed]

     View Summary

    Background subtraction is widely used as an effective method for detecting moving objects in video images. However, it assumes that no background image variation is observed, which limits its range of application. This paper proposes a background subtraction method for detecting moving objects that can be applied even when the image varies due to changing illumination. The method is based on two object detection approaches built on different lines of thinking. One compares the background image and the observed image using illumination-invariant features. The other estimates the illumination conditions of the observed image and normalizes the brightness before carrying out background subtraction. These two approaches are complementary, and highly precise detection results are obtained by integrating the results of both. © 2006 Wiley Periodicals, Inc.

    DOI

  • A general framework for tracking people

    Chunsheng Hua, Haiyuan Wu, Qian Chen, Toshikazu Wada

    PROCEEDINGS OF THE SEVENTH INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION - PROCEEDINGS OF THE SEVENTH INTERNATIONAL CONFERENCE ( IEEE COMPUTER SOC )    511 - +   2006  [Refereed]

     View Summary

    In this paper, we present a clustering-based tracking algorithm for tracking people (e.g. hand, head, eyeball, body). A human body often appears as a concave object or an object with apertures. In this case, many background areas are mixed into the tracking target and are difficult to remove by modifying the shape of the search area during tracking. This algorithm realizes robust tracking for such objects by classifying the pixels in the search area into "target" and "non-target" with a K-means clustering algorithm that uses both "positive" and "negative" samples. The contributions of this research are: 1) using a 5-D feature vector to describe both the geometric feature "(x, y)" and the color feature "(Y, U, V)" of an object (or a pixel) uniformly, which ensures that our method follows both position and color changes simultaneously during tracking; 2) using a variable ellipse model (a) to approximately describe the shape of a non-rigid object (e.g. a hand), (b) to restrict the search area, and (c) to model the surrounding non-target background, which guarantees stable tracking of objects under various geometric transformations. Through extensive experiments in various environments and conditions, the effectiveness and efficiency of the proposed algorithm are confirmed.

  • High-speed-tracking active cameras for obtaining clear object

    Hiroshi Oike, Haiyuan Wu, Chunsheng Hua, Toshikazu Wada

    INTELLIGENT ROBOTS AND COMPUTER VISION XXIV: ALGORITHMS, TECHNIQUES, AND ACTIVE VISION ( SPIE-INT SOC OPTICAL ENGINEERING )  6384   2006  [Refereed]

     View Summary

    In this paper, we propose a high-performance object tracking system for obtaining high-quality images of a high-speed moving object at video rate by controlling a pair of active cameras mounted on two fixed-viewpoint pan-tilt-zoom units. Here, a 'high-quality object image' means that the image of the object is in focus and not blurred, the S/N ratio is sufficiently high, the size of the object in the image is kept unchanged, and the object appears at the image center. To achieve this goal, we use the K-means tracker algorithm to track the object in the image sequences taken by the active cameras. The result of the K-means tracker is used to control the angular position and speed of each pan-tilt-zoom unit with a PID control scheme. Because two cameras are used, a binocular stereo vision algorithm can obtain the 3D position and velocity of the object; these results are used to adjust the focus and the zoom. Moreover, our system fixes the gaze of both cameras on a single point in 3D space. Such a system may become unstable when the time response degrades because the mutual control loops interfere too strongly, or because the cameras' motion is severely restricted. To solve these problems, we introduce a concept of reliability into the K-means tracker and propose a method for controlling the active cameras by using relative reliability. We built a prototype system, and through extensive experiments we confirmed that in-focus, motion-blur-free images of a high-speed moving object can be obtained at video rate.

    DOI
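
The PID control scheme mentioned above, driving each pan-tilt axis toward the tracked image position, can be sketched as below; the gains and the first-order toy plant in the test are illustrative, not the paper's tuned values:

```python
class PID:
    """Minimal PID loop of the kind used to drive a pan-tilt axis
    toward the target angle reported by the tracker."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        # Accumulate the integral term and differentiate the error.
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Feeding the angular error each frame yields a command that converges the axis onto the target without a steady-state offset.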

  • A pixel-wise object tracking algorithm with target and background sample

    Chunsheng Hua, Haiyuan Wu, Qian Chen, Toshikazu Wada

    18TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOL 1, PROCEEDINGS ( IEEE COMPUTER SOC )    739 - +   2006  [Refereed]

     View Summary

    In this paper we present a clustering-based tracking algorithm for non-rigid objects. Non-rigid object tracking is a challenging task because the target often appears as a concave shape or an object with apertures. In such cases, many background areas are mixed into the tracking target and are difficult to remove by modifying the shape of the search area. Our algorithm achieves robust tracking for such objects by classifying the pixels in the search area into "target" and "background" with a K-means clustering algorithm that uses both "positive" and "negative" samples. The contributions of this research are: 1) Using a 5D feature vector to describe both the geometric feature (x, y) and the color feature (Y, U, V) of an object (or a pixel) uniformly. This description enables simultaneous adaptation to both geometric and color variation during tracking; 2) Using a variable ellipse model (a) to describe the search area and (b) to model the surrounding background. This guarantees stable tracking of objects under various geometric transformations. Through extensive experiments in various environments and conditions, the effectiveness and the efficiency of the proposed algorithm are confirmed.

  • K-means Tracking with Variable Ellipse Model

    Hua Chunsheng, Wu Haiyuan, Wada Toshikazu, Chen Qian

    Information and Media Technologies ( Information and Media Technologies Editorial Board )  1 ( 1 ) 436 - 445   2006  [Refereed]

     View Summary

    We have proposed a K-means clustering based target tracking method which, compared with template matching, works robustly when tracking an object with holes through which the background can be seen (e.g., a mosquito coil); hereafter we call this problem the background interfusion, or the interfused background. This paper presents a new method that resolves the drawbacks of the previous method, i.e., low speed and instability caused by changes of shape and size. Our new tracking model consists of a single target center and a variable ellipse model representing non-target pixels. The contributions of our new method are: 1) The original K-means clustering is replaced by a 2∞-means clustering, and the non-target cluster center is adaptively picked from the pixels on the ellipse. This modification reduces the number of distance computations and also improves the stability of target detection. 2) The ellipse parameters are adaptively adjusted according to the target detection result. This adaptation improves robustness against scale and shape changes of the target. Through various experiments, we confirmed that our new method improves the speed and robustness of our original method.

    DOI

  • K-means tracker: A general algorithm for tracking people

    Chunsheng Hua, Haiyuan Wu, Qian Chen, Toshikazu Wada

    Journal of Multimedia ( Academy Publisher )  1 ( 4 ) 46 - 53   2006  [Refereed]

     View Summary

    In this paper, we present a clustering-based tracking algorithm for tracking people (e.g. hand, head, eyeball, body, and lips). Tracking people in complex environments is always challenging because such targets often appear as concave objects or objects with apertures. In such cases, many background areas are mixed into the tracking area and are difficult to remove by modifying the shape of the search area during tracking. Our method becomes a robust tracking algorithm by applying the following four key ideas simultaneously: 1) Using a 5D feature vector to describe both the geometric feature (x, y) and the color feature (Y, U, V) of each pixel uniformly. This description enables our method to follow both position and color changes simultaneously during tracking. 2) The algorithm achieves robust tracking for objects with apertures by classifying the pixels within the search area into "target" and "background" with a K-means clustering algorithm that uses both "positive" and "negative" samples. 3) Using a variable ellipse model (a) to describe the shape of a non-rigid object (e.g. hand) approximately, (b) to restrict the search area, and (c) to model the surrounding non-target background. This guarantees stable tracking of objects under various geometric transformations. 4) With both "positive" and "negative" samples, our algorithm automatically detects and recovers from tracking failure. This ability makes our method distinctly more robust than conventional tracking algorithms. Through extensive experiments in various environments and conditions, the effectiveness and the efficiency of the proposed algorithm are confirmed. © 2006 ACADEMY PUBLISHER.

    DOI

  • Visual direction estimation from a monocular image

    HY Wu, Q Chen, T Wada

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS ( IEICE-INST ELECTRONICS INFORMATION COMMUNICATIONS ENG )  E88D ( 10 ) 2277 - 2285   2005.10  [Refereed]

     View Summary

    This paper describes a sophisticated method to estimate the visual direction using iris contours. This method requires only one monocular image taken by a camera with unknown focal length. To estimate the visual direction, we assume that the visual directions of both eyes are parallel and that the iris boundaries are circles in 3D space. In this case, the two planes on which the iris boundaries reside are also parallel. We estimate the normal vector of the two planes from the iris contours extracted from an input image using an extended "two-circle" algorithm. Unlike most existing gaze estimation algorithms, which require information about eye corners and heuristic knowledge about the 3D structure of the eye in addition to the iris contours, our method uses only the two iris contours. Another contribution of our method is its ability to estimate the focal length of the camera, which allows a zoom lens to be used and the focal length to be adjusted at any time. Extensive experiments on simulated and real images demonstrate the robustness and effectiveness of our method.

    DOI
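
A common first step in iris-contour-based methods of this kind is fitting a conic to the extracted contour points. A generic least-squares sketch (not the paper's full two-circle derivation) is:

```python
import numpy as np

def fit_conic(xs, ys):
    """Least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to contour points, taken as the null-space vector of the design matrix."""
    A = np.column_stack([xs * xs, xs * ys, ys * ys, xs, ys, np.ones_like(xs)])
    # The right singular vector for the smallest singular value minimizes
    # ||A w|| subject to ||w|| = 1.
    _, _, vt = np.linalg.svd(A)
    return vt[-1]  # conic coefficients (a, b, c, d, e, f), up to scale
```

For points on the circle x^2 + y^2 = 4 the recovered coefficients are proportional to (1, 0, 1, 0, 0, -4).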

  • K-D Decision Tree and Its Applications: Speeding Up and Reducing the Memory Usage of Nearest Neighbor Classifiers

    Shibata, Kato, Wada

    IEICE Transactions on Information and Systems, D-II     1367 - 1377   2005.08  [Refereed]

  • A Novel NonLinear Mapping Algorithm (PaLM-Tree)

    NAKAMURA Takayuki, KATO Takekazu, WADA Toshikazu

    JRSJ ( The Robotics Society of Japan )  23 ( 6 ) 732 - 742   2005.06  [Refereed]

     View Summary

    This paper presents a novel learning method for nonlinear mapping between arbitrary dimensional spaces. Unlike artificial neural nets, GMDH, and other methods, our method does not require complicated control parameters. Given a feasible error threshold and training samples, it automatically divides the objective mapping into partially linear mappings. Since the decomposed mappings are maintained by a binary tree, the linear mapping corresponding to an input is quickly selected. We call this method the Partially Linear Mapping tree (PaLM-tree). In order to estimate the most reliable linear mappings satisfying the feasible error criterion, we employ a split-and-merge strategy for the decomposition. Through experiments on function estimation, image segmentation, and camera calibration problems, we confirmed the advantages of the PaLM-tree.

    DOI
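
The idea of recursively splitting a mapping into linear pieces maintained in a binary tree can be sketched in one dimension as follows; the split-at-median rule is our simplification (the paper uses split-and-merge in arbitrary dimensions):

```python
import numpy as np

class PaLMNode:
    """Toy 1-D version of the PaLM-tree idea: recursively split the input
    range until a linear fit meets the error threshold."""
    def __init__(self, x, y, eps):
        coef = np.polyfit(x, y, 1)                 # candidate linear map
        err = np.max(np.abs(np.polyval(coef, x) - y))
        if err <= eps or len(x) < 4:
            self.coef, self.split, self.kids = coef, None, None
        else:
            mid = x[len(x) // 2]                   # split at the median input
            left = x < mid
            self.coef, self.split = None, mid
            self.kids = (PaLMNode(x[left], y[left], eps),
                         PaLMNode(x[~left], y[~left], eps))

    def __call__(self, q):
        # Tree descent selects the linear piece responsible for q.
        if self.split is None:
            return np.polyval(self.coef, q)
        return self.kids[0](q) if q < self.split else self.kids[1](q)
```

A piecewise-linear target such as |x - 0.5| is reproduced after a single split, with each leaf holding one exact linear map.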

  • K-means tracking with variable ellipse model

    C. Hua, H. Wu, T. Wada, Q. Chen

    Trans. of IPSJ, (CVIM12)     59 - 68   2005.01  [Refereed]

  • A Database-based Server-side Spam Detection System

    Hiroaki Matsuura, Shoichi Saito, Tetsutaro Uehara, Toshikazu Wada

    IEICE Transactions on Communications   B, Vol.J88-B ( No.10 ) 1934 - 1943   2005.01  [Refereed]

  • A light modulation/demodulation method for real-time 3D imaging

    Qian Chen, Toshikazu Wada

    Proceedings of International Conference on 3-D Digital Imaging and Modeling, 3DIM     15 - 21   2005  [Refereed]

     View Summary

    This paper describes a novel method for digitizing the 3D shape of an object in real time, which can be used for capturing live sequences of the 3D shape of moving or deformable objects such as faces. Two DMD (digital micromirror) devices are used as high-speed switches for modulating and demodulating light rays. One DMD is used to generate rays of light pulses, which are projected onto the object to be measured. The other DMD is used to demodulate the light reflected from the object illuminated by the light pulses into an intensity image that describes the disparity. A prototype range finder implementing the proposed method has been built. The experimental results showed that the proposed method works and that video sequences of disparity images can be captured in real time. © 2005 IEEE.

    DOI

  • K-means Tracking with Variable Ellipse Model

    Hua Chunsheng, Wu Haiyuan, Wada Toshikazu, Chen Qian

    IPSJ Digital Courier ( Information Processing Society of Japan )  1 ( 1 ) 508 - 517   2005  [Refereed]

     View Summary

    We have proposed a K-means clustering based target tracking method which, compared with template matching, works robustly when tracking an object with holes through which the background can be seen (e.g., a mosquito coil); hereafter we call this problem the background interfusion, or the interfused background. This paper presents a new method that resolves the drawbacks of the previous method, i.e., low speed and instability caused by changes of shape and size. Our new tracking model consists of a single target center and a variable ellipse model representing non-target pixels. The contributions of our new method are: 1) The original K-means clustering is replaced by a 2∞-means clustering, and the non-target cluster center is adaptively picked from the pixels on the ellipse. This modification reduces the number of distance computations and also improves the stability of target detection. 2) The ellipse parameters are adaptively adjusted according to the target detection result. This adaptation improves robustness against scale and shape changes of the target. Through various experiments, we confirmed that our new method improves the speed and robustness of our original method.

    DOI

  • High performance control of active camera head using PaLM-tree

    T Nakamura, Y Sakata, T Wada, HY Wu

    2005 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS, VOLS 1 AND 2 ( IEEE )    963 - 968   2005  [Refereed]

     View Summary

    In this paper, we propose a new feedback-error-learning controller enhanced by the PaLM-tree, an easy-to-use function approximator developed by our research group. We investigate the ability of our feedback-error-learning controller by applying it to the control of an active camera head pursuing a moving target with high accuracy and fast response. The PaLM-tree correctly learns the inverse model in the feedback-error-learning scheme. Although our active camera head has unknown mechanical friction and our closed-loop control system has a relatively large latency, the camera head can successfully perform pursuit movements with our feedback-error-learning controller based on the PaLM-tree. Through experiments, we confirmed that our method achieves higher-performance control than tuned feedback control.

  • Integration of Background Subtraction and Color Detection Based on a Nearest Neighbor Classifier

    Kato, Wada

    IPSJ Transactions on Computer Vision and Image Media   Vol.45 ( No. SIG13(CVIM10) ) 110 - 117   2004.12  [Refereed]

  • Real-time dynamic 3-D object shape reconstruction, and high-fidelity texture mapping for 3-D video

    T Matsuyama, XJ Wu, T Takai, T Wada

    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY ( IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC )  14 ( 3 ) 357 - 369   2004.03  [Refereed]

     View Summary

    Three-dimensional (3-D) video is a real 3-D movie recording the object's full 3-D shape, motion, and precise surface texture. This paper first proposes a parallel pipeline processing method for reconstructing a dynamic 3-D object shape from multiview video images, by which a temporal series of full 3-D voxel representations of the object behavior can be obtained in real time. To realize the real-time processing, we first introduce a plane-based volume intersection algorithm: first represent an observable 3-D space by a group of parallel plane slices, then back-project observed multiview object silhouettes onto each slice, and finally apply two-dimensional silhouette intersection on each slice. Then, we propose a method to parallelize this algorithm using a PC cluster, where we employ five-stage pipeline processing in each PC as well as slice-by-slice parallel silhouette intersection. Several results of the quantitative performance evaluation are given to demonstrate the effectiveness of the proposed methods. In the latter half of the paper, we present an algorithm of generating video texture on the reconstructed dynamic 3-D object surface. We first describe a naive view-independent rendering method and show its problems. Then, we improve the method by introducing image-based rendering techniques. Experimental results demonstrate the effectiveness of the improved method in generating high fidelity object images from arbitrary viewpoints.

    DOI
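
The slice-by-slice silhouette intersection at the core of the plane-based algorithm can be sketched as below, assuming each camera's silhouette has already been back-projected onto the slice (the per-camera homography is omitted here):

```python
import numpy as np

def slice_intersection(projected_silhouettes):
    """Core of plane-based volume intersection: given each camera's
    silhouette back-projected onto one slice as a boolean mask, the
    object's cross-section on that slice is the pixel-wise AND of all
    projections."""
    out = projected_silhouettes[0].copy()
    for s in projected_silhouettes[1:]:
        out &= s
    return out
```

Repeating this over all parallel slices (and in parallel across PCs) yields the voxelized visual hull.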

  • Camera calibration with two arbitrary coplanar circles

    Q Chen, HY Wu, T Wada

    COMPUTER VISION - ECCV 2004, PT 3 ( SPRINGER-VERLAG BERLIN )  3023   521 - 532   2004  [Refereed]

     View Summary

    In this paper, we describe a novel camera calibration method that estimates the extrinsic parameters and the focal length of a camera from a single image of two coplanar circles of arbitrary radius.
    A simple procedure for estimating the extrinsic parameters and the focal length is important because, in many vision-based applications, the position, pose, and zoom factor of a camera are adjusted frequently.
    An easy-to-use, convenient camera calibration method should have two characteristics: 1) the calibration object can be produced or prepared easily, and 2) the calibration procedure is simple. Our new method satisfies both requirements, while most existing camera calibration methods do not, because they need a specially designed calibration object and require multi-view images. Since drawing accurate circles of arbitrary radius is easy enough that one can even draw them on the ground with only a rope and a stick, the calibration object used by our method can be prepared very easily. Moreover, our method needs only one image, and it allows the centers of the circles and/or parts of the circles to be occluded.
    Another useful feature of our method is that it can estimate the focal length as well as the extrinsic parameters of a camera simultaneously. Because zoom lenses are widely used and the zoom factor is adjusted as frequently as the camera setting, estimating the focal length is almost a must whenever the camera setting changes. Extensive experiments on simulated and real images demonstrate the robustness and effectiveness of our method.

  • Visual line estimation from a single image of two eyes

    HY Wu, Q Chen, T Wada

    PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOL 3 ( IEEE COMPUTER SOC )    290 - 293   2004  [Refereed]

     View Summary

    This paper describes a conic-based algorithm for estimating the visual line from a single monocular image. By assuming that the visual lines of both eyes are parallel and the iris boundaries are circles, we propose a "two-circle" algorithm that estimates the normal vector of the supporting plane of the iris boundaries, from which the visual line is calculated. Our new method uses neither the eye corners nor heuristic knowledge about the structure of the eye. Another advantage of our algorithm is that a camera with an unknown focal length can be used without assuming orthographic projection. This is very useful because it allows one to use a zoom lens and to change the zoom factor at any time. It also gives more freedom in camera placement, because keeping the camera far from the eyes is not necessary in our method. Extensive experiments on simulated and real images demonstrate the robustness and effectiveness of our method.

  • Conic-based algorithm for visual line estimation from one image

    HY Wu, Q Chen, T Wada

    SIXTH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, PROCEEDINGS ( IEEE COMPUTER SOC )    260 - 265   2004  [Refereed]

     View Summary

    This paper describes a novel method to estimate the visual line from a single monocular image. By assuming that the visual lines of both eyes are parallel and the iris boundaries are circles, we propose a "two-circle" algorithm that estimates the normal vector of the supporting plane of the iris boundaries, from which the direction of the visual line can be calculated. Most existing gaze estimation algorithms require eye corners and heuristic knowledge about the structure of the eye in addition to the iris contours. In contrast to existing methods, ours uses neither of these additional sources of information. Another advantage of our algorithm is that a camera with an unknown focal length can be used without assuming orthographic projection. This is very useful because it allows one to use a zoom lens and to change the zoom factor at any time. It also gives more freedom in camera placement, because keeping the camera far from the eyes is not necessary in our method. Extensive experiments on simulated and real images demonstrate the robustness and effectiveness of our method.

  • Direct condensing: An efficient Voronoi condensing algorithm for nearest neighbor classifiers

    T Kato, T Wada

    PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOL 3 ( IEEE COMPUTER SOC )    474 - 477   2004  [Refereed]

     View Summary

    Voronoi condensing reduces the training patterns of nearest neighbor classifiers without changing the classification boundaries. This method plays an important role not only for nearest neighbor classifiers but also for other classifiers such as support vector machines, because the resulting prototype patterns often include the support vectors. However, previous algorithms for Voronoi condensing were computationally inefficient in general pattern recognition tasks. This is because they use proximity graphs of the entire training set, which require computation time exponential in the dimension of the pattern space. To solve this problem, we propose an efficient Voronoi condensing algorithm, named direct condensing, that does not require proximity graphs of the entire training set. We confirmed that direct condensing efficiently computes Voronoi-condensed prototypes in high dimensions (from 2 to 20).
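
A minimal sketch of boundary-preserving prototype reduction is given below. This greedy consistency check is a simplification of ours, not the paper's direct condensing algorithm, which works from exact Voronoi neighbor relations:

```python
import numpy as np

def condense(X, y):
    """Greedy prototype condensing sketch: drop a training point whenever
    1-NN over the remaining prototypes still labels every training point
    correctly, so the decision over the training set is preserved."""
    keep = list(range(len(X)))
    for i in range(len(X)):
        trial = [j for j in keep if j != i]
        if not trial:
            continue
        # 1-NN prediction for every training point using the trial set.
        d = np.linalg.norm(X[:, None, :] - X[trial][None, :, :], axis=2)
        pred = y[np.array(trial)[np.argmin(d, axis=1)]]
        if np.all(pred == y):
            keep = trial
    return keep
```

Interior points of each class are discarded while points near the class boundary survive, which is the intuition behind keeping only boundary-defining prototypes.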

  • Estimating the visual direction with two-circle algorithm

    HY Wu, Q Chen, T Wada

    ADVANCES IN BIOMETRIC PERSON AUTHENTICATION, PROCEEDINGS ( SPRINGER-VERLAG BERLIN )  3338   125 - 136   2004  [Refereed]

     View Summary

    This paper describes a novel method to estimate the visual direction from a single monocular image with a "two-circle" algorithm. We assume that the visual directions of both eyes are parallel and that the iris boundaries are circles in 3-D space. Our "two-circle" algorithm estimates the normal vector of the supporting plane of the two iris boundaries, from which the visual direction can be calculated. Most existing gaze estimation algorithms require eye corners and heuristic knowledge about the structure of the eye as well as the iris contours. In contrast to existing methods, ours does not use that additional information. Another advantage of our algorithm is that it does not require the focal length; therefore, it can estimate the visual direction from an image taken by an active camera. Extensive experiments on simulated and real images demonstrate the robustness and effectiveness of our method.

  • Camera Calibration Using the Robot's Body

    Haiyuan Wu, Toshikazu Wada, Qian Chen

    IPSJ Transactions on Computer Vision and Image Media   Vol.44 ( No.SIG 17(CVIM8) ) 61 - 69   2003.12  [Refereed]

  • Color-target Detection Based on Nearest Neighbor Classifier

    Toshikazu Wada (Part: Lead author, Last author, Corresponding author )

    IPSJ Transactions on Computer Vision and Image Media   Vol.44 ( No. SIG17(CVIM8) ) 126 - 135   2003.12  [Refereed]

  • Background Subtraction Based on the Co-occurrence of Background Variations

    Makito Seki, Toshikazu Wada, Hideto Fujiwara, Kazuhiko Sumi

    IPSJ Transactions on Computer Vision and Image Media   44 ( SIG_5(CVIM_6) ) 54 - 63   2003  [Refereed]

  • High-precision 3D Shape Reconstruction from Multi-view Images Using an Elastic Mesh Model

    Nobuhara, Wada, Matsuyama

    IPSJ Transactions on Computer Vision and Image Media   Vol.43   53 - 63   2002.12  [Refereed]

  • Parallel Volume Intersection Using Plane-to-Plane Perspective Projection

    Wu, Wada, Tokai, Matsuyama

    IPSJ Transactions on Computer Vision and Image Media   Vol. 42 ( No. SIG6(CVIM2) ) 33 - 43   2001.06  [Refereed]

  • Real-time Multi-view Image Processing System on PC-cluster - Real-time motion capture and 3D volume reconstruction systems -

    TANIGUCHI Rin-ichiro, WADA Toshikazu

    JRSJ ( The Robotics Society of Japan )  19 ( 4 ) 427 - 432   2001.05

    DOI

  • Background Subtraction Robust to Illumination Changes

    Matsuyama, Toshikazu Wada, Habe, Tanahashi

    IEICE Transactions on Information and Systems   D-II, Vol.J84-D-II ( No.10 ) 2201 - 2211   2001.01  [Refereed]

  • Real-Time Object Tracking with an Active Camera

    WADA Toshikazu, MATSUYAMA Takashi

    JRSJ ( The Robotics Society of Japan )  19 ( 4 ) 433 - 438   2001

    DOI

  • Multiobject behavior recognition by selective attention

    Toshikazu Wada, Masayuki Sato, Takashi Matsuyama

    Electronics and Communications in Japan, Part III: Fundamental Electronic Science (English translation of Denshi Tsushin Gakkai Ronbunshi)   84 ( 9 ) 56 - 66   2001  [Refereed]

     View Summary

    This study proposes a method for recognizing the behavior and number of multiple objects without separating the objects from images. Most conventional behavior recognition techniques use bottom-up processing, in which features are first extracted from images and the extracted features are then subjected to time-series analysis. However, separating objects from images at the feature extraction stage results in unstable processing. This study aims at stable recognition of multi-object behavior. For this purpose, a selective attention mechanism is proposed. With this mechanism, particular image regions (focusing regions) are allotted to all states of the NFA (nondeterministic finite automaton) that performs sequence analysis, and feature extraction (event detection) is performed inside those regions. This approach makes it possible to detect events irrespective of noise (that is, changes that may occur in the image beyond the focusing regions), while nondeterministic state transitions mean that all possible event sequences are analyzed; hence, the behavior of multiple objects can be recognized without separating the objects from the images. Object-specific color tokens are assigned to NFA active state sets and are transferred along with the state transitions, which is referred to as the object discrimination mechanism. Introducing this mechanism allows simultaneous multi-object behavior recognition and detection of the number of objects. In addition, the proposed system has been extended to handle multi-view images, and its effectiveness has been proven experimentally. © 2001 Scripta Technica.

    DOI

  • 3D Shape Reconstruction of Book Surfaces from Shading Information Using the Eigenspace Method

    Ukita, Konishi, Toshikazu Wada, Matsuyama

    IEICE Transactions on Information and Systems   D-II, Vol. J83-D-II ( No. 12 ) 2610 - 2621   2000.12  [Refereed]

  • Multiobject behavior recognition by event driven selective attention method

    T Wada, T Matsuyama

    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE ( IEEE COMPUTER SOC )  22 ( 8 ) 873 - 887   2000.08  [Refereed]

     View Summary

    Recognizing multiple object behaviors from nonsegmented image sequences is a difficult problem because most of the motion recognition methods proposed so far share the limitation of the single-object assumption. Based on existing methods, the problem can be solved only by bottom-up image sequence segmentation followed by sequence classification. This straightforward approach totally depends on bottom-up segmentation which is easily affected by occlusions and outliers. This paper presents a completely novel approach for this task without using bottom-up segmentation. Our approach is based on assumption generation and verification, i.e., feasible assumptions about the present behaviors consistent with the input image and behavior models are dynamically generated and verified by finding their supporting evidence in input images. This can be realized by an architecture called the selective attention model, which consists of a state-dependent event detector and an event sequence analyzer. The former detects image variation (event) in a limited image region (focusing region), which is not affected by occlusions and outliers. The latter analyzes sequences of detected events and activates all feasible states representing assumptions about multiobject behaviors. In this architecture, event detection can be regarded as a verification process of generated assumptions because each focusing region is determined by the corresponding assumption. This architecture is sound since all feasible assumptions are generated. However, these redundant assumptions imply ambiguity of the recognition result. Hence, we further extend the system by introducing 1) colored-token propagation to discriminate different objects in state space and 2) integration of multiviewpoint image sequences to disambiguate the single-view recognition results. Extensive experiments of human behavior recognition in real world environments demonstrate the soundness and robustness of our architecture.

    DOI
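
The event-driven state transition of the selective attention model can be sketched as follows; the tuple-based transition table and the region names are our own illustrative encoding, not the paper's notation:

```python
def advance(active, transitions, frame_events):
    """Sketch of one event-driven NFA step: each edge is a tuple
    (state, focusing_region, next_state). An edge fires when the current
    frame reports an event inside its focusing region, and every feasible
    next state stays active (nondeterminism = concurrent hypotheses)."""
    nxt = set()
    for state, region, next_state in transitions:
        if state in active and region in frame_events:
            nxt.add(next_state)
    return nxt or active  # no matching event: hypotheses persist
```

Because the active set may contain several states at once, multiple behavior hypotheses are verified in parallel as events arrive.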

  • Dynamic memory: Architecture for real time integration of Visual Perception, Camera Action, and Network Communication

    T Matsuyama, S Hiura, T Wada, K Murase, A Yoshioka

    IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, PROCEEDINGS, VOL II ( IEEE COMPUTER SOC )    728 - 735   2000  [Refereed]

     View Summary

    In a Cooperative Distributed Vision system, a group of communicating Active Vision Agents (AVAs for short, i.e., real-time image processors with active video cameras and high-speed network interfaces) cooperate to fulfill a meaningful task such as moving object tracking and dynamic scene visualization. A key issue in designing and implementing an AVA rests in the dynamic integration of Visual Perception, Camera Action, and Network Communication. This paper proposes a novel dynamic system architecture named the Dynamic Memory Architecture, where perception, action, and communication modules share what we call the Dynamic Memory. It maintains not only temporal histories of state variables, such as the pan-tilt angles of the camera and the target object location, but also their predicted future values. Perception, action, and communication modules are implemented as parallel processes which dynamically read from and write into the memory according to their own individual dynamics. The dynamic memory supports such asynchronous dynamic interactions (i.e., data exchanges between the modules) without wasting time on synchronization. This no-wait asynchronous module interaction capability greatly facilitates the implementation of real-time reactive systems such as moving object tracking. Moreover, the dynamic memory supports virtual synchronization between multiple AVAs, which facilitates cooperative object tracking by communicating AVAs. A prototype system for real-time moving object tracking demonstrated the effectiveness of the proposed idea.
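
A toy sketch of the Dynamic Memory's no-wait read behavior is given below, assuming linear extrapolation from the two most recent samples (the paper's actual prediction scheme may differ):

```python
class DynamicMemory:
    """Sketch of the Dynamic Memory idea for one state variable: writers
    append timestamped values; readers may ask for any time, including
    the near future, and receive an extrapolated estimate instead of
    blocking on the writer."""
    def __init__(self):
        self.hist = []  # (time, value) pairs, appended in time order

    def write(self, t, v):
        self.hist.append((t, v))

    def read(self, t):
        if len(self.hist) == 1:
            return self.hist[0][1]
        # Linear interpolation/extrapolation from the two latest samples.
        (t0, v0), (t1, v1) = self.hist[-2], self.hist[-1]
        return v1 + (v1 - v0) * (t - t1) / (t1 - t0)
```

A reader asking for a future timestamp gets a prediction immediately, which is what lets perception, action, and communication run at their own rates.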

  • Real-time Object Detection and Tracking Using a Fixed-Viewpoint Pan-Tilt-Zoom Camera

    Matsuyama, Toshikazu Wada, Monobe

    IPSJ Journal   Vol.40 ( No. 8 ) 3169 - 3178   1999.08  [Refereed]

  • Multi-object Behavior Recognition Based on Selective Attention

    Toshikazu Wada, Sato, Matsuyama

    IEICE Transactions on Information and Systems   D-II, Vol. J82-D-II ( No.6 ) 1031 - 1041   1999.06  [Refereed]

  • The Fixed-Viewpoint Pan-Tilt-Zoom Camera and Its Applications

    Toshikazu Wada, Ukita, Matsuyama

    IEICE Transactions on Information and Systems   D-II, Vol. J81-D-II ( No.6 ) 1182 - 1193   1998.06  [Refereed]

  • Cooperative spatial reasoning for image understanding

    T Matsuyama, T Wada

    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE ( WORLD SCIENTIFIC PUBL CO PTE LTD )  11 ( 1 ) 205 - 227   1997.02  [Refereed]

     View Summary

    Spatial reasoning, i.e., reasoning about spatial information (shape and spatial relations), is a crucial function of image understanding and computer vision systems. This paper proposes a novel spatial reasoning scheme for image understanding and demonstrates its utility and effectiveness in two different systems: a region segmentation system and an aerial image understanding system. The scheme is designed on the so-called Multi-Agent/Cooperative Distributed Problem Solving Paradigm, where a group of intelligent agents cooperate with each other to fulfill a complicated task. The first part of the paper describes a cooperative distributed region segmentation system, where each region in an image is regarded as an agent. Starting from seed regions given at the initial stage, region agents deform their shapes dynamically so that the image is partitioned into mutually disjoint regions. The deformation of each individual region agent is realized by the snake algorithm [14], and neighboring region agents cooperate with each other to find common region boundaries between them. In the latter part of the paper, we first give a brief description of the cooperative spatial reasoning method used in our aerial image understanding system SIGMA. In SIGMA, each recognized object, such as a house or a road, is regarded as an agent. Each agent generates hypotheses about its neighboring objects to establish spatial relations and to detect missing objects. Then, we compare its reasoning method with that used in the region segmentation system. We conclude the paper by showing further utilities of the Multi-Agent/Cooperative Distributed Problem Solving Paradigm for image understanding.

    DOI
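The cooperative boundary-finding idea summarized above can be caricatured in one dimension. The following sketch is an illustrative toy only (plain Python; the function name and the mean-based region model are assumptions for illustration, not the paper's snake-based formulation): two "region agents" alternately refit their region models and renegotiate the shared boundary until it stabilizes.

```python
def negotiate_boundary(signal, b, iters=50):
    """Toy 1D analogue of cooperative segmentation: two 'region agents'
    share one boundary index b. Each round both agents refit their region
    model (here just the mean intensity), then the boundary moves to the
    first pixel that the right-hand agent explains better."""
    for _ in range(iters):
        left, right = signal[:b], signal[b:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        b_new = b
        for i, v in enumerate(signal):
            # first position where the right agent's model fits better
            if abs(v - m2) < abs(v - m1):
                b_new = i
                break
        if b_new == b or b_new == 0:
            break  # boundary has stabilized (or would empty a region)
        b = b_new
    return b
```

Starting from a poor initial boundary on a two-level signal, the agents converge on the true step position regardless of which side the initial guess lies on.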

  • Cooperative Distributed Image Segmentation

    WADA Toshikazu, NOMURA Yoshihiro, MATSUYAMA Takashi

    IPSJ Journal ( Information Processing Society of Japan (IPSJ) )  Vol. 36 ( No. 4 ) 879 - 891   1995.04  [Refereed]

     View Summary

    The image partitioning problem is to find the set of regions satisfying both attribute constraints and relational constraints. The attribute constraint concerns the statistical distribution of the intensity, color and texture of a region, as well as the clarity of the region contour edge. The relational constraint is that all regions form a disjoint union of the image space. To solve this problem by taking account of both the attributes of regions and the relations among them, multi-agent processing is suitable. In this paper, a multi-agent image segmentation method is proposed, where a group of agents representing regions cooperate with each other to partition an image into a set of disjoint regions. The most distinguishing characteristics of the method are: 1. It employs spatial relations between regions as well as their attributes to partition an image. 2. Flexible spatial reasoning is realized by the cooperation among multiple agents. Each agent actively generates hypotheses about its goal state (i.e. the expected region boundary). The hypotheses generated by top-down region boundary estimation and the attributes of a region obtained by bottom-up analysis are integrated through an energy minimization process. Then, each agent modifies its state (i.e. region deformation) so as to minimize the energy function. The hypothesis generation and region deformation are repeated referring to the hypotheses and states of the other agents; the inconsistency between hypotheses is examined and less reliable ones are modified to resolve the inconsistency. Some experimental results demonstrate the effectiveness of the proposed method.

  • HIGH-PRECISION GAMMA-OMEGA HOUGH TRANSFORMATION ALGORITHM TO DETECT ARBITRARY DIGITAL LINES

    T WADA, M SEKI, T MATSUYAMA

    SYSTEMS AND COMPUTERS IN JAPAN ( SCRIPTA TECHNICA PUBL )  26 ( 3 ) 39 - 52   1995.03  [Refereed]

     View Summary

    The gamma-omega Hough transform we proposed earlier has two significant advantages: (a) no bias arises in the number of votes accumulated in the cells even when the parameter space is sampled in uniform cells and voting takes place over all pixels, and (b) the voting locus is a piecewise linear curve composed of two segments, so drawing and analyzing the curve is simple. In the conventional gamma-omega Hough transformation algorithm, however, votes from the pixel set included on one digital line spread over multiple cells in the parameter space, and the number of pixels forming a digital line is not always captured as the correct number of votes. The reason is that the conventional algorithm does not treat all of the digital lines in the image space as detection targets. In this research, for the gamma-omega Hough parameter space, we determine the cell configuration having a one-to-one correspondence with all of the digital lines in the image space and demonstrate an appropriate voting method for this cell configuration. By applying the "high-precision gamma-omega Hough transformation algorithm" based on the cell configuration and voting method proposed in this paper, digital lines having any orientation and position can be detected stably and precisely.
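As background for the Hough-transform entries above, the baseline voting scheme that the γ-ω variants refine can be sketched as a brute-force ρ-θ Hough transform (a generic textbook version, not the γ-ω cell construction itself; names and default parameters are illustrative assumptions):

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, size=64):
    """Brute-force rho-theta Hough voting over (x, y) points.
    Returns the accumulator as a dict {(rho_bin, theta_bin): votes}."""
    acc = {}
    diag = size * math.sqrt(2.0)  # offset so rho bins are non-negative
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            # Each point votes along its locus rho = x cos(theta) + y sin(theta).
            rho = x * math.cos(theta) + y * math.sin(theta)
            cell = (int(round((rho + diag) / rho_res)), t)
            acc[cell] = acc.get(cell, 0) + 1
    return acc

def strongest(acc):
    """The cell with the most votes recovers the line shared by the most points."""
    return max(acc.items(), key=lambda kv: kv[1])
```

The vote-count bias discussed in the abstracts arises precisely from how `rho_res` and `n_theta` quantize this parameter space; the γ-ω papers replace this cell layout with one in one-to-one correspondence with digital lines.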

  • 3D Shape Reconstruction of Book Surfaces Using an Image Scanner (II): Shape from Shading under a Proximal Light Source Considering Interreflection

    和田俊和, 浮田, 松山

    IEICE Transactions D-II, Vol. J78-D-II ( No. 2 ) 311 - 320   1995.02  [Refereed]

  • New Developments in Hough-Transform-Based Figure Detection Methods

    和田俊和, 松山隆司

    IPSJ Magazine (Joho Shori), Vol. 36, No. 3 [survey article]   36 ( 3 ) 253 - 263   1995

  • 3D Shape Reconstruction of Book Surfaces Using an Image Scanner (I): Shape from Shading under Proximal Illumination

    和田俊和, 浮田, 松山

    IEICE Transactions D-II, Vol. J77-D-II ( No. 6 ) 1059 - 1067   1994.06  [Refereed]

  • High Precision γ-ω Hough Transformation Algorithm to Detect Arbitrary Digital Lines

    WADA Toshikazu, SEKI Makito, MATSUYAMA Takashi

    The Transactions of the Institute of Electronics,Information and Communication Engineers. ( The Institute of Electronics, Information and Communication Engineers )  D-II, Vol. J77-D-II ( No. 3 ) 529 - 539   1994.03  [Refereed]

     View Summary

    The γ-ω Hough transform we proposed earlier has two notable advantages: (a) even when the parameter space is sampled with uniform cells and all pixels vote, no bias arises in the vote counts accumulated in the cells, and (b) each voting locus is a polyline consisting of two line segments, so drawing and analyzing the loci is easy. In the conventional γ-ω Hough transform algorithm, however, the votes from the pixel set belonging to a single digital line may be dispersed over several cells in the parameter space, so the number of pixels forming the digital line is not always captured correctly as a vote count. This is because the conventional algorithm does not treat all digital lines existing in the image space as detection targets. In this work, we derive a cell arrangement in the γ-ω parameter space that corresponds one-to-one with all digital lines in the image space, and clarify an appropriate voting method for that arrangement. Using the "high-precision γ-ω Hough transform algorithm" based on the proposed cell arrangement and voting method, digital lines of arbitrary orientation and position can be detected stably and with high precision.

  • Cooperative Distributed Processing for Image Understanding: Region Segmentation as an Example

    松山隆司, 和田俊和

    Multi-Agent and Cooperative Computation III     1994  [Refereed]

  • The γ–ω Hough transform: Linearizing voting curves in an unbiased ϱ–θ parameter space

    Toshikazu Wada, Takahiro Fujii, Takashi Matsuyama

    Systems and Computers in Japan   24 ( 6 ) 14 - 25   1993  [Refereed]

     View Summary

    The Hough transform is an effective method for detecting figure elements in images corrupted by noise. Voting on the feature points in the image space is performed in the parameter space, after which the points in the parameter space with many votes are focused on in order to detect the figure elements. In the Hough transform, the parameter space is partitioned into elements called cells where the votes are accumulated. However, when the subject is a digital image, if the parameter space is not sampled appropriately, a bias arises in the number of votes accumulated in the cells. In this paper, we present a sampling method for the parameter space that does not produce distortion in the ρ–θ parameter space used in line detection, and we construct a γ–ω space where no bias in the number of votes arises even when sampling is uniform. The γ–ω parameter space corresponds to the ρ–θ parameter space and has the features of no bias in the number of votes and voting loci, which are segmented lines. In addition, the characteristics of the γ–ω parameter space, where the feature points and voting loci are easily converted from one to the other, are used, and a verification method for the line segments is proposed that does not scan the image space again. By combining the line detection method that uses the γ–ω parameter space and this verification method, stable line detection becomes possible. Copyright © 1993 Wiley Periodicals, Inc., A Wiley Company

    DOI

  • γ-ω Hough Transform - Linearizing Voting Curves in an Unbiased ρ-θ Parameter Space -

    WADA Toshikazu, FUJII Takahiro, MATSUYAMA Takashi

    The Transactions of the Institute of Electronics, Information and Communication Engineers ( IEICE Information and Systems Society )  D-II, Vol. J75-D-II ( No.1 ) 21 - 30   1992.01  [Refereed]

  • GAMMA-OMEGA-HOUGH TRANSFORM - ELIMINATION OF QUANTIZATION NOISE AND LINEARIZATION OF VOTING CURVES IN THE RHO-THETA-PARAMETER SPACE

    T WADA, T MATSUYAMA

    11TH IAPR INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, PROCEEDINGS, VOL III ( I E E E, COMPUTER SOC PRESS )    272 - 275   1992  [Refereed]

  • Scale-Space Filtering of Periodic Waveforms

    和田 俊和, 顧 一禾, 佐藤 誠

    IEICE Transactions D-II ( IEICE Information and Systems Society )  73 ( 4 ) 544 - 552   1990.04

  • Hierarchical Representation of Generalized Waveforms by Structural Lines (Special Issue on Human Interfaces: Figures and Images)

    佐藤 誠, 和田 俊和

    IEICE Transactions D ( IEICE )  70 ( 11 ) 2154 - 2159   1987.11

▼display all

Books etc

  • Image Dissimilarity Measures Robust to Degradation

    大池洋史, 岡 藍子, 和田俊和( Part: Joint author)

    画像ラボ Vol.23, No.7, pp.29-36  2012.07 

  • Robotics Research: The 13th International Symposium ISRR

    Makoto Kaneko, Yoshihiko Nakamura( Part: Joint author,  Work: Visual Object Tracking Using Positive and Negative Examples)

    Springer  2011.04 

  • CVIM Tutorial Series: Frontier Guide to Computer Vision 3

    八木康史, 斎藤英雄( Part: Joint author,  Work: Chapter 5: Theory and Algorithms of Nearest Neighbor Search)

    アドコム・メディア株式会社  2010.12 

  • Advances and Prospects of Approximate Nearest Neighbor Search in High-Dimensional Spaces

    Journal of the Japanese Society for Artificial Intelligence ( Part: Sole author,  Work: Advances and Prospects of Approximate Nearest Neighbor Search in High-Dimensional Spaces)

    Vol. 25, No. 6  2010.11 

  • Object Detection and Tracking Using Images (6, Final): Object Tracking (3) Tracking in a Parameter Space

    和田俊和( Part: Joint author,  Work: Object Detection and Tracking Using Images (6, Final): Object Tracking (3) Tracking in a Parameter Space)

    画像ラボ 17(11),  2006.11 

  • Object Detection and Tracking Using Images (5): Object Tracking (1) Standard Methods

    和田俊和( Part: Joint author,  Work: Object Detection and Tracking Using Images (5): Object Tracking (1) Standard Methods)

    画像ラボ 17(9),  2006.09 

  • Survey: Example-Based Pattern Recognition and Computer Vision

    和田俊和( Part: Joint author,  Work: Survey: Example-Based Pattern Recognition and Computer Vision)

    IEICE Technical Report, PRMU, Vol.2006, No.93,  2006.09 

  • Object Detection and Tracking Using Images (4): Object Detection (2) Detection Based on Dissimilarity to the Background

    和田俊和( Part: Joint author,  Work: Object Detection and Tracking Using Images (4): Object Detection (2) Detection Based on Dissimilarity to the Background)

    画像ラボ 17(7),  2006.07 

  • Object Detection and Tracking Using Images (3): Object Detection (1) Detection by Matching

    和田俊和( Part: Joint author,  Work: Object Detection and Tracking Using Images (3): Object Detection (1) Detection by Matching)

    画像ラボ 17(5),  2006.05 

  • Object Detection and Tracking Using Images (2): Object Tracking: Overview

    和田俊和( Part: Joint author,  Work: Object Detection and Tracking Using Images (2): Object Tracking: Overview)

    画像ラボ 17(3),  2006.03 

  • Applications of Machine Learning to Intelligent Robot Systems (2)

    中村恭之, 和田俊和( Part: Joint author,  Work: Applications of Machine Learning to Intelligent Robot Systems (2))

    機械の研究 vol.58 No.2,  2006.02 

  • Applications of Machine Learning to Intelligent Robot Systems (1)

    中村 恭之, 和田 俊和( Part: Joint author,  Work: Applications of Machine Learning to Intelligent Robot Systems (1))

    機械の研究, Vol.58, No.1,  2006.01 

  • Object Detection and Tracking Using Images (1): Object Detection: Overview

    和田俊和( Part: Joint author,  Work: Object Detection and Tracking Using Images (1): Object Detection: Overview)

    画像ラボ 17(1),  2006.01 

  • Classification and Nonlinear Mapping Learning Using Space Partitioning: (2) Nonlinear Mapping Learning Based on Recursive Partitioning of the Data Space: Regression Trees Past and Present

    中村恭之, 和田俊和( Part: Joint author,  Work: Classification and Nonlinear Mapping Learning Using Space Partitioning: (2) Nonlinear Mapping Learning Based on Recursive Partitioning of the Data Space: Regression Trees Past and Present)

    IPSJ Magazine (Joho Shori) Vol.46, No.9,  2005.09 

  • Classification and Nonlinear Mapping Learning Using Space Partitioning: (1) Accelerating Nearest Neighbor Classification by Space Partitioning

    和田俊和( Part: Joint author,  Work: Classification and Nonlinear Mapping Learning Using Space Partitioning: (1) Accelerating Nearest Neighbor Classification by Space Partitioning)

    IPSJ Magazine (Joho Shori) Vol. 46, No. 8,  2005.08 

  • Computer Vision: Technical Review and Future Prospects

    松山隆司, 久野義徳, 井宮 淳( Part: Joint author,  Work: Chapter 10: Hough Transform: Detection and Classification of Geometric Objects Based on Voting and the Majority Principle)

    (株)新技術コミュニケーションズ  1998.09 

  • Spatial Computing: Issues in Vision, Multimedia and Visualization Technologies

    T. Caelli, Peng Lam, H. Bunke( Part: Joint author,  Work: Cooperative Spatial Reasoning for Image Understanding)

    World Scientific Pub Co Inc  1997.10 

  • Lecture Notes in Software Science, Vol. 8, "Multi-Agent and Cooperative Computation III", JSSST MACC'93

    奥乃博( Part: Joint author,  Work: Cooperative Distributed Processing for Image Understanding: Region Segmentation as an Example)

    近代科学社  1994.10 

▼display all

Misc

  • Development of a Pruning Method for Efficient DNNs: Neuron Selection and Reconstruction Based on Final-Layer Error

    菅間幸司, 和田俊和

    IEICE Technical Report   PRMU2022 ( 66 ) 42 - 47   2023.03

  • A Continual Learning Method Combining Replay and Parameter Isolation

    足立亮太, 和田俊和

    IEICE Technical Report   PRMU2022 ( 121 ) 335 - 340   2023.03

  • A Study on Data Augmentation by Pixel-Value Transformation for Product Image Retrieval DNNs

    岡本 悠, 岸部真紀, 和田俊和

    IEICE Technical Report   PRMU2022 ( 94 ) 181 - 186   2023.03

  • A DNN Compression Method Based on Activation-Function Output Error

    菅間幸司, 和田俊和

    IEICE Technical Report   PRMU2022 ( 38 ) 34 - 39   2022.12

  • DN4C: An Interactive Segmentation System Combining a Deep Neural Network and a Nearest Neighbor Classifier

    和田俊和, 菅間幸司 (Part: Lead author )

    IEICE Technical Report   PRMU2022 ( 35 ) 19 - 24   2022.12

  • Sampling Strategies for Data Pruning

    東 遼太, 和田俊和

    IEICE Technical Report   PRMU2022 ( 48 ) 85 - 90   2022.12

  • An Image Transformation Method Representing Height Changes of a Wide-Angle Ceiling Camera for Human Detection

    北尾颯人, 和田俊和

    IEICE Technical Report   PRMU2022 ( 43 ) 57 - 62   2022.12

  • Pose Transformation of Person Images Suppressing Color Transfer from Occluders for Person Re-identification

    岸部真紀, 和田俊和

    IEICE Technical Report   PRMU2022 ( 15 ) 31 - 36   2022.09

  • A Compression Method for DNNs with Branching Structures Using Convolutional Skip Connections

    菅間幸司, 和田俊和

    IEICE Technical Report   PRMU2022 ( 16 ) 37 - 42   2022.09

  • Recurrent Tracknet: A Study on Continuous Ball Detection Using DNNs

    前迫 元, 和田俊和

    IEICE Technical Report   PRMU2021 ( 38 ) 77 - 82   2021.12

  • A Study on Structure Estimation of Facial Patterns Using GCNs

    圓岡直哉, 和田俊和

    IEICE Technical Report   PRMU2021 ( 41 ) 92 - 97   2021.12

  • Toward High-Accuracy Training of CNNs Using Dropout

    秋葉浩和, 和田俊和

    IEICE Technical Report   PRMU2021 ( 52 ) 154 - 159   2021.12

  • Efficient DNN Training by Data Selection

    東 遼太, 和田俊和

    IEICE Technical Report   PRMU2021 ( 51 ) 148 - 153   2021.12

  • Feature Sharing: Integrating Multiple DNNs by Sharing Feature Maps

    磯田雄基, 和田俊和

    IPSJ SIG Technical Report (CVIM), 2021-CVIM-224 (4)     2021.01

  • A Confidence-Rated Corner Extraction Method for Homography Transformation of Planar Objects

    藤浦弘也, 和田俊和

    電子情報通信学会技術研究報告=IEICE technical report:信学技報   120 ( 300 ) 93 - 98   2020.12

  • Simultaneous Estimation of Pose and Person Region for Human Tracking

    渡邉和彦, 和田俊和

    電子情報通信学会技術研究報告=IEICE technical report:信学技報   120 ( 300 ) 18 - 23   2020.12

  • Regularization by Knowledge Distillation for Learning from Small Data

    東遼太, 和田俊和

    電子情報通信学会技術研究報告=IEICE technical report:信学技報   120 ( 300 ) 133 - 138   2020.12

  • Supervised Learning of Disentangled Feature Representations: Disentangling Features with a Classifier

    黒田修二郎, 和田俊和

    電子情報通信学会技術研究報告=IEICE technical report:信学技報   120 ( 300 ) 116 - 121   2020.12

  • Serialized Residual Network

    Koji Kamma, Toshikazu Wada

    23rd Meeting on Image Recognition and Understanding (MIRU2020), OS1-2A-5     2020.08  [Refereed]

  • Multi-Person Pose Estimation Based on Mutual Estimation of Person Regions and Poses

    渡邉和彦, 和田俊和

    23rd Meeting on Image Recognition and Understanding (MIRU2020), IS1-1-18     2020.08

  • Improving the Accuracy of Grad-CAM by Iterative Backpropagation

    村上佑介, 和田俊和

    23rd Meeting on Image Recognition and Understanding (MIRU2020), IS1-2-12     2020.08

  • Biorthogonal System Based Channel Selection Algorithm for Neural Network Pruning

    Koji Kamma, Toshikazu Wada

    IEICE Technical Report   120 ( PRMU-14 ) 7 - 12   2020.05

  • Acceleration by GPU Parallelization for Local Trilateral Upsampling

    門脇 奨, 和田 俊和

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報 ( 電子情報通信学会 )  119 ( 386 ) 83 - 88   2020.01

  • Acceleration by GPU Parallelization for Local Trilateral Upsampling

    門脇 奨, 和田 俊和

    IPSJ SIG Technical Report (CVIM), 2020-CVIM-220 (18)     2020.01

  • Generating Image Retrieval Indexes by Integrating Intermediate Layers of DCNNs

    谷洸明, 和田俊和

    IPSJ SIG Technical Report (CVIM), 2019-CVIM-219 (4)     2019.11

  • Accelerating ICP Using Local Image Features for vSLAM

    島田佳典, 和田俊和

    18th Forum on Information Technology (FIT2019), H-013     2019.09

  • The Difficulty of Constellation Image Classification by DNNs and How to Resolve It

    黒田修二郎, 和田俊和

    18th Forum on Information Technology (FIT2019), H-019     2019.09

  • An Efficient Parameter Reduction Method for DNNs Using Neuro Coding/Unification

    磯田 雄基, 菅間 幸司, 和田 俊和

    22nd Meeting on Image Recognition and Understanding (MIRU2019), PS3-20     2019.08

  • A Study on Free-Pose Estimation Based on OpenPose

    渡邉 和彦, 和田 俊和

    22nd Meeting on Image Recognition and Understanding (MIRU2019), OS4A-5     2019.08

  • Efficient DNN Training by Data Selection and Weighting

    東 遼太, 和田 俊和

    22nd Meeting on Image Recognition and Understanding (MIRU2019), OS2A-7     2019.07

  • Accelerating the Convolutional Neural Network by Smart Channel Pruning

    Koji Kamma, Toshikazu Wada

    22nd Meeting on Image Recognition and Understanding (MIRU2019), OS1A-2     2019.07

  • Semantic Label Propagation for Interactive Region Annotation on Video

    久保 友輔, 和田 俊和

    22nd Meeting on Image Recognition and Understanding (MIRU2019), OS3A-2     2019.07

  • On Predatory Journals and Conferences : A Muddy Stream beside You

    和田 俊和

    IPSJ Magazine (Joho Shori) ( Information Processing Society of Japan )  60 ( 2 ) 104 - 108   2019.01

     View Summary

    This article surveys the predatory journals and conferences that have emerged, and continue to increase, alongside the recent move toward open-access publishing: their background, checkpoints for identifying them, their concrete harms, the need for blacklists, their relation to research ethics, and possible countermeasures.

  • Non-differential Descriptor: Robust gradient reversal descriptor based on intensity

    大西 秀明, 和田 俊和

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報 ( 電子情報通信学会 )  117 ( 392 ) 195 - 202   2018.01

  • NLFBDT : Non-Linear Fish Bone Decision Tree incorporating SVM for classification and visualization

    松尾 大典, 和田 俊和

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報 ( 電子情報通信学会 )  117 ( 392 ) 285 - 291   2018.01

  • Geometric and brightness correction of nighttime images by matching images

    野水 建吾, 和田 俊和

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報 ( 電子情報通信学会 )  117 ( 392 ) 37 - 43   2018.01

  • A study on fast convergence of power iteration by combining shifted method and series acceleration

    平松 幹洋, 和田 俊和

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報 ( 電子情報通信学会 )  117 ( 392 ) 277 - 283   2018.01

  • Video Rate Image Retrieval using Query Caching

    大倉 有人, 和田 俊和

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報 ( 電子情報通信学会 )  117 ( 392 ) 257 - 261   2018.01

  • Accelerating Fractal Image Compression Using Nearest Neighbor Search

    ONISHI Hideaki, WADA Toshikazu

    電子情報通信学会論文誌D 情報・システム ( The Institute of Electronics, Information and Communication Engineers )  J100-D ( 12 ) 964 - 973   2017.12

     View Summary

    Fractal image compression finds an image-to-image mapping (iterated function system) whose fixed point is the target image. This problem is recognized as a Nearest Subspace Search (NSS) problem, because the primitive mapping from a Domain image block to a Range block has two parameters, intensity magnification and offset, which span a 2D subspace. In this paper, we first show that this NSS problem is equivalent to a Nearest Neighbor Search (NNS) problem, because all subspaces involve a common non-zero vector. An essentially equivalent interpretation has been shown in D. Saupe's past research; this paper, however, gives a simpler and more general interpretation. Based on this interpretation, we compare equivalent NNS settings with different query vectors and clarify the most efficient setting. In addition, based on this interpretation, we generalize fractal image compression.

    DOI
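The search step described in this abstract can be illustrated with the naive per-domain least-squares fit of R ≈ aD + b that the paper's NNS reduction accelerates (a hedged sketch of the baseline only; function names are illustrative):

```python
def fit_affine(d, r):
    """Closed-form least-squares fit of r ≈ a*d + b for two equal-length
    pixel lists; returns (a, b, squared residual)."""
    n = len(d)
    md = sum(d) / n
    mr = sum(r) / n
    var = sum((x - md) ** 2 for x in d)
    a = sum((x - md) * (y - mr) for x, y in zip(d, r)) / var if var else 0.0
    b = mr - a * md
    err = sum((a * x + b - y) ** 2 for x, y in zip(d, r))
    return a, b, err

def best_domain(range_block, domain_blocks):
    """Return the index of the domain block whose affine map a*D+b best
    reproduces the range block: the search fractal coding must solve."""
    errs = [fit_affine(d, range_block)[2] for d in domain_blocks]
    return min(range(len(errs)), key=errs.__getitem__)
```

The paper's point is that, after suitable normalization, comparing these residuals reduces to a single nearest neighbor search rather than a per-domain fit.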

  • Fish Bone Decision Tree: a method for splitting off the most distinguishable class

    松尾 大典, 和田 俊和

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報 ( 電子情報通信学会 )  117 ( 211 ) 173 - 179   2017.09

  • Multiple decision tree for classification and image retrieval

    松尾 大典, 前田 啓, 和田 俊和

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報 ( 電子情報通信学会 )  116 ( 412 ) 121 - 128   2017.01

  • Geometric interpretation of search problem in fractal image compression and acceleration

    大西 秀明, 和田 俊和

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報 ( 電子情報通信学会 )  116 ( 412 ) 193 - 198   2017.01

  • A geometric consistency checking method for keypoint matching : Application to image retrieval

    大倉 有人, 和田 俊和

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報 ( 電子情報通信学会 )  116 ( 412 ) 115 - 120   2017.01

  • Geometric interpretation of search problem in fractal image compression and acceleration

    大西 秀明, 和田 俊和

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報 ( 電子情報通信学会 )  116 ( 411 ) 193 - 198   2017.01

  • Temperature Characteristics of Indirect Moxibustion: Comparison of Pedestal, Cylinder, and Stick Moxibustion

    和田 恒彦, 全 英美, 宮本 俊和

    Journal of Japanese Society of Oriental Physiotherapy ( Journal of Japanese Society of Oriental Physiotherapy )  42 ( 2 ) 65 - 71   2017

     View Summary

    [Purpose] Because indirect moxibustion can cause burns, its temperature characteristics need to be understood, yet no previous study has examined temperature changes at sub-second resolution. We therefore examined the temperature characteristics of pedestal, cylinder, and stick moxibustion. [Methods] Thermocouples were placed at six points on a 1 mm rubber sheet on a 4 mm plywood board: directly under the moxibustion site (0.00 mm) and at 3.75, 7.50, 15.00, 30.00, and 45.00 mm outward. Temperature data were captured on a personal computer through a temperature interface. Stick moxibustion was held at heights of 20 mm and 100 mm above the thermocouple. Measurements were taken every 0.55 s for 600 s, six times per type. [Results and Discussion] The mean maximum temperatures were 55.9±5.0°C (pedestal), 64.3±3.3°C (cylinder), 51.2±4.7°C (stick at 20 mm), and 30.1±3.3°C (stick at 100 mm); the times to maximum temperature were 160.7 s, 154.5 s, 126.7 s, and 182.5 s, respectively. The pedestal-type curve rose gradually, arced near the peak, and fell gently. The cylinder type rose more sharply than the pedestal type, eased slightly near the peak, and dropped sharply from it. The stick type at 20 mm rose gradually, arced gently at the peak, and then fell very gently and almost linearly; at 100 mm the temperature rose very gently and changed almost linearly. For the pedestal and cylinder types, almost no temperature rise was observed beyond 7.5 mm from the site, and the pedestal type showed rapid temperature change near its maximum. The results suggest that different stimuli can be delivered by selecting among the indirect moxibustion types. [Conclusion] High-frequency temperature measurement around the moxibustion site captured previously unknown temperature characteristics of pedestal, cylinder, and stick moxibustion.

    DOI

  • Trainable nearest neighbor search exploiting query bias

    香川 椋平, 和田 俊和

    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報 ( 電子情報通信学会 )  116 ( 209 ) 135 - 140   2016.09

  • Temperature Characteristics of Stick Moxibustion: Distance from the Burning Part and Heated Area

    和田 恒彦, 全 英美, 宮本 俊和

    Journal of Japanese Society of Oriental Physiotherapy ( Journal of Japanese Society of Oriental Physiotherapy )  41 ( 2 ) 51 - 56   2016

     View Summary

    [Purpose] Stick moxibustion is used not only by practitioners but also for self-care, and burn accidents have been reported, so its temperature characteristics need to be understood. We examined the effect of height above the burning part, temperature change over time, and the heated area.
    [Methods] Thermocouples (ST-50, Rika Kogyo) were placed at six points on a 1 mm rubber sheet on a 4 mm plywood board: directly under the burning part (0 mm) and at 3.75, 7.5, 15, 30, and 45 mm outward; data were captured on a PC through a temperature interface (E830, Techno Seven). The moxa stick (Onkyu Jun-Aijo, Kanaken) was measured six times for 10 minutes at tip-to-thermocouple heights of 20, 30, 40, 60, 80, and 100 mm.
    [Results and Discussion] Room temperature during measurement was 24.5±2.5°C. The mean maximum temperatures were 44.1±4.7°C at a height of 20 mm, 38.5±2.6°C at 30 mm, 36.2±4.5°C at 40 mm, 29.4±2.9°C at 60 mm, 26.8±2.3°C at 80 mm, and 25.6±3.1°C at 100 mm. At a height of 100 mm, the temperatures at horizontal distances of 3.75, 7.5, 15, 30, and 45 mm were 25.9±3.3°C, 25.9±3.2°C, 25.9±3.1°C, 26.1±3.0°C, and 26.3±3.1°C. At a height of 20 mm, the temperature directly below was overtaken by outer points: by 15 mm after 260 s, 7.5 mm after 290 s, 3.75 mm after 310 s, 30 mm after 430 s, and 45 mm after 600 s, presumably due to the accumulating ash.
    [Conclusion] Directly below the stick, the maximum temperature was higher the closer the burning part, whereas at a height of 100 mm the temperature was higher farther away, and reversals between horizontal distance and temperature appeared over time. The temperature rise and stimulated area of stick moxibustion can thus be varied via the height above the burning part, confirming that the practitioner can adjust the stimulus.

    DOI

  • Wide-Area Person Tracking by a Network of Multiple Person Detectors

    森 敦, 大倉有人, 目片健一, 大川一朗, 古川裕三, 技研トラステム, 和田 俊和

    18th Meeting on Image Recognition and Understanding (MIRU2015)     2015.07  [Refereed]

  • Rethinking Similarity and Dissimilarity Measures between Patterns

    和田 俊和

    18th Meeting on Image Recognition and Understanding (MIRU2015)     2015.07  [Invited]

  • Multicopter Fly-away: Causes and the Need for Image-Based Flight Control

    和田 俊和

    21st Symposium on Sensing via Image Information (SSII2015)     2015.06  [Invited]

  • Query Feature Reduction For Multiple Instance Image Retrieval

    YUASA KEITA, WADA TOSHIKAZU

    Technical report of IEICE. Multimedia and virtual environment ( The Institute of Electronics, Information and Communication Engineers )  114 ( 410 ) 165 - 170   2015.01

     View Summary

    For Multiple Instance Image Retrieval (MIIR), we have proposed index feature reduction method, which reduces the number of indexes attached to the image entries in the database. This method computes the importance measure representing the stability and the discrimination power of each feature by using the framework of Diverse Density and reduces the features having less importance measures. Through the experiments, this reduction drastically reduces the memory usage and retrieval accuracy, but the acceleration of retrieval speed is limited. This is because the number of nearest neighbor searches performed for each query is equivalent to the number of local features in the query image. This means that query feature reduction is required for the acceleration of image retrieval. This report presents a query feature reduction method for solving this problem.

  • Covariance matrix estimation for multivariate Gaussian process regression

    MATSUMURA Yuki, WADA Toshikazu

    Technical report of IEICE. Multimedia and virtual environment ( The Institute of Electronics, Information and Communication Engineers )  114 ( 410 ) 117 - 122   2015.01

     View Summary

    Gaussian process regression is a nonlinear regression that estimates the expected value of the output and its variance for a given input. The original Gaussian process regression estimates a scalar expected output value, which can simply be extended to estimate a vector value. However, the covariance matrix cannot be estimated by this simple extension. We have proposed an accelerated Gaussian process regression that introduces a dynamic active set of input-output pairs, and the weighted output covariance can be utilized as the output covariance for multivariate Gaussian process regression. However, the diagonal elements of the estimated matrix can be negative. This is caused by negative output weights originating from the inverse of the gram matrix. In this report, we propose a method to estimate non-negative weights for the outputs. The estimation is twofold: initially estimate the output vector by simple multivariate Gaussian process regression, then recompute non-negative weights so as to minimize the output error. The resulting weights are utilized to estimate the covariance matrix. Its effectiveness was confirmed by applying the proposed method to anomaly detection on plant data and electrocardiogram data in the experiments.
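For readers unfamiliar with the baseline, standard scalar GP regression with an RBF kernel (the textbook form that this report extends to vector outputs; the NumPy implementation and parameter names are illustrative) predicts a mean and a variance as follows:

```python
import numpy as np

def gp_predict(X, y, Xq, ell=1.0, sf=1.0, sn=1e-3):
    """Textbook scalar GP regression with an RBF kernel: predictive mean
    and variance at query points Xq (1-D inputs for simplicity)."""
    def k(A, B):
        return sf**2 * np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell) ** 2)
    K = k(X, X) + sn**2 * np.eye(len(X))   # gram matrix with noise term
    Kq = k(Xq, X)
    mean = Kq @ np.linalg.solve(K, y)
    # predictive variance: k(x,x) - kq K^{-1} kq^T (diagonal only)
    var = sf**2 - np.einsum('ij,ji->i', Kq, np.linalg.solve(K, Kq.T))
    return mean, var
```

Near training points the variance collapses toward the noise level, and far from them it reverts to the prior variance; the report's concern is the vector-output analogue of this variance, where naive weighting can make diagonal covariance entries negative.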

  • Map based Cooperation for Multiple Pedestrian Detection Systems

    MORI Atsushi, WADA Toshikazu

    Technical report of IEICE. Multimedia and virtual environment ( The Institute of Electronics, Information and Communication Engineers )  114 ( 410 ) 211 - 217   2015.01

     View Summary

    This report presents a cooperation method for multiple pedestrian detection systems, each of which has a single camera with wide view angle, for extending their observation area. Most single-viewpoint human detection algorithms perform matching between pedestrian models and captured images. Since the major task of the target system is people counting, the output of the system is rather simple. We assume that only the pedestrian positions on the image plane are available. For the cooperation of such systems, we propose map-based cooperation scheme, where the detected positions by multiple systems are transformed by nonlinear regression onto a single floor map, and the locations detected by different systems are merged based on the proximity. In our research, we employ linear regression tree for the nonlinear regression due to its short computation time. However, single camera people detection system cannot produce accurate floor position without the height information of each pedestrian. This is because the half upper body of the human model has strong matching weight, and hence, the head position is well estimated on the image plane but the foot position is not. Our method computes the correspondences of the locations of the same pedestrian, and the pedestrian's height is estimated simultaneously based on the discrepancy of their locations on the map.
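The proximity-based merging step described above can be sketched as a greedy grouping of detections that have already been transformed onto the common floor map (a toy stand-in; the regression-tree mapping and the height estimation are omitted, and all names are illustrative):

```python
def merge_detections(point_lists, radius=0.5):
    """Greedy proximity merge of detections already mapped onto a common
    floor map: points closer than `radius` are fused into one pedestrian
    position by averaging."""
    pts = [p for lst in point_lists for p in lst]
    used = [False] * len(pts)
    merged = []
    for i, p in enumerate(pts):
        if used[i]:
            continue
        # gather all not-yet-merged points within `radius` of this seed
        close = [j for j, q in enumerate(pts) if not used[j]
                 and (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 < radius ** 2]
        for j in close:
            used[j] = True
        merged.append((sum(pts[j][0] for j in close) / len(close),
                       sum(pts[j][1] for j in close) / len(close)))
    return merged
```

Two detections of the same pedestrian from different cameras collapse into one averaged map position, while distinct pedestrians survive as separate points.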

  • Accelerating Diverse Density for Keypoint Reduction

    向井 祐一郎, 和田 俊和

    Technical report of IEICE. Multimedia and virtual environment ( The Institute of Electronics, Information and Communication Engineers )  114 ( 410 ) 159 - 164   2015.01

     View Summary

    Content Based Image Retrieval (CBIR) using local features can be classified into two types. One is the method using single vector representation that integrates local features detected in an image into a single Bag of Feature (BoF) vector. The other uses multiple instance representation without integration. We call the latter method Multiple Instance Image Retrieval (MIIR). In MIIR, a method for reducing database indexes has been proposed. This method employs the framework of Diverse Density (DD) to represent the importance measure, which means the stability as well as the discriminative power of the feature (instance). This reduction reduces the memory usage and the retrieval accuracy. The computational cost of DD, however, is considerably big, because it has to compute all distances between all combinations of instances. This report presents the approximate computation of DD for MIIR using nearest neighbor search. We confirmed through the experiments that the computational speed of DD becomes 520 times faster on Nister's database while keeping the accuracy.
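The importance measure can be caricatured with plain nearest-neighbor distances (a hypothetical scoring for illustration, not the paper's exact Diverse Density formulation): a feature scores high when its nearest same-label neighbor is close (stability) and its nearest other-label neighbor is far (discriminative power).

```python
def importance_scores(features, labels):
    """Toy stand-in for a DD-style importance measure: the ratio of the
    nearest other-label distance to the nearest same-label distance."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    scores = []
    for i, (f, l) in enumerate(zip(features, labels)):
        same = min(dist(f, g) for j, (g, m) in enumerate(zip(features, labels))
                   if j != i and m == l)
        other = min(dist(f, g) for g, m in zip(features, labels) if m != l)
        scores.append(other / (same + 1e-9))
    return scores
```

Keypoint reduction would then keep only the highest-scoring features; the brute-force double loop here is exactly the quadratic cost that the report replaces with nearest neighbor search.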

  • Covariance matrix estimation for multivariate Gaussian process regression

    松村 祐貴, 和田 俊和

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  2015 ( 22 ) 1 - 6   2015.01

     View Summary

    Gaussian process regression is a nonlinear regression that estimates the expected value of the output and its variance for a given input. The original Gaussian process regression estimates a scalar expected output value, which can simply be extended to estimate a vector value. However, the covariance matrix cannot be estimated by this simple extension. We have proposed an accelerated Gaussian process regression that introduces a dynamic active set of input-output pairs, and the weighted output covariance can be utilized as the output covariance for multivariate Gaussian process regression. However, the diagonal elements of the estimated matrix can be negative. This is caused by negative output weights originating from the inverse of the gram matrix. In this report, we propose a method to estimate non-negative weights for the outputs. The estimation is twofold: initially estimate the output vector by simple multivariate Gaussian process regression, then recompute non-negative weights so as to minimize the output error. The resulting weights are utilized to estimate the covariance matrix. Its effectiveness was confirmed by applying the proposed method to anomaly detection on plant data and electrocardiogram data in the experiments.

  • Map based Cooperation for Multiple Pedestrian Detection Systems

    森 敦, 和田 俊和

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  2015 ( 39 ) 1 - 7   2015.01

     View Summary

    This report presents a cooperation method for multiple pedestrian detection systems, each of which has a single camera with wide view angle, for extending their observation area. Most single-viewpoint human detection algorithms perform matching between pedestrian models and captured images. Since the major task of the target system is people counting, the output of the system is rather simple. We assume that only the pedestrian positions on the image plane are available. For the cooperation of such systems, we propose map-based cooperation scheme, where the detected positions by multiple systems are transformed by nonlinear regression onto a single floor map, and the locations detected by different systems are merged based on the proximity. In our research, we employ linear regression tree for the nonlinear regression due to its short computation time. However, single camera people detection system cannot produce accurate floor position without the height information of each pedestrian. This is because the half upper body of the human model has strong matching weight, and hence, the head position is well estimated on the image plane but the foot position is not. Our method computes the correspondences of the locations of the same pedestrian, and the pedestrian's height is estimated simultaneously based on the discrepancy of their locations on the map.

  • Query Feature Reduction For Multiple Instance Image Retrieval

    湯浅 圭太, 和田 俊和

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  2015 ( 29 ) 1 - 6   2015.01

     View Summary

    For Multiple Instance Image Retrieval (MIIR), we have proposed index feature reduction method, which reduces the number of indexes attached to the image entries in the database. This method computes the importance measure representing the stability and the discrimination power of each feature by using the framework of Diverse Density and reduces the features having less importance measures. Through the experiments, this reduction drastically reduces the memory usage and retrieval accuracy, but the acceleration of retrieval speed is limited. This is because the number of nearest neighbor searches performed for each query is equivalent to the number of local features in the query image. This means that query feature reduction is required for the acceleration of image retrieval. This report presents a query feature reduction method for solving this problem.

  • Accelerating Diverse Density for Keypoint Reduction

    向井 祐一郎, 和田 俊和

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  2015 ( 28 ) 1 - 6   2015.01

     View Summary

    Content Based Image Retrieval (CBIR) using local features can be classified into two types. One is the method using single vector representation that integrates local features detected in an image into a single Bag of Feature (BoF) vector. The other uses multiple instance representation without integration. We call the latter method Multiple Instance Image Retrieval (MIIR). In MIIR, a method for reducing database indexes has been proposed. This method employs the framework of Diverse Density (DD) to represent the importance measure, which means the stability as well as the discriminative power of the feature (instance). This reduction reduces the memory usage and the retrieval accuracy. The computational cost of DD, however, is considerably big, because it has to compute all distances between all combinations of instances. This report presents the approximate computation of DD for MIIR using nearest neighbor search. We confirmed through the experiments that the computational speed of DD becomes 520 times faster on Nister's database while keeping the accuracy.

  • Covariance Matrix Estimation in Gaussian Process Regression with Vector Outputs

    松村祐貴, 和田俊和

    IPSJ SIG Technical Report (CVIM)   2015-CVIM-195   2015.01

  • Fast Keypoint Reduction for Image Retrieval by Accelerated Diverse Density Computation.

    Toshikazu Wada, Yuichi Mukai

    IEEE International Conference on Data Mining (ICDM) 2015 ( IEEE Computer Society )    102 - 107   2015  [Refereed]

    DOI

  • Semi-supervised Component Analysis

    Kenji Watanabe, Toshikazu Wada

    2015 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC 2015): BIG DATA ANALYTICS FOR HUMAN-CENTRIC SYSTEMS ( IEEE COMPUTER SOC )    3011 - 3016   2015  [Refereed]

     View Summary

    Object re-identification techniques are essential to improve the identification performance in video surveillance tasks. The re-identification problem is equivalent to a multi-view problem in which an unknown individual is identified across spatially disjoint data. For re-identification, several multi-view feature transformation methods have been proposed. These methods are formulated in the supervised learning framework and show better performance in multi-view classification tasks in which the training data are observed by different sensors. However, in re-identification tasks these methods may not be required, because a simple feature transformation method such as linear discriminant analysis (LDA) already achieves reasonable identification rates. In this paper, we propose a novel semi-supervised feature transformation method, formulated as a natural coupling of PCA and LDA modeled in the graph embedding framework. Our method showed the best re-identification performance compared with other feature transformation methods.

    DOI

  • Face Image Recall by Nonlinear Mapping and Its Application To Human Identification

    FURUTANI SHUNTA, WADA TOSHIKAZU, MATSUMURA YUKI

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  114 ( 197 ) 63 - 69   2014.09

     View Summary

    Face image recognition is widely used in many applications, from crime investigation to user identification on personal devices. Most face recognition systems, however, do not show the reason for the recognition, i.e., by which part of the face the image is recognized as the resulting person. This paper proposes a face image recognition method that provides the recognition result as well as the reason for it. The method is basically an image recall system using example-based nonlinear mapping. This nonlinear mapping refers to the image database and performs image-to-image mapping using the database so as to realize image recall, where the system generates an image similar to the input by a partially linear combination of images in the database. As a by-product, weighting coefficients for the database images are obtained at any location on the recalled image. By applying this method to face images, we can recognize the person ID by summing up the coefficients for each ID. From the coefficient distribution for the resulting person ID, we can understand which part provides the evidence for the recognition result. Through experiments conducted on the ORDB face database, we confirmed that our method has a high recognition rate.

  • A Method For Finding The Image Pair Capturing The Same Object Based On Local Image Features

    YASUDA ATSUSHI, WADA TOSHIKAZU

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  114 ( 197 ) 115 - 120   2014.09

     View Summary

    Crime investigation based on surveillance videos is time- and manpower-consuming work, where the main task is to find the same person captured by different videos. This task is important for tracing suspects or related people, and the results sometimes provide important cues for solving a case. However, a large number of surveillance cameras operate in urban areas, and investigators must gather their videos and inspect them with many workers over a long time. If this task is automated, we can reduce both the number of workers and the risk of overlooking evidence. Since an automated system also works faster than human inspection, we can accelerate the investigation process. This paper proposes a system that finds image pairs capturing the same person based on local features. This implies our method does not require human detection or tracking. This property is suitable for practical situations, where people are occluded by obstacles and most human detection systems fail. Our method detects local features in an input image and suppresses those detected in the background region. The resulting local features are assigned to positive images, and by computing a commonality measure derived from Diverse Density, we can find image pairs having high commonality. We applied our method to multiple video sequences and confirmed that the proposed method is promising when the local features are roughly preserved.

  • A Study on Aspect Graph Construction Based on Local Features and 3D Object Recognition

    Ken'ichi Nakata, Toshikazu Wada

    IPSJ SIG Technical Report   Vol.2014-CVIM-193 ( No.11 )   2014.09

  • A Method For Finding The Image Pair Capturing The Same Object Based On Local Image Features

    Atsushi Yasuda, Toshikazu Wada

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  2014 ( 19 ) 1 - 6   2014.08

     View Summary

    Crime investigation based on surveillance videos is time- and manpower-consuming work, where the main task is to find the same person captured by different videos. This task is important for tracing suspects or related people, and the results sometimes provide important cues for solving a case. However, a large number of surveillance cameras operate in urban areas, and investigators must gather their videos and inspect them with many workers over a long time. If this task is automated, we can reduce both the number of workers and the risk of overlooking evidence. Since an automated system also works faster than human inspection, we can accelerate the investigation process. This paper proposes a system that finds image pairs capturing the same person based on local features. This implies our method does not require human detection or tracking. This property is suitable for practical situations, where people are occluded by obstacles and most human detection systems fail. Our method detects local features in an input image and suppresses those detected in the background region. The resulting local features are assigned to positive images, and by computing a commonality measure derived from Diverse Density, we can find image pairs having high commonality. We applied our method to multiple video sequences and confirmed that the proposed method is promising when the local features are roughly preserved.

  • Multi-Aspect Modeling of 3D Objects based on Local Features and Its Application to 3D Object Recognition

    Ken'ichi Nakata, Toshikazu Wada

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  2014 ( 18 ) 1 - 8   2014.08

     View Summary

    Appearance-based 3D object modeling is a handy approach to 3D object recognition. The most representative method in this approach is the parametric eigenspace method, which models the appearance variations of a 3D object by a manifold in eigenspace and recognizes the object name and the observation direction using the manifolds of multiple objects. This method, however, is not robust against occlusions and background variations. To solve this problem, this paper proposes a robust 3D object modeling and recognition method based on local features. The inputs of this method are images of objects representing their aspects. From these images, a large number of local features are generated. Some are unique to an object and some are not; some are created by specular highlights and shadows. By grouping the appearances of contiguous aspects, we can obtain a compact representation of an object, which indirectly excludes fragile and non-essential features. However, excessive grouping may cause a poor recognition rate. The distinctive features unique to an object can be found using the framework of Diverse Density (DD), which can also be used for controlling the grouping process. Our DD-guided local feature selection and grouping is applied to the COIL image dataset for an object recognition task. Through comparative experiments with the parametric eigenspace method on occluded and background-added input images, we confirmed that our method achieves a much higher recognition rate.

  • Multi-Aspect Modeling of 3D Objects based on Local Features and Its Application to 3D Object Recognition

    NAKATA KEN'ICHI, WADA TOSHIKAZU

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  114 ( 197 ) 107 - 114   2014.08

     View Summary

    Appearance-based 3D object modeling is a handy approach to 3D object recognition. The most representative method in this approach is the parametric eigenspace method, which models the appearance variations of a 3D object by a manifold in eigenspace and recognizes the object name and the observation direction using the manifolds of multiple objects. This method, however, is not robust against occlusions and background variations. To solve this problem, this paper proposes a robust 3D object modeling and recognition method based on local features. The inputs of this method are images of objects representing their aspects. From these images, a large number of local features are generated. Some are unique to an object and some are not; some are created by specular highlights and shadows. By grouping the appearances of contiguous aspects, we can obtain a compact representation of an object, which indirectly excludes fragile and non-essential features. However, excessive grouping may cause a poor recognition rate. The distinctive features unique to an object can be found using the framework of Diverse Density (DD), which can also be used for controlling the grouping process. Our DD-guided local feature selection and grouping is applied to the COIL image dataset for an object recognition task. Through comparative experiments with the parametric eigenspace method on occluded and background-added input images, we confirmed that our method achieves a much higher recognition rate.

  • Face Image Recall by Nonlinear Mapping and Its Application To Human Identification

    Shunta Furutani, Toshikazu Wada, Yuki Matsumura

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  2014 ( 11 ) 1 - 7   2014.08

     View Summary

    Face image recognition is widely used in many applications, from crime investigation to user identification on personal devices. Most face recognition systems, however, do not show the reason for the recognition, i.e., by which part of the face the image is recognized as the resulting person. This paper proposes a face image recognition method that provides the recognition result as well as the reason for it. The method is basically an image recall system using example-based nonlinear mapping. This nonlinear mapping refers to the image database and performs image-to-image mapping using the database so as to realize image recall, where the system generates an image similar to the input by a partially linear combination of images in the database. As a by-product, weighting coefficients for the database images are obtained at any location on the recalled image. By applying this method to face images, we can recognize the person ID by summing up the coefficients for each ID. From the coefficient distribution for the resulting person ID, we can understand which part provides the evidence for the recognition result. Through experiments conducted on the ORDB face database, we confirmed that our method has a high recognition rate.

  • Commonality Preserving Multiple Instance Clustering Based on Diverse Density.

    Takayuki Fukui, Toshikazu Wada

    Computer Vision - ACCV 2014 Workshops - Singapore ( Springer )    322 - 335   2014  [Refereed]

    DOI

  • Commonality Preserving Image-Set Clustering Based on Diverse Density.

    Takayuki Fukui, Toshikazu Wada

    10th International Symposium on Visual Computing(ISVC'14) ( Springer )    258 - 269   2014  [Refereed]

    DOI

  • Logistic Component Analysis for Fast Distance Metric Learning

    Kenji Watanabe, Toshikazu Wada

    2014 22ND INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR) ( IEEE COMPUTER SOC )    1278 - 1282   2014  [Refereed]

     View Summary

    Discriminating feature extraction is important for achieving a high recognition rate in classification problems. Fisher's linear discriminant analysis (LDA) is one of the well-known discriminating feature extraction methods and is closely related to Mahalanobis distance metric learning. Neighborhood component analysis (NCA) is a Mahalanobis distance metric learning method based on stochastic nearest neighbor assignment. The objective function of NCA can be expressed as a within-class coherency by a simple formula, and NCA extracts discriminating features by optimizing this objective function. Unfortunately, the computational cost of NCA increases significantly as the number of input data increases. To reduce the computational cost, we propose a fast distance metric learning method that takes into account the between-class distinguishability of nearest-mean classification. According to experimental results on standard repository datasets, the computational time of our method is 27 times shorter than that of NCA while keeping or improving the accuracy.
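
    For reference, NCA's stochastic nearest-neighbor objective, in its usual "expected leave-one-out accuracy" form to be maximized, can be sketched as below. The function names and toy data are illustrative assumptions; this is plain NCA, not the accelerated method the paper proposes:

```python
import math

def nca_objective(X, y, A):
    # Project each point with the linear map A (rows of A are output dims),
    # then sum, over all points, the probability of picking a same-class
    # neighbor under a softmax over negative squared distances.
    Z = [[sum(a * v for a, v in zip(row, x)) for row in A] for x in X]
    n = len(Z)
    total = 0.0
    for i in range(n):
        w = [0.0 if j == i else
             math.exp(-sum((p - q) ** 2 for p, q in zip(Z[i], Z[j])))
             for j in range(n)]
        s = sum(w)
        total += sum(w[j] for j in range(n) if y[j] == y[i] and j != i) / s
    return total

# Toy data: class membership is decided by the first coordinate.
X = [(0.0, 0.0), (0.0, 1.0), (5.0, 0.0), (5.0, 1.0)]
y = [0, 0, 1, 1]
A_keep = [[1.0, 0.0], [0.0, 1.0]]   # keeps the discriminative axis
A_drop = [[0.0, 0.0], [0.0, 1.0]]   # collapses it
print(nca_objective(X, y, A_keep) > nca_objective(X, y, A_drop))  # True
```

    The O(n²) pairwise softmax in the loop is exactly the cost that grows with dataset size and motivates the faster nearest-mean-based formulation.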

    DOI

  • Effects of Oil Massage of the Lower Limbs on Edema in Healthy Adults: A Study Based on a Pluralistic Approach

    Tsunehiko Wada, Toshikazu Miyamoto

    Journal of the Japan Society of Oriental Medicine-Based Physical Therapy   39(2)   47 - 52   2014  [Refereed]

  • Image Processing: Colorization of Near-Infrared Monochrome Face Images

    Atsushi Mori, Toshikazu Wada

    Image Laboratory ( Japan Industrial Publishing )  24 ( 11 ) 46 - 52   2013.11

  • Constrained Linear Regression with an L1 Regularization Term for Feature Correction

    Kenji Watanabe, Toshikazu Wada

    16th Meeting on Image Recognition and Understanding (MIRU2013)     2013.08  [Refereed]

  • Automatic Colorization of Near-Infrared Monochrome Face Image based on Position-Dependent Regression

    Atsushi Mori, Toshikazu Wada, Hiroshi Oike

    IPSJ SIG Technical Report (CVIM, May 2013)   Vol.2013-CVIM-187 ( No.34 ) 1 - 6   2013.05

     View Summary

    This report presents a method for estimating color face images from near-infrared monochrome face images. This estimation is done by the regression from a monochrome image to a color image. One difficult problem is that the regression depends on face organs. That is, the same intensity pixels in an infrared monochrome image do not correspond to the same color pixels. Therefore, entirely uniform regression cannot colorize the pixels correctly. This report presents a colorization method for monochrome face images by position-dependent regressions, where the regression coefficients are obtained in different image regions corresponding to facial organs. Also, we can extend the independent variables by adding texture information around the pixels so as to obtain accurate color images. However, unrestricted extension may cause multi-collinearity problem, which may produce inaccurate results. This report also proposes CCA based dimensionality reduction for avoiding this problem. Comparative experiments on the restoration accuracy demonstrate the superiority of our method.

  • A Keypoint Reduction Method for Image Retrieval Using Diverse Density

    Keita Yuasa, Toshikazu Wada, Kenji Watanabe

    IPSJ SIG Technical Report (CVIM, May 2013)   Vol.2013-CVIM-187 ( No.35 )   2013.05

  • Face Model from Local Features: Image Clustering and Common Local Feature Extraction based on Diverse Density

    Takayuki Fukui, Toshikazu Wada, Hiroshi Oike

    IPSJ SIG Technical Report (CVIM, May 2013)   Vol.2013-CVIM-187 ( No.35 ) 1 - 5   2013.05

     View Summary

    Face image retrieval based on local features has the advantages of a short elapsed time and robustness against occlusions. However, the keypoint detection that precedes feature description may fail due to illumination changes. To solve this problem, top-down model-based keypoint detection can be applied, but man-made face models do not fit this task. This report addresses the problem of bottom-up face model construction from examples, which can be formalized as common local feature extraction among face images. For this purpose, a measure called Diverse Density (DD) can be applied. DD at a point in a feature space represents how close the point is to other positive examples while keeping enough distance from negative examples. Because of this property, DD is defined as a product of metrics, which can easily be affected by exceptional data, i.e., if one negative datum leaps into the neighbourhood of a positive example, the DD around it becomes lower. Actually, face images have wide variations of face organ positions, beards, moustaches, glasses, and so on. Under these variations, the DD for a wide variety of face images will be low at any point in the feature space. To solve this problem, we propose a method performing hierarchical clustering and common local feature extraction simultaneously. In this method, we define a measure representing the affinity of two face image sets and cluster the face images by iteratively merging the cluster pair having the maximum score. Through experiments on 1021 CAS-PEAL face images, we confirmed that multiple face models are successfully constructed.

  • Removal of Show-through for One-side Scanned Document by Discrete Optimization

    MUKAI Yuichiro, WADA Toshikazu, OIKE Hiroshi

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  112 ( 495 ) 81 - 86   2013.03

     View Summary

    This paper presents a show-through removal method based on discrete optimization. Images of duplex-printed documents may be degraded by show-through, a common phenomenon in which text or figures on one side appear on the other side; this degradation may affect the character recognition rate for a document. There are two ways to remove show-through components from images: removal using both-side images, and removal from a one-side image. The method using both-side images requires 1) both side images and 2) precise alignment of the images, which is a difficult problem to solve automatically. The method using a one-side image can be designed as a discrete optimization problem estimating both-side intensities for each pixel. However, the solution requires a huge computational cost, because the number of labels is large (256×256). In this report, we formalize the task as a different discrete optimization problem that estimates each pixel's attribute (foreground, show-through, or background). This method uses only three labels and is thus computationally inexpensive. Through experiments using many show-through images, we confirmed that our method removes show-through components.

  • Show-through Removal for One-Side Scanned Document Images by Discrete Optimization

    Yuichiro Mukai, Toshikazu Wada, Hiroshi Oike

    IEICE Technical Report (PRMU)     2013.03

  • Memory Efficient K-means Clustering using HDD

    OIKE Hiroshi, KISHI Kazuyoshi, Wada Toshikazu

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  112 ( 441 ) 61 - 66   2013.02

     View Summary

    This report presents an "external" k-means clustering algorithm that works on an HDD. K-means clustering is widely used in many applications. For example, codebook creation for Bag of Visual Words requires k-means clustering on a huge number of local feature vectors to obtain the visual words (codebook entries). Standard "internal" k-means clustering loads the whole vector dataset into main memory and performs clustering there; this working memory can explode for a huge amount of data. As a solution to this problem, we propose an "external" clustering algorithm on HDD. This is a multi-pass algorithm that scans the whole dataset in each pass. In the first pass, the cluster centroids are updated gradually as the data are provided sequentially. Through this pass, the number and the sum of the data are recorded for each cluster, and the assigned cluster is recorded for each datum. In the following passes, each datum is provided again and the cluster centers are updated for those data that change their cluster assignment. By adjusting this update frequency, the number of distance computations can be reduced and the performance improved.
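
    The multi-pass scheme described above can be sketched roughly as follows. An in-memory list stands in for sequential HDD reads, and the names and details are illustrative assumptions, not the paper's implementation:

```python
def read_sequentially(data):
    # Stands in for streaming vectors from HDD, one record at a time.
    for x in data:
        yield x

def external_kmeans(data, centroids, max_passes=10):
    centroids = [list(c) for c in centroids]
    assign = [-1] * len(data)   # per-record label (kept on disk in the real system)
    for _ in range(max_passes):
        dim = len(centroids[0])
        sums = [[0.0] * dim for _ in centroids]
        counts = [0] * len(centroids)
        changed = False
        for i, x in enumerate(read_sequentially(data)):
            # nearest centroid by squared Euclidean distance
            k = min(range(len(centroids)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))
            if k != assign[i]:
                assign[i] = k
                changed = True
            counts[k] += 1
            for d, v in enumerate(x):
                sums[k][d] += v
        # recompute centroids from the per-cluster count and sum
        centroids = [[s / c for s in sm] if c else centroids[k]
                     for k, (sm, c) in enumerate(zip(sums, counts))]
        if not changed:
            break
    return centroids, assign

# Toy run: two obvious clusters.
cents, labels = external_kmeans([(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)],
                                [(0.0, 0.0), (10.0, 10.0)])
print(labels)  # [0, 0, 1, 1]
```

    Only the centroids, per-cluster sums/counts, and the assignment table must fit in memory; the feature vectors themselves are scanned from disk in each pass.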

  • Grafting Trees: A Nearest Neighbor Search Algorithm without Distance Computation

    Ohtani Youhei, Wada Toshikazu, Oike Hiroshi

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  112 ( 441 ) 137 - 142   2013.02

     View Summary

    This report presents a nearest neighbor search algorithm without distance computation. K-D tree based nearest neighbor search consists of two processes: 1) determining a tentative nearest neighbor by tree search, and 2) A*-based "priority search" to find the true nearest neighbor of the query. The latter process consumes most of the computation time, especially for high-dimensional data distributions. We focus on the fact that the nearest neighbor is essentially determined by the query position in the feature space, so the priority search can be replaced by classifiers. In practice, we employ decision trees as the classifiers and attach them to the K-D tree leaf nodes. This data structure is named Grafting trees. On this tree, we can find the nearest neighbor without distance computation. In this data structure, however, some decision tree leaves can become very deep. To solve this problem, we modified the data structure to allow distance computation at such leaf nodes. The modified tree is named Hybrid Grafting trees. In experiments, we compared the ANN library with our methods and confirmed that the search time is shorter than ANN's.

  • Face model creation based on simultaneous execution of hierarchical training-set clustering and common local feature extraction

    FUKUI Takayuki, WADA Toshikazu, OIKE Hiroshi, SAKATA Jun

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  112 ( 385 ) 23 - 28   2013.01

     View Summary

    Face image retrieval based on local features has the advantages of a short elapsed time and robustness against occlusions. However, the keypoint detection that precedes feature description may fail due to illumination changes. To solve this problem, top-down model-based keypoint detection should be effective, but man-made face models do not fit this task. This report addresses the problem of bottom-up face model creation from examples, which can be formalized as common local feature extraction among examples. For this purpose, a measure called Diverse Density (DD), established in the field of Multiple Instance Learning (MIL), can be applied. DD at a point in a feature space represents how close the point is to other positive examples while keeping enough distance from negative examples. Because of this property, DD is defined as a product of metrics, which can easily be affected by exceptional data, i.e., if one negative datum leaps into the neighborhood of a positive example, the DD around it becomes lower. Actually, face images have wide variations of face organ positions, beards, mustaches, glasses, and so on. Under these variations, the DD for a wide variety of face images will be low at any point in the feature space. To solve this problem, we propose a method performing hierarchical clustering and common feature extraction simultaneously. In this method, the DD score is employed as a measure representing the integrity of a face image set, and hierarchical clustering is performed by merging the cluster pair having the maximum DD score. Through experiments on 1021 CAS-PEAL face images, we confirmed that multiple face models are successfully constructed.

  • Co-segmentation based on Multiple-Instance Learning

    SAKATA Jun, WADA Toshikazu

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  112 ( 385 ) 81 - 86   2013.01

     View Summary

    Appearance learning of an object specified by text can be realized by utilizing image retrieval systems on the Internet. This enables a real-world object search system that searches for a real object given a text describing it. Images collected by an image retrieval system, however, cannot be used directly for appearance learning, because the object locations and sizes are not uniform among the collected images. That is, object localization in each image is required. This problem can be viewed as a co-segmentation problem extracting the common object among images, and we propose a method based on the Multiple-Instance Learning (MIL) framework. Our method consists of two parts: foreground-background modeling, and discrete optimization to obtain the object regions. The foreground-background modeling computes Diverse Density (DD) for excessively partitioned image regions, where the original DD represents similarity among positive features and dissimilarity from negative features. In our method, we slightly modified the DD definition to fit the co-segmentation problem. Experimental results on the iCoseg database demonstrate the higher accuracy of our method compared with other methods.

  • Depth-based active object tracking

    SHIMADA Yoshiaki, WADA Toshikazu

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  112 ( 385 ) 299 - 304   2013.01

     View Summary

    This paper presents an active tracking system that uses only the depth image taken by a Kinect. From the Kinect's viewpoint, flexible software libraries, including human detection and human-pose estimation, cannot be applied when target people go outside the viewing area, and an active tracking system solves or relaxes this problem. From the active-tracking-system viewpoint, unlike other cameras, the Kinect can take depth images of indoor scenes independent of illumination conditions. This implies that the tracking system can keep tracking even in complete darkness. For a single-camera tracking system, the Fixed Viewpoint Camera (FVC) setting, which does not change the 3D viewpoint under rotation, is useful for active tracking, because the rotation angle needed to keep the target inside the image frame can be directly computed from the 2D position on the image plane. In the case of the Kinect, however, we do not have to use the FVC setting if we know the relative viewpoint position from the rotation center, because 3D positional information is directly obtained from the sensor. First, we present a calibration method to obtain this relative position. Next, we present a 3D blob-tracking method based on mean shift. Since this method uses a 3D window function, both the separability of the object from its background and the stability are good when the target shape is roughly convex. As a tracking method using a more precise 3D model, this paper also presents a spin-image based tracking algorithm. Through experiments, we discuss which method should be used for specific tasks.

  • Vector Output Estimation by Gaussian Process Regression using Dynamic Active Set

    MATSUMURA Yuki, WADA Toshikazu, MAEDA Shunji, SHIBUYA Hisae

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  112 ( 385 ) 317 - 322   2013.01

     View Summary

    This report presents a method to estimate vector outputs in the framework of Gaussian Process Regression (GPR). GPR is a non-linear regression method based on input-output examples. Since basic GPR estimates an output consisting of a scalar mean and variance, multiple executions of GPR cannot estimate the covariance between vector components. Our method estimates both vector mean values and covariance matrices based on our previous method, Dynamic Active Set (DAS). An active set is a set of examples consisting of input and output pairs. DAS automatically constructs an active set suitable for estimating the output for a given input. Our method estimates the covariance matrix from the output components of the active set obtained by DAS. Experimental results on artificial and real datasets demonstrate the soundness of our method.
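
    As background, the standard scalar-output GPR prediction (posterior mean and variance with an RBF kernel) that this work extends can be sketched as follows. This is a pure-Python, textbook sketch; the paper's vector-output and Dynamic Active Set extensions are not shown, and all names are illustrative:

```python
import math

def rbf(a, b, ell=1.0):
    # Squared-exponential (RBF) kernel between two input vectors.
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / (2 * ell ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting (A is small here).
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gpr_predict(X, y, x_star, noise=1e-6):
    # Posterior mean and variance of a zero-mean GP at the test input x_star.
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(X)]
         for i, a in enumerate(X)]
    k_star = [rbf(a, x_star) for a in X]
    alpha = solve(K, y)                          # alpha = K^{-1} y
    mean = sum(ks * al for ks, al in zip(k_star, alpha))
    v = solve(K, k_star)                         # v = K^{-1} k_star
    var = rbf(x_star, x_star) - sum(ks * vi for ks, vi in zip(k_star, v))
    return mean, var

X = [(0.0,), (1.0,), (2.0,)]
y = [0.0, 1.0, 2.0]
m, v = gpr_predict(X, y, (1.0,))   # at a training input: mean ~ 1.0, variance ~ 0
```

    Running several such scalar regressions side by side gives per-component means and variances but no cross-component covariance, which is exactly the gap the report's active-set-based covariance estimation addresses.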

  • Bilateral Filtering based depth image correction consistent with color image captured by Kinect

    TAKENAKA Haruka, WADA Toshikazu, OIKE Hiroshi, INOUE Manabu, SHIMADA Yoshiaki

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  112 ( 385 ) 311 - 316   2013.01

     View Summary

    There exists a calibration method that modifies the depth and color images captured by a Kinect so as to minimize their 2D geometric inconsistencies. However, the resulting image pairs still have small 2D alignment errors. Furthermore, the distance measurement may fail due to weak reflections in dark areas, specular reflections, infrared light occlusion observed near occluding contours, and so on. Under this geometric inconsistency and missing depth information, re-projecting the color-mapped depth image onto a different screen may produce a heavily distorted image. This report presents a depth image correction method using the local relationship between depth and color information, i.e., spatially near and similarly colored pixels should have similar depths. Bilateral filtering best fits this computation, where the product of color similarity and spatial proximity gives the weight of the depth information. Through experiments, we confirmed that our method produces accurately aligned depth images free of missing values.
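
    The weighting scheme described above (depth weight = spatial proximity × color similarity) can be sketched as a joint bilateral fill. The grayscale guidance image, the parameter values, and the use of None for missing depth samples are illustrative assumptions, not the paper's exact setup:

```python
import math

def correct_depth(depth, color, sigma_s=2.0, sigma_c=20.0, radius=2):
    # Joint bilateral filling: each pixel's depth becomes the weighted average
    # of nearby valid depths; weight = spatial Gaussian * color-similarity Gaussian.
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and depth[ny][nx] is not None:
                        ws = math.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        dc = color[y][x] - color[ny][nx]
                        wc = math.exp(-(dc * dc) / (2 * sigma_c ** 2))
                        num += ws * wc * depth[ny][nx]
                        den += ws * wc
            if den > 0:
                out[y][x] = num / den
    return out

# Toy example: a 3x3 depth map with a missing (None) center pixel,
# uniform guidance color, so the hole is filled from its neighbors.
depth = [[1.0, 1.0, 1.0], [1.0, None, 1.0], [1.0, 1.0, 1.0]]
color = [[100] * 3 for _ in range(3)]
print(correct_depth(depth, color)[1][1])  # 1.0
```

    The color term keeps depth from bleeding across object boundaries: a neighbor with a very different guidance color contributes almost nothing to the average.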

  • Phase-shift 3-D measurement with belief propagation phase unwrapping

    INOUE Manabu, WADA Toshikazu

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  112 ( 385 ) 305 - 310   2013.01

     View Summary

    Phase-shift 3-D measurement is a very accurate depth measurement method that can measure depth at sub-micrometer order by analyzing at least 3 images. The resolution is very high; however, the depth is initially estimated as a phase value within a certain period, which corresponds to a depth interval. To estimate the absolute depth from the phase information, we have to determine the order of the depth period. This is called the "phase unwrapping" problem. Popular solutions to phase unwrapping require additional light projections and observations, which reduce the temporal resolution of the depth measurement. This report formalizes phase unwrapping as a discrete optimization problem, which is solved by belief propagation. Our previous method used only a smoothness term in the formalization; this report presents a method that also uses a data term at some pixels by modifying the projected light pattern. Through experiments, we confirmed that depth reconstruction succeeds even for a "steep" object whose depth changes sharply within a small positional change.
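
    As background, the wrapped phase that the unwrapping step starts from can be computed with the standard three-step phase-shift formula (pattern shifts of ±120°). This is textbook material, not the paper's belief-propagation unwrapping, and the symbols are illustrative:

```python
import math

def wrapped_phase(i1, i2, i3):
    # Three observations of a sinusoidal pattern shifted by -120/0/+120 degrees:
    #   I_k = A + B * cos(phi + shift_k)
    # Then I1 - I3 = sqrt(3)*B*sin(phi) and 2*I2 - I1 - I3 = 3*B*cos(phi),
    # so the wrapped phase in (-pi, pi] is:
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthesize the three observations for a known phase and recover it.
A, B, phi = 10.0, 4.0, 0.7
i1 = A + B * math.cos(phi - 2 * math.pi / 3)
i2 = A + B * math.cos(phi)
i3 = A + B * math.cos(phi + 2 * math.pi / 3)
print(abs(wrapped_phase(i1, i2, i3) - phi) < 1e-9)  # True
```

    The recovered value is only the phase modulo one period; choosing the integer period order per pixel (adding the right multiple of 2π) is the discrete optimization problem the report solves with belief propagation.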

  • Automatic colorization of near-infrared monochrome face image based on pixel-wise classification and regression

    MORI Atsushi, WADA Toshikazu, OIKE Hiroshi

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  112 ( 385 ) 353 - 358   2013.01

     View Summary

    This report presents a method that estimates a color face image from a monochrome face image taken under near-infrared illumination. The estimation is done by regression between parameters obtained by CCA, where the regression coefficients are learned from color and infrared image pairs a priori. The most difficult problem is that the regressions depend on face organs. That is, the same intensity pixels in an infrared monochrome image do not correspond to the same color pixels in a color image, so a globally uniform regression cannot colorize the pixels correctly. In particular, the iris region in an infrared image is brighter than in the corresponding monochrome image taken under visible-light illumination. In our method, we colorize a monochrome face image by regressions whose coefficients are obtained in different image regions corresponding to facial parts. These regions are obtained by two-stage segmentation, which uses a learned face template and adaptive segmentation. Comparative experiments between simple intensity-based segmentation and our two-stage segmentation demonstrate the restoration accuracy of our method.

  • Keypoint Selection based on Diverse Density for Image Retrieval

    YUASA Keita, WADA Toshikazu, OIKE Hiroshi, SAKATA Jun

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  112 ( 385 ) 87 - 92   2013.01

     View Summary

    We are planning to construct an image retrieval system using FPGA. For designing this system, we developed a prototype system, which retrieves the image having the maximum number of local image features matched with a query image. The reason why we don't use Bag of Features (BoF) is that codebook referencing may consume considerable time and computational resources on FPGA. For this purpose, the local features describing a stored image should satisfy the following conditions: 1) they should have strong discrimination power from other images, and 2) they should be robust against observation distortions including rotation, scaling, and so on. In order to maximize the number of stored images, the number of local features describing a stored image should be minimized. For selecting such "good local features" from all local features, we propose a method based on Diverse Density. In the experiment, we confine the number of local features describing a single image to 10, and our method outperforms other local feature selection methods.
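
Diverse Density, which the abstract uses to select discriminative yet stable features, scores a point in feature space highly when it is close to many positive examples and far from all negative ones. A minimal sketch follows; the Gaussian closeness function and its scale are illustrative choices, not necessarily the report's exact kernel:

```python
import numpy as np

def diverse_density(x, positives, negatives, scale=1.0):
    """Diverse Density at point x: the product of closeness to each positive
    example and of (1 - closeness) to each negative example."""
    def closeness(a, b):
        return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / scale ** 2)
    dd = 1.0
    for p in positives:
        dd *= closeness(x, p)            # reward proximity to positives
    for n in negatives:
        dd *= 1.0 - closeness(x, n)      # penalize proximity to negatives
    return dd
```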

  • Co-segmentation based on Multiple-Instance Learning

    坂田 惇, 和田 俊和

    IPSJ SIG Technical Report, Computer Vision and Image Media (CVIM)   2013 ( 13 ) 1 - 6   2013.01

     View Summary

    Appearance learning of an object represented by a text can be realized by utilizing image retrieval systems on the Internet. This enables a real-world object search system that finds a real object given only a text describing it. Images collected by an image retrieval system, however, cannot be used directly for appearance learning, because the object locations and sizes are not uniform among the collected images. That is, object localization in each image is required. This problem can be taken as a co-segmentation problem extracting a common object among images, and we propose a method based on the Multiple-Instance Learning (MIL) framework. Our method consists of two parts: foreground-background modeling and discrete optimization to obtain object regions. Foreground-background modeling computes Diverse Density (DD) for excessively partitioned image regions, where the original DD represents similarity to positive features and dissimilarity from negative features. In our method, we slightly modify the DD definition to fit the co-segmentation problem, because the original DD becomes low in the foreground. Experimental results on the iCoseg database demonstrate higher segmentation accuracy of our method than previous methods.

  • Phase-shift 3-D measurement with belief propagation phase unwrapping

    井上 学, 和田 俊和

    IPSJ SIG Technical Report, Computer Vision and Image Media (CVIM)   2013 ( 48 ) 1 - 6   2013.01

     View Summary

    Phase-shift 3-D measurement is a very accurate depth measurement method that can measure depth at sub-micrometer resolution by analyzing at least three images. The resolution is very high; however, the depth is initially estimated as a phase value within a certain depth period, which corresponds to a depth interval. To estimate the absolute depth from the phase information, we have to determine the order of the depth period. This problem is called the "phase unwrapping" problem. Popular solutions of phase unwrapping require additional light projections and observations, which reduce the temporal resolution of the depth measurement. This report formalizes phase unwrapping as a discrete optimization problem, which is solved by belief propagation. Our previous method used only a smoothness term in the formalization; this report presents a method that also uses a data term at some pixels by modifying the projected light pattern. Through experiments, we confirmed that the depth reconstruction succeeds even for a "steep" object whose depth changes greatly within a small positional change.

  • Keypoint Selection based on Diverse Density for Image Retrieval

    湯浅 圭太, 和田 俊和, 大池 洋史, 坂田 淳

    IPSJ SIG Technical Report, Computer Vision and Image Media (CVIM)   2013 ( 14 ) 1 - 6   2013.01

     View Summary

    We are planning to construct an image retrieval system using FPGA. For designing this system, we developed a prototype system, which retrieves the image having the maximum number of local image features matched with a query image. The reason why we don't use Bag of Features (BoF) is that codebook referencing may consume considerable time and computational resources on FPGA. For this purpose, the local features describing a stored image should satisfy the following conditions: 1) they should have strong discrimination power from other images, and 2) they should be robust against observation distortions including rotation, scaling, and so on. In order to maximize the number of stored images, the number of local features describing a stored image should be minimized. For selecting such "good local features" from all local features, we propose a method based on Diverse Density. In the experiment, we confine the number of local features describing a single image to 10, and our method outperforms other local feature selection methods.

  • Vector Output Estimation by Gaussian Process Regression using Dynamic Active Set

    松村 祐貴, 和田 俊和, 前田 俊二, 渋谷 久恵

    IPSJ SIG Technical Report, Computer Vision and Image Media (CVIM)   2013 ( 50 ) 1 - 6   2013.01

     View Summary

    This report presents a method to estimate vector outputs in the framework of Gaussian Process Regression (GPR). GPR is a non-linear regression method based on input-output examples. Since basic GPR estimates an output consisting of a scalar mean and variance, multiple executions of GPR cannot estimate the covariance between vector components. Our method estimates both vector mean values and covariance matrices based on our previously proposed Dynamic Active Set (DAS). An active set is a set of examples consisting of input-output pairs, and DAS automatically constructs an active set suitable for estimating the output for a given input. Our method estimates the covariance matrix from the output components of the active set obtained by DAS. Experimental results on artificial and real datasets demonstrate the soundness of our method.
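
Standard GPR, which the report extends to vector outputs, predicts a mean and variance at a query point from training examples via the kernel (Gram) matrix. The scalar-output sketch below uses an assumed RBF kernel and tiny noise term; the report's active-set construction and cross-covariance estimation are not shown:

```python
import numpy as np

def rbf(a, b):
    """Squared-exponential kernel on scalars (an illustrative choice)."""
    return np.exp(-0.5 * (a - b) ** 2)

def gp_predict(X, Y, x_star, kernel=rbf, noise=1e-6):
    """Gaussian Process regression: predictive mean and variance at x_star
    given training inputs X and scalar outputs Y."""
    K = np.array([[kernel(a, b) for b in X] for a in X]) + noise * np.eye(len(X))
    k_star = np.array([kernel(x_star, a) for a in X])
    K_inv = np.linalg.inv(K)
    mean = k_star @ K_inv @ np.array(Y)
    var = kernel(x_star, x_star) - k_star @ K_inv @ k_star
    return mean, var
```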

  • Face model creation based on simultaneous execution of hierarchical training-set clustering and common local feature extraction

    福井 崇之, 和田 俊和, 大池 洋史, 坂田 惇

    IPSJ SIG Technical Report, Computer Vision and Image Media (CVIM)   2013 ( 4 ) 1 - 6   2013.01

     View Summary

    Face image retrieval based on local features has the advantages of short elapsed time and robustness against occlusions. However, the keypoint detection performed before feature description may fail due to illumination changes. To solve this problem, top-down model-based keypoint detection should be effective, but man-made face models do not fit this task. This report addresses the problem of bottom-up face model creation from examples, which can be formalized as common local feature extraction among examples. For this purpose, a measure called Diverse Density (DD), established in the field of Multiple Instance Learning (MIL), can be applied. DD at a point in a feature space represents how close the point is to positive examples while keeping enough distance from negative examples. Because DD is defined as a product of metrics, it can easily be affected by exceptional data, i.e., if one negative example leaps into the neighborhood of a positive example, the DD around there becomes low. Actually, face images have wide variations in the positions of face organs, beards, mustaches, glasses, and so on. Under these variations, DD computed over a wide variety of face images will be low at any point in the feature space. To solve this problem, we propose a method performing hierarchical clustering and common feature extraction simultaneously. In this method, the DD score is employed as a measure of the integrity of a face image set, and hierarchical clustering is performed by merging the cluster pair having the maximum DD score. Through experiments on 1021 CAS-PEAL face images, we confirmed that multiple face models are successfully constructed.

  • Depth-based active object tracking

    島田 喜明, 和田 俊和

    IPSJ SIG Technical Report, Computer Vision and Image Media (CVIM)   2013 ( 47 ) 1 - 6   2013.01

     View Summary

    This paper presents an active tracking system that uses only the depth image taken by Kinect. From the viewpoint of Kinect, flexible software libraries including human detection and human-pose estimation cannot be applied when target people go outside the viewing area, and the active tracking system solves or relaxes this problem. From the active-tracking viewpoint, unlike other cameras, Kinect can capture depth images in indoor scenes independent of illumination conditions. This implies that the tracking system can keep tracking even in complete darkness. For a single-camera tracking system, a Fixed Viewpoint Camera (FVC) setting, which does not change the 3D viewpoint under rotation, is useful for active tracking, because the rotation angle that keeps the target inside the image frame can be computed directly from the 2D position on the image plane. In the case of Kinect, however, we do not have to use the FVC setting if we know the viewpoint position relative to the rotation center, because 3D positional information is obtained directly from the sensor. First, we present the calibration method for obtaining this relative position. Next, we present a 3D blob-tracking method based on mean shift. Since this method uses a 3D window function, both the separability of the object from its background and the stability are good when the target shape is roughly convex. As a tracking method using a more precise 3D model, this paper also presents a spin-image based tracking algorithm. Through experiments, we discuss which method should be used for specific tasks.
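
The 3D blob tracking described above can be sketched as mean shift over a point cloud: repeatedly move the window center to the centroid of points inside it. A hard spherical window is a simplifying assumption here; the report speaks only of a 3D window function:

```python
import numpy as np

def mean_shift_3d(points, start, radius=1.0, iters=20):
    """Mean shift over a 3-D point cloud with a spherical window: move the
    window center to the centroid of in-window points until convergence."""
    pts = np.asarray(points, dtype=float)
    c = np.asarray(start, dtype=float)
    for _ in range(iters):
        inside = pts[np.linalg.norm(pts - c, axis=1) < radius]
        if len(inside) == 0:
            break                        # window fell off the cloud
        new_c = inside.mean(axis=0)
        if np.allclose(new_c, c):
            break                        # converged
        c = new_c
    return c
```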

  • Nighttime Pedestrian Pose Estimation Using Hierarchy NFTG

    Hiroki Maebuchi, Haiyuan Wu, Qian Chen, Toshikazu Wada

    The International Workshop on Advanced Image Technology (IWAIT2013)     2013.01  [Refereed]

  • Part Based Regression with Dimensionality Reduction for Colorizing Monochrome Face Images.

    Atsushi Mori, Toshikazu Wada

    2nd IAPR Asian Conference on Pattern Recognition(ACPR) ( IEEE )    506 - 510   2013  [Refereed]

    DOI

  • Keypoint Reduction for Smart Image Retrieval.

    Keita Yuasa, Toshikazu Wada

    2013 IEEE International Symposium on Multimedia(ISM) ( IEEE Computer Society )    351 - 358   2013  [Refereed]

    DOI

  • Hierarchical NFTG and Its Application to Nighttime Pedestrian Pose Estimation

    前渕 啓材, 呉 海元, 和田 俊和

    IPSJ Kansai Branch Annual Convention 2012     2012.09

  • An Approximate Nearest Neighbor Search Algorithm Combining Tree Search and Graph Search

    大谷洋平, 和田俊和, 大池洋史

    Proceedings of the 15th Meeting on Image Recognition and Understanding (MIRU2012)   2012   IS1-07   2012.08  [Refereed]

  • Joint Position Estimation from Video Using Optical Flow

    渡邊佳寛, 和田俊和

    Proceedings of the 15th Meeting on Image Recognition and Understanding (MIRU2012)   2012   IS2-30   2012.08  [Refereed]

  • 3-D Head Position Measurement and Motion Parallax Generation Using a Wearable Camera

    中崎裕介, 和田俊和

    Proceedings of the 15th Meeting on Image Recognition and Understanding (MIRU2012)   2012   IS3-19   2012.08  [Refereed]

  • Parallel Implementation of Image Keypoint Detection and Matching on an FPGA

    吉岡勇太, 和田俊和

    Proceedings of the 15th Meeting on Image Recognition and Understanding (MIRU2012)   2012   IS3-17   2012.08  [Refereed]

  • Faster and More Accurate Optical Flow Estimation Using Belief Propagation

    向井祐一郎, 和田俊和

    Proceedings of the 15th Meeting on Image Recognition and Understanding (MIRU2012)   2012   IS1-21   2012.08  [Refereed]

  • Speeding Up the SURF Algorithm by Integer Arithmetic

    吉岡勇太, 和田俊和

    The 18th Symposium on Sensing via Image Information (SSII2012)     DS1-04   2012.06  [Refereed]

  • Parallel Implementation of Image Keypoint Detection and Matching on an FPGA

    吉岡勇太, 和田俊和

    The 18th Symposium on Sensing via Image Information (SSII2012)     IS3-03   2012.06  [Refereed]

  • Active Object Tracking Using Depth Images

    島田喜明, 和田俊和

    The 18th Symposium on Sensing via Image Information (SSII2012)     DS2-07   2012.06  [Refereed]

  • Detecting Pedestrians in Night View using HOG and NFTG with Geometric Constraints

    MAEBUCHI Hiroki, WU Haiyuan, WADA Toshikazu

    Technical report of IEICE. Multimedia and virtual environment ( The Institute of Electronics, Information and Communication Engineers )  111 ( 380 ) 291 - 296   2012.01

     View Summary

    This paper proposes a method for detecting pedestrians on the road at night using HOG, NFTG, and the geometric constraint that pedestrians stand or walk on the road. The vanishing point of the road is estimated from the white lines on the road using a particle filter and the Hough transform. The vanishing point is used to predict the size of a pedestrian once its position is given, which reduces both the processing time and false detections. A new method called "Integral-Ni-HOG" (INi-HOG) for quickly computing HOG on near-infrared images is proposed. The Hausdorff distance between each pedestrian candidate region and the pedestrian model is used to verify each candidate region in the image. NFTG is used to describe the relations between pedestrian models in order to reduce the number of times the models are used for verifying each candidate region. Experiments using real images show that this method is 15 times faster and lowers the false detection rate by more than 50% compared with the conventional HOG-based method.

  • Detecting Pedestrians in Night View using HOG and NFTG with Geometric Constraints

    前渕 啓材, 呉 海元, 和田 俊和

    IPSJ SIG Technical Report, Computer Vision and Image Media (CVIM)   2012 ( 52 ) 1 - 6   2012.01

     View Summary

    This paper proposes a method for detecting pedestrians on the road at night using HOG, NFTG, and the geometric constraint that pedestrians stand or walk on the road. The vanishing point of the road is estimated from the white lines on the road using a particle filter and the Hough transform [15]. The vanishing point is used to predict the size of a pedestrian once its position is given, which reduces both the processing time and false detections. A new method called "Integral-Ni-HOG" (INi-HOG) for quickly computing HOG on near-infrared images is proposed. The Hausdorff distance between each pedestrian candidate region and the pedestrian model [14] is used to verify each candidate region in the image. NFTG [7] is used to describe the relations between pedestrian models in order to reduce the number of times the models are used for verifying each candidate region. Experiments using real images show that this method is 15 times faster and lowers the false detection rate by more than 50% compared with the conventional HOG-based method [14].

  • Gaussian Processes based Pre-fault detection with multi-resolutional temporal analysis

    尾崎 晋作, 渋谷 久恵, 前田 俊二, 和田 俊和

    IPSJ SIG Technical Report, Computer Vision and Image Media (CVIM)   2012 ( 22 ) 1 - 6   2012.01

     View Summary

    We have proposed a pre-fault detection system based on Gaussian Processes (GP). The advantage of GP-based pre-fault detection is that it estimates not only the expected output but also the standard deviation of the output. However, an anomaly can appear sometimes as a short-term phenomenon and sometimes as a long-term tendency. If we simply apply GP to high-dimensional vectors sampled within a wide temporal window, the method requires a huge number of training samples to keep its sensitivity and accuracy, because GP is an example-based non-linear regression method. To solve this problem, we introduce a multi-resolutional analysis along the temporal axis that produces multiple signals with different smoothing windows from a single signal. By applying our GP-based pre-fault detection method to these multi-scale signals, we can realize a versatile and reliable pre-fault detection system.
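
The multi-resolutional temporal analysis described above can be sketched as smoothing one sensor signal with several window widths, yielding one channel per scale. Moving averages are an assumption here; the report only specifies variable-width smoothing along the time axis:

```python
import numpy as np

def multi_resolution(signal, windows=(1, 4, 16)):
    """Smooth one signal with several moving-average window widths.
    Short windows preserve short-term spikes; long windows expose
    long-term drift."""
    s = np.asarray(signal, dtype=float)
    out = []
    for w in windows:
        kernel = np.ones(w) / w          # uniform moving-average kernel
        out.append(np.convolve(s, kernel, mode='same'))
    return out
```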

  • Gaussian Processes based Pre-fault detection with multi-resolutional temporal analysis

    OZAKI Shinsaku, SHIBUYA Hisae, MAEDA Shunji, WADA Toshikazu

    Technical report of IEICE. Multimedia and virtual environment ( The Institute of Electronics, Information and Communication Engineers )  111 ( 380 ) 109 - 114   2012.01

     View Summary

    We have proposed a pre-fault detection system based on Gaussian Processes (GP). The advantage of GP-based pre-fault detection is that it estimates not only the expected output but also the standard deviation of the output. However, an anomaly can appear sometimes as a short-term phenomenon and sometimes as a long-term tendency. If we simply apply GP to high-dimensional vectors sampled within a wide temporal window, the method requires a huge number of training samples to keep its sensitivity and accuracy, because GP is an example-based non-linear regression method. To solve this problem, we introduce a multi-resolutional analysis along the temporal axis that produces multiple signals with different smoothing windows from a single signal. By applying our GP-based pre-fault detection method to these multi-scale signals, we can realize a versatile and reliable pre-fault detection system.

  • Dense Inter-Image Correspondence Integrating Keypoint Attributes and Spatial Layout

    島田喜明, 和田俊和

    Proceedings of the Meeting on Image Recognition and Understanding (MIRU2011)   2011   1272 - 1278   2011.07

  • 3-D Shape Measurement by a Belief-Propagation Phase-Shift Method

    井上学, 和田俊和

    Proceedings of the Meeting on Image Recognition and Understanding (MIRU2011)   2011   1279 - 1285   2011.07

     View Summary

    The phase-shift method can measure 3-D shape with high accuracy by analyzing only a relatively small number of images. However, the 3-D shape of the whole object cannot be measured unless we determine which period each phase of the projected grating pattern belongs to. This paper formulates this problem as a discrete optimization problem and solves it with belief propagation to perform phase unwrapping. Rather than the absolute depth of the measured object, the method aims at measuring a smooth 3-D shape by determining the period at each point.

  • Consensus-based Object Localization: Common Object Detection from Multiple Images

    坂田惇, 和田俊和

    Proceedings of the Meeting on Image Recognition and Understanding (MIRU2011)   2011   874 - 879   2011.07

     View Summary

    This paper proposes a method for estimating the region of a target object in each of a set of images collected by Internet image search. The proposed method forms a consensus of features common among the images, localizes the region best matching that consensus in each image, and then re-forms the consensus from the features of the localized regions, iterating this procedure. Regions are represented as ellipses, and features are extracted by the Fourier transform of intensity values on concentric ellipses. The region parameters are searched by MCMC. Experiments show that the method can roughly estimate the target regions in relatively simple image sets, confirming its effectiveness.

  • A Study on Norm Selection for Vision Algorithms

    向井祐一郎, 大池洋史, 和田俊和

    IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU)     216 - 220   2011.06

  • A Study on Norm Selection for CV algorithms

    MUKAI Yuichiro, OIKE Hiroshi, WADA Toshikazu

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  111 ( 77 ) 73 - 78   2011.05

     View Summary

    This paper discusses the importance of norm selection for Computer Vision (CV) algorithms. The L_p norm is often used as a dissimilarity measure between two vectors. For example, the L_0 norm represents the number of non-zero elements of a vector (assuming 0^0 ≡ 0), the L_1 norm of a difference vector is the Manhattan distance, and the L_2 norm is the Euclidean distance. Because of the different properties of these norms, a CV algorithm can produce different results depending on the employed norm. We clarify that the following two problems are mainly caused by improper norm selection: "why does Bag of Features based similar image search often find the same image as similar to a wide variety of query images?" and "why are discontinuous pixel values estimated by discrete-optimization denoising algorithms employing a smoothing energy term consisting of absolute differences of neighboring pixel values?". This paper also shows that these two problems are solved or relaxed just by changing the norms.
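
The L_p family the abstract compares can be computed in a few lines; this sketch just illustrates the definitions (L_0 as a non-zero count with 0^0 taken as 0, L_1 as Manhattan length, L_2 as Euclidean length):

```python
import numpy as np

def lp_norm(v, p):
    """L_p 'norm' of a vector: non-zero count for p = 0, Manhattan length
    for p = 1, Euclidean length for p = 2, and so on."""
    v = np.asarray(v, dtype=float)
    if p == 0:
        return int(np.count_nonzero(v))  # convention: 0**0 == 0
    return float(np.sum(np.abs(v) ** p) ** (1.0 / p))
```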

  • Connection between Gaussian Processes and Similarity Based Modeling for Anomaly Detection

    OZAKI Shinsaku, WADA Toshikazu, MAEDA Shunji, SHIBUYA Hisae

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  111 ( 48 ) 133 - 138   2011.05

     View Summary

    Anomaly detection can be applied to health monitoring of industrial plants, human medical conditions, vehicle conditions, and so on. A well-known anomaly detection method, "Similarity Based Modeling" (SBM), proposed by Stephan W. Wegerich, is acknowledged as an effective and essential method in this field. This report first shows that SBM is naturally derived as a special case of "Gaussian Processes" (GP) by regarding the similarity function in SBM as the kernel function in GP, where GP was proposed before the SBM patent. This fact provides a new interpretation of GP as an example-based nonlinear regression method. Based on this interpretation, we next propose a method for relaxing the scalability problem of GP, that is, a method for reducing the size of the Gram matrix consisting of pairwise similarities of training samples. This method picks up, from the training dataset, the reference data for explaining the input data and dynamically constructs a "reduced" Gram matrix. Using this matrix, we can estimate the output without losing accuracy. The anomaly detection algorithm is applied to a practical health monitoring problem of an electric plant with human operation, and we confirmed that it successfully detects pre-fault events before the actual fault.

  • On the Relationship between Similarity Based Modeling and Gaussian Processes in Anomaly Detection

    尾崎晋作, 和田俊和, 前田俊二, 渋谷久恵

    IEICE Technical Report   Vol.111 ( No.48 ) 133 - 138   2011.05

  • Fault Detection and Prediction of Industrial Plants Based on Gaussian Processes

    尾崎 晋作, 和田 俊和, 前田 俊二

    IPSJ SIG Technical Report ( Information Processing Society of Japan )  2010 ( 5 ) 6p   2011.02

  • Fault Detection and Prediction of Industrial Plants Based on Gaussian Processes

    OZAKI Shinsaku, WADA Toshikazu, MAEDA Shunji, SHIBUYA Hisae

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  110 ( 381 ) 211 - 216   2011.01

     View Summary

    This report proposes a fault and pre-fault detection method for industrial plants based on Gaussian Processes. Industrial plants can be monitored via attached sensors that measure temperature, pressure, voltage, electric current, and so on. Based on these sensor outputs, health monitoring of the target plant can be designed. The difficulty of this design problem is that a system fault can appear as a statistical abnormality, observed as an irregular ensemble of the sensor outputs, and/or a temporal abnormality, observed as irregularity of the time sequences. Furthermore, those systems are operated by humans, and it is difficult to distinguish abnormalities caused by human operation from those caused by system faults. We previously proposed an abnormality detection system based on ICA and linear prediction, which avoids incorrect detection of abnormalities caused by human operations by recognizing and masking them. This method, however, cannot detect system faults during human operation. To solve this problem, this report proposes a unified method for statistical and temporal abnormality detection based on Gaussian Processes, which can detect system faults under normal human operation. We confirmed the effectiveness of our method through experiments on long-term sensory data sampled from a real industrial plant.

  • Object Tracking with an Extended K-means Tracker

    戚意強, 呉海元, 和田俊和, 陳謙

    IPSJ SIG Technical Report   Vol. 2011-CVIM-175 ( No.21 ) 1 - 4   2011.01

  • Advances and Prospect of Nearest Neighbor Search in High Dimensional Space (Special Issue: Analysis of Very Large Scale Image Collections)

    WADA Toshikazu

    Journal of Japanese Society for Artificial Intelligence ( The Japanese Society for Artificial Intelligence )  25 ( 6 ) 761 - 768   2010.11

  • A study on Degradation Tolerant Dissimilarity Measures

    岡 藍子, 和田 俊和

    IPSJ SIG Technical Report ( Information Processing Society of Japan )  2010 ( 3 ) 1 - 8   2010.10

  • Position and orientation estimation using head-mount camera

    中崎 裕介, 和田 俊和

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  110 ( 187 ) 29 - 35   2010.09

  • Position and orientation estimation using head-mount camera

    NAKAZAKI Yusuke, WADA Toshikazu

    IPSJ SIG Technical Report, CVIM [Computer Vision and Image Media] ( Information Processing Society of Japan )  173 ( No.5 ) B1 - B7   2010.09

     View Summary

    By using the 3D position of the eyes of a person watching a computer display, we can realize viewpoint-dependent image displaying, i.e., motion parallax generation, which enables pop-up displaying of 3D contents. We previously developed a 3D displaying system using an active stereo camera that measures the 3D eye position. However, that system requires expensive equipment such as a stereo camera, a pan-tilt unit, and rotary encoders. To make this 3D displaying method popular, we need a cheaper device that measures 3D eye positions. In this research, we use a head-mounted camera observing the LCD display frame; that is, we estimate the camera position and orientation in real time from the deformation of the display frame in the image. This method does not require a pan-tilt unit, because the computer operator watches the display from arbitrary viewpoints and the head-mounted camera captures the display frame; when the operator does not watch the display, the viewpoint position and orientation are not needed. In our implementation, we employed phase-based gradient estimation to make the particle filter both fast and robust.

  • A study on degradation tolerant dissimilarity measures

    岡 藍子, 和田 俊和

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  110 ( 187 ) 221 - 228   2010.09

  • A Study on Degradation Tolerant Dissimilarity Measures

    OKA AIKO, WADA TOSHIKAZU

    IPSJ SIG Technical Report, Computer Vision and Image Media (CVIM)   2010 ( 35 ) 1 - 8   2010.08

     View Summary

    This paper presents a set of dissimilarity measures suitable for image retrieval and matching tasks with degraded query images. For such applications, many works employ degradation-invariant features. However, no feature can be invariant to all possible image degradations. In this paper, we focus on image dissimilarity measures rather than image features. A dissimilarity measure can be defined by answering two questions: "where is it measured between two image vectors?" and "what metric is used?". Most image degradations can be modeled by partial masking of orthogonal image expansion coefficients. Under this model, the number of mismatching coefficient pairs can serve as a dissimilarity measure. We first propose a method for adjusting the coefficient vector magnitude so as to maximize the number of matching coefficient pairs. Next, by applying measures proportional to the number of mismatching coefficients to the magnitude-adjusted image pairs, we can define a degradation-tolerant dissimilarity. Through extensive experiments, we confirmed that the matching rates of our measures are much higher than those of normalized correlation for image retrieval.
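
The counting idea above, that a degradation masks some expansion coefficients, so dissimilarity is the number of coefficient pairs that fail to match after a magnitude adjustment, can be sketched as follows. The brute-force scale search and the matching tolerance are illustrative choices, not the paper's actual adjustment procedure:

```python
import numpy as np

def mismatch_dissimilarity(a, b, tol=1e-3, scales=np.linspace(0.5, 2.0, 151)):
    """Dissimilarity between two coefficient vectors: the minimum, over a
    brute-force magnitude adjustment of `a`, of the number of coefficient
    pairs that fail to match within `tol`.  A vector with a few masked
    (zeroed) coefficients thus stays close to its original."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    best = len(a)
    for s in scales:
        mismatches = int(np.sum(np.abs(s * a - b) > tol))
        best = min(best, mismatches)     # keep the best-aligned scale
    return best
```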

  • RCAを用いた局所特徴変換法と一般物体認識への応用

    西村朋己, 呉海元, 和田俊和

    画像の認識・理解シンポジウム(MIRU2010)     2010.07  [Refereed]

  • FPGAを用いたSURFの実時間計算法

    吉岡勇太, 和田俊和

    画像の認識・理解シンポジウム(MIRU2010)     1295 - 1302   2010.07  [Refereed]

  • 文書表裏面スキャン画像の輝度値分布変換による裏写り除去法

    ハリム サンディ, 和田俊和

    画像の認識・理解シンポジウム(MIRU2010)     2010.07  [Refereed]

  • ユーザの選好を反映した特徴変換法

    高宮隆弘, 和田俊和, 前田俊二, 渋谷久恵

    画像の認識・理解シンポジウム(MIRU2010)     2010.07  [Refereed]

  • 夜間における複数車両の検出と追跡

    塚本吉彦, 和田俊和

    画像の認識・理解シンポジウム(MIRU2010)     2010.07  [Refereed]

  • スポーツ分野における鍼治療のエビデンス (特集)

    宮本 俊和, 和田 恒彦

    臨床スポーツ医学 ( 文光堂 )  27 ( 6 ) 575 - 586   2010.06

    Article type: special issue feature

  • Image Segmentation using Hierarchical Belief Propagation

    SEKI MAKITO, WADA TOSHIKAZU

    情報処理学会研究報告. CVIM, [コンピュータビジョンとイメージメディア] ( 情報処理学会 )  171 ( 18 ) R1 - R6   2010.03

    Recently, a new image segmentation method has been proposed that iteratively performs clustering in the feature space and discrete optimization in the image space, assigning optimal cluster center values to pixels. This method suggests a novel image segmentation framework as well as unified processing in both the feature and image spaces. Unfortunately, it has problems with stability and processing speed. To solve these problems, we propose a method that performs feature space partitioning and discrete optimization in the image space, where both the optimization by Belief Propagation and the feature space partitioning are carried out hierarchically. Through experiments on various images, we confirmed that our method outperforms the existing method in both accuracy and speed.

  • A Study on Degradation Tolerant Dissimilarity Measures

    OKA Aiko, WADA Toshikazu

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  109 ( 470 ) 431 - 436   2010.03

    This report presents a set of dissimilarity measures suitable for image retrieval and matching tasks with degraded query images. Since local image features such as SIFT can easily be affected by global image degradations like motion blurring, we have to use a pixel-by-pixel image dissimilarity measure. In this report, we confine the degradations to the lack of terms in an arbitrary orthogonal expansion. In this case, if the remaining expansion coefficients match between two images, they should be the right pair. We first propose a method for computing sparse difference vectors consisting of coefficient differences for the right pairs. Next, we define a set of metrics on vectors that take smaller values for sparser vectors with the same L2 norm. Our dissimilarity measures combine the difference vector computation with these sparsity-sensitive metrics. Through the experiments, we confirmed that the one-by-one SQI matching rate of our measure is more than twice that of normalized correlation on the Extended Yale B face database (subset 4).

  • Removal of Show-through in Duplex Scanned Image by Transformation of Pixel Intensity Distribution

    HALIM Sandy, WADA Toshikazu

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  109 ( 470 ) 495 - 500   2010.03

    In scanning duplex-printed documents, show-through may degrade the image quality and character recognition rate. Show-through is a common phenomenon where the text or images printed on one side contaminate and appear on the other side. Since the two images printed on the two sides are essentially independent, this is a Blind Source Separation problem, and one may think that Independent Component Analysis (ICA) can be applied to separate the two images. However, ICA cannot produce show-through-corrected images because of the frequent co-occurrence of intensity pairs in the 2D space spanned by the front-side and back-side intensities. In this report, we propose a show-through removal method for duplex scanned images that performs clustering and direct transformation of the 2D intensity distribution in order to make the front-side and back-side distributions independent. Through separation experiments, we confirmed that our method performs well for removing show-through.
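The additive-mixing intuition behind show-through can be sketched under a deliberately simplified linear model (the paper's clustering-based distribution transformation is not reproduced here); `remove_show_through` and its blank-region heuristic are hypothetical.

```python
import numpy as np

def remove_show_through(front, back, alpha=None):
    """Remove additive show-through from a duplex scan.  `front` and `back`
    are grayscale arrays in [0, 1], with the back page already mirrored so
    that pixels correspond.  Show-through is modeled as back-side ink
    darkening the front by a factor `alpha`; if `alpha` is None, it is
    estimated by least squares over pixels where the front is nearly blank
    (an assumed heuristic)."""
    front = np.asarray(front, float)
    back = np.asarray(back, float)
    back_ink = 1.0 - back          # ink density on the back side
    if alpha is None:
        blank = front > 0.6        # assumed front-blank region
        a = back_ink[blank]
        b = 1.0 - front[blank]     # darkening observed on the front
        alpha = float(a @ b) / float(a @ a) if a.size and (a @ a) > 0 else 0.0
    # Add back the estimated show-through darkening, then clip to [0, 1].
    return np.clip(front + alpha * back_ink, 0.0, 1.0), alpha
```

On a synthetic pair where the front was darkened by 0.3 times the back-side ink, the estimator recovers the mixing factor and restores the front page.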

  • ユーザの選好を反映した特徴変換

    高宮隆弘, 和田俊和, 前田俊二, 渋谷久恵

    信学技報   Vol. 109, PRMU2009-305 ( No. 470 ) 425 - 430   2010.03

  • 離散最適化によるDenoisingのFPGA上での実時間実装法

    東谷匡記, 和田俊和

    信学技報   Vol. 109, PRMU2009-275 ( No. 470 ) 247 - 252   2010.03

  • Whiteningと線形予測を用いた人為的操作を伴うプラントの異常検出

    尾崎晋作, 和田俊和, 前田俊二, 渋谷久恵

    電子情報通信学会技術研究報告   Vol.109 ( No.470 ) 275 - 279   2010.03

  • FPGAを用いたSURFの実時間計算法

    吉岡勇太, 和田俊和

    信学技報   Vol. 109, PRMU2009-274 ( No. 470 ) 241 - 246   2010.03

  • 最近傍探索の理論とアルゴリズム

    和田俊和

    コンピュータビジョン最先端ガイド ( アドコム メディア )  3   119 - 136   2010

  • 回帰木を用いた識別器のモールディングによる文字認識の高速化

    太田貴大, 和田俊和

    信学技報   Vol. 109, PRMU2009-134 ( No. 344 ) 1 - 6   2009.12

  • Theory and algorithms for nearest neighbor search

    和田 俊和

    IEICE technical report ( 電子情報通信学会 )  109 ( 306 ) 67 - 78   2009.11

  • Theory and Algorithms for Nearest Neighbor Search

    WADA Toshikazu

    研究報告コンピュータビジョンとイメージメディア(CVIM) ( 情報処理学会 )  2009 ( 13 ) 1 - 12   2009.11

    The phenomenon called the "curse of dimensionality" makes the Nearest Neighbor (NN) search problem difficult and attractive. Without this phenomenon, NN search would be a boring problem. The 1-NN search problem is to find the closest pattern to a given query, and k-NN search is to find the k closest patterns. For solving these problems, many accelerated algorithms have been proposed. However, the curse tells us that every accelerated NN search algorithm becomes linear (exhaustive) search when the stored data form a high-dimensional distribution. Researchers who tackled this problem were sometimes regarded as daydreamers, because they fought an unbeatable ghost. Recently, researchers have stopped this straightforward fight and instead focus on approximate NN search, which does not face the phenomenon. In this tutorial, some typical exact and approximate NN search algorithms are introduced, and we revisit the curse of dimensionality to reconsider whether it can be solved.
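The exhaustive search that, under the curse of dimensionality, every accelerated algorithm degenerates to is itself tiny; a minimal baseline sketch (the function name `knn_linear` is illustrative):

```python
import numpy as np

def knn_linear(data, query, k=1):
    """Exhaustive (linear-scan) k-NN: compute all distances to the query
    and return the indices of the k closest stored vectors."""
    d = np.linalg.norm(np.asarray(data, float) - np.asarray(query, float), axis=1)
    return np.argsort(d)[:k]
```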

  • MCMC と分離度フィルタを用いた領域分割

    松岡修史, 呉海元, 和田俊和

    画像の認識・理解シンポジウム(MIRU2009) インタラクティブセッション     2009.07  [Refereed]

  • 適応型固有輪郭モデルを用いた人物の輪郭検出と状態推定

    白石明, 呉海元, 和田俊和

    画像の認識・理解シンポジウム(MIRU2009) インタラクティブセッション     2009.07  [Refereed]

  • 自己参照に基づくパターン欠陥検査法

    淺海徹哉, 和田俊和, 酒井薫, 前田俊二

    画像の認識・理解シンポジウム(MIRU2009) インタラクティブセッション     2009.07  [Refereed]

  • ラベル数の削減によるBelief Propagation の高速化に関する研究

    浦田賢人, Halim Sandy, 和田俊和

    画像の認識・理解シンポジウム(MIRU2009) インタラクティブセッション     2009.07  [Refereed]

  • 飽和画像からの色復元 -1,2 色飽和の場合-

    玉置貴規, 和田俊和, 鈴木一正

    画像の認識・理解シンポジウム(MIRU2009) インタラクティブセッション     2009.07  [Refereed]

  • カーネル弁別特徴変換に関する研究

    高宮隆弘, 小倉将義, 和田俊和, 前田俊二, 酒井薫

    画像の認識・理解シンポジウム(MIRU2009) インタラクティブセッション     2009.07  [Refereed]

  • 一般物体認識に有効なvisual words の作成法

    西村朋己, 呉海元, 和田俊和

    画像の認識・理解シンポジウム(MIRU2009) インタラクティブセッション     2009.07  [Refereed]

  • 視覚監視における対象表現法

    和田俊和

    SSII09 第15回画像センシングシンポジウム     2009.06

  • Two-Dimensional Mahalanobis Distance Minimization Mapping : 2D-M3

    OKA Aiko, WADA Toshikazu

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  Vol.108 ( No.484 ) 183 - 190   2009.03  [Refereed]

    This paper presents a regression method, two-dimensional Mahalanobis distance minimization mapping (2D-M3), which is an extension of Mahalanobis distance Minimization Mapping (M3). M3 is a regression method between very high-dimensional input and output spaces based on a Mahalanobis distance minimization criterion. Unlike the original M3, 2D-M3 directly extracts features from the image matrix rather than applying a matrix-to-vector transformation. Because of this, 2D-M3 is much faster than the original M3 without consuming much memory. We demonstrate the effectiveness of 2D-M3 through extensive experiments on a face image inpainting task.

  • Approximate nearest neighbor search algorithm on HDD based on B+ tree

    HIMEI Noritaka, WADA Toshikazu

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  Vol.108 ( No.484 ) 223 - 228   2009.03  [Refereed]

    Nearest Neighbor (NN) search plays an important role in example-based computer vision algorithms. Accelerating NN search in very high-dimensional spaces, however, has a limitation because of the curse of dimensionality. To avoid this problem, approximate NN search algorithms have been proposed. The most popular one is ANN, which is basically a kd-tree based search algorithm with a feasible error. Recently, Locality Sensitive Hashing (LSH) has been highlighted because of its theoretical basis providing a clear relationship between accuracy and computational complexity. An improved hash-based search algorithm, Principal Component Hashing (PCH), has been proposed, which is faster than ANN and LSH at the same accuracy without producing any search failures. However, most NN search algorithms share the limitations that 1) start-up time is mainly consumed by loading the data into memory, and 2) NN search over huge data cannot be executed because the main memory size is limited. To solve this problem, we propose an external NN search algorithm that directly finds the approximate NN data on an HDD based on the PCH algorithm. The basic idea is simple: by replacing the hash bins on a projection axis with a file having a B+ tree structure, we can realize a memory-efficient PCH that works on an HDD. In the experiments, we confirmed that the algorithm can perform NN search on a huge database that cannot be loaded into main memory. We also noticed that the search algorithm gains unexpected acceleration from the caching mechanism of the Linux operating system, whereby frequently accessed data on the HDD are kept in memory.
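A toy sketch of bucketing on a principal-component axis with roughly equal-population bins, in the spirit of PCH. This in-memory simplification omits the B+-tree/HDD part entirely; the class `PCHIndex` and its one-axis, adjacent-bucket search policy are assumptions for illustration, not the published algorithm.

```python
import numpy as np

class PCHIndex:
    """Approximate NN search by projecting onto the first principal
    component and bucketing into equal-population bins; a query scans only
    its own bucket and the two adjacent ones."""
    def __init__(self, data, n_buckets=16):
        self.data = np.asarray(data, float)
        self.mean = self.data.mean(axis=0)
        # First principal component via SVD of the centered data.
        _, _, vt = np.linalg.svd(self.data - self.mean, full_matrices=False)
        self.axis = vt[0]
        proj = (self.data - self.mean) @ self.axis
        # Equal-population bucket edges from projection quantiles.
        self.edges = np.quantile(proj, np.linspace(0, 1, n_buckets + 1))[1:-1]
        self.bucket = np.searchsorted(self.edges, proj)

    def query(self, q):
        q = np.asarray(q, float)
        b = int(np.searchsorted(self.edges, (q - self.mean) @ self.axis))
        cand = np.flatnonzero(np.abs(self.bucket - b) <= 1)  # own + adjacent
        d = np.linalg.norm(self.data[cand] - q, axis=1)
        return int(cand[np.argmin(d)])
```

Scanning adjacent buckets hedges against queries that project near a bucket boundary, at the cost of examining roughly 3/n_buckets of the data.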

  • 自己参照に基づくパターン欠陥検査法

    淺海徹哉, 和田俊和, 酒井 薫, 前田俊二

    電子情報通信学会技術研究報告   Vol.108 ( No.484 )   2009.03  [Refereed]

  • 誤差による変動幅を考慮した決定木に関する研究

    中田裕介, 和田俊和

    電子情報通信学会技術研究報告   Vol.108 ( No.484 )   2009.03  [Refereed]

  • 線形弁別特徴変換に関する研究

    小倉将義, 高宮隆弘, 和田俊和, 前田俊二, 酒井 薫

    電子情報通信学会技術研究報告   Vol.108 ( No.484 )   2009.03  [Refereed]

  • 飽和画像からの色復元 〜1色,2色飽和の場合〜

    玉置貴規, 和田俊和, 鈴木一正

    電子情報通信学会技術研究報告   Vol.108 ( No.484 )   2009.03  [Refereed]

  • 一般物体認識に適したvisual wordの作成法

    西村朋己, 呉 海元, 和田俊和

    電子情報通信学会技術研究報告   Vol.108 ( No.327 )   2008.11  [Refereed]

  • 特集「画像の認識・理解」の発行に寄せて

    和田俊和

    情報処理学会論文誌コンピュータビジョンとイメージメディア(CVIM)   1 ( 2 ) i - i   2008.07

  • 特徴生成機能を有する識別器の学習法

    正田和之, 呉海元, 和田俊和

    画像の認識・理解シンポジウム(MIRU2008)     1142 - 1147   2008.07  [Refereed]

  • 顕著性に基づく外観検査のための異常検出アルゴリズム

    淺海徹哉, 加藤丈和, 和田俊和, 酒井薫, 前田俊二

    画像の認識・理解シンポジウム(MIRU2008)     1400 - 1407   2008.07  [Refereed]

  • 姿勢パラメータ埋め込みと最近傍探索によるスキップスキャン検出

    小倉将義, 加藤丈和, 和田俊和

    画像の認識・理解シンポジウム(MIRU2008)     1492 - 1499   2008.07  [Refereed]

  • 動的色境界の提案と道路標識追跡・認識への応用

    岡田大輝, 和田俊和

    画像の認識・理解シンポジウム(MIRU2008)   2007   254 - 259   2008.07  [Refereed]

  • 能動的ステレオカメラを用いた実空間立体描画

    陳謙, 東谷匡記, 和田俊和

    画像の認識・理解シンポジウム(MIRU2008)     1380 - 1385   2008.07  [Refereed]

  • ステレオカメラによるビデオレート顔検出

    鈴木一正, 大池洋史, 呉海元, 和田俊和

    画像の認識・理解シンポジウム(MIRU2008)     1462 - 1467   2008.07  [Refereed]

  • パターンの近接性と密度推定に基づく1クラス識別器

    佐野真通, 加藤丈和, 和田俊和, 酒井薫, 前田俊二

    画像の認識・理解シンポジウム(MIRU2008)     897 - 902   2008.07

  • 多眼カメラによる3次元追跡のための自己修復型較正法

    前田昌宏, 和田俊和

    画像の認識・理解シンポジウム(MIRU2008)     1346 - 1351   2008.07  [Refereed]

  • グラフカットを用いた画像の粒状ノイズ抑制

    Halim Sandy, 和田俊和

    画像の認識・理解シンポジウム(MIRU2008)     109 - 116   2008.07  [Refereed]

  • ハッシュを用いた最大類似度探索に関する研究

    中田裕介, 和田俊和

    画像の認識・理解シンポジウム(MIRU2008)     1436 - 1443   2008.07  [Refereed]

  • Mahalanobis汎距離最小化による高次元線形写像計算法:M3

    岡藍子, 和田俊和

    画像の認識・理解シンポジウム(MIRU2008)     123 - 130   2008.07  [Refereed]

  • 制約付きEMアルゴリズムによる対象個数推定

    瀬藤英隆, 加藤丈和, 和田俊和

    画像の認識・理解シンポジウム(MIRU2008)     929 - 934   2008.07  [Refereed]

  • SIFT 特徴量の拡張と対称性平面物体検出への応用

    佐野友祐, 呉海元, 和田俊和, 陳謙

    画像の認識・理解シンポジウム(MIRU2008)     34 - 41   2008.07  [Refereed]

  • 状態推定を用いた点滅パターンの追跡

    塚本吉彦, 松元郁佑, 和田俊和

    画像の認識・理解シンポジウム(MIRU2008)     814 - 821   2008.07  [Refereed]

  • 非同期Binocularカメラによる3次元位置計測のための時刻ずれ推定法

    行旨克哉, 加藤丈和, 和田俊和

    画像の認識・理解シンポジウム(MIRU2008)     549 - 554   2008.07  [Refereed]

  • K-means Clustering Based Pixel-wise Object Tracking

    Chunsheng Hua, Haiyuan Wu, Qian Chen, Toshikazu Wada

    情報処理学会論文誌コンピュータビジョンとイメージメディア(CVIM) ( 情報処理学会 )  1 ( 1 ) 20 - 33   2008.06

    This paper presents a robust pixel-wise object tracking algorithm based on the K-means clustering algorithm. In order to achieve robust object tracking under complex conditions (such as wired objects and cluttered backgrounds), a new reliability-based K-means clustering algorithm is applied to remove noise background pixels (which are neither similar to the target nor to the background samples) from the target object. According to the triangular relationship among an unknown pixel and its two nearest cluster centers (target and background), a normal pixel (target or background) is assigned a high reliability value and correctly classified, while noise pixels are given a low reliability value and ignored. A radial sampling method is also introduced to improve both the processing speed and the robustness of the algorithm. Based on the proposed algorithm, we have set up a video-rate object tracking system. Through extensive experiments, the effectiveness and advantages of this reliability-based K-means tracking algorithm are confirmed.
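The reliability idea (ignore pixels roughly equidistant from the target and background cluster centers) can be sketched as follows, reduced to one center per class for brevity; `classify_pixels` and its threshold are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def classify_pixels(pixels, target_center, background_center, rel_thresh=0.2):
    """Classify feature vectors (e.g. color+position per pixel) against a
    target and a background cluster center, attaching a reliability score:
    pixels roughly equidistant from both centers get low reliability and
    are ignored as noise (label -1)."""
    pixels = np.asarray(pixels, float)
    dt = np.linalg.norm(pixels - target_center, axis=1)
    db = np.linalg.norm(pixels - background_center, axis=1)
    # Reliability: normalized margin between the two distances.
    reliability = np.abs(dt - db) / (dt + db + 1e-12)
    label = np.where(dt < db, 1, 0)                        # 1 = target, 0 = background
    label = np.where(reliability < rel_thresh, -1, label)  # -1 = noise
    return label, reliability
```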

  • 動的色境界の提案と道路標識追跡・認識への応用

    岡田大輝, 和田俊和

    情報処理学会研究報告   Vol.2008 ( No.27 ) 173 - 180   2008.03

  • グラフカットを用いた画像の粒状ノイズ抑制

    Halim Sandy, 和田俊和

    情報処理学会研究報告 ( 電子情報通信学会 )  Vol.2008 ( No.27 ) 289 - 296   2008.03

  • 一般分布に対するPrincipal Component Hashing

    松下裕輔, 和田俊和

    情報処理学会研究報告   Vol.2008 ( No.27 ) 283 - 288   2008.03

  • Plane-Symmetric Object Detection using Symmetrical SIFT Features

    SANO Yuusuke, WU Haiyuan, WADA Toshikazu, CHEN Qian

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  107 ( 427 ) 203 - 208   2008.01

    In this paper, a mirrored SIFT feature is proposed for detecting plane-symmetric objects. This feature can be used to evaluate the symmetric-pair likeness of feature points of a pseudo-affine-transformed plane-symmetric object. Plane-symmetric object detection is carried out using the proposed mirrored SIFT feature together with the scale feature. Its effectiveness has been confirmed through several experiments on real images.

  • Maximum Similarity Search based on Hashing

    NAKATA Yusuke, WADA Toshikazu

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  107 ( 427 ) 287 - 292   2008.01

    Nearest Neighbor (NN) search can be applied to fixed-dimensional vector data with a metric satisfying the axioms of distance. On the other hand, maximum-similarity search can be applied to a wide variety of data and metrics, including fingerprint matching, where the data cannot be represented as fixed-dimensional vectors and the similarity measure cannot be converted into a distance measure. Many methods have been proposed for solving this problem. Among them, Maeda's method is well known as one of the fastest search algorithms. The method, however, uses a matching score matrix representing the similarity measures of all possible combinations of the n stored data, which consumes O(n^2) memory. Hence, it is almost impossible to perform maximum similarity search on millions of data items. In this report, we propose a method for extracting independent data among the stored data. The stored data and the query can then be mapped to a fixed-dimensional space by using their matching scores with the extracted data, and we apply our NN search algorithm, Principal Component Hashing (PCH), to perform approximate maximum similarity search. Through experiments, we show that our algorithm outperforms existing methods in accuracy and speed.

  • Object Counting based on Constrained EM-Algorithm

    SETO Hidetaka, KATO Takekazu, WADA Toshikazu

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  107 ( 427 ) 243 - 248   2008.01

    This paper presents a method for detecting and tracking an unknown number of targets in image sequences based on the EM algorithm. The EM algorithm can fit a mixture of distributions to observed data. Applying this algorithm to the foreground pixels obtained by background subtraction, we can estimate the object positions and shape parameters for an assumed number of objects. We can also estimate the number of objects based on the MDL criterion, where the valid number of objects provides the minimum description length. A simple combination of the EM algorithm and the MDL criterion, however, will produce incorrect object counts. This is because the constraint on the shape and size of each object is too loose, and the EM algorithm may produce inconsistent fittings. We propose a robust method for counting the number of objects by introducing a constraint between the object position and the shape parameters, i.e., the covariance matrix. This constraint can be represented by a hyperplane in the 5-D space spanned by the positions (x, y) and covariance parameters (σ^2_x, σ_xy, σ^2_y). By confining the distribution parameters to this plane in the EM algorithm, we can robustly estimate the number of objects.

  • Adaptive alignment of non-target cluster centers for K-means tracker

    OIKE HIROSHI, WU HAIYUAN, WADA TOSHIKAZU

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  107 ( 427 ) 237 - 242   2008.01

    We present a method for the adequate alignment of non-target cluster centers in the K-means tracker algorithm. In this method, the number and alignment of non-target cluster centers are determined based on the distances from a pixel on the ellipse surrounding the target to both the target and non-target cluster centers. This distance is calculated in the 5-dimensional feature space used in the K-means tracker. The method avoids selecting, as a non-target cluster center, a pixel that does not contribute to the clustering used for tracking the target, or one whose features are similar to those of already selected non-target cluster centers. Through a comparative object tracking experiment, we confirmed that this algorithm contributes to the stabilization of object tracking and improves processing efficiency.

  • A Classifier Learning Algorithm with Feature Generating Function

    MASADA Kazuyuki, WU Haiyuan, WADA Toshikazu

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  107 ( 427 ) 231 - 236   2008.01

    AdaBoost has recently become a standard algorithm for face detection. Since AdaBoost cannot generate new features, it is necessary to prepare a large number of features beforehand, and the detection performance depends on the prepared features. In this paper, new features are generated by a GA, and efficient features are selected by evaluating both the independence between features and the confidence calculated with Real AdaBoost. Real AdaBoost then constructs the classifier using those features.

  • 対称SIFT特徴量を用いた対称性平面物体検出

    佐野友祐, 呉海元, 和田俊和, 陳謙

    情報処理学会研究報告   Vol.2008 ( No.3 ) 171 - 176   2008.01

  • 特徴生成機能を有する識別器の学習法

    正田和之, 呉海元, 和田俊和

    情報処理学会研究報告   Vol.2008 ( No.3 ) 199 - 204   2008.01

  • ステレオカメラを用いた顔検出の高速化

    鈴木一正, 呉海元, 和田俊和

    情報処理学会研究報告   Vol.2008 ( No.3 ) 107 - 112   2008.01

  • 対象検出問題における注意の伝播に関する研究

    坂平星弘, 和田俊和

    情報処理学会研究報告   Vol.2008 ( No.3 ) 101 - 106   2008.01

  • 制約付きEMアルゴリズムによる対象個数推定

    瀬藤英隆, 加藤丈和, 和田俊和

    情報処理学会研究報告   Vol.2008 ( No.3 ) 211 - 216   2008.01

  • K-means trackerにおける適応的な非ターゲットクラスタ中心の配置法

    大池洋史, 呉海元, 和田俊和

    情報処理学会研究報告   Vol.2008 ( No.3 ) 205 - 210   2008.01

  • ハッシュを用いた最大類似度探索法に関する研究

    中田裕介, 和田俊和

    情報処理学会研究報告   Vol.2008 ( No.3 ) 9月11日 - 260   2008.01

  • Chamfer Matchingを利用した有向NFTGとその応用

    岡田大輝, 和田俊和, 坂垣内洵也

    画像の認識・理解シンポジウム(MIRU2007)     381 - 388   2007.07  [Refereed]

  • 顔検出のための画像データ依存型特徴抽出法

    林拓, 坂井義幸, 和田俊和, 呉海元

    画像の認識・理解シンポジウム(MIRU2007)     768 - 773   2007.07  [Refereed]

  • GAによる特徴生成機能を有するカスケード型識別器の学習法

    山田剛士, 呉海元, 和田俊和

    画像の認識・理解シンポジウム(MIRU2007)     756 - 761   2007.07  [Refereed]

  • パターンの近接性に基づく1クラス識別器

    加藤丈和, 野口真身, 和田俊和, 酒井薫, 前田俊二

    画像の認識・理解シンポジウム(MIRU2007)     762 - 767   2007.07  [Refereed]

  • Principal Component Hashing: 等確率バケット分割による近似最近傍探索法

    松下裕輔, 和田俊和

    画像の認識・理解シンポジウム(MIRU2007)     127 - 134   2007.07  [Refereed]

  • 測域センサデータの直線追跡に基づくスキャンマッチング法

    武野哲也, 中村恭之, 和田俊和

    画像の認識・理解シンポジウム(MIRU2007)     1564 - 1569   2007.07  [Refereed]

  • Genetic Algorithmを用いた対象検出法

    坂平星弘, 和田俊和

    画像の認識・理解シンポジウム(MIRU2007)     821 - 826   2007.07  [Refereed]

  • インテグラルイメージを用いた主成分木による画像の最近傍探索の高速化

    加藤丈和, 藤原純也, 荒井英剛, 和田俊和

    画像の認識・理解シンポジウム(MIRU2007)     103 - 110   2007.07  [Refereed]

  • Hallucination の一般化に関する検討

    橋本貴志, 和田俊和

    画像の認識・理解シンポジウム(MIRU2007)     603 - 608   2007.07  [Refereed]

  • MCMCとChamfer Matchingを用いた対称性平面図形の検出

    佐野友祐, 呉海元, 和田俊和, 陳謙

    画像の認識・理解シンポジウム(MIRU2007)     798 - 803   2007.07  [Refereed]

  • 視点追跡による運動視差が再現できる立体映像提示法

    陳謙, 三輪創平, 和田俊和

    第13回画像センシングシンポジウム予稿集     2007.06

  • 複数の弁別度テーブルを用いた実時間能動ステレオ3 次元位置計測システム

    大池洋史, 和田俊和, 呉海元

    第13回画像センシングシンポジウム予稿集     2007.06

  • K-means Tracker の高速化

    華春生, 呉海元, 陳謙, 和田俊和

    第13回画像センシングシンポジウム予稿集     2007.06

  • Reliability-based K-means Clustering と追跡への応用

    華春生, 陳謙, 呉海元, 和田俊和

    第13回画像センシングシンポジウム予稿集     2007.06

  • K-means Clustering Based Pixel-wise Object Tracking

    HUA CHUNSHENG, WU HAIYUAN, CHEN QIAN, WADA TOSHIKAZU

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  159 ( 42 ) 17 - 32   2007.05

    This paper presents a robust pixel-wise object tracking algorithm based on K-means clustering. The target object is assumed to be non-rigid and may contain apertures. In order to achieve robust tracking of such objects, several ideas are applied in this work: 1) a pixel-wise clustering algorithm is applied for tracking the non-rigid object and removing the mixed background pixels from the search area; 2) negative samples are embedded into the K-means clustering so as to achieve adaptive pixel classification without a fixed threshold; 3) the image feature is represented by a color-position feature vector so that the algorithm can follow changes of target color and position simultaneously; 4) a variable ellipse model is used to restrict the search area and represent the surrounding background samples; 5) tracking failure detection and recovery processes are derived from both the target and background samples; 6) a radial sampling method is introduced not only to speed up the clustering process but also to improve the robustness of the algorithm. We have set up a video-rate object tracking system with the proposed algorithm. Through extensive experiments, the effectiveness and advantages of this K-means clustering based tracking algorithm are confirmed.

  • 2A2-F07 Self Localization Method for Vision-based Mobile Robot Using Multiple Parallel Line Patterns

    Kawano Hiroaki, Nakamura Takayuki, Yokomori Ryosuke, Chen Qian, Wada Toshikazu

    ロボティクス・メカトロニクス講演会講演概要集 ( 一般社団法人日本機械学会 )  2007   "2A2 - F07(1)"-"2A2-F07(3)"   2007.05

    We propose a new self-localization method for a mobile robot equipped with a fixed-viewpoint active camera by observing multiple parallel lines patterns with the camera. The method utilizes the technique for estimating location and posture of the camera from one parallel lines pattern which was previously developed by our research group. The method switches from a certain parallel lines pattern used for self-localization to another parallel lines pattern while the robot is moving in the environment. After the new parallel lines pattern is selected, the method keeps estimating location and p...

  • 測域センサデータの直線追跡に基づく局所環境地図生成法

    武野哲也, 中村恭之, 和田俊和

    ロボティクス・メカトロニクス講演会'07     2007.05

  • Editor's Message to Special Session on CV for Relief and Safety(<Special Session>CV for Relief and Safety)

    Watanabe M., Sumi K., Wada T.

    情報処理学会論文誌コンピュータビジョンとイメージメディア(CVIM) ( Information Processing Society of Japan (IPSJ) )  48 ( 1 ) i - ii   2007.02

  • Editor's Message to Special Session on CV for Relief and Safety

    WATANABE M., SUMI K., WADA T.

    情報処理学会論文誌. 数理モデル化と応用   48 ( 1 ) i - ii   2007.02

  • GAと Adaboost を用いた顔検出

    山田剛士, 呉海元, 和田俊和

    電子情報通信学会技術研究報告   Vol.106 ( No.469 ) 43 - 48   2007.01

  • 自動視線推定のためのアイモデルの個人適応法

    北川洋介, 呉海元, 和田俊和, 加藤丈和

    電子情報通信学会技術研究報告   Vol.106 ( No.469 ) 55 - 60   2007.01

  • インテグラルイメージを用いた主成分木による画像の最近傍探索の高速化

    藤原純也, 荒井英剛, 加藤丈和, 和田俊和

    電子情報通信学会技術研究報告   Vol.106 ( No.470 ) 55 - 60   2007.01

  • 顔検出のためのデータ依存型特徴抽出法

    林拓, 和田俊和, 呉海元

    電子情報通信学会技術研究報告   Vol.106 ( No.469 ) 37 - 42   2007.01

  • 画像を用いた対象検出・追跡(第6回・最終回)対象追跡(各論3)パラメータ空間内での追跡

    和田 俊和

    画像ラボ / 画像ラボ編集委員会 編 ( 日本工業出版 )  17 ( 11 ) 69 - 73   2006.11

  • Survey: Example based Pattern Recognition and Computer Vision

    WADA Toshikazu

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  2006 ( 93 ) 97 - 104   2006.09

    This paper reviews and surveys example-based techniques in the fields of Pattern Recognition and Computer Vision, from the viewpoints of anomaly detection, classification, non-linear mapping learning, regularization, and the basics: nearest neighbor search and non-parametric probability density estimation.

  • Survey : Example based Pattern Recognition and Computer Vision

    WADA Toshikazu

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  106 ( 229 ) 97 - 104   2006.09

    This paper reviews and surveys example-based techniques in the fields of Pattern Recognition and Computer Vision, from the viewpoints of anomaly detection, classification, non-linear mapping learning, regularization, and the basics: nearest neighbor search and non-parametric probability density estimation.

  • 画像を用いた対象検出・追跡(第5回)対象追跡(各論1)標準的手法

    和田 俊和

    画像ラボ ( 日本工業出版 )  17 ( 9 ) 60 - 63   2006.09

  • 分離度フィルタと PaLM-treeを用いた視線方向の推定

    松本拓也, 呉海元, 和田俊和, 中村恭之

    画像の認識・理解シンポジウム(MIRU2006)     1042 - 1047   2006.07  [Refereed]

  • 顔検出のための特徴生成と特徴選択

    山田剛士, 坂井義幸, 呉海元, 和田俊和

    画像の認識・理解シンポジウム(MIRU2006)     1048 - 1053   2006.07  [Refereed]

  • 色弁別度を用いた実時間ステレオ対象検出・追跡

    飯塚健男, 和田俊和

    画像の認識・理解シンポジウム(MIRU2006)     1072 - 1077   2006.07  [Refereed]

  • 画像を用いた対象検出・追跡(第4回)対象検出(各論2)背景との相違性に基づく検出

    和田 俊和

    画像ラボ ( 日本工業出版 )  17 ( 7 ) 60 - 63   2006.07

  • 基礎行列の不安定度に基づくカメラ間の最適組み合わせの推定

    前田昌宏, 加藤丈和, 和田俊和, Moldovan Daniel

    画像の認識・理解シンポジウム(MIRU2006)     952 - 957   2006.07  [Refereed]

  • 高速追従型2 眼能動カメラシステム

    大池洋史, 呉海元, 華春生, 和田俊和

    画像の認識・理解シンポジウム(MIRU2006)     200 - 207   2006.07  [Refereed]

  • オプティカルフローの白色化によるエゴモーション解析

    高田亮, 呉海元, 和田俊和

    画像の認識・理解シンポジウム(MIRU2006)     1036 - 1041   2006.07  [Refereed]

  • カメラ・床センサトラッキングの統合による人物検出・追跡

    江郷俊太, 加藤丈和, 和田俊和

    画像の認識・理解シンポジウム(MIRU2006)     794 - 799   2006.07  [Refereed]

  • 階層的固有空間による高次元最近傍探索の高速化

    荒井英剛, 武本浩二, 加藤丈和, 和田俊和

    画像の認識・理解シンポジウム(MIRU2006)     291 - 297   2006.07  [Refereed]

  • 弁別度に基づく実時間能動ステレオ3次元計測システム

    飯塚健男, 和田俊和

    画像の認識・理解シンポジウム(MIRU2006) デモセッション     1385 - 1386   2006.07  [Refereed]

  • 実時間対象追跡・認識を行なうための対話的システム

    坂平星弘, 坂垣内洵也, 和田俊和

    画像の認識・理解シンポジウム(MIRU2006) デモセッション     1377 - 1378   2006.07  [Refereed]

  • 複数カメラを用いたCondensationによるオクルージョンにロバストな人物追跡

    松元郁佑, 加藤丈和, 和田俊和

    画像の認識・理解シンポジウム(MIRU2006)     291 - 506   2006.07  [Refereed]

  • 効率的な距離計算戦略による高次元最近傍探索の高速化

    武本浩二, 加藤丈和, 和田俊和

    画像の認識・理解シンポジウム(MIRU2006)     1156 - 1161   2006.07  [Refereed]

  • Reliability K-means Clustering and Its Application for Object Tracking

    Chunsheng Hua, Haiyuan Wu, Qian Chen, Hiyoshi Oike, Toshikazu Wada

    画像の認識・理解シンポジウム(MIRU2006)     1090 - 1095   2006.07  [Refereed]

  • 実時間視点追従機能を有する立体映像提示法

    三輪創平, 陳謙, 和田俊和

    画像の認識・理解シンポジウム(MIRU2006)     740 - 745   2006.07  [Refereed]

  • 画像を用いた対象検出・追跡(第3回)対象検出 各論(1)マッチングによる検出

    和田 俊和

    画像ラボ ( 日本工業出版 )  17 ( 5 ) 70 - 73   2006.05

  • Report on the 10th IEEE International Conference on Computer Vision (ICCV2005)

    FURUKAWA Ryo, KATO Takekazu, KAWASAKI Hiroshi, MITA Takeshi, MIYAZAKI Daisuke, NAKAZAWA Atsushi, SATO Tomokazu, SUGAYA Yasuyuki, UTSUMI Akira, SUGIMOTO Akihiro, SATO Yoichi, WADA Toshikazu

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  105 ( 674 ) 249 - 258   2006.03

    This report gives an overview of the 10th IEEE international conference on computer vision (ICCV2005), which was held in Beijing, China, from October 17th to 20th, 2005.

  • Real-time tracking and detection of multiple people using multiple cameras based on CONDENSATION

    MATSUMOTO YUSUKE, KATO TAKEKAZU, WADA TOSHIKAZU, UEDA HIROTADA

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  105 ( 674 ) 121 - 128   2006.03

    This paper presents a novel method for human head tracking using multiple cameras. Most existing methods estimate the 3D target position from 2D tracking results at different viewpoints. This framework can easily be affected by inconsistent tracking results on the 2D images, which leads to 3D tracking failure. To solve this problem, we investigate a natural extension of CONDENSATION to multi-viewpoint images. Our method generates many hypotheses on a target (human head) in 3D space and estimates the likelihood of each hypothesis by integrating viewpoint-dependent likelihood values of the 2D hypotheses projected onto the image planes. In theory, viewpoint-dependent likelihood values should be integrated by multiplication; however, multiplication is easily affected by occlusions. We therefore investigate this problem, propose a novel integration method, and implement a prototype system consisting of six sets of PCs and cameras. We confirmed the robustness against occlusions and the efficiency of our method.
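The integration issue described above, where multiplication lets a single occluded view veto a hypothesis, can be shown with a tiny sketch; `integrate_likelihoods` is a hypothetical helper contrasting two fusion rules, not the paper's actual integration method.

```python
import numpy as np

def integrate_likelihoods(view_likelihoods, method="mean"):
    """Fuse per-view likelihoods of one 3-D hypothesis.  Multiplying is
    optimal when all views are valid, but one occluded view (likelihood
    near zero) vetoes the hypothesis; averaging degrades gracefully."""
    l = np.asarray(view_likelihoods, float)
    return float(l.prod()) if method == "product" else float(l.mean())
```

With views [0.9, 0.8, 0.0] (one camera occluded), the product collapses to zero while the mean still ranks the hypothesis as plausible.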

  • 画像を用いた対象検出・追跡(第2回)対象検出:総論

    和田 俊和

    画像ラボ ( 日本工業出版 )  17 ( 3 ) 70 - 74   2006.03

  • 効率的な距離計算戦略による高次元最近傍探索の高速化

    武本浩二, 加藤丈和, 和田俊和

    情報処理学会研究報告   Vol.2006 ( No.25 ) 49 - 56   2006.03

  • 第10回コンピュータビジョン国際会議ICCV2005報告

    古川亮, 川崎洋, 宮崎大輔, 佐藤智和, 内海章, 佐藤洋一, 加藤丈和, 三田雄志, 中澤篤志, 菅谷保之, 杉本晃宏, 和田俊和

    情報処理学会研究報告   Vol.2006 ( No.25 ) 421 - 430   2006.03

  • 空間分割と直交変換の統合による高次元最近傍探索の高速化

    荒井英剛, 加藤丈和, 和田俊和

    情報処理学会研究報告   Vol.2006 ( No.25 ) 41 - 48   2006.03

  • 事例ベース対象追跡・認識のための近さ優先探索グラフの対話的構築アルゴリズム

    坂平星弘, 和田俊和, 坂垣内洵也, 加藤丈和

    情報処理学会研究報告   Vol.2006 ( No.25 ) 85 - 92   2006.03

  • Chamfer Matchingを利用した有向NFTGとその応用

    岡田大輝, 和田俊和, 坂垣内洵也

    情報処理学会研究報告   Vol.2006 ( No.25 ) 93 - 100   2006.03

  • 連続特徴空間における決定木構築法と顔検出への応用

    林拓, 和田俊和, 加藤丈和

    情報処理学会研究報告   Vol.2006 ( No.25 ) 125 - 130   2006.03

  • 複数カメラを用いたCONDENSATIONによる複数人物頭部の実時間検出・追跡

    松元郁佑, 加藤丈和, 和田俊和, 上田博唯

    情報処理学会研究報告   Vol.2006 ( No.25 ) 293 - 300   2006.03

  • オプティカルフローの無相関化によるエゴモーション解析

    高田亮, 呉海元, 和田俊和

    情報処理学会研究報告   Vol.2006 ( No.25 ) 33 - 40   2006.03

  • 機械学習法のロボット知能化システムへの応用(2)

    中村 恭之, 和田 俊和

    機械の研究 ( 養賢堂 )  58 ( 2 ) 263 - 267   2006.02

  • 機械学習法のロボット知能化システムへの応用(1)

    中村 恭之, 和田 俊和

    機械の研究 ( 養賢堂 )  58 ( 1 ) 7 - 16   2006.01

  • 画像を用いた対象検出・追跡(第1回)対象検出:総論

    和田 俊和

    画像ラボ ( 日本工業出版 )  17 ( 1 ) 66 - 69   2006.01

  • Effect of shielding gases on Laser-Arc hybrid welding of galvanized steel sheets

    Kamei Toshikazu, Wada Katsunori

    Preprints of the National Meeting of JWS ( JAPAN WELDING SOCIETY )  2006f   149 - 149   2006

     View Summary

    In laser-arc hybrid welding of galvanized steel sheets, we examined the influence of the shielding gas on the generation of welding defects. Observation of the welding phenomena confirmed that defects occurred due to differences in molten pool behavior.

    DOI

  • 2A1-E08 Tightly coupled planning and model learning for real robot behavior learning

    Kawahara Terumi, Nakamura Takayuki, Wada Toshikazu

    ロボティクス・メカトロニクス講演会講演概要集 ( 一般社団法人日本機械学会 )  2006   "2A1 - E08(1)"-"2A1-E08(4)"   2006

     View Summary

    This paper presents a new framework for learning dynamic robot behavior such that a wheeled mobile robot moves to its destination under wheel slippage. First, our method learns the body dynamics from a large number of sensory-motor instances using the PaLM-tree algorithm, which approximates a nonlinear mapping with arbitrary precision. The mapping specifies a transition model from a state-action pair to the next state. An optimal action sequence generating the transition from the initial to the goal state is obtained by the branch-and-bound algorithm using the estimated transition model. Then, new sensory-moto...

  • K-means tracker : A multiple colors object tracking algorithm

    HUA CHUNSHENG, OIKE HIROSHI, WU HAIYUAN, WADA TOSHIKAZU, CHEN QIAN

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  151 ( 112 ) 1 - 8   2005.11

     View Summary

    This paper presents the K-means tracker, a novel visual tracking algorithm. The algorithm is robust against background interfusion because it discriminates "target" pixels from "background" pixels in the tracking region by applying K-means clustering to both positive and negative information about the target. To ensure the robustness of this algorithm, we apply the following ideas: 1) To represent the color similarity and spatial proximity of the target simultaneously, we use a 5D feature vector consisting of the position (x, y) and color (y, u, v) information for object tracking. The tracking is performed and updated not only in the image space but also in the color space at the same time; this adaptive nature makes our method robust against target color changes. 2) By using a variable ellipse model to restrict the target search area and represent the non-target pixels surrounding the target, the algorithm can flexibly cope with changes in the scale and shape of the target object. 3) Even if the tracking sometimes fails, the algorithm can automatically detect and recover from the failure based on the positive and negative information. To capture motion-blur-free images of a moving object at video rate, we control a set of active cameras mounted on pan-tilt units according to the results of the K-means tracker.
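
    The core clustering step can be sketched on 5D pixel vectors. This is a simplified sketch of the K-means tracker idea; the variable ellipse model and the paper's initialization scheme are omitted, and the center values are illustrative.

```python
import numpy as np

def kmeans_target_mask(pixels, target_center, background_centers, iters=10):
    """Assign each pixel (rows of the 5D vector (x, y, Y, U, V)) to the
    single "target" center or to one of several "background" centers,
    updating all centers by standard K-means iterations."""
    centers = np.vstack([target_center, background_centers]).astype(float)
    pixels = np.asarray(pixels, dtype=float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # distance of every pixel to every center in the joint (position, color) space
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return labels == 0  # True where the pixel is classified as target
```

    Because both position and color enter the distance, the target/background decision adapts jointly in image space and color space, which is the property the abstract emphasizes.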

  • 事例を用いた弁別性マップの構築とその応用 : 弁別性マップを用いたステレオトラッキング

    飯塚健男, 和田俊和, 華春生, 中村恭之

    情報処理学会研究報告   Vol.2005 ( No.112 ) 9 - 16   2005.11

  • Editor's Message to Special Section on Progress of Pattern Recognition and Machine Learning for Computer Vision

    Wada T., Sato Y., Sugimoto A.

    情報処理学会論文誌コンピュータビジョンとイメージメディア(CVIM) ( Information Processing Society of Japan (IPSJ) )  46 ( 15 ) i - ii   2005.10

  • Learning Nonlinear Mapping based on Recursive Partitioning of Data Space : Current and Old Trend on Regression Tree

    NAKAMURA Takayuki, WADA Toshikazu

    Journal of Information Processing Society of Japan ( 一般社団法人情報処理学会 )  46 ( 9 ) 1030 - 1038   2005.09

  • アイモデルを用いたConDensationによる視線推定

    北川洋介, 加藤丈和, 呉海元, 和田俊和

    情報処理学会研究報告   Vol.2005 ( No.88 ) 17 - 24   2005.09

  • Multi-Target Tracking of Human Position using Floor Pressure Sensors based on MCMC/EM Algorithm/MDL

    佐藤哲, 和田俊和, 加藤丈和

    情報処理学会研究報告   Vol.2005 ( No.88 ) 153 - 160   2005.09

  • Network Augmented Multisensor Association-CONDENSATION: CONDENSATIONの自然な拡張による3次元空間内での人物頭部の実時間追跡

    松元郁佑, 加藤丈和, 和田俊和

    情報処理学会研究報告   Vol.2005 ( No.88 ) 161 - 168   2005.09

  • LF-006 Estimating the Number of Tracking Targets for Human Location Tracking Using Floor Pressure Sensors

    Satoh Tetsu, Wada Toshikazu, Nakamura Takayuki

    情報科学技術レターズ ( Forum on Information Technology )  4 ( 4 ) 103 - 104   2005.08

  • Acceleration Method for Nearest Neighbor Classification based on Space Decomposition

    WADA Toshikazu

    IPSJ Magazine ( Information Processing Society of Japan (IPSJ) )  46 ( 8 ) 912 - 918   2005.08

  • アイモデルを用いた視線推定のための黒目追跡

    北川洋介, 加藤丈和, 呉海元, 和田俊和

    画像の認識・理解シンポジウム(MIRU2005)     1343 - 1350   2005.07  [Refereed]

  • NAMA-CON: ネットワーク結合された複数センサを用いたCONDENSATIONによる人物頭部追跡

    松元郁佑, 加藤丈和, 和田俊和

    画像の認識・理解シンポジウム(MIRU2005)     411 - 418   2005.07  [Refereed]

  • 近さ優先探索グラフを用いた実時間対象追跡・認識

    坂垣内洵也, 和田俊和, 加藤丈和

    画像の認識・理解シンポジウム(MIRU2005)     16 - 23   2005.07  [Refereed]

  • MDL基準に基づく区分的関数あてはめによる写像学習

    上江洲吉美, 中村恭之, 和田俊和

    画像の認識・理解シンポジウム(MIRU2005)     144 - 150   2005.07  [Refereed]

  • 一般化K-D Decision Tree - 近接性グラフによるEditingを利用した最近傍識別器の高速化 -

    柴田智行, 和田俊和, 加藤丈和

    画像の認識・理解シンポジウム(MIRU2005)     16 - 23   2005.07  [Refereed]

  • 高速追従型2眼アクティブカメラ

    大池洋史, 華春生, 呉海元, 和田俊和

    画像の認識・理解シンポジウム(MIRU2005) デモセッション     1608 - 1609   2005.07  [Refereed]

  • K-means トラッカー:失敗を自動的に検出・回復する対象追跡法

    華春生, 呉海元, 陳謙, 和田俊和

    画像の認識・理解シンポジウム(MIRU2005)     395 - 402   2005.07  [Refereed]

  • 視点固定型パン・チルトステレオカメラによるリアルタイム3次元位置計測システム

    飯塚健男, 和田俊和, 中村恭之, 加藤丈和, 吉岡悠一

    画像の認識・理解シンポジウム(MIRU2005) デモセッション     1620 - 1621   2005.07  [Refereed]

  • 識別器選択のための入力空間分割法に関する検討

    林拓, 中村恭之, 和田俊和

    画像の認識・理解シンポジウム(MIRU2005)     859 - 866   2005.07  [Refereed]

  • 平行線パターンを用いたカメラの自己位置・姿勢の推定

    横守良介, 和田俊和, 陳謙

    画像の認識・理解シンポジウム(MIRU2005)     610 - 617   2005.07  [Refereed]

  • フラクタル画像符号化を用いた均一なテクスチャ平面の傾斜角推定

    新宅景二, 和田俊和, 呉海元

    画像の認識・理解シンポジウム(MIRU2005)     56 - 63   2005.07  [Refereed]

  • PaLM-treeによる能動カメラ制御の高性能化

    中村恭之, 坂田好生, 呉海元, 和田俊和

    ロボティクスメカトロニクス講演会'05 講演論文集     2005.06

  • マルコフ連鎖モンテカルロ法とEMアルゴリズムを用いた床圧力センサ情報による人物位置追跡

    佐藤 哲, 和田 俊和

    第67回全国大会講演論文集   2005 ( 1 ) 101 - 102   2005.03

  • 計画と実行の反復による車輪型移動ロボットの滑り運動学習

    川原輝美, 中村恭之, 和田俊和

    第10回ロボティクスシンポジア予稿集     405 - 409   2005.03

  • Condensation-Based Iris Tracking with Eye-Model

    KITAGAWA Yosuke, KATO Takekazu, WU Haiyuan, WADA Toshikazu

    電子情報通信学会技術研究報告. PRMU, パターン認識・メディア理解   104 ( 670 ) 13 - 18   2005.02

  • Condensation-Based Iris Tracking with Eye-Model

    KITAGAWA Yosuke, KATO Takekazu, WU Haiyuan, WADA Toshikazu

    IEICE technical report. Natural language understanding and models of communication ( The Institute of Electronics, Information and Communication Engineers )  104 ( 668 ) 13 - 18   2005.02

     View Summary

    We have proposed an algorithm for estimating the visual direction using iris contours, which are detected from an input image and fitted with ellipses. However, it is difficult to detect the iris contours while excluding the boundaries between the eyelids and the iris. To solve this problem, we propose a simple eye-model that consists of the iris contours and the eyelids. This paper describes how to track the iris contours correctly and robustly by using this eye-model. We use the Condensation algorithm to track the iris contours stably against blinking, and propose a likelihood function based on the directions of the iris contours and the brightness of the iris region.
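
    The Condensation loop itself can be sketched generically. A 1D state and a Gaussian likelihood stand in for the paper's eye-model state and its contour-direction/brightness likelihood; all numeric values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, likelihood, noise=0.05):
    """One Condensation (particle filter) iteration: resample particles
    in proportion to their weights, diffuse them with process noise,
    then re-weight with the measurement likelihood."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx] + rng.normal(0.0, noise, size=len(particles))
    w = likelihood(particles)
    return particles, w / w.sum()

# Toy run: track a stationary 1D state located at 1.0.
particles = rng.uniform(0.0, 2.0, size=200)
weights = np.full(200, 1.0 / 200)
gauss = lambda p: np.exp(-(p - 1.0) ** 2 / 0.02)
for _ in range(10):
    particles, weights = condensation_step(particles, weights, gauss)
estimate = float(np.sum(particles * weights))
```

    After a few iterations the weighted particle mean concentrates near the true state, which is the behavior the tracker relies on across blinks.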

  • Estimating Texture Plane Orientation with Fractal Image Encoding

    SHINTAKU Keiji, WADA Toshikazu, WU Haiyuan

    IEICE technical report. Natural language understanding and models of communication ( The Institute of Electronics, Information and Communication Engineers )  104 ( 667 ) 85 - 90   2005.02

     View Summary

    Fractal coding is based on the assumption of self-affinity: in image coding, blocks of an image can be approximated by contracting larger blocks extracted from elsewhere in the image. For this reason, a plane with a homogeneous texture pattern can be coded more efficiently in a frontal view than in an inclined view. In this paper, we examine an index of coding efficiency for image compression with fractal image coding, and then propose an algorithm to estimate the plane orientation. Through experiments with both generated and real images, we confirmed that our method is effective.

  • 情報量基準に基づく区分的関数あてはめによる写像学習

    上江洲吉美, 中村恭之, 和田俊和

    電子情報通信学会技術研究報告PRMU   Vol.104 ( No.667 ) 13 - 18   2005.02

  • 識別器選択のための入力空間分割法に関する検討

    林柘, 中村恭之, 和田俊和

    電子情報通信学会技術研究報告PRMU   Vol.104 ( No.667 ) 7 - 12   2005.02

  • 最近傍探索・識別技術と画像理解

    和田俊和, 武本浩二

    電子情報通信学会技術研究報告PRMU   Vol.104 ( No.668 ) 73 - 78   2005.02

  • Research Activities in Vision and Robotics Laboratory

    WADA Toshikazu

    SYSTEMS, CONTROL AND INFORMATION ( THE INSTITUTE OF SYSTEMS, CONTROL AND INFORMATION ENGINEERS )  49 ( 2 ) 70 - 71   2005

    DOI

  • K-meansトラッカー : 追跡失敗の発見と修復ができる対象追跡法

    呉 海元, 陳 謙, 和田 俊和

    画像の認識・理解シンポジウム(MIRU2005) Vol.2005 No.7     359 - 402   2005

  • 事例ベースビジョン

    和田俊和

    第11回画像センシングシンポジウム, 2005     2005

  • 事例を用いた弁別性マップの構築とその応用

    和田 俊和, 中村 恭之

    情報処理学会研究報告 2005-CVIM・151     65 - 70   2005

  • 2P1-N-050 Development of Locomotion Control Method by Tracking with Floor Pressure Sensors : Approach by Human Tracking for Practicality and Familiarity(Home Robot and Mechatronics 2,Mega-Integration in Robotics and Mechatronics to Assist Our Daily Lives)

    YAMAMOTO Daisuke, SATO Tetsu, WADA Toshikazu, OGAWA Hideki, DOI Miwako, UEDA Hirotada, KIDODE Masatsugu

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) ( The Japan Society of Mechanical Engineers )  2005 ( 0 ) 188 - 188   2005

    DOI

  • Regression Trees and Its Application to Classification Problems

    Wada Toshikazu, Nakamura Takayuki

    Technical report of IEICE. PRMU ( 社団法人電子情報通信学会 )  104 ( 291 ) 33 - 40   2004.09

     View Summary

    This paper presents the evolution of regression trees, a novel regression tree called the PaLM-tree, and its application to Computer Vision and Pattern Recognition problems. A regression tree is a tree representation of a nonlinear function, mainly investigated in the fields of Machine Learning and Data Mining. The original regression tree is a simple tree representation holding function values at the leaves (terminal nodes). The linear regression tree extends these function values to linear regression coefficients. Since these regression trees are designed for rule extraction, most of them ...
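
    The linear-regression-tree idea can be sketched in one dimension: split the input space recursively until a linear fit at each leaf is accurate enough. The splitting rule and stopping threshold here are illustrative, not PaLM-tree's actual criteria.

```python
import numpy as np

def fit_piecewise_linear(x, y, max_err=0.05):
    """Recursively partition the input range; each leaf stores the
    coefficients (a, b) of a local linear fit y = a*x + b."""
    if len(x) < 2:
        return ("leaf", 0.0, float(y[0]))
    a, b = np.polyfit(x, y, 1)
    if len(x) < 4 or np.max(np.abs(a * x + b - y)) <= max_err:
        return ("leaf", float(a), float(b))
    mid = float(np.median(x))
    left = x <= mid
    if left.all():  # degenerate split: fall back to a single leaf
        return ("leaf", float(a), float(b))
    return ("split", mid,
            fit_piecewise_linear(x[left], y[left], max_err),
            fit_piecewise_linear(x[~left], y[~left], max_err))

def predict(tree, xq):
    while tree[0] == "split":
        tree = tree[2] if xq <= tree[1] else tree[3]
    return tree[1] * xq + tree[2]
```

    Fitting y = |x| yields one split at 0 with two exact linear leaves, illustrating how a tree of local linear models represents a nonlinear mapping.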

  • 修正相対近傍グラフを用いたターゲット追跡と認識

    坂垣内洵也, 加藤丈和, 和田俊和

    電子情報通信学会技術研究報告PRMU   Vol.104 ( No.290 ) 121 - 128   2004.09

  • 可変楕円モデルを用いたK-meansトラッキング

    華春生, 和田俊和, 呉海元

    電子情報通信学会技術研究報告PRMU   Vol.104 ( No.290 ) 113 - 120   2004.09

  • 可変楕円モデルを用いたK-meansトラッキング

    華春生, 和田俊和, 呉海元

    情報処理学会研究報告   Vol.2004 ( No.91 ) 113 - 120   2004.09

  • 回帰木を用いた非線形写像の学習と識別問題への応用

    和田俊和, 中村恭之

    情報処理学会研究報告   Vol.2004 ( No.91 ) 203 - 210   2004.09

  • I-067 Face Direction Estimation using Nonlinear Map Learning Algorithm "PaLM-tree"

    Satoh Tetsu, Wada Toshikazu, Nakamura Takayuki

    情報科学技術フォーラム一般講演論文集 ( Forum on Information Technology )  3 ( 3 ) 157 - 158   2004.08

  • DMDを用いた実時間レンジファインダ

    陳 謙, 和田 俊和

    知能メカトロニクスワークショップ講演論文集 ( 〔精密工学会〕 )  9   255 - 260   2004.08

  • 鮮明な画像撮影のための高速追従カメラシステム

    大池洋史, 呉海元, 加藤丈和, 和田俊和

    第9回知能メカトロニクスワークショップ     79 - 84   2004.08

  • ロボットの身体と環境との相互作用に基づく地図作成法

    上江洲吉美, 中村恭之, 和田俊和

    第9回知能メカトロニクスワークショップ     145 - 152   2004.08

  • カメラ同期・対応点の必要のない複数ステレオカメラのキャリブレーション法

    飯塚健男, 中村恭之, 和田俊和

    第9回知能メカトロニクスワークショップ ( 〔精密工学会〕 )  9   157 - 162   2004.08

  • 高速追従型アクティブカメラ

    大池洋史, 呉海元, 加藤丈和, 和田俊和

    画像の認識・理解シンポジウム(MIRU2004) デモセッション, Vol. I     113 - 114   2004.07  [Refereed]

  • 構造化MCMC とその応用

    松元郁佑, 加藤丈和, 呉海元

    画像の認識・理解シンポジウム(MIRU2004), Vol. I     727 - 732   2004.07  [Refereed]

  • K-D Decision Tree -最近傍識別器の高速化と省メモリ化-

    柴田智行, 加藤丈和, 和田俊和

    画像の認識・理解シンポジウム(MIRU2004), Vol. II     55 - 60   2004.07  [Refereed]

  • 特徴の連接と最近傍識別によるターゲット検出-事例に基づく情報統合

    加藤丈和, 浮田宗泊, 和田俊和

    画像の認識・理解シンポジウム(MIRU2004), Vol. II     1 - 6   2004.07  [Refereed]

  • ステレオカメラによるリアルタイム3次元位置計測システム

    飯塚健男, 中村恭之, 和田俊和

    画像の認識・理解シンポジウム(MIRU2004) デモセッション     312   2004.07  [Refereed]

  • 狭い共通視野での複数台カメラのキャリブレーション

    呉海元, 陳謙, 飯尾太郎, 和田俊和

    画像の認識・理解シンポジウム(MIRU2004), Vol. I     285 - 290   2004.07  [Refereed]

  • K-means トラッキング:背景混入に対して頑健な対象追跡法

    和田俊和, 濱塚俊明, 加藤丈和

    画像の認識・理解シンポジウム(MIRU2004), Vol. II   2   7 - 12   2004.07  [Refereed]

  • 画像の4分木表現を用いた高速最近傍識別

    武本浩二, 加藤丈和, 和田俊和

    画像の認識・理解シンポジウム(MIRU2004), Vol. II     347 - 352   2004.07  [Refereed]

  • 単眼画像からの視線推定

    呉海元, 陳謙, 和田俊和

    画像の認識・理解シンポジウム(MIRU2004), Vol. II     253 - 258   2004.07  [Refereed]

  • PaLM-treeによる車輪型移動ロボットの行動計画

    川原輝美, 中村恭之, 和田俊和

    ロボティクスメカトロニクス講演会'04 講演論文集     2004.06

  • Facial parts detection based on Structuralized MCMC

    MATSUMOTO YUSUKE, KATO TAKEKAZU, WU HAIYUAN, WADA TOSHIKAZU

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  144 ( No.40 ) 101 - 108   2004.05

     View Summary

    In this paper, we propose a new method called structuralized MCMC, an extension of the Markov Chain Monte Carlo (MCMC) framework. Our method estimates the parameters of objects that consist of several parts having structural relationships among themselves. The parameters of an object are divided by part, and the structure among parts is represented as a conditional probability on the distributions of the other parts. MCMC is applied by using this conditional probability as the kernel of the Markov chain, so each part is estimated using the estimated distributions of the other parts. Our method can thus estimate the distribution of each part while considering the structure among parts. Structuralized MCMC is applied to a facial parts detection problem, and extensive experiments demonstrate the effectiveness of our method.

  • 4分木表現を用いた画像の高速最近傍識別

    武本浩二, 加藤丈和, 和田俊和

    情報処理学会研究報告   Vol.2004 ( No.40 ) 25 - 32   2004.05

  • 事例ベースカメラキャリブレーション

    吉岡悠一, 和田俊和

    情報処理学会研究報告   Vol.2004 ( No.40 ) 49 - 56   2004.05

  • 鮮明な画像撮影のための高速追従型アクティブカメラ

    大池洋史, 呉海元, 加藤丈和, 和田俊和

    情報処理学会研究報告   Vol.2004 ( No.40 ) 71 - 78   2004.05

  • ステレオカメラによる色ターゲットの3次元位置計測

    飯塚健男, 中村恭之, 和田俊和

    情報処理学会研究報告   Vol.2004 ( No.40 ) 65 - 70   2004.05

  • A CALIBRATED PINHOLE CAMERA MODEL FOR SINGLE VIEWPOINT IMAGING SYSTEMS

    MOLDOVAN Daniel, YOSHIOKA Yuichi, WADA Toshikazu

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  2004 ( 26 ) 65 - 71   2004.03

     View Summary

    This paper presents a perspective imaging model that can represent any single viewpoint imaging system. Our imaging model consists of an optical center, whose position is determined in 3D space, and a virtual screen on which undistorted images taken with the uncalibrated camera are displayed. The model uses two screens as virtual image planes, and the optical center is estimated as the convergence point of the rays passing through corresponding points on these planes. This method has the advantages that 1) it can be applied to any single viewpoint camera, and 2) it can remove any type of distortion. In the experiments, results for normal and omni-directional cameras demonstrate the effectiveness of our method.

  • マスメイルデータベースとそれを用いたマスメイル検出システム

    松浦広明, 齋藤彰一, 上原哲太郎, 泉裕, 和田俊和

    情報処理学会研究報告,DSM   Vol.2004 ( No.37 ) 73 - 78   2004.03

  • 顔表情計測のための実時間レンジファインダの開発

    竹下昌宏, 陳謙, 和田俊和

    電子情報通信学会技術研究報告PRMU   Vol.103 ( No.738 ) 77 - 82   2004.03

  • 非線形写像学習のためのPaLM-treeの提案とその応用

    中村恭之, 加藤英介, 和田俊和

    第9回ロボティクスシンポジア予稿集     360 - 366   2004.03

  • Multi-Camera Calibration for Wide Range Scene Observation

    IIO Taro, WU Haiyuan, CHEN Qian, WADA Toshikazu

    Technical report of IEICE. Thought and language ( The Institute of Electronics, Information and Communication Engineers )  103 ( 657 ) 73 - 78   2004.02

     View Summary

    In this paper, we describe a novel method that can estimate the extrinsic parameters and the focal length of a camera by using only a single image of two coplanar circles with arbitrary radii. This method does not require that the circle centers or the full circles be viewable, and the circles can have any unknown radii. Compared with conventional methods, our method has the following advantages: 1) it does not use specially designed objects, 2) it can obtain a unique solution by using only one monocular image, 3) it can estimate the focal length and the extrinsic parameters, and 4) it can be used to estimate the relative translation and rotation between cameras. Extensive experiments on simulated and real images demonstrate the robustness and effectiveness of our method.

  • 二つの円パターンを利用した視線推定

    呉海元, 陳謙, 飯尾太郎, 和田俊和

    電子情報通信学会技術研究報告PRMU   Vol.103 ( No.657 ) 79 - 84   2004.02

  • 最近傍識別器を用いた背景差分と色検出の統合

    加藤丈和, 柴田智行, 和田俊和

    情報処理学会研究報告   Vol.2004 ( No.6 ) 31 - 36   2004.01

  • K-means Tracking : A Robust Target Tracking against Background Involution

    WADA Toshikazu

    MIRU, 2004     2004

  • Nearest Neighbor Classification for Quadtree representations of binary images

    TAKEMOTO Koji, KATO Takekazu, WADA Toshikazu

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  103 ( 390 ) 1 - 6   2003.10

     View Summary

    This report presents an accelerated Nearest Neighbor (NN) classifier for quadtree representations of binary images generated by color target detection. The method finds the NN prototype to the input quadtree among many prototypes and assigns the input to that prototype's class. For this purpose, we add density information to the grey nodes of the tree. In a coarse-to-fine comparison between two trees, we can calculate the upper and lower bounds of the distance between the trees by referring to the density information at any level. Using these bounds, we can reduce the NN candidates for the input in a manner similar to the branch-and-bound or A* algorithms. We modified this method to perform best-first search, which accelerates the decrease of the minimum upper bound and enables further reduction of the NN candidates. Classification can be accelerated further owing to the fact that the problem is not an NN search problem but an NN classification problem: if all the remaining NN candidates belong to a single class, we can classify the input immediately. Through experiments, we confirmed that our method is over forty times faster than brute-force NN search.
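
    The bound-based pruning can be illustrated on plain feature vectors. Here the paper's quadtree density bounds are replaced by running partial sums of squared differences, which are likewise lower bounds on the final distance; this is a sketch of the pruning principle, not the quadtree method itself.

```python
import numpy as np

def nn_classify_pruned(x, prototypes, labels):
    """NN classification with lower-bound pruning: the running sum of
    squared per-dimension differences can only grow, so once it exceeds
    the best full distance found so far, the prototype cannot be the
    nearest neighbor and is abandoned early."""
    x = np.asarray(x, dtype=float)
    best_i = 0
    best_d = float(np.sum((np.asarray(prototypes[0], dtype=float) - x) ** 2))
    for i in range(1, len(prototypes)):
        s = 0.0
        for pj, xj in zip(prototypes[i], x):
            s += (pj - xj) ** 2
            if s > best_d:  # lower bound exceeds the current upper bound
                break
        else:
            best_i, best_d = i, s  # completed: a new nearest candidate
    return labels[best_i]
```

    The report's further shortcut, stopping as soon as all surviving candidates share one class, sits naturally on top of this pruning loop.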

  • Contents based Mass-Mail Filtering

    WADA Toshikazu, SAITO Shoichi, IZUMI Yutaka, UEHARA Tetsutaro

    Technical report of IEICE. PS ( The Institute of Electronics, Information and Communication Engineers )  103 ( 173 ) 55 - 60   2003.07

     View Summary

    Recently, mass-mail sending has become a serious problem that hinders normal commerce and advertisement on the Internet. We are developing a system to solve this problem. Existing mass-mail filtering systems can be classified into two types: 1) open-relay-MTA-based filtering systems and 2) recognition-based content filtering systems. Since the number of open relay MTAs is much larger than the number of distinct mass-mail contents, content-based filtering is much more effective than open-relay-based filtering. However, recognition-based content filtering requires time-consuming training and tuning. Unlike these approaches, our system consists of two parts, a mass-mail checksum database and clients, with the checksums delivered to the clients via the DNS. The prototype database system is currently running: mass-mail contents sent to dummy addresses are analyzed, normalized, and their checksums stored in the DNS. Several clients that refer to the DNS have also been developed. This report presents an overview of the current system and future extensions.
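
    The checksum lookup at the heart of the system can be sketched as follows. The normalization rules (lowercasing, whitespace collapsing) and the use of SHA-256 are illustrative assumptions, not the system's documented choices.

```python
import hashlib

def mail_checksum(body):
    """Normalize the message body so that trivial mutations (case,
    whitespace) hash identically, then take a digest that a client
    could look up in the shared checksum database."""
    normalized = " ".join(body.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def is_mass_mail(body, known_checksums):
    """A client-side check: the message is flagged when its normalized
    checksum appears in the delivered set of known mass-mail digests."""
    return mail_checksum(body) in known_checksums
```

    Because the database stores only digests, clients can query it (e.g. over the DNS, as in the system described) without ever transferring message contents.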

  • Contents based Mass-Mail Filtering

    WADA Toshikazu, SAITO Shoichi, IZUMI Yutaka, UEHARA Tetsutaro

    情報処理学会研究報告インターネットと運用技術(IOT) ( Information Processing Society of Japan (IPSJ) )  2003 ( 68 ) 55 - 60   2003.07

     View Summary

    Recently, mass-mail sending has become a serious problem that hinders normal commerce and advertisement on the Internet. We are developing a system to solve this problem. Existing mass-mail filtering systems can be classified into two types: 1) open-relay-MTA-based filtering systems and 2) recognition-based content filtering systems. Since the number of open relay MTAs is much larger than the number of distinct mass-mail contents, content-based filtering is much more effective than open-relay-based filtering. However, recognition-based content filtering requires time-consuming training and tuning. Unlike these approaches, our system consists of two parts, a mass-mail checksum database and clients, with the checksums delivered to the clients via the DNS. The prototype database system is currently running: mass-mail contents sent to dummy addresses are analyzed, normalized, and their checksums stored in the DNS. Several clients that refer to the DNS have also been developed. This report presents an overview of the current system and future extensions.

  • Homography from Conic Intersection : Camera Calibration based on Arbitrary Circular Patterns

    WU HAIYUAN, CHEN QIAN, WADA TOSHIKAZU

    IPSJ SIG Technical Report CVIM ( Information Processing Society of Japan (IPSJ) )  139   9 - 16   2003.07

     View Summary

    For camera calibration, point correspondences between 3D and 2D points spreading widely over the image plane are necessary to obtain a precise result. However, when the observed area is wide, it is impractical to use a huge calibration object that covers the camera view. This paper presents a refined calibration method using circular patterns. We previously proposed a circular-pattern-based camera calibration method using a four-wheel mobile robot: by keeping a low speed and a constant steering angle, the robot draws a large circular pattern on a plane. Our previous method, however, has the limitation that the roll angle around the optical axis is assumed to be zero. This paper presents an improved version of the calibration method. Our method estimates the extrinsic parameters of a fixed camera from a single circular trajectory when the focal length of the camera is known. When the focal length is not given, two coplanar non-concentric circular trajectories are enough to calibrate the extrinsic parameters and the focal length. In both cases, neither the centers and radii of the circles nor the speed of the robot is required. Extensive experiments on simulated as well as real images demonstrate the robustness and accuracy of our method.

  • Introducing a crystalline flow for a multi-scale contour figure analysis

    Haiyuan Wu, Qian Chen, Toshikazu Wada

    情報処理学会研究報告コンピュータビジョンとイメージメディア(CVIM)   2003 ( 66 ) 9 - 16   2003.07

     View Summary

    For camera calibration, point correspondences between 3D and 2D points spreading widely over the image plane are necessary to obtain a precise result. However, when the observed area is wide, it is impractical to use a huge calibration object that covers the camera view. This paper presents a refined calibration method using circular patterns. We previously proposed a circular-pattern-based camera calibration method using a four-wheel mobile robot: by keeping a low speed and a constant steering angle, the robot draws a large circular pattern on a plane. Our previous method, however, has the limitation that the roll angle around the optical axis is assumed to be zero. This paper presents an improved version of the calibration method. Our method estimates the extrinsic parameters of a fixed camera from a single circular trajectory when the focal length of the camera is known. When the focal length is not given, two coplanar non-concentric circular trajectories are enough to calibrate the extrinsic parameters and the focal length. In both cases, neither the centers and radii of the circles nor the speed of the robot is required. Extensive experiments on simulated as well as real images demonstrate the robustness and accuracy of our method.

  • Theorems for Efficient Condensing based on Proximity Graphs

    WADA Toshikazu, KATO Takekazu

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  103 ( 96 ) 13 - 18   2003.05

     View Summary

    This paper presents two efficient condensing algorithms for nearest neighbor classifiers. Condensing removes prototype patterns while keeping the classification accuracy of the nearest neighbor classifier. The method plays an important role not only for nearest neighbor classifiers but also for support vector machines, because the resulting prototype patterns often involve the support vectors. However, previous condensing algorithms are computationally inefficient in general Pattern Recognition tasks, because they refer to proximity graphs, e.g., Delaunay and Gabriel graphs, whose computation time grows exponentially in the dimension of the pattern space. We propose two condensing methods that do not require the entire proximity graph of the given patterns.

  • Algorithms and Evaluations for Efficient Condensing based on Proximity Graphs

    KATO Takekazu, WADA Toshikazu

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  103 ( 96 ) 19 - 24   2003.05

     View Summary

    We have proposed new theorems for efficient condensing algorithms based on proximity graphs. This paper presents implementations and evaluations of two condensing algorithms based on our theorems, called Direct condensing and Chip-off condensing. The algorithms efficiently obtain the same prototypes as Voronoi condensing without computing the entire proximity graph of the given patterns. The Direct condensing algorithm is more computationally efficient than any previous algorithm for Voronoi condensing, and the Chip-off condensing algorithm can sequentially remove unused patterns while keeping the classifier boundaries. This paper evaluates the proposed condensing algorithms and demonstrates applications of the nearest-neighbor classifier and condensing.
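
    The underlying proximity-graph condensing idea can be sketched with a brute-force Gabriel graph: a prototype is kept only if some Gabriel neighbor carries a different label, so it helps shape the decision boundary. This is the classic graph-based condensing principle for exposition, not the papers' faster graph-free algorithms.

```python
import numpy as np

def gabriel_condense(X, y):
    """Keep a prototype iff it has a Gabriel neighbor of another class.

    Two points are Gabriel neighbors when no third point lies inside
    the ball whose diameter is the segment joining them.  O(n^3) here,
    which is exactly the inefficiency the papers' algorithms avoid."""
    n = len(X)
    keep = np.zeros(n, dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            mid = (X[i] + X[j]) / 2
            r2 = np.sum((X[i] - X[j]) ** 2) / 4
            blocked = any(k not in (i, j) and np.sum((X[k] - mid) ** 2) < r2
                          for k in range(n))
            if not blocked and y[i] != y[j]:
                keep[i] = keep[j] = True  # boundary-defining prototypes
    return keep
```

    On four collinear points labeled 0, 0, 1, 1, only the two points straddling the class boundary survive.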

  • Adaptive Collecting Examples in Color Competition

    山本俊一, 和田俊和, 奥乃博

    情報処理学会研究報告 ( 一般社団法人情報処理学会 )  2003 ( 41(CVIM-138) ) 1 - 8   2003.05

     View Summary

    The color competition method has been proposed for detecting a specified color in an image. It constitutes a look-up table using nearest neighbors in a color space, and is known to learn and classify quickly and accurately when trained with specified target and non-target color examples. The method detects accurately when many training patterns are available; however, collecting many training patterns manually is difficult. In this report, we propose a method that collects training patterns of skin and non-skin color while tracking humans based on skin color, and learns the skin color dynamically. The method generates hypotheses about training patterns based on the regions of tracked faces and the continuity of color variation, verifies these hypotheses based on color similarity and their contribution to skin-color detection, and learns the likely training patterns.

  • Robot Body Guided Camera Calibration

    WU HAIYUAN, WADA TOSHIKAZU, HAGIWARA YOUJI

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  136 ( 2 ) 67 - 74   2003.01

     View Summary

    This paper presents a novel camera calibration method for an RC car robot control system with external cameras. This system can be regarded as a sensorless robot system, which has the advantages of a low-cost body and precise localization by the external camera. The only drawback is that it requires a priori camera calibration. For this calibration, we do not use any specially designed object; instead, the robot itself is employed as the calibration object. Before calibration the robot cannot be driven along a straight line, but it can move along a circle by keeping a constant steering angle and slow speed. The circular locus of the robot is projected as an ellipse on the image plane by the perspective projection. Based on this geometric relationship, we propose a method to calibrate the focal length and the view direction of the camera relative to the working plane of the robot. Through computer simulations and experiments with a real camera and robot, we have confirmed that our method is more robust than Homography-based calibration using four point correspondences.

  • Map generation based on the interaction between robot body and its surrounding environment

    T. Nakamura, Y. Uezu, H. Wu, T. Wada

    IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems ( Institute of Electrical and Electronics Engineers Inc. )  2003-   70 - 75   2003  [Refereed]

     View Summary

    This paper presents a method for map generation based on the interaction between a robot body and its surrounding environment. While a robot moves in the environment, the robot interacts with its surrounding environment. If the effect of the environment on the robot changes, such interactions also change. By observing the robot's body, our method detects such change of the interaction and generates a description representing the type of change and the location where such change is observed. In the current implementation, we assume that there are two types of the change in the interaction. The real robot experiments are conducted in order to show the validity of our method.

    DOI

  • K-D decision tree: An accelerated and memory efficient nearest neighbor classifier

    T Shibata, T Kato, T Wada

    THIRD IEEE INTERNATIONAL CONFERENCE ON DATA MINING, PROCEEDINGS ( IEEE COMPUTER SOC )    641 - 644   2003

     View Summary

    Most nearest neighbor (NN) classifiers employ NN search algorithms for acceleration. However, NN classification does not always require an NN search. Based on this idea, we propose a novel algorithm named the k-d decision tree (KDDT). Since KDDT uses Voronoi-condensed prototypes, it is less memory-consuming than naive NN classifiers. We have confirmed through comparative experiments that KDDT is much faster than NN-search-based classifiers (from 9 to 369 times faster).
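
For reference, the baseline that KDDT accelerates is a naive nearest-neighbor classifier performing a full NN search per query. The sketch below shows only that baseline; the k-d decision tree and Voronoi condensing themselves are not reproduced, and the prototypes and labels are made-up illustrations.

```python
import numpy as np

def nn_classify(prototypes, labels, queries):
    """Naive NN classification: full distance computation per query."""
    # Squared Euclidean distances between every query and every prototype.
    d2 = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return labels[d2.argmin(axis=1)]

prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
labels = np.array([0, 0, 1])
pred = nn_classify(prototypes, labels, np.array([[0.2, 0.1], [3.5, 3.9]]))
# pred → [0, 1]
```

KDDT's observation is that determining the winning *label* does not require identifying the winning *prototype*, which is what makes the full search above avoidable.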

  • What's the New Approach towards Image Understanding?

    WADA Toshikazu

    Journal of The Society of Instrument and Control Engineers ( The Society of Instrument and Control Engineers )  42 ( 6 ) 485 - 490   2003

    DOI

  • Background subtraction based on cooccurrence of image variations

    M Seki, T Wada, H Fujiwara, K Sumi

    2003 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOL II, PROCEEDINGS ( IEEE COMPUTER SOC )    65 - 72   2003

     View Summary

    This paper presents a novel background subtraction method for detecting foreground objects in dynamic scenes involving swaying trees and fluttering flags. Most methods proposed so far adjust the permissible range of the background image variations according to the training samples of background images. Thus, the detection sensitivity decreases at those pixels having wide permissible ranges. If we can narrow the ranges by analyzing input images, the detection sensitivity can be improved. For this narrowing, we employ the property that image variations at neighboring image blocks have strong correlation, also known as "cooccurrence". This approach is essentially different from chronological background image updating or morphological postprocessing. Experimental results for real images demonstrate the effectiveness of our method.
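
The conventional per-pixel model that this paper improves on can be sketched as follows: learn a mean and standard deviation per pixel from training background frames, then flag pixels outside k standard deviations. The cooccurrence-based narrowing of the permissible range is not reproduced; `background_subtract` and its parameters are illustrative assumptions.

```python
import numpy as np

def background_subtract(training, frame, k=3.0):
    """Flag pixels that deviate more than k sigma from the learned background."""
    mu = training.mean(axis=0)
    sigma = training.std(axis=0) + 1e-6  # avoid a zero range at static pixels
    return np.abs(frame - mu) > k * sigma

rng = np.random.default_rng(0)
training = 100.0 + rng.normal(0.0, 2.0, size=(50, 8, 8))  # 50 background frames
frame = training.mean(axis=0).copy()
frame[2, 3] += 40.0  # one foreground pixel far outside the learned range
mask = background_subtract(training, frame)
```

Pixels with wide learned ranges (large sigma, e.g. swaying trees) are exactly where this baseline loses sensitivity, which is the problem the cooccurrence approach addresses.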

  • Editor's Message

    WADA T, UTSUMI A

    情報処理学会論文誌コンピュータビジョンとイメージメディア(CVIM) ( Information Processing Society of Japan (IPSJ) )  43 ( 11 ) i - iii   2002.12

  • Color - Target Detection Based on Nearest Neighbor Classifier : Example Based Classification and Its Applications

    WADA Toshikazu

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  2002 ( 84 ) 17 - 24   2002.09

     View Summary

    Color target detection tasks can be regarded as a color classification problem. Most classification methods first integrate features into a single value representing probability or similarity, and classification is done by comparing or thresholding these values. This integration, i.e., mapping from a multi-dimensional pattern space to a 1-dimensional space, discards the manifold structure of pattern distributions in the original space, and hence exceptional patterns can be misclassified. In this report, we propose a novel method for color target detection based on a nearest neighbor classifier, which does not require feature integration and can be trained by specifying target and non-target color examples. Since the color classifier is implemented as a LUT, our system can detect targets at video rate. Experimental results demonstrate the effectiveness of our method.
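
The LUT idea above can be sketched as follows: every quantized RGB cell is labeled offline by its nearest labeled example color, so run-time classification is a single table lookup per pixel. The bin size, example colors, and helper names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def build_lut(examples, labels, bins=16):
    """Label every quantized RGB cell by its nearest example color."""
    step = 256 // bins
    grid = np.arange(bins) * step + step // 2  # cell-center values per channel
    r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
    cells = np.stack([r, g, b], axis=-1).reshape(-1, 3).astype(float)
    d2 = ((cells[:, None, :] - examples[None, :, :]) ** 2).sum(axis=2)
    return labels[d2.argmin(axis=1)].reshape(bins, bins, bins), step

examples = np.array([[250.0, 40.0, 40.0], [40.0, 40.0, 250.0]])
labels = np.array([1, 0])  # 1 = target color, 0 = non-target
lut, step = build_lut(examples, labels)

pixel = np.array([230, 60, 50])       # a reddish pixel
pred = lut[tuple(pixel // step)]      # run-time cost: one table lookup
```

All nearest-neighbor computation happens offline; per-pixel cost at run time is constant, which is what makes video-rate detection feasible.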

  • Physics based Vision and its Future

    WADA Toshikazu

    Journal of the Robotics Society of Japan ( The Robotics Society of Japan )  20 ( 4 ) 360 - 363   2002.05

    DOI

  • Panel Discussion : Is mass - camera system necessary?

    TANIGUCHI RIN-ICHIRO, OHTA YUICHI, MINOH MICHIHIKO, ISHIGURO HIROSHI, KUWASHIMA SHIGESUMI, WADA TOSHIKAZU

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  2002 ( 2 ) 85 - 87   2002.01

     View Summary

    In this panel, we will discuss a variety of topics, e.g., "What is the intrinsic information obtained only by a mass-camera system?", "What is the role of 'communication' in a mass-camera system?", and "Future systems based on mass cameras". Through these discussions, we hope to form a vision of the future of this research field.

  • State Transition Models for Motion Recognition : Augmentation of HMMs and Progress in Non-HMM Methods(Special Issue "Temporal Sequence Beyond the HMM")

    YAMATO Junji, UEDA Naonori, WADA Toshikazu

    Journal of Japanese Society for Artificial Intelligence ( 人工知能学会 )  17 ( 1 ) 41 - 46   2002.01

  • Editor's Introduction to "Temporal Sequence Recognition:Beyond the HMM"

    WADA Toshikazu

    Journal of the Japanese Society for Artificial Intelligence ( The Japanese Society for Artificial Intelligence )  17 ( 1 ) 34 - 34   2002.01

    DOI

  • 多視点映像からの実時間3次元形状復元とその高精度化

    ウ小軍, 延原章平, 和田俊和, 松山隆司

    情報処理学会研究会資料     2002

  • 3次元人体形状計測に基づく指差し動作の解析

    田中宏一, 和田俊和, 松山隆司

    情報処理学会研究会資料   133   125 - 132   2002

  • プロジェクタ・カメラシステムのキャリブレーションに関する研究

    見市伸裕, 和田俊和, 松山隆司

    情報処理学会研究会資料     133 - 1   2002

  • <大学の研究・動向> 3次元ビデオ映像世界の開拓

    松山 隆司, 和田 俊和, 杉本 晃宏

    Cue : 京都大学電気関係教室技術情報誌 ( 京都大学電気関係教室・洛友会 )  8 ( 8 ) 8 - 12   2001.12

    DOI

  • Rotational Cameras for Omnidirectional Imaging and Their Application

    WADA Toshikazu

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  125 ( 4 ) 161 - 168   2001.01

     View Summary

    A rotational camera can be regarded as a sensor that accumulates and integrates multiple pieces of image information acquired by changing the camera parameters, i.e., pan, tilt, zoom, exposure, and so on. The accumulation and integration should be performed for each view direction from a single viewpoint. By using a rotational camera, we can realize 1) precise viewpoint calibration, 2) high spatial resolution, 3) a wide dynamic range of pixels, and 4) a high S/N ratio. As for omnidirectional imaging, a rotational camera must rotate to observe multiple view directions, i.e., an omnidirectional image cannot be captured at once. However, as an active camera, its time response is equivalent to that of an ordinary video camera. In particular, a specially calibrated rotational camera whose projection center coincides with its rotation center has the advantages that 1) background subtraction can be performed on images captured by the rotating camera, and 2) simple egomotion analysis can be applied, because camera action does not cause motion parallax. In this paper, all the above topics are discussed.

  • 視点固定型ステレオカメラを用いた対象追跡

    常谷茂之, 和田俊和, 松山隆司

    情報処理学会研究会資料     2001

  • 3次元ビデオ映像の能動的実時間撮影と対話的編集・表示

    ウ小軍, 圓藤康平, 和田俊和, 松山隆司

    電子情報通信学会パターン認識・メディア理解研究会     2001

  • 3次元ビデオ映像のためのデータ圧縮法

    木村雅之, 和田俊和, 松山隆司

    情報処理学会研究会資料     2001

  • ブリティッシュコロンビア大学 : Laboratory for Computational Intelligence (LCI)

    和田 俊和

    人工知能 ( 一般社団法人 人工知能学会 )  15 ( 3 ) 538 - 539   2000.05

    DOI

  • 3D ビデオ (1) : PCクラスタによる身体動作の実時間3次元映像化

    和田俊和

    The Proc. of IMPS, 2000     9 - 10   2000

  • 3Dビジョン(2):多視点映像からの3D映像の生成と表示

    圓藤康平, 西出義章, 和田俊和, 松山隆司

    映像メディア処理シンポジウム     2000

  • Recovering Shape of Unfolded Book Surface from a Scanner Image using Eigenspace Method

    Hiroyuki Ukida, Toshikazu Wada, Takashi Matsuyama

    MVA2000 ( The Institute of Electronics, Information and Communication Engineers )  83 ( 12 ) 2610 - 2621   2000  [Refereed]

     View Summary

    As a practical 3D shape reconstruction problem, this paper describes a method that quickly recovers the cross-sectional shape of a book from the shading image of its surface captured by an image scanner, using the eigenspace method, and corrects the shading and distortion in the image. Specifically, many pairs of shading images and cross-sectional shapes are acquired in advance, normalized, and stored in separate eigenspaces. For a given image, local linear interpolation is performed in the shading eigenspace, and the cross-sectional shape is reconstructed from the shape eigenspace using the interpolation coefficients, which are then used to correct the image. Taking into account the error of local linear interpolation within an eigenspace and the error of the linear transformation from the shading eigenspace to the shape eigenspace, the interpolation coefficients are determined by introducing an S/N-ratio concept for the data. For normalization, the translational component of the book is removed using the Fourier transform. Experiments with both a mock-up and real books confirm that fast and accurate shape reconstruction and image correction are possible, demonstrating the validity of the proposed method.

  • 視点固定型パン・チルト・ズームカメラを用いた適応的見え方モデルに基づく人物頭部の検出・追跡

    谷内清剛, 和田俊和, 松山隆司

    画像の認識・理解シンポジウム(MIRU2000)     9 - 14   2000

  • Human head tracking using adaptive appearance models with a fixed-viewpoint pan-tilt-zoom camera

    Kiyotake Yachi, Toshikazu Wada, Takashi Matsuyama

    Proceedings - 4th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2000 ( IEEE Computer Society )    150 - 155   2000  [Refereed]

     View Summary

    We propose a method for detecting and tracking a human head in real time from an image sequence. The proposed method has three advantages: (1) we employ a fixed-viewpoint pan-tilt-zoom camera to acquire image sequences; with this camera, we eliminate the variations in the head appearance due to camera rotations with respect to the viewpoint; (2) we prepare a variety of contour models of head appearances and relate them to the camera parameters; this allows us to adaptively select the model to deal with variations in the head appearance due to human activities; (3) we use the model parameters obtained by detecting the head in the previous image to estimate those to be fitted in the current image; this estimation reduces the computational time for head detection. Accordingly, the accuracy of the detection and the required computational time are both improved and, at the same time, robust head detection and tracking are realized in almost real time. Experimental results in a real situation show the effectiveness of our method. © 2000 IEEE.

    DOI

  • Multiobject Behavior Recognition by Event Driven Selective Attention Method

    IEEE Transactions on Pattern Analysis and Machine Intelligence   Vol. 22, No. 8, pp. 873-887   2000

    DOI

  • 3Dビジョン(1):PCクラスタを用いた身体 動作の実時間3次元形状復元

    和田俊和, ウ小軍, 東海彰吾, 松山隆司

    映像メディア処理シンポジウム     2000

  • Homography based parallel volume intersection: Toward real-time volume reconstruction using active cameras

    T Wada, XJ Wu, S Tokai, T Matsuyama

    5TH INTERNATIONAL WORKSHOP ON COMPUTER ARCHITECTURES FOR MACHINE PERCEPTION, PROCEEDINGS ( IEEE COMPUTER SOC )    331 - 339   2000

     View Summary

    Silhouette volume intersection is one of the most popular ideas for reconstructing the 3D volume of an object from multi-viewpoint silhouette images. This paper presents a novel parallel volume intersection method based on plane-to-plane homography for real-time 3D volume reconstruction using active cameras. It mainly focuses on the acceleration of back-projection from silhouette images to 3D space without using any sophisticated software technique, such as octree volume representation or look-up-table-based projection acceleration. This paper also presents a parallel intersection method for projected silhouette images. From the preliminary experimental results, we estimate that near-frame-rate volume reconstruction of a life-sized mannequin can be achieved at 3 cm spatial resolution on our PC cluster system.
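
The silhouette-intersection principle can be sketched with a toy example in which cameras are reduced to axis-aligned orthographic projections (the paper's homography-based back-projection and its parallelization are not reproduced): a voxel survives only if it projects inside the silhouette of every view.

```python
import numpy as np

def intersect(silhouettes, axes, n=8):
    """Carve an n^3 voxel grid: keep voxels inside every silhouette."""
    volume = np.ones((n, n, n), dtype=bool)
    idx = np.indices((n, n, n))
    for sil, axis in zip(silhouettes, axes):
        # Project along `axis`: the voxel's other two coordinates index
        # the 2D silhouette of that view.
        keep = [a for a in range(3) if a != axis]
        volume &= sil[idx[keep[0]], idx[keep[1]]]
    return volume

# A 4x4x4 cube of "object" seen from the three coordinate axes.
sil = np.zeros((8, 8), dtype=bool)
sil[2:6, 2:6] = True
volume = intersect([sil, sil, sil], axes=[0, 1, 2])
```

With three square silhouettes the carved result is the 4×4×4 cube (64 voxels); the paper's contribution is making the back-projection step fast enough for real-time use.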

  • PCクラスタを用いた身体動作の実時間3次元映像化

    ウ小軍, 東海彰吾, 和田俊和, 松山隆司

    情報処理学会研究会資料     2000

  • 選択的注視に基づく複数対象の動作認識

    信学論 D-II   Vol. J82-D-II, No.6, pp. 1031-1041   1999

  • 照明変化に対して頑健な背景差分法

    波部斉, 和田俊和, 松山隆司

    情報処理学会研究会資料     1999

  • Active image capturing and dynamic scene visualization by cooperative distributed vision

    T Matsuyama, T Wada, S Tokai

    ADVANCED MULTIMEDIA CONTENT PROCESSING ( SPRINGER-VERLAG BERLIN )  1554   252 - 288   1999

     View Summary

    This paper addresses active image capturing and dynamic scene visualization by Cooperative Distributed Vision (CDV, in short). The concept of CDV was proposed in our five-year project starting in 1996. From a practical point of view, the goal of CDV is summarized as follows: embed in the real world a group of network-connected Observation Stations (real-time video image processors with active camera(s)) and mobile robots with vision, and realize 1) wide-area dynamic scene understanding and 2) versatile scene visualization. Applications of CDV include real-time wide-area surveillance, remote conference and lecturing systems, interactive 3D TV and intelligent TV studios, navigation of (non-intelligent) mobile robots and disabled people, cooperative mobile robots, and so on. In this paper, we first define the framework of CDV and give a brief retrospective view of computer vision research to show the background of CDV. Then we present the technical research results obtained so far: 1) a fixed-viewpoint pan-tilt-zoom camera for wide-area active imaging, 2) moving object detection and tracking for reactive image acquisition, 3) multi-viewpoint object imaging by cooperative observation stations, and 4) scenario-based cooperative camera-work planning for dynamic scene visualization. Prototype systems demonstrate the effectiveness and practical utility of the proposed methods.

    DOI

  • PCクラスタを用いた実時間3次元形状復元システム

    東海彰吾, 美越剛宣, 角田健, 和田俊和, 松山隆司

    第5回知能情報メディアシンポジウム     1999

  • Image Understanding Environment and Calibrated Image Database: Status Report

    松山 隆司, 和田 俊和, 松尾 啓志

    情報処理   39 ( 2 )   1998.02

  • Real - time Object Detection and Tracking by Fixed Viewpoint Pan -Tilt- Zoom Camera

    MONOBE Y., WADA T., MATSUYAMA T.

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  1998 ( 5 ) 9 - 16   1998.01

     View Summary

    For the detection and tracking of objects moving in a wide surveillance area, active sensing using a pan-tilt-zoom camera is effective. In a visual reactive system that autonomously changes camera parameters depending on visual information, time elapses while capturing the image and determining and performing the camera action. Because of this elapsed time, the detected attributes (position, speed, etc.) of objects in the system correspond to past attributes in the real world. This time lag causes camera actions that are inconsistent with the moving objects. Hence, a real-time system should perform camera actions consistent with the moving objects by canceling this time lag. This report presents a method to determine the consistent camera action based on 1) motion estimation of the object and 2) the response-time property of the system. Experimental results demonstrate that the proposed method drastically improves the performance and stability of an object tracking system using a fixed-viewpoint camera.
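
The time-lag cancellation described above can be illustrated with a minimal constant-velocity prediction: the camera is aimed at the predicted current position rather than the stale observed one. The `predict_position` helper and the concrete numbers are illustrative assumptions, not the paper's controller.

```python
def predict_position(observed, velocity, latency):
    """Extrapolate a detected position forward over the system latency,
    assuming constant velocity during that interval."""
    return tuple(p + v * latency for p, v in zip(observed, velocity))

# Object observed at (100, 50) pixels, moving 30 px/s right and 10 px/s up,
# with 0.1 s of image-capture plus control delay.
aim = predict_position((100.0, 50.0), (30.0, 10.0), 0.1)
# aim is approximately (103.0, 51.0)
```

The paper additionally models the system's response-time property; the constant-velocity extrapolation here is only the motion-estimation half of that idea.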

  • Hough変換:投票と多数決原理に基づく幾何学的対象の検出と識別

    和田俊和

    コンピュータビジョン新技術コミュニケーションズ     149 - 165   1998

  • デジタル放送のためのグラフ構造に基づくインデックス情報による検索方式

    大和田俊和

    情報処理学会研究報告     1998

  • デジタル放送のためのインデックス情報の断片化方式に関する検討

    大和田俊和

    情報処理学会研究報告     1998

  • 回転を伴うカメラによる移動物体の検出

    村瀬健太郎, 和田俊和, 松山隆司

    画像の認識・理解シンポジウム MIRU'98     425 - 430   1998

  • 視覚・行動機能の統合による柔軟・頑健な能動視覚システムの開発-視点固定型パン・チルト・ズームカメラを用いた実時間対象検出・追跡-

    松山隆司, 和田俊和

    画像の認識・理解シンポジウム MIRU'98     359 - 364   1998

  • 能動視覚エージェントによる移動対象の協調的追跡

    松山隆司, 和田俊和, 丸山昌之

    画像の認識・理解シンポジウム MIRU'98   1   365 - 370   1998

  • 視点固定型パン・チルト・ズームカメラとその応用

    信学論 D-II   Vol. J81-D-II, No.6, pp. 1182-1193   1998

  • 多視点映像を用いた協調的動作認識

    佐藤正行, 和田俊和, 松山隆司

    情報処理学会研究会資料     1998

  • Appearance based behavior recognition by event driven selective attention

    T Wada, T Matsuyama

    1998 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, PROCEEDINGS ( IEEE COMPUTER SOC )    759 - 764   1998  [Refereed]

     View Summary

    Most behavior recognition methods proposed so far share the limitations of bottom-up analysis and the single-object assumption; bottom-up analysis can be confused by erroneous and missing image features, and the single-object assumption prevents us from analyzing image sequences including multiple moving objects. This paper presents a robust behavior recognition method free from these limitations. Our method is best characterized by 1) top-down image feature extraction by a selective attention mechanism, 2) object discrimination by colored-token propagation, and 3) integration of multi-viewpoint images. Extensive experiments on human behavior recognition in real-world environments demonstrate the soundness and robustness of our method.

    DOI

  • Shape from shading with interreflections under a proximal light source: Distortion-free copying of an unfolded book

    T Wada, H Ukida, T Matsuyama

    INTERNATIONAL JOURNAL OF COMPUTER VISION ( KLUWER ACADEMIC PUBL )  24 ( 2 ) 125 - 135   1997.09

     View Summary

    We address the problem of recovering the 3D shape of an unfolded book surface from the shading information in a scanner image. This shape-from-shading problem in a real world environment is made difficult by a proximal, moving light source, interreflections, specular reflections, and a nonuniform albedo distribution. Taking all these factors into account, we formulate the problem as an iterative, non-linear optimization problem. Piecewise polynomial models of the 3D shape and albedo distribution are introduced to efficiently and stably compute the shape in practice. Finally, we propose a method to restore the distorted scanner image based on the reconstructed 3D shape. The image restoration experiments for real book surfaces demonstrate that much of the geometric and photometric distortions are removed by our method.

    DOI
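
The Lambertian term underlying shape-from-shading can be sketched as follows. The paper's proximal, moving light source, interreflections, and specular components are not reproduced; the single distant light source below is a simplifying assumption for illustration.

```python
import math

def shade(normal, light, albedo=1.0):
    """Lambertian shading: intensity is albedo times the cosine of the
    angle between the (unit) surface normal and light direction."""
    dot = sum(n * l for n, l in zip(normal, light))
    return albedo * max(dot, 0.0)  # clamp: surfaces facing away are dark

# A patch tilted 60 degrees away from a vertical light appears half as
# bright as a patch facing the light directly (cos 60° = 0.5).
tilted = (math.sin(math.radians(60.0)), 0.0, math.cos(math.radians(60.0)))
i_facing = shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
i_tilted = shade(tilted, (0.0, 0.0, 1.0))
```

Shape-from-shading inverts this forward model: given intensities, recover the normals (and hence the surface); the paper's contribution is making that inversion work under the scanner's non-ideal illumination.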

  • Motion Recognition by Selective Attention Model for Cooperative Distributed Vision System

    WADA Toshikazu, KATO Takekazu

    IPSJ SIG Notes. CVIM ( Information Processing Society of Japan (IPSJ) )  1997 ( 10 ) 43 - 50   1997.01

     View Summary

    This paper presents a motion-recognition method for visual-surveillance tasks in a Cooperative-Distributed-Vision (CDV) system. The method is designed as the motion-recognition mechanism of an Observation Station, so as to be capable of real-time recognition of multiple object motions. To realize multiple-motion recognition, we propose the Selective Attention Model, which iterates event detection in focusing regions on the image space and the action of renewing the focusing regions according to the event-detection result. While iterating event detection and action, the model changes its internal states corresponding to the stage of the object motion. By employing an NFA as its state-transition mechanism, the model can generate and follow all possible states. The action of this model can be extended to camera-parameter control and message sending to other stations, which enables active and cooperative motion recognition.

  • 分散協調視覚プロジェクト-分散協調視覚研究, システム開発の概要-

    松山隆司, 浅田稔, 美濃導彦, 和田俊和

    情報処理学会研究会資料     1997

  • Mathematical Models in Pattern Recognition and Media Understanding : Present and Future of Mathematical Model Researches

    MUKAWA Naoki, KUMAZAWA Itsuo, IMIYA Atsushi, WAKAHARA Toru, WADA Toshikazu, OHTA Naoya, SHIZAWA Masahiko

    Technical report of IEICE. PRMU ( The Institute of Electronics, Information and Communication Engineers )  96 ( 247 ) 79 - 80   1996.09

     View Summary

    The research field for PRMU researchers is expanding rapidly. Recent criticisms of mathematical model research for computer vision were summarized as follows: (1) Can we find new directions for mathematical models? Is it exciting? (2) Does it work in the real world? Is it useful for real applications? In the panel discussion we will review the above criticisms and discuss what comes next for mathematical model research.

  • 特別企画:わが国におけるIP, CV研究の軌跡と現状

    松山隆司, 久野義徳, 谷口倫一郎, 和田俊和

    情報処理学会研究会資料     1996

  • Appearance Sphere-パン・チルト・ズームカメラのための背景モデル-

    和田俊和, 浮田宗伯, 松山隆司

    画像の認識・理解シンポジウム(MIRU’96)     II 103 - 108   1996

  • 広域分散監視システムにおける分散協調型対象同定法

    和田俊和, 田村牧也, 松山隆司

    画像の認識・理解シンポジウム(MIRU’96)   103   1996

  • Appearance sphere: Background model for pan-tilt-zoom camera

    Toshikazu Wada, Takashi Matsuyama

    Proceedings - International Conference on Pattern Recognition ( Institute of Electrical and Electronics Engineers Inc. )  1   718 - 722   1996  [Refereed]

     View Summary

    Background subtraction is a simple and effective method to detect anomalous regions in images. In spite of the effectiveness, it cannot be used with an active (moving) camera head because the background image varies with camera-parameter control. This paper presents a background subtraction method with pan-tilt-zoom control. The proposed method consists of an omnidirectional background model called appearance sphere and parallax free sensing. Based on this model, precise background images can be generated and background subtraction can be performed for any combination of pan-tilt-zoom parameters without restoring 3D scene information. © 1996 IEEE.

    DOI
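
The appearance-sphere idea can be illustrated in one dimension: because the projection center is fixed, the background for any pan angle is simply a crop of a pre-captured panoramic model, so subtraction needs no 3D scene recovery. The 1-D panorama and pure-pan setting below are simplifications for illustration, not the paper's implementation.

```python
import numpy as np

def background_for_pan(panorama, pan_deg, fov_deg):
    """Return the background strip visible at a given pan angle by cropping
    (with wrap-around) a precomputed 360-degree panoramic model."""
    n = panorama.shape[0]
    start = int(round((pan_deg % 360.0) / 360.0 * n))
    width = int(round(fov_deg / 360.0 * n))
    return np.take(panorama, np.arange(start, start + width), mode="wrap")

panorama = np.arange(360.0)  # one "pixel" per degree of azimuth
view = background_for_pan(panorama, pan_deg=350.0, fov_deg=20.0)
```

A view straddling the 360°/0° seam wraps around correctly, which is why the model is naturally a sphere (here, a circle) rather than a flat mosaic.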

  • A Report on ICCV - 95

    Wada Toshikazu, Shizawa Masahiko, Uchiyama Toshio, Fujiwara Hisanaga, Miura Jun, Lee Chil-Woo, Zheng Jiang-Yu, Asada Naoki

    IPSJ SIG Notes ( 一般社団法人情報処理学会 )  1995 ( 95 ) 57 - 64   1995.09

     View Summary

    This report describes an overview of the fifth International Conference on Computer Vision, which was held at MIT, USA in June 20-23, 1995.

  • ICCV'95

    Wada Toshikazu

    The Journal of the Institute of Television Engineers of Japan ( The Institute of Image Information and Television Engineers )  49 ( 8 ) 1100 - 1100   1995.08

     View Summary

    The fifth International Conference on Computer Vision (ICCV'95) was held at the Massachusetts Institute of Technology, USA, from June 20 to 23, 1995. It was organized by the ICCV'95 executive committee, formed under the IEEE PAMI Technical Committee. The conference received 600 submissions and accepted 161 papers (56 oral, 105 poster), with 582 participants. ICCV is an international computer vision conference held every two years; although relatively small in scale, its paper review is very strict and its sessions are single-track, so the quality of its presentations and discussions has a strong reputation among researchers.

  • Shape from shading with interreflections under proximal light source - 3D shape reconstruction of unfolded book surface from a scanner image

    T WADA, H UKIDA, T MATSUYAMA

    FIFTH INTERNATIONAL CONFERENCE ON COMPUTER VISION, PROCEEDINGS ( I E E E, COMPUTER SOC PRESS )    66 - 71   1995  [Refereed]

  • CVCV-WG特別報告 : コンピュータビジョンにおける技術評論と将来展望 (I)-投票と多数原理に基づく幾何学的対象の検出と識別-

    和田俊和

    情処研資   91   71 - 78   1994

  • 分散協調処理による画像の領域分割法

    和田俊和

    画像の認識 理解シンポジウム, 1994   1   169 - 176   1994

  • 3D Shape Reconstruction of Unfolded Book Surface from a Scanner Image

    Hiroyuki Ukida, Toshikazu Wada, Takashi Matsuyama

    IAPR Workshop Machine Vision Applications     409 - 412   1994  [Refereed]

  • 動的背景モデルを用いた移動領域の抽出

    和田俊和, 松山隆司

    情報処理学会第49回全国大会   2   141 - 142   1994

  • Discrete Scale Space Filtering

    Toshikazu Wada, T.Hosokawa, Takashi Matsuyama

    Proc. of Asian Conference on Computer Vision     1993  [Refereed]

  • スネークをエージェントとする分散協調型領域分割法

    喜田弘司, 和田俊和, 松山隆司

    電子情報通信学会研究会資料     1993

  • High Precision γ-ω Hough Transformation Algorithm to Detect Arbitrary Digital Lines

    Seki Makito, Wada Toshikazu, Matsuyama Takashi

    IPSJ SIG Notes ( Information Processing Society of Japan (IPSJ) )  1993 ( 66 ) 7 - 14   1993

     View Summary

    The γ-ω Hough transform, which we proposed before, has the following advantages over the ordinary ρ-θ Hough transform: (a) The ρ-θ parameter space involves an inherent bias, i.e., uneven sensitivity to the directions of straight lines; in the γ-ω space this bias is completely eliminated. (b) A voting curve in the γ-ω space is a piecewise linear line consisting of two segments; this enables fast voting and facilitates the analysis of geometric properties of straight lines. In practice, however, the γ-ω Hough transformation algorithm involves a crucial problem: votes from pixels on the same digital line are not always accumulated into a single cell (quantization unit) in the parameter space, and hence spread over several neighboring cells. The essential cause of this problem is that the set of cells in the γ-ω space does not represent all possible digital lines in an image. In this paper, we first analyze detailed geometric properties of digital lines and propose a new quantization method, i.e., cell configuration, of the γ-ω space. In this cell configuration, all possible digital lines have a one-to-one correspondence with the set of cells. Then, considering the uncertainty range of the vote from a pixel, a new voting method is proposed. We call the Hough transformation algorithm incorporating this new cell configuration and voting method the "high precision γ-ω Hough transformation algorithm". Experimental results have demonstrated that digital lines of arbitrary orientations and locations can be correctly detected by the high precision γ-ω Hough transformation algorithm.
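
For contrast with the γ-ω parameterization discussed above, a minimal ρ-θ Hough voting loop is sketched below; the paper's γ-ω space and high-precision cell configuration are not reproduced, and the grid sizes are illustrative assumptions.

```python
import math

def hough_peak(points, n_theta=180, rho_step=1.0, rho_max=50.0):
    """Vote each point's sinusoid into a rho-theta accumulator and return
    the (theta, rho) of the most-voted cell."""
    n_rho = int(2 * rho_max / rho_step)
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int((rho + rho_max) / rho_step)
            if 0 <= r < n_rho:
                acc[t][r] += 1
    t, r = max(((t, r) for t in range(n_theta) for r in range(n_rho)),
               key=lambda tr: acc[tr[0]][tr[1]])
    return math.pi * t / n_theta, r * rho_step - rho_max

# Points on the vertical line x = 10: the peak should be theta = 0, rho = 10.
theta, rho = hough_peak([(10.0, y) for y in range(20)])
```

The vote-spreading problem the paper analyzes is visible here too: the `int(...)` binning can split votes from one digital line across adjacent ρ cells, which is what the γ-ω cell configuration is designed to prevent.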

  • Shape from Shading under Attenuating Illumination -Restoring Distorted Scanner Image of Unfolded Book Surface-

    浮田浩行, 和田俊和, 松山隆司

    情報処理学会CV研究会   1993 ( 7 ) 9 - 16   1993

     View Summary

    In this paper, we present a method to estimate the 3D shape of an unfolded curved book surface from an image taken by an image scanner. To solve this estimation problem, we augment the ordinary shape-from-shading model in two points. The first augmentation is that the illuminant intensity and the lighting direction vary with the gap between the scanning plane and the book surface. The second is that Phong's reflection model is incorporated to cope with the specular reflection on the book surface. In the experiments, the 3D shape of the book surface was correctly reconstructed using the augmented model, and the estimated shape was then used for the restoration of the distorted scanner image. This restoration considerably improved the readability of the book surface.

  • A Retrieval Support System for Multiple Databases

    大和田, 俊和, 清木, 康

    全国大会講演論文集   第44回 ( ソフトウェア ) 195 - 196   1992.02

     View Summary

    This paper describes a database retrieval support system for the many databases available online. With the recent spread of databases, both the variety and the number of databases have increased, making it difficult to grasp the entire collection and select the database appropriate to a retrieval request. We present the meta-information and basic functions needed for users without sufficient knowledge of the databases to select one suited to their retrieval request. The meta-information describes the characteristics and actual contents of each database; applying the basic functions to it selects an appropriate database. This paper presents guidelines for classifying meta-information and describes how the meta-information is generated and used.

  • 文字の共起性に着目した英文書画像の解析

    森田清輝, 和田俊和, 松山隆司

    1992年電子情報通信学会春季大会     1992

  • γ-ωHough変換を用いた幅のある線分の抽出

    藤原久永, 和田俊和, 松山隆司

    1992年電子情報通信学会春季大会     1992

  • Shape from Shading on Textured Cylindrical Surface -Restoring Distorted Scanner Images of Unfolded Book Surfaces-

    Toshikazu Wada, Takashi Matsuyama

    Proc. of IAPR Workshop on Machine Vision Applications     1992  [Refereed]

  • Wm Leler : Constraint Programming Languages Their Specification and Generation, Addison-Wesley (1988).

    和田 俊和

    人工知能学会誌 = Journal of Japanese Society for Artificial Intelligence   6 ( 1 ) 140 - 141   1991.01

  • Hough変換における歪みのないρ-θパラメータ空間の構成法

    藤井高広, 和田俊和, 松山隆司

    1991年信学会春期全国大会     1991

  • γ-ω Hough変換-可変標本化によるρ-θパラメータ空間の歪みの除去と投票軌跡の直線化-

    和田俊和, 藤井高広, 松山隆司

    信学技報     1991

  • On a Structural Matching for Waveforms

    伊藤 泰雅, 和田 俊和, 佐藤 誠

    全国大会講演論文集   40 ( 0 ) 380 - 381   1990.03

     View Summary

    The waveform matching problem of finding corresponding parts between waveforms is an important topic in fields such as signal processing and stereo vision. The convex and concave regions of a waveform can be captured hierarchically using scale-space filtering, and matching such hierarchical structures enables stable and efficient correspondence. In this report, we propose the problem of "structural matching", which matches the hierarchical structures of waveforms themselves, and discuss its formulation and solution when the hierarchical structure is represented by the inclusion relation of second-order zero-crossing lines.

  • On a Structural Matching for Waveforms

    伊藤泰雅, 和田 俊和, 佐藤 誠

    情報処理学会研究報告コンピュータビジョンとイメージメディア(CVIM)   1990 ( 6 ) 97 - 104   1990.01

     View Summary

    The pattern matching of waveforms is an important problem in the fields of signal processing, image processing, and so on. Hitherto, methods have mainly been proposed which match the feature points of waveforms directly. Since the convex and concave regions of a waveform inherently have a hierarchical structure, matching this hierarchy should enable stable and precise correspondence. In this report, focusing on the hierarchical structure of waveforms, we newly propose the problem of "structural matching", which matches these hierarchical structures. First, the hierarchical structure of a waveform is represented by the inclusion relation of second-order zero-crossing lines obtained by scale-space filtering. This structure is then described as a tree of structure points, and by expressing the fluctuation of waveform morphology probabilistically, structural matching is formulated as a maximum-likelihood estimation of the fluctuation of structure points. We further discuss a solution method for this problem.

  • 7)構造情報による波形の分析・合成(視覚情報研究会)

    和田 俊和, 佐藤 誠, 河原田 弘

    テレビジョン学会誌 ( 一般社団法人映像情報メディア学会 )  43 ( 8 ) 865 - 865   1989.08

  • Analysis and Synthesis of Waveform based on Structural Features

    Wada Toshikazu, Sato Makoto, Kawarada Hiroshi

    ITE Technical Report ( The Institute of Image Information and Television Engineers )  13 ( 20 ) 37 - 42   1989

    DOI

  • An Analysis of One - Dimensional Patterns by Structure line

    和田 俊和, 佐藤 誠

    情報処理学会研究報告コンピュータビジョンとイメージメディア(CVIM)   1988 ( 86 ) 1 - 8   1988.11

     View Summary

    A structure line is a hierarchical representation of waveforms based on scale-space filtering. It has the same topological property as a trinary tree and represents the hierarchy of the convex and concave regions of a waveform. In this paper, we discuss an application of the structure line to computer vision. One of the most basic and difficult problems in computer vision is the reconstruction of a 3-dimensional object from multi-viewpoint images. Characteristic points may be occluded between the images, in which case no matching pair can be found, so the occluded regions should be known before the correspondence process begins. Assuming images observed by a camera moving through a two-dimensional environment containing several trees, we investigated the relation between morphological transitions of the structure line of the observed images (generation and disappearance, cutting and joining of branches) and transitions of the scene (occlusion of trees, objects entering and leaving the field of view, and changes in the relative positions and sizes of trees on the screen). By organizing this relation, changes in the appearance of the scene, such as the occlusion of trees, can be judged from the observed images.

  • 3.BAIにより著明な腫瘍陰影の縮小をみた2症例(第567回千葉医学会例会・第11回肺癌研究所例会)

    由佐,俊和, 和田,源司

    千葉医学雑誌 = Chiba medical journal ( 千葉医学会 )  53 ( 3 ) 121   1977.06


▼display all

Awards & Honors

  • IEICE Fellow

    Winner: Toshikazu Wada

    2023.03   The Institute of Electronics, Information and Communication Engineers   For the development of practical algorithms for computer vision and pattern recognition

  • 学会賞論文賞

    2009.05   システム制御情報学会  

  • Outstanding Paper Award

    2008.11   Korea Robotics Society  

  • 電子情報通信学会論文賞

    1999.05   電子情報通信学会  

  • 情報処理学会 山下記念研究賞

    1997.09   情報処理学会  

  • David Marr Prize

    1995.06   Fifth International Conference on Computer Vision (ICCV 95)  

▼display all

Conference Activities & Talks

  • Credibility of Image and Video Information

    Toshikazu Wada  [Invited]

    IPSJ 50th Anniversary (72nd) National Convention  2010.03   Information Processing Society of Japan

  • Discrete Optimization and Its Applications to Image Analysis

    Toshikazu Wada  [Invited]

    Photonics Technology Forum, 4th Workshop on Optical Information Technology  2010.02   Osaka Science & Technology Center

  • Nearest Neighbor Search and Its Applications to Image Analysis

    Toshikazu Wada  [Invited]

    Lecture at Toyota Central R&D Labs.  2010.01   Toyota Central R&D Labs., Inc.

  • [Tutorial] Theory and Algorithms of Nearest Neighbor Search

    Toshikazu Wada  [Invited]

    IPSJ SIG-CVIM, November 2009 meeting  2009.11   Information Processing Society of Japan

  • Object Representation for Visual Surveillance

    Toshikazu Wada  [Invited]

    SSII09: 15th Symposium on Sensing via Image Information  2009.06   Image Sensing Technology Study Group

  • Chapter 2: Image Recognition from Fundamentals to Applications

    Toshikazu Wada  [Invited]

    32nd STARC Advanced Lecture, Seminar on Image and Media Analysis Technology  2008.09   Semiconductor Technology Academic Research Center (STARC)

  • Image Recognition from Fundamentals to Applications

    Toshikazu Wada  [Invited]

    Technical course on image analysis  2008.03   Analysis Support Net OKAYAMA, Image Analysis Group

  • Survey: Example-Based Pattern Recognition and Computer Vision

    Toshikazu Wada  [Invited]

    IPSJ SIG-CVIM  2006.09   Information Processing Society of Japan

  • Example-Based Vision

    Toshikazu Wada  [Invited]

    Tutorial at the Symposium on Sensing via Image Information (SSII2005)  2005.06   Image Sensing Technology Study Group

  • Nearest Neighbor Classification by Space Partitioning and Learning Algorithms for Nonlinear Mappings: Theory and Applications

    Toshikazu Wada  [Invited]

    Seiken Colloquium  2004.11   Tokyo Institute of Technology

  • New Developments in Pattern Recognition and Learning Theory for Computer Vision

    Toshikazu Wada  [Invited]

    IPSJ SIG-CVIM  2004.09   Information Processing Society of Japan

  • Computer Science Seminar

    Toshikazu Wada  [Invited]

    Computer Science Seminar  2000.01   Department of Computer Science, Dalhousie University, Canada

  • Special Lecture on Intelligent Machines

    Toshikazu Wada  [Invited]

    Special Lecture on Intelligent Machines  1999.01   Department of Mechanical Engineering, The University of Tokushima

  • Top-Down Analysis in Video Recognition

    Toshikazu Wada  [Invited]

    Young Researchers' Seminar '96  1996.12   IEICE Information and Systems Society

  • Mathematical Models for Pattern Recognition and Media Understanding

    Toshikazu Wada  [Invited]

    IEICE Technical Meeting on Pattern Recognition and Media Understanding (PRMU)  1996.09   The Institute of Electronics, Information and Communication Engineers


Patents

  • Neural network processing device, computer program, neural network manufacturing method, neural network data manufacturing method, and neural network utilization device

    Patent no: Japanese Patent No. 7438544

    Date registered: 2024.02.16 

    Date applied: 2019.08.28 ( 特願2020-546831 )   Public disclosure date: 2020.03.19 ( WO2020/054402 )

    Inventor(s)/Creator(s): Toshikazu Wada, Koji Kamma, Yuuki Isoda  Applicant: Wakayama University

  • Neural network compression method, neural network compression device, computer program, and method for producing compressed neural network data

    Patent no: Japanese Patent No. 7438517

    Date registered: 2024.02.16 

    Date applied: 2019.07.25 ( 特願2019-137019 )   Publication date: 2021.02.18 ( 特開2021-022050 )  

    Inventor(s)/Creator(s): Toshikazu Wada, Koji Kamma  Applicant: Wakayama University

  • Height estimation method and height estimation system

    Patent no: 6173377

    Date registered: 2017.07.14 

    Date applied: 2015.03.03 ( 特願2015-41662 )   Publication date: 2016.09.05 ( 特開2016-162307 )  

    Inventor(s)/Creator(s): Toshikazu Wada, 大川一朗, 古川裕三, 目片健一  Applicant: Wakayama University, Giken Trastem Co., Ltd.

  • Same-person detection method and same-person detection system

    Patent no: 6173376

    Date registered: 2017.07.14 

    Date applied: 2015.03.03 ( 特願2015-41656 )   Publication date: 2016.09.05 ( 特開2016-162306 )  

    Inventor(s)/Creator(s): Toshikazu Wada, 大川一朗, 古川裕三, 目片健一  Applicant: Wakayama University, Giken Trastem Co., Ltd.

  • Mail filtering system, computer program therefor, and information generation method

    Patent no: 5366204

    Date registered: 2013.09.20 

    Date applied: 2009.07.17 ( 特願2009-168408 )   Publication date: 2011.02.03 ( 特開2011-22876 )  

    Inventor(s)/Creator(s): Toshikazu Wada  Applicant: Wakayama University

  • Information providing device, computer program, recording medium, creation method, creation system, and method of guiding users to a specific URL

    Patent no: 5246627

    Date registered: 2013.04.19 

    Date applied: 2010.06.01 ( 特願2010-125977 )   Publication date: 2011.12.15 ( 特開2011-253293 )  

    Inventor(s)/Creator(s): Toshikazu Wada  Applicant: Wakayama University

  • Camera calibration device and method

    Patent no: 3849030

    Date registered: 2006.09.08 

    Date applied: 2004.04.23 ( 特願2004-128502 )   Publication date: 2005.11.04 ( 特開2005-308641 )  

    Inventor(s)/Creator(s): 中村恭之, Toshikazu Wada, 飯塚健男  Applicant: Wakayama University

  • Image processing device, feature extractor training method, classifier update method, and image processing method

    Date applied: 2022.12.07 ( 特願2022-195687 )  

    Inventor(s)/Creator(s): Toshikazu Wada, Koji Kamma, 岡山敏之, 野口威, 島田佳典 

  • Object detection method and computer program

    Date applied: 2022.12.05 ( 特願2022-194547 )  

    Inventor(s)/Creator(s): Toshikazu Wada, 北尾颯人, 古川裕三 

  • Neural network processing device, neural network processing method, and computer program

    Date applied: 2020.07.20 ( 特願2020-123973 )   Publication date: 2022.02.01 ( 特開2022-20464 )  

    Inventor(s)/Creator(s): Toshikazu Wada, Koji Kamma  Applicant: Wakayama University

  • NEURAL NETWORK PROCESSING DEVICE, COMPUTER PROGRAM, NEURAL NETWORK MANUFACTURING METHOD, NEURAL NETWORK DATA MANUFACTURING METHOD, NEURAL NETWORK USE DEVICE, AND NEURAL NETWORK DOWNSCALING METHOD

    Date applied: 2019.03.26 ( 特願 2019-059091 )  

    Inventor(s)/Creator(s): WADA Toshikazu, KAMMA Koji, ISODA Yuuki  Applicant: Wakayama University

    Suppresses the performance degradation of a neural network when it is downscaled. The neural network processing device 10 of this disclosure is configured to execute a process 22 that feeds multiple input data 40 to a neural network N1 of interconnected artificial neurons and obtains, for each artificial neuron, the vector consisting of that neuron's multiple outputs, and a unification process 23 that, based on these vectors, selects multiple artificial neurons that behave identically or similarly and unifies the selected neurons.
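The idea in the summary above can be sketched in a few lines. The following is my own hedged illustration, not the patented procedure: record each neuron's response vector over a batch of inputs, then fold neurons whose vectors point in (nearly) the same direction into one, rescaling the survivor's outgoing weight so the sum fed downstream is preserved.

```python
# Hedged sketch (my illustration, not the patented method) of unifying
# neurons that behave alike, judged by their response vectors over a batch.
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

def unify(responses, out_weights, threshold=0.999):
    """responses[i]: neuron i's outputs over a batch; out_weights[i]: its outgoing weight."""
    keep, merged_w = [], []
    for i, r in enumerate(responses):
        for k, j in enumerate(keep):
            if cosine(r, responses[j]) >= threshold:
                # neuron i behaves like kept neuron j (proportional responses):
                # fold its weight into j, scaled by the response-norm ratio,
                # so the downstream weighted sum is (approximately) preserved
                merged_w[k] += out_weights[i] * norm(r) / norm(responses[j])
                break
        else:
            keep.append(i)
            merged_w.append(out_weights[i])
    return keep, merged_w

# neuron 2 responds exactly like 2x neuron 0, so it gets folded into neuron 0
responses = [[1.0, 2.0, 3.0], [0.0, 1.0, 0.0], [2.0, 4.0, 6.0]]
keep, w = unify(responses, [0.5, 1.0, 0.25])
```

Because neuron 2's contribution 0.25 × (2 × r0) equals 0.5 × r0, the surviving neuron 0 ends up with weight 1.0 and the network's output over the batch is unchanged.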

  • Image processing device, imaging system, image processing method, and computer program

    Patent no: Japanese Patent No. 6806356

    Date applied: 2016.06.30 ( 特願2016-130865 )   Publication date: 2018.01.11 ( 特開2018-5518 )  

    Inventor(s)/Creator(s): Toshikazu Wada, アレクサンダー  Applicant: Wakayama University

  • Method and device for sorting crushed glass

    Date applied: 2012.03.16 ( 特願2012-59526 )   Publication date: 2013.09.30 ( 特開2013-192973 )  

    Inventor(s)/Creator(s): Toshikazu Wada, 坂本正人, 藍畑春雄, 服部修藏, 前田裕司, 徳本真一  Applicant: Wakayama University, Wakayama Prefecture, 株式会社資源開発, 株式会社服部製作所

  • Equipment condition monitoring method and device

    Date applied: 2012.01.11 ( 特願2012-3027 )   Publication date: 2013.07.22 ( 特開2013-143009 )  

    Inventor(s)/Creator(s): 渋谷久恵, 前田俊二, Toshikazu Wada, 尾崎晋作  Applicant: Wakayama University, Hitachi, Ltd.

  • Equipment condition monitoring method and device

    Date applied: 2011.07.15 ( 特願2011-156648 )   Publication date: 2013.02.04 ( 特開2013-25367 )  

    Inventor(s)/Creator(s): 渋谷久恵, 前田俊二, Toshikazu Wada, 尾崎晋作  Applicant: Wakayama University, Hitachi, Ltd.

  • Photographic image processing method, photographic image processing device, and event estimation method

    Date applied: 2008.07.25 ( 特願2008-192867 )   Publication date: 2010.02.12 ( 特開2010-33199 )  

    Inventor(s)/Creator(s): Toshikazu Wada, 倉本寿一, 岡藍子, 北耕次  Applicant: Wakayama University, Noritsu Koki Co., Ltd.

  • Photographic image processing method, photographic image processing program, and photographic image processing device

    Date applied: 2008.07.25 ( 特願2008-192866 )   Publication date: 2010.02.12 ( 特開2010-34713 )  

    Inventor(s)/Creator(s): Toshikazu Wada, 倉本寿一, ハリムサンディ, 北耕次  Applicant: Wakayama University, Noritsu Koki Co., Ltd.

  • Computer program and recording medium for nonlinear mapping learning

    Date applied: 2005.05.31 ( 特願2005-159328 )   Publication date: 2006.12.14 ( 特開2006-338123 )  

    Inventor(s)/Creator(s): 中村恭之, Toshikazu Wada  Applicant: Wakayama University


Research Exchange

  • [In-person & online] MOBIO Industry-Academia Collaboration Office series: joint research-seeds presentation by universities and colleges of technology, "Robotics & AI" edition

    2020.10
     
  • Osaka Prefecture University & Wakayama University joint presentation of engineering research seeds

    2018.10
     
  • Robotics: New Technology Briefing

    2018.07
     
  • Osaka Prefecture University & Wakayama University joint presentation of engineering research seeds

    2016.11
     
  • Osaka Prefecture University & Wakayama University joint presentation of engineering research seeds

    2015.11
     
  • Research ethics education workshop

    2015.06
     
  • Joint presentation of engineering research seeds

    2014.11
     
  • Joint presentation of engineering research seeds

    2013.11
     
  • Lecture on HLAC-feature-based general-purpose recognition and its applications

    2013.08
     
  • Lecture: What is deep learning? A discussion centered on applications to image recognition

    2013.05
     
  • Panel discussion on overcoming the semantic gap

    2012.07
     


KAKENHI

  • CNN compaction using Neuro-Coding/Unification

    2019.04
    -
    2022.03
     

    Grant-in-Aid for Scientific Research(C)  Principal investigator

  • Research on analysis methods for measured images in biology

    2014.04
    -
    2017.03
     

    Grant-in-Aid for Scientific Research(C)  Co-investigator

  • Example-based precursor detection

    2012.04
    -
    2016.03
     

    Grant-in-Aid for Scientific Research(B)  Principal investigator

  • Example-based super-resolution of still images

    2006.04
    -
    2008.03
     

    Grant-in-Aid for Exploratory Research  Principal investigator

  • Research on a three-dimensional "highly realistic broadcasting" system

    2006.04
    -
    2008.03
     

    Grant-in-Aid for Scientific Research(C)  Co-investigator

  • Research on recognition of being seen using active cameras

    2004.04
    -
    2007.03
     

    Grant-in-Aid for Scientific Research(A)  Principal investigator

  • Research on faster and higher-performance nearest neighbor classifiers

    2004.04
    -
    2007.03
     

    Grant-in-Aid for Young Scientists(B)  Co-investigator

  • Construction of example-based, incrementally learnable modeling methods for nonlinear systems and their applications

    2004.04
    -
    2006.03
     

    Grant-in-Aid for Young Scientists(B)  Co-investigator

  • Research on environment-embedded "eye cameras"

    2004.04
    -
    2006.03
     

    Grant-in-Aid for Scientific Research(C)  Co-investigator

  • Research on understanding sound environments using active audition for humanoids

    2003.04
    -
    2007.03
     

    Grant-in-Aid for Scientific Research(A)  Co-investigator

  • Research on automatic restoration of relics by viewpoint-adaptive stereoscopic presentation of fragments

    2003.04
    -
    2006.03
     

    Grant-in-Aid for Scientific Research(C)  Co-investigator

  • Research on environment recognition using a robot's body

    2000.04
    -
    2003.03
     

    Grant-in-Aid for Scientific Research(A)  Principal investigator

  • Research on distributed cooperative image understanding systems

    1996.04
    -
    1999.03
     

    Grant-in-Aid for Scientific Research(A)  Co-investigator

  • Research on wide-area environment monitoring using fixed-viewpoint (optical-center-invariant) cameras

    1996.04
    -
    1998.03
     

    Grant-in-Aid for Scientific Research(C)  Principal investigator

  • Development of a multifunctional, high-precision camera system based on integration of multiple images

    1995.04
    -
    1998.03
     

    Grant-in-Aid for Scientific Research(A)  Co-investigator


Public Funding (other government agencies of their auxiliary organs, local governments, etc.)

  • Development of high-precision digitization software for large tangible and intangible cultural properties

    2004.04
    -
    2007.03
     

    Co-investigator

  • Development of a real-time range finder for measuring facial expressions

    2002.04
    -
    2003.03
     

    Principal investigator

  • Automation methods for turning cultural heritage into advanced media content

    1999.04
    -
    2005.03
     

    Co-investigator

  • Dynamic 3D scene understanding by cooperative distributed vision

    1996.04
    -
    2001.03
     

    Co-investigator

Joint or Subcontracted Research with foundation, company, etc.

  • Research on image processing methods for optimal person detection and tracking

    2023.04
    -
    2024.03
     

    Academic instruction  Principal investigator

  • Development of growth diagnosis methods for cherry tomatoes by image analysis

    2023.04
    -
    2024.02
     

    Joint research  Principal investigator

  • Research on analysis of store shelf images and accuracy improvement

    2022.05
    -
    2024.03
     

    Joint research  Principal investigator

  • Development of new solutions for optical inspection equipment using deep learning

    2022.04
    -
    2024.03
     

    Joint research  Principal investigator

  • Research on image processing methods for optimal person detection and tracking

    2022.04
    -
    2023.03
     

    Academic instruction  Principal investigator

  • Development of growth diagnosis methods for cherry tomatoes by image analysis

    2022.04
    -
    2023.02
     

    Joint research  Principal investigator

  • Research on analysis of store shelf images and accuracy improvement

    2021.05
    -
    2022.03
     

    Joint research  Principal investigator

  • Research on image processing methods for optimal person detection and tracking

    2021.04
    -
    2022.03
     

    Academic instruction  Principal investigator

  • Advanced maintenance and management of facilities by introducing ICT and AI into the thermal recovery process for industrial waste

    2019.04
    -
    2022.03
     

    Contracted research  Co-investigator


Instructor for open lecture, peer review for academic journal, media appearances, etc.

  • Editorial and review committee member

    2018.04
    -
    Now

    Reviewer for ECCV

    Editorial board member, reviewer, or judge for academic journals

  • Editorial and review committee member

    2017.04
    -
    Now

    Reviewer for ECCV

    Editorial board member, reviewer, or judge for academic journals

  • Editorial and review committee member

    2017.04
    -
    Now

    Reviewer for CVPR2017

    Editorial board member, reviewer, or judge for academic journals

  • Editorial and review committee member

    2016.04
    -
    Now

    Reviewer for ECCV

    Editorial board member, reviewer, or judge for academic journals

  • 3 reviews

    2015.04
    -
    Now

    ICCV2015

    Editorial board member, reviewer, or judge for academic journals

  • Editorial and review committee member

    2015.04
    -
    Now

    Reviewer for CVPR

    Editorial board member, reviewer, or judge for academic journals

  • 1 review

    2015.04
    -
    Now

    SIGGRAPH Asia 2015

    Editorial board member, reviewer, or judge for academic journals

  • Editorial and review committee member

    2015.04
    -
    Now

    Reviewer for the IEICE Transactions

    Editorial board member, reviewer, or judge for academic journals; term: 1 year

  • FY2011 Nara Institute of Science and Technology seminar lecture: "Dissimilarity, Similarity, Image Retrieval, and Anomaly Detection"

    2011.04

    Graduate School of Information Science, Nara Institute of Science and Technology

    Planning and lecturing for open lectures and public talks

    First, using concrete examples, the lecture showed that in the problem of quickly searching a large data set for items similar to a query key, how similarity between data is defined is an extremely important issue. In particular, when the query key vector is degraded (has missing elements), the retrieval result changes greatly depending on the data normalization method and the similarity measure used, and the reasons for this were discussed. Next, it was shown that Similarity Based Modeling, the main tool in example-based anomaly detection, becomes essentially the same as the Gaussian Process, a nonlinear regression method, if the similarity is regarded as a kernel function. Date: October 31

  • Open campus: department introduction and commentary on past second-stage entrance exam questions

    2011.04

    Other

    Trial lectures and outreach classes for elementary, junior high, and high school students

    Explained past "Information" questions from the second-stage entrance examination for high school students. Date: July 25

  • Kaichi Open Seminar: Introduction to Information Science for High School Students

    2011.04

    Other

    Trial lectures and outreach classes for elementary, junior high, and high school students

    Explained the origins of the computer and what computation with computers makes possible, building on what high school students are currently studying. Date: July 16

  • Media appearance

    2010.11

    Living Wakayama

    Newspaper, TV, and radio coverage of research results

    "Low-cost Wi-Fi system technology developed by re-using broken PCs"
    "World's first! Wi-Fi treasure hunt in Nanki-Tanabe" held
    November 6, 2010

  • Media appearance

    2010.11

    Mainichi Shimbun

    Newspaper, TV, and radio coverage of research results

    "Treasure hunt event: with discarded PCs, Wakayama University lab develops system -- today in Tanabe / Wakayama"
    Announcement of "Wi-Fi Treasure Hunt in Nanki-Tanabe," held on November 6, 2010, in which participants toured the town solving quizzes received on mobile devices over a wireless LAN system built from discarded PCs by students of the Faculty of Systems Engineering, Wakayama University.
    Mainichi Shimbun, November 6, 2010, regional edition

  • Media appearance

    2010.11

    Kii Minpo AGARA

    Newspaper, TV, and radio coverage of research results

    "Touring the town, mobile device in hand: 'treasure hunt' in Tanabe's shopping streets"
    On the 6th, the event "Wi-Fi Treasure Hunt in Nanki-Tanabe," in which participants hunted for treasure using mobile devices such as the iPad and iPhone, was held in the shopping streets around JR Kii-Tanabe Station. Devices were lent out on the day, and about 100 people, including families and university students from inside and outside the prefecture, took part.
    November 11, 2010

  • Media appearance

    2010.10

    japan.internet.com

    Newspaper, TV, and radio coverage of research results

    "[This week's Web Mimizuku] At Wakayama University, broken PCs come back to life as a WiFi system!"
    The Wada Laboratory, Faculty of Systems Engineering, Wakayama University, is developing low-cost WiFi system technology that re-uses the WiFi (wireless LAN) functions of PCs discarded because of HDD failures and the like to distribute information to smartphones such as the iPhone and to communication-capable portable game consoles such as the Nintendo DS.
    October 23, 2010

  • Media appearance

    2010.10

    Nikkan Kogyo Shimbun

    Newspaper, TV, and radio coverage of research results

    "Wakayama University develops WiFi system technology that re-uses PCs"
    Wakayama University
    Announcement of a wireless public-information system using broken PCs and its applications
    October 20, 2010

  • Media appearance

    2010.10

    Mycom Journal

    Newspaper, TV, and radio coverage of research results

    "Wakayama University develops low-cost Wi-Fi system technology using broken PCs"
    October 20, 2010

  • Media appearance

    2010.10

    Kii Minpo AGARA

    Newspaper, TV, and radio coverage of research results

    Treasure hunt with the latest devices (November 6): Wakayama University students revitalize Tanabe
    A project team of Professor Toshikazu Wada of the Faculty of Systems Engineering, Wakayama University, together with undergraduate and graduate students, plans and runs the event with the Tanabe City federation of shopping street promotion associations and "Nanki Mirai," a third-sector town-development company for the greater Tanabe area, aiming to revitalize the town.

    October 29, 2010

  • Media appearance

    2010.10

    Yahoo! News

    Newspaper, TV, and radio coverage of research results

    "Wakayama University develops a low-cost WiFi information distribution system that can re-use even broken PCs"
    On October 20, the Wada Laboratory, Faculty of Systems Engineering, Wakayama University, announced that it has developed a system that re-uses the Wi-Fi (wireless LAN) functions of PCs discarded because of hard-disk failures and the like to distribute original information to smartphones and portable game consoles.
    October 20, 2010

  • Media appearance

    2010.09

    Kii Minpo AGARA

    Newspaper, TV, and radio coverage of research results

    "'Turn Tanabe into an amusement park': university students visit shopping streets to plan a rally"
    On the 7th, eleven undergraduate and graduate students of Wakayama University visited the shopping streets of central Tanabe and discussed plans for a "Wi-Fi rally" in which participants tour the shops while answering quizzes on smartphones and similar devices.
    September 8, 2010

  • IPSJ 50th Anniversary (72nd) National Convention satellite symposium "Information Reliability and Data Quality in the Age of the Information Explosion" (talk: Credibility of Image and Video Information)

    2010.03

    Information Processing Society of Japan

    Planning and lecturing for open lectures and public talks

    Digitized images and videos can be tampered with relatively easily; even an image taken with a digital camera can be called "tampered" in the sense that it departs from photographic realism. This lecture introduced methods for tampering with digital images and video, and methods for detecting such tampering. Date: 2010.3

  • Photonics Technology Forum, 4th Workshop on Optical Information Technology, invited lecture (talk: Discrete Optimization and Its Applications to Image Analysis)

    2010.02

    Workshop on Optical Information Technology

    Planning and lecturing for open lectures and public talks

    Outlined Graph Cuts and Belief Propagation, discrete optimization methods that have come into wide use in recent years, and showed that they can be applied to a variety of tasks such as segmentation and noise removal. Also described the speedup achieved by our binary-search-based Belief Propagation, its acceleration on an FPGA, and the results. Date: 2010.2
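As a toy counterpart of the discrete optimization methods mentioned in the abstract above, the following sketch (my own illustration with made-up data, not the talk's implementation) performs exact MAP denoising of a 1-D signal over a chain MRF by dynamic programming; this is what max-product Belief Propagation computes exactly on chain graphs, while Graph Cuts handle the 2-D image case.

```python
# Exact MAP labeling of a chain MRF by dynamic programming: minimize
# sum_t (obs[t] - l_t)^2 + lam * sum_t |l_t - l_{t-1}| over label sequences.
def denoise_chain(obs, labels, lam=1.0):
    n = len(obs)
    cost = [[(obs[0] - l) ** 2 for l in labels]]  # data term at node 0
    back = []
    for t in range(1, n):
        row, ptr = [], []
        for l in labels:
            # best predecessor under the smoothness term lam * |l - l_prev|
            best, arg = min(
                (cost[t - 1][k] + lam * abs(l - lp), k)
                for k, lp in enumerate(labels)
            )
            row.append(best + (obs[t] - l) ** 2)  # add the data term
            ptr.append(arg)
        cost.append(row)
        back.append(ptr)
    # backtrack the minimum-cost labeling
    j = min(range(len(labels)), key=lambda k: cost[-1][k])
    out = [labels[j]]
    for ptr in reversed(back):
        j = ptr[j]
        out.append(labels[j])
    return out[::-1]

noisy = [0.0, 0.1, 0.9, 1.0, 1.1, 0.2]
restored = denoise_chain(noisy, labels=[0.0, 1.0], lam=0.3)
# -> [0.0, 0.0, 1.0, 1.0, 1.0, 0.0]
```

The smoothness weight `lam` trades fidelity to the noisy observations against piecewise-constant output, exactly as in the image-denoising energies the talk discusses.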

  • Technical lecture (talk: Nearest Neighbor Search and Its Applications to Image Analysis)

    2010.01

    Toyota Central R&D Labs., Inc.

    Planning and lecturing for open lectures and public talks

    Nearest neighbor search is used extensively in tasks that detect, track, and classify objects in images. The talk outlined nearest neighbor search algorithms and their applications to detection, tracking, and classification, and also explained recent results, such as how image retrieval accuracy changes greatly depending on the distance computation. Date: 2010.1

  • IPSJ SIG-CVIM tutorial lecture (talk: Theory and Algorithms of Nearest Neighbor Search)

    2009.11

    IPSJ SIG-CVIM

    Planning and lecturing for open lectures and public talks

    In nearest neighbor search research, a phenomenon called the "curse of dimensionality" has made the problem difficult. Although not rigorously defined mathematically, it is the phenomenon that once the dimensionality of the stored data distribution exceeds a certain value, any algorithm becomes equivalent to exhaustive search. It is no exaggeration to say that the history of nearest neighbor search research is the history of attempts to solve this problem. Despite numerous studies, no one has reported breaking the curse, and in recent years research on "approximate nearest neighbor" search, which sidesteps the phenomenon, has flourished. This tutorial covered fast search methods, both with and without approximation. Date: 2009.11
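The exhaustive search that the "curse of dimensionality" makes hard to beat can be stated in a few lines. This is a minimal baseline sketch of my own (not one of the tutorial's algorithms): in high dimensions, every known exact method tends to degenerate toward comparing the query against all stored points like this.

```python
# O(n) exhaustive (linear-scan) nearest neighbor search: the baseline that
# any exact method degenerates to when the curse of dimensionality bites.
def nearest(points, query):
    best_i, best_d2 = -1, float("inf")
    for i, p in enumerate(points):
        d2 = sum((a - b) ** 2 for a, b in zip(p, query))  # squared Euclidean
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i, best_d2

pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
idx, d2 = nearest(pts, (0.9, 1.2))  # point (1.0, 1.0) is closest
```

Approximate methods trade this exactness for sublinear query time by examining only a few promising cells or hash buckets instead of every point.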

  • SSII: 15th Symposium on Sensing via Image Information, organized session lecturer (talk: Object Representation for Visual Surveillance)

    2009.06

    Steering committee of the 15th Symposium on Sensing via Image Information

    Planning and lecturing for open lectures and public talks

    Described model-based and example-based object representation methods and their practice, and showed that for the task of detecting pedestrians in images, model-based representation can outperform the example-based representation that has recently become the focus of intensive research. Through this, argued that research methodology that merely chases trends should be reconsidered in favor of purpose-oriented methodology. Date: 2009.6

  • ICCV Area Chair Meeting (International Workshop on Recent Trend in Computer Vision), venue: Kyoto

    2009.06

    IWRTCV executive committee

    International exchange program

    Served as the organizer responsible for the workshop: procedures for inviting researchers from abroad, publication of the proceedings, and everything else. Partner countries: USA, UK, Switzerland, China, France, Canada, Australia


Committee member history in academic associations, government agencies, municipalities, etc.

  • Technical Committee on Pattern Recognition and Media Understanding (PRMU), committee member

    2020.06.04
    -
    2022.06.08
     

    The Institute of Electronics, Information and Communication Engineers (IEICE)

    Public committee member for academic societies, government, municipalities, etc.

    Engaged in planning and running the PRMU technical meetings and the coordination this requires.

  • Society Transactions Editorial Committee, reviewer

    2020.06.04
    -
    2021.06.02
     

    The Institute of Electronics, Information and Communication Engineers (IEICE)

    Public committee member for academic societies, government, municipalities, etc.

    Reviewed papers submitted to the Society transactions.

  • Committee member

    2020.06
    -
    2022.06
     

    IEICE Technical Group on PRMU

    Public committee member for academic societies, government, municipalities, etc.

    Member of the technical committee that runs PRMU, the central organization of image analysis and recognition research in Japan. Term: 2020.6-2022.6

  • Committee member

    2018.06
    -
    2020.06
     

    IEICE Technical Group on PRMU

    Public committee member for academic societies, government, municipalities, etc.

    Member of the technical committee that runs PRMU, the central organization of image analysis and recognition research in Japan. Term: 2018.6-2020.6

  • Committee member (adviser)

    2017.09
    -
    2018.03
     

    FY2017 SME management support subsidy project (Strategic Foundational Technology Improvement Support Program)

    Committee member at national or local governments, other universities, research institutes, etc.

    Committee member (adviser). Term: September 2017 - March 2018

  • Committee member

    2013.08
    -
    2014.07
     

    Japan Society for the Promotion of Science: expert member of the Research Fellowship Screening Committee and document reviewer for the International Program Committee

    Committee member at national or local governments, other universities, research institutes, etc.

    Term: 2013/08/01-2014/07/31

  • Committee member

    2013.08
    -
    2013.12
     

    Selection committee for the designated manager of Wakayama municipal housing

    Committee member at national or local governments, other universities, research institutes, etc.

    Term: 2013/08/20-2013/12/31

  • Committee member

    2012.12
    -
    2013.01
     

    Adviser, "Producer Training Course for Commercializing Research"

    Committee member at national or local governments, other universities, research institutes, etc.

    Term: 2012/12/1-2013/1/31

  • Area Chair

    2011.04
    -
    2011.06
     

    Program Committee of International Conference on Computer Vision (ICCV) 2011

    Public committee member for academic societies, government, municipalities, etc.

    Aggregated the review results for ICCV2011, discussed and decided paper acceptance, and wrote reports; this year, aggregated 116 reviews written for 39 papers. Term: 2011.4-2011.6

  • Technical Committee on Pattern Recognition and Media Understanding, committee member

    2010.05
    -
    2011.03
     

    IEICE Technical Group on PRMU

    Public committee member for academic societies, government, municipalities, etc.

    Member of the technical committee that runs PRMU, the central organization of image analysis and recognition research in Japan. Term: 2010.5-2011.3

  • Steering committee on Computer Vision and Image Media research

    2010.04
    -
    2015.03
     

    IPSJ SIG-CVIM

    Public committee member for academic societies, government, municipalities, etc.

    Steering committee member of SIG-CVIM, the central organization of computer vision research in Japan. Term: 2010.4-2015.3

  • Area Chair

    2010.03
    -
    2010.11
     

    The Asian Conference on Computer Vision (ACCV)

    Public committee member for academic societies, government, municipalities, etc.

    Member of the committee that decides paper acceptance to set the ACCV2010 program. Term: 2010.3-2010.11

  • Steering committee member

    2009.04
    -
    2010.03
     

    IPSJ SIG-CVIM

    Public committee member for academic societies, government, municipalities, etc.

    Steering committee member of SIG-CVIM, the central organization of computer vision research in Japan. Term: 2009.4-2010.3

  • Committee member

    2009.04
    -
    2010.03
     

    IEICE Technical Group on PRMU

    Public committee member for academic societies, government, municipalities, etc.

    Member of the technical committee that runs PRMU, the central organization of image analysis and recognition research in Japan. Term: 2009.4-2010.3

  • Area Chair

    2009.03
    -
    2009.09
     

    International Conference on Computer Vision (ICCV2009)

    Public committee member for academic societies, government, municipalities, etc.

    Member of the committee that decides paper acceptance to set the ICCV program. Because this ICCV was held in Japan, 3 of the 32 area chairs were from Japan. Term: 2009.3-2009.9

  • Program Chair

    2008.03
    -
    2009.01
     

    Pacific-Rim Symposium on Image and Video Technology (PSIVT2009)

    Public committee member for academic societies, government, municipalities, etc.

    Term: 2008.3-2009.1

  • Area Chair

    2006.03
    -
    2007.10
     

    International Conference on Computer Vision (ICCV2007)

    Committee member at national or local governments, other universities, research institutes, etc.

    Area Chair. Term: 2006.3-2007.10

  • Program chair

    2006.03
    -
    2007.08
     

    Meeting on Image Recognition and Understanding (MIRU2007)

    Committee member at national or local governments, other universities, research institutes, etc.

    Program chair. Term: 2006.3-2007.8

  • Committee member

    2003.03
    -
    2009.03
     

    IEICE Technical Group on PRMU

    Public committee member for academic societies, government, municipalities, etc.

    Term: 2003.3-2009.3

  • Steering committee member, secretary

    2003.03
    -
    2009.03
     

    IPSJ SIG-CVIM

    Committee member at national or local governments, other universities, research institutes, etc.

    Steering committee member and secretary. Term: 2003.3-2009.3

  • Transactions editorial committee member

    2003.03
    -
    2009.03
     

    IPSJ SIG-CVIM

    Committee member at national or local governments, other universities, research institutes, etc.

    Transactions editorial committee member. Term: 2003.3-2009.3

  • Editorial committee member

    2003.03
    -
    2006.03
     

    The Japanese Society for Artificial Intelligence

    Committee member at national or local governments, other universities, research institutes, etc.

    Editorial committee member. Term: 2003.3-2006.3


Other Social Activities

  • Innovation Japan 2019: University Trade Fair & Business Matching

    2019.04
    -
    2020.03

    Other

    Joint research, new technology creation, consulting, etc. with industry and government agencies

    Exhibited and explained "Speeding up embedded DNN systems by compression." Organizers: New Energy and Industrial Technology Development Organization (NEDO); Japan Science and Technology Agency (JST)