Updated on 2024/08/28


 
OGAWARA Koichi
 
Name of department
Faculty of Systems Engineering, Mechatronics
Job title
Associate Professor
Concurrent post
Robotics Major (Associate Professor)

Degree

  • Ph.D.   2002

Association Memberships

  • The Society of Instrument and Control Engineers (SICE)

  • The Japan Society of Mechanical Engineers (JSME)

  • The Robotics Society of Japan (RSJ)

  • Information Processing Society of Japan (IPSJ)

  • The Institute of Electronics, Information and Communication Engineers (IEICE)

  • IEEE


Research Areas

  • Informatics / Robotics and intelligent systems

Classes (including Experimental Classes, Seminars, Graduation Thesis Guidance, Graduation Research, and Topical Research)

  • 2023   Exercises in Majors A   Specialized Subjects

  • 2023   Fundamentals of Robotics   Liberal Arts and Sciences Subjects

  • 2023   Graduation Research   Specialized Subjects

  • 2023   Graduation Research   Specialized Subjects

  • 2023   Practice for Researches in Mechatronics   Specialized Subjects

  • 2023   Electronic CircuitⅠ   Specialized Subjects

  • 2023   Fourier Analysis   Specialized Subjects

  • 2022   Exercises in Majors A   Specialized Subjects

  • 2022   Electronic CircuitⅠ   Specialized Subjects

  • 2022   Graduation Research   Specialized Subjects

  • 2022   Graduation Research   Specialized Subjects

  • 2022   Graduation Research   Specialized Subjects

  • 2022   Practice for Researches in Mechatronics   Specialized Subjects

  • 2022   Fourier Analysis   Specialized Subjects

  • 2022   Introductory Seminar in Systems Engineering   Specialized Subjects

  • 2021   Exercises in Majors A   Specialized Subjects

  • 2021   Graduation Research   Specialized Subjects

  • 2021   Graduation Research   Specialized Subjects

  • 2021   Practice for Researches in Mechatronics   Specialized Subjects

  • 2021   Fourier Analysis   Specialized Subjects

  • 2021   Electronic CircuitⅠ   Specialized Subjects

  • 2021   Graduation Research   Specialized Subjects

  • 2021   Graduation Research   Specialized Subjects

  • 2021   Fundamentals of Robotics   Liberal Arts and Sciences Subjects

  • 2020   Exercises in Majors A   Specialized Subjects

  • 2020   Graduation Research   Specialized Subjects

  • 2020   Graduation Research   Specialized Subjects

  • 2020   Introductory Seminar in Systems Engineering   Specialized Subjects

  • 2020   Fourier Analysis   Specialized Subjects

  • 2020   Practice for Researches in Mechatronics   Specialized Subjects

  • 2020   Electronic CircuitⅠ   Specialized Subjects

  • 2019   NA   Specialized Subjects

  • 2019   Practice for Researches in Mechatronics   Specialized Subjects

  • 2019   Voluntary Study on Systems Engineering Ⅲ   Specialized Subjects

  • 2019   Voluntary Study on Systems Engineering Ⅰ   Specialized Subjects

  • 2019   Fourier Analysis   Specialized Subjects

  • 2019   Robot Vision   Specialized Subjects

  • 2019   Electronic CircuitⅠ   Specialized Subjects

  • 2018   NA   Specialized Subjects

  • 2018   Voluntary Study on Systems Engineering Ⅵ   Specialized Subjects

  • 2018   Voluntary Study on Systems Engineering Ⅴ   Specialized Subjects

  • 2018   Voluntary Study on Systems Engineering Ⅳ   Specialized Subjects

  • 2018   Voluntary Study on Systems Engineering Ⅲ   Specialized Subjects

  • 2018   Voluntary Study on Systems Engineering Ⅱ   Specialized Subjects

  • 2018   Voluntary Study on Systems Engineering Ⅰ   Specialized Subjects

  • 2018   Fourier Analysis   Specialized Subjects

  • 2018   Practice for Researches in Mechatronics   Specialized Subjects

  • 2018   Robot Vision   Specialized Subjects

  • 2018   Electronic CircuitⅠ   Specialized Subjects

  • 2017   NA   Specialized Subjects

  • 2017   Voluntary Study on Systems Engineering Ⅳ   Specialized Subjects

  • 2017   Voluntary Study on Systems Engineering Ⅴ   Specialized Subjects

  • 2017   Voluntary Study on Systems Engineering Ⅲ   Specialized Subjects

  • 2017   Graduation Research   Specialized Subjects

  • 2017   Robot Vision   Specialized Subjects

  • 2017   Practice for Researches in Mechatronics   Specialized Subjects

  • 2017   NA   Specialized Subjects

  • 2017   Fourier Analysis   Specialized Subjects

  • 2017   Introductory Seminar in Systems Engineering   Specialized Subjects

  • 2017   Electronic CircuitⅠ   Specialized Subjects

  • 2016   Voluntary Study on Systems Engineering Ⅵ   Specialized Subjects

  • 2016   Voluntary Study on Systems Engineering Ⅳ   Specialized Subjects

  • 2016   Voluntary Study on Systems Engineering Ⅱ   Specialized Subjects

  • 2016   Voluntary Study on Systems Engineering Ⅲ   Specialized Subjects

  • 2016   Voluntary Study on Systems Engineering Ⅲ   Specialized Subjects

  • 2016   Voluntary Study on Systems Engineering Ⅰ   Specialized Subjects

  • 2016   Graduation Research   Specialized Subjects

  • 2016   NA   Specialized Subjects

  • 2016   Experiments C for Opto-mechatronics   Specialized Subjects

  • 2016   Applied Seminar   Specialized Subjects

  • 2016   Practice for Researches   Specialized Subjects

  • 2016   Fourier Analysis   Specialized Subjects

  • 2016   Introductory Seminar in Systems Engineering   Specialized Subjects

  • 2016   Electronic CircuitⅠ   Specialized Subjects

  • 2015   NA   Specialized Subjects

  • 2015   NA   Specialized Subjects

  • 2015   Voluntary Study on Systems Engineering Ⅰ   Specialized Subjects

  • 2015   Graduation Research   Specialized Subjects

  • 2015   NA   Specialized Subjects

  • 2015   Electronic CircuitⅠ   Specialized Subjects

  • 2015   Experiments C for Opto-mechatronics   Specialized Subjects

  • 2015   Applied Seminar   Specialized Subjects

  • 2015   Practice in Applied Analysis   Specialized Subjects

  • 2014   Practice in Applied Analysis   Specialized Subjects

  • 2014   Applied Seminar   Specialized Subjects

  • 2014   Practice for Researches   Specialized Subjects

  • 2014   Experiments C for Opto-mechatronics   Specialized Subjects

  • 2014   Electronic CircuitⅠ   Specialized Subjects

  • 2014   Introduction to Opto-Mechatronics   Specialized Subjects

  • 2014   Introductory Seminar   Liberal Arts and Sciences Subjects

  • 2013   Practice in Applied Analysis   Specialized Subjects

  • 2013   Applied Seminar   Specialized Subjects

  • 2013   Practice for Researches   Specialized Subjects

  • 2013   Experiments C for Opto-mechatronics   Specialized Subjects

  • 2013   Electronic CircuitⅠ   Specialized Subjects

  • 2013   Introduction to Opto-Mechatronics   Specialized Subjects

  • 2013   Mechatronics Used in Familiar Products   Liberal Arts and Sciences Subjects

  • 2012   Electronic CircuitⅠ   Specialized Subjects

  • 2012   Mechatronics Used in Familiar Products   Liberal Arts and Sciences Subjects

  • 2012   Introduction to Opto-Mechatronics   Specialized Subjects

  • 2012   Experiments C for Opto-mechatronics   Specialized Subjects

  • 2012   Applied Seminar   Specialized Subjects

  • 2012   Introductory Seminar   Liberal Arts and Sciences Subjects

  • 2012   Practice in Applied Analysis   Specialized Subjects

  • 2012   Practice for Researches   Specialized Subjects

  • 2011   Applied Seminar   Specialized Subjects

  • 2011   Practice for Researches   Specialized Subjects

  • 2011   Practice in Applied Analysis   Specialized Subjects

  • 2011   Electronic CircuitⅠ   Specialized Subjects

  • 2011   Dynamics in Engineering   Specialized Subjects

  • 2011   Introduction to Opto-Mechatronics   Specialized Subjects

  • 2011   Mechatronics Used in Familiar Products   Liberal Arts and Sciences Subjects

  • 2011   Practice in Applied Analysis   Specialized Subjects


Satellite Courses

  • 2012   Mechatronics Used in Familiar Products   Liberal Arts and Sciences Subjects

Independent study

  • 2023   Design and fabrication of a single-axis contra-rotating rotor

  • 2023   Learning microcontroller control and research on autonomous driving

  • 2021   Learning the history of robots, their component parts, and the basics of programming

  • 2019   Fabrication of a rescue robot (first semester)

  • 2018   Fabrication of a rescue robot (second semester)

  • 2018   Fabrication of a rescue robot (first semester)

  • 2017   Holding a training course for rescue robot fabrication (first semester)

  • 2017   Fabrication of a rescue robot (second semester)

  • 2016   Fabrication of a rescue robot (first semester)

  • 2016   Fabrication of "Damiyan" (second semester)

  • 2016   Design and fabrication of a feedback control system for a DC motor (second semester)

  • 2015   Fabrication of a rescue robot

  • 2014   Fabrication of an educational robot

  • 2012   Control of a biped walking robot

  • 2012   Control of a dust-removal robot


Classes

  • 2023   Advanced Course on Systems Engineering   Master's Course

  • 2023   Systems Engineering SeminarⅠA   Master's Course

  • 2023   Systems Engineering SeminarⅠB   Master's Course

  • 2023   Systems Engineering SeminarⅡA   Master's Course

  • 2023   Systems Engineering SeminarⅡB   Master's Course

  • 2023   Symbiotic Robotics   Master's Course

  • 2023   Systems Engineering Project SeminarⅠA   Master's Course

  • 2023   Systems Engineering Project SeminarⅠB   Master's Course

  • 2023   Systems Engineering Project SeminarⅡA   Master's Course

  • 2023   Systems Engineering Project SeminarⅡB   Master's Course

  • 2023   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2023   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2023   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2023   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2023   Systems Engineering Advanced Research   Doctoral Course

  • 2023   Systems Engineering Advanced Research   Doctoral Course

  • 2023   Systems Engineering Global Seminar Ⅰ   Doctoral Course

  • 2023   Systems Engineering Global Seminar Ⅰ   Doctoral Course

  • 2023   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2023   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2022   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2022   Systems Engineering Global Seminar Ⅰ   Doctoral Course

  • 2022   Systems Engineering Advanced Research   Doctoral Course

  • 2022   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2022   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2022   Systems Engineering Project SeminarⅡB   Master's Course

  • 2022   Systems Engineering Project SeminarⅡA   Master's Course

  • 2022   Systems Engineering Project SeminarⅠB   Master's Course

  • 2022   Systems Engineering Project SeminarⅠA   Master's Course

  • 2022   Symbiotic Robotics   Master's Course

  • 2022   Systems Engineering SeminarⅡB   Master's Course

  • 2022   Systems Engineering SeminarⅡA   Master's Course

  • 2022   Systems Engineering SeminarⅠB   Master's Course

  • 2022   Systems Engineering SeminarⅠA   Master's Course

  • 2021   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2021   Systems Engineering Global Seminar Ⅰ   Doctoral Course

  • 2021   Systems Engineering Advanced Research   Doctoral Course

  • 2021   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2021   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2021   Systems Engineering Project SeminarⅡB   Master's Course

  • 2021   Systems Engineering Project SeminarⅡA   Master's Course

  • 2021   Systems Engineering Project SeminarⅠB   Master's Course

  • 2021   Systems Engineering Project SeminarⅠA   Master's Course

  • 2021   Symbiotic Robotics   Master's Course

  • 2021   Systems Engineering SeminarⅡB   Master's Course

  • 2021   Systems Engineering SeminarⅡA   Master's Course

  • 2021   Systems Engineering SeminarⅠB   Master's Course

  • 2021   Systems Engineering SeminarⅠA   Master's Course

  • 2021   Systems Engineering Advanced Research   Doctoral Course

  • 2020   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2020   Systems Engineering Global Seminar Ⅰ   Doctoral Course

  • 2020   Systems Engineering Advanced Research   Doctoral Course

  • 2020   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2020   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2020   Systems Engineering Project SeminarⅡB   Master's Course

  • 2020   Systems Engineering Project SeminarⅡA   Master's Course

  • 2020   Systems Engineering Project SeminarⅠB   Master's Course

  • 2020   Systems Engineering Project SeminarⅠA   Master's Course

  • 2020   Symbiotic Robotics   Master's Course

  • 2020   Systems Engineering SeminarⅡB   Master's Course

  • 2020   Systems Engineering SeminarⅡA   Master's Course

  • 2020   Systems Engineering SeminarⅠB   Master's Course

  • 2020   Systems Engineering SeminarⅠA   Master's Course

  • 2019   Symbiotic Robotics   Master's Course

  • 2019   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2019   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2019   Systems Engineering Advanced Research   Doctoral Course

  • 2019   Systems Engineering Advanced Research   Doctoral Course

  • 2019   Systems Engineering SeminarⅡB   Master's Course

  • 2019   Systems Engineering SeminarⅡA   Master's Course

  • 2019   Systems Engineering SeminarⅠB   Master's Course

  • 2019   Systems Engineering SeminarⅠA   Master's Course

  • 2019   Systems Engineering Project SeminarⅡB   Master's Course

  • 2019   Systems Engineering Project SeminarⅡA   Master's Course

  • 2019   Systems Engineering Project SeminarⅠB   Master's Course

  • 2019   Systems Engineering Project SeminarⅠA   Master's Course

  • 2018   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2018   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2018   Systems Engineering Advanced Research   Doctoral Course

  • 2018   Systems Engineering Advanced Research   Doctoral Course

  • 2018   Systems Engineering Project SeminarⅡB   Master's Course

  • 2018   Systems Engineering Project SeminarⅡA   Master's Course

  • 2018   Systems Engineering Project SeminarⅠB   Master's Course

  • 2018   Systems Engineering Project SeminarⅠA   Master's Course

  • 2018   Systems Engineering SeminarⅡB   Master's Course

  • 2018   Systems Engineering SeminarⅡA   Master's Course

  • 2018   Systems Engineering SeminarⅠB   Master's Course

  • 2018   Systems Engineering SeminarⅠA   Master's Course

  • 2018   Symbiotic Robotics   Master's Course

  • 2017   Systems Engineering Global Seminar Ⅱ   Doctoral Course

  • 2017   Systems Engineering Advanced Research   Doctoral Course

  • 2017   Systems Engineering Advanced Research   Doctoral Course

  • 2017   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2017   Systems Engineering Advanced Seminar Ⅱ   Doctoral Course

  • 2017   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2017   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2017   Systems Engineering Project SeminarⅡB   Master's Course

  • 2017   Systems Engineering Project SeminarⅡA   Master's Course

  • 2017   Systems Engineering Project SeminarⅠB   Master's Course

  • 2017   Systems Engineering Project SeminarⅠA   Master's Course

  • 2017   Symbiotic Robotics   Master's Course

  • 2017   Systems Engineering SeminarⅡB   Master's Course

  • 2017   Systems Engineering SeminarⅡA   Master's Course

  • 2017   Systems Engineering SeminarⅠB   Master's Course

  • 2017   Systems Engineering SeminarⅠA   Master's Course

  • 2016   Systems Engineering Global Seminar Ⅰ   Doctoral Course

  • 2016   Systems Engineering Global Seminar Ⅰ   Doctoral Course

  • 2016   Systems Engineering Advanced Research   Doctoral Course

  • 2016   Systems Engineering Advanced Research   Doctoral Course

  • 2016   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2016   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2016   Systems Engineering Project SeminarⅡB   Master's Course

  • 2016   Systems Engineering Project SeminarⅡA   Master's Course

  • 2016   Systems Engineering Project SeminarⅠB   Master's Course

  • 2016   Systems Engineering Project SeminarⅠA   Master's Course

  • 2016   Systems Engineering SeminarⅡB   Master's Course

  • 2016   Systems Engineering SeminarⅡA   Master's Course

  • 2016   Systems Engineering SeminarⅠB   Master's Course

  • 2016   Systems Engineering SeminarⅠA   Master's Course

  • 2016   Symbiotic Robotics   Master's Course

  • 2015   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2015   Systems Engineering Advanced Research   Doctoral Course

  • 2015   Systems Engineering SeminarⅡA   Master's Course

  • 2015   Systems Engineering SeminarⅠA   Master's Course

  • 2015   Systems Engineering Project SeminarⅡA   Master's Course

  • 2015   Systems Engineering Project SeminarⅠA   Master's Course

  • 2015   Symbiotic Robotics   Master's Course

  • 2015   Systems Engineering Advanced Seminar Ⅰ   Doctoral Course

  • 2015   Systems Engineering Advanced Research   Doctoral Course

  • 2015   Systems Engineering SeminarⅡB   Master's Course

  • 2015   Systems Engineering SeminarⅠB   Master's Course

  • 2015   Systems Engineering Project SeminarⅡB   Master's Course

  • 2015   Systems Engineering Project SeminarⅠB   Master's Course

  • 2014   Systems Engineering Advanced Research  

  • 2014   Systems Engineering Advanced Research  

  • 2014   Systems Engineering Advanced Seminar Ⅱ  

  • 2014   Systems Engineering Advanced Seminar Ⅱ  

  • 2014   Systems Engineering Advanced Seminar Ⅰ  

  • 2014   Systems Engineering Advanced Seminar Ⅰ  

  • 2014   Systems Engineering Project SeminarⅡB  

  • 2014   Systems Engineering Project SeminarⅡA  

  • 2014   Systems Engineering Project SeminarⅠB  

  • 2014   Systems Engineering Project SeminarⅠA  

  • 2014   Symbiotic Robotics  

  • 2014   Systems Engineering SeminarⅡB  

  • 2014   Systems Engineering SeminarⅡA  

  • 2014   Systems Engineering SeminarⅠB  

  • 2014   Systems Engineering SeminarⅠA  

  • 2013   Systems Engineering Advanced Research  

  • 2013   Systems Engineering Advanced Research  

  • 2013   Systems Engineering Advanced Seminar Ⅱ  

  • 2013   Systems Engineering Advanced Seminar Ⅱ  

  • 2013   Systems Engineering Advanced Seminar Ⅰ  

  • 2013   Systems Engineering Advanced Seminar Ⅰ  

  • 2013   Systems Engineering Project SeminarⅡB  

  • 2013   Systems Engineering Project SeminarⅡA  

  • 2013   Systems Engineering Project SeminarⅠB  

  • 2013   Systems Engineering Project SeminarⅠA  

  • 2013   Symbiotic Robotics  

  • 2013   Systems Engineering SeminarⅡB  

  • 2013   Systems Engineering SeminarⅡA  

  • 2013   Systems Engineering SeminarⅠB  

  • 2013   Systems Engineering SeminarⅠA  

  • 2012   Systems Engineering Advanced Seminar Ⅱ  

  • 2012   Systems Engineering Advanced Seminar Ⅰ  

  • 2012   Systems Engineering Advanced Research  

  • 2012   Systems Engineering SeminarⅡA  

  • 2012   Systems Engineering SeminarⅠA  

  • 2012   Systems Engineering Project SeminarⅡA  

  • 2012   Systems Engineering Project SeminarⅠA  

  • 2012   Symbiotic Robotics  

  • 2012   Systems Engineering Advanced Seminar Ⅱ  

  • 2012   Systems Engineering Advanced Seminar Ⅰ  

  • 2012   Systems Engineering Advanced Research  

  • 2012   Systems Engineering SeminarⅡB  

  • 2012   Systems Engineering SeminarⅠB  

  • 2012   Systems Engineering Project SeminarⅡB  

  • 2012   Systems Engineering Project SeminarⅠB  

  • 2011   Systems Engineering Project SeminarⅡB  

  • 2011   Systems Engineering Project SeminarⅡA  

  • 2011   Systems Engineering Project SeminarⅠB  

  • 2011   Systems Engineering Project SeminarⅠA  

  • 2011   Systems Engineering Advanced Research  

  • 2011   Systems Engineering Advanced Research  

  • 2011   NA  

  • 2011   NA  

  • 2011   Systems Engineering Advanced Seminar Ⅱ  

  • 2011   Systems Engineering Advanced Seminar Ⅱ  

  • 2011   Systems Engineering Advanced Seminar Ⅰ  

  • 2011   Systems Engineering Advanced Seminar Ⅰ  

  • 2011   NA   Master's Course


Published Papers

  • Improvements over Coordinate Regression Approach for Large-Scale Face Alignment

    Haoqi Gao, Koichi Ogawara (Part: Last author, Corresponding author )

    IIEEJ Transactions on Image Electronics and Visual Computing   10 ( 1 ) 127 - 135   2022.06  [Refereed]

  • Face Reconstruction Algorithm based on Lightweight Convolutional Neural Networks and Channel-wise Attention

    Haoqi Gao, Koichi Ogawara (Part: Last author, Corresponding author )

    IIEEJ Transactions on Image Electronics and Visual Computing   10 ( 1 ) 90 - 97   2022.06  [Refereed]

  • Segmentation and Shape Estimation of Multiple Deformed Cloths Using a CNN-Based Landmark Detector and Clustering

    D. Sonegawa, K. Ogawara (Part: Last author, Corresponding author )

    2022 International Conference on Robotics and Automation (ICRA)     7043 - 7049   2022.05  [Refereed]

    DOI: 10.1109/ICRA46639.2022.9811551

  • Face Alignment Using a GAN-based Photorealistic Synthetic Dataset

    Haoqi Gao, Koichi Ogawara (Part: Last author )

    2022 7th International Conference on Control and Robotics Engineering (ICCRE) ( IEEE )    2022.04  [Refereed]

    DOI

  • Face alignment by learning from small real datasets and large synthetic datasets

    Haoqi Gao, Koichi Ogawara (Part: Last author )

    2022 Asia Conference on Cloud Computing, Computer Vision and Image Processing     2022.03  [Refereed]

  • Bidirectional Mapping Augmentation Algorithm for Synthetic Images Based on Generative Adversarial Network

    Haoqi Gao, Koichi Ogawara (Part: Last author )

    IIEEJ Transactions on Image Electronics and Visual Computing   8 ( 2 ) 110 - 120   2020.12  [Refereed]

  • Estimation of object class and orientation from multiple viewpoints and relative camera orientation constraints

    Koichi Ogawara, Keita Iseki (Part: Lead author, Corresponding author )

    2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) ( IEEE )    10588 - 10594   2020.10  [Refereed]

    DOI

  • Generative adversarial network for bidirectional mappings between synthetic and real facial image

    Haoqi Gao, Koichi Ogawara (Part: Last author )

    Twelfth International Conference on Digital Image Processing (ICDIP 2020) ( SPIE )    136 - 145   2020.06  [Refereed]

    DOI

  • CGAN-based Synthetic Medical Image Augmentation between Retinal Fundus Images and Vessel Segmented Images

    Haoqi Gao, Koichi Ogawara (Part: Last author )

    2020 5th International Conference on Control and Robotics Engineering (ICCRE) ( IEEE )    218 - 223   2020.04  [Refereed]

    DOI

  • Seminar Report: the 105th Robotics Seminar

    Ogawara Koichi

    Journal of the Robotics Society of Japan ( The Robotics Society of Japan )  35 ( 7 ) 536 - 537   2017

    DOI

  • Development of Power Assist Suit for Supporting and Transporting Heavy Object with Single Motor

    Tomoya Takeno, Ayano Murakami, Han Peng, Koichi Ogawara, Arata Suzuki, Kunitomo Kikuchi

    2015 54TH ANNUAL CONFERENCE OF THE SOCIETY OF INSTRUMENT AND CONTROL ENGINEERS OF JAPAN (SICE) ( IEEE )    1240 - 1243   2015  [Refereed]


    In this paper, we propose a power assist suit for supporting and transporting a heavy object with a single motor that reduces the load on the whole body of a wearer. The effectiveness of the power assist suit was verified by experiments.

  • Identification of people walking along curved trajectories

    Yumi Iwashita, Koichi Ogawara, Ryo Kurazume

    PATTERN RECOGNITION LETTERS ( ELSEVIER SCIENCE BV )  48   60 - 69   2014.10  [Refereed]


    Conventional methods of gait analysis for person identification use features extracted from a sequence of camera images taken during one or more gait cycles. The walking direction is implicitly assumed not to change. However, with the exception of very particular cases, such as walking on a circle centered on the camera, or along a line passing through the camera, there is always some degree of orientation change, most pronounced when the person is closer to the camera. This change in the angle between the velocity vector and the position vector in respect to the camera causes a decrease in performance for conventional methods. To address this issue we propose in this paper a new method, which provides improved identification in this context of orientation change. The proposed method uses a 4D gait database consisting of multiple 3D shape models of walking people and adaptive virtual image synthesis with high accuracy. Each frame, for the duration of a gait cycle, is used to estimate the walking direction of the subject, and a virtual image corresponding to the estimated direction is synthesized from the 4D gait database. The identification uses affine moment invariants as gait features. The efficiency of the proposed method is demonstrated through experiments using a database that includes 42 subjects. (C) 2014 Elsevier B. V. All rights reserved.

    DOI

  • A Study on Marker-less Motion Capture Based on Facial Part Recognition

    Yasuhiro Akagi, Ryo Furukawa, Ryusuke Sagawa, Koichi Ogawara, Hiroshi Kawasaki

    Journal of the Japan Society for Precision Engineering   79 ( 11 ) 1152 - 1158   2013.11  [Refereed]

  • A Voting-Based Sequential Pattern Recognition Method

    Koichi Ogawara, Masahiro Fukutomi, Seiichi Uchida, Yaokai Feng (Part: Lead author )

    PLOS ONE ( PUBLIC LIBRARY SCIENCE )  8 ( 10 )   2013.10  [Refereed]


    We propose a novel method for recognizing sequential patterns such as motion trajectory of biological objects (i.e., cells, organelle, protein molecules, etc.), human behavior motion, and meteorological data. In the proposed method, a local classifier is prepared for every point (or timing or frame) and then the whole pattern is recognized by majority voting of the recognition results of the local classifiers. The voting strategy has a strong benefit that even if an input pattern has a very large deviation from a prototype locally at several points, they do not severely influence the recognition result; they are treated just as several incorrect votes and thus will be neglected successfully through the majority voting. For regularizing the recognition result, we introduce partial-dependency to local classifiers. An important point is that this dependency is introduced to not only local classifiers at neighboring point pairs but also to those at distant point pairs. Although, the dependency makes the problem non-Markovian (i.e., higher-order Markovian), it can still be solved efficiently by using a graph cut algorithm with polynomial-order computations. The experimental results revealed that the proposed method can achieve better recognition accuracy while utilizing the above characteristics of the proposed method.
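    The voting idea described above (without the paper's graph-cut regularization) can be illustrated with a minimal sketch: each time point gets a local nearest-prototype classifier, and the sequence label is decided by majority vote, so a few locally deviant points become isolated wrong votes and are simply outvoted. This is an illustrative toy under assumed scalar data, not the paper's implementation; all names and data below are hypothetical.

    ```python
    from collections import Counter

    def classify_by_voting(sequence, prototypes):
        """Label a sequence by majority vote of per-point local classifiers.

        sequence   : list of scalar observations, one per time point
        prototypes : dict mapping class label -> prototype sequence of
                     the same length as `sequence`
        Each time point t votes for the class whose prototype is closest
        at t; the majority vote neglects a few large local deviations.
        """
        votes = Counter()
        for t, x in enumerate(sequence):
            label = min(prototypes, key=lambda c: abs(prototypes[c][t] - x))
            votes[label] += 1
        return votes.most_common(1)[0][0]

    protos = {"A": [0, 0, 0, 0, 0], "B": [5, 5, 5, 5, 5]}
    # One point deviates wildly toward "B" but is outvoted by the other four.
    print(classify_by_voting([0, 0, 9, 0, 0], protos))  # prints A
    ```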

    DOI

  • Marker-less Facial Motion Capture based on the Parts Recognition.

    Yasuhiro Akagi, Ryo Furukawa, Ryusuke Sagawa, Koichi Ogawara, Hiroshi Kawasaki

    Journal of WSCG   21 ( 2 ) 137 - 144   2013  [Refereed]

  • Expanding gait identification methods from straight to curved trajectories

    Yumi Iwashita, Ryo Kurazume, Koichi Ogawara

    2013 IEEE WORKSHOP ON APPLICATIONS OF COMPUTER VISION (WACV) ( IEEE )    193 - 199   2013  [Refereed]


    Conventional methods of gait analysis for person identification use features extracted from a sequence of camera images taken during one or more gait cycles. An implicit assumption is made that the walking direction does not change. However, cameras deployed in real-world environments (and often placed at corners) capture images of humans who walk on paths that, for a variety of reasons, such as turning corners or avoiding obstacles, are not straight but curved. This change of the direction of the velocity vector causes a decrease in performance for conventional methods. In this paper we address this aspect, and propose a method that offers improved identification results for people walking on curved trajectories. The large diversity of curved trajectories makes the collection of complete real world data infeasible. The proposed method utilizes a 4D gait database consisting of multiple 3D shape models of walking subjects and adaptive virtual image synthesis. Each frame, for the duration of a gait cycle, is used to estimate a walking direction for the subject, and consequently a virtual image corresponding to this estimated direction is synthesized from the 4D gait database. The identification uses affine moment invariants as gait features. Experiments using the 4D gait database of 21 subjects show that the proposed method has a higher recognition performance than conventional methods.

  • A facial motion tracking and transfer method based on a key point detection

    Yasuhiro Akagi, Ryo Furukawa, Ryusuke Sagawa, Koichi Ogawara, Hiroshi Kawasaki

    WSCG 2013, COMMUNICATION PAPERS PROCEEDINGS ( UNION AGENCY SCIENCE PRESS )    137 - 144   2013  [Refereed]


    Facial animation is one of the most important contents in 3D CG animations. By the development of scanning and tracking methods of a facial motion, a face model which consists of more than 100,000 points can be used for the animations. To edit the facial animations, key point based approaches such as "face rigging" are still useful ways. Even if a facial tracking method gives us all point-to-point correspondences, a detection method of a suitable set of key points is needed for content creators. Then, we propose a method to detect the key points which efficiently represent motions of a face. We optimize the key points for a Radial Basis Function (RBF) based 3D deformation technique. The RBF based deformation is a common technique to represent a movement of 3D objects in CG animations. Since the key point based approaches usually deform objects by interpolating movements of the key points, these approaches cause errors between the deformed shapes and the original ones. To minimize the errors, we propose a method which automatically inserts additional key points by detecting the area where the error is larger than the surrounding area. Finally, by utilizing the suitable set of key points, the proposed method creates a motion of a face which are transferred form another motion of a face.
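    The RBF-based deformation mentioned above can be sketched in one dimension: solve for kernel weights so that each key point reproduces its target displacement exactly, then evaluate the resulting displacement field anywhere. This is a generic Gaussian-RBF interpolation sketch, not the paper's code; the function names and data are hypothetical.

    ```python
    import math

    def solve(A, b):
        """Tiny Gaussian elimination with partial pivoting (small systems only)."""
        n = len(A)
        M = [row[:] + [bv] for row, bv in zip(A, b)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
        return x

    def rbf_deform(key_pts, key_disp, query, eps=0.5):
        """Displace `query` by a Gaussian-RBF field interpolating the key points."""
        phi = lambda a, b: math.exp(-eps * (a - b) ** 2)
        # Kernel matrix between key points; solving K w = d makes the
        # field reproduce each key point's displacement exactly.
        K = [[phi(p, q) for q in key_pts] for p in key_pts]
        w = solve(K, key_disp)
        return query + sum(wi * phi(query, p) for wi, p in zip(w, key_pts))

    # Key points at 0 and 4 move by +1 and -1; querying at a key point
    # reproduces its displacement (result is approximately 1.0).
    print(rbf_deform([0.0, 4.0], [1.0, -1.0], 0.0))
    ```

    Inserting an extra key point where the deformation error is large, as the paper proposes, amounts to appending it to `key_pts`/`key_disp` and re-solving.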

  • Non-Markovian Dynamic Time Warping

    Seiichi Uchida, Masahiro Fukutomi, Koichi Ogawara, Yaokai Feng

    2012 21ST INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR 2012) ( IEEE )    2294 - 2297   2012  [Refereed]


    This paper proposes a new dynamic time warping (DTW) method, called non-Markovian DTW In the conventional DTW, the warping function is optimized generally by dynamic programming (DP) subject to some Markovian constraints which restrict the relationship between neighboring time points. In contrast, the non-Markovian DTW can introduce non-Markovian constraints for dealing with the relationship between points with a large time interval. This new and promising ability of DTW is realized by using graph cut as the optimizer of the warping function instead of DP. Specifically, the conventional DTW problem is first converted as an equivalent minimum cut problem on a graph and then edges representing the non-Markovian constraints are added to the graph. An experiment on online character recognition showed the advantage of using non-Markovian constraints during DTW.
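    For contrast with the non-Markovian formulation, the conventional Markovian DTW that the paper generalizes can be written as a small dynamic program: each cell of the cost table depends only on its immediate predecessors. A minimal sketch over scalar sequences (hypothetical names, not the paper's code):

    ```python
    def dtw_distance(a, b):
        """Classic dynamic-time-warping distance between two scalar sequences.

        The recurrence D[i][j] = |a[i]-b[j]| + min(D[i-1][j], D[i][j-1],
        D[i-1][j-1]) encodes the Markovian constraint: each cell depends
        only on neighboring cells, the restriction the paper lifts by
        optimizing the warping function with graph cuts instead of DP.
        """
        n, m = len(a), len(b)
        INF = float("inf")
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
        return D[n][m]

    print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0: the extra 2 is absorbed by warping
    ```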

  • Approximate Belief Propagation by Hierarchical Averaging of Outgoing Messages

    OGAWARA Koichi (Part: Lead author )

    IEICE Transactions on Information and Systems, Japanese Edition ( The Institute of Electronics, Information and Communication Engineers )  J94-D ( 3 ) 593 - 603   2011.03  [Refereed]


    This paper proposes an approximate belief propagation method that replaces the outgoing messages of each node with the average of all of that node's outgoing messages and propagates messages hierarchically from a low-resolution graph to the original graph; when applied to images, it reduces computation time to roughly one half to one third of that of standard belief propagation and memory usage to 40%. The proposed method was implemented on both CPU and GPU and compared with standard belief propagation on the Middlebury stereo dataset, showing that, in exchange for a slight loss of accuracy, performance improves substantially in both computation time and memory usage.

    DOI

  • Detecting Frequent Patterns in Time Series Data using Partly Locality Sensitive Hashing

    OGAWARA Koichi, TANABE Yasufumi, KURAZUME Ryo, HASEGAWA Tsutomu (Part: Lead author )

    Journal of the Robotics Society of Japan ( The Robotics Society of Japan )  29 ( 1 ) 67 - 76   2011.01  [Refereed]


    Frequent patterns in time series data are useful clues to learn previously unknown events in an unsupervised way. In this paper, we propose a method for detecting frequent patterns in long time series data efficiently. The major contribution of the paper is two-fold: (1) Partly Locality Sensitive Hashing (PLSH) is proposed to find frequent patterns efficiently and (2) the problem of finding consecutive time frames that have a large number of frequent patterns is formulated as a combinatorial optimization problem which is solved via Dynamic Programming (DP) in polynomial time O(N^(1+1/α)) thanks to PLSH, where N is the total amount of data. The proposed method was evaluated by detecting frequent whole body motions in a video sequence as well as by detecting frequent everyday manipulation tasks in motion capture data.

    DOI
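The hashing idea behind pattern discovery can be illustrated with a much-simplified sketch: quantize each length-w window so that near-identical windows collide in a hash table, then report buckets with multiple members as candidate frequent patterns. This is ordinary exact hashing of quantized windows, not the authors' PLSH; all names and the quantization scheme are my own:

```python
from collections import defaultdict

def frequent_windows(series, w, q=1.0, min_count=2):
    """Group length-w windows by a quantized signature so that
    near-identical windows collide; buckets holding at least
    min_count start positions are reported as 'frequent'."""
    buckets = defaultdict(list)
    for t in range(len(series) - w + 1):
        window = series[t:t + w]
        # quantize values so similar windows map to the same key
        key = tuple(round(v / q) for v in window)
        buckets[key].append(t)
    return [starts for starts in buckets.values() if len(starts) >= min_count]
```

Each window is hashed once, so the scan is linear in the data length; the paper's contribution is making such collisions tolerant to partial mismatch and feeding the hits into a DP over time frames.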

  • Time-Series Pattern Recognition by a Majority-Voting Algorithm with Mutual Constraints

    福冨 正弘, 小川原 光一, 馮 尭楷, 内田 誠一

    The Transactions of the Institute of Electronics, Information and Communication Engineers D   J93-D ( 4 ) 548 - 551   2010.04  [Refereed]

  • Detecting Repeated Patterns using Partly Locality Sensitive Hashing

    Koichi Ogawara, Yasufumi Tanabe, Ryo Kurazume, Tsutomu Hasegawa

    IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010) ( IEEE )    2010  [Refereed]

     View Summary

    Repeated patterns are useful clues to learn previously unknown events in an unsupervised way. This paper presents a novel method that detects relatively long variable-length unknown repeated patterns in a motion sequence efficiently.
    The major contribution of the paper is two-fold: (1) Partly Locality Sensitive Hashing (PLSH) [1] is employed to find repeated patterns efficiently and (2) the problem of finding consecutive time frames that have a large number of repeated patterns is formulated as a combinatorial optimization problem which is solved via Dynamic Programming (DP) in polynomial time O(N^(1+1/α)) thanks to PLSH, where N is the total amount of data. The proposed method was evaluated by detecting repeated interactions between objects in everyday manipulation tasks and outperformed previous methods in terms of accuracy or computational time.

  • Approximate belief propagation by hierarchical averaging of outgoing messages

    Koichi Ogawara

    Proceedings - International Conference on Pattern Recognition     1368 - 1372   2010  [Refereed]

     View Summary

    This paper presents an approximate belief propagation algorithm that replaces the outgoing messages from a node with their average and propagates messages from a low-resolution graph to the original graph hierarchically. The proposed method reduces the computational time by one-half to two-thirds and reduces the required amount of memory by 60% compared with the standard belief propagation algorithm when applied to an image. The proposed method was implemented on CPU and GPU, and was evaluated on the Middlebury stereo benchmark dataset in comparison with the standard belief propagation algorithm. It is shown that the proposed method outperforms the standard algorithm in terms of both computational time and required memory, with a minor loss of accuracy. © 2010 IEEE.

    DOI
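The core averaging step can be sketched for a min-sum style update as follows (a simplified illustration that omits pairwise potentials and the hierarchical coarse-to-fine part; all names are mine, not the paper's):

```python
def outgoing_messages(unary, incoming):
    """Exact min-sum style outgoing messages: the message toward
    neighbor k combines the unary cost with the incoming messages
    from all neighbors except k (pairwise terms omitted here)."""
    total = [sum(m[s] for m in incoming) for s in range(len(unary))]
    return [
        [unary[s] + total[s] - incoming[k][s] for s in range(len(unary))]
        for k in range(len(incoming))
    ]

def averaged_outgoing(unary, incoming):
    """Approximation: one message shared by all neighbors -- the
    element-wise average of the exact outgoing messages. Storage per
    node drops from one message per edge to a single message."""
    exact = outgoing_messages(unary, incoming)
    deg = len(incoming)
    return [sum(msg[s] for msg in exact) / deg for s in range(len(unary))]
```

Sharing one averaged message per node is what cuts memory and per-iteration work, at the price of the small accuracy loss reported in the abstract.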

  • Painting robot with multi-fingered hands and stereo vision.

    Shunsuke Kudoh, Koichi Ogawara, Miti Ruchanurucks, Katsushi Ikeuchi

    Robotics Auton. Syst.   57 ( 3 ) 279 - 288   2009.03  [Refereed]

    DOI

  • Detecting Repeated Motion Patterns via Dynamic Programming using Motion Density

    Koichi Ogawara, Yasufumi Tanabe, Ryo Kurazume, Tsutomu Hasegawa

    ICRA: 2009 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-7 ( IEEE )    2926 - +   2009  [Refereed]

     View Summary

    In this paper, we propose a method that detects repeated motion patterns in a long motion sequence efficiently. Repeated motion patterns are the structured information that can be obtained without knowledge of the context of motions. They can be used as a seed to find causal relationships between motions or to obtain contextual information of human activity, which is useful for intelligent systems that support human activity in everyday environment.
    The major contribution of the proposed method is two-fold:
    (1) motion density is proposed as a repeatability measure and
    (2) the problem of finding consecutive time frames with large motion density is formulated as a combinatorial optimization problem which is solved via Dynamic Programming (DP) in polynomial time O(N log N) where N is the total amount of data. The proposed method was evaluated by detecting repeated interactions between objects in everyday manipulation tasks and outperformed the previous method in terms of both detectability and computational time.
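The flavor of contribution (2) can be illustrated with a simple segment-selection DP (an O(N^2) sketch for clarity; the per-frame scores, the per-segment penalty, and all names are my own illustrative choices, not the paper's O(N log N) algorithm or its exact objective):

```python
def best_segments(score, penalty):
    """best[t] is the maximal total reward over the first t frames,
    where each chosen segment [s, t) earns sum(score[s:t]) - penalty.
    Illustrates picking consecutive frames with large density."""
    n = len(score)
    prefix = [0.0]
    for v in score:
        prefix.append(prefix[-1] + v)
    best = [0.0] * (n + 1)
    choice = [None] * (n + 1)
    for t in range(1, n + 1):
        best[t], choice[t] = best[t - 1], None  # frame t-1 left unused
        for s in range(t):
            cand = best[s] + (prefix[t] - prefix[s]) - penalty
            if cand > best[t]:
                best[t], choice[t] = cand, s
    # backtrack the chosen segments
    segs, t = [], n
    while t > 0:
        if choice[t] is None:
            t -= 1
        else:
            segs.append((choice[t], t))
            t = choice[t]
    return best[n], segs[::-1]
```

The penalty discourages fragmenting one repeated motion into many tiny segments; the DP then returns the segment set with the highest net density.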

  • Integrating Region Growing and Classification for Segmentation and Matting

    Miti Ruchanurucks, Koichi Ogawara, Katsushi Ikeuchi

    2008 15TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-5 ( IEEE )    593 - 596   2008  [Refereed]

     View Summary

    This paper presents a supervised foreground segmentation method that uses local and global feature similarity with an edge constraint. This framework integrates and extends the notion of region growing and classification to deal with local and global fitness. It parameterizes the growing constraint using Chebyshev's inequality. The constraint is used to stop segmentation before matting. Matting relies on both local and global information. The proposed method outperforms many of the current methods in the sense of correctness and minimal user interaction, and it does so in a reasonable computation time.
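A Chebyshev-based growing constraint of the kind mentioned above can be sketched as follows (a minimal illustration; the function name and the default k are my own assumptions, not from the paper). Chebyshev's inequality guarantees that at most 1/k^2 of any distribution lies more than k standard deviations from its mean, so a candidate pixel is admitted unless it is a clear outlier relative to the statistics of the region grown so far:

```python
def within_chebyshev(value, mean, std, k=3.0):
    """Accept a candidate pixel value into the growing region only if
    it lies within k standard deviations of the region's mean; by
    Chebyshev's inequality at most 1/k^2 of inliers are rejected."""
    if std == 0.0:
        return value == mean
    return abs(value - mean) <= k * std
```

The appeal of the bound is that it holds for any distribution, so no Gaussian assumption about region colors is needed.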

  • Learning Meaningful Interactions from Repetitious Motion Patterns

    Koichi Ogawara, Yasufumi Tanabe, Ryo Kurazume, Tsutomu Hasegawa

    2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS ( IEEE )    3350 - +   2008  [Refereed]

     View Summary

    In this paper, we propose a method for estimating meaningful actions from long-term observation of everyday manipulation tasks without prior knowledge, as part of an action understanding framework for life support robotic systems. The target task is defined as a sequence of interactions between objects. An interaction that appears many times is assumed to be meaningful, and repetitious relative motion patterns are detected from trajectories of multiple objects. The main contribution is that the problem is formulated as a combinatorial optimization problem with two parameters, target object labels and correspondences on similar motion patterns, and is solved using local and global Dynamic Programming (DP) in polynomial time O(N log N), where N is the total amount of data. The proposed method is evaluated on manipulation tasks using everyday objects such as a cup and a tea-pot.

    DOI

  • Representation of Drawing Behavior by a Robot

    KUDOH Shunsuke, OGAWARA Koichi, RUCHANURUCKS Miti, TAKAMATSU Jun, IKEUCHI Katsushi

    Journal of the Robotics Society of Japan ( The Robotics Society of Japan )  26 ( 6 ) 612 - 619   2008  [Refereed]

     View Summary

    This paper describes a painting robot with multi-fingered hands and stereo vision. The goal of this study is for the robot to reproduce the whole procedure involved in human painting. A painting action is divided into three phases: obtaining a 3D model, composing a picture model, and painting by a robot. In this system, various feedback techniques including computer vision and force sensors are used. As experiments, an apple and a human silhouette are painted on a canvas using this system.

    DOI

  • Integrating Region Growing and Classification for Segmentation and Matting

    Miti Ruchanurucks, Koichi Ogawara, Katsushi Ikeuchi

    2008 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, PROCEEDINGS ( IEEE )    597 - +   2008  [Refereed]

     View Summary

    This paper presents a supervised foreground segmentation method that uses local and global feature similarity with an edge constraint. This framework integrates and extends the notion of region growing and classification to deal with local and global fitness. It parameterizes the growing constraint using Chebyshev's inequality. The constraint is used to stop segmentation before matting. Matting relies on both local and global information. The proposed method outperforms many of the current methods in the sense of correctness and minimal user interaction, and it does so in a reasonable computation time.

  • Construction method of drift-free omni images by integration of multiple video cameras

    MIKAMI Takeshi, ONO Shintaro, OGAWARA Koichi, KAWASAKI Hiroshi, IKEUCHI Katsushi

    Monthly journal of the Institute of Industrial Science, University of Tokyo ( The University of Tokyo )  59 ( 3 ) 80 - 84   2007.05

    DOI

  • Marker-less human motion estimation using articulated deformable model

    Koichi Ogawara, Xiaolu Li, Katsushi Ikeuchi

    PROCEEDINGS OF THE 2007 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-10 ( IEEE )    46 - 51   2007  [Refereed]

     View Summary

    This paper presents a novel whole-body motion estimation method that fits a deformable articulated model of the human body to the 3D reconstructed volume obtained from multiple video streams. The advantage of the proposed method is two-fold: (1) the combination of a robust estimator and the ICP algorithm with Kd-tree search in pose and normal space makes it possible to track complex and dynamic motion robustly against noise and interference between limb and torso, and (2) the hierarchical estimation and backtrack re-estimation algorithm enable accurate estimation.
    The ability to track challenging whole-body motion in a real environment is also presented.

  • Robot painter: From object to trajectory

    Miti Ruchanurucks, Shunsuke Kudoh, Koichi Ogawara, Takaaki Shiratori, Katsushi Ikeuchi

    2007 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-9 ( IEEE )    345 - +   2007  [Refereed]

     View Summary

    This paper presents visual perception discovered in high-level manipulator planning for a robot to reproduce the procedure involved in human painting. First, we propose a technique of 3D object segmentation that can work well even when the precision of the cameras is inadequate. Second, we apply a simple yet powerful fast color perception model that shows similarity to human perception. The method outperforms many existing interactive color perception algorithms. Third, we generate global orientation map perception using a radial basis function. Finally, we use the derived foreground, color segments, and orientation map to produce a visual feedback drawing. Our main contributions are 3D object segmentation and color perception schemes.

  • Humanoid robot painter: Visual perception and high-level planning

    Miti Ruchanurucks, Shunsuke Kudoh, Koichi Ogawara, Takaaki Shiratori, Katsushi Ikeuchi

    PROCEEDINGS OF THE 2007 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-10 ( IEEE )    3028 - +   2007  [Refereed]

     View Summary

    This paper presents visual perception discovered in high-level manipulator planning for a robot to reproduce the procedure involved in human painting. First, we apply a technique of 2D object segmentation that considers region similarity as an objective function and edges as a constraint, with artificial intelligence used as a criterion function. The system can segment images more effectively than most existing methods, even if the foreground is very similar to the background. Second, we propose a novel color perception model that shows similarity to human perception. The method outperforms many existing color reduction algorithms. Third, we propose a novel global orientation map perception using a radial basis function. Finally, we use the derived model along with the brush's position- and force-sensing to produce a visual feedback drawing. Experiments show that our system can generate good paintings, including portraits.

  • Recognizing Assembly Tasks Through Human Demonstration.

    Jun Takamatsu, Koichi Ogawara, Hiroshi Kimura, Katsushi Ikeuchi

    International Journal of Robotics Research   26 ( 7 ) 641 - 659   2007  [Refereed]

    DOI

  • Representation for knot-tying tasks.

    Jun Takamatsu, Takuma Morita, Koichi Ogawara, Hiroshi Kimura, Katsushi Ikeuchi

    IEEE Transactions on Robotics   22 ( 1 ) 65 - 78   2006  [Refereed]

    DOI

  • Painting robot with multi-fingered hands and stereo vision

    Shunsuke Kudoh, Koichi Ogawara, Miti Ruchanurucks, Katsushi Ikeuchi

    IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems     127 - 132   2006  [Refereed]

     View Summary

    In this paper, we describe a painting robot with multi-fingered hands and stereo vision. The goal of this study is for the robot to reproduce the whole procedure involved in human painting. A painting action is divided into three phases: obtaining a 3D model, composing a picture model, and painting by a robot. In this system, various feedback techniques including computer vision and force sensors are used. As experiments, an apple and a human silhouette are painted on a canvas using this system. © 2006 IEEE.

    DOI

  • Laser Range Sensor Suspended beneath Balloon : FLRS (Flying Laser Range Sensor)

    HASEGAWA Kazuhide, HIROTA Yuichiro, OGAWARA Koichi, KURAZUME Ryo, IKEUCHI Katsushi

    The transactions of the Institute of Electronics, Information and Communication Engineers. D-II ( The Institute of Electronics, Information and Communication Engineers )  J88-D-II ( 8 ) 1499 - 1507   2005.08  [Refereed]

     View Summary

    In measurement with laser range sensors, the larger the target object, such as a ruin or a building, the more occluded regions arise that cannot be observed from the ground. In general, scaffolding is erected to lift the sensor and measure from above, but moving the measurement position takes a great deal of time and cost, and when scaffolding cannot be erected the measurement itself comes to a halt. To solve this problem, the authors developed FLRS, a laser range sensor that measures from the air using a balloon, making it easy to move the measurement position without erecting scaffolding. The sensor performs high-speed measurement to reduce the influence of the balloon's sway, but slight distortion still arises in the measurement results. We therefore propose a method that detects and corrects the motion of the sensor body using consecutive images from a video camera built into the sensor. Finally, we present an example of its actual use in measuring ruins and show the effectiveness of the sensor system.

  • Representation of Knot Tying Tasks for Robot Execution

    TAKAMATSU Jun, MORITA Takuma, OGAWARA Koichi, KIMURA Hiroshi, IKEUCHI Katsushi

    Journal of the Robotics Society of Japan ( The Robotics Society of Japan )  23 ( 5 ) 572 - 582   2005.07  [Refereed]

     View Summary

    The Learning from Observation (LFO) paradigm has been widely applied in various types of robot systems. It helps reduce the work of the programmer. However, so far, the available systems have been limited to rigid objects. Deformable objects are not considered because: (1) it is difficult to describe their states and (2) too many operations are possible on them. In this paper, we choose knot-tying tasks as a case study for operating on deformable objects, since knot theory is available and the types of operations are limited. Specifically, we introduce an appropriate representation for describing knot states and define four types of operations sufficient to realize any knot-tying task.

    DOI

  • Abstraction of Assembly Tasks from Observation for Automatic Generation of Robot Motion

    高松 淳, 小川原 光一, 木村 浩, 池内 克史

    IPSJ Transactions on Computer Vision and Image Media   46 ( SIG 9 ) 41 - 55   2005.06  [Refereed]

  • A sensor fusion approach for recognizing continuous human grasping sequences using hidden Markov models

    Keni Bernardin, Koichi Ogawara, Katsushi Ikeuchi, Ruediger Dillmann

    IEEE Transactions on Robotics   21 ( 1 ) 47 - 57   2005.02  [Refereed]

     View Summary

    The Programming by Demonstration (PbD) technique aims at teaching a robot to accomplish a task by learning from a human demonstration. In a manipulation context, recognizing the demonstrator's hand gestures, specifically when and how objects are grasped, plays a significant role. Here, a system is presented that uses both hand shape and contact-point information obtained from a data glove and tactile sensors to recognize continuous human-grasp sequences. The sensor fusion, grasp classification, and task segmentation are made by a hidden Markov model recognizer. Twelve different grasp types from a general, task-independent taxonomy are recognized. An accuracy of up to 95% could be achieved for a multiple-user system. © 2005 IEEE.

    DOI
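A grasp-sequence recognizer of this kind ultimately decodes a most-likely sequence of hidden grasp states from a feature stream; a minimal discrete-HMM Viterbi decoder can be sketched as follows (illustrative only: the states, observations, and probabilities used with it are invented for the example, not taken from the paper):

```python
def viterbi(obs, states, start, trans, emit):
    """Most-likely hidden-state sequence for a discrete HMM -- the kind
    of decoder used to segment a glove/tactile feature stream into
    grasp types."""
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = [{}]
    for o in obs[1:]:
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[-2][p] * trans[p][s])
            V[-1][s] = V[-2][prev] * trans[prev][s] * emit[s][o]
            back[-1][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for b in reversed(back[1:]):
        path.append(b[path[-1]])
    return path[::-1]
```

Because the transition model penalizes rapid state switching, the decoder also performs the task segmentation mentioned in the abstract as a by-product of classification.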

  • A photo-realistic driving simulation system for mixed-reality traffic experiment space

    Shintaro Ono, Koichi Ogawara, Masataka Kagesawa, Hiroshi Kawasaki, Masaaki Onuki, Junichi Abeki, Tori Yano, Masami Nerio, Ken Honda, Katsushi Ikeuchi

    IEEE Intelligent Vehicles Symposium, Proceedings   2005   747 - 752   2005  [Refereed]

     View Summary

    In this paper, we propose an efficient and effective image generation system for the "Mixed Reality Traffic Experiment Space", an enhanced driving/traffic simulation system which we have been developing for the Sustainable ITS project at the University of Tokyo. Conventional driving simulators represent the view by a set of polygon-based objects, which leads to low photo-reality and a huge human cost for dataset construction. We introduce our image/geometry-based hybrid method to realize a more photo-realistic view with less human cost. Images for the datasets are captured from the real world by multiple video cameras mounted on a data acquisition vehicle, and the view for the system is created by synthesizing the image dataset. The following mainly describes the details of data acquisition and view rendering. © 2005 IEEE.

    DOI

  • Driving view simulation synthesizing virtual geometry and real images in an experimental mixed-reality traffic space

    Shintaro Ono, Koichi Ogawara, Masataka Kagesawa, Hiroshi Kawasaki, Masaaki Onuki, Ken Honda, Katsushi Ikeuchi

    Proceedings - Fourth IEEE and ACM International Symposium on Symposium on Mixed and Augmented Reality, ISMAR 2005   2005   214 - 215   2005  [Refereed]

     View Summary

    We propose an efficient and effective image generation system for an experimental mixed-reality traffic space. Our enhanced traffic/driving simulation system represents the view through a hybrid that combines virtual geometry with real images to realize high photo-reality with little human cost. Images for datasets are captured from the real world, and the view for the simulation system is created by synthesizing image datasets with a conventional driving simulator. © 2005 IEEE.

    DOI

  • Understanding of Human Assembly Tasks for Robot Execution Generation of Optimal Trajectories Based on Transitions of Contact Relations

    TAKAMATSU Jun, OGAWARA Koichi, KIMURA Hiroshi, IKEUCHI Katsushi

    JRSJ ( The Robotics Society of Japan )  22 ( 6 ) 752 - 763   2004.09  [Refereed]

     View Summary

    The planning-from-observation paradigm is attracting wide attention as a novel robot-programming technique. It consists of two parts: (1) recognition of a human demonstration from observation as a symbolic representation, i.e., a sequence of movement primitives, and (2) execution of the same task. Symbolic representation enables a robot to achieve the same task even in a different environment. We have already proposed a method to build a symbolic representation of an assembly task. However, for a robot to execute the task, it is necessary to adjust the parameters of each movement primitive to generate appropriate object trajectories, especially a path that maintains a given contact state. Many researchers have proposed methods to calculate such a path using non-linear optimization, for example, the potential-field method, probabilistic methods, and so on. Although these methods are very powerful as computational tools, their solutions are not optimal. We propose a novel method to calculate such an optimal path using a linear solution.

    DOI

  • Digital Archiving of Folk Performing Arts and Motion Presentation by Robots

    池内 克史, 中澤 篤志, 小川原 光一, 高松 淳, 工藤 俊亮, 中岡 慎一郎, 白鳥 貴亮

    Journal of the Virtual Reality Society of Japan   9 ( 2 ) 14 - 20   2004.06  [Refereed]

  • Extraction of Essential Interactions Through Multiple Observations of Human Demonstrations

    Koichi Ogawara, Jun Takamatsu, Hiroshi Kimura, Katsushi Ikeuchi (Part: Lead author )

    IEEE Transactions on Industrial Electronics   50 ( 4 ) 667 - 675   2003.12  [Refereed]

  • The Use of Vision in Observation-Based Acquisition of Hand-Work Tasks

    小川原 光一, 高松 淳, 木村 浩, 池内 克史 (Part: Lead author )

    IPSJ Transactions on Computer Vision and Image Media   44 ( SIG 17 ) 13 - 23   2003.12  [Refereed]

  • Knot planning from observation

    Takuma Morita, Jun Takamatsu, Koichi Ogawara, Hiroshi Kimura, Katsushi Ikeuchi

    Proceedings - IEEE International Conference on Robotics and Automation   3   3887 - 3892   2003

     View Summary

    Learning from Observation (LFO) has been widely applied in various types of robot systems. It helps reduce the work of the programmer. But the available systems have been limited to rigid objects. Deformable objects are not considered because: 1) it is difficult to describe their state and 2) too many operations are possible on them. In this paper, we choose knot tying as a case study for operating on non-rigid bodies, because knot theory is available and the types of operations are limited. We describe the Knot Planning from Observation (KPO) paradigm, a KPO theory, and a KPO system.

  • Acquisition of a symbolic manipulation task model by attention point analysis

    Koichi Ogawara, Jun Takamatsu, Hiroshi Kimura, Katsushi Ikeuchi (Part: Lead author )

    Advanced Robotics   17 ( 10 ) 1073 - 1091   2003  [Refereed]

     View Summary

    As a way of automatic programming of robot behavior, a method for building a symbolic manipulation task model from a demonstration is proposed. The feature of this model is that it explicitly stores information about the essential parts of a task, i.e. interaction between a hand and an environmental object, or interaction between a grasped object and a target object. Thus, even in different environments, this method reproduces robot motion as similar as possible to that of humans to complete the task, while changing the motion during non-essential parts to adapt to the current environment. To automatically determine the essential parts, a method called attention point analysis is proposed; this method searches for the nature of a task using multiple sensors and estimates the parameters to represent the task. A humanoid robot is used to verify the reproduced robot motion based on the generated task model.

    DOI

  • A Method for Modeling Human Tasks Based on Temporal Integration of Multiple Demonstrations

    小川原 光一, 高松 淳, 木村 浩, 池内 克史 (Part: Lead author )

    IPSJ Transactions on Computer Vision and Image Media   43 ( SIG 4 ) 117 - 126   2002.06  [Refereed]

  • Modeling manipulation interactions by hidden Markov models

    Koichi Ogawara, Jun Takamatsu, Hiroshi Kimura, Katsushi Ikeuchi

    IEEE International Conference on Intelligent Robots and Systems   2   1096 - 1101   2002

     View Summary

    This paper describes a new approach to teaching everyday manipulation tasks to a robot under the "Learning from Observation" framework. In our previous work, to acquire low-level action primitives of a task automatically, we proposed a technique to estimate the essential interactions needed to complete a task by integrating multiple observations of similar demonstrations. But after many demonstrations are performed, there may be interactions which are the same in nature. These identical interactions should be grouped so that each action primitive becomes unique. For this purpose, a Hidden Markov Model based clustering algorithm is presented which automatically determines the number of independent interactions. We also show that the obtained interactions can be used to discriminate human behavior. Finally, a simulation result and an experimental result in which a real humanoid robot learns and recognizes essential actions by observing demonstrations are presented.

  • Correcting observation errors for assembly task recognition

    Jun Takamatsu, Koichi Ogawara, Hiroshi Kimura, Katsushi Ikeuchi

    IEEE International Conference on Intelligent Robots and Systems   1   232 - 237   2002

     View Summary

    The completion of robot programs requires long development time and much effort. To shorten the programming time and to minimize the effort, we have been developing a system which we refer to as the "assembly-plan-from-observation (APO) system." This system requires assembly task recognition from observing human performance. Observation data by a robot's vision system is usually error contaminated, and, thus, we cannot use those data directly. This paper proposes two methods to clean up those errors by using contact relations and their transitions. The first one corrects the observed configuration from contact relations observed. The second one identifies wrongly determined contact relations from an analysis of configuration space (C-space). We have implemented both methods on our test bed and have verified their effectiveness.

  • Refining hand-action models through repeated observations of human and robot behavior by combined template matching

    Koichi Ogawara, Hiroshi Kimura, Katsushi Ikeuchi

    IEEE International Conference on Intelligent Robots and Systems   1   545 - 550   2001

     View Summary

    This paper describes our current research on learning task level representations by a robot through observation of human demonstrations. Representations proposed so far typically utilized one or a few sensors and were constructed through one time observation, therefore they could solve ambiguity in the data only under strong assumptions. We focus on human hand actions and have been developing a novel construction method of a human task model which integrates multiple observations to solve ambiguity based on Attention Points (APs). So far, this analysis constructs a symbolic task model efficiently in a coarse-to-fine way through two steps. However, to represent delicate motion appeared in a task, the system must incorporate the information about precise motion of the manipulated objects into an abstract task model. In this paper, we propose a method to identify the manipulated object through repeated observations of both human and robot behavior. To this end, we present a method which combines 2D and 3D template matching techniques to localize an object in 3D space generated from a depth and an intensity image. We apply this technique to recognition of human and robot behavior by obtaining the precise trajectory of the manipulated objects. We also present the experimental results achieved through the use of a human-form robot equipped with a 9-eye stereo vision system.

  • Acquiring hand-action models by attention point analysis

    Koichi Ogawara, Soshi Iba, Tomikazu Tanuki, Hiroshi Kimura, Katsushi Ikeuchi

    Proceedings - IEEE International Conference on Robotics and Automation   1   465 - 470   2001  [Refereed]

     View Summary

    This paper describes our current research on learning task level representations by a robot through observation of human demonstrations. We focus on human hand actions and represent such hand actions in symbolic task models. We propose a framework of such models by efficiently integrating multiple observations based on attention points; we then evaluate the produced model by using a human-form robot. We propose a two-step observation mechanism. At the first step, the system roughly observes the entire sequence of the human demonstration, builds a rough task model and also extracts attention points (APs). The attention points indicate the time and the position in the observation sequence that require further detailed analysis. At the second step, the system closely examines the sequence around the APs, and obtains attribute values for the task model, such as what to grasp, which hand to use, or what the precise trajectory of the manipulated object is. We have implemented this system on a human-form robot and demonstrated its effectiveness.

    DOI

  • Extraction of fine motion through multiple observations of human demonstration by DP matching and combined template matching

    Koichi Ogawara, Jun Takamatsu, Hiroshi Kimura, Katsushi Ikeuchi

    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication     8 - 13   2001  [Refereed]

     View Summary

    This paper describes our current research on how a robot, through observation of human demonstrations, can learn task level representations of human hand-work tasks. Representations proposed so far typically handle a human hand trajectory as it is or segment the entire task into a discrete pre-determined symbol sequence; but to make a generalized model of human hand-work tasks, both types of information must be incorporated into the model appropriately. We propose a technique for segmenting an observed hand-work task into pieces which are composed of fine motion or coarse motion. Fine motion means delicate manipulation and preserves the relative trajectory between the grasped object and the target object, while coarse motion is a symbol which connects fine motions. During coarse motion, a trajectory can be adjusted according to the environment or the structure of a robot when the robot performs the same task performed by a human. To extract essential fine motion automatically, we propose a technique for integrating and aligning multiple observations of different demonstrations, which are virtually the same task, by using data gloves and multi-dimensional Dynamic Programming (DP) matching. Along each fine motion, the relative trajectory (position and orientation) is calculated by tracking the manipulated object using stereo vision. To localize and track the manipulated object efficiently, we propose a model-based localization technique which combines 2D and 3D template matching. We have implemented these techniques on our human-form robot and present an experimental result in which a non-contact hand-work task was analyzed and performed. © 2001 IEEE.

    DOI

  • A Distributed Robot Integration Method for Continuous Personal Support Based on a Mobile Agent Mechanism in a Network Environment

    小川原 光一, 金広 文男, 稲葉 雅幸, 井上 博允 (Part: Lead author )

    Journal of the Robotics Society of Japan   18 ( 7 ) 1034 - 1039   2000.07  [Refereed]

  • Recognition of Human Behaviour using Stereo Vision and Data Gloves

    Ogawara Koichi, Iba Soshi, Tanuki Tomikazu, Sato Yoshihiro, Saegusa Akira, Kimura Hiroshi, Ikeuchi Katsushi

    Monthly journal of the Institute of Industrial Science, University of Tokyo ( The University of Tokyo )  52 ( 5 ) 225 - 230   2000.05

     View Summary

    This paper presents a novel method of constructing a human behaviour model by attention point (AP) analysis. The AP analysis consists of two steps. In the first step, it broadly observes human behaviour, constructs a rough human behaviour model, and finds APs which require detailed analysis. In the second step, by applying time-consuming analysis to the APs in the same human behaviour, it can refine the human behaviour model. This human behaviour model is highly abstracted and can change its degree of abstraction to adapt to the environment, so as to be applicable in a different environment. We describe this method and its implementation using data gloves and a stereo vision system. We also show an experimental result in which a real robot observed human behaviour and successfully performed the same behaviour in a different environment using this model.

    DOI

  • Recognition of human task by attention point analysis

    Koichi Ogawara, Soshi Iba, Tomikazu Tanuki, Hiroshi Kimura, Katsushi Ikeuchi

    IEEE International Conference on Intelligent Robots and Systems   3   2121 - 2126   2000  [Refereed]

     View Summary

    This paper presents a novel method of constructing a human task model by attention point (AP) analysis. The AP analysis consists of two steps. In the first step, it broadly observes a human task, constructs a rough human task model, and finds APs which require detailed analysis. In the second step, by applying time-consuming analysis to the APs in the same human task, it can refine the human task model. This human task model is highly abstracted and can change its degree of abstraction to adapt to the environment, so as to be applicable in a different environment. We describe this method and its implementation using data gloves and a stereo vision system. We also show an experimental result in which a real robot observed a human task and successfully performed the same task in a different environment using this model.

    DOI

  • Symbolic Representation of Trajectories for Skill Generation.

    Hirohisa Tominaga, Jun Takamatsu, Koichi Ogawara, Hiroshi Kimura, Katsushi Ikeuchi

    Proceedings of the 2000 IEEE International Conference on Robotics and Automation, ICRA 2000, April 24-28, 2000, San Francisco, CA, USA ( IEEE )    4076 - 4081   2000  [Refereed]

    DOI


Misc

  • 自重補償機構を用いた冗長自由度ロボットアームの開発

    石田 陸登, 小川原 光一 (Part: Last author, Corresponding author )

    日本機械学会関西支部 第98期定時総会講演会     1 - 4   2023.03

  • ロボット指に搭載可能な波長が異なる複数のLEDを利用した小型近接覚センサの開発

    加藤 颯, 小川原 光一 (Part: Last author )

    日本機械学会関西支部 第97期定時総会講演会     2022.03

  • RGB-Dカメラと2台のロボットアームによる計測視点数を考慮した持ち替えに基づく全周3次元形状計測法

    曽我 幸慧, 小川原 光一 (Part: Last author )

    日本機械学会関西支部 第97期定時総会講演会     2022.03

  • 深層学習に基づく複数の長方形布の分離と形状推定

    曽根川 大輝, 小川原 光一 (Part: Last author )

    第22回計測自動制御学会システムインテグレーション部門講演会 講演論文集     2B2-14   2021.12

  • Development of a Power Assist Suit for Transfer Assistance Composed of Passive Actuators

    FUKUNAGA Kodai, OGAWARA Koichi (Part: Last author )

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) ( The Japan Society of Mechanical Engineers )  2021   2P3-C13   2021.06

     View Summary

    Today in Japan, as the population ages and the number of people using nursing homes increases, back pain and other symptoms caused by the increased burden on caregivers have become a problem. Several devices have been developed to reduce the burden of transfer care, which is particularly demanding, but they are not practical and have not been adopted in nursing homes. In this study, we therefore developed a power assist suit with only passive actuators to reduce the burden of transfer care. We conducted lifting experiments using the developed power assist suit and confirmed its assisting effect from electromyography (EMG) values measured with and without the suit.

    DOI

  • Multi-label Image Classification with Visual Explanations

    Haoqi Gao, Liangzhi Li, Bowen Wang, Yuta Nakashima, Ryo Kawasaki, Koichi Ogawara, Hajime Nagahara

    Responsible Computer Vision CVPR 2021 Workshop     2021.06  [Refereed]

  • Development of a gripper for power assist suits with linkage and ratchet mechanisms

    HORIGUCHI Kouki, OGAWARA Koichi (Part: Last author )

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) ( The Japan Society of Mechanical Engineers )  2021   2P3-C14   2021.06

     View Summary

    The L-shaped gripper commonly used in power assist suits has the problem of not being able to stably hold arbitrarily shaped objects. To solve this problem, we developed a gripper with a wire-driven ratchet mechanism. However, we could not confirm its effectiveness in reducing the burden on the hands and arms for irregularly shaped objects such as rice bags. This is because the mechanism using a wire did not work as expected due to friction between the wire and the wire guide. Therefore, in this study, we solved these problems by developing a gripper with a link-driven ratchet mechanism, which does not use a wire. We conducted experiments to compare the proposed gripper with our previous gripper by measuring the surface EMG of the wearer's arms when lifting objects of various shapes.

    DOI

  • Goal-Conditioned Reinforcement Learning with Latent Representations using Contrastive Learning

    YAMADA Takaya, OGAWARA Koichi (Part: Last author )

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) ( The Japan Society of Mechanical Engineers )  2021   1P1-I15   2021.06

     View Summary

    The goal-conditioned reinforcement learning approach is useful when an agent must adaptively achieve arbitrary goals in various environments. Furthermore, it is desirable to be able to learn the agent's behavior from observations from on-board sensors such as RGB cameras. Since it is difficult for agents to learn directly from images, previous research has shown that the learning process can be accelerated by learning latent representations using VAEs and performing goal-conditioned reinforcement learning in the latent space. However, latent representations learned using VAEs with an image reconstruction loss contain information irrelevant to the task to be accomplished. In goal-conditioned reinforcement learning, the reward function is defined as the distance to the goal state, so latent representations that capture the distance between states are appropriate. In this study, we propose a method that maintains the similarity between states in the latent space using contrastive learning and performs goal-conditioned reinforcement learning in that latent space given image observations. We compared the proposed method with one using latent representations obtained from VAEs and show that our method outperformed it.

    DOI

  • Development of a winch-type power assist suit that realizes luggage lifting movement and knee fixation-and-release movement with a single motor

    HARA Yukimasa, OGAWARA Koichi (Part: Last author )

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) ( The Japan Society of Mechanical Engineers )  2021   2P3-C11   2021.06

     View Summary

    In this paper, we present a mechanical design of a winch-type power assist suit that realizes luggage lifting movement and knee fixation-and-release movement with a single motor. To solve the problems of our previous power assist suit, we enabled lowering a 20 kg load, reduced the weight, improved safety, and increased rigidity. We developed a prototype power assist suit and verified that it works as designed. We also conducted lifting experiments and confirmed that it could lift a 5 kg load. We measured the surface electromyogram (EMG) and confirmed that the suit reduced the burden on the wearer's arms and back when lifting a load.

    DOI

  • Adaptive Data Generation and Bidirectional Mapping for Polyp Images

    Haoqi Gao, Koichi Ogawara (Part: Last author )

    49th Annual IEEE Applied Imagery Pattern Recognition (AIPR) Workshop     2020.10

  • Development of a gripper for power assist suits with a ratchet mechanism consisting of linear motion and rotational motion parts

    HORIGUCHI Koki, OGAWARA Koichi, SUZUKI Arata, KIKUCHI Kunitomo

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) ( The Japan Society of Mechanical Engineers )  1A1-I10   1 - 4   2019.06

     View Summary

    L-shaped grippers typically used for power assist suits have a problem in stably holding objects of arbitrary shape. To solve this problem, we developed a gripper with a ratchet mechanism consisting of linear motion and rotational motion parts. The gripper can be deformed according to the shape of the wearer's hand and the shape of the object to be held, and the shape of the gripper can be fixed by the ratchet mechanism; thus objects of arbitrary shape can be held stably and the burden on the wearer is reduced. We carried out experiments where four types of objects were held by the developed gripper and showed that it reduced the wearer's burden for two of the four object types.

    DOI

  • 弾性要素を利用して把持物体の6自由度運動を実現する多指ロボットハンドの開発

    和唐 昂希, 南方 隆秀, 小川原 光一

    日本機械学会 関西学生会 学生員卒業研究発表講演会 講演前刷集   12P13   1 - 1   2019.03

  • Development of a power assist suit that supports and transports heavy objects by a link mechanism located inside the wearer's legs

    TSUBOI Kazuhiro, HARA Yukimasa, OGAWARA Koichi, SUZUKI Arata, KIKUCHI Kunitomo

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) ( The Japan Society of Mechanical Engineers )  2019 ( 0 ) 1P1-I03   2019

     View Summary

    In this paper, we present a power assist suit that supports and transports heavy objects with a single motor. The leg mechanism of the power assist suit is made of upper and lower parallel links and springs. It is designed to be located inside the wearer's legs so that the power assist suit becomes small and lightweight, and the stability during transportation is improved. We measured the surface electromyogram (EMG) during luggage lifting experiments and confirmed that the burden on the wearer's back and legs is reduced when wearing the developed power assist suit.

    DOI

  • Development of a wire drive mechanism for power assist suits that realizes linear luggage lifting movement and nonlinear knee fixation-and-release movement

    HARA Yukimasa, TSUBOI Kazuhiro, OGAWARA Koichi, SUZUKI Arata, KIKUCHI Kunitomo

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) ( The Japan Society of Mechanical Engineers )  2019 ( 0 ) 1P1-I09   2019

     View Summary

    In this paper, we present a mechanical design of a wire drive mechanism for power assist suits that realizes linear luggage lifting movement and nonlinear knee fixation-and-release movement using a motor. The nonlinear fixation-and-release movement is realized by a mechanism using a piston and a crankshaft. We developed an algorithm to find the design parameters that minimize the error from the optimal wire movement. We developed a prototype wire drive mechanism and verified that it works as designed. In addition, we carried out lifting experiments and confirmed that luggage up to 20 kg can be lifted. We measured the surface electromyogram (EMG) and confirmed that the burden on the wearer's arms was reduced during luggage lifting.

    DOI

  • 遠隔操作のための複数3次元視覚情報統合・提示システムの開発~過去の映像の提示による作業対象の視認性の向上~

    岩橋 知久, 小川原 光一

    第19回計測自動制御学会システムインテグレーション部門講演会 講演論文集   2C1-05   1675 - 1679   2018.12

  • RGB-Dカメラと双腕ロボットによる把持の可否を考慮した持ち替えに基づく全周3次元形状計測法の開発

    原田 稜, 小川原 光一

    第19回計測自動制御学会システムインテグレーション部門講演会 講演論文集   3D2-08   2986 - 2990   2018.12

  • 多視点から得られるRGB-D画像とカメラの姿勢を用いた深層学習に基づく物体の種類と姿勢の推定

    井堰 啓太, 小川原 光一

    第19回計測自動制御学会システムインテグレーション部門講演会 講演論文集   3D2-09   2991 - 2994   2018.12

  • 隣接特徴点の位置制約を用いた深層学習に基づく任意に変形した長方形布の形状推定

    今川 涼介, 小川原 光一

    第19回計測自動制御学会システムインテグレーション部門講演会 講演論文集   3D2-05   2970 - 2975   2018.12

  • 連続多様体からなる物体の把持姿勢候補を用いた RRT-connect に基づくロボットの把持計画法

    齋藤 拓史, 小川原 光一

    第18回計測自動制御学会システムインテグレーション部門講演会 講演論文集   3A4-10   2182 - 2186   2017.12

  • 把持の安定性を考慮したRRT-connect に基づく多指ハンドとロボットアームの経路探索法

    塔本 健太, 小川原 光一

    第18回計測自動制御学会システムインテグレーション部門講演会 講演論文集   3A4-04   2155 - 2158   2017.12

  • 探索自由度の適応的な変更に基くRRT-connectを拡張した経路探索法

    塔本 健太, 小川原 光一

    第35回日本ロボット学会学術講演会 予稿集   3K2-04   1 - 4   2017.09

  • 手形状と物体形状の相関を利用した深層学習に基づく画像からの把持形状推定

    片山 涼平, 小川原 光一

    ロボティクス・メカトロニクス講演会2017 講演論文集   1P2-J01   1 - 4   2017.05

  • RGB-Dカメラとロボットアームに設置したステレオ魚眼カメラを用いた遠隔操作用透過映像の生成

    岩橋 知久, 小川原 光一

    ロボティクス・メカトロニクス講演会2017 講演論文集   2A1-K07   1 - 3   2017.05

  • ロボットアームを搭載した飛行ロボットによる物体の安定した運搬を実現するための経路計画に関する研究

    村瀬 浩彰, 小川原 光一

    ロボティクス・メカトロニクス講演会2017 講演論文集   1P1-E03   1 - 3   2017.05

  • RGB-Dカメラと双腕ロボットによる持ち替えを利用したSfM法に基づく欠損のない全周3次元形状計測法の開発

    原田 稜, 小川原 光一

    ロボティクス・メカトロニクス講演会2017 講演論文集   2A2-O11   1 - 4   2017.05

  • 狭隘空間で大型物体を運搬するための飛行ロボットの経路計画に関する研究

    村瀬 浩彰, 小川原 光一

    日本機械学会 関西支部第92期定時総会講演会 論文集   M612   1 - 1   2017.03

  • クレーン型アームを有し1台のモータで重量物の支持運搬を補助するパワーアシストスーツの開発 -第2報 アームの揺動を抑制する機構の開発-

    韓 鵬, 武野 友哉, 小川原 光一, 鈴木 新, 菊地 邦友

    第34回日本ロボット学会学術講演会 予稿集   1E3-04   1 - 4   2016.09

  • ナイロンアクチュエータ作製時の荷重による駆動特性への影響

    石田 龍一, 菊地 邦友, 硴塚 龍望, 鈴木 新, 小川原 光一

    第34回日本ロボット学会学術講演会 予稿集   1B3-03   1 - 3   2016.09

  • Leg mechanism of power assist robot with single motor

    柴原 悠佑, 武野 友哉, 鈴木 新, 小川原 光一, 菊地 邦友

    システム制御情報学会研究発表講演会講演論文集 ( システム制御情報学会 )  60   3p   2016.05

  • 上空からの映像を位置合わせに利用した移動ロボットによる広域空間の3次元レーザ計測

    村瀬 浩彰, 中村 祐太, 小川原 光一

    第20回知能メカトロニクスワークショップ 予稿集     13 - 18   2015.07

  • RGB-Dカメラを用い手と物体の相互隠蔽を考慮したHuモーメント不変量に基づく手形状推定

    片山 涼平, 小川原 光一

    ロボティクス・メカトロニクス講演会2015 講演論文集   2A1-S05   1 - 3   2015.05

  • クレーン型アームを有し1台のモータで重量物の支持運搬を補助するパワーアシストスーツの開発

    韓 鵬, 武野 友哉, 村上 綺乃, 小川原 光一, 鈴木 新, 菊地 邦友

    第59回システム制御情報学会研究発表講演会 予稿集   341-2   1 - 4   2015.05

  • 統計的モデルを用いた見えの変化に頑強な歩容による個人識別

    新崎 誠, 岩下 友美, 小川原 光一, 倉爪 亮

    第20回ロボティクスシンポジア 予稿集     342 - 347   2015.03  [Refereed]

  • Quantitative analysis of ciliary alignment on multiciliated epithelium

    Kazuhiro Tateishi, Koichi Ogawara, Sachiko Tsukita

    Cellular Dynamics & Models, Cold Spring Harbor Laboratory 2015 Meeting & Courses     1 - 1   2015  [Refereed]

  • 2P1-M01 Development of Power Assist Suit for Supporting and Transporting Heavy Objects with Single Motor in that Hip and Leg Joint Axes are Aligned with Those of Wearer

    TAKENO Tomoya, MURAKAMI Ayano, PENG Han, OGAWARA Koichi, SUZUKI Arata, KIKUCHI Kunitomo

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) ( The Japan Society of Mechanical Engineers )  2015 ( 0 ) 2P1-M01   2015

     View Summary

    In this paper, we present a mechanical design of an exoskeleton-type power assist suit that helps a wearer lift up and transport heavy objects while walking. The assist suit supports heavy objects by itself, so the force exerted on the wearer's body is reduced. In addition, the hip and leg joint axes are aligned with those of the wearer so that the wearer can walk smoothly. The experimental results show that the developed power assist suit is able to support and transport heavy objects while walking without exerting a load on the wearer's body, including the arms and legs.

    DOI

  • Development of Power Assist Suit for Supporting and Transporting Heavy Objects with Single Motor

    Tomoya Takeno, Ayano Murakami, Peng Han, Koichi Ogawara, Arata Suzuki, Kunitomo Kikuchi

    The 34th Chinese Control Conference and SICE Annual Conference   ThC04-4   1 - 4   2015  [Refereed]

  • Gait-Based Person Identification Method Using Shadow Biometrics for Robustness to Changes in the Walking Direction

    Makoto Shinzaki, Yumi Iwashita, Ryo Kurazume, Koichi Ogawara

    IEEE Winter Conference on Applications of Computer Vision     670 - 677   2015  [Refereed]

  • 統計的モデルを用いた見えの変化に頑強な歩容認証

    新崎 誠, 岩下 友美, 小川原 光一, 倉爪 亮

    第4回バイオメトリクスと認識・認証シンポジウム   O-1-1   1 - 6   2014.11

  • 動的計画法を用いた多指ハンドによる物体操作運動の生成とフィードバック制御による運動補償

    中川 圭祐, 小川原 光一

    第32回ロボット学会学術講演会 予稿集     1 - 4   2014.09

  • ロボットによるSFM法を用いた未知小物体の全周3次元形状推定

    古川 直人, 小川原 光一

    第19回知能メカトロニクスワークショップ 予稿集     1 - 5   2014.07

  • 動的計画法とフィードバック制御を用いた多指ハンドによる持ち替えを伴う物体操作の獲得

    中川 圭祐, 小川原 光一

    第19回知能メカトロニクスワークショップ 予稿集     1 - 4   2014.07

  • 画像処理に基づく細胞内構造体の配置の規則性に関する定量解析

    小川原 光一

    バイオイメージ・インフォマティクス ワークショップ 2014     1 - 1   2014.06

  • Non-rigid Registration with Reliable Distance Field for Dynamic Shape Completion

    Kent Fujiwara, Hiroshi Kawasaki, Ryusuke Sagawa, Koichi Ogawara, Katsushi Ikeuchi

    Workshop on Dynamic Shape Measurement and Analysis 2014 in conjunction with International Conference on 3DV     1 - 8   2014  [Refereed]

  • Gait identification robust to changes in walking direction by 4D gait database

    BABA Ryosuke, IWASHITA Yumi, OGAWARA Koichi, KURAZUME Ryo

    IEICE technical report. Speech ( The Institute of Electronics, Information and Communication Engineers )  111 ( 431 ) 59 - 64   2012.02

     View Summary

    In person identification using gait images, image features characteristic of individuals are extracted from a sequence of gait images. However, if the subject's observation angle differs from that in the database, the correct classification rate drops. To deal with this problem, we constructed a 4D gait database consisting of multiple 3D shape models of walking subjects and introduce a method robust against changes in walking direction. In this method, we first reconstruct 3D models of subjects from gait images taken by multiple cameras, then synthesize virtual images of the 3D models from arbitrary virtual viewpoints and build a database of gait features extracted from the virtual images. In the identification phase, a person is identified by matching the subject's gait features against those from all virtual viewpoints in the database. However, a full search is computationally expensive, and the subject can be misidentified when the walking direction is wrongly estimated. In this paper, to reduce calculation time and achieve a high correct classification rate, we introduce a method that first estimates the walking direction using Frieze Patterns and then identifies the person using features from the estimated virtual viewpoint. Experiments using the 4D gait database show the effectiveness of the proposed method.

  • Gait identification robust to changes in observation angle

    IWASHITA Yumi, BABA Ryosuke, OGAWARA Koichi, KURAZUME Ryo

    IEICE technical report. Speech ( The Institute of Electronics, Information and Communication Engineers )  111 ( 431 ) 65 - 70   2012.02

  • Sequential Pattern Recognition by Local Classifiers and Dynamic Time Warping

    福冨 正弘, 小川原 光一, フォン ヤオカイ

    IPSJ SIG Technical Report ( Information Processing Society of Japan )  2010 ( 3 ) 8p   2010.10

  • Sequential Pattern Recognition by Local Classifiers and Dynamic Time Warping

    FUKUTOMI Masahiro, OGAWARA Koichi, FENG Yaokai, UCHIDA Seiichi

    Technical report of IEICE. PRMU ( 一般社団法人電子情報通信学会 )  110 ( 187 ) 157 - 164   2010.08

     View Summary

    This paper describes a method for recognizing sequential patterns with nonlinear time warping. The proposed method uses a sequence of local classifiers, each of which is prepared to provide a recognition result (i.e., class label) at a certain sample point. In addition, in order to compensate for nonlinear time warping, the local classifier of the point v has to be assigned to the point t_v of the prototype sequential pattern. Consequently, we must solve the optimal labeling problem and the optimal point-to-point correspondence problem (i.e., the optimal mapping from v to t_v) simultaneously. In the proposed method, this multiple optimization problem is tackled by graph cut. Specifically, the α-expansion algorithm, which is an approximation algorithm for graph cut problems, is employed. After solving the problem, the input pattern is recognized by majority voting on the class labels obtained at the local classifiers. Several penalties are introduced to force neighboring local classifiers to have the same class labels and a continuous point-to-point correspondence. To demonstrate the validity of the proposed method, it was applied to an online character recognition task.

  • Articulated structure from motion using subspace separation

    松尾 幸治, 小川原 光一, 倉爪 亮

    IEICE technical report ( The Institute of Electronics, Information and Communication Engineers )  109 ( 376 ) 273 - 278   2010.01

  • 部分的に局所性を保持するハッシュ関数を用いた画像列からの高頻度パターン検出

    小川原光一, 田邉康史, 倉爪亮, 長谷川勉

    日本ロボット学会学術講演会予稿集(CD-ROM)   28th   2010

  • 部分的に局所性の高いハッシュ関数を用いた頻出運動パターンの効率的な抽出法

    小川原光一, 田邉康史, 倉爪亮, 長谷川勉

    ロボティクスシンポジア予稿集   15th   2010

  • Detecting Frequent Motion Patterns using Motion Density

    田邉康史, 小川原光一, 倉爪亮, 長谷川勉

    日本ロボット学会学術講演会予稿集(CD-ROM)   27th   2009

  • Estimation Method of the pose of an Object Manipulated by a Multi-fingered Robot-hand using Visual and Force Information

    亀崎康之, 小川原光一, 倉爪亮, 長谷川勉

    日本ロボット学会学術講演会予稿集(CD-ROM)   27th   2009

  • Manipulation Recognition based on relative motions between two objects using Dynamic Programming

    Yonemasu Yoshihiro, Uchida Seiichi, Ogawara Koichi

    Record of Joint Conference of Electrical and Electronics Engineers in Kyushu ( 電気・情報関係学会九州支部連合大会委員会 )  2009 ( 0 ) 435 - 435   2009

     View Summary

    In this paper, we regard an object manipulation task as a time-series pattern of relative motion between two objects, and propose a method that recognizes object manipulation in video by comparing this pattern against a set of reference relative-motion patterns prepared in advance. For the image at a given time, the likelihood that an arbitrary pair of regions in the image belongs to a reference relative-motion pattern is defined by three measures: (1) the likelihood that the first region is the first object, (2) the likelihood that the second region is the second object, and (3) the relative position between the two regions. Based on this likelihood, the method compares the entire input video with each reference relative-motion pattern using dynamic programming, and outputs as the recognition result the object manipulation represented by the most similar reference pattern.

    DOI

  • Fast Repetitious Motion Pattern Detection Method via Dynamic Programming using Motion Density

    Koichi Ogawara, Yasufumi Tanabe, Ryo Kurazume, Tsutomu Hasegawa

    IEEE International Conference on Robotics and Automation     2009

    DOI

  • Obtaining Manipulation Skills from Observation

    Takamatsu Jun, Tominaga Hirohisa, Ogawara Koichi, Kimura Hiroshi, Ikeuchi Katsushi

    Monthly journal of the Institute of Industrial Science, University of Tokyo ( The University of Tokyo )  52 ( 5 ) 231 - 236   2000.05

    DOI


Awards & Honors

  • Excellent Paper Award (優秀論文賞)

    Winner: 福冨 正弘, 小川原 光一, フォン ヤオカイ, 内田 誠一

    2011.07   第14回画像の認識・理解シンポジウム   非マルコフ的制約を導入した最適弾性マッチング

  • Best Oral Presentation Award

    Winner: Yumi Iwashita, Ryosuke Baba, Koichi Ogawara, Ryo Kurazume

    2010.09   The Fifth International Conference on Emerging Security Technologies   Person identification from spatio-temporal 3D gait

  • Best Poster Paper Award

    Winner: Yasufumi Tanabe, Koichi Ogawara, Ryo Kurazume, Tsutomu Hasegawa

    2009.10   The 5th Joint Workshop on Machine Perception and Robotics   Detecting Frequent Actions using Partly Locality Sensitive Hashing

  • Best Vision Paper Award

    Winner: Koichi Ogawara, Xiaolu Li, Katsushi Ikeuchi

    2007.04   2007 IEEE International Conference on Robotics and Automation   Marker-less Human Motion Estimation using Articulated Deformable Model

Patents

  • Heavy object support device (重量物支持装置)

    Date applied: 2014.12.08 ( 特願2014-248018 )   Publication date: 2016.06.20 ( 特開2016-108104 )  

    Inventor(s)/Creator(s): 小川原光一、鈴木新  Applicant: 国立大学法人和歌山大学

KAKENHI

  • Development of learning-based recognition and manipulation techniques for robotic assembly of cloth-like deformable objects

    2023.04
    -
    2026.03
     

    Grant-in-Aid for Scientific Research(C)  Principal investigator

  • Teaching assembly tasks via semi-autonomous teleoperation for robots equipped with vision and multi-fingered hands

    2019.04
    -
    2022.03
     

    Grant-in-Aid for Scientific Research(C)  Principal investigator

  • Development of semi-autonomous robot teleoperation technology that conveys finger-based object manipulation

    2016.04
    -
    2019.03
     

    Grant-in-Aid for Scientific Research(C)  Principal investigator

  • A method for teaching object manipulation using intermediate grasp representations and autonomous visuo-tactile feedback control

    2013.04
    -
    2016.03
     

    Grant-in-Aid for Scientific Research(C)  Principal investigator

  • A method for acquiring object manipulation skills through active sensing that combines vision and a robot hand

    2011.04
    -
    2013.03
     

    Grant-in-Aid for Young Scientists(B)  Principal investigator

  • Modeling of object manipulation based on extracting shape, motion, and task context from video

    2009.04
    -
    2011.03
     

    Grant-in-Aid for Young Scientists(B)  Principal investigator


Public Funding (other government agencies of their auxiliary organs, local governments, etc.)

  • Basic research toward practical CPM rehabilitation devices using polymer actuators

    2011.04
    -
    2012.03
     

    Co-investigator

  • Survey research toward developing powered suits for rural revitalization

    2011.04
    -
    2012.03
     

    Principal investigator

  • Development of simultaneous 4D measurement technology for whole-body human motion and non-rigidly deforming clothing

    2011.04
    -
    2012.03
     

    Principal investigator

  • JST Adaptable and Seamless Technology Transfer Program (A-STEP), feasibility study (exploratory type): development of fast, high-accuracy full-surround 3D measurement technology for non-rigidly deforming clothing

    2011.04
    -
    2012.03
     

    Principal investigator

Competitive funding, donations, etc. from foundation, company, etc.

  • Donation to the Faculty of Systems Engineering (Kayamori Foundation of Informational Science Advancement research grant)

    2010.04
    -
    2011.03
     

    Research subsidy  Principal investigator

Instructor for open lecture, peer review for academic journal, media appearances, etc.

  • Review

    2023.04
    -
    2024.03

    The Institute of Image Electronics Engineers of Japan

     View Details

    Peer review

    Peer review of papers submitted to the Journal of the Institute of Image Electronics Engineers of Japan

  • Review

    2023.04
    -
    2024.03

    The Japan Society of Mechanical Engineers

     View Details

    Peer review

    Peer review of papers submitted to the ROBOMECH Journal, the English-language journal of the JSME Robotics and Mechatronics Division

  • Lecturer for an open lecture for high school students (Yumenavi: university professors navigate you into the world of academic disciplines)

    2022.10.15
    -
    2022.10.16

    FromPage Co., Ltd.

     View Details

    Planning and lecturing for open lectures and public talks

    Lecture on "How robots can perform work in distant places on our behalf". Date: October 15-16

  • Review

    2022.04
    -
    2023.03

    The Institute of Image Electronics Engineers of Japan

     View Details

    Peer review

    Peer review of papers submitted to the Journal of the Institute of Image Electronics Engineers of Japan

  • Review

    2021.09
    -
    2022.03

    The Robotics Society of Japan

     View Details

    Peer review

    Peer review of papers submitted to the Journal of the Robotics Society of Japan

  • Review

    2021.08
    -
    2021.09

    The Japan Society of Mechanical Engineers

     View Details

    Peer review

    Peer review of papers submitted to academic journals of the Japan Society of Mechanical Engineers

  • Review

    2021.04
    -
    2021.06

    The Society of Instrument and Control Engineers

     View Details

    Peer review

    Peer review of papers submitted to the Transactions of the Society of Instrument and Control Engineers

  • KOSEN (technical college) internship

    2017.04

    Other

     View Details

    Trial admission and outreach lectures for elementary, junior high, and high school students

    Accepted one internship student from Kindai University Technical College and provided instruction on the design and fabrication of robot mechanisms and microcontroller-based control. Date: August 21-25

  • 3rd Wakayama University / Wakayama Prefecture Industrial Technology Center researcher exchange meeting

    2017.04

    Wakayama Prefecture Industrial Technology Center

     View Details

    Planning and lecturing for open lectures and public talks

    Lecture on "Motion planning for robot arms and multi-fingered hands". Date: March 16

  • KOSEN (technical college) internship

    2016.04

    Other

     View Details

    Trial admission and outreach lectures for elementary, junior high, and high school students

    Accepted one internship student from Osaka Prefecture University College of Technology and provided instruction on the design and fabrication of robot mechanisms and microcontroller-based control. Date: August 22-26

  • 24th Wakayama Business & Techno Fair

    2015.11

    Wakayama Industry Promotion Foundation

     View Details

    Planning and lecturing for open lectures and public talks

    Lecture on research seeds in 3D measurement technology. Date: November 10, 2015

  • Joint presentation of engineering research seeds

    2015.04

    Graduate School of Engineering, Osaka Prefecture University / Graduate School of Systems Engineering, Wakayama University

     View Details

    Planning and lecturing for open lectures and public talks

    Presentation of research seeds on power assist suits. Date: November 9, 2015

  • Associate Editor

    2015.03
    -
    2018.02

    The Society of Instrument and Control Engineers, Editorial Committee of the Japanese-language Transactions

     View Details

    Editorial board member, reviewer, or referee for academic journals

    Associate Editor. Term: 2015.3-2018.2

  • Standing reviewer

    2010.08
    -
    Now

    IEICE Society Transactions

     View Details

    Editorial board member, reviewer, or referee for academic journals

    Standing reviewer. Term: 2010.8-

  • Media appearance

    2010.03

    The Nishinippon Shimbun

     View Details

    Newspaper coverage and TV/radio appearances related to research results

    Research introduction article:
    "Education / We love science! Imitation robots"


Committee member history in academic associations, government agencies, municipalities, etc.

  • 28th Robotics Symposia: executive committee member (publication committee)

    2022.06
    -
    2023.03
     

    The Society of Instrument and Control Engineers

     View Details

    Domestic conference management

    Editing and publication of the proceedings.

  • Robotics and Mechatronics Conference 2021: publication committee member (vice chair)

    2020.06
    -
    2021.06
     

    The Japan Society of Mechanical Engineers

     View Details

    Domestic conference management

    Editing and publication of the conference proceedings.

  • Robotics and Mechatronics Division steering committee member

    2020.04
    -
    2022.03
     

    The Japan Society of Mechanical Engineers

     View Details

    Committee member for academic societies, government agencies, municipalities, etc.

    Robotics and Mechatronics Division Steering Committee. Term: 2020.4-2022.3

  • 19th SICE System Integration Division Annual Conference: publication committee member (vice chair)

    2017.12
    -
    2018.12
     

    The Society of Instrument and Control Engineers

     View Details

    Domestic conference management

    Editing and publication of the conference proceedings.

  • Business planning committee member

    2016.04
    -
    2019.03
     

    The Robotics Society of Japan

     View Details

    Committee member for academic societies, government agencies, municipalities, etc.

    Business Planning Committee. Term: 2016.3-2018.3

  • Robotics and Mechatronics Division public relations committee member (vice chair in FY2016, chair in FY2017)

    2015.04
    -
    2019.03
     

    The Japan Society of Mechanical Engineers

     View Details

    Committee member for academic societies, government agencies, municipalities, etc.

    Robotics and Mechatronics Division Public Relations Committee. Term: 2015.4-2019.3

  • PRMU technical committee member

    2014.05
    -
    2020.04
     

    The Institute of Electronics, Information and Communication Engineers

     View Details

    Committee member for academic societies, government agencies, municipalities, etc.

    PRMU Technical Committee. Term: 2014.5-2020.4

  • Computer Vision and Image Media (CVIM) SIG steering committee member

    2008.04
    -
    2012.03
     

    Information Processing Society of Japan

     View Details

    Committee member for academic societies, government agencies, municipalities, etc.

    SIG Steering Committee. Term: 2008.4-2012.3

  • Journal editorial committee member

    2005.05
    -
    2007.03
     

    The Robotics Society of Japan

     View Details

    Editorial board member, reviewer, or referee for academic journals

    Committee member


Other Social Activities

  • Host for the FY2018 industrial and science education teacher dispatch training program

    2018.04
    -
    2019.03

    Other

     View Details

    Joint research, new technology creation, consulting, etc. with industry and government agencies

    Accepted one teacher from Wakayama Prefectural Wakayama Technical High School and provided instruction on motion planning and control of robot arms. Conducted by: Koichi Ogawara