Updated: 2024/04/16


CHEN Jinhui (陳 金輝)
Affiliation
Faculty of Systems Engineering, Social Informatics Major
Title
Associate Professor
Concurrent post
Informatics Division (Associate Professor)
Homepage
External link

Education

  • 2013 - 2016

    Kobe University, Graduate School of System Informatics, completed the doctoral course

Degrees

  • Doctor of Engineering

  • Master of Engineering

Career

  • April 2023

    Wakayama University, Faculty of Systems Engineering, Associate Professor

  • April 2020

    Prefectural University of Hiroshima, Faculty of Regional Development, Associate Professor

  • July 2016

    Kobe University, Research Institute for Economics and Business Administration / Center for Computational Social Science, Assistant Professor

  • March 2016

    Kobe University, Research Institute for Economics and Business Administration, Specially Appointed Assistant Professor

  • April 2014

    Kobe University, Research Center for Urban Safety and Security, Researcher

Research Fields

  • Informatics / Perceptual information processing

  • Informatics / Intelligent informatics

Research Keywords

  • Pattern recognition

  • Speech processing

  • Data science

  • Machine learning

  • Image processing

Papers

  • Any-to-Any Voice Conversion with Multi-layer Speaker Adaptation and Content Supervision (accepted)

    Xuexin Xu, Xunquan Chen, Jinhui Chen, Zhihong Zhang, Edwin R. Hancock, et al. (Role: Corresponding author)

    IEEE/ACM Trans. Audio Speech Lang. Process.   August 2023   [Refereed]

    DOI

  • Emotional Voice Conversion with a Novel Content-Style Fusion Block

    陳訓泉, 陳金輝, 高島遼一, 滝口哲也

    Proceedings of the 2023 Spring Meeting of the Acoustical Society of Japan   March 2023

  • Towards Expressive Speech Conversion based on StarGANv2

    牟尚泱, 陳金輝, 高島遼一, 滝口哲也

    Proceedings of the 2023 Spring Meeting of the Acoustical Society of Japan   March 2023

  • Speaker-Independent Emotional Voice Conversion via Disentangled Representations (accepted)

    Xunquan Chen, Takashi Kamihigashi, Jinhui Chen, Tetsuya Takiguchi, Edwin R. Hancock, et al. (Role: Corresponding author)

    IEEE Trans. Multimedia   November 2022   [Refereed]

    DOI

  • Direction of Arrival Estimation for Indoor Environments Based on Acoustic Composition Model with a Single Microphone

    Xuexin Xu, Xunquan Chen, Jinhui Chen, Zhihong Zhang, Tetsuya Takiguchi, Edwin R. Hancock

    Pattern Recognition, Vol. 129, 108715, September 2022   [Refereed]

    DOI

  • Phoneme-guided Dysarthric Speech Conversion With Non-parallel Data by Joint Training

    Xunquan Chen, Atsuki Oshiro, Jinhui Chen, Ryoichi Takashima, Tetsuya Takiguchi (Role: Corresponding author)

    Signal, Image and Video Processing, 16, 1641-1648, September 2022   [Refereed]

    DOI

  • Emotional Voice Conversion Using Disentangled Representation Learning and Attention Mechanism

    陳訓泉, 陳金輝, 高島遼一, 滝口哲也

    Proceedings of the 2022 Spring Meeting of the Acoustical Society of Japan   March 2022

  • Towards Natural Emotional Voice Conversion with Novel Attention Module

    陳訓泉, 陳金輝, 高島遼一, 滝口哲也

    Proceedings of the 2022 Autumn Meeting of the Acoustical Society of Japan   2022

  • Two-Pathway Style Embedding for Arbitrary Voice Conversion

    Xuexin Xu, Jinhui Chen, Xunquan Chen and Edwin R. Hancock

    Interspeech   September 2021   [Refereed]

  • Towards an Efficient Real-time Kernel Function Stream Clustering Method via Shared Nearest-neighbor Density for the IIoT

    Ruohe Huang, Ruliang Xiao, Weifu Zhu, Jinhui Chen, Imad Rida

    Information Sciences   August 2021   [Refereed]

    DOI

  • Multimodal Fusion for Indoor Sound Source Localization

    Jinhui Chen, Ryoichi Takashima, Xingchen Guo, Zhihong Zhang, Xuexin Xu, Tetsuya Takiguchi, Edwin R. Hancock (Role: Lead author)

    Pattern Recognition   July 2021   [Refereed]

    DOI

  • Emotional Voice Conversion by Learning Disentangled Representations with Spectrum and Prosody Features

    陳訓泉, 陳金輝, 高島遼一, 滝口哲也

    Proceedings of the 2021 Autumn Meeting of the Acoustical Society of Japan   March 2021

  • Dysarthric Speech Conversion by Learning Disentangled Representations with Non-parallel Data

    陳訓泉, 陳金輝, 高島遼一, 滝口哲也

    Proceedings of the Acoustical Society of Japan   2021

  • Emotional Voice Conversion Using Dual Supervised Adversarial Networks With Continuous Wavelet Transform F0 Features.

    Zhaojie Luo, Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki (Role: Corresponding author)

    IEEE/ACM Trans. Audio Speech Lang. Process., 27(10), 1535-1548, June 2019   [Refereed]

    DOI

  • Neutral-to-emotional voice conversion with cross-wavelet transform F0 using generative adversarial networks

    Zhaojie Luo, Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki

    APSIPA Transactions on Signal and Information Processing (Cambridge University Press), 8, 1-11, March 2019   [Refereed]

     Abstract

    In this paper, we propose a novel neutral-to-emotional voice conversion (VC) model that can effectively learn a mapping from neutral to emotional speech with limited emotional voice data. Although conventional VC techniques have achieved tremendous success in spectral conversion, the lack of representations in fundamental frequency (F0), which explicitly represents prosody information, is still a major limiting factor for emotional VC. To overcome this limitation, in our proposed model, we outline the practical elements of the cross-wavelet transform (XWT) method, highlighting how such a method is applied in synthesizing diverse representations of F0 features in emotional VC. The idea is (1) to decompose F0 into different temporal level representations using continuous wavelet transform (CWT); (2) to use XWT to combine different CWT-F0 features to synthesize interaction XWT-F0 features; (3) and then use both the CWT-F0 and corresponding XWT-F0 features to train the emotional VC model. Moreover, to better measure similarities between the converted and real F0 features, we applied a VA-GAN training model, which combines a variational autoencoder (VAE) with a generative adversarial network (GAN). In the VA-GAN model, VAE learns the latent representations of high-dimensional features (CWT-F0, XWT-F0), while the discriminator of the GAN can use the learned feature representations as a basis for a VAE reconstruction objective.

    DOI

  • Oil Price Forecasting Using Supervised GANs with Continuous Wavelet Transform Features

    Zhaojie Luo, Jinhui Chen, Xiao Jing Cai, Katsuyuki Tanaka, Tetsuya Takiguchi, Takuji Kinkyo, Shigeyuki Hamori

    IEEE ICPR   November 2018   [Refereed]

     Abstract

    This paper proposes a novel approach based on a supervised Generative Adversarial Networks (GANs) model that forecasts crude oil prices with Adaptive Scales Continuous Wavelet Transform (AS-CWT). In our study, we first confirmed the possibility of using the Continuous Wavelet Transform (CWT) to decompose an oil price series into various components, such as the sequences of days, weeks, months, and years, so that the decomposed new time series can be used as inputs for a deep-learning (DL) training model. Second, we found that applying the proposed adaptive scales in the CWT method can strengthen the dependence of inputs and provide more useful information, which can improve the forecasting performance. Finally, we use the supervised GANs model as a training model, which can provide more accurate forecasts than those of the naive forecast (NF) model and other nonlinear models, such as Neural Networks (NNs) and Deep Belief Networks (DBNs), when dealing with a limited amount of oil price data. (A toy CWT decomposition at one-third-octave scales is sketched at the end of this paper list.)

    DOI

  • Polar Transformation on Image Features for Orientation-Invariant Representations.

    Jinhui Chen, Zhaojie Luo, Zhihong Zhang, Faliang Huang, Zhiling Ye, Tetsuya Takiguchi, Edwin R. Hancock (Role: Lead author)

    IEEE Trans. Multimedia, 21(2), 300-313, June 2018   [Refereed]

    DOI

  • An AI-based Approach to Auto-analyzing Historical Handwritten Business Documents: As Applied to the Kanebo Database

    CHEN Jinhui, KAMIHIGASHI Takashi, ITOH Munehiko, TAKATSUKI Yasuo, TAKIGUCHI Tetsuya (Role: Lead author)

    Journal of Computational Social Science (Springer), 1(1), 167-185, January 2018   [Refereed]

    DOI

  • Rotation-reversal invariant HOG cascade for facial expression recognition

    Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki (Role: Lead author)

    Signal, Image and Video Processing (Springer London), 11(8), 1485-1492, November 2017   [Refereed]

     Abstract

    This paper presents a novel classification framework derived from AdaBoost to classify facial expressions. The proposed framework adopts rotation-reversal invariant HOG as features. The framework is implemented by configuring the area under the receiver operating characteristic curve of the HOG-based weak classifiers, which makes it a discriminative classification framework. The proposed classification framework is evaluated with three very popular and representative public databases: CK+, MMI, and AFEW. The results show that the proposed classification framework outperforms the state-of-the-art methods.

    DOI

  • Overlapping Community Detection for Multimedia Social Networks

    Faliang Huang, Xuelong Li, Shichao Zhang, Jilian Zhang, Jinhui Chen (Role: Last author, Corresponding author)

    IEEE Trans. Multimedia (IEEE), 19(8), 1881-1893, August 2017   [Refereed]

     Abstract

    Finding overlapping communities in multimedia social networks is an interesting and important problem in data mining and recommender systems. However, existing overlapping community discovery methods based on swarm intelligence often generate overlapping community structures with superfluous small communities. To deal with this problem, an efficient algorithm (LEPSO) is proposed for overlapping community discovery, based on line graph theory, ensemble learning, and particle swarm optimization (PSO). Specifically, a discrete PSO, consisting of an encoding scheme with ordered neighbors and a particle updating strategy with ensemble clustering, is devised to improve the ability to find communities hidden in social networks. A postprocessing strategy is then presented for merging the finer-grained and suboptimal overlapping communities. Experiments on real-world and synthetic datasets show that our approach is superior in terms of robustness, effectiveness, and automatic determination of the number of clusters, and that it discovers overlapping communities of better quality than those computed by state-of-the-art algorithms for overlapping community detection. (The generic neighbour-based decoding step behind such encodings is sketched at the end of this paper list.)

  • Emotional voice conversion using neural networks with arbitrary scales F0 based on wavelet transform

    Zhaojie Luo, Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki (Role: Corresponding author)

    EURASIP Journal on Audio, Speech, and Music Processing (SpringerOpen), 2017(18), 1-13, August 2017   [Refereed]

     Abstract

    An artificial neural network is an important model for training features of voice conversion (VC) tasks. Typically, neural networks (NNs) are very effective in processing nonlinear features, such as Mel Cepstral Coefficients (MCC), which represent the spectrum features. However, a simple representation of fundamental frequency (F0) is not enough for NNs to deal with emotional voice VC. This is because the time sequence of F0 for an emotional voice changes drastically. Therefore, in our previous method, we used the continuous wavelet transform (CWT) to decompose F0 into 30 discrete scales, each separated by one third of an octave, which can be trained by NNs for prosody modeling in emotional VC. In this study, we propose the arbitrary scales CWT (AS-CWT) method to systematically capture F0 features of different temporal scales, which can represent different prosodic levels ranging from micro-prosody to sentence levels. Meanwhile, the proposed method uses deep belief networks (DBNs) to pre-train the NNs that then convert spectral features. By utilizing these approaches, the proposed method can change the spectrum and the F0 for an emotional voice simultaneously as well as outperform other state-of-the-art methods in terms of emotional VC.

    DOI

  • Facial Expression Recognition with deep age.

    Zhaojie Luo, Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki

    IEEE ICME (IEEE Computer Society)   2017   [Refereed]

    DOI

  • Facial Expression Recognition with deep age.

    Zhaojie Luo, Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki

    The Second Workshop on Human Identification in Multimedia, 657-662, 2017   [Refereed]

    DOI

  • Expression Recognition with Ri-HOG Cascade

    Jinhui Chen, Zhaojie Luo, Tetsuya Takiguchi, Yasuo Ariki

    ACCV 2016 Workshops (Springer International Publishing), 10118(3), 517-530, 2017   [Refereed]

     Abstract

    This paper presents a novel classification framework derived from AdaBoost to classify facial expressions. The proposed framework adopts rotation-reversal invariant HOG as features. The framework is implemented by configuring the area under the ROC curve (AUC) of the HOG-based weak classifiers, which makes it a discriminative classification framework. The proposed classification framework is evaluated with two very popular and representative public databases, MMI and AFEW, and outperforms the state-of-the-art methods. (A sketch of AUC-based weak-classifier selection appears at the end of this paper list.)

    DOI

  • Emotional Voice Conversion with Adaptive Scales F0 based on Wavelet Transform using Limited Amount of Emotional Data

    Zhaojie Luo, Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki

    Interspeech, 3399-3403, 2017   [Refereed]

  • Multithreading Cascade of SURF for Facial Expression Recognition

    Jinhui Chen, Zhaojie Luo, Tetsuya Takiguchi, Yasuo Ariki

    EURASIP Journal on Image and Video Processing (SpringerOpen), 2016(1), Article 37, October 2016   [Refereed]

    DOI

  • Emotional Voice Conversion Using Neural Networks with Different Temporal Scales of F0 based on Wavelet Transform.

    Zhaojie Luo, Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki

    9th ISCA Speech Synthesis Workshop (ISCA), 140-145, 2016   [Refereed]

    DOI

  • Matrix Variate Distribution-Induced Sparse Representation for Robust Image Classification

    Jinhui Chen

    IEEE Transactions on Neural Networks and Learning Systems (IEEE), 26(10), 2291-2300, October 2015   [Refereed]

     Abstract

    Sparse representation learning has been successfully applied to image classification, representing a given image as a linear combination of an over-complete dictionary. The classification result depends on the reconstruction residuals. Normally, the images are stretched into vectors for convenience, and the representation residuals are characterized by the ℓ2-norm or ℓ1-norm, which in effect assumes that the elements of the residuals are independent and identically distributed variables. However, this hypothesis is hard to satisfy for structural errors such as illumination changes and occlusions. In this paper, we represent the image data in their intrinsic matrix form rather than as concatenated vectors. The representation residual is considered as a matrix variate following the matrix elliptically contoured distribution, which is robust to dependent errors and has long-tailed regions to fit outliers. We then seek the maximum a posteriori probability estimation solution of the matrix-based optimization problem under sparse regularization. An alternating direction method of multipliers (ADMM) is derived to solve the resulting optimization problem, and its convergence is proven theoretically. Experimental results demonstrate that the proposed method is more effective than state-of-the-art methods when dealing with structural errors. (The vector-form sparse representation objective that this work generalises is written out at the end of this paper list.)

  • A Robust SVM Classification Framework Using PSM for Multi-class Recognition

    CHEN Jinhui, TAKIGUCHI Tetsuya, ARIKI Yasuo

    EURASIP Journal on Image and Video Processing (Springer International Publishing), 2015(1), 1-12, March 2015   [Refereed]

     Abstract

    Our research focuses on the question of classifiers that are capable of processing images rapidly and accurately without having to rely on a large-scale dataset, thus presenting a robust classification framework for both facial expression recognition (FER) and object recognition. The framework is based on support vector machines (SVMs) and employs three key approaches to enhance its robustness. First, it uses the perturbed subspace method (PSM) to extend the range of sample space for task sample training, which is an effective way to improve the robustness of a training system. Second, the framework adopts Speeded Up Robust Features (SURF) as features, which is more suitable for dealing with real-time situations. Third, it introduces region attributes to evaluate and revise the classification results based on SVMs. In this way, the classifying ability of SVMs can be improved. Combining these approaches, the proposed method has the following beneficial contributions. First, the efficiency of SVMs can be improved. Experiments show that the proposed approach is capable of reducing the number of samples effectively, resulting in an obvious reduction in training time. Second, the recognition accuracy is comparable to that of state-of-the-art algorithms. Third, its versatility is excellent, allowing it to be applied not only to object recognition but also FER.

    DOI

  • Rotation-invariant Histograms of Oriented Gradients for Local Patch Robust Representation

    CHEN Jinhui, LUO Zhaojie, TAKIGUCHI Tetsuya, ARIKI Yasuo

    Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) 2015 (IEEE), 196-199, 2015   [Refereed]

    DOI

  • Content-based Image Retrieval Using Rotation-invariant Histograms of Oriented Gradients

    CHEN Jinhui, NAKASHIKA Toru, ARIKI Yasuo

    ACM ICMR (Association for Computing Machinery), 2015   [Refereed]

     Abstract

    Our research focuses on the question of feature descriptors for robust and effective computing, proposing a novel feature representation method, namely rotation-invariant histograms of oriented gradients (Ri-HOG), for image retrieval. Most existing HOG techniques are computed on a dense grid of uniformly spaced cells and use overlapping local contrast of rectangular blocks for normalization. Instead, we adopt annular spatial bins as cells and apply the radial gradient to attain gradient binning invariance for feature extraction. In this way, the method significantly enhances HOG with regard to rotation-invariant ability and feature description accuracy. In experiments, the proposed method is evaluated on the Corel-5k and Corel-10k datasets. The experimental results demonstrate that the proposed method is much more effective than many existing image feature descriptors for content-based image retrieval. (A toy implementation of the annular-bin, radial-gradient idea is sketched at the end of this paper list.)

    DOI

  • A Robust Learning Framework Using PSM and Ameliorated SVMs for Emotional Recognition

    Jinhui Chen, Yosuke Kitano, Yiting Li, Tetsuya Takiguchi, Yasuo Ariki

    ACCV 2014 (Springer-Verlag Berlin), 9009, 629-643, 2015   [Refereed]

     Abstract

    This paper proposes a novel machine-learning framework for facial-expression recognition, which is capable of processing images fast and accurately even without having to rely on a large-scale dataset. The framework is derived from Support Vector Machines (SVMs) but distinguishes itself in three key technical points. First, the measure of the samples normalization is based on the Perturbed Subspace Method (PSM), which is an effective way to improve the robustness of a training system. Second, the framework adopts SURF (Speeded Up Robust Features) as features, which is more suitable for dealing with real-time situations. Third, we use region attributes to revise incorrectly detected visual features (described by invisible image attributes at segmented regions of the image). Combining these approaches, the efficiency of machine learning can be improved. Experiments show that the proposed approach is capable of reducing the number of samples effectively, resulting in an obvious reduction in training time.

    DOI

  • Multithreading AdaBoost Framework for Object Recognition

    Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki

    2015 IEEE International Conference on Image Processing (ICIP) (IEEE), 1235-1239, December 2015   [Refereed]

     Abstract

    Our research focuses on effective feature description and robust classifier techniques, proposing a novel learning framework that is capable of recognizing multiclass objects simultaneously and accurately. The framework adopts rotation-invariant histograms of oriented gradients (Ri-HOG) as feature descriptors. Most existing HOG techniques are computed on a dense grid of uniformly spaced cells and use overlapping local contrast of rectangular blocks for normalization. Instead, we adopt annular spatial bins as cells and apply the radial gradient to attain gradient binning invariance for feature extraction. In this way, the method significantly enhances HOG with regard to rotation-invariant ability and feature description accuracy. The classifier is derived from the AdaBoost algorithm, but it is ameliorated and implemented through non-interfering boosting channels, which are respectively built to train weak classifiers for each object category. In this way, the boosting cascade allows the weak classifiers to be trained to fit complex distributions. The proposed method is validated on the PASCAL VOC 2007 database and achieves state-of-the-art performance. (A sketch of per-category boosting channels trained in parallel appears at the end of this paper list.)

    DOI

  • Facial Expression Recognition with Multithreaded Cascade of Rotation-invariant HOG

    CHEN Jinhui, TAKIGUCHI Tetsuya, ARIKI Yasuo

    IEEE International Conference on Affective Computing and Intelligent Interaction (ACII 2015) (IEEE), 636-642, 2015   [Refereed]

     Abstract

    We propose a novel and general framework, named the multithreading cascade of rotation-invariant histograms of oriented gradients (McRiHOG), for facial expression recognition (FER). In this paper, we attempt to solve two problems concerning high-quality local feature descriptors and a robust classification algorithm for FER. The first solution is to adopt annular-spatial-bin HOG (Histograms of Oriented Gradients) descriptors to describe local patches, which significantly enhances the descriptors with regard to rotation-invariant ability and feature description accuracy. The second is to use a novel multithreading cascade to learn multiclass data simultaneously. The multithreading cascade is implemented through non-interfering boosting channels, which are respectively built to train weak classifiers for each expression. The superiority of McRiHOG over current state-of-the-art methods is clearly demonstrated by evaluation experiments on three popular public databases: CK+, MMI, and AFEW.

    DOI

  • Novel Continuous-multi-class Cascade for Real-Time Emotional Recognition

    CHEN Jinhui, TAKIGUCHI Tetsuya, ARIKI Yasuo

    Asian Conference on Computer Vision (ACCV'14), Singapore   November 2014   [Refereed]

  • A Robust Learning Algorithm Based on SURF and PSM for Facial Expression Recognition

    Jinhui Chen, Xiaoyan Lin, Tetsuya Takiguchi, Yasuo Ariki

    2014 12th International Conference on Signal Processing (ICSP) (IEEE), 1352-1357, 2014   [Refereed]

     Abstract

    This paper proposes a novel machine-learning framework for facial-expression recognition, which is capable of processing images fast and accurately even without having to rely on a large-scale dataset. The framework is derived from Support Vector Machines (SVMs) but distinguishes itself in three key ways. First, the measure of the samples normalization is based on the Perturbed Subspace Method (PSM), which is an effective way to improve the robustness of a training system. Second, the framework adopts SURF (Speeded Up Robust Features) as features, which is more suitable for dealing with real-time situations. Third, we use region attributes to revise incorrectly detected visual features (described by invisible image attributes at segmented regions of the image). Combining these approaches, the proposed method has the following beneficial properties. First, the efficiency of machine learning can be improved. Experiments show that the proposed approach is capable of reducing the number of samples effectively, resulting in an obvious reduction in training time. Second, the recognition accuracy is comparable to state-of-the-art algorithms.

    DOI

  • Robust facial expressions recognition using 3D average face and ameliorated adaboost

    Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki

    The 21st ACM International Conference on Multimedia (ACM MM)   December 2013   [Refereed]

     Abstract

    One of the most crucial techniques associated with computer vision is facial recognition, especially the automatic estimation of facial expressions. In real-time facial expression recognition, however, when a face turns sideways, expressional feature extraction becomes difficult as the camera view changes, and recognition accuracy degrades significantly. Many conventional methods are therefore based on static images or limited to situations in which the face is viewed from the front. In this paper, a method that uses Look-Up-Table (LUT) AdaBoost combined with a three-dimensional average face is proposed to solve this problem. To evaluate the proposed method, experiments comparing it with a conventional method were carried out. These approaches show promising results and very good success rates. This paper also covers several methods that can improve results by making the system more robust.

    DOI
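
The wavelet-based F0 modelling used in several of the voice-conversion papers above (and, for price series, in the oil-price forecasting paper) decomposes a one-dimensional contour into continuous wavelet transform (CWT) coefficients at scales spaced one third of an octave apart. The following minimal NumPy sketch illustrates that kind of decomposition on a toy log-F0 contour; the Ricker wavelet, the base scale s0, and the synthetic contour are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def ricker(points, a):
        # Ricker ("Mexican hat") wavelet of width a, sampled at `points` positions.
        t = np.arange(points) - (points - 1) / 2.0
        amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
        return amp * (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

    def cwt_third_octave(signal, s0=2.0, n_scales=30):
        # 30 scales separated by one third of an octave: s_k = s0 * 2**(k/3).
        scales = s0 * 2.0 ** (np.arange(n_scales) / 3.0)
        coeffs = np.empty((n_scales, signal.size))
        for k, a in enumerate(scales):
            w = ricker(min(int(10 * a), signal.size), a)
            coeffs[k] = np.convolve(signal, w, mode="same")
        return scales, coeffs

    # Toy log-F0 contour (already interpolated over unvoiced frames and z-normalised).
    frames = np.linspace(0.0, 1.0, 400)
    log_f0 = 0.3 * np.sin(2 * np.pi * 3 * frames) + 0.05 * np.random.randn(400)
    scales, coeffs = cwt_third_octave(log_f0)
    print(coeffs.shape)  # (30, 400): one row of prosody features per temporal scale

In the cited papers these per-scale rows (and, in the XWT variant, cross-combinations of them) are the prosody features passed to the conversion network.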
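
The Ri-HOG descriptor in "Content-based Image Retrieval Using Rotation-invariant Histograms of Oriented Gradients" (and the related recognition papers) replaces rectangular HOG cells with annular spatial bins and measures gradient orientation relative to the radial direction, so that rotating the patch leaves the histogram essentially unchanged. Below is a rough NumPy sketch of that idea only; the ring and orientation bin counts and the plain L2 normalisation are assumptions for illustration.

    import numpy as np

    def ri_hog(patch, n_rings=4, n_orient=9):
        # Annular spatial bins + gradient angles measured relative to the
        # radial direction from the patch centre (rotation-invariant binning).
        gy, gx = np.gradient(patch.astype(float))
        h, w = patch.shape
        yy, xx = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        radius = np.hypot(yy - cy, xx - cx)
        radial_dir = np.arctan2(yy - cy, xx - cx)        # centre-to-pixel direction
        rel_dir = (np.arctan2(gy, gx) - radial_dir) % (2 * np.pi)
        mag = np.hypot(gx, gy)

        ring_idx = np.minimum((radius / (radius.max() + 1e-9) * n_rings).astype(int), n_rings - 1)
        ori_idx = np.minimum((rel_dir / (2 * np.pi) * n_orient).astype(int), n_orient - 1)

        hist = np.zeros((n_rings, n_orient))
        np.add.at(hist, (ring_idx, ori_idx), mag)        # weight votes by gradient magnitude
        hist = hist.ravel()
        return hist / (np.linalg.norm(hist) + 1e-9)

    patch = np.random.rand(64, 64)
    print(ri_hog(patch).shape)  # (36,) = 4 annular rings x 9 orientation bins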
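
The boosting-based expression papers above ("Expression Recognition with Ri-HOG Cascade", "Rotation-reversal invariant HOG cascade for facial expression recognition") configure weak classifiers via the area under the ROC curve rather than the usual weighted error. A minimal sketch of ranking single-feature weak learners by weighted AUC is given below, using scikit-learn's roc_auc_score; the feature pool and the threshold-free scoring are assumed for illustration.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 36))          # e.g. Ri-HOG descriptors
    y = rng.integers(0, 2, size=500)        # one expression vs. the rest
    sample_w = np.full(500, 1.0 / 500)      # current boosting weights over samples

    # Score each candidate weak learner (here: a single feature dimension used
    # directly as a score) by weighted AUC, then pick the best one for this round.
    aucs = np.array([roc_auc_score(y, X[:, j], sample_weight=sample_w)
                     for j in range(X.shape[1])])
    aucs = np.maximum(aucs, 1.0 - aucs)     # an anti-correlated feature is equally useful
    best = int(np.argmax(aucs))
    print(f"best weak feature: {best}, AUC = {aucs[best]:.3f}")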
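
"Multithreading AdaBoost Framework for Object Recognition" and the McRiHOG paper train one "non-interfering boosting channel" per object or expression category. One plausible reading of that design, sketched below, is a set of one-vs-rest boosted classifiers trained in parallel; the scikit-learn AdaBoostClassifier base learner, the channel count, and the arg-max fusion are assumptions, not the papers' exact cascade.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor
    from sklearn.ensemble import AdaBoostClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(600, 36))
    y = rng.integers(0, 6, size=600)        # six categories

    def train_channel(cls):
        # One boosting channel: this category against all the others.
        clf = AdaBoostClassifier(n_estimators=50, random_state=0)
        clf.fit(X, (y == cls).astype(int))
        return cls, clf

    with ThreadPoolExecutor(max_workers=6) as pool:
        channels = dict(pool.map(train_channel, range(6)))

    # Fuse the channels by taking the most confident positive score.
    scores = np.stack([channels[c].predict_proba(X)[:, 1] for c in range(6)], axis=1)
    print("training accuracy:", float((scores.argmax(axis=1) == y).mean()))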
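
Several entries above ("A Robust SVM Classification Framework Using PSM for Multi-class Recognition", the ACCV 2014 and ICSP 2014 papers) enlarge the training sample space with the Perturbed Subspace Method (PSM) before fitting SVMs. The exact perturbation scheme is not spelled out in the abstracts, so the sketch below only shows the general pattern under an assumed scheme: jitter each class's samples along its top principal directions, then train an ordinary scikit-learn SVC.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 64))          # e.g. SURF-based feature vectors
    y = rng.integers(0, 4, size=200)

    def perturb_along_subspace(Xc, n_dirs=5, sigma=0.05, copies=3):
        # Assumed PSM-like step: add small noise along the class's principal directions.
        basis = PCA(n_components=n_dirs).fit(Xc).components_      # (n_dirs, dim)
        reps = [Xc + sigma * rng.normal(size=(len(Xc), n_dirs)) @ basis
                for _ in range(copies)]
        return np.vstack(reps)

    X_parts, y_parts = [X], [y]
    for c in np.unique(y):
        Xp = perturb_along_subspace(X[y == c])
        X_parts.append(Xp)
        y_parts.append(np.full(len(Xp), c))

    clf = SVC(kernel="rbf").fit(np.vstack(X_parts), np.concatenate(y_parts))
    print("augmented training set size:", sum(len(p) for p in X_parts))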
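
"Matrix Variate Distribution-Induced Sparse Representation for Robust Image Classification" keeps images in matrix form and models the representation residual with a matrix elliptically contoured distribution solved by ADMM. For context, the standard vector-form sparse representation classifier that it generalises can be written as follows (this is the textbook SRC formulation, not the paper's matrix-variate objective):

    % Classic sparse representation classification (SRC), vector form.
    \hat{x} \;=\; \arg\min_{x} \; \tfrac{1}{2}\,\lVert y - D x \rVert_2^2 \;+\; \lambda\,\lVert x \rVert_1,
    \qquad
    \operatorname{class}(y) \;=\; \arg\min_{c} \; \lVert y - D_c\,\hat{x}_c \rVert_2 .
    % The cited paper instead keeps the image Y as a matrix, treats the residual
    % E = Y - D(x) as a matrix variate with an elliptically contoured distribution,
    % and solves the resulting MAP estimation problem with ADMM.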
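
The LEPSO abstract above describes a discrete particle swarm whose particles encode community structure through each node's ordered neighbours. The generic neighbour-pointer ("locus-based") decoding step that such encodings rely on is sketched below with networkx; the particle update, ensemble clustering, and overlap handling of LEPSO itself are not reproduced here.

    import random
    import networkx as nx

    G = nx.karate_club_graph()              # toy social graph

    # A particle: every node's gene points at one of its neighbours.
    random.seed(0)
    particle = {v: random.choice(list(G.neighbors(v))) for v in G.nodes}

    # Decoding: nodes linked through their genes fall into the same community.
    H = nx.Graph(list(particle.items()))
    communities = list(nx.connected_components(H))
    print(len(communities), "communities; first:", sorted(communities[0]))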


Misc

  • Generation of Anime Face Images with StyleGAN2

    高橋 紅葉, 陳 金輝, 肖業貴

    MIRU 2023   July 2023

  • Fine-grained Image Classification of Dog Breeds Based on CNNs

    無川 風韻, 滝口 哲也, 陳 金輝

    MIRU 2021   July 2022

  • Adaptive Cross-domain Image Generation with GANs Using a Small Amount of Data

    大城 明津輝, 陳 訓泉, 滝口 哲也, 陳 金輝

    MIRU 2022   July 2022

  • Fine-grained Classification of Food Images Using NTS-Net

    大城明津輝, 陳金輝

    MIRU 2021   July 2021

Lectures and Oral Presentations

  • Generative Neural Networks: Risks, Opportunities and Challenges

    Invited talk at the University of York (via Zoom)   November 2022

  • Research Case Studies on Social Applications of AI

    陳 金輝

    Miyoshi City Public-Private Co-creation DX Consortium   October 2022

  • Recognition, Understanding, and Creation: AI Is Busy

    陳金輝

    Prefectural University of Hiroshima Open Lecture   September 3, 2022

  • AI: I Know You, from Your Face to Your Feet

    陳金輝

    Hiroshima Prefecture High School-University Collaboration Open Lecture   July 31, 2021

  • A Key to the Orientation-invariant Representation on the Image Feature: Polar Model

    陳金輝

    Invited talk at Southern University of Science and Technology   November 2019

  • Spatial-invariant Features for Robust Image Representations.

    陳金輝

    Invited talk at Fujian Normal University   January 2019

  • Expression Recognition with Ri-HOG Cascade

    CHEN Jinhui

    Asian Conference on Computer Vision   November 2016   (Taipei International Convention Center)   ACCV

  • SIFT Boosting for Handwriting Recognition

    CHEN Jinhui

    ACCV 2016   November 2016   (Taipei International Convention Center)   ACCV

  • SIFT Boosting for Handwriting Recognition

    CHEN Jinhui

    MIRU 2016   August 2016   (Act City Hamamatsu)   MIRU

  • SIFT Boosting for Handwriting Recognition

    CHEN Jinhui, KAMIHIGASHI Takashi, ITOH Munehiko, TAKATSUKI Yasuo, TAKIGUCHI Tetsuya, ARIKI Yasuo

    Meeting on Image Recognition and Understanding (MIRU)   August 2016

  • Emotional Voice Conversion Using Neural Networks with Different Temporal Scales of F0 based on Wavelet Transform

    CHEN Jinhui

    9th ISCA Speech Synthesis Workshop   August 2016   (Sunnyvale, CA)   9th ISCA Speech Synthesis Workshop

  • (Session Chair) Session 4F: Image Processing & Pattern Recognition 4

    CHEN Jinhui

    15th IEEE/ACIS International Conference on Computer and Information Science   June 2016   (Okayama Convention Center)   IEEE/ACIS

  • (Moderator)

    CHEN Jinhui

    15th IEEE/ACIS International Conference on Computer and Information Science   June 2016   (Okayama Convention Center)   IEEE/ACIS

  • (Session Chair) Session 8C: Software Specification Techniques 3

    CHEN Jinhui

    15th IEEE/ACIS International Conference on Computer and Information Science   June 2016   (Okayama Convention Center)

  • Robust Object Recognition Using Rotation-invariant

    CHEN Jinhui

    2016 Kobe University Core-Team Workshop on Cyber-Physical System for Smarter World (CPS-SW 2016)   March 2016   (Kobe University)   Kobe University

  • Research on an Image Processing Framework Based on Local Image Features and Classifiers

    CHEN Jinhui

    RIEB Seminar   February 2016   (Kobe University)   Research Institute for Economics and Business Administration, Kobe University


Grants-in-Aid for Scientific Research (KAKENHI)

  • Empirical Research on Individual- and Group-level Creativity by AI Using Large-scale Biometric Data

    April 2019 - March 2024

    Grant-in-Aid for Scientific Research (A)   Co-investigator

Open Lectures, Peer Review for Academic Journals, Media Appearances, etc.

  • Part-time Lecturer

    April 1, 2023 - March 23, 2024

    Hiroshima Prefectural Public University Corporation

    Part-time lecturer in charge of two courses offered at the university, "Data Structures and Algorithms" and "Data Mining".