趙高逸 (Carl)

Data Scientist / Machine Learning Engineer

Machine learning and deep learning engineer with 4+ years of experience. My research mainly focuses on:

1) Speech emotion recognition

2) Data augmentation (GAN, AAE, VAE)

3) Unsupervised domain adaptation

4) NLP

A self-motivated learner with solid insight-visualization skills.




Email: [email protected]

Birth: 1995/05/20 

TEL: +886 981 899 719

Education


National Tsing Hua University | 2017.6 - 2019.8

Master's in Electrical Engineering

Lab: Behavioral Informatics & Interaction Computation Lab (BIIC)
Advisor: Chi-Chun Lee


National Taiwan Ocean University | 2013.8 - 2017.6 

Bachelor's in Communication Engineering


Programming Skills

Python
PyTorch
MSSQL (Microsoft SQL Server)

TA Experience

Linear Algebra, 2018

Probability, 2019


Competition 


INTERSPEECH 2019 ComParE Challenge, Baby Sounds sub-challenge

Research and Publications

Generating fMRI-Enriched Acoustic Vectors using a Cross-Modality Adversarial Network for Emotion Recognition

ACM ICMI 2018 | Chao, G. Y., Chang, C. M., Li, J. L., Wu, Y. T., & Lee, C. C.

Used Gaussian mixture regression and CycleGAN to reconstruct fMRI (functional magnetic resonance imaging) features from their corresponding vocal emotion stimuli.
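As a rough sketch of the Gaussian mixture regression step: each component of a joint GMM over (x, y) contributes a responsibility-weighted conditional mean of y given x. The 1-D component parameters and query value below are made up for illustration, not the actual acoustic/fMRI features.

```python
import math

# Toy 1-D Gaussian mixture regression (GMR). Each component is
# (weight, mu_x, mu_y, var_x, cov_xy); all numbers are hypothetical.

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gmr_predict(x, components):
    """E[y | x]: responsibility-weighted sum of conditional means."""
    resp = [w * gauss_pdf(x, mx, vx) for (w, mx, my, vx, cxy) in components]
    total = sum(resp)
    return sum((r / total) * (my + (cxy / vx) * (x - mx))
               for r, (w, mx, my, vx, cxy) in zip(resp, components))

# Two hypothetical mixture components (e.g. low/high-arousal regimes).
components = [
    (0.6, 0.0, 1.0, 1.0, 0.5),
    (0.4, 4.0, 5.0, 1.0, 0.5),
]
print(gmr_predict(1.0, components))
```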

Enforcing Semantic Consistency for Cross Corpus Valence Regression from Speech using Adversarial Discrepancy Learning

ISCA Interspeech 2019 | Gao-Yi Chao, Yun-Shao Lin, Chun-Min Chang, and Chi-Chun Lee

Addressed domain adaptation in the specific case of valence prediction from emotional speech. We proposed a method based on adversarial discrepancy learning.

Using Attention Networks and Adversarial Augmentation for Styrian Dialect Continuous Sleepiness and Baby Sound Recognition

ISCA Interspeech 2019 | Sung-Lin Yeh, Gao-Yi Chao, Bo-Hao Su, co-authors from NVIDIA, and Chi-Chun Lee

Presented and evaluated extensive attention-based networks with data augmentation methods for the INTERSPEECH 2019 ComParE Challenge. The proposed techniques were evaluated on three different sub-challenges. Final results show that data augmentation, together with fusing attention-network models with a conventional support vector machine, benefits test-set robustness.

Result: 1st place in both the Styrian Dialects and Baby Sounds sub-challenges
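The model-fusion step can be sketched as a simple score-level average of class posteriors. The probability vectors, class labels, and fusion weight below are hypothetical stand-ins for the attention-network and SVM outputs, not the exact challenge configuration.

```python
# Score-level fusion of two classifiers' posteriors over the same classes.
# All numbers here are made up for illustration.

def fuse_posteriors(p_attn, p_svm, alpha=0.5):
    """Weighted average of two posterior vectors (alpha is an assumed weight)."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(p_attn, p_svm)]

def predict(p_attn, p_svm, labels, alpha=0.5):
    """Fuse, then pick the label with the highest fused score."""
    fused = fuse_posteriors(p_attn, p_svm, alpha)
    return labels[max(range(len(fused)), key=fused.__getitem__)]

labels = ["canonical", "crying", "laughing"]   # hypothetical classes
p_attn = [0.50, 0.30, 0.20]                    # attention network (made up)
p_svm = [0.20, 0.65, 0.15]                     # SVM (made up)
print(predict(p_attn, p_svm, labels))          # -> crying
```

With equal weights, the fused scores are [0.35, 0.475, 0.175], so the SVM's stronger confidence wins the vote.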

Adversarial General Voice Conversion 

Mainly inspired by the paper "Voice Conversion from Unaligned Corpora using Variational Autoencoding Wasserstein Generative Adversarial Networks"; we replaced the WGAN objective with WGAN-GP (WGAN with gradient penalty).

Github link: https://github.com/w102060018w/Adversarial-General-Voice-Conversion

Master’s thesis: Individual Speech Emotion Recognition by Mixture of Experts 

Considered individual differences by means of the Mahalanobis distance, using a Mixture of Experts model to choose suitable training subsets for each speaker.
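A minimal sketch of the subset-selection idea, assuming a diagonal covariance and made-up 2-D feature values (the thesis operates on real speech features, not these toy numbers):

```python
import math

# Mahalanobis distance between a speaker's feature vector and a group
# mean, with a diagonal covariance for simplicity.

def mahalanobis_diag(x, mean, var):
    """sqrt(sum((x_i - mu_i)^2 / sigma_i^2)) for diagonal covariance."""
    return math.sqrt(sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var)))

# Two candidate training subsets, each summarized by (mean, variance);
# both the subsets and the speaker vector are hypothetical.
subsets = {
    "subset_a": ([0.0, 0.0], [1.0, 1.0]),
    "subset_b": ([3.0, 3.0], [1.0, 1.0]),
}
speaker = [0.5, 0.2]

# Pick the subset whose distribution lies closest to the speaker.
closest = min(subsets, key=lambda k: mahalanobis_diag(speaker, *subsets[k]))
print(closest)  # -> subset_a
```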


Work Experience 

Synergies AI | Data Scientist, 2020.3 - Now

1. Developed a sales forecasting project for a chain restaurant.

2. Developed a user knowledge-tracing project.

3. Developed a text sentiment analysis system.