3-4 years of experience in data engineering (distributed queue systems, databases, web crawling, CI/CD, cloud).
2 years of experience in data science (data analysis, machine learning, and deep learning).
• Developed a Python API (Shioaji) for stock/option/futures order placement and account management.
• Developed a C# API (Shioaji) for stock/option/futures order placement and account management, and set up CI/CD with GitHub Actions.
• Deployed a simulated-trading test system with Docker Swarm.
• Collected distributed-system logs with ELK, Grafana, and Prometheus (13 GB of log data daily).
• Monitored the distributed system with chatbot alerting.
• Developed a trade-by-trade and odd-lot trading API.
• Analyzed travel data and built a machine learning model, estimated to increase orders (revenue) by 3%.
• Maintained and developed a distributed ETL queuing system running on 20 machines.
• Optimized the ETL system, reducing execution time by more than 50%.
• Developed a new product crawler, increasing product volume by 1.5%.
• Built BI analysis charts for other departments.
Analyzed G7 financial data: model validation and parameter estimation with regression models (SUR, MLE, bootstrapping), comparing single-equation estimators and confidence intervals against system-of-equations estimates.
Calculus, Linear Algebra, Statistics.
FinMind Open Data API
Open-source financial data: more than 50 datasets, served through an API.
More than 1,000 registered users.
1,400 stars on GitHub.
Automatic daily updates via Docker Swarm and a distributed queue system with RabbitMQ and Celery (8 cloud machines).
More than 1 billion records in total, with 10 million streaming records per day.
Highly imbalanced data (1000:1 ratio), 10 GB dataset, with 50% missing values. More than 4,000 variables, but I built models with only 50 features.
Post-competition analysis; ranked in the top 10%.
Time-series problem: built models to predict sales 48 days ahead.
Post-competition analysis; ranked in the top 8%.
Time-series problem with eighty million records: built models to predict inventory demand 2 weeks ahead.
Live competition; ranked in the top 25%.
Predicted which products a consumer will purchase again.
Created a Python package that converts the Taiwan Railway booking captcha to text.
The model is a CNN built with Keras.
1. RabbitMQ, Celery, and Flower.
2. 8-node (cloud) distributed queue system for web crawling.
3. Deployed with Docker.
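As a rough illustration of the producer/worker pattern behind this queue system (the real stack is RabbitMQ + Celery across 8 cloud nodes; this stdlib-only sketch, with hypothetical stock IDs, only mimics the shape):

```python
import queue
import threading

# Hypothetical in-process stand-in for the RabbitMQ broker.
task_queue: "queue.Queue[str]" = queue.Queue()
results: list = []
lock = threading.Lock()

def worker() -> None:
    # Each worker pulls crawl jobs until the queue is drained,
    # the way a Celery worker consumes tasks from its broker.
    while True:
        try:
            stock_id = task_queue.get_nowait()
        except queue.Empty:
            return
        with lock:
            results.append(f"crawled {stock_id}")
        task_queue.task_done()

def run(stock_ids: list, n_workers: int = 4) -> list:
    # Producer side: enqueue all jobs, then fan out to workers.
    for sid in stock_ids:
        task_queue.put(sid)
    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

In the real deployment the queue outlives any single machine, so workers can be added or lost without dropping jobs; the in-memory queue above is only for illustration.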
1. MySQL (RDBMS).
2. Redis (NoSQL).
3. DolphinDB (TSDB).
1. Python - requests, BeautifulSoup, lxml, Selenium.
2. Automatic captcha recognition with a CNN model.
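A minimal sketch of the parsing step in such a crawler (the real crawlers use requests/BeautifulSoup/lxml/Selenium; this stdlib-only version shows the extraction pattern on a hypothetical price table):

```python
from html.parser import HTMLParser

class PriceTableParser(HTMLParser):
    """Collects the text of every <td> cell, mimicking what a
    BeautifulSoup select("td") pass over a fetched page would yield."""

    def __init__(self) -> None:
        super().__init__()
        self.in_td = False
        self.cells: list = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False

    def handle_data(self, data):
        # Keep only non-empty text found inside table cells.
        if self.in_td and data.strip():
            self.cells.append(data.strip())

def extract_cells(html: str) -> list:
    parser = PriceTableParser()
    parser.feed(html)
    return parser.cells
```

In production the HTML would come from an HTTP request (or a Selenium-rendered page); the parsing logic stays the same.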
1. Created automated tests and automated deployment for the FinMind team.
2. Used GitLab Runner.
3. CD for automatically publishing the Python package.
4. CD for automatically updating and deploying new versions of the service.
1. Collected distributed-system logs with ELK.
2. Monitored user usage, request latency, and request count with Prometheus and Grafana.
3. Alerting via a Telegram bot.
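A sketch of the alert hook, assuming a placeholder bot token, chat ID, and latency threshold (none of these are FinMind's actual values). Telegram's Bot API accepts messages via an HTTPS POST to the sendMessage method; the request is only built here, not sent, so the sketch stays runnable offline:

```python
import json
import urllib.request
from typing import Optional

def build_telegram_alert(token: str, chat_id: str, text: str) -> urllib.request.Request:
    # Telegram Bot API: POST https://api.telegram.org/bot<token>/sendMessage
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    payload = json.dumps({"chat_id": chat_id, "text": text}).encode()
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

def check_latency(p99_ms: float, threshold_ms: float = 500.0) -> Optional[str]:
    # Mirrors a Prometheus-style alert rule: fire when p99 request
    # latency crosses the (illustrative) threshold.
    if p99_ms > threshold_ms:
        return f"ALERT: p99 latency {p99_ms:.0f} ms > {threshold_ms:.0f} ms"
    return None
```

To actually send the alert, the request would be passed to `urllib.request.urlopen`.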
Machine learning - XGBoost, random forest, SVM. Statistics - OLS, LASSO.
1. Python - numpy, pandas, sklearn.
2. R - parallel, dplyr, data.table, mice.
3. Frontend - Vue.
4. Backend - Python.
Major: Mathematics and Statistics.
R, Python. Basic English and proficient Chinese.