CakeResume Talent Search

Advanced search
On
4 to 6 years
6 to 10 years
10 to 15 years
More than 15 years
Avatar of 黃季承.
Past
Backend Engineer & DevOps @創業家兄弟 Kuobrothers Corp.
2022 ~ 2024
Senior Backend Engineer | DevOps | SRE
Within one month
黃季承 Backend Developer | DevOps [email protected] I have five years of e-commerce backend development experience and one year of DevOps operations experience, and have taken part in Scrum agile development for more than four years. On the backend I am mainly responsible for product feature development, back-office system development, and refactoring existing services. I worked on 生活市集's instant e-voucher feature, where I clarified details with partners and discussed integration approaches with the PM, and des…
AWS
CI/CD Drone
Cloudflare
Unemployed
Actively looking for a job
Full-time / Interested in working remotely
4 to 6 years
National Taipei University of Technology
Computer Science and Information Engineering
Avatar of 李佳謙.
Past
Marketing Manager @幫你優股份有限公司 BoniO Inc. / 閱讀優有限公司 TaaO Company Limited
2021 ~ Present
Marketing Manager
Within one month
李佳謙 CHIEN LI Marketing Manager / BoniO Inc. Marketing Strategy | Customer Growth Responsible for brand marketing, planning product sales strategies, and driving growth in brand membership. Familiar with the market, the subscription economy, and platform operations. A strategic thinker who begins with the end in mind, leading teams to reach operational goals effectively. Specialties: user and operations growth metrics analysis Operating Data Management ● Product market size and user…
WordPress
Google Analytics
Project Management
Unemployed
Actively looking for a job
Full-time / Interested in working remotely
4 to 6 years
淡江大學
Department of English
Avatar of the user.
Past
Senior Frontend Engineer @比房科技
2022 ~ 2024
Frontend developer.
Within one month
Frontend
Backend
Product
Unemployed
Actively looking for a job
Full-time / Interested in working remotely
4 to 6 years
暨南大學
Electrical Engineering
Avatar of the user.
Marketing Assistant Manager / KOL Radar Marketing Technology Business Unit @愛卡拉互動媒體股份有限公司
2021 ~ Present
Brand Project Planner, Online Marketing Planner, Digital Marketing Planner
Within one month
Google Analytics
Sales & Marketing
Photoshop
Employed
Actively looking for a job
Full-time / Interested in working remotely
4 to 6 years
臺北市立大學
Department of English Instruction
Avatar of the user.
Smart Manufacturing Full-Stack Developer @聯華電子股份有限公司
2022 ~ Present
AI Engineer, Machine Learning Engineer, Deep Learning Engineer, Image Algorithm Engineer, Data Scientist
Within one month
Python
Qt
Git
Employed
Actively looking for a job
Full-time / Interested in working remotely
4 to 6 years
元智大學
Department of Industrial Engineering and Management
Avatar of Sosuke Guo.
Past
Senior Frontend Engineer @辰凝有限公司
2022 ~ 2023
Front-End Developer
Within one month
Sosuke Guo has specialized in front-end web engineering for nearly five years, excels at building products from zero, and has experience building his own products with Vue + Golang + Python. Front-End Developer [email protected] Works - SocialPicMaker.com, a small tool site for making beautiful Twitter cards: just two steps, enter a URL and click download; offers black and white interface la…
vue.js
golang
Python
Unemployed
Actively looking for a job
Full-time / Interested in working remotely
4 to 6 years
Avatar of Patrick Hsu.
Algorithm Research & Development @適着三維科技股份有限公司 TG3D Studio Inc.
2021 ~ Present
Software Engineer
Within one month
Patrick Hsu AI Research & Development As a seasoned AI engineer with six years of experience, I specialize in computer vision, 3D body model reconstruction, and generative AI, with some knowledge of natural language processing (NLP). | New Taipei City, [email protected] Work Experience (6 years) Algorithm Research & Design • TG3D Studio May – Present A skilled engineer specializing in computer vision and generative AI with experience in developing and training AI models for digital fashion applications. Body AI: Virtual Try On Integrated cutting-edge technologies such as Stable Diffusion, ControlNet, and Prompt Engineering to create a sophisticated system for…
Python
AI & Machine Learning
Image Processing
Employed
Actively looking for a job
Full-time / Interested in working remotely
4 to 6 years
國立台灣大學
Graduate Institute of Bio-Industrial Mechatronics Engineering
Avatar of Jimmy Lu.
Past
Lead of Country Product Manager @Asus 華碩電腦股份有限公司
2022 ~ 2023
Business Development / Product Manager / Product Marketing/ Strategy Manager
Within one month
Jimmy Lu (呂正彥) Senior Product Manager [Consumer Electronics Expatriate PM/Sales/BD] Entrepreneurship: business development & management Leadership: flexible & efficient international/cross-functional organizing Target-oriented project lead & SOP consolidation, product lifecycle management Begin with the end in mind Go-to-market execution Taipei, Taiwan <> London, UK https://www.linkedin.com/in/itsjimmy/ [email protected] Work experience Senior Product Manager [Consumer NB & Gaming] • ASUSTeK Computer Indonesia Jul – Dec 2023 | Jakarta, Indonesia Key responsibilities & Achievements - #business management #business development #team leading #cross-functional organizing
Business Development Project Management
Cross-Functional Project Management
Product Life Cycle Management
Unemployed
Actively looking for a job
Full-time / Interested in working remotely
4 to 6 years
國立陽明交通大學(National Yang Ming Chiao Tung University)
Bachelor of Management, Management of Transportation and Logistics
Avatar of Ryan Po-Hsuan Chang.
Senior Full-Stack Engineer @誠諾工程技術股份有限公司
2023 ~ Present
Front-End / Back-End / Full Stack Web Developer
Within one month
張栢瑄 Ryan Po-Hsuan Chang has five years of development experience and specializes in building web systems with Vue + TypeScript and Laravel; he also has development experience with React and Python. He likes taking on new challenges, is not afraid of pitfalls or refactoring, and keeps sharpening his skills. Kaohsiung City, Taiwan https://ryanxuan930.github.io/ [email protected] Skills Frontend Nuxt (Vue 3) Next (React) Pinia TypeScript Tailwind CSS SCSS PrimeVue Next UI Backend…
Vue.js
JavaScript
Python
Employed
Actively looking for a job
Full-time / Remote work only
4 to 6 years
國立中山大學 National Sun Yat-Sen University
Interdisciplinary Bachelor's Degree Program in Humanities and Technology
Avatar of 楊晟.
Operations Engineer (DevOps) @愛盛娛樂科技有限公司
2019 ~ Present
Java Software Engineer
Within one month
楊晟 Operations Engineer (DevOps) New Taipei City, Taiwan Enjoys hunting for more elegant approaches in code and is keen on finding more efficient, more elegant solutions. Prefers finding a real solution over settling for a workaround. https://www.cakeresume.com/sam0324sam Work experience Operations Engineer (DevOps) • 愛盛娛樂科技有限公司 Jul – Present - fully remote - (portfolio) Developed RESTful APIs with Java Quarkus for the back…
Java
JavaScript
MySQL
Employed
Actively looking for a job
Full-time / Remote work only
4 to 6 years
National Kaohsiung First University of Science and Technology
Department of Computer and Communication Engineering

The lightest and fastest recruiting solution, the choice of hundreds of companies

Search resumes and proactively contact candidates to boost recruiting efficiency.

  • Browse all search results
  • Start an unlimited number of cold conversations every day
  • Search resumes that are open only to paying companies
  • View users' email addresses & phone numbers
Search tips
1
Search a precise keyword combination
senior backend php
If there are not enough search results, you can remove the less important keywords
2
Use quotes to search for an exact phrase
"business development"
3
Use the minus sign to eliminate results containing certain words
UI designer -UX
The free plan can only search public resumes.
Upgrade to the advanced plan to browse all search results (including tens of thousands of resumes that are public only on the CakeResume platform).

Definitions of workplace competency ratings

Professional skills
The professional abilities possessed in the field (e.g., familiar with SEO and able to use the relevant tools).
Problem solving
Able to perceive and analyze problems, and to devise plans that solve them effectively.
Adaptability
Able to respond calmly to unexpected events, and to adjust the relative priorities of projects, clients, and technologies at any time.
Communication
Conveys personal ideas effectively, and is willing to listen to others and give feedback.
Time management
Understands the priority of work items and uses time effectively to finish work on schedule.
Teamwork
Shows cohesion and a sense of responsibility to the team, and is willing to listen to others and proactively communicate and coordinate.
Leadership
Focuses on team development and effectively leads the team to take action and reach common goals.
Within one month
Senior Data Engineer at Paktor x M17 Entertainment Group | AWS x GCP x Azure Big Data Specialist | Data Architect
KKCompany
2023 ~ Present
Taiwan
Professional background
Current status
Employed
Job-search stage
Open to exploring new opportunities
Profession
Data Engineer, Backend Developer
Industry
Software
Years of experience
10 to 15 years
Management experience
Experience managing 1–5 people
Skills
Big Data
Data Engineering
ETL
AWS
GCP
Python
BigQuery
Data Warehouse
Data Pipeline
Java
Azure
SQL
Spark
Kafka
Spark Streaming
Scala
Redshift
HBase
SQL Server
AWS S3
AWS Lambda
MongoDB
Hadoop
Hadoop Distributed File System
AWS SQS
Azure Storage
MySQL
PostgreSQL
Postman for API
Snowflake
Languages
English
Professional
Chinese
Native or bilingual
Job preferences
Desired positions
Backend Engineer, Data Engineer, MLOps Engineer
Desired work type
Full-time
Desired locations
Taiwan, Singapore, Hong Kong
Remote work preference
Interested in working remotely
Freelance services
Education
School
National Taiwan University
Major
EMBA Programs, Business Administration, Accounting, Finance and International Business.

Chin-Hung (Wilson) Liu

I am a lead architect responsible for designing and implementing large-scale data pipelines for Lomotif and Paktor x 17LIVE, using GCP/AWS/Python/Scala in collaboration with data science and machine learning teams in Singapore and the Taiwan HQ, and I have worked with the Hadoop ecosystem (HDFS/HBase/Kafka) at JSpectrum in Hong Kong and Sydney.


With over 15 years of experience in designing and developing Java/Scala/Python-based applications for daily operations, I bring:

● At least 8 years of experience in data analysis, pipeline design and development, and tool building as a team member.

● In-depth knowledge of the Spark and Hadoop ecosystems, including Hadoop, HDFS, HBase, and more.
● Strong skills in designing and developing Big Data services on AWS and GCP.
● Extensive expertise in developing generic distributed systems, stream processing, machine learning pipelines, and continuous improvement of ML models.


Senior Data Engineer at Paktor x 17LIVE | AWS Big Data Specialist | Data Architect
Singapore / Hong Kong / Taiwan

[email protected]

https://www.linkedin.com/in/chin-hung-wilson-liu-29392957

Nanxing Rd., Xizhi Dist., New Taipei City, Taiwan (R.O.C.)

Experience 

Senior Data Engineer (DataOps / AI) / Lomotif Private Limited / Singapore

Jul. 2021 - Present.

Description and Responsibilities: Lomotif is a leading short-video social platform in South America and India that holds petabytes of videos in buckets and serves millions of users. The DataOps and AI teams take part in many challenging projects, e.g. Ncanto, XROAD services, Ray Serve, and scalable model-serving frameworks that support the recommendation and moderation pipelines; the team also integrated Universal Music Group (UMG) music and the full catalog feed with 7digital. The DataOps team handles 10 TB+ of data in day-to-day operations, moderates model-training results, and designs SLIs/SLOs for EKS clusters. More responsibilities/details below.

  • Optimized the music (UMG) pipeline, tuning queries and memory for Elasticsearch and PostgreSQL; the pipeline now saves 90% of its execution time, down from 10+ hours to 40 minutes.
  • Migrated services from Apache Spark and AWS Lake Formation to an Airflow environment on AWS MWAA and EKS.
  • Designed and delivered a distributed system on Ray Serve with the AI team.
  • Designed and implemented a modern machine learning pipeline for the recommendation and moderation pipes.
  • Designed the SLA and implemented an alert/log reporting system (history logs) for the moderation pipeline; the history logs capture application- and server-level information for further investigation.
  • Supported other departments in gathering data on the appropriate platforms.
Tech Stacks:
  • Streaming: Snowpipe / Kinesis / Firehose
  • Monitoring: CloudWatch / Grafana
  • Orchestration: AWS MWAA / Airflow
  • Kubernetes: EKS
  • Messaging: SQS / SNS
  • ML: MLflow / Ray Serve / EMR / Lambda
  • Storage: Snowflake / RDS (PostgreSQL) / ElastiCache (Redis) / Elasticsearch
  • Bucket: AWS S3
Reports to: VP of Data Engineering
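To make the MWAA/Airflow orchestration above concrete, here is a minimal sketch of the kind of daily DAG that runs on AWS MWAA (Airflow 2.x). The DAG id, task names, and callables are hypothetical stand-ins, not taken from Lomotif's codebase.

    # Hypothetical sketch: a daily pipeline DAG of the kind deployed to AWS MWAA.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_moderation_results(**context):
        # Placeholder: pull model-moderation results from the upstream store.
        print("extracting moderation results for", context["ds"])

    def load_to_snowflake(**context):
        # Placeholder: bulk-load the day's partition into Snowflake.
        print("loading partition", context["ds"])

    with DAG(
        dag_id="moderation_pipeline",          # hypothetical DAG name
        start_date=datetime(2021, 7, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_moderation_results)
        load = PythonOperator(task_id="load", python_callable=load_to_snowflake)
        extract >> load  # load runs only after extraction succeeds

The same DAG file works on a self-managed Airflow and on MWAA, which is what makes this migration path attractive: the orchestration code moves unchanged while the execution environment becomes managed.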


Senior Data Engineer / Handshakes by DC Frontiers / Singapore

Oct. 2020 - May. 2021.

Description and Responsibilities: The engineering team's main responsibility was launching ScoutAsia (by Nikkei and The Financial Times) content onto SGX's TitanOTC platform, where Titan users can access Nikkei news articles across 11 categories, including equities, stocks, indices, foreign exchange, and iron ore. DPP (the data team) processes hundreds of GB of article, market, financial, relationship, and organization data for day-to-day operations in Azure and on-premise environments. More responsibilities/details below.

  • Identified and dug into bottlenecks and solved problems, especially optimizing the performance of SQL Server, NoSQL (Azure Cosmos), resource units, and message queues, saving almost 50–75% of resources.
  • Identified and solved problems across the machine learning, backend, frontend, and DPP sides and provided advanced logical/physical system designs; displayed technical expertise in optimizing the databases and improving the data pipeline to achieve the objective.
  • Brought industry standards into data management to deliver data for the end objective.
  • Built and recruited new data engineering staff for the next-generation enterprise data pipeline.
Tech Stacks:
  • Storage: Azure Cosmos DB / Gremlin / SQL Server / MySQL / Redis
  • Storage (bucket): Azure Blob / AWS S3
  • Streaming/batch/transform: Spark / Scala (90% codebase coverage)
  • Messaging: Azure Service Bus / Queue Storage
  • Search: Elasticsearch
  • Algorithms: graph / concordance
Reports to: CTO
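For a feel of the Spark batch layer described above, the following PySpark sketch (Python here for consistency; the real codebase was ~90% Scala) joins raw articles to organization records and writes a curated, partitioned output. The storage account, paths, and column names are illustrative assumptions, not the actual DPP schema.

    # Hypothetical sketch of an article-enrichment batch job on Azure Blob storage.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("articles_batch").getOrCreate()

    # Assumed input layouts on Azure Blob (wasbs://) containers.
    articles = spark.read.json("wasbs://raw@account.blob.core.windows.net/articles/")
    orgs = spark.read.parquet("wasbs://ref@account.blob.core.windows.net/organizations/")

    enriched = (
        articles
        .withColumn("published_date", F.to_date("published_at"))
        .join(orgs, on="org_id", how="left")                   # attach organization metadata
        .filter(F.col("category").isin("equities", "fx"))      # keep a subset of the 11 categories
    )

    # Partitioning by date keeps day-to-day reprocessing and downstream reads cheap.
    enriched.write.mode("overwrite").partitionBy("published_date").parquet(
        "wasbs://curated@account.blob.core.windows.net/articles_enriched/"
    )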

Senior Data Engineer / 17LIVE Inc. / Taiwan, Taipei.

Feb. 2020 - Jul. 2020

Description and Responsibilities: The big challenges for the 17 Media data team were fast-growing data volume (processing at the 5–10 TB level daily), complex cooperation with stakeholders, cost optimization of the pipeline, refactoring high-latency systems, etc. As a senior data member, I built a data dictionary and explained/designed how the whole pipeline works with each component, and especially how to solve its bottlenecks. More responsibilities/details below.

  • Led and architected a large-scale data pipeline supporting scientists and stakeholders.
  • Optimized data lake projects and data pipeline infrastructure, ensured their quality, and played a tough role in them.
  • Defined and designed stage, dimension, production, and fact tables for the data warehouse (BigQuery).
  • Coordinated with the client, QA, and backend teams on QC lists and MongoDB change stream workers.
  • Architected workflows from components such as Dataflow, Cloud Functions, and GCS.
  • Recruited (Jr./Sr.) data engineering members, set goals, and managed sprints.

Tech Stacks:

  • Storage: GCS / BigQuery / Firebase / MongoDB / MySQL
  • Real-time processing and messaging: Dataflow (Apache Beam) / BigQuery Streaming / MongoDB Change Stream / Fluentd / Firebase / Pub/Sub
  • ETL/ELT workflow: Digdag / Embulk
  • Data warehouse and visualization: BigQuery / Superset / Chartio / Data Studio
  • Continuous deployment: Docker / CircleCI

Reports to: Data Head
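A minimal sketch of the BigQuery streaming ingestion listed above, using the google-cloud-bigquery client; the project, dataset, table, and row fields are hypothetical.

    # Hypothetical sketch: stream event rows into a BigQuery fact table.
    from google.cloud import bigquery

    client = bigquery.Client()  # uses application-default credentials
    table_id = "my-project.warehouse.fact_events"  # assumed project.dataset.table

    rows = [
        {"user_id": "u123", "event": "gift_sent", "ts": "2020-03-01T12:00:00Z"},
    ]

    # insert_rows_json performs a streaming insert; it returns a list of
    # per-row errors, empty on success.
    errors = client.insert_rows_json(table_id, rows)
    if errors:
        raise RuntimeError(f"BigQuery streaming insert failed: {errors}")

Streaming inserts like this trade a small per-row cost for immediate queryability, which is why batch loads (e.g. via Embulk) still make sense for the large, non-urgent tables.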

Data Engineer / Paktor Pte. Ltd. / Singapore 

Sep. 2015 - Dec. 2019.

Description and Responsibilities: This was another 0-to-1 story. As an early data member, we had to figure out the company's data-driven policies, strategies, and engineering requirements. At Paktor, the data and backend sides were 100% on AWS, so data ingestion, automation, the data warehouse, and so on all relied on those components. We processed real-time/batch jobs at the 50–100 GB scale plus other data sources (RDBMS, APIs) for ETL/ELT on S3 and Redshift; the data platform helped our marketing and HQ scientist teams turn data into insights and make good decisions. More responsibilities/details below.

  • Supported Big Data and batch/real-time analytical solutions leveraging transformational technologies.
  • Optimized the data pipeline on AWS using Kinesis Firehose, Lambda, Kinesis Analytics, and Data Pipeline, and optimized and resized Redshift clusters and related scripts.
  • Translated complex analytics requirements into detailed architecture, design, and high-performing software, such as machine learning and CI/CD for the recommendation pipeline.
  • Collaborated with client- and backend-side developers to formulate innovative solutions and to experiment with and implement related algorithms.

Tech Stacks:

  • Storage: S3 / Redshift / Aurora
  • Real-time processing and messaging: Kinesis Firehose / SNS
  • Data warehouse and visualization: Redshift / Klipfolio / Metabase
  • ETL/ELT workflow: Lambda / SNS / Batch / Python
  • Recommendation and ML: DynamoDB / EMR / Spark / SageMaker
  • Metadata management: Athena (Presto) / Glue / Redshift Spectrum
  • Continuous deployment: Elastic Beanstalk / CloudFormation
  • Operations: PagerDuty / Zapier / CloudWatch

Reports to: CTO, Data Head
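The Kinesis Firehose stage above is typically paired with a transformation Lambda. The sketch below follows the standard Firehose data-transformation event contract (base64 records in; recordId/result/data records out); the payload fields themselves are assumptions.

    # Hypothetical sketch: a Kinesis Firehose transformation Lambda that decodes
    # each record, applies a placeholder normalization step, and re-encodes it.
    import base64
    import json

    def handler(event, context):
        output = []
        for record in event["records"]:
            payload = json.loads(base64.b64decode(record["data"]))
            payload["normalized"] = True  # placeholder transform step
            output.append({
                "recordId": record["recordId"],   # must echo the incoming recordId
                "result": "Ok",                   # or "Dropped" / "ProcessingFailed"
                "data": base64.b64encode(json.dumps(payload).encode()).decode(),
            })
        return {"records": output}

Records marked "Ok" continue to the Firehose destination (S3, then a Redshift COPY); marking a record "Dropped" filters it out without failing the batch.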

System Analyst (Data Backend Engineer) / JSpectrum Software Limited / Hong Kong 

Jan. 2014 - Aug. 2015.

Description and Responsibilities: JSpectrum is a leading passive location-based services company in Hong Kong with products such as NetProbe, NetWhere, and NetAd. On Optus (the main project, in Sydney), the system analyst's main responsibility was designing and implementing data ingestion (real-time processing) and loading and managing data with the major components of the Hadoop ecosystem. We faced the challenge of processing 15,000 TPS, 60,000 inserts per second, and 300 GB of new storage daily, so we optimized those components by balancing Kafka consumers and HDFS storage and redesigning HBase keys/columns to meet the requirements, and we deployed NetAd and the whole in-house solution on Optus. More responsibilities/details below.

  • Designed, implemented, and optimized Hadoop ecosystems, MLP, and real-time processing on Optus in-house servers for our main products NetAd and NetWhere, focusing on the HBase schema, HDFS, balancing Kafka consumers, and other data ingestion issues.
  • Collaborated with stakeholders and LBS team members on further requirements for HeapMap.

Tech Stacks:

  • Storage: HDFS / HBase
  • Real-time processing and messaging: Kafka streaming / log systems
  • Data warehouse and visualization: HBase / NetWhere (dashboard)
  • Hadoop ecosystem: Hadoop / HDFS / Zookeeper / Spark / Hive
  • ETL/ELT workflow: Spark / Hive / Scala / Java

Reports to: CTO
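As a rough Python rendering of the ingestion pattern described above (the production system was Java/Scala), a Kafka-to-HBase consumer with a salted row key might look like the following; the topic, table, and column names are illustrative.

    # Hypothetical sketch: consume location events from Kafka and write them to
    # HBase with a salted row key so heavy write loads spread across regions.
    import json
    import zlib

    import happybase                    # HBase Thrift client
    from kafka import KafkaConsumer     # kafka-python

    consumer = KafkaConsumer(
        "location-events",              # assumed topic name
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    connection = happybase.Connection("localhost")
    table = connection.table("net_ad_events")  # assumed HBase table

    for msg in consumer:
        event = msg.value
        # A stable hash prefix (salt) prevents region hotspots when tens of
        # thousands of rows per second arrive with monotonically increasing keys.
        salt = zlib.crc32(event["subscriber_id"].encode()) % 16
        row_key = f"{salt:02d}|{event['subscriber_id']}|{event['ts']}"
        table.put(row_key, {b"loc:cell_id": event["cell_id"].encode()})

Salting trades away cheap full-range scans for balanced writes; readers must fan out one scan per salt bucket, which matches the write-heavy profile described above.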


Senior Software Engineer / Toro Development Ltd. / Taiwan, Taipei. 

Oct. 2012 - Dec. 2013.

Description and Responsibilities: TORO is a technology business that provides a mobile platform and its associated systems, services, and rules to help brands (with an initial focus on sports teams, smart cities, and streaming apps) become super-apps and generate additional revenue with minimal effort. Responsibilities below.

  • Designed, implemented, and tested back-office modules for the NFC wallet platform, Trusted Service Managers (TSM), and distributed NFC services for end users/stakeholders.
  • Implemented RESTful services and delivered endpoints for wallet managers, collaborating with the frontend and backend teams on further business requirements.

Tech Stacks: MySQL / Spring / Hibernate / XML / Apache Camel / Java / POJO, etc.

Reports to: Head of Server Solutions


Software Engineer / Digital River / Taiwan, Taipei. 

Oct. 2011 - Sep. 2012.

Description and Responsibilities: Digital River partners proactively with leading enterprise brands, providing API-based Payments & Risk, Order Management, and Commerce services. The big challenge at DR was integrating with the existing modules and working well within a huge code base (over 2 million lines), under a strict process of requirements analysis, design, implementation, testing, and code review. More responsibilities below.

  • Designed and implemented the custom bundle project, which lets shoppers customize bundles by picking products from groups to get special discounts; the main stakeholders/users came from Logitech and Microsoft.
  • Analyzed and collected business requirements, identified use cases, collaborated with business analysts, and delivered the related diagrams and documents.

Tech Stacks: Oracle / Tomcat / Spring / Struts / JDO / XML / JUnit / Java / J2EE, etc.

Reports to: Technical Development Manager


Technical Supervisor / Stark Technology Inc. / Taiwan, Taipei. 

Oct. 2008 - Sep. 2011.

Description and Responsibilities: Stark Technology (STI) is the largest domestic system integrator in Taiwan. We planned and delivered complete ICT solutions for a wide spectrum of industries by representing and reselling the world's leading products, using the most advanced technology and providing the best professional services. More responsibilities/projects below.

  • Led and coached junior programmers through the development process for enterprise modules, and designed Fatwire CMS components such as Template/Page/Cache.
  • Designed and analyzed DMDB systems and implemented functions to meet query/storage requirements; optimized performance and tuned GC for online servers.

Tech Stacks: Oracle / Sybase / Tomcat / WebLogic / Spring / Struts / Hibernate / Fatwire / Java / J2EE, etc.

Reports to: Technical Manager


Relevant Skills and Qualifications


Big Data Tech Stacks

  • AWS services: EC2 / S3 / Lambda / EMR / CloudWatch / SNS / SQS / Elastic Beanstalk
  • AWS Big Data solutions: Kinesis / Firehose / Athena / Redshift / DynamoDB
  • GCP Big Data solutions: BigQuery / Pub/Sub / Dataflow / Cloud Functions
  • Hadoop ecosystem: Hadoop / HDFS / Zookeeper / HBase / Hive
  • Spark Streaming / Apache Kafka
  • CI/CD: Jenkins / CloudFormation / GitLab / Grafana

Specific Skills

  • Solid, well-designed real-time streaming/batch processing and ETL systems.
  • Monitors and conducts data-pipeline and machine-learning-pipeline development requests through lifecycle management and ensures that the technical solution meets requirements.
  • Diagnoses and troubleshoots Redshift and manages specific clusters.
  • Develops micro-services and endpoints based on enterprise integration patterns; knowledgeable in garbage collection (JVM) tuning for various servers.
  • Developed multi-threaded processing for consuming work and managing transactions.

Certifications and Training

  • Sun Certified Web Component Developer, Java 2 Platform, Enterprise Edition.
  • Sun Certified Programmer for the Java 2 Platform.
  • Red Hat Enterprise Directory Services and Authentication (attended).
  • Project Management Professional (PMP)® (attended).
  • AWS Certified Solutions Architect (attended).
  • Big Data on AWS (attended).
  • Azure Data Engineer Associate (attended).

Education


National Taiwan University, 2010 – 2011

EMBA Programs, Business Administration, Accounting, Finance and International Business.


Chinese Culture University, Master of Information Management, 2002 – 2005

Computer Science, Data Mining, Expert Systems and Knowledge Base as major concentration.


Chinese Culture University, Bachelor of Science in Journalism, 1998 - 2002
