CakeResume Talent Search

Advanced Search
On
4 to 6 years
6 to 10 years
10 to 15 years
More than 15 years
Avatar of Yen-Ting Liu.
Data Engineer @Tesla
2023 ~ 2023
Data Engineer / Data Analyst
Within two months
data that included geo-location data from BigQuery and deployed it on the GCP environment. The API saved 80% of the time spent fetching data (Cloud Run, IAM, BigQuery) Oct – Jul 2021 Data Engineer • 富盈數據 Maintained distributed systems and databases • Constructed and managed the Hadoop ecosystem with Ambari. Built an ETL pipeline to query multi-source databases, processing more than three terabytes (TB) and covering 90% of analysis needs (Hive, HBase, Python, ELK, MySQL) • Established a data collection and analysis workflow, saving data scientists 30% of the time to analyze and build machine learning
Python
Linux
R
Employed
Actively looking for a job
Full-time / Interested in remote work
4 to 6 years
University of Texas at Dallas
Information Technology and Management
Avatar of Chin-Hung (Wilson) Liu.
Principal Engineer, Data Engineering @KKCompany
2023 ~ Present
Backend Engineer, Data Engineer, MLOps Engineer
Within one month
Chin-Hung (Wilson) Liu I am a lead architect responsible for designing and implementing a large-scale data pipeline for Lomotif, Paktor x 17LIVE, utilizing GCP/AWS/Python/Scala, in collaboration with data science and machine learning teams in Singapore and TW HQ, as well as with the Hadoop ecosystem (HDFS/HBase/Kafka) at JSpectrum in Hong Kong and Sydney. With over 15 years of experience in designing and developing Java/Scala/Python-based applications for daily operations, I bring: ● At least 8 years of experience in data analysis, pipeline design
Big Data
Data Engineering
ETL
Employed
Open to exploring new opportunities
Full-time / Interested in remote work
10 to 15 years
National Taiwan University
EMBA Programs, Business Administration, Accounting, Finance and International Business.
Avatar of 陳柄宏.
Staff Cloud Architect Engineer @域動行銷股份有限公司
2023 ~ Present
Cloud Engineer, Cloud Architect, Data Architect
Within one month
and a team that keeps improving. [email protected], Taiwan Education Fu Jen Catholic University, Department of Library and Information Science. Skills Python programming, web crawling, and data cleaning. Database SQL: PostgreSQL, MySQL, MSSQL NoSQL: MongoDB, Redis, DynamoDB, Hadoop Docker containerization Data Lakehouse Databricks Azure Holds Microsoft Certified: Azure Solutions Architect Expert and Data Engineer certifications AWS Holds the AWS Certified Solutions Architect – Professional certification Work Experience Staff Cloud Architect Engineer, 域動行銷股份有限
Git
Hadoop ecosystem
MongoDB
Employed
Not currently interested in new opportunities
Full-time / Interested in remote work
4 to 6 years
Fu Jen Catholic University
Library and Information Science
Avatar of the user.
Team Lead / Sr. Data Engineer @新加坡商競舞電競娛樂有限公司 Garena Online Private Ltd
2021 ~ Present
Data Engineer
Within one month
Hadoop
Spark
SQL
Employed
Full-time / Interested in remote work
6 to 10 years
Avatar of Carter Lin.
Senior Data Engineer @Micron Technology
2021 ~ Present
Software Engineer / Backend Engineer / DevOps Engineer
Within six months
CD pipelines from scratch that follow the GitOps flow, deploying services to a GKE cluster using Helm. Familiar with GCP services: IAM, GCS, BigQuery, Cloud Function, Pub/Sub, Cloud Scheduler. Data Engineer Micron Oct 2021 Taichung, Taiwan Developed and maintained ETL processes using Python to transfer data into the Hadoop ecosystem, including HBase and Hive, for efficient data storage and retrieval. Proficient in SQL for data manipulation and query optimization. Collaborated with cross-functional teams to design and implement data pipelines, ensuring data integrity and accuracy. Streamlined data processing workflows, resulting in significant time and resource
Python
Google Cloud Platform
Helm
Employed
Full-time / Remote work only
4 to 6 years
National Chiao Tung University
Department of Information Management
Avatar of Aiden Wu.
Senior Data Engineer @Garena
2021 ~ Present
Data Engineer
Within one year
Aiden Wu Data Engineer / Machine Learning Engineer Taipei, Taiwan • Enthusiastic software developer focused on distributed systems, especially the Hadoop ecosystem • Experience in data engineering: develops batch and real-time data pipelines processing TBs of data per month via Spark and Airflow • Experience in machine learning: develops machine learning (ML) and deep learning (DL) models and serves them via RESTful APIs https://www.slideshare.net/ssuserf88631/presentations Work Experience Senior Data Engineer • Garena Aug – Present • Build and manage self-hosted distributed systems (e.g., Hadoop, Spark, and Kafka clusters) • Design
Python
Spark
Machine Learning
Employed
Full-time / Interested in remote work
4 to 6 years
National Cheng Kung University
Department of Electrical Engineering
Avatar of 陳慶全.
Senior Data Engineer @Microsoft
2021 ~ Present
Data Scientist, Data Engineer, Data Analyst
Within one month
Ching-Chuan Chen 陳慶全 Data Scientist, Data Engineer, Data Analyst • City, TW • [email protected] Data engineer and data scientist with over four and a half years of experience. Proven success in processing large volumes of data (6 TB per day) with Spark in Scala and MPI in R and Python, developing a machine learning model with Spark in Scala on 30 billion records for IoT device recognition, and developing algorithms to classify unlabeled network behaviors of customers to protect their devices from being compromised. Skilled in programming
R
Python
C++
Employed
Full-time / Interested in remote work
4 to 6 years
National Cheng Kung University
Statistics
Avatar of the user.
Within one year
Python
Bigdata
Docker
Employed
Full-time / Interested in remote work
10 to 15 years
Avatar of the user.
Jr. Programmer @德義資訊股份有限公司
2013 ~ 2015
Developer Team Leader, Architect, FullStack Developer
More than one year
Word
PowerPoint
Excel
Employed
Full-time / Interested in remote work
6 to 10 years
National Taiwan University
Bachelor of Bio-Industrial Mechatronics Engineering
Avatar of Mallikarjunareddy Guruguntla.
Big data developer
More than one year
ZOOKEEPER. Summary Excellent understanding/knowledge of HADOOP (Gen-1 and Gen-2) and various components such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node, Resource Manager (YARN), Node Manager and Application Master. Expert in understanding data and designing/implementing enterprise platforms like Hadoop data lakes and large data warehouses. Over 2 years of experience as a Hadoop Architect with very good exposure to Hadoop technologies like HDFS, YARN, MapReduce, Sqoop, Flume, HBase, Hive, Presto, Oozie and Spark. Good understanding of NoSQL databases and hands-on working experience in writing applications
Hadoop ecosystem
Python
Scala
Full-time / Interested in remote work
6 to 10 years
JNTUH
Computer science

The lightest, fastest recruiting solution, chosen by hundreds of companies

Search resumes, contact job seekers proactively, and improve recruiting efficiency.

  • Browse all search results
  • Start an unlimited number of new conversations every day
  • Search resumes that are only visible to paying companies
  • View users' email addresses & phone numbers
Search Tips
1
Search a precise keyword combination
senior backend php
If there are not enough search results, remove the less important keywords
2
Use quotes to search for an exact phrase
"business development"
3
Use the minus sign to eliminate results containing certain words
UI designer -UX
The free plan can only search public resumes.
Upgrade to the advanced plan to browse all search results (including tens of thousands of resumes that are public only on the CakeResume platform).

Definitions of Workplace Competency Ratings

Professional Skills
Professional competencies in the field (e.g., familiar with SEO and able to use the relevant tools).
Problem Solving
Able to identify and analyze problems and formulate effective solutions.
Adaptability
Able to respond calmly to unexpected events and adjust the relative priority of projects, clients, and technologies at any time.
Communication
Able to convey ideas effectively, willing to listen to others and give feedback.
Time Management
Understands the priority of work items, uses time effectively, and completes work on schedule.
Teamwork
Shows commitment and a sense of responsibility to the team, willing to listen to others and proactively communicate and coordinate.
Leadership
Focuses on team development and effectively leads the team to take action and achieve shared goals.
Within two months
Sr. Data Engineer / Team Lead
新加坡商競舞電競娛樂有限公司 Garena Online Private Ltd
2021 ~ Present
Taoyuan District, Taoyuan City, Taiwan
Professional Background
Current Status
Employed
Job Search Stage
Profession
Other
Industry
Years of Experience
6 to 10 years
Management Experience
Experience managing 1–5 people
Skills
Hadoop
Spark
SQL
python
Scala
AWS
GCP
Languages
Chinese
Native or bilingual
English
Advanced
Job Preferences
Desired Position
Data Engineer
Desired Work Type
Full-time
Desired Work Location
Taipei, Taiwan
Remote Work Preference
Interested in remote work
Freelance Services
Education
School
Major

Yi-Lun Wu (Velen)

Summary

  • 7+ years of experience in big data, both in the cloud (GCP, AWS) and on-premises (Cloudera CDH).
  • Developed a data catalog in a hybrid cloud environment (on-prem + AWS) for global commerce at Yahoo.
  • Led a machine learning team to build offline/real-time platforms for recommendation systems from scratch at Garena.
  • Led and architected large-scale data pipelines/warehouses at Innova Solutions (AWS) and 17LIVE (GCP), in cooperation with the data science/machine learning teams and the TW HQ data team.
  • Expert in the Hadoop ecosystem (HDFS/Hive/HBase/Spark) at Athemaster and Xuenn.

Sr. Big Data Engineer / Team Lead
Taoyuan, TW
[email protected]

Skills


Languages

  • Python 
  • SQL
  • Linux Shell Script
  • Scala


Big Data Solutions

  • Cloudera CDH
  • Hadoop ecosystem
  • AWS EMR
  • GCP Dataproc


Data Warehouse

  • Hive
  • Google BigQuery
  • MySQL
  • HBase
  • ClickHouse
  • AWS RDS
  • AWS Athena


ETL Skills in Big Data

  • Spark / Spark Streaming
  • Hive (for ELT) 
  • Impala 
  • Kafka 
  • Cloud Dataflow (GCP)


Workflow Skills

  • Airflow 
  • Digdag 
  • NiFi 
  • AWS CloudFormation


Other Skills

  • Great communication 
  • Leadership
  • Scrum 
  • JIRA 
  • Linux 
  • Git  

Experience

Senior Engineer at Yahoo, Jan 2023 - Present

The greatest challenge at Yahoo lies in developing in response to rapid market changes: the data catalog needs to integrate with a variety of complex systems while maintaining the highest stability and ensuring high-quality data.

Responsibilities and details:
  • Fetch provider data in various ways via Java, such as from clients' APIs, GraphQL, FTP, S3, GCP, etc.
  • Standardize this data and feed it into the data warehouse (Hadoop/Hive/HBase) using Spark (see the sketch after this list).
  • Implement a checking system in Java to guarantee high-quality data.
  • Migrate part of the services from on-prem (Hadoop) to the cloud (AWS), forming a hybrid cloud environment.
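
Where the bullets above mention standardizing provider data with Spark and feeding it into the Hadoop/Hive warehouse, a minimal PySpark sketch of that kind of step could look like the following. The landing path, column names, and table name are illustrative assumptions, and the production job may well be written in Java/Scala Spark rather than Python.

```python
# Hypothetical sketch: paths, column names, and the table name are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("provider-standardization")
    .enableHiveSupport()          # write into the Hive-backed warehouse
    .getOrCreate()
)

# Raw provider feeds landed as JSON files (assumed landing path).
raw = spark.read.json("hdfs:///landing/providers/dt=2023-01-01/")

standardized = (
    raw
    .withColumnRenamed("providerId", "provider_id")   # unify naming conventions
    .withColumn("ingested_at", F.current_timestamp()) # record load time
    .dropDuplicates(["provider_id", "item_id"])       # basic de-duplication
)

# Append the standardized rows into the warehouse table (illustrative name).
standardized.write.mode("append").saveAsTable("warehouse.provider_items")
```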

 

Senior Data Engineer / Team Lead at BOOYAH! Live (Garena), Oct 2021 - Sep 2022

Joined the ML team as its first data engineer. The challenges included building the data warehouse and pipelines from scratch and designing the data flow to support both batch and real-time recommendation systems.

Responsibilities and details:
  • Designed the data model from scratch and managed the Hadoop-based data warehouse for the training system.
  • Developed a streaming ETL pipeline with Spark from a message queue (Kafka) into an in-memory data store (Redis) and ClickHouse for the real-time recommendation system (see the sketch after this list).
  • Designed ELT jobs for the offline reporting system in Hadoop/Hive using Spark.
  • Built a monitoring dashboard in Grafana.
  • Took the leadership role on the Taiwan side.
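
To illustrate the streaming leg of that pipeline, below is a minimal PySpark Structured Streaming sketch that reads events from Kafka and appends each micro-batch to ClickHouse over JDBC. The topic, event schema, and connection settings are assumptions, the ClickHouse JDBC driver is assumed to be on the Spark classpath, and the Redis write path is omitted for brevity.

```python
# Hypothetical sketch: topic, schema, and connection details are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("events-streaming-etl").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("item_id", StringType()),
    StructField("event_ts", LongType()),
])

# Parse JSON events from the Kafka topic (assumed broker and topic names).
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")
    .option("subscribe", "user-events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

def write_batch(batch_df, batch_id):
    # Append each micro-batch to ClickHouse over JDBC.
    (batch_df.write
        .format("jdbc")
        .option("url", "jdbc:clickhouse://clickhouse:8123/analytics")
        .option("dbtable", "user_events")
        .mode("append")
        .save())

query = (
    events.writeStream
    .foreachBatch(write_batch)
    .option("checkpointLocation", "hdfs:///checkpoints/user-events")
    .start()
)
query.awaitTermination()
```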

Senior Data Engineer at 17 Media, Jun 2020 - Oct 2021

The main challenges for the 17 Media data team were the fast-growing data volume (processing on the order of 5-10 TB daily), complex cooperation with stakeholders, pipeline cost optimization, and refactoring high-latency systems.

Responsibilities and details:
  • Managed the Google BigQuery-based data warehouse/lake.
  • Refactored the data warehouse architecture, improving performance 2x.
  • Developed batch/streaming ETL pipelines to process data from diverse sources (e.g., MongoDB, MySQL, APIs) into GCP (see the sketch after this list).
  • Designed workflows using Digdag.
  • Implemented CI/CD on BigQuery.
  • Built the visualization tool (Superset) on Kubernetes.
  • Led and mentored junior members.
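
As an illustration of one such batch pipeline, the sketch below copies a day of MongoDB documents into BigQuery using the official Python clients. The connection string, collection, field names, and destination table are assumptions; in the actual setup, jobs like this were scheduled with Digdag.

```python
# Hypothetical sketch: connection string, collection, fields, and table are assumptions.
from pymongo import MongoClient
from google.cloud import bigquery

def export_daily_orders(ds: str) -> None:
    """Copy one day of MongoDB documents into a BigQuery table."""
    mongo = MongoClient("mongodb://mongo:27017")           # assumed connection string
    docs = mongo["shop"]["orders"].find({"order_date": ds})

    rows = [
        {
            "order_id": str(d["_id"]),
            "user_id": d["user_id"],
            "amount": float(d["amount"]),
            "order_date": ds,
        }
        for d in docs
    ]

    bq = bigquery.Client()
    load_job = bq.load_table_from_json(
        rows,
        "my-project.warehouse.orders",                     # assumed destination table
        job_config=bigquery.LoadJobConfig(write_disposition="WRITE_APPEND"),
    )
    load_job.result()  # block until the load job finishes

if __name__ == "__main__":
    export_daily_orders("2021-01-01")
```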

Senior Software Engineer at Innova Solutions Ltd., Oct 2018 - May 2020

Developed the Intelligent Healthcare Data Platform (IHDP) to power company solutions using AWS services.

Responsibilities and details:
  • Built APIs for processing patient records and providing access for downstream usage.
  • Built infrastructure on AWS, compliant with HIPAA and GDPR standards.

IT Consultant at Xuenn Pte Ltd, May 2018 - Sep 2018

The biggest challenge at Xuenn was performance issues in the original data warehouse. I led a project to build a Hadoop cluster to reduce the load on the original EDW and improve various data pipelines.

Responsibilities and details:
  • Adopted new technologies to integrate into or replace existing systems, delivering major gains in performance, benefits, and capabilities for users.
  • Built multiple systems and integrated them with existing ones, implementing Hadoop, data mining, and data warehouse systems.
  • Designed an architecture that processes real-time data end-to-end using various Hadoop solutions without custom coding.

Software Engineer at Athemaster Co., Ltd., Jan 2016 - Apr 2018

Athemaster is a technology company offering solutions and expertise in implementing Enterprise Data Hubs and automating data integration with open-source technologies such as Apache Hadoop and Spark. Responsibilities and details:
  • Focused on enterprise big data solutions such as Hadoop and Spark (Cloudera CDH).
  • Maintained and improved client companies' Hadoop clusters.
  • Helped client companies integrate with Hadoop and resolved technical issues.
  • Built Python ETL pipelines that load data into Hadoop (see the sketch below).
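
A trivial sketch of such a Python-to-Hadoop load is shown below, assuming the HdfsCLI (WebHDFS) client and illustrative host, user, and path names.

```python
# Hypothetical sketch: the WebHDFS endpoint, user, and paths are assumptions.
import csv
from hdfs import InsecureClient  # HdfsCLI WebHDFS client (pip install hdfs)

def load_csv_to_hdfs(local_path: str, hdfs_path: str) -> None:
    """Validate a local CSV extract and upload it to HDFS for downstream Hive tables."""
    # Light sanity check before shipping the file into the cluster.
    with open(local_path, newline="") as f:
        if not any(csv.reader(f)):
            raise ValueError(f"{local_path} is empty")

    client = InsecureClient("http://namenode:9870", user="etl")  # assumed NameNode WebHDFS URL
    client.upload(hdfs_path, local_path, overwrite=True)

if __name__ == "__main__":
    load_csv_to_hdfs("daily_extract.csv", "/warehouse/staging/daily_extract.csv")
```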

Certifications and Licenses




CCA-175: CCA Spark and Hadoop Developer




Cloudera Certified Administrator for Hadoop

Education



Undergraduate studies at Tamkang University, Department of Management Sciences.

2008 - 2012
