Chin-Hung (Wilson) Liu
Senior Data Engineer at Paktor x M17 Entertainment Group | AWS x GCP x Azure Big Data Specialist | Data Architect
Senior Data Engineer
July 2021 - Present · 1 yr 8 mos
Singapore
Description and Responsibilities: Lomotif is a leading short-video social platform in South America and India that holds PBs of videos in buckets and serves millions of users. The DataOps and AI teams take part in many challenging projects, e.g. Ncanto, XROAD services, Ray Serve, and scalable model-serving frameworks that support the recommendation and moderation pipelines; we also integrated Universal Music Group (UMG) music and the full catalog feed with 7digital. The DataOps team handles 10 TB+ of data in day-to-day operation, moderates model training results, and designs SLIs/SLOs for EKS clusters. More responsibilities/details below.
- Optimize the music (UMG) pipeline, tuning queries and memory for Elasticsearch and PostgreSQL; the pipeline now saves 90% of execution time, from 10+ hours down to 40 minutes.
- Migrate services from Apache Spark and AWS Lake Formation to an AWS MWAA (Airflow on EKS) environment.
- Design and deliver a distributed system on Ray Serve together with the AI team.
- Design and implement a modern machine-learning pipeline for the recommendation and moderation pipes.
- Define SLAs and implement an alert/log reporting system (history logs) for the moderation pipeline; the history logs capture application- and server-level information for further investigation.
- Support other departments in gathering data on the appropriate platforms.
Tech Stacks:
- Streaming: Snowpipe / Kinesis / Firehose
- Monitoring: CloudWatch / Grafana
- Orchestration: AWS MWAA / Airflow
- Kubernetes: EKS
- Messaging: SQS / SNS
- ML: MLflow / Ray Serve / EMR / Lambda
- Storage: Snowflake / RDS (PostgreSQL) / ElastiCache (Redis) / Elasticsearch
- Bucket: AWS S3
Reports to: VP of Data Engineering
Senior Data Engineer
October 2020 - May 2021 · 8 mos
Description and Responsibilities: The engineering team's main responsibility is launching scoutAsia, by Nikkei and The Financial Times, bringing Nikkei content to SGX TitanOTC's platform. Titan users will be able to access Nikkei news articles from across 11 categories, including equities, stocks, indices, foreign exchange, and iron ore. DPP (the data team) processes hundreds of GB of article, market, financial, relationship, and organization data in day-to-day operation across Azure and on-premise environments. More responsibilities/details below.
- Identify and dig into bottlenecks and solve problems, especially optimizing the performance of SQL Server, NoSQL (Azure Cosmos DB), resource units, and message queues, reducing/saving almost 50-75% of resources.
- Identify and solve problems across the machine learning, backend, frontend, and DPP sides, and advise on the logical/physical design of the system.
- Displayed technical expertise in optimizing the databases and improving the data pipeline to achieve the objective.
- Bring industry standards to data management and the delivery of data toward the end objective.
- Build and recruit the new data engineering staff for the next-generation enterprise data pipeline.
Tech Stacks:
- Storage: Azure Cosmos DB / Gremlin / SQL Server / MySQL / Redis
- Storage (Bucket): Azure Blob / AWS S3
- Streaming/Batch/Transform: Spark / Scala (90% codebase coverage)
- Messaging: Azure Service Bus / Queue Storage
- Search: Elasticsearch
- Algorithms: graph / concordance
Reports to: CTO
Senior Data Engineer
February 2020 - July 2020 · 6 mos
Description and Responsibilities: The big challenges for the 17 Media data team are fast-growing data volume (processing 5-10 TB daily), complex cooperation with stakeholders, cost optimization of the pipeline, and refactoring high-latency systems. As a senior data member, I maintain a data dictionary and explain/design how the whole pipeline works with each component, especially how to solve the bottlenecks. More responsibilities/details below.
- Lead and architect a large-scale data pipeline supporting scientists and stakeholders.
- Optimize and ensure quality, playing a tough role in data lake projects and data pipeline infrastructure.
- Define and design stage, dimension, production, and fact tables for the data warehouse (BigQuery).
- Coordinate with the client, QA, and backend teams on QC lists and MongoDB change stream workers.
- Architect workflows from Dataflow, Cloud Functions, and GCS components.
- Recruit (Jr./Sr.) data engineering members, set goals, and manage sprints.
Tech Stacks:
- Storage: GCS / BigQuery / Firebase / MongoDB / MySQL
- Realtime processing and messaging: Dataflow (Apache Beam) / BigQuery Streaming / MongoDB Change Stream / Fluentd / Firebase / Pub/Sub
- ETL/ELT workflow: Digdag / Embulk
- Data warehouse and visualization: BigQuery / Superset / Chartio / Data Studio
- Continuous deployment: Docker / CircleCI
Reports to: Data Head
September 2015 - December 2019 · 4 yrs 4 mos
Description and Responsibilities: This is another 0-to-1 story. As an early data member, we needed to figure out the company's data-driven policies, strategies, and engineering requirements. At Paktor, the data and backend sides are 100% on AWS, so the whole data ingestion, automation, data warehouse, etc. rely on those components. We process 50-100 GB of realtime/batch jobs and other data sources (RDBMS, APIs) for ETL/ELT on S3 and Redshift; the data platform helps our marketing and HQ scientist teams turn data into insights and make good decisions. More responsibilities/details below.
- Support big data, batch, and real-time analytical solutions leveraging transformational technologies.
- Optimize the data pipeline on AWS using Kinesis Firehose, Lambda, Kinesis Analytics, and Data Pipeline; optimize and resize Redshift clusters and related scripts.
- Translate complex analytics requirements into detailed architecture, design, and high-performing software, such as machine learning and CI/CD for the recommendation pipeline.
- Collaborate with client-side and backend developers to formulate innovative solutions, experiment, and implement related algorithms.
Tech Stacks:
- Storage: S3 / Redshift / Aurora
- Realtime processing and messaging: Kinesis Firehose / SNS
- Data warehouse and visualization: Redshift / Klipfolio / Metabase
- ETL/ELT workflow: Lambda / SNS / Batch / Python
- Recommendation/ML: DynamoDB / EMR / Spark / SageMaker
- Metadata management: Athena (Presto) / Glue / Redshift Spectrum
- Continuous deployment: Elastic Beanstalk / CloudFormation
- Operations: PagerDuty / Zapier / CloudWatch
Reports to: CTO, Data Head
System Analyst (Data Backend Engineer)
January 2014 - August 2015 · 1 yr 8 mos
Description and Responsibilities: JSPectrum is a leading passive location-based service company in Hong Kong with many interesting products, such as NetProbe, NetWhere, and NetAd. On Optus (the main project, in Sydney), the system analyst's main responsibility is designing and implementing data ingestion (real-time processing) and loading and managing data with major components of the Hadoop ecosystem. We met the challenge of processing 15,000 TPS, 60,000 inserts per second, and 300 GB of daily storage by optimizing those components: balancing Kafka consumers, tuning HDFS storage, and re-designing HBase keys/columns to fulfill the requirements, and we deployed NetAd as a whole in-house solution on Optus. More responsibilities/details below.
- Design, implement, and optimize Hadoop ecosystems, MLP, and real-time processing on Optus in-house servers with our main products NetAd and NetWhere, focusing on HBase schema, HDFS, balancing Kafka consumers, and further data-ingestion issues.
- Collaborate with stakeholders and LBS team members on further requirements for HeapMap.
Tech Stacks:
- Storage: HDFS
- Realtime processing and messaging: Kafka streaming / log systems
- Data warehouse and visualization: HBase / NetWhere (dashboard)
- Hadoop ecosystem: Hadoop / HDFS / ZooKeeper / Spark / Hive
- ETL/ELT workflow: MLP / Scala / Java
Reports to: CTO
Senior Software Engineer
October 2012 - December 2013 · 1 yr 3 mos
Description and Responsibilities: TORO is a technology business that provides a mobile platform and its associated systems, services, and rules to help brands (with an initial focus on sports teams, smart cities, and streaming apps) become super-apps and generate additional revenue with minimum effort. Responsibilities below.
- Design, implement, and test back-office modules for the NFC wallet platform and Trusted Service Managers (TSM), and distribute NFC services to end users and stakeholders.
- Implement RESTful services, deliver endpoints for wallet managers, and collaborate with frontend and backend teams on further business requirements.
Tech Stacks: MySQL / Spring / Hibernate / XML / Apache Camel / Java / POJO, etc.
Reports to: Head of Server Solutions
October 2011 - September 2012 · 1 yr
Description and Responsibilities: Digital River is a proactive partner, providing API-based payments & risk, order management, and commerce services to leading enterprise brands. The big challenge at DR is integrating with the current modules and working well within a huge code base (over 2 million lines), following a strict process of requirements analysis, design, implementation, testing, and code review. More responsibilities below.
- Design and implement the custom bundle project, where shoppers customize bundles by picking products from groups to get special discounts; the main stakeholders/users are Logitech and Microsoft.
- Analyze and collect business requirements, identify use cases, collaborate with business analysts, and deliver related diagrams and documents.
Tech Stacks: Oracle / Tomcat / Spring / Struts / JDO / XML / JUnit / Java / J2EE, etc.
Reports to: Technical Development Manager
October 2008 - September 2011 · 3 yrs
Description and Responsibilities: Stark Technology (STI) is the largest domestic system integrator in Taiwan. We plan and deliver complete ICT solutions for a wide spectrum of industries by representing and reselling the world's leading products, using the most advanced technology, and providing the best professional services. More responsibilities/projects below.
- Lead and coach junior programmers through the development process of enterprise modules, and design Fatwire CMS components such as Template, Page, and Cache.
- Design and analyze DMDB systems, and implement functions to meet query/storage requirements.
- Optimize performance for online servers, including GC tuning.
Tech Stacks: Oracle / Sybase / Tomcat / Weblogic / Spring / Struts / Hibernate / Fatwire / Java / J2EE, etc.
Reports to: Technical Manager
Master of Business Administration (MBA)・EMBA Programs, Business Administration, Accounting, Finance and International Business.
2010 - 2011
Master of Science (MS)・Computer Science, Data Mining, Expert Systems and Knowledge Base as major concentration.
2002 - 2005