Senior Data Engineer


Job Description


Are you passionate about challenging data engineering projects?

We are looking for an excellent data engineer to take part in challenging data projects and help define and lead data architecture, data quality, and data governance, processing, transforming, and storing millions of rows of data every day. This hands-on, hybrid role will help solve real big data problems.


We are looking for an outstanding senior data engineer with extensive experience in data, SQL, and Python, along with excellent skills in data analysis and solving complex data problems. You will design solutions, build data architectures, and maintain efficient data pipelines and data tools, collaborating with cross-functional teams to achieve goals and provide strong support for the organization's key decisions!



What You’ll Be Doing:
• Design, implement, and deliver data engineering work; contribute to and advise on data architecture, data quality, and data governance across pods.
• Establish data governance processes, procedures, policies, and guidelines to maintain the integrity and security of the data.
• Drive the successful adoption of organizational data utilization and self-service data platforms.
• Develop standards and write template code for sourcing, collecting, and transforming data for streaming or batch processing.
• Design data schemas, object models, and flow diagrams to structure, store, process, and integrate data.
• Apply hands-on subject-matter expertise in the architecture and administration of big data platforms and data lake technologies (AWS S3/Hive), along with experience with ML and data science platforms.
• Implement and manage industry best-practice tools and processes such as Data Lake, Databricks, Delta Lake, S3, Spark ETL, Airflow, Snowflake, dbt, Hive Catalog, Redshift, Kafka, Kubernetes, Docker, and CI/CD.
• Translate big data and analytics requirements into data models that operate at large scale and high performance, and guide data analytics engineers on these models.


Requirements

What We Are Looking For:

  • More than 5 years of experience as a data engineer
  • Experience with big data tools: Spark, Kafka, Parquet, Redshift, etc.
  • Experience with relational SQL and NoSQL databases
  • Experience supporting data transformation, data structures, and data management
  • Experience with AWS or other cloud services
  • Experience with object-oriented/functional languages: Python
  • Strong problem-solving skills and the ability to work independently

• 5+ years of hands-on experience productionizing and deploying big data platforms and applications, including hands-on work with relational/SQL databases, distributed columnar data stores/NoSQL databases, time-series databases, Spark Streaming, Kafka, Hive, Parquet, Avro, and more.

• Highly skilled in SQL, Python, Spark, AWS S3, Hive Data Catalog, Parquet, Redshift, Airflow, or similar tools.

• Knowledge of infrastructure requirements such as networking, storage, and hardware optimization, with hands-on experience with Amazon Web Services (AWS) or similar cloud computing platforms.

• Strong verbal and written communication skills are a must, along with the ability to work effectively across internal and external organizations.

• Demonstrated industry leadership in data warehousing, data science, and big data-related technologies.

• A strong understanding of distributed systems and container-based development using the Docker and Kubernetes ecosystem is an added advantage.

• Knowledge of data structures and algorithms.

• Experience in working with teams using CI/CD and agile methodologies is an added advantage

1 opening
6 years of experience required
60,000 ~ 100,000 TWD / month
Optional Remote Work

About us

DNA of Rainforest Retail

Iteration (continuous improvement)

Accountability (courage to take ownership)

Collaboration (teamwork)

Addiction (giving it our all)

In the fast-changing e-commerce environment, we partner with global brands to deliver professional, diverse, high-quality e-commerce services. We build the Rainforest team through specialized roles and teamwork, and continuously improve our service model and quality through agile projects.

◇ Brand Services

We operate for more than 20 leading brands from 5 vertical industries across four continents, completing their deployment in e-commerce and multiple sales channels.

◇ Platform Channels

Backed by strong service quality assurance, multi-channel management, cross-category retail operations capabilities, an integrated inventory system, and transaction and settlement systems, we act as a leader in multi-channel solutions, executing a wide range of brand operating strategies, leading more brands through digital transformation, and helping brands upgrade.


