At CMoney, our mission is to support people throughout their lifelong investment journey. To this end, we have launched several services and apps that help users make decisions on different matters. CMoney operates the most popular stock mobile app and stock forum in Taiwan, each with almost 800k monthly active users. Every day, more than 100k articles and messages are created across our apps and forum. We also collaborate with 50+ investment KOLs to help users succeed in investing. CMoney is currently in a fast-growing stage, so we are looking for brilliant talent to join our team, especially people with integrity, strong social interest, and a growth mindset. For more details about CMoney, please refer to the following link: https://www.cmoney.tw/jobs/ .
Our Big Data team focuses on building the business intelligence platform that supports multiple internal partner teams and meets their operational needs. We are responsible for establishing processes for continual improvement and for crafting large-scale data and reporting solutions. Our mission is to keep our business running optimally while maintaining a balanced application environment.
As a member of the Big Data team, you will build and own mission-critical data pipelines and modern data warehouse solutions while collaborating closely with the Data Science, Marketing, and Product teams. You will be part of an early-stage team with a significant stake in defining its future, and with considerable potential to impact all of CMoney's revenue and millions of users. By leveraging vast amounts of data, your efforts will reveal invaluable business and user insights that fuel revenue.
B.S. and/or M.S. in Computer Science or related field
Broad knowledge of the data infrastructure ecosystem
Experience with Hadoop or other MapReduce-based architectures
Good understanding of one or more of the following programming languages: Python, Scala, C++, or Java
Experience working with large data volumes
Experience writing Big Data pipelines, including custom or structured ETL implementation and maintenance
Experience with large-scale data warehousing architecture and data modeling
Experience with Druid or Apache Flink
Experience with real-time streaming (Apache Kafka, Apache Beam, Heron, Spark Streaming)
Ability to manage and communicate data warehouse project plans to internal stakeholders