Job Description

Avalanche Computing:

If you are looking to work on a team whose people believe in their mission (providing computing techniques for future innovation), you should join Avalanche Computing!

Here, we are committed to making hyper-scale computing and scalable techniques easy and effective for everyone, powered by technology innovation (currently focused on industrial AI and medical AI in Asia). You will participate in one AI project and help our AI scientists clean and prepare highly confidential datasets for deep learning.
We would love to hear from you if you are passionate about using data engineering, computing, and technology to revolutionize the way data is used to make the world better.

What you’ll do:

Using your data science, deep learning, and machine learning skills, you will help our data scientists train deep learning models, and you will have the opportunity to apply your favorite AI tools and train yourself on a real AI project.

All of Avalanche Computing's clients use AI techniques based on computer vision or NLP algorithms and apply them to different AI applications; a few projects may involve reinforcement learning. You will therefore be trained to understand how a real AI product is developed.

We offer (some benefits are for full-time employees only):

  • Competitive salary and extensive social benefits
  • Diverse and dynamic work environment (smart office)
  • Work-life balance and support for career development
  • Learning resources for online and in-person training (including conferences, meetups, and books)
  • Snack bar, refrigerator, snacks, a coffee machine with beans, black tea, Japanese tea, green tea, US flower tea, and milk tea from Japan (日東紅茶).
  • Company outings, festive parties, lunch days, year-end banquets, employee travel (in 2019 we went to Japan), and birthday cakes.
  • Visa assistance if needed.
  • Want to know more about Avalanche Computing? Then let’s stay connected!

Requirements

Responsibilities:
We’re looking for one Data Science student intern to parse and clean the hundreds of thousands of datasets that our customers and engineers download weekly.
Your goal will be to assist in cleaning and preparing training datasets and to automate that work with scripts. Depending on the project, the dataset and the definitions of its data attributes will change. For this internship, you may need to work with us 8 to 24 hours per week (up to 3 working days).
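As a rough illustration of the kind of cleaning-and-preparation scripting this involves (the file names, column names, and normalization rules below are hypothetical placeholders, not from an actual project), a minimal Python sketch might look like this:

# Minimal sketch of a dataset-cleaning step (hypothetical columns and paths).
import pandas as pd

def clean_dataset(raw_csv: str, out_csv: str) -> pd.DataFrame:
    """Load a raw CSV, drop broken rows, normalize labels, and save the result."""
    df = pd.read_csv(raw_csv)

    # Drop rows with missing values in the columns the model needs.
    df = df.dropna(subset=["image_path", "label"])

    # Remove exact duplicates so the training set is not biased toward repeated samples.
    df = df.drop_duplicates(subset=["image_path"])

    # Normalize label strings (e.g. "Cat ", "cat" -> "cat") before encoding.
    df["label"] = df["label"].str.strip().str.lower()

    df.to_csv(out_csv, index=False)
    return df

if __name__ == "__main__":
    clean_dataset("raw_annotations.csv", "clean_annotations.csv")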
You also need to do the following tasks:
- Paper reading
- Weekly 1-on-1 meetings with your manager
- Weekly meeting with your group
- Tech talks: share what you have learned with every member of the team
- Bi-weekly Top-5 report
- Finish the assigned tasks

Education:
- Bachelor's degree, or currently pursuing a Master's degree or working toward completion of a PhD program, in Computer Science or Computer Engineering. (Note: if you are pursuing a Master's or PhD degree, please discuss with your professor before you apply.)

Experience:
- Experience with Python, Linux shell scripting, and cloud CLIs.
- Fundamental knowledge of machine learning or deep learning.
- Data engineering experience with the fundamentals of data cleaning and data pipelines.
- Knowledge of how to automate data collection, pre-processing, and/or analysis.

Others:
- Ability to communicate findings clearly and succinctly to technical and non-technical audiences
- Kaggle or other competition experience

If the CakeResume system does not work, please contact us by email.

Salary

180 ~ 240 TWD/hour

Location

Taipei, Taiwan


About us

Avalanche Computing was founded by Nvidia (USA) alumni in Santa Clara, CA, USA, in 2018. At the end of 2018, the core team was built in Taipei, Taiwan. The barrier to entry in AI is lower than ever before thanks to open-source software, including a number of frameworks (TensorFlow, Keras, PyTorch, etc.). However, to develop a specific AI application, a company's engineers still need to build the data pipeline, the computing environment, and the AI models. Those processes remain difficult for traditional companies and SMEs.

To overcome the challenges above, we provide a hyper-scale computing technique for deep learning. The hyper-scale computing framework is an end-to-end deep learning solution. Through our hyper-scale solution, clients can focus on innovation while we do the rest (for example, we provide the hyper-scale architecture, multi-GPU and distributed model training, and hyper-scale inference on 10,000+ edge devices). Clients just need to plug their AI models into the framework in minutes.
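To give a concrete, generic picture of what multi-GPU distributed model training involves (this is a standard PyTorch DistributedDataParallel sketch with a placeholder model and data, not Avalanche Computing's proprietary framework):

# Generic multi-GPU distributed training sketch using PyTorch DDP
# (placeholder model and synthetic data; illustrative only).
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE environment variables.
    dist.init_process_group(backend="nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    # Placeholder model: a single linear layer standing in for a real network.
    model = torch.nn.Linear(128, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(100):
        # Placeholder batch; in practice a DistributedSampler shards the real dataset.
        x = torch.randn(32, 128, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)

        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()   # gradients are all-reduced across GPUs automatically
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<num_gpus> train.py

Launched with torchrun, each process drives one GPU, and gradient synchronization across devices happens automatically after every backward pass.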

Our first goal is to provide a high-speed, low-deployment-time framework for AI application providers. For Asian clients, we currently focus on industrial AI and medical AI providers. For US clients, we will provide cloud-based hyper-scale inference services.
