Avalanche Computing was founded by NVIDIA USA alumni in Santa Clara, CA, USA, in 2018. At the end of 2018, the core team was built in Taipei, Taiwan. After a year of collaborating with many enterprises, Avalanche Computing Taiwan Inc. was registered in February 2020. In our first month, we were named one of Airbus's global top-10 innovation startups (March), joined the NVIDIA global Inception program, and joined NTUTEC.
The barrier to entry in AI is lower than ever before thanks to open-source software, including a number of frameworks (TensorFlow, Keras, PyTorch, etc.). However, to develop a specific AI application, a company's engineers still need to build the data pipeline, the computing environment, and the AI models. These processes remain difficult for traditional companies and SMEs.
To overcome the challenges above, we provide hyper-scale computing techniques for deep learning. The hyper-scale computing framework is an end-to-end deep learning solution. With it, clients can focus on innovation while we do the rest (for example, we provide the hyper-scale architecture, multi-GPU and distributed model training, and hyper-scale inference on 10,000+ edge devices). Clients simply plug their AI models into the framework in minutes.
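To illustrate the distributed, multi-GPU training idea mentioned above, here is a toy data-parallel SGD sketch in plain Python. It is purely illustrative and assumes nothing about Avalanche Computing's actual framework or API: each simulated "worker" (a GPU in a real setup) computes gradients on its own shard of the batch, the gradients are averaged (the "all-reduce" step), and every replica applies the same update.

```python
def grad_mse(w, xs, ys):
    """Gradient of mean-squared error for the 1-D model y = w * x."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def data_parallel_step(w, batch, n_workers, lr=0.01):
    """One SGD step with the batch sharded across n_workers replicas."""
    xs, ys = batch
    shard = len(xs) // n_workers
    grads = [
        grad_mse(w, xs[i * shard:(i + 1) * shard],
                 ys[i * shard:(i + 1) * shard])
        for i in range(n_workers)      # in reality these run concurrently on GPUs
    ]
    avg_grad = sum(grads) / n_workers  # the "all-reduce" (gradient averaging) step
    return w - lr * avg_grad

# Fit y = 3x, with each batch sharded over 4 simulated workers.
xs = [float(i) for i in range(1, 9)]
ys = [3.0 * x for x in xs]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, (xs, ys), n_workers=4)
print(round(w, 2))  # converges toward 3.0
```

Because the shards are equal-sized, averaging the per-worker gradients reproduces the full-batch gradient exactly, which is why data-parallel training matches single-device training while spreading the compute across devices.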
This is our first goal: to provide a high-speed, low-deployment-time framework for AI application providers. For now, for Asian clients we focus only on industrial AI and medical AI providers. For US clients, we will provide cloud-based hyper-scale inference services.
If you are interested in our open positions, please send your CV to: [email protected]
We provide a performance-optimized AI development and deployment workflow for AI experts. Our core technique, hyper-scale computing, leverages distributed and multi-GPU techniques to save time and money.
The product of Avalanche Computing is called the Hyper-scale computing framework. Its components are as follows:
1) Hyper-scale training engine (from 1 GPU to 8 GPUs per machine)
2) Hyper-scale inference engine (from 1 edge device to 100000+ edge devices)
3) Smart labeling (AI empowered data annotation tool)
4) Training courses on hyper-scale computing, edge computing, and NVIDIA DLI
5) Consulting services
Our vision is to be the best advanced computing technology company and hyper-scale analysis services provider for intelligence application providers and traditional industries, and, in collaboration with our clients, to build strong competitive power in the intelligence application industry.
To achieve our vision, we must have the following advantages:
1) be a computing technology leader, competitive with pure-hardware or pure-software solutions
2) be top-tier computing technology, data analysis, and AI experts
3) be the most trusted, service-oriented, maximum-total-value inventor of end-to-end intelligence software solutions
Our mission is to be the leading and most trusted computing technology provider for the intelligence application industry (medical, manufacturing, etc.) of the future.
We offer (some benefits are full-time only):
Competitive salary and extensive social benefits
Work-life balance and support for career development
Learning resources for online and in-person training (including conferences, meetups, and books)
Snack bar, refrigerator, snacks, a coffee machine with fresh beans, black tea, Japanese tea, green tea, US flower tea, and milk tea from Japan (日東紅茶)
Company outings, festive parties, lunch days, year-end (wei-ya) banquets, employee travel (in 2019 we went to Japan), and birthday cakes
Visa assistance if needed
We need candidates who can stay long-term and, at a minimum, complete the full Rotate Program. While we would still like to work with short-term or overqualified candidates, priority goes to those who can complete the entire Rotate Program, as a long-term investment in talent development.
The positions on CakeResume are for TAIWAN ONLY. Do not submit for positions here; go to LinkedIn instead.
Before you apply for any position here, please check your location and visa status.
Our company has recently built its R&D team in Taiwan; over the mid to long term, we will return to the US to continue expanding our Hyper-Scale Computing optimization technology for intelligent application vendors.
The company also subscribes to online courses, and the core team has obtained teaching resources (e.g., course resources from GCP USA and NVIDIA Santa Clara).
Arrive at the office (or start remote work) between 9:30 and 10:30 AM each day. Per Taiwan's Labor Standards Act, clocking in and out is required (via software).
The standard workday is 8 hours. For example, if you arrive at 9:30 AM, you finish at 7:00 PM (with a 1.5-hour lunch break).