∙ At least 5 years of experience in enterprise-class software development and delivery.
∙ Experience architecting and designing high-performance server-side components and big data processing pipelines using popular libraries and frameworks.
∙ Experience with TensorFlow (or similar libraries), from GPU training and efficient input pipelines (queues, the Dataset API, and the like) to deploying packaged/compiled models and distributed computing (sharding, clusters, etc.).
∙ Experience with infrastructure and resource-management tooling: message brokers (RabbitMQ, Kafka), AWS services (RDS, S3), Kubernetes, automation and CI/CD tools (Ansible, Terraform, Jenkins, etc.), and Docker.
∙ Experience with PostgreSQL and big data technologies such as Cassandra.
∙ Experience developing production code in Python and C/C++.
∙ Master's degree in Computer Science or a quantitative field, plus at least one year of industry experience.