Want to help bring speed and efficiency to big-data analytics on the cloud?
About the role:
● Join our DevOps team and stack
● Design, build and maintain a highly available 24/7 global AWS cloud infrastructure of a high-scale Data Warehouse SaaS product
● Take part in the definition of our R&D ecosystem and ALM
Requirements:
- BS or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of hands-on experience as a DevOps engineer.
- 3+ years of experience in large-scale project management of distributed systems running on AWS.
- 2+ years of experience with Infrastructure as Code tools such as Terraform, Ansible, or similar.
- 2+ years of experience with container orchestration on Kubernetes.
- Experience building and leading diverse teams of engineering and data professionals, optimizing the use of resources to achieve departmental and organizational objectives.
- Demonstrated ability to set technology direction and oversee the successful delivery and support of technology solutions (preferred).
- Experience engineering cloud and data management solutions.
- Experience developing and managing operations using messaging and database services such as Kafka, DynamoDB, Kinesis, and Cloud Pub/Sub.
- Experience with monitoring/logging technologies such as ELK, Prometheus, Graphite, and Grafana.
- Expertise in SaaS big-data analytics technologies such as Redshift, Snowflake, BigQuery, and Athena.
- Solid understanding of data security and privacy technologies.
- Strong customer focus (internal and external), with an understanding of customer centricity and design thinking to drive solutions.
- Experience directing and overseeing IT budgets, costs, and resources for cloud and data engineering initiatives.
What we offer you:
- An opportunity to make an impact on the future of the industry and be part of disruptive, groundbreaking products.
- In-depth exposure to a modern cloud-scale distributed data warehouse.
- Competitive salary and benefits (including pension plans, insurance, and more).
- IT equipment and tools to allow you to be productive.
About the project
Firebolt delivers extreme speed and elasticity at any scale, solving your impossible data challenges. Firebolt was built with three principles in mind:
About our Tech stack:
Firebolt is built on several open-source projects and unique IP that boosts data analytics and enables full scalability by decoupling compute from storage.
- SQL core teams work with C++.
- Backend teams work with Go, Python, and Rust to create microservices that expose REST APIs and a GraphQL interface.
- The team uses both CockroachDB and FoundationDB as application data storage.
- Frontend teams work with TypeScript, React, Redux + Apollo.
- CI/CD is handled by a combination of CircleCI and CodeDeploy to test and deploy code to production.
- The infrastructure is managed as code with Terraform and services are monitored using Prometheus and Grafana.