Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016, when we united top AI talent and organized the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.
About the client and role:
We are looking for a DevOps Engineer to join our client’s team. Our client is a defense tech company building a real-time battlefield monitoring platform designed to enhance situational awareness and operational decision-making. The team has already developed a working MVP and is now entering the next critical phase — scaling the solution and implementing a production-grade version based on field validation and practical deployment experience. The platform focuses on active real-time monitoring, rapid data processing, and actionable insights to support complex operational environments. This is a high-impact engineering challenge at the intersection of deep tech, hardware integration, and mission-critical software systems.
Responsibilities:
- Design, build, and operate cloud-agnostic infrastructure and platforms using Infrastructure as Code.
- Own and evolve Kubernetes platforms (Helm charts, CI/CD integration, autoscaling, RBAC, multi-environment setups).
- Implement and maintain CI/CD pipelines with a strong focus on automation, reliability, and fast feedback loops.
- Operate and scale real-time data platforms in production environments (e.g., Kafka).
- Set up, maintain, and optimize analytical data stores such as Apache Druid or ClickHouse for high-throughput workloads.
- Build comprehensive observability and monitoring (metrics, logs, tracing) and drive SLO/SLA practices.
- Embed security best practices into infrastructure and delivery pipelines (secrets management, least-privilege access, scanning).
- Design systems with high availability, disaster recovery, and fault tolerance in mind.
- Collaborate closely with engineering and data teams, acting as a technical leader and mentor on DevOps and platform practices.
Requirements:
- Expert-level knowledge of Kubernetes, including cluster management, and proficiency in Helm for package management. Solid experience with Docker.
- Deep understanding of CI/CD pipelines and best practices for automated deployment.
- Strong hands-on experience with Kafka (setup, configuration, and optimization).
- Solid grasp of modern authentication standards, including OIDC and federated authentication.
- Advanced proficiency in Linux administration and Bash scripting for task automation.
- Experience setting up and managing observability tools, specifically Prometheus.
- Good working knowledge of AWS services and infrastructure.
We offer:
- Paid vacation;
- Paid sick leaves;
- Paid holidays (according to Ukrainian law);
- PE accounting and support;
- Flexible working schedule;
- Professional development, motivation, and learning opportunities.
