
DataOps Engineer (AI Platform Engineer)
Cyprus
Why this role matters
You will design and operate an on-prem AI platform for deploying and scaling models, working across multi-node GPU clusters, distributed systems, and Kubernetes. You will be responsible for building reliable and efficient infrastructure for large-scale model inference, ensuring optimal GPU utilization, performance, and availability of the platform.
The role is based in our Limassol office, Cyprus. In case of relocation, we offer full relocation support for you and your family to make your move smooth and worry-free.
What you'll actually do
- Collaborate closely with infrastructure teams on selecting and configuring GPU servers, high-performance networking, and RDMA-enabled clusters.
- Configure and manage GPU MIG (Multi-Instance GPU) partitions based on workload requirements and model characteristics.
- Ensure reliable and scalable GPU operations in Kubernetes, including runtime integration, device plugins, and GPU scheduling capabilities.
- Design, deploy, and maintain model serving runtimes, including vLLM, ONNX Runtime, SGLang, NVIDIA Triton Inference Server, and KServe, ensuring high performance, scalability, and efficient GPU utilization.
- Build and maintain CI/CD pipelines and tooling for model packaging, versioning, and deployment, enabling reliable model delivery for internal teams.
- Build and maintain platform tooling for model lifecycle management, including experiment tracking, model versioning, and registry systems (e.g. MLflow).
- Enable infrastructure and workflows for model fine-tuning and adaptation (e.g. LoRA), focusing on scalability, reproducibility, and automation within the platform.
- Develop and support internal tooling for managing model inputs and configurations (e.g. prompt templates), enabling consistent and reusable model usage patterns.
- Conduct performance testing and evaluation of multi-node GPU clusters to identify and resolve bottlenecks.
- Build and maintain observability for GPU clusters and model workloads, including metrics such as GPU utilization, memory usage, throughput, and latency.
- Integrate tracing for model inference workflows to provide end-to-end visibility into requests and model behavior.
- Ensure compliance with security requirements for platform development.
- Evaluate and benchmark model inference performance across different runtimes, hardware setups, and configurations to guide platform optimization.
Who we’re looking for
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field
- 5+ years of experience in infrastructure, platform engineering, or distributed systems, preferably in environments involving machine learning or GPU workloads
- Strong experience with Kubernetes, including deploying and operating production workloads
- Experience with Linux-based environments
- Strong programming skills in Python and/or Go
- Experience with GPU infrastructure, including the NVIDIA or AMD stack and multi-GPU environments, is highly advantageous
- Understanding of distributed systems and multi-node workloads
- Experience with model serving and inference systems (e.g. vLLM, ONNX Runtime, SGLang, NVIDIA Triton Inference Server, KServe)
- Experience with CI/CD pipelines and automation for deploying services or models
- Experience with monitoring and observability tools (metrics, tracing, logging)
- Familiarity with networking concepts relevant to distributed systems (e.g. RDMA, high-performance networking) is a plus
- Good communication and problem-solving skills
- Advanced English for professional and business communication
- Critical thinking and attention to detail
- Decision-making skills and the ability to adapt to change
- Ability to write concise and clear documentation
- Ability to accept constructive criticism and to build relationships within the team to achieve common goals
What we offer along the way
- Competitive salary and annual performance bonus
- Full relocation support for you and your family — flights, housing, visas, and legal assistance included
- Top-tier health insurance with full family coverage — medical, dental, vision, mental health — plus life insurance for peace of mind
- Unlimited learning opportunities: external courses, English lessons, career and leadership development
- Education allowance covering school and kindergarten fees
- 21 working days of annual leave, plus public holidays and fully paid sick, maternity, and paternity leave
- Employee appreciation program: branded gifts, birthday days off, celebration budgets for weddings, newborns, and milestones
- “Get to Know the Team” trips — meet colleagues across our global hubs, along with company-wide offsites that raise the bar
- Employee share scheme — grow with us
- Branded MINI Cooper Countryman company car and private parking
- Free in-house sports clubs, Sanctum Club gym access, and jet skis
- Access to a corporate doctor
- Exclusive discount program with cafes, gyms, and local services
- Expat tax perks: up to 50% income tax exemption
- Support with the naturalisation process for relocated employees
What your journey looks like
- Intro call with Recruiter (30 minutes)
- Tech interview (90 minutes)
- Behavioural interview (60 minutes)
Please use your Exness work email for internal applications and be sure to disclose any existing conflict of interest you may have.
