System Engineer, Servers Hardware R&D Team
About Nebius AI
Nebius AI is an AI cloud platform with one of the largest GPU capacities in Europe. Launched in November 2023, the Nebius AI platform provides high-end, training-optimized infrastructure for AI practitioners. As an NVIDIA preferred cloud service provider, Nebius AI offers a variety of NVIDIA GPUs for training and inference, as well as a set of tools for efficient multi-node training.
Nebius AI owns a data center in Finland, built from the ground up by the company’s R&D team, showcasing our commitment to sustainability. The data center is home to ISEG, the most powerful commercially available supercomputer in Europe and the 16th most powerful globally (Top 500 list, November 2023).
Nebius’s headquarters are in Amsterdam, Netherlands, with teams working out of R&D hubs across Europe and the Middle East.
Nebius AI is built with the talent of more than 500 highly skilled engineers with a proven track record in developing sophisticated cloud and ML solutions and designing cutting-edge hardware. This allows all the layers of the Nebius AI cloud – from hardware to UI – to be built in-house, distinctly differentiating Nebius AI from the majority of specialized clouds: Nebius customers get a true hyperscaler-cloud experience tailored for AI practitioners. We’re growing and expanding our products every day.
About the role
Nebius is looking for a System Engineer to join the Servers Hardware R&D Team. You’re welcome to work from our office in Amsterdam.
Key Responsibilities:
- Participate in the design, deployment, and maintenance of high-performance cloud systems tailored for AI workloads.
- Troubleshoot and resolve complex system issues related to GPUs, networking (InfiniBand, NVLink), PCIe, and server infrastructure.
- Conduct deep investigations into hardware, software, and networking problems to ensure optimal system performance and reliability.
- Develop tests and test methodologies for advanced GPU, InfiniBand, and compute systems to benchmark and validate performance (a brief illustrative sketch of this kind of tooling follows this list).
- Collaborate with cross-functional teams to improve system performance and reliability.
- Monitor system performance and continuously fine-tune configurations for maximum efficiency.
- Participate in research into new technologies for future development and application in AI-centric cloud systems.
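To give a concrete flavour of the troubleshooting and test-development work above, here is a minimal, hypothetical sketch of a node health probe in Python. It assumes the standard NVIDIA driver utilities (nvidia-smi) and InfiniBand diagnostics (ibstat) are installed; it illustrates the kind of automation the role involves and is not a Nebius tool.

    #!/usr/bin/env python3
    # Illustrative sketch only (not a Nebius tool): collect basic GPU and
    # InfiniBand health data, assuming nvidia-smi and ibstat are installed.
    import subprocess

    def run(cmd):
        # Run a command and return its stdout, or an error marker if it fails.
        try:
            return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        except (OSError, subprocess.CalledProcessError) as exc:
            return "<" + cmd[0] + " failed: " + str(exc) + ">"

    if __name__ == "__main__":
        # Per-GPU index, model, temperature and utilization (CSV, no header).
        print(run(["nvidia-smi",
                   "--query-gpu=index,name,temperature.gpu,utilization.gpu",
                   "--format=csv,noheader"]))
        # Port state and rate for each InfiniBand HCA.
        print(run(["ibstat"]))

In practice, checks like these would feed a fleet-wide monitoring or burn-in pipeline rather than run ad hoc on a single node.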
Required Skills & Qualifications:
- Strong knowledge of modern server architecture, especially in high-performance GPU-based environments.
- Hands-on experience with GPUs, InfiniBand, NVLink, and PCIe.
- Proficient in Linux systems, with expertise in Python and Bash scripting for automation (see the second sketch following this list).
- Demonstrated ability to troubleshoot complex system issues, including hardware, software, and networking problems.
- Experience with deep problem investigation, root cause analysis, and resolving performance issues in cloud-based or high-performance computing environments.
- Strong analytical and problem-solving skills, with a focus on optimizing system performance.
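As an example of the Python automation this kind of root-cause work relies on, the following hypothetical sketch reads the standard Linux sysfs PCIe attributes of NVIDIA devices and flags links that have trained below their maximum speed or width, a common cause of degraded GPU bandwidth. The sysfs paths and vendor-ID check are standard Linux/PCI conventions; the script itself is purely illustrative.

    #!/usr/bin/env python3
    # Illustrative sketch only: flag NVIDIA PCIe devices whose negotiated link
    # is below the maximum supported speed or width, using standard sysfs
    # attributes exposed by the Linux PCI subsystem.
    from pathlib import Path

    NVIDIA_VENDOR_ID = "0x10de"  # PCI vendor ID assigned to NVIDIA

    def read(path):
        # Return the trimmed contents of a sysfs attribute, or "n/a" if absent.
        return path.read_text().strip() if path.exists() else "n/a"

    if __name__ == "__main__":
        for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
            if read(dev / "vendor") != NVIDIA_VENDOR_ID:
                continue
            cur = (read(dev / "current_link_speed"), read(dev / "current_link_width"))
            top = (read(dev / "max_link_speed"), read(dev / "max_link_width"))
            state = "DOWNTRAINED" if cur != top else "ok"
            print(f"{dev.name}: {cur[0]} x{cur[1]} (max {top[0]} x{top[1]}) [{state}]")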
Nice to Have:
- Experience with C/C++ programming.
- Knowledge of the Linux kernel and experience with kernel-level troubleshooting.
- Familiarity with containerization technologies (e.g., Docker, Kubernetes) in GPU-accelerated environments.
If you’re up to the challenge and are as excited about AI and ML as we are, join us!
Apply for this job