
AI Developer with Python for Customer Care AI Platform team (hybrid)

Calea Floreasca 246C, 014476 Bucharest

At IONOS, the leading European provider of cloud infrastructure, cloud services and hosting services, you will work together with a wide range of teams. We are characterized by open structures, a friendly working culture and flat hierarchies with a strong team spirit. We firmly believe that work and fun are compatible, and offer you the right environment for this. Our constant growth means that we are always looking for new colleagues. Become part of IONOS and grow with us.

About the team:

Our mission is to build a modern ecosystem for all IONOS customer support needs. The tools we develop are used in over 20 locations by more than 2,000 users, supporting 8 million customer contracts across 10 markets.

The development team has full responsibility for the development lifecycle. This means we plan, develop, test and deploy our software without any other internal or external dependencies.

Our portfolio revolves around an internally built CRM which is now being enhanced with AI capabilities. 

About the product you will be building:

We are building a next-generation AI platform designed to redefine how our company interacts with customers. This isn't just a chatbot; it's a high-performance, multimodal AI ecosystem powered by state-of-the-art Speech-to-Speech (S2S) models, advanced Large Language Models (LLMs), and intelligent orchestration frameworks. Our platform will understand, reason, and respond across text and voice — while seamlessly executing real-time actions to resolve customer needs.

We are aiming for a hybrid architecture of Open Source LLMs, industry-leading proprietary models, and Model Context Protocol (MCP) to enable contextual reasoning, tool invocation, and seamless orchestration across systems. The goal is not just to talk to the customer, but to act on their needs.
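As a rough illustration of the tool-invocation pattern described above, here is a minimal sketch of a dispatcher that executes a model-emitted tool call. The tool names, arguments, and return values are hypothetical; in a real MCP setup, tools and their JSON schemas would be advertised by an MCP server rather than registered as plain functions.

```python
import json

# Hypothetical tool registry: in a real MCP deployment these would be
# tools exposed by an MCP server, each with a declared JSON schema.
TOOLS = {
    "lookup_contract": lambda customer_id: {"customer_id": customer_id, "status": "active"},
    "open_ticket": lambda summary: {"ticket_id": "T-1", "summary": summary},
}

def dispatch(model_output: str) -> dict:
    """Parse a model's JSON tool call and execute the matching tool."""
    call = json.loads(model_output)
    tool = TOOLS[call["tool"]]        # KeyError signals an unknown tool
    return tool(**call["arguments"])  # run with the model-supplied arguments

# Example: the LLM decided to act (look up a contract) instead of just replying.
result = dispatch('{"tool": "lookup_contract", "arguments": {"customer_id": "C-42"}}')
```

The point of the pattern is that the model's output is data, not code: the dispatcher validates and routes it, so the AI can "act on the customer's needs" without direct access to internal systems.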

What makes this project unique:

The Voice Frontier: We are building low-latency, emotive speech-to-speech pipelines for a truly natural voice channel experience.

Deep System Integration: Our platform connects directly to the company's core systems via MCPs, allowing the AI to access real-time customer context and execute complex workflows.

Self-Evolving Logic: We are developing an automated QA and evaluation module that continuously analyzes interactions across channels. By programmatically measuring quality, accuracy, latency, and resolution outcomes, we close the feedback loop and adapt system behavior in hours, not weeks.

Hybrid Innovation: You’ll work at the intersection of "build vs. buy," integrating the best of the open-source community with custom-built internal infrastructure.

What's in it for you:

You won't just be shipping code; you'll help this concept evolve and take shape.
You'll join a friendly, experienced team where your voice matters and your contribution shapes real-world outcomes. You'll work in a modern environment with technologies and practices that help us ship reliable software efficiently.

Role description:

As an AI Engineer on this team, you will build the core intelligence systems behind our multimodal AI platform. You will be responsible for moving beyond simple chat interfaces to build high-performance, real-time systems that handle complex reasoning, deep context retrieval, LLM orchestration, retrieval-augmented generation (RAG) and seamless voice interactions.

Main responsibilities:

  • Design Agentic Workflows: Design and implement LLM-based systems that go beyond response generation, enabling structured tool usage, workflow orchestration, and secure interaction with internal services via MCP (Model Context Protocol).
  • Build and Optimize RAG & CAG: Develop high-performance Retrieval-Augmented Generation and Context-Augmented Generation pipelines to ensure accurate, relevant, and low-latency responses. Continuously improve context management, ranking strategies, and grounding mechanisms to support complex, multi-step interactions.
  • Voice Channel Mastery: Develop and optimize real-time Speech-to-Speech (S2S) pipelines, focusing on streaming architectures, latency reduction (including Time to First Word, TTFW) and maintaining a natural conversational flow.
  • Evaluation, Quality & Alignment: Build and maintain an automated QA module, including LLM-as-a-judge patterns, to measure accuracy, safety, latency, and resolution quality at scale. Translate evaluation insights into systematic model and prompt improvements.
  • Model Strategy & Hybrid Integration: Integrate and operate both commercial foundation models (e.g., OpenAI, Anthropic, Google) and open-source alternatives (e.g., Qwen, Kimi, DeepSeek, Moonshot, GLM), selecting and optimizing models based on performance, latency, cost, and use-case requirements.
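The LLM-as-a-judge evaluation mentioned above can be sketched roughly as follows. All names here are illustrative: `stub_judge` stands in for a real rubric-prompted model, and the 1–5 scoring scale and pass threshold are assumptions, not a description of the actual QA module.

```python
# Minimal LLM-as-a-judge evaluation loop: score (question, answer) pairs
# with a pluggable judge and aggregate the results into simple metrics.
from statistics import mean
from typing import Callable

def evaluate(transcripts: list[dict], judge: Callable[[str, str], int]) -> dict:
    """Score each transcript on a 1-5 scale and aggregate quality metrics."""
    scores = [judge(t["question"], t["answer"]) for t in transcripts]
    return {
        "mean_score": mean(scores),
        "pass_rate": sum(s >= 4 for s in scores) / len(scores),  # share judged good
    }

# Stub judge: a production system would prompt an LLM with a grading rubric
# instead of this naive keyword check.
def stub_judge(question: str, answer: str) -> int:
    return 5 if question.lower().split()[-1].strip("?") in answer.lower() else 2

report = evaluate(
    [{"question": "What is my tariff?", "answer": "Your tariff is Basic."},
     {"question": "Reset my password", "answer": "Done, check your email."}],
    judge=stub_judge,
)
```

Because the judge is just a callable, the same loop can compare commercial and open-source models as judges, which is exactly the kind of "build vs. buy" trade-off the role involves.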

We are looking for:

  • Strong Python and/or Java Engineering Skills: Advanced-level Python development experience, including asynchronous programming (e.g., FastAPI, asyncio) and building high-performance, production-grade services. Experience with streaming architectures is a strong advantage.
  • LLM Application & Multi-Agent Orchestration Experience: Hands-on experience building LLM-powered systems, including multi-step workflows, stateful agents, and tool invocation. Familiarity with orchestration frameworks such as LangChain, LlamaIndex, or LangGraph, particularly in building stateful, multi-turn agents.
  • Advanced Retrieval & Context Management: Deep understanding of vector databases (e.g., Weaviate, Qdrant, pgvector, Elasticsearch), semantic search, embedding strategies, and re-ranking techniques. Experience designing and optimizing RAG pipelines.
  • Real-Time & Low-Latency Systems: Experience in designing systems that operate under latency constraints, including streaming APIs, event-driven architectures, and performance optimization. Understanding of trade-offs between quality, cost, and response time.
  • Evaluation-Driven Development: Experience in implementing evaluation frameworks for LLM-based systems, including automated QA pipelines and LLM-as-a-judge patterns.
  • Familiarity with API Design: Knowledge of RESTful API design and OAuth2.
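To illustrate the kind of low-latency streaming work involved, here is a small asyncio sketch that measures time to first word (TTFW) on a simulated token stream. The generator is a stand-in for a real S2S or LLM stream; the token text and per-token delay are invented for the example.

```python
# Consume an async token stream and record time-to-first-word (TTFW),
# the latency metric called out in the voice-channel responsibilities.
import asyncio
import time

async def fake_token_stream():
    """Stand-in for a real model stream: yields tokens with simulated latency."""
    for token in ["Hello", " there", ", how", " can I help?"]:
        await asyncio.sleep(0.01)  # simulated per-token model latency
        yield token

async def consume(stream):
    start = time.perf_counter()
    ttfw = None
    chunks = []
    async for token in stream:
        if ttfw is None:
            ttfw = time.perf_counter() - start  # latency until the first token
        chunks.append(token)
    return "".join(chunks), ttfw

reply, ttfw = asyncio.run(consume(fake_token_stream()))
```

Streaming token-by-token is what keeps TTFW low even when full-response latency is fixed: the caller can start speaking (or rendering) as soon as the first chunk arrives.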

What we offer:

  • Access to local/international trainings, development and growth opportunities, including access to e-learning platforms, covering both technical and soft skills areas;
  • Modern technologies, product responsibility;
  • Flexible work schedule;
  • Hybrid work option;
  • Medical services package from one of two private providers;
  • 25 vacation days per year;
  • Substitute days off for public holidays that occur on the weekend;
  • Meal tickets;
  • Internal referral program;
  • Team events, networking events organized to promote a passionate, creative and diverse culture;
  • Summerfest and Winterfest parties;
  • Of course, coffee, soft drinks and fresh fruits are on us in the office.

About IONOS Romania

Our Engineering Center has lived and breathed technology since 2003. We proudly Plan, Build and Run the products and services we are responsible for.

Our engineering culture is shaped by cloud-native technologies and microservices, combined with a DevOps attitude and an innovative mindset. We have a customer-centric culture where every colleague contributes to our products' design and success, working closely with teams in Bucharest and abroad. Our strength lies in our team spirit and positive, respectful interaction.

We value diversity and welcome all applications - regardless of, for example, gender, nationality, ethnic or social origin, religion, disability, age as well as sexual orientation and identity, physical characteristics, marital status or any other irrelevant factor subject to applicable law.
