
MLOps Engineer - Berlin

Cherry Ventures is supporting our portfolio with this hire

Data Engineer - MLOps Focus

Berlin-based - m/f/d - full-time

What we do at Plato

Plato is building the digital backbone of the global trade economy. Starting with the $48T wholesale industry, we empower the modern wholesaler to connect their people and data in a single analytics and workflow hub. By leveraging data science and AI, we automate workflows and combat labor shortages, making SMB wholesalers competitive with large corporations.

Why we do what we do

The future of wholesale is data-driven. Contrary to popular opinion, industrial SMEs are ready to take the step toward more proactive processes but lack the technology to get them there. Our founders come from a wholesale family and have assembled a rock-star team of alumni from Big Tech, VCs, and top-tier consulting firms to reshape the operations of this $48T industry. Our initial product leverages cutting-edge data science to provide customized demand forecasts and product recommendations, combined with intelligent workflow automation.

We are about to create category-defining software. Our primary customers are C-suite executives within large-scale wholesale and distribution businesses. We are committed to helping them enhance their decision-making and optimize their operations through the smart use of their data - bringing their operations into the 21st century! But don't just take it from us: we are backed by top-tier EU & US VCs, advisors, and angels providing insights from some of the best SME tech companies, such as Miro, Celonis, Personio, Workday, Forto, and Microsoft.

What we’re looking for

MLOps = DataOps + DevOps + ModelOps

At Plato, we are building a real-time data platform. As a member of our team, you will play a pivotal role in developing and maintaining the infrastructure that powers our ML solutions.

A significant challenge we face is onboarding a large number of customers onto our platform, where automation and repeatability are crucial for success. This role combines the best of data engineering with the demands of machine learning operations, ensuring the seamless operation of data pipelines and ML models from development to production.

The ideal candidate is an innovative problem-solver who is passionate about both data engineering and MLOps. You will thrive in our dynamic startup environment, wearing multiple hats when required to meet our evolving needs. Your ability to collaborate across teams, automate processes, and ensure repeatability will be key to driving our data and ML infrastructure forward.

What you’d be working on

  • Design, develop, and manage scalable and robust data pipelines that support machine learning models in production.
  • Implement and maintain CI/CD pipelines for data and ML workflows, ensuring smooth transitions from development to production.
  • Automate the onboarding process for new customers, ensuring scalability and repeatability across deployments.
  • Collaborate with data scientists and AI engineers to optimize and automate data processing and model deployment.
  • Ensure the seamless integration of ML models with production environments, enabling real-time and batch inference capabilities.
  • Develop and maintain tools and frameworks for automated data management, model retraining, and monitoring.
  • Utilize MLflow for model tracking, versioning, and lifecycle management (see the short sketch after this list).
  • Implement model serving solutions to deploy and manage real-time inference pipelines.
  • Work closely with our product, engineering, and data science teams to build and maintain a data platform that scales with our growing customer base.
  • Establish and enforce best practices for data and ML pipeline versioning, experimentation, and reproducibility.
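
To make the MLflow and model-serving bullets above a little more concrete, here is a minimal sketch of experiment tracking and model registration with MLflow. It is not Plato's actual pipeline; the experiment name, toy dataset, and registered model name are placeholders.

  # Illustrative only: experiment name, model, and registry name below are hypothetical.
  import mlflow
  import mlflow.sklearn
  from sklearn.datasets import make_regression
  from sklearn.ensemble import RandomForestRegressor
  from sklearn.metrics import mean_absolute_error
  from sklearn.model_selection import train_test_split

  # Toy stand-in for a demand-forecasting dataset; a real pipeline would read curated tables.
  X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=42)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

  mlflow.set_experiment("demand-forecast-demo")  # hypothetical experiment name

  with mlflow.start_run():
      model = RandomForestRegressor(n_estimators=100, random_state=42)
      model.fit(X_train, y_train)

      # Track parameters and an evaluation metric for this run.
      mlflow.log_param("n_estimators", 100)
      mlflow.log_metric("mae", mean_absolute_error(y_test, model.predict(X_test)))

      # Log and register the model so a serving endpoint can later load a versioned artifact.
      mlflow.sklearn.log_model(
          model,
          artifact_path="model",
          registered_model_name="demand-forecast-demo",  # hypothetical registry name
      )

In a CI/CD flow, a registered model version like this would typically pass automated checks before being promoted to a real-time serving endpoint.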

What you bring along

  • 3+ years of experience in data engineering, ML engineering, or a related field, with a strong focus on MLOps.
  • Experience in building and managing data pipelines that feed into machine learning models.
  • Strong knowledge of Python; experience with PySpark is appreciated.
  • Hands-on experience with data engineering tools and platforms; a background in Databricks is highly valued.
  • Experience with MLflow for model tracking and lifecycle management.
  • Familiarity with model serving technologies and deploying models for real-time inference.
  • Familiarity with CI/CD tools and processes, including Jenkins, Git, or similar.
  • Ability to wear multiple hats and thrive in a startup environment where flexibility and initiative are key.
  • A mindset that embraces continuous improvement, automation, and scalability, particularly in customer onboarding and deployment processes.

The tools you will be using

  • Python
  • PySpark (see the short snippet after this list)
  • Databricks
  • MLflow
  • Model Serving Technologies
  • CI/CD Tools (e.g., Jenkins, Git)
  • Cloud Platforms (AWS)
  • Data Engineering Tools (Apache Spark, dbt)
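
To give a flavor of the data-engineering side of this stack, the snippet below sketches a small PySpark batch aggregation of the kind such pipelines might run. It is illustrative only: the input path, column names, and aggregation are made up, and a production job on Databricks would more likely read from and write to Delta tables.

  # Illustrative only: the path and column names below are hypothetical.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("monthly-revenue-demo").getOrCreate()

  # A Databricks job would typically read a curated Delta table; a CSV stands in here.
  orders = spark.read.csv("/tmp/orders.csv", header=True, inferSchema=True)

  # Aggregate revenue per customer and month as a simple input for downstream forecasting models.
  monthly_revenue = (
      orders
      .withColumn("order_month", F.date_trunc("month", F.col("order_date")))
      .groupBy("customer_id", "order_month")
      .agg(F.sum("amount").alias("monthly_revenue"))
  )

  monthly_revenue.write.mode("overwrite").parquet("/tmp/monthly_revenue")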

Cherry Ventures is an equal opportunity employer and values diversity. We do not discriminate on the basis of race, religion, colour, national origin, gender, sexual orientation, age, marital status, or disability status.

Apply for this job

Your personal data will be retained by Controller as long as Controller determines it is necessary to evaluate your application for employment. Under GDPR, you have the right to request access to your personal data, to request that your personal data be rectified or erased, and to request that processing of your personal data be restricted. You also have the right to data portability. In addition, you may lodge a complaint with an EU supervisory authority.