
Nebius

ML Engineer, Large Language Models (LLM Training & Inference Optimization)

Job Summary

This role involves architecting and implementing distributed training and inference pipelines for large AI models, with a focus on optimizing performance through techniques such as parallelism and custom kernels. Candidates should have expertise in deep learning frameworks, GPU programming, and distributed systems, along with strong software engineering skills. The position offers opportunities for professional growth within a collaborative, innovative environment, with benefits including a competitive salary and hybrid work options. It is ideal for experienced engineers passionate about AI and ML development in dynamic settings.

Required Skills

Python
CI/CD
Communication Skills
Machine Learning
Distributed Systems
Software Engineering
High-Performance Computing
Deep Learning Frameworks
CUDA
Performance Tuning
Version Control
Triton
Parallelism Techniques
Inference Optimization
Neural Network Training

Benefits

Competitive Salary
Professional Growth Opportunities
Comprehensive Benefits Package
Collaborative Work Environment
Hybrid Working Arrangements

Job Description

Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 800 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.

The role

This role is for Nebius AI R&D, a team focused on applied research and the development of AI-heavy products. Examples of applied research that we have recently published include:

  • investigating how test-time guided search can be used to build more powerful agents;
  • dramatically scaling task data collection to power reinforcement learning for SWE agents;
  • maximizing efficiency of LLM training on agentic trajectories.

One example of an AI product that we are deeply involved in is Nebius AI Studio — an inference and fine-tuning platform for AI models.

We are currently looking for senior- and staff-level ML engineers to work on optimizing training and inference performance in large-scale multi-GPU, multi-node setups.

This role will require expertise in distributed systems and high-performance computing to build, optimize, and maintain robust pipelines for training and inference.

Your responsibilities will include:

  • Architecting and implementing distributed training and inference pipelines that leverage techniques such as data, tensor, context, expert (MoE) and pipeline parallelism.
  • Implementing various inference optimization techniques, e.g. speculative decoding and its extensions (Medusa, EAGLE, etc.), CUDA graphs, and compile-based optimizations.
  • Implementing custom CUDA/Triton kernels for performance-critical neural network layers.
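
For context only, here is a minimal sketch of the kind of fused custom kernel the last bullet refers to, written in Python with Triton. It is not Nebius code: the fused operation (bias add followed by ReLU), the tensor layout, and the block size are assumptions chosen purely for illustration.

import torch
import triton
import triton.language as tl

@triton.jit
def fused_bias_relu_kernel(x_ptr, bias_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of the flattened tensors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    b = tl.load(bias_ptr + offsets, mask=mask)
    # Fusing the bias add and the activation avoids an extra round trip to HBM.
    tl.store(out_ptr + offsets, tl.maximum(x + b, 0.0), mask=mask)

def fused_bias_relu(x: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    # Toy simplification: assumes x and bias are same-shaped CUDA tensors.
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    fused_bias_relu_kernel[grid](x, bias, out, n, BLOCK_SIZE=1024)
    return out

In practice such kernels fuse more work and are tuned per GPU architecture; the sketch only shows the basic program structure.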

We expect you to have:

  • A profound understanding of the theoretical foundations of machine learning
  • Deep understanding of the performance aspects of large neural network training and inference (data/tensor/context/expert parallelism, offloading, custom kernels, hardware features, attention optimizations, dynamic batching, etc.; a toy parallelism sketch follows this list)
  • Expertise in at least one of the following areas:
    • Implementing custom efficient GPU kernels in CUDA and/or Triton
    • Training large models on multiple nodes and implementing various parallelism techniques
    • Inference optimization techniques: disaggregated prefill/decode, paged attention, continuous batching, speculative decoding, etc.
  • Strong software engineering skills (we mostly use Python)
  • Deep experience with modern deep learning frameworks (we use JAX & PyTorch)
  • Proficiency in contemporary software engineering approaches, including CI/CD, version control and unit testing
  • Strong communication skills and the ability to work independently
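
As a toy point of reference for the parallelism topics above (again, not Nebius code), here is a minimal data-parallel sharding sketch in JAX. The mesh axis name, array shapes, and the choice to replicate the weight are assumptions made only for illustration; real pipelines combine several parallelism axes.

import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# A 1-D device mesh; multi-node setups would use a multi-dimensional mesh
# (e.g. separate axes for data, tensor, and pipeline parallelism).
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

x = jnp.ones((8, 1024))     # toy activation batch (assumed divisible across devices)
w = jnp.ones((1024, 1024))  # toy weight matrix

# Shard the batch dimension across devices (data parallelism); replicate the weight.
x = jax.device_put(x, NamedSharding(mesh, P("data", None)))
w = jax.device_put(w, NamedSharding(mesh, P(None, None)))

@jax.jit
def forward(x, w):
    # XLA inserts the collectives implied by the input and output shardings.
    return x @ w

y = forward(x, w)

Tensor or expert parallelism follows the same pattern, with weights sharded along a "model" axis instead of being replicated.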

Nice to have:

  • Familiarity with modern LLM inference frameworks (vLLM, SGLang, TensorRT-LLM, Dynamo)
  • Familiarity with important ideas in LLM space, such as MHA, RoPE, ZeRO/FSDP, Flash Attention, and quantization (a short RoPE sketch follows this list)
  • Bachelor’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field; Master’s or PhD preferred
  • Track record of building and delivering products (not necessarily ML-related) in a dynamic startup-like environment
  • Experience in engineering complex systems, such as large distributed data processing systems or high-load web services
  • Open-source projects that showcase your engineering prowess
  • Excellent command of the English language, alongside superior writing, articulation, and communication skills
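
Purely to illustrate one of the ideas named in the list above (RoPE), here is a minimal rotary position embedding helper in PyTorch. The tensor layout, base frequency, and the rotate-half formulation are assumptions made for this sketch, not a reference implementation.

import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # x: (seq_len, num_heads, head_dim) with an even head_dim.
    seq_len, _, head_dim = x.shape
    half = head_dim // 2
    # One frequency per channel pair, decreasing geometrically with channel index.
    inv_freq = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq  # (seq_len, half)
    cos = angles.cos()[:, None, :]  # broadcast over heads
    sin = angles.sin()[:, None, :]
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1, x2) channel pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

Applied to query and key tensors before attention, this makes attention scores depend on relative positions.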

What we offer

  • Competitive salary and comprehensive benefits package.
  • Opportunities for professional growth within Nebius.
  • Hybrid working arrangements.
  • A dynamic and collaborative work environment that values initiative and innovation.

We’re growing and expanding our products every day. If you’re up to the challenge and are excited about AI and ML as much as we are, join us!


Application deadline: Open until filled

Date Posted: July 24th, 2025
Job Type: Full Time
Location: Amsterdam, Netherlands; London, United Kingdom; Remote - Europe
Salary: Competitive rates
Note: the remote option requires residency in the Netherlands or the United Kingdom.

