Principal Research Scientist – Scaling

ML / AI · San Francisco, California

Job Description

P-1227

About Databricks AI

At Databricks, we are obsessed with enabling data teams to solve the world’s toughest problems, from security threat detection to cancer drug development, by building and running the world’s best data and AI platform. The Databricks AI Research organization enables companies to develop AI models and systems using their own data, from pre-training LLMs from scratch to state-of-the-art retrieval-augmented generation, by producing novel science and putting it into production.

We believe a company’s AI models are a core part of their IP, and that high‑quality AI models should be available to all.

About the Scaling Research Team

The Databricks AI Scaling team focuses on pushing the boundaries of large language model (LLM) training and inference efficiency beyond what is required to support existing models. The team explores novel avenues for scaling and efficiency improvements across algorithms, systems, and infrastructure, requiring researchers who can both drive independent research agendas and dive deep into low‑level implementation details with engineering partners.

Role Summary

As a Principal Research Scientist – Scaling, you will lead a team of world‑class researchers and engineers to advance the state of the art in large‑scale machine learning, focusing on post-training, RL, inference efficiency, optimization, and scaling. You will define and execute a research roadmap that advances the Databricks AI platform and delivers tangible improvements to how customers train, serve, and adapt LLMs at scale, working closely with product, data, and engineering leaders to bring cutting‑edge methods into production.

The Impact You Will Have

  • Lead and grow a multidisciplinary research team focused on foundational and applied AI problems, with a particular emphasis on LLM scaling, efficiency, and systems performance.
  • Define the scaling research roadmap in alignment with Databricks’ strategic objectives, prioritizing advances in foundation model efficiency and large‑scale training and inference.
  • Drive algorithmic innovations for large‑scale neural network training and inference, including novel optimizers, low‑precision techniques, and model adaptation methods, and guide your team in rigorous empirical validation against state‑of‑the‑art approaches.
  • Optimize end‑to‑end ML systems for distributed training and RL, memory efficiency, and compute efficiency through close collaboration with core systems and platform teams, ensuring that research ideas translate into performant, reliable infrastructure.
  • Partner with product and engineering to translate research breakthroughs, especially around scaling and efficiency, into customer‑impacting capabilities in the Databricks AI platform.
  • Foster a culture of scientific excellence and openness, including high‑quality research practices, reproducible experimentation, and effective internal knowledge sharing across Databricks AI.
  • Represent Databricks AI research externally through top‑tier publications, conference talks, and collaborations with academia and the open‑source community, with a focus on optimization and efficiency for large‑scale models.
  • Mentor and develop talent, providing both technical guidance (research agendas, experimentation, implementation) and career development support for research scientists and engineers.

What You Will Do

  • Define and lead independent research programs on foundation model efficiency, covering topics such as optimizer design, low‑precision training/inference, scalable model architectures, and efficient adaptation methods.
  • Oversee the design and execution of large‑scale experiments, including benchmarking against state‑of‑the‑art methods and evaluating trade‑offs in quality, latency, throughput, and cost.
  • Work hands‑on with your team on high‑quality, efficient code in Python and PyTorch for research implementation, rapid prototyping, and integration with Databricks’ production systems.
  • Collaborate with distributed systems and infra teams to push the limits of distributed training, parallelism strategies, memory management, and hardware utilization for LLMs and other large models.
  • Establish metrics, evaluation protocols, and best practices for scaling‑focused research (e.g., training efficiency, inference cost, energy usage) and drive their adoption across Databricks AI.
  • Champion responsible and robust deployment of scaling innovations, ensuring that model behavior, reliability, and safety remain first‑class considerations.

What We Look For

  • Proven ability to lead a research team to develop novel techniques for foundation model efficiency and related topics, with a strong track record of industry impact.
  • Deep expertise in at least one of: generative AI, LLMs, distributed ML systems, model optimization, or responsible AI, with a strong emphasis on scaling and efficiency for large‑scale neural networks.
  • Hands‑on leadership: strong programming skills and a demonstrated ability to write high‑quality, efficient code in Python and PyTorch for research implementation and experimentation.
  • Demonstrated ability to translate research innovation into scalable product capabilities in partnership with product and engineering teams.
  • Excellent communication, leadership, and stakeholder management skills, with experience influencing cross‑functional roadmaps and aligning research with business impact.

Nice to Have

  • Prior work at the intersection of systems and ML, such as distributed training frameworks, compiler and kernel optimization for deep learning workloads, or memory‑/compute‑efficient model design.
  • Strong industry and academic network in large‑scale ML, with ongoing collaborations or service (e.g., PC/area chair) at top conferences in ML and systems.
  • A strong record of research impact—such as first‑author publications at top ML/systems conferences (e.g., ICLR, ICML, NeurIPS, MLSys), influential open‑source contributions, or widely used deployed systems—especially in optimization or efficiency.



Pay Range Transparency

Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed below. For more information regarding which range your location is in, visit our page here.

Local Pay Range: $280,000 – $350,000 USD

About Databricks

Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits

At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.

Our Commitment to Diversity and Inclusion

At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance

If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
