Researcher, Post Training
Job Description
About Cartesia
Our mission is to architect AI that learns from and interacts with the world like humans do.
We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences.
We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks and others. We're fortunate to have the support of many amazing advisors, and 90+ angels across many industries, including the world's foremost experts in AI.
About the Role
The next leap in model intelligence won’t come from scale alone — it will come from better post-training and alignment. Cartesia’s Post-Training team is developing the methods and systems that make multimodal models truly adaptive, aligned, and grounded in human intent.
As a Researcher on the Post-Training team, you’ll work at the intersection of machine learning research, alignment, and infrastructure, designing new techniques for preference optimization, model evaluation, and feedback-driven learning. You’ll explore how feedback signals can guide models to reason more effectively across modalities, and you’ll build the infrastructure to measure and improve these behaviors at scale.
Your work will directly shape how Cartesia’s foundation models learn, improve, and ultimately connect with people.
Your Impact
Own research initiatives to improve the alignment and capabilities of multimodal models
Develop new post-training methods and evaluation frameworks to measure model improvement
Partner closely with research, product, and platform teams to define best practices for creating specialized models
Implement, debug, and scale experimental systems to ensure reliability and reproducibility across training runs
Translate research findings into production-ready systems that enhance model reasoning, consistency, and human alignment
What You Bring
Deep knowledge of preference optimization and alignment methods, including RLHF and related approaches
Experience designing evaluations and metrics for generative or multimodal models
Strong engineering and debugging skills, with experience building or scaling complex ML systems
Ability to trace and diagnose complex behaviors in model performance across the training and evaluation pipeline
Nice-To-Haves
Experience with multimodal model training (e.g., text, audio, or vision-language models)
Contributions to alignment research or open-source projects related to model evaluation or fine-tuning
Background in designing or implementing human-in-the-loop evaluation systems
More Details
🏢 In-office policy: We're an in-person team based out of offices in 🇺🇸 San Francisco, 🇬🇧 London and 🇮🇳 Bangalore. We love being in the office, hanging out together, and learning from each other every day.
🌎 Visa sponsorship: We provide visa sponsorship support and assess each circumstance on a case-by-case basis. However, visa sponsorship depends on many factors, including the role you are applying for and the location where you will be based, so we can't always guarantee success. Your Recruiter will work with you to understand your visa sponsorship needs from the first call.
🚢 We ship fast. All of our work is novel and cutting edge, and execution speed is paramount. We have a high bar, and we don’t sacrifice quality or design along the way.
🤝 We support each other. We have an open & inclusive culture that’s focused on giving everyone the resources they need to succeed.
Our Benefits
💰 Compensation. Competitive base salary alongside attractive equity package.
🚆 Commuter Allowance. A monthly stipend to help you get to and from the office.
🏖️ Flexible PTO. Take as much time as you need to recharge your batteries.
🍲 Meals & Snacks. Lunch, dinner and plenty of snacks, provided daily.
🦖 Your own personal Yoshi.