
Software Engineer - GenAI inference

Databricks

San Francisco, California
Posted on

Job description

<p>P-1284</p> <h3><strong>About This Role</strong></h3> <p>As a software engineer for GenAI inference, you will help design, develop, and optimize the inference engine that powers Databricks’ Foundation Model API. You’ll work at the intersection of research and production, ensuring our large language model (LLM) serving systems are fast, scalable, and efficient. Your work will touch the full GenAI inference stack, from kernels and runtimes to orchestration and memory management.</p> <h3><strong>What You Will Do</strong></h3> <ul> <li>Contribute to the design and implementation of the inference engine, and collaborate on a model-serving stack optimized for large-scale LLM inference</li> <li>Collaborate with researchers to bring new model architectures and features (sparsity, activation compression, mixture-of-experts) into the engine</li> <li>Optimize for latency, throughput, memory efficiency, and hardware utilization across GPUs and other accelerators</li> <li>Build and maintain instrumentation, profiling, and tracing tooling to uncover bottlenecks and guide optimizations</li> <li>Develop and enhance scalable routing, batching, scheduling, memory management, and dynamic loading mechanisms for inference workloads</li> <li>Support reliability, reproducibility, and fault tolerance in the inference pipelines, including A/B launches, rollbacks, and model versioning</li> <li>Integrate with federated, distributed inference infrastructure: orchestrate across nodes, balance load, and handle communication overhead</li> <li>Collaborate cross-functionally with platform engineers, cloud infrastructure, and security/compliance teams</li> <li>Document and share learnings, contributing to internal best practices and open-source efforts when possible</li> </ul> <h3><strong>What We Look For</strong></h3>