Key Responsibilities:
Help fal maintain its frontier position in model performance for generative media models.
Design and implement novel approaches to model serving architecture on top of our in-house inference engine, focusing on maximizing throughput while minimizing latency and resource usage.
Develop performance monitoring and profiling tools to identify bottlenecks and optimization opportunities.
Work closely with our Applied ML team and customers (frontier labs in the media space) to ensure their workloads benefit from our accelerator.
Requirements:
Strong foundation in systems programming with expertise in identifying and fixing bottlenecks.
Deep understanding of the cutting-edge ML infrastructure stack (anything from PyTorch, TensorRT, and TransformerEngine to Nsight), including model compilation, quantization, and serving architectures. Ideally, you follow developments in these systems closely as they happen.
A fundamental view of the underlying hardware (Nvidia-based systems at the moment), and the ability to go deeper into the stack to fix bottlenecks when necessary (e.g., custom GEMM kernels with CUTLASS for common shapes).
Proficient in Triton, or willing to learn it given comparable experience in lower-level accelerator programming (a minimal kernel sketch appears after this list).
Interest in the new frontier of multi-dimensional model parallelism (combining multiple parallelism techniques, e.g., tensor parallelism with context parallelism or sequence parallelism).
Familiar with the internals of Ring Attention, FlashAttention-3 (FA3), and FusedMLP implementations.
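
To give a concrete flavor of the Triton work mentioned above, here is a minimal, hypothetical kernel sketch (a masked elementwise add, the standard introductory pattern). It is illustrative only, not fal production code, and the names in it are made up for the example:

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # Each program instance processes one BLOCK_SIZE-wide tile of the inputs.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements  # guard the ragged tail tile
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Launch a 1-D grid with one program per 1024-element tile.
        out = torch.empty_like(x)
        n = out.numel()
        grid = (triton.cdiv(n, 1024),)
        add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
        return out

In practice the kernels for this role are far more involved (fused attention variants, GEMM epilogues, quantized matmuls), but the tiling, masking, and launch-grid reasoning shown here is the common foundation.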
