Staff Software Engineer, ML Performance & Systems
fal is the generative media ecosystem powering the next generation of AI products. We build the infrastructure, tools, and model access that teams need to move from idea to production, and do it at scale without compromise. For developers and enterprises, fal is the foundation that makes generative media not just possible, but practical: a unified platform where high-performance inference, orchestration, and observability come together to unlock new categories of AI-native products.
As generative media reshapes industries across a market projected to grow by hundreds of billions over the next decade, fal is becoming the ecosystem that ambitious teams build on.
About this role:
Help fal maintain its frontier position on performance for generative media models. You will design novel model serving approaches on top of our in-house inference engine, build performance monitoring and profiling tools, and work directly with our Applied ML team and customers, including frontier labs in the media space.
Key Responsibilities:
- Help fal maintain its frontier position on model performance for generative media models.
- Design and implement novel approaches to model serving architecture on top of our in-house inference engine, focusing on maximizing throughput while minimizing latency and resource usage.
- Develop performance monitoring and profiling tools to identify bottlenecks and optimization opportunities.
- Work closely with our Applied ML team and customers (frontier labs in the media space) to ensure their workloads benefit from our accelerator.
Requirements:
- Strong foundation in systems programming, with expertise in identifying and fixing performance bottlenecks.
- Deep understanding of the cutting-edge ML infrastructure stack (from PyTorch, TensorRT, and TransformerEngine to Nsight), including model compilation, quantization, and serving architectures. Ideally, you closely follow developments in these systems as they happen.
- A fundamental view of the underlying hardware (NVIDIA-based systems at the moment), and the ability to go deeper into the stack when necessary to fix bottlenecks (e.g., custom GEMM kernels written with CUTLASS for common shapes).
- Proficiency in Triton, or a willingness to learn it backed by comparable experience in lower-level accelerator programming.
- Experience at the new frontier of multi-dimensional model parallelism: combining multiple parallelism techniques, such as tensor parallelism (TP) with context parallelism and sequence parallelism.
- Familiarity with the internals of Ring Attention, FlashAttention-3 (FA3), and FusedMLP implementations.
What we offer at fal:
- Interesting and challenging work
- Competitive salary and equity
- A lot of learning and growth opportunities
- Relocation assistance to San Francisco
- Health, dental, and vision insurance (US)
- Regular team events and offsites
Compensation:
- $180,000 - $250,000 + equity + comprehensive benefits package
Location:
- We are currently hiring in downtown San Francisco.
