About the Team
We are building an MLIR-based compiler for a new non–von Neumann architecture, unlocking massive parallelism and efficiency. Our work sits at the frontier of compilers, architectures, and AI-driven code generation. Engineers here work hands-on with new chips, from primitives to ML frameworks. Researchers tackle fundamental questions in autonomous code generation and optimization. Together, we design the software stack that will make tomorrow’s hardware usable.
About the Role
As an AI compiler engineer, you’ll take on a central role in building the core infrastructure that connects advanced hardware with modern ML systems. You will:
Build the compiler layer: integrate a new non–von Neumann accelerator into ML frameworks like PyTorch and TensorFlow using MLIR.
Bridge high-level ML to low-level compute: transform model graphs into optimized kernels and collaborate with low-level engineers on core operations.
Work close to the metal: dive into C++ libraries and hardware APIs to configure cores, control execution, and fine-tune data flows.
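The "bridge high-level ML to low-level compute" work above is, at its core, graph transformation. As a minimal sketch (all names here, `Op`, `fuse_add_relu`, are illustrative, not part of MLIR or any real framework), a kernel-fusion pass over a toy op graph looks like this:

```python
from dataclasses import dataclass, field

@dataclass
class Op:
    name: str             # e.g. "matmul", "add", "relu"
    inputs: list          # names of input values
    output: str           # name of the produced value

def fuse_add_relu(graph):
    """Fuse each add immediately consumed by a relu into one fused op,
    a classic pass that saves a round-trip through memory."""
    fused, i = [], 0
    while i < len(graph):
        op = graph[i]
        nxt = graph[i + 1] if i + 1 < len(graph) else None
        if (op.name == "add" and nxt is not None
                and nxt.name == "relu" and nxt.inputs == [op.output]):
            fused.append(Op("add_relu", op.inputs, nxt.output))
            i += 2
        else:
            fused.append(op)
            i += 1
    return fused

# A linear layer with activation: y = relu(x @ w + b)
graph = [
    Op("matmul", ["x", "w"], "t0"),
    Op("add",    ["t0", "b"], "t1"),
    Op("relu",   ["t1"],      "y"),
]
optimized = fuse_add_relu(graph)
print([op.name for op in optimized])   # ['matmul', 'add_relu']
```

A production compiler expresses the same idea as pattern-rewrite rules over MLIR dialects rather than a hand-written list walk, but the shape of the problem, matching producer/consumer ops and replacing them with a fused kernel, is the same.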
You might thrive in this role if you have
Strong C/C++ and performance optimization skills
Low-level programming on GPUs/TPUs/DSPs/FPGAs (CUDA, OpenCL, SYCL)
Experience with LLVM/MLIR and compiler backends
Understanding of ML models and core ops (matmul, conv, etc.)
Experience integrating PyTorch/TensorFlow models with custom hardware
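To make "core ops" concrete: a backend engineer's job is to reproduce the reference semantics of ops like matmul, however aggressively the hardware version tiles, vectorizes, or parallelizes. A minimal reference matmul (illustrative only, pure Python) is:

```python
def matmul(a, b):
    """Naive dense matrix multiply: c[i][j] = sum_p a[i][p] * b[p][j].
    This defines the semantics any optimized kernel must match."""
    n, k = len(a), len(a[0])
    m = len(b[0])
    assert len(b) == k, "inner dimensions must agree"
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                c[i][j] += a[i][p] * b[p][j]
    return c

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19.0, 22.0], [43.0, 50.0]]
```

Testing an accelerator kernel against a simple oracle like this is standard practice when bringing up a new backend.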