Software - Compiler DevOps Engineer
Seoul, South Korea (On-site)
Responsibilities
Build and Maintain Tooling Infrastructure: Develop and maintain specialized tools to support the NPU compiler workflow, including automated testing frameworks, debugging tools, and performance profiling utilities tailored for AI workloads.
Testing Frameworks and Automation: Design and implement automated testing frameworks to ensure the reliability, accuracy, and stability of the NPU compiler across diverse AI models and NPU configurations (an illustrative sketch follows this list).
Continuous Integration and Deployment (CI/CD): Establish and manage CI/CD pipelines that streamline the integration, testing, and deployment of compiler features, with a focus on ensuring NPU stability and supporting AI model compatibility.
Collaboration and Documentation: Work closely with compiler engineers to identify tooling needs specific to NPU compiler requirements, document infrastructure processes, and ensure that tools are efficient, reliable, and accessible for the team.
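For illustration only, a minimal sketch of the kind of automated regression harness this role would build: it batch-compiles a directory of test models with the compiler under test and fails the run if any model does not compile. The `npu-compile` CLI, its `--target` flag, and the `models/` directory are hypothetical placeholders, not references to an actual toolchain.

```python
#!/usr/bin/env python3
"""Minimal regression-harness sketch: compile a directory of ONNX models, report failures."""
import subprocess
import sys
from pathlib import Path

MODEL_DIR = Path("models")                         # assumed location of test models
COMPILER = ["npu-compile", "--target", "npu-v1"]   # hypothetical compiler invocation
TIMEOUT_S = 300                                    # guard against hangs in CI


def compile_model(model: Path) -> bool:
    """Return True if the compiler exits cleanly for this model."""
    result = subprocess.run(
        COMPILER + [str(model)],
        capture_output=True,
        text=True,
        timeout=TIMEOUT_S,
    )
    if result.returncode != 0:
        print(f"FAIL {model.name}\n{result.stderr}", file=sys.stderr)
    return result.returncode == 0


def main() -> int:
    models = sorted(MODEL_DIR.glob("*.onnx"))
    failures = [m for m in models if not compile_model(m)]
    print(f"{len(models) - len(failures)}/{len(models)} models compiled successfully")
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main())
```

A harness like this would typically run as one stage of the CI/CD pipelines described above, so that every compiler change is exercised against the model suite before merge.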
Minimum Qualifications
Bachelor’s degree in Computer Science, Electrical Engineering, or a closely related field, or equivalent practical experience.
Proficiency in at least one programming language (e.g., Python, C/C++, Go) and a strong understanding of software development best practices, including version control (Git) and code reviews.
Experience setting up and maintaining CI/CD pipelines using industry-standard tools and practices (e.g., Jenkins, GitLab CI, GitHub Actions).
Hands-on experience creating automated testing frameworks and tooling infrastructure, particularly within a Linux-based development environment.
Solid understanding of containerization, orchestration, and virtualization technologies (e.g., Docker, Kubernetes) and familiarity with infrastructure-as-code practices.
Preferred Qualifications
Master’s degree in Computer Science, Electrical Engineering, or a related technical field.
Prior experience working on compiler toolchains, especially for specialized hardware accelerators (e.g., NPUs, GPUs, TPUs) or AI/ML-focused architectures.
Familiarity with performance profiling, code optimization techniques, and debugging tools tailored for heterogeneous computing environments.
Experience with large-scale distributed systems and infrastructure, including orchestration frameworks and resource managers.
Knowledge of AI frameworks (e.g., TensorFlow, PyTorch) and an understanding of common AI workloads and models.
Experience contributing to open-source projects or working in a highly collaborative, cross-functional team setting.
Contact