// POSTED: Apr 16, 2026

Research Engineer

About Valka.ai

Valka, a visionary spin-off from the Realms Group (the parent company of Oddin.gg), is on a mission to revolutionize the way people create and experience digital content. Our team believes that content shouldn’t just be consumed; it should be co-created in real time, blurring the lines between imagination and reality. By harnessing the power of cutting-edge AI, we aim to build an interactive human-digital platform where virtual characters respond dynamically to each user’s voice, text, gestures, and more.

This is your chance to join a diverse group of innovators who are driven to redefine what’s possible in generative content. Together, we’re changing the paradigm from passive viewing to active participation, unlocking new creative frontiers across gaming, entertainment, education, and beyond.

Role summary

You will sit at the boundary between our Video Generation research team and the product engineering platform. Your primary job will be to take models and demos produced by research scientists and turn them into robust, deployable Python services that can be plugged into the production platform. You will report to the Head of Engineering and partner closely with Video Generation researchers. You won’t be a researcher inventing new models, nor a backend generalist — you will be the person who makes the research land.

What you'll do

- Package and containerize AI models (Python/Docker) from research into clean, versioned services with well-defined APIs
- Own the engineering side of the tech transfer process: inference specs, environment setup, model mocking, and integration scaffolding
- Collaborate closely with research scientists on quantization, TensorRT compilation, and hitting latency budgets (e.g. <200ms real-time response targets)
- Maintain and operate model services in production, debugging stability and performance issues under load
- Contribute to the dual-track delivery model — keeping the engineering platform moving even while research is still iterating

Skills you need

- Strong Python engineering skills; comfortable writing production-grade, maintainable code
- Comfortable owning ambiguous model-to-service transfers end-to-end with a high degree of autonomy
- Hands-on experience deploying ML/AI models (inference pipelines, serving frameworks)
- Familiarity with GPU workloads, containerization, and model optimization concepts
- Ability to read and work directly with research code and translate it into reliable services
- Bonus: experience with video/image generation models, TensorRT, or real-time streaming pipelines
Interested in this role? Apply on iHire.