Norconsult Telematics
- Design and operationalize scalable MLOps workflows to support deployment and lifecycle management of traditional ML, LLMs, RAG pipelines, and agentic AI systems across production environments.
- Build CI/CD and GitOps pipelines and model serving infrastructure (e.g., Triton, ModelMesh), and ensure observability, performance monitoring, and traceability for reliable, efficient AI delivery.
- Develop and manage end-to-end CI/CD pipelines for AI/ML workloads using tools like Kubeflow, Tekton, or Argo Workflows (see the pipeline sketch after the requirements list).
- Implement GitOps workflows for reproducible, version-controlled model deployments.
- Integrate and manage LLM and GenAI model serving using platforms such as Triton, ModelMesh, ONNX Runtime, or OpenVINO (see the serving sketch after the requirements list).
- Monitor model drift, latency, and resource usage, and enable automated alerts or rollbacks (see the monitoring sketch after the requirements list).
- Collaborate with Data Science, AI, and Infrastructure teams to define scalable compute and storage needs.
- Ensure model traceability, audit readiness, and performance tracking across environments (see the tracking sketch after the requirements list).
- Support production-grade agentic workflows and multi-step LLM inference pipelines.
- Apply strong communication and documentation skills to explain model workflows and infrastructure clearly.
- Demonstrate adaptability and problem-solving to debug complex AI pipelines and implement scalable solutions.
- Work collaboratively across cross-functional teams, manage time effectively, and deliver to tight deadlines in a fast-paced environment.
- Bachelor's degree in Computer Science, Software Engineering, or a related field, with 4+ years' experience in MLOps, DevOps, or AI platform engineering.
- Proficient in OpenShift, Kubernetes, and GPU orchestration for scalable AI workloads.
- Hands-on experience with LLM model serving, containerized AI (Docker, Triton, ONNX), and observability tools like Prometheus and Grafana.
- Familiarity with MLflow, ModelMesh, Seldon, KServe, and supporting GenAI/LLM pipelines, RAG, and agentic frameworks (preferred).
- Certification in Kubernetes (CKA), OpenShift, or a major cloud platform (AWS, Azure, GCP) is advantageous.
- KPIs include pipeline uptime/failure rate, time to deploy models, and model latency/availability.
- Fluency in English is mandatory; Arabic proficiency is a plus.
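To make the pipeline responsibility concrete, below is a minimal sketch of a Kubeflow Pipelines (KFP v2) definition that a CI/CD job could compile and version alongside the model code; the component logic, pipeline name, and accuracy gate are illustrative assumptions, not part of this role's actual stack.

```python
# Minimal sketch: a two-step Kubeflow Pipelines (KFP v2) definition that trains
# a model and applies a quality gate, compiled to a YAML spec a CI/CD or GitOps
# workflow could version-control and submit. All names and logic are illustrative.
from kfp import dsl, compiler


@dsl.component(base_image="python:3.11")
def train(epochs: int) -> float:
    # Placeholder training step; returns a stand-in accuracy metric.
    return min(0.5 + 0.05 * epochs, 0.99)


@dsl.component(base_image="python:3.11")
def gate(accuracy: float, threshold: float) -> bool:
    # Quality gate: only models above the threshold should be promoted.
    return accuracy >= threshold


@dsl.pipeline(name="train-and-gate")
def train_and_gate(epochs: int = 5, threshold: float = 0.8):
    trained = train(epochs=epochs)
    gate(accuracy=trained.output, threshold=threshold)


if __name__ == "__main__":
    # The compiled YAML is the artifact a CI/CD or GitOps workflow would apply.
    compiler.Compiler().compile(train_and_gate, package_path="train_and_gate.yaml")
```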
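For the model-serving item, here is a minimal sketch of calling a Triton Inference Server over HTTP with the tritonclient library; the model name, tensor names, and shapes are illustrative assumptions.

```python
# Minimal sketch: querying a Triton Inference Server over HTTP.
# The model name ("sentence_encoder"), tensor names, and shapes are
# illustrative assumptions, not taken from the job description.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request: one FP32 input tensor of shape (1, 128).
batch = np.random.rand(1, 128).astype(np.float32)
infer_input = httpclient.InferInput("INPUT__0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)
requested_output = httpclient.InferRequestedOutput("OUTPUT__0")

# Run inference and read the result back as a numpy array.
response = client.infer(
    model_name="sentence_encoder",
    inputs=[infer_input],
    outputs=[requested_output],
)
embedding = response.as_numpy("OUTPUT__0")
print(embedding.shape)
```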
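For the monitoring item, the sketch below exposes inference latency and a simple input-drift score as Prometheus metrics that Grafana dashboards or Alertmanager rules could alert on; the metric names and the PSI-based drift heuristic are illustrative assumptions, and rollbacks would typically be triggered by the deployment platform rather than this code.

```python
# Minimal sketch: exposing model latency and an input-drift score as Prometheus
# metrics. Metric names and the drift heuristic are illustrative assumptions;
# production setups often rely on serving-platform metrics instead.
import time
import numpy as np
from prometheus_client import Gauge, Histogram, start_http_server

LATENCY = Histogram("model_inference_latency_seconds", "Inference latency")
DRIFT = Gauge("model_input_drift_score", "Population stability index vs. reference data")


def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    o_frac = np.histogram(observed, bins=edges)[0] / len(observed) + 1e-6
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))


def predict(features: np.ndarray, reference: np.ndarray) -> np.ndarray:
    with LATENCY.time():                       # records inference duration
        result = features * 2.0                # placeholder for a real model call
    DRIFT.set(psi(reference, features[:, 0]))  # update drift gauge per batch
    return result


if __name__ == "__main__":
    start_http_server(9100)                    # /metrics endpoint for Prometheus to scrape
    reference = np.random.normal(size=10_000)  # stands in for training-time feature values
    while True:
        predict(np.random.normal(size=(32, 8)), reference)
        time.sleep(1.0)
```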
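For traceability, this last sketch shows one common pattern with MLflow: logging parameters, metrics, and the model artifact in a run, then registering a version so a deployment can be audited back to its source run; the experiment name, model, and metric are illustrative assumptions.

```python
# Minimal sketch: tracking and registering a model with MLflow so every deployed
# version traces back to its parameters, metrics, and artifacts.
# Experiment name, model name, and metric are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

mlflow.set_experiment("fraud-scoring")

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

with mlflow.start_run() as run:
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X, y)

    mlflow.log_params(params)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")

    # Register the run's model so deployments reference an auditable version.
    version = mlflow.register_model(f"runs:/{run.info.run_id}/model", "fraud-scoring-rf")
    print(f"Registered {version.name} v{version.version}")
```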
Seniority level: Mid-Senior
Employment type: Contract
Job function: Information Services and Data Infrastructure and Analytics
Location: Riyadh