IT@Intel: Push-button Productization of AI Models
Read/Download White Paper (PDF)
IT Best Practices: Intel IT has a large AI group that works across Intel to transform critical work, optimize processes, eliminate scalability bottlenecks and generate significant business value (more than USD 1.5 billion in return on investment in 2020). Our efforts unlock the power of data to make Intel’s business processes smarter, faster and more innovative, from product design to manufacturing to sales and pricing.
To operate at this scale, we developed Microraptor, a set of MLOps capabilities reused across all of our AI platforms. Microraptor enables world-class MLOps to accelerate and automate the development, deployment and maintenance of machine learning models. Our approach to model productization avoids the logistical hurdles that often prevent other companies’ AI projects from reaching production. Our MLOps methodology enables us to deploy AI models to production at scale through continuous integration/continuous delivery (CI/CD), automation, reuse of building blocks and business process integration.
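The white paper does not spell out Microraptor’s internals, but as a rough, hypothetical illustration of what push-button productization can look like, the Python sketch below shows a single deployment call that gates a model on a quality threshold, records it in a registry and promotes it to serving. The names (`ModelArtifact`, `Registry`, `deploy`) and the threshold are assumptions for illustration, not the actual Microraptor API.

```python
# Hypothetical sketch of a push-button deployment step (illustrative names only,
# not Intel's actual Microraptor API). The idea: one call runs the quality gate,
# registration and promotion, so data scientists never touch deployment plumbing.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ModelArtifact:
    name: str
    version: str
    accuracy: float          # offline evaluation metric supplied by the training job
    artifact_uri: str        # location of the serialized model


@dataclass
class Registry:
    """Minimal in-memory stand-in for a model registry."""
    entries: list = field(default_factory=list)

    def register(self, model: ModelArtifact) -> None:
        # Record the artifact with a timestamp for lineage and auditability.
        self.entries.append((model, datetime.now(timezone.utc)))


def deploy(model: ModelArtifact, registry: Registry, min_accuracy: float = 0.90) -> str:
    """Push-button deployment: gate on quality, register, then promote to serving."""
    if model.accuracy < min_accuracy:
        raise ValueError(
            f"{model.name} v{model.version} rejected: "
            f"accuracy {model.accuracy:.2f} below gate {min_accuracy:.2f}"
        )
    registry.register(model)
    # A real pipeline would roll out a serving container or endpoint here.
    return f"{model.name}:{model.version} promoted to production"


if __name__ == "__main__":
    reg = Registry()
    print(deploy(ModelArtifact("demand-forecast", "1.4.0", 0.93, "s3://models/df/1.4.0"), reg))
```

In practice, a CI/CD system would trigger a step like this automatically whenever a training job publishes a new, validated artifact, which is what makes sub-hour deployments possible.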
Our MLOps methodology provides many advantages:
- The AI platforms abstract deployment details and business process integration so that data scientists can concentrate on model development.
- We can deploy a new model in less than half an hour, compared to days or weeks without MLOps.
- Our systematic quality metrics minimize the cost and effort required to maintain the hundreds of models we have in production (a simplified monitoring sketch follows this list).
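As a simplified, hypothetical sketch of that last point, the snippet below shows how systematic quality metrics can scan a fleet of production models and flag only the ones whose live accuracy has drifted beyond a tolerance. The names, metrics and threshold are illustrative assumptions, not Intel’s actual monitoring implementation.

```python
# Illustrative fleet-wide quality check (hypothetical names, not the real Microraptor
# implementation): out of hundreds of production models, surface only the few whose
# live accuracy has dropped too far from the baseline recorded at deployment time.
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class ModelHealth:
    model_name: str
    baseline_accuracy: float   # accuracy recorded at deployment time
    live_accuracy: float       # accuracy measured on recent labeled traffic


def models_needing_retraining(fleet: Iterable[ModelHealth],
                              max_relative_drop: float = 0.05) -> List[str]:
    """Return the models whose live accuracy dropped more than the allowed fraction."""
    flagged = []
    for m in fleet:
        drop = (m.baseline_accuracy - m.live_accuracy) / m.baseline_accuracy
        if drop > max_relative_drop:
            flagged.append(m.model_name)
    return flagged


if __name__ == "__main__":
    fleet = [
        ModelHealth("pricing-optimizer", 0.91, 0.90),
        ModelHealth("defect-detector", 0.97, 0.88),   # degraded: will be flagged
    ]
    print(models_needing_retraining(fleet))           # ['defect-detector']
```

Running a check like this on a schedule keeps maintenance effort proportional to the number of degraded models rather than the total number deployed.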
For more information on Intel IT Best Practices, please visit intel.com/IT