Intel Chip Chat

Accelerating AI Deployments with the Edge to Cloud Intel AI Portfolio – Intel® Chip Chat episode 648

Synopsis

Wei Li, Vice President of Intel® Architecture, Graphics and Software, and General Manager of Machine Learning and Translation at Intel, joins Chip Chat to share Intel's overarching strategy and vision for the future of AI and to outline the company's edge-to-cloud AI portfolio. Wei discusses how Intel architecture enables consistency across different platforms without requiring systems to be overhauled. He also highlights the increased inference performance of the 2nd Generation Intel® Xeon® Scalable processor with Intel® Deep Learning Boost (Intel DL Boost) technology, introduced at Intel Data-Centric Innovation Day. Intel DL Boost speeds up inference by up to 14x [1] by combining what used to take three instructions into a single instruction and by allowing lower-precision (int8) computation across multiple frameworks such as TensorFlow*, PyTorch*, Caffe*, and Apache MXNet*. He also touches on Intel's work on the software side with projects like the OpenVINO™ toolkit, which accelerates DNN workloads and optimizes deep learning performance.
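To give a feel for the lower-precision (int8) inference idea mentioned above, here is a minimal, illustrative NumPy sketch. It is not Intel DL Boost itself: it only shows the general pattern such instructions accelerate, i.e. quantizing float32 weights and activations to int8, multiply-accumulating in int32, and rescaling the result. The scale choice and tensor sizes are arbitrary assumptions for the example.

```python
import numpy as np

def quantize(x, scale):
    """Map float32 values into int8 using a per-tensor scale."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)   # example weights
a = rng.standard_normal(64).astype(np.float32)   # example activations

# Per-tensor scales so the largest magnitude maps to the int8 range
w_scale = np.abs(w).max() / 127
a_scale = np.abs(a).max() / 127
w_q = quantize(w, w_scale)
a_q = quantize(a, a_scale)

# int8 multiply with int32 accumulation (the VNNI-style pattern),
# then dequantize the accumulated result back to float32 scale
acc = np.dot(w_q.astype(np.int32), a_q.astype(np.int32))
approx = float(acc * (w_scale * a_scale))
exact = float(np.dot(w, a))

print(f"fp32 dot: {exact:.4f}, int8 dot: {approx:.4f}")
```

The int8 result closely tracks the float32 one for a small dot product, which is why many inference workloads can trade this precision for higher throughput.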