Intel Chip Chat

Information:

Synopsis

Intel® Chip Chat is a recurring podcast series of informal, one-on-one interviews with some of the brightest minds in the industry. Hosted by Intel employee Allyson Klein since 2007, Intel Chip Chat strives to bring listeners closer to the innovations and inspirations of the men and women shaping the future of computing, and in the process share a little bit about the technologists themselves.

Episodes

  • Delivering on the Exascale Opportunity for the Advancement of Science – Intel Chip Chat episode 641

    29/03/2019 Duration: 10min

    On March 18, Intel and the U.S. Department of Energy (DOE) announced a plan to deliver the first U.S. supercomputer to exceed one exaflop (a quintillion floating-point operations per second). Trish Damkroger, Vice President and General Manager of the Extreme Computing Organization at Intel, joins Chip Chat to discuss Aurora, the exascale-class system being developed for Argonne National Laboratory. Damkroger outlines a few of the key technologies providing the foundation of the system, including a future generation of the Intel® Xeon® Scalable processor, the recently announced Intel Xe compute architecture, and Intel® Optane™ DC persistent memory, while also diving into the groundbreaking science Aurora will enable, such as precision medicine, climate modeling and forecasting, and materials science. Aurora will be anchored on Intel's six pillars of technology innovation: process, architecture, memory, interconnect, security, and software, which Damkroger also touches on.

  • Modernizing Networks from Core to Edge for Data-Centric, 5G Services – Intel® Chip Chat episode 640

    27/03/2019 Duration: 10min

    Dan Rodriguez, Vice President of Intel's Data Center Group and General Manager of the Network Compute Division, joins Chip Chat to share some of the excitement from Mobile World Congress Barcelona. Dan explains how Intel approaches silicon development to drive an end-to-end network encompassing a series of compute pools capable of running any network function and any workload anywhere in the network. From the high performance and scale of Intel® Xeon® Scalable processors to SoCs, like Intel Xeon D processors, that support dense compute with lower power requirements, the company has extended the Intel architecture across network, cloud, and edge. Now Intel has announced plans to deliver a 10nm SoC, codenamed Snow Ridge, to support power-efficient performance, faster memory support, and large I/O capacity for 5G Radio Access Network and broader network edge applications. This architectural consistency allows customers to deploy and scale software seamlessly anywhere in the network, reducing research and development costs.

  • Data-Centric Computing at Intersection of AI, Edge, IoT, 5G and Cloud – Intel® Chip Chat episode 639

    21/03/2019 Duration: 12min

    Lisa Spelman, Vice President and General Manager of Intel® Xeon® Products and Data Center Marketing at Intel Corporation, shares the origin of Intel's data-centric computing message as a customer-focused framework for tackling today's ultimate challenge and opportunity: data. Intel works with broad partner ecosystems to deliver hardware and software innovation in compute, storage, and networking that supports the movement, storage, and processing of every bit of data for better business outcomes. Cloudification of services and infrastructure is pervasive, which accelerates the utilization and creation of this data. Intel has invested over ten years in building the silicon foundation and software ecosystem for the cloud and cloud-native architecture. That revolution has moved into the communications service provider market to support delivery of new services in a more cost-effective manner. Lisa talks about the convergence of 5G, cloud, edge, and Internet of Things (IoT) that drives requirements for artificial intelligence.

  • 5G and Edge Innovations Underscore Network Transformation Momentum – Intel® Chip Chat episode 638

    14/03/2019 Duration: 14min

    Sandra Rivera, senior vice president of the Intel Data Center Group and general manager of the Network Platforms Group, joins Chip Chat for a wide-ranging conversation about Intel's announcements, partnerships, and technology milestones at Mobile World Congress Barcelona 2019. Sandra is responsible for the Intel business group charged with providing innovative technology and solutions to the networking industry and is Intel's 5G executive sponsor. She talks about Intel's work with the industry to address network transformation and 5G as a high-performance computing challenge. After building momentum with the Intel architecture and partner ecosystem in core network transformation, the company has led the move of compute, storage, and capabilities from the data center to the network edge, where data is generated and consumed. Rivera details several new Intel architecture advancements, including the recently announced 10nm SoC for wireless base stations and the next generation of Intel® Xeon® D.

  • Fortanix Delivers Tremendous Value to Security – Intel® Chip Chat episode 637

    04/03/2019 Duration: 10min

    In this episode of Intel Chip Chat, Anand Kashyap, CTO and Co-Founder of Fortanix, joins us to talk about the company's mission to solve security and privacy in the cloud. Anand explains Fortanix's view that the best security for applications and data in the cloud can't be built with software alone; it has to be enforced with hardware security. He describes the start of Fortanix, how the company built around Intel® Software Guard Extensions (Intel® SGX), and how it now offers an array of products, from its Runtime Encryption* Platform to its recently launched Enclave Development Program (EDP). Fortanix will be at RSA this year to showcase its new EDP platform and demo how it is making Intel SGX available to more developers across the world with broader support for languages like Java and Python. Learn more about how Intel and Fortanix are protecting data at rest, in motion, and in use by visiting www.fortanix.com or stopping by booth #N6173 at RSA.

  • Efficient, Performant, Virtualized Networks with Intel® FPGAs - Intel® Chip Chat episode 636

    25/02/2019 Duration: 07min

    Chuck Tato, Director of the Wireline and NFV Business Division in the Programmable Solutions Group at Intel, joins Chip Chat to introduce the Intel® FPGA Programmable Acceleration Card N3000, Intel's first FPGA card for NFV and 5G workloads. Designed with networking in mind, the Intel FPGA Programmable Acceleration Card N3000 is a highly efficient programmable solution with the right capabilities to support the high-throughput, line-rate applications found in networking today. In this interview, Tato speaks to the value of virtualizing network functions and the use of FPGAs to enable the low latency and high throughput that can sometimes be elusive in virtualized solutions. Tato additionally discusses the architecture and performance characteristics of these FPGAs and how the Data Plane Development Kit (DPDK) facilitates the integration of Intel FPGAs into existing Intel® Xeon® Scalable processor-based solutions.

  • Visual Cloud: Why focusing on ‘Media’ is not sufficient - Intel® Chip Chat episode 635

    25/02/2019 Duration: 14min

    Media is undergoing a rapid evolution, going well beyond the traditional streaming of content to a TV screen. People are addicted to their screens and expect their content to be rich, immersive, personalized, and available anytime, anywhere, and on any device. Delivering on these new user expectations requires content to be processed in the cloud and consumed remotely. We are calling these new visual experiences the Visual Cloud. This is changing the game, introducing new challenges and driving new platform requirements that lead service providers to take an end-to-end platform perspective. Lynn Comp (Vice President, Data Center Group and General Manager, Visual Cloud Division, Intel Corporation) has talked about Visual Cloud in prior Chip Chat sessions. In this Chip Chat segment, Lynn explains how the media industry is transforming and why calling it 'Media' is simply not sufficient. Lynn also discusses the importance of on-demand cloud and transformed networks for the delivery of visual workloads.

  • The Next Step in Confidential Computing with Google Cloud - Intel® Chip Chat episode 634

    22/02/2019 Duration: 13min

    In this episode of Chip Chat, Nelly Porter, Senior Product Manager for Google Cloud, joins us to discuss confidential computing and why Google is delivering this service to its customers. Nelly talks about the key drivers of Google's cloud strategy and how everything the team does takes security, privacy, and control into account to bring better protections to customers. In May 2018, Google announced an open source framework called Asylo to make it easier to create and use enclaves on Google Cloud. Nelly also talks about Google's collaboration with Intel around hardening its cloud infrastructure with Intel® Software Guard Extensions (Intel® SGX). Google Cloud, in collaboration with Intel, is hosting the Confidential Computing Challenge to generate new ideas for the future of computing. Learn more and join the challenge: https://cloudplatformonline.com/Confidential-Computing-Challenge-2019-Reg.html

  • Game-Changing Memory Technology for the Data Center - Intel® Chip Chat episode 633

    16/02/2019 Duration: 08min

    Learn about the operating mode capabilities included with Intel® Optane™ DC Persistent Memory: memory mode and app direct mode. Kristie Mann, Product Line Director for Intel Optane DC Persistent Memory in Intel’s Data Center Group, talks about the innovative capabilities of Intel Optane DC Persistent Memory and the key benefits of its two operating modes. For more information: http://intel.com/optanedcpersistentmemory Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. Intel, the Intel logo, and Optane are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation
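    Of the two operating modes described above, memory mode is transparent to software: the persistent memory appears as large volatile main memory, with DRAM acting as a cache. App direct mode, by contrast, asks the application to manage persistence explicitly; a common pattern is to memory-map a file on a DAX-enabled file system backed by persistent memory and use ordinary loads and stores. The sketch below illustrates that pattern; the DAX-backed mount is an assumption, so it uses an ordinary temp file here so it runs anywhere.

    ```python
    import mmap
    import os
    import tempfile

    # In app direct mode, persistent memory is typically exposed as a file
    # system mounted with DAX over a pmem namespace. The path below is a
    # stand-in: a real deployment would open a file on that DAX mount.
    path = os.path.join(tempfile.mkdtemp(), "pmem_region")

    size = 4096
    with open(path, "wb") as f:
        f.write(b"\x00" * size)  # pre-size the region before mapping

    with open(path, "r+b") as f:
        mem = mmap.mmap(f.fileno(), size)  # byte-addressable window
        mem[0:5] = b"hello"                # plain stores, no read()/write() syscalls
        mem.flush()                        # on real pmem: data survives power loss
        print(mem[0:5].decode())           # prints "hello"
        mem.close()
    ```

    The design point is that the application skips the block I/O stack entirely: after the mapping is set up, persistence is a matter of stores followed by a flush, which is what makes app direct mode attractive for caches and databases.
    
    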

  • Accelerating AI Inference with Intel® Deep Learning Boost – Intel® Chip Chat episode 632

    09/02/2019 Duration: 11min

    When Intel previewed an array of data-centric innovations in August 2018, one that captured media attention was Intel® Deep Learning Boost, an embedded AI accelerator in the CPU designed to speed deep learning inference workloads. Intel DL Boost will make its initial appearance in the upcoming generation of Intel® Xeon® Scalable processors, code-named Cascade Lake. In this Chip Chat podcast, Intel Data-Centric Platform Marketing Director Jason Kennedy shares details about the optimization behind some impressive test results. The key to Intel DL Boost, and its performance kick, is augmentation of the existing Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instruction set. This innovation significantly accelerates inference performance for deep learning workloads optimized to use vector neural network instructions (VNNI). Image classification, language translation, object detection, and speech recognition are just a few examples of workloads that can benefit.
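    The VNNI extension mentioned above fuses the int8 multiply-accumulate sequence that previously took three AVX-512 instructions (vpmaddubsw, vpmaddwd, vpaddd) into a single instruction, vpdpbusd. The sketch below is a plain-Python reference model of the arithmetic in one 32-bit lane of that instruction, not the intrinsic itself, to show what gets fused.

    ```python
    def vpdpbusd_lane(acc, a_u8, b_s8):
        """Model one 32-bit lane of AVX-512 VNNI's vpdpbusd: four unsigned
        8-bit values times four signed 8-bit values, with the products summed
        into a 32-bit accumulator in a single fused step."""
        assert len(a_u8) == len(b_s8) == 4
        assert all(0 <= a <= 255 for a in a_u8)      # unsigned activations
        assert all(-128 <= b <= 127 for b in b_s8)   # signed weights
        return acc + sum(a * b for a, b in zip(a_u8, b_s8))

    # A 512-bit register holds 64 int8 values, i.e. 16 such lanes in parallel,
    # so an int8 dot product advances four elements per lane per instruction.
    acc = vpdpbusd_lane(0, [10, 20, 30, 40], [1, -1, 2, -2])
    print(acc)  # 10 - 20 + 60 - 80 = -30
    ```

    This is why quantized (int8) inference benefits: each vpdpbusd does the work of three instructions while also processing four times as many elements per register as fp32.
    
    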

  • IBM Optimizing the Cloud for HPC Workloads - Intel® Chip Chat episode 631

    01/02/2019 Duration: 09min

    Jay Jubran, Director of Offering Management for Compute at IBM Cloud, discusses why enterprises are choosing the cloud for their HPC workloads. IBM's new HPC-as-a-service offering is powered by Intel® Xeon® Scalable processors, which offer the performance required to meet customers' needs. IBM Cloud is focused on innovative cloud capabilities and is partnering with Intel to provide the best solution for its HPC customers. Learn more about IBM's HPC cloud offerings here: https://cloud.ibm.com/. IBM will be announcing its latest enterprise cloud offerings at IBM Think, February 12-15, 2019. Learn more about the agenda: https://www.ibm.com/events/think/. Successful cloud service providers are staying ahead of the technology curve, not chasing it. Explore intel.com/csp.

  • Architected for HPC, AI, and IaaS Leadership Performance - Intel® Chip Chat episode 630

    25/01/2019 Duration: 07min

    Jennifer Huffstetler, VP and GM for Data Center Product Management at Intel, joins Chip Chat for a deep dive into the capabilities of a new class of processors: future Intel® Xeon® Scalable processors codenamed Cascade Lake advanced performance. Architected to deliver performance leadership across the widest range of demanding workloads[1], these processors deliver unprecedented memory bandwidth[2] with more memory channels than any other CPU. They are expected to offer superior performance (results estimated based on pre-production hardware) in comparison to AMD EPYC on many demanding applications, including: • Physics – MILC up to 1.5X [quantum chromodynamics] [3] • Weather – WRF up to 1.6X [Weather Research and Forecasting model] [4] • Manufacturing – OpenFOAM up to 1.6X [open source CFD] [5] • Life/material sciences – NAMD (APOA1) up to 2.1X [Nanoscale Molecular Dynamics] [6] • Energy – YASK (ISO3DFD) up to 3.1X [stencil benchmark]

  • For Lunar Exploration, Intel AI Can Help Where GPS Can’t – Intel® Chip Chat episode 629

    17/01/2019 Duration: 14min

    With no GPS in space, how can a rover know its exact location on the lunar surface? In this Chip Chat podcast, Phil Ludivig, rover navigation engineer with iSpace, Inc.*, joins Shashi Jain, innovation manager at Intel, to talk about research that applied AI to one of the biggest challenges in space exploration. Ludivig and Jain, along with other researchers, came together at the NASA Frontier Development Lab (NASA FDL) to tackle questions facing NASA and the commercial space industry. Their team took on one of the most fundamental questions, and answered it in a highly inventive way. Starting with a game engine, the team created a simulated lunar environment to train an AI algorithm, producing the ground truth needed for machine learning. Next, they created synthetic images, called reprojections, from cameras mounted on a rover. AI matched reprojected images to actual orbital images, figuring out which terrain features made sense. The team used Intel® AI DevCloud for inference, along with an Intel® Core™ i7+ PC and Intel® Xeon® Scalable processors.

  • AI and HPC Are Converging with Support from Intel® Technology – Intel® Chip Chat episode 628

    10/01/2019 Duration: 10min

    AI and HPC are highly complementary: flip sides of the same data- and compute-intensive coin. In this Chip Chat podcast, Dr. Pradeep Dubey, Intel Fellow and director of Intel's Parallel Computing Lab, explains why it makes sense for the two technology areas to come together and how Intel is supporting their convergence. AI developers tend to be data scientists focused on deriving intelligence and insights from massive amounts of digital data, rather than typical HPC programmers with deep system programming skills. Because Intel® architecture serves as the foundation for both AI and HPC workloads, Intel is uniquely positioned to drive their convergence. Its technologies and products span processing, memory, and networking at ever-increasing levels of power and scalability. For more information on developing HPC and AI on Intel hardware and software, visit the Intel Developer Zone at software.intel.com. More about AI activities across Intel is online at ai.intel.com.

  • Applying AI to Advance Space Exploration – Intel® Chip Chat episode 627

    03/01/2019 Duration: 14min

    In this Chip Chat podcast, Zahi Kakish, a doctoral student focusing on swarm robotics at Arizona State University, joins Shashi Jain, innovation manager at Intel, to talk about some remarkable work that's emerged from NASA FDL, the space agency's Frontier Development Lab. Designed as an eight-week challenge, FDL is an AI accelerator project set up by NASA Ames, the SETI Institute, and private partners. Its goal? To bring together the best minds in AI and planetary science to tackle challenges facing NASA and the commercial space industry. This past summer, with support from Intel principal engineers and an Intel® Xeon® processor-based server at SETI, Kakish and his FDL team created a planning tool called the Mission Planner for Cooperative Multi-Agent Systems, MARMOT for short. Through the use of AI and machine learning, it enables two semi-autonomous rovers to communicate and work together to solve a task. With its collaborative systems and assistive AI, MARMOT delivers a significant performance improvement.

  • Accelerating AI Inference with Microsoft Azure* Machine Learning - Intel® Chip Chat episode 626

    20/12/2018 Duration: 07min

    Dr. Henry Jerez, Principal Group Product and Program Manager for Azure* Machine Learning Inferencing and Infrastructure at Microsoft, joins Chip Chat to discuss accelerating AI inference in Microsoft Azure. Dr. Jerez leads the team responsible for creating assets that help data scientists manage their AI models and deployments, both in the cloud and at the edge, and works closely with Intel to deliver the fastest possible inference performance for Microsoft's customers. At Ignite 2018, Microsoft demoed an Azure Machine Learning model running atop the OpenVINO toolkit and Intel® architecture for highly performant inference at the edge. This capability will soon be incorporated into Azure Machine Learning. Microsoft additionally announced at Ignite a refreshed public preview of Azure Machine Learning that now provides a unified platform and SDK for data scientists, IT professionals, and developers. For more on Microsoft Azure Machine Learning, please visit http://aka.ms/azureml-docs.

  • Descartes Labs Helps Customers Understand the Planet - Intel® Chip Chat episode 625

    18/12/2018 Duration: 13min

    Descartes Labs helps companies get business insight from huge volumes of satellite and geographic data, using a combination of software as a service and custom development. Handling petabytes of data, compression is hugely important for packaging the data into usefully sized files and for driving down storage costs. By upgrading to the latest-generation Intel® processors, available in the Google Cloud Platform*, Descartes Labs was able to accelerate its compression. To learn more about Descartes Labs solutions, visit https://www.descarteslabs.com/. To learn more about Intel's partnership with Google Cloud Platform, visit https://cloud.google.com/intel/.

  • Driving Data Center Performance Through Intel Memory Technology – Intel® Chip Chat episode 624

    13/12/2018 Duration: 10min

    Dr. Ziya Ma, vice president of the Intel® Software and Services Group and director of Data Analytics Technologies, gives Chip Chat listeners a look at data center optimization, along with a preview of advancements well underway. In their work across the industry, Dr. Ma and her team have found that taming the data deluge calls for IT data center managers to unify their big data analytics and AI workflows. As they've helped customers overcome the memory constraints involved in data caching, Apache Spark*, which supports the convergence of AI on big data, has proven to be a highly effective platform. Dr. Ma and her team have already provided the community a steady stream of source code contributions and optimizations for Spark. In this interview she reveals that more, and even more exciting, work is underway. Spark depends on memory to perform and scale, which means optimizing Spark for the revolutionary new Intel® Optane™ DC persistent memory offers performance improvement for the data center.

  • New Advances in Storage Ease the Move to Hyperconvergence – Intel® Chip Chat episode 623

    12/12/2018 Duration: 10min

    Christine McMonigal, Director of Hyperconverged Infrastructure at Intel, joins Chip Chat at the Microsoft Ignite 2018 conference to talk about what's new from Microsoft and Intel. The two companies have collaborated extensively on hyperconvergence. One result, announced at Ignite, is a refreshed version of Intel® Select Solutions for Windows Server* Software Defined Storage. It adds support for Microsoft's newly announced Windows Server 2019, which in turn supports Intel® Optane™ DC persistent memory. Intel is working with many industry partners, including Microsoft, to create what it calls Intel Select Solutions: full-stack solutions optimized and benchmarked for verified performance. The aim is to bring more data centers into hyperconverged environments more readily. To support Storage Spaces Direct, one of Windows Server 2019's new features, Intel just released two configurations to the market as reference designs, Base and Plus; the Base configuration supports a wide range of workloads.

  • Windows Server 2019 and SQL vNext Modernize Enterprise Infrastructure – Intel® Chip Chat episode 622

    05/12/2018 Duration: 11min

    Jeff Woolsey, Principal Program Manager for Windows Server and Hybrid Cloud at Microsoft, joins Chip Chat to talk about how Microsoft has intentionally designed its products to meet the growing infrastructure needs of businesses using the hybrid cloud. Jeff emphasizes the importance of migrating to Windows Server 2019 and how this new release will greatly improve performance, security, and scalability. He also talks about the benefits of Microsoft's deep partnership with Intel and the exciting capabilities of Microsoft SQL Server. To learn more, go to aka.ms/wssd and aka.ms/sqlserver. To learn more about hybrid cloud, visit www.intel.com/Cloud. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary.
