Intel and Preferred Networks to collaborate on Chainer, an open source deep learning framework
The companies aim to significantly accelerate CPU performance for Chainer running on Intel Architecture.
Intel Corporation and Preferred Networks Inc. (PFN) announced today that the companies will collaborate on the development of Chainer(R) (http://chainer.org/), PFN’s open source deep learning framework, with the aim of accelerating out-of-the-box deep learning performance on general-purpose infrastructure powered by Intel.
Advanced technologies including IoT (Internet of Things), 5G (fifth-generation mobile networks), and AI (artificial intelligence) are expected to see use across a range of industries in the years ahead, giving rise to data-driven business opportunities and user experiences. The advance of technologies related to AI and deep learning, in particular, will accelerate the creation of applications that further enhance the intrinsic value of data.
The use of special-purpose computing environments for developing and implementing AI applications and deep learning frameworks poses challenges for the developer community, including development complexity, time and cost.
PFN, developer of Chainer, an advanced deep learning framework with a reputation for ease of use among application developers across industries, and Intel, a provider of general-purpose computing technologies and industry-leading AI/deep learning accelerators, will collaborate to make AI development easier and more affordable. The collaboration will bring both companies’ technologies to bear on optimizing the development and execution of applications that use advanced AI and deep learning frameworks, as well as on accelerating the performance of image and voice recognition.
Chainer is a Python-based deep learning framework developed by PFN. Thanks to its “Define-by-Run” approach, in which the computation graph is constructed on the fly as the forward pass executes, users can easily and intuitively design complex neural networks. Since it was open-sourced in June 2015, Chainer has become one of the most popular frameworks, attracting not only the academic community but also many industrial users who need a flexible framework to harness the power of deep learning in their research and real-world applications.
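To give a sense of what “Define-by-Run” means in practice, the following is a minimal NumPy sketch of the idea, not Chainer’s actual API: because the network is defined by the code that runs the forward pass, ordinary Python control flow can change the network’s structure from one call to the next. The function and parameter names here are illustrative only.

```python
import numpy as np

def forward(x, weights, n_layers):
    # Define-by-Run: the "graph" is whatever this code executes.
    # A plain Python loop decides the network depth at run time,
    # so the same code can act as a 1-layer or a 3-layer network.
    h = x
    for w in weights[:n_layers]:
        h = np.maximum(0.0, h @ w)  # linear layer followed by ReLU
    return h

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 4)) for _ in range(3)]
x = rng.standard_normal((1, 4))

shallow = forward(x, weights, n_layers=1)  # one layer
deep = forward(x, weights, n_layers=3)     # three layers, same code
```

In a static, define-and-run framework, the graph must be declared in full before any data flows through it; with Define-by-Run, data-dependent architectures (for example, recursive or variable-length networks) fall out naturally from ordinary control flow.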
Intel Corporation, a technology leader uniquely positioned to drive the AI computing era, will help Chainer deliver breakthrough deep learning throughput across the industry’s most comprehensive compute portfolio for AI, which includes Intel(R) Xeon(R) processors, Intel(R) Xeon Phi™ processors, Intel(R) Arria(R) 10 FPGAs, Intel(R) Nervana™ technology and more. The framework will employ Intel’s highly optimized open source libraries, the Intel(R) Math Kernel Library (MKL) and the Intel(R) Math Kernel Library for Deep Neural Networks (MKL-DNN), as fundamental building blocks.
Through the collaboration, Intel and PFN will undertake the following:
- Continuously optimize the performance of Chainer on Intel architecture
- Continuously align to Chainer updates
- Continuously optimize Chainer for updates to Intel architectures, including general-purpose processors, accelerators, and libraries
- Share the results of the companies’ collaboration with the community on Intel’s GitHub repository
- Collaborate on marketing activities designed to accelerate AI/deep learning market growth
＊Intel, Xeon, Xeon Phi, Arria and Nervana are trademarks or registered trademarks of Intel Corporation in the United States and other countries.
＊Chainer and DIMo are trademarks or registered trademarks of Preferred Networks, Inc. in Japan and other countries.