Preferred Networks Migrates its Deep Learning Research Platform to PyTorch
PFN to work with PyTorch and the open-source community to develop the framework and advance MN-Core processor support.
December 5, 2019, Tokyo, Japan – Preferred Networks, Inc. (PFN, Head Office: Tokyo, President & CEO: Toru Nishikawa) today announced plans to incrementally transition its deep learning framework, a fundamental technology in its research and development, from PFN’s Chainer™ to PyTorch. Concurrently, PFN will collaborate with Facebook and other contributors in the PyTorch community to actively participate in the development of PyTorch. With the release of v7, its latest major upgrade, today, Chainer will move into a maintenance phase. PFN will provide documentation and a library to help Chainer users migrate to PyTorch.
PFN President and CEO Toru Nishikawa made the following comments on this business decision.
“Since the start of deep learning frameworks, Chainer has been PFN’s fundamental technology to support our joint research with Toyota, FANUC, and many other partners. Chainer provided PFN with opportunities to collaborate with major global companies, such as NVIDIA and Microsoft. Migrating to PyTorch from Chainer, which was developed with tremendous support from our partners, the community, and users, is an important decision for PFN. However, we firmly believe that by participating in the development of one of the most actively developed frameworks, PFN can further accelerate the implementation of deep learning technologies, while leveraging the technologies developed in Chainer and searching for new areas that can become a source of competitive advantage.”
Developed and provided by PFN, Chainer has supported PFN’s R&D as a fundamental technology and contributed significantly to its business growth since it was open-sourced in June 2015. Its unique Define-by-Run approach, in which the computation graph is constructed on the fly as the forward computation runs, gained strong support from the community of researchers and developers. The approach has since been widely adopted by today’s mainstream deep learning frameworks because it allows users to build complex neural networks intuitively and flexibly, speeding up the advancement of deep learning technology.
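To make the Define-by-Run idea concrete, here is a minimal sketch (an illustration, not PFN’s implementation): the computation graph is recorded while the forward pass executes, so ordinary Python control flow such as loops and conditionals can shape the network differently for every input.

```python
# Minimal sketch of Define-by-Run: the graph is built as the code runs,
# rather than being declared up front (Define-and-Run).

class Var:
    """A scalar value that records the operations applied to it."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # upstream Vars (edges of the recorded graph)
        self.grad_fn = None      # how to pass gradients to the parents
        self.grad = 0.0

    def __mul__(self, other):
        out = Var(self.value * other.value, parents=(self, other))
        out.grad_fn = lambda g: (g * other.value, g * self.value)
        return out

    def __add__(self, other):
        out = Var(self.value + other.value, parents=(self, other))
        out.grad_fn = lambda g: (g, g)
        return out

    def backward(self, g=1.0):
        self.grad += g
        if self.grad_fn:
            for parent, pg in zip(self.parents, self.grad_fn(g)):
                parent.backward(pg)

# Plain Python control flow decides the graph at run time:
x = Var(2.0)
y = x
for _ in range(3):   # the graph for y = x ** 4 is recorded as this loop runs
    y = y * x
y.backward()
print(y.value, x.grad)   # y = 2**4 = 16.0, dy/dx = 4 * x**3 = 32.0
```

Because the loop count could just as well depend on the input data, every forward pass may produce a different graph, which is what makes Define-by-Run well suited to dynamic architectures such as recursive or variable-length models.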
Meanwhile, the maturation of deep learning frameworks over the last several years has marked the end of the era in which the framework itself was a competitive edge in development. PFN believes that instead of making small adjustments to differentiate itself from competitors, it should contribute to the sustainable growth of the community of developers and users and help create a healthy ecosystem with the common goal of further advancing deep learning technology.
● Migrating PFN’s deep learning R&D platform to PyTorch
PFN will migrate its deep learning research platform to PyTorch, which draws inspiration from Chainer, to enable flexible prototyping and a smooth transition from research to production in machine learning development. With a broad set of contributing developers including Facebook, PyTorch has an engaged developer community and is one of the most frequently used frameworks in academic papers. Migrating to PyTorch will allow PFN to efficiently incorporate the latest research results into its R&D activities and to leverage its existing Chainer assets by converting them to PyTorch. PFN will cooperate with the PyTorch team at Facebook and the open-source community to contribute to the development of PyTorch, and will support PyTorch on MN-Core, a deep learning processor currently being developed by PFN.
PFN has received the following comments from Facebook and the Toyota Research Institute:
Bill Jia, Vice President of AI Infrastructure, Facebook
“As a leading contributor to PyTorch, we’re thrilled that a pioneer in machine learning (ML), such as PFN, has decided to adopt PyTorch for future development,” said Bill Jia, Facebook Vice President of AI Infrastructure. “PyTorch’s enablement of leading-edge research, combined with its ability for distributed training and inference, will allow PFN to rapidly prototype and deploy ML models to production for its customers. In parallel, the entire PyTorch community will benefit from PFN code contributions given the organization’s expertise in ML tools.”
Gill Pratt, CEO, Toyota Research Institute
“TRI and TRI-AD welcome the transition by PFN to PyTorch,” said Gill Pratt, CEO of Toyota Research Institute (TRI), Chairman of Toyota Research Institute – Advanced Development (TRI-AD), and a Fellow of Toyota Motor Corporation. “PFN has in the past strongly contributed to our joint research, development, and advanced development in automated driving by creating and maintaining Chainer. TRI and TRI-AD have used PyTorch for some time and feel that PFN’s present adoption of PyTorch will facilitate and accelerate our application of PFN’s expertise in deep learning.”
● Major features of the latest deep learning framework Chainer™ v7 and the general-purpose matrix calculation library CuPy™ v7
Chainer v7:
- Improved interoperability with the C++-based ChainerX; many Chainer functions now support ChainerX
- The distributed deep learning package ChainerMN is now included in Chainer v7
- A TabularDataset class has been added for flexible processing of multi-column datasets
- With ONNX support consolidated into Chainer, v7 can work with inference engines through ONNX
For details about Chainer’s new features, future development, and documentation on how to migrate to PyTorch, please read the latest blog post from the Chainer development team.
CuPy v7:
- With the cuTENSOR and CUB libraries supported, CuPy delivers improved performance on NVIDIA GPUs
- CuPy has added experimental support for ROCm, enabling it to run on AMD GPUs
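CuPy’s value as a general-purpose array library comes from mirroring the NumPy API, so the same code can target NVIDIA (CUDA) or, experimentally, AMD (ROCm) GPUs by swapping the import. The sketch below illustrates this idiom; it falls back to NumPy so it also runs on machines without a GPU.

```python
# Drop-in array-API idiom: write against one namespace, pick the backend
# at import time. (Illustrative sketch; cupy requires a CUDA or ROCm GPU.)
try:
    import cupy as xp        # GPU arrays if a CuPy build is installed
except ImportError:
    import numpy as xp       # CPU fallback; the calls below are identical

a = xp.arange(6, dtype=xp.float32).reshape(2, 3)
b = xp.ones((3, 2), dtype=xp.float32)
c = a @ b                    # under CuPy this matmul runs on the GPU

# cupy.asnumpy copies device arrays back to host; NumPy has no such need.
result = xp.asnumpy(c) if hasattr(xp, "asnumpy") else c
print(result)
```

This backend-agnostic style is common in scientific Python code and is what allows existing NumPy programs to gain GPU acceleration with minimal changes.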
Chainer Release Note: https://github.com/chainer/chainer/releases/tag/v7.0.0
Chainer Documentation: https://docs.chainer.org/en/v7.0.0/
PFN will continue to develop its other open-source software, notably CuPy and Optuna, as actively as ever.