Preferred Networks Releases PFRL Deep Reinforcement Learning Library for PyTorch Users
PFRL succeeds ChainerRL as a comprehensive library of cutting-edge deep reinforcement learning algorithms and features
TOKYO – July 30, 2020 – Preferred Networks, Inc. (PFN) today released PFRL, a new open-source deep reinforcement learning (DRL) library for PyTorch users who intend to apply cutting-edge DRL algorithms to their problems of interest. Succeeding Chainer™-based ChainerRL, PFRL is part of PFN’s ongoing effort to strengthen its ties with the PyTorch developer community while the company transitions its deep learning framework from Chainer to PyTorch.
PFRL implements a comprehensive set of DRL algorithms and techniques drawn from state-of-the-art research in the field, allowing researchers to quickly compare, combine, and experiment with them for fast iteration. In particular, PFRL offers high-quality, thoroughly benchmarked implementations that reproduce nine key DRL algorithms, which can be used as a basis for research and development. By using PFRL, existing ChainerRL users will be able to retain much of their existing code as they migrate to PyTorch.
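To give a flavor of the kind of agent–environment interaction loop that DRL libraries such as PFRL are built around, the sketch below implements it in plain Python. The two-armed bandit environment and epsilon-greedy agent here are illustrative stand-ins written for this example, not PFRL code; PFRL's own agents, networks, and training utilities are documented in the repository.

```python
import random

class ToyBanditEnv:
    """Illustrative two-armed bandit: arm 1 pays more on average."""
    def reset(self):
        return 0  # single dummy observation

    def step(self, action):
        reward = random.gauss(0.5 if action == 0 else 1.0, 0.1)
        return 0, reward, True, {}  # one-step episodes

class EpsilonGreedyAgent:
    """Tracks an average reward per arm via an act/observe interface."""
    def __init__(self, n_actions, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_actions
        self.values = [0.0] * n_actions
        self._last_action = None

    def act(self, obs):
        if random.random() < self.epsilon:
            self._last_action = random.randrange(len(self.values))
        else:
            self._last_action = max(range(len(self.values)),
                                    key=self.values.__getitem__)
        return self._last_action

    def observe(self, obs, reward, done, reset):
        a = self._last_action
        self.counts[a] += 1
        # Incremental mean update of the action-value estimate
        self.values[a] += (reward - self.values[a]) / self.counts[a]

env, agent = ToyBanditEnv(), EpsilonGreedyAgent(n_actions=2)
for episode in range(500):
    obs, done = env.reset(), False
    while not done:
        action = agent.act(obs)
        obs, reward, done, _ = env.step(action)
        agent.observe(obs, reward, done, reset=done)

print(agent.values)  # the estimate for arm 1 should be the larger one
```

In a real PFRL program, the hand-written agent above would be replaced by one of the library's benchmarked algorithm implementations, while the surrounding loop structure stays essentially the same.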
PFN will provide baseline implementations using PFRL for the MineRL competition at the 2020 Conference on Neural Information Processing Systems (NeurIPS), which participants can use as a starting point to develop their own novel systems. PFRL also provides examples for using Optuna to enable hyperparameter search for DRL applications.
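Hyperparameter search for DRL, as enabled by Optuna, amounts to repeatedly sampling candidate settings, running a training trial, and keeping the best result. The stdlib-only sketch below illustrates that pattern with random search; the quadratic objective is a hypothetical stand-in for a full training run, and the parameter names and ranges are illustrative, not taken from PFRL's Optuna examples.

```python
import random

def objective(lr, gamma):
    # Hypothetical stand-in for a training run's final score:
    # peaks at lr=0.01, gamma=0.99 (both values are illustrative).
    return -((lr - 0.01) ** 2 * 1e4 + (gamma - 0.99) ** 2 * 1e2)

random.seed(0)
best_score, best_params = float("-inf"), None
for trial in range(200):
    # Sample hyperparameters for this trial
    lr = 10 ** random.uniform(-4, -1)   # log-uniform learning rate
    gamma = random.uniform(0.9, 0.999)  # discount factor
    score = objective(lr, gamma)
    if score > best_score:
        best_score, best_params = score, {"lr": lr, "gamma": gamma}

print(best_params)
```

A framework like Optuna replaces the naive random sampling above with smarter search strategies and adds pruning of unpromising trials, which matters when each trial is an expensive DRL training run.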
PFN released PFRL in response to the PyTorch community’s demand for a comprehensive DRL library similar to ChainerRL. First launched in February 2017, ChainerRL has been applied internally at PFN and in the Chainer community outside the company in a variety of research and industrial settings. Going forward, PFN aims to use PFRL to accelerate internal research and development, as well as to serve the broader reinforcement learning community.
PFRL is currently available at: https://github.com/pfnet/pfrl