Multi-agent systems composed of autonomous robots are well suited to large-scale, complex problems. However, many conventional frameworks rely on a centralized component that must somehow collect information about the entire system, which limits scalability. In this research, we propose bottom-up multi-agent reinforcement learning in which autonomous robots understand and cooperate with one another through minimal mutual communication, in a fully decentralized manner.
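The intended setting can be illustrated with a minimal sketch (Python is used here purely for illustration; the class and message format below are our own assumptions, not the published implementation): each agent learns independently from its local observations, and the only coupling between agents is a small broadcast message.

```python
import random
from collections import defaultdict

class DecentralizedAgent:
    """One robot: learns from its own observations; the only coupling with
    other agents is a small message exchanged each step (hypothetical sketch)."""

    def __init__(self, agent_id, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.id = agent_id
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)   # local Q-values keyed by (observation, action)
        self.inbox = {}               # latest message value received from each other agent

    def act(self, obs):
        # Epsilon-greedy action selection on purely local Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(obs, a)])

    def message(self, obs):
        # Minimal communication: a single scalar summary (here, the local value
        # estimate) rather than the full observation or policy.
        return {"sender": self.id,
                "value": max(self.q[(obs, a)] for a in self.actions)}

    def receive(self, msg):
        self.inbox[msg["sender"]] = msg["value"]

    def update(self, obs, action, reward, next_obs):
        # Independent Q-learning on local experience; no central critic or controller.
        target = reward + self.gamma * max(self.q[(next_obs, a)] for a in self.actions)
        self.q[(obs, action)] += self.alpha * (target - self.q[(obs, action)])
```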
In particular, we are working on problems such as the following:
- Reward shaping based on probabilistic reward predictions of other agents (see the first sketch after this list)
- Online estimation of interest between agents (see the second sketch after this list)
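For the first problem, one way to read "probabilistic reward prediction" is that each agent keeps an uncertainty-aware estimate of the rewards other agents receive and uses it as a shaping bonus. The sketch below is an illustrative assumption along those lines (the names `RewardPredictor` and `shaped_reward`, and the weighting by prediction confidence, are hypothetical, not taken from the published method); a real predictor would condition on state and action rather than keeping a single running Gaussian per agent.

```python
class RewardPredictor:
    """Running Gaussian estimate of another agent's reward, so each
    prediction comes with an uncertainty (hypothetical sketch)."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def observe(self, reward):
        # Welford's online update of mean and variance.
        self.n += 1
        delta = reward - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (reward - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 1.0

def shaped_reward(own_reward, predictors, beta=0.5):
    """Add other agents' expected rewards, down-weighted by prediction
    uncertainty, so confident predictions influence learning more."""
    bonus = 0.0
    for p in predictors.values():          # one predictor per other agent
        confidence = 1.0 / (1.0 + p.variance)
        bonus += confidence * p.mean
    return own_reward + beta * bonus
```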
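For the second problem, a simple illustrative reading of "online estimation of interest between agents" is to track, with exponentially weighted statistics, how strongly another agent's behaviour covaries with one's own reward. Again, the class below is a hypothetical sketch under that assumption, not the lab's estimator.

```python
class InterestEstimator:
    """Online estimate of how relevant another agent's behaviour is to me,
    measured as an exponentially weighted covariance between that agent's
    action signal and my own reward (hypothetical sketch)."""

    def __init__(self, decay=0.99):
        self.decay = decay
        self.mean_a = 0.0   # running mean of the other agent's action signal
        self.mean_r = 0.0   # running mean of my own reward
        self.cov = 0.0      # running covariance estimate

    def update(self, other_action_signal, my_reward):
        self.mean_a = self.decay * self.mean_a + (1 - self.decay) * other_action_signal
        self.mean_r = self.decay * self.mean_r + (1 - self.decay) * my_reward
        self.cov = (self.decay * self.cov
                    + (1 - self.decay)
                    * (other_action_signal - self.mean_a)
                    * (my_reward - self.mean_r))

    @property
    def interest(self):
        # Larger magnitude means this agent's behaviour matters more to me.
        return abs(self.cov)
```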
Because humans can also participate as agents in the proposed framework, we are also applying it to human-robot cooperative systems.