Distributed Multi-Agent Lifelong Learning

Authors

  • Prithviraj Tarale, University of Massachusetts Amherst
  • Edward Rietman
  • Hava Siegelmann

Keywords

Machine Learning, Constraint & Uncertainty Theory

Abstract

Lifelong learning (LL) systems must adapt to changing environments by continuously updating their knowledge. Traditional LL paradigms assume that new data arrive labeled and that agents learn independently. However, labeled data are scarce, especially in remote settings. We introduce the Peer Parallel Lifelong Learning (PEEPLL) framework for distributed multi-agent LL, in which agents actively seek peer assistance when needed. Unlike classical distributed AI, where communication scales poorly, PEEPLL agents reduce communication as their knowledge evolves. To improve resilience to low-quality peer responses, we propose (a) the TRUE confidence score, a compute-efficient application of a Variational Autoencoder without input reconstruction, (b) the REFINE algorithm, which selectively accepts peer responses, and (c) DYMEM, a dynamic memory-update mechanism for storing necessary information. Each of the three contributions improves upon its baseline. In our solution to PEEPLL, agents outperform traditional LL agents, even when the latter have environmental supervision available, marking a substantial step towards scalable lifelong learning at the edge.
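The abstract does not specify how the TRUE confidence score is computed; the paper should be consulted for the actual formulation. As a hedged illustration only, the following sketch shows one way a VAE-style confidence could be derived from the encoder's latent posterior alone, skipping the decoder/reconstruction pass: score an input by the KL divergence of q(z|x) from the N(0, I) prior and map it to (0, 1] with exp(-KL). All names and the toy linear "encoder" here are hypothetical stand-ins, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained VAE encoder: a fixed linear map producing
# the mean and log-variance of the latent posterior q(z|x).
W_mu = rng.normal(size=(8, 4))
W_lv = rng.normal(size=(8, 4)) * 0.1

def encode(x):
    return x @ W_mu, x @ W_lv

def latent_confidence(x):
    """Confidence from the latent posterior alone (no decoder pass):
    KL( q(z|x) || N(0, I) ), mapped to (0, 1] via exp(-KL).
    Inputs far from the training distribution tend to produce a
    posterior far from the prior, hence a larger KL and lower score."""
    mu, logvar = encode(x)
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return float(np.exp(-kl))

in_dist = rng.normal(scale=0.1, size=8)   # small-norm input -> small KL
far_out = rng.normal(scale=5.0, size=8)   # large-norm input -> large KL
assert latent_confidence(in_dist) > latent_confidence(far_out)
```

Because only the encoder is evaluated, this kind of score costs roughly half a full VAE forward pass, which is consistent with the abstract's "compute-efficient ... without input reconstruction" framing.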

DOI: https://doi.org/10.24135/ICONIP17

Published

2025-03-17