Abstract
Artificial intelligence (AI) that accumulates knowledge and presents it to users has the potential to become a vital ally of humanity. Such a system should continually accumulate knowledge while it operates, without needing to be retrained on new data. This study developed an appendable memory system that allows AI to acquire new knowledge even after deployment. Certain dialogue agents can already acquire new knowledge after deployment and hold conversations based on past interactions stored in their memory, referred to as long-term memory, in which information is stored as natural language. However, existing dialogue agents have insufficient memory capacity to continually accumulate knowledge and share it with users. Because the long-term memory must be processed directly by the dialogue agent, its format is restricted to the natural-language modality, which prevents high compression efficiency. To keep accumulating knowledge after deployment, a dialogue agent needs a memory that can store an arbitrary amount of information in a finite-sized medium. This study aims to construct an AI system that can generate such memories. However, we demonstrate that such a system cannot be realized by current machine learning methods, that is, by merely repeating the learning of input and output sequences. Instead, this study proposes a method for storing an arbitrary amount of information within a single finite-sized vector, unrestricted in modality, and retrieving information from it, using a pair of neural networks resembling a recurrent neural network and an encoder–decoder network, hereinafter referred to as the memorizer–recaller network. The proposed system generates an external memory that is updated dynamically whenever the AI acquires new knowledge; we call this appendable memory. Although this study yielded only preliminary results, the approach differs from traditional neural network training methods and provides a foundation for building AI systems that can store an arbitrary amount of information within finite-sized data and freely utilize it in the future.
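To make the memorizer–recaller idea concrete, the following is a minimal sketch, not the authors' implementation: a "memorizer" folds a sequence of key–value pairs into one fixed-size memory vector, and a "recaller" reads a value back out of that vector given a query key. All layer sizes and names (KEY_DIM, VALUE_DIM, MEMORY_DIM, Memorizer, Recaller) are illustrative assumptions rather than details taken from the paper, and the training loop that would optimize both networks is omitted.

```python
# Hypothetical sketch of a memorizer-recaller pair; sizes and structure are
# assumptions for illustration, not the architecture reported in the paper.
import torch
import torch.nn as nn

KEY_DIM, VALUE_DIM, MEMORY_DIM = 8, 8, 128  # assumed dimensions


class Memorizer(nn.Module):
    """Updates a fixed-size memory vector with one (key, value) pair at a time,
    similar in spirit to the recurrent update of an RNN cell."""

    def __init__(self):
        super().__init__()
        self.update = nn.Sequential(
            nn.Linear(KEY_DIM + VALUE_DIM + MEMORY_DIM, MEMORY_DIM),
            nn.Tanh(),
        )

    def forward(self, memory, key, value):
        # The new memory depends on the previous memory and the incoming pair,
        # so information can be appended after deployment without retraining.
        return self.update(torch.cat([memory, key, value], dim=-1))


class Recaller(nn.Module):
    """Decodes a value from the fixed-size memory vector given a query key."""

    def __init__(self):
        super().__init__()
        self.read = nn.Sequential(
            nn.Linear(MEMORY_DIM + KEY_DIM, MEMORY_DIM),
            nn.Tanh(),
            nn.Linear(MEMORY_DIM, VALUE_DIM),
        )

    def forward(self, memory, key):
        return self.read(torch.cat([memory, key], dim=-1))


if __name__ == "__main__":
    memorizer, recaller = Memorizer(), Recaller()

    # Write several key-value pairs into a single fixed-size memory vector.
    memory = torch.zeros(MEMORY_DIM)
    pairs = [(torch.randn(KEY_DIM), torch.randn(VALUE_DIM)) for _ in range(5)]
    for key, value in pairs:
        memory = memorizer(memory, key, value)

    # Read one value back with its key; training (not shown) would make this
    # reconstruction match the stored value.
    query_key, stored_value = pairs[2]
    print(recaller(memory, query_key))
```

In this sketch the memory vector itself, rather than the network weights, serves as the external, appendable store: new pairs change only the vector, while the trained memorizer and recaller remain fixed.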
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
The manuscript was proofread by a native English speaker.