Abstract
Artificial intelligence that accumulates knowledge and presents it to users could become a good companion to humanity. Such artificial intelligence would benefit from the ability to accumulate knowledge continuously while in operation, rather than having to be retrained whenever new knowledge is acquired. Artificial intelligence that can share its past knowledge with users would be a human-like presence. Some existing dialogue agents can partially use memories of past conversations; such memory is sometimes referred to as long-term memory. However, the ability of existing dialogue agents to keep accumulating knowledge and to share that knowledge with users is not yet sufficient. Within their current implementations, their memory capacity is, at the very least, finite. To realize a memory that can store arbitrary information in a medium of finite size, a method for improving this memory capacity must be developed; this can be regarded as a form of research on data compression. Moreover, under the current approach of developing dialogue agents by repeatedly training on human conversation data, it is doubtful whether artificial intelligence can effectively utilize the memory it has acquired in the past. In this study, we demonstrate this impossibility. In contrast, this research proposes a method to store various pieces of information in a single finite-size vector and retrieve them, using a neural network with a structure similar to a recurrent neural network and an encoder–decoder network, which we named mnemonist. Although this study yields only very preliminary results, the approach is fundamentally different from the conventional way neural networks are trained, and it is a foundational step toward artificial intelligence that can store arbitrary information in finite-size data and freely utilize it.
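The abstract does not specify mnemonist's architecture, but the core idea it describes, writing an arbitrary-length input into a single fixed-size vector with a recurrent update and reading it back with a decoder, can be sketched minimally as follows. All weight matrices, dimensions, and function names here are illustrative assumptions; the weights are random and untrained, so this shows only the data flow, not a working memory.

```python
import numpy as np

rng = np.random.default_rng(0)

D, V = 16, 8  # assumed memory-vector size and vocabulary size (illustrative)

# Hypothetical parameters; in a real system these would be learned.
W_h = rng.normal(scale=0.1, size=(D, D))  # recurrent state update
W_x = rng.normal(scale=0.1, size=(D, V))  # input projection (encoder)
W_o = rng.normal(scale=0.1, size=(V, D))  # readout projection (decoder)

def one_hot(i, n=V):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def write(memory, token_id):
    """Fold one input token into the fixed-size memory vector (RNN-style update)."""
    return np.tanh(W_h @ memory + W_x @ one_hot(token_id))

def read(memory, steps):
    """Autoregressively decode token ids back out of the memory vector."""
    out, h = [], memory
    for _ in range(steps):
        out.append(int(np.argmax(W_o @ h)))
        h = np.tanh(W_h @ h)
    return out

memory = np.zeros(D)          # the single finite vector
for t in [3, 1, 4, 1, 5]:     # store a sequence of any length...
    memory = write(memory, t)  # ...memory stays the same size

decoded = read(memory, steps=5)
```

The point of the sketch is that the memory medium has constant size `D` no matter how many tokens are written, which is exactly the finite-capacity setting the abstract discusses.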
Competing Interest Statement
The authors have declared no competing interest.