Topology’s Continuous Learning Model (CLM) is a natural language program that accumulates knowledge, experience, and skills over time, just like humans.

LLMs have the following problems:

  1. Static world knowledge
  2. Amnesia across different conversations
  3. Inability to acquire new skills without fine-tuning

Topology’s CLM:

  1. Has no knowledge cut-off
  2. Remembers content across conversations
  3. Can acquire new skills (without fine-tuning) through trial and error

<aside> Have questions about our docs? Chat with the CLM! It’s already learned all of this so you don’t have to.

</aside>

How Does the CLM Work?

Continuous learning is enabled by a novel algorithm that improves with scale.

Topology has built a ‘hippocampus’ for transformer-based language models. In humans, the frontal lobes handle language, while the hippocampus organizes memories and supports learning; damage to the hippocampus causes amnesia. In this analogy, a stock LLM is all frontal lobe: fluent, but with no way to form lasting memories of its own.
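
To make the analogy concrete, here is a minimal sketch of what a ‘hippocampus’ for a stateless LLM can look like. Topology has not published the CLM’s internals, so this is an illustrative assumption, not their algorithm: the `save_memory`, `recall`, and `build_prompt` functions and the `memory.jsonl` store are all hypothetical names. The idea it demonstrates is simply an external memory that persists across conversations and surfaces relevant facts back into the model’s context.

```python
# Illustrative sketch only -- NOT Topology's CLM, whose algorithm is unpublished.
# Demonstrates the general "hippocampus" pattern: an external store that outlives
# any single conversation and injects relevant past facts into the prompt.

import json
from pathlib import Path

MEMORY_FILE = Path("memory.jsonl")  # hypothetical on-disk memory store

def save_memory(text: str) -> None:
    """Append one remembered fact; survives restarts and new conversations."""
    with MEMORY_FILE.open("a") as f:
        f.write(json.dumps({"text": text}) + "\n")

def recall(query: str, k: int = 3) -> list[str]:
    """Naive retrieval: rank stored facts by word overlap with the query.
    A real system would likely use embeddings; keyword overlap keeps
    this sketch self-contained and runnable."""
    if not MEMORY_FILE.exists():
        return []
    query_words = set(query.lower().split())
    memories = [json.loads(line)["text"] for line in MEMORY_FILE.open()]
    memories.sort(key=lambda m: -len(query_words & set(m.lower().split())))
    return memories[:k]

def build_prompt(user_message: str) -> str:
    """Prepend recalled memories so the stateless LLM can condition on them."""
    recalled = recall(user_message)
    context = "\n".join(f"- {m}" for m in recalled) or "- (none)"
    return f"Relevant memories:\n{context}\n\nUser: {user_message}\nAssistant:"

# Conversation 1: a fact is written to memory.
save_memory("The user's favorite color is teal.")

# Conversation 2 (later, fresh context): the fact is recalled into the prompt.
print(build_prompt("What's my favorite color?"))
```

Because the store lives outside the model, the transformer’s weights never change; under this assumed design, ‘learning’ is writing to memory, which is how such a system can sidestep both the knowledge cut-off and cross-conversation amnesia without fine-tuning.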