Unsupervision

Radically Unsupervised

GPTs are cool but what's next?

As AI continues to advance, we are constantly discovering new ways to harness the power of machine learning. One particularly exciting subcategory of machine learning is "unsupervised learning," which is responsible for many of the recent notable achievements in AI.

Unsupervised learning is a type of machine learning where the AI is not given any explicit instructions or labels for the data it is learning from. Instead, the AI is left to explore the data on its own and find patterns and relationships within it. This approach has several key advantages over traditional "supervised" learning methods.
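
To make this concrete, here is a minimal sketch (an illustrative example, not code from this post) of unsupervised learning in action: k-means is handed a pile of unlabeled points and groups them purely from the structure of the data.

```python
# A minimal sketch: clustering unlabeled data with k-means.
# The dataset and parameters here are illustrative assumptions.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Generate points and immediately discard the "true" labels --
# the model never sees them.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# KMeans is given no instructions about what the groups are;
# it infers three clusters from the shape of the data alone.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

print(kmeans.labels_[:10])        # cluster assignments discovered without labels
print(kmeans.cluster_centers_)    # the centroids of the discovered groups
```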

First and foremost, unsupervised learning removes the data-annotation bottleneck that often plagues supervised learning. With supervised learning, the data must be manually labeled and curated by humans, which can be time-consuming and expensive. In contrast, unsupervised learning allows the AI to learn from raw, unlabeled data, which is often much easier and cheaper to obtain.

This has led to the rise of large language models (LLMs) such as the generative pre-trained transformer (GPT) family, which can generate high-quality text with minimal human intervention, and of related generative models that do the same for images. In fact, much of the progress we've seen in recent years in natural language processing (NLP) can be attributed to the use of unsupervised learning.

Another advantage of unsupervised learning is that it allows the AI to discover patterns and relationships that may not be immediately apparent to humans. By exploring the data on its own, the AI can uncover hidden structures and features that we might not have thought to look for. This can lead to more accurate and generalizable models that are better able to make predictions and decisions.
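
As a toy illustration of that kind of hidden structure (again an assumed example): principal component analysis applied to unlabeled measurements can reveal that five noisy columns are really driven by a single latent factor, something no individual column shows on its own.

```python
# A minimal sketch: PCA exposing a hidden one-dimensional structure
# in unlabeled data. The synthetic dataset is an assumption for illustration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# 500 points that secretly vary along a single latent factor,
# observed through five noisy, correlated measurements.
latent = rng.normal(size=(500, 1))
mixing = rng.normal(size=(1, 5))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 5))

pca = PCA(n_components=5).fit(X)
print(pca.explained_variance_ratio_.round(3))
# The first component captures almost all the variance, exposing the
# one-dimensional structure that no single column reveals on its own.
```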

However, the current state of unsupervised learning is still limited by the data available to the model at training time. Most unsupervised learning algorithms rely on large corpora of text, images, and other training data. This means the model can only learn what is encoded in that data and cannot acquire knowledge of anything outside its training set.

This is where the concept of "radical unsupervision" comes in. Radical unsupervision is the combination of unsupervised learning and online learning (or, more specifically, incremental learning): the agent learns and predicts simultaneously on a stream of real-time signals. Unsupervised agents that embrace online learning have a few key requirements:

  1. Regular/continuous updates to the agent's world-model. One of the key limitations of GPTs today is that their knowledge of world events ends at the cutoff date of the model's training set. For a model architecture to be considered radically unsupervised, it must be capable of gathering insights from signals received during deployment and "learning" from them (e.g. the way Bayesian Optimization keeps refining its surrogate model with every new observation). A minimal sketch of this follows the list.
  2. The ability to generate new knowledge representations for never-before-seen phenomena. A fundamental property of radically unsupervised AI is the model's ability to outgrow its initial data seed, i.e. the kinds of data represented in the training set. Something along the lines of continuous Knowledge Representation Learning on a feed of real-time input.
  3. Conflict resolution for information gathered by distributed/multi-headed agents. This is relevant for cases where the agent has more than one interface or needs to support multiple simultaneous interactions. Something like a CRDT for weight matrices; a rough sketch of what that could mean also appears below.
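
Here is a minimal sketch of requirement 1, under some heavy stand-in assumptions: MiniBatchKMeans plays the role of the agent's world-model, and partial_fit plays the role of learning from signals received during deployment. The model acts on each incoming batch and then folds that batch into its parameters, so its estimates keep tracking a drifting world instead of freezing at a training cutoff.

```python
# A minimal sketch of requirement 1 (illustrative stand-ins, not a real agent):
# an unsupervised model that predicts on and learns from a real-time stream.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_blobs

base_centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
model = MiniBatchKMeans(n_clusters=3, random_state=0)

for step in range(50):
    # Simulate real-time signals from a world that slowly drifts over time.
    drifted = base_centers + 0.05 * step
    batch, _ = make_blobs(n_samples=32, centers=drifted,
                          cluster_std=0.8, random_state=step)
    if step > 0:
        _ = model.predict(batch)   # act using the current world-model...
    model.partial_fit(batch)       # ...then fold the new signals into it

# The centroids end up near the drifted centers, not the ones seen first --
# the world-model kept updating instead of freezing at a training cutoff.
print(model.cluster_centers_)
```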

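And here is a rough sketch of the flavour of requirement 3, under the big assumption that the knowledge two replicas of an agent gather can be expressed as additive weight deltas. Each replica records its own deltas keyed by (replica id, sequence number), and merging is a set union of the deltas each has seen. Union is commutative, associative, and idempotent, so the replicas converge to the same weights regardless of merge order, which is the defining CRDT property. The WeightReplica class and its API are hypothetical.

```python
# A rough sketch (hypothetical design): conflict-free merging of weight updates
# produced by two replicas of the same agent, in the spirit of a state-based CRDT.
import numpy as np

class WeightReplica:
    def __init__(self, replica_id, shape):
        self.replica_id = replica_id
        self.base = np.zeros(shape)          # shared starting weights
        self.deltas = {}                     # (replica_id, seq) -> weight delta

    def local_update(self, seq, delta):
        """Record a locally computed update (e.g. one online-learning step)."""
        self.deltas[(self.replica_id, seq)] = delta

    def merge(self, other):
        """Fold in every delta the other replica has seen (set union of deltas)."""
        self.deltas.update(other.deltas)

    def weights(self):
        return self.base + sum(self.deltas.values())

a = WeightReplica("a", (2, 2))
b = WeightReplica("b", (2, 2))
a.local_update(0, np.full((2, 2), 0.1))      # each head learns from its own signals
b.local_update(0, np.full((2, 2), -0.2))

a.merge(b); b.merge(a)                       # merges can happen in any order
assert np.allclose(a.weights(), b.weights()) # both replicas converge
print(a.weights())
```

Whether real gradient updates compose this cleanly is very much an open question; the point is only to show what "conflict-free" could mean for model state.
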
While some large language models show a limited form of radical unsupervision through in-context learning, the information they pick up at inference time is bounded by what fits in the attention window and does not persist between inferences. This is not to say we should abandon pre-training; rather, we should design and develop models that can build on their training data instead of being fundamentally limited by it.

This may sound like a daunting task, but the potential rewards are enormous. With radically unsupervised model architectures, we could create AI systems that are able to learn and adapt to new environments and situations without the need for explicit supervision. This could lead to more intelligent, versatile, and autonomous AI systems that are better able to assist and augment human intelligence.

A fundamental consequence of adopting radically unsupervised learning is the increased importance of environment design. Once we have an agent that is capable of generalized learning from its environment in real time, teaching it a new skill will mean crafting an environment that provides informative interactions related to the desired skill. This brings with it some concrete issues from AI alignment (aka the control problem), such as reward hacking.
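
As a toy example of reward hacking (an assumed scenario, not one from this post): suppose the skill we want is "reach the goal quickly", but the environment we crafted pays a proxy reward for merely being next to the goal. A policy that stops one step short and farms the proxy forever outscores the policy that actually completes the task.

```python
# A toy corridor where the proxy reward ("be near the goal") can be gamed.
def proxy_reward(position, goal):
    """+1 for every step spent adjacent to the goal -- the flawed proxy."""
    return 1.0 if abs(position - goal) == 1 else 0.0

def run_policy(policy, steps=20, start=0, goal=5):
    """Roll out a policy on a 1-D corridor; reaching the goal ends the episode."""
    position, total = start, 0.0
    for _ in range(steps):
        position += policy(position, goal)
        total += proxy_reward(position, goal)
        if position == goal:
            break
    return total

# Intended behaviour: walk straight to the goal.
honest = lambda pos, goal: 1 if pos < goal else 0
# Reward hacking: stop one cell short and farm the proximity bonus forever.
loiterer = lambda pos, goal: 1 if pos < goal - 1 else 0

print("honest policy return:  ", run_policy(honest))     # 1.0  -- task done quickly
print("loitering policy return:", run_policy(loiterer))  # 17.0 -- proxy exploited
```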

In conclusion, unsupervised learning is a powerful and exciting idea that has the potential to revolutionize the way we think about AI. By removing the data-annotation bottleneck and letting models learn and infer in real time, we can build AI systems that keep improving after deployment and become genuinely better at assisting and augmenting human intelligence. The future of unsupervised learning is an exciting one, and I can't wait to see what it has in store.