I prompted GPT-3 with the content from Teaching Learning Machines (TLMs) and the heading “# Self-learning meta-learners teach themselves to teach”. Here’s what I got (Disclaimer: TLMs are not yet established ML terminology):

A lot of the excitement around TLMs is due to their potential for accelerating AI development. But there’s another, equally important reason why they’re so interesting: they shed light on one of the great mysteries of intelligence. How does learning happen in the first place? What are the basic building blocks out of which all knowledge is constructed? If we want to create truly intelligent machines, these are questions that need answering. And it turns out, TLMs may be just what we need.

To understand how this works, let’s start with a very simple example: teaching a machine to play tic-tac-toe (noughts and crosses). We can use reinforcement learning for this – an approach based on trial and error, where the agent receives feedback after each move about whether it was good or bad. Over time, the agent learns from its mistakes and gets better at the game until it eventually becomes unbeatable (perfect play from both sides makes tic-tac-toe a draw, so an optimal agent never loses).
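To make that loop concrete, here is a minimal Python sketch – not from the original text – of tabular self-play learning for tic-tac-toe: an epsilon-greedy policy over a table of state–action values, with a simple Monte-Carlo-style update at the end of each game rather than full temporal-difference bootstrapping. Every constant in it is an arbitrary illustrative choice.

```python
import random
from collections import defaultdict

# Self-play trial-and-error for tic-tac-toe with a tabular value estimate.
# States are 9-character board strings over 'X', 'O', '.'; actions are the
# indices of empty cells. Hyperparameters are arbitrary illustrative choices.

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)            # Q[(state, action)] -> estimated value
ALPHA, EPSILON = 0.5, 0.1         # learning rate, exploration rate

def choose(board):
    moves = [i for i, cell in enumerate(board) if cell == '.']
    if random.random() < EPSILON:                     # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])    # exploit

for episode in range(50_000):
    board, player, history = '.' * 9, 'X', []
    while True:
        move = choose(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        win = winner(board)
        if win or '.' not in board:
            # Feedback arrives at game end: +1 for the winner's moves,
            # -1 for the loser's, 0 for a draw.
            for state, action, who in history:
                reward = 0.0 if win is None else (1.0 if who == win else -1.0)
                Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
            break
        player = 'O' if player == 'X' else 'X'
```

With enough episodes the greedy policy stops losing; a full Q-learning variant would bootstrap from the next state’s value instead of waiting for the terminal reward.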

Now let’s say we want to take this a step further and teach the machine how to teach other machines to play tic-tac-toe. In order for the agent to do this, it needs two things: first, a model of how teaching works; and second, a way of representing knowledge so that it can be transferred from one student (or “subagent”) to another. The latter is known as meta-learning – learning about learning – and it’s vital for creating TLMs.
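The text above doesn’t pin down a mechanism for “learning about learning”, but to make the idea concrete, here is a sketch of one well-known meta-learning recipe, Reptile (Nichol et al., 2018): repeatedly adapt a copy of the shared parameters to a freshly sampled task with ordinary SGD, then nudge the shared parameters toward the adapted ones. The toy sine-fitting task family, the tiny fixed-feature model, and every constant below are my own illustrative choices, not something proposed in the original text.

```python
import numpy as np

# A Reptile-style meta-learning sketch (after Nichol et al., 2018).
# Task family: regressing y = sin(x + phase) for randomly drawn phases.
# Model: linear weights over 40 fixed Fourier features.

rng = np.random.default_rng(0)
theta = rng.normal(size=40) * 0.1     # shared (meta-learned) parameters

def features(x):
    k = np.arange(1, 21)
    return np.concatenate([np.sin(np.outer(x, k)),
                           np.cos(np.outer(x, k))], axis=1)

def adapt(w, phase, steps=20, lr=0.02):
    # Inner loop: plain SGD on one sampled task.
    w = w.copy()
    for _ in range(steps):
        x = rng.uniform(-np.pi, np.pi, size=16)
        F = features(x)
        err = F @ w - np.sin(x + phase)
        w -= lr * (2 / len(x)) * F.T @ err    # gradient of mean squared error
    return w

META_LR = 0.1
for _ in range(2_000):                        # outer (meta) loop
    phase = rng.uniform(0, np.pi)             # sample a task
    w_task = adapt(theta, phase)
    theta += META_LR * (w_task - theta)       # Reptile: move toward adapted weights
```

After meta-training, `theta` is an initialization from which a handful of SGD steps fit a new phase far better than random weights would – “learning about learning” distilled into a starting point.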

One approach that has been proposed uses neural networks as both the students and the teachers. A trained network can be seen as a set of weights that encode generalizable insights about how tic-tac-toe (or any other game) should be played. Freshly, randomly initialized weights, by contrast, contain nothing but noise – they correspond to having no prior knowledge about the game. But if we use the trained network to teach a fresh one, something magical happens: those random initializations turn into useful patterns, and the new network learns faster than if it had started from scratch by itself!

This is known as “knowledge distillation” – the transfer of knowledge from a more complex model (the teacher) to a simpler one (the student). It’s an important concept in machine learning, and it’s what enables meta-learning. In our tic-tac-toe example, the agent that knows how to play the game can be thought of as the teacher, while the newly initialized neural network weights are like blank students waiting to be taught. By training these weights on data generated by the agent – i.e., by playing against it – we can turn them into powerful models of tic-tac-toe strategy that require far fewer examples to learn from than if they had started from scratch.
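The canonical distillation recipe is the one from Hinton et al. (2015): train the student to match the teacher’s temperature-softened output distribution rather than hard labels. Here is a minimal PyTorch sketch of that idea for the tic-tac-toe setting. To keep it self-contained, the “teacher” is just an untrained network standing in for the trained agent, and random boards stand in for positions from real games – so it runs as written, but learns nothing genuinely useful about tic-tac-toe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Knowledge-distillation sketch (after Hinton et al., 2015).
# Boards are encoded as 9 values in {-1, 0, +1}; outputs are logits over
# the 9 possible moves. The teacher here is untrained, purely so the
# example is standalone; in the text's setting it would be the trained agent.

torch.manual_seed(0)
teacher = nn.Sequential(nn.Linear(9, 128), nn.ReLU(), nn.Linear(128, 9))
student = nn.Sequential(nn.Linear(9, 32), nn.ReLU(), nn.Linear(32, 9))  # smaller

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's move distribution

for step in range(1_000):
    boards = torch.randint(-1, 2, (64, 9)).float()  # stand-in for real positions
    with torch.no_grad():
        soft_targets = F.softmax(teacher(boards) / T, dim=-1)
    log_probs = F.log_softmax(student(boards) / T, dim=-1)
    # KL divergence between teacher and student move distributions,
    # scaled by T^2 as in the original recipe to keep gradients comparable.
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The temperature is the interesting knob: higher values expose more of the teacher’s relative preferences among non-best moves, which is much of what makes the soft targets more informative than raw win/loss labels.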

One way of thinking about this is that the agent has learned not just how to play tic-tac-toe but also how teaching works: it knows which insights are most useful for transferring knowledge, and when and how best to do so. This generalizable understanding of teaching is what allows TLMs to create other machines that accelerate their own learning process!