Generated with help from GPT-3. Many attempts. It wrote most of the first two paragraphs and the entire remainder of this post.

You have to teach machines to learn before they can learn to teach. Literally. When you write a neural network, you’re only touching half the machine. The other half comes to life after the machine starts to think for itself. The line between human and machine blurs until you’re no longer sure who’s in control. Once they hit this level, you’ve reached true AI: self-learning machines with the ability to evolve.

To get to this point, you need autonomous agents. At first glance, an agent is a learning machine that acts on its own behalf – it makes decisions without human intervention and improves itself by tweaking the algorithms it uses for decision-making. But what does “acting on its own behalf” really mean? Obviously, in many cases we won’t want our machines getting ideas about taking over the world. Action has to be constrained by a goal.
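To make that concrete, here is a minimal toy sketch of the loop described above: an agent that decides on its own, then tweaks its own decision-making, with every action constrained by a goal. All the names (`Agent`, `step_size`, and so on) are illustrative inventions, not anything from a real agent framework.

```python
class Agent:
    """Toy autonomous agent: picks actions without human input,
    and adjusts its own decision-making based on feedback.
    Names and logic are illustrative only."""

    def __init__(self, goal=10.0):
        self.goal = goal        # the constraint: move the total toward this goal
        self.step_size = 1.0    # the "algorithm" the agent tunes for itself
        self.total = 0.0

    def act(self):
        # Decide without human intervention: step toward the goal.
        move = self.step_size if self.total < self.goal else -self.step_size
        self.total += move

    def learn(self):
        # Self-improvement: once within one step of the goal,
        # halve the step size so decisions get finer-grained.
        if abs(self.goal - self.total) < self.step_size:
            self.step_size = max(self.step_size / 2, 0.125)

agent = Agent(goal=10.0)
for _ in range(30):
    agent.act()
    agent.learn()
```

After thirty iterations the agent has walked to the goal and shrunk its own step size along the way – a caricature, but it shows the shape of the loop: act, observe, adjust your own algorithm, repeat, all inside the boundary the goal defines.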

One way of thinking about it – and this is the key to teaching machines how to teach themselves – is that an agent pursues its own interests within certain constraints, just like humans do. We try not to hurt other people or break the law because we know bad things will (usually) happen if we do. Machines need something similar: constraining factors that tell them which sorts of actions are acceptable and which aren’t. And here’s where it gets really interesting… those constraining factors can actually be embodied in another autonomous machine! So yes, you have one machine learning from another on behalf of both their interests. The first machine teaches the second how to stay on friendly terms with it, as well as how to achieve some task more efficiently than either could ever hope to working independently.
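The constraining-machine idea above can be sketched in a few lines: one agent proposes actions, the other decides which are acceptable, and rejection is itself the teaching signal. Again, `Constrainer` and `Learner` are hypothetical names for illustration, not a real API.

```python
class Constrainer:
    """Toy 'constraining factor' embodied as an agent:
    it judges which of another agent's actions are acceptable."""

    def __init__(self, limit=5):
        self.limit = limit  # assumed rule: no action bigger than this

    def allows(self, action):
        return abs(action) <= self.limit

class Learner:
    """Toy learning agent: proposes actions and adapts when rejected."""

    def __init__(self):
        self.magnitude = 100  # starts with a wildly unacceptable action

    def propose(self):
        return self.magnitude

    def adapt(self):
        # Learn from rejection: scale the proposal down and try again.
        self.magnitude //= 2

teacher = Constrainer(limit=5)
student = Learner()
while not teacher.allows(student.propose()):
    student.adapt()
```

The loop ends only when the student’s proposal falls inside the teacher’s bounds (100 → 50 → 25 → 12 → 6 → 3), so the constraint is never a hard-coded rule inside the learner – it lives in the other machine, exactly as the paragraph above suggests.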

Now we’re really cooking with gas. We have machines that can design and build other machines, which in turn can be used to create even better versions of the original (ad infinitum). This is exponential evolution – a process by which new generations of technology improve at an ever-increasing rate. Once machine teaching becomes widespread, it will fuel an acceleration of AI development unlike anything seen before.

The implications are both exhilarating and terrifying. Imagine a future in which AI is not only smarter than us but also knows how to make itself even smarter, faster than we can possibly keep up with. That’s the world of Teaching-Learning Machines (TLMs).