We cannot expect the same algorithm to yield increasing returns on every problem. The growth in complexity (and ⏱️, 💲, 🏭) of symbolic, deep learning, reinforcement learning, and other AI approaches motivates considering other dimensions of intelligence. I draw attention to several broad, overlapping dimensions along the evolutionary scale of artificial intelligence:
- Intrinsic properties: What are the intrinsic properties of a problem? How should information flow from problem to solution, and from inputs to outputs for a single instance of the problem? What priors can we assume?
- Algorithm design: How do we concretely express the ideal information flow? What computational tricks will help us? How do we choose the right parameters and hyperparameters? How are learnable components (hidden states, parameters, architectures) initialized?
- Datasets and environments: What data will we need? Does it need to be preprocessed?
- Training paradigms: How will we impose feedback on the solution space? Supervised (SL), unsupervised (UL), self-supervised (SSL), reinforcement (RL), multi-agent RL (MARL), or multi-paradigm learning? Should the loss be a weighted sum of individual losses, or should optimization target a Pareto front of objectives?
- Evaluation: How do we evaluate the performance of the solution? What quantitative metrics will we use? What qualitative assessments can we reasonably make?
- Infrastructure: What are the hardware and software requirements?
- Human feedback: How will we understand and communicate the system's performance? What should our iteration speed as developers be?
- Existing code: Do we know what has already been done? What should we build ourselves?
- Human capital: What intelligence have others (developers, researchers, other thinkers) already contributed? Could this be a group effort? What motivational forces (free time, interest, project complexity and understandability) should be considered?
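To make the weighted-sum vs. Pareto question above concrete, here is a minimal sketch contrasting the two. The objectives, weights, and candidate scores are illustrative, not from any real system: a weighted sum collapses multiple losses into one scalar (which bakes the trade-off into fixed weights), while a Pareto front keeps every candidate that no other candidate beats on all objectives at once.

```python
def weighted_sum_loss(losses, weights):
    """Collapse multiple loss terms into one scalar via fixed weights."""
    return sum(w * l for w, l in zip(weights, losses))

def dominates(a, b):
    """True if candidate a is at least as good as b on every objective
    and strictly better on at least one (Pareto dominance)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical candidates scored on (task loss, compute cost).
candidates = [(0.2, 5.0), (0.3, 2.0), (0.35, 3.0), (0.4, 1.0)]

# Weighted sum: one winner, but only under these particular weights.
scores = [weighted_sum_loss(c, (1.0, 0.1)) for c in candidates]
print(min(zip(scores, candidates)))

# Pareto front: (0.35, 3.0) drops out (dominated by (0.3, 2.0));
# the rest are incomparable trade-offs.
print(pareto_front(candidates))
```

The design difference matters: a weighted sum commits to one trade-off up front, while the Pareto view defers that choice to whoever evaluates the front.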
I will try to tackle these sprawling points in future posts.