Agent-centric learning: from external reward maximization to internal knowledge curation
By: Hanqi Zhou, Fryderyk Mantiuk, David G. Nagy, and more
Potential Business Impact:
Teaches computers to learn and adapt better.
The pursuit of general intelligence has traditionally centered on external objectives: an agent's control over its environment or its mastery of specific tasks. This external focus, however, can produce specialized agents that lack adaptability. We propose representational empowerment, a new perspective that moves the locus of control inward, toward a truly agent-centric learning paradigm. This objective measures an agent's ability to controllably maintain and diversify its own knowledge structures. We posit that this capacity to shape one's own understanding is a key element of better "preparedness," distinct from direct environmental influence. Focusing on internal representations as the main substrate for computing empowerment offers a new lens through which to design adaptable intelligent systems.
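The abstract does not spell out how representational empowerment is computed, so the following is only a minimal sketch of the classical empowerment idea it builds on: measuring how much an agent's choice of action determines its resulting (here, internal) state, via mutual information. The toy transition table, the discrete representation `z`, the uniform action sampling, and the helper `empirical_mutual_information` are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def empirical_mutual_information(actions, outcomes):
    """Plug-in estimate of I(A; Z) in nats from paired samples of
    discrete actions and discrete outcome states."""
    actions = np.asarray(actions)
    outcomes = np.asarray(outcomes)
    _, a_idx = np.unique(actions, return_inverse=True)
    _, z_idx = np.unique(outcomes, return_inverse=True)
    joint = np.zeros((a_idx.max() + 1, z_idx.max() + 1))
    for i, j in zip(a_idx, z_idx):
        joint[i, j] += 1
    joint /= joint.sum()
    p_a = joint.sum(axis=1, keepdims=True)
    p_z = joint.sum(axis=0, keepdims=True)
    ratio = np.where(joint > 0, joint / (p_a * p_z), 1.0)
    return float(np.sum(joint * np.log(ratio)))

rng = np.random.default_rng(0)
num_actions, num_states, num_samples = 4, 8, 5000

# Hypothetical toy setting: the agent's "knowledge structure" is a
# discrete internal representation z'; each action a induces a
# distribution p(z' | a), here just a random stand-in table.
transition = rng.dirichlet(np.ones(num_states), size=num_actions)

sampled_actions = rng.integers(num_actions, size=num_samples)
sampled_next_z = np.array(
    [rng.choice(num_states, p=transition[a]) for a in sampled_actions]
)

# Empowerment-style quantity: how much the action choice controls the
# resulting internal representation, estimated as I(A; Z').
print("I(A; Z') ≈", empirical_mutual_information(sampled_actions, sampled_next_z))
```

In the classical formulation, empowerment is the channel capacity (maximum over action distributions) of this action-to-outcome channel; the sketch above fixes a uniform action policy for simplicity and swaps the external state for an internal representation, which is the shift in locus the abstract describes.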
Similar Papers
A brief note on learning problem with global perspectives
Machine Learning (Stat)
Helps computers learn from many different information sources.
Empowerment Gain and Causal Model Construction: Children and adults are sensitive to controllability and variability in their causal interventions
Artificial Intelligence
Teaches machines to learn like kids.
Microeconomic Foundations of Multi-Agent Learning
Machine Learning (Stat)
Teaches AI to make fair deals in markets.