This blog continues from ‘GPT3 and AGI: Beyond the dichotomy – part one’.
GPT3 and AGI
Let’s first clarify what AGI should look like. Consider the movie ‘Terminator’. When the Arnold Schwarzenegger character arrives, he is fully functional; to be so, he must be aware of the context. In other words, AGI should be able to operate in any context. Such an entity does not exist today, and GPT-3 is not such an entity either.
GPT-3, however, has the capacity to respond in an ‘AGI-like’ way across a much wider set of contexts than traditional AI.
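To make that ‘wider set of contexts’ concrete, here is a minimal sketch of few-shot prompting, the mechanism by which GPT-3 style models are steered by context alone. The `complete()` function is a placeholder for whichever text-completion API you use, not a real endpoint.

```python
# A minimal sketch: the same model handles different tasks purely
# because the prompt (the context) changes; no task-specific training.
# `complete` is a placeholder, not a real API.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your text-completion API here")

def few_shot_prompt(examples, query):
    """Build a few-shot prompt: worked examples first, then the new query."""
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

# Task 1: translation, defined only by the examples in the context.
translation = few_shot_prompt(
    [("cheese -> French", "fromage"), ("house -> French", "maison")],
    "book -> French",
)

# Task 2: arithmetic, same model, different context.
arithmetic = few_shot_prompt(
    [("2 + 2", "4"), ("7 + 5", "12")],
    "9 + 6",
)
# complete(translation) and complete(arithmetic) would elicit
# task-appropriate behaviour from the same underlying model.
```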
GPT-3 has many things going for it:
- Unsupervised learning is the future
- Linguistic capabilities distinguish humans
- But language is much more than encoding information. At a social level, language involves joint attention to the environment, expectations, and patterns.
- Attention serves as a foundation for social trust
- Hence, AGI needs a linguistic basis, but that needs attention, and attention needs context. So, GPT-3 → linguistic → attention → context could lead to AGI-like behaviour (a code sketch of attention follows this list).
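Since the chain above leans on attention, a minimal NumPy sketch of the standard scaled dot-product attention used in GPT-3 style transformers (Vaswani et al., 2017) may help. This is the textbook formulation, not GPT-3’s exact implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard scaled dot-product attention.

    Q, K, V: arrays of shape (seq_len, d). Each output position is a
    weighted mix of V, with weights set by how well its query matches
    every key -- i.e. by the surrounding context.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)             # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                        # context-weighted values

# Toy usage: 4 tokens with 8-dimensional representations, self-attention.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```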
Does AGI need to be conscious as we know it, or would access consciousness suffice?
In this context, a recent paper, ‘A Roadmap for Artificial General Intelligence: Intelligence, Knowledge, and Consciousness’ by Garrett Mindt and Carlos Montemayor, argues that:
- integrated information in the form of attention suffices for general intelligence,
- AGI must be understood in terms of epistemic agency (epistemic = relating to knowledge or the study of knowledge), and
- epistemic agency necessitates access consciousness.
- access consciousness: acquiring knowledge for action, decision-making, and thought, without necessarily being phenomenally conscious (i.e., without subjective experience).
Therefore, the proposal goes, AGI necessitates:
- selective attention for accessing information relevant to action, decision-making, memory, and thought (a toy sketch follows this list),
- but not necessarily consciousness as we know it.
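Purely as an illustration (and not anything from the Mindt and Montemayor paper), ‘access without phenomenal consciousness’ can be caricatured as a gating step: information is scored for relevance to the current goal and only the selected items become available for action. The relevance function below is invented for the example.

```python
# Toy caricature of "access consciousness as selective attention":
# score percepts by relevance to the current goal; only selected items
# become available for action and decision-making. Nothing here models
# phenomenal experience -- which is the point of the distinction.

from dataclasses import dataclass

@dataclass
class Percept:
    content: str
    features: set

def relevance(percept: Percept, goal_features: set) -> float:
    """Invented relevance score: feature overlap with the current goal."""
    return len(percept.features & goal_features) / max(len(goal_features), 1)

def attend(percepts, goal_features, k=2):
    """Select the k percepts most relevant to the goal (the 'access' step)."""
    ranked = sorted(percepts, key=lambda p: relevance(p, goal_features),
                    reverse=True)
    return ranked[:k]

percepts = [
    Percept("red traffic light", {"road", "signal", "red"}),
    Percept("billboard ad", {"road", "text"}),
    Percept("pedestrian stepping out", {"road", "person", "moving"}),
]
goal = {"road", "signal", "person", "moving"}   # goal: drive safely
for p in attend(percepts, goal):
    print("accessed for action:", p.content)
```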
This line of thinking leads to many questions:
- Is consciousness necessary for AGI?
- If so, should that consciousness be the same as human consciousness?
- Intelligence is typically understood in terms of problem-solving. Problem-solving, by definition, leads to specialized modes of evaluation. Such tests are easy to formulate, but they check for compartmentalized competency (which cannot be called intelligence). They also do not allow intelligence to ‘spill over’ from one domain to another, as it does in human intelligence.
- Intelligence needs information to be processed in a
contextually relevant way.
- Can we use epistemic agency through attention as the
distinctive mark of general intelligence even without
consciousness? (as per Garrett Mindt and Carlos Montemayor)
- In this model, AGI is based on joint attention to preferences in a context-sensitive way.
- Would AI be a peer or a subservient partner in the joint attention relationship?
Finally, let us consider the question of the spillover of intelligence. In my view, that is another characteristic of AGI. It’s not easy to quantify, because current tests are specific to problem types. A recent example of the spillover of intelligence is Facebook AI supposedly inventing its own secret language. The media would have you believe that groups of AGIs are secretly plotting to take over humanity. But the reality is a bit more mundane, as explained below.
The truth behind Facebook AI inventing a new language
In a nutshell, the system used reinforcement learning. Facebook was trying to create a bot that could negotiate. To do this, Facebook let two instances of the bot negotiate with each other and learn from each other. The only measure of their success was how well they transacted objects, and the only rule was to put words on the screen. As long as they were optimizing the goal (negotiating) and understood each other, it did not matter whether the language was accurate (or indeed English). Hence the news about ‘inventing a new language’. But to me, the real question is: does it represent intelligence? Much of future AI could be in that direction.
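To see why an unconstrained message channel drifts away from English, here is a heavily simplified self-play sketch. The environment, vocabulary, and learning rule are toy stand-ins invented for illustration, not Facebook’s actual system: the only reward is a successful transaction, so nothing penalizes non-English signalling.

```python
import random

# Toy stand-in for the Facebook setup: two agents exchange free-form
# token sequences and are rewarded ONLY for 'understanding each other'
# (here: sharing tokens). Nothing rewards grammatical English, so
# drift into a private signalling scheme is unsurprising.

TOKENS = ["i", "want", "ball", "book", "hat", "you", "."]

class Agent:
    def __init__(self):
        # Preference weights over tokens, nudged by reward
        # (a crude bandit-style update, not Facebook's actual algorithm).
        self.weights = {t: 1.0 for t in TOKENS}

    def speak(self, length=4):
        return random.choices(TOKENS,
                              weights=[self.weights[t] for t in TOKENS],
                              k=length)

    def reinforce(self, toks, reward):
        for t in toks:
            self.weights[t] = max(0.1, self.weights[t] + 0.1 * reward)

def deal_reward(msg_a, msg_b):
    """Toy reward: deals succeed when messages share tokens,
    regardless of whether the messages are English."""
    return len(set(msg_a) & set(msg_b)) - 1

a, b = Agent(), Agent()
for _ in range(1000):
    msg_a, msg_b = a.speak(), b.speak()
    r = deal_reward(msg_a, msg_b)
    a.reinforce(msg_a, r)
    b.reinforce(msg_b, r)

print("agent A now says:", " ".join(a.speak()))  # e.g. repetitive drift
```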
We are left with some key questions:
- Does AGI need consciousness as we know it, or only access consciousness?
- What is the role of language in intelligence?
- GPT-3 has reopened the discussion, but hype and dichotomy remain (neither helps: hype misdirects the discussion and dichotomy shuts it down)
- Does the ‘Bitter Lesson’ apply? If so, what are its implications?
- Will AGI see a take-off point, like Google Translate did?
- What is the future of bias reduction, beyond what we see today?
- Can bias reduction improve human insight, and hence improve AI?
- GPT-3 → linguistic → attention → context
- If context is the key, in what other ways can context be included?
- Does problem-solving compartmentalize intelligence?
- Are we comfortable with the ‘spillover’ of intelligence in AI, as in the Facebook experiment?