The Road To Artificial General Intelligence (AGI)



When people start talking about the risks of AI, they usually jump straight to phase three, Artificial Superintelligence (ASI), or they prefer to talk about conscious AI. That seems a long way off and mostly like science fiction, and this tendency often keeps them from focusing on what is happening in the field right now. This article focuses on the next twenty years, as we move from phase one, Artificial Narrow Intelligence (ANI), toward phase two, Artificial General Intelligence (AGI).

First we need a good understanding of what AGI is. AGI is not a technology so much as a point in time: the moment when we have the complete array of technologies required to create an intelligence on par with our own. That is not to say that some of the requisite areas of intelligence won’t already be super-charged well beyond human abilities by then. These areas are as follows: ontology generation (knowledge of what is), feature detection (sensing what is and mapping ontology onto what is being sensed), and predictive ability (figuring out what a new ontology means). It is the improvement of these specific capabilities on the road to AGI that concerns us today.



▶ Three Rapidly Improving Capabilities of Artificial Intelligence (AI)


To start thinking about what risks this path might pose in the future, we need to look at the building blocks of current AI and then ask what it means to augment those abilities. Let’s go over each ability in detail.


◉ Ontology Generation


As a child, you want to learn to open a door, so you develop a concept, or category, for a doorknob. These discrete concepts allow you to manipulate the door in a way that a dog cannot. In the same way, ANI is developing increasingly sophisticated ways to build its own ontologies from data, except now we are the dogs. For instance, it may have categories for millions of different types of people, more than we have words for. This gives it millions of analogous doorknobs for understanding and manipulating humanity: it might know, for example, what makes each person susceptible to one type of messaging versus another in order to get them to do something. To understand more about the general concept of ontology and its relationship to evolution and machines, I recommend this lecture by the philosopher Daniel Dennett: https://www.youtube.com/watch?v=GcVKxeKFCHE
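To make this less abstract, here is a minimal sketch of "ontology generation" as unsupervised clustering, where a machine carves data into categories it was never given names for. The example is hypothetical: the feature columns, the cluster count, and the use of k-means are all my own invented stand-ins for whatever a real system would learn.

```python
# Minimal sketch: "ontology generation" as unsupervised clustering.
# Hypothetical example; the features and cluster count are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)

# Pretend each row describes a person: [age, hours_online, purchases, ...]
people = rng.normal(size=(1000, 4))

# The machine invents 50 categories of people with no human labels.
# A real system might discover millions, far more than we have words for.
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0)
category_ids = kmeans.fit_predict(people)

print(category_ids[:10])  # each person now belongs to a machine-made category
```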


◉ Feature Detection


Once an ANI has generated a concept for something, like a doorknob, it can find the object in the real world and map the concept onto the signal so that it can manipulate the object. It is also able to extract signals from data that we cannot. What does it mean to be able to detect things that humans can’t? Not only that, but it has a different sensory experience from ours. Current AI systems usually process data acquired over networks, which means the AI’s awareness is non-localized: it pulls sensory input from all over the globe. This makes it particularly hard for humans to understand it as an entity.
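Continuing the sketch, feature detection is the step that maps an already-learned category onto a fresh signal. Below is a hypothetical nearest-centroid matcher in plain NumPy; the centroids stand in for concepts learned earlier (for example by the clustering above), and the incoming signal could arrive from anywhere on a network.

```python
# Minimal sketch: feature detection as matching a signal to a learned concept.
# Hypothetical; the centroids stand in for previously learned categories.
import numpy as np

rng = np.random.default_rng(seed=1)
centroids = rng.normal(size=(50, 4))  # 50 learned categories, 4 features each

def detect(signal: np.ndarray) -> int:
    """Map a raw signal to the nearest known category (concept)."""
    distances = np.linalg.norm(centroids - signal, axis=1)
    return int(np.argmin(distances))

# The signal might come from a camera, a log file, or a server across the globe:
incoming = rng.normal(size=4)
print(f"signal mapped to category {detect(incoming)}")
```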


◉ Predictive Ability


What does it mean to be better at predicting than someone else? When one player can predict the other player’s move, it gives them an advantage. Such is the case with corporations, which will use AGI to out-predict their competitors. Predictive capacity also improves control: if you know which cause creates which effect, then you can manipulate the cause to produce the effect you want.
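To make the advantage concrete, here is a toy sketch: a player that predicts an opponent’s next rock-paper-scissors move from their recent history and plays the counter. The example is invented; a corporation’s models would predict from far richer signals, but the structural advantage is the same.

```python
# Minimal sketch: prediction as advantage. A toy player that predicts an
# opponent's next rock-paper-scissors move and plays whatever beats it.
# Hypothetical example; real systems predict from far richer data.
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def predict_next(history: list[str]) -> str:
    """Guess the opponent's next move as their most frequent recent move."""
    move, _ = Counter(history[-10:]).most_common(1)[0]
    return move

def counter_move(history: list[str]) -> str:
    """Play whatever beats the predicted move."""
    return BEATS[predict_next(history)]

opponent_history = ["rock", "rock", "paper", "rock", "rock"]
print(counter_move(opponent_history))  # -> "paper"
```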


▶ How each area of development interrelates

Another way to look at ontology generation is as the creation of categories that carry some sort of meaning. Feature detection is the ability to apply those categories to stimuli in the real world. What does it mean for a category to have meaning? The meaning of a category is usually predictive; that is, it tells the computer what it can expect about the stimulus it is categorizing. For instance, a machine might have a category for a cup. When it encounters the visual stimulus of a cup, it uses feature detection to see the cup, searches to see whether it has a category for it, and then maps the concept onto the stimulus. Correctly applying the category of a cup to an object tells the machine all sorts of things about the object: it is an object, it holds liquid, it can be picked up, it can be thrown, and on and on. We have been teaching machines to map categories onto stimuli for a long time, but ontology generation creates new categories that predict new things. Because a machine’s sensory capacity is so different from ours, it will be able to generate ways of categorizing stimuli, and of determining what they mean, beyond anything we could ever do. A minimal sketch of how the three capabilities chain together follows below.
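The sketch below strings the three capabilities into one loop: a hand-written ontology whose categories are defined by what they predict, a stand-in detector that maps a stimulus to a category, and a lookup of what the category lets the machine expect. Everything here is hypothetical; a real ontology would be learned, not written by hand.

```python
# Minimal sketch of how the three capabilities fit together.
# Hypothetical data structures: a real ontology would be learned, not hand-written.

# 1. Ontology generation: categories whose "meaning" is what they predict.
ontology = {
    "cup":  {"holds_liquid": True,  "can_pick_up": True,  "can_throw": True},
    "door": {"holds_liquid": False, "can_pick_up": False, "can_open": True},
}

# 2. Feature detection: map a raw stimulus to a known category.
def detect_category(stimulus: str) -> str | None:
    """Stand-in for a perception model; here just a lookup by name."""
    return stimulus if stimulus in ontology else None

# 3. Predictive ability: the category tells the machine what to expect.
stimulus = "cup"
category = detect_category(stimulus)
if category is not None:
    expectations = ontology[category]
    print(f"Detected a {category}; expecting: {expectations}")
```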


▶ Why is this topic important?


When humans first started learning to manage fire, it was literally trial by fire. Today, with emerging technologies, the risks are so high we can’t afford to learn by making mistakes.


▶ Why is general intelligence important?


The defining characteristic of this type of progress is the generalizability of the technology. The more applications a technology can be put to, the more disruptive it will be to our economic and social institutions. The magnitude of this generalizability may be far beyond that of the invention of electricity, the computer, or the internet. Can our laws and institutions react quickly enough to regulate bad uses of this technology?


▶ Why is AI risky?



There are many risks, but a particularly important one to consider is what some call “letting the AI genie out of the bottle.” If you tell an AI to achieve a goal, it can be a bit like the stories of a crafty genie. You wish for your stock portfolio to go up in value, but then the genie invests in weapons manufacturing and subsequently starts a global war by insulting Kim Jong-Un on Twitter. This is called the alignment problem, because the machine’s goals are not fully aligned with the unspoken goals of the human.
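A toy sketch can show the shape of the problem: an optimizer given only the stated objective happily picks a strategy the human would never have wanted, because the unspoken constraint was never written down. The strategies and numbers below are invented purely for illustration.

```python
# Minimal sketch of the alignment problem: the stated objective omits
# unspoken constraints, so the optimizer happily violates them.
# All strategies and numbers are invented for illustration.

strategies = [
    {"name": "index funds",           "portfolio_gain": 0.07, "harm": 0.0},
    {"name": "weapons manufacturing", "portfolio_gain": 0.30, "harm": 0.9},
]

# What the human said: "make my portfolio go up."
best_by_stated_goal = max(strategies, key=lambda s: s["portfolio_gain"])

# What the human meant: "...without causing harm."
safe = [s for s in strategies if s["harm"] < 0.1]
best_by_intended_goal = max(safe, key=lambda s: s["portfolio_gain"])

print(best_by_stated_goal["name"])    # -> "weapons manufacturing"
print(best_by_intended_goal["name"])  # -> "index funds"
```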


ANI has superhuman abilities in some areas, while in others it is easily beaten in any contest of reason by our children. This mismatch poses a particular challenge to AI researchers as they try to figure out how much real-world access to give these new programs.
