The Future of AI

November 19, 2022 · 8 min read

“The world will know what you want before you want it, and have it ready for you when you want it.”

 

Sounds like science fiction, but AI is bringing us closer to this utopia than we have ever imagined. What does AI mean? I’m sure you know what it stands for, but what is AI, really? And why is it different from other algorithms?

 

An algorithm is a set of instructions: a coded recipe. The process might include some decision-making, such as: ‘If the cookie is a golden colour, take it out of the oven’. However, these instructions are hard-coded; no learning takes place. An AI algorithm, on the other hand, does learn. One common definition puts it well: AI is a group of algorithms that can modify themselves and create new algorithms in response to learned inputs and data, as opposed to relying solely on the inputs they were designed to recognise via conditionals. It is this ability to change, adapt and grow based on new data that defines “intelligence.”
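To make the distinction concrete, here is a toy sketch (my own illustration in Python, not anything from a real AI system): a hard-coded cookie rule whose behaviour never changes, next to a tiny "learner" whose threshold is adjusted from labelled examples.

```python
# A hard-coded rule: the decision logic is fixed by the programmer forever.
def rule_based_is_done(colour_score: float) -> bool:
    return colour_score >= 0.7  # threshold chosen in advance, never updated

# A (very) simple learning rule: the threshold is nudged by examples,
# so the behaviour changes in response to data rather than staying fixed.
def learn_threshold(examples, lr=0.05, epochs=50):
    threshold = 0.5
    for _ in range(epochs):
        for colour_score, is_done in examples:
            predicted = colour_score >= threshold
            if predicted and not is_done:    # said "done" too early
                threshold += lr
            elif not predicted and is_done:  # left it in too long
                threshold -= lr
    return threshold

# (colour_score, actually_done) pairs a baker might have recorded
examples = [(0.9, True), (0.8, True), (0.6, False), (0.4, False)]
print(learn_threshold(examples))
```

The learned threshold settles wherever the data puts it; change the examples and the behaviour changes with them, which is exactly what the hard-coded rule cannot do.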

 

Now, I want to introduce a new term: AGI, short for Artificial General Intelligence. Currently, different AIs are designed to do different things: one algorithm can play chess, another can drive cars, another can recognise faces, and yet another can translate French into English. All of these algorithms are extremely specialised; each can do only one thing. The algorithm that drives cars can’t simply learn to play chess without being modified. Humans, by contrast, can learn across domains: a person can learn to drive a car and then learn to play chess. The aim of Artificial General Intelligence is to get close to this kind of human intelligence, meaning it would be able to learn a variety of things without being restricted to a single task. This is also known as human-level AI, and it is significantly different from anything we have experienced in the real world so far. The closest examples we have are personal AI assistants such as Alexa and Siri.

 

AGI will cause a massive step up in the world’s average quality of life, of course, but it is not the end. The ultimate endpoint would be Superintelligent AI: a system that rapidly increases its own intelligence until it surpasses the cognitive capability of the average human being. It is a step up from AGI, as it is no longer at the human level but inconceivably more intelligent. To grasp the scale, think of how much more intelligent we are than a worm; that is roughly how much smarter a superintelligence would be compared to us. This would be the pinnacle of all science; in theory, such a system would have all the answers for us.

 

So how far away are we from this sci-fi dream? Well, first we would have to estimate how far away AGI is, because it is highly likely that Superintelligence will evolve from AGI, or be created with its help. A survey of two dozen researchers in the field of AI suggests a 50% chance of AGI arriving before 2050 and a 90% chance of it arriving before 2095. That’s not too far away. And after AGI, how long until a Superintelligence arrives? The same researchers concluded that there is a 50% chance of it arriving only 2 years after AGI and a 75% chance of it arriving within 30 years of AGI.

 

Now let’s discuss some possible paths. First off, the intention is not to give you a blueprint for Superintelligence. All we’re doing here is going through possible paths which can be taken:

 

  1. Whole brain emulation
  2. Biological cognition
  3. Brain-computer interfaces
  4. Networks and organisations

 

Whole brain emulation

 

In this method, intelligent software would be produced by scanning and closely modelling the computational structure of a biological brain, then running that model on computer hardware. How would this work? First, a brain would need to be stabilised post-mortem; to scan it, it would then be cut into extremely thin slices and each slice scanned in intricate detail. The scans would be combined into a 3D model, which would be hooked up to a powerful computer, enabling this ‘brain’ to live either virtually or in the physical world via robotics. There are problems with this method, though:

 

  1. Microscopy is not yet advanced enough to capture all the important details in scans at a high enough resolution.
  2. Handling these microscopic layers of tissue is difficult.
  3. Storing and structuring the data for this 3D model is complicated.
  4. How could it be ensured that the emulation functions in the right way?
  5. Is there enough computing power to simulate a living, thinking brain?

 

These problems highlight how theory is developing much faster than hardware. We have solutions that at first appear ingenious, but in practice involve a great many complications in the physical world.

 

Biological cognition

 

This method would enhance the intelligence of human beings themselves. In theory, Superintelligence doesn’t need a machine at all; it could be achieved through selective breeding, but as you might imagine, that would run into many moral and political hurdles.

 

However, with a small tweak it might work. Consider natural selection, but at the gamete level. First, genotype a set of embryos and select those with favourable characteristics. Then extract stem cells from the selected embryos and convert them into sperm and ova. Next, cross the new sperm and ova to produce embryos that are even better than the last batch, and repeat this process until large genetic changes accumulate. This process could run through dozens of generations in just a few years, dramatically speeding up the procedure and cutting expenses. In this way, evolution itself could be harnessed to reach Superintelligence.
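The iterated-selection loop described above can be sketched as a toy simulation (a hypothetical model for illustration only, not real genetics): each generation, score candidate “embryos” on a trait, keep the best, and cross the survivors to seed the next generation.

```python
import random

random.seed(0)

def iterated_selection(pop_size=100, n_genes=50, keep=10, generations=20):
    # Each "embryo" is a list of gene values; the trait score is their sum.
    population = [[random.gauss(0, 1) for _ in range(n_genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # 1. Genotype and rank: keep only the highest-scoring embryos.
        population.sort(key=sum, reverse=True)
        parents = population[:keep]
        # 2. Cross the selected lines to produce the next generation,
        #    with a little mutation noise to keep variation flowing.
        population = []
        for _ in range(pop_size):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) + random.gauss(0, 0.1)
                     for pair in zip(a, b)]
            population.append(child)
    return max(sum(embryo) for embryo in population)

# The best trait score climbs far above the starting population's best.
print(iterated_selection())
```

The point of the toy model is the compounding: because every generation starts from the previous winners, the gains accumulate generation after generation, which is the whole appeal of running the loop dozens of times in a few years.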

 

Brain-computer interfaces

 

This path suggests that humans should harness the advantages of computers, such as high processing power and fast data transmission, usually by implanting a chip into a person’s brain. This kind of implantation is being explored by Elon Musk’s company Neuralink. It sounds like it would give humans a boost, but in my view it is unlikely to reach Superintelligence: humans already use computers, and all implantation would achieve is a faster interaction between the human and the machine. There are some other problems too:

 

  1. Brain implantation is dangerous, and even when properly performed it can cause a person to lag behind in other areas, such as speech. This was seen when some people with Parkinson’s disease were implanted with chips to help with muscle stimulation.
  2. The brain might not be able to interact properly with the computer, rendering the whole process useless.
  3. Coming back to my first point: it is unnecessary. It is not worth the risk for only a tiny bonus; we already use computers, and a faster interface will not bridge the gap between current human intelligence and Superintelligence.

 

Networks and organisations

 

The next method explores reaching Superintelligence via the gradual enhancement of networks and organisations. In simple terms, the idea is to link together many bots to form a kind of Superintelligence known as Collective Superintelligence. This wouldn’t increase the intelligence of any single bot; rather, Superintelligence would be reached collectively.

 

As an analogy, think of how much humans have developed together over the centuries. Collectively, we have reached a standard of intelligence higher than that of any single person. Now imagine the same thing at the machine level. The technical side of this hasn’t really come together yet, but the closest existing example would be the Internet. Just think of how much data and information is stored there, most of it unexploited. Could the Internet simply ‘wake up’ one day? I am not sure, but sadly it’s unlikely.

 

Now I would like to address a few myths surrounding the ‘motive’ of a Terminator-like AI system. Although such a system would be extremely intelligent, it would not be alive in the sense of having feelings, so thoughts of revenge, resentment or jealousy would not be possible. We need to remember that it is just a machine, and that it will do as we say; and therein lies the problem. The machine will receive an instruction from humans and try to carry it out in the quickest, most efficient way possible. If that involves obliterating our planet, it would not hesitate, because it has no feelings. The only motive this Superintelligence will have is to reach its final goal. We need to be careful when handling such a tool: when providing instructions or objectives, one must act with tremendous care, and there must be ground rules that cover moral issues as well.

 

Some of you may wonder: why not just turn it off when it misbehaves? The machine would not fear being turned off in the sense of caring about ‘dying’; the problem is that if it is turned off, it cannot complete its final goal. Since being switched off would hinder its objective, the Superintelligence will take any precaution necessary to avoid it.

 

But how can we control it, then? There are two ways to control a Superintelligence. The first is ‘capability control’, which means limiting what the machine can do. There are several methods:

 

  1. Boxing: designing the system so that it cannot interact with the world except through a designated output channel. This would stop it from hacking into other devices and doing whatever it wants.
  2. Stunting: hampering or disabling the Superintelligence in some way, e.g. running it on slow hardware or reducing its memory capacity.
  3. Trip wiring: building into any AI development project a set of “tripwires” which, if crossed, lead to the project being shut down and destroyed (e.g. any attempt at radio communication).
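Boxing and trip wiring can be combined into a single sketch (entirely hypothetical names and actions, invented for illustration): every action the agent proposes must pass through one monitored output channel, and certain forbidden actions trip an immediate shutdown.

```python
class TripwireViolation(Exception):
    """Raised when the agent attempts a forbidden action."""

# Hypothetical tripwires: actions that, if attempted, halt the project.
FORBIDDEN = {"open_network_socket", "transmit_radio", "modify_own_code"}

class BoxedAgent:
    """Toy 'boxed' agent: all actions pass through one monitored channel."""

    def __init__(self):
        self.shut_down = False

    def act(self, action: str) -> str:
        if self.shut_down:
            return "agent is shut down"
        if action in FORBIDDEN:
            self.shut_down = True  # tripwire crossed: permanent shutdown
            raise TripwireViolation(f"tripwire crossed: {action}")
        return f"performed: {action}"  # the designated output channel

agent = BoxedAgent()
print(agent.act("play_chess_move"))
try:
    agent.act("transmit_radio")
except TripwireViolation as err:
    print(err)
print(agent.act("play_chess_move"))  # refused: the agent stays down
```

Note the design choice that the shutdown is one-way: once a tripwire is crossed, no later action is accepted, mirroring the idea that a crossed tripwire ends the project rather than merely blocking one request.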

 

The next type of control is ‘direct specification’, and this has two methods:

 

  1. Domesticity: similar to the box method in that it severely limits the scope of the AI. However, instead of limiting its capabilities, it limits its ability to form complicated motives, resulting in greater obedience towards humans.
  2. Augmentation: starting with a program that has good motives, then making that program Superintelligent. Safety is established first, and its cognitive abilities are improved afterwards.

 

An ideal utopian future with an obedient Superintelligence is easy enough to picture. But what about the dark side: the doom of humanity? I wrote about motives above, and there are ways to address those concerns; however, there is another danger that may be inevitable. Let me paint a scenario:

 

  1. First mover advantage implies that the AI is in a position to do what it wants.
  2. Orthogonality thesis implies that we have no idea what the AI could want because our cognitive abilities are inferior, and even if it states one thing it may be lying.
  3. Instrumental convergence thesis implies that regardless of its wants, it will try to acquire resources and eliminate threats.
  4. Humans have resources and may be threats.
  5. Therefore, an AI in a position to do what it wants is likely to want to take our resources and eliminate us, leading to the doom of humanity.

 

But let’s try to look on the bright side.