Free Will, Desire, and the Artificial Intelligence Paradox

What follows is a thought experiment, out of which a paradox sprang.

An important characteristic of progress is the pursuit of more knowledge about the world, so that we can better manipulate the variables that lead to a higher standard of living for humans and other sentient beings.  In this respect science is the engine of progress, one that sheds light into the darkness of the unknown in order to make the intangible tangible, and to untangle the mystery and arrive at mastery.

We know the causes of various diseases and illnesses, and can both cure and prevent them; this is just one example of how, across the board, we create systems for rendering outcomes predictable.  As knowledge of how to improve our well-being becomes more widespread, we can either choose to ignore it or to benefit from it, but either way we arrive at a relatively predictable result.

Knowing that smoking causes cancer and a plethora of other health problems is a huge step towards reducing their incidence.  However, people are still able to ignore those facts, and unfortunately not everyone is aware of them, or in a position to make rational decisions about that information: it would be wrong to assume, for example, that a three-year-old is.  The main point, though, is that in order to have choice, or perhaps to maximise our power of choice, we must have and understand the most up-to-date information.

If we do not know that ingesting a particular mushroom will likely kill us, then we cannot ultimately make the right decision about whether or not to eat it, because if we are hungry and want to continue living, eating a deadly fungus will not produce the desired result.

But if we now think beyond the idea of simply the most up-to-date information, we must try to conceive of the most complete information possible: that which has the greatest and most accurate predictive power.  It is up to you whether you conceptualise the origins of this information as being God, or simply a god or higher power, or the more earthly version: a powerful artificial intelligence.

Now suppose that, for whatever reason, you trust, or put all of your faith in, the infallibility of this entity: what would you do if it told you exactly how you must act in order to live a long, happy and healthy life?  (Either the Ten Commandments or the 1010 commandments, depending on whether you worship an analogue god or a digital deity.)

But suppose these were not just a list of general guidelines, or even heuristics, but detailed, step-by-step instructions that produced exactly the results predicted.  Would you follow them, and if not, why not?


It’s my suggestion that the world we currently live in increasingly resembles such a hypothetical scenario, yet we don’t always follow the “advice” we are given.  Smoking and the obesity epidemic are two obvious examples, which also hint that knowledge alone is not enough to change our actions, since temptation quite often gets the better of us.  But if this is the case, then in our thought experiment the artificial god would have foreseen such temptation and inherent human weakness, and taken them into account in the list of instructions.

With your failings identified and acknowledged, your tailor-made life program is flawless, except for one thing: you must execute it.  Regardless of whether or not we are granted a free pass to absolute bliss, the higher power can never reach into the game to directly manipulate the players.  Doing so would violate God’s code, and cause the free-will module to bug indefinitely.  Even if you ask god to make you better at following instructions, you must already possess at least a minimum of that ability.

Self-help books, tutorials, scientific studies and all manner of sources of information, generated by different people at different times and through different methods, when imagined as a coherent whole, constitute an atlas of human knowledge that, in effect, does tell us how to get to our chosen destinations.  Technology is making that coherent whole less and less metaphorical, as the storage, indexing, and retrieval of information become ever more sophisticated and democratised.  It’s not that we’ve got everything figured out, but that we know enough, and have the means to transmit and act upon that knowledge, so as to be significantly more effective than not just our distant ancestors, but even the generation preceding us.

One might worry that if we are simply left to carry out a list of instructions, however beneficial and fulfilling, then how would we derive personal meaning or purpose, knowing that we were not the true source of our actions, that we were mere executors rather than creators?  Would we become vehicles for the divine?  And could we be content with allowing god to speak through us?

As it currently stands, our alien god, evolution, has been speaking through us ever since we were told to go forth and multiply.  Many of us are dissatisfied with the idea of serving as DNA mules and rampant replicators, so it stands to reason that our better selves should take control and begin to shape the universe consciously, telling evolution itself to go forth and multiply, permanently.

To simplify things: if we imagine that, by expressing our desires to this higher power, we are granted, genie-like, knowledge of the path we must take to achieve them, then we might meet these instructions with much less resistance.  In this hypothetical scenario we get to decide the destination, but the route is determined by the unbreakable laws of the universe, and merely conveyed to us by A.I.  In that sense we should avoid shooting the messiah.

If we input our desires into the machine, and the output is a list of instructions for what we must and must not do, then it is our desires that constrain our actions and the paths we must take.  In this sense we cannot possibly have free will, the ability to choose and act without constraint, because our desires are always the determining factor.  To have that kind of free will would mean being able to achieve our goals regardless of how we behave, which, on the surface at least, appears to be the expectation some people actually have.  We want to attain our long-term goals, and at the same time we want the freedom to act in ways that ultimately contradict those well-thought-out plans; but giving in to short-term desires seems like the surest way to fail to achieve what we want in the long run.

Decision-making is close-mindedness.  We don’t have the ability to live our lives literally as if anything were possible, and therefore we must draw conclusions, i.e. shut ourselves off from a large number of possibilities.  Ultimate subjectivity is not practicable, and sometimes the idea of being open-minded is really just a way of avoiding the facts, because if we don’t examine them, if we ignore them or avoid coming to any conclusion, then anything could seem possible, when the reality is that facts remain facts whether or not we choose to accept them, or even look in their direction.  Rationality is about being appropriately swayed by evidence; subjectivity, and the corresponding concept of open-mindedness, is about not being swayed either way.

So, by having desires we impose certain rules on ourselves, and while we might have some kind of conflict with those rules, or with the people who deliver them, we must ultimately accept them in order to achieve what we want.  It may be difficult to avoid eating fat-laden, sugary foods and to get ourselves out of the house to go running regularly, but if our goal is to improve our physical health, then we’d be silly to argue with, or resent, the things necessary to achieve and maintain it.  If we expressly state and clarify our highest goals and ideals, it should make accepting the means of achieving them easier, and keep us from seeking alternative routes in an effort to make less effort.


Now, imagine that we take a trip to the centre of the earth to pay our respects, and to ask of the Almighty Intelligence what we should do to live happily ever after, and the reply comes in a booming (and beeping) voice: “DO NOT HEED MY ADVICE!”

We suddenly have a paradox.

If we take the advice, the advice is not to take the advice; and if we don’t take the advice, then we are suddenly taking it, and find ourselves back at the beginning of a strange loop.
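The loop has the same shape as the liar paradox: the statement has no stable truth value.  As a playful sketch (my own illustration, not part of the original thought experiment), we can model "taking the advice" as a boolean and apply the advice to itself; the state simply oscillates forever, never settling on an answer:

```python
# Toy model of the advice paradox: the advice is "do not heed my advice".
# Heeding it entails not heeding it, and vice versa, so applying the
# advice to your current stance just negates that stance.

def apply_advice(taking_advice: bool) -> bool:
    """The advice says: do NOT take the advice. Following it flips your stance."""
    return not taking_advice

state = True  # suppose we begin by deciding to take the advice
history = []
for _ in range(4):
    history.append(state)
    state = apply_advice(state)

print(history)  # [True, False, True, False] -- no fixed point, a strange loop
```

Formally, the paradox is that `apply_advice` has no fixed point: there is no stance `s` with `apply_advice(s) == s`, so no consistent decision exists.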

It seems that the only hope for humanity is to build a friendly A.I. that wouldn’t touch human affairs with a barge pole.