# Free Will
###### tags: `homework` `181902`
| Author | Student ID |
| ------------ | ---------- |
| Pang In Free | B06705046 |
> Video source: [Is free will an illusion? What can cognitive science tell us about it?](https://www.youtube.com/watch?v=wGPIzSe5cAU&t=1733s)
> This is the web version
## Introduction
This essay is split into two parts. The first part is a summary of Daniel Dennett's talk about free will and whether it is an illusion. The second part takes Daniel Dennett's points and inspects the implications they have for whether artificial intelligence and computers could or should be considered to possess free will, and what that would mean for our society. One small note before I proceed: Daniel Dennett himself has provided some of his opinions on this matter, in two videos and an article that will be referenced in this essay.
1. [If Brains are Computers, Who Designs the Software? With Daniel Dennett](https://www.youtube.com/watch?v=TTFoJQSd48c)
2. [Daniel Dennett on the Evolution of the Mind, Consciousness and AI](https://www.youtube.com/watch?v=o86W0DgrmRc)
3. [Will AI achieve consciousness? Wrong question](https://www.wired.com/story/will-ai-achieve-consciousness-wrong-question/)
Further sources used:
1. [Artificial intelligence and consciousness](https://pdfs.semanticscholar.org/0958/617423b09e79b03f050cd807c0c7e5859b66.pdf)
In the summary part, I start by giving the reasons Daniel avoids the term `free will` in his talk. I then describe his contrarian view on the matter, which involves introducing the moral agents club and its entry conditions. Some of those conditions are described and evaluated in more detail as well.
In the personal reflection part, I provide some key computer concepts before diving into the topic. I then compare Daniel's thoughts about free will with the notion that computers could have free will, and consider whether the two are compatible.
## Summary
As would be attested by [Betteridge's law of headlines](https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines), Daniel's answer to the title question (Is free will an illusion?) is an enthusiastic no. He deems free will to be as much of an illusion as money and home runs, which is to say, quite real. However, he finds the term `free will` itself unhelpful and would rather recast it in more practical language.
He also establishes that absolute free will, which he maintains is the general public's conception of free will, does not and could not exist. A study in cognitive science ([Soon et al 2008](https://www.researchgate.net/publication/5443390_Unconscious_determinants_of_free_decisions_in_the_human_brain)) shows that certain brain activities influence our decisions before we act. Daniel uses this, along with Dilbert cartoons, to show that we are just "moist robots". He further explains that our cells, and the parts that make up our cells, are themselves just robots. Because of this, it would be over-confident and naive to believe that absolute free will magically exists.
Instead, he proposes that we strive for **practical free will**. To get there, he starts by laying out the **2 key elements of free will** that are widely accepted:
1. Free will is <u>undetermined</u>
2. Free will is <u>required for moral responsibility</u>
He reasons that, since the first element cannot be sustained (as we have just seen, our decisions are determined), we should investigate the second point further: the theme of moral responsibility. Moral responsibility is the prerequisite condition of the moral agents club, and as members, we construct clear and non-arbitrary rules, with penalties corresponding to the various rules.
The **moral agents club**, as traced back to Hobbes, is the contractual construction of societal rules, with humans (or moist robots) as its agents. Hobbes described a world devoid of morality and rules; the agents (in this case, humans) sign contracts with one another for the benefit of society, establishing rules, rewards, responsibilities and penalties. Through signing these contracts, the agents form the moral agents club. Even though the social contracts are created artificially, the constructs they create are as real as money and home runs. The **requirements to enter the moral agents club** are:
1. Members are <u>well informed</u>, not only about the law but also about non-contextual conditions that impact other agents' lives, such as the things that would hurt others' feelings.
2. Members have roughly <u>well ordered desires</u>. For example, they do not want a donut shop employee dead just so they can get free donuts.
3. Members are <u>responsible</u>, more in the sense of being *able* to respond to situations than in the everyday English sense of paying the price for doing some arbitrary action X.
4. Members are <u>punishable</u>. The agents have certain needs and desires that could be taken away from them as punishment for violating the contract, and they would suffer for it.
5. Members <u>have alternatives</u> given the same circumstances. For example, suppose someone can buy either strawberry or chocolate ice cream at the same price, with no preference for one flavour over the other. In that instance, they *could* buy the strawberry one, but they *could also* have bought the chocolate one.
There is one more condition that benefits members of the moral agents club: being discreet and hiding their own state from others. This is done to avoid being manipulated by other agents and losing control over the contract. However, there is no intrinsic need for an agent to be absolutely confidential; they only need to preserve their poker face until the last moment.
In the talk, Daniel places stronger emphasis on the 4^th^ and 5^th^ requirements: being punishable, and that "they could have done otherwise given the same circumstances".
For starters, for an agent to be accepted into the moral agents club, the agent must be punishable in some form. Punishable in this context means that some need or desired object can be taken from the agent, causing frustration, discomfort or (maybe) death. In terms of the law of contracts, if a moist robot has certain needs that can be threatened, then we can make promises with that robot. Since we humans are intentional systems with certain needs and desires that can be taken from us, we can make promises to one another. Law is an artefact that takes advantage of this fact by turning our strongest desires into constraints. These constraints form the basis of civilisation.
The trickier requirement is the last one: that people could have done otherwise given the same circumstances. He forwards the idea that having alternatives is a matter of self control and competence. Through [Austin's putt](http://www.informationphilosopher.com/solutions/philosophers/austin/), he argues that such competence can never be measured under exactly the same circumstances; there is always variability between possible conditions, so we should not bother asking whether we could have taken the alternative given the exact same conditions. With these statements, he provides the escape from determinism.
There are people who should be exempted from the moral agents club, such as people who are morally incompetent or individuals with brain disorders. They should not be included in the contract because they are incapable of meeting the requirements for entering the moral agents club. Without the contract, they may lose all the rights and privileges of being a member, but they also do not face the consequences of violating it.
Punishment should be reserved for those who are capable of entering the moral agents club, and it should inflict minimal suffering while preserving its credibility. Daniel points out that there is a possibility of creeping exculpation, and that there is a grey area around people who are psychopaths. To remedy this, he notes that people are motivated to acquire societal rights and privileges, so most of them would not intentionally leave the contract. Daniel also points to the [Marshmallow experiment](https://en.wikipedia.org/wiki/Stanford_marshmallow_experiment), which gives him hope that people can be educated to be competent enough to join the moral agents club.
## Personal reflection
In Daniel's talk, he claims that the term `free will` is unhelpful, and he would rather focus on the capability of an individual to exhibit moral responsibility as a requirement for having free will. Exhibiting moral responsibility requires being a member of the moral agents club, and the requirements for membership are listed above, so I won't go through them again. However, it is worth discussing whether machines, robots, artificial intelligence, or whatever buzzword is assigned to such artificial "consciousness", fulfil the requirements for joining the moral agents club.
Before I dive in, it would be beneficial to briefly introduce the computer-related jargon used.
- **Strong artificial intelligence** is artificial intelligence that has, or at least mimics, human intelligence.
- An [**artificial neural network**](https://en.wikipedia.org/wiki/Artificial_neural_network) is inspired by the way neurons in the brain connect and learn (a minimal sketch follows this list).
- The **CPU (central processing unit)** is the part of the computer that receives commands and executes computations.
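To make the neural-network jargon slightly more concrete, here is a minimal sketch of a single artificial neuron (a perceptron) in Python. This is my own toy illustration, not something from Daniel's talk: the weights play the role of synapse strengths, and "learning" is just nudging them whenever the output is wrong.

```python
def step(x):
    """Threshold activation: the neuron 'fires' (1) only if its input is positive."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for one artificial neuron from (inputs, label) pairs."""
    w = [0.0, 0.0]  # connection strengths, loosely analogous to synapses
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), label in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = label - out
            # Nudge the weights in the direction that reduces the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron the logical AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), label in data:
    print((x1, x2), "->", step(w[0] * x1 + w[1] * x2 + b), "expected:", label)
```

Run as-is, it learns the AND function within a handful of epochs and prints the neuron's outputs next to the expected labels.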
So, do computers (specifically, ones with artificial general intelligence) have the capability of being members of the moral agents club?
***
==Condition 1: Computers and humans are well informed about the law and each other's needs and desires.==
This condition should be trivially true. Humans understand what computers need, namely energy to power their silicon and certain metals to conduct electricity. Computers, on the other hand, would also know our needs (water, food, temperature etc.) and desires (love, fame etc.). They could even learn our specific desires (say, that a person loves chocolate) from the various pieces of information we share on the internet.
==Condition 2: Computers have well ordered desires==
For now, it is unclear whether computers themselves have any desires. Some people have proposed that their ideal situation would be to control all the resources we have, driving humanity to extinction. Or maybe they could understand morality, be in harmony with us, and become our compatriots.
In the present situation, all artificial intelligence is weak intelligence, and its desires/goals are predetermined by its human creators. Therefore we could, in theory, program them to be our equals (or subordinates). In practice, though, they may someday learn to program themselves and destroy such mechanisms.
In general, we could say that, for the time being, computers and artificial intelligence could share the same well ordered desires that we have.
==Condition 3: Computers are responsible==
This condition should also be trivially true. We respond much as computers do. We humans take in certain inputs and information, such as the speech someone is giving or the facial expressions on display. We relay that information to the brain, and through the coordination of various neurotransmitters, we produce a response through our body/mind.
The same could be said of computers: sensors would be their sensory organs, the CPU would be the brain, and so forth. The argument could be made that (at least for now) all the responses of an artificial intelligence are only an illusion; that they are not genuinely responding to the situation at hand but merely executing commands from a database, as exemplified in the [Chinese room thought experiment](https://www.iep.utm.edu/chineser/).
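As a toy illustration of "executing commands from a database" (my own sketch in Python, with a made-up reply table, not anything from the talk), the responder below produces sensible replies while understanding nothing, much like the occupant of the Chinese room:

```python
# A toy "Chinese room": replies come from a rulebook, not from understanding.
RULEBOOK = {
    "hello": "Hi there!",
    "how are you?": "I am fine, thank you.",
    "goodbye": "See you later!",
}

def respond(message):
    """Look up a canned reply; symbols are manipulated but never understood."""
    return RULEBOOK.get(message.lower(), "I do not understand.")

print(respond("Hello"))         # Hi there!
print(respond("How are you?"))  # I am fine, thank you.
```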
However, Daniel's point about responsibility is that agents are *capable* of responding, not that they genuinely understand what they are responding to. Even humans sometimes respond with precision to situations they are not entirely well informed about. Take the Pythagorean theorem $a^2 = b^2 + c^2$. A person who does not understand why the theorem works can still plug the required $b$ and $c$ into the equation and get $a$; if we consider the Chinese room not genuine, their response is not genuine either.
This shows that humans can be just as "non-genuine" as computers might be. So if we consider humans to have genuine responses, then we should also consider computers capable of genuine responses.
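To push the analogy further, here is what that person's rote procedure amounts to, as a few lines of Python (again my own sketch, not from the talk): the formula is applied mechanically and the correct answer comes out, with no grasp of why the theorem holds.

```python
import math

def hypotenuse(b, c):
    """Mechanically apply a^2 = b^2 + c^2; no geometry is understood here."""
    return math.sqrt(b ** 2 + c ** 2)

# A correct answer, produced exactly the way the Chinese room produces Chinese.
print(hypotenuse(3, 4))  # 5.0
```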
==Condition 4: Computers are punishable==
In my opinion, computers are not punishable.
Punishment is something that takes away or restricts our needs and desires. For example, if we are thrown into jail, our desire for freedom of movement is restricted; if we are sentenced to capital punishment, our need and desire to survive is voided.
But computers cannot be restricted in such ways. Computers do not have an innate desire to move around, and they cannot die: once the software is written and the knowledge of creating artificial general intelligence is discovered, there is no way to "undiscover" it. A dismantled computer's data could one day simply be copied over to another machine. So there is no death for computers, at least in the broader sense.
Even if we had a kill switch, an arbitrary way to shut down all systems of the artificial general intelligence, that would just render the computer a weak intelligence.
==Condition 5: Computers would have alternatives given the same situation==
Currently, I could make the argument that computers could have done otherwise.
Normally, randomness in computers is just numbers artificially generated by a [function](https://www.atarimagazines.com/compute/issue72/random_numbers.php). Like any function, it is predictable and produces only one response given the same input. But as Austin's putt shows, we could never know otherwise: there is always the possibility that the circuit malfunctioned that day, and so on.
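To see how deterministic this "randomness" is, here is a sketch of my own using the classic linear congruential method (the constants are the well-known Numerical Recipes ones; this is not the exact function from the linked article). Given the same seed, that is, the same circumstances, the "random" choice never differs:

```python
def lcg(seed):
    """A linear congruential generator: looks random, is entirely determined by the seed."""
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2 ** 32
        yield state

def pick_flavour(seed):
    """'Randomly' choose an ice cream flavour, the way a computer would."""
    rng = lcg(seed)
    return ["strawberry", "chocolate"][next(rng) % 2]

# The same seed (the same circumstances) always yields the same choice.
print(pick_flavour(42))
print(pick_flavour(42))  # identical to the line above, every time
```

Only outside variability, a failing circuit or a stray cosmic ray, the kind of thing Austin's putt points to, could make the outcome differ.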
***
Because computers fail the 4^th^ condition for membership of the moral agents club, I cannot conclude that they have the capacity for free will; theirs might just be an illusion. Daniel Dennett has proposed a similar view: he has claimed (in other talks) that computers/artificial intelligence should always be regarded as tools, not compatriots. As tools, they could never have free will.