# From Computability to Computer Architecture (under construction ... skip this ... to do this properly will require its own course ... the most interesting parts so far are the links to outside references)

([home](https://github.com/alexhkurz/introduction-to-programming/blob/master/README.md) ... [previous](https://hackmd.io/@alexhkurz/H1CyS5v08) ... [next](https://hackmd.io/@alexhkurz/HyccPGbJv))

In this session we want to take a glimpse at what happens "under the hood" when programs run on hardware.

## What is computation?

For a full understanding we would have to work carefully through many levels of abstraction. For example:

### The physical level

At the physical level one needs Maxwell's equations, quantum mechanics and solid state physics. Fortunately, engineers have been able to build a layer of abstraction on top of the physical level, on which we manipulate discrete bits of information. Technically, a [bit](https://en.wikipedia.org/wiki/Bit) is a unit of information which can take two values, often called `0` and `1`, or `False` and `True`, or `No` and `Yes`, or `Low` and `High`, etc. An easy way of implementing a bit in an electric circuit is via a switch (`Off` and `On`), and we can make bits visible by putting a lamp in the circuit.

At this point it would be great to make a digression and implement a simple adder using simple electronic hardware ... maybe I can add this in the future ... Building a simple adder from off-the-shelf electrical components shows how hardware works in principle (a small software sketch of an adder follows at the end of this section).

One important principle is that we need switches that are operated not by hand but automatically, by the output of other switches. Such a switch is called a [relay](http://www.glolab.com/relays/relays.html). As Zuse proved with the [Z3](https://en.wikipedia.org/wiki/Z3_(computer)) in 1941, it is possible to build a machine essentially equivalent to a modern computer based on relays. It operated at a clock frequency of about 4–5 Hz.

More successful than the relay as a basic building block was the [vacuum tube](https://en.wikipedia.org/wiki/Vacuum_tube), see the [list of vacuum tube computers](https://en.wikipedia.org/wiki/List_of_vacuum_tube_computers). But still, computers were big and clumsy.

A breakthrough was the invention of the [transistor](https://en.wikipedia.org/wiki/Transistor). Like a vacuum tube, it can be used both as a switch and as an amplifier (as in the transistor radio), but transistors are much smaller and cheaper. Another breakthrough was to assemble many transistors on one [integrated circuit](https://en.wikipedia.org/wiki/Integrated_circuit), aka chip. Until recently we could rely on [Moore's law](https://en.wikipedia.org/wiki/Moore%27s_law), which predicted that the number of switches one can integrate on a given area doubles every two years.

It is quite possible that the next big advance in hardware technology will come from the area of [quantum computing](https://en.wikipedia.org/wiki/Quantum_computing).

This was a very quick review of the history of hardware on the physical level ... let us return to computer architecture, which is about how to design fast computers from physical switches.

- Where do Babbage and Lovelace fit in?
- Who was the first to understand that computers can be implemented with logical circuits?
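The hardware digression is still missing, but a software stand-in is easy to give. The following is a minimal sketch (in Python, chosen purely for illustration; the function names are made up) of a half adder and a full adder built from AND, OR and XOR, the logical operations that a circuit of relays or transistors implements with switches.

```python
# Bits are modelled as 0 or 1; the gates are the operations a switch circuit implements.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """Add two bits; return (sum, carry)."""
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    """Add two bits plus an incoming carry; return (sum, carry_out)."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

def ripple_add(xs, ys):
    """Add two equal-length bit lists, least significant bit first."""
    carry, result = 0, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 3 + 5 = 8, with bits listed least significant first.
print(ripple_add([1, 1, 0], [1, 0, 1]))   # [0, 0, 0, 1]
```

Chaining full adders in this way gives a ripple-carry adder; real hardware typically uses faster carry schemes, but the principle is the same.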
### Turing machines

On the abstraction level of bits, modern hardware is roughly similar to Turing's abstract mathematical machine from 1936, nowadays called a Turing machine. Turing machines share many features with modern computers:

- Program and data are in the memory.
- Programs can modify themselves.
- The memory is extensible. There is no mathematical limit on the size of the memory. It can, therefore, often be modelled appropriately as infinite.
- The control that executes the program by modifying the memory consists of a finite number of rules fixed in advance once and for all.
- Even though the control is "hardwired", computers are universal because the programs in memory can be changed.

Turing wrote pen-and-paper programs for his machines. Nowadays one can simulate them on a computer, of course. But Turing machines are primitive, slow, "low-level" and generally painful to program and execute. So why did Turing invent them?

### Computable functions

Turing defined his machines in order to define mathematically what is meant by *computable*. A function is computable, by definition, if it can be computed on a Turing machine.

### Church-Turing thesis

Of course, other mathematicians came up with other notions of computability. Most famously, Goedel and Church had their own formalisms even before Turing. But these formalisms did not capture in an obvious way the intuitive notion of humans computing with pen and paper. (Did you know that in Turing's time the word "computer" referred to a person computing with pen and paper?) Turing proved in his famous paper that these three definitions of computation are equivalent. Of course, since then many people have come up with new definitions. All of these definitions turned out to be equivalent. The Church-Turing thesis captures this idea: according to the Church-Turing thesis, all functions that are computable in any meaningful sense of "computable" are computable on a Turing machine.

### von Neumann Architecture

John von Neumann knew Turing's work very well. In fact, he offered Turing a position to stay with him in Princeton, but Turing preferred to return to England. During the war von Neumann was involved in the building of the atomic bomb in Los Alamos and used an early vacuum tube computer, the [ENIAC](https://en.wikipedia.org/wiki/ENIAC), for computations in this project. (Along the way he pioneered, with Ulam, the so-called Monte Carlo method, which we have used to get an estimate of $\pi$.) After the war von Neumann consulted on the successor of the ENIAC, the [EDVAC](https://en.wikipedia.org/wiki/EDVAC), and wrote the influential report [First Draft of a Report on the EDVAC](https://fa82ee93-a-62cb3a1a-s-sites.googlegroups.com/site/michaeldgodfrey/vonneumann/vnedvac.pdf), which describes for the first time the outlines of the modern architecture of a computer, also sometimes called a stored-program computer.

I think it is interesting that one of the novelties attributed to John von Neumann is something he already knew from Turing, namely that a computer can be a universal computation device if the program is itself part of the data (the stored-program computer). On the other hand, the fact that von Neumann got more credit than the others on the EDVAC team is sometimes quoted as an example of [Stigler's law](https://en.wikipedia.org/wiki/Stigler%27s_law_of_eponymy). [^Stigler]

There is one step missing in our review and that is [RAM](https://en.wikipedia.org/wiki/Random-access_memory), or random access memory. It is not clear to me how much of the idea of RAM was around at the time of the EDVAC and I haven't had time so far to read more on this ...
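As mentioned in the Turing machines section, one can simulate a Turing machine on a computer. Here is a minimal sketch of such a simulator (again in Python, purely for illustration); the example machine, its tape alphabet and its rules are made up for the occasion: it flips every bit on its tape and then halts.

```python
# Minimal Turing machine simulator: a finite table of rules modifies an
# (unbounded) tape of symbols under a read/write head.

from collections import defaultdict

def run(rules, tape, state="start", steps=1000):
    """rules maps (state, symbol) to (new_state, new_symbol, move), move in {-1, +1}."""
    cells = defaultdict(lambda: "_", enumerate(tape))   # "_" is the blank symbol
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        state, cells[head], move = rules[(state, cells[head])]
        head += move
    return "".join(cells[i] for i in sorted(cells))

flip = {  # invert the bits of a binary word, then halt at the first blank
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt",  "_", +1),
}

print(run(flip, "1011"))   # prints 0100_
```

The rules table plays the role of the "hardwired" control fixed in advance, while the tape plays the role of the extensible memory.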
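The stored-program idea itself can also be sketched in a few lines. The toy machine below is not the EDVAC design; its tiny instruction set is invented for illustration. The point is only that program and data live in the same memory and that the control is a fixed fetch-decode-execute loop.

```python
# Toy stored-program machine: instructions and data share one memory list,
# and the control is a fixed fetch-decode-execute loop.

def run(memory):
    acc, pc = 0, 0                       # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]
        pc += 2
        if op == "LOAD":                 # acc := memory[arg]
            acc = memory[arg]
        elif op == "ADD":                # acc := acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":              # memory[arg] := acc
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Addresses 0-7 hold the program (HALT's argument is unused),
# addresses 8-9 hold the data 2 and 3.
memory = ["LOAD", 8, "ADD", 9, "STORE", 9, "HALT", 0, 2, 3]
print(run(memory))   # ['LOAD', 8, 'ADD', 9, 'STORE', 9, 'HALT', 0, 2, 5]
```

Because instructions are just memory cells, a program stored this way can in principle overwrite its own instructions, which is the self-modification mentioned in the list of Turing machine features above.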
## Further Reading

There are some excellent books on this fascinating topic. In fact, the books below would easily make it into my top ten list of non-fiction books. A credible claim to being the best popular science book of all time has

- Hofstadter: [Goedel, Escher, Bach](https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach)

Hofstadter weaves together the paintings of Escher and the music of Bach with an introduction to logic and computer science based on the groundbreaking work of the logician Goedel. Apart from being brilliant entertainment for readers who like to think for themselves, the book is also a great introduction to many of the fundamental themes of computer science. [^GEB] And, if you make it to the end, you will have a complete proof of Goedel's famous incompleteness theorem without having had to read any frozen mathematics.

Another amazing book is

- Petzold: [The Annotated Turing](https://en.wikipedia.org/wiki/The_Annotated_Turing)

The author takes us line by line through Turing's famous paper, starting with anecdotes about the Greek mathematician Diophantus ...

von Neumann lived in interesting times and, apart from being one of the most influential mathematicians, physicists, computer scientists and economists of the 20th century, he was also a truly remarkable character. He might well be the scientist with the largest number of funny anecdotes to his name (with the exception of Feynman) and many of them can be found in

- Ulam: [Adventures of a Mathematician](https://archive.org/details/adventuresofmath0000ulam/mode/2up).

Ulam was himself one of the great mathematicians of the 20th century and a close personal friend of von Neumann. While this book is officially an autobiography, I remember it as containing more about von Neumann than about Ulam himself.

[^GEB]: What information is in a code and what information is in the user of the code? If we sent our DNA out in a spaceship and it was discovered by an alien intelligence, what could they learn about us from it? What, really, is the difference between a for-loop and a while-loop? What are the limits of computability? Etc, etc.

## References

A random selection of references I took a look at:

- Wikipedia: [Relay](https://en.wikipedia.org/wiki/Relay)
- [Vacuum Tubes](https://www.engineering.com/ElectronicsDesign/ElectronicsDesignArticles/ArticleID/16337/Vacuum-Tubes-The-World-Before-Transistors.aspx)
- [The cool sound of tubes](https://spectrum.ieee.org/consumer-electronics/audiovideo/the-cool-sound-of-tubes)
- [Computer History 101: The Development Of The PC](https://www.tomshardware.com/reviews/upgrade-repair-pc,3000.html)
- [The History of the ENIAC Computer](https://www.thoughtco.com/history-of-the-eniac-computer-1991601)
- [The Vacuum Tube in Computer History](https://www.mapcon.com/us-en/the-vacuum-tube-in-computer-history)

[^Stigler]: From my own experience in research, I believe that the reason for not giving proper credit in science is sometimes that we are mostly interested in advancing science and not so much in understanding the history of science. So we put a lot of attention into what we can create and can be lazy and careless when it comes to the past. I wish there were more of a culture that takes science and the history of science together, even in teaching and education.