# Option C IB Computer Science

## General advice

In this option you have a lot of questions and little time. You need to _nail down_ the concepts using _correct writing_ and the proper keywords. Don't use placeholder words like "stuff". Write full sentences. I recommend writing at least __1 full sentence per mark of the question__, and I suggest even 2 per mark.

## The content itself

For the first part I'm relying on this link: https://ibcomputerscience.xyz/option-c-web-science/

## General lexicon

* **handshake**. In real life people shake hands before the actual communication, like this:

![imagen](https://hackmd.io/_uploads/HkS2gGzXR.png)

Then you do the formal message (the interview, the deal you are trying to make, or something else). Computers do the same: before the _actual_ messaging they usually have to perform (depending on the **protocol**) several steps where one computer says something like "hey, I want to connect to you, is it ok?" and expects "yeah, connect to me" in reply. This process is called a **handshake**.

## C.1 Creating the web

### C.1.1 Distinguish between the Internet and World Wide Web

The Internet is the network (hardware) of computers, and the World Wide Web (the three w's in www.website.com) is the set of resources (data, software) available on that network using the protocol HTTP (HyperText Transfer Protocol). There is other information on the Internet that is not part of the WWW, such as mail or voice over IP, because it uses other protocols.

They usually ask this: distinguish between the Internet and the World Wide Web.

### C.1.2 Describe how the web is constantly evolving

They don't usually ask this directly, but you need to know that protocols come and go, websites blossom and fade, and the way we, the users, use the web changes over time. One way to describe this is with the concepts of, and differences between, Web 1.0, Web 2.0 and Web 3.0.

#### Web 1.0

Web 1.0 is the _old_ web. The _really old_ web.
One example is the preserved version of the Space Jam movie website:

![imagen](https://hackmd.io/_uploads/SkFfVzMmA.png)

https://www.spacejam.com/1996/

The main characteristic of this era of the web is that it was _hard_ for a non-specialized user to interact with it. The web only served information but didn't take information in (for better or worse).

#### Web 2.0

Between the years 2000 and 2008 something happened. Several technologies added up so that websites could be more interactive with their users. Now it was easy to post something in a forum, or to have sites where a user without Computer Science knowledge could upload things. Pages like YouTube, Twitter and Blogger (for making blogs) were born.

#### Web 3.0

Usually called the semantic web. Others call it the crypto web. It is not clearly defined yet. For the purposes of the exams I recommend sticking to Web 1.0 and 2.0.

### C.1.3 Characteristics of protocols

#### HTTP

HTTP, the element that goes before any URL in your browser, means "HyperText Transfer Protocol".

**What the hell is a hypertext?**

A hypertext is any kind of text (bunch of words) that has **links** to other texts or to itself. A physical example of a hypertext is the ["Choose Your Own Adventure"](https://en.wikipedia.org/wiki/Choose_Your_Own_Adventure) series of books:

![imagen](https://hackmd.io/_uploads/H1v4figXR.png)

In this case the links are the points where you have to decide which page you want to go to next:

![imagen](https://hackmd.io/_uploads/HJgaMse7R.png)

There are also texts that refer to other texts, as in academia. In that case the links are the bibliography, where you have the information needed to look for other books or papers.

Now that we know what a hypertext is, the other two words (transfer protocol) are easier.
We want to send those hypertexts from one computer to another (so there is a _transfer_) and we want to do it following certain rules (that's a _protocol_).

#### Characteristics

* **Application layer protocol** ([following the OSI model](https://en.wikipedia.org/wiki/OSI_model)) from the Internet protocol suite, used to transfer and exchange hypermedia.
* **Request-response protocol based on the client-server model.** This means the protocol works on the assumption that there is a computer that sends requests (the client) and another that responds to those requests (the server).
* There are different methods for requesting (asking) things from a server. You can get data or you can post data using the same URL.

#### HTTPS

It's built on top of HTTP; it is an extension of it that adds the "secure" part. Yeah, that's the meaning of the S.

* Adds an additional security layer of SSL or TLS
* Ensures authentication of the website by using digital certificates
* Ensures integrity and confidentiality through encryption of the communication

:::info
When you have the "secure" of HTTPS it means the connection to the place you're connecting to is secured. But whether you should be connecting to www.notmybanktotallyscammeplease.co.uk at all is outside the scope of HTTPS.
:::

:::info
Video about HTTP versions (1, 1.1, 2, 3): https://www.youtube.com/watch?v=UMwQjFzTQXw {%youtube UMwQjFzTQXw %}

My take as a teacher: this video is interesting because it explains (more or less) what TCP and UDP connections are, but I think QUIC is not yet _that_ important. Also, HTTP/2 and HTTP/3 were implemented after 2014, i.e. after the first examinations of this Computer Science syllabus.
:::

##### Process of SSL and TLS

([resource](https://www.entrust.com/resources/learn/how-does-ssl-work#:~:text=A%20browser%20or%20server%20attempts,it%20trusts%20the%20SSL%20certificate))

1) A browser or server attempts to connect to a website (i.e. a web server) secured with SSL. The browser/server requests that the web server identify itself.
2) The web server sends the browser/server a copy of its SSL certificate.
3) The browser/server checks whether or not it trusts the SSL certificate. If so, it sends a message to the web server.
4) The web server sends back a digitally signed acknowledgement to start an SSL-encrypted session.
5) Encrypted data is shared between the browser/server and the web server.

You need to be able to describe the process of SSL and TLS.

Version with images:

![imagen](https://hackmd.io/_uploads/H1uCMfGQ0.png)

![imagen](https://hackmd.io/_uploads/Hkv1QGfXR.png)

![imagen](https://hackmd.io/_uploads/S1XlXMzXR.png)

Example question: Explain how website certificates are used to authenticate a client's browser through secure protocol communication such as HTTPS.

Answer: You can get 3 marks for the explanation of the process (see the description above) and 1 mark for saying what website certificates are used for. A website SSL/TLS certificate is used to initialize a secure session between clients and the server.

#### HTML

[Go to here](https://hackmd.io/8g9EfWtdSRGYnvkY41fS9A?both#HTML1)

#### URL

Uniform Resource Locator

* Unique string that identifies a web resource

More on URLs here: https://www.geeksforgeeks.org/components-of-a-url/

![imagen](https://hackmd.io/_uploads/SJ9d8MGmR.png)

## C2

Raw content from M.L.

:::spoiler C2 Searching Web

**Search engine**: A program accessed via a browser that allows users to search for information on the web, returning a list of results known as the search engine results page (SERP).

**Surface Web vs. Deep Web**:

* **Surface Web**: The accessible part of the web, indexed by search engines. Characteristics include static and fixed pages, reachability through links from other surface websites, and no special access configuration required.
* **Deep Web**: The part of the web that is not indexed by standard search engines, for reasons such as proprietary content requiring authentication, paywalls, personal information protection, and dynamically generated content.
It is substantially larger than the surface web.

**Search algorithms**:

* **PageRank algorithm**: Determines the importance of a website by counting the number and quality of links. Factors influencing PageRank include the age of the page, keyword frequency, and the PageRank of linking pages. The algorithm assumes that more important websites receive more links from other sites.
* **HITS algorithm**: Differentiates between two types of pages: authorities, which contain valuable information relevant to the search query, and hubs, which link to authorities. It is based on mathematical graph theory, representing pages as vertices and links as edges.

Characteristics of the PageRank and HITS algorithms:

* PageRank focuses on link quality and quantity to estimate a page's importance, with the rank determining the order in which pages appear in search results.
* HITS identifies authorities and hubs to evaluate relevance, not solely based on keyword matches but also on the structure of the web.

These notes encapsulate the essential definitions and distinctions between the surface and deep web, along with the principles behind the PageRank and HITS algorithms, highlighting their approach to ranking and finding relevant web pages.

**Web crawlers**

Definition:

* Automated programs that systematically browse the web to index website content.

Functionality:

* Downloads and indexes web pages.
* Extracts links from pages and follows them to index more content.

Limitations:

* May struggle with dynamically generated content.
* Often relies on metadata, which might not fully represent page content.

**Robots.txt**:

* A file used to specify which parts of a site should not be crawled.
* Placed in the site's root directory.
* Adheres to the Robots Exclusion Protocol.
* Can target specific or all crawlers.
* Not all bots respect this file, particularly malicious ones.

Purpose of robots.txt:

* Manages crawler access to conserve bandwidth.
* Prioritizes indexing of important content by excluding less relevant sections.

:::

## C3 Distributed approaches to the web

Raw content from C.V.

:::spoiler Option C C3

**.1**

**Grid computing**: a connected system of computing and communication nodes that provides high-performance computing/storage resources. It dynamically pools IT resources together based on need, which prevents the underutilization of resources.

Lexicon:

* **Interoperability**: The ability of software to operate within completely different environments. For example, a computer network might include both PCs and Macintosh computers.
* **Open standards**: Unlike proprietary standards, which can belong exclusively to a single entity, anyone can adopt and use an open standard. Easier to integrate.
* **Control node**: At least one computer which handles all the administrative duties for the system.
* **Cluster**: A group of networked computers sharing the same resources.

**Ubiquitous computing**: popular in science fiction, it refers to the presence of computers everywhere, in everything (invisible computing).

* Advantages: improved communication, health and way of living.
* Disadvantages: overreliance on objects which can be hacked in order to breach our privacy.

**.2**

**Mobile computing**: allows communication and data transmission without a physical link between devices.

* Characteristics: portable (phones), socially interactive (communication), context-sensitive, connective, individual (through the use of cookies and algorithms).
* Advantages: increase in productivity, entertainment, cloud computing (storage elsewhere) and portability (e.g. you can access your email from any device).
* Disadvantages: quality of the connectivity may vary, security concerns, power consumption due to batteries.

**Peer-to-peer computing**: PCs are connected directly to each other rather than to a common server; it is decentralized.
* If one PC fails, the network does not shut down; however, it is impossible to regain data lost from that one PC.
* Each peer (PC) acts as both client and server.
* Resources and content are shared across all peers, and shared faster than client <-> server.
* Has to be done by software.
* Malware can be distributed faster.

**.3 Interoperability vs open standards**

**Interoperability**: the ability of two or more systems or components to exchange information and to use the information that has been exchanged. In order for systems to be able to communicate they need to agree on how to proceed, and for this reason standards are necessary.

**Open standards** are standards that follow certain open principles. Definitions vary, but the most common principles are:

* public availability
* collaborative development, usually through some organization such as the World Wide Web Consortium (W3C) or the IEEE
* royalty-free
* voluntary adoption

Tim Berners-Lee: "the decision to make the Web an open system was necessary for it to be universal. You can't propose that something be a universal space and at the same time keep control of it."

Some examples of open standards include: file formats (HTML, PNG, SVG), protocols (IP, TCP) and programming languages (JavaScript).

**.4**

For each approach to distributed systems, more specific types of hardware could be used:

* Mobile computing: wearables, smartphones, tablets, laptops
* Ubiquitous computing: embedded devices, IoT devices, mobile computing devices, networking devices
* Peer-to-peer computing: usually PCs, but can include dedicated servers for coordination
* Grid computing: PCs and servers

Content delivery networks (CDNs) are systems of distributed servers. They can cache content and speed up the delivery of content.

Blockchain technology (e.g.
Bitcoin) is decentralized and based on multiple peers, which can be PCs but also server farms.

Botnets can probably be considered a form of distributed computing as well, consisting of hacked devices such as routers or PCs.

**.5**

Distributed systems consist of many different nodes that interact with each other.

Advantages:

* higher fault tolerance
* stability
* scalability
* privacy
* data portability is more likely
* independence from large corporations such as Facebook, Google, Apple or Microsoft
* potential for high-performance systems

Disadvantages:

* more difficult to maintain
* harder to develop and implement
* increased need for security

**.6**

**Lossless compression**: a form of data compression that reduces file sizes without sacrificing any significant information in the process (e.g. it won't diminish the quality of your photos).

**Lossy compression**: typically used when a file can afford to lose some data, and/or if storage space needs to be drastically freed up. The algorithm scans image files and reduces their size by discarding information considered less important or undetectable to the human eye.

**.7**

Data compression basically reduces the number of bits a file/image takes up. The goal is to affect quality as little as possible. Anything that can be stored as bits can be compressed. Why do we need compression? Even though storage capacity has gotten bigger compared to twenty years ago, it still is not big enough, especially considering that nowadays portability of a device is highly valued, which means making computational devices smaller and smaller, therefore shrinking their capacity for storage. Even videos can be compressed.

**Text compression: Huffman**

Huffman coding is a lossless data compression algorithm. The idea is to assign variable-length codes to input characters, where the lengths of the assigned codes are based on the frequencies of the corresponding characters. The most frequent character gets the smallest code and the least frequent character gets the largest code.
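The frequency-based idea can be sketched in a few lines of code. This is my own minimal Python illustration (you don't need to write this in the exam, and `huffman_codes` is a made-up helper name): it repeatedly merges the two least frequent groups of characters, prepending a `0` to one group's codes and a `1` to the other's.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Sketch of Huffman coding: return a {character: bit-string} code table."""
    freq = Counter(text)
    # Heap entries: (frequency, tie-breaker, partial code table for that subtree)
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # edge case: only one distinct character
        return {ch: "0" for ch in heap[0][2]}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # two least frequent subtrees...
        f2, _, t2 = heapq.heappop(heap)
        # ...are merged: one side gets "0" prepended, the other "1"
        merged = {ch: "0" + code for ch, code in t1.items()}
        merged.update({ch: "1" + code for ch, code in t2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

For example, for the text `"aaaabbc"` the most frequent character `a` gets a 1-bit code while `b` and `c` each get 2-bit codes.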
The variable-length codes assigned to input characters are prefix codes, meaning the codes (bit sequences) are assigned in such a way that the code assigned to one character is never a prefix of the code assigned to any other character. This is how Huffman coding makes sure there is no ambiguity when decoding the generated bit stream.

:::

## C4

/TO-DO

## C5 (HL extension)

/TO-DO

## C6 (HL extension)

/TO-DO

## Code to understand

In Option C you don't need to write code, but you need to understand code and what it does. Here are some example languages of which you should understand at least the bare minimum.

### HTML

Reference: https://www.w3schools.com/html/default.asp

#### Characteristics

HTML stands for **h**yper**t**ext **m**ark-up **l**anguage.

* Standard language for the _content_ of pages
* Uses tags to classify different parts of the page, with the syntax ``<name-of-Tag>stuff tagged</name-of-Tag>``
* Describes the structure of a webpage

#### Examples

/ TO-DO

### CSS

/ TO-DO

https://www.w3schools.com/css/default.asp

### Javascript

/ TO-DO

https://www.w3schools.com/js/default.asp

### PHP

/ TO-DO

https://www.w3schools.com/php/default.asp

### SQL

/ TO-DO

https://www.w3schools.com/sql/default.asp
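Since the option asks you to read code rather than write it, here is one worked example of the kind of algorithm you should be able to follow: the PageRank idea from C2 (importance flows through links). This is a simplified, illustrative Python sketch; the function name `pagerank`, the damping factor 0.85, the iteration count and the example graph are my own choices, not from the syllabus.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Simplified PageRank. `links` maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal importance
    for _ in range(iterations):
        # Every page keeps a small base rank...
        new = {p: (1 - damping) / n for p in pages}
        for p, outgoing in links.items():
            if outgoing:
                # ...and passes the rest of its rank along its outgoing links
                share = damping * rank[p] / len(outgoing)
                for q in outgoing:
                    new[q] += share
            else:
                # Dangling page (no links out): spread its rank over everyone
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

In a toy graph like `{"A": ["B"], "B": ["A", "C"], "C": ["A"]}`, page A is linked to by both B and C, so it ends up ranked above C, which only B links to. That matches the note above: more (and better-ranked) incoming links mean a higher rank.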