# Google Dorking Writeup - TryHackMe
# Task One:
This section gives a quick explanation of Google as a search engine and website indexer: it uses spiders/crawlers to gather keywords and website URLs, building an index of websites to suggest when someone uses the search engine.
*Answer: Nothing needs to be done. Press complete and away we go!*
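To make the crawling idea concrete, here is a tiny Python sketch of the concept: fetch one page, pull out candidate keywords, and collect the URLs it links to so they could be crawled next. This is only an illustration of the idea described above (the regular expressions and the example URL are assumptions for the sketch), not how a real search engine works.

```python
# Toy illustration of crawling: fetch a page, gather keywords and outbound links.
import re
from urllib.request import urlopen

def crawl(url):
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    keywords = set(re.findall(r"[A-Za-z]{4,}", html))      # crude keyword extraction
    links = re.findall(r'href="(https?://[^"]+)"', html)   # URLs to visit next
    return keywords, links

# Placeholder URL for the example
words, links = crawl("https://example.com")
print(f"{len(words)} keywords, {len(links)} links found")
```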
# Task Two:
**Question 1:** Name the key term of what a “Crawler” is used to do
*Ans: index*
**Question 2:** What is the name of the technique that “Search Engines” use to retrieve this information about websites?
*Ans: crawling*
**Question 3**: What is an example of the type of contents that could be gathered from a website?
*Ans: Keywords*
# Task Three
No Answer needed.
# Task Four
**Question 1:** Where would "robots.txt" be located on the domain "ablog.com"?
*Answer: ablog.com/robots.txt*
**Question 2:** If a website were to have a sitemap, where would it be located?
*Answer: /sitemap.xml*
**Question 3:** How would we only allow "Bingbot" to index the website?
*Answer: User-agent: Bingbot*
**Question 4:** How would we prevent a "Crawler" from indexing the directory "/dont-index-me/"?
*Answer: Disallow: /dont-index-me/*
**Question 5:** What is the extension of a Unix/Linux system configuration file that we might want to hide from "Crawlers"?
*Answer: .conf*
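Putting the answers for this task together, a robots.txt served at ablog.com/robots.txt could look roughly like the sketch below. The exact grouping of rules is an assumption for the example, and the wildcard syntax used to hide .conf files is an extension honoured by major crawlers rather than part of the original standard.

```
# Served at ablog.com/robots.txt

# Only Bingbot is invited to index the site
User-agent: Bingbot
Allow: /
# Keep this directory out of the index
Disallow: /dont-index-me/
# Hide Unix/Linux configuration files (wildcard support varies by crawler)
Disallow: /*.conf$

# All other crawlers are blocked entirely
User-agent: *
Disallow: /
```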
# Task Five
**Question 1:** What is the typical file structure of a "Sitemap"?
*Answer: XML*
**Question 2:** What real life example can "Sitemaps" be compared to?
*Answer: Map*
**Question 3:** Name the keyword for the path taken for content on a website
*Answer: Route*
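For reference, a minimal sitemap.xml showing the XML structure and the route (path) to one piece of content might look like this; the URL and date are placeholders for the example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- the route (path) to a piece of content on the site -->
    <loc>https://ablog.com/articles/example-post</loc>
    <lastmod>2021-01-01</lastmod>
  </url>
</urlset>
```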
# Task Six
**Question 1:** What would be the format used to query the site bbc.co.uk about flood defences?
*Answer: site: bbc.co.uk flood defences*
**Question 2:** What term would you use to search by file type?
*Answer: filetype:*
**Question 3:** What term can we use to look for login pages?
*Answer: intitle: login*
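Combining the operators from this task, some example dorks are shown below. The search terms themselves are just illustrations; in practice the operators are usually written with no space after the colon.

```
site:bbc.co.uk flood defences
filetype:pdf "flood defences"
intitle:login
```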