# Introduction to Text and Image Analysis Using Python
- Workshop Webpage: https://raphaelaheil.github.io/2022-03-15-dhnb/
- Code of Conduct: https://coderefinery.org/about/code-of-conduct/
- JupyterHub: https://worker02.chcaa.au.dk/jupyter/hub/login (username and password have been distributed via email)
- Link to this note: https://hackmd.io/@rheil/dhnb_2022
## Program
|Time|Topic|Instructor|
|---|---|---|
|9:00|Introduction to Jupyter (based on [this CodeRefinery lesson](https://coderefinery.github.io/jupyter/))|Diana|
|9:25|[Python basics and first computational notebook](https://github.com/coderefinery/jupyter-dhnb)|Radovan|
|10:00|Analyzing Documents with TF-IDF, [material](https://github.com/RaphaelaHeil/2022-03-15-dhnb/tree/gh-pages/text-analysis) (based on [this Programming Historian lesson](https://programminghistorian.org/en/lessons/analyzing-documents-with-tfidf))|Henrik|
|11:00|[Clustering-based Analysis of Handwritten Digits](https://github.com/RaphaelaHeil/clustering-dhnb)|Raphaela|
## Icebreaker question
How do you use or plan to use Python in your research?
- text analysis
- Using Python for pre- and post-processing of data and for data visualization.
- Data visualization. Scraping.
- Image Analysis
- Text wrangling / pattern searching. statistics
- As an additional tool to supplement R - especially audio and image analysis.
- Text classification, topic modeling, text mining
- Plan: code-switching research in early modern Latin-Greek texts
- create alternate lessons (from existing R code) for text/image/network analysis
- Audio analysis
- Text analysis
- I don't do any research, I'm a librarian, but I would like to learn more about the possibilities for text analysis
- Will start a phd soon where I'll use NLP on cuneiform economic sources // work on Sumerian dialects in cuneiform sources in my current project
- Online text re-use research
- Text analysis, pattern searching
- Just a developer in the field of digital humanities, curious to see what's happening.
- Webscraping
- creating text analysis workflows for HTR output
- Identify handwritten text in printed maps
## Questions and Notes for Part I: "Introduction to Jupyter"
- What is your experience with Jupyter notebooks?
- no experience: oooooooooo
- some experience: ooooooooooo
- Which operating system are you using?
- Windows: ooooooooo
- macOS: ooooooooo
- Linux: ooo
- Python experience:
- have never written any Python: ooooooo
- have already written some Python: ooooooooooo
### Starting JupyterLab locally
On Windows:
1. Open "Anaconda Prompt" from the start menu
2. Type `jupyter-lab`
3. Press Enter

On macOS and Linux:
1. Open a Terminal
2. Type `jupyter-lab`
3. Press Enter

To quit (any operating system):
- Press the Control key and "C" at the same time (`Ctrl+C`)
## Questions and Notes for Part II: "Python basics and first computational notebook"
- Can you run the whole notebook, so you don't get an error from not running a previous cell?
- Yes: in the menu at the top, under "Cell" (or "Run" in JupyterLab), there is the option "Run All"
- Do you have any good practice tips for naming variables?
- Good variable names are descriptive for humans. The computer does not mind whether it is `n` or `num_words`, but the latter is much easier for humans to read.
- Avoid variable names that are reserved by Python; here is a list of reserved keywords: https://realpython.com/lessons/reserved-keywords/
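A tiny illustration of the point above (the names are just examples):

```python
# Python treats `n` and `num_words` identically; only humans
# benefit from the descriptive name.
words = "the quick brown fox".split()

n = len(words)          # terse: fine in a throwaway scope, opaque later
num_words = len(words)  # descriptive: self-explanatory when rereading a notebook

print(num_words)  # 4
```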
- Maybe an overly specific question, so feel free not to prioritize. This is my first time using conda and jupyter notebooks and since activating the environment EVERY terminal window opens with the conda environment activated. As a heavy terminal user, this is pretty irritating. Is there any way to limit the scope of the environment, e.g. only to the terminal in which it was first activated? (Linux OS)
- It is an excellent question. When you installed Conda, you probably answered yes to the question of whether your `.bashrc` should be modified to activate the base environment every time. You could take that out of your `.bashrc` and activate the Conda environment explicitly. I also recommend creating separate environments for separate projects; then you would not want to use the base environment anyway. The reason we did not go deeper into this was to avoid confusing people with too many things.
- yes! it's .bashrc. Thanks.
- What I do (a bit more advanced): `.bashrc` is not modified and I activate Conda explicitly myself. I use separate environments for each project and never install anything into the base environment, always into the project environments. Motivation: it forces me to document my dependencies, and there is no risk of messing up my base environment. If I do mess up, I just delete the project environment and recreate it.
- :thumbsup: I do the same, but with venv.
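A related option, if you want to keep Conda but stop every new terminal from auto-activating the base environment, is Conda's own config setting (a sketch; `auto_activate_base` is the relevant key in current Conda versions, and "myproject" is a placeholder environment name):

```shell
# Tell Conda not to activate the base environment in new shells
conda config --set auto_activate_base false

# Activate an environment explicitly only when you need it
conda activate myproject
conda deactivate
```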
## Questions and Notes for Part III: "Analyzing Documents with TF-IDF"
- Notebook `TF_IDF_and_cosine_similarity.ipynb` if you want to follow the already-written code
- Notebook `TF_IDF_and_cosine_similarity_blank.ipynb` if you want to write the code that Henrik presents yourself
- ...
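The core idea behind the notebooks — weighting terms by TF-IDF and comparing documents with cosine similarity — can be sketched with the standard library alone. This is a minimal illustration, not the lesson's actual code: the Programming Historian lesson uses scikit-learn's `TfidfVectorizer`, which adds smoothing and normalisation, so its numbers will differ.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Plain TF-IDF vectors for a list of tokenised documents."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vocab = sorted(df)
    vectors = []
    for doc in docs:
        tf = Counter(doc)               # raw term frequency
        vectors.append([tf[w] * math.log(n / df[w]) for w in vocab])
    return vocab, vectors

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

docs = [["cat", "sat"], ["cat", "ran"], ["dog", "ran"]]
vocab, vectors = tfidf_vectors(docs)
print(cosine_similarity(vectors[0], vectors[1]))  # shares "cat": > 0
print(cosine_similarity(vectors[0], vectors[2]))  # no shared terms: 0.0
```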
- I can't load the text data. The file isn't found. I can find `2_text_analysis\lesson-files\lesson-files\txt`, but `input_files = os.listdir('lesson-files/txt')` doesn't work.
- Does this work: `input_files = os.listdir('lesson-files/lesson-files/txt')`?
- Yes, thank you!
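A small helper for this kind of path confusion — the doubled `lesson-files/lesson-files` comes from how the archive was unpacked. This is a hedged sketch (the helper name is made up): check which candidate directory actually exists before listing it.

```python
from pathlib import Path

def find_existing_dir(candidates):
    """Return the first candidate that is an existing directory, else None."""
    for candidate in candidates:
        if Path(candidate).is_dir():
            return candidate
    return None

# Try the expected location first, then the doubled one from the archive
data_dir = find_existing_dir(["lesson-files/txt",
                              "lesson-files/lesson-files/txt"])
print(data_dir)
```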
- (for the later discussion maybe) Just a small question about `word_tokenize()`: with the default split, the Counter output contains a lot of punctuation marks. Does the nltk tokenizer have an argument to remove punctuation, or should that be done separately beforehand?
- RB: I am looking for a good answer ... not knowing that code, I would probably remove punctuation myself outside, but I am checking whether there is a better way.
- I looked further, and it seems nltk can do a lot, including recognizing punctuation: https://www.nltk.org/api/nltk.tokenize.html
- Maybe you may get some inspiration from https://stackoverflow.com/questions/15547409/how-to-get-rid-of-punctuation-using-nltk-tokenizer or https://www.codegrepper.com/code-examples/python/how+to+remove+punctuation+in+python+nltk.
- Yes, thanks, my question was just about the function itself: the similar R function `unnest_tokens()` deletes all punctuation marks by default (and sometimes the question is how to keep them :)), so I see it's the other way around here.
- One could use `stop_words`.
- One could wrap `word_tokenize` in a custom function that filters out the punctuation, for instance by using a regular expression.
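Following the suggestion above, a minimal wrapper sketch — the token list here is written out by hand to mirror what `nltk.word_tokenize` would return, so the example runs without nltk installed:

```python
def drop_punctuation(tokens):
    """Keep only tokens that contain at least one alphanumeric character.

    word_tokenize() emits punctuation marks as separate tokens such as
    "," or "!", so filtering the token list afterwards removes them.
    """
    return [t for t in tokens if any(ch.isalnum() for ch in t)]

# Hand-written tokens, as nltk.word_tokenize would produce them:
tokens = ["Hello", ",", "world", "!", "It", "'s", "2022", "."]
print(drop_punctuation(tokens))  # ['Hello', 'world', 'It', "'s", '2022']
```

Clitics like `'s` survive because they contain a letter; a stricter filter could use a regular expression such as `re.fullmatch(r"\w+", t)` instead.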
- This part of the workshop was quite difficult to follow without previous experience. Do you know of a more basic tutorial? (The Programming Historian one is also quite advanced.)
- Do you mean a basic tutorial introducing Python, introducing NLTK, or introducing this method?
- For nltk: https://www.nltk.org/book/
- For introduction to python (including pandas, with emphasis on loading and plotting data): https://swcarpentry.github.io/python-novice-gapminder/
- Any good way of visualizing the results?
- https://www.geeksforgeeks.org/visualize-data-from-csv-file-in-python/
- Network graphs: http://jonathanstray.com/a-full-text-visualization-of-the-iraq-war-logs
- Some graph plotting libraries I (HA) have used are networkx and pyvis:
    - https://pyvis.readthedocs.io/en/latest/
    - https://networkx.org/documentation/stable/index.html
## Break until 11:10
## Questions and Notes for Part IV: "Clustering-based Analysis of Handwritten Digits"
- Can you explain a little how `pixels_per_cell` impacts the accuracy of the overall process? Is it better to just have more pixels per cell? Why not 64x64, matching the original image?
- FYI: 64x64 and 32x32 cause errors in the later parts of the notebook.
- Under "1. Loading the data", I get an error message when running `utils.digit_grid(images[:100])`. The error message is: `ValueError: Supply a single scale, or one value per spatial axis.` It seems I have to specify the `multichannel` argument as true, but I cannot figure out how to do that (and wonder why I have to). Any suggestions? I will try to update Python; I was just asking in case you immediately see what the problem is.
## Feedback
One thing that was particularly good/useful:
- A plus was that it was a good start to see some examples of what you can do with Python. Especially the handwriting recognition part was interesting....
- It was very interesting to see the part on image analysis, which is all new to me. The text analysis part was also very good. ...
- Image analyses was great!! I'm hooked & now wondering about the next steps..letters, word shapes?
- Good and varied examples, showing different types of applications.
One thing we should change/remove/adjust for next time:
- This is all new to me so a bit hard to follow.
- ...
- I would suggest for next time trying to estimate the participants' different levels and running beginner and intermediate groups separately. There were some comments about the pace being too quick; on the other hand, it's easy to become disengaged while the presenter goes over fundamentals.
- I agree with splitting the group in two for different levels. It was hard to follow many steps in the workshop.
- It would have been useful to get some exercises to try out some steps of the analysis. This was more of a demonstration than a workshop, and I had no hope of doing the steps myself while they were shown.