# Python & LLM
###### tags: `llm python`
> Install Python 3
> Install Miniconda https://docs.anaconda.com/miniconda/install/ or Anaconda from https://www.anaconda.com/
> Install PyCharm
> Install Ollama (Optional)
## Cursor shortcuts
https://www.youtube.com/watch?v=puKVeqChsgM
https://www.youtube.com/watch?v=wJk2_Ds-9cM
1. Chat: Command + L
2. Add context: @Somefile.java
3. Open Composer: Command + I
4. Open Composer full display: Command + Shift + I
5. Inline edits: Command + K (also works in the terminal)
## Environment
https://medium.com/@maheshkarthu/understanding-when-and-how-to-use-conda-pipenv-virtualenv-pip-and-poetry-is-crucial-for-2a518a951945
### Conda
```bash
conda --version
conda env list
# create a new environment (e.g. learn-langchain) with the default Python
conda create -n <ENV_NAME> python
# install a package into a specific environment
conda install -n <ENV_NAME> <PACKAGE>
conda activate <ENV_NAME>
conda create -n <ENV_NAME> python=3.11 scipy=0.17.3 astroid babel
conda create -n llm python=3.12
conda install -n llm ipython jupyter
# Listing dependencies
conda list -n ice_breaker
conda list
# Install dependency
#conda install langchain
conda install black
conda install python-dotenv
conda install conda-forge::langchain
conda install conda-forge::langchain-openai
conda install conda-forge::langchain-community
conda install conda-forge::langchainhub
conda install langchain-ollama
pip install langchain-ollama
pip list
conda remove -n <ENV_NAME> --all
# Clone an existing environment
conda create --name clone_envname --clone envname
# install the current project in editable mode (uses setup.py / pyproject.toml)
pip install -e .
```
### Working with environment.yaml
```bash!
conda env create -f environment.yaml
```
### Working with requirements.txt
```bash!
# create the requirements.txt file
conda list -e > requirements.txt
# install the requirements.txt
conda install --file requirements.txt
# channels
conda config --env --add channels bioconda
conda config --env --add channels conda-forge
conda update --all
```
### Jupyter Notebook
```bash!
pip install jupyterlab
jupyter lab
```
### Virtual Environment
Prefer conda; `venv` is shown here for reference.
```bash
# create the environment
python3 -m venv env
# activate the environment
source env/bin/activate
# deactivate the environment
deactivate
# delete the environment
rm -r env
```
Example environment file:
```yaml
name: ai-studio
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.11
  - pip
  - python-dotenv
  - requests
  - numpy
  - pandas
  - scipy
  - pytorch
  - jupyterlab
  - ipywidgets
  - matplotlib
  - scikit-learn
  - chromadb
  - jupyter-dash
  - sentencepiece
  - pyarrow
  - faiss-cpu
  - pip:
      - beautifulsoup4
      - plotly
      - bitsandbytes
      - transformers
      - sentence-transformers
      - datasets
      - accelerate
      - openai
      - anthropic
      - google-generativeai
      - gradio
      - gensim
      - modal
      - ollama
      - psutil
      - setuptools
      - speedtest-cli
      - langchain
      - langchain-core
      - langchain-text-splitters
      - langchain-openai
      - langchain-chroma
      - langchain-community
      - faiss-cpu
      - feedparser
      - twilio
      - pydub
```
### Environment for LLM development
```bash!
conda create -n langchain python=3.12
conda activate langchain
pip install langchain
pip install langchain-openai
pip install langchain-ollama
pip install langchain-community
pip install langchainhub
pip install black
black .
```
## Cursor
In Cursor, Alt + Cmd + B toggles the AI panel.
* Chat (Cmd + L). Chat as with a normal LLM to clarify doubts, for example details from the documentation of a Python framework.
* Composer (Cmd + I). Use Composer to create a whole new feature or to make large changes across multiple files. We will use it extensively in the next section.
* Bug Finder. As the name suggests, we can provide context to find and fix bugs in an existing project.
#### Cursor or VS Code
1. Open the Command Palette: Command + Shift + P -> select the conda interpreter
2. Click Run & Debug -> create launch.json -> select the Python file debug configuration
```json
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Icebreaker Runner",
            "type": "debugpy",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": true,
            "envFile": "${workspaceFolder}/.env",
            "env": {
                "PYTHONPATH": "${workspaceFolder}"
            }
        }
    ]
}
```
3. Read the `.env` file
```python
import os

from dotenv import load_dotenv

if __name__ == "__main__":
    load_dotenv()
    print("Hello, Langchain!")
    print(os.getenv("OPENAI_API_KEY"))
```
## LLM
### :memo: What can an LLM do?
- [ ] Summarization
- [ ] Image generation
- [ ] Code Generation
- [ ] Code Optimization
- [x] Multi-step reasoning
- [x] Common sense
### Ollama Commands
```bash
# list all models available
ollama list
# list currently loaded models
ollama ps
# pull model from repository
ollama pull llama3.2
# show model information
ollama show llama3.2
# pull (if not exists) and run a model
ollama run phi3:medium
# stop a currently running model
ollama stop llama3.2
# remove a model
ollama rm llama3.2
```
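These models can also be called from Python. A minimal sketch using the `ollama` package (listed in the environment file above), assuming the Ollama server is running locally and `llama3.2` has already been pulled:
```python
# Minimal sketch: chat with a locally running Ollama model.
# Assumes `pip install ollama`, a running Ollama server,
# and that `ollama pull llama3.2` has been done.
import ollama

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Explain what a conda environment is in one sentence."}],
)
print(response["message"]["content"])
```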
### Open WebUI
```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
## Prompt Engineering
https://brightpool.notion.site/fe947b16fe894c3e8a8a19a6b81aec2c
### Prompt
A prompt typically has four parts:
1. Instruction
2. Context
3. Input Data
4. Output Indicator
#### Instruction
Which task needs to be performed?
| Prompt | Instruction | Context | Input Data | Output Indicator |
|---|---|---|---|---|
| Summarize the following text into three or fewer sentences | Summarize | Length | Text | Summary |
| Complete the following sentence with a creative ending | Complete | Context | Sentence fragment | Completed sentence |
| Translate the following text from English to French | Translate | Language | Text in English | Text in French |
| Answer the following question based on the given passage | Answer | Question | Text & question | Answer to the question |
| Write a story about a character who overcomes a difficult challenge, using the following prompt | Write | Theme | Prompt or theme | Creative story or narrative |
#### Context
Additional information that fine-tunes the instruction. It is optional but can significantly improve performance.
| Prompt | Instruction | Context | Input Data | Output Indicator |
|---|---|---|---|---|
| Write a recipe for vegan lasagna that is easy to make and uses common ingredients | Write | Constraints and audience | Ingredients, cooking tools and steps | Vegan lasagna recipe |
| Summarize a complex legal document in a clear and concise manner for a non-lawyer audience | Summarize | Domain and audience | Legal document | Summary |
| Complete the sentence "The wind was howling outside, rattling the windows and doors, when suddenly..." in a frightening and suspenseful way | Complete | Tone or style | Sentence fragment | Completed sentence |
#### Input Data
The data that the model will process.
| Prompt | Instruction | Context | Input Data | Output Indicator |
|---|---|---|---|---|
| Translate the sentence "I love you" to Spanish | Translate | Language & domain | "I love you" sentence in English | Text in Spanish |
| Rewrite a paragraph from a classical novel in your own words, using contemporary language and idioms | Rewrite | Domain and audience | Paragraph from a classical novel | Output paragraph |
| Count the number of words in the sentence "The dog was eating a pizza" | Count | NA | "The dog was eating a pizza" | Word count |
#### Output Indicator
Can be implicit or explicit in the prompt.
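A minimal sketch of how the four parts can be assembled into one prompt string; the component values below are made up for illustration:
```python
# Sketch: assembling a prompt from the four components above.
# All values are illustrative placeholders.
instruction = "Summarize the following text into three or fewer sentences."
context = "The summary is for a non-technical audience."
input_data = "Large language models are neural networks trained on large text corpora ..."
output_indicator = "Summary:"

prompt = f"{instruction}\n{context}\n\nText: {input_data}\n\n{output_indicator}"
print(prompt)
```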
### Zero Shot Prompt
> Most popular - https://arxiv.org/pdf/2205.11916
The model generates output for a task it has not been explicitly trained on.
The prompt does not contain any worked examples to follow; instead it relies on the model's ability to understand and interpret the instruction in natural language.
Example: Create a list of the top 10 must-visit cities in the world, in no particular order.
Issue: accuracy, since there are no examples to anchor the output.
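A minimal sketch of a zero-shot prompt sent through `langchain-ollama` (from the LLM environment above); `llama3.2` is simply the model pulled earlier in these notes:
```python
# Sketch: zero-shot prompt — only the task, no examples.
# Assumes the langchain-ollama package and a local llama3.2 model.
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.2")
result = llm.invoke(
    "Create a list of the top 10 must-visit cities in the world, in no particular order."
)
print(result.content)
```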
### Few Shot Prompt
1. **Zero-shot prompt**: the model makes its best guess without having seen any examples of the result you want.
2. **One-shot prompt**: the model is given exactly one example of the result you want.
3. **Few-shot prompt**: the model is given a few examples of the result you want (see the sketch after this list).
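A minimal sketch of a few-shot prompt, again via `langchain-ollama`; the sentiment-classification task and its examples are made up for illustration:
```python
# Sketch: few-shot prompt — a few labelled examples precede the new input.
# The sentiment task and reviews are illustrative only.
from langchain_ollama import ChatOllama

few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The food was amazing and the staff were friendly." -> Positive
Review: "We waited an hour and the soup was cold." -> Negative
Review: "Great atmosphere, will definitely come back." -> Positive

Review: "The room was tiny and smelled of smoke." ->"""

llm = ChatOllama(model="llama3.2")
print(llm.invoke(few_shot_prompt).content)
```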
### Chain of thought
https://arxiv.org/pdf/2201.11903v1
1. **Zero-shot CoT prompting**: prefix the answer block with "Let's think step by step", which prompts the LLM to write out intermediate reasoning before the final answer (see the sketch after this list).
2. **Few-shot CoT prompting**: provide examples of `<question, answer>` pairs where the answer is worked out step by step.
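A minimal sketch of zero-shot CoT: the trigger phrase is appended so the model writes out its intermediate steps before the final answer (the model and question are illustrative):
```python
# Sketch: zero-shot Chain-of-Thought — add "Let's think step by step"
# so the model spells out intermediate reasoning before answering.
from langchain_ollama import ChatOllama

question = (
    "A cafeteria had 23 apples. They used 20 for lunch and bought 6 more. "
    "How many apples do they have now?"
)
cot_prompt = f"Q: {question}\nA: Let's think step by step."

llm = ChatOllama(model="llama3.2")
print(llm.invoke(cot_prompt).content)
```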
### ReAct
> Reason + Act <=> CoT + Actions
https://arxiv.org/pdf/2210.03629
ReAct is a paradigm that integrates language models with reasoning and acting capabilities, allowing dynamic reasoning and interaction with external environments (tools, APIs, search) to perform complex tasks.
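A sketch of the prompt format behind the Thought / Action / Observation loop; the `search` tool is hypothetical, and in practice a ready-made ReAct prompt plus an agent executor (which actually runs the chosen tool and feeds the observation back to the model) is used instead of a hand-written template:
```python
# Sketch of the ReAct prompt pattern: the model interleaves Thought,
# Action and Observation steps until it reaches a Final Answer.
# The "search" tool is hypothetical; an agent executor would run it
# and append the real Observation after each Action.
react_prompt = """Answer the question using the following tools: search.

Use this format:
Question: the input question
Thought: reason about what to do next
Action: the tool to use, e.g. search["query"]
Observation: the result returned by the tool
... (Thought/Action/Observation can repeat)
Thought: I now know the final answer
Final Answer: the answer to the question

Question: Who is the CEO of the company that makes the Pixel phone?
Thought:"""
print(react_prompt)
```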
## gcloud
```bash
gcloud auth login
gcloud init
gcloud config set project curious-framing-433708-g6
```
```python
def write_output(cpp):
    """Strip the markdown code fences from the LLM reply and save the C++ source."""
    print(cpp)
    code = cpp.replace("```cpp", "").replace("```", "")
    print(code)
    with open("optimized.cpp", "w") as f:
        f.write(code)
```
```java
// Java program to sort array elements
// using nested loops over the array
// Main class
class GFG {
    // Main driver method
    public static void main(String[] args)
    {
        // Custom input array
        int arr[] = { 4, 3, 2, 1 };

        // Outer loop
        for (int i = 0; i < arr.length; i++) {
            // Inner nested loop pointing 1 index ahead
            for (int j = i + 1; j < arr.length; j++) {
                // Checking elements
                int temp = 0;
                if (arr[j] < arr[i]) {
                    // Swapping
                    temp = arr[i];
                    arr[i] = arr[j];
                    arr[j] = temp;
                }
            }
            // Printing sorted array elements
            System.out.print(arr[i] + " ");
        }
    }
}
```