# PyTorch-free implementations
- https://github.com/ggerganov/llama.cpp
- has been around for months
- uses its own GGML format for quantized models
- https://github.com/karpathy/llama2.c
- Andrej Karpathy's pure C implementation
- @jdiaz97 apparently has a pure Julia implementation: https://github.com/jdiaz97/llama2.jl/blob/main/run.jl
- Here's another from @cafaxo https://github.com/cafaxo/Llama2.jl
- I wrote my own https://gist.github.com/jiahao/07a93a1bd8597e100a09c671e379a03b
# Ideas
- Would be cool to add a REPL mode that shells out to LLaMA (see the ReplMaker sketch after this list)
- [ReplMaker.jl](https://github.com/MasonProtter/ReplMaker.jl)
- Explain this code; explain this error message
- What does useful, consistent state look like (for the LLaMA)?
- Fine-tuning
- https://pypi.org/project/finetuner/
- still need labeled examples
- Grammar-based sampling - "make LLaMA emit valid JSON" (see the sketch below)
- https://github.com/ggerganov/llama.cpp/pull/1773
- Can we use a fine-tuned LLaMA to construct "fairly minimal" repairs for broken Julia syntax? (see the sketch below)
- Easy to generate (fixed, broken) pairs in several ways:
- Take valid code (from Base, etc.) and delete/move/copy parts much as users would while editing text
- Feed potential fixes back into the parser and see whether there's an error
- Automated MRE (minimal reproducible example) generation for filing bugs, i.e. automated test reduction (see the reduction sketch below)
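
A minimal sketch of the REPL-mode idea, using ReplMaker.jl to register a new prompt that shells out to llama.cpp's `main` binary. The binary path, model path, and flags (`-m`, `-p`, `-n`) are assumptions based on the llama.cpp README, not a tested setup:

```julia
# Sketch only: assumes llama.cpp is built locally and a GGML model file exists.
using ReplMaker

const LLAMA_MAIN = "./llama.cpp/main"                       # assumed CLI path
const MODEL      = "./models/llama-2-7b.ggmlv3.q4_0.bin"    # assumed model file

# Take whatever the user typed at the prompt, run it through llama.cpp,
# and let the completion stream straight to the terminal.
function llama_prompt(input)
    run(`$LLAMA_MAIN -m $MODEL -p $input -n 256`)
    nothing
end

# Call this from an interactive session; `)` switches into the new mode,
# much like `]` switches into Pkg mode.
initrepl(llama_prompt;
         prompt_text  = "llama> ",
         prompt_color = :magenta,
         start_key    = ')',
         mode_name    = "llama_mode")
```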
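
For the grammar-based sampling idea, a hedged sketch of calling llama.cpp with the grammar support from PR #1773. The `--grammar-file` flag and the bundled `grammars/json.gbnf` file are assumptions taken from that PR and may differ in a given build:

```julia
# Sketch: constrain generation to valid JSON via grammar-based sampling.
llama_main = "./llama.cpp/main"                       # assumed CLI path
model      = "./models/llama-2-7b.ggmlv3.q4_0.bin"    # assumed GGML model
prompt     = "Describe this Julia error as JSON with keys \"error\" and \"fix\": ..."

json_out = read(`$llama_main -m $model -p $prompt -n 512 --grammar-file ./llama.cpp/grammars/json.gbnf`, String)
println(json_out)
```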
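
For the (fixed, broken) pair idea, a rough sketch of one mutation operator plus the parser-as-oracle check, using only `Meta.parse`. The mutation shown (dropping a single character) is just a stand-in for the delete/move/copy edits listed above:

```julia
using Random

# Crude "user edit" simulation: delete one random character from valid code
# (think of a lost bracket, comma, or `end`).
function break_code(src::AbstractString; rng = Random.default_rng())
    chars = collect(src)
    deleteat!(chars, rand(rng, eachindex(chars)))
    String(chars)
end

# Parser oracle: does the candidate parse without an error or incomplete node?
function parses(src::AbstractString)
    ex = Meta.parse(src; raise = false)
    !(ex isa Expr && ex.head in (:error, :incomplete))
end

good = "f(x, y) = (x + y)^2"
bad  = break_code(good)
(bad, parses(bad))   # keep only pairs where the broken side fails to parse
```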
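
And a toy sketch of the test-reduction idea: greedily drop lines while the failure still reproduces. Here `reproduces` just checks for any thrown error, which a real MRE reducer would have to tighten (match the original error, sandbox execution, etc.):

```julia
# Does the snippet still fail? (Sketch: any error counts as "reproduces".)
function reproduces(src::AbstractString)
    try
        include_string(Main, src)
        false
    catch
        true
    end
end

# Greedy line removal: keep a line only if deleting it stops the failure.
function reduce_mre(src::AbstractString)
    lines = split(src, '\n')
    i = 1
    while i <= length(lines)
        candidate = copy(lines)
        deleteat!(candidate, i)
        if reproduces(join(candidate, '\n'))
            lines = candidate      # line i wasn't needed to trigger the failure
        else
            i += 1                 # line i is needed; keep it and move on
        end
    end
    return join(lines, '\n')
end
```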
## Related work
- Tool that calls out to GPT to work with Julia code: https://github.com/svilupp/GPTCodingTools
## Funding?
- NSF OAC
- https://www.nsf.gov/div/index.jsp?div=OAC
- They like automated program repair, AI, new tooling for programming productivity
- NSF OISE - Has Australian collab
- https://www.nsf.gov/dir/index.jsp?org=OISE
- NSF-CSIRO collab
- https://new.nsf.gov/news/new-nsf-australia-awards-will-tackle-responsible
- Small business research awards in general