We need better tools and standards for QA in AI (in other words, testing and issue tracking) -- in particular, ways to pass issues experienced with LLMs (whether hallucinations, incorrect data, or unexpected behavior) up the chain of command to the teams that train the models. Because the output of generative AI is stochastic, it is much harder to reproduce bugs and to confirm that they have been fully addressed. Right now, developers working at the API level have little recourse and little ability to influence the evolution of the technologies they have committed to building on. It is certainly possible to update models without re-training from scratch (see this CACM piece on building machine-learning models like open-source software: https://cacm.acm.org/opinion/building-machine-learning-models-like-open-source-software/). Standardizing this process and integrating it into CI/CD would be a major contribution from open-source advocates.
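
One way to make such reports reproducible despite stochastic outputs is to capture each failure as a structured, re-runnable test case that can live in CI and be handed upstream as an issue record. The sketch below is only illustrative: it assumes a hypothetical `generate()` wrapper around whatever model API a project uses, and all names and values are placeholders, not a real library's interface.

```python
# Minimal sketch: recording an LLM failure as a reproducible issue, plus a
# CI-style check that re-runs it to confirm a fix. All names are hypothetical.
import json
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass
class LLMIssue:
    """A self-contained, reproducible record of a model failure,
    suitable for filing upstream with the model provider."""
    issue_id: str
    prompt: str
    expected: str        # content a correct answer must contain
    observed: str        # the faulty output that triggered the report
    model: str
    temperature: float
    seed: Optional[int]


def generate(prompt: str, *, temperature: float = 0.0, seed: Optional[int] = 0) -> str:
    """Placeholder for the project's model client; swap in the real API call."""
    raise NotImplementedError("wire this up to your model API")


def issue_is_fixed(issue: LLMIssue, runs: int = 5) -> bool:
    """Because outputs are stochastic, re-run the recorded prompt several times
    and only declare the bug fixed if every sampled run passes the check."""
    for _ in range(runs):
        output = generate(issue.prompt, temperature=issue.temperature, seed=issue.seed)
        if issue.expected not in output:
            return False
    return True


if __name__ == "__main__":
    issue = LLMIssue(
        issue_id="LLM-042",
        prompt="What year was the ACM founded?",
        expected="1947",
        observed="The ACM was founded in 1954.",  # the hallucination being tracked
        model="example-model-v1",
        temperature=0.0,
        seed=0,
    )
    # The serialized record is what gets passed up the chain of command.
    print(json.dumps(asdict(issue), indent=2))
```

A collection of records like this doubles as a regression suite: the same files that document an issue for the provider can be replayed in CI/CD after every model or prompt change.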