# Planning Meeting: Package Metrics and DevStats
**Date:** Monday 9:00 - 10:00 AM
**Topics:** Package Metrics and [DevStats](https://devstats.scientific-python.org/)
**Issues:** [#12](https://github.com/scientific-python/summit-2023/issues/12), [#17](https://github.com/scientific-python/summit-2023/issues/17)
### Attendees
- Juanita Gomez, Jim Pivarski (@jpivarski), Matthias Bussonnier, Tim H (@betatim), Madicken Munk, leah wasser (@lwasser), Henry Schreiner (@henryiii), Jarrod Millman (@jarrodmillman), Inessa Pawson (@inessapawson), Sebastian Berg (@seberg)
## Planning issues
- https://github.com/scientific-python/summit-2023/issues/17
- https://github.com/scientific-python/summit-2023/issues/12
## Relevant links
- [Measuring API usage for popular numerical and scientific libraries](https://labs.quansight.org/blog/python-library-function-usage)
- https://github.com/Quansight-Labs/python-api-inspect
- https://www.gharchive.org
- https://github.com/pyOpenSci/update-web-metadata
- https://github.com/pyOpenSci/pyopensci.github.io/blob/main/_data/packages.yml#L26
- https://github.com/nschloe/github-trends
- https://www.coiled.io/blog/how-popular-is-matplotlib
## Ideas from issues
### DevStats
### Package Metrics
**Jim Pivarski:** From the Feb 27 meeting: "How do we collect metrics and package stats?"
- @lwasser brought it up
- @jpivarski has ideas to contribute (I'll follow up)
- @InessaPawson has been researching this topic and has implemented some solutions in NumPy
- @lagru asks if there has been interest/recent work on (regular) user surveys (perhaps Scientific Python-wide?)
- @Carreau said that there was a discussion/project 8 years ago to have opt-in user feedback for SciPy: https://github.com/njsmith/sempervirens
**Jim:** Wow! This is exactly what I'm working on for a physics conference, and I was planning on following up on these techniques at the Scientific Python Summit. I just didn't know that Christopher Ostrouchov had already done it, talked about it at SciPy 2019, and provided a tool.
Christopher has already addressed this problem:
```python
import numpy

# The same API (ndarray.transpose) is reached both directly and via a helper,
# which is what makes counting "API usage" nontrivial with static analysis.
def foobar(array):
    return array.transpose()

a = numpy.array(...)   # placeholder for actual array data
a.transpose()          # direct method call
foobar(a)              # the same call, hidden behind a user-defined function
```
and I'll look at his code to see how he did it or use that code directly.
On the tool's GitHub page, he notes:
> NOTE: this dataset is currently extremely biased as we are parsing the top 4,000 repositories for few scientific libraries in data/whitelist. This is not a representative sample of the python ecosystem nor the entire scientific python ecosystem. Further work is needed to make this dataset less biased.
In my case, I've been asking these questions about a specific sub-community, nuclear and high-energy physicists, and I have a trick for that (PDF page 29 of this talk): one major experiment, CMS, requires its users to fork a particular GitHub repo. From that, I can get a set of GitHub users who are all CMS physicists, and (where I wave my hands) I assume that the CMS experiment is representative of the whole field. This is 2,847 GitHub users (CMS members over a 10-year timespan) and 22,961 non-fork repositories.
I also have another technique I've been trying out: using the GitHub archive in BigQuery to find a set of GitHub users who have ever commented on the ROOT project, which occupies a central place in our ecosystem. Then I would look up their non-fork repos in the same way.
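A minimal sketch of that GH Archive lookup, assuming the public `githubarchive` dataset in BigQuery and the `google-cloud-bigquery` client; the ROOT repository name is the only concrete identifier here, and the year range is illustrative:
```python
# Hedged sketch: find GitHub users who have ever commented on the ROOT project,
# using the public GH Archive dataset in BigQuery.
from google.cloud import bigquery

client = bigquery.Client()  # assumes credentials are already configured
query = """
SELECT DISTINCT actor.login
FROM `githubarchive.month.2023*`         -- every monthly table for 2023
WHERE type = 'IssueCommentEvent'
  AND repo.name = 'root-project/root'    -- comments on ROOT issues and PRs
"""
root_commenters = {row.login for row in client.query(query).result()}
print(f"{len(root_commenters)} GitHub users have commented on ROOT")
```
From that set of users, the non-fork repositories could then be listed with the same GitHub API queries discussed below.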
But Christopher has solved a lot of the other issues, and I'm going to use as much of his work, with credit, as I can. Thanks for the pointer!
**Jim:** For my part, I usually do plots in a time domain. One of the specific questions I'll be asking about ROOT/physics usage is how often people use TLorentzVector (deprecated in 2005, but still widely used) versus PxPyPzEVector (and its other replacements). That will definitely be a time-based plot. I'd want to see if there's any trend away from the legacy class. If there isn't, I think it would be a lesson that deprecation without consequences (never actually removing it) doesn't change user behavior.
**Leah:** I am super interested in this. I started a small module (that needs a lot of work) to parse our packages and get some basic stats via the GitHub API. We of course have a very specific use case with reviews and such, but if there were some tooling around getting and storing other types of stats, I might pull that into our workflow rather than continue to develop that component myself! I was thinking it would be super cool to have a page of stats for each package in our ecosystem. Think Snyk stats but with a bit more depth, potentially?
Here is a quick snapshot of what I'm bringing down (statically) ... no time series right now (which would be super cool).
**Leah:** @stefanv I'd LOVE to combine efforts. I can show you what we have. Some of what I'm parsing are GitHub issues to get package names, reviews, etc., but other stuff I'm parsing to get stars and other metrics that I bet you are parsing for as well. What can I create to make this potential collab more efficient? We got some people working on this for us during our last sprints as well! But at the end of the day it's really just me working on this by myself to support tracking reviews, packages, etc.
**Stefan:** Great! There's a bit of machinery around GraphQL paging that is service-specific (crazy, but so it is), so perhaps we can aggregate that into a "package" (submodule), and then just feed the package with the queries we want, built with the GitHub GraphQL Explorer. Later, we can add bells & whistles like caching, exporting in different formats, etc.
**Leah:** We output to YAML right now but have no long-term storage, and I'd love to look at trends over time.
I've just been making REST API calls and have hit rate limits, but that may have been fixed in our last sprint. I'm happy to wrap around / use devstats as it makes sense and contribute effort there.
-------------------------------------------------------
## Meeting notes – Monday, May 15, 2023
Stefan: Let's maybe not record the informal meetings, but we will record the formal ones.
Two topics:
* package usage statistics
* in the form of API usage
    * help answer questions like "what is actually used?" and inform decisions around deprecation cycles
* TODO: collect use-cases for having data
* is there a script that collects these stats?
* Could we host this type of script in the scientific python org?
* how/when to run such a script?
* what is sourcegraph doing? can this be reused?
* Separate use cases for this - one to showcase package usage, the other could be for developers when deprecating APIs to see which packages will be impacted.
* If a package has an API update and we have API usage stats, we can also have a sub-metric that helps see which packages are more reactive.
* Time spread of API usage (?)
    * Does it use both older and newer APIs? (i.e., maintaining compatibility with both/many)
* Matplotlib / Napari image watermarking
* Matplotlib did some work at the SciPy sprints to query academic papers that have matplotlib embedded images. This had a direct connection to the impact of the project in the scientific academic space.
    * Many papers don't properly attribute the packages they use in their analysis, especially in publications
* [name=Jim] distinction between dependencies declared in pyproject.toml and `import X`, `from X import` regex results from codebase
    * Maybe it would be too bulky, but could we embed metadata in images about the larger analysis stack a person is using?
        * Dangerous if we include everything by default, but an opt-in list from `sys.modules` + versions would be great (a sketch appears after these notes).
* could embed some stats in jupyter notebooks (similar to proposal about images) [name=Tim]
* Definitely! +1 on opt in from madicken
* create a website/dashboard that hosts reports
    * Hosted on devstats? Hosted on Scientific Python? At least a landing page on Scientific Python.
* A naive approach of counting how many search results / grep hits there are for a bit of code will lead to all sorts of wonderful/weird usage
* Major data cleaning issues
* [Inessa shared this github activity document](https://docs.google.com/document/d/1y7vgtLEBBFRZG_WwH22D1Whe3HDfjyoXtcV3GXEAKLg/edit)
    * Track usage of tools across the ecosystem (linters / code style: Ruff/Black, packaging tools, build backends, etc.)
* To identify trends
* To help new projects decide what to use
* To support content around packaging best practices
* Package use / activity / health in terms of traditional stats
* stars, contributor data, commit frequency, last commit date, pypi downloads, conda downloads, dependencies
* "Retraction watch" Maybe dangerous and misleading, but we could track uses of old incorrect functions in published work.
* [name=Jim] "maybe dangerous and misleading," since the venv/conda env in which the final plot is made may be different from the environment in which the numerical calculation takes place
* [name=Jarrod] Need to track / document use cases / purpose of tools (add to the dev-stats website about page or something)
* Consent broker in frontends like IPython, Jupyter, Spyder, etc.
* Ask specific questions of the user?
    * Collect data, show the data to the user, get permission to upload, or ...
    * This is going to get into politics around users' comfort with data being collected: how you allow the data to be collected, and how and when they consent.
* [some discussion around telemetry here](https://github.com/pyOpenSci/software-peer-review/issues/183)
* [name=Jim] A fundamental issue in tracking "API usage" is that it's much easier to identify free-floating function/class usage than it is to track methods/properties, due to dynamic typing. That would be easier if it's done in the frontend, since the frontend runs in the same environment and can inspect the objects to know which classes the methods/properties belong to. Telemetry has technical benefits. (See the static-analysis sketch after these notes.)
* [name=Ross] How useful is telemetry, ultimately? Worth thinking about how it impacts any specific project.
* [name=Jim] Existence proof: Uproot has an unfortunate feature (it parses a colon as a special character in filenames) that was the cause of [11 issues](https://github.com/jpivarski-talks/2023-05-09-chep23-analysis-of-physicists/blob/main/PLOTS/uproot-open-colon-issues.png) (a lot). We wanted to deprecate it, but as a result of an API usage scan, we found that [it is widespread](https://raw.githubusercontent.com/jpivarski-talks/2023-05-09-chep23-analysis-of-physicists/main/analysis/github-ast-uproot-filename-colon.svg). If it hadn't been, we would have started a deprecation cycle.
* [name=madicken] provide something a project can run in their CI that collects stats (like the ones in the devstats pages) and publishes them. This could be useful for self-monitoring community health and/or transparently showing communities what packages are like.
* [bitergia](https://www.anaconda.com/blog/anaconda-partners-with-numfocus-and-bitergia-to-bring-community-metrics-to-open-source-projects)
* [another blog - numfocus](https://blog.bitergia.com/2022/09/09/bitergia-becomes-the-official-metrics-partner-of-the-numfocus-foundation/#more-6921)
* link from chat - https://chaoss.community/
* [Grimoirelab](https://chaoss.community/software/)
* [neat dashboard that Sarah put in the chat](https://metrix.chaoss.io/chaoss)
* https://github.com/oss-aspen/8Knot <- code for the dashboard from sarah
* [name=Jarrod, name=Ross] find subclassing that shouldn't be done
* [name=Jim] that _would_ be easy to find (since class names are globals)
* [name=Jim] Jim's analysis: https://github.com/jpivarski-talks/2023-05-09-chep23-analysis-of-physicists (this repo builds the presentation PDF and also has complete instructions for reproducing the analysis)
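On the opt-in image-metadata idea above, here is a minimal sketch. It assumes Matplotlib's `savefig(..., metadata=...)` support for PNG text chunks; the `"analysis-stack"` key and the `stack_versions` helper are made up for illustration:
```python
# Hedged sketch: opt-in embedding of the analysis stack into a saved figure.
import json
import sys
from importlib.metadata import PackageNotFoundError, version

import matplotlib.pyplot as plt

def stack_versions(opt_in):
    """Report versions only for an explicit, opt-in list of imported packages."""
    found = {}
    for name in opt_in:
        if name in sys.modules:          # only what the user actually imported
            try:
                found[name] = version(name)
            except PackageNotFoundError:
                pass
    return found

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.savefig(
    "figure.png",
    metadata={"analysis-stack": json.dumps(stack_versions(["numpy", "matplotlib"]))},
)
```
Because it is opt-in and limited to an explicit list, this avoids the "include everything by default" concern raised above.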
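On the static-analysis point (and the subclassing note), here is a toy `ast` pass, hypothetical and unrelated to any of the linked tools, showing why module-level names and class bases are easy to attribute while bare method calls are not:
```python
# Hedged sketch: names reached through the module (numpy.transpose, class bases)
# are visible statically; bare method calls like a.transpose() are not, because
# the receiver's type is unknown without running the code.
import ast

SOURCE = """
import numpy

class MyArray(numpy.ndarray):      # subclassing is visible: the base is a global name
    pass

b = numpy.transpose(a)             # module-level call: attributable to NumPy
c = a.transpose()                  # method call: receiver type unknown statically
"""

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.Attribute) and isinstance(node.value, ast.Name):
        if node.value.id == "numpy":
            print(f"numpy.{node.attr} used on line {node.lineno}")
    elif isinstance(node, ast.ClassDef):
        bases = [ast.unparse(base) for base in node.bases]
        print(f"class {node.name} subclasses {bases}")
```
The `a.transpose()` call is invisible to this pass because the receiver's type is unknown without executing the code, which is exactly the gap that frontend-side telemetry could fill.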
### Summit tasks
* [name=Tim] collect actual use-cases for API usage data
* Develop a list of all of the things that we could use and how they would be used
* then how you'd collect that data
* review existing tools we've been developing and figure out how to start integrating / architecting a bigger tool
* [dev-stats tool](https://github.com/scientific-python/devstats.scientific-python.org), [Leah's tool (very much a hack job but works! we also are parsing issues to grab review metadata but then use the package github repo to graph package stats)](https://github.com/pyOpenSci/update-web-metadata), ~~Jim's tool~~ Jim's informal analysis, the quansight python-api code, etc.
* pull code out of website and integrate other tools (we could create Python package that is shared on PyPI)
* [name=leah] TODO: maybe what each of us could do PRIOR to the summit is create a short summary of each of our tools, what is currently being collected, and how?
* [name=Matthias] I would love some help to add IPython and a few other packages to devstats.
* [name=Madicken] I would like to help create a tool that packages can use to create a devstats page for themselves (Matthias: +1)
* [name=Matthias] I'm happy to work on the consent broker as I already started.
* [name=rossbar, name=seberg] Whiteboard concrete metrics/deliverables that packages/developers/users are interested in
* [name=Madicken] I can join this too!
Shared with me:
[name=leah] https://app.ospn.org/project-details/16072 <- this is a nice dashboard; is it the Quansight dashboard?