Date: Monday 9:00 - 10:00 AM
Topics: Package Metrics and DevStats
Issues: #12, #17
Jim Pivarski: From the Feb 27 meeting: "How do we collect metrics and package stats?"
@lwasser brought it up
@jpivarski has ideas to contribute (I'll follow up)
@InessaPawson has been researching this topic and has implemented some solutions in NumPy
@lagru asks if there has been interest/recent work on (regular) user surveys (perhaps Scientific Python-wide?)
@Carreau said that there was a discussion/project 8 years ago to have opt-in user feedback for SciPy: https://github.com/njsmith/sempervirens
Jim: Wow! This is exactly what I'm working on for a physics conference, and I was planning on following up on these techniques at the Scientific Python Summit. I just didn't know that Christopher Ostrouchov has already done it, talked about it at SciPy 2019, and provided a tool.
Christopher has already addressed this problem:
import numpy

def foobar(array):
    return array.transpose()

a = numpy.array(...)
a.transpose()
foobar(a)
and I'll look at his code to see how he did it or use that code directly.
On the tool's GitHub page, he notes:
NOTE: this dataset is currently extremely biased as we are parsing the top 4,000 repositories for few scientific libraries in data/whitelist. This is not a representative sample of the python ecosystem nor the entire scientific python ecosystem. Further work is needed to make this dataset less biased.
In my case, I've been asking these questions about a specific sub-community, nuclear and high-energy physicists, and I have a trick for that (PDF page 29 of this talk): one major experiment, CMS, requires its users to fork a particular GitHub repo. From that, I can get a set of GitHub users who are all CMS physicists, and (where I wave my hands) I assume that the CMS experiment is representative of the whole field. This is 2847 GitHub users (CMS members over a 10 year timespan) and 22961 non-fork repositories.
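The fork-enumeration step can be sketched against the public GitHub REST endpoint `GET /repos/{owner}/{repo}/forks`. The specific CMS repo isn't named here, so `OWNER/REPO` in the test is a placeholder, and the page-fetching function is injected so the paging logic runs without network access — this is a sketch, not Jim's actual pipeline.

```python
def fork_owners(repo, fetch, per_page=100):
    """Collect the GitHub logins of everyone who forked `repo`.

    `fetch(url)` must return the decoded JSON list for one page of
    GET /repos/{repo}/forks; it is injected so tests need no network
    (in practice: fetch = lambda url: requests.get(url).json()).
    """
    owners = set()
    page = 1
    while True:
        url = (f"https://api.github.com/repos/{repo}/forks"
               f"?per_page={per_page}&page={page}")
        batch = fetch(url)
        if not batch:  # an empty page means there are no more forks
            break
        owners.update(fork["owner"]["login"] for fork in batch)
        page += 1
    return owners
```

Each fork owner is then a candidate member of the community in question, modulo the representativeness caveat above.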
I also have another technique I've been trying out: using the GitHub archive in BigQuery to find a set of GitHub users who have ever commented on the ROOT project, which occupies a central place in our ecosystem. Then I would look up their non-fork repos in the same way.
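A hedged sketch of what that lookup might look like, assuming the public GitHub Archive monthly tables (`githubarchive.month.YYYYMM`) and the standard issue/PR comment event types; the exact query may differ. Building the SQL as a string keeps it testable without BigQuery credentials — running it would use something like `google.cloud.bigquery.Client().query(sql)`.

```python
def root_commenters_sql(month):
    """Build a BigQuery SQL query over the public GitHub Archive tables
    listing everyone who commented on an issue or PR in
    root-project/root during the given month ('YYYYMM')."""
    return f"""
        SELECT DISTINCT actor.login
        FROM `githubarchive.month.{month}`
        WHERE type IN ('IssueCommentEvent', 'PullRequestReviewCommentEvent')
          AND repo.name = 'root-project/root'
    """
```

Running this per month and unioning the logins yields the set of ROOT-adjacent users whose non-fork repos could then be crawled as above.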
But Christopher has solved a lot of the other issues, and I'm going to use as much of his work, with credit, as I can. Thanks for the pointer!
Jim: For my part, I usually do plots in a time domain. One of the specific questions I'll be asking about ROOT/physics usage is how often people use TLorentzVector (deprecated in 2005, but still widely used) versus PxPyPzEVector (and its other replacements). That will definitely be a time-based plot. I'd want to see if there's any trend away from the legacy class. If there isn't, I think it would be a lesson that deprecation without consequences (never actually removing it) doesn't change user behavior.
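One naive way to get the counts behind such a time-domain plot is substring matching per year — a sketch only (it over-counts mentions in comments and strings, and `snippets` is hypothetical input data, e.g. dated code scraped from repos):

```python
from collections import Counter

def yearly_usage(snippets):
    """Count, per (year, class name), how many code snippets mention
    the legacy TLorentzVector versus its replacement PxPyPzEVector.
    `snippets` is an iterable of (year, source_text) pairs."""
    counts = Counter()
    for year, text in snippets:
        for name in ("TLorentzVector", "PxPyPzEVector"):
            if name in text:
                counts[(year, name)] += 1
    return counts
```

Plotting `counts` per year would show whether the deprecated class is actually declining.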
Leah: i am super interested in this. i started a small module (that needs a lot of work) to parse our packages and get some basic stats via the github api. we of course have a very specific use case with reviews and such. but if there were some tooling around getting other types of stats and storing them, i might pull that into our workflow rather than continue developing that component myself! i was thinking it would be super cool to have a page of stats for each package in our ecosystem. think snyk stats but w a bit more depth potentially?
here is a quick snapshot of what i'm bringing down (statically) … no time series right now (which would be super cool).
Leah: @stefanv i'd LOVE to combine efforts. i can show you what we have. some of what i'm parsing are github issues to get package names, reviews, etc. but other stuff i'm parsing to get stars and other metrics that i bet you are parsing for as well. What can i create to make this potential collab more efficient? we got some people working on this for us during our last sprints as well! but at the end of the day it's really just me working on this by myself to support tracking reviews, packages etc…
Stefan: Great! There's a bit of machinery around GraphQL paging that is service specific (crazy, but so it is); so perhaps we can aggregate that into a "package" (submodule), and then just feed the package with the queries we want, built from GitHub GraphQL explorer. Later, we can add bells & whistles like caching, exporting in different formats, etc.
Leah: We output to YAML right now but have no long-term storage; i'd love to look at trends over time.
i've just been making REST API calls. and have hit rate limits but that may have been fixed in our last sprint. i'm happy to wrap around / use devstats as it makes sense and contribute effort there.
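On the rate-limit problem: GitHub reports its remaining budget in the `X-RateLimit-Remaining` and `X-RateLimit-Reset` response headers, so a crawler can back off instead of failing. A minimal helper sketch (the `sleep` and `now` parameters are injectable only to make it testable):

```python
import time

def wait_if_rate_limited(headers, sleep=time.sleep, now=time.time):
    """If GitHub's X-RateLimit-Remaining header says we're out of
    calls, sleep until the X-RateLimit-Reset epoch timestamp (plus a
    one-second margin). Returns the seconds slept, 0 if none."""
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    if remaining > 0:
        return 0
    delay = max(0, int(headers.get("X-RateLimit-Reset", 0)) - now()) + 1
    sleep(delay)
    return delay
```

Calling this after each REST response keeps a long crawl under the hourly quota without manual restarts.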
Stefan: Let's maybe not record the informal meetings, but we will record the formal ones.
Two topics:
- package usage statistics
  - `import X`, `from X import …`
  - regex results from codebase
  - create a website/dashboard that hosts reports
  - the naive approach of counting search results/grep hits for a bit of code will lead to all sorts of wonderful/weird usage
  - track usage of tools across the ecosystem (linters / code style: Ruff/Black, packaging tools, build back ends, etc.)
- package use / activity / health in terms of traditional stats
  - "Retraction watch": maybe dangerous and misleading, but we could track uses of old incorrect functions in published work.
Jarrod: Need to track / document use cases / purpose of tools (add to the devstats website about page or something)
Consent broker in frontends like IPython, Jupyter, Spyder, etc.
Jim: The fundamental issue in tracking "API usage" is that it's much easier to identify free-floating function/class usage than to track methods/properties, due to dynamic typing. That could be easier if it's done in the frontend, since the frontend runs in the same environment and does some type checking, so it knows the object types (and therefore which classes the methods/properties belong to). Telemetry has technical benefits.
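Jim's point can be illustrated with a small `ast` pass over the earlier foobar/transpose example: bare-name calls are trivially attributable, while attribute calls — even `numpy.array(...)`, syntactically — need extra information to attribute to a class. A sketch, not any project's actual tooling:

```python
import ast

def classify_calls(source):
    """Split the calls in `source` into free-floating names
    (foobar(a)) and attribute calls (a.transpose()). Without type
    information we cannot tell whether a.transpose() is a NumPy array
    method or something else entirely."""
    names, attributes = [], []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            if isinstance(node.func, ast.Name):
                names.append(node.func.id)
            elif isinstance(node.func, ast.Attribute):
                attributes.append(node.func.attr)
    return names, attributes
```

Note that `numpy.array(...)` lands in the attribute bucket too, even though its base name is a resolvable module — static analysis alone can't draw that line, which is the argument for doing this in the frontend.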
Ross: How useful is telemetry, ultimately? Worth thinking about how it impacts any specific project.
madicken: provide something a project can run in its CI that collects stats (like the ones on the devstats pages) and publishes them. This could be useful for self-monitoring community health and/or transparently showing communities what packages are like.
link from chat - https://chaoss.community/
neat dashboard that Sarah put in the chat
Jarrod, Ross: find subclassing that shouldn't be done
Jim: Jim's analysis: https://github.com/jpivarski-talks/2023-05-09-chep23-analysis-of-physicists (this repo builds the presentation PDF and also has complete instructions for reproducing the analysis)
Tim: collect actual use cases for API usage data
review the existing tools we've been developing and figure out how to start integrating / architecting a bigger tool
Matthias: I'm happy to work on the consent broker, as I've already started.
rossbar, seberg: Whiteboard concrete metrics/deliverables that packages/developers/users are interested in
Shared with me:
[leah] https://app.ospn.org/project-details/16072 <- this is a nice dashboard; is it the Quansight dashboard?