Analytical benchmark functions are like well-established datasets in machine learning: by running your algorithm on the same functions as everyone else, you can compare results accurately (see the sketch below). They are essential for reproducibility, since typos in the formulas will obviously give very different results, but they are not an 'external' utility like the examples given under Package Categories in the guidebook.
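As a rough illustration of why shared analytical benchmarks make results comparable, here is a minimal sketch. It assumes SciPy and the standard 2-D Rosenbrock test function; none of the names below come from the package under review.

```python
# Illustrative sketch: comparing two optimizers on the same analytical benchmark.
# The Rosenbrock function and the SciPy optimizers are assumptions for this example.
import numpy as np
from scipy.optimize import minimize


def rosenbrock(x):
    """Classic analytical benchmark; global minimum of 0 at (1, 1)."""
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2


x0 = np.array([-1.2, 1.0])  # common starting point for this benchmark

# Because both methods minimize the exact same function from the same start,
# their final objective values and evaluation counts are directly comparable.
for method in ("Nelder-Mead", "BFGS"):
    result = minimize(rosenbrock, x0, method=method)
    print(f"{method:12s} f(x*) = {result.fun:.3e} after {result.nfev} evaluations")
```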
Several of our collaborators are still not sure what this package does.
This package may be out of scope for us right now.
We should consider adding additional info to the submission review, maybe about what a README contains, etc.
We should point the author to the current guidelines that contain recommendations for structuring a README: https://www.pyopensci.org/dev_guide/packaging/packaging_guide
1.a. Notes from a short discussion with the submitting author (Sander), who joined a bit later:
Do we want to keep track somehow of people who are willing to review?
Google Sheet / Form –
Give people the option to opt in – or make it anonymous ??
What is the scope of pyOpenSci / rOpenSci?
Here's some info on research compendia: https://research-compendium.science/
Here's some rOpenSci content relevant to reproducibility and research compendia: https://ropensci.github.io/reproducibility-guide/sections/introduction/
Link from Aaryn: https://github.com/benmarwick/rrtools
Why does documentation matter?
Have you developed a Python package?
Possibly more basic questions to start a group discussion at AGU session
Post AGU – IRB –
We could potentially team up with rOpenSci on the education side of things.
Martin: there are many options associated with Python packages.
How to best manage an old-school (prepared slides only) room?
Static link to a Dashboard page for attendees to explore, possibly with a walkthrough, though attendees could be offline as well given the volume of people accessing the network.
Action items – what could be done to make software “value” more visible? The academic challenge of getting “credit”.
Even for open source software, abstracts lose attention because people often get funding for research, not software.
Publication – when you publish software you don’t get the same credit as you do for papers.
What are the challenges associated with creating reproducible code / tools?
Have
What is missing from your toolkit to make workflows more reproducible?
What sorts of support do you need to…
List the components of a good open source software package
Which do you understand the least?
Discovering/ finding tools…