# GSoC'24(nx-parallel) Meetings
notes: https://hackmd.io/xlZEjShuR9uUNaXBzCsrCQ?both
## Date : 22nd August, 2024
**Attendees**: Aditi, Dan
### Topics
- Create an issue about very fast functions not showing any speedups in nx-parallel, with links to some closed PRs (https://github.com/networkx/nx-parallel/pull/74, https://github.com/networkx/nx-parallel/pull/44)
- Configs: edit `Config.md`
- a duplicate context manager gets created
- [this example](https://github.com/networkx/nx-parallel/pull/75#discussion_r1725574266) in `Config.md`
- Issue on joblib creating 2 extra functions
- Possible names for the decorator: `set_up_configure`, `_set_nx_config`, `apply_nx_config`, `allow_nx_config`, `_configure_context`, `configure_if_nx`, `configure_if_nx_active`, `setup_nx_config`
- `_configure_if_nx_active` seems good! (see the decorator sketch after this list)
- add `_set_nx_config` to the `nxp` main namespace (like networkx does with `_dispatchable`)
- https://stackoverflow.com/questions/2360724/what-exactly-does-import-import
- https://docs.python.org/3/reference/simple_stmts.html#the-import-statement
- summarize https://github.com/networkx/nx-parallel/issues/51 in a comment
- share the draft of the GSoC final report
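
A minimal sketch of what a `_configure_if_nx_active`-style decorator could look like, assuming nx-parallel is installed (so `nx.config.backends.parallel` exists) and that the config exposes joblib-compatible fields such as `n_jobs`; the `active` flag and the exact field set are assumptions here, not the actual implementation:

```python
from functools import wraps

import networkx as nx
from joblib import parallel_config


def _configure_if_nx_active(func):
    """Sketch: run the wrapped algorithm inside a joblib.parallel_config
    context built from the NetworkX backend config, when that config is in use."""

    @wraps(func)
    def wrapper(*args, **kwargs):
        cfg = nx.config.backends.parallel
        # `active` is a hypothetical flag meaning "use the NetworkX config
        # instead of whatever joblib context the caller has set up"
        if getattr(cfg, "active", False):
            with parallel_config(
                n_jobs=getattr(cfg, "n_jobs", None),
                verbose=getattr(cfg, "verbose", 0),
            ):
                return func(*args, **kwargs)
        return func(*args, **kwargs)

    return wrapper
```

An nx-parallel algorithm would then simply be defined with `@_configure_if_nx_active` on top of the existing function.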
## Date : 15th August, 2024
**Attendees**: Aditi, Dan
- Code from the meeting:
```python
import networkx as nx
from joblib import parallel_config

print(nx.config)

# set n_jobs through the NetworkX backend config
nx.config.backends.parallel.n_jobs = 5
# exploratory: we would like joblib's parallel_config to live here (see note below)
nx.config.backends.parallel = parallel_config

G = nx.path_graph(10)
nx.square_clustering(G, backend="parallel")

# calling parallel_config() without `with` also changes the joblib config
parallel_config(n_jobs=3)
nx.square_clustering(G, backend="parallel")

with parallel_config(n_jobs=7):
    nx.square_clustering(G, backend="parallel")

with parallel_config(n_jobs=None):
    nx.square_clustering(G, backend="parallel")

with parallel_config(n_jobs=2):
    nx.square_clustering(G, backend="parallel")

# with nx.config(backend_priority=[parallel, cugraph], configs=[cugraph_config, nx_parallel_config]):
#     nx.square_clustering(G, backend="parallel")
```
- We want `joblib.parallel_config()` to be present in `nx.config.backends.parallel` (see the hypothetical usage sketch below).
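
A short sketch of what that could mean for a user, assuming the nx-parallel config ends up accepting the same keywords as `joblib.parallel_config` (only `n_jobs` is shown in the code above; the other fields here are illustrative):

```python
# Hypothetical target behaviour, not how things currently work: every keyword
# accepted by joblib.parallel_config would be settable on the NetworkX config.
import networkx as nx

nx.config.backends.parallel.n_jobs = 4        # exists today (see code above)
nx.config.backends.parallel.verbose = 10      # illustrative joblib keyword
nx.config.backends.parallel.backend = "loky"  # illustrative joblib keyword

G = nx.path_graph(10)
nx.square_clustering(G, backend="parallel")   # would pick up those joblib settings
```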
## Date : 18th July, 2024
**Attendees**: Aditi, Dan
## Date : 4th July, 2024
**Attendees**: Aditi, Dan
### Topics
- Created conda feedstock for nx-parallel
- Setuptools PR - merged!
- Discussion on Config
## Date : 27th June, 2024
**Attendees**: Aditi, Dan
### Topics
- PyRustLin meetup:
- audience: mostly beginners in Python
- had interesting discussions after the 10-min talk
- created a [recipe PR](https://github.com/conda-forge/staged-recipes/pull/26768) for nx-parallel (for ref. [NetworkX's feedstock](https://github.com/conda-forge/networkx-feedstock))
- depends on https://github.com/networkx/nx-parallel/pull/69 (will probably be merged by the end of this week)
- [Ask Mridul] About maintainers
- Python 3.13's plans for the GIL (optional free-threaded build) and their impact on nx-parallel: https://docs.python.org/3.13/whatsnew/3.13.html
- How much inspiration to take from https://github.com/networkx/nx-parallel/pull/7, and whether we can have discussions with the author of that PR.
- [Dan] scikit-learn references for nx-parallel config
- https://github.com/scikit-learn/scikit-learn/issues/29302
- https://github.com/scikit-learn/scikit-learn/issues/20717
- Why are they deprecating these?: https://scikit-learn.org/stable/api/deprecated.html
- Config PR in NetworkX - [Erik's reply]( https://github.com/networkx/networkx/pull/7485#issuecomment-2195042192)
- It was supposed to be
```python
with nx.config(backend=backend_name, backend_configs={}):
    # nx code
```
in [this](https://github.com/networkx/networkx/pull/7485#pullrequestreview-2144913888) comment.
## Date : 13th June, 2024
**Attendees**: Aditi, Dan
### Topics
- discussing [PR 7398: colliders and v_structures](https://github.com/networkx/networkx/pull/7398)
- `stacklevel` behaves differently in terminal Python and IPython (see the small example after this list)
- https://github.com/networkx/nx-parallel/pull/69
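
A small illustration of the `stacklevel` point (generic example, unrelated to the PR's code): with `stacklevel=2` the warning is attributed to the caller's frame, and IPython's interactive shell inserts extra frames, so the reported location can differ from a plain `python` run.

```python
import warnings


def old_helper():
    # stacklevel=2 points the warning at the caller of old_helper();
    # under IPython, wrapper frames from the interactive shell can shift
    # which file/line the warning is reported against
    warnings.warn("old_helper() is deprecated", DeprecationWarning, stacklevel=2)


old_helper()
```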
#### From developer summit:
- [spatch](https://github.com/scientific-python/spatch) - like nx-j4f
- scikit-image backend (inspired by NetworkX's backend dispatching): [cuCIM](https://github.com/rapidsai/cucim)
- pandas extension (by Erik): `pd.DataFrame.nx`
#### From last week:
- [TODO-Aditi] How does sklearn deal with logging and its config context manager? ref. [sklearn context_manager](https://github.com/scikit-learn/scikit-learn/blob/8d0b243cb53ff609d32ecd7aafc5c098381eac86/sklearn/_config.py#L212); (maybe) just use joblib's default `verbose` for logging (see the config-context sketch after this list)
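
For reference, a stripped-down sketch of the pattern sklearn's `config_context` follows (not sklearn's actual code): a dict of defaults kept per thread, plus a context manager that overrides and then restores values.

```python
import threading
from contextlib import contextmanager

_default_config = {"n_jobs": None, "verbose": 0}
_threadlocal = threading.local()


def _get_config():
    # each thread works on its own copy of the configuration
    if not hasattr(_threadlocal, "config"):
        _threadlocal.config = _default_config.copy()
    return _threadlocal.config


@contextmanager
def config_context(**new_config):
    old_config = _get_config().copy()
    _get_config().update(new_config)
    try:
        yield
    finally:
        _get_config().clear()
        _get_config().update(old_config)


with config_context(n_jobs=4):
    pass  # code here sees n_jobs=4; previous values are restored on exit
```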
## Date : 6th June, 2024
**Attendees**: Aditi, Dan
### Topics
- discussing [PR 7398](https://github.com/networkx/networkx/pull/7398)
#### From developer summit:
- [spatch](https://github.com/scientific-python/spatch) - like nx-j4f
- scikit-image backend (inspired by NetworkX's backend dispatching): [cuCIM](https://github.com/rapidsai/cucim)
- pandas extension (by Erik): `pd.DataFrame.nx`
#### From last week:
- [TODO-Aditi] How does sklearn deal with logging and its config context manager? ref. [sklearn context_manager](https://github.com/scikit-learn/scikit-learn/blob/8d0b243cb53ff609d32ecd7aafc5c098381eac86/sklearn/_config.py#L212); (maybe) just use joblib's default `verbose` for logging
- [TODO-Dan] reviewing [PR#63](https://github.com/networkx/nx-parallel/pull/63) - Merged :tada:
## Date : 23rd May, 2024
**Attendees**: Aditi, Dan
### Topics
- discussing proposal updates
- [TODO-Mridul] VM access for heatmaps
- different timing functions
- `time = (initial non-parallel part of the algo) + avg or max(time taken by all parallel processes) + (final non-parallel part)` -- is this a good way to measure, and can we actually do this?
- `timeit.default_timer` seems like the best option so far (it is used by ASV benchmarks; see the timing sketch after this list)
- reasons for not using `time.time` (ref. [blog](https://superfastpython.com/time-time-vs-timeit/#Dont_Use_timeit_for_Benchmarking_More_Complicated_Code))
- increasing the default number of times ASV runs an algorithm for benchmarking (i.e. changing the default `repeat` parameter, which is 2)
- should the `Client` for the dask and ray backends be created on the NetworkX side? (ref. [issue](https://github.com/joblib/joblib/issues/1563); see the dask sketch after this list)
- Mridul made a comment on this in one of the ArangoDB presentations
- how to deal with registered backends in joblib?
- we should probably keep joblib's `verbose` logging separate from NetworkX's logging. Two types of logging:
- logging each parallel process, `batch_size`, etc. (what `verbose` outputs)
- logging which NetworkX backend is being used and which backend joblib is using. Does joblib use logging to show which backend is being used?
- [TODO-Aditi] How sklearn deals with logging and config context manager?
- [TODO-Dan] reviewing [PR#63](https://github.com/networkx/nx-parallel/pull/63)
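
A minimal example of timing an nx-parallel call with `timeit.default_timer` (a sketch of the approach discussed above, assuming nx-parallel is installed; not the actual ASV benchmark code):

```python
from timeit import default_timer

import networkx as nx

G = nx.fast_gnp_random_graph(300, 0.5, seed=42)

# repeat a few times and report the best run, similar in spirit to ASV's `repeat`
times = []
for _ in range(3):
    start = default_timer()
    nx.square_clustering(G, backend="parallel")
    times.append(default_timer() - start)

print(f"best of 3: {min(times):.3f}s")
```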
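
And a sketch of what keeping the dask `Client` on the user's side could look like, via joblib's dask backend (assumes `dask.distributed` and nx-parallel are installed; this is one possible pattern, not a settled design):

```python
import networkx as nx
from dask.distributed import Client
from joblib import parallel_config

client = Client()  # created by the user, not inside networkx / nx-parallel

G = nx.path_graph(100)
with parallel_config(backend="dask"):
    # nx-parallel's joblib.Parallel calls would then be scheduled on the cluster
    nx.square_clustering(G, backend="parallel")
```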