---
title: volume rendering intro for geodata in yt
tags: yt
---
# volume rendering intro for geodata in yt
this presentation: https://bit.ly/ytgeostuff
yt's overview on volume rendering: [link](https://yt-project.org/doc/visualizing/volume_rendering.html)
Chris's 2020 AGU poster: [citeable archive](https://www.essoar.org/doi/abs/10.1002/essoar.10506118.2), [direct repo link](https://github.com/earthcube2020/ec20_havlin_etal).
Other code/repos:
* yt: https://yt-project.org/
* ytgeotools: https://github.com/chrishavlin/ytgeotools
* yt_idv: https://github.com/yt-project/yt_idv
## where to get seismic data
tomography models from the IRIS EMC: http://ds.iris.edu/ds/products/emc-earthmodels/
## general workflow for seismic data
First step is to read the 3d model data from its native storage (e.g., a netcdf file) into an in-memory yt dataset with `yt.load_uniform_grid()` [docs link](https://yt-project.org/doc/reference/api/yt.loaders.html#yt.loaders.load_uniform_grid)
How to do that depends on your goal: making maps or 3D renderings?
### map making
load as an `internal_geographic` dataset:
```python=
ds = yt.load_uniform_grid(
    data,
    sizes,
    1.0,  # length_unit
    geometry=("internal_geographic", dims),
    bbox=bbox,
)
```
where:
* `data`: dictionary of field data arrays read from your netcdf
* `sizes`: the shape of the field data arrays
* `bbox`: the bounding box
* `dims`: the ordering of dimensions in the data arrays
See [this notebook](https://nbviewer.jupyter.org/github/chrishavlin/AGU2020/blob/main/notebooks/seismic.ipynb#Fixed-Depth-Maps) for a full example.
See [this notebook](https://nbviewer.jupyter.org/github/chrishavlin/yt_scratch/blob/master/notebooks/yt_obspy_raypath_sampling.ipynb) for an example of some analysis once data is loaded: sampling data along a seismic ray path.
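Once loaded, a fixed-depth map is just a slice along the depth axis. A minimal sketch, assuming the `data` dictionary contained a field named `dvs` (the slice position and plot tweaks will vary):
```python=
import yt

# slice along the depth axis (defaults to the domain center)
# to get a lat/lon map of the "dvs" field
slc = yt.SlicePlot(ds, "depth", ("stream", "dvs"))
slc.save("dvs_map.png")
```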
Caveats:
* fixed-depth maps only (no cross-sections)
* still a little buggy (global models seem to work better than non-global models)
* no volume rendering
## geo-volume rendering with yt (background)
**data must be in cartesian coordinates**
Step 1: interpolate the seismic model to a (uniform) cartesian grid.
My approach:
1. convert lat, lon, depth arrays to geocentric cartesian coordinates, `x,y,z` (see the sketch after this list)
2. find bounding box in cartesian coordinates:
```python=
cart_bbox = [
[min_x, max_x],
[min_y, max_y],
[min_z, max_z],
]
```
3. create a uniform grid to cover bounding box:
```python=
x_i = np.linspace(min_x, max_x, n_x)
y_i = np.linspace(min_y, max_y, n_y)
z_i = np.linspace(min_z, max_z, n_z)
```
(`i` for interpolation)
4. find the data values at all those points
```python=
data_on_cart_grid = np.empty((n_x, n_y, n_z))
# loop over every point of the new cartesian grid
for xi, xv in enumerate(x_i):
    for yi, yv in enumerate(y_i):
        for zi, zv in enumerate(z_i):
            data_on_cart_grid[xi, yi, zi] = sample_the_data(xv, yv, zv)
```
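For step 1, a minimal sketch of the coordinate conversion, assuming a spherical Earth with mean radius 6371 km (`lat`, `lon` in degrees, `depth` in km; the helper name `geo_to_cartesian` is made up here):
```python=
import numpy as np

R_EARTH = 6371.0  # mean Earth radius in km (assumed spherical)

def geo_to_cartesian(lat, lon, depth):
    # geocentric spherical -> cartesian (ignores ellipticity)
    r = R_EARTH - depth
    theta = np.deg2rad(90.0 - lat)  # colatitude
    phi = np.deg2rad(lon)
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z
```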
**How to `sample_the_data`?**
Build a KDTree of the actual model points, e.g. with scipy:
```python=
from scipy.spatial import cKDTree
tree = cKDTree(np.column_stack([x, y, z]))
```
For every point on the new cartesian grid, find the `N` closest model points and use inverse-distance weighting (IDW) to average them. You can set a max distance to exclude points that are too far away.
Could experiment with different methods here! Just take the "nearest"? A different interpolation method entirely?
Sometimes a little buggy... depending on the resolution of the covering grid you can get weird artifacts or empty interpolated data.
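A minimal sketch of the IDW sampling, assuming the scipy tree above; `sample_idw` is a made-up helper name, `model_vals` is the flattened model data in the same order as the tree's points, and `query_xyz` is an `(M, 3)` array of cartesian grid points:
```python=
import numpy as np

def sample_idw(tree, model_vals, query_xyz, N=8, max_dist=100.0):
    # distances/indices of the N nearest model points for each query point
    dists, idx = tree.query(query_xyz, k=N, distance_upper_bound=max_dist)
    good = np.isfinite(dists)  # beyond max_dist, scipy returns inf
    idx_safe = np.where(good, idx, 0)  # mask out-of-range indices
    weights = np.where(good, 1.0 / np.maximum(dists, 1e-12), 0.0)
    vals = np.where(good, model_vals[idx_safe], 0.0)
    wsum = weights.sum(axis=-1)
    # returns NaN where no model points fall within max_dist
    return (weights * vals).sum(axis=-1) / np.where(wsum > 0, wsum, np.nan)
```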
**Once data is sampled**
Use `yt.load_uniform_grid`:
```python
data = {"dvs": dvs_on_cartesian}
bbox = cart_bbox
sizes = dvs_on_cartesian.shape
ds = yt.load_uniform_grid(data, sizes, 1.0, bbox=bbox)
```
can now use volume rendering!
Seismic examples:
* [AGU2020](https://nbviewer.jupyter.org/github/chrishavlin/AGU2020/blob/main/notebooks/seismic.ipynb#Volume-Rendering)
* EarthCube2020: [citeable archive](https://www.essoar.org/doi/abs/10.1002/essoar.10506118.2) or the [repo](https://github.com/earthcube2020/ec20_havlin_etal).
both those rely on https://github.com/chrishavlin/yt_velmodel_vis
The EarthCube notebook ([direct link](https://nbviewer.jupyter.org/github/earthcube2020/ec20_havlin_etal/blob/master/notebook/ec20_havlin_etal.ipynb)) covers the interpolation above and goes into transfer functions in more detail.
### geo-volume rendering in practice:
New package!
https://github.com/chrishavlin/ytgeotools
streamlines all of the above, with some additional analysis functionality:
```python
import yt
from ytgeotools.seismology.datasets import XarrayGeoSpherical

filename = "IRIS/GYPSUM_percent.nc"

# to get a yt dataset for maps:
ds_yt = XarrayGeoSpherical(filename).load_uniform_grid()

# to get a yt dataset on interpolated cartesian grid:
ds = XarrayGeoSpherical(filename)
ds_yt_i = ds.interpolate_to_uniform_cartesian(
    ["dvs"],
    N=50,
    max_dist=50,
    return_yt=True,
)
```
**should** be usable with any IRIS EMC netcdf file to get a yt dataset.
Installation not yet streamlined... need to install two packages from source in the following order:
1. https://github.com/yt-project/yt_idv
2. https://github.com/chrishavlin/ytgeotools
Need `cartopy`... easiest to use a conda environment.
Download/clone 1 and 2, make sure your `conda` environment is active, then `cd` into `yt_idv` and install from source with
```
$ python -m pip install .
```
Repeat for `ytgeotools`.
Test installation: from a python shell, try:
```
>>> import yt_idv
>>> import ytgeotools
>>> import yt
```
There may be some more missing dependencies to install...
#### manual volume rendering
The AGU and EarthCube notebooks use the standard yt volume rendering interfaces. Excerpted from those notebooks, a typical workflow to create an image looks like:
```python=
import numpy as np
import yt
from ytgeotools.seismology.datasets import XarrayGeoSpherical

filename = "IRIS/GYPSUM_percent.nc"

# load and interpolate to a cartesian dataset
ds_raw = XarrayGeoSpherical(filename)
ds = ds_raw.interpolate_to_uniform_cartesian(
    ["dvs"],
    N=50,
    max_dist=50,
    return_yt=True,
)

# create the scene (loads full dataset into the scene for rendering)
sc = yt.create_scene(ds, "dvs")

# adjust camera position and orientation (so "up" is surface);
# bbox is the cartesian bounding box from the interpolation step
x_c = np.mean(bbox[0])
y_c = np.mean(bbox[1])
z_c = np.mean(bbox[2])
center_vec = np.array([x_c, y_c, z_c])
center_vec = center_vec / np.linalg.norm(center_vec)
pos = sc.camera.position
sc.camera.set_position(pos, north_vector=center_vec)

# adjust camera zoom
zoom_factor = 0.7  # < 1 zooms in
init_width = sc.camera.width
sc.camera.width = init_width * zoom_factor

# set the transfer function:
# initialize the tf object by setting the data bounds to consider
dvs_min = -8
dvs_max = 8
tf = yt.ColorTransferFunction((dvs_min, dvs_max))

# set gaussians to add
TF_gaussians = [
    {"center": -0.8, "width": 0.1, "RGBa": (1.0, 0.0, 0.0, 0.5)},
    {"center": 0.5, "width": 0.2, "RGBa": (0.1, 0.1, 1.0, 0.8)},
]
for gau in TF_gaussians:
    tf.add_gaussian(gau["center"], gau["width"], gau["RGBa"])

source = sc.sources["source_00"]
source.set_transfer_function(tf)

# adjust resolution of rendering
res = sc.camera.get_resolution()
res_factor = 2
new_res = (int(res[0] * res_factor), int(res[1] * res_factor))
sc.camera.set_resolution(new_res)

sc.show()  # if in a notebook
sc.save()  # if in a script
```
Expect a lot of trial and error for transfer functions and camera positioning. To make videos, adjust the camera position in a loop, save off the images, and then stitch them together with a separate tool.
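A minimal sketch of such a loop, assuming the scene `sc` and dataset `ds` from above (the frame naming is made up; `Camera.yaw` rotates the camera about `rot_center`):
```python=
import numpy as np

# orbit the domain center, saving one image per step;
# stitch the frames together afterwards (e.g., with ffmpeg)
n_frames = 30
angle = 2.0 * np.pi / n_frames
for i in range(n_frames):
    sc.camera.yaw(angle, rot_center=ds.domain_center)
    sc.save(f"frame_{i:04d}.png", sigma_clip=4)
```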
### interactive volume rendering
yt_idv!
```python=
import numpy as np
import yt_idv
from ytgeotools.seismology.datasets import XarrayGeoSpherical

def refill(vals):
    # zero out NaNs and positive values (keep only negative anomalies,
    # which np.abs below flips to positive)
    vals[np.isnan(vals)] = 0.0
    vals[vals > 0] = 0.0
    return vals

filename = "IRIS/NWUS11-S_percent.nc"
ds = XarrayGeoSpherical(filename)
ds_yt = ds.interpolate_to_uniform_cartesian(
    ["dvs"],
    N=50,
    max_dist=50,
    return_yt=True,
    rescale_coords=True,  # IMPORTANT!
    apply_functions=[refill, np.abs],
)
rc = yt_idv.render_context(height=800, width=800, gui=True)
sg = rc.add_scene(ds_yt, "dvs", no_ghost=True)
rc.run()
```
great for getting a sense of what's in the data!
caveats:
* max intensity and projection transfer functions only for now...
* need to install from source for now (maybe not later today?)
### annotations:
In manual plotting,
```python=
from yt.visualization.volume_rendering.api import LineSource, PointSource
```
Coordinates need to be in cartesian as well... The EarthCube/AGU notebooks have some helper functions to streamline this. Not yet in ytgeotools or yt_idv.
In manual plotting, adding Point and Line sources can affect the overall brightness of the scene; it often takes a lot of trial and error to adjust the opacity of the sources so that they do not wash out the volume rendering.
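A minimal sketch of adding a line annotation, assuming the scene `sc` from the manual rendering above (the endpoint values here are made up; they must be in the same cartesian coordinates as the interpolated dataset):
```python=
import numpy as np
from yt.visualization.volume_rendering.api import LineSource

# (n_lines, 2, 3) array of segment endpoints in the dataset's
# cartesian coordinates (hypothetical values)
positions = np.array([[[0.2, 0.2, 0.2], [0.8, 0.8, 0.8]]])
# one RGBA color per segment; keep alpha low so the line
# does not wash out the volume rendering
colors = np.array([[1.0, 1.0, 1.0, 0.05]])
sc.add_source(LineSource(positions, colors))
sc.save()
```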