This presentation: https://bit.ly/ytgeostuff
yt's overview on volume rendering: link
Chris's 2020 AGU poster: citeable archive, direct repo link.
Other code/repos:
yt: https://yt-project.org/
ytgeotools: https://github.com/chrishavlin/ytgeotools
yt_idv: https://github.com/yt-project/yt_idv
IRIS EMC tomography models: http://ds.iris.edu/ds/products/emc-earthmodels/
The first step is to read 3D model data from native storage (e.g., a netCDF file) into a yt in-memory dataset with yt.load_uniform_grid()
docs link
How to do that depends on whether you are making maps or 3D renderings.
For maps, load as an internal_geographic dataset:
ds = yt.load_uniform_grid(
    data,
    sizes,
    1.0,
    geometry=("internal_geographic", dims),
    bbox=bbox,
)
where:
data: dictionary of field data arrays read from your netcdf
sizes: the shape of the field data
bbox: the bounding box
dims: the ordering of dimensions in the data arrays
See this notebook for a full example.
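Putting those pieces together, a minimal sketch (the file name and the variable/dimension names dvs, depth, latitude, longitude are illustrative; check your netcdf's actual names):

import numpy as np
import xarray as xr
import yt

# read the model variable and its coordinate arrays from the netcdf
with xr.open_dataset("IRIS/GYPSUM_percent.nc") as nc:
    dvs = nc["dvs"].values
    depth = nc["depth"].values
    lat = nc["latitude"].values
    lon = nc["longitude"].values

data = {"dvs": dvs}
sizes = dvs.shape
dims = ("depth", "latitude", "longitude")  # must match the array's axis order
bbox = np.array([
    [depth.min(), depth.max()],
    [lat.min(), lat.max()],
    [lon.min(), lon.max()],
])
ds = yt.load_uniform_grid(
    data,
    sizes,
    1.0,
    geometry=("internal_geographic", dims),
    bbox=bbox,
)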
See this notebook for an example of some analysis once data is loaded: sampling data along a seismic ray path.
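As a flavor of that kind of analysis, here is a hedged sketch using yt's generic ray data object on the dataset loaded above. The endpoints here are just the corners of the domain in the dataset's native (depth, latitude, longitude) coordinates, not a computed seismic ray path:

import numpy as np

# sample "dvs" along a straight line through the dataset
ray = ds.ray(ds.domain_left_edge, ds.domain_right_edge)

# values along the ray, ordered by the parametric distance "t"
order = np.argsort(ray["t"])
dvs_along_ray = ray["dvs"][order]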
Caveats:
for volume rendering, the data must be in cartesian coordinates.
Step 1: interpolate the seismic model to a (uniform) cartesian grid.
My approach: first convert the model's (depth, latitude, longitude) nodes to geocentric cartesian x, y, z (see the sketch after this block), then build a regular grid covering the model:

cart_bbox = [
    [min_x, max_x],
    [min_y, max_y],
    [min_z, max_z],
]

# grid points of the new cartesian grid ("i" for interpolation)
x_i = np.linspace(min_x, max_x, n_x)
y_i = np.linspace(min_y, max_y, n_y)
z_i = np.linspace(min_z, max_z, n_z)

# sample the model at every point of the new grid
data_on_cart_grid = np.empty((n_x, n_y, n_z))
for ix, xv in enumerate(x_i):
    for iy, yv in enumerate(y_i):
        for iz, zv in enumerate(z_i):
            data_on_cart_grid[ix, iy, iz] = sample_the_data(xv, yv, zv)
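For the coordinate conversion itself, a minimal sketch assuming depths in km, a spherical Earth (the usual 6371 km approximation), and depth_km, lat_deg, lon_deg arrays covering all model nodes (e.g., from np.meshgrid of the netcdf coordinates):

import numpy as np

R_EARTH = 6371.0  # km
r = R_EARTH - depth_km  # radius of each model node
lat_r = np.deg2rad(lat_deg)
lon_r = np.deg2rad(lon_deg)

x = r * np.cos(lat_r) * np.cos(lon_r)
y = r * np.cos(lat_r) * np.sin(lon_r)
z = r * np.sin(lat_r)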
How to sample_the_data?
Build a KD-tree of the actual model points (in their cartesian coordinates), e.g. with scipy:

from scipy.spatial import cKDTree
tree = cKDTree(np.column_stack([x.ravel(), y.ravel(), z.ravel()]))

For every point on the new cartesian grid, find the N closest model points and average them with inverse-distance weighting (IDW). You can set a max distance to exclude points that are too far away.
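A minimal IDW sketch (assuming the tree above and a model_vals array holding the model values at the tree's points; the handling of misses and exact hits is illustrative):

def sample_the_data(xv, yv, zv, N=8, max_dist=100.0):
    # query returns inf distances (and out-of-range indices) for
    # neighbors beyond distance_upper_bound
    dists, idxs = tree.query([xv, yv, zv], k=N, distance_upper_bound=max_dist)
    found = np.isfinite(dists)
    if not found.any():
        return np.nan  # no model points within max_dist
    if dists[found].min() == 0.0:
        return model_vals[idxs[np.argmin(dists)]]  # exact hit on a model node
    weights = 1.0 / dists[found]
    return np.sum(weights * model_vals[idxs[found]]) / np.sum(weights)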
Could experiment with different methods here! Just take the "nearest"? A different interpolation method entirely?
Sometimes a little buggy… depending on the resolution of the covering grid, you can get weird artifacts or empty interpolated data.
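A quick way to spot the empty-data case is to count the grid points that never received data (assuming the sampler sketched above, which returns np.nan on a miss):

# count unfilled points after interpolation
n_empty = int(np.sum(np.isnan(data_on_cart_grid)))
print(f"{n_empty} of {data_on_cart_grid.size} grid points received no data")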
Once the data is sampled, use yt.load_uniform_grid:

data = {"dvs": dvs_on_cartesian}
bbox = cart_bbox
sizes = dvs_on_cartesian.shape
ds = yt.load_uniform_grid(data, sizes, 1.0, bbox=bbox)
can now use volume rendering!
Seismic examples:
both of those rely on https://github.com/chrishavlin/yt_velmodel_vis
The EarthCube notebook (direct link) covers the interpolation above and goes into transfer functions in more detail.
New package!
https://github.com/chrishavlin/ytgeotools
streamlines all the above, with some additional analysis functionality.
import yt
from ytgeotools.seismology.datasets import XarrayGeoSpherical

filename = "IRIS/GYPSUM_percent.nc"

# to get a yt dataset for maps:
ds_yt = XarrayGeoSpherical(filename).load_uniform_grid()

# to get a yt dataset on an interpolated cartesian grid:
ds = XarrayGeoSpherical(filename)
ds_yt_i = ds.interpolate_to_uniform_cartesian(
    ["dvs"],
    N=50,
    max_dist=50,
    return_yt=True,
)
This should be usable with any IRIS EMC netCDF file to get a yt dataset.
Installation is not yet streamlined… you need to install two packages from source, in the following order:
1. yt_idv: https://github.com/yt-project/yt_idv
2. ytgeotools: https://github.com/chrishavlin/ytgeotools
You also need cartopy… easiest to handle with a conda environment.
Download/clone 1 and 2, make sure the conda environment is active, then cd into yt_idv and install from source with
$ python -m pip install .
Repeat for ytgeotools.
Test installation: from a python shell, try:
>>> import yt_idv
>>> import ytgeotools
>>> import yt
There may be some more missing dependencies to install…
The AGU and EarthCube notebooks use the standard yt volume rendering interface. Excerpted from those notebooks, a typical workflow to create an image looks like:
import numpy as np
import yt
from ytgeotools.seismology.datasets import XarrayGeoSpherical

# load the cartesian dataset
ds_raw = XarrayGeoSpherical(filename)
ds = ds_raw.interpolate_to_uniform_cartesian(
    ["dvs"],
    N=50,
    max_dist=50,
    return_yt=True,
)

# create the scene (loads the full dataset into the scene for rendering)
sc = yt.create_scene(ds, "dvs")

# adjust camera position and orientation (so "up" points from the
# Earth's center toward the surface); bbox is the cartesian bounding
# box (cart_bbox) from the interpolation step
x_c = np.mean(bbox[0])
y_c = np.mean(bbox[1])
z_c = np.mean(bbox[2])
center_vec = np.array([x_c, y_c, z_c])
center_vec = center_vec / np.linalg.norm(center_vec)
pos = sc.camera.position
sc.camera.set_position(pos, north_vector=center_vec)

# adjust camera zoom by narrowing the camera width
zoom_factor = 0.7  # < 1 zooms in
init_width = sc.camera.width
sc.camera.width = init_width * zoom_factor

# set the transfer function: initialize the tf object by setting
# the data bounds to consider
dvs_min = -8
dvs_max = 8
tf = yt.ColorTransferFunction((dvs_min, dvs_max))

# gaussians to add: center, width, and RGBa tuples
TF_gaussians = [
    {"center": -0.8, "width": 0.1, "RGBa": (1.0, 0.0, 0.0, 0.5)},
    {"center": 0.5, "width": 0.2, "RGBa": (0.1, 0.1, 1.0, 0.8)},
]
for gau in TF_gaussians:
    tf.add_gaussian(gau["center"], gau["width"], gau["RGBa"])

source = sc.sources["source_00"]
source.set_transfer_function(tf)

# adjust the resolution of the rendering
res = sc.camera.get_resolution()
res_factor = 2
new_res = (int(res[0] * res_factor), int(res[1] * res_factor))
sc.camera.set_resolution(new_res)

# if in a notebook:
sc.show()
# if in a script:
sc.save()
Expect a lot of trial and error for transfer functions and camera positioning. To make videos, adjust the camera position in a loop, save off the images, and then stitch them together with a separate tool.
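For example, a minimal orbit-video sketch using yt's camera rotation (the frame count and sigma_clip value are arbitrary choices):

import numpy as np

n_frames = 60
for i in range(n_frames):
    # rotate the camera by a small increment (about its north vector by default)
    sc.camera.rotate(2.0 * np.pi / n_frames)
    sc.save(f"frame_{i:04d}.png", sigma_clip=4.0)
# then stitch the frames together with, e.g., ffmpeg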
yt_idv!
import numpy as np
import yt_idv
from ytgeotools.seismology.datasets import XarrayGeoSpherical
def refill(vals):
    # zero out NaNs and positive values, keeping only the slow
    # (negative dvs) anomalies
    vals[np.isnan(vals)] = 0.0
    vals[vals > 0] = 0.0
    return vals

filename = "IRIS/NWUS11-S_percent.nc"
ds = XarrayGeoSpherical(filename)

ds_yt = ds.interpolate_to_uniform_cartesian(
    ["dvs"],
    N=50,
    max_dist=50,
    return_yt=True,
    rescale_coords=True,  # IMPORTANT!
    apply_functions=[refill, np.abs],
)
rc = yt_idv.render_context(height=800, width=800, gui=True)
sg = rc.add_scene(ds_yt, "dvs", no_ghost=True)
rc.run()
great for getting a sense of what's in the data!
Caveats:
In manual plotting (the yt.create_scene workflow above), you can add annotations with

from yt.visualization.volume_rendering.api import LineSource, PointSource

but their coordinates need to be in cartesian as well… The EarthCube/AGU notebooks have some helper functions to streamline this; they are not yet in ytgeotools or yt_idv.
Adding Point and Line sources can also affect the overall brightness of the scene; it often takes a lot of trial and error to adjust the opacity of the sources so that they do not wash out the volume rendering.
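As an illustrative sketch (the positions are made-up points in the same cartesian coordinates as the rendered data; note the low alpha so the points do not dominate the scene):

import numpy as np
from yt.visualization.volume_rendering.api import PointSource

# made-up annotation points, in the scene's cartesian coordinates
positions = np.array([
    [0.5, 0.5, 0.9],
    [0.6, 0.4, 0.9],
])
# white points with low alpha, to avoid washing out the volume rendering
colors = np.tile([1.0, 1.0, 1.0, 0.05], (positions.shape[0], 1))

points = PointSource(positions, colors=colors)
sc.add_source(points)
sc.save()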