---
tags: yt, lab
---
# yt Issues: Memory Leakage
- [Original issue report on the yt-users mailing list](https://mail.python.org/archives/list/yt-users@python.org/thread/H3WCK5PA2WNGEDBVHOCWV5YCPV2ZYIXY/)
## Use Valgrind to Check Memory Consumption
```python
import yt

if __name__ == "__main__":
    iter_time = 10

    #data = "HiresIsolatedGalaxy/DD0044/DD0044"    # Enzo
    #data = "./output_00002/info_00002.txt"        # RAMSES
    data = "./JetICMWall/Data_000060"               # Gamer

    min_coord = [0.0, 0.0, 0.0]
    max_coord = [4.0, 4.0, 4.0]   # Enzo and RAMSES have max_coord [1.0, 1.0, 1.0]

    # Repeatedly load the dataset and read one field to watch memory usage.
    for i in range(iter_time):
        ds = yt.load(data)
        box = ds.box(min_coord, max_coord)
        density = box[("gas", "density")]

    print("Done~")
```
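As a quick cross-check that doesn't require Valgrind, the same loop can report the process's resident set size after each iteration. Below is a minimal sketch assuming Linux (it reads `VmRSS` from `/proc/self/status`) and the same Gamer dataset path; it only sees total RSS, so it catches C-level growth such as the `tsearch` case below, but it cannot attribute the growth to a call site the way Valgrind can.
```python
# Print the resident set size (RSS) after every iteration of the same loop.
# Linux-only: reads VmRSS from /proc/self/status.
import gc
import yt

def rss_mb():
    """Current resident set size in MB."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0   # kB -> MB
    return float("nan")

if __name__ == "__main__":
    data = "./JetICMWall/Data_000060"   # Gamer dataset, same as above
    min_coord = [0.0, 0.0, 0.0]
    max_coord = [4.0, 4.0, 4.0]
    for i in range(10):
        ds = yt.load(data)
        box = ds.box(min_coord, max_coord)
        _ = box[("gas", "density")]
        del box, ds
        gc.collect()                    # collect before measuring
        print(f"iteration {i}: RSS = {rss_mb():.1f} MB")
```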
### RAMSES Dataset
:::danger
It looks like allocations attributed to `tsearch` keep piling up across iterations.
:::
![](https://i.imgur.com/Kl6K1R5.png)
### ENZO Dataset
Nothing special :slightly_smiling_face:.
![](https://i.imgur.com/88eu2iq.png)
### GAMER Dataset
Nothing special :slightly_smiling_face:.
![](https://i.imgur.com/qja992A.png)
## Related Issue
:::info
Tests and runs I did before. I'm not sure whether this is related or a separate issue, so I put it here for reference only. *It might not be related to the above.*
:::
The conclusion for this section is that a memory increment doesn't necessarily mean a leak; it could be Python itself doing work (e.g. reference counting and garbage collection) behind the scenes. ([link](https://rushter.com/blog/python-garbage-collector/#:~:text=Reference%20counting%20is%20a%20simple,to%20the%20right%2Dhand%20side))
### Covering Grid
We have run `covering_grid` multiple times before, both in inline analysis and in post-processing. The only differences are how the dataset is loaded (one through the `gamer` frontend, the other through the `libyt` frontend) and some of the parameters passed to `covering_grid`.
```python
import numpy as np
import yt

# Some global variables (lv, target_left_edge, dim_x/y/z, cube_filename,
# data_idx) are defined above; their exact values don't matter here.

def yt_inline(data):
    global data_idx
    ds = yt.load(data)
    ad = ds.covering_grid(level=lv,
                          left_edge=[target_left_edge[0],
                                     target_left_edge[1],
                                     target_left_edge[2]],
                          dims=[dim_x, dim_y, dim_z])
    real_part = ad["Real"].in_units("sqrt(code_mass)/code_length**(3/2)").swapaxes(0, 2)
    imag_part = ad["Imag"].in_units("sqrt(code_mass)/code_length**(3/2)").swapaxes(0, 2)
    dens_part = ad["Dens"].in_units("code_mass/code_length**3").swapaxes(0, 2)
    if yt.is_root():
        np.savez(cube_filename.format("%09d" % data_idx),
                 Real=real_part, Imag=imag_part, Dens=dens_part)
    data_idx += 1

if __name__ == "__main__":
    data = "/projectY/cindytsai/HaloData_ForProfile/Data_000000"
    iter_time = 5
    for i in range(iter_time):
        yt_inline(data)
```
#### In Post-Processing
:::warning
`PyObject_GC_Malloc` is piling up.
:::
![](https://i.imgur.com/MdzafjQ.png)
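One way to check whether this growth corresponds to Python objects that actually stay alive (rather than allocator bookkeeping) is to take a census of gc-tracked objects between iterations and watch which types keep climbing. A minimal sketch along those lines, using placeholder `covering_grid` parameters (level 0 over the whole domain, reading `("gas", "density")`) rather than the exact ones above:
```python
# Count gc-tracked objects by type after each iteration; types whose counts
# keep climbing are candidates for objects that are never released.
import gc
from collections import Counter

import yt

def object_census():
    """Return a Counter of gc-tracked objects keyed by type name."""
    gc.collect()
    return Counter(type(o).__name__ for o in gc.get_objects())

if __name__ == "__main__":
    data = "/projectY/cindytsai/HaloData_ForProfile/Data_000000"  # same path as above
    baseline = object_census()
    for i in range(5):
        ds = yt.load(data)
        ad = ds.covering_grid(level=0, left_edge=ds.domain_left_edge,
                              dims=ds.domain_dimensions)
        _ = ad[("gas", "density")]
        del ad, ds
        growth = object_census() - baseline   # only positive differences survive
        print(f"iteration {i}, top growers: {growth.most_common(5)}")
```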
#### In Inline-Analysis
:::warning
`Phy_Peak` (the last column) keeps piling up, but it doesn't necessarily grow on every call of the full routine in the `yt_inline` script above.
:::
![](https://i.imgur.com/QCB5YU9.png)
#### Conclusion
Our conclusion at the time was the same as above: a memory increment doesn't necessarily mean a leak; it could be Python itself doing work behind the scenes. ([link](https://rushter.com/blog/python-garbage-collector/#:~:text=Reference%20counting%20is%20a%20simple,to%20the%20right%2Dhand%20side))
I'm not sure whether that is also the case here.
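A toy illustration of that point, unrelated to yt: even after Python-level objects are freed, the process's resident set size may stay above its starting value because the allocator keeps freed pages around for reuse instead of returning them to the OS. A minimal sketch, again Linux-only and reusing the same `rss_mb()` helper as in the earlier sketch; the exact numbers depend on the OS and allocator:
```python
# Allocate, free, and measure: RSS may not drop back to its starting value
# even though nothing is leaked, because freed memory is kept for later reuse.
import gc

def rss_mb():
    with open("/proc/self/status") as f:   # Linux only
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0
    return float("nan")

print(f"start            : {rss_mb():.1f} MB")
big = [bytearray(1024) for _ in range(200_000)]   # ~200 MB of small buffers
print(f"after allocation : {rss_mb():.1f} MB")
del big
gc.collect()
print(f"after del + gc   : {rss_mb():.1f} MB")    # may stay well above the start
```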