# Hierarchical Risk Parity (HRP) ## Introduction In the dynamic world of finance, crafting portfolios that balance risk and return is a perpetual challenge. Hierarchical Risk Parity (HRP) emerges as a groundbreaking solution, blending the power of machine learning with established financial principles. ### What sets HRP apart? * **Robustness:** It thrives even when traditional methods falter, handling cases where the covariance matrix isn't invertible. * **Adaptability:** It gracefully navigates the complexities of real-world financial data, even with ill-conditioned or near-singular matrices. * **Risk Mitigation:** Empirical evidence reveals HRP's consistent ability to build portfolios with lower risk compared to alternatives. * **Performance Edge:** In diverse market scenarios, HRP portfolios have a track record of outperforming traditional approaches. ## HRP Algorithm: A Three-Stage Process Hierarchical Risk Parity is a structured process that optimizes portfolio allocation through three key stages: 1. **Tree Clustering:** This initial step groups assets based on their correlation, forming a hierarchical structure similar to a tree. Assets with similar behavior are clustered together. 2. **Quasi-Diagonalization:** This step reorganizes the covariance matrix so that highly correlated assets are placed closer together. This enhances the visualization of risk and simplifies the allocation process. 3. **Recursive Bisection:** This final stage determines the actual asset weights. It recursively divides the clusters into two equal halves, allocating weights based on an inverse-variance principle to balance risk and potential return. ### Tree Clustering Tree clustering aims to identify assets with similar risk profiles and group them together. It achieves this by utilizing a distance metric based on pairwise correlations: #### Algorithm :::success <font color=blue>Algorithm: Tree Clustering </font> Input: $T \times N$ matrix $X$ of observations (e.g., returns series of $N$ variables over $T$ periods) Output: Hierarchical structure of clusters 1. Compute $N \times N$ correlation matrix $\rho$: $\rho = (\text{corr}(X_i, X_j))_{i,j = 1,\dots,N}$ 2. Compute $N \times N$ distance matrix $D$: $D = (d_{ij})_{i,j = 1,...,N}$ where $d_{ij} = \sqrt{\frac{1 - \rho_{ij}}{2}}$ 3. Compute pairwise Euclidean distances $\tilde{d}$: $\tilde{d}_{ij} = \sqrt{\sum_{k=1}^N (d_{ki} - d_{kj})^2}$ 4. Initialize clusters: $C = \{\{1\}, \{2\}, \dots, \{N\}\}$ 5. While $|C| > 1$: a. Find $(i^*, j^*) = \arg\min_{i,j, i \neq j} {\tilde{d}_{ij}}$ b. Create new cluster $u = C[i^*] ∪ C[j^*]$ c. Update distances: For each cluster $v$ in $C$, $v \neq C[i^*]$, $v \neq C[j^*]$: $\tilde{d}[u,v] = \min(\tilde{d}[C[i^*],v], \tilde{d}[C[j^*],v])$ d. Update $C$: $C = C \backslash \{C[i^*], C[j^*]\} \cup \{u\}$ e. Update $\tilde{d}$ matrix: - Remove rows and columns for $C[i^*]$ and $C[j^*]$ - Add row and column for u 6. Return final cluster hierarchy ::: #### Code Implementation ```python= def correlDist(corr): dist = ((1 - corr) / 2) ** 0.5 # Distance metric based on correlation return dist dist = correlDist(corr) condensed_dist = squareform(dist) # Convert to condensed distance matrix link = sch.linkage(condensed_dist, 'single') # Perform single-linkage clustering ``` #### Interpreting the Results * **Linkage Matrix:** This matrix details the hierarchical clustering process, recording which assets are merged at each step and the distance at which they merge. * **Dendrogram:** This visual representation of the clustering helps identify natural asset groupings and the relative distances between them. ![005-visualizing-dendrograms-cutree-1](https://hackmd.io/_uploads/rkMGe7eZC.png) ### Quasi-Diagonalization This step rearranges the covariance matrix's rows and columns based on the tree clustering results. The goal is to place highly correlated assets closer to the diagonal, creating a quasi-diagonal structure. **Key Benefit:** Quasi-diagonalization makes risk concentrations more apparent and simplifies subsequent weight allocation. #### Algorithm :::success <font color=blue>Algorithm: Quasi-Diagonalization</font> Input: - Linkage matrix `Y` from hierarchical clustering - Original covariance matrix `Sigma` Output: - Reordered covariance matrix `Sigma_new` 1. Initialize order list `L = []` 2. Function `QuasiDiagonalize(cluster)`: * If `cluster` is a leaf node (original asset): * Append cluster to `L` * Else: * `QuasiDiagonalize(cluster.left_child)` * `QuasiDiagonalize(cluster.right_child)` 3. Start with the root cluster (last row of `Y`): * `QuasiDiagonalize((Y[N-1, 1], Y[N-1, 2]))` 4. Reorder `Sigma` based on `L`: `Sigma_new = Sigma[L, :][:, L]` 5. Return `Sigma_new` ::: #### Code Implementation ```python= def getQuasiDiag(link): # Sort clustered items by distance link=link.astype(int) sortIx=pd.Series([link[-1,0],link[-1,1]]) numItems=link[-1,3] # number of original items while sortIx.max()>=numItems: sortIx.index=range(0,sortIx.shape[0]*2,2) # make space df0=sortIx[sortIx>=numItems] # find clusters i=df0.index;j=df0.values-numItems sortIx[i]=link[j,0] # item 1 df0=pd.Series(link[j,1],index=i+1) sortIx=sortIx.append(df0) # item 2 sortIx=sortIx.sort_index() # re-sort sortIx.index=range(sortIx.shape[0]) # re-index return sortIx.tolist() ``` ### Recursive Bisection Recursive bisection is the final and crucial stage of HRP, where the algorithm determines the optimal weights for each asset in the portfolio. This stage leverages the hierarchical structure created in the previous steps and allocates weights using an inverse-variance approach. The goal is to strike a balance between risk and potential return. #### Intuition The recursive bisection algorithm starts by treating the entire portfolio as one large cluster. It then repeatedly splits each cluster into two roughly equal halves based on the hierarchical structure. For each split, the algorithm calculates weights for each half using an inverse-variance approach, giving higher weight to the less risky (lower variance) half. This process continues recursively until all assets are assigned individual weights. :::success <font color=blue>Algorithm: Recursive Bisection</font> Input: - Covariance matrix $V$ - Quasi-diagonalized order of assets $L$ Output: - Portfolio weights $w$ 1. Initialize: * $L = \{L_0\}$, where $L_0 = \{1, \dots, N\}$ * $w_n = 1$ for $n = 1, \dots, N$ 2. While any $|L_i| > 1$ for $L_i \in L$: * For each $L_i \in L$ with $|L_i| > 1$: * Bisect $L_i$ into $L^{(1)}_i$ and $L^{(2)}_i$: * $L^{(1)}_i$ is the first $\text{int}(\frac{|L_i|}{2})$ elements of $|L_i|$ * $L^{(2)}_i$ is the remaining elements of $L_i$ * Compute inverse-variance weights for each subset $j = 1, 2$: * $V^{(j)}_i$ is the covariance matrix between the constituents of the $L^{(j)}_i$ bisection * $\tilde{W}^{(j)}_i = \frac{\text{diag}[V^{(j)}_i ]^{−1}}{\text{tr}[\text{diag}[V^{(j)}_i]^{−1}]}$ * $\tilde{v}^{(j)}_i ≡ (\widetilde{W}^{(j)}_i)^T V^{(j)}_i \widetilde{W}^{(j)}_i$ * Compute split factor: $\alpha = 1 - \frac{\tilde{v}^{(1)}_i}{\tilde{v}^{(1)}_i + \tilde{v}^{(2)}_i}$ $\in [0,1]$ * Update weights: * For $n \in L^{(1)}_i$: $w_n \leftarrow \alpha w_n$ * For $n \in L^{(2)}_i$: $w_n \leftarrow (1 - \alpha) w_n$ * Update $L$: $L = L \backslash \{L_i\} \cup \{L^{(1)}_i, L^{(2)}_i\}$ 3. Return $W$ ::: ### Code Implementation ```python= def getIVP(cov): # Compute inverse-variance portfolio weights ivp = 1.0 / np.diag(cov) ivp /= ivp.sum() return ivp def getClusterVar(cov, cItems): # Compute variance of a cluster cov_ = cov.loc[cItems, cItems] # Slice covariance matrix w_ = getIVP(cov_).reshape(-1, 1) cVar = np.dot(np.dot(w_.T, cov_), w_)[0, 0] return cVar def getRecBipart(cov, sortIx): # Compute HRP allocation w = pd.Series(1, index=sortIx) cItems = [sortIx] # Start with all assets in one cluster while len(cItems) > 0: # Bisection of clusters cItems = [i[j:k] for i in cItems for j, k in ((0, len(i) // 2), (len(i) // 2, len(i))) if len(i) > 1] for i in range(0, len(cItems), 2): # Process in pairs cItems0, cItems1 = cItems[i], cItems[i + 1] cVar0, cVar1 = getClusterVar(cov, cItems0), getClusterVar(cov, cItems1) alpha = 1 - cVar0 / (cVar0 + cVar1) # Split factor w[cItems0] *= alpha # Update weights for cluster 1 w[cItems1] *= (1 - alpha) # Update weights for cluster 2 return w ``` ## Example: US Finance Stocks and Futures Portfolio ```python= import matplotlib.pyplot as mpl import scipy.cluster.hierarchy as sch from scipy.spatial.distance import squareform import numpy as np import pandas as pd import yfinance as yf # Yahoo Finance API from datetime import timedelta ,date from dateutil.relativedelta import relativedelta ``` To showcase the effectiveness of HRP, we constructed a portfolio consisting of 35 diversified assets: * **US Stocks:** `AAPL`, `ABT`, `ADBE`, `AMAT`, `AMZN`, `AVY`, `BALL`, `BAX`, `BDX`, `CMI`, `CPB`, `CSX`, `GILD`, `HAS`, `INSM`, `KO`, `MCD`, `MMM`, `MSFT`, `NVDA`, `PFE`, `TGT`, `TJX`, `TSM`, `WFC`, `XOM`, `YUM` * **Futures Contracts:** `GC=F` (Gold), `HG=F` (Copper), `SI=F` (Silver), `^FVX` (CBOE Volatility Index), `^GSPC` (S&P 500), `^NDX` (Nasdaq 100), `^TNX` (10-Year Treasury Yield), `^TYX` (30-Year Treasury Yield) The portfolio was rebalanced monthly using an 8-month lookback period, spanning from September 2000 to May 2024. ```python= class Stocks(): def __init__(self, arg_stocks_list=[], arg_begin='2015-01-01', arg_end='2023-10-01'): """ Initialize the Stocks class. Parameters: - arg_stocks_list: List of stock symbols. For example: ['2330.TW','2337.TW','2357.TW','2454.TW','3231.TW','3443.TW'] or ['TSM','AAPL','GOOG','MSFT','AMZN','GOOGL'] - arg_begin: Start date for the stock data. - arg_end: End date for the stock data. Returns: - None: This function doesn't return anything but initializes the class instance. """ self.stocks_list = arg_stocks_list # List of stock symbols self.stocks_list.sort() self.start_date = arg_begin # Start date self.end_date = arg_end # End date self.N = len(self.stocks_list) # Number of stocks self.data = None self.set_data() self.M = len(self.data) # The number of rows in the data self.rtns = None self.set_rtns() def set_data(self): self.data = pd.DataFrame() # Initialize an empty DataFrame to store data # Iterate over each stock in the list of stocks for stock in self.stocks_list: # Download the stock data using the yfinance library df = yf.download(stock, start=self.start_date, end=self.end_date) # Select only the adjusted closing price from the downloaded data df = df[['Adj Close']] # Rename the column to the stock symbol for clarity df = df.rename(columns = {'Adj Close': stock}) # Check if the main DataFrame is empty if self.data.empty: # If it is, assign the downloaded data to the main DataFrame self.data = df else: # If it's not, concatenate the downloaded data to the main DataFrame self.data = pd.concat([self.data, df], axis=1) return def set_rtns(self): # percentage return self.rtns = pd.DataFrame() for stock in self.stocks_list: self.rtns[stock] = self.data[stock].pct_change() + 1 self.rtns.fillna(1, inplace=True) return def plotComparePrice(arg_stocks): mpl.figure(figsize=(20,12)) mpl.title('Normalized Prices', fontsize=20) mpl.xticks(fontsize=16) mpl.yticks(fontsize=16) mpl.ylabel('Price', fontsize=16) mpl.plot(np.log(arg_stocks.rtns.cumprod(axis = 0)),label=arg_stocks.data.columns) mpl.legend(loc='upper left', bbox_to_anchor=(1, 1), fontsize=12) mpl.plot() return stocks = Stocks(stocks_list,start_date ,end_date) plotComparePrice(stocks) ``` ![ComparePrice](https://hackmd.io/_uploads/S1IZgDqUC.png) --- ```python= def testPerformance(): # data start_date = date(2000,1,1) end_date = date(2024,6,1) td = (end_date - start_date)/timedelta(days=1) stocks_list=['AAPL', 'ABT', 'ADBE', 'AMAT', 'AMZN', 'AVY', 'BALL', 'BAX', 'BDX', 'CMI', 'CPB', 'CSX','GC=F', 'GILD', 'HAS','HG=F', 'INSM', 'KO', 'MCD', 'MMM', 'MSFT', 'NVDA', 'PFE','SI=F', 'TGT', 'TJX', 'TSM', 'WFC', 'XOM', 'YUM', '^FVX', '^GSPC', '^NDX', '^TNX', '^TYX'] stocks = Stocks(stocks_list,start_date ,end_date) weight_history=pd.DataFrame() tm=8 assetHRP=1 assetEW=1 assetRandom_1=1 assetRandom_2=1 assetRandom_3=1 xl=[start_date + relativedelta(months=+(tm))] y_HRP=[1] y_EW=[1] y_Random_1=[1] y_Random_2=[1] y_Random_3=[1] np.random.seed(0) rollw=rolling_window(stocks.rtns, memory=168, min_periods=168) for i in range(int((td/30)-tm-4)): x=stocks.rtns[start_date + relativedelta(months=+i) : start_date + relativedelta(months=+(tm+i))] dt=start_date + relativedelta(months=+(tm+i)) temp=rollw[dt:dt] while temp.empty: dt=dt + relativedelta(days=+1) temp=rollw[dt:dt] cov=pd.DataFrame(temp.values, index=temp.columns, columns=temp.columns) corr=x.corr() # cluster dist=correlDist(corr) condensed_dist = squareform(dist) link=sch.linkage(condensed_dist,'single') sortIx=getQuasiDiag(link) sortIx=corr.index[sortIx].tolist() # recover labels # Capital allocation hrp=getRecBipart(cov,sortIx) weight_history=pd.concat([weight_history,pd.DataFrame([hrp],index=[start_date + relativedelta(months=+(tm+i))])]) tempHRP=hrp.copy() tempEW=0 tempRandom_1=hrp.copy() rd_w_1=random_weight(len(stocks.rtns.columns)) tempRandom_2=hrp.copy() rd_w_2=random_weight(len(stocks.rtns.columns)) tempRandom_3=hrp.copy() rd_w_3=random_weight(len(stocks.rtns.columns)) k=len(stocks.rtns.columns)-1 for key in stocks_list: rtn=float(stocks.rtns[start_date + relativedelta(months=+(tm+i)): start_date + relativedelta(months=+(tm+1+i))].cumprod(axis = 0).iloc[-1][key]) tempHRP[key]*=rtn tempRandom_1[key]=rd_w_1[k]*rtn tempRandom_2[key]=rd_w_2[k]*rtn tempRandom_3[key]=rd_w_3[k]*rtn k-=1 tempEW+=rtn tempEW/=len(stocks_list) assetHRP*=tempHRP.sum() assetEW*=tempEW assetRandom_1*=tempRandom_1.sum() assetRandom_2*=tempRandom_2.sum() assetRandom_3*=tempRandom_3.sum() xl.append(start_date + relativedelta(months=+(tm+i))) y_HRP.append(assetHRP) y_EW.append(assetEW) y_Random_1.append(assetRandom_1) y_Random_2.append(assetRandom_2) y_Random_3.append(assetRandom_3) plotPerformance(xl,y_HRP.copy(),y_EW.copy(),y_Random_1.copy(),y_Random_2.copy(),y_Random_3.copy()) plotHistoryWeight(weight_history) summary_stat(y_HRP.copy()) summary_stat(y_EW.copy()) summary_stat(y_Random_1.copy()) summary_stat(y_Random_2.copy()) summary_stat(y_Random_3.copy()) plotDrawdown(xl,y_HRP.copy(),y_EW.copy(),y_Random_1.copy(),y_Random_2.copy(),y_Random_3.copy()) return def random_weight(arg_len): rd = np.random.random(size=arg_len-1) rd.sort() rd_w=rd.copy() rd_w=np.append(rd_w,1) for k in range(len(rd)): rd_w[k+1]=rd_w[k+1]-rd[k] return rd_w ``` ### Tree Clustering The dendrogram below illustrates the hierarchical clustering of assets, revealing natural groupings based on their correlation structure. ```python= def plotDendrogram(path,linkage,labels=None): if labels is None:labels=[] p = len(labels) mpl.figure(figsize=(12,6)) mpl.title('Hierarchical Clustering', fontsize=20) mpl.ylabel('stock symbol', fontsize=16) mpl.xlabel('distance', fontsize=16) # call dendrogram to get the returned dictionary # (plotting parameters can be ignored at this point) R = sch.dendrogram( linkage, truncate_mode='lastp', # show only the last p merged clusters p=p, # show only the last p merged clusters no_plot=True, orientation='right' ) # create a label dictionary temp = {R["leaves"][ii]: labels[ii] for ii in range(len(R["leaves"]))} def llf(xx): return "{}".format(temp[xx]) sch.dendrogram( linkage, truncate_mode='lastp', # show only the last p merged clusters p=p, # show only the last p merged clusters color_threshold=0.35, # If color_threshold is None or ‘default’,the threshold is set to 0.7*max(Z[:,2]) leaf_label_func=llf, # leaf_rotation=60., leaf_font_size=12., show_contracted=True, # to get a distribution impression in truncated branches orientation='right', ) mpl.savefig(path, format='png', bbox_inches='tight') mpl.clf();mpl.close() # reset pylab return ``` Use `color_threshold=0.35` let the height less than 0.35 be same color, which also meaning they are highly correlated assets ![HRP_Dendrogram_single](https://i.imgur.com/z8zPHJ8.png) ### Quasi-Diagonalization By reordering the correlation matrix based on the clustering, we obtain a quasi-diagonalized matrix. This visually highlights blocks of highly correlated assets, aiding in risk assessment. ```python= def plotCorrMatrix(path,corr,labels=None): # Heatmap of the correlation matrix if labels is None:labels=[] mpl.figure(figsize=(8,6)) mpl.pcolor(corr) mpl.colorbar() mpl.yticks(np.arange(.5,corr.shape[0]+.5),labels) mpl.xticks(np.arange(.5,corr.shape[0]+.5),labels,rotation = 60.) mpl.savefig(path) mpl.clf();mpl.close() # reset pylab return plotCorrMatrix('HRP_corr.png',corr,labels=corr.columns) ``` **Correlation matrix** ![HRP3_corr0](https://hackmd.io/_uploads/HkHRzruL0.png) ```python= sortIx=getQuasiDiag(link) sortIx=corr.index[sortIx].tolist() # recover labels df0=corr.loc[sortIx,sortIx] # reorder plotCorrMatrix('HRP_corr_Q_diag.png',df0,labels=df0.columns) ``` **Quasi-diagonalisation of the correlation matrix** ![HRP3_corr1](https://hackmd.io/_uploads/Sy5RMB_UR.png) ### Recursive Bisection and Performance: The final HRP portfolio weights were determined through recursive bisection. Here we use Covariance Predictors-Rolling Window to Predict the covariance matrix of rebalance month. ```python= def rolling_window(returns, memory, min_periods=20): rollw=pd.DataFrame() min_periods = max(min_periods, 1) times = returns.index assets = returns.columns returns = returns.values Sigmas = np.zeros((returns.shape[0], returns.shape[1], returns.shape[1])) Sigmas[0] = np.outer(returns[0], returns[0]) for t in range(1, returns.shape[0]): alpha_old = 1 / min(t + 1, memory) alpha_new = 1 / min(t + 2, memory) if t >= memory: Sigmas[t] = alpha_new / alpha_old * Sigmas[t - 1] + alpha_new * ( np.outer(returns[t], returns[t]) - np.outer(returns[t - memory], returns[t - memory]) ) else: Sigmas[t] = alpha_new / alpha_old * Sigmas[t - 1] + alpha_new * ( np.outer(returns[t], returns[t]) ) Sigmas = Sigmas[min_periods - 1 :] times = times[min_periods - 1 :] rollw=pd.concat([pd.DataFrame(Sigmas[t], index=assets, columns=assets) for t in range(len(times))], keys=[times[t].date() for t in range(len(times))]) return rollw ``` The resulting portfolio's performance is compared against equal-weighted (EW) and randomly weighted portfolios in the figures below. ```python= def plotHistoryWeight(w_history): tot = np.zeros(len(w_history)) mpl.figure(figsize=(30,13)) mpl.title('History of weight values', fontsize=30) mpl.xticks(fontsize=16) mpl.yticks(fontsize=16) mpl.ylabel('Weights', fontsize=24) for col_idx in w_history: mpl.fill_between(x=w_history.index.tolist(), y1=w_history[col_idx].values+tot,y2=tot,step='post',label=col_idx) tot += w_history[col_idx].values mpl.legend(loc='upper left', bbox_to_anchor=(1, 1), fontsize=12) mpl.savefig('HistoryWeight.png', format='png', bbox_inches='tight') mpl.clf();mpl.close() return ``` ![History of weight values](https://i.imgur.com/MBA6b8a.png) ```python= def plotPerformance(d,target1,target2,target3,target4,target5): mpl.figure(figsize=(12,6)) mpl.plot(d, target1, 'r',label='HRP') mpl.plot(d, target2, 'b',label='EW') mpl.plot(d, target3, 'g',label='Random_1') mpl.plot(d, target4, 'm',label='Random_2') mpl.plot(d, target5, 'c',label='Random_3') mpl.title('Performance', fontsize=20) mpl.xticks(rotation = 60.) mpl.ylabel('cumprod', fontsize=16) mpl.legend() mpl.savefig('Performance.png', format='png', bbox_inches='tight') mpl.clf();mpl.close() return ``` ![performance](https://i.imgur.com/KCytkSS_d.webp?maxwidth=760&fidelity=grand) ```python= def historyDrawdown(target): history_drawdown=[] max_price=target[0] for price in target: if price > max_price: max_price=price history_drawdown.append((price - max_price) *100 / max_price) return history_drawdown def plotDrawdown(d,target1,target2,target3,target4,target5): drawdown_HRP=historyDrawdown(target1) drawdown_EW=historyDrawdown(target2) drawdown_Random_1=historyDrawdown(target3) drawdown_Random_2=historyDrawdown(target4) drawdown_Random_3=historyDrawdown(target5) mpl.figure(figsize=(12,6)) mpl.title('History Drawdown(%)', fontsize=16) mpl.plot(d, drawdown_HRP, 'r',label='HRP Drawdown(%)') mpl.plot(d, drawdown_EW, 'b',label='EW Drawdown(%)') mpl.plot(d, drawdown_Random_1, 'g',label='Random_1 Drawdown(%)') mpl.plot(d, drawdown_Random_2, 'm',label='Random_2 Drawdown(%)') mpl.plot(d, drawdown_Random_3, 'c',label='Random_3 Drawdown(%)') mpl.ylabel('Drawdown(%)', fontsize=16) mpl.legend() mpl.savefig('HistoryDrawdown.png', format='png', bbox_inches='tight') mpl.clf();mpl.close() return ``` ![Drawdown](https://i.imgur.com/Zm9CeQ0.png) ### Key Observations * **Risk Management:** HRP demonstrates lower volatility and smaller maximum drawdowns, particularly during the 2008 financial crisis and the 2020 pandemic, suggesting superior risk-adjusted performance. * **Competitive Returns:** - HRP achieved a CAGR of 16.87%, slightly outperforming the equally weighted (EW) portfolio with a CAGR of 16.26% - HRP’s volatility was 15.78%, slightly lower than the EW portfolio’s 15.88% , lower than all random portfolios. - HRP’s Sharpe ratio was higher than EW and all random portfolios. - HRP experienced a maximum drawdown of 36.14%, which was slightly better than the EW portfolio’s 37.18%, better than Random Portfolio 2 and 3. * **Summary Statistics:** The table below summarizes key performance metrics: ```python= def summary_stat(target): max_drawdown = 0 max_price=target[0] for price in target: if price > max_price: max_price=price drawdown = (price - max_price) / max_price if drawdown < max_drawdown: max_drawdown = drawdown pct_rtn=(np.array(target[1:])/np.array(target[:-1]))-1 print('Compound annual growth rate',(target[-1]/target[0])**(12/len(target))-1) print('Vol.(annual)', (pct_rtn.std())*(12**(1/2))) print('Sharpe ratio(annual)',(pct_rtn.mean())*(12**(1/2))/(pct_rtn.std())) print('max_drawdown',max_drawdown) print('Sortino ratio(annual)',(pct_rtn.mean())*(12**(1/2))/(pct_rtn[pct_rtn<0].std())) print('Calmar ratio(annual)',(pct_rtn.mean())*(12**(1/2))/max_drawdown) return ``` | | HRP | EW | Random 1 | Random 2 | Random 3 | |:----------------------:|:-------:|:-------:|:--------:|:--------:|:--------:| | CAGR | 16.87% | 16.26% | 18.62% | 15.36% | 16.35% | | Volatility(annualized) | 15.78% | 15.88% | 16.46% | 17.10% | 17.18% | | Sharpe ratio | 1.077 | 1.038 | 1.131 | 0.929 | 0.976 | | Maximum drawdown | -36.14% | -37.18% | -31.14% | -37.82% | -37.11% | --- ## **Reference** * [Marcos López de Prado](https://www.quantresearch.org/index.html) * [Building Diversified Portfolios that Outperform Out-of-Sample](https://doi.org/10.3905/jpm.2016.42.4.059) * [HRP.py.txt](https://www.quantresearch.org/HRP.py.txt) * [HRP_MC.py.txt](https://www.quantresearch.org/HRP_MC.py.txt) * [Hierarchical Construction of Investment Portfolios Using Clustered Machine Learning](https://patents.google.com/patent/US20180089762A1) * [Asset Allocation - Hierarchical Risk Parity](https://www.mathworks.com/videos/asset-allocation-hierarchical-risk-parity-1577957794663.html) * [Covariance/Correlation Matrix HRP-Clustering](https://youtu.be/wtd3K-Ubr1g?si=rQomxWT2Xinw8UXo) * [Towards Robust Portfolios](https://www.quantresearch.org/FIVE_VROBUST_Index__Research-EN.pdf) * [Hierarchical Risk Parity on RAPIDS: An ML Approach to Portfolio Allocation](https://developer.nvidia.com/blog/hierarchical-risk-parity-on-rapids-an-ml-approach-to-portfolio-allocation/) * [Asset Allocation - Hierarchical Risk Parity](https://youtu.be/e21MfMe5vtU?si=hS23MDNGDQlSmD-A) * [Covariance Predictor](https://hackmd.io/@Howard531/B1DphEqXR)