# Demo Proposal: Digital Twins for Building Energy Management
:::success
This is a working document to plan out a TRIC demonstrator at the intersection of Infrastructure & Environment.
**current team**
==please adjust to best represent yourselves!==
- **Ziad Ghauch** (ATI) - will lead on integrating the RL agent into the digital twin pipeline, based on his previous theoretical framework (==add link to manuscript/preprint==)
- **Chaoqun Zhuang** (Cambridge Uni) - experience in modelling energy systems linking both physics-based and data-driven models, will provide domain expertise & virtual twin model
- **Sophie Arana** (ATI) - experience in research translation and stakeholder engagement, will bring in the user perspective and agile project methodology, and establish connections to stakeholders within Turing & beyond (e.g. through ADViCE)
:::
[toc]
## Mission
To demonstrate the benefit of digital twins enhanced by reinforcement learning for tackling real-world challenges in building energy management.
## Background: Why Building Energy Management?
Why is Building Energy Management a suitable application for Reinforcement Learning approaches?
Reinforcement learning in building energy management tackles the optimization of energy storage and use (e.g. load shifting) to reduce carbon emissions and to compensate for the intermittency of renewable energy sources.
1) Building Energy Management is a **sequential planning problem**, where decisions regarding energy usage and storage must be made continuously based on the evolving state of the building and its environment. This fits well with the Reinforcement Learning (RL) framework, which excels at optimizing decision sequences for complex, dynamic systems (a minimal sketch of this framing follows the list below).
2) RL agents have the ability to **learn and adapt continuously**. This makes them well suited to building energy management, because every house is unique and developing a specific model for each new house is costly and time-consuming. By using RL, systems can adjust to the specific energy needs and behavioral patterns of different households without the need for extensive individual programming.
3) RL enables **forecasting** ==Chaoqun to add more detailed explanation for why forecasting is valuable in this context==
4) ==any other points?==
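
To make the sequential-decision framing concrete, below is a minimal, self-contained sketch of building energy management as an RL problem: the state combines the battery state of charge, the current load and the electricity price; the action is how much to charge or discharge; the reward is the negative electricity cost. All numbers (capacity, charge rate, load, tariff) are made-up illustrative values, not taken from any real building or from the models discussed elsewhere in this document.

```python
import numpy as np

class ToyBatteryEnv:
    """Toy battery-dispatch environment with a Gym-like interface.

    All parameters are made-up illustrative values, not calibrated
    to any real building.
    """

    def __init__(self, horizon=24, capacity_kwh=10.0, seed=0):
        self.horizon = horizon
        self.capacity = capacity_kwh
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        self.soc = 0.5 * self.capacity                           # battery state of charge [kWh]
        self.load = 1.0 + self.rng.random(self.horizon)          # hourly building load [kWh]
        self.price = 0.1 + 0.2 * self.rng.random(self.horizon)   # hourly tariff [GBP/kWh]
        return self._obs()

    def _obs(self):
        # Observation: state of charge, current load, current price
        return np.array([self.soc, self.load[self.t], self.price[self.t]])

    def step(self, action):
        # action in [-1, 1]: fraction of a 2 kWh/h charge (+) / discharge (-) rate
        flow = float(np.clip(action, -1.0, 1.0)) * 2.0
        flow = float(np.clip(flow, -self.soc, self.capacity - self.soc))
        self.soc += flow
        grid_energy = max(self.load[self.t] + flow, 0.0)   # energy drawn from the grid [kWh]
        reward = -self.price[self.t] * grid_energy          # reward = negative electricity cost
        self.t += 1
        done = self.t >= self.horizon
        return (None if done else self._obs()), reward, done, {}

# Rollout with a random policy; an RL agent would replace the random action.
env = ToyBatteryEnv()
obs, done, total_cost = env.reset(), False, 0.0
while not done:
    obs, reward, done, _ = env.step(env.rng.uniform(-1, 1))
    total_cost -= reward
print(f"Cost over one simulated day: {total_cost:.2f} GBP")
```

CityLearn (linked under Resources below) provides a much richer, standardised version of this kind of environment; the toy version here is only meant to make the state/action/reward framing explicit.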
## Goal & Objectives
The project goal is to integrate Reinforcement Learning algorithms within Digital Twin architectures for Energy System Modeling. Specifically, we will engineer a digital twin pipeline designed to incorporate RL algorithms, using simulated data that mirrors real-world energy scenarios.
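
To make the intended integration concrete, here is a minimal sketch of one possible interface between the twin and the agent. All names (`DigitalTwin`, `observe`/`advance`/`calibrate`, `Agent.act`/`update`) are placeholders we are proposing for discussion, not an existing API; the point is that the agent only ever interacts with the twin through a narrow interface, and that the twin is periodically re-synchronised with (simulated) measured data.

```python
from typing import Protocol, Sequence

class DigitalTwin(Protocol):
    """Placeholder interface a twin model (e.g. a reduced-order simulator) would expose."""
    def observe(self) -> Sequence[float]: ...                         # current twin state as features
    def advance(self, actions: Sequence[float]) -> float: ...         # apply controls for one step, return reward
    def calibrate(self, measurements: Sequence[float]) -> None: ...   # re-sync twin state with measured data

class Agent(Protocol):
    """Placeholder interface for the RL agent."""
    def act(self, observation: Sequence[float]) -> Sequence[float]: ...
    def update(self, observation, actions, reward, next_observation) -> None: ...

def run_episode(twin: DigitalTwin, agent: Agent, measurement_stream, steps: int) -> float:
    """One possible coupling loop: the agent controls the twin, learns from the
    resulting transitions, and the twin is re-calibrated against measurements
    at a fixed interval (here every 24 steps, i.e. daily for hourly steps)."""
    total_reward = 0.0
    obs = twin.observe()
    for t in range(steps):
        actions = agent.act(obs)
        reward = twin.advance(actions)
        next_obs = twin.observe()
        agent.update(obs, actions, reward, next_obs)
        if (t + 1) % 24 == 0:
            twin.calibrate(next(measurement_stream))
        obs = next_obs
        total_reward += reward
    return total_reward
```

Keeping the twin behind a narrow interface like this should let us swap the toy environment for Chaoqun's reduced-order models (or a CityLearn building) without changing the agent, which is the integration question this demonstrator is about.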
### The Problem Formulation
In this project, we are NOT aiming to develop novel RL algorithms or to demonstrate their advantage over rule-based systems, as this has been shown elsewhere and is the subject of ongoing research.
Instead, we will demonstrate efficient methods for integrating an RL agent into a complete Digital Twin (DT) pipeline, addressing specific challenges of applying Reinforcement Learning within the DT framework for building energy management, such as:
- ==Ziad could you please describe some of the challenges of integration that we would demonstrate?==
:::warning
## Open Questions
- What toy example can we use to model our proof-of-concept? Ideally, this toy example should be a realistic use case, so that any solution we develop will be applicable to potential partners
- Do we need to build out a full digital twin or can we build a smaller component to showcase the integration?
- Who else could support us in this project?
- What do we need in terms of compute?
- Chaoqun's reduced-order models are not computationally expensive, so likely won't require additional compute.
- ==Please add more questions that you have at this stage==
:::
:::info
## Resources
### Papers
- [Nagy et al. 2023 Ten questions concerning reinforcement learning for building energy management](https://www.sciencedirect.com/science/article/pii/S0360132323004626)
- [Li et al. 2024 A hardware-in-the-loop (HIL) testbed for cyber-physical energy systems in smart commercial buildings](https://www.tandfonline.com/doi/full/10.1080/23744731.2024.2336839) ==Chaoqun could you add a few sentences for context for why this paper matters==
### Software/Code
- Open source library for reduced-order models of energy systems: [Modelica Buildings Library](https://simulationresearch.lbl.gov/modelica/)
- Building Simulations Environment [CityLearn](https://github.com/intelligent-environments-lab/CityLearn)
- Past RL for energy systems challenge: [NeurIPS 2023 CityLearn Challenge](https://neurips.cc/virtual/2023/competition/66590) (see also the [AIcrowd challenge page](https://www.aicrowd.com/challenges/neurips-2023-citylearn-challenge))
- https://github.com/EECi/Annex_37
:::