<style type="text/css" rel="stylesheet">
#left {
float: left;
width: 48%;
height: 100% !important;
display: flex;
align-items: center;
}
#right {
float: right;
width: 48%;
height: 100% !important;
}
</style>
<!-- .slide: style="font-size: 18px;" -->
# Outline
- What requirements engineering process and tools you used.
- What software engineering process and tools you used.
- What project management processes and tools you used.
- How you worked effectively with other developers on the project.
- A time when you faced a significant technical challenge and how you overcame it.
Note:
So, here is the outline of this presentation. I will use my recent work at ESRI as an example to address these questions.
---
### Project
<!-- .slide: style="font-size: 24px;" -->
`WebGPU` extension for ESRI runtime core
🏷️pilot research 🏷️system integration 🏷️cross-platform development
Note:
The project's main goal is to integrate WebGPU as an extension into the company's core codebase.
ESRI has a core codebase written in C++.
It is called runtime core, or RTC.
It serves as a single source of truth for all the company's functionality.
It communicates with other languages and platforms through a C API.
In theory, once the integration is completed, all their products will be able to use the extension in their own context.
It is a pilot study: we need to investigate
whether the technology fits well with the company's existing system and find out what happens if it falls short.
It involves building a series of proof-of-concept applications on different platforms, all sourcing the new extension from the core.
---
<!-- .slide: style="font-size: 24px;" -->
#### ...but what is `WebGPU`?
- It is a better WebGL; it became popular last year.
- It utilises different GPUs without handling the low-level complexity.
- A standardised & stateless API
  - _i.e._ less buggy, _arguably_.
- Natively supports compute shaders
  - easy parallel computation
  - write `wgsl` (WebGPU Shading Language) once, run everywhere; see the minimal sketch below.
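
A minimal, hedged example (not from the project) of what a WebGPU compute shader in `wgsl` looks like: it doubles every element of a buffer in parallel, and the same source runs on any platform with a WebGPU implementation (Dawn, wgpu, or a browser).

```wgsl
// Hypothetical minimal compute shader: double every element of a buffer.
@group(0) @binding(0) var<storage, read_write> data: array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    // Guard against invocations beyond the end of the buffer.
    if (id.x < arrayLength(&data)) {
        data[id.x] = data[id.x] * 2.0;
    }
}
```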
Note:
So what is WebGPU? Basically, it is a better WebGL.
It became popular last April when Chrome announced native support for it.
Essentially, it is an API that aims to utilise different GPUs without handling the low-level complexity,
and most importantly, it supports compute shaders natively. This is the main reason why the company is interested.
It means developers can utilise the local device's GPU power for heavy computation beyond rendering.
They can write `wgsl` once and run it on any platform where WebGPU is available.
Arguably, it is the cheapest way to improve performance compared to optimising the existing algorithms.
---
<!-- .slide: style="font-size: 18px;" -->
<div style="clear:both; display:flex; flex-wrap: nowrap;">
<div>
#### User Stories
> 🎯 I want to be able to know whether a target could be seen if I place a sniper at a location. **==I want the result in real time==**. [name=Military intelligence] [time=2024]
> 🚁 I want to be able to ==**use a mobile device**== to find all the possible landing places for a helicopter within some radius of the current location, **and I want it quick**. [name=RAF] [time=2024]
> 🚑 I want to be able to know the area I could reach from a given location within a given duration; I will need this to process a **==big amount of data==**. [name=NHS] [time=2024]
...etc
</div>
<div>

<small>Fig. viewshed</small>

<small>Fig. hillshed</small>

<small>Fig. service area analysis</small>
</div>
</div>
Note:
The project is motivated by increasing customer demand. Here are some user stories from the product team.
The demand is actually not about making a new product,
but more about making it faster and usable on mobile devices.
For example, the request from the military is simply viewshed, which is available in ArcGIS Pro, but it takes over a minute to compute a reasonably big dataset.
Similarly, the RAF rescue team is asking for something called hillshed, an algorithm that computes the gradient over a terrain, which is also available in ArcGIS Pro,
but only on Windows desktop.
---
## Requirement engineering
<!-- .slide: style="font-size: 18px;" -->
<div style="clear:both; display:flex; flex-wrap: nowrap;">
<div id="right">
#### Tools
GitHub Wiki, C4 diagrams, UML, Gantt chart

<small>Fig. feasibility study </small>
</div>
<div id="right">
### Process
1. Requirement specification analysis
    - multi-platform
    - fast
    - process big data
2. Feasibility study
    - WebGL / CUDA / WebGPU
    - integration feasibility
    - code sharing possibility
    - time constraints & deliverables
3. Document the potential risks
    - precision problems
    - legal issues
4. Review and seek approval
</div>
</div>
Note:
So after a few stakeholder meetings, we started the requirements engineering process.
We first studied the user stories and requirements
and realised that the gist of the problem is performance and multi-platform support.
Then we kicked off the feasibility study.
We conducted research on the choice of GPU technology.
We consulted the internal docs and the official WebGPU docs to estimate
how likely the integration could be done
and how likely we could share the compute shader code between different platforms.
Although we were not entirely sure,
we were confident that we could bring it to life after building a few PoC apps to clear up the uncertainties.
We documented the study and the potential risks on the GitHub wiki, used a few very high-level component diagrams to illustrate the architecture, and used a Gantt chart to estimate the time and deliverables.
With a few back-and-forth meetings with the product owner and stakeholders, we reached an agreement.
---
### Epics
<!-- .slide: style="font-size: 24px;" -->
1. Integrate `WebGPU` into the ESRI runtime core (RTC) binary as an extension.
2. Use the extension to develop a series of PoC applications on multiple platforms
    - Windows, macOS, Linux, ==Android==, iOS, ==Web==
3. GPU fallback
    - CPU SIMD
    - algorithm improvements
Note:
Here are the epics we boiled down from the requirements.
First of all,
we are going to integrate WebGPU into the RTC as an extension so that it can be ported to other platforms.
Then we are going to build a PoC for each platform to confirm the extension works well and the compute shader can be shared.
Finally, we will handle the less ideal cases:
for example, when the GPU is not available or precision matters, we propose to use traditional CPU SIMD as a substitute for parallel computation, or to put effort into improving the existing algorithms.
With these epics, we started our first sprint.
---
### Software engineering process
<!-- .slide: style="font-size: 18px;" -->
<div style="clear:both; display:flex; flex-wrap: nowrap;">
<div id="right">
### Process
- Design
  - C4 & UML diagrams
  - Backlog creation
- Analyse
  - Priority
  - Dependency
- Implementation
- Testing
- Release & Maintenance
</div>
<div id="right">
### Tools
- **Version Control**: Git, GitHub
- **Build system**: CMake, Gradle
- **IDE**: VS Code (web), Visual Studio (WPF), Android Studio (Android)
- **OS**: Windows, macOS, Android
- **Language**: C++, Kotlin, C#, JavaScript, `wgsl`, Python
- **Lib**: ESRI Kotlin & .NET SDKs, ESRI internal code generator, `dawn`, `jni`, `ndk`, `numpy`, `scipy`, `matplotlib` ... etc
- **Test**: Jenkins
- **Doc**: GitHub Wiki
</div>
</div>
Note:
We spent our first sprint on design, creating the backlog, and analysing the work dependencies and priorities. We used more detailed C4 and UML diagrams to explain the component relationships and how the core code could be called from different languages.
After that we started the implementation, beginning with the extension integration as it is the source of all dependencies.
Once it was done and passed all the build tests on Jenkins, we started the cross-platform PoC development.
I was in charge of the Android and web platforms.
The PoC we built puts an animated Mandelbrot set on a map application.
Basically, we use WebGPU to compute an image from some parameters, and by adjusting the parameters each frame, we get an animation.
---
#### PoC: Mandelbrot App
<!-- .slide: style="font-size: 18px;" -->
- every pixel is computed by a 10k-iteration loop at runtime (see the sketch below)
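
A simplified sketch of the kind of `wgsl` compute kernel behind the PoC (names and constants are illustrative, not the actual ESRI shader): each invocation runs the escape-time loop for one pixel, capped at 10k iterations.

```wgsl
// Illustrative sketch only, not the actual shader.
@group(0) @binding(0) var<storage, read_write> pixels: array<u32>;

const WIDTH: u32 = 128u;
const MAX_ITER: u32 = 10000u;   // the "10k loop" per pixel

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    if (id.x >= WIDTH || id.y >= WIDTH) { return; }
    // Map the pixel to a point c in the complex plane.
    let c = vec2<f32>(f32(id.x), f32(id.y)) / f32(WIDTH) * 3.0 - vec2<f32>(2.0, 1.5);
    var z = vec2<f32>(0.0, 0.0);
    var i: u32 = 0u;
    loop {
        if (i >= MAX_ITER || dot(z, z) > 4.0) { break; }
        z = vec2<f32>(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;
        i = i + 1u;
    }
    // Grey-scale escape count; the real PoC maps it through a colour palette.
    let v = u32(255.0 * f32(i) / f32(MAX_ITER));
    pixels[id.y * WIDTH + id.x] = (255u << 24u) | (v << 16u) | (v << 8u) | v;
}
```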
<div style="clear:both; display:flex; flex-wrap: nowrap;">
<div id='right'>
<small>Android: GPU, 128×128 px</small>

</div>
<div id='right'>
<small>Android: CPU, 128×128 px</small>

</div>
</div>
Note:
So you may ask why this is important.
The image overlaid on the map is simply for illustration and validation;
the important thing is that
we confirmed the extension's cross-platform ability and
that the compute shader could be shared.
The image creation process mimics some heavy computation.
For example,
there are over a million pixels in a 1k-by-1k image,
and each of them is computed in a loop that runs for ten thousand iterations.
It is a very good stress test for estimating the performance difference between the CPU and the GPU.
---
<!-- .slide: style="font-size: 18px;" -->
<div style="clear:both; display:flex; flex-wrap: nowrap;">
<div id="right">
### Project management
### Tool

### Process
```mermaid
graph TD
%% style %%
classDef fix padding:5px, line-height:20px
A(Sprint Planning):::fix --> B(Daily Stand Up):::fix
B --> C(Sprint Review):::fix
B -- 2 weeks --> B
C --> D(Sprint Retrospective):::fix
D --> A
```
<small>Fig. Scrum loop</small>
</div>
<div id="right">
### Effective working with others
- Bi-weekly Sprint Planning
  - Define the sprint goal
  - Define roles and responsibilities
  - Create/select backlog items based on priority
  - Define acceptance criteria
  - Estimate and break down work with story points
- Daily Stand-Up
  - Update progress and direction
  - Seek/offer help
  - Report blockers
  - Discuss issues / create backlog items
  - Update the sprint board
- Sprint Review & Retrospective
  - Review the sprint goal and what happened
  - Demonstrate increments or bugs
  - Adjust the backlog
  - Gather members' feedback
  - Documentation
  - Knowledge sharing
</div>
</div>
Note:
For project management, we used Scrum, so things are very standard. It is an iterative process.
It starts with a sprint planning meeting.
In the meeting, we create and select backlog items based on their priority and the available manpower.
As soon as we reach an agreement on roles and responsibilities, we set the sprint goal.
Then we kick off the two-week sprint.
We have our daily stand-up meeting, where we update progress and report any blockers.
We seek and offer help in the daily stand-up, and keep the sprint board up to date to facilitate communication.
After the two weeks, we have a sprint review and retrospective on what we have achieved and what has gone wrong.
It is also where we express our concerns about the project and share our knowledge. For example, I did a few show-and-tells on how to build and port RTC to Android, which helped other members build and test the app on their own machines.
---
<!-- .slide: style="font-size: 18px;" -->
<div style="clear:both; display:flex; flex-wrap: nowrap;">
<div id="right">
### Challenge
- No prior experience in `WebGPU` or Android development
- Cross-platform problems
  - color differs between Windows and Android
  - same function, different results
    - passed in debug builds, both remote and local
    - failed in release builds locally
</div>
<div id="right">
### Solution
- WebGPU
  - learn `wgsl`
  - follow Google's hands-on tutorial and use it as a template
- Android dev
  - learn Kotlin through the official online course
  - use the existing internal Android repo as a template, starting with some hello-world examples
- Cross-platform problems
  - unify the pixel format across platforms (see the sketch below)
</div>
</div>
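
A hedged sketch of the kind of fix (illustrative, not the actual code): pack the colour into one explicit byte order in the shader, so the host code on every platform interprets the buffer the same way regardless of its default pixel format (RGBA vs BGRA).

```wgsl
// Illustrative only: fix the byte order in the shader instead of relying on
// the platform's default pixel format.
fn pack_rgba8(c: vec4<f32>) -> u32 {
    let r = u32(clamp(c.r, 0.0, 1.0) * 255.0);
    let g = u32(clamp(c.g, 0.0, 1.0) * 255.0);
    let b = u32(clamp(c.b, 0.0, 1.0) * 255.0);
    let a = u32(clamp(c.a, 0.0, 1.0) * 255.0);
    return (a << 24u) | (b << 16u) | (g << 8u) | r;   // little-endian RGBA8
}
```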
Note:
This project involved quite a few new things for me, but it was no big deal; with the help of Google and ChatGPT, I learned by doing.
The bigger problem is that there are very few resources for cross-platform development, either online or within the company, especially since the Android team is primarily in the US.
There is one bug worth mentioning, which made us worry about whether we could really share the shader as-is between platforms.
As you have seen in the previous slide, the Mandelbrot set is kind of blue. It was originally yellow when it was ported to Android, but on some platforms, for example Windows, it was blue,
even though it was the same shader and the same memory output.
Since we do a lot of bitwise operations in the shader, I thought of the big-endian/little-endian problem and checked the byte order on each platform, but everything was normal.
It was not until I dug really deep into the official Android docs that I found a very tiny remark about the default pixel format;
then I found out the Android default pixel format is different from Windows.
For example, on Windows it is RGBA while on Android it is BGRA.
Since we set the 2nd and 3rd bytes to 255, it shows blue on Windows but yellow on Android.
As soon as we knew the reason, we simply needed to unify the pixel format in this context to ensure the same color from the shader, and it does not compromise the code-sharing purpose.
---
Thank you.
{"title":"RSE Interview Presentation","slideOptions":"{\"theme\":\"white\",\"spotlight\":{\"enabled\":false}}","contributors":"[{\"id\":\"97b569fe-8f84-4ee6-8504-3a4416110ae4\",\"add\":51676,\"del\":36064}]"}