# Adobe Research
## Target Conference
- September: AAAI, CHI (ACM SIGCHI)
- October: PerCom (IEEE International Conference on Pervasive Computing and Communications)
## Artificial Design Intelligence (ADI) in Visual Design
### Past works
#### Contextual bandits as online recommendation system
- https://acris.aalto.fi/ws/portalfiles/portal/35738592/ELEC_Koch_May_AI_ACM_SIGHI.pdf
- https://dl.acm.org/doi/abs/10.1145/2505515.2514700?casa_token=W62PTQYpWpYAAAAA:apOWf_xMTcF3eVo3CYqYuUk3dRKfDM6RluMipDFyXZg89g09wRr82IP-kltJ8eWSsrgJICsKjFDkXX4
### Preliminary Design
1. **Inverse Reinforcement Learning** to estimate the reward function while designing webpages
- Allows us to model more complicated actions beyond selecting colors/styles, etc.
- Need to collect data (tracking on Creative Cloud?)
2. Use the learned reward to learn a **Markov Decision Process**
- Contextual bandits don't consider long-term consequences, but in reality choosing an action at any time affects the final result.
3. Simpler model: Contextual Bandit with explainability (see the sketch below)
- https://www.ijcai.org/proceedings/2019/0532.pdf
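A minimal sketch of option 3's contextual-bandit loop, assuming a standard disjoint LinUCB model over hand-crafted design-context features; the arm/feature setup is purely illustrative (not tied to any Creative Cloud data), and the explainability mechanism from the IJCAI paper is not reproduced here.

```python
import numpy as np

class LinUCB:
    """Disjoint LinUCB: one linear model per candidate design action (arm)."""

    def __init__(self, n_arms, n_features, alpha=1.0):
        self.alpha = alpha  # exploration strength
        self.A = [np.eye(n_features) for _ in range(n_arms)]    # per-arm Gram matrices
        self.b = [np.zeros(n_features) for _ in range(n_arms)]  # per-arm reward vectors

    def select(self, x):
        """Pick the arm with the highest upper confidence bound for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Reward could be 1 if the user keeps the suggestion, 0 if they remove it."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Hypothetical usage: context = features of the current page, arms = candidate CSS tweaks.
bandit = LinUCB(n_arms=5, n_features=8)
context = np.random.rand(8)   # stand-in for extracted page/CSS features
arm = bandit.select(context)
bandit.update(arm, context, reward=1.0)
```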
#### Justification
- The reward function may be hard to write down, hence need to learn it through data
- Contextual Bandits do not consider long-term consequences (not true in design!)
- Models the transition of state (i.e. the current design + recommendation + user customization)
#### Ideas
- State: Page content, css styling
- May need feature extraction
- Actions:
- css styles of added component (e.g. font-size, background color, border, padding, margin)
- Sort actions in descending order based on predicted likelihood (softmax prob.)
- More complicated actions with IRL, e.g. recommending adding a new component itself.
- Reward:
- Higher if an action with higher likelihood is chosen
- Higher if not removed by user
- Update policy: Based on reward
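A minimal sketch of the state/action/reward bookkeeping above, assuming features have already been extracted; the class and field names are placeholders, and the reward shaping is just one possible instantiation of the bullets.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class DesignState:
    """Current design: page content plus CSS styling (already feature-extracted)."""
    features: np.ndarray  # e.g. embedding of page content + current CSS values

@dataclass
class DesignAction:
    """A candidate suggestion: CSS styles of a component to add/modify."""
    css: dict             # e.g. {"font-size": "14px", "background-color": "#fff"}
    score: float          # model score for this action in the current state

def rank_actions(actions):
    """Sort candidate actions by softmax probability of their scores (descending)."""
    scores = np.array([a.score for a in actions])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    order = np.argsort(-probs)
    return [actions[i] for i in order], probs[order]

def reward(action_prob, kept_by_user):
    """Higher when a high-likelihood action was chosen and the user did not remove it."""
    return action_prob + (1.0 if kept_by_user else -1.0)
```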
### Use Cases
- Design suggestions based on user interactions
- It would probably be difficult to get data for this, at least in the case of emails
- Design suggestions based on the current sequence of elements and how they are displayed, e.g., text, image, link, text, ...
- e.g., if someone is designing an email, then we can recommend the next design suggestion based on the previous elements.
- The best part of this is that we can use the collection of emails for training, and then evaluate the model by holding out one of those elements (see the sketch after this list).
- Suggest alternative data queries to the user based on sequence of queries executed (query logs)
- Dashboard suggestions based on website traffic logs (TODO: links on Adobe Experience Platform (AEP))
- Website ad suggestions based on visitor logs
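A minimal sketch of the hold-out idea for the email use case, assuming each email is reduced to a sequence of element types; a simple bigram counter stands in for the actual recommender, and all element names are made up.

```python
from collections import defaultdict, Counter

def train_bigram(sequences):
    """Count next-element frequencies per previous element type."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def recommend(counts, prev, k=3):
    """Top-k next-element suggestions given the previously placed element."""
    return [elem for elem, _ in counts[prev].most_common(k)]

def holdout_accuracy(counts, test_sequences, k=3):
    """Fraction of test emails whose held-out last element appears in the top-k suggestions."""
    hits, total = 0, 0
    for seq in test_sequences:
        if len(seq) < 2:
            continue
        prefix, target = seq[:-1], seq[-1]
        hits += target in recommend(counts, prefix[-1], k)
        total += 1
    return hits / max(total, 1)

# Toy example with made-up element sequences.
train = [["text", "image", "text", "link"], ["image", "text", "link"]]
test = [["text", "image", "text"]]
model = train_bigram(train)
print(recommend(model, "text"), holdout_accuracy(model, test))
```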
### Baselines
- Metric Learning (recommend based on similarity between actions; see the sketch after this list)
- Offline Learning (train a sequence model)
- Other contextual bandit recommendation work (there are a few papers)
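A minimal sketch of the metric-learning baseline, assuming some embedding (learned or hand-crafted) already maps candidate actions and previously kept actions to vectors; the function name and scoring rule are illustrative.

```python
import numpy as np

def similarity_baseline(candidate_vecs, kept_vecs, k=3):
    """Rank candidate actions by their maximum cosine similarity to actions
    the user previously kept; return the indices of the top-k candidates."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    scores = [max(cos(c, kept) for kept in kept_vecs) for c in candidate_vecs]
    return np.argsort(scores)[::-1][:k]
```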
### Potential Problems
- How to initialize?
- How to benchmark?
- Can analyze regret while interacting with the user (see the sketch below)
- Can also use time and hold out future interactions for evaluation
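A minimal sketch of the regret analysis, assuming the reward of the best available action at each step can be estimated (e.g. from logged interactions); the rewards here are placeholder numbers.

```python
def cumulative_regret(chosen_rewards, best_rewards):
    """Regret at step t = (reward of best available action) - (reward of chosen action),
    accumulated over the interaction; lower is better."""
    regret, total = [], 0.0
    for chosen, best in zip(chosen_rewards, best_rewards):
        total += best - chosen
        regret.append(total)
    return regret

# e.g. cumulative_regret([0.2, 1.0, 0.5], [1.0, 1.0, 1.0]) -> [0.8, 0.8, 1.3]
```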
### Motivating Examples
### Available Open Datasets
## Meeting Notes
### 4/20 Notes
#### Email design
- Recommend components to add while designing
  - Use email dataset for training. Simulate a top-to-bottom approach while designing.
- Recommend to change the layout if too messy
  - How to implement?
- Main questions:
  - Other applications beyond recommending?
#### Visualization
- Given a chart, select the most interesting annotations to display (i.e. ranking the annotations)
#### AEP dashboard
- Choose which options to display based on web logs
  - Can probably be achieved by ranking options
#### TODO
- Identify several potential projects to work on
- Datasets
### 05/03 Notes
#### Paper 1: FaceOff: Assisting the Manifestation Design of Web Graphical User Interface
- https://acbull.github.io/pdf/wsdm19-faceoff.pdf
- Segments the website and reformats the segmented parts
#### Paper 2: GUIGAN: Learning to Generate GUI Designs Using Generative Adversarial Networks
- https://arxiv.org/pdf/2101.09978.pdf
- GAN to generate new HTML image fragment
#### Style Transfer on HTML fragment
- http://cs231n.stanford.edu/slides/2022/lecture_8_ruohan.pdf
- Input: an HTML fragment and a set of palette images representing the brand; output: a brand-aware HTML fragment
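A minimal sketch of the style-loss idea from the cs231n reference, assuming the HTML fragment and a brand image have been rendered to normalized image tensors; the Gram-matrix formulation below is the standard Gatys-style loss (with a VGG backbone chosen here for illustration), not anything specific to Adobe's pipeline.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained VGG features as the perceptual backbone (requires torchvision >= 0.13).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram_matrix(feat):
    """Channel-by-channel correlations of a feature map; captures style/texture."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(rendered_fragment, brand_image, layers=(3, 8, 15)):
    """Compare Gram matrices of the rendered HTML fragment and a brand image at a few
    VGG layers; minimizing this pulls the fragment toward the brand's style.
    Both inputs are (B, 3, H, W) tensors normalized for VGG."""
    loss, x, y = 0.0, rendered_fragment, brand_image
    for i, layer in enumerate(vgg):
        x, y = layer(x), layer(y)
        if i in layers:
            loss = loss + F.mse_loss(gram_matrix(x), gram_matrix(y))
        if i >= max(layers):
            break
    return loss
```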
### 05/09 Notes
- Possible idea on layout recommendation
- https://dl.acm.org/doi/pdf/10.1145/3097983.3098184
- Given a style and a website, transfer the style to the website
### 05/23 Notes
- Paper 1 (CHI'22): [GANSpiration: Balancing Targeted and Serendipitous Inspiration in User Interface Design with Style-Based Generative Adversarial Network](https://arxiv.org/pdf/2203.03827.pdf)
- Unlike GUIGAN (component-by-component), it generates the user interface as a whole, based on a given image.
- Paper 2 (CVPR'18): [High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs](https://arxiv.org/pdf/1711.11585.pdf)
- a.k.a. Pix2PixHD
- Generates photo-realistic images given semantic annotations
- Ideas:
1. Can we generate website designs (or app GUIs, etc.) based on **style + semantic annotations**?
2. Possible workflow:
a. The user specifies a style by uploading an image and chooses from several potential templates
###### tags: `Notes`