### Continuous Encoder
Meta Temporal Point Processes
https://arxiv.org/pdf/2301.12023.pdf
code and reviews: https://openreview.net/forum?id=QZfdDpTX1uM
Simplified State Space Layers for Sequence Modeling
https://arxiv.org/abs/2208.04933
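A minimal sketch of the diagonal linear state-space recurrence these layers scan over (assumptions: real diagonal A and zero-order-hold discretization; names are illustrative, not the S5 reference code):

```python
import numpy as np

def diagonal_ssm_scan(u, a, b, c, dt):
    """Diagonal SSM x' = a*x + b*u, y = c.x, discretized with zero-order hold.

    u: (T,) input; a, b, c: (N,) per-channel parameters (a < 0 for stability).
    """
    a_bar = np.exp(a * dt)           # exact transition for diagonal a
    b_bar = (a_bar - 1.0) / a * b    # ZOH input term
    x = np.zeros_like(a)
    ys = []
    for u_k in u:                    # S5 replaces this loop with a parallel scan
        x = a_bar * x + b_bar * u_k  # elementwise, O(N) per step
        ys.append(c @ x)             # linear readout
    return np.array(ys)

y = diagonal_ssm_scan(np.sin(np.linspace(0, 6, 50)),
                      a=-np.linspace(0.5, 4.0, 8), b=np.ones(8), c=np.ones(8) / 8, dt=0.1)
```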
Modeling Irregular Time Series with Continuous Recurrent Units
https://proceedings.mlr.press/v162/schirmer22a/schirmer22a.pdf
Closed-form Continuous-time Neural Networks
https://arxiv.org/pdf/2106.13898.pdf
Semi-supervised sequence classification through change point detection
https://arxiv.org/pdf/2009.11829.pdf
Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture
https://arxiv.org/pdf/2301.08243.pdf
Continuous-time convolutions model of event sequences
https://arxiv.org/pdf/2302.06247.pdf
Diagonal State Spaces are as Effective as Structured State Spaces
https://proceedings.neurips.cc/paper_files/paper/2022/file/9156b0f6dfa9bbd18c79cc459ef5d61c-Paper-Conference.pdf
Time-series Generative Adversarial Networks
https://proceedings.neurips.cc/paper_files/paper/2019/file/c9efe5f26cd17ba6216bbe2a7d26d490-Paper.pdf
Neural Continuous-Discrete State Space Models for Irregularly-Sampled Time Series
https://arxiv.org/pdf/2301.11308.pdf
## Modern RNN
RWKV: Reinventing RNNs for the Transformer Era
https://arxiv.org/abs/2305.13048
RRWKV: Capturing Long-range Dependencies in RWKV
https://arxiv.org/pdf/2306.05176.pdf
Retentive Network: A Successor to Transformer for Large Language Models
https://arxiv.org/pdf/2307.08621.pdf
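For reference, the recurrent form of retention per head: S_n = γ S_{n-1} + k_nᵀ v_n, o_n = q_n S_n. A tiny sketch (shapes and names illustrative; scaling and xpos-style decay on q/k omitted):

```python
import torch

def recurrent_retention(q, k, v, gamma: float):
    """Recurrent retention, one head: S_n = gamma*S_{n-1} + k_n^T v_n, o_n = q_n S_n.

    q, k, v: (T, d) projections; gamma in (0, 1) is the per-head decay.
    Equivalent to the parallel form o = (q k^T * D) v with decay mask D.
    """
    T, d = q.shape
    state = torch.zeros(d, d)
    outs = []
    for n in range(T):
        state = gamma * state + torch.outer(k[n], v[n])  # rank-1 state update
        outs.append(q[n] @ state)                        # read out with the query
    return torch.stack(outs)
```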
## Contrastive NLP
https://github.com/ryanzhumich/Contrastive-Learning-NLP-Papers
### Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
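https://arxiv.org/abs/1908.10084
The core recipe: a siamese BERT encoder with mean pooling over token embeddings, compared by cosine similarity (SBERT additionally fine-tunes on NLI/STS, which this sketch skips). A sketch with Hugging Face `transformers`; the checkpoint name is just an example:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")   # any BERT-like encoder
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences):
    """Mean-pool token embeddings into one fixed-size vector per sentence."""
    batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state            # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)            # (B, H)

a, b = embed(["A cat sits outside.", "A feline rests outdoors."])
print(torch.cosine_similarity(a, b, dim=0))                # high for paraphrases
```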
## Contrastive and self-supervised methods
- A Survey of Self-Supervised Learning from Multiple Perspectives: Algorithms, Theory, Applications and Future Trends https://arxiv.org/pdf/2301.05712.pdf
### Denoising Diffusion Autoencoders are Unified Self-supervised Learners
https://arxiv.org/abs/2303.09769
## Event sequence representations
### TS2Vec: Towards Universal Representation of Time Series
### Time Series Contrastive Learning with Information-Aware Augmentations
### Time-Series Representation Learning via Temporal and Contextual Contrasting (TS-TCC)
### A Survey on Time-Series Pre-Trained Models
### CoLES: Contrastive Learning for Event Sequences with Self-Supervision
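CoLES builds positive pairs by sampling sub-sequences of the same client's event stream and trains a contrastive objective on their embeddings. A generic InfoNCE sketch over two views per sequence (encoder and names are placeholders; the original CoLES losses also include margin-based variants):

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature: float = 0.1):
    """Contrastive loss: row i of z1 matches row i of z2; other rows are negatives.

    z1, z2: (B, D) embeddings of two sub-sequence views of the same B sequences.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature          # (B, B) cosine similarities
    labels = torch.arange(z1.size(0))         # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```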
### DuETT: Dual Event Time Transformer for Electronic Health Record
### Contrastive self-supervised sequential recommendation with robust augmentation
### Improving contrastive learning with model augmentation
### Uniform Sequence Better: Time Interval Aware Data Augmentation for Sequential Recommendation
### Contrastive Learning for Representation Degeneration Problem in Sequential Recommendation
### ContrastVAE: Contrastive Variational Autoencoder for Sequential Recommendation
### Bootstrapping user and item representations for one-class collaborative filtering
### CL4CTR: A Contrastive Learning Framework for CTR Prediction
### Siamese Masked Autoencoders
### BYOL for Audio: Self-Supervised Learning for General-Purpose Audio Representation
### BYOL-S: Learning Self-Supervised Speech Representations by Bootstrapping
### Audio Barlow Twins: Self-Supervised Audio Representation Learning
### Non-contrastive approaches to similarity learning: positive examples are all you need
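Common to the BYOL-style entries above: an online encoder plus predictor regresses the output of a slowly moving EMA "target" encoder, with no negatives. A minimal sketch of the target update and loss (all module names illustrative; the paper symmetrizes the loss over both views):

```python
import copy
import torch
import torch.nn.functional as F

online = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 16))
predictor = torch.nn.Linear(16, 16)            # only the online branch has a predictor
target = copy.deepcopy(online)                 # EMA copy, never trained by gradients
for p in target.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def ema_update(tau: float = 0.99):
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.mul_(tau).add_((1 - tau) * p_o)    # slow exponential moving average

def byol_loss(view1, view2):
    """Predict the target embedding of one view from the online embedding of the other."""
    p = F.normalize(predictor(online(view1)), dim=1)
    with torch.no_grad():
        z = F.normalize(target(view2), dim=1)
    return (2 - 2 * (p * z).sum(dim=1)).mean()  # == MSE of L2-normalized vectors
```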
## KSOM
### Topological Neural Discrete Representation Learning à la Kohonen
### SOM-VAE: Interpretable Discrete Representation Learning on Time Series
Time2Vec: Learning a Vector Representation of Time 2019 c: 222
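The Time2Vec embedding is small enough to state inline: t2v(τ)[0] = ω₀τ + φ₀ (linear trend) and t2v(τ)[i] = sin(ωᵢτ + φᵢ) for i ≥ 1, with learnable ω, φ. A sketch:

```python
import torch

class Time2Vec(torch.nn.Module):
    """t2v(t)[0] = w0*t + b0 (trend); t2v(t)[i] = sin(wi*t + bi) for i >= 1 (periodic)."""
    def __init__(self, dim: int):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(dim))
        self.b = torch.nn.Parameter(torch.randn(dim))

    def forward(self, t):                       # t: (...,) raw timestamps
        v = self.w * t.unsqueeze(-1) + self.b   # (..., dim) affine in time
        return torch.cat([v[..., :1], torch.sin(v[..., 1:])], dim=-1)
```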
## TPP
Hierarchical Contrastive Learning for Temporal Point Processes
Recurrent Marked Temporal Point Processes: Embedding Event History to Vector 2016 c: 629
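RMTPP's conditional intensity after the j-th event is λ*(t) = exp(vᵀh_j + w(t − t_j) + b), with h_j the RNN hidden state; integrating it gives a closed-form log-likelihood for the next inter-event time. A sketch (symbols as in the paper, code names illustrative):

```python
import numpy as np

def rmtpp_log_density(dt, past_influence, w, b):
    """Log-density of the next inter-event time dt under RMTPP.

    intensity: lam(t) = exp(past_influence + w*dt + b), past_influence = v . h_j;
    log f(dt) = log lam(dt) - integral of lam from 0 to dt (closed form below).
    """
    log_lam = past_influence + w * dt + b
    compensator = (np.exp(past_influence + b) - np.exp(log_lam)) / w
    return log_lam + compensator               # maximize over observed dt's

print(rmtpp_log_density(dt=0.5, past_influence=0.2, w=-0.1, b=0.0))
```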
Multi-Time Attention Networks for Irregularly Sampled Time Series 2021 c: 78
https://github.com/reml-lab/mTAN
Modeling Irregular Time Series with Continuous Recurrent Units 2021 c: 19
https://arxiv.org/pdf/2111.11344.pdf
Simplified State Space Layers for Sequence Modeling 2022 c: 29
CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation 2021 c: 105
INITIATOR: Noise-contrastive Estimation for Marked Temporal Point Process 2018 c: 33
Point Process Flows 2019 c: 11
https://arxiv.org/pdf/1910.08281.pdf
Neural Spatio-Temporal Point Processes 2020 c: 61
https://arxiv.org/pdf/2011.04583.pdf
Transformer Hawkes Process
Latent ODEs for Irregularly-Sampled Time Series
Transformer Embeddings of Irregularly Spaced Events and Their Participants