---
title: Noteworthy points about Elasticsearch and Atlas Search
tags: Search, Elasticsearch, atlas search
description: ES and AS note
type: slide
---
# Noteworthy points about Elasticsearch and Atlas Search
---
## Architecture on both sides
Both products use Apache Lucene as their search and indexing engine, so they share the same core concepts: **text analysis, analyzers, character filters, tokenizers, and token filters.**
----
### Atlas Search

https://www.mongodb.com/docs/atlas/atlas-search/atlas-search-overview/
----
### Atlas Search
1. Creates Atlas Search indexes based on the rules in the index definition for the collection.
2. Monitors change streams for the current state of the documents and index changes for the collections for which you defined Atlas Search indexes.
3. Processes Atlas Search queries and returns matching documents.
----
### Elasticsearch

https://www.elastic.co/pdf/architecture-best-practices.pdf
---
## Text Analysis in Apache Lucene
----
### Text Analysis
Text Analysis:
> Text analysis enables Elasticsearch to perform full-text search, where the search returns all relevant results rather than just exact matches.
> Elasticsearch performs text analysis when **indexing or searching text fields**.
> If you search for **Quick fox jumps**, you probably want the document that contains **A quick brown fox jumps over the lazy dog**,
> and you might also want documents that contain related words **like fast fox or foxes leap**.
This is the core of full-text search, and the reason Lucene-based engines search so well.
----
### Analyzer
Analyzer:
> An analyzer — whether built-in or custom — is just a package which contains three lower-level building blocks: **character filters, tokenizers, and token filters.**
----
### Character filters
Character filters:
> A character filter receives the original text as a stream of characters and can transform the stream by **adding, removing, or changing characters**.
> For instance, a character filter could be used to **convert Hindu-Arabic numerals (٠١٢٣٤٥٦٧٨٩) into their Arabic-Latin equivalents (0123456789), or to strip HTML elements like \<b> from the stream**.
An analyzer may have **zero or more character filters**, which are applied in order.
----
### Character filters
```
GET /_analyze
{
  "tokenizer": "keyword",
  "char_filter": [ "html_strip" ],
  "text": "<p>I'm so <b>happy</b>!</p>"
}
> [ \nI'm so happy!\n ]
```
```
GET /_analyze
{
  "tokenizer": "keyword",
  "char_filter": [
    {
      "type": "mapping",
      "mappings": [
        "٠ => 0",
        "١ => 1",
        "٢ => 2",
        "٣ => 3",
        "٤ => 4",
        "٥ => 5",
        "٦ => 6",
        "٧ => 7",
        "٨ => 8",
        "٩ => 9"
      ]
    }
  ],
  "text": "My license plate is ٢٥٠١٥"
}
> [ My license plate is 25015 ]
```
----
### Tokenizer
Tokenizer:
> Analysis makes full-text search possible through **tokenization**: breaking a text down into smaller chunks, called tokens. In most cases, these tokens are individual words.
> <br>
> the quick brown fox jumps → the, quick, brown, fox, jumps
> The tokenizer is **also responsible for recording the order or position of each term** and the start and end character offsets of the original word which the term represents.
An analyzer must **have exactly one tokenizer.**
----
### Tokenizer
- Word Oriented Tokenizers are usually used for tokenizing full text into individual words.
  - Standard Tokenizer, Letter Tokenizer, Lowercase Tokenizer, etc.
- Partial Word Tokenizers break up text or words into small fragments, for partial word matching.
  - N-Gram Tokenizer, Edge N-Gram Tokenizer
- Structured Text Tokenizers are usually used with structured text like identifiers, email addresses, zip codes, and paths, rather than with full text.
  - Keyword Tokenizer, Pattern Tokenizer, etc.
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-tokenizers.html#analysis-tokenizers
----
### Tokenizer
- N-Gram Tokenizer
  The ngram tokenizer breaks text up into words when it encounters any of a list of specified characters (e.g. whitespace or punctuation), then emits n-grams of each word: a **sliding window of contiguous letters**,
  **e.g. quick → [qu, ui, ic, ck] (min_gram and max_gram both 2).**
- Edge N-Gram Tokenizer
  The edge_ngram tokenizer breaks text up into words when it encounters any of a list of specified characters (e.g. whitespace or punctuation), then emits n-grams of each word anchored to **the start of the word**,
  **e.g. quick → [q, qu, qui, quic, quick] (min_gram 1, max_gram ≥ 5).**
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-ngram-tokenizer.html
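Both behaviors can be checked directly with the `_analyze` API; a quick sketch, with inline tokenizer settings chosen to match the examples above:

```
GET /_analyze
{
  "tokenizer": { "type": "ngram", "min_gram": 2, "max_gram": 2 },
  "text": "quick"
}
> [ qu, ui, ic, ck ]

GET /_analyze
{
  "tokenizer": { "type": "edge_ngram", "min_gram": 1, "max_gram": 5 },
  "text": "quick"
}
> [ q, qu, qui, quic, quick ]
```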
----
### Token Filter
Token filters:
> A token filter **receives the token stream and may add, remove, or change tokens**.
For example,
> - a lowercase token filter converts all tokens to lowercase,
> - a stop token filter removes common stop words (the, and, etc.) from the token stream,
> - a synonym token filter introduces synonyms into the token stream.
Token filters **are not allowed to change the position or character offsets** of each token.
An analyzer may have **zero or more token filters, which are applied in order.**
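The ordering is easy to observe with the `_analyze` API, here using the built-in `lowercase` and `stop` filters (the default stop list is English, so "the" is dropped after lowercasing):

```
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [ "lowercase", "stop" ],
  "text": "The Quick Brown Fox"
}
> [ quick, brown, fox ]
```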
----
### Token Graph
When a tokenizer converts a text into a stream of tokens, it also records the following:
* The position of each token in the stream
* The positionLength, the number of positions that a token spans
Using these, you can create a directed acyclic graph, called a token graph, for a stream. **In a token graph, each position represents a node**. Each token represents an edge or arc, pointing to the next position.

https://www.elastic.co/guide/en/elasticsearch/reference/current/token-graphs.html
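A multi-position token can be produced with the `synonym_graph` token filter; a minimal sketch (the inline synonym rule `ny, new york` is made up for illustration):

```
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [
    { "type": "synonym_graph", "synonyms": [ "ny, new york" ] }
  ],
  "text": "ny is big"
}
```

Here `new york` is injected at the same position as `ny`, and `ny` gets a positionLength of 2, so the token graph stays intact for phrase queries.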
----
### Token Graph
Synonyms ([With synonyms token filter](https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-synonym-tokenfilter.html))

Multi-position tokens

---
## Our use cases
----
### Index/Component Template
```
PUT _component_template/keyword_component_tmpl
{
  "template": {
    "settings": {
      "index": {
        "analysis": {
          "filter": {
            "limited_length": {
              "length": "32766",
              "type": "truncate"
            },
            "ngram_filter": {
              "type": "ngram",
              "min_gram": "2",
              "max_gram": "10"
            },
            "edge_ngram_filter": {
              "type": "edge_ngram",
              "min_gram": "1",
              "max_gram": "30"
            }
          },
          "analyzer": {
            "lowercase_keyword": {
              "filter": [
                "lowercase",
                "limited_length"
              ],
              "type": "custom",
              "tokenizer": "keyword"
            },
            "lowercase_keyword_ngram": {
              "filter": [
                "lowercase",
                "limited_length",
                "ngram_filter"
              ],
              "type": "custom",
              "tokenizer": "keyword"
            },
            "ngram_analyzer": {
              "filter": [
                "lowercase"
              ],
              "char_filter": [
                "html_strip",
                "icu_normalizer"
              ],
              "type": "custom",
              "tokenizer": "ngram_1_1_tokenizer"
            }
          },
          "tokenizer": {
            "ngram_1_1_tokenizer": {
              "token_chars": [],
              "min_gram": "1",
              "type": "ngram",
              "max_gram": "1"
            }
          }
        },
        "max_ngram_diff": "30",
        "sort": {
          "field": [
            "keyword.term",
            "timestamp"
          ],
          "order": [
            "asc",
            "desc"
          ]
        }
      }
    },
    "mappings": {
      "_source": {
        "excludes": [],
        "includes": [],
        "enabled": true
      },
      "_routing": {
        "required": false
      },
      "dynamic": false,
      "dynamic_templates": [],
      "properties": {
        "count": {
          "type": "integer"
        },
        "keyword": {
          "analyzer": "lowercase_keyword",
          "type": "text",
          "fields": {
            "ngram": {
              "eager_global_ordinals": false,
              "index_phrases": true,
              "fielddata": false,
              "norms": true,
              "analyzer": "ngram_analyzer",
              "term_vector": "with_positions_offsets",
              "index": true,
              "store": false,
              "type": "text",
              "index_options": "positions"
            },
            "term": {
              "eager_global_ordinals": true,
              "norms": false,
              "index": true,
              "store": false,
              "type": "keyword",
              "split_queries_on_whitespace": false,
              "index_options": "docs",
              "doc_values": true
            }
          }
        },
        "region": {
          "type": "keyword"
        },
        "createdAt": {
          "type": "date",
          "index": true,
          "ignore_malformed": false,
          "doc_values": true,
          "store": false
        }
      }
    }
  }
}
```
```
# document would be like
{
  "createdAt": <date>,
  "keyword": <string>,
  "region": <string>,
  "count": <int>
}

PUT _index_template/keyword_aggr_tmpl
{
  "template": {
    "settings": {
      "index.lifecycle.name": "7_days_ilm",
      "index.lifecycle.rollover_alias": "keyword_aggr_write"
    },
    "aliases": {
      "popular": {},
      "trending": {}
    }
  },
  "index_patterns": [
    "keyword_aggr-*"
  ],
  "composed_of": [
    "keyword_component_tmpl"
  ]
}
```
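To sanity-check the template's `ngram_analyzer` (char filters strip HTML and normalize, then the 1-gram tokenizer splits every character), run `_analyze` against a concrete index; the index name below is hypothetical, any index created from `keyword_aggr-*` works:

```
GET keyword_aggr-000001/_analyze
{
  "analyzer": "ngram_analyzer",
  "text": "<b>Hi!</b>"
}
```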
----
## ES Query
The [match_phrase](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query-phrase.html) query analyzes the text and creates a phrase query out of the analyzed text. For example:
```
GET /_search
{
  "query": {
    "match_phrase": {
      "message": "this is a test"
    }
  }
}
> ["aaa this is a test bbb", "this is a test ssss"]
```
[Terms query](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-terms-query.html) returns documents that **contain one or more exact terms** in a provided field. The terms query is the same as the term query, except you can search for multiple values.
```
GET /_search
{
  "query": {
    "terms": {
      "user.id": [ "kimchy", "elkbee" ],
      "boost": 1.0
    }
  }
}
> something with kimchy, elkbee
```
----
## ES Query
```
GET keyword-alias/_search
{
  "size": 0,
  "aggs": {
    "filterby": {
      "filter": {
        "bool": {
          "must": [
            {
              "range": {
                "@timestamp": {
                  "gte": "2022-03-23T00:00:00Z"
                }
              }
            },
            {
              "match_phrase": {
                "keyword.ngram": "test"
              }
            },
            {
              "term": {
                "userID": {
                  "value": "$user_id"
                }
              }
            }
          ]
        }
      },
      "aggs": {
        "group_by_keyword": {
          "terms": {
            "field": "keyword.term"
          }
        }
      }
    }
  }
}
```
----
### Atlas search index
```
{
"mappings": {
"dynamic": true,
"fields": {
"createdTime": [
{
"dynamic": true,
"type": "document"
},
{
"type": "date"
}
],
"keyword": [
{
"analyzer": "custom_analyzer",
"multi": {
"term": {
"analyzer": "lucene.keyword",
"searchAnalyzer": "lucene.keyword",
"type": "string"
}
},
"searchAnalyzer": "custom_analyzer",
"type": "string"
},
{
"foldDiacritics": false,
"minGrams": 1,
"tokenization": "nGram",
"type": "autocomplete"
}
]
}
},
"analyzers": [
{
"charFilters": [
{
"type": "htmlStrip"
},
{
"type": "icuNormalize"
}
],
"name": "custom_analyzer",
"tokenFilters": [
{
"type": "lowercase"
}
],
"tokenizer": {
"maxGram": 1,
"minGram": 1,
"type": "nGram"
}
}
],
"storedSource": {
"include": [
"keyword"
]
}
}
```
---
## Noteworthy points in Atlas Search
----
### $search operator
- Don't mix up [$search](https://www.mongodb.com/docs/atlas/atlas-search/query-syntax/#mongodb-pipeline-pipe.-search) with [text search in MongoDB](https://www.mongodb.com/docs/manual/core/link-text-indexes/#perform-a-text-search--legacy-)
- It's only supported on MongoDB Atlas. Using it on open-source MongoDB throws an exception.
----
### Faceted search
In Elasticsearch there is a powerful operation called **aggregation**, which is built on faceted search. Atlas Search also provides a facet operator, and it would be a great fit ...
### But!!
It doesn't support sharded collections currently.
> To use Atlas Search facets, you must be running your Atlas cluster on MongoDB 4.4.11 and above or MongoDB 5.0.4 and above. These clusters must be running on the M10 tier or higher. Facets and counts currently work on non-sharded collections. Support for sharded collections is scheduled for next year. ([ref](https://www.mongodb.com/blog/post/100x-faster-facets-counts-mongodb-atlas-search-public-preview))
https://en.wikipedia.org/wiki/Faceted_search
----
### Faceted search
```
db.getCollection('keyword').aggregate([
  {
    "$search": {
      "index": "keyword",
      "compound": {
        "must": [
          {
            "range": {
              "path": "createdTime",
              "gt": ISODate("2022-03-20T00:00:00.000Z")
            }
          },
          {
            "phrase": {
              "path": "keyword",
              "query": "你好"
            }
          }
        ]
      }
    }
  },
  {
    "$group": {
      "_id": "$keyword",
      "count": { "$sum": 1 }
    }
  }
])
```
```
db.getCollection('keyword').aggregate([
  {
    "$searchMeta": {
      "facet": {
        "operator": {
          "compound": {
            "must": [
              {
                "range": {
                  "path": "createdTime",
                  "gt": ISODate("2022-03-20T00:00:00.000Z")
                }
              },
              {
                "phrase": {
                  "path": "keyword",
                  "query": "你好"
                }
              }
            ]
          }
        },
        "facets": {
          "count": {
            "type": "string",
            "path": "keyword"
          }
        }
      }
    }
  }
])
```
----
### Faceted search
```plantuml
@startuml
'https://plantuml.com/component-diagram
node "atlas search" {
  [$search]
}
node "mongod" {
  [$group]
  [aggregation]
}
[client] <--> [aggregation] : 1, 4
[aggregation] --> [$search] : 2
[$search] --> [$group] : 3
[$group] --> [aggregation] : 4
@enduml
```
```plantuml
@startuml
'https://plantuml.com/component-diagram
node "atlas search" {
  [$searchMeta]
  [$facet]
}
node "mongod" {
  [aggregation]
}
[client] <--> [aggregation] : 1, 4
[aggregation] --> [$searchMeta] : 2
[$searchMeta] --> [$facet] : 3
[$facet] --> [aggregation] : 4
@enduml
```
---
## Noteworthy points in Elasticsearch
----
### Index Lifecycle Management (ILM)
Before ES v6.6, indexes were managed with [Curator](https://www.elastic.co/guide/en/elasticsearch/client/curator/5.8/ilm.html).
Since ES v6.6, you can configure index lifecycle management (ILM) policies to automatically manage indices according to your performance, resiliency, and retention requirements. For example, you could use ILM to:
* Spin up a new index when an index reaches a certain size or number of documents (rollover)
* Create a new index each day, week, or month and archive previous ones (rollover)
* Delete stale indices to enforce data retention standards (delete phase)
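The `7_days_ilm` policy referenced by our index template earlier could be defined roughly like this (the phase timings here are an assumption for illustration, not the actual policy):

```
PUT _ilm/policy/7_days_ilm
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "7d" }
        }
      },
      "delete": {
        "min_age": "7d",
        "actions": { "delete": {} }
      }
    }
  }
}
```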
----
### Rollover and Data Stream
When you continuously **index timestamped documents into Elasticsearch**, you typically use a **data stream so you can periodically roll over to a new index**.
This enables you to implement **a hot-warm-cold architecture to meet your performance requirements for your newest data, control costs over time, enforce retention policies, and still get the most out of your data**.
> Data streams are **best suited for append-only use cases**. If you need to frequently update or delete existing documents across multiple indices, we recommend using an **index alias and index template** instead. You can still use ILM to manage and rollover the alias’s indices. Skip to Manage time series data without data streams.
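Rollover can also be triggered manually against the write alias from our template; the conditions below are illustrative:

```
POST keyword_aggr_write/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 1000000
  }
}
```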
----
## Alias
**An alias is a secondary name for a group of data streams or indices**. Most Elasticsearch APIs accept an alias in place of a data stream or index name.
You can change the data streams or indices of an alias at any time. If you use aliases in your application’s Elasticsearch requests, you can reindex data with no downtime or changes to your app’s code.
https://www.elastic.co/guide/en/elasticsearch/reference/current/aliases.html
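Because the `actions` list is applied atomically, an alias can be switched to a freshly reindexed index with no downtime (index names here are hypothetical):

```
POST _aliases
{
  "actions": [
    { "remove": { "index": "keyword_aggr-000001", "alias": "keyword-alias" } },
    { "add": { "index": "keyword_aggr-000002", "alias": "keyword-alias" } }
  ]
}
```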
---
### Wrap up
- Atlas Search is a great hosted indexing solution if you want everything hosted for you.
- Atlas Search facets do not support sharded collections right now.
- If you need to do lots of faceted search, you currently need Elasticsearch.
- Elasticsearch is very, very good at index management.
---
### Thank you! :sheep:
<style>
.reveal {
font-size: 24px;
}
</style>