Enabling the Integration of Different Search Engines with CKAN
===
[toc]
## Motivation
The motivation for enabling this feature is described in https://github.com/ckan/ckan/issues/7552, as well as in some additional scenarios:
1. The owner/user of the CKAN portal already runs an infrastructure that uses another search engine for other purposes.
2. The owner/user/developer of the CKAN portal is more familiar with (has knowledge of or expertise in) another search engine.
3. Supporting more engines would enable other developers to join the CKAN community and give them the opportunity to contribute to the improvement of CKAN.
## Overview of Existing Search Engine Implementation (Solr)
CKAN currently uses Solr for its search functionality. If we decide to support multiple search engines, we will need to create an abstraction layer for the search functionality. This means defining a standard interface for search operations, and then implementing this interface for each search engine we want to support.
The current state is that the search functionality is deeply integrated with the rest of the CKAN core code, roughly as in the following diagram. That said, identifying the touch points with the rest of the CKAN core code should be relatively easy.
```mermaid
graph TD
subgraph CKAN Core
cc[The rest of CKAN Core code]
search[ckan/lib/search]
package[ckan/logic/action]
cli[ckan/cli/search_index.py]
dataset[ckan/views/dataset.py]
cc --> package
package --> cc
cc --> cli
cc --> dataset
dataset --> cc
package --> search
cli --> search
dataset --> search
end
```
In general, to decouple the search functionality and make it compatible with different search engines, we'll need an abstraction layer. Here's a general approach:
**Identify the Search Functionality:** Start by identifying all the places in the code where Solr is being used. This will likely be in the parts of the code that handle searching for datasets.
**Create an Abstraction Layer:** Create a standard interface for all the search operations that the CKAN application needs to perform. This could be operations like "search for datasets", "index dataset", etc.
**Implement the Interface for Each Search Engine:** For each search engine that we want to support, create a class that implements the search interface we created. Each class will need to translate the generic search operations into operations that are specific to the search engine it represents.
**Add Configuration to Select the Search Engine:** Add a configuration option that allows the user to select which search engine they want to use. The application should use this configuration option to decide which implementation of the search interface to use (a configuration-driven factory is sketched after these steps).
**Test Our Implementation:** Make sure to thoroughly test our new implementation to ensure that it works correctly with each search engine.
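As a sketch of how the configuration step could work (the config key `ckan.search.engine` and the per-engine classes are assumptions made for illustration, not an agreed design; `SearchEngineInterface` and `ElasticsearchSearch` are sketched in the code examples below):
```python
from ckan.common import config  # CKAN's runtime configuration


def get_search_engine():
    """Return the search backend selected in the CKAN configuration.

    The config key name and the engine classes are illustrative only;
    SolrSearch, ElasticsearchSearch and TypesenseSearch are assumed to
    implement the SearchEngineInterface sketched later in this document.
    """
    engines = {
        'solr': SolrSearch,
        'elasticsearch': ElasticsearchSearch,
        'typesense': TypesenseSearch,
    }
    engine_name = config.get('ckan.search.engine', 'solr')
    try:
        return engines[engine_name]()
    except KeyError:
        raise ValueError(f'Unsupported search engine: {engine_name}')
```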
## Possible Approaches
1. Using a client library for the communication with the particular search engine
2. Implementing the search engine as a separate microservice
In both cases the approach can be very similar, but in the second case we additionally need to implement, for example, a REST API server for communicating with the microservice.
As an example for further analysis, I will use Solr (the existing solution), Elasticsearch (a robust solution) and Typesense (a lightweight option).
### 1. Using an Abstraction Layer
The diagram of this solution could look like this:
```mermaid
graph TD
subgraph CKAN Core
cc[The rest of CKAN Core code]
solr-search[ckan/lib/search/solr]
es-search[ckan/lib/search/elastic]
ts-search[ckan/lib/search/typesense]
package[ckan/logic/action]
cli[ckan/cli/search_index.py]
dataset[ckan/views/dataset.py]
al[Abstraction Layer]
solr-common[common]
solr-index[index]
solr-query[query]
es-common[common]
es-index[index]
es-query[query]
ts-common[common]
ts-index[index]
ts-query[query]
package --> cli
cc --> package
package --> cc
cc --> cli
cc --> dataset
dataset --> cc
package --> al
cli --> al
dataset --> al
al --> solr-search
al --> es-search
al --> ts-search
solr-search --> solr-common
solr-search --> solr-index
solr-search --> solr-query
es-search --> es-common
es-search --> es-index
es-search --> es-query
ts-search --> ts-common
ts-search --> ts-index
ts-search --> ts-query
end
```
#### Code Example
We'll define a Python class that will serve as an interface for our search functionality. This class will have a `query` method that takes in search parameters and returns search results.
```python
from abc import ABC, abstractmethod


class SearchEngineInterface(ABC):

    @abstractmethod
    def query(self, query_params):
        """
        Perform a search query and return the results.

        Parameters:
            query_params (dict): A dictionary of search parameters.

        Returns:
            dict: A dictionary containing the search results and some metadata.
        """
```
In this example, `SearchEngineInterface` is an abstract base class that defines a single method, `query`. This method takes in a dictionary of query parameters and returns a dictionary of results. The `@abstractmethod` decorator indicates that this method must be implemented by any class that inherits from `SearchEngineInterface`.
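Since the abstraction layer also needs to cover operations like "index dataset" (see the steps above), a fuller version of this interface would probably add indexing and deletion as well. A minimal sketch; the extra method names are illustrative, not a settled design:
```python
from abc import ABC, abstractmethod


class SearchEngineInterface(ABC):
    """Extended sketch; the extra method names are illustrative only."""

    @abstractmethod
    def query(self, query_params):
        """Perform a search query and return the results."""

    @abstractmethod
    def index(self, dataset_dict):
        """Add or update a dataset document in the search index."""

    @abstractmethod
    def delete(self, dataset_id):
        """Remove a dataset document from the search index."""
```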
Next, we'll create a class for each search engine that we want to support. Each of these classes will inherit from `SearchEngineInterface` and implement the `query` method. Here's an example for Elasticsearch:
```python
from elasticsearch import Elasticsearch


class ElasticsearchSearch(SearchEngineInterface):

    def __init__(self):
        self.es = Elasticsearch()

    def query(self, query_params):
        # Translate the generic query_params into an Elasticsearch query
        # (_translate_query is a helper method still to be written)
        es_query = self._translate_query(query_params)
        # Send the query to the Elasticsearch server
        response = self.es.search(index="my_index", body=es_query)
        # Process the response into the generic result format
        # (_process_response is also a helper method to be written)
        results = self._process_response(response)
        return results
```
In this example, `ElasticsearchSearch` is a class that implements the `SearchEngineInterface`. It has an `__init__` method that creates an Elasticsearch client, and a `query` method that handles the search queries.
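For comparison, here is what an equivalent sketch for Typesense could look like, using the official `typesense` Python client. The connection details, the `datasets` collection name and the `query_by` fields are all assumptions for illustration:
```python
import typesense


class TypesenseSearch(SearchEngineInterface):

    def __init__(self):
        # Connection details are hard-coded here for illustration; in
        # practice they would come from the CKAN configuration.
        self.client = typesense.Client({
            'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
            'api_key': 'CHANGE_ME',
            'connection_timeout_seconds': 2,
        })

    def query(self, query_params):
        # Translate the generic parameters into a Typesense search; the
        # 'datasets' collection and the query_by fields are assumed names.
        search_parameters = {
            'q': query_params.get('q', '*'),
            'query_by': 'title,notes',
        }
        response = self.client.collections['datasets'].documents.search(
            search_parameters)
        # Map the Typesense response back to a generic result format.
        return {
            'count': response['found'],
            'results': [hit['document'] for hit in response['hits']],
        }
```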
By using this approach and structure, we retain the same functionality that we currently have. For Solr we only need to make small modifications, and for the other search engines we have to write code modeled on the existing Solr code.
This approach also allows us to use all the specific features of each search engine.
Since each search engine has its own query DSL (domain-specific query language), this approach allows us to use each of them to the fullest. It is obvious that we will have to adapt the documentation for each search engine separately.
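To make the query-language point concrete, here is roughly how the same free-text search with a tag filter might be expressed for Solr and for Elasticsearch (field names such as `text` and `tags` are illustrative):
```python
# The same search expressed in two engine-specific dialects.
# Field names ('text', 'tags') are illustrative only.

# Solr: flat request parameters
solr_params = {
    'q': 'climate',
    'fq': 'tags:environment',
    'rows': 20,
}

# Elasticsearch: nested JSON query DSL
es_query = {
    'query': {
        'bool': {
            'must': {'match': {'text': 'climate'}},
            'filter': {'term': {'tags': 'environment'}},
        }
    },
    'size': 20,
}
```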
### 2. Implementing Search Engine Feature as a Separate Microservice
This approach is very similar to the previous one, except that part of the code would move behind, for example, a REST API server. In that case, we would create API requests at the abstraction layer and forward them to the API server, from which we would receive the responses.
The diagram of this solution would look like this:
```mermaid
graph TD
subgraph CKAN Core
cc[The rest of CKAN Core code]
package[ckan/logic/action]
cli[ckan/cli/search_index.py]
dataset[ckan/views/dataset.py]
al[Abstraction Layer]
package --> cli
cc --> package
package --> cc
cc --> cli
cc --> dataset
dataset --> cc
package --> al
cli --> al
dataset --> al
end
al --> api
subgraph Search Engine Microservice
api[Search Engine API Server]
solr-search[ckan/lib/search/solr]
es-search[ckan/lib/search/elastic]
ts-search[ckan/lib/search/typesense]
solr-common[common]
solr-index[index]
solr-query[query]
es-common[common]
es-index[index]
es-query[query]
ts-common[common]
ts-index[index]
ts-query[query]
api --> solr-search
api --> es-search
api --> ts-search
solr-search --> solr-common
solr-search --> solr-index
solr-search --> solr-query
es-search --> es-common
es-search --> es-index
es-search --> es-query
ts-search --> ts-common
ts-search --> ts-index
ts-search --> ts-query
end
```
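As a rough illustration, the abstraction layer could forward queries to the microservice over HTTP. Below is a minimal sketch using `requests` on the CKAN side and Flask (which CKAN already uses) on the service side; the `/search` endpoint, the service URL and the payload shape are all assumptions:
```python
# --- CKAN side: abstraction-layer client that forwards queries ---
import requests


class RemoteSearch(SearchEngineInterface):

    def __init__(self, base_url='http://search-service:5001'):
        # The service URL would come from the CKAN configuration.
        self.base_url = base_url

    def query(self, query_params):
        # POST the generic query parameters to the assumed /search endpoint.
        response = requests.post(f'{self.base_url}/search',
                                 json=query_params, timeout=10)
        response.raise_for_status()
        return response.json()


# --- Microservice side: a minimal Flask API server ---
from flask import Flask, jsonify, request

app = Flask(__name__)
# `backend` would be one of the engine implementations sketched above,
# e.g. ElasticsearchSearch(); chosen statically here for illustration.
backend = ElasticsearchSearch()


@app.route('/search', methods=['POST'])
def search():
    # Delegate the generic query to the configured engine and return JSON.
    return jsonify(backend.query(request.get_json()))
```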
This design has its advantages and disadvantages. In any case, it is modern, and easy to improve and maintain.
Using a REST API for the communication between CKAN and the search engine also has its advantages and disadvantages. The advantages are that it is easy to implement and upgrade; the disadvantages are that it will most likely slow down the search feature. Security is also a big disadvantage if the traffic crosses a public network. I don't know of such a case, but we should certainly take it into consideration.
Other options for communication could be gRPC, or a message broker like RabbitMQ or Kafka. We should take these into consideration, too.
## Final Notes
Before I start with the implementation of an example, which may remain only a PoC or grow into a full solution, I would be very grateful for any input, comments and/or advice.