
1. Introduction

Traditionally, CKAN uses the SOLR search engine, an open-source search platform built on Apache Lucene. With the rapid advancement of technology, however, making CKAN modular and adaptable to other search engines has become a necessity. This analysis argues that this is feasible and outlines how it could be done.

1.1 Challenges with SOLR-Dependent CKAN

The SOLR search engine, though powerful and flexible, has its limitations. It requires a certain level of expertise to configure and manage, and its relatively complex query syntax can be a barrier for some users. Additionally, relying solely on SOLR may limit the potential and scalability of CKAN, given the availability of other efficient search engines.

1.2 Need for Modular CKAN

Making CKAN modular to integrate other search engines is not just about replacing SOLR, but about enhancing the flexibility and functionality of CKAN. With a modular design, users will have the freedom to choose their preferred search engine depending on the specific requirements of their projects. This can also provide an opportunity to develop an ecosystem where various search engines can co-exist and complement each other, thereby enhancing CKAN's overall performance.

1.3 Approach to Modularity

Implementing modularity can be accomplished by developing a standard API (Application Programming Interface) that can communicate with various search engines. This approach abstracts away the underlying complexity of different search engines and provides a uniform interface for data querying and retrieval. The API should be designed so that it can accept plugins for different search engines.

1.4 Advantages of a Modular CKAN

Adopting a modular approach would have several benefits. Primarily, it would allow CKAN to be more adaptable and flexible, accommodating a broader range of use-cases. The ability to plug in different search engines would enable users to leverage the strengths of various search tools, tailoring their data management solutions to fit specific needs. Furthermore, the risk of reliance on a single search engine would be mitigated, enhancing the reliability and robustness of the platform.

1.5 Conclusion

In the data-driven era, making CKAN modular to accommodate various search engines is a strategic necessity. A modular approach can not only enhance CKAN's adaptability and flexibility but also promote innovation by fostering an ecosystem where different search engines can co-exist. The proposed changes would make CKAN a more powerful and comprehensive tool for data management and retrieval. This transition might require considerable effort, but the benefits that it would bring are potentially far-reaching and transformative.

2. Technical Aspects

2.1 Interface Layer Implementation

To make CKAN modular, we would need to create an interface layer to decouple CKAN's core functionalities from the search engine. This layer would serve as a standard API, allowing CKAN to interact with any search engine that implements the API.
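
As a sketch of what such an interface layer could look like (all names below are hypothetical and not part of CKAN's current codebase):

```python
from abc import ABC, abstractmethod
from typing import Any, Dict


class SearchBackend(ABC):
    """Hypothetical interface every search engine integration implements.

    CKAN core would only ever talk to this abstraction, never to a
    concrete engine such as SOLR or Elasticsearch.
    """

    @abstractmethod
    def index_dataset(self, dataset_dict: Dict[str, Any]) -> None:
        """Add or update a dataset document in the engine's index."""

    @abstractmethod
    def delete_dataset(self, dataset_id: str) -> None:
        """Remove a dataset document from the engine's index."""

    @abstractmethod
    def search(self, query: Dict[str, Any]) -> Dict[str, Any]:
        """Run an engine-neutral query and return normalized results
        (e.g. a dict with "count" and "results" keys)."""

    @abstractmethod
    def clear(self) -> None:
        """Drop the entire index, e.g. before a full rebuild."""
```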

2.2 Plugin System Development

The API should be accompanied by a robust plugin system that would enable developers to write their custom search engine plugins. Each plugin would have a standard way of receiving search requests from CKAN and sending back the results. This way, if someone wants to use a different search engine, they could write a plugin for that engine, adhering to the API specification.
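
CKAN already ships a plugin mechanism (ckan.plugins), so the natural route is a new plugin interface through which an extension registers its engine. A minimal sketch, assuming the SearchBackend abstraction from 2.1 and a new, hypothetical ISearchProvider interface (neither exists in CKAN today):

```python
import ckan.plugins as p
from ckan.plugins.interfaces import Interface


class ISearchProvider(Interface):
    """Hypothetical interface through which a plugin exposes its engine."""

    def get_search_backend(self):
        """Return an object implementing the SearchBackend interface."""


class ElasticsearchBackend:
    """Stub standing in for a full SearchBackend implementation."""


class ElasticsearchSearchPlugin(p.SingletonPlugin):
    """Sketch of an extension that plugs Elasticsearch into CKAN."""

    p.implements(ISearchProvider)

    def get_search_backend(self):
        return ElasticsearchBackend()
```

CKAN core could then discover the active backend at startup via p.PluginImplementations(ISearchProvider) and route all indexing and querying through it.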

2.3 Data Indexing Strategies

Different search engines have different data indexing strategies. To accommodate this, CKAN's data loading and storing modules need to be enhanced: they should be designed to support different indexing strategies depending on the underlying search engine.
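
As an illustration (simplified, with hypothetical field mappings), the same CKAN dataset dict might be flattened for SOLR's schema but kept as nested JSON for Elasticsearch:

```python
def to_solr_document(dataset_dict: dict) -> dict:
    # SOLR indexes flat fields whose names must match the schema
    # (schema.xml); multi-valued fields hold lists such as tags.
    return {
        "id": dataset_dict["id"],
        "title": dataset_dict.get("title", ""),
        "tags": [t["name"] for t in dataset_dict.get("tags", [])],
    }


def to_elasticsearch_document(dataset_dict: dict) -> dict:
    # Elasticsearch accepts nested JSON, so extras can stay structured
    # instead of being flattened into prefixed field names.
    return {
        "id": dataset_dict["id"],
        "title": dataset_dict.get("title", ""),
        "tags": [t["name"] for t in dataset_dict.get("tags", [])],
        "extras": {e["key"]: e["value"] for e in dataset_dict.get("extras", [])},
    }
```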

2.4 Query Language Abstraction

Given that different search engines use different query languages (e.g., Elasticsearch uses Query DSL, while SOLR uses its own Lucene-based query language), there must be an abstraction layer. This layer will allow users to make queries without needing to know the underlying query language. In other words, the same query syntax would work for all supported search engines.
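
A sketch of such a layer, translating a hypothetical engine-neutral query dict into each engine's native form:

```python
def to_solr_params(query: dict) -> dict:
    # SOLR expects request parameters such as q, rows and start.
    return {
        "q": query.get("q", "*:*"),
        "rows": query.get("limit", 20),
        "start": query.get("offset", 0),
    }


def to_elasticsearch_body(query: dict) -> dict:
    # Elasticsearch expects a JSON body written in its Query DSL.
    q = query.get("q")
    return {
        "query": {"query_string": {"query": q}} if q else {"match_all": {}},
        "size": query.get("limit", 20),
        "from": query.get("offset", 0),
    }
```

With this in place, a neutral query like {"q": "climate", "limit": 10} would work unchanged against either engine.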

2.5 Data Migration Support

Switching from one search engine to another might require data migration, so a modular CKAN should also incorporate mechanisms to facilitate easy migration from one search engine to another.
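
Because CKAN's database remains the source of truth, migration can be as simple as rebuilding the new engine's index from the database, rather than exporting anything from the old engine. A sketch, using the hypothetical SearchBackend from 2.1:

```python
import ckan.plugins.toolkit as tk


def migrate_index(target_backend, dataset_ids):
    """Rebuild the target engine's index from CKAN's database."""
    target_backend.clear()
    for dataset_id in dataset_ids:
        # package_show is a standard CKAN action; the resulting dict is
        # exactly what the backend's indexer takes as input.
        dataset_dict = tk.get_action("package_show")(
            {"ignore_auth": True}, {"id": dataset_id}
        )
        target_backend.index_dataset(dataset_dict)
```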

2.6 Performance Considerations

A modular approach might bring some performance overhead due to the abstraction layers. However, with careful and efficient design, these impacts can be minimized.

2.7 Error Handling and Debugging

Lastly, error handling and debugging should be made transparent across different search engines. This would involve developing comprehensive logging and error handling mechanisms that can effectively handle and report errors from any search engine in a standard way.
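
CKAN already defines generic search exceptions in ckan/lib/search/common.py (e.g. SearchError); a sketch of normalizing engine-specific failures into them:

```python
import logging

from ckan.lib.search.common import SearchError  # existing CKAN exception

log = logging.getLogger(__name__)


def run_search(backend, query: dict) -> dict:
    """Run a query, converting any engine-specific failure into CKAN's
    generic SearchError so callers never depend on engine details."""
    try:
        return backend.search(query)
    except Exception as exc:
        # pysolr, the Elasticsearch client, etc. each raise their own
        # exception types; log the original and re-raise uniformly.
        log.exception("Search backend %s failed", type(backend).__name__)
        raise SearchError("Search backend failure: %s" % exc) from exc
```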

By focusing on these technical details, it's feasible to make CKAN a more versatile and powerful tool, capable of using any search engine, thereby expanding its usability and effectiveness.

3. CKAN Solr implementation

3.1 Search library - ckan.lib.search

The SOLR integration within CKAN is primarily handled through the package ckan.lib.search. These are the key areas we might want to look at:

  1. ckan/lib/search/__init__.py: This file is the main entry point for the CKAN search system. It initializes a SearchIndex object and defines the commit method to commit changes to the SOLR index.

  2. ckan/lib/search/index.py: This file defines the SearchIndex class, which is responsible for adding, updating, and removing datasets from the SOLR index.

  3. ckan/lib/search/query.py: This file contains the PackageSearchQuery class which is responsible for running searches on the SOLR index and returning the results. It is here where the SOLR connection and query execution happens.

  4. ckan/lib/search/common.py: This file contains several utility functions and constants used throughout the search system.

It's also worth noting that CKAN's search system is extensible via plugins, and these can affect how search queries are constructed and how search results are returned.
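
To make the current coupling concrete: running a search through this package looks roughly like the following (simplified). Note that run() takes raw SOLR parameters, which is exactly the engine-specific surface a modular design would hide:

```python
from ckan.lib.search.query import PackageSearchQuery

# The keys below ("q", "fq", "rows") are raw SOLR query parameters:
# every caller of this class is already coupled to SOLR's syntax.
query = PackageSearchQuery()
results = query.run({"q": "*:*", "fq": "+dataset_type:dataset", "rows": 10})
print(results["count"])    # total number of matches reported by SOLR
print(results["results"])  # the matching documents
```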

3.2 Action - package_create, package_update, package_patch and package_delete

CKAN uses action functions for performing various operations, including dataset creation and update. These operations indirectly interact with SOLR through the indexing system:

  1. ckan/logic/action/create.py: This file contains the logic for creating resources, including datasets (also known as packages in CKAN). The function package_create handles the creation of a new dataset. After the dataset is created and validated, it is indexed in SOLR.

  2. ckan/logic/action/update.py: This file contains the logic for updating resources. The function package_update handles the dataset update process. Similar to the creation process, after the dataset is updated, the new data is indexed in SOLR.

  3. ckan/logic/action/patch.py: This file contains the package_patch function. The patch operation in CKAN is a kind of update operation that only changes the provided fields of the dataset. After the changes are made and validated, the updated data is indexed in SOLR.

  4. ckan/logic/action/delete.py: This file contains the package_delete function. This function doesn't actually delete the dataset from the database but updates the dataset's state to 'deleted'. Again, after this state change, the dataset is reindexed in SOLR.

In all of these cases, the indexing is done through the SearchIndex class in ckan/lib/search/index.py, as described above. The actual SOLR indexing happens in the index_package method of the SearchIndex class. The commit method then sends these changes to SOLR.
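
For example, creating a dataset through the action layer (simplified; assumes an authorized context) ends with the new dataset being pushed to SOLR:

```python
import ckan.plugins.toolkit as tk

context = {"user": "admin", "ignore_auth": True}
dataset = tk.get_action("package_create")(context, {
    "name": "example-dataset",
    "title": "Example Dataset",
})
# At this point CKAN has written the dataset to its database *and*
# indexed it in SOLR via SearchIndex.index_package(), so an immediate
# package_search will already find it.
```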

Important note:
It's crucial to remember that the action functions in CKAN might interact with many other parts of the system, including plugins and extensions, before finally updating the SOLR index. Therefore, a complete understanding of these processes may require a broader review of the CKAN codebase.

3.3 User Interface

In CKAN, the user interface (UI) typically interacts with the backend through action API calls, which in turn interact with the SOLR engine for search-related operations. The frontend itself should not have any SOLR-specific code. Instead, it will send requests to the backend, and the backend will translate these requests into appropriate SOLR queries.

Here are a couple of key components where this interaction occurs:

  1. Templates and Forms: The frontend uses Jinja2 templates for rendering the HTML views. For example, the search page (ckan/templates/package/search.html) will contain the search form that users interact with. When a user performs a search, the frontend sends a request to the backend with the user's query.

  2. JavaScript and AJAX: Some dynamic frontend features, like autocomplete, might use AJAX to interact with the backend. For instance, in ckan/public/base/javascript/modules/autocomplete.js, there are AJAX calls to CKAN's action API (not directly to SOLR) for auto-suggestions while typing in the search bar.

  3. Views and Routes: View functions in ckan/views (controllers in ckan/controllers in CKAN versions before 2.9) handle incoming HTTP requests and use the logic layer (including SOLR queries via action functions) to produce the necessary data for the views.

Remember, the aim is to separate concerns – the frontend should not have to know about the specifics of how the backend fulfills its requests. It's the backend's job to interact with SOLR and send a response back to the frontend in a generic format (usually JSON) that the frontend can understand.
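
For example, whatever the frontend component is (a Jinja2 form submit or an AJAX call), what ultimately reaches the backend is an action API request such as package_search. A client-side equivalent in Python, using CKAN's public demo instance for illustration:

```python
import requests

# The frontend never talks to SOLR directly: it calls CKAN's action
# API, and the backend translates the request into a SOLR query.
resp = requests.get(
    "https://demo.ckan.org/api/3/action/package_search",
    params={"q": "climate", "rows": 5},
)
data = resp.json()
print(data["result"]["count"])
for dataset in data["result"]["results"]:
    print(dataset["title"])
```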

3.4 CLI

CKAN provides a command-line interface (CLI) that allows administrators to interact with CKAN directly from the command line to perform various tasks such as managing datasets, rebuilding the search index, etc.

The interaction with SOLR primarily occurs when rebuilding the search index. In CKAN, this is performed using the search-index command.

The CKAN CLI is implemented in ckan/cli/cli.py, and the search indexing commands can be found in ckan/cli/search_index.py.

In search_index.py, you'll find various commands related to SOLR interaction:

  • ckan search-index rebuild [dataset-name]: Rebuilds the search index. This is useful after the SOLR server has been unavailable or the search index has otherwise fallen out of sync with the data. If a dataset name is provided, only that dataset will be re-indexed.

  • ckan search-index check: Checks whether the search index covers all datasets, reporting any that are missing from SOLR.

  • ckan search-index show [dataset-name]: Outputs the SOLR document stored for a dataset.

  • ckan search-index clear [dataset-name]: Clears the search index for the provided dataset, or for the entire CKAN instance if no dataset name is provided.

These commands use the SearchIndex class in ckan/lib/search/index.py to interact with SOLR.
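
Under the hood these commands call into ckan.lib.search; for example, the rebuild command is roughly equivalent to:

```python
from ckan.lib.search import commit, rebuild

# Re-index a single dataset (pass nothing to rebuild everything),
# then commit the pending changes to SOLR.
rebuild("example-dataset")
commit()
```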

3.5 Views

As far as SOLR is concerned, when a user performs a search operation, the query string is passed to CKAN's search logic (which interacts with SOLR) and results are returned to the view for display. This is not done directly in views/dataset.py but is handed off to CKAN's logic layer, which includes the SOLR interaction.

This separation of concerns — with views/dataset.py handling the interaction with the user and a separate logic layer managing the interaction with SOLR — allows CKAN's front end to remain decoupled from the specifics of the search backend.

The search view itself is implemented in several places, as we can see at https://github.com/ckan/ckan/blob/2.10/ckan/views/dataset.py#L175.
The difficulty is that all of this code is written with SOLR in mind.

3.6 Datastore extension

The Datastore doesn't interact directly with SOLR, as they serve different purposes. However, when data is added to the Datastore, CKAN's background jobs update the corresponding metadata in CKAN's main database and SOLR index.

4. Estimation

Estimating the time needed to support different search engines in CKAN is not a straightforward task. It largely depends on several factors, such as the complexity of the existing system, the features of the new search engines to be supported, the proficiency of the developers, the level of testing required, and more.

That being said, here's a rough breakdown for key tasks:

  1. Design and Planning: Understanding the existing SOLR-dependent system and designing a search-engine-agnostic one. This would include designing the interface, deciding how plugins will work, planning for data migration, and so on. This can take around 2-4 weeks.

  2. Development: Once the design and planning phase is complete, the actual development can start. This would include creating the interface, developing the plugin system, modifying the existing code to use the new system, and so on. This is the most time-consuming phase and could take around 8-12 weeks.

  3. Testing: After development, thorough testing is needed to ensure everything works as expected. This includes unit testing, integration testing, performance testing, and more. This could take around 4-6 weeks.

  4. Documentation: Updating the CKAN documentation to reflect the new changes is a vital task. This includes documenting how to use the new system, how to develop plugins for new search engines, etc. This could take around 2-3 weeks.

In total, we're looking at around 16-25 weeks, or 4-6 months, for the entire project. This is a very rough estimate and the actual time could be more or less depending on many factors. Also, these tasks can overlap to some extent. For instance, we can start documenting as soon as some part of the system is stable.

We need to keep in mind that we'll also have to plan for some buffer time for unforeseen challenges, requirement changes, etc. Lastly, after the initial release, we'll need to allocate some time for maintenance, handling user feedback, fixing bugs, etc.
