CocoIndex Changelog 2025-04-30
In the past 2 weeks, we've added support for Knowledge Graphs, Qdrant, Supabase, KTable/LTable, and more LLM providers, along with tons of assorted core/performance improvements - full changelog.
In the past 2 weeks, we added incremental processing with live update mode, evaluation utilities, support for date/time types, Google Drive, and assorted core/performance improvements.
Today, we are excited to announce the support of continuous updates for long-running pipelines in CocoIndex. This powerful feature automatically applies incremental source changes to keep your index up-to-date with minimal latency.
With continuous updates, your indexes remain synchronized with your source data in real-time, ensuring that your applications always have access to the most current information without the performance overhead of full reindexing.
It fits situations where you need continuous access to fresh target data most of the time.
It continuously captures changes from the source data and updates the target data accordingly. It's long-running and stops only when explicitly aborted.
A data source may enable one or multiple change capture mechanisms. CocoIndex supports two main categories of change detection:
- General mechanism: periodically re-list all entries from the source and compare against the previously known state. This works for every data source, even without source-specific change capture.
- Source-specific mechanisms: take advantage of a source's own change APIs, such as Google Drive's recent-changes polling, for more efficient, near real-time detection.
These mechanisms work together so CocoIndex can detect and process changes as they happen, keeping your index in sync with source data with minimal latency and resource usage.
Under the hood, after the change is detected, CocoIndex will use its incremental processing mechanism to update the target data.
Here is an example of how to enable continuous updates for Google Drive. It is pretty simple:
@cocoindex.flow_def(name="GoogleDriveIndex")
def my_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.GoogleDrive(
            service_account_credential_path=credential_path,
            root_folder_ids=root_folder_ids,
            recent_changes_poll_interval=datetime.timedelta(seconds=10)),
        refresh_interval=datetime.timedelta(minutes=1))
In this example, we've configured two change detection mechanisms:
- recent_changes_poll_interval=datetime.timedelta(seconds=10): a Google Drive-specific mechanism that uses the Drive API's changes endpoint to efficiently detect modifications every 10 seconds. This is an efficient, fast scan that captures the latest modified files, so we can set it to a short interval to get fresher data. It doesn't capture file deletions, however, so we need the fallback mechanism to ensure all changes are eventually captured.
- refresh_interval=datetime.timedelta(minutes=1): the universal fallback mechanism that performs a complete scan of the data source every minute. It scans all files to ensure every change - including deleted files - is captured.
The refresh_interval parameter is particularly important: it serves as a safety net to ensure all changes are eventually captured, even if source-specific mechanisms miss something. While source-specific mechanisms like recent_changes_poll_interval are more efficient for near real-time updates, refresh_interval provides comprehensive coverage.
We recommend setting it to a reasonable value based on your freshness requirements and resource constraints - shorter intervals provide fresher data but consume more resources.
You can read the full documentation for more details.
Add a @cocoindex.main_fn() decorator to your main function, so the CocoIndex CLI will take over control (when cocoindex is the first command line argument):
@cocoindex.main_fn()
def main():
    pass

if __name__ == "__main__":
    main()
To run the CLI with live update mode, you can use the following command:
python main.py cocoindex update -L
This will start the flow in live update mode, which will continuously capture changes from the source and update the target data accordingly.
You can also create a cocoindex.FlowLiveUpdater programmatically. For example:
import asyncio

import cocoindex

@cocoindex.main_fn()
async def main():
    # demo_flow is a flow defined elsewhere with @cocoindex.flow_def.
    my_updater = cocoindex.FlowLiveUpdater(demo_flow)
    await my_updater.wait()
    ...

if __name__ == "__main__":
    asyncio.run(main())
And you run the flow with:
python main.py
You can also use the updater as a context manager. It will abort and wait for the updater to finish automatically when the context is exited. See full documentation here.
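For instance, a minimal sketch (assuming the same demo_flow as above):

with cocoindex.FlowLiveUpdater(demo_flow) as my_updater:
    # Do other work here while the flow keeps updating live.
    ...
# On exit, the context manager aborts the updater and waits for it to finish.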
Now you are all set! It is super simple to get started with continuous updates for your data. Get started now with the quickstart guide 🚀
If you like our work, it would mean a lot to us if you could support CocoIndex on GitHub with a star. Thank you so much with a warm coconut hug 🥥🤗.
Incremental processing is one of the core values provided by CocoIndex. In CocoIndex, users declare the transformation and don't need to worry about the work of keeping the index and source in sync.
CocoIndex creates and maintains an index, and keeps the derived index up to date based on source updates, with minimal computation and changes. That makes it suitable for ETL/RAG or any transformation tasks that need low latency between source updates and index updates, and it also minimizes computation cost.
If you like our work, it would mean a lot to us if you could support Cocoindex on Github with a star. Thank you so much with a warm coconut hug 🥥🤗.
Incremental processing means figuring out what exactly needs to be updated, and only updating that, without recomputing everything from scratch.
You don't really need to do anything special - just focus on defining the transformations you need.
CocoIndex automatically tracks the lineage of your data and maintains a cache of computation results. When you update your source data, CocoIndex recomputes only the parts affected by the change and reuses cached results for the rest. CocoIndex handles the incremental processing for you.
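For example, here's a rough sketch of a flow definition, loosely based on the CocoIndex quickstart (the source path, chunking parameters, and embedding model here are illustrative):

import cocoindex

@cocoindex.flow_def(name="DemoFlow")
def demo_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
    # Declare the source; lineage tracking starts here.
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.LocalFile(path="documents"))
    doc_embeddings = data_scope.add_collector()
    with data_scope["documents"].row() as doc:
        # Declare the transformations; CocoIndex decides what needs recomputing.
        doc["chunks"] = doc["content"].transform(
            cocoindex.functions.SplitRecursively(),
            language="markdown", chunk_size=2000, chunk_overlap=500)
        with doc["chunks"].row() as chunk:
            chunk["embedding"] = chunk["text"].transform(
                cocoindex.functions.SentenceTransformerEmbed(
                    model="sentence-transformers/all-MiniLM-L6-v2"))
            doc_embeddings.collect(filename=doc["filename"], location=chunk["location"],
                                   text=chunk["text"], embedding=chunk["embedding"])
    doc_embeddings.export(
        "doc_embeddings", cocoindex.storages.Postgres(),
        primary_key_fields=["filename", "location"])

When a source file changes, only the affected documents are re-chunked, re-embedded, and re-exported.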
CocoIndex provides two modes to run a pipeline: a one-time update, which processes the current source data and exits, and a live update mode, which keeps running and continuously applies source changes.
Both modes run with incremental processing. You can view more details in Life Cycle of an Indexing Flow.
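With the @cocoindex.main_fn() setup shown earlier, the two modes map to these CLI invocations (the -L flag is the live update flag from the example above):

python main.py cocoindex update      # one-time update: process current data, then exit
python main.py cocoindex update -L   # live update: keep capturing source changes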
Many people may think incremental processing is only beneficial at large data scale. Thinking about it carefully, it really depends on the cost and on the requirement for data freshness.
Google processes data at a huge scale, backed by a huge amount of resources; your data scale is much smaller, but your resource provisioning is also much more limited.
Incremental processing is needed under the following conditions:
- High freshness requirement. For most user-facing applications this is needed; e.g., users update their documents, and it's unexpected if they see stale information in search results. If the search result is fed into an AI agent, it may mean an unexpected response to users (i.e., the LLM generates output based on inaccurate information). That's even more dangerous, as users may accept the unexpected response without noticing.
- Transformation cost significantly higher than the retrieval itself.
Overall, say T is your maximum acceptable staleness. If you don't want to recompute everything every cycle of T, you need incremental processing to some degree. For example, if T is one minute but a full recompute takes an hour of machine time, recomputing from scratch every cycle is clearly untenable.
We can take a look at a few examples to understand what CocoIndex handles behind the scenes for incremental processing.
Consider this scenario: a source document gets updated, and after re-chunking, some chunks are identical to the previous version, some previous chunks no longer exist, and some chunks are new.
Say we need to keep 3 rows, remove 2 previously existing rows, and add 2 new rows. Behind the scenes, we need to figure out which derived rows are still valid, delete the stale rows from the target, and compute and insert the new ones.
CocoIndex takes care of this.
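Conceptually, the reconciliation looks roughly like this (an illustrative sketch of the logic, not CocoIndex's actual internals; lineage.rows_for, transform, and the target methods are hypothetical names):

# Illustrative only: reconcile target rows for one updated source document.
prev_keys = set(lineage.rows_for(doc_id))            # keys derived from the previous version
new_rows = {row.key: row for row in transform(doc)}  # rows derived from the new version

for key in prev_keys - new_rows.keys():
    target.delete(key)               # remove the stale rows
for key, row in new_rows.items():
    if key not in prev_keys:
        target.insert(key, row)      # add the new rows
    # unchanged rows are left untouched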
Continuing with the same example: if we delete the document later, we need to delete all 7 rows derived from it. Again, this relies on the lineage tracking maintained by CocoIndex.
The transformation flow may also change; for example, the chunking logic is upgraded, or a parameter passed to the chunker is adjusted. This may result in a similar scenario: some chunks stay the same while others are removed or newly created.
This falls into a similar situation as a document update (example 1), and CocoIndex will take care of it. The approach is similar, though it involves some additional considerations:
We can still safely reuse embeddings for the 4 unchanged chunks via the caching mechanism. This has a prerequisite: the logic and spec for the embedding step are unchanged. If the changed part is the embedding logic or spec, we recompute the embeddings for everything. CocoIndex can tell whether the logic or spec for an operation step has changed from the cached version, by including this additional information in the cache key.
To remove stale rows in the target index, lineage tracking works well again. Note that some other systems handle stale output deletion on source update/deletion by replaying the transformation logic on the previous version of the input; this only works well when the transformation is fully deterministic and never upgraded. CocoIndex's lineage-tracking-based approach doesn't have this limitation: it's robust to non-determinism and changes in the transformation logic.
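To make the first point concrete: conceptually, the cache key covers everything that determines an operation step's output (a conceptual sketch, not CocoIndex's actual internals):

# Output is reused only if the whole key matches the cached entry.
cache_key = (
    op_name,             # which operation step
    behavior_version,    # bumped when the function's behavior changes
    spec_fingerprint,    # the operation's spec, if it has one
    input_fingerprint,   # the input data
)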
All the examples above are simple cases: each single input row (e.g. a document) is involved independently in each specific transformation.
CocoIndex is a highly customizable framework, not limited to simple chunking and embedding. It allows users to build more complex, advanced transformations, such as:
- Merge. For example, you're building an index for "all AI products", and you want to combine information from multiple sources; some products exist in one source and some in multiple. For each product, you want to combine information from the different sources.
- Lookup. For example, you also have a data source with company information. During your transformation for each product, you want to enrich it with information about the company building the product, so a lookup of the company information is needed.
- Clustering. For example, you want to cluster different products into scenarios, and create cluster-level summaries based on information of the products in each cluster.
The common theme is that during transformation, multiple input rows (coming from a single source or multiple sources) need to be involved at the same time. Once a single input row is updated or deleted, CocoIndex needs to fetch other related rows from the same or other sources. Which rows are needed is determined by which rows are involved in the transformations. CocoIndex keeps track of such relationships, fetches the related rows, and triggers the necessary reprocessing incrementally.
Some source connectors support push-based change capture. For example, Google Drive supports a drive-level changelog and sends change notifications to your public URL; this is applicable to team drives and personal drives (only via OAuth; service accounts are not supported). When a file is created, updated, or deleted, CocoIndex can compute based on the diff.
All source connectors in CocoIndex provide a basic list API capability, which enables a generic change detection mechanism applicable to any data source.
For example, with local file systems, we can traverse all directories and subdirectories recursively to get the full list of entries and their metadata (like modification time). By comparing the current state with the previous state, CocoIndex can detect changes even without source-specific change notifications.
This approach works universally: every data source can leverage it even without source-specific change capture. The tradeoff is that when the number of entries is large, performing a complete traversal can be resource-intensive.
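As a minimal illustration of this list-and-diff logic (not CocoIndex's internals), suppose each scan produces a mapping from entry path to modification time:

def detect_changes(prev: dict[str, float], current: dict[str, float]):
    """Compare two scans; return (added, updated, deleted) entry paths."""
    added = current.keys() - prev.keys()
    deleted = prev.keys() - current.keys()
    updated = {path for path in current.keys() & prev.keys()
               if current[path] != prev[path]}
    return added, updated, deleted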
Some source connectors provide more advanced listing features, for example listing the most recently changed entries, and CocoIndex can take advantage of them for more efficient change detection.
For example, when the changelog is not available for Google Drive (see the conditions here on when it is unavailable), CocoIndex can periodically poll for recently modified entries, comparing last modified time against the last poll time. However, this cannot capture all changes, for example when an entry has been deleted.
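A rough sketch of such a polling loop (illustrative; list_modified_since and reprocess are hypothetical names):

import datetime
import time

def poll_recent_changes(source, poll_interval: datetime.timedelta):
    last_poll = datetime.datetime.now(datetime.timezone.utc)
    while True:
        time.sleep(poll_interval.total_seconds())
        scan_start = datetime.datetime.now(datetime.timezone.utc)
        for entry in source.list_modified_since(last_poll):  # hypothetical API
            reprocess(entry)  # deletions never show up here, hence the fallback scan
        last_poll = scan_start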
In CocoIndex, every Lego-block piece of the pipeline can be cached. Currently, whether the cache is enabled is decided by the implementation of each function. For builtin functions, the cache is enabled if the function performs heavy transformation.
Custom functions can take a cache parameter. When it's True, the executor will cache the result of the function for reuse during reprocessing. We recommend setting this to True for any function that is computationally intensive.
The output will be reused if all of these are unchanged: the spec (if one exists), the input data, and the behavior of the function. For this purpose, a behavior_version needs to be provided, and it should be increased on behavior changes.
For example, this enables cache for a standalone function:
@cocoindex.op.function(cache=True, behavior_version=1)
def compute_something(arg1: str, arg2: int | None = None) -> str:
    ...
This enables cache for a function defined by a spec and an executor:
class ComputeSomething(cocoindex.op.FunctionSpec):
    ...

@cocoindex.op.executor_class(cache=True, behavior_version=1)
class ComputeSomethingExecutor:
    spec: ComputeSomething
    ...
Thanks for reading, and we'd love to hear from you! If you like our work, it would mean a lot to us if you could support CocoIndex on GitHub with a star.
In this blog, we will show you how to use the OpenAI API to extract structured data from patient intake forms in different formats (PDF, Docx, etc.) from Google Drive.
In this blog, we will show you how to use CocoIndex to build text embeddings from Google Drive for RAG, step by step, including how to set up a Google Cloud service account for Google Drive. CocoIndex is an open source framework to build fresh indexes from your data for AI. It is designed to be easy to use and extend.
We're excited to share our progress with you! We'll be publishing these updates weekly, but since this is our first one, we're covering highlights from the last two weeks.
We had 9 releases in the last 2 weeks, with 100+ PRs merged (yes, we shipped a lot!). Here are the highlights.
In this blog, we will show you how to index a codebase for RAG with CocoIndex. CocoIndex is a tool to help you index and query your data, designed as a framework for building your own data pipelines. CocoIndex provides built-in support for codebase chunking, with native Tree-sitter support.
In this blog, we will show you how to use Ollama to extract structured data with a pipeline that runs locally and can be deployed on your own cloud/server.
We are thrilled to announce the open-source release of CocoIndex, the world's first engine that supports both custom transformation logic and incremental processing specialized for data indexing.
CocoIndex is the world's first open-source engine that supports both custom transformation logic and incremental processing specialized for data indexing. So, what is custom transformation logic?
When building data processing systems, it's easy to think all pipelines are similar - they take data in, transform it, and produce outputs. However, indexing pipelines have unique characteristics that set them apart from traditional ETL, analytics, or transactional systems. Let's explore what makes indexing special.
When building data processing and indexing systems, one of the key challenges is handling system updates gracefully. These systems maintain state across multiple components (like Pinecone, PostgreSQL, etc.) and need to evolve over time. Let's explore the challenges and potential solutions.
When building data indexing pipelines, handling large files efficiently presents unique challenges. For example, patent XML files from the USPTO can contain hundreds of patents in a single file, with each file being over 1GB in size. Processing such large files requires careful consideration of processing granularity and resource management.
An indexing pipeline builds indexes derived from source data. The index should always converge to the current version of the source data. In other words, once a new version of the source data is processed by the pipeline, all data derived from previous versions should no longer exist in the target index storage. This is called the data consistency requirement for an indexing pipeline.
At its core, data indexing is the process of transforming raw data into a format that's optimized for retrieval. Unlike an arbitrary application that may generate new source-of-truth data, indexing pipelines process existing data in various ways while maintaining trackability back to the original source. This intrinsic nature - being a derivative rather than source of truth - creates unique challenges and requirements.
High-quality data tailored for specific use cases is essential for successful AI applications in production. The old adage "garbage in, garbage out" rings especially true for modern AI systems - when a RAG pipeline or agent workflow is built on poorly processed, inconsistent, or irrelevant data, no amount of prompt engineering or model sophistication can fully compensate. Even the most advanced AI models can't magically make sense of low-quality or improperly structured data.
Welcome to the official CocoIndex blog! We're excited to share our journey in building high-performance indexing infrastructure for AI applications.
CocoIndex is designed to provide exceptional velocity for AI systems that need fast, reliable access to their data. Whether you're building large language models, recommendation systems, or other AI applications, our goal is to make data indexing and retrieval as efficient as possible.