Flow-based schema inference for Qdrant
CocoIndex supports Qdrant natively: the integration features a high-performance Rust stack with end-to-end incremental processing for scale and data freshness. 🎉 We just rolled out our latest change that handles automatic target schema setup for Qdrant from the CocoIndex indexing flow.
That means developers don't need to do any schema setup for target stores, including setting up tables, field types, keys, and indexes. The setup is the result of schema inference from the CocoIndex flow definition. This is already supported in our native integrations with Postgres, Neo4j, and Kuzu, and it allows for more seamless operation between the indexing pipeline and target stores.
No more manual setup
Previously, users had to manually create the collection before indexing:
curl -X PUT 'http://localhost:6333/collections/image_search' \
  -H 'Content-Type: application/json' \
  -d '{
    "vectors": {
      "embedding": {
        "size": 768,
        "distance": "Cosine"
      }
    }
  }'
With the new change, users don't need to do any manual collection management.
How it works
Flow Definition
Following the dataflow programming model, the user defines a flow where every step carries output data type information, and the next step takes that data type information as input. See an example (~100 lines of Python end to end).
In short, it can be presented as the following lineage graph.
In the declarative dataflow above,
Target = Formula(Source)
This implies both the data and the expected target schema. A single flow definition drives both data processing (including change handling) and target schema setup, providing a single source of truth for both data and schema. A similar way to think about it is how type systems infer a data type from operators and inputs, i.e. type inference (for example, in Rust).
In the indexing flow, exporting embeddings and metadata directly to Qdrant is all you need.
doc_embeddings.export(
    "doc_embeddings",
    cocoindex.storages.Qdrant(collection_name=QDRANT_COLLECTION),
    primary_key_fields=["id"],
)
In this example:
As part of the Qdrant schema setup, it is necessary to specify the vector size for embedding fields. Qdrant only needs a schema for vector fields, including the vector size and distance; other fields are not part of the schema. Without inference, the vector name and size have to be kept consistent with the flow, and users need to maintain that consistency manually.
When using CocoIndex, the vector size is decided by the embedding model. For example:
- At the SentenceTransformerEmbed transformation step, the data field embedding has Vector[float, 384] type. It is automatically inferred because we used SentenceTransformerEmbed with the all-MiniLM-L6-v2 model, which has 384 dimensions.
- When we add it to the doc_embeddings collector, the data type of the embedding field is carried over.
- The doc_embeddings collector exports to Qdrant, and the schema setup is derived from it consistently and in a robust way.
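To make these steps concrete, here is a minimal sketch of what such a flow definition can look like. It is illustrative rather than the linked ~100-line example: the source path, chunking parameters, and field names (content, text, location, filename) are assumptions, and the id is generated per row with cocoindex.GeneratedField.UUID.

import cocoindex

QDRANT_COLLECTION = "doc_embeddings"

@cocoindex.flow_def(name="DocEmbedding")
def doc_embedding_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
    # Source: markdown files from a local directory (illustrative path).
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.LocalFile(path="markdown_files"))
    doc_embeddings = data_scope.add_collector()
    with data_scope["documents"].row() as doc:
        # Split each document into chunks (parameters are illustrative).
        doc["chunks"] = doc["content"].transform(
            cocoindex.functions.SplitRecursively(),
            language="markdown", chunk_size=2000, chunk_overlap=500)
        with doc["chunks"].row() as chunk:
            # Inferred type: Vector[float, 384], because all-MiniLM-L6-v2
            # produces 384-dimensional embeddings.
            chunk["embedding"] = chunk["text"].transform(
                cocoindex.functions.SentenceTransformerEmbed(
                    model="sentence-transformers/all-MiniLM-L6-v2"))
            # The embedding field's type is carried over into the collector.
            doc_embeddings.collect(
                id=cocoindex.GeneratedField.UUID,
                filename=doc["filename"],
                location=chunk["location"],
                text=chunk["text"],
                embedding=chunk["embedding"])
    # Same export as shown above; the vector schema is derived from the
    # collected embedding field.
    doc_embeddings.export(
        "doc_embeddings",
        cocoindex.storages.Qdrant(collection_name=QDRANT_COLLECTION),
        primary_key_fields=["id"])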
If you have multiple fields using different embedding models, the vector size will differ per embedding field. You'll end up with multiple different named vectors in Qdrant, each with its own size.
CocoIndex always handles the schema automatically, no matter how many fields/vectors are involved in your flow, so you can just focus on the transformation logic, as in the sketch below.
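For instance, a hypothetical variant of the flow above could collect two embeddings per chunk from different models (the second model name and its 768-dimension size are assumptions used for illustration); each field then becomes its own named vector in the inferred Qdrant schema.

# Hypothetical fragment inside the chunk row scope of the flow above:
# two embeddings per chunk, produced by different models. Each field
# gets its own inferred vector type, and therefore its own named vector
# (with its own size) in the Qdrant collection schema.
chunk["embedding_small"] = chunk["text"].transform(
    cocoindex.functions.SentenceTransformerEmbed(
        model="sentence-transformers/all-MiniLM-L6-v2"))   # Vector[float, 384]
chunk["embedding_large"] = chunk["text"].transform(
    cocoindex.functions.SentenceTransformerEmbed(
        model="sentence-transformers/all-mpnet-base-v2"))  # Vector[float, 768]
doc_embeddings.collect(
    id=cocoindex.GeneratedField.UUID,
    text=chunk["text"],
    embedding_small=chunk["embedding_small"],
    embedding_large=chunk["embedding_large"])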
Setup and Update
To start a CocoIndex process, users first need to run the setup step, which covers all the necessary setup for any backends involved:
cocoindex setup main.py
cocoindex setup will:
- Create new backends required for the schema setup, like tables/collections/etc.
- Alter existing backends on schema changes. It tries a non-destructive update if possible, e.g. when primary keys don't change and the target store supports in-place schema updates (such as ALTER TABLE in Postgres); otherwise it drops and recreates.
- Drop stale backends.
Developers then run
cocoindex update main.py [-L]
to start an indexing pipeline (-L for long-running).
If you've made logic updates that require the schema on the target store to change, don't worry.
When you run cocoindex update again after the logic update, CocoIndex will infer the new schema for the target store.
It takes a cocoindex setup run to push the new schema to the target store, and the CLI will notify you when that is needed.
As a design choice, CocoIndex won't update any schema without your confirmation, as some schema updates may involve destructive changes.
For example, in the flow above, if you change the embedding model, the vector size may change.
cocoindex setup will drop the previous collection and create a new one, and in the next cocoindex update run, values will be populated again.
Here the cached intermediate computation data will be reused, so it'll be a lot faster than building the index from scratch.
Note that Qdrant doesn't support in-place schema changes the way ALTER TABLE works in most relational databases, so it's a drop-and-recreate.
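As a hypothetical illustration of such a change, swapping the embedding model in the transform step is all it takes to change the inferred vector size (the replacement model and its 768-dimension size are assumptions):

# Before: all-MiniLM-L6-v2 -> embedding inferred as Vector[float, 384].
# After switching the model, the inferred type (and hence the Qdrant
# vector size) changes, so cocoindex setup recreates the collection.
chunk["embedding"] = chunk["text"].transform(
    cocoindex.functions.SentenceTransformerEmbed(
        model="sentence-transformers/all-mpnet-base-v2"))  # Vector[float, 768]

The export call stays the same; the next cocoindex setup picks up the new size from the inferred schema.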
Drop a flow
To drop a flow, you’d run
cocoindex drop main.py
cocoindex drop
drops the backends when dropping the flow.
All backend entities for the target stores, such as a PostgreSQL table or a Qdrant collection, are owned by the flow as derived data, so they will be dropped too.
Why automatic target schema inference?
The question should really be, why not?
The traditional way is for users to fully figure out when and how to set up and update the target schema themselves, including the specific schema. Indexing flows often span multiple systems. For example:
On the target store:
- Vector databases (PGVector, Qdrant, etc.)
- Relational databases (PostgreSQL)
- Graph databases (Neo4j, Kuzu etc.)
The data types you're outputting and your target schema must match up.
If there's any internal state tracking, e.g. in the case of incremental processing:
- Internal tables (state tracking)
Doing this manually is tedious and painful, as all of these systems must agree on schema and structure. It typically requires:
- Manual setup and syncing of schemas.
- Tight coordination between developers, DevOps, and data engineers - people writing the code may not be the same people deploying / running it in an organization.
- Debugging misalignments between flow logic and storage layers.
- Stressful production rollouts.
Any additional moving parts in the indexing pipeline system add friction: any mismatch between the logic and the storage schema could result in silent failures or subtle bugs.
- In some cases the failure is not silent but obvious, e.g. if users forget to create a table or collection, writes to the target simply error out. Even then, figuring out the exact schema/configuration for the target is still subtle.
- Other scenarios can lead to non-obvious issues, i.e. the storage for internal state and the target getting out of sync. For example, users may drop the flow and recreate it, but not do so for the target; or drop and recreate the target, but not the internal storage. Then the two are out of sync, which leads to hard-to-debug issues. The gist is that a pipeline usually needs multiple backends, and keeping them in sync manually is error prone.
Continuous changes to a system introduce persistent pains in production. Every time a data flow is updated, the target schema must evolve alongside it, making this not a one-off tedious process but an ongoing source of friction.
In real-world data systems, new fields often need indexing, old ones get deprecated, and transformations evolve. If a type changes, the schema must adapt. These shifts magnify the complexity and underscore the need for more resilient, adaptable infrastructure.
Following the dataflow programming model, every step is derived data, all the way to the end. Indexing infrastructure requires data consistency between the indexing pipeline and target stores, and the fewer loose ends there are, the easier and more robust it will be.
Our Vision: Declarative, Flow-Based Indexing
When we started CocoIndex, our vision was to let developers define data transformation and indexing logic declaratively, and have CocoIndex do the rest. Automatic schema setup is one big step toward this.
We're committed to taking care of the underlying infrastructure, so developers can focus on what matters: the data and the logic. We are serious when we say you can have a production-ready data pipeline for AI in ~100 lines of Python code.
If you’ve ever struggled with keeping your indexing logic and storage setup in sync — we’ve been there. Let us know what you’d love to see next.