Curious if anyone here has used Reducto for document parsing or retrieval pipelines.
They seem to focus on generating LLM-ready chunks using a mix of vision-language models and what they call “embedding-optimized” or intelligent chunking. The idea is that document layout and meaning (tables, figures, etc.) are preserved before embeddings are generated for RAG or vector search systems.
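To make the pipeline concrete, here is a rough sketch of what layout-aware chunks feeding a small vector search might look like; the chunk schema and the embed() stand-in are my own assumptions for illustration, not Reducto’s actual output or API. The point is just that each chunk keeps its structural role (table, figure, prose) instead of being cut at a fixed character count.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in embedding (hashed bag-of-words) so the sketch runs without a real model."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Hypothetical layout-aware chunks: each keeps its structural role
# rather than being split on a fixed character count.
chunks = [
    {"type": "table",  "text": "Q3 revenue by region: NA 4.1M, EMEA 2.7M, APAC 1.9M"},
    {"type": "figure", "text": "Figure 2: latency vs. batch size for the retrieval service"},
    {"type": "prose",  "text": "Retrieval quality dropped when tables were split mid-row."},
]
index = [(chunk, embed(chunk["text"])) for chunk in chunks]

def search(query: str, k: int = 2):
    # Rank chunks by cosine similarity (vectors are already normalized).
    q = embed(query)
    ranked = sorted(index, key=lambda item: float(q @ item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

print(search("revenue by region"))
```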
I’m mostly wondering how this works in practice:
– Does their “embedding-optimized” chunking noticeably improve retrieval quality or reduce hallucinations?
– Did you still need to run additional preprocessing or custom chunking on top of it?
Would appreciate hearing from anyone who’s tried it in production or at scale.
Comments URL: https://news.ycombinator.com/item?id=45703569
Source: news.ycombinator.com
