React Native RAG - v0.2.0

    Class RAG

    Orchestrates Retrieval Augmented Generation. Coordinates a VectorStore and an LLM to ingest, retrieve, and generate.

    const rag = await new RAG({ vectorStore, llm }).load();
    const answer = await rag.generate({ input: 'What is RAG?' });

    Constructors

    • Creates a new RAG instance.

      Parameters

      • params: { llm: LLM; vectorStore: VectorStore }

        Object containing the implementations.

        • llm: LLM

          Large Language Model used for generation.

        • vectorStore: VectorStore

          Vector store used for retrieval.

      Returns RAG
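
      A minimal construction sketch; MyVectorStore and MyLLM below are hypothetical placeholders for whatever concrete VectorStore and LLM implementations your app provides.

      // Sketch only: MyVectorStore and MyLLM are hypothetical stand-ins for
      // concrete VectorStore and LLM implementations available in your project.
      const vectorStore: VectorStore = new MyVectorStore();
      const llm: LLM = new MyLLM();
      const rag = new RAG({ vectorStore, llm });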

    Methods

    • Adds a document to the vector store.

      Parameters

      • params: {
            document?: string;
            embedding?: number[];
            id?: string;
            metadata?: Record<string, any>;
        }

        Parameters for the operation.

        • Optional document?: string

          Raw text content for the document.

        • Optional embedding?: number[]

          Embedding for the document.

        • Optional id?: string

          ID for the document.

        • Optional metadata?: Record<string, any>

          Metadata for the document.

      Returns Promise<string>

      Promise that resolves to the ID of the newly added document.
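
      A usage sketch, assuming the method is exposed as addDocument (the signature name is not shown in this extract):

      // Sketch only: the method name `addDocument` is assumed.
      const id = await rag.addDocument({
        document: 'RAG combines retrieval with generation.',
        metadata: { source: 'notes.md' },
      });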

    • Deletes documents from the vector store that match the provided predicate.

      Parameters

      • params: { predicate: (value: GetResult) => boolean }

        Parameters for deletion.

        • predicate: (value: GetResult) => boolean

          Predicate to match documents for deletion.

      Returns Promise<void>

      Promise that resolves once the documents are deleted.
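
      A deletion sketch, assuming the method is exposed as deleteDocument; the metadata field on GetResult and the source key are illustrative assumptions:

      // Sketch only: the method name `deleteDocument`, the `metadata` field on
      // GetResult, and the `source` key are assumptions for illustration.
      await rag.deleteDocument({
        predicate: (doc) => doc.metadata?.source === 'notes.md',
      });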

    • Generates a response based on the input messages and retrieved documents. If augmentedGeneration is true, it retrieves relevant documents from the vector store and includes them in the prompt for the LLM.

      Parameters

      • params: {
            augmentedGeneration?: boolean;
            callback?: (token: string) => void;
            input: string | Message[];
            nResults?: number;
            predicate?: (value: QueryResult) => boolean;
            promptGenerator?: (
                messages: Message[],
                retrievedDocs: QueryResult[],
            ) => string;
            questionGenerator?: (messages: Message[]) => string;
        }

        Generation parameters.

        • Optional augmentedGeneration?: boolean

          Whether to augment with retrieved context (default: true).

        • Optional callback?: (token: string) => void

          Token callback for streaming.

        • input: string | Message[]

          Input messages or a single string.

        • Optional nResults?: number

          Number of docs to retrieve (default: 3).

        • Optional predicate?: (value: QueryResult) => boolean

          Filter applied to retrieved docs.

        • Optional promptGenerator?: (messages: Message[], retrievedDocs: QueryResult[]) => string

          Builds the context-augmented prompt from messages and retrieved docs.

        • Optional questionGenerator?: (messages: Message[]) => string

          Maps the message list to a search query (default: last message content).

      Returns Promise<string>

      Promise that resolves to the generated text.
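
      A generation sketch, assuming the method is exposed as generate (as in the class-level example). The Message shape ({ role, content }) and the document/metadata fields on QueryResult are assumptions for illustration:

      // Sketch: stream tokens while retrieving the 5 most relevant documents.
      // The `lang` metadata key and the field names used below are assumptions.
      let streamed = '';
      const answer = await rag.generate({
        input: [{ role: 'user', content: 'Summarize the design doc.' }],
        nResults: 5,
        predicate: (doc) => doc.metadata?.lang === 'en',
        promptGenerator: (messages, retrievedDocs) =>
          `Context:\n${retrievedDocs.map((d) => d.document).join('\n')}\n\n` +
          `Question: ${messages[messages.length - 1].content}`,
        callback: (token) => {
          streamed += token; // accumulate partial output for the UI
        },
      });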

    • Interrupts the ongoing text generation process.

      Returns Promise<void>

      Promise that resolves when the interruption is complete.
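
      An interruption sketch, assuming the method is exposed as interrupt:

      // Sketch only: the method name `interrupt` is assumed. Stops an
      // in-flight generation, e.g. when the user leaves the screen.
      const pending = rag.generate({ input: 'Write a long summary of the corpus.' });
      // ...later, e.g. when the screen unmounts:
      await rag.interrupt();
      void pending; // how the pending promise settles after interruption is not specified here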

    • Initializes the RAG system by loading the vector store and LLM.

      Returns Promise<RAG>

      A promise that resolves to the same RAG instance.
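
      As in the class-level example, loading can be chained at construction time or performed as a separate step:

      const rag = new RAG({ vectorStore, llm });
      await rag.load(); // equivalent to: const rag = await new RAG({ vectorStore, llm }).load();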

    • Splits a document into chunks and adds them to the vector store. If no textSplitter is provided, a default RecursiveCharacterTextSplitter({ chunkSize: 500, chunkOverlap: 100 }) is used.

      Parameters

      • params: {
            document: string;
            metadataGenerator?: (chunks: string[]) => Record<string, any>[];
            textSplitter?: TextSplitter;
        }

        Parameters for the operation.

        • document: string

          The content of the document to split and add.

        • Optional metadataGenerator?: (chunks: string[]) => Record<string, any>[]

          Function that generates metadata for each chunk. Must return an array whose length equals the number of chunks.

        • Optional textSplitter?: TextSplitter

          Text splitter implementation.

      Returns Promise<string[]>

      Promise that resolves to the IDs of the newly added chunks.
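
      An ingestion sketch, assuming the method is exposed as splitAddDocument; it passes a custom splitter in place of the default described above, and tags each chunk with illustrative metadata:

      // Sketch only: the method name `splitAddDocument` is assumed;
      // RecursiveCharacterTextSplitter is referenced in the description above.
      const longMarkdown = '# Guide\n...long document text...';
      const ids = await rag.splitAddDocument({
        document: longMarkdown,
        textSplitter: new RecursiveCharacterTextSplitter({ chunkSize: 300, chunkOverlap: 50 }),
        metadataGenerator: (chunks) =>
          chunks.map((_, i) => ({ source: 'guide.md', chunkIndex: i })),
      });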

    • Unloads the RAG system, releasing resources used by the vector store and LLM.

      Returns Promise<void>

      A promise that resolves when unloading is complete.
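
      A lifecycle sketch, assuming the method is exposed as unload; pair it with load so resources are released when the instance is no longer needed:

      const rag = await new RAG({ vectorStore, llm }).load();
      try {
        // ...ingest documents and generate answers...
      } finally {
        await rag.unload(); // method name assumed; releases vector store and LLM resources
      }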

    • Updates a document in the vector store by its ID.

      Parameters

      • params: {
            document?: string;
            embedding?: number[];
            id: string;
            metadata?: Record<string, any>;
        }

        Parameters for the update.

        • Optional document?: string

          New content for the document.

        • Optional embedding?: number[]

          New embedding for the document. If not provided, it will be generated based on the document.

        • id: string

          The ID of the document to update.

        • Optional metadata?: Record<string, any>

          New metadata for the document.

      Returns Promise<void>

      Promise that resolves once the document is updated.
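
      An update sketch, assuming the method is exposed as updateDocument and reusing the id returned when the document was added:

      // Sketch only: the method name `updateDocument` is assumed; `id` is the
      // value returned by the earlier add sketch.
      await rag.updateDocument({
        id,
        document: 'RAG retrieves context before generating an answer.',
        metadata: { source: 'notes.md', revised: true },
        // embedding omitted: per the description above, it is regenerated from the new text
      });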