LLM Retrieval Systems Engineering in 2026: Building Scalable RAG Pipelines for Accurate and Context-Aware Systems (Intelligent Systems Engineering Automation Series, Volume 2) - Softcover

Book 2 of 2: Intelligent Systems Engineering Automation Series

Corvin, Kairo

ISBN 9798254751359

Synopsis

LLM Retrieval Systems Engineering delivers a practical and structured approach to building high-performance retrieval-augmented generation (RAG) systems. This book focuses on enhancing large language models with external knowledge sources to produce more accurate, reliable, and context-aware outputs.
Inside, you will learn how to:

  1. Design end-to-end RAG pipelines from ingestion to response generation
  2. Structure and optimize vector databases for efficient retrieval
  3. Implement embedding strategies for improved semantic search
  4. Handle data chunking, indexing, and query optimization
  5. Evaluate and improve response accuracy and relevance
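The core loop the list above describes (chunk, embed, index, retrieve, then hand context to a generator) can be sketched in plain Python. This is a toy illustration only: the bag-of-words `embed` stands in for a real neural embedding model, and the linear-scan `retrieve` stands in for a vector database; the function names and parameters are this sketch's own, not from the book.

```python
import math
from collections import Counter

def chunk(text, size=8, overlap=2):
    """Split text into overlapping word windows (data chunking)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text):
    """Toy bag-of-words vector; a real pipeline would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, index, k=2):
    """Return the k chunks most similar to the query (semantic search)."""
    q = embed(query)
    return sorted(index, key=lambda d: cosine(q, d["vec"]), reverse=True)[:k]

# Ingestion: chunk each document and index chunk vectors.
docs = ["Vector databases store embeddings for fast similarity search.",
        "Chunking splits documents into passages before indexing."]
index = [{"text": c, "vec": embed(c)} for d in docs for c in chunk(d)]

# Query time: retrieved chunks become the context an LLM would generate from.
hits = retrieve("how do vector databases search embeddings", index)
context = "\n".join(h["text"] for h in hits)
```

In a production system each stage is swapped for real infrastructure (an embedding API, an ANN index, a reranker), but the data flow from ingestion to response context is the same.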
The book also covers advanced topics such as hybrid retrieval methods, system scalability, latency optimization, and real-world deployment patterns.
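One common hybrid retrieval pattern is to fuse a keyword ranking (e.g. BM25) with a vector ranking using Reciprocal Rank Fusion. The book does not specify its exact method, so this is a generic RRF sketch; the document ids and rankings are invented for illustration.

```python
def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists of doc ids.

    Each list contributes 1 / (k + rank + 1) per document; documents
    ranked well by multiple retrievers rise to the top of the fused list.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

keyword_ranking = ["d3", "d1", "d2"]  # e.g. from a BM25 index
vector_ranking = ["d1", "d2", "d3"]   # e.g. from a vector database
fused = rrf([keyword_ranking, vector_ranking])
```

RRF needs no score normalization across the two retrievers, which is why it is a popular default for combining lexical and semantic search.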
Through practical examples and engineering-focused insights, this guide helps you build AI systems that overcome the limitations of standalone language models by integrating dynamic and up-to-date knowledge sources.
Ideal for developers, data engineers, and AI practitioners, this book provides the tools needed to design robust and production-ready retrieval-based AI applications.

The synopsis may refer to a different edition of this title.