Auffarth / Kuligin | Generative AI with LangChain | E-Book | www.sack.de

E-book, English, 484 pages

Auffarth / Kuligin, Generative AI with LangChain
Build production-ready LLM applications and advanced agents using Python, LangChain, and LangGraph
1st edition, 2025
ISBN: 978-1-83702-200-7
Publisher: Packt Publishing
Format: EPUB
Copy protection: 0 - No protection

This second edition tackles the biggest challenge facing companies in AI today: moving from prototypes to production. Fully updated to reflect the latest developments in the LangChain ecosystem, it captures how modern AI systems are developed, deployed, and scaled in enterprise environments. This edition places a strong focus on multi-agent architectures, robust LangGraph workflows, and advanced retrieval-augmented generation (RAG) pipelines.
You'll explore design patterns for building agentic systems, with practical implementations of multi-agent setups for complex tasks. The book guides you through reasoning techniques such as Tree-of-Thoughts, structured generation, and agent handoffs, complete with error handling examples. Expanded chapters on testing, evaluation, and deployment address the demands of modern LLM applications, showing you how to design secure, compliant AI systems with built-in safeguards and responsible development principles. This edition also expands RAG coverage with guidance on hybrid search, re-ranking, and fact-checking pipelines to enhance output accuracy.
Whether you're extending existing workflows or architecting multi-agent systems from scratch, this book provides the technical depth and practical instruction needed to design LLM applications ready for success in production environments.



1

The Rise of Generative AI: From Language Models to Agents


The gap between experimental and production-ready agents is stark. According to LangChain’s State of Agents report, performance quality is the #1 concern among 51% of companies using agents, yet only 39.8% have implemented proper evaluation systems. Our book bridges this gap on two fronts: first, by demonstrating how LangChain and LangSmith provide robust testing and observability solutions; second, by showing how LangGraph’s state management enables complex, reliable multi-agent systems. You’ll find production-tested code patterns that leverage each tool’s strengths for enterprise-scale implementation and extend basic RAG into robust knowledge systems.

LangChain accelerates time-to-market with readily available building blocks, unified vendor APIs, and detailed tutorials. Furthermore, LangChain's and LangSmith's debugging and tracing functionalities simplify the analysis of complex agent behavior. Finally, LangGraph has excelled in executing its philosophy behind agentic AI: it allows a developer to give a large language model (LLM) partial control over the workflow (and to manage how much control an LLM should have), while still keeping agentic workflows reliable and performant.

In this chapter, we’ll explore how LLMs have evolved into the foundation for agentic AI systems and how frameworks like LangChain and LangGraph transform these models into production-ready applications. We’ll also examine the modern LLM landscape, understand the limitations of raw LLMs, and introduce the core concepts of agentic applications that form the basis for the hands-on development we’ll tackle throughout this book.

In a nutshell, the following topics will be covered in this chapter:

  • The modern LLM landscape
  • From models to agentic applications
  • Introducing LangChain

The modern LLM landscape


Artificial intelligence (AI) has long been a subject of fascination and research, but recent advancements in generative AI have propelled it into mainstream adoption. Unlike traditional AI systems that classify data or make predictions, generative AI can create new content—text, images, code, and more—by leveraging vast amounts of training data.

The generative AI revolution was catalyzed by the 2017 introduction of the transformer architecture, which enabled models to process text with unprecedented understanding of context and relationships. As researchers scaled these models from millions to billions of parameters, they discovered something remarkable: larger models didn’t just perform incrementally better—they exhibited entirely new emergent capabilities like few-shot learning, complex reasoning, and creative generation that weren’t explicitly programmed. Eventually, the release of ChatGPT in 2022 marked a turning point, demonstrating these capabilities to the public and sparking widespread adoption.

The landscape shifted again with the open-source revolution led by models like Llama and Mistral, democratizing access to powerful AI beyond the major tech companies. However, these advanced capabilities came with significant limitations—models couldn’t reliably use tools, reason through complex problems, or maintain context across interactions. This gap between raw model power and practical utility created the need for specialized frameworks like LangChain that transform these models from impressive text generators into functional, production-ready agents capable of solving real-world problems.

Key terminologies

Tools: External utilities or functions that AI models can use to interact with the world. Tools allow agents to perform actions like searching the web, calculating values, or accessing databases to overcome LLMs’ inherent limitations.
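The idea can be illustrated without any framework at all: a tool is just a named function an agent runtime can look up and call. The sketch below uses hypothetical tool names and a plain dictionary registry for illustration; it is not LangChain's actual tool API.

```python
# A minimal, framework-free sketch of "tools": plain Python functions
# an agent can dispatch to by name. Names and registry are illustrative.

def search_web(query: str) -> str:
    """Pretend web search; a real tool would call a search API."""
    return f"Top result for '{query}'"

def calculate(expression: str) -> str:
    """Evaluate a simple arithmetic expression."""
    # eval() is acceptable only in this sketch; never eval untrusted input.
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"search_web": search_web, "calculate": calculate}

def run_tool(name: str, argument: str) -> str:
    """Dispatch a tool call the way an agent runtime would."""
    return TOOLS[name](argument)

print(run_tool("calculate", "2 + 3 * 4"))  # -> 14
```

In LangChain, the same pattern is wrapped in schemas so the LLM can see each tool's name, description, and arguments when deciding what to call.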

Memory: Systems that allow AI applications to store and retrieve information across interactions. Memory enables contextual awareness in conversations and complex workflows by tracking previous inputs, outputs, and important information.
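At its simplest, conversational memory is a buffer of past turns that gets replayed as context on each call. The class below is a hypothetical illustration of that idea, not LangChain's memory abstraction, which offers richer persistence and state management.

```python
# Minimal sketch of conversation memory: a sliding window of
# (role, content) turns, rendered as context for the next model call.

class ConversationMemory:
    def __init__(self, max_turns: int = 10):
        self.turns: list[tuple[str, str]] = []
        self.max_turns = max_turns

    def add(self, role: str, content: str) -> None:
        self.turns.append((role, content))
        # Keep only the most recent turns to bound context size.
        self.turns = self.turns[-self.max_turns:]

    def as_context(self) -> str:
        return "\n".join(f"{role}: {content}" for role, content in self.turns)

memory = ConversationMemory()
memory.add("user", "My name is Ada.")
memory.add("assistant", "Nice to meet you, Ada!")
print(memory.as_context())
```

The sliding window is the crudest possible policy; production systems combine windows with summarization or retrieval so important facts survive truncation.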

Reinforcement learning from human feedback (RLHF): A training technique where AI models learn from direct human feedback, optimizing their performance to align with human preferences. RLHF helps create models that are more helpful, safe, and aligned with human values.

Agents: AI systems that can perceive their environment, make decisions, and take actions to accomplish goals. In LangChain, agents use LLMs to interpret tasks, choose appropriate tools, and execute multi-step processes with minimal human intervention.
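The perceive-decide-act cycle can be sketched in plain Python by stubbing out the LLM: here a hypothetical `decide()` function stands in for the model's tool-selection step. This is an illustrative toy, not LangChain's agent implementation.

```python
# Illustrative agent loop: decide() is a stub for the LLM choosing an
# action; the loop executes tools until a final answer is produced.

def calculate(expression: str) -> str:
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculate": calculate}

def decide(task: str) -> tuple[str, str]:
    """Stub for the LLM: route arithmetic-looking tasks to the calculator."""
    if any(ch.isdigit() for ch in task):
        return ("calculate", task)
    return ("final_answer", "I need a tool I don't have.")

def run_agent(task: str, max_steps: int = 3) -> str:
    for _ in range(max_steps):
        action, payload = decide(task)
        if action == "final_answer":
            return payload
        observation = TOOLS[action](payload)  # act on the environment
        return f"The result is {observation}"
    return "Step limit reached."

print(run_agent("7 * 6"))  # -> The result is 42
```

Real agents replace `decide()` with an LLM call and feed each observation back into the next decision, which is exactly the loop LangGraph lets you make explicit and controllable.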

Year  | Development                      | Key Features
------|----------------------------------|--------------------------------------------------------
1990s | IBM Alignment Models             | Statistical machine translation
2000s | Web-scale datasets               | Large-scale statistical models
2009  | Statistical models dominate      | Large-scale text ingestion
2012  | Deep learning gains traction     | Neural networks outperform statistical models
2016  | Neural Machine Translation (NMT) | Seq2seq deep LSTMs replace statistical methods
2017  | Transformer architecture         | Self-attention revolutionizes NLP
2018  | BERT and GPT-1                   | Transformer-based language understanding and generation
2019  | GPT-2                            | Large-scale text generation, public awareness increases
2020  | GPT-3                            | API-based access, state-of-the-art performance
2022  | ChatGPT                          | Mainstream adoption of LLMs
2023  | Large Multimodal Models (LMMs)   | AI models process text, images, and audio
2024  | OpenAI o1                        | Stronger reasoning capabilities
2025  | DeepSeek R1                      | Open-weight, large-scale AI model

Table 1.1: A timeline of major developments in language models

The field of LLMs is rapidly evolving, with multiple models competing in terms of performance, capabilities, and accessibility. Each provider brings distinct advantages, from OpenAI’s advanced general-purpose AI to Mistral’s open-weight, high-efficiency models. Understanding the differences between these models helps practitioners make informed decisions when integrating LLMs into their applications.

Model comparison


The following points outline key factors to consider when comparing different LLMs, focusing on their accessibility, size, capabilities, and specialization:

  • Open-source vs. closed-source models: Open-source models like Mistral and LLaMA provide transparency and the ability to run locally, while closed-source models like GPT-4 and Claude are accessible through APIs. Open-source LLMs can be downloaded and modified, enabling developers and...


