Alto | Building LLM Powered Applications | E-Book | sack.de

E-book, English, 342 pages

Valentina Alto: Building LLM Powered Applications

Create intelligent apps and agents with large language models
1st edition, 2024
ISBN: 978-1-83546-263-8
Publisher: De Gruyter
Format: EPUB
Copy protection: none




Get hands-on with GPT-3.5, GPT-4, LangChain, Llama 2, Falcon LLM, and more to build sophisticated LLM-powered AI applications

Key Features

  • Embed LLMs into real-world applications
  • Use LangChain to orchestrate LLMs and their components within applications
  • Grasp basic and advanced techniques of prompt engineering

Book Description

Building LLM Powered Applications delves into the fundamental concepts, cutting-edge technologies, and practical applications of LLMs, ultimately leading up to large foundation models (LFMs), which extend the boundaries of AI capabilities.



The book begins with an in-depth introduction to LLMs. We then explore various mainstream architectural frameworks, including both proprietary models (GPT-3.5/4) and open-source models (Falcon LLM), and analyze their unique strengths and differences. Moving ahead, with a focus on the Python-based, lightweight framework called LangChain, we guide you through the process of creating intelligent agents capable of retrieving information from unstructured data and engaging with structured data using LLMs and powerful toolkits. Furthermore, the book ventures into the realm of LFMs, which transcend language modeling to encompass various AI tasks and modalities, such as vision and audio.



Whether you are a seasoned AI expert or a newcomer to the field, this book is your roadmap to unlock the full potential of LLMs and forge a new era of intelligent machines.

What you will learn

  • Explore the core components of LLM architecture, including encoder-decoder blocks and embeddings
  • Understand the unique features of LLMs like GPT-3.5/4, Llama 2, and Falcon LLM
  • Use AI orchestrators like LangChain, with Streamlit for the frontend
  • Get familiar with LLM components such as memory, prompts, and tools
  • Learn how to use non-parametric knowledge and vector databases
  • Understand the implications of LFMs for AI research and industry applications
  • Customize your LLMs with fine-tuning
  • Learn about the ethical implications of LLM-powered applications

Who this book is for

Software engineers and data scientists who want hands-on guidance for applying LLMs to build applications. The book will also appeal to technical leaders, students, and researchers interested in applied LLM topics.



We don't assume prior experience with LLMs specifically, but readers should have core ML and software engineering fundamentals to understand and apply the content.




Further Information & Material


Table of Contents

- Introduction to Large Language Models
- LLMs for AI-Powered Applications
- Choosing an LLM for Your Application
- Prompt Engineering
- Embedding LLMs within Your Applications
- Building Conversational Applications
- Search and Recommendation Engines with LLMs
- Using LLMs with Structured Data
- Working with Code
- Building Multimodal Applications with LLMs
- Fine-Tuning Large Language Models
- Responsible AI
- Emerging Trends and Innovations


Preface


With this book, we embark upon an exploration of large language models (LLMs) and the transformative paradigm they represent within the realm of artificial intelligence (AI). This comprehensive guide helps you delve into the fundamental concepts, from solid theoretical foundations of these cutting-edge technologies to practical applications that LLMs offer, ultimately converging on the ethical and responsible considerations while using generative AI solutions. This book aims to provide you with a firm understanding of how the emerging LLMs in the market can impact individuals, large enterprises, and society. It focuses on how to build powerful applications powered by LLMs, leveraging new AI orchestrators such as LangChain and uncovering new trends in modern application development.

By the end of this book, you will be able to navigate the rapidly evolving ecosystem of generative AI solutions more easily; plus, you will have the tools to get the most out of LLMs in both your daily tasks and your businesses. Let’s get started!

Who this book is for


The book is designed to mainly appeal to a technical audience with some basic Python code foundations. However, the theoretical chapters and the hands-on exercises are based on generative AI foundations and industry-led use cases, which might be of interest to non-technical audiences as well.

Overall, the book caters to individuals interested in gaining a comprehensive understanding of the transformative power of LLMs, enabling them to navigate the rapidly evolving AI landscape with confidence and foresight. All kinds of readers are welcome, but readers who can benefit the most from this book include:

  • Software developers and engineers: This book provides practical guidance for developers looking to build applications leveraging LLMs. It covers integrating LLMs into app backends, APIs, architectures, and so on.
  • Data scientists: For data scientists interested in deploying LLMs for real-world usage, this book shows how to take models from research to production. It covers model serving, monitoring, and optimization.
  • AI/ML engineers: Engineers focused on AI/ML applications can leverage this book to understand how to architect and deploy LLMs as part of intelligent systems and agents.
  • Technical founders/CTOs: Startup founders and CTOs can use this book to evaluate if and how LLMs could be used within their apps and products. It provides a technical overview alongside business considerations.
  • Students: Graduate students and advanced undergraduates studying AI, ML, natural language processing (NLP), or computer science can learn how LLMs are applied in practice from this book.
  • LLM researchers: Researchers working on novel LLM architectures, training techniques, and so on will gain insight into real-world model usage and the associated challenges.

What this book covers


Chapter 1, Introduction to Large Language Models, provides an introduction to and deep dive into LLMs, a powerful set of deep learning neural networks in the domain of generative AI. It introduces the concept of LLMs, their differentiators from classical machine learning models, and the relevant jargon. It also discusses the architecture of the most popular LLMs, moving on to explore how LLMs are trained and consumed, and compares base LLMs with fine-tuned LLMs. By the end of this chapter, you will have the foundations of what LLMs are and their positioning in the landscape of AI, creating the basis for the subsequent chapters.

Chapter 2, LLMs for AI-Powered Applications, explores how LLMs are revolutionizing the world of software development, leading to a new era of AI-powered applications. By the end of this chapter, you will have a clearer picture of how LLMs can be embedded in different application scenarios, with the help of the new AI orchestrator frameworks that are currently available in the AI development market.

Chapter 3, Choosing an LLM for Your Application, highlights how different LLMs may have different architectures, sizes, training data, capabilities, and limitations. Choosing the right LLM for your application is not a trivial decision, as it can significantly impact the performance, quality, and cost of your solution. In this chapter, we will navigate the process of choosing the right LLM for your application. We will discuss the most promising LLMs in the market, the main criteria and tools to use when comparing LLMs, and the various trade-offs between size and performance. By the end of this chapter, you should have a clear understanding of how to choose the right LLM for your application and how to use it effectively and responsibly.

Chapter 4, Prompt Engineering, explains why prompt engineering is a crucial activity when designing LLM-powered applications, since prompts have a massive impact on the performance of LLMs. In fact, several techniques can be implemented not only to refine your LLM's responses but also to reduce the risks associated with hallucinations and biases. In this chapter, we will cover the emerging techniques in the field of prompt engineering, from basic approaches up to advanced frameworks. By the end of this chapter, you will have the foundations to build functional and solid prompts for your LLM-powered applications, which will also be relevant in the upcoming chapters.
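As a flavor of the basic end of that spectrum, a few-shot prompt can be assembled with plain string formatting. The sketch below is our own illustration (the task, examples, and helper name are invented, not taken from the book):

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # The unanswered query goes last; the model completes the final "Output:".
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each sentence as positive or negative.",
    [("I loved this movie!", "positive"),
     ("The service was terrible.", "negative")],
    "The book exceeded my expectations.",
)
print(prompt)
```

Advanced frameworks build on the same idea: prompt text composed programmatically from instructions, examples, and retrieved context.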

Chapter 5, Embedding LLMs within Your Applications, discusses a new set of components that developing applications with LLMs has introduced into the landscape of software development. To make it easier to orchestrate LLMs and their related components in an application flow, several AI frameworks have emerged, of which LangChain is one of the most widely used. In this chapter, we will take a deep dive into LangChain and how to use it, learning how to call open-source LLM APIs from code via the Hugging Face Hub and how to manage prompt engineering. By the end of this chapter, you will have the technical foundations to start developing your LLM-powered applications using LangChain and open-source Hugging Face models.
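Conceptually, the core unit such orchestrators provide is a "chain": fill a prompt template, call a model, return the result. The minimal pure-Python sketch below imitates that flow with a stub model so it runs without API keys; all class and method names here are invented for illustration and are not LangChain's actual API:

```python
class FakeLLM:
    """Stub model that returns a canned reply; a real chain would call an
    LLM API here (e.g. a model hosted on the Hugging Face Hub)."""
    def generate(self, prompt: str) -> str:
        return f"[model response to {len(prompt)} chars of prompt]"

class SimpleChain:
    def __init__(self, template: str, llm):
        self.template = template      # e.g. "Summarize: {text}"
        self.llm = llm

    def run(self, **variables) -> str:
        prompt = self.template.format(**variables)  # 1. fill the template
        return self.llm.generate(prompt)            # 2. call the model

chain = SimpleChain("Summarize the following text:\n{text}", FakeLLM())
result = chain.run(text="LLMs are deep neural networks trained on vast corpora.")
print(result)
```

Swapping `FakeLLM` for a real client is the whole point of an orchestrator: the chain's structure stays the same while the model behind it changes.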

Chapter 6, Building Conversational Applications, marks the start of the hands-on section of this book, with your first concrete implementation of LLM-powered applications. Throughout this chapter, we will cover a step-by-step implementation of a conversational application, using LangChain and its components. We will configure the schema of a simple chatbot, adding a memory component, non-parametric knowledge, and tools to make the chatbot "agentic." By the end of this chapter, you will be able to set up your own conversational application project with just a few lines of code.
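The memory component essentially keeps the dialogue history and re-injects it into each new prompt so the model can refer back to earlier turns. A toy sketch of the idea (our own illustration, far simpler than LangChain's memory classes):

```python
class ChatMemory:
    """Minimal conversation buffer: store turns, render them as context."""
    def __init__(self):
        self.turns = []                        # list of (role, message)

    def add(self, role: str, message: str):
        self.turns.append((role, message))

    def as_context(self) -> str:
        """Render the history so it can be prepended to the next prompt."""
        return "\n".join(f"{role}: {msg}" for role, msg in self.turns)

memory = ChatMemory()
memory.add("user", "My name is Ada.")
memory.add("assistant", "Nice to meet you, Ada!")
memory.add("user", "What is my name?")
print(memory.as_context())
# Because the earlier turn is part of the context sent to the model,
# it can answer "Ada" even though the current message never says so.
```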

Chapter 7, Search and Recommendation Engines with LLMs, explores how LLMs can enhance recommendation systems, using both embeddings and generative models. We will discuss the definition and evolution of recommendation systems, learn how generative AI is impacting this field of research, and understand how to build recommendation systems with LangChain. By the end of this chapter, you will be able to create your own recommendation application and leverage state-of-the-art LLMs using LangChain as the framework.
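The embedding-based half of that approach boils down to ranking items by vector similarity to a user profile. A toy sketch with made-up three-dimensional vectors (real systems would use high-dimensional embeddings produced by a model):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Invented item embeddings, purely for illustration.
items = {
    "sci-fi novel": (0.9, 0.1, 0.0),
    "space opera":  (0.8, 0.2, 0.1),
    "cookbook":     (0.0, 0.1, 0.9),
}
user_profile = (0.85, 0.15, 0.05)   # e.g. an average of liked-item embeddings

# Recommend items in order of similarity to the user's taste.
ranked = sorted(items, key=lambda name: cosine(items[name], user_profile),
                reverse=True)
print(ranked[0])
```

A generative model can then take over the last mile, e.g. explaining in natural language why the top-ranked item was suggested.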

Chapter 8, Using LLMs with Structured Data, covers a great capability of LLMs: the ability to handle structured, tabular data. We will see how, with plug-ins and an agentic approach, we can use LLMs as a natural language interface between us and our structured data, reducing the gap between the business user and the structured information. To demonstrate this, we will build a database copilot with LangChain. By the end of this chapter, you will be able to build your own natural language interface for your data estate, combining unstructured and structured sources.
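The final step of such a database copilot is executing the SQL that the model produced and returning the rows to the user. The sketch below fakes the LLM step with a hard-coded query string; the table, data, and query are invented for illustration (the book uses LangChain agents for the actual orchestration):

```python
import sqlite3

# A tiny in-memory database standing in for the user's data estate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "EU", 120.0), (2, "US", 80.0), (3, "EU", 60.0)])

# User question: "What is the total order amount per region?"
# A hypothetical LLM-generated translation of that question into SQL:
generated_sql = ("SELECT region, SUM(amount) FROM orders "
                 "GROUP BY region ORDER BY region")

rows = conn.execute(generated_sql).fetchall()
print(rows)   # [('EU', 180.0), ('US', 80.0)]
```

In a production copilot, the generated SQL would of course be validated (read-only access, schema checks) before being run.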

Chapter 9, Working with Code, covers another great capability of LLMs: working with programming languages. In the previous chapter, we already saw a glimpse of this capability when we asked our LLM to generate SQL queries against a SQL database. In this chapter, we are going to examine the other ways LLMs can be used with code, from "simple" code understanding and generation to building applications that behave as if they were algorithms. By the end of this chapter, you will be able to build LLM-powered applications for your coding projects, as well as LLM-powered applications with natural language interfaces to work with code.

Chapter 10, Building Multimodal Applications with LLMs, goes beyond LLMs, introducing the concept of multimodality in building agents. We will see the logic behind combining foundation models from different AI domains (language, images, and audio) into one single agent that can adapt to a variety of tasks. You will learn how to build a multimodal agent with single-modal LLMs using LangChain. By the end of this chapter, you will be able to build your own multimodal agent, providing it with the tools and LLMs needed to perform various AI tasks.

Chapter 11, Fine-Tuning Large Language Models, covers the technical details of fine-tuning LLMs, from the theory behind it to hands-on implementation with Python and Hugging Face. We will delve into how you can prepare your data to fine-tune a base model on it, as well as discuss hosting...


About the author, Valentina Alto:

After completing her bachelor's degree in finance, Valentina Alto pursued a master's degree in data science in 2021. She began her professional career at Microsoft as an Azure Solution Specialist, and since 2022, she has been primarily focused on working with Data & AI solutions in the Manufacturing and Pharmaceutical industries. Valentina collaborates closely with system integrators on customer projects, with a particular emphasis on deploying cloud architectures that incorporate modern data platforms, data mesh frameworks, and applications of Machine Learning and Artificial Intelligence. Alongside her academic journey, she has been actively writing technical articles on Statistics, Machine Learning, Deep Learning, and AI for various publications, driven by her passion for AI and Python programming.


