Publisher: Mercury Learning and Information, 2024
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Seller: Books From California, Simi Valley, CA, USA
EUR 41.33
Quantity: 1 available
Paperback. Condition: Very Good.
EUR 47.88
Quantity: 1 available
Paperback. Condition: New. This book offers a thorough exploration of Large Language Models (LLMs), guiding developers through the evolving landscape of generative AI and equipping them with the skills to utilize LLMs in practical applications. Designed for developers with a foundational understanding of machine learning, this book covers essential topics such as prompt engineering techniques, fine-tuning methods, attention mechanisms, and quantization strategies to optimize and deploy LLMs. Beginning with an introduction to generative AI, the book explains the distinctions between conversational AI and generative models like GPT-4 and BERT, laying the groundwork for prompt engineering (Chapters 2 and 3). Some of the LLMs used for generating completions to prompts include Llama-3.1 405B, Llama 3, GPT-4o, Claude 3, Google Gemini, and Meta AI. Readers learn the art of creating effective prompts, covering advanced methods like Chain of Thought (CoT) and Tree of Thought prompts. As the book progresses, it details fine-tuning techniques (Chapters 5 and 6), demonstrating how to customize LLMs for specific tasks through methods like LoRA and QLoRA, and includes Python code samples for hands-on learning. Readers are also introduced to the transformer architecture's attention mechanism (Chapter 8), with step-by-step guidance on implementing self-attention layers. For developers aiming to optimize LLM performance, the book concludes with quantization techniques (Chapters 9 and 10), exploring strategies like dynamic quantization and probabilistic quantization, which help reduce model size without sacrificing performance. FEATURES: covers the full lifecycle of working with LLMs, from model selection to deployment; includes practical Python code samples for implementing prompt engineering, fine-tuning, and quantization; teaches readers to enhance model efficiency with advanced optimization techniques; includes companion files with code and images, available from the publisher. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
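As a rough illustration of two of the hands-on topics named in the description, self-attention layers and dynamic quantization, the following minimal sketch (not taken from the book; it assumes PyTorch and its torch.ao.quantization.quantize_dynamic API) defines a single-head scaled dot-product self-attention layer and then quantizes its linear projections to int8:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    # Minimal single-head scaled dot-product self-attention (illustrative only).
    def __init__(self, embed_dim):
        super().__init__()
        self.q = nn.Linear(embed_dim, embed_dim)
        self.k = nn.Linear(embed_dim, embed_dim)
        self.v = nn.Linear(embed_dim, embed_dim)
        self.scale = embed_dim ** 0.5

    def forward(self, x):                               # x: (batch, seq_len, embed_dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / self.scale   # (batch, seq_len, seq_len)
        weights = F.softmax(scores, dim=-1)             # attention weights per token
        return weights @ v                              # weighted sum of value vectors

model = SelfAttention(embed_dim=64)
x = torch.randn(2, 10, 64)                              # toy batch: 2 sequences of 10 tokens
print(model(x).shape)                                   # torch.Size([2, 10, 64])

# Post-training dynamic quantization: Linear weights are stored as int8
# and dequantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized(x).shape)                               # same output shape, smaller model

Dynamic quantization of the nn.Linear projections trades a small amount of numerical precision for a smaller model footprint, the kind of size-versus-performance trade-off the quantization chapters describe.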
Publisher: Mercury Learning and Information, 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Seller: California Books, Miami, FL, USA
EUR 47.93
Quantity: More than 20 available
Condition: New.
Publisher: Mercury Learning and Information, 1/1/2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Seller: BargainBookStores, Grand Rapids, MI, USA
EUR 65.02
Quantity: 5 available
Paperback or softback. Condition: New. Large Language Models for Developers: A Prompt-Based Exploration of LLMs. Book.
EUR 65.12
Quantity: More than 20 available
Paperback. Condition: New.
Publisher: Mercury Learning and Information, 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Seller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 50.36
Quantity: More than 20 available
Condition: New.
Seller: AussieBookSeller, Truganina, VIC, Australia
EUR 34.80
Quantity: 1 available
Paperback. Condition: New. Shipping may be from our Sydney, NSW warehouse or from our UK or US warehouse, depending on stock availability.
Publisher: Mercury Learning and Information, 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Seller: Kennys Bookshop and Art Galleries Ltd., Galway, GY, Ireland
EUR 62.72
Quantity: More than 20 available
Condition: New. 2025. Paperback.
Seller: Rarewaves.com USA, London, LONDO, United Kingdom
EUR 76.97
Quantity: More than 20 available
Paperback. Condition: New.
Publisher: Mercury Learning and Information, 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Seller: Kennys Bookstore, Olney, MD, USA
EUR 78.16
Quantity: More than 20 available
Condition: New. 2025. Paperback. Books ship from the US and Ireland.
Seller: CitiRetail, Stevenage, United Kingdom
EUR 56.15
Quantity: 1 available
Paperback. Condition: New. Shipping may be from our UK warehouse or from our Australian or US warehouses, depending on stock availability.
Publisher: Mercury Learning & Information, 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Seller: Revaluation Books, Exeter, United Kingdom
EUR 77.64
Quantity: 2 available
Paperback. Condition: Brand New. 1012 pages. 6.00 x 1.90 x 9.00 inches. In stock.
EUR 67.05
Quantity: More than 20 available
Paperback. Condition: New.
EUR 71.38
Quantity: More than 20 available
Paperback. Condition: New.
Seller: PBShop.store UK, Fairford, GLOS, United Kingdom
EUR 52.95
Quantity: More than 20 available
Paperback. Condition: New. New book. Delivered from our UK warehouse in 4 to 14 business days. This book is printed on demand. Established seller since 2000.
Seller: PBShop.store US, Wood Dale, IL, USA
EUR 62.68
Quantity: More than 20 available
Paperback. Condition: New. New book. Shipped from the UK. This book is printed on demand. Established seller since 2000.
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 58.95
Quantity: 2 available
Paperback. Condition: New. This item is printed on demand; it takes 3-4 days longer. 1046 pp. English.
Seller: moluna, Greven, Germany
EUR 61.50
Quantity: More than 20 available
Condition: New. This is a print-on-demand item and will be printed for you after your order. Oswald Campesato (San Francisco, CA) specializes in Deep Learning, Python, Data Science, and Generative AI. He is the author/co-author of over forty-five books, including Google Gemini for Python, Large Language Models, and GPT-4 for Developers (all Mercury .
Publisher: Mercury Learning And Information, De Gruyter, January 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Seller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 58.95
Quantity: 1 available
Paperback. Condition: New. This item is printed on demand (print-on-demand title). Walter de Gruyter, Genthiner Straße 13, 10785 Berlin. 1046 pp. English.
Publisher: Mercury Learning And Information, 2025
ISBN 10: 1501523562 ISBN 13: 9781501523564
Language: English
Seller: AHA-BUCH GmbH, Einbeck, Germany
EUR 65.89
Quantity: 1 available
Paperback. Condition: New. Printed after ordering.