Mastering Large Language Models with Python: Unleash the Power of Advanced Natural Language Processing for Enterprise Innovation and Efficiency Using Large Language Models (LLMs) with Python

Author: Raj Arun R
Publication year: 2024
Publisher: Orange Education Pvt Ltd, AVA™
Page count: 554
File size: 8.7 MB
File format: PDF
Added by: codelibs

Dedicated To....4

About the Author....5

About the Technical Reviewers....6

Acknowledgements....8

Preface....9

Downloading the code bundles and colored images....11

Errata....11

Table of Contents....13

Unfolding the Journey of Language Models....38

Influence of Large Language Models....40

Understanding Transformers....41

Transformers in Large Language Models....42

Attention Mechanisms....44

Transformers and Large Language Models....49

KM Scaling Law....50

Chinchilla Scaling Law....51

Key Techniques for Large Language Models....51

Alignment Tuning and Tools Manipulation....52

Tools Manipulation....54

Publicly Available Model Checkpoints or APIs....55

Collecting Data....56

Configuring LLMs in Detail....59

Emergent Abilities of Large Language Models....60

Exploring the Inner Workings of LLMs....61

Confluence of ICL and CoT....62

CoT Prompt Design....63

Assessment Yardsticks for Large Language Models....64

In-depth Analysis of the Capabilities of LLMs....65

References....67

Open-Source versus Proprietary Large Language Models....69

Risks and Drawbacks of Open-Source LLMs....70

Security Vulnerabilities in Open-Source LLMs....71

StableLM: Empowering Language Generation with Stability AI....72

BERT: Advancing Language Representations with Bidirectional Encoder Representations from Transformers....74

BLOOM: Empowering Open Science with the Largest Multilingual Language Model....75

RedPajama: Advancing Open-Source Language Models....77

Falcon-40B: Empowering Open-Source Language Models....78

StarCoder: Empowering Developers with Code Generation....79

Replit-Code: Empowering Developers with Intelligent Code Completion....81

GPT-Neo: Empowering Open and Collaborative Research in Language Models....82

Galactica: Revolutionizing Scientific Knowledge with Meta AI....83

Segment Anything Model (SAM): Advancing Image Segmentation with Meta AI....85

Dolly: Empowering Natural Language Processing with Databricks....86

GPT-4 Limited Beta....94

GPT-3....94

GPT-3.5....95

DALL·E Beta....95

Whisper Beta....95

Embeddings....95

Moderation....95

Codex....96

Accessing GPT Models via OpenAI API....96

Function Calling with OpenAI....99

Completions API....100

Image Generation....103

Embedding Model: Understanding Embeddings....107

Whisper: OpenAI’s Speech-to-Text Model....108

Moderation Model: Ensuring Content Compliance....109

Models....112

Exploring Cohere Playground....113

Selecting the Right Model Size....115

Security Concerns when Using API Inferencing with Sensitive Data....123

Natural Language Processing....125

Audio....130

Computer Vision....131

Code Overview — Hugging Face APIs in Action....133

Function Signature....133

Setting Up....133

Task Selection....134

Sending the Request....134

Example Usage....135

Installation....136

Authentication....136

Models....137

Chat....138

Completion....139

Edit....141

Images....142

Embeddings....147

Audio....148

Moderation....149

Installation....151

Authentication....151

Text Classification....170

Text Generation....170

Text Summarization....170

Required Knowledge and Tools....171

Setting up Google Sheets and Google Apps Script....171

Getting the Cohere API key....172

Explanation of the Boilerplate code....175

Walkthrough of the code and its structure....177

Text Classification....180

Text Generation....181

Text Summarization....182

Expected results and how to interpret them....185

Understanding the Use Case: Movie Recommendations....191

Background of Sentence Transformers /all-MiniLM-L12-v1....192

Vector Databases: An Overview and Importance....193

Environment Preparation in Google Colab....199

Data Preprocessing for Transformers....202

Choosing the Right Transformer Model....203

Defining Movie Data Loading and Vector Encoding....204

Defining the Vector Database Indexing Process....205

Defining the Search Function....207

The Load and Index and Search Functions....208

Wrapper Functions....209

Summarizing the Use of Transformers and Vector Databases....212

Future Improvements and Scalability Considerations....213

Benefits of Vector Databases over Traditional Databases....221

Tech-Stack Walkthrough and Explanation....223

Pre-requisites....223

Implementation Steps....226

Detailed Code Walkthrough....227

Tech-Stack Walkthrough and Explanation....231

Understanding FAISS and Pinecone....231

Pre-requisites....233

Implementation Steps....233

Detailed Code Walkthrough....234

Benefits and Importance of LLMs....241

Types of Quantization Techniques....242

Specialized Quantization Strategies for LLMs....242

Quantization Using....250

Integration with Hugging Face Transformers....251

Quantization Using GPTQ....257

Foundation LLM....267

Pre-trained LLM....267

Fine-Tuned LLM....267

Faster Training and Deployment....268

Better Performance on Specific Domains....268

Requires Less Data for Fine-Tuning....268

Lower Risk....269

Access to State-of-the-Art Models....269

More Data, More Knowledge....269

Model Scale and Architecture Matter....269

Diminishing Returns....270

Balancing Corpus Size with Compute Resources....270

Corpus Relevance....270

Multi-domain Versatility....270

Understanding the Dataset....271

Choosing the Right Pre-trained Model....271

Targeted Parameter Fine-Tuning....271

Customizing the Training Objective....271

In-Context Learning and Other Advancements....272

Tips for Creating an Instruction Dataset....272

GPU Architecture: Core Components....274

Programming GPUs....275

GPUs in LLMs....275

Selecting the Right GPU for LLM Training....275

GPU for Model Inference....276

Key Factors to Consider....277

Task-Specific Recommendations....278

General Guidelines....278

Token Economics....280

Art of Prompt Optimization....280

GPT Versions Cost Ratio....280

Embedding and Fine-Tuning Costs....281

Training and Fine-Tuning Costs....281

GPU Memory Requirements....281

Areas for Innovation....281

Evaluation Metrics....282

Evaluating General NLP Tasks....285

Challenges....287

Implementation Walkthrough....294

Environment Preparation for DeepSpeed....323

Implementation Walkthrough....337

Data Preparation....349

Model Training....349

Model Evaluation....350

Model Deployment....350

Model Monitoring....350

Importance of Data Management....351

Data Collection and Preprocessing....351

Data Labeling and Annotation....352

Data Storage, Organization, and Versioning....352

Traditional Development Process....352

Platform LLMOps Approach....353

Computational Resources....354

Transfer Learning....355

Human Feedback....355

Hyperparameter Tuning....355

Performance Metrics....355

Prompt Engineering....356

Building LLM Chains or Pipelines....356

Exploratory Data Analysis (EDA)....356

Data Preparation and Prompt Engineering....357

Model Fine-Tuning....357

Model Review and Governance....358

Model Inference and Serving....358

General Best Practices....359

Efficiency....360

Scalability....360

Risk Reduction....360

Enhanced Customer Experience....361

Large Model Size....362

Complex Datasets....362

Continuous Monitoring and Evaluation....362

Scalability....362

Model Optimization....362

Infrastructure Optimization....363

Security and Privacy....363

Integration....363

Automation....363

Monitoring....363

Validation....363

Latency Considerations....364

Cost Management....364

Resource Management....364

Deployment Options: Cloud-based or On-premise....365

Deployment Strategies....366

Data Privacy and Protection....367

Data Encryption and Access Controls....368

Model Security....368

Regulatory Compliance....368

Prohibit Misuse....369

Thoughtful Collaboration with Stakeholders....370

Output Validation....371

Prepare for DDoS Attacks....371

Building User Limits....371

Care About Latency....372

Avoid Retrofitting Logs and Monitoring Records for LLMs....372

Implement Data Privacy....372

Costs....373

Optimization....373

Trade-offs....373

Checklist for LLMOps Deployment....379

DataLoader....400

Summarizer....401

MLflowHandler....404

Wrapping Up — The Pipeline....413

Prompt Shape....422

Manual Template Engineering....422

Answer Shape....425

Answer Space Design Methods....425

Prompt Ensembling....426

Prompt Augmentation....427

Prompt Composition....427

Prompt Decomposition....427

Training Settings....428

Parameter Update Methods....428

Knowledge Probing....431

Classification-based Tasks....432

Information Extraction....432

“Reasoning” in NLP....433

Question Answering....433

Text Generation....433

Ensemble Learning....434

Few-Shot Learning....434

Larger-Context Learning....434

Query Reformulation....434

QA-based Task Formulation....435

Controlled Generation....435

Supervised Attention....435

Data Augmentation....435

Prompt Design....436

Answer Engineering....437

Selection of Tuning Strategy....438

Multiple Prompt Learning....438

Choosing Optimal Pre-trained Models....440

Analyzing Prompting Theoretically and Empirically....440

Exploring Prompts’ Transferability....440

Calibration of Prompting Methods....441

Combination of Different Paradigms....441

Three Pillars of Prompt Anatomy....445

Significance of Understanding Prompt Anatomy....447

Advanced Techniques....448

Controlling Inconsistencies: Temperature and Self-Consistency....449

Prompt Pattern Catalog....450

Meta Language Creation Pattern....454

Output Automater Pattern....456

Understanding Flipped Interaction Pattern....457

Persona Pattern....459

Question Refinement Pattern....460

Alternative Approaches Pattern....461

Cognitive Verifier Pattern....462

Fact Checklist Pattern....464

Template Pattern....465

Infinite Generation Pattern....466

Visualization Generator Pattern....467

Game Play Pattern....469

Reflection Pattern....470

Refusal Breaker Pattern....471

Context Manager Pattern....472

Recipe Pattern....474

Separate Instructions and Context....475

Be Specific and Detailed....476

Articulate Desired Output Format Through Examples....477

Zero-Shot, Few-Shot, and Fine-Tuning....478

Avoid Fluffy Descriptions....479

Being Explicit About What to Do....479

Code Generation Specifics....480

Text-based Conversational AI....484

Text-based Image Synthesis....485

The Power of Learning from Human Input (RLHF)....489

Guardrails — Protective Measures....489

Intrinsic Issues....490

Deliberate Attacks....491

Unintended Glitches....492

Evaluation Stage....493

Runtime Monitoring....494

Ethical Principles and AI Regulations....495

Red Teaming....495

Manipulating LLMs....495

Checking the Checkers: Verification of NLP Models....497

Interval Bound Propagation: Establishing the Fence....498

Navigating Uncertainty with Abstract Interpretation....498

Bracing for Change with Randomized Smoothing....498

Black-Box Verification: Cracking the Code....499

Assessing the Resilience of LLMs....499

A Case for Smaller Models....499

Runtime Monitors: The Guardians of LLMs....499

Detecting the Deviations: Monitoring Out-of-Distribution....500

Guarding Against Output Failures....500

Perspective....501

Regulate or Ban?....502

Responsible AI Principles....502

Transparency and Explainability....502

Introduction to Symbolic Systems and Their Capabilities....506

Introduction to Symbolic Systems and their Capabilities: A Deep Dive into Cyc....506

The Untapped Potential of Combining Both for Trustworthiness....507

Identifying Gaps and Proposing Extensions to the Desiderata....509

Examination of the Desiderata....509

The Role of Semantic Amplification in the Trust-Enhanced Generative Framework (TEGF)....518

Statistical Language Model (SLM)....518

Symbolic Reasoning Engine....519

Trustworthiness Layer....520

Explainability Module....521

Data Provenance Tracker....521

Contextual Understanding Module....522

Component Interactions and Trust Propagation....523

Recommendations for Enhanced Cohesion....524

The Mechanics of the Provenance Layer....527

Real-World Implications: A Multi-Sector Focus....527

Case Study: Healthcare Complex Diagnoses....528

User Experience....528

Security Aspects....528

Future Developments....528

Components of TIGAI....529

TIGAI: Complementary or Contrasting Aspects with TEGF....532

Case Studies....533

Technical Depth....534

Future Scope....534

User Experience....534

Security and Compliance....534

Performance Metrics....535

Data Privacy and Consent: The Double-Edged Sword....535

Transparency and Accountability: The Pillars of Ethical AI....536

Potential for Misuse: The Dark Side of Trustworthiness....536

Ethical Guidelines for TEGF in Healthcare....537

Future Outlook and Public Policy....537

“Mastering Large Language Models with Python” is an indispensable resource that offers a comprehensive exploration of Large Language Models (LLMs), providing the essential knowledge to leverage these transformative AI models effectively. From unraveling the intricacies of LLM architecture to practical applications like code generation and AI-driven recommendation systems, readers will gain valuable insights into implementing LLMs in diverse projects.

Covering both open-source and proprietary LLMs, the book delves into foundational concepts and advanced techniques, empowering professionals to harness the full potential of these models. Detailed discussions on quantization techniques for efficient deployment, operational strategies with LLMOps, and ethical considerations ensure a well-rounded understanding of LLM implementation.

Through real-world case studies, code snippets, and practical examples, readers will navigate the complexities of LLMs with confidence, paving the way for innovative solutions and organizational growth. Whether you seek to deepen your understanding, drive impactful applications, or lead AI-driven initiatives, this book equips you with the tools and insights needed to excel in the dynamic landscape of artificial intelligence.

