£44.24

50 ML projects to understand LLMs: Investigate transformer mechanisms through data analysis, visualization, and experimentation

Price data last checked 10 days ago


Price History & Forecast

Last 50 days • 50 data points

[Chart: historical prices and forecast, roughly £42.03–£46.45, 24 February – 14 April 2026]

Price Distribution

Price distribution over 50 days • 1 price level

[Histogram: days at each price level; £44 for all 50 days]

Price Analysis

Most common price: £44 (50 days, 100.0%)

Price range: £44 - £44

Price levels: 1 distinct price over 50 days

Description

Most books about LLMs teach you how to build language models from scratch or deploy them via APIs. This book does something different: it uses guided machine-learning projects to teach you how to understand, visualize, and investigate LLMs, including GPT and BERT. Through 50 hands-on, guided projects solved in Python, you will investigate the internal mechanisms of large language models by treating their hidden states, attention patterns, and embeddings as data to analyze. Rather than accepting LLMs as black boxes, you will open them up, examine what's inside, and run experiments to understand why they behave the way they do.

All projects are based on Python (using libraries such as NumPy, PyTorch, statsmodels, scikit-learn, Matplotlib, Pandas, and Seaborn) and come with full-solution and partial-solution notebook files, so you can practice and improve your skills in data science, deep learning, data visualization, and scientific and statistical coding.

What makes this book unique: Each project is built around three learning goals: machine-learning techniques, LLM mechanisms, and Python coding with data visualization. This is not a dense theoretical textbook; it's hands-on, practical, and project-oriented. You will learn how to measure, visualize, and manipulate the internal components of LLMs (including embeddings, transformer outputs, hidden states, attention, and MLP layers) directly. Projects range from analyzing tokenization and embedding geometry to dissecting attention heads, probing MLP neurons, and running causal experiments that reveal how information flows through a model during inference.
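To give a flavor of what "treating embeddings as data to analyze" can look like, here is a minimal sketch (not taken from the book; the vectors and word labels are purely illustrative toy data) of measuring cosine similarity between embedding vectors with NumPy:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 4-dimensional "embeddings" standing in for real model vectors
king  = np.array([0.9, 0.1, 0.7, 0.3])
queen = np.array([0.8, 0.2, 0.7, 0.4])
apple = np.array([0.1, 0.9, 0.2, 0.8])

print(cosine_similarity(king, queen))  # high: semantically related words
print(cosine_similarity(king, apple))  # lower: unrelated words
```

In real analyses the vectors would come from a model's embedding matrix or hidden states, often with hundreds or thousands of dimensions, but the metric is the same.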
Topics covered include:
- Tokenization schemes and their statistical properties
- Embedding spaces: cosine similarity, semantic axes, and analogy vectors
- Output logits, softmax distributions, perplexity, and language biases
- Layer-by-layer transformer dynamics and dimensionality
- Attention mechanisms: QKV weights, attention scores, head ablation, and activation patching
- MLP subblocks: neuron tuning, mutual information, subspace analysis, and statistics-based causal manipulations
- Logit lens, indirect object identification, and causal tracing

Who this book is for: This book is for data scientists, ML engineers, and researchers who want to go beyond a surface-level understanding of LLMs. Prior Python experience is required. Familiarity with machine learning or deep learning is helpful but not required; techniques are introduced as they arise throughout the projects.

Practical and accessible: All code runs on Google Colab, so there is nothing to install and no local configuration required. Each of the 50 projects comes with two Jupyter notebooks: one with hints and incomplete code for guided practice, and one with a complete working solution. All code is freely available on GitHub at https://github.com/mikexcohen/ML4LLM_book

Mike X Cohen, PhD, is a former neuroscience professor and full-time educator with 25 years of experience teaching machine learning, mathematics, and data science. His courses are bestsellers on Udemy and his textbooks are published by O'Reilly, MIT Press, and independently.
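The attention topics above (QKV weights and attention scores) refer to the standard scaled dot-product attention used in transformers. As a self-contained illustration (a generic NumPy sketch, not code from the book; the matrix shapes are arbitrary), each row of the attention-weight matrix is a probability distribution over positions:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Standard transformer attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # raw attention scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))  # 3 query positions, d_k = 8
K = rng.normal(size=(3, 8))
V = rng.normal(size=(3, 8))

out, w = scaled_dot_product_attention(Q, K, V)
print(w.sum(axis=-1))  # each row of attention weights sums to 1
```

Analyses such as head ablation or activation patching intervene on exactly these weight matrices inside a real model and observe how the output changes.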

Product Specifications

Format
Paperback
Domain
Amazon UK
Release Date
18 February 2026
Listed Since
18 February 2026

Barcode

No barcode data available