Relevance Matters

Latest

Beyond Absolute Positional Embeddings with Relative and Rotary Methods

This post traces how positional embeddings evolved from absolute to relative to rotary forms, and shows how each approach helps transformers capture sequence order and token relationships while balancing flexibility, efficiency, and model complexity.

By suthee · 15 Aug 2025

Inside Transformers: Scaled Dot-Product Attention & the Role of Position

Dive into the heart of transformer layers with a step-by-step look at scaled dot-product attention and discover how adding positional embeddings lets models capture both meaning and order.

By suthee · 08 Aug 2025

BM25 Demystified: A Simple, Robust Baseline for Modern Retrieval

BM25 remains the go-to search baseline: this post shows how term-frequency saturation, length normalization, and its probabilistic foundation keep it robust.

By suthee · 06 Aug 2025

Relevance Matters

Posts on search, ranking, and machine learning for when relevance actually matters.