====== Positional Encoding ======

Since [[concepts:softmax_attention|softmax attention]] is permutation-invariant, positional encodings inject sequence-order information into [[concepts:transformer|Transformer]] inputs. Variants include sinusoidal (the original), learned, and rotary (RoPE) encodings.

Positional encodings are not directly modified by [[papers:attention_residuals|Attention Residuals]], but RoPE interacts with [[concepts:linear_attention|linear attention]] in [[concepts:kimi_linear|Kimi Linear]].

See also: [[concepts:transformer]], [[concepts:softmax_attention]], [[concepts:kimi_linear]]
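
A minimal sketch of the sinusoidal variant, using the formula from the original Transformer paper (PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos of the same angle). The function name and use of NumPy are illustrative, not taken from any particular codebase.

<code python>
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal position encodings; assumes d_model is even."""
    positions = np.arange(seq_len)[:, None]        # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]       # (1, d_model / 2)
    # Angle for each (position, dimension-pair): pos / 10000^(2i / d_model)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                   # even indices get sine
    pe[:, 1::2] = np.cos(angles)                   # odd indices get cosine
    return pe

# Typically added to token embeddings before the first Transformer layer:
# x = token_embeddings + sinusoidal_positional_encoding(seq_len, d_model)
</code>

Learned encodings replace this fixed table with a trainable embedding per position, while RoPE instead rotates query/key vectors by a position-dependent angle inside the attention computation.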