LLM outputs are unreliable because the context is polluted. When context is assembled from multiple sources, 30-40% of it is semantically redundant: the same information from docs, code, memory, and tools competing for ...
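
To make the redundancy claim concrete, here is a minimal sketch of how one might measure semantic overlap across assembled context chunks. It assumes the sentence-transformers package; the model name, the 0.85 similarity threshold, and the `redundancy_ratio` helper are illustrative choices, not part of the original text.

```python
# Sketch: estimate what fraction of context chunks repeat earlier chunks,
# using embedding cosine similarity as a proxy for semantic redundancy.
from itertools import combinations

from sentence_transformers import SentenceTransformer, util

# Illustrative model choice; any sentence-embedding model would do.
model = SentenceTransformer("all-MiniLM-L6-v2")


def redundancy_ratio(chunks: list[str], threshold: float = 0.85) -> float:
    """Fraction of chunks that are near-duplicates of an earlier chunk."""
    if not chunks:
        return 0.0
    embeddings = model.encode(
        chunks, convert_to_tensor=True, normalize_embeddings=True
    )
    redundant = set()
    for i, j in combinations(range(len(chunks)), 2):
        # If a later chunk is highly similar to an earlier one, count it.
        if util.cos_sim(embeddings[i], embeddings[j]).item() >= threshold:
            redundant.add(j)
    return len(redundant) / len(chunks)


# Hypothetical example: the same fact surfaced by docs, code, and a tool call.
context = [
    "The retry limit for the API client is 3 attempts.",           # docs
    "# MAX_RETRIES = 3  -- API client retries up to three times",  # code
    "Tool result: client config shows max_retries=3",              # tool output
    "User prefers responses in French.",                           # memory
]
print(f"redundant fraction: {redundancy_ratio(context):.0%}")
```

In this toy example three of the four chunks carry the same retry-limit fact, so roughly half the assembled context adds no new information while still consuming attention and tokens.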