Overview
AI research wiki focused on LLMs, agentic workflows, and applied AI.
Current State (2026-05-13)
Three sources ingested. Two clusters emerging: AI interaction design (ambient interfaces, rich output formats) and AI security (offensive use of LLMs by threat actors).
Emerging Themes
Friction reduction as design principle — Dominant trend in the interaction cluster: eliminate steps between user intent and AI action. DeepMind’s AI Pointer brings AI to the user’s cursor; HTML output eliminates the formatting gap between AI work and human comprehension.
Format determines comprehension — From html-effectiveness: the same information presented in different formats is not equally comprehensible; format is part of the message. Connects to AI Pointer’s “Transform Pixels into Entities” — both are about making AI output structurally richer than raw text.
Progressive disclosure — Both sources support hiding complexity until needed: AI Pointer via contextual shorthand, HTML via collapsible sections and hover definitions.
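Both techniques above map onto standard HTML elements. A minimal sketch (the example content is invented for illustration): `<details>`/`<summary>` for collapsible sections, and the `title` attribute on `<abbr>` for hover definitions.

```html
<!-- Collapsible section: body is hidden until the reader chooses to expand it -->
<details>
  <summary>Methodology</summary>
  <p>Full methodology details go here, collapsed by default.</p>
</details>

<!-- Hover definition: the title attribute surfaces the expansion on demand -->
<p>The report includes a
  <abbr title="Common Vulnerability Scoring System">CVSS</abbr>
  score for each finding.</p>
```

Both are plain HTML with no scripting required, which is why AI-generated HTML output can deliver progressive disclosure essentially for free.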
AI as force multiplier for adversaries — From ai-zero-day-exploit-2026: AI is compressing the exploit development lifecycle. Nation-state actors (China, North Korea, Russia) have operationalized LLMs across vulnerability discovery, exploit generation, malware development, and operational support. The 2026-05-11 disclosure marks the first confirmed in-the-wild AI-generated zero-day. AI fingerprints (educational docstrings, hallucinated CVSS scores) currently enable attribution — but this window is closing.
Gaps
- No sources yet on LLMs / prompt engineering fundamentals
- No comparison of ambient AI approaches across vendors
- Security cluster needs defensive counterpart — AI-assisted detection, attribution, and response