LiteLLM Hack Shows AI Infrastructure Is Fragile
LiteLLM's supply chain attack shows why modern AI infrastructure is still dangerously fragile and far too trusted.
Wikipedia officially banned its 260,000 human editors from using AI to write articles. Only typo fixes and formatting tweaks are still allowed. Here is what it means and why it matters.
Wikipedia bans AI-generated articles with new policy targeting LLM content. Editors can still use AI for copy editing and translation, but not for writing articles.
GPT-5.4 matched Gemini 3.1 within 0.01 points. Nine models shipped in March, seven open-weight. The leaderboard race just became irrelevant.
DeepSeek V4 has missed every predicted launch window. As of March 23, 2026, no model exists, no API ID, no announcement. Here is what we know and what the silence means.
Terence Tao frames AI's role in mathematics as jumping versus climbing. LLMs make probabilistic leaps, but the walls of mathematics still require human judgment and sustained reasoning.
New research reveals 97-99% success rates for LLM jailbreak attacks, with large reasoning models now capable of autonomously planning attacks against other AI systems.
20 new uncensored LLMs hit the market in March 2026. From 3GB edge models to 42B MoE giants, here's your complete guide.
A trillion-parameter AI model appeared on OpenRouter with zero attribution. Early evidence points to DeepSeek V4, but the truth remains elusive.
MIT researchers developed instance-adaptive scaling: LLMs that decide for themselves how much compute they need. Up to 50% more efficient with the same accuracy.
Anthropic overtook OpenAI as the #1 enterprise LLM provider with 40% market share. A dramatic shift in just 2 years.
Google's Gemini 3.1 Flash-Lite offers API access at just $0.025 per million tokens with 2.5x faster inference, making it the most cost-effective option for scalable AI applications.