<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>AI Digest</title>
    <link>https://ai-digest.kerby.pro/en/</link>
    <atom:link href="https://ai-digest.kerby.pro/en/feed.xml" rel="self" type="application/rss+xml"/>
    <description>AI releases, tools, research, and industry: a daily roundup with an emphasis on source verifiability.</description>
    <language>en</language>
    <copyright>© 2026 Alexei Lukin · CC BY 4.0</copyright>
    <lastBuildDate>Sat, 02 May 2026 10:41:35 +0000</lastBuildDate>
    
      <item>
        <title>OpenCode v1.14.31 — interactive Azure setup and permission inheritance for task sessions</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-05-01-opencode-v1-14-31/</link>
        <guid isPermaLink="false">2026-05-01-opencode-v1-14-31</guid>
        <pubDate>Fri, 01 May 2026 00:00:00 +0000</pubDate>
        <dc:creator>SST</dc:creator>
        <category>tools</category>
        <category>opencode</category><category>sst</category><category>coding-agent</category><category>cli</category>
        <description><![CDATA[SST released opencode v1.14.31 (May 1, 2026). It adds an interactive Azure setup that prompts for the resource name and saves the API key. Task child sessions now inherit permissions from the parent session. Clearer errors are surfaced for invalid remote MCP URLs. A Desktop app crash when restoring sessions with missing models has been fixed.

Why it matters: One of the few open-source coding agents actively keeping pace with Claude Code and Codex on features; releases ship same-day.]]></description>
      </item>
    
      <item>
        <title>Baidu releases ERNIE-5.1-Preview — #1 Chinese model on LMArena</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-05-01-ernie-5-1-preview/</link>
        <guid isPermaLink="false">2026-05-01-ernie-5-1-preview</guid>
        <pubDate>Fri, 01 May 2026 00:00:00 +0000</pubDate>
        <dc:creator>Baidu</dc:creator>
        <category>models-llm</category>
        <category>baidu</category><category>ernie</category><category>china</category><category>lmarena</category><category>preview</category>
        <description><![CDATA[On April 30, 2026, Baidu unveiled ERNIE-5.1-Preview. The model debuted at #13 on the global LMArena Text Arena leaderboard with a score of 1476, becoming the top-ranked Chinese model and overtaking DeepSeek-V4-Pro. According to Baidu, the model uses roughly one-third of the total parameters and half the active parameters of ERNIE-5.0, at approximately 6% of the pre-training cost of comparable models. The full ERNIE 5.1 release is expected at the Baidu Create conference.

Why it matters: Confirms the sharp acceleration of the Chinese race following DeepSeek V4: Baidu claims leadership among Chinese labs on LMArena at substantially lower training cost.]]></description>
      </item>
    
      <item>
        <title>OpenAI Codex CLI 0.128.0 — persisted /goal workflows and expanded permission profiles</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-05-01-codex-cli-0-128-0/</link>
        <guid isPermaLink="false">2026-05-01-codex-cli-0-128-0</guid>
        <pubDate>Fri, 01 May 2026 00:00:00 +0000</pubDate>
        <dc:creator>OpenAI</dc:creator>
        <category>tools</category>
        <category>codex</category><category>openai</category><category>coding-agent</category><category>cli</category>
        <description><![CDATA[OpenAI shipped a stable release of Codex CLI v0.128.0 following a series of 0.126.x alphas. The headline feature is persisted /goal workflows: long-running goals are stored via the app-server API, exposed as model tools, support runtime continuation, and have dedicated TUI controls. Permission profiles have been expanded with built-in defaults and sandbox-profile selection directly from the CLI; the --full-auto flag is deprecated in favor of explicit permission profiles. Plugin workflows are improved (marketplace install, remote-bundle cache), and external-agent session import with background import has been added. MultiAgentV2 gained configurable thread caps and wait times.

Why it matters: Persisted /goal turns Codex CLI from a stateless helper into a platform for long-lived autonomous tasks, competing with Claude Code and Cursor for background agents.]]></description>
      </item>
    
      <item>
        <title>Claude Code 2.1.126 — project purge, model picker via gateway, security fixes</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-05-01-claude-code-v2-1-126/</link>
        <guid isPermaLink="false">2026-05-01-claude-code-v2-1-126</guid>
        <pubDate>Fri, 01 May 2026 00:00:00 +0000</pubDate>
        <dc:creator>Anthropic</dc:creator>
        <category>tools</category>
        <category>claude-code</category><category>anthropic</category><category>coding-agent</category><category>cli</category>
        <description><![CDATA[Anthropic released Claude Code 2.1.126. A new `claude project purge [path]` command fully wipes state (transcripts, tasks, file history, config). The model picker now pulls the model list from a compatible gateway&#39;s /v1/models endpoint when ANTHROPIC_BASE_URL is set. The --dangerously-skip-permissions flag now bypasses confirmation prompts even for writes to protected paths (.claude/, .git/, .vscode/). Regressions in allowManagedDomainsOnly/allowManagedReadPathsOnly have been fixed, and images larger than 2000px are now automatically downscaled on paste.

Why it matters: A cumulative fix release that closes several security regressions in the permission allowlist and streamlines work through enterprise gateways.]]></description>
      </item>
    
      <item>
        <title>AutoResearchBench — a benchmark for autonomous scientific literature search by AI agents</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-05-01-autoresearchbench/</link>
        <guid isPermaLink="false">2026-05-01-autoresearchbench</guid>
        <pubDate>Fri, 01 May 2026 00:00:00 +0000</pubDate>
        <dc:creator>BAAI</dc:creator>
        <category>research</category>
        <category>agents</category><category>benchmark</category><category>rag</category><category>evaluation</category>
        <description><![CDATA[A new benchmark has been published for evaluating agents on autonomous scientific literature search and review. It includes two complementary setups: Deep Research (multi-step investigation leading to a specific target paper) and Wide Research (exhaustive collection of publications matching given criteria, scored by IoU). Even the strongest LLM agents reach only 9.39% accuracy on Deep Research and 9.31% IoU on Wide Research.

Why it matters: Closes a methodological gap between general-purpose web agents and the actual work of a researcher; the ~9% figures set a ceiling against which progress on research agents can be measured throughout 2026.]]></description>
      </item>
    
      <item>
        <title>Yandex Commerce Protocol: first retailers launch sales via Alice AI</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-30-yandex-commerce-protocol-launch/</link>
        <guid isPermaLink="false">2026-04-30-yandex-commerce-protocol-launch</guid>
        <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Yandex</dc:creator>
        <category>industry</category>
        <category>russia</category><category>agents</category><category>partnership</category><category>yandex</category>
        <description><![CDATA[Yandex disclosed the first partners of the Yandex Commerce Protocol (YCP) — a standard for integrating online stores with AI scenarios in Alice AI, Search, and Yandex Ritm. Stockmann, restore:, the pharmacy chains Gorzdrav and 36.6, telecom operator Beeline, the brand The Act, and a number of other retailers are going live with sales directly from the Alice AI chat; over 200 large online retailers and brands have begun YCP integration, and more than 1,600 additional stores have applied. The technology lets shoppers proceed to checkout directly from the assistant dialog without visiting the merchant&#39;s website — Alice AI acts as a transactional AI agent on top of partner catalogs.

Why it matters: YCP is Yandex&#39;s bid to be the AI-commerce standard in the Russian-language internet and one of the first large-scale launches of an LLM assistant as a direct sales channel in Russia. If the protocol catches on, it shifts the role of voice and chat assistants from informational to transactional.]]></description>
      </item>
    
      <item>
        <title>TIDE: cross-architecture distillation for diffusion LLMs</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-30-tide-diffusion-llm-distillation/</link>
        <guid isPermaLink="false">2026-04-30-tide-diffusion-llm-distillation</guid>
        <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Peking University</dc:creator>
        <category>research</category>
        <category>inference</category><category>paper</category><category>china</category>
        <description><![CDATA[TIDE is a distillation framework that transfers knowledge between different architectures for diffusion LLMs. It comprises three components: TIDAL (adaptive distillation strength by timestep), CompDemo (context via mask splitting), and Reverse CALM (cross-tokenizer objective). Teachers are a dense 8B and a 16B MoE; the student is a 0.6B diffusion model; the student&#39;s HumanEval score is 48.78 versus 32.3 for an AR baseline of the same size.

Why it matters: Diffusion LLMs remain a marginal but actively growing alternative to autoregressive models. Cross-architecture distillation from dense and MoE teachers into a diffusion student is a rare combination, and the notable jump on code benchmarks at 0.6B parameters makes the idea practically interesting for on-device use.]]></description>
      </item>
    
      <item>
        <title>Recursive Multi-Agent Systems: agent communication in latent space</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-30-recursive-multi-agent-systems/</link>
        <guid isPermaLink="false">2026-04-30-recursive-multi-agent-systems</guid>
        <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Stanford University</dc:creator>
        <category>research</category>
        <category>agents</category><category>reasoning</category><category>paper</category><category>us</category>
        <description><![CDATA[RecursiveMAS replaces text exchange between agents with communication via latent representations connected by a lightweight RecursiveLink module, and trains the whole system jointly using a dedicated optimization algorithm. Across 9 benchmarks (math, science, medicine, search, code) the authors report +8.3% average accuracy, a 1.2–2.4x speedup in end-to-end inference, and a 34.6–75.6% reduction in token consumption versus text-based multi-agent baselines.

Why it matters: 176 upvotes on HF Daily. The text interface between agents is a bottleneck both in latency and in tokens; latent communication plus joint training is an attempt to move MAS out of the "several LLMs glued together with prompts" mode into a unified system.]]></description>
      </item>
    
      <item>
        <title>Programming with Data: test-driven data engineering for self-improving LLMs</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-30-programming-with-data/</link>
        <guid isPermaLink="false">2026-04-30-programming-with-data</guid>
        <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>OpenDataLab</dc:creator>
        <category>research</category>
        <category>paper</category><category>benchmark</category><category>alignment</category>
        <description><![CDATA[The authors reframe data engineering for LLMs as software engineering: training data = source code of the model&#39;s behavioral spec, training = compilation, benchmarks = unit tests. If structured knowledge is extracted from the source corpus and used simultaneously for training and evaluation, model failures can be traced back to specific defects in the data and fixed surgically. The method is applied to 16 disciplines; a knowledge base, benchmarks, and training corpora are released.

Why it matters: 77 upvotes on HF Daily. The approach formalizes what frontier labs already do by hand: traceability from a metric back to a specific gap in the data. Releasing the corpora makes it reproducible.]]></description>
      </item>
    
      <item>
        <title>OpenCode v1.14.30: Mistral Medium 3.5 with reasoning and Desktop session fixes</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-30-opencode-v1-14-30/</link>
        <guid isPermaLink="false">2026-04-30-opencode-v1-14-30</guid>
        <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>SST</dc:creator>
        <category>tools</category>
        <category>coding</category><category>open-weights</category><category>update</category><category>mit</category>
        <description><![CDATA[SST released opencode v1.14.30 (April 29, 2026). Support for Mistral Medium 3.5 with reasoning mode was added, Azure response handling improved, and issues with sessions in the Desktop app and editor context across multiple directories were fixed. The April release cadence has been tight: v1.14.27 introduced a configurable default shell, v1.14.25 added Roslyn LSP for C#/Razor, and v1.14.21 brought improved compaction for long conversations.

Why it matters: Opencode is one of the leading open-source competitors to Claude Code and Codex, multi-provider by architecture. Support for Mistral Medium 3.5 with reasoning expands the model selection for offline/edge scenarios.]]></description>
      </item>
    
      <item>
        <title>Mistral Workflows: public preview of a Temporal-based engine for enterprise AI orchestration</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-30-mistral-workflows-preview/</link>
        <guid isPermaLink="false">2026-04-30-mistral-workflows-preview</guid>
        <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Mistral</dc:creator>
        <category>tools</category>
        <category>agents</category><category>preview</category><category>eu</category><category>mistral</category><category>orchestration</category>
        <description><![CDATA[Mistral AI announced Workflows in public preview on April 29 — durable, observable AI orchestration in Studio and Le Chat. The architecture is built on Temporal with AI extensions: streaming, payload handling, and extended observability. The control plane runs on Mistral-managed infrastructure, while execution workers and data processing run inside the customer&#39;s environment. Workflows are written in Python, can be published to Le Chat to be triggered by non-technical users, and every step is traceable in Studio. According to VentureBeat, the engine is already handling millions of daily executions for early customers: ASML, ABANCA, CMA-CGM, France Travail, La Banque Postale.

Why it matters: A direct response to LangGraph/CrewAI/Temporal DIY stacks for production agents. Hybrid deployment (managed control plane, on-prem data plane) removes the main enterprise objection — data residency.]]></description>
      </item>
    
      <item>
        <title>GLM-5V-Turbo: a natively multimodal foundation model for agents</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-30-glm-5v-turbo/</link>
        <guid isPermaLink="false">2026-04-30-glm-5v-turbo</guid>
        <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Z.ai</dc:creator>
        <category>research</category>
        <category>multimodal</category><category>agents</category><category>paper</category><category>china</category><category>zai-org</category>
        <description><![CDATA[Z.ai unveiled GLM-5V-Turbo, a multimodal foundation model in which visual perception is embedded as a first-class component of reasoning, planning, and tool use rather than bolted on after the fact. The model handles images, video, web pages, and documents; the authors report gains on multimodal coding, visual tool use, and agent tasks while preserving text-only quality. The role of end-to-end verification of agent trajectories during training is emphasized.

Why it matters: One of the most-hyped releases of the week on HF Daily — 2.28k upvotes. A bid for a natively multimodal agent (rather than a VLM with tacked-on tool use) — a direction in which Z.ai is systematically competing with GPT-5 and Gemini.]]></description>
      </item>
    
      <item>
        <title>ElevenLabs launches ElevenMusic — a licensed platform for music generation, remix, and streaming</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-30-elevenmusic-launch/</link>
        <guid isPermaLink="false">2026-04-30-elevenmusic-launch</guid>
        <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>ElevenLabs</dc:creator>
        <category>audio</category>
        <category>music-gen</category><category>release</category><category>us</category><category>elevenlabs</category>
        <description><![CDATA[ElevenLabs unveiled an updated ElevenMusic — a product that combines music discovery, remixing existing tracks (genre swaps, tempo changes, reinterpretation), and creating original compositions from text, melody, or mood. The platform is built on a fully licensed music model; at launch it features more than 4,000 independent artists and a curated release, Eleven Album Vol. 2. It is positioned not as passive listening but as a fan-engagement layer with publishing and monetization options for creators.

Why it matters: The first major generative-music player to enter the market with a licensing model from day one — unlike Suno and Udio, which have already settled lawsuits with UMG/WMG. Combining generation, remix, and streaming in a single product is a bid for a new category between Spotify and Suno.]]></description>
      </item>
    
      <item>
        <title>DeepSeek V4: official open-source release with Day-0 adaptation for Huawei Ascend</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-30-deepseek-v4-official-release/</link>
        <guid isPermaLink="false">2026-04-30-deepseek-v4-official-release</guid>
        <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>DeepSeek</dc:creator>
        <category>models-llm</category>
        <category>deepseek-v4</category><category>open-weights</category><category>mit</category><category>china</category><category>release</category><category>huawei-ascend</category><category>moe</category><category>long-context</category>
        <description><![CDATA[DeepSeek officially released the V4 lineup as open weights under the MIT license on April 29. It includes DeepSeek-V4-Pro at 1.6T parameters (49B active) and DeepSeek-V4 at 284B (13B active) — both MoE models with native 1M-token context. The release claims roughly a 9.5x reduction in memory requirements versus V3.2 and a near-closed gap with frontier closed models on reasoning benchmarks. A defining feature of the release is optimization for Chinese accelerators: Huawei Ascend, Cambricon, Hygon, and Moore Threads completed Day-0 adaptation on release day, with multi-deploy on Ascend 950 expected in the second half of the year.

Why it matters: The first major frontier open-weights release purpose-built for Ascend rather than Nvidia — an infrastructure shift for the Chinese AI stack and a signal that US export restrictions have accelerated the formation of a self-sufficient inference ecosystem.]]></description>
      </item>
    
      <item>
        <title>Anthropic in talks for a round at up to $900B valuation</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-30-anthropic-900b-funding-talks/</link>
        <guid isPermaLink="false">2026-04-30-anthropic-900b-funding-talks</guid>
        <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Anthropic</dc:creator>
        <category>industry</category>
        <category>funding</category><category>us</category><category>anthropic</category>
        <description><![CDATA[Anthropic has received preemptive offers to raise around $50B at a valuation in the $850–900B range, more than doubling its current capitalization and potentially putting the company ahead of OpenAI as the most valuable AI startup. Talks are at an early stage and no term sheet has been signed. In parallel, run-rate revenue is reported at &gt;$30B versus ~$9B at the end of 2025.

Why it matters: If the round closes in this range, the balance of power in the frontier-lab race formally shifts in Anthropic&#39;s favor — for the first time since 2023.]]></description>
      </item>
    
      <item>
        <title>Yandex announces results of the Yandex AI Startup Lab accelerator</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-29-yandex-ai-startup-lab/</link>
        <guid isPermaLink="false">2026-04-29-yandex-ai-startup-lab</guid>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Yandex</dc:creator>
        <category>industry</category>
        <category>russia</category><category>yandex</category><category>accelerator</category><category>startups</category><category>ai-startups</category>
        <description><![CDATA[Yandex announced the winners of the Yandex AI Startup Lab accelerator for students and young researchers, which received applications from about 1,000 teams from 146 universities. First place went to Gradius (students from HSE and NSTU) — a technology for embedding contextual ads in AI service responses as a new monetization format; the team received 3 million rubles and a 1-million-ruble grant for Yandex Cloud resources. Second place went to VisioMed.AI, a decision-support system for ophthalmologists based on retinal image analysis.

Why it matters: A signal of how Yandex is building a pipeline of young AI teams within the Russian market after the departure of foreign venture funds.]]></description>
      </item>
    
      <item>
        <title>vLLM v0.20.0 — third release in two weeks</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-29-vllm-v0-20-0/</link>
        <guid isPermaLink="false">2026-04-29-vllm-v0-20-0</guid>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>vLLM</dc:creator>
        <category>tools</category>
        <category>inference</category><category>vllm</category><category>v0.20.0</category><category>release</category>
        <description><![CDATA[On April 27, vLLM released v0.20.0 — the third version in two weeks after v0.18.0 and v0.19.0. The April lineup brought gRPC serving, GPU-accelerated speculative decoding, advanced KV-cache offloading, and full support for Gemma 4 (E2B/E4B/26B MoE/31B Dense with MoE routing, multimodality, reasoning traces, and tool use); the async scheduler, which overlaps engine scheduling with GPU execution, is now enabled by default.

Why it matters: The high release cadence fills the production-ready inference niche for fresh open models — a competitor to TensorRT-LLM and SGLang in speed of supporting new architectures.]]></description>
      </item>
    
      <item>
        <title>Tencent releases HY-Embodied-0.5-X update for embodied agents</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-29-tencent-hy-embodied-0-5-x/</link>
        <guid isPermaLink="false">2026-04-29-tencent-hy-embodied-0-5-x</guid>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Tencent</dc:creator>
        <category>models-llm</category>
        <category>china</category><category>open-weights</category><category>tencent</category><category>hunyuan</category><category>embodied</category><category>robotics</category><category>vision-language</category>
        <description><![CDATA[The Hunyuan team published an updated version of its embodied foundation model on Hugging Face — HY-Embodied-0.5-X, described as an Enhanced Embodied Foundation Model for Real-World Agents. The base lineup (MoT-2B and MoE-32B) is built on a Mixture-of-Transformers architecture, trained on 100M+ embodied samples, and targets spatiotemporal perception, planning, and VLA scenarios for robotics.

Why it matters: An evolution of Tencent&#39;s open lineup for embodied AI — a competitor to Qwen3-VL and closed frontier models in robotics tasks.]]></description>
      </item>
    
      <item>
        <title>OpenAI brings GPT-5.5, Codex, and Managed Agents to Amazon Bedrock</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-29-openai-on-aws-bedrock/</link>
        <guid isPermaLink="false">2026-04-29-openai-on-aws-bedrock</guid>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>OpenAI</dc:creator>
        <category>industry</category>
        <category>openai</category><category>aws</category><category>bedrock</category><category>gpt-5-5</category><category>gpt-5-4</category><category>codex</category><category>managed-agents</category><category>partnership</category><category>microsoft</category>
        <description><![CDATA[AWS and OpenAI expanded their partnership and launched three offerings on Amazon Bedrock in limited preview: OpenAI&#39;s frontier models (GPT-5.5 and GPT-5.4), the Codex agent with CLI/desktop/VS Code support, and Bedrock Managed Agents based on OpenAI. GA is promised within weeks; the models are integrated with IAM, PrivateLink, guardrails, and CloudTrail.

Why it matters: The release came a day after the end of OpenAI&#39;s exclusivity with Microsoft and effectively makes Bedrock a second full-fledged distribution channel for OpenAI&#39;s frontier models in the enterprise.]]></description>
      </item>
    
      <item>
        <title>Mistral releases Medium 3.5 — 128B dense, 256k context, open weights</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-29-mistral-medium-3-5/</link>
        <guid isPermaLink="false">2026-04-29-mistral-medium-3-5</guid>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Mistral</dc:creator>
        <category>models-llm</category>
        <category>mistral</category><category>mistral-medium-3-5</category><category>open-weights</category><category>vibe</category><category>le-chat</category><category>swe-bench</category><category>remote-agents</category><category>coding-agents</category>
        <description><![CDATA[Mistral AI introduced Mistral Medium 3.5 — a flagship dense model with 128B parameters, 256k context, and switchable reasoning effort. Weights are open under a modified MIT license and available on Hugging Face. In parallel, the company launched remote agents in Vibe (cloud coding sessions with CLI and &#34;teleportation&#34; of a local session into the cloud) and a Work mode in Le Chat for multi-step tasks. Claimed scores: 77.6% on SWE-Bench Verified and 91.4% on τ³-Telecom; API pricing is $1.5/$7.5 per million tokens.

Why it matters: Mistral returns to the frontier with a cheap open-weight model on par with Claude Sonnet 4.5 in coding, while also offering its own analog of Codex/Claude Code — the strongest European release of spring 2026.]]></description>
      </item>
    
      <item>
        <title>Sber unveils Kandinsky 6.0 Image — flagship image generation model</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-29-kandinsky-6-0-image/</link>
        <guid isPermaLink="false">2026-04-29-kandinsky-6-0-image</guid>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Sber</dc:creator>
        <category>image</category>
        <category>russia</category><category>kandinsky</category><category>sber</category><category>gigachat</category><category>image-generation</category><category>moe</category><category>image-editing</category>
        <description><![CDATA[Sber released Kandinsky 6.0 Image based on a Mixture of Experts architecture: the model runs up to twice as fast as its predecessor, better understands complex prompts, and renders text in images more accurately. New features include restoration of old photos, neural photo shoots, stylization, swapping clothes and locations, retouching, and makeup. A built-in Image RAG was added — visual reference search for current people and objects that were not in the training set. Available for free with no limits in the web version, the mobile app, and GigaChat messengers.

Why it matters: The first public MoE image model in the Russian segment and a direct response to foreign image editors like Nano Banana and Seedream.]]></description>
      </item>
    
      <item>
        <title>DeepSeek launches image recognition mode in a staged rollout</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-29-deepseek-vision-mode/</link>
        <guid isPermaLink="false">2026-04-29-deepseek-vision-mode</guid>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>DeepSeek</dc:creator>
        <category>models-llm</category>
        <category>china</category><category>multimodal</category><category>vision</category><category>deepseek</category><category>rollout</category>
        <description><![CDATA[DeepSeek opened a new Image Recognition Mode to a portion of web and app users — the company&#39;s first consumer multimodal image understanding. The mode joined Quick Mode and Expert Mode; for now, only understanding is supported (viewing, reading, analysis), not generation. Multimodal team lead Chen Xiaokang hinted at the launch with an image of a blue whale with an open eye.

Why it matters: DeepSeek&#39;s first step from a purely text model to a multimodal product — an important signal following the V4 release a few days earlier.]]></description>
      </item>
    
      <item>
        <title>Cursor SDK — TypeScript framework for programmatic coding agents</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-29-cursor-sdk/</link>
        <guid isPermaLink="false">2026-04-29-cursor-sdk</guid>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Cursor</dc:creator>
        <category>tools</category>
        <category>coding-agent</category><category>cursor</category><category>sdk</category><category>typescript</category><category>public-beta</category><category>cloud-vm</category><category>subagents</category><category>release</category>
        <description><![CDATA[On April 29, Cursor opened the public beta of its new TypeScript SDK (npm install @cursor/sdk). The SDK provides programmatic access to the same agent harness that runs in the desktop app, CLI, and web. Capabilities: running agents locally or in Cursor Cloud on an isolated VM, choice of any frontier model, sandboxed VMs, subagents, hooks, and token-based pricing. Target scenarios include embedding agents in CI/CD pipelines, end-to-end automation, and integration into your own products.

Why it matters: Cursor is turning its agent from an IDE feature into an infrastructure API — a direct competitor to Codex SDK and Claude Agent SDK for headless scenarios.]]></description>
      </item>
    
      <item>
        <title>OpenAI Codex CLI 0.126.0-alpha — series of pre-releases on April 28-29</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-29-codex-cli-alpha-9-15/</link>
        <guid isPermaLink="false">2026-04-29-codex-cli-alpha-9-15</guid>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>OpenAI</dc:creator>
        <category>tools</category>
        <category>coding-agent</category><category>codex</category><category>v0.126.0-alpha</category><category>cli</category><category>pre-release</category>
        <description><![CDATA[On github.com/openai/codex, a series of 0.126.0 alpha builds (alpha.9 → alpha.15) shipped on April 28-29. The pace — several releases per day — reflects active integration of Codex with the new OpenAI ↔ AWS Bedrock partnership and app-server improvements from the previous cycle (Unix socket transport, pagination-friendly resume/fork, sticky environments, remote thread config). A stable 0.126.0 has not yet appeared in the window. Continuation of the chain from alpha.8 (April 27).

Why it matters: The pace of alpha releases signals that 0.126.0 stable is close — worth tracking if you use Codex CLI on working branches.]]></description>
      </item>
    
      <item>
        <title>Anthropic launches Claude for Creative Work with connectors to Adobe, Blender, Ableton</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-29-claude-for-creative-work/</link>
        <guid isPermaLink="false">2026-04-29-claude-for-creative-work</guid>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Anthropic</dc:creator>
        <category>tools</category>
        <category>anthropic</category><category>claude</category><category>mcp</category><category>creative-tools</category><category>adobe</category><category>blender</category><category>ableton</category><category>autodesk</category><category>connectors</category><category>claude-design</category>
        <description><![CDATA[Anthropic announced the Claude for Creative Work bundle — nine official connectors that let Claude work directly with Adobe Creative Cloud, Blender, Autodesk Fusion, Ableton Live/Push, Affinity by Canva, Resolume, SketchUp, and Splice. In parallel, Anthropic Labs launched a new product, Claude Design, for rapid visual prototyping, and announced education programs with RISD, Ringling, and Goldsmiths.

Why it matters: Anthropic is moving beyond the &#34;code and text assistant&#34; niche into professional creative pipelines — for the first time, a major frontier lab gets an official place inside Adobe and Blender.]]></description>
      </item>
    
      <item>
        <title>Claude Code 2.1.123 — fix for OAuth 401-loop and Bedrock service tier</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-29-claude-code-v2-1-123/</link>
        <guid isPermaLink="false">2026-04-29-claude-code-v2-1-123</guid>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Anthropic</dc:creator>
        <category>tools</category>
        <category>coding-agent</category><category>claude-code</category><category>v2.1.123</category><category>v2.1.122</category><category>release</category><category>bedrock</category><category>mcp</category><category>opentelemetry</category>
        <description><![CDATA[Anthropic released Claude Code 2.1.123 (April 29) and 2.1.122 (April 28). Highlights: fixed an infinite OAuth 401 loop with CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1; a new ANTHROPIC_BEDROCK_SERVICE_TIER variable (default | flex | priority) for selecting an Amazon Bedrock tier via the X-Amzn-Bedrock-Service-Tier header; pasting a PR URL into /resume now finds the session that created that PR (GitHub, GitHub Enterprise, GitLab, Bitbucket); /mcp highlights claude.ai connectors hidden by a manually added server with the same URL; OpenTelemetry — numeric api_request/api_error attributes are now emitted as numbers, and a claude_code.at_mention event was added. Continuation of the v2.1.121 release chain.

Why it matters: The Bedrock service tier is needed by enterprise users, and the 401 fix removes a blocking bug for those who disabled experimental betas.]]></description>
      </item>
    
      <item>
        <title>AWS Quick — AI assistant for work with a desktop app</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-29-aws-quick/</link>
        <guid isPermaLink="false">2026-04-29-aws-quick</guid>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>AWS</dc:creator>
        <category>tools</category>
        <category>aws</category><category>quick</category><category>ai-assistant</category><category>desktop-app</category><category>release</category>
        <description><![CDATA[At What&#39;s Next with AWS on April 29, Amazon introduced Quick — an AI assistant for work that connects to all of a user&#39;s apps, learns what matters to them, and takes actions on their behalf. A desktop app is available with Free and Plus tiers. The same announcement also introduced the ability to build custom apps via natural language.

Why it matters: AWS is moving into its own horizontal AI assistant — a competitor to Microsoft Copilot and Google Gemini Workspace, but with AWS service integration.]]></description>
      </item>
    
      <item>
        <title>World-R1: Reinforcing 3D Constraints for Text-to-Video Generation</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-28-world-r1-text-to-video/</link>
        <guid isPermaLink="false">2026-04-28-world-r1-text-to-video</guid>
        <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Microsoft Research</dc:creator>
        <category>research</category>
        <category>paper</category><category>rl</category>
        <description><![CDATA[RL fine-tuning of text-to-video with a reward signal based on 3D geometric consistency; the 3D-aware reward sharply improves temporal coherence without degrading visual quality.]]></description>
      </item>
    
      <item>
        <title>Sora — final shutdown</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-28-sora-discontinuation/</link>
        <guid isPermaLink="false">2026-04-28-sora-discontinuation</guid>
        <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>OpenAI</dc:creator>
        <category>video</category>
        <category>deprecation</category><category>us</category>
        <description><![CDATA[On April 26, the Sora web and mobile apps were permanently shut down; the API will be discontinued on September 24, 2026.]]></description>
      </item>
    
      <item>
        <title>OpenCode v1.14.28</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-28-opencode-v1-14-28/</link>
        <guid isPermaLink="false">2026-04-28-opencode-v1-14-28</guid>
        <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>sst</dc:creator>
        <category>tools</category>
        <category>release</category><category>update</category><category>coding</category>
        <description><![CDATA[Released April 27. Bugfix for `opencode upgrade`, which was failing for Bun users outside a directory containing `package.json`.]]></description>
      </item>
    
      <item>
        <title>OpenAI publishes &#34;Our principles&#34;</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-28-openai-our-principles/</link>
        <guid isPermaLink="false">2026-04-28-openai-our-principles</guid>
        <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>OpenAI</dc:creator>
        <category>industry</category>
        <category>policy</category><category>us</category>
        <description><![CDATA[On April 26, Sam Altman released a five-principle document (democratization, empowerment, universal prosperity, resilience, adaptability), effectively an update to the 2018 charter that codifies public commitments on AGI and compute infrastructure ahead of regulatory pressure in the US and EU.]]></description>
      </item>
    
      <item>
        <title>LLM Safety From Within (SIREN)</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-28-llm-safety-from-within/</link>
        <guid isPermaLink="false">2026-04-28-llm-safety-from-within</guid>
        <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>University of Toronto CSSLab / McGill / LMU Munich</dc:creator>
        <category>research</category>
        <category>paper</category><category>safety</category><category>interpretability</category>
        <description><![CDATA[Linear probes across all internal LLM layers identify &#34;safety neurons&#34; with adaptive weighting. Beats SoTA open-source guard models on multiple benchmarks with 250× fewer trainable parameters, and supports streaming detection.]]></description>
      </item>
    
      <item>
        <title>Firefly AI Assistant — Public Beta</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-28-firefly-ai-assistant-public-beta/</link>
        <guid isPermaLink="false">2026-04-28-firefly-ai-assistant-public-beta</guid>
        <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Adobe</dc:creator>
        <category>image</category>
        <category>release</category><category>beta</category><category>agents</category><category>us</category>
        <description><![CDATA[On April 27, Adobe launched the global public beta of an AI assistant that orchestrates multi-step creative workflows across 60+ Creative Cloud tools via chat prompts; includes Creative Skills and integration with partner models (GPT Image 2, Veo 3.1, Runway Gen-4.5, ElevenLabs Multilingual v2).]]></description>
      </item>
    
      <item>
        <title>DeepSeek V4 — API price cuts</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-28-deepseek-v4-pricing/</link>
        <guid isPermaLink="false">2026-04-28-deepseek-v4-pricing</guid>
        <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>DeepSeek</dc:creator>
        <category>models-llm</category>
        <category>china</category><category>pricing</category><category>open-weights</category><category>deepseek-v4</category><category>release</category>
        <description><![CDATA[On April 27, DeepSeek aggressively cut prices for V4-Pro and V4-Flash (preview from April 24, 1.6T MoE / 49B active, 1M context, optimized for Huawei Ascend, open weights), kicking off another round of the price war in the Chinese market.]]></description>
      </item>
    
      <item>
        <title>Google DeepMind ↔ Republic of Korea</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-28-deepmind-korea-ai-campus/</link>
        <guid isPermaLink="false">2026-04-28-deepmind-korea-ai-campus</guid>
        <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Google DeepMind</dc:creator>
        <category>industry</category>
        <category>partnership</category><category>global</category>
        <description><![CDATA[On April 27, DeepMind and MSIT announced the creation of an AI Campus in Seoul; Korean researchers gain access to AlphaFold and AlphaGenome for life sciences, climate, and energy. The announcement was timed to the 10th anniversary of the AlphaGo match.]]></description>
      </item>
    
      <item>
        <title>Codex CLI rust-v0.126.0-alpha.8</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-28-codex-cli-rust-alpha-8/</link>
        <guid isPermaLink="false">2026-04-28-codex-cli-rust-alpha-8</guid>
        <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>OpenAI</dc:creator>
        <category>tools</category>
        <category>release</category><category>alpha</category><category>coding</category><category>agents</category><category>us</category>
        <description><![CDATA[Pre-release on April 27. Alpha iteration of the Rust version: binaries for macOS/Linux/Windows (ARM64 + x86_64), application server, command runner, responses API proxy, SHA256/Sigstore signatures.]]></description>
      </item>
    
      <item>
        <title>Claude Code v2.1.121</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-28-claude-code-v2-1-121/</link>
        <guid isPermaLink="false">2026-04-28-claude-code-v2-1-121</guid>
        <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Anthropic</dc:creator>
        <category>tools</category>
        <category>release</category><category>update</category><category>coding</category><category>agents</category><category>claude-opus-4.7</category>
        <description><![CDATA[Released April 28. Adds `alwaysLoad` for MCP servers, a `claude plugin prune` command to remove orphan dependencies, and type-to-filter search in `/skills`; fixes memory leaks, fullscreen scrolling, auto-retry on transient MCP errors, and OAuth token handling.]]></description>
      </item>
    
      <item>
        <title>Agentic World Modeling: Foundations, Capabilities, Laws, and Beyond</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-28-agentic-world-modeling-survey/</link>
        <guid isPermaLink="false">2026-04-28-agentic-world-modeling-survey</guid>
        <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>HKUST/NUS/Oxford/NTU</dc:creator>
        <category>research</category>
        <category>paper</category><category>agents</category><category>rl</category><category>multimodal</category>
        <description><![CDATA[A manifesto-style survey of world models for agents: theoretical foundations, empirical scaling laws, capability framework. The top paper of the day on HF (177 upvotes).]]></description>
      </item>
    
      <item>
        <title>Qualcomm + OpenAI + MediaTek — AI processors for smartphones</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-27-qualcomm-openai-mediatek-smartphone-chips/</link>
        <guid isPermaLink="false">2026-04-27-qualcomm-openai-mediatek-smartphone-chips</guid>
        <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Qualcomm</dc:creator>
        <category>industry</category>
        <category>partnership</category><category>us</category>
        <description><![CDATA[Report of joint chip development; Qualcomm shares up 12% pre-market. Mass production targeted for 2028. Per a report from analyst Ming-Chi Kuo; not officially confirmed by the parties.]]></description>
      </item>
    
      <item>
        <title>OpenClaw 2026.4.25</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-27-openclaw-2026-4-25/</link>
        <guid isPermaLink="false">2026-04-27-openclaw-2026-4-25</guid>
        <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>OpenClaw</dc:creator>
        <category>tools</category>
        <category>release</category><category>update</category>
        <description><![CDATA[TTS expansion (`/tts latest`, new providers including Azure Speech). Plugin registry moved to cold storage for faster startup. Expanded OpenTelemetry monitoring. Calendar versioning `YYYY.M.D`.]]></description>
      </item>
    
      <item>
        <title>Microsoft–OpenAI restructuring</title>
        <link>https://ai-digest.kerby.pro/en/i/2026-04-27-microsoft-openai-restructuring/</link>
        <guid isPermaLink="false">2026-04-27-microsoft-openai-restructuring</guid>
        <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
        <dc:creator>Microsoft / OpenAI</dc:creator>
        <category>industry</category>
        <category>partnership</category><category>us</category>
        <description><![CDATA[End of cloud exclusivity: OpenAI can sell products via AWS/Google Cloud; Microsoft license becomes non-exclusive. Microsoft remains the primary cloud partner and stops paying OpenAI a revenue share. OpenAI continues sharing revenue with Microsoft through 2030; IP license runs through 2032.]]></description>
      </item>
    
  </channel>
</rss>
