Daily digest
12 items · ~12 min · Week 2026-W18
Must-read (3)
OpenAI brings GPT-5.5, Codex, and Managed Agents to Amazon Bedrock
AWS and OpenAI expanded their partnership and launched three offerings on Amazon Bedrock in limited preview: OpenAI's frontier models (GPT-5.5 and GPT-5.4), the Codex agent with CLI/desktop/VS Code support, and Bedrock Managed Agents built on OpenAI models. GA is promised within weeks; the models are integrated with IAM, PrivateLink, guardrails, and CloudTrail.
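Since the models plug into the standard Bedrock runtime, invoking them should follow the usual Converse API shape. A minimal sketch, assuming a hypothetical model ID (the real identifier will come from the Bedrock model catalog once the preview is live):

```python
# Sketch of calling an OpenAI model via Amazon Bedrock's Converse API.
# "openai.gpt-5.5-v1:0" is a placeholder model ID, not a confirmed identifier.

def build_converse_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Build the keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.2},
    }

request = build_converse_request("openai.gpt-5.5-v1:0", "Summarize this week's AI news.")

# With AWS credentials configured, the actual call would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**request)
#   print(response["output"]["message"]["content"][0]["text"])
```

Because the request goes through the normal Bedrock runtime, the IAM, PrivateLink, guardrail, and CloudTrail integrations mentioned above apply without model-specific changes.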
Mistral releases Medium 3.5 — 128B dense, 256k context, open weights
Mistral AI introduced Mistral Medium 3.5 — a flagship dense model with 128B parameters, 256k context, and switchable reasoning effort. Weights are open under a modified MIT license and available on Hugging Face. In parallel, the company launched remote agents in Vibe (cloud coding sessions with CLI and "teleportation" of a local session into the cloud) and a Work mode in Le Chat for multi-step tasks. Claimed scores: 77.6% on SWE-Bench Verified and 91.4% on τ³-Telecom; API pricing is $1.5/$7.5 per million tokens.
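At the quoted rates, per-request cost is easy to estimate. A back-of-the-envelope sketch, assuming the "$1.5/$7.5" figures follow the usual input/output split:

```python
# Cost estimate for Mistral Medium 3.5 at the quoted API rates.
# Assumption: $1.5 per million input tokens, $7.5 per million output tokens.

INPUT_PER_MTOK = 1.5
OUTPUT_PER_MTOK = 7.5

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request."""
    return (input_tokens * INPUT_PER_MTOK + output_tokens * OUTPUT_PER_MTOK) / 1_000_000

# A long-context request: 200k tokens in, 4k tokens out.
cost = request_cost(200_000, 4_000)
print(f"${cost:.3f}")  # $0.330
```

So even near the 256k context limit, a single call stays well under half a dollar; output tokens dominate only in generation-heavy workloads.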
Anthropic launches Claude for Creative Work with connectors to Adobe, Blender, Ableton
Anthropic announced the Claude for Creative Work bundle — nine official connectors that let Claude work directly with Adobe Creative Cloud, Blender, Autodesk Fusion, Ableton Live/Push, Affinity by Canva, Resolume, SketchUp, and Splice. In parallel, Anthropic Labs launched a new product, Claude Design, for rapid visual prototyping, and announced education programs with RISD, Ringling, and Goldsmiths.
Worth knowing (4)
Sber unveils Kandinsky 6.0 Image — flagship image generation model
Sber released Kandinsky 6.0 Image based on a Mixture of Experts architecture: the model runs up to twice as fast as its predecessor, better understands complex prompts, and renders text in images more accurately. New features include restoration of old photos, neural photo shoots, stylization, swapping clothes and locations, retouching, and makeup. A built-in Image RAG was added — visual reference search for current people and objects that were not in the training set. Available for free with no limits in the web version, the mobile app, and GigaChat messengers.
DeepSeek launches image recognition mode in a grayscale (staged) test
DeepSeek opened a new Image Recognition Mode to a portion of web and app users — the company's first consumer multimodal image understanding. The mode joined Quick Mode and Expert Mode; for now, only understanding is supported (viewing, reading, analysis), not generation. Multimodal team lead Chen Xiaokang hinted at the launch with an image of a blue whale with an open eye.
Cursor SDK — TypeScript framework for programmatic coding agents
On April 29, Cursor opened the public beta of its new TypeScript SDK (npm install @cursor/sdk). The SDK provides programmatic access to the same agent harness that runs in the desktop app, CLI, and web. Capabilities: running agents locally or in Cursor Cloud on an isolated VM, choice of any frontier model, sandboxed VMs, subagents, hooks, and token-based pricing. Target scenarios include embedding agents in CI/CD pipelines, end-to-end automation, and integration into your own products.
AWS Quick — AI assistant for work with a desktop app
At What's Next with AWS on April 29, Amazon introduced Quick — an AI assistant for work that connects to all of a user's apps, learns what matters to them, and takes actions on their behalf. A desktop app is available with Free and Plus tiers. The same announcement also added the ability to build custom apps from natural-language descriptions.
For reference (5)
Yandex announces results of the Yandex AI Startup Lab accelerator
Yandex announced the winners of the Yandex AI Startup Lab accelerator for students and young researchers, which received applications from about 1,000 teams from 146 universities. First place went to Gradius (students from HSE and NSTU) — a technology for embedding contextual ads in AI service responses as a new monetization format; the team received 3 million rubles and a 1-million-ruble grant for Yandex Cloud resources. Second place went to VisioMed.AI, a decision-support system for ophthalmologists based on retinal image analysis.
Tencent releases HY-Embodied-0.5-X update for embodied agents
The Hunyuan team published an updated version of its embodied foundation model on Hugging Face — HY-Embodied-0.5-X, described as an Enhanced Embodied Foundation Model for Real-World Agents. The base lineup (MoT-2B and MoE-32B) is built on a Mixture-of-Transformers architecture, trained on 100M+ embodied samples, and targets spatiotemporal perception, planning, and VLA scenarios for robotics.
Claude Code 2.1.123 — fix for OAuth 401-loop and Bedrock service tier
Anthropic released Claude Code 2.1.123 (April 29) and 2.1.122 (April 28). Highlights: fixed an infinite OAuth 401 loop that occurred with CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1; a new ANTHROPIC_BEDROCK_SERVICE_TIER variable (default | flex | priority) for selecting an Amazon Bedrock tier via the X-Amzn-Bedrock-Service-Tier header; pasting a PR URL into /resume now finds the session that created that PR (GitHub, GitHub Enterprise, GitLab, Bitbucket); /mcp highlights claude.ai connectors hidden by a manually added server with the same URL; OpenTelemetry — numeric api_request/api_error attributes are now emitted as numbers, and a claude_code.at_mention event was added. Continuation of the v2.1.121 release chain.
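The variable-to-header mapping described in the release notes can be sketched as follows. The helper below is illustrative, not Claude Code's actual implementation; only the variable name, the header name, and the three tier values come from the release notes:

```python
# Sketch: map ANTHROPIC_BEDROCK_SERVICE_TIER onto the
# X-Amzn-Bedrock-Service-Tier request header (illustrative helper).
import os

VALID_TIERS = {"default", "flex", "priority"}

def bedrock_tier_header(env=None) -> dict:
    """Return the Bedrock service-tier header for the given environment."""
    env = os.environ if env is None else env
    tier = env.get("ANTHROPIC_BEDROCK_SERVICE_TIER", "default")
    if tier not in VALID_TIERS:
        raise ValueError(f"unknown Bedrock service tier: {tier!r}")
    return {"X-Amzn-Bedrock-Service-Tier": tier}

print(bedrock_tier_header({"ANTHROPIC_BEDROCK_SERVICE_TIER": "priority"}))
```

Validating against the known tier set up front surfaces a typo in the variable immediately instead of letting Bedrock reject or silently ignore an unknown header value.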
vLLM v0.20.0 — third release in two weeks
On April 27, vLLM released v0.20.0 — its third version in two weeks, after v0.18.0 and v0.19.0. The April lineup brought gRPC serving, GPU-accelerated speculative decoding, advanced KV-cache offloading, and full support for Gemma 4 (E2B/E4B/26B MoE/31B Dense with MoE routing, multimodality, reasoning traces, and tool use); the async scheduler, which overlaps engine scheduling with GPU execution, is now enabled by default.
OpenAI Codex CLI 0.126.0-alpha — series of pre-releases on April 28-29
On github.com/openai/codex, a series of 0.126.0 alpha builds (alpha.9 → alpha.15) shipped on April 28-29. The pace of several releases per day reflects active integration of Codex with the new OpenAI ↔ AWS Bedrock partnership and app-server improvements from the previous cycle (Unix socket transport, pagination-friendly resume/fork, sticky environments, remote thread config). A stable 0.126.0 has not yet appeared within this window. Continuation of the chain from alpha.8 (April 27).