Meta Launches "Vibes" for AI-Generated Videos
PLUS: Tencent Open-Sources HunyuanImage-3.0, an 80B MoE Model
Next-Gen Open Vision: HunyuanImage-3.0 Brings Unified Multimodal Generation to the Public

Tencent has officially released HunyuanImage-3.0, a high-capacity, open-source text-to-image model built on a unified autoregressive multimodal architecture with Mixture-of-Experts (MoE) scaling. With 80B total parameters (roughly 13B active per token), a dedicated evaluation suite, and rich tooling around inference and prompting, it aims to close the gap with leading closed image-generation models.
Key Points:
Massive scale with MoE activation - The model carries 80B total parameters but activates only ~13B per token through a 64-expert MoE architecture, which keeps compute costs manageable while retaining expressive power (see the routing sketch after this list).
Unified model (text + image) and intelligent expansion of prompts - Deviating from the diffusion pipeline norm, it integrates text understanding and image generation in one autoregressive model. It also “fills in” sparse prompts using world knowledge to produce richer visuals.
Strong evaluation and benchmarking - Tencent developed SSAE for automated alignment metrics and ran human GSB (Good/Same/Bad) comparisons across 1,000 prompts with professional annotators. These benchmarks show how HunyuanImage-3.0 stacks up against other models.
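To make the sparse-activation point concrete, here is a minimal sketch of top-k MoE routing in PyTorch: each token is sent to only a couple of the available experts, so the parameters that actually run are a small slice of the total. The SparseMoE class, the dimensions, and the top_k value are illustrative assumptions, not HunyuanImage-3.0's real configuration; only the 64-expert count comes from the release.

```python
# Toy top-k MoE routing: only a few of the 64 experts run per token.
# Dimensions and top_k are illustrative, not the real HunyuanImage-3.0 config.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=64, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # token -> expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                               # x: (num_tokens, d_model)
        scores = self.router(x)                         # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep top_k experts per token
        weights = F.softmax(weights, dim=-1)            # normalize the kept scores
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in idx[:, slot].unique():             # experts chosen in this slot
                mask = idx[:, slot] == e                # tokens routed to expert e
                out[mask] += weights[mask, slot, None] * self.experts[int(e)](x[mask])
        return out

tokens = torch.randn(8, 512)      # 8 tokens of illustrative width 512
print(SparseMoE()(tokens).shape)  # torch.Size([8, 512])
```

The same routing logic is what lets an 80B-parameter model run with only ~13B parameters active per token.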
Conclusion
The public release of HunyuanImage-3.0 is a major milestone for open generative vision models. Its scale, architecture, and tooling suggest Tencent is serious about building a contender to high-end closed systems. For researchers and developers, it offers fertile ground to experiment, adapt, and push image models further, and it stands as one of the most consequential open releases of 2025 in multimodal AI.
Meta Launches Vibes — A New AI Video Feed & Creation Hub in Meta AI

Meta has unveiled Vibes, a new feed inside the Meta AI app (and on meta.ai) designed to let users discover, remix, and create short AI-generated videos. Instead of just seeing AI content, you can start from scratch or transform existing videos with music, styles, or visual tweaks—and then share them to your social networks.
Key Points:
Discover & remix AI videos - In Vibes, you can browse AI video content from creators and communities. If something catches your eye, you can remix it—add music, change the aesthetic, layer new visuals—right from inside Meta AI.
Creation tools + cross-posting - Users can build videos from scratch or import their own content and evolve it with new AI visual tools. Once done, they can post directly to Vibes, DM it to friends, or cross-post to Instagram or Facebook Reels and Stories.
Feed personalization & growth trajectory - Over time, the Vibes feed will become more personalized based on what you engage with. Meta says it will continue rolling out “more powerful creation tools and models” to expand this video ecosystem.
Conclusion
With Vibes, Meta is bridging AI video creation and social discovery—making it just as easy to remix someone else’s digital art as to produce your own. It’s a shift toward participatory AI content, where users become co-creators. As the toolset evolves, Vibes could become a new hub for AI art, short video trends, and social media experiments.
Exa Releases exa-code — Web Context Tool for Coding Agents to Slash Hallucinations
Exa Labs has launched exa-code, a context retrieval tool built specifically for AI coding agents. Rather than returning pages or long documents, exa-code surfaces only the few hundred tokens of web context (especially code examples) that are most relevant to a coding query. The goal: reduce hallucinations in code generation by tightly grounding assistant suggestions in verified examples.
Key Points:
Dense, code-centric context retrieval - exa-code is designed to prioritize code examples over general prose, extracting and reranking snippets from a hybrid search across 1 billion+ web pages. The result is a compact context payload that fits better into LLM prompts.
Hallucination benchmark & evaluation - In internal evaluations, Exa created a benchmark centered on hallucination-prone tasks (library/API calls). They found exa-code outperforms other popular context sources in reducing hallucinations—while using fewer tokens.
Prompt integration & trigger semantics - To use it, agents simply include a “use exa-code” instruction in their prompt. exa-code intercepts that, runs its retrieval and extraction flow, and returns the minimal context needed (a sketch of this grounding pattern follows this list).
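Exa describes the integration as prompt-level (the agent just adds a "use exa-code" instruction and the tool handles retrieval behind the scenes), so the sketch below only illustrates the underlying pattern: fetch a compact, code-centric context payload and prepend it to the model prompt. The fetch_code_context helper, its canned snippets, and the prompt wording are hypothetical stand-ins, not Exa's published API.

```python
# Illustrative pattern only: ground a coding query in a compact, code-centric
# context payload before calling an LLM. fetch_code_context is a local stub;
# in practice the snippets would come from exa-code's retrieval over the web.

def fetch_code_context(query: str, max_snippets: int = 3) -> str:
    """Stand-in for exa-code retrieval: return a few short, relevant snippets."""
    # Hypothetical canned results; the real tool reranks snippets from a
    # hybrid search over 1B+ web pages and keeps only a few hundred tokens.
    snippets = [
        's3 = boto3.client("s3")',
        's3.upload_file("report.csv", "my-bucket", "reports/report.csv")',
    ]
    return "\n".join(snippets[:max_snippets])

def build_grounded_prompt(task: str) -> str:
    """Prepend retrieved examples so the model imitates real API usage
    instead of hallucinating function names or parameters."""
    context = fetch_code_context(task)
    return (
        "Use only APIs that appear in the reference snippets below.\n\n"
        f"Reference snippets:\n{context}\n\n"
        f"Task: {task}\n"
    )

if __name__ == "__main__":
    print(build_grounded_prompt("upload a file to an S3 bucket with boto3"))
```

The point of keeping the payload to a few hundred tokens is that it fits easily alongside the task in the prompt window while still pinning the model to verified API usage.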
Conclusion
The release of exa-code signals that “context tooling” for AI is evolving from generic search to domain-aware, precision tools. For coding agents, where hallucinating a wrong API name or parameter can break the build, exa-code offers a lean and focused way to ground generation. Over time, tools like this may become de facto infrastructure in agent design, complementing LLMs rather than sitting invisibly beneath them.
Thank you for reading.