Perplexity Launches AI Patent Research Agent

PLUS: Canva Unveils Design Model & AI Tools That Think Like a Designer

“World’s First” Design Model: Canva’s AI Leap into Editable Layers, Data-Driven Design & Agentic Marketing

Canva has launched a major update, introducing what it calls its “world’s first AI model trained to understand the full complexity of design,” alongside a suite of new features including an upgraded assistant (Canva AI), interactive design elements, and deep new tools for marketers and data-driven creators.

Key Points:

  1. Design-Model Intelligence - Canva’s new model “understands every aspect of design from structure and layering to hierarchy, branding, and visual logic,” enabling it to generate editable layers and objects rather than just flat images.

  2. Multi-format Reach - The upgrade spans social posts, presentations, whiteboards and websites - so whether you’re making a deck, a campaign asset or a web page, the AI-powered design engine adapts.

  3. Boosted Assistant & Marketing Platform - The assistant (Canva AI) now supports @mentions in comments, 3D-object generation and style-copying across designs; meanwhile the new Canva Grow platform uses AI to create assets and track performance, enabling marketers to publish ads directly to platforms like Meta.

  4. Enterprise Focus Amid Competition - As Canva aims to retain and attract corporate customers in a crowded market, these AI builds are a clear play to move beyond “simple drag-and-drop” into full-scale brand, campaign and data workflows.
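The "editable layers rather than flat images" idea in point 1 can be pictured with a toy data model. This is a hypothetical sketch, not Canva's actual format: a flat image is a single bitmap, while a layered design keeps each element as a discrete, addressable object that stays editable after generation.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    kind: str      # e.g. "text", "shape", "image"
    content: str   # what the layer holds
    z: int         # stacking order; higher draws on top

@dataclass
class Design:
    layers: list = field(default_factory=list)

    def edit_text(self, new_text: str) -> None:
        # Because elements remain discrete layers, a single piece
        # can be changed without regenerating the whole image.
        for layer in self.layers:
            if layer.kind == "text":
                layer.content = new_text

banner = Design(layers=[
    Layer("image", "background.png", z=0),
    Layer("text", "Summer Sale", z=1),
])
banner.edit_text("Winter Sale")
print([l.content for l in banner.layers])  # → ['background.png', 'Winter Sale']
```

A flat-image output would force the user back to the prompt for any change; the layered representation is what makes post-generation editing possible.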

Conclusion

Design tools are now becoming AI-native platforms where creative output, data and marketing converge. When a design tool understands layering, brand logic, data inputs and marketing publishing in one flow, the boundaries between design, data analytics and campaign execution blur. That means new opportunities - faster iteration, personalization at scale, less reliance on specialist designers - but also new questions around ownership, editability, bias in templates, and platform lock-in.

AI Patent Search for All: Perplexity Patents Launches

Perplexity AI has introduced Perplexity Patents, a new AI-powered research agent designed to make patent and prior-art exploration accessible to everyone. Gone are the days of rigid keyword syntax and expensive specialist tools: this platform supports natural-language questions about inventions, fields and trends.

Key Points:

  1. Natural-language patent queries - Instead of crafting complex keyword searches, you can ask things like “Are there any patents on AI for language learning?” or “Key quantum computing patents since 2024?” and get relevant results and links to original documents.

  2. Beyond exact keyword matching - The system doesn’t just look for exact terms. For example, searching “fitness trackers” may uncover patents labelled “activity bands” or “health-monitoring watches” by concept rather than wording.

  3. Multi-source innovation view - Perplexity Patents doesn’t limit itself to patent databases alone. It also draws on academic papers, public software repositories and other innovation sources, so users can see the wider invention ecosystem, not just formal patents.
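The concept-matching behaviour in point 2 is, in general terms, embedding-based retrieval: queries and documents are mapped to vectors, and ranking uses similarity of meaning rather than shared keywords. Below is a minimal sketch with hand-made two-dimensional "embeddings" - purely illustrative, since real systems learn high-dimensional vectors from text, and nothing here reflects Perplexity's actual pipeline.

```python
import math

# Hypothetical hand-made vectors along two concept axes:
# (wearable-ness, health-monitoring). Real embeddings are learned.
DOC_VECTORS = {
    "activity band with step counter": (0.9, 0.8),
    "health-monitoring smartwatch": (0.8, 0.9),
    "industrial conveyor belt": (0.1, 0.0),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query_vec, top_k=2):
    """Rank documents by vector similarity to the query, not keywords."""
    ranked = sorted(DOC_VECTORS.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [title for title, _ in ranked[:top_k]]

# A query for "fitness tracker" lands near the wearable/health region,
# so concept-neighbours surface even though the words never match.
print(search((0.85, 0.85)))
```

The key property: "fitness tracker" never appears in any document title, yet the wearable patents outrank the unrelated one, which is exactly the behaviour the article describes.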

Conclusion

With Perplexity Patents, the barrier drops dramatically, meaning inventors, entrepreneurs, researchers and even hobbyists can more easily explore what’s been built, what’s possible, and where opportunities lie. That means faster ideation, quicker competitive insight, and new dynamics in how IP is examined, challenged or built upon in the AI age.

AI Video Startup Synthesia Raises $200M at $4B Valuation

London-based startup Synthesia, which uses generative AI to turn text into professional videos with lifelike avatars, has reportedly secured a fresh $200 million funding round at a valuation of around $4 billion.

Key Points:

  1. $200M Growth Round at ~$4B Valuation - The new capital nearly doubles Synthesia’s valuation from around $2.1 billion earlier this year and marks a major milestone in its scaling phase.

  2. Enterprise Video-AI Platform - Synthesia’s product lets companies turn script or text input into multilingual videos featuring lifelike avatars, voice-overs and lip-sync - aimed at corporate training, communications, marketing, and sales enablement.

  3. Competitive and Strategic Momentum - The funding round reportedly involves major backers (including the venture arm of Alphabet Inc.) and comes amid talks of acquisitions and expanding global demand for AI-driven video content.

Conclusion

The ability to create branded, multilingual videos at scale via AI unlocks huge productivity gains, shifts production economics, and opens new application domains. It also raises important questions around authenticity (deepfakes risk), brand governance (who controls avatars/voices), and platform power (when the “video creation stack” consolidates around a few players).

Figma Acquires Weavy

Figma this week announced the acquisition of Weavy, a startup specialising in AI-driven image and video generation and editing workflows. The move positions Figma to enhance its creative platform with advanced generative-AI media tools, enabling users to design, refine and scale visual content with far deeper flexibility.

Key Points:

  1. Unified AI + Editing Canvas - Weavy’s core capability is combining leading generative-AI models with professional editing tools in a node-based, browser-native interface, now brought into Figma as “Figma Weave”.

  2. Multi-Media Support (Image, Video, Motion, VFX) - With the acquisition, Figma is explicitly extending beyond static design into video, animation, motion graphics and visual-effects workflows inside its ecosystem.

  3. Craft, Control & Creative “Beyond the Prompt” - Figma emphasises that AI outputs are starting points to be shaped, refined and iterated - the designer remains central. The node-based model allows branching, remixing and combining outputs from several models/tools in one flow.
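The branching and remixing described in point 3 is the general shape of a node graph: each node transforms the outputs of its upstream nodes, so one generated asset can feed several divergent edits that are later recombined. Here is a minimal, hypothetical sketch of that structure - not Figma's or Weavy's actual API.

```python
class Node:
    """One operation in a node graph; inputs are other nodes."""
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)

    def evaluate(self):
        # Recursively evaluate upstream nodes, then apply this node's op.
        return self.fn(*[n.evaluate() for n in self.inputs])

# Toy "image" is just a string; each op records how it was produced.
source  = Node("generate", lambda: "img")
upscale = Node("upscale", lambda x: f"upscale({x})", [source])
stylise = Node("stylise", lambda x: f"stylise({x})", [source])  # a branch
blend   = Node("blend", lambda a, b: f"blend({a},{b})", [upscale, stylise])

print(blend.evaluate())  # → blend(upscale(img),stylise(img))
```

Note how `source` feeds two branches that diverge and then recombine in `blend` - the same generated output is remixed along multiple paths, which is what distinguishes a graph canvas from a linear prompt-and-regenerate loop.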

Conclusion

The convergence of generative-AI media with full-fledged editing inside one unified platform means that the creative workflow is becoming more integrated, more iterative, and more AI-native. Designers will increasingly ask not “Which tool do I open next?” but “How do I orchestrate models, edits, iterations in one canvas?” That shift has implications for workflow architecture, team skills, and creativity itself: the value moves from the “raw generation” of content to the “crafting, branching, refining and layering” of AI outputs.

Thank you for reading.