Google Releases Gemini 3.1 Pro

PLUS: Sony Moves to “Meter” AI Music

Google Releases Gemini 3.1 Pro

Google has officially launched Gemini 3.1 Pro, the newest and most capable iteration in its Gemini 3 AI model series. This version is designed for complex reasoning and multi-step tasks, with significant improvements over earlier Gemini models.

Key Points:

  1. Benchmark performance: Gemini 3.1 Pro scored 77.1% on the ARC-AGI-2 abstract reasoning benchmark, more than doubling the score of earlier models and placing it ahead of many competitive models on this evaluation.

  2. Wide availability: The model is rolling out in preview across Google’s ecosystem:

    • Developers: via the Gemini API, Google AI Studio, Android Studio, and the Antigravity agent-based IDE.

    • Enterprise: through Vertex AI and Gemini Enterprise tools.

    • Consumers: via the Gemini app and NotebookLM.

  3. Flexible reasoning: Google says developers can adjust how much the model “thinks,” trading off faster responses for deeper analysis depending on the task.
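As a rough sketch of what that knob could look like in practice: earlier Gemini models expose a documented `thinkingBudget` field under `generationConfig.thinkingConfig` in the API; whether Gemini 3.1 Pro uses the same field is an assumption, not something confirmed in this article.

```python
# Sketch of a Gemini API generateContent payload with an adjustable
# "thinking" budget. The field names (generationConfig.thinkingConfig.
# thinkingBudget) follow the pattern documented for earlier Gemini
# models; their exact availability for Gemini 3.1 Pro is an assumption.

def build_request(prompt: str, thinking_budget: int) -> dict:
    """Build a generateContent request body.

    A low thinking_budget favors fast responses; a high budget lets the
    model spend more internal reasoning tokens on the task.
    """
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {
            "thinkingConfig": {"thinkingBudget": thinking_budget},
        },
    }

# Quick answer vs. deeper multi-step analysis:
fast = build_request("Summarize this memo in one line.", thinking_budget=0)
deep = build_request("Plan a three-phase data migration.", thinking_budget=8192)
```

The payload itself is all that changes between the two modes, which is why Google can frame this as a per-request trade-off rather than a separate model.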

This release underscores Google’s push to make AI models that can handle not just simple queries but structured, multi-step reasoning - useful for synthesis, planning, and complex problem-solving.

Apple Is Centering AI on the Camera

Apple appears to be pushing “Visual Intelligence” - AI that sees the world through cameras - to the forefront of its next major product wave.

According to Bloomberg’s Mark Gurman, Apple CEO Tim Cook has signaled that Visual Intelligence will be a defining feature of the company’s upcoming wearable-AI devices. This includes AI-powered wearables expected around a product launch window of March 2–4, 2026, along with updates tied to iOS 26.4 and beyond.

Key Points:

  1. Visual Intelligence first appeared on the iPhone 16 Pro, letting users analyze photos and objects with AI help. Gurman says Cook has repeatedly highlighted this feature internally, hinting it will expand into new hardware.

  2. Rumored devices include smart glasses, an AI pendant, and AirPods with built-in cameras, all designed to feed camera data into AI features.

  3. These cameras aren’t just for photos - the intent is for the AI to understand your surroundings, contextualize experiences, and offer insights without a tap.

This direction is similar to what Meta (with its Ray-Ban AI glasses) and others have explored, but Apple’s strategy leans into where users already raise their phones — to point, scan, and discover.

Key question: will people welcome a future where their devices “see first and talk second,” especially in wearables they carry or wear all day?

Sony Moves to “Meter” AI Music

Sony Group has developed AI technology aimed at identifying and quantifying how much original music contributes to AI-generated songs.

Recent reports say this system - developed by Sony’s AI division and highlighted by multiple outlets - can:

  • Detect original copyrighted recordings embedded in an AI-generated track.

  • Estimate what percentage of the AI output is influenced by those works, rather than just flagging direct matches.
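Sony has not disclosed how its system works, so the following is purely a toy illustration of the general idea, not Sony's method: score how strongly a generated track resembles each original work (here via cosine similarity over hypothetical audio embeddings) and normalize those scores into rough percentage shares.

```python
import math

# Toy illustration only. Sony's actual technique is undisclosed; this
# sketches the generic concept of turning per-original similarity scores
# into percentage "influence" shares over an AI-generated track.

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two (hypothetical) audio embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def influence_shares(
    generated: list[float], originals: dict[str, list[float]]
) -> dict[str, float]:
    """Normalize per-original similarities into fractional shares summing to 1."""
    sims = {name: max(cosine(generated, emb), 0.0) for name, emb in originals.items()}
    total = sum(sims.values())
    return {name: (s / total if total else 0.0) for name, s in sims.items()}

shares = influence_shares(
    generated=[0.9, 0.1, 0.4],
    originals={"song_a": [1.0, 0.0, 0.5], "song_b": [0.0, 1.0, 0.0]},
)
```

Real systems would need far more than embedding similarity (stems, melody, lyrics, training-data attribution), but the output shape, a per-work percentage, is what would feed a royalty or licensing framework.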

The tool is pitched as a way to give music creators and publishers evidence of how their work was used, potentially enabling more precise royalty or licensing frameworks instead of broad legal disputes. Sony and other labels have been vocal about AI’s use of copyrighted material without permission - including lawsuits and industry pushback across the U.S. and Europe.

This move is important because it shifts the conversation from blunt copyright takedown tools to measuring contribution and provenance, giving rights holders a technical foundation for negotiations or legal claims.

Big question: will AI developers adopt this “math” and transparency - or will the debate end up in court for years?

Google Labs Launches Product Image Generator & Advanced Coding Model Enhancements

At the same time, Google Labs has introduced two notable tools aimed at businesses and developers:

Photoshoot in Pomelli

  • Google’s experimental Pomelli tool now includes a Photoshoot feature that helps small businesses generate professional-looking product images from simple photos.

  • Using internal image models like Nano Banana Pro (Gemini 3 Pro Image), Photoshoot can create multiple angles and branded visuals without a physical studio, offering background control and campaign-ready outputs.

Upgrades Toward Full App Development

  • Google is also evolving its AI development environment — particularly AI Studio’s Build feature — to support real user sign-in, Firebase authentication, third-party APIs, and better framework support (e.g., Next.js), bringing AI closer to real full-stack app creation.

These moves signal Google’s intent to broaden AI beyond chat and text: from image generation and marketing tools to code-assisting environments where AI can help build, test, and launch applications.

Thank you for reading.