OpenAI Launches ChatGPT Pulse

PLUS: Perplexity Launches Search API Giving Developers Direct Access to Its Web Index

From Reactive to Proactive — ChatGPT Pulse Lets the AI Kickstart Your Day

OpenAI has introduced ChatGPT Pulse, a new preview feature for Pro users on mobile that transforms ChatGPT from a passive responder into a proactive assistant. Pulse works overnight to synthesize information from your chats, memory, and optionally connected apps like calendar or email, then delivers a curated set of visual “cards” each morning—so you wake up with timely insights, reminders, and suggestions tailored to your interests.

Key Points:

  1. Asynchronous daily research & morning briefings - Pulse conducts background research overnight, drawing on your chat history, feedback, and memory (if enabled), then presents 5–10 visual summaries the next day.

  2. Opt-in connectors & user control - To enhance personalization, you may link optional apps such as Gmail or Calendar (these are off by default). You can also give cards a thumbs-up or thumbs-down and curate what Pulse tracks.

  3. Availability & constraints - Pulse is currently in preview for Pro users on mobile (iOS and Android) only; it is not yet available on the web or to lower tiers. Memory must be enabled to use Pulse, and each Pulse card expires after 24 hours unless you “save” it into a chat.

Conclusion

ChatGPT Pulse is a bold step toward an AI that anticipates your needs rather than just reacting to your prompts. While it’s still early, the feature hints at a future where your AI assistant helps you start the day with contextually relevant insights and nudges. For now, it’s limited to Pro mobile users, but OpenAI plans to expand availability after refining the experience.

Perplexity Rolls Out Structured, Agent-Friendly Search API

Perplexity has officially unveiled its Search API, opening up to developers the same global-scale search infrastructure that powers its public answer engine. Rather than exposing just links or raw documents, the API is built with AI use cases in mind: it delivers structured, fine-grained snippets, ranked context, and developer tooling (an SDK and an evaluation framework) so LLMs and AI agents can more seamlessly access real-time web knowledge.

Key Points:

  1. Fine-grained, sub-document retrieval - The Search API splits documents into smaller units (e.g. paragraphs or snippets) and scores them against the query. This lets downstream agents pull precise, relevant context without having to preprocess large documents or re-chunk them on their own (see the request sketch after this list).

  2. Same infrastructure as Perplexity’s public engine - The API taps into the same index of “hundreds of billions” of webpages that the main Perplexity answer engine uses, ensuring parity in freshness and coverage.

  3. Developer tools & openness - To support adoption, Perplexity is releasing an SDK, an open-source evaluation framework (search_evals), and in-depth design documentation explaining ranking, indexing, and optimization choices.
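
For a concrete sense of how an agent might consume such an API, here is a minimal Python sketch. The endpoint path, request parameters, and response fields below are assumptions made for illustration, not Perplexity’s documented schema; the official SDK and documentation define the real interface.

```python
import os
import requests

# Hypothetical sketch: the endpoint path, parameter names, and response fields
# are assumptions for illustration, not Perplexity's documented schema.
API_URL = "https://api.perplexity.ai/search"   # assumed endpoint
API_KEY = os.environ["PERPLEXITY_API_KEY"]

def search_web(query: str, max_results: int = 5) -> list[dict]:
    """Send a query and return ranked, snippet-level results (assumed shape)."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"query": query, "max_results": max_results},  # assumed parameters
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape: a list of scored snippets with source URLs.
    return response.json().get("results", [])

if __name__ == "__main__":
    for hit in search_web("latest LLM agent frameworks"):
        # Each hit is assumed to carry a relevance score, a source URL, and a snippet.
        print(hit.get("score"), hit.get("url"))
        print(hit.get("snippet"), "\n")
```

In practice, an agent would feed the top-scoring snippets into its prompt as grounding context rather than ingesting whole pages, which is the point of the snippet-level ranking described above.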

Conclusion

By launching a Search API built for AI (rather than just web search), Perplexity is positioning itself as infrastructure for the next generation of intelligent agents—giving them direct, structured access to the live web. It’s a bold move to shift from being a front-end “answer engine” toward becoming a backbone of AI systems. That said, competition with giants like Google and Microsoft, along with questions about quality, latency, and compliance, will be pressure points to watch as this unfolds.

Moonshot AI Unveils “OK Computer” Mode

Moonshot AI has launched OK Computer, a new agent mode for its Kimi K2 model that lets it act autonomously, more like a “virtual computer” than a passive language model. With OK Computer, Kimi can interpret simple human instructions and carry out multi-step tasks across domains like web development, data analysis, media generation, and more — all while managing its own tools, execution, and planning logic.

Key Points:

  1. From “model as agent” to “agent with environment” - OK Computer embodies the idea that Kimi K2 isn’t just a model to be queried; it becomes a system with a virtual environment, decision loops, and tool invocation capabilities.

  2. Simplified instructions, complex execution - Users can issue high-level prompts (e.g. “build a website,” “analyze this dataset,” “generate a slide deck”), and under OK Computer, Kimi orchestrates subtasks, invokes tools, and tracks intermediate states to fulfill the goal end to end (a minimal agent-loop sketch follows this list).

  3. Built on K2’s agentic foundations & MoE architecture - Kimi K2 is a Mixture-of-Experts model with 1 trillion total parameters (32B active) and has been designed with tool calling and autonomous behavior in mind. OK Computer is a natural extension of those capabilities.
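
To make the “agent with environment” pattern above concrete, here is a minimal, generic agent-loop sketch. It does not use Moonshot’s actual API: the call_model stub, the tool names, and the message format are hypothetical stand-ins, shown only to illustrate how a high-level instruction becomes a sequence of tool calls with tracked intermediate state.

```python
import json

# Hypothetical tool registry; tool names and behavior are illustrative
# stand-ins, not Moonshot's actual OK Computer toolset.
TOOLS = {
    "write_file": lambda path, content: f"wrote {len(content)} bytes to {path}",
    "run_code":   lambda source: "exit code 0",
}

def call_model(messages: list[dict]) -> dict:
    """Toy stand-in for the model: scripts two tool calls, then declares success.
    A real agent would call an LLM here and parse its chosen action."""
    completed = sum(1 for m in messages if m["role"] == "tool")
    if completed == 0:
        return {"tool": "write_file",
                "arguments": {"path": "index.html", "content": "<h1>Hello</h1>"}}
    if completed == 1:
        return {"tool": "run_code", "arguments": {"source": "print('build ok')"}}
    return {"final": True, "content": "website scaffold created"}

def run_agent(goal: str, max_steps: int = 10) -> str:
    # The message list is the agent's tracked intermediate state.
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        decision = call_model(messages)
        if decision.get("final"):                    # the model judges the goal met
            return decision["content"]
        result = TOOLS[decision["tool"]](**decision["arguments"])  # act on the environment
        messages.append({"role": "tool",
                         "content": json.dumps({"tool": decision["tool"],
                                                "result": result})})
    return "stopped: step budget exhausted"

if __name__ == "__main__":
    print(run_agent("build a simple website"))
```

The design point is that the surrounding loop, not the model alone, owns execution: the model proposes the next action, the environment runs it, and the result is appended to the state the model sees on the next step.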

Conclusion

OK Computer marks an important leap in AI agent design: instead of treating LLMs as smart oracles, Moonshot is turning Kimi K2 into an agent that lives in a virtual computational environment, capable of planning, executing, and overseeing its own pipeline of actions. As with all early agentic systems, execution reliability, safety, and controllability will be key challenges—but OK Computer demonstrates that Moonshot is pushing hard into the frontier of autonomous AI.

Thank you for reading.