Anthropic Launches ‘Skills’

PLUS: Cognition AI Introduces SWE-grep & SWE-grep-mini

Claude Gains Folder-Based Expert Modules for Custom Workflows

Anthropic has introduced a new capability called Skills, a system that allows organizations to bundle procedures, reference materials, scripts, and instructions into folders that Claude can dynamically load when handling specific tasks. The idea is to let your AI assistant understand how you work, not just what you say.

Key Points:

  1. Folder-based modules - Skills are structured directories containing a SKILL.md file (with metadata: name, description) plus optional linked files (scripts, resources) that Claude can load when needed.

  2. Progressive disclosure & dynamic loading - Claude initially sees only the metadata (name + description) of available Skills; when it determines relevance, it loads additional files from the Skill directory. This helps manage context window constraints and keeps tasks efficient.

  3. No-code creation & composability - Users can build custom Skills via an interactive “skill creator” assistant, without heavy coding. Multiple Skills can be composed for complex workflows (e.g., brand-guidelines plus finance-reporting plus design).
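Anthropic's announcement specifies a SKILL.md file with name and description metadata at the top of each Skill folder; the sketch below is illustrative only, and everything beyond those two fields (the body text, the resources/ and scripts/ paths) is an assumed example, not the official schema.

```markdown
---
name: brand-guidelines
description: Apply company brand rules when producing customer-facing documents or designs.
---

# Brand Guidelines

When generating customer-facing material:

1. Use only the colors listed in resources/palette.json.
2. Validate generated assets with scripts/check_colors.py before finishing.
```

Up front, Claude would see only the name and description; the body and any linked files in the folder are loaded once the task matches, which is the progressive disclosure described in the key points.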

Conclusion

With Skills, Anthropic is moving Claude from a general-purpose assistant toward a context-aware, workflow-driven agent. By packaging domain expertise in reusable modules rather than relying solely on prompts or ad-hoc engineering, companies can embed their proprietary workflows into the AI. Success will depend on how well organizations design, maintain, and govern these Skills, but the approach could significantly raise the bar for enterprise-grade AI assistants.

New Subagent Family Speeds Codebase Context Fetching by 10× in Windsurf

Cognition AI has released two new models, SWE-grep and its lighter counterpart SWE-grep-mini, designed specifically for high-speed, multi-turn context retrieval in large codebases. These subagents use reinforcement learning and parallel tool calls to locate relevant files and line ranges far more quickly than traditional embedding-based or agentic search methods - addressing the bottleneck many coding agents face when fetching context.

Key Points:

  1. Parallel tool-call architecture - Instead of sequential “search → read → search” loops, SWE-grep issues up to 8 parallel tool calls per turn, over just 4 turns, significantly reducing latency.

  2. Reinforcement learning + objective metrics - SWE-grep is trained via multi-turn RL on retrieval tasks with a reward based on weighted F1 for file + line retrieval, enabling the model to optimize retrieval behavior directly.

  3. Order-of-magnitude speed improvement - According to Cognition’s benchmarks, SWE-grep and the mini version match the retrieval capability of frontier models while executing much faster (e.g., SWE-grep-mini serves at >2,800 tokens/second).
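Cognition has not published SWE-grep's internals, but the two mechanisms above can be sketched in a few lines of Python: fan out a batch of search queries concurrently within a single turn, then score the merged (file, line) hits with a weighted F1 reward. The corpus, function names, and 50/50 file-vs-line weighting below are illustrative assumptions, not Cognition's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy corpus standing in for a codebase: path -> list of source lines.
# (Purely illustrative data.)
CORPUS = {
    "auth/login.py": ["def login(user):", "    token = issue_token(user)", "    return token"],
    "auth/token.py": ["def issue_token(user):", "    return sign(user)"],
    "ui/button.py": ["def render():", "    return '<button>'"],
}

def grep(query):
    """One search 'tool call': return (path, line_no) pairs whose line contains query."""
    hits = set()
    for path, lines in CORPUS.items():
        for no, line in enumerate(lines, start=1):
            if query in line:
                hits.add((path, no))
    return hits

def parallel_turn(queries, max_calls=8):
    """Issue up to max_calls searches concurrently in one turn and merge the hits."""
    with ThreadPoolExecutor(max_workers=max_calls) as pool:
        results = pool.map(grep, queries[:max_calls])
    merged = set()
    for hits in results:
        merged |= hits
    return merged

def f1(pred, gold):
    """Plain F1 between a predicted set and a gold set."""
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def retrieval_reward(pred, gold, file_weight=0.5):
    """Weighted F1 over files and exact (file, line) pairs; the weighting is assumed."""
    pred_files = {path for path, _ in pred}
    gold_files = {path for path, _ in gold}
    return file_weight * f1(pred_files, gold_files) + (1 - file_weight) * f1(pred, gold)
```

For example, `parallel_turn(["issue_token", "login"])` runs both searches concurrently and returns hits in the two `auth/` files; scoring those hits against a gold set with `retrieval_reward` yields the scalar an RL trainer could optimize. The real models reportedly repeat this over just 4 turns of up to 8 calls each, which is where the latency win comes from.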

Conclusion

By focusing on the “reading” stage of coding agents - i.e., retrieving the right code context - Cognition AI addresses a major productivity drag in agent workflows. With SWE-grep and SWE-grep-mini, the company is enabling engineering tools to stay in “flow” rather than stall while hunting through code. While adoption and integration will be key, this work highlights how agent architectures are shifting from “big model + embeddings” to more specialized subagents optimized for speed and precision.

OpenAI Recruits Black-Hole Physicist to Lead Its ‘AI for Science’ Drive

OpenAI has taken a bold step in its research strategy by hiring Alex Lupsasca, a prize-winning theoretical physicist known for his work on black-hole symmetries, as the inaugural academic researcher for its newly launched "OpenAI for Science" initiative. Led by VP Kevin Weil, the team will focus on using AI to tackle frontier problems in physics, mathematics, and scientific reasoning - marking a shift in OpenAI's focus from consumer apps toward deep science.

Key Points:

  1. First hire for the science arm - Lupsasca will remain affiliated with his academic post (at Vanderbilt University) while joining OpenAI’s science team, signalling the initiative is research-oriented rather than purely product-driven.

  2. AI vs research pace - According to reporting, Lupsasca was deeply impressed when GPT-5 Pro "rediscovered" a complex physics symmetry in under 30 minutes, a task that would otherwise have taken days. The anecdote illustrates OpenAI's belief that AI is becoming a scientific instrument.

  3. Broader strategic context - The move comes as OpenAI faces criticism for product-centric focus (e.g., social video tools, consumer apps). By building a science team, OpenAI is signalling it wants to compete with labs like DeepMind at the frontier of scientific discovery.

Conclusion

This hire is a strong indicator that OpenAI intends to broaden its mission beyond chatbots and image generation: it wants to become a tool for scientific breakthroughs. If successful, the initiative could reshape how complex scientific problems are tackled - but the real test will be whether the research yields novel discoveries, not just faster automation of existing work. For now, the hire sends a clear message: OpenAI is aiming for the big leagues of science.

Thank you for reading.