Originally published on the GRID blog.
They’re inevitable
Imagine an app store for your conversational AI assistant. A place where you could browse, install, and purchase capabilities to enhance what ChatGPT or Claude can do.
- Want market data? Add “The Markets” capability and ask: “How’s AAPL doing today?”
- The ability to easily generate memes? Well, you can already try that one.
- Managing a task? Add the capability for your project management system and say: “@clickup add a ticket about using the new graphics on the front page and assign it to marketing”
- Or how about: “@imdb suggest a heist movie with an average rating of 8 or more that’s streaming on my services.”
This isn’t just a vision — it’s the natural trajectory for LLM environments as they mature as ecosystems.

The state of function calling
Today, function calling — the technology that could power this future — feels clunky and inaccessible in consumer-facing LLM environments like ChatGPT and Claude.
It’s an exciting, but underdeveloped, feature.
In ChatGPT, function calling is limited to custom GPTs, which already seem like a neglected feature compared to the shiny “Projects” launched in December. Projects’ functionality overlaps heavily with that of custom GPTs, but, notably, Projects don’t support function calls. The way to use function calling in a custom GPT is by creating so-called Actions. This requires a certain level of tech-savviness, and the APIs you use must meet a somewhat strict set of requirements: an OpenAPI spec, no optional parameters, specific authentication methods, and more. In short, the complexity makes the functionality rather inaccessible to most business or consumer users, unless they’re provided with very specific instructions for the exact functionality in question (something we’re gradually mastering in our spreadsheets for ChatGPT solution).
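To see the gap, compare this with what developers already do on the API side. A sketch of the tool definition a developer passes to OpenAI’s chat completions endpoint looks like the dictionary below; a custom GPT Action needs the equivalent information expressed as a full OpenAPI spec instead. The function name and parameter here are illustrative, not a real API.

```python
# Sketch of a developer-side tool definition (OpenAI "tools" format).
# "get_stock_quote" and its parameter are hypothetical examples.
get_quote_tool = {
    "type": "function",
    "function": {
        "name": "get_stock_quote",
        "description": "Fetch the latest quote for a ticker symbol.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {"type": "string", "description": "e.g. AAPL"},
            },
            # Actions disallow optional parameters, so every property
            # a GPT Action exposes must effectively be required.
            "required": ["ticker"],
        },
    },
}
```

Writing a schema like this is routine for a developer and a non-starter for most business users, which is the accessibility gap in a nutshell.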
Claude, on the other hand, recently introduced function calling through MCP (Model Context Protocol). But here, too, the barriers are high. In order to call a web service using MCP, you need to run a local server to proxy calls to the web service. And this doesn’t work on Claude.ai, only in Claude Desktop. Essentially, unless you’re both a developer and the creator of the web service you’re integrating, you’re not going anywhere.
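The proxy pattern behind this can be sketched in a few lines. This is a stdlib-only stand-in, not the actual MCP SDK: a process on your own machine receives the assistant’s tool call and forwards it to the remote web service. The tool name, the upstream URL, and both functions are illustrative.

```python
import json
import urllib.parse
import urllib.request

# The remote web service being wrapped (illustrative URL).
UPSTREAM = "https://api.example.com"

def build_upstream_url(tool: str, arguments: dict) -> str:
    """Translate a tool call into a request URL against the remote API."""
    if tool == "get_movie":  # hypothetical tool name
        query = urllib.parse.urlencode(arguments)
        return f"{UPSTREAM}/movies?{query}"
    raise ValueError(f"unknown tool: {tool}")

def handle_tool_call(tool: str, arguments: dict) -> dict:
    """The part that must run locally: receive the assistant's tool call
    and proxy it to the web service over HTTP."""
    with urllib.request.urlopen(build_upstream_url(tool, arguments)) as resp:
        return json.load(resp)
```

The point is not that this code is hard, but that someone has to install and run it on every user’s machine, which rules out casual users on Claude.ai entirely.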
LLMs can’t do it — alone
What’s interesting is that function calling is already a crucial part of any enterprise LLM integration project as a way to interact with existing systems and data, and as such it’s quite mature in both OpenAI’s and Anthropic’s developer platforms.
I’m convinced that function calls are the inevitable future of cloud-based AI assistants, too, as they represent the key to unlocking capabilities that LLMs can’t perform unassisted, such as:
- Reliable and verifiable business calculations
- Real-time or recent data retrieval
- Custom document generation and templating
- Interfacing seamlessly with countless SaaS and other software systems on users’ behalf
In other words, we’ll let LLMs handle the language, but leave specialized capabilities to traditional software, integrated via function calls.
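This division of labor can be sketched as a minimal dispatch loop: the model emits a function call as structured JSON, and ordinary code executes it and hands the result back. The registry and tools below are hypothetical stand-ins, but the shape is how function-calling hosts generally work.

```python
import json

# Hypothetical registry of "traditional software" capabilities the
# assistant can delegate to via function calls.
TOOLS = {
    # Reliable, verifiable calculation handled by ordinary code.
    "add": lambda a, b: a + b,
    # Stand-in for a real-time market-data lookup (hardcoded here).
    "quote": lambda ticker: {"ticker": ticker, "price": 123.45},
}

def dispatch(tool_call: str) -> str:
    """Execute a model-emitted tool call of the form
    {"name": ..., "arguments": {...}} and return a JSON result
    to be fed back into the conversation."""
    call = json.loads(tool_call)
    result = TOOLS[call["name"]](**call["arguments"])
    return json.dumps({"name": call["name"], "result": result})
```

The model never performs the arithmetic or fetches the quote itself; it only decides which function to call and with what arguments, which is exactly the split described above.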
The real potential lies in democratizing these capabilities. Taking away the current technical barriers and creating an app store-like marketplace, where users can extend their AI assistants easily and intuitively — without writing code. Developers and companies can monetize new capabilities or enhance existing products, just as they do with plugins, extensions, or mobile apps in other ecosystems today. The ecosystem would thrive on innovation, giving users the power to shape the experiences they have with their assistants into exactly what they need.
The future is headless
The idea of a conversational, headless software ecosystem is compelling. Instead of navigating dashboards and clicking buttons, the AI assistant becomes the primary, or even the only, way we interact with software, and the AI, enhanced with tailored capabilities, would deliver (see LLMs as the Interface to Everything).
ChatGPT and Claude may feel far from this future now, but the groundwork has already been laid. When these ecosystems mature, app stores for AI assistants won’t just be a possibility — they’ll be an inevitability.
—
The author is busy bringing spreadsheet data and calculations to AI at GRID and has done a lot of function calling lately!