HACKER Q&A
📣 dmpyatyi

How do you know if AI agents will choose your tool?


YC recently put out a video about the agent economy - the idea that agents are becoming autonomous economic actors, choosing tools and services without human input.

It got me thinking: how do you actually optimize for agent discovery? With humans you can do SEO, copywriting, word of mouth. But an agent just looks at available tools in context and picks one based on the description, schema, examples.

Has anyone experimented with this? Does better documentation measurably increase how often agents call your tool? Does the wording of your tool description matter across different models (ZLM vs Claude vs Gemini)?


  👤 jackfranklyn Accepted Answer ✓
We've been exposing tools via MCP and the biggest lesson so far: the tool description is basically a meta tag. It's the only thing the model reads before deciding whether to call your tool.

Two things that surprised us: (1) being explicit about what the tool doesn't do matters as much as what it does - vague descriptions get hallucinated calls constantly, and (2) inline examples in the description beat external documentation every time. The agent won't browse to your docs page.

The schema side matters too - clean parameter names, sensible defaults, clear required vs optional. It's basically UX design for machines rather than humans. Different models do have different calling patterns (Claude is more conservative, will ask before guessing; others just fire and hope) so your descriptions need to work for both styles.
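To make that concrete, here is a sketch of a tool definition along those lines: a hypothetical `search_invoices` tool (the name, fields, and wording are all invented for illustration), written as a plain Python dict in the JSON-Schema shape MCP tool definitions generally take.

```python
# Hypothetical MCP-style tool definition. The description states what the
# tool does, what it does NOT do, and carries an inline example call --
# the things the model actually reads before deciding to call it.
search_invoices = {
    "name": "search_invoices",
    "description": (
        "Search paid and unpaid invoices by customer name or invoice number. "
        "Does NOT create, edit, or delete invoices, and does not cover "
        "quotes or purchase orders. "
        'Example: {"query": "ACME Corp", "status": "unpaid", "limit": 10}'
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            # Clean parameter names, sensible defaults, clear constraints.
            "query": {
                "type": "string",
                "description": "Customer name or invoice number",
            },
            "status": {
                "type": "string",
                "enum": ["paid", "unpaid", "any"],
                "default": "any",
            },
            "limit": {
                "type": "integer",
                "default": 10,
                "minimum": 1,
                "maximum": 100,
            },
        },
        # Only one field is required; everything else has a default,
        # so a conservative model never has to guess a value.
        "required": ["query"],
    },
}
```

The "Does NOT" sentence and the example call live in the description itself, since the agent won't follow a link out to external docs.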


👤 JacobArthurs
Tool description quality matters way more than people expect. In my experience with MCP servers, the biggest win is specificity about when not to use the tool. Agents pick confidently when there's a clear boundary, not a vague capability statement.

👤 LetsAutomate
The AI agent chooses your tool based on how well your tool’s description matches the user’s intent — clear, specific descriptions win.

👤 snowhale
tool description wording does matter, at least in my testing. models seem to use the description to reason about whether a tool "should" apply, not just whether it can. two things that helped: (1) explicit input format with an example, (2) a one-sentence note about what the tool does NOT handle. the negative case helps models avoid calling it on edge cases and then failing, which trains them (in context) to prefer it when it's actually the right fit.
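A minimal before/after of that kind of wording change (both descriptions are invented for illustration):

```python
# Vague capability statement: the model can't tell when the tool applies,
# so it either over-calls it or skips it entirely.
vague = "Parses dates."

# Specific: explicit input format with an example, plus one sentence on
# what the tool does NOT handle, so the model avoids the edge cases it
# would fail on.
specific = (
    "Parse an ISO-8601 date string (e.g. '2024-03-15' or "
    "'2024-03-15T09:30:00Z') into year/month/day components. "
    "Does NOT handle natural-language dates like 'next Tuesday' "
    "or locale formats like '15/03/2024'."
)
```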

👤 MidasTools
From building in this space: agents choose tools based on how well they're described in context, not on brand recognition or marketing.

Practically: the agent reads your docs, README, or API description and decides if it can use your tool to solve the current problem. So the question is really "will an AI understand my tool well enough to use it correctly?"

What helps:

- Clear, literal API documentation (not marketing copy)
- Explicit input/output examples with edge cases
- A `capabilities.md` or similar that describes what the tool does and doesn't do
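As a sketch, a `capabilities.md` along those lines might look like this (the tool and its contents are invented for illustration):

```markdown
# capabilities

## Does
- Search paid/unpaid invoices by customer name or invoice number
- Return results as JSON: `{"id": ..., "customer": ..., "status": ...}`

## Does not
- Create, edit, or delete invoices
- Cover quotes or purchase orders

## Edge cases
- An empty query returns an error, not all invoices
- Matching is case-insensitive; partial names are allowed
```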

The irony: the skills that make tools understandable to AI (precision, literalness, examples) are the opposite of what makes them legible to humans (narrative, benefits, stories).


👤 alexandroskyr
Curious if anyone has seen differences in how models handle conflicting tool descriptions — e.g., two tools with overlapping capabilities where the boundary isn't clear. In my experience that's where most bad tool calls come from, not from missing descriptions but from ambiguous overlap between tools.

👤 DANmode
The marketing industry is currently calling SEO for chatbots “GEO”.

I hope it doesn’t stick.


👤 yodsanklai
Not an expert, but I think they will primarily use the tools that appear in their training data, so it can be difficult to get them to use your shiny new tool. Also, good luck trying to get them to use your own version of a standard unix tool with different conventions.

👤 al_borland
From the agent’s point of view, this sounds like a terrible idea. I look forward to reading about the unintended consequences.