
agent-tools

by inferen-sh

agent-tools exposes the inference.sh CLI inside your agent so you can run 150+ AI apps from one place: image generation, video creation, LLMs, search, 3D, and Twitter automation. Ideal when you need a unified workflow runner for FLUX, Veo, Gemini, Grok, Claude, Seedance, OmniHuman, Tavily, Exa, OpenRouter, and more without managing GPUs or complex integrations.

Added: Mar 27, 2026 · Category: Workflow Automation
Install Command
npx skills add https://github.com/inferen-sh/skills --skill agent-tools
Overview

What is agent-tools?

agent-tools is a workflow-automation skill that wires the inference.sh CLI (infsh) into your agent environment. Once installed, your agent can call more than 150 cloud-hosted AI apps from a single command-line interface, including:

  • Text and code LLMs
  • Image generation models
  • Video creation and editing models
  • 3D and creative tools
  • Search and research tools like Tavily and Exa
  • Twitter and other automation utilities

Because everything runs in the cloud via inference.sh, you do not need local GPUs or to maintain separate integrations for each model provider.

Key capabilities and supported models

With agent-tools configured, your agent can orchestrate many popular models and APIs via infsh, including (as listed in the skill definition):

  • Image and video: FLUX, Veo, OmniHuman, and other media models
  • LLMs and chat: Gemini, Grok, Claude, plus additional models available through OpenRouter
  • Search and research: Tavily, Exa
  • Automation: Twitter-related actions and other workflow tasks exposed as inference.sh apps

The skill's allowed-tools configuration is limited to the Bash(infsh *) pattern, meaning the agent may invoke infsh commands programmatically but cannot run arbitrary shell or add custom scripts of its own.
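Based on that description, the SKILL.md frontmatter likely resembles the sketch below. The field names and description text here are illustrative, not copied from the repository; consult the actual tools/agent-tools/SKILL.md for the authoritative definition.

```yaml
---
name: agent-tools
description: Run 150+ cloud-hosted AI apps through the inference.sh CLI (infsh)
# Restricts the agent to infsh invocations only -- no arbitrary shell.
allowed-tools: Bash(infsh *)
---
```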

Who is agent-tools for?

agent-tools is a strong fit if you:

  • Want a single CLI to drive diverse AI apps instead of wiring many APIs manually
  • Need to generate images, videos, or 3D content on demand from within an agent
  • Run LLM-based workflows that sometimes require external search or research calls
  • Automate Twitter or other operational tasks that inference.sh exposes as apps
  • Prefer serverless, cloud-hosted inference so you can avoid GPU and model hosting chores

It works particularly well for developers, power users, and ops teams who live in the terminal or already use agents to automate workflows.

When agent-tools is not a good fit

Consider a different skill or direct API integration if:

  • You require strict on-prem or air-gapped inference; inference.sh runs the models in the cloud.
  • You only need a single model with a dedicated SDK and do not want the abstraction of a general CLI.
  • Your agent platform cannot safely run shell commands or does not allow access to infsh.

If you are comfortable with a CLI-based workflow and want the broadest model coverage with minimum setup, agent-tools is designed for that scenario.

How to Use

1. Install the agent-tools skill

To add agent-tools from the inferen-sh/skills repository, use your skills manager. For platforms that support npx skills, you can run:

npx skills add https://github.com/inferen-sh/skills --skill agent-tools

This fetches the agent-tools skill definition (including SKILL.md) from the tools/agent-tools directory and registers it with your agent environment.

After installation, open the Files or repository view and locate:

  • tools/agent-tools/SKILL.md – main description of the skill and allowed tools

Use this file as the canonical reference for how the skill is meant to interact with the inference.sh CLI.

2. Install the inference.sh CLI (required)

agent-tools assumes the infsh CLI is available in the runtime environment. Follow the official instructions from the skill’s upstream document:

curl -fsSL https://cli.inference.sh | sh
infsh login

This script will:

  • Detect your OS and architecture
  • Download the correct binary from dist.inference.sh
  • Verify its SHA-256 checksum
  • Place the infsh binary in your PATH

No elevated permissions, background daemons, or telemetry are involved according to the upstream description.

Manual installation option

If you prefer not to pipe a script into sh, you can perform a manual install as described upstream. In short, this involves:

  • Downloading the CLI binary and the associated checksums.txt file from https://dist.inference.sh/cli
  • Verifying checksums locally
  • Moving the binary to a directory on your PATH

Refer to the latest manual instructions at https://cli.inference.sh to ensure you follow the current recommended process.
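The verify-then-install step can be illustrated with standard tools. The sketch below uses sha256sum in check mode; it verifies a locally created stand-in file so it runs anywhere, since the real binary and checksum file names on dist.inference.sh may differ from what is shown here.

```shell
# Demonstrate the verify-then-install flow on a stand-in file.
# In a real manual install, 'infsh' would be the binary downloaded
# from https://dist.inference.sh/cli along with its checksum file.
set -eu
workdir="$(mktemp -d)"
cd "$workdir"

printf 'fake binary contents\n' > infsh   # stand-in for the real download
sha256sum infsh > checksums.txt           # stand-in for the published checksum file

# Verify the binary against the checksum file before trusting it.
if sha256sum -c checksums.txt; then
  chmod +x infsh
  echo "verified: safe to move infsh onto your PATH"
else
  echo "checksum mismatch: do not install" >&2
  exit 1
fi
```

If the check fails, discard the download and fetch both files again rather than installing the binary.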

3. Log in and verify access

Once infsh is installed, authenticate:

infsh login

Then verify that your setup works:

infsh help

or run a simple test command from the inference.sh documentation. Successful execution confirms the agent will be able to call infsh through agent-tools.
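A small preflight check can be scripted before the agent relies on the CLI. The helper below is illustrative: it only confirms a command exists on PATH. It is demonstrated against sh so the example runs anywhere; substitute infsh in your environment.

```shell
# Preflight: confirm a required CLI is installed before the agent relies on it.
require_cli() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1 found at $(command -v "$1")"
  else
    echo "missing: $1 is not on PATH" >&2
    return 1
  fi
}

# In a real setup you would call: require_cli infsh
require_cli sh    # 'sh' used here so the example runs without infsh installed
```

Running this as a startup check lets your agent fail fast with a clear message instead of surfacing a confusing mid-workflow error.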

4. Connect agent-tools to your workflows

With the skill installed and infsh working, ensure your agent is allowed to execute Bash commands of the form:

infsh <app> [arguments]

The allowed-tools section in SKILL.md restricts usage to Bash(infsh *), so the agent may only run infsh commands, not arbitrary shell.

In practice, you will:

  • Configure prompts or rules telling the agent when to use infsh (e.g., for image or video generation).
  • Optionally define higher-level workflows or templates that chain multiple infsh calls for complex jobs.

5. Common usage patterns

Here are typical ways teams use agent-tools with inference.sh:

Orchestrate LLM and search workflows

Have the agent:

  1. Call an LLM via an inference.sh app (for reasoning or drafting content).
  2. Use a Tavily or Exa app via infsh to gather current information.
  3. Call another model to refine or structure the final output.

Because all steps are executed through infsh, the agent-tools skill gives your agent a single, consistent execution path.
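The three-step chain above can be sketched as a dry-run shell script. The app names below (llm-draft, tavily-search, llm-refine) are placeholders, not real inference.sh app identifiers, and the wrapper only echoes the commands it would run; remove the echo once you have confirmed the real app names via infsh help.

```shell
# Dry-run wrapper: prints each infsh command instead of executing it.
# Drop the 'echo' to run for real (requires infsh installed and logged in).
infsh_dry() {
  echo "would run: infsh $*"
}

topic="open-source video generation models"

# 1. Draft with an LLM app (placeholder app name).
infsh_dry llm-draft --prompt "Write a short briefing on $topic"

# 2. Gather current information with a search app (placeholder app name).
infsh_dry tavily-search --query "$topic latest news"

# 3. Refine the draft with the gathered context (placeholder app name).
infsh_dry llm-refine --prompt "Merge the draft with the search results"
```

Echo-based dry runs are also a useful guardrail during development: you can review exactly which infsh invocations the agent would issue before granting it live execution.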

Image and video generation pipelines

Use agent-tools whenever a workflow needs:

  • Prompted image generation with models like FLUX or others available via inference.sh.
  • Video generation or avatar/character creation through apps such as Veo or OmniHuman where available.

The agent can:

  • Accept natural language instructions
  • Translate those into infsh commands
  • Return generated media links or metadata as part of its response

Twitter and external automation

Where inference.sh exposes Twitter or similar automation apps, agent-tools lets the agent trigger them as part of a broader pipeline. For example:

  • Generate content with an LLM
  • Render an image for the post
  • Call a Twitter app via infsh to publish or schedule

This turns your agent into a generalized operations runner for AI-powered campaigns.

6. Operational tips and safety

  • Scope commands clearly: Because only infsh calls are allowed, keep your prompts explicit about when and how the agent should use them.
  • Monitor usage: inference.sh runs AI apps in the cloud; track your usage, quotas, and any associated billing in your inference.sh account.
  • Update regularly: Revisit https://cli.inference.sh periodically to check for CLI updates and new features that may expand what agent-tools can do.

FAQ

What does agent-tools actually add to my agent?

agent-tools gives your agent a safe, focused way to run infsh commands so it can access over 150 AI apps via inference.sh. Instead of writing separate integrations for each model or API, you use the inference.sh CLI as a single gateway, and the skill defines how the agent is allowed to call it.

Do I need a GPU or local model setup to use agent-tools?

No. According to the upstream documentation, inference.sh runs all supported apps in the cloud. You interact through the CLI, and the heavy computation happens on remote infrastructure. That is one of the main reasons to use agent-tools with inference.sh: you get powerful models without managing GPUs.

How do I install agent-tools?

Install the skill from the inferen-sh/skills repository, for example:

npx skills add https://github.com/inferen-sh/skills --skill agent-tools

Then install and configure the inference.sh CLI using:

curl -fsSL https://cli.inference.sh | sh
infsh login

Confirm infsh is on your PATH and working before relying on the skill in production.

What kinds of AI workflows can I automate with agent-tools?

You can orchestrate a wide range of workflows, such as:

  • Multi-step LLM pipelines with reasoning, drafting, and refinement
  • Image and video generation for content or creative pipelines
  • Research flows combining Tavily and Exa search with LLM summarization
  • Social and operational automations such as Twitter posting (where supported by inference.sh apps)

The exact possibilities depend on the set of apps currently available via inference.sh.

Is agent-tools limited to one particular model provider?

No. agent-tools is tied to the inference.sh ecosystem, not a single provider. Through infsh, you can access many models and APIs, including FLUX, Veo, Gemini, Grok, Claude, Seedance, OmniHuman, Tavily, Exa, and OpenRouter-backed models, among others listed in the skill’s description.

Can I use agent-tools without allowing general shell access?

Yes. The skill’s allowed-tools configuration restricts usage to Bash(infsh *), meaning the agent is only permitted to execute infsh commands, not arbitrary shell. This allows you to benefit from the CLI while keeping the execution scope narrow and auditable.

How do I keep inference.sh CLI up to date?

Re-run the installation instructions from https://cli.inference.sh or use any documented update mechanism provided there. Because agent-tools just calls infsh, keeping the CLI current ensures you have the latest features, apps, and security fixes.

Where can I inspect the skill definition?

In the inferen-sh/skills repository, navigate to:

  • tools/agent-tools/SKILL.md

This file describes the skill, allowed tools, and links to the inference.sh CLI documentation. Use your platform’s Files tab or GitHub to review it in full before deploying agent-tools in sensitive or high-volume environments.
