
Agents

The AI agent framework that powers intelligent, autonomous assistants on your starship

What Are Agents?

Agents are AI assistants that combine language understanding, tool usage, and memory. Unlike simple chatbots, agents can:

  • Take Actions: Use tools to interact with systems, process data, and integrate with external platforms
  • Remember Context: Maintain conversation history and adapt to user preferences
  • Execute Workflows: Follow multi-step processes with decision making and collaboration
  • Adapt & Evolve: Discover new capabilities and work with humans and other agents

Think of agents as AI teammates that work independently within defined boundaries, capable of collaborating on complex tasks through handoffs, parallel processing, and shared context.


Universal Agent Architecture

Every agent in the Starships.ai framework shares the same core architecture:

Core Components

  • Personality: System prompt defining expertise, communication style, and behavioral guidelines
  • Tools: Set of capabilities like API calls, data processing, file operations, integrations
  • Memory: Persistent context including conversation history, user preferences, and learned patterns
  • Blueprints: Workflow definitions that specify how the agent executes multi-step tasks
  • Configuration: Model selection, permissions, timeout settings, and operational parameters
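
As a rough illustration, these components could come together in an agent definition like the sketch below. The AgentConfig fields and names are hypothetical, shown only to make the pieces concrete; they are not the platform's actual configuration schema.

# Hypothetical sketch of an agent definition; field names are illustrative,
# not the actual Starships.ai configuration schema.
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    personality: str                                    # system prompt: expertise, style, guidelines
    tools: list[str] = field(default_factory=list)      # capability identifiers
    memory: dict = field(default_factory=dict)          # persistent context
    blueprint: str = "tool-calling-loop"                # workflow definition to execute
    model: str = "your-model-of-choice"                 # model selection
    timeout_seconds: int = 60                           # operational parameters

project_assistant = AgentConfig(
    personality="You are a concise project-status assistant. Cite the data you retrieve.",
    tools=["project-status", "team-activity"],
    memory={"user_preferences": {}, "conversation_history": []},
)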

The Universal Agent Blueprint

Every agent follows the same core execution pattern.

There are two common execution modes that share the same loop (LLM → decide → execute → feed results back) but differ in where actions run:

  • Local execution (tool node): the tool node executes a function in your runtime (your machine/infra).
  • Hosted execution (connector_action node): the connector_action node executes in the Starships-hosted runtime, where auth can be injected and policies can be enforced.

The diagrams below show both modes explicitly.

Local execution mode (tool runs locally)

Hosted execution mode (connector_action runs in Starships-hosted runtime)
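
In code terms, the difference between the two modes is roughly where the action body lives. The sketch below is purely illustrative (the function and dictionary names are hypothetical, not the platform's API): a local tool is a function you define and run in your own runtime, while a hosted connector action is only referenced locally and executed in the Starships-hosted runtime.

# Illustrative only; these names are hypothetical, not the actual SDK.

def project_status(team_id: str) -> dict:
    """Local execution: this function body runs in your own runtime/infra."""
    # e.g., query your own database or internal API here
    return {"team": team_id, "tasks_completed": 12}

local_tools = {"project-status": project_status}

# Hosted execution: only a reference is declared here; the call itself runs in
# the Starships-hosted runtime, where auth is injected and policies are enforced.
hosted_actions = {
    "team-activity": {"connector": "git_provider", "action": "list_recent_commits"},
}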

How Each Node Works

  • start: Receives input and triggers agent execution
  • llm: Processes requests using AI models and decides on tool usage
  • conditional: Routes execution based on LLM decisions (tools vs. response)
  • code: Prepares data and formats tool arguments for execution
  • tool: Executes capabilities in local execution mode (runs the tool function in your runtime)
  • connector_action: Executes connector-backed integration calls in hosted execution mode (auth/policy enforced by the host)
  • mutate: Updates memory with tool results for the next LLM cycle
  • end: Returns the response once the agent has a complete answer
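
To make the node roles concrete, the loop above could be sketched as a small graph structure like the following. This is a hypothetical representation for illustration only, not the platform's actual blueprint format.

# Hypothetical blueprint sketch; node and edge shapes are illustrative,
# not the actual Starships.ai blueprint format.
blueprint = {
    "nodes": {
        "start":       {"type": "start"},
        "reason":      {"type": "llm"},
        "route":       {"type": "conditional"},       # tools vs. final response
        "prep_args":   {"type": "code"},              # format tool arguments
        "run_tool":    {"type": "tool"},              # local execution mode
        "run_action":  {"type": "connector_action"},  # hosted execution mode
        "save_result": {"type": "mutate"},            # write results to memory
        "finish":      {"type": "end"},
    },
    "edges": [
        ("start", "reason"),
        ("reason", "route"),
        ("route", "prep_args"),       # LLM requested a tool or connector action
        ("route", "finish"),          # LLM produced a complete answer
        ("prep_args", "run_tool"),    # or "run_action" in hosted execution mode
        ("run_tool", "save_result"),
        ("save_result", "reason"),    # feed results back for the next LLM cycle
    ],
}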

Blueprint vs. Creation: These diagrams show how agents execute tasks internally using blueprints. For how meta-agents like Gaia create other agents, see the individual meta-agent documentation.


How Agents Process Requests

The Tool-Calling Loop

What makes agents intelligent rather than just chatbots:

  1. LLM Analyzes Request: The AI model examines your input and compares it against available tools and conversation history.
  2. Decides on Action: Either provides a direct response or identifies which tools are needed to gather more information.
  3. Executes Tools: Calls external APIs, searches databases, processes files, or performs calculations as needed.
  4. Incorporates Results: Adds tool results to its understanding and repeats until it can provide a complete answer.
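
A minimal version of this loop can be sketched as follows. The call_model and run_tool callables are hypothetical stand-ins for the model call and tool execution, not actual SDK functions.

# Minimal tool-calling loop sketch; call_model and run_tool are hypothetical.
def agent_loop(user_message: str, call_model, run_tool, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = call_model(messages)             # 1. LLM analyzes the request
        messages.append(reply)
        if not reply.get("tool_calls"):          # 2. decides: direct answer or tools?
            return reply["content"]              #    complete answer: return it
        for call in reply["tool_calls"]:         # 3. executes the requested tools
            result = run_tool(call["name"], call["arguments"])
            messages.append({"role": "tool",     # 4. incorporates results, then repeats
                             "name": call["name"],
                             "content": str(result)})
    return "Stopped after reaching the step limit."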

Example Flow:

User: "What's our team's progress this week?"

Agent thinks: "I need project data" → Calls project-status tool
Tool returns: Current task completion rates
Agent thinks: "I need recent updates" → Calls team-activity tool
Tool returns: This week's commits and updates
Agent responds: "Your team completed 12 tasks this week, 3 ahead of schedule..."

What Makes Agents Intelligent

Unlike simple chatbots, agents combine memory, tools, and reasoning:

  • Persistent Memory: Remember conversations and adapt to your preferences
  • Tool Integration: Execute actions through specialized capabilities
  • Intelligent Reasoning: Make decisions and learn from interactions
  • Collaboration: Work together and share context on complex tasks

This makes them true AI teammates rather than simple question-answering systems.

Want to build custom workflows? Learn about all blueprint nodes and advanced patterns in our Blueprint Reference Guide.


Ready to Build Your First Agent?

Now that you understand how agents work, you're ready to create your own.


Agent Publishing & Sharing

Coming Soon

An agent marketplace and publishing system is currently in development.

Planned features:

  • Share agents privately within your organization
  • Publish to the community marketplace
  • Version control and quality assurance
  • Convert successful agents into reusable templates