System Architecture

The PromptHub protocol is built on a modular, extensible architecture that reflects the lifecycle of prompts from definition to execution, storage, governance, and monetization. As an independent protocol layer and trading system, PromptHub is designed to be compatible with various model interaction standards:

1. PromptDSL: Semantic Definition Layer

PromptDSL is the entry point of the system. It defines prompts as typed, structured templates that declare:

  • Input and output schema

  • Template body and embedded logic

  • Parameterized configuration

  • Dependency injection (referencing other prompt modules)

This abstraction makes prompts interoperable, testable, and reusable across different contexts and models.
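
As an illustration, a PromptDSL definition might look like the following TypeScript sketch. The field names (inputs, outputs, template, params, dependencies) are assumptions for illustration, not the protocol's canonical schema.

```typescript
// Hypothetical sketch of a PromptDSL definition; field names are illustrative,
// not the protocol's canonical schema.
interface PromptDefinition {
  name: string;
  version: string;
  inputs: Record<string, "string" | "number" | "boolean">;   // input schema
  outputs: Record<string, "string" | "number" | "boolean">;  // output schema
  template: string;                    // template body with embedded placeholders
  params?: Record<string, unknown>;    // parameterized configuration
  dependencies?: string[];             // references to other prompt modules
}

const summarizeArticle: PromptDefinition = {
  name: "summarize-article",
  version: "1.0.0",
  inputs: { articleText: "string", maxSentences: "number" },
  outputs: { summary: "string" },
  template:
    "Summarize the following article in at most {{maxSentences}} sentences:\n{{articleText}}",
  params: { temperature: 0.2 },
  dependencies: ["prompt://detect-language@1.x"],
};
```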

2. PromptModule: Execution Interface Layer

Each prompt defined in PromptDSL is registered as a PromptModule, which serves as the standard interface for prompt execution. A PromptModule:

  • Exposes specific prompts as capabilities to LLMs

  • Provides resources and tools that can be accessed by models

  • Maintains consistent context and execution patterns

  • Enables models to interact with on-chain and off-chain data sources

PromptModule supports multiple model interaction protocols, enabling it to function with various AI systems and model providers.
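
A minimal sketch of what a PromptModule interface could expose is shown below; the method names are illustrative assumptions rather than the published API.

```typescript
// Hypothetical sketch of a PromptModule interface; method names are assumptions
// for illustration. Adapters can translate between this interface and specific
// model interaction protocols.
interface PromptModule {
  // Expose registered prompts as capabilities an LLM can discover.
  listCapabilities(): Promise<{ name: string; description: string }[]>;

  // Resolve a resource (on-chain or off-chain) the model may request.
  getResource(uri: string): Promise<unknown>;

  // Execute a prompt with validated inputs and return structured output.
  execute(
    promptName: string,
    inputs: Record<string, unknown>
  ): Promise<Record<string, unknown>>;
}
```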

3. PromptVault: Persistence and Version Control

Once registered, prompts are published to PromptVault, a smart contract on Solana responsible for:

  • Canonical versioning and metadata binding

  • Storing IPFS references

  • Enforcing access licenses and token restrictions

  • Enabling auditability and forking transparency

The vault maintains a secure on-chain record of each prompt entry, including its ownership, version history, and execution permissions.
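
As a rough illustration, such a record could be modeled as follows; the fields are inferred from the capabilities listed above and do not represent the actual Solana account layout.

```typescript
// Hypothetical shape of a PromptVault entry, inferred from the capabilities
// described above; not the actual on-chain account layout.
interface PromptVaultEntry {
  promptId: string;        // canonical identifier
  owner: string;           // owner's Solana address (base58)
  version: string;         // canonical version, e.g. "1.2.0"
  contentCid: string;      // IPFS reference to the prompt content
  license: {
    tokenGate?: string;    // token mint required for access, if any
    priceLamports?: bigint;
  };
  forkOf?: string;         // parent prompt, for forking transparency
  createdAt: number;       // unix timestamp, supporting auditability
}
```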

4. PromptRouter: Coordination Layer

PromptRouter acts as the coordination layer responsible for managing interactions between LLMs and PromptModules:

  • Connects models to appropriate resources based on context needs

  • Routes model requests to the right prompt modules

  • Manages authentication and access control

  • Orchestrates complex workflows through PromptDAG

PromptRouter's core functionality is not dependent on any specific model interaction protocol, allowing it to work with various AI systems.
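
The routing logic can be sketched as follows, assuming the PromptModule shape from Section 2; the class and method names, as well as the access check, are simplified placeholders.

```typescript
// Trimmed version of the PromptModule sketch from Section 2.
interface PromptModule {
  execute(
    promptName: string,
    inputs: Record<string, unknown>
  ): Promise<Record<string, unknown>>;
}

// Minimal routing sketch: match a model request to a registered PromptModule,
// check access, and execute. Names are illustrative assumptions.
class PromptRouter {
  private modules = new Map<string, PromptModule>();

  register(name: string, module: PromptModule): void {
    this.modules.set(name, module);
  }

  async route(
    moduleName: string,
    promptName: string,
    inputs: Record<string, unknown>,
    caller: { address: string }
  ): Promise<Record<string, unknown>> {
    const target = this.modules.get(moduleName);
    if (!target) throw new Error(`Unknown module: ${moduleName}`);

    // Placeholder for authentication and license checks against PromptVault.
    if (!caller.address) throw new Error("Unauthenticated caller");

    return target.execute(promptName, inputs);
  }
}
```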

5. LLM Integration Layer

The outermost layer consists of the language models and client applications that consume PromptHub resources; a minimal integration sketch follows the list below:

  • Large language models (e.g., Claude, GPT-4)

  • Model-specific adapters

  • Frontend applications that utilize LLMs

  • AI-powered tools and platforms
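
The sketch below shows one way an adapter or frontend might tie these pieces together; routerExecute and callModel are hypothetical placeholders standing in for the router call and a provider SDK, not real PromptHub or vendor APIs.

```typescript
// Illustrative adapter sketch: fetch a rendered prompt from PromptHub via the
// router, then pass it to whichever model the application uses.
async function runPrompt(
  routerExecute: (
    module: string,
    prompt: string,
    inputs: Record<string, unknown>
  ) => Promise<{ text: string }>,
  callModel: (prompt: string) => Promise<string>,
  articleText: string
): Promise<string> {
  // 1. Ask PromptRouter to render the registered prompt with validated inputs.
  const rendered = await routerExecute("summarization", "summarize-article", {
    articleText,
    maxSentences: 3,
  });

  // 2. Send the rendered prompt to the chosen model (e.g., Claude or GPT-4).
  return callModel(rendered.text);
}
```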

This architectural design ensures that PromptHub is not merely a backend service, but a programmable, composable, and trustworthy semantic layer for decentralized AI ecosystems. The core value of PromptHub lies in its capability as a prompt protocol layer and trading system, able to operate independently while integrating with various model interaction standards.

6. Data Flow Diagram

The following diagram illustrates the typical data flow for prompt registration, execution, and monetization within the PromptHub ecosystem:
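
The same flow can be outlined in code; every function in the deps object below is a hypothetical placeholder for illustration, not a PromptHub API.

```typescript
// Hypothetical end-to-end sketch of the lifecycle the diagram depicts.
interface LifecycleDeps {
  uploadToIpfs(content: object): Promise<string>;                          // returns an IPFS CID
  vaultRegister(entry: { cid: string; priceLamports: bigint }): Promise<string>; // returns a prompt ID
  routerExecute(promptId: string, inputs: object): Promise<object>;        // license-checked execution
  settlePayment(promptId: string, buyer: string): Promise<void>;           // pays the prompt author
}

async function promptLifecycle(deps: LifecycleDeps): Promise<void> {
  // 1. Registration: the author publishes DSL content to IPFS and anchors it in PromptVault.
  const cid = await deps.uploadToIpfs({ name: "summarize-article", version: "1.0.0" });
  const promptId = await deps.vaultRegister({ cid, priceLamports: 10_000n });

  // 2. Execution: a consumer invokes the prompt through PromptRouter, which
  //    checks the vault's license terms before running it.
  const output = await deps.routerExecute(promptId, { articleText: "…" });

  // 3. Monetization: the consumer's payment is settled to the prompt author.
  await deps.settlePayment(promptId, "consumerWalletAddress");

  console.log(output);
}
```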

This data flow demonstrates how PromptHub facilitates the entire prompt lifecycle from creation through execution to monetization, with cryptographic verification at every step.
