FUTURE

On the Horizon

We’re continuously exploring new ways to extend Private GPT’s capabilities. Here are some topics currently under evaluation.

LangFlow integration for building and connecting custom workflows

  • Investigating integration of LangFlow for
    building visual workflows
  • Goal: empower admins to create and expose
    custom logic through Private GPT

A Trusted Provider setup to securely offload AI processing across isolated systems

  • Concept to support secure, multi-instance
    deployments for enterprise tenants
  • Enables external AI containers to handle
    model inference while customer systems
    manage the data

MCP Integration to enable external tool calling and contextual extensions

  • Evaluating support for connecting to external
    MCP (Model Context Protocol) servers
  • Would allow the system to call tools or
    services during a conversation (e.g., for
    calculations, database queries, or API calls)
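To give a rough idea of the pattern such an integration would enable, here is a minimal, hypothetical sketch of a host dispatching a model's tool request. The tool names and dispatch logic below are illustrative only, not an actual MCP implementation:

```python
# Hypothetical sketch: the model asks for a tool by name, the host runs it,
# and the result is returned into the conversation.

def calculate(expression: str) -> str:
    # Example tool: a deliberately tiny calculator restricted to
    # arithmetic characters, so eval() cannot reach arbitrary code.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    return str(eval(expression))

# Registry of tools the host exposes to the model (illustrative).
TOOLS = {"calculate": calculate}

def handle_tool_call(name: str, arguments: dict) -> str:
    # The host would dispatch the model's tool request here.
    if name not in TOOLS:
        return f"error: unknown tool '{name}'"
    return TOOLS[name](**arguments)

print(handle_tool_call("calculate", {"expression": "2 * (3 + 4)"}))  # prints "14"
```

In a real MCP setup the tools would live on an external server rather than in the host process, but the request/dispatch/respond loop is the same.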

A Model Switcher to allow flexible use of different LLMs

  • Aims to allow system administrators to
    switch LLMs via a CLI command
  • Would support a predefined list of officially
    tested models
  • Compatible models would be listed in a
    Model Hub, integrated into the License Portal
  • Models can be downloaded separately,
    reducing installation image size and
    speeding up upgrades
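As an illustration of the idea, a minimal hypothetical sketch of how a switch command might validate a model against the predefined list before activating it. All model names and URLs below are placeholders, not actual Model Hub entries:

```python
# Hypothetical sketch: only officially tested models may be selected;
# a valid choice resolves to its (placeholder) download location.
TESTED_MODELS = {
    "model-a": "https://example.com/models/model-a.bin",  # placeholder
    "model-b": "https://example.com/models/model-b.bin",  # placeholder
}

def switch_model(name: str) -> str:
    # Mirror the planned Model Hub check: reject anything not on the list.
    if name not in TESTED_MODELS:
        raise ValueError(f"'{name}' is not an officially tested model")
    return TESTED_MODELS[name]

print(switch_model("model-a"))  # prints the placeholder download URL
```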

These and other features are on the horizon, and we’ll share more once concrete plans take shape.