Open Source · Self-hosted · AGPL v3
Build AI-powered applications, not infrastructure
Agents, functions, database queries, state, files, and templates — behind a single API with role-based access control. Deploy with Docker Compose.
Terminal
$ curl -fsSL https://raw.githubusercontent.com/sinas-platform/sinas/main/install.sh -o /tmp/sinas-install.sh && sudo bash /tmp/sinas-install.sh
# Running at localhost:80 (API) & localhost:51245 (Console)
Why choose Sinas
Everything you need to ship AI applications
Six integrated subsystems, one unified API. Add what you need, ignore the rest; a quick taste of the API follows the grid.
AI Agents
Multi-provider LLM agents with tool calling, streaming, and agent-to-agent orchestration. OpenAI, Anthropic, Mistral, or Ollama.
Python Functions
Serverless Python in isolated Docker containers. Triggered by agents, webhooks, schedules, or the API. Automatic execution tracking.
Database Queries
SQL templates against external PostgreSQL, ClickHouse, or Snowflake. Parameterized, validated, and usable as agent tools.
Skills
Reusable instruction documents that agents retrieve on-demand or preload into system prompts. Share expertise without prompt bloat.
State Management
Persistent key-value storage with namespaces. Agents maintain memory and context across conversations with fine-grained access control.
Files & Templates
File collections with versioning and metadata validation. Jinja2 templates for emails and dynamic content. Upload hooks for processing.
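A quick taste of that unified API: the sketch below reads and writes a namespaced state key with plain Python. The GET mirrors the lookup_customer.py call further down this page; the PUT request shape and the placeholder token are assumptions, not documented behavior.
state_demo.py
import requests

BASE = "http://localhost"  # API address from the install output above
HEADERS = {"Authorization": "Bearer <your-api-token>"}  # placeholder credential

# Write a value into a namespaced key (request shape is an assumption)
requests.put(
    f"{BASE}/states",
    json={"namespace": "customers",
          "key": "ada@example.com",
          "value": {"plan": "pro"}},
    headers=HEADERS,
)

# Read it back; this mirrors the GET used in lookup_customer.py below
resp = requests.get(
    f"{BASE}/states",
    params={"namespace": "customers", "key": "ada@example.com"},
    headers=HEADERS,
)
print(resp.json())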
Agents
Agents that call agents, use tools, and remember
Define agents in YAML, wire them to tools, and let them call each other. Every capability below is switched on per agent.
Switch between OpenAI, Anthropic, Mistral, or Ollama per agent
Tool calling with functions, queries, skills, state, and file search
Agent-to-agent calls via async queue — no recursive blocking
SSE streaming with reconnectable Redis Streams
Jinja2 system prompts with input variable injection
agent-config.yaml
agents:
  - namespace: support
    name: ticket-agent
    model: gpt-4o
    system_prompt: |
      You are a support agent for {{company}}.
      Use tools to look up customer data.
    enabled_functions:
      - crm/lookup_customer
      - email/send_reply
    enabled_skills:
      - skill: default/tone_guidelines
        preload: true
    enabled_queries:
      - analytics/recent_orders
    state_namespaces_readwrite:
      - conversation_memory
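To put that config to work, a client calls the agent over the runtime API and consumes the SSE stream. A minimal sketch: the /agents/support/ticket-agent/call path, the payload shape, and the token are assumptions; only the SSE streaming itself comes from the list above.
call_ticket_agent.py
import requests

resp = requests.post(
    "http://localhost/agents/support/ticket-agent/call",  # path is an assumption
    json={
        "input": {"company": "Acme Corp"},   # fills {{company}} in the prompt
        "message": "Where is order #4512?",
    },
    headers={"Authorization": "Bearer <your-api-token>"},
    stream=True,                             # keep the SSE connection open
)
for line in resp.iter_lines():
    if line.startswith(b"data:"):            # SSE events arrive as data: lines
        print(line[5:].decode().strip())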
lookup_customer.py
def lookup_customer(input, context):
    """Look up a customer by email."""
    import requests
    headers = {
        "Authorization": f"Bearer {context['access_token']}"
    }
    # Call back into the Sinas API from inside the function container
    resp = requests.get(
        "http://host.docker.internal:8000/states",
        params={"namespace": "customers", "key": input["email"]},
        headers=headers,
    )
    return resp.json()
Functions
Write Python, deploy instantly. Functions run in pre-warmed Docker containers with resource limits. Every call is tracked — including nested function calls that build execution trees. A sketch of what a triggered function can look like follows the list below.
Isolated containers with memory, CPU, and disk limits
Input/output validation via JSON Schema
Context injection: user_id, access_token, execution_id
Webhooks and cron schedules as triggers
Admin-approved package management
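For flavor, here is a sketch of a scheduled function. The (input, context) signature and the context fields come from this page; how Sinas attaches the schema and the cron trigger to the function is assumed here.
send_digest.py
# JSON Schema validated against the input before each run (assumed wiring)
input_schema = {
    "type": "object",
    "properties": {"limit": {"type": "integer", "minimum": 1}},
    "required": ["limit"],
}

def send_digest(input, context):
    """Cron-triggered function; input has already passed validation."""
    # Context injection per the list above: user_id, access_token, execution_id
    print(f"execution {context['execution_id']} for user {context['user_id']}")
    return {"sent": input["limit"]}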
And more
Everything else included
Authentication, permissions, configuration management, and operational tooling — all built in.
Use cases
What you can build
Customer support
Agents that look up customer data via queries, follow tone guidelines via skills, and escalate via function calls. Full conversation history with state persistence.
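The escalation step could be a one-file function in the same style as lookup_customer.py. A sketch, assuming a tickets state namespace and the same PUT shape as the earlier state example:
escalate_ticket.py
def escalate_ticket(input, context):
    """Flag a conversation for human follow-up."""
    import requests
    resp = requests.put(
        "http://host.docker.internal:8000/states",  # same host as lookup_customer.py
        json={"namespace": "tickets",               # namespace is an assumption
              "key": input["conversation_id"],
              "value": {"status": "escalated"}},
        headers={"Authorization": f"Bearer {context['access_token']}"},
    )
    return {"escalated": resp.ok}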
Deployment
Self-host on your infrastructure. Docker Compose handles everything — PostgreSQL, Redis, ClickHouse, workers, scheduler, and the console.
3 commands to install · 4 LLM providers supported · 2 API layers (runtime + management) · ∞ applications per deployment