Unified interface for PostgreSQL, Neo4j, and document stores with natural language querying, bulk operations, and space-scoped isolation.
You're running a SaaS business. Customer data in PostgreSQL. Support tickets in a Neo4j graph connecting tickets to users, products, and team members. Product usage events stored as JSON documents.
Three separate databases. Three different query languages. Finding insights means writing complex joins, graph traversals, and JSON queries. It takes hours.
"Show me users who signed up last month but haven't invited any teammates. I think they might churn."
"I'll query your customer database for signups from last month, check the Neo4j relationships for team invitations, and cross-reference with usage events..."
*Generates SQL for PostgreSQL, Cypher for Neo4j, combines results*
"Found 47 users. Here's the breakdown with their engagement scores and last activity."
"Can you set up an automatic check for this every Monday morning?"
"Done. Created a workflow that runs every Monday at 9am and posts results to your #growth Slack channel."
This isn't magic. This is collaboration.
You don't need to know SQL, Cypher, or JSON query syntax. Just describe what you need. Your Awareness handles the complexity.
PostgreSQL: Traditional SQL database with ACID guarantees
Neo4j: Relationship-focused database for knowledge graphs
Document stores: Flexible schema with PostgreSQL JSONB storage
interface IDataStoreManager {
  // Store management
  listStores(): Promise<DataStore[]>
  createStore(config: DataStoreConfig): Promise<DataStore>
  deleteStore(storeId: string): Promise<void>

  // Collection management
  listCollections(storeId: string): Promise<DataCollection[]>
  createCollection(storeId: string, schema: Schema): Promise<DataCollection>

  // Query operations
  query(storeId: string, query: string, params?: any[]): Promise<QueryResult>
  naturalLanguageQuery(storeId: string, nl: string): Promise<QueryResult>

  // Bulk operations
  bulkInsert(storeId: string, table: string, records: any[]): Promise<void>
  bulkUpdate(storeId: string, table: string, updates: any[]): Promise<void>
  bulkDelete(storeId: string, table: string, ids: string[]): Promise<void>
}

The LLMQueryBuilder converts natural language requests into safe, parameterized queries with validation against dangerous keywords.
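The keyword validation step can be sketched as a simple blocklist check; `validateGeneratedQuery` and the keyword list are illustrative assumptions, not the actual LLMQueryBuilder API.

```typescript
// Sketch of validating a generated query against dangerous keywords.
// The blocklist contents here are an assumption for illustration.
const BLOCKED_KEYWORDS = ["DROP", "TRUNCATE", "ALTER", "GRANT", "REVOKE"];

function validateGeneratedQuery(query: string): boolean {
  const upper = query.toUpperCase();
  // Reject the query if any blocked keyword appears as a whole word.
  return !BLOCKED_KEYWORDS.some((kw) => new RegExp(`\\b${kw}\\b`).test(upper));
}
```

A real implementation would also enforce parameterization, so user-supplied values never reach the query text directly.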
User Request:
"Show me customers from New York with revenue greater than $10,000"
Generated SQL:
SELECT * FROM customers WHERE city = $1 AND revenue > $2 ORDER BY revenue DESC
Params: ["New York", 10000]
User Request:
"Find all people who work at companies in the tech industry"
Generated Cypher:
MATCH (p:Person)-[:WORKS_AT]->(c:Company) WHERE c.industry = $industry RETURN p.name, c.name, c.industry
Params: { industry: "tech" }
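The SQL-plus-params pairing above can be represented as a small typed structure; `GeneratedQuery` and `placeholderCount` are illustrative names, not part of the actual API. A quick placeholder count is one way to sanity-check the pair before execution.

```typescript
// Illustrative shape for a generated query and its bound parameters.
interface GeneratedQuery {
  sql: string;
  params: (string | number)[];
}

const generated: GeneratedQuery = {
  sql: "SELECT * FROM customers WHERE city = $1 AND revenue > $2 ORDER BY revenue DESC",
  params: ["New York", 10000],
};

// Count distinct $n placeholders to verify the params array lines up.
function placeholderCount(sql: string): number {
  return new Set(sql.match(/\$\d+/g) ?? []).size;
}
```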
Efficiently insert, update, or delete thousands of records using optimized batch patterns.
INSERT INTO schema.products (id, name, price) SELECT * FROM UNNEST($1::text[], $2::text[], $3::numeric[]) ON CONFLICT (id) DO UPDATE SET price = EXCLUDED.price
Uses UNNEST for batch inserts with upsert support via ON CONFLICT.
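Records must be pivoted from row objects into the per-column arrays that UNNEST expects. A minimal sketch, assuming an `id` column is included so the ON CONFLICT upsert can fire; `toUnnestParams` is an illustrative helper, not the actual API.

```typescript
// Column names mirror the products example; the record shape is assumed.
interface ProductRecord { id: string; name: string; price: number; }

// Pivot row objects into column arrays bound as $1, $2, $3.
function toUnnestParams(records: ProductRecord[]): [string[], string[], number[]] {
  return [
    records.map((r) => r.id),
    records.map((r) => r.name),
    records.map((r) => r.price),
  ];
}

const [ids, names, prices] = toUnnestParams([
  { id: "p1", name: "Widget", price: 9.99 },
  { id: "p2", name: "Gadget", price: 19.99 },
]);
```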
UNWIND $nodes AS props CREATE (n:Product) SET n = props RETURN elementId(n) as id
Uses UNWIND to create multiple nodes in a single transaction.
// Bulk insert 1000 customer records
await manager.bulkInsert("customers_db", "customers", [
  { name: "Alice", email: "alice@example.com", city: "NYC" },
  { name: "Bob", email: "bob@example.com", city: "SF" },
  // ... 998 more records
]);
// Result: Single query, efficient batch insert
// ~100x faster than 1000 individual INSERTs

Each space gets isolated database resources to prevent cross-space data access.
-- Space "workspace_xyz" queries: SELECT * FROM "space_workspace_xyz".customers -- Space "workspace_abc" queries: SELECT * FROM "space_workspace_abc".customers -- Zero cross-space visibility
Each space uses a dedicated PostgreSQL schema prefixed with space ID.
// Space "workspace_xyz" nodes: MATCH (p:Space_workspace_xyz_Person) // Space "workspace_abc" nodes: MATCH (p:Space_workspace_abc_Person) // Labels prevent cross-space queries
Node labels are prefixed with space ID for complete isolation.
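The two naming conventions above can be captured in small helpers; `pgSchemaFor` and `neo4jLabelFor` are illustrative names, not the actual API.

```typescript
// PostgreSQL: each space maps to a dedicated schema, prefixed with the space ID.
function pgSchemaFor(spaceId: string): string {
  return `space_${spaceId}`;
}

// Neo4j: node labels are prefixed with the space ID for isolation.
function neo4jLabelFor(spaceId: string, label: string): string {
  return `Space_${spaceId}_${label}`;
}
```

Centralizing the prefixing in one place keeps every query path space-scoped by construction rather than by convention.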
Execute complex analytics queries and integrate ML models via workflow code nodes.
// Workflow Node 1: Query customer behavior
const result = await dataStore.query("crm_db", `
  SELECT
    customer_id,
    EXTRACT(DAY FROM NOW() - last_order_date) as days_inactive,
    total_orders,
    avg_order_value
  FROM customers
  WHERE last_order_date < NOW() - INTERVAL '90 days'
`);
// Workflow Node 2: Calculate churn risk (JavaScript)
const churnRisks = result.rows.map(customer => ({
  ...customer,
  churn_score: (
    customer.days_inactive * 0.4 +
    (1 / customer.total_orders) * 0.3 +
    (1000 / customer.avg_order_value) * 0.3
  )
}));
// Workflow Node 3: Store results
await dataStore.bulkInsert("analytics_db", "churn_predictions", churnRisks);// Find similar customers via Neo4j pattern matching
const similarCustomers = await dataStore.query("graph_db", `
  MATCH (c:Customer {id: $customerId})-[:PURCHASED]->(p:Product)
  MATCH (p)<-[:PURCHASED]-(similar:Customer)
  WHERE similar.id <> $customerId
  RETURN similar.id, similar.name, COUNT(p) as shared_products
  ORDER BY shared_products DESC
  LIMIT 10
`, { customerId });
// Use results to generate recommendations

Seven tools enable both users (via UI) and AI (via LLM function calls) to manage data stores.
list_data_stores: List all connected databases
  Returns: Array of {id, name, type, status}
get_schema: Introspect table/collection schemas
  Returns: Schema with columns, types, constraints
query_database: Execute SQL/Cypher or natural language
  Returns: QueryResult with rows, columns, metrics
insert_record: Insert single record into table
  Returns: Inserted record with generated ID
update_record: Update existing record by ID
  Returns: Updated record
delete_record: Delete record by ID
  Returns: Success boolean
create_collection: Create new table/collection
  Returns: Collection metadata
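For LLM function calling, each tool needs a machine-readable definition. A minimal sketch for `query_database`, assuming an OpenAI-style JSON Schema format; the exact schema shape and parameter names are assumptions.

```typescript
// Hypothetical tool definition exposing query_database to the LLM.
const queryDatabaseTool = {
  name: "query_database",
  description: "Execute SQL/Cypher or a natural language query against a data store",
  parameters: {
    type: "object",
    properties: {
      store_id: { type: "string", description: "Target data store ID" },
      query: { type: "string", description: "SQL, Cypher, or natural language" },
    },
    required: ["store_id", "query"],
  },
};
```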