AI Assistant
The AI Assistant provides intelligent network analysis powered by LLM providers, with an in-browser Python playground for advanced computations.
Overview
The AI Assistant combines pre-computed network metrics with large language model capabilities to answer questions about your organizational network. It can identify key influencers, detect burnout risk, analyze department connectivity, and generate custom visualizations — all through natural language conversation.
Setup
Configuring a Provider
Navigate to Settings > AI Assistant to configure your API provider:
Cloud Providers
| Provider | Models | API Key Required |
|---|---|---|
| Google Gemini | gemini-2.0-flash, gemini-2.0-pro | Yes |
| OpenAI | gpt-4o, gpt-4o-mini | Yes |
| Groq | llama-3.3-70b, mixtral-8x7b | Yes |
Enter your API key and select a model. The AI chat button will appear in the header.
Edge Hosted Providers
| Provider | Default Model | API Key Required |
|---|---|---|
| Ollama (Local) | llama3.2 | No |
Edge-hosted providers run models locally on your machine. No data leaves your computer, which makes this option ideal for sensitive organizational data. See the Ollama Setup Guide below for detailed instructions.
Local models (7B–8B parameters) have significantly less context awareness than cloud providers. They may struggle with long system prompts, miss network data details, or give generic answers to open-ended questions like "What am I looking at?" For best results with Ollama, ask specific questions (e.g., "Who has the highest betweenness centrality?") rather than broad ones. Cloud providers (Gemini, OpenAI, Groq) use much larger models that handle the full network context reliably.
Opening the Chat
- Click the chat icon in the sidebar, or
- Use the keyboard shortcut (configured in Settings)
Pre-Computed Network Metrics
When a graph is loaded, the AI receives rich context about your network including:
- Degree centrality (in/out/total) — how connected each node is
- Betweenness centrality — nodes that bridge different groups
- Closeness centrality — how quickly a node can reach all others
- Eigenvector centrality — connection to other well-connected nodes
- PageRank — importance based on incoming connections
- Clustering coefficient — how interconnected a node's neighbors are
- k-core number — core vs. periphery position
- Burt's constraint — structural holes and information access
- Bridging score — cross-department boundary spanning
- Structural roles — hub, broker, bridge, or peripheral classification
- Global efficiency — how efficiently information flows across the network (average inverse shortest path, 0–1 scale)
- Clique analysis — maximal fully-connected subgroups, largest clique size, and per-node clique membership counts
- Suggested connections — top 10 pairs of unconnected nodes that share many mutual contacts but lack a direct link (Adamic-Adar link prediction)
Additional context includes department breakdowns, cross-department connection counts, and network topology statistics (density, reciprocity, path length, diameter, global efficiency).
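Most of these metrics come straight from networkx and can be reproduced by hand. A minimal sketch on a toy directed network (the node names are invented for illustration):

```python
import networkx as nx

# Toy directed network (invented names)
G = nx.DiGraph([
    ("ana", "ben"), ("ben", "ana"), ("ben", "cho"),
    ("cho", "dee"), ("dee", "ben"), ("ana", "cho"),
])

betweenness = nx.betweenness_centrality(G)      # bridges between groups
pagerank = nx.pagerank(G)                       # importance via incoming links
closeness = nx.closeness_centrality(G)          # how quickly a node reaches others
clustering = nx.clustering(G.to_undirected())   # neighbor interconnection
core = nx.core_number(G.to_undirected())        # core vs. periphery position
```

Metrics like Burt's constraint (nx.constraint) and global efficiency (nx.global_efficiency) follow the same pattern; bridging score and structural roles are dashboard-specific composites.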
Business Analysis Templates
The AI is trained to apply ONA-specific analysis patterns when you ask about:
Burnout Risk
Identifies overloaded brokers: nodes with high betweenness AND high degree AND low reciprocity — people who channel information but receive little support back.
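In networkx terms, this pattern might be sketched as follows (the quantile thresholds and the 0.5 reciprocity cutoff are illustrative assumptions, not the dashboard's actual values):

```python
import networkx as nx
import numpy as np

def overloaded_brokers(G, bet_q=0.8, deg_q=0.8, recip_max=0.5):
    """Flag nodes with high betweenness AND high degree AND low reciprocity."""
    bet = nx.betweenness_centrality(G)
    deg = dict(G.degree())
    bet_cut = np.quantile(list(bet.values()), bet_q)
    deg_cut = np.quantile(list(deg.values()), deg_q)
    flagged = []
    for n in G.nodes():
        out_n = set(G.successors(n))
        # Per-node reciprocity: fraction of outgoing ties that are returned
        recip = sum(G.has_edge(m, n) for m in out_n) / len(out_n) if out_n else 1.0
        if bet[n] >= bet_cut and deg[n] >= deg_cut and recip < recip_max:
            flagged.append(n)
    return flagged
```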
Churn/Attrition Risk
Detects peripheral nodes with low degree, low closeness, and high constraint. Declining connectivity over time signals disengagement.
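A hypothetical sketch of such a score, combining the three signals (the additive weighting is an invented assumption):

```python
import networkx as nx

def peripheral_risk_ranking(G):
    """Rank nodes most-at-risk first: low degree, low closeness, high constraint.
    The additive weighting below is an illustrative assumption."""
    U = G.to_undirected()
    deg = dict(U.degree())
    close = nx.closeness_centrality(U)
    cons = nx.constraint(U)  # Burt's constraint; NaN for isolated nodes
    scores = {}
    for n in U:
        c = cons[n]
        c = 1.0 if c != c else c  # treat isolates as maximally constrained
        scores[n] = (1 - close[n]) + c - 0.1 * deg[n]
    return sorted(scores, key=scores.get, reverse=True)
```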
Onboarding Health
Evaluates whether new nodes are building connections at an appropriate rate. Low department diversity in connections suggests siloed onboarding.
Influence & Campaigns
Uses PageRank and bridging scores to identify organic influencers who can maximize information diffusion across communities.
Suggested Connections
Recommends the top 10 pairs of people who share many mutual contacts but are not directly connected. Uses the Adamic-Adar link prediction algorithm. Cross-department pairs are flagged as high-value introductions that could improve information flow.
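This step maps directly onto networkx's built-in adamic_adar_index, which scores non-adjacent pairs by their shared neighbors. A minimal sketch on the undirected view of the network:

```python
import networkx as nx

def suggest_connections(G, k=10):
    """Top-k unconnected pairs ranked by Adamic-Adar score."""
    U = G.to_undirected()
    # Yields (u, v, score) for every non-adjacent pair
    scored = nx.adamic_adar_index(U)
    return sorted(scored, key=lambda t: t[2], reverse=True)[:k]
```

Cross-department flagging would be a separate pass over the returned pairs, comparing each node's department attribute.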
Network Efficiency & Cliques
Global efficiency measures how well information can flow across the entire network (0–1 scale, where 1.0 means every node can reach every other in one hop). Clique analysis reveals tightly-knit subgroups where every member is connected to every other — useful for identifying cohesive teams or echo chambers.
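Both measures are available directly in networkx. A small worked example (toy graph, not dashboard output):

```python
import networkx as nx

# A triangle (1-2-3) with a pendant node 4
U = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4)])

# Four pairs at distance 1, two at distance 2 -> efficiency 5/6
eff = nx.global_efficiency(U)       # average inverse shortest-path length
cliques = list(nx.find_cliques(U))  # maximal cliques: {1, 2, 3} and {3, 4}
largest = max(len(c) for c in cliques)
```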
Python Playground
For analysis beyond pre-computed metrics, the AI can generate executable Python code that runs directly in your browser via Pyodide (Python compiled to WebAssembly).
How It Works
- Ask a question that requires custom computation (e.g., "Show me a degree distribution histogram")
- The AI generates a Python code block with a Run button
- Click Run to execute the code in-browser
- Results (text output and charts) appear inline in the chat
First Run
The first time you run Python code, Pyodide downloads and initializes (~20MB, cached for subsequent uses). This takes 10–20 seconds. After that, code execution is near-instant.
Pre-Loaded Variables
Every code block has access to:
- G — A NetworkX DiGraph/Graph of your current network
  - Node attributes: name, department, activity
  - Edge attributes: weight, count
- df — A Pandas DataFrame of your raw dataset rows (all CSV columns)
Available Libraries
- networkx — 500+ graph algorithms
- pandas — data manipulation
- numpy — numerical computation
- matplotlib / seaborn — data visualization
Example Prompts
- "Show me a degree distribution histogram"
- "Run a diffusion simulation from the most central node"
- "Detect communities using label propagation and visualize them"
- "Plot a heatmap of department-to-department connections"
- "Compare clustering coefficients across departments"
Chat Panel Controls
| Button | Action |
|---|---|
| Expand (⛶) | Grow panel to fill the main content area |
| Collapse (⧉) | Shrink panel back to default size |
| Clear (🗑) | Clear chat history |
| Close (✕) | Close the chat panel |
Voice Input/Output
When enabled in Settings, you can:
- Speak your questions using the microphone button
- Listen to AI responses via text-to-speech
- Select your preferred voice and speed
Ollama Setup
Ollama lets you run large language models locally. All data stays on your machine — nothing is sent to external servers.
1. Install Ollama
macOS:
brew install ollama
Or download from ollama.com/download.
Linux:
curl -fsSL https://ollama.com/install.sh | sh
Windows: Download the installer from ollama.com/download.
2. Pull a Model
ollama pull llama3.2
Other recommended models for network analysis:
| Model | Size | Notes |
|---|---|---|
| llama3.2 | 2GB | Fast, good for most queries |
| llama3.2:3b | 2GB | 3B parameter variant |
| llama3.1:8b | 4.7GB | Stronger reasoning |
| mistral | 4.1GB | Good balance of speed and quality |
| qwen2.5:7b | 4.7GB | Strong multilingual support |
3. Start the Ollama Server
ollama serve
Ollama runs on http://localhost:11434 by default. The server must be running when using the AI Assistant.
4. Enable Browser Access (CORS)
When running the ONA Dashboard from a web server (including localhost dev servers or Vercel preview), you need to allow cross-origin requests from the browser.
macOS/Linux — set the environment variable before starting Ollama:
OLLAMA_ORIGINS=* ollama serve
To make this permanent on macOS:
launchctl setenv OLLAMA_ORIGINS "*"
Then restart Ollama.
Windows — set the environment variable in System Settings > Environment Variables, or run:
$env:OLLAMA_ORIGINS="*"
ollama serve
If you open the dashboard directly as a file (file:///...), CORS is typically not required. CORS configuration is only needed when the dashboard is served from a web server.
5. Configure in ONA Dashboard
- Go to Settings > AI Assistant
- Under AI Brand, select Ollama (Local) from the Edge Hosted group
- Set the Model field (default: llama3.2) — must match a model you've pulled
- Adjust the Ollama URL if your server is on a different host or port (default: http://localhost:11434)
- The AI chat button appears in the header immediately (no API key needed)
Troubleshooting
"Failed to fetch" or network error
- Verify Ollama is running: curl http://localhost:11434/api/tags
- Check that CORS is configured (see step 4 above)
- Ensure the URL in settings matches your Ollama server address
"Model not found" error
- Run ollama list to see installed models
- Pull the model: ollama pull <model-name>
- Ensure the model name in settings exactly matches (e.g., llama3.2, not llama-3.2)
Slow responses
- Larger models require more RAM and a capable GPU
- Try a smaller model like llama3.2 for faster responses
- Close other memory-intensive applications
Requirements
- A graph must be created (source/target columns selected) for network metrics and the Python playground's G variable
- A dataset can be loaded without a graph — the AI will have access to the raw data via df
- Without any data loaded, the AI can still answer general ONA methodology questions