Q&A over Project Knowledge (Assisting Mode)
Assignee: All team members
In Assisting mode, Loyal Professor does not answer from general knowledge; it queries directly against the Knowledge Base of the selected project: Approved BRD/FRD/Spec documents, Foundation Documents, Test Cases, Jira tickets, and Confluence pages. Answers are grounded with citations from real documents, not fabricated.
This is the core difference from a regular ChatGPT/Claude; see the Differences section below for details.
Step 1: Switch to Assisting mode
- Open the chat (the R button in the bottom-right, or Ctrl/Cmd + Shift + L).
- Click the Loyal Professor logo in the header and select Assisting (🧠 BrainCircuit icon).
- The empty state changes to the title "Work on a ticket" with the subtitle "Include a ticket code (e.g. AMD-123) so I can load its context."
Loyal Professor reads the current `activeProjectId`; every question is answered within the scope of that project's Knowledge Base.
Step 2: Ask a question
There are three ways to give the AI the right context:
Option 1: Mention a ticket code directly
If your question is about a specific Jira ticket, type the ticket code in the question:
"Summarize ticket PROJ-142"
"What are the acceptance criteria of PROJ-142?"
The AI automatically loads that ticket's context: the related BRD, FRD, and Test Cases.
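As a rough sketch of how such a mention could be picked up, the snippet below extracts Jira-style ticket codes from a question. The `KEY-123` pattern and the function name are illustrative assumptions, not the product's actual parser:

```typescript
// Hypothetical sketch: find Jira-style ticket codes (e.g. PROJ-142, AMD-123)
// in a question so the matching ticket context can be loaded.
// The exact key format is an assumption, not the real implementation.
function detectTicketCodes(question: string): string[] {
  const pattern = /\b[A-Z][A-Z0-9]+-\d+\b/g;
  // Deduplicate while preserving first-seen order.
  return [...new Set(question.match(pattern) ?? [])];
}
```

For example, `detectTicketCodes("What are the acceptance criteria of PROJ-142?")` yields `["PROJ-142"]`.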
Option 2: Attach context via @ mention
Type @ in the input; the Context Explorer modal opens, letting you pick specific documents.
Six context groups are available:
| Group | Includes |
|---|---|
| Documents | API Specifications, Architecture Overview, Database Schema, Data Dictionary |
| Requirements | Approved BRDs |
| Tech Specs | Implementation Plans, FRDs, Agent workflows, CI/CD docs |
| Test Cases | Test Cases, Test Data, QA Checklists |
| Jira Issues | Project tickets |
| Confluence | Synced Confluence pages |
Keyboard shortcuts in Context Explorer:
| Key | Action |
|---|---|
| ↑ / ↓ | Move between items |
| → / Enter | Expand the group or select a document |
| ← / Esc | Go back or close |
| Tab | Switch groups (browse mode) |
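A minimal sketch of how this key handling could look (the action names and the function are illustrative assumptions, not the real Context Explorer code):

```typescript
// Hypothetical sketch of the Context Explorer keyboard handling from the
// table above, mapping KeyboardEvent.key values to explorer actions.
type ExplorerAction =
  | "move-up" | "move-down" | "enter" | "back" | "switch-group" | "none";

function explorerAction(key: string): ExplorerAction {
  switch (key) {
    case "ArrowUp": return "move-up";
    case "ArrowDown": return "move-down";
    case "ArrowRight":
    case "Enter": return "enter";        // expand group / select document
    case "ArrowLeft":
    case "Escape": return "back";        // go back / close
    case "Tab": return "switch-group";   // browse mode only
    default: return "none";
  }
}
```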
Once picked, the document appears as a pill above the input; click the X to remove it. Attach multiple documents if needed.
Option 3: Ask naturally
If you don't know exactly which document to use, just type a natural-language question; the AI uses semantic search to find the relevant documents on its own.
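Under the hood, semantic search typically ranks documents by embedding similarity. A minimal sketch, assuming cosine similarity over precomputed embeddings (the types and function names are illustrative, not the product's actual implementation):

```typescript
// Hypothetical sketch: rank Knowledge Base documents by cosine similarity
// between the question embedding and each document embedding.
// Real embeddings come from a model; here they are plain number arrays.
interface Doc { id: string; embedding: number[]; }

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

The returned top-k documents would then be fed to the model as grounding context for the answer.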
Step 3: Follow the Thinking Process
While the AI processes a question in Assisting mode, you will see thinking pills appear in real time above the answer:
| Tool | Display label |
|---|---|
| `similarity_search` | "Searching knowledge base..." |
| `graph_query` | "Running graph query..." |
| `contextualize` | "Expanding context..." |
| `list_templates` | "Loading query templates..." |
Each pill shows a spinning icon while its tool runs and turns into a ✓ once complete. After all tools finish, a "Thinking..." pill appears last, before the AI starts streaming the answer.
Below the answer, an expandable "Sources (N)" section lets you see exactly which documents were used as sources.
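The tool-to-label mapping above can be sketched as a simple lookup (the function name and the fallback behavior are assumptions for illustration):

```typescript
// Hypothetical sketch: map each tool name from the table above to its
// display label, falling back to the raw tool name for anything unknown.
const TOOL_LABELS: Record<string, string> = {
  similarity_search: "Searching knowledge base...",
  graph_query: "Running graph query...",
  contextualize: "Expanding context...",
  list_templates: "Loading query templates...",
};

function pillLabel(tool: string): string {
  return TOOL_LABELS[tool] ?? tool;
}
```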
Differences vs a regular Chatbot
| | Regular ChatGPT / Claude | Loyal Professor (Assisting) |
|---|---|---|
| Answer source | General knowledge from training data | Real documents of the project itself |
| Accuracy | May "hallucinate" | Answers come with source citations, no fabrication |
| Team knowledge | None | Knows the team's BRD, FRD, tickets, specs, conventions |
| Freshness | Limited by training cutoff date | Updated with the Knowledge Base in real time |
| Credential safety | No masking mechanism | Auto-masks API keys/tokens/passwords before sending |
| Transparency | "Black box" answers | Displays tool calls + citations |
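The credential-masking row above might work along these lines; a minimal sketch, where the specific patterns (Bearer tokens, AWS-style key IDs, key=value secrets) are illustrative assumptions, not the product's actual masking rules:

```typescript
// Hypothetical sketch: scrub obvious credentials from a message before it
// leaves the client. The patterns are illustrative assumptions only.
function maskCredentials(text: string): string {
  return text
    // Bearer tokens in headers or pasted logs
    .replace(/Bearer\s+[A-Za-z0-9._~+\/-]+=*/g, "Bearer ***")
    // AWS-style access key IDs
    .replace(/AKIA[0-9A-Z]{16}/g, "AKIA****************")
    // key=value / key: value style secrets
    .replace(/\b(api[_-]?key|token|password|secret)\b(\s*[:=]\s*)\S+/gi, "$1$2***");
}
```

For example, `maskCredentials("api_key=abc123")` returns `"api_key=***"`.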
Concrete example:
Ask ChatGPT: "What auth method does this project use?"
→ "I don't have information about your project. Common methods are JWT, OAuth2, etc."
Ask Loyal Professor (Assisting): "What auth method does this project use?"
→ "Per the Foundation Document 'Security Standards' and BRD-Authentication, the project uses JWT with refresh tokens. Access tokens expire after 15 minutes, refresh tokens after 7 days. The endpoint `/api/v1/auth/login` returns both..."
Sources: Security Standards, BRD-Authentication, API Spec v2.3
Usage examples by role
👩‍💼 Business Analyst
"List all BRDs related to the Payment module."
"Are the acceptance criteria of ticket PROJ-142 complete?"
"Does @BRD-Chat-Module conflict with @BRD-Authentication anywhere?"
"Does Feature X have a technical spec yet? If so, who owns it?"
🏗️ Solution Architect / Tech Lead
"Summarize @Architecture-Overview: is any microservice hitting a bottleneck?"
"What database are we using? How many tables, and which ones relate to the users table?"
"Are there any Implementation Plans that touch the schema of the orders table?"
"What is the team's current coding convention for naming API endpoints?"
👨‍💻 Developer
"In the @Implementation-Plan for ticket PROJ-142, how is the Momo API call flow described?"
"Does authentication currently use JWT or sessions? How long until the token expires?"
"What validation does the POST /orders API endpoint have, and what's the return format?"
"Are there any error codes defined for the payment service?"
🧪 QA Engineer
"List the test cases created for ticket PROJ-142."
"Does @TC-002-Context-Mention cover the edge case when the user types @ repeatedly?"
"What is included in the test scope for the Authentication feature?"
"Is there any test case that touches JWT expiry?"
📋 Project Manager
"How many BRDs are currently in WAITING_REVIEW status?"
"Summarize the most important architectural decisions in the foundation docs."
"Which stages has the Payment feature gone through? Are the BRD, FRD, and Test Cases done?"
"Which tickets in this sprint don't have an Approved BRD yet?"
🌱 Newly onboarded Developer
"What's the project's tech stack? Are there any team-specific coding conventions?"
"What's the overall architecture β main services and a typical request flow?"
"@Dev-Setup-Guide: what are the steps to run locally?"
"What is the team's code review process?"
Tips for effective chat usage
✅ Specific questions with context
| ✅ Good question | ❌ Hard for AI |
|---|---|
| "Which API endpoint handles user login? What are the request/response schemas?" | "How does the system work?" |
| "@BRD-Payment β what is the business rule when the balance is insufficient?" | "Describe the payment module" |
| "In ticket PROJ-142, which DB tables does the FRD mention?" | "Our database" |
| "Compare @Implementation-Plan-v1 and @Implementation-Plan-v2" | "Compare the docs" |
✅ Attach context when the question is narrow
Instead of letting the AI search the entire Knowledge Base (slower and potentially less accurate), type @ to attach 1–3 directly relevant documents; the AI then focuses entirely on that scope.
✅ Use follow-ups to go deeper
The AI remembers the context within the same conversation. Instead of starting over, ask follow-up questions:
You: "Summarize ticket PROJ-142"
AI: [returns summary]
You: "Which edge cases were mentioned?"
AI: [continues in the same context, no need to re-mention PROJ-142]
You: "Write a test case for edge case #2"
✅ Create a new conversation when changing topics
The AI remembers the context of the entire conversation. If you have been asking about Payment and want to switch to Authentication, create a New Chat so the AI is not confused by the old context.
✅ Rename conversations meaningfully
After finishing a topic, click the title and rename it to something memorable (e.g., "PROJ-142 test case analysis"); it is much easier to find later in History.
❌ Don't ask general knowledge in Assisting mode
A question like "Explain what dependency injection is" belongs in General mode. In Assisting mode, the AI will try to search the Knowledge Base first, which is slow and not optimal for general questions.
If the AI's answer is inaccurate, the cause is usually that the Knowledge Base is incomplete:
- The BRD/FRD is not Approved yet (only Approved docs enter the KB).
- Foundation Documents are missing or incomplete (see System Foundation).
- Jira/Confluence is not synced (check the Jira & Confluence setup).
Even with citations, AI answers can still misinterpret context. For important decisions (architecture changes, adding an endpoint, DB migrations, etc.), open the source document the AI cites and read it yourself to confirm before proceeding.