
Q&A over Project Knowledge (Assisting Mode)

Assignee: All team members 💡

In Assisting mode, Loyal Professor does not answer from general knowledge — it queries directly against the Knowledge Base of the selected project: Approved BRD/FRD/Spec, Foundation Documents, Test Cases, Jira tickets, and Confluence pages. Answers are grounded with citations from real documents, not fabricated.

This is the core difference from a general-purpose chatbot like ChatGPT or Claude — see the details in the Differences section below.


Step 1: Switch to Assisting mode​

  1. Open the chat (the R button in the bottom-right, or Ctrl/Cmd + Shift + L).
  2. Click the Loyal Professor logo in the header → select Assisting (🧠 BrainCircuit icon).
  3. The empty state changes to the title "Work on a ticket" with the subtitle "Include a ticket code (e.g. AMD-123) so I can load its context."

Loyal Professor reads the current activeProjectId — every question will be answered within the scope of that project's Knowledge Base.
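
Under the hood, you can think of this scoping as every request carrying the active project's ID. A minimal Python sketch with invented field names (the real payload format is internal, so treat these names as assumptions):

```python
# Hypothetical sketch: the field names here are assumptions, not the real API.
def build_assist_query(question: str, active_project_id: str) -> dict:
    # Tagging the request with the project ID keeps retrieval
    # inside that project's Knowledge Base.
    return {
        "mode": "assisting",
        "projectId": active_project_id,
        "question": question,
    }

payload = build_assist_query("Summarize ticket PROJ-142", "proj-demo")
```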


Step 2: Ask a question​

There are three ways to give the AI the right context:

Option 1: Mention a ticket code directly​

If your question is about a specific Jira ticket, type the ticket code in the question:

"Summarize ticket PROJ-142"
"What are the acceptance criteria of PROJ-142?"

The AI automatically loads that ticket's context — the related BRD, FRD, and Test Cases.
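
Conceptually, picking up a ticket code is a pattern match on the Jira key format (uppercase project prefix, dash, number). The regex below is an illustrative assumption, not the product's actual parser:

```python
import re

# Jira-style ticket keys, e.g. PROJ-142 (this pattern is an assumption).
TICKET_RE = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def find_ticket_codes(question: str) -> list[str]:
    return TICKET_RE.findall(question)

codes = find_ticket_codes("What are the acceptance criteria of PROJ-142?")
```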

Option 2: Attach context via @ mention​

Type @ in the input → the Context Explorer modal opens, letting you pick specific documents.

Six context groups are available:

| Group | Includes |
| --- | --- |
| Documents | API Specifications, Architecture Overview, Database Schema, Data Dictionary |
| Requirements | Approved BRDs |
| Tech Specs | Implementation Plans, FRDs, Agent workflows, CI/CD docs |
| Test Cases | Test Cases, Test Data, QA Checklists |
| Jira Issues | Project tickets |
| Confluence | Synced Confluence pages |

Keyboard shortcuts in Context Explorer:

| Key | Action |
| --- | --- |
| ↑ / ↓ | Move between items |
| → / Enter | Expand the group or select a document |
| ← / Esc | Go back or close |
| Tab | Switch groups (browse mode) |

Once picked, the document appears as a pill above the input — click X to remove it. Attach multiple documents if needed.

Option 3: Ask naturally​

If you don't know exactly which document you need, just type a natural-language question — the AI will use semantic search to find relevant documents on its own.
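
Semantic search of this kind typically embeds the question and each document as vectors, then ranks documents by cosine similarity. A toy Python sketch with made-up vectors (the real embedding model and index are internal):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product over the product of vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend embeddings; real ones would come from an embedding model.
docs = {
    "BRD-Authentication": [0.9, 0.1, 0.0],
    "BRD-Payment": [0.1, 0.9, 0.2],
}
question_vec = [0.8, 0.2, 0.1]  # stand-in embedding of an auth question

best_doc = max(docs, key=lambda name: cosine(question_vec, docs[name]))
```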


Step 3: Follow the Thinking Process​

While the AI processes a question in Assisting mode, you will see thinking pills appear in real time above the answer:

| Tool | Display label |
| --- | --- |
| similarity_search | "Searching knowledge base..." |
| graph_query | "Running graph query..." |
| contextualize | "Expanding context..." |
| list_templates | "Loading query templates..." |

Each pill shows a spinning icon while running and turns into a ✓ once complete. After all tools finish, a "Thinking..." pill appears last before the AI starts streaming the answer.
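
The pill behavior can be pictured as a label lookup plus a running/done flag. The labels are the ones from the table above; the rendering itself is an assumption for illustration:

```python
# Tool-to-label mapping, as in the table above.
TOOL_LABELS = {
    "similarity_search": "Searching knowledge base...",
    "graph_query": "Running graph query...",
    "contextualize": "Expanding context...",
    "list_templates": "Loading query templates...",
}

def pill_text(tool: str, done: bool) -> str:
    # Spinner while running, check mark once complete (illustrative).
    prefix = "✓" if done else "⟳"
    return f"{prefix} {TOOL_LABELS[tool]}"
```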

Below the answer, an expandable "Sources (N)" section lets you see exactly which documents were used as sources.


Differences vs a regular Chatbot​

|  | Regular ChatGPT / Claude | Loyal Professor (Assisting) |
| --- | --- | --- |
| Answer source | General knowledge from training data | Real documents of the project itself |
| Accuracy | May "hallucinate" | Answers come with source citations, no fabrication |
| Team knowledge | None | Knows the team's BRD, FRD, tickets, specs, conventions |
| Up-to-date | Limited by training cutoff date | Updated with the Knowledge Base in real time |
| Credential safety | No masking mechanism | Auto-masks API keys/tokens/passwords before sending |
| Transparency | "Black box" answers | Displays tool calls + citations |

Concrete example:

Ask ChatGPT: "What auth method does this project use?"
→ "I don't have information about your project. Common methods are JWT, OAuth2, etc."

Ask Loyal Professor (Assisting): "What auth method does this project use?"
→ "Per the Foundation Document 'Security Standards' and BRD-Authentication, the project uses JWT with refresh tokens. Access tokens expire after 15 minutes, refresh tokens after 7 days. The endpoint /api/v1/auth/login returns both..." 📎 Sources: Security Standards, BRD-Authentication, API Spec v2.3


Usage examples by role​

πŸ‘©β€πŸ’Ό Business Analyst​

"List all BRDs related to the Payment module."
"Are the acceptance criteria of ticket PROJ-142 complete?"
"Does @BRD-Chat-Module conflict with @BRD-Authentication anywhere?"
"Does Feature X have a technical spec yet? If so, who owns it?"

πŸ—οΈ Solution Architect / Tech Lead​

"Summarize @Architecture-Overview β€” is any microservice hitting a bottleneck?"
"What database are we using? How many tables, and which ones relate to the users table?"
"Are there any Implementation Plans that touch the schema of the orders table?"
"What is the team's current coding convention for naming API endpoints?"

πŸ‘¨β€πŸ’» Developer​

"In the @Implementation-Plan for ticket PROJ-142, how is the Momo API call flow described?"
"Does authentication currently use JWT or sessions? How long until the token expires?"
"What validation does the POST /orders API endpoint have, and what's the return format?"
"Are there any error codes defined for the payment service?"

🧪 QA Engineer

"List the test cases created for ticket PROJ-142."
"Does @TC-002-Context-Mention cover the edge case when the user types @ repeatedly?"
"What is included in the test scope for the Authentication feature?"
"Is there any test case that touches JWT expiry?"

📋 Project Manager

"How many BRDs are currently in WAITING_REVIEW status?"
"Summarize the most important architectural decisions in the foundation docs."
"Which stages has the Payment feature gone through β€” is BRD / FRD / Test Cases done?"
"Which tickets in this sprint don't have an Approved BRD yet?"

🎓 Newly onboarded Developer

"What's the project's tech stack? Are there any team-specific coding conventions?"
"What's the overall architecture β€” main services and a typical request flow?"
"@Dev-Setup-Guide β€” what are the steps to run locally?"
"What is the team's code review process?"

Tips for effective chat usage​

✅ Specific questions with context

| ✅ Good question | ❌ Hard for AI |
| --- | --- |
| "Which API endpoint handles user login? What are the request/response schemas?" | "How does the system work?" |
| "@BRD-Payment — what is the business rule when the balance is insufficient?" | "Describe the payment module" |
| "In ticket PROJ-142, which DB tables does the FRD mention?" | "Our database" |
| "Compare @Implementation-Plan-v1 and @Implementation-Plan-v2" | "Compare the docs" |

✅ Attach context when the question is narrow

Instead of letting the AI search the entire Knowledge Base (slow and potentially less accurate), type @ to attach 1–3 directly relevant documents — the AI focuses entirely on that scope.

✅ Use follow-ups to go deeper

The AI remembers the context within the same conversation. Instead of starting over, ask follow-up questions:

You: "Summarize ticket PROJ-142"
AI: [returns summary]
You: "Which edge cases were mentioned?"
AI: [continues in the same context, no need to re-mention PROJ-142]
You: "Write a test case for edge case #2"

✅ Create a new conversation when changing topics

The AI remembers the context of the entire conversation — if you have been asking about Payment and want to switch to Authentication, create a New Chat so the AI is not confused by the old context.

✅ Rename conversations meaningfully

After finishing a topic, click the title to rename it to something memorable (e.g., "PROJ-142 test case analysis") — it's much easier to find later in History.

❌ Don't ask general-knowledge questions in Assisting mode

A question like "Explain what dependency injection is" should be asked in General mode. In Assisting, the AI will try to search the Knowledge Base first, which is slow and not optimal.


When answers are wrong or incomplete

If the AI's answer is inaccurate, the cause is usually that the Knowledge Base is not complete enough:

  • BRD/FRD is not Approved yet (only Approved docs enter the KB).
  • Foundation Documents are missing or incomplete — see System Foundation.
  • Jira/Confluence is not synced — check Jira & Confluence setup.

Verify before acting

Even with citations, AI answers can still misinterpret context. For important decisions (architecture changes, adding an endpoint, DB migrations, etc.), open the source document the AI cites and read it yourself to confirm before proceeding.