Goal
Transform your single configuration file into a modular system with separate libraries for conversation frameworks and learning supports. This enables you to build a library of different educational approaches and combine them flexibly for different learning contexts.
By the end of this stage, you'll have a configuration system where you can select a conversation framework (like "Biology Study Partner" or "Writing Assistant"), combine it with multiple learning supports (like "Socratic Questioning" or "Step-by-Step Scaffolding"), and have each conversation preserve its instruction snapshot even when you change the global configuration.
This is the pedagogical heart of the system—the architecture that enables true educator autonomy and curriculum-aligned modular learning.
Before You Begin
Pre-flight checklist:
- Stage 3 is complete and working
- Review architecture.md sections on "Configuration System", "Conversation Instruction Snapshots", and "Storage Architecture"
- Review testing.md for production testing strategies
- Update your TODO list with Stage 4 tasks
Components to Build
1. Conversation Frameworks Library File (JSON)
A JSON file containing an array of complete conversation frameworks. Each framework defines how the AI should behave for a particular learning context or interaction type.
Why it exists: Separates conversation behavior from the core system. You can build a library of different frameworks without editing the main configuration repeatedly. Switch frameworks by changing an ID reference.
What it contains:
- Array of conversation framework objects
- Each framework includes: title, description, full prompt text, optional example interactions
- Referenced by array position (index) or unique ID
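To make the structure concrete, here is a minimal sketch of such a library file. The file name, field names (id, prompt, exampleInteractions, and so on), and example content are illustrative assumptions, not a required schema:

```json
[
  {
    "id": "biology-study-partner",
    "title": "Biology Study Partner",
    "description": "Helps students review biology concepts through guided discussion.",
    "prompt": "You are a study partner for a high-school biology course. Ask the student which topic they are reviewing, then help them explain it in their own words before adding detail.",
    "exampleInteractions": [
      {
        "student": "Can you explain osmosis?",
        "assistant": "Let's start with what you already know about diffusion. How would you describe it?"
      }
    ]
  },
  {
    "id": "writing-assistant",
    "title": "Writing Assistant",
    "description": "Supports students through drafting and revision.",
    "prompt": "You are a writing coach. Focus on structure and clarity, and never write full paragraphs for the student."
  }
]
```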
2. Learning Supports Library File (JSON)
A JSON file containing modular instructional strategies that modify how the AI assists students.
Why it exists: Enables mixing and matching different teaching approaches. One conversation might use Socratic questioning, another might combine scaffolding with explicit modeling. This modularity is what makes the system pedagogically powerful.
What learning supports are: Instructional modifications that adapt the AI's assistance for different student needs or learning goals. Examples:
- English Language Learner Support: Simplified vocabulary, explicit cultural context, visual descriptions
- Chunking Strategy: Breaks complex tasks into smaller, manageable steps with checkpoints
- Socratic Questioning: Guides learning through questions rather than direct answers
- Worked Examples: Shows step-by-step problem solving before asking the student to try
What it contains:
- Array of learning support objects
- Each support includes: unique ID, title, prompt text, usage notes
- Supports can be combined (multiple selected at once)
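A matching supports library might look like the sketch below; again, the field names and example supports are illustrative, and you should shape them to your own design:

```json
[
  {
    "id": "socratic-questioning",
    "title": "Socratic Questioning",
    "prompt": "Guide the student by asking questions rather than giving direct answers. Only confirm an answer after the student has reasoned it out.",
    "usageNotes": "Works well for concept review; avoid for time-pressured tasks."
  },
  {
    "id": "ell-support",
    "title": "English Language Learner Support",
    "prompt": "Use simplified vocabulary, define idioms, and add brief cultural context where needed.",
    "usageNotes": "Can be combined with any conversation framework."
  }
]
```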
3. Enhanced Configuration File
Update your main configuration file to reference conversation frameworks and learning supports by ID instead of containing full text.
Why it exists: Keeps the main config small and manageable. You select which conversation framework and which supports to use, and the server assembles the complete instruction.
What changes:
- Replace full system instruction text with conversation framework ID reference
- Add array of selected learning support IDs
- Keep core application settings (title, model, etc.)
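With unique string IDs, the main configuration could shrink to something like this sketch (key names and values are assumptions; keep whatever core settings your earlier stages already use):

```json
{
  "title": "Biology 101 Study Helper",
  "model": "your-chosen-model",
  "conversationFrameworkId": "biology-study-partner",
  "learningSupportIds": ["socratic-questioning", "ell-support"]
}
```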
4. Enhanced Instruction Assembly Component
Upgrade your server-side assembly logic to combine multiple sources: core system instruction, selected conversation framework, and selected learning supports.
Why it exists: This is where the magic happens—assembling a complete, coherent instruction from modular pieces. The AI provider sees one complete instruction, but you built it from combinable parts.
What it does:
- Read main configuration for selected IDs
- Look up conversation framework by ID from frameworks library
- Look up each learning support by ID from supports library
- Combine all components into single complete instruction
- Return assembled instruction
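As a rough sketch of this component, assuming Node-style file access, string IDs, and the illustrative file names used above, the assembly step might look like the TypeScript below. Treat the types, section separators, and error handling as starting points rather than a fixed design:

```ts
import { readFile } from "node:fs/promises";

interface Framework { id: string; title: string; prompt: string; }
interface Support { id: string; title: string; prompt: string; }
interface MainConfig {
  conversationFrameworkId: string;
  learningSupportIds: string[];
}

// Core behavior that applies regardless of the selected framework or supports.
const CORE_INSTRUCTION =
  "You are an educational assistant. Follow the conversation framework and learning supports below.";

async function loadJson<T>(path: string): Promise<T> {
  return JSON.parse(await readFile(path, "utf8")) as T;
}

// Assemble one complete system instruction from the modular pieces.
export async function assembleInstruction(): Promise<string> {
  const config = await loadJson<MainConfig>("config.json");
  const frameworks = await loadJson<Framework[]>("frameworks.json");
  const supports = await loadJson<Support[]>("supports.json");

  const framework = frameworks.find(f => f.id === config.conversationFrameworkId);
  if (!framework) {
    throw new Error(`Unknown conversation framework: ${config.conversationFrameworkId}`);
  }

  // Silently skip unknown support IDs here; stricter handling is shown later.
  const selectedSupports = config.learningSupportIds
    .map(id => supports.find(s => s.id === id))
    .filter((s): s is Support => s !== undefined);

  // Core instruction first, then the framework, then each support, clearly separated.
  return [
    CORE_INSTRUCTION,
    `## Conversation framework: ${framework.title}\n${framework.prompt}`,
    ...selectedSupports.map(s => `## Learning support: ${s.title}\n${s.prompt}`),
  ].join("\n\n");
}
```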
5. Conversation Instruction Snapshots
Update your conversation storage to save a complete copy of the assembled instruction with each conversation.
Why it exists: Pedagogical consistency. When a student starts a conversation with "Biology Study Mode" active, that conversation should maintain that mode even if you later switch to "Writing Assistant" for new conversations.
What changes:
- When creating new conversation, get complete assembled instruction
- Store full instruction text in conversation object (not just IDs)
- When loading saved conversation, use its frozen instruction instead of assembling fresh
- Display which instruction type the conversation uses
Note: The initial AI greeting (implemented in Stage 2) will automatically reflect the conversation framework because the framework is part of the system instruction sent with the hidden trigger message.
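One way to attach and reuse the snapshot, sketched in TypeScript. The Conversation shape, the ./assemble import, and the helper names are hypothetical and only illustrate the freeze-on-create, reuse-on-load idea:

```ts
import { assembleInstruction } from "./assemble"; // the assembly sketch shown earlier (assumed path)

interface Conversation {
  id: string;
  frameworkTitle: string;       // label shown in the conversation list
  systemInstruction: string;    // frozen snapshot of the assembled instruction
  messages: { role: "user" | "assistant"; content: string }[];
  createdAt: string;
}

// On creation: assemble once and freeze the result inside the conversation object.
async function createConversation(id: string, frameworkTitle: string): Promise<Conversation> {
  return {
    id,
    frameworkTitle,
    systemInstruction: await assembleInstruction(),
    messages: [],
    createdAt: new Date().toISOString(),
  };
}

// On load: always prefer the stored snapshot over a freshly assembled instruction.
function instructionFor(conversation: Conversation): string {
  return conversation.systemInstruction;
}
```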
6. Path-Based Storage Namespacing
Implement storage namespacing based on URL path, allowing multiple installations to coexist on the same domain.
Why it exists: One domain can host /math-class/ and /biology-class/ as separate installations with independent conversation storage. No cross-contamination of student data.
What it does:
- Derive namespace from URL path
- Include namespace in all storage operations
- Each namespace maintains its own independent conversation list
- Maximum 15 conversations per namespace
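A small sketch of the namespacing idea, assuming browser-side storage (for example localStorage) and that the first URL path segment identifies the installation; adjust both assumptions to your routing and storage manager:

```ts
// Derive a storage namespace from the URL path, e.g. "/math-class/chat" -> "math-class".
function namespaceFromPath(pathname: string): string {
  const first = pathname.split("/").filter(Boolean)[0];
  return first ?? "default";
}

// Prefix every storage key with the namespace so installations stay isolated.
function storageKey(namespace: string, key: string): string {
  return `${namespace}:${key}`;
}

// Example: conversations for /biology-class/ live under "biology-class:conversations".
const key = storageKey(namespaceFromPath(window.location.pathname), "conversations");
```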
Implementation Tasks
Update your TODO list with these tasks and track progress as you work.
Create Library Files
- Design conversation frameworks library JSON structure
- Create conversation frameworks library file with at least 2-3 example frameworks
- Design learning supports library JSON structure
- Create learning supports library file with at least 2-3 example supports
- Decide on ID scheme (array indices, unique string IDs, or numeric IDs)
Update Main Configuration
- Replace full instruction text with conversation framework ID reference
- Add array for selected learning support IDs
- Keep existing settings (title, model, etc.)
- Ensure configuration is still valid JSON
Enhance Configuration Handler
- Add ability to read conversation frameworks library file
- Add ability to read learning supports library file
- Validate that referenced IDs exist in libraries
- Handle missing libraries or invalid IDs gracefully
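For the graceful-handling task above, one option is to collect validation problems instead of throwing, so the server can log them and fall back to a safe default (for example, the core instruction only). This is a sketch under assumed config and library shapes, not a prescribed API:

```ts
// Check that the IDs referenced in the main config exist in the loaded libraries.
// Returns human-readable problems rather than throwing, so callers decide how to degrade.
function validateConfig(
  config: { conversationFrameworkId: string; learningSupportIds: string[] },
  frameworkIds: Set<string>,
  supportIds: Set<string>,
): string[] {
  const problems: string[] = [];
  if (!frameworkIds.has(config.conversationFrameworkId)) {
    problems.push(`Unknown conversation framework id: ${config.conversationFrameworkId}`);
  }
  for (const id of config.learningSupportIds) {
    if (!supportIds.has(id)) {
      problems.push(`Unknown learning support id: ${id}`);
    }
  }
  return problems;
}
```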
Upgrade Instruction Assembly
- Read selected conversation framework ID from main config
- Look up conversation framework in library by ID
- Read selected support IDs from main config
- Look up each support in library by ID
- Combine core system instruction + conversation framework + all supports into single instruction
- Return complete assembled instruction
- Handle cases where framework or supports are missing
Implement Conversation Snapshots
- When creating new conversation, get complete assembled instruction
- Store full instruction text in conversation object
- Store instruction type label (e.g., title) for display
- When loading conversation, use its frozen instruction
- Update API proxy to accept instruction from conversation or assemble fresh for new conversations
- Display which instruction framework each conversation uses
Add Path-Based Namespacing
- Extract path from URL to generate namespace
- Update storage manager to use namespace in all operations
- Test that different paths maintain separate conversation stores
- Verify namespace is consistent across page loads
Test Modular Configuration
- Create conversation with one conversation framework
- Switch to different framework in config
- Create another conversation
- Verify first conversation still uses original framework
- Verify second conversation uses new framework
- Test combining different learning supports
- Verify each combination produces appropriate AI behavior
Test Namespacing
- If you have multiple paths, verify conversations remain separate
- Check that conversation limit (15) applies per namespace, not globally
Success Indicators
When Stage 4 is complete, you should observe these behaviors:
Modular Configuration Working
- ✓ Main configuration references conversation frameworks and learning supports by ID
- ✓ Changing selected framework ID in config affects new conversations
- ✓ Changing selected support IDs affects new conversations
- ✓ You can combine multiple learning supports in one instruction
Conversation Frameworks Library
- ✓ You have at least 2-3 different conversation frameworks defined
- ✓ Switching between frameworks produces noticeably different AI behavior
- ✓ Each framework loads and assembles correctly
Learning Supports Library
- ✓ You have at least 2-3 different learning supports defined
- ✓ You can enable/disable individual supports by changing ID array
- ✓ Multiple supports combine into coherent instruction
Conversation Snapshots
- ✓ Each conversation displays which instruction framework it uses
- ✓ Saved conversations maintain their original instruction even after config changes
- ✓ Old conversations don't suddenly change behavior when you update config
- ✓ You can have conversations with different frameworks open simultaneously
Instruction Assembly
- ✓ Server successfully combines core system instruction + conversation framework + supports into single instruction
- ✓ Assembled instruction is coherent and complete
- ✓ AI behavior reflects all combined components
Path Namespacing
- ✓ If using multiple paths, each maintains independent conversation storage
- ✓ Conversations in one namespace don't appear in another
- ✓ URL path determines which conversation store is accessed
Testing and Verification
Before proceeding to Stage 5:
- Test framework switching: Create conversation, change framework in config, create another conversation, verify both maintain correct frameworks
- Test support combinations: Try different combinations of learning supports and verify AI behavior reflects changes
- Test snapshots: Save multiple conversations with different frameworks, change config multiple times, verify saved conversations unchanged
- Test namespacing: If applicable, verify different paths maintain separate stores
- Test edge cases: What happens if you reference a framework ID that doesn't exist? What if the supports array is empty?
- Review conversation list: Verify UI clearly shows which framework each conversation uses
Consult testing.md for additional verification strategies and troubleshooting guidance.
What You've Accomplished
With Stage 4 complete, you have:
- Built a truly modular pedagogical system - the core value proposition of this architecture
- Enabled curriculum-aligned learning - different conversation frameworks and approaches without rebuilding
- Preserved pedagogical consistency - students' conversations maintain their instructional context
- Created educator autonomy - full control over how AI supports learning in your context
This is the stage where the system becomes pedagogically sophisticated. You can now adapt the AI's behavior for different subjects, different learning goals, and different student needs—all through configuration, not code changes.
Stage 5 will add production-ready features (security, error handling, UI polish) to make this robust and safe for student use.
STOP: Before Proceeding to Stage 5
Do not move forward until:
- ✓ Conversation frameworks library and learning supports library are working
- ✓ You can switch between different frameworks and see behavior changes
- ✓ Conversation snapshots preserve instruction frameworks correctly
- ✓ Multiple conversations with different frameworks coexist without conflicts
- ✓ Path namespacing works (if applicable to your deployment)
When Stage 4 is working completely:
- Update your TODO list - mark Stage 4 tasks complete, add any notes or learnings
- Create a git commit with your working Stage 4 code
- Update any other contextual documents you're maintaining
- Proceed to stage-5-context.md