Whether you run a single installation or multiple instances across different classes, these practices will help you maintain and improve your system over time.
Testing Configuration Changes
Before deploying new conversation frameworks or learning supports to students, test them thoroughly.
Recommended workflow:
- Self-test: Try the new framework yourself with various prompts and questions
- Iterate: Refine based on what works and what doesn't
- Pilot with students: Have a few students test it and observe how they interact
- Debrief: Gather feedback from students about what was helpful or confusing
- Refine: Make improvements based on student experience
- Deploy: Roll out to full class
Why this matters:
- Conversation frameworks rarely work perfectly on first attempt
- Student interactions reveal issues you wouldn't anticipate
- Small refinements can significantly improve effectiveness
- Testing prevents deploying confusing or unhelpful frameworks
This iterative process is normal and valuable—each refinement makes your system more effective for your specific students and context.
Backup Strategy
Before making significant changes to your working system:
Create a backup (a scripted sketch follows this list):
- Copy your entire project directory to a timestamped folder
- Include all files: PHP, JavaScript, JSON, configuration
- Store backups separately from your live deployment
- Document what this backup represents (date, working state, purpose)
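If you want to script this step, here is a minimal PHP sketch that copies the project into a timestamped sibling folder and writes a short note about what the backup represents. The paths and note text are placeholders to adapt to your own layout.

```php
<?php
// backup.php - copy the live project into a timestamped folder outside the deployment.
// Paths are placeholders; adjust them to your own layout.
$source = __DIR__;                                              // live project directory
$target = dirname(__DIR__) . '/backups/backup-' . date('Y-m-d-His');

mkdir($target, 0755, true);

$items = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($source, FilesystemIterator::SKIP_DOTS),
    RecursiveIteratorIterator::SELF_FIRST
);

foreach ($items as $item) {
    $dest = $target . '/' . $items->getSubPathname();
    if ($item->isDir()) {
        mkdir($dest, 0755, true);
    } else {
        copy($item->getPathname(), $dest);
    }
}

// Document what this backup represents.
file_put_contents("$target/BACKUP-NOTES.txt", 'Working state before customization, ' . date('c') . PHP_EOL);
```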
Why this matters:
- Allows experimentation without risk
- Provides rollback point if customization doesn't work
- Enables comparison between versions
- Creates tested baseline for new deployments
Version control alternative: If you use Git, commit the working state before starting customization work; you can create branches for experimental changes.
Technical Adjustments
This section covers common technical modifications you might need to make.
Switching AI Providers
If you want to change from one AI provider to another (e.g., Gemini to OpenAI, or OpenAI to Anthropic):
What needs to change:
- API endpoint URL in your API proxy
- Request format (different providers structure requests differently)
- Response parsing (different providers structure responses differently)
- Authentication method (header format, API key location)
- Model names and available parameters
How to approach:
Provide your AI assistant with:
- Current API proxy code
- Documentation for your new provider's API
- Request: "Help me modify this API proxy to work with [new provider] instead of [current provider]"
The core architecture stays the same; you're only changing how the proxy communicates with the external API.
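As a concrete illustration, the sketch below assumes the proxy uses PHP's cURL extension and marks the four provider-specific pieces. Field names follow OpenAI's public chat completions API, with comments noting what changes for an Anthropic-style API; model names are only illustrative, and you should confirm details against your new provider's current documentation.

```php
<?php
// Hedged sketch of the provider-specific part of an API proxy, assuming cURL.
// Comments note what changes when pointing the same proxy at an Anthropic-style API.
function callProvider(string $apiKey, string $userMessage): string
{
    // 1. Endpoint URL (Anthropic-style: https://api.anthropic.com/v1/messages)
    $url = 'https://api.openai.com/v1/chat/completions';

    // 2. Authentication headers (Anthropic-style: 'x-api-key: ...' plus 'anthropic-version: ...')
    $headers = [
        'Content-Type: application/json',
        'Authorization: Bearer ' . $apiKey,
    ];

    // 3. Request body (Anthropic-style also requires 'max_tokens'; model name is illustrative)
    $body = [
        'model'    => 'gpt-4o-mini',
        'messages' => [['role' => 'user', 'content' => $userMessage]],
    ];

    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => $headers,
        CURLOPT_POSTFIELDS     => json_encode($body),
    ]);
    $response = curl_exec($ch);
    curl_close($ch);

    // 4. Response parsing (Anthropic-style: $data['content'][0]['text'])
    $data = json_decode($response, true);
    return $data['choices'][0]['message']['content'];
}
```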
Adjusting Model Parameters
Common parameters you might want to adjust:
Temperature: Controls randomness/creativity in responses
- Lower (0.0-0.5): More focused, deterministic responses
- Higher (0.7-1.0): More creative, varied responses
- Adjust in your API proxy where you construct the API request (see the sketch at the end of this subsection)
Max tokens: Controls response length
- Adjust based on whether you want concise or detailed responses
- Consider cost implications (longer responses = higher cost)
Safety settings: (Provider-specific)
- Some providers allow granular safety controls
- Balance student safety with allowing substantive discussions
- Review provider documentation for available options
Top-p, top-k, frequency penalty: Advanced parameters
- Consult your provider's documentation
- Test incrementally to understand effects
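As a point of reference, the sketch below shows roughly where these parameters sit in the request body your proxy builds. The field names follow an OpenAI-style chat completions API and are not guaranteed to match your provider; Gemini, for example, groups them under generationConfig.

```php
<?php
// Hedged sketch: generation parameters in an OpenAI-style request body.
$body = [
    'model'       => 'gpt-4o-mini',   // illustrative model name
    'messages'    => $messages,       // conversation history built elsewhere in your proxy
    'temperature' => 0.3,             // lower = more focused, higher = more varied
    'max_tokens'  => 800,             // caps response length (and cost)
    'top_p'       => 1.0,             // advanced sampling control; change cautiously
];
// A Gemini-style API nests these instead, e.g.:
// 'generationConfig' => ['temperature' => 0.3, 'maxOutputTokens' => 800, 'topP' => 1.0]
```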
Modifying Rate Limiting
Rate limiting prevents abuse by controlling how frequently requests can be made.
Current implementation: Set in your API proxy (requests per time period, typically per IP address)
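One common approach is sketched below; your template's actual code may differ, but the adjustable pieces (the minimum interval and the per-IP tracking) are the same.

```php
<?php
// Hedged sketch of per-IP rate limiting near the top of the API proxy.
$minSecondsBetweenRequests = 5;   // raise or lower this to tighten or loosen the limit
$ip        = $_SERVER['REMOTE_ADDR'] ?? 'unknown';
$stampFile = sys_get_temp_dir() . '/rate_' . md5($ip);   // per-IP timestamp file

$lastRequest = is_file($stampFile) ? (int) file_get_contents($stampFile) : 0;
if (time() - $lastRequest < $minSecondsBetweenRequests) {
    http_response_code(429);   // "Too Many Requests"
    echo json_encode(['error' => 'Please wait a moment before sending another message.']);
    exit;
}
file_put_contents($stampFile, (string) time());
```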
To adjust:
- Modify minimum time between requests
- Change whether tracking is per-IP or per-session
- Add different limits for different types of requests
- Consider your use case (classroom vs. homework, supervised vs. independent)
Trade-offs:
- Stricter limits: Lower costs, prevent rapid-fire questions, encourage thoughtful use
- Looser limits: Better user experience, support for rapid iteration, higher potential costs
Changing Safety Settings
Safety settings control what content the AI will engage with and how it responds to edge cases.
Provider-level controls: Most AI providers offer safety settings in their API (sketched after this list)
- Review provider documentation for available options
- Test thoroughly before deploying to students
- Balance protection with educational authenticity
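As one example, a Gemini-style API accepts per-category thresholds directly in the request body, assuming $body is the request array your proxy already builds. The category and threshold names below follow Google's published API but should be verified against current documentation before you rely on them.

```php
<?php
// Hedged sketch: provider-level safety thresholds in a Gemini-style request body.
// $body is assumed to be the request array the proxy already constructs.
$body['safetySettings'] = [
    ['category' => 'HARM_CATEGORY_HARASSMENT',        'threshold' => 'BLOCK_MEDIUM_AND_ABOVE'],
    ['category' => 'HARM_CATEGORY_HATE_SPEECH',       'threshold' => 'BLOCK_MEDIUM_AND_ABOVE'],
    ['category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'threshold' => 'BLOCK_ONLY_HIGH'],
];
```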
Application-level controls: Add custom logic to your API proxy (sketched after this list)
- Filter certain topics or keywords
- Log concerning interactions for review
- Provide custom error messages for blocked content
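A minimal sketch of such logic is below; the keyword list, log path, and 'reply' field name are placeholders for whatever your proxy and front end actually use, and $userMessage is assumed to come from your existing proxy code.

```php
<?php
// Hedged sketch: application-level keyword filter inside the API proxy.
$blockedKeywords = ['example-blocked-term'];   // placeholder list; define your own
$lowered = strtolower($userMessage);           // $userMessage is assumed to exist in your proxy

foreach ($blockedKeywords as $keyword) {
    if (strpos($lowered, $keyword) !== false) {
        // Log the attempt for later review (log path is a placeholder).
        error_log(date('c') . " blocked keyword '$keyword' from {$_SERVER['REMOTE_ADDR']}\n", 3, __DIR__ . '/blocked.log');
        // Return a custom message instead of forwarding the request to the AI provider.
        echo json_encode(['reply' => 'That topic is outside what this assistant can discuss. Try rephrasing, or ask your teacher.']);
        exit;
    }
}
```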
Consider your context:
- Student age and maturity
- Supervised vs. unsupervised use
- Subject matter (science discussions may need different settings than general chat)
- Educational goals vs. risk management
Managing Template Updates Across Installations
Pattern (a scripted sketch appears at the end of this section):
- Maintain one "template" installation with your best current configuration
- Test all changes thoroughly in the template
- Selectively copy improvements to production installations
- Keep configuration files (JSON) separate per installation
- Share core code improvements (PHP, JavaScript) across installations
What to sync:
- Bug fixes and security improvements (always)
- New features (selectively, based on need)
- Core system instruction improvements (consider per-installation needs)
What to keep separate:
- Conversation frameworks (may differ by class/subject)
- Learning supports (may differ by student population)
- API keys and environment variables (always separate)
- Installation-specific settings (titles, branding)
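As a sketch of how this pattern can be scripted, the following copies shared code from a template installation into each production installation while leaving per-installation files alone; every path and file name here is a placeholder for your own layout.

```php
<?php
// sync.php - push shared code from the template installation to production installations.
// All paths and file lists are placeholders; adjust to your own layout.
$template      = '/var/www/template';
$installations = ['/var/www/period1', '/var/www/period3'];

// Core code that is safe to share across installations.
$shared = ['api-proxy.php', 'chat.js', 'index.html'];

// Deliberately never copied: per-installation JSON frameworks and supports,
// .env files with API keys, and installation-specific branding.

foreach ($installations as $installation) {
    foreach ($shared as $file) {
        copy("$template/$file", "$installation/$file");
        echo "Updated $installation/$file\n";
    }
}
```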