Whether running a single installation or multiple instances across different classes, these practices help you maintain and improve your system over time.

Testing Configuration Changes

Before deploying new conversation frameworks or learning supports to students, test them thoroughly.

Recommended workflow:

  1. Self-test: Try the new framework yourself with various prompts and questions
  2. Iterate: Refine based on what works and what doesn't
  3. Pilot with students: Have a few students test it and observe how they interact
  4. Debrief: Gather feedback from students about what was helpful or confusing
  5. Refine: Make improvements based on student experience
  6. Deploy: Roll out to full class

Why this matters:

This iterative process is normal and valuable—each refinement makes your system more effective for your specific students and context.

Backup Strategy

Before making significant changes to your working system:

Create a backup:

  1. Copy your entire project directory to a timestamped folder
  2. Include all files: PHP, JavaScript, JSON, configuration
  3. Store backups separately from your live deployment
  4. Document what this backup represents (date, working state, purpose)

Why this matters:

If a customization breaks your working system, a backup lets you restore a known-good state in minutes instead of debugging mid-semester while students wait.

Version control alternative: If you use Git, commit the working state before beginning customization work, and create branches for experimental changes.

Technical Adjustments

Common technical modifications you might need to make.

Switching AI Providers

If you want to change from one AI provider to another (e.g., Gemini to OpenAI, or OpenAI to Anthropic):

What needs to change:

API endpoint: The URL your proxy sends requests to

Authentication: How the API key is passed (header name and format differ between providers)

Request/response format: The JSON structure each provider expects, and how the reply text is extracted from the response

How to approach:

You don't need to rewrite the proxy from scratch; describe the change to an AI coding assistant and let it draft the modification.

Provide your AI assistant with:

  1. Your current proxy code
  2. The new provider's API documentation (or a link to it)
  3. The model you want to use and any parameters you rely on
The core architecture remains the same - you're just changing how the proxy communicates with the external API.
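To illustrate how little changes, here is a sketch (in JavaScript for readability, though your proxy implements this in PHP) of the provider-specific pieces: endpoint, auth header, and body shape. The endpoints and field names follow the public OpenAI and Gemini REST APIs at the time of writing, and the model names are examples; verify both against current provider documentation.

```javascript
// Only this mapping changes when you switch providers;
// the rest of the proxy stays the same.
function buildRequest(provider, apiKey, userText) {
  if (provider === "openai") {
    return {
      url: "https://api.openai.com/v1/chat/completions",
      headers: {
        "Authorization": `Bearer ${apiKey}`, // auth style differs per provider
        "Content-Type": "application/json",
      },
      body: {
        model: "gpt-4o-mini", // example model name
        messages: [{ role: "user", content: userText }],
      },
    };
  }
  if (provider === "gemini") {
    return {
      url: "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent",
      headers: {
        "x-goog-api-key": apiKey, // Gemini passes the key in its own header
        "Content-Type": "application/json",
      },
      body: {
        contents: [{ parts: [{ text: userText }] }],
      },
    };
  }
  throw new Error(`Unknown provider: ${provider}`);
}
```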

Adjusting Model Parameters

Common parameters you might want to adjust:

Temperature: Controls randomness/creativity in responses (lower values give more focused, consistent answers; higher values give more varied ones)

Max tokens: Controls the maximum length of a response

Safety settings: (Provider-specific) Thresholds for filtering categories of harmful content

Top-p, top-k, frequency penalty: Advanced sampling parameters; most classroom deployments can leave these at their defaults
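To make the knobs concrete, here is a sketch of how these parameters might appear in a request body. Exact names vary by provider (OpenAI-style naming is shown here; Gemini nests similar fields under generationConfig), and the values are illustrative, not recommendations.

```javascript
// Illustrative parameter block in OpenAI-style naming; values are examples only.
const generationParams = {
  temperature: 0.7,       // 0 = most deterministic; ~1+ = more varied/creative
  max_tokens: 512,        // hard cap on response length
  top_p: 0.95,            // nucleus sampling; usually fine at the default
  frequency_penalty: 0.0, // raising above 0 discourages repetition
};

// Tightening the settings for, say, a fact-recall activity:
const factRecallParams = { ...generationParams, temperature: 0.2, max_tokens: 256 };
```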

Modifying Rate Limiting

Rate limiting prevents abuse by controlling how frequently requests can be made.

Current implementation: Set in your API proxy (requests per time period, typically per IP address)

To adjust:

Change the request-count and time-window values in your proxy code, then verify that normal classroom activity is not blocked.

Trade-offs:

Stricter limits protect your API budget and deter abuse, but can lock out legitimate students during bursts of activity (for example, a whole class submitting at once). Looser limits improve usability but raise your cost exposure if the endpoint is discovered and abused.
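The idea behind the limiter can be sketched as follows (in JavaScript for illustration; a PHP proxy implements the same logic, typically persisting counts to a file or session rather than holding them in memory). The limit and window values are examples.

```javascript
// Fixed-window rate limiter: allow at most `limit` requests per `windowMs` per IP.
// In-memory only; a real proxy must persist counts between requests.
function makeRateLimiter(limit, windowMs) {
  const hits = new Map(); // ip -> { count, windowStart }
  return function allow(ip, now = Date.now()) {
    const entry = hits.get(ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(ip, { count: 1, windowStart: now }); // start a fresh window
      return true;
    }
    entry.count += 1;
    return entry.count <= limit; // reject once over the limit
  };
}
```

Raising `limit` or shortening `windowMs` loosens the policy; the reverse tightens it.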

Changing Safety Settings

Safety settings control what content the AI will engage with and how it responds to edge cases.

Provider-level controls: Most AI providers offer safety settings in their API

Application-level controls: Add custom logic to your API proxy

Consider your context:

Student age, subject matter, and school policy all shape how strict your settings should be. Younger students generally warrant stricter filtering, while some subjects (health, history, literature) may need looser settings to be discussable at all.
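An application-level control can be as simple as screening each prompt in the proxy before it reaches the provider. This is a minimal sketch (in JavaScript for illustration); the blocked-term list is a placeholder for whatever your policy requires, and real policies need more nuance than substring matching.

```javascript
// Screen an incoming prompt before forwarding it to the AI provider.
// The term list is a placeholder; substring checks are a starting point, not a policy.
const blockedTerms = ["example-banned-topic"];

function screenPrompt(prompt) {
  const lower = prompt.toLowerCase();
  for (const term of blockedTerms) {
    if (lower.includes(term)) {
      // Return a student-friendly refusal instead of calling the API.
      return {
        ok: false,
        message: "That topic isn't available here. Ask your teacher if you think this is a mistake.",
      };
    }
  }
  return { ok: true };
}
```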

Managing Template Updates Across Installations

Pattern:

  1. Maintain one "template" installation with your best current configuration
  2. Test all changes thoroughly in the template
  3. Selectively copy improvements to production installations
  4. Keep configuration files (JSON) separate per installation
  5. Share core code improvements (PHP, JavaScript) across installations

What to sync:

Core code and bug fixes (PHP, JavaScript) that apply to every installation

What to keep separate:

Per-class configuration files (JSON), API keys, and any class-specific prompts, frameworks, or rate limits