# Extension Overview
Extensions are modular components that add functionality to Jan. Each extension handles a specific set of features, and all extensions can be managed from the Extensions page in Settings.
## Core Extensions
### Cortex.cpp (v1.0.0)
The primary extension that manages both **local** and **remote inference** capabilities:
#### Local Engines
- **Llama.cpp**: Fast, efficient local inference engine for GGUF models
#### Remote Engines
- **Anthropic**: Access Claude models
- **Cohere**: Access Cohere's models
- **Groq**: High-performance inference
- **Martian**: Specialized model access
- **MistralAI**: Access Mistral models
- **OpenAI**: Access GPT models
- **OpenRouter**: Multi-provider model access
All engines can be enabled/disabled and configured individually through the Engines page in Settings.
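
As a rough illustration of how this list of engines might be modeled, the sketch below uses a hypothetical `EngineConfig` shape; the type and field names are assumptions for illustration and are not part of Jan's actual extension API.

```typescript
// Hypothetical shape for an engine entry; names are illustrative only.
interface EngineConfig {
  id: string;             // e.g. "llama-cpp", "openai", "anthropic"
  kind: "local" | "remote";
  enabled: boolean;       // toggled from the Engines page
  apiKey?: string;        // remote engines require credentials
}

// Example registry mirroring the engines listed above.
const engines: EngineConfig[] = [
  { id: "llama-cpp", kind: "local", enabled: true },
  { id: "openai", kind: "remote", enabled: false, apiKey: "<your-key>" },
  { id: "anthropic", kind: "remote", enabled: false, apiKey: "<your-key>" },
];

// Only enabled engines are offered for inference.
const active = engines.filter((e) => e.enabled).map((e) => e.id);
console.log(active); // ["llama-cpp"]
```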
### Jan Assistant
Enables assistant functionality, including Jan, the default assistant that can use any downloaded model. This extension manages:
- Default assistant configurations
- Model selection
- Conversation management
### Conversational
Enables conversations and persists their state to your filesystem.
### Model Management
Provides model exploration and seamless downloads:
- Model discovery and browsing
- Version control & configuration handling
- Download management
### System Monitoring
Provides system health and OS-level data:
- Hardware utilization tracking
- Performance monitoring
- Error logging
- Diagnostic tools
## Extension Architecture
### File Structure
```
jan/
├── extensions/
│   └── @janhq/
│       ├── inference-cortex-extension/
│       │   └── engines/
│       │       ├── cortex.llamacpp/
│       │       │   ├── mac-amd64/
│       │       │   └── mac-arm64/
│       │       └── remote-engines/
│       │           ├── openai/
│       │           ├── anthropic/
│       │           └── etc...
│       └── [other-extensions]/
└── settings/
    └── [extension-id]/
        └── settings.json
```
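
For orientation, here is a minimal sketch of how an extension's `settings.json` (the last entry in the tree above) could be generated; the entry fields shown (`key`, `title`, `description`, `value`) are assumptions, since each extension defines its own settings schema.

```typescript
import { writeFileSync } from "node:fs";

// Hypothetical settings entry; real keys vary per extension.
interface SettingEntry {
  key: string;
  title: string;
  description: string;
  value: string | number | boolean;
}

const settings: SettingEntry[] = [
  { key: "apiKey", title: "API Key", description: "Credential for a remote engine", value: "" },
  { key: "logLevel", title: "Log Level", description: "Verbosity of extension logs", value: "info" },
];

// Written under jan/settings/<extension-id>/ in the layout shown above.
writeFileSync("settings.json", JSON.stringify(settings, null, 2));
```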
### Engine Management
The Cortex extension handles:
- Engine variant management for local inference
- Remote API integrations
- Model routing through `/chat/completion` (see the request sketch after this list)
- Engine configuration and settings
- Performance optimization
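
To make the routing concrete, here is a minimal request sketch against the chat completion route. The base URL, port, and model id are assumptions; substitute the address and model configured in your own Jan install.

```typescript
// Minimal chat request sketch; URL, port, and model id below are assumptions.
async function chat(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:1337/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2-3b-instruct",            // any downloaded model id
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;       // OpenAI-compatible response shape
}

chat("Hello!").then(console.log).catch(console.error);
```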
### Installation and Updates
- Local engines are bundled with Jan
- Engine variants are managed via symlinks
- Remote engines require API credentials
- Updates are handled through the extension system
## Settings and Configuration
### Extension Settings
Each extension can have its own settings accessed through the gear icon:
- Feature toggles
- Performance settings
- API configurations
- User preferences
### Engine Settings
Configure engine-specific settings (see the parameter sketch after this list):
- Model parameters
- Hardware acceleration
- API credentials
- Performance tuning
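
As an example of what such engine settings can look like in practice, the sketch below uses parameter names common to llama.cpp-style engines (`ctx_len`, `ngl`); the exact keys and defaults Jan exposes may differ, so treat these as illustrative assumptions.

```typescript
// Illustrative model/engine parameters; key names follow common llama.cpp
// conventions and may not match the exact settings Jan exposes.
interface ModelParams {
  temperature: number; // sampling randomness
  top_p: number;       // nucleus sampling cutoff
  max_tokens: number;  // cap on response length
  ctx_len: number;     // context window size
  ngl: number;         // layers offloaded to the GPU (hardware acceleration)
}

const defaults: ModelParams = {
  temperature: 0.7,
  top_p: 0.95,
  max_tokens: 2048,
  ctx_len: 4096,
  ngl: 32,
};

// Lower the GPU offload when VRAM is limited.
console.log({ ...defaults, ngl: 16 });
```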
## Privacy and Security
Jan maintains a privacy-first approach:
- Local processing prioritized
- Secure API key management
- User-controlled data sharing
- Encrypted storage when needed
## Best Practices
### Engine Selection
- Use local engines for privacy-sensitive tasks
- Use remote engines for specialized capabilities
- Consider hardware requirements
- Balance performance needs
### Configuration Tips
- Keep engines updated
- Monitor system resources
- Manage API keys securely
- Test changes in isolation
## Troubleshooting
Common issues and solutions:
1. Engine not responding (see the reachability sketch after this list):
- Check engine status
- Verify hardware compatibility
- Review system logs
- Confirm API credentials
2. Performance issues:
- Monitor resource usage
- Check hardware acceleration
- Adjust model parameters
- Review concurrent operations
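
For the first check above, a quick reachability probe like the one below can help; the URL and `/v1/models` route are assumptions, so point it at the host and port your local API server actually uses.

```typescript
// Probe the local API server; the default URL here is an assumption.
async function engineResponds(baseUrl = "http://localhost:1337"): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/v1/models`, { signal: AbortSignal.timeout(3000) });
    return res.ok;
  } catch {
    return false; // server not running, wrong port, or request timed out
  }
}

engineResponds().then((ok) => console.log(ok ? "engine reachable" : "engine not responding"));
```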
## Future Development
Planned enhancements:
- Additional engine variants
- Enhanced model compatibility
- Improved performance monitoring
- Extended API integrations
- Mobile optimization