Background
The MoFA GSoC project ideas list includes this task as an open contribution item: integrate `mofa-local-llm` into mofa core as the built-in local inference module.
Goals
- Enable first-class local inference support directly in the `mofa` core framework
- Define a backend interface / trait in the runtime capable of loading and running models
- Integrate `mofa-local-llm` as one of the default backends
- Ensure Rust-native APIs and workflows (no external HTTP model servers)
Proposed Approach
- Evaluate the current core `mofa` inference architecture
- Define or extend an `InferenceBackend` trait in the core
- Adapt `mofa-local-llm` to implement this interface
- Add loading, lifecycle management, and inference paths from core into the backend
- Write basic tests demonstrating inference pathways
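To make the proposal concrete, here is a minimal sketch of what the trait and the adapter could look like. All names here (`InferenceBackend`, `BackendError`, `LocalLlmBackend`, the method signatures) are illustrative assumptions for discussion, not the actual mofa or mofa-local-llm API:

```rust
/// Hypothetical error type a backend may surface while loading or
/// running a model. Names are assumptions, not the real mofa API.
#[derive(Debug)]
pub enum BackendError {
    LoadFailed(String),
    InferenceFailed(String),
}

/// Sketch of a core-side trait covering the lifecycle discussed above:
/// load a model, run inference against it, and release it.
pub trait InferenceBackend {
    fn load(&mut self, model_path: &str) -> Result<(), BackendError>;
    fn infer(&self, prompt: &str) -> Result<String, BackendError>;
    fn unload(&mut self);
}

/// Stand-in for what a `mofa-local-llm` adapter might look like.
/// The echo-style `infer` is a placeholder; a real implementation
/// would call into the local LLM runtime.
pub struct LocalLlmBackend {
    loaded_model: Option<String>,
}

impl LocalLlmBackend {
    pub fn new() -> Self {
        Self { loaded_model: None }
    }
}

impl InferenceBackend for LocalLlmBackend {
    fn load(&mut self, model_path: &str) -> Result<(), BackendError> {
        // Real code would initialize weights/runtime state here.
        self.loaded_model = Some(model_path.to_string());
        Ok(())
    }

    fn infer(&self, prompt: &str) -> Result<String, BackendError> {
        match &self.loaded_model {
            Some(model) => Ok(format!("[{model}] echo: {prompt}")),
            None => Err(BackendError::InferenceFailed("no model loaded".into())),
        }
    }

    fn unload(&mut self) {
        self.loaded_model = None;
    }
}

fn main() {
    // Usage sketch: core code would hold a `Box<dyn InferenceBackend>`
    // and drive the same load -> infer -> unload lifecycle.
    let mut backend = LocalLlmBackend::new();
    backend.load("models/example.gguf").unwrap();
    let out = backend.infer("hello").unwrap();
    assert!(out.contains("hello"));
    backend.unload();
    println!("{out}");
}
```

A trait-object design (`Box<dyn InferenceBackend>`) would let core register multiple backends and select one at runtime; if compile-time dispatch or async inference is preferred, the trait shape would change accordingly.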
As far as I can tell, inline local LLM support should improve performance and make MoFA more self-contained by removing the dependency on external services.
Happy to expand the approach and get early feedback before coding.