Proposal: Integrate mofa-local-llm into mofa core as built-in local inference backend #7

@SanchitKS12

Description

Background

The MoFA GSoC project ideas list includes this task as an open contribution item:

Integrate mofa-local-llm into mofa core as the built-in local inference module.

Goals

  • Enable first-class local inference support directly in the mofa core framework
  • Define a backend interface / trait in the runtime capable of loading and running models
  • Integrate mofa-local-llm as one of the default backends
  • Ensure Rust-native APIs and workflows (no external HTTP model servers)
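As a starting point for discussion, here is a minimal sketch of what such a backend interface could look like. The trait name `InferenceBackend`, the method set, and the stub `LocalLlmBackend` type are all illustrative assumptions, not the actual mofa or mofa-local-llm API:

```rust
// Hypothetical backend interface; names are illustrative, not the real mofa API.
pub trait InferenceBackend {
    /// Load a model from a local path.
    fn load(&mut self, model_path: &str) -> Result<(), String>;
    /// Run inference on a prompt and return the generated text.
    fn infer(&self, prompt: &str) -> Result<String, String>;
    /// Release model resources.
    fn unload(&mut self);
}

// Stub standing in for mofa-local-llm, just to show the shape of the API.
pub struct LocalLlmBackend {
    loaded: bool,
}

impl InferenceBackend for LocalLlmBackend {
    fn load(&mut self, _model_path: &str) -> Result<(), String> {
        self.loaded = true;
        Ok(())
    }

    fn infer(&self, prompt: &str) -> Result<String, String> {
        if !self.loaded {
            return Err("model not loaded".to_string());
        }
        Ok(format!("echo: {prompt}"))
    }

    fn unload(&mut self) {
        self.loaded = false;
    }
}

fn main() {
    let mut backend = LocalLlmBackend { loaded: false };
    backend.load("models/example.gguf").unwrap();
    let out = backend.infer("hello").unwrap();
    println!("{out}"); // prints "echo: hello"
    backend.unload();
}
```

Keeping the trait object-safe would let core hold backends as `Box<dyn InferenceBackend>` and swap implementations without changing call sites.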

Proposed Approach

  • Evaluate the current core mofa inference architecture
  • Define or extend an InferenceBackend trait in the core
  • Adapt mofa-local-llm to implement this interface
  • Add loading, lifecycle management, and inference paths from core into the backend
  • Write basic tests demonstrating inference pathways
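To make the loading/lifecycle step above concrete, here is a hypothetical sketch of how core could keep a registry of named backends, with mofa-local-llm registered as a default. The `BackendRegistry` type, the `"local-llm"` name, and the trait are assumptions for illustration only, not existing mofa code:

```rust
use std::collections::HashMap;

// Illustrative trait; not the actual mofa interface.
trait InferenceBackend {
    fn infer(&self, prompt: &str) -> Result<String, String>;
}

// Trivial stand-in for a real local-LLM backend.
struct EchoBackend;

impl InferenceBackend for EchoBackend {
    fn infer(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

// Core owns named backends behind trait objects, so implementations
// can be swapped or added without touching call sites.
struct BackendRegistry {
    backends: HashMap<String, Box<dyn InferenceBackend>>,
}

impl BackendRegistry {
    fn new() -> Self {
        Self { backends: HashMap::new() }
    }

    fn register(&mut self, name: &str, backend: Box<dyn InferenceBackend>) {
        self.backends.insert(name.to_string(), backend);
    }

    fn infer(&self, name: &str, prompt: &str) -> Result<String, String> {
        self.backends
            .get(name)
            .ok_or_else(|| format!("unknown backend: {name}"))?
            .infer(prompt)
    }
}

fn main() {
    let mut registry = BackendRegistry::new();
    registry.register("local-llm", Box::new(EchoBackend));
    let out = registry.infer("local-llm", "ping").unwrap();
    println!("{out}"); // prints "echo: ping"
    assert!(registry.infer("missing", "ping").is_err());
}
```

Basic tests could then exercise exactly this pathway: register a backend, run a prompt through the registry, and check both the success and unknown-backend error cases.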

In my view, built-in local LLM support should improve performance and make MoFA more self-contained, with no dependency on external services.


Happy to expand the approach and get early feedback before coding.
