Christian Johnson

ARC Core v1 – real-time LoRA fine-tuning for open LLMs (benchmarks & code)

Hi all,

I built a small library that keeps a language model in training mode while it chats. ARC Core uses LoRA adapters plus elastic weight consolidation (EWC) to update weights on the fly, then "sleeps" every N turns to consolidate memories.
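For anyone unfamiliar with EWC: the idea is a quadratic penalty, weighted by the Fisher information, that anchors parameters near the values that served earlier data, so continual updates don't overwrite them. Here's a toy one-parameter sketch of that mechanism; the function names are illustrative, not ARC Core's API.

```python
# Toy illustration of elastic weight consolidation (EWC).
# After learning "task A", a Fisher-weighted quadratic penalty anchored
# at the task-A optimum keeps later updates from erasing it.
# These names are illustrative only, not the ARC Core API.

def grad_task(theta, target):
    # gradient of the task loss 0.5 * (theta - target)^2
    return theta - target

def ewc_grad(theta, theta_star, fisher, lam):
    # gradient of the EWC penalty (lam/2) * F * (theta - theta_star)^2
    return lam * fisher * (theta - theta_star)

def sgd(theta, target, theta_star=None, fisher=0.0, lam=0.0,
        lr=0.1, steps=200):
    for _ in range(steps):
        g = grad_task(theta, target)
        if theta_star is not None:
            g += ewc_grad(theta, theta_star, fisher, lam)
        theta -= lr * g
    return theta

# Learn task A (optimum at 1.0); for this quadratic loss the Fisher is 1.0.
theta_a = sgd(0.0, target=1.0)

# Plain fine-tuning on task B (optimum 3.0) forgets task A completely...
plain = sgd(theta_a, target=3.0)

# ...while EWC settles between the two optima, retaining task A.
ewc = sgd(theta_a, target=3.0, theta_star=theta_a, fisher=1.0, lam=1.0)
print(round(plain, 2), round(ewc, 2))  # → 3.0 2.0
```

In ARC Core the same kind of penalty is applied to the LoRA adapter weights rather than a single scalar, and the periodic "sleep" phase is where the anchor point and Fisher estimates get refreshed.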

Key numbers on Tiny-Dolphin-8B (30 evaluation rounds):

- −37% perplexity (fluency)

- −54% median latency

- +8% coherence score

It's a single pip install: pip install metisos-arc-core

GitHub repo: https://github.com/metisos/arc_coreV1

Apache 2.0 license, no server dependency. Full benchmarks and code are in the README. Feedback welcome.
