Liquid AI's LFM2.5-1.2B model family is a major step forward for on-device AI, building on the device-optimized LFM2 architecture. The release includes Base, Instruct, Japanese, Vision-Language, and Audio-Language models, aimed at private, fast, always-on intelligence on any device.
Key Features:
- Text Models: LFM2.5-1.2B-Instruct delivers top performance on knowledge, instruction-following, math, and tool-use benchmarks while keeping inference fast (see the first sketch after this list).
- Japanese Model: LFM2.5-1.2B-JP is optimized specifically for Japanese, with state-of-the-art knowledge and instruction-following capabilities.
- Vision-Language Model: LFM2.5-VL-1.6B offers improved multi-image comprehension and multilingual vision understanding across seven languages (see the vision sketch below).
- Audio-Language Model: LFM2.5-Audio-1.5B processes speech and text natively with 8x faster audio detokenization than its predecessor.
- Deployment Support: Day-zero support across the llama.cpp, MLX, vLLM, ONNX, and LEAP runtimes, with optimized performance on AMD and Qualcomm NPUs (a local llama.cpp sketch follows below).
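
To make the text model concrete, here is a minimal chat-inference sketch using Hugging Face transformers. The repository ID, prompt, and generation settings are illustrative assumptions, not confirmed release details.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2.5-1.2B-Instruct"  # assumed Hub repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format a single-turn conversation with the model's chat template.
messages = [{"role": "user", "content": "Summarize the LFM2.5 family in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```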
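
For the vision-language model, a rough single-image inference sketch follows, assuming the checkpoint loads through transformers' AutoModelForImageTextToText with a LLaVA-style processor; the repo ID, image path, and message format are assumptions for illustration.

```python
from PIL import Image
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "LiquidAI/LFM2.5-VL-1.6B"  # assumed Hub repo ID
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)

image = Image.open("photo.jpg")  # placeholder local image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image in one sentence."},
]}]

# Render the chat template to a prompt string, then batch text and image together.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=image, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```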
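
Since day-zero llama.cpp support is listed, fully local inference could look like the llama-cpp-python sketch below; the GGUF file name and quantization level are hypothetical, not published artifacts.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="lfm2.5-1.2b-instruct-q4_k_m.gguf",  # hypothetical local GGUF file
    n_ctx=4096,  # context window; adjust for the device's memory budget
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What runs well on an NPU?"}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```

The same local-runtime pattern is what makes the on-device use cases below practical, since no network round-trip is involved.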
Use Cases:
- Local copilots and in-car assistants
- Japanese-language applications requiring cultural nuance
- Multimodal edge applications with vision and audio processing
- IoT devices, mobile devices, and embedded systems
- Enterprise deployments requiring open-weight, customizable models

