Qwen3.5 is a collection of large language models hosted on Hugging Face by the Qwen organization. The latest iteration of the Qwen series, it features multimodal image-text-to-text models spanning parameter sizes from 0.8B to 397B.
## Key Features
- Multimodal Capabilities: All models in the collection support image-text-to-text tasks, enabling visual understanding and text generation grounded in images
- Scalable Parameter Sizes: Models range from lightweight 0.8B parameter versions to massive 397B parameter models for different computational needs
- Multiple Variants: Includes base models, FP8 quantized versions, and GPTQ-Int4 quantized models for optimized deployment
- Inference Provider Support: Many models are available through inference providers like Together, Novita, and Featherless-AI
- Tool Calling Support: Larger models (397B, 122B, 35B, 27B) feature tool calling capabilities for enhanced functionality
- Community Adoption: Models see significant download counts and active community engagement
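To illustrate how an image-text-to-text model from the collection is typically queried, the sketch below builds a chat-style message pairing an image with a text instruction. The message schema mirrors the format commonly used by Qwen vision-language chat templates; the image URL and prompt are placeholders, and the exact schema for these models is an assumption, not confirmed by the collection page.

```python
# Sketch of a multimodal chat message for an image-text-to-text model.
# The schema follows the common Qwen vision-language chat-template
# convention (role + mixed image/text content); it is an assumption here.

def build_vision_message(image_url: str, prompt: str) -> list[dict]:
    """Return a chat message pairing an image with a text instruction."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_url},  # placeholder URL
                {"type": "text", "text": prompt},
            ],
        }
    ]

messages = build_vision_message(
    "https://example.com/chart.png",
    "Describe the trend shown in this chart.",
)
```

In a real deployment, such a message list would typically be rendered with the model processor's chat template before generation.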
## Use Cases
- AI Research and Development: Researchers can experiment with different model sizes and capabilities
- Multimodal Applications: Developers building applications that require both visual and textual understanding
- Production Deployment: Organizations can select appropriate model sizes based on computational constraints and performance requirements
- Model Optimization: Developers interested in quantized models (FP8, GPTQ-Int4) for efficient deployment
- Tool-Enhanced AI: Applications requiring tool calling capabilities for extended functionality
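For the tool-calling use case, a minimal sketch of what a tool definition and dispatcher might look like is shown below. It uses the OpenAI-style function schema that Qwen chat templates commonly accept; the `get_weather` tool and its stubbed result are hypothetical, purely for illustration.

```python
# Sketch of a tool definition for models with tool-calling support.
# OpenAI-style function schema; the tool itself is hypothetical.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

def dispatch_tool_call(call: dict) -> str:
    """Route a model-emitted tool call to a local handler (stubbed)."""
    if call["name"] == "get_weather":
        return f"Sunny in {call['arguments']['city']}"  # stubbed result
    raise ValueError(f"unknown tool: {call['name']}")
```

In practice, the tool list is passed alongside the conversation, the model emits a structured tool call, and the application runs the handler and feeds the result back as a tool message.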
## Technical Specifications
The collection comprises 21 models with varying parameter counts, download statistics, and inference options. Models are updated regularly, most recently in March 2026. The collection has received 861 upvotes and includes contributions from notable members of the AI community.
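When choosing among the base, FP8, and GPTQ-Int4 variants, a back-of-the-envelope weight-memory estimate can guide the decision. The per-parameter byte counts below are standard for each format (BF16 = 2, FP8 = 1, Int4 = 0.5); actual memory usage is higher once the KV cache and activations are included.

```python
# Rough weight-memory estimate from parameter count and precision.
# These figures cover weights only; real deployments also need memory
# for the KV cache and activations.
BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

def weight_memory_gb(params_billions: float, precision: str) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return params_billions * BYTES_PER_PARAM[precision]

# Example: a 35B model needs roughly 70 GB in BF16,
# 35 GB in FP8, and 17.5 GB in GPTQ-Int4 for weights alone.
for p in ("bf16", "fp8", "int4"):
    print(f"35B @ {p}: {weight_memory_gb(35, p):.1f} GB")
```

The same arithmetic scales up: the 397B model needs on the order of 400 GB of weight memory even at FP8, which is why the quantized variants matter for production deployment.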

