Tether Unveils AI System to Run Large Models on Smartphones

Tether, issuer of the world’s largest stablecoin by market capitalization, USDT, has released a new AI training framework that it says helps fine-tune large language models on consumer hardware, including smartphones and non-Nvidia GPUs.

According to a Tuesday announcement, the system, part of Tether's QVAC platform, uses Microsoft's BitNet architecture and LoRA techniques to reduce memory and compute requirements, potentially lowering the cost and hardware barriers to developing AI models.
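To illustrate why LoRA shrinks the resources needed for fine-tuning (this is a generic sketch of the technique, not Tether's code, and the layer dimensions and rank below are assumed, not taken from the announcement): instead of updating a full d_in × d_out weight matrix, LoRA trains two small low-rank factors, cutting the trainable parameter count by orders of magnitude.

```python
# Generic LoRA parameter-count sketch (illustrative only; dimensions assumed).

def full_trainable_params(d_in: int, d_out: int) -> int:
    """Trainable parameters when fine-tuning one full weight matrix."""
    return d_in * d_out

def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters for one LoRA adapter: A (d_in x r) plus B (r x d_out)."""
    return d_in * rank + rank * d_out

d = 2048   # hypothetical hidden size for one transformer layer
r = 8      # a typical small LoRA rank

full = full_trainable_params(d, d)       # 4,194,304 parameters
lora = lora_trainable_params(d, d, r)    # 32,768 parameters
print(f"full: {full:,}  lora: {lora:,}  reduction: {full // lora}x")
```

At these assumed sizes the adapter trains roughly 128x fewer parameters per layer, which is the kind of saving that makes fine-tuning feasible on phones and low-memory GPUs.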

The framework supports cross-platform training and inference on a range of chips, including AMD, Intel and Apple Silicon, as well as mobile GPUs from Qualcomm and Apple.

Tether said its engineers were able to fine-tune models of up to 1 billion parameters on smartphones in less than two hours, and smaller models in minutes, with support expanding to models of up to 13 billion parameters on mobile devices.