Quantum Compute Chat

Powered by Custom 140M Model + Groq API

Available models:
- Groq API: Llama 3.3 70B · 2-3 sec
- Custom Model: QuantumLLM 140M · 30-60 sec
