30B LLM, 260k Context, at 50 Tokens/s… On Your Laptop
We’re sharing a secret that’s made the Cortex model feel snappy at full context on just 16GB of RAM.
Context management is the key to smart coding AI; here’s how we do it.