Hey Ben. I find communication like this fairly off-putting. Insofar as the 80% cheaper-per-token figure (or any part of it) is something of your own making or ingenuity, by all means, do tell. But that requires comparing token costs fairly against comparable models on, e.g., OpenRouter — not comparing across different models and pretending it's the same thing.
Hi jstummbillig, I appreciate the feedback. We're careful to state only that users can expect up to an 80% cost reduction when switching away from OpenAI. Our DeepSeek V4 Pro model is a good example of this.