Text Generation

Text generation is supported on the network. As explained in the architecture page, inference takes place in virtual machines.

Inside the VM

We provide multiple VMs with an inference stack that can evolve over time, which means the API may change for newer models.

Available models

| Model | Base | API type | Prompt format | Base URL | Completion URL |
|---|---|---|---|---|---|
| NeuralBeagle 7B | Mistral | Llama-like | ChatML | API URL | Completion URL |
| NeuralBeagle 7B | Mistral | OpenAI-compatible | ChatML | API Base URL | Completion URL |
| Mixtral Instruct 8x7B MoE | Mixtral | Llama-like | ChatML or Alpaca Instruct | API URL | Completion URL |
| DeepSeek Coder 6.7B | DeepSeek | Llama-like | Alpaca Instruct | API URL | Completion URL |
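To illustrate the difference between the two API types in the table, here is a minimal Python sketch. The endpoint paths (`/v1/chat/completions` for the OpenAI-compatible API, `/completion` for the Llama-like, llama.cpp-style API) and the base URL are assumptions for illustration; use the actual URLs linked in the table.

```python
import json
import urllib.request

# Hypothetical base URL -- substitute the API URL from the table above.
BASE_URL = "https://example-model-vm.example.org"

def build_openai_payload(messages, max_tokens=256):
    """Payload for the OpenAI-compatible API: a list of chat messages."""
    return {"messages": messages, "max_tokens": max_tokens}

def build_llama_payload(prompt, n_predict=256):
    """Payload for the Llama-like API: a single pre-formatted prompt string."""
    return {"prompt": prompt, "n_predict": n_predict}

def post_json(url, payload):
    """POST a JSON payload and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # OpenAI-compatible call (assumed path: /v1/chat/completions)
    reply = post_json(
        BASE_URL + "/v1/chat/completions",
        build_openai_payload([{"role": "user", "content": "Hello!"}]),
    )
    print(reply)
```

With the OpenAI-compatible API the server applies the prompt format for you; with the Llama-like API you must format the prompt yourself as described in the prompting formats section below.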

API details

Please refer to the API documentation corresponding to the model of your choice.

Prompting formats

Each model has its own prompting format. Knowing which format to use for a specific model will help you get better results out of it. Please refer to the available models table to see which format is best for your model.
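The two formats in the table can be built as plain strings. This is a sketch based on the commonly documented ChatML and Alpaca Instruct conventions; the exact templates expected by each model should be checked against its own documentation.

```python
def chatml_prompt(system, user):
    """Build a ChatML prompt (listed for NeuralBeagle 7B and Mixtral Instruct).

    ChatML wraps each turn in <|im_start|>role ... <|im_end|> markers and
    ends with an open assistant turn for the model to complete.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def alpaca_prompt(instruction):
    """Build an Alpaca Instruct prompt (listed for DeepSeek Coder 6.7B).

    Alpaca Instruct uses '### Instruction:' / '### Response:' section headers.
    """
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )
```

These strings would be sent as the `prompt` field of a Llama-like completion request; the OpenAI-compatible API applies the template server-side, so no manual formatting is needed there.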
