The ai module contains functions for working with local and remote AI models.
Models can be managed in Settings ▸ AI. See AI Models for details.
The following model names are supported:
To use local models, you need to install the model files in Settings ▸ AI.
- llama2-7b-chat — Llama 2 7B-Chat, chat-tuned 7B-parameter transformer model for text generation (4-bit quantized).
- openchat — OpenChat, chat-tuned 7B-parameter transformer model for text generation (4-bit quantized).
- neural-chat-7b — Neural Chat 7B, chat-tuned 7B-parameter transformer model for text generation (4-bit quantized).
- tinyllama-chat — TinyLlama-Chat, 1.1B-parameter transformer model for text generation (5-bit quantized).

To use OpenAI models, you need to set the OpenAI API key in Settings ▸ Keys.
- gpt-3.5-turbo — GPT-3.5-turbo model via the OpenAI API
- gpt-4 — GPT-4 model via the OpenAI API
- gpt-4-turbo — GPT-4-turbo model via the OpenAI API

To use Cohere models, you need to set the Cohere API key in Settings ▸ Keys.
- command — Command model via the Cohere API (previously known as command-xlarge)
- command-light — Command-light model via the Cohere API (previously known as command-medium)

To use TextSynth models, you need to set the TextSynth API key in Settings ▸ Keys.
- textsynth-gptj — GPT-J 6B model via the TextSynth API.
- textsynth-gpt-neox-20b — GPT-NeoX-20B model via the TextSynth API.
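The relationship between model names and their prerequisites can be summarized in a small lookup table. The sketch below is illustrative only: it is not the module's actual API, and the `required_setup` helper is a hypothetical name. It simply encodes the model names and setup requirements listed above.

```python
# Hypothetical helper (not part of the ai module): maps each supported
# model name to whether it is local or remote, and which API key it needs.
MODELS = {
    # Local models: install the model files in Settings ▸ AI.
    "llama2-7b-chat": ("local", None),
    "openchat": ("local", None),
    "neural-chat-7b": ("local", None),
    "tinyllama-chat": ("local", None),
    # Remote models: set the provider's API key in Settings ▸ Keys.
    "gpt-3.5-turbo": ("remote", "OpenAI"),
    "gpt-4": ("remote", "OpenAI"),
    "gpt-4-turbo": ("remote", "OpenAI"),
    "command": ("remote", "Cohere"),
    "command-light": ("remote", "Cohere"),
    "textsynth-gptj": ("remote", "TextSynth"),
    "textsynth-gpt-neox-20b": ("remote", "TextSynth"),
}

def required_setup(model: str) -> str:
    """Return what a given model name requires before it can be used."""
    kind, provider = MODELS[model]
    if kind == "local":
        return "install the model files in Settings ▸ AI"
    return f"set the {provider} API key in Settings ▸ Keys"
```

For example, `required_setup("gpt-4")` reports that the OpenAI API key must be set in Settings ▸ Keys, while `required_setup("tinyllama-chat")` reports that the model files must be installed in Settings ▸ AI.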