Chat
Text to Image
Image to Image
deepseek-ai/
DeepSeek-V2-Chat
32K
New
A strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference.
Playground→
deepseek-ai/
DeepSeek-Coder-V2-Instruct
32K
New
An open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT-4 Turbo on code-specific tasks.
Playground→
google/
gemma-2-27b-it
8K
New
Gemma is a state-of-the-art, lightweight, open English text model suite from Google.
Contact Us→
google/
gemma-2-9b-it
8K
New
Gemma is a state-of-the-art, lightweight, open English text model suite from Google.
Contact Us→
Qwen/
Qwen1.5-7B-Chat
32K
Free
The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.
Playground→
THUDM/
glm-4-9b-chat
32K
Free
Open-source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu AI.
Playground→
THUDM/
chatglm3-6b
32K
Free
An open-source, Chinese-English bilingual conversational language model based on the General Language Model (GLM) architecture, with 6.2 billion parameters.
Playground→
01-ai/
Yi-1.5-9B-Chat-16K
16K
Free
An upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Playground→
01-ai/
Yi-1.5-6B-Chat
4K
Free
An upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Playground→
Qwen/
Qwen1.5-110B-Chat
32K
The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.
Playground→
Qwen/
Qwen1.5-32B-Chat
32K
The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.
Playground→
Qwen/
Qwen1.5-14B-Chat
32K
The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.
Playground→
deepseek-ai/
deepseek-llm-67b-chat
4K
An advanced language model comprising 67 billion parameters. It has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese.
Playground→
01-ai/
Yi-1.5-34B-Chat-16K
16K
An upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Playground→
meta-llama/
Meta-Llama-3-8B-Instruct
8K
A collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes.
Contact Us→
meta-llama/
Meta-Llama-3-70B-Instruct
8K
A collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes.
Contact Us→
*The purple badge shows the model's context length; for example, 32K means the model supports a 32K-token context window.
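Models marked "Playground→" can typically be called through an OpenAI-compatible chat completions API. The sketch below builds a request payload for one of the free models in the list above; the endpoint URL and authentication scheme are assumptions for illustration, not taken from this page.

```python
import json

# Assumed OpenAI-compatible endpoint; check the provider's API docs
# for the actual base URL and authentication requirements.
ENDPOINT = "https://api.siliconflow.cn/v1/chat/completions"  # assumption

# Request payload for Qwen/Qwen1.5-7B-Chat, a "Free" model with a
# 32K context window according to the catalog above.
payload = {
    "model": "Qwen/Qwen1.5-7B-Chat",
    "messages": [
        {"role": "user", "content": "Summarize Mixture-of-Experts in one sentence."}
    ],
    "max_tokens": 256,
}

# Serialize to the JSON body that would be POSTed with an
# "Authorization: Bearer <API_KEY>" header.
body = json.dumps(payload)
print(body)
```

The same payload shape works for any chat model in the list; only the `model` field changes, and the prompt plus completion must fit within that model's context length.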
Accelerate AGI to Benefit Humanity