Models

Chat

Text to Image

Image to Image

deepseek-ai/

DeepSeek-V2-Chat

32K

New

A strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference.

Playground→

deepseek-ai/

DeepSeek-Coder-V2-Instruct

32K

New

An open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT-4 Turbo on code-specific tasks.

Playground→

google/

gemma-2-27b-it

8K

New

Gemma is a state-of-the-art family of lightweight, open English-language text models from Google.

Contact Us→

google/

gemma-2-9b-it

8K

New

Gemma is a state-of-the-art family of lightweight, open English-language text models from Google.

Contact Us→

Qwen/

Qwen2-7B-Instruct

32K

Free

Qwen2 is the new series of Qwen large language models.

Playground→

Qwen/

Qwen2-1.5B-Instruct

32K

Free

Qwen2 is the new series of Qwen large language models.

Playground→

Qwen/

Qwen1.5-7B-Chat

32K

Free

The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.

Playground→

THUDM/

glm-4-9b-chat

32K

Free

Open-source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu AI.

Playground→

THUDM/

chatglm3-6b

32K

Free

An open-source, Chinese-English bilingual conversational language model based on the General Language Model (GLM) architecture, with 6.2 billion parameters.

Playground→

01-ai/

Yi-1.5-9B-Chat-16K

16K

Free

An upgraded version of Yi, continually pre-trained on a high-quality corpus of 500B tokens and fine-tuned on 3M diverse samples.

Playground→

01-ai/

Yi-1.5-6B-Chat

4K

Free

An upgraded version of Yi, continually pre-trained on a high-quality corpus of 500B tokens and fine-tuned on 3M diverse samples.

Playground→

Qwen/

Qwen2-72B-Instruct

32K

Qwen2 is the new series of Qwen large language models.

Playground→

Qwen/

Qwen2-57B-A14B-Instruct

32K

Qwen2 is the new series of Qwen large language models.

Playground→

Qwen/

Qwen1.5-110B-Chat

32K

The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.

Playground→

Qwen/

Qwen1.5-32B-Chat

32K

The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.

Playground→

Qwen/

Qwen1.5-14B-Chat

32K

The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.

Playground→

deepseek-ai/

deepseek-llm-67b-chat

4K

An advanced language model comprising 67 billion parameters. It has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese.

Playground→

01-ai/

Yi-1.5-34B-Chat-16K

16K

An upgraded version of Yi, continually pre-trained on a high-quality corpus of 500B tokens and fine-tuned on 3M diverse samples.

Playground→

OpenAI/

GPT-4o

128K

OpenAI's fastest and most affordable flagship model.

Contact Us→

OpenAI/

GPT-3.5 Turbo

16K

OpenAI's fast, inexpensive model for simple tasks.

Contact Us→

Anthropic/

claude-3-5-sonnet

200K

Anthropic's most intelligent model.

Contact Us→

meta-llama/

Meta-Llama-3-8B-Instruct

8K

A collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes.

Contact Us→

meta-llama/

Meta-Llama-3-70B-Instruct

8K

A collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes.

Contact Us→

To Be Continued…

If you have models not in this list, please feel free to contact us.

Contact Us→

*The purple badge indicates the model's context length; for example, 32K means the model supports a 32K-token context window.
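As an illustration of how a listed model might be called, here is a minimal sketch that assembles an OpenAI-style chat-completions request for one of the models above. This assumes the platform exposes an OpenAI-compatible API; the helper name and endpoint convention are assumptions for illustration, not documented details of this service:

```python
def build_chat_request(model: str, user_message: str, max_tokens: int = 512) -> dict:
    """Assemble an OpenAI-style chat-completions payload for a listed model.

    Hypothetical helper: the payload shape follows the common OpenAI-compatible
    convention; check the provider's API reference for exact field names.
    """
    return {
        "model": model,  # e.g. "Qwen/Qwen2-7B-Instruct" from the list above
        "messages": [{"role": "user", "content": user_message}],
        # The prompt plus max_tokens must fit within the model's context
        # length shown in the purple badge (e.g. 32K tokens).
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Qwen/Qwen2-7B-Instruct", "Hello!")
# This payload would then be POSTed to the provider's chat-completions
# endpoint with an API key, e.g. via the `requests` or `openai` library.
```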

