Products: OneDiff, a high-performance image generation engine

Log in

Model List

Language Models

Text-to-Image Models

Image-to-Image Models

Qwen/Qwen2-7B-Instruct · 32K · Free
Qwen2 is the new series of Qwen large language models.
Playground →

Qwen/Qwen2-1.5B-Instruct · 32K · Free
Qwen2 is the new series of Qwen large language models.
Playground →

Qwen/Qwen1.5-7B-Chat · 32K · Free
The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.
Playground →

THUDM/glm-4-9b-chat · 32K · Free
The open-source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu AI.
Playground →

THUDM/chatglm3-6b · 32K · Free
An open-source, Chinese-English bilingual conversational language model based on the General Language Model (GLM) architecture, with 6.2 billion parameters.
Playground →

01-ai/Yi-1.5-9B-Chat-16K · 16K · Free
An upgraded version of Yi, continually pre-trained on a high-quality corpus of 500B tokens and fine-tuned on 3M diverse samples.
Playground →

01-ai/Yi-1.5-6B-Chat · 4K · Free
An upgraded version of Yi, continually pre-trained on a high-quality corpus of 500B tokens and fine-tuned on 3M diverse samples.
Playground →

Qwen/Qwen2-72B-Instruct · 32K
Qwen2 is the new series of Qwen large language models.
Playground →

Qwen/Qwen2-57B-A14B-Instruct · 32K
Qwen2 is the new series of Qwen large language models.
Playground →

Qwen/Qwen1.5-110B-Chat · 32K
The first 100B+ model of the Qwen1.5 series, supporting a 32K-token context length while remaining multilingual.
Playground →

Qwen/Qwen1.5-32B-Chat · 32K
The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.
Playground →

Qwen/Qwen1.5-14B-Chat · 32K
The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.
Playground →

deepseek-ai/DeepSeek-Coder-V2-Instruct · 32K
An open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT-4 Turbo on code-specific tasks.
Playground →

deepseek-ai/DeepSeek-V2-Chat · 32K
A strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference.
Playground →

deepseek-ai/deepseek-llm-67b-chat · 4K
An advanced language model comprising 67 billion parameters, trained from scratch on a vast dataset of 2 trillion English and Chinese tokens.
Playground →

01-ai/Yi-1.5-34B-Chat-16K · 16K
An upgraded version of Yi, continually pre-trained on a high-quality corpus of 500B tokens and fine-tuned on 3M diverse samples.
Playground →

OpenAI/GPT-4o · 128K
OpenAI's fastest and most affordable flagship model.
Contact Us →

OpenAI/GPT-3.5 Turbo · 16K
A fast, inexpensive OpenAI model for simple tasks.
Contact Us →

Anthropic/claude-3-5-sonnet · 200K
Anthropic's most intelligent model.
Contact Us →

meta-llama/Meta-Llama-3-70B-Instruct · 8K
A family of pretrained and instruction-tuned generative text models in 8B and 70B sizes.
Contact Us →

meta-llama/Meta-Llama-3-8B-Instruct · 8K
A family of pretrained and instruction-tuned generative text models in 8B and 70B sizes.
Contact Us →

google/gemma-2-27b-it · 8K
Gemma is a suite of state-of-the-art, lightweight, open English text models from Google.
Contact Us →

google/gemma-2-9b-it · 8K
Gemma is a suite of state-of-the-art, lightweight, open English text models from Google.
Contact Us →
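Every model above is served through the same inference API. As a minimal sketch, assuming the service exposes an OpenAI-compatible chat-completions endpoint (the base_url and api_key below are placeholders, not taken from this page), a call to one of the free models might look like this:

```python
# Minimal sketch, assuming an OpenAI-compatible chat-completions endpoint.
# The base_url and api_key are hypothetical placeholders; substitute the
# provider's documented endpoint and your own key.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="Qwen/Qwen2-7B-Instruct",  # one of the free models listed above
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

The same call shape should work for any chat model in the list by swapping the model string.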

More models coming soon…

If the model you need is not listed here, please contact us.

Contact Us

*The grey-highlighted value indicates a language model's context length; for example, 32K means the model's context window is 32K tokens.
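To make the footnote concrete, here is a minimal sketch of budgeting a prompt against a 32K window. It assumes 32K means 32,768 tokens and uses the public Hugging Face tokenizer for Qwen/Qwen2-7B-Instruct; both choices are illustrative assumptions, and exact counts are model-specific.

```python
# Sketch: check that a prompt fits a model's context window before sending.
# Assumes "32K" = 32,768 tokens and uses the Hugging Face tokenizer for
# Qwen/Qwen2-7B-Instruct; token counts differ between models.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 32 * 1024  # "32K" from the list above

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")

prompt = "Summarize the following document: ..."
prompt_tokens = len(tokenizer.encode(prompt))

# Reserve part of the window for the model's reply.
reply_budget = 1024
fits = prompt_tokens + reply_budget <= CONTEXT_WINDOW
print(f"prompt uses {prompt_tokens} tokens; fits with reply budget: {fits}")
```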


Follow the 硅动科技 WeChat official account

Scan the assistant's QR code to join the user community

Accelerating AGI to benefit humanity

京ICP备2024051511号-1

