## Language Models
| Model | Context Length | Price | Description | Access |
|-------|----------------|-------|-------------|--------|
| Qwen/Qwen1.5-7B-Chat | 32K | Free | The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. | Experience Center → |
| THUDM/glm-4-9b-chat | 32K | Free | Open-source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu AI. | Experience Center → |
| THUDM/chatglm3-6b | 32K | Free | An open-source, Chinese-English bilingual conversational language model based on the General Language Model (GLM) architecture, with 6.2 billion parameters. | Experience Center → |
| 01-ai/Yi-1.5-9B-Chat-16K | 16K | Free | An upgraded version of Yi, continuously pre-trained on a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. | Experience Center → |
| 01-ai/Yi-1.5-6B-Chat | 4K | Free | An upgraded version of Yi, continuously pre-trained on a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. | Experience Center → |
| Qwen/Qwen1.5-110B-Chat | 32K | | The first 100B+ model of the Qwen1.5 series. Supports a 32K-token context length and is still multilingual. | Experience Center → |
| Qwen/Qwen1.5-32B-Chat | 32K | | The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. | Experience Center → |
| Qwen/Qwen1.5-14B-Chat | 32K | | The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. | Experience Center → |
| deepseek-ai/DeepSeek-Coder-V2-Instruct | 32K | | An open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT-4 Turbo on code-specific tasks. | Experience Center → |
| deepseek-ai/DeepSeek-V2-Chat | 32K | | A strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. | Experience Center → |
| deepseek-ai/deepseek-llm-67b-chat | 4K | | An advanced language model comprising 67 billion parameters, trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. | Experience Center → |
| 01-ai/Yi-1.5-34B-Chat-16K | 16K | | An upgraded version of Yi, continuously pre-trained on a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. | Experience Center → |
| meta-llama/Meta-Llama-3-70B-Instruct | 8K | | A collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. | Contact Us → |
| meta-llama/Meta-Llama-3-8B-Instruct | 8K | | A collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. | Contact Us → |
| google/gemma-2-27b-it | 8K | | Gemma is a state-of-the-art, lightweight suite of open English text models from Google. | Contact Us → |
| google/gemma-2-9b-it | 8K | | Gemma is a state-of-the-art, lightweight suite of open English text models from Google. | Contact Us → |
*The Context Length column gives the large language model's context length; for example, 32K means the model's context length is 32K tokens.
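The model IDs in the table double as the identifiers you pass when calling these models programmatically. Below is a minimal sketch, assuming the platform exposes an OpenAI-compatible chat-completions endpoint; the base URL and the environment-variable name are assumptions, so substitute the values from your own account.

```python
import os

from openai import OpenAI

# Assumption: the platform serves an OpenAI-compatible API at this base URL.
client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",       # assumed endpoint
    api_key=os.environ["SILICONFLOW_API_KEY"],      # assumed env var holding your key
)

response = client.chat.completions.create(
    model="Qwen/Qwen1.5-7B-Chat",  # a free model from the table above
    messages=[
        {"role": "user", "content": "Summarize the GLM architecture in one sentence."}
    ],
    # Prompt plus completion must fit within the model's context window
    # (32K tokens for this model, per the table).
    max_tokens=256,
)
print(response.choices[0].message.content)
```

The same call works for any model in the table by swapping the `model` string; for the gated models (Meta-Llama-3, Gemma 2), access presumably goes through the Contact Us channel first.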