Model List

Language Models

Embedding Models

Text-to-Image Models

Image-to-Image Models

internlm/

internlm2_5-7b-chat

32K

New

Free

InternLM2.5 has open-sourced a 7 billion parameter base model and a chat model tailored for practical scenarios.

Playground→

meta-llama/

Meta-Llama-3-8B-Instruct

8K

New

Free

A collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes.

Docs→

mistralai/

Mistral-7B-Instruct-v0.2

32K

New

Free

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.

Docs→

google/

gemma-2-9b-it

8K

New

Free

Gemma is a family of state-of-the-art, lightweight, open English-language text models from Google.

Docs→

google/

gemma-2-27b-it

8K

New

Gemma is a family of state-of-the-art, lightweight, open English-language text models from Google.

Docs→

meta-llama/

Meta-Llama-3-70B-Instruct

8K

New

A collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes.

Docs→

mistralai/

Mixtral-8x7B-Instruct-v0.1

32K

New

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.

Docs→

Qwen/

Qwen2-7B-Instruct

32K

Free

Qwen2 is the new series of Qwen large language models.

Playground→

Qwen/

Qwen2-1.5B-Instruct

32K

Free

Qwen2 is the new series of Qwen large language models.

Playground→

Qwen/

Qwen1.5-7B-Chat

32K

Free

The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.

Playground→

THUDM/

glm-4-9b-chat

32K

Free

Open-source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu AI.

Playground→

THUDM/

chatglm3-6b

32K

Free

An open source, Chinese-English bilingual conversation Language Model based on the General Language Model (GLM) architecture with 6.2 billion parameters.

Playground→

01-ai/

Yi-1.5-9B-Chat-16K

16K

Free

An upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.

Playground→

01-ai/

Yi-1.5-6B-Chat

4K

Free

An upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.

Playground→

Qwen/

Qwen2-72B-Instruct

32K

Qwen2 is the new series of Qwen large language models.

Playground→

Qwen/

Qwen2-57B-A14B-Instruct

32K

Qwen2 is the new series of Qwen large language models.

Playground→

Qwen/

Qwen1.5-110B-Chat

32K

The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.

Playground→

Qwen/

Qwen1.5-32B-Chat

32K

The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.

Playground→

Qwen/

Qwen1.5-14B-Chat

32K

The beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.

Playground→

deepseek-ai/

DeepSeek-Coder-V2-Instruct

32K

An open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks.

Playground→

deepseek-ai/

DeepSeek-V2-Chat

32K

A strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference.

Playground→

deepseek-ai/

deepseek-llm-67b-chat

4K

An advanced language model comprising 67 billion parameters. It has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese.

Playground→

01-ai/

Yi-1.5-34B-Chat-16K

16K

An upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.

Playground→

OpenAI/

GPT-4o

128K

OpenAI's fastest and most affordable flagship model.

Contact Us→

OpenAI/

GPT-3.5 Turbo

16K

OpenAI's fast, inexpensive model for simple tasks.

Contact Us→

Anthropic/

claude-3-5-sonnet

200K

Anthropic's most intelligent model.

Contact Us→

Continuously updated…

If the model you need is not listed here, please contact us.

Contact Us→
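The models in this list are usually served through an OpenAI-compatible chat-completions API, addressed by the org/name identifiers shown above (e.g. Qwen/Qwen2-7B-Instruct). As a minimal sketch, assuming such an endpoint (the base URL below is a placeholder, not taken from this page), a request payload might be built like this:

```python
import json

# Placeholder endpoint: the real base URL and API key are not given on
# this page and must come from the provider's documentation.
API_BASE = "https://api.example.com/v1"

def build_chat_request(model: str, prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat-completions payload.

    `model` uses the org/name form shown in the list,
    e.g. "Qwen/Qwen2-7B-Instruct".
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Qwen/Qwen2-7B-Instruct", "Hello!")
print(json.dumps(payload, ensure_ascii=False))
```

An actual call would POST this payload to `API_BASE + "/chat/completions"` with an Authorization header; consult the provider's docs for the real endpoint and authentication details.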

*The gray-highlighted value is the large language model's context length; for example, 32K means the model's context length is 32K tokens.
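This footnote matters in practice: a request whose conversation history exceeds the model's context length (say, 32K tokens) will be rejected or truncated by the server. A minimal client-side sketch, assuming a crude one-token-per-word estimate (real tokenizers count differently, so this is illustrative only):

```python
def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per
    # whitespace-separated word. Actual counts vary by model.
    return len(text.split())

def trim_history(messages: list[dict], context_limit: int) -> list[dict]:
    """Drop the oldest messages until the estimated total fits the limit.

    `messages` are OpenAI-style {"role": ..., "content": ...} dicts;
    `context_limit` is the model's context length in tokens
    (e.g. 32_000 for a 32K model).
    """
    kept = list(messages)
    while kept and sum(estimate_tokens(m["content"]) for m in kept) > context_limit:
        kept.pop(0)  # discard the oldest message first
    return kept

history = [
    {"role": "user", "content": "word " * 30_000},  # ~30K estimated tokens
    {"role": "user", "content": "word " * 5_000},   # ~5K estimated tokens
]
print(len(trim_history(history, 32_000)))  # the oldest turn no longer fits
```

A production client would use the model's own tokenizer for the count and also reserve room for the response (`max_tokens`), since the context length bounds prompt and completion together.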
