SiliconCloud, Production Ready
Cloud with Low Cost

Teaming up with excellent open-source foundation models.


Top-quality model services

01.

Chat

SiliconCloud delivers efficient, user-friendly, and scalable LLMs with out-of-the-box inference acceleration, including Llama3, Mixtral, Qwen, DeepSeek, and more.

02.

Image

SiliconCloud offers a diverse range of text-to-image and text-to-video models, including SDXL, SDXL Lightning, PhotoMaker, and InstantID.
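As a sketch of what calling one of these image models might look like: the endpoint path, model name, and parameter names below are assumptions for illustration, not confirmed details of the SiliconCloud API.

```python
import json
from urllib import request

# Assumed endpoint path, by analogy with the chat API's base URL.
API_URL = "https://api.siliconflow.cn/v1/images/generations"

def build_image_request(prompt,
                        model="stabilityai/stable-diffusion-xl-base-1.0",
                        size="1024x1024"):
    """Build the JSON payload for a hypothetical text-to-image call.
    Field names ("prompt", "image_size") are placeholders."""
    return {"model": model, "prompt": prompt, "image_size": size}

def generate_image(api_key, prompt):
    """POST the payload and return the parsed JSON response."""
    payload = json.dumps(build_image_request(prompt)).encode("utf-8")
    req = request.Request(
        API_URL,
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Consult the official API reference for the real endpoint and parameter names before using this shape in production.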


from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.siliconflow.cn/v1")

response = client.chat.completions.create(
    model='alibaba/Qwen1.5-110B-Chat',
    messages=[
        # Asks (in Chinese): "What does the idiom 'casting a brick to attract jade' mean?"
        {'role': 'user', 'content': "抛砖引玉是什么意思呀"}
    ],
    stream=True
)

# Stream the reply token by token; the final chunk's delta may carry no content.
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="", flush=True)


Easy to use

With just a single line of code, developers can seamlessly integrate the fastest model services from SiliconCloud.

For the models we offer, you pay only for what you use. Explore pricing for more details.
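Pay-as-you-go billing is straightforward to reason about: cost scales linearly with tokens processed. A minimal sketch of the arithmetic, where the per-million-token prices are placeholders, not SiliconCloud's actual rates:

```python
def usage_cost(prompt_tokens, completion_tokens,
               price_in_per_m=0.70, price_out_per_m=0.70):
    """Linear pay-per-use cost: tokens / 1e6 * price-per-million-tokens.
    The default prices are illustrative placeholders, not real rates."""
    return (prompt_tokens * price_in_per_m
            + completion_tokens * price_out_per_m) / 1_000_000

# e.g. 120k prompt tokens + 30k completion tokens at $0.70/M each:
# usage_cost(120_000, 30_000) -> 0.105
```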

OneDiff, High-performance
Image Generation Engine

