In offline inference scenarios, you can use TACO-LLM in offline mode. This document walks through a simple example showing how to use TACO-LLM's offline mode.
Import LLM and SamplingParams
First, import the LLM and SamplingParams classes from taco_llm:
from taco_llm import LLM, SamplingParams
Build prompts and sampling parameters
Next, build the prompts and the sampling parameters. This example builds four prompts and sets the sampling parameters temperature to 0.8 and top_p to 0.95; the full set of configurable sampling parameters is documented in the Sampling Parameters API.
# Sample prompts.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]

# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
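Beyond temperature and top_p, the Sampling Parameters API exposes further controls. The sketch below assumes TACO-LLM's SamplingParams follows the vLLM-style interface, where top_k and max_tokens are available; these two field names are assumptions here, so verify them against the Sampling Parameters API:

# A sketch of additional sampling options, assuming a vLLM-style
# SamplingParams (top_k and max_tokens are assumptions, not confirmed).
sampling_params = SamplingParams(
    temperature=0.8,  # softmax temperature; lower values are more deterministic
    top_p=0.95,       # nucleus sampling: keep tokens covering 95% of probability mass
    top_k=50,         # assumption: restrict sampling to the 50 most likely tokens
    max_tokens=128,   # assumption: cap the number of generated tokens per prompt
)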
Build the LLM object
Then, create an LLM object, specifying the model to load:
# Create an LLM.
llm = LLM(model="facebook/opt-125m")
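The LLM constructor also accepts engine-level options. The sketch below assumes TACO-LLM mirrors vLLM's constructor arguments; tensor_parallel_size and dtype are assumptions here, so check the TACO-LLM API reference for the exact set:

# A sketch, assuming TACO-LLM's LLM constructor accepts vLLM-style
# engine options (tensor_parallel_size and dtype are assumptions).
llm = LLM(
    model="facebook/opt-125m",  # Hugging Face model name or a local path
    tensor_parallel_size=1,     # assumption: number of GPUs for tensor parallelism
    dtype="auto",               # assumption: let the engine choose the weight dtype
)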
Run inference
Finally, call the generate method on the LLM object to run inference:
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
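Note that output.outputs is a list, so a single RequestOutput can carry several completions per prompt. The sketch below assumes TACO-LLM follows the vLLM convention of an n parameter on SamplingParams requesting n candidates; that parameter is an assumption here:

# A sketch, assuming SamplingParams accepts a vLLM-style n parameter
# (assumption) that requests multiple candidates per prompt.
multi_params = SamplingParams(n=2, temperature=0.8, top_p=0.95)
outputs = llm.generate(prompts, multi_params)
for output in outputs:
    # Iterate over every candidate completion instead of only outputs[0].
    for i, candidate in enumerate(output.outputs):
        print(f"Prompt: {output.prompt!r}, candidate {i}: {candidate.text!r}")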
That completes a full offline-mode run with TACO-LLM. The complete code for this example follows:
from taco_llm import LLM, SamplingParams

# Sample prompts.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]

# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Create an LLM.
llm = LLM(model="facebook/opt-125m")

# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
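To try the example, save the code to a file and run it with Python; the filename offline_inference.py below is just an example name:

python offline_inference.py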