Run the following command to view the complete set of online-mode parameters supported by TACO-LLM:
```
# taco_llm serve -h
usage: taco_llm serve <model_tag> [options]

positional arguments:
  model_tag
        The model tag to serve

options:
  -h, --help
        show this help message and exit
  --config CONFIG
        Read CLI options from a config file. Must be a YAML with the following options:
        https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#command-line-arguments-for-the-server
  --host HOST
        host name
  --port PORT
        port number
  --uvicorn-log-level {debug,info,warning,error,critical,trace}
        log level for uvicorn
  --allow-credentials
        allow credentials
  --allowed-origins ALLOWED_ORIGINS
        allowed origins
  --allowed-methods ALLOWED_METHODS
        allowed methods
  --allowed-headers ALLOWED_HEADERS
        allowed headers
  --api-key API_KEY
        If provided, the server will require this key to be presented in the header.
  --lora-modules LORA_MODULES [LORA_MODULES ...]
        LoRA module configurations in the format name=path. Multiple modules can be specified.
  --prompt-adapters PROMPT_ADAPTERS [PROMPT_ADAPTERS ...]
        Prompt adapter configurations in the format name=path. Multiple adapters can be specified.
  --chat-template CHAT_TEMPLATE
        The file path to the chat template, or the template in single-line form for the specified model
  --response-role RESPONSE_ROLE
        The role name to return if request.add_generation_prompt=true.
  --ssl-keyfile SSL_KEYFILE
        The file path to the SSL key file
  --ssl-certfile SSL_CERTFILE
        The file path to the SSL cert file
  --ssl-ca-certs SSL_CA_CERTS
        The CA certificates file
  --ssl-cert-reqs SSL_CERT_REQS
        Whether client certificate is required (see stdlib ssl module's)
  --root-path ROOT_PATH
        FastAPI root_path when app is behind a path based routing proxy
  --middleware MIDDLEWARE
        Additional ASGI middleware to apply to the app. We accept multiple --middleware arguments. The value should be an import path. If a function is provided, taco_llm will add it to the server using @app.middleware('http'). If a class is provided, taco_llm will add it to the server using app.add_middleware().
  --return-tokens-as-token-ids
        When --max-logprobs is specified, represents single tokens as strings of the form 'token_id:{token_id}' so that tokens that are not JSON-encodable can be identified.
  --disable-frontend-multiprocessing
        If specified, will run the OpenAI frontend server in the same process as the model serving engine.
  --enable-auto-tool-choice
        Enable auto tool choice for supported models. Use --tool-call-parser to specify which parser to use
  --tool-call-parser {mistral,hermes}
        Select the tool call parser depending on the model that you're using. This is used to parse the model-generated tool call into OpenAI API format. Required for --enable-auto-tool-choice.
  --model MODEL
        Name or path of the huggingface model to use.
  --tokenizer TOKENIZER
        Name or path of the huggingface tokenizer to use. If unspecified, model name or path will be used.
  --skip-tokenizer-init
        Skip initialization of tokenizer and detokenizer
  --revision REVISION
        The specific model version to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
  --code-revision CODE_REVISION
        The specific revision to use for the model code on Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
  --tokenizer-revision TOKENIZER_REVISION
        Revision of the huggingface tokenizer to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
  --tokenizer-mode {auto,slow,mistral}
        The tokenizer mode. * "auto" will use the fast tokenizer if available. * "slow" will always use the slow tokenizer. * "mistral" will always use the mistral_common tokenizer.
  --trust-remote-code
        Trust remote code from huggingface.
  --download-dir DOWNLOAD_DIR
        Directory to download and load the weights, default to the default cache dir of huggingface.
  --load-format {auto,pt,safetensors,npcache,dummy,tensorizer,sharded_state,gguf,bitsandbytes,mistral}
        The format of the model weights to load. * "auto" will try to load the weights in the safetensors format and fall back to the pytorch bin format if safetensors format is not available. * "pt" will load the weights in the pytorch bin format. * "safetensors" will load the weights in the safetensors format. * "npcache" will load the weights in pytorch format and store a numpy cache to speed up the loading. * "dummy" will initialize the weights with random values, which is mainly for profiling. * "tensorizer" will load the weights using tensorizer from CoreWeave. See the Tensorize vLLM Model script in the Examples section for more information. * "bitsandbytes" will load the weights using bitsandbytes quantization.
  --config-format {auto,hf,mistral}
        The format of the model config to load. * "auto" will try to load the config in hf format if available else it will try to load in mistral format
  --dtype {auto,half,float16,bfloat16,float,float32}
        Data type for model weights and activations. * "auto" will use FP16 precision for FP32 and FP16 models, and BF16 precision for BF16 models. * "half" for FP16. Recommended for AWQ quantization. * "float16" is the same as "half". * "bfloat16" for a balance between precision and range. * "float" is shorthand for FP32 precision. * "float32" for FP32 precision.
  --kv-cache-dtype {auto,fp8,fp8_e5m2,fp8_e4m3}
        Data type for kv cache storage. If "auto", will use model data type. CUDA 11.8+ supports fp8 (=fp8_e4m3) and fp8_e5m2. ROCm (AMD GPU) supports fp8 (=fp8_e4m3)
  --quantization-param-path QUANTIZATION_PARAM_PATH
        Path to the JSON file containing the KV cache scaling factors. This should generally be supplied, when KV cache dtype is FP8. Otherwise, KV cache scaling factors default to 1.0, which may cause accuracy issues. FP8_E5M2 (without scaling) is only supported on cuda version greater than 11.8. On ROCm (AMD GPU), FP8_E4M3 is instead supported for common inference criteria.
  --max-model-len MAX_MODEL_LEN
        Model context length. If unspecified, will be automatically derived from the model config.
  --guided-decoding-backend {outlines,lm-format-enforcer}
        Which engine will be used for guided decoding (JSON schema / regex etc) by default. Currently support https://github.com/outlines-dev/outlines and https://github.com/noamgat/lm-format-enforcer. Can be overridden per request via guided_decoding_backend parameter.
  --distributed-executor-backend {ray,mp}
        Backend to use for distributed serving. When more than 1 GPU is used, will be automatically set to "ray" if installed or "mp" (multiprocessing) otherwise.
  --worker-use-ray
        Deprecated, use --distributed-executor-backend=ray.
  --pipeline-parallel-size PIPELINE_PARALLEL_SIZE, -pp PIPELINE_PARALLEL_SIZE
        Number of pipeline stages.
  --tensor-parallel-size TENSOR_PARALLEL_SIZE, -tp TENSOR_PARALLEL_SIZE
        Number of tensor parallel replicas.
  --max-parallel-loading-workers MAX_PARALLEL_LOADING_WORKERS
        Load model sequentially in multiple batches, to avoid RAM OOM when using tensor parallel and large models.
  --ray-workers-use-nsight
        If specified, use nsight to profile Ray workers.
  --block-size {8,16,32}
        Token block size for contiguous chunks of tokens. This is ignored on neuron devices and set to max-model-len
  --enable-prefix-caching
        Enables automatic prefix caching.
  --disable-sliding-window
        Disables sliding window, capping to sliding window size
  --use-v2-block-manager
        Use BlockSpaceMangerV2.
  --num-lookahead-slots NUM_LOOKAHEAD_SLOTS
        Experimental scheduling config necessary for speculative decoding. This will be replaced by speculative config in the future; it is present to enable correctness tests until then.
  --seed SEED
        Random seed for operations.
  --swap-space SWAP_SPACE
        CPU swap space size (GiB) per GPU.
  --cpu-offload-gb CPU_OFFLOAD_GB
        The space in GiB to offload to CPU, per GPU. Default is 0, which means no offloading. Intuitively, this argument can be seen as a virtual way to increase the GPU memory size. For example, if you have one 24 GB GPU and set this to 10, virtually you can think of it as a 34 GB GPU. Then you can load a 13B model with BF16 weight, which requires at least 26GB GPU memory. Note that this requires fast CPU-GPU interconnect, as part of the model is loaded from CPU memory to GPU memory on the fly in each model forward pass.
  --gpu-memory-utilization GPU_MEMORY_UTILIZATION
        The fraction of GPU memory to be used for the model executor, which can range from 0 to 1. For example, a value of 0.5 would imply 50% GPU memory utilization. If unspecified, will use the default value of 0.9.
  --num-gpu-blocks-override NUM_GPU_BLOCKS_OVERRIDE
        If specified, ignore GPU profiling result and use this number of GPU blocks. Used for testing preemption.
  --max-num-batched-tokens MAX_NUM_BATCHED_TOKENS
        Maximum number of batched tokens per iteration.
  --max-num-seqs MAX_NUM_SEQS
        Maximum number of sequences per iteration.
  --max-logprobs MAX_LOGPROBS
        Max number of log probs to return logprobs is specified in SamplingParams.
  --disable-log-stats
        Disable logging statistics.
  --quantization {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,modelopt,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,compressed-tensors,bitsandbytes,experts_int8,qqq,neuron_quant,None}, -q {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,modelopt,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,compressed-tensors,bitsandbytes,experts_int8,qqq,neuron_quant,None}
        Method used to quantize the weights. If None, we first check the quantization_config attribute in the model config file. If that is None, we assume the model weights are not quantized and use dtype to determine the data type of the weights.
  --rope-scaling ROPE_SCALING
        RoPE scaling configuration in JSON format. For example, {"type":"dynamic","factor":2.0}
  --rope-theta ROPE_THETA
        RoPE theta. Use with rope_scaling. In some cases, changing the RoPE theta improves the performance of the scaled model.
  --enforce-eager
        Always use eager-mode PyTorch. If False, will use eager mode and CUDA graph in hybrid for maximal performance and flexibility.
  --max-context-len-to-capture MAX_CONTEXT_LEN_TO_CAPTURE
        Maximum context length covered by CUDA graphs. When a sequence has context length larger than this, we fall back to eager mode. (DEPRECATED. Use --max-seq-len-to-capture instead)
  --max-seq-len-to-capture MAX_SEQ_LEN_TO_CAPTURE
        Maximum sequence length covered by CUDA graphs. When a sequence has context length larger than this, we fall back to eager mode.
  --disable-custom-all-reduce
        See ParallelConfig.
  --tokenizer-pool-size TOKENIZER_POOL_SIZE
        Size of tokenizer pool to use for asynchronous tokenization. If 0, will use synchronous tokenization.
  --tokenizer-pool-type TOKENIZER_POOL_TYPE
        Type of tokenizer pool to use for asynchronous tokenization. Ignored if tokenizer_pool_size is 0.
  --tokenizer-pool-extra-config TOKENIZER_POOL_EXTRA_CONFIG
        Extra config for tokenizer pool. This should be a JSON string that will be parsed into a dictionary. Ignored if tokenizer_pool_size is 0.
  --limit-mm-per-prompt LIMIT_MM_PER_PROMPT
        For each multimodal plugin, limit how many input instances to allow for each prompt. Expects a comma-separated list of items, e.g.: image=16,video=2 allows a maximum of 16 images and 2 videos per prompt. Defaults to 1 for each modality.
  --enable-lora
        If True, enable handling of LoRA adapters.
  --max-loras MAX_LORAS
        Max number of LoRAs in a single batch.
  --max-lora-rank MAX_LORA_RANK
        Max LoRA rank.
  --lora-extra-vocab-size LORA_EXTRA_VOCAB_SIZE
        Maximum size of extra vocabulary that can be present in a LoRA adapter (added to the base model vocabulary).
  --lora-dtype {auto,float16,bfloat16,float32}
        Data type for LoRA. If auto, will default to base model dtype.
  --long-lora-scaling-factors LONG_LORA_SCALING_FACTORS
        Specify multiple scaling factors (which can be different from base model scaling factor - see eg. Long LoRA) to allow for multiple LoRA adapters trained with those scaling factors to be used at the same time. If not specified, only adapters trained with the base model scaling factor are allowed.
  --max-cpu-loras MAX_CPU_LORAS
        Maximum number of LoRAs to store in CPU memory. Must be >= than max_num_seqs. Defaults to max_num_seqs.
  --fully-sharded-loras
        By default, only half of the LoRA computation is sharded with tensor parallelism. Enabling this will use the fully sharded layers. At high sequence length, max rank or tensor parallel size, this is likely faster.
  --enable-prompt-adapter
        If True, enable handling of PromptAdapters.
  --max-prompt-adapters MAX_PROMPT_ADAPTERS
        Max number of PromptAdapters in a batch.
  --max-prompt-adapter-token MAX_PROMPT_ADAPTER_TOKEN
        Max number of PromptAdapters tokens
  --device {auto,cuda,neuron,cpu,openvino,tpu,xpu}
        Device type for vLLM execution.
  --num-scheduler-steps NUM_SCHEDULER_STEPS
        Maximum number of forward steps per scheduler call.
  --scheduler-delay-factor SCHEDULER_DELAY_FACTOR
        Apply a delay (of delay factor multiplied by previous prompt latency) before scheduling next prompt.
  --enable-chunked-prefill [ENABLE_CHUNKED_PREFILL]
        If set, the prefill requests can be chunked based on the max_num_batched_tokens.
  --speculative-model SPECULATIVE_MODEL
        The name of the draft model to be used in speculative decoding.
  --speculative-model-quantization {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,modelopt,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,compressed-tensors,bitsandbytes,experts_int8,qqq,neuron_quant,None}
        Method used to quantize the weights of speculative model. If None, we first check the quantization_config attribute in the model config file. If that is None, we assume the model weights are not quantized and use dtype to determine the data type of the weights.
  --num-speculative-tokens NUM_SPECULATIVE_TOKENS
        The number of speculative tokens to sample from the draft model in speculative decoding.
  --speculative-draft-tensor-parallel-size SPECULATIVE_DRAFT_TENSOR_PARALLEL_SIZE, -spec-draft-tp SPECULATIVE_DRAFT_TENSOR_PARALLEL_SIZE
        Number of tensor parallel replicas for the draft model in speculative decoding.
  --speculative-max-model-len SPECULATIVE_MAX_MODEL_LEN
        The maximum sequence length supported by the draft model. Sequences over this length will skip speculation.
  --speculative-disable-by-batch-size SPECULATIVE_DISABLE_BY_BATCH_SIZE
        Disable speculative decoding for new incoming requests if the number of enqueue requests is larger than this value.
  --ngram-prompt-lookup-max NGRAM_PROMPT_LOOKUP_MAX
        Max size of window for ngram prompt lookup in speculative decoding.
  --ngram-prompt-lookup-min NGRAM_PROMPT_LOOKUP_MIN
        Min size of window for ngram prompt lookup in speculative decoding.
  --spec-decoding-acceptance-method {rejection_sampler,typical_acceptance_sampler}
        Specify the acceptance method to use during draft token verification in speculative decoding. Two types of acceptance routines are supported: 1) RejectionSampler which does not allow changing the acceptance rate of draft tokens, 2) TypicalAcceptanceSampler which is configurable, allowing for a higher acceptance rate at the cost of lower quality, and vice versa.
  --typical-acceptance-sampler-posterior-threshold TYPICAL_ACCEPTANCE_SAMPLER_POSTERIOR_THRESHOLD
        Set the lower bound threshold for the posterior probability of a token to be accepted. This threshold is used by the TypicalAcceptanceSampler to make sampling decisions during speculative decoding. Defaults to 0.09
  --typical-acceptance-sampler-posterior-alpha TYPICAL_ACCEPTANCE_SAMPLER_POSTERIOR_ALPHA
        A scaling factor for the entropy-based threshold for token acceptance in the TypicalAcceptanceSampler. Typically defaults to sqrt of --typical-acceptance-sampler-posterior-threshold i.e. 0.3
  --disable-logprobs-during-spec-decoding [DISABLE_LOGPROBS_DURING_SPEC_DECODING]
        If set to True, token log probabilities are not returned during speculative decoding. If set to False, log probabilities are returned according to the settings in SamplingParams. If not specified, it defaults to True. Disabling log probabilities during speculative decoding reduces latency by skipping logprob calculation in proposal sampling, target sampling, and after accepted tokens are determined.
  --model-loader-extra-config MODEL_LOADER_EXTRA_CONFIG
        Extra config for model loader. This will be passed to the model loader corresponding to the chosen load_format. This should be a JSON string that will be parsed into a dictionary.
  --ignore-patterns IGNORE_PATTERNS
        The pattern(s) to ignore when loading the model. Default to 'original/**/*' to avoid repeated loading of llama's checkpoints.
  --preemption-mode PREEMPTION_MODE
        If 'recompute', the engine performs preemption by recomputing; If 'swap', the engine performs preemption by block swapping.
  --served-model-name SERVED_MODEL_NAME [SERVED_MODEL_NAME ...]
        The model name(s) used in the API. If multiple names are provided, the server will respond to any of the provided names. The model name in the model field of a response will be the first name in this list. If not specified, the model name will be the same as the --model argument. Noted that this name(s) will also be used in model_name tag content of prometheus metrics, if multiple names provided, metrics tag will take the first one.
  --qlora-adapter-name-or-path QLORA_ADAPTER_NAME_OR_PATH
        Name or path of the QLoRA adapter.
  --otlp-traces-endpoint OTLP_TRACES_ENDPOINT
        Target URL to which OpenTelemetry traces will be sent.
  --collect-detailed-traces COLLECT_DETAILED_TRACES
        Valid choices are model,worker,all. It makes sense to set this only if --otlp-traces-endpoint is set. If set, it will collect detailed traces for the specified modules. This involves use of possibly costly and or blocking operations and hence might have a performance impact.
  --disable-async-output-proc
        Disable async output processing. This may result in lower performance.
  --override-neuron-config OVERRIDE_NEURON_CONFIG
        override or set neuron device configuration.
  --lookahead-cache-config-dir LOOKAHEAD_CACHE_CONFIG_DIR
        Folder path of lookahead cache config
  --cpu-decoding-memory-utilization CPU_DECODING_MEMORY_UTILIZATION
        the memory is used for lookahead cache, which can range from 0 to 1. If unspecified, will use the default value of 0.15.
  --cpu-prefill-memory-utilization CPU_PREFILL_MEMORY_UTILIZATION
        the memory is used for prefill cache, which can range from 0 to 1. If unspecified, will use the default value of 0.3.
  --ignore-prompt-for-lookahead-cache
        If True, the prompt will be ignored.
  --enable-prefix-cache-offload
        Enables prefix cache offloading
  --apc-offload-not-lazy
        If set, lazy launch of layer 2~n-1 will be disabled.
  --apc-offload-min-access-threshold APC_OFFLOAD_MIN_ACCESS_THRESHOLD
        Min threshold for evict offloading. Default 1.
  --apc-offload-enable-hit-cnt
        Enable hit count in APC.
  --apc-offload-gpu-evictor-limit APC_OFFLOAD_GPU_EVICTOR_LIMIT
        The free table size limited in gpu evictor. -1 default disable.
  --disable-log-requests
        Disable logging requests.
  --max-log-len MAX_LOG_LEN
        Max number of prompt characters or prompt ID numbers being printed in log. Default: Unlimited
```
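As a quick reference, the sketch below launches an online service using a few of the vLLM-compatible options listed above. The model path, port, parallelism degree, context length, and served model name are placeholder values for illustration only; adjust them to your hardware and model.

```bash
# Minimal launch sketch (placeholder values): serve a local model on port 8000
# with 2-way tensor parallelism and a 4K context window.
taco_llm serve /path/to/your/model \
    --host 0.0.0.0 \
    --port 8000 \
    --tensor-parallel-size 2 \
    --gpu-memory-utilization 0.9 \
    --max-model-len 4096 \
    --served-model-name my-model
```

Once started, the service exposes the same OpenAI-compatible HTTP interface as vLLM's server (see the documentation linked under `--config`), and clients connect to it at the chosen `--host` and `--port`.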
In addition to being compatible with all of vLLM's configuration parameters, TACO-LLM adds the following parameters:
```
# Lookahead-Cache
  --lookahead-cache-config-dir LOOKAHEAD_CACHE_CONFIG_DIR
        Folder path of lookahead cache config
  --ignore-prompt-for-lookahead-cache
        If True, the prompt will be ignored.
  --cpu-decoding-memory-utilization CPU_DECODING_MEMORY_UTILIZATION
        the memory is used for lookahead cache, which can range from 0 to 1. If unspecified, will use the default value of 0.15.

# Auto Prefix Cache CPU Offload
  --enable-prefix-cache-offload
        Enables prefix cache offloading
  --cpu-prefill-memory-utilization CPU_PREFILL_MEMORY_UTILIZATION
        the memory is used for prefill cache, which can range from 0 to 1. If unspecified, will use the default value of 0.3.
  --apc-offload-not-lazy
        If set, lazy launch of layer 2~n-1 will be disabled.
```
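To show how these TACO-LLM-specific flags compose with the standard options, here is a sketch that enables Lookahead-Cache and Auto Prefix Cache CPU offload. The model path and config directory are hypothetical, and the two memory-utilization values simply restate the documented defaults (0.15 and 0.3); tune them to your workload.

```bash
# Sketch only: paths are placeholders; memory fractions match the documented defaults.
taco_llm serve /path/to/your/model \
    --port 8000 \
    --lookahead-cache-config-dir ./lookahead_cache_config \
    --cpu-decoding-memory-utilization 0.15 \
    --enable-prefix-cache-offload \
    --cpu-prefill-memory-utilization 0.3
```

The `--apc-offload-*` flags listed in the full help output above can further tune offloading behavior if needed.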