Created March 20, 2026 19:53
nvidia@dell-station:~$ nvidia-smi
Fri Mar 20 12:50:52 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 590.48.01 Driver Version: 590.48.01 CUDA Version: 13.1 |
+-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA RTX PRO 2000 Blac... On | 00000004:01:00.0 Off | Off |
| 30% 45C P8 7W / 70W | 1MiB / 16311MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GB300 On | 00000009:06:00.0 Off | 0 |
| N/A 37C P0 154W / 1300W | 0MiB / 284208MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
nvidia@dell-station:~$ source myenv/bin/activate
(myenv) nvidia@dell-station:~$ CUDA_VISIBLE_DEVICES=1 vllm serve nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 --async-scheduling --served-model-name nvidia/nemotron-3-super --dtype auto --kv-cache-dtype fp8 --tensor-parallel-size 1 --pipeline-parallel-size 1 --data-parallel-size 1 --swap-space 0 --trust-remote-code --attention-backend TRITON_ATTN --gpu-memory-utilization 0.9 --enable-chunked-prefill
WARNING 03-20 12:51:22 [cuda.py:671] Detected different devices in the system: NVIDIA RTX PRO 2000 Blackwell, NVIDIA GB300. Please make sure to set `CUDA_DEVICE_ORDER=PCI_BUS_ID` to avoid unexpected behavior.
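This warning matters later in the log. `nvidia-smi` numbers devices in PCI bus order, while CUDA's default enumeration can differ on mixed-GPU systems, so `CUDA_VISIBLE_DEVICES=1` need not select the GPU that `nvidia-smi` shows as GPU 1 (the GB300). A minimal sketch of the environment setup the warning asks for, assuming the same device indices as this log (the `echo` is only a sanity check):

```shell
# Pin CUDA's device enumeration to PCI bus order so that CUDA index 1
# refers to the same device nvidia-smi lists as GPU 1 (the GB300 here),
# then restrict the process to that one device.
export CUDA_DEVICE_ORDER=PCI_BUS_ID
export CUDA_VISIBLE_DEVICES=1
echo "$CUDA_DEVICE_ORDER $CUDA_VISIBLE_DEVICES"
```

Both variables have to be set before the process that initializes CUDA starts; exporting them in the shell before running `vllm serve` is sufficient.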
(APIServer pid=28373) INFO 03-20 12:51:26 [utils.py:302]
(APIServer pid=28373) INFO 03-20 12:51:26 [utils.py:302] █ █ █▄ ▄█
(APIServer pid=28373) INFO 03-20 12:51:26 [utils.py:302] ▄▄ ▄█ █ █ █ ▀▄▀ █ version 0.17.1
(APIServer pid=28373) INFO 03-20 12:51:26 [utils.py:302] █▄█▀ █ █ █ █ model nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4
(APIServer pid=28373) INFO 03-20 12:51:26 [utils.py:302] ▀▀ ▀▀▀▀▀ ▀▀▀▀▀ ▀ ▀
(APIServer pid=28373) INFO 03-20 12:51:26 [utils.py:302]
(APIServer pid=28373) INFO 03-20 12:51:26 [utils.py:238] non-default args: {'model_tag': 'nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4', 'model': 'nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4', 'trust_remote_code': True, 'served_model_name': ['nvidia/nemotron-3-super'], 'attention_backend': 'TRITON_ATTN', 'swap_space': 0.0, 'kv_cache_dtype': 'fp8', 'enable_chunked_prefill': True, 'async_scheduling': True}
(APIServer pid=28373) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
(APIServer pid=28373) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
(APIServer pid=28373) INFO 03-20 12:51:28 [model.py:531] Resolved architecture: NemotronHForCausalLM
(APIServer pid=28373) INFO 03-20 12:51:28 [model.py:1554] Using max model len 262144
(APIServer pid=28373) INFO 03-20 12:51:28 [cache.py:223] Using fp8 data type to store kv cache. It reduces the GPU memory footprint and boosts the performance. Meanwhile, it may cause accuracy drop without a proper scaling factor.
(APIServer pid=28373) INFO 03-20 12:51:28 [scheduler.py:231] Chunked prefill is enabled with max_num_batched_tokens=8192.
(APIServer pid=28373) INFO 03-20 12:51:28 [config.py:618] Updating mamba_ssm_cache_dtype to 'float32' for NemotronH model
(APIServer pid=28373) INFO 03-20 12:51:28 [config.py:544] Setting attention block size to 8320 tokens to ensure that attention page size is >= mamba page size.
(APIServer pid=28373) INFO 03-20 12:51:28 [config.py:575] Padding mamba page size by 0.10% to ensure that mamba page size and attention page size are exactly equal.
(APIServer pid=28373) WARNING 03-20 12:51:28 [modelopt.py:370] Detected ModelOpt fp8 checkpoint (quant_algo=FP8). Please note that the format is experimental and could change.
(APIServer pid=28373) WARNING 03-20 12:51:28 [modelopt.py:984] Detected ModelOpt NVFP4 checkpoint. Please note that the format is experimental and could change in future.
(APIServer pid=28373) INFO 03-20 12:51:28 [vllm.py:747] Asynchronous scheduling is enabled.
WARNING 03-20 12:51:31 [cuda.py:671] Detected different devices in the system: NVIDIA RTX PRO 2000 Blackwell, NVIDIA GB300. Please make sure to set `CUDA_DEVICE_ORDER=PCI_BUS_ID` to avoid unexpected behavior.
(EngineCore_DP0 pid=28525) INFO 03-20 12:51:35 [core.py:101] Initializing a V1 LLM engine (v0.17.1) with config: model='nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4', speculative_config=None, tokenizer='nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=262144, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=modelopt_mixed, enforce_eager=False, enable_return_routed_experts=False, kv_cache_dtype=fp8, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=nvidia/nemotron-3-super, enable_prefix_caching=False, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer', 'vllm::rocm_aiter_sparse_attn_indexer', 'vllm::unified_kv_cache_update', 'vllm::unified_mla_kv_cache_update'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': False}, 'local_cache_dir': None, 'fast_moe_cold_start': True, 'static_all_moe_layers': []}
(EngineCore_DP0 pid=28525) INFO 03-20 12:51:35 [parallel_state.py:1393] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://192.168.4.40:47225 backend=nccl
(EngineCore_DP0 pid=28525) INFO 03-20 12:51:35 [parallel_state.py:1715] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank 0, EPLB rank N/A
(EngineCore_DP0 pid=28525) INFO 03-20 12:51:35 [base.py:106] Offloader set to NoopOffloader
(EngineCore_DP0 pid=28525) INFO 03-20 12:51:35 [gpu_model_runner.py:4281] Starting to load model nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4...
(EngineCore_DP0 pid=28525) INFO 03-20 12:51:36 [__init__.py:257] Selected CutlassFP8ScaledMMLinearKernel for ModelOptFp8LinearMethod
(EngineCore_DP0 pid=28525) INFO 03-20 12:51:36 [nvfp4_utils.py:85] Using NvFp4LinearBackend.VLLM_CUTLASS for NVFP4 GEMM
(EngineCore_DP0 pid=28525) INFO 03-20 12:51:36 [nvfp4.py:257] Using 'FLASHINFER_TRTLLM' NvFp4 MoE backend out of potential backends: ['FLASHINFER_TRTLLM', 'FLASHINFER_CUTEDSL', 'FLASHINFER_CUTLASS', 'VLLM_CUTLASS', 'MARLIN'].
(EngineCore_DP0 pid=28525) INFO 03-20 12:51:36 [cuda.py:368] Using AttentionBackendEnum.TRITON_ATTN backend.
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [gpu_model_runner.py:4362] Failed to load model - not enough GPU memory. Try lowering --gpu-memory-utilization to free memory for weights, increasing --tensor-parallel-size, or using --quantization. See https://docs.vllm.ai/en/latest/configuration/conserving_memory/ for more tips. (original error: CUDA out of memory. Tried to allocate 672.00 MiB. GPU 0 has a total capacity of 15.48 GiB of which 41.31 MiB is free. Including non-PyTorch memory, this process has 15.43 GiB memory in use. Of the allocated memory 15.20 GiB is allocated by PyTorch, and 17.85 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables))
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100] EngineCore failed to start.
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100] Traceback (most recent call last):
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 1090, in run_engine_core
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     engine_core = EngineCoreProc(*args, engine_index=dp_rank, **kwargs)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/tracing/otel.py", line 178, in sync_wrapper
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     return func(*args, **kwargs)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 834, in __init__
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     super().__init__(
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 110, in __init__
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     self.model_executor = executor_class(vllm_config)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/tracing/otel.py", line 178, in sync_wrapper
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     return func(*args, **kwargs)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 103, in __init__
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     self._init_executor()
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/executor/uniproc_executor.py", line 49, in _init_executor
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     self.driver_worker.load_model()
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 337, in load_model
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     self.model_runner.load_model(load_dummy_weights=dummy_weights)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/tracing/otel.py", line 178, in sync_wrapper
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     return func(*args, **kwargs)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4363, in load_model
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     raise e
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4297, in load_model
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     self.model = model_loader.load_model(
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]                  ^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/tracing/otel.py", line 178, in sync_wrapper
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     return func(*args, **kwargs)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/model_loader/base_loader.py", line 54, in load_model
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     model = initialize_model(
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]             ^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/tracing/otel.py", line 178, in sync_wrapper
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     return func(*args, **kwargs)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/model_loader/utils.py", line 56, in initialize_model
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     model = model_class(vllm_config=vllm_config, prefix=prefix)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/nemotron_h.py", line 876, in __init__
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     self.model = NemotronHModel(
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]                  ^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 305, in __init__
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     old_init(self, **kwargs)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/nemotron_h.py", line 597, in __init__
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     self.start_layer, self.end_layer, self.layers = make_layers(
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]                                                     ^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 637, in make_layers
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     + get_offloader().wrap_modules(
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/offloader/base.py", line 90, in wrap_modules
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     return list(modules_generator)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]            ^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 638, in <genexpr>
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     layer_fn(prefix=f"{prefix}.{idx}") for idx in range(start_layer, end_layer)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/nemotron_h.py", line 587, in get_layer
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     return layer_class(
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]            ^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/nemotron_h.py", line 356, in __init__
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     self.mixer = NemotronHMoE(
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]                  ^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/nemotron_h.py", line 213, in __init__
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     self.experts = SharedFusedMoE(
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]                    ^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/layers/fused_moe/layer.py", line 629, in __init__
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     self.quant_method.create_weights(layer=self, **moe_quant_params)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/modelopt.py", line 1249, in create_weights
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     data=torch.empty(
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]          ^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]   File "/home/nvidia/myenv/lib/python3.12/site-packages/torch/utils/_device.py", line 109, in __torch_function__
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]     return func(*args, **kwargs)
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100]            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) ERROR 03-20 12:51:37 [core.py:1100] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 672.00 MiB. GPU 0 has a total capacity of 15.48 GiB of which 41.31 MiB is free. Including non-PyTorch memory, this process has 15.43 GiB memory in use. Of the allocated memory 15.20 GiB is allocated by PyTorch, and 17.85 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
(EngineCore_DP0 pid=28525) Process EngineCore_DP0:
(EngineCore_DP0 pid=28525) Traceback (most recent call last):
(EngineCore_DP0 pid=28525)   File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_DP0 pid=28525)     self.run()
(EngineCore_DP0 pid=28525)   File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
(EngineCore_DP0 pid=28525)     self._target(*self._args, **self._kwargs)
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 1104, in run_engine_core
(EngineCore_DP0 pid=28525)     raise e
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 1090, in run_engine_core
(EngineCore_DP0 pid=28525)     engine_core = EngineCoreProc(*args, engine_index=dp_rank, **kwargs)
(EngineCore_DP0 pid=28525)                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/tracing/otel.py", line 178, in sync_wrapper
(EngineCore_DP0 pid=28525)     return func(*args, **kwargs)
(EngineCore_DP0 pid=28525)            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 834, in __init__
(EngineCore_DP0 pid=28525)     super().__init__(
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 110, in __init__
(EngineCore_DP0 pid=28525)     self.model_executor = executor_class(vllm_config)
(EngineCore_DP0 pid=28525)                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/tracing/otel.py", line 178, in sync_wrapper
(EngineCore_DP0 pid=28525)     return func(*args, **kwargs)
(EngineCore_DP0 pid=28525)            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 103, in __init__
(EngineCore_DP0 pid=28525)     self._init_executor()
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/executor/uniproc_executor.py", line 49, in _init_executor
(EngineCore_DP0 pid=28525)     self.driver_worker.load_model()
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 337, in load_model
(EngineCore_DP0 pid=28525)     self.model_runner.load_model(load_dummy_weights=dummy_weights)
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/tracing/otel.py", line 178, in sync_wrapper
(EngineCore_DP0 pid=28525)     return func(*args, **kwargs)
(EngineCore_DP0 pid=28525)            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4363, in load_model
(EngineCore_DP0 pid=28525)     raise e
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4297, in load_model
(EngineCore_DP0 pid=28525)     self.model = model_loader.load_model(
(EngineCore_DP0 pid=28525)                  ^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/tracing/otel.py", line 178, in sync_wrapper
(EngineCore_DP0 pid=28525)     return func(*args, **kwargs)
(EngineCore_DP0 pid=28525)            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/model_loader/base_loader.py", line 54, in load_model
(EngineCore_DP0 pid=28525)     model = initialize_model(
(EngineCore_DP0 pid=28525)             ^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/tracing/otel.py", line 178, in sync_wrapper
(EngineCore_DP0 pid=28525)     return func(*args, **kwargs)
(EngineCore_DP0 pid=28525)            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/model_loader/utils.py", line 56, in initialize_model
(EngineCore_DP0 pid=28525)     model = model_class(vllm_config=vllm_config, prefix=prefix)
(EngineCore_DP0 pid=28525)             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/nemotron_h.py", line 876, in __init__
(EngineCore_DP0 pid=28525)     self.model = NemotronHModel(
(EngineCore_DP0 pid=28525)                  ^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 305, in __init__
(EngineCore_DP0 pid=28525)     old_init(self, **kwargs)
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/nemotron_h.py", line 597, in __init__
(EngineCore_DP0 pid=28525)     self.start_layer, self.end_layer, self.layers = make_layers(
(EngineCore_DP0 pid=28525)                                                     ^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 637, in make_layers
(EngineCore_DP0 pid=28525)     + get_offloader().wrap_modules(
(EngineCore_DP0 pid=28525)       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/offloader/base.py", line 90, in wrap_modules
(EngineCore_DP0 pid=28525)     return list(modules_generator)
(EngineCore_DP0 pid=28525)            ^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 638, in <genexpr>
(EngineCore_DP0 pid=28525)     layer_fn(prefix=f"{prefix}.{idx}") for idx in range(start_layer, end_layer)
(EngineCore_DP0 pid=28525)     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/nemotron_h.py", line 587, in get_layer
(EngineCore_DP0 pid=28525)     return layer_class(
(EngineCore_DP0 pid=28525)            ^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/nemotron_h.py", line 356, in __init__
(EngineCore_DP0 pid=28525)     self.mixer = NemotronHMoE(
(EngineCore_DP0 pid=28525)                  ^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/models/nemotron_h.py", line 213, in __init__
(EngineCore_DP0 pid=28525)     self.experts = SharedFusedMoE(
(EngineCore_DP0 pid=28525)                    ^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/layers/fused_moe/layer.py", line 629, in __init__
(EngineCore_DP0 pid=28525)     self.quant_method.create_weights(layer=self, **moe_quant_params)
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/modelopt.py", line 1249, in create_weights
(EngineCore_DP0 pid=28525)     data=torch.empty(
(EngineCore_DP0 pid=28525)          ^^^^^^^^^^^^
(EngineCore_DP0 pid=28525)   File "/home/nvidia/myenv/lib/python3.12/site-packages/torch/utils/_device.py", line 109, in __torch_function__
(EngineCore_DP0 pid=28525)     return func(*args, **kwargs)
(EngineCore_DP0 pid=28525)            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=28525) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 672.00 MiB. GPU 0 has a total capacity of 15.48 GiB of which 41.31 MiB is free. Including non-PyTorch memory, this process has 15.43 GiB memory in use. Of the allocated memory 15.20 GiB is allocated by PyTorch, and 17.85 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank0]:[W320 12:51:37.623456461 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
(APIServer pid=28373) Traceback (most recent call last):
(APIServer pid=28373)   File "/home/nvidia/myenv/bin/vllm", line 10, in <module>
(APIServer pid=28373)     sys.exit(main())
(APIServer pid=28373)              ^^^^^^
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/entrypoints/cli/main.py", line 73, in main
(APIServer pid=28373)     args.dispatch_function(args)
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/entrypoints/cli/serve.py", line 112, in cmd
(APIServer pid=28373)     uvloop.run(run_server(args))
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/uvloop/__init__.py", line 96, in run
(APIServer pid=28373)     return __asyncio.run(
(APIServer pid=28373)            ^^^^^^^^^^^^^^
(APIServer pid=28373)   File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
(APIServer pid=28373)     return runner.run(main)
(APIServer pid=28373)            ^^^^^^^^^^^^^^^^
(APIServer pid=28373)   File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=28373)     return self._loop.run_until_complete(task)
(APIServer pid=28373)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=28373)   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/uvloop/__init__.py", line 48, in wrapper
(APIServer pid=28373)     return await main
(APIServer pid=28373)            ^^^^^^^^^^
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 471, in run_server
(APIServer pid=28373)     await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 490, in run_server_worker
(APIServer pid=28373)     async with build_async_engine_client(
(APIServer pid=28373)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=28373)     return await anext(self.gen)
(APIServer pid=28373)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 96, in build_async_engine_client
(APIServer pid=28373)     async with build_async_engine_client_from_engine_args(
(APIServer pid=28373)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=28373)     return await anext(self.gen)
(APIServer pid=28373)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 137, in build_async_engine_client_from_engine_args
(APIServer pid=28373)     async_llm = AsyncLLM.from_vllm_config(
(APIServer pid=28373)                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 225, in from_vllm_config
(APIServer pid=28373)     return cls(
(APIServer pid=28373)            ^^^^
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 154, in __init__
(APIServer pid=28373)     self.engine_core = EngineCoreClient.make_async_mp_client(
(APIServer pid=28373)                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/tracing/otel.py", line 178, in sync_wrapper
(APIServer pid=28373)     return func(*args, **kwargs)
(APIServer pid=28373)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 127, in make_async_mp_client
(APIServer pid=28373)     return AsyncMPClient(*client_args)
(APIServer pid=28373)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/tracing/otel.py", line 178, in sync_wrapper
(APIServer pid=28373)     return func(*args, **kwargs)
(APIServer pid=28373)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 911, in __init__
(APIServer pid=28373)     super().__init__(
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 569, in __init__
(APIServer pid=28373)     with launch_core_engines(
(APIServer pid=28373)   File "/usr/lib/python3.12/contextlib.py", line 144, in __exit__
(APIServer pid=28373)     next(self.gen)
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 951, in launch_core_engines
(APIServer pid=28373)     wait_for_engine_startup(
(APIServer pid=28373)   File "/home/nvidia/myenv/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 1010, in wait_for_engine_startup
(APIServer pid=28373)     raise RuntimeError(
(APIServer pid=28373) RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {}
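Before the retry below, note what the OOM message actually reports: the failing allocation landed on a GPU with 15.48 GiB total capacity, i.e. the 16 GB RTX PRO 2000, not the 284 GB GB300 that `CUDA_VISIBLE_DEVICES=1` was meant to select. That is consistent with the earlier device-order warning. A quick sketch of the arithmetic from the error text (figures copied from the log; the computed free amount differs slightly from the logged 41.31 MiB because the totals are rounded):

```python
# Figures from the torch.OutOfMemoryError message above.
total_gib = 15.48      # total capacity of the GPU the process actually landed on
in_use_gib = 15.43     # memory this process already holds
request_mib = 672.0    # size of the next MoE expert weight tensor

free_mib = (total_gib - in_use_gib) * 1024  # ~51 MiB with these rounded inputs
print(free_mib < request_mib)  # True: the 16 GB card cannot hold the next tensor
```

So the suggestions in the error (lower `--gpu-memory-utilization`, raise `--tensor-parallel-size`) treat a symptom; on this box the likely fix is making device index 1 actually mean the GB300.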
(myenv) nvidia@dell-station:~$ CUDA_VISIBLE_DEVICES=1 vllm serve nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 --async-scheduling --served-model-name nvidia/nemotron-3-super --dtype auto --kv-cache-dtype fp8 --tensor-parallel-size 1 --pipeline-parallel-size 1 --data-parallel-size 1 --swap-space 0 --trust-remote-code --attention-backend TRITON_ATTN --gpu-memory-utilization 0.9 --enable-chunked-prefill