
GPT-4o/Vision models cannot use GPU due to CLIP changes #4815

Open
nick-pape opened this issue Feb 12, 2025 · 1 comment
Labels
bug (Something isn't working), unconfirmed

Comments

nick-pape commented Feb 12, 2025

LocalAI version:
v2.25.0 (07655c0c2e0e5fe2bca86339a12237b69d258636)

Environment, CPU architecture, OS, and Version:

Linux ai-server 5.10.102.1-dxgrknl #1 SMP Sat Apr 23 13:33:19 +07 2022 x86_64 x86_64 x86_64 GNU/Linux
It's a VM with 2 vCPUs and GPU-np partitioning on an RTX 3090. (Somehow managed to get that working...)

Describe the bug
The latest version of CLIP in llama.cpp has GPU support commented out. As a result, at least for me, the vision models all lose the stream connection before the response can be completed, since CLIP inference on the CPU is CPU-intensive and takes a while.

To Reproduce
Pull the latest GPU Docker image, open the visual chat experience, and try to send any image to the GPT-4o model. It will hang while everything is processed on the CPU, and after 30-60s the stream connection is lost. The response (if CLIP on the CPU ever completes it) is never displayed.

I confirmed this works (mostly) fine when sending text only. (There's a separate issue that we should add stopwords to the default config for this model; I'll open an issue for that.)
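
The hang should also reproduce outside the web UI by POSTing directly to LocalAI's OpenAI-compatible chat endpoint. A minimal sketch (port 8080, the gpt-vision model name from my config, and cat.jpg are assumptions from my setup):

```python
# Repro sketch: send an image to LocalAI's OpenAI-compatible endpoint.
# Assumes LocalAI on localhost:8080 and the "gpt-vision" model from my
# config; adjust both for your setup.
import base64
import requests

with open("cat.jpg", "rb") as f:  # any test image
    image_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-vision",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    },
    timeout=120,  # on v2.25.0 this tends to time out while CLIP grinds on the CPU
)
print(resp.status_code, resp.json())
```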

Expected behavior
It should respond telling me I've sent a picture of a cute cat. But really, CLIP should use the GPU.

Logs
Example of the LLM loading on the GPU while CLIP falls back to the CPU:

8:49PM INF Loading model 'gpt-vision' with backend llama-cpp
8:49PM DBG Loading model in memory from file: /build/models/llava-v1.6-mistral-7b.Q5_K_M.gguf
8:49PM DBG Loading Model gpt-vision with gRPC (file: /build/models/llava-v1.6-mistral-7b.Q5_K_M.gguf) (backend: llama-cpp): {backendString:llama-cpp model:llava-v1.6-mistral-7b.Q5_K_M.gguf modelID:gpt-vision assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc000037348 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
8:49PM DBG [llama-cpp-fallback] llama-cpp variant available
8:49PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama-cpp-fallback
8:49PM DBG GRPC Service for gpt-vision will be running at: '127.0.0.1:36861'
8:49PM DBG GRPC Service state dir: /tmp/go-processmanager16132591
8:49PM DBG GRPC Service Started
8:49PM DBG Wait for the service to start up
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr I0000 00:00:1739393384.412600      41 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache, work_serializer_dispatch
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr I0000 00:00:1739393384.415041      41 ev_epoll1_linux.cc:125] grpc epoll fd: 3
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr I0000 00:00:1739393384.415273      41 server_builder.cc:392] Synchronous server. Num CQs: 1, Min pollers: 1, Max Pollers: 2, CQ timeout (msec): 10000
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr I0000 00:00:1739393384.416863      41 ev_epoll1_linux.cc:359] grpc epoll fd: 5
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr I0000 00:00:1739393384.418235      41 tcp_socket_utils.cc:634] TCP_USER_TIMEOUT is available. TCP_USER_TIMEOUT will be used thereafter
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout Server listening on 127.0.0.1:36861
8:49PM DBG GRPC Service Ready
8:49PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:llava-v1.6-mistral-7b.Q5_K_M.gguf ContextSize:4096 Seed:1639192211 NBatch:512 F16Memory:true MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:2 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/build/models/llava-v1.6-mistral-7b.Q5_K_M.gguf Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:true CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 LoadFormat: MMProj:llava-v1.6-7b-mmproj-f16.gguf RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false ModelPath:/build/models LoraAdapters:[] LoraScales:[] Options:[] CacheTypeKey: CacheTypeValue:}
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout {"timestamp":1739393386,"level":"INFO","function":"load_model","line":482,"message":"Multi Modal Mode Enabled"}
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr key clip.use_silu not found in file
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr ggml_cuda_init: found 1 CUDA devices:
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr   Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_load_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23306 MiB free
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /build/models/llava-v1.6-mistral-7b.Q5_K_M.gguf (version GGUF V3 (latest))
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv   0:                       general.architecture str              = llama
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv   1:                               general.name str              = 1.6
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv   4:                          llama.block_count u32              = 32
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  11:                          general.file_type u32              = 17
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - kv  23:               general.quantization_version u32              = 2
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - type  f32:   65 tensors
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - type q5_K:  193 tensors
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_model_loader: - type q6_K:   33 tensors
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_vocab: control token:      2 '</s>' is not marked as EOG
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_vocab: control token:      1 '<s>' is not marked as EOG
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_vocab: special tokens cache size = 3
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_vocab: token to piece cache size = 0.1637 MB
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: format           = GGUF V3 (latest)
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: arch             = llama
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: vocab type       = SPM
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_vocab          = 32000
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_merges         = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: vocab_only       = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_ctx_train      = 32768
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_embd           = 4096
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_layer          = 32
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_head           = 32
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_head_kv        = 8
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_rot            = 128
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_swa            = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_embd_head_k    = 128
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_embd_head_v    = 128
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_gqa            = 4
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_embd_k_gqa     = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_embd_v_gqa     = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: f_norm_eps       = 0.0e+00
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: f_clamp_kqv      = 0.0e+00
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: f_max_alibi_bias = 0.0e+00
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: f_logit_scale    = 0.0e+00
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_ff             = 14336
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_expert         = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_expert_used    = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: causal attn      = 1
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: pooling type     = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: rope type        = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: rope scaling     = linear
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: freq_base_train  = 1000000.0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: freq_scale_train = 1
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: n_ctx_orig_yarn  = 32768
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: rope_finetuned   = unknown
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: ssm_d_conv       = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: ssm_d_inner      = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: ssm_d_state      = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: ssm_dt_rank      = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: ssm_dt_b_c_rms   = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: model type       = 7B
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: model ftype      = Q5_K - Medium
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: model params     = 7.24 B
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: model size       = 4.78 GiB (5.67 BPW)
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: general.name     = 1.6
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: BOS token        = 1 '<s>'
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: EOS token        = 2 '</s>'
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: UNK token        = 0 '<unk>'
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: PAD token        = 0 '<unk>'
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: LF token         = 13 '<0x0A>'
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: EOG token        = 2 '</s>'
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_print_meta: max token length = 48
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_tensors: tensor 'token_embd.weight' (q5_K) (and 0 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_tensors: offloading 32 repeating layers to GPU
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_tensors: offloading output layer to GPU
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_tensors: offloaded 33/33 layers to GPU
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_tensors:   CPU_Mapped model buffer size =    85.94 MiB
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llm_load_tensors:        CUDA0 model buffer size =  4807.05 MiB
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr ...................................................................................................
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model: n_seq_max     = 1
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model: n_ctx         = 4096
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model: n_ctx_per_seq = 4096
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model: n_batch       = 512
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model: n_ubatch      = 512
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model: flash_attn    = 0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model: freq_base     = 1000000.0
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model: freq_scale    = 1
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model: n_ctx_per_seq (4096) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: kv_size = 4096, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 0: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 1: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 2: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 3: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 4: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 5: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 6: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 7: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 8: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 9: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 10: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 11: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 12: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 13: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 14: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 15: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 16: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 17: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 18: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 19: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 20: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 21: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 22: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 23: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 24: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 25: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 26: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 27: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 28: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 29: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 30: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init: layer 31: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_kv_cache_init:      CUDA0 KV buffer size =   512.00 MiB
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model: KV self size  =  512.00 MiB, K (f16):  256.00 MiB, V (f16):  256.00 MiB
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model:  CUDA_Host  output buffer size =     0.12 MiB
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model:      CUDA0 compute buffer size =   296.00 MiB
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model:  CUDA_Host compute buffer size =    16.01 MiB
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model: graph nodes  = 1030
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr llama_new_context_with_model: graph splits = 2
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
8:49PM DBG GRPC(gpt-vision-127.0.0.1:36861): stderr common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: model name:   vit-large336-custom
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: description:  image encoder for LLaVA
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: GGUF version: 3
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: alignment:    32
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: n_tensors:    378
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: n_kv:         25
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: ftype:        f16
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: loaded meta data with 25 key-value pairs and 378 tensors from /build/models/llava-v1.6-7b-mmproj-f16.gguf
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv   0:                       general.architecture str              = clip
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv   1:                      clip.has_text_encoder bool             = false
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv   2:                    clip.has_vision_encoder bool             = true
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv   3:                   clip.has_llava_projector bool             = true
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv   4:                          general.file_type u32              = 1
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv   5:                               general.name str              = vit-large336-custom
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv   6:                        general.description str              = image encoder for LLaVA
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv   7:                        clip.projector_type str              = mlp
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv   8:                     clip.vision.image_size u32              = 336
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv   9:                     clip.vision.patch_size u32              = 14
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  10:               clip.vision.embedding_length u32              = 1024
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  11:            clip.vision.feed_forward_length u32              = 4096
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  12:                 clip.vision.projection_dim u32              = 768
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  13:           clip.vision.attention.head_count u32              = 16
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  14:   clip.vision.attention.layer_norm_epsilon f32              = 0.000010
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  15:                    clip.vision.block_count u32              = 23
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  16:           clip.vision.image_grid_pinpoints arr[i32,10]      = [336, 672, 672, 336, 672, 672, 1008, ...
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  17:          clip.vision.image_crop_resolution u32              = 224
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  18:             clip.vision.image_aspect_ratio str              = anyres
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  19:         clip.vision.image_split_resolution u32              = 224
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  20:            clip.vision.mm_patch_merge_type str              = spatial_unpad
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  21:              clip.vision.mm_projector_type str              = mlp2x_gelu
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  22:                     clip.vision.image_mean arr[f32,3]       = [0.481455, 0.457828, 0.408211]
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  23:                      clip.vision.image_std arr[f32,3]       = [0.268630, 0.261303, 0.275777]
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - kv  24:                              clip.use_gelu bool             = false
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - type  f32:  236 tensors
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: - type  f16:  142 tensors
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: CLIP using CPU backend
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: text_encoder:   0
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: vision_encoder: 1
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: llava_projector:  1
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: minicpmv_projector:  0
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: model size:     595.50 MB
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: metadata size:  0.13 MB
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: params backend buffer size =  595.50 MB (378 tensors)
8:50PM DBG GRPC(gpt-vision-127.0.0.1:36861): stdout clip_model_load: compute allocated memory: 32.89 MB

Additional context

There's an issue complaining about it here: ggml-org/llama.cpp#11322 (comment)

It looks like @ggerganov removed support here: ggml-org/llama.cpp#10896

He points to some issues where, I gather, some models weren't working properly with the GPU backends, e.g. here.

Apparently they are still working on vision support; here's a discussion.

Since LocalAI pulls llama.cpp in as a git submodule that tracks master, it automatically picked up those changes from Dec 19th onwards.

Thus, the newly built images, e.g. v2.25.0, do not actually support GPU acceleration for vision with llama.cpp. It looks like v2.24.2 should be unaffected; I'll see if I can get it working there.

Workarounds (for LocalAI users):

A) Rebuild with CLIP GPU support re-enabled.

B) Downgrade to LocalAI v2.24.2 (released Dec 10th, 2024).

I have not tried either of these yet, but I will, and will update this thread. Downgrading is probably the easiest temporary solution.
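
Either way, you can tell whether a given build actually put CLIP on the GPU by looking for the clip_model_load backend line in the debug log (see the logs above). A small helper sketch (the helper name and log-file argument are mine; the log line format is taken verbatim from this issue):

```python
# Check a saved LocalAI debug log for which backend clip_model_load chose.
# Matches "clip_model_load: CLIP using CPU backend" or "... CUDA backend".
import re
import sys

def clip_backend(log_path):
    pattern = re.compile(r"clip_model_load: CLIP using (\w+) backend")
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = pattern.search(line)
            if m:
                return m.group(1)  # e.g. "CPU" or "CUDA"
    return None

if __name__ == "__main__":
    print(clip_backend(sys.argv[1]) or "backend line not found")
```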

Resolutions for LocalAI, in no specific order:

  • Add a note to the documentation that, until llama.cpp restores CLIP GPU support, GPU acceleration does not work for vision starting in v2.25.0.
    • Link to instructions on how to revert ggerganov's change, potentially in this PR.
  • Potentially: carry a patch for CLIP that brings back GPU acceleration.

BTW -- thanks much for the work on this project! I was able to spin up LocalAI for my Home Assistant in a day!

@nick-pape nick-pape added bug Something isn't working unconfirmed labels Feb 12, 2025

nick-pape commented Feb 13, 2025

Confirmed LocalAI 2.24.2 loads CLIP with CUDA:

1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: model name:   vit-large336-custom
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: description:  image encoder for LLaVA
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: GGUF version: 3
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: alignment:    32
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: n_tensors:    378
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: n_kv:         25
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: ftype:        f16
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: loaded meta data with 25 key-value pairs and 378 tensors from /build/models/llava-v1.6-7b-mmproj-f16.gguf
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv   0:                       general.architecture str              = clip
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv   1:                      clip.has_text_encoder bool             = false
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv   2:                    clip.has_vision_encoder bool             = true
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv   3:                   clip.has_llava_projector bool             = true
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv   4:                          general.file_type u32              = 1
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv   5:                               general.name str              = vit-large336-custom
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv   6:                        general.description str              = image encoder for LLaVA
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv   7:                        clip.projector_type str              = mlp
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv   8:                     clip.vision.image_size u32              = 336
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv   9:                     clip.vision.patch_size u32              = 14
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  10:               clip.vision.embedding_length u32              = 1024
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  11:            clip.vision.feed_forward_length u32              = 4096
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  12:                 clip.vision.projection_dim u32              = 768
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  13:           clip.vision.attention.head_count u32              = 16
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  14:   clip.vision.attention.layer_norm_epsilon f32              = 0.000010
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  15:                    clip.vision.block_count u32              = 23
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  16:           clip.vision.image_grid_pinpoints arr[i32,10]      = [336, 672, 672, 336, 672, 672, 1008, ...
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  17:          clip.vision.image_crop_resolution u32              = 224
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  18:             clip.vision.image_aspect_ratio str              = anyres
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  19:         clip.vision.image_split_resolution u32              = 224
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  20:            clip.vision.mm_patch_merge_type str              = spatial_unpad
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  21:              clip.vision.mm_projector_type str              = mlp2x_gelu
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  22:                     clip.vision.image_mean arr[f32,3]       = [0.481455, 0.457828, 0.408211]
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  23:                      clip.vision.image_std arr[f32,3]       = [0.268630, 0.261303, 0.275777]
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - kv  24:                              clip.use_gelu bool             = false
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - type  f32:  236 tensors
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: - type  f16:  142 tensors
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: CLIP using CUDA backend
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: text_encoder:   0
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: vision_encoder: 1
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: llava_projector:  1
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: minicpmv_projector:  0
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: model size:     595.50 MB
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: metadata size:  0.13 MB
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: params backend buffer size =  595.50 MB (378 tensors)
1:17AM DBG GRPC(gpt-vision-127.0.0.1:33595): stdout clip_model_load: compute allocated memory: 32.89 MB

Still don't get a response in the /chat UI, though.

Edit: I should have read the docs. You need to use a 500x500 image. That works, with the GPU, on 2.24.2.
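
If anyone else hits this, a quick way to get a conforming input is to resize before sending; a sketch with Pillow (file names are placeholders):

```python
# Resize an arbitrary image to the 500x500 input the docs call for.
from PIL import Image  # pip install pillow

img = Image.open("cat.jpg")
img = img.resize((500, 500))  # note: does not preserve aspect ratio
img.save("cat_500.jpg", "JPEG")
```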


I'll try the correct image size on 2.25.0 and see what the latency is like with CLIP on the CPU (if it works at all).
