bug: Image recognition #3087
Comments
Hi @kalle07, can you elaborate on the issue you are facing, maybe with a screenshot from the Jan app?
What do you want to see?
I'm facing the same issue. Here are some details: I'm on PopOS 22.04 with the latest drivers and so on.
What is the latest status? You can't reproduce it, right? @Van-QA
At least you can try the Jun 26 / Jul 2 versions, where we both had that error ;)
Duplicate of janhq/models#47
Current behavior
Error log below.
Btw, the same model and the same mmproj file work with koboldcpp, so maybe you can copy-paste from there ;)
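For what it's worth, one way to check whether the GGUF + mmproj pair itself is healthy outside of Jan/cortex is to run it through a llama.cpp `llava-cli` build directly. The sketch below is an assumption, not something from this report: it assumes a local llama.cpp build that provides `llava-cli` on PATH and reuses the file paths from the log further down; the image path is a placeholder.

```python
# Hypothetical sanity check: run the same GGUF + mmproj pair through a local
# llama.cpp llava-cli build, bypassing Jan/cortex entirely.
# Assumes llava-cli is on PATH; the image path below is a placeholder.
import subprocess

MODEL = r"C:\Users\kallemst\jan\models\llava-7b\llava-v1.6-mistral-7b.Q4_K_M.gguf"
MMPROJ = r"C:\Users\kallemst\jan\models\llava-7b\mmproj-model-f16.gguf"
IMAGE = r"C:\path\to\test-512x512.jpg"  # placeholder image path

result = subprocess.run(
    [
        "llava-cli",
        "-m", MODEL,
        "--mmproj", MMPROJ,
        "--image", IMAGE,
        "-p", "Describe this image.",
        "-ngl", "100",  # offload all layers, mirroring the Jan log below
    ],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```

If this describes the image correctly while Jan still fails, the problem is more likely in cortex's handling of the request than in the model files themselves.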
Minimum reproduction step
1. Choose a model (your hosted): LLaVA 7B
2. Attach a jpg (512x512); a scripted version of the same request is sketched below
Expected behavior
...
Screenshots / Logs
2024-06-22T11:34:32.434Z [CORTEX]::Debug: Request to kill cortex
2024-06-22T11:34:32.440Z [CORTEX]::Debug: cortex process is terminated
2024-06-22T11:39:43.866Z [SPECS]::Version: 0.5.1
2024-06-22T11:39:43.867Z [SPECS]::Machine: x86_64
2024-06-22T11:39:43.867Z [SPECS]::Endianness: LE
2024-06-22T11:39:43.866Z [SPECS]::CPUs: [{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":3867328,"nice":0,"sys":3539187,"idle":9017531,"irq":1018109}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":4838406,"nice":0,"sys":1642953,"idle":9942484,"irq":34453}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5519609,"nice":0,"sys":2000546,"idle":8903687,"irq":27984}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":4872796,"nice":0,"sys":1642296,"idle":9908750,"irq":26093}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5347093,"nice":0,"sys":1420718,"idle":9656031,"irq":33109}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":4810140,"nice":0,"sys":1254828,"idle":10358875,"irq":34515}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5317484,"nice":0,"sys":1446343,"idle":9660015,"irq":33125}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":4916453,"nice":0,"sys":1289843,"idle":10217531,"irq":34453}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5031203,"nice":0,"sys":1353562,"idle":10039062,"irq":27750}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":4791078,"nice":0,"sys":1192718,"idle":10440031,"irq":30843}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5097828,"nice":0,"sys":1237109,"idle":10088890,"irq":29093}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5281687,"nice":0,"sys":1214156,"idle":9927984,"irq":23765}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5203218,"nice":0,"sys":1525718,"idle":9694890,"irq":18500}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5202234,"nice":0,"sys":1436453,"idle":9785140,"irq":20562}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5402796,"nice":0,"sys":1446109,"idle":9574921,"irq":19265}},{"model":"Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz","speed":2904,"times":{"user":5387609,"nice":0,"sys":1350750,"idle":9685453,"irq":17296}}]
2024-06-22T11:39:43.867Z [SPECS]::Parallelism: 16
2024-06-22T11:39:43.867Z [SPECS]::Free Mem: 54787137536
2024-06-22T11:39:43.867Z [SPECS]::Total Mem: 68598566912
2024-06-22T11:39:43.867Z [SPECS]::OS Version: Windows 10 Pro
2024-06-22T11:39:43.867Z [SPECS]::OS Release: 10.0.19045
2024-06-22T11:39:43.869Z [APP]::{"notify":true,"run_mode":"gpu","nvidia_driver":{"exist":true,"version":"555.99"},"cuda":{"exist":true,"version":"12"},"gpus":[{"id":"0","vram":"16380","name":"NVIDIA GeForce RTX 4060 Ti","arch":"ada"}],"gpu_highest_vram":"0","gpus_in_use":["0"],"is_initial":false,"vulkan":false}
2024-06-22T11:39:43.867Z [SPECS]::OS Platform: win32
2024-06-22T11:39:43.867Z [SPECS]::0, 16380, NVIDIA GeForce RTX 4060 Ti
2024-06-22T11:40:40.935Z [CORTEX]::CPU information - 9
2024-06-22T11:40:40.935Z [CORTEX]::Debug: Request to kill cortex
2024-06-22T11:40:40.954Z [CORTEX]::Debug: cortex process is terminated
2024-06-22T11:40:40.955Z [CORTEX]::Debug: Spawn cortex at path: C:\Users\kallemst\jan\extensions\@janhq\inference-cortex-extension\dist\bin\win-cuda-12-0\cortex-cpp.exe, and args: 1,127.0.0.1,3928
2024-06-22T11:40:40.955Z [APP]::C:\Users\kallemst\jan\extensions\@janhq\inference-cortex-extension\dist\bin\win-cuda-12-0
2024-06-22T11:40:40.955Z [CORTEX]::Debug: Spawning cortex subprocess...
2024-06-22T11:40:41.075Z [CORTEX]::Debug: cortex is ready
2024-06-22T11:40:41.076Z [CORTEX]::Debug: Loading model with params {"cpu_threads":9,"vision_model":true,"text_model":false,"ctx_len":2048,"prompt_template":"\n### Instruction:\n{prompt}\n### Response:\n","llama_model_path":"C:\Users\kallemst\jan\models\llava-7b\llava-v1.6-mistral-7b.Q4_K_M.gguf","mmproj":"C:\Users\kallemst\jan\models\llava-7b\mmproj-model-f16.gguf","user_prompt":"\n### Instruction:\n","ai_prompt":"\n### Response:\n","model":"llava-7b","ngl":100}
2024-06-22T11:40:41.144Z [CORTEX]::Debug: 20240622 11:40:40.986000 UTC 3448 INFO cortex-cpp version: default_version - main.cc:73
20240622 11:40:40.986000 UTC 3448 INFO cortex.llamacpp version: 0.1.17 - main.cc:78
20240622 11:40:40.986000 UTC 3448 INFO Server started, listening at: 127.0.0.1:3928 - main.cc:81
20240622 11:40:40.986000 UTC 3448 INFO Please load your model - main.cc:82
20240622 11:40:40.986000 UTC 3448 INFO Number of thread is:16 - main.cc:89
20240622 11:40:41.083000 UTC 13412 INFO CPU instruction set: fpu = 1| mmx = 1| sse = 1| sse2 = 1| sse3 = 1| ssse3 = 1| sse4_1 = 1| sse4_2 = 1| pclmulqdq = 1| avx = 1| avx2 = 1| avx512_f = 0| avx512_dq = 0| avx512_ifma = 0| avx512_pf = 0| avx512_er = 0| avx512_cd = 0| avx512_bw = 0| has_avx512_vl = 0| has_avx512_vbmi = 0| has_avx512_vbmi2 = 0| avx512_vnni = 0| avx512_bitalg = 0| avx512_vpopcntdq = 0| avx512_4vnniw = 0| avx512_4fmaps = 0| avx512_vp2intersect = 0| aes = 1| f16c = 1| - server.cc:272
20240622 11:40:41.150000 UTC 13412 INFO Loaded engine: cortex.llamacpp - server.cc:299
20240622 11:40:41.150000 UTC 13412 INFO MMPROJ FILE detected, multi-model enabled! - llama_engine.cc:287
20240622 11:40:41.150000 UTC 13412 DEBUG [LoadModelImpl] cache_type: f16 - llama_engine.cc:347
20240622 11:40:41.150000 UTC 13412 DEBUG [LoadModelImpl] Enabled Flash Attention - llama_engine.cc:356
20240622 11:40:41.150000 UTC 13412 DEBUG [LoadModelImpl] stop: null
{"timestamp":1719056441,"level":"INFO","function":"LoadModelImpl","line":400,"message":"system info","n_threads":9,"total_threads":16,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | "}
2024-06-22T11:40:41.540Z [CORTEX]::Error: llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from C:\Users\kallemst\jan\models\llava-7b\llava-v1.6-mistral-7b.Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = 1.6
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 15
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
2024-06-22T11:40:41.547Z [CORTEX]::Error: llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
2024-06-22T11:40:41.563Z [CORTEX]::Error: llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
2024-06-22T11:40:41.565Z [CORTEX]::Error: llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
2024-06-22T11:40:41.580Z [CORTEX]::Error: llm_load_vocab: special tokens cache size = 259
2024-06-22T11:40:41.584Z [CORTEX]::Error: llm_load_vocab: token to piece cache size = 0.1637 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 4.07 GiB (4.83 BPW)
llm_load_print_meta: general.name = 1.6
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
2024-06-22T11:40:41.597Z [CORTEX]::Error: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
2024-06-22T11:40:41.674Z [CORTEX]::Error: llm_load_tensors: ggml ctx size = 0.30 MiB
2024-06-22T11:40:42.013Z [CORTEX]::Error: llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 70.31 MiB
llm_load_tensors: CUDA0 buffer size = 4095.05 MiB
2024-06-22T11:40:43.146Z [CORTEX]::Error: llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 2048
llama_new_context_with_model: flash_attn = 1
llama_new_context_with_model: freq_base = 1000000.0
2024-06-22T11:40:43.146Z [CORTEX]::Error: llama_new_context_with_model: freq_scale = 1
2024-06-22T11:40:43.153Z [CORTEX]::Error: llama_kv_cache_init: CUDA0 KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
2024-06-22T11:40:43.154Z [CORTEX]::Error: llama_new_context_with_model: CUDA_Host output buffer size = 0.14 MiB
2024-06-22T11:40:43.165Z [CORTEX]::Error: llama_new_context_with_model: CUDA0 compute buffer size = 344.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 48.02 MiB
llama_new_context_with_model: graph nodes = 903
llama_new_context_with_model: graph splits = 2
2024-06-22T11:40:43.324Z [CORTEX]::Debug: Load model success with response {}
2024-06-22T11:40:43.327Z [CORTEX]::Debug: Validate model state with response 200
2024-06-22T11:40:43.328Z [CORTEX]::Debug: Validate model state success with response {"model_data":"{"frequency_penalty":0.0,"grammar":"","ignore_eos":false,"logit_bias":[],"min_p":0.05000000074505806,"mirostat":0,"mirostat_eta":0.10000000149011612,"mirostat_tau":5.0,"model":"C:\\Users\\kallemst\\jan\\models\\llava-7b\\llava-v1.6-mistral-7b.Q4_K_M.gguf","n_ctx":2048,"n_keep":0,"n_predict":2,"n_probs":0,"penalize_nl":false,"penalty_prompt_tokens":[],"presence_penalty":0.0,"repeat_last_n":64,"repeat_penalty":1.0,"seed":4294967295,"stop":[],"stream":false,"temperature":0.800000011920929,"tfs_z":1.0,"top_k":40,"top_p":0.949999988079071,"typical_p":1.0,"use_penalty_prompt_tokens":false}","model_loaded":true}
2024-06-22T11:40:43.352Z [CORTEX]::Debug: 20240622 11:40:41.152000 UTC 13412 DEBUG [LoadModel] Multi Modal Mode Enabled - llama_server_context.cc:152
20240622 11:40:43.222000 UTC 13412 DEBUG [Initialize] Available slots: - llama_server_context.cc:208
20240622 11:40:43.222000 UTC 13412 DEBUG [Initialize] -> Slot 0 - max context: 2048 - llama_server_context.cc:216
20240622 11:40:43.222000 UTC 13412 INFO Started background task here! - llama_server_context.cc:235
20240622 11:40:43.222000 UTC 13412 INFO Warm-up model: llava-7b - llama_engine.cc:794
20240622 11:40:43.222000 UTC 3736 DEBUG [LaunchSlotWithData] slot 0 is processing [task id: 0] - llama_server_context.cc:602
20240622 11:40:43.222000 UTC 3736 INFO kv cache rm [p0, end) - id_slot: 0, task_id: 0, p0: 0 - llama_server_context.cc:1522
20240622 11:40:43.323000 UTC 3736 DEBUG [PrintTimings] PrintTimings: prompt eval time = 52.321ms / 2 tokens (26.1605 ms per token, 38.2255690832 tokens per second) - llama_client_slot.cc:79
20240622 11:40:43.323000 UTC 3736 DEBUG [PrintTimings] PrintTimings: eval time = 53.642 ms / 4 runs (13.4105 ms per token, 74.5684351814 tokens per second)
20240622 11:40:43.323000 UTC 3736 DEBUG [PrintTimings] PrintTimings: total time = 105.963 ms - llama_client_slot.cc:92
20240622 11:40:43.323000 UTC 3736 INFO slot released: id_slot: 0, id_task: 0, n_ctx: 2048, n_past: 6, n_system_tokens: 0, n_cache_tokens: 0, truncated: 0 - llama_server_context.cc:1282
20240622 11:40:43.323000 UTC 3736 DEBUG [UpdateSlots] all slots are idle and system prompt is empty, clear the KV cache - llama_server_context.cc:1228
20240622 11:40:43.323000 UTC 3736 DEBUG [KvCacheClear] Clear the entire KV cache - llama_server_context.cc:241
20240622 11:40:43.323000 UTC 13412 INFO {"content":"! This is my first","generation_settings":{"frequency_penalty":0.0,"grammar":"","ignore_eos":false,"logit_bias":[],"min_p":0.05000000074505806,"mirostat":0,"mirostat_eta":0.10000000149011612,"mirostat_tau":5.0,"model":"C:\Users\kallemst\jan\models\llava-7b\llava-v1.6-mistral-7b.Q4_K_M.gguf","n_ctx":2048,"n_keep":0,"n_predict":2,"n_probs":0,"penalize_nl":false,"penalty_prompt_tokens":[],"presence_penalty":0.0,"repeat_last_n":64,"repeat_penalty":1.0,"seed":4294967295,"stop":[],"stream":false,"temperature":0.800000011920929,"tfs_z":1.0,"top_k":40,"top_p":0.949999988079071,"typical_p":1.0,"use_penalty_prompt_tokens":false},"model":"C:\Users\kallemst\jan\models\llava-7b\llava-v1.6-mistral-7b.Q4_K_M.gguf","prompt":"Hello","slot_id":0,"stop":true,"stopped_eos":false,"stopped_limit":true,"stopped_word":false,"stopping_word":"","timings":{"predicted_ms":53.642,"predicted_n":4,"predicted_per_second":74.56843518138771,"predicted_per_token_ms":13.4105,"prompt_ms":52.321,"prompt_n":2,"prompt_per_second":38.225569083159726,"prompt_per_token_ms":26.1605},"tokens_cached":6,"tokens_evaluated":2,"tokens_predicted":4,"truncated":false} - llama_engine.cc:802
20240622 11:40:43.323000 UTC 13412 INFO Model loaded successfully: llava-7b - llama_engine.cc:203
20240622 11:40:43.331000 UTC 3512 INFO Model status responded - llama_engine.cc:246
20240622 11:40:43.343000 UTC 8412 INFO Request 1, model llava-7b: Generating reponse for inference request - llama_engine.cc:451
20240622 11:40:43.343000 UTC 8412 INFO Request 1: Stop words:null
20240622 11:40:43.343000 UTC 8412 INFO Request 1: Base64 image detected - llama_engine.cc:531
20240622 11:40:43.355000 UTC 8412 INFO Request 1:
Jan version
0.5.1
In which operating systems have you tested?
Environment details
No response