```
src/cli.ts:529:26 - error TS2869: Right operand of ?? is unreachable because the left operand is never nullish.

529 console.info("[*] IMPORTANT: Error getting AI models (check if using correct AI type, if using Ollama - check if running). Error message: " + e.message ?? e.toString());
                             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
It looks like there was an issue compiling the TypeScript code to JavaScript at line 529. I just updated the TypeScript compiler to 5.6 and got the same error.
Can you edit src/cli.ts at line 529 to this:
console.info("[*] IMPORTANT: Error getting AI models (check if using correct AI type, if using Ollama - check if running). Error message: "+e.message);
It also appears that you are using version 0.2.0, which has not been released yet. That version is not ready for use, and there are some bugs with the video rendering that need to be fixed. Version 0.1.0 is stable but does not have the UI feature.
I'm trying to run the server, but when I start it I get the following error:
```
src/cli.ts:529:26 - error TS2869: Right operand of ?? is unreachable because the left operand is never nullish.

529 console.info("[*] IMPORTANT: Error getting AI models (check if using correct AI type, if using Ollama - check if running). Error message: " + e.message ?? e.toString());
                             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Found 1 error in src/cli.ts:529
```
I have Ollama running:
```
ollama serve
2024/10/04 18:38:11 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-10-04T18:38:11.861Z level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-10-04T18:38:11.861Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-10-04T18:38:11.863Z level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11434 (version 0.3.12)"
time=2024-10-04T18:38:11.863Z level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama1107556560/runners
time=2024-10-04T18:39:20.212Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12 rocm_v60102]"
time=2024-10-04T18:39:20.230Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-10-04T18:39:20.232Z level=WARN source=gpu.go:224 msg="CPU does not have minimum vector extensions, GPU inference disabled" required=avx detected="no vector extensions"
time=2024-10-04T18:39:20.232Z level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant="no vector extensions" compute="" driver=0.0 name="" total="1.9 GiB" available="1.5 GiB"
```
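In case it helps with debugging, here is a minimal sketch (not from this repo) of how a client could ask the local Ollama server which models are installed, assuming the default address from the log above (127.0.0.1:11434) and Ollama's `GET /api/tags` endpoint:

```ts
// Minimal sketch: list models installed on a local Ollama server.
// Assumes Node 18+ (built-in fetch) and the default Ollama address
// shown in the log above. The function name is hypothetical.
async function listOllamaModels(baseUrl = "http://127.0.0.1:11434"): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) {
    throw new Error(`Ollama responded with HTTP ${res.status}`);
  }
  const data = (await res.json()) as { models: Array<{ name: string }> };
  return data.models.map((m) => m.name);
}

listOllamaModels()
  .then((names) => console.log("Installed models:", names))
  .catch((e) => console.error("Could not reach Ollama:", e instanceof Error ? e.message : String(e)));
```

Note the `total blobs: 0` line in the log, which suggests no models have been pulled yet; the list above would be empty until something like `ollama pull <model>` has been run.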