Checklist

The issue is caused by an extension, but I believe it is caused by a bug in the webui
The issue exists in the current version of the webui
The issue has not been reported before recently
The issue has been reported before but has not been fixed yet
What happened?
When generating images, I sometimes encounter a "Native API returns: -997" error, which is an IPEX code meaning a command failed to enqueue/execute. Occasionally generation succeeds, but most of the time it fails. I tried SD 1.5 models as well, with no luck. This never happened before: I reset my PC to get a fresh start, and after recloning the repo I now have this problem.
Steps to reproduce the problem
1. Clone the repository.
2. Edit the webui-user command args to include --use-ipex.
3. Execute webui-user and allow the installation to finish.
4. Generate an image.
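For reference, step 2 means setting COMMANDLINE_ARGS in webui-user.bat. The file below is a sketch based on the default template shipped with the repo (the empty PYTHON/GIT/VENV_DIR lines are assumptions), with the args I actually use:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--use-ipex --opt-split-attention --medvram-sdxl

call webui.bat
```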
Console logs

venv "R:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --use-ipex --opt-split-attention --medvram-sdxl
no module 'xformers'. Processing without...
No SDP backend available, likely because you are running in pytorch versions < 2.0. In fact, you are using PyTorch 2.0.0a0+gite9ebda2. You might want to consider upgrading.
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
==============================================================================
You are running torch 2.0.0a0+gite9ebda2.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.
Use --skip-version-check commandline argument to disable this check.
==============================================================================
Loading weights [b8d425c720] from R:\stable-diffusion-webui\models\Stable-diffusion\AOM3B4_orangemixs.safetensors
Creating model from config: R:\stable-diffusion-webui\configs\v1-inference.yaml
R:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1142: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 7.6s (prepare environment: 0.4s, import torch: 2.2s, import gradio: 0.6s, setup paths: 0.7s, initialize shared: 1.1s, other imports: 0.3s, load scripts: 1.2s, create ui: 0.8s, gradio launch: 0.4s).
Loading VAE weights specified in settings: R:\stable-diffusion-webui\models\VAE\orangemix.vae.pt
Applying attention optimization: Doggettx... done.
Model loaded in 40.9s (load weights from disk: 0.4s, create model: 0.8s, apply weights to model: 2.1s, load VAE: 33.5s, calculate empty prompt: 3.8s).
  0%|          | 0/31 [00:08<?, ?it/s]
*** Error completing request
*** Arguments: ('task(knvvwltwmsj4455)', <gradio.routes.Request object at 0x000001C39F2CE9E0>, 0, '1girl, bangs, bed, bed sheet, blush, breasts, cleavage, earrings, green eyes, indoors, jewelry, large breasts, long hair, looking at viewer, navel, on bed, shirt, shorts, solo, thighs, window', '(worst quality, low quality:1.4), (bad-hands-5:1.5), easynegative', [], <PIL.Image.Image image mode=RGBA size=768x1344 at 0x1C39351E080>, None, None, None, None, None, None, 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0.0, 910, 512, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', 'upload', None, 0, False, 1, 0.5, 4, 0, 0.5, 2, 40, 'DPM++ 2M', 'Align Your Steps', False, '', 0.8, -1, False, -1, 0, 0, 0, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
  File "R:\stable-diffusion-webui\modules\call_queue.py", line 74, in f
    res = list(func(*args, **kwargs))
  File "R:\stable-diffusion-webui\modules\call_queue.py", line 53, in f
    res = func(*args, **kwargs)
  File "R:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "R:\stable-diffusion-webui\modules\img2img.py", line 242, in img2img
    processed = process_images(p)
  File "R:\stable-diffusion-webui\modules\processing.py", line 847, in process_images
    res = process_images_inner(p)
  File "R:\stable-diffusion-webui\modules\processing.py", line 988, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "R:\stable-diffusion-webui\modules\processing.py", line 1774, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "R:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 184, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "R:\stable-diffusion-webui\modules\sd_samplers_common.py", line 272, in launch_sampling
    return func()
  File "R:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 184, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "R:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "R:\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 249, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "R:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "R:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "R:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 22, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "R:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 34, in __call__
    return self.__sub_func(self.__orig_func, *args, **kwargs)
  File "R:\stable-diffusion-webui\modules\sd_hijack_unet.py", line 50, in apply_model
    result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
  File "R:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 22, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "R:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 36, in __call__
    return self.__orig_func(*args, **kwargs)
  File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "R:\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
    return original_forward(self, x, timesteps, context, *args, **kwargs)
  File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
    h = module(h, emb, context)
  File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "R:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 22, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "R:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 34, in __call__
    return self.__sub_func(self.__orig_func, *args, **kwargs)
  File "R:\stable-diffusion-webui\modules\sd_hijack_unet.py", line 96, in spatial_transformer_forward
    x = block(x, context=context[i])
  File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 123, in checkpoint
    return func(*inputs)
  File "R:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "R:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "R:\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 278, in split_cross_attention_forward
    r2 = rearrange(r1, '(b h) n d -> b n (h d)', h=h)
  File "R:\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 487, in rearrange
    return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
  File "R:\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 410, in reduce
    return _apply_recipe(recipe, tensor, reduction_type=reduction)
  File "R:\stable-diffusion-webui\venv\lib\site-packages\einops\einops.py", line 239, in _apply_recipe
    return backend.reshape(tensor, final_shapes)
  File "R:\stable-diffusion-webui\venv\lib\site-packages\einops\_backends.py", line 84, in reshape
    return x.reshape(shape)
RuntimeError: Native API failed. Native API returns: -997 (Command failed to enqueue/execute)
-997 (Command failed to enqueue/execute)
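Possibly useful context: the line that raises is just a reshape. A CPU-only sketch of the same einops pattern (shapes below are made up for illustration, not taken from the log) works fine, which suggests the failure is in enqueueing the operation on the XPU rather than in the tensor math itself:

```python
import numpy as np

# Same data movement as einops' '(b h) n d -> b n (h d)':
# split the fused batch*heads axis, move heads next to the feature dim, merge.
b, h, n, d = 2, 8, 64, 40               # illustrative shapes, not from the log
r1 = np.random.randn(b * h, n, d)       # attention output, heads fused into batch
r2 = r1.reshape(b, h, n, d).transpose(0, 2, 1, 3).reshape(b, n, h * d)
assert r2.shape == (b, n, h * d)
```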
Additional information
My GPU is Intel Arc A750
My Intel Arc driver is version 32.0.101.6077
My command args are: --use-ipex --opt-split-attention --medvram-sdxl
iGPU is DISABLED in BIOS; only the Intel Arc is enabled. Resizable BAR is enabled as well.
I have the oneAPI Base Toolkit installed on my system. I don't know if that's relevant, but I'm including it here.
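A sanity check I can run in the webui venv (a hedged sketch: it assumes intel_extension_for_pytorch and an XPU-enabled torch build, which is what --use-ipex is supposed to install; on a machine without that stack it just reports what's missing):

```python
# Minimal check of whether PyTorch can see the Arc GPU through IPEX.
def xpu_status() -> str:
    try:
        import torch
        import intel_extension_for_pytorch  # noqa: F401  (assumed installed by --use-ipex)
        if hasattr(torch, "xpu") and torch.xpu.is_available():
            return "xpu available: " + torch.xpu.get_device_name(0)
        return "torch imported, but no usable XPU device"
    except ImportError as exc:
        return "backend missing: " + str(exc)

print(xpu_status())
```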
What should have happened?
WebUI should generate the image while using IPEX.
What browsers do you use to access the UI?
Microsoft Edge
Sysinfo
sysinfo-2024-09-20-08-33.json
Additional information
My GPU is Intel Arc A750
My Intel Arc driver is version 32.0.101.6077
My command args are: --use-ipex --opt-split-attention --medvram-sdxl
iGPU is DISABLED in BIOS, only the Intel Arc is enabled. Resizable BAR enabled as well.
I have the OneAPI base toolkit installed on my system, but I don't know if that's relevant, so I included it here.
The text was updated successfully, but these errors were encountered: