
Stable diffusion mlx #474

Open · wants to merge 11 commits into base: main

Conversation

@pranav4501 commented Nov 20, 2024

Sharded Stable Diffusion inference for MLX
#159

Changes:

  • Sharded Stable Diffusion 2.1 Base for MLX
  • Handled diffusion steps by looping the whole model
  • Added back inference_state
  • Modified gRPC and proto to support inference
  • New endpoint for image generation (an illustrative request appears after this list)
  • Streaming progress for image generation
  • Handling multiple submodels in a single model
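
A hypothetical request against the new endpoint. The handler name seen later in the logs (`handle_post_image_generations`) suggests an OpenAI-style `POST /v1/images/generations`; the path, host/port, and JSON fields here are assumptions, not confirmed by this PR:

```python
# Hypothetical client call; endpoint path, host/port and JSON fields are
# assumptions inferred from the handler name handle_post_image_generations.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:8000/v1/images/generations",  # host/port assumed
    data=json.dumps({
        "model": "stable-diffusion-2-1-base",
        "prompt": "A lion in a jungle",
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read()[:200])  # response schema is also an assumption
```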

Sharding process:

  1. Stable Diffusion contains three models: CLIP (text encoder), UNet (denoising U-Net), and VAE (image encoder and decoder).
  2. The Stable Diffusion Hugging Face repo contains a model_index.json and a folder for each model with its config. I combined all the model configs and loaded them into the model.
  3. The shard is then divided into three sub-shards, one per model (clip, unet, vae). For example, the whole model is 37 layers, of which 22, 8, and 7 are the layer counts for each model in that order; so a shard of (0, 27) is made of shard(0, 22, 'clip'), shard(0, 5, 'unet'), shard(0, 0, 'vae') (see the first sketch after this list).
  4. Each model is manually sharded into individual layers.
  5. The inference pipeline is then clip.encode(text) -> unet.denoise_latent() for 50 steps -> vae.decode_image().
  6. This is implemented as clip.encode(text) if step == 1 -> unet.denoise_latent() for a step k -> vae.decode_image() if step == 50. This pipeline runs for 50 steps while maintaining the intermediate results and step count in the inference_state (see the second sketch after this list).
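
A minimal, runnable sketch of the mapping in step 3. Names like `SubShard` and the boundary conventions are illustrative, not the PR's actual code; the per-model layer counts follow the 22/8/7 split above:

```python
from dataclasses import dataclass

# Illustrative per-model layer counts: 22 + 8 + 7 = 37 total layers.
MODEL_LAYERS = [("clip", 22), ("unet", 8), ("vae", 7)]

@dataclass
class SubShard:
    model: str  # "clip", "unet" or "vae"
    start: int  # first layer within that submodel (inclusive)
    end: int    # last layer within that submodel (inclusive)

def split_shard(start_layer: int, end_layer: int) -> list:
    """Split a global [start_layer, end_layer] range into per-model shards."""
    shards, offset = [], 0
    for name, n_layers in MODEL_LAYERS:
        lo = max(start_layer, offset)
        hi = min(end_layer, offset + n_layers - 1)
        if lo <= hi:  # this submodel overlaps the global range
            shards.append(SubShard(name, lo - offset, hi - offset))
        offset += n_layers
    return shards

print(split_shard(0, 27))
# [SubShard(model='clip', start=0, end=21), SubShard(model='unet', start=0, end=5)]
```

And a toy sketch of the stepwise loop in step 6, with stub callables standing in for the real CLIP/UNet/VAE (the stub behavior is invented for illustration):

```python
def forward(models, prompt, state, total_steps=50):
    # One diffusion step per call; step count and latent ride in `state`.
    step = state.get("step", 1)
    if step == 1 and "clip" in models:
        state["cond"] = models["clip"](prompt)   # encode the prompt once
    if "unet" in models:
        state["latent"] = models["unet"](state.get("latent"), state["cond"], step)
    output = state["latent"]
    if step == total_steps and "vae" in models:
        output = models["vae"](state["latent"])  # decode only at the end
    state["step"] = step + 1
    return output, state

# Stub models; a real shard would hold only its own submodels.
models = {
    "clip": lambda p: f"cond({p})",
    "unet": lambda lat, cond, s: f"latent@{s}",
    "vae":  lambda lat: f"image({lat})",
}
state = {}
for _ in range(50):
    out, state = forward(models, "a lion in a jungle", state)
print(out)  # image(latent@50)
```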

@AlexCheema
Contributor

Just so I understand what's going on, how is the model split up between devices? Let's say I have 3 devices with different capabilities, how does that work?

@pranav4501
Author

There are no changes to that part; it's still the partition algorithm that splits the shards across the devices.

@AlexCheema
Contributor

> There are no changes to that part; it's still the partition algorithm that splits the shards across the devices.

I see. The difference here is that the layers are non-uniform. That means they won't necessarily get split proportionally to the memory used, right?

@pranav4501
Author

Yeah, the layers are non-uniform, so the memory split isn't exactly proportional to the number of layers. Can we split using the number of params?

@AlexCheema
Contributor

> Yeah, the layers are non-uniform, so the memory split isn't exactly proportional to the number of layers. Can we split using the number of params?

This is probably fine as long as the layers aren't wildly different in size. Do you know roughly how different in size they are?

@pranav4501
Author

The UNet does have a couple of larger layers because of the upsampled dims, and the CLIP text encoder has comparatively smaller layers since it is made of transformer blocks and can be split easily, similar to LLMs. We can combine two CLIP layers and split the UNet further to make it more uniform.
CLIP (1.36 GB -> 22 layers: uniformly split), UNet (3.46 GB -> 10 layers: non-uniform), VAE (346 MB -> 10 layers: non-uniform)

@blindcrone
Contributor

I think at some point it would make sense to allow more granular sharding of models than just transformer blocks anyway, and this could involve updating to a memory-footprint heuristic based on dtypes and parameters rather than assuming uniform layer blocks
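
A minimal sketch of such a heuristic (illustrative, not exo's partitioning code): weight each layer by parameter count times dtype width, then hand out contiguous layer ranges so each device's share of total bytes roughly matches its capacity:

```python
# Bytes per parameter for common dtypes; illustrative subset.
DTYPE_BYTES = {"float16": 2, "bfloat16": 2, "float32": 4}

def layer_bytes(n_params: int, dtype: str = "float16") -> int:
    """Memory footprint of one layer's weights."""
    return n_params * DTYPE_BYTES[dtype]

def partition_by_memory(layer_sizes, capacities):
    """Greedily assign contiguous layers so each device's byte share
    roughly tracks its capacity. Returns layers-per-device counts."""
    total = sum(layer_sizes)
    targets = [total * c / sum(capacities) for c in capacities]
    counts, i = [], 0
    for target in targets[:-1]:
        acc, start = 0, i
        while i < len(layer_sizes) and acc + layer_sizes[i] <= target:
            acc += layer_sizes[i]
            i += 1
        counts.append(i - start)
    counts.append(len(layer_sizes) - i)  # last device takes the remainder
    return counts

print(partition_by_memory([4, 4, 1, 1, 1, 1, 4, 4], [2, 1, 1]))  # [4, 2, 2]
```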

@AlexCheema
Contributor

Was running on this branch for a while doing image generation requests and got the following errors:



model: stable-diffusion-2-1-base, prompt: A lion in a jungle, stream: False
shard: Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=0, n_layers=37)
  2%|#9        | 1/50 [00:00<?, ?it/s]
  ... (per-step tqdm progress, each line printed twice, continues through 49/50) ...
100%|##########| 50/50 [00:00<?, ?it/s]

Error connecting peer [email protected]:60559: 
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/tasks.py",
line 520, in wait_for
    return await fut
           ^^^^^^^^^
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 42, in connect
    await self.channel.channel_ready()
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_channel.py", line 481, in channel_ready
    await self.wait_for_state_change(state)
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_channel.py", line 474, in wait_for_state_change
    assert await self._channel.watch_connectivity_state(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "src/python/grpcio/grpc/_cython/_cygrpc/aio/channel.pyx.pxi", line 97, in watch_connectivity_state
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 350, in connect_with_timeout
    await asyncio.wait_for(peer.connect(), timeout)
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/tasks.py",
line 519, in wait_for
    async with timeouts.timeout(timeout):
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File 
"/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/timeouts.py", 
line 115, in __aexit__
    raise TimeoutError from exc_val
TimeoutError
Error connecting peer [email protected]:53636: 
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/tasks.py",
line 520, in wait_for
    return await fut
           ^^^^^^^^^
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 42, in connect
    await self.channel.channel_ready()
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_channel.py", line 481, in channel_ready
    await self.wait_for_state_change(state)
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_channel.py", line 474, in wait_for_state_change
    assert await self._channel.watch_connectivity_state(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "src/python/grpcio/grpc/_cython/_cygrpc/aio/channel.pyx.pxi", line 97, in watch_connectivity_state
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 350, in connect_with_timeout
    await asyncio.wait_for(peer.connect(), timeout)
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/tasks.py",
line 519, in wait_for
    async with timeouts.timeout(timeout):
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File 
"/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/timeouts.py", 
line 115, in __aexit__
    raise TimeoutError from exc_val
TimeoutError
Removing download task for Shard(model_id='llama-3.2-3b', start_layer=0, end_layer=27, n_layers=28): True
model: stable-diffusion-2-1-base, prompt: apples and bananas, stream: False
shard: Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=0, n_layers=37)
Removing download task for Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=23, n_layers=37): True
  2%|#9        | 1/50 [00:00<?, ?it/s]

Task exception was never retrieved
future: <Task finished name='Task-4973691' coro=<StandardNode.forward_to_next_shard() done, defined at 
/Users/gary/exo/exo/orchestration/standard_node.py:275> exception=<AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36807168 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"CLIENT: Sent message larger than max (36807168
vs. 33554432)", grpc_status:8, created_time:"2024-11-25T10:32:48.153676+04:00"}"
>>
Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 304, in forward_to_next_shard
    await target_peer.send_tensor(next_shard, tensor_or_prompt, inference_state, request_id=request_id)
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 101, in send_tensor
    response = await self.stub.SendTensor(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_call.py", line 327, in __await__
    raise _create_rpc_error(
grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36807168 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"CLIENT: Sent message larger than max (36807168
vs. 33554432)", grpc_status:8, created_time:"2024-11-25T10:32:48.153676+04:00"}"
>
Task exception was never retrieved
future: <Task finished name='Task-4973698' coro=<ChatGPTAPI.handle_post_image_generations.<locals>.stream_image() done, 
defined at /Users/gary/exo/exo/api/chatgpt_api.py:412> exception=TypeError('Cannot handle this data type: (1, 1, 32, 320), 
<f4')>
Traceback (most recent call last):
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3277, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
                    ~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 32, 320), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/api/chatgpt_api.py", line 417, in stream_image
    im = Image.fromarray(np.array(result))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3281, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 32, 320), <f4
model: stable-diffusion-2-1-base, prompt: medieval castle, stream: False
shard: Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=0, n_layers=37)
  2%|#9        | 1/50 [00:00<?, ?it/s]

Task exception was never retrieved
future: <Task finished name='Task-4974620' coro=<StandardNode.forward_to_next_shard() done, defined at 
/Users/gary/exo/exo/orchestration/standard_node.py:275> exception=<AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2024-11-25T10:34:26.402038+04:00", 
grpc_status:8, grpc_message:"CLIENT: Sent message larger than max (36798940 vs. 33554432)"}"
>>
Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 304, in forward_to_next_shard
    await target_peer.send_tensor(next_shard, tensor_or_prompt, inference_state, request_id=request_id)
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 101, in send_tensor
    response = await self.stub.SendTensor(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_call.py", line 327, in __await__
    raise _create_rpc_error(
grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2024-11-25T10:34:26.402038+04:00", 
grpc_status:8, grpc_message:"CLIENT: Sent message larger than max (36798940 vs. 33554432)"}"
>
model: stable-diffusion-2-1-base, prompt: flying cars, stream: False
shard: Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=0, n_layers=37)
  2%|#9        | 1/50 [00:00<?, ?it/s]

Task exception was never retrieved
future: <Task finished name='Task-4974627' coro=<ChatGPTAPI.handle_post_image_generations.<locals>.stream_image() done, 
defined at /Users/gary/exo/exo/api/chatgpt_api.py:412> exception=TypeError('Cannot handle this data type: (1, 1, 32, 320), 
<f4')>
Traceback (most recent call last):
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3277, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
                    ~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 32, 320), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/api/chatgpt_api.py", line 417, in stream_image
    im = Image.fromarray(np.array(result))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3281, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 32, 320), <f4
Task exception was never retrieved
future: <Task finished name='Task-4975169' coro=<StandardNode.forward_to_next_shard() done, defined at 
/Users/gary/exo/exo/orchestration/standard_node.py:275> exception=<AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2024-11-25T10:34:45.790298+04:00", 
grpc_status:8, grpc_message:"CLIENT: Sent message larger than max (36798940 vs. 33554432)"}"
>>
Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 304, in forward_to_next_shard
    await target_peer.send_tensor(next_shard, tensor_or_prompt, inference_state, request_id=request_id)
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 101, in send_tensor
    response = await self.stub.SendTensor(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_call.py", line 327, in __await__
    raise _create_rpc_error(
grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2024-11-25T10:34:45.790298+04:00", 
grpc_status:8, grpc_message:"CLIENT: Sent message larger than max (36798940 vs. 33554432)"}"
>
Task exception was never retrieved
future: <Task finished name='Task-4975175' coro=<ChatGPTAPI.handle_post_image_generations.<locals>.stream_image() done, 
defined at /Users/gary/exo/exo/api/chatgpt_api.py:412> exception=TypeError('Cannot handle this data type: (1, 1, 32, 320), 
<f4')>
Traceback (most recent call last):
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3277, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
                    ~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 32, 320), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/api/chatgpt_api.py", line 417, in stream_image
    im = Image.fromarray(np.array(result))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3281, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 32, 320), <f4
Task exception was never retrieved
future: <Task finished name='Task-4975177' coro=<ChatGPTAPI.handle_post_image_generations.<locals>.stream_image() done, 
defined at /Users/gary/exo/exo/api/chatgpt_api.py:412> exception=TypeError('Cannot handle this data type: (1, 1, 32, 320), 
<f4')>
Traceback (most recent call last):
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3277, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
                    ~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 32, 320), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/api/chatgpt_api.py", line 417, in stream_image
    im = Image.fromarray(np.array(result))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3281, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 32, 320), <f4
model: stable-diffusion-2-1-base, prompt: flying cars, stream: False
shard: Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=0, n_layers=37)
  2%|#9        | 1/50 [00:00<?, ?it/s]

Task exception was never retrieved
future: <Task finished name='Task-4978154' coro=<StandardNode.forward_to_next_shard() done, defined at 
/Users/gary/exo/exo/orchestration/standard_node.py:275> exception=<AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2024-11-25T10:43:20.388568+04:00", 
grpc_status:8, grpc_message:"CLIENT: Sent message larger than max (36798940 vs. 33554432)"}"
>>
Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 304, in forward_to_next_shard
    await target_peer.send_tensor(next_shard, tensor_or_prompt, inference_state, request_id=request_id)
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 101, in send_tensor
    response = await self.stub.SendTensor(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_call.py", line 327, in __await__
    raise _create_rpc_error(
grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2024-11-25T10:43:20.388568+04:00", 
grpc_status:8, grpc_message:"CLIENT: Sent message larger than max (36798940 vs. 33554432)"}"
>
Task exception was never retrieved
future: <Task finished name='Task-4978161' coro=<ChatGPTAPI.handle_post_image_generations.<locals>.stream_image() done, 
defined at /Users/gary/exo/exo/api/chatgpt_api.py:412> exception=TypeError('Cannot handle this data type: (1, 1, 32, 320), 
<f4')>
Traceback (most recent call last):
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3277, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
                    ~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 32, 320), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/api/chatgpt_api.py", line 417, in stream_image
    im = Image.fromarray(np.array(result))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3281, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 32, 320), <f4
model: stable-diffusion-2-1-base, prompt: flying cars, stream: False
shard: Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=0, n_layers=37)
  2%|#9        | 1/50 [00:00<?, ?it/s]

Task exception was never retrieved
future: <Task finished name='Task-4982529' coro=<StandardNode.forward_to_next_shard() done, defined at 
/Users/gary/exo/exo/orchestration/standard_node.py:275> exception=<AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"CLIENT: Sent message larger than max (36798940
vs. 33554432)", grpc_status:8, created_time:"2024-11-25T10:56:40.841611+04:00"}"
>>
Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 304, in forward_to_next_shard
    await target_peer.send_tensor(next_shard, tensor_or_prompt, inference_state, request_id=request_id)
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 101, in send_tensor
    response = await self.stub.SendTensor(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_call.py", line 327, in __await__
    raise _create_rpc_error(
grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"CLIENT: Sent message larger than max (36798940
vs. 33554432)", grpc_status:8, created_time:"2024-11-25T10:56:40.841611+04:00"}"
>
Task exception was never retrieved
future: <Task finished name='Task-4982536' coro=<ChatGPTAPI.handle_post_image_generations.<locals>.stream_image() done, 
defined at /Users/gary/exo/exo/api/chatgpt_api.py:412> exception=TypeError('Cannot handle this data type: (1, 1, 32, 320), 
<f4')>
Traceback (most recent call last):
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3277, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
                    ~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 32, 320), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/api/chatgpt_api.py", line 417, in stream_image
    im = Image.fromarray(np.array(result))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3281, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 32, 320), <f4
Removing download task for Shard(model_id='llama-3.2-3b', start_layer=0, end_layer=17, n_layers=28): True
Error processing tensor for shard Shard(model_id='llama-3.2-3b', start_layer=0, end_layer=17, n_layers=28): too many values 
to unpack (expected 3)
Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 267, in _process_tensor
    result, inference_state = await self.inference_engine.infer_tensor(request_id, shard, tensor, inference_state)
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/exo/inference/mlx/sharded_inference_engine.py", line 58, in infer_tensor
    output_data, inference_state = await asyncio.get_running_loop().run_in_executor(self.executor, self.model, 
mx.array(input_data), request_id, inference_state)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File 
"/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/thread
.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/exo/inference/mlx/stateful_model.py", line 41, in __call__
    y = self.model(x, cache=cache)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/exo/inference/mlx/sharded_utils.py", line 129, in __call__
    y = super().__call__(x[None] if self.shard.is_first_layer() else x, *args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/exo/inference/mlx/models/llama.py", line 86, in __call__
    out = self.model(inputs, cache)
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/exo/inference/mlx/models/llama.py", line 64, in __call__
    h = layer(h, mask, cache=c)
        ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/mlx_lm/models/llama.py", line 239, in __call__
    r = self.self_attn(self.input_layernorm(x), mask, cache)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/mlx_lm/models/llama.py", line 176, in __call__
    B, L, D = x.shape
    ^^^^^^^
ValueError: too many values to unpack (expected 3)
model: stable-diffusion-2-1-base, prompt: flying cars, stream: False
shard: Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=0, n_layers=37)
  2%|#9        | 1/50 [00:00<?, ?it/s]

Task exception was never retrieved
future: <Task finished name='Task-4994430' coro=<StandardNode.forward_to_next_shard() done, defined at 
/Users/gary/exo/exo/orchestration/standard_node.py:275> exception=<AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"CLIENT: Sent message larger than max (36798940
vs. 33554432)", grpc_status:8, created_time:"2024-11-25T11:35:12.524432+04:00"}"
>>
Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 304, in forward_to_next_shard
    await target_peer.send_tensor(next_shard, tensor_or_prompt, inference_state, request_id=request_id)
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 101, in send_tensor
    response = await self.stub.SendTensor(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_call.py", line 327, in __await__
    raise _create_rpc_error(
grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"CLIENT: Sent message larger than max (36798940
vs. 33554432)", grpc_status:8, created_time:"2024-11-25T11:35:12.524432+04:00"}"
>
Task exception was never retrieved
future: <Task finished name='Task-4994437' coro=<ChatGPTAPI.handle_post_image_generations.<locals>.stream_image() done, 
defined at /Users/gary/exo/exo/api/chatgpt_api.py:412> exception=TypeError('Cannot handle this data type: (1, 1, 32, 320), 
<f4')>
Traceback (most recent call last):
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3277, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
                    ~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 32, 320), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/api/chatgpt_api.py", line 417, in stream_image
    im = Image.fromarray(np.array(result))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3281, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 32, 320), <f4
model: stable-diffusion-2-1-base, prompt: flying cars, stream: False
shard: Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=0, n_layers=37)
  2%|#9        | 1/50 [00:00<?, ?it/s]

Task exception was never retrieved
future: <Task finished name='Task-4997809' coro=<StandardNode.forward_to_next_shard() done, defined at 
/Users/gary/exo/exo/orchestration/standard_node.py:275> exception=<AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"CLIENT: Sent message larger than max (36798940
vs. 33554432)", grpc_status:8, created_time:"2024-11-25T11:45:14.398384+04:00"}"
>>
Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 304, in forward_to_next_shard
    await target_peer.send_tensor(next_shard, tensor_or_prompt, inference_state, request_id=request_id)
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 101, in send_tensor
    response = await self.stub.SendTensor(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_call.py", line 327, in __await__
    raise _create_rpc_error(
grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"CLIENT: Sent message larger than max (36798940
vs. 33554432)", grpc_status:8, created_time:"2024-11-25T11:45:14.398384+04:00"}"
>
Task exception was never retrieved
future: <Task finished name='Task-4997816' coro=<ChatGPTAPI.handle_post_image_generations.<locals>.stream_image() done, 
defined at /Users/gary/exo/exo/api/chatgpt_api.py:412> exception=TypeError('Cannot handle this data type: (1, 1, 32, 320), 
<f4')>
Traceback (most recent call last):
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3277, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
                    ~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 32, 320), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/api/chatgpt_api.py", line 417, in stream_image
    im = Image.fromarray(np.array(result))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3281, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 32, 320), <f4
model: stable-diffusion-2-1-base, prompt: flying cars, stream: False
shard: Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=0, n_layers=37)
  2%|#9        | 1/50 [00:00<?, ?it/s]

Task exception was never retrieved
future: <Task finished name='Task-5001494' coro=<StandardNode.forward_to_next_shard() done, defined at 
/Users/gary/exo/exo/orchestration/standard_node.py:275> exception=<AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"CLIENT: Sent message larger than max (36798940
vs. 33554432)", grpc_status:8, created_time:"2024-11-25T11:56:20.740718+04:00"}"
>>
Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 304, in forward_to_next_shard
    await target_peer.send_tensor(next_shard, tensor_or_prompt, inference_state, request_id=request_id)
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 101, in send_tensor
    response = await self.stub.SendTensor(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_call.py", line 327, in __await__
    raise _create_rpc_error(
grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"CLIENT: Sent message larger than max (36798940
vs. 33554432)", grpc_status:8, created_time:"2024-11-25T11:56:20.740718+04:00"}"
>
Task exception was never retrieved
future: <Task finished name='Task-5001501' coro=<ChatGPTAPI.handle_post_image_generations.<locals>.stream_image() done, 
defined at /Users/gary/exo/exo/api/chatgpt_api.py:412> exception=TypeError('Cannot handle this data type: (1, 1, 32, 320), 
<f4')>
Traceback (most recent call last):
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3277, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
                    ~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 32, 320), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/api/chatgpt_api.py", line 417, in stream_image
    im = Image.fromarray(np.array(result))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3281, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 32, 320), <f4
model: stable-diffusion-2-1-base, prompt: flying cars, stream: False
shard: Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=0, n_layers=37)
  2%|#9        | 1/50 [00:00<?, ?it/s]

Task exception was never retrieved
future: <Task finished name='Task-5006899' coro=<StandardNode.forward_to_next_shard() done, defined at 
/Users/gary/exo/exo/orchestration/standard_node.py:275> exception=<AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2024-11-25T12:13:15.940467+04:00", 
grpc_status:8, grpc_message:"CLIENT: Sent message larger than max (36798940 vs. 33554432)"}"
>>
Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 304, in forward_to_next_shard
    await target_peer.send_tensor(next_shard, tensor_or_prompt, inference_state, request_id=request_id)
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 101, in send_tensor
    response = await self.stub.SendTensor(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_call.py", line 327, in __await__
    raise _create_rpc_error(
grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36798940 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {created_time:"2024-11-25T12:13:15.940467+04:00", 
grpc_status:8, grpc_message:"CLIENT: Sent message larger than max (36798940 vs. 33554432)"}"
>
Task exception was never retrieved
future: <Task finished name='Task-5006906' coro=<ChatGPTAPI.handle_post_image_generations.<locals>.stream_image() done, 
defined at /Users/gary/exo/exo/api/chatgpt_api.py:412> exception=TypeError('Cannot handle this data type: (1, 1, 32, 320), 
<f4')>
Traceback (most recent call last):
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3277, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
                    ~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 32, 320), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/api/chatgpt_api.py", line 417, in stream_image
    im = Image.fromarray(np.array(result))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3281, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 32, 320), <f4
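
Two failure modes recur in the log above. First, the ~36.8 MB intermediate tensors exceed gRPC's default 32 MiB (33554432-byte) message cap. A minimal sketch of raising the limit on the client channel; the real channel setup lives in exo's grpc_peer_handle.py, the server would need matching options, and the 64 MiB figure is an assumption:

```python
import grpc

MAX_MSG = 64 * 1024 * 1024  # assumed cap; size it to the largest tensor sent

# Client side: grpc.aio channels accept the standard channel arguments.
channel = grpc.aio.insecure_channel(
    "host:port",  # placeholder target
    options=[
        ("grpc.max_send_message_length", MAX_MSG),
        ("grpc.max_receive_message_length", MAX_MSG),
    ],
)
# The server side would pass the same options to grpc.aio.server(options=[...]).
```

Second, PIL's Image.fromarray rejects the (1, 1, 32, 320) float32 result: it wants a 2-D or (H, W, C) uint8 array. A sketch of the conversion, assuming the decoded values are already in [0, 1] (note the 32x320 shape itself suggests the full decoded image may not have been assembled, which this sketch does not address):

```python
import numpy as np
from PIL import Image

def to_image(result) -> Image.Image:
    arr = np.asarray(result).squeeze()    # drop leading batch dims
    arr = np.clip(arr * 255.0, 0, 255)    # assumes values in [0, 1]
    return Image.fromarray(arr.astype(np.uint8))
```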


 80%|############################################################################8                   | 40/50 [00:00<?, ?it/s]
 80%|############################################################################8                   | 40/50 [00:00<?, ?it/s]

 82%|##############################################################################7                 | 41/50 [00:00<?, ?it/s]
 82%|##############################################################################7                 | 41/50 [00:00<?, ?it/s]

 84%|################################################################################6               | 42/50 [00:00<?, ?it/s]
 84%|################################################################################6               | 42/50 [00:00<?, ?it/s]

 86%|##################################################################################5             | 43/50 [00:00<?, ?it/s]
 86%|##################################################################################5             | 43/50 [00:00<?, ?it/s]

 88%|####################################################################################4           | 44/50 [00:00<?, ?it/s]
 88%|####################################################################################4           | 44/50 [00:00<?, ?it/s]

 90%|######################################################################################4         | 45/50 [00:00<?, ?it/s]
 90%|######################################################################################4         | 45/50 [00:00<?, ?it/s]

 92%|########################################################################################3       | 46/50 [00:00<?, ?it/s]
 92%|########################################################################################3       | 46/50 [00:00<?, ?it/s]

 94%|##########################################################################################2     | 47/50 [00:00<?, ?it/s]
 94%|##########################################################################################2     | 47/50 [00:00<?, ?it/s]

 96%|############################################################################################1   | 48/50 [00:00<?, ?it/s]
 96%|############################################################################################1   | 48/50 [00:00<?, ?it/s]

 98%|##############################################################################################  | 49/50 [00:00<?, ?it/s]
 98%|##############################################################################################  | 49/50 [00:00<?, ?it/s]

100%|################################################################################################| 50/50 [00:00<?, ?it/s]
100%|################################################################################################| 50/50 [00:00<?, ?it/s]

Error connecting peer [email protected]:60559: 
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/tasks.py",
line 520, in wait_for
    return await fut
           ^^^^^^^^^
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 42, in connect
    await self.channel.channel_ready()
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_channel.py", line 481, in channel_ready
    await self.wait_for_state_change(state)
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_channel.py", line 474, in wait_for_state_change
    assert await self._channel.watch_connectivity_state(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "src/python/grpcio/grpc/_cython/_cygrpc/aio/channel.pyx.pxi", line 97, in watch_connectivity_state
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 350, in connect_with_timeout
    await asyncio.wait_for(peer.connect(), timeout)
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/tasks.py",
line 519, in wait_for
    async with timeouts.timeout(timeout):
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File 
"/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/timeouts.py", 
line 115, in __aexit__
    raise TimeoutError from exc_val
TimeoutError
Error connecting peer [email protected]:53636: 
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/tasks.py",
line 520, in wait_for
    return await fut
           ^^^^^^^^^
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 42, in connect
    await self.channel.channel_ready()
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_channel.py", line 481, in channel_ready
    await self.wait_for_state_change(state)
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_channel.py", line 474, in wait_for_state_change
    assert await self._channel.watch_connectivity_state(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "src/python/grpcio/grpc/_cython/_cygrpc/aio/channel.pyx.pxi", line 97, in watch_connectivity_state
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 350, in connect_with_timeout
    await asyncio.wait_for(peer.connect(), timeout)
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/tasks.py",
line 519, in wait_for
    async with timeouts.timeout(timeout):
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File 
"/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/timeouts.py", 
line 115, in __aexit__
    raise TimeoutError from exc_val
TimeoutError
Removing download task for Shard(model_id='llama-3.2-3b', start_layer=0, end_layer=27, n_layers=28): True
model: stable-diffusion-2-1-base, prompt: apples and bananas, stream: False
shard: Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=0, n_layers=37)
Removing download task for Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=23, n_layers=37): True
  2%|#9                                                                                               | 1/50 [00:00<?, ?it/s]
Task exception was never retrieved
future: <Task finished name='Task-4973691' coro=<StandardNode.forward_to_next_shard() done, defined at 
/Users/gary/exo/exo/orchestration/standard_node.py:275> exception=<AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36807168 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"CLIENT: Sent message larger than max (36807168
vs. 33554432)", grpc_status:8, created_time:"2024-11-25T10:32:48.153676+04:00"}"
>>
Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 304, in forward_to_next_shard
    await target_peer.send_tensor(next_shard, tensor_or_prompt, inference_state, request_id=request_id)
  File "/Users/gary/exo/exo/networking/grpc/grpc_peer_handle.py", line 101, in send_tensor
    response = await self.stub.SendTensor(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/grpc/aio/_call.py", line 327, in __await__
    raise _create_rpc_error(
grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with:
        status = StatusCode.RESOURCE_EXHAUSTED
        details = "CLIENT: Sent message larger than max (36807168 vs. 33554432)"
        debug_error_string = "UNKNOWN:Error received from peer  {grpc_message:"CLIENT: Sent message larger than max (36807168
vs. 33554432)", grpc_status:8, created_time:"2024-11-25T10:32:48.153676+04:00"}"
>
Task exception was never retrieved
future: <Task finished name='Task-4973698' coro=<ChatGPTAPI.handle_post_image_generations.<locals>.stream_image() done, 
defined at /Users/gary/exo/exo/api/chatgpt_api.py:412> exception=TypeError('Cannot handle this data type: (1, 1, 32, 320), 
<f4')>
Traceback (most recent call last):
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3277, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
                    ~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 32, 320), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/gary/exo/exo/api/chatgpt_api.py", line 417, in stream_image
    im = Image.fromarray(np.array(result))
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/PIL/Image.py", line 3281, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 32, 320), <f4
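
The 33554432-byte figure in these tracebacks is a 32 MiB gRPC message-size cap, and the ~36 MB payloads are the latent tensors being forwarded between shards. If the cap itself needs raising, that is a channel/server option; a minimal sketch with grpc.aio follows (the address is illustrative, and whether exo sets this cap itself or inherits it is not visible from this log):

import grpc

MAX_MSG = 64 * 1024 * 1024  # 64 MiB, comfortably above the ~36 MB tensors seen here

OPTIONS = [
    ("grpc.max_send_message_length", MAX_MSG),
    ("grpc.max_receive_message_length", MAX_MSG),
]

# Both ends need the headroom: the sender's channel and the receiver's server.
channel = grpc.aio.insecure_channel("192.168.1.10:60559", options=OPTIONS)
server = grpc.aio.server(options=OPTIONS)

The PIL TypeError that follows is a separate symptom: Image.fromarray is handed a float32 array of shape (1, 1, 32, 320) - an undecoded latent - where it expects a decoded uint8 image, which suggests the VAE decode stage never ran on this tensor.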
model: stable-diffusion-2-1-base, prompt: medieval castle, stream: False
shard: Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=0, n_layers=37)
[the same RESOURCE_EXHAUSTED and PIL TypeError tracebacks repeat for "medieval castle" and several "flying cars" requests]
Removing download task for Shard(model_id='llama-3.2-3b', start_layer=0, end_layer=17, n_layers=28): True
Error processing tensor for shard Shard(model_id='llama-3.2-3b', start_layer=0, end_layer=17, n_layers=28): too many values to unpack (expected 3)
Traceback (most recent call last):
  File "/Users/gary/exo/exo/orchestration/standard_node.py", line 267, in _process_tensor
    result, inference_state = await self.inference_engine.infer_tensor(request_id, shard, tensor, inference_state)
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/exo/inference/mlx/sharded_inference_engine.py", line 58, in infer_tensor
    output_data, inference_state = await asyncio.get_running_loop().run_in_executor(self.executor, self.model, 
mx.array(input_data), request_id, inference_state)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File 
"/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/thread
.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/exo/inference/mlx/stateful_model.py", line 41, in __call__
    y = self.model(x, cache=cache)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/exo/inference/mlx/sharded_utils.py", line 129, in __call__
    y = super().__call__(x[None] if self.shard.is_first_layer() else x, *args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/exo/inference/mlx/models/llama.py", line 86, in __call__
    out = self.model(inputs, cache)
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/exo/inference/mlx/models/llama.py", line 64, in __call__
    h = layer(h, mask, cache=c)
        ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/mlx_lm/models/llama.py", line 239, in __call__
    r = self.self_attn(self.input_layernorm(x), mask, cache)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gary/exo/.venv/lib/python3.12/site-packages/mlx_lm/models/llama.py", line 176, in __call__
    B, L, D = x.shape
    ^^^^^^^
ValueError: too many values to unpack (expected 3)
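
The unpack error itself is a pure shape mismatch: mlx_lm's llama attention expects a rank-3 (batch, sequence, dim) tensor, so any rank-4 tensor - such as a stable-diffusion latent like the (1, 1, 32, 320) array elsewhere in this log - fails exactly this way. A minimal reproduction:

import numpy as np

x = np.zeros((1, 1, 32, 320), dtype=np.float32)  # rank-4, SD-latent-style shape
B, L, D = x.shape  # ValueError: too many values to unpack (expected 3)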
model: stable-diffusion-2-1-base, prompt: flying cars, stream: False
shard: Shard(model_id='stable-diffusion-2-1-base', start_layer=0, end_layer=0, n_layers=37)
[the same RESOURCE_EXHAUSTED and PIL TypeError tracebacks repeat for each retry]

My bad - I accidentally had another Mac mini in the cluster on a different commit, which was why we were getting these errors. No issues.

@AlexCheema
Copy link
Contributor

Please fix conflicts @pranav4501

@pranav4501
Copy link
Author

Resolved!

@AlexCheema
Copy link
Contributor

AlexCheema commented Nov 26, 2024

Can we also support chat history? I want to be able to give follow-up instructions like:

Me: Generate an image of a robot in a flower field
-> Generates image
Me: Make the flowers purple
-> Should generate a modified image with purple flowers

@AlexCheema
Copy link
Contributor

Also, we need to persist the images in a system-wide location rather than in the exo repo itself.

@pranav4501
Copy link
Author

For chat history, we can implement it as follows:

  • select a previous image from chat history or upload an image
  • then perform image-to-image generation using that image and the prompt

I can change the location of the images. Shall I move it to Documents/Exo?

@pranav4501
Copy link
Author

Hi Alex,
It now supports chat history:

  1. You can select a previously generated image from chat history by clicking on it.
  2. Then add an additional prompt to perform further image-to-image generation (request sketched below).
  3. The caveat is that image-to-image generation requires an additional image encoder relative to text-to-image generation.
  4. Also, images are now saved to Documents/Exo/Images/.
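
As a usage sketch, a follow-up generation then carries the selected image alongside the new prompt. The model/prompt/image_url fields match the handler excerpt reviewed below; the endpoint path, port, and image reference are assumptions for illustration:

import requests

# Hypothetical follow-up request; path and port are not confirmed by this PR.
resp = requests.post(
    "http://localhost:52415/v1/image/generations",
    json={
        "model": "stable-diffusion-2-1-base",
        "prompt": "make the flowers purple",               # follow-up instruction
        "image_url": "Documents/Exo/Images/previous.png",  # previously generated image
    },
)
print(resp.status_code)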

@AlexCheema
Copy link
Contributor

Hi Alex, It now supports chat history:

  1. You can select a previously generated image from chat history by clicking on it.
  2. Then add an additional prompt to perform further image-to-image generation.
  3. The caveat is that image-to-image generation requires an additional image encoder relative to text-to-image generation.
  4. Also, images are now saved to Documents/Exo/Images/.

The follow-up prompts aren't working for me. I selected an image of flowers and added "with a horse galloping through" and it seems to ignore the selected image.

Screenshot 2024-12-13 at 21 36 05

@pranav4501
Copy link
Author

I don't think it is ignoring the previous image here: it creates a latent from the encoded image and the encoded prompt to generate a new image, rather than editing the previous image in place. We can set the strength of the prompt in the newly generated image: at lower strengths, more features of the older image carry over, but the features of the prompt are not fully realized. You can try generating with smaller strengths by updating it here:
https://github.com/pranav4501/exo/blob/5c0cd1839bdddd549a0a3f51e97064f7cc58ff8e/exo/inference/mlx/models/StableDiffusionPipeline.py#L184
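
For orientation, here is a minimal sketch of how a strength parameter typically enters img2img - the usual schedule-skipping approach, not necessarily the exact code behind the link, and all names are illustrative (assumes 0 < strength <= 1 and len(sigmas) == num_steps + 1):

import numpy as np

def img2img_start(image_latent: np.ndarray, sigmas: np.ndarray,
                  num_steps: int, strength: float, seed: int = 0):
    """Noise the encoded image partway into the schedule.

    strength=1.0 denoises for all num_steps from (near) pure noise, so the
    prompt dominates; lower strengths skip early steps, so more of the
    input image survives but the prompt is realized less fully.
    """
    steps_to_run = int(num_steps * strength)
    start = num_steps - steps_to_run  # schedule index to resume from
    noise = np.random.default_rng(seed).standard_normal(image_latent.shape)
    noised = image_latent + sigmas[start] * noise.astype(np.float32)
    return noised, start  # denoise from `start` to the end of the schedule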

Copy link
Contributor

@AlexCheema AlexCheema left a comment


There are some architectural improvements I'd like to make here. Some high level notes:

  • inference_state shouldn't be necessary
  • Special cases for stable diffusion
  • Cache handling can be unified / made more flexible with e.g. prompt caching / preloading the cache with a serialized cache
  • Tokenizer handling is hacky and should be unified - you had to write a new tokenizer, for example

For now I think this is good to merge if you fix the small things I made comments on and we will work on these architectural changes later.

model = data.get("model", "")
prompt = data.get("prompt", "")
image_url = data.get("image_url", "")
print(f"model: {model}, prompt: {prompt}, stream: {stream}")
Copy link
Contributor


naked print
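
A sketch of the likely fix - gating the print behind exo's DEBUG level (assuming the DEBUG constant exo uses elsewhere; model/prompt/stream mirror the excerpt above):

from exo import DEBUG  # assumption: the shared verbosity flag

if DEBUG >= 2:  # only emit in verbose runs instead of printing unconditionally
    print(f"model: {model}, prompt: {prompt}, stream: {stream}")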

image_url = data.get("image_url", "")
print(f"model: {model}, prompt: {prompt}, stream: {stream}")
shard = build_base_shard(model, self.inference_engine_classname)
print(f"shard: {shard}")
Copy link
Contributor


naked print

img = Image.open(BytesIO(image_data))
W, H = (dim - dim % 64 for dim in (img.width, img.height))
if W != img.width or H != img.height:
print(f"Warning: image shape is not divisible by 64, downsampling to {W}x{H}")
Copy link
Contributor


naked print
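
For reference, the excerpt rounds both dimensions down to the nearest multiple of 64 (the UNet's spatial stride) before warning; the same arithmetic worked through on a hypothetical 1000x700 upload:

from PIL import Image

img = Image.new("RGB", (1000, 700))  # stand-in for the uploaded image
W, H = (dim - dim % 64 for dim in (img.width, img.height))  # 960, 640
if W != img.width or H != img.height:
    img = img.resize((W, H))  # downsample as the warning describes
print(img.size)  # (960, 640)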

Copy link
Contributor


Why is there a build dir?

Comment on lines +254 to +255
models_config = json.dumps(models_config)
models_config = json.loads(models_config)
Copy link
Contributor


is this to sanitize the json?
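
If so: a dumps/loads round trip coerces the config to plain JSON types (tuples become lists, keys become strings) and deep-copies it as a side effect; a small sketch, with copy.deepcopy as the alternative when only a copy is wanted:

import copy
import json

cfg = {"unet": {"layers": (0, 5)}, 1: "one"}  # tuples and a non-string key
rt = json.loads(json.dumps(cfg))
print(rt)  # {'unet': {'layers': [0, 5]}, '1': 'one'}
alt = copy.deepcopy(cfg)  # preserves Python types if sanitizing isn't the goal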
