Fix allocation-size-too-big crash in prepare_input_tensors (#8233)
(Adapted from an LLM-suggested fix for a fuzzer-discovered crash)
The crash is an allocation-size-too-big error in `prepare_input_tensors`: given a malformed program with a huge input count, the function attempts to allocate an `inputs` array whose size exceeds the system's limits.
The root cause is the lack of bounds checking on the `num_inputs` variable, which lets untrusted input drive an arbitrarily large allocation. This is compounded by the function allocating memory for each input tensor separately, without first checking the total size of all tensors.
The patch fixes the crash by adding a bounds check on `num_inputs` and by computing the total size of all tensors, with overflow checking, before allocating the `inputs` array.
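A minimal sketch of the pattern the patch describes, not the actual ExecuTorch code: `kMaxInputs`, `kMaxTotalBytes`, and the simplified `Tensor` struct are hypothetical names introduced here for illustration, and the real function signature differs.

```cpp
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// Hypothetical upper bound on the number of method inputs we are willing
// to allocate for; anything larger is treated as corrupt/fuzzer input.
constexpr size_t kMaxInputs = 16;

// Simplified stand-in for a tensor: just the size of its data buffer.
struct Tensor {
  size_t nbytes;
};

// Returns true and fills `inputs` only if the allocation is safe.
bool prepare_input_tensors(const std::vector<Tensor>& tensors,
                           std::vector<std::vector<uint8_t>>& inputs) {
  const size_t num_inputs = tensors.size();

  // Bounds check on num_inputs: reject absurd counts before allocating.
  if (num_inputs == 0 || num_inputs > kMaxInputs) {
    return false;
  }

  // Compute the total size of all tensors first, with overflow checking,
  // so we never attempt an allocation that exceeds system limits.
  size_t total_size = 0;
  for (const Tensor& t : tensors) {
    if (t.nbytes > std::numeric_limits<size_t>::max() - total_size) {
      return false;  // size_t overflow
    }
    total_size += t.nbytes;
  }

  // Hypothetical cap on the combined allocation (1 GiB here).
  constexpr size_t kMaxTotalBytes = size_t{1} << 30;
  if (total_size > kMaxTotalBytes) {
    return false;
  }

  // Only now allocate one zero-initialized buffer per input.
  inputs.reserve(num_inputs);
  for (const Tensor& t : tensors) {
    inputs.emplace_back(t.nbytes);
  }
  return true;
}
```

The key design point is ordering: both the count check and the total-size check happen before any allocation, so a fuzzer-supplied input can fail fast instead of triggering an oversized allocation.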
Differential Revision: D68876117