
Add Batch mode + Limit calculation #36

Open

wizz13150 wants to merge 1 commit into AUTOMATIC1111:master from wizz13150:batch_mode_the_third

Conversation


wizz13150 commented Jun 4, 2023

Hey,

Repost of the previous messy PRs.

A batch mode would be appreciated.

Obviously not a python expert, lol.

  • Add a checkbox to activate batch mode.
  • Add a field, on both tabs, for a folder containing the models to convert.
  • Use "models/Stable-diffusion" and "models/Unet-onnx" as defaults if the fields are left empty.
  • Add some basic logging to the console; right now it's a black box, and it's hard to tell whether the ONNX conversion is dead or alive and what is going on.
  • Add a checkbox to append the --maxshape arguments to the converted .trt model name (_max_batch_size x max_width x max_height).
  • Add a calculation to display the limit allowed for the ONNX-to-TRT conversion. It should keep working if the limit eventually goes up; only the maximum allowed value needs changing, which seems doable.
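To make the intent of the list above concrete, here's a minimal sketch of the batch loop and the optional max-shape filename suffix. The folder names match the defaults mentioned in this PR, but `batch_convert()`, `trt_name()`, and the `convert` callback are illustrative placeholders, not the extension's actual API.

```python
import os

# Default folders used when the new text fields are left empty
# (matching the defaults described in this PR).
DEFAULT_INPUT = os.path.join("models", "Stable-diffusion")
DEFAULT_OUTPUT = os.path.join("models", "Unet-onnx")

def trt_name(base, max_batch, max_width, max_height, add_shape_suffix):
    # Optionally append the max-shape arguments to the converted model
    # name, e.g. "model_2x768x768.trt" (hypothetical naming scheme).
    suffix = f"_{max_batch}x{max_width}x{max_height}" if add_shape_suffix else ""
    return f"{base}{suffix}.trt"

def batch_convert(input_dir="", output_dir="", convert=lambda src, dst: None):
    # Fall back to the default folders when the fields are left empty.
    input_dir = input_dir or DEFAULT_INPUT
    output_dir = output_dir or DEFAULT_OUTPUT
    converted = []
    for name in sorted(os.listdir(input_dir)):
        # Only checkpoint files are queued for conversion.
        if not name.endswith((".ckpt", ".safetensors")):
            continue
        src = os.path.join(input_dir, name)
        dst = os.path.join(output_dir, os.path.splitext(name)[0] + ".onnx")
        # Basic console logging so the conversion is no longer a black box.
        print(f"[batch] converting {src} -> {dst}")
        convert(src, dst)
        converted.append(dst)
    return converted
```

The same loop shape works for the second tab (ONNX to TensorRT); only the extensions and the conversion callback change.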

Convert everything in 5-8 clicks:

  • Check 'Run Batch' and click 'Convert Unet to ONNX' button.
  • You don't need to wait for it to finish; just switch tabs and adjust the sliders (or not) according to the limit.
  • Check 'Run Batch' and click 'Convert ONNX to TensorRT' button.

This way, using the default folders, it converts all the files in the 'Stable-Diffusion' folder to .onnx in the 'Unet-onnx' folder, then automatically starts converting them to .trt in the 'Unet-trt' folder. Kind of a queue.
And Voilà !
The 'Unet-onnx' and 'Unet-trt' folders are now populated with converted versions of every available .ckpt and .safetensors model from the 'Stable-Diffusion' folder.


Screen for the 'Convert to ONNX' tab:

(screenshots)

Screen for the 'Convert ONNX to TensorRT' tab:

(screenshots)

I mean, it worked for me, lol.

Cheers ! 🥂

Author

wizz13150 commented Jun 4, 2023

The only thing still to do (as far as I can tell):

  • Don't crash when converting to ONNX while the currently loaded model is linked to a .trt file/unet.
    The issue is related to export_onnx.py and the deactivation of 'attention optimization', and seems unrelated to my commits.

(screenshot)

Not sure yet how to tackle this one, though. 🤨🤔 I won't touch it; I just set the 'SD Unet' setting to 'None' to un-link it during conversion.

(screenshot)

I can see people here and there able to convert to .trt using 640x640 at batch size 2; I don't understand this, as it exceeds the limit for me.
If anyone has an answer for this, I'm interested.
Example, using an RTX 4070 Ti with 12 GB VRAM, result = 32.3 it/s:
https://www.youtube.com/watch?v=bT7SaMkgNEY&t=505s
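For what it's worth, the arithmetic alone shows why batch 2 is much harder to fit: assuming the displayed limit is a cap on batch_size × width × height (an assumption; the actual threshold the conversion enforces may differ), batch 2 at 640x640 needs exactly twice the elements of batch 1 at the same resolution.

```python
# Hypothetical limit check: assumes the conversion limit is a cap on
# batch_size * width * height. The real threshold may differ.
def shape_elements(batch_size, width, height):
    return batch_size * width * height

def within_limit(batch_size, width, height, max_elements):
    return shape_elements(batch_size, width, height) <= max_elements

# Batch 2 at 640x640 needs twice the elements of batch 1 at 640x640:
print(shape_elements(1, 640, 640))  # 409600
print(shape_elements(2, 640, 640))  # 819200
```

So any cap sitting between those two values would accept batch 1 and reject batch 2 at 640x640, which could explain the discrepancy if the limit differs between setups or versions.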
