AI as Rendering Engine (ControlNET v1.0) with Ambrosinus-Toolkit v1.2.1
AI as a Rendering Engine is now something much more concrete and feasible.
The principle is the one already widely discussed in some of my articles, and in many posts around the net: the one behind text-to-image (T2I) and image-to-image (I2I) AI generative tasks. ControlNET, as an extension module of the Stable Diffusion webUI project by AUTOMATIC1111 (A1111), can be installed as explained by the author here. The A1111 project has been integrated with the FastAPI platform; without getting too technical, this simply means that it is possible to use the A1111 project, together with the ControlNET extension, by querying the local server directly, which is reachable via localhost on port 7860.

With these assumptions, I am developing some components, such as AIeNG_loc, LaunchSD_loc, SDopts_loc, UpsclAI_loc and SD-Imginfo, capable of putting together everything described so far, all through Grasshopper, this time completely avoiding the installation of multiple Python libraries (and without Administrator privileges), simply by using the Ambrosinus-Toolkit downloadable from Food4Rhino or from the Rhino Package Manager. These components belong to the "3.AI" sub-category and are still under development.
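For readers curious about what "querying the local server" looks like under the hood, here is a minimal sketch of a text-to-image call against the standard A1111 API (the /sdapi/v1/txt2img endpoint is part of the A1111 API and requires the webUI to be launched with the --api flag; the prompt and settings below are purely illustrative, not what the toolkit components send):

```python
import base64, json
from urllib import request

# A1111 webUI exposes a FastAPI server on localhost:7860 when launched with --api
URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

payload = {
    "prompt": "modern house in a forest, photorealistic, golden hour",
    "negative_prompt": "blurry, low quality",
    "steps": 25,        # illustrative values
    "cfg_scale": 7,
    "width": 768,
    "height": 512,
}

req = request.Request(URL, data=json.dumps(payload).encode("utf-8"),
                      headers={"Content-Type": "application/json"})
with request.urlopen(req) as resp:
    result = json.loads(resp.read())

# The API returns generated images as base64 strings
with open("ai_render.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```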
UPDATE 1
Ambrosinus-Toolkit v.1.2.0 has implemented AI-Gen components based on the AUTOMATIC1111 project, with two new features: SDopts_loc and SD-Imginfo.
[Images: SDopts_loc component | SD-Imginfo component | NEW components right-click context menu]
SDopts_loc allows the user to set a custom Stable Diffusion model checkpoint. For instance, in the sneak peek video below I used the "mdjrny-v4.safetensors" model, a checkpoint trained on Midjourney version 4 images. Currently (and above all in the very near future) the best outputs and AI-Gen explorations will depend on the typology and quality of the trained dataset used as a model checkpoint. Through the HuggingFace platform, many researchers have been sharing very interesting models, and more of them focused on architecture and buildings will come soon.
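Conceptually, switching the checkpoint through the API boils down to a single call to the /sdapi/v1/options endpoint (a standard A1111 endpoint); a minimal sketch, where the model name is just the example used in the video:

```python
import json
from urllib import request

OPTIONS_URL = "http://127.0.0.1:7860/sdapi/v1/options"

# The checkpoint title must match one of the entries listed by /sdapi/v1/sd-models
payload = {"sd_model_checkpoint": "mdjrny-v4.safetensors"}

req = request.Request(OPTIONS_URL, data=json.dumps(payload).encode("utf-8"),
                      headers={"Content-Type": "application/json"})
request.urlopen(req)  # A1111 loads the new checkpoint; this may take a few seconds
```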
SD-Imginfo allows the user to read all the AI-Gen settings used for an image generation. In the future, I will extend it with some reinstate options, like the very similar component already deployed for the StabilityAI engine.
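This kind of readback is possible because A1111 embeds the generation parameters in the PNG metadata and exposes them through the /sdapi/v1/png-info endpoint. A minimal sketch (file name is illustrative):

```python
import base64, json
from urllib import request

PNG_INFO_URL = "http://127.0.0.1:7860/sdapi/v1/png-info"

with open("ai_render.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {"image": "data:image/png;base64," + b64}
req = request.Request(PNG_INFO_URL, data=json.dumps(payload).encode("utf-8"),
                      headers={"Content-Type": "application/json"})
with request.urlopen(req) as resp:
    info = json.loads(resp.read())["info"]  # prompt, seed, steps, sampler, etc.

print(info)
```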
Sneak Peek #1
UPDATE 2
Ambrosinus-Toolkit v1.2.1 adds the "ViewCapture" component (2.Image | Ambrosinus Image sub-category), so you can easily save the Rhino viewport (the 3D model shown in the active view) both as an image file (JPG/PNG) and as an entry in the "Named Views" Rhino side panel. Through this component it is possible to pass the captured viewport image as the BaseIMG input for your AI image-generation process using ControlNET models.
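This is not the actual ViewCapture source code, but a rough sketch of the RhinoCommon calls such a component can rely on from a GhPython script (the output path and the Named View label are hypothetical placeholders):

```python
import Rhino

doc = Rhino.RhinoDoc.ActiveDoc
view = doc.Views.ActiveView

# Capture the active viewport to a System.Drawing.Bitmap and save it to disk
bitmap = Rhino.Display.ViewCapture.CaptureToBitmap(view)
bitmap.Save(r"C:\temp\viewport.png")  # hypothetical output path

# Store the same camera as a Named View so it can be restored later
doc.NamedViews.Add("AIGen_BaseIMG", view.ActiveViewportID)
```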
[Images: New LaunchSD_loc component | ViewCapture component | UpsclAI_loc component | OpenDir component]
The main update concerns the possibility to interact with the webui-user.bat file, much like the WinSDlauncher Windows OS tool does: if you encounter an issue shown in the CMD terminal, you can set new arguments according to the A1111 feedback. Through the ViewCapture component it is now very simple to set a Rhino viewport as input to the BaseIMG parameter. When you save an image from the viewport, the component automatically saves the same view as a Rhino Named Views object.
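As a reference, the arguments in question live on the COMMANDLINE_ARGS line of webui-user.bat; the --api flag is what enables the local API that the toolkit's "_loc" components talk to, and any extra flag suggested by the A1111 feedback goes on the same line (the file below mirrors the default A1111 template; --autolaunch is only an illustrative extra):

```bat
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--api --autolaunch
call webui.bat
```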
My example is very simple (the 3D model is not so sophisticated, and neither is the prompt), but the result is promising:
This Toolkit version, like the previous one, works with the ControlNET v1.0 extension (though it is CN v1.1-ready too). This is important to know because ControlNET v1.1 is not yet fully supported through the API. See this video tutorial on how to install CN v1.0 in Stable Diffusion (AUTOMATIC1111 project).
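For context, the CN v1.0 extension exposed its own API routes alongside the standard A1111 ones; a hedged sketch of what such a call can look like is below. The /controlnet/txt2img route and the controlnet_units field belong to that older extension API (v1.1 reorganized the API surface, which is why it is not fully supported yet), and the module, model name and settings here are illustrative only:

```python
import base64, json
from urllib import request

CN_TXT2IMG_URL = "http://127.0.0.1:7860/controlnet/txt2img"  # legacy CN v1.0 route

with open("viewport.png", "rb") as f:
    base_img = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "modern house in a forest, photorealistic",
    "steps": 25,
    "controlnet_units": [{
        "input_image": base_img,
        "module": "canny",                         # preprocessor (illustrative)
        "model": "control_sd15_canny [fef5e48e]",  # must match an installed CN v1.0 model
        "weight": 1.0,
    }],
}

req = request.Request(CN_TXT2IMG_URL, data=json.dumps(payload).encode("utf-8"),
                      headers={"Content-Type": "application/json"})
with request.urlopen(req) as resp:
    result = json.loads(resp.read())

with open("cn_render.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```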
Finally, thanks to the UpsclAI_loc component, the user can upscale the image with the desired upscaler model. In the video below I used the Ultrasharp_4x model, and the final result is very refined. The OpenDir component is still partially a work-in-progress, but I found it helpful for quickly opening each selected folder.
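Upscaling through the local server maps onto the standard /sdapi/v1/extra-single-image endpoint; a minimal sketch, where the upscaler name must match the one listed in the webUI's Extras tab and the file names are illustrative:

```python
import base64, json
from urllib import request

UPSCALE_URL = "http://127.0.0.1:7860/sdapi/v1/extra-single-image"

with open("cn_render.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "image": b64,
    "upscaling_resize": 4,          # 4x enlargement
    "upscaler_1": "4x-UltraSharp",  # must match the name shown in the webUI Extras tab
}

req = request.Request(UPSCALE_URL, data=json.dumps(payload).encode("utf-8"),
                      headers={"Content-Type": "application/json"})
with request.urlopen(req) as resp:
    result = json.loads(resp.read())

with open("cn_render_4x.png", "wb") as f:
    f.write(base64.b64decode(result["image"]))
```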
Update the Ambrosinus-Toolkit to v1.2.1 and download the demo file from this link.
More info and experiments on lucianoambrosini.it
As always Stay Tuned! 😉