AI as Rendering Engine: running Stable Diffusion and ControlNET locally via Grasshopper

Luciano Ambrosini
Mar 24, 2023


AI as Render Engine via Grasshopper with Ambrosinus-Toolkit plugin (by Luciano Ambrosini)

The Ambrosinus-Toolkit v1.1.9 adds a new feature: running Stable Diffusion locally, thanks to the AUTOMATIC1111 (A11) project and the ControlNET (CN) extension. It is another AI tool that brings the power of artificial intelligence inside the Grasshopper platform.
Below is an extract from my official article

AI, as most people already know, represents the new paradigm shift of our Digital Era.

Recently, one of my goals has been to share and develop computational solutions, more or less experimental, so that anyone can explore their own projects through tools based on artificial intelligence. One starting point was certainly the possibility of using the APIs shared by OpenAI and StabilityAI, and the services offered by numerous platforms such as HuggingFace (to name one of the most active and famous). Many of these projects are migrating from cloud execution (which certainly has its advantages, especially on the hardware side, see Google's Colab) to local execution, where the hardware you have can clearly make a difference. However, further work based on different techniques (DPT, latent space, etc.) is making it possible to run neural models on your own machine even with low VRAM availability; many forum discussions recommend at least 6 GB of VRAM, while in my case I have 4 GB.

Having established this, two particularly valuable projects in this sense, which in my opinion will make a big difference in the coming months and, more generally, in the development of web-based applications, are Automatic1111 (well known to nerd and geeky users ;) ) and InvokeAI (a curious one is the ComfyUI project, which adds a node-based UI in VPL style).

Both take advantage of a web-based UI and offer numerous features to generate images through AI.
In this article, I have taken Automatic1111 as a reference, together with the possibility of integrating into this project an important feature introduced by the ControlNET neural network, of which I posted a very brief sneak peek a few weeks ago. In practice, as explained by the two Stanford University researchers Lvmin Zhang and Maneesh Agrawala, ControlNET enables conditional inputs such as edge maps, segmentation maps and key points to enrich the ways of controlling large diffusion models and to facilitate related applications (here is the paper: arxiv.org/pdf/2302.05543). This technology effectively offers an intriguing capability particularly dear to the AEC industry (but obviously not only): obtaining a sort of rendering of your architectural models in near real-time, simply by passing an image and a descriptive text prompt as input to the neural model.

AI as Rendering Engine is now something much more concrete and feasible

The principle is the one already widely discussed in some of my articles, and in many posts around the net: the one attributable to text-to-image (T2I) and image-to-image (I2I) generative AI tasks. CN, as an extension of A11's Stable Diffusion webUI project, can be installed as explained by the author here. The A11 project integrates the FastAPI framework; without getting too technical, this simply means that it is possible to use the A11 project, with the CN extension, by querying the local server directly, which listens on localhost at port 7860. With these assumptions, I created the first two components, AIeNG_loc and LaunchSD_loc, capable of putting together everything described so far, all through Grasshopper, this time completely avoiding the installation of multiple Python libraries and simply using the Ambrosinus-Toolkit, downloadable from Food4Rhino or from the Rhino Package Manager.
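For readers curious about what such a request looks like outside Grasshopper, here is a minimal Python sketch (not the toolkit's actual code) that queries the local A11 server. It assumes the webUI was launched with the --api flag on the default port 7860 and that the ControlNET extension exposes its parameters through the usual alwayson_scripts payload; the model name and file paths are illustrative and may differ between versions.

```python
# Minimal sketch: send a txt2img request with a ControlNET conditioning image
# to a locally running AUTOMATIC1111 server (assumes --api and port 7860).
# Payload field names follow the ControlNET extension API and may vary by version.
import base64
import json
import urllib.request

A11_URL = "http://127.0.0.1:7860"  # default local server address


def render(prompt, control_image_path, module="canny"):
    with open(control_image_path, "rb") as f:
        control_b64 = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "prompt": prompt,
        "negative_prompt": "blurry, low quality",
        "steps": 20,
        "width": 512,
        "height": 512,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": control_b64,
                    "module": module,               # e.g. "canny" or "depth"
                    "model": "control_sd15_canny",  # hypothetical checkpoint name
                }]
            }
        },
    }
    req = urllib.request.Request(
        A11_URL + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    # The API returns the generated images as base64-encoded strings
    with open("render_out.png", "wb") as f:
        f.write(base64.b64decode(result["images"][0]))


# Hypothetical usage with a viewport capture as the conditioning image
render("modern concrete villa at sunset, photorealistic", "viewport_capture.png")
```

Only the Python standard library is used here, in the same spirit as the toolkit's goal of avoiding extra library installations.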

PART 1 — Requirements on my website

Example: the "canny" mode running inside Grasshopper is shown below:

AIeNG_loc and LaunchSD_loc components from Ambrosinus-Toolkit — AI subcategory

Some experiments are below:

AI as rendering engine through ControlNET via Grasshopper — demo by Luciano Ambrosini
AI as rendering engine through ControlNET via Grasshopper (“canny” mode)
AI as rendering engine through ControlNET via Grasshopper (“depth” mode)

The version deployed with Ambrosinus-Toolkit v1.1.9 uses Stable Diffusion 1.5; future versions will add new features and the option to integrate the v2.x engine.

What is it possible to get from this component?
The current version can run text-to-image, but the most valuable feature is the ControlNET "power", because this neural network can be used as a rendering engine, or simply as a tool that enhances your design and pushes your conceptual sketches to the next level. Simple samples are shown in the grid above and in the video attached at the end of this article.
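As a purely illustrative aside (not part of the toolkit, which lets the CN "canny" preprocessor do this internally), the sketch below shows roughly the kind of edge map the canny mode derives from a viewport capture or conceptual sketch before conditioning the diffusion model. It assumes OpenCV is available, and the file names and thresholds are hypothetical.

```python
# Illustrative only: compute a canny edge map similar to what the ControlNET
# "canny" preprocessor produces from the input image.
# Assumes OpenCV is installed (pip install opencv-python).
import cv2

img = cv2.imread("viewport_capture.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, threshold1=100, threshold2=200)  # thresholds to tune per image
cv2.imwrite("viewport_canny.png", edges)
```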

💡 Later versions will include further features (such as the image-to-image option). However, the A11 project is, as is well known, constantly evolving and full of features, so at this stage I opted for a more careful selection of the parameters to be passed as input and for a differentiation of the executable tasks. The reason is to keep the Grasshopper component interface as clean and clear as possible.

Why integrate the possibility to run SD locally into the toolkit?
As for all the other AI components, this step is part of a research project documented in some academic papers (updates soon). In any case, this new Digital Era is still in a transition phase that strongly tends to classify AI as a paradigm shift.

I personally believe that we are still in an “exploratory” phase of the “possible chances”.

Many tools, including some of the components I have distributed, start out as experiments. So the idea of sharing these tools for free (under a code that is not completely open, for research reasons) seemed to me the best way to meet the change and the innovation, also because professionals are particularly burdened by the costs of licenses of any kind. For these reasons, applause certainly goes to the projects mentioned here and to the researchers who make an increasingly open and interoperable vision of the world of architecture and design possible.

In my personal opinion, this latest development has three advantages.
The first, immediate one, is the possibility of experimenting through the webUI platform with all the power of AI for free: no API keys required, no fees to pay and, what's more, a render engine that shows great promise. The second is the possibility of integrating all of this within our Grasshopper workflows thanks to the Ambrosinus-Toolkit. The third, finally, is cultivating a community of architects, computational designers, creatives and creators that interacts and networks through the development of tools and workflows, because each of us has a different way of understanding design.

The real superpower of AI in the AEC world is the ability to give life to “metatools”, as infrastructures for possible and unexpected solution tools — this will be the real paradigm shift.

These last few months have been incredible: new solutions, research and AI-based tools have been produced at a very high speed. With the spread of projects that can be installed locally, I believe I have laid the foundations for future developments and the integration of small utilities into the "AI" subcategory of the Ambrosinus-Toolkit (especially for projects based on Stable Diffusion). However, I will mainly try to filter the tools and advances, giving more room to what can be considered genuinely valid and useful for workflows in the AEC environment and, obviously, usable in Grasshopper.

Finally, the video below can be enjoyed in one go or according to your needs and curiosities. It consists of two parts: the first explains how to install everything you need on your machine, and in the second you will see the first two components in action. I suggest watching the video on YouTube so you can jump to the highlights of interest noted in the description:

Get more info by having a look at this video!

UPDATE 1

Ambrosinus-Toolkit v1.2.0 extends the AI-Gen components based on the AUTOMATIC1111 project with two new features: SDopts_loc and SD-Imginfo.

SDopts_loc component
SD-Imginfo component
NEW components Right-click context menu

SDopts_loc allows the user to set a custom Stable Diffusion model checkpoint. For instance, in the sneak peek video below I used the "mdjrny-v4.safetensors" model, a dataset trained on Midjourney version 4 images. Currently (and above all in the very near future) the best output and AI-Gen exploration will depend on the type and quality of the trained dataset used as a model checkpoint. On the HuggingFace platform many researchers have been sharing very interesting models, and more models focused on architecture and buildings will come soon.
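For context, switching the checkpoint through the A11 API can look like the following minimal sketch (again, not the component's actual code). It assumes the server exposes the /sdapi/v1/options endpoint when launched with --api, and that the relevant option is named "sd_model_checkpoint"; the exact option key may vary between versions.

```python
# Sketch: set a custom Stable Diffusion checkpoint on a local A11 server.
# Assumes the webui was started with --api; option names may differ by version.
import json
import urllib.request

A11_URL = "http://127.0.0.1:7860"


def set_checkpoint(name):
    payload = {"sd_model_checkpoint": name}
    req = urllib.request.Request(
        A11_URL + "/sdapi/v1/options",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).read()  # the endpoint returns no useful body


# e.g. switch to the Midjourney-style model mentioned above
set_checkpoint("mdjrny-v4.safetensors")
```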

SD-Imginfo allows the user to read all the AI-Gen settings used to generate an image. In the future I will extend it with some restore options, similar to the component already deployed for the StabilityAI engine.
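Reading those settings back is also possible through the A11 API itself; the sketch below, assuming the /sdapi/v1/png-info endpoint and a PNG generated by the server, shows the general idea (file name hypothetical, response fields may vary by version).

```python
# Sketch: read the generation settings embedded in a PNG produced by A11,
# via the /sdapi/v1/png-info endpoint (assumes the server runs with --api).
import base64
import json
import urllib.request

A11_URL = "http://127.0.0.1:7860"


def read_image_info(path):
    with open(path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode("utf-8")
    payload = {"image": "data:image/png;base64," + img_b64}
    req = urllib.request.Request(
        A11_URL + "/sdapi/v1/png-info",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("info", "")


print(read_image_info("render_out.png"))  # prompt, steps, sampler, seed, etc.
```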

Sneak Peek #1 Video here

If you have come this far, you have received my sincere thanks! ;)


Written by Luciano Ambrosini

PhD | Architect | Computational + Environmental Designer
