To give the model enough freedom to compose designs from a wide variety of domains, we commissioned two extensive design systems (one for mobile and one for desktop) with hundreds of components, as well as examples of the different ways these components can be assembled, to guide the model’s output.
We feed metadata from these hand-crafted components and examples into the model’s context window, along with the prompt the user enters describing their design goals. The model then effectively assembles a subset of these components, inspired by the examples, into fully parameterized designs. From there, Amazon Titan, a diffusion model, creates the images needed for the design. It’s more or less as simple as AI helping you identify, arrange, fill out, and theme small composable templates from a design system to give you a jumping-off point.
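To make the first step concrete, here is a rough sketch of what packing component metadata and example assemblies into the context window might look like. Every name below — the components, their parameters, the prompt wording — is invented for illustration; it shows the shape of the approach, not our actual design system or prompts.

```python
import json

# Hypothetical metadata for two design-system components. Each entry gives
# the model a name, a short description to choose by, and the parameters
# it is expected to fill in.
COMPONENTS = [
    {
        "name": "HeroBanner",
        "description": "Full-width banner with a headline, subtext, and image",
        "parameters": ["headline", "subtext", "image_prompt"],
    },
    {
        "name": "FeatureGrid",
        "description": "Responsive grid of feature cards",
        "parameters": ["cards"],
    },
]

# A hypothetical example assembly, showing the model one valid composition.
EXAMPLE_ASSEMBLIES = [
    {
        "components": [
            {
                "name": "HeroBanner",
                "headline": "Fresh coffee, delivered",
                "subtext": "Beans roasted this week",
                "image_prompt": "coffee beans spilling from a burlap sack",
            }
        ]
    }
]

def build_prompt(user_goal: str) -> str:
    """Pack component metadata and example assemblies into the context
    window alongside the user's stated design goal."""
    return (
        "You are assembling a design from the component library below.\n"
        f"Components:\n{json.dumps(COMPONENTS, indent=2)}\n"
        f"Example assemblies:\n{json.dumps(EXAMPLE_ASSEMBLIES, indent=2)}\n"
        f"User goal: {user_goal}\n"
        "Respond with one JSON design that uses only these components, "
        "with every parameter filled in."
    )

if __name__ == "__main__":
    print(build_prompt("landing page for a coffee subscription service"))
```

The model’s JSON reply is the fully parameterized design; any `image_prompt` values it filled in are what get handed to the diffusion model next.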
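And for the image step, here is a minimal sketch of calling Titan through the Amazon Bedrock runtime. The model ID and request fields follow AWS’s documented Titan Image Generator format, but whether our pipeline invokes it exactly this way is an assumption — treat it as one plausible wiring.

```python
import base64
import json

import boto3

# Sketch of the image-generation step. The request/response shapes follow
# AWS's documented Titan Image Generator API on Bedrock; the function name
# and config values are illustrative.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def generate_image(image_prompt: str) -> bytes:
    """Render one image for a design's image slot and return the raw bytes."""
    body = json.dumps({
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": image_prompt},
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "width": 1024,
            "height": 1024,
        },
    })
    response = bedrock.invoke_model(
        modelId="amazon.titan-image-generator-v1",
        body=body,
    )
    payload = json.loads(response["body"].read())
    # Titan returns its images as base64-encoded strings.
    return base64.b64decode(payload["images"][0])

if __name__ == "__main__":
    image = generate_image("coffee beans spilling from a burlap sack, warm light")
    with open("hero.png", "wb") as f:
        f.write(image)
```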