The “wizard” interface design has been around for decades: interfaces that guide users step by step through complicated procedures, usually installations. Even today, you see them on the web in software like Typeform, which splits form questions into a multi-step UI that is aesthetically pleasing and maybe even increases conversion.
One idea I’ve been thinking about recently is generative interfaces. Traditional wizards have to be designed individually for every task. There are “no-code” builders that help design them, but they still require work, and the logic gets complicated exponentially fast: every branch in the decision tree can multiply the number of wizard states.
What if we could compile designs just in time with generative AI? Given the current application state, the UI is conditionally rendered according to the output of some model. For a form, this might mean letting the user type multiple answers into a free-form text box, having the AI try to parse them into structured output, and then asking clarifying questions for any remaining or unclear values.
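Here’s a minimal sketch of that form flow, assuming a generic `callModel` helper that sends a prompt to some LLM and returns its text; the schema shape and field names are illustrative, not any particular library’s API:

```typescript
// Parse free-form input into a structured form, then surface the
// fields the model couldn't fill as clarifying questions.

interface FormSchema {
  fields: { name: string; description: string }[];
}

interface ParseResult {
  values: Record<string, string>; // fields the model extracted
  clarifications: string[];       // follow-up questions for the rest
}

// Hypothetical LLM call: prompt in, JSON text out.
declare function callModel(prompt: string): Promise<string>;

async function parseFreeForm(
  input: string,
  schema: FormSchema
): Promise<ParseResult> {
  const fieldList = schema.fields
    .map((f) => `- ${f.name}: ${f.description}`)
    .join("\n");

  const prompt =
    `Extract the following fields from the user's message as JSON ` +
    `shaped like {"values": {...}}. Omit any field you are unsure about.\n` +
    `Fields:\n${fieldList}\n\nMessage: ${input}`;

  const raw = await callModel(prompt);
  const values: Record<string, string> = JSON.parse(raw).values ?? {};

  // Any field the model skipped or couldn't parse becomes a question.
  const clarifications = schema.fields
    .filter((f) => !(f.name in values))
    .map((f) => `Could you clarify your ${f.description}?`);

  return { values, clarifications };
}
```

The loop then repeats: the user answers the clarifying questions in the same free-form box until every field is filled.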
For application onboarding, it might allow users to have a customized journey — what are you interested in learning about? How will you use the application? How familiar are you with the application already?
Instead of dealing with a raw chat interface, the model could emit richer elements: input boxes, sliders, selects, or other interactive elements.
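One way to keep that tractable (a sketch, not a prescription) is to give the model a small, fixed vocabulary of elements it is allowed to emit, and have the app render whatever JSON comes back:

```typescript
// A small vocabulary of UI elements the model may emit. The model
// returns JSON matching one of these shapes; the app renders the
// matching component instead of plain chat text.
type GeneratedElement =
  | { kind: "text_input"; label: string; placeholder?: string }
  | { kind: "slider"; label: string; min: number; max: number }
  | { kind: "select"; label: string; options: string[] };

// Stand-in renderer; a real app would map these onto React/Vue/etc.
function render(el: GeneratedElement): string {
  switch (el.kind) {
    case "text_input":
      return `<label>${el.label} <input placeholder="${el.placeholder ?? ""}" /></label>`;
    case "slider":
      return `<label>${el.label} <input type="range" min="${el.min}" max="${el.max}" /></label>`;
    case "select": {
      const opts = el.options.map((o) => `<option>${o}</option>`).join("");
      return `<label>${el.label} <select>${opts}</select></label>`;
    }
  }
}
```

Constraining output to a closed set of element types is what makes the rendering conditional rather than free-form: the model chooses among components, it doesn’t invent them.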
One more thought: generative interfaces might be best served by a “design in the small” paradigm. That is, instead of trying to generate an entire application, they might be more useful generating a single element or piece of the UI. Maybe a foundation of generative UI components that you can assemble.
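Concretely (and still speculatively), that could look like a single generative slot in an otherwise hand-built page, reusing the `GeneratedElement` vocabulary and the `callModel` stand-in from the sketches above:

```typescript
// "Design in the small": the page layout stays hand-written, and
// only one slot is filled by the model based on application context.
async function generativeSlot(context: string): Promise<GeneratedElement> {
  const raw = await callModel(
    `Given this application context, emit ONE UI element as JSON ` +
      `matching {"kind": ..., "label": ...}:\n${context}`
  );
  return JSON.parse(raw) as GeneratedElement;
}

// Usage: everything else on the page is static; only this control
// is generated at runtime.
// const control = await generativeSlot("user is adjusting export quality");
// document.querySelector("#dynamic-slot")!.innerHTML = render(control);
```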