Shaping creativity with AI: when judgment becomes the product


This article is part of Framna’s Forwardism in Focus series, reflecting on what meaningful progress looks like in the way we design, build, and evolve digital products.


AI has quickly become part of everyday creative practice. It is used to explore ideas, test directions, and reduce the time it takes to move from thought to form. In many teams, it has already changed how early work happens.
What is less obvious is how it changes the way decisions are made. AI increases speed and optionality, but most teams are not set up to decide well at that speed.


AI makes it easy to move forward. That is its strength. It is also where new risks emerge — not because the technology is flawed, but because it removes friction from decisions that used to demand effort.


When everything is easier to create, the differentiator is no longer output. It is intent.
AI does not threaten creativity by replacing it. It threatens creativity by making shallow decisions feel like progress.

Speed without friction changes the work

AI removes friction from early creative work. Ideas surface instantly. Variations are endless. What once required effort now happens by default.

"Progress is vast and fast within AI, but it’s often maybe less impressive than we think."

Tobias Ahlin

Principal Design Engineer, formerly at GitHub

The issue is not whether AI is capable. It is how quickly its output can be mistaken for insight.
When everything looks coherent, it becomes harder to tell which ideas are strong and which are simply well-articulated. Speed compresses the time teams would normally use to question assumptions, challenge direction, or sit with uncertainty.


The consequences are tangible: 

Poor prioritization, because everything appears viable

Premature convergence, because polished options feel finished

Over-investment in ideas that look convincing but lack depth

The work moves faster. The thinking does not automatically go deeper. AI increases options. It does not improve decision quality.

Creativity shifts from making to learning and judging

As AI takes on more of the generative work, the role of the human changes. Creative effort moves away from producing material and toward understanding, evaluating, and expanding it.

“AI has changed my creative process, I think, in primarily two or three ways. One, I use it a lot to critique myself — to explain the problem and see if it holds up, or try to find new angles by getting feedback. But much more excitingly, it’s incredibly good at teaching me new things. Because what I do with AI is I’m not afraid to lean into something that’s an adjacent specialty from where I’m situated right now.”

Tobias Ahlin

Principal Design Engineer, formerly at GitHub

This distinction matters. AI is not just useful for producing options. It is valuable because it lowers the cost of learning outside your core expertise. It allows teams to explore adjacent domains, test unfamiliar ideas, and pressure-test assumptions without having to become experts first.


That does not remove the need for judgment. It shifts where judgment is applied.


Creative quality now depends less on how much is generated and more on how clearly teams can answer difficult questions:

What are we optimizing for?

What are we willing to discard quickly, even if it looks good?

What deserves investment beyond exploration?

These are not creative exercises. They are leadership decisions. AI accelerates exploration and learning. It does not replace choice.

Familiar methods start to break

Some of the work discussed in the interview does not fit into established design or validation models at all. Projects like interpreting animal communication or working with multimodal signals introduce problems that humans themselves do not fully understand.
But this is no longer confined to experimental labs. AI pushes more teams into uncertainty earlier, even in commercial products. Features are launched before patterns are stable. Capabilities evolve faster than established frameworks.


In these cases, teams cannot rely solely on familiar techniques like scenario mapping, user testing, or incremental refinement. Feedback loops are slower. Definitions of success are less obvious.


The work starts with possibility rather than proof.


This demands stronger principles upfront. AI does not simplify this work. It exposes how much judgment has always been involved, and how much more is now required.

Measuring progress becomes harder, not easier

A recurring theme in the conversation is how poorly current metrics capture real capability — both for humans and for AI.


AI is often evaluated using simplified tests that reward speed and accuracy in narrow tasks. When AI performs well, it is tempting to conclude that it is “better” than humans. But this overlooks context, expertise, and the difficulty of real-world application.


When teams rely too heavily on these signals, they risk overestimating progress and underestimating the work still required. What looks like advancement may simply be measurement bias.


Better tools do not remove the need for judgment. They demand more careful interpretation.

Inclusivity introduces new responsibility

AI also opens the door to more adaptive and inclusive experiences. Systems can respond to individual needs in ways that were previously impossible.

“I think AI does bring a huge potential for actually improving inclusivity. If you think about the hyper-personalization trend and how we can now generate not just the content, but the interfaces on the fly, like what does that unlock? Well, each individual's user's needs can be met.”

Jenni Munroe

AI & Programs Lead at Google

But this potential comes with complexity. When experiences become highly individualized, coherence becomes harder to maintain. Decisions ripple differently for different users.
Inclusivity at this level is not a feature. It is an ongoing responsibility. Teams must be explicit about what varies, what stays consistent, and why.

AI increases reach. It also increases the consequences of unclear decisions.

This shift is already visible across product organizations. Teams that use AI well are not defined by their tools, but by how clearly they structure decision-making, responsibility, and learning around them.

In The state of product development 2026, we studied how leading teams are adapting their product development models in response to AI, where judgment still matters most, and what separates meaningful progress from noise.

Creativity now depends on clearer intent

As AI becomes more capable, creative work becomes less about execution and more about responsibility. Tools can generate ideas endlessly. They cannot define meaning, intent, or value.

“Design is not about Figma or the tools or the research. Design is about bridging the gap of what a technology, a trend can do and what humans interpret from it. And that ability is what the future of UX and AI is.”

Germán León

AI Maven & Founder at Helvetica Digital

That gap is where creative work now lives. It requires clarity, restraint, and judgment — not more output.

The teams that navigate this well are not the ones chasing speed, nor the ones resisting new tools. They are the ones willing to slow down at the right moments and be precise about what they are building and why.

What is Forwardism?

The tension described above is not temporary. It is structural. AI changes the conditions under which teams work. It removes friction from exploration, accelerates output, and makes progress feel immediate. But it does not automatically improve the quality of decisions behind that progress.


When tools make movement easy, intent becomes the differentiator.


Forwardism is Framna’s approach to building digital products under exactly these conditions. It begins by defining outcomes clearly before acceleration starts. It aligns teams around decisions that compound over time rather than short-term activity that simply looks productive.


Forwardism prioritizes clarity over volume. It challenges assumptions early, especially when AI-generated output appears convincing. It treats learning as part of delivery, not something postponed until after release. And it recognizes that products are never finished — which means decisions are revisited, refined, and strengthened continuously.


AI expands what teams can create. Forwardism ensures they remain precise about why they are creating it.
