The paradox of technology
How simplicity and complexity collide.
Artificial Intelligence (AI) brings both simplicity and complexity to our lives, offering streamlined solutions while requiring a deeper understanding of complex algorithms and intricate systems.
AI simplifies tasks and enhances convenience by providing streamlined solutions. From voice assistants that respond to our commands to smart home devices that automate our routines, AI has made our lives easier in many ways. This newfound simplicity comes at a price, however: users have to navigate intricate algorithms and AI systems and learn how to use and control them. They find themselves having to acquire new skills, adapt to AI systems whose behavior keeps changing, and deal with uncertain outputs.
Between simplicity and complexity lies the paradox of technology: the same technology that simplifies life by providing more functions in each device also complicates life by making the device harder to learn and use.
Is there a way to help our users understand AI without explaining how it works under the hood?
Conceptual Models
Conceptual models are like mental blueprints of a system. Where can I go? What can I do? What is happening? The answers come from simply looking at the interface and from past experience.
A good conceptual model allows us to predict the effects of our actions. When things go wrong, users need a deeper understanding, and that is exactly when a good model matters most.
A conceptual model is formed through many touchpoints: what the device looks like, what we know from using similar things in the past, what we were told by sales literature, salespeople, and advertisements, by articles, by the product website, and by instruction manuals. It is the sum of all the information the user has available.
One of the biggest opportunities for creating effective mental models of AI products is to build on existing models while teaching users the dynamic relationship between their input and the product's output. Start by thinking about how people currently solve the problem that your product will use AI to address. Then incorporate the old, familiar ideas into the new technology. Modify the user's existing conceptual model instead of replacing it. If the previous interface was a spreadsheet with formulas, keep it that way, but sprinkle some AI magic under the hood that automatically performs the calculations the user intends to do.
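To make the spreadsheet idea concrete, here is a minimal sketch. All of the names (`Cell`, `suggestFormula`, `fillCellFromIntent`) are hypothetical and stand in for whatever your product and model actually provide; this is an illustration of the contract, not a real API.

```typescript
// Sketch: keep the spreadsheet's conceptual model, add AI under the hood.
// `Cell` and `suggestFormula` are hypothetical names for illustration only.

interface Cell {
  address: string;   // e.g. "C2"
  formula?: string;  // the familiar, visible, editable spreadsheet formula
  value?: number;
}

// Hypothetical model call: turn a plain-language intent into an ordinary formula.
async function suggestFormula(intent: string, sheet: Cell[]): Promise<string> {
  // A real product would call a model here; the contract is what matters.
  return "=SUM(B2:B13)";
}

async function fillCellFromIntent(cell: Cell, intent: string, sheet: Cell[]): Promise<void> {
  const formula = await suggestFormula(intent, sheet);
  // The AI output lands inside the user's existing mental model:
  // a formula they can read, verify, and edit, not an opaque answer.
  cell.formula = formula;
}
```

The design choice is that the AI never bypasses the formula bar. The user's old model of "cells contain formulas" stays intact; the AI only fills it in faster.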
If the AI is well designed, the way the user understands your tool matches the way you built it. That alignment is your responsibility, not the user's. The goal is to design tools that communicate their purpose and functionality to users on their own.
A Case Study on Copilot

GitHub Copilot is auto-complete for programmers. While you are coding, it suggests completions for your current program.
What is the foundation they built upon?
Gmail. Gmail established the auto-complete pattern: the suggested text appears shaded, and the user accepts it by pressing Tab.
GitHub took this existing conceptual model of auto-complete and transferred it to coding. Users immediately know what to do and what to expect when they hit Tab.
A bad example exists in the same interface. Did you know that by pressing Option + Slash (macOS), you can cycle through different suggestions?
Neither did I. The interface offers no hint that multiple suggestions exist, so users would never think to go looking for the feature.
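The contrast between the two interactions fits in a few lines. This is only a sketch of the pattern, not Copilot's actual implementation; the state shape and shortcut handling are assumptions.

```typescript
// Sketch of the auto-complete interaction pattern (not Copilot's real code).
// The model's candidates are rendered as shaded "ghost text" after the cursor.

interface EditorState {
  text: string;
  suggestions: string[];    // candidate completions from the model
  activeSuggestion: number; // index of the ghost text currently shown
}

function handleKey(event: KeyboardEvent, state: EditorState): EditorState {
  // The familiar, discoverable part: Tab turns the shaded text into real text.
  if (event.key === "Tab" && state.suggestions.length > 0) {
    return {
      ...state,
      text: state.text + state.suggestions[state.activeSuggestion],
      suggestions: [],
      activeSuggestion: 0,
    };
  }
  // The hidden part: cycling works, but nothing on screen hints that more
  // than one suggestion exists, so this branch is rarely discovered.
  if (event.altKey && event.key === "/" && state.suggestions.length > 1) {
    return {
      ...state,
      activeSuggestion: (state.activeSuggestion + 1) % state.suggestions.length,
    };
  }
  return state;
}
```

Both branches are equally cheap to build. Only the first is backed by a visible cue, which is exactly why only the first becomes part of the user's conceptual model.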

Where to get started?
You can take GitHub's route: what interaction patterns already exist for your kind of solution? A code autocomplete can follow the same interaction patterns as a text autocomplete.
If an obvious interaction pattern doesn't exist yet, you have to dig deeper. How is the user currently solving the problem? Keep the elements that stay constant, sprinkle the AI under the hood, and alter only the elements the AI actually replaces. Think about how you can show users what they can do with the AI and how it works under the hood.
If you need to introduce something entirely different, show through input-output pairs how the AI alters the interface, so that users can build up a conceptual model.
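One way to show such pairs is to put worked examples in the empty state, before the user has typed anything. A minimal sketch follows; the onboarding panel, the example texts, and the rendering function are all made up for illustration.

```typescript
// Sketch: teach a new conceptual model with input-output pairs in the empty state.
// The examples and the rendering function are hypothetical, for illustration only.

interface ExamplePair {
  input: string;  // what the user types or selects
  output: string; // what the AI visibly does to the interface in response
}

const onboardingExamples: ExamplePair[] = [
  { input: "Summarize this thread", output: "a three-bullet summary appears above the thread" },
  { input: "Turn these notes into a table", output: "the notes become an editable two-column table" },
  { input: "Make the tone more formal", output: "the selected paragraph is rewritten in place" },
];

// Shown before the first interaction, so the relationship between input and
// output is visible before the user has to risk their own content on it.
function renderEmptyState(examples: ExamplePair[]): string {
  return examples.map(e => `"${e.input}" -> ${e.output}`).join("\n");
}

console.log(renderEmptyState(onboardingExamples));
```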