Generative AI and User Experience: The Good, The Bad, and The Ugly
Hype around Generative AI has exploded – but we’re only beginning to understand the implications for software products, user experiences, and the risks of excluding people.
The good: meaningful results for everyone
For most of the history of software, using any application meant understanding the graphical user interface (GUI) and having some concept of the underlying data or system. The new interaction model of prompt-based, conversational AI changes all that.
Any user can type a natural-language question and the AI will generate a meaningful response, often at an impressive level of quality and depth; users no longer need to learn the system. This is a far cry from the first generation of conversational user interfaces and natural-language chatbots, which often frustrated users with their lack of capability. Generative AI models are now able to fulfill that early promise. I can’t overstate how big a deal this is – not just in direct-to-consumer products, but in the vast world of enterprise IT products we use every day. Today, it’s difficult for employees to do what they need to do in internal systems:
66% of respondents said that using employer IT took more effort than it should – more often than weekly. (Gartner Employee UX Survey 2021, Q32, n=2,244)
Generative AI and large language models (LLMs) enable a significant shift in the capabilities of these systems. Instead of learning Power BI, Tableau, or Salesforce, an employee can simply ask the system – “show me the Q4 sales numbers” – and be presented with relevant data and charts.
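Under the hood, this pattern usually means translating the natural-language request into a structured query that the reporting backend already understands. A minimal sketch of that translation step, using a toy rule-based parser as a stand-in for the model (the function name and query shape are illustrative, not any real product’s API):

```python
import re

def translate_request(question: str) -> dict:
    """Toy stand-in for an LLM translation step: map a natural-language
    request to a structured query a reporting backend could run."""
    quarter = re.search(r"\bQ([1-4])\b", question, re.IGNORECASE)
    metric = "sales" if "sales" in question.lower() else "unknown"
    return {
        "metric": metric,
        "quarter": int(quarter.group(1)) if quarter else None,
    }

query = translate_request("show me the Q4 sales numbers")
print(query)  # {'metric': 'sales', 'quarter': 4}
```

The point is the division of labour: the user supplies intent in plain language, and the system, not the user, carries the burden of knowing the query structure.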
But the GUI isn’t going away. Many users still need the precise control and fine adjustments a GUI provides – especially those who create content rather than just manipulate or consume it. For the majority of users in the majority of use cases, though, this is the future.
The bad: the “hit and hope” experience
With this new LLM paradigm, it’s sometimes hard to get exactly what you want. Crafting prompts is a skill that takes practice, and the process is often opaque: which prompts work better than others? The large language model that powers many forms of generative AI is a “black box”, a closed system that users have to tease the right result from. This should become easier over time: many leading models already supplement users’ inputs with a “meta prompt” designed to add guard rails and improve accuracy.
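A meta prompt is essentially fixed instruction text wrapped around whatever the user typed, before it reaches the model. A minimal sketch (the wording of the guard rails and the `build_prompt` helper are hypothetical, not any vendor’s actual implementation):

```python
def build_prompt(user_input: str) -> str:
    """Wrap raw user input in a fixed meta prompt that adds guard rails.
    The instruction text here is illustrative only."""
    meta_prompt = (
        "You are a helpful assistant. Answer only from the provided data, "
        "say you don't know when unsure, and refuse harmful requests.\n\n"
    )
    return meta_prompt + "User question: " + user_input

print(build_prompt("show me the Q4 sales numbers"))
```

Because the user never sees this wrapper, the system can compensate for vague or risky input without adding any burden to the interaction.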
Writing and adjusting prompts will become more important over time – just as writing good Google searches has been highly advantageous for the last decade.
The ugly: the users who get left behind
Writing prompts obviously requires you to compose specific sentences in English (or another language the model supports) – but what if you can’t write well? Or you write in a language the LLM doesn’t understand? 30 million adults in the United States have a “below basic” level of literacy, with 11 million of those classed as non-literate1.
The picture is similar across the world, varying from country to country: a significant proportion of people can’t write well-formed sentences. Right now, they can click and tap their way around accessible websites and apps to perform important tasks. If those interfaces become solely prompt-based, these users will be at a disadvantage, excluded from vital experiences that form part of everyday life. This is another area where meta prompts can help refine user input into a more meaningful question for the generative AI to tackle, but it remains a major usability barrier compared to a GUI.
Don’t throw accessibility away
As user experience professionals navigate this new era of computer interfaces, we must take care. In our excitement at the new possibilities we must not throw away the progress we’ve made in building friendly, inclusive experiences that benefit the widest possible section of society.
1 “National Assessment of Adult Literacy” (NAAL), National Center for Education Statistics, 2003.