ENGL 8122  ※  User-Experience Research & Writing



AI and UX: Why Artificial Intelligence Needs User Experience

  • p. vii "To put it simply, we believe experiences matter. We want to make the world a little easier for people." [This should probably be the primary goal of UX, but making the world "easier" can be troubling when ease of use translates into addiction issues because the products are too easy to use and impossible to stop using.]
  • p. viii "Humans are impatient and fickle creatures; unless they are going to see the benefit very early on, they often will not invest the time or attention needed to appreciate the AI brilliance."
  • [When working with genAI, I think many people find themselves disappointed with the system's initial outputs, not realizing that the real source of their disappointment is probably the input they created. Once people learn how to iterate on and improve their prompts, and exercise some patience and perseverance, I think they can discover that "AI brilliance" the authors speak of.]
  • p. viii "If AI is to be successful, the design matters. The UX matters. How people would interact with AI matters. We believe UX can help; that's the main point of the book!"
  • p. 9 "developers need to think about what the experience of using their AI product will be like—even in the early stages, when that product is just a big idea."
  • p.11 "The next wave of AI needs to be designed with a UX framework in mind or risk the further limiting of acceptance. Good UX for AI applications will propel growth."
  • p. 16 "For any product, whether it has AI or not, the bare minimum should be that it be usable and useful. It needs to be easy to operate, perform the tasks that users ask of it accurately, and not perform tasks it isn't asked to do. That is setting the bar really low, but there are many products in the marketplace that are so poorly designed where this minimum bar is not met."
  • p. 17 "Our perception of a product is the sum total of the experiences that we have with that product. Does the product deliver the value we had hoped? Our willingness to "trust" the product hangs in that balance."
  • [I think some users of AI may have higher expectations as to what AI should be able to do and how easy it should be to use. When it does not meet their expectations, they are unlikely to trust it or return to it. For younger users, this may be a result of taking certain technologies completely for granted. It is hard to be impressed by advanced computer technology when one has never known life without it.]
  • p. 40 (Licklider and Taylor, 1968) "In a few years, men will be able to communicate more effectively through a machine than face to face. That is a rather startling thing to say, but it is our conclusion."
  • p. 46 (From Stuart Card et al. 1983, The Psychology of Human-Computer Interaction) The user is not an operator. He does not operate the computer; he communicates with it to accomplish a task. Thus, we are creating a new arena of human action: communication with machines rather than operation of machines.
  • p. 47 . . . in today's world, most companies are hiring computer scientists to do natural language processing and eschewing linguists or psycholinguists. Language is more than a math problem.
  • p. 49 The continuous re-defining of AI: "Schank emphasized in 1991. . . that ‘intelligence entails learning,' implying that true AI needs to be able to learn in order to be intelligent."
  • [Interesting discussion of how the name of AI changed to "expert systems" after AI fell out of favor. Later terms like "neural networks" did the same thing to try to restoke interest in and funding for AI research. The lesson for UX is that once people have a negative experience with a product or technology, they are hard to win back, and changing the name can help.]
  • The outgrowth of UX from HCI: p. 50 "Where HCI was originally focused heavily on the psychology of cognitive, motor, and perceptual functions, UX is defined at a higher level—the experiences that people have with things in their world, not just computers. HCI seemed too confining for a domain that now included toasters and door handles. Moreover, Norman, among others, championed the role of beauty and emotion and their impact on the user experience. Socio-technical factors also play a big part. So UX casts a broader net over people's interactions with stuff. That's not to say that HCI is/was irrelevant; it was just too limiting for the ways in which we experience our world."
  • [I was concerned that the 2022 release of ChatGPT (after this book was published) would render many of the authors' points too dated to be helpful. However, they mention developments in technology that are significant and similar enough to stand in for the impact of today's LLM systems.]
  • p.51 "We come into contact daily with things we have no mental model for, interfaces that present unique features, and experiences that are richer and deeper than they've ever been. These new products and services take advantage of new technology, but how do people learn to interact with things that are new to the world? These new interactions with new interfaces can be challenging for adoption."
  • p.51 "For AI to succeed, to avoid another winter, it needs good UX."
  • p. 64 "Look at older product designs to find features that deserve a second chance." [Because many products with poorly thought-out UX design get shelved, it might be good to look back at those products to see which aspects can be salvaged and repurposed in a product with better UX design.]
  • Good lesson on what differentiated Alexa from Siri. Siri was a secondary feature of another product; Amazon's Echo (Alexa) was designed specifically to be a personal assistant.
  • Conversational Context seems to be a problem that today's LLMs have largely solved. They are now far better at remembering and drawing from earlier ideas in a conversation. They seem to offer appropriate responses to follow-up questions.
  • p. 67 Modern LLMs can follow three of Grice's four maxims of communication, but they struggle mightily with the truth maxim (quality). Interestingly, I prompted ChatGPT to give me a summary of a book that was released in 2023 (and therefore should not be in its training data), and it wrote a plausible summary that was somewhat accurate. Its lie was pretty good. In a separate conversation I asked it for a summary of the same book but added "do not make one up if you do not have access to information about the book," and it responded that it could not give me the summary; instead it gave me an idea of what a hypothetical summary of that book could look like—what it should have done all along. (A sketch of this two-prompt experiment follows in the next note.)
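  • [A minimal sketch of the two-prompt experiment above, assuming the openai Python client. The model name and book title are placeholders I chose for illustration, not anything taken from the book or from my actual session:]

        # Rough sketch of the hallucination test described above.
        # Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
        from openai import OpenAI

        client = OpenAI()

        def ask(prompt: str) -> str:
            # Send a single-turn prompt and return the model's reply text.
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model name; substitute whatever is current
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content

        book = "the 2023 book in question"  # placeholder title

        # Variant 1: plain request; the model may fabricate a plausible summary.
        naive = ask(f"Give me a summary of {book}.")

        # Variant 2: the same request with an explicit guard against fabrication.
        guarded = ask(
            f"Give me a summary of {book}. "
            "Do not make one up if you do not have access to information about the book."
        )

        print("Without guard:\n", naive, "\n\nWith guard:\n", guarded)

  • [The only difference between the two calls is the guard sentence, which in my test was enough to flip the model from a confident fabrication to an honest refusal.]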
  • p. 77 "Ultimately, engaging users with an AI service is the end goal, and recommendation engines achieve this. . . Recommendation engines exemplify the ways in which AI might fit into a user experience. While only a portion of Spotify's recommendation engine is in fact an AI system, that AI system blends seamlessly with other computing and human elements to build an engine that proves valuable to users."
  • p. 78 "In 2010, Northwestern University researchers released StatsMonkey, a program that could write automated stories about baseball games.39 By 2019, major news outlets including The Washington Post and the Associated Press were using AI to write articles." [This is three years before the release of ChatGPT.]
  • p. 81 "Humans are capable of constructing coherent narratives that make sense to other human beings, evoke emotions in their audiences, convey subtextual messages, and even contain aesthetic beauty. AI can't do any of those things. It's difficult to quantify aesthetics." [This is in the chapter of the book about film making and creativity. I think that AI has advanced drastically in this domain over the last two years, to the point that it can do the things the authors say it cannot.]
  • p. 82 "In fiction, one author has created an AI program that automatically auto-completes a writer's sentences while writing science fiction stories, based on a corpus of science fiction stories.54 He envisions the program as a kind of co-author, which generates ideas that might spark the writer's human creativity. Tellingly, none of these three projects are widely used. AI in the arts is not quite ready for prime time yet." [It is now.]
  • p. 85 ". . . what can we do to improve AI through the data itself? What are the elements where we can have an impact on AI?"
  • p. 93 "Under Moore's Law, the number of transistors in a CPU doubles every 2 years, but in AI's case, computing power for AI took advantage of the massively parallel processing of a GPU (graphics processing unit). These are the new graphics chips associated with making video games smoother and the incredible action movies we see today. Massively parallel processing required to present video games made AI much, much faster. AI systems often took months to learn the dataset. When graphics chips were applied to AI applications, training intervals dropped to single days, not weeks."
  • p. 95 "Capturing behavior is the prerogative of UX and requires research rigor and formal protocols. What we learned is that UX is uniquely positioned to collect and code these data elements through our tested research methodologies and expertise in understanding and codifying human behavior." [AI runs on data, and the authors discuss the collection and use of data extensively.]
  • p. 98 ". . . talking about ethics in data. This is an area where AI has not developed fully. Companies are building AI not for foundational science, but for commercial advantage. The same sorts of issues that arise with bias in the culture also exist in the data. So the fear is that AI applications may have subtle—or even not-so-subtle—biases because the underlying data contain biases. . . There are no formal ethical standards or guidelines for AI. It is very much the proverbial "Wild West" where technology is being created without guardrails."
  • p. 109 "For many people, there's still a hesitance, a resistance, to adopt AI. Perhaps it is because of the influence of sci-fi movies that have planted images of Skynet and the Terminator in our minds, or simply fear of those things that we don't understand. AI has an image problem. Risks remain that people will get disillusioned with AI again."
  • p. 110 "Technology has become a commodity. What can set a product apart is good design. The same logic applies to AI-enabled products."
  • p. 112 "User-centered design (UCD) places user needs at the core. At each stage of the design process, design teams focus on the user and the user's needs. This involves a variety of research techniques to understand the user and is used to inform product design."
  • p. 115 "AI can be seen through the lens of how we look at the user experience of any product or application. AI is no different. To be successful, it must have the essential elements of utility, usability, and aesthetics."