Each week focuses on a central discipline of UX. Our goal in the first eight weeks is to learn enough to conduct a complete, if pilot-scale, study, from observations to recommendations, without IRB approval because we are not planning to publish anything; all of this research is for educational purposes only.
Prior to class, read over the week's work. Don't do any of the tasks in advance -- that will defeat their purpose, which is to mimic user research. These tasks are illustrative rather than performative: there are no right answers, only user insights, and researchers can't have insights if users are cribbing or trying to game the setup in order to show how smart they are, or hide their insecurities, or otherwise be somehow other than they are. As UX researchers, you will learn that getting people to act and think naturally, rather than as they think they should (or shouldn't), is a big part of the job.
Prior to class, familiarize yourself with what we will be doing and talking about. After class, do the homework assignments. Since you are blogging your responses to the assignments, everyone will be able to see everyone else's work. Do your own thing, but learn from and support each other.
There are several disciplines that have the word user in them: User-Centered Design, User-Interface Design, User-Experience Research, User Research, and Usability. There is also Student-Centered Pedagogy. None of these disciplines uses the word rhetoric, but all of them are audience-focused. They are, therefore, descendants of Aristotle's Rhetorica.
User Research is an expression used in marketing and sales. It refers to gathering information about clients, potential clients, and the competition's clients. While there is some generalization from the data collected and the creation of personas, for the most part UR is about actual people, from contact information to buying history. This information is often kept in a Customer Relationship Management (CRM) system. We are not interested in any of this here.
User-Centered Design is the discipline and practice of making products after thoroughly researching a target population's needs, expectations, goals, and financial situation, building a product with the audience in mind from the blueprints to the release event. This is the opposite of the traditional (pre-digital) production process, which begins with the product and has engineers build it as specified by top management (known as the waterfall process). If you build it, they will buy it. If they don't and there's still enough money, version 2.0 might take customer feedback into account. User-Centered Design reverses this, inductive rather than deductive: the water falls up.
User-Interface Design is essentially User-Centered Design applied to digital interfaces: the buttons on your remote control, the screens on your phone, etc.
Student-Centered Pedagogy is the discipline of teaching students rather than subjects. I don't teach Composition; I teach people how to think critically and write effectively. In practice, Student-Centered Pedagogy is very similar to User-Centered Design. Student success replaces profit. And each student is understood as an Individual rather than a representation of a type (a persona). Even though one might recognize "non-traditional" as a common type, a student-centered pedagogue realizes there are many ways to be "non-traditional" and therefore many different ways and means of learning need to be identified and addressed. Student-centered pedagogues are willing to consider that a high failure rate might be the assignment's rather than the students' problem. You can quickly identify someone who is anti-student-centered by using the word "client" in place of "student". If the moss-back rolls his or her eyes, they are old school, "Calculus is calculus, has been, always will. There are two kinds of students. Those who can learn calculus and those who fail calculus. The only pedagogical failure is in passing those who should fail."
Usability, which is where we are starting (finally!), is about gathering user data -- what a user does (behavior) -- and user feedback -- what a user reports (perception) -- to inform design decisions. User data can be obtained by the object itself as it is being used, with or without the user's conscious knowledge (and with only implied consent -- hence the ubiquitous cookie alerts you ignore). Every time you use a website, where you came from, what you click on, how long between clicks, the path you take from beginning to end, the equipment you are using, your location, your demographic -- all of this information is recorded on the server, and analysts can use that data to improve desired outcomes, like reducing the number of times items are abandoned in a shopping cart. This kind of data acquisition is designed into nearly every electronic product these days, from lightbulbs to toothbrushes (iCollege too). Gathering data outside user consciousness enables a level of objectivity. People behave more naturally when they don't know they are being observed. But knowing what a person is thinking as they do something, or what they think about it after the fact, can provide information that a data trail alone cannot. Knowing that 20% of users didn't do something doesn't tell you whether they didn't see it, ignored it, saw it but were distracted before they could complete it, or whether something you can't imagine caused the observed effect.
In order to know what users are thinking and feeling, you have to interact with them, via questionnaires and interviews, but that can be time-consuming, especially the process of tracking down willing participants. In-house usability research, observing the people who make the product use it, is easier to conduct than finding willing outsiders, but insider knowledge also creates problems (the curse of knowledge and the problem of Easy Once You Know, or EOK). Observing outsiders can help a team think beyond its own perspective, which can greatly enhance design decisions.
How Usability differs from User-Experience Research will be the subject of today's class. In essence, Usability is about objectivity, what the universal user does. User-Experience Research is about subjectivity, what a single user experiences while doing, or inter-subjectivity, what a type of user experiences while doing what they do in the way they tend to do it.
To get a clearer understanding of Usability and UX, we are going to do a light-weight UX experience and then contrast that with a Usability test we won't actually do.
I have formatted the Keywords screen in two different ways. The question is, which version do you (the users) prefer?

Human subjects-based research is typically hypothesis driven. A hypothesis is a statement that can be proven true or false by some community-accepted means. You can phrase a hypothesis as a question, as we did here. Stated as a hypothesis, our UX question might be: Users prefer Version B over Version A. We have ideas about readability, specifically that text-heavy screens are overwhelming, that underwrite our expectation, but we are going to prove or disprove it with actual user impressions. Were we seeking Usability rather than UX, our hypothesis would be something like: Version B allows users to find a definition more quickly than Version A.

I would like each of you, on your own, to compare the two and identify which you prefer, if in fact you have a preference. You might make a list of pros and cons or just write down what occurs to you as you use the two screens. I'm not going to ask you to advocate for one, just to share how you arrived at your preference. There is no right or wrong answer, and you are not being graded or judged in any way. I will give you 10 minutes to use and evaluate the two screens. Come back when you are done. Please mute your mike. Why did I ask you to decide before we discussed? Several reasons. Quiet people get ignored in group discussions while chatty people often have more influence than their ideas might warrant. Most people don't know how to listen; they are composing a response instead. And then there's "The Abilene Paradox." Now let's discuss.
Given that we are the only users of this site, and I have control over the design, our collective decision based on shared experience is actionable. You tell me what we want, and I make it so.
What we just did is a miniature version of User-Experience Research. We put users (participant observers, specifically) in front of two different versions of something and asked them to describe how the experience of using each led them to a preference (a subjective assessment). We then voted to identify the majority. If the vote wasn't unanimous, then enacting the will of the majority inconvenienced? irritated? disadvantaged? (level of harm) a minority. Notice I wrote a and not the minority. There may be more than one way to feel left out. We might have spent some time trying to build a consensus, adjusting each of us to the decision and compromising a little on the decision as well until we were all a bit happy and a bit unhappy with the results. The differences between Keywords A and Keywords B are so trivial as to make this whole discussion almost entirely theoretical. Still, I hope it correctly illustrates a kind of UX practice.
What we just did was UX and not Usability because we didn't objectively analyze our preference. We didn't verify that what we felt was a reflection of something other than subjective experience. We didn't think outside ourselves because we didn't have to. If we wanted to share this website with the wider world, it would probably be a good idea to think outside ourselves in case we don't actually think and learn (the purposes of the website) the way other people do.
Is there a way we might discover which Keywords is objectively better?
Here is a breakdown of some of the thinking involved. These bullets are not in order and you might jump back and forth. Hopefully you will also think of more questions that need to be asked:
Having thought through all of this and whatever occurred to us along the way, we would survey our users to identify their use cases, that is, the different ways to "read" keywords. We would then choose a design that best accommodates all of those ways, or, if we have to choose one, the one that works best for the most common way of reading and hampers the others the least. Then we would Usability test our choice, to verify that our users can do what they want to do as efficiently (fast) as possible. Once we had the data -- 12 users with 3 trials and an average speed of T -- then we would decide if our design should stand or be tweaked and re-tested.
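To make that data step concrete, here is a minimal sketch, in the kind of browser-friendly JavaScript this site nudges you toward, of how you might compute the average completion time T. The timings are made-up placeholders, not findings:

// Task-completion times in seconds: one row per participant, one column per trial.
// Hypothetical numbers; in the scenario above there would be 12 rows.
const trials = [
  [42, 35, 31],  // participant 1
  [55, 48, 44],  // participant 2
  [38, 33, 30]   // participant 3
];
const allTimes = trials.flat();
const averageT = allTimes.reduce((sum, t) => sum + t, 0) / allTimes.length;
console.log(`Average completion time T = ${averageT.toFixed(1)} seconds`);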
We could write up and share this miniature research project using the industry standard format, a case study. A case study is a multimodal narrative about the research that led to a design decision and how the implementation of that decision improved people's lives and thrilled the client who commissioned the research. We could, but we aren't going to because we don't yet know enough about case studies. Might make a good topic for a final project. You have users. You have access. Might be too simple. Maybe not. You could have 100% participation. Hmm. Worth pondering perhaps.
Read the Cases section of this website, spend some time reading and looking at each example case and thinking about what characteristics they have in common.
Search for UX case studies and note one you find especially interesting or significant or emblematic of all case studies. The criteria for your preference are up to you. We just want more cases to look at.
Based on your thinking and the outlines provided by ChatGPT and Copilot, design your own preliminary case study outline. What parts do you think are the necessary core of a case study? You might like to consult one or more of the other AIs.
This isn't exactly a bibliography. These links take you to the highlights I made when I read these books. Ideally they will give you the gist, enough to decide if you want to read the whole thing or just note a few salient quotations, make a gist of a gist, in other words.
No Class
You need a notebook, digital or analogue, for today's class because I want you to make field notes as we walk through a couple of different kinds of Usability tests. Field notes are a significant source of primary data. Preplanning and organizational ideas tend to influence what data you record. So how you take notes can determine what notes you take and therefore what you can learn.
PerplexityAI: Keeping Field Notes in User Experience Research
I asked ChatGPT to build a prototype app. We are going to use it as an object of Usability analysis. We are what is called a convenience sample, that is, a subset of a population chosen simply because we are available. We were not selected, and therefore we may or may not be representative of the relevant population. However, we are regular app users. We have some ideas about what we want and don't want. So we are at least relevant even if we don't know if we are representative.
Let's start with a "Five Second Test." A Five Second test is a Usability technique to capture how quickly a screen communicates its purpose and how to use it.
When you click on the next link, you will see an image of an app for five seconds. You don't have to do anything. Just look and process however you are inclined to process. This is not a test of you in any way.
Now, please answer the following questions in your field notes:
Let's compare notes.
If you are interested in learning a bit more about 5 second testing, a company called Lyssna offers a nice explanation and a free service to facilitate your testing and learning. They rather elegantly describe their free tier as being, "For small teams starting their experience research journey."
If you want to build your own 5 second test, all you need is an image on a server and this HTML code: Well that and test subjects, of course.
As you may have already noticed, I am trying to nudge you toward learning HTML/CSS and building web interfaces.
For the pre-test part of this Usability experience, please make a couple notes about to-do lists in general.
The goal of this test is to learn from you (a convenience sample of potential users) how to improve a prototype. We aren't looking for validation or evaluation and this test has no reflection on you. We simply want to know what you think as you look and use. When you click on the link, the screen will stay in place. Use it as you see fit and make a few notes about how well it worked for you and what would make it better. Ten minutes doing and thinking should give you enough data for some notes.
If you abhor to-do lists, you can make a few notes about why. They will be useful when we talk about personas in a few minutes.
Let's discuss.
In rhetoric the word heuristic means invention, the process of coming up with things to say. In contemporary language, a heuristic is a shortcut, a general guideline that works effectively for the most part most of the time.
Heuristic analysis is the Usability practice of having informed users -- people who know the 10 heuristics -- use a product to see where it fails to conform to the standard pattern. It's a quick and typically cheap kind of quality control. If you design using industry standard patterns, you can presume your designs are usable, though you should verify by asking experts to assess their conformity.
Jakob Nielsen, one of the OGs (cringe?) of usability, identified 10 usability heuristics: visibility of system status; match between the system and the real world; user control and freedom; consistency and standards; error prevention; recognition rather than recall; flexibility and efficiency of use; aesthetic and minimalist design; help users recognize, diagnose, and recover from errors; and help and documentation.
Please fill out the To Do Heuristic Usability form.
Post-engagement notes -- compare your list of potential improvements to the heuristic assessment form data. What did the Heuristics inventory suggest that hadn't occurred to you as you just looked and thought about the To-Do app? How did the Inventory fail to identify the improvements that occurred to you?
If we were to put other people in front of this software, we might first want to screen our population, to identify different types of users so as not to lump different types together. What problems might lumping cause?
Overview: UX & Usability (Week 1)
In-Class Activity: A UX Experience
Keywords A
Keywords B
ChatGPT: The Abilene Paradox occurs when a group of people collectively decide on a course of action that is counter to the preferences of many (or all) of the individuals in the group. It involves a common breakdown of group communication in which each member mistakenly believes that their own preferences are counter to the group's and, therefore, does not raise objections to a certain action.
The paradox is named after an incident that took place in Abilene, Texas, which was described by management expert Jerry B. Harvey in his 1974 article "The Abilene Paradox: The Management of Agreement." In the anecdote, a family agrees to take a trip to Abilene despite none of them actually wanting to go; they each believe the others want to go and thus go along with it to avoid conflict.
The Abilene Paradox is often used to illustrate the pitfalls of group decision-making processes, highlighting the importance of properly managing agreement and ensuring that all members of a group are able to voice their true opinions and preferences. It's a cautionary tale against "going along to get along" when it leads to actions that are not actually supported by the group as a whole.
Cognitive Walk Through of a Usability Test
BTW
Since I first drafted this lecture, I added the "Random UX Term of the Day" function to the index file. I did this partially because it occurred to me how to do it -- with an assist from ChatGPT -- and partially because I think learning the technical vocabulary is a significant part of learning a discipline and practice, so why not print a UX term on the front door every time someone walks up? Well, ego is a bad motivation for design; maybe it's ok for art, but if others have to use the thing that gets made, ego is a design flaw. On the other hand, the pedagogy is sound enough, I think. If you dislike the pedagogical application of the generally irritating convention of a popup screen, bookmark www.gpullman.com/8122/syllabus.php instead of www.gpullman.com/8122.
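In case you are curious how such a function might work, here is a minimal sketch, assuming a small hypothetical glossary (the real index file's term list and markup differ):

<div id="uxTerm"></div>
<script>
// Pick a random term from a hypothetical glossary and print it on the page.
const terms = [
  ["Affordance", "a property of an object that suggests how it can be used"],
  ["Heuristic", "a general guideline that works for the most part most of the time"],
  ["Persona", "a research-based representation of a segment of an audience"]
];
const [term, definition] = terms[Math.floor(Math.random() * terms.length)];
document.getElementById("uxTerm").textContent = term + ": " + definition;
</script>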
Homework -- Assignment 1
Deliverable(s)
Keywords
Books worth skimming
Martin Luther King Day (Week 2)
Usability Testing (Week 3)
Keeping field notes in user experience research is crucial for capturing insights, observations, and details about user interactions. Here are some tips for someone new to this process:
Prepare Beforehand
Determine what you need to deliver and prepare a checklist of important things to take with you.
Take Detailed Notes
Make sketches, take photos, and write down observations. Even if you're allowed to make recordings, written notes can be invaluable.
Organize Your Notes
Organize your notes and data as you collect them. This will speed up user research analysis dramatically. Consider using a consistent naming convention and folder structure for your data.
Use a Dedicated Notetaker
If possible, assign a dedicated notetaker for your research sessions. This allows the researcher to focus on the participant and the conversation.
Debrief After Each Session
Take 15 minutes after every session to jot down your main takeaways and impressions. This will help you recall details and themes from the session during analysis later on.
Choose Your Note-Taking Medium Wisely
Whether you choose digital or paper for note-taking can depend on personal preference and the nature of the research. Some find that writing notes on paper aids retention.
Use Templates and Tools
Utilize note-taking templates and tools to make the process easier and more organized.
Tag and Group Your Notes
Tag your notes with common themes to help sort data into themes, trends, and patterns.
Share Your Findings
Once you have a group of research findings or key insights, share them with your team so everyone can make more informed design, product, and feature decisions.
Reflect and Interpret
Field notes should not only describe what you observe but also your interpretations and reflections on the relevance and importance of your observations.
Remember, the goal of keeping field notes is to capture rich, detailed information about user behavior, preferences, and pain points that can inform design decisions.
There are recording/transcription apps worth investigating if you plan to do UX at scale (Otter.ai, for example). And there are UX note templates and entire webservices (for a fee). For today, though, I just want you to free-form it. Write down whatever occurs to you as it occurs to you. Part of the homework will be revisiting and then posting your revisited notes so we can all learn from how each of us thought to do it.
In-Class Activities
Five Second Test
Instructions:
<meta http-equiv="refresh" content="5; url=https://www.example.com/yourhomescreen.html">
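If you want to see that line in context, here is a minimal sketch of a complete test page; the image file (app-screenshot.png) and the follow-up page of recall questions (questions.html) are hypothetical placeholders you would swap for your own:

<!DOCTYPE html>
<html>
<head>
  <!-- After 5 seconds, send the viewer to a page of recall questions. -->
  <meta http-equiv="refresh" content="5; url=questions.html">
  <title>Five Second Test</title>
</head>
<body>
  <!-- The screen being tested; visible for exactly five seconds. -->
  <img src="app-screenshot.png" alt="App home screen" style="max-width:100%">
</body>
</html>

Put your post-viewing questions on the follow-up page and the browser does the timing for you.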
Brainstorming with users
Instructions:
Heuristic analysis
Nielsen Norman Group
RESULTS of SURVEY
Responses for visibilitySystemStatus:
Responses for realWorldLanguage:
Responses for userControl:
Responses for industryStandards:
Responses for preventMistakes:
Responses for recognitionOverRecall:
Responses for flexibleDesigns:
Responses for minimalistDesign:
Responses for nonFatalErrors:
Responses for provideHelp:
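If you are wondering how a form gathers data under field names like these, here is a minimal sketch of a single item, reusing the visibilitySystemStatus name above; the question wording and the heuristics.php endpoint are hypothetical:

<form method="post" action="heuristics.php">
  <p>Visibility of system status: does the app keep you informed about what is going on?</p>
  <label><input type="radio" name="visibilitySystemStatus" value="conforms"> Conforms</label>
  <label><input type="radio" name="visibilitySystemStatus" value="fails"> Fails</label>
  <label><input type="radio" name="visibilitySystemStatus" value="na"> Not applicable</label>
  <p><textarea name="visibilitySystemStatusNotes" placeholder="Where and how does it fail?"></textarea></p>
  <button type="submit">Submit</button>
</form>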
Screening test subjects
Do you use task lists? (Yes or No)
- If no, end.
- If yes:
  - How often?
  - For what purpose(s)?
  - Do you use software?
    - If so, what is it?
    - If not, why not?
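A screener like this is easy to put on a web page. Here is a minimal sketch with the skip logic wired up in JavaScript; the field names are hypothetical:

<form id="screener">
  <p>Do you use task lists?
    <label><input type="radio" name="usesLists" value="yes"> Yes</label>
    <label><input type="radio" name="usesLists" value="no"> No</label>
  </p>
  <div id="followUps" hidden>
    <p><label>How often? <input name="frequency"></label></p>
    <p><label>For what purpose(s)? <input name="purposes"></label></p>
    <p><label>Do you use software? If so, what is it? If not, why not? <input name="software"></label></p>
  </div>
</form>
<script>
// Answering "no" ends the screener; "yes" reveals the follow-up questions.
document.getElementById("screener").addEventListener("change", (e) => {
  if (e.target.name === "usesLists") {
    document.getElementById("followUps").hidden = e.target.value !== "yes";
  }
});
</script>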
How many different kinds of users have we identified? What name should we give each group?
More on screening potential participants . . .
As we think about different kinds of to-do users, we are starting to think about personas, generalized user-types or models. Personas serve a couple of purposes. They give people working at a company a concrete sense of audience, that they are working for actual people and not just abstract, undifferentiated clients, or worse, for a product. Personas also give workers a sense of how each different kind of audience member thinks, their psychology, as it were. In the case of those people who use To-Do lists, there is a relevant psychology. I asked Perplexity to explain it and I thought it did a rather solid job:
The psychology behind keeping to-do lists is multifaceted and can be attributed to several factors. Research and experts suggest that to-do lists serve various psychological functions, including reducing anxiety, providing structure, and offering a sense of accomplishment[1][2][3]. They can help dampen anxiety about the chaos of life, give a plan to stick to, and serve as proof of what has been achieved[1]. Additionally, the act of making a plan to get tasks done can alleviate anxiety and free individuals from the distraction of unfinished tasks[1][2]. To-do lists can also be reframed as a set of miniature goals for the day, providing structure and reducing stress[2]. Furthermore, the rewarding aspect of completing tasks on a to-do list can lead to increased motivation and productivity[4][5]. However, it is important to note that to-do lists need to be flexible, and overly lengthy lists can be overwhelming and counterproductive[2][4]. Understanding the psychology behind to-do lists can help individuals utilize this information to enhance productivity and motivation[5]. Citations:
- The psychology of the to-do list -- why your brain loves ordered tasks -- The Guardian
- The psychology behind to-do lists and how they can make you feel less anxious | CNN
- Why We Continue to Rely on (and Love) To-Do Lists - Harvard Business Review
- The psychology behind to-do lists and why you should make one - The Daily Free Press
- The Secret Psychology on Why We Love Completing To-do Lists - Workast
Compiled by Perplexity at Perplexity.ai
Given this information, we might start to create a persona for each type: the anxious lister, the structure-seeking lister, the accomplishment-driven lister, the never lister. We could build each of these types into our re-design of the to-do app. But first we would need to find enough users to verify (accept, modify, or reject: beware confirmation bias) our thinking about the four relevant personas. We would interview to-do users (where would we find them?) to see how each fits or doesn't fit one of these personas, and then we would modify the personas accordingly until we think we have an accurate representation of our target population, all to-do listers. Then we would update the app to offer reassurance and praise for the anxious and accomplishment-driven (maybe various ring-tones) and perhaps more ways to organize items on the list for the structure-seekers.
The to-do list is probably as old as writing itself. One of the very first computer programs was probably a to-do list, and there are many available today. Making a new one doesn't make much sense. So unless we had an innovation in mind, the research process for a to-do list today would be: Let's make a to-do list. Oh, so many already exist. Can't improve on them. Never mind. Or maybe we find out they cost money so we decide to make our own.
Let's pretend we want to make one to sell or give as a lure to others. We send out a questionnaire to potential customers and ask: would you be interested in an app that would help you keep track of what you need to do? If so, what would you keep track of? Are there any particular features you would especially like? (market research questionnaire) Enough say yes and offer enough insight to send us in a particular, if still tentative, direction. We have an idea worth developing. Next we make a sketch (wireframe), just the basic structure. We show this to a few potential customers and ask for feedback. We might also show it in house. If we get notes, we redo the wireframe. Once we think we have a good starting place, we take the vetted wireframe to a graphic designer who makes a static image (low fidelity mockup). We put that lo-fi mockup in front of potential users. We watch them "use" it. Then we ask questions about their experience, what ways they might make use of such a thing if they could play around with one that actually worked, and perhaps ask for suggestions (UX). Then, given that data (task flows, user differentiation, i.e., personas), we go back to the designer and ask for an updated image. Then we take the update to the software engineer and they make a high fidelity mockup (appears to work) or an actual prototype (does work). Then we put a couple different kinds of potential users in front of the late-stage artifact (Usability and UX, which is where you came in), observe them use it, maybe ask them to think out loud as they use it. Perhaps follow up with interviews. Based on what we learned, we would update the prototype. Test it. Tweak it. Then ship it. Then ask for feedback from our customers, and so on.

There are now AI programs that can take a sketch like the one above left and turn it into a drawing like the one above right, and others that can take such a drawing and create the software it depicts. makereal.tldraw.com (docs), for example. There are also image-to-code converters like Locofy, for example, that can turn a Figma mockup into an app. The production aspects of User-Centered Design can be automated. What we still need is user input, user testing, and therefore UX.
Recap: UX in a nutshell
Write up your field notes from today's experiments -- the 5 second test, your using and brainstorming to-do session, the Heuristic Inventory form, and your after engagement reflections.
Design a mockup to-do version 2.0
If you're not familiar with to-do lists, here is another to-do list to compare features with.
You might also search for to-do list programs, but keep in mind your goal isn't to recommend one that already exists but to mock up your own idea about what a to-do list should look like.

Technical writers of all kinds need to be software literate. They need to learn tools quickly and choose tools judiciously. UXers commonly collect data via screen-recording software. A participant uses the object of analysis, perhaps thinking out loud as they go, while the software unobtrusively captures their face, voice, and whatever is happening on the screen. This data is valuable for many reasons, not the least of which is that it simplifies differentiating design flaws from technical flaws.
This week's assignment is to review a screen recorder in sufficient detail to let your classmates make an informed decision about whether or not to use it. You are not trying to sell the one you tested, nor diss it if you had a bad experience. Your goal is to help your classmates make an informed decision about the options out there.
Screen capturing can be very helpful, but it has to be easy to set up and easy to run. It also needs to create editable or taggable output since it is providing data. If we need a participant to run it on their machine in our absence, then it has to be very easy to install and just as easy to uninstall. It needs a one-button on switch, and it has to produce output small enough to email.
If we plan to sit participants down in front of our laptop, then the setup process could be a little more involved, but we will still want a one-button launch and output that doesn't take up tons of disk space. We will need at least one file for each participant, and that can add up quickly.
We will probably know more about what we want and need once we have used the product to capture a person doing something on screen and looked at the output. We will also need to spend some time actually playing around with the output, to see what can and can't be done with it. We might want to review the help files and tutorials (their quality and availability are criteria of evaluation). We might want to look for reviews already out there, to see if our candidate has an industry reputation of note.
Once each of you has posted your review, so week 5, we will compare notes. We don't have to pick a winner. Keep in mind that if you have gone to the trouble to learn how to use a bit of software, your inclination might be to stick with it, to preserve your investment. Beware the sunk cost fallacy.
We need to assign a software package for each of us to evaluate. We may need to find more options.
Review one of the following pieces of software and show us how you came to your evaluation. You can write your review as text, text and screen captures, a talking head video, or use the software itself to make a video.
George: Hey ChatGPT, Can you suggest some free screen and voice recorder software options?
ChatGPT: Certainly! There are several free screen and voice recorder software options available. Here are a few popular ones:
- OBS Studio (Open Broadcaster Software): OBS Studio is a versatile open-source software that allows you to record your screen and capture audio. It's widely used by gamers and streamers but can be used for various recording purposes.
- ShareX: ShareX is an open-source screen capture and recording software that offers a wide range of features, including the ability to record both screen and audio. It's highly customizable and offers post-capture editing tools.
- Flashback Express: Flashback Express is a user-friendly screen recorder that's free to use. It allows you to record your screen, webcam, and audio. The free version has some limitations but is suitable for basic recording needs.
- Screencast-O-Matic: Screencast-O-Matic offers a free version that allows you to record your screen, webcam, and voice. It's easy to use and suitable for creating tutorials and basic recordings.
- Apowersoft Online Screen Recorder: This is an online screen recording tool that's simple to use. It doesn't require any downloads or installations. It can capture both screen and audio, making it a convenient choice.
- Icecream Screen Recorder: Icecream Screen Recorder is a free and user-friendly tool for capturing your screen and adding voice narration. It also offers basic editing features.
- Loom: Loom allows you to record your camera, microphone, and desktop simultaneously. The free plan offers basic recording and sharing features. Useful for teams, educators, and anyone needing to create quick, shareable video content.
- CamStudio: CamStudio is a desktop software for Windows computers that can record all on-screen video and audio activity, creating industry-standard AVI video files and optionally turn those into lean, mean, bandwidth-friendly Streaming Flash videos (SWFs) using its built-in SWF Producer.
Remember that the availability of features and ease of use may vary from one software to another. It's a good idea to try out a few of these options to see which one best suits your needs and preferences.
Two other options:
We need at least one more because there are 11 of us in the class.
It might be helpful to offer a feature table:
Feature | Rating
Free | Yes / No / Kind of
One-button install | Yes / No / Kind of
One-button launch | Yes / No / Kind of
Lightweight output | Yes / No / Kind of
Editable output | Yes / No / Kind of
Taggable output | Yes / No / Kind of
Help | Yes / No / Kind of
Tutorials | Yes / No / Kind of
Positive industry reputation | Yes / No / Kind of
What else? | Yes / No / Kind of
View source code, copy and paste, to get code you can edit for inclusion on your blog.
We could combine all your tables and thus create a decision matrix.
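Here is a minimal sketch of how that combination might work, scoring Yes as 1, Kind of as 0.5, and No as 0 and summing per tool; the tool names and ratings are hypothetical placeholders, not findings:

// Combine feature-table ratings into a simple decision matrix.
const score = { "Yes": 1, "Kind of": 0.5, "No": 0 };
const reviews = {
  "Recorder X": { free: "Yes", oneButtonLaunch: "Kind of", editableOutput: "Yes" },
  "Recorder Y": { free: "Kind of", oneButtonLaunch: "Yes", editableOutput: "No" }
};
for (const [tool, features] of Object.entries(reviews)) {
  const total = Object.values(features).reduce((sum, rating) => sum + score[rating], 0);
  console.log(`${tool}: ${total} of ${Object.keys(features).length}`);
}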
We are engaging in exploratory research this week. We don't have a hypothesis we are trying to disprove. We are just practicing gathering data to learn how to do it and what to do with it once we have it. We will need live subjects, though since this is just practice and we have no plan to publish, we don't need IRB approval. We will, however, obtain informed consent. We will, in other words, make this as close to real as a strictly practice session can be.
You need an object for analysis. You could use the ChatGPT ToDo app from two weeks ago or one someone else made. Or, since we will be using iCollege's gradebook to discuss personas, you might use it.
For a subject, you need a friend, relative, or classmate who is willing to help you out. They need to know what they are doing, why, and how long it will take. So you need to figure all that out before reaching out. I think we should agree not to use each other, because we are a special class of subject, being as we are all in ENGL 8122. That fact makes us different in potentially significant ways.
How do you want to proceed?
George: Hey ChatGPT, You are a User Experience Research expert. You need to teach some new researchers about screen recording as a way of gathering data about a user using an app. Provide an introduction to the process and then a list of 5 best practices. Please format in HTML.

ChatGPT: Sure, here's an HTML-formatted introduction and best practices guide for using screen recording in user experience research:
Introduction to Screen Recording in UX Research
Screen recording is a valuable tool in user experience (UX) research, providing a direct view of how users interact with an app. This method captures the user's actions on the screen, including clicks, scrolls, and navigation patterns, offering insights into user behavior, preferences, and challenges. It's particularly useful for understanding the usability of an app and identifying areas for improvement.
Best Practices for Screen Recording in UX Research
- Obtain Consent: Always secure explicit permission from users before recording their screen and audio. Explain the recording's purpose and how it will be used.
- Focus on Key Tasks: Guide users to perform specific tasks that align with the research objectives. This approach ensures focused and relevant data.
- Encourage Continuous Verbalization: Instruct users to verbalize their thoughts, feelings, and reasons for actions continuously during the session, as part of the think-aloud protocol.
- Avoid Influencing the User: Let users interact with the app naturally. Minimize guidance to prevent influencing their behavior and ensure authentic feedback.
- Analyze for Insights: Post-recording, analyze the screen actions and verbalizations for patterns that reveal usability issues or areas for improvement.
After reviewing what ChatGPT offered, I added "Please do that again but include some advice about think aloud protocol." It gave me what it gave before plus the Encourage Continuous Verbalization bullet.
I would add a couple of things to the AI's advice. When you are obtaining consent, you need to tell participants who you are, what you are doing, what you are asking them to do, how long it will take, how you are collecting the data, what you will do with the participant's data (how it will be analyzed, but also how it will be stored, for how long, and who might have access to it), and what's in it for the participant, or otherwise why they would want to do what you are asking them to do. While you are drafting this, though you are practicing with a friend and so will likely be very informal, keep in mind you plan to do this on a wider scale, so you might want to draft a script that is far more professional than friendly, just for practice.
If you are planning to have more than one participant, you need to ensure consistency of context. Each participant should get the same instructions and the same information, and have the same amount of time in the same conditions to do what you are asking them to do. Each participant should have the same context. You should write out a preamble and explanation of the process and read (perform) it in the same way for each participant. You need to avoid context interference.
You should also have a thank you message after the event, and it should be the same for each participant.
Enlist a friend (or 2) to spend 10 (?) minutes using an app while you record them using it. Don't tell them anything about the software, just the process, that you will turn on a screen recorder, record what they do and say for ten minutes and then ask them some post-experience questions. Reassure them this has nothing to do with them personally; it is not a test of their abilities or intelligence or anything else. This is just about the software and there are many testers contributing data so no pressure of any kind. Remind them to speak freely as they use the software.
After the recording, do a post-engagement review: play the video back and ask the participant to comment whenever they feel like it. Stop the video where they want to speak and record what they said and where they said it.
Post-engagement interview -- how did they feel about the whole experience? Have they any advice for you? You will not use any of this advice in subsequent iterations of this test unless what they found is a fatal flaw. You want each iteration to be done in an identical fashion. But you also want to learn over time, so the next time you do something like this, use what they taught you.
A blog post in which you share your lab notes -- including perhaps screen captures of the recording -- and discuss the experience. What did you learn? Don't post the video, but keep it handy if you want to share it with the class or remind yourself for future screen recording sessions.
The researchers asked, "Do you agree or disagree with the following statement: gender inequality doesn't really exist." Why "really"? Is that a dog whistle?
This week is about designing questionnaires in general. Next week we will design a questionnaire specifically for creating personas for iCollege. I've chosen iCollege merely for convenience. The population is known. We have ready access to a subset of that population, English TAs. We can rely on goodwill to get data. I'm open to alternatives. Maybe you want to focus on your final project.
Questionnaires are the backbone of nearly all forms of human-subjects research. While passive user data gathering may play a greater role in User-Experience research overall, just given the amount and granularity of it, user perception of experience can only be learned from direct questioning. Also, as we all know, companies are constantly asking for feedback via questionnaires, so knowing how to create an informative questionnaire is a marketable skill.
One could easily spend an entire semester on questionnaires alone. What follows is superficial. If you plan on doing any kind of human-subjects research for real, you will need to dig much deeper.
Questioning users is often an ongoing process. Some places keep a database of questions and an archive of past answers. They also keep a list of past respondents and profiles of respondents that might be used to find other respondents. Questionnaires are not a one time thing in most cases, so you should design with as broad a horizon as possible, but of course stay focused on the task at hand.
Because questionnaires can be expensive and time-consuming, it might make sense to think of the practice as a process, where you start by asking a select audience a few questions designed to identify an actionable problem, design what you think is a solution, ask a few more questions to discover if the solution solves the problem without creating new ones, and then create a questionnaire that asks a wider range of users to confirm or deny the efficacy of the solution. In other words, a full-scale questionnaire probably shouldn't be a fishing expedition because you don't want to spend money and time learning nothing useful. You also don't want to burn bridges with your potential informants by having to go back to them for a redo.
Constructing good hypotheses is challenging.
George: What are the key concepts of a hypothesis in general and then specifically in a User Experience context?
ChatGPT: A hypothesis, in general, is a proposed explanation for a phenomenon, based on limited evidence as a starting point for further investigation. It is a fundamental concept in scientific research, forming the basis of experiments designed to verify or falsify it. In any context, a hypothesis typically adheres to these key concepts:

Key Concepts of a Hypothesis
General Concepts
- Testable: Possible to confirm or deny through experiments or observations.
- Falsifiable: Can be disproven if contradictory evidence is found.
- Based on Existing Knowledge: Builds upon previous research and theories.
- Predictive in Nature: Makes predictions about outcomes under specific conditions.
- Specific and Clear: Precise and unambiguous statement of the expected relationship between variables.
- Logical: Consistent with existing knowledge and scientific principles.
Concepts in a User Experience Context
- Focuses on User Behavior and Preferences: Predicts how design changes affect user behavior or satisfaction.
- Informs Design Decisions: Guides design choices and tests design element effectiveness.
- User-Centric: Based on user needs, goals, pain points, or behaviors.
- Driven by Usability and Experience Goals: Relates to usability, engagement, conversion rates, or satisfaction.
- Empirical Testing: Tested via usability testing, surveys, A/B testing, and analytics.
- Iterative: Evolves with increased understanding of user interactions with the product.
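To connect those bullets to practice, here is a minimal sketch of how a UX hypothesis like our earlier "Users prefer Version B over Version A" might be tested empirically with A/B assignment; the assignVariant function and participant id are hypothetical:

// Assign each visitor to Version A or B consistently, so a returning
// participant always sees the same version.
function assignVariant(userId) {
  let hash = 0;
  for (const ch of String(userId)) {
    hash = (hash * 31 + ch.charCodeAt(0)) % 100000;
  }
  return hash % 2 === 0 ? "A" : "B";
}
console.log(assignVariant("participant-007")); // always the same answer for this id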
Questionnaires are rhetorical acts, not just the motivated ones designed to gather "evidence" to support a desired outcome but even those that seek to be objective. Who is asked and in what circumstances influences the data. Who is thought to be asking can also influence how a person answers. So you need to answer a number of rhetorical questions (audience, ethos, context) before you start designing a questionnaire.
Assuming you know what question you need answered, the rhetorical questions are along the following lines.
There are also logistical questions that go to context:
You need to screen potential respondents to find relevant and willing candidates. Once you have a list of useful respondents, you need to run at least one pilot to test the effectiveness of your questionnaire.
Your goal is to ask as few questions as possible to learn what you need to learn.
Close-ended questions limit what a person can say. The advantage is that you get numbers, and numbers are unambiguous, although of course how they relate to a subjective reality or even an intersubjective reality (to say nothing of objective reality) depends on how effective the questions themselves were and who you asked (and who answered, since not all invited will attend). Don't confuse clarity with accuracy. Close-ended questions are also quicker for a participant to answer because they don't have to compose an answer, just choose one. They don't require thinking, though of course you want to write them in such a way that your participant selects intentionally and meaningfully. You should always consider offering a Not Applicable (NA) option and a neutral option so a person isn't forced to contribute data they don't stand by or skip a question, leaving you not knowing what the absence of data means (objection, abstention, distraction).
Open-ended questions give participants space to say what they think. You get words rather than numbers and you need to be able to code the answers if you plan to generalize. Coding (labeling each item, whether word or phrase or sentence or whatever unit) requires some consistency among the answers. If you can limit an open-ended question to a specific user-type, a persona or a role, and you see similar answers among them, then you might be able to make an argument without further coding.
Open-ended questions can provide subjective insights which makes them great for understanding a single participant. They can also provide useful material (salient quotations) when building personas, but they don't typically generalize very well. So you almost always need a strong context and supporting limited choice questions to build that context out of.
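Here is a minimal sketch of what coding might look like once the labels are applied, tallying labeled answers into themes; the answers and codes are hypothetical, borrowed from the to-do persona types discussed in week 3:

// Tally coded open-ended answers into themes.
const coded = [
  { answer: "I forget things unless I write them down", codes: ["anxiety"] },
  { answer: "Crossing items off feels great", codes: ["accomplishment"] },
  { answer: "I like seeing my whole week laid out", codes: ["structure", "accomplishment"] }
];
const tally = {};
for (const { codes } of coded) {
  for (const code of codes) tally[code] = (tally[code] || 0) + 1;
}
console.log(tally); // { anxiety: 1, accomplishment: 2, structure: 1 }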
Methods for administering range from clipboard and pencil in a high (relevant) traffic area to a SurveyMonkey-type email list to a popup box on a well-trafficked website. Generally speaking, the more people respond the better, though if the respondents are the right people you may not need that many. A representative sample can reduce the need for large numbers.
Mistakes to avoid
These mistakes are easier to make than they may seem. Pilot test a questionnaire with several different types of respondents before you ship it.
What would you like to do? Perhaps we might brainstorm questions for people who use iCollege's grading tool.
Read the excerpts from the Resources. Find some other advice about designing and administering questionnaires in whatever format you like, text, video, interview an expert.
Blog what you learned about designing and administering questionnaires. If you have personal experience, please feel free to share that with us as well. If you have some initial questionnaire questions for your final project, share them with us.
As Aristotle explained, rhetoric isn't about addressing individuals, it's about addressing types of people. His types were the young, the old, the rich, the powerful, the educated -- all men, of course. He didn't have any data except personal experience and cultural expectations, stereotypes, in other words. Personas are data-based descendants of Aristotle's audience member stereotypes.
There are several ways to divide an audience by "type." You can use roles, assuming for example that accountants might have different mental models and default approaches than, say, people in HR. Or that graduate assistants might have different needs from Profs because of differences in their role. (This assumption would fail most spectacularly in English Departments, where both are "Teacher of Record" and in that sense identical.) An alternative to roles is tasks, where what a person needs to do is used to design the tool they will be provided. If the task is adding or deleting a student from a roster, then who is doing the adding or deleting doesn't matter. The task is performed by an undifferentiated process.
You can design based on "attitude" -- level of patience, motivation, level of commitment to the outcome: casual user, fan, maven.
You can use "experience" or familiarity: novice, apprentice, card carrying member, seasoned member, senior member, retired. A design based on this pattern of thinking would likely reveal different opportunities and processes based on how much bench time or what kind of certification a given user has.
All of these approaches -- role, task, experience -- are persona adjacent but not personas per se because they lack individuation; accountant as role is different from Ms. Jones, CPA.
A persona is a research-based, data-driven representation of a segment of an intended population (audience), depicted as if a real person, a simulated individual that is in fact a generalization. A persona is not a stereotype. It is not a character, and even less a caricature. It also isn't an actual person. It is an artifact and in a sense a genre of technical writing.
Personas tend to have a name, a photograph of someone who looks like what the data suggests, a biography based on demographic and geographic and psychographic data, a context, a primary goal, a set of attitudes, beliefs, identifications, and mental models (how they think something works, which may or may not resemble reality), an emblematic quotation, and sometimes a list of brands they identify with, because those brands also have personas. So if they are Mac people, for example, you know that much more about them. However, brands guard their personas carefully because they offer insider information. Knowing someone uses a Mac doesn't tell you anything about how Apple understands its personas. Businesses use personas to make sure that everyone writing for the company -- or designing for the company -- has a clear sense of who they are writing to and working for, so that employees are not addressing some abstract, disembodied, undifferentiated, universal "audience member" or "user."
I visited MailChimp a few years back and they had their personas printed as life-size cardboard cutouts with note cards taped to the wall around them.
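Because a persona is an artifact, it can help to think of it as structured data. Here is a minimal sketch of the components listed above as a JavaScript object; every value is illustrative (the "Enterprise Emma" example later on this page fills in a fuller version):

const persona = {
  name: "Enterprise Emma",
  photo: "emma.jpg",  // someone who looks like what the data suggests
  biography: "38-year-old marketing director in Seattle",
  context: "coordinates campaigns with a partly remote team",
  primaryGoal: "create professional materials efficiently",
  attitudes: ["values efficiency", "open to new tools"],
  mentalModels: ["expects files to sync across all devices"],
  quotation: "I need tools that keep up with the pace of the marketing world.",
  brands: ["Microsoft"]
};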
George: Hey ChatGPT, What kind of data does one need to create a persona?
ChatGPT: Creating a persona involves collecting a mix of qualitative and quantitative data that helps to form a picture of the typical users of a product or service. This data is then used to generate a fictional character that embodies the characteristics of a larger group of users. Here is the type of data typically gathered to create a persona:

Persona Creation Data Requirements
Demographic Information
- Age
- Gender
- Education level
- Occupation
- Income range
- Family status
- Geographic location
Psychographic Information
- Personality traits
- Values
- Attitudes
- Interests/hobbies
- Lifestyle
Behavioral Data
- Purchasing behaviors
- User goals and tasks they want to complete
- Challenges and pain points in completing those tasks
- Brand interactions and loyalty
- Usage patterns for current products/services
Technological Proficiency
- Comfort with technology
- Preferred devices and platforms
- Frequency and context of device usage
Contextual Information
- Specific conditions under which they would use the product or service
- Environmental factors (e.g., noisy office, private home)
- Social context (e.g., alone, with family, at work)
Motivations and Goals
- Primary objectives they want to achieve with the product/service
- Secondary or long-term goals
Attitudes Towards Current Solutions
- Satisfaction with current products or services
- Complaints and wish-list items
- Switching behavior from other solutions
Information Consumption
- Preferred sources of information
- Trusted recommendations (e.g., from friends, experts)
- Media consumption habits
Scenario-Specific Details
- Task flows and decision-making processes related to the product/service
- Specific use cases or scenarios where the product/service would be used
Collecting this data typically involves a variety of research methods, such as user interviews, surveys, field studies, direct observation, and diary studies. Additionally, analyzing data from customer relationship management (CRM) systems, web analytics, and social media can provide insights into existing and potential user behaviors.
George: Hey ChatGPT, Can you give me an example of a persona used by a well known software company?
ChatGPT: Sure, personas are typically confidential to the companies that create them because they reveal strategic insights into their user experience approach. However, let's take a hypothetical example inspired by what might be used by a well-known software company like Microsoft for its Office suite of products.

Persona Example: "Enterprise Emma"
Background:
- Name: Emma Clark
- Age: 38
- Occupation: Marketing Director
- Education: MBA in Marketing
- Location: Seattle, Washington
Demographics:
- Gender: Female
- Income: $110,000/year
- Family: Married with two children
Technological Profile:
- Highly proficient with productivity software
- Uses a high-end laptop and has multiple devices (tablet, smartphone)
- Prefers integrated software that syncs across all devices
Goals and Needs:
- Needs to create professional marketing materials efficiently
- Looks for advanced features to analyze market data
- Wants to collaborate easily with her team members who are sometimes remote
Challenges:
- Balancing the demands of a fast-paced marketing department
- Coordinating projects with multiple team members in different time zones
- Maintaining brand consistency across all marketing materials
Behavior:
- Heavy user of email, word processing, and spreadsheet software
- Relies on collaboration tools for team projects
- Frequently gives presentations to executives and stakeholders
Psychographics:
- Motivated by career advancement and professional development
- Values efficiency and productivity
- Enjoys leading teams and developing marketing strategies
Brand Interaction:
- Loyal to Microsoft due to long-term use and familiarity
- Open to new technology if it promises to improve productivity
Information Sources:
- Regularly reads industry publications and follows thought leaders on LinkedIn
- Attends webinars and conferences on marketing trends and software advancements
- Participates in professional networks both online and in-person
Scenario of Use:
- Uses Microsoft Office to coordinate marketing campaigns
- Relies on Outlook, Word, Excel, and PowerPoint for daily tasks
- Utilizes Microsoft Teams for managing team's workflow and meetings
Quote:
"I need tools that keep up with the pace of the marketing world and help me stay ahead of the competition."
Personas tend to be an ongoing effort, a consequence of gathering user data constantly. The personas are updated whenever new data suggests a revision. If the product already exists, and it doesn't yet have personas, one would try to learn from everyone in the purchaser database. If the product isn't yet available, one tries to learn from everyone who is a member of the target population. Obviously "everyone" is impossible, so a representative sample would be the next best thing. Once you have identified real members of the target population, names and email addresses, you can take the next step.
We are going to do what follows strictly as a thought experiment. When I wrote this section a couple months ago I thought we might actually design and administer questionnaires to create these personas, but now I think we just want to engage at the level of play.
To make personas, we need to gather data on a population. To do this with any kind of integrity, we need as large a population as possible. Given that we are in the English Department and most of us are teaching, and iCollege is the university's Learning Management System, collectively we know quite a few iCollege users. So let's use English Graduate Students as our population and iCollege as our object for analysis.
To gather the data, a questionnaire makes sense. If we had access to the server's analytics, we might be able to identify clusters of data from which we could create personas. But we don't, so we have to contact people and get their cooperation. What question might our questionnaire's data answer? Let's go with: How many different iCollege user-types can we identify, and how might we describe each type? This is a two-part question and yet just the tip of an iceberg. Often the first questionnaire is a short one designed to identify willing and relevant participants. But since we are just practicing, we will see what we can make with this. Questionnaires shouldn't have more than 20 questions, if at all possible (especially if a potential respondent hasn't agreed in advance). Here are some to get us started:
This list is sortable. Drag and drop to ponder optimal order.
What else should we ask of participants to help us create iCollege personas (iCollegers)?
What order would you put these questions in? Why that order?
How would you build this survey (SurveyMonkey, Qualtrics, ChatGPT)?
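If we did get our hands on analytics or questionnaire data, turning it into candidate personas is often a clustering exercise: group respondents who answer similarly, then describe each group. Here is a minimal sketch in Python, assuming hypothetical Likert-scale columns in a hypothetical CSV export; it is an illustration, not a recipe.

```python
# A minimal persona-clustering sketch. The file name and column
# names are hypothetical stand-ins for real questionnaire exports.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

responses = pd.read_csv("icollege_survey.csv")  # one row per respondent
features = ["hours_per_week", "gradebook_use", "mobile_use", "comfort_level"]

# Put the answers on a common scale so no single question dominates.
scaled = StandardScaler().fit_transform(responses[features])

# Try three clusters; in practice you would compare several values of k.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
responses["persona_group"] = kmeans.fit_predict(scaled)

# Each group's average answers are raw material for a persona sketch.
print(responses.groupby("persona_group")[features].mean())
```

The numbers don't write the personas for you; they only suggest how many distinct iColleger types the data supports and what distinguishes them.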
If your case study involves creating personas, you might develop a questionnaire to help you identify and sort different kinds of users. Or you might imagine a couple of different kinds of users and then develop a questionnaire to verify or deny your imaginings. If you go this route, share your plans and your draft questionnaire(s) with the class.
If you aren't yet ready to directly engage your case study, here's an alternative assignment.
Interviews can be an important part of data collection. If you can follow up a questionnaire with an interview, you can gain greater insights than a questionnaire alone can provide. Your questionnaire may (by accident or design) raise questions it can't answer, and interviewing might provide those answers. An interview can also open a whole new avenue of data, picking up where the questionnaire left off or going in an unanticipated direction the questionnaire uncovered. Interviews can be even more valuable if combined with on-site observation. Being in the place where users typically use your product gives you a chance to learn from their environment. It also gives you a chance to see if what they do aligns with what they say. It is of course possible to conduct interviews over the phone or via video conference, without any direct observation. And it is possible to observe without interviewing, via remote screen-capture software and/or server data. There is, however, nothing like spending time in a co-present conversation with people to develop a thick description and better understanding of their experience. While interviewing requires some screening, typically performed via questionnaire, you can conduct interviews that are not follow-ups to a questionnaire per se.
Although interviews can be valuable, recruiting and scheduling interviewees can be very time-consuming. There is something called guerrilla usability testing, where you just take your laptop to the Starbucks and sit down next to random people. (The Starbucks next to GATech probably won't have the same people as the one on the first floor of Georgia's Own, and the people who frequent a local independent coffee shop are probably different from both. In any of those three places you are likely to make inferences about which person is most approachable before you approach. So not exactly random.) That can work, but it has limitations. See also "What is guerrilla usability testing?" Nevertheless, if the product requires little learning to use and pretty much everyone who uses it will use it in the same way, guerrilla testing may be preferable.
Before you start looking for interviewees, you need to know why you are conducting interviews, what questions you need to ask, and what kinds of people you need to ask them of. Consider where the interview falls in your research process. Is it a follow-up to a questionnaire? Is it in lieu of one? Is it a separate line of inquiry? The next step is to identify a specific goal. You might formulate this as a hypothesis. What do you need to prove or disprove? If your goals are less pointed, like you just want to get a feel for who your users are, then you might take a searchlight rather than a laser approach: open-ended questions designed to encourage monologues. The more they say, the more you have to learn from. Always keep in mind, however, that a specific individual's experience is theirs. You can't generalize from a single point of view.
Once you know why you want to interview, and you have identified and recruited interviewees, you should write a conversation guide (not quite a script) and ask a colleague to help you vet and practice it. Are you asking the right questions? Are you asking them in the right order? While you can't control the order in an interview the way you can with a questionnaire, you still need to make sure that you aren't leading the interviewee to say specific things. You also want to see if the questions allow for as natural a conversation as an interview can be.
You might even consider role playing with a colleague to see where a conversation might go under various scenarios. You don't want to go in with the goal of obtaining a specific data set, but you don't want to waste anyone's time either.
You have to let the interviewee talk, so you may have to adjust and accommodate -- be prepared to improvise. However, if each interview is a unique conversation, you won't be able to generalize. You may be able to create narrative arguments out of what you learn from a unique encounter, but recurring themes will give you a stronger place to argue from. So you will likely want to ask the same questions in much the same way.
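If you do ask the same questions in much the same way, recurring themes become countable. One low-tech approach: hand-code each transcript with bracketed tags, then tally them. A minimal sketch, with hypothetical file and tag names:

```python
# A low-tech theme tally across hand-coded transcripts. Assumes each
# transcript is a .txt file with bracketed codes like [confusion],
# [workaround], or [praise] added by the researcher. Names hypothetical.
import re
from collections import Counter
from pathlib import Path

tag_pattern = re.compile(r"\[(\w+)\]")
counts = Counter()

for transcript in Path("transcripts").glob("*.txt"):
    counts.update(tag_pattern.findall(transcript.read_text()))

# Themes that recur across interviews are stronger ground to argue
# from than anything said in a single conversation.
for theme, n in counts.most_common():
    print(f"{theme}: {n}")
```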
Even after you've found and managed to schedule a plausible interviewee, they have to be in the right frame of mind, neither wary nor eager to please. It helps if you can meet them in their own environment because people are most themselves in familiar surroundings. But then again, they've invited a stranger in so it's not entirely familiar.
If they are visiting you, they may be different from who they are at home. They will likely be a bit distracted by the new surroundings and be in an atypical frame of mind. If you are paying them to be there, they may be trying to please or impress you with their answers. On the other hand, you don't have to travel and you don't have to leave your own world.
Regardless of where you do it, interviewing people effectively is a talent that requires both training and experience. It also requires a specific set of social skills. You don't have to be an extrovert. You don't even have to like people. But you do need to know how to listen, how to ask for clarification and elaboration when necessary, how to encourage candid responses, how to empathize with the interviewee, how to keep them focused, and how to let them wander or rant a bit when they seem to want to while being able to rein them back in before you lose too much time. You want the interview to feel like a natural but mostly one-sided conversation.
Listening skills can be developed. Start with what to avoid: avoid interrogation (don't set them up to say what you want to hear); avoid journalism (don't put words in their mouth); avoid HR-style interviewing (you are not hiring anyone). Avoid tech-support responses as well. You are not there to troubleshoot -- if all they have to do is click that button there, your job is to tell the designers to make the button more obvious, not to show the struggling user where the button is. Nor are you a career coach. You are also not there to make friends, although of course you want to be cordial and respectful. When you leave (or they do), it should feel to them like nothing significant happened. What matters most is the information. All the same, you may want to follow up, and in some cases you may want a longer-term association, so don't leave them cold.
You need a transcript of the interview. Consider using a tool like Otter.ai. If you can, you should bring a colleague with you to observe unobtrusively while the machine records. It can be very hard to listen, take notes, and engage all at once. If you don't have a colleague, the voice recorder will probably give you everything you need. You could use a camera, but that requires lighting, and the person has to be at home on camera, which is not normal. If you really need screen data, then consider using a screen recorder and a think-aloud protocol instead of an interview, or follow up with an interview after reviewing the video.
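If Otter.ai isn't available, one free alternative is OpenAI's open-source Whisper model, which transcribes audio locally. A minimal sketch, assuming the openai-whisper package (and ffmpeg) are installed and a hypothetical recording file:

```python
# A minimal transcription sketch using the open-source Whisper model
# (pip install openai-whisper; requires ffmpeg). Filename hypothetical.
import whisper

model = whisper.load_model("base")          # "medium" trades speed for accuracy
result = model.transcribe("interview.m4a")  # your interview recording

# Save a plain-text transcript for coding and quotation later.
with open("interview_transcript.txt", "w") as f:
    f.write(result["text"])
```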
As part of the scheduling process, you need to ensure the interviewee knows who you are, what you want, what you will ask, why you want to talk to them, how long it will take, who will be there, the fact that you will record, and anything else that's pertinent to them understanding in advance exactly what to expect. After you leave, or they leave, follow up with a thank-you card. Remind them why their help is appreciated and how their data will be stored and used. Be sure they know how to contact you in case they have questions or think of something they wish they had said.
If UX were a linear process, it would be something like: hypothesis, questionnaire, follow-up interview, conclusions/recommendations. But each research project is a function of the hypothesis, the data required, and the sources of that data. And some projects are ongoing. So thinking of UX research as linear is misleading.
Given that our questionnaire was, theoretically at least, designed to help us develop personas for iCollege's gradebook users, it would make sense to follow up with an interview to verify a projected persona. For this to work, we would need to know people who fit one of our projected personas and are willing to be interviewed. This may or may not be realistic. So I thought it might be interesting to see how ChatGPT would handle a persona follow-up interview.
George: Hey ChatGPT, as a user experience researcher, you want to interview a person you think is a good representative of a persona that a questionnaire you previously administered has identified. The purpose of the interview is to verify or disprove the accuracy of this person's fit with that persona and to modify the persona if the interviewee's responses suggest modifications. What questions would you ask, and why would you put each in the order you did?
ChatGPT: When conducting an interview to verify or potentially modify a persona based on a user's responses, it's important to structure your questions in a way that gradually builds a comprehensive understanding of the user's behaviors, needs, and attitudes. The order of the questions should flow logically, starting from general to more specific, and should be designed to encourage open and detailed responses.
1. Background and Demographic Information
Question: "Can you tell me a bit about yourself, including your profession, hobbies, and daily routine?"
Why: Starting with broad, open-ended questions about the participant's background sets a comfortable tone and provides context for their responses. It helps in understanding if their basic demographic details align with the persona.
2. Technology Usage and Preferences
Question: "What types of technology do you use regularly, and for what purposes?"
Why: This question aims to understand their comfort and familiarity with technology, which is crucial for assessing their fit with a persona that may be defined by certain technological behaviors.
3. Goals and Motivations
Question: "What are some goals or objectives you're currently working towards in your personal or professional life?"
Why: Understanding their goals and motivations can help in determining if these align with the persona's characteristics. This question also delves into what drives their decisions and behaviors.
4. Challenges and Pain Points
Question: "Can you describe any challenges or frustrations you face in your daily activities or work?"
Why: This question helps in identifying pain points and challenges that may or may not align with those outlined in the persona. It can reveal discrepancies or new insights for persona modification.
5. Decision-Making Process
Question: "Can you walk me through how you recently made an important decision, either in your personal life or at work?"
Why: This question aims to understand their decision-making process, which is a critical aspect of a persona. It can reveal how they gather information, weigh options, and what factors influence their decisions.
6. Specific Product/Service Interactions (if applicable)
Question: "How do you typically use [specific product/service]? What do you like or dislike about it?"
Why: If the persona is related to a specific product or service, understanding their direct interactions with it can provide valuable insights into their fit with the persona.
7. Feedback on Persona Representation
Question: "Based on what we've discussed, do you feel that [describe key aspects of the persona] accurately represents you? Why or why not?"
Why: This direct question allows the participant to reflect on the persona's accuracy in representing them. It can lead to specific insights on what aspects of the persona might need adjustment.
8. Open-Ended Feedback
Question: "Is there anything else about your experiences or needs that you think is important for us to know?"
Why: This final, open-ended question gives the participant an opportunity to share any additional thoughts or insights that might not have been covered but could be relevant to refining the persona.
Remember, the key to a successful interview is not just asking the right questions but also actively listening to the responses, being adaptable, and probing deeper when necessary. The goal is to gather rich, qualitative data that can validate or challenge the assumptions built into your persona.
Thoughts on ChatGPT's advice about a persona follow-up interview?
Who do you want to interview? What do you want to learn? What questions are you going to ask? How long will the interview take? Where will you conduct it? What could you do with what you learn?
Conduct an interview. Get a transcript.
Post a transcript of your interview to your blog. Also post thoughts on the experience, what you would do the same and what differently next time.
Find a book on UX or a UX-adjacent topic that is not already in the collection under Books, provide a list of important quotations, in the way you see in the Books section, and write a review of the book -- who it is for, what it explains, how well you think it works. When you send me the quotations, I will add your book and your quotations to Books. Post your review on your blog. If you want to practice talking-head presenting, you can video record your review and post that to your blog.
While not attending class this week ...
Brainstorm UX projects. Group or individual? If you can't think of a project, you can take one of the elements of UX research -- questionnaires, interviews, observations, usability testing, personas, case studies, or some other element -- and provide an in-depth report on it consisting of a detailed overview, a bibliography, examples, and links to useful resources. You will share this report with the class via a one-hour Webex presentation during the last week or two of class (depending on how many of us choose this path).
Outline of a generic user-centered product-development research study.
Figure out what you want to UX
Or
Work on your UX project and blog weekly about what you did, what you learned, and what you are going to do next.