ENGL 8122  ※  User-Experience Research & Writing

"There is no such thing as user error."
Nunnally & Farkas, UX Research.

Lectures

Each week focuses on a central discipline of UX. Our goal in the first eight weeks is to learn enough to conduct a complete, if pilot-scale, study, from observations to recommendations. We will work without IRB approval because we are not planning to publish anything; all of this research is for educational purposes only.

Prior to class, read over the week's work. Don't do any of the tasks in advance -- that will defeat their purpose, which is to mimic user research. These tasks are illustrative rather than performative: there are no right answers, only user insights, and researchers can't have insights if users are cribbing or trying to game the setup in order to show how smart they are, hide their insecurities, or otherwise be somehow other than they are. As UX researchers, you will learn that getting people to act and think naturally, rather than as they think they should (or shouldn't), is a big part of the job.

Prior to class, familiarize yourself with what we will be doing and talking about. After class, do the homework assignments. Since you are blogging your responses to the assignments, everyone will be able to see everyone else's work. Do your own thing, but learn from and support each other.

Overview: UX & Usability (Week 1)

Usability and the can opener.

There are several disciplines with the word user in their names: User-Centered Design, User-Interface Design, User-Experience Research, User Research, and Usability. There is also Student-Centered Pedagogy. None of these disciplines uses the word rhetoric, but all of them are audience-focused. They are, therefore, descendants of Aristotle's Rhetorica.

User Research is an expression used in marketing and sales. It refers to gathering information about clients, potential clients, and the competition's clients. While there is some generalization from the data collected, including the creation of personas, for the most part UR is about actual people, from contact information to buying history. This information is often kept in a customer relationship management (CRM) system. We are not interested in any of this here.

User-Centered Design is the discipline and practice of making products only after thoroughly researching a target population's needs, expectations, goals, and financial situation, and then building the product with the audience in mind from the blueprints to the release event. This is the opposite of the traditional (pre-digital) production process, known as the waterfall process, which begins with the product and has engineers build it as specified by top management. If you build it, they will buy it. If they don't, and there's still enough money, version 2.0 might take customer feedback into account. User-Centered Design inverts this, inductive rather than deductive: the water falls up.

User-Interface Design is essentially User-Centered Design applied to digital interfaces: the buttons on your remote control, the screens on your phone, etc.

Student-Centered Pedagogy is the discipline of teaching students rather than subjects. I don't teach Composition; I teach people how to think critically and write effectively. In practice, Student-Centered Pedagogy is very similar to User-Centered Design. Student success replaces profit, and each student is understood as an individual rather than as a representative of a type (a persona). Even though one might recognize "non-traditional" as a common type, a student-centered pedagogue realizes there are many ways to be "non-traditional," and therefore many different ways and means of learning need to be identified and addressed. Student-centered pedagogues are willing to consider that a high failure rate might be the assignment's problem rather than the students'. You can quickly identify someone who is anti-student-centered by using the word "client" in place of "student." If the moss-back rolls his or her eyes, they are old school: "Calculus is calculus, has been, always will be. There are two kinds of students: those who can learn calculus and those who fail calculus. The only pedagogical failure is in passing those who should fail."

Usability, which is where we are starting (finally!) is about gathering user data -- what a user does (behavior) -- and user feedback -- what a user reports (perception) -- to inform design decisions. User data can be obtained by the object itself as it is being used, with or without the user's conscious knowledge (and with only implied consent -- hence the ubiquitous cookie alerts you ignore). Every time you use a website, where you came from, what you click on, how long between clicks, the path you take from beginning to end, the equipment you are using, your location, your demographic, all of this information is recorded on the server and analysts can use that data to improve desired outcomes, like reducing the number of times items are abandoned in a shopping cart. This kind of data acquisition is designed into nearly every electronic product these days, from lightbulbs to toothbrushes (iCollege too). Gathering data outside user consciousness enables a level of objectivity. People behave more naturally when they don't know they are being observed. But knowing what a person is thinking as they do something or what they think about it after the fact can provide information that a data trail alone cannot. Knowing that 20% of users didn't do something doesn't tell you if they didn't see it, ignored it, saw it but were distracted before they could complete it, or something you can't imagine caused the observed effect.
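If you want to see how small the machinery behind a data trail can be, here is a minimal sketch of client-side click logging. The /log endpoint is a placeholder for a script on your own server, not a real service; commercial analytics packages do the same thing at much larger scale.

<script>
// A minimal sketch of a client-side data trail. "/log" is a
// hypothetical endpoint on your own server.
document.addEventListener("click", function (event) {
  var record = {
    page: location.pathname,        // where the click happened
    target: event.target.tagName,   // what was clicked
    time: Date.now(),               // when, so pauses between clicks are measurable
    referrer: document.referrer     // where the visitor came from
  };
  // sendBeacon queues the data without delaying the user's navigation
  navigator.sendBeacon("/log", JSON.stringify(record));
});
</script>

A handful of lines like these, multiplied across every page, is what produces the paths, timings, and exit points described above.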

In order to know what users are thinking and feeling, you have to interact with them, via questionnaires and interviews, but that can be time-consuming, especially the process of tracking down willing participants. In-house usability research, observing the people who make the product use it, is easier to conduct than finding willing outsiders, but insider knowledge also creates problems (the curse of knowledge and the problem of Easy Once You Know, or EOK). Observing outsiders can help a team think beyond its own perspective, which can greatly enhance design decisions.

How Usability differs from User-Experience Research will be the subject of today's class. In essence, Usability is about objectivity, what the universal user does. User-Experience Research is about subjectivity, what a single user experiences while doing, or inter-subjectivity, what a type of user experiences while doing what they do in the way they tend to do it.

In-Class Activity: A UX Experience

To get a clearer understanding of Usability and UX, we are going to do a light-weight UX experience and then contrast that with a Usability test we won't actually do.

I have formatted the Keywords screen in two different ways. The question is, which version do you (the users) prefer? Human subjects-based research is typically hypothesis driven. A hypothesis is a statement that can be proven true or false by some community-accepted means. You can phrase a hypothesis as a question, as we did here. Stated as a hypothesis, our UX question might be, Users prefer Version B over Version A. We have ideas about readability, specifically that text-heavy screens are overwhelming, that underwrite our expectation, but we are going to prove or disprove it with actual user impressions. Were we seeking Usability rather than UX, our hypothesis would be something like, Version B allows users to find a definition more quickly than Version A. I would like each of you, on your own, to compare the two and identify which you prefer, if in fact you have a preference. You might make a list of pros and cons or just write down what occurs to you as you use the two screens. I'm not going to ask you to advocate for one, just to share how you arrived at your preference. There is no right or wrong answer, and you are not being graded or judged in any way. I will give you 10 minutes to use and evaluate the two screens. Come back when you are done. Please mute your mike.

Keywords A
Keywords B

Why did I ask you to decide before we discussed? Several reasons. Quiet people get ignored in group discussions, while chatty people often have more influence than their ideas might warrant. Most people don't know how to listen; they are composing a response instead. And then there's "The Abilene Paradox":
ChatGPT: The Abilene Paradox occurs when a group of people collectively decide on a course of action that is counter to the preferences of many (or all) of the individuals in the group. It involves a common breakdown of group communication in which each member mistakenly believes that their own preferences are counter to the group's and, therefore, does not raise objections to a certain action.

The paradox is named after an incident that took place in Abilene, Texas, which was described by management expert Jerry B. Harvey in his 1974 article "The Abilene Paradox: The Management of Agreement." In the anecdote, a family agrees to take a trip to Abilene despite none of them actually wanting to go; they each believe the others want to go and thus go along with it to avoid conflict.

The Abilene Paradox is often used to illustrate the pitfalls of group decision-making processes, highlighting the importance of properly managing agreement and ensuring that all members of a group are able to voice their true opinions and preferences. It's a cautionary tale against "going along to get along" when it leads to actions that are not actually supported by the group as a whole.

Now let's discuss.

  1. Which format do you prefer?
  2. Why that one?
  3. How many in favor of A? How many in favor of B?

Given that we are the only users of this site, and I have control over the design, our collective decision based on shared experience is actionable. You tell me what we want, and I make it so.

What we just did is a miniature version of User-Experience Research. We put users (participant observers, specifically) in front of two different versions of something and asked them to describe how the experience of using each led them to a preference (a subjective assessment). We then voted to identify the majority. If the vote wasn't unanimous, then enacting the will of the majority inconvenienced? irritated? disadvantaged? (level of harm) a minority. Notice I wrote a and not the minority. There may be more than one way to feel left out. We might have spent some time trying to build a consensus, adjusting each of us to the decision and compromising a little on the decision as well until we were all a bit happy and a bit unhappy with the results. The differences between Keywords A and Keywords B are so trivial as to make this whole discussion almost entirely theoretical. Still, I hope it correctly illustrates a kind of UX practice.

What we just did was UX and not Usability because we didn't objectively analyze our preference. We didn't verify that what we felt was a reflection of something other than subjective experience. We didn't think outside ourselves because we didn't have to. If we wanted to share this website with the wider world, it would probably be a good idea to think outside ourselves in case we don't actually think and learn (the purposes of the website) the way other people do.

Cognitive Walk Through of a Usability Test

Is there a way we might discover which Keywords is objectively better?

Here is a breakdown of some of the thinking involved. These bullets are not in order and you might jump back and forth. Hopefully you will also think of more questions that need to be asked:

  • UX is time-consuming and it can be expensive, so we should answer some preliminary questions before we start designing an actual study.
    • Either Keywords A or Keywords B is an either/or choice, so to avoid a false dilemma we should ask, "Are other formats possible?"
      Exigence will lead us to presume not. Since I wrote that, I have put the keywords into a database, making re-formatting much less time-consuming, so I might entertain alternatives. Really strong user-centered designs need to take rapid modification into account. Whenever possible, treat text as data. Never write anything twice.
    • Will the results of testing be worth the effort and expense?
      • Time it would take to track down users, verify they are relevant, consent them, orient them, get them to the task, record them, analyze the data
      • Costs associated with each of the above, plus perhaps opportunity cost -- many businesses require a lot of paperwork, and while this work is happening, other work isn't or, worse, changes are happening that will invalidate the effort
      • What will we gain?
  • What does better mean? (not preferable, we are looking for an objective measure rather than a subjective impression)
    • Can we assume better means enables a user to complete a relevant task more quickly?
  • What measurable and relevant task might we set some users?
    • Which users?
    • Are all users the same?
    • Are there different ways to read?
    • How do we find willing users?
    • How will we observe them?
    • How will we record our observations?
  • How do we measure how well they performed the task?
    • Is that the right task?
      • How likely and how often are they to perform that one?
      • What others are there?
        What functions might Keywords serve and how might each function lead one to "read" Keywords differently?
    • What is the right unit of measure?
  • Let's assume the task is "search" and let's operationalize "search" as "lookup speed," since time is measurable. How do we design the test?
  • We might use A/B usability testing: show each alternative at random to users and measure how long it takes a user of either design to find what they were looking for -- well, leave the screen. Unless we were there or had some way for them to indicate satisfaction, we wouldn't know what happened. Once we know which design yields the quickest lookup times, we would have an objective measure of which design is better. (A minimal sketch of this kind of test appears after this list.) We didn't need to verify the validity of our consensus opinion because we are the entire population.
    However, designing for consensus approval can be dangerous. If the audience prefers something counter-productive or worse (exclusionary, perhaps), then giving them what they want is inefficient or worse. In matters of taste, the customer is always right, but only in matters of taste. And only when the dominant taste doesn't seriously disadvantage some users.
  • What if lookup speed isn't a relevant measure for usability for the Keywords file?
    What if a user's goal for Keywords isn't locating the definition of a word but an overview of the discipline or a memory device for how to be a UXer or something else? What if different users have different goals, are reading in different ways, and therefore prefer different formats? Perhaps one user wants to use Keywords to get a bird's eye view of the discipline. Perhaps another wants to use Keywords as a mnemonic device. Perhaps yet another sees Keywords as a way to experience being a UXer. And still another is using them as an outline, a way of organizing their own subsequent note-taking. Once we start thinking about different use cases, different users, and accommodating different goals, rather than simplifying a single, presumably universal goal, we are leaving Usability and entering UX.
  • Are there other, non-task oriented, ways to decide which format to go with?
    • What about giving users a questionnaire -- Which do you prefer -- and then taking the majority preference? 51% prefer B. B it is.
      • That is UX, not Usability. If we ask for their opinion, can we trust its accuracy and relevance? Is what they think representative of what anyone who might use the website would think? Not to mention: do they actually know what they really want?
  • What about taking the choose-your-own-adventure route, offer both versions to everyone?
    The Keywords button on the menu bar would drop down to offer a link to each, perhaps, or put a link to version 2 at the top of version 1? Given only two options, this approach might work. We could even create a login and once user A has selected version B three times, it defaults to B.
    But what if we have more than 2 options? If we can think of every possible approach and offer all of them, then we don't need to know which is objectively better; we can let users' preferences decide. But this is a very heavy (time, money, imagination) solution. And offering many options can lead to decision paralysis.
    Remember that sometimes no choice is the best (for whom?) choice.
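Here, as promised in the A/B bullet above, is a minimal sketch of what random assignment and lookup timing might look like in JavaScript. The keywordsA.html / keywordsB.html file names and the /log endpoint are placeholders for illustration, not this site's actual code.

<script>
// Landing page: assign each visitor to version A or B at random.
// localStorage keeps the assignment stable across return visits.
var version = localStorage.getItem("keywordsVersion");
if (!version) {
  version = Math.random() < 0.5 ? "A" : "B";
  localStorage.setItem("keywordsVersion", version);
}
location.replace("keywords" + version + ".html"); // hypothetical file names
</script>

<script>
// On each Keywords page: time the visit from arrival to departure.
// Leaving the screen is only a proxy for "found it" -- see the caveat above.
var start = Date.now();
window.addEventListener("pagehide", function () {
  var record = {
    version: location.pathname,   // which design this was
    msOnPage: Date.now() - start  // "lookup speed," operationalized
  };
  navigator.sendBeacon("/log", JSON.stringify(record)); // hypothetical endpoint
});
</script>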
BTW Since I first drafted this lecture, I added the "Random UX Term of the Day" function to the index file. I did this partially because it occurred to me how to do it -- with an assist from ChatGPT -- and partially because I think learning the technical vocabulary is a significant part of learning a discipline and practice, so why not print a UX term on the front door every time someone walks up? Well, ego is a bad motivation for design; maybe it's ok for art, but if others have to use the thing that gets made, ego is a design flaw. On the other hand, the pedagogy is sound enough, I think. If you dislike the pedagogical application of the generally irritating convention of a popup screen, bookmark www.gpullman.com/8122/syllabus.php instead of www.gpullman.com/8122.

Having thought through all of this and whatever occurred to us along the way, we would survey our users to identify their use cases, that is, the different ways to "read" Keywords. We would then choose a design that best accommodates all of those ways or, if we have to choose one, the one that works best for the most common way of reading and hampers the others the least. Then we would Usability test our choice, to verify that our users can do what they want to do as efficiently (fast) as possible. Once we had the data -- 12 users with 3 trials and an average speed of T -- then we would decide if our design should stand or be tweaked and re-tested.

We could write up and share this miniature research project using the industry standard format, a case study. A case study is a multimodal narrative about the research that led to a design decision and how the implementation of that decision improved people's lives and thrilled the client who commissioned the research. We could, but we aren't going to because we don't yet know enough about case studies. Might make a good topic for a final project. You have users. You have access. Might be too simple. Maybe not. You could have 100% participation. Hmm. Worth pondering perhaps.

Homework -- Assignment 1

Read the Cases section of this website, spend some time reading and looking at each example case and thinking about what characteristics they have in common.

Search for UX case studies and note one you find especially interesting or significant or emblematic of all case studies. The criteria for your preference are up to you. We just want more cases to look at.

Based on your thinking and the outlines provided by ChatGPT and Copilot, design your own preliminary case study outline. What parts do you think are the necessary core of a case study? You might like to consult one or more of the other AIs.

Deliverable(s)

  1. Set up a WordPress instance.
    Share your blog's URL with the rest of us.
  2. Blog your case study outline. Before you give it to your readers, say something about how you came by it. Since the data for your outline consists of your own empirical research and the outlines suggested by both ChatGPT and MS's Copilot, your outline might differ from the AI-generated ones. Great if it does; no problem if it doesn't. If you adopt ChatGPT's or Copilot's, you might want to think a bit about priming bias, seeing what was suggested but not necessarily what was there.
  3. Lastly, share with us a link to a case study you found compelling and briefly (one or two sentences) tell us why you chose it.

Keywords

  • Ethnography -- sub-discipline of Anthropology, ethnography is a qualitative research method using observation, participation, and interviewing that seeks to accurately describe a people by understanding them as they understand themselves, from the inside as opposed to the outside. Objectivity, the view from the outside, is precisely what ethnography seeks to avoid.
  • Observing -- avoid white coat effect, put people at ease, fly on the wall, etc.
    Data mining -- ideally, websites and apps are set up to collect data trails as a given user progresses through a given experience; algorithms track and respond to these data trails, then test the responses and modify them if need be. Data can consist of paths, time spent in a given place as well as overall time, entrance and exit points, and affect (tone of words used during interaction, signs of frustration -- hitting undo or back, jumping back and forth, quitting mid-process). Sometimes questionnaires are used to support data or as an alternative to passive data trails. In lieu of data trails, one can observe a user via screencasting software and/or by being there watching.
  • Interviewing -- did you observe what you think you observed? How does the client express their experience?
  • Participating -- sometimes the first round of users consists of the designers and programmers and stakeholders. Sometimes the UX people go to the clients and work alongside them. In both cases, those involved in gathering the information are also providing the information gathered. You have to be careful not to assume that as an insider you have better insights. You also need to guard against confirming your biases and assuming what you think is what others think.
  • Hypothesizing -- you need a testable statement, verification of which will be useful (actionable, timely, economically viable)
  • Testing -- acquiring the relevant evidence to prove or disprove the hypothesis
  • Usability -- watching representative people using a product to figure out if it can be made easier to use. While focused on improving an existing product, Usability sometimes discovers features to add to a product, and sometimes whole new product opportunities.
  • User Research -- learning about customers, potential customers, as well as competitors' customers: who they are (demographics), what they want and expect, how they work, how they see themselves, what they can pay, how often, etc.
  • User Experience Research -- learning how a user feels while using a product and how they identify with the product (fan, user, grudging user, oblivious user, etc.)
  • Operationalization -- to render something measurable, actionable.

Books worth skimming

This isn't exactly a bibliography. These links take you to the highlights I made when I read these books. Ideally they will give you the gist, enough to decide if you want to read the whole thing or just note a few salient quotations, make a gist of a gist, in other words.

Martin Luther King Day (Week 2)

No Class

Usability Testing (Week 3)

You need a notebook, digital or analogue, for today's class, because I want you to make field notes as we walk through a couple of different kinds of Usability tests. Field notes are a significant source of primary data. Preplanning and organizational ideas tend to influence what data you record, so how you take notes can determine what notes you take and therefore what you can learn.

PerplexityAI: Keeping Field Notes in User Experience Research
Keeping field notes in user experience research is crucial for capturing insights, observations, and details about user interactions. Here are some tips for someone new to this process:

Prepare Beforehand
Determine what you need to deliver and prepare a checklist of important things to take with you.

Take Detailed Notes
Make sketches, take photos, and write down observations. Even if you're allowed to make recordings, written notes can be invaluable.

Organize Your Notes
Organize your notes and data as you collect them. This will speed up user research analysis dramatically. Consider using a consistent naming convention and folder structure for your data.

Use a Dedicated Notetaker
If possible, assign a dedicated notetaker for your research sessions. This allows the researcher to focus on the participant and the conversation.

Debrief After Each Session
Take 15 minutes after every session to jot down your main takeaways and impressions. This will help you recall details and themes from the session during analysis later on.

Choose Your Note-Taking Medium Wisely
Whether you choose digital or paper for note-taking can depend on personal preference and the nature of the research. Some find that writing notes on paper aids retention.

Use Templates and Tools
Utilize note-taking templates and tools to make the process easier and more organized.

Tag and Group Your Notes
Tag your notes with common themes to help sort data into themes, trends, and patterns.

Share Your Findings
Once you have a group of research findings or key insights, share them with your team so everyone can make more informed design, product, and feature decisions.

Reflect and Interpret
Field notes should not only describe what you observe but also your interpretations and reflections on the relevance and importance of your observations.


Remember, the goal of keeping field notes is to capture rich, detailed information about user behavior, preferences, and pain points that can inform design decisions.
There are recording/transcription apps worth investigating if you plan to do UX at scale (Otter.ai, for example). And there are UX note templates and entire web services (for a fee). For today, though, I just want you to free-form it. Write down whatever occurs to you as it occurs to you. Part of the homework will be revisiting and then posting your revisited notes so we can all learn from how each of us thought to do it.

In-Class Activities

I asked ChatGPT to build a prototype app. We are going to use it as an object of Usability analysis. We are what is called a convenience sample, that is, a subset of a population chosen simply because we are available. We were not selected, and therefore we may or may not be representative of the relevant population. However, we are regular app users. We have some ideas about what we want and don't want. So we are at least relevant, even if we don't know if we are representative.

Five Second Test

Let's start with a "Five Second Test." A Five Second test is a Usability technique to capture how quickly a screen communicates its purpose and how to use it.

Instructions:

When you click on the next link, you will see an image of an app for five seconds. You don't have to do anything. Just look and process however you are inclined to process. This is not a test of you in any way.

link

Now, please answer the following questions in your field notes:

  1. What was the app for?
  2. Did you want to use it?
    1. If not, why not?
  3. If so, what would you do first?

Let's compare notes.

  1. Note anything relevant that occurred to you during our discussion.
  2. Note your thoughts about 5 second testing as a Usability tool.

If you are interested in learning a bit more about 5 second testing, a company called Lyssna offers a nice explanation and a free service to facilitate your testing and learning. They rather elegantly describe their free tier as being "For small teams starting their experience research journey."

If you want to build your own 5 second test, all you need is an image on a server and this HTML code:
<meta http-equiv="refresh" content="5; url=https://www.yourhomescreen.html">

Well that and test subjects, of course.
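If it helps, here is a minimal sketch of a complete test page built around that one line. The image name and the follow-up questions page are placeholders you would swap for your own:

<html>
<head>
  <!-- After five seconds, send the viewer on to the questions page.
       Both file names below are placeholders for your own. -->
  <meta http-equiv="refresh" content="5; url=questions.html">
</head>
<body style="margin:0">
  <img src="app-screenshot.png" alt="" style="width:100%">
</body>
</html>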

As you may have already noticed, I am trying to nudge you toward learning HTML/CSS and building web interfaces.

Brainstorming with users

Instructions:

For the pre-test part of this Usability experience, please make a couple of notes about to-do lists in general.

  1. Do you use to-do lists?
    • If no, why not?
    • If yes, continue
  2. How often?
  3. Do you think your prior experience might influence your analysis of this to-do app?

The goal of this test is to learn from you (a convenience sample of potential users) how to improve a prototype. We aren't looking for validation or evaluation, and this test is no reflection on you. We simply want to know what you think as you look and use. When you click on the link, the screen will stay in place. Use it as you see fit, and make a few notes about how well it worked for you and what would make it better. Ten minutes of doing and thinking should give you enough data for some notes.

If you abhor to-do lists, you can make a few notes about why. They will be useful when we talk about personas in a few minutes.

  1. Use To-Do
  2. List things you liked
  3. List improvements

Let's discuss.

Heuristic analysis

In rhetoric, the word heuristic refers to invention, the process of coming up with things to say. In contemporary language, a heuristic is a shortcut, a general guideline that works effectively for the most part most of the time.

Heuristic analysis is the Usability practice of having informed users -- people who know the 10 heuristics -- use a product to see where it fails to conform to the standard pattern. It's a quick and typically cheap kind of quality control. If you design using industry-standard patterns, you can presume your designs are usable, though you should verify by asking experts to assess their conformity.

Jakob Nielsen, one of the OGs (cringe?) of usability, identified 10 usability heuristics:

  1. visibility of system status -- keep users informed about behind the screen processing, loading, successfully uploaded, searching please wait
  2. use familiar, real world, language -- no jargon, no site-specific lingo
  3. users should be in control -- nothing relevant to the experience should be happening behind the screen
  4. follow industry standards -- Ctrl-S means save regardless of platform (or Cmd-S, because Apple)
  5. don't let users make mistakes -- multiple levels of undo, popup in place warnings about required form fields, greyed out representation of features unavailable in the current context
  6. recognition over recall -- don't make users remember or have to think
  7. flexible designs -- experts should have shortcuts and other tools that aren't visible to novices who will be distracted or confused by them
  8. minimalist design -- don't clutter the screen, don't add images as decoration
  9. no error should be fatal -- offer clear signposts and ways to start over efficiently, auto populate form data when possible
  10. provide help -- but design so no one needs it (and assume no one will read it)
Nielsen Norman Group
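To make a couple of these heuristics concrete, here is a minimal sketch (all names illustrative, not from any real product) of heuristics 1 and 5 applied to a save form:

<form onsubmit="return save(event)">
  <!-- Heuristic 5: flag required fields before the user can make the mistake -->
  <input id="task" placeholder="Task name (required)" required>
  <button type="submit">Save</button>
  <!-- Heuristic 1: keep the user informed about what the system is doing -->
  <p id="status" role="status"></p>
</form>
<script>
function save(event) {
  event.preventDefault(); // keep this demo from reloading the page
  var status = document.getElementById("status");
  status.textContent = "Saving...";   // heuristic 1: work in progress
  // ...save the task here, then confirm success:
  status.textContent = "Saved.";      // heuristic 1: done
  return false;
}
</script>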

Please fill out the To Do Heuristic Usability form.

RESULTS of SURVEY

Responses from the 12 participants:

Heuristic                 Y    N    NA   ?
visibilitySystemStatus    5    6    -    1
realWorldLanguage         12   -    -    -
userControl               2    9    -    1
industryStandards         5    3    3    1
preventMistakes           3    8    -    1
recognitionOverRecall     12   -    -    -
flexibleDesigns           -    10   1    1
minimalistDesign          12   -    -    -
nonFatalErrors            5    7    -    -
provideHelp               3    7    -    2

Post-engagement notes -- compare your list of potential improvements to the heuristic assessment form data. What did the heuristics inventory suggest that hadn't occurred to you as you just looked at and thought about the To-Do app? How did the inventory fail to identify the improvements that occurred to you?

If we were to put other people in front of this software, we might first want to screen our population, to identify different types of users so as not to lump different types together. What problems might lumping cause?

Screening test subjects

Do you use task lists? Yes or No?

If no, end.

If yes:

  • How often?
  • For what purpose(s)?
  • Do you use software? If so, what is it? If not, why not?

How many different kinds of users have we identified? What name should we give each group?

More on screening potential participants . . .
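If you wanted to put this screener online, the branching takes only a few lines. Here is a minimal sketch, with all names illustrative:

<p>Do you use task lists?</p>
<button onclick="branch(true)">Yes</button>
<button onclick="branch(false)">No</button>
<div id="followups" style="display:none">
  <p>How often? For what purpose(s)? Do you use software? If so, what is it? If not, why not?</p>
</div>
<p id="thanks" style="display:none">Thanks -- that's all we need.</p>
<script>
// "If no, end; if yes, continue" expressed as branching logic.
function branch(usesLists) {
  document.getElementById(usesLists ? "followups" : "thanks").style.display = "block";
}
</script>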

Persona development

As we think about different kinds of to-do users, we are starting to think about personas, generalized user-types or models. Personas serve a couple of purposes. They give people working at a company a concrete sense of audience, that they are working for actual people and not just abstract, undifferentiated clients, or worse, for a product. Personas also give workers a sense of how each different kind of audience member thinks, their psychology, as it were. In the case of those people who use To-Do lists, there is a relevant psychology. I asked Perplexity to explain it and I thought it did a rather solid job:

The psychology behind keeping to-do lists is multifaceted and can be attributed to several factors. Research and experts suggest that to-do lists serve various psychological functions, including reducing anxiety, providing structure, and offering a sense of accomplishment[1][2][3]. They can help dampen anxiety about the chaos of life, give a plan to stick to, and serve as proof of what has been achieved[1]. Additionally, the act of making a plan to get tasks done can alleviate anxiety and free individuals from the distraction of unfinished tasks[1][2]. To-do lists can also be reframed as a set of miniature goals for the day, providing structure and reducing stress[2]. Furthermore, the rewarding aspect of completing tasks on a to-do list can lead to increased motivation and productivity[4][5]. However, it is important to note that to-do lists need to be flexible, and overly lengthy lists can be overwhelming and counterproductive[2][4]. Understanding the psychology behind to-do lists can help individuals utilize this information to enhance productivity and motivation[5].

Citations:
  1. The psychology of the to-do list -- why your brain loves ordered tasks -- The Guardian
  2. The psychology behind to-do lists and how they can make you feel less anxious | CNN
  3. Why We Continue to Rely on (and Love) To-Do Lists - Harvard Business Review
  4. The psychology behind to-do lists and why you should make one - The Daily Free Press
  5. The Secret Psychology on Why We Love Completing To-do Lists - Workast

Compiled by Perplexity at Perplexity.ai

Given this information, we might start to create a persona for each type: the anxious lister, the structure-seeking lister, the accomplishment-driven lister, the never lister. We could build each of these types into our re-design of the to-do app. But first we would need to find enough users to verify (accept, modify, or reject: beware confirmation bias) our thinking about the four relevant personas. We would interview to-do users (where would we find them?) to see how each fits or doesn't fit one of these four personas, and then we would modify the personas accordingly until we think we have an accurate representation of our target population, all to-do listers. Then we would update the app to offer reassurance and praise for the anxious and accomplishment-driven (maybe various ring-tones) and perhaps more ways to organize items on the list for the structure-seekers.

Recap: UX in a nutshell

The to-do list is probably as old as writing itself. One of the very first computer programs was probably a to-do list, and there are many available today. Making a new one doesn't make much sense. So unless we had an innovation in mind, the research process for a to-do list today would be: Let's make a to-do list. Oh, so many already exist. Can't improve on them. Never mind. Or maybe we find out they cost money so we decide to make our own.

Let's pretend we want to make one to sell or give as a lure to others. We send out a questionnaire to potential customers and ask: would you be interested in an app that would help you keep track of what you need to do? If so, what would you keep track of? Are there any particular features you would especially like? (market research questionnaire) Enough say yes and offer enough insight to send us in a particular, if still tentative, direction. We have an idea worth developing. Next we make a sketch (wireframe), just the basic structure. We show this to a few potential customers and ask for feedback. We might also show it in house. If we get notes, we redo the wireframe. Once we think we have a good starting place, we take the vetted wireframe to a graphic designer who makes a static image (low fidelity mockup). We put that lo-fi mockup in front of potential users. We watch them "use" it. Then we ask questions about their experience, what ways they might make use of such a thing if they could play around with one that actually worked, and perhaps ask for suggestions (UX). Then, given that data (task flows, user differentiation, i.e., personas), we go back to the designer and ask for an updated image. Then we take the update to the software engineer, and they make a high fidelity mockup (appears to work) or an actual prototype (does work). Then we put a couple different kinds of potential users in front of the late-stage artifact (Usability and UX, which is where you came in) and observe them use it; maybe ask them to think out loud as they use it. Perhaps follow up with interviews. Based on what we learned, we would update the prototype. Test it. Tweak it. Then ship it. Then ask for feedback from our customers, and so on.

There are now AI programs that can take a sketch like the one above left and turn it into a drawing like the one above right and others that can take such a drawing and create the software it depicts. makereal.tldraw.com (docs), for example. There are also image-to-code converters like Locofy, for example, that can turn a Figma mockup into an app. The production aspects of User-Centered Design can be automated. What we still need is user input, user testing, and therefore UX.

Homework -- Assignment 3

Write up your field notes from today's experiments -- the 5 second test, your to-do using-and-brainstorming session, the Heuristic Inventory form, and your after-engagement reflections.

Design a mockup to-do version 2.0

If you're not familiar with to-do lists, here is another to-do list to compare features with.

You might also search for to-do list programs, but keep in mind your goal isn't to recommend one that already exists but to mock up your own idea about what a to-do list should look like.

Deliverable(s)

  1. Blog post your field notes.
  2. Use Figma or Canva or some other mockup tool to mock up a better to-do list tool, where "better" means what you think is better.
    Or
    Try to get ChatGPT to make a better to-do tool. My original prompt was: "using JavaScript, CSS, and html write a To-Do script that lets a user add, mark as complete, delete, and sort items to a list and save that list locally." (A stripped-down sketch of the kind of thing such a prompt returns appears after this list.) In retrospect, I think specifying drag and drop for the sorting mechanism would have been a good idea. The difficulty here will be in describing what you want clearly enough to get ChatGPT to make it for you. You may need to try again and perhaps again. If you get something cool enough, you might use it as the object of analysis for your final project.
  3. Upload your mockup to your blog or provide a link to your ChatGPT prototype.
    Don't have a website to run your To-Do script on? Get one.
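For reference, here is a stripped-down sketch of the kind of thing such a prompt returns -- not ChatGPT's actual output, just an illustration of the moving parts (add, complete, delete, save locally; sorting and drag-and-drop are left out):

<ul id="list"></ul>
<input id="item"> <button onclick="addItem()">Add</button>
<script>
// Items persist in localStorage: click an item to mark it complete,
// double-click to delete it.
var items = JSON.parse(localStorage.getItem("todo") || "[]");
function save() { localStorage.setItem("todo", JSON.stringify(items)); render(); }
function addItem() {
  var box = document.getElementById("item");
  if (box.value) { items.push({ text: box.value, done: false }); box.value = ""; save(); }
}
function render() {
  var ul = document.getElementById("list");
  ul.innerHTML = "";
  items.forEach(function (it, i) {
    var li = document.createElement("li");
    li.textContent = it.text;
    li.style.textDecoration = it.done ? "line-through" : "none";
    li.onclick = function () { items[i].done = !items[i].done; save(); };  // mark complete
    li.ondblclick = function () { items.splice(i, 1); save(); };           // delete
    ul.appendChild(li);
  });
}
render();
</script>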

Software worth exploring

Resources

Keywords

  • Convenience sample -- human subjects selected merely because they are available, willing, and able
  • Field notes -- Prior, during, and post engagement notes
  • Five Second test -- Usability test to get users' flashbulb impressions of an app
  • Heuristic analysis -- a Usability test that measures a pre-production app against industry standards
  • Mockup -- an image that looks like the final version might. It doesn't work. It's a cheap (money and time) way to figure out look and feel. There are levels of mockup, from low to high fidelity. Low fidelity means hand-drawn, back-of-the-napkin sketches. High fidelity mockups look like they function -- the buttons click, the lights light up, the forms can be filled in -- but nothing actually does anything.
  • Nielsen / Norman Group -- source of important information
  • Persona -- data based generalization of a user type, a model or personification
  • Prototype -- a working version, not yet ready to mass-produce, but close to the final version in look, feel, and function as the design intends.
  • Wireframe -- the bones, as opposed to the skin, of a product pre-production. Typically a wireframe is done before a mockup. It is schematic, conveying functionality, navigation, and workflow. No aesthetics or branding, and it doesn't actually do anything.

Software Evaluation (Week 4)

Usability Data Collection

Technical writers of all kinds need to be software literate. They need to learn tools quickly and choose tools judiciously. UXers commonly collect data via screen-recording software. A participant uses the object of analysis, perhaps thinking out loud as they go, while the software unobtrusively captures their face, voice, and whatever is happening on the screen. This data is valuable for many reasons, not the least of which is that it simplifies differentiating design flaws from technical flaws.

This week's assignment is to review a screen recorder in sufficient detail to let your classmates make an informed decision about whether or not to use it. You are not trying to sell the one you tested, nor diss it if you had a bad experience. Your goal is to help your classmates make an informed decision about the options out there.

In-Class Activity

Preamble

Screen capturing can be very helpful, but it has to be easy to set up and easy to run. It also needs to create editable or taggable output, since it is providing data. If we need a participant to run it on their machine in our absence, then it has to be very easy to install and just as easy to uninstall. It needs a one-button on-switch, and it has to produce output small enough to email.

If we plan to sit participants down in front of our laptop, then the setup process could be a little more involved, but we will still want a one-button launch and output that doesn't take up tonnes of disk space. We will need at least one file for each participant, and that can add up quickly.

We will probably know more about what we want and need once we have used the product to capture a person doing something on screen and looked at the output. We will also need to spend some time actually playing around with the output, to see what can and can't be done with it. We might want to review the help files and tutorials (their quality and availability are criteria of evaluation). We might want to look for reviews already out there, to see if our candidate has an industry reputation of note.

Once each of you has posted your review, so week 5, we will compare notes. We don't have to pick a winner. Keep in mind that if you have gone to the trouble to learn how to use a bit of software, your inclination might be to stick with it, to preserve your investment. Beware the sunk cost fallacy.

We need to assign a software package for each of us to evaluate. We may need to find more options.

Homework -- Assignment 4

Review one of the following pieces of software and show us how you came to your evaluation. You can write your review as text, text and screen captures, a talking head video, or use the software itself to make a video.

George: Hey ChatGPT, Can you suggest some free screen and voice recorder software options?
ChatGPT: Certainly! There are several free screen and voice recorder software options available. Here are a few popular ones:
  1. OBS Studio (Open Broadcaster Software): OBS Studio is a versatile open-source software that allows you to record your screen and capture audio. It's widely used by gamers and streamers but can be used for various recording purposes.
  2. ShareX: ShareX is an open-source screen capture and recording software that offers a wide range of features, including the ability to record both screen and audio. It's highly customizable and offers post-capture editing tools.
  3. Flashback Express: Flashback Express is a user-friendly screen recorder that's free to use. It allows you to record your screen, webcam, and audio. The free version has some limitations but is suitable for basic recording needs.
  4. Screencast-O-Matic: Screencast-O-Matic offers a free version that allows you to record your screen, webcam, and voice. It's easy to use and suitable for creating tutorials and basic recordings.
  5. Apowersoft Online Screen Recorder: This is an online screen recording tool that's simple to use. It doesn't require any downloads or installations. It can capture both screen and audio, making it a convenient choice.
  6. Icecream Screen Recorder: Icecream Screen Recorder is a free and user-friendly tool for capturing your screen and adding voice narration. It also offers basic editing features.
  7. Loom: Loom allows you to record your camera, microphone, and desktop simultaneously. The free plan offers basic recording and sharing features. Useful for teams, educators, and anyone needing to create quick, shareable video content.
  8. CamStudio: CamStudio is a desktop software for Windows computers that can record all on-screen video and audio activity, creating industry-standard AVI video files and optionally turn those into lean, mean, bandwidth-friendly Streaming Flash videos (SWFs) using its built-in SWF Producer.

Remember that the availability of features and ease of use may vary from one software to another. It's a good idea to try out a few of these options to see which one best suits your needs and preferences.

Two other options:

  • QuickTime:
  • Screen recorder for Windows 11 -- $9.99 a year or $14.99 for life, so not free.

We need at least one more because there are 11 of us in the class.

Deliverable(s)

  1. Blog your review of the software you test drove. Positives, negatives, thumb up or thumb down or wiggly-finger indecisive.

It might be helpful to offer a feature table:

Product Name

Feature                        Rating
Free                           Yes / No / Kind of
One button install             Yes / No / Kind of
One button launch              Yes / No / Kind of
Lightweight output             Yes / No / Kind of
Editable output                Yes / No / Kind of
Taggable output                Yes / No / Kind of
Help                           Yes / No / Kind of
Tutorials                      Yes / No / Kind of
Positive industry reputation   Yes / No / Kind of
What else?                     Yes / No / Kind of

View source code, copy and paste, to get code you can edit for inclusion on your blog.
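If you would rather start from markup than from the rendered page, here is a minimal HTML version of the table to paste into your blog and edit; treat it as a starting point, not the canonical code:

<table border="1">
  <tr><th>Feature</th><th>Rating</th></tr>
  <tr><td>Free</td><td>Yes / No / Kind of</td></tr>
  <tr><td>One button install</td><td>Yes / No / Kind of</td></tr>
  <tr><td>One button launch</td><td>Yes / No / Kind of</td></tr>
  <tr><td>Lightweight output</td><td>Yes / No / Kind of</td></tr>
  <tr><td>Editable output</td><td>Yes / No / Kind of</td></tr>
  <tr><td>Taggable output</td><td>Yes / No / Kind of</td></tr>
  <tr><td>Help</td><td>Yes / No / Kind of</td></tr>
  <tr><td>Tutorials</td><td>Yes / No / Kind of</td></tr>
  <tr><td>Positive industry reputation</td><td>Yes / No / Kind of</td></tr>
</table>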

We could combine all your tables and thus create a decision matrix.

Resources

Keywords

  • Data collection -- any method of acquiring information from which generalizations may be drawn, an inductive thought process
  • Decision matrix -- a consistent way of storing data for comparison among competing options
  • Product review -- choosing one product over another based on a disciplined comparison of identified features / trade-offs
  • Screen recording -- software-driven process of acquiring a record of a person doing something on a screen for purposes of usability analysis. It may be possible to make a user experience assessment based solely on words said or facial expression, especially if using think-aloud protocol, but often a post-action interview is needed to learn what a person was feeling as they were doing.
Collecting Data via Screen Recording (Week 5)

We are engaging in exploratory research this week. We don't have a hypothesis we are trying to disprove. We are just practicing gathering data to learn how to do it and what to do with it once we have it. We will need live subjects, though since this is just practice and we have no plan to publish, we don't need IRB approval. We will, however, obtain informed consent. We will, in other words, make this as close to real as a strictly practice session can be.

In-Class Activity

You need an object for analysis. You could use the ChatGPT To-Do app from two weeks ago or one someone else made. Or, since we will be using iCollege's gradebook to discuss personas, you might use it.

For a subject, you need a friend, relative, or classmate who is willing to help you out. They need to know what they are doing, why, and how long it will take. So you need to figure all that out before reaching out. I think we should agree not to use each other, because we are a special class of subject, being as we are all in ENGL 8122. That fact makes us different in potentially significant ways.

How do you want to proceed?

George: Hey ChatGPT, You are a User Experience Research expert. You need to teach some new researchers about screen recording as a way of gathering data about a user using an app. Provide an introduction to the process and then a list of 5 best practices. Please format in HTML

ChatGPT: Sure, here's an HTML-formatted introduction and best practices guide for using screen recording in user experience research:

Introduction to Screen Recording in UX Research

Screen recording is a valuable tool in user experience (UX) research, providing a direct view of how users interact with an app. This method captures the user's actions on the screen, including clicks, scrolls, and navigation patterns, offering insights into user behavior, preferences, and challenges. It's particularly useful for understanding the usability of an app and identifying areas for improvement.

Best Practices for Screen Recording in UX Research

  1. Obtain Consent: Always secure explicit permission from users before recording their screen and audio. Explain the recording's purpose and how it will be used.
  2. Focus on Key Tasks: Guide users to perform specific tasks that align with the research objectives. This approach ensures focused and relevant data.
  3. Encourage Continuous Verbalization: Instruct users to verbalize their thoughts, feelings, and reasons for actions continuously during the session, as part of the think-aloud protocol.
  4. Avoid Influencing the User: Let users interact with the app naturally. Minimize guidance to prevent influencing their behavior and ensure authentic feedback.
  5. Analyze for Insights: Post-recording, analyze the screen actions and verbalizations for patterns that reveal usability issues or areas for improvement.

After reviewing what ChatGPT offered, I added "Please do that again but include some advice about think aloud protocol." It gave me what it gave before, plus the Encourage Continuous Verbalization bullet.

I would add a couple of things to the AI's advice. When you are obtaining consent, you need to tell participants who you are, what you are doing, what you are asking them to do, how long it will take, how you are collecting the data, and what you will do with the participant's data (how it will be analyzed, but also how it will be stored, for how long, and who might have access to it), as well as what's in it for the participant, or otherwise why they would want to do what you are asking them to do. While you are drafting this, though you are practicing with a friend and so will likely be very informal, keep in mind that you plan to do this on a wider scale, so you might want to draft a script that is far more professional than friendly, just for practice.

If you are planning to have more than one participant, you need to ensure consistency of context. Each participant should get the same instructions, the same information, and the same amount of time in the same conditions to do what you are asking them to do. Each participant should have the same context. You should write out a preamble and explanation of the process and perform it in the same way for each participant. You need to avoid context interference.

You should also have a thank-you message after the event, and it should be the same for each participant.

Homework -- Assignment 5

Enlist a friend (or 2) to spend 10 (?) minutes using an app while you record them using it. Don't tell them anything about the software, just the process: that you will turn on a screen recorder, record what they do and say for ten minutes, and then ask them some post-experience questions. Reassure them this has nothing to do with them personally; it is not a test of their abilities or intelligence or anything else. This is just about the software, and there are many testers contributing data, so no pressure of any kind. Remind them to speak freely as they use the software.

After the recording, do a post-engagement review: play the video back and ask the participant to comment whenever they feel like it. Stop the video where they want to speak and record what they said and where they said it.

Post-engagement interview -- how did they feel about the whole experience? Do they have any advice for you? You will not use any of this advice in subsequent iterations of this test unless what they found is a fatal flaw. You want each iteration to be done in an identical fashion. But you also want to learn over time, so the next time you do something like this, use what they taught you.

Deliverable(s)

A blog post in which you share your lab notes -- including perhaps screen captures of the recording -- and discuss the experience. What did you learn? Don't post the video, but keep it handy if you want to share it with the class or remind yourself for future screen recording sessions.

Resources

Keywords

  • Data annotation
  • Exploratory research -- no specific hypothesis, just watching without judgement or expectation
  • Pre-engagement questionnaire
  • Post-engagement review
  • Population --
Designing Questionnaires (Week 6)

How was "gender inequality" defined? (link)

The researchers asked, "Do you agree or disagree with the following statement: gender inequality doesn't really exist." Why "really"? Is that a dog whistle? Consider the possible phrasings:

  1. gender inequality doesn't exist
  2. gender inequality exists
  3. gender equality doesn't exist
  4. gender equality exists

Asking the right questions correctly is difficult because the same words can mean different things to different people. "Really" could suggest conspiracy to some people, elicit anger from others, and leave others completely unfazed. Does that difference influence the data?

This week is about designing questionnaires in general. Next week we will design a questionnaire specifically for creating personas for iCollege. I've chosen iCollege merely for convenience. The population is known. We have ready access to a subset of that population, English TAs. We can rely on goodwill to get data. I'm open to alternatives. Maybe you want to focus on your final project.

Questionnaires are the backbone of nearly all forms of human-subjects research. While passive user data gathering may play a greater role in User-Experience research overall, given the sheer amount and granularity of it, user perception of experience can only be learned from direct questioning. Also, because, as we all know, companies are constantly asking for feedback via questionnaires, knowing how to create an informative questionnaire is a marketable skill.

One could easily spend an entire semester on questionnaires alone. What follows is superficial. If you plan on doing any kind of human-subjects research for real, you will need to dig much deeper.

Questioning users is often an ongoing process. Some places keep a database of questions and an archive of past answers. They also keep a list of past respondents and profiles of respondents that might be used to find other respondents. Questionnaires are not a one-time thing in most cases, so you should design with as broad a horizon as possible, but of course stay focused on the task at hand.

Because questionnaires can be expensive and time-consuming, it might make sense to think of the practice as a process, where you start by asking a select audience a few questions designed to identify an actionable problem, design what you think is a solution, ask a few more questions to discover if the solution solves the problem without creating new ones, and then create a questionnaire that asks a wider range of users to confirm or deny the efficacy of the solution. In other words, a full-scale questionnaire probably shouldn't be a fishing expedition, because you don't want to spend money and time learning nothing useful. You also don't want to burn bridges with your potential informants by having to go back to them for a redo.

Constructing good hypotheses is challenging.

George: What are the key concepts of a hypothesis in general and then specifically in a User Experience context?
ChatGPT: A hypothesis, in general, is a proposed explanation for a phenomenon, based on limited evidence as a starting point for further investigation. It is a fundamental concept in scientific research, forming the basis of experiments designed to verify or falsify it. In any context, a hypothesis typically adheres to these key concepts:

Key Concepts of a Hypothesis

General Concepts

  • Testable: Possible to confirm or deny through experiments or observations.
  • Falsifiable: Can be disproven if contradictory evidence is found.
  • Based on Existing Knowledge: Builds upon previous research and theories.
  • Predictive in Nature: Makes predictions about outcomes under specific conditions.
  • Specific and Clear: Precise and unambiguous statement of the expected relationship between variables.
  • Logical: Consistent with existing knowledge and scientific principles.

Concepts in a User Experience Context

  • Focuses on User Behavior and Preferences: Predicts how design changes affect user behavior or satisfaction.
  • Informs Design Decisions: Guides design choices and tests design element effectiveness.
  • User-Centric: Based on user needs, goals, pain points, or behaviors.
  • Driven by Usability and Experience Goals: Relates to usability, engagement, conversion rates, or satisfaction.
  • Empirical Testing: Tested via usability testing, surveys, A/B testing, and analytics.
  • Iterative: Evolves with increased understanding of user interactions with the product.

    Questionnaires require rhetorical thinking

    Questionnaires are rhetorical acts, not just the motivated ones designed to gather "evidence" to support a desired outcome but even those that seek to be objective. Who is asked and in what circumstances influences the data. Who is thought to be asking can also influence how a person answers. So you need to answer a number of rhetorical questions (audience, ethos, context) before you start designing a questionnaire.

    Assuming you know what question you need answered, the rhetorical questions are along the following lines.

    • who are you seeking information from (target population)
      important characteristics -- role, knowledge, values
      who are you not seeking information from
    • what is a representative sample for your target population
    • what is the smallest viable N (number of respondents) -- see the sketch below
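
    Here is a minimal back-of-the-envelope sketch (in Python) for thinking about the smallest viable N. It assumes you are estimating a proportion at a 95% confidence level (z = 1.96) and uses the most conservative guess, p = 0.5; the figure of 120 English TAs is made up for illustration.

        import math

        def minimum_n(margin_of_error, population=None, z=1.96, p=0.5):
            """Smallest viable N for estimating a proportion."""
            n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
            if population is not None:
                # finite-population correction: small populations need fewer respondents
                n = n / (1 + (n - 1) / population)
            return math.ceil(n)

        # Hypothetical: ~120 English TAs, and we can tolerate +/-10% error.
        print(minimum_n(0.10, population=120))  # -> 54

    The point is not the formula but the habit: decide what precision you need before you recruit, not after.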

    There are also logistical questions that go to context:

    • where will you find representatives
    • how will you recruit them
    • how will you gather the information
    • under what circumstances
      • their frame of mind
      • their perception of you or your agents -- think "white-coat effect."
        Back when people went to malls, marketers would station questionnaire-takers in high-traffic spots, and invariably the person with the clipboard was an attractive, white-toothed charmer.
      • the effect the topic as well as the questions will have on their mood and thus their answers and thus your data

    You need to screen potential respondents to find relevant and willing candidates. Once you have a list of useful respondents, you need to run at least one pilot to test the effectiveness of your questionnaire.
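
    A screener can be as simple as a few filter questions applied to a candidate list. Here is a minimal sketch, assuming made-up screening criteria (a hypothetical iCollege study that wants weekly-or-better users who agreed to a follow-up):

        candidates = [
            {"name": "A", "uses_icollege": True,  "frequency": "weekly", "willing": True},
            {"name": "B", "uses_icollege": True,  "frequency": "rarely", "willing": True},
            {"name": "C", "uses_icollege": False, "frequency": "never",  "willing": True},
        ]

        RELEVANT = {"daily", "weekly"}  # what counts as "relevant" is a research decision

        respondents = [c for c in candidates
                       if c["uses_icollege"] and c["frequency"] in RELEVANT and c["willing"]]
        print([c["name"] for c in respondents])  # -> ['A']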

    Your goal is to ask the fewest questions possible to learn what you need to learn.

    Types of questions

    Close-ended questions limit what a person can say. The advantage is that you get numbers, and numbers are unambiguous -- although how they relate to a subjective or intersubjective reality (to say nothing of objective reality) depends on how effective the questions themselves were and who you asked (and who actually answered, since not everyone invited will respond). Don't confuse clarity for accuracy. Close-ended questions are also quicker for a participant to answer because they don't have to compose an answer, just choose one. They require less thought, though of course you want to write them in such a way that your participant selects intentionally and meaningfully. You should always consider offering a Not Applicable (NA) and a neutral option so a person isn't forced to contribute data they don't stand by, or to skip a question, leaving you not knowing what the absence of data means (objection, abstention, distraction). A sketch of how close-ended answers become data follows the list below.

    1. A/B (limited choice)
    2. Select one from the following list (limited choice but more options)
    3. Select all that apply from a list (less limited choice, though still limited unless you add a write-in category)
    4. True/false/don't know (knowledge)
    5. Agree/disagree/don't know (opinion)
    6. Likert Scale (degree of certainty or level of intensity)
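
    To see why close-ended answers are convenient, here is a minimal sketch of turning Likert responses into numbers. The label-to-number mapping is an assumption on my part; note that NA is counted separately rather than scored, so the absence of data stays interpretable:

        import statistics
        from collections import Counter

        LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
                  "agree": 4, "strongly agree": 5}

        responses = ["agree", "strongly agree", "neutral", "NA", "agree"]

        scores = [LIKERT[r] for r in responses if r in LIKERT]
        nas = Counter(r for r in responses if r not in LIKERT)

        print(statistics.mean(scores), statistics.median(scores), nas)
        # -> 4.0 4.0 Counter({'NA': 1})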

    Open-ended questions give participants space to say what they think. You get words rather than numbers and you need to be able to code the answers if you plan to generalize. Coding (labeling each item, whether word or phrase or sentence or whatever unit) requires some consistency among the answers. If you can limit an open-ended question to a specific user-type, a persona or a role, and you see similar answers among them, then you might be able to make an argument without further coding.

    Open-ended questions can provide subjective insights, which makes them great for understanding a single participant. They can also provide useful material (salient quotations) when building personas, but they don't typically generalize very well, so you almost always need a strong context, built out of supporting limited-choice questions.
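
    Coding is human work, but tallying the codes is mechanical. A minimal sketch, with hypothetical answers and hand-assigned labels:

        from collections import Counter

        # Each open-ended answer has been read and hand-labeled with one or more codes.
        coded_answers = [
            ("I only use it to post grades",     ["grading", "minimal use"]),
            ("The gradebook layout confuses me", ["grading", "pain point"]),
            ("I like the discussion boards",     ["discussions"]),
        ]

        tally = Counter(code for _, codes in coded_answers for code in codes)
        print(tally.most_common())  # recurring codes are candidates for generalization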

    Methods for administering range from clipboard and pencil in a high-traffic (and relevant) area, to a SurveyMonkey-type email list, to a popup box on a well-trafficked website. Generally speaking, the more responses the better, though if the respondents are the right people you may not need very many: a truly representative sample can reduce your dependence on the law of large numbers.

    Mistakes to avoid

    1. Skewed sample -- asking only those who will say what you want to hear or those whose answers will not be representative of the general population for any number of reasons or people who don't know what you are trying to find out
    2. Leading questions -- where the preferred answer is implicit
    3. Questions based on problematic assumptions -- like a shared context that isn't actually shared
      Carefully scrutinize all your assumptions
    4. Irrelevant questions -- if an answer doesn't contribute directly to proving or disproving the hypothesis, it's not relevant
    5. Too many questions -- a person's level of commitment to the project needs to be taken into account, what's in it for them?
    6. Questions that require a context that hasn't been provided or are answered in a context not inspected
    7. Complex questions -- two questions presented as one where answering the question amounts to admitting a premise that wasn't specifically asked -- have you stopped binge drinking?
    8. Questions that have overlapping responses -- options should be mutually exclusive. If A and B are, or can be interpreted as, insufficiently distinct, the respondent will be confused
    9. Double-barreled questions -- two separate questions expressed as one -- if there is an "and" in the question it is probably two questions
    10. Questions that lead to more questions that can't be followed up on -- don't ask, "Are you willing to participate in a follow-up interview?" if you know you won't follow up.
    11. Asking for opinions when objective measures are possible -- unless you want to know what people think to contrast with what is real
    12. Asking for opinions that any respondent would not have a considered opinion about -- encouraging fabulation
    13. Order effects -- unintended influence over responses, where one question puts the respondent into a frame of mind that then influences their answers to the next. You can control for this by scrambling the question order before presenting it, so different respondents are asked the same questions but in different orders. Of course, if some questions are intentionally clustered, you need to keep them together (see the sketch after this list)
    14. Failing to take respondents' feelings and attitudes into account. If you ask a potentially embarrassing question, people may lie to save face. People lie to themselves too, so if you ask a question that they might be inclined to answer aspirationally rather than realistically, you should try to ask another that will control for that. Another potential problem is that people who want to please or impress will say whatever the questionnaire appears to want. As well as the opposite. And so on.
    15. Making the respondents uneasy or upset or harried or otherwise in a counter-productive frame of mind or doing so intentionally to get desired outcome or knowing they will be and taking advantage of that
    16. Vague quantifiers (often, usually, frequently, rarely)
    17. Loaded language, dog whistles
    18. Jargon or technical terms that some respondents might not know
    19. Double negatives
    20. Idioms that non-native speakers might not know
    21. Absolute terms like "always" and "never" exclude nuance and might force a person to overstate their feelings or ignore the question and leave an uninterpretable gap
    22. Asking more or less the same question in two different ways when the variant doesn't illuminate the original
    23. Asking a question that likely contradicts an answer to another one without meaning to do so -- unless you intend the contradiction, as when you want to know whether a person fully understands something or you are trying to catch someone lying
    24. Asking Y/N questions when a range of answers is plausible
      Don't pressure your respondent to say what you want to hear
    25. Not giving a "don't know" or "not applicable" option, thus forcing a respondent to add data that isn't accurate or leave a gap in the data that isn't interpretable
    26. Offering a range where the difference among options is ambiguous or poorly calibrated -- "Please rate each of the following statements on a scale of 1 to 5, where 1 indicates strongly disagree and 5 strongly agree." What do 2, 3, and 4 mean?
    27. Inconsistent scaling -- don't ask for a range from 1 to 5 and then from 1 to 10 at another time
    28. Ignoring the mode of administration -- online is not the same as face-to-face
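
    Controlling for order effects (mistake 13) can be automated. A minimal sketch that shuffles question order per respondent while keeping intentionally clustered questions together (the tuple marks a hypothetical cluster):

        import random

        questions = ["Q1", "Q2", ("Q3a", "Q3b"), "Q4"]  # the tuple is a cluster

        def scrambled(qs):
            order = list(qs)
            random.shuffle(order)          # a fresh order for each respondent
            flat = []
            for q in order:
                if isinstance(q, tuple):   # clusters stay together, in internal order
                    flat.extend(q)
                else:
                    flat.append(q)
            return flat

        print(scrambled(questions))  # e.g., ['Q2', 'Q3a', 'Q3b', 'Q1', 'Q4']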

    These mistakes are easier to make than they may seem. Pilot test a questionnaire with several different types of respondents before you ship it.

    Typical parts of the questionnaire process

    1. Hypothesis -- statement that will prove true or false based on the data generated by the questionnaire
    2. Planning -- identify population, determine N, recruit (might need small preliminary questionnaire for this)
    3. Population -- all possible users
    4. Sample -- the subset of the population you try to reach
    5. Sample N -- the number of users who actually responded; an indicator of reliability
    6. Representative sample -- enough respondents, in the right mix, to stand in for the whole population; you need to account for anomalies, special groups, self-selected samples, and so on
    7. Design -- questions, order, what kind of data comes back and how to visualize and use it
    8. Questionnaire (a minimal structure is sketched after this list):
      1. Preamble -- orient the respondent to the questionnaire: why the research, what the process is, what's in it for them, how data will be stored, for how long, and who will have access
      2. Consent -- we don't plan to publish or distribute, so we don't need IRB approval, but you always need informed consent
      3. Questionnaire
      4. Thank you -- close by thanking respondents for their time
      5. Follow up -- what happens next for you, for them, if anything
    9. Work with data
    10. Decide on next steps
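
    If it helps to see the parts of item 8 as a structure, here is a minimal sketch of the questionnaire as a data type; the field names are mine, not a standard:

        from dataclasses import dataclass

        @dataclass
        class Questionnaire:
            preamble: str       # why the research, what's in it for them, data handling
            consent: str        # informed consent text -- required even without IRB
            questions: list
            thank_you: str
            follow_up: str = "" # what happens next, if anything

        q = Questionnaire(
            preamble="We are studying how English TAs use iCollege...",
            consent="Your answers will be stored anonymously for one semester...",
            questions=["Do you use iCollege?", "How often?"],
            thank_you="Thank you for your time.",
        )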

    In-Class Activity

    What would you like to do? Perhaps we might brainstorm questions for people who use iCollege's grading tool.

    Keywords

    • Assumption(s) -- what is taken for true (a priori), the outcome of previous research. You need to list your assumptions and try to ensure that you don't have any unnoticed assumptions. An example might be, "There are different types of iCollege users among English TAs." A related assumption might be, "If we ask only the TAs we collectively know, we won't have a skewed sample." This could be debatable. If our population is just rhetcomp TAs, that might be a non-representative sample of ALL English TAs.
    • Hypothesis -- a tentative answer to a research question. It may take an if-then format. It should be testable, falsifiable, based on existing knowledge, predictive, specific and clear, and logical. An example might be, "If we ask the right questions of English TAs regarding iCollege, we will discover data relevant to persona creation"
    • Consent -- a respondent needs to know in advance what they are being asked to do or say, why, what will happen with the information they provide, and how it will be stored and used in the future. They should also know what is in it for them.
    • Research question -- what, why, or how something occurs. The question should be simple (one issue at a time) but might consist of multiple questions. The answer, or hoped-for answer, should not be embedded in the question any more than the research should predetermine the data that comes back. An example might be, "What are the different kinds of iCollege users among English TAs?"
    • Screening questionnaire -- preliminary data gathering to identify the right respondents: relevant, available, willing.

    Homework -- Assignment 6

    Read the excerpts from the Resources. Find some other advice about designing and administering questionnaires in whatever format you like: text, video, or an interview with an expert.

    Deliverable(s)

    Blog what you learned about designing and administering questionnaires. If you have personal experience, please feel free to share that with us as well. If you have some initial questionnaire questions for your final project, share them with us.

    Resources

    Personas (Week 7)
    UXBooth : Creating Personas

    As Aristotle explained, rhetoric isn't about addressing individuals, it's about addressing types of people. His types were the young, the old, the rich, the powerful, the educated -- all men, of course. He didn't have any data except personal experience and cultural expectations, stereotypes, in other words. Personas are data-based descendants of Aristotle's audience member stereotypes.

    There are several ways to divide an audience by "type." You can use roles, assuming for example that accountants might have different mental models and default approaches than, say, people in HR. Or that graduate assistants might have different needs from professors because of differences in their roles. (This assumption would fail most spectacularly in English Departments, where both are "Teacher of Record" and in that sense identical.) An alternative to roles is tasks, where what a person needs to do is used to design the tool they will be provided. If the task is adding or deleting a student from a roster, then who is doing the adding or deleting doesn't matter. The task is performed by an undifferentiated process.

    You can design based on "attitude" -- level of patience, motivation, level of commitment to the outcome: casual user, fan, maven.

    You can use "experience" or familiarity: novice, apprentice, card carrying member, seasoned member, senior member, retired. A design based on this pattern of thinking would likely reveal different opportunities and processes based on how much bench time or what kind of certification a given user has.

    All of these approaches -- role, task, attitude, experience -- are persona-adjacent but not personas per se because they lack individuation; accountant as role is different from Ms. Jones, CPA.

    A persona is a research-based, data-driven representation of a segment of an intended population (audience), depicted as if a real person, a simulated individual that is in fact a generalization. A persona is not a stereotype. It is not a character, and even less a caricature. It also isn't an actual person. It is an artifact and in a sense a genre of technical writing.

    Personas tend to have a name, a photograph of someone who looks like what the data suggests, a biography based on demographic, geographic, and psychographic data, a context, a primary goal, a set of attitudes, beliefs, identifications, and mental models (how they think something works, which may or may not resemble reality), an emblematic quotation, and sometimes a list of brands they identify with, because those brands also have personas. So if they are Mac people, for example, you know that much more about them. However, brands guard their personas carefully because they offer insider information: knowing someone uses a Mac doesn't tell you anything about how Apple understands its personas. Businesses use personas to make sure that everyone writing for the company -- or designing for the company -- has a clear sense of who they are writing to and working for, so that employees are not addressing some abstract, disembodied, undifferentiated, universal "audience member" or "user."

    I visited MailChimp a few years back and they had their personas printed as life-size cardboard cutouts with note cards taped to the wall around them.

    George: Hey ChatGPT, What kind of data does one need to create a persona?
    ChatGPT: Creating a persona involves collecting a mix of qualitative and quantitative data that helps to form a picture of the typical users of a product or service. This data is then used to generate a fictional character that embodies the characteristics of a larger group of users. Here is the type of data typically gathered to create a persona:

    Persona Creation Data Requirements

    Demographic Information

    • Age
    • Gender
    • Education level
    • Occupation
    • Income range
    • Family status
    • Geographic location

    Psychographic Information

    • Personality traits
    • Values
    • Attitudes
    • Interests/hobbies
    • Lifestyle

    Behavioral Data

    • Purchasing behaviors
    • User goals and tasks they want to complete
    • Challenges and pain points in completing those tasks
    • Brand interactions and loyalty
    • Usage patterns for current products/services

    Technological Proficiency

    • Comfort with technology
    • Preferred devices and platforms
    • Frequency and context of device usage

    Contextual Information

    • Specific conditions under which they would use the product or service
    • Environmental factors (e.g., noisy office, private home)
    • Social context (e.g., alone, with family, at work)

    Motivations and Goals

    • Primary objectives they want to achieve with the product/service
    • Secondary or long-term goals

    Attitudes Towards Current Solutions

    • Satisfaction with current products or services
    • Complaints and wish-list items
    • Switching behavior from other solutions

    Information Consumption

    • Preferred sources of information
    • Trusted recommendations (e.g., from friends, experts)
    • Media consumption habits

    Scenario-Specific Details

    • Task flows and decision-making processes related to the product/service
    • Specific use cases or scenarios where the product/service would be used

    Collecting this data typically involves a variety of research methods, such as user interviews, surveys, field studies, direct observation, and diary studies. Additionally, analyzing data from customer relationship management (CRM) systems, web analytics, and social media can provide insights into existing and potential user behaviors.

    "Enterprise Emma," A ChatGPT Persona Example
    George: Hey ChatGPT, Can you give me an example of a persona used by a well known software company?
    ChatGPT: Sure, personas are typically confidential to the companies that create them because they reveal strategic insights into their user experience approach. However, let's take a hypothetical example inspired by what might be used by a well-known software company like Microsoft for its Office suite of products.

    Persona Example: "Enterprise Emma"

    Professional headshot of Emma
    A professional headshot of a woman in business attire.

    Background:

    • Name: Emma Clark
    • Age: 38
    • Occupation: Marketing Director
    • Education: MBA in Marketing
    • Location: Seattle, Washington

    Demographics:

    • Gender: Female
    • Income: $110,000/year
    • Family: Married with two children

    Technological Profile:

    • Highly proficient with productivity software
    • Uses a high-end laptop and has multiple devices (tablet, smartphone)
    • Prefers integrated software that syncs across all devices

    Goals and Needs:

    • Needs to create professional marketing materials efficiently
    • Looks for advanced features to analyze market data
    • Wants to collaborate easily with her team members who are sometimes remote

    Challenges:

    • Balancing the demands of a fast-paced marketing department
    • Coordinating projects with multiple team members in different time zones
    • Maintaining brand consistency across all marketing materials

    Behavior:

    • Heavy user of email, word processing, and spreadsheet software
    • Relies on collaboration tools for team projects
    • Frequently gives presentations to executives and stakeholders

    Psychographics:

    • Motivated by career advancement and professional development
    • Values efficiency and productivity
    • Enjoys leading teams and developing marketing strategies

    Brand Interaction:

    • Loyal to Microsoft due to long-term use and familiarity
    • Open to new technology if it promises to improve productivity

    Information Sources:

    • Regularly reads industry publications and follows thought leaders on LinkedIn
    • Attends webinars and conferences on marketing trends and software advancements
    • Participates in professional networks both online and in-person

    Scenario of Use:

    • Uses Microsoft Office to coordinate marketing campaigns
    • Relies on Outlook, Word, Excel, and PowerPoint for daily tasks
    • Utilizes Microsoft Teams for managing team's workflow and meetings

    Quote:

    "I need tools that keep up with the pace of the marketing world and help me stay ahead of the competition."

    Some more persona examples

    Personas tend to be an ongoing effort, a consequence of gathering user data constantly. The personas are updated whenever new data suggests a revision. If the product already exists, and it doesn't yet have personas, one would try to learn from everyone in the purchaser database. If the product isn't yet available, one tries to learn from everyone who is a member of the target population. Obviously "everyone" is impossible, so a representative sample would be the next best thing. Once you have identified real members of the target population, names and email addresses, you can take the next step.

    In-Class Activity

    We are going to do what follows strictly as a thought experiment. When I wrote this section a couple months ago I thought we might actually design and administer questionnaires to create these personas, but now I think we just want to engage at the level of play.

    To make personas, we need to gather data on a population. To do this with any kind of integrity, we need as large a population as possible. Given that we are in the English Department, most of us are teaching, and iCollege is the university's Learning Management System, collectively we know quite a few iCollege users. So let's use English Graduate Students as our population and iCollege as our object of analysis.

    To gather the data, a questionnaire makes sense. If we had access to the server's analytics, we might be able to identify clusters of data from which we could create personas. But we don't, so we have to contact people and get their cooperation. What question might our questionnaire's data answer? Let's go with: How many different iCollege user-types can we identify, and how might we describe each type? This is a two-part question, and yet just the tip of an iceberg. Often the first questionnaire is a short one designed to identify willing and relevant participants. But since we are just practicing, we will see what we can make with this. Questionnaires shouldn't have more than 20 questions if at all possible (especially if a potential respondent hasn't agreed in advance). Here are some to get us started:


    1. Do you use iCollege? Y N
    2. How often? (1 to 5 times a semester, a month, a week, daily)
    3. What's your primary purpose for it?
    4. What parts do you use?
    5. What do you ignore?
    6. What's your favorite thing?
    7. What's your least?
    8. What's your attitude toward it -- reluctant compliance, compliance, happy enough user, enthusiastic user
    9. If iCollege were a person, what would you say to them?
    10. If you had to characterize iCollege in one sentence, what would that sentence be?
    11. What suggestions do you have for a future redesign?
    12. Is there a better alternative to iCollege?
    13. Do you use iCollege in any creative "work around" ways that other users may not know about?

    What else should we ask of participants to help us create iCollege personas (iCollegers)?


    What order would you put these questions in? Why that order?

    How would you build this survey (SurveyMonkey, Qualtrics, ChatGPT)?

    • How would you distribute it?
    • How many people can we send it to? (name and email list) Friends are more likely to respond, but may constitute a skewed sample.
    • How many do you need to come back to have a representative sample?
    • How long can we wait?
    • What do we do with the data? (Never ask an idle question: if you can't use the data or don't know how to, don't ask for it.) One possible answer is sketched below.
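
    If the close-ended answers are encoded as numbers, clustering can suggest how many user-types the data supports. A minimal sketch, assuming scikit-learn is available and using made-up encodings (frequency 1-4, attitude 1-4, breadth = number of iCollege tools used); each cluster center becomes the skeleton of a persona:

        import numpy as np
        from sklearn.cluster import KMeans

        # Rows are respondents; columns are numeric encodings of questionnaire answers.
        answers = np.array([
            [4, 4, 6],   # daily, enthusiastic, uses many tools
            [4, 3, 5],
            [1, 1, 1],   # a few times a semester, reluctant, gradebook only
            [2, 2, 2],
            [1, 2, 1],
        ])

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(answers)
        for label, center in enumerate(km.cluster_centers_):
            print(f"persona seed {label}: freq={center[0]:.1f}, "
                  f"attitude={center[1]:.1f}, breadth={center[2]:.1f}")

    Choosing the number of clusters is itself a research judgment; in practice you would try several and keep the grouping that is interpretable.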

    Keywords

    • Demographic -- shared characteristics of a population segment: age, gender, income, education, occupation, cultural background, linguistic background, special interests, special requirements.
    • Persona -- see above
    • Psychographic -- personality traits, values, attitudes, interests & hobbies, lifestyle. Two people might belong to the same demographic but to different psychographics. Same with personas.
    • Scenario -- a narrative that depicts a persona interacting with the product, typically to dramatize an issue
    • Task flow -- step by step visual walk through (typically a flow chart) of a persona doing something specific with the product
    • Use case -- a step by step description (not a narrative) of how a digital object is used to perform a specific task in a specific context -- about the object rather than the person(a) using it.
      You might ask an AI to give you one specific example of a scenario, a task flow, and a use case that highlights the differences among the three ideas.

    Homework -- Assignment 7

    If your case study involves creating personas, you might develop a questionnaire to help you identify and sort different kinds of users. Or you might imagine a couple of different kinds of users and then develop a questionnaire to verify or deny your imaginings. If you go this route, share your plans and your draft questionnaire(s) with the class.

    If you aren't yet ready to directly engage your case study, here's an alternative assignment.

    AI Persona Experiment

    Deliverable(s)

      Post something about persona creation.

    Resources

    Interviewing (Week 8)

    Interviews can be an important part of data collection. If you can follow up a questionnaire with an interview, you can gain greater insights than a questionnaire alone can provide. Your questionnaire may (by accident or design) raise questions it can't answer, and interviewing might provide those answers. An interview can also open a whole new avenue of data, picking up where the questionnaire left off or going in an unanticipated direction the questionnaire uncovered. Interviews can be even more valuable if combined with on-site observation. Being in the place where users typically use your product gives you a chance to learn from their environment. It also gives you a chance to see whether what they do aligns with what they say. It is of course possible to conduct interviews over the phone or via video conference, without any direct observation. And it is possible to observe without interviewing, via remote screen-capture software and/or server data. There is, however, nothing like spending time in a co-present conversation with people to develop a thick description and better understanding of their experience. While interviewing requires some screening, typically performed via questionnaire, you can conduct interviews that are not follow-ups to a questionnaire per se.

    Although interviews can be valuable, recruiting and scheduling interviewees can be very time-consuming. There is something called guerrilla usability testing, where you just take your laptop to the nearest Starbucks and sit down next to random people. That can work, but it has limitations. The Starbucks next to GATech probably won't have the same people as the one on the first floor of Georgia's Own, and the people who habituate a local independent are probably different from both. In any of those three places you are likely to make inferences about which person is most approachable before you approach -- so not exactly random. See also "What is guerrilla usability testing?" Nevertheless, if the product requires little learning to use and pretty much everyone who uses it will use it in the same way, guerrilla testing may be preferable.

    Before you start looking for interviewees, you need to know why you are conducting interviews, what questions you need to ask, and what kinds of people you need to ask them of. Consider where the interview falls in your research process. Is it a follow-up to a questionnaire? Is it in lieu of one? Is it a separate line of inquiry? The next step is to identify a specific goal. You might formulate this as a hypothesis. What do you need to prove or disprove? If your goals are less pointed -- say you just want to get a feel for who your users are -- then you might take a searchlight rather than a laser approach: open-ended questions designed to encourage monologues. The more they say, the more you have to learn from. Always keep in mind, however, that a specific individual's experience is theirs. You can't generalize from a single point of view.

    Once you know why you want to interview, and you have identified and recruited interviewees, you should write a conversation guide (not quite a script) and ask a colleague to help you vet and practice it. Are you asking the right questions? Are you asking them in the right order? While you can't control the order in an interview the way you can with a questionnaire, you still need to make sure that you aren't leading the interviewee to say specific things. You also want to see if the questions allow for as natural a conversation as an interview can be.

    You might even consider role playing with a colleague to see where a conversation might go under various scenarios. You don't want to go in with the goal of obtaining a specific data set, but you don't want to waste anyone's time either.

    You have to let the interviewee talk and so you may have to adjust and accommodate, be prepared to improvise. However, if each interview is a unique conversation, you won't be able to generalize. You may be able to create narrative arguments out of what you learned from a unique encounter, but recurring themes will give you a stronger place to argue from. So you will likely want to ask the same questions in much the same way.

    Even after you've found and managed to schedule a plausible interviewee, they have to be in the right frame of mind, neither wary nor eager to please. It helps if you can meet them in their own environment because people are most themselves in familiar surroundings. But then again, they've invited a stranger in so it's not entirely familiar.

    If they are visiting you, they may be different from who they are at home. They will likely be a bit distracted by the new surroundings and be in an atypical frame of mind. If you are paying them to be there, they may be trying to please or impress you with their answers. On the other hand, you don't have to travel and you don't have to leave your own world.

    Regardless of where you do it, interviewing people effectively is a talent that requires both training and experience. It also requires a specific set of social skills. You don't have to be an extrovert. You don't even have to like people. But you do need to know how to listen, how to ask for clarification and elaboration when necessary, how to encourage candid responses, how to empathize with the interviewee, how to keep them focused, but also how to let them wander or rant a bit when they seem to want to, while being able to rein them back in before you lose too much time. You want the interview to feel like a natural but mostly one-sided conversation.

    Listening skills can be developed. The techniques are:

    • Be open -- be encouraging and respectful, especially if you are in their space
    • Pay attention -- don't think about what to say next or get distracted by anything going on around you. Most people can't take notes and listen, so record. If possible, bring an observer with you. They can observe while you engage.
    • Listen to the music and the words -- their tone and attitude will help you understand what they are saying. Frame of mind is an important frame of reference. If they are frustrated, for example, you need to take that into account when parsing what they say, but you also need to acknowledge their feelings and make sure they know you hear them. Never claim to know what they are feeling. This is not about you.
    • Be comfortable with silence -- don't overlap or anticipate or fill in gaps. Just sit there looking attentive.
    • Show you are paying attention -- lean in, nod, say yes and uh-huh. If you can find recordings of interviews like the kind you need to conduct, pay close attention to how the interviewer behaves. Accompany a more seasoned interviewer as their observer and learn from what they do.
    • Give feedback -- don't interrupt but when they stop talking, ask for clarification, summarize what they just said and request confirmation, ask for elaboration (tell me more)
    • Don't judge or comment -- what you think about what you are hearing doesn't matter. Analysis should happen later.
    • Stay in the moment -- be present but remember that this is about what they have to say. You are a conduit.

    You want to avoid interrogation (don't set them up to say what you want to hear); avoid journalism (don't put words in their mouth); avoid HR-style interviewing (you are not hiring anyone). Avoid tech-support responses as well. You are not there to troubleshoot -- if all they have to do is click that button there, your job is to tell the designers to make the button more obvious, not to show the struggling user where the button is. Nor are you a career coach. You are also not there to make friends, although of course you want to be cordial and respectful. When you leave (or they do), it should feel to them like nothing significant happened. What matters most is the information. All the same, you may want to follow up, and in some cases you may want a longer-term association, so don't leave them cold.

    You need a transcript of the interview. Consider using a tool like Otter.ai. If you can, bring a colleague with you to observe unobtrusively while the machine records; it can be very hard to listen, take notes, and engage all at once. If you don't have a colleague, the voice recorder will probably give you everything you need. You could use a camera, but that requires lighting, and the person has to be at home on camera, which is not normal. If you really need screen data, consider using a screen recorder and a think-aloud protocol instead of an interview, or follow up with an interview after reviewing the video.
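
    Once you have transcripts, recurring themes can be surfaced with even crude counting. A minimal sketch; the filename and the theme lexicon are placeholders, and in real work the codes would come from reading the transcripts rather than precede them:

        import re
        from collections import Counter

        transcript = open("interview_01.txt").read().lower()  # e.g., an exported transcript

        # Hypothetical cue words for each theme.
        themes = {
            "grading":    ["grade", "gradebook", "rubric"],
            "navigation": ["menu", "click", "find", "lost"],
        }

        words = re.findall(r"[a-z']+", transcript)
        counts = Counter({t: sum(w.startswith(tuple(cues)) for w in words)
                          for t, cues in themes.items()})
        print(counts.most_common())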

    As part of the scheduling process, you need to ensure the interviewee knows who you are, what you want, what you will ask, why you want to talk to them, how long it will take, who will be there, the fact that you will record, and anything else pertinent to their understanding, in advance, exactly what to expect. After you leave, or they leave, follow up with a thank-you card. Remind them why their help is appreciated and how their information will be stored and used. Be sure they know how to contact you in case they have questions or think of something they wish they had said.

    If UX were a linear process, it would be something like, hypothesis, questionnaire, follow-up interview, conclusions/recommendations. But each research project is a function of the hypothesis, the data required, and the sources of that data. And some projects are ongoing. So thinking of UX research as linear is misleading.

    Given our questionnaire was, theoretically at least, designed to help us develop personas for iCollege's gradebook-users, it would make sense to follow-up with an interview to verify a projected persona. For this to work, we would need to know people who are willing to be interviewed who fit one of our projected personas. This may or may not be realistic. So I thought it might be interesting to see how ChatGPT would do a persona follow up interview.

    George: Hey ChatGPT, As a user experience researcher, you want to interview a person you think is a good representative of a persona the questionnaire you previously administered has identified. The purpose of the interview is to verify or disprove the accuracy of this person's fit with that persona and to modify the persona if the interviewer's responses suggest modifications. What questions would you ask and why would you put each in the order you did?

    ChatGPT: When conducting an interview to verify or potentially modify a persona based on a user's responses, it's important to structure your questions in a way that gradually builds a comprehensive understanding of the user's behaviors, needs, and attitudes. The order of the questions should flow logically, starting from general to more specific, and should be designed to encourage open and detailed responses.

    1. Background and Demographic Information

    Question: "Can you tell me a bit about yourself, including your profession, hobbies, and daily routine?"
    Why: Starting with broad, open-ended questions about the participant's background sets a comfortable tone and provides context for their responses. It helps in understanding if their basic demographic details align with the persona.

    2. Technology Usage and Preferences

    Question: "What types of technology do you use regularly, and for what purposes?"
    Why: This question aims to understand their comfort and familiarity with technology, which is crucial for assessing their fit with a persona that may be defined by certain technological behaviors.

    3. Goals and Motivations

    Question: "What are some goals or objectives you're currently working towards in your personal or professional life?"
    Why: Understanding their goals and motivations can help in determining if these align with the persona's characteristics. This question also delves into what drives their decisions and behaviors.

    4. Challenges and Pain Points

    Question: "Can you describe any challenges or frustrations you face in your daily activities or work?"
    Why: This question helps in identifying pain points and challenges that may or may not align with those outlined in the persona. It can reveal discrepancies or new insights for persona modification.

    5. Decision-Making Process

    Question: "Can you walk me through how you recently made an important decision, either in your personal life or at work?"
    Why: This question aims to understand their decision-making process, which is a critical aspect of a persona. It can reveal how they gather information, weigh options, and what factors influence their decisions.

    6. Specific Product/Service Interactions (if applicable)

    Question: "How do you typically use [specific product/service]? What do you like or dislike about it?"
    Why: If the persona is related to a specific product or service, understanding their direct interactions with it can provide valuable insights into their fit with the persona.

    7. Feedback on Persona Representation

    Question: "Based on what we've discussed, do you feel that [describe key aspects of the persona] accurately represents you? Why or why not?"
    Why: This direct question allows the participant to reflect on the persona's accuracy in representing them. It can lead to specific insights on what aspects of the persona might need adjustment.

    8. Open-Ended Feedback

    Question: "Is there anything else about your experiences or needs that you think is important for us to know?"
    Why: This final, open-ended question gives the participant an opportunity to share any additional thoughts or insights that might not have been covered but could be relevant to refining the persona.

    Remember, the key to a successful interview is not just asking the right questions but also actively listening to the responses, being adaptable, and probing deeper when necessary. The goal is to gather rich, qualitative data that can validate or challenge the assumptions built into your persona.

    In-Class Activity

    Thoughts on ChatGPT's advice about a persona follow-up interview?

    Who do you want to interview? What do you want to learn? What questions are you going to ask? How long will the interview take? Where will you conduct it? What could you do with what you learn?

    Homework -- Assignment 8

    Conduct an interview. Get a transcript.

    Deliverable(s)

    Post a transcript of your interview to your blog. Also post thoughts on the experience, what you would do the same and what differently next time.

    Resources

    Keywords

    Book Review Blog Posts (Week 9)

    Find a book on UX or a UX adjacent topic that is not already in the collection under Books, provide a list of important quotations, in the way you see in the Books section, and write a review of the book -- who it is for, what it explains, how well you think it works. When you send me the quotations, I will add your book and your quotations to Books. Post your review on your blog. If you want to practice talking head presenting, you can video record your review and post that to your blog.

    SPRING BREAK Big Projects Begin (Week 10)

    SPRING BREAK

    While not attending class this week ...

    Brainstorm UX projects. Group or individual? If you can't think of a project, you can take one of the elements of UX research -- questionnaires, interviews, observations, usability testing, personas, case studies, or some other element -- and provide an in-depth report on it consisting of a detailed overview, a bibliography, examples, and links to useful resources. You will share this report with the class via a 1-hour Webex presentation during the last week or two of class (depending on how many of us choose this path).

    Outline of generic user-centered product development research study.

    Homework -- Assignment 10

    Figure out what you want to UX

    1. Object to analyze
    2. Hypothesis to test
    3. Population (keep in mind you need at least one willing participant from each sector)
    4. Data needed
    5. Method(s) for acquiring data
    6. Personas
    7. Outline for your case study

    Or

    1. Pick a UX skill and create an in-depth explanation of how to do it.

    Deliverable(s)

      Action plan

    Resources

    Keywords

    Progress Reports (Weeks 11-14)

    Homework

    Work on your UX project and blog weekly about what you did, what you learned, and what you are going to do next.

    Case Study Due (Week 15)

    Via your blog, share a link to your case study


