
Can GenAI Be a Design Collaborator?

Design Project

Role

UX designer and researcher

What I did

Qualitative analysis, case studies, chatbot design, and conducting research sessions

Result

An AI prototype that encourages more self-regulated thinking

Background

 

Large language models (LLMs) present a dichotomy of opportunity and hazard in the education system. Cheating is omnipresent, but the opportunity for every student to have a personal tutor is an incredible boon. Ultimately, this is a design problem: you wouldn't hand a child a surgical scalpel, but with proper training that hazard becomes an invaluable tool. If we can create an AI system that guides students through a collaborative learning process rather than providing quick and easy answers, we could transform education.


This project seeks to address that design problem by contributing the design of an AI-supported learning experience that better prepares students for the workplace. The scope of the project covers undergraduate students and centers on design learning; it includes the design of a chatbot and its accompanying learning processes.

 

To understand students' learning, we analyzed their learning regulation patterns. Learning regulation refers to the processes students use to monitor and enhance their own learning, and it is associated with greater success in learning outcomes (Lawson et al., 2019); accordingly, this design aims to increase the prevalence and quality of learning regulation.

 

This project is a snapshot of research being conducted with Dr. Nguyen (USU). Dr. Nguyen leads the broader research, but this capstone is my own original work. For a more detailed view of the division of labor, see the Division of Labor section in the Wrap-up.

​

Process

To inform the design of our chatbot, we conducted a human study. The study began with independent brainstorming, in which participants were given a design task on a familiar product, Canvas (a learning management system). Participants were asked to think aloud while identifying users and problems and brainstorming solutions, using the following prompts:

​

Who are the users of Canvas?

  1. If you have unlimited resources, who will you involve as users in the Empathize phase?

  2. If you have to categorize these users into groups, who are the user groups?

  3. What features do you think these user groups prioritize when using Canvas?

  4. Based on the assumptions you just listed, we can start brainstorming some design ideas. Tell us 2-3 ideas you may want to pursue.


Figure 1. Initial Chatbot Design

After the independent task, participants were given the opportunity to interact with ChatGPT (GPT-3.5) to go through the same process again. After collaborating with the AI, participants sketched 2-3 design ideas and presented them. Lastly, they completed a reflection.

 

The study included 17 participants with a wide range of experience: six undergraduate students, four master's students, and seven Utah State alumni working in UX/UI and instructional design. All participants had at least some familiarity with design processes.


Following the study, we conducted a qualitative analysis of the collected data. This involved creating a codebook for categorizing participants' utterances. For example, the statement "I really like how ChatGPT does like break downs of stuff until list because It does like bigger problems at first and then there's like subproblems that are of those bigger problems" was coded as evaluate. Each participant's utterances were then coded, which allowed us to see regulation patterns, quantities, and proportions.
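As a rough illustration of this analysis step, the sketch below tallies coded utterances into per-participant counts and proportions. The data shape and the codes other than evaluate (plan and monitor, drawn from the regulation literature) are assumptions for illustration, not the study's actual codebook.

```python
from collections import Counter

# Hypothetical coded transcript: (participant, code) pairs produced by
# applying the codebook to each utterance. Only "evaluate" is a code named
# in this write-up; "plan" and "monitor" are assumed examples.
coded_utterances = [
    ("P1", "evaluate"), ("P1", "evaluate"), ("P1", "plan"),
    ("P3", "monitor"), ("P3", "evaluate"),
]

def regulation_profile(utterances, participant):
    """Return raw counts and proportions of each regulation code
    for a single participant."""
    counts = Counter(code for p, code in utterances if p == participant)
    total = sum(counts.values()) or 1  # guard against empty participants
    proportions = {code: n / total for code, n in counts.items()}
    return counts, proportions

counts, proportions = regulation_profile(coded_utterances, "P1")
print(counts)       # Counter({'evaluate': 2, 'plan': 1})
print(proportions)  # {'evaluate': 0.67, 'plan': 0.33} (approximately)
```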

Case Studies

Scoring Ideas

Each design idea was scored on four dimensions, each on a 1-10 scale (a minimal data sketch follows the list):

  1. Addressing problem

    1. How well does the design idea address the stated problem? 

    2. 1=doesn't address the problem at all, 10=perfectly addresses the problem

  2. Specificity

    1. How specific is the idea? Could it be designed by a professional as described? 

    2. 1=vague, 10=perfectly specific

  3. Novelty

    1. How different is the idea from what currently exists?

    2. 1=already exists, 10=completely different

  4. Collaboration

    1. Is the idea singular or collaborative?

    2. 1=singular ownership (entirely self or the AI's idea), 10=dual ownership
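To make the rubric concrete, here is a minimal sketch of how one idea's scores could be recorded and averaged. The class and field names are my own; only the four dimensions and their 1-10 anchors come from the rubric above.

```python
from dataclasses import dataclass

@dataclass
class IdeaScore:
    """One design idea scored on the four rubric dimensions (1-10 each)."""
    participant: str
    addressing_problem: int  # 1 = doesn't address it, 10 = perfectly addresses it
    specificity: int         # 1 = vague, 10 = perfectly specific
    novelty: int             # 1 = already exists, 10 = completely different
    collaboration: int       # 1 = singular ownership, 10 = dual ownership

    def mean(self) -> float:
        """Average across the four dimensions."""
        return (self.addressing_problem + self.specificity
                + self.novelty + self.collaboration) / 4

# Hypothetical values, not real scores from the study:
print(IdeaScore("P1", 7, 6, 5, 9).mean())  # 6.75
```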


Figure 3. Regulation Patterns


Figure 2. Idea Scoring ("P" represents participant)

P1: High Evaluation=High Collaboration

 

Looking into P1's regulation patterns (that is, how often they self-regulated and which types of regulation they used), evaluation was prevalent. This type of regulation is described as monitoring progress toward a successful outcome, summarizing what has been done, and assessing current resources (Malmberg et al., 2017). In a design context, it looks like the designer gauging their own progress toward a solution and appraising the information they have gathered. P1 regulated more than other participants, especially through evaluation.

 

For P1, evaluation took two modes: assessing the AI's responses and highlighting information to move forward with. For example, almost immediately after the AI's first response, P1 assessed, "All right, so we're getting a pretty generic response. Which is not abnormal for chat GPT." Later they highlighted, "Yeah, there's definitely a couple. I notice it [suggests] an account tagging feature, under, number 2. I kind of like that."

 

P1 also demonstrated a higher level of collaboration than other participants in the AI design task; looking at Table 2, we can see that P1 scored higher in collaboration than others. A working hypothesis is that P1's evaluative tendencies contributed to a more collaborative environment. By consistently checking in on the information they received from the AI, and on their relationship with it, P1 was more apt to include the AI in the design process. For example: "Wow, that was actually really impressive. I'm kind of impressed. I especially like that it has like the behaviors and preferences." By marking what is useful to themselves, the participant opens up opportunities to internalize the information presented and build upon it.


Additionally notable is the following: "One thing that I like about this, it gives an outline of what we can do, right?" This line is especially important because P1 not only highlights information that matters to them but also demonstrates a meta-awareness of the interaction as a whole. In other words, P1 evaluated the process of using AI as a design partner, giving them a better framework for effective collaboration. By marking the usefulness of the exercise, they identify the AI's role in the conversation: an information and outline contributor, providing a potential plan of what to do next. Collaborating in this mode is likely to be more effective because it gives the participant heuristics for processing the information they receive. Had P1 not spent time identifying these roles, they might not have been as inclined to incorporate the information the AI presented.

 

The chatbot design could be substantially improved by including this type of interaction, in which the user evaluates the helpfulness of conversing with the AI tool. In a high-evaluation environment like P1's, a critical eye for what is useful and what is not is key. So the chatbot should provide plentiful opportunities for users to evaluate the interaction, especially in the ways they would naturally do so.


Informed design attributes from P1: High Evaluation=High Collaboration

  • Reactive emojis for more options when responding to AI

    • It is worth considering how our target audience, undergraduate students, typically engages in evaluation in digital spaces. With this in mind, we designed emoji reactions. The emphasis is on giving users opportunities to stop and evaluate the information they are receiving. In future iterations, this feedback could begin to influence the AI's responses, but in early versions, simply getting users to think about the interaction is enough.

  • Regulation prompts within the chatbot to generate more evaluative thinking from the user, e.g., "What do you think of this?" or "Summarize where we have come so far" (see the sketch after this list).
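The sketch below shows one way these evaluation stimuli could be wired into the chatbot: after every few user turns, the bot itself asks an evaluative question. The cadence, prompt texts, emoji set, and OpenAI-style message format are assumptions for illustration, not finalized design decisions.

```python
# Reaction options shown under each AI message (assumed set).
EMOJI_REACTIONS = ["👍", "👎", "🤔", "💡"]

# Evaluation stimuli, rotated through over the conversation.
EVALUATION_PROMPTS = [
    "What do you think of this?",
    "Summarize where we have come so far.",
]

def maybe_inject_evaluation(history, every_n_user_turns=3):
    """Append a chatbot-initiated evaluative prompt once the user has taken
    a multiple of N turns; otherwise leave the history unchanged."""
    user_turns = sum(1 for m in history if m["role"] == "user")
    if user_turns and user_turns % every_n_user_turns == 0:
        i = (user_turns // every_n_user_turns - 1) % len(EVALUATION_PROMPTS)
        history.append({"role": "assistant", "content": EVALUATION_PROMPTS[i]})
    return history
```

In a later iteration, the emoji reactions collected alongside these prompts could be fed back into the model's context so the AI adapts to the user's evaluations.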

P3: Converse to Inspire

 

Through their conversation with the AI, P3 began narrowing the design space around a specific use-case problem: how could Canvas represent performance in a class with few assignments? Narrowing and framing the design problem into something this specific is a move more often associated with advanced designers (Tang et al., 2012). This line of thinking emerged from the conversation with the AI, demonstrating the power of conversation in design contexts for framing activities and generating creativity.

 

P3 was an interesting case: after asking only two questions of the AI, they said, "And then this is where I would stop asking it questions and then start implementing it in the designing and experimenting." The interviewer suggested using the AI to critique design ideas, which proved to be a fruitful avenue for P3. They proceeded to investigate the specific problem of measuring performance on Canvas when an instructor assigns few assignments. This led to regulatory utterances like "One thing that really like jumped up to me that I liked was the early warning system within the grade book" and "The AI mentioned something… it didn't really sit with me that much." Even while critiquing the AI in the latter example, P3 is demonstrating self-regulation. They also moved toward a more specific design framing: "I guess now I would think of another way to phrase this question that's more specific to what I'm thinking about. So I would say something like, 'What if the professor doesn't keep track of participation or attendance?'" Ultimately, P3 presented design ideas that directly addressed this problem, scoring high in specificity (Table 3), whereas their ideas from independent brainstorming were somewhat vague.


It can be helpful to think of designing as a conversation with a design problem. In this framework, we can understand how AI tools might assist students in expressing their thoughts in a design conversation. AI tools can act like an excellent conversation partner, listening and offering their own input on the topic at hand. Much as a good conversation partner might make you more willing to voice thoughts that would otherwise have gone unsaid, the AI here could help novice designers learn to investigate new ideas.

 

"Creative confidence" is another helpful lens for this case. Tom and David Kelley of IDEO write, "Creative confidence is about believing in your ability to create change in the world around you. It is the conviction that you can achieve what you set out to do" (Kelley and Kelley, 2013). Internal dialogue can be a mess of self-doubt and anxiety. By expressing views and ideas to an AI that responds only kindly and helpfully, students may begin to find their creative confidence growing.

 

This may be what happened with P3: through the conversation with the AI, they felt confident enough to focus on a design space they might otherwise have avoided. Because the AI represented a zero-consequence sounding board, P3 explored their unique use case. Boosting creative confidence like this would be valuable for our chatbot. We might replicate it by demonstrating true listening attributes, like looping (repeating what someone has said in a different way to show understanding) and reactive tiles that suggest where users could go next.

 


Informed design attributes from P3: Converse to Inspire

  • Reactive tiles that suggest where users could go next. 

  • Good listening attributes. Looping: "Is there anything more?" (see the prompt sketch below)
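One lightweight way to build in these listening attributes is through the system prompt. The wording below is a hypothetical draft, assuming an OpenAI-style chat API; it is not the deployed prompt.

```python
# Hypothetical system prompt encoding the listening attributes from P3's
# case: looping (restating the user's idea in new words) plus an open
# follow-up question.
LISTENER_SYSTEM_PROMPT = """\
You are a brainstorming partner for undergraduate design students.
Before adding your own input, briefly restate the student's last idea
in different words to show you understood it (looping). Then offer
your suggestions, and end with an open invitation such as
"Is there anything more?" so the student keeps ownership of the ideas.
"""

# Usage sketch with an OpenAI-style chat API (assumed, not pinned down):
# client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "system", "content": LISTENER_SYSTEM_PROMPT},
#               {"role": "user", "content": "Here's my first idea..."}])
```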

P9: Broadening Perspectives


"So this is a unique thing for me because I'm in a different time zone... So whenever I pull up my calendar, all of my assignments look like they're due the next day and I can't edit that. And from a user standpoint that would be really nice for me to say, okay, well, I want to edit this, or if I want to give an arbitrary due date like being able to bump that up on the calendar."

 

P9's conversation centered on this specific problem they had encountered in Canvas. Throughout the design tasks, they moved across different scopings but kept returning to this one issue. The interaction was noteworthy because it demonstrated a potential thought pattern of less-experienced designers and inspired the design of an introductory interaction with the AI for novices.

 

Beginning the interaction with the AI, P9 asked what problems users encounter with calendar events, explaining, "I'm just seeing if there were any other like problems that other users were having other than ones that I've already experienced." Having validated their initial thinking and expanded the scope of the problem, P9 then moved into ideation, asking for methods to smooth out the calendar problems. Brainstorming solutions expanded P9's mindset: while looking over ideas to connect syllabus information to the calendar, they thought about lecture speakers in the calendar, which prompted the following: "another key thing in one of my classes, right now we had to have, like our final presentation proposal, and by a certain time so like that was something that would be good to have that pull from." This expanded thinking also included teachers: "That could be more work for the teacher because they'd have to go in and like pull the link from the calendar and put it into the syllabus." Together, these two lines inspired P9's final idea: a personal due date alongside the instructor's. The solution's complexity came from P9 analyzing the situation from a different perspective (pulling information from the teacher's syllabus) and considering the work required on the teacher's side.

 

The value of P9's interaction is best explained in their own words: "Yeah, it gives a broader perspective, cause, like, I have my limited perspective." Perhaps the most common failing of novice designers is assuming they themselves are the user: a narrow perspective generates narrow solutions. Through their conversation with the AI, P9 broadened the platform on which they developed solutions. Whereas in independent brainstorming they generated solutions that addressed only their own experience, interacting with the AI prompted ideas that addressed multiple perspectives. This arc of broadening perspectives is something we aim to reflect in the design of chatbot conversations.

 

P9 started the interaction with their own perspective, moved to enlarge it, and embraced solutions with broad application. An introductory conversation could be designed along this same arc to help those unfamiliar with design methodologies: the AI and the participant would start by finding a personal problem of the user's, then follow a guided process along the path P9 took. The ultimate goal is to initiate designers into the practice of including perspectives beyond their own and generating ideas for those perspectives.

 

Informed Design Attributes from P9: Initiating Designers

  • Introductory conversation pattern (sketched below).

  • Image/visual-based responses and user simulation with submitted images. 
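A minimal sketch of how the introductory conversation pattern could be scripted, mirroring P9's arc from a personal problem to broader perspectives. The stage names and prompt wording are illustrative assumptions, not the final conversation design.

```python
# Ordered script of guided stages for the introductory conversation.
INTRO_STAGES = [
    ("personal_problem",
     "Tell me about a problem you have personally run into with this product."),
    ("broaden_users",
     "Who else might face this problem, or related ones? Let's list other user groups."),
    ("other_perspectives",
     "Pick one of those groups. What would the problem look like from their side?"),
    ("ideate_for_others",
     "Now let's brainstorm 2-3 ideas that address more than just your own experience."),
]

def next_stage(completed):
    """Return the next (name, prompt) pair, or None once every stage is done."""
    for name, prompt in INTRO_STAGES:
        if name not in completed:
            return name, prompt
    return None

print(next_stage({"personal_problem"}))  # -> ('broaden_users', "Who else might ...")
```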

Results

A new iteration of the chatbot design includes a notebook for highlighting and recording, image-based communication, reactive tile suggestions, more feedback options, listening attributes, evaluation stimuli, and an introductory conversation.

Informed Design Attributes

Notebook

Many participants remarked that they would like a way to save information presented by the AI, so a notebook sidebar was created. It allows users to highlight responses they want to save and add their own notes. The notebook encourages more reflection on the process and helps learners record their thinking.
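As a minimal data sketch, each notebook entry might store the highlighted AI text, the user's own note, and which message it came from. The names and fields here are assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class NotebookEntry:
    """One saved highlight: AI text the user selected, plus their own note."""
    highlighted_text: str        # span selected from an AI response
    user_note: str = ""          # the learner's annotation
    source_message_id: str = ""  # which chat message it was highlighted from
    created_at: datetime = field(default_factory=datetime.now)
```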


Wrap-up

What would I do differently next time?

I want to involve more testing and feedback in the design process. Although research from many participants informed my work, a stronger focus on iteration would improve my design work.


Digging further into the data would also be valuable. Co-occurrence tools and heatmaps showing relationships between types of regulation thinking could reveal more insights.

Division of Labor

Given that this project is a snapshot of research with Dr. Nguyen of USU, I would be remiss not to acknowledge the division of labor.

Jake

  • Case studies & correlated visuals

  • Chatbot design

Dr. Nguyen

  • Human study design

  • Research direction (grant application, owner of idea, etc.) 

Collaborative

  • Conducting interviews

  • Coding (qualitative analysis) 

Works Cited

Jackson, Kathryn, et al. "The Crossroads between Workforce and Education." Perspectives in Health Information Management, vol. 13, Spring, 1 Apr. 2016.


Kelley, David, and Tom Kelley. Creative Confidence: Unleashing the Creative Potential Within Us All. Crown, 2013.

Lawson, M. J., et al. "Teachers' and Students' Belief Systems About the Self-Regulation of Learning." Educational Psychology Review, vol. 31, 2019, pp. 223–251, https://doi.org/10.1007/s10648-018-9453-7.


Malmberg, Jonna, et al. "Capturing Temporal and Sequential Patterns of Self-, Co-, and Socially Shared Regulation in the Context of Collaborative Learning." Contemporary Educational Psychology, vol. 49, 2017, pp. 160-174, https://doi.org/10.1016/j.cedpsych.2017.01.009. Accessed 30 Jul. 2023.


Tang, Hsien-Hui, et al. "Reexamining the Relationship between Design Performance and the Design Process Using Reflection in Action." AI EDAM, vol. 26, no. 2, 2012, pp. 205–219, doi:10.1017/S0890060412000078.

 

Zimmerman, B. J. "Self-Regulated Learning." ScienceDirect, 2001, https://doi.org/10.1016/B0-08-043076-7/02465-7. Accessed 4 Aug. 2023.
