UX - The User Experience Podcast
Help me improve the show: https://forms.fillout.com/t/txqbF3seyNus
Welcome to the User Experience Podcast, the podcast where we (ex)change experiences! I am a firm believer that sharing is caring. As UX professionals, we all aspire to change user experiences for the better, so I have put together this podcast to accelerate learning and improvement. In this podcast, I will share learning experiences from myself and other UX professionals, answer the most common questions, and read famous minds.
How To Stay Sane With AI, Claude Design Launches
I'd love to hear from you. Get in touch!
How To Approach AI And Stay Sane – UX Collective
- Julia Kockbeck's article, written from a QA engineer's perspective, frames the AI adoption question better than most: it's not "use it or don't" – it's knowing when, why, and what you're trading off
- The trifecta that never goes away: speed, quality, and scope – if you keep scope constant and push for speed, quality takes the hit, whether you're aware of it or not
- Two failure modes to avoid: overuse without critical thinking (copy-pasting AI output, blindly trusting agents) and AI reservedness (not using it at all and being left behind by people who do)
- We still don't have solid heuristics for when to use AI – we're building them in real time, and most people are doing it unconsciously
- What I think is uniquely human in UX research: moderating interviews, framing a problem with a stakeholder, deciding what questions to ask and why – AI can draft, but it cannot think before the draft
- The measure that actually matters: is the output at least the same? And has the spread of your activity shifted from repetitive tasks toward more strategic thinking? If yes, that's already a win
- My approach: AI is my collaborator, not my substitute – I use it to generate a quick script or research plan, then I review, complete, and own it
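The "output parity plus activity spread" measure above can be sketched in a few lines of Python. This is purely illustrative: the task categories and hour counts are hypothetical numbers, not data from the episode.

```python
# Illustrative sketch (all numbers hypothetical): given that output stays
# at least the same, check where your hours go before vs. after adopting AI.

def activity_spread(hours_by_task: dict[str, float]) -> dict[str, float]:
    """Normalize hours per task category into fractions of the total."""
    total = sum(hours_by_task.values())
    return {task: hours / total for task, hours in hours_by_task.items()}

# A hypothetical work week for a UX researcher, before and after AI.
before = {"writing research plans": 10, "data entry": 6, "stakeholder framing": 4}
after = {"reviewing AI drafts": 5, "data entry": 1, "stakeholder framing": 14}

spread_before = activity_spread(before)
spread_after = activity_spread(after)

# The win the episode describes: strategic work (framing) grows and
# repetitive work (data entry) shrinks, at the same total hours.
print(f"framing before: {spread_before['stakeholder framing']:.0%}")  # 20%
print(f"framing after:  {spread_after['stakeholder framing']:.0%}")   # 70%
```

The point is not the arithmetic but the bookkeeping: you can only claim a win if the output held steady *and* the spread shifted toward strategic work.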
Anthropic Launches Claude Design – TechCrunch
- Claude Design lets you create prototypes, slide decks, presentations, and design systems from prompts – Figma's stock dropped on the news
- I haven't used it in depth yet, but my honest first take: it's genuinely useful for people who aren't designers but need a starting point – researchers, PMs, anyone who needs something that looks considered without hiring a designer
- That said, the pattern I keep running into with prompt-only design tools: generating something looks amazing in minutes, but making one small change is a nightmare
- What I'm really watching for: can you tweak it manually after generation? Can you apply a design system and have it hold? Can you export to PPT or Figma and continue from there?
- It's not competing with Figma in the way the headlines suggest – Figma is a collaboration and precision tool, Claude Design appears to be a generation tool – different jobs, different users
- The tools I want to exist: AI generation plus drag-and-drop editing in the same product – we're still waiting for that
In today's episode: how can we approach AI and stay sane? Claude Design launched by Anthropic, and how can we rethink the shape of design teams in an AI world? Thank you for tuning in. I'm sorry for the lack of content last week; I had to prepare for an intense sports event, and it was an intense prep, so that is the reason. But now I'm back. So for today's episode, we will cover first how to approach AI and stay sane. I'm reading the article, or more precisely scanning it, as I speak. It's an article written by Julia Kockbeck on UX Collective on Medium. She says she's a QA engineer; I don't want to get that wrong, but that is what I could read throughout the article. She basically shares her opinion on when to use AI or not, and in what cases: the idea of under-adoption versus overuse, the idea of AI versus quality (an interesting piece of thinking), the idea of AI reservedness, which I'll go through very briefly, what to delegate to AI, and so on. These are interesting thoughts coming from a QA engineer, because they can also help us pause and ponder when it comes to user experience research and user experience as a whole. So let's start with AI versus quality. She shares, very interestingly, that AI tools brought us cosmic velocity, and at the same time that speed might be overwhelming for quality. I recently had a discussion on this trifecta: we always make trade-offs, in any area of life, and we need to know them, because they will be made whether we are conscious of them or not. For AI, we could think about three criteria: quality, speed, and, let's say, the energy we spend, or scope. Let's say scope.
The scope is what you tackle with the AI: the area of your task that you cover, your output, your outcomes, and so on. For the sake of simplicity, I will keep scope constant in my point today, which leaves us speed and quality. If scope is constant, meaning we don't change the area we cover in our job (debatable, I know, but let's consider it for a minute), and we want to increase the speed of production, what we end up with is lower quality, because that is the trade-off we make. That's one way to see it, and it's more or less what she's sharing. We have had, of course, overwhelming incidents triggered by AI agents in the past, like accidental deletions of databases, people blindly trusting AI agents, copy-pasting outputs, and so on. She shares, and I'm really an advocate for this as well, that when we have a powerful technology in our hands, we should always think about our responsibilities. And I want to add something: it's a powerful technology with potential risks of misuse and potential risks of error, because it's not deterministic, meaning that with the same input it can produce different outputs, and sometimes those outputs will be errors. We need to keep that in mind too. With great power comes great responsibility; you know the quote. So she posits that self-education is a must for everyone in our field: we need to keep our ability to think critically. I very briefly read a summary yesterday of a podcast episode by Lex Fridman.
I need to watch that episode; it looks very interesting, but I haven't watched it yet. It was something along the lines of NVIDIA's CEO saying that LLMs will not so much take away our tasks or jobs as micromanage us; we will be more micromanaged by them. That's another way to see it: if we lose our ability to think critically about when we should use AI or not, that is a risk. Anyway. Then there is another section about AI reservedness, reservedness towards new technology being the other extreme. Quote: "I've noticed a surprising trend lately. Non-tech companies are slow in adapting modern stacks, and even some tech companies are quite hesitant about AI-driven tools." The idea is that it's hard to trust AI, especially, and this is a quote, "if a company doesn't have an established security policy". So if it's hard for you to trust AI as a blanket position rather than on a case-by-case basis, or if you don't have any heuristics telling you when to use AI or not, you will err on the side of not using it. Ultimately, this is a great technology; it accelerates a lot of things. Again, maybe not everything needs to be accelerated, maybe not everything needs to be delegated, but if you don't ask yourself the question, you can err on the opposite side of the spectrum, which is simply not using AI. And if you don't use AI, people who know how and when to use it, and I'm not talking about people who use it to an extreme without critical thinking, will tilt the balance in their direction, and you will be left behind. That's the idea. Again, this is a tricky situation, because for now we have almost no heuristics. We are starting to build them, but I'm seeing it's still taking shape: when to use AI or not, basically.
And that's the rest of the article: what to delegate to AI, from the point of view of a QA engineer. It's the idea that sometimes it's better to use AI because it helps avoid mistakes a manual action would have made, the idea of delegating repetitive tasks, but also that some other activities are still better done by humans. I deeply agree with this. So let's stop, take a step back, and reflect. This is from the point of view of a QA engineer, and I love this kind of article; it's fairly short, so I highly encourage you to read it. What I like to do in these situations is stop and ask: what applies to my situation and my circumstances? It's the idea of using the same framework of thinking. First, AI use versus quality: maybe AI output can be of quality as well, and we have to consider that possibility, but I think what she refers to as quality is not an opposite; it's making sure we still have quality while using AI, and for that, the end of that section mentions thinking critically. Then we have the opposite side of the spectrum, AI reservedness. On one side of the spectrum, we overuse AI without thinking about it; on the other side, we don't use it at all and we are left behind. The intermediate question is what should be delegated. And I think this is a good framework to have: what are the risks of overusing AI, what are the risks of not using it at all, and then, in my activity stack, in my tasks, what should I delegate to AI or not, and why? UX-research-wise, I would again take a stance of critical thinking and ask what is uniquely human that you should bring to the table when you do UX research.
Moderating an interview, in my opinion, is for now uniquely human (maybe tools will eventually change that). You need active listening skills, you need empathy. Framing a problem is too. When a stakeholder comes to you thinking, "I need this research done because I need to understand XYZ", well, framing the problem, what they actually need to understand, what's best for the business, why they need to understand it at this moment, and so on, that is uniquely human. You can use AI to draft some ideas; you can use AI once you have thought about everything and are simply laying down the research plan, because copy-pasting into a Word document would be manual, repetitive, and not fulfilling. But you need to have thought about the problem yourself. It's the idea of thinking. Everything that is purely manual can probably be delegated. But the script, the questions you ask when you interview someone: you need to own that thinking process, I think. Yes, you can have an AI asking the questions, but what is uniquely ours as UX researchers, or as anyone conducting UX research (I do not like the elitism of saying only UX researchers can do research; I'm all for people from outside UX research doing research too), is the idea of: I want to understand this or that, I want to reduce this or that risk before I take this or that decision, so I need to ask these questions. You can use AI along the way to help you, but do not use it to constrain your thinking, or have it think instead of you. We do have a brain; let's use it. So there's that, and I think people need to be confident in the fact that we are able to do this.
We were able to do it for a long time before AI came along. And again, I'm not saying not to use it. I do use it myself, sometimes for generating a quick script or a quick research plan, but then I review it, I complete it; I work with it, it's my collaborator. It's not substituting me. Whether it actually accelerates the process is a good question, and I think it's difficult to measure for now, because the only way to measure it would be: what is the output? Is the output at least the same? Is the outcome at least the same, for a given task? And once it's the same, can we measure the time we invested before using AI and after, and in what way? Was it prompting the AI? Was it verifying the output? Because if it's the same number of hours worked but a different set of tasks, well, we can posit that maybe we have been prompting the AI more than we have been writing a research plan. That could be one way to see it. Another way: we have had more conversations with our stakeholders to frame the problem, because we spent less time writing our research plan, since the AI did it. That could be a win. Anyway, my point is: measure whether the outcome is at least the same, or improved, and then look at the spread of the activities we do. If it has shifted towards strategic thinking instead of repetitive tasks, that's already a win. That's one example. And then, what to delegate to AI? We can delegate a lot of things, for instance data entry, or synthesizing large patterns in data, though then you need to prompt back and forth to make sure the AI is doing it correctly. But the idea I really encourage, as this QA engineer did, is pausing and pondering: what is in my activity stack?
What are my jobs to be done every day? What can be delegated to an AI so that I can tackle more of the other tasks? I highly encourage you to read this article. So thank you, Julia, for posting it. Then we have an article from TechCrunch: Anthropic launches Claude Design, a new product for creating quick visuals. Very briefly: I haven't tested it fully. I just landed on the page and looked at the options; I haven't generated any design yet, and I need to look at it in more detail. But from what I saw in the window, you have the ability to create a design system, import a design system, and design presentations. I think you can design a website, or prototypes at the very least. Let me launch my window as I'm talking to you. So: you can make a prototype, you can make a slide deck, you can work from a template, you can create a design system. That's what we would be able to do with Claude Design. What interested me is seeing the tech scene say it's competing with popular design tools like Canva and Figma; even Figma's stock went down, if I understood correctly. That was interesting. I haven't used it in detail yet, but what I can say is that I think it's a good thing for people who are not designers. Don't hold me accountable for this, please, but for instance: I'm a UX researcher, and I know a little bit about designing a visually appealing presentation. A little bit. I'm not a designer, I'm not a UI designer, and sometimes I'm confronted with a problem where I want a presentation to look good. I think it's good to have this kind of tool, to have at least a first stage of thinking before you go to a UI or UX designer and ask them to help you out with the presentation or any other situation.
I think it helps people be a little more empowered, but I'm not saying it substitutes anything a UI designer does, or even what you're capable of doing with Figma; these are different tools made for different purposes. In Figma you have collaboration; maybe that comes to Claude Design at a later stage, but for now it's not the same. I haven't worked with it yet, and it looks very interesting and promising, but in the prompt-based tools I have used in the past, you cannot fine-tune the design manually, especially when designing websites. I have tried it with AI, and it's not manually oriented. It's not that we take something and augment it with something else; we substitute one modality for another. With Figma, you can move boxes around, change the font, change the colour, change the order of the prototype; you can do a lot. With these AI-based tools, it's just prompting, and then you have no control whatsoever over the experience or the design: if you want to make one change, you need to specify it in a prompt. That is highly inefficient, in my opinion. I was able to generate a website in minutes, and that was amazing, it looked great, but if I wanted to change one thing, or put a design system into it, it was a nightmare. So let's see what comes out of the design system feature in Claude Design. It looks interesting, but ultimately what I'm really looking for, personally, is that when I make a website or a prototype or a presentation, I want it to be tweakable. I want to be able to tweak it. Maybe this is possible; I haven't tried it yet with the slide deck.
For example: you export to PPT, you re-import, you make your changes, and so on. That would be interesting. Anyway, that's it for today. This is a bit shorter than expected; I thought I would have time for the third article, but I don't. Sorry for that, maybe next time. Again, thank you for listening, thank you for tuning in, and see you in the next episode. Bye bye.