UXchange
Welcome to "UXChange" the podcast where we (ex)change experiences! I am a firm believer that sharing is caring. As we UX professionals are all aspiring to change User Experiences for the better, I have put together this podcast to accelerate learning and improvement! In this podcast, I will:- Share learning experiences from myself and UX professionals- Answer most common questions- Read famous blogs- Interview UX Professionals- And much more!For more info, head over to ux-change.com
When Not to Use AI
Welcome back to UXChange, I’m Jeremy — and today we’re talking about something I’ve been wrestling with personally: when should we actually use AI?
We all love what it can do — speed, automation, magic. But at what point does using AI stop helping us think, and start replacing the thinking itself?
In this episode, I share my personal heuristics for using AI at work — not as a researcher, but as a knowledge worker trying to stay effective, curious, and human in an AI-powered world. I unpack questions like:
- When is AI genuinely adding value, and when is it just noise?
- How much control are you willing to give away for efficiency?
- Where’s the line between delegation and disconnection?
I’ll talk about why some tasks must stay human — especially those that shape your understanding, your craft, and your credibility — and how I’ve learned to decide where AI fits in my creative and research process.
🎧 Listen now for a raw, reflective episode about the mindset, trade-offs, and self-trust we all need in this new era of AI-driven productivity.
Hi everyone, welcome back to the podcast. I'm Jeremy, I'll be your host today, and we'll be covering a topic that is dear to me because it's one I've been pondering a lot these days: heuristics for using AI. When should we use AI? When should we not? Is it always beneficial to us? And if so, what are the rules? I hope you like this episode.

Okay, first and foremost, please don't take this episode at face value. I'm just here rambling, to be honest. I need to talk, I need to share what I have in mind, even if I'm alone and no one is answering me right now. Maybe in a few years I'd like to come back and listen to this episode to see if my mindset has progressed. So again, don't take this at face value. This is really a set of opinions rather than knowledge, although some knowledge might be included too. It's simply my experience of using AI for my work. Today I will not speak as a researcher, although I may sprinkle in some thoughts on user research; I really want to speak as a worker, a knowledge worker. The goal today is to share the heuristics I have developed over time for using AI.

It's a dear topic to me because I haven't found such a set of heuristics anywhere. I'm a huge fan of productivity theories. We have had several productivity books and frameworks, from great authors like Cal Newport, for instance, or the book Getting Things Done, but I think they need a refresher when it comes to using AI. And it's not only about productivity; it's really about this: when doing your work, what is the best tool for the job? That's the point today, and I'm going to explain what I mean by that. It's a bit fuzzy and it might go in all directions; I hope you can follow me. And if I see that fewer people are interested in this episode, I will know why: it has less structure, in essence. It's really me talking about what I have in mind, with nothing written down. So here we are.

I have been a user of AI, let's say of consumer-facing AI products, since the day they started to emerge: ChatGPT, basically. In the past, I also worked a little on building my own backpropagation systems in Python, to recognize handwritten digits. And I loved doing it, because I learned how backpropagation worked and how to implement it in a Python script. But it was very, very complex to me, and I didn't have the time to dive deep, so it looked like magic to me at the time. Just because we could, I did it. And I mention this because it matters to me: just because we can, should we do it? To me, that's a central question we should ask ourselves, because this is the question in the minds of the tech giants: we can do this, so why shouldn't we? I always say the world is driven by efficiency. That's my feeling. I'm not saying it should be, and I'm not saying this is right or wrong; I'm just saying that's how it is. We humans look for efficient ways to do things. But I think that sometimes it's good to step back and wonder: am I using the right tool to do my job?
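(An aside for the curious: below is a minimal sketch of the kind of backpropagation script mentioned here, a tiny two-layer network trained with plain NumPy. The toy data, layer sizes, and learning rate are illustrative assumptions, not the original script.)

```python
# A minimal two-layer network trained with hand-written backpropagation.
# Illustrative only: random toy data stands in for handwritten-digit images.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 64-dim "images" in 10 classes (like 8x8 digits).
X = rng.normal(size=(200, 64))
y = rng.integers(0, 10, size=200)
Y = np.eye(10)[y]                          # one-hot targets

W1 = rng.normal(scale=0.1, size=(64, 32))  # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(32, 10))  # hidden -> output weights
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    # Forward pass
    H = sigmoid(X @ W1)                    # hidden activations
    P = sigmoid(H @ W2)                    # output activations

    # Backward pass: apply the chain rule layer by layer
    dP = (P - Y) * P * (1 - P)             # error signal at the output
    dH = (dP @ W2.T) * H * (1 - H)         # error propagated to the hidden layer

    # Gradient-descent updates
    W2 -= lr * H.T @ dP / len(X)
    W1 -= lr * X.T @ dH / len(X)

print(f"training accuracy: {(P.argmax(axis=1) == y).mean():.2f}")
```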
And so right now we all know that we have these tools: large language models, chat-based tools, sometimes audio, and sometimes even agentic experiences where things work under the hood and you don't even have to ask for anything. And I have been wondering: now that we have these tools, expectations around productivity in knowledge-work jobs will naturally change. It will happen progressively, of course, not from one day to the next, but ultimately expectations will change the same way they changed with the emergence of spreadsheets, calculators, and other kinds of UIs. Productivity, and the expectations tied to it, changed. So naturally you will be expected to do the same kind of task, or set of tasks (maybe "job" is a more appropriate term), way more quickly. And if you do it more quickly, what about accuracy? What about correctness?

So I have been wondering about these AI-based tools, which can facilitate some of the tasks we perform on a day-to-day basis. Let me give you an example. As a user researcher, you can use AI tools to synthesize customer feedback from user research sessions. You can use them to craft presentations, to craft emails very quickly, to analyze survey data, to craft a research plan and a script, and so on and so forth.

And I'm really a huge user of AI. I love it. I use it every day. I think it's transformative for our future, and I think that in the future we will not be able to imagine a world without AI. It's impossible, in my opinion. I would love to look back a few years down the road and find I was wrong, but I don't think I am, to be honest. So, to the best of my ability, I want to focus this conversation (well, this monologue, sorry) on productivity. Naturally, like maybe you who are listening, I have been wondering: okay, now that AI is here, should I use it? Let's face it, we have all asked ourselves this question recently. Should I use it? Is it cheating if I use it? Philosophically, you're presenting something that you were not the one to produce; we can see it that way. Maybe that's not entirely accurate, because you're still the owner of, and accountable for, what you produce. But still, at some point you ask yourself this question. And then come the follow-up questions: if yes, how? And when? Why? And in what amount?

I care about doing my job right. Whatever I'm given as a task, I care about doing it right. And we are all different. When we do user research, one of the first things we learn in design thinking is that no two users are the same. That's why the concept of the persona is so important: within a given group of user research participants, you can subdivide into smaller groups who share the same characteristics in terms of needs, ways of working, mental models, how they process things, what they expect, what they need. People are different.
So I already know my quirks, my strengths, and my weaknesses. One of the things I like when I work solo is to have control over what is being done. If I'm using a calculator, for instance, I can use it because I know it will be repeatable. It's deterministic. If I input 2+2 today and I input 2+2 again tomorrow, I will get the same result. So I have no problem delegating the task of computing something to a calculator. With large language models, it's not the same: if I ask my LLM to output a research plan one day and ask again the next day, I will get two different answers. So then you should ask yourself, are these two answers the same? Of course they are not. But to what extent can we say they are the same? What is the right amount of similarity to say they are the same?

Okay, so once we know that a single input does not produce a single output, we need to think about the cases in which we can use an LLM, or, as a shortcut, "AI" (even though that's not entirely correct, because AI is far more than LLMs; today I want to focus on LLMs). Should I use AI for this task? That's the huge question I was asking myself. And you could answer, "Don't bother." We could think: don't bother, just use it; or don't bother, don't use it. But I want to bother, and I do bother, because it's already here. You cannot act as if it were not. The technology is here, it is transformative, and we can see the speed at which it can output something. The only question is, in what cases should I use it?

For instance, as a persona, I would be the persona of someone who wants to make sure that nothing has been left out of my analysis. When I analyze, say, user research, I want to make sure nothing has been left out. That's one way to look at it. But then there are many follow-up questions. What about the prompt you input? What about the instructions you give your LLM to synthesize the information? What amount of data do you provide: do you go user by user, or do you upload one file with all the users? We know that context windows will get bigger and bigger over time, so it won't be such a problem to upload a huge chunk of information, ask for an analysis, and be done. Then you have an AI agent that handles several tasks one after the other, and everything is done perfectly. Yeah, okay. But right now, at the stage we are at, I am asking myself: which tasks should I use AI for, and which should I not?

So I want to do a quick two-episode series, and this first one is my observation. My observation is that I see it as a hamburger. I know, it doesn't make any sense right now, but let me explain. There is a set of tasks, at least in user research, for which I can use AI, and a set of tasks for which I almost choose not to. I'm not saying I don't use it at all; I do. But the main information, the core of the activity, I am the one to do it. So why the hamburger? Because I see it in layers, and the outermost layers, the bread of the hamburger, are the things I want to do myself.
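(A tiny illustration of the repeatability point: a calculator-style function always maps the same input to the same output, while a sampling-based generator, like an LLM decoding with nonzero temperature, does not. The snippet below is a stand-in for an LLM, not a real one; the canned answers are invented.)

```python
# Deterministic vs. stochastic: the same input, asked twice.
import random

def calculator(a, b):
    # Same inputs always produce the same output.
    return a + b

def toy_llm(prompt, temperature=1.0):
    # Stand-in for an LLM: samples one of several plausible answers.
    answers = [
        "Plan A: five interviews, then a follow-up survey.",
        "Plan B: a two-week diary study.",
        "Plan C: unmoderated usability tests.",
    ]
    if temperature == 0:
        return answers[0]          # greedy decoding: repeatable
    return random.choice(answers)  # sampling: varies run to run

print(calculator(2, 2), calculator(2, 2))  # 4 4, every time
print(toy_llm("Draft a research plan"))    # may differ...
print(toy_llm("Draft a research plan"))    # ...from call to call
```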
Why do I see it as a hamburger? Because the outermost layers are the most superficial, the most peripheral: the first thing you see if you look from above or from below, let's say. Sorry for the analogy. Imagine you get some data from user research and you need to analyze it, synthesize it, and present it to stakeholders. I do want to understand this data. It's really important to me to understand it. So I absolutely cannot hand it to an AI, let it come up with the synthesis, then talk to my stakeholders and say, "These are the insights." For several reasons, the first one being that if I'm asked any question, I will not be able to answer. And it's not about my ego; it's simply my job. I'm responsible for answering those questions. I'm responsible for an insight being implemented properly. If I say one thing and the reality is something else, that's my fault.

But right now, I feel the problem in knowledge work is that teams should probably be more transparent about expectations around using AI. I'm not saying this is the fault of companies; I think it should be our responsibility to align with stakeholders and say, "Look, at this stage of the process I'm going to use AI, so it might give incorrect results." And that's why the analogy of the hamburger (or the onion, whatever you prefer) works so well for me. The more superficial the stage, the more you need to tell your stakeholders if you use AI. The deeper you go into the tasks, into the granularity, the less you need to mention it (you can, of course, if you want), because at that depth it almost doesn't matter whether you used AI or not.

Let me return to my analogy so it makes a bit more sense. Say you get some raw data from user research: user A said the button is really not easy to use because it's not in a good location; user B said the button should be another color because of XYZ. If you hand that directly to AI to sort, you lose the control, the ownership, and the knowledge. It's as if you gave it to someone else, and that someone else is not a professional and makes many mistakes. I like the comparison with an intern or a junior, but in this case a junior or an intern would do better. I'm convinced of that, at least right now. I have tried it myself with data from user research, and it was not good. It was not remotely good. Maybe it was me not prompting it well enough; I don't know, I'm not sure. It was not good. And as a tip, I can tell you I have been using other language models to come up with a prompt for my main language model. Even then, it was not good, and ultimately I needed to do it again myself. Just so you know, I spent hours prompting the system to come up with a way to analyze the data, and when I decided to do it myself, the task took me about the same amount of time. So that's something to keep in mind for your heuristics: how long will the task take, versus how long will it take to prompt the system correctly?
Then you could argue: yes, of course, I'm taking a lot of time to prompt the system, but it's going to be reusable for all subsequent projects. Except you don't know that. Until it really works at least once, you don't know. And to that point I would add: is it even your job to come up with a prompt for the analysis? I don't know. I'm throwing out a lot of open-ended questions here, because these are my questions right now and I need to share them, and talking into a mic really helps me. So you'll have to bear with me for a while.

Okay. So, the input data: you want to maintain ownership over it. You cannot really relinquish control to an AI at this point. It's as if you needed to do sales and, instead of doing sales yourself, you did it with an AI audio agent. You have no control over whether it's doing things right or wrong. You don't know. So it needs to be you, at least for now.

Then, as you progress, there is everything that happens in between: all the analysis, the synthesis, the preparation of the report, and so on. I'll come back to that. But after everything is done, analysis and synthesis, you need to prepare a report, say, or give a talk. That's the other slice of bread of the hamburger. Can you imagine everything being reported by an AI, to you or to your stakeholders? Would you want to read a report that is all AI, even if the data feeding it was not produced with AI? If the report itself is entirely written by AI, would you want to read it? I wouldn't, because I have read AI content and I have produced (sorry, had AI produce) content, and I did not like it. It's bland. It lacks spirit. It lacks depth. That's my feeling, at least. And again, I'm not anti-AI, I really love AI, but it was not doing justice to the analysis that was made.

So I still need to do the work of comparing the AI's output with my own. And we need to be honest. We need to look at it and say, okay, here the AI found more things than I did. And that's fine, because if it does better, I want to use it; I want the output to be better. But if it doesn't, I need to know it, and I need to test it.

So, for the outermost layers, I want to be in control. First and foremost the inputs: it's really important for me to process them myself. Not only for the sake of time (the time automation saves is not the only consideration), but because when you automate something, you also lose control. For some things, losing control is okay; for others, it's not. For instance, if I lose control over the input data I need to synthesize, I'm not just gaining the time I would have spent analyzing and synthesizing; I'm also losing the knowledge that no longer enters my brain. And if I'm the one who's supposed to give a presentation about it, I won't be able to answer questions. Well, maybe I will, but it will be weird, and even the readout, the report, will be weird. So that's my feeling: everything that is input, I need to process actively. I need to read it. I need to digest it. That's why I had an epiphany recently, when I read some articles saying it's the end of reading books and so on. I don't know how we can come up with things like that.
I mean, maybe there's a case for that, but I don't see how. You need to actively digest information: read it, synthesize it, integrate the notions. Everything needs to be integrated into your neurons. Your neurons, not artificial neural networks. That's my feeling. So that's it for the input: you need to digest it.

Once that's done, once you have ownership and you know what the data is about, what the synthesis is about, and so on, you can progressively delegate. Let's say you draw an axis with several steps from the input to the output and you list all the tasks that need to be done. In this case, you collected all the raw data and you read it yourself. You categorized it yourself. Once that's done, you have your categories and the text inside each of them. Then what you could do is summarize each category: pain points, needs, expectations, and so on. So you summarize ("users expect blah blah blah"), and then you compare, granularity against granularity. That's my point. The main point being that the more granular you are in your tasks, the better, because you will be able to pinpoint what is and what is not really adding value to your work process.

I don't know if your data analysis works the same way, but it goes like this: you come up with a research question, then a plan to gather data; then you gather the data, analyze it, synthesize it, come up with a report, and share it. In each of these steps you have sub-tasks, and you need to analyze them. You need to lay them out. You need to think: this task, does it add value if I do it myself? Is it okay if some mistakes are made along the way? What should I do about that? That's my thinking process. And ultimately I am certain (because I feel this way, and I know other people feel the same) that there are tasks we don't want to do, where a few mistakes are not a big deal, where we can still check the output, and which can be automated pretty fast. Those tasks are the ones to automate. That's my feeling.

So, for the middle layer, that's what I would do. You summarize each and every insight, for instance; you tweak the sentences; you come up with a table of insights and quotes, and so on. But when the moment comes to create a presentation, at least for now, the tools are not ready to deliver good storytelling, good copy, and good design. I tried many tools, and I can give you an example: Gamma.app is a great presentation tool, but it's still not perfect. It doesn't do the job I want it to do. So I prefer to do it myself, even if it takes time. And heck, in the end I also like it. I love making presentations. I have to tell you, I'm not a master at this, and there was a time when I was really bad at it. I even had peers telling me, "Jeremy, we need to talk about your presentations." That was some time ago, but still. Along the way I developed a taste for it, so I started working on it and learning a bit about storytelling. I still have a long, long way to go, and I'm not a master, but I can already say I'm pretty confident, when I craft some slides, that they will have an impact. Kind of. I'm getting there. Storytelling in itself is a whole endeavor.
And so I would have no pretension, let's say, to call myself a master at this. I'm just saying I like it. And because of that, I wouldn't want an AI to do it. I need to craft the message myself; call me inefficient if you want, that's the point. Ultimately it's also because someone will see it: a stakeholder, someone responsible for making decisions. I want the message to come across as impactful, and I want to be the owner, the one who crafted it, because I carry full responsibility for it. If I relinquished that to someone else, or in this case something else, an LLM, I would lose control. And the thing is, when you present, stakeholders see what's on the surface, which is what you're presenting, and they will judge you. And that's fine. Well, they judge your work, and ultimately they judge you; let's not pretend otherwise. We are being judged, or at least evaluated (maybe not with grades, as in school), on the basis of our work. So if you present something, you're the owner of it. And if an AI did it, and then you need to verify it, and it's not okay, and you need to correct it, and so on and so forth, you're losing time.

I already sense that I'm not giving you clear heuristics for approaching the problem, so let me give you another example. Recently, I needed to do some competitive analysis, and I discovered there was a great way to do it with LLMs, with connectors and MCPs and so on. So I tried the tools: I used Claude and plugged it into an MCP connector, which was Perplexity, and I came up with really great insights about the competition. But ultimately, so many things went wrong. So many. First and foremost, the LLM was not able to focus. That's something we are able to do as humans: we can focus. I deliberately choose to go to website XYZ and not website ABC. When you do a competitive analysis, you know that's important, because you already have a heuristic in mind of which websites to search. But when you use AI, unless you prompt it very, very well, it will sometimes use random sources. So that was one thing.

The other thing is the ability to focus when presenting results. That's something we have to train as humans, of course, but with training we are able to produce results that are straight to the point and that make a point. The AI, on the other hand (maybe it's just my feeling), is very verbose. It shares a lot of words, and you need to parse and identify what's really relevant and important in the text before making up your mind. Recently I asked an AI to synthesize some user research insights, and some of them were one sentence long, and it came up with three paragraphs to synthesize one sentence. So then I updated my prompt and gave the LLM some rules: for instance, the synthesis could not be longer than the input, or the synthesis should be two sentences max, whatever. But still, sometimes it doesn't follow them. It's not the way I want it to be. So when you know you have acquired some expertise in a field, I'm not telling you not to use AI. But I am telling you to trust yourself. I hope this message gives you a bit more clarity about the heuristics: it's about trusting yourself.
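(To make the "rules in the prompt" idea concrete, here is a minimal sketch, assuming a workflow where you state the constraints up front and then verify the output yourself, since the model may ignore them. The rule wording and the check function are illustrative, not the actual prompt from the episode.)

```python
# Sketch: encode synthesis rules in the prompt, then verify the output
# yourself. Everything here is a hypothetical example, not a real API call.

RULES = """You summarize user research insights.
Rules:
- The summary must NOT be longer than the input text.
- The summary must be at most two sentences.
- Do not invent details that are not in the input."""

def follows_rules(source: str, summary: str) -> bool:
    # Cheap, deterministic checks a human reviewer can run on every output.
    not_longer = len(summary) <= len(source)
    # Crude sentence count via terminal punctuation.
    sentences = sum(summary.count(p) for p in ".!?")
    return not_longer and sentences <= 2

source = "The button is hard to find because it sits below the fold."
summary = "Users struggle to find the button; it sits below the fold."
print(follows_rules(source, summary))  # True -> keep it; False -> redo it
```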
If you have, I don't know, ten years of experience doing something, maybe you do it better than an LLM, because you have learned all the intricacies of what's wrong and what's right, what's incorrect and what's correct. Recently I listened to a podcast from Cal Newport, and he was saying that if you enjoy the process of writing, and if you are an expert writer, you should write yourself, because you know better than an LLM (sorry, I'm going to butcher it) what constitutes good phrasing and good writing. That was more or less the point, and I agree with it.

So, to sum up, when deciding whether to use an LLM for your work: first, do you have some time in front of you to experiment? If you don't have time to experiment, I would say do it yourself, but always carve out some time to experiment later, because LLMs are here to stay. Second, what's the cost of making a mistake? Third, how deep are you in the pipeline, from the start to the finish of your work? Maybe you don't want to use it at the start, because if you use it at the start and mistakes creep in, then picking the work up in the middle of the pipeline is bad for you: you'll have to recover from tricky situations. And maybe the end of the pipeline isn't good either, because you might have to present that output to stakeholders. Again, all of this depends on a lot of criteria, maybe the length of the pipeline. If you're a customer support agent, say, and the pipeline is basically "customer request, route it to the appropriate department, done" (imagine that's your role; I'm not a customer support pro, but I can imagine it sometimes works like this), and you have processes in place to automate that, and it works well, and your KPIs show it works well, well then, so be it.

My point is that you need to think about all of that, and not just use AI for the sake of using it. And really, I can tell you, I felt so weird, because I was thinking: well, I love AI, so maybe it's me, maybe I'm prompting it badly. Why am I losing so much time on this task? Maybe I should prompt it differently. Maybe I should have done it myself from the beginning. But I feel no one is telling us this. No one is telling us, and that's fine, because it's a new tool. No one knows, and it's constantly evolving, so no one has the rules; you need to find them for yourself. That's the thing: experiment, experiment, experiment, and be an avid learner. That's my way of thinking about this. Again, these tools are here to stay, so we should experiment. And maybe in three months' time, or less, this whole episode won't make sense anymore, because things will have evolved so much that running an entire competitive analysis on AI will make sense, maybe. Maybe we'll reach a point where one AI agent asks another AI agent to do a competitive analysis, and so on and so forth. But I don't want to go there, because I have no idea what that would look like. Anyway, I hope you enjoyed this episode as much as I enjoyed making it. I hope you learned at least one thing. We are living in very exciting times.
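(If it helps to see the summary as something concrete, here is a toy decision function that encodes the three questions above: time to experiment, cost of a mistake, and depth in the pipeline, plus the "does doing it yourself build your expertise?" theme from earlier in the episode. The field names and thresholds are invented for illustration; the episode doesn't prescribe numbers.)

```python
# Toy encoding of the episode's heuristics for "should I use AI here?".
# All field names and thresholds are illustrative, not prescriptive.
from dataclasses import dataclass

@dataclass
class Task:
    hours_to_experiment: float   # slack you have to try prompting
    mistake_cost: str            # "low", "medium", or "high"
    pipeline_depth: str          # "start", "middle", or "end"
    builds_your_expertise: bool  # does doing it yourself teach you?

def should_use_ai(task: Task) -> bool:
    if task.hours_to_experiment <= 0:
        return False  # no slack to experiment: do it yourself
    if task.mistake_cost == "high":
        return False  # costly mistakes stay human
    if task.pipeline_depth in ("start", "end"):
        return False  # the bread of the hamburger stays human
    if task.builds_your_expertise:
        return False  # tasks that feed your understanding stay human
    return True       # cheap, low-stakes, middle-layer chores: automate

print(should_use_ai(Task(2.0, "low", "middle", False)))  # True
print(should_use_ai(Task(2.0, "low", "start", True)))    # False
```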
One very good use of AI that I like is for topics I don't master: I use it to learn, basically. I ask it to provide me with a curriculum for learning about something, and then I have new directions to take, I can ask it for resources, and I can ask it to organize all of this into, well, a curriculum again. So that's exciting. I want to end on a good note: this really is an exciting moment, and I hope you feel the same way I do. Again, thanks for tuning in. This episode was way longer than I expected it to be. I hope you liked it. See you in the next one. Cheers. (upbeat music)