UX - The User Experience Podcast
Help me improve the show: https://forms.fillout.com/t/txqbF3seyNus
Welcome to the User Experience Podcast, the podcast where we (ex)change experiences! I am a firm believer that sharing is caring. As UX professionals, we all aspire to change user experiences for the better, so I have put together this podcast to accelerate learning and improvement! In this podcast, I will share learning experiences from myself and other UX professionals, answer the most common questions, and read from famous minds.
Using An App To Get Off Your Phone, And The Research That Says AI Is Affecting Our Brain
I'd love to hear from you. Get in touch!
📱 Bond — The Social Media App That Wants To Cure Your Doom-Scrolling — TechCrunch
- Bond launched this week as a social media platform explicitly designed to get you off your phone — no infinite feed, no algorithmic scroll, just a spatial view of what your friends are up to and activity recommendations based on your interests
- The core bet: remove the vertical feed and you remove the addictive pattern — the app gives you ideas for real-world activities, you go live them, you get off the app
- I haven't tested it, but I have a lot of thoughts
- First: using an app to get off your phone is paradoxical — your phone is still your phone, and everything else addictive is still on it
- Second: removing the feed doesn't remove social comparison — seeing what friends are up to, peeking at their memories, knowing they got a promotion — that's still there, and social comparison is one of the more reliably damaging patterns in existing platforms
- Third — and this one I can't let go of: end-to-end encryption is described as "a priority for us in the near future after launch" — meaning right now, the team can see your data — storing data securely is not the same as keeping it private
- The monetisation path is also unresolved — licensing user data to AI companies and product recommendations with merchant commissions are both on the table
- My honest read: the intent seems genuine, but the medium is still a phone, the social comparison patterns are still present, and the privacy foundations aren't there yet
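To make the third point concrete, here is a toy Python sketch (my own illustration, not Bond's actual architecture; all names are hypothetical, and the one-time-pad cipher is only for demonstration) contrasting server-side "secure storage", where the operator holds the keys and can read everything, with end-to-end encryption, where the client keeps the key and the server only ever sees ciphertext:

```python
import secrets

def otp_encrypt(msg: bytes, key: bytes) -> bytes:
    """Toy one-time pad: XOR with a random key of equal length.
    (XOR is its own inverse, so the same function decrypts.)"""
    return bytes(m ^ k for m, k in zip(msg, key))

class ServerSideStorage:
    """'We store all user data securely': encrypted at rest,
    but the server also holds the key, so operators can decrypt."""
    def __init__(self) -> None:
        self._records: dict[str, tuple[bytes, bytes]] = {}

    def upload(self, user: str, msg: bytes) -> None:
        key = secrets.token_bytes(len(msg))
        self._records[user] = (otp_encrypt(msg, key), key)  # key lives server-side

    def operator_read(self, user: str) -> bytes:
        ciphertext, key = self._records[user]
        return otp_encrypt(ciphertext, key)  # anyone with DB access can read

class EndToEndStorage:
    """End-to-end encryption: the client encrypts before upload
    and keeps the key, so the server only ever sees ciphertext."""
    def __init__(self) -> None:
        self._records: dict[str, bytes] = {}

    def upload(self, user: str, ciphertext: bytes) -> None:
        self._records[user] = ciphertext

    def operator_read(self, user: str) -> bytes:
        return self._records[user]  # ciphertext only; no key to decrypt with

msg = b"my private memory"

srv = ServerSideStorage()
srv.upload("alice", msg)
assert srv.operator_read("alice") == msg  # the team can read the plaintext

client_key = secrets.token_bytes(len(msg))  # never leaves the client
e2e = EndToEndStorage()
e2e.upload("alice", otp_encrypt(msg, client_key))
assert e2e.operator_read("alice") != msg  # server sees only ciphertext
assert otp_encrypt(e2e.operator_read("alice"), client_key) == msg  # client decrypts
```

The difference is not the cipher but who holds the key: both models can truthfully claim the data is "stored securely", yet only the second keeps it private from the operator.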
🧠 Concerns Grow That AI Is Damaging Users' Cognitive Abilities — Futurism
- MIT researchers split 54 participants into three groups — ChatGPT, Google search, and own knowledge only — and measured brain activity via EEG during essay writing tasks
- Students using ChatGPT consistently underperformed at neural, linguistic, and behavioural levels — and got lazier with each consecutive essay
- Brain activation in areas corresponding to creativity and information processing was significantly lower — and participants struggled to recall or quote their own AI-written essays
- This connects directly to cognitive surrender — the University of Pennsylvania finding I covered in an earlier episode — where people predominantly chose to use the chatbot even when they didn't need to
- My take: there are always trade-offs, and if you don't know them, you're still making them — taking the car everywhere instead of walking has a physical cost; outsourcing your thinking has a cognitive cost
- The question isn't whether to use AI — it's which tasks should stay yours: framing a research problem, deciding what questions to ask, writing the first draft of your own ideas — these are the muscles that atrophy fastest
- The concept from UX that keeps coming to mind: learned helplessness — users who stop trying because they've been trained to believe they can't do it without help
- The constant I'd advocate for regardless of how AI evolves: keep thinking, keep practising critical judgment, keep owning the reasoning — the human brain is shaped to do this, and it needs the exercise
In this episode: how can AI be leveraged to quit doom-scrolling? And the growing concern that AI is damaging users' cognitive abilities. Thank you for tuning in again, as every day. Today I have two news articles I wanted to share, because I could read them briefly this morning, and as always I want to share my interpretation, my comments, and my reactions. The first is the idea that AI could be leveraged to help you kick your doom-scrolling habit; this comes from a TechCrunch article I saw today. There is a company called Bond, which officially launched on Tuesday, and which at its core is a new social media platform. I do think there are some positives. In all transparency, I personally don't have much social media. What actually counts as social media? I'm not sure where the limits and thresholds are, what is considered in or out. Probably YouTube, which I do use, and Reddit to some extent. Yes, that's social, so okay, I'm in too. But I don't have the others, like Facebook, Instagram, or TikTok, because of the addictive effect they can have. I'm not saying YouTube or Reddit don't have it; of course they do. The other day I was talking to a friend who recognised that, to some extent, he was hooked on Instagram. I found this super interesting because I'm not, but I am hooked on YouTube and Reddit. It's fascinating that different social media are perceived differently by different people. Of course, as a user experience researcher, that's exactly the first kind of insight you'd expect.
So there is this new social media, Bond, which works pretty much like a normal social media platform: users post what they have been up to lately. I haven't downloaded the app and I haven't tried it, but I have many, many thoughts. First, there is no feed: no everlasting feed that you scroll like a casino machine handing out intermittent rewards to fry your dopamine receptors, the way other social media do. According to the TechCrunch article, that's the main difference for now. My understanding is that it works as an idea generator for things you could do in the real world that match your interests. Everyone has a kind of area in the app, a box in which you put your memories, and you can zoom in and out on each other's boxes: your friends, family, relatives. You can explore their interests and what they've been up to. And from what I understood, the more you post, the more you feed those boxes, the more recommendations you get. For instance, if it knows you like running, you'll get recommendations for activities involving running. It all comes from what you post. I'm highly interested in so many of the choices made in this application; really, I could speak about it for an hour. But I want to spare you that. You may have other things to do, and I'm already so thankful that you're listening today.
The main hypothesis behind this app is this: you use the app, you get ideas matching your interests, and that gets you off the app, because you go live in the real world with the suggestions you got. You get off the app, out of the phone. They say this will probably help you quit doom-scrolling. According to the article, the layout looks a little like Instagram, although there is no actual feed. I really like the article's take towards the end. I don't want to assume, but there is perhaps an opinion you can read between the lines, which is interesting. It mentions, for instance, that the revenue path for a company like this is not yet defined, or at least not yet clear. Other social media sites are giant vehicles for advertising, but this one doesn't have ads, at least for now. And apparently the CEO envisions a scenario where users can license their data for AI companies to train on; basically, you could be paid for publishing. That's what I understand from it. There are also product recommendations that integrate with e-commerce sites. Quote: "Our users would opt into this experience if we are able to do this. We believe we could capture some value from the transaction with merchants by enabling a better user experience, driving conversion and/or increasing throughput." I may be understanding this incorrectly, but that wouldn't be so different from what influencers do right now. Maybe at the UX level it would be a flow in the app, but essentially it's what influencers do: recommend a product, give an affiliate link, you click, you pay, they earn a commission.
That's my understanding, but I may be wrong. And this is what I like about the article towards the end, where there are three or four paragraphs. The first says Bond would never sell users' data, quote, "for the purposes of advertising", and that users can delete any memories, either in the memory tab or using natural language in the memory chat. He added, quote: "Users can also delete their profile if they're not getting value from Bond. As the product grows, we will introduce more privacy control features to our users for them to manage their data." Then: Bond will, quote, "improve its encryption over time", though he is a little vague about the platform's current protections. "End-to-end encryption is a priority for us in the near future after launch. In the meantime, we store all user data securely in our database and ensure it's protected." It's kind of funny. The article says: "At the moment, Betsirovic seems mostly focused on making Bond cool." (I'm sure I'm pronouncing that name incorrectly, I'm sorry.) This is the quote of the day. First you make an app and focus on making it cool; you care about privacy later. That last part is not the quote, that's me. So many, many thoughts here. These are the things that, as a user experience professional, I became trained over the years to think about. I'm not saying my way of thinking is the best or the one you should adopt; I'm just giving you my thoughts, and probably some tools for thinking about the things we use. First and foremost: why are we using what we use on a day-to-day basis? We should ask ourselves these questions. Why do I use a smartphone? Can we challenge the assumptions? Can we challenge the reasons why we are doing what we are doing? Is there any reason to do things differently?
I spoke to a friend recently and we talked about going analog for some time. One day a week I go analog and don't use my phone, and it's a relief. I'm not saying it's sustainable long term, but it's a relief. I came to this by simply wondering: is it really necessary for me to use a smartphone, or to be on screens seven days out of seven? So my rationale was: why are we using what we are using? Once you think about that, it becomes: okay, I'm using a phone app to get out of my phone. That's really interesting. And I'm not saying this to demonize the app; I haven't tried it, it's probably wonderful. But like any person with some critical thinking, I like to pause and ask myself questions. This is an app meant to get you off your phone and, let's say, avoid doom-scrolling. The huge underlying assumption is: if we remove the vertically scrolling feed, we remove the negative aspects of social media. We know that vertical scrolling is indeed dangerous, to say the least, because it's infinite; you can go on and on. I think at some point Instagram even implemented a notice telling you that you've reached the end of what's new for today, come back later for newer content. So we know that vertically scrolling through something algorithmic is dangerous because of its unpredictability: you don't know what you will get, it reaches your retina and then your brain, and before it even accesses your consciousness it has already had an effect on you. That's why it's so powerful.
Because it rewards you, and you only become aware of the reward after the fact, so it's really difficult to control. And it's an intermittent reward: sometimes it gives you something you like, sometimes something surprising, sometimes something that leaves you indignant. But it will usually not leave you neutral. We know this is a deceptive pattern in user experience; it fosters addiction. I'm not saying it can't also have a positive impact, but in my opinion that impact is drowned out, mixed in with all the negativity. So for me it's a no for some of these platforms, even though I know I'm using others that have the same effect. That's also interesting to reflect on: if I'm to use this new social media because it doesn't have a feed, what does that say about my use of other social media? That's the first reflection. My second reflection: the idea here is that a social media app in itself provides enough value to supersede its negative aspects, so that if we remove the feed, it's still legitimate to have it. Let me step back and consider the assumptions. We have a social media app that gives recommendations and tells you what your friends and other relationships are doing, and the assumption is: if we remove the feed, then it's okay, then it's better, because it gets you off your phone. But how can we get off a phone if we are on a phone already, and if we have a reason to keep going back to that phone?
Even if the app is the least addictive one ever, it's still on your phone, and you have other addictive stuff on your phone. So the medium itself is questionable to start with. And then what about the social aspect itself? We tend to trivialize the term "social". Yes, we can to some extent call this social, but is it the same definition of social as gathering with friends at a bar, a restaurant, or a dance class? Maybe not. On a continuum of social interaction, how social is it to look at a friend's story, react, and swipe on to something else? What is the social aspect of that, and what are the implications? Because I'm seeing something similar to Instagram stories: what everyone is up to. I can peek, get ideas, and compare myself to others. We know social comparison is really dangerous for everyone's self-esteem, and I'm still seeing that same pattern here. So yes, we get rid of the feed, but other patterns remain. As for the social aspect, this is a core need. We are intrinsically wired to be social creatures; we need to bond, and that's probably why they gave the app this name. We couldn't live alone. We need to live experiences together and help each other out. Long ago, people needed to bond because it helped them face threats together; they were stronger together than isolated, and being cast away meant being in danger. So what I'm seeing here is the idea of leveraging a core human need that is normally meant to be met physically. And again, I'm not casting stones; I'm myself a user of social media. I'm just trying to analyze the product I'm using and what it means.
Once you scroll through LinkedIn and see that someone got a job or a promotion that you didn't, what does that do to your brain? That reaction is human to some extent; can we totally shut off all the reactions we would have, even at a micro level? So for me the app still has social comparison, which can be bad. That's one aspect. The other aspect: using an app to get off your phone is paradoxical, naturally. That's not a breakthrough observation, and I think everyone would have this thought. And then I'm really surprised, again and again, to see that when an app or a website comes out, the privacy aspect is so often not thought of from the start. As the TechCrunch article says, and I sense a bit of irony here (I'm sorry if it was not intended by the senior writer; it's probably just me): "At the moment, Baitsirovich seems mostly focused on making Bond cool." That's really funny, because they also say Bond will improve its encryption over time, though he is a little vague about the platform's current protections: "End-to-end encryption is a priority for us in the near future after launch. In the meantime, we store all user data securely in our database and ensure it's protected." Well, as a user, when you use something that is free, you know what the implications are. It's easier for us, once we use something, not to think about the consequences, or about everything that's implied once it's digital, because our data goes somewhere.
For us, it feels like it goes into "the cloud", and even if you know how it works, it's really easy to think of it as magic: it disappears, it flows somewhere. Yes and no. Yes, because that's how you perceive it; no, because it does go somewhere, onto a hard drive. Your data is stored somewhere. Someone has your data, your photos, your experiences. And again, I do use these kinds of services: I used Instagram in the past, I use Reddit and YouTube, I'm publishing my podcasts, but I know what that implies. The data goes somewhere. I'm just taking a theoretical stance here: when you craft a product and you say that encryption is "a priority for us in the near future", but not right now, honestly, I don't understand that. "In the meantime, we store data securely." What does "securely" mean? "And ensure it's protected." What does "protected" mean? It may well be secure and protected from phishing and attacks, but it's not end-to-end encrypted. That means the team, the product team, the engineers, can see the data. So yes, it's probably secure and protected from the outside, but what does that say about the inside? I'm not the one making these kinds of decisions or building these kinds of products, so who am I to say? But it's surprising to see this kind of statement, because in the end, this is user data. And then they say monetization is not a short-term priority. At the same time, taking the stance of businesses in general, I can understand them, because when you kickstart a product, you need to see that there is value in the product.
And it's useful to have data to improve the product. So I can probably see where they're coming from, but at the same time, ethically speaking, I think it's good to be clear on the encryption and privacy practices. I'm not talking about security of data; security is having your data protected from attacks. I'm talking about privacy: that you are the only one who can access your data, along with whoever you authorize it to be shared with. That's it. So I don't know, I'm so surprised. And also by all the assumptions being made: we make a social media platform, everyone shares. Tell me the differences with Instagram, Facebook, and the like. Everyone shares what they are up to. You get recommendations to do something else based on your interests; so far, I don't see any difference. You can have friends and share with them; I don't see any difference. Oh, we removed the feed. That's the difference: the app isn't addictive because we removed the vertical feed. Again, I haven't tested it, but if the intent is to get us off our phones, having the phone as the medium for that is dubious. Anyway, that's my analysis of this piece of news. Then we have another piece, an article from futurism.com: concerns grow that AI is damaging users' cognitive abilities. The MIT research team used electroencephalograms. Oh gosh, how I wish I could go back to using electroencephalograms; I did that during my studies, my first internship, and my first job, and it was amazing. So: they split 54 participants into three groups. One was told to use ChatGPT, one could search for information on Google, and another had to rely on their own knowledge.
They have published a paper, apparently yet to be peer-reviewed, which says there is an accumulation of "cognitive debt" when using an AI assistant for essay-writing tasks. Quote: the students using ChatGPT "consistently underperformed at neural, linguistic, and behavioral levels", and even "got lazier with each consecutive essay". Quote: "The brain didn't fall asleep, but there was much less activation in the areas corresponding to creativity and to processing information." Participants using ChatGPT also struggled to quote their own essays, dovetailing with other research that has found information recall can be negatively affected by the use of AI. They also observed lower originality: the essays produced by students using ChatGPT were largely the same. Another recent paper, by researchers at the University of Pennsylvania, found that participants who were asked to answer a variety of reasoning and knowledge-based questions, and were given the option to use ChatGPT, predominantly chose to use the chatbot, in what the scientists termed "cognitive surrender". I think I covered this theme in a past episode. It's basically the idea that the more you use AI tools, the more you tend to cognitively surrender, meaning you lose a bit of critical thinking, judgment, nuance, and so on. Okay, let me step back a little. I love AI tools; this is amazing technology, it can do wonders. I use it every day to generate ideas, to fine-tune an article I'm writing, to get feedback, or to kickstart something so I'm not facing a blank page. But know this. I think I mentioned it in a previous episode, but today I'll adopt a different angle: there are always trade-offs. Always. And if you don't know about the trade-offs, that means you make decisions
without knowing about them, but they are there nonetheless. If you take the car every day to go every little place, you will gain weight. If you go from a certain level of physical activity to less, keeping everything else constant (what you eat, how you sleep) but driving everywhere instead of walking, you will gain weight. There's no way around it. It's the same with AI tools. The human brain is made to learn; it's plastic, it's meant to experiment, to struggle, to be creative, to think. If you outsource that to someone or something else, you're not using your brain for those things. It can be good in some ways, I'm not saying it's not. I'm just asking: in which ways do you want that to be delegated? If you're writing a book, to take the extreme, do you want to ask the AI to write the whole book while you just prompt it? Or do you want to be the one writing, with your own ideas and experiences? If you're planning a user research study, do you want to think about the objectives yourself, or do you want the AI to come up with them? These are the questions we should ask every time. I also like the term "cognitive surrender". It's close to another term that describes a different topic but is, I think, related. In user experience, when users interact with something and it doesn't work, they start avoiding certain behaviors or stop trying to make it work; this is called learned helplessness.
They think they're using it wrong, so they don't try harder or adapt it to their needs. They've learned that they are helpless with this flow, that it doesn't work, that they are the ones to blame, and that's that. If a user experience professional does their job correctly, this shouldn't happen: our users shouldn't feel helpless when using our product. In the end, they are the users; they should be the ones in control, the ones the product or technology caters to. I see the same idea with AI. Learned helplessness can be avoided, when we make a product, by having users feel confident in their abilities, knowing that if anything is to blame, it's the product, not them. We make products to serve people's lives, not to put people in a hamster wheel. The risk with AI is to feel helpless: to feel we are not enough, not legitimate to think our own way, to fail, to experiment, to learn; to lack confidence and say, okay, I'll use AI for that, without even considering that we could also come up with great ideas ourselves. Imagine you need ideas for a flyer announcing a meeting series in your company. You can start from a blank page and brainstorm from nothing, or you can start with ChatGPT. I'm not saying one is better than the other; I'm asking: what are the trade-offs?
If you start from a blank page, some of the trade-offs could be anxiety, stress, the difficulty of generating ideas, not being sure what they're worth, and so on. With ChatGPT, what are the trade-offs? You go faster, yes. You probably feel less stressed because you already have something to start with. But doesn't it restrict the possibilities? When you iterate, what is the range of options you'll consider, if you restricted it from the start by using an AI tool? That's one example. Another: analyzing data with an AI. You could argue that you can prompt it again and again to make sure the analysis is done correctly, but imagine you don't. You'd have an analysis, but has every nuance been considered? Is the depth there? Maybe not. Maybe you would have it if you had done the analysis manually. Again, I'm not advocating doing everything anti-AI; I'm just trying to frame my thinking around which tasks should be handled by AI and which should be handled by you. I don't intend to sit in an ivory tower, completely disconnected from reality. I do know how the world works, at least a tiny bit, and I feel that all around us we will be pushed to use AI more and more, because it increases efficiency, and the human mind always tries to be more efficient, to feel less of the pain of doing the task. That's a reality, and society is a reflection of it. So I do know that, ultimately, the tasks that can be streamlined without too much risk, while staying as accurate and as qualitative as before, will be handed off.
Well, we will hand those tasks off to an AI, and that's it. That will happen progressively, more or less. I know that. But in the meantime, I think it's still good to pause and wonder what is uniquely human and what is uniquely AI, and to reassess every time, so that we don't feel we have cognitively surrendered. The constant should be that we keep thinking, keep our critical thinking abilities, keep reflecting, and keep practising this muscle we have, which is thinking. Whatever the advances in AI, the human mind will still need to think a hundred or two hundred years from now. That is a constant, so we need to keep using it. That's my feeling. Maybe today was a really opinionated episode; I'm sorry if it was too much to handle, but I still thank you for tuning in and listening. Please feel free to react and give your feedback. There is a link in each episode to a survey: if you want me to improve in some way, to interview certain people, or to cover other topics, please let me know. I really thank you for listening. If you are among the listeners who are not yet subscribed and you don't want to miss any episode, please subscribe and enable whatever alerts you have available. See you tomorrow for a new episode. Until then, take care. Bye bye.