
Despite growing concerns around ethics, privacy and misuse, generative AI (GenAI) continues to see rapid uptake. The study finds that moral concerns play little role in individuals’ decisions to use tools such as ChatGPT.
Based on a survey of close to 500 information systems students in Europe, the study explores the concept of legitimacy – how desirable and appropriate a technology is to users – and identifies three types that shape acceptance:
- Pragmatic – does it help me?
- Moral – is it the right thing to do?
- Cognitive – does it feel familiar?
If a tool works similarly to established technology, such as Google or WhatsApp, and helps users get something done, they’re likely to adopt it. Whether using the technology was seen as morally right had no measurable impact.
Professor Bunduchi said, “It’s not that people weigh up the ethics of using GenAI, think it is wrong to use it within a particular context, and decide to go ahead anyway. If something feels intuitive and helps us get things done, we adopt it without much reflection.”
“When technology fits into patterns we already know, we experience less friction,” Bunduchi added. “That familiarity makes it easier for us to use it – even when the technology itself is new or poorly understood.”
The findings suggest that responsibility for ethical technology use cannot rest with individuals alone.
“Unless a technology’s harmful effects are obvious, people will not stop to think about using it. That’s why the responsibility for ensuring the technology is developed and used ethically has to sit with technology providers, regulators and society. We need collective solutions to collective problems,” said Professor Bunduchi.
To learn more about this new study, you can read the full publication.

Raluca Bunduchi is our Personal Chair of Innovation.