In his MAS thesis, David Preissel examines what happens when technology outpaces human adaptation.
As Generative AI (GenAI) technologies evolve at speed, organisations face the challenge of helping their workforce adapt to a continuously changing work environment. Approaching this challenge through a Change Management lens, this article draws on the central findings of David Preissel’s MAS thesis, completed as part of the MAS Leadership and Change Management at the FHNW School of Business. It explores the human side of AI and why the transformation goes beyond technology and strategy.

Technology Accelerates Change
The saying “culture eats strategy for breakfast” is a classic business-school favourite, suggesting that culture moves slowly and provides a stable backdrop for strategic execution. Today, that assumption no longer fully holds. AI changes weekly, sometimes daily, and culture often struggles to keep up.
While AI tools and transformation stories are celebrated across platforms like LinkedIn, the reality inside organisations is often different: AI licences are distributed across enterprises, pilots are launched with enthusiasm, usage spikes, and then adoption plateaus. The greater risk is not culture eating strategy, but organisations mistaking visible AI activity for genuine day-to-day integration. AI can begin reshaping habits and decision-making before organisations have agreed on what “good” looks like in an AI-augmented workplace.
Shifting Identities
Beneath the dashboards and adoption metrics sits a human challenge that transformation roadmaps rarely address: the shift in professional identity triggered by GenAI.
Many knowledge workers are moving from creators, who write code, draft strategic plans, and solve complex problems, to orchestrators, who review AI output, refine prompts, and validate quality. That shift changes how work gets done and how people experience their value at work. The distinction matters more than it first appears. It touches the core of how people understand their professional contribution, which makes the change personal. For high-performing experts, years of hard-earned expertise can suddenly seem more negotiable. When a less experienced person can produce acceptable outputs faster with AI, expertise may feel less distinctive, even when it remains essential.
This was one of the key findings from expert interviews conducted as part of the research. Leaders tend to see AI tools that increase efficiency, while workers may be more likely to experience them as a threat to their professional worth. One IT leader noted that developers often prefer manual methods over AI-assisted work because of the black-box nature of the technology and because AI can feel less like a neutral productivity gain and more like a challenge to professional identity. Left unaddressed during the change process, this tension can push teams back towards old behaviours and workflows.
Psychological Safety As The Foundation
From an Organisational Change Management perspective, this matters because resistance is more than a reaction to new tools. It is often a response to uncertainty, loss of control, or a perceived loss of professional value. When faced with resistance, many leaders instinctively respond by pushing harder, adding adoption metrics and accelerating rollouts and timelines. Yet research suggests this can backfire, because resistance increases when pressure does.
Drawing on Edgar Schein’s cultural models and Kurt Lewin’s change theory, David Preissel argues that deep-rooted cultural assumptions are unlikely to shift simply because a new technology is introduced. The identity shift from creator to orchestrator therefore calls for psychological safety: an environment where employees can experiment with AI as part of the learning process without fearing obsolescence.
Leaders therefore need to start by reducing restraining forces. This means genuinely addressing employees’ concerns, for instance by communicating that AI is there to augment their skills, not replace them, before scaling broader AI transformation efforts and mandates. Without psychological safety, people may avoid experimenting because they fear mistakes, quietly resist changing real habits, and turn adoption into compliance theatre rather than genuine workflow change.
Leading Change Through Credibility
IT organisations within enterprises face a specific challenge: they must drive organisation-wide AI adoption while transforming themselves at the same time. This dual role creates pressure to demonstrate internal success first while leading their own workforce through an AI journey, since credibility is often a prerequisite for enterprise-wide influence.
Augmenting work with AI is more likely to succeed when organisations define clear boundaries: where does human judgement sit in AI-enabled workflows, and which decisions remain firmly in human hands? Without such guidance, anxiety among workers can rise quickly. A practical reference point is the “competent human baseline”. It measures work quality against skilled human output rather than the illusion of machine-made perfection, while also clarifying authorship, decision rights, and quality accountability.
Reshaping the Vessel
Because technology will not slow down, cultural adaptation needs to be deliberately designed to keep human judgement at the centre. That means redesigning work so it remains valued and recognisably human. Leaders must reshape the organisational “vessel” so AI can flow through it without washing away what makes work human.
Without that foundation, AI adoption may stall or remain superficial, often because the identity shift beneath the surface was never properly addressed. From a change perspective, the path forward is not simply faster deployment, but smarter sequencing: address fear first, then build capability, so learning can stabilise even while the tools continuously evolve.
About the Author
David Preissel leads change strategy and Roche affiliate engagement models aimed at increasing AI solution adoption across Roche Digital Technology, Roche’s global IT organisation. In that role, he focuses on the people side of change and partners with stakeholders and AI consumers across the organisation to develop change approaches that work in real settings. Before the Roche AI Corporate Strategy went live in early 2025, David led the change management team for Roche IT’s AI Center of Excellence. The team delivered technical capabilities through an enterprise AI platform and expanded AI upskilling and consultancy offerings across the broader organisation. At a time when much of the industry is focused on technology, he recognised a gap in research on AI’s impact on work culture. That insight led to his master’s thesis, Organisational Culture Transformation as a Necessity for GenAI Adoption, completed as part of the MAS Leadership & Change Management programme.
The author would like to extend heartfelt thanks to Prof. Dr. Renate Grau, Head of the MAS Change Management & Leadership programme, and to thesis supervisor Prof. Dr. Steffen Dörhöfer for their support and guidance throughout the thesis process. Special thanks also go to Prof. Dr. Wolfgang Ansari for his inspiration in the field of Change Management and for his encouragement and support in enabling this MAS study programme.
Contact

Prof. Dr. Renate Grau
- Phone: +41 62 957 28 29
- Email: renate.grau@fhnw.ch