Guardrails for AI in pharma, biotech and medtech

 

The convergence of AI, cybersecurity and supply chain strategy is reshaping the pharmaceutical, medical device, and biotechnology sectors. Adopting and managing these technology-based operations, particularly where their functions intersect, can be a daunting endeavor for life sciences companies.

 

A panel of experts, moderated by Zara Muradali, Head of Life Sciences at Grant Thornton, recently discussed the intersection of these technology advancements and the implications of their use in the life sciences industry. The panel included:

  • Edna Conway, a globally recognized expert in cybersecurity, supply chain and risk management, and former Chief Security & Risk Officer at Microsoft and Cisco.
  • Jennifer Bisceglie, founder of Interos, Inc. and a sought-after expert on third-party and supply chain risk and the development of new technologies that foster business intelligence.
  • Ayan Paul, Principal Research Scientist at the Institute for Experiential AI at Northeastern University.
  • Vikrant Rai, Managing Director at Grant Thornton and an advisor on cybersecurity programs across life sciences enterprises.

Throughout the discussion, the four panelists offered actionable insights on opportunities, challenges and critical issues that can arise when technologies converge. In these discussion excerpts, Muradali and the four panelists examine how organizations can harness AI while safeguarding innovation, patient safety and operational continuity in an increasingly interconnected world. 

 

Communication failures can have ripple effects

 

 

3:30 | Transcript (PDF - 150KB)

 

Conway shares a cautionary tale from her previous role as Chief Security & Risk Officer about discovering a hidden supply chain issue that disrupted connectivity. She emphasizes the challenges of deep supply chain visibility, the need for mutual respect and information-sharing across ecosystems, and the importance of collaboration to ensure resilience and security.

 

Barriers to successful AI adoption

 

 

1:35 | Transcript (PDF - 111KB)

 

Bisceglie explains that the main barriers to successful AI adoption in security functions are the lack of top-down governance and the tendency for humans to overreact when faced with rapid information flow. Organizations often collect excessive data, she adds, without a clear understanding of the specific business problem they aim to solve, making it essential to focus on relevant data and measured decision-making.

 
 

Need for stronger guardrails

 

 

2:35 | Transcript (PDF - 113KB)

 

Paul describes how his transition from particle physics to life sciences highlighted the challenge of distilling massive data sets into meaningful insights. He notes that while particle physics can tolerate occasional errors, in life sciences incorrect data analysis can have life-or-death consequences — making strong guardrails for data aggregation and decision-making essential.

 

Making AI adoption a success

 

 

3:43 | Transcript (PDF - 115KB)

 

Rai describes how successful AI adoption in life sciences requires balancing rapid innovation with responsible governance. While organizations face pressure to implement AI quickly, they must ensure alignment across projects, uphold ethical standards around transparency, security, and privacy, and maintain a strong human element.

 

Getting to a place of trust

 

 

3:39 | Transcript (PDF - 116KB)

 

In this conversation, Conway discusses how gathering real-time data on pulsatile blood flow could revolutionize life-sustaining technology, but warns of the security risks if such data falls into the wrong hands. Bisceglie follows to point out how rapid technological advancement in life sciences raises critical questions about how to establish and maintain trust in operational environments where errors can cause real harm. Rai emphasizes that as technology evolves and misinformation grows, the paradigm must shift from “trust but verify” to “verify before trust” to ensure security and reliability in data-driven systems.

 
 

The governance challenges of AI

 
 

7:07 | Transcript (PDF - 116KB)

 

In a free-ranging conversation among all four panelists, Conway observes that while AI evolves rapidly, humans remain central, and governance must adapt to manage intelligent systems that act as new learning entities. Rai counters that effective AI governance can be centralized, decentralized, or federated, but it must align with each organization’s structure and evolve toward dedicated AI strategy functions. Bisceglie adds that strong executive oversight and clear guardrails are essential to balance innovation and control, ensuring technology doesn’t outpace corporate governance. Paul concludes that while risks from powerful AI systems are inevitable and not fully understood, organizations must still proceed responsibly — learning through action while constantly developing and refining guardrails to prevent catastrophic outcomes.

 
 

 

Content disclaimer

This Grant Thornton Advisors LLC content provides information and comments on current issues and developments. It is not a comprehensive analysis of the subject matter covered. It is not, and should not be construed as, accounting, legal, tax, or professional advice provided by Grant Thornton Advisors LLC. All relevant facts and circumstances, including the pertinent authoritative literature, need to be considered to arrive at conclusions that comply with matters addressed in this content.

Grant Thornton Advisors LLC and its subsidiary entities are not licensed CPA firms.

For additional information on topics covered in this content, contact a Grant Thornton Advisors LLC professional.

 
