When a nonprofit employee wants to use a free AI tool to summarize sensitive case notes, or a small business owner realizes they could save hours by uploading financial spreadsheets into a chatbot, the question they ask our clinics is the same: “Is this safe?”
These aren’t hypothetical concerns about the distant future; they are fundamental security questions for the here and now. For our member clinics, responding to them is a core part of providing cybersecurity support to community organizations in 2026.
For seven years, cybersecurity clinics have served as trusted partners for community organizations with little to no access to digital security support. Today, as the rapid adoption of AI tools by organizations of all sizes expands the attack surface and introduces new risks, cybersecurity clinics are stepping up to meet the challenge.
Expanding the Scope for Digital Security
AI is often treated as a ‘shiny new object,’ but cybersecurity clinicians see it differently. We believe that protecting a community organization requires addressing AI safety and cyber risk as two sides of the same coin.
As we’ve seen with other technological advancements, the least-resourced organizations often bear disproportionate digital security risks. That’s why cybersecurity clinics are training our students to help our clients—like small businesses, non-profits, K-12 schools, local governments, and rural water utilities—manage their AI security as part of an overall security assessment.
Luckily, we don’t need to reinvent the wheel to meet student and client needs at the intersection of AI risk and security. As trusted partners in their communities, cybersecurity clinics are already on the ground. Our students are already learning the hands-on skills essential to enabling safe and responsible use of AI in the real world. By connecting the dots between AI and cyber risk, we see that the digital security perimeter has expanded, and with that expansion comes a strategic opportunity to do more for our students and our clients.
Cori Faklaris, Faculty Director of the UNC Charlotte Cybersecurity Clinic, sees this responsibility firsthand in her community: “As AI becomes an invisible layer in every community organization, it’s both a potential helper for their operations and a huge new threat. We feel a duty to help nonprofits in Charlotte navigate AI and leverage it safely. This work empowers our students to demystify AI risk and to create human-centered security protocols that even the smallest nonprofit can implement.”
Developing Practical Resources for Real-World Applications
The Consortium’s AI Risk Management (AIRM) Working Group has spent the past 9 months building the tools that faculty and students need to navigate this shift. In collaboration with experts at UC Berkeley’s Center for Long-Term Cybersecurity’s AI Security Initiative (CLTC AISI), we have developed:
- AIRM Resource Library: A hub for existing literature and resources on AI risk tailored for cybersecurity clinical education and practice.
- AI Risk Management Curriculum: Course modules specifically tailored for clinical education, covering topics such as AI opportunity and risk mapping, mitigation strategies for resource-constrained organizations, and technical threats like prompt injection.
- AI Risk Assessment Framework: Practical tools that can be layered onto a clinic’s existing cybersecurity risk assessment framework to help community organizations understand and act on AI risk.
Together, these resources serve as the building blocks of a new standard for digital security education, training, and service in cybersecurity clinics.
As Nada Madkour, Interim Director of CLTC AISI, explains, these integrated tools are a necessity for modern defense: “Cybersecurity hygiene must now include AI safety. Any organization that utilizes AI opens itself to new attack vectors, and AI now allows malicious actors to operate at heightened speeds, scales, and levels of sophistication. It is very important to train our students to address these disciplines in unity, because any organization that treats them in silos will always be underprepared for the threats that live at their intersection.”
The “Win-Win” of Modern Clinical Education
By integrating AI risk into our existing structures, both groups at the heart of our mission benefit:
| The Student (Workforce Development) | The Client (Community Resilience) |
| Real-World Skills: Students learn to manage AI risk in real-time, preparing them for a career path where skills in technical AI safeguards, ethics and communication are essential. | Integrated Protection: Community organizations receive digital security support, including cyber and AI, from a single source. They don’t need separate providers for the two. |
| Field-Ready Talent: Clinic students graduate with a “bilingual” skill set, competent in both traditional cybersecurity and AI risk management. | Safe Innovation: Organizations can use commercial AI tools to work more effectively while understanding the associated risk and best practices. |
Clinics “Closing the Gap”
History shows that with technological advancements, small community organizations are often the most impacted and the last to receive support. Today, these organizations are encouraged to adopt commercial AI tools that promise to save time and resources, yet they frequently lack the specialized support to navigate the associated risks. Our goal is to ensure that a “knowledge gap” does not become a permanent “safety gap” for the vital organizations our communities rely on.
The Consortium provides the skills, network, and clinical structure to meet the emerging workforce and community needs introduced by AI. In a digital era that is constantly evolving and expanding, our clinics are, too. By advancing clinical pedagogy and practice, clinics are doing what we have always done: standing in the gap to ensure our students and our communities have the expert support and practical skills they need to thrive.