Most cybersecurity professionals believe that AI will have a positive impact on their jobs, helping alleviate pressures caused by the cyber skills gap, a new report by ISC2 has found.
More than four in five respondents (82%) agreed that AI will improve job efficiency for cyber professionals, with 42% strongly agreeing with this statement.
An even higher proportion (88%) expect AI will significantly impact their job over the next couple of years, with 35% stating that it already has.
More than half (56%) of those surveyed believe AI will make some parts of their job obsolete. This isn’t necessarily a negative given the growing cybersecurity workforce gap, according to ISC2, which estimates there is a shortfall of four million people in this industry.
The cybersecurity job functions impacted by AI and machine learning (ML) are some of the most time-consuming and repetitive in nature, the report found. These include analyzing user behavior patterns and automating repetitive tasks.
“It is unlikely that AI is going to make major inroads into closing the supply and demand divide, but it will play a meaningful role in allowing the 5.5 million [global cybersecurity workforce] to focus on more complex, high value and critical tasks, perhaps alleviating some of the workforce pressure,” the report noted.
AI is Increasing Cyber-Threats
More than half (54%) of respondents reported seeing a substantial increase in cyber-threats over the past six months. Of those, 13% directly linked this increase to AI-generated threats, while 41% could not make a definitive connection.
Worryingly, 37% disagreed that AI and ML benefit cybersecurity professionals more than they do criminals, with just 28% agreeing with that statement and 32% unsure.
The biggest AI-based threats cited by respondents centered on misinformation attacks:
- Deepfakes (76%)
- Disinformation campaigns (70%)
- Social engineering (64%)
- Adversarial attacks (47%)
- IP theft (41%)
- Unauthorized access (35%)
Other significant AI-driven concerns revolved around regulation and data practices:
- Lack of regulation (59%)
- Ethical concerns (57%)
- Privacy invasion (55%)
- Data poisoning – intentional or accidental (52%)
Four out of five respondents believe there is a clear need for comprehensive and specific regulations governing the safe and ethical use of AI.
How Organizations Secure AI Tools in the Workplace
Only 27% of cybersecurity professionals said their organizations have a formal policy in place to govern the safe and ethical use of AI, and just 15% have a formal policy on securing and deploying AI technology.
However, a substantial proportion of organizations are currently discussing a formal policy on the safe and ethical use of AI (39%) and on how to secure and deploy AI technology (38%).
Around one in five (18%) have no plans to create a formal policy on AI in the near future.
The report also found there is no standard approach to governing employee use of generative AI tools across organizations.
More than one in 10 (12%) have blocked all employee access to generative AI tools, and 32% have blocked access to some of these tools.
Nearly half (46%) either allow employee access to all generative AI tools or have not yet considered the issue.
Encouragingly, 60% of cybersecurity professionals said they could confidently lead the rollout of AI in their organization, although more than a quarter (26%) are not prepared to deal with AI-driven security issues.
Over two in five (41%) admitted they have little or no experience in AI or ML, while 21% do not know enough about AI to mitigate concerns.
ISC2 CEO Clar Rosso said the findings demonstrate that cybersecurity professionals are aware of the opportunities and challenges AI presents, and are concerned their organizations lack the expertise and awareness to introduce AI into their operations securely.
“This creates a tremendous opportunity for cybersecurity professionals to lead, applying their expertise in secure technology and ensuring its safe and ethical use,” commented Rosso.