Exploring next steps in AI Insights

Committed topics for further research

This insights series will next address offensive AI and defensive AI. Further topics to be covered will be decided via Member interactions.

1 | The threat of offensive AI

This paper will explore the use of AI to develop and deploy cyber-attack tools, tactics and techniques aimed at exploiting vulnerabilities, compromising systems, and undermining security measures.

AI offensive capabilities allow attackers to automate and optimise various stages of their attack lifecycles, including reconnaissance, weaponisation, initial compromise and post-exploitation activities. Machine learning (ML), natural language processing (NLP) and automated decision-making capabilities allow offensive AI tools to identify and exploit zero-day vulnerabilities, bypass traditional perimeter defences and dynamically adapt their tactics to minimise the risk of detection and subsequent attribution. Such capabilities pose a significant threat to defenders and organisations that are not equipped to detect and respond to these new types of attack.

As offensive AI capabilities increase, the risk of non-state actors and less technically skilled criminals adopting as-a-service offerings to launch targeted attacks that are more frequent, scalable and effective will continue to grow. The emergence of AI-driven attack techniques, AI-enhanced social engineering and AI-generated malware all make attribution, forensic analysis and incident response more difficult. As a result, the need for defensive AI tools and strategies is only likely to grow.

Related research

ISF research

This research is only available to ISF Members

Nurturing Security Governance: Becoming effective through engagement

Information security governance should be the guiding hand that organises and directs the resources dedicated to risk mitigation efforts into a business-aligned strategy. Engaged and effective governance will greatly enhance the organisation’s success, commercial or otherwise, in the long term.

ISF Analyst Insight Podcast

AI Futures: Assessing the danger

Rob Macgregor, Lee Munson, Richard Absalom and Mark Ward

2 | The benefits of defensive AI

In this paper the ISF will examine how the application of AI can bolster cyber security defences, strategies and operations, such as detecting, mitigating, and responding to cyber threats and attacks.

AI-powered defensive capabilities enable organisations to proactively identify and analyse potential security threats, vulnerabilities, and indicators of compromise (IOCs) using anomaly detection techniques and predictive analytics. By continuously learning from new data, evolving threat landscapes and historical attack data, defensive AI solutions can adapt and optimise their detection capabilities, improve their efficiency in threat analysis, and enable organisations to prioritise and respond to security incidents more effectively and efficiently.
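To make the anomaly detection idea concrete, the sketch below flags statistical outliers in a stream of per-host traffic measurements using a simple z-score test. This is a minimal, hypothetical illustration of the principle only: production defensive AI tools use far richer features and learned models, and the threshold and sample data here are assumptions, not ISF recommendations.

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Return indices of values whose z-score exceeds the threshold.

    A minimal stand-in for the anomaly detection techniques described
    above; real defensive tooling would learn a baseline from far more
    data and many more features.
    """
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no variation, nothing can be anomalous
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Hypothetical daily outbound-megabyte counts for one host; the final
# spike is the kind of deviation that might indicate exfiltration.
traffic = [120, 130, 118, 125, 122, 131, 119, 127, 124, 9500]
print(zscore_anomalies(traffic))  # the spike at index 9 is flagged
```

In practice the "baseline" would be learned continuously, which is exactly the adaptation to new data and evolving threat landscapes described above.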

Integrating defensive AI technologies allows organisations to automate and streamline a variety of cyber security tasks, processes, and workflows. This can improve operational efficiency, reduce the need for security operations centre (SOC) and incident response personnel, or free up analysts to engage in other security tasks.

Related research

ISF research

Threat Horizon 2025: Scenarios for an uncertain future

When replacing human workers with automation or AI, organisations will almost certainly consider the potential benefits of such a change. They should also give equal consideration to what is lost by removing human critical thinking, imagination and experience from affected processes.

“The single greatest concern involving AI and the Pentagon is the integration of AI into weapons systems such that they can function autonomously, delivering lethal force without intervention or meaningful human control.”

– A.I. Joe: The Dangers of Artificial Intelligence and the Military18

Exploring potential future research topics

The following topics are of general interest to organisations that are using, hoping to use, or worried about AI. Each is given a high-level introduction here and may be expanded upon later in the series.

The ethics of AI

Multiple organisations now adopt AI as part of their hiring procedures, seeking to automate and optimise the process and reduce the role of the human factor. However, does AI manage to overcome individual prejudices and offer impartiality, or is it simply operating within existing structural inequalities relating to race, gender or socio-economic status? For example, several experiments suggest that significantly fewer women are shown online adverts for high-paid jobs, while people of colour are more often identified as criminals or unskilled workers.12,13

If we were to think of AI as an individual, it would be a white, middle-aged male, mirroring both the thinking and background of those who most often train the models.14 This raises the question “Is ‘decolonisation’ of AI possible?”15 If so, is it a task to be achieved by society at large, strict regulations, AI developers, organisations adopting it in their daily operations, or by all four working together? And how, if at all, should the world police those organisations that deliberately use AI in unethical ways, e.g. by colluding with competitors to artificially inflate prices to the maximum they know their customers can afford to pay?
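One widely used way to put numbers on the hiring-bias concern above is the disparate impact ratio (the “four-fifths rule” from fairness auditing, under which a ratio below 0.8 is treated as evidence of adverse impact). The sketch below is a hypothetical illustration of that check; the group labels and counts are invented for the example and do not come from this paper.

```python
def disparate_impact(selected, total):
    """Selection rate of each group divided by the highest group's rate.

    Under the four-fifths rule used in fairness auditing, a ratio
    below 0.8 for any group suggests adverse impact worth investigating.
    """
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: rates[g] / top for g in rates}

# Hypothetical screening outcomes from an AI hiring filter.
selected = {"group_a": 48, "group_b": 21}
total = {"group_a": 100, "group_b": 70}
print(disparate_impact(selected, total))
```

A check like this only measures one narrow notion of fairness; it cannot by itself answer the broader structural questions the paragraph above raises.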

Then there are the broader business questions around the sustainable use of AI: What are the consequences of the increased power required to operate it?16 Will our warming planet create an uneven playing field, with some nations unable to source the water required to cool AI servers? And how will regulations, which could differ widely across regions, deal with the resulting carbon emissions and the broader question of how organisations can use AI? How will AI, relying on huge datasets of personal information to function effectively, impact personal privacy rights, and will that create an internal power struggle for organisations looking to maximise returns on AI investments?17 How will a loss of human input in important decision-making processes impact sectors such as healthcare, transportation and the military?

Related research

ISF Podcast

The Case for Social Responsibility in AI

Steve Durbin and Nicholas Witchell

ISF video

Scary Smart: The future of technology, AI, robotics and ethics

Mo Gawdat

Governance of AI

Read the news and it quickly becomes clear that both governments and organisations realise the importance of reaching an agreement on how to govern AI. What is less clear is how exactly they plan to come to such an agreement. The real challenge lies in taking the next step: defining good practices and answering the question “Is it even possible to regulate AI?” Standardisation and monitoring of different cross-border activities run into difficulties due to the gaps between international and national regulatory regimes. What if those activities not only happen in cyberspace but also have the potential to be conducted by an entity whose identity is blurred? If a comprehensive AI governance framework is developed, how achievable is it to implement it worldwide given developmental inequalities, as well as the lack of a single oversight body? Will regulatory differences between nations lead to dubious practices in some?

At the organisational level, should it be the board, IT or security teams that take on the responsibility for the strong governance required to safeguard critical infrastructure, protect sensitive information and mitigate risks associated with malicious use? Or is it perhaps a centralised AI team that will foster collaboration between business leadership, security teams and AI developers? Should a comprehensive governance framework be established incorporating clear guidelines, standards and procedures for secure AI development and deployment, that encourages the sharing of information and threat intelligence and promotes good practices in AI security?

AI’s impact on the cyber security job market

Many employees may have considered how AI will impact their future career prospects and job security. Is it friend or foe? While the answer is probably ‘both’ and ‘neither’ at the same time, this question is particularly pertinent to the cyber security sector. Will the ever-increasing presence of AI in everyday life create new jobs related to its maintenance and automation? Or will its accelerating efficiency eliminate the need for a large proportion of the current workforce? A tentative timeline presents yet another riddle: are these changes months, years, or decades away? And, when change arrives, will it completely remove junior roles, dramatically reducing the availability of experienced personnel further down the line?

“It’s only cool if you can control it.”

– ISF Member


Information Security Forum

Better Cybersecurity

© 2025 Information Security Forum Limited