Microsoft has announced the upcoming availability of a new artificial intelligence (AI) capability within its cyber security offering that it claims will “dramatically increase the reach, speed and effectiveness” of its customers’ security teams.
The latest fruit of Microsoft’s multibillion-dollar bet on ChatGPT developer OpenAI, and currently available to some customers through a private preview, the Security Copilot feature has been trained on trillions of networking and security data points, blending Microsoft’s massive threat intelligence footprint with cutting-edge security expertise.
Security Copilot joins a number of other recently announced Copilot-branded options pitched at users of Microsoft 365 tools such as Office, Outlook and Teams. Redmond believes it can differentiate its AI from others by using it to support humans rather than replace them, hence the name.
“We’re moving from autopilot to copilot,” CEO Satya Nadella said earlier this month. “As we build this next generation of AI, we made a conscious design choice to put the human at the centre of the product. Today is the start of the next step in this journey, with powerful foundation models and capable copilots accessible via the most universal interface – natural language – which will radically transform how computers help us think, plan and act.”
Rather than helping office workers write documents or compose emails, Security Copilot is designed to give defenders a leg up, helping them identify and respond to threats far more quickly.
“Today the odds remain stacked against cyber security professionals. Too often, they fight an asymmetric battle against relentless and sophisticated attackers,” said Vasu Jakkal, corporate vice-president of Microsoft Security.
“With Security Copilot, we are shifting the balance of power into our favour. Security Copilot is the first and only generative AI security product enabling defenders to move at the speed and scale of AI.”
Security Copilot will also help to address the ongoing shortage of skilled cyber security professionals by augmenting and amplifying the capabilities of the existing workforce: summarising and making sense of threat intelligence data, catching what others might miss, prioritising alerts and incidents, and recommending what actions to take. Microsoft claimed the feature would imbue even the smallest security teams with the skills and abilities of an enterprise-grade security operations centre (SOC).
As is to be expected of an AI system, Security Copilot will also be continuously learning, ingesting the latest data on threat actors and their tactics, techniques and procedures (TTPs), with access to the most advanced OpenAI models.
“Advancing the state of security requires both people and technology – human ingenuity paired with the most advanced tools that help apply human expertise at speed and scale,” said Charlie Bell, executive vice-president of Microsoft Security.
“With Security Copilot we are building a future where every defender is empowered with the tools and technologies necessary to make the world a safer place.”
Ethics cutbacks
Microsoft’s foray into AI-backed security comes just weeks after the organisation drew criticism for its decision to cut its AI ethics team as part of a wider job-cutting exercise.
At the time, some speculated that the move was made because the team was slowing down innovation, while others suggested it was because Microsoft wanted to hand off responsibility for AI ethics to OpenAI.
In a statement provided to Computer Weekly’s sister title, TechTarget Enterprise AI, Redmond said that it remained committed to the safe, responsible development and design of AI products and experiences, pointing out that it had, among other things, increased the scale and scope of work of its Office of Responsible AI.