Security leaders often track threats in code, networks, and policies. But a quieter risk is taking shape in the everyday work of teams: collaboration is getting harder even as AI use spreads across the enterprise. That tension creates openings for mistakes, shadow tools, and uncontrolled data flows. A recent Forrester study shows how this breakdown in teamwork forms and how leaders can respond before it grows.
Teamwork is central to enterprise outcomes
Forrester’s research found agreement across product, engineering, IT, and business groups: all view collaboration as a core driver of enterprise results. Teams rely on shared workspaces, chat, and visual canvases to coordinate work, and these surfaces function as living records of decisions and context.
When these environments break apart, security controls fracture with them. Information spreads across more tools, each with different permissions and visibility. Alignment slips. People repeat tasks or shift work into places security teams may not monitor.
Collaboration is failing in ways that create risk
The research shows that collaboration complexity is rising: 46% of respondents said their departments struggle to prioritize initiatives. Routine tasks pile up, and different groups interpret goals in conflicting ways. These issues signal growing operational and data risk, not just workplace friction.
When teams lack a shared view of their work, they invent their own methods to keep projects moving. Files are passed through multiple channels. Notes land in personal storage. Sensitive details shift from tool to tool. None of this happens out of bad intent; it happens because people need to keep moving and the path in front of them is undefined.
Embedding AI where teams work can reduce exposure
Many AI tools entering the enterprise are designed for individual use. They support quick tasks but do not help teams stay aligned. This leads employees to work in parallel instead of together. They move between primary tools and separate AI tools, which scatters context and weakens oversight.

The study also notes that many teams feel unprepared for AI. Skills vary, and workflows change faster than people can adjust. Employees often make independent choices about how to use AI. These choices can bypass established controls and introduce new behavior that security teams cannot see.
Decision makers see AI as a way to strengthen teamwork if it is embedded in the same places where collaboration already happens. They want tools that understand project context without requiring teams to move information into new locations.
When AI is built into shared environments, teams spend less time switching tools. Work stays in governed spaces. Information moves less, and security can apply consistent rules. Teams gain reliable handoffs and alignment, which reduces the unplanned work that often leads to risky shortcuts.
“To be effective, AI should operate where teams work: supporting collaboration in the flow of work, informing decisions with full team context, and driving towards results faster. Embedding AI where teamwork happens achieves more than just improving productivity, it enables team- and organization-wide collaboration, innovation, and transformation,” said Andrey Khusid, CEO of Miro.
What security leaders should take from the findings
Security and risk teams do not need to own collaboration strategy, but they do need to shape it. The research shows that AI can increase exposure when it reinforces individual workflows instead of team workflows. It also shows that well-integrated AI can support secure collaboration when work stays inside governed environments.
A few actions follow:
- Join early discussions about AI adoption.
- Push for tools that support team workflows instead of individual helpers scattered across the stack.
- Review how major groups exchange information and identify where work splinters across tools. Those splinters often carry unmanaged data.
- Guide education efforts so employees understand how AI should be used inside shared, governed environments.
