Microsoft's AI Agents in Windows: What Businesses Need to Know About Copilot Actions

fahd.zafar • November 24, 2025

Microsoft has introduced experimental AI agent capabilities into Windows through Copilot Actions and agent workspaces, features designed to automate everyday tasks like organising files, scheduling meetings, and sending emails.


However, the announcement comes with significant security warnings that business leaders and IT administrators must understand before enabling these capabilities.

What Are Agent Workspaces?

An agent workspace is a separate, contained space in Windows where AI agents can access apps and files to complete tasks in the background whilst users continue working. Each agent operates using its own distinct account, separate from personal user accounts, establishing clear boundaries between agent activity and user activity.


Agents typically receive access to known folders or specific shared folders, with permissions reflected in folder access control settings. Each agent has its own workspace and permissions—what one agent accesses doesn't automatically apply to others. For this initial preview release, agent workspaces run in separate Windows sessions, allowing agents to interact with apps parallel to user sessions.


When running in agent workspace, the AI has access to six commonly used folders in your user profile directory: Documents, Downloads, Desktop, Music, Pictures, and Videos. The feature is off by default and can only be enabled by administrator users.



Microsoft's Security Warning

The company's announcement includes a critical caveat that organisations must take seriously. Microsoft explicitly states:

"We recommend that you only enable this feature if you understand the security implications outlined on this page."


Why the warning? Microsoft acknowledges that "AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation."


These aren't theoretical risks. Prompt injection attacks have been demonstrated repeatedly by security researchers, and AI hallucinations remain an unsolved problem across all large language models. The combination of these vulnerabilities with system-level access creates genuine security exposure.



Security Principles and Goals

Microsoft outlines ambitious security principles for agentic features:


Non-repudiation: All agent actions must be observable and distinguishable from user actions, ensuring clear audit trails.

Confidentiality: Agents handling protected user data must meet or exceed existing security and privacy standards.

Authorisation: Users must approve all queries for user data and all actions taken by agents.


Additional principles include that agents should operate under least privilege, never exceeding the permissions of the initiating user. Authorised agent privileges should be granular, specific, and time-bound. Agents should only access sensitive information in specific, user-authorised contexts.


Microsoft emphasises that "security in this context is not a one-time feature—it's a continuous commitment. As agentic features evolve, so will our security controls, adapting to each phase of rollout from preview to broad availability."



What IT Administrators Need to Know

The experimental agentic feature setting is off by default and can only be enabled by administrator accounts. Once enabled, it applies to all users on the device including other administrators and standard users.


IT administrators will be able to enable or disable agent workspace at both account and device levels using Intune or other Mobile Device Management applications. This provides central control over which users and devices can access these features.


When running in agent workspace, the AI has access to apps available to all users by default. To limit access, organisations can install apps for specific users or specifically for agents. Agent accounts have access to any folders that all authenticated users can access, such as public user profiles.



The Risk-Benefit Calculation for Businesses

Agentic AI capabilities offer genuine productivity potential. Automating routine tasks, complex file organisation, and multi-step workflows could deliver measurable efficiency gains. However, businesses must weigh these benefits against the acknowledged security risks.


The challenge for organisations is that the same vulnerabilities Microsoft warns about—hallucinations and prompt injection attacks—have proved impossible to fully prevent across the AI industry. Workarounds exist for specific discovered vulnerabilities, but comprehensive solutions remain elusive.


For businesses, this creates difficult decisions. Do you enable powerful productivity features that Microsoft explicitly warns carry security risks? How do you determine which users have sufficient experience to understand and mitigate these risks? What monitoring and auditing processes ensure agents aren't compromised?



Managing the Feature in Your Organisation

If your organisation chooses to enable these experimental features, several practices can help manage risk:


Restrict access carefully. Only enable agent workspaces for users who genuinely need the capabilities and have appropriate technical understanding of the risks.

Use centralised management. Deploy controls through Intune or MDM to maintain visibility into which devices and accounts have agent features enabled.

Monitor agent activity. Take advantage of the separate agent accounts to monitor and audit agent behaviour distinctly from user activity.

Limit file access. Consider restricting agent access to sensitive folders by carefully managing which folders agents can reach.

Maintain awareness. As Microsoft states that security controls will evolve as the feature matures, stay informed about updates and new guidance.



Looking Forward

Microsoft describes this as a phased approach to delivering agentic capabilities, starting with limited access to gather feedback and strengthen foundational security. The company acknowledges that "agent development and AI-related security continue to be a fast-moving field of research."


The reality is that businesses will increasingly encounter AI agent capabilities across various platforms and tools. Microsoft's relatively transparent approach to communicating risks provides a framework for how organisations should evaluate these features—not just from Microsoft, but from any vendor introducing autonomous AI capabilities.


The key question isn't whether AI agents will become part of business computing—they likely will. The question is whether organisations will deploy them thoughtfully, with appropriate security controls and realistic understanding of current limitations, or whether productivity promises will outweigh security caution.



Navigate AI Security Challenges with Expert Guidance

At Altiatech, we help organisations evaluate and implement emerging technologies including AI capabilities whilst maintaining robust security postures. Our cybersecurity and IT infrastructure specialists can assess whether features like AI agents align with your security requirements and help develop appropriate policies and controls.


From security assessments and risk evaluation to ongoing monitoring and incident response planning, we provide the expertise needed to make informed decisions about AI adoption in your business environment.


Get in touch:

📧 Email: innovate@altiatech.com
📞 Phone (UK): +44 (0)330 332 5482


Innovation with security. Technology with control.

November 28, 2025
A threat group known as Scattered Lapsus$ Hunters is targeting Zendesk users through a sophisticated campaign involving fake support sites and weaponised helpdesk tickets, according to security researchers at ReliaQuest. The operation represents an evolution in how cybercriminals exploit trust in enterprise SaaS platforms.
November 28, 2025
Amazon Web Services has launched a new feature allowing customers to make DNS changes within 60 minutes during service disruptions in its US East (N. Virginia) region. The announcement tacitly acknowledges what many have long observed: AWS's largest and most critical region has a reliability problem.
November 28, 2025
A Scottish council remains unable to fully restore critical systems two years after a devastating ransomware attack, highlighting the long-term consequences of inadequate cybersecurity preparation and the challenges facing resource-constrained local authorities.  Comhairle nan Eilean Siar, serving Scotland's Western Isles, suffered a ransomware attack in November 2023 that required extensive system reconstruction. According to a report published by Scotland's Accounts Commission, several systems remain unrestored even now, with large data volumes slowing the digital recovery process.
November 25, 2025
The Cybersecurity and Infrastructure Security Agency has issued an alert warning that multiple cyber threat actors are actively leveraging commercial spyware to target users of mobile messaging applications including Signal and WhatsApp. The sophisticated campaigns use advanced social engineering and exploit techniques to compromise victims' devices and gain unauthorized access to their communications.
November 17, 2025
Anthropic has disclosed the first documented case of a large-scale cyberattack executed with minimal human intervention, marking a significant escalation in AI-enabled cyber threats. The campaign, attributed with high confidence to a Chinese state-sponsored group, demonstrates how rapidly AI capabilities are being weaponised for espionage operations.
November 14, 2025
Microsoft has unveiled its first "AI superfactory" - a revolutionary approach to cloud infrastructure that connects multiple datacentres across vast distances to function as a single, unified AI training system. The innovation marks a significant shift in how hyperscale computing infrastructure can be architected.
By fahd.zafar November 14, 2025
The UK's National Savings & Investments bank has spectacularly exceeded its digital transformation budget by £1.3 billion whilst running four years behind schedule, according to a damning National Audit Office report. The programme's failures illustrate how ambitious technology projects collapse under procurement weaknesses, underestimated complexity, and insufficient expertise.
November 7, 2025
For the first time in UK history, a cyberattack has caused sufficient damage to impact the nation's GDP growth. The Bank of England has cited the Jaguar Land Rover breach as a contributing factor to the country's slower-than-expected economic performance, marking a watershed moment in understanding cyber threats as macroeconomic risks.
November 6, 2025
Marks & Spencer has revealed the full financial impact of its April 2025 cyberattack, with total costs reaching £136 million and profits plummeting by more than half. The incident demonstrates how a single cyber breach can devastate even large retailers' financial performance and operational capabilities. 
November 5, 2025
Police forces in England and Wales spend approximately £2 billion annually on technology, with 97% dedicated solely to maintaining legacy systems. This leaves almost nothing for innovation, artificial intelligence, or the service transformation needed to improve policing productivity.