TL;DR: Can the real bot please stand up?
Bots and AI meeting assistants are great for productivity, but left unattended they are a potential security nightmare. Even without malicious intent, they can raise all kinds of red flags, from data residency concerns to the loss of legal confidentiality. Bots can pose a bigger liability than you might realize. Reclaim control by hardening your Teams Lobby settings, training your “Human Firewall,” and using advanced monitoring to see exactly which bots are joining your organization’s calls.
The bots within our ranks
Are bots taking over our meetings? And if so, what are they doing?
More and more bots are being introduced that add value to meetings and calls: native Microsoft bots that perform tasks like translation, recording, or managing call queues; external tools like Otter.ai and Fireflies.ai that handle notetaking and virtual-assistant tasks; and self-built bots your organization deploys for anything from compliance monitoring and security to simple call handling.
End users will generally see these bots appear as attendees, often with names related to the task they perform (e.g. “[user’s name] note taker”) or the product they represent. That is rarely enough for users to fully understand why they are there and what that implies.
Then there is the situation where bots are not shown at all, or try to hide their identity. Most of Microsoft’s native bots, for instance, do not show up in the attendee list. That is probably not very concerning, as these are, after all, part of the standard Teams toolkit. But other bots are, or might try to be, unnoticed. Not necessarily for malicious purposes, but they can still pose risks you might not be aware of.
Hiding in Plain Sight: Bot Visibility
In theory, any bot joining a meeting has a participant ID, is shown in the participant roster, and has a video tile. In practice, however, bots can try to hide or subdue their presence. Sometimes on purpose, whether maliciously or simply to stay unobtrusive, as with compliance bots. And sometimes it’s actually unintended, as this forum post highlights. Either way, there are ways in which a bot can go undetected and not show up in the participant roster or video tiles.
The “Fred” Factor: Why AI Bots Use Human Names
Another way in which bots hide their identity is by simply taking a name that could pass for a normal person’s. Especially in larger meetings, this can go undetected. Fireflies.ai’s virtual assistant, for instance, is called ‘Fred’. If you’re organizing a larger meeting with external participants and one of them uses Fireflies.ai, you might be forgiven for not knowing that the ‘Fred’ in the lobby is not an actual person, especially if your own organization doesn’t use Fireflies.ai.
Another element of these virtual meeting assistants is that they often request access to the user’s calendar. This gives them access to every meeting invite in that calendar, each of which they will then try to join automatically, including meetings where you really might not want that: an HR meeting, for example, or a financial or otherwise confidential one. This matters all the more because such services tend to send out meeting summaries to all participants afterwards, even those who did not attend.
An example of how damaging this could be for an organization is nicely explained in this blog from MLT Aikins.
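For organizations that do sanction such an assistant, one mitigation is to gate its auto-join behavior. Here is a minimal sketch of that idea in Python; the keyword list, the meeting dictionary shape, and the `no_bots` opt-out flag are illustrative assumptions, not any vendor’s actual API:

```python
# Sketch: screen calendar invites before an AI assistant is allowed to
# auto-join. The keyword list and meeting shape are illustrative assumptions;
# real assistants expose (at best) vendor-specific controls for this.
SENSITIVE_KEYWORDS = {"hr", "salary", "confidential", "legal", "disciplinary"}

def assistant_may_join(meeting: dict) -> bool:
    """Return False for meetings the note-taking assistant should skip."""
    subject = meeting.get("subject", "").lower()
    # Naive substring match: good enough for a sketch, not for production.
    if any(keyword in subject for keyword in SENSITIVE_KEYWORDS):
        return False
    # Respect an explicit opt-out flag set by the organizer (hypothetical).
    if meeting.get("no_bots", False):
        return False
    return True

meetings = [
    {"subject": "Weekly project sync"},
    {"subject": "HR performance review - Q3"},
    {"subject": "Vendor demo", "no_bots": True},
]
allowed = [m["subject"] for m in meetings if assistant_may_join(m)]
print(allowed)  # → ['Weekly project sync']
```

The point of the sketch is the default-deny posture: sensitive meetings are excluded unless someone consciously decides otherwise, rather than the assistant joining everything it can see.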
So, what are the risks of bots in meetings in general? There are several, and not all of them involve malicious intent.
Data residency & security: Bots that collect information store that data on a server. In the case of third-party vendors, this could mean sensitive information is stored outside of the IT department’s control, which could compromise security as well as regulatory, industry, and company compliance. Worse still, when the application is a ‘free’ app, there is often no formal contract arranging data ownership, security, and residency.
Data quality & ownership: Bots interpret and process data, which means some of that data might not even be accurate. This brings us to the next risk: vendors might use the collected data to train their bots, which could mean that knowledge derived from your meetings, correctly or incorrectly interpreted by a bot, ends up assisting other customers.
Legal consequences: Apart from the obvious jurisdiction, compliance, and security risks, having bots in meetings can also have other, perhaps unexpected, legal consequences. There are some interesting articles on the implications of data collected by third-party bots during meetings, including loss of client-lawyer privilege and altered disclosure status.
Locking the doors: How to Reclaim Control of Your Meetings
Needless to say, it is important to be aware of the risks of having bots in meetings and to take steps to both monitor and safeguard their use.
1. Hardening the Meeting Entry (Lobby & Anonymous Join)
The most effective way to block unwanted bots is to ensure that they cannot enter a meeting without human intervention.
- Disable Anonymous Join: This is Microsoft’s primary recommendation. If a bot is not signed in with a trusted account, it will be blocked entirely.
- Enforce the Lobby for All Externals: Set “Who can bypass the lobby” to “People in my org”. This forces any bot invited via a guest account or external federation to wait for approval.
- Restrict “Who can admit from lobby”: If anyone can admit from the lobby, a user might accidentally let a bot in. Set this to “Only organizers and co-organizers”.
2. Verification (is this even a human?)
Microsoft recognizes the risks of unwanted bots entering meetings and has recently added options that automatically verify whether a joiner is a bot or a human before even admitting them to the lobby.
- Join Verification (CAPTCHA): Require unverified or anonymous users to complete a CAPTCHA challenge before they can even reach the lobby.
- Email Verification for Anonymous Users: Organizations with Teams Premium can require anonymous users to verify their identity via a one-time passcode (OTP) sent to their email. This effectively kills most automated third-party recording bots.
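The two checks above form a simple triage ahead of the lobby. A minimal sketch of that decision flow; the `Participant` fields are illustrative assumptions, not a Teams API:

```python
# Sketch of the pre-lobby admission triage described above. The Participant
# fields are illustrative assumptions, not an actual Teams data model.
from dataclasses import dataclass

@dataclass
class Participant:
    display_name: str
    is_authenticated: bool        # signed in with a trusted account
    passed_captcha: bool = False  # join verification challenge
    email_verified: bool = False  # one-time passcode confirmed (Teams Premium)

def triage(p: Participant) -> str:
    if p.is_authenticated:
        return "lobby"               # known account: normal lobby rules apply
    if not p.passed_captcha:
        return "challenge-captcha"   # automated bots typically fail or abandon
    if not p.email_verified:
        return "challenge-otp"       # prove control of a real mailbox
    return "lobby"

print(triage(Participant("Fred", is_authenticated=False)))  # → challenge-captcha
```

Note that even a participant who clears both challenges only reaches the lobby; a human organizer still decides who actually enters the meeting.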
3. Application Governance (Blocking the “App” itself)
Many bots join because a user has “installed” a third-party application (like a note-taker). Admins can control this at the source.
- App Permission Policies: Create a policy that blocks all third-party apps, or that only allows an explicit allowlist of AI agents your organization has specifically approved.
- Resource-Specific Consent (RSC): Ensure that bots are limited by RSC. This technical framework ensures that even if a bot is added, it can only access data in that specific meeting instance, not your broader tenant.
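To make the RSC idea concrete, here is a hedged sketch of a check over a simplified Teams app manifest. The manifest fragment mirrors the `authorization.permissions.resourceSpecific` section of the real manifest schema, and meeting/chat-scoped RSC permission names end in `.Chat`; the checker itself is illustrative, not an official tool:

```python
# Sketch: verify an app manifest requests only meeting/chat-scoped RSC
# permissions (names ending in ".Chat") rather than tenant-wide scopes.
# Manifest shape follows the "authorization" section of a Teams app manifest;
# the permission name shown is one real example.
def uses_only_rsc(manifest: dict) -> bool:
    perms = manifest.get("authorization", {}).get("permissions", {})
    rsc = perms.get("resourceSpecific", [])
    # Every declared permission must be resource-specific and meeting-scoped.
    return bool(rsc) and all(p["name"].endswith(".Chat") for p in rsc)

manifest = {
    "authorization": {
        "permissions": {
            "resourceSpecific": [
                {"name": "OnlineMeeting.ReadBasic.Chat", "type": "Application"},
            ]
        }
    }
}
print(uses_only_rsc(manifest))  # → True
```

An app passing this kind of check can read data for the specific meeting it was added to, but holds no standing permission to crawl the rest of the tenant.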
4. Detection: How to Spot a Bot in the Meeting
This one can be tricky. Most legitimate bots have a name and icon that make clear what they are and what they do. Some, however, don’t. Microsoft’s 2026 UI now makes it somewhat easier to identify these entities in the meeting roster:
- “Unverified” Label: Users/bots that haven’t passed authentication.
- “Agent” Icon: Native Microsoft 365 Agents have a distinct icon compared to human participants.
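These roster signals can also be combined into a simple triage heuristic. A sketch in Python; the participant fields loosely mirror the labels above, while the name patterns and data shape are illustrative assumptions:

```python
# Sketch: flag roster entries worth a second look. The "verified" and
# "is_agent" fields loosely mirror the roster labels described above;
# the name patterns are illustrative and deliberately naive.
import re

BOT_NAME_HINTS = re.compile(r"note\s*taker|otter|fireflies|assistant|recorder", re.I)

def suspicious(participant: dict) -> bool:
    if participant.get("is_agent"):          # native M365 Agent: declared bot,
        return False                         # visible and labelled, not hiding
    if not participant.get("verified", True):
        return True                          # "Unverified" label in the roster
    return bool(BOT_NAME_HINTS.search(participant.get("name", "")))

roster = [
    {"name": "Anna Jones", "verified": True},
    {"name": "Fred", "verified": False},              # unverified joiner
    {"name": "Anna's note taker", "verified": True},  # the name gives it away
]
print([p["name"] for p in roster if suspicious(p)])  # → ["Fred", "Anna's note taker"]
```

A heuristic like this will never be complete (a bot named ‘Fred’ that passes verification slips through), which is exactly why the human-awareness layer below still matters.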
The Human Firewall: Why Tech Settings Aren’t Enough
All of the above makes one thing clear: none of this will be watertight. Even if you restrict external access, block anonymous access, or harden the lobby restrictions, there will always be situations or users that require exceptions, based on their specific role or on the meeting context. Furthermore, users might join meetings initiated by external users where other configurations are at play. You have no control over those meetings or over which bots, deliberately or unintentionally, join those calls.
And that’s why it is crucial not to treat this as just a ‘technical’ problem to solve with settings and restrictions, but as one that requires active training and awareness from your users. Users should know how to identify a bot, be aware of the risks, and be given instructions on how to act when they do not trust the situation. Most importantly, they need to feel empowered to speak up so meeting organizers can take steps, to be cautious with information until the situation is clarified, and to report what they encountered to an administrator if they feel uncomfortable.
Visibility is Power: The Need to Move Beyond “Basic” Admin Reporting
Finally, is it enough to just rely on settings and user awareness? No. As noted, there are almost always exceptions that require active monitoring. Even legitimate bots warrant it: you want to know when things like the host change, or whether the bot is running on the right version.
Monitoring which bots are joining users’ meetings can be a challenge for admins. Microsoft’s Teams Admin Center tells you a non-native bot joined a meeting, but often just names it “bot”, which doesn’t reveal whether it was a sanctioned or unsanctioned bot. Nor does it provide any of the other details needed to establish that it’s a trusted bot. With TrueDEM, in-depth bot information is available, from name and version to the backend host they run on. This allows admins to see directly which bots joined which meeting and which backend host they utilize. For more on native and non-native meeting bot monitoring and its importance, check out Stefan Fried’s paper.
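Whatever tooling produces the data, the admin-side logic is a classification pass over exported attendance records. A sketch of that pass; the record shape loosely follows a Graph meeting attendance report but is simplified, and the sanctioned-bot list is an assumption your organization would maintain:

```python
# Sketch: sift exported attendance records for non-human joiners that basic
# reporting only labels "bot". The record shape loosely follows a Graph
# meeting attendance report, simplified for illustration; SANCTIONED_BOTS is
# an allowlist your organization would maintain.
SANCTIONED_BOTS = {"Compliance Recorder"}

def classify(record: dict) -> str:
    name = record.get("identity", {}).get("displayName", "")
    if record.get("emailAddress"):     # human accounts carry a mailbox
        return "human"
    if name in SANCTIONED_BOTS:
        return "sanctioned-bot"
    return "unsanctioned-bot"          # follow up: which app, which host?

records = [
    {"identity": {"displayName": "Anna Jones"}, "emailAddress": "anna@contoso.com"},
    {"identity": {"displayName": "Compliance Recorder"}},
    {"identity": {"displayName": "bot"}},
]
print([classify(r) for r in records])  # → ['human', 'sanctioned-bot', 'unsanctioned-bot']
```

The unsanctioned bucket is where richer telemetry (bot name, version, backend host) earns its keep: without it, every entry in that bucket is just another anonymous “bot”.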