
Search Results


  • Cyber Crime: A Focus on Financial Gain (Human-Operated Ransomware, LockBit 2.0, and Crypto Mining Malware)

In recent years, the landscape of cybercrime has changed drastically, evolving from random attacks to highly organized, human-operated campaigns. Unlike traditional ransomware attacks, which were often opportunistic, human-operated ransomware is carefully orchestrated by groups that target specific organizations with a high level of planning and precision.

    1. Human-Operated Ransomware: A New Level of Targeted Attack

    In the early days of ransomware, attackers often used "scattershot" approaches like phishing emails, aiming to infect as many victims as possible. Some ransomware groups now conduct targeted attacks instead, often called "human-operated ransomware." Rather than relying on random infections, attackers thoroughly research and choose their victims, gain access to their networks, and strategically deploy ransomware when it is likely to cause the most damage.

    Key Steps in a Human-Operated Ransomware Attack:
    - Initial Compromise: Attackers typically gain entry through straightforward means: phishing emails with malicious attachments, weak or reused credentials, or exploitation of internet-facing vulnerabilities (such as exposed RDP).
    - Establishing Persistence: Once inside, attackers often use tools like Cobalt Strike (a penetration testing tool frequently abused by attackers) to maintain access, or they may install web shells (programs that allow remote access) to give them backdoor entry whenever they need it.
    - Privilege Escalation: Attackers then work to gain more control over the network. They may look for saved passwords or use tools like Mimikatz to steal login credentials. Tools like BloodHound and PingCastle are often used to map Active Directory environments and find paths to escalate privileges.
    - Reconnaissance and Data Collection: Before encrypting data, attackers often steal sensitive information. This tactic, called "double extortion," lets attackers threaten to release the stolen data if the ransom is not paid. Cobalt Strike scripts, nslookup, and other network tools are used to locate and gather valuable data.
    - Lateral Movement: Attackers spread across the network to infect more devices using tools like Cobalt Strike, Metasploit, and sometimes older exploits like EternalBlue (which was used in the WannaCry attack). They may also tunnel RDP connections using ngrok or similar services.
    - Execution of Objectives: After gaining full control over the domain, attackers reach their final objectives:
      - Data Exfiltration: Using FTP, WinSCP, or cloud file-hosting services, they steal sensitive data.
      - Ransomware Deployment: Ransomware is deployed across the network via tools like WMIC and PsExec, and sometimes manually. This strategic deployment often occurs at a time that maximizes impact, such as during off-hours or holidays.
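    Many of these steps leave traces in standard Windows logs. As a hedged illustration (not taken from the article itself), PsExec-style lateral movement typically creates a remote service, which generates System Event ID 7045; a defender could triage recent service installs like this. The search patterns are examples only and should be tuned to your environment:

```powershell
# Hedged illustration: triaging recent service installs (Event ID 7045), which
# PsExec-style lateral movement often produces. Patterns are examples only.
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 7045 } -MaxEvents 500 |
    Where-Object { $_.Message -match 'PSEXESVC|cmd\.exe|powershell' } |
    Select-Object TimeCreated, MachineName, Message |
    Format-List
```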
    ---------------------------------------------------------------------------------------------------------
    I have created a complete series on ransomware, from its evolution to its impact. You may already know more than I do, but who knows, you might learn something new. Kindly check it out under the Courses tab.
    --------------------------------------------------------------------------------------------------------

    2. LockBit 2.0: Ransomware-as-a-Service with a Double-Extortion Twist

    LockBit, first seen in 2019, resurfaced in 2021 as LockBit 2.0, introducing new strategies and enhancements to ransomware deployment. LockBit 2.0 operates under a Ransomware-as-a-Service (RaaS) model, where the developers offer the ransomware to affiliates who carry out the actual attacks. When a ransom is paid, both the developer and the affiliate profit, making ransomware accessible to less technically skilled criminals.

    Key Tactics of LockBit 2.0:
    - Double Extortion: Similar to Maze, LockBit 2.0 leverages double extortion, where attackers first encrypt a victim's files and then threaten to leak the stolen data if the ransom isn't paid.
    - Affiliate Program: LockBit 2.0 actively recruits insiders within target companies to provide login credentials, such as RDP access. This insider help streamlines initial entry into networks and often bypasses basic security controls.
    - Network-Wide Distribution via GPOs: Once the attackers gain access to the domain controller, they use Group Policy Objects (GPOs) to distribute the ransomware across the entire network. This allows them to disable security tools and push LockBit 2.0 to every connected device efficiently.
    - StealBit for Data Exfiltration: LockBit 2.0 includes a built-in tool called StealBit, designed to locate and exfiltrate sensitive corporate data. This feature automates data theft, ensuring maximum leverage over the victim.
    - Rapid Encryption Techniques: LockBit 2.0 uses techniques such as multithreading and partial file encryption. These methods allow it to encrypt large amounts of data very quickly, making recovery more difficult for victims.

    ----------------------------------------------------------------------------------------------------------
    Kindly note: The LockBit ransomware group has been significantly impacted by recent law enforcement actions under "Operation Cronos," involving international agencies such as Europol, the FBI, and the UK's National Crime Agency (NCA). As of February 2024, several key pieces of LockBit infrastructure have been taken down, including their Tor sites, and a series of high-profile arrests have occurred. These operations have disrupted LockBit's network, leading to a major loss of affiliates and a tarnished reputation, as the group has been forced to duplicate victim claims to maintain credibility. Authorities have arrested multiple LockBit affiliates, including those behind large-scale ransomware attacks. Charges were filed against prominent figures associated with LockBit and affiliated groups like Evil Corp, and several LockBit members have faced sanctions in the U.S., UK, and Australia. Notably, Dmitry Yuryevich Khoroshev, allegedly the main operator of LockBit, was identified, and a reward was offered for information leading to his capture. Despite these efforts, LockBit has continued some operations, though its activity level and visibility have diminished, with some attacks attributed to the group potentially being exaggerated to mask the true impact of the takedown.
    ----------------------------------------------------------------------------------------------------------

    3. Crypto Mining Malware: Silent Profiteers

    Unlike ransomware, which is loud and disruptive, crypto mining malware works quietly in the background. This type of malware hijacks system resources to mine cryptocurrency, potentially running for extended periods without detection. While crypto mining may seem less harmful, it can still cause major issues: draining resources, slowing down systems, and increasing power costs.
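    Before looking at the two delivery models described below, here is a minimal, hedged triage sketch (not from the original article) for spotting the kind of sustained CPU drain that host-based miners tend to cause on a single Windows machine. The path patterns and thresholds are arbitrary examples:

```powershell
# Minimal sketch: list the heaviest CPU consumers and flag binaries running from
# user-writable paths, a common trait of commodity miner droppers.
Get-Process |
    Sort-Object CPU -Descending |
    Select-Object -First 10 Name, Id, CPU, Path |
    ForEach-Object {
        $flag = if ($_.Path -and $_.Path -match '\\AppData\\|\\Temp\\') { 'SUSPICIOUS PATH' } else { '' }
        '{0,-25} CPU(s)={1,10:N0}  {2}  {3}' -f $_.Name, $_.CPU, $_.Path, $flag
    }
```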
    Types of Crypto Mining Malware:
    - Browser-Based Crypto Mining: Typically implemented through JavaScript on a website, mining cryptocurrency while the user is on the site. Many sites using browser-based miners are streaming sites or content portals where users stay for extended periods, maximizing the mining time.
    - Host-Based Crypto Mining: This type behaves more like traditional malware, arriving through phishing emails or malicious downloads. Once installed, it often uses PowerShell scripts or other methods to persist on the system, ensuring it can continue mining even after the system restarts.

    Though crypto mining may not seem as destructive as ransomware, some crypto mining malware includes additional features such as worm-like spreading, password stealing, and other data theft functions. This added functionality can allow attackers to sell compromised data or escalate attacks later, making crypto mining malware a threat that goes beyond resource theft.

    ----------------------------------------------------------------------------------------------------------

    Key Takeaways: Staying Ahead of Modern Cyber Threats

    The rapid evolution of cybercrime demonstrates that organizations must adapt their security measures to meet these advanced threats. Here is a summary of key strategies for defense:
    - Enhance Network Security: Segment your network to limit attackers' lateral movement. Protect internet-facing systems with strong credentials and multi-factor authentication.
    - Monitor and Detect Early: Deploy endpoint detection and response (EDR) solutions to spot unusual activity such as lateral movement, credential dumping, or unknown tools.
    - Educate Employees: Phishing is still a major entry point for attackers. Regular training can help employees recognize and avoid phishing attempts.
    - Limit Privilege Escalation Opportunities: Use tools like BloodHound to identify and mitigate vulnerabilities in privilege management, and limit the number of users with administrative access.
    - Patch Regularly: Many ransomware attacks exploit known vulnerabilities. Keeping systems updated is one of the simplest and most effective defenses.
    - Back Up Data: Regular, secure backups are essential. They allow you to recover quickly without paying a ransom in case of a successful ransomware attack.

    ----------------------------------------------------------------------------------------------------------
    Akash Patel

  • Unified Kill Chain: An Evolution of the Cyber Kill Chain

    The Unified Kill Chain (UKC) is an evolution of earlier cyber kill chain models, addressing key limitations of traditional frameworks such as the Lockheed Martin Cyber Kill Chain and the Dell SecureWorks Cyber Kill Chain. It provides a holistic perspective on modern cyberattacks, emphasizing the complexities of advanced persistent threats (APTs) and multi-stage intrusions. By organizing an attack into three broad phases (Initial Foothold, Network Propagation, and Actions on Objectives), the Unified Kill Chain accommodates diverse threat scenarios, including insider threats and supply chain attacks.

    Limitations of Traditional Kill Chains

    The Lockheed Martin Cyber Kill Chain, introduced in 2011, remains a valuable model for understanding adversarial methods. However, its static structure reveals significant limitations in addressing modern, dynamic attack vectors:
    - Payload-Centric Approach: The traditional model assumes an external payload delivery mechanism, neglecting the rise of insider threats and supply chain attacks.
    - Lateral Movement Overlooked: Modern attackers often propagate through internal networks using techniques like credential theft and lateral movement, which are inadequately addressed in the traditional framework.
    - Inflation of Actions on Objectives: Critical attack steps, such as privilege escalation and persistence, are grouped under "Actions on Objectives," diluting their importance.

    To address these gaps, alternative frameworks such as the Unified Kill Chain were developed.

    ----------------------------------------------------------------------------------------------------------

    Phases of the Unified Kill Chain

    The UKC defines 18 attack phases, grouped into three overarching stages: In, Through, and Out.

    1. In (Initial Foothold)
    Focuses on breaching the organizational perimeter to gain initial access.
    Key phases: Reconnaissance, Resource Development, Delivery, Social Engineering, Exploitation, Persistence, Defense Evasion, Command & Control.
    Example: An attacker performs phishing (Social Engineering) to deliver malware (Exploitation) that establishes a Command & Control channel.

    2. Through (Network Propagation)
    Involves activities to escalate privileges and move laterally across the network.
    Key phases: Discovery, Privilege Escalation, Credential Access, Lateral Movement, Execution, Pivoting.
    Example: Attackers use stolen credentials (Credential Access) to escalate privileges (Privilege Escalation) and pivot to other systems.

    3. Out (Actions on Objectives)
    Covers activities for achieving the attacker's final goals, such as exfiltration or system impact.
    Key phases: Collection, Exfiltration, Impact, Objectives.
    Example: Data is exfiltrated (Exfiltration) from compromised servers, or ransomware disrupts operations (Impact).
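    One practical use of the In/Through/Out grouping is tagging incident-timeline entries by stage so remediation can be prioritized by how far an intrusion has progressed. The sketch below is illustrative only; the lookup table is built from the phase lists above and the function name is my own, not part of the UKC:

```powershell
# Minimal sketch: map a UKC phase name to its overarching stage (In/Through/Out).
$ukcStages = @{
    'In (Initial Foothold)'         = 'Reconnaissance','Resource Development','Delivery','Social Engineering','Exploitation','Persistence','Defense Evasion','Command & Control'
    'Through (Network Propagation)' = 'Discovery','Privilege Escalation','Credential Access','Lateral Movement','Execution','Pivoting'
    'Out (Actions on Objectives)'   = 'Collection','Exfiltration','Impact','Objectives'
}

function Get-UkcStage([string]$Phase) {
    foreach ($stage in $ukcStages.Keys) {
        if ($ukcStages[$stage] -contains $Phase) { return $stage }
    }
    'Unknown'
}

Get-UkcStage 'Credential Access'   # -> Through (Network Propagation)
```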
    ----------------------------------------------------------------------------------------------------------

    Structure of the Unified Kill Chain

    The Unified Kill Chain divides an attack into three phases:

    1. Initial Foothold
    This phase includes the techniques used to gain access to the target environment, encompassing reconnaissance and exploitation methods.
    Example techniques: phishing emails with malicious attachments or links; exploitation of public-facing vulnerabilities, such as Log4Shell; insider threats gaining unauthorized access using stolen credentials.
    Real-world example: In the SolarWinds attack, adversaries used a compromised update mechanism to inject malicious code into thousands of victims' environments.

    2. Network Propagation
    Once initial access is established, attackers seek to move laterally, escalate privileges, and access critical systems.
    Example techniques: credential harvesting and Pass-the-Hash attacks; exploiting trust relationships between systems, such as Active Directory misconfigurations; deployment of remote administration tools like Cobalt Strike.
    Real-world example: During the WannaCry ransomware outbreak, attackers exploited the EternalBlue vulnerability to propagate rapidly across networks.

    3. Actions on Objectives
    In this final phase, attackers accomplish their goals, such as data exfiltration, sabotage, or deploying ransomware.
    Example techniques: encrypting critical files for ransom demands; stealing sensitive data for espionage or financial gain; disrupting critical operations by destroying system backups.
    Real-world example: The NotPetya attack targeted organizations globally, encrypting data irrecoverably and causing billions in damages.

    ----------------------------------------------------------------------------------------------------------

    Unified Kill Chain vs. Traditional Kill Chain

    Next, let's compare the Unified Kill Chain with the traditional kill chain.

    ----------------------------------------------------------------------------------------------------------

    How to Use the Unified Kill Chain for Defense

    Organizations can leverage the Unified Kill Chain to strengthen their cybersecurity posture:
    - Threat Detection: Monitor logs and network activity to identify patterns consistent with Initial Foothold techniques.
    - Lateral Movement Prevention: Implement micro-segmentation and restrict unnecessary inter-system communication.
    - Incident Response: Use the framework to categorize and prioritize remediation efforts based on the attack phase.

    ----------------------------------------------------------------------------------------------------------

    Example Attack Mapped to the Unified Kill Chain

    Attack scenario: ransomware targeting a corporate network.
    - Initial Foothold: A spear-phishing email delivers a malicious macro document.
    - Network Propagation: Harvested credentials are used to move laterally via RDP and exploit SMB vulnerabilities.
    - Actions on Objectives: Files are encrypted, and a ransom note is delivered demanding cryptocurrency payment for decryption.

    ----------------------------------------------------------------------------------------------------------

    Conclusion

    The Unified Kill Chain equips organizations with a modern and robust framework for understanding and defending against complex cyberattacks. Its comprehensive, flexible, and actionable nature makes it an invaluable tool for enhancing cybersecurity resilience in an ever-evolving threat landscape. For more details, refer to the Unified Kill Chain white paper.

    Akash Patel

  • Cyber Crime: A Focus on Financial Gain (Bangladesh Bank Heist via the SWIFT Network)

    The 2016 Bangladesh Bank heist stands out as a significant digital theft in which hackers exploited the SWIFT financial messaging system to orchestrate a massive theft from Bangladesh Bank's account at the Federal Reserve Bank of New York.

    Attack Summary
    - Intrusion Method: The attackers, possibly with insider assistance, used Dridex malware to infiltrate the Bangladesh Bank's systems. This allowed them to monitor internal processes, especially around international transactions and payment operations.
    - Reconnaissance and Preparation: To gather intelligence, they installed Sysmon on systems connected to the SWIFT network, which helped them map out SWIFT's operational patterns and employee interactions with the SWIFT software.
    - Fraudulent Transactions: Using manipulated PRT files and Printer Command Language, the attackers initiated 35 fraudulent SWIFT messages, attempting to transfer $951 million. Thirty transactions were flagged and blocked by the New York Fed, but five transactions were processed, leading to a $101 million loss for Bangladesh Bank: $20 million transferred to Sri Lanka (recovered due to a typographical error) and $81 million routed to the Philippines, where $18 million was later recovered.
    - Final Losses: After partial recovery, Bangladesh Bank faced a $63 million loss. Much of this was swiftly laundered through casinos in the Philippines.

    Understanding SWIFT's Role in International Transactions

    The SWIFT network facilitates secure financial messaging between banks globally. To grasp the heist's complexity, understanding the VOSTRO/NOSTRO account setup is essential. Here's a simplified example to illustrate how SWIFT functions in an international transfer scenario:
    1. Initiation: The buyer's bank (Bangladesh Bank) receives a request to transfer a large amount, e.g., $10 million.
    2. Intermediary Use: Due to the high international transfer amount and limited access to foreign markets, the transaction involves an intermediary bank. NOSTRO and VOSTRO are accounting terms used in this setup; Bangladesh Bank maintains a VOSTRO account with the NY Fed.
    3. Transaction Flow: Bangladesh Bank instructs the NY Fed to debit its VOSTRO account and transfer the amount to the seller's bank.
    4. Transaction Completion: The NY Fed deducts the amount from the VOSTRO account and completes the transfer to the recipient bank.

    Bangladesh Bank's SWIFT Technical Architecture

    The bank's SWIFT setup involved four main components, interconnected via a VPN:
    - Core bank IT systems: handle regular banking transactions.
    - SWIFT messaging bridge: generates SWIFT messages for transactions.
    - SWIFT gateway: ensures secure connectivity between banks via SWIFT protocols.
    - Confirmation printer: provides a physical record of transaction confirmations for verification.

    Attack Execution on SWIFT Systems
    - Malware Deployment: Attackers installed malware on servers running SWIFT Alliance software, which is responsible for SWIFT message handling and validation.
    - DLL Manipulation: The malware checked active Windows processes for liboradb.dll, a crucial SWIFT component, and patched it in memory to bypass transaction validations by altering the code (a JNZ instruction).
    - Message Injection: With the patched DLL, attackers could inject unauthorized SWIFT messages into the network without triggering file integrity or signature checks, making the fake transactions appear legitimate.
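    The in-memory patch described above comes down to overwriting a two-byte conditional jump (JNZ) with two NOP (0x90) bytes so the validation routine always falls through to its success path. The buffer below is a generic illustration of that idea in a byte array; the opcodes and offset are invented for the example and are not the actual liboradb.dll bytes:

```powershell
# Illustrative only: neutralising a conditional jump in a byte buffer.
# 0x75 xx encodes JNZ rel8; 0x90 is NOP. All bytes/offsets are made up for this example.
$code = [byte[]](0x85, 0xC0,      # test eax, eax
                 0x75, 0x0E,      # jnz  +0x0e  (branch taken when the check fails)
                 0x33, 0xC0)      # xor  eax, eax
$patchOffset = 2
$code[$patchOffset]     = 0x90    # NOP
$code[$patchOffset + 1] = 0x90    # NOP: the failure branch can never be taken, so the check "succeeds"
```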
    The Bangladesh Bank Heist: The Intrusion

    During the attack, the adversaries compromised systems running the SWIFT messaging bridge software, allowing them to inject fraudulent SWIFT messages. Notably, the bank's internal IT systems were unaware of this intrusion, as the fraudulent transactions were injected directly into the SWIFT network.

    The Bangladesh Bank Heist: Zooming in on the Malware

    The malware specifically targeted the Bangladesh Bank's servers running the SWIFT Alliance software, which manages SWIFT message transactions. The software performs complex validation checks, and the malware altered it to bypass those checks. When executed on the server, the malware scanned all running processes and modules on the Windows OS, searching for the liboradb.dll file. This DLL, part of the SWIFT Alliance software, handles:
    - Reading the Alliance database path from the registry
    - Starting the database
    - Performing backup and restore functions for the database

    In processes loading liboradb.dll, the malware altered the DLL in memory by replacing a specific JNZ instruction with two NOP instructions. This bypass caused SWIFT's validation checks to always succeed, allowing counterfeit transactions to be approved. The in-memory patching allowed the attackers to avoid detection by integrity checks or digital signature validations on SWIFT's software files. With this modification, counterfeit SWIFT messages could be injected directly into the database.

    Comparing the original code with the manipulated DLL: to ensure the validation function always returned success, the JNZ instruction was removed. Instead of deleting the bytes, the malware authors replaced them with NOP (No Operation) instructions, preserving the code structure while bypassing the jump condition. This technique is common in machine code patching.

    The Bangladesh Bank Heist: The Intrusion (continued)

    The malware also intercepted SWIFT gateway confirmations, preventing them from being printed. However, when the confirmation printer malfunctioned, it failed to print any transactions, which raised suspicion. Once it was operational again, the backlog, including the injected transactions, was printed. Despite this misstep, the attackers managed to process some transactions successfully due to careful planning.

    The Bangladesh Bank Heist: The Fraud Flow

    The attackers initially injected 35 transactions totaling $951 million. Of these, 30 transactions were blocked due to the keyword "Jupiter" in the bank address, flagged by the NY Fed because of an unrelated sanction hit. Five transactions, totaling $101 million, were processed by the NY Fed. Four of these succeeded and were directed to three pre-established accounts at the Rizal Commercial Banking Corporation (RCBC) in the Philippines. One transaction was blocked due to a typo ("Shalika foundation" vs. "Shalika fandation"), prompting Deutsche Bank to request verification from Bangladesh Bank. The successful $81 million transferred to RCBC was further funneled to casino accounts, where it was withdrawn and laundered.

    The Bangladesh Bank Heist: Key Takeaways

    The Bangladesh Bank heist serves as a critical example of vulnerabilities in financial institutions and the sophisticated tactics employed by attackers. Here are some essential insights from the incident:
    - Cybersecurity Posture: The Bangladesh Bank's cybersecurity framework was alarmingly inadequate, particularly for a financial institution. Lacking network segmentation and relying on low-cost, secondhand infrastructure made it easier for attackers to infiltrate.
    - SWIFT Vulnerabilities: Although SWIFT is known for its secure environment, this heist revealed that its security is only as strong as its weakest link. The attack exploited the bank's infrastructure without directly targeting SWIFT itself. The incident motivated SWIFT to launch its Customer Security Program (CSP) to enhance the security of institutions within its network.
    - Meticulous Planning: The heist was strategically timed, taking advantage of bank holidays and off-hours when responses would be delayed. This planning allowed the attackers to avoid immediate detection.
    - Extended Network Access: Attackers had been lurking within Bangladesh Bank's network for a significant period before executing their plan. This prolonged access likely hindered the ability to identify the initial breach point, highlighting the need for improved network monitoring that could have detected the intrusion sooner.

    Cyber Crime: Notable Ransomware Families

    The evolution of ransomware has resulted in the emergence of numerous families, each with unique tactics and impact. Here are some significant ransomware variants:
    - Locky: Highly versatile, Locky can spread through exploit kits or traditional phishing emails, making it widely adaptable and popular.
    - Cerber: Known for its multifaceted approach, Cerber not only encrypts files but can also launch DDoS attacks against its victims.
    - Jigsaw: Inspired by the "Saw" movie series, Jigsaw both encrypts and exfiltrates data, increasing pressure on victims to pay the ransom.
    - Crysis & LeChiffre: Both leverage brute-force attacks against RDP to infiltrate systems, avoiding traditional phishing methods.
    - GoldenEye, Petya, & HDDCryptor: These variants don't just encrypt files; when run with admin rights, they encrypt entire hard drives, even overwriting the Master Boot Record.
    - Popcorn Time: This variant introduces a "social" twist, offering victims the decryption key for free if they successfully infect others.
    - WannaCry (WCry): Famous for its May 2017 attack, WannaCry exploited an SMB vulnerability (leaked by the ShadowBrokers) to spread across networks, impacting several large organizations.
    - NotPetya: Rising to prominence in June 2017, NotPetya combined SMB exploits with credential-stealing tools like Mimikatz, followed by lateral movement via PsExec and WMIC. Many believe its true aim was widespread disruption rather than ransom collection.
    - GandCrab: Launched in January 2018, GandCrab popularized the Ransomware-as-a-Service (RaaS) model, enabling less skilled cybercriminals to deploy ransomware. Its creators announced the end of operations on May 31, 2019.
    - Ryuk: Primarily targeting large organizations, Ryuk operators aim to control entire networks and coordinate a wide distribution of the malware, hoping for substantial ransom payouts.
    - Maze: Known for data theft, Maze often enters systems via phishing and post-compromise utility execution. Before encryption, it exfiltrates data, threatening public exposure if the victim refuses to pay.

    If you want to learn more about the bank heist, check the link below:
    https://www.niceideas.ch/roller2/badtrash/entry/deciphering-the-bengladesh-bank-heist

    Conclusion:

    The Bangladesh Bank heist and the evolution of ransomware attacks provide crucial lessons for organizations, particularly in the financial and critical infrastructure sectors.
    The Bangladesh Bank incident highlighted how vulnerabilities in basic cybersecurity practices, such as poor network segmentation, outdated infrastructure, and lack of proactive monitoring, can expose even the most secure systems, like SWIFT, to indirect threats. This event spurred initiatives like the SWIFT Customer Security Program (CSP), underscoring that security must be holistic, addressing even the weakest links.

    Akash Patel

  • Cyber Crime: A Focus on Financial Gain (Zeus Trojan, Emotet Trojan, Carbanak)

    Monetary Gain as the Core Driver of Cybercrime

    Cyber criminals are motivated by financial profit, making their targets somewhat predictable: they go where the money is. These attackers prefer low-effort, high-reward methods and often avoid challenging targets. A classic saying summarizes their approach: "You don't have to be the fastest; just don't be the slowest."

    Common Attack Techniques in Financial Cybercrime

    1. Online Banking Trojans
    Banking Trojans target online banking users, aiming for mass infections and small-value thefts. Notable examples include:
    - Zeus, Citadel, Emotet, and Dridex: These Trojans infect users' devices to steal small amounts of money from each infected account.
    - POS and ATM Malware: Tailored malware targeting point-of-sale systems and ATMs to steal data and cash.

    2. Advanced Attacks Against Financial Institutions
    Criminals also target banks directly, infecting business users involved in handling large fund transfers:
    - Carbanak Attack (2015): Cybercriminals infiltrated bank networks, learned fund transfer procedures, and stole millions.
    - Bangladesh Bank Heist (2016): Attackers exploited the SWIFT system, resulting in an attempted theft of $951 million.

    3. Targeted Ransomware
    Since 2015, ransomware has surged, targeting any entity that values its data:
    - Victims: From individuals to corporations and government bodies, anyone with data worth protecting is a potential target if they're willing to pay to retrieve it.

    Key Online Banking Trojans

    Zeus Trojan: The "King" of Banking Malware
    - Overview: Zeus, a versatile Trojan, performs various attacks, including keylogging and "man-in-the-browser" (MitB) attacks, which intercept and manipulate data in a user's browser.
    - Tech Support Scams: Zeus also supported fake virus warnings, leading users to pay for fraudulent antivirus services.
    - Open-Source Adaptation: In 2011, Zeus's source code was leaked, giving rise to many new variants like Citadel.
    - ZitMo (Zeus-in-the-Mobile): This mobile version intercepts authentication codes to facilitate fraudulent transactions.

    Emotet Trojan: Evolving Financial Malware
    - First Identified (2014): Initially, Emotet bypassed security controls to steal banking credentials, later evolving with features like self-propagation through email.
    - Infection via Spam: Emotet spreads via email with malicious Office documents, often disguised as invoices or delivery notices.
    - Notable Attack (2019): In Lake City, Florida, Emotet infected the city's network, later dropping TrickBot and leading to Ryuk ransomware deployment, which resulted in a $460,000 ransom payment.

    Carbanak: The First APT Against Banks
    - Discovery (2015): Carbanak, an APT (Advanced Persistent Threat) campaign, targeted financial institutions, amassing $500 million through fraudulent transactions.
    - Attack Method: Phishing emails with malicious attachments led to malware installation, allowing remote control and surveillance of bank operations.
    - Techniques: The Carbanak gang learned banking procedures by recording screens and keystrokes, enabling them to conduct transactions themselves.
    - Cash-Out Techniques: These included programming ATMs to dispense cash on command, transferring funds to mule accounts, manipulating the SWIFT network, and creating fake bank accounts.
    - Summary of Financial Losses: Carbanak alone caused losses of up to $10 million per institution, potentially totaling $1 billion across all affected banks.

    In the next article we will look at the Bangladesh Bank heist via the SWIFT network in depth.
Until then, stay safe and keep learning!

  • Source of Logs in Azure (P3: NSG/Storage Account Logs): A Comprehensive Guide for Incident Response

    Let's talk about the third log category: resource logs.

    Azure offers a variety of logging resources to support incident response, monitoring, and security analytics. Two key components are Network Security Group (NSG) Flow Logs and Traffic Analytics, essential tools for analyzing network activity and identifying potential security incidents in your Azure environment.
    https://learn.microsoft.com/en-us/azure/azure-monitor/reference/logs-index

    Key Components of Azure Network Security

    Network Security Groups (NSG): NSGs control network traffic flow to and from Azure resources through security rules. Rules specify the source, destination, port, and protocol, and either allow or deny traffic. Rules are prioritized numerically, with lower numbers having higher priority.
    https://learn.microsoft.com/en-us/azure/network-watcher/nsg-flow-logs-overview

    NSG Flow Logs: Flow logs capture network activity at the transport layer (Layer 4) and are a vital resource for tracking and analyzing network traffic. They include:
    - Source and destination IP, ports, and protocol: this 5-tuple information helps identify connections and patterns.
    - Traffic decision (allow or deny): specifies whether traffic was permitted or blocked.
    - Logging frequency: flow logs are captured every minute.
    - Storage: logs are stored in JSON format, can be retained for up to a year, and can be configured to stream to Log Analytics or an Event Hub for SIEM integration.

    Note: NSG flow logs are enabled through the Network Watcher service, which must be enabled for each region in use.

    NSG Flow Log Configuration

    To enable NSG flow logs:
    1. Enable Network Watcher: set it up in each Azure region where NSG monitoring is needed.
    2. Register the Microsoft.Insights provider: the Insights provider enables log capture and must be registered for each subscription.
    3. Enable NSG flow logs: use version 2 for enhanced details, including throughput information.
    https://learn.microsoft.com/en-us/azure/network-watcher/nsg-flow-logs-tutorial
    https://learn.microsoft.com/en-us/azure/network-watcher/traffic-analytics

    ---------------------------------------------------------------------------------------------------------

    Traffic Analytics

    Traffic Analytics is a powerful tool that enhances NSG flow logs by providing a visual representation of, and deeper insights into, network activity. Using a Log Analytics workspace, it allows organizations to:
    - Visualize network activity: easily monitor traffic patterns across subscriptions.
    - Identify security threats: detect unusual traffic patterns that could signify attacks or unauthorized access.
    - Optimize network deployment: analyze traffic flows to adjust resource configurations for efficiency.
    - Pinpoint misconfigurations: quickly identify and correct settings that might expose resources to risk.
    https://learn.microsoft.com/en-us/azure/network-watcher/traffic-analytics

    Setup: Traffic Analytics is configured via Network Watcher and requires the NSG flow logs to be sent to a Log Analytics workspace.

    ---------------------------------------------------------------------------------------------------------

    Practical Applications in Incident Response and Forensics

    For incident response, NSG flow logs and Traffic Analytics provide a detailed view of Azure network activity, allowing you to:
    - Track unusual or unauthorized traffic patterns.
    - Quickly spot and investigate potential lateral movement within the network.
    - Assess security posture by reviewing allowed and denied traffic flows, helping ensure configurations align with security policies.
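    When the JSON flow logs are pulled from the storage account during an investigation, they can also be parsed directly. The following is a minimal sketch; the file path is a placeholder and the field positions assume the version 2 tuple layout described above, so verify against your own logs:

```powershell
# Minimal sketch: extracting denied flows from an NSG flow log (v2) JSON file
# downloaded from the insights-logs-networksecuritygroupflowevent container.
$log = Get-Content 'C:\cases\nsg-flowlog.json' -Raw | ConvertFrom-Json
foreach ($record in $log.records) {
    foreach ($rule in $record.properties.flows) {
        foreach ($group in $rule.flows) {
            foreach ($tuple in $group.flowTuples) {
                # v2 tuple: time,srcIP,dstIP,srcPort,dstPort,protocol,direction,decision,...
                $f = $tuple -split ','
                if ($f[7] -eq 'D') {   # denied traffic only
                    '{0}  {1}:{2} -> {3}:{4}  rule={5}' -f $f[0], $f[1], $f[3], $f[2], $f[4], $rule.rule
                }
            }
        }
    }
}
```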
    ---------------------------------------------------------------------------------------------------------

    Now let's talk about the fourth log category: storage account logs.

    In Azure, storage accounts are crucial resources for storing and managing data, but they require specific configurations to secure access and enable effective monitoring through logs. Here's a breakdown of key practices for setting up and securing storage accounts in Azure.

    Enabling Storage Account Logs

    Azure does not enable logging for storage accounts by default, but you can enable logs through two main options:
    - Diagnostic settings – preview: this is the preferred option, offering granular logging settings. Logs can be configured for each data type (blob, queue, table, and file storage) and sent to various destinations such as a Log Analytics workspace, another storage account, an Event Hub, or a partner solution.
    - Diagnostic settings – classic: an older option with limited customization compared to the preview settings.

    Logging categories: logs can capture Read, Write, and Delete operations. For security and forensic purposes, it's especially important to enable the StorageRead log to track data access, as this can help detect data exfiltration attempts (e.g., when sensitive data is downloaded from a blob).

    Key Logging Considerations for Security
    - Data exfiltration tracking: monitoring Read operations is critical for detecting unauthorized data access. Filtering for specific operations, such as GetBlob, allows you to identify potential data exfiltration activities.
    - Microsoft threat matrix: Azure's threat matrix for storage, based on the MITRE ATT&CK framework, highlights data exfiltration as a significant risk. Configuring the relevant logs helps mitigate data theft.
    https://www.microsoft.com/en-us/security/blog/2021/04/08/threat-matrix-for-storage/

    ---------------------------------------------------------------------------------------------------------

    Storage Account Access Controls

    Access to storage accounts can be configured at multiple levels:
    - Account level: overall access to the storage account itself.
    - Data level: specific containers, file shares, queues, or tables.
    - Blob level: individual blob (object) access, allowing the most restrictive control.

    - Access keys: each storage account comes with two access keys. Regular rotation of these keys is highly recommended to maintain security.
    - Shared Access Signatures (SAS): SAS tokens allow restricted access to resources for a limited time and are a safer alternative to account keys, which grant broader access. SAS tokens can be scoped down to individual blobs for more restrictive control.
    - Public access: avoid public access configurations unless absolutely necessary, as they can expose sensitive data to unauthorized users.

    ---------------------------------------------------------------------------------------------------------

    Internet Access and Network Security for Storage Accounts

    By default, Azure storage accounts are accessible over the internet, which poses security risks:
    - Global access: storage accounts exist in a global namespace, making them accessible worldwide via a URL (e.g., https://mystorageaccount.blob.core.windows.net). Restricting access to specific networks or enabling a private endpoint is recommended to limit exposure.
    - Private endpoints and Azure Private Link: for enhanced security, private endpoints can be used to connect securely to a storage account via Azure Private Link. This setup requires advance planning but significantly reduces the risk of unauthorized internet access.
    - Network Security Groups (NSG): although NSGs do not directly control storage account access, securing the virtual networks and subnets associated with storage accounts is essential.

    Best Practices for Incident Response and Forensics

    For effective incident response:
    - Enable and monitor diagnostic logs for Read operations to detect data exfiltration.
    - Regularly review access control configurations to ensure minimal exposure.
    - Use private endpoints and avoid public access settings to minimize risk from the internet.

    These configurations and controls enhance Azure storage security, protecting sensitive data from unauthorized access and improving overall network resilience.

    ---------------------------------------------------------------------------------------------------------

    In Azure, protecting against data exfiltration from storage accounts requires a layered approach, involving strict control over key and SAS token generation, careful monitoring of access patterns, and policies that enforce logging for audit and response purposes. Here's a detailed breakdown:

    Data Exfiltration Prevention and Monitoring

    Key and SAS Management

    Key generation: access keys and SAS tokens are critical for accessing data in storage accounts and can be generated through various methods:
    - Azure portal: provides an intuitive UI for key generation and monitoring.
    - PowerShell and CLI: useful for scripting automated key management tasks.
    - Graph API: suitable for integrating key management into custom applications or workflows.

    For example:
    - Access keys: Azure generates two access keys per storage account to allow for seamless key rotation.
    - Shared Access Signatures (SAS): SAS tokens can be generated at different levels (blob, file service, queue, and table), granting temporary, limited access. Generating SAS tokens at the most granular level, such as for individual blobs, reduces the risk of misuse.

    Monitoring Key Enumeration

    To detect potential data exfiltration, look for operations that indicate credential enumeration. Any instance of "operationName": "MICROSOFT.STORAGE/STORAGEACCOUNTS/LISTKEYS/ACTION" in the logs indicates that a principal has listed the account keys. This is a red flag, as unauthorized access to these keys could enable data exfiltration.

    Configuring Applications for Secure Access

    Once a threat actor obtains storage credentials, it becomes straightforward to access and exfiltrate data through applications like Azure Storage Explorer. This tool allows quick configuration using access keys or SAS tokens, so it's vital to:
    - Limit key distribution: only authorized users should have access to SAS tokens or keys, ideally with restricted permissions and limited expiry.
    - Enable StorageRead logs: the StorageRead log captures read activities, providing visibility into data access. If this log isn't enabled, data exfiltration activity goes undetected.
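    As a hedged example of hunting for the listKeys operation described above, the sketch below pulls recent activity-log entries with Az PowerShell. Cmdlet output shapes vary slightly between module versions, so treat the property names as a starting point rather than a definitive recipe:

```powershell
# Rough sketch: hunting for storage key enumeration in the subscription activity log.
# Requires Az.Monitor and an authenticated session (Connect-AzAccount). Depending on the
# module version, the operation may be a plain string or a nested value; adjust as needed.
Get-AzActivityLog -StartTime (Get-Date).AddDays(-7) |
    Where-Object { "$($_.OperationName)" -match 'listKeys' -or "$($_.OperationNameValue)" -match 'listKeys' } |
    Select-Object EventTimestamp, Caller, ResourceId |
    Sort-Object EventTimestamp -Descending
```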
    Automating Log Enabling with Policies

    For organizations with extensive storage account usage, enabling StorageRead logs on each account individually can be infeasible. To streamline this, you can:
    - Create a policy for storage logs: set a policy at the management group or subscription level to automatically enable logs for all current and future storage accounts.
    - Predefined policies: Azure offers several predefined policies, but currently none enforce storage account logging by default.
    - Custom policy: if needed, a custom policy can be created (e.g., to enable StorageRead logging and direct logs to an Event Hub, a Log Analytics workspace, or other storage). Such a policy helps ensure storage accounts remain compliant with logging requirements.

    Policy constraints and configuration:
    - Regional limitation: when configuring a policy to send logs to an Event Hub, both the Event Hub and the storage account must be in the same region. To capture logs across multiple regions, create corresponding Event Hubs.
    - Flexible destinations: customize the policy to send logs to various destinations, such as Log Analytics or a storage account, depending on organizational needs.

    ---------------------------------------------------------------------------------------------------------

    We will continue this in the next blog. Until then, stay safe and keep learning.

    Akash Patel

    ----------------------------------------------------------------------------------------------------------

    Special Thanks (Iqra)

    I would like to extend my heartfelt gratitude to one of my dearest colleagues, a Microsoft Certified Trainer, for her invaluable assistance in creating these articles. Without her support, this would not have been possible. Thank you so much for your time, expertise, and dedication!
    https://www.linkedin.com/in/iqrabintishafi/
    -------------------------------------------------------------------------------------------------------------

  • Source of Logs in Azure (P2: Tenant/Subscription Logs): A Comprehensive Guide for Incident Response

    While the Log Analytics workspace is an excellent tool for monitoring and analyzing logs in Azure, storing logs in a storage account provides a more cost-effective and flexible solution for long-term retention and external access. This setup allows organizations to store logs for extended periods and export them for integration with other tools or services.

    Why Export Logs to a Storage Account?

    There are several benefits to exporting tenant logs and other Azure logs to a storage account:
    - Long-term retention: you can define a retention policy to keep logs for months or years, depending on compliance and operational requirements.
    - Cost efficiency: compared to storing everything in a Log Analytics workspace, which is more costly for extensive data, storage accounts offer a lower-cost alternative for long-term log retention.
    - Accessibility: logs stored in a storage account can be accessed through APIs or via tools like Azure Storage Explorer, allowing easy download, transfer, and external analysis.

    However, each organization must balance storage needs with costs, as larger volumes of data will increase storage costs over time.

    -------------------------------------------------------------------------------------------------------------

    Steps to Export Tenant Logs to a Storage Account

    Step 1: Set up diagnostic settings to export logs
    1. Navigate to diagnostic settings: in the Azure portal, search for Azure Active Directory and select it. Under the Monitoring section, select Diagnostic settings.
    2. Create a new diagnostic setting: click Add diagnostic setting and name the setting (e.g., "TenantLogStorageExport").
    3. Select log categories: choose the logs you want to export, such as Audit Logs, Sign-in Logs, and Provisioning Logs.
    4. Select the destination: choose Archive to a storage account and select the storage account where the logs will be stored. Confirm and save the settings.

    Once configured, the selected logs will start streaming into the specified storage account.

    -------------------------------------------------------------------------------------------------------------

    Accessing Logs with Azure Storage Explorer

    Azure Storage Explorer is a free, graphical tool that allows you to easily access and manage data in your storage accounts, including logs stored as blobs.

    Using Azure Storage Explorer:
    1. Download and install: install Azure Storage Explorer on your local machine.
    2. Connect to your Azure account: launch Storage Explorer and sign in with your Azure credentials. Browse to your storage account and locate the blob containers where your logs are stored (e.g., insights-logs-signinlogs).
    3. View and download logs: use the explorer interface to view the logs. You can download these blobs to your local machine for offline analysis, or automate log retrieval using tools like AzCopy or Python scripts.

    Logs are typically stored in a hierarchical structure, with each log file containing valuable data in JSON or CSV format.
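    If you prefer to script the retrieval rather than use the GUI, a minimal sketch with the Az.Storage module might look like this. The storage account name, container, and destination folder are placeholders, and your account needs data-plane access to the container:

```powershell
# Minimal sketch: listing and downloading the most recent exported sign-in log blobs.
$ctx = New-AzStorageContext -StorageAccountName 'mystorageaccount' -UseConnectedAccount
Get-AzStorageBlob -Container 'insights-logs-signinlogs' -Context $ctx |
    Sort-Object LastModified -Descending |
    Select-Object -First 5 |
    ForEach-Object {
        # Download each recent blob for offline analysis
        Get-AzStorageBlobContent -Container 'insights-logs-signinlogs' -Blob $_.Name -Destination 'C:\cases\logs\' -Context $ctx
    }
```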
    Examples of Log Types in Storage Accounts

    Here are some common logs that you might store in your storage account:
    - insights-logs-signinlogs: logs of all user and service sign-in activities.
    - insights-logs-auditlogs: logs of administrative changes such as adding or removing users, apps, or roles.
    - insights-logs-networksecuritygrouprulecounter: tracks network security group rules and counters.
    - insights-logs-networksecuritygroupflowevent: monitors NSG traffic flows.

    These logs are stored as blobs, while certain logs (e.g., OS logs) might be stored in tables within the storage account.
    https://azure.microsoft.com/en-us/products/storage/storage-explorer/
    https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log-schema#schema-from-storage-account-and-event-hubs

    -------------------------------------------------------------------------------------------------------------

    Sending Logs to Event Hub for External Systems

    If you need to export tenant logs or other logs to a non-Azure system, Event Hub is a great option. Event Hub is a real-time data ingestion service that can process millions of events per second and is often used to feed external systems such as SIEMs (Security Information and Event Management).

    How to configure Event Hub export:
    1. Create an Event Hub: set up an Event Hub within the Azure Event Hubs service.
    2. Configure diagnostic settings: just as you did for the storage account, go to Diagnostic settings for Azure Active Directory and select Stream to an event hub as the destination. Enter the namespace and event hub name.

    This setup allows you to forward Azure logs in real time to any system capable of receiving data from Event Hub, such as a SIEM or a custom log analytics platform.
    https://azure.microsoft.com/en-us/products/event-hubs/
    https://learn.microsoft.com/en-us/entra/identity/monitoring-health/howto-stream-logs-to-event-hub?tabs=splunk

    -------------------------------------------------------------------------------------------------------------

    Leveraging Microsoft Graph API for Log Retrieval

    In addition to storage accounts and Event Hubs, Azure also supports the Microsoft Graph API for retrieving tenant logs programmatically. This API allows you to pull log data directly from Azure and Microsoft 365 services. The Graph API supports many programming languages, including Python, C#, and Node.js, making it highly flexible. It's commonly used to integrate Azure logs into custom applications or third-party systems.
    https://developer.microsoft.com/en-us/graph
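    As a hedged illustration of that programmatic route, the sketch below pulls a handful of recent Entra ID sign-in events from the Graph sign-in logs endpoint. Token acquisition is out of scope here, and an identity with the AuditLog.Read.All permission is assumed:

```powershell
# Rough sketch: retrieving recent sign-in logs via Microsoft Graph.
# $token must hold a valid access token for an identity granted AuditLog.Read.All.
$headers  = @{ Authorization = "Bearer $token" }
$uri      = 'https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=10'
$response = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
$response.value |
    Select-Object createdDateTime, userPrincipalName, ipAddress, appDisplayName
```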
    -------------------------------------------------------------------------------------------------------------

    All of the above logs are part of the tenant logs. Let's move on to the second log category: subscription logs.

    What are Subscription Logs?

    Subscription logs track and record all activity within your Azure subscription. They record changes made to resources, providing a clear audit trail and insight into tenant-wide services. The primary information recorded includes details of operations, the identities involved, success or failure status, and IP addresses.

    Accessing Subscription Logs

    Subscription logs are available under the Activity log in the Azure portal. You can use the logs in multiple ways:
    - View them directly in the Azure portal for a quick, interactive inspection.
    - Store them in a Log Analytics workspace for advanced querying and long-term retention.
    - Archive them in a storage account, useful for maintaining a long-term log history.
    - Forward them to a SIEM (Security Information and Event Management) solution via Azure Event Hub for enhanced security monitoring and correlation.

    To access the logs in the Azure portal, use the search bar to look for Activity log. This provides a quick summary view of activities within the portal.
    https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log?tabs=powershell

    -------------------------------------------------------------------------------------------------------------

    Key Elements of the Subscription Log Schema

    Each activity log entry has several key fields that help with monitoring and troubleshooting. When an action, such as creating a new virtual machine (VM), is logged, the following fields provide detailed information:
    - resourceId: a unique identifier for the resource that was acted upon, allowing precise tracking of the specific VM, storage account, or network security group.
    - operationName: the action taken on the resource. For example, creating a VM might appear as MICROSOFT.COMPUTE/VIRTUALMACHINES/WRITE.
    - resultType and resultSignature: show whether the operation succeeded, failed, or was canceled, with additional error codes or success indicators in resultSignature.
    - callerIpAddress: the IP address from which the action originated, identifying the source of the request.
    - correlationId: a unique GUID that ties together all sub-operations in a single request, allowing you to trace a sequence of actions as part of a single change or request.
    - claims: contains identity details of the principal making the change, including any associated authentication data. This can include fields from an identity provider like Azure AD, giving insight into the user or service making the request.

    Each log entry captures the critical details needed to understand who made a change, what was changed, when, and from where.

    -------------------------------------------------------------------------------------------------------------

    Subscription Log Access Options

    Azure offers different access and filtering methods for subscription logs. Here's a breakdown of how to use them effectively:
    - Azure portal: the portal offers a quick, visual way to explore logs. You can select a subscription, set the event severity level (e.g., Critical, Error, Warning, Informational), and define a timeframe for the log entries you need. The Export Activity Logs option on the top menu, or the Diagnostic settings option on the left, lets you set up data export or view diagnostic logs.
    - Log Analytics workspace: the Log Analytics workspace offers a more robust and flexible environment for log analysis. By sending your logs here, you can perform advanced queries, create dashboards, and set up alerts. This workspace enables centralized log management, making it an ideal choice for larger organizations or those with specific compliance requirements.
    - Programmatic access: using the PowerShell cmdlet Get-AzLog or the Azure CLI with az monitor activity-log, you can query the activity logs programmatically. This is useful for automated scripts or for integrating logs into third-party solutions.
    - Event Hub integration: for real-time analysis, integrate subscription logs with Event Hub and forward them to a SIEM for security insights and anomaly detection. This setup is beneficial for organizations that require constant monitoring and incident response.
    https://learn.microsoft.com/en-us/powershell/module/az.monitor/?view=azps-12.4.0#retrieve-activity-log
    https://learn.microsoft.com/en-us/cli/azure/service-page/monitor?view=azure-cli-latest#view-activity-log

    -------------------------------------------------------------------------------------------------------------

    Subscription Logs in a Log Analytics Workspace

    For detailed analysis, it's best to set up a Log Analytics workspace. This enables centralized log storage and querying capabilities, combining subscription logs with other logs (such as Azure Active Directory (Entra ID) logs) for a comprehensive view. The setup process is identical to the one for the tenant logs: select the log categories you wish to save and the Log Analytics workspace to send them to.

    Subscription Log Categories

    The main log categories available are:
    - Administrative: tracks actions related to resources, such as creating, updating, or deleting resources via Azure Resource Manager.
    - Security: logs security alerts generated by Azure Security Center.
    - Service Health: reports incidents affecting the health of Azure services.
    - Alert: logs triggered alerts based on predefined metrics, such as high CPU usage.
    - Recommendation: records Azure Advisor recommendations for resource optimization.
    - Policy: logs policy events for auditing and enforcing subscription-level policies.
    - Autoscale: contains events from the autoscale feature based on usage settings.
    - Resource Health: provides resource health status, indicating whether a resource is available, degraded, or unavailable.

    -------------------------------------------------------------------------------------------------------------

    Querying Subscription Logs in Log Analytics

    The logs are stored in the AzureActivity table in Log Analytics. Here are some example queries:

    Identify deleted resources:
    AzureActivity | where OperationNameValue contains "DELETE"
    This query is useful for investigating deletions, such as a scenario where a malicious actor deletes a resource group, causing all contained resources to be deleted.

    Track virtual machine operations:
    AzureActivity | where OperationNameValue contains "COMPUTE" | distinct OperationNameValue
    This query lists unique operations related to virtual machines, helpful for getting an overview of VM activity.

    Count VM operations:
    AzureActivity | where OperationNameValue contains "COMPUTE" | summarize count() by OperationNameValue
    By counting operations, this query provides insight into the volume of VM activity, which can reveal patterns such as frequent VM creation or deletion.

    -------------------------------------------------------------------------------------------------------------

    Archiving and Streaming Logs

    To save logs for long-term storage or send them to a SIEM, configure diagnostic settings to specify the storage account or Event Hub for archiving and real-time streaming. Logs stored in a storage account appear in a structured format, often as JSON files within deeply nested directories, which can be accessed and processed using tools like Azure Storage Explorer.

    By effectively leveraging subscription logs and these configurations, Azure administrators can enhance monitoring, identify security issues, and ensure accountability in their environments.
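    The same queries can also be run without opening the portal. Below is a hedged sketch using the Az.OperationalInsights module; the workspace ID is a placeholder, and the AzureActivity table only exists once the activity log has been connected to the workspace as described above:

```powershell
# Rough sketch: running the "deleted resources" query against a Log Analytics workspace.
# Requires Az.OperationalInsights and an authenticated session; <workspace-guid> is a placeholder.
$query  = 'AzureActivity | where OperationNameValue contains "DELETE" | project TimeGenerated, OperationNameValue, Caller, ResourceGroup'
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-guid>' -Query $query
$result.Results | Format-Table TimeGenerated, OperationNameValue, Caller, ResourceGroup
```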
    -----------------------------------------------------------------------------------------------------------

    We will continue this in the next blog. Until then, stay safe and keep learning.

    Akash Patel

    ----------------------------------------------------------------------------------------------------------

    Special Thanks (Iqra)

    I would like to extend my heartfelt gratitude to one of my dearest colleagues, a Microsoft Certified Trainer, for her invaluable assistance in creating these articles. Without her support, this would not have been possible. Thank you so much for your time, expertise, and dedication!
    https://www.linkedin.com/in/iqrabintishafi/
    -------------------------------------------------------------------------------------------------------------

  • In-Cloud Incident Response: How to Acquire and Analyze a VM Disk Image in Azure

    When conducting incident response in the cloud, there often comes a point where logs alone aren't enough and we need direct access to data from the affected machine. Acquiring an image of a virtual machine (VM) in Azure and analyzing it in the cloud can save both time and egress costs compared to downloading it. This guide walks through each step of setting up and performing forensic analysis in the cloud, using a dedicated "Forensic VM" to examine a disk image created from a "Victim VM."
    https://learn.microsoft.com/en-us/azure/import-export/storage-import-export-service

    Steps to Perform In-Cloud Forensic Analysis

    Step 1: Snapshot the OS Disk from the Victim VM

    To start, take a snapshot of the VM's disk. A snapshot is a full, read-only copy of the disk at a specific point in time.
    1. Locate the Victim VM in the Azure portal and navigate to its disk.
    2. Create a snapshot: select the "Snapshot" option for the disk. The VM can remain running, as snapshots can be created on active VMs.
    3. Choose the snapshot type: select "Full" for a complete copy of the disk. Use "Incremental" if you're doing routine backups.
    4. Name the snapshot: assign a descriptive name (e.g., victim-vm-os-snapshot) to avoid confusion in later steps.

    Azure storage costs apply to snapshots ($0.05/GB/month for standard and $0.132/GB/month for premium). For most investigations, snapshotting only the OS disk is sufficient. However, if the VM has data disks, you may need snapshots of these too.

    Step 2: Create a New Disk from the Snapshot

    The snapshot data now needs to be applied to a new disk, making it accessible for analysis.
    1. Create a disk from the snapshot: in the Azure portal, go to "Create Disk" and select "Snapshot" as the source type.
    2. Name the disk: name it similarly to the snapshot, adding -disk at the end (e.g., victim-vm-os-disk) for easy identification.
    3. Select the disk type: choose Premium SSD for faster data processing during forensic analysis. If cost is a concern, you can delete the snapshot after this step, but keeping it is advisable as a backup.
    4. Create the disk: confirm and create the disk. This disk now holds all data from the snapshot and is ready to be attached to a VM for analysis. Once created, search for "Disks" in the portal and you will find the new disk there.

    Step 3: Create the Forensic VM

    To analyze the imaged disk, create a separate VM, the "Forensic VM," with adequate resources for your forensic tools.
    1. Select the VM specifications: choose a VM size with a robust CPU (4 vCPUs) and memory (16 GB) to handle the processing demands of forensic tools.
    2. Create the OS disk: during setup, the Forensic VM gets its own OS disk where you can install forensic software and store results.
    3. Data disk selection: under "Data Disks," select "Attach an existing disk" and attach the disk you created in Step 2. If you forget this step, shut down the VM before attaching the disk to prevent corruption.
    4. Location and region: make sure the Forensic VM is created in the same region as the victim VM disk for performance optimization.

    This VM will host your forensic tools, such as KAPE, and provide an isolated environment for analysis.
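    For repeatable acquisitions, Steps 1 and 2 can also be scripted. Below is a hedged Az PowerShell sketch; the resource group, VM, and disk names are placeholders, and parameters should be verified against your module version:

```powershell
# Rough sketch: snapshot a victim VM's OS disk and create an analysis disk from it.
# Requires Az.Compute and an authenticated session; all names below are placeholders.
$rg       = 'ir-case-rg'
$vm       = Get-AzVM -ResourceGroupName $rg -Name 'victim-vm'
$osDiskId = $vm.StorageProfile.OsDisk.ManagedDisk.Id

# Step 1: full snapshot of the OS disk
$snapCfg = New-AzSnapshotConfig -SourceUri $osDiskId -Location $vm.Location -CreateOption Copy
$snap    = New-AzSnapshot -ResourceGroupName $rg -SnapshotName 'victim-vm-os-snapshot' -Snapshot $snapCfg

# Step 2: create a new managed disk from the snapshot for the Forensic VM
$diskCfg = New-AzDiskConfig -Location $vm.Location -CreateOption Copy -SourceResourceId $snap.Id
New-AzDisk -ResourceGroupName $rg -DiskName 'victim-vm-os-disk' -Disk $diskCfg
```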
Step 4: Mount the Disk in the Forensic VM
Once the Forensic VM is running, access the imaged disk by mounting it.
Access Disk Management: In the VM’s Disk Management window, you will see:
Disk 0: The Forensic VM’s OS disk (typically C:).
Disk 1: Temporary storage (typically D:), which is not persistent.
Disk 2: The attached disk from the victim VM.
Bring Disk 2 Online: Right-click on Disk 2 and select “Online.” Windows will assign a letter to each partition.
Access the OS Partition: The OS partition will appear as another drive (e.g., G:), which you can now investigate.
Ensure you avoid writing to this disk during analysis to maintain the integrity of the data. If any corruption occurs, simply repeat from Step 2 using the original snapshot.
Step 5: Run Forensic Tools on the Forensic VM
With the disk mounted, you can now use forensic tools to analyze the image.
Install Forensic Tools: Tools like KAPE, FTK Imager, or any preferred forensic software can be installed on the Forensic VM’s OS disk.
Perform Analysis: Analyze the OS partition to retrieve relevant evidence. Remember that this disk is writable, so take care not to alter the contents unintentionally. If you need a point-in-time copy for future analysis, consider using the snapshot or taking another snapshot of the newly created disk.
-------------------------------------------------------------------------------------------------------------
Optional: Alternative Snapshot Access Methods
For scenarios that require additional tools, downloading the snapshot as a virtual hard disk (VHD) can be useful.
Export the Snapshot: Select the snapshot in the Azure portal, navigate to “Snapshot Export,” and generate a one-time URL for downloading.
Use AzCopy for Speed: To accelerate the download, use the AzCopy tool with the command (the first argument is the one-time export URL):
azcopy cp "" "c:\temp\snapshot.vhd" --check-md5 nocheck
VHD Advantages: Direct VHD access enables automation and integration with tools that may require offline data access. However, be mindful of the potential data egress costs and long download times.
Forensic VM Image Creation for Future Use
To streamline future investigations, Azure offers ways to create and share VM images.
Create an Image: Create a VM with all your forensic tools installed, then save it as an image in the Azure Compute Gallery. This enables quick setup of similar VMs for future investigations.
Use Azure Image Builder (Advanced): If you prefer customization, use Azure Image Builder to craft an image from scratch. This setup is ideal if you need to transfer or reuse the Forensic VM across different regions, subscriptions, or resource groups.
-------------------------------------------------------------------------------------------------------------
Summary
Using Azure’s in-cloud investigation capabilities, you can create snapshots, build forensic VMs, and attach imaged disks to streamline your incident response process. By performing forensic analysis in the cloud, you can bypass data egress costs, reduce transfer times, and work efficiently on even large data sets. With this guide, you have a complete roadmap for setting up in-cloud forensic analysis and enhancing your response capabilities in Azure.
Akash Patel
----------------------------------------------------------------------------------------------------------
Special Thanks (Iqra)
I would like to extend my heartfelt gratitude to one of my dearest colleagues, a Microsoft Certified Trainer, for her invaluable assistance in creating these articles. Without her support, this would not have been possible. Thank you so much for your time, expertise, and dedication!
https://www.linkedin.com/in/iqrabintishafi/
-------------------------------------------------------------------------------------------------------------

  • How Attackers Use Search Engines and What You Can Do About It

Search engines are incredible tools for finding information online, but they can also be used by attackers for reconnaissance.
How Attackers Use Search Engines for Reconnaissance
Search engines like Google and Bing provide a vast amount of information that attackers can exploit. By using specific search commands, they can uncover sensitive data, find vulnerabilities, and prepare for attacks.
Google Hacking Database (GHDB): The Google Hacking Database (GHDB) is a collection of search queries that help find vulnerabilities and sensitive data exposed by websites. It's a valuable resource for attackers and can be found on the Exploit Database website.
https://www.exploit-db.com/google-hacking-database
Key Search Commands Attackers Use
site: Searches a specific domain. Example: site:example.com restricts the search to example.com.
link: Finds websites linking to a specific page. Example: link:example.com shows all sites linking to example.com.
intitle: Searches for pages with specific words in the title. Example: intitle:"login page" finds pages with "login page" in the title.
inurl: Looks for URLs containing specific words. Example: inurl:admin finds URLs with "admin" in them.
related: Finds pages related to a specific URL. Often less useful but can sometimes uncover valuable information.
cache: Accesses the cached version of a webpage stored by Google. Example: cache:example.com shows Google's cached copy of example.com.
filetype/ext: Searches for specific file types. Example: filetype:pdf or ext:pdf finds PDF files, useful for locating documents that might contain sensitive information.
Practical Reconnaissance Techniques
1. Searching for Sensitive Files:
Attackers search for files that might be accidentally exposed, such as:
Web Content: site:example.com asp, site:example.com php
Document Files: site:example.com filetype:xls, site:example.com filetype:pptx
2. Using Cache and Archives:
Google Cache: Retrieves recently removed pages using the cache: command.
Wayback Machine: Archives webpages over time, available at archive.org.
https://archive.org/
3. Automated Tools:
FOCA/GOCA: Finds files, downloads them, and extracts metadata, revealing usernames, software versions, and more.
https://github.com/gocaio/Goca
SearchDiggity: Provides modules for Google, Bing, and Shodan searches, malware checks, and data leakage assessments.
Recon-ng: A framework that queries data from multiple services and manages data across projects.
https://github.com/lanmaster53/recon-ng
Conclusion
Search engine reconnaissance is a powerful tool for attackers, providing them with a wealth of information to plan their attacks. By understanding these techniques and implementing robust defensive measures, you can significantly reduce your exposure and protect your critical data. Stay vigilant, stay informed, and continuously audit your public-facing assets to maintain a strong security posture.
Akash Patel

  • Source of Logs in Azure(P4:- Virtual Machine Logs) : A Comprehensive Guide for Incident Response

Let's talk about the fifth category: Virtual Machine Logs.
Azure provides a range of logging options for virtual machines (VMs) to support monitoring, troubleshooting, and incident response. Here’s an overview of the log types, agents, and configuration options for both Windows and Linux VMs, along with specific considerations for application logs.
Logging Agents
Azure offers several agents for collecting VM logs, each suited to different needs:
Azure Monitor Agent: Designed to replace older agents, it supports Data Collection Rules (DCR) for granular log collection.
Diagnostic Extension (WAD): Known as Windows Azure Diagnostics, this agent can write data directly to a storage account or an Event Hub. It remains a go-to choice for direct storage integration.
Azure Monitor for VMs: Collects performance data and logs across VMs but may require additional configuration for more specialized needs.
For data retention in Azure, understanding which agent best aligns with your storage and monitoring requirements is key.
Configuring Windows Azure Diagnostics (WAD) for Windows VMs
Initial Setup:
Navigate to Azure Monitor in the Azure portal.
Create a Data Collection Rule (DCR) for specific logs.
Configuration Steps:
Diagnostic Settings: Configure diagnostic settings for the VM and select the event logs and levels you want to collect (e.g., system, security, and application logs).
Agent Settings: Assign a storage account to store the logs and set a disk quota to manage storage limits.
Types of Logs Collected:
Windows Event Logs: Stored in WADWindowsEventLogsTable, which contains OS-level event logs.
Application Logs: Capture IIS logs, .NET application traces, and Event Tracing for Windows (ETW) events. ETW provides insights into kernel and application-level events, useful for performance and security monitoring.
Accessing Logs:
Azure Storage Explorer: Use this tool to navigate to the storage account’s Tables section, access WADWindowsEventLogsTable, and export logs to a .csv file if needed.
Configuring Logging for Linux VMs
Diagnostic Settings: Set diagnostic settings for the Linux VM, similar to the Windows setup. Choose the target storage account for log storage.
Log Options:
Metrics: Configure metrics for key system parameters such as CPU, memory, network, file system, and disk usage. These can indicate suspicious activity patterns, such as high CPU usage for crypto mining or elevated disk usage during ransomware incidents.
Syslog: Collect system logs stored in auth.log, kern.log, syslog, etc. All logs are combined into a single table, LinuxSyslogVer2v0, in the Azure storage account.
https://datatracker.ietf.org/doc/html/rfc5424
Accessing Linux Logs: Use Azure Storage Explorer to access LinuxSyslogVer2v0 under the Tables section of the designated storage account.
Application Logging
Tracing for .NET and ETW: Application logs generated from .NET applications and ETW (Event Tracing for Windows) capture both system and application performance data. Logs are stored in plaintext, differing from other logs stored in JSON format, and can be accessed via Azure’s storage services.
-------------------------------------------------------------------------------------------------------------
Summary of Log Sources
Windows VMs:
Windows event logs (WADWindowsEventLogsTable)
IIS and application logs, ETW events
Linux VMs:
System metrics (CPU, memory, etc.)
Syslog events (LinuxSyslogVer2v0)
Application Logs:
.NET tracing output and ETW logs in plaintext
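The "Accessing Logs" step above (browsing WADWindowsEventLogsTable in Azure Storage Explorer and exporting to .csv) can also be scripted. A minimal sketch, assuming the Az and AzTable modules are installed; the resource group, storage account name, and output path are placeholders:

Connect-AzAccount
# Get a context for the diagnostics storage account (placeholder names)
$ctx   = (Get-AzStorageAccount -ResourceGroupName "diag-rg" -Name "diaglogssa").Context
# Reference the WAD event-log table
$cloud = (Get-AzStorageTable -Name "WADWindowsEventLogsTable" -Context $ctx).CloudTable
# Dump all rows and export them to CSV for offline review
Get-AzTableRow -Table $cloud | Export-Csv -Path "C:\temp\WADWindowsEventLogs.csv" -NoTypeInformation

For large tables you would normally add a filter to Get-AzTableRow rather than pulling every row.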
--------------------------------------------------------------------------------------------------------
Key Takeaways
Choosing Agents: Decide based on whether storage account integration or advanced data collection rules are required.
Logging Setup: Configure storage quotas to avoid excessive costs and log noise.
Accessing Logs: Use Azure Storage Explorer for NoSQL table-based logs, which provide structured access to Windows and Linux logs.
------------------------------------------------------------------------------------------------------
Conclusion:
In Azure, securing storage accounts and virtual machines requires vigilant access management, policy-driven logging, and careful monitoring of data access activities. By enabling StorageRead logs and configuring diagnostic agents for VMs, organizations can detect potential data exfiltration and unusual activity. Centralizing logs and applying policies across environments strengthens incident response and supports comprehensive visibility across resources.
Akash Patel
----------------------------------------------------------------------------------------------------------
Special Thanks (Iqra)
I would like to extend my heartfelt gratitude to one of my dearest colleagues, a Microsoft Certified Trainer, for her invaluable assistance in creating these articles. Without her support, this would not have been possible. Thank you so much for your time, expertise, and dedication!
https://www.linkedin.com/in/iqrabintishafi/
-------------------------------------------------------------------------------------------------------------

  • Source of Logs in Azure(P1 :-Tenant Logs) : A Comprehensive Guide for Incident Response

In cloud-based environments like Azure, maintaining comprehensive visibility over all activities is essential for securing your infrastructure and responding effectively to incidents. One of the most critical tools in your security arsenal is logging. Azure provides a variety of log sources, but not all are enabled by default. Understanding where these logs come from, how to access them, and how to store them can significantly improve your ability to investigate incidents and mitigate risks.
The Five Key Azure Log Sources
Azure collects logs from various levels of the cloud infrastructure, each serving a unique role in monitoring and security. Here are the five primary log sources you need to be aware of:
Tenant Logs
Subscription Logs
Resource Logs
Operating System Logs
Application Logs
Let’s explore each of these in more detail.
Tenant: Turned on by default. Used to detect password spray attacks or other credential abuse.
Subscription: Turned on by default. Used to analyze the creation, deletion, and start/stop of resources in cases such as crypto mining VM incidents or mass deletion for sabotage.
Resource: Turned off by default. Used to log network traffic flow and file storage access for cases such as data exfiltration.
Operating System: Turned off by default. Used to log operating system events, which can show lateral movement.
Application: Turned off by default. Used to create custom logs at the discretion of developers. Azure includes a log for IIS that can be used to show web server attacks.
-------------------------------------------------------------------------------------------------------------
Why Proper Logging Matters in Incident Response
In many cases, when an organization is called to respond to a security incident, the first challenge is discovering that key logs were never configured or stored. This leaves responders with limited information and hampers their ability to fully understand the attack.
Why is this important?
Comprehensive Monitoring: Many log sources, such as resource and OS logs, must be enabled manually. Without these logs, crucial events like unauthorized access or file manipulation might go unnoticed.
Cost of Storage: Logs must be stored in Azure, often in a Log Analytics Workspace or similar storage solution, which incurs additional costs. Without proper budgeting and planning, organizations might avoid enabling these logs due to perceived costs, leaving them vulnerable.
Log Retention: Depending on your configuration, logs might only be stored for a short period before being overwritten. Having a strategy in place for exporting and storing logs in a secure, centralized location (such as a SIEM system) is essential.
The ideal setup is to continuously export these logs to a SIEM, where they can be stored long-term and analyzed even after an incident has occurred. This prevents attackers from covering their tracks by deleting logs stored locally in Azure.
-------------------------------------------------------------------------------------------------------------
Log Analytics Workspace: Centralizing Your Logs for Efficient Analysis
Azure provides a Log Analytics Workspace as a centralized repository where logs from multiple sources, both Azure-based and non-Azure, can be aggregated and analyzed. This workspace organizes logs into tables, with each data source creating its own table.
Key benefits of using a Log Analytics Workspace include:
Scalability: The default workspace can handle up to 6GB of logs per minute and store up to 4TB of data per day. This is generally sufficient for most organizations, though custom workspaces can be created for larger log volumes.
Access Control: You can set granular permissions based on security roles, ensuring that sensitive logs are only accessible to authorized personnel.
By setting up a Log Analytics Workspace, you can automate the collection of logs from all relevant sources and integrate with Azure Monitor for real-time alerting and analysis.
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/workspace-design
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/manage-access?tabs=portal
-------------------------------------------------------------------------------------------------------------
Setting Up Log Analytics Workspace in Azure
A Log Analytics Workspace allows you to aggregate logs from multiple Azure services and third-party tools into one place. Here’s how to set it up:
Step-by-Step Guide to Creating a Log Analytics Workspace
Step 1: Sign in to the Azure Portal
Go to the Azure Portal and sign in with your credentials.
Step 2: Search for 'Log Analytics Workspaces'
In the search bar at the top, type Log Analytics Workspaces and select the service from the list.
Step 3: Create a New Workspace
Click New to create a new workspace. Enter the required details:
Subscription: Select your Azure subscription.
Resource Group: Choose an existing resource group or create a new one.
Workspace Name: Name your workspace (e.g., "SecurityLogsWorkspace").
Region: Choose the region where you want the workspace to reside.
Step 4: Review and Create
After entering all details, click Review + Create and then Create to deploy your Log Analytics Workspace.
This workspace will serve as a centralized location for all logs, which can be expanded to include tenant logs, subscription logs, resource logs, and more. For more details on creating a Log Analytics workspace, visit Microsoft’s official documentation.
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/quick-create-workspace?tabs=azure-portal
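The same workspace can also be created from PowerShell instead of the portal. This is a minimal sketch using the Az.OperationalInsights module; the resource group name, workspace name, region, SKU, and retention value are placeholders to adjust:

Connect-AzAccount
# Create (or reuse) a resource group for the workspace
New-AzResourceGroup -Name "security-logs-rg" -Location "westeurope"
# Create the Log Analytics workspace with 90-day retention (placeholder values)
New-AzOperationalInsightsWorkspace -ResourceGroupName "security-logs-rg" -Name "SecurityLogsWorkspace" -Location "westeurope" -Sku "PerGB2018" -RetentionInDays 90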
-------------------------------------------------------------------------------------------------------------
Tenant Logs: Overview and Access
Tenant logs provide information about operations conducted by tenant-wide services like Azure Active Directory (AAD)/Entra ID. These logs are essential for monitoring security-related events such as sign-ins, user provisioning, and audit trails.
The key AAD logs include:
Audit Logs: Track changes and configuration updates across the tenant.
Sign-in Logs: Provide detailed records of user login activity, including success, failure, and multi-factor authentication (MFA) usage.
Viewing Tenant Logs in the Azure Portal
Sign-in Logs: To quickly check sign-in activity, go to the Azure Portal and navigate to Azure Active Directory (Entra ID) > Sign-ins. Here you can view sign-in logs for the last 30 days, showing details such as user, date, status (success, failure, interrupted), and the IP address used.
https://learn.microsoft.com/en-us/entra/fundamentals/how-to-manage-user-profile-info
Audit Logs: Similarly, go to Azure Active Directory (Entra ID) > Audit Logs to see tenant-wide changes, such as user account updates and administrative configuration changes.
However, the Azure portal limits logs to the last 30 days, making it unsuitable for long-term forensic analysis or detailed investigations. For comprehensive analysis and historical data retention, storing logs in a Log Analytics Workspace is a much better approach.
-------------------------------------------------------------------------------------------------------------
Exporting Azure Active Directory Logs to Log Analytics Workspace (AAD has since been renamed Entra ID)
To take full advantage of tenant logs, including AAD audit and sign-in logs, you should configure the logs to be stored in your Log Analytics Workspace. This allows for extended retention periods, deeper analysis, and cross-correlation with other logs.
Step-by-Step Guide to Exporting AAD (Entra ID) Logs
Step 1: Navigate to Azure Active Directory (Entra ID)
In the Azure Portal, search for and select Azure Active Directory from the services list.
Step 2: Configure Diagnostic Settings
From the AAD menu, select Diagnostic settings. Click Add diagnostic setting to configure where the logs will be stored.
-------------------------------------------------------------------------------------------------------------
Selecting AAD (Entra ID) Logs and Setting Up Log Analytics Workspace
After setting up your Log Analytics Workspace (as described in previous steps), the next task is to configure which AAD logs you want to capture and send to the workspace. Azure provides several types of logs that you can export for analysis:
Audit Logs: Logs changes such as adding or removing users, groups, roles, policies, and applications.
Sign-in Logs: Tracks sign-in activities, including:
User sign-in: Captures direct user login events.
Non-interactive sign-in: Logs background sign-ins, such as token refreshes.
Service Principal sign-in: Logs sign-ins performed by service principals (used by applications).
Managed Identity sign-in: Captures sign-ins for managed identities.
Provisioning Logs: Tracks user, group, and role provisioning activities performed by Azure AD.
ADFS Sign-in Logs: Monitors federation sign-in events through Active Directory Federation Services (ADFS).
Identity Protection Logs: Tracks risky users and events, including RiskyUsers, UserRiskEvents, RiskyServicePrincipals, and ServicePrincipalRiskEvents.
Network Access Traffic Logs: Logs network traffic for policy and risk management, including user experience data.
To set this up:
Navigate to Diagnostic Settings: Go to the Azure Active Directory (Entra ID) service in the Azure portal. In the left menu, click Diagnostic settings and then select Add diagnostic setting.
Choose Logs to Export: Select the categories of logs you want to export to the Log Analytics Workspace (e.g., AuditLogs, SignInLogs, ProvisioningLogs). Specify the Log Analytics Workspace where these logs will be stored.
Save Settings: Confirm the logs you’ve selected and save the diagnostic setting.
Once configured, these logs will be automatically sent to the designated Log Analytics Workspace for long-term storage and analysis.
https://learn.microsoft.com/en-us/entra/id-protection/
-------------------------------------------------------------------------------------------------------------
Managing Storage Costs
While it may be tempting to store all available logs, storage costs can accumulate quickly, especially for large organizations with a lot of activity. One cost-saving measure is to use Azure Storage Accounts for logs that don't require constant querying but need to be archived for compliance or later use. For critical logs, such as sign-in and audit logs, continuous export to the Log Analytics Workspace is recommended for monitoring real-time activity and performing incident response. However, less frequently accessed logs can be stored more cost-effectively in a storage account.
-------------------------------------------------------------------------------------------------------------
Querying AAD (Entra ID) Logs Using Kusto Query Language (KQL)
Once AAD logs are flowing into your Log Analytics Workspace, you can use Kusto Query Language (KQL) to search, filter, and analyze log data. KQL is a powerful language for querying logs and has a syntax similar to SQL, making it approachable for those familiar with databases.
Example of a Simple KQL Query:
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType == 0
SigninLogs: The first line specifies the log type you want to search.
TimeGenerated > ago(1d): Filters the query to only include logs from the past 24 hours.
ResultType == 0: This line filters for successful logins (ResultType 0 corresponds to success).
This simple query helps you identify all successful sign-in attempts in the last 24 hours. KQL also allows for more complex queries involving joins, aggregations, and visualizations, making it a robust tool for analyzing log data.
For more details on KQL, visit Microsoft’s KQL Documentation.
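If you would rather run queries like this from a script than from the portal, the Az.OperationalInsights module can submit KQL to a workspace. A minimal sketch; the workspace GUID is a placeholder and the module is assumed to be installed:

# The same sign-in query, run from PowerShell against a workspace (placeholder GUID)
$kql = @"
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType == 0
"@
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $kql
# Columns below exist in the SigninLogs table
$result.Results | Select-Object TimeGenerated, UserPrincipalName, IPAddress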
-------------------------------------------------------------------------------------------------------------
Using Pre-Built Queries in Log Analytics
Microsoft also provides a set of pre-built queries for common scenarios, such as analyzing sign-ins, audit events, or identifying risky behavior in your tenant. These queries serve as templates, which you can customize based on your specific investigation needs.
Pre-Built Queries: These are particularly useful when first starting with KQL, as they provide a foundation for your own queries and ensure you're asking the right questions of your data.
To use these pre-built queries:
Open your Log Analytics Workspace in the Azure portal.
Navigate to the Logs section.
Search for the desired query in the query library, or start with a template and adjust it to suit your needs.
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-overview
-------------------------------------------------------------------------------------------------------------
We will continue in the next blog. Until then, stay safe and keep learning.
Akash Patel
----------------------------------------------------------------------------------------------------------
Special Thanks (Iqra)
I would like to extend my heartfelt gratitude to one of my dearest colleagues, a Microsoft Certified Trainer, for her invaluable assistance in creating these articles. Without her support, this would not have been possible. Thank you so much for your time, expertise, and dedication!
https://www.linkedin.com/in/iqrabintishafi/
-------------------------------------------------------------------------------------------------------------

  • Azure Compute: Understanding VM Types and Azure Network Security for Incident Response

Microsoft Azure provides a wide range of compute services, organized based on workload types and categorized as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS). For incident response and forensic investigations, the focus is typically on virtual machines (VMs) and the related networking infrastructure.
-----------------------------------------------------------------------------------------------------------
Virtual Machines: Types and Applications
Azure offers various classes of virtual machines tailored for different workloads, all with specific performance characteristics. Here’s a breakdown of the most common VM types you'll encounter during an investigation:
Series A (Entry Level): Use Case: Development workloads, low-traffic websites. Examples: A1 v2, A2 v2.
Series B (Burstable): Use Case: Low-cost VMs with the ability to "burst" to higher CPU performance when needed. Examples: B1S, B2S.
Series D (General Purpose): Use Case: Optimized for most production workloads. Examples: D2as v4, D2s v4.
Series F (Compute Optimized): Use Case: Compute-intensive workloads, such as batch processing. Examples: F1, F2s v2.
Series E, G, and M (Memory Optimized): Use Case: Memory-heavy applications like databases. Examples: E2a v4, M8ms.
Series L (Storage Optimized): Use Case: High throughput and low-latency applications. Examples: L4s, L8s v2.
Series NC, NV, ND (Graphics Optimized): Use Case: Visualization, deep learning, and AI workloads. Examples: NC6, NV12s.
Series H (High Performance Computing): Use Case: Applications such as genomic research and financial modeling. Examples: H8, HB120rs v2.
https://azure.microsoft.com/en-us/pricing/details/virtual-machines/windows/
https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/
VM Storage: Managed Disks
Managed Disks in Azure operate similarly to physical disks but come with a few key distinctions relevant for incident response:
Types of Managed Disks:
Standard HDD: Slow, low-cost.
Standard SSD: Standard for most production workloads.
Premium SSD: High performance, better suited for intensive workloads.
Ultra Disk: Highest performance for demanding applications.
Each VM can have multiple managed disks, including an OS disk, a temporary disk (for short-term storage), and one or more data disks. Forensics often involves snapshotting the OS disk of a compromised VM and attaching that snapshot to a new VM for further analysis.
Costs are associated with:
Disk type and size.
Snapshot size (critical for investigations).
Outbound data transfers (when retrieving forensic data).
I/O operations (transaction costs).
https://learn.microsoft.com/en-us/azure/virtual-machines/managed-disks-overview
https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types
-----------------------------------------------------------------------------------------------------------
Azure Virtual Network (VNet): The Glue Behind Azure Resources
An Azure Virtual Network (VNet) allows Azure resources like VMs to communicate with each other and with external networks. During an incident response, it’s essential to understand the network topology to see how resources were connected, what traffic was allowed, and where vulnerabilities might have existed.
Key points about VNets:
Private Addressing: Azure assigns a private IP range (typically starting with 10.x.x.x).
Public IP Addresses: Required for internet communication, but come with extra charges.
On-Premises Connectivity:
Point-to-Site VPN: Connects individual computers to Azure.
Site-to-Site VPN: Connects an on-premises network to Azure.
Azure ExpressRoute: Private connections that bypass the internet.
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview
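To get a quick view of that topology at the start of an engagement, the VNets, subnets, and public IPs in the current subscription can be enumerated from PowerShell. This is a minimal sketch, assuming the Az.Network module and at least Reader access to the subscription:

Connect-AzAccount
# Summarize every VNet: address space and subnet names
Get-AzVirtualNetwork | ForEach-Object {
    [pscustomobject]@{
        VNet          = $_.Name
        ResourceGroup = $_.ResourceGroupName
        AddressSpace  = ($_.AddressSpace.AddressPrefixes -join ", ")
        Subnets       = ($_.Subnets.Name -join ", ")
    }
}
# Public IPs are often the first pivot point for internet-facing attacks
Get-AzPublicIpAddress | Select-Object Name, IpAddress, ResourceGroupName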
-----------------------------------------------------------------------------------------------------------
Network Security Groups (NSG): Traffic Control and Incident Response
NSG Overview: Azure automatically creates NSGs to protect resources, like virtual machines (VMs), by allowing or blocking traffic based on several criteria:
Source/Destination IP: IP addresses from which the traffic originates or to which it is sent.
Source/Destination Port: The network ports involved in the connection.
Protocol: The communication protocol (e.g., TCP, UDP).
Rule Prioritization: NSG rules are processed in order of their priority, with lower numbers having higher priority. Custom rules have priorities ranging from 100 to 4096, while Azure-defined rules have priorities in the 65000 range.
Incident Response Tip: Ensure that firewall rules are correctly prioritized. A common issue during investigations is discovering that a misconfigured or improperly prioritized rule allowed malicious traffic to bypass protections.
Flow Logs: Network flow logs, which capture traffic information, are essential for understanding traffic patterns and investigating suspicious activity. Flow logs are generated every minute, and the first 5GB per month is free. After that, the cost is $0.50 per GB plus storage charges.
Example: If an attack involved unauthorized access through a compromised port, flow logs would help you trace the origin and nature of the traffic, providing critical forensic data.
https://learn.microsoft.com/en-us/azure/network-watcher/nsg-flow-logs-overview
https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview#network-security-groups
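Reviewing rule prioritization across many NSGs is tedious in the portal. The following is a minimal PowerShell sketch, not part of the original article's tooling, that dumps custom rules sorted by priority; it assumes the Az.Network module and Reader access:

# List every custom NSG rule, lowest priority number (highest precedence) first
foreach ($nsg in Get-AzNetworkSecurityGroup) {
    $nsg.SecurityRules | Sort-Object Priority |
        Select-Object @{n='NSG';e={$nsg.Name}}, Priority, Name, Direction, Access, SourceAddressPrefix, DestinationPortRange
}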
-----------------------------------------------------------------------------------------------------------
Network Virtual Appliances (NVA): Advanced Network Security
Azure provides additional options for advanced traffic management and security beyond basic NSGs:
Azure Load Balancer: Distributes incoming network traffic across multiple resources to balance load.
Azure Firewall: Offers advanced filtering, including both stateful network and application-layer inspections.
Application Gateway: Protects web applications by filtering out vulnerabilities like SQL injection and cross-site scripting (XSS).
VPN Gateway: Connects on-premises networks securely to Azure.
Many third-party Network Virtual Appliances are also available on the Azure Marketplace, such as firewalls, VPN servers, and routers, which can be vital components in your investigation.
https://azuremarketplace.microsoft.com/en-us/marketplace/apps/category/networking?page=1&subcategories=all
-----------------------------------------------------------------------------------------------------------
Azure Storage: Central to Forensics and Logging
Azure storage accounts are integral to how logs and other data are stored during investigations. Proper storage setup ensures data retention and availability for analysis.
Storage Account Types:
Blob Storage: Scalable object storage for unstructured data, such as logs or multimedia.
File Storage: Distributed file system storage.
Queue Storage: For message storage and retrieval.
Table Storage: NoSQL key-value store, now part of Azure Cosmos DB.
Blob Storage:
Blobs (Binary Large Objects) are highly versatile and commonly used for storing large amounts of unstructured data, such as logs during forensic investigations. Blobs come in three types:
Block Blobs: Ideal for storing text and binary data; can handle up to 4.75TB per file.
Append Blobs: Optimized for logging, where data is appended rather than overwritten.
Page Blobs: Used for random access data, like Virtual Hard Drive (VHD) files.
Direct Access and Data Transfers: With the appropriate permissions, data stored in blob storage can be accessed over the internet via HTTP or HTTPS. Azure provides tools like AzCopy and Azure Storage Explorer to facilitate the transfer of data in and out of blob storage.
Example: Investigators may need to download logs or snapshots stored in blobs for offline analysis. Using AzCopy or Azure Storage Explorer, these files can be easily transferred for examination.
-----------------------------------------------------------------------------------------------------------
How This Script Helps:
VM Information for Analysis: The extracted data (VM ID and VM size) is essential for identifying and analyzing the virtual machines involved in an incident.
# Pull detailed activity-log entries for the Microsoft.Compute resource provider
$results = Get-AzLog -ResourceProvider "Microsoft.Compute" -DetailedOutput
# Walk each entry's properties and extract the VM ID and size from the response body
$results.Properties | ForEach-Object { $_ } | ForEach-Object {
    $contents = $_.Content
    if ($contents -and $contents.ContainsKey("responseBody")) {
        $fromJson = $contents.responseBody | ConvertFrom-Json
        $newObj = New-Object psobject
        $newObj | Add-Member NoteProperty VmId $fromJson.properties.vmId
        $newObj | Add-Member NoteProperty VmSize $fromJson.properties.hardwareProfile.vmSize
        $newObj
    }
}
-----------------------------------------------------------------------------------------------------------
Conclusion:
In Azure, combining effective Network Security Group (NSG) management with automated VM log extraction provides essential visibility for incident response. Understanding traffic control through NSGs and using PowerShell scripts for VM log retrieval empowers organizations to investigate security incidents efficiently, even without advanced security tools like a SIEM.
Akash Patel
----------------------------------------------------------------------------------------------------------
Special Thanks (Iqra)
I would like to extend my heartfelt gratitude to one of my dearest colleagues, a Microsoft Certified Trainer, for her invaluable assistance in creating these articles. Without her support, this would not have been possible. Thank you so much for your time, expertise, and dedication!
https://www.linkedin.com/in/iqrabintishafi/
-------------------------------------------------------------------------------------------------------------

  • "Azure Resource Groups and Role-Based Access Control: A Comprehensive Guide for Incident Response and Forensics in the Cloud"

Microsoft Azure is a vast ecosystem of cloud-based services and tools, offering almost limitless possibilities for building, managing, and scaling applications. But when it comes to incident response or forensic investigation, the Azure landscape can feel overwhelming. To make things clearer, let's focus on the essential elements you're most likely to encounter during such operations.
Understanding Azure's Structure: The Building Blocks
Think of Azure as a layered architecture, with each layer adding a distinct function that contributes to how an organization manages and controls its cloud resources. Here are the key components:
1. Azure Tenant
Picture the tenant as the foundation of a house: the basis for everything else. It represents the entire organization and is associated with an Azure Active Directory (AAD)/Entra ID instance, which handles identity and access management. If you're responding to a security breach, this is where you'll likely start your investigation, analyzing user and group permissions in AAD to find any clues about unauthorized access.
2. Management Groups
In larger enterprises, it's common to have many different projects running across Azure, each with its own budget, team, and purpose. To keep things tidy, management groups help organize multiple subscriptions under a single umbrella. For example, a company could have different management groups for its production and development environments. This setup lets administrators apply policies across all relevant subscriptions in one go, a time-saving feature that also helps standardize security practices.
For Example: Imagine you're investigating a security incident in a multinational corporation. You may find that production environments are more tightly controlled compared to development, thanks to separate management groups. This organization helps you narrow down where a misconfiguration or security hole might exist.
3. Subscriptions
Subscriptions are like folders within the cloud that help organize resources and manage billing. Each subscription can contain a collection of resources such as virtual machines, storage accounts, and databases. In a forensic investigation, this is where things get interesting because every subscription can have different access permissions.
Key Point: If you're investigating a security breach, ensure you have access to all relevant subscriptions, because the compromised resource could be hidden within a subscription you're not initially granted access to.
4. Resource Groups
Moving deeper into Azure's structure, resource groups act as containers that hold related resources, such as virtual machines or storage accounts. For example, a company might group all resources related to a specific app in one resource group.
Investigative Tip: Sometimes, you’ll only get access to a single resource group rather than an entire subscription. In that case, your view of the infrastructure will be narrow, limiting your ability to see the bigger picture. Whenever possible, push for subscription-level access.
5. Resources
Finally, resources are the individual services and assets: virtual machines, networking components, storage accounts, and so on. They are the nuts and bolts of Azure, and they are also the focus of most investigations. For example, if a virtual machine has been compromised, you'll need to scrutinize the VM, its associated storage, and network configurations to understand the breach.
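It can help to enumerate this hierarchy programmatically at the start of an engagement. The following is a minimal sketch with the Az module, assuming at least Reader access to the subscriptions in scope; it is illustrative rather than a complete inventory script:

Connect-AzAccount
# Walk subscription -> resource group -> resource and list what is deployed where
Get-AzSubscription | ForEach-Object {
    Set-AzContext -Subscription $_.Id | Out-Null
    Get-AzResourceGroup | ForEach-Object {
        Get-AzResource -ResourceGroupName $_.ResourceGroupName |
            Select-Object ResourceGroupName, ResourceType, Name
    }
}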
-------------------------------------------------------------------------------------------------------------
Subscriptions: The Power Behind Azure's Flexibility
Once your tenant is up and running, you’ll need to define one or more subscriptions. Each subscription is essentially a contract with Microsoft for cloud services, with charges accumulating based on usage. Large companies often set up multiple subscriptions to track different projects, which also helps them monitor costs across various departments or teams.
During an investigation, gaining access to the right subscription is crucial because that's where the resources live. Permissions at this level can make or break your ability to fully explore and analyze cloud infrastructure.
It’s also worth noting that subscriptions come with limits; for example, the number of virtual CPUs (vCPUs) might be capped. If a breach involves a resource-heavy virtual machine, you may need to request a limit increase from Microsoft.
https://learn.microsoft.com/en-us/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings?view=o365-worldwide
-------------------------------------------------------------------------------------------------------------
Azure Resource Manager: The Conductor of the Cloud
Before diving into specifics like virtual networks or storage, it's essential to understand Azure Resource Manager (ARM). Think of ARM as the brain behind all deployments in Azure. It provides a management layer, handling the creation, updating, and deletion of resources.
One of ARM's strengths is that it takes input from various interfaces (Azure Portal, PowerShell, CLI, or even REST APIs) and ensures consistency across them. It’s especially useful during a forensic investigation because you can use any of these tools to explore resource configurations or query logs.
ARM also supports templates, written in JSON, that allow resources to be deployed consistently. These templates serve as a record of how resources were deployed and configured, offering valuable information during an investigation. For example, if a misconfigured virtual machine was deployed using an ARM template, you could identify that exact misconfiguration and track how it might have contributed to a breach.
https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/overview
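One practical way to review those deployment settings is to export the current template of a resource group. A minimal sketch using the Az.Resources module; the resource group name and output path are placeholders:

# Export the current ARM template of a resource group for offline review
Export-AzResourceGroup -ResourceGroupName "victim-rg" -Path "C:\cases\victim-rg-template.json"

The exported JSON reflects the current state of the resources, which you can compare against known-good templates or change records.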
-------------------------------------------------------------------------------------------------------------
Why Resource Groups Matter for Incident Response
From an incident response and forensic investigation perspective, understanding resource groups is essential. Often, the resources involved in an attack or breach will be grouped together under a specific resource group, allowing you to track and manage them collectively. For example:
Ease of Management: If an attacker compromises several virtual machines within a resource group, you can manage, update, or even delete all the compromised resources in one go by targeting the resource group.
Access Control: Role-based access control (RBAC) can be set at the resource group level. This means that permissions for an entire group of resources can be managed centrally, making it easier to ensure that only authorized users have access.
However, one potential challenge is that during investigations, you might only be granted access to a specific resource group rather than the entire subscription. While this can be helpful for isolating resources, it limits your view of the full Azure environment. If you're only granted permissions for one resource group, you could miss key elements or additional compromised resources in other parts of the subscription. Always aim to request higher-level permissions for a complete view during an investigation.
-------------------------------------------------------------------------------------------------------------
Azure Resource Providers: The Backend Support
Each resource in Azure is managed by a resource provider, which is a service responsible for provisioning, managing, and configuring the resources. For example:
To deploy a virtual machine, Azure uses the Microsoft.Compute resource provider.
For a storage account, the Microsoft.Storage resource provider is used.
When performing investigations or responding to incidents, you won't directly interact with resource providers most of the time. However, understanding that they operate in the background helps you track what services are involved when examining Azure Resource Manager (ARM) templates or logs.
-------------------------------------------------------------------------------------------------------------
Key Azure Services for Incident Response and Forensics
For forensic investigations and incident response, there are certain Azure products you’re likely to interact with the most:
Identity and Access Management:
Azure Active Directory (AAD)/Entra ID: Controls identity and access management, a key area to investigate when tracking how a threat actor gained access to a compromised account or service.
Networking:
Virtual Networks (VNet): Helps isolate resources and control network traffic.
Network Security Groups (NSGs): Filters network traffic, which can help track network traffic anomalies during an incident.
Compute:
Virtual Machines (VMs): Key investigation targets in cases of compromised systems. Both Linux and Windows VMs are supported.
Azure Functions: Provides compute-on-demand and could be abused by attackers for running scripts in a serverless environment.
Storage:
Disk Storage: Persistent storage for VMs. Investigators might need to examine disk snapshots or backups to analyze compromised systems.
Blob Storage: REST-based object storage for storing unstructured data, which can be a target for data exfiltration.
Storage Explorer: A graphical tool for viewing and interacting with Azure storage resources, useful for accessing storage data during investigations.
Analytics:
Log Analytics: Allows you to collect and search through logs, essential for tracking suspicious activity across resources.
Azure Sentinel: A cloud-native SIEM (Security Information and Event Management) platform, which aggregates data from across the environment and uses intelligent analytics to identify and respond to potential threats.
https://azure.microsoft.com/en-us/products/
-------------------------------------------------------------------------------------------------------------
Resource Identification in Azure: Understanding Resource IDs
Azure resources are uniquely identified using a Universal Resource Identifier (URI). This format helps trace individual resources and track their relationships within the Azure environment, which is critical during incident response. A typical resource URI follows this structure:
/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/<providerName>/<resourceType>/<resourceName>
SubscriptionId: The globally unique identifier for the subscription.
resourceGroups: The user-generated name of the resource group.
providerName: The resource provider responsible for managing that resource (e.g., Microsoft.Compute for VMs).
resourceType: The type of resource (e.g., virtualMachines).
resourceName: The specific name of the resource.
For example, in the case of a virtual machine named "MiningVM", the resource ID might include URIs for the VM itself, the operating system (OS) disk, the network interface, and even a public IP address (if assigned). Investigators can use these URIs to track and manage each component of a compromised resource.
-------------------------------------------------------------------------------------------------------------
Investigating Identity and Access: Role-Based Access Control (RBAC)
Azure's Role-Based Access Control (RBAC) is like a security guard at the gates of every resource. It defines who has access to what and what they can do with it: read, write, or delete. During an investigation, understanding RBAC is critical because you'll need to know who had access to a compromised resource and whether their access was appropriate.
For instance, each resource in Azure has a scope, which could be at the level of a management group, subscription, or resource group. A role assignment defines who (user or service account) can do what (role definition) within that scope. The most common roles are Owner, Contributor, and Reader, but custom roles can be created as well.
Imagine you're looking into an incident where sensitive data was leaked from a storage account. By examining RBAC, you might discover that a developer had unnecessary write access to the account, or that a third-party contractor was given too much control over key resources.
https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles
https://learn.microsoft.com/en-us/azure/role-based-access-control/overview
-------------------------------------------------------------------------------------------------------------
Real-World Example: Tracing an Azure Security Breach
Let's put it all together with a simple example. Suppose a virtual machine (VM) in your Azure environment was hacked. You start your investigation by looking into the subscription where the VM resides.
First, you check Azure Resource Manager to view the deployment history of the VM. By examining the ARM template, you see that the VM was configured with an outdated operating system, which may have been the entry point for the attacker.
Next, you use RBAC to review who had access to the resource group containing the VM. You discover that a former employee still had Owner access, which allowed them to modify settings and potentially introduce vulnerabilities.
Finally, you dive into Log Analytics to trace the attacker's movements through the VM's logs, giving you a clear picture of how the breach occurred.
-------------------------------------------------------------------------------------------------------------
When it comes to managing user access in Microsoft Azure, especially during investigations, things can get complicated quickly. Azure uses Role-Based Access Control (RBAC), which defines who has access to what resources and what they can do with those resources. The challenge comes when a user's permissions are scattered across multiple subscriptions and resource groups. Administrators often need to enumerate role assignments to fully understand a user's level of access.
Here’s how that can be achieved using Azure's tools.
Listing User Role Assignments: Azure CLI and PowerShell
The Azure CLI and PowerShell provide the most efficient ways to list user role assignments across different levels of Azure resources.
Using Azure CLI to List Role Assignments
The Azure CLI allows you to enumerate role assignments by issuing a command to list all the roles a user has across resources. The steps are:
Select the appropriate subscription: First, make sure you've selected the subscription that holds the resources you're investigating:
az account set --subscription "subscription_name_or_id"
List role assignments: Use the az role assignment list command to list all role assignments for a specific user within that subscription. The key parameters here are --all to search recursively and --assignee to specify the user.
az role assignment list --all --assignee "user_email_or_id"
This will list the user's roles at both the subscription and resource group levels. If they have owner-level access to a specific resource group but no broader subscription access, this command will reveal that.
Using PowerShell to List Role Assignments
Similarly, you can achieve the same results using PowerShell with the Get-AzRoleAssignment command.
Install and set up Azure PowerShell: If you haven't already, install the Azure PowerShell module:
Install-Module -Name Az -AllowClobber
Authenticate and select the subscription: Authenticate with your Azure account and choose the correct subscription.
Connect-AzAccount
Select-AzSubscription -SubscriptionId "subscription_id"
List role assignments for the user: Use the following command to list role assignments:
Get-AzRoleAssignment -ObjectId (Get-AzADUser -UserPrincipalName "user_email").Id
This will return all roles assigned to the user, including those at the subscription or resource group level.
Why This Matters for Investigations
In cases where a security incident or breach is being investigated, it's critical to understand who had access to what. For example, a user might not have direct access to a subscription but could hold Owner permissions at a specific resource group or even an individual resource level, which could lead to security loopholes.
If the user has elevated permissions, such as Owner or Contributor, on critical resources, this could be an entry point for an attacker to escalate their control over the environment. Listing all role assignments helps pinpoint misconfigurations or excessive access that might have been leveraged during an attack.
-------------------------------------------------------------------------------------------------------------
MITRE ATT&CK® and Azure: Understanding Threat Actor Behavior
The MITRE ATT&CK® framework provides an extensive matrix of tactics and techniques that threat actors commonly use when attacking cloud platforms like Azure. For instance, attackers frequently aim to:
Obtain and verify credentials: Attackers often exploit legacy protocols like IMAP, which lack strong security measures. Enforcing multi-factor authentication (MFA) and disabling legacy protocols are essential to mitigate these risks.
Exfiltrate data via storage accounts: Attackers might abuse Azure's Blob Storage or use the Microsoft Graph API to access and extract sensitive information.
The MITRE ATT&CK framework has detailed mappings for Office 365, Azure AD, and other Azure services, which makes it easier to correlate specific threat tactics with your security controls.
Microsoft has even mapped its built-in Azure security controls against MITRE ATT&CK to create a library of 48 potential defenses. You can explore the mappings in MITRE ATT&CK for Cloud and Azure Security Controls Mapped to MITRE.
-------------------------------------------------------------------------------------------------------------
Accessing Azure: CLI, Portal, PowerShell, and Graph API
There are four primary ways to interact with Azure during your investigation or daily operations:
Azure Portal: The graphical interface for viewing and managing Azure resources.
Azure CLI: A command-line interface for automating resource management.
PowerShell: Ideal for Windows users who prefer scripting in PowerShell to manage Azure.
Microsoft Graph API: A RESTful API that allows programmatic access to Azure services, providing deep integration into apps and custom tools.
The Azure CLI and PowerShell options are especially important for large-scale environments where running commands on the fly is necessary to quickly retrieve information. Cloud Shell, a terminal within the Azure Portal, also provides access to these tools without needing local installations.
https://learn.microsoft.com/en-us/cli/azure/what-is-azure-cli
https://learn.microsoft.com/en-us/cli/azure/install-azure-cli
https://learn.microsoft.com/en-us/azure/cloud-shell/overview
https://learn.microsoft.com/en-us/azure/cloud-shell/get-started/classic?tabs=azurecli
Investigating Cloud Shell: Bash vs PowerShell Artifacts
An interesting point to consider during an investigation is whether the attacker used Cloud Shell for their activities. When a user initiates a Cloud Shell session, a storage account is automatically created to store the environment. If Bash was used, traditional Linux forensics can be applied, such as analyzing the .bash_history file to see the commands issued by the user.
However, there's a limitation: PowerShell Cloud Shell leaves fewer artifacts. While the underlying actions will still be logged (e.g., through Azure Audit Logs), direct forensics from PowerShell Cloud Shell is limited.
https://learn.microsoft.com/en-us/azure/cloud-shell/get-started/classic?tabs=azurecli
Conclusion
Effectively managing and investigating user access in Azure requires understanding the nuances of role assignments across different subscriptions and resources. Tools like Azure CLI and PowerShell make it easier to enumerate these roles, while frameworks like MITRE ATT&CK® provide insight into threat actor behavior in cloud environments. The right combination of access control, security controls, and investigative tools can significantly enhance your incident response capabilities in Azure.
Akash Patel
----------------------------------------------------------------------------------------------------------
Special Thanks (Iqra)
I would like to extend my heartfelt gratitude to one of my dearest colleagues, a Microsoft Certified Trainer, for her invaluable assistance in creating these articles. Without her support, this would not have been possible. Thank you so much for your time, expertise, and dedication!
https://www.linkedin.com/in/iqrabintishafi/
-------------------------------------------------------------------------------------------------------------
