- Microsoft 365 Security: Understanding Built-in Detection Mechanisms and Investigating Log Events
As the landscape of cybersecurity threats evolves, protecting sensitive information stored within enterprise platforms like Microsoft 365 (M365) has become a top priority for IT and security teams. To help organizations identify and mitigate these risks, Microsoft provides a range of built-in detection mechanisms based on user activity and sign-in behavior analysis. While these tools can offer significant insights, it's important to understand their limitations, potential false positives, and how to effectively investigate suspicious events.

-------------------------------------------------------------------------------------------------------------

Built-In Reports: Monitoring Risky Activity

Microsoft 365's built-in reporting suite provides several out-of-the-box detection features that monitor risky user behavior and sign-in activity. These include:

Risky sign-ins: Sign-ins flagged as risky due to factors like unusual IP addresses, impossible travel, or logins from unfamiliar devices.
Risky users: User accounts exhibiting abnormal behavior, such as frequent failed login attempts or multiple sign-ins from different geographies.
Risk detections: A general term referring to any identified behavior or event that deviates from normal patterns and triggers a system alert.

These alerts are largely powered by machine learning and heuristic algorithms that analyze stored log data to identify abnormal behavior patterns. The system is designed to recognize potential security risks, but it does have some caveats.

-------------------------------------------------------------------------------------------------------------

Built-In Risk Detection: Delays and False Positives

One of the most important things to understand about Microsoft's risk detection mechanisms is that they are not instantaneous. Alerts can take up to 8 hours to be generated, meaning there is a delay between the detection of a suspicious event and the time it takes for the alert to surface. This delay is designed to allow the system to analyze events over time and avoid triggering unnecessary alerts, but it also means that organizations may not be alerted to security incidents immediately.

Another challenge is that these alerts can sometimes generate false positives. A common example is the geolocation module and its associated "impossible travel" alert. This is triggered when a user signs in from two geographically distant locations within a short time, which would be impossible under normal circumstances. However, the issue often arises from incorrect IP location data, such as when users connect to the internet via hotel networks, airplane Wi-Fi, or mobile carriers. For instance, if a user switches from airplane internet to airport Wi-Fi, the system may mistakenly flag it as an impossible travel scenario, even though the user hasn't changed locations.

Managing False Positives

Because these false positives can clutter security dashboards, it's important for IT teams to review and refine their alerting thresholds. Regular tuning of the system and awareness of typical user behaviors, such as frequent travelers, can help minimize the noise created by these alerts and focus on genuine threats.

-------------------------------------------------------------------------------------------------------------

Investigating and Profiling Logons

When a suspicious event is detected, one of the first steps in investigating the issue is analyzing logon data.
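In an Exchange Online tenant, that logon data can be pulled straight from the Unified Audit Log (covered next) with the ExchangeOnlineManagement module. A minimal sketch; the account name and date window are placeholders to adapt:

# Pull a week of successful and failed logons for one user from the UAL
Connect-ExchangeOnline
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) -UserIds "user@contoso.com" -Operations "UserLoggedIn","UserLoginFailed" -ResultSize 1000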
Microsoft's Unified Audit Logs (UAL) track over 100 types of events, including both successful and unsuccessful login attempts. Here are some key strategies for analyzing logons and identifying potential security breaches:

Tracking Successful Logins

Every successful login generates a UserLoggedIn event, which includes valuable information such as the source IP address. Investigators can use this data to identify unusual logon behavior, such as logins from unexpected geographical locations or times. Temporal or geographic outliers, such as a login from a country the user has never visited, can be red flags that warrant further investigation.

Additionally, a pattern of failed logon attempts (logged as UserLoginFailed events) followed by a successful login from a different or suspicious IP address may suggest that an attacker was trying to brute-force or guess the user's password before successfully logging in.

Investigating Brute-Force Attacks

Brute-force attacks, where an attacker attempts to gain access by repeatedly guessing the user's credentials, leave distinctive traces in the log data. One common sign of a brute-force attack is when a user gets locked out of their account after multiple failed login attempts. In this case, you would see a sequence of UserLoginFailed events followed by an "IdsLocked" event, indicating that the account was temporarily disabled due to too many failed attempts.

Further, even if the user account doesn't exist, the system will log the attempt with the term UserKey="Not Available", which can help identify instances of user enumeration, a technique used by attackers to discover valid usernames by testing different variations.

-------------------------------------------------------------------------------------------------------------

Investigating MFA-Related Events

When multi-factor authentication (MFA) is enabled, additional events are logged during the authentication process. For example:

UserStrongAuthClientAuthNRequired: Logged when a user successfully enters their username and password but is then prompted to complete MFA.
UserStrongAuthClientAuthNRequiredInterrupt: Logged if the user cancels the login attempt after being asked for the MFA token.

These events are particularly useful in detecting attempts by attackers to bypass MFA. If you notice a sudden increase in UserStrongAuthClientAuthNRequiredInterrupt events, it could indicate that attackers have obtained passwords from a compromised database and are testing accounts to find those without MFA enabled.

-------------------------------------------------------------------------------------------------------------

Investigating Mailbox Access and Delegation

Attackers who gain access to a Microsoft 365 environment often target email accounts, particularly those of key personnel. Once inside, they may attempt to read emails or set up forwarding rules to siphon off sensitive information.

One tactic is to use delegate access, where one account is granted permission to access another user's mailbox. Delegate access is logged in the UAL, and reviewing these logs can reveal when permissions are assigned or when a delegated mailbox is accessed. In addition, organizations should regularly audit their user lists to check for unauthorized accounts that may have been created by attackers. In many cases, such unauthorized users are only discovered during license reviews.

Another avenue for attackers is server-side forwarding, which can be set up through either a Transport Rule or an Inbox Rule.
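Both rule types can be enumerated with the ExchangeOnlineManagement module; a quick review sketch (the mailbox address is a placeholder, and the selected properties are the common forwarding-related ones):

# List tenant-wide transport rules and one user's inbox rules, surfacing forwarding targets
Get-TransportRule | Select-Object Name, RedirectMessageTo, BlindCopyTo
Get-InboxRule -Mailbox "user@contoso.com" | Select-Object Name, Enabled, ForwardTo, RedirectTo, ForwardAsAttachmentTo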
These forwarding mechanisms can be used to exfiltrate data, so security teams should regularly review the organization's forwarding rules to ensure no unauthorized forwarding is taking place.

-------------------------------------------------------------------------------------------------------------

External Applications and Consent Monitoring

Microsoft 365 users can grant third-party applications access to their accounts, which poses a potential security risk. Once access is granted, the application doesn't need further permission to interact with the account. Monitoring for the "Consent to application" event can help organizations detect when external applications are being granted access, particularly if the organization doesn't typically use external apps. This was a factor in the well-documented SANS breach in 2020, where attackers exploited third-party app permissions to gain access to a user's mailbox. https://www.sans.org/blog/sans-data-incident-2020-indicators-of-compromise/

-------------------------------------------------------------------------------------------------------------

Conclusion

While Microsoft 365 offers powerful built-in tools for detecting risky behavior and investigating suspicious logon events, security teams must be aware of their limitations, particularly the potential for false positives and delayed alerts. By regularly reviewing log data, investigating unusual patterns, and keeping an eye on key events like failed login attempts, MFA interruptions, and delegation changes, organizations can better protect their environments against evolving threats. The key to effective security monitoring is a proactive approach, combining automated detection with human analysis to sift through the noise and focus on genuine risks.

Akash Patel
- Streamlining Cloud Log Analysis with Free Tools: Microsoft-Extractor-Suite and Microsoft-Analyzer-Suite
When it comes to investigating cloud environments, having the right tools can save a lot of time and effort. Today, I'll introduce two free, powerful tools that are absolutely fantastic for log analysis within the Microsoft cloud ecosystem: Microsoft-Extractor-Suite and Microsoft-Analyzer-Suite. These tools are easy to use, flexible, and can produce output in accessible formats like CSV and Excel, making them excellent resources for investigating business email compromises, cloud environment audits, and more.

About Microsoft-Extractor-Suite

The Microsoft-Extractor-Suite is an actively maintained PowerShell tool designed to streamline data collection from Microsoft environments, including Microsoft 365 and Azure. This toolkit provides a convenient way to gather logs and other key information for forensic analysis and cybersecurity investigations.

Supported Microsoft Data Sources

Microsoft-Extractor-Suite can pull data from numerous sources, including:

Unified Audit Log
Admin Audit Log
Mailbox Audit Log
Azure AD Sign-In Logs
Azure Activity Logs
Conditional Access Policies
MFA Status for Users
Registered OAuth Applications

This range allows investigators to get a comprehensive picture of what's happening across an organization's cloud resources.

----------------------------------------------------------------------------------------------------------

Installation and Setup

To get started, you'll need to install the tool and its dependencies. Here's a step-by-step guide:

Install Microsoft-Extractor-Suite:
Install-Module -Name Microsoft-Extractor-Suite

Install the PowerShell module Microsoft.Graph (for Graph API Beta functionalities):
Install-Module -Name Microsoft.Graph

Install ExchangeOnlineManagement (for Microsoft 365 functionalities):
Install-Module -Name ExchangeOnlineManagement

Install the Az module (for Azure Activity log functionality):
Install-Module -Name Az

Install the AzureADPreview module (for Azure Active Directory functionalities):
Install-Module -Name AzureADPreview

Once the modules are installed, you can import them using:
Import-Module .\Microsoft-Extractor-Suite.psd1

----------------------------------------------------------------------------------------------------------

Note: You will need to sign in to Microsoft 365 or Azure with appropriate permissions (admin-level access, including a P1 or higher access level, or an E3/E5 license) before using Microsoft-Extractor-Suite functions.

----------------------------------------------------------------------------------------------------------

Getting Started

First, connect to your Microsoft 365 and Azure environments:

Connect-M365
Connect-Azure
Connect-AzureAZ

From here, you can specify start and end dates, user details, and other parameters to narrow down which logs to collect. The tool captures output in Excel format by default, stored in a designated output folder.

Link: https://microsoft-365-extractor-suite.readthedocs.io/en/latest/

----------------------------------------------------------------------------------------------------------

Example logs I collected:

One drawback to keep in mind is that logs are collected one at a time: for example, you first collect the MFA logs, then run another command to collect the user logs. Another thing to keep in mind is that if you do not provide an output path, the output is captured in the default folder where the script resides.

----------------------------------------------------------------------------------------------------------
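To narrow what a collection function gathers, date and user parameters can be passed in. A sketch of a scoped pull; the -StartDate, -EndDate, and -UserIds parameter names follow the suite's documentation, so verify them against your installed version:

# Collect 30 days of Unified Audit Log entries for a single account
Connect-M365
Get-UALAll -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) -UserIds "user@contoso.com"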
You might ask: why two different suites? The answer is that there is a companion project named Microsoft-Analyzer-Suite, developed by evild3ad. This suite offers a collection of PowerShell scripts specifically designed for analyzing Microsoft 365 and Microsoft Entra ID data extracted with the Microsoft-Extractor-Suite. The analyses currently supported by Microsoft-Analyzer-Suite are listed on the project page.

Link: https://github.com/evild3ad/Microsoft-Analyzer-Suite

----------------------------------------------------------------------------------------------------------

Before I start, here is the folder structure of both tools:

Microsoft-Extractor-Suite
Microsoft-Analyzer-Suite-main

The Analyzer-Suite also lets you add specific IP addresses, ASNs, or applications to a whitelist by editing the whitelist folder in the Microsoft-Analyzer-Suite directory.

------------------------------------------------------------------------------------------------------------

Let's start. I will show two logs being collected and analyzed: one is the Message Trace log, the other the Unified Audit Log. Both are collected with the Microsoft-Extractor-Suite, and then analyzed with the Microsoft-Analyzer-Suite.

Collecting Logs with Microsoft-Extractor-Suite

Now, let's go over collecting logs. Here's an example command to retrieve the Unified Audit Log entries for the past 90 days for all users:

Get-UALAll

After running this, the tool will output data in Excel format to a default folder. However, you may need to combine the multiple Excel files into one .csv file, because the Analyzer-Suite scripts only accept .csv input.

------------------------------------------------------------------------------------------------------------

Combining CSV Files into One Excel File

When working with large data sets, it's more efficient to combine multiple log files into a single file. Here's how to do this in Excel (a scripted alternative appears just before the analyzer run below):

Place all relevant CSV files in a single folder.
Open a new Excel spreadsheet and navigate to Data > Get Data > From File > From Folder.
Select the folder containing your CSV files and click "Open".
From the Combine drop-down, choose Combine & Transform Data. This option loads your files into the Power Query Editor, where you can manipulate and arrange the data.
In the Power Query Editor, click OK to load your combined data.
Edit any column formats, apply filters, or sort the data as needed.
Once done, go to Home > Close & Load.

To ensure compatibility with Microsoft-Analyzer-Suite, save the resulting file as a .csv.

Using Microsoft-Analyzer-Suite for Log Analysis

With your data collected and organized, it's time to analyze it with Microsoft-Analyzer-Suite.

UAL-Analyzer.ps1

Before using the UAL-Analyzer.ps1 script, make sure the following dependencies are installed:

An IPinfo account (it's free): https://ipinfo.io/signup?ref=cli
ImportExcel for Excel file handling (PowerShell module): Install-Module -Name ImportExcel (https://github.com/dfinke/ImportExcel)
IPinfo CLI (standalone binary): https://github.com/ipinfo/cli
xsv (standalone binary): https://github.com/BurntSushi/xsv

To install xsv: since I had WSL, I used git clone https://github.com/BurntSushi/xsv.git, but you can download the folder in whatever way you are comfortable with.

Once the dependencies are set up, configure your IPinfo token by pasting it into the UAL-Analyzer script. To locate it, open UAL-Analyzer.ps1 with a text editor like Notepad++, search for the token variable, and paste your token there.
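If you'd rather skip the Excel steps above, the same merge can be scripted in PowerShell; a minimal sketch (the folder paths are examples):

# Merge every exported CSV into a single file for the Analyzer-Suite
Get-ChildItem "C:\Extractor-Output" -Filter *.csv |
    ForEach-Object { Import-Csv $_.FullName } |
    Export-Csv "C:\Extractor-Output\CombinedUALLog.csv" -NoTypeInformation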
Running the Analysis Script

For Unified Audit Logs, use the UAL-Analyzer script. For example:

.\UAL-Analyzer.ps1 "C:\Path\To\Your\CombinedUALLog.csv" -output "C:\Path\To\Output\"

Once the script has run successfully and the output has been collected, you will get a pop-up.

------------------------------------------------------------------------------------------------------------

Let's check the output. The results are produced in both CSV and XLSX formats. Why the same output twice? Because the XLSX version is colour-coded: if something suspicious is found, it is highlighted automatically, whereas the CSV version carries no highlighting. The output also includes a Suspicious Operation folder (see the original post's screenshots for examples of the XLSX and CSV output).

A kind note: these scripts are still being updated and modified, so if you check GitHub you may find a newer version that works even better. The current version produced this output for me and made things easy; I hope it does for you as well.

------------------------------------------------------------------------------------------------------------

The second log we are going to talk about is the Message Trace log.

Command (this will collect all logs):

Get-MessageTraceLog

The next step is to combine all the Excel output into one file (.csv format). Once done, run the MTL-Analyzer script:

.\MTL-Analyzer.ps1 "C:\Path\To\Your\CombinedMTLLog.csv" -output "C:\Path\To\Output\"

(Make sure to add your token details inside the script before running it.)

Conclusion

By combining Microsoft-Extractor-Suite and Microsoft-Analyzer-Suite, you can effectively streamline log collection and analysis across Microsoft 365 and Azure environments. While each suite has its own focus, together they provide an invaluable resource for incident response and cybersecurity. Now that you have the steps, you can test and run the process on your own logs. I hope this guide makes things easier for you! See you, and take care!

Akash Patel
- Streamlining Office/Microsoft 365 Log Acquisition: Tools, Scripts, and Best Practices
When conducting investigations, having access to Unified Audit Logs (UALs) from Microsoft 365 (M365) environments is crucial. These logs help investigators trace activities within an organization, covering everything from user login attempts to changes made in Azure Active Directory (AD) and Exchange Online.

There are two primary ways for investigators to search and filter through UALs:

Via the Microsoft 365 web interface, for basic investigation.
Using ready-made script frameworks, to automate data acquisition and conduct more in-depth, offline analysis.

While the M365 interface is helpful for small-scale operations, using PowerShell scripts or specialized tools can save a lot of time in larger investigations. This article will walk you through the process of acquiring Office 365 logs, setting up acquisition accounts, and leveraging open-source tools to make investigations more efficient.

---------------------------------------------------------------------------------------------------------

Setting Up a User Account for Log Acquisition

To extract logs for analysis, you need to set up a special user account in M365 with specific permissions that provide access to both Azure AD and Exchange-related information. This process requires setting up roles in both the Microsoft 365 Admin Center and the Exchange Admin Center.

Step 1: Create an Acquisition Account in M365 Admin Center

Go to the M365 Admin Center.
Create a new user account.
Assign the Global Reader role to the account. This role grants access to Unified Audit Logs (UALs).

Step 2: Set Up Exchange Permissions

Next, you'll need to set up permissions in the Exchange Admin Center:

Go to the Exchange Admin Center and create a new group.
Assign the Audit Log permission to the group. This role allows access to audit logs for Exchange activities.
Add the user you created in the M365 Admin Center to this group.

Now that the account has the necessary permissions, you are ready to acquire logs from Microsoft 365 for your investigation.

Note: If it becomes possible in the future, I will write a detailed blog on how to set up the account and collect logs manually.

---------------------------------------------------------------------------------------------------------

Automation: Using Ready-Made Acquisition Scripts

Several pre-built scripts make the process of acquiring Unified Audit Logs (UALs) and other cloud-based logs easier, especially when conducting large-scale investigations. Below are two of the most widely used frameworks:

1. DFIR-O365RC (Developed by ANSSI)

DFIR-O365RC is a powerful PowerShell-based tool developed by ANSSI, the French governmental cybersecurity agency. This tool is designed to extract UAL data and integrate with Azure APIs to provide a more comprehensive view of the data.

Key Features:
Access to both the UAL and multiple Azure APIs, allowing for more enriched data acquisition.
The tool is somewhat complex, but the GitHub page provides guidance on setup and usage.

Usage: Once you set up the Global Reader account and Audit Log permissions, you can use DFIR-O365RC to automate the extraction of logs. The tool provides a holistic view of available data, including enriched details from Azure AD and Exchange.

Reference: DFIR-O365RC GitHub Page
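These frameworks essentially automate, and page around the limits of, what you could do by hand with the ExchangeOnlineManagement module. For contrast, a minimal manual pull might look like the sketch below; the session name and account are placeholders, and the SessionCommand parameter is what lets Search-UnifiedAuditLog page beyond its 5,000-row per-call limit:

# Manual UAL acquisition: one page of up to 5,000 records;
# re-run with the same SessionId to retrieve subsequent pages
Connect-ExchangeOnline -UserPrincipalName "acquisition@contoso.com"
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) -SessionId "IR-case-001" -SessionCommand ReturnLargeSet -ResultSize 5000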
2. Office-365-Extractor (Developed by PwC Incident Response Team)

Another useful tool is Office-365-Extractor, developed by PwC's incident response team. This tool includes functional filters that let investigators fine-tune their extraction depending on the type of investigation they are running.

Key Features:
Functional filters for tailoring data extraction to specific investigation needs.
Complements PwC's Business Email Compromise (BEC) investigation guide, which offers detailed instructions on analyzing email compromises in Office 365 environments.

Usage: Investigators can quickly set up the tool and begin filtering logs by specific criteria like user activity, mailbox access, or login attempts.

References: Office-365-Extractor GitHub Page; Business Email Compromise Guide

Both DFIR-O365RC and Office-365-Extractor provide a more streamlined approach for handling larger volumes of data, making it easier to manage in-depth investigations without running into the limitations of the Microsoft UI.

---------------------------------------------------------------------------------------------------------

The Tool I Prefer: Microsoft Extractor Suite, Another Cloud-Based Log Acquisition Tool

In addition to the tools mentioned above, there is another robust tool known as the Microsoft Extractor Suite. It is considered one of the best options for cloud-based log analysis and acquisition. Though we won't dive into full details in this article, it's worth noting that this tool is highly recommended for investigators dealing with larger or more complex environments.

---------------------------------------------------------------------------------------------------------

Why Automated Tools Are Crucial for Large-Scale Investigations

While the M365 UI is convenient for smaller investigations, its limitations become apparent during large-scale data acquisitions. Automated scripts not only save time but also allow for more thorough and efficient data collection. These tools can help investigators get around the API export limitations, ensuring that no critical data is missed.

Additionally, data science methodologies can be applied to the collected logs to uncover patterns, trends, or anomalies that might otherwise go unnoticed in manual analysis. As cloud-based environments continue to grow in complexity, leveraging these automation frameworks becomes increasingly essential for effective incident response.

---------------------------------------------------------------------------------------------------------

Final Thoughts and Next Steps

In conclusion, the combination of the Microsoft 365 Admin Center, the Exchange Admin Center, and automated tools like DFIR-O365RC and Office-365-Extractor provides investigators with a powerful framework for extracting and analyzing Office 365 logs. Setting up the right user accounts with appropriate roles is the first step, followed by leveraging these scripts to automate the process, ensuring no data is overlooked.

Stay tuned for a detailed guide on the Microsoft Extractor Suite, which we'll cover in an upcoming blog post. Until then, happy investigating!

Akash Patel
- M365 Logging: A Guide for Incident Responders
When it comes to Software as a Service (SaaS), defenders heavily rely on the logs and information provided by the vendor. For Microsoft 365 (M365), the logging capabilities are robust, often exceeding what incident responders typically find in on-premises environments. At the heart of M365's logging system is the Unified Audit Log (UAL), which captures over 100 different activities across most of the SaaS products.

What You Get: Logs and Retention Periods

The type of logs you have access to, and their retention periods, depend on your M365 licensing. While there are options to extend retention by offloading data periodically, obtaining the detailed logs available with higher-tier licenses can be challenging with less expensive options. Another consideration is the limitations Microsoft places on API quotas for offloading and offline analysis. However, there are ways to navigate these restrictions effectively.

Log Retention Table: (Microsoft keeps updating these retention periods, so keep an eye on the official Microsoft documentation.)

Key Logs in M365

Azure AD Sign-in Logs: Most Microsoft services now use Azure Active Directory (AD) for authentication. In this context, the Azure AD sign-in logs can be compared to the 4624 and 4625 event logs on on-premises domain controllers. A unique aspect of these logs is that most authentication requests originate from the internet through publicly exposed services. This allows for additional detection methods based on geolocation data. The information gathered here is also ideal for time-based pattern analysis, enabling defenders to track unusual login behaviors.

Unified Audit Log (UAL): The UAL is a treasure trove of activity data available to all paid enterprise licenses. The level of detail varies by licensing tier, and Microsoft occasionally updates what each package includes. Unlike typical Windows logs, where a significant percentage may be irrelevant to incident response, the UAL is designed for investigations, with almost all logged events being useful for tracing activities.

Investigation Categories

To help incident responders leverage the UAL effectively, we categorize investigations into three types: user-based, group-based, and application-based investigations. Each category will include common scenarios and relevant search terms.

1. User-Based Investigations

These investigations focus on user objects within Azure AD. Key activities include:

Tracking User Changes: Understand what updates have been made to user profiles, including privilege changes and password resets.
Auditing Admin Actions: Log any administrative actions taken in the directory, which is crucial for accountability.

Typical Questions:
What types of updates have been applied to users?
How many users were changed recently?
How many passwords have been reset?
What actions have administrators performed in the directory?

2. Group-Based Investigations

Group-based investigations are closely related to user investigations since permissions in Azure AD often hinge on group memberships. Monitoring groups is vital for security.

Group Monitoring: Track newly added groups and any changes in memberships, especially for high-privilege groups.

Typical Questions:
What new groups have been created?
Are there any groups with recent membership changes?
Have the owners of critical groups been altered?
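Questions like these translate directly into UAL searches. A sketch using the ExchangeOnlineManagement module; the operation strings shown are common Azure AD audit operation names recorded in the UAL, but verify the exact spellings in your tenant:

# Directory changes over the last 7 days: user updates, password resets, group membership changes
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) -RecordType AzureActiveDirectory -Operations "Update user.","Reset user password.","Add member to group." -ResultSize 1000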
3. Application-Based Investigations

Application logs can vary significantly depending on the services in use. One critical area to investigate is application consent, which can highlight potential breaches if an attacker gains access through an Azure application.

Typical Questions:
What applications have been added or updated recently?
Which applications have been removed?
Has there been any change to a service principal for an application?
Who has given consent to a particular application?

4. Azure AD Provision Logs

Azure AD Provision logs are generated when integrating third-party services like ServiceNow or Workday with Azure AD. These services often facilitate employee-related workflows that need to connect with the user database.

Workflow Monitoring: For instance, during employee onboarding in Workday, the integration may involve creating user accounts and assigning them to appropriate groups, all of which is logged in the Azure AD Provision logs.

Typical Questions:
What groups have been created in ServiceNow?
Which users were successfully removed from Adobe?
What users from Workday were added to Active Directory?

Leveraging Microsoft Defender for Cloud Apps

Microsoft Defender for Cloud Apps can be an invaluable tool during investigations, provided it is correctly integrated with your cloud applications. By accessing usage data, defenders can filter out certain user agents and narrow down the actions of an attacker. For more information, refer to the Microsoft Defender for Cloud Apps announcement.

Conclusion

Understanding and effectively utilizing the logging capabilities of M365, particularly the Unified Audit Log and other related logs, can significantly enhance your incident response efforts. By focusing on user, group, and application activities, defenders can gain valuable insights into potential security incidents and make informed decisions to bolster their security posture.

Akash Patel
- Microsoft Cloud Services: Focus on Microsoft 365 and Azure
Cloud Providers in Focus: Microsoft and Amazon In today’s cloud market, Microsoft and Amazon are the two biggest players, with each offering a variety of services. Microsoft provides solutions across all three categories—Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) . Amazon, on the other hand, focuses heavily on IaaS and PaaS, with limited SaaS offerings . For investigative purposes, the focus with Amazon is usually on IaaS and PaaS components, while Microsoft’s extensive suite of cloud services demands a closer look into Microsoft 365 (M365) and Azure. Microsoft 365 (M365): A Successor to Office 365 Microsoft 365, previously known as Office 365, is a comprehensive cloud-based suite that offers both SaaS and on-premises tools to businesses. Licensing within Microsoft 365 can get quite complicated, especially when viewed from a security and forensics perspective. The impact of licensing on forensic investigations is significant, as it determines the extent of data and log access. Understanding M365 Licensing M365 licenses range from Business Basic to Business Premium , with Enterprise tiers referred to as E1, E3, and E5 : Business Basic : Provides cloud access to Exchange, Teams, SharePoint, and OneDrive. Business Standard : Adds access to downloadable Office apps (Word, Excel, etc.) and web-based versions. Business Premium : Adds advanced features like Intune for device management and Microsoft Defender. Enterprise licenses offer more advanced security features, with E3 and E5 providing the highest level of access to security logs and forensic data. In forensic investigations, having access to these higher-tier licenses is essential for capturing a comprehensive view of the environment. Impact on Forensics In an M365 environment, licensing plays a crucial role in how effectively investigators can respond to breaches. In traditional on-premises setups, investigators had access to physical machines for analysis, regardless of license level. However, in cloud settings, access to vital data is often gated by licensing, making high-tier licenses, such as E3 and E5 , invaluable for thorough investigations. Azure: Microsoft’s IaaS with a Hybrid Twist Azure, Microsoft’s IaaS solution, includes PaaS and SaaS components like Azure App Services and Azure Active Directory (Azure AD). It provides customers with virtualized data centers, complete with networking, backup, and security capabilities . The IaaS aspect allows customers to control virtual machines directly, enabling traditional forensic processes such as imaging, memory analysis, and the installation of specialized forensic tools. Azure Active Directory (Azure AD) and Hybrid Setups Azure AD, a critical component for many organizations, provides identity and access management across Microsoft’s cloud services . In hybrid environments, Azure AD integrates with on-premises Active Directory (AD) to support cloud-based services like Exchange Online, ensuring seamless authentication across on-prem and cloud environments. This integration introduces Azure AD Connect , which synchronizes data between on-prem AD and Azure AD. As a result, administrators can manage both environments from Azure, but this also increases exposure to the internet. Unauthorized access to Azure AD credentials could compromise the entire environment, which highlights the need for Multi-Factor Authentication (MFA) . 
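On the synchronization server itself, the ADSync PowerShell module that ships with Azure AD Connect can show what is connected and when synchronization last ran. A quick live-response sketch; cmdlet availability can vary by Azure AD Connect version, so treat this as illustrative:

# Inspect Azure AD Connect synchronization state on the sync server
Import-Module ADSync
Get-ADSyncScheduler                              # sync interval, next cycle, whether sync is enabled
Get-ADSyncConnector | Select-Object Name, Type   # configured on-prem AD and Azure AD connectors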
Key Considerations for Azure AD Connect Azure AD Connect is integral for organizations using both on-prem and cloud-based Active Directory. It relies on three key accounts, each with specific permissions to enhance security and maintain synchronization: AD DS Connector Account : Reads and writes data to and from the on-premises AD. ADSync Service Account : Syncs this data into a SQL database, serving as an intermediary. Azure AD Connector Account : Syncs the SQL database with Azure AD, allowing Azure AD to reflect updates from on-prem AD. These roles are critical for secure synchronization, ensuring that changes in on-premises AD are accurately mirrored in Azure AD. This dual setup requires investigators to examine both infrastructures during an investigation, increasing the complexity of the forensic process. The Role of MFA and Security Risks in Hybrid Environments In hybrid setups, users are accustomed to entering domain credentials on cloud-based platforms, making them vulnerable to phishing attacks. MFA plays a vital role in preventing unauthorized access but is not foolproof. Skilled attackers can bypass MFA through various techniques, such as phishing or SIM swapping , underlining the need for a layered security approach. Microsoft’s Licensing Complexity and Forensics Microsoft’s licensing structure is notorious for its complexity, and this extends to M365. While on-premises systems allowed investigators full access to data regardless of licensing, the cloud imposes limits based on the chosen license tier. This means that E3 and E5 licenses are often necessary for investigators to access the full scope of data logs and security features needed for in-depth analysis. In hybrid environments, these licensing considerations directly impact the data available for forensics. For example, lower-tier licenses may provide limited audit logs, while E5 licenses include advanced logging and alerting features that can make a significant difference in detecting and responding to breaches. Investigative Insights and Final Thoughts For investigators, Microsoft’s cloud services introduce new layers of complexity: Dual Authentication Infrastructures : Hybrid setups mean you’ll need to investigate both on-prem and cloud-based AD systems. MFA Requirements : Securing Azure AD with MFA is crucial, but investigators must be aware of MFA’s limitations and potential bypass methods. High-Tier Licenses for Forensic Access : E3 and E5 licenses unlock advanced security and audit logs that are vital for thorough investigations. In summary, Microsoft 365 and Azure provide powerful tools for businesses but introduce additional challenges for forensic investigators. By understanding the role of licensing, Azure AD synchronization, and MFA, organizations can better prepare for and respond to incidents in their cloud environments. These considerations ensure that forensic investigators have the access they need to effectively secure, investigate, and manage cloud-based infrastructure. Akash Patel
- Forensic Challenges of Cloud-Based Investigations in Large Organizations
Introduction: Cloud-Based Infrastructure and Its Forensic Challenges

Large-scale investigations have a wide array of challenges. One that's increasingly common is navigating the cloud-based infrastructure of large organizations. As more businesses integrate cloud services with on-premises systems like Microsoft Active Directory, attackers can easily move between cloud and on-premises environments—an investigator's nightmare!

Cloud platforms are tightly woven into corporate IT, yet they bring unique considerations for incident response and forensic investigations. A key point to remember is that cloud infrastructure essentially boils down to "someone else's computer." And unfortunately, that "someone else" may not be ready to grant you full forensic access when a breach occurs.

To get into the nitty-gritty of cloud forensics, it's essential to understand the different types of cloud offerings: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Each of these comes with unique access levels and data availability, impacting how effectively we can conduct investigations.

Diving Into Cloud Services: IaaS, PaaS, and SaaS

Let's break down these cloud service types to see how they affect access to forensic data.

1. Infrastructure as a Service (IaaS)

What It Is: In IaaS, cloud providers offer virtual computing resources over the internet. You get to spin up virtual machines and networks, almost like your own data center, except it's hosted by the provider.
Forensic Access: Since customers manage their own operating systems and applications, IaaS provides the most forensic access among cloud service types. Investigators can perform standard incident response techniques, like log analysis and memory captures, much as they would on on-prem systems.
Challenges: The major challenge is the dependency on the provider. Moving away from a provider you've invested in heavily can be a headache. So, it's essential to plan security and forensic readiness from the start.

2. Platform as a Service (PaaS)

What It Is: PaaS bundles the OS with essential software, such as application servers, allowing you to deploy applications without worrying about the underlying infrastructure.
Forensic Access: This setup limits access to the underlying OS, which restricts what investigators can directly analyze. You can access logs and some application data, but full system access is typically off-limits.
Challenges: Because multiple customers often share the infrastructure, in-depth forensics might reveal data belonging to other clients. Therefore, cloud providers rarely allow forensic access to the physical machines in a PaaS setup.

3. Software as a Service (SaaS)

What It Is: SaaS handles everything from the OS up, so the customer only interacts with the software.
Forensic Access: Forensics in a SaaS environment is usually limited to logs, often determined by the service tier (and subscription cost). If a backend compromise occurs, SaaS logs might not give enough data to identify the root cause.
Challenges: This limitation can cause breaches to go unnoticed for extended periods. SaaS providers control everything, so investigators can only work with whatever logs or data the provider makes available.

Cloud-Based Forensics vs. Traditional On-Premises Forensics

With traditional on-premises forensics, investigators have deep access to various system components.
They can use techniques like creating super timelines to correlate events across systems, uncovering hidden evidence. Cloud forensics, however, is a different story.

Cloud investigations resemble working with Security Information and Event Management (SIEM) systems in Security Operations Centers (SOCs). Just as SIEM setups depend on pre-selected data inputs, cloud providers offer only certain types of logs and data. This means you need to plan ahead to ensure you're capturing the right logs. When it's time to investigate, you'll be limited to whatever was logged based on the initial setup and your subscription level.

Essential Steps for Incident Response in the Cloud

Handling incidents in the cloud follows many of the same steps as traditional response processes, but there's an added emphasis on preparation. Without the right preparations, investigators could be left scrambling, unable to detect or respond to intrusions effectively.

Preparation:
Know Your Environment: Document the systems your organization uses, along with any defenses and potential weak spots. Prepare for likely incidents based on your cloud architecture and assets.
Logging: Make sure you're subscribed to an adequate logging tier to capture the necessary data for investigations. Higher-tier subscriptions often provide more granular logs, which are crucial for in-depth analysis.
Data Retention: Cloud providers offer different retention periods depending on the subscription. Ensure the data you need is available long enough for proper analysis.

Detection:
Use tools like the MITRE ATT&CK® framework to identify techniques and indicators of compromise specific to cloud environments.
Regularly review security logs to detect anomalous activities. Log aggregators and monitoring tools can streamline this process.

Analysis:
For IaaS, you can perform traditional forensic techniques, such as memory analysis and file recovery.
For PaaS and SaaS, focus on analyzing available logs. If suspicious activity is detected, collect and analyze whatever data the provider makes available.
Correlate cloud logs with on-premises logs to trace attacker movements between environments.

Containment & Eradication:
In the cloud, containment often involves disabling specific accounts or access keys, updating permissions, or isolating compromised systems.
For SaaS or PaaS, the provider might handle containment on their end, so you'll need a strong partnership with your provider to act quickly in a breach.

Recovery:
Implement any necessary changes to strengthen security and avoid re-compromise. This may involve changing access policies, adjusting logging settings, or reconfiguring cloud resources.

Lessons Learned:
Post-incident, review what happened and how it was handled. Look for opportunities to enhance your response capabilities and bolster your cloud security posture.

Leveraging the MITRE ATT&CK Framework for Cloud Environments

The MITRE ATT&CK framework, renowned for cataloging adversary tactics and techniques, has been expanded to include cloud-specific threats. While current versions focus on major cloud platforms like Microsoft Azure and Google Cloud, they also include techniques applicable to IaaS and SaaS broadly. This makes it a valuable resource for proactive defense planning in cloud environments. Regularly reviewing the techniques in the framework can help you design detections that fit your organization's cloud architecture.
By integrating the ATT&CK framework into your cloud incident response strategy, you’ll be better equipped to recognize suspicious behavior and quickly respond to emerging threats. Conclusion: Embracing Cloud Forensics in an Evolving Threat Landscape Cloud forensics presents a unique set of challenges, but with the right knowledge and tools, your organization can respond effectively to incidents in cloud environments. Remember, it’s all about preparation. Invest in adequate logging, establish incident response protocols, and familiarize your team with the MITRE ATT&CK framework. By doing so, you’ll ensure that you’re ready to tackle threats in the cloud with the same rigor and responsiveness as on-premises investigations. Akash Patel
- macOS Incident Response: Tactics, Log Analysis, and Forensic Tools
macOS logging is built on a foundation similar to traditional Linux/Unix systems, thanks to its BSD ancestry. While macOS generates a significant number of logs, the structure and format of these logs have evolved over time.

----------------------------------------------------------------------------------------------

Overview of macOS Logging

Most macOS logs are stored in plain text within the /var/log directory (also found as /private/var/log). These logs follow the traditional Unix log format:

MMM DD HH:MM:SS HOST Service: Message

One major challenge: log entries don't include the year or time zone, so when reviewing events near the turn of the year, it's important to be cautious. Logs are rotated based on size or age, with old logs typically compressed using gzip or bzip2.

Key Difference from Linux/Unix Logging

macOS uses two primary binary log formats:

Apple System Log (ASL): Introduced in Mac OS X Leopard, ASL stored syslog data in a binary format. While deprecated, it's still important for backward compatibility.
Apple Unified Log (AUL): Starting with macOS Sierra (10.12), AUL became the standard for most logging. Apps and processes now use AUL, but some data may still be logged via ASL.

----------------------------------------------------------------------------------------------

Common Log Locations

Investigators should know where key log files are stored:

/var/log : Primary system logs.
/var/db/diagnostics : System diagnostic logs.
/Library/logs : System and application logs.
~/Library/Logs : User-specific logs.
/Library/Application Support/(App name) : Application logs.
/Applications : Logs for applications installed on the system.

----------------------------------------------------------------------------------------------

Important Plain Text Logs

Some of the most useful plain text logs for enterprise incident response include:

/var/log/system.log : General system diagnostics.
/var/log/DiskUtility.log : Disk mounting and management events.
/var/log/fsck_apfs.log : Filesystem-related events.
/var/log/wifi.log : Wi-Fi connections and known hotspots.
/var/log/appfirewall.log : Network events related to the firewall.

Note: Starting with macOS Mojave, many of these logs have transitioned to Apple Unified Logs (AUL). On upgraded systems, you might still find them, but they are no longer actively used for logging in newer macOS versions.

----------------------------------------------------------------------------------------------

Binary Logs in macOS

macOS has shifted toward binary logging formats for better performance and data integrity. Investigators should be familiar with two main types:

1. Apple System Logs (ASL)

Location : /var/log/asl/*.asl
View : Use the syslog command or Console app during live response.
ASL contains diagnostic and system management data, including startup/shutdown events and some process telemetry.

2. Apple Unified Logs (AUL)

Location :
/var/db/diagnostics/Persist
/var/db/diagnostics/timesync
/var/db/uuidtext/
File Type : .tracev3

AUL is the default logging format since macOS Sierra (10.12). These logs cover a wide range of activities, from user authentication to sudo usage, and are critical for forensic analysis.

How to View AUL:

View in live response : Use the log command or the Console app.
File parsing : These logs are challenging to read manually. It's best to use specialized tools designed to extract and analyze AUL logs.
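For the live-response route, the built-in log utility can filter Unified Logs directly; for example (the time window and predicate are illustrative):

# Show two days of info-level Unified Log entries mentioning sudo
log show --info --last 2d --predicate 'eventMessage CONTAINS[c] "sudo"'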
----------------------------------------------------------------------------------------------

Limitations of macOS Logging

Default Logging May Be Insufficient : Most macOS systems don't have enhanced logging enabled (like auditd), which provides more detailed logs. This can result in gaps when conducting enterprise-level incident response.
Log Modification : Users with root or sufficient privileges can modify or delete logs, meaning attackers may tamper with evidence.
Binary Format Challenges : Analyzing ASL and AUL logs on non-macOS systems can be difficult. The best approach is to use a macOS device for live response or log analysis, as using other platforms may result in a loss of data quality.

----------------------------------------------------------------------------------------------

Live Log Analysis in macOS

1. Using the Last Command

Just like in Linux, the last command shows the most recent logins on the system, giving investigators a quick overview of user access.

2. Reading ASL Logs with Syslog

The syslog command allows investigators to parse Apple System Log (ASL) files in binary format:

syslog -f (filename).asl

While it can reveal key system events, it's not always easy to parse visually.

3. Live Analysis with the Console App

For a more user-friendly experience, macOS provides the Console app, a GUI tool that allows centralized access to both Apple System Logs (ASL) and the more modern Apple Unified Logs (AUL). It's an ideal tool for visual log analysis, but keep in mind, you can't process Console output with command-line tools or scripts.

----------------------------------------------------------------------------------------------

Binary Log Analysis on Other Platforms

When you can't analyze logs on a macOS machine, especially during forensic analysis on Windows or Linux, mac_apt is a powerful, cross-platform solution.

mac_apt: macOS Artifact Parsing Tool

Developed by Yogesh Khatri, mac_apt is an open-source tool designed to parse macOS and iOS artifacts, including Apple Unified Logs (AUL). https://github.com/ydkhatri/mac_apt

Key Features :
Reads from various sources like raw disk images, E01 files, VMDKs, mounted disks, or specific folders.
Extracts artifacts such as user lists, login data, shell history, and Unified Logs.
Outputs data in CSV, TSV, or SQLite formats.

Challenges with mac_apt :
TSV Parsing : The default TSV output is in UTF-16 Little Endian, which can be tricky to process with command-line tools (see the conversion one-liner below). However, it works well in spreadsheet apps.
Large File Sizes : Log files can be huge, and mac_apt generates additional copies for evidence, which can take up significant disk space. For example, analyzing a 40GB disk image could produce a 13GB UnifiedLogs.db file and 15GB of exported evidence.
Speed : Some plugins can be slow to run. Using the FAST option avoids the slowest ones but can still take 10-15 minutes to complete. A full extraction with plugins like SPOTLIGHT and UNIFIEDLOGS can take over an hour.
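A practical workaround for the UTF-16 TSV issue: re-encode the output to UTF-8 before reaching for grep or awk (the filename is an example):

# Re-encode mac_apt TSV output so standard Unix text tools can process it
iconv -f UTF-16LE -t UTF-8 UnifiedLogs.tsv > UnifiedLogs_utf8.tsv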
----------------------------------------------------------------------------------------------

How to Use mac_apt

The command-line structure of mac_apt is straightforward, and you can select specific plugins based on the data you need:

python /opt/mac_apt/mac_apt.py -o /output_folder --csv -d E01 /diskimage.E01 PLUGIN_NAME

For example, to investigate user activity:

python /opt/mac_apt/mac_apt.py -o /analysis --csv -d E01 /diskimage.E01 UTMPX USERS TERMSESSIONS

This will extract user information, login data, and shell history files into TSV files.

Useful mac_apt Plugins for DFIR :

ALL : Runs every plugin (slow, only use if necessary).
FAST : Runs plugins without UNIFIEDLOGS, SPOTLIGHT, and IDEVICEBACKUPS, speeding up the process.
SUDOLASTRUN : Extracts the last time sudo was run, useful for privilege escalation detection.
TERMSESSIONS : Reads terminal history (Bash/Zsh).
UNIFIEDLOGS : Reads .tracev3 files from Apple Unified Logs.
UTMPX : Reads login data.

----------------------------------------------------------------------------------------------

Conclusion

This guide aims to simplify the complex task of macOS log analysis during incident response, giving investigators practical tools and strategies for both live and binary log extraction. By using the right tools and understanding key log formats, you can efficiently gather the information you need to support forensic investigations.

Akash Patel
- Investigating macOS Persistence: Key Artifacts, Launch Daemons, and Forensic Strategies
Let's explore the common file system artifacts investigators need to check during incident response (IR).

----------------------------------------------------------------------------------------------

1. Commonly Abused Files for Persistence

Attackers often target shell initialization files to maintain persistence by modifying the user's environment, triggering scripts, or executing binaries.

Zsh Shell Artifacts (macOS default shell since Catalina)

Global Zsh Files:
/etc/zprofile : Alters the shell environment for all users, setting variables like $PATH. Attackers may modify it to run malicious scripts upon login.
/etc/zshrc : Loads configuration settings for all users. Since macOS Big Sur, this file gets rebuilt with system updates.
/etc/zsh/zlogin : Runs after zshrc during login and is often used to start GUI tools.

User-Specific Zsh Files:
Attackers may also modify individual user shell files located in the user's home directory (~):
~/.zshenv (optional)
~/.zprofile
~/.zshrc
~/.zlogin
~/.zlogout (optional)

User History:
~/.zsh_history
~/.zsh_sessions (directory)

These files are loaded in sequence during login, giving attackers multiple opportunities to run malicious code.

Note: During IR collection it is advised to review all of these files (including ~/.zshenv and ~/.zlogout if present) for signs of attacker activity.

----------------------------------------------------------------------------------------------

2. User History Files

Tracking a user's shell activity can provide valuable insights during an investigation. The .zsh_history file logs the commands a user entered into the shell. By default, this file stores the last 1,000 commands, but the number can be configured via SAVEHIST and HISTSIZE in /etc/zshrc.

Important Note: The history file is only written to disk when the session ends. During live IR, make sure active sessions are terminated to capture the latest data.

Potential Manipulation: Attackers may selectively delete entries or set SAVEHIST and HISTSIZE to zero, preventing commands from being logged.

Another place to check is the .zsh_sessions directory. This folder stores session and temporary history files, which may contain overlooked data.

----------------------------------------------------------------------------------------------

3. Bash Equivalents

For systems where Bash is in use (either as an alternative shell or a legacy setup), investigators should review the following files, which serve the same purpose as their Zsh counterparts:

~/.bash_history
~/.bash_profile
~/.bash_login
~/.profile
~/.bashrc
~/.bash_logout

Attackers can modify these files to achieve persistence or hide their activity.

----------------------------------------------------------------------------------------------

4. Installed Shells

It's not uncommon for users to install other shells. To verify which shells are installed, check the /etc folder, and look in the user's home directory for history files. If multiple shells have been installed, you may find artifacts from more than one shell.
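A quick way to do this on a live system or mounted image: /etc/shells lists the valid login shells, and stray dotfiles reveal shells that were actually used (the username is a placeholder):

# Enumerate installed shells and per-user shell history artifacts
cat /etc/shells
ls -a /Users/username | grep -E 'history|_sessions'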
----------------------------------------------------------------------------------------------

5. Key File Artifacts for User Preferences

macOS stores extensive configuration data in each user's ~/Library/Preferences directory. Some of these files are particularly useful during an investigation.

Browser Downloads / Quarantine Information : Found in the com.apple.LaunchServices.QuarantineEventsV* SQLite database, this file logs information about executable files downloaded from the internet, including URLs, email addresses, and subject lines.

Recently Accessed Files :
macOS Mojave and earlier : com.apple.RecentItems.plist
macOS Big Sur and later : com.apple.shared.plist

Finder Preferences : The com.apple.finder.plist file contains details on how the Finder app is configured, including information on mounted volumes.

Keychain Preferences : The com.apple.keychainaccess.plist file logs keychain preferences and the last accessed keychain, which can provide clues about encrypted data access.

Investigation Note : Be aware that attackers can modify or delete these files, and they may not always be present.

----------------------------------------------------------------------------------------------

macOS Common Persistence Mechanisms

Attackers use various strategies to maintain persistence on macOS systems, often exploiting system startup files or scheduled tasks.

1. Startup Files

Attackers frequently modify system or user initialization files to add malicious scripts or commands. These files are read when the system or user session starts, making them a common target.

2. Launch Daemon (launchd)

The launchd daemon controls services and processes triggered during system boot or user login. While it's used by legitimate applications, attackers can exploit it by registering malicious property list (.plist) files or modifying existing ones to point to malicious executables.

Investigating launchd on a Live System:

You can use the launchctl command to list all the active jobs:

launchctl list

This command will show:
PID : Process ID of running jobs.
Status : Exit status or the signal that terminated the job (e.g., -9 for a SIGKILL).
Label : Name of the task, sourced from the .plist file that created the job.

Investigating launchd on Disk Images:

The launchd process is owned by root and normally runs as PID 1 on a system. It is the only process which can't be killed while the system is running. This allows it to create jobs that can run as a range of user accounts. Jobs are created by property list (plist) files in specific locations, which point to executable files. The launchd process reads the plist and launches the file with any arguments or instructions set in the plist.

To analyze launchd in a system image or offline triage:

Privileged Jobs : Check these folders for startup tasks that run as root or other users:
/Library/LaunchAgents : Per-user agents for all logged-in users, installed by admins.
/Library/LaunchDaemons : System-wide daemons, installed by admins.
/System/Library/LaunchAgents : Apple-provided agents for user logins.
/System/Library/LaunchDaemons : Apple-provided system-wide daemons.

User Jobs : Jobs specific to individual users are stored in:
/Users/(username)/Library/LaunchAgents

3. Cron Tasks

Similar to Linux systems, cron manages scheduled tasks in macOS. Attackers may create cron jobs that trigger the execution of malicious scripts at regular intervals.
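On a live system, these persistence points can be triaged quickly; a short sketch (the .plist path is hypothetical):

# Triage scheduled-task and launchd persistence
crontab -l                                                   # current user's cron jobs
launchctl list | grep -v com.apple                           # non-Apple launchd jobs
plutil -p /Library/LaunchDaemons/com.example.persist.plist   # dump a suspect plist (hypothetical path)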
----------------------------------------------------------------------------------------------
Workflow for Analyzing Launchd Files
When investigating launchd persistence, use this methodical approach:
1. Check for Unusual Filenames: Look for spelling errors, odd filenames, or files that imitate legitimate names. Start in the /Library/LaunchAgents and /Library/LaunchDaemons folders.
2. Sort by Modification Date: If you know when the incident occurred, sort the .plist files by modification date to find any changes made around the time of the attack.
3. Analyze File Contents: Check the Program and ProgramArguments keys in each .plist file. Investigate any executables they point to.
4. Validate Executables: Confirm whether the executables are legitimate by checking their file hashes or running basic forensic analysis, such as using the strings command or full reverse engineering (a validation sketch follows below).
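For the validation step, a hedged sketch (the binary path is a hypothetical example):
shasum -a 256 /Library/Application\ Support/Example/suspect.bin              # hash for threat-intelligence lookups
codesign -dv --verbose=2 /Library/Application\ Support/Example/suspect.bin   # inspect the code signature, if any
strings /Library/Application\ Support/Example/suspect.bin | head -50         # quick review of embedded strings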
----------------------------------------------------------------------------------------------
Final Thoughts
When investigating a macOS system, checking these file system artifacts is crucial. From shell initialization files that may be altered for persistence to history files that track user activity, these files provide a window into the state of the system. By examining user preferences, quarantine data, and persistence mechanisms, you can further uncover potential signs of compromise or abnormal behavior.
Akash Patel
- Evidence Profiling: Key Device Information, User Accounts, and Network Settings on macOS
When investigating a macOS system, understanding its device information, user accounts, and network settings is critical.
----------------------------------------------------------------------------------------------
1. Device Information
(i) OS Version and Build
The macOS version and build details can be found in the SystemVersion.plist file:
Location: /System/Library/CoreServices/SystemVersion.plist
Command: Use cat on a live system to view the .plist file contents.
(ii) Device Serial Number
The device's serial number is stored in three database files, but access may be restricted while the system is live:
Files:
consolidated.db
cache_encryptedA.db
lockCache_encryptedA.db
Location: /root/private/var/folders/zz/zyxvpxvq6csfxvn_n00000sm00006d/C/
Use DB Browser for SQLite to open these databases and find the serial number in the TableInfo table.
(iii) Device Time Zone – Option 1
Run ls -l on the /etc/localtime file to reveal the time zone set on the device. This works on both live systems and disk images. Be cautious when working on an image, as this symlink could resolve to the time zone of the investigation machine instead.
(iv) Device Time Zone – Option 2
The time zone is also stored in a .plist file that may be more accurate, as it can include latitude and longitude from location services:
Location: /Library/Preferences/.GlobalPreferences.plist
Command (on a live system or a Mac analysis machine):
plutil -p /Library/Preferences/.GlobalPreferences.plist
Note: If location services are enabled, the automatic time zone update will regularly update this plist. However, when a device switches to a static time zone, this plist may not be updated and will point to the last automatic update location.
To check whether location services are enabled:
Location: /Library/Preferences/com.apple.timezone.auto.plist
If location services are enabled, the "active" entry will be set to 1 or true.
----------------------------------------------------------------------------------------------
2. User Accounts
Each user account on a macOS system has its own configuration .plist file:
Location: /private/var/db/dslocal/nodes/Default/users/
Location: /private/var/db/dslocal/nodes/Default/groups/
These files contain key details about the user accounts. If investigating malicious activity, check this directory to confirm whether any suspicious accounts have been created or accounts have been added to privileged groups.
Key Points:
Accounts managed by Open Directory won’t have a .plist file here.
System service accounts (like _ftp) have names beginning with an underscore.
Default system accounts include root, daemon, nobody, and Guest.
----------------------------------------------------------------------------------------------
3. Network Settings
Network Interfaces
Each network interface has its own configuration stored in a .plist file:
Location: /Library/Preferences/SystemConfiguration/NetworkInterfaces.plist
Key Information:
Interface number (e.g., en0 for Wi-Fi, en1 for Ethernet).
Network Type (e.g., IEEE802.11 for Wi-Fi, Ethernet for wired connections).
MAC address: This may be displayed in Base64-encoded format when parsed on Linux but can be decoded using (a worked example follows after this list):
echo "(encoded MAC)" | base64 -d | xxd
Model: Useful for identifying the device's network hardware.
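For example, the interfaces plist can be dumped with plutil on a live system or a Mac analysis machine, and an encoded MAC can be decoded on Linux (the Base64 value below is a made-up placeholder that decodes to aa:bb:cc:dd:ee:ff):
plutil -p /Library/Preferences/SystemConfiguration/NetworkInterfaces.plist   # review interface numbers, types, and hardware
echo "qrvM3e7/" | base64 -d | xxd -p                                         # prints aabbccddeeff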
Network Configuration – Interfaces
Another important .plist file, preferences.plist, contains detailed configuration for each interface:
Location: /Library/Preferences/SystemConfiguration/preferences.plist
Key Elements:
Network Services: Details on IPv4/IPv6 settings, proxies, DNS, and more.
Local HostName: The machine's local network name.
Computer Name: May differ from the hostname.
----------------------------------------------------------------------------------------------
DHCP Lease Information
The DHCP lease information provides details about past network connections:
Location: /private/var/db/dhcpclient/leases/
Files are named based on the network interface (e.g., en0.plist, interface.plist, en0-MAC.plist, or en0-1,12:12:12:12:12:12.plist). Where there have been multiple connections on an interface, the files in this folder will contain data relating to the most recent connection (see the sketch after this list).
Key Information:
Device IP address
Lease start date
Router MAC and IP address
SSID (if connected to Wi-Fi)
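The lease files themselves are plists and can be reviewed directly (en0.plist is an example name; use whichever files are present):
ls -la /private/var/db/dhcpclient/leases/                 # list lease files per interface
plutil -p /private/var/db/dhcpclient/leases/en0.plist     # dump IP address, router details, SSID, and lease date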
----------------------------------------------------------------------------------------------
Final Thoughts
Investigating a macOS system, especially with an APFS file system, involves diving deep into system files and .plist configurations. From device profiling to uncovering user activity and network settings, understanding where to find critical data can streamline investigations and ensure thorough evidence collection. Always ensure you have the necessary tools to access and decode these files.
Akash Patel
- APFS Disk Acquisition: From Live Data Capture to Seamless Image Mounting
Understanding .plist Files (Property List Files)
.plist files in macOS are like the registry in Windows. They store important configuration settings for apps and the system. These files come in two flavors:
XML Format: This is the older, more human-readable format. If you open an XML .plist, you'll see it starts with the standard <?xml version="1.0"?> declaration.
Binary Format: A more compact, machine-oriented representation. It can be converted to XML for review with plutil -convert xml1 <filename>.
Checking File Timestamps:
stat <filename>          # Shows Access, Modify, and Change timestamps in seconds
For nanosecond accuracy, use:
stat -f %Fa <filename>   # Access time
stat -f %Fm <filename>   # Modification time
stat -f %Fc <filename>   # Change time
GetFileInfo: This command gives you additional details about the file, including creation and modification times.
GetFileInfo <filename>
---------------------------------------------------------------------------------------------
Disk Acquisition from an APFS Filesystem
Acquiring disk data from macOS devices using the APFS (Apple File System) presents unique challenges, especially for investigators or responders dealing with encrypted systems. Let's break down the process:
1. Physical Disk Extraction
Unlike traditional PCs, Apple's devices often don't allow easy removal of disks. In most cases, the storage is built right into the system. Even if you can physically remove the disk, things get complicated if it's encrypted—once removed, the data may become unrecoverable.
2. Disk Encryption
Apple devices frequently use disk encryption by default, adding another layer of complexity. While certain organizations claim they can recover data from encrypted disks, it's not feasible for most responders. The best strategy? Make sure institutional access keys are set up in your organization. These allow you to decrypt and access data when needed.
3. System Integrity Protection (SIP)
Introduced with OS X El Capitan (10.11), SIP is a security feature that prevents even administrators from modifying key system files. While it helps protect the system, it can interfere with forensic tools that need access to the disk. SIP can be disabled temporarily by rebooting into Recovery Mode, but be warned—this could alter data on the device and affect the investigation.
---------------------------------------------------------------------------------------------
Tips for Disk Acquisition
Live collection is usually your best bet. Capturing data from a running system avoids many of the challenges mentioned above. Here are a few strategies:
Endpoint monitoring tools like EDR (Endpoint Detection and Response) are essential for tracking suspicious activity or capturing data. Examples include Velociraptor or remote access agents like F-Response.
Forensic tools: If you have access to commercial forensic software, you're in good hands. Some commonly used options include:
Cellebrite Digital Collector
FTK Imager
OpenText EnCase
Magnet Acquire
Direct Access Methods: If you have direct access to the system but not commercial tools, you can still use open-source solutions.
dd or dcfldd/dc3dd: These tools can create a disk image that can be sent to external storage or even a remote address using netcat (see the sketch below).
Sumuri PALADIN: A live forensic USB tool for capturing disk images.
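As a minimal sketch of the dd-over-netcat approach (the IP address, port, and disk identifier are assumptions; confirm the target disk with diskutil list before imaging):
nc -l 4444 > evidence.dd                                            # on the collection server (assumed 10.0.0.5)
sudo dd if=/dev/rdisk0 bs=1m conv=noerror,sync | nc 10.0.0.5 4444   # on the target Mac; rdisk0 is an assumed identifier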
---------------------------------------------------------------------------------------------
Mounting APFS Images
Once you've captured a disk image, the next step is mounting it for analysis. There are different ways to do this, depending on your platform and available tools.
Easiest Option: Commercial Forensic Suites
If you're using commercial tools, they make it easy to mount and read the image on a macOS system.
If Commercial Tools Aren't Available:
Mounting the image on macOS is straightforward, but it requires a few key options:
rdonly: Mounts the image as read-only, ensuring no accidental changes.
noexec: Prevents any code from executing on the mounted image.
noowners: Ignores ownership settings, minimizing access issues.
Commands to Mount in macOS:
sudo su
mkdir /Volumes/apfs_images
mkdir /Volumes/apfs_mounts
xmount --in ewf evidencecapture.E01 --out dmg /Volumes/apfs_images
hdiutil attach -nomount /Volumes/apfs_images/evidencecapture.dmg
diskutil ap list
diskutil ap unlockvolume <volume UUID> -nomount
mount_apfs -o rdonly,noexec,noowners /dev/disk# /Volumes/apfs_mounts/
Mounting in Linux
Mounting an APFS image on Linux is possible but requires FUSE (Filesystem in Userspace) drivers. Here's a simplified guide:
Install APFS FUSE Drivers: First, you'll need to install the necessary dependencies and clone the APFS FUSE repository from GitHub.
sudo apt update
sudo apt install libicu-dev bzip2 cmake libz-dev libbz2-dev fuse3 clang git libattr1-dev libplist-utils -y
cd /opt
git clone https://github.com/sgan81/apfs-fuse.git
cd apfs-fuse
git submodule init
git submodule update
mkdir build
cd build
cmake ..
make
ln /opt/apfs-fuse/build/apfs-dump /usr/bin/apfs-dump
ln /opt/apfs-fuse/build/apfs-dump-quick /usr/bin/apfs-dump-quick
ln /opt/apfs-fuse/build/apfs-fuse /usr/bin/apfs-fuse
ln /opt/apfs-fuse/build/apfsutil /usr/bin/apfsutil
NOTE: The ln commands make it easier to run the tools without needing to add the /opt/apfs-fuse/build folder to the path. This may vary depending on your environment.
Mount the Image: After setting up FUSE, you can mount the image using these commands (if the evidence is an E01 file, first expose it as a raw image with ewfmount, which presents it as a file named ewf1):
mkdir /mnt/apfs_mount     # create the mount point
mkdir /mnt/ewf_mount && ewfmount evidencecapture.E01 /mnt/ewf_mount   # expose the E01 as a raw image (assumes libewf's ewfmount is installed)
cd /mnt/ewf_mount         # change to the directory where the raw image (ewf1) is exposed
apfs-fuse -o ro,allow_other ewf1 /mnt/apfs_mount   # mount the image read-only
If you want a script to automate this for Debian-based distros (like Ubuntu), check out the one available at this link:
https://github.com/TazWake/Public/blob/master/Bash/apfs_setup.sh
Final Thoughts
In forensic investigations, especially on macOS systems, APFS disk acquisition can be tricky. Between encrypted disks, System Integrity Protection (SIP), and Apple's tight security measures, your best option is often live data capture. Whether you're using commercial tools or open-source alternatives, having the right approach and tools is critical.
Akash Patel
- History of macOS and macOS File Structure
Early Apple Days
Apple was established on April 1, 1976, and quickly made its mark with the Lisa in the early 1980s, one of the first commercial computers to feature a graphical user interface (GUI). Fast forward to 1984, and Apple released the Macintosh, their first affordable personal computer with a GUI, revolutionizing personal computing.
Big Moves in the 1990s and Beyond
By the late 1990s, Apple was well-established. In 1998, they introduced the HFS+ file system, which helped users manage larger storage devices and improved overall file organization. But things really got interesting in 2001 with the launch of Mac OS X—a Unix-based operating system that gave the Mac the robustness and reliability it needed.
The Evolution of macOS
2012: With OS X 10.8 (Mountain Lion), Apple started to unify its desktop and mobile platforms, borrowing elements from iOS.
2016: Apple rebranded OS X to macOS, beginning with macOS 10.12 (Sierra).
2017: The APFS file system (Apple File System) was introduced to replace HFS+, designed to be faster and more efficient, especially for SSDs.
APFS: Apple's Modern File System
When Apple introduced APFS in 2017, it addressed many limitations of its predecessor, HFS+. Here's what makes APFS special and why it matters for modern Macs:
Optimized for SSDs: APFS is designed to work seamlessly with solid-state drives (SSDs) and flash storage, making your Mac much faster when it comes to file operations.
Atomic Safe Save: Ever worried about losing data if your Mac crashes while saving a file? APFS uses a technique called Atomic Safe Save. Instead of overwriting files (which can corrupt data during a crash), it creates a new file and updates pointers—meaning your data is much safer.
Full Disk Encryption: APFS builds encryption right into the file system, giving you multiple ways to secure your data using different recovery keys, including your password or even iCloud.
Snapshots: One of the coolest features is snapshots, which create a read-only copy of your system at a specific point in time. If something goes wrong, you can roll back to a previous state—perfect for troubleshooting!
Large File Capacity: APFS supports filenames of up to 255 characters and file sizes up to a theoretical limit of 8 exabytes (that's 8 billion gigabytes!). So, you probably won't run out of space anytime soon.
Accurate Timestamps: With nanosecond accuracy, APFS records changes precisely—useful for backups, file versioning, and tracking down when exactly something was altered.
macOS File Structure: How Your Files Are Organized
macOS organizes files and folders into four main domains, each serving different purposes:
1. User Domain (/Users)
This is where all the files related to your user account live. It includes the home directory, which stores personal documents, downloads, music, and more. Each user on the system has their own isolated space here. There's also a hidden Library folder within each user account, where your apps store personal preferences and data.
Key folders in the User Domain:
Home Directory: Your personal space, with folders like Documents, Downloads, and Desktop.
Public Directory: A space where you can share files with others who use the same Mac.
User Library: Hidden by default, but this folder is a treasure trove for advanced users and app developers. It contains your preferences, app data, and cached files. If you ever need to dig in, you can reveal it using a simple Terminal command:
chflags nohidden /Users/<username>/Library
2. Local Domain (/Library)
This domain contains files and apps that are shared across all users on the Mac. Apps installed via the Mac App Store will be located in the /Applications folder. There's also a /Developer folder here if you've installed Xcode or developer tools.
/Library – Library files shared across all users.
3. Network Domain (/Network)
The Network Domain is for shared resources like network drives or printers. In an office setting, this is where you'd find shared servers or Windows file shares. It's managed by network administrators and isn't something the average user interacts with often.
4. System Domain (/System)
This is where Apple stores the critical components that make macOS run smoothly. It's locked down so that regular users can't accidentally delete something important. You'll find OS-level libraries and apps here, safely tucked away from tampering.
A Deeper Look into the User Domain
The User Domain is often the center of attention during troubleshooting or security incidents. Whether it's a malicious app trying to access personal files or suspicious activity in the system, the User Domain holds a lot of valuable evidence. It's divided into three main directories:
1. Home Directory
Your personal space for files like downloads, documents, music, and more. Each user on the Mac has their own home directory, and macOS prevents other users from accessing it unless they have special permissions.
2. Public Directory
This folder is for sharing files with other users on the same Mac. It's located at /Users/<username>/Public.
3. User Library
Hidden by default, the User Library stores a lot of important app data. It contains application sandboxes, preferences, and cached data—things you wouldn't normally touch but are critical to how apps function.
Application Sandboxes: Found in ~/Library/Containers, this is where macOS keeps app data safe and separate from the rest of your system. Application data can live in several places:
(i) ~/Library/Containers for data relating to specific apps
(ii) ~/Library/Group\ Containers/ for shared data
(iii) ~/Library/Application\ Support/ for additional application data
You should always check all of these locations to find the data for a specific application.
Preferences: Stored in ~/Library/Preferences, these files keep track of how you like your apps set up. For example, the Safari browser's preferences are in com.apple.Safari.plist. A quick way to inspect these artifacts is sketched below.
Cached Data: Found in ~/Library/Caches, this folder holds temporary files that apps use to speed things up.
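A quick sketch for inspecting these locations from Terminal (Safari is just an example application):
ls ~/Library/Containers ~/Library/Group\ Containers ~/Library/Application\ Support   # enumerate per-app data locations
plutil -p ~/Library/Preferences/com.apple.Safari.plist     # pretty-print a preferences plist
defaults read com.apple.Safari                             # alternative view of the same preferences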
Final Thoughts
macOS and its APFS file system are designed to provide a smooth and efficient experience, especially on modern hardware. The system balances speed, security, and reliability with features like snapshots, encryption, and safe saving methods. By organizing files into distinct domains (User, Local, Network, System), macOS ensures that both individual users and administrators have easy access to what they need while keeping everything secure.
Akash Patel
- Lateral Movement: User Access Logging (UAL) Artifact
Lateral movement is a crucial part of many cyberattacks, where attackers move from one system to another within a network, aiming to expand their foothold or escalate privileges. Detecting such activities requires in-depth monitoring and analysis of various network protocols and artifacts. Some common methods attackers use include SMB, RDP, WMI, PSEXEC, and Impacket Exec.
One lesser-known but powerful artifact for mapping lateral movement in Windows environments is User Access Logging (UAL). In this article, we'll dive into UAL, where it's stored, how to collect and parse the data, and why it's critical in detecting lateral movement in forensic investigations.
1. Introduction to User Access Logging (UAL)
User Access Logging (UAL) is a Windows feature, enabled by default on Windows Server 2012 and later. UAL aggregates client usage data on local servers by role and product, allowing administrators to quantify requests from client computers for different roles and services. By analyzing UAL data, you can map which accounts accessed which systems, providing insights into lateral movement.
Why it's important in forensic analysis:
Track endpoint interactions: UAL logs detailed information about client interactions with server roles, helping investigators map out who accessed what.
Detect lateral movement: UAL helps identify which user accounts or IP addresses interacted with specific endpoints, crucial for identifying an attacker's path.
2. Location of UAL Artifacts
The UAL logs can be found on Windows systems in the following path:
C:\Windows\System32\Logfiles\sum
This directory contains multiple files that store data on client interactions, system roles, and services.
3. Collecting UAL Data with KAPE
To collect UAL data from an endpoint, you can use KAPE (Kroll Artifact Parser and Extractor). This tool is designed to collect forensic artifacts quickly, making it a preferred choice for investigators. Here's a quick command to collect UAL data using KAPE:
Kape.exe --tsource C: --tdest C:\Users\akash\Desktop\tout --target SUM
--tsource C: : Specifies the source drive (C:).
--tdest : Defines the destination where the extracted data will be stored (in this case, C:\Users\akash\Desktop\tout).
--target SUM : Tells KAPE to specifically collect the SUM folder, which contains the UAL data.
4. Parsing UAL Data with SumECmd
Once the UAL data has been collected, the next step is parsing it. This can be done using SumECmd, a tool by Eric Zimmerman, known for its efficiency in processing UAL logs. Here's how you can use SumECmd to parse the UAL data:
SumECmd.exe -d C:\users\akash\desktop\tout\SUM --csv C:\Users\akash\desktop\sum.csv
-d : Specifies the directory containing the UAL data (in this case, C:\users\akash\desktop\tout\SUM).
--csv : Tells the tool to output the results in CSV format (which can be stored on the desktop).
The CSV output will provide detailed information about the client interactions.
5. Handling Errors with Esentutl.exe
During parsing, you may encounter an error stating "error processing file." This error is often caused by corruption in the UAL database. To fix this, use the esentutl.exe tool to repair the corrupted database:
Esentutl.exe /p <filename>.mdb
Replace <filename> with the actual name of the corrupted .mdb file. Run the above command for all .mdb files located in the SUM folder; a loop that automates this is sketched below.
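To avoid repairing each database by hand, a hedged one-liner for an interactive Windows command prompt (the path matches the collection example above; double the percent signs, %%f, if you put this in a batch file):
for %f in (C:\Users\akash\Desktop\tout\SUM\*.mdb) do Esentutl.exe /p "%f"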
6. Re-Parsing UAL Data
Once the database is repaired, re-run the SumECmd tool to parse the data:
SumECmd.exe -d C:\users\akash\desktop\tout\SUM --csv C:\Users\akash\desktop\sum.csv
This command will generate a new CSV output that you can analyze for lateral movement detection.
7. Understanding the Output
The CSV file generated by SumECmd provides various details that are critical in detecting lateral movement. Here are some of the key data points:
Authenticated Username and IP Addresses: This helps identify which user accounts and IP addresses interacted with specific endpoints.
Detailed Client Output: This includes comprehensive data on client-server interactions, role access, and system identity.
DNS Information: UAL logs also capture DNS interactions, useful for tracking network activity.
Role Access Output: This identifies the roles accessed by different clients, which can highlight unusual activity patterns.
System Identity Information: UAL logs provide system identity details, helping to track systems that may have been compromised.
8. The Importance of UAL Data in Lateral Movement Detection
The data captured by UAL plays a pivotal role in identifying and mapping out an attacker's movement across a network. Here's how UAL data can aid in forensic investigations:
Mapping Lateral Movement: By analyzing authenticated usernames and IP addresses, UAL logs can help identify potential attackers moving through the network and interacting with various endpoints.
Detailed Analysis: UAL provides detailed logs of user interactions, which can be cross-referenced with other forensic artifacts (like event logs) to build a comprehensive timeline of an attack.
Investigating Network Traffic: The inclusion of DNS and role access data allows investigators to better understand how attackers are interacting with various roles and services within the network.
Conclusion
User Access Logging (UAL) is a powerful tool for identifying lateral movement in a Windows environment. With tools like KAPE for collecting UAL data and SumECmd for parsing it, forensic investigators can gain deep insights into how attackers are navigating through the network. Understanding and leveraging UAL data in your investigations can significantly enhance your ability to detect and mitigate cyber threats.
Akash Patel