
Incident Response Log Strategy for Linux: An Essential Guide

In the field of incident response (IR), logs play a critical role in uncovering how attackers infiltrated a system, what actions they performed, and what resources were compromised. Whether you're hunting for exploits, analyzing unauthorized access, or investigating malware, having a solid understanding of log locations and analysis strategies is essential for efficiently handling incidents.



1. Log File Locations on Linux

Most log files on Linux systems are stored in the /var/log/ directory. Capturing logs from this directory should be part of any investigation.


Key Directories:

  • /var/log/: Main directory for system logs.

  • /var/run/: Contains volatile data for live systems, symlinked to /run. When dealing with live systems, logs in /var/run can be crucial as they may not be present on a powered-down system (e.g., VM snapshots).


Key Log Files:

  • /var/log/messages:

    CentOS/RHEL systems; contains general system messages, including some authentication events.

  • /var/log/syslog:

    Ubuntu systems; records a wide range of system activities.

  • /var/log/secure:

    CentOS/RHEL; contains authentication and authorization logs, including su (switch user) events.

  • /var/log/auth.log:

    Ubuntu; stores user authorization data, including SSH logins.


For CentOS, su usage can be found in /var/log/messages, /var/log/secure, and /var/log/audit/audit.log. On Ubuntu, su events are not typically found in /var/log/syslog but in /var/log/auth.log.


2. Grepping for Key Events

When performing threat hunting, the grep command is an effective tool for isolating critical events from logs. A common practice is to search for specific terms, such as:


  • root: Identify privileged events.

  • CMD: Find command executions.

  • USB: Trace USB device connections.

  • su: On CentOS, find switch user activity.


For example, you can run:

grep root /var/log/messages
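Several of these terms can be hunted in one pass with grep's extended syntax and case-insensitive matching. A minimal sketch, run here against a small illustrative stand-in file since real log contents and paths vary by system:

```shell
# Illustrative only: create a small stand-in for /var/log/messages;
# on a live system, point grep at the real log files instead.
cat > /tmp/messages.sample <<'EOF'
Oct 10 13:55:36 web01 su: pam_unix(su:session): session opened for user root by alice(uid=1000)
Oct 10 13:56:01 web01 CROND[2211]: (root) CMD (/usr/local/bin/backup.sh)
Oct 10 13:57:12 web01 kernel: usb 1-1: new high-speed USB device number 4
EOF

# Case-insensitive search for privileged activity, command execution,
# and USB events in a single pass:
grep -iE 'root|CMD|USB' /tmp/messages.sample
```

The -i flag matters because kernel USB messages are lowercase while log prose may capitalize the terms.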

3. Authentication and Authorization Logs

Key Commands:


  • last: Reads login history from the binary wtmp log by default; with the -f option it can also read other binary logs such as utmp and btmp.

  • lastlog: Reads the lastlog file, showing the last login for each user.

  • faillog: Reads the faillog, showing failed login attempts.


Authentication logs are stored in plain text in the following locations:

  • /var/log/secure (CentOS/RHEL)

  • /var/log/auth.log (Ubuntu)


These files contain vital data on user authorization sessions, such as login events from services like SSH.


Key Events to Hunt:

  • Failed sudo attempts: These indicate potential privilege escalation attempts.

  • Root account activities: Any changes to key system settings made by the root account should be scrutinized.

  • New user or cron jobs creation: This can be indicative of persistence mechanisms established by attackers.
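Each of these events can be pulled out of the plain-text authentication logs with grep. A sketch using illustrative sample lines, since the exact message wording depends on the PAM and sshd configuration; on a live system, point grep at /var/log/auth.log (Ubuntu) or /var/log/secure (CentOS/RHEL):

```shell
# Illustrative auth-log lines; real wording varies with the
# PAM/sshd configuration on the system under investigation.
cat > /tmp/auth.sample <<'EOF'
Oct 10 14:01:02 web01 sudo: alice : 3 incorrect password attempts ; TTY=pts/0 ; PWD=/home/alice ; USER=root ; COMMAND=/bin/bash
Oct 10 14:02:10 web01 useradd[3301]: new user: name=svc-tmp, UID=1002, GID=1002, home=/home/svc-tmp, shell=/bin/bash
Oct 10 14:03:44 web01 sshd[3412]: Accepted publickey for root from 203.0.113.7 port 51514 ssh2
EOF

# Failed sudo attempts (possible privilege escalation):
grep -i 'incorrect password' /tmp/auth.sample

# New accounts created (possible persistence mechanism):
grep -i 'new user' /tmp/auth.sample
```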


4. Binary Login Logs

Binary login logs store data in a structured format that isn’t easily readable by standard text editors. These logs record user login sessions, failed login attempts, and historical session data. Key files include:


  • /var/run/utmp: Shows users and sessions currently logged in (available on live systems).

  • /var/log/wtmp: Contains historical data of login sessions.

  • /var/log/btmp: Logs failed login attempts.

Note: The utmp file is located in /var/run/, which is volatile and only exists on live systems. When analyzing offline snapshots, data in utmp won’t be available unless the system was live when captured.

Viewing Binary Login Files

You can use the last command to view binary login logs. The syntax for viewing each file is:


last -f /var/run/utmp
last -f /var/log/wtmp
last -f /var/log/btmp

Alternatively, you can use utmpdump to convert binary log files into human-readable format:


utmpdump /var/run/utmp
utmpdump /var/log/wtmp
utmpdump /var/log/btmp

For systems with heavy activity, piping the output to less or using grep for specific users is helpful to narrow down the results.


5. Analyzing wtmp for Logins

When reviewing login activity from the wtmp file, there are a few critical areas to examine:

Key Data:

  • Username: Indicates the user who logged in. This could include special users like "reboot" or unknown users. An unknown entry may suggest a misconfigured service or a potential intrusion.

  • IP Address: If the login comes from a remote system, the IP address is logged. Local graphical logins show the display name :0 instead of an address.

  • Logon/Logoff Times: The log-on timestamp includes the date, but the log-off is typically shown as a time only, which can make long sessions hard to identify. Notably, the last command does not display the year, so pay close attention to timestamps.

  • Duration: The duration of the session is shown in hh:mm format or in dd+hh:mm for longer sessions.


For large systems with extensive activity, filtering for specific users or login times helps focus the analysis. You can do this with:

last | grep <username>

6. btmp Analysis

The btmp file logs failed login attempts, providing insights into potential brute-force attacks or unauthorized access attempts. Key areas to focus on when analyzing btmp are:

  • Username: This shows the account that attempted to authenticate. Keep in mind, it doesn't log non-existent usernames, so failed attempts to guess usernames won’t show up.

  • Terminal: If the login attempt came from the local system, the terminal will be marked as :0. Pay attention to login attempts from unusual or unexpected terminals.

  • IP Address: This shows the remote machine (if available) where the attempt originated. This can help in identifying the source of a potential attack.

  • Timestamp: Provides the start time of the authentication event. If the system doesn’t log the end time, it will appear as "gone" in the log. These incomplete events could signal abnormal activity.


Using lastb to view the btmp file can quickly provide a summary of failed login attempts.
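During a brute-force attack, lastb output can run to thousands of lines, so summarizing failures per source address is a useful first step. A sketch that counts source IPs, shown here against illustrative lastb-style output; on a live system you would pipe `sudo lastb` into the same awk pipeline:

```shell
# Illustrative lastb-style output; on a real system replace the
# heredoc with: sudo lastb | awk '{print $3}' | sort | uniq -c | sort -rn
cat > /tmp/lastb.sample <<'EOF'
root     ssh:notty    203.0.113.7      Mon Oct 10 14:05 - 14:05  (00:00)
admin    ssh:notty    203.0.113.7      Mon Oct 10 14:05 - 14:05  (00:00)
root     ssh:notty    198.51.100.23    Mon Oct 10 14:06 - 14:06  (00:00)
EOF

# Count failed attempts per source IP, most active first:
awk '{print $3}' /tmp/lastb.sample | sort | uniq -c | sort -rn
```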


7. Lastlog and Faillog

These logs, while useful for IR, come with reliability issues. However, they can still provide valuable clues.


Lastlog

The lastlog file captures the last login time for each user. On Ubuntu, this log can sometimes be unreliable, especially for terminal logins, where users may appear as "Never logged in" even while active.


Command to view:

lastlog
lastlog -u <username>  # For a specific user

In a threat hunting scenario, gathering lastlog data across multiple systems can help identify anomalies, such as accounts showing unexpected login times or systems reporting no recent logins when activity is expected.


Faillog

The faillog records failed login events for each user account but is known to be unreliable, and it is no longer available on CentOS/RHEL systems. On systems where it exists, it can still provide a per-account view of failed login attempts.


Command to view:

faillog -a               # View all failed logins
faillog -u <username>    # Specific user account

For an IR quick win, use lastlog across your devices to check for unusual login patterns, even if you need to keep in mind that Ubuntu's implementation isn’t always consistent.



8. Audit Logs: A Deep Dive into System Activity

The audit daemon (auditd) is a powerful tool for logging detailed system activity. On CentOS, it’s enabled by default, but on Ubuntu, elements of the audit log are often captured in auth.log. The audit daemon captures events like system calls and file activity, which makes it a critical tool in IR and hunting.


Key Audit Logs:

  • /var/log/audit/audit.log: This log captures authentication and privilege escalation events (su usage, for instance), as well as system calls.

  • System calls: Logs system-level activities and their context, such as user accounts and arguments.

  • File activity: If enabled, it monitors file read/write operations, execution, and attribute changes.


To analyze audit logs effectively, you can use:

  • ausearch: A powerful tool for searching specific terms.


    For example:

ausearch -f <file-name>      # Search events related to a file
ausearch -p <pid>            # Search events related to a process ID
ausearch -ui <user-id>       # Search events related to a specific user

This is particularly useful for finding specific events during IR.


There are many more options; it is worth checking the man pages in detail, or see https://linux.die.net/man/8/ausearch

  • aureport: Ideal for triage or baselining systems. It’s less granular than ausearch but provides a broader view that can help identify unusual behavior.


Configuration

The audit configuration is stored in /etc/audit/rules.d/audit.rules. For example, on a webserver, you could configure audit rules to monitor changes to authentication files or directories related to the webserver.

By customizing auditd, you can focus on high-priority areas during IR, such as monitoring for unauthorized changes to system files or authentication events.
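As a sketch, a rules file for the webserver scenario might contain watch rules like the following (auditd's -w/-p/-k watch syntax; the webroot path is illustrative and should be adjusted to your distribution's layout):

```
# Watch authentication files for writes and attribute changes:
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
# Illustrative webroot path; adjust for your server:
-w /var/www/html/ -p wa -k webroot_change
```

The -k key makes matching events easy to retrieve later, e.g. with ausearch -k identity.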


----------------------------------------------------------------------------------------------


1. Application Logs: Key to Incident Response


Application logs provide crucial insights during an incident response investigation. Logs stored in /var/log often include data from web servers, mail servers, and databases. Administrators can modify these log paths, and attackers with elevated privileges can disable or erase them, making log analysis a critical part of any forensic process.


Common Locations for Application Logs:


  • Webserver (Apache/HTTPd/Nginx): /var/log/apache2, /var/log/httpd, /var/log/nginx

  • Mail Server: /var/log/mail

  • Database: /var/log/mysqld.log, /var/log/mysql.log, /var/log/mariadb/*


(i) Application Logs: HTTPd Logs

Webserver logs, such as Apache or Nginx, are often the first place to investigate in incident response because they capture attacker enumeration activity, such as scanning or attempts to exploit web vulnerabilities. These logs reside in:


/var/log/apache2 (Ubuntu)
/var/log/httpd (CentOS)
/var/log/nginx (for Nginx servers)

These logs can be found on various servers, including web, proxy, and database servers, and help track attacks targeting specific web services.



2. Webserver Logs: Two Main Types

1. Access Log

  • Purpose: Records all HTTP requests made to the server. This log is critical for determining what resources were accessed, the success of these requests, and the volume of the response.


  • Important Fields:

    • IP Address: Tracks the client or source system making the request.

    • HTTP Request: Shows what resource was requested (GET, POST, etc.).

    • HTTP Response Code: Indicates if the request was successful (200), or unauthorized (401), among others.

    • Response Size: Displays the amount of data transferred in bytes.

    • Referer: Shows the source URL that directed the request (if available).

    • User Agent (UA): Provides details about the client (browser, operating system, etc.).

Example Access Log Entry:
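An illustrative entry in Apache's combined log format, with made-up values, showing the fields above in order (IP, timestamp, request, response code, response size, referer, user agent):

```
203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326 "http://example.com/" "Mozilla/5.0 (X11; Linux x86_64)"
```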


2. Error Log

  • Purpose: Records diagnostic information and alerts related to server issues such as upstream connectivity failures or backend system connection problems. It's useful for troubleshooting server-side issues.

  • SSL/TLS Logging: In some configurations, web servers also log SSL/TLS data (e.g., ssl_access_log) containing HTTPS requests, but these may lack User Agent strings and HTTP response codes.





Quick Incident Response Wins with Webserver Logs

  1. Review HTTP Methods Used:

    • Look for unusual or malicious HTTP methods like OPTIONS, DELETE, or PATCH, which may indicate scanning tools or attempted exploits.

    • Webshells often use POST requests to execute commands or upload files.


  2. Look for Suspicious Pages:

    • Use the HTTP 200 response code to identify successful requests.

    • Search for unusual or non-existent filenames (like c99.php, which is commonly used for webshells).


  3. Analyze User-Agent Strings:

    • Attackers may use default or uncommon User-Agent strings, which can help trace their activity.

    • Even though these strings can be spoofed, they’re still valuable for identifying patterns, especially for internal servers.
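The first quick win above can be scripted directly: extract the HTTP method from each request line and count occurrences. A sketch against a small illustrative access log; on a real system, run the same pipeline against your access_log* files:

```shell
# Illustrative combined-format access log; replace with the real
# access logs (e.g. /var/log/apache2/access.log*) during an IR.
cat > /tmp/access.sample <<'EOF'
203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326 "-" "Mozilla/5.0"
203.0.113.7 - - [10/Oct/2024:13:55:40 +0000] "POST /uploads/c99.php HTTP/1.1" 200 512 "-" "python-requests/2.28"
198.51.100.23 - - [10/Oct/2024:13:56:01 +0000] "POST /uploads/c99.php HTTP/1.1" 200 512 "-" "python-requests/2.28"
EOF

# Count requests per HTTP method (field 1 of the quoted request):
cut -d '"' -f2 /tmp/access.sample | cut -d ' ' -f1 | sort | uniq -c | sort -rn
```

A spike in POST requests to a single unfamiliar script, as in this sample, is a classic webshell indicator.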



Example Commands for Webserver Log Analysis

1. Checking Pages Requested:

cat access_log* | cut -d '"' -f2 | cut -d ' ' -f2 | sort | uniq -c | sort -n

This command will display a count of unique pages requested, making it easy to spot anomalies or repeated access to specific files.


2. Searching for Specific Methods (e.g., POST):

cat access_log* | grep "POST"

This will filter all POST requests, which can be indicative of webshells or exploits that use POST to upload or execute files.


3. Reviewing User Agent Strings:


cat access_log* | cut -d '"' -f6 | sort | uniq -c | sort -n

This extracts and counts unique User Agent strings, allowing you to spot unusual or uncommon strings that may belong to attackers.

(Adjust file names and paths to match the logs available on your system.)

Conclusion: Tailor the Strategy

An effective log strategy is key to unraveling the attack chain in an incident response. Start where the attacker likely started, whether that’s the web server, database, or another service. The primary goal is to build a clear timeline of the attack by correlating logs across different systems. By following these strategies, you can mitigate the damage and gather critical forensic data that will assist in remediating the incident and preventing future breaches.


Akash Patel

