
  • Windows Registry: A Forensic Goldmine for Installed Applications

The Windows Registry is like the DNA of an operating system: it tracks system configurations, user settings, and, most importantly, installed applications. For forensic investigators, this makes the Registry a valuable source of evidence, helping to identify what software has been installed, when it was installed, and even whether it has been uninstalled but left traces behind.

Where to Find Installed Applications in the Registry

Windows stores information about installed applications in multiple locations within the Registry. The primary locations include:

1. Uninstall Keys (Most Common Locations)

These keys list installed applications and provide details such as:

- Application Name
- Version Number
- Software Publisher
- File Size
- Installation Date
- Location on Disk

Registry paths for installed applications:

For all users:
1. SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall
2. SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall (for 32-bit apps on a 64-bit OS)

For specific users:
1. NTUSER\Software\Microsoft\Windows\CurrentVersion\Uninstall
2. NTUSER\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall

Many applications today are still 32-bit, even on 64-bit systems, so checking the WOW6432Node key is essential for a complete audit.

--------------------------------------------------------------------------------------------------------

Alternative Registry Locations for Installed Applications

Beyond the uninstall keys, additional locations provide useful data.

Microsoft Installer (MSI) Applications

SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\\Products\\InstallProperties

This key tracks software installed using MSI packages. If an application's UninstallString contains "msiexec.exe", the MSI Product Code (GUID) can be searched in the Registry to find more related details.
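To make the audit above repeatable, the key locations and fields can be sketched as a small checklist in Python. This is an illustrative sketch only: the four paths and the "YYYYMMDD" InstallDate format follow the article, but the helpers are hive-agnostic stand-ins meant to run against whatever your registry parser emits, not a real winreg-based implementation.

```python
# Sketch: installed-application audit helpers. UNINSTALL_PATHS mirrors the
# four locations listed above; parse_install_date handles the common
# "YYYYMMDD" string form of the InstallDate value; is_msi_install flags
# entries whose product GUID can be chased through the Installer keys.

from datetime import date, datetime
from typing import Optional

UNINSTALL_PATHS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
    r"NTUSER\Software\Microsoft\Windows\CurrentVersion\Uninstall",
    r"NTUSER\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def parse_install_date(value: Optional[str]) -> Optional[date]:
    """Parse an InstallDate value ("YYYYMMDD"); None if absent or malformed."""
    if not value:
        return None
    try:
        return datetime.strptime(value, "%Y%m%d").date()
    except ValueError:
        return None

def is_msi_install(uninstall_string: Optional[str]) -> bool:
    """True when the UninstallString points at msiexec.exe (MSI package)."""
    return bool(uninstall_string) and "msiexec.exe" in uninstall_string.lower()
```

A parser loop would visit every path in UNINSTALL_PATHS (in both the SOFTWARE hive and each user's NTUSER.DAT) and normalize each subkey through these helpers.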
Universal Windows Platform (UWP) Apps (Microsoft Store Apps)

SOFTWARE\Microsoft\Windows\CurrentVersion\Appx\AppxAllUserStore

This tracks Microsoft Store apps and built-in system applications.

Other Application Tracking Keys

Application Paths (shortcut information):
- SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths
- NTUSER\Software\Microsoft\Windows\CurrentVersion\App Paths

File extension tracking (recently used apps):
- NTUSER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts

Application-specific data storage (IntelliType keyboard software, etc.):
- NTUSER\Software\Microsoft\IntelliType Pro\AppSpecific

These locations contain file paths, execution history, and recently used file extensions, all of which can be useful when investigating software usage and digital artifacts.

--------------------------------------------------------------------------------------------------------

Tracking Software Installation Dates and Updates

Most installed applications have an InstallDate value, which records the last time the software was installed, updated, or repaired. However, not all applications store this data, and updates (such as Windows patches) can alter timestamps for multiple applications at once. If the InstallDate field is missing, the Registry key's last write time can sometimes be used as an estimate, though this method isn't always reliable because system-wide updates can reset many timestamps at once.

--------------------------------------------------------------------------------------------------------

Finding Uninstalled Software Evidence

Even when an application is removed, traces often remain in the Registry.
These can include:

- Leftover registry keys under the uninstall locations
- Recently used file extensions still linked to the software
- Application-specific MRU (Most Recently Used) lists stored elsewhere in the Registry

A simple keyword search across registry keys, values, and data can reveal hidden traces of software that no longer appears in the uninstall lists.

--------------------------------------------------------------------------------------------------------

Forensic Analysis of Installed Software: The Best Approach

To get a complete picture of installed applications, follow these steps:

1. Check all known uninstall registry keys for software records.
2. Look at the MSI Installer keys for software installed via Microsoft's installer service.
3. Audit UWP (Microsoft Store) applications in the Appx registry location.
4. Search for file extension associations and application paths to find recent usage.
5. Check Registry last write timestamps, but be aware that system updates affect their accuracy.
6. Use a forensic tool like Registry Explorer to automatically aggregate relevant data into a table for easier analysis.

--------------------------------------------------------------------------------------------------------

Final Thoughts

By analyzing multiple registry locations, investigators can track not just what software is installed, but also when and how it was installed, updated, or even removed. Timestamps can sometimes be unreliable due to system updates, so layering evidence from multiple sources is key to forming accurate conclusions.

By mastering Registry analysis, forensic investigators can uncover hidden applications, track software usage, and even identify traces of deleted programs, making it a crucial skill in digital forensics!

------------------------------------------------Dean--------------------------------------------------

  • Tracking Microphone and Camera Usage in Windows (Program Execution: CapabilityAccessManager)

With more people working remotely than ever before, concerns about privacy and unauthorized access to microphones and webcams have grown. Windows now includes built-in tracking features that log when applications use these devices. This information is stored in the Windows Registry, making it a valuable forensic artifact for investigators.

Where Does Windows Store Microphone and Camera Usage Data?

Starting with Windows 10 (build 1903) and continuing in Windows 11, Microsoft introduced new Registry keys that log when applications access sensitive devices like microphones, webcams, and location services. These logs are stored in the following locations:

For system-wide settings:
SOFTWARE\Microsoft\Windows\CurrentVersion\CapabilityAccessManager\ConsentStore

For user-specific settings:
NTUSER\Software\Microsoft\Windows\CurrentVersion\CapabilityAccessManager\ConsentStore

Each of these Registry locations contains subkeys that track permissions and usage details for different system capabilities. The ones of most interest to forensic investigators are:

- microphone → logs apps that accessed the microphone
- webcam → tracks camera usage
- location → monitors GPS or location data

How Windows Tracks Application Activity

Each application that requests access to the microphone or camera gets logged under these Registry keys. Microsoft applications (like Teams or Skype) are stored in dedicated keys, while other applications are grouped under a NonPackaged key. Each application entry contains:

- Application Name and Path – the full path of the program that accessed the device
- LastUsedTimeStart – the timestamp when the application started using the microphone or camera
- LastUsedTimeStop – the timestamp when the application stopped using it

These timestamps are stored in Windows FILETIME format (a 64-bit timestamp).
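Since these are raw FILETIME integers (100-nanosecond intervals since 1601-01-01 UTC), a short conversion makes them readable. A minimal Python sketch, assuming the two values have already been read out of the ConsentStore keys:

```python
# Sketch: convert LastUsedTimeStart / LastUsedTimeStop FILETIME values to
# UTC datetimes and compute how long the device was in use.
# FILETIME = number of 100-ns ticks since 1601-01-01 00:00:00 UTC.

from datetime import datetime, timedelta, timezone

FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_utc(filetime: int) -> datetime:
    """Convert a 64-bit Windows FILETIME integer to an aware UTC datetime."""
    return FILETIME_EPOCH + timedelta(microseconds=filetime // 10)

def session_duration(start_ft: int, stop_ft: int) -> timedelta:
    """How long an application held the microphone or camera."""
    return filetime_to_utc(stop_ft) - filetime_to_utc(start_ft)
```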
Investigators can convert these values into readable date and time formats to determine exactly when an app accessed the microphone or camera, and for how long.

Why This Data Matters in Forensic Investigations

This Registry data provides concrete evidence of microphone and camera activity, which can be useful in several scenarios:

1. Detecting Unauthorized Access. If a user suspects their microphone or webcam was activated without their knowledge, forensic analysts can check these keys to see if any suspicious applications accessed them.

2. Identifying Malware or Spyware. Not all applications that use the microphone or camera are legitimate. Malicious software that secretly records conversations or captures video might appear in these logs. If an unknown program shows up in the NonPackaged section or is running from an unusual location, it could be malware.

3. Investigating Insider Threats. In corporate investigations, these logs can help determine whether an employee used unauthorized software for video conferencing or recorded private meetings.

4. Digital Evidence in Criminal Cases. If an attacker used a victim's device to make calls, record video, or capture audio, these Registry logs could serve as key evidence, showing when and for how long the device was accessed.

Let's go through a real-world example. A few days ago, I was giving an interview on my personal laptop, and I wanted to check whether these details were logged in the Registry while using Zoom and the webcam. As mentioned earlier, I examined the Registry to see if any records were generated. Refer to the screenshot below: you can see that the activity was indeed logged. Pay special attention to the timestamp, which is recorded in UTC. If you look closely at the screenshot, the logs also captured how long the session lasted, providing a detailed record of the event.
Final Thoughts: A Valuable Source of Digital Evidence

The CapabilityAccessManager Registry keys provide an excellent resource for tracking microphone and camera usage. Whether you're investigating a privacy concern, looking for signs of malware, or gathering digital evidence in a forensic case, these logs offer valuable insights. However, it's crucial to cross-check this data with other forensic artifacts, such as event logs, system logs, and application history, to build a complete picture of user activity.

--------------------------------------------Dean-----------------------------------------

  • MFTECmd and MFT Explorer: A Forensic Analyst's Guide

When it comes to forensic tools, MFTECmd.exe is one of my go-to choices. It's part of the KAPE suite and an incredibly efficient tool for parsing NTFS artifacts like $MFT, $J, $Boot, $SDS, and $I30. While I've always relied on it, many have requested a detailed guide, so here we are.

---------------------------------------------------------------------------------------------------------

Before we dive into the details of this tool, note that there are already articles available on parsing $J and $MFT. You can check them out here: https://www.cyberengage.org/courses-1/ntfs-journaling

-------------------------------------------------------------------------------------------------------

MFTECmd: Parsing the Master File Table (MFT)

As the name suggests, MFTECmd is designed to parse the NTFS Master File Table (MFT). Developed by Eric Zimmerman, the tool converts MFT records into human-readable formats, making it easier to analyze files, including deleted ones, alternate data streams, copied files, and more.

Here's what makes MFTECmd stand out:

- Fast processing: it generates CSV or JSON output in under 40 seconds, even for large MFT files.
- Support for Volume Shadow Copies: with the --vss option, you can parse older versions of the MFT from Volume Shadow Copies.
- Deduplication: the --dedupe option helps eliminate duplicate entries, simplifying analysis.
- Command-line interface: while it may seem intimidating at first, its straightforward commands provide unparalleled flexibility.

Command: MFTECmd.exe -f F:\C\$MFT --csv C:\Users\User\Downloads --csvf mft.csv

Once you execute MFTECmd, the output will look like the screenshot below.

An Alternative: MFT Explorer

If you prefer a graphical user interface (GUI), MFT Explorer, also by Eric Zimmerman, is an alternative to MFTECmd.

- Tree view: MFT Explorer presents parsed MFT data in a Windows Explorer-like structure, making it easier to visualize files and folders.
- Rich metadata: it provides detailed information for each MFT record, including raw hex contents.
- Slower performance: due to its GUI and the sheer size of modern MFT files, loading can take up to an hour. While slower, it's an excellent tool for learning about the MFT.

It took me almost 10 minutes to get the $MFT opened in MFT Explorer. But once loaded, it created a complete Windows-like structure, which is expected because the $MFT (Master File Table) organizes the file system. See the screenshot above for a clear view.

-------------------------------------------------------------------------------------------------------

What Did We Find?

In this instance, I wanted to show you how to identify a file downloaded from the internet and retrieve the link it was downloaded from. This is possible through NTFS Alternate Data Streams (ADS), specifically the Zone.Identifier. If you're new to the concept, I recommend reading this article: Unveiling File Origins: The Role of Alternate Data Streams (ADS) - Zone Identifier in Forensic Investigations

-------------------------------------------------------------------------------------------------------------

Choosing the Right Tool

I suggest trying both tools and deciding what works best for you. Personally, I find MFTECmd.exe to be the best tool: it's quick, easy to use, and highly efficient. But who knows, you might prefer MFT Explorer for its graphical interface. The choice is yours!

Final Thoughts

MFTECmd is a powerful, fast, and efficient tool that simplifies NTFS artifact parsing, helping forensic analysts uncover critical insights in record time. While MFT Explorer offers a more visual approach, MFTECmd remains my top choice for its speed and flexibility. Experiment with both to find what works best for you. Remember, the ultimate goal is to keep learning and refining your forensic skills. Keep learning, exploring, and experimenting with different tools.
They all offer unique benefits and can deepen your forensic capabilities. See you in the next article! --------------------------------------------------Dean------------------------------------------

  • Understanding, Collecting, Parsing, and Analyzing the $MFT

Updated on 18 Feb, 2025

Master File Table ($MFT): the MFT is a database that stores information about every file and directory on an NTFS volume. It is essentially a metadata repository, containing a record for each file, including its attributes and metadata. To understand the $MFT and its structure, check out this article: https://www.cyberengage.org/courses-1/insights-into-file-systems-and-anti-forensics

Collection:

We'll use KAPE to acquire the NTFS Master File Table ($MFT) and journals, then employ MFTECmd to parse the MFT. The KAPE triage compound target covers the $MFT, $J, and link-file targets. KAPE's output structure, with raw files and parsed outputs, shows how efficient this workflow is for gathering artifacts for analysis. KAPE can be used as a GUI or a command-line tool, depending on your preference.

----------------------------------------------------------------------------------------------------

Parsing:

There is a complete article on parsing the $MFT using MFT Explorer/MFTECmd: https://www.cyberengage.org/post/mftecmd-mftexplorer-a-forensic-analyst-s-guide

----------------------------------------------------------------------------------------------------

Analyzing:

Column headers: as we begin our exploration, take note of the extensive list of column headers. These headers provide essential information about MFT entries, including file names, sizes, and, crucially, timestamps.

Understanding timestamps: each timestamp column corresponds to a specific aspect of file operations, such as creation (B), modification (M), and access (A). The timestamps are presented in hex-labeled columns.
Hex 0x10 denotes the $STANDARD_INFORMATION ($SI) attribute, while 0x30 denotes $FILE_NAME ($FN).

----------------------------------------------------------------------------------------------------

Detecting Time Stomping

Time stomping can be detected by comparing the $SI and $FN timestamps: because $SI timestamps are easy to manipulate from user mode while $FN timestamps are not, discrepancies between the two can reveal tampering. You can learn more about timestomping in the article below: https://www.cyberengage.org/post/anti-forensics-timestomping

----------------------------------------------------------------------------------------------------

Interpreting Blank Timestamps:

You may notice some blank timestamps in columns ending with hex 0x30. These blanks signify that the $FILE_NAME timestamps are identical to the corresponding $STANDARD_INFORMATION timestamps. This design choice reduces noise in the data and directs attention to entries where the timestamps diverge, aiding in identifying suspicious activity.

---------------------------------------------Dean---------------------------------------------------------
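The $SI-versus-$FN comparison described above can be sketched in a few lines of Python. This is a simplified heuristic, not a complete detector: legitimate operations (for example, moves that preserve $SI creation times) can also make $SI predate $FN, so hits are leads to verify, not verdicts.

```python
# Sketch: flag potential timestomping by comparing the $STANDARD_INFORMATION
# (0x10) and $FILE_NAME (0x30) created timestamps from parsed MFT output.
# $SI timestamps can be altered from user mode; $FN timestamps generally
# cannot, so an $SI creation time earlier than $FN's is a classic red flag.

from datetime import datetime

def looks_timestomped(si_created: datetime, fn_created: datetime) -> bool:
    """True when the $SI creation time predates the $FN creation time."""
    return si_created < fn_created
```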

  • Breaking Down the $LogFile and How to Use LogFileParser

When it comes to forensic analysis, the $LogFile is one of those artifacts that hasn't received as much attention as other NTFS structures. However, the $LogFile is packed with valuable forensic data, storing full details of changes to critical structures like the $MFT, $I30 indexes, $Bitmap, and even the $UsnJrnl itself. For parsing the $LogFile, one of the best free tools available is LogFileParser by Joakim Schicht. This tool simplifies the process of parsing the $LogFile and provides multiple output files that make sense of all the data it contains.

Why Is the $LogFile Important?

The $LogFile keeps track of changes happening within the NTFS file system. It records transaction logs, including file creations, modifications, renames, and deletions. Even though it doesn't store traditional timestamps for each event, it uses Log Sequence Numbers (LSNs) to maintain order, which helps in reconstructing events over time.

------------------------------------------------------------------------------------------------------------

How LogFileParser Helps

LogFileParser is designed to extract useful information from the $LogFile efficiently. The primary output file, LogFile.csv, provides an overview of what's stored in the log. This file is massive, often containing over 100,000 rows and 60+ fields, although not every field is populated for every entry. For a more targeted approach, the tool also generates:

- LogFile_INDX_I30.csv – extracts metadata from $I30 index entries, including file names, MACB timestamps, file sizes, flags, and MFT record numbers.
- LogFile_FileNames.csv – consolidates file and directory names found within the $LogFile, along with their corresponding MFT record numbers and LSNs.

LSN values allow us to piece together the order of events. For instance, if you find a suspicious file in LogFile_FileNames.csv, you can track its LSN back to LogFile.csv and analyze what actions were taken before and after that event.
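That LSN pivot is easy to script once the CSVs are loaded. A minimal sketch: the record layout here (an "lsn" field per row) is an illustrative assumption, not LogFileParser's exact column headers, so map your parsed rows into this shape first.

```python
# Sketch: given records loaded from LogFile.csv and a pivot LSN found in
# LogFile_FileNames.csv, return the surrounding window of events in LSN
# order, so you can see what happened just before and after the hit.

def lsn_window(records: list[dict], pivot_lsn: int,
               before: int = 2, after: int = 2) -> list[dict]:
    """Return records around pivot_lsn, ordered by LSN; [] if not found."""
    ordered = sorted(records, key=lambda r: r["lsn"])
    idx = next((i for i, r in enumerate(ordered) if r["lsn"] == pivot_lsn), None)
    if idx is None:
        return []
    return ordered[max(0, idx - before): idx + after + 1]
```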
------------------------------------------------------------------------------------------------------------

Recovering Deleted Files

One of the most powerful features of LogFileParser is its ability to help recover deleted files. While the $LogFile doesn't store actual file data, it does retain cluster run information, which tells us where data was stored on disk. This can be a game-changer if a file's MFT record has been overwritten, as the original cluster locations may still be recoverable. To enable this feature, use the /ReconstructDataruns option, which attempts to rebuild data runs for fragmented files, a task that traditional file carving techniques struggle with.

------------------------------------------------------------------------------------------------------------

How to Use LogFileParser

LogFileParser comes with both a GUI and a command-line interface, and running it is straightforward. By default, the output directory is created next to the LogFileParser executable. To launch the GUI, simply double-click LogFileParser.exe, which is typically located in:

------------------------------------------------------------------------------------------------------------

Alternative Tools: TZWorks Mala

If you're looking for an alternative, TZWorks' Mala is a solid commercial tool for analyzing the $LogFile. It's incredibly fast, organizes data in a more readable format, and is actively maintained. Even if you're not purchasing the tool, TZWorks provides excellent documentation explaining how forensic artifacts work, making it a great reference for learning more about the $LogFile.

------------------------------------------------------------------------------------------------------------

Final Thoughts

Parsing the $LogFile isn't always the first thing that comes to mind in forensic investigations, but it can be incredibly useful.
Whether you’re tracking file changes, recovering deleted metadata, or trying to reconstruct the timeline of an incident, tools like LogFileParser and Mala can help extract valuable information . If you haven't already, give LogFileParser a try and see what hidden details you can uncover from the $LogFile! -------------------------------------Dean----------------------------------------------------------

  • Understanding the $UsnJrnl and $J, and How to Parse and Analyze Them

Updated on 18 Feb, 2025

If you're digging into NTFS file system changes, the $UsnJrnl (Update Sequence Number Journal) is one of the best forensic artifacts you can analyze. It keeps track of changes happening on the volume, making it a go-to resource for investigators. The good news? There are great tools available to parse the $UsnJrnl. One of the best, and my favorite, is Eric Zimmerman's MFTECmd.

---------------------------------------------------------------------------------------------------------

If you want to learn about running MFTECmd against the $MFT, check out this article: https://www.cyberengage.org/post/mftecmd-mftexplorer-a-forensic-analyst-s-guide

-------------------------------------------------------------------------------------------------------

How to Use MFTECmd to Parse the $UsnJrnl

Running MFTECmd against the $UsnJrnl is pretty much the same as running it against the $MFT (Master File Table). The only difference? You need to specify the $J alternate data stream (ADS) in addition to the $MFT file. Here are two simple commands to get started.

In the first command, you include the $MFT while parsing $J:

MFTECmd.exe -f G:\G\$Extend\$J -m G:\G\$MFT --csv "E:\Output for testing\Website investigation" --csvf usnjrnl.csv

What's happening here?

- -f G:\G\$Extend\$J → points to the $UsnJrnl file.
- -m G:\G\$MFT → includes the $MFT file to cross-reference file entries and build full path information.
- --csv "E:\Output for testing\Website investigation" --csvf usnjrnl.csv → outputs the parsed data into a CSV file for easy analysis.

This will generate a CSV file that contains every change recorded in the journal. In a real-world case, a forensic investigator ran this on a compromised system and parsed 384,493 records, covering 70 hours (about 3 days) of system activity. That's a lot of valuable data!
In the second command, you do not include the $MFT while parsing $J:

MFTECmd.exe -f G:\G\$Extend\$J --csv "E:\Output for testing\Website investigation" --csvf usn.csv

What's the difference? Compare the output screenshots: with the $MFT, you get the full path of the executable; without the $MFT, there is no path.

------------------------------------------------------------------------------------------------------------

If you have been following along, you may have seen my earlier article, "Tracing Reused $MFT Entries Paths: Recovering Deleted File Paths Forensically with CyberCX UsnJrnl Rewind": https://www.cyberengage.org/post/tracing-reused-mft-entries-paths-recovering-deleted-file-paths-forensically-with-cybercx-usnjrnl

The good news is that if you use -m G:\G\$MFT while parsing the $J file, you do not have to download an extra tool; you get the same output.

----------------------------------------------------------------------------------------------------------

Parsing Change Journals in Volume Snapshots

One of the coolest things about MFTECmd is that it lets you analyze volume shadow copies (VSS). Since Windows maintains snapshots of the volume, you can extract past versions of the $UsnJrnl and extend your timeline even further. To do this, just add the --vss flag:

MFTECmd.exe -f G:\$Extend\$J -m G:\C\$MFT --vss --csv "E:\Output for testing\Website investigation" --csvf usnvss.csv

What's different here? Instead of running on a single $J file, this command extracts data from all available volume snapshots. The result? Multiple CSVs, each containing records from different points in time. If you're mounting a full disk image using Arsenal Image Mounter, you can run this command on the mounted drive (e.g., G:) to retrieve historical data.
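When running these variants across many evidence drives, it helps to build the argument lists programmatically. A small sketch: the flags (-f, -m, --csv, --csvf, --vss, --dedupe) are MFTECmd's own options as used in the commands above, while the paths are placeholders.

```python
# Sketch: assemble MFTECmd.exe argument lists for the $J-parsing variants
# shown above (with/without $MFT cross-referencing, with/without --vss).

from typing import Optional

def mftecmd_args(j_path: str, out_dir: str, out_name: str,
                 mft_path: Optional[str] = None,
                 vss: bool = False, dedupe: bool = False) -> list[str]:
    """Argument list for parsing a $J stream with MFTECmd.exe."""
    args = ["MFTECmd.exe", "-f", j_path]
    if mft_path:                      # enables full-path reconstruction
        args += ["-m", mft_path]
    if vss:                           # also parse volume shadow copies
        args.append("--vss")
    if dedupe:                        # skip identical streams by hash
        args.append("--dedupe")
    args += ["--csv", out_dir, "--csvf", out_name]
    return args
```

The returned list can be handed straight to a process launcher (e.g. subprocess.run on a Windows analysis box).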
----------------------------------------------------------------------------------------------------------

Avoiding Duplicate Entries

Another handy feature in MFTECmd is the --dedupe option. This checks the hash of each file before parsing to avoid duplicate entries. While it's rare for different snapshots to contain identical $J streams, this option saves time and storage when working with large datasets.

MFTECmd.exe -f G:\$Extend\$J -m G:\C\$MFT --dedupe --vss --csv "E:\Output for testing\Website investigation" --csvf usnvss.csv

----------------------------------------------------------------------------------------------------------

Alternative Tool: TZWorks' "JP"

If you're looking for another solid tool, TZWorks' "JP" is a great option for parsing the $UsnJrnl. It comes with advanced carving features, which means it can recover records even from partially corrupted or deleted change journals. This is super useful in forensic investigations where data integrity is an issue.

----------------------------------------------------------------------------------------------------------

Analyzing the $J Output

Understanding column headers: as we dive into the USN journal, the column headers are mostly self-explanatory. However, one column warrants special attention: the "Update Reasons" column. It holds values such as file create, file delete, and rename, which provide detailed information about each file-related action recorded in the journal.

Example analysis: suppose we search for a file named "New Text Document.txt" and identify its entry number in the journal. By filtering the journal entries based on this entry number, we can observe a chronological timeline of events related to this file.

Reconstructing file activity: in this example, we observe a series of operations involving the file "New Text Document.txt", including its renaming to "creds.txt.txt".
This sequence of events, captured in the USN journal, provides a comprehensive narrative of the file's journey on the system.

-------------------------------------------------------------------------------------------------------------

Reference video: https://www.youtube.com/watch?v=_qElVZJqlGY&ab_channel=13Cubed

-------------------------------------------------------------------------------------------------------------

Final Thoughts

The $UsnJrnl is a goldmine when investigating file system changes. Thanks to tools like MFTECmd and TZWorks' JP, forensic analysts can quickly extract, cross-reference, and analyze these logs with ease. Whether you're examining a live system, a forensic image, or volume snapshots, these tools help uncover what really happened on a system: no guesswork needed. Happy hunting! 🔍🚀

  • Making Sense of $UsnJrnl and $LogFile: Why Journal Analysis is a Game Changer

Updated 18 Feb, 2025

Now that we've got a solid grasp on how $UsnJrnl and $LogFile work, let's dive into how we can actually use them for analysis.

Which One is Easier to Read?

If you quickly scan through both, you'll notice that $UsnJrnl is much easier to understand. It's well-documented, and Microsoft provides clear explanations for its codes. On the other hand, $LogFile events are messier and less documented, making their analysis trickier. Plus, a single file action can generate a flood of $LogFile events, adding to the complexity. However, $LogFile holds far more detail about a file than $UsnJrnl does, including MFT attributes and $I30 index records. If you find something suspicious in $LogFile, it's definitely worth the extra effort to analyze it. For more in-depth details, check out the presentation "NTFS Log Tracker" from Forensic Insight; it's a great breakdown of $LogFile analysis.

Key Markers in $UsnJrnl and $LogFile

Action                        | $LogFile Codes                                          | $UsnJrnl Codes
File/Directory Creation       | AddIndexEntryAllocation, InitializeFileRecordSegment    | FileCreate
File/Directory Deletion       | DeleteIndexEntryAllocation, DeallocateFileRecordSegment | FileDelete
File/Directory Rename or Move | DeleteIndexEntryAllocation, AddIndexEntryAllocation     | RenameOldName, RenameNewName
ADS Creation                  | CreateAttribute with name ending in ":ADS"              | StreamChange, NamedDataExtend
File Data Modification        | (op codes often insufficient to determine modification) | DataOverwrite, DataExtend, DataTruncation

1. Detecting File or Directory Creation

- $LogFile: look for the combination of InitializeFileRecordSegment and AddIndexEntryAllocation. This means a new "FILE" record was allocated in the MFT, and an entry was added to the parent directory.
- $UsnJrnl: the FileCreate event is a clear indicator of a new file or directory. Simple and straightforward!

2. Detecting File or Directory Deletion

- $LogFile: look for DeleteIndexEntryAllocation and DeallocateFileRecordSegment together.
This means the file's MFT record was removed and the index entry was deleted from the parent directory.

- $UsnJrnl: just look for the FileDelete event; it directly indicates a file or directory was deleted.

3. Detecting File or Directory Renaming

- $LogFile: a rename shows up as DeleteIndexEntryAllocation (removing the old name) and AddIndexEntryAllocation (adding the new name).
- $UsnJrnl: you'll see RenameOldName followed by RenameNewName, showing both the old and new names in a clear sequence.

4. Detecting File or Directory Movement

- $LogFile: just like renaming, a move generates DeleteIndexEntryAllocation and AddIndexEntryAllocation events. The difference? The file name stays the same, but the parent directory changes.
- $UsnJrnl: similar to renaming, you'll see RenameOldName followed by RenameNewName. The key difference is that the file's location changes while the name remains the same.

5. Detecting Alternate Data Stream (ADS) Creation

- $LogFile: if an ADS is created, a CreateAttribute event will show up, referencing a stream name ending with :ADS. Since ADS creation isn't common, looking at these events can help you spot hidden or suspicious files.
- $UsnJrnl: the StreamChange event logs any ADS activity. If followed by NamedDataExtend, it confirms that data was added to a newly created ADS. Attackers sometimes delete an ADS to evade detection, but spotting its creation is already a win.

6. Detecting File Data Modification

- $LogFile: this journal doesn't directly track file data changes, but it does record metadata updates like timestamp changes, allocation status updates, and modifications to the $DATA attribute.
- $UsnJrnl: when a file's content is modified, you'll see a DataOverwrite or DataExtend event, making it easier to track changes.
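The detection patterns above can be collapsed into a small $UsnJrnl-side classifier. This is a simplified sketch: real records carry the reasons as a bitmask that parsers render to names, and since a rename and a move both emit RenameOldName/RenameNewName, the helper takes the old and new parent paths as extra inputs to tell them apart.

```python
# Sketch: map the $UsnJrnl reason names from the table above to a
# high-level action for one file's accumulated journal records.

def classify(reasons: set, old_parent: str = "", new_parent: str = "") -> str:
    """Classify a file's journal activity from its $UsnJrnl reason names."""
    if "FileCreate" in reasons:
        return "creation"
    if "FileDelete" in reasons:
        return "deletion"
    if {"RenameOldName", "RenameNewName"} <= reasons:
        # Same reason pair for rename and move; the parent path decides.
        if old_parent and new_parent and old_parent != new_parent:
            return "move"
        return "rename"
    if "StreamChange" in reasons:
        return "ADS activity"
    if reasons & {"DataOverwrite", "DataExtend", "DataTruncation"}:
        return "data modification"
    return "other"
```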
-------------------------------------------------------------------------------------------------------------

Wrapping Up

By combining insights from both $UsnJrnl and $LogFile, forensic analysts can uncover valuable details about file system activity. While $UsnJrnl offers a cleaner, high-level view, $LogFile provides deep, granular insights that can be critical in investigations. If you're looking to dive deeper into NTFS forensic analysis, checking out tools like istat for parsing MFT records and referencing the "NTFS Log Tracker" presentation will help sharpen your skills. Happy hunting!

-----------------------------------------------------------------------------------------------------------

Why Journal Analysis Matters

Let's say we're investigating a file that was renamed, moved, and later deleted. With just the $MFT, if we're lucky and the file's record hasn't been overwritten, we can find out its last recorded name and location before deletion. But we won't know when it was deleted, because $MFT timestamps don't capture that event.

Now, enter journal analysis. With it, we can see the entire history of the file:

- When it was renamed, and to what
- When it was moved, and where
- Exactly when it was deleted

That's a huge advantage! Having this level of visibility helps us reconstruct an attacker's actions, track malware movements, and understand what happened on a system.

-----------------------------------------------------------------------------------------------------------

Smart Filtering for Better Insights

Since both the $UsnJrnl and $LogFile track file system changes, we can use creative searches and filters to uncover critical details. A good starting point is analyzing the $UsnJrnl first, since it has a longer history and is easier to read. Then, we can refine our investigation with the $LogFile for more granular details.
Here are some high-value filters to consider:
Key System Directories to Watch
C:\Windows & C:\Windows\System32
C:\Windows\Prefetch
Temp directories
C:\Users\*\Downloads
C:\Users\*\AppData\Roaming\Microsoft\Windows\Recent
C:\$Recycle.Bin
A major win in an investigation is identifying where attackers store their tools and stolen data. Once we find their working directory, we can:
Look for similar directories on other machines
Identify additional Indicators of Compromise (IOCs)
Recover deleted or moved files
For example, if we discover the attacker’s directory, filtering the journals for activity in that location might show us files that were once there but are now missing.
File Types Worth Searching
Regardless of location, some file types are always worth investigating:
Executables (.exe, .dll, .sys, .pyd)
Scripts (.ps1, .vbs, .bat)
Archives (.rar, .7z, .zip, .cab)
Known IOCs (file or folder names linked to the attack)
Since searching for executables like .exe and .dll can produce a ton of results, it’s best to filter by directories of interest, such as System32 or Temp folders.
-----------------------------------------------------------------------------------------------------------
Conclusion
Using just the $MFT gives us a snapshot in time, but combining it with journal analysis gives us a dynamic view of file system activity. By filtering for key directories, attacker working directories, and high-risk file types, we can uncover hidden traces of an attack, track an attacker’s movements, and build a stronger case in our investigations. So, the next time you're diving into forensic analysis, don’t just stop at the $MFT—dig into the journals and see the full picture!
------------------------------------------------Dean------------------------------------------------

  • Understanding NTFS Journaling ($LogFile and $UsnJrnl): A Goldmine for Investigators

    Updated 18 Feb, 2025
Ever wonder how your computer keeps track of all the changes happening to files and folders? That’s where NTFS journaling comes in. Think of it as a built-in security camera for your file system, constantly recording what’s going on. For forensic investigators, this is a goldmine of information, helping them rewind time and see exactly what happened on a system.
--------------------------------------------------------------------------------------------------------
The Two Journals: $LogFile and $UsnJrnl
NTFS actually has two separate journaling features, each with its own purpose:
$LogFile – This is like a system safety net. It records every change happening at a low level, ensuring that if the system crashes, it can recover without corrupting data.
$UsnJrnl – This is more like an activity tracker, logging file and folder changes so applications (like antivirus or backup software) can react efficiently.
Both of these logs give investigators an incredible amount of visibility into past file system activity. Since they’re also backed up inside Volume Shadow Copies, they can provide insights stretching back days, weeks, or even months!
--------------------------------------------------------------------------------------------------------
How These Logs Help Investigators
Think of an airplane’s black box. There are two recorders: one tracks flight data (like altitude and speed), and the other records cockpit conversations. In a similar way:
$LogFile is like the flight data recorder, tracking deep system changes at a technical level.
$UsnJrnl is like the cockpit voice recorder, summarizing higher-level file activity.
A better analogy might be comparing them to network security tools:
$LogFile is like full packet capture—detailed but heavy on data.
$UsnJrnl is like NetFlow logs—less detailed but covers a longer time span.
--------------------------------------------------------------------------------------------------------
Breaking Down $LogFile
$LogFile’s main job is keeping NTFS stable, making sure the file system doesn’t corrupt itself if something goes wrong. It records:
Changes to the Master File Table (MFT)
Directory updates in $I30 indexes
Modifications to $UsnJrnl itself (if enabled)
Changes in the $Bitmap file, which tracks disk space
Even self-maintenance events (it logs its own updates!)
What makes $LogFile especially valuable is that it doesn’t just log what changed—it records the actual data that was modified. This means forensic analysts can sometimes recover deleted data by analyzing these logs. However, since NTFS constantly updates multiple system files at once, even simple actions like creating a new file can generate dozens of log entries.
--------------------------------------------------------------------------------------------------------
The Downside: $LogFile is Short-Lived
The catch? $LogFile is only 64 MB by default. That might sound like a lot, but with so much happening under the hood, it typically only holds a few hours’ worth of data on active systems. However, if a system is mostly idle or you’re looking at logs from a secondary drive, you might find logs stretching back days or even weeks.
Want to check or increase your $LogFile size? Use these commands:
Check current size: chkdsk /L
Increase size: chkdsk /L:<size in KB>
--------------------------------------------------------------------------------------------------------
What NTFS Journaling Won’t Do
While NTFS journaling is great at tracking file system changes, it doesn’t protect actual file content. If your system crashes while a file is being written, NTFS can repair the file system, but the file itself might still be corrupt. This is why databases and critical applications maintain their own transaction logs—to ensure their data stays intact even if the system crashes.
--------------------------------------------------------------------------------------------------------
$UsnJrnl
The NTFS file system has a hidden gem called the Update Sequence Number (USN) Change Journal, stored in a system file named $UsnJrnl. This file keeps a log of all file and directory changes, along with a reason code indicating what type of modification occurred. While it does help with system recovery in some cases (like quickly re-indexing a volume), its primary role is to let applications efficiently track file changes across the system.
Why Does $UsnJrnl Matter?
Think about how Windows Backup works. Instead of scanning every single file to see what’s changed, it just checks the USN journal for recent modifications—saving tons of time. The same applies to antivirus software, the Windows Search Index, File Replication Service (FRS), and other applications that need to monitor file activity. Because of its efficiency, Microsoft made sure that $UsnJrnl was enabled by default starting with Windows Vista (it was available in Windows XP and 2000 but usually disabled).
--------------------------------------------------------------------------------------------------------
How It Works: A Simpler View
Compared to the other NTFS journal, $LogFile, which tracks every tiny system change, $UsnJrnl is much more concise and user-friendly. If a new file is created, for instance, $LogFile might log over 20 detailed system events, while $UsnJrnl simplifies it down to just a few records. This makes it a lot easier for investigators and forensic tools to interpret.
Each USN record logs:
File or folder name
MFT number (unique identifier in NTFS)
Parent directory’s MFT number
Timestamp of change
Reason code (what changed?)
File size and attributes (hidden, read-only, archived, etc.)
Because it logs only major changes, $UsnJrnl can store several days or even weeks of history, depending on system activity.
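The reason code in each record is a bitmask, and several flags are usually combined in one record (for example, FileDelete together with Close). A small sketch for turning the raw mask into readable names, using a subset of the USN_REASON_* constants defined in Windows' winioctl.h:

```python
# Sketch: decode a raw USN "reason" bitmask into readable flag names.
# The values are a subset of the USN_REASON_* constants from winioctl.h.

USN_REASONS = {
    0x00000001: "DataOverwrite",
    0x00000002: "DataExtend",
    0x00000010: "NamedDataOverwrite",
    0x00000020: "NamedDataExtend",
    0x00000100: "FileCreate",
    0x00000200: "FileDelete",
    0x00001000: "RenameOldName",
    0x00002000: "RenameNewName",
    0x00200000: "StreamChange",
    0x80000000: "Close",
}

def decode_reason(mask):
    # Walk the known bits in ascending order and collect the set flags
    return [name for bit, name in sorted(USN_REASONS.items()) if mask & bit]
```

Most parsing tools do this decoding for you, but it is handy when eyeballing raw records.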
And since these logs are often backed up in volume shadow copies, forensic investigators can sometimes recover over a month’s worth of historical file activity.
--------------------------------------------------------------------------------------------------------
Where and How Is It Stored?
$UsnJrnl isn’t stored like a regular file—the USN records actually live in an alternate data stream (ADS) of the $UsnJrnl system file, named $J. Unlike numbered log entries, each record is positioned based on its offset into the $J stream. Every file and directory has an Update Sequence Number in its MFT record, which links to the matching entry in $J.
Since $UsnJrnl is a locked and hidden system file, standard tools won’t allow access; you’ll need forensic utilities to extract it. Also, because $J is a sparse file (meaning it appears large but isn’t fully written to disk), when you try to copy it, the system will fill in the missing parts, leading to massive file sizes. It can often exceed 3 GB on a typical workstation, making remote collection tricky. Fortunately, it compresses well.
--------------------------------------------------------------------------------------------------------
A Hidden Benefit: Recovering Deleted USN Records
Even though the journal size is capped (usually 32 MB), Windows tricks NTFS into thinking it’s a much larger file. When new entries are added, Windows allocates disk space at the end while deallocating older parts, marking them as sparse (empty). This means deleted USN records often remain in unallocated space, allowing forensic tools to recover them.
--------------------------------------------------------------------------------------------------------
The $Max Data Stream
There’s another small alternate data stream in $UsnJrnl called $Max. It’s tiny (about 32 bytes) and stores metadata like the maximum allowed size of the journal.
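If you are working from a raw $J extraction (or carved records from unallocated space) rather than a parsing tool, individual records can be decoded by hand. The sketch below follows Microsoft's documented USN_RECORD_V2 layout; treat it as a starting point rather than a full parser, since it assumes a well-formed version 2 record at the given offset:

```python
import struct
from datetime import datetime, timedelta

# Sketch: decode one USN_RECORD_V2 from raw $J bytes. Field offsets follow
# Microsoft's documented structure (record header is 0x3C bytes, followed by
# the UTF-16LE file name). Timestamps are FILETIME: 100-ns ticks since 1601.

EPOCH_1601 = datetime(1601, 1, 1)

def parse_usn_record_v2(buf, offset=0):
    (length, major, minor, file_ref, parent_ref, usn,
     timestamp, reason, source_info, security_id, attrs,
     name_len, name_off) = struct.unpack_from("<IHHQQQQIIIIHH", buf, offset)
    name = buf[offset + name_off: offset + name_off + name_len].decode("utf-16-le")
    return {
        "length": length,
        "mft_entry": file_ref & 0xFFFFFFFFFFFF,      # low 48 bits = MFT record number
        "parent_entry": parent_ref & 0xFFFFFFFFFFFF,
        "usn": usn,
        "timestamp": EPOCH_1601 + timedelta(microseconds=timestamp // 10),
        "reason": reason,
        "name": name,
    }
```

Note how the file reference splits into a 48-bit MFT entry number plus a 16-bit sequence number; the mask keeps only the entry number.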
--------------------------------------------------------------------------------------------------------
Investigation with KAPE
Use KAPE to acquire the NTFS Master File Table ($MFT) and the journals. Then use MFTECmd to parse the $MFT and the USN journal ($J); note that MFTECmd does not parse $LogFile, so that file needs a separate tool (such as NTFS Log Tracker). KAPE’s triage collection uses a compound target that grabs the $MFT, $J, $LogFile, and link files in one pass. Its output structure separates the raw files from the parsed outputs, which makes this workflow an efficient way to gather artifacts for analysis. KAPE can be run either through its GUI or from the command line—whichever you prefer.
-------------------------------------------------------------------------------------------------
Final Thoughts
The NTFS USN journal is an incredibly valuable forensic resource. It logs file changes in a structured, efficient manner and can provide a historical view of system activity stretching back weeks or even months. While Windows limits its size, forensic analysts can often recover old records, making it a powerful tool in investigations—whether for system maintenance, security monitoring, or digital forensics.
----------------------------------------------Dean----------------------------------------------

  • Tracking Recently Opened Files in Microsoft Office: A Forensic Guide

    When investigating user activity on a Windows system, knowing what files were accessed and when can provide critical insights. While Windows keeps a list of recently opened files in the RecentDocs registry key, Microsoft Office maintains an even more detailed record called File MRU (Most Recently Used). This registry key tracks documents, spreadsheets, and presentations opened in Office applications—often storing more history than RecentDocs.
-------------------------------------------------------------------------------------------------------------
Where Does Microsoft Office Store Recent Files?
Each version of Microsoft Office stores a File MRU list, which logs files opened in Word, Excel, PowerPoint, and other Office applications. The registry location varies based on the Office version and user account type:
For Office 2013, 2016, 2019, and Microsoft 365:
NTUSER\Software\Microsoft\Office\<version>\[App]\File MRU
(Office 2016, 2019, and Microsoft 365 all use "16.0" because they share the same code base.)
For Microsoft 365 tied to a personal Microsoft account:
NTUSER\Software\Microsoft\Office\<version>\User MRU\LiveID_####\File MRU
For Microsoft 365 accounts tied to an organization (Azure Active Directory):
NTUSER\Software\Microsoft\Office\<version>\User MRU\ADAL\File MRU
Alongside File MRU, Office also maintains a Place MRU key, which tracks folder locations accessed by the user.
-------------------------------------------------------------------------------------------------------------
What Information Can You Find in File MRU?
Each entry in File MRU contains:
✅ Full File Path – Unlike RecentDocs (which only stores filenames), File MRU lists the complete file location.
✅ Last Accessed Timestamp – Stored in Windows 64-bit FILETIME format (big-endian hex).
✅ Order of Access – The most recently opened document is stored as Item 1, followed by older entries.
✅ Up to 100+ Entries – Newer Office versions keep a longer history.
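File MRU items are typically stored as strings of the form [F...][T<hex>][O...]*<full path>, where the T field holds the 64-bit FILETIME (100-nanosecond intervals since 1601-01-01) written as hex. A minimal decoding sketch; the regex and function name are mine, not part of any Office or Windows API:

```python
import re
from datetime import datetime, timedelta

# Sketch: decode the [T...] timestamp out of an Office File MRU item string.
# Assumes the common "[F...][T<16 hex digits>][O...]*<path>" value layout.

MRU_RE = re.compile(r"\[T([0-9A-Fa-f]{16})\]\[[^\]]*\]\*(.+)")

def parse_file_mru(item):
    m = MRU_RE.search(item)
    if m is None:
        return None
    filetime = int(m.group(1), 16)               # 100-ns ticks since 1601-01-01
    opened = datetime(1601, 1, 1) + timedelta(microseconds=filetime // 10)
    return opened, m.group(2)
```

The same FILETIME conversion applies anywhere the registry stores 64-bit Windows timestamps, which is why it is worth keeping as a reusable helper.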
This is particularly useful because it allows forensic analysts to see exactly when a file was last opened and where it was stored (local drive, USB, network share, etc.).
-------------------------------------------------------------------------------------------------------------
Tracking More Than Just File Open Times: Reading Locations
Starting with Office 2013, Microsoft introduced the Reading Locations registry key, which remembers where a user left off in a document. This is the feature behind the “Welcome back! Pick up where you left off” message when reopening a Word document.
Registry Location for Reading Locations
NTUSER\Software\Microsoft\Office\<version>\Word\Reading Locations
How Can This Data Be Used in Investigations?
Forensic analysts and cybersecurity professionals can use File MRU and Reading Locations to:
🔍 Track User Activity – Identify recently accessed files and determine if unauthorized documents were viewed.
💾 Recover Deleted Evidence – Even if a file is deleted, its MRU entry remains in the registry until overwritten.
📂 Identify Storage Locations – Determine if files were accessed from USB drives, network shares, or cloud folders.
⏳ Estimate Document Usage Duration – By comparing the File MRU (last opened time) with Reading Locations (last closed time), you can estimate how long a file was in use.
Final Thoughts
When conducting an investigation, don’t just stop at RecentDocs—dig deeper into the Microsoft Office registry keys for a clearer picture of file usage! 🚀
--------------------------------------------Dean----------------------------------------------------------

  • Understanding, Collecting, and Parsing the $I30

    Updated on Feb 17, 2025
Introduction:
In the intricate world of digital forensics, every byte of data tells a story. Within the NTFS file system, $I30 files stand as silent witnesses, holding valuable insights into file and directory indexing.
Understanding $I30 Files:
$I30 files function as indexes within NTFS directories, providing a structured layout of files and directories. They contain duplicate sets of $FILE_NAME timestamps, offering a comprehensive view of file metadata stored within the Master File Table (MFT).
Utilizing $I30 Files as Forensic Resources:
$I30 files provide an additional forensic avenue for accessing MACB timestamp data. Even deleted files, whose remnants linger in unallocated slack space, can often be recovered from these index files.
-------------------------------------------------------------------------------------------------------------
If you're into digital forensics, you've probably come across Joakim Schicht’s tools. They’re free, powerful, and packed with features for analyzing different forensic artifacts. One such tool, Indx2Csv, is a lifesaver when it comes to parsing INDX records like $I30 (directory indexes), $O (object IDs), and $R (reparse points). The cool thing about Indx2Csv is that it doesn’t just look at active records; it also digs up deleted entries that are still hanging around due to file system operations. Plus, it can even scan for partial entries, which means you might be able to recover metadata for deleted files or folders, even if their complete records are gone.
How Does Indx2Csv Work?
Indx2Csv processes INDX records that have been exported from forensic tools like FTK Imager or The Sleuth Kit’s icat. If you've used FTK Imager before, you might have seen files labeled as $I30 in directories. These aren’t actual files but representations of the $INDEX_ALLOCATION attribute for that directory. You can export them and analyze them with Indx2Csv.
Output: (GUI mode of Indx2Csv)
If you're using The Sleuth Kit, you can extract the $INDEX_ALLOCATION attribute with this command:
icat DiskImage MFT#-160-AttributeID > $I30
(Just remember, the attribute type for $INDEX_ALLOCATION is always 160 in decimal.)
Once you’ve got the file, running Indx2Csv is straightforward:
Indx2Csv.exe -i exported_I30_file -o output.csv
Indx2Csv has several command-line options for tweaking how it scans and outputs data. You can check out the tool’s GitHub page for a complete list of commands.
-------------------------------------------------------------------------------------------------------------
Alternative Tools: Velociraptor & INDXparse.py
While Indx2Csv is great, it’s not the only tool in the game. Here are two other options worth mentioning:
Velociraptor
Velociraptor is an advanced threat-hunting and incident response tool that can also be used for forensic analysis. Unlike Indx2Csv, which works with exported INDX files, Velociraptor can analyze live file systems and mounted volumes. That means you don’t have to manually locate and export the $I30 file—just point Velociraptor at a directory, and it’ll handle the rest.
For example, if you've mounted a disk image and want to analyze a directory, you can run:
velociraptor.exe artifacts collect Windows.NTFS.I30 --args DirectoryGlobs="<\\Windows\\Dean\\>" --format=csv --nobanner > C:\output\I30-Dean.csv
This will save both active and deleted entries in a CSV file, which you can then analyze with Timeline Explorer or any spreadsheet app.
INDXparse.py
Another great option is INDXparse.py, a Python-based tool created by Willi Ballenthin. Like Indx2Csv, it focuses on $I30 index files, but since it's written in Python, it works on multiple operating systems, not just Windows.
Collection:
You can use FTK Imager to collect artifacts like $I30.
Parsing:
INDXParse can be used for parsing: https://github.com/williballenthin/INDXParse
The screenshot below is an example of INDXParse output. You can use the -c or -d parameter based on your needs.
Note: To use INDXParse you need to have Python installed on Windows, as I do, so it's easy for me.
Wrapping Up
Indx2Csv is a powerful, easy-to-use tool for forensic investigators who need to dig into INDX records. Whether you’re analyzing active files, recovering deleted entries, or scanning for hidden metadata, it gets the job done. And if you need alternatives, Velociraptor and INDXparse.py offer additional flexibility for different situations. So, if you haven’t tried Indx2Csv yet, give it a shot—you might be surprised at what you uncover!
--------------------------------------------Dean--------------------------------------------

  • The Truth About Changing File Timestamps: Legitimate Uses and Anti-Forensics: Timestomping

    Changing a file’s timestamp might sound shady, but there are actually some valid reasons to do it. At the same time, cybercriminals have found ways to manipulate timestamps to cover their tracks. Let’s break it down in a way that makes sense.
When Changing Timestamps is Legitimate
Think about cloud storage services like Dropbox. When you sync your files across multiple devices, you’d want the timestamps to reflect when the file was last modified, not when it was downloaded to a new device. But here’s the problem: when you install Dropbox on a new computer and sync your files, your operating system sees them as “new” files and assigns fresh timestamps. To fix this, cloud storage apps like Dropbox adjust the timestamps to match the original modification date. This ensures your files appear the same across all devices. It’s a perfectly legitimate reason for altering timestamps and helps keep things organized.
---------------------------------------------------------------------------------------------------------
If you want to learn about cloud storage forensics, including Dropbox, Box, and OneDrive, check out the articles I've written at the link below. Happy learning!
https://www.cyberengage.org/courses-1/mastering-cloud-storage-forensics%3A-google-drive%2C-onedrive%2C-dropbox-%26-box-investigation-techniques
--------------------------------------------------------------------------------------------------------
So, where were we? Yeah, let's continue.
When Changing Timestamps is Suspicious
Hackers and cybercriminals love to manipulate timestamps too, but for completely different reasons. A common trick is to disguise malicious files by changing their timestamps to blend in with legitimate system files. For example, if a hacker sneaks malware into the C:\Windows\System32 folder, they can rename it to look like a normal Windows process. But to make it even less suspicious, they’ll modify the timestamps to match those of other system files.
This sneaky technique is called timestomping.
How Analysts Detect Fake Timestamps
Security analysts have developed several methods to spot timestomping. In the past, it was easier to detect because many tools didn’t set timestamps with fractional-second accuracy. If a timestamp had all zeros in its decimal places, that was a red flag.
Example timestamps:
1. Timestomping in $J
2. Timestomping in $MFT (very important)
As the screenshot shows, the attacker timestomped eviloutput.txt: they changed the $STANDARD_INFORMATION (0x10) timestamp to 2005 using an anti-forensic tool. But because such tools do not modify the $FILE_NAME (0x30) timestamp, that attribute still shows the original time the file was created.
3. Another example
But today, newer tools allow hackers to copy timestamps from legitimate files, making detection trickier. Here’s how experts uncover timestamp manipulation:
Compare Different Timestamp Records
In Windows, files have timestamps stored in multiple places, such as the $STANDARD_INFORMATION and $FILE_NAME metadata. If these don’t match up, something suspicious might be going on. Tools like mftecmd, fls, istat, and FTK Imager help with these checks.
Look for Zeroed Fractional Seconds
Many timestomping tools don’t bother with precise sub-second timestamps. If the decimal places in a timestamp are all zeros, it could indicate foul play. Tools: mftecmd, istat.
Compare ShimCache Timestamps
Windows tracks when executables were first run using a system feature called ShimCache (AppCompatCache). If a file’s recorded modification time is earlier than when it was first seen by Windows, that’s a big red flag. Tools: AppCompatCacheParser.exe, ShimCacheParser.py.
Check Embedded Compile Times for Executables
Every executable file has a compile time embedded in its metadata. If a file’s timestamp shows it was modified before it was even compiled, something’s off. Tools: Sysinternals’ sigcheck, ExifTool.
Analyze Directory Indexes ($I30 Data)
Sometimes, old timestamps are still stored in the parent directory’s index. If a previous timestamp is more recent than the current one, it’s a clue that someone tampered with it.
Check the USN Journal
Windows keeps a log (the USN Journal) of file creation events. If a file’s creation time doesn’t match the time the USN Journal recorded, that’s a clear sign of timestamp backdating.
Compare MFT Record Numbers
Windows writes new files sequentially in the Master File Table (MFT). If most files in C:\Windows\System32 have close MFT numbers but a backdated file has a much later number, it stands out as suspicious. Tools: mftecmd, fls.
Real-World Example
Security analysts at the Dean service organization investigated a suspicious file (dean.exe) in C:\Windows\System32. Even though its timestamps matched legitimate files, further checks revealed:
The $STANDARD_INFORMATION creation time was earlier than the $FILE_NAME creation time.
The fractional seconds in its timestamp were all zeros.
The executable’s compile time (found via ExifTool) was newer than its modification time.
Windows’ ShimCache recorded a modification time that was later than the file system timestamp.
These findings confirmed the file had been timestomped, helping the team uncover a hidden malware attack.
-------------------------------------------------------------------------------------------------------------
Most anti-forensic tools have one thing in common: they modify the $SI ($STANDARD_INFORMATION) timestamps but not the $FN ($FILE_NAME) timestamps. Comparing these two sets of timestamps in Timeline Explorer can help identify timestomping.
-------------------------------------------------------------------------------------------------------------
Keep in mind that there can be false positives when analyzing the $MFT for timestomping; an analyst must understand this before drawing conclusions.
ScreenConnect example of timestomping:
The Bottom Line
Timestamp manipulation is a double-edged sword.
While cloud storage services use it for legitimate reasons, hackers exploit it to hide malicious files . Security analysts have developed multiple ways to detect timestomping, but modern tools make it harder than ever to spot. So, the next time you see a file with a suspiciously old timestamp, don’t just take it at face value. There might be more going on under the surface! ----------------------------------------------Dean----------------------------------------------

  • Understanding NTFS Metadata (Entries) and How It Can Help in Investigations

    When dealing with NTFS (New Technology File System), one of the most crucial components to understand is the Master File Table (MFT). Think of it as the backbone of the file system—it stores metadata for every file and folder, keeping track of things like timestamps, ownership, and even deleted files.
Allocated vs. Unallocated Metadata Entries
Just like storage clusters, metadata entries in the MFT can either be allocated (actively in use) or unallocated (no longer assigned to a file). If a metadata entry is unallocated, it falls into one of two categories:
It has never been used before (essentially empty).
It was used in the past, meaning it still contains traces of a deleted file or directory.
This is where forensic investigations get interesting. If an unallocated metadata entry still holds data about a deleted file, we can recover information like filenames, timestamps, and ownership details. In some cases, we may even be able to fully recover the deleted file—provided its storage clusters haven't been overwritten yet.
How Metadata Entries Are Assigned
MFT entries are typically assigned sequentially. This means that when new files are created rapidly, their metadata records tend to be grouped together in numerical order.
Let’s say a malicious program named "mimikatz.exe" runs and extracts several resource files into the System32 directory. Because all these files are created in quick succession, their metadata entries will be next to each other in the MFT. A similar thing happens when another malicious executable, "svchost.exe", runs and drops a secondary payload ("a.exe"). This action triggers the creation of prefetch files, and since they’re created almost instantly, their MFT entries are also close together. This pattern helps forensic analysts track down related files during an investigation.
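The clustering idea can be sketched in a few lines: given the MFT entry numbers of files in a directory, anything sitting far from the pack deserves a second look. The threshold below is arbitrary and would need tuning per volume; the input pairs are whatever your $MFT parser gives you:

```python
# Sketch: flag files whose MFT entry number sits far from their directory
# neighbours. A genuinely old system file should be near the cluster of entry
# numbers assigned when the directory was originally populated; a recently
# dropped (but backdated) file usually has a much higher entry number.

def entry_number_outliers(entries, max_gap=50000):
    """entries: list of (mft_entry_number, filename) pairs for one directory."""
    numbers = sorted(n for n, _ in entries)
    median = numbers[len(numbers) // 2]          # crude robust centre
    return [(n, name) for n, name in entries if abs(n - median) > max_gap]
```

Using the median rather than the mean keeps one outlier from dragging the baseline toward itself.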
The Hidden Clues in MFT Clustering
While this clustering pattern isn’t guaranteed in every case, it’s common enough that it can serve as a backup timestamp system. Even if a hacker tries to manipulate file timestamps (a technique called timestomping), looking at the MFT sequence can reveal when files were actually created. This makes it a valuable tool for forensic analysts.
Common MFT attribute types:
0x10 $STANDARD_INFORMATION
0x20 $ATTRIBUTE_LIST
0x30 $FILE_NAME
0x40 $OBJECT_ID
0x50 $SECURITY_DESCRIPTOR
0x60 $VOLUME_NAME
0x70 $VOLUME_INFORMATION
0x80 $DATA
0x90 $INDEX_ROOT
0xA0 $INDEX_ALLOCATION
0xB0 $BITMAP
0xC0 $REPARSE_POINT
0xD0 $EA_INFORMATION
0xE0 $EA
0xF0
0x100 $LOGGED_UTILITY_STREAM
Breaking Down the MFT Structure
Every file, folder, and even the volume itself has an entry in the MFT. Typically, each entry is 1024 bytes in size and contains various attributes that describe the file. Here are some of the most commonly used attributes:
$STANDARD_INFORMATION (0x10) – Stores general details like file creation, modification, and access timestamps.
$FILE_NAME (0x30) – Contains the filename and another set of timestamps.
$DATA (0x80) – Holds the actual file content (for small files) or a pointer to where the data is stored.
$INDEX_ROOT (0x90) & $INDEX_ALLOCATION (0xA0) – Used for directories to manage file listings.
$BITMAP (0xB0) – Keeps track of allocated and unallocated clusters.
Timestamps and Their Forensic Importance
NTFS records multiple sets of timestamps, and they don’t always update the same way. Two of the most important timestamp attributes are:
$STANDARD_INFORMATION timestamps – These are affected by actions like copying, modifying, or moving a file.
$FILE_NAME timestamps – These remain more stable and can serve as a secondary reference.
Because these two timestamp sets don’t always update together, analysts can spot inconsistencies that reveal timestomping attempts.
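These two comparisons (mismatched $SI/$FN creation times and zeroed sub-second values) are easy to automate once the $MFT has been parsed to CSV. A sketch, assuming each row has already been converted to datetime objects; the key names below are placeholders of my choosing, not actual parser column names:

```python
from datetime import datetime

# Sketch: two quick timestomp checks over parsed $MFT rows. Each row is
# assumed to be a dict with the $STANDARD_INFORMATION and $FILE_NAME creation
# times as datetime objects ("si_created" / "fn_created" are placeholder keys).

def timestomp_indicators(row):
    flags = []
    si, fn = row["si_created"], row["fn_created"]
    if si < fn:
        # $SI creation earlier than $FN creation: classic backdating artifact
        flags.append("si_before_fn")
    if si.microsecond == 0:
        # Many stomping tools write whole-second values with no sub-second part
        flags.append("zeroed_subseconds")
    return flags
```

Remember the false-positive caveat from the previous article: a legitimately copied or synced file can trip these heuristics too, so treat the flags as leads rather than verdicts.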
For instance, if a file’s $STANDARD_INFORMATION creation time differs from its $FILE_NAME creation time, it could mean that someone tampered with the timestamps.
Real-World Challenges in Analyzing NTFS Metadata
While these timestamp rules are generally reliable, they aren’t foolproof. Changes in Windows versions, different file operations, and even tools like the Windows Subsystem for Linux (WSL) can alter how timestamps behave. For example:
In Windows 10 v1803 and later, the "last access" timestamp may be re-enabled under certain conditions.
The Windows Subsystem for Linux (WSL) updates timestamps differently than the standard Windows shell.
Final Thoughts
Analyzing NTFS metadata can unlock a wealth of information, helping forensic investigators reconstruct file activity even after deletion or manipulation. Understanding sequential MFT allocations, timestomping detection, and the role of multiple timestamps is essential for building a strong case in digital forensics. By looking beyond standard timestamps and diving into the metadata, analysts can uncover hidden traces of activity—providing crucial evidence in cybersecurity investigations.
----------------------------------------Dean---------------------------------------------
