
Making Sense of $UsnJrnl and $LogFile: Why Journal Analysis is a Game Changer

Updated: 18 Feb 2025

Now that we’ve got a solid grasp on how $UsnJrnl and $LogFile work, let’s dive into how we can actually use them for analysis.


Which One is Easier to Read?

If you quickly scan through both, you’ll notice that $UsnJrnl is much easier to understand. It’s well-documented, and Microsoft provides clear explanations for its codes. On the other hand, $LogFile events are messier and less documented, making their analysis trickier. Plus, a single file action can generate a flood of $LogFile events, adding to the complexity.


However, $LogFile holds way more details about a file than $UsnJrnl does, including MFT attributes and $I30 index records. If you find something suspicious in $LogFile, it’s definitely worth the extra effort to analyze it.


For more in-depth details, check out the “NTFS Log Tracker” presentation from Forensic Insight; it’s a great breakdown of $LogFile analysis.

Key Markers in $UsnJrnl and $LogFile

| Action                        | $LogFile Codes                                                             | $UsnJrnl Codes                            |
|-------------------------------|----------------------------------------------------------------------------|-------------------------------------------|
| File/Directory Creation       | AddIndexEntryAllocation, InitializeFileRecordSegment                       | FileCreate                                 |
| File/Directory Deletion       | DeleteIndexEntryAllocation, DeallocateFileRecordSegment                    | FileDelete                                 |
| File/Directory Rename or Move | DeleteIndexEntryAllocation, AddIndexEntryAllocation                        | RenameOldName, RenameNewName               |
| ADS Creation                  | CreateAttribute (stream name ending in ":ADS")                             | StreamChange, NamedDataExtend              |
| File Data Modification        | $LogFile op codes alone are often not sufficient to determine modification | DataOverwrite, DataExtend, DataTruncation  |



1. Detecting File or Directory Creation

  • $LogFile: Look for the combination of InitializeFileRecordSegment and AddIndexEntryAllocation. This means a new “FILE” record was allocated in the MFT, and an entry was added to the parent directory.

  • $UsnJrnl: The FileCreate event is a clear indicator of a new file or directory. Simple and straightforward!
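
If you’ve already exported the $UsnJrnl to CSV (for example with a parser such as MFTECmd), a few lines of Python are enough to pull the creation events out. This is only a minimal sketch: the column names used here (Name, ParentPath, UpdateTimestamp, UpdateReasons) are assumptions based on typical parser output, so adjust them to whatever your tool actually produces.

```python
# Minimal sketch: filter a parsed $UsnJrnl CSV export for file-creation events.
# Column names below are assumptions -- rename them to match your parser's output.
import csv

def find_created_files(csv_path):
    hits = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # A single record can carry several reason flags, so use a substring test.
            if "FileCreate" in row.get("UpdateReasons", ""):
                hits.append((row.get("UpdateTimestamp"),
                             row.get("ParentPath"),
                             row.get("Name")))
    return hits

# Hypothetical usage:
# for ts, parent, name in find_created_files("usnjrnl.csv"):
#     print(ts, parent, name)
```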


2. Detecting File or Directory Deletion

  • $LogFile: Look for DeleteIndexEntryAllocation and DeallocateFileRecordSegment together. This means the file’s index entry was removed from the parent directory and its MFT record was deallocated.

  • $UsnJrnl: Just look for the FileDelete event—it directly indicates a file or directory was deleted.
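
Deletion timestamps are exactly the detail the $MFT alone cannot give you, so they are worth pulling out and grouping. The sketch below groups FileDelete records by parent directory; as before, the column names are assumed and should be adapted to your parser’s CSV layout.

```python
# Sketch: collect deletion times from a parsed $UsnJrnl CSV, grouped by directory.
# Column names ("Name", "ParentPath", "UpdateTimestamp", "UpdateReasons") are assumptions.
import csv
from collections import defaultdict

def deletions_by_directory(csv_path):
    deleted = defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if "FileDelete" in row.get("UpdateReasons", ""):
                deleted[row.get("ParentPath", "")].append(
                    (row.get("UpdateTimestamp"), row.get("Name")))
    return deleted
```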


3. Detecting File or Directory Renaming

  • $LogFile: A rename action shows up as DeleteIndexEntryAllocation (removing the old name) and AddIndexEntryAllocation (adding the new name).

  • $UsnJrnl: You’ll see RenameOldName followed by RenameNewName, showing both the old and new names in a clear sequence.
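
Because the RenameOldName and RenameNewName records refer to the same file, you can pair them up to recover old-name → new-name mappings. This sketch matches on an entry-number column and assumes the CSV is in journal (USN) order; the column names are again assumptions.

```python
# Sketch: pair RenameOldName / RenameNewName records to recover old -> new names.
# Assumes the CSV is in journal order and has "EntryNumber", "Name", "UpdateReasons" columns.
import csv

def rename_pairs(csv_path):
    old_names, pairs = {}, []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            reasons = row.get("UpdateReasons", "")
            entry = row.get("EntryNumber")
            if "RenameOldName" in reasons:
                old_names[entry] = row.get("Name")
            elif "RenameNewName" in reasons and entry in old_names:
                pairs.append((old_names.pop(entry), row.get("Name")))
    return pairs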


4. Detecting File or Directory Movement

  • $LogFile: Just like renaming, a move generates DeleteIndexEntryAllocation and AddIndexEntryAllocation events. The difference? The file name stays the same, but the parent directory changes.

  • $UsnJrnl: Similar to renaming, you’ll see RenameOldName followed by RenameNewName. The key difference here is that the file’s location changes, but the name remains the same.
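
Once you have an old/new record pair, telling a rename from a move is just a comparison of the name and the parent path. A small sketch, using record dicts shaped like the assumed CSV columns above:

```python
# Sketch: classify a RenameOldName/RenameNewName pair as a rename, a move, or both,
# by comparing the name and parent path of the two records ("Name", "ParentPath" assumed).
def classify_change(old_row, new_row):
    same_name = old_row["Name"] == new_row["Name"]
    same_dir = old_row["ParentPath"] == new_row["ParentPath"]
    if same_dir and not same_name:
        return "rename"
    if same_name and not same_dir:
        return "move"
    if not same_name and not same_dir:
        return "rename + move"
    return "no visible change"

# classify_change({"Name": "a.exe", "ParentPath": r"C:\Temp"},
#                 {"Name": "a.exe", "ParentPath": r"C:\Windows"})  # -> "move"
```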

5. Detecting Alternate Data Stream (ADS) Creation

  • $LogFile: If an ADS is created, a CreateAttribute event will show up, referencing a stream name ending with :ADS. Since ADS creation isn’t super common, looking at these events can help you spot hidden or suspicious files.

  • $UsnJrnl: The StreamChange event logs any ADS activity. If followed by NamedDataExtend, it confirms that data was added to a newly created ADS. Attackers sometimes delete ADS to evade detection, but spotting their creation is already a win.
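
Since ADS activity is relatively rare, simply surfacing every record that carries StreamChange or NamedDataExtend is usually a manageable list to review by hand. A rough sketch, with the same assumed column names as above:

```python
# Sketch: surface likely ADS activity from a parsed $UsnJrnl CSV by flagging
# records carrying StreamChange and/or NamedDataExtend reason codes.
import csv

def possible_ads_activity(csv_path):
    hits = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            reasons = row.get("UpdateReasons", "")
            if "StreamChange" in reasons or "NamedDataExtend" in reasons:
                hits.append((row.get("UpdateTimestamp"),
                             row.get("ParentPath"),
                             row.get("Name"),
                             reasons))
    return hits
```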


6. Detecting File Data Modification

  • $LogFile: This journal doesn’t directly track file data changes, but it does record metadata updates like timestamp changes, allocation status updates, and modifications to the $DATA attribute.

  • $UsnJrnl: When a file’s content is modified, you’ll see a DataOverwrite or DataExtend event, making it easier to track changes.
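
Counting data-change events per file is a quick way to make heavily modified files stand out (think staged archives or log tampering). A sketch using the reason codes from the table above and the same assumed column names:

```python
# Sketch: count DataOverwrite / DataExtend / DataTruncation events per file
# so heavily modified files stand out. Column names are assumptions.
import csv
from collections import Counter

def data_change_counts(csv_path):
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            reasons = row.get("UpdateReasons", "")
            if any(r in reasons for r in ("DataOverwrite", "DataExtend", "DataTruncation")):
                counts[(row.get("ParentPath"), row.get("Name"))] += 1
    return counts
```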


-------------------------------------------------------------------------------------------------------------


Wrapping Up

By combining insights from both $UsnJrnl and $LogFile, forensic analysts can uncover valuable details about file system activities. While $UsnJrnl offers a cleaner, high-level view, $LogFile provides deep, granular insights that can be critical in investigations.

If you're looking to dive deeper into NTFS forensic analysis, checking out tools like istat for parsing MFT records and referencing the NTFS Log Tracker presentation will help sharpen your skills.

Happy hunting!


-----------------------------------------------------------------------------------------------------------


Why Journal Analysis Matters

Let’s say we’re investigating a file that was renamed, moved, and later deleted. With just the $MFT, if we're lucky and the file’s record hasn’t been overwritten, we can find out its last recorded name and location before deletion.


But we won’t know when it was deleted—because $MFT timestamps don’t capture that event.

Now, enter journal analysis. With it, we can see the entire history of the file:

  • When and what it was renamed to

  • When and where it was moved

  • Exactly when it was deleted


That’s a huge advantage! Having this level of visibility helps us reconstruct an attacker's actions, track malware movements, and understand what happened on a system.
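
One practical way to get that full history is to pull every journal record that shares the file’s MFT entry number and sort it by time. A minimal sketch, again assuming an exported CSV with columns like EntryNumber, UpdateTimestamp, UpdateReasons, ParentPath, and Name (and keeping in mind that entry numbers are reused, so check the sequence number when your parser provides one):

```python
# Sketch: rebuild one file's history from a parsed $UsnJrnl CSV by collecting
# every record with its MFT entry number, then sorting by timestamp.
# Column names are assumptions; timestamps are assumed to sort lexically (ISO format).
import csv

def file_timeline(csv_path, entry_number):
    events = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("EntryNumber") == str(entry_number):
                events.append((row.get("UpdateTimestamp"),
                               row.get("UpdateReasons"),
                               row.get("ParentPath"),
                               row.get("Name")))
    return sorted(events)
```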

-----------------------------------------------------------------------------------------------------------


Smart Filtering for Better Insights

Since both the $UsnJrnl and $LogFile track file system changes, we can use creative searches and filters to uncover critical details.


A good starting point is analyzing the $UsnJrnl first—it has a longer history and is easier to read. Then, we can refine our investigation with the $LogFile for more granular details.


Here are some high-value filters to consider:

Key System Directories to Watch

  • C:\Windows & C:\Windows\System32 

  • C:\Windows\Prefetch 

  • Temp Directories 

  • C:\Users\*\Downloads

  • C:\Users\*\AppData\Roaming\Microsoft\Windows\Recent

  • C:\$Recycle.Bin\<SID> 
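
A watch list like this translates naturally into a path filter over the parsed journal. The sketch below is one possible version: the regular expression mirrors the directories above, the "ParentPath" column name is an assumption, and you should tune the patterns to your environment.

```python
# Sketch: limit parsed $UsnJrnl records to high-value directories.
# The patterns mirror the watch list above; "ParentPath" is an assumed column name.
import csv
import re

WATCHLIST = re.compile(
    r"\\Windows(\\System32)?$|\\Windows\\Prefetch|\\Temp\b|"
    r"\\Users\\[^\\]+\\Downloads|"
    r"\\Users\\[^\\]+\\AppData\\Roaming\\Microsoft\\Windows\\Recent|"
    r"\\\$Recycle\.Bin",
    re.IGNORECASE)

def records_in_watched_dirs(csv_path):
    with open(csv_path, newline="", encoding="utf-8") as f:
        return [row for row in csv.DictReader(f)
                if WATCHLIST.search(row.get("ParentPath", ""))]
```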


A major win in an investigation is identifying where attackers store their tools and stolen data.

Once we find their working directory, we can:

  • Look for similar directories on other machines

  • Identify additional Indicators of Compromise (IOCs)

  • Recover deleted or moved files


For example, if we discover the attacker’s directory, filtering the journals for activity in that location might show us files that were once there but are now missing.

File Types Worth Searching

Regardless of location, some file types are always worth investigating:


  • Executables (.exe, .dll, .sys, .pyd)

  • Scripts (.ps1, .vbs, .bat) 

  • Archives (.rar, .7z, .zip, .cab)

  • Known IOCs (file or folder names linked to the attack)


Since searching for executables like .exe and .dll can produce a ton of results, it’s best to filter by directories of interest, such as System32 or Temp folders.
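
Combining the two filters keeps the noisy extensions manageable: an executable only surfaces when it appears somewhere interesting. A small sketch along those lines, with the extension list from above, a couple of example directories, and the usual assumed column names:

```python
# Sketch: combine an extension filter with a directory filter so common types
# like .exe and .dll only surface in directories of interest.
# "Name" and "ParentPath" are assumed column names; tune both lists to your case.
import csv

SUSPECT_EXTENSIONS = (".exe", ".dll", ".sys", ".pyd",
                      ".ps1", ".vbs", ".bat",
                      ".rar", ".7z", ".zip", ".cab")
INTERESTING_DIRS = ("\\windows\\system32", "\\windows\\temp", "\\appdata\\local\\temp")

def suspicious_records(csv_path):
    hits = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            name = row.get("Name", "").lower()
            parent = row.get("ParentPath", "").lower()
            if name.endswith(SUSPECT_EXTENSIONS) and any(d in parent for d in INTERESTING_DIRS):
                hits.append(row)
    return hits
```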

-----------------------------------------------------------------------------------------------------------


Conclusion

Using just the $MFT gives us a snapshot in time, but combining it with journal analysis gives us a dynamic view of file system activity. By filtering for key directories, attacker working directories, and high-risk file types, we can uncover hidden traces of an attack, track an attacker’s movements, and build a stronger case in our investigations.


So, the next time you're diving into forensic analysis, don’t just stop at the $MFT—dig into the journals and see the full picture!

------------------------------------------------Dean------------------------------------------------


 
 
 
