
A Deep Dive into Plaso/Log2Timeline Forensic Tools

Plaso is the Python-based backend engine powering log2timeline, while log2timeline is the tool we use to extract timestamps and forensic artifacts. Together, they create what we call a super timeline—a comprehensive chronological record of system activity.


Super timelines, unlike file system timelines, include a broad range of data beyond just file metadata. They can incorporate Windows event logs, prefetch data, shellbags, LNK files, and numerous other forensic artifacts. This comprehensive approach provides a more holistic view of system activity, making it invaluable for forensic investigations.


Example:

Imagine you've been given a disk image, perhaps a full disk image or an image created with KAPE. Your task: find evil, armed with little more than the date and time when the suspected activity occurred.

So, you begin the investigation with the usual suspects: examining Windows event logs, prefetch data, various registry-based artifacts, and more. But after a while, you realize that combing through all these artifacts manually will take forever. Wouldn't it be great if there was a tool that could parse all these artifacts, consolidate them into a single data source, and arrange them in chronological order? Well, that's precisely what we can achieve with Plaso and log2timeline.


I am going to use Ubuntu 22.04 LTS (in VirtualBox) and Plaso version 20220724.

Installation:
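One common way to install Plaso on Ubuntu is from the GIFT PPA. A minimal sketch (check the Plaso documentation in case the packaging has changed since this version):

sudo add-apt-repository ppa:gift/stable
sudo apt-get update
sudo apt-get install plaso-tools

This installs log2timeline.py, psort.py, and pinfo.py in one go. (Plaso is also available on PyPI if you prefer pip inside a virtual environment.)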


Let's start:


1. We need an image or collected artifacts:

The data we're dealing with could take various forms—it might be a raw disk image, an E01 image, a specific partition or offset within an image, or even a physical device like /dev/sdd. Moreover, it could manifest as a live mount point; for instance, we could mount a VHDX image created with KAPE and direct the tool to that mount point. With such versatility, we're equipped with a plethora of choices, each tailored to the specific nature of the data at hand.


In the current case, I captured the image using KAPE, mounted it as a drive on my Windows host, and then shared the mounted drive with the Ubuntu VM through VirtualBox.

If you are not able to access the shared drive in Ubuntu, add your user to the vboxsf group by entering the command below in a terminal:


Command :- sudo adduser $USER vboxsf

then restart the VM.
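If the share still does not show up under /media after the restart, you can mount it manually. A sketch, assuming the VirtualBox share is named E_DRIVE (substitute your own share name and mount point):

sudo mkdir -p /mnt/e_drive
sudo mount -t vboxsf E_DRIVE /mnt/e_drive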


2. Command and output (Syntax)

Syntax

log2timeline.py --storage-file OUTPUT INPUT


In our case, the command looks like this:

log2timeline.py --storage-file akash.dump /media/sf_E_DRIVE


akash.dump -- the output file that will be created (a Plaso storage file, which is SQLite-based)

You can also give a full path, such as /path-to/akash.dump

/media/sf_E_DRIVE -- the mounted drive path


(1) Raw Image

log2timeline.py --storage-file /path-to/plaso.dump /path-to/image.dd

(2) EWF Image

log2timeline.py --storage-file /path-to/plaso.dump /path-to/image.E01

(3) Physical Device

log2timeline.py --storage-file /path-to/plaso.dump /dev/sdd

(4) Volume via Sector Offset

log2timeline.py -o 63 --storage-file /path-to/plaso.dump /path-to/image.dd


3. If you have an image of an entire drive, log2timeline will ask which partition you want to parse.

If log2timeline finds volume shadow copies (VSS), it will ask which of those you want as well.

You can answer with a single VSS identifier, a range, or all of them.

Example :- 1 or 1..4 or all


Or, in a single command:

log2timeline.py --partitions 2 --vss-stores all --storage-file /path-to/plaso.dump /path-to/image.dd


In my current case there are no partitions or VSS to choose from, because I collected only the needed artifacts rather than the entire drive, so I did not get the options above. The screenshot below shows what it looks like once you hit Enter.



You can also run parsers and filters against the image with Plaso/log2timeline and store the results in akash.dump or any other output file.


  1. Parsers:- these tell log2timeline to concentrate only on certain specific forensic artifacts.

To check all available parsers:

log2timeline.py --parsers list | more
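The list is long, so it can help to narrow it down; for example, this illustrative one-liner shows only the parsers with "windows" in the name:

log2timeline.py --parsers list | grep -i windows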


If you want to use a particular parser, in our case windows_services:


log2timeline.py --parsers windows_services --storage-file akash2.dump /media/sf_E_DRIVE
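You are not limited to one parser; --parsers also accepts a comma-separated list. A sketch (the parser names here are examples, so confirm them against the output of --parsers list first):

log2timeline.py --parsers "winevtx,prefetch" --storage-file akash2.dump /media/sf_E_DRIVE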


You can also write your own parsers.


2. Filters: -

Filters tell log2timeline to go after specific files and paths that would contain forensically valuable data, like /Users or /Windows/System32.


There is a txt file containing all the important filter paths you can parse from an image.

Link below.

Open the link, click on Raw, and copy the URL. Then, in Ubuntu, run:

wget <copied raw-file URL>

This will save the txt file. After saving the text file you can run the command below.


Command

log2timeline.py -f filter_windows.txt --storage-file akash2.dump /media/sf_E_DRIVE


What this command does: from the image, it goes to the specific files/paths mentioned in the txt file and captures those artifacts into the akash2.dump file.
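For a sense of what such a filter file contains, here is a hypothetical excerpt in Plaso's legacy line-based filter format (one path per line, with regex allowed in path segments; the actual file you downloaded may differ, and newer Plaso versions also accept YAML filter files):

/Windows/System32/config/SAM
/Windows/System32/winevt/Logs/.+[.]evtx
/Users/.+/NTUSER[.]DAT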


You can combine a parser and a filter in the same command as well:

log2timeline.py --parsers webhist -f filter_windows.txt --storage-file akash2.dump /media/sf_E_DRIVE


What I am telling log2timeline to do is target the paths and locations within the filter file, and then run the webhist parser against those particular locations, which parses our browser forensic artifacts.


After all of these commands, you will have your output in output.dump, or in my case the akash.dump file.


The output is in Plaso's SQLite-based storage format and is very difficult to read directly, so now you have to convert this dump file into CSV or any other format you prefer (I prefer CSV because I will use Timeline Explorer to analyze it further).


1. Using pinfo.py

As the name suggests, it furnishes details about a specific Plaso storage file (the output file), in our case akash.dump:

Command

pinfo.py akash.dump


2. Using psort.py

This command determines which format the output is created in. To list the available output formats:

Command :- psort.py --output-time-zone utc -o list

To analyze the output with Timeline Explorer from Eric Zimmerman, we will use the l2tcsv format.


Complete command :- psort.py --output-time-zone utc -o l2tcsv -w timeline.csv akash.dump

-w -- the file to write the formatted output to
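If you would rather pick your own columns instead of the fixed l2tcsv layout, psort also has a dynamic output module driven by --fields. A sketch, assuming these field names are still current (check psort.py --help for the exact list):

psort.py --output-time-zone utc -o dynamic --fields "datetime,timestamp_desc,source,message" -w timeline_custom.csv akash.dump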


"Within an investigation, it's common to have a sense of the time range in which the suspected incident occurred. For instance, let's say we want to focus on a specific day and even a particular time within that day—let's choose February 29th at 15:00. We can achieve this using a technique called slicing. By default, it offers a five-minute window before and after the given time, although this window size can be adjusted."


Command :

psort.py --output-time-zone utc -o l2tcsv -w timeline.csv akash.dump --slice '2024-02-29 15:00'
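To widen the window, psort exposes a slice-size option; for example, a 30-minute window on each side of the pivot time (assuming the option is still spelled --slice_size, as in recent Plaso releases):

psort.py --output-time-zone utc -o l2tcsv -w timeline.csv akash.dump --slice '2024-02-29 15:00' --slice_size 30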


"However, use a start and end date to delineate the investigation timeframe. This is achieved by specifying a range bounded by two dates. For example, "date > '2024-12-31 23:59:59' and date < '2020-04-01 00:00:00'."


Command :

psort.py --output-time-zone utc -o l2tcsv -w timeline.csv akash.dump "date > '2023-12-31 23:59:59' AND date < '2024-04-01 00:00:00'"



Once the super timeline is created in CSV format, we can use Timeline Explorer to analyze it. The best part of Timeline Explorer is that data loaded into it is automatically color-coded based on the type of artifact. For example, USB device utilization is highlighted in blue, file openings in green, and program executions in red. This color-coding helps users quickly identify and interpret different types of activities within the timeline.


Recommended columns to look at while analyzing:

Date, Time, MACB, Source type, desc, filename, inode, notes, extra
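Before loading the CSV into Timeline Explorer, a quick sanity check from the shell confirms the export worked and shows the column layout (plain coreutils, nothing Plaso-specific):

head -1 timeline.csv
wc -l timeline.csv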


Conclusion:

In conclusion, Plaso/Log2Timeline stands as a cornerstone in the field of digital forensics, offering investigators a powerful tool for extracting, organizing, and analyzing digital evidence. Its origins rooted in the need for efficiency and accuracy, coupled with its continuous evolution and updates, make it an essential asset for forensic practitioners worldwide. As digital investigations continue to evolve, Plaso/Log2Timeline remains at the forefront, empowering investigators to unravel complex digital mysteries with ease and precision.


