Every analyst, through day-by-day experience, refines their own workflow for timeline creation.

Today I propose mine.

Required tools

The Sleuth Kit: a collection of command line tools that allows you to analyze disk images.

Volatility: the well-known open source memory forensics framework for incident response and malware analysis.

Plaso (log2timeline): a tool designed to extract timestamps from various files found on a typical computer system and aggregate them.


Timeline creation

The traditional timeline is generated from data extracted from the filesystem, enriched with information gathered through volatile memory analysis.
The data are parsed and sorted for analysis: the end goal is a snapshot of the activity on the system, including the date, the artifact involved, the action and the source.
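The intermediate format shared by these tools is the Sleuth Kit "body" format: one pipe-delimited line per artifact, with the fields MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime, where timestamps are Unix epoch values. A minimal sketch of reading it, using a fabricated example line:

```shell
# A sample bodyfile line (fabricated values, for illustration only):
line='0|/Windows/System32/cmd.exe|12345|r/rrwxrwxrwx|0|0|302592|1333368000|1333368000|1333368000|1333368000'

# Extract the file name (field 2) and the modification time (field 9):
name=$(echo "$line" | awk -F'|' '{print $2}')
mtime=$(echo "$line" | awk -F'|' '{print $9}')

echo "$name"   # /Windows/System32/cmd.exe
echo "$mtime"  # 1333368000
```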

Here are the steps, starting from an E01 dump and a volatile memory dump:

  1. Extract the filesystem bodyfile from the .E01 file (physical disk dump):

    fls -r -m / Evidence1.E01 > Evidence1-bodyfile
  2. Run the timeliner plugin against the volatile memory dump using Volatility, after identifying the image profile:
    vol.py -f Evidence1-memoryraw.001 --profile=Win7SP1x86 timeliner --output=body > Evidence1-timeliner.body
  3. Run the mftparser Volatility plugin, in order to spot suspicious MFT activity.
    This step can generate duplicate entries against the fls output, but I think this data can contain precious artifacts.
    vol.py -f Evidence1-memoryraw.001 --profile=Win7SP1x86 mftparser --output=body > Evidence1-mftparser.body
  4. Combine the timeliner and mftparser output files with the filesystem bodyfile:
     cat Evidence1-timeliner.body >> Evidence1-bodyfile
     cat Evidence1-mftparser.body >> Evidence1-bodyfile
  5. Extract the combined filesystem and memory timeline:
    mactime -d -b Evidence1-bodyfile 2012-04-02..2012-04-07 > Evidence1-mactime-timeline.csv
  6. Optionally, filter the data using grep with a whitelist:
    grep -v -i -f whitelist.txt Evidence1-mactime-timeline.csv > Evidence1-mactime-timeline-final.csv

If you need to automate the whole process, you may use my tool AutoTimeliner.
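To make the whitelist filtering in step 6 concrete, here is a self-contained sketch with fabricated timeline entries and a hypothetical whitelist.txt: grep -v -i -f drops every line that matches any pattern listed in the whitelist, case-insensitively.

```shell
# Fabricated timeline entries (not real evidence data):
cat > timeline.csv <<'EOF'
2012-04-05,12:00:00,mtime,/Windows/System32/evil.exe
2012-04-05,12:01:00,mtime,/Windows/Prefetch/NOTEPAD.EXE.pf
2012-04-05,12:02:00,mtime,/Users/john/malware.dll
EOF

# Hypothetical whitelist of known-good patterns:
cat > whitelist.txt <<'EOF'
prefetch
EOF

# Keep only the lines NOT matching any whitelist pattern (case-insensitive):
grep -v -i -f whitelist.txt timeline.csv > timeline-final.csv
cat timeline-final.csv
```

The prefetch entry is removed, leaving the other two lines in timeline-final.csv.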

Supertimeline creation

The super timeline goes beyond the traditional filesystem timeline, which is based on metadata extracted from acquired images, by extending it with more sources: additional artifacts that provide valuable information to the investigation.

The technique was published in June 2010 in the SANS Reading Room, in a paper by Kristinn Gudjonsson written as part of his GCFA Gold certification.

Three simple steps, starting from an E01 dump:

  1. Gather timeline data

    log2timeline.py plaso.dump Evidence1.E01
  2. Filter the timeline using psort.py:
    psort.py -z "UTC" -o l2tcsv plaso.dump "date > '2012-04-03 00:00:00' AND date < '2012-04-07 00:00:00'" -w plaso.csv
  3. Optionally, filter the data using grep with the whitelist:
    grep -v -i -f whitelist.txt plaso.csv > supertimeline.csv
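The l2tcsv output written by psort.py is a comma-separated format whose leading columns are, in the classic log2timeline layout, date, time, timezone and MACB flags (an assumption worth verifying against your plaso version). A quick way to skim a supertimeline, using fabricated rows:

```shell
# Fabricated l2tcsv-style rows; the first four columns are assumed to be
# date, time, timezone and MACB flags (classic log2timeline CSV layout):
cat > supertimeline.csv <<'EOF'
04/05/2012,12:00:00,UTC,MACB,FILE,NTFS file stat,...
04/05/2012,12:30:00,UTC,.A..,WEBHIST,Chrome History,...
EOF

# Print only date, time and MACB flags for a quick overview:
awk -F',' '{print $1, $2, $4}' supertimeline.csv
```

This kind of cut is handy for spotting clusters of activity before opening the full CSV in a spreadsheet or timeline viewer.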

In the next article I will propose my method for timeline analysis.