No one likes to go fishing for data, so this is the basic list of information I request from IT administrators before I start cutting data!! If you don't get answers, check out this secret military instructional interrogation video on water boarding. I'm just saying!
1.) Listing of Active Directory and/or User Names for all Custodians.
2.) For all Custodians, a Permission listing of all Personal and Group Shares they have write-access to (i.e.: Custodian Dav Nads has write access to directory ABC on the File Server XXX).
3.) Lotus/Exchange Mailbox name(s) for all custodians (i.e.: DavNadz.nsf) and servers they reside on.
4.) If possible, Local and Admin Security ID files and passwords to access the respective custodians' local and server mailboxes.
Once upon a time, DAV NADS was collecting mailboxes from a 64-bit Exchange 2007 server environment (LOL!). I wanted to take a moment to highlight a few things I learned that I hope you may find helpful.
“ExMerge” no longer exists. As of Exchange 2007, this functionality has been integrated into Exchange Management Shell cmdlets (available in SP1 and SP2).
These cmdlets are NOT compatible with 64-bit servers, ONLY 32-bit. I will describe a “work-around” I used in detail below.
The cmdlet you will need to know and use is called: Export-Mailbox.
One notable advantage of “Export-Mailbox” over “ExMerge” is it does NOT have issues exporting mailboxes over the 2GB PST limit.
Export-Mailbox will include “Dumpster” data on Exchange 2007. On Exchange 2010 it does NOT…!
Just like ExMerge, before you can use Export-Mailbox, you need the proper account rights.
Local Administrator rights.
Exchange Server Administrator Role on the target Exchange 2007 mailbox server.
Full access to the mailboxes against which the import/export operation is run.
It is quite cumbersome, but STILL possible, to install ExMerge on a client machine and connect to Exchange 2007 remotely. A tutorial on this procedure is here: www.exchangeinbox.com/article.aspx?i=88
The work-around I used for the 64-bit limitation was quite simple:
1. Because I could not export to PST but still had the ability to export to a mailbox, we created a “dummy” mailbox and exported to this mailbox. For example, the command shown after this list will export ALL e-mail from the “davnads@blogspot.com” identity to the “MyData” folder in the “DummyMailbox”.
2. After we exported the data to a “DummyMailbox” we authenticated to the mailbox with Outlook.
3. Manually created a new “Local” Outlook Data File (PST file).
4. Manually copied over all e-mail from the Exchange “DummyMailbox” to the “Local” PST file.
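As a sketch of what step 1 looks like with Export-Mailbox (the identity and target names are taken from the example above; verify the syntax against your environment before running):

Export-Mailbox -Identity "davnads@blogspot.com" -TargetMailbox "DummyMailbox" -TargetFolder "MyData"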
Now, if you are working with a 32-bit Exchange server, this is the command you need to use to export the contents of an Exchange mailbox to a local PST file:
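A minimal sketch of the PST export syntax, assuming Exchange 2007 SP1 and a 32-bit machine with Outlook installed (the PST folder path is illustrative):

Export-Mailbox -Identity "davnads@blogspot.com" -PSTFolderPath "C:\PST"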
I was out and about doing some "Live" GroupWise e-mail collections. For the life of me, I could not figure out how the #$% to "log out" of one user's mailbox and log into another from the client application.
It turns out GroupWise has startup switches that you can use when you start the client. Quoting Novell itself,
"Some of them are for your convenience, while others are necessary to run GroupWise on your particular hardware."
Well if it was for my #$%#$% convenience, why the hell wouldn't you just make a button somewhere or put this in the options box??? I didn't know you needed to be a freaking rocket scientist to administer the system.
Well anyways, if you have the pleasure of logging into GroupWise this may come in handy. The switch you will need to use to re-activate the login dialog box is: /@u-?
Right-click the GroupWise icon on the desktop > click Properties.
Click the Shortcut tab.
In the Target text box, after the GroupWise executable, type a space, type the startup switch(es), then click OK.
Separate multiple startup switches with a space, like this:
J:\GRPWISE.EXE /ph-pathname /@u-?
In this example, /ph- is the startup switch to specify the path to the post office. The pathname is the path to the post office. The /@u-? switch is used to display a login dialog box a user can supply with login information whenever he or she opens GroupWise. This switch is useful when two or more users share a workstation but have separate GroupWise Mailboxes.
The Windows operating system has a Registry setting that can add USB write protection to a computer system. It is like a switch that can be enabled to enforce write protection or disabled to allow write processes. Check it out:
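As a sketch, enabling it from an elevated command prompt looks like this (note: the StorageDevicePolicies key may need to be created if it does not already exist, and the setting applies to devices attached after the change; set the value to 0, or delete it, to allow writes again):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\StorageDevicePolicies" /v WriteProtect /t REG_DWORD /d 1 /f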
I have always been eager to learn and challenge myself to further develop intellectually. Over the last 3 months, I challenged myself to obtain 3 professional certifications. Dav Nads is now EnCE, EnCEP, and ACE certified, Bit%hes!!
Sorry it's been a while.. I've been working hard on my smarts. Dav Nads is BACK!
I had a small window of time the other day to image an Apple MacBook Air. It was like “my first time” so I felt it would be appropriate to do a little research about “how to turn it on” and “what buttons to press” to make sure things didn’t get sloppy ;-p
I can’t emphasize enough how important it is to go into situations with more than one option. It’s like the old saying, “Why carry a tool box if you only have one tool in it?” After a little research, I came up with a Plan A and a Plan B. Not talking about the Plan B One-Step here :-)
Before I jump into my procedures, let me note a few things:
I knew ahead of time that this MacBook Air did not have an Apple SuperDrive (external CD/DVD drive). I do not have an external CD/DVD drive or Apple SuperDrive in my forensic kit. Maybe I need to get one!! Furthermore, it is reported that not all USB CD/DVD drives are compatible.
The MacBook Air only has one USB port. This USB port is buried in the shell, so not all thumb drives will physically fit into it. Yes, I had this problem… What can I say, Dav Nads has a BIG USB thumb drive!!
Similar to the external CD/DVD drive issue, it is reported that some USB hubs do not let you boot from them. The one I tried was a Belkin Desktop Hub (Model F4U016), which comes with an external power supply to power the USB ports.
The MacBook Air does not have a FireWire port. Therefore, you CANNOT acquire using Target Disk Mode.
There is no eSATA port, Ethernet port, or PCMCIA slot.
Here’s what I tried:
A) Forensic Linux Boot Disk to Acquire:
We have an in-house Linux variant comparable to Helix, Knoppix, or Raptor that we use for boot acquisitions. Note that since I did not have an external CD/DVD drive, it was a requirement that I load the boot disk into RAM because the laptop only has one USB port. I needed the one and only USB port free so I could plug in an external USB hard drive as a destination to save the image to. Our boot disk has a “Load to RAM” option which allowed me to do this. I believe others do as well.
Boot to Forensic Linux from USB thumb drive.
Load into RAM. Some boot disks have this option as noted above.
Remove USB thumb drive and plug USB storage hard drive in.
Image away.
Unfortunately, the specific chipset in the MacBook Air I was acquiring was not compatible with my Linux boot disk. I found this interesting because it worked for a colleague a few months ago on an earlier MacBook Air model, which was also Intel-based. Regardless, it was on to Plan B. I will note here that I have heard Raptor works well booting in Mac environments. However, I did not have time to try it in the field, and I do not think it has the option to load into RAM.
Here is what I did:
B) Remove Hard Drive:
Before you get started, note that for Rev A MacBook Airs I would expect you to find a PATA ZIF hard drive. For Rev B and C, you should find a SATA LIF hard drive.
Unfortunately, I have not found an adapter yet for LIF interfaces. So stop reading here if you know that is what you're working with. The only place I have seen an adapter advertised for purchase is here, but it has always been out of stock. I was recently told that LIF adapters could also be purchased here, but I have not personally verified this. If you don't have an adapter to interface with LIF and are now looking for a Plan C, check back for my next post on FTK's CLI tool for OSX.
There is an excellent tutorial, written by Lee Whitfield, on Forensic 4cast documenting how to remove the hard drive from a Macbook Air. This can be found here. Alternatively, there are a number of videos on YouTube. This is the one I watched.
Whenever I take something apart, I like to draw a picture of where I extracted each piece/screw from. Something that may come in handy when putting it back together! It's also not a bad idea to tape the screws to the piece of paper. I actually had an experience where a person knocked the screws over once and I had to be real creative about putting the laptop back together. Live and learn LOL.
If the laptop has an SSD you will need a ZIF adapter. I recommend the one that Tableau sells (now owned by Guidance Software). If you use this one, it must be connected this way: to image a Samsung 1.8" drive, connect the Tableau TC20-3-2 ZIF cable to the adapter label face-up. Then connect the cable to the Samsung 1.8" drive, positioning the drive label face-up.
Image the hard drive externally using a hard drive duplicator or your tool of choice.
Put it back together!!
I will note that it has been reported that some Linux boot disks may temporarily disable or render the one USB port inactive. To reset the USB port, make sure the Mac is turned off. Press and hold the following keys on the keyboard: Shift, Control, Option (all on the bottom left side of the keyboard), and press and hold the Power button (top right of the keyboard). Hold for about 5 seconds and then release them all. You will not see any indication that anything happened. Try to boot from the external drive again.
I will document another collection option using FTK Imager CLI for OSX in my next post.
So you have been tasked with acquiring an Apple MacBook Air. There you are, it’s just you and the laptop, and you’re losing:
·Your favorite Linux distribution disk won’t boot,
·You spent hours taking the laptop apart only to discover the internal hard drive has a ZIF or LIF interface and you don’t have an adapter,
·The FireWire and Ethernet ports are missing,
·There is only one USB port,
·And the laptop won’t boot from your USB hub.
This documentation specifically applies to Apple’s MacBook Air models. However, the procedures outlined here should be applicable to all Intel-based Macs. When acquiring MacBook Airs, traditional acquisition methods can often be challenged by the lack of external media interfaces and by software compatibility issues.
SO…WHAT’S NEXT?!?! In April 2010, AccessData released Command Line Interface (CLI) versions of its popular FTK Imager tool. One of these versions supports Intel-based Mac OS X 10.5 and 10.6.x. I have found this tool to be a strong candidate for Mac collections. This article will explore two collection techniques that exercise this tool:
1.(Live Collection) – Acquisition of a targeted system in a live (booted) state. The FTK CLI tool is executed from the target system and the image is written to an external USB hard drive. This method is frequently used to acquire systems that cannot be taken offline or when encryption is involved.
2.(Secondary-boot Collection) – Acquisition of a targeted system from a secondary-boot device. The target system is booted from a bootable external USB hard drive containing OS X and pre-installed with the FTK CLI tool. Once booted, the FTK CLI imager is executed from this device and the image is written to a separate FAT32 partition on the same USB hard drive.
Note: As a forensic practitioner, you should weigh the pros and cons of the two collection techniques and use discretion as to which method (if any) suits the requirements and needs of your engagement.
Approach 1: Live Collection – Preparation:
1.OS X does not natively support writing to NTFS or EXT volumes. Therefore, you will need to prepare an HFS or FAT32 formatted hard drive to write your image to. I prefer FAT32 over HFS because it is readily accessible from Windows.
*By following this step you are making substantial changes to the host system.
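If you'd rather prep the FAT32 destination drive from Terminal than from Disk Utility, here is a sketch (the volume name and the /dev/disk1 identifier are illustrative; triple-check the identifier with diskutil list before erasing anything):

diskutil eraseDisk FAT32 EVIDENCE MBRFormat /dev/disk1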
5.After you have switched users to root, you will need to identify the source and destination hard drives for acquisition: Ftechs-Mac-mini:~ root$ diskutil list
This will query all active disks and their partition layouts:
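Illustrative diskutil list output (device names, sizes, and volume names here are mocked up to match the example discussed below; yours will differ):

/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *121.3 GB   disk0
   1:                        EFI                         209.7 MB   disk0s1
   2:                  Apple_HFS Macintosh HD            120.5 GB   disk0s2
/dev/disk1
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *500.1 GB   disk1
   1:                 DOS_FAT_32 Evidence_Drive          500.1 GB   disk1s1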
This information can be interpreted as follows:
"/dev/disk0" is representative of the first physical hard drive (attached to the system). It is determined based on size, volume name, and partition layout that this is the hard drive inside of the system. In this example, the physical device, "/dev/disk0" will be the source of the acquisition.
“/dev/disk1” is representative of the second physical hard drive (attached to the system). It is determined based on size, volume name, and partition layout that this is the destination hard drive connected to the system via USB.
On this hard drive there is one volume disk1s1 named Evidence_Drive. This is the volume we will use to write the acquisition to.
However, before you can write to a volume you need to determine what the “mount point” of the volume is. A mount point is the connection the operating system uses to interact with a volume on a hard drive.
6.Mac OS will automatically create a mount point (with full read/write permissions) when a device is attached to the system with a recognizable file system.
The mount point should be consistent with the volume name appended to /Volumes/. The mount command can be used to verify this: Ftechs-Mac-mini:~ root$ mount
This will list all volumes mounted on the system:
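Illustrative mount output (mocked up to match this example):

/dev/disk0s2 on / (hfs, local, journaled)
/dev/disk1s1 on /Volumes/Evidence_Drive (msdos, local, nodev, nosuid, noowners)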
We see here that “/Volumes/Evidence_Drive” is the full path of the mount point for volume “disk1s1” on the destination hard drive “/dev/disk1”. This is the destination mount point.
This now establishes that we will be imaging (source): /dev/disk0 and writing our acquisition image to (destination mount point): /Volumes/Evidence_Drive
After you have determined the source and destination mount point, navigate to the destination mount point where the FTK CLI tool resides: Ftechs-Mac-mini:~ root$ cd /Volumes/Evidence_Drive
7.Execute the following command and flags to run FTK CLI. This will acquire the source /dev/disk0 (the physical hard drive inside of the computer) and save it to /Volumes/Evidence_Drive (on the destination hard drive volume) in .E01 format, fragmented every 4 GB with no compression.
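A sketch of the invocation (the image filename is illustrative, and the flag spellings are from ftkimager's usage text as I recall it, so sanity-check against the tool's own help output first):

Ftechs-Mac-mini:Evidence_Drive root$ ./ftkimager /dev/disk0 /Volumes/Evidence_Drive/MacBookAir --e01 --frag 4G --compress 0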
Approach 2: Secondary-boot Collection – Preparation:
1.Prepare the USB hard drive with two volumes: one volume to install OS X on, which will be the boot partition, and a second volume as a storage area that can be used to write your image(s) to.
2.I would suggest using Apple’s Disk Utility, located at /Applications/Utilities/, to prepare this drive.
3.To make the USB hard drive bootable it must have ownership enabled.
1.Locate the 16 GB volume on your Mac desktop, right-click its icon, and select ‘Get Info’ from the pop-up menu.
2.In the Info window that opens, expand the ‘Sharing & Permissions’ section, if it’s not already expanded.
3.Click the lock icon in the bottom right corner.
4.Enter your administrator password when asked.
5.Remove the check mark from ‘Ignore ownership on this volume.’
6.Close the Info panel.
7.Once you complete these steps, your USB drive will be ready for the OS X install.
4.Install OS X - Summarized
1.Plug USB hard drive (prepared above) into Mac.
2.Put Install DVD in the Mac.
3.Reboot.
4.Choose to install OS X on the USB hard drive’s 16 GB partition, formatted Mac OS Extended (Journaled).
5.You may want to customize the software packages that OS X will install to minimize disk space required for the installation.
5.After install, test to make sure the Mac will boot from the secondary boot drive you just created instead of the internal hard drive. At start up hold down the “Option” key and you will be prompted with the boot options menu.
6.Once you are booted to the USB hard drive (the secondary OSX boot drive), you will need to copy the FTK CLI application onto it. You can use a flash drive to do this, or just go online and download it if you are connected to the internet.
Approach 2: Secondary-boot Collection – Execution:
1.Plug the secondary OSX boot drive you created above into the Mac machine you would like to acquire.
2.Start the Mac while holding down the Option key at startup to get to the boot options menu. Select the external secondary OSX boot drive to boot from.
3.Once booted, follow the steps listed above starting in Approach 1.3. In summary:
a.Go to console.
b.Switch to user root.
c.As illustrated below, identify the physical source hard drive (hard drive inside of the computer)
d.As illustrated below, identify the destination hard drive, volume, and mount point of the FAT32 storage-area volume on the Secondary-boot hard drive.
Attached disks:
Mounted devices:
e.Navigate to the location of the FTK CLI tool and execute the command with the proper usage and flags.
Let's say client XYZ maintains sensitive budget information within a select folder on one particular Windows fileserver. When originally created, the folder was restricted to specific AD users. At some point, everyone was granted access to the folder. Is there any available trail of activity in Windows to tell who accessed what and when?!?!
YES (if it's turned on)... !!!!!
I learned today that Microsoft’s audit object access policy handles auditing access to all objects outside AD. It is disabled by default, but IF enabled you can audit access to almost any kind of Windows object including files, folders, registry keys, printers, and services.
Pretty cool. I see this as a useful source of information for many investigations so I thought I would share.
If it's not turned on, enabling Audit Object Access either within GPO or the local security policy should do the trick. Please note that depending on how many files/folders you audit, disk space may be an issue. You really need a SIEM to go alongside this to parse and alert on anomalies if you want to use it as a true real-time investigative tool.
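As a sketch, on Vista/2008 and later you can flip the relevant subcategory from an elevated prompt with auditpol. Remember that object access events only show up once you also add an auditing entry (a SACL) on the folder itself (Properties > Security > Advanced > Auditing):

auditpol /set /subcategory:"File System" /success:enable /failure:enable
auditpol /get /category:"Object Access"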
Task: 50 hard drives, Windows XP, report all date/time instances a USB drive connection was made.
Proposed Solution: Open Encase, mount the image using the Physical Disk Emulator module, manually change Windows security permissions/ownership of the System Volume Information directory OR export the Restore Point directory, open a CMD prompt, execute RegRipper against %/Windows/System32/config/SYSTEM and the Restore Points, slice and dice the output, and misc.
~ 1 hour per hard drive.
Alternative Solution: Batch script that shit!
~ 2 hours of development resulting in ~5 mins per hard drive.
This is the groundwork and a start to scripting computer forensic tasks via the command line. It’s simple, yet very powerful stuff that anyone can do.
Also, a special thank you to David Kovar, who was so kind to give me a few pointers along the way. He has volunteered to take this initiative and port it over to Python. More to come from David and me as we work together on expanding this.
• Will prompt for disk image Full Path Location (.ad1, .e01, .dd, .vmdk, etc…)
• Automatically mount disk image using Mount Image Pro CLI
• List disk mounting information (drive letters mounted, volume name, file system, etc...)
• Prompt for the drive letter where %/SYSTEM VOLUME INFORMATION/% is located. This is where Restore Points are saved. By default this directory is protected and only accessible by the SYSTEM account. This can be automated later on
• Prompt for local Administrator account name
• Automatically change ownership of, and grant full access to, the %/SYSTEM VOLUME INFORMATION/% directory using Microsoft’s SubInACL.exe.
• List %/SYSTEM VOLUME INFORMATION/% information
• Prompt for Restore Point Directory Name you would like to parse
• Then do work...
• Currently set to execute RegRipper (RipXP.exe) using the USBSTOR3 plugin. This will parse the local SYSTEM hive and every Restore Point SYSTEM hive, subsequently outputting a nice CSV file showing every USB drive (and corresponding date/time) EVER plugged into the system.
• Anything that is cmd line accessible can be set to be executed after the drive is mounted.
Get Started:
Copy the code below into notepad, save as XXX.bat, and execute via the command line. Make sure you have the three dependencies installed and your paths are defined to the three executables.
Let me know if you have any questions or suggestions… It is just a rough draft but gets the job done for me! A lot more to come... stay tuned
::This batch script will mount image using Mount Image Pro (MIP4.exe), use Microsoft's SubInACL (SubInACL.exe) command to take ownership and grant full access to System Volume Information, and then do work such as execute RipXP.exe.
:: This requires that you have installed Mount Image Pro (a demo is available for free), Microsoft's SubInACL, and RegRipper's RipXP.
::input to locate SYSTEM VOLUME INFORMATION - this can be automated later
set /P MOUNTED_DRIVE_LETTER="Look at the above List of Mounted Devices and input drive letter where SYSTEM VOLUME INFORMATION directory is located (e.g. H): "
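:: (sketch) This is roughly where the SubInACL step from the feature list above
:: would go; the exact flags below are an assumption, not from the original script.
set /P ADMIN_ACCOUNT="Enter local Administrator account name: "
subinacl /subdirectories "%MOUNTED_DRIVE_LETTER%:\System Volume Information" /setowner=%ADMIN_ACCOUNT% /grant=%ADMIN_ACCOUNT%=F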
dir /ah "%MOUNTED_DRIVE_LETTER%:\System Volume Information\"
set /P RP_FOLDER_NAME="Enter Restore Point Folder you would like to parse in /System Volume Information/ (e.g. _restore{46DE8921-1D39-44D2-A9E9-64119261F211}): "
yoGirl: Davnads, you put the "sic" in forensic bc you got skillz.
Davnads: dat rite
yoGirl: I'm trying to stage some data on my network for a eDiscovery engagement that I need to process using the Cloud. I don't have time to manually create 500 staging folders with sub directories.
Davnads: Yo chair yo' Problem
yoGirl: :-(
Davnads: Damn sad faces, they always get 2 me. Okay I will help!
In response to my fan mail, I created an ugly (I don't program for a paycheck) Python script that will assist with creating directory structures en masse. This script uses the os module. "As is," the script will read a comma-delimited file containing a parent folder name and 3 sub-folder names, line by line.
Folder_Names.csv
David Nides,HDD SN XX,Mobile Phone,Network Share
Danny Nides,HDD SN XX,SharePoint Data,Network Share
For each line, it will create a directory structure consisting of a parent folder named from the first column, and sub-directories named from the second, third, and fourth columns. For example:
>David Nides
>>HDD SN XX
>>Mobile Phone
>>Network Share
>Danny Nides
>>HDD SN XX
>>SharePoint Data
>>Network Share
The code is listed below. Note it is currently set to write the folder structure out to the "D:\" drive but this can be easily changed. Let me know if you have any questions.
#Created by David Nides, 6/29/11
#This python script will input a CSV file (refer to the input.txt template)
#Parse each row and create a directory.
import os
import csv

file = csv.reader(open('folder_names.csv'), delimiter=',')
for row in file:
    os.chdir('d:')
    os.mkdir(row[0])
    print "creating ", row[0]
    temp1 = 'd:/' + row[0] + '/' + row[1]
    temp2 = 'd:/' + row[0] + '/' + row[2]
    temp3 = 'd:/' + row[0] + '/' + row[3]
    os.mkdir(temp1)
    print "creating ", temp1
    os.mkdir(temp2)
    print "creating ", temp2
    os.mkdir(temp3)
    print "creating ", temp3
os.chdir('d:')
Reason to believe a server was compromised and it's a physical Debian GNU/Linux mail server in a production environment? ..Sounds like fun!
Below is a short list of items to consider when responding to an incident. This is from a technical perspective and by no means a work plan for a comprehensive investigation.
If you haven't already, try to get a physical or logical image of the device. If the server can't be turned off to acquire physically, consider acquiring the logical partitions live:
1. Attach USB
2. mkdir /m1
3. mount /dev/sdb1 /m1 # Substitute /dev/sdb1 for your USB device's partition; fdisk -l helps
4. dd if=/dev/sda1 of=/m1/my_image.img # this cmd is very basic and will dd the partition to the USB disk. If the server uses the logical volume manager, copy the logical partition, as reconstructing the RAID/LVM later could be an issue.
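If you also want a hash of the image as you acquire it, here is a hedged variation (md5sum availability on the box, the block size, and the error-handling options are my assumptions, not requirements):

dd if=/dev/sda1 bs=4k conv=noerror,sync | tee /m1/my_image.img | md5sum > /m1/my_image.img.md5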
Identify all logs that could contain potential evidence related to the intrusion. Logs are going to be one of the key points of analysis in Linux-based investigations. To that point, don't forget to inquire about log retention policies and procedures during your scoping. For instance, are logs from the target server collected using a SIM, backed up to tape, or maybe logging is not even enabled? A good analogy: make sure to account for ("or eat") all the crumbs that may be surrounding the cookie.
Here is a short list:
1. /var/log/secure
2. /var/log/secure.*
3. /var/log/messages
4. /var/log/messages.*
5. /var/log/wtmp
6. /var/log/wtmp.*
7. /var/log/btmp
8. /var/log/btmp.*
9. /var/log/mail.log
10. /var/log/mail.log.*
11. /var/log/apache
12. /var/log/auth.log
13. /var/spool/
14. Check the syslog configuration (/etc/syslog.conf typically) and see if additional log files are stored
15. If the machine is behind a firewall, check the firewall (machine/appliance) logs.
In 2010, Guidance Software hosted a video competition for 2 free passes to their CEIC conference. We did not win because apparently it was not appropriate. I still went anyways, but I'm still reminiscing about our great video!
I'm working on a paper on High Tech Intellectual Property Theft so I thought I would share some food for thought!
According to Wikipedia (whatever that's worth), Intellectual Property (IP) is a term referring to a number of distinct types of creations of the mind for which a set of exclusive rights are recognized, and the corresponding fields of law; theft is the illegal taking of another person's property without that person's freely-given consent.
Do the math: IP + Theft is an equation for stealing s$% you shouldn't!! If you add technology as a variable into this equation, stealing $#% can get super geeky. For instance, an employee can copy the text from a document containing the recipe for Coke onto a website called pastebin.com. This is a website where you can freely copy and paste text, making it accessible to the world with just a few clicks. It is a convenient and "virtually untraceable" platform for people to share large amounts of text. The website has traditionally been used by programmers to store source code but has also more recently been used by HaX0r groups like Anonymous, 4chan, and LulzSec to post their pirated caches and booties.
Methods of IP theft are becoming more advanced and correspondingly difficult to detect. Traditional methods of detection (i.e. USB connection analysis, print spool files, e-mail, etc.) are not going to CUT it in some cases. I used one example of an insider COPYING and PASTING IP out of a network, but there are many other advanced methods, from transferring data from a laptop to a mobile device in someone's pocket via ad-hoc networking, to installing mobile malware/spyware on a VIP's device.
However, traditional methods of IP theft may not be as advanced but are just as difficult (if not more difficult) to detect. For instance, taking pictures of IP with a camera phone or calling a partner and communicating IP over the phone. In these cases it's more important to be aware of these methods and put governance and policies in place to prevent them, so you're NOT responding to the "perfect crime".
Let's also not forget about how the most simple digital crime can become oh so difficult. For instance, a departing user transfers documents from a computer to a USB storage device a week before they resign. During that week, a Windows Update is also run and all USB last-connection date/time information in the active registry is unfortunately updated. Now you, as a forensic examiner, are challenged to think outside of the box and look elsewhere ;-)
Below is a collaborative (thank you, unnamed co-worker) brain dump of potential methods of IP theft. Note, some of these methods may leave little to NO forensic residue - the emphasis of the paper I'm writing is identification and detection from a computer forensic perspective. The purpose of this list is to promote awareness and hopefully assist with your due diligence or your next IP theft investigation.
Personal e-mail account usage (i.e. user logs into personal e-mail account via web mail and attaches documents or copies text to e-mail message).
Instant Messaging software such as AIM, MSN, Yahoo, Gtalk, or ICQ (i.e. transfer text or attachment over instant messaging conversation)
Internet activity to online storage tools, file sharing services, social media platforms, and public/private forums (i.e. upload documents to online storage service or copy text to website such as pastebin.com).
Access to network resources such as file servers (i.e. copy documents from the file server directly to a USB device without subsequently accessing them locally).
Network connectivity to private networks via Bluetooth, wifi, or remote access to transfer data (i.e. computer transfers documents to another computer via Bluetooth network).
Removable storage devices (i.e. user copies data to a thumb drive or external hard drive). Keep in mind removable storage devices do not always get tracked comprehensively (i.e. an O/S update occurs that updates all USB last-connection date/time information in the registry).
Screen capture applications run from removable devices to minimize forensic residue (i.e. run screen recording tool from USB drive).
Use of non-standard applications/protocols such as VPN, FTP, SFTP, P2P, SSH (i.e. use an FTP application to transfer data to a remote server).
Copy data to a device that can be configured as a USB storage device, such as a mobile phone or music player (i.e. copy data via USB to an iPhone or iPod).
Bypassing the operating system by booting the system from a bootable disk to copy data to an external drive (i.e. anti-forensic or forensic software such as Helix or Knoppix).
Traditional forensic and IT methods of cloning hard drives (i.e. extract hard drive from system and use forensic software/hardware to copy/clone data).
Host and Mobile device based Spyware/Malware
Other "low tech" methods of exfiltrating data include:
Taking hard copy documents or electronic devices,
Photography or video,
Printing,
Scanning,
Use of unknown devices,
Making a phone call and communicating the IP.
Stay tuned.. I will be posting some more forensication soon.
Recently, I received 50+ SYSTEM registry hives from various host systems. Note, due to special circumstances only the SYSTEM hives were provided -- fyi -- there are other artifacts that log USB connections. All hives were preserved in Logical Evidence File (L01) format. Using Encase I took a look at the L01 files. Based on the full path information of the SYSTEM registry hives collected, it appeared they were from both active and Restore Point locations.
For this engagement I needed to report all date/time instances a USB connection was made based on the SYSTEM registry hives provided...
Since I was dealing with hives from various hosts within the L01s -- the only thing segregating them was the directory structure (full path information) they were preserved in -- it would be key to preserve this same full path information for each hive in whatever output/report was created. This would allow one to tie a hive back to a specific host later on.
Therefore, it was time to put my thinking cap on. Below is the list of options I came up with:
Manually parse out the Hives.
Run the Encase Advanced EnScript USB parser, but that outputs into a messy log file that is not delimited. Experience also tells me it can be hit or miss.
Export the hives and run RegRipper on each of them one by one, manually building a report as I go.
Build a RegRipper batch script, but this would not preserve the file name and full path source of the hive in the output.
What I ended up building instead was a Python script to: recursively walk through a directory structure (using Encase, I exported all L01's, preserving folder paths, to a case folder).
Identify any "SYSTEM" or "_REGISTRY_MACHINE_SYSTEM" registry hives.
For each Hive it finds:
Append File name to processing audit log
Run RegRipper against it with a specific plugin (USBSTOR3, to show me all USB connections)
Import Reg Ripper output into Python memory based list/db
For each line imported, append the full path of the original hive parsed (for audit purposes -- this will allow me to tie a hive back to its original source later).
Export CSV report for all hive files found.
Below is the pretty Python code I compiled. For fun I'm going to try to add some error handling, convert it to OO, and port it into an executable. For now, all I can say is it works and saved me a ton of manual effort/time.
import os, fnmatch, csv

a = []

def find_files(directory, pattern): #Recursively walk directory path for files
    print 'Recursively search directory for SYSTEM hives..'
    for root, dirs, files in os.walk(directory):
        for basename in files:
            if fnmatch.fnmatch(basename, pattern):
                filename = os.path.join(root, basename)
                yield filename

for filename in find_files('C:\\directory_structure_to_search', '*SYSTEM'): #Define dir path and hive type to look for
    print 'Parsing', filename #append hive to the processing audit log
    #NOTE: the loop body was cut off in the original post; below is a reconstruction of
    #the steps described above (rip.exe's -r/-p flags are standard RegRipper CLI; treat as a sketch)
    output = os.popen('rip.exe -r "' + filename + '" -p usbstor3').read()
    for line in output.splitlines():
        a.append([filename, line]) #keep full path of the source hive for audit purposes
writer = csv.writer(open('usb_report.csv', 'wb')) #export CSV report for all hives found
writer.writerows(a)
Thank you to all of my #DFIR followers. Hope everyone had a great New Years. Let 2012 bring many dongles, matching hashes, and cold blowing CPU fans to everyone!
Harlan Carvey recently blogged about approaches to conduct Timeline Analysis:
"So, anyway...I've been thinking about some of the things that I put into pretty much all of my timeline analysis presentations. When it comes to creating timelines, IMHO there are essentially two "camps", or approaches. One is what I call the "kitchen sink" approach, which is basically, "Give me everything and let me do the analysis." The other is what I call the "layered" or "overlay" approach, in which the analyst is familiar with the system being analyzed and adds successive "layers" to the timeline. When I had a chance to chat with Chad Tilbury at PFIC 2011, he recommended a hybrid of the two approaches...get everything, and then view the data a layer at a time, using something he referred to as a "zoom" capability. This is something I think is completely within reach...but I digress."
I very much agree with the various approaches outlined above and their respective descriptions. Well put, Harlan and Chad Tilbury.
Over the years I have observed the traditional "kitchen sink" approach evolve into a "layered - overlay" approach. Fundamentally, this has been the building block of timeline analysis. Harlan, Rob Lee, and The Sleuth Kit have been primary drivers of this transformation with contributions such as "regtime.pl", "mac_daddy", and "fls". These contributions have allowed us to take the "kitchen sink", an entire hard drive image, and break it up into different "layers", each layer representing a specific artifact type such as registry or file system.
What I appreciate about the "layered - overlay" approach is that it is an effective method of "removing the noise". This is my way of saying: hone in on specific areas of interest. In contrast, the "kitchen sink" approach can result in overwhelming volumes of data that can easily lead to distraction.
For example, if I'm only interested in reviewing USB connections, there are only specific "data points" that I need to look at. As such, I would only apply the relevant layers of data points to my timeline (i.e. registry, setupapi.log) to identify the connections. Then, if needed, I could double check my results by adding a third layer into the timeline: ".evtx" files (Windows 7 event logs, which record USB connections in the System log), which should essentially overlay my existing USB connections and confirm my results. Perhaps I then wanted to see if there were any ".lnk" files created on the hard drive image to show files being accessed from the USB device during the date/time of a USB connection. Subsequently, a fourth layer, file system activity, could be added to the timeline for review and quickly filtered by ".lnk" files. In summary, this fundamental process of building a timeline is the concept of the "layered - overlay" approach.
Adobe Photoshop (a graphic design application) is a good example of putting this concept to use. For anyone not familiar with the product, multiple layers are used to represent and control each part of an image; background, shading/coloring, objects, etc. All of the individual layers merged together (overlayed) make up the "entire picture."
However, as Harlan alluded to, not using the "kitchen sink" approach will dilute visibility into the context of specific artifacts -- limiting your analysis to specific layers instead of looking at the "entire picture":
"the more data we have, the more context there is likely to be available. After all, a file modification can be pretty meaningless, in and of itself...but if you are able to see other events going on "nearby", you'll begin to see what events led up to and occurred immediately following the file modification."
So how does Dav Nads combine the best of the two approaches into one - the Hybrid Approach?
To my knowledge there's no "out of the box" or "push button" solution for this. It's a process of using multiple tools and applications. It's a manual but comprehensive process. And the process, like all processes should be, is constantly being refined to adapt to technology and needs.
It all starts out with owning a lot of real estate: 2 x 24" monitors :-) Having tall and wide monitors is key for any type of timeline analysis. It allows you to see more data (and context) at one glance and increases efficiency by reducing clicking n' scrolling.
I use one monitor to display the timeline data output from log2timeline-sift in SPLUNK. This process is described in detail by Klein&Co. Why do I use SPLUNK to display my log2timeline-sift output?
Running log2timeline-sift on a 120 GB hard drive image can easily result in 2-3 GB of output. Not to mention, try running log2timeline on a 500 GB hard drive image. Microsoft Excel ain't going to work to review all of that data. It has limitations, period.
Sure, you can use "l2l_process" to cull the resulting output from log2timeline down by criteria such as date range, but this still does not guarantee your resulting output will be a manageable volume. It also takes away context and makes building the timeline an iterative process if you need to adjust later on.
Most people know enough Python, SQL, GREP or PERL to be dangerous but not productive. Therefore, having a GUI based platform similar to Excel tends to be a preference when reviewing timeline data.
SPLUNK indexes timeline data, providing the ability to search, filter, and sort data on the fly. It's also scalable, in the sense that it's an enterprise tool designed to work with GBs of data. With the click of a button I can easily refine my timeline to only show certain data types. Note, DAV NADS does not work for SPLUNK, it's just the best solution I have found.
Harlan raises an excellent point, "That leads me to this question...if you're running a tool that someone else designed and put together, and you're just pushing a button or launching a command, how do you know that the tool got everything? How do you know that what you're looking at in the output of the tool is, in fact, everything?"
If I were to rely solely on the output of log2timeline and SPLUNK as a review tool for my analysis, that would be an issue for 2 reasons:
First, let's be honest: regardless of what tool is used (commercial or open source), they all have, or had at one point, BUGS. Just as recently as a week ago, a bug in log2timeline was identified on the win4n6 list and was subsequently fixed.
Secondly, timelines are what I like to refer to as skeletons. They do not show you the meat on the bones. Reviewing timeline data may reveal that "Top Secret - Recipe for Coke.docx" was created and opened. However, the limitation with timeline data is that you can't view the document. That's when the second monitor comes into the picture...
I use the second monitor to display the hard drive image in a forensic tool (Encase, FTK, etc.). This allows me to take a look at "Top Secret - Recipe for Coke.docx" and see that it's just a document discussing how Coke's secret formula is now on exhibit in World of Coca-Cola in Atlanta! This also allows me to potentially see anything in the context of this event that is not displayed in my timeline as a layer.
Leveraging a second tool simultaneously to view the data from a different perspective also allows me to double check and verify findings. For instance, if I see that how_to_kill_the_dog.doc was created on January 1, 2013 in my timeline data, I can quickly check whether I'm seeing the same thing in my forensic tool, or whether this is an odd anomaly and potentially an issue with my timeline.
From my experience, the Hybrid timeline analysis approach is really about finding synergy between the "kitchen sink" and "layered - overlay" approaches. The important thing to understand, to successfully deploy this approach, is the strengths and weaknesses of the tools you use. For instance, identifying where the timeline data (output from log2timeline or wherever) may only contain X while the full disk image contains Z, and putting a process in place to fill those gaps. This allows you to develop a Hybrid approach, like the one I described above, that fits your needs.
- DAV NADS, tweetin' @DAVNADS tweet at me cyber girls!
Sorry for the lack of blog posts, biatches, but Dav Nads has been busy wrangling APT hackers and getting nominated for writing the "best digital forensic article of the year" by the digital forensic incident response community and Forensic 4cast!!
Here is the article and cheat sheet Dav Nads was nominated for writing on the SANS blog: