Ponder The Bits

Musings and confusings. All things DFIR.

Author: Jonathon Poling

The Importance of Incident Scoping/Assessment

In consulting, all engagements begin with what we refer to as “scoping” in order to, at a very high level, determine if/how we may be able to help a client. Sure, they can sometimes be arduous or monotonous and often involve a comedy of conference call errors, but they are absolutely CRITICAL to the success of the engagement. So critical, in fact, that I wanted to take some purposeful time to address this often-overlooked, admittedly very unsexy, but nonetheless integral piece of performing effective DFIR.

Though I’m approaching this from a consulting point of view, this is in no way unique nor applicable only to consulting. So, don’t tune out now just because, “I don’t have clients.” You do, actually, whether they are external entities or people within your own organization. Each of us has “clients” in various forms that rely (often heavily) on us as DFIR professionals to ask the right questions and respond with the appropriate guidance and/or actions to mitigate threats and protect them from harm. As such, every DFIR professional will find themselves in situations where they must first comprehensively understand the problem(s) and issue(s) at hand before they can be effectively addressed in response and analysis. In fact, this is often where many of us find ourselves at the onset of an alert or suspected compromise that requires a well-formulated and concerted DFIR response. Though it can be very difficult to take a step back and perform a high-level assessment instead of diving right into action (especially when you may have external entities and/or higher-level management wanting answers LIKE YESTERDAY), doing so will pay you back ten-fold.

At a lower level, scoping (or an initial assessment) must be performed as a due diligence effort to acquire all appropriate/pertinent information, addressing a variety of aspects that may affect the success, efficacy, or efficiency of the investigation. From my private sector experience dealing with a wide array of clients from all types of verticals, industries, and organizational sizes, I’ve attempted to (perhaps unjustly) distill the scoping topic and question set down to a few (ok, a few more than a few) important bullets provided below.

However, do keep in mind that this is not intended to be an all-inclusive set of questions to ask by any means. Rather, the below set of topics and specific questions are intended to serve as a solid baseline in facilitating comprehensive response and should be augmented or modified as needed. In addition, the below questions are from the perspective of me/us asking an external entity (i.e., a client). So, feel free to change/replace the pronouns as well as instances of “the client” as needed for your use and application.

  • Purpose: Understand the background of the situation
    • How did we (the client/organization) get here?
    • When and how did they first notice any sort of issue?
    • Can they think of any legitimate (non-malicious) causes for the current issue?
    • Have there been any other anomalies either previous to or during the timeframe of concern that may be related to the current issue?
  • Purpose: Understand what responsive actions have been performed to date
    • Are the systems still running or have they been powered down?
    • Have firewall blocks been put in place?
    • (Tons more questions I ask…)
    • Have any external entities been notified (particularly if under certain regulations)?
  • Purpose: Understand the set of artifacts available to assist us in our response
    • What type(s) of machines are involved (workstations/servers, OS)?
    • Are they virtual or physical?
    • Where are they hosted, and do they span disparate physical locations?
    • What specific logging is enabled (host, network, and appliance)?
    • …and which of the above are actually collected and available?
    • …and do the available logs contain the right (useful) content?
      • Let’s just say it is not uncommon to get “VPN logs” that simply show syslog interface up/down, “Web Logs” that contain only the load balancer as the “client-ip”, or “DNS logs” that aren’t logging query/response and instead log only the IP of the DNS server as the “client”.
    • …and do the available logs span the time frame(s) of concern/interest?
  • Purpose: Identify the goals (in priority order) for the engagement
    • Ask the question, “What would a successful engagement look like to you?”
    • Is the client interested in simply getting back to business?
    • Would they like a root cause analysis (how exactly did this get in/past their defenses)?
      • If so, are they looking to specifically prove/disprove something?
    • Are they simply looking to check a box (fulfilling some sort of requirement)?
      • Often this will not be stated, but you can determine it by asking certain questions and gauging certain entities’ investment in the response.
    • What would they like us to do, specifically, in priority order?
      • Now, we don’t take this verbatim and slap it into a SoW, but we do use it to guide our response as best as possible to achieve their priority goals in tandem with an appropriate, comprehensive, and best practice response.

Though it is not uncommon for a situation to merit even further questioning and an excruciating level of detail, depending on complexity, the above set of topics is what I would consider the minimum requirement for a comprehensive initial assessment. Suffice it to say that misunderstandings or a simple lack of required information in any one of these areas can lead to a variety of undesirable consequences, ranging from a merely sub-optimal outcome to a completely disastrous one for both the company and the client.

The DFIR industry is a “pay now or pay later” industry, and this is no exception. As such, I highly encourage all of us to be purposeful in spending our time performing due diligence on the front end instead of paying the consequences of not doing so throughout our investigations.

/JP

OSX (Mac) Memory Acquisition and Analysis Using OSXpmem and Volatility

Macs don’t get much love in the forensics community, aside from @iamevltwin (Sarah Edwards), @patrickolsen (Patrick Olsen), @patrickwardle (Patrick Wardle), and a few other incredibly awesome pioneers in the field. We see blog posts all the time about Windows forensics and malware analysis techniques, along with some Linux forensic analysis, but rarely do we see any posts about Mac technical/forensic analysis or techniques. I find this odd, considering the surge in Mac usage and deployment over the last several years, particularly within enterprises. Well, with my most recent two-part Mac post as well as this one, I’m attempting to change this, my friends!

Macs need love and disk/memory analysis as well, amirite?

Let’s have a look at memory acquisition of OSX systems using a nifty tool called OSXpmem.

OSXpmem is part of the pmem suite created by the developers of Rekall. Rekall itself is actually a very useful utility built for both memory acquisition and live memory analysis on Windows, Linux, and OSX systems. While I will be delving into Rekall in a future post, for this post we will simply be focusing on OSXpmem, which is an awesome command-line utility for quickly and easily collecting RAM from a Mac system. One of its greatest features is its output to an AFF4 volume, which has a ton of useful features (likely to be discussed in a dedicated post in the future as well).

Acquiring Memory

So, what’s the easiest way to get up and running with the tool for memory acquisition?

  1. Download latest release (as of this post, the latest osxpmem release is “2.1.post4”).
  2. Unzip the package
    1. $ unzip osxpmem.osxpmem-2.1.post4.zip
  3. Run it to collect memory from the local system
    1. $ ./osxpmem.app/osxpmem -o <output_dir>

Super simple, right?

Wellll, maybe not that simple. When you run it, even as sudo/root, you may get the following error:

$ sudo osxpmem.app/osxpmem -o Memory_Captures/mem.aff4
Imaging memory
E1229 15:17:26.335978 3375588288 aff4_file.cc:289] Can not open file /dev/pmem :No such file or directory
/Users/jp/Projects/osxpmem.app/MacPmem.kext failed to load - (libkern/kext) authentication failure (file ownership/permissions); check the system/kernel logs for errors or try kextutil(8).
E1229 15:17:26.606639 3375588288 osxpmem.cc:283] Unable to load driver at /Users/jp/Projects/osxpmem.app/MacPmem.kext
E1229 15:17:26.606714 3375588288 pmem_imager.cc:328] Imaging failed with error: -8

How usefully nondescript. Let me save you some time, as searching the system/kernel logs as suggested yields nothing useful.

So, instead, let’s use the native utility kextutil’s “test” parameter (-t) to see if that gets us anywhere…

$ sudo kextutil -t osxpmem.app/MacPmem.kext/
Diagnostics for osxpmem.app/MacPmem.kext:
Authentication Failures:
File owner/permissions are incorrect (must be root:wheel, nonwritable by group/other):
osxpmem.app/MacPmem.kext
Contents
_CodeSignature
CodeResources
Info.plist
MacOS
MacPmem

Nice. It finally tells us what’s wrong. The file ownership/permissions must be changed to “root:wheel”. Easy enough…

$ sudo chown -R root:wheel osxpmem.app/

So, let’s try again…

$ sudo osxpmem.app/osxpmem -o Memory_Captures/mem.aff4
Imaging memory
Creating output AFF4 ZipFile.
Reading 0x8000 0MiB / 8095MiB 0MiB/s
Reading 0xe38000 14MiB / 8095MiB 56MiB/s
Reading 0x1c88000 28MiB / 8095MiB 56MiB/s
Reading 0x2ac0000 42MiB / 8095MiB 56MiB/s
Reading 0x3978000 57MiB / 8095MiB 58MiB/s
Reading 0x47c8000 71MiB / 8095MiB 56MiB/s
Reading 0x5678000 86MiB / 8095MiB 58MiB/s
Reading 0x6500000 101MiB / 8095MiB 57MiB/s

Reading 0x1f7478000 8052MiB / 8095MiB 39MiB/s
Reading 0x1f7d68000 8061MiB / 8095MiB 35MiB/s
Reading 0x1f8708000 8071MiB / 8095MiB 38MiB/s
Reading 0x1f9150000 8081MiB / 8095MiB 41MiB/s
Reading 0x1f9c00000 8092MiB / 8095MiB 41MiB/s

YES! It worked! As you can see, my system has 8GB of memory that was (by default) exported to an AFF4 volume/file called “mem.aff4”.

You also have the option to include additional local files within the resulting AFF4 volume/file via the “-i </path/to/file> -i </path/to/file> …” command line option(s), which can be useful in producing a singular output volume containing not only memory but also other files (binaries/logs/etc.) you’d like to analyze. In the past, I used this option to collect the local /bin/bash file when Volatility used to require that the bash shell’s memory address be provided in order to parse command history and produce associated timestamps via the linux_bash plugin. Though the documentation still shows it as a requirement, it’s actually not needed anymore and Volatility parses it all just fine.
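For illustration, bundling a couple of extra files alongside the memory capture might look something like the following (a hedged sketch; the included file paths here are arbitrary examples, and the -i syntax is as described above):

$ sudo osxpmem.app/osxpmem -i /bin/bash -i /var/log/system.log -o Memory_Captures/mem.aff4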

In addition, you may also export the memory image to a singular RAW or ELF file by using the “--format elf” or “--format raw” command line options if that suits your fancy. However, for this post, I am using the default AFF4 output so that we may explore its use and features a bit.
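If you would rather capture straight to a raw image, the invocation should look something like this (a sketch based on the --format option described above; the output path is arbitrary):

$ sudo osxpmem.app/osxpmem --format raw -o Memory_Captures/mem.raw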

So, without further ado, let’s take a look at the resulting AFF4 volume/file.

$ sudo osxpmem.app/osxpmem -V Memory_Captures/mem.aff4
Password:
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix aff4: <http://aff4.org/Schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix memory: <http://aff4.org/Schema#memory/> .
<aff4://7f482355-5683-46bb-87c0-21afd75dbbeb/dev/pmem>
aff4:category memory:physical ;
aff4:stored <aff4://7f482355-5683-46bb-87c0-21afd75dbbeb> ;
a aff4:map .
<aff4://7f482355-5683-46bb-87c0-21afd75dbbeb/dev/pmem/data>
aff4:chunk_size 32768 ;
aff4:chunks_per_segment 1024 ;
aff4:compression <https://www.ietf.org/rfc/rfc1950.txt> ;
aff4:size 8488656896 ;
aff4:stored <aff4://7f482355-5683-46bb-87c0-21afd75dbbeb> ;
a aff4:image .
Objects in use:
Objects in cache:
aff4://7f482355-5683-46bb-87c0-21afd75dbbeb - 0
aff4://7f482355-5683-46bb-87c0-21afd75dbbeb/information.turtle - 0
file:///Users/jp/Projects/Memory_Captures/mem.aff4 - 0

Here, you can see that we extracted a memory image to the AFF4 stream “7f482355-5683-46bb-87c0-21afd75dbbeb/dev/pmem”.

Now, what can we do with this? Well, one thing you could do (if not using Rekall to analyze this image) might be to extract the AFF4 memory image stream into a singular raw file for parsing/analysis by other tools such as Volatility, page_brute, yara, strings, etc. To do that, we perform the following:

$ sudo osxpmem.app/osxpmem -e /dev/pmem -o Memory_Captures/mem.raw Memory_Captures/mem.aff4
Extracting aff4://7f482355-5683-46bb-87c0-21afd75dbbeb/dev/pmem into file:///Users/jp/Projects/Memory_Captures/mem.raw
Reading 0x8000 0MiB / 9968MiB 0MiB/s
Reading 0x750000 7MiB / 9968MiB 28MiB/s
Reading 0xde0000 13MiB / 9968MiB 25MiB/s
Reading 0x1480000 20MiB / 9968MiB 25MiB/s

Reading 0x26d938000 9945MiB / 9968MiB 21MiB/s
Reading 0x26deb8000 9950MiB / 9968MiB 21MiB/s
Reading 0x26e418000 9956MiB / 9968MiB 20MiB/s
Reading 0x26eab0000 9962MiB / 9968MiB 25MiB/s

$ ls -l Memory_Captures/
total 25665056
-rwxr-xr-x 1 root staff 2688302741 Dec 29 15:30 mem.aff4
-rwxr-xr-x 1 root staff 10452205568 Dec 29 16:10 mem.raw

As you can see, the raw image is uncompressed and thus substantially larger than the AFF4 volume (one of the useful features of AFF4 is its compression options). Nonetheless, there you have it. A raw memory image to parse to your heart’s content with whatever tools you like.

However, before we move on, I personally like to unload the kernel extension for good measure so that it’s not just hanging out there for no purpose.

$ sudo osxpmem.app/osxpmem -u
Unloading driver /Users/jp/Projects/osxpmem.app/MacPmem.kext

Creating a Memory Profile

Acquiring a memory image is great, but it is unfortunately useless (with respect to Volatility) without the appropriate profile to parse it. Volatility requires that a memory profile be specified when parsing a memory image via the “--profile=<profile>” command line option. By default, Volatility includes a ton of profiles for Windows, but such is not the case for Linux and Mac. Though a profiles repository has been created containing a substantial set of profiles for Linux and Mac, YMMV. In my situation, I’m running the latest macOS Sierra release, 10.12.3, for which no profile existed as of this post (nor did it for 10.12.2 until I created and submitted one to the repo as well :D). Therefore, I had to create my own profile. Luckily, the folks at Volatility do a great job walking us through building a profile on a Mac. Though, there are a few clarifications I’d like to address.

To begin, I need to provide some clarification/correction for the initial step, focusing on the part in italics:

“To create a profile, you first need to download the KernelDebugKit for the kernel you want to analyze. This can be downloaded from the Apple Developer’s website (click OS X Kernel Debug Kits on the right). This account is free and only requires a valid Email address.

After the DebugKit is downloaded, mount the dmg file. This will place the contents at “/Volumes/KernelDebugKit”.”

While the above statement is true, if you (like me) immediately dismount a package once it’s installed, you should instead pay attention to the installer to see where it is putting the files for long-term access. Independent of the mounted package, the KDK is installed in the following location, which will need to be referenced once the package is dismounted post-install:

/Library/Developer/KDKs/KDK_<version>.kdk/

As of this writing, for macOS Sierra 10.12.2 and 10.12.3, the <version> will be “10.12.2_16C67” and “10.12.3_16D32”, respectively:

/Library/Developer/KDKs/KDK_10.12.2_16C67.kdk/
/Library/Developer/KDKs/KDK_10.12.3_16D32.kdk/

Thus, “Step 1” for building a 10.12.3 profile would be the following (for a 64-bit 10.12.3 system):

$ dwarfdump -arch x86_64 /Library/Developer/KDKs/KDK_10.12.3_16D32.kdk/System/Library/Kernels/kernel.dSYM > 10.12.3_x64.dwarfdump

Also note that the referenced kernel file names differ from the current instructions (e.g., “mach_kernel.dSYM” is now “kernel.dSYM”, and “mach_kernel” is now just “kernel”). So, do exercise additional caution when running the commands. For ease of reference, below are the locations of both of these files on a macOS Sierra 10.12.3 64-bit system (but note that this may change with future versions):

/Library/Developer/KDKs/KDK_10.12.3_16D32.kdk/System/Library/Kernels/kernel
/Library/Developer/KDKs/KDK_10.12.3_16D32.kdk/System/Library/Kernels/kernel.dSYM

All of the above is actually noted during the install as well.

To save everyone a bit of time and translation from current Volatility documentation, I’ve written out the latest required steps below for relatively easy copy/paste into your terminal. For this, we are using the latest 10.12.3 release and associated KDK as an example:

  1. Check to see if a profile is already available for your particular OSX version/release
    1. https://github.com/volatilityfoundation/profiles/tree/master/Mac
  2. If not, download and install the KDK appropriate for your current (or targeted) OSX version/release
    1. http://developer.apple.com/hardwaredrivers
  3. Get the dwarf debug info from the kernel.
    1. $ dwarfdump -arch x86_64 /Library/Developer/KDKs/KDK_10.12.3_16D32.kdk/System/Library/Kernels/kernel.dSYM > 10.12.3_x64.dwarfdump
  4. Convert the dwarfdump output to Linux style output readable by Volatility
    1. $ python tools/mac/convert.py 10.12.3_x64.dwarfdump converted-10.12.3_x64.dwarfdump
  5. Create the types from the converted file
    1. $ python tools/mac/convert.py converted-10.12.3_x64.dwarfdump > 10.12.3.64bit.vtypes
  6. Generate symbol information
    1. $ dsymutil -s -arch x86_64 /Library/Developer/KDKs/KDK_10.12.3_16D32.kdk/System/Library/Kernels/kernel > 10.12.3.64bit.symbol.dsymutil
  7. Create a zip file of the *.dsymutil and *.vtypes files
    1. $ zip 10.12.3.64bit.zip 10.12.3.64bit.symbol.dsymutil 10.12.3.64bit.vtypes
    2. **See note at end of instructions**
  8. Copy the zip file to the volatility/plugins/overlays/mac/ directory (remember, we are already inside the root /volatility directory)
    1. $ cp 10.12.3.64bit.zip volatility/plugins/overlays/mac/
  9. Verify your profile is registered and ready for use
    1. $ python vol.py --info | grep "A Profile for Mac"
      1. The profile name presented is the string you will pass to the “--profile=” parameter when analyzing a memory image from this version/release in Volatility

**Note: While I append “x64” or “64bit” to my various output files to keep track of which architecture build I’m producing, doing so for the final .zip output file yields profile names with rather weird-looking duplicate 64-bit identifiers (e.g., “Mac10_12_3_64bitx64”). If you would like cleaner-looking profile names (at the cost of losing the filename identifier denoting the arch build), you should instead drop the trailing identifier and name the file something like “10.12.3.zip”, thus yielding a prettier (IMO) profile name like “Mac10_12_3x64”.
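For example, repackaging the same two files from step 7 under the shorter name (only the zip name changes) would look like the following:

$ zip 10.12.3.zip 10.12.3.64bit.symbol.dsymutil 10.12.3.64bit.vtypes
$ cp 10.12.3.zip volatility/plugins/overlays/mac/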

Using Volatility for Analysis

Once we have successfully created the appropriate profile for the acquired image, we can now use the plethora of native Volatility Mac OSX plugins provided to us for analysis.

To see the list of available plugins, simply type the following:

#Executed from within the root /volatility folder of a git cloned repo
$ python vol.py --info | grep "mac_"

#Using the standalone binary
$ ./volatility_2.6_mac64_standalone --info | grep "mac_"
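Once the profile is registered, running any of these mac_ plugins follows the same pattern. Here is a hedged example (the profile name assumes the cleaner naming from the note above, and mac_pslist is just one arbitrary plugin choice):

$ python vol.py --profile=Mac10_12_3x64 -f ~/Projects/Memory_Captures/mem.raw mac_pslist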

Conclusion

That pretty much wraps it up for this post. There is certainly more to explore with OSXpmem, the AFF4 format, and Volatility. However, I encourage you to explore it on your own as I would like to save some feature exploration for future in-depth posts focused on using both Volatility and the Rekall suite.

/JP

Mac Dumpster Diving – Identifying Deleted File References in the Trash (.DS_Store) Files – Part 2

In Part 1 of this post, we identified where these artifacts reside, along with options for parsing them. However, we have not yet addressed why/how this anomaly occurs. Thus, in Part 2, we test to see how and why it happens.

The behavior we’re seeing led me to the following hypothesis for testing:

  1. Although the .DS_Store file is “deleted”, when it is re-created it occupies the same space on disk, within the same previously allocated blocks on the volume.
    1. *Note: This same situation often occurs on Windows when event logs are cleared/deleted and the event log file is re-created. The re-created log file often inhabits an area on disk surrounding previously deleted entries that may or may not be relevant to the current log at hand. Thus, carving that file can yield various historical event entries.
  2. The .DS_Store entries are stored somewhere else on disk and/or memory and are referenced and re-populated within the file upon re-creation for some reason (what reason, I have no idea).
  3. …or another theory that might make sense. (Please share your hypothesis or factual knowledge!)

I tested #1 above by using the “stat” command to see if a deleted and then re-created .DS_Store file would occupy the same inode, and it does not. However, I still leave room for the possibility that even though a new inode is associated with the file each time it is re-created, it may still be somehow occupying (some of) the same space on disk.
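For reference, a quick way to perform that inode check on macOS with BSD stat (printing the inode number and file name) is to run the following before emptying the Trash and again once the file has been re-created:

$ stat -f "%i %N" ~/.Trash/.DS_Store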

I tested the on-disk aspect of #2 by searching across all files on disk for any references to a file that was previously deleted (since reboot) – the installer for BlockBlock named “BlockBlock_Installer.app”. The following files stood out to me:

$ sudo sift -z -a -l --err-skip-line-length BlockBlock_Installer.app /
...
/private/var/audit/20161217022600.crash_recovery
/private/var/db/uuidtext/AC/AF78F7097534A2A72631F3DD0AFE52
/private/var/folders/q4/r796r6tx2sd7zhjsxn2bjmv00000gn/0/com.apple.LaunchServices-175-v2.csstore
/private/var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/0/com.apple.LaunchServices-175-v2.csstore
/.Spotlight-V100/Store-V2/3AF86A9A-8A7B-414A-8479-E5FACBC49DF1/Cache/0000/0000/000f/997405.txt
...

While each of the above files did contain references to the given file name, none of them contained anything relevant to our research here to indicate they were the culprit of our .DS_Store file entry repopulation issue.

As an aside, the last entry was actually the Spotlight-indexed (cached) Evernote page I have been using to take notes for this research. 🙂 Do note that the Spotlight database and cache directories are also great places to search for references to deleted files, possibly including full content that has been cached by indexing.

Moving on, I then tested the in-memory aspect of #2 by capturing a memory image (will author a separate blog post on doing this later) from my system and using Volatility’s yarascan and strings plugins to identify where in memory these entries may reside. I debated just showing the end results here, but I figure there is merit in showing how I got to the results as well. So, bonus for everyone!

Volatility’s yarascan plugin (specifically, mac_yarascan for our use on a Mac image) takes a yara rules file, finds matches across a memory image along with the associated files/processes/memory areas, and (optionally) dumps the results for analysis. So, this would seem rather useful for our situation here in trying to identify where in memory the historical deleted file references currently exist. To begin, I created the following yara rules file containing references to files that have been deleted from my system but whose entries still remain in the .Trash/.DS_Store file.

rule ds_store_searches
{
    strings:
        $s1 = "BlockBlock_Installer.app"
        $s2 = "canon-mx920-19_1_0a-ea11.dmg"
        $s3 = "FileZilla-Installer.app"
        $s4 = "SpotifyInstaller.zip"

    condition:
        any of them
}

As you can see, I’ve installed a few programs recently, the packages of which I deleted upon successful installation. However, these entries continue to be re-populated back into the .Trash/.DS_Store file on my system as I have not rebooted since I deleted them.

Using the latest release (2.6) of Volatility’s standalone OSX executable along with a custom macOS Sierra 10.12.2 profile I manually generated (and is now available in the Mac profiles repository for all to use!), I scanned the memory image for references to the above files using the mac_yarascan plugin as shown below.

$ ./volatility_2.6_mac64_standalone --plugins=/Users/jp/Projects/volatility/volatility/plugins/ --profile=Mac10_12_2_x64x64 -f ~/Projects/Memory_Captures/mem.raw mac_yarascan -A -y ~/Projects/Yara/ds_store.yar

I’m not going to lie to you, this ran for the better part of a day on my 2015 Core i5 MBA against an 8GB memory image. So, don’t expect speedy results from running this plugin.

=== Begin Sidebar ===

In comparison to the above, running Yara against the image took just under 3 minutes. However, the two tools are doing different things (to an extent) and producing different results.

Yara simply scanned the image and output the location(s) within memory where each hit was identified:

$ yara -s -p 8 ~/Projects/Yara/ds_store.yar mem.raw
0x1fd2cb6:$s1: BlockBlock_Installer.app
0xd380f26:$s1: BlockBlock_Installer.app
0x27b8d40c:$s1: BlockBlock_Installer.app
0x45dbd248:$s1: BlockBlock_Installer.app
0x46fa6195:$s1: BlockBlock_Installer.app

0x17efc11:$s2: canon-mx920-19_1_0a-ea11.dmg
0x1908620:$s2: canon-mx920-19_1_0a-ea11.dmg
0xd11f441:$s2: canon-mx920-19_1_0a-ea11.dmg
0x1831f101:$s2: canon-mx920-19_1_0a-ea11.dmg
0x42748dd1:$s2: canon-mx920-19_1_0a-ea11.dmg

0x1fd2c26:$s3: FileZilla-Installer.app
0x4dbb356:$s3: FileZilla-Installer.app
0x5c208e5:$s3: FileZilla-Installer.app
0x141bbf81:$s3: FileZilla-Installer.app
0x22be06cc:$s3: FileZilla-Installer.app
...
0x1fd22c6:$s4: SpotifyInstaller.zip
0xc5030c6:$s4: SpotifyInstaller.zip
0x41c95ee6:$s4: SpotifyInstaller.zip
0x54c9c5d9:$s4: SpotifyInstaller.zip
0x54c9c5f6:$s4: SpotifyInstaller.zip

These hits can be verified and further investigated by hexdump:

$ hexdump -C -s 0x1fd2cb6 -n 100 mem.raw
01fd2cb6 42 6c 6f 63 6b 42 6c 6f 63 6b 5f 49 6e 73 74 61 |BlockBlock_Insta|
01fd2cc6 6c 6c 65 72 2e 61 70 70 20 0a ad be b5 29 2e a8 |ller.app ....)..|
01fd2cd6 dd ba 80 f9 45 00 00 60 00 00 00 00 00 00 00 00 |....E..`........|
01fd2ce6 00 00 00 00 00 00 00 00 00 00 a4 fa ef 3d 33 33 |.............=33|
01fd2cf6 eb 3f 00 00 00 00 00 00 00 00 08 00 00 00 00 00 |.?..............|
01fd2d06 00 00 40 04 00 00 00 00 00 00 1b 04 00 00 00 00 |..@.............|
01fd2d16 00 00 00 98 |....|

While this is great and shows us that these historical references exist across a ton of areas within memory, it doesn’t really help us identify any useful context. Nonetheless, Yara is an incredibly useful tool that has a variety of purposes, so it’s just a matter of knowing your tools and which one you need to do a given job.

=== End Sidebar ===

Volatility’s mac_yarascan output provided a lot of useful results with context. Just what we needed! Below is a sample entry:

Task: lsd pid 230 rule ds_store_searches addr 0x10c0462bc
0x000000010c0462bc 46 69 6c 65 5a 69 6c 6c 61 2d 49 6e 73 74 61 6c FileZilla-Instal
0x000000010c0462cc 6c 65 72 2e 61 70 70 00 39 31 30 2e 2f 56 6f 6c ler.app.910./Vol
0x000000010c0462dc 75 6d 65 73 2f 52 65 63 6f 76 65 72 79 20 48 44 umes/Recovery.HD
0x000000010c0462ec 00 46 46 2d 2f 70 72 69 76 61 74 65 2f 76 61 72 .FF-/private/var
0x000000010c0462fc 2f 74 6d 70 2f 4d 50 50 5a 4c 50 52 50 00 69 6f /tmp/MPPZLPRP.io
0x000000010c04630c 6b 69 74 2e 2f 64 65 76 2f 64 69 73 6b 30 73 31 kit./dev/disk0s1
0x000000010c04631c 00 6c 79 00 2f 70 72 69 76 61 74 65 2f 74 6d 70 .ly./private/tmp
0x000000010c04632c 2f 44 64 6b 4a 57 79 6f 65 00 70 6c 2f 64 65 76 /DdkJWyoe.pl/dev
0x000000010c04633c 2f 64 69 73 6b 32 73 31 00 72 61 67 2f 56 6f 6c /disk2s1.rag/Vol
0x000000010c04634c 75 6d 65 73 2f 44 6f 63 73 00 6c 6f 2f 64 65 76 umes/Docs.lo/dev
0x000000010c04635c 2f 64 69 73 6b 32 73 31 00 00 00 00 2f 70 72 69 /disk2s1..../pri
0x000000010c04636c 76 61 74 65 2f 74 6d 70 2f 52 78 53 54 49 64 78 vate/tmp/RxSTIdx
0x000000010c04637c 41 00 63 73 2f 64 65 76 2f 64 69 73 6b 32 73 31 A.cs/dev/disk2s1
0x000000010c04638c 00 61 62 6c 2f 56 6f 6c 75 6d 65 73 2f 44 6f 63 .abl/Volumes/Doc
0x000000010c04639c 73 00 6c 6f 2f 64 65 76 2f 64 69 73 6b 32 73 31 s.lo/dev/disk2s1
0x000000010c0463ac 00 72 61 67 2f 55 73 65 72 73 2f 6a 70 2f 44 6f .rag/Users/jp/Do

While it identified references to the above files in a multitude of processes (a surprising amount, actually, that may need to be revisited in future research), we are trying to identify references to all of these files within a common process/context. So, the next step is to do a bit of analysis to see which process/context had at least 4 hits (because we had 4 file names to find). A bit of command line kung fu (gotta plug Hal Pomeranz‘s site, though *cough* he needs some new entries *cough*) yields the following:

$ grep 'Task:' ../Memory_Captures/mem.raw_yara_output | awk '{print $2}' | sort | uniq -c | sort -r
43 Finder
10 BlockBlock
6 mds
5 lsd
5 Google
2 loginwindow
2 coreservicesd
1 system_installd
1 sharingd
1 revisiond
1 pbs
1 mobileassetd
1 mdworker
1 crashpad_handler
1 configd
1 com.apple.geod
1 apsd
1 airportd
1 XprotectService
1 UserEventAgent
1 SubmitDiagInfo
1 Microsoft

We can weed out anything with fewer than 4 entries, leaving Google, lsd, mds, BlockBlock, and Finder. The Google, lsd, and mds processes only had entries for FileZilla, so those are ruled out. BlockBlock is actually an awesome app by Patrick Wardle at Objective-See that watches for any applications that attempt persistence. So, it is no surprise that all of these entries exist within its memory space, as it has overseen each installation and alerted me if/when persistence (auto-start) mechanisms were implemented. Usefulness aside, it’s not our culprit here.
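To verify which file names actually hit within a given process, grabbing the first hexdump line after each match is enough, since mac_yarascan begins its dump at the matched string. For example (against the same output file used above):

$ grep -A1 'Task: lsd' ../Memory_Captures/mem.raw_yara_output
$ grep -A1 'Task: mds' ../Memory_Captures/mem.raw_yara_output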

Now, we are left with Finder. So, let’s see what entries it found within the Finder process on my machine:

$ grep -A16 'Task: Finder' ../Memory_Captures/mem.raw_yara_output
Task: Finder pid 236 rule ds_store_searches addr 0x10ef4e2bc
0x000000010ef4e2bc 46 69 6c 65 5a 69 6c 6c 61 2d 49 6e 73 74 61 6c FileZilla-Instal
0x000000010ef4e2cc 6c 65 72 2e 61 70 70 00 39 31 30 2e 2f 56 6f 6c ler.app.910./Vol
0x000000010ef4e2dc 75 6d 65 73 2f 52 65 63 6f 76 65 72 79 20 48 44 umes/Recovery.HD
0x000000010ef4e2ec 00 46 46 2d 2f 70 72 69 76 61 74 65 2f 76 61 72 .FF-/private/var
0x000000010ef4e2fc 2f 74 6d 70 2f 4d 50 50 5a 4c 50 52 50 00 69 6f /tmp/MPPZLPRP.io
0x000000010ef4e30c 6b 69 74 2e 2f 64 65 76 2f 64 69 73 6b 30 73 31 kit./dev/disk0s1
0x000000010ef4e31c 00 6c 79 00 2f 70 72 69 76 61 74 65 2f 74 6d 70 .ly./private/tmp
0x000000010ef4e32c 2f 44 64 6b 4a 57 79 6f 65 00 70 6c 2f 64 65 76 /DdkJWyoe.pl/dev
0x000000010ef4e33c 2f 64 69 73 6b 32 73 31 00 72 61 67 2f 56 6f 6c /disk2s1.rag/Vol
0x000000010ef4e34c 75 6d 65 73 2f 44 6f 63 73 00 6c 6f 2f 64 65 76 umes/Docs.lo/dev
0x000000010ef4e35c 2f 64 69 73 6b 32 73 31 00 00 00 00 2f 70 72 69 /disk2s1..../pri
0x000000010ef4e36c 76 61 74 65 2f 74 6d 70 2f 52 78 53 54 49 64 78 vate/tmp/RxSTIdx
0x000000010ef4e37c 41 00 63 73 2f 64 65 76 2f 64 69 73 6b 32 73 31 A.cs/dev/disk2s1
0x000000010ef4e38c 00 61 62 6c 2f 56 6f 6c 75 6d 65 73 2f 44 6f 63 .abl/Volumes/Doc
0x000000010ef4e39c 73 00 6c 6f 2f 64 65 76 2f 64 69 73 6b 32 73 31 s.lo/dev/disk2s1
0x000000010ef4e3ac 00 72 61 67 2f 55 73 65 72 73 2f 6a 70 2f 44 6f .rag/Users/jp/Do
--
Task: Finder pid 236 rule ds_store_searches addr 0x6000001fd248
0x00006000001fd248 42 6c 6f 63 6b 42 6c 6f 63 6b 5f 49 6e 73 74 61 BlockBlock_Insta
0x00006000001fd258 6c 6c 65 72 2e 61 70 70 2f 1b 00 00 00 00 00 00 ller.app/.......
0x00006000001fd268 00 63 6f 6d 2e 6f 62 6a 65 63 74 69 76 65 53 65 .com.objectiveSe
0x00006000001fd278 65 2e 42 6c 6f 63 6b 42 6c 6f 63 6b 04 00 20 01 e.BlockBlock....
0x00006000001fd288 00 00 00 00 8e 00 10 00 02 00 00 00 c4 e5 c7 1d ................
0x00006000001fd298 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0x00006000001fd2a8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0x00006000001fd2b8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0x00006000001fd2c8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0x00006000001fd2d8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0x00006000001fd2e8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0x00006000001fd2f8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0x00006000001fd308 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0x00006000001fd318 00 00 00 00 00 00 00 00 47 09 00 00 00 00 00 00 ........G.......
0x00006000001fd328 00 00 00 00 00 00 00 00 47 0a 00 00 00 00 00 00 ........G.......
0x00006000001fd338 47 0b 00 00 00 00 00 00 47 0c 00 00 00 00 00 00 G.......G.......
--
Task: Finder pid 236 rule ds_store_searches addr 0x60000044e501
0x000060000044e501 63 61 6e 6f 6e 2d 6d 78 39 32 30 2d 31 39 5f 31 canon-mx920-19_1
0x000060000044e511 5f 30 61 2d 65 61 31 31 2e 64 6d 67 00 00 00 71 _0a-ea11.dmg...q
0x000060000044e521 91 d8 c5 ff ff 1d 00 8c 07 00 00 01 00 00 00 15 ................
0x000060000044e531 64 6e 67 2e 61 64 6f 62 65 2e 6e 69 6b 6f 6e 64 dng.adobe.nikond
0x000060000044e541 34 2e 63 61 6d 00 00 00 00 00 00 00 00 00 00 71 4.cam..........q
0x000060000044e551 91 d8 c5 ff ff 1d 00 8c 07 00 00 01 00 00 00 14 ................
0x000060000044e561 70 65 66 2e 70 65 6e 74 61 78 2e 37 37 39 37 30 pef.pentax.77970
0x000060000044e571 2e 63 61 6d 00 00 00 00 00 00 00 00 00 00 00 71 .cam...........q
0x000060000044e581 91 d8 c5 ff ff 1d 00 8c 07 00 00 01 00 00 00 10 ................
0x000060000044e591 61 72 77 2e 73 6f 6e 79 2e 32 39 36 2e 63 61 6d arw.sony.296.cam
0x000060000044e5a1 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 11 ................
0x000060000044e5b1 9c d8 c5 ff ff 1d 00 01 00 00 00 00 00 00 00 00 ................
0x000060000044e5c1 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 02 ................
0x000060000044e5d1 00 00 00 00 00 00 00 d0 65 01 00 00 60 00 00 71 ........e...`..q
0x000060000044e5e1 91 d8 c5 ff ff 1d 00 8c 07 00 00 01 00 00 00 11 ................
0x000060000044e5f1 6e 65 66 2e 6e 69 6b 6f 6e 2e 64 39 30 2e 63 61 nef.nikon.d90.ca
--
Task: Finder pid 236 rule ds_store_searches addr 0x600000a48d41
0x0000600000a48d41 53 70 6f 74 69 66 79 49 6e 73 74 61 6c 6c 65 72 SpotifyInstaller
0x0000600000a48d51 2e 7a 69 70 00 00 00 00 00 00 00 00 00 00 00 51 .zip...........Q
0x0000600000a48d61 93 d8 c5 ff ff 1d 00 c3 14 00 00 01 00 00 00 48 ...............H
0x0000600000a48d71 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 ................
0x0000600000a48d81 00 00 00 00 00 00 00 00 ab 1f 00 00 60 00 00 71 ............`..q
0x0000600000a48d91 91 d8 c5 ff ff 1d 00 8c 07 00 00 01 00 00 00 17 ................
0x0000600000a48da1 64 6e 67 2e 61 64 6f 62 65 2e 63 61 6e 6f 6e 65 dng.adobe.canone
0x0000600000a48db1 6f 73 6d 2e 63 61 6d 00 00 00 00 00 00 00 00 e0 osm.cam.........
0x0000600000a48dc1 41 db c5 ff 7f 00 00 01 00 00 00 00 00 00 00 c0 A...............
0x0000600000a48dd1 be 43 00 00 60 00 00 d8 be 43 00 00 60 00 00 d8 .C..`....C..`...
0x0000600000a48de1 be 43 00 00 60 00 00 00 00 00 00 00 00 00 00 71 .C..`..........q
0x0000600000a48df1 91 d8 c5 ff ff 1d 00 8c 07 00 00 0b 00 00 00 13 ................
0x0000600000a48e01 49 6e 73 74 61 6c 6c 20 53 70 6f 74 69 66 79 2e Install.Spotify.
0x0000600000a48e11 61 70 70 00 00 00 00 00 00 00 00 00 00 00 00 11 app.............
0x0000600000a48e21 9c d8 c5 ff ff 1d 00 01 00 00 00 00 00 00 00 00 ................
0x0000600000a48e31 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 02 ................
...

Sure enough, it looks like we’ve likely found the harborer of our historical entries! And it makes sense, as I surmise Finder is the process responsible for creating these .DS_Store files when files are moved to the Trash. How might we be able to find out what process is responsible for creating this file? OSX has a nice little utility called fs_usage that can monitor all sorts of filesystem, disk, and I/O activity. For our purposes/testing here, we are going to filter on filesystem events and grep for the .Trash/.DS_Store file we care about while I go into Finder and delete (send to the Trash) a file:

$ sudo fs_usage -w -f filesystem | grep ".Trash/.DS_Store"
09:15:48.854558 fsgetpath /Users/jp/.Trash/.DS_Store 0.000010 Finder.2394967
09:15:49.357252 fsgetpath /Users/jp/.Trash/.DS_Store 0.000008 Finder.2395226
09:15:53.751509 getattrlist /Users/jp/.Trash/.DS_Store 0.000041 Finder.2395226
09:15:53.751556 fsgetpath /Users/jp/.Trash/.DS_Store 0.000008 Finder.2395226
09:15:53.751576 getattrlist /Users/jp/.Trash/.DS_Store 0.000019 Finder.2395226
09:15:53.751589 fsgetpath /Users/jp/.Trash/.DS_Store 0.000005 Finder.2395226
09:15:53.751717 fsgetpath /Users/jp/.Trash/.DS_Store 0.000007 Finder.2395226
09:15:53.751738 open F=21 (_W____) /Users/jp/.Trash/.DS_Store 0.000019 Finder.2395226
09:15:53.752898 HFS_update (__M_____) /Users/jp/.Trash/.DS_Store 0.000009 Finder.2395226
09:15:53.752905 HFS_update (__MN_c_m) /Users/jp/.Trash/.DS_Store 0.000003 Finder.2395226
09:15:53.752929 HFS_update (___N____) /Users/jp/.Trash/.DS_Store 0.000004 Finder.2395226
09:15:53.752956 HFS_update (___N_c_m) /Users/jp/.Trash/.DS_Store 0.000004 Finder.2395226
09:15:53.753005 HFS_update (_FMN_c_m) /Users/jp/.Trash/.DS_Store 0.000004 Finder.2395226
09:15:53.753157 getattrlist /Users/jp/.Trash/.DS_Store 0.000016 Finder.2395226
09:15:53.754084 WrData[AN] D=0x043832a0 B=0x5000 /dev/disk1 /Users/jp/.Trash/.DS_Store 0.001077 W Finder.2395226
09:15:53.804058 fsgetpath /Users/jp/.Trash/.DS_Store 0.000005 Finder.2395372
09:15:54.293014 lstat64 /Users/jp/.Trash/.DS_Store 0.000030 fseventsd.2395383

Sure enough, there it is. We can see Finder (re)creating the .Trash/.DS_Store file. Pretty cool, huh?

Now, why these entries are re-populated instead of a blank/zeroed file simply being created, we don’t yet quite know (answering that would take some more intensive inspection of the Finder code itself). Nonetheless, the Finder process definitely looks like a solid candidate responsible for (re)storing these historical entries.

For even further testing and corroboration of our above findings (additional corroboration is ALWAYS a good idea in both investigations and research), we can use Volatility’s strings plugin. For most effective use, this plugin actually relies on a strings output file (fed as input to the plugin) with each string entry prepended with the decimal offset at which it was found (e.g., “102515331 file.dmg”). Keep in mind that in addition to the standard ASCII strings, we will also want to extract the Unicode 16-bit big-endian strings.

Here we will use the GNU strings utility (gstrings on OSX via brew) to acquire this needed output. As a bit of a pro-tip, below is a great way to extract both ASCII and Unicode (16-bit Big Endian) in parallel using a FIFO queue:

$ mkfifo part-out
$ gstrings -a -td part-out > Memory_Captures/mem.raw.strings.ascii &

[1] 40780
$ cat Memory_Captures/mem.raw | tee part-out | gstrings -a -td -eb > Memory_Captures/mem.raw.strings.be

Once completed, let’s check out the format and see what it found for both the ASCII and Unicode Big-Endian strings:

$ sift "canon-mx920-19_1_0a-ea11.dmg" Memory_Captures/mem.raw.strings.ascii
25099281 canon-mx920-19_1_0a-ea11.dmg
26248704 ;/Volumes/Untitled/.Trashes/501/canon-mx920-19_1_0a-ea11.dmg
219280449 canon-mx920-19_1_0a-ea11.dmg
405926145 canon-mx920-19_1_0a-ea11.dmg
1114934737 canon-mx920-19_1_0a-ea11.dmg
1422508032 e: canon-mx920-19_1_0a-ea11.dmg
1913326497 canon-mx920-19_1_0a-ea11.dmg
4364841040 File: canon-mx920-19_1_0a-ea11.dmg
4454621776 File: canon-mx920-19_1_0a-ea11.dmg
4897694289 canon-mx920-19_1_0a-ea11.dmg
5379226560 ;/Volumes/Untitled/.Trashes/501/canon-mx920-19_1_0a-ea11.dmg
6315679704 File: canon-mx920-19_1_0a-ea11.dmg
7262910545 canon-mx920-19_1_0a-ea11.dmg
7624221584 File: canon-mx920-19_1_0a-ea11.dmg
7720217424 File: canon-mx920-19_1_0a-ea11.dmg
7720218576 File: canon-mx920-19_1_0a-ea11.dmg
8317281252 File: canon-mx920-19_1_0a-ea11.dmg
8317281288 File: canon-mx920-19_1_0a-ea11.dmg
8317283615 File: canon-mx920-19_1_0a-ea11.dmg
8317283651 File: canon-mx920-19_1_0a-ea11.dmg
8555763408 File: canon-mx920-19_1_0a-ea11.dmg
8800666640 File: canon-mx920-19_1_0a-ea11.dmg
8876241680 File: canon-mx920-19_1_0a-ea11.dmg
9351045649 canon-mx920-19_1_0a-ea11.dmg
9821317328 File: canon-mx920-19_1_0a-ea11.dmg
10051278021 $File: canon-mx920-19_1_0a-ea11.dmg
10051278106 $File: canon-mx920-19_1_0a-ea11.dmg
10058241281 canon-mx920-19_1_0a-ea11.dmg
10166913457 canon-mx920-19_1_0a-ea11.dmg
10166914465 canon-mx920-19_1_0a-ea11.dmg
10215457371 File: canon-mx920-19_1_0a-ea11.dmg
10215457407 File: canon-mx920-19_1_0a-ea11.dmg
10215459734 File: canon-mx920-19_1_0a-ea11.dmg
10215459770 File: canon-mx920-19_1_0a-ea11.dmg

And, now for Unicode Big-Endian:

$ sift "canon-mx920-19_1_0a-ea11.dmg" Memory_Captures/mem.raw.strings.be
5128627554 canon-mx920-19_1_0a-ea11.dmg
5128627664 canon-mx920-19_1_0a-ea11.dmg
5128627732 canon-mx920-19_1_0a-ea11.dmg
10079999330 canon-mx920-19_1_0a-ea11.dmg
10079999440 canon-mx920-19_1_0a-ea11.dmg
10079999508 canon-mx920-19_1_0a-ea11.dmg
10090625584 ile:///Users/jp/Downloads/canon-mx920-19_1_0a-ea11.dmg}

As we saw before when running our Yara scans against memory, we find many resident artifacts of our file name strings. There are fewer hits in our Unicode output, but possibly useful findings nonetheless. No surprise here. But, let’s feed each of these into Volatility’s strings plugin to get some more context.

$ ./volatility_2.6_mac64_standalone --plugins=/Users/jp/Projects/volatility/volatility/plugins/ --profile=Mac10_12_2_x64x64 -f ~/Projects/Memory_Captures/mem.raw mac_strings -s ~/Projects/Memory_Captures/mem.raw.strings.ascii

And, now we wait… one day… two days… until Schrödinger’s cat got the best of me and I killed the process. After receiving a pro-tip from @attrc to filter down the strings file to just what we cared about (the 4 file names we put into our Yara rules file), I whittled it down to approximately 288 string entries (down from over 45 million – gah!).
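That whittling can be done with a single grep along these lines (a sketch; the patterns simply mirror the strings from the Yara rule above):

$ grep -E "BlockBlock_Installer.app|canon-mx920-19_1_0a-ea11.dmg|FileZilla-Installer.app|SpotifyInstaller.zip" Memory_Captures/mem.raw.strings.ascii > Memory_Captures/mem.raw.strings.ascii_FILTERED

With the filtered strings file in hand, I re-ran the plugin: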

$ ./volatility_2.6_mac64_standalone --plugins=/Users/jp/Projects/volatility/volatility/plugins/ --profile=Mac10_12_2_x64x64 -f ~/Projects/Memory_Captures/mem.raw mac_strings -s ~/Projects/Memory_Captures/mem.raw.strings.ascii_FILTERED

…and waited another day before killing it and instead running it on a much faster desktop machine. Alas, it still took over a day to run on a 2.8GHz Core i7 with 32GB of memory, and yielded the following output:

25099281 [kernel:feacc17efc11] canon-mx920-19_1_0a-ea11.dmg
26248704 [kernel:feacc1908600] ;/Volumes/Untitled/.Trashes/501/canon-mx920-19_1_0a-ea11.dmg
33366720 [kernel:feacc1fd22c0] File: SpotifyInstaller.zip
33369120 [kernel:feacc1fd2c20] File: FileZilla-Installer.app
33369264 [kernel:feacc1fd2cb0] File: BlockBlock_Installer.app
81507152 [kernel:feacc4dbb350] File: FileZilla-Installer.app
96602320 [kernel:feacc5c208d0] +/Users/jp/Downloads/FileZilla-Installer.ap
...
10215459594 [kernel:feaf20e38b0a] File: BlockBlock_Installer.app
10215459626 [kernel:feaf20e38b2a] File: BlockBlock_Installer.app
10215459658 [kernel:feaf20e38b4a] File: BlockBlock_Installer.app
10215459734 [kernel:feaf20e38b96] File: canon-mx920-19_1_0a-ea11.dmg
10215459770 [kernel:feaf20e38bba] File: canon-mx920-19_1_0a-ea11.dmg
10230120017 [kernel:feaf21c33e51] BlockBlock_Installer.app

“kernel”? That’s it? No process association?

Well, that’s unfortunately less than useful for us. According to the wiki entry for the strings plugin, “For a given image and a file with lines of the form <offset>:<string>, or <offset> <string>, output the corresponding process and virtual addresses where that string can be found.” In reading that, I expected output similar to (or better than) the yarascan plugin in being able to pair the string hit(s) to the associated process. Alas, ’tis not the case.

Nonetheless, we seem to have some very useful findings to satisfy hypothesis #2.

Conclusion

In conclusion, while hypothesis #2 looks rather satisfied by our testing, we are still left with the following questions:

1) Why are these entries re-populated when a .DS_Store file is re-created?
2) What causes this behavior?
3) How is this information pulled into the re-created .DS_Store file?
4) Why are only certain files resident and not every file ever deleted from the machine?*
*My testing shows that the entries are purged upon reboot, so this last question is mostly answered. Though, we still don’t know why it happens.

If anyone has any insight into this, I would be INCREDIBLY interested to hear about it.

/JP

Mac Dumpster Diving – Identifying Deleted File References in the Trash (.DS_Store) Files – Part 1

If you have ever plugged a USB drive into a Mac, done some things, then plugged it into a Windows system, you have no doubt seen (if you have viewing of hidden files enabled) various “.DS_Store” files (among others) strewn throughout the folders on the drive. Though essentially useless to a Windows system, they do in fact serve a particular purpose on an HFS+ file system.

While I won’t re-invent the wheel on describing “What is a .DS_Store File?” (here as well), I would like to highlight its possible use for DFIR in containing/referencing artifacts that may be useful to investigations – traces of deleted files, with filenames and sometimes paths!

In a nutshell, the .DS_Store file stores metadata used by Finder for folder-specific display options such as window placement, layout, custom icons, background, etc. Thus, these .DS_Store files are (theoretically) created in every folder that Finder accesses, even remote network shares and external devices. Are those annoying .DS_Store files you see in Windows on your FAT32-formatted thumb drive making more sense now?

A part of this metadata is the filename, which got me to thinking… I wonder whether or not any traces get left behind when a file is moved or deleted.

For this post/research, I focused solely on the deletion aspect of when a user deletes a file through Finder.

In testing on my systems (OS X 10.10.5 and macOS Sierra 10.12.2), when a file gets “deleted” through Finder (not via “rm” on the command line, that’s a very different story), it first gets moved to the user’s ~/.Trash/ folder. If at least one file already exists within the user’s Trash, an entry for the yet-to-be-deleted file is added to the existing ~/.Trash/.DS_Store file denoting the full path on disk where the file resided before being moved to the Trash. This entry is part of how the “Put Back” feature works. If no files currently exist in the Trash (due to the user previously emptying the trash), I assumed (more on this in a bit) a new .DS_Store file would be created (“new” meaning a clear/empty file) to again begin storing entries for “Put Back”. Upon emptying the trash (via either the “Empty Trash” or “Secure Empty Trash” option in Finder for pre-Sierra systems), the files are deleted (according to the deletion method associated with each action) from the ~/.Trash/ folder and the ~/.Trash/.DS_Store file is also “deleted” (stay tuned for why I put this in quotes). Here is a great little writeup on the HFS+ volume structure and what happens “When Mac deletes it!”.

At this point, since all of the Trash source files are deleted upon emptying the Trash, we would assume that the .DS_Store file and all of its entries would be deleted as well. But, is this the case?

Answer: Not Quite!

In my testing, while the source data files within the ~/.Trash/ folder appear to be reliably deleted (short of carving the disk), various file and path entries within the ~/.Trash/.DS_Store file do not appear to be deleted! In fact, when you move another file to the trash, the ~/.Trash/.DS_Store file is re-created and historical entries* are re-populated into the file! Even if you “Put Back” the file(s), the associated .DS_Store file and entries remain. WIN!


*Note: These appeared to only be files I’ve deleted since the last reboot of my machine. Rebooting the machine seems to finally remove all historical entries. Various hypotheses of why/how this happens and where these entries come from will be tested later in this post.

We now have the opportunity to identify references to historical file deletions (sometimes with full path)! This doesn’t just apply to the Trash’s .DS_Store files, either. This applies to any given directory’s .DS_Store file that may contain (or have contained) references to files that existed within it.

Pretty AWESOME, right? How many of you are already putting together the “find” command to identify all the .DS_Store files on your systems?

*Hint: # find / -name .DS_Store

But, we kinda started this whole story at the end, well after I had finished muddling my way through researching and experimenting to find out how to actually parse these .DS_Store files. So, let’s rewind a bit…

Upon first look at a .DS_Store file, it isn’t exactly straightforward, and it apparently can’t be opened with any native system tool or application. There is no native “ds_store_viewer” utility that simply parses the file information from the command line. So, how would we even go about trying to figure out how to parse this thing? After all, the information it contains does us no good if we can’t parse and extract it!

Your initial thought may be “strings!” That’s a solid idea to start with, so let’s see what that yields…

[jp@jp-mba (:) ~]$ strings -a ~/.Trash/.DS_Store
Bud1
pptbNustr
gptbLustr
xptbLustr
xptbNustr
gptbNustr
...
DSDB
gptbNustr
gptbLustr
gptbNustr
gptbLustr
gptbNustr
fptbLustr

Well, that was less than useful. Oh, wait… maybe they’re Unicode strings instead of ASCII. Let’s see what the option is for Unix strings to search for Unicode strings instead of ASCII:

[jp@jp-mba (:) ~]$ man strings

At this point you may already know what I’m about to say – the BSD strings utility does NOT have the capability to search for Unicode strings. Fail.

So, you can go a few different ways here:

  1. Stick with native utilities
  2. Install/use a third-party utility that can identify Unicode strings (particularly big-endian Unicode)
  3. Install/use a third-party utility that can directly read .DS_Store format files

Native Utilities

So, what else might exist that we can use to view strings?

When in doubt, Hex it out!

I typically make use of two native hex viewers – hexdump and xxd. They are both useful in different ways, but we’ll start with hexdump.

Using hexdump, you can dump hex+ASCII by doing the following:

$ hexdump -C

[jp@jp-mba (:) ~]$ hexdump -C ~/.Trash/.DS_Store
00000000 00 00 00 01 42 75 64 31 00 00 38 00 00 00 08 00 |....Bud1..8.....|
00000010 00 00 38 00 00 00 10 0c 00 00 02 09 00 00 20 0c |..8........... .|
00000020 00 00 30 0b 00 00 00 00 00 00 00 00 00 00 08 00 |..0.............|
00000030 00 00 08 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000040 00 00 00 00 00 00 00 03 00 00 00 01 00 00 00 4e |...............N|
00000050 00 00 00 04 00 00 10 00 00 65 00 61 00 73 00 65 |.........e.a.s.e|
00000060 00 5f 00 44 00 00 00 00 00 00 00 00 00 00 00 00 |._.D............|
00000070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000200 00 00 00 00 00 00 00 02 00 00 00 02 00 00 00 04 |................|
00000210 00 00 00 30 00 50 00 6c 00 65 00 61 00 73 00 65 |...0.P.l.e.a.s.e|
00000220 00 5f 00 44 00 6f 00 63 00 75 00 53 00 69 00 67 |._.D.o.c.u.S.i.g|
00000230 00 6e 00 5f 00 74 00 68 00 69 00 73 00 5f 00 64 |.n._.t.h.i.s._.d|

Here we see the notable “Bud1” header followed by readable text. Score! But, how do we extract JUST the readable text in some effective way? You can mess around with hexdump to try to make sense of the output formats, or you could do like I did and get so overwhelmed at one point that you just use xxd to create this incredibly unpretty, certainly less than efficient, convoluted, but “working” one-liner:

$ xxd -p <path/to/.DS_Store> | sed 's/00//g' | tr -d '\n' | sed 's/\([0-9A-F]\{2\}\)/0x\1 /g' | xxd -r -p | strings | sed 's/ptb[LN]ustr//g'

Voilà. Unicode strings output using only the built-in utilities. It is very ugly and it is certainly separating at points/lines where it should not, but hey… you get what you get. At least you can more legibly make out filenames and paths that could get you somewhere.

This is an ugly hack. I do not recommend it, but sometimes ugly is better than nothing. YMMV.

Note: I would be very interested if someone who is WAY more versed in hexdump output formatting would create a much simpler way of doing the above solely using the hexdump utility.

Third-Party Utilities

GNU Strings

Believe it or not, you can actually install various GNU utilities on your Mac via a handy little thing called Homebrew. It just takes a command line one-liner to install and opens your Mac to a world of new and useful utilities called “formulas”. Note that Xcode is a pre-req for installing Homebrew.

For our purposes, we want to install strings, which is a part of the GNU coreutils package. With homebrew installed, all it takes is a “brew install coreutils” and we’re up and running. Do note that various GNU utilities will be prepended with “g” due to naming conflicts. For example, the GNU strings utility must be called/run as “gstrings” (yeah, I laugh a little each time I see that).

Once installed, we now have full GNU strings capabilities, namely for searching big-endian Unicode text, a la the following:

$ gstrings -a -eb

You don’t necessarily need the “-a” option that tells strings “I don’t care whether or not you think it’s a searchable file, do it anyway”, but I add it out of habit of searching files that the system likes to gripe about.
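For example, pointed at the same Trash .DS_Store file from earlier, this should surface the big-endian Unicode file name entries directly:

$ gstrings -a -eb ~/.Trash/.DS_Store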

Using FDB

https://digi.ninja/projects/fdb.php

  1. Enter CPAN shell
    1. $ perl -MCPAN -e shell
  2. Install DSStore
    1. $ cpan[1] > install Mac::Finder::DSStore
  3. Install Switch
    1. $ cpan[1] > install Switch
  4. Run FDB
    1. $ ./fdb.pl --type ds --filename /Users//.Trash/.DS_Store --base_url /Users//

Using ds_store Go Parser

https://github.com/gehaxelt/ds_store

  1. Download and Install Go
    1. Download OS X Package from here: https://golang.org/dl/
  2. Set Go Path in shell
    1. One-time (I set mine as the following but it’s up to you)
      1. $ export GOPATH=~/Projects/Go
    2. Permanent
      1. Place above line in /etc/bashrc
      2. Reload shell “source /etc/bashrc” or close and relaunch terminal
  3. Download ds_store go files
    1. $ go get github.com/gehaxelt/ds_store
  4. Change to the directory of the go project
    1. $ cd $GOPATH/src/github.com/gehaxelt/ds_store
  5. Make a directory for the new project/files (I opted to name mine “dsdump”, but feel free to alter yours) and cd to it
    1. $ mkdir -p bin/dsdump && cd "$_"
  6. (If not already done) Create a .go file (I named mine dsdump.go) and copy/paste the Example Code from https://github.com/gehaxelt/ds_store
    1. $ nano dsdump.go
    2. Copy/paste the Example Code into this file and save it
  7. Build the Go binary
    1. $ go build
  8. Run dsdump
    1. $ ./dsdump -i <path/to/.DS_Store>

**Note: One of the awesome things about Go is its ability to build static binaries (no additional files needed) for a variety of operating systems. For example, if you wanted to build a binary for a Windows x64 system, you would simply run “GOOS=windows GOARCH=amd64 go build -o dsdump.exe”. Then, just copy that to whatever Windows x64 system and run it. Pretty sweet, huh?

(Shout out to Slavik at Demisto for quickly getting me up and running with Go before I spent any time looking at documentation.)

Comparing the .DS_Store Parsing Solutions

At this point in my testing, I don’t think I would stick with only one solution, as each provides its own merits. A hex viewer is an invaluable tool for so many reasons, namely for assisting in identifying unknown structures, artifacts, or items within a given file. Gstrings offers an easy way to search for the appropriate strings with an easily installable pseudo-native utility. Fdb allows the option to specify the “base_url” to prepend its results with the appropriate path, based on the given .DS_Store file’s location. The ds_store Go parser does the job as well and it can be compiled to be portable to any major OS, which can be very handy in a Mac Forensics go-kit of sorts.

Regardless of why/how it occurs (which we’ll address in Part 2 of this post) and what option(s) you choose to parse/extract these items, you may now at least have an additional DFIR investigation method and artifact(s) to identify previously deleted files that are no longer resident on (allocated) disk.

Though we focused solely on .DS_Store files in this post, do note that they are not the only files that can assist in identifying deleted files on a system. There are several other files/areas that should be searched in such investigations; however, I wanted to home in on analysis of these files as it is possibly lesser known (at least in my research and experience).

At any rate, I hope this can be somehow useful in your investigations moving forward! As usual, YMMV, so I’m interested to hear feedback and stories of if/how this works in the field for everyone.

/JP

A Response to “The Cloud is Evil…”

In this post, I would like to respond to the SANS post titled “The Cloud is Evil…“, authored by John McCash. In reading it, I felt rather compelled to dispel the myths/misstatements in the article claiming the cloud is evil for the 5 stated reasons. I can certainly understand that line of thinking, I definitely cannot argue with one’s personal experiences that may have led to such conclusions (I personally have had very lackluster experiences with a lot of lower-tier cloud providers), and I am fairly certain he does not think the cloud is really “evil” per se (he actually states/clarifies that later in the post). Still, I feel a duty to respond with my experiences and facts as compiled from my interest in and dedicated development of IR within AWS (and to some extent Azure).

1/18/17 Update: The link actually appears to be down right now. I wonder if it got taken down or if the site as a whole is just experiencing difficulty. Regardless, the page is Google cached here:

http://webcache.googleusercontent.com/search?q=cache:8YKTxP_iEsQJ:digital-forensics.sans.org/blog/2017/01/17/the-cloud-is-evil+&cd=1&hl=en&ct=clnk&gl=us

*Important Note*: Please PLEASE know that I mean zero personal attack, harm, or ill intent toward the author of the blog post that this one references! If for some reason it does somehow come off that way, I ask that you please know it’s not the case and instead attribute it solely to my passion for IR in the cloud and its usefulness and power to responders. I actually hope to hear from the author to discuss this and learn more about his position and how he came to the stated conclusions, as the priority for the DFIR community is education and healthy discussion, not pointing fingers and/or putting each other down. I much prefer that someone author a contentious post and take the time to feed back into the community for discussion than author nothing at all. I am sure I will do the same at some point (if I haven’t already). So, it is important that, regardless of outcome, we continue to foster participation and discussions within the DFIR community. We’re in this together.

Without further ado…

The referenced post is sub-titled “A Forensicator’s Perspective”, but I have to say my perspective pretty much disagrees with most of the content. Maybe that just means I’m not a “forensicator”? I would like to give the benefit of the doubt and approach this from the perspective that maybe the author has no exposure to or experience with AWS or Azure, which is why such claims are being made without knowledge of their fallacy. Regardless, I’d like to address each of the claims in the article from my own perspective and extensive DFIR experience with AWS (and to some extent Azure).

To begin (before the numbered statements/claims), I found the author’s statement below, made in reference to Troy Larson’s SANS DFIR Summit presentation on “Defending A Cloud“, rather incongruent with my experience:

“The Cloud (Specifically MS Azure), and describes numerous features that can be greatly advantageous to security professionals. Unfortunately, as I discovered upon submitting a question at the end of this talk, most of these features are currently only available to the Provider of the Cloud Service, and not to the Customer.”

I’m not sure what was asked or discussed, but I know for certain this is not the case for AWS. A ton of features useful to IR are available to nearly anyone/everyone. It would be worthwhile to know a bit more context here, and perhaps whether or not Azure mirrors the capabilities AWS provides. From my experience, they both have very similar features and capabilities, just different ways of going about them. So, I would be very surprised to learn that Azure provides minimal features useful to security professionals.

At any rate, on to the numbered statements/claims.

“1. You can’t Forensicate the Virtual Hosts.”

  • “Even in cases of Infrastructure as a Service, if the customer desires these capabilities, he will typically have to build the associated forensic imaging support directly into his host configurations, rather than leveraging the intrinsic rapid access & direct snapshotting capabilities available to the Cloud provider.”

You absolutely can “forensicate” virtual hosts within AWS, and very easily. In fact, I contend that it is actually easier to acquire and analyze a system within AWS than it is in almost any on-premises environment. Specific to AWS, all that is involved in acquiring a system is creating a Snapshot of its EBS volume(s), which gives you a block-level, point-in-time image of the disk. From there, you create a new volume from that snapshot and then attach the new volume to your forensic host, mounting it read-only. It then appears just as any other physical drive would to a forensic host, upon which you can go on your merry way with analysis right within the cloud. All of this typically takes only minutes to perform, with typically no immediate need to download or transfer data outside of the cloud. No waiting hours for dd to complete and the image to transfer to another system (maybe even a day or so at this point, depending on your network speed). It’s literally as simple as point, click (or execute a couple of commands), and begin analysis.
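
For those who prefer the command line, the same acquisition flow can be sketched with the AWS CLI. Treat the following as a hedged outline rather than a turnkey procedure; all of the IDs, the availability zone, and the device names are placeholders, and the exact device naming on the forensic host will vary by instance type:

# 1. Snapshot the suspect instance's volume
$ aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "IR - suspect root volume"
# 2. Create a new volume from that snapshot in the forensic instance's availability zone
$ aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
# 3. Attach the new volume to the forensic instance
$ aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sdf
# 4. On the forensic instance, mount the attached volume read-only and begin analysis
$ sudo mkdir -p /mnt/evidence && sudo mount -o ro /dev/xvdf1 /mnt/evidence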

“2. You typically have no access to logs of administrative access to the Virtual Environment.”

  • “The customer may, or may not have access to logs of the activity performed using administrative accounts provided to him, and those logs may or may not contain sufficient detail to determine what actions were performed, and from what originating IP addresses.”

AWS, by default, logs API access to/from most (if not all) of its resources, namely the console and EC2 Instances (the systems/machines/hosts you provision and use), which are in scope for this discussion. You have access to all of those logs (assuming your IAM role/policy grants you the privileges to do so), not to mention the capability to log further specifics about a machine’s performance, actions performed within/on the host itself, and a multitude of other items otherwise not easily achievable (or nearly unachievable) in many other environments. These systems are in no way black boxes of unknown activity. I actually contend that more logging is enabled by default within AWS than in most on-premises solutions I’ve seen.
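
As a hedged illustration of just how accessible this is, the API activity in question is recorded by CloudTrail and can be queried right from the CLI (the username and dates below are placeholders):

$ aws cloudtrail lookup-events --lookup-attributes AttributeKey=Username,AttributeValue=jdoe --start-time 2017-01-01T00:00:00Z --end-time 2017-01-18T00:00:00Z --max-results 50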

“3. You have no way to monitor traffic to and from the Virtual Environment.”

  • “Similarly to #1 above, in the case of IaaS, it’s possible to set up netflow or full packet-capture architecture, internal to your virtual hosts, but such configurations are a pain to maintain, scale poorly, and can use significant resources.”

It’s actually very easy to set up netflow-style captures within AWS: they’re called VPC Flow Logs. In fact, it’s so simple that I’d recommend alerting on VPC Flow Log creations, as an attacker could also harness this capability to snoop on what’s going on within your network (again, with great power comes great responsibility and accountability!). Here is a nice little writeup by Jeff Barr from a while back on just how easy it is to begin monitoring all traffic over an interface (or multiple) with a few simple clicks. It scales, it stores, and it uses very minimal resources. I tend to set these up during response to monitor for and identify specific known IOCs/network indicators, a process that is often much more arduous (or impossible) in many on-premises networks and architectures. They’re also great for network/endpoint baselining!
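
To make the “few simple clicks” point concrete, here is roughly what enabling flow logs for an entire VPC looks like from the CLI (the VPC ID, log group name, and IAM role ARN are placeholders, and the role must be permitted to publish to CloudWatch Logs):

$ aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-0123456789abcdef0 --traffic-type ALL --log-group-name ir-vpc-flow-logs --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role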

“4. Because of items two and three, an attacker (who might break into your internal network, but eventually be discovered after operating for a typical period of only 18 months or so) need only perform a single, brief, undetected intrusion. He just has to obtain Cloud access credentials or implant a backdoor in your Cloud software, and then get back out, deleting most of his tracks in the process. Subsequently there is only a miniscule chance of the existence or usage of this access ever being noticed, and he effectively has access to all Cloud-stored data in perpetuity.”

Sure, an attacker can compromise your network with elevated-privilege credentials, but this is really no different from any other deployment or network. The reason DFIR sustains as a huge business is that attackers do this all the time, and it’s nothing at all unique to the cloud. As to deleting their tracks, I contend that (with the right monitoring and logging) your AWS environment can be instrumented to make this incredibly difficult and noisy for an attacker to pull off, if not outright impossible. See Daniel Grzelak’s article on “Disrupting AWS Logging” for some tips on how to use the same tactics to protect against attacker log modification and deletion.

Daniel Grzelak has done a lot of awesome research and blogging on the attacker/PenTester side of the coin for AWS. I highly recommend following his stuff!
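
As one small, concrete example of that kind of hardening (hedged, since trail names and the rest of your CloudTrail configuration will differ), CloudTrail’s log file integrity validation makes after-the-fact tampering with delivered logs evident, and you can verify the delivered files on demand:

$ aws cloudtrail update-trail --name <trail-name> --enable-log-file-validation
$ aws cloudtrail validate-logs --trail-arn arn:aws:cloudtrail:us-east-1:123456789012:trail/<trail-name> --start-time 2017-01-01T00:00:00Z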

“5. The provider’s boilerplate contract agreement will typically absolve them from any possible responsibility for customer damages or losses due to data compromise.”

This is one point on which I totally agree, presuming we are talking about data compromise that occurred due to customer or user fault. However, I look at it in a positive light. It seems reasonable to me to have no preconceived notion that anyone owes me anything and/or is responsible for my irresponsible, uneducated, or otherwise poor behavior in using what they’ve provided.

If the above statement is in reference to data compromise due to AWS operation that is outside of customer/user control, I’m not sure I’ve ever seen or heard that contract clause. So, I would be keen on getting that reference to read myself.

My last point of address is the following quote:

“Cloud providers are not necessarily evil, in and of themselves, but they enable appallingly self-destructive behaviors on the part of their customers. While there are many useful IR capabilities they could make available, there’s an unfortunate lack of demand. The businessperson customers continue to drone the mantra “Cheaper! Cheaper! Cheaper!”, and so the providers acquiesce to these demands, and streamline their offerings, reducing (or failing to develop in the first place) ‘frills’ such as security & IR capabilities.”

To the first sentence, this logic appears to sidestep accountability and is essentially (in my personal view) taking the victim stance of “They let me do this.” To me, allowing a customer the freedom to do what they want is not a detriment or cause for finger pointing when a poor decision is made; rather, it is the basis of what has made, and continues to make, AWS and Azure so successful and attractive to consumers. It is this very thing that, when done properly, promotes self-empowerment and drives accountability. Regardless, I don’t feel you can spend earlier points faulting providers for tying your hands and then turn around and fault them for giving you too much freedom to do bad things.

To the second and third sentences of this claim: I’ve worked very closely with the AWS security team in developing our AWS IR service line. AWS provides a plethora (and, if you’ll please excuse me, I dare say a metric sh*t ton) of incredibly useful tools for security and incident response, the baseline of which is better than most on-premises deployments I’ve encountered. However, they make one thing very clear: AWS does not perform security for you. Rather, AWS gives you the tools and resources you need to either perform it yourself or make an educated decision not to. If you choose “appallingly self-destructive” behavior, it is just that: a choice you have made. Further, it is a choice you have made despite the tools and resources available to you, not because of them. This is why and where accountability is so important.

Ultimately, cloud providers like AWS and Azure do so very much to enable and facilitate all levels of operation and technical (or non-technical) capability, with great freedom. They give YOU the tools to be successful (or to shoot your eye out).

I highly encourage everyone to take some time to learn and understand the offerings available from whichever top-tier cloud provider suits you best. And, if beneficial to your organization, use them and take the utmost advantage. But, whatever you do or do not do, be accountable for your own actions.

“So: Is The Cloud Evil?”

I contend that it is not, not even in the least, with the right provider.
