A system call capture tool
Tracing system calls is a dynamic-analysis technique used in reverse engineering that offers a quick way to understand a program’s behavior.
Corellium makes it easy to trace system calls, using either our proprietary CoreTrace tool or strace, a standard Linux command-line tool. strace is included in Corellium Android virtual devices and is implemented with ptrace.
Corellium's CoreTrace tool is much more powerful. It has several advantages over strace:
CoreTrace is implemented with the help of the hypervisor. Applications can employ anti-debugging techniques to detect and prevent ptrace-based tracing. However, these techniques cannot prevent, or even easily detect, hypervisor-based tracing.
CoreTrace can trace the entire system at once. It’s not limited to a single process.
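The anti-debugging point above can be illustrated with a minimal sketch of one common ptrace-detection trick (assuming a Linux /proc filesystem; this is illustrative code, not CoreTrace or Android internals):

```python
# Minimal ptrace-detection sketch (Linux): a ptrace-based tracer such as
# strace shows up as a nonzero TracerPid in /proc/self/status, whereas
# hypervisor-based tracing leaves no such footprint for the process to find.
def tracer_pid() -> int:
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("TracerPid:"):
                return int(line.split()[1])
    return 0

print("ptrace tracer detected" if tracer_pid() else "no ptrace tracer attached")
```

An application running this check under strace would see a nonzero TracerPid and could refuse to run; under CoreTrace the value stays zero, because the hypervisor observes syscalls without attaching to the process.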
Setting up CoreTrace
To access CoreTrace, open the CoreTrace tab in the device screen:
The CoreTrace UI allows you to start and stop a trace, download the log generated by tracing, and clear the log.
By default, CoreTrace traces all threads in the system. This rapidly produces a huge amount of data. Often you’ll be interested in a particular target. To apply a filter to the results, click "Add a process or thread" to display the Processes dialog:
The Processes dialog displays all processes and threads in the system. To examine the threads inside a process, click the "THREADS" button in the process' row.
To add a filter, click the "ADD" button in the process' or thread’s row, or specify a filter manually. CoreTrace will log an event as long as it matches at least one filter.
There are often many processes running. To more easily find the processes and threads you’re interested in, click the magnifying glass in the top-right corner of the dialog and type a phrase. Only rows that contain the phrase will be displayed.
Then, you are ready to click Start Trace:
Understanding the results
After you have captured the trace (or while the capture is running), you can download the log file. Each line of the log will look like this:
<1> [00248.864651618] ffffff806401e040-0/337:firstname.lastname@example.org/ @00000070efc0b834 read ( fd: 5, buf: 0x6e5f6ec980, count: 4 ) ... @[ 0000006e5f6f9778 0000006e5f6f9840 0000006e5f6f948c 0000006e5f6f7f90 0000006e5f6f7b54 0000006e5f6f7434 00000070efc2088c 00000070efbc0e0c ]
or like this:
<1> [00248.864656648] ffffff806401e040-0/337:email@example.com/ @00000070efc0b834 ... read ( result: 4, buf: 0x6e5f6ec980 -> [s"001e"] )
The fixed line header contains the following information:
<cpu> [time.nsec] threadid-sigid/pid:comm.tid/ @pc
- cpu is the processor core the log comes from,
- time.nsec is the time at which the entry was captured by the hypervisor,
- threadid is the internal kernel thread ID (usually the address of a task or thread structure, depending on whether the OS is Linux or iOS),
- sigid is the signal state (when a signal is delivered, a thread may execute in a different signal state until it finishes handling the signal, then return to the original signal state),
- pid is the process ID (PID of the process on Linux),
- comm is the short process name, which may be the original command name but may also be set by the process itself (Android likes doing that),
- tid is the thread ID (PID of the thread on Linux),
- pc is the PC where the syscall happened in EL0 (userland).
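A header parser can be sketched from the fixed format above (the regex and the sample field values, such as the comm name app_main, are illustrative assumptions, not CoreTrace output):

```python
# Sketch of a parser for the fixed CoreTrace line header:
#   <cpu> [time.nsec] threadid-sigid/pid:comm.tid/ @pc
# The sample line below uses hypothetical field values.
import re

HEADER_RE = re.compile(
    r"<(?P<cpu>\d+)> "
    r"\[(?P<time>[0-9.]+)\] "
    r"(?P<threadid>[0-9a-f]+)-(?P<sigid>\d+)"
    r"/(?P<pid>\d+):(?P<comm>.+)\.(?P<tid>\d+)/ "
    r"@(?P<pc>[0-9a-f]+)"
)

line = ("<1> [00248.864651618] ffffff806401e040-0/337:app_main.337/ "
        "@00000070efc0b834 read ( fd: 5, buf: 0x6e5f6ec980, count: 4 )")
m = HEADER_RE.match(line)
if m:
    # e.g. cpu=1, pid=337, comm=app_main, tid=337
    print(m.group("cpu"), m.group("pid"), m.group("comm"), m.group("tid"))
```

Grouping downloaded log lines by pid or tid this way makes it easy to reconstruct per-thread syscall sequences from a whole-system trace.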
After the header, lines whose syscall portion ends with ... are syscall invocations, and lines whose syscall portion begins with ... are syscall returns. On syscall invocation lines, if the environment permits it, there will be an additional trailer of the form
@[ lr ret1 ret2 ret3 ... ]
This trailer contains the EL0 return stack of the function that invoked the syscall.
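Telling invocation lines from return lines follows directly from where the ... marker sits relative to the header; a small sketch (sample lines again use hypothetical field values):

```python
# Classify a CoreTrace log line as a syscall invocation or return:
# after the fixed header ("... @pc "), return lines begin with "...",
# while invocation lines begin with the syscall name.
import re

# Matches everything up to and including the "@pc " token of the header.
HDR = re.compile(r"^<\d+> \[[0-9.]+\] [0-9a-f]+-\d+/\d+:.+\.\d+/ @[0-9a-f]+ ")

def classify(line: str) -> str:
    rest = HDR.sub("", line, count=1)   # strip the fixed header
    return "return" if rest.startswith("...") else "invocation"

inv = ("<1> [00248.864651618] ffffff806401e040-0/337:app_main.337/ "
       "@00000070efc0b834 read ( fd: 5 ) ... @[ 0000006e5f6f9778 ]")
ret = ("<1> [00248.864656648] ffffff806401e040-0/337:app_main.337/ "
       "@00000070efc0b834 ... read ( result: 4 )")
print(classify(inv), classify(ret))
```

Pairing each invocation with the next return for the same tid then yields complete syscall records, with arguments from the first line and results from the second.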