This graphic depicts the operation of a new portable crime-scene forensics technology designed to take precise, high-resolution 3-D images of shoeprints and tire-tread marks. The technology works by encoding data in LED light that is projected onto the snow or soil surface, allowing the system to determine the depth of surface features while using a single camera. Credit: Purdue University image/Song Zhang
Researchers are developing a new type of portable crime-scene forensics technology designed to take precise, high-resolution 3-D images of shoeprints and tire-tread marks in snow and soil.

The system will cost around $5,000, about one-tenth the cost of commercially available systems, and represents an alternative to the traditional method of plaster casting, said Song Zhang, an associate professor in Purdue University’s School of Mechanical Engineering.

The project is funded with a $788,167, two-year grant from the National Institute of Justice. The portable 3-D imaging system will have an intuitive user interface and “auto-exposure control,” allowing investigators with no technical expertise to take high-quality images.

“This is the biggest contribution we are making to the forensics community,” said Zhang, director of Purdue’s XYZT Lab. “Current 3-D imaging products on the market are very difficult to use. You need expertise to be able to capture good images. What we want to do is bring in some intelligence to the algorithms so the forensic examiner just has to click a button to capture good images.”
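
The article does not describe how the auto-exposure control will be implemented. As a rough, hypothetical sketch only, a simple feedback loop could adjust the camera’s exposure time until the captured frame’s brightness falls in a usable range; `capture_frame` and `set_exposure` below are placeholder hooks, not part of the Purdue system.

```python
import numpy as np

def auto_expose(capture_frame, set_exposure, exposure_us=10_000,
                target_mean=0.45, tol=0.05, max_iters=10):
    """Illustrative auto-exposure loop (hypothetical camera hooks).

    capture_frame() returns an image as a float array scaled to [0, 1];
    set_exposure(us) sets the camera exposure time in microseconds.
    """
    for _ in range(max_iters):
        set_exposure(exposure_us)
        frame = capture_frame()
        mean = float(np.mean(frame))
        if abs(mean - target_mean) <= tol:
            break  # brightness is within the acceptable band
        # Scale the exposure toward the target brightness, clamping the
        # step so the loop does not oscillate on bright or dark scenes.
        scale = np.clip(target_mean / max(mean, 1e-3), 0.5, 2.0)
        exposure_us = int(exposure_us * scale)
    return exposure_us
```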

He will be working with two researchers in crime-scene forensics: forensic research scientist David Baldwin at the Special Technologies Laboratory, a U.S. Department of Energy-National Nuclear Security Administration facility in Santa Barbara, California; and retired forensic scientist and footwear and tire track examiner James R. Wolfe. The team also will include two doctoral students.

The research team will work to develop a system that produces images with a resolution of 600 dpi. Such an innovation would provide more precise results than casting and would produce images immediately, whereas casting takes up to an hour.
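
For a sense of scale (simple arithmetic, not figures from the project), 600 dpi corresponds to roughly 23.6 samples per millimeter, fine enough to resolve small wear marks:

```python
DPI = 600
MM_PER_INCH = 25.4

samples_per_mm = DPI / MM_PER_INCH        # ~23.6 samples per millimeter
pixel_pitch_mm = 1.0 / samples_per_mm     # ~0.042 mm between samples

# A 300 mm x 120 mm footwear impression imaged at 600 dpi:
width_px = round(300 * samples_per_mm)    # ~7,087 pixels
height_px = round(120 * samples_per_mm)   # ~2,835 pixels
print(samples_per_mm, pixel_pitch_mm, width_px, height_px)
```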

“Most shoes have very small cracks from wear in addition to their design pattern, and our system will be able to capture these distinct features,” Zhang said. “These marks are unique to a specific shoe.”

Zhang’s team has invented a “binary defocusing technique” that provides accurate depth imaging by encoding data in LED light that is projected onto the snow or soil surface. The light bouncing back to the camera contains the pre-encoded information, allowing the system to determine the depth of surface features while using a single camera. A laptop computer will perform the computations needed to operate the projector and camera. Unlike some other systems, the new approach is “eye-safe” because it does not require the use of lasers.
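
The article does not give the algorithmic details of the binary defocusing technique. As a hedged illustration of the general fringe-projection idea it builds on, the sketch below simulates three phase-shifted fringe patterns (which defocused binary stripes approximate), recovers the phase from the “camera” images, and converts the phase shift caused by a synthetic impression back into depth. The fringe count, phase-to-depth sensitivity, and test surface are illustrative assumptions, not values from the Purdue system.

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Three-step phase shifting: recover the wrapped phase from fringe
    images shifted by -120, 0 and +120 degrees."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic demonstration: a shallow 2 mm "impression" along one scan line.
x = np.linspace(0.0, 1.0, 640)
depth_mm = 2.0 * np.exp(-((x - 0.5) / 0.1) ** 2)

# Defocused binary stripes act approximately like sinusoidal fringes;
# surface depth shifts their phase (sensitivity below is an assumption).
fringes_across_scene = 20
rad_per_mm = 0.8                                     # assumed phase-to-depth sensitivity
ref_phase = 2.0 * np.pi * fringes_across_scene * x   # flat reference surface
obj_phase = ref_phase + rad_per_mm * depth_mm        # deformed by the impression

shifts = (-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0)
obj = [0.5 + 0.5 * np.cos(obj_phase + s) for s in shifts]
ref = [0.5 + 0.5 * np.cos(ref_phase + s) for s in shifts]

# Phase difference between object and reference, unwrapped, gives depth.
diff = np.unwrap(wrapped_phase(*obj) - wrapped_phase(*ref))
recovered_depth_mm = diff / rad_per_mm
print(round(float(recovered_depth_mm.max()), 3))     # ~2.0 mm
```

In a real system, the phase-to-depth conversion would come from calibrating the projector-camera geometry rather than from an assumed constant.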

“Our project has promise to deliver a device that will improve the quality and accuracy of tire and footwear impression evidence,” Baldwin said. “We plan to develop an affordable and easy-to-use system that will provide the forensic science community with more and better evidence from crime scenes.”

One challenge is to develop a system capable of taking high-quality images of shoeprints and tire tracks on mixed soil and snow.

“Some substrates like snow and light sand pose major difficulties for crime-scene investigators when photographing and casting shoeprints and tire tracks,” Wolfe said. “This project has the potential to develop a system that can quickly obtain the 3-D detail in such impressions, maximizing the value of this type of evidence in a criminal investigation.”

Another challenge is to take high-quality images of both diffuse and “specular” surfaces. Surfaces that reflect light evenly are said to be diffuse, whereas shiny surfaces reflect bright specular highlights.

“This specular light presents problems for 3-D imaging because cameras do not respond properly to those highlights,” Zhang said. “So we have to adapt our sensor to be able to deal with both specular and diffuse light.”
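
Zhang does not say how the sensor will be adapted. One common way to cope with mixed diffuse and specular surfaces, offered here only as an assumed illustration, is to capture the same scene at several exposure times and, for each pixel, keep the longest exposure that is not saturated; the sketch assumes images are float arrays scaled to [0, 1].

```python
import numpy as np

def fuse_exposures(frames, exposures_us, saturation=0.98):
    """Combine frames of the same scene taken at different exposures.

    For each pixel, use the longest exposure that is not saturated,
    normalized by exposure time so intensities stay comparable.
    frames: list of float arrays in [0, 1]; exposures_us: matching times.
    """
    frames = np.stack(frames)                        # shape (n, H, W)
    exposures = np.asarray(exposures_us, float).reshape(-1, 1, 1)
    valid = frames < saturation                      # unsaturated pixels
    # Radiance estimate: intensity divided by exposure time.
    radiance = np.where(valid, frames / exposures, 0.0)
    weight = np.where(valid, exposures, 0.0)         # prefer longer exposures
    best = np.argmax(weight, axis=0)                 # chosen frame per pixel
    # Pixels saturated in every frame fall back to frame 0 (radiance 0).
    return np.take_along_axis(radiance, best[None], axis=0)[0]
```

On a shiny substrate, the shortest exposure keeps highlights below saturation while the longest recovers detail in darker, diffuse regions.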

The project officially begins in January.


Source: Purdue University