News
Telling a lie: it's one of the easiest things for a human being to do. It's also one of the hardest in the world to detect. But that hasn't stopped people from trying. When the ancient Chinese questioned suspects about an alleged crime, they used to make them chew grains of rice. After interrogation, if the rice in the suspect's mouth was dry, it was assumed that tension created by deception had blocked the flow of saliva. Guilt by rice.

Now, new brain-imaging technologies promise to revolutionize the ancient art of lie detection, going far beyond what polygraphs can do and, in the process, turning what once was little more than superstition into a verifiable science. (See the sidebar "The Truth About Polygraphs.") Needless to say, this could be a big deal for murder suspects.

About a year and a half ago, in the Indian state of Maharashtra, a 24-year-old woman named Aditi Sharma reportedly became the first person in the world to be convicted of murder based substantially on the results of a brain scan. The procedure, called a brain electrical oscillations signature (BEOS) examination, required Sharma to sit still with her eyes closed while a state examiner played aloud a series of taped statements, phrased in the first person: "I got arsenic from the shop." "I called [my former fiancé] Udit." "I gave him the sweets mixed with arsenic." Sharma remained silent throughout the test. But with 30 electrodes tethered to her head, the examiner was able to monitor on a computer the electrical activity in those parts of her brain where memories are stored. His conclusion: Sharma had "experiential knowledge" of the murder.

The examiner's report clearly had an impact: the judge, in sentencing Sharma to life in prison, devoted nearly ten pages of his ruling to a discussion of the BEOS test. (As of this writing, her case is on appeal.) "I am innocent and have not committed any crime," Sharma said before she was led away.
Said one bioethicist when later asked about the case by a New York Times reporter: "I find this both interesting and disturbing."

Here in this country, the quest for a better lie detector has led researchers down a somewhat different path. Their focus: functional magnetic resonance imaging, or fMRI. An fMRI machine looks exactly like any other MRI machine, with a tunnel-like chamber. And like conventional MRIs, it uses magnets and radio waves to produce detailed images of the body's internal structures. But unlike its less sophisticated cousin, an fMRI device is powerful enough to reveal function as well as structure by detecting changes in the rate of oxygen consumption as a function of blood flow. Simply put, the harder a region of the brain works, the more blood it needs, and the more it "lights up" on an fMRI scan.

What does any of this have to do with lie detection? As a growing body of research shows, when people lie, certain areas of their brains tend to light up more than when they tell the truth. In fact, three separate studies of laboratory data in 2005 (at the University of Pennsylvania and the Medical University of South Carolina) found that fMRI scans accurately distinguished truths from falsehoods between 76 percent and 93 percent of the time.

"The upside of this technology could be transformative," says Jeffrey Bellin, an assistant professor of law at Southern Methodist University's Dedman School of Law who has studied the lie-detecting potential of fMRI scans. "This could be a huge deal for the criminal justice system, if we can get a technology that works." The prospects are so intriguing that even the Pentagon's Defense Academy for Credibility Assessment, which trains all federal polygraph examiners, is studying a number of high-tech approaches to lie detection, including fMRI. Not surprisingly, the academy keeps a very low profile.
Also not surprising: when the ACLU inquired under the Freedom of Information Act (FOIA) whether American security agencies such as the CIA had used fMRI or other brain-scanning technologies in interrogating suspected terrorists, it got little satisfaction. The government merely sent back a few pages of documents indicating an awareness of the technology but provided no details on its use. "If we had any kind of official public statement from the government or indications that they were using fMRI, we could have followed up on that, but we didn't," says Christopher Calabrese, the ACLU counsel who prepared the FOIA request.

From a national security standpoint, fMRIs do have one obvious drawback: they require the subject's cooperation. To produce useful data, subjects must remain nearly motionless for at least ten minutes; a slight movement of the head or even a blinking of the eyes can compromise the accuracy of a scan. Moreover, forcing someone to remain immobile is not an option, because stress can alter the brain signals that are said to signify deception.

Still, on at least one occasion, a military prosecutor at the U.S. detention facility in Guantanamo apparently thought enough of the technology's potential to contact a company in San Diego called No Lie MRI. The company's CEO, Joel Huizenga, took the phone call. "[The prosecutor] wanted to know what progress we had made in getting fMRI evidence admitted into a U.S. court," Huizenga remembers. "So the government must be looking at fMRI."

Huizenga holds a bachelor's degree in biology from the University of Colorado at Boulder, a master's in molecular biology from the State University of New York at Stony Brook, and an MBA from the University of Rochester. He worked in business development for several medical-diagnostic companies, then launched No Lie MRI in 2002. He has been offering tests to the public since December 2006.
Huizenga claims accuracy rates of more than 90 percent for his fMRI system, and he predicts that with improvements in the technology its accuracy rate will climb to 99 percent within ten years. To be sure, the service that No Lie MRI provides isn't cheap: the machine itself costs upwards of $1 million, and for a single scan Huizenga charges clients $5,000. By comparison, a standard polygraph can be had for between $500 and $1,000.

But apart from what No Lie's clients are willing to pay, its mix of customers would sound very familiar to a polygraph examiner. "We've tested cases concerning murder, rape, incest, worker's comp, and infidelity," says Huizenga. "The topics that are important are sex, power, and money, in that order. Anything to do with sex is great, because either you had it or you didn't, so there's no gray area. We've had husbands and wives come in to be tested on fidelity. It turns out that more wives than husbands are seeking to prove they're faithful."

Currently, Huizenga operates two testing centers (one in Los Angeles, the other in San Diego), and he plans to establish contractual relationships with as many as 40 more nationwide. These independent test centers would perform brain scans for No Lie after receiving a set of questions relevant to each case over a secure Internet connection. The results of the scans would then be returned to the company's headquarters for analysis.

Huizenga cautions, though, that before any such expansion can become economically feasible, a judge somewhere in the United States will have to rule that fMRI evidence is admissible in court. Once that happens, he says, demand for these scans will skyrocket. It hasn't happened yet, although in March a criminal defense firm in San Diego did submit a report from No Lie purporting to show that one of its clients was telling the truth when he denied sexually abusing a child. The defense firm, McGlinn & McGlinn, eventually withdrew the scan before its admissibility could be ruled on.
A spokesman for the firm would not explain why, because the issue came up in a confidential juvenile proceeding. But according to Huizenga, who tracks these things, this marked the first time that the results of such a test were advanced in a U.S. court as forensic evidence of a person's innocence. It also provided Huizenga with some much-needed encouragement. "We're trying to get this accepted in court, but things are just going way too slow," he says. "I think it's going to happen eventually, as long as the decisions are based on science and not politics."

Those who most strongly advocate the use of fMRI scans for lie detection are quick to point out that they're not just trying to create a better polygraph. Unlike a polygraph, they say, an fMRI provides a window into the neurobiology of lying. Steven Laken, who holds a PhD in cellular and molecular medicine from Johns Hopkins, is the president and founder of Cephos, an fMRI lie-detection service with testing centers in Charleston, South Carolina, and Framingham, Massachusetts. "What distinguishes fMRI from other lie-detection technologies," he says, "is that, for one, we're not looking at the stress responses that may or may not be associated with lying. Secondly, we're using computer programs to make the determination of whether someone's being truthful. Unlike the polygraph, it's not about the examiner's opinion or belief about what the test shows." Consequently, it doesn't matter who is paying for the test, or what biases the examiner may have about the case.

F. Andrew Kozel, an MD and fMRI expert at the University of Texas Southwestern Medical Center in Dallas, consults for Cephos. He's also listed as an inventor of fMRI technology licensed exclusively to Cephos. But when Kozel talks about the lie-detection capabilities of fMRIs, he is noticeably more cautious than either Laken or Huizenga. "It's still very early in the development of this technology," he allows.
"We've demonstrated we can do this in a laboratory setting, but that's not the same as the real world. I do have significant concerns about it being used commercially until we fully understand its strengths and weaknesses. It may still work. I'm not saying it can't. But we just don't know.

"A lot of people don't understand the technology," Kozel continues. "One of the bits of misinformation out there is that this technology can read thoughts. That's absolutely incorrect. What this technology is designed to measure is whether the brain is working harder during a lie than it is when you're telling the truth."

There is solid scientific evidence that the anterior cingulate cortex, which is thought to play an important role in emotional control, lights up during spontaneous lying. But researchers also find that other emotions unrelated to deception can have the same effect. Moreover, scientists speculate that different types of lies may involve different neural pathways, which would put the goal of determining truthfulness beyond a reasonable doubt out of reach of today's technology.

"I don't think fMRI technology is close to being ready for prime time," says Hank Greely, a bioethicist at Stanford Law School. "I don't think there's proof that it should be used now. I wouldn't go so far as to say that no one should offer it, though it would be nice to have a regulatory scheme that treated it the way the FDA treats new drugs or medical devices. But I certainly wouldn't pay for it or put any confidence in the results at this point."

So, given what we know and don't know, how likely is it that an fMRI truth scan will be presented to a jury within, say, the next year or two? In California, the operative standard for admissibility of evidence is the Kelly/Frye test. It requires that the scientific method in question have general acceptance in the relevant scientific community.
Meanwhile, in the federal courts and in more than two dozen other states, the operative standard is the Daubert test. Under Daubert, trial judges act as gatekeepers, determining whether the evidence in question is both relevant and reliable, based on testability, potential error rates, peer-reviewed papers, and other factors.

"I think the technology has already met some very minimal criteria for Daubert, that's for sure," says Cephos's Laken. "There are more than 20 scientific papers that are peer reviewed, and those papers consistently show that the same parts of the brain are activated when the brain is engaged in deception. Is it testable? The technique certainly is testable. Are the materials and methods from which the technique is drawn scientifically accepted in the community? With 15,000 or so fMRI publications [in scientific journals], it's certainly hard to argue that brain imaging isn't detecting what's going on in the brain. There are peer-reviewed studies supporting the accuracy of the technology. When I look at those things, I certainly believe we've fulfilled many of those criteria."

But even if a brain scan passes muster under Daubert or Kelly/Frye, it could still conceivably be kept from a jury under the hearsay rule, according to Southern Methodist University's Bellin. "If you wanted to get an fMRI test admitted, you'd have to require proponents of the evidence to explain how the evidence from the test was procured. Was the other side notified of the test? Were they invited to participate in the test, or submit questions of their own? If you get the prosecution to attend, or propose questions for the examiner to ask, then a lot of those objections go away. But then the tactical problem for the defense is that if the defendant fails the test, the prosecution could get it into evidence. So only people very confident of passing the test would take it."
Still, Bellin believes that the obstacles to getting these scans admitted into evidence are not insurmountable. "I think there's a chance that in short order someone is going to be able to convince a court of the scientific validity of fMRI and have it admissible," he says. "Because you don't have to convince everybody; you just have to convince one trial judge out there that this is a valid science. You get one judge, and then another judge, and then pretty soon this becomes more common."

With so much technological innovation to keep track of, the law has for some time been hard pressed to keep up. But now, companies like No Lie MRI and Cephos are raising the stakes by claiming that they can answer, with a degree of certainty never achieved before, two of the most important questions in jurisprudence: Who is telling the truth? And who is not?

"It'll be interesting to see how this plays out," says Stanford's Greely. "I think the technology could be proven to be sufficiently reliable within the next 5 to 20 years. But I'd really hate to have it pushed prematurely, because some people's lives could really be harmed as a result."

And that's no lie.

Tom McNichol is a San Francisco-based freelance writer.
Kari Santos
Daily Journal Staff Writer