Predictive policing: where scientific fact meets science fiction
As data-driven software becomes increasingly common both on the streets and in the courtroom, some have raised concerns that such software will soon replace common sense and perhaps even due process.
John Doe is sitting outside a small coffee shop in Los Angeles. Cars are parked along the road, and people come and go by the minute. No one seems to be paying attention to anyone or anything. No one, that is, except John.
Eventually, John spots a man in a suit and tie rush out of his car, slam the door, and fail to lock it before storming off toward the executive office building a few blocks down. John watches him disappear into the bustle of the city streets.
With the car unlocked, John casually walks over and peeks inside, where he sees a laptop resting in an open computer bag on the back seat. He opens the car door and reaches for the bag when, to his surprise, officers approach from behind and apprehend him, just as he thought he had found an opportune crime. How could the police have known? Was it luck, or was it something far more deliberate?
Predictive policing is a recent trend in modern police work that seeks to utilize new technologies and massive amounts of data in order to fight crime before it happens.
The Washington Post reported on new technologies that are enabling police agencies to respond to crimes before they occur. Predictive policing is a recent trend in modern police work that seeks to utilize new technologies and massive amounts of data in order to fight crime before it happens. This new style of police work takes many forms, usually through different software packages, and can serve a variety of purposes, all under the umbrella of producing a more efficient, effective, and smarter approach to combating crime.
One of these programs is called “PredPol,” a play on the phrase “predictive policing.” PredPol has become popular in the crime-fighting world and is currently used in United States jurisdictions of all sizes, as well as abroad. It operates on the central premise that certain types of crimes tend to cluster in time and space. Relying on data from police record systems, it generates predictions for each beat, shift, and mission type. From the types, locations, dates, and times of past crimes, it produces a prediction of each “hot spot” where particular crimes are likely to occur.
The data that goes into PredPol is extensive. While it does not include personally identifying information, it comprises several years’ worth of crime data, which is filtered through an Epidemic-Type Aftershock Sequence (ETAS) model. ETAS is a self-learning algorithm that is forced to “re-learn” every six months to account for newly gathered crime data. Older data is then compared against new crime data before hot-spot locations are generated. This re-learning surfaces new crime patterns, sometimes anticipating them before they fully emerge.
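The ETAS model comes from earthquake seismology, where each quake temporarily raises the likelihood of nearby aftershocks; applied to crime, each incident briefly elevates the predicted rate of similar incidents in the same area. PredPol’s actual implementation is proprietary, so what follows is only a rough sketch of a self-exciting model of this general kind, with invented parameters and event data.

```python
from math import exp

# Illustrative sketch of an ETAS-style self-exciting model over a grid of
# patrol cells. A cell's predicted intensity is a constant background rate
# plus exponentially decaying "aftershock" contributions from that cell's
# recent events. All parameters and data below are invented; PredPol's
# actual model, parameters, and grid are not public.

def cell_intensity(event_times, now, mu=0.1, theta=0.5, omega=0.2):
    """Predicted event rate for one cell at time `now` (in days)."""
    aftershocks = sum(theta * omega * exp(-omega * (now - t))
                      for t in event_times if t < now)
    return mu + aftershocks

# Hypothetical history: cell id -> days on which crimes were recorded.
history = {
    "cell_A": [1, 3, 4, 28, 29],   # recent cluster of incidents
    "cell_B": [2, 15],
    "cell_C": [],
}

now = 30
scores = {cell: cell_intensity(times, now) for cell, times in history.items()}

# The highest-scoring cells would be flagged as "hot spots" for the next shift.
for cell, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{cell}: predicted intensity {score:.3f}")
```

In a real deployment the grid would be far finer and the parameters would be re-fit from fresh data, much like the six-month “re-learning” cycle described above.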
“[PredPol] is an invaluable added tool that allows our police force to use their patrol time more efficiently and helps stop crime before it happens.”
Former Alhambra City Police Chief Mark Yokoyama is quoted on PredPol’s website as saying, “PredPol does not replace the experience and intuition of our great officers, but is rather an invaluable added tool that allows our police force to use their patrol time more efficiently and helps stop crime before it happens.” This statement highlights PredPol’s potential to help police departments utilize their resources more efficiently.
While PredPol offers police departments a way to prevent crime, another program aims to inform judges’ sentencing decisions for convicted criminals who may offend again. “Correctional Offender Management Profiling for Alternative Sanctions,” or “Compas” for short, is an algorithm that uses a variety of factors to estimate the likelihood that an individual will commit another crime, with a specific emphasis on which punishments are best suited to that individual. The factors that inform the Compas assessment are proprietary and kept secret by Northpointe Inc., the private company that developed the software.
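Because Northpointe keeps the methodology secret, Compas’s inner workings can only be guessed at. Purely as a hypothetical illustration of how actuarial risk tools in general tend to work, the sketch below maps a weighted combination of factors to a probability-like score through a logistic function; every factor name and weight here is invented and implies nothing about Compas’s actual inputs.

```python
from math import exp

# Hypothetical illustration only: Compas's real factors and weights are
# proprietary. This shows the generic shape of an actuarial risk score:
# weighted inputs passed through a logistic function to yield a value
# between 0 and 1. All names and numbers below are invented.

def risk_score(features, weights, bias=-1.0):
    """Generic logistic risk score in [0, 1]; higher means higher predicted risk."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + exp(-z))

weights = {"prior_offenses": 0.4, "age_under_25": 0.8, "employment_gap": 0.3}
defendant = {"prior_offenses": 2, "age_under_25": 1, "employment_gap": 0}

print(f"Illustrative risk score: {risk_score(defendant, weights):.2f}")
```

A tool of this shape is only as contestable as its inputs: a defendant who cannot see the factor list cannot tell which of these numbers to rebut.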
The secrecy surrounding the Compas software and its impact on criminal sentencing was explored in a recent Wisconsin Supreme Court case, State v. Loomis, in which Eric Loomis, after being convicted of eluding the police, appealed his sentence because the judge relied in part on Mr. Loomis’ Compas score in giving him a six-year prison term. The judge stated that Mr. Loomis’ Compas score indicated that he was very likely to commit another crime.
Mr. Loomis appealed his sentence, arguing that he had been denied due process of law. The Supreme Court of Wisconsin held that even though the criteria Compas uses in its algorithm were never disclosed to the defendant or the court, Mr. Loomis’ due process rights had not been violated. In an effort to offer a procedural safeguard limiting the effect Compas can have on sentencing, the court required that a “written advisement” be given to judges using Compas, alerting them to the potential dangers of predictive software. Mr. Loomis appealed to the United States Supreme Court, which denied certiorari.
These innovations in the field of criminal justice present difficult questions for legal practitioners and laymen alike. Should the factors, criteria, and predictive models that make up these complicated algorithms be allowed to remain hidden from the public? Should Mr. Loomis have been allowed to review the factors Compas considered, so that he could have presented rebuttal evidence at his sentencing hearing? And what are the legal implications of passing such judgments before the supposed actor ever takes action?
“Actus non facit reum nisi mens sit rea” is a Latin maxim of law that roughly translates as “the act does not make a person guilty unless the mind is guilty.” Couched in this doctrine, penned by Sir Edward Coke in his “Institutes, Part III” (1797 ed.), are the principles of “actus reus” and “mens rea.” Actus reus (the guilty act) coupled with mens rea (the guilty mind) produces criminal liability in most common law jurisdictions.
Interestingly enough, Coke’s famous maxim was used to show that actions alone were not enough to fix criminal liability on a defendant. According to Coke, without a culpable state of mind accompanying the egregious conduct, a person cannot be held liable for the crime. It was here that Coke articulated the necessity of a guilty mind alongside guilty conduct. But what if the presence of a guilty mind alone were enough to expose someone to criminal liability?
Because our system of law requires a guilty act for one to be subjected to criminal liability, predictive policing has its limitations.
Predictive policing treads a thin line between the efficient use of resources to prevent crimes before they happen and the beginning of an era in which the state prosecutes citizens for their thoughts or motives alone. The practical effects of predictive policing are already visible, as in Mr. Loomis’ case, in which the judge imposed a higher criminal sentence because a computer program indicated that Mr. Loomis was likely to commit another crime. It should be noted, however, that because our system of law requires a guilty act for one to be subjected to criminal liability, predictive policing does have its limitations.
But is the problem really the technology? If the judge, without considering Compas, had himself believed that Mr. Loomis was likely to commit another crime and imposed the same six-year prison sentence, would there be outrage? There seems to be far less objection when people make their own independent assessments, oftentimes reaching the same conclusions the predictive software generates.
It seems a balance must be struck: predictive tools should supplement human decision-making, while judges and officers remain free to disregard the software’s guidance when it contradicts human intuition. At the same time, however, the flawed and inconsistent nature of human intuition may be the very reason predictive software is increasingly sought after.