You will almost certainly be recorded on video every time you leave home. You will be caught on camera in the store, at the airport and on the street. Your calls to various organizations will also be recorded “for quality purposes.”
Multiple Tools Tracking What We Do
Wearable health trackers
Activity-tracking devices made by companies like Fitbit, Jawbone and Misfit are increasingly popular gadget purchases, but they’re also making their way into the workplace: research firm Gartner estimates that 10,000 companies offered activity-trackers to staff in 2014. Their motivation is being questioned, however: will your boss have access to the data from these devices? (Imagine your annual review including criticism of your sofa-loafing nature at weekends). And will they share it with advertisers or insurance companies?
Monitoring your night-time habits
Personal sleep monitoring – sometimes using standalone devices, sometimes built into fitness trackers like Fitbit – sounds like a good idea, but would you want your employer to know how much shut-eye you’re getting? Meanwhile, a flurry of reports in 2013 called attention to the prospect of daily testing of employees’ alcohol levels, with fingerprint-scanning device AlcoSense TruTouch taking just 10 seconds to return a reading. “The benefits of testing all staff, every day are immense and the change in workforce behaviour is immediate,” explains the company on its website. Anyone for a mineral water?
Work/life balance
Some of the best things about workplace technology come outside your workplace: faster, more reliable broadband; powerful smartphones and tablets; better video and text chatting software all enable us to be more flexible in our working patterns. But one of the worst things about workplace technology also comes outside the workplace, because all this flexibility often erodes your work/life balance. It might be your boss’s fault – emails in the evening, expecting an immediate response, or conference calls on public holidays – but just as often it might be your own: checking emails in bed and treating the train home as an extension of your office hours.
Augmented reality
The Google Glass augmented-reality specs were essentially a toy for tech people with too much money to burn (“glassholes” as they were labelled). Google has gone back to the drawing board with that product but the one area where experts agree AR glasses are likely to catch on is in the workplace. From warehouse workers to plumbers and electricians, people will be able to access data and services with a flick of their eyes. The risks? Information overload: emails and notifications in your face throughout the day. And also the questions around recording video and audio, whether in terms of your boss demanding your first-person footage of a task, or simply the danger of an entire workforce becoming walking CCTV cameras.
Anonymous feedback tools
One of the elements of Amazon’s working culture picked out by the New York Times was its “secret feedback” system, used anonymously by employees to praise – or criticise – one another to bosses. Amazon claims most of the comments are positive but workers interviewed for the report said it could also be used to gang up on colleagues through coordinated campaigns of negative feedback. As this kind of “collaborative anytime feedback” technology spreads to other companies, moaning about colleagues in the pub could be replaced by anonymous complaints that feed directly into their performance reviews – and that anonymity may hamper the targets’ ability to defend themselves.
Driving-monitoring apps
Tracking drivers remotely is well established in industries like logistics but the emergence of smartphone driving-monitoring apps may expand the idea to anyone who drives a company car. Insurance companies including Aviva and Admiral have launched apps for drivers that score their driving safety based on cornering, braking and acceleration, and then offer the best ones a discount on their insurance. Employees will be aware of GPS technology being used to track their location to eliminate dilly-dallying, but data on their driving safety – especially in a device/car that’s used at weekends – adds a new layer.
Sociometric badges
That badge hanging from your neck to get into your workplace? What if it also recorded your daily interactions? This is the idea behind sociometric badges, which capture “face-to-face interactions” of the wearer, as well as speech and body movement, then serve all this data up for analysis by employers. It might provide data on how well (or poorly) a salesperson is doing – “improving the employee-customer interaction” as one firm selling the technology puts it. The implications for workplace gossip, or even simply judging someone more on their social interactions than other aspects of their work, are complex.
Happiness analytics
Some sociometric badges are going further. Earlier this year, Hitachi unveiled a new badge – slogan: “Human Big Data” – which aims to measure your happiness. How? A mysterious algorithm based on your physical activity, from how quickly you walk to how often you nod. Hitachi says this data will be aggregated to provide an overall happiness score for a workplace, rather than used for bosses to grill individual staff about why they’re not happy enough. Mood-tracking is a hot area for tech development, from smart-rings measuring your sweat to wristbands monitoring your heart. Some, like headworn devices Melomind and Thync, even claim to change your mood via electrodes stimulating the brain. The science remains under debate, but the vision of your boss trying to make you wear a de-stress helmet is… a bit stressful.
Facial recognition technology
Privacy concerns around facial recognition tend to focus on two areas: its use by police and the government in the monitoring of citizens, and the worry that if Facebook is working on it (which it is), it must be up to no good. Employers don’t get mentioned often, but perhaps they should. Think about established fears of companies Googling potential employees or mining their Facebook and Twitter profiles, and then extend those into using software to scan for their faces in photos across the web – from drunken nights out to public protests.
Security drones
In 2015, consumer drones – as opposed to military ones – remain a plaything, with regulators still drawing up rules on how they can and can’t be used for commercial purposes. One of the uses being mooted is for building security: drones capable of zipping around buildings, filming any intruders. Concerns here don’t just include the risk of a drone falling out of the sky on to an employee’s head but the question of whether these drones will also film staff, and what will happen to the footage.
Corporate security
Most of these changes in the workplace will create huge amounts of data on employees, from personal activity and email archives to photographs and footage of them going about their business. Which raises the question: are the companies storing all this data able to do so securely, in an era when data breaches are increasingly common? If Ashley Madison, Sony Pictures and Carphone Warehouse fell victim to hackers, what makes you sure your company won’t? In an era of increased data collection within the workplace, what your employer plans to do with it may be the least of your worries.
Cell Phone Locators
"DEAR subscriber, you have been registered as a participant in a mass disturbance." This text was sent by the Ukrainian government last year to everyone with a cellphone known to have been near a protest in the capital, Kiev.
Just what you’d expect from an ex-Soviet country? Not so fast. In the US and Europe, police are also seeking information on phones linked to specific places and times – often without a warrant. We’re all spied on. Our phones are bugged, our laptops inveterate informants. Reports on activities that define you – where you go, who you meet, what you buy – are sold to the highest bidder.
Facebook has developed facial recognition technology powerful enough to identify people without a clear view of their faces.
Researchers at the social network site say they have developed artificial intelligence that can identify users with 83 per cent accuracy even if their faces are hidden. If it cannot detect a face, the system examines other characteristics, such as body shape, haircut, pose and clothing.
Yann LeCun, head of artificial intelligence at Facebook, told New Scientist that his team wanted to create systems with human abilities. “There are a lot of cues we use,” he said. “People have characteristic aspects. You can recognise Mark Zuckerberg [the creator of Facebook] very easily because he always wears a grey T-shirt.”
Facebook appears to be developing the technology to “tag” or identify photos of friends. However, Ralph Gross, of the Robotics Institute at Carnegie Mellon University in Pittsburgh, said: “If, even when you hide your face, you can be successfully linked to your identity, that will certainly concern people. It’s important to discuss these questions.”
You Won't Have A Choice
“IF YOU are walking down the street, a public street, should a company be able to identify you without your permission?”
That was the key question that caused talks about face recognition technology among privacy advocates, the US government and consumer groups to fall apart quite spectacularly earlier this week.
The talks were meant to develop a code of conduct for the use of this technology – which is becoming increasingly pervasive – but collapsed after the privacy advocates stormed out in protest.
Alvaro Bedoya of the Georgetown University Law Centre in Washington DC says this happened in the face of inflexibility from industry players when trying to agree on correct conduct in the simple, hypothetical case above.
Bedoya and other privacy advocates thought the answer was obviously no, but the tech companies disagreed. “We asked if we can agree on this edge case, but not a single company would support it,” he says.
The lack of a current consensus means face recognition is moving into creepy territory. One example is California-based company Face First, which is rolling out a system for retailers that it says will “boost sales by recognising high-value customers each time they shop” and send “alerts when known litigious individuals enter any of your locations”.
“What facial recognition allows is a world without anonymity,” says Bedoya. “You walk into a car dealership and the salesman knows your name and how much you make. That’s not a world I want to live in.”
Another company, called Churchix, is marketing facial recognition systems for churches. Once pictures of the church’s members have been added to a database, the system tracks their attendance automatically. It also claims to be able to discern demographic data about the congregation, including age and gender.
“Companies are already marketing products that will let a stranger point a camera at you and identify you by name and by your dating profile,” says Bedoya. “I think most reasonable people would find this appalling.”
Online technology companies that use facial recognition have had to set their own policies for it. Google and Microsoft are known for having the best policies, requiring users to explicitly opt in. In Microsoft’s case, your facial profile never leaves your device – your Xbox, for example.
“Microsoft takes an opt-in approach to facial recognition and we would encourage other companies to do the same,” a spokesperson for the company said. “We believe the stakeholder process is important and that is why we are participating. Should there be a consensus that an opt-in approach be adopted, that is something that we could support.”
But no such consensus was reached.
While negotiating rules for face recognition has failed for now, Ben Sobel, also of Georgetown, points out that the US states of Illinois and Texas – which together make up an eighth of the country’s population – already have privacy laws that govern the collection and processing of biometric data.
Facebook is being sued in Illinois, where it is alleged that the social network’s face recognition system violates the state’s laws.
But why don’t technology companies take threats like these seriously? Jennifer Lynch of the Electronic Frontier Foundation, a privacy advocacy group based in San Francisco, says it’s because doing so would constrain what they can do with the technology in future.
“We’ve seen this from the birth of commerce on the internet – companies collect as much data as they possibly can on people, because of the possibility that it might be useful,” says Lynch. “We discourage companies from doing that because it means all of that data is available for the government to come asking.”
“This is just the beginning of a very important conversation,” says Kate Crawford of Microsoft Research. “Facial recognition is one of many remote biometric sensing technologies. There’s also gait detection, iris scanning, heartbeat recognition and many others. We need a deeper discussion of the social and ethical implications of these capacities as well as who gets to use them, where and how.”
24/7 Tracking
It might sometimes feel like your boss is always on your case, but this is a whole other level. A former sales executive from the wire-transfer company Intermex alleges in a lawsuit filed May 5 that she was fired for uninstalling an app that tracked her whereabouts 24/7 and sent the data to her supervisor.
According to the suit, which was spotted by Ars Technica, Intermex made employees install the Xora GPS app so the company could track them at all times. Myrna Arias claims that she told her boss, John Stubits, that she was fine with the tracking while she was on duty, but opposed to it during her off hours and weekends. The suit alleges that a group of co-workers agreed with this position. After doing some research about Xora, Arias uninstalled the app in April 2014.
The suit, filed in Kern County Superior Court, claims privacy violations, wrongful termination, and other labor infractions. It seeks damages of more than $500,000 for lost wages. “What we have here is a really egregious situation,” Arias’s attorney Gail Glick told Courthouse News Service. The suit says:
After researching the app and speaking with a trainer from Xora, Plaintiff and her co-workers asked whether Intermex would be monitoring their movements while off duty. Stubits admitted that employees would be monitored while off duty and bragged that he knew how fast she was driving at specific moments ever since she installed the app on her phone.
ClickSoftware, which makes the product Xora StreetSmart, seems to envision its product as a 9-to-5 tool, not a full-time surveillance service. The company has not responded to a request for comment. Its website says, “When your field employees start their day, they simply launch the application on their mobile devices.” But the potential for invasive abuse of the product is there. “See the location of every mobile worker on a Google Map. You can drill down on an individual worker to see where they have been, the route they have driven and where they are now,” the site explains.
Mobile phones have made it cheap, easy, and appealing for employers, insurance companies, and other groups to track their affiliates. But concerns about these programs are extensive and include the danger of unforeseen privacy violations, in addition to the obvious ones like your boss finding out that you ran an errand during work, or your insurer finding out that you visit a smoke shop a few times a week. As one of my Slate colleagues said, “If that [lawsuit] is even close to what actually happened, it is terrifying.”
Uber Has You
Uber, the taxi-ordering app, can use more sophisticated technology to track people than the police, according to Britain’s top officer.
Sir Bernard Hogan-Howe, the Metropolitan police commissioner, said the company could use technology to locate people in real time, which the police are forbidden from doing.
Uber monitors a user’s position via a mobile phone app and then finds the nearest taxi to pick them up. “But when people ring the police, we haven’t got a clue where that phone is,” Sir Bernard told Radio Times.
He said that although police were not allowed to use location data in “real time”, if someone’s life was at risk officers could make an emergency application to get access to the details. Uber is able to use GPS tracking because customers sign up for an app, which they allow to reveal their location. If police wish to get information showing when, from where and with whom a person is communicating — including the time and duration of a call — they can only do so under laws laid down by parliament.
Figures released by the Big Brother Watch pressure group yesterday found that the police made more than 730,000 requests for communications data between 2012 and 2014. The average approval rate across all forces was 96 per cent, the campaign group said.
Churches Using Face-Recognition Software to Track Attendance
Churchgoers are being secretly monitored with facial recognition technology that automatically checks whether they are attending services.
About 30 churches across the world are using software that can detect faces in live CCTV footage and match them to photographs stored in a database.
The system, Churchix, is the creation of Face-Six, an Israeli software company that makes facial recognition software for border control forces, law enforcement agencies and casinos. It can be used without informing individuals that they are under surveillance.
Churches in countries including the United States, Portugal, Spain, India and Indonesia are using Churchix, according to Moshe Greenshpan, the chief executive of Face-Six.
He said that the largest church using the system had 10,000 individuals in its congregation.
The system was not yet being used in Britain, he said, but he did not rule out selling it in the UK should he be approached.
Emma Carr, the director of Big Brother Watch, said that churchgoers should be concerned about how churches were using, storing and securing their data. “Churches have managed to note who is in their congregation for hundreds of years without resorting to highly intrusive means,” she said. “Let’s hope that our places of worship remain environments where we can still expect a high level of privacy.”
Mr Greenshpan said: “I understand when people say it’s spy software and it intrudes [on] privacy but it’s not really like that. It’s for ease of use of the church to keep track of people automatically.”
To use Churchix, photographs of individuals are uploaded into a “watchlist” that contains their names and other information such as addresses, phone numbers and dates of birth. When Churchix is applied to live CCTV footage, recorded video or still photographs of church services or events, the software detects which members are present and logs the information.
Cell Phone Usage Tracks
You can discover a huge amount of information about someone’s life by getting hold of their personal data — Facebook and Twitter follows and Google searches are all revealing in their own ways. The same goes for cell-phone data, of course, and a new study shows that analyzing someone’s cell-phone usage patterns might allow an observer to determine — or at least guess with a pretty high degree of accuracy — whether they’ve recently lost their job.
Writing in the Journal of the Royal Society Interface, a team led by Jameson Toole of MIT examined this from a few different angles. First, they focused on an auto-parts plant in Europe that was closed in 2006 and was situated near three cell-phone towers, which allowed them to scoop up cell-phone data covering all of the calls made by employees at the plant.
At the individual level, "Users highly suspected of being laid off demonstrate a sharp decline in the number of days they make calls near the plant following the reported closure date." The researchers couldn't prove these folks were laid off, but by comparing what they were seeing to what happened during a holiday when the plant was closed, they think they have "strong evidence that we are correctly identifying the portion of users who were laid off by this closure."
The researchers also used cell-phone data to track how people's behavior — or at least those signals of behavior captured by cell-phone use — changed after losing their job. Some of the findings were stark:
We find that the total number of calls made by laid off individuals drops 51% and 41% following the layoff when compared to non-laid off residents and random users, respectively. Moreover, this drop is asymmetric. The number of outgoing calls decreases by 54% compared to a 41% drop in incoming calls (using non-laid off residents as a baseline). Similarly, the number of unique contacts called in months following the closure is significantly lower for users likely to have been laid off. The fraction of calls made by a user to someone physically located in the town drops 4.7 percentage points for laid off users compared to residents of the town who were not laid off. Finally, we find that the month-to-month churn of a laid off person’s social network increases roughly 3.6 percentage points relative to control groups. These results suggest that a user’s social interactions see significant decline and that their networks become less stable following job loss. This loss of social connections may amplify the negative consequences associated with job loss observed in other studies.
Finally, the researchers showed that, by collecting cell-phone data at the individual level, they were able to zoom out and partially predict unemployment at the province level. In other words, if you notice that there's a sudden uptick in the people in one area whose call records reflect the above excerpt, you can be pretty sure unemployment is rising in that area.
The privacy concerns here speak for themselves — it seems like most of us just don’t understand how much we are broadcasting about our lives every day.
Your phone is constantly betraying you
Chances are, your smartphone is broadcasting sensitive data to the world, says Glenn Wilkinson, who hacks people’s phones to demonstrate the risk
You present a live show that involves hacking cellphones. How does it work?
We look at the repercussions of the fact that your phone constantly looks for every Wi-Fi network you have ever connected to. It’s looking for the Starbucks network, it’s looking for the Los Angeles airport network, it’s looking for “BT Homehub 123”. This information is flying from the phones of our audience members. We can geolocate these networks to discover where people have been and often where they live. There is also a unique serial number called the MAC address given out by your phone in those messages, so we can identify individuals.
Give me an example of what you can do with this intercepted data.
There was a woman in one audience whose smartphone was broadcasting the label “Emma’s iPhone”. We could see where she lived and we could see that she had been to a hotel in Mauritius so as we opened the show I said, “Hey Emma did you enjoy your trip to Mauritius at hotel so-and-so?” We could even tell what side of the hotel she had stayed in – her room had a sea view. These kind of things are surprising to people.
How do you do it?
The hardware is just a Wi-Fi card with particular properties. You can get them off Amazon for about £10. And we use open source software called Wireshark – which chops up data packets that are being broadcast – and software that I wrote called Snoopy, which organises and visualises this data.
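For readers who want to see roughly how this works under the hood, here is a minimal sketch in Python using the scapy packet library rather than Wilkinson’s own Snoopy tool; it simply prints the MAC address and requested network name from each Wi-Fi probe request it overhears. The interface name is an assumption, and a wireless card in monitor mode plus root privileges are required.

# Minimal probe-request sniffer (sketch only). Assumes a wireless card
# already placed in monitor mode and exposed as "wlan0mon".
from scapy.all import sniff, Dot11ProbeReq, Dot11Elt

def handle_probe(pkt):
    """Print the sender's MAC and the network name it is asking for."""
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt.addr2                          # transmitter's MAC address
        elt = pkt.getlayer(Dot11Elt)
        if elt is not None and elt.ID == 0 and elt.info:
            ssid = elt.info.decode(errors="replace")
            print(f"{mac} is looking for '{ssid}'")

sniff(iface="wlan0mon", prn=handle_probe, store=False)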
Is this dubious, legally speaking?
It’s perfectly legal in the UK and to a large extent in the US, as it is unencrypted broadcast traffic. It’s like you are at a bar and someone walks in shouting the names of all their friends and you write it down: it’s permissible to collect that stuff. And businesses are doing just that. In certain shopping centres in the UK, marketing firms are tracking you in this way, profiling you and figuring out how many times they have seen you at this store, if you live in a posh or poor area and so on.
Could people be in danger thanks to their data leaking from their phones?
I can think of all kinds of scenarios. If you are a police officer involved in an arrest or protest, say, and an adversary figures out where you live. Or if you are in the military and deployed in a foreign country, there could be an insurgent on a hill with a long-range antenna pointed at your base figuring out where you live when you are not on a tour, putting your family at risk.
How do hacked audience members react when you explain all this?
When we have figured out something personal about somebody they are often taken aback. But we delete all the data after the show, and I guess because we are perceived as the good guys they are not too stressed about it. But the audience does leave with a much greater awareness of the need to better secure themselves.
Technology Makes It Easy To Snoop
For generations, the telltale signs of a lover’s cheating were devastating, but simple. And low-tech.
Roosevelt biographers point to Eleanor finding a bundle of steamy letters from Lucy Mercer to Franklin as she unpacked his suitcase in 1918. The Clinton sex scandal was told through conversations replayed on cassette tapes and the infamous stain on Monica Lewinsky’s dress. Maria Shriver reportedly confronted Arnold Schwarzenegger during marriage counseling over the existence of a love child conceived with the family’s longtime housekeeper.
Today, divorces and infidelity investigations increasingly eschew such personal touches. Instead, as divorce lawyers report, they center around digital forensics, involving data gathered from wearable devices and smartphone applications such as Find my iPhone and mSpy. This unintended consequence of modern data gathering shifts the discussions over large societal questions about digital privacy to intimate ones of emotional privacy, leading to bizarre moments of Orwellian heartbreak.
“It’s the digital lipstick on the collar,” says Sam Hall, a U.K.-based divorce lawyer.
The apps say they will keep track of your kids but, often, they’re used to expose unfaithful lovers.
The issue of tell-all data trails surged into public consciousness in 2011, when a man claimed he used Apple’s Find My Friends app to place his wife uptown near a suspected romantic liaison rather than downtown, where she had told him she was. “These beautiful treasure trove of screen shots going to play well when I meet her a$$ at the lawyer’s office in a few weeks,” he wrote in a MacRumors.com post that went viral.
That same year, the popular health and fitness app Fitbit triggered an outcry from users and privacy experts when the company accidentally set “public” as the default privacy setting and unwittingly revealed some of its users’ sexual activities on profiles that were just a Google search away. Fitbit did not respond to a request for comment.
Fueling this phenomenon, of course, is the popularity of tracking applications and wearable devices. One in six consumers said they currently use wearable tech in their daily lives, according to a 2014 survey from Nielsen. Despite the failure of Google Glass, consulting firm Deloitte estimated the wearable market is currently generating $3 billion and that there will be 100 million wearable gadgets on bodies by 2020. The digital stalkers take advantage of the fact that those being tracked often have no idea that they are being monitored, or that they can disable location services.
If spouses are using data from wearable devices to track one another, it’s fair to say the union is already precarious. But sometimes the monitoring isn’t even intentional.
For example, Hall described a married client who went away for a friend’s bachelor party, during which he exchanged sexually explicit messages with a woman who was not his wife. When he returned home, Hall said, iCloud kicked in and his young son and wife were shocked when they opened up their iPad and saw the images, which had accidentally made their way onto the shared cloud.
“Sometimes, there’s someone who is tech savvy and can become a super sleuth,” Hall says. In other cases, “you have a technological dunce who gets called out.” Yet the more common scenario is that technology is simply way ahead of the average person’s understanding of it, says James T. McLaren, a divorce lawyer in Columbia, S.C., and president of the American Academy of Matrimonial Lawyers.
The Academy does not keep a count of divorces involving data tracking, but a 2012 survey found that 92 percent of divorce attorneys reported an increase in the number of cases using evidence from smartphones in the previous three years. Another survey from the group found that 81 percent of divorce lawyers had seen an increase in cases involving social networking evidence over the previous five years.
Data from wearables is beginning to make its way into personal injury cases, and GPS data has been around for years in various forms. Last year, lawyers representing a Calgary woman submitted Fitbit data to show how her physical activity level as a personal trainer had dramatically decreased after she was in a car accident, according to Wired. (Similarly, “Serial” fans will recall the controversy over the use of cell phone tower pings in Adnan Syed’s case.)
Typically, data gathered from wearable devices and apps — heart rates, locations, text messages and emails — does not get submitted as evidence, McLaren said; rather, it may be used to secure evidence. Today’s tools may not be enough for a court to conclude that someone is in the throes of passion with a paramour. But data showing, say, that someone in a divorce proceeding is near the address of a suspected lover with an elevated heart rate is probably not ideal for someone in court, either.
McLaren offered a hypothetical example of a husband who saw that his wife was at a particular unknown address with some regularity, who then could use that information to dispatch private investigators to survey the premises. The result of that surveillance, rather than the original data tip-off, may be submitted to court.
The legality of tracking someone else, even a spouse, can be fuzzy. Some couples are going rogue and monitoring data themselves, lawyers told me. They’re installing tracking applications or seizing information from wearables, which may not always be legal. If potential clients approach McLaren with questionably secured intelligence, he said he declines to represent them because it may present risk to him as a lawyer. “I’m just as culpable as the person who did it,” he says. “No client is worth that.”
Normally, a police officer who wants to monitor someone must acquire a subpoena, says Harry Houck, an investigator focused on matrimonial work. They don’t have permission to install tracking software. But because many devices and smartphones are jointly held in both spouses’ names, either partner may be able to legally install tracking software on them.
“We can't use that kind of stuff,” Houck, a retired New York Police Department detective, says. “But husbands and wives do it if they both own the phones.”
And technology keeps making it easier to snoop. The current iteration of devices is focused on “hyper tracking,” Jeff Chester, the executive director of the Center for Digital Democracy, says, meaning that our data trails are only growing more refined and more real-time, making it easier for both large corporations and jilted spouses to monitor a person’s actions.
“We’re being encouraged to use and wear these things 24–7,” Chester says. “It sounds innocuous — just your heart rate, where you spend, where you go,” he adds. “But people can put the pieces together.”
Clients seeking data on infidelity are often looking for “peace of mind” rather than legal evidence, Scott Lewis, a Michigan-based private investigator, says. “They have strong suspicions cheating is going on,” he says. “But they want to know it for themselves before they confront someone. They want airtight information.”
Infidelities aside, new technology may also be unearthing traditionally contested matters in a divorce proceeding, such as hidden assets, Gary Traystman, a divorce attorney in Connecticut, says. Online banking records, medical histories or correspondence relating to mental health may also make their way onto a smartphone and into the hands of a divorce court. Traystman says he found evidence that a client’s husband had an investment account and real estate holdings in Chicago that had not previously been disclosed; the trail came through tracing his emails. Another case involved a series of communications with a real estate agent in Jamaica, triggering the uncovering of another hidden asset. Traystman says he also once found a spouse unloading merchandise on eBay, including gold coins and a tractor, in order to obtain more cash. “Nothing is ever erased,” Traystman says.
Among the more popular tracking options are those that are often marketed for monitoring the whereabouts of children, such as Trick or Tracker or Phone Tracker. Another, mSpy, is marketed as a “user-friendly application for watching over your kids, preventing theft, and supervising your employees’ performance,” according to the company’s website. For $39.99 a month, mSpy can collect keystrokes, screenshots, GPS location data, call and text logs and information exchanged using SnapChat.
It is an mSpy user’s “responsibility to determine whether you have proper authorization to monitor a given device,” the company explains in its FAQ. MSpy did not return requests for comment.
Yet in many cases, shoe-leather work still wins out. While data from tracking apps and wearables can tip off investigators, the tools have proven less reliable during a stakeout or when following an alleged cheater. Lawyers and investigators also tell clients to expect a certain margin of technical error when analyzing data.
“You follow people in New York and that’s a rat race in itself,” says David Schassler, a private investigator and expert in “matrimonial investigations” and “ethical hacking.” “In Manhattan during rush hour, these trackers can be two blocks off and throw you in the wrong direction.”
Schassler estimates that 70 percent of the calls he receives are inquiries into investigating an infidelity. “We’re bombarded with cases,” he says. “We got plenty of work.”
Once a couple has separated, tracking can take on a new dimension — avoidance rather than pursuit. An app called Split can track the proximity of an ex or any “avoidee,” said Udi Dagan, the app’s founder and chief executive. Inspired by his own as well as his friends’ awkward encounters with exes, employers and other undesirables, Dagan decided to aggregate social networking clues from publicly available information, such as Instagram photo tags or check-ins on Swarm or Facebook. The app can also warn if an “avoidee” is RSVP’d to the same event as a user.
“We turned the idea of connectivity on its nose,” Dagan said. “It’s about disconnecting instead of connecting. It’s anti-social networking.”
Thus technology, which can hasten a divorce, also can make an acrimonious split a little more comfortable. Faithful or not, we’re always married to data.
Even Your Battery Identifies You
By now, you probably know that using a smartphone in just about any way will send personal data across the Internet. Service carriers log text messages and details about calls. Third-party apps can access or upload identifying data. Weather- and map-based services track a user’s geographic location. It seems that even the most passive, inoffensive service on our phones can leak our information.
Battery-life indicators—tiny icons that usually hover at the top of a screen—show how much charge a device has left before it needs to be connected to a power source. Though useful, these indicators might not be as innocuous as we think, according to a team of four European cybersecurity researchers. The experts recently authored a paper titled “The Leaking Battery” that explains how websites can access a user’s online browsing activity just by monitoring his or her device’s battery status—which means that data can be taken not just from mobile phones, but also laptops. When browsers give battery information to websites, they expose a “fingerprintable surface that can be used to track web users in short time intervals,” the researchers write.
Why does this happen? Under current rules from the World Wide Web Consortium, the organization that sets global Web standards, sites are allowed to get details on a user’s battery status in order to help save energy. Upon detecting low battery, sites can turn off power-sucking features and display an energy-saving page instead. The consortium permits sites to retrieve these details without asking permission because the feature was deemed to have a “minimal impact” on privacy. But information about a phone or laptop’s battery life can be oddly specific—so much so that it can be used to identify one user from another. Here’s how that works, succinctly explained by the Guardian:
The researchers point out that the information a website receives is surprisingly specific, containing the estimated time in seconds that the battery will take to fully discharge, as well as the remaining battery capacity expressed as a percentage. Those two numbers, taken together, can be in any one of around 14 million combinations, meaning that they operate as a potential ID number. What’s more, those values only update around every 30 seconds, meaning that for half a minute, the battery status API can be used to identify users across websites.
For instance, if a user visits a website in Chrome’s private browsing mode using a VPN, the website should not be able to link them to a subsequent visit with private browsing and the VPN off. But the researchers warn that that may no longer work: “Users who try to revisit a website with a new identity may use browsers’ private mode or clear cookies and other client side identifiers. When consecutive visits are made within a short interval, the website can link users’ new and old identities by exploiting battery level and charge/discharge times. The website can then reinstantiate users’ cookies and other client side identifiers, a method known as respawning.”
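To make the linking step concrete, here is a minimal sketch in Python of the logic a tracking site could apply to battery readings once the browser has reported them (the readings themselves come from the client-side Battery Status API, i.e. navigator.getBattery(); the function and variable names below are illustrative, not taken from the paper):

import time

# Each visit reports a battery level and the estimated seconds to full
# discharge. Because that pair updates only about every 30 seconds, two
# visits inside the window with identical readings very likely come from
# the same device, even if cookies were cleared in between.
UPDATE_WINDOW = 30  # seconds

recent_visits = []  # (timestamp, level, discharging_time, visitor_id)

def link_visit(level, discharging_time, visitor_id, now=None):
    """Return the id of an earlier visit with the same battery reading
    inside the update window, or None if there is no match."""
    now = time.time() if now is None else now
    # Drop readings that are too old to be linkable.
    recent_visits[:] = [v for v in recent_visits if now - v[0] <= UPDATE_WINDOW]
    for _, lvl, dis, vid in recent_visits:
        if lvl == level and dis == discharging_time:
            return vid  # same reading pair: treat as the same device
    recent_visits.append((now, level, discharging_time, visitor_id))
    return None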
The possibility of this kind of microlevel tracking might not be surprising to jaded consumers who are used to hearing about all sorts of Web-based data breaches these days. Still, it’s certainly unnerving. And the researchers’ report comes on the heels of many other recent revelations about unexpected identification.
Here’s one example: Web users can be recognized from just the way they type on a keyboard, even if they use an identity-shielding service like Tor. And another: Browser size and quality can also be used to pick people out. The unconventional ways in which we can be recognized, recorded, and tracked by our gadgets are stacking up—and, with them, likely a whole new set of privacy battles.
So You’ve Been Publicly Shamed
As Jon Ronson’s recent book, So You’ve Been Publicly Shamed, makes clear, these days we — and our children — are all a hair’s breadth away from being publicly shamed. It could be for the way we look, or for something we’ve said, or for inadvertently committing some kind of heinous thought crime, or for a joke misfiring, or just for not fitting in with the orthodoxy. And that’s before we even start on revenge porn.
Cameras When You Use A Credit Card
Tiny cameras embedded in chip-and-PIN terminals could be used to snap pictures of shoppers and catch card thieves.
Facial recognition technology that verifies the identity of card owners is being tested by Worldpay, a payments processing company.
The upward-facing camera would take a picture of a shopper’s face as they entered their PIN on the terminal’s keypad. The picture then would be compared with an image of the card owner held on a secure Worldpay database. If there was no match, the shopper would be asked to provide additional identification before the transaction could proceed.
The researchers believe that the PIN entry device (PED) camera would help to combat card fraud. They emphasised, however, that the technology was in a “concept phase”, with controlled trials taking place.
Nick Telford-Reed, director of technology innovation at Worldpay, said that the system could automatically enrol shoppers. “People don’t want the admin hassle of registering their details,” he said. “With this prototype, we would remove that hassle. The design also means retailers would not have to find space for another device on their already busy sales counters.”
Photographs of shoppers would not be stored on its database. Instead, the images would be converted into “unique biometric templates”. Each image taken would create a new template on Worldpay’s secure database, building a profile of that person as their facial features changed over time.
Worldpay, which processes payments for nearly half of Britain’s high street businesses, is also exploring the use of PED cameras to verify card users’ identities online.
Cameras Everywhere
SHORTLY after the US Supreme Court’s blockbuster decision on marriage equality, a short YouTube video made the rounds online. In it, a gay couple, recorded by a friend, wait in a Kentucky county office for a marriage licence.
But halfway through, the friend’s camera raises the ire of a nearby stranger, who whips out a phone and starts recording her own video. An amused bystander takes his out, too, panning back and forth between the duelling cameras. Three people angrily filming each other, one internet commenter noted drily: “That’s like the summary of the society we’re living in.”
It has indeed been a bonanza year for cameras: on-body ones that monitor police actions; free live-streaming apps to beam anyone halfway around the world; and high-tech surveillance devices that remember and recognise your face. Slowly, more and more of our world is captured on camera, raising new questions about what it means to live in front of the lens.
On-body cameras in particular have been received warmly in many cities, and are often seen as a gesture of good faith between police departments and the public. The devices are small and can unobtrusively record interactions from the wearer’s point of view, automatically uploading clips to an evidence room in the cloud. Last month, the mayor of London kicked off a plan to deploy 20,000 body-worn cameras with the Metropolitan Police. And earlier this year, the US government earmarked millions of dollars to buy them for police departments around the country, in response to public dismay over the deaths of black men at the hands of white officers.
It’s not just the police getting cameras. Miami Beach, Florida, plans to give them to inspectors in their parking, fire and building departments. Cities in Texas have purchased them for animal control as well as for fire marshals and beach patrols. Two weeks ago, a school district in Iowa announced that its principals and assistant principals would wear small clip-on cameras during the coming school year to record their interactions with teachers and students.
There are options for general citizens, too. Vievu, a start-up in Seattle, Washington, now markets a consumer version of its police camera for professionals – such as those who carry out repairs on a home while the owner is away – who want video evidence to protect themselves from liability.
Cameras can help keep their wearer safe, says Mike Jones, a highway manager in Denbighshire, UK, where civil enforcement officers were equipped with on-body cameras this spring. The county is experimenting with giving cameras to bailiffs and to officers focused on environmental crimes such as littering. Some officers were initially sceptical, but soon saw the benefits.
“When somebody started to become aggressive, as soon as they were told that they were being filmed or they saw the camera, it did result in a change in behaviour,” says Jones. “It’s an added layer of security.”
Studies suggest that people behave better when they’re being taped. A year-long pilot scheme with on-body cameras at the police department in Rialto, California, found that officers who wore them used force 60 per cent less often than those without them. Citizen complaints also dropped by 88 per cent.
The mere feeling of being watched may even be enough to keep people in line. In one study, researchers at Newcastle University, UK, left drinks in a department lounge along with a sign asking people to pay via an unattended “honesty box”. Sometimes the sign was accompanied by a picture of flowers, and at other times by a picture of a face staring directly at the observer. On average, people who saw the face paid out more than twice as much as those who saw flowers.
Such research suggests that a world with more cameras monitoring us might be, in some ways, more pleasant to live in. But as the technology spreads, particularly outside law enforcement, there are also more chances for people to use it unwisely, says Alvaro Bedoya at Georgetown University Law Center in Washington DC. Rules on how police should use on-body cameras – such as when to film and how long to store videos – have been hotly discussed. It’s not clear what will happen if they are used by people who are less tightly bound by guidelines.
“It’s very easy to go overboard and deploy body cameras too aggressively, in a manner that’s detrimental to the community that it’s supposed to help,” says Bedoya. For example, someone might set their camera up to upload footage directly to the internet.
What if we end up living in a world without strict rules on cameras? What if, wherever we go, we know we might be caught on film and the images shared with strangers? It could have the unexpected effect of making our society more tolerant, says Judith Donath at Harvard University’s Berkman Center for Internet and Society. If more of us have embarrassing footage floating around on the internet, then perhaps we’ll be forgiving of others who have it too.
And even mundane everyday videos might make us appreciate others more. In one study published in May, researchers at Harvard Business School and University College London placed cameras around a university dining hall. Customers could watch a live feed of chefs at work in the kitchen, while chefs could see them waiting outside for their meals. Reports of food satisfaction rose by about 22 per cent.
“Tolerance is going to become a far more important characteristic of our society, unless we want to live in a world that’s more paranoia-inducing,” says Donath.
MAC Addresses and Wardriving
When it comes to sniffing out unsecured Wi-Fi networks, you can take your pick of vehicle to drive around: we've had warbiking, feline warprowling (with bonus mouse catching!), and warstrolling (with high heels packing Wi-Fi hacking tools, no less!).
Now, a US cop has reverted to the plain old vanilla mode of wardriving in a car, but he's not looking for hotspots or routers that lack passwords. Nor is he sniffing out routers using the creaky, old, easily cracked WEP encryption protocol. Rather, Iowa City police officer David Schwindt is stalking stolen gadgets. Specifically, he's cooked up some software and rigged up a thumb-drive-sized antenna that plugs into the USB port of his squad car laptop to sniff out media access control (MAC) addresses and match them against a database of known stolen items.
MAC addresses are often called a burned-in address (BIA), an ethernet hardware address (EHA), or simply a “physical” address, because they are stamped into your network card by the manufacturer, using an address block assigned to it by the IEEE. They're sort-of unique identification numbers that act like a device's digital fingerprint. Researchers have confirmed they also link to your real identity, and, according to Edward Snowden, the National Security Agency (NSA) has a system that tracks the movements of everyone in a city by monitoring the MAC addresses of their electronic devices.
Schwindt says his software product, which he's calling L8NT - that's a leet-speak/acronym hybrid that stands for latent analysis of 802.11 network traffic - won’t be used to find the occasional stolen iPod or laptop.
Neither will the tool give police access to personal or private information included in MAC packets, he told The Gazette.
Rather, he has his eye on bigger cases: “If your cellphone is stolen from a bar ... that’s not necessarily what L8NT is intended for. But, if your home is burglarized and your cellphone is stolen, now, as a police chief, I’m interested [in that technology].”
The device - which has a range of about 300 feet - scans for MAC addresses, looking for matches to known stolen items. The L8NT can also be attached to a directional antenna to allow police to determine where the signal is coming from and to obtain a warrant.
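The matching step itself is straightforward. The L8NT software is proprietary, but a toy sketch of the idea in Python might look like the following, assuming MAC addresses are being captured from nearby Wi-Fi traffic as in the Snoopy example earlier; the watchlist entries and locations are purely illustrative.

from datetime import datetime

# Hypothetical watchlist of MAC addresses reported stolen (made-up values).
STOLEN_MACS = {
    "a4:5e:60:12:34:56": "laptop, case 2015-0417",
    "3c:2e:ff:ab:cd:ef": "phone, case 2015-0532",
}

def check_sighting(mac, location):
    """Log a hit when an observed MAC matches the stolen-device watchlist."""
    mac = mac.lower()
    if mac in STOLEN_MACS:
        print(f"{datetime.now().isoformat()} HIT {mac} "
              f"({STOLEN_MACS[mac]}) seen near {location}")

# Example: feed in sightings from a sniffer running in the squad car.
check_sighting("A4:5E:60:12:34:56", "patrol waypoint 12")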
However, the device does not work in all circumstances.
If you walk around with Wi-Fi enabled on your phone, it will broadcast its MAC address indiscriminately and, unlike an IP address which changes over time or when you switch networks, a MAC address is constant for the lifetime of a device (though it can be spoofed, either for legitimate purposes or by a thief who wants to hide it). But if a device is powered down, or if Wi-Fi has been disabled, the L8NT won't be able to sniff it out. Nor will it do much good if legitimate device owners haven't bothered to record the MAC addresses of their devices.
Then again, it might also prove useless in the case of Apple's iOS 8 devices. Apple introduced a random MAC address generator in iOS 8 last year, in an effort to help users fend off marketers' ability to recognize their devices and thereby ID them at will.
That randomisation isn't constant, mind you: As Paul Ducklin noted at the time, randomisation only happens before you connect, when your Wi-Fi card is scanning for networks. When your iGadget finds an access point with a name that matches one of your known networks, it tries to connect by using your real, rather than your random, MAC address. So the coffee shop you visit regularly won't have any trouble recognising you, though a shopping mall you merely walk through won't be able to ID you.
But while there are cases where the officer's L8NT won't work, Schwindt still has big plans: he's developed a proof of concept, has a provisional patent on the device, and plans to apply for a full patent this fall. In the meantime, he's sent out surveys to law enforcement agencies to test the waters and see if they might be interested.
Iris Tracking At A Distance
An iris scanner that can identify an individual from up to 16 paces has been developed by researchers.
The range of the scanner is significantly longer than those currently in use, giving it the potential to identify individuals without their knowledge.
Experts in law enforcement technology believe that iris scans will replace fingerprints as the best means of identifying people. Long-range iris scanners could be embedded in CCTV cameras to automatically pick out suspected terrorists in crowds or be used by roadside police to identify a suspect who merely glances in their wing mirror.
Scanners in use at British airports require a target to stand very still a matter of inches away. Those used by British and American soldiers to identify terrorist suspects in Afghanistan have similar limitations and require the cooperation of those being scanned.
However, a system developed in the United States by the CyLab Biometrics Centre at Carnegie Mellon University in Pittsburgh has lengthened the range of an iris scan to 40 feet.
The system detects a target’s face, locates their eyes and captures digital images of their irises. It then extracts information about certain iris features and runs them through a database. If the result matches a database entry, further information about the target, such as a photograph, is provided.
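The first two stages of that pipeline (find the face, then the eyes) can be illustrated with off-the-shelf tools; the sketch below uses OpenCV’s bundled Haar cascades in Python. It is only a toy front end: the CMU system adds long-range optics, iris segmentation and proprietary feature matching, none of which is shown here, and the input image path is an assumption.

import cv2

# Locate faces, then candidate eye regions within each face, using the Haar
# cascade files that ship with OpenCV. A real iris system would go on to
# segment the iris from each eye crop and encode it for database matching.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def locate_eye_regions(image_path):
    img = cv2.imread(image_path)
    if img is None:
        return []
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    eye_crops = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi):
            eye_crops.append(face_roi[ey:ey + eh, ex:ex + ew])
    return eye_crops

print(f"Found {len(locate_eye_regions('subject.jpg'))} candidate eye regions")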
In a test earlier this year the device was able to identify an individual by scanning their reflection in a wing mirror. “Imagine I just got pulled over by a police officer,” Marios Savvides, director of the CyLab Biometrics Centre, said.
“What’s the first thing you do when you get pulled over? You naturally look at your side view mirrors to see what’s going on. Even if I grew a beard and looked completely different, my eyes are going to be exactly the same. That’s the power of iris recognition.”
Dr Savvides said there were “a lot of potential applications for saving lives”, such as identifying murderers on the run. He denied that the technology would be abused by the state to monitor innocent citizens.
“If someone really wanted to know what you were doing every moment of the day, they don’t need facial recognition or iris recognition to do that,” he told The Atlantic magazine. “People are being tracked — their purchasing, their habits, where they are every day — through credit card transactions, through advantage cards,” he said.
The CyLab Biometrics Centre is working with the US Department of Defence to enhance its iris-scanning technology to allow better detection of moving targets up to 43 feet away. The centre is also working on a project sponsored by Sandia National Laboratories, the US government research laboratory, to create a rifle scope that can scan irises and track faces from afar.
Other research projects include an algorithm that improves the ability of a camera to identify a large number of individual faces in a crowd. Coupled with the long-range iris-scanning system this could create a powerful monitoring system.
Your Own Personal Cloud
Like Pigpen's dusty haze or Olaf's personal flurry, each of us is surrounded by our own personal cloud—of invisible microbes. Now, it appears that this unique bacterial signature can be used to identify individuals even after they've left the room.
In recent years scientists have built up all kinds of data about the trillions of microbes that live in and on our bodies and help govern our health, often dubbed the microbiome. They've also known that the human microbiome emits millions of airborne bacteria. But University of Oregon researchers decided to explore the extent to which such microbial clouds are detectable and whether they might carry important information about someone's unique microbial ecosystem.
“From this experiment we've learned that based on the air sampling in a room, you can tell when it's been occupied by a person because of the microbial signature of the air,” explains co-author Adam Altrichter. “And more importantly, we can tell that individuals are unique when they've been in a room, unique in the amount of bacteria they produce and the distinct organisms that they shed.”
We create such clouds in various ways. Microbes like Streptococcus are emitted in our breath, Altrichter notes, while others like Propionibacterium come off our skin. “There's even some indication that members of your gut microbiome could actually make it into the air surrounding you,” he adds. “We're talking about very small organisms, and clothing is not an impermeable barrier.”
Each of 11 volunteers spent up to four hours alone in a room during two sets of experiments. The unique combination of bacteria eight of them produced could be readily distinguished from the microbes of others, enabling the scientists to identify them as individuals based on just their microbial clouds. Other volunteers left clouds that made it clear a person had been present, but not which individual.
The microbial clouds we create have some intriguing potential for future applications. In forensics, for example, investigators might be able to use a cloud like fingerprints to identify where a person has been. “If a person walks into a room, and you're sampling that air afterwards, can you understand who was there based on the bacteria that they are shedding?” Altrichter asks.
Understanding more about how we release our individual microbiomes could also have implications for learning how some diseases spread from person to person—or even helping to fight them. “Thinking about how the microbiome occupies an environment and how that might lead to competition for resources that might make pathogenic strains have a harder time colonizing an area is one way this might prove helpful,” Altrichter says.
However, there's a long way to go before people's clouds can be identified in real-world settings. The tests, published on September 22 in PeerJ, were done in a highly artificial environment: a small room whose temperature and airflow were controlled, with surfaces wiped down to reduce background bacteria that could confuse the cloud signatures.
Volunteers were surrounded with air filters to collect the particles emitted into the space around them. Petri dishes were also deployed to collect surface biological particles that settled out of the air. Identifying human-made clouds in more complicated environments will be difficult.
“Translating this to someone sitting in an office or a patient in a hospital, where there's going to be a lot of background, will take a lot more,” Altrichter says. “But hopefully as technology progresses and we are able to reduce the lowest detection limits, we can maybe push the envelope a little bit and start to capture a personalized signature in a more realistic environment.”
Another avenue for further research is figuring out why some people's clouds were more distinguishable than others. The team doesn't have definite answers as yet.
“Some individuals may just have a microbial cloud that's very unique to them, where a more generalized microbiome might be harder to distinguish from those of other individuals,” he notes. The rate at which people shed microbes is another factor, and that may vary even in the same person depending on health, diet or simply the time elapsed since their last shower.
(Fark comment: How do people think dogs follow a scent? People leave a trail of dead skin and other microparticles behind, and microbes live on those. That the specific mix of microbes differs for each person is not really surprising. Thanks to NotARocketScientist)
Can You Hide?
Hunted
This is an inspired idea for a six-part series. For better or worse, we live in a society where surveillance is widespread and privacy a quaint concept that disappeared in the 20th century. Britain has one CCTV camera for every 11 people. The Highways Agency operates an automatic number plate recognition (ANPR) system with about 8,000 cameras. Thirty-five million people have a smartphone, most of which transmit an exact GPS signal. The number of databases on which the average person has their data held has risen from 70 in 2004 to 700 last year.
To see whether it is possible to disappear off the grid in this age of surveillance, 14 people — alone or in pairs — were challenged to evade capture for 28 days. Where they went and what they did were entirely up to them, but they weren’t allowed to leave the country. They would be hunted by Brett Lovegrove, a former head of counter-terrorism for the City of London police, and his team of 30 specialists, including intelligence and security personnel and cyber-intelligence experts. The hunters could use the same methods of surveillance and tracking employed by the state wherever legally possible, and permission was granted to monitor bank records, search the fugitives’ homes and interview their friends and family.
My guess is that no one will evade capture.
CCTV operator spots drunk driver at pub 20 miles away
A racehorse trainer was caught drink-driving after being spotted stumbling out of a pub by a CCTV camera operator who was more than 20 miles away.
Oliver Costello, an assistant trainer with Godolphin, the yard owned by the ruler of Dubai, was stopped by police within minutes of getting behind the wheel and found to be more than twice the legal drink-drive limit.
The camera operator, in Bury St Edmunds, saw Costello as he left the Yard pub near the National Horse Racing Museum in Newmarket in the early hours of December 1. The trainer, 31, admitted driving with excess alcohol when he appeared at West Suffolk magistrates’ court on Wednesday. He was banned from driving. Costello, from Moulton Paddocks, near Newmarket, was arrested at 1.15am and a breath test showed he had 82 micrograms of alcohol in 100 millilitres of breath. The legal limit is 35 micrograms. In addition to the ban, he was fined £645 and made to pay court costs.
West Suffolk council, which operates a network of CCTV cameras, has recently upgraded the system to enable operators to record incidents in much greater detail. A spokesman said: “The operator on duty noticed that the pub was still open early in the morning and there were still a lot of cars present in the car park.
“A man was then seen staggering from the pub towards his car. At this point the operator informed police HQ that there was a suspect drink-driver. The man then started to drive his vehicle while being monitored by CCTV. Police stopped the vehicle and breathalysed the man.”
Suffolk police said: “If the council camera operators see something they are concerned could constitute a criminal offence they contact us.” In January a suspected drink-driver was arrested after he was spotted driving erratically by a CCTV operator employed by Enfield council in north London. Police were alerted when a black car was seen veering dangerously across the road. The driver was stopped, and arrested after failing a breath test.
Between September and December seven drink-drivers were caught with the help of the council’s CCTV system: four spotted by an operator, two reported by Shopwatch and one called in by police and traced with the help of CCTV.
Shops Monitoring You
It is a secret pleasure enjoyed by millions: wandering around shops coveting things we can’t really afford.
Now we can no longer rely on being anonymous, thanks to technology that allows stores to track our movements via our mobile phones or even identify us by scanning our faces.
The Information Commissioner’s Office has issued a warning that it may need to take action to protect the privacy of shoppers being monitored without their permission.
The data protection watchdog says that some companies may already have used “wi-fi location tracking” without telling their customers.
Simon Rice, its technology manager, yesterday published a blog entitled: “How shops can use your phone to track your every move and video display screens can target you using facial recognition.” He wrote: “Picture the scene: you’re in a department store and decide to go back and try that pair of trousers on for a second time. How would you feel if the price had changed or a display lit up with a three-for-two offer? What you may not realise is that technology has been developed which could allow the store to track your shopping movements using the wi-fi on your mobile phone.”
He said that the technology was already being introduced by shops seeking to build up a picture of how people typically used their stores. It could also be used to track passengers at airports and railway stations.
The technology identifies a smartphone’s MAC address, a unique hardware identifier that the phone broadcasts whenever its Wi-Fi is switched on and searching for networks, and that address can be linked to a specific individual.
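As a rough, hypothetical illustration of why a MAC address is enough to build a movement profile (this is not any vendor's actual product), a store's analytics back end only needs a stream of timestamped sightings from its Wi-Fi sensors. Hashing the address, as in the sketch below, pseudonymises it but still leaves a stable identifier, so dwell times and repeat visits remain linkable; all observations shown are invented.

```python
# Hypothetical sketch of in-store Wi-Fi analytics, not any real vendor's system.
# Input: (timestamp, MAC address, sensor zone) records from phones' Wi-Fi probe
# requests. Hashing the MAC pseudonymises it, but the hash is still a stable
# identifier, so repeat visits remain linkable.

import hashlib
from collections import defaultdict

def pseudonym(mac: str) -> str:
    return hashlib.sha256(mac.lower().encode()).hexdigest()[:12]

# Invented example observations: (unix_time, mac, zone)
observations = [
    (1_700_000_000, "AA:BB:CC:11:22:33", "entrance"),
    (1_700_000_420, "AA:BB:CC:11:22:33", "menswear"),
    (1_700_086_900, "AA:BB:CC:11:22:33", "entrance"),  # back the next day
]

tracks = defaultdict(list)
for ts, mac, zone in observations:
    tracks[pseudonym(mac)].append((ts, zone))

for visitor, path in tracks.items():
    path.sort()
    print(visitor, "->", [zone for _, zone in path])
```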
Mr Rice is concerned about the use of facial recognition technology, which has been used by advertisers to determine the gender of customers as they pass by and tailor adverts on nearby screens for either men or women.
“Even if the identification of individuals is not the intended purpose, the implications of intelligent video analytics for privacy, data protection and other human rights are still significant. For example, the technology could be used to play recorded messages to reprimand litter-dropping or illegal parking. One of the key implications of video analytics is that individuals have the right to know who is collecting what data about them and for what purposes.” A spokeswoman for the watchdog was unable to give actual examples of misuse of the technology but said: “The ability is there.”
A Private Co Has 2 Billion Licence Plates Stored
Throughout the United States—outside private houses, apartment complexes, shopping centers, and businesses with large employee parking lots—a private corporation, Vigilant Solutions, is taking photos of cars and trucks with its vast network of unobtrusive cameras. It retains location data on each of those pictures, and sells it.
It’s happening right now in nearly every major American city.
The company has taken roughly 2.2 billion license-plate photos to date. Each month, it captures and permanently stores about 80 million additional geotagged images. It may well have photographed your license plate. As a result, your whereabouts at given moments in the past are permanently stored. Vigilant Solutions profits by selling access to this data (and tries to safeguard it against hackers). Your diminished privacy is its product. And the police are its customers.
The company counts 3,000 law-enforcement agencies among its clients. Thirty thousand police officers have access to its database. Do your local cops participate?
If you’re not sure, that’s typical.
To install a GPS tracking device on your car, your local police department must present a judge with a rationale that meets a Fourth Amendment test and obtain a warrant. But if it wants to query a database to see years of data on where your car was photographed at specific times, it doesn’t need a warrant––just a willingness to send some of your tax dollars to Vigilant Solutions, which insists that license plate readers are “unlike GPS devices, RFID, or other technologies that may be used to track.” Its website states that “LPR is not ubiquitous, and only captures point in time information. And the point in time information is on a vehicle, not an individual.”
But thanks to Vigilant, its competitors, and license-plate readers used by police departments themselves, the technology is becoming increasingly ubiquitous over time. And Supreme Court jurisprudence on GPS tracking suggests that repeatedly collecting data “at a moment in time” until you’ve built a police database of 2.2 billion such moments is akin to building a mosaic of information so complete and intrusive that it may violate the Constitutional rights of those subject to it.
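To see how "point in time" captures add up to a mosaic, consider how little work it takes to turn a pile of individual plate reads into a movement history. The records and schema below are invented for illustration and are not Vigilant's.

```python
# Illustrative only: invented records, not Vigilant's actual schema or data.
# Each license-plate read is a single "point in time", but grouping reads by
# plate and sorting by timestamp yields a movement history for that vehicle.

from collections import defaultdict

sightings = [
    ("2015-06-01 08:02", "ABC1234", "office parking lot"),
    ("2015-06-01 18:45", "ABC1234", "addiction counseling center"),
    ("2015-06-03 09:10", "ABC1234", "political rally staging area"),
]

history = defaultdict(list)
for timestamp, plate, location in sightings:
    history[plate].append((timestamp, location))

for plate, visits in history.items():
    print(plate)
    for timestamp, location in sorted(visits):
        print(" ", timestamp, "-", location)
```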
The company dismisses the notion that advancing technology changes the privacy calculus in kind, not just degree. An executive told The Washington Post that its approach “basically replaces an old analog function—your eyeballs,” adding, “It’s the same thing as a guy holding his head out the window, looking down the block, and writing license-plate numbers down and comparing them against a list. The technology just makes things better and more productive.” By this logic, Big Brother’s network of cameras and listening devices in 1984 was merely replacing the old analog technologies of eyes and ears in a more efficient manner, and was really no different from sending around a team of alert humans.
The vast scale of Vigilant’s operations is detailed in documents obtained through public-records laws by the New York Civil Liberties Union. “Last year, we learned that the NYPD was hoping to enter into a multi-year contract that would give it access to the nationwide database of license plate reader data,” the civil-liberties group announced Monday in a blog post linking to the document. “Now, through a Freedom of Information Law request, the NYCLU has obtained the final version of the $442,500 contract and the scope-of-work proposal that gives a peek into the ever-widening world of surveillance made possible by Vigilant.”
The NYPD has its own license plate tracking program. It nevertheless wanted access to the Vigilant Solutions database as well, “which means,” the NYCLU notes, “the NYPD can now monitor your car whether you live in New York or Miami or Chicago or Los Angeles.” The NYPD has a long history of spying on Muslim Americans far outside its jurisdiction. And both license-plate readers and the information derived from them have already been misused in other jurisdictions.
More abuses seem inevitable as additional communities adopt the technology (some with an attitude expressed with admirable frankness by an official in a small Florida city: “We want to make it impossible for you to enter Riviera Beach without being detected.”)
Washington is accelerating the spread of the technology.
“During the past five years, the U.S. Department of Homeland Security has distributed more than $50 million in federal grants to law-enforcement agencies—ranging from sprawling Los Angeles to little Crisp County, Georgia, population 23,000—for automated license-plate recognition systems,” the Wall Street Journal reports. As one critic, California state Senator Joe Simitian, asked: “Should a cop who thinks you're cute have access to your daily movements for the past 10 years without your knowledge or consent? I think the answer to that question should be ‘no.’”
The technology forms part of a larger policing trend toward infringing on the privacy of ordinary citizens. “The rise of license-plate tracking is a case study in how storing and studying people's everyday activities, even the seemingly mundane, has become the default rather than the exception,” The Wall Street Journal explains. “Cellphone-location data, online searches, credit-card purchases, social-network comments and more are gathered, mixed-and-matched, and stored. Data about a typical American is collected in more than 20 different ways during everyday activities, according to a Wall Street Journal analysis. Fifteen years ago, more than half of these surveillance tools were unavailable or not in widespread use.”
Nor are police the only ones buying this data.
Vigilant Solutions is a subsidiary of a company called Digital Recognition Network.
Its website declares:
All roads lead to revenue with DRN’s license plate recognition technology. Fortune 1000 financial institutions rely on DRN solutions to drive decisions about loan origination, servicing, and collections. Insurance providers turn DRN’s solutions and data into insights to mitigate risk and investigate fraud. And, our vehicle location data transforms automotive recovery processes, substantially increasing portfolio returns.
And its general counsel insists that “everyone has a First Amendment right to take these photographs and disseminate this information.” But as the ACLU points out:
A 2011 report by the International Association of Chiefs of Police noted that individuals may become “more cautious in the exercise of their protected rights of expression, protest, association, and political participation” due to license plate readers. It continues: “Recording driving habits could implicate First Amendment concerns. Specifically, LPR systems have the ability to record vehicles’ attendance at locations or events that, although lawful and public, may be considered private. For example, mobile LPR units could read and collect the license plate numbers of vehicles parked at addiction counseling meetings, doctors’ offices, health clinics, or even staging areas for political protests.”
Many powerful interests are aligned in wanting to know where the cars of individuals are parked. Unable to legally install tracking devices themselves, they pay for the next best alternative—and it’s gradually becoming a functional equivalent. More laws might be passed to stymie this trend if more Americans knew that private corporations and police agencies conspire to keep records of their whereabouts.
Cell Site Spoofing
In their elusive cat-and-mouse pursuit of criminals, police officers are increasingly turning to a little-known secret weapon: Stingray, aka triggerfish, an IMSI-catcher or cell-site simulator. No, these aren’t code names for under-the-sea, RoboCop-like patrollers. They’re actually surveillance devices that mimic cell phone towers — only they emit a stronger signal — and trick phones into connecting and revealing their location. Some liken these high-tech trackers to the kids’ swimming pool game Marco Polo, where a cell-site simulator triggers “Marco” and nearby phones respond with “Polo” — along with their unique device ID.
But you probably don’t know the half of it. Some of these trackers, which are roughly the size of a backpack, are also capable of recording the numbers of a cell phone’s incoming and outgoing calls, as well as what is said or written in calls or texts. And they’re increasingly being used in all kinds of investigations, and their reach isn’t confined to public spaces. In fact, Nathan Wessler, staff attorney for the American Civil Liberties Union, says Stingray signals can pass through the walls of people’s homes (and other constitutionally protected areas), pinpointing exactly where a phone is within a building. “They could basically serve as a wiretap,” warns Brian Owsley, an assistant professor of law at UNT Dallas College of Law.
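A purely conceptual toy model of the "Marco Polo" exchange described above helps show why bystanders get swept up: handsets attach to whichever "tower" advertises the strongest signal, so a simulator that out-shouts the genuine towers collects identifiers from every phone in range, suspect or not. No radio protocol is implemented here, and the identifiers are made up.

```python
# Toy model of the "Marco Polo" exchange described above: conceptual only,
# with made-up identifiers and no actual radio protocol.

class Tower:
    def __init__(self, name, signal_strength, is_simulator=False):
        self.name = name
        self.signal_strength = signal_strength
        self.is_simulator = is_simulator
        self.collected_ids = []

class Phone:
    def __init__(self, device_id):
        self.device_id = device_id  # stands in for an IMSI

    def attach(self, towers):
        # Phones attach to the strongest signal they can hear ("Marco").
        strongest = max(towers, key=lambda t: t.signal_strength)
        strongest.collected_ids.append(self.device_id)  # "Polo" plus device ID

towers = [Tower("carrier_tower", 60), Tower("cell_site_simulator", 90, is_simulator=True)]
for phone in [Phone("device-001"), Phone("device-002"), Phone("device-003")]:
    phone.attach(towers)

print(towers[1].collected_ids)  # every nearby handset's ID, suspect or not
```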
Police departments argue the device can be a useful tool for locating possible suspects. Yet only a few states require a warrant for such surveillance, and critics fear they violate the Fourth Amendment and sometimes get used by agencies that have no business deploying this tech. So far, at least 58 agencies spread across 23 states and the District of Columbia have purchased Stingray surveillance, according to the ACLU. And while experts say some of these agencies include the usual suspects — the National Security Agency, the FBI and the Drug Enforcement Administration — others, including the Internal Revenue Service at one point, may be more surprising.
Of course, Stingray technology isn’t the only type of surveillance being used today. Government-owned cameras, automatic license plate readers, telephone record databases and information sharing about citizens’ DNA are all part of a movement that’s sparking a national conversation about when enough is enough. But when it comes to Stingray specifically, there’s the issue of what kind of information gets gathered and how the data is stored. Adam Schwartz, senior lawyer at the Electronic Frontier Foundation, a digital rights group, says the data can place people on a map, within mere meters, and identify whether someone is, say, near a therapist’s office or meeting with a criminal defense lawyer. “If the government wants to build a database of location information about where innocent people are located,” he says, “well, that would be very disturbing.” He and other experts agree that, for the most part, the use of such data remains unknown.
Now some politicians and activists are trying to better protect the privacy of ordinary citizens. For instance, Rep. Jason Chaffetz (R-Utah) introduced a new bill in November that would force law enforcement officials to obtain Stingray warrants and make it illegal for the tech to be deployed without them. “If you are a law-abiding citizen, the federal government should not be able to track your movements,” Chaffetz recently wrote in a statement to OZY. “Clear guidelines that carry the weight of the law are needed to protect the privacy of innocent Americans.” Separately, 28-year-old activist Freddy Martinez won a significant intermediary step this January in a lawsuit against the Chicago Police Department; as a result, the police department is required to produce documents that disclose the use of cell-site simulators, which a judge is set to review in a closed-door hearing. “We can’t have an informed debate about what’s appropriate when it comes to government surveillance if we don’t even know what that surveillance is,” says Martinez’s attorney, Matthew Topic of Loevy & Loevy.
To be sure, some states, such as Washington and Virginia, do require a search warrant to use cell site simulators to identify a cellphone’s location or intercept incoming or outgoing calls and text messages. And following public outcry, the IRS recently announced that it would seek a probable-cause warrant. Meanwhile, in September, the Department of Justice issued a new policy for its own use of cell-site simulators designed to “enhance transparency and accountability,” and media reps for both the DEA and FBI said they also follow the DOJ’s policy. (The NSA didn’t respond to our request for comment.)
Yet given the growing threat and frequency of data breaches, privacy advocates say, the surveillance metadata of innocent citizens may risk falling into the hands of malicious cybercriminals. Ryan Satterfield, founder of the information security company Planet Zuda, explains that metadata gathered from Stingray devices can map out a lot, including where individuals have been and who they’ve been communicating with. The question some are asking is: Do agencies collecting Stingray surveillance data swiftly delete the metadata of innocent bystanders? The answer, however, remains a mystery.
StingRay 2
IN APRIL 2014, three men were shot when a drug deal turned sour on a tree-lined residential street in Baltimore. The city’s police department quickly linked the crime to Kerron Andrews, a dreadlocked 22-year-old, but could not find him at his registered address. Agents used phone records to determine roughly where he was, but instead of going door-to-door until they found him, they opted for something far more efficient: a Hailstorm. Using this, they tracked Mr Andrews directly to an acquaintance’s sofa, between the cushions of which he had stuffed the gun used in the shooting.
The Hailstorm is a more advanced version of the StingRay, a surveillance device that operates by mimicking a cellular tower, forcing all nearby mobile phones to reveal their unique identifying codes, known as IMSI numbers. By crosschecking the IMSI numbers of suspects’ phones with those collected by “cell-site simulators” such as Hailstorm and StingRay, police officers can pinpoint people with astonishing precision. The tools have been used to trail suspects to specific rooms in apartment blocks and to find them on moving buses on busy city streets. Developed at first for military and intelligence services, cell-site simulators are now furtively used by federal agencies such as the Federal Bureau of Investigation (FBI) and the Drug Enforcement Administration (DEA) as well as by local police forces across the land.
Law-enforcement agencies rarely seek explicit court approval to employ cell-site simulators, and rarely admit to using them after the fact. As a condition for purchasing them, state and local police forces must sign strict non-disclosure agreements with the FBI, since the more information is made public about cell-site simulators, the more “adversaries” will adapt to them. The agreements prohibit police from disclosing any information about the technology, even to judges in the form of warrant requests; prosecutors must drop cases if they are pressured to reveal details about them. In one case in Baltimore, a judge threatened to hold a detective in contempt after he refused to testify about the use of StingRay in locating an armed-robbery suspect. Instead of instructing the detective to answer, the prosecution dropped the evidence. In another armed-robbery trial in Tallahassee, prosecutors offered the defendants a generous plea deal rather than demonstrate how the device worked in open court.
Given this secrecy, it is impossible to identify all the agencies using cell-site simulators. The American Civil Liberties Union (ACLU) has counted 58 agencies that possess the devices, across 23 states and the District of Columbia, but thinks the true number may be much higher. The technology has been used not only to trace murderers and armed robbers, but also to nab car thieves, phone pilferers and, in one case, a woman who made a series of abusive phone calls. The city of Baltimore alone has admitted to using the technology 4,300 times between 2007 and 2015. “I have never seen a tool that is on one hand treated in such a cloak-and-dagger fashion, but on the other used as a bread and butter tool,” says Stephanie Pell of West Point’s Army Cyber Institute.
The ray and the net
Civil-liberties advocates claim the secrecy around cell-site simulators is unjustified. According to Christopher Soghoian, a technologist at the ACLU, the wiliest criminals already know to use disposable “burner” phones and, short of forgoing cellular communication altogether, there is no foolproof way for them to evade StingRays and Hailstorms. But their covert use raises worries about privacy, he says. Cell-site simulators often trace phones to pockets, purses, homes and other places protected by the Fourth Amendment’s prohibition on “unreasonable searches”. Moreover, they do not just gather information from the target’s phone, but also from other phones nearby. Brian Owsley, a law professor at Texas Tech University School of Law who, as a judge, rejected several federal requests to use cell-site simulators without a warrant, says there should be laws requiring data inadvertently gathered in this way to be deleted.
Lawmakers are starting to address such grievances. Washington state, California, Virginia, Minnesota and Utah have passed laws requiring their police forces to seek warrants before using cell-site simulators. Congressman Jason Chaffetz of Utah hopes to pass a bill that would do the same on a federal level, though he is unlikely to prevail in such a politically charged year.
Movement in the courts may come more quickly, however. After the Baltimore police department grudgingly confessed in a court hearing last summer that it had used a Hailstorm to locate Mr Andrews, the presiding judge suppressed all evidence related to the surveillance operation—including Mr Andrews’s gun. Without a warrant, she held, the police had breached his Fourth-Amendment rights. Civil-libertarians are optimistic about the precedent the case may set. The state of Maryland will appeal against the ruling on February 9th.
Your Shoes ID You
Visitors roaming Disney World may soon develop a strange sensation, similar to the one that struck Winston Smith one bright cold day in April in Nineteen Eighty-Four: that of mysterious eyes following every move.
The rides and attractions will know your name, where you come from and where you have been. They may also be able to compliment you on your shoes and tell you who else in film and television wore similar ones.
A system allowing visitors to be tracked by their shoes as they walk through the Magic Kingdom is outlined in a patent granted to a subsidiary of the Walt Disney Company. Discreet cameras and sensors, mounted beneath a knee-high shelf on park reception desks, will scan a customer’s feet, recording the colour, texture and dimensions of their shoes. As each customer gives their details to receptionists, the system will build a model of their shoe, which will be linked to their personal details and shared with rides and other attractions throughout Disney World.
Disney also imagines a robot performing the same job, scanning customers as they enter, or roaming the park and capturing images of the shoes of people who are already inside. Customers arriving at Space Mountain might then have their feet scanned by another set of cameras and sensors.
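The patent does not publish an algorithm, but the matching step it describes boils down to comparing a fresh scan's features (colour, size, texture) against the shoe models built at reception. The sketch below is a hypothetical illustration with invented feature values and an invented distance measure, not Disney's system.

```python
# Hypothetical illustration of the matching step described in the patent;
# the features, weights and records below are invented, not Disney's system.

guests = {
    "guest_42": {"colour": (20, 30, 200), "length_cm": 28.0, "texture_score": 0.7},
    "guest_77": {"colour": (200, 40, 30), "length_cm": 24.5, "texture_score": 0.2},
}

def shoe_distance(a, b):
    """Weighted difference between two shoe feature records (weights are arbitrary)."""
    colour_diff = sum(abs(x - y) for x, y in zip(a["colour"], b["colour"]))
    return colour_diff + 10 * abs(a["length_cm"] - b["length_cm"]) + 50 * abs(a["texture_score"] - b["texture_score"])

def match(scan):
    """Return the enrolled guest whose stored shoe model best matches a new scan."""
    return min(guests, key=lambda g: shoe_distance(scan, guests[g]))

scan_at_space_mountain = {"colour": (22, 28, 195), "length_cm": 28.1, "texture_score": 0.68}
print(match(scan_at_space_mountain))  # -> guest_42
```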
Disney said that the system would allow amusement park operators to study the routes guests took from attraction to attraction, and help them to “tailor certain experiences to the guest”. It added that scanning guests’ shoes was less invasive than fingerprint and retina scans.