Android is one of the most widely used operating systems in the world, and it's full of settings and options to help keep your personal data safe. But trying to use all of those tools effectively can sometimes feel a little confusing – and that's where the Help Desk comes in.
We started our Privacy Reset project with guides to help you understand how to protect your data on Facebook, Amazon, Google and Venmo, and now we’re expanding it to cover the settings you should change on your Android devices.
There’s one more thing you should know: This guide mainly deals with privacy controls specific to your Android phone or tablet. Because Google’s services are tied so deeply to the way Android works, you’ll also want to work through our guide to Google privacy settings. Don’t worry: Just like with this one, it shouldn’t take you more than 15 minutes to make the most important changes.
– – –
If you only do one thing:
– Audit your apps
What makes an Android smartphone or tablet truly yours is the apps you use on it, but it’s common for people to install apps without thinking about what those apps have access to.
When you first launch an app after you’ve installed it, it will ask for permission to access certain parts of your phone (like its camera or microphone) or personal data like your contacts or text messages. It’s a good idea to periodically make sure the apps on your phone or tablet have access only to the things they’re supposed to.
Go to Settings > Privacy > Permissions Manager; you’ll see a list of options ranging from body sensors to location to your contacts. Tap each option and make sure the “allowed” apps make sense – Uber should probably have access to your location, for instance, but something like a calculator app shouldn’t.
If you find any apps that you don’t use frequently, just uninstall them. Go to Settings > Apps > See all apps, then find the app you want to get rid of. Then, tap its name, followed by the “Uninstall” button.
– – –
If you’re still concerned about privacy:
– Rethink your unlocks
Android offers a handful of ways to unlock your phone, but some can be much more secure than others. If you don’t mind a brief wait before unlocking your phone, setting a password or using a fingerprint sensor is one of the easiest ways to make sure people can’t easily get into your phone.
If you’re a stickler for security, you may want to avoid unlock patterns and face recognition – some versions can be surprisingly easy to fool. Here’s how to change your unlock method:
Go to Settings > Security > Screen Lock and select a new screen lock type.
– Hide your sensitive notifications
Notifications from apps and incoming messages are meant to appear on your phone or tablet’s screen before you’ve even unlocked it. If you’re not careful, though, that could mean people around you are able to see snippets of text and emails you don’t want them to. Here’s how to make sure none of that potentially sensitive information shows up on your lock screen when you don’t want it to:
Go to Settings > Apps and Notifications > Notifications and make sure the “Sensitive notifications” toggle is turned off. (If done correctly, the toggle will appear gray.)
– – –
If you want to be extra cautious:
– Make sure your phone is encrypted
Most modern Android phones already come encrypted, which means the personal data stored on them can’t be easily accessed unless someone manages to unlock the device. But if you’re using an older Android phone or tablet, this feature might not be on yet. Here’s how to check:
Go to Settings > Security > Advanced > Encryption and Credentials. If everything is secure, you’ll see “Encrypted” under the “Encrypt phone” option and you won’t need to do anything else. If you don’t, you’ll be able to tap “Encrypt phone” and begin the process – don’t be surprised if it takes upward of an hour.
– Use Chrome wisely, or not at all
You’d be hard-pressed to find an Android phone sold in the United States that didn’t have the Chrome browser pre-installed, and that wouldn’t be such a bad thing if Google didn’t keep tabs on what you do online in order to serve you personalized ads. If at all possible, you might want to consider leaving Chrome behind in favor of more privacy-oriented browsers such as Brave, Firefox and DuckDuckGo.
If you have to keep using Chrome, though, there are a few ways to use it more safely. Launch the Chrome app and tap the three-dot menu button next to the address bar, then tap Settings, followed by Privacy and security. Once there, make sure the options “Do Not Track” and “Always use secure connections” are turned on.
SEOUL, South Korea – At an October conference hosted by the city of Seoul, the mayor was dressed in a patterned green suit jacket and a dark tie, his hair neatly combed.
But Oh Se-hoon was not really there. Instead he attended as his avatar, and the conference was held in the “metaverse,” a communal, virtual space, seen by many as the next frontier of the Internet, where users interact using avatars.
Seoul’s city officials are among them, as the city seeks to become one of the first municipal governments with a full-service virtual world. In “Metaverse Seoul,” according to plans, residents would be able to make reservations for city-run facilities, ride city tour buses, visit re-creations of destroyed historical sites, file administrative complaints with city bureaucrats and more. Residents would also be able to visit cultural heritage sites throughout the city by accessing the metaverse on their cellphones.
Metaverse Seoul begins this New Year’s Eve, when the traditional Bosingak bell-ringing ceremony will also be held on the platform for any residents who want to participate virtually.
Seoul’s metaverse plan aims to be completed by 2026 and could roll out in phases starting next year. It would first be available on smartphones. Eventually, augmented reality tools, such as goggles and controllers, may be used, officials said.
From Silicon Valley to India’s tech hubs, a future metaverse is envisioned as an online realm where personal avatars interact and participate in the same activities as people do in the physical world, including going to class, going shopping, going to work, watching TV or hanging out with friends.
Some versions of such a world already exist, most evidently through video games. But a truly integrated metaverse – where people can play, earn and spend money, and do other activities – is probably many years, if not decades, away.
Still, the hype around the metaverse is hard to ignore, punctuated by Facebook’s move to change its corporate name to Meta. In November, Iceland parodied Facebook’s announcement and the ever-growing metaverse curiosity. A fake tourism video that showed off Iceland – rebranded as “Icelandverse” – went viral.
In the most ambitious concept for the metaverse, for example, users who visit Metaverse Seoul could buy a souvenir with money they earned in the metaverse, and then bring the item along to other places they go to in the metaverse without switching devices.
Most metaverse plans until at least 2025 are considered “emergent,” including Seoul’s platform, said Adrian Lee, senior research director at Connecticut-based Gartner, a technology research firm that analyzes metaverse trends.
But the project has gained attention as a unique test case of how the emerging technology could apply to government functions. It is part of the newly elected Seoul mayor’s glitzy 10-year push to solidify the city as a global hub for emerging technology, and is estimated to cost nearly $34 million over five years.
Photos of Virtual Seoul, a new platform launched by the city government at a tourism event, show the augmented reality spaces that Seoul plans to create in its Metaverse Seoul platform. (Photo credit: Seoul city government)
City officials are hoping to draw on digital fluency in South Korea, which has a well-established video gaming culture and industry. The mayor is trying to sell a more vibrant future for his city, which is facing a declining population, social cleavages over gender and income inequality, and a deepening real estate crisis as prices soar.
During the pandemic, younger South Koreans have popularized the term “untact” – a spin on the word “contactless” – to describe many virtual events and services, including classes, festivals, concerts and customer service help.
“The fourth industrial revolution, and the explosion of the ‘untact’ culture during corona, demand a change in the way we deliver public service by building a Metaverse Seoul platform,” Oh said during a September announcement.
Already, some city programs and events are being held in a metaverse-like format, including the October conference Oh attended and a Seoul Museum of History event featuring the avatar of Kim Gu, the late hero of the national independence movement, who was celebrated in a posthumous virtual ceremony.
Beginning in 2023, Seoul’s major cultural festivals will also be held in the metaverse and open to virtual tourists from abroad, officials said.
The announcement has drawn mixed reactions from the South Korean public. While some have expressed intrigue, other Seoul residents have raised concerns about its cost and accessibility to older residents.
The metaverse is so new and undefined that it could also pose unforeseen privacy challenges, experts say.
That means city officials may only be able to address security concerns when there is a “high-profile breach of privacy and/or security, usually when there are tangible implications and impact” to those who have already used the platform, said Lee of Gartner.
Seoul government officials say they plan on providing security verification methods and will “minimize the collection and use of personal information,” including allowing the use of pseudonyms so people are not required to give their legal names.
Kim Sang-kyun, a professor of industrial engineering at Kangwon National University who studies the metaverse, said that while there are not many details yet available about the project, city planners should consider accessibility concerns for older residents, potential security breaches and potentially increasing costs.
“As a new communication tool, citizens will be able to easily connect with public information, new opportunities for civic engagement, and use various infrastructures provided by the city,” he said. “However, a new channel of communications can be a high barrier for those who are not as digitally savvy, so the city should consider that aspect in advance.”
TOKYO – The Japan Aerospace Exploration Agency will launch two lunar probes on a U.S. rocket as early as next February as part of the U.S.-led Artemis program, with Japan aiming to land an explorer on the surface of the moon for the first time.
Developed by JAXA and the University of Tokyo, the shoebox-sized Omotenashi and Equuleus probes will be launched into lunar orbit with NASA’s Orion spacecraft and nanosatellites from the U.S. and Italy.
Omotenashi will attempt to land on the moon using a rocket to control its descent. It will also measure radiation levels around the moon, collecting data that will be utilized for future manned missions.
It is hoped that universities and companies will be able to develop lunar landers at a low cost in the future.
Equuleus will take advantage of the gravity of the Earth, moon and sun to reach the moon with less fuel over a period of six months to a year. It will also observe meteorite impacts on the lunar surface as it approaches the moon.
Netizens primarily used Google to search for information relating to lotteries, while going on Facebook to follow information they were interested in, the Thai Media Fund said on Wednesday following a study.
Fund expert Chumnan Ngammaneeudom said the study, conducted between April and June, also found that netizens used both platforms to discuss the Covid-19 crisis.
“The study of netizens’ tweets showed most of them had negative comments about the Covid-19 crisis, including criticising the government’s ‘ineffective’ efforts to contain the spread of the virus,” he said.
“Apart from gaining insight into netizens’ interests, emotions and the manner of their comments, this study also reflects their attitude and behaviour,” Chumnan said.
Negative comments by netizens show many feel hopeless, study shows, as personalities say listen to their voices
He said further study is necessary to create a media ecology and improve the quality of society.
Bundit Centre CEO Poramet Minsiri said the study showed that many in society feel hopeless.
He said netizens who are interested in the lottery face inequality in society and the economy, while those interested in others’ lifestyles seek idols as a form of escape from their “imperfect” lives.
“Netizens can easily use hate speech on social media as they believe they can freely criticise others, such as people who appear on the news or influencers,” he said.
“Hence, I would like to ask related agencies to listen to netizens’ voices in order to effectively solve issues afflicting society,” Poramet added.
Society for Online News Providers president Rawee Tawantharong said many people received information from other sources apart from online platforms, such as television and radio.
“As people can now publicise information online, we should create literacy among them to ensure that they express themselves appropriately,” he said.
Like Bundit, he also asked government agencies to support the media in creating content that will help improve society.
While working from home is convenient and has many benefits, it also exposes both individuals and businesses to a range of cybersecurity risks.
From January to June 2021, Kaspersky researchers recorded a 36.12 percent increase in brute-force attacks on Remote Desktop Protocol (RDP) in Southeast Asia (SEA) compared with the same period last year. The finding reflects how attackers are focusing their efforts on users who work from home.
In Thailand, Kaspersky detected a total of 24,094,399 attempted attacks against users with Microsoft’s RDP installed on their computers, making the country the second most targeted in the region.
What is an RDP attack? Working from home requires employees to log in to corporate resources remotely from their personal devices. One of the most common tools used for this purpose is Remote Desktop Protocol, or RDP, Microsoft’s proprietary protocol that enables users to access Windows workstations or servers. Unfortunately, because many offices transitioned to remote work with little notice, many RDP servers were not properly configured – something cybercriminals have sought to take advantage of to gain unauthorized access to confidential corporate resources.
24M brute force attacks, Thailand second most targeted in SEA, Kaspersky
The most common type of attack being used is brute force, wherein cybercriminals attempt to find the username and password for the RDP connection by trying different combinations until the correct one is discovered. Once it is found, they gain remote access to the target computer on the network.
Work-from-home behavior in Thailand
According to a survey on Thai work-from-home behavior, 42.72% of respondents said they worked from home during COVID-19, while 34.45% used a hybrid approach (working both from home and at the office). Working from home may seem the safest choice, but it hasn’t gone entirely smoothly: 62.08% of home workers said their devices were ill-equipped and inconvenient to use, while 45.97% experienced delays in communications.
In Thailand, the majority of desktop computers (80.7%) are installed with Microsoft OS and these have been the devices heavily relied upon by employees working remotely during on and off lockdowns since the pandemic began.
“This health crisis has clearly expedited digital transformation and the merging of our professional and personal life. Employees are now actively leading the way in accepting changes in pursuit of greater freedom and flexibility, using technology to own a new future. Companies must now adapt and restructure the modern workplace to make it more productive, sustainable, and most importantly, secure,” says Chris Connell, Managing Director for Asia Pacific at Kaspersky.
As working from home is here to stay, Kaspersky recommends that employers and businesses take all possible protection measures:
• At the very least, use strong passwords.
• Make RDP available only through a corporate VPN.
• Use Network Level Authentication (NLA).
• If possible, enable two-factor authentication.
• If you don’t use RDP, disable it and close port 3389.
• Use a reliable security solution.
Companies need to closely monitor the programs in use and update them on all corporate devices in a timely manner. This is no easy task for many companies at present, because the hasty transition to remote working has forced many to allow employees to work with or connect to company resources from their home computers. Kaspersky’s advice is as follows:
• Provide training on basic cyber hygiene to your employees. Help them identify the most common types of attacks that occur in the company, and provide basic knowledge for spotting suspicious emails, websites and text messages.
• Use strong, complex and different passwords to access every company resource.
• Use multifactor or two-factor authentication, especially when accessing financial information or logging in to corporate networks.
• Where possible, use encryption on devices used for work purposes.
• Enable access to RDP only through a corporate VPN.
• Always keep backup copies of critical data.
• Use a reliable enterprise security solution with network threat protection, such as Kaspersky Endpoint Security for Business.
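Some rough arithmetic illustrates why the “use strong passwords” recommendation matters against brute-force attacks like these. The sketch below is not from Kaspersky; the guess rate and character-set sizes are illustrative assumptions, chosen only to show how quickly the search space grows with password length and variety.

```python
# Illustrative brute-force arithmetic (assumed figures, not Kaspersky data).

def keyspace(charset_size: int, length: int) -> int:
    """Number of possible passwords of a given length over a character set."""
    return charset_size ** length

GUESSES_PER_SECOND = 1_000  # assumed online guess rate against an RDP endpoint

def years_to_exhaust(charset_size: int, length: int) -> float:
    """Worst-case time to try every password, in years."""
    seconds = keyspace(charset_size, length) / GUESSES_PER_SECOND
    return seconds / (365 * 24 * 3600)

# 8 lowercase letters vs. 12 characters drawn from letters, digits and symbols
print(f"8 lowercase letters:  {years_to_exhaust(26, 8):,.1f} years")
print(f"12 mixed characters:  {years_to_exhaust(94, 12):.2e} years")
```

Even under these generous assumptions, lengthening the password and widening the character set pushes the search time from years into astronomically long timescales, which is why attackers instead rely on guessing weak, common passwords.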
The astronauts flying again from Cape Canaveral are getting a lot of attention. So are the celebrities and wealthy entrepreneurs plunking down millions to join suborbital flights that touch the edge of space in flights replayed in prime time.
But don’t forget about the robots. They are having a landmark year, too.
Late Tuesday night, NASA is poised to embark on another groundbreaking mission – this one designed to eventually save Earth from a killer asteroid by testing whether a spacecraft can nudge a celestial body in a way that will alter its orbit. It’s just the latest in a series of missions that this year have included a rover looking for signs of life on Mars, a small helicopter that continues to fly through the Red Planet’s skies, and the possible launch of the most powerful telescope ever to go to space, capable of looking back in time to the early days of the universe.
Tuesday’s launch at 10:21 p.m. Pacific time – 1:21 a.m. Wednesday on the East Coast – is set to see a SpaceX Falcon 9 rocket lift off from Vandenberg Space Force Base in California. On its tip, it will carry a refrigerator-sized spacecraft that will fly 6.7 million miles, hunting a small asteroid about the size of a football stadium before going kamikaze and crashing into it at 15,000 mph, likely next September.
If everything goes as planned, the impact will slow the asteroid by a fraction of a millimeter per second. That, scientists hope, will be enough that over time, in the vastness of space, it will alter the asteroid’s trajectory significantly.
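The “fraction of a millimeter per second” figure can be sanity-checked with conservation of momentum. The masses below are assumptions for illustration, not values from this article: a roughly 570-kilogram spacecraft and an asteroid mass on the order of 5 billion kilograms, with no extra push from impact ejecta.

```python
# Back-of-the-envelope momentum check (assumed figures, not NASA's).

DART_MASS_KG = 570.0                          # assumed spacecraft mass
IMPACT_SPEED_MS = 15_000 * 1609.344 / 3600    # 15,000 mph -> ~6,706 m/s
ASTEROID_MASS_KG = 5.0e9                      # assumed order-of-magnitude mass

# Perfectly inelastic collision: the spacecraft's momentum is absorbed
# by the asteroid, changing its velocity by m*v / M.
delta_v = DART_MASS_KG * IMPACT_SPEED_MS / ASTEROID_MASS_KG
print(f"velocity change ≈ {delta_v * 1000:.2f} mm/s")  # → ≈ 0.76 mm/s
```

Under these assumptions the result lands below a millimeter per second, consistent with the article’s description; any momentum carried away by ejecta would only increase the push.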
Dimorphos, the asteroid in NASA’s sights, poses no danger to Earth. But it was chosen as the target for the so-called DART mission (that stands for Double Asteroid Redirection Test) by members of an elite NASA team known as the Planetary Defense Coordination Office – whose task isn’t exploring space but defending Earth.
It is a “first test of planetary defense,” Thomas Zurbuchen, the associate administrator of NASA’s science mission directorate, told reporters Monday. “What we’re trying to learn is how to deflect a threat that would come in.”
There are lots of rocks hurtling through space large enough to survive the fiery plunge through Earth’s atmosphere. NASA does its best to track them, but estimates it only knows of about 40% of the asteroids that could pose a danger. It’s working on adding more space rocks to its catalogue, and in the meantime, trying to figure out how to make sure none hit Earth.
Unlike other natural disasters, such as hurricanes or earthquakes, humans could, NASA says, do something about killer asteroids.
In an interview, NASA administrator Bill Nelson said that an asteroid impact could have enormous consequences, even threaten humans’ ability to live on Earth. “We know a six-mile-wide asteroid hitting what is today the Yucatán Peninsula was what wiped out much of life on Earth, including the dinosaurs,” he said.
The redirect mission follows another program that last year reached another near-Earth asteroid. After studying the asteroid Bennu for about two years, a spacecraft grabbed a sample from the surface with a robotic arm and is now on its way back to Earth. It is the first time NASA has collected a sample from an asteroid, which could shed light on how the solar system was formed.
This week, NASA also celebrated the 16th flight of Ingenuity, the four-pound helicopter that flew the first powered flight of an aircraft on another planet earlier this year in what NASA said was a “Wright brothers moment.” Though it was only supposed to fly a handful of times, the sprite of a chopper has kept going, to the delight of NASA engineers.
Nelson said he was “very pleasantly surprised” and proud of “this little helicopter that we didn’t even know would fly in an atmosphere that is 1 percent of Earth’s atmosphere. And now it’s not only a demonstration, it’s a scout.”
Those events came in a year when NASA’s astronauts were launching with regularity from United States soil for the first time since the space shuttle was retired in 2011. They were joined by a series of suborbital tourism flights from Richard Branson’s Virgin Galactic, and Jeff Bezos’ Blue Origin, which announced Tuesday that it would add Michael Strahan, the TV personality, to the ranks of its space passengers in December. (Bezos owns The Washington Post.)
“I’m super excited about all the successes on the human spaceflight side,” Zurbuchen said. “We at science are huge champions for that and we sit there glued to the TV, just the same way as everybody else.” But he said that “this has been a year of science, though, in an amazing fashion.”
He noted that NASA landed its Perseverance rover on Mars, and said the agency is gearing up not only to return astronauts to the lunar surface but first to send a series of robotic spacecraft there. By the end of 2023 it intends to send its first mobile robotic mission to analyze ice at the lunar south pole, helping NASA create maps of the resource.
It is also scheduled to launch the James Webb Space Telescope, which would be stationed about 1 million miles from Earth and “explore every phase of cosmic history – from within our solar system, to the most distant observable galaxies in the early universe, and everything in between,” NASA says.
After years of delays, the telescope was set to launch on an Ariane 5 rocket from French Guiana on Dec. 18. But NASA has delayed that flight until at least Dec. 22 after “a sudden, unplanned release of a clamp band” that was securing the telescope to its spot inside the nose cone of the rocket. NASA is now investigating to make sure “the incident did not damage any components,” the space agency said in a statement.
Zurbuchen said they were taking every precaution to ensure the $10 billion telescope is safe before launching, and he said he hoped that “in a few days we’ll be in good shape.”
Smart home devices have made their way into living rooms and bedrooms across the world, helping us turn off our lights and lock our doors remotely. Now they are taking on new territory: our home offices.
Big tech companies including Amazon, Facebook parent Meta and Google are expanding work applications for the smart home, one that’s controlled by a group of connected devices that can be accessed remotely. The coronavirus pandemic blurred the lines between people’s home and work lives. As a result, some workers are asking Alexa or Google Assistant to book their virtual meetings, fetch revenue targets or remind them about important events on their busy work calendars. And while all of these work productivity features may add convenience to working from home, experts say they are also raising security and privacy concerns that could cost workers and their companies if not managed properly.
“The lines all blurred during the pandemic. Everything is turning into screens,” said Mark Quiroz, vice president and general manager of product marketing for Samsung’s Display division.
Smart home devices, which include the Amazon Echo speaker and the Google Nest line of smart thermostats, smoke alarms and doorbells, are now considered a mainstream technology, according to a survey by market research firm International Data Corp. More than 77 percent of households with a WiFi connection have at least one smart home device. And consumers are warming up to the idea of using their smart home devices for work purposes, too: Nearly 50 percent of the roughly 1,700 people surveyed who are employed and own a smart home device said they’d be willing to use the device for work purposes such as video conference calls or to retrieve the latest sales numbers from connected work-related software.
“Each person may soon have 10 devices tied to them,” said Mark Ostrowski, head of engineering at cybersecurity company Check Point Software. “Ten devices per person times a household of four – that’s 40 devices for entry,” he said, referring to entry points that could be targeted by hackers.
Still, big technology companies are hoping to seize on the opportunity.
Workers using the Amazon Alexa virtual assistant can join Zoom meetings with a simple voice command on Amazon’s smart display, the Echo Show. With Alexa-enabled devices, they can also be reminded at a specific time about details on their to-do lists or their appointments for the day, have focus music played, and have their emails read aloud – and reply to them by voice.
Amazon has been courting corporate clients with Alexa for Business, which helps companies deploy and manage Alexa-enabled devices, since 2017. Though it landed customers like General Electric, media group Condé Nast, and not-for-profit health system Hawaii Pacific Health, the company lists only a little more than a dozen corporate clients. And in 2018, WeWork reportedly halted its pilot of Alexa for Business, though the company didn’t specify the reason for doing so. (Amazon founder Jeff Bezos owns The Washington Post.)
But Alexa-enabled devices have a history of quietly recording conversations. Alexa sometimes wakes after hearing its name – or something that sounds like its name – even when its users never meant to activate the device. Those conversations, which in today’s remote work environment could very well be work-related, have the potential to be reviewed by human contractors working to improve Alexa’s speech recognition if people using their personal Alexa-enabled devices don’t opt out of the process.
For business customers, Amazon said all interactions with Alexa are anonymous and not linked to any individual user. It said by default voice recordings aren’t saved.
“We absolutely see Alexa playing a bigger role in work in the future. Customers tell us how Alexa not only helps them get more done throughout the day, but helps them work smarter, productively and safely,” said Liron Torres, head of Alexa Smart Properties at Amazon.
Google, which similarly allows users to opt out of human review and saving recordings, has had a similar history with devices equipped with its voice-activated Google Assistant. Google also has been known to tap into users’ online activities to better serve them ads.
Similar to Amazon, Google aims to equip workers with productivity tools that may aid with work. For example, users are now able to create workday routines that automatically remind them about the items on their calendars as well as when they should take a break or get a glass of water. The feature was rolled out during the pandemic.
Before the pandemic, workers already were able to use the Google Assistant for tasks like creating to-do lists and calendar items, storing reminders and automatically joining video meetings on the company’s smart display called the Nest Hub Max, which began supporting Zoom at the end of last year.
Facebook also wants in on the work-world action, but it, too, has had its own privacy issues.
The company, which recently changed its corporate name to Meta, said early on in the pandemic that it reprioritized its plans for Portal. The device, powered by its own virtual assistant – called the Facebook Assistant – and Alexa, resembles a tablet and features a smart speaker and camera that follows people around the room as they talk.
“We had a number of users who saw that their workday consisted of going in and out of different video services,” said Micah Collins, director of product management at Meta. “We saw actual pain points of many Portal users and focused on that.”
Portal users can now use their devices to make video calls on services including BlueJeans, GoToMeeting, Webex and Zoom. They can also integrate their work calendars from services like Google and Microsoft. And companies can roll out and manage a group of devices for their employees with special work accounts.
But in 2019, Facebook was slapped with a history-making $5 billion fine from the Federal Trade Commission for violating consumers’ privacy. The FTC probed the company after the social media giant left up to 87 million users’ data vulnerable to data analytics firm Cambridge Analytica ahead of the 2016 U.S. presidential election. Since then, the company has suffered several massive breaches of user data and has come under scrutiny for the amount of data it collects about its users.
Meanwhile, consumer electronics giant Samsung is hoping to sell more connected displays that do it all in a stand-alone device, so workers can use software like Microsoft 365, complement their laptop and desktop screens, and watch streaming entertainment, too. That means adding more screens to the homes of more workers. More screens means more connections and more risk, security experts say.
They say consumers should use caution when mixing their personal and professional data and devices. Workers could be creating new opportunities for criminals to steal sensitive company information, even if it’s seemingly well-protected by security software.
Michael Siegel, director of cybersecurity at MIT Sloan, said it could be as simple as someone hacking a person’s smart thermostat or smoke alarm, for example. In that case, all the intruder has to do is raise the temperature in the home or set off the smoke alarm in an attempt to get the person to leave a device behind for them to steal.
“The more we’re connected to our office, the more exposed we are to social engineering,” he said. “All of these are things that can cause you to let your guard down.”
Beyond physically stealing a device – and all of its data – criminals also will have more ways to get to sensitive corporate data as people increase the devices that connect to it, experts say. Ari Lightman, professor of digital media and marketing at Carnegie Mellon University’s Heinz College, said it boils down to one simple fact: “If there’s a mechanism to exploit, people will look to do that.”
But workers may be increasing their exposure not only to hackers but potentially to their employers as well. Adam Wright, a senior analyst at IDC, said company-issued smart home devices like Facebook’s Portal should be considered much like company-issued laptops, which can be easily monitored by employers. Facebook employees, for example, were offered free Portal devices after the outbreak of the pandemic to help with virtual meetings. But the devices should be handled with caution, Wright suggested.
“Employers have every right to monitor their employees with their devices,” Wright said. So “in the middle of the night, it’s not just Amazon listening to me, but possibly my boss and IT department.”
Workers who are using their smart home devices for work would be wise to do a few things, said Pardis Emami-Naeini, a researcher at the University of Washington’s Security and Privacy Research Lab. First, they should familiarize themselves with the privacy and security settings of their smart home devices to understand what they may need to do to protect their data, and that of their company, as best they can. They should also keep the devices updated, if they don’t update automatically, to close security vulnerabilities, just as they would with their smartphones.
“Now that the purpose [of the device] is different, they shouldn’t assume that the normal practices of their daily behavior is going to work,” Emami-Naeini said. “The purpose is different and the data they share is more sensitive.”
Check Point’s Ostrowski said the responsibility not only lies with the worker but with the employer, which should be doing everything to safeguard its data and network even if a person’s personal device is compromised.
“It’s less about how do I secure or chase after 10,000 employees to make sure their digital hygiene is good. It’s more about how do I make sure when they come to the corporate environment, they can’t bring a malicious footprint with them,” he said.
Janneke van Ooyen, a community manager at a mobile gaming company in Barcelona who recently outfitted her home with eight smart lights, a smart sound bar and an Amazon Echo Dot, said she’s hesitant to use these devices for work purposes.
“Since the data is very sensitive and you don’t know where it’s stored – that would be my biggest gripe for not” using it, she said. “We work with a lot of licensers, so if anything got out, that would be really bad.”
– – –
In 2018, John Rael, a volunteer track coach in Taos, N.M., was on trial for allegedly raping a 14-year-old girl when his lawyer made an unusual request.
He wanted the judge to admit evidence from “EyeDetect,” a lie-detection test based on eye movements that Rael had passed.
The judge agreed, and five of the 12 jurors wound up voting not to convict. A mistrial was declared.
EyeDetect is the product of the Utah company Converus. “Imagine if you could exonerate the innocent and identify the liars . . . just by looking into their eyes,” the company’s YouTube channel promises. “Well, now you can!” Its chief executive, Todd Mickelsen, says the company has built a better truth-detection mousetrap; he believes eye movements reveal deception far better than the much older and mostly discredited polygraph.
Its critics, however, say EyeDetect is just the polygraph in more algorithmic clothing. The machine is fundamentally unable to deliver on its claims, they argue, because human truth-telling is too subtle for any data set.
And they worry that relying on it can lead to tragic outcomes, like punishing the innocent or providing a cloak for the guilty.
EyeDetect raises a question that reaches all the way back to the Garden of Eden: Are humans so wired to tell the truth that we’ll give ourselves away when we don’t?
And a more 21st-century query: Can modern technology come up with the tools to detect those tells?
An EyeDetect test places a subject in front of a monitor with a digital camera and, as with the polygraph, lobs generic true-false queries like “Have you ever hurt anybody?” to establish a baseline. Then come specific questions. If the subject’s physical responses are more demonstrative there, they are presumed to be lying; less demonstrative, and they’re telling the truth. The exact number of flubbed questions that determines a failure is governed by an algorithm; the computer spits out a yes-or-no verdict based on an adjustable formula.
Where the polygraph measures blood pressure, breathing and sweat to determine the flubbing, EyeDetect looks at factors like pupil dilation and the rapidity of eye movement. “A polygraph is emotional,” Mickelsen said. “EyeDetect is cognitively based.” He explains the reason the company believes eye movements would be affected: “You have to think harder to lie than to tell the truth.”
EyeDetect plays into a form of techno-aspirational thinking. Our Web browser already pitches us a vacation we swear has only lived in our minds while dating apps serve up a romantic partner dreamed up in our hearts. Surely an algorithm can also peer into our soul?
But experts say such logic may not have much basis in science.
“People have been trying to make these predictions for a long time,” said Leonard Saxe, a psychologist at Brandeis University who has conducted some of the leading research in the field of truth-detection. “But the science has not progressed much in 100 years.”
Like most renowned experts, he has not reviewed EyeDetect’s research specifically. But, he says, “I don’t know of any evidence that eye movements are linked to deception.”
When it comes to the polygraph, experts have a long history of declaring failure.
The machine, which celebrates its centennial this year, continues to be used in areas like police interrogations, government security-clearance investigations and sex-offender monitoring. The market is valued at as much as $2 billion, powered by the many federal and local offices that use it for hiring purposes.
Yet the American Psychological Association takes an unequivocal position – “Most psychologists agree that there is little evidence that polygraph tests can accurately detect lies,” it declares on its website. Good liars, after all, can cover up tics, while nervous truth-tellers might send the machine haywire.
A 1988 federal law bans private employers from administering polygraphs, albeit with loopholes. Most states don’t accept them as evidence in court (New Mexico is famously looser), and a 1998 Supreme Court ruling held that federal criminal defendants have no constitutional right to introduce them.
If it turns out to be more accurate than a polygraph, EyeDetect could conjure a number of useful consequences – and a few dystopian ones. What grisliness awaits if anyone could know whether you were telling the truth just by looking at you? That lie to spare your Aunt Lily’s feelings at Christmas would be out the window; so would being a teenager.
If it proves hollow, though, an entirely different danger lurks: With its veneer of authority, many legal experts worry, it could lead law enforcement, private employers, government agencies and even some courts even further down the wrong path than the polygraph.
“It’s the imprimatur that’s the issue: we tend to believe that where there’s science involved it’s reliable,” said Loyola Marymount University law professor Laurie Levenson, who has studied the issue.
She said she was concerned that people wouldn’t get clearances for jobs or would otherwise be held accountable for things they did not do because of false positives. She noted it also could help the guilty get away.
There are historical reasons for skepticism about any new truth-telling tech. Like diet sweeteners to the soft-drink industry, such innovations come along in the legal world at regular intervals. But they often fall short of their promise. About 15 years ago, the functional MRI, which posited that blood flow to the brain could be the key to truth-detection, enjoyed a period of buzz. But the device largely did not meet scientific standards for broad usage, and the procedure’s cost and intensiveness further inhibited wide-scale adoption.
The p300 guilty-knowledge method championed by the longtime Northwestern professor Peter Rosenfeld, who asserted that the future of truth-detection lay in brain waves, gained some enthusiasm from the scientific community; it was the subject of several dozen outside academic articles, a number of them with positive results, and has won the tentative support of Henry Greely, a Stanford Law professor and one of the leading experts on tech and the law. Among other advantages, it involves an EEG that is fairly cheap and easy to use.
Still, Rosenfeld died last winter with the method not in widespread use. The work is continued by his lab, which has had a group of graduate students working on the p300, and is emphasized in the field by people like John Meixner, an assistant U.S. attorney in Michigan and a protege of Rosenfeld’s.
On a recent afternoon, Mickelsen sat in his office in the tech corridor of Lehi, Utah, and, over Zoom, coolly screen-shared a series of graphs and charts to make the case why EyeDetect is different from failed past technologies.
EyeDetect’s accuracy rate is determined by a simulated-crime interrogation. A group of “innocent” and “guilty” subjects are told whether or not to commit a simulated petty-cash theft in a manufactured environment and then administered the ocular test. The rate at which the machine correctly predicts a person’s truth-telling status – Converus researchers already know the right answer for each subject – is between 83 and 87%, according to the test results, the company says.
That’s about the same as a polygraph tends to achieve in its tests, though the polygraph can discard up to 10% of borderline results as “inconclusive,” which inflates its reported accuracy, while EyeDetect renders a verdict on every test. Mickelsen also says the system is preferable to a polygraph because, by entirely automating the test, it avoids the possibility of human bias.
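The effect of discarding inconclusive results on a reported accuracy figure can be sketched with a quick calculation. The numbers below are illustrative assumptions for the example, not Converus’s or any polygraph vendor’s data:

```python
def reported_accuracy(correct, scored):
    """Accuracy as reported: correct verdicts over tests that received a verdict."""
    return correct / scored

# Hypothetical: EyeDetect renders a verdict on every one of 100 tests, 85 correct.
eyedetect = reported_accuracy(85, 100)   # 0.85

# Hypothetical: a polygraph examiner discards 10 of 100 borderline tests as
# "inconclusive" and gets 77 of the remaining 90 right.
polygraph = reported_accuracy(77, 90)    # ~0.856

# The two reported figures look nearly identical, but the polygraph's was
# computed over only 90% of the subjects; per subject tested, it actually
# delivered fewer correct verdicts (77 vs. 85).
```

In other words, a headline accuracy percentage says little on its own unless you also know how many tests were excluded from the denominator.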
The man at EyeDetect’s scientific core is John Kircher, a now-retired University of Utah professor who has consulted for the CIA and had his lab funded by the Defense Department. Kircher had been researching and writing software for lie-detection technologies for decades when, in the early 2000s, he came across a University of New Hampshire professor who was researching how our eye patterns change during reading.
“Suddenly it hit me: all the software I had developed for 30 years could be applied to this problem,” Kircher said. He wedded the two and, for much of the past two decades, has been perfecting EyeDetect, now as Converus’s chief scientist.
Kircher says that the software for his eye tracker captures some 350,000 eye-movement measurements – including “fixations,” the milliseconds-long pauses between words – over a 25-minute test; four metrics are taken every second. (Converus also has the EyeDetect+, which adds a computer-administered variation on a traditional polygraph.)
EyeDetect has won its supporters in the field. Law-enforcement customers laud the system as smoother than a traditional polygraph.
“People will come in nervous because they’re expecting what they see on TV, where you’re hooked up to this machine and sweating and it just seems really invasive,” said Lt. Josh Hardee of the Wyoming Highway Patrol. “This is just clean and quick.”
Hardee’s department has used EyeDetect to screen more than 150 prospective job candidates in the past two years. His department and others pay around $5,000 for the EyeDetect system – which consists of a high-definition camera, a headrest and software that generates the questions and records and scores the responses – and then $80 to Converus to score each individual test. (Operators must be trained to use the system as well.) The machine reduces false alarms, Hardee says, because anxiety plays less of a role.
Other public officials have also been persuaded. The Tucson, Ariz., fire department uses EyeDetect to screen employees. The machine is also put to use, Converus says, by law-enforcement or corrections departments in states including Idaho, New Hampshire, Washington, Utah, Ohio and Connecticut. Defense lawyers in the ongoing case of Jerrod Baum, accused of killing two teens in Utah, have petitioned the judge to allow EyeDetect. The jury in the Rael case appeared amenable, too. (The defendant later pleaded guilty and avoided jail time; he was given four years’ probation.)
But many experts are not swayed by the enthusiasm. Saxe makes the point many scientists and academics do: Even if eye movements are measurably different under different sets of circumstances, there’s no way to directly link them to lying. They could well simply reflect the fact that the subject is taking a test.
“Fear of detection is not a measure of deception,” he said. At heart, the issue may come down to the 21st-century desire to automate and digitize a process – human emotion and motivation – that fundamentally resists the enterprise.
Stanford’s Greely says EyeDetect doesn’t dislodge his broader skepticism about truth-telling tech, either.
“I see no reason to believe that this works well, or, really, at all,” he said in an email. “Show me large, well-designed impartial studies and I’ll be interested.”
In a phone interview, he noted that while it’s not impossible in principle for the body to undergo particular physiological changes in response to lies, the burden of proof lies heavily on the new technologies. The simulated-crime tests that EyeDetect relies on, he said, contain a fundamental flaw: People instructed to lie in a test situation might well react differently, and more demonstratively, than a criminal in the real world.
He also noted a lack of published research by people not affiliated with Kircher or his lab.
Kircher says the Defense Department is conducting a study of ocular technologies that he hopes will conclude by the summer. A DoD spokesman did not reply to a request for comment.
Not all outside experts are unmoved, however.
“All truth-detection methods are imperfect. But here’s the reason it’s worth relying – not over-relying, but relying – on them,” said Clark Freshman, a law professor at UC Hastings who specializes in lie detection and is optimistic about EyeDetect.
“People can do even worse than a coin-flip at telling whether someone is lying. So if you get better results – even if it’s just 70 or 80% accurate – than if we didn’t use it, and it’s generally free from bias, I don’t understand why you wouldn’t make it part of the picture.”
He said studies showed juries do not overly rely on these technologies, but instead include them as one factor among many.
There is also a particular abuse concern with EyeDetect. Without the human element, there may be less bias. But the device could also be intentionally set at algorithmic levels that would make it difficult to pass – at least with the polygraph there’s a transcript of human conversations. (Converus says that it trains customers on how to use the machine and, while it acknowledges that it allows them to set their own “base rate of guilt,” a spokesman also says that “if we observe a BRG set outside of a reasonable range, we make an inquiry.”)
Even if ocular technology can’t actually root out fibbers, there may be some value in how it could discourage them in the first place. In other words, EyeDetect may not need whiz-bang technology. It just needs to look high-tech. As Brad Bradley, the fire chief in Tucson, says in materials from Converus: “A guilty applicant, such as one with a drug history, looks at it and thinks, ‘I’m not going to pass. So, I’m not going to even apply.'”
Still, the odds of this moving from realms like hiring to mainstream courtrooms are slim. Plato said the eye is the window to the soul; he was silent on whether it was a ticket out of jail.
“This is going to be an uphill climb in almost any court, especially after the debacle that was the polygraph,” said Loyola Marymount’s Levenson.
Nor, in her opinion, do they deserve to scale the hill. “Truth-telling should be determined by people, not machines,” she said.
– – –
After a teacher posted images of her resignation letter on social media, saying she was quitting because of needless paperwork, the deputy leader of Thailand’s Democrat Party said it pointed to the failure of the country’s education system.
Prof Dr Kanok Wongtrangan said he has often spoken up about teachers’ workload and how it is a dangerous trap that is pulling down the quality of Thai education.
He pointed out that teachers are so overburdened with administrative jobs that they do not have time to focus on teaching.
Hence, he said, he has five solutions:
• Cut down the time students spend listening to lectures and instead have them work on developing an analytical mindset.
• Remove unnecessary subjects and add topics that are relevant to students’ lives.
• Cut down on homework and motivate youngsters to do more research.
• Reduce tests and exams.
• Cut down on unnecessary protocols for teachers to follow so they can spend more time with their students.