TOKYO – The Japan Aerospace Exploration Agency will launch two lunar probes on a U.S. rocket as early as next February as part of the U.S.-led Artemis program, with Japan aiming to land an explorer on the surface of the moon for the first time.
Developed by JAXA and the University of Tokyo, the shoebox-sized Omotenashi and Equuleus probes will be launched toward the moon alongside NASA’s Orion spacecraft and nanosatellites from the U.S. and Italy.
Omotenashi will attempt to land on the moon using a rocket to control its descent. It will also measure radiation levels around the moon, collecting data that will be utilized for future manned missions.
It is hoped that universities and companies will be able to develop lunar landers at a low cost in the future.
Equuleus will take advantage of the gravity of the Earth, moon and sun to reach the moon with less fuel over a period of six months to a year. It will also observe meteorite impacts on the lunar surface as it approaches the moon.
Netizens primarily used Google to search for information relating to lotteries, while turning to Facebook to follow topics they were interested in, the Thai Media Fund said on Wednesday, citing a recent study.
Fund expert Chumnan Ngammaneeudom said the study, conducted between April and June, also found that netizens used both platforms to discuss the Covid-19 crisis.
“The study of netizens’ tweets showed most of them had negative comments about the Covid-19 crisis, including criticising the government’s ‘ineffective’ efforts to contain the spread of the virus,” he said.
“Apart from gaining insight into netizens’ interests, emotions and the manner of their comments, this study also reflects their attitude and behaviour,” Chumnan said.
He said further study is necessary to create a media ecology and improve the quality of society.
Bundit Centre CEO Poramet Minsiri said the study showed that many in society feel hopeless.
He said netizens who are interested in the lottery face inequality in society and the economy, while those interested in others’ lifestyles seek idols as a form of escape from their “imperfect” lives.
“Netizens can easily use hate speech on social media as they believe they can freely criticise others, such as people who appear on the news or influencers,” he said.
“Hence, I would like to ask related agencies to listen to netizens’ voices in order to effectively solve issues afflicting society,” Poramet added.
Society for Online News Providers president Rawee Tawantharong said many people received information from other sources apart from online platforms, such as television and radio.
“As people can now publicise information online, we should create literacy among them to ensure that they express themselves appropriately,” he said.
Like Poramet, he also asked government agencies to support the media in creating content that will help improve society.
While working from home is convenient and has many benefits, it also exposes both individuals and businesses to a range of cybersecurity risks.
From January to June 2021, Kaspersky researchers recorded a 36.12% increase in brute-force attacks on Remote Desktop Protocol (RDP) in Southeast Asia (SEA) compared with the same period last year. The finding reflects how attackers are focusing their efforts on users who work from home.
In Thailand, Kaspersky detected a total of 24,094,399 attempted attacks against its users whose computers had Microsoft’s RDP installed. Thailand now ranks second in the region.
What is an RDP attack? Working from home requires employees to log in to corporate resources remotely from their personal devices. One of the most common tools used for this purpose is Remote Desktop Protocol, or RDP, Microsoft’s proprietary protocol that enables users to access Windows workstations or servers. Unfortunately, given that many offices transitioned to remote work with little notice, many RDP servers were not properly configured, something cybercriminals have sought to take advantage of to gain unauthorized access to confidential corporate resources.
The most common type of attack being used is brute force, wherein cybercriminals attempt to find the username and password for the RDP connection by trying different combinations until the correct one is discovered. Once it is found, they gain remote access to the target computer on the network.
According to a survey on Thai work-from-home behavior, 42.72% of respondents said they worked from home during COVID-19, while 34.45% used a hybrid approach (working both from home and at the office). Working from home seems an ideal choice for staying safe, but not everything went smoothly: 62.08% of home workers said their devices were poorly equipped and inconvenient to use, while 45.97% experienced delays in communication.
In Thailand, the majority of desktop computers (80.7%) run a Microsoft operating system, and these are the devices employees have relied on heavily while working remotely during the on-and-off lockdowns since the pandemic began.
“This health crisis has clearly expedited digital transformation and the merging of our professional and personal life. Employees are now actively leading the way in accepting changes in pursuit of greater freedom and flexibility, using technology to own a new future. Companies must now adapt and restructure the modern workplace to make it more productive, sustainable, and most importantly, secure,” says Chris Connell, Managing Director for Asia Pacific at Kaspersky.
As working from home is here to stay, Kaspersky recommends that employers and businesses take all possible protection measures:
• At the very least, use strong passwords.
• Make RDP available only through a corporate VPN.
• Use Network Level Authentication (NLA).
• If possible, enable two-factor authentication.
• If you don’t use RDP, disable it and close port 3389.
• Use a reliable security solution.
Companies also need to closely monitor the programs in use and update them on all corporate devices in a timely manner. This is no easy task for many companies at present, because the hasty transition to remote working has forced many to allow employees to work with, or connect to, company resources from their home computers. Kaspersky’s advice is as follows:
• Provide training on basic cyber hygiene to your employees. Help them identify the most common types of attacks on the company and give them the basics of spotting suspicious emails, websites and text messages.
• Use strong, complex and different passwords to access every company resource.
• Use multi-factor or two-factor authentication, especially when accessing financial information or logging into corporate networks.
• Where possible, use encryption on devices used for work purposes.
• Enable access to RDP only through a corporate VPN.
• Always keep backup copies of critical data.
• Use a reliable enterprise security solution with network threat protection, such as Kaspersky Endpoint Security for Business.
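For administrators who want to check the first few items on that list on a Windows machine, the minimal sketch below (an illustrative Python script, not a Kaspersky or Microsoft tool) reads the two well-known Terminal Server registry values that control whether RDP is enabled and whether Network Level Authentication is required; how the results are reported is the author's own choice here.

```python
# Minimal sketch, assuming a Windows host and Python 3 with the standard library only.
# It only READS the registry values commonly used to control RDP and NLA; it changes nothing.
import winreg

TS_KEY = r"SYSTEM\CurrentControlSet\Control\Terminal Server"
RDP_TCP_KEY = TS_KEY + r"\WinStations\RDP-Tcp"

def read_dword(path: str, name: str):
    """Return a DWORD value from HKEY_LOCAL_MACHINE, or None if it is missing."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _type = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None

def main():
    deny_rdp = read_dword(TS_KEY, "fDenyTSConnections")          # 1 = RDP connections denied
    require_nla = read_dword(RDP_TCP_KEY, "UserAuthentication")  # 1 = NLA required

    if deny_rdp == 1:
        print("RDP is disabled on this machine (fDenyTSConnections = 1).")
    else:
        print("RDP appears to be enabled; if it is not needed, disable it and close port 3389.")
        if require_nla == 1:
            print("Network Level Authentication is required (UserAuthentication = 1).")
        else:
            print("NLA does not appear to be enforced; consider enabling it.")

if __name__ == "__main__":
    main()
```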
Kaspersky cybersecurity analyst Tatyana Shishkova warned Android phone users that the Joker malware has returned to Google Play Store.
Shishkova said on Twitter that she found at least 15 Android apps infected with the Joker malware.
Last year, the Joker malware infected several apps and Google had to remove those apps from the store to protect users.
Joker malware is able to sneak past Google Play Store’s security by making small changes to its code.
The malware was first discovered in 2017 and has reappeared quite often since then.
It secretly steals money by subscribing to online services in the background. It can click on online advertisements and access the phone’s SMS with OTPs to approve payments.
The astronauts flying again from Cape Canaveral are getting a lot of attention. So are the celebrities and wealthy entrepreneurs plunking down millions to join suborbital flights that touch the edge of space and are replayed in prime time.
But don’t forget about the robots. They are having a landmark year, too.
Late Tuesday night, NASA is poised to embark on another groundbreaking mission – this one designed to eventually save Earth from a killer asteroid by testing whether a spacecraft can nudge a celestial body in a way that will alter its orbit. It’s just the latest in a series of missions that this year have included a rover looking for signs of life on Mars, a small helicopter that continues to fly through the Red Planet’s skies, and the possible launch of the most powerful telescope ever to go to space, capable of looking back in time to the early days of the universe.
Tuesday’s launch at 10:21 p.m. Pacific time – 1:21 a.m. Wednesday on the East Coast – is set to see a SpaceX Falcon 9 rocket lift off from Vandenberg Space Force base in California. On its tip it will carry a refrigerator-sized spacecraft that will fly 6.7 million miles, hunting a small asteroid about the size of a football stadium before going kamikaze and crashing into it at 15,000 mph, likely next September.
If everything goes as planned, the impact will slow the asteroid by a fraction of a millimeter per second. That, scientists hope, will be enough that over time, in the vastness of space, it will alter the asteroid’s trajectory significantly.
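The arithmetic behind that hope is simple to sketch. Using purely illustrative numbers (the real figures depend on the asteroid's mass and the impact geometry), a hypothetical change of half a millimeter per second, left to act for a few years, adds up to a displacement of tens of kilometers:

```python
# Back-of-the-envelope sketch with illustrative numbers, not DART's measured values.
# A tiny velocity change applied years in advance accumulates into a large shift
# in where the asteroid ends up along its path (simple linear approximation,
# ignoring the compounding orbital-period effects).
SECONDS_PER_YEAR = 365.25 * 24 * 3600

delta_v = 0.5e-3   # assumed velocity change: 0.5 millimeters per second, in m/s
years = 5          # assumed lead time before a hypothetical Earth encounter

displacement_m = delta_v * years * SECONDS_PER_YEAR
print(f"~{displacement_m / 1000:.0f} km of along-track displacement after {years} years")
# ~79 km with these numbers; more lead time or a bigger nudge scales the offset linearly.
```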
Dimorphos, the asteroid in NASA’s sights, poses no danger to Earth. But it was chosen as the target for the so-called DART mission (that stands for Double Asteroid Redirection Test) by members of an elite NASA team known as the Planetary Defense Coordination Office – whose task isn’t exploring space but defending Earth.
It is a “first test of planetary defense,” Thomas Zurbuchen, the associate administrator of NASA’s science mission directorate, told reporters Monday. “What we’re trying to learn is how to deflect a threat that would come in.”
There are lots of rocks hurtling through space large enough to survive the fiery plunge through Earth’s atmosphere. NASA does its best to track them, but estimates it only knows of about 40% of the asteroids that could pose a danger. It’s working on adding more space rocks to its catalogue, and in the meantime, trying to figure out how to make sure none hit Earth.
Unlike other natural disasters, such as hurricanes or earthquakes, humans could, NASA says, do something about killer asteroids.
In an interview, NASA administrator Bill Nelson said that an asteroid impact could have enormous consequences, even threatening humans’ ability to live on Earth. “We know a six-mile-wide asteroid hitting what is today the Yucatán Peninsula was what wiped out much of life on Earth, including the dinosaurs,” he said.
The redirect mission follows another program that last year reached another near-Earth asteroid. After studying the Bennu asteroid for about two years, a spacecraft grabbed a sample from the surface with a robotic arm and is now on its way back to Earth. It would be the first time NASA has ever grabbed a sample from an asteroid, which could shed light on how the universe was formed.
This week, NASA also celebrated the 16th flight of Ingenuity, the four-pound helicopter that made the first powered flight of an aircraft on another planet earlier this year in what NASA said was a “Wright brothers moment.” Though it was only supposed to fly a handful of times, the sprite of a chopper has kept going, to the delight of NASA engineers.
Nelson said he was “very pleasantly surprised” and proud of “this little helicopter that we didn’t even know would fly in an atmosphere that is 1% of Earth’s atmosphere. And now it’s not only a demonstration, it’s a scout.”
Those events came in a year when NASA’s astronauts were launching with regularity from United States soil for the first time since the space shuttle was retired in 2011. They were joined by a series of suborbital tourism flights from Richard Branson’s Virgin Galactic, and Jeff Bezos’ Blue Origin, which announced Tuesday that it would add Michael Strahan, the TV personality, to the ranks of its space passengers in December. (Bezos owns The Washington Post.)
“I’m super excited about all the successes on the human spaceflight side,” Zurbuchen said. “We at science are huge champions for that and we sit there glued to the TV, just the same way as everybody else.” But he said that “this has been a year of science, though, in an amazing fashion.”
He noted that NASA landed its Perseverance rover on Mars, and that the agency is gearing up not only to return astronauts to the lunar surface but first to send a series of robotic spacecraft there. By the end of 2023 it intends to send the first mobile robotic mission there to analyze ice at the lunar south pole to help NASA create maps of the resource.
It is also scheduled to launch the James Webb Space Telescope, which would be stationed about 1 million miles from Earth and “explore every phase of cosmic history – from within our solar system, to the most distant observable galaxies in the early universe, and everything in between,” NASA says.
After years of delays, the telescope was set to launch on an Ariane 5 rocket from French Guiana on Dec. 18. But NASA has delayed that flight until at least Dec. 22 after “a sudden, unplanned release of a clamp band” that was securing the telescope to its spot inside the nose cone of the rocket. NASA is now investigating to make sure “the incident did not damage any components,” the space agency said in a statement.
Zurbuchen said they were taking every precaution to ensure the $10 billion telescope is safe before launching, and he said he hoped that “in a few days we’ll be in good shape.”
Smart home devices made their way into our living rooms and bedrooms across the world, helping us turn off our lights and lock our doors remotely. Now they are taking on new territory: our home offices.
Big tech companies including Amazon, Facebook parent Meta and Google are expanding work applications for the smart home, one that’s controlled by a group of connected devices that can be accessed remotely. The coronavirus pandemic blurred the lines between people’s home and work lives. As a result, some workers are asking Alexa or Google Assistant to book their virtual meetings, fetch revenue targets or remind them about important events on their busy work calendars. And while all of these work productivity features may add convenience to working from home, experts say they are also raising security and privacy concerns that could cost workers and their companies if not managed properly.
“The lines all blurred during the pandemic. Everything is turning into screens,” said Mark Quiroz, vice president and general manager of product marketing for Samsung’s Display division.
Smart home devices, which include the Amazon Echo speaker or the Google Nest line of smart thermostats, smoke alarms and doorbells, are now considered a mainstream technology, according to a survey by market research firm International Data Corp. More than 77 percent of households with a WiFi connection have at least one smart home device. And consumers are warming up to the idea of using their smart home devices for work purposes, too: Nearly 50 percent of the roughly 1,700 people surveyed who are employed and own a smart home device said they’d be willing to use the device for work purposes such as video conference calls or to retrieve the latest sales numbers from connected work-related software.
“Each person may soon have 10 devices tied to them,” said Mark Ostrowski, head of engineering at cybersecurity company Check Point Software. “Ten devices per person times a household of four – that’s 40 devices for entry,” he said, referring to entry points that could be targeted by hackers.
Still, big technology companies are hoping to seize on the opportunity.
Workers using the Amazon Alexa virtual assistant can join Zoom meetings with a simple voice command on Amazon’s smart display, the Echo Show. With Alexa-enabled devices, they can also be reminded at a specific time about items on their to-do lists or their appointments for the day, have focus music played, and have their emails read aloud so they can reply by voice.
Amazon has been courting corporate clients with Alexa for Business, which helps companies deploy and manage Alexa-enabled devices, since 2017. Though it landed customers like General Electric, media group Condé Nast, and not-for-profit health system Hawaii Pacific Health, the company only lists a little more than a dozen corporate clients. And in 2018, WeWork reportedly halted its pilot of Alexa for Business, though the company didn’t specify the reason for doing so. (Amazon founder Jeff Bezos owns The Washington Post.)
But Alexa-enabled devices have a history of quietly recording conversations. Alexa sometimes wakes after hearing its name, or something that sounds like its name, even when its users never meant to activate the device. Those conversations – which in today’s remote-work environment could very well be work-related – can be reviewed by human contractors working to improve Alexa’s speech recognition if people using their personal Alexa-enabled devices don’t opt out of the process.
For business customers, Amazon said all interactions with Alexa are anonymous and not linked to any individual user. It said by default voice recordings aren’t saved.
“We absolutely see Alexa playing a bigger role in work in the future. Customers tell us how Alexa not only helps them get more done throughout the day, but helps them work smarter, productively and safely,” said Liron Torres, head of Alexa Smart Properties at Amazon.
Google, which similarly allows users to opt out of human review and saving recordings, has had a similar history with devices equipped with its voice-activated Google Assistant. Google also has been known to tap into users’ online activities to better serve them ads.
Similar to Amazon, Google aims to equip workers with productivity tools that may aid with work. For example, users are now able to create workday routines that automatically remind them about the items on their calendars as well as when they should take a break or get a glass of water. The feature was rolled out during the pandemic.
Before the pandemic, workers already were able to use the Google Assistant for tasks like creating to-do lists and calendar items, storing reminders and automatically joining video meetings on the company’s smart display called the Nest Hub Max, which began supporting Zoom at the end of last year.
Facebook also wants in on the work-world action, but it, too, has had its own privacy issues.
The company, which recently changed its corporate name to Meta, said early on in the pandemic that it reprioritized its plans for Portal. The device, powered by its own virtual assistant – called the Facebook Assistant – and Alexa, resembles a tablet and features a smart speaker and camera that follows people around the room as they talk.
“We had a number of users who saw that their workday consisted of going in and out of different video services,” said Micah Collins, director of product management at Meta. “We saw actual pain points of many Portal users and focused on that.”
Portal users can now use their devices to make video calls on services including BlueJeans, GoToMeeting, Webex and Zoom. They can also integrate their work calendars from services like Google and Microsoft. And companies can roll out and manage a group of devices for their employees with special work accounts.
But in 2019, Facebook was slapped with a history-making $5 billion fine from the Federal Trade Commission for violating consumers’ privacy. The FTC probed the company after the social media giant left up to 87 million users’ data vulnerable to data analytics firm Cambridge Analytica ahead of the 2016 U.S. presidential election. Since then, the company has suffered several massive breaches of user data and has come under scrutiny for the amount of data it collects about its users.
Meanwhile, consumer electronics giant Samsung is hoping to sell more connected displays that do it all in a stand-alone device, so workers can use software like Microsoft 365, complement their laptop and desktop screens and watch streaming entertainment, too. That means adding more screens in the homes of more workers. More screens mean more connections and more risk, security experts say.
They say consumers should heed caution when mixing their personal and professional data and devices. Workers could be creating new opportunities for criminals to steal sensitive company information, even if it’s seemingly well-protected by security software.
Michael Siegel, director of cybersecurity at MIT Sloan, said it could be as simple as someone hacking a person’s smart thermostat or smoke alarm. In that case, all the attacker has to do is raise the temperature in the home or set off the smoke alarm in an attempt to get the owner to leave a device behind for them to steal.
“The more we’re connected to our office, the more exposed we are to social engineering,” he said. “All of these are things that can cause you to let your guard down.”
Beyond physically stealing a device – and all of its data – criminals also will have more ways to get to sensitive corporate data as people increase the devices that connect to it, experts say. Ari Lightman, professor of digital media and marketing at Carnegie Mellon University’s Heinz College, said it boils down to one simple fact: “If there’s a mechanism to exploit, people will look to do that.”
But workers may be increasing their exposure not only to hackers but also, potentially, to their employers. Adam Wright, a senior analyst at IDC, said company-issued smart home devices like Facebook’s Portal should be treated much like company-issued laptops, which can easily be monitored by employers. Facebook employees, for example, were offered free Portal devices after the outbreak of the pandemic to help with virtual meetings. But the devices should be handled with caution, Wright suggested.
“Employers have every right to monitor their employees with their devices,” Wright said. So “in the middle of the night, it’s not just Amazon listening to me, but possibly my boss and IT department.”
Workers who are using their smart home devices for work would be wise to do a few things, said Pardis Emami-Naeini, a researcher at the University of Washington’s Security and Privacy Research Lab. First, they need to familiarize themselves with the privacy and security settings of their smart home devices to understand what they may need to do to protect their data, and that of their company, as best they can. They should also regularly update the device, if it doesn’t update automatically, to close security vulnerabilities, just as they would with their smartphones.
“Now that the purpose [of the device] is different, they shouldn’t assume that the normal practices of their daily behavior is going to work,” Emami-Naeini said. “The purpose is different and the data they share is more sensitive.”
Check Point’s Ostrowski said the responsibility not only lies with the worker but with the employer, which should be doing everything to safeguard its data and network even if a person’s personal device is compromised.
“It’s less about how do I secure or chase after 10,000 employees to make sure their digital hygiene is good. It’s more about how do I make sure when they come to the corporate environment, they can’t bring a malicious footprint with them,” he said.
Janneke van Ooyen, a community manager of a mobile gaming company in Barcelona, who recently outfitted her home with eight smart lights, a smart sound bar and an Amazon Echo Dot, said she’s hesitant about using these devices for work purposes.
“Since the data is very sensitive and you don’t know where it’s stored – that would be my biggest gripe for not” using it, she said. “We work with a lot of licensers, so if anything got out, that would be really bad.”
In 2018, John Rael, a volunteer track coach in Taos, N.M., was on trial for allegedly raping a 14-year-old girl when his lawyer made an unusual request.
He wanted the judge to admit evidence from “EyeDetect,” a lie-detection test based on eye movements that Rael had passed.
The judge agreed, and five of the 12 jurors wound up voting not to convict. A mistrial was declared.
EyeDetect is the product of the Utah company Converus. “Imagine if you could exonerate the innocent and identify the liars . . . just by looking into their eyes,” the company’s YouTube channel promises. “Well, now you can!” Its chief executive, Todd Mickelsen, says they’ve built a better truth-detection mousetrap; he believes eye movements reflect their bearer far better than the much older and mostly discredited polygraph.
Its critics, however, say the EyeDetect is just the polygraph in more algorithmic clothing. The machine is fundamentally unable to deliver on its claims, they argue, because human truth-telling is too subtle for any data set.
And they worry that relying on it can lead to tragic outcomes, like punishing the innocent or providing a cloak for the guilty.
EyeDetect raises a question that draws all the way back to the Garden of Eden: Are humans so wired to tell the truth we’ll give ourselves away when we don’t?
And, to pose a more 21st-century query: Can modern technology come up with the tools to detect those tells?
An EyeDetect test places a subject in front of a monitor with a digital camera and, as with the polygraph, lobs generic true-false queries like “Have you ever hurt anybody?” to establish a baseline. Then come specific questions. If the subject’s physical responses are more demonstrative there, they are presumed to be lying; less demonstrative, and they’re telling the truth. The exact number of flubbed questions that constitutes a failure is governed by an algorithm; the computer spits out a yes-or-no based on an adjustable formula.
Where the polygraph measures blood pressure, breathing and sweat to determine the flubbing, EyeDetect looks at factors like pupil dilation and the rapidity of eye movement. “A polygraph is emotional,” Mickelsen said. “EyeDetect is cognitively based.” He explains the reason the company believes eye movements would be affected: “You have to think harder to lie than to tell the truth.”
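Converus does not publish its formula, so purely as a hypothetical illustration of the baseline-versus-probe comparison described above, the toy sketch below scores a subject by asking how far their responses to the specific questions deviate from their own baseline; every variable name, measurement and threshold here is invented for illustration, not drawn from EyeDetect.

```python
# Purely hypothetical toy model of a baseline-vs-probe comparison; Converus's
# actual EyeDetect algorithm is proprietary and is NOT reproduced here.
from statistics import mean, stdev

def score_subject(baseline_responses, probe_responses, cutoff=2.0):
    """Flag a subject as 'deceptive' if their probe responses are, on average,
    markedly more demonstrative than their own baseline (z-score above cutoff).

    baseline_responses, probe_responses: lists of one physiological measurement
    (e.g. pupil dilation in millimeters) per question.
    cutoff: an adjustable threshold, standing in for the "adjustable formula"
    the article mentions.
    """
    base_mean = mean(baseline_responses)
    base_sd = stdev(baseline_responses) or 1e-9   # avoid division by zero
    z_scores = [(r - base_mean) / base_sd for r in probe_responses]
    avg_z = mean(z_scores)
    return ("deceptive" if avg_z > cutoff else "truthful"), avg_z

# Invented example data: dilation readings on neutral questions vs. specific ones.
verdict, z = score_subject([3.1, 3.0, 3.2, 3.1], [3.6, 3.8, 3.7])
print(verdict, round(z, 2))
```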
EyeDetect plays into a form of techno-aspirational thinking. Our Web browser already pitches us a vacation we swear has only lived in our minds while dating apps serve up a romantic partner dreamed up in our hearts. Surely an algorithm can also peer into our soul?
But experts say such logic may not have much basis in science.
“People have been trying to make these predictions for a long time,” said Leonard Saxe, a psychologist at Brandeis University who has conducted some of the leading research in the field of truth-detection. “But the science has not progressed much in 100 years.”
Like most renowned experts, he has not reviewed EyeDetect’s research specifically. But, he says, “I don’t know of any evidence that eye movements are linked to deception.”
When it comes to the polygraph, experts have a long history of declaring failure.
The machine, which celebrates its centennial this year, continues to be used in areas like police interrogations, government security-clearance investigations and sex-offender monitoring. The market is valued at as much as $2 billion, powered by many federal and local offices using it for hiring purposes.
Yet the American Psychological Association takes an unequivocal position – “Most psychologists agree that there is little evidence that polygraph tests can accurately detect lies,” it declares on its website. Good liars, after all, can cover up tics, while nervous truth-tellers might drive the machine berserk.
A 1988 federal law bans private employers from administering polygraphs, albeit with loopholes. Most states don’t accept them as evidence in court (New Mexico is famously looser), and a 1998 Supreme Court ruling held that federal criminal defendants have no constitutional right to introduce them.
If it turns out to be more accurate than a polygraph, EyeDetect can conjure a number of useful consequences – and a few dystopic ones. What grisliness awaits if anyone could know if you were telling the truth just by looking at you? That lie to spare your Aunt Lily’s feelings at Christmas would be out the window; so would being a teenager.
If it proves hollow, though, an entirely different danger lurks: With its veneer of authority, many legal experts worry, it could lead law enforcement, private employers, government agencies and even some courts even further down the wrong path than the polygraph.
“It’s the imprimatur that’s the issue: we tend to believe that where there’s science involved it’s reliable,” said Loyola Marymount University law professor Laurie Levenson, who has studied the issue.
She said she was concerned that people wouldn’t get clearances for jobs or would otherwise be held accountable for things they did not do because of false positives. She noted it also could help the guilty get away.
There are historical reasons for skepticism about any new truth-telling tech. Like diet sweeteners to the soft-drink industry, such innovations come along in the legal world at regular intervals. But they often fall short of their promise. About 15 years ago, the functional MRI, which posited that blood flow to the brain could be the key to truth-detection, enjoyed a period of buzz. But the device largely did not meet scientific standards for broad usage, and the procedure’s cost and intensiveness further inhibited wide-scale adoption.
The p300 guilty-knowledge method championed by the longtime Northwestern professor Peter Rosenfeld, who asserted that the future of truth-detection lay in brain waves, gained some enthusiasm from the scientific community; it was the subject of several dozen outside academic articles, a number of them with positive results, and has won the tentative support of Henry Greely, a Stanford Law professor and one of the leading experts on tech and the law. Among other advantages, it involves an EEG that is fairly cheap and easy to use.
Still, Rosenfeld died last winter with the method not in widespread use. The work is continued by his lab, which has had a group of graduate students working on the p300, and is emphasized in the field by people like John Meixner, an assistant U.S. attorney in Michigan and a protege of Rosenfeld’s.
On a recent afternoon, Mickelsen sat in his office in the tech corridor of Lehi, Utah, and, over Zoom, coolly screen-shared a series of graphs and charts to make the case why EyeDetect is different from failed past technologies.
EyeDetect’s accuracy rate is determined through a simulated-crime interrogation. A group of “innocent” and “guilty” subjects are told whether or not to commit a simulated petty-cash theft in a manufactured environment and are then administered the ocular test. The rate at which the machine correctly predicts each person’s truth-telling status – Converus researchers already know the right answer for each subject – is between 83% and 87%, the company says.
That’s about the same as a polygraph tends to achieve in its own tests, though the polygraph can discard up to 10% of borderline results as “inconclusive,” while EyeDetect gives a result on every test, which arguably makes its comparable accuracy figure the more impressive one. Mickelsen also says the system is preferable to a polygraph because, by entirely automating the test, it avoids the possibility of human bias.
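A small worked example shows why the treatment of inconclusive results matters when comparing the two figures (the numbers below are invented for illustration, not Converus or polygraph data):

```python
# Invented numbers, purely to illustrate the denominator effect described above.
tests_given = 100
inconclusive = 10          # polygraph-style: borderline results thrown out
correct = 78               # correct calls among the tests that were scored

polygraph_style = correct / (tests_given - inconclusive)   # 78 / 90  ≈ 0.87
scored_everything = correct / tests_given                  # 78 / 100 = 0.78

print(f"accuracy with inconclusives discarded: {polygraph_style:.0%}")
print(f"accuracy if every test must be scored: {scored_everything:.0%}")
```

In other words, the same raw performance reports a higher percentage when borderline cases can be set aside, which is why a system that must call every test looks comparatively stronger at the same headline number.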
The man at EyeDetect’s scientific core is John Kircher, a now-retired University of Utah professor who has consulted for the CIA and had his lab funded by the Defense Department. Kircher had been researching and writing software for lie-detection technologies for decades when, in the early 2000s, he came across a University of New Hampshire professor who was researching how our eye patterns change during reading.
“Suddenly it hit me: all the software I had developed for 30 years could be applied to this problem,” Kircher said. He wedded the two and, for much of the past two decades, has been perfecting EyeDetect, now as Converus’s chief scientist.
Kircher says that the software for his eye tracker captures some 350,000 eye-movement measurements – including “fixations,” the milliseconds-long pauses between words – over a 25-minute test; four metrics are taken every second. (Converus also has the EyeDetect+, which adds a computer-administered variation on a traditional polygraph.)
EyeDetect has won its supporters in the field. Law-enforcement customers laud the system as smoother than a traditional polygraph.
“People will come in nervous because they’re expecting what they see on TV, where you’re hooked up to this machine and sweating and it just seems really invasive,” said Lt. Josh Hardee of the Wyoming Highway Patrol. “This is just clean and quick.”
Hardee’s department has used EyeDetect to screen more than 150 prospective job candidates in the last two years. His department and others pay around $5,000 for the EyeDetect system – which consists of a high-definition camera, a head-rest and software that generates the questions and takes and calculates the responses – and then $80 to Converus to score each individual test. (They must be trained to use the system as well.) The machine reduces liar false alarms, Hardee says, because anxiety plays less of a role.
Other public officials have also been persuaded. The Tucson, Ariz., fire department uses EyeDetect to screen employees. The machine is also put to use, Converus says, by law-enforcement or corrections departments in states including Idaho, New Hampshire, Washington, Utah, Ohio and Connecticut. Defense lawyers in the ongoing case of Jerrod Baum, accused of killing two teens in Utah, have petitioned the judge to allow EyeDetect. The jury in the Rael case appeared amenable too. (The defendant later pleaded guilty and avoided jail time; he was given four years’ probation.)
But many experts are not swayed by the enthusiasm. Saxe makes the point many scientists and academics do: even if eye movements are fundamentally different under different sets of circumstances, there’s no way to directly link them to lying. In fact, they could well have to do with just the fact that the subject is taking a test.
“Fear of detection is not a measure of deception,” he said. At heart, the issue may come down to the 21st-century desire to automate and digitize a process – human emotion and motivation – that fundamentally resists the enterprise.
Stanford’s Greely says EyeDetect doesn’t dislodge his broader skepticism about truth-telling tech, either.
“I see no reason to believe that this works well, or, really, at all,” he said in an email. “Show me large, well-designed impartial studies and I’ll be interested.”
In a phone interview he noted that while it’s not hypothetically impossible for the body to undergo particular physiological changes in response to lies, the burden of proof lies heavily on the new technologies. The simulated-crime tests that EyeDetect relies on, he said, contain a fundamental flaw: people instructed to lie in a test situation might well react differently, and more demonstratively, than a criminal in the real world.
He also noted a lack of published research by people not affiliated with Kircher or his lab.
Kircher says the Defense Department is currently conducting a study of ocular technologies which he hopes will conclude by the summer. A DoD spokesman did not reply to a request for comment.
Not all outside experts are unmoved, however.
“All truth-detection methods are imperfect. But here’s the reason it’s worth relying – not over-relying, but relying – on them,” said Clark Freshman, a law professor at UC Hastings who specializes in lie detection, expressing optimism about the EyeDetect.
“People can do even worse than a coin-flip at telling whether someone is lying. So if you get better results – even if it’s just 70 or 80% accurate – than if we didn’t use it, and it’s generally free from bias, I don’t understand why you wouldn’t make it part of the picture.”
He said studies showed juries do not overly rely on these technologies, but instead include them as one factor among many.
There is also a particular abuse concern with EyeDetect. Without the human element, there may be less bias. But the device could also be intentionally set at algorithmic levels that would make it difficult to pass – at least with the polygraph there’s a transcript of human conversations. (Converus says that it trains customers on how to use the machine and, while it acknowledges that it allows them to set their own “base rate of guilt,” a spokesman also says that “if we observe a BRG set outside of a reasonable range, we make an inquiry.”)
Even if ocular technology can’t actually root out fibbers, there may be some value in how it could discourage them in the first place. In other words, EyeDetect may not need whiz-bang technology. It just needs to look high-tech. As Brad Bradley, the fire chief in Tucson, says in materials from Converus: “A guilty applicant, such as one with a drug history, looks at it and thinks, ‘I’m not going to pass. So, I’m not going to even apply.'”
Still, the odds of this moving from realms like hiring to mainstream courtrooms are slim. Plato said the eye is the window to the soul; he was silent on whether it was a ticket out of jail.
“This is going to be an uphill climb in almost any court, especially after the debacle that was the polygraph,” said Loyola Marymount’s Levenson.
Nor, in her opinion, do they deserve to scale the hill. “Truth-telling should be determined by people, not machines,” she said.
After a teacher posted images of her resignation letter on social media, saying she was quitting due to needless paperwork, the deputy leader of the Democrat Party said this pointed to the failure of Thailand’s education system.
Prof Dr Kanok Wongtrangan said he has often spoken up about teachers’ workload and how it is a dangerous trap that is pulling down the quality of Thai education.
He pointed out that teachers are so overburdened with administrative jobs that they do not have time to focus on teaching.
Hence, he said, he has five solutions:
• Cut down the time students spend listening to lectures and instead work on developing an analytical mindset.
• Remove unnecessary subjects and add topics that are relevant to students’ lives.
• Cut down on homework and motivate youngsters to do more research.
• Reduce tests and exams.
• Cut down on unnecessary protocols for teachers to follow so they can spend more time with their students.
Hackers compromised the Federal Bureau of Investigation’s external email system on Saturday, sending spam emails to potentially thousands of people and companies with a faked warning of a cyberattack.
The FBI said in a statement that the fake emails were sent from the Law Enforcement Enterprise Portal system used to communicate with state and local officials, not part of the FBI’s larger corporate email service. “No actor was able to access or compromise any data or (personally identifiable information) on FBI’s network,” the bureau said. “Once we learned of the incident we quickly remediated the software vulnerability, warned partners to disregard the fake emails, and confirmed the integrity of our networks.”
Cybersecurity experts said the fact that the email didn’t include any malicious attachments could indicate the hackers stumbled across a vulnerability in the FBI portal and didn’t have a particular plan to exploit it.
“It could have just been a group or individuals looking to get some street cred to tout on underground forums,” said Austin Berglas, a former assistant special agent in charge of the FBI’s New York office cyber branch, who is not involved in any government investigation of the incident. “I would think that it would be some sort of criminal group or some sort of ‘hacktivist’ group,” rather than a coordinated state-backed attack.
The compromised system was an unclassified server used by FBI personnel to communicate outside of the organization, and the hackers didn’t appear to have gained access to internal databases containing state secrets or classified information, said Berglas, who is now global head of professional services at cybersecurity firm BlueVoyant.
A copy of the alleged spam email was posted on Twitter by the Spamhaus Project, an international watchdog that tracks spam and related cyberthreats such as phishing, malware and botnets. The subject line was: “Urgent: Threat actor in systems,” and the email claimed to be a warning from the Department of Homeland Security about a cyberattack.
Spamhaus, which analyzed the emails’ metadata, wrote on Twitter that the fake emails were “causing a lot of disruption because the headers are real, they really are coming from FBI infrastructure.” They were apparently sent to thousands of addresses, at least some taken from the database of the American Registry for Internet Numbers, the nonprofit responsible for managing the distribution of internet addresses in the North American region.
The email made reference to an international hacker group called the Dark Overlord, which allegedly steals data and demands big ransoms for its return. The group purportedly stole students’ records in several U.S. states and episodes of Netflix shows in 2017. A British man was sentenced last year to five years in prison for his role in the hacking group.
The email claimed that the “threat actor” appeared to be cybersecurity expert Vinny Troia. Troia published an investigation of the Dark Overlord last year.
Troia couldn’t immediately be reached for comment. On Twitter, he speculated that he may have been the subject of what he called a smear attack. “Should I be flattered that the kids who hacked the @FBI email servers decided to do it in my name?” he wrote.
Although online scammers often create fake emails purporting to be from official sources, it is highly unusual for a hacker to penetrate a government server – and experts say the incident highlights the vulnerabilities of email communications.
Russian government hackers last year breached the Treasury and Commerce departments, along with other U.S. government agencies, as part of a global espionage campaign, and Chinese government hackers are believed to have compromised dozens of U.S. government agencies.
“It could have been a lot worse,” said Berglas. “When you have ownership of a trusted dot-gov account like that, it can be weaponized and used for pretty nefarious purposes. (The FBI) probably dodged a bullet.”
Four more astronauts blasted into orbit Wednesday, continuing a historic year of human spaceflight in which a diverse array of people have flown on several different spacecraft to varying parts of the increasingly popular neighborhood just outside Earth’s atmosphere.
SpaceX’s Falcon 9 rocket lifted off at 9:03 p.m. Eastern time, carrying a crew of four, including three NASA astronauts and one European, on what is expected to be a 22-hour journey to the International Space Station, where they are to stay for about six months.
The launch from the Kennedy Space Center in Florida was the fifth time that SpaceX has flown humans to orbit and the fourth time it has done so under its contract with NASA. In September, it flew four civilians in what was called the Inspiration4 mission – a three-day flight in the SpaceX Dragon capsule that circled the globe every 90 minutes.
The launch came less than 48 hours after SpaceX had returned the previous astronaut crew from the space station to a picture-perfect splashdown in the Gulf of Mexico – evidence that SpaceX is gaining prowess in multiple aspects of its role as NASA’s primary way to transport goods and people to the space station.
After reaching orbit, NASA astronaut Raja Chari told mission control: “It was a great ride. Better than we expected.”
The SpaceX launch director told the crew, which will continue the mission on Veterans Day: “It was a pleasure to be part of this mission with you. Enjoy your holiday amongst the stars. We’ll be waving as you fly by.”
The flight comes as a number of companies are working to fly private citizens to space – from the actor William Shatner, 90, who became the oldest person to reach the edge of space, to Oliver Daemen, a student from the Netherlands, who at 18 became the youngest.
Wednesday’s launch, dubbed Crew-3, is commanded by Chari, an Air Force colonel and test pilot who is making his first trip to space. He was joined by Kayla Barron, a Navy lieutenant commander who served on a nuclear submarine; Tom Marshburn, a physician who has flown to space twice before, once on the space shuttle and once on the Russian Soyuz; and European astronaut Matthias Maurer, an engineer from Germany. It is also Barron’s and Maurer’s first trip to space.
The three rookies became the 599th, 600th and 601st people to fly past the 50-mile edge of space, NASA said. The list of space travelers is growing in part because of the efforts of Jeff Bezos’s Blue Origin and Richard Branson’s Virgin Galactic, which take paying customers just past the edge of space in suborbital trips that fly up and then fall back down to Earth.
Russia continues to fly astronauts on its Soyuz spacecraft and recently said that it would allow its cosmonauts to fly on SpaceX Dragon capsules. China also is flying humans and recently sent up a crew of three to the space station it is assembling in Earth orbit. And NASA’s Orion spacecraft is scheduled to launch early next year without any astronauts onboard on a trip that would go around the moon in preparation for a human landing, perhaps as soon as 2025 under a new schedule NASA announced on Tuesday.
Meanwhile, Boeing is working to develop a spacecraft that would fly astronauts to the space station as part of NASA’s “commercial crew” program. But its program has suffered through all sorts of problems and delays. On a test flight without astronauts at the end of 2019, the spacecraft suffered a software problem that forced controllers to truncate the mission and forgo a docking with the station.
Boeing decided to redo the test flight and take a charge of $410 million.
Then over the summer, the Starliner capsule suffered another problem ahead of that do-over, this time with valves that remained stuck in the service module. The flight never got off the ground, and Boeing said last month that it would take another charge, this time of $185 million, to cover the costs of the delay.
During a news conference last month, John Vollmer, Boeing’s program manager for the commercial crew program, declined to say how much the problem would cost the company. But he said “NASA would not bear any responsibility for those costs that are within scope of our contract. . . . So, we’re not expecting any charge to the government from that side.” He added that the company would not back away from the program as a result of the additional costs. “We are 100 percent committed to fulfilling our contract with the government, and we intend to do that,” he said.
As it continues to solidify its status as NASA’s premier human spaceflight partner, SpaceX, the California company founded by Elon Musk, is also working toward flying more private citizens. It has a mission commissioned by Axiom Space, a Houston-based company, set to take three civilians and a former NASA astronaut, who would serve as their guide, to the space station for about a week.
As those efforts continue, many believe the ranks of spacefarers will increase dramatically.
“Six hundred in 60 years, it makes for 10 people per year,” Maurer said during a preflight news conference. “But I think in the next few years, we’ll see an exponential rise. Now we’re entering the era for commercial spaceflight.”
Before SpaceX flew its first test flight with a pair of NASA astronauts last year, the space agency had spent nearly a decade after the space shuttle was retired paying for seats on the Russian Soyuz.
Today, with SpaceX, “there are more flight opportunities” for NASA astronauts, said Garrett Reisman, a former NASA astronaut and a professor at the University of Southern California’s school of engineering. “One of the positive impacts is fewer people having to train over in Russia. That was a major strain and stress on families.”
The Crew-3 mission is slated to dock with the space station at 7:10 p.m. Eastern time Thursday. While onboard the orbiting laboratory, the astronauts will be conducting what NASA says is “new and exciting scientific research in areas such as materials science, health technologies, and plant science to prepare for human exploration beyond low Earth orbit and benefit life on Earth.”
The capsule is a new addition to SpaceX’s Crew Dragon fleet and has never flown to space before. It’s been named Endurance.
One critical adjustment has been made to this spacecraft: a tube that carries urine to a storage tank has been welded in place. The change was made after technicians discovered on another spacecraft that the tube had pulled away from the tank, allowing urine to collect under the spacecraft’s floor.