Categories
Wired

Want Some Eco-Friendly Tips? A New Study Says No, You Don’t

This story originally appeared on Grist and is part of the Climate Desk collaboration. Need something else for your growing to-do list? Environmentalists have about a zillion things for you, give or take. Chances are that you’ve heard a lot of them already: Ditch your car for a bike, take fewer flights, and go vegan. Oh, and install solar panels on your roof, dry your laundry on a clothesline, use less water when you brush your teeth, take shorter showers … hey, where are you going? We’re just getting started! For decades, we’ve been told that the solution to our planetary crisis starts with us. These “simple” tips are so pervasive, they usually go unquestioned. But that doesn’t mean that most people have the time or motivation to heed them. In fact, new research suggests that hearing eco-friendly tips like these actually makes people less likely to do anything about climate change. Oops! Experts say there are better ways to get people to adopt green habits—and they don’t involve nagging or guilt-tripping.

In the study—titled “Don’t Tell Me What to Do”—researchers at Georgia State University surveyed nearly 2,000 people online to see how they would respond to different messages about climate change. Some saw messages about personal sacrifices, like using less hot water. Others saw statements about policy actions, like laws that would limit carbon emissions, stop deforestation, or increase fuel efficiency standards for cars. The messenger—whether scientist or not—didn’t make much of a difference. Then the respondents were asked…

Categories
The Next Web

How I’d study machine learning — if I’d be starting out today

I’m underground, back where it all started. Sitting at the hidden cafe where I first met Mike. I’d been studying in my bedroom for the past nine months and decided to step out of the cave. Half of me was concerned about having to pay $19 for breakfast (unless it’s Christmas, driving Uber on the weekends isn’t very lucrative), the other half about whether any of this study I’d been doing online meant anything.

In 2017, I left Apple, tried to build a web startup, failed, discovered machine learning, fell in love, signed up for a deep learning course with zero coding experience, emailed the support team asking what the refund policy was, didn’t get a refund, spent the next three months handing in the assignments four to six days late, somehow passed, decided to keep going, and created my own AI Masters Degree.

Then, nine months into my AI Masters Degree, I met Mike, we had coffee, I told him my grand plan: use AI to help the world move more and eat better. He told me I should meet Cam, I met Cam, I told Cam I’m going to the US, he said why not stay here, come in on Thursday, okay, went in on Thursday for a one-day-a-week internship, and two weeks later was offered a role as a junior machine learning engineer at Max Kelsen.

14 months into my machine learning engineer role, I decided to leave and try it on my own. I wrote an article about what I’d learned, Andrei found…

Categories
The Next Web

Study reveals big regional divides in views on AI risks

A new study of opinions on using AI in decision-making shows views on the risks and benefits vary greatly between regions and nations. Researchers from the Oxford Commission on AI and Good Governance revealed the findings after analyzing survey data from a sample of 154,195 respondents in 142 countries collected for the 2019 World Risk Poll. One question asked respondents whether “machines or robots that can think and make decisions, often known as artificial intelligence” will mostly help or harm people in their country in the next 20 years.

Worries that it will be mostly harmful were highest in Latin America and the Caribbean (49% of respondents), North America (47%), and Europe (43%), and lowest in East Asia (11%) and Southeast Asia (25%). People in China appear particularly enthusiastic about the prospects. Despite numerous reports of Xi Jinping’s government using AI to foster totalitarian rule, only 9% of respondents in the country said the tech will be mostly harmful, while 59% believe it will be mostly beneficial.

The study also explored how opinions vary by profession. Over 40% of construction, manufacturing, and service workers view AI as mostly harmful, which is unsurprising given their potential vulnerability to automation. The most optimistic profession was executives in business or government, 47% of whom believe AI will be mostly helpful. Office workers, professionals such as doctors or engineers, and agricultural workers were also more likely to be positive about the prospects. The study team says further research is required to explain the variations and whether they relate…

Categories
Mashable

Study reveals the simple way people get around Facebook’s fact-checking AI

Recently, Facebook has been taking a harder stance on misinformation. The company banned conspiracy theory content and cracked down on coronavirus misinformation. But it’s still not enough. According to a recent study by the non-profit advocacy group Avaaz, Facebook is failing in a major, basic way. Facebook Pages that spread misinformation are finding their way around one of the platform’s most important tools for fighting fake news: its AI system.

When Facebook’s fact-checkers debunk a claim in a post, its AI is supposed to flag and label alternative versions of the post spreading the same misinformation. But the study says Pages are getting around these fact-checks. How? By slightly tweaking the photos and memes used to spread misinformation. Avaaz’s researchers looked into 119 “repeat misinformers” – pages that have spread misinformation a minimum of three times – to understand how these pages get around Facebook’s AI detection. Turns out, all they have to do is change the background color or font on the photo or meme they’re sharing. They can also change the location of the text on the meme or try cropping it.

Below is an example from the study showing two pieces of content spreading the same fact-checked claims. The image on the left needed only a different format and text placement from the image on the right to avoid Facebook’s fact-check label. A fact-checked meme could easily avoid a Facebook warning label by just tweaking some attributes. Another workaround is to…
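Avaaz doesn’t detail Facebook’s matching internals, but the evasion makes sense if duplicates are detected by comparing image fingerprints. A minimal Python sketch (the exact-hash matcher below is a hypothetical stand-in, far cruder than Facebook’s real system) shows why even a one-pixel tweak defeats exact matching:

```python
import hashlib

def image_fingerprint(pixels: bytes) -> str:
    """Naive exact fingerprint: hash the raw pixel bytes.
    (Hypothetical stand-in for a real duplicate-detection system.)"""
    return hashlib.sha256(pixels).hexdigest()

# A fact-checked meme, modeled here as raw pixel bytes.
original = bytes([200] * 64)  # e.g. a uniform light background
fingerprint = image_fingerprint(original)

# A "repeat misinformer" tweaks one attribute: a slightly
# different background color on otherwise identical content.
tweaked = bytes([199] + [200] * 63)

# Exact matching no longer recognizes the duplicate.
print(image_fingerprint(tweaked) == fingerprint)  # False
```

Production systems use perceptual hashes that tolerate small pixel-level changes, which is presumably why the pages in the study resort to larger edits such as recoloring backgrounds, moving text, and cropping, all of which shift many pixels at once.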

Categories
The Next Web

Dutch predictive policing tool ‘designed to ethnically profile,’ study finds

A predictive policing system used in the Netherlands discriminates against Eastern Europeans and treats people as “human guinea pigs under mass surveillance,” new research by Amnesty International has revealed. The “Sensing Project” uses cameras and sensors to collect data on vehicles driving in and around Roermond, a small city in the southeastern Netherlands. An algorithm then purportedly calculates the probability that the driver and passengers intend to pickpocket or shoplift, and directs police towards the people and places it deems “high risk.”

The police present the project as a neutral system guided by objective crime data. But Amnesty found that it’s specifically designed to identify people of Eastern European origin — a form of automated ethnic profiling. The project focuses on “mobile banditry,” a term used by Dutch law enforcement to describe property crimes, such as pickpocketing and shoplifting. Police claim that these crimes are predominantly committed by people from Eastern European countries — particularly those of Roma ethnicity, a historically marginalized group. Amnesty says law enforcement “explicitly excludes crimes committed by people with a Dutch nationality from the definition of ‘mobile banditry’.”

The watchdog discovered that these biases are deliberately embedded in the predictive policing system: The Sensing project identified vehicles with Eastern European licence plates in an attempt to single out Roma as suspected pickpockets and shoplifters. The target profile is biased towards designating higher risk scores for individuals with an Eastern European nationality and/or Roma ethnicity, resulting in this group being more likely to be subjected to measures, such as storage of their data in police…

Categories
The Next Web

Blood pressure medicines lower risk of COVID-19 death, study says

At the start of the pandemic, there was concern that certain drugs for high blood pressure might be linked with worse outcomes for COVID-19 patients. Because of how the drugs work, it was feared they would make it easier for the coronavirus to get inside the body’s cells. Nevertheless, many national medical societies advised patients to continue taking their medication. With the potential for a second wave, it was essential to investigate whether patients could safely continue using these drugs. So, our team at the University of East Anglia set out to discover what effect they have on the progress of COVID-19. Instead of putting patients at risk, we found that these medications actually lower the risk of death and severe disease in COVID-19 patients.

Bad outcomes cut by one-third

We pooled data from 19 relevant COVID-19 studies that included patients taking two particular types of blood pressure medication: angiotensin-converting enzyme inhibitors (ACEIs) and angiotensin receptor blockers (ARBs). This allowed us to look at the outcomes of more than 28,000 COVID-19 patients to assess the effects of these drugs. ACEIs and ARBs work by acting on the renin-angiotensin-aldosterone system (RAAS), which is essential for regulating blood pressure and the balance of fluids and electrolytes. These drugs were also thought to potentially increase the expression of a protein found on the surface of cells called angiotensin-converting enzyme 2 (ACE2). ARBs like valsartan help lower blood pressure by widening the blood…
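The excerpt doesn’t state the pooling method, but a common approach in meta-analysis is fixed-effect, inverse-variance weighting of per-study log odds ratios. A sketch with fabricated numbers (the odds ratios and standard errors below are illustrative, not taken from the 19 studies):

```python
import math

def pooled_odds_ratio(studies):
    """Fixed-effect inverse-variance pooling of odds ratios.
    Each study is (odds_ratio, standard_error_of_log_or)."""
    weights = [1 / se ** 2 for _, se in studies]
    log_ors = [math.log(or_) for or_, _ in studies]
    pooled_log = sum(w * l for w, l in zip(weights, log_ors)) / sum(weights)
    return math.exp(pooled_log)

# Hypothetical per-study results: an odds ratio below 1 means a lower
# risk of death among patients taking ACEIs/ARBs in that study.
studies = [(0.70, 0.20), (0.55, 0.30), (0.80, 0.15)]
print(round(pooled_odds_ratio(studies), 2))  # 0.73
```

Precise studies (small standard errors) get large weights, so the pooled estimate leans toward them; working on the log scale keeps the averaging symmetric around an odds ratio of 1.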

Categories
VentureBeat

Problematic study on Indiana parolees seeks to predict recidivism with AI

Using AI to uncover “risky” behaviors among parolees is problematic on many levels. Nevertheless, researchers will soon embark on an ill-conceived effort to do so at Tippecanoe County Community Corrections in Indiana. Funded by a grant from the Justice Department and in partnership with the Tippecanoe County Sheriff’s Department, Florida State University, and the University of Alabama-Huntsville, researchers at Purdue University Polytechnic Institute plan to spend the next four years collecting data from the bracelets of released prisoners. The team aims to algorithmically identify “stressful situations and other behavioral and physiological factors correlated with those individuals at risk of returning to their criminal behavior.” The researchers claim their goal is to identify opportunities for intervention in order to help parolees rejoin general society.

But the study fails to acknowledge the history of biased decision-making engendered by machine learning, like that of systems employed in the justice system to predict recidivism. A 2016 ProPublica analysis, for instance, found that Northpointe’s COMPAS algorithm was twice as likely to misclassify Black defendants as presenting a high risk of violent recidivism as white defendants. In the nonprofit Partnership on AI’s first-ever research report last April, the coauthors characterized AI now in use as unfit to automate the pretrial bail process, label some people as high risk, or declare others low risk and fit for release from prison. According to Purdue University press materials, the researchers’ pilot program will recruit 250 parolees as they are released, half of whom will serve as a control group…
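The ProPublica finding concerns a specific fairness metric: the false positive rate, i.e. the share of people who did not reoffend but were still labeled high risk, computed separately per group. A toy illustration with fabricated labels and outcomes (not COMPAS data):

```python
def false_positive_rate(predicted_high_risk, reoffended):
    """FPR: fraction of non-reoffenders wrongly labeled high risk."""
    flags_for_negatives = [p for p, y in zip(predicted_high_risk, reoffended) if not y]
    return sum(flags_for_negatives) / len(flags_for_negatives)

# Fabricated data (1 = labeled high risk) and outcomes
# (1 = actually reoffended) for two groups of six people each.
group_a_pred = [1, 1, 0, 1, 0, 1]
group_a_true = [1, 0, 0, 1, 0, 1]   # three non-reoffenders, one flagged
group_b_pred = [0, 1, 0, 0, 1, 1]
group_b_true = [0, 1, 0, 0, 1, 1]   # three non-reoffenders, none flagged

print(false_positive_rate(group_a_pred, group_a_true))  # 1/3
print(false_positive_rate(group_b_pred, group_b_true))  # 0.0
```

Equal overall accuracy can still hide gaps like this one, which is why audits of risk-scoring tools report error rates per group rather than a single aggregate number.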

Categories
VentureBeat

Michigan University study advocates ban of facial recognition in schools

A newly published study by University of Michigan researchers shows facial recognition technology in schools presents multiple problems and has limited efficacy. Led by Shobita Parthasarathy, director of the university’s Science, Technology, and Public Policy (STPP) program, the researchers say the technology isn’t suited to security purposes and can actively promote racial discrimination, normalize surveillance, and erode privacy while institutionalizing inaccuracy and marginalizing non-conforming students.

The study follows the New York legislature’s passage of a moratorium on the use of facial recognition and other forms of biometric identification in schools until 2022. The bill, which came in response to the launch of facial recognition by the Lockport City School District, was among the first in the nation to explicitly regulate or ban use of the technology in schools. That development came after companies including Amazon, IBM, and Microsoft halted or ended the sale of facial recognition products in response to the first wave of Black Lives Matter protests in the U.S.

The University of Michigan study — a part of STPP’s Technology Assessment Project — employs an analogical case comparison method to look at previous uses of security technology like CCTV cameras and metal detectors as well as biometric technologies, and to anticipate the implications of facial recognition. While its conclusions aren’t novel, the study takes a strong stance against commercial products it asserts could harm students and educators far more than help them. For instance, the coauthors claim that facial recognition would disproportionately target and discriminate against people of color, particularly Black and Latinx communities.…

Categories
Wired

Scientists May Be Using the Wrong Cells to Study Covid-19

By now there’s little doubt about hydroxychloroquine: It doesn’t work for treating Covid-19. But there’s a bigger, more important lesson hidden in the story of its failure—a rarely mentioned, but altogether crucial, error baked into the early research. The scientists who ran the first, promising laboratory experiments on the drug had used the wrong kind of cells. Instead of testing its effects on human lung cells, they relied on a supply of mass-produced, standardized cells made from a monkey’s kidney. In the end, that poor decision made their findings more or less irrelevant to human health. Worse, it’s possible that further research into novel Covid-19 cures will end up being compromised by the same mistake.

The problem began in early February, when the scientific journal Cell Research published data from the Wuhan Institute of Virology suggesting that a pharmaceutical cousin of hydroxychloroquine was “highly effective” at controlling infections with SARS-CoV-2, the virus that causes Covid-19. (In 2005, lab tests of the same drug found it could inhibit the coronavirus that caused the original SARS outbreak.) A separate, full-fledged study from a different Chinese group, which appeared in the journal Clinical Infectious Diseases on March 9, found hydroxychloroquine to be more potent, and has since been cited hundreds of times. About a week after that, a third journal, Cell Discovery, put out the results from another study by the Wuhan group, which concluded that hydroxychloroquine, in particular,…

Categories
Engadget

Apple sponsors a three-year UCLA study on depression and anxiety

The university says it designed the study so that it could conduct it entirely remotely. Privacy was another major consideration. UCLA and Apple say they plan to anonymize any data they collect during the study.

The hope is that the study will lead to a breakthrough that will give healthcare workers a better way to spot the symptoms of depression and prevent potential depressive episodes. As the university notes, how the medical field goes about detecting depression hasn’t changed significantly for more than a century. Much as they’ve done in the past, doctors currently observe patients and ask them how they feel.

“Current approaches to treating depression rely almost entirely on the subjective recollections of depression sufferers. This is an important step for obtaining objective and precise measurements that guide both diagnosis and treatment,” said Dr. Nelson Freimer, director of the UCLA Depression Grand Challenge.

Health has been an area of focus for Apple for the last couple of years, so it’s not a surprise to see an initiative like this from the company. In 2014, the company announced HealthKit, a service that tracks, records, and analyzes your fitness level, as part of iOS 8. More recently, the company detailed the latest features coming to watchOS 7, one of which is sleep tracking.