Categories: The Verge

Researchers become their own lab rats with DIY coronavirus vaccine


Vaccine trials have had a weird week. First, there was the exhilarating kickoff of two massive clinical trials for vaccines created by Moderna and Pfizer. Each company is hoping to recruit 30,000 volunteers to test whether its vaccine is effective and safe. This is normal.

What’s not normal is a bunch of researchers in Boston who have decided to test a DIY coronavirus vaccine on themselves. At least 20 people have mixed together the vaccine and sprayed it up their noses as part of what they’re calling the Rapid Deployment Vaccine Collaborative (Radvac), according to a truly wild MIT Technology Review story from editor Antonio Regalado. Read More

Categories: VentureBeat

Researchers examine the ethical implications of AI in surgical settings


A new whitepaper coauthored by researchers at the Vector Institute for Artificial Intelligence examines the ethics of AI in surgery, making the case that surgery and AI carry similar expectations but diverge with respect to ethical understanding. Surgeons are faced with moral and ethical dilemmas as a matter of course, the paper points out, whereas ethical frameworks in AI have arguably only begun to take shape.

In surgery, AI applications are largely confined to machines performing tasks controlled entirely by surgeons. AI might also be used as a clinical decision support system, and in these circumstances, the burden of responsibility falls on the human designers of the machine or AI system, the coauthors argue. Read More

Categories: VentureBeat

Privacy problems are widespread for Alexa and Google Assistant voice apps, according to researchers


Google Assistant and Amazon Alexa voice app privacy policies are often “problematic” and violate baseline requirements, according to a study coauthored by Clemson University School of Computing researchers. The work, which hasn’t yet been peer-reviewed, analyzed tens of thousands of Alexa skills and Google Assistant actions to measure the effectiveness of their data practice disclosures. The researchers characterize the current state of affairs as “worrisome” and claim that Google and Amazon run afoul of their own developer rules. Read More

Categories: VentureBeat

Carnegie Mellon researchers use Twitch to collect sounds for AI research


Carnegie Mellon researchers designed a live-streaming video game to collect audio from players that’ll populate a database for AI research. The team’s game — Rolling Rhapsody — is specifically designed to be played on Twitch, and it tasks streamers with rolling a ball across a map to collect “treasure” while viewers record sound from their homes via an app.

The researchers believe that recordings of domestic sounds, like the thud of a bedroom door or a coughing fit, could be used to create a range of useful technologies. For instance, Google drew on audio from thousands of its own meetings and YouTube videos to train the noise-canceling algorithm in Google Meet. Meanwhile, a separate team of Carnegie Mellon researchers created a “sound-action-vision” corpus to anticipate where objects will move when subjected to physical force. Read More
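
As a rough illustration of how crowdsourced clips like these might feed a model, here is a minimal sketch in which synthetic audio and made-up labels stand in for the game's recordings: clips are turned into log-spectrogram features and a new clip is classified by nearest centroid. This is not the Carnegie Mellon pipeline, just one plausible starting point.

```python
# Minimal sketch; synthetic audio and labels stand in for the crowdsourced
# recordings described above.
import numpy as np

def log_mag_spectrogram(signal, frame=512, hop=256):
    """Frame the signal and take the log-magnitude FFT of each frame."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    return np.log1p(np.abs(np.fft.rfft(np.stack(frames), axis=1)))

def clip_features(signal):
    """Summarize a clip as its mean spectrum over time (a crude embedding)."""
    return log_mag_spectrogram(signal).mean(axis=0)

# Stand-in "domestic sounds": a low thud versus a higher-pitched burst.
rng = np.random.default_rng(0)
def fake_clip(freq):
    t = np.linspace(0, 1, 16000)
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(t.size)

train = {"door_thud": fake_clip(80), "cough": fake_clip(400)}
centroids = {label: clip_features(sig) for label, sig in train.items()}

query = fake_clip(90)  # unlabeled clip, acoustically closer to the thud
q = clip_features(query)
pred = min(centroids, key=lambda lab: np.linalg.norm(centroids[lab] - q))
print("predicted label:", pred)
```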

Categories: VentureBeat

Intel researchers create AI system that rates similarity of 2 pieces of code


In partnership with researchers at MIT and the Georgia Institute of Technology, Intel scientists say they’ve developed an automated engine — Machine Inferred Code Similarity (MISIM) — that can determine when two pieces of code perform similar tasks, even when they use different structures and algorithms. MISIM ostensibly outperforms current state-of-the-art systems by up to 40 times, showing promise for applications from code recommendation to automated bug fixing.

With the rise of heterogeneous computing — i.e., systems that use more than one kind of processor — software platforms are becoming increasingly complex. Machine programming (a term coined by Intel Labs and MIT) aims to tackle this with automated, AI-driven tools. A key technology is code similarity, or systems that attempt to determine whether two code snippets show similar characteristics or achieve similar goals. Yet building accurate code similarity systems is a relatively unsolved problem. Read More
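
For a sense of what a code similarity system has to beat, here is a deliberately naive baseline, not MISIM's learned approach: it compares two snippets as bags of tokens with cosine similarity, which gives semantically identical functions only a modest score.

```python
# A very rough baseline for code similarity (not MISIM itself): tokenize two
# snippets and compare bag-of-token vectors with cosine similarity.
import math
import re
from collections import Counter

def tokens(code: str) -> Counter:
    """Split code into identifier, number, and operator tokens."""
    return Counter(re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

snippet_a = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
snippet_b = "def total(xs):\n    return sum(xs)"

print(f"token-level similarity: {cosine(tokens(snippet_a), tokens(snippet_b)):.2f}")
# Both functions compute the same thing, yet the surface score is modest;
# that gap is exactly what learned systems like MISIM try to close.
```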

Categories: VentureBeat

Researchers find evidence of bias in recommender systems


In a new preprint study, researchers at the Eindhoven University of Technology, DePaul University, and the University of Colorado Boulder find evidence of bias in recommender systems like those surfacing movies on streaming websites. They say that as users act on recommendations and their actions are added to the systems (a process known as a feedback loop), biases become amplified, leading to other problems like declines in aggregate diversity, shifts in representations of taste, and homogenization of the user experience. Read More
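
The feedback loop the authors describe is easy to reproduce in a toy simulation. In the sketch below, with hypothetical items and click probabilities, a recommender always promotes the currently most-clicked item and users usually accept it; clicks quickly concentrate on a single item, a crude stand-in for the homogenization the study measures.

```python
# Toy simulation (illustrative only) of a recommendation feedback loop:
# always recommending the currently most-clicked item concentrates clicks
# on a shrinking set of items over time.
import random

random.seed(1)
clicks = {f"movie_{i}": 1 for i in range(10)}  # start roughly uniform

for step in range(1000):
    # The recommender favors popular items; users mostly accept the suggestion.
    recommended = max(clicks, key=clicks.get)
    chosen = recommended if random.random() < 0.8 else random.choice(list(clicks))
    clicks[chosen] += 1  # the action is fed back into the system

top = max(clicks.values()) / sum(clicks.values())
print(f"share of clicks on the single most popular item: {top:.0%}")
```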

Categories: VentureBeat

Researchers propose using AI to predict which college students might fail physics classes


In a paper published on the preprint server arXiv.org, researchers affiliated with West Virginia University and California State Polytechnic University investigate the use of machine learning algorithms to identify at-risk students in introductory physics classes. They claim the approach could be a powerful tool for educators and struggling college students alike, but critics argue that such technologies could harm those same students with biased or misleading predictions.
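
One common way to frame such a predictor, though not necessarily the paper's exact setup, is a classifier fit on prior academic records that flags students whose predicted risk is high. The sketch below is a minimal, hypothetical version using synthetic GPA and prior math-grade features with scikit-learn's logistic regression.

```python
# Minimal sketch, not the paper's actual model: logistic regression over
# synthetic records (GPA, prior math grade) predicting whether a student
# is at risk in intro physics. All data here is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500
gpa = rng.uniform(2.0, 4.0, n)
math_grade = rng.uniform(1.0, 4.0, n)
# Synthetic ground truth: lower GPA and math grade raise the failure risk.
risk = 1 / (1 + np.exp(3 * (gpa - 3.0) + 2 * (math_grade - 2.5)))
at_risk = (rng.uniform(size=n) < risk).astype(int)

X = np.column_stack([gpa, math_grade])
model = LogisticRegression().fit(X, at_risk)

new_student = np.array([[2.4, 2.0]])  # hypothetical incoming student
print(f"predicted at-risk probability: {model.predict_proba(new_student)[0, 1]:.2f}")
```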

Physics and other core science courses form hurdles for science, technology, engineering, and mathematics (STEM) majors early in their college careers. (Studies show roughly 40% of students planning engineering and science majors end up switching to other subjects or failing to get a degree.) While physics pedagogies have developed a range of research-based practices to help students overcome challenges, some strategies have substantial per-class implementation costs. Moreover, not all are appropriate for every student. Read More

Categories: Engadget

DeepMind and Oxford University researchers on how to ‘decolonize’ AI


The paper, published this month in the journal Philosophy & Technology, has at its heart the idea that you have to understand historical context to understand why technology can be biased.

“Everyone’s talking about racial bias and technology, gender bias and technology, and wanting to mitigate these risks, but how can you if you don’t understand a lot of these systems of oppression are grounded in very long histories of colonialism?” Marie-Therese Png, a co-author, PhD candidate at the Oxford Internet Institute and former technology advisor to the UN, told Engadget. The paper’s other authors were DeepMind senior research scientists Shakir Mohamed and William Isaac. Read More

Categories: VentureBeat

Researchers aim to measure the impact of imprecise medical data on AI predictions


In a study published on the preprint server arXiv.org, researchers at Donghua University and the University of California, Santa Barbara highlight the dangers posed by imprecise medical data when it is fed to AI and machine learning algorithms. Learning algorithms, they find, can carry out calculations subject to uncertain influences, producing ranges of results that could lead to mislabeling and inappropriate treatments.
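
To see how measurement imprecision can turn a single prediction into a range of results, consider this illustrative sketch (not the paper's method): a lab value near a decision cutoff is perturbed within an assumed analytical tolerance, and the label flips across that range. The cutoff and tolerance are hypothetical.

```python
# Illustrative sketch: propagate a lab test's measurement tolerance through
# a simple decision rule and report the range of predictions, not one label.
import numpy as np

def classify(glucose_mg_dl: float) -> str:
    """Toy rule: flag fasting glucose at or above 126 mg/dL."""
    return "flagged" if glucose_mg_dl >= 126.0 else "not flagged"

measured = 125.0   # reported lab value (hypothetical)
tolerance = 0.03   # assumed +/-3% analytical imprecision

low, high = measured * (1 - tolerance), measured * (1 + tolerance)
labels = {classify(v) for v in np.linspace(low, high, 101)}

if len(labels) > 1:
    print(f"value {measured} is within {tolerance:.0%} of the cutoff: "
          f"prediction ranges over {sorted(labels)}")
else:
    print(f"prediction is stable: {labels.pop()}")
```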

Clinical lab tests play an important role in health care. In fact, it’s estimated that from early detection to the diagnosis of diseases, test results guide more than 70% of medical decisions and prescriptions. The availability of medical data sets would seem to make health a natural fit for AI and machine learning. But due to equipment, instrument, material, and test method limitations, data inaccuracy often occurs (as a result of expired reagents, controls, calibrators, and failures in sampling systems), potentially impacting the accuracy of AI systems. According to a 2006 study, the prevalence of laboratory errors can be as high as one every 330 to 1,000 events, one every 900 to 2,074 patients, or one every 214 to 8,316 laboratory results. Read More

Categories: VentureBeat

Researchers find evidence of bias in facial expression data sets


Researchers claim the data sets often used to train AI systems to detect expressions like happiness, anger, and surprise are biased against certain demographic groups. In a preprint study published on arXiv.org, coauthors affiliated with the University of Cambridge and Middle East Technical University find evidence of skew in two open source corpora: the Real-world Affective Faces Database (RAF-DB) and CelebA.

Machine learning algorithms become biased in part because they’re provided training samples that optimize their objectives toward majority groups. Unless explicitly modified, they perform worse for minority groups — i.e., people represented by fewer samples. In domains like facial expression classification, it’s difficult to compensate for skew because the training sets rarely contain information about attributes like race, gender, and age. But even those that do provide attributes are typically unevenly distributed. Read More
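
When demographic attributes are available, one basic audit, in the spirit of the skew the study reports but not its actual protocol, is to compare a classifier's accuracy per group. The sketch below does this on made-up predictions.

```python
# Minimal per-group accuracy audit on synthetic predictions; the group
# labels, ground truth, and predictions here are invented for illustration.
from collections import defaultdict

records = [
    # (demographic group, true expression, predicted expression)
    ("group_a", "happy", "happy"), ("group_a", "angry", "angry"),
    ("group_a", "surprise", "surprise"), ("group_a", "happy", "happy"),
    ("group_b", "happy", "happy"), ("group_b", "angry", "happy"),
    ("group_b", "surprise", "happy"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in sorted(total):
    print(f"{group}: accuracy {correct[group] / total[group]:.2f} "
          f"({total[group]} samples)")
# A large accuracy gap between groups is one signal of the skew described above.
```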