AI bias can arise from annotation instructions – TechCrunch


Research in machine learning and artificial intelligence, now key technologies in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron (formerly Deep Science), aims to collect some of the most relevant recent discoveries and papers – particularly in, but not limited to, artificial intelligence – and explain why they matter.

This week in AI, a new study shows how bias, a common problem in AI systems, can start with the instructions given to the people hired to annotate the data from which AI systems learn to make predictions. The co-authors find that annotators pick up on patterns in the instructions, which condition them to contribute annotations that then become over-represented in the data, biasing the AI system toward those annotations.

Many AI systems today “learn” to make sense of images, videos, text, and audio from examples that have been labeled by annotators. The labels allow the systems to extrapolate the relationships between the examples (e.g., the link between the caption “kitchen sink” and a photo of a kitchen sink) to data the systems haven’t seen before (e.g., photos of kitchen sinks that weren’t included in the data used to “teach” the model).
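
To make that concrete, here is a minimal sketch of label-driven learning. It uses scikit-learn purely as an illustrative choice (no library or code is named in the research), and the toy captions and labels are invented for the example:

```python
# Illustrative sketch: a toy supervised classifier learns from annotator-labeled
# examples and extrapolates to a caption it has never seen.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated data: captions paired with annotator-supplied labels.
captions = [
    "stainless steel kitchen sink with faucet",
    "double basin kitchen sink under a window",
    "tabby cat sleeping on a sofa",
    "orange cat stretching on the carpet",
]
labels = ["kitchen sink", "kitchen sink", "cat", "cat"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(captions, labels)

# The model extrapolates to an example that was not in the training data.
print(model.predict(["a farmhouse sink installed in a new kitchen"]))
```

Whatever biases the annotators bake into those labels, the model inherits – which is exactly the failure mode the next paragraph describes.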

This works extremely well. But annotation is an imperfect approach – annotators bring biases to the table that can bleed into the trained system. For example, studies have shown that the average annotator is more likely to label phrases in African American Vernacular English (AAVE), the informal grammar used by some Black Americans, as toxic, leading AI toxicity detectors trained on those labels to see AAVE as disproportionately toxic.

As it turns out, annotators’ predispositions might not be solely to blame for the presence of bias in training labels. In a preprint study out of Arizona State University and the Allen Institute for AI, researchers investigated whether a source of bias might lie in the instructions written by dataset creators to serve as guides for annotators. Such instructions typically include a short description of the task (e.g., “Label all the birds in these photos”) along with several examples.

Image credits: Parmar et al.

The researchers looked at 14 different “benchmark” datasets used to measure the performance of natural language processing systems – AI systems that can classify, summarize, translate, and otherwise analyze or manipulate text. In examining the task instructions given to the annotators who worked on the datasets, they found evidence that the instructions influenced the annotators to follow specific patterns, which then propagated to the datasets. For example, over half of the annotations in Quoref, a dataset designed to test the ability of AI systems to understand when two or more expressions refer to the same person (or thing), start with the phrase “What is the name,” a phrase present in a third of the instructions for the dataset.
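
A rough sketch of how that kind of pattern reuse could be quantified – this is our own illustrative check, not the authors’ analysis code, and the example phrases below are hypothetical:

```python
# Illustrative sketch: measure how often crowdworker-written questions reuse
# phrasing that opens the example questions in the annotation instructions.
def opening_ngram(text: str, n: int = 3) -> str:
    """Return the first n lowercase tokens of a question, e.g. 'what is the'."""
    return " ".join(text.lower().split()[:n])

def instruction_bias_rate(instruction_examples: list[str], annotations: list[str]) -> float:
    """Fraction of annotations whose opening n-gram also opens an instruction example."""
    instruction_openers = {opening_ngram(q) for q in instruction_examples}
    reused = sum(opening_ngram(a) in instruction_openers for a in annotations)
    return reused / len(annotations)

# Hypothetical instruction examples and collected annotations.
instruction_examples = ["What is the name of the ship?", "Who wrote the letter?"]
annotations = [
    "What is the name of the dog that barked?",
    "What is the name of Maria's brother?",
    "Where did the crew land?",
]
print(f"{instruction_bias_rate(instruction_examples, annotations):.0%} reuse instruction phrasing")
```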

This phenomenon, which the researchers call “instruction bias,” is particularly troubling because it suggests that systems trained on biased instruction/annotation data might not perform as well as initially thought. Indeed, the co-authors found that instruction bias can overestimate the performance of systems, and that these systems often fail to generalize beyond the instruction patterns.

The silver lining is that larger systems, like OpenAI’s GPT-3, were found to be generally less sensitive to instruction bias. But the research serves as a reminder that AI systems, like people, are susceptible to developing biases from sources that aren’t always obvious. The intractable challenge is discovering these sources and mitigating their impact downstream.

In a less sobering paper, scientists from Switzerland concluded that facial recognition systems aren’t easily fooled by realistic AI-edited faces. “Morphing attacks,” as they’re called, involve using AI to modify the photo on an ID card, passport, or other form of identity document in order to bypass security systems. The co-authors created “morphs” using AI (Nvidia’s StyleGAN 2) and tested them against four state-of-the-art facial recognition systems. The morphs didn’t pose a significant threat, they claimed, despite their true-to-life appearance.

Elsewhere in computer vision, researchers at Meta developed an AI “assistant” that can remember the characteristics of a room, including the location and context of objects, to answer questions. Detailed in a preprint paper, the work is likely part of Meta’s Project Nazare, an initiative to develop augmented reality glasses that leverage AI to analyze their surroundings.

Meta egocentric AI

Image credits: Meta

The researchers’ system, designed for use on any body-worn device, analyzes footage to build “semantically rich and efficient scene memories” that “encode spatio-temporal information about objects.” The system remembers where objects are located and when they appeared in the video, and it can answer questions the user asks about those objects. For example, when asked “Where did you last see my keys?,” the system can indicate that, that morning, the keys were on a side table in the living room.
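
As a rough illustration of the idea – not Meta’s implementation, and with data structures and names invented for the example – a scene memory can be as simple as a log of object sightings with places and timestamps that is queried for the most recent entry:

```python
# Illustrative sketch: a toy "scene memory" that records where and when objects
# were seen, and answers "where did you last see X?" with the most recent sighting.
from dataclasses import dataclass

@dataclass
class Sighting:
    obj: str          # detected object label, e.g. "keys"
    place: str        # where it was seen, e.g. "side table in the living room"
    timestamp: float  # when it appeared in the video, in seconds

class SceneMemory:
    def __init__(self):
        self.sightings: list[Sighting] = []

    def record(self, obj: str, place: str, timestamp: float) -> None:
        self.sightings.append(Sighting(obj, place, timestamp))

    def where_last_seen(self, obj: str) -> str:
        matches = [s for s in self.sightings if s.obj == obj]
        if not matches:
            return f"I haven't seen your {obj}."
        last = max(matches, key=lambda s: s.timestamp)
        return f"Your {obj} were last on the {last.place} at t={last.timestamp:.0f}s."

memory = SceneMemory()
memory.record("keys", "kitchen counter", 120.0)
memory.record("keys", "side table in the living room", 560.0)
print(memory.where_last_seen("keys"))
```

The real system presumably relies on learned perception rather than hand-entered records, but the “remember location and time, then answer the last-seen query” shape is the same.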

Meta, which reportedly plans to release full-featured AR glasses in 2024, telegraphed its plans for “egocentric” AI last October with the launch of Ego4D, a long-term “egocentric perception” research project. At the time, the company said the goal was to teach AI systems – among other tasks – to understand social cues, how an AR device wearer’s actions might affect their surroundings, and how hands interact with objects.

From language and augmented reality to physical phenomena: an AI model proved useful in an MIT study of waves – how and when they break. While it might seem a little arcane, the truth is that wave models are needed both for building structures in and near the water and for modeling how the ocean interacts with the atmosphere in climate models.

Image credits: MIT

Normally waves are roughly simulated by a set of equations, but the researchers trained machine learning models on hundreds of wave instances in a 40-foot tank of water filled with sensors. By observing the waves and making predictions based on empirical evidence, then comparing that to the theoretical models, the AI helped show where the models fell short.
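
A hedged sketch of that general workflow – not MIT’s actual code; the wave-tank data, the learned model, and the equation-based model here are all stand-ins:

```python
# Illustrative sketch: fit a data-driven wave model to (synthetic) tank data and
# compare it against a theoretical model to see where the theory's error is largest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in wave-tank measurements: features (wave height in m, period in s) and
# an observed breaking-related quantity. In practice these come from tank sensors.
X = rng.uniform([0.1, 2.0], [1.0, 8.0], size=(500, 2))
observed = 0.8 * X[:, 0] / X[:, 1] + 0.05 * rng.normal(size=500)

def theoretical_model(X: np.ndarray) -> np.ndarray:
    """Stand-in for an equation-based wave model (deliberately simplified)."""
    return 0.7 * X[:, 0] / X[:, 1]

# Fit an empirical model to the tank data.
empirical = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, observed)

# Large disagreements between theory and the empirical predictions flag the
# conditions (e.g. steep, short-period waves) where the equations fall short.
theory_error = np.abs(theoretical_model(X) - empirical.predict(X))
worst = X[np.argsort(theory_error)[-5:]]
print("Conditions where the theoretical model is furthest off:\n", worst)
```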

A startup is being born out of research at EPFL, where Thibaut Asselborn’s PhD thesis on handwriting analysis has turned into a full-blown educational app. Using algorithms he designed, the app (called School Rebound) can identify habits and corrective measures within just 30 seconds of a kid writing on an iPad with a stylus. These are presented to the kid in the form of games that help them write more clearly by reinforcing good habits.

“Our scientific model and rigor are important, and that’s what sets us apart from other existing applications,” Asselborn said in a press release. “We’ve received letters from teachers who have seen their students improve. Some students even come before class to practice.”

Image credits: Duke University

Another development in elementary schools relates to detecting hearing problems during routine screenings. These screenings, as some readers may remember, often use a device called a tympanometer, which must be operated by trained audiologists. If one isn’t available, say in an isolated school district, kids with hearing problems may never get the help they need in time.

Samantha Robler and Susan Emmett at Duke decided to build a tympanometer that essentially operates itself, sending data to a smartphone app where it’s interpreted by an AI model. Anything worrying is flagged and the child can receive further screening. It’s not a replacement for an expert, but it’s a lot better than nothing, and it may help identify hearing problems much earlier in places without the proper resources.
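
As a purely illustrative sketch of the triage step – not the Duke team’s app, and with a simple rule-based stand-in for their AI model and placeholder (non-clinical) thresholds – the job reduces to interpreting a tympanogram trace and flagging anything abnormal for follow-up:

```python
# Illustrative sketch: flag tympanometer readings that look abnormal so a child
# can be referred for a full exam. Thresholds are placeholders, not clinical values.
from dataclasses import dataclass

@dataclass
class Tympanogram:
    peak_pressure_dapa: float   # pressure at the admittance peak (daPa)
    peak_admittance_ml: float   # compliance at the peak (ml)
    ear_canal_volume_ml: float  # estimated ear canal volume (ml)

def needs_followup(t: Tympanogram) -> bool:
    """Return True if the trace falls outside placeholder 'normal' ranges."""
    if not (-100 <= t.peak_pressure_dapa <= 50):
        return True
    if not (0.3 <= t.peak_admittance_ml <= 1.5):
        return True
    if not (0.6 <= t.ear_canal_volume_ml <= 1.5):
        return True
    return False

reading = Tympanogram(peak_pressure_dapa=-220.0, peak_admittance_ml=0.2, ear_canal_volume_ml=1.0)
if needs_followup(reading):
    print("Flagged: refer for further screening.")
```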
