PaliGemma 2 does more than identify objects when it analyzes images: it can also describe people's actions, emotions, and the overall story of a scene. Google states that these models enable detailed image analysis by producing "context-oriented descriptions." However, this emotion recognition capability does not work out of the box; the model must be specially fine-tuned for it.
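For context, the sketch below shows roughly how such a model can be asked to describe an image through the Hugging Face transformers library. The checkpoint name and prompt format are assumptions based on the published PaliGemma releases, and this is a minimal illustration only; as the article notes, emotion recognition would additionally require task-specific fine-tuning.

```python
# Minimal sketch: querying a PaliGemma 2 checkpoint for an image description
# via Hugging Face transformers. Checkpoint name and prompt syntax are
# assumptions based on the public PaliGemma releases, not Google's own setup.
import torch
from PIL import Image
from transformers import PaliGemmaForConditionalGeneration, PaliGemmaProcessor

model_id = "google/paligemma2-3b-pt-224"  # assumed published checkpoint
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = PaliGemmaProcessor.from_pretrained(model_id)

image = Image.open("scene.jpg")   # any local photo
prompt = "<image>describe en"     # PaliGemma-style task prompt

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens (skip the prompt portion).
generated = output[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(generated, skip_special_tokens=True))
```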
Experts, however, are worried about making this technology openly accessible. "This is very disturbing to me. I find it problematic to assume that we can 'read' people's emotions," says Sandra Wachter, Professor of Data Ethics and Artificial Intelligence at the Oxford Internet Institute.
Startups and tech giants have been trying for years to develop artificial intelligence that can detect emotions for everything from sales training to accident prevention. Although some claim to have achieved this, the science remains on weak empirical ground.
Mike Cook, a research fellow specializing in artificial intelligence at Queen Mary University, agrees: "Emotion detection is not possible in the general case, because people experience emotions in complex ways. Of course, we think we can tell what other people are feeling by looking at them, and many people, whether spy agencies or marketing companies, have tried this over the years. I'm sure it's possible to detect some general indicators in some cases, but it's not something we can ever fully 'figure out'."
Moreover, critics point out that these systems often reflect the biases of their developers. One indication of this comes from a recent study, which suggests that emotion analysis models assign more negative emotions to Black faces than to white faces.
Google says it conducted extensive testing to assess demographic biases before releasing PaliGemma 2, but it has not shared the full scope of those tests or the metrics it used. The only benchmark it has disclosed is FairFace, a dataset of tens of thousands of headshots, on which the company says PaliGemma 2 scores well. Some researchers, however, have criticized FairFace for representing only a limited number of racial groups, and some experts say unequivocally that emotions cannot be identified just by looking at the face.
Beyond the scientific feasibility of identifying and detecting emotions with artificial intelligence, the ethical and legal dimensions of the work matter greatly. For example, the AI Act, the EU's most significant piece of AI legislation, bans schools and employers from using emotion detection systems (though it does not extend the ban to law enforcement).
Experts warn that open models such as PaliGemma 2, which are available from many hosts, including the AI development platform Hugging Face, can be abused in ways that cause social harm. In particular, they say, such misuse could deepen discrimination in hiring, loan applications, and university admissions.