How Meta’s policies harmed the rights of Palestinian users

Introduction:

Meta’s actions in May 2021 appear to have had an adverse human rights impact in Palestine. The report Meta commissioned last year on how its policies harmed the rights of Palestinian Instagram and Facebook users during the attacks on Gaza in 2021 was damning.

More on news: 

  1. An investigation by The Guardian has found that a new feature on WhatsApp (which, like Facebook and Instagram, is owned by Meta) that generates images in response to queries appears to promote anti-Palestinian bias, if not outright bigotry.
  2. Searches for “Palestinian” and “Palestinian boy” resulted in images of children holding guns.
  3. In contrast, searching for “Israeli boy” shows children playing sports or smiling, and even “Israeli army” shows jolly, pious, and unarmed people in uniform.
  4. The controversy around AI-generated stickers has not occurred in a vacuum.
  5. Meta’s social media platforms have been accused of being biased against content from and in support of Palestinians.

What is AI?

  1. AI is the ability of a computer, or a robot controlled by a computer, to perform tasks that are usually done by humans because they require human intelligence and discernment.
  2. Although there is no AI that can perform the wide variety of tasks an ordinary human can do, some AI can match humans in specific tasks.

Characteristics & Components:

  1. The ideal characteristic of artificial intelligence is its ability to reason and take actions that have the best chance of achieving a specific goal. A subset of AI is Machine Learning (ML), in which a system learns from example data and improves automatically instead of being explicitly programmed for each task (a minimal sketch follows this list).
  2. Deep Learning (DL), a subset of ML, enables this automatic learning through the absorption of huge amounts of unstructured data such as text, images, or video.
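
To make the ML idea concrete, here is a minimal sketch of a program learning a rule from examples rather than being explicitly programmed. The exercise-hours data, labels, and learning-rate settings below are invented purely for illustration.

```python
import numpy as np

# Invented toy data: hours of exercise per week -> a 0/1 "fit" label.
X = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 6.0, 8.0])
y = np.array([0, 0, 0, 1, 1, 1, 1])

# Logistic regression trained by plain gradient descent: the program is
# never told the rule; it infers the decision boundary from the examples.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))  # predicted probabilities
    w -= lr * np.mean((p - y) * X)          # gradient step on the weight
    b -= lr * np.mean(p - y)                # gradient step on the bias

print(f"learned decision boundary near {-b / w:.2f} hours")  # between 2 and 3
```

A deep-learning model works on the same learn-from-data principle, only with many stacked layers of such weights, which is what lets it absorb unstructured text, images, or video.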

What is AI bias?

  1. AI Bias refers to an anomaly in the output produced by a machine learning algorithm.
  2. Bias in AI is when the machine gives consistently different outputs for one group of people compared to another (a simple way to measure this is sketched after this list).
  3. Typically, these biased outputs track classical societal biases such as race, gender, biological sex, nationality, or age.
  4. This may be caused due to prejudiced assumptions made during the algorithm development process or prejudices in the training data.
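
As a hedged illustration of point 2, one common probe is to compare a model’s positive-decision rate across groups (a demographic-parity check). All decisions and group labels below are invented.

```python
import numpy as np

# Invented binary decisions from some model (1 = approved, 0 = denied),
# alongside a group label for each person.
decisions = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = decisions[group == "A"].mean()  # approval rate for group A
rate_b = decisions[group == "B"].mean()  # approval rate for group B

# A large gap in rates is the "consistently different outputs" the
# definition above describes (the demographic-parity gap).
print(f"A: {rate_a:.2f}  B: {rate_b:.2f}  gap: {abs(rate_a - rate_b):.2f}")
```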

What are the types of AI Bias?

  1. Cognitive bias - These are unconscious errors in thinking that affect individuals’ judgements and decisions.
  2. These biases can seep into machine learning algorithms either through designers unknowingly introducing them into the model or through a training data set that includes them.
  3. Lack of complete data - If the data is incomplete, it may not be representative of the population and may therefore encode bias (illustrated in the sketch after this list).
  4. It is also difficult to identify which factor caused a biased output, owing to the ‘black box effect’ in AI.
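
The ‘lack of complete data’ point can be shown with a small simulation, sketched below under invented assumptions: a model fitted mostly on one group’s data serves that group well and the under-represented group poorly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: each group's correct cutoff sits at its own mean,
# but group B contributes only 10 of the 100 training examples.
def sample(n, shift):
    x = rng.normal(loc=shift, scale=1.0, size=n)
    return x, (x > shift).astype(int)

xa, ya = sample(90, shift=0.0)  # well-represented group A
xb, yb = sample(10, shift=2.0)  # under-represented group B
x, y = np.concatenate([xa, xb]), np.concatenate([ya, yb])

# "Model": the single threshold that maximises training accuracy.
thresholds = np.linspace(x.min(), x.max(), 200)
best = thresholds[np.argmax([np.mean((x > t) == y) for t in thresholds])]

# The learned threshold tracks the majority group, so group B's outputs
# are systematically worse: biased, yet hard to spot inside a black-box
# model without a per-group evaluation like this one.
xa2, ya2 = sample(1000, 0.0)
xb2, yb2 = sample(1000, 2.0)
print(f"threshold: {best:.2f}")
print(f"accuracy, group A: {np.mean((xa2 > best) == ya2):.2f}")
print(f"accuracy, group B: {np.mean((xb2 > best) == yb2):.2f}")
```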

What can be done to correct these biases?

  1. For some time now, considerable work has been done around bias in artificial intelligence and machine learning (ML) models.
  2. Since the programs are amoral, they can reflect, and perhaps even amplify, the prejudices in the data used to train them.
  3. Addressing the prejudices in the machine, then, requires active interventions and even regulation.
  4. Blind Taste Test Mechanism - It works by checking whether the results produced by an AI system depend on a protected variable such as sex, race, economic status, or sexual orientation (a minimal probe of this idea is sketched after this list).
  5. Open-Source Data Science (OSDS) - Opening the code to a community of developers may reduce the bias in the AI system.
  6. Human-in-the-Loop systems - These aim to achieve what neither a human being nor a computer can accomplish alone.
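
As a sketch of the blind-taste-test idea in item 4: re-score the same applicant with only the protected attribute flipped; any shift in the output shows the model depends on that variable. The scoring function and numbers here are hypothetical.

```python
# Hypothetical scoring model that (deliberately) leaks the protected
# attribute, so the probe has something to detect.
def score(income, group):
    return 0.5 * income + (5.0 if group == "A" else 0.0)

applicants = [(40, "A"), (40, "B"), (60, "A"), (60, "B")]

# Blind-taste-test probe: flip the group label, hold everything else
# fixed, and measure how much the output moves.
for income, group in applicants:
    flipped = "B" if group == "A" else "A"
    delta = score(income, group) - score(income, flipped)
    print(f"income={income} group={group}: shift due to group = {delta:+.1f}")
```

Real audits apply the same principle at scale, comparing counterfactual pairs across a whole dataset rather than a few hand-built cases.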

Conclusion:

No search should paint people, especially children, from an entire community as inherently violent. For all its uses for human beings, AI should not be used to dehumanise so many of them.
 
