Scientist Sennay Ghebreab wants to establish a Civic AI Lab. The research group is a response to companies and organizations that too often fail to take fundamental civil rights into account in their AI applications and algorithms. Discrimination is lurking as well, he says. A conversation about the need for AI that serves people and society.

Artificial intelligence (AI) is on the rise. More and more systems and devices use algorithms that make decisions more or less independently. Companies and governments increasingly base decisions on these calculation models, which are also becoming ever more complex. With the rise of Internet of Things (IoT) equipment, people are also more often in contact with internet-connected devices that make all kinds of decisions, says Sennay Ghebreab, an Eritrean computer scientist who fled to the Netherlands with his parents at the age of six.

AI is a buzzword. Almost every company and organization claims to be doing something with AI. What definition of ‘AI’ do you use?
Ghebreab: ‘AI is concerned with teaching computers skills that normally require human intelligence, such as perceiving, learning, reasoning and interacting. The core of AI is a technology called machine learning, which uses algorithms to recognize patterns in input data autonomously, that is, without guidance. Using algorithms, the computer draws up certain rules itself, for example to recognize particular objects in pictures. A very successful and widely used machine learning approach is called deep learning.’
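To make that concrete: below is a minimal sketch of ‘recognizing patterns in input data’, assuming Python with scikit-learn and its built-in handwritten-digits dataset (both are illustrative choices; the interview names no tooling). The algorithm derives its own decision rules from labeled examples instead of being programmed with them.

```python
# Minimal machine learning sketch (assumes scikit-learn; the interview
# names no specific library or dataset).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits, 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# No rules for "what a 7 looks like" are written by hand; the model
# derives them from the labeled examples.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)
print("accuracy on unseen images:", model.score(X_test, y_test))
```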

AI is receiving a lot of attention in scientific research, says Ghebreab. He points to the Innovation Center for Artificial Intelligence (ICAI, see box). Ghebreab is currently working on establishing a new research lab within ICAI, but first discusses his role as a researcher and educator, in which he studied the similarities between image recognition by the human brain and image recognition by machines.

What are those similarities?
‘In my research I studied the similarities between the way people process images in their brains and how computer vision works. There are many similarities. Humans have limited brain capacity; we cannot use and store everything we see. We therefore rely on patterns and shortcuts from what we have seen before, and label what we see on the basis of bias. For example, we recognize a cow very quickly and coarsely from easily detectable characteristics such as grass-like colors and structures: after all, a cow is usually in a pasture, and we use that learned prejudice to recognize it. Prejudices often help us unconsciously identify objects or understand concepts.

Machines do that too. By learning from people and adopting the labels of the people who enter and annotate the data in these systems, they also take over certain assumptions. What we put into the machine comes out again. That is bias (when external factors negatively influence the outcomes of a research question, ed.). Now that we are deploying AI on such a large scale, we must therefore be careful not to simply adopt that bias. Especially if it is discriminatory.’
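What ‘bias in, bias out’ looks like can be shown with a small synthetic experiment. The sketch below is a hedged illustration with made-up data, assuming scikit-learn and NumPy (nothing here comes from the interview): a model trained on labels produced by prejudiced human decisions reproduces the stricter treatment of one group.

```python
# Synthetic sketch of "what we put in comes out": the bias lives in the
# human-made labels, and the trained model reproduces it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)   # sensitive attribute: group 0 or 1
skill = rng.normal(0, 1, n)     # the genuinely relevant feature

# Historical labelers approved high-skill candidates, but applied a
# stricter threshold to group 1: that prejudice ends up in the labels.
label = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression(max_iter=1000).fit(X, label)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"approval rate for group {g}: {rate:.2f}")
# Prints roughly 0.50 for group 0 and 0.21 for group 1: the model has
# learned, and now automates, the stricter threshold.
```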

Ghebreab cites an example from his own experience. In 2005 he wanted to leave the university building one evening with a group of fellow students. A revolving door with a camera for image recognition refused to open for him, while it did open for his white colleagues. The camera and the underlying system did not recognize Ghebreab, who has dark skin. Ghebreab explains that the system had probably been fed only with photos of white people and therefore failed to respond to a dark-skinned person. ‘That was almost fifteen years ago, but things still go wrong. Think of photo-recognition algorithms in smartphones that mistake dark-skinned people for gorillas, or self-driving cars that recognize dark-skinned people less well, so that they run a greater risk of being hit.’

Discriminatory AI

Ghebreab says that his story about discriminatory systems did not initially resonate with his students. ‘When I taught the Information Communication Cognition course in 2009, the students were hardly willing to accept that machines can be biased and discriminate. Nor did they want to see that they themselves, as people, also discriminated on the basis of certain assumptions and labels.’

Only when he introduced the students to machine learning and showed that a machine can make false assumptions because of data that people have misinterpreted and mislabeled did the penny drop. ‘If we build machines that work like people, they can take over the good things but also the bad ones. I have been repeating that story for fifteen years. The idea that machines make mistakes because people feed them discriminatory data has now become accepted. It is time to point out the dangers alongside the opportunities.’

Yet Ghebreab has difficulty with the many doom scenarios in which AI is portrayed as a threat to humans.

Where do you think that doomsday thinking comes from?
‘A lot of people don’t really know what AI is or how it works, and then it quickly becomes a threat. The saying “unknown, unloved” also applies to the relationship between man and machine. Everyone needs to know, to some degree, how AI works. There is an important role here for companies that use AI, but also for scientists and the government. In addition to my lectures at the university, I have for ten years been teaching AI to primary school pupils in grades 7 and 8 and to secondary school students. Young people pick up AI quickly and well. Now I focus on conveying knowledge about AI to the general public. After all, people can no longer ignore AI. It is here, and it is better to get to work with it and ensure that it develops properly than to think in doom scenarios.’

A frequently heard criticism of complex algorithms is that it is no longer possible to trace how they arrive at a certain outcome. How do you look at that?
‘Deep learning, or machine learning based on multiple data layers and abstractions, is built on neural networks, which in turn are based on the mechanisms of our brain. This layering has become ever more complex. Ten years ago a computer vision algorithm consisted of around ten layers; now there are more than two hundred. You can technically trace where an outcome comes from, but it is not always possible to interpret why something went a certain way. Here too I draw the comparison with our human brain. When we look at ourselves and ask why we make certain decisions, we cannot always explain it clearly. Some decisions are made impulsively or reactively. That sometimes makes human behavior a kind of black box too.’
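The distinction between ‘traceable’ and ‘interpretable’ can be made concrete with a gradient trace. The sketch below is a hedged illustration, assuming PyTorch and a tiny untrained stand-in network (neither comes from the interview): it traces a prediction back to the input pixels exactly, yet the resulting map of pixel influences is still not an explanation a person can act on.

```python
# Tracing an outcome through a network via gradients (toy example;
# PyTorch and the untrained network are illustrative assumptions).
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(              # tiny stand-in for a 200-layer model
    nn.Flatten(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 10),
)

image = torch.rand(1, 8, 8, requires_grad=True)
score = net(image)[0].max()       # score of the winning class
score.backward()                  # trace that outcome back to the input

saliency = image.grad.abs().squeeze()
row, col = divmod(saliency.argmax().item(), 8)
print("most influential pixel:", (row, col))
# Every gradient is mechanically exact, but a ranking of pixels does not
# say *why* the model decided as it did in human terms.
```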

AI has to change

Sennay Ghebreab is a speaker at the business ICT fairs Infosecurity.nl and Data & Cloud Expo. These will take place on 30 and 31 October 2019 in the Jaarbeurs in Utrecht. View the full seminar program at www.infosecurity.nl and www.dncexpo.nl.

More and more companies are claiming to apply ‘explainable AI’. Is that credible?
‘I don’t believe the claim that they can trace and explain everything. I am convinced that parties applying algorithms cannot always fully trace, explain and justify an outcome.’

There are also more and more parties who argue that they are developing ethical AI. Do tech giants have enough self-regulating power to tackle these issues?
‘Fair, explainable and ethical AI: many companies say they are working on it, but the risk of lip service is real, because for companies it ultimately all comes down to a business model. The focus is on the customer, not on the citizen with rights such as equal treatment. I can hardly imagine that Google, Facebook and Twitter will adjust their earnings models to serve all citizens and population groups more fairly. That illusion will be created, though. I deliberately call it an illusion because even where steps are taken towards fair, transparent and ethical AI, much still goes wrong, intentionally or unintentionally. Algorithms developed and used to detect hate speech, for example on Facebook and Twitter, also turn out to disadvantage certain population groups.’

Challenges within the government

Who should then ensure that those claims are correct?
‘I think that is typically a task for the government. These are social issues, fundamental civil rights, in which the government must play a role. But the government itself also faces challenges surrounding AI. It lacks the technical capabilities and the diversity of personnel to use and regulate AI in a fair and inclusive way. In fact, it may itself be using algorithms that discriminate, so it has significant steps of its own to take.’

Recently, for example, Rotterdam was in the news because the municipality uses algorithms to detect fraud involving benefits and allowances. Via the SyRI system, twelve hundred addresses were identified where fraud might be taking place; the algorithms classified these addresses as ‘high risk’. According to trade union FNV, which announced protest actions, one in ten addresses in certain neighborhoods where many migrants live could expect a home visit from the social investigation department.

‘The Civic AI Lab develops AI technology that enables citizens and communities to participate in a fair, inclusive and transparent manner. We look at new ways to reveal and minimize discrimination and inequality, for example by applying AI concepts such as Differential Fairness, inspired by the intersectionality theory of the black American feminist and professor Kimberlé Crenshaw. We also look at new ways to increase equality of opportunity in education and on the labor market. For example, AI can be used to place newcomers, on the basis of their talents and backgrounds, where they can best contribute to the economy.’ He explains that the COA currently distributes asylum seekers over asylum seekers’ centers on the basis of availability.
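The interview mentions Differential Fairness only by name. As a hedged sketch of the underlying idea (following Foulds et al.’s formulation, with made-up decision rates; none of the numbers come from the interview), the empirical epsilon below is the worst-case log-ratio of decision rates between any two intersectional groups, where 0 would mean perfect parity.

```python
# Sketch of Differential Fairness: bound how much the decision rate may
# differ between any two intersectional groups (illustrative rates only).
from itertools import combinations
import numpy as np

# positive-decision rate per intersectional group (gender x origin)
rates = {
    ("woman", "native"): 0.40,
    ("woman", "migrant"): 0.31,
    ("man", "native"): 0.42,
    ("man", "migrant"): 0.28,
}

# empirical epsilon: worst log-ratio over all group pairs, counting both
# the positive and the negative outcome
eps = max(
    abs(np.log(p) - np.log(q))
    for a, b in combinations(rates, 2)
    for p, q in [(rates[a], rates[b]), (1 - rates[a], 1 - rates[b])]
)
print(f"empirical epsilon: {eps:.2f}")  # 0.00 would be perfect parity
```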

Financing the lab remains a challenge. The current labs at ICAI have almost all been set up in collaboration with companies, which see opportunities in AI and are willing to invest heavily in it. ‘The question is which governments and social organizations want to put money into the lab. I am talking to potential partners, such as national government, municipalities and corporate foundations, who see the importance of the Civic AI Lab. Only when it is final can I name names, but it is time for parties that consider this subject important to put their money where their mouth is and invest in it.’

‘AI for all’

“Critical algorithmic thinking is often lacking”

How do you expect AI to develop? And what should ICT professionals who are involved in these developments focus on?
‘The number of AI researchers, developers and applications is growing rapidly in America, Europe and Asia, and in the Netherlands too. I see new, creative AI solutions emerging for all kinds of social and economic problems. The number of ready-made, ‘off the shelf’ algorithms will grow. Not only programmers and ICT specialists will pick up, adapt and use these algorithms, but also laypeople and the digitally illiterate. On the one hand that is good; on the other hand it carries risks of improper and unfair use of algorithms. A question that must be asked, for example, is what the consequences of a certain method on a certain data set are, and for whom. Critical algorithmic thinking is often lacking, and it is desperately needed in this globalizing and digitizing society full of unfounded assumptions and positions. That is why you should always develop and apply AI in teams of people with different expertise and backgrounds, especially when it comes to so-called “AI for all” or “AI for social good”. That is not feasible if it is not developed by “all”. My mantra is: “AI for all cannot exist without AI by all”.’
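One concrete form of ‘what are the consequences, and for whom’ is to compare an off-the-shelf model’s error rate per subgroup before deploying it. The sketch below uses synthetic stand-in data and scikit-learn (assumptions for illustration; the interview prescribes no method): one group’s signal is noisier, so the same model serves it measurably worse.

```python
# Per-group evaluation of an off-the-shelf model (synthetic stand-ins).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))
group = rng.choice(["A", "B"], size=2000)

# group B's labels are noisier relative to the features, a common way
# real data sets underserve a subgroup
noise = np.where(group == "B", 1.5, 0.2)
y = (X[:, 0] + rng.normal(0, noise) > 0).astype(int)

model = DecisionTreeClassifier(max_depth=3).fit(X, y)
preds = model.predict(X)
for g in ("A", "B"):
    err = float((preds[group == g] != y[group == g]).mean())
    print(f"error rate, group {g}: {err:.2f}")
# A single aggregate accuracy number would hide that group B is served worse.
```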

This article was previously published in Computable Magazine, issue 5, September 2019.

Bio

Sennay Ghebreab (46) is a neuro-computer scientist at the University of Amsterdam, head of social sciences at Amsterdam University College, and visiting professor of diversity and inclusion at VU University Amsterdam. He studied technical information systems at the University of Amsterdam and from 1996 to 2001 did his PhD research on medical image diagnostics using artificial intelligence. After his PhD, from 2006 to 2016, he focused on artificial intelligence in cognitive neuroscience and the social sciences. He is now setting up a lab in collaboration with the national Innovation Center for AI to develop AI technology that enables citizens to participate in society in a fair, transparent and inclusive way. He will be one of the speakers at the meeting ‘The discriminating algorithm’ on September 30 in the Rode Hoed in Amsterdam.

Innovation Center for AI (ICAI)

The national Innovation Center for Artificial Intelligence (ICAI) was established in 2018. It is an ecosystem in which knowledge institutions, large companies, SMEs, startups and non-profit organizations jointly develop new AI technology. ICAI currently consists of nine labs in Amsterdam, Utrecht, Delft and Nijmegen, with fourteen partners and eighty scientists. At least five PhD students are involved in each lab; they work for four to five years on solving a problem of the affiliated partner, such as ING (fintech), Ahold Delhaize (retail) and the Police (investigation). The labs are funded by the affiliated partner.
