What values does AI follow? Ethics and new technologies - a conversation with Zachary Goldberg

The founders of Trilateral Research knew from the very beginning that ethical values are embedded in the choices programmers make when creating and disseminating each new technological tool. Zachary Goldberg on AI ethics.

Summary

  • Zachary Goldberg, a scientist at Trilateral Research, is concerned about the ethical implications of technological advancement. Trilateral Research, founded in 2004, focuses on the ethics of new technologies and has dedicated teams for research, ethical artificial intelligence products, and commercial consulting.
  • The company has participated in projects like Horizon 2020 and FP7, aimed at promoting international cooperation and increasing the innovation and competitiveness of the European economy. With the advent of GDPR, data protection has become a significant issue.
  • Personalization of technology, achieved through data collection, creates a conflict between privacy and entertainment. Algorithms can limit choices and perpetuate biases, especially in areas like policing.
  • Many businesses want to operate ethically and responsibly but lack the knowledge to do so. Legal regulations concerning artificial intelligence are also a factor, as companies want to comply with the law. Some organizations are interested less in doing good than in avoiding wrongdoing, since using large language models carries the risk of violating privacy, copyright, or intellectual property rights.
  • The European Union's AI Act is based on the protection of fundamental rights, a key goal of democracy. Transparency, a principle of the AI Act, is crucial for a functioning democracy.
  • The D4Fly project aimed to create a system for identifying travelers at borders using biometric technology. The system was intended to speed up border control processes, but it was not implemented.
  • CESIUM is a project designed to aid child protection specialists by providing secure access to data collected by various agencies. The project supports decision-making processes by identifying and prioritizing at-risk children. A validation test revealed that CESIUM had previously identified 16 children who were later reported through traditional verification methods.

Do we have the ethical right to extract resources from the Moon, asteroids, and other planets and bring them to Earth? If we could drastically change the landscape of Mars so that we could live there, should we do it? These are the bold questions Zachary Goldberg poses in his presentations at tech industry conferences. He is a scientist with Trilateral Research, a research and consulting organization dealing with the ethics of new technologies.

Ewa Pawlik, Digitized: You introduce philosophical terminology into the discussion around new technologies and address issues related to ethics. And you've been doing this for over 15 years. Was anyone interested in it back then?

Zachary Goldberg, Trilateral Research: The company was founded in 2004, almost 20 years ago; I joined the team a little later. And indeed, from the very beginning we emphasized ethics. The reason was the founders' observation that despite significant progress in technology, ethical issues were not widely recognized.

Did you sound the alarm?

Not quite; that's too strong a word. The founders decided that there was an urgent need for researchers to pay attention to this aspect of technology. Initially, we were strictly a research organization. Today, our work rests on three pillars: a research team, a product team building ethical artificial intelligence, and a consulting team serving commercial clients.

What projects were you involved in at the beginning of your activity?

Primarily EU framework programmes like Horizon 2020 and FP7, whose goal was to promote international cooperation and increase the innovation and competitiveness of the European economy through investment in advanced technologies, scientific research, and the development of new solutions. Now we are engaged in the latest Rise of Europe program, dedicated to the same theme. Over time, data protection became an increasingly important issue for politicians, and more attention began to be paid to it, thanks in part to GDPR.

We have reached a point where these issues have attracted the attention of legislators, but it was a long process, and technology itself became part of our daily lives much earlier. Correct me if I'm wrong, but ethics is not necessarily an important topic for mostly young tech entrepreneurs. You mature into ethics: having children and having to take responsibility for others changes a lot in this regard. It opens your eyes.

That's a good point. The founders of Trilateral Research, who are now its general directors, knew from the very beginning that ethical values are built into the choices programmers make when creating and disseminating a given technological tool. The values are already there. You may not be aware of them, but they are.

That may come as a surprise to many.

Values are an integral part of every activity and choice, even the smallest. For example, if you want to create a more efficient tool, you are valuing efficiency. And choosing certain values usually comes at the expense of others; that is where value conflicts arise.

What does such a conflict look like in practice?

I will use the example of a popular streaming platform. When I open Netflix, I see not only a different set of suggestions than you do, but also a different presentation of them. Suppose you have watched many horror movies: you will see a small Stranger Things tile, and the image will probably feature a monster. If I have watched many romantic comedies, I will see a small Stranger Things tile showing two characters kissing. The artwork is personalized.
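
To make the mechanism concrete, here is a minimal sketch of how such artwork selection could work. The titles, genre labels, and file names are invented for illustration; Netflix's actual system is far more elaborate.

```python
# Hypothetical sketch of artwork personalization; all names are invented.
ARTWORK = {
    "Stranger Things": {
        "horror": "monster_thumbnail.jpg",
        "romance": "kiss_thumbnail.jpg",
        "default": "logo_thumbnail.jpg",
    }
}

def pick_artwork(title: str, watch_history: list[str]) -> str:
    """Choose the artwork variant matching the viewer's dominant genre."""
    variants = ARTWORK[title]
    # Find the genre this viewer has watched most often.
    best = max(variants, key=lambda genre: watch_history.count(genre))
    return variants[best] if watch_history.count(best) else variants["default"]

print(pick_artwork("Stranger Things", ["horror", "horror", "comedy"]))
# -> monster_thumbnail.jpg
```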

Someone created this with our comfort in mind.

In a sense, it is a good thing. On the other hand, this personalization is possible only thanks to the collection of our data. Here there is a conflict between privacy and entertainment, and we choose entertainment, or allow that choice to be made for us. The founders of my company were probably among the first to notice that technology involves values, value choices, and value conflicts. At the time, not much was said about this, at least outside the academic world.

We allow ourselves to be enclosed in bubbles, which over time limit our choices. On a platform like Netflix, there is a risk that you will, for example, miss French New Wave cinema, because the algorithm will never suggest it to you, even though you might well like it. We live in the grip of a feedback loop - an information loop. You can live without the New Wave, but what happens when this mechanism penetrates areas of life other than entertainment?

Exactly, and here we get to the heart of the problem. The feedback loop also appears in algorithms used by the police and the judiciary. If, for example, the police paid a lot of attention to a minority neighborhood, the algorithm will learn that members of minorities are arrested more often than white people or anyone else, right? That in turn tells the police to spend more time in this area, which they were already doing. An information loop forms between the algorithm and social practice. Such a dependency may be trivial in the case of Netflix, but it is genuinely dangerous in the case of the police.
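
A toy simulation makes this runaway dynamic visible. The crime rates, patrol shares, and update rule below are invented assumptions, not a model of any real predictive-policing system.

```python
import random

random.seed(0)

# Two districts with identical true crime rates, but a biased initial
# patrol allocation. All numbers are invented for illustration.
true_crime_rate = [0.1, 0.1]
patrol_share = [0.8, 0.2]
arrests = [0, 0]

for day in range(1000):
    for d in range(2):
        # An arrest is recorded only where patrols are looking, so the
        # data reflects patrol allocation as much as actual crime.
        if random.random() < true_crime_rate[d] * patrol_share[d]:
            arrests[d] += 1
    # The "predictive" step: shift patrols toward past arrests.
    # The initial bias feeds itself instead of being corrected.
    total = sum(arrests)
    if total:
        patrol_share = [0.5 * patrol_share[d] + 0.5 * arrests[d] / total
                        for d in range(2)]

print("arrests:", arrests)            # district 0 dominates
print("patrol share:", patrol_share)  # bias has grown despite equal crime
```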

So who are your clients? Who is interested in popularizing ethics and introducing appropriate solutions in the business world, and why? Do you only meet idealists?

I fear we would go bankrupt if we based our operations on cooperation with idealists alone. Our consulting services began mainly with data protection, which we handled even before GDPR was introduced. Then we focused on developing consulting in the field of responsible artificial intelligence. Startup CEOs would come to me motivated by good will. Many people want to run their businesses ethically and responsibly; they just don't know what that means in practice or how to translate such an approach into specific actions and tools.

In addition, there are legal regulations concerning artificial intelligence, and companies want to operate in accordance with the law.

Even though the EU's artificial intelligence law is not yet in force, it is worth preparing for its introduction, because it is only a matter of time. Another group of organizations is interested not so much in doing good as in avoiding wrongdoing; these are two different things. OpenAI was sued for three billion dollars in damages in California, and there will be more and more such lawsuits. Although regulations are still lacking, some organizations worry because using ChatGPT or other large language models carries the risk of violating privacy, copyright, or intellectual property rights.

I wonder what the relationship is between openness to legal regulation of AI and the level of democracy in a given country. The European Union presents a united front, but some member states are moving away from democracy.

An interesting question, and one I have never thought about this way before. First, it should be said that yes, you are right: the European Union has member states that currently have illiberal governments or illiberal tendencies. As for the AI Act, I must admit that I followed the development of this law, but I did not closely follow the sentiment of individual member states. One thing I have thought about is that the AI Act is based on the protection of fundamental rights, and this is a fundamental goal of democracy, just like leveling the playing field for all citizens. Transparency, one of the ethical principles of the AI Act, is absolutely necessary for a thriving democracy. Citizens need transparency, especially from public institutions: what do those institutions do, how do they make decisions, how do they spend citizens' money? Transparency is therefore a key principle of a smoothly functioning democracy, and it is also an important principle of the AI Act. Only with it can people decide for themselves what kind of life they want to lead and make informed decisions.

Like the British did, deciding on Brexit?

Unfortunately, in some democracies people make bad decisions. Aristotle believed that democracy is one of the worst forms of government, because you can't let inexperienced people make decisions that affect others. But today almost everyone agrees that democracy is the best form of government, because it allows a certain kind of freedom, even the freedom to make bad choices.

Please tell us about the specific tools you have worked on or are working on.

I would be happy to talk about D4Fly. This project was ultimately not implemented, which is a shame, because when the war in Ukraine broke out, situations arose in which it would have had practical application. Its main premise was a system for verifying travelers' identities, to be used at national borders to facilitate and speed up border control. The system was to rely on biometric technology such as face scanning, body scanning, and fingerprinting. Registration of biometric data was to be voluntary: travelers would sign up to the system to have their biometric data recorded. Then, when crossing the border, the same biometric data would be collected again and checked against the data registered earlier. Once a traveler's identity was successfully verified, they could cross the border freely, without long waits or traditional documents such as a passport.
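
The enroll-then-verify pattern described above can be sketched as follows. The embedding representation, threshold value, and function names are assumptions made for illustration, not D4Fly's actual design.

```python
import numpy as np

# Hypothetical sketch of voluntary enrolment followed by border-side
# verification. A biometric scan is stood in for by a feature vector.
THRESHOLD = 0.9  # illustrative similarity cut-off, not a tuned value
enrolled: dict[str, np.ndarray] = {}  # traveller ID -> stored template

def enroll(traveller_id: str, embedding: np.ndarray) -> None:
    """Voluntary registration: store a normalized biometric template."""
    enrolled[traveller_id] = embedding / np.linalg.norm(embedding)

def verify(traveller_id: str, embedding: np.ndarray) -> bool:
    """At the border, compare a fresh scan against the enrolled template."""
    template = enrolled.get(traveller_id)
    if template is None:
        return False  # not enrolled: fall back to traditional documents
    probe = embedding / np.linalg.norm(embedding)
    return float(np.dot(template, probe)) >= THRESHOLD  # cosine similarity

scan = np.random.rand(128)                       # stand-in for a face scan
enroll("traveller-001", scan)
fresh = scan + np.random.normal(0.0, 0.01, 128)  # re-scan at the border
print(verify("traveller-001", fresh))            # True: free to cross
```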

During refugee crises, people often leave their homes in panic - not always with all the necessary documents.

That's one thing; the other issue is that in the chaos families often get separated - some fled to Poland, some to Lithuania and other countries. It was the Lithuanian border guards who came to us asking for support. In the end nothing came of it: decision-making processes in the EU take too long for such a rapid response to be possible, and we also didn't manage to collect the necessary data.

We are faster when it comes to control and penalties.

It's a huge tragedy that throughout human history we have focused more on tracking and arresting those who commit crimes than on trying to help others. If we had devoted the same energy and financial resources to helping, we would be in a different place. That's why I try to emphasize at every opportunity that AI carries not only threats but also incredible possibilities for change for the better.

One of your tools, CESIUM, has been put into practice and has changed the real-life situation of 16 children. I would like us to end this first of a series of conversations about ethics in technology with that hopeful example.

In a nutshell, CESIUM is a project that gives child protection specialists secure access to data collected by various agencies and supports their decision-making by identifying and prioritizing children at risk. We conducted a validation test which showed that CESIUM had earlier flagged 16 children who were reported through the traditional verification path months later. The CESIUM algorithm can take into account incidents that individually would not indicate a threat but that in combination give a picture of a child at risk of exploitation. CESIUM monitors the work of several agencies and foundations at the same time, and so catches threats earlier than people working independently of one another can.
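
The core idea, that individually innocuous signals combine into a picture of risk, can be sketched like this. The signal names, weights, and threshold are invented; CESIUM's actual model is certainly more sophisticated.

```python
# Hypothetical illustration of pooling weak signals across agencies.
# All signal names and weights are invented for this sketch.
SIGNAL_WEIGHTS = {
    "school_absence": 0.2,
    "missing_person_report": 0.3,
    "contact_with_known_offender": 0.4,
    "late_night_police_stop": 0.3,
}
RISK_THRESHOLD = 0.7  # illustrative cut-off for prioritizing a case

def risk_score(signals: set[str]) -> float:
    """Each signal alone stays below the threshold; combinations cross it."""
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals))

# One agency sees only a school absence (0.2), another only a police
# stop (0.3); neither acts alone. Pooling the records crosses the line.
combined = {"school_absence", "missing_person_report", "late_night_police_stop"}
print(risk_score(combined) >= RISK_THRESHOLD)  # True
```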

And now you have a powerful ally and lobbyist in Chief Inspector Jonathan McAdam of Lincolnshire Police. The British police were one of your partners in this project. Can it be done? Yes, it can!