Highlights from the HAI Conference on AI Ethics, Policy and Governance

14 minute read

Last week, I attended the HAI Conference on AI Ethics, Policy and Governance, organised by the Institute for Human-Centered Artificial Intelligence at Stanford University. It was an exciting event that was packed with interesting talks and attracted almost a thousand people. In this blog, I summarise some of its highlights.

AI and the Economy

Erik Brynjolfsson and Susan Athey discussed the economics of AI.

“Silicon Valley, we got a problem”, said Erik, who focused on the macro-level picture and highlighted the need for institutions to ensure that the benefits of AI are shared.

The goal should not be more and better tech, but shared prosperity.

He also noted that even though we live in the age of data and information, our measurements of the economy are getting worse. For example, we do not measure intangible assets such as the value of Wikipedia.

Susan Athey took a micro-level perspective and argued that the adoption of machine learning by organisations has a “fast” and a “slow” mode:

  • Fast: Automating the easy tasks.
  • Slow: Reconfiguring how companies work to accommodate the use of machine learning. They need time to collect data, develop a cloud infrastructure and train their personnel.

Erik Brynjolfsson added that it is easier for companies that have the benefit of scale to internalise the cost of the machine learning transformation. However, they might also be victims of inertia (see Nokia, IBM).

They both discussed that new technology has the potential to transform core sectors such as health, education and transportation. It can enrich education’s content, make healthcare more accurate and transportation faster. It is critical though to remember that “when you have the power to change the world, your values matter”.

Regulating Big Tech

Eric Schmidt and Marietje Schaake clashed on big tech, innovation and regulation.

Eric Schmidt stated that AI provides new ways to solve problems in a variety of disciplines, such as physics and chemistry. He noted that we are moving from a simple AI stack to a more complex one, and that this has implications: complex AI models make coordination between humans and machines difficult, so we need new interfaces to facilitate their coexistence.

Eric was very excited about the strides that Alphabet and its subsidiaries have made in deep learning and reinforcement learning. For example, he mentioned how Google is collecting data, normalising it with GANs and then using reinforcement learning to find a universal function that is useful in downstream tasks.

He also underlined some issues and open questions in AI:

  • How will end-users be in control of an AI system?
  • Progress in critical domains such as health is throttled by limited data availability. Eric argued that this is due to privacy concerns but questioned them: “Wouldn’t you give all your data to save a life?”
  • A better and more complete understanding of the bias in people, systems and models is needed.
  • “The China problem”: the US needs access to China’s talent, so we have to figure out how to work with them.
  • Lack of ground rules in the development of AI. Eric was quite controversial here, stating that “liberal, western values” are the ones that should “win” when regulating AI.

Marietje Schaake highlighted that regulation is at the top of the AI agenda; the main questions are who sets the rules and how. She also stressed the need for new regulatory frameworks, with human rights being a suitable basis for them.

She noted that we should not innovate for the sake of it and that we should care more about democracy. This was mainly a comment on big tech platforms, their irresponsible way of innovating (“move fast and break things”) and their ability to influence the outcome of elections. She didn’t name them explicitly, but think of Facebook, Twitter and YouTube.

As in the previous session, Marietje underlined that the important question is not who will dominate AI but what values they will carry. She also argued that we need regulation in order to create benchmarks for the development of AI and hold others, such as China, accountable to them. Otherwise, AI could become an accelerator of top-down control.

She also mentioned that governments have a serious information deficit relative to companies and lack the tools needed to catch up. To make matters worse, the gap keeps widening because companies’ experience in data processing and analysis lets them look at challenges through different lenses. As a result, many big data tasks, such as the digitisation of public services, are handed to the private sector, leading to a creeping privatisation of government.

If trade secrets stand between us and transparency, this has to change.

She also replied to Eric’s cautionary note that governments ought not to regulate AI because regulation would throttle innovation: it is never too early to regulate, and lawmakers have to be more proactive. Furthermore, she highlighted that “great power” means “great responsibility, or at least modesty” (a pointed remark, given Eric Schmidt’s rather arrogant tone), and added that companies have to treat internet users as citizens, not data points.

Finally, Marietje concluded that when technology could potentially be harmful, there must be:

  • Systematic impact assessments
  • Time to examine the technology before deployment (as we do with drugs).

The Coded Gaze

Joy Buolamwini gave a powerful talk on algorithmic bias and discussed her research on auditing facial analysis systems.

She explained why computer vision software is biased against people of colour and women. When collecting a training set for such models, researchers commonly crawl large image databases and use a face detection algorithm to keep only the images that contain faces. However, these algorithms have serious difficulties in identifying women and people of colour, so the resulting training set is heavily imbalanced, favouring white, and usually male, faces.
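
To make this failure mode concrete, here is a minimal sketch of such a crawl-and-filter pipeline (my own illustration, not code from the talk; the directory names and the use of OpenCV’s Haar cascade detector are placeholder assumptions). Whatever faces the detector misses simply never enter the training set, so its blind spots become the dataset’s blind spots.

```python
# Minimal sketch of a crawl-and-filter data collection pipeline.
# The detector (OpenCV's Haar cascade here, purely as a stand-in) decides
# which crawled images enter the training set, so any demographic skew in
# the detector is baked into the resulting dataset.
import os

import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def build_face_dataset(crawled_dir: str, output_dir: str) -> int:
    """Copy crawled images into output_dir only if a face is detected."""
    os.makedirs(output_dir, exist_ok=True)
    kept = 0
    for name in os.listdir(crawled_dir):
        image = cv2.imread(os.path.join(crawled_dir, name))
        if image is None:  # skip files that are not readable images
            continue
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:  # keep only images where the detector finds a face
            cv2.imwrite(os.path.join(output_dir, name), image)
            kept += 1
    return kept
```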

She highlighted that we should question performance on gold-standard datasets because these benchmarks are skewed, especially when examining intersectionality. To test her concerns and raise the issue with the companies developing such tools, she created a dataset that is balanced on gender and skin colour.

In detail, she examined how widely used facial analysis systems from Amazon, IBM, Face++, Microsoft and others perform, not only at the aggregate level but also when splitting the test data by gender, skin colour and their intersection. She found that all of them perform significantly worse on black women, while their accuracy on images of white men is close to 100%.

This is a reflection of systemic inequalities that are embedded in the training data.
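
To make the evaluation protocol concrete, here is a minimal sketch of this kind of disaggregated audit, assuming we already have model predictions and demographic annotations in a CSV (the file and column names are hypothetical):

```python
# Minimal sketch of a disaggregated audit: accuracy overall and per
# gender x skin-type subgroup. File and column names are hypothetical.
import pandas as pd

# Expected columns: gender, skin_type, y_true, y_pred
df = pd.read_csv("audit_predictions.csv")
df["correct"] = (df["y_true"] == df["y_pred"]).astype(float)

overall = df["correct"].mean()
by_group = (
    df.groupby(["gender", "skin_type"])["correct"]
      .agg(accuracy="mean", n="count")
      .sort_values("accuracy")
)

print(f"Aggregate accuracy: {overall:.3f}")
print(by_group)  # a skewed benchmark can hide large gaps between subgroups
```

The point of reporting per-subgroup numbers is exactly the one Joy made: a single aggregate figure can look excellent while hiding very poor performance on the intersection of gender and skin colour.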

She also argued that companies are doing “moral outsourcing”: they develop technologies and let users deal with their flaws and biases. She added that although improving the accuracy of these systems is important, it will not stop the abuse; making them more accurate might even increase the number of cases where they are used to actively discriminate against underrepresented groups.

Lastly, Joy argued that the following things could level the playing field:

  • Actionable auditing: White-hat developers who interrogate ML systems and report their biases.
  • Diversity: Increase the diversity in ML teams.
  • Advocacy: Companies developing facial analysis technologies to sign the Safe Face Pledge.
  • Transparency: Data transparency to enable researchers to evaluate the suitability of a model for a use case.
  • Participatory AI: Ethics frameworks to be set by lawmakers, companies and community review boards.
  • Taxes: An algorithmic accountability tax to fund public research.

AI and Geopolitics

Mykel Kochenderfer, Matthew Daniels, Colin Kahl and Brad discussed how the US develops and deploys AI systems and how they could potentially be used by adversaries. The conversation was moderated by Amy Zegart.

Mykel and Matthew argued that we tend to overestimate the short-term and underestimate the long-term benefits of ML. Due to this hype, ML is being adopted without considering how well it fits with existing norms and systems. For example, we haven’t discussed extensively how to reorganise human labour, and the current internet infrastructure and cybersecurity protocols were developed in a pre-ML era. This might not look dangerous now, but it could be in the near future; we need better mechanisms to examine the progress in ML and the risks it entails.

Colin and Brad picked up the discussion, with Colin arguing that AI and geopolitics should be viewed through economic, military and ideological lenses. The most interesting was the last: AI adoption in western societies and its side effects (for example, large-scale automation of the workforce) might cause divisions within democracies, while AI can further empower authoritarian regimes (for example, the computer vision systems used for surveillance by the Chinese government).

Moreover, Colin made a compelling comparison between AI and nuclear weapons based on the concept of deterrence stability (i.e. don’t hit me because I’ll hit back). He argued that the lack of a deterrence stability mechanism for AI in a geopolitically unstable world might lead to crises where nations fear that other nations have developed some form of advanced military AI. These fears could escalate existing tensions, lead to an accident, or prompt nations to strike first in order to neutralise the enemy.

Brad commented that the US government is not focused on using AI to develop autonomous weapons. Its main uses are surveillance (such as Project Maven), command and control, and logistics.

Intelligence and information give you the advantage in preventing wars.

Amy summarised the discussion by highlighting four cases where AI could be used by adversaries:

  • Destabilisation
  • Deception
  • Distortion
  • Decision

I think this paper provides a good description of Amy’s four Ds and summarises well many of the points made during the panel discussion.

AI, Elections and Disinformation

Nate Persily, Renee DiResta and Andy Grotto discussed the impact of AI on elections and other influence campaigns. The conversation was moderated by Michael McFaul.

Nate opened the discussion by arguing that we now live in democracies without a shared basis for what’s true. The volume, variety and velocity of data, as well as the social media platforms and the technologies they use, affect political debate and create an election ecosystem that privileges virality, which is driven mainly by rageful and hateful content.

Moreover, he mentioned that anonymity online has two main disadvantages:

  • Increases the number of bots (Note: Bots are a spectrum).
  • Increases the number of unaccountable opinions.

He also highlighted that when examining disinformation, we should consider two different elements: the messenger (social media platforms) and the message (actors and content). In many disinformation campaigns, AI is used to identify and target echo chambers on social media platforms.

Lastly, Nate questioned why the rules of political debate are being set by big tech companies, giving the example of Facebook’s Nick Clegg, who announced to the EU how Facebook would act during the 2019 European Parliament elections.

Renee argued that election interference is a discrete challenge with tangible impact; however, the main goal of disinformation campaigns is consensus building. She described how the Internet Research Agency, a Russian company engaged in online influence operations, used an expansive cross-platform media mirage to target left-leaning and right-leaning communities as well as black Americans, in order to disinform them and create an internalised message within their echo chambers. That highlights the main difficulty for companies, researchers and lawmakers: how do you filter a message that has been internalised and is considered truthful?

Renee also explained that influence operations should be thought of as a process with some key parts:

  • Actors: Sometimes bots but mostly non-bots.
  • Content:
    • Participatory propaganda through easily sharable content such as memes. That is a good read.
    • Long-form propaganda.
  • Dissemination: Platforms and social media.

Renee noted that artificial intelligence makes the challenges posed by disinformation harder to solve. Firstly, AI reduces the cost of generating content. She focused on two main applications: deepfakes (images and audio) and text generation. Regarding the former, the cost of deployment is still quite high, with audio fakes being a bit easier to produce. Regarding text generation, such messages are currently fairly easy to detect because the generative models have been trained on small datasets or their output contains a lot of non-standard characters. However, both could be dangerous in the near future: algorithmic advancements are reducing their cost and improving their quality, giving them the potential to shape people’s perceptions at scale.
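
The character-based signal Renee mentioned can be illustrated with a toy heuristic (my own illustration, and deliberately naive): flag messages whose share of characters outside a basic Latin set exceeds a threshold. Real detection is much harder, and this particular signal fades as models improve.

```python
# Toy illustration of the "non-standard characters" signal: flag text whose
# fraction of characters outside a basic Latin set is high. This is a naive
# heuristic for illustration only, not a real detector.
import string

STANDARD = set(string.ascii_letters + string.digits + string.punctuation + " \n\t")

def nonstandard_ratio(text: str) -> float:
    """Fraction of characters that fall outside the basic Latin set."""
    if not text:
        return 0.0
    return sum(ch not in STANDARD for ch in text) / len(text)

def looks_suspicious(text: str, threshold: float = 0.15) -> bool:
    """Flag text whose non-standard character ratio exceeds the threshold."""
    return nonstandard_ratio(text) > threshold
```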

Secondly, AI changes the way malicious messages are distributed. It is becoming easier to generate fake accounts, a trend that will only accelerate when GANs can be deployed at scale. Moreover, other machine learning methods enable better targeting of malicious messages, very much in line with what Nate mentioned earlier. Renee concluded that enabling technologies favour the aggressor, in this case the actors orchestrating influence campaigns.

Lastly, Andy discussed the role of governments: they have to break down the problem and craft specific policy interventions.

Michael asked the panel some important questions:

Why not ban political ads on Facebook? Andy commented that the US government (2008-2016) assumed competence and good faith on the part of big tech, which also lobbied hard against political action. Renee added that many political ads are run by individuals, and banning them would violate free expression.

One man’s terrorist is another man’s freedom fighter.

Moreover, she argued that in a highly polarised world, it is difficult to distinguish what’s political and what’s not. Who has the right to decide?

Why do we allow companies to release bad products? The panel replied that the main reason is the Terms & Conditions, which state that consumers accept the product as it is.

What should big tech and governments do to combat disinformation? The panel argued that creating data trusts to enable research on this topic would be a potential solution.