12 Angry AI

Not too long ago, AI was its own field: a separate walled garden working towards automation and machine intelligence. We hadn't yet crossed the bridge to a personal data economy, and AI made us think of Hollywood and its sentient machines. Sometimes it was killer robots; other times it was an omniscient jar of swirling smoke sent to save humanity. Big names in tech wrote op-eds rallying around or decrying AI, and more specifically its development and how it could lead to a machine superintelligence.

However, between 2016 and now, AI has come to mean something very different: no longer an exciting (or terrifying) superintelligence, it is now an algorithm, a machine learning model, or a neural network. By this definition, it is baked into almost every app, platform and service we use, whether the end user knows it or not. Most of the AI we engage with is built on analyzing data and making predictions or recommendations, a sort of intelligence modeled on human learning. Machine learning and neural networks are based on how we think, learn and make decisions: we are creating something in our own image.

To this end, humans are assigning more and more power to AI systems, but we have failed at a critical point: decision-making. We have created AI that is only as strong as its training, without borrowing the feature that makes human intelligence and experience so resilient: consensus. Single-agent and general AI systems have their applications, but the future should rely on the power of teams present in multi-agent AI systems.

Movie still of 12 Angry Men

My Definition of Failure Is Your Definition of Success

AI is built with a goal in mind, then rigorously trained and tested on either real or synthetic datasets. Training and testing are built on two core functions that drive the learning: cost and reward. Once the algorithm can reasonably execute on its goal, it is moved into its intended application.
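To make the cost side of that concrete, here is a minimal, purely illustrative sketch: a single parameter is nudged step by step to drive a cost function (here, mean squared error on toy data) down. The data, the cost and the learning rate are all invented for the example, not taken from any real system.

```python
import numpy as np

# Toy dataset: inputs x and noisy targets y for a simple linear relationship.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)

def cost(w):
    """Mean squared error: the 'cost' the training loop tries to drive down."""
    return np.mean((w * x - y) ** 2)

# Gradient descent: repeatedly nudge the parameter in the direction that lowers the cost.
w = 0.0
learning_rate = 0.1
for step in range(200):
    grad = np.mean(2 * (w * x - y) * x)
    w -= learning_rate * grad

print(f"learned w = {w:.3f}, final cost = {cost(w):.4f}")
```

Everything the finished model "knows" comes out of this loop, which is why the choice of cost function matters so much.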

The disconnect in single-agent systems happens at three points in AI design: goals, cost and reward.

Goal-setting is about meeting a perceived need or addressing a problem. When designing goals for AI, as many entrepreneurs have found, theory and reality can be worlds apart. Individual needs can differ greatly from perceptions, whether for the end user or a stakeholder of the system. Even when all of that is taken into account, it must then be translated into a quantifiable goal for machine learning, adding another layer of complexity. Remember: even the most perfect optimization algorithm is only as good as the metrics it optimizes toward.

Reinforcement learning is built on the idea of rewards for positive outcomes and costs for negative ones. In a vacuum, it is easy to say which outcomes are good and which are bad. In practice, the challenge is ranking them, especially across multiple stakeholders and groups. What do you assign higher value to in order to better shape the outcomes of your model? How can a single AI best adapt to complex, contextual environments to maximize reward?
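As a hypothetical illustration of how much that ranking matters, the sketch below scores two candidate actions against three invented stakeholder groups; changing the designer's weights changes which action the agent "prefers." None of the numbers are real.

```python
# Hypothetical example: one action affects three stakeholder groups differently.
# The weights are the designer's choice, and they decide which action "wins."
outcomes = {
    "action_a": {"user": +1.0, "platform": +0.2, "regulator": -0.5},
    "action_b": {"user": +0.3, "platform": +0.9, "regulator": +0.1},
}

def reward(action, weights):
    """Weighted sum of per-stakeholder outcomes for a candidate action."""
    return sum(weights[group] * value for group, value in outcomes[action].items())

user_first = {"user": 1.0, "platform": 0.3, "regulator": 0.5}
platform_first = {"user": 0.3, "platform": 1.0, "regulator": 0.5}

for weights in (user_first, platform_first):
    best = max(outcomes, key=lambda action: reward(action, weights))
    print(weights, "->", best)
```

With the user-first weights the agent picks action_a; with the platform-first weights it picks action_b. Same model, same outcomes, different "values."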

A general, all-purpose AI sets out to reduce its cost function as much as possible across all areas, applying learning from previous experiences, whether through deep learning or machine learning, to best solve the new problem placed before it. Our jobs place more and more value on generalists, so why not build AI the same way? Because humans are masters of transfer learning, applying previous experience to entirely new but similar fields. Most importantly, when we are out of our comfort zone, we learn socially or ask for help. A lone algorithm has neither of those fallbacks.

The theory behind multi-agent systems is not new: first called distributed artificial intelligence, the field has been studied since the 1970s. The important distinction is that the agents in a multi-agent system exist in the same environment and interact with each other. This interaction can be as simple as homogeneous agents sharing the data they collect with one another, or as involved as heterogeneous agents forming commitments, institutions and norms. Depending on the goals and design of a multi-agent system, it can better serve the needs of users and stakeholders, even when those needs differ or conflict.
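A minimal sketch of the homogeneous case, with invented names and numbers: several agents in the same environment each make a noisy observation and share it, and the pooled (consensus) estimate tends to land closer to the truth than any single agent's guess.

```python
import random

random.seed(42)
TRUE_VALUE = 10.0  # hidden state of the shared environment

class Agent:
    """An agent that makes its own noisy observation and shares it with peers."""
    def __init__(self, name, noise):
        self.name = name
        self.noise = noise
        self.observation = None

    def observe(self):
        self.observation = TRUE_VALUE + random.gauss(0, self.noise)
        return self.observation

agents = [Agent(f"agent_{i}", noise=2.0) for i in range(5)]
observations = [agent.observe() for agent in agents]

# Any one agent can be far off; pooling observations (a crude consensus)
# usually lands closer to the true value.
consensus = sum(observations) / len(observations)
for agent in agents:
    print(f"{agent.name}: {agent.observation:.2f}")
print(f"consensus estimate: {consensus:.2f} (true value {TRUE_VALUE})")
```

Real multi-agent systems go far beyond averaging, but the intuition is the same: the group compensates for the blind spots of any one member.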

Twenty years ago, militaries were practically the only organizations that could run multi-agent systems. Today, with cloud platforms that can run AI, the barrier to entry for operating a multi-agent system is far lower. Cooperative AI is one of the great computing challenges of our generation, and multi-agent systems are an integral part of it. If we can't create AI that communicates with itself within a confined environment, we surely won't be able to master communication between independent AI systems.

An Uncomfortable Conversation Worth Having

Bias is inherently difficult to talk about. A person's opinions, beliefs and perceptions of the world around them are, rightfully, a topic that hits close to home. When a bias is exposed, the person who holds it can feel ostracized or attacked for what they hold true. Carefully, and over time, biases can be confronted and changed.

The biases of a single individual are, for the most part, low impact: there is only so much the average person can influence. An AI, on the other hand, can interact with hundreds of millions of people, making countless decisions per second. A single-agent AI plays judge, jury and executioner, amplifying the biases and prejudices found in its training data sets, goals, and cost and reward functions. Bad training data, misrepresentation of stakeholders and their values, loopholes in learning and lack of perspective are just some of the causes of bias in AI.

Multiple agents can and should advocate for equal outcomes in both supervised and unsupervised AI systems, spreading the power around and designing AI to protect the interests of not just stakeholders but users as well. Different agents could be trained to prevent exploitation and to advocate for each party's best interest. This approach has high potential for diversity and inclusion.

Another key feature of complex multi-agent systems is negotiation between agents. There are many applications, but one we want to draw attention to is finance. The industry is becoming more and more decentralized, but there is a lot of scar tissue present, especially among the unbanked. A general AI could see the unbanked as simply lacking a paper trail, with no transaction history or credit score, and filter them out. A multi-agent system, by contrast, could advocate on behalf of these individuals, looking at non-traditional indicators and getting them access to services they would not have otherwise.
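A hypothetical sketch of that idea: a traditional credit agent and an advocate agent score the same unbanked applicant, and a simple mediator approves if either can make a strong enough case. Every field, weight and threshold here is invented for illustration, not drawn from any real lending system.

```python
# Hypothetical applicant with no traditional credit history but a solid
# record of non-traditional indicators.
applicant = {
    "credit_score": None,          # unbanked: no traditional history
    "months_rent_on_time": 36,
    "months_utilities_on_time": 30,
    "mobile_money_txns_per_month": 22,
}

def traditional_agent(a):
    """Scores only traditional signals; no history means no approval."""
    return 0.0 if a["credit_score"] is None else min(a["credit_score"] / 850, 1.0)

def advocate_agent(a):
    """Scores non-traditional indicators of reliability (weights are invented)."""
    score = 0.4 * min(a["months_rent_on_time"] / 36, 1.0)
    score += 0.3 * min(a["months_utilities_on_time"] / 36, 1.0)
    score += 0.3 * min(a["mobile_money_txns_per_month"] / 20, 1.0)
    return score

def mediate(a, threshold=0.6):
    """Approve if either agent can make a sufficiently strong case."""
    best_case = max(traditional_agent(a), advocate_agent(a))
    return best_case >= threshold, best_case

approved, score = mediate(applicant)
print(f"approved={approved}, best case score={score:.2f}")
```

The traditional agent alone would filter this applicant out; the advocate agent gives the mediator a reason to say yes.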

Celebrating What Makes Us Different

Superintelligence is still a long way off, but the AI we have access to today can be built more wisely. By doubling down on modeling our AI systems on human learning and interaction, and by leaning into the power of teams present in multi-agent systems, we better insulate ourselves from training biases.

Developers should also look to open up the black box, so to speak. If they are taking steps to build a fairer, more balanced algorithm, they should share that work. Being diverse and inclusive, not just in business structure but in the way products and services are offered, is a differentiator in many markets. If a company uses a multi-agent AI system, even the most uninformed customer can buy into the "more is better" mindset. We have the technical means to address the bias problem, but that needs to be paired with communication and education.

