This blog discusses how to create human-centred AI (HCAI): how to design AI that works for people.
First, let’s look at a rough three-tier hierarchy representing the foundational HCAI design process:
- Problem Space – The human-world problem space, where the task environment and the hardware resources available to accomplish tasks are recognized and prioritized.
- Intelligence Space – The automated problem-solving space, comprising both hardware resources and the problem-solving algorithms tailored to individual domains and tasks.
- Interaction Space – The AI-human-hardware interaction space, where AI and humans work together to jointly solve problems and otherwise communicate.
Each layer is a level of abstraction: the level below is more specific and more precise, and the level above is more general and more approximate. The common thread across all three layers is the involvement of people. So, how do we design AI that works for people?
People can be great when given space to communicate and to make mistakes; predicting what happens when they are denied that space is far harder.
People, like AI systems, must consider what they want to happen given their settings and environments, but they also need time to imagine how they will work and operate. There is a lot to consider. We often begin by looking at what AI systems do, and it is tempting to conclude that ‘AI is doing something’ and ‘if we do the same thing, we will get the same effect’. This doesn’t work, though.
Work done by computers differs from work done by people precisely because people are different from computers. People ‘see’ patterns and meanings where computers may not. People have a history, knowledge, skills and experience, and they bring subtle insight and understanding of other people and situations. This is what computers can’t yet do. That is not to say that building machines that do more of what people do is impossible: many people are working on exactly that. But the challenge here is to think about it differently from how we usually think. A new way of thinking about AI involves:
- Thinking carefully about the questions humans ask
- Automating technology to answer those questions
- Exploring the capabilities of these tools and what they could mean for society, which is complex and fraught
There is no one-size-fits-all answer to the question “how can we create responsible human-centred AI?”. Creating human-centred AI requires a deep understanding of psychology, sociology and other social sciences. We must consider how humans think, learn, act and interact with the world. This means designing AI systems that are easy to use and understand and that can adapt to the ever-changing needs of users. It also requires a willingness to constantly iterate the design with humans and to learn from failures. By taking a human-centred, user-centric approach to AI design, developers can create systems that are more likely to be adopted and used effectively.
10 tips for creating responsible human-centred AI
Here are a few key considerations for creating human-centred AI:
- Understand the user: To design AI systems that meet users’ needs, it is essential to understand how users think, learn, act and interact with the world around them. This can be done through research methods such as interviews, surveys, and ethnography studies.
- Make it easy to use: AI systems must be designed with usability in mind. They should be easy to use and understand and should require minimal training to get started.
- Iterate with users: Users’ needs and goals are constantly evolving, so AI systems must be able to adapt to change. This can be achieved through flexible design and by iterating the design with users.
- Learn from failures: Not every AI system will be a success, but it is essential to learn from failures to improve future designs.
- Constantly experiment: The field of AI is constantly changing, so it is essential to experiment with new approaches and technologies.
- Involve users early and often: Users should be involved in the design process from the beginning to ensure that the final product meets their needs.
- Design for fairness and transparency: AI systems must be fair and transparent to avoid bias and discrimination.
- Respect user privacy: AI systems must be designed to respect users’ privacy and should only collect and use data necessary for the task.
- Build in explainability: AI systems must be designed to be explainable so that users can understand how and why they make decisions.
- Address technologies’ impact on human rights: define clear rules and guidelines for everyone involved, as discussed below.
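As a concrete illustration of the fairness tip above, here is a minimal sketch of a demographic-parity check: comparing an AI system’s positive-decision rate across user groups. The data, group names and function names are all hypothetical, and this is only one of many fairness metrics, not a complete audit.

```python
from collections import defaultdict

def positive_rates(decisions):
    """Share of positive decisions per group.

    `decisions` is a list of (group, outcome) pairs, where outcome is
    True for a positive decision. All data here is hypothetical.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in positive-decision rate between any two groups."""
    rates = positive_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: (user group, was the decision positive?)
audit_log = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

print(positive_rates(audit_log))          # per-group positive-decision rates
print(demographic_parity_gap(audit_log))  # a large gap warrants investigation
```

A large gap does not prove discrimination on its own, but it flags where the “design for fairness” tip demands a closer look.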
Technologies’ impact on human rights
Businesses and governments should address these technologies’ impact on human rights by defining clear rules and guidelines for everyone involved, from companies to governments to citizens, so that the technology is used appropriately and everyone stays safe.
How do you deal with privacy, security, fairness, and other human rights concerns? Some practical steps:
- Know as much as possible about the data, algorithms and models. You can also ask to see which models are being trained on your data.
- Establish an AI accountability office that can investigate and intervene directly when there are clear patterns of harm.
- Develop ethical codes for algorithmic decision-making, and guidelines for how AI and data can be used to address human rights issues.
- Increase transparency in algorithms. Make it easier to know what data is being used to make decisions about humans.
- Contract with third-party experts who have access to the confidential AI code from day one, so the code can be audited for bias and safety.
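The transparency steps above can be sketched in code. Below is a minimal, hypothetical example of a decision record: every automated decision is logged together with the input fields and model version that produced it, so an accountability office or third-party auditor can later reconstruct what data was used. All field names and the schema are illustrative assumptions, not a standard.

```python
import json
from datetime import datetime, timezone

def record_decision(subject_id, inputs, model_version, decision):
    """Build an auditable record of one automated decision.

    Field names are illustrative; a real schema would be agreed
    with auditors and regulators.
    """
    return {
        "subject_id": subject_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs_used": sorted(inputs),  # which data fields fed the decision
        "decision": decision,
    }

# Hypothetical usage: log a credit decision for later audit.
record = record_decision(
    subject_id="user-123",
    inputs={"income", "employment_length"},  # hypothetical features
    model_version="credit-model-v2",
    decision="approved",
)
print(json.dumps(record, indent=2))
```

Keeping `inputs_used` explicit supports both the data-minimisation tip (collect only what the task needs) and the transparency goal of knowing what data drives decisions about humans.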
Human-centred AI is a journey, not a destination. There is no “perfect” human-centred AI system, and the field is constantly evolving; the goal is to continuously strive for improvement and to learn from mistakes along the way. These are considerations with potentially far-reaching consequences, likely to bring both benefits and costs. By taking a human-centric approach to AI design, developers can create systems that meet responsible AI principles and deliver AI solutions that users are more likely to adopt and use effectively.