Communication barriers
Grasping the potential of AI is likely to be harder for people in lower-income countries, where literacy and numeracy are lower and residents are less familiar with digital data and the algorithms that process this information. For instance, in our field experiment in Nairobi, Kenya, we found it difficult to explain simple algorithms involving negative numbers and fractions to low-income residents. But our team found simpler ways to communicate these concepts, and people's responses to the algorithm made clear that they had grasped them. Still, complex AI systems are difficult to understand, even for AI researchers.
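As a stylized illustration only (the variables, weights, and function names below are invented for exposition and are not drawn from the Nairobi study), a score that mixes fractions and negative weights can often be re-expressed using whole, positive numbers without changing how it ranks households:

```python
# Hypothetical eligibility score of the kind that is hard to explain
# when it mixes a fractional weight with a negative adjustment.
def score_with_fractions(has_metal_roof, household_size):
    # Negative weight: owning a metal roof lowers the poverty score.
    return 0.5 * household_size - 0.25 * (1 if has_metal_roof else 0)

# The same ranking, communicated with whole positive numbers only:
# multiply every weight by 4 and shift so no subtraction is needed.
def score_simplified(has_metal_roof, household_size):
    # 2 points per household member, plus 1 point for NOT having a metal roof.
    return 2 * household_size + (0 if has_metal_roof else 1)
```

Because the second score is just four times the first plus one, it orders households identically while avoiding fractions and subtraction entirely.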
Some applications don’t require that users know how algorithms work. For instance, Netflix movie recommendations can benefit users even if they do not understand how the algorithm selects content it thinks they will like. Likewise, in a humanitarian crisis, policymakers may deem it acceptable to use an inscrutable “black box” algorithm, as Togo’s government did in response to the COVID-19 crisis.
Transparency is sometimes critical. When targeting social protections in nonemergency settings, explaining eligibility criteria to potential beneficiaries is essential. This is easier said than done: scores of interviews and focus groups showed us how norms and values around data and privacy are fundamentally different in a setting such as rural Togo than in wealthy nations where AI-based systems are more common. For instance, few people we spoke to were worried about the government or companies accessing their data (a dominant concern in Europe and the United States), but many wondered if and how such information would be shared with their neighbors.
As AI is more commonly deployed, populations must understand its broader societal effects. For instance, AI can generate provocative photographs that are entirely false and robocalls that mimic voices. These rapid changes will affect how much people should trust information they see online. Even remote populations must be informed about these possibilities so that they are not misled—and to ensure that their concerns are represented in the development of regulations.
Building connections
AI solutions rest on existing physical digital infrastructure: from massive databases on servers, to fiber-optic cables and cell towers, to mobile phones in people’s hands. Over the past two decades, developing economies have invested heavily in connecting remote areas with cellular and internet connections, laying the groundwork for these new applications.
Even though AI applications benefit from digital infrastructure, some can make better use of limited connectivity than existing alternatives. For example, many teachers in Sierra Leone struggle with poor internet access. For some tasks, it may be easier for them to get ideas from a chatbot and then validate the response than to collate information from several online resources.
Some AI systems will, however, require investment in knowledge infrastructure, especially in developing economies, where data gaps persist and the poor are digitally underrepresented. AI models there have incomplete information about the needs and desires of lower-income residents, the state of their health, the appearance of their people and villages, and the structure of lesser-used languages.
Gathering these data may require integrating clinics, schools, and businesses into digital recordkeeping systems; creating incentives for their use; and establishing legal rights over the resulting data.
Further, AI systems should be tailored to local values and conditions. For example, Western AI systems may suggest that teachers use expensive resources such as digital whiteboards or slide presentations. Such systems must be adjusted to remain relevant for teachers who lack these tools. Investing in the capacity and training of local AI developers and designers can help ensure that the next generation of technical innovation better reflects local values and priorities.
Artificial intelligence promises many useful applications for the poor across developing economies. The challenge is not in dreaming big—it’s easy to imagine how these systems can benefit the poor—but in ensuring that these systems meet people’s needs, work in local conditions, and do not cause harm.