Strathmore Law School Research Colloquium: Discussion Paper Series on Algorithmic Discrimination in Developing Countries
The second week of May began on a high note as the very first discussion in the Strathmore Law School (SLS) Research Colloquium Discussion Paper Series was conducted on the 11th of May 2021. The event featured a presentation by Cecil Abungu on his paper titled ‘Algorithmic Decision-making and Discrimination in Developing Countries’ and a subsequent response by Dr. Isaac Rutenberg.
The purpose of the Discussion Paper Series is to encourage academic discussion among SLS staff and students alike. As the moderator of this first discussion in a pioneering movement for legal engagement, I felt the event was a massive success in attaining that goal.
More about the Author and Respondent:
The author of the paper, Cecil Abungu, is a Teaching Fellow at SLS and a Research Affiliate with the Legal Priorities Project, where his research so far has focused on AI risk and longtermism. Cecil holds an LL.B. degree from SLS and an LL.M. degree from Harvard Law School. He has previously published work in, among others, the African Journal of International and Comparative Law, Oxford's Chinese Journal of Comparative Law and the Vienna Journal on International Constitutional Law. This year, he also has forthcoming work in the Harvard National Security Journal.
The respondent, Dr. Isaac Rutenberg, is a Senior Lecturer and the Director of the Centre for Intellectual Property and Information Technology Law at SLS. He is also an Associate Member of the Centre for Law, Technology and Society at the University of Ottawa, Canada. Dr. Rutenberg's academic research focuses on the use and suitability of intellectual property systems by innovators and entrepreneurs throughout Africa, as well as aspects of IT law affecting data protection and information controls. He is the author of a book on cyberlaw in Kenya, and has published and presented widely on both practical and academic issues in IP and IT law.
Hence, both the author and respondent were extremely well equipped to engage in an informed and informative discussion on algorithmic decision-making and discrimination. And they did not disappoint.
Key points from the discussion:
The discussion centered on the future of algorithms in both 'developed' and 'developing' countries, drawing a stark contrast with nations such as Kenya, India, Nigeria, South Africa and the Philippines. In his presentation, Cecil addressed core questions such as why algorithmic decision-making is important in developing countries, why discrimination through algorithmic decision-making may be even more potent there, and why existing solutions formulated by authors in 'developed countries' may not work successfully in 'developing' nations.
Countering some of the well-articulated and well-researched points raised by Cecil in both his presentation and his paper, Dr. Isaac Rutenberg argued that while discrimination through algorithmic decision-making may occur on a greater scale, it is not necessarily worse; it is only more identifiable. Further, pointing out discrimination is not the same as preventing it. He also sought more information on the authors of algorithms and the unique contextual biases they bring as a result.
The audience was also engaged, raising points such as the significance of algorithmic discrimination and questioning whether the need for algorithms sufficiently justifies the negative consequences they would raise. Cecil addressed these clearly, and they can be explored further through a reading of his paper, which I would highly recommend to you all.
If you are interested in presenting your piece in the next discussion, make sure to email either cadionyi@strathmore.edu or anciko@strathmore.edu. We look forward to seeing you there!
Written by Tasneem Pirbhai, a third/fourth year SLS student in the Barristers stream.
As a researcher, Tasneem has an avid interest in law and technology and their implications. Her current research project centers on the admissibility of data created by M2M (machine-to-machine) communication as evidence in criminal or civil courts.