Peering into the Black Box: Addressing Monitoring Challenges in AI Systems
Rapid technological advancement has woven artificially intelligent machines into people's daily routines, to the point where such machines can fairly be said to have become an integral part of human life.
This rapid involvement of machines in human lives also carries certain concerns, chief among them that it is often difficult to predict or explain how a machine reached a particular conclusion. These concerns stem from the internal workings of the machines and the various technologies they employ.
Among the elements that make a machine "intelligent" is what is commonly called the black box: the combination of complex algorithms and decision-making processes that an AI system uses to reach its conclusions. The problem is that this black box is often opaque both to the users who rely on it and to the programmers and engineers who build it.
Thus, the inner workings of artificially intelligent machines remain a mystery, and this has raised concerns about the accountability and responsibility of these systems. An AI system processes data about past events and circumstances related to a particular subject with the help of an algorithm, and then produces a result or recommended course of action. The process is not as simple as it seems, because it is often unclear how the system reached its conclusion.
The conclusion reached might be correct or incorrect, and we may never know how it was arrived at. It could be the result of a technical error introduced by faulty programming, a human mistake that led the AI system to develop its own bias from the data fed into it, even though the programmer neither intended this nor knew about it. This is yet another way in which the system can go wrong.
Such mistakes can exact a heavy price. A real-life example is self-driving cars, which are susceptible to accidents caused by faulty algorithms or incorrect programming. This is why decoding the black box is essential, and why researchers want to open it up and make AI systems transparent and explainable.
There have been multiple recent examples of bias caused by AI. In most cases, the initial data fed into the system is neither biased nor discriminatory in itself, yet the AI reaches a discriminatory conclusion because of other data fed into it, and the end result is disturbing.
One such example comes from US healthcare: an algorithm used at scale in US hospitals to decide which patients should receive extra medical care was found to be biased between Black and white patients, favoring white patients over Black ones.
No discriminatory data was programmed into the machine; rather, it developed bias from the data available to it on patients' healthcare costs. The reasoning was that cost summarizes how much healthcare a particular person needs, and white patients had spent more on healthcare than Black patients.
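The proxy-label mechanism described above can be illustrated with a small simulation. Everything here is a hypothetical sketch, not data from the actual study: two groups have identical underlying medical need, but one group historically spends less on care, so an algorithm that ranks by spending systematically overlooks it.

```python
import random

random.seed(0)

def make_patients(group, spend_factor, n=1000):
    """Simulate patients whose observed spending understates need by spend_factor."""
    patients = []
    for _ in range(n):
        need = random.uniform(0, 1)       # true (unobserved) healthcare need
        spend = need * spend_factor       # observed spending, a biased proxy for need
        patients.append({"group": group, "need": need, "spend": spend})
    return patients

# Group B has the same distribution of need, but spends 40% less per unit of need.
patients = make_patients("A", spend_factor=1.0) + make_patients("B", spend_factor=0.6)

# An algorithm that allocates extra care to the top 20% by *spending*.
patients.sort(key=lambda p: p["spend"], reverse=True)
selected = patients[: len(patients) // 5]

share_b = sum(p["group"] == "B" for p in selected) / len(selected)
print(f"Group B share of selections: {share_b:.0%}")
```

Although group B accounts for half the population and half the true need, its share of selections collapses, because the proxy (spending) encodes the historical disparity.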
Another example comes from Amazon, which adopted an experimental tool that used artificial intelligence to rate job applicants; its suggestions were then used to find new recruits. It was discovered that the system was not evaluating applicants in a gender-neutral manner.
It was biased against women because of the earlier data on candidates whose applications had been selected. The system spotted a similarity in those applications: most came from men, reflecting the industry's male dominance. Amazon's algorithm concluded that male applicants were preferred and was therefore biased against female applicants.
These problems clearly indicate that a solution is needed: AI systems must be monitored to maintain fairness and overcome technical biases. Black box AI systems are those whose internal workings are not transparent or easily understood by humans.
These systems can be challenging to monitor because it is often difficult to determine why the system made a particular decision or came to a particular conclusion. This lack of transparency can be particularly problematic when the AI is making decisions that impact individuals or society as a whole.
One potential solution to this problem is to monitor the inputs and outputs of the black box AI system. For example, if the system is being used to make loan decisions, we might monitor the data that is being used as inputs (e.g., credit score, income, etc.) as well as the decisions that are being made (e.g., whether or not to approve a loan). This can provide some level of accountability and transparency, even if we don’t fully understand how the AI is making its decisions.
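A minimal sketch of this input/output monitoring idea follows. The field names (`credit_score`, `income`, `region`) and the decision rule are entirely hypothetical stand-ins; the point is that the monitor treats the model as opaque and only records what goes in and what comes out.

```python
from collections import defaultdict

def black_box_decide(applicant):
    # Hypothetical opaque policy; in practice this would be the deployed model,
    # whose internals the monitor never inspects.
    return applicant["credit_score"] >= 650 and applicant["income"] >= 30000

class DecisionMonitor:
    """Wraps a black-box model and logs every input/output pair."""

    def __init__(self, model):
        self.model = model
        self.log = []

    def decide(self, applicant):
        approved = self.model(applicant)
        self.log.append((applicant, approved))
        return approved

    def approval_rate_by(self, attribute):
        """Approval rate broken down by any recorded attribute."""
        counts = defaultdict(lambda: [0, 0])   # value -> [approved, total]
        for applicant, approved in self.log:
            counts[applicant[attribute]][0] += int(approved)
            counts[applicant[attribute]][1] += 1
        return {k: a / t for k, (a, t) in counts.items()}

monitor = DecisionMonitor(black_box_decide)
monitor.decide({"credit_score": 700, "income": 45000, "region": "north"})
monitor.decide({"credit_score": 600, "income": 50000, "region": "south"})
monitor.decide({"credit_score": 680, "income": 28000, "region": "south"})
print(monitor.approval_rate_by("region"))
```

Even without understanding the model, disparities in approval rates across groups become visible in the log, which is exactly the accountability this approach aims for.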
However, there are certain limitations to this approach. One challenge is that the AI may be learning and evolving over time, so the inputs and outputs that we are monitoring may not remain representative of the system's behavior. Additionally, if the model the AI is based on cannot be reverse-engineered, it may be difficult or impossible to fully understand why the AI is making the decisions it is making. A current example of this challenge can be seen in the use of facial recognition technology.
Researchers have found that facial recognition systems can have high error rates, particularly for people with darker skin tones. However, it can be difficult to understand why the AI is making these errors or how to correct them, as the underlying code may be opaque or not easily modifiable. This can raise serious concerns about the fairness and accuracy of these systems, particularly when they are being used by law enforcement or other organizations with significant power over individuals.
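The earlier concern that a system's behavior may drift over time can be sketched as a simple check that compares a recent window of decisions against a baseline window. The 10% tolerance and the sample windows below are illustrative assumptions, not a recommended standard.

```python
def approval_rate(decisions):
    """Fraction of True values in a list of boolean decisions."""
    return sum(decisions) / len(decisions)

def drift_alert(baseline, recent, tolerance=0.10):
    """True when the recent approval rate drifts beyond the tolerance."""
    return abs(approval_rate(recent) - approval_rate(baseline)) > tolerance

baseline = [True] * 60 + [False] * 40   # 60% approval when monitoring began
recent = [True] * 35 + [False] * 65     # 35% approval in the latest window

print(drift_alert(baseline, recent))
```

A check like this does not explain *why* behavior changed, but it flags that the snapshot of inputs and outputs we validated earlier no longer describes the live system, prompting human review.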
Another potential solution to the challenge of monitoring black box AI systems is to focus on embedding ethical and moral principles into the AI itself. This approach is often referred to as “ethical AI” or “AI ethics.”
The European Union has taken a leadership role in this area, with the release of its Ethics Guidelines for Trustworthy AI in 2019. These guidelines are based on a set of seven key principles for AI, including transparency, accountability, and human oversight. The guidelines are intended to help ensure that AI is developed and used in a way that is ethical, transparent, and aligned with human values.
By focusing on embedding ethical principles directly into the AI system, this approach can potentially provide a more proactive way to monitor and regulate AI. Instead of simply monitoring the inputs and outputs of the system, we can help ensure that the system is designed from the outset to be transparent and aligned with human values. This can potentially reduce the risk of unintended consequences or unethical behavior by the AI system.
Other Challenges in AI Systems
Of course, there are also challenges to this approach. One challenge is that it can be difficult to agree on what ethical principles should be embedded in the AI system, particularly given the wide range of cultural and societal values around the world. Additionally, there may be practical challenges in implementing these principles in practice, particularly given the complexity of many AI systems. However, the European Union’s efforts in this area represent an important step towards developing a more ethical and accountable approach to AI.
In conclusion, the potential applications of AI are vast and expanding rapidly. From revolutionizing healthcare to transforming transportation, the possibilities are seemingly endless. However, with this great power comes great responsibility, and careful consideration must be given to the ethical implications of AI. While there are certainly risks and challenges to be addressed, it is clear that the benefits of AI are too significant to ignore.
It is up to us to continue exploring the vast potential of AI while ensuring that it is used ethically and responsibly. There are countless opportunities and solutions that are yet to be discovered, and the path forward is not always clear. Nonetheless, we must continue to push the boundaries of what is possible and work together to create a future where AI is used for the betterment of humanity.
This Article is Written by Miss. Pranjali Raghuwanshi and Bulbul Vaghela, 4th Year Law Students from Institute of Law, Nirma University, Ahmedabad.
Neil Savage, 'Breaking into the Black Box of Artificial Intelligence' (2022) https://www.nature.com/articles/d41586-022-00858-1 accessed 27 February 2023
Terence Shin, 'Real-life Examples of Discriminating Artificial Intelligence' (2020) https://towardsdatascience.com/real-life-examples-of-discriminating-artificial-intelligence-85571909701c accessed 20 February 2023
Jeffrey Dastin, 'Amazon scraps secret AI recruiting tool that showed bias against women' (2018) https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G accessed 28 February
Alvaro Bedoya, 'The Perpetual Line-Up: Unregulated Police Face Recognition in America' (2015) 128(6) Harvard Law Review 1757 https://harvardlawreview.org/2015/04/the-perpetual-line-up-unregulated-police-face-recognition-in-america/
European Commission, High-Level Expert Group on Artificial Intelligence, 'Ethics Guidelines for Trustworthy AI' (8 April 2019) https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai