The Inconvenient Truth: Artificial Intelligence Will Always Possess a Bias Unless Proven Otherwise

Peeling back the layers of bias that exist in AI can be tricky.

In 2014, 22-year-old Robert McDaniel received a strange and unexpected knock on his front door. To his surprise, an officer from the Chicago Police Department was standing outside with a chilling message for the young man: “If you commit any crimes, there will be major consequences. We are watching you.”

Stranger still, McDaniel had no criminal record, nor had he ever had a violent altercation with the police. So why had this officer shown up at his house to deliver such a pointed warning? The answer is both simple and concerning.

Artificial intelligence. It is why more and more cases like this are occurring in real life, not just in dystopian sci-fi films. AI powers the CPD’s “Heat List,” on which McDaniel’s name had appeared alongside some 400 other “potential criminals” who could be monitored and tracked using cutting-edge data science.

But stories like McDaniel’s highlight a key caveat to relying solely on AI for decision-making: despite popular belief, the data that AI learns from carries inherent bias, so it stands to reason that AI itself should be treated as biased until proven otherwise.

Data science specialist May Masoud (SAS Canada) has conducted extensive research into the intersection of ethics and AI. In her recent talk on DotsLive, she discusses not only the McDaniel story but also how the lack of proper AI implementation, along with “taking the burden [of decision making] away from humans and placing it in a system of power,” will ultimately lead to vast oversights and only further reinforce racial, gender, and sexuality discrimination.

But we cannot solve the intricate problem of AI simply by ceasing to use it. It is estimated that by 2021, 70% of businesses will integrate AI to assist employee productivity, not to replace employees but to enhance their work. AI enhancement and augmentation is also projected to generate about $2.9 trillion a year in value, and 72% of résumés are already screened by AI, so phasing AI out of daily life is not going to be feasible.

Pictured: May Masoud during her in-depth presentation on DotsLive

Masoud instead gives us three “gotchas,” or pain points, that organizations can focus their efforts on to ensure that their AI algorithms and models are constantly automated, retrained, and updated to reflect modern-day ethics.

Epistemic — The Black Box Problem

Truths can often seem absolute, and humans struggle to rationalize where our own biases originate, which makes course-correction difficult. AI should therefore transparently explain how it processes information, so that the inner workings of its algorithms are not only documented but, more importantly, understood. Providing natural-language explanations that even a layperson can follow is key to implementing change effectively.
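
To make the idea concrete, here is a minimal sketch of turning a model’s most influential features into plain-language sentences. This is not Masoud’s method or any SAS tooling; the dataset, the simple classifier, the use of permutation importance, and the wording of the explanations are all illustrative assumptions.

```python
# A minimal, illustrative sketch: train a simple classifier and translate its
# most influential features into sentences a non-specialist can read.
# The dataset, model, and phrasing are assumptions for demonstration only.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# A deliberately simple, inspectable model rather than an opaque one.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Permutation importance estimates how much each feature drives predictions.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0)

# Translate the top drivers into plain language for a non-technical audience.
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:3]:
    print(f"The model relies heavily on '{name}' (importance ~{score:.3f}) "
          f"when deciding how to classify a case.")
```

In practice, dedicated explainability tools such as SHAP or LIME play this role for more complex models, but the goal is the same: a reader without a data science background should be able to see why the model decided what it did.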

Normative — Misguided Action

If inherently biased and problematic information is funneled into AI algorithms, those biases become normalized and built into the models themselves. Even if the data is ethical, the human element can still lead the system to make unethical decisions. The controllable variable here is the power of decision making, which should ultimately be guided by ethics. But this alone is not enough; automation streamlines the process considerably. Automating model management ensures that data degradation is caught immediately and that human bias is kept out of the equation. Thorough analysis of demographics, and of who is directly impacted by the algorithm, is of utmost importance, as sketched below.
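
To illustrate what such a demographic impact check might look like, here is a minimal sketch run on invented data. This is not the CPD’s or SAS’s tooling; the group labels, the “flagged” outcome, and the numbers themselves are hypothetical assumptions.

```python
# A minimal sketch of a demographic impact audit on hypothetical model output.
# The group labels, the "flagged" outcome, and the data are assumptions.
import pandas as pd

# Hypothetical decisions produced by a model, joined with demographic data.
decisions = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged": [1,    0,   0,   1,   1,   1,   0,   1],
})

# Selection rate: the share of each group the model flags.
rates = decisions.groupby("group")["flagged"].mean()
print(rates)

# Compare groups directly. For an adverse outcome like being flagged,
# a large gap between groups is a warning sign worth investigating.
gap = rates.max() - rates.min()
ratio = rates.min() / rates.max()
print(f"Rate gap between groups: {gap:.2f}, ratio: {ratio:.2f}")
```

In a real deployment, a check like this would run automatically every time the model is retrained, alongside data drift monitoring, rather than as a one-off script.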

Traceability — Who is Accountable?

To mitigate issues such as gender-based discrimination in hiring, holding the responsible parties accountable will help keep these situations from occurring. On a more proactive level, a strict and vetted code of conduct based on analytical research, an ethics committee with diverse representatives from across the organization, and developers with different perspectives and lived experiences will be essential in pointing out flaws and biases in AI models and algorithms.

Mindful over Fearful

It is easy to simply write off AI as dangerous and fear the technology itself. Films like Captain America: The Winter Soldier and Minority Report instill a terrifying notion of AI’s inherently negative impact on our world. But as Masoud stated, “fear is rooted in ignorance.” If the time is taken to learn the proactive strategies organizations can use to create ethical, sustainable, and “green” AI, then perhaps the errors of the past can be corrected, and maybe even eradicated, in the future.

To catch the full presentation and the in-depth Q&A that followed, you can watch it for free exclusively on DotsLive: https://beta.dotslive.com/#/replay/oA8GZhKi1oQ
