AI Can Be Made Legally Accountable for Its Decisions

In the near future, as Artificial Intelligence (intelligence displayed by machines) spreads and begins to autonomously make important decisions that impact our daily lives, the question of its accountability under the law will arise.

AI systems are expected to justify their decisions without revealing all their internal secrets, so as to protect the commercial advantage of AI providers. Moreover, mapping inputs and intermediate representations in AI systems to human-interpretable concepts is a challenging problem, because these systems tend to work as black boxes.

As such, explanation systems should be considered distinct from the AI systems themselves.
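To make the distinction concrete, here is a minimal sketch of a post-hoc explanation system that treats the decision-making model as a black box: the explainer only queries the model and never inspects its internals. The `loan_model` function and its weights are purely hypothetical, chosen for illustration; the technique sketched is simple perturbation-based feature importance.

```python
def loan_model(features):
    """Hypothetical black-box scorer: returns an approval score in [0, 1].
    The explainer below never looks inside this function."""
    income, debt, history = features
    score = 0.5 * income - 0.3 * debt + 0.2 * history
    return max(0.0, min(1.0, score))

def explain(model, features, baseline=0.0):
    """Perturbation-based importance: replace each feature with a
    baseline value and record how much the model's score changes.
    Only query access to the model is required."""
    base_score = model(features)
    importances = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] = baseline  # knock out one feature at a time
        importances.append(base_score - model(perturbed))
    return importances

applicant = [0.9, 0.2, 0.8]  # normalized income, debt, credit history
print(explain(loan_model, applicant))  # approximately [0.45, -0.06, 0.16]
```

Because `explain` depends only on the model's input/output behavior, the same explainer could be attached to any decision system, which is exactly why explanation can live in a separate component from the AI itself.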

This paper, written by researchers at Harvard University, highlights some interesting aspects of this debate and shows that the problem is by no means straightforward.

Published by

Davide Madrisan

Linux Developer, DevOps & Automation Engineer. Passionate about Cloud, Web, and Data Science.
