Honestly the more I learn about machine learning, the more it scares me that it's actually used in real life for things of actual consequence

To elaborate: inevitably when people read Tweets like this from me, I get reductive questions about humans that are not really useful for the conversation, or I get people assuming I am against machine learning in general, which is not true

As for "but aren't humans just as bad," my answer is a solid no, not just because humans can explain their decisions and understand one another, but more importantly because we have systems in place to hold humans accountable for their decisions and actions

Moreover, typically when machine learning systems replace human agency, what this really means is a single, opaque, poorly understood, for-profit tool replacing a lot of humans making individual decisions. This is centralization of power

When a machine learning tool replaces a system of humans making decisions, there are often no checks and balances within the tool making sure things that go wrong in one place don't cause catastrophe elsewhere

Let alone checks and balances by independent agents held accountable for their decisions and actions

When people told me early on that "nobody understands how [modern deep learning systems] really work," I thought they were exaggerating. The more I learn, the more I realize they are not at all exaggerating, and that's terrifying

In a context where little harm can be done, or where there are extremely reliable external checks one can apply (say, synthesizing proofs which can be checked by a small logical kernel), or where a human is in the loop and the system presents itself as untrustworthy, it's OK

But very often humans are not in the loop, or they are, but the interface misleads humans into fully trusting the tool by default, and the developers take no responsibility for humans doing that. And very often there are no external reliable checks to catch the tool causing harm

And very often these tools are deployed into contexts where the tool can cause extremely serious harm if it misbehaves. All of that together scares me a lot more now that I have learned much more about these systems than I used to know

Originally tweeted by Talia Ringer (@TaliaRinger) on 26 December 2021.
