Programs as Agents – Repost from Samir Chopra

Today I got involved in a Facebook debate about Google scanning people’s email and tipping off the authorities when they found questionable images of a child. My grandfather would often give me the sage advice, “Never get into internet debates about pedophiles.” He really was a wise man. And I am a fool.

I think Google is super smart, and not just because they actually know everything about me or because they are figuring out how to live forever. Google is smart because they gave us a moral reason to support their scanning of our emails. In fact, by taking on sex offenders they gave us a reason to demand that they scan our emails.


Here is what I wrote in my Facebook debate (sorry Grandpa):

“Of course protecting children from sexual predators should be a top priority for all of us. We should all take action to stop it, root out its causes, and take responsibility for ending it. And Google has now convinced you that scanning all of your email and doing god knows what with the information they gather is worthwhile because they helped catch a predator.

While it is uncomfortable to question Google’s actions because those actions are so commendable in this case, it is important to do so. I have real questions about what they do with the data and how they use it for their own purposes. As the article I cited above [here] asks, how would we feel if Facebook started swaying elections through its algorithms? Why aren’t we as upset that Google dominates how you experience the expanded world of information? What impact does that have on your world view? On the actions you would take? It isn’t only about surveillance; it is about control and freedom, and what we are willing to trade in relation to each.”

This debate happened on the awesome Samir Chopra’s feed. And to show how awesome he is, I am re-blogging a post he published back in June about programs (and algorithms) as agents and people.

Samir Chopra

Last week, The Nation published my essay “Programs are People, Too”. In it, I argued for treating smart programs as the legal agents of those who deploy them, a legal change I suggest would be more protective of our privacy rights.

Among some of the responses I received was one from a friend, JW, who wrote:

[You write: But automation somehow deludes some people—besides Internet users, many state and federal judges—into imagining our privacy has not been violated. We are vaguely aware something is wrong, but are not quite sure what.]
 
I think we are aware that something is wrong and that it is less wrong. We already have an area of the law where we deal with this, namely, dog sniffs. We think dog sniffs are less injurious than people rifling through our luggage; indeed, the law refers to those sniffs as “sui generis.” And I…

