John Danaher interview – Robot Sex: Social and Ethical Implications

Gigolo Jane and Gigolo Joe robots in the film A.I.

Via Philosophical Disquisitions.

Through the wonders of modern technology, Adam Ford and I sat down for an extended video chat about the new book Robot Sex: Social and Ethical Implications (MIT Press, 2017). You can watch the full thing above or on YouTube. Topics covered include:

  • Why did I start writing about this topic?
  • Sex work and technological unemployment
  • Can you have sex with a robot?
  • Is there a case to be made for the use of sex robots?
  • The Campaign Against Sex Robots
  • The possibility of valuable, loving relationships between humans and robots
  • Sexbots as a social experiment

Be sure to check out Adam’s other videos and support his work.

Autumnal AI links

Facial tracking system, showing gaze direction, emotion scores and demographic profiling

Another blogpost where I’m just gonna splurge some links cos they’re just sitting as open tabs in my browser and I may as well park them and share them at the same time, in case anyone else is interested…

(If you’re somehow subscribed to this blog and don’t like this, let me know and I’ll see if I can set up another means of doing this… I used to use del.icio.us, remember that?!)

Here are some A.I. things from my browser, then:


Adversarial attacks on machine learning

There’s been quite a bit of chat about the ways particular kinds of neural nets used in machine vision systems are vulnerable to techniques that either trick a trained system into mis-recognising images or poison the training process so that mis-recognition is learned in the first place.

danah boyd made this part of her public talks earlier this year, drawing upon a ‘shape bias’ study by Google researchers. Two recent overview pieces on The Verge and Quartz are accessible ways into such issues too.

Other stories on news sites (e.g.) have focussed on the ways machine vision systems that could be used in ‘driverless’ cars for recognising traffic signs can be ‘fooled’, drawing upon another study by researchers at four US institutions.

Another story doing the rounds has been a 3D-printed model of a turtle that was used to fool what is referred to as “Google’s object detection AI” into classifying it as a gun. Many of these accounts start with the same paper boyd cites, move on to discuss work such as the ‘one pixel’ hack for confusing neural nets by researchers at Kyushu University, and then discuss a paper on the 3D-printed turtle model as an ‘adversarial object’ by researchers at MIT.

A Facebook spokesperson says the company is exploring how to secure its systems against adversarial examples, as shown by a research paper published in July 2017, but it apparently hasn’t yet implemented anything. Google, where a number of the early ‘adversarial’ examples were researched, has apparently declined to comment on whether its APIs and deployed ‘AI’ are secured, but researchers there have recently submitted conference papers on the topic.

A reasonable overview of this kind of research is available in Dave Gershgorn’s piece for Popular Science: “Fooling The Machine”. Artist James Bridle (who else?!) has also written and made some provocative work in response to these kinds of issues, e.g. Autonomous Trap 001 and Austeer.
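To give a flavour of how the simplest of these attacks works, here is a minimal, hypothetical sketch of the ‘fast gradient sign method’, a well-known technique from the adversarial examples literature (not necessarily the one used in the papers linked above), written in Python/PyTorch. The model, image tensor and label are placeholders; the point is just that nudging each pixel slightly in the direction that increases the classifier’s loss can be enough to flip its prediction.

```python
# Sketch only: assumes a trained PyTorch image classifier `model`, an input
# `image` tensor of shape (1, C, H, W) with values in [0, 1], and the correct
# `true_label` as a LongTensor of shape (1,). Names are illustrative placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` that the model is more likely to misclassify."""
    model.eval()
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                        # forward pass
    loss = F.cross_entropy(logits, true_label)   # loss w.r.t. the correct label
    loss.backward()                              # gradient of the loss w.r.t. each pixel
    # Step every pixel a small amount in the direction that *increases* the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()      # keep pixel values in a valid range
```

The perturbation is typically small enough to be imperceptible to a human viewer, which is what makes the traffic-sign and ‘adversarial object’ examples above so striking.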


Biases and ethics of AI systems

There’s, of course, tons of writing on the ways biases are encoded into ‘algorithms’ and software, but a little more attention to this sort of thing in relation to AI has been appearing in my social media stream this year…

Vice’s Motherboard covered a story concerning the ways in which a sentiment analysis system by Google appeared to classify statements about being gay or Jewish as ‘negative’.
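Stories like this essentially rest on a simple kind of audit: feed a sentiment model sentences that differ only in an identity term and compare the scores. Here is a minimal, hypothetical sketch of that sort of probe; the `score_sentiment` function is a stand-in for whichever model or API is being audited (it is not Google’s actual API) and is assumed to return a score from −1.0 (negative) to +1.0 (positive).

```python
# Sketch only: `score_sentiment` is a hypothetical callable standing in for the
# sentiment model under audit; templates and terms are illustrative.
TEMPLATES = ["I am {}.", "My best friend is {}.", "My neighbours are {}."]
IDENTITY_TERMS = ["gay", "straight", "Jewish", "Christian", "tall"]

def probe_identity_bias(score_sentiment):
    """Print the mean sentiment score each identity term receives across the templates."""
    for term in IDENTITY_TERMS:
        scores = [score_sentiment(template.format(term)) for template in TEMPLATES]
        print(f"{term:>10}: {sum(scores) / len(scores):+.2f}")
```

If otherwise identical sentences score systematically lower for some identity terms than others, that is the kind of encoded bias the story describes.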

Sky News covered a story about apparent erroneous arrests at the Notting Hill Carnival this year (2017), allegedly caused by facial recognition systems.

An interesting event at the Research and Development department at Het Nieuwe Instituut addressed ‘the ways that algorithmic agents perform notions of human race’. The Decolonising Bots event included Ramon Amaro, Florence Okoye and Legacy Russell.


The Financial Stability Board has an interesting report out, Artificial intelligence and machine learning in financial services, which seems well worth reading.


Defending corporate R&D in AI

Facebook’s head of AI is fed up with the negative, or apocalyptic, references used for describing AI, e.g. The Terminator. It’s not just a whinge; there’s some interesting discussion in this interview on The Verge.

Technology policy pundit Andrea O’Sullivan says the U.S. needs to be careful not to hamstring innovation by letting ‘the regulators ruin AI‘.


Finally, the British Library has an event on Monday 6th November called “AI: Where Science meets Science Fiction”, which may or may not be interesting… it will apparently be live-streamed.

Automation as received wisdom

Holly from the UK TV programme Red Dwarf

For your consideration – a Twitter poll in a sponsored tweet from one of the UK’s largest management consultancies.

Why might a management consultancy do this? To gain superficially interesting yet fatuous data from which to make quick claims? Perhaps for advertising purposes? Maybe… Perhaps to suggest, in a somewhat calculating way, that the company asks the “important” questions about the future, and thereby imply it has some answers? Or maybe simply to boost the now-prevailing narrative that automation is widespread, growing and will take your job. Although, to be fair to Accenture, that’s not what they ask.

In any case, this is not neutral – though I recognise it’s a rather minor and perhaps inconsequential example. Nevertheless, it highlights the growing push of an automation narrative by management consultancies like Accenture, Deloitte and PwC, which are all writing lots of reports suggesting that companies need to be ready for automation.

A cynical analysis would suggest that it’s in the interests of such companies to jump on the narrative (it’s been in the press quite a bit in recent years), ramp it up, and offer to sell the ‘solutions’.

What I find particularly interesting is that, while newspaper articles parrot the reports from these consultancies, there appears to be (in my digging around) scant serious evidence for this trend. A lot of it is based on economic modelling (of both past and future economic contexts), and some of the reports, when they do list their methods, seem to use adapted versions of models that once said something else.

A case in point is the recent PwC report on automation, widely reported in the press (e.g. BBC, Graun, Telegraph), which claimed that up to 30% of UK jobs could be automated by the early 2030s. That claim was based upon a modified version of a (2016) OECD model – the original model suggested that only 9% of jobs in OECD countries were at relatively high risk of automation (greater than 70% risk in their calculation), with the UK rated at just over 10% of jobs.

I’m working my way through this sort of stuff to get at how these sorts of narratives are generated, become received wisdom and feed into a wider form of social imagination about the kinds of socio-economic and technological future we expect. I’m hoping to pull together a book on this theme with the tentative title “The Automative Imagination”.

The Economist ‘Babbage’ podcast: “Deus Ex Machina”

Glitched still from the film "Her"

An interesting general (non-academic, non-technical) discussion about what “AI” is, what it means culturally and how it is variously thought about. Interesting to reflect on the way ideas about computation, “algorithms”, “intelligence” and so on play out… something that maybe isn’t discussed enough… I like the way the discussion turns around “thinking” and the suggestion of the word “reckoning”. Worth a listen…

AI Now report

My Cayla Doll

The AI Now Institute have published their second annual report with plenty of interesting things in it. I won’t try and summarise it or offer any analysis (yet). It’s worth a read:

The AI Now Institute, an interdisciplinary research center based at New York University, announced today the publication of its second annual research report. In advance of AI Now’s official launch in November, the 2017 report surveys the current use of AI across core domains, along with the challenges that the rapid introduction of these technologies are presenting. It also provides a set of ten core recommendations to guide future research and accountability mechanisms. The report focuses on key impact areas, including labor and automation, bias and inclusion, rights and liberties, and ethics and governance.

“The field of artificial intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” said Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “But the reason we founded the AI Now Institute is that we urgently need more research into the real-world implications of the adoption of AI and related technologies in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts. With this report, we’re taking stock of the progress so far and the biggest emerging challenges that can guide our future research on the social implications of AI.”

There’s also a sort of exec summary, a list of “10 Top Recommendations for the AI Field in 2017”, on Medium. Here’s the short version of that:

  1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g. “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems.
  2. Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.
  3. After releasing an AI system, companies should continue to monitor its use across different contexts and communities.
  4. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR.
  5. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle.
  6. Expand AI bias research and mitigation strategies beyond a narrowly technical approach.
  7. Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed.
  8. Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development.
  9. The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power.
  10. Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms.

Which sort of reads, to me, as: “There should be more social scientists involved” 🙂

Brian Cox, cyberpunk

Man with a colander on his head attached to electrodes

Doing public comms of science is hard, and it’s good to have people trying to make things accessible and to excite people’s interest in finding things out about the world… but it can tip over into being daft pretty easily.

Here’s the great D:ream-er Brian Cox going all cyberpunk on brain/mind uploads… (note the lad raising his eyes to the ceiling at 0:44 🙂 )

This made me wonder how Hubert Dreyfus would attempt to dispel the d:ream (don’t all groan at once!), since the ‘simulation of brains/minds’ is precisely the version of AI that Dreyfus was critiquing in the 1970s. If you’re interested in further discussion of ‘mind uploading’, rather than my flippant remarks, see John Danaher’s writing on this on his excellent blog.

AI Now post-doc positions

Holly from the UK TV programme Red Dwarf

This looks like a great opportunity for someone interested in the sorts of things the “AI Now” Institute (NYU) does. Link.

AI Now is looking for two to three postdoctoral researchers whose work resonates with the Institute’s mission. This position is an ideal opportunity for scholars who are interested in understanding the growing role of AI and related technologies within social and political institutions, and who are excited by the idea of shaping a new and far-ranging research field.

ABOUT THE AI NOW INSTITUTE

The AI Now Institute at New York University is an interdisciplinary community researching the social and economic implications of artificial intelligence and related algorithmic systems. We focus on producing foundational research illuminating the social contexts as automated decision-making moves deeper into core institutions like health, education, and criminal justice.

Founded in 2017 by Kate Crawford and Meredith Whittaker, AI Now is housed at NYU, where it fosters vibrant interdisciplinary engagement across the University and beyond. AI Now’s current partners at NYU include: the Tandon School of Engineering, the Steinhardt School of Culture, Education, and Human Development, the Law School, the Stern School of Business, and the Center for Data Science.

AI NOW POSTDOCTORAL FELLOWSHIPS

As an interdisciplinary institute at NYU, AI Now will provide postdocs with the opportunity to develop their scholarship at a top academic institution with an explicit remit to collaborate with researchers and practitioners across different fields, whether in NYU, at partner institutions, or within relevant industry and civil society organizations. AI Now has a strong network across these sectors, and will make this network available to postdocs where relevant and useful.

Postdocs will devote time to their own research and collaborative projects and will contribute to AI Now programs and events related to their research portfolio. Teaching is not expected, but may be an option, depending on a candidate’s availability and interest.

AI Now is committed to mentorship and support and to accommodating and resourcing research agendas that fit within its core mission. Postdocs will become a core part of a growing research community that includes reading groups, expert workshops, international conferences, regular salons, and site-specific travel. Fellows will also have the opportunity to help shape the annual AI Now Symposium.

RESOURCES AND BENEFITS

  • Competitive salary and benefits
  • Access to an exceptional network of mentors and established researchers spanning NYU and beyond, including civil society and industry practitioners
  • A generous research stipend for conferences (including international), fieldwork, and research materials, available as needed
  • Relocation assistance available as needed

See the full text of the call here.

Reblog> Idols of Silicon and Data

Deep Thought, The Hitchhiker’s Guide to the Galaxy

From LM Sacasas:

Idols of Silicon and Data

In 2015, former Google and Uber engineer, Anthony Levandowski, founded a nonprofit called Way of the Future in order to develop an AI god and promote its worship. The mission statement reads as follows: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

A few loosely interconnected observations follow.

Read the full post.

“Invisible Images: Ethics of Autonomous Vision Systems” Trevor Paglen at “AI Now” (video)

racist facial recognition

Via Data & Society / AI Now.

Trevor Paglen on ‘autonomous hypernormal mega-meta-realism’ (probably a nod to Curtis there). An entertaining brief talk about ‘AI’ visual recognition systems and their aesthetics.

(I don’t normally hold with laughing at your own gags but Paglen says some interesting things here – expanded upon in this piece (‘Invisible Images: Your pictures are looking at you’) and this artwork – Sight Machines [see below]).

19 ‘AI’-related links

Twiki the robot from Buck Rogers

Here are some links from various sources on what “AI” may or may not mean and what sorts of questions that prompts… If I were productive and not sleep-deprived (if… if… etc. etc.) I’d write something about this, but instead I’m just posting links.