
Latest news

The menace of monolingualism

By mjg209 from University of Cambridge - School of Arts and Humanities. Published on May 18, 2018.

Is monolingualism harming us, both as individuals and as a society? Wendy Ayres-Bennett, Professor of French Philology and Linguistics, is leading a major interdisciplinary research project which looks at the value of languages for everything from health and well-being to social cohesion, diplomacy and conflict resolution.

The MEITS project (Multilingualism: Empowering Individuals, Transforming Societies) is funded by the Arts and Humanities Research Council's Open World Research Initiative and seeks to transform the health of the discipline of Modern Languages in the UK, as well as attitudes towards multilingualism and language policy at home and abroad. The motivation for the project comes from an awareness that language learning in the UK is in a very difficult state. “There is a sense that modern languages are in crisis,” says Professor Ayres-Bennett, “and that traditional motivations to get people studying languages are not working. We need exciting new reasons to learn languages and to demonstrate the value of speaking more than one language.”

The project, which finishes in 2020, involves around 30 non-academic partners including schools and voluntary groups and has six interlocking research strands which investigate how the insights gained from stepping outside a single language, culture and mode of thought are vital to individuals and societies.

Professor Ayres-Bennett will speak about three areas of the research in a talk at the Hay Festival for the Cambridge Series, now in its 10th year. The first involves health and builds on research showing that dementia onset in bilingual people is delayed by up to five years on average compared with monolingual people, and that bilingual stroke victims recover cognitively twice as well as monolingual ones. What is more exciting, says Professor Ayres-Bennett, is that even those who learn a language later in life can enjoy certain cognitive benefits. One experiment conducted as part of the project involved a group who learnt Gaelic intensively for a week and were monitored to see if there was any impact on their cognitive abilities. The results were positive. “The kind of mental gymnastics that learning a language involves is good for us and for our ageing society. They help us to stay mentally active a bit longer,” says Professor Ayres-Bennett. “It’s a benefit that is little known, but learning a language is better than any drug currently available for delaying dementia.”

A second area she will speak about is how languages can bring people together and create greater social cohesion. Language is at the heart of some of the current political problems in Northern Ireland, with Irish tending to be viewed with suspicion by the Protestant-Unionist-Loyalist (PUL) community. The MEITS project has been working with two charities in Northern Ireland to enhance understanding between the Catholic and Protestant communities. It has been teaching former paramilitaries and future PUL leaders basic Irish. Professor Ayres-Bennett says: “The Irish language doesn’t have to be associated with sectarianism; the aim is to normalise it and show how it is part of everyone’s culture. In addition, demonstrating the origins of Irish place names can show that Irish is part of PUL heritage as well.”

The third area she will touch on involves the work the project is doing with a number of schools in London and East Anglia to change attitudes to languages. It is comparing language learning for children who are monolingual and started learning a language at school with those who have English as an additional language. The students are being tracked over a two-year period. “We want children to value the languages they speak and schools to think consciously about what it means to be multilingual and to see children with more than one language as a resource rather than an inconvenience,” says Professor Ayres-Bennett. She mentions one Polish student who, because he was just starting to learn French, placed himself near the monolingual end of a scale asking children to consider how multilingual they were. “He didn’t value his ability to speak Polish. We need to get away from the hierarchy of good and bad languages,” she states. She adds that looking at multilingualism in a positive way improves social cohesion in the classroom as well as potentially improving students’ motivation for learning and their proficiency.

The MEITS project’s findings will be widely disseminated with the aim of raising awareness of all the different areas of policy which language learning affects. “Language is so central to who we are, to our identities, that it has to have a higher profile across all government departments,” says Professor Ayres-Bennett.

Professor Wendy Ayres-Bennett will speak at the Hay Festival about her research into the health and social benefits of multilingualism.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.


‘The greatest director in the world right now’ begins residency at Centre for Film and Screen

By sjr81 from University of Cambridge - School of Arts and Humanities. Published on May 04, 2018.

Lucrecia Martel comes to the Centre as this year’s Filmmaker in Residence from 5-20 May, following in the footsteps of Gianfranco Rosi (2017) and Joanna Hogg (2016).

A retrospective of her feature films, the first to be held in the UK, has been jointly organised by the Centre for Film and Screen and the Arts Picturehouse. Martel will be present following each screening for conversation and Q&A.

Martel, who lives and works in Argentina, is one of international cinema’s major stylists. Her provocative films treat questions of family, childhood, sexuality, belonging, nation, class, historical memory, and colonialism. In a cinema that is both sensually immersive and politically attuned, Martel looks at the world in a way that acknowledges mystery and prompts criticism.

Dr John David Rhodes, Director of the Centre for Film and Screen, said: “The residencies offer our students, staff and our community both inside and outside the University the opportunity to engage with serious filmmakers of the highest order, all of them crucially important figures in the unfolding history of contemporary cinema.

“The residencies also offer the filmmakers the opportunity to develop and reconsider their practices in the context of the vibrant scholarly and intellectual ecology that is unique to Cambridge.”

Described by Vogue as ‘the greatest director in the world right now’, Martel is the director of four acclaimed films and a number of award-winning shorts. Almost a decade after her last full-length feature film, Martel returned in 2017 with the critically lauded Zama.

Based on the 1956 novel by Antonio Di Benedetto, the film is a period drama relating the story of an 18th-century Spanish officer, separated from his wife and family, and awaiting a transfer from a remote area of Paraguay to Buenos Aires.

Shining a light on colonialism and class dynamics, the film won almost universal acclaim from film critics in South America, and was chosen as Argentina’s entry for Best Foreign Language Film at the 2018 Academy Awards.

Martel will be resident at the University’s Centre for Film and Screen for more than two weeks, during which she will be offering a sequence of seminars on her filmmaking practice.

 

Symposium

18 May, 10am-4pm, McCrum Lecture Theatre, Corpus Christi College.

Speakers: Lucy Bollington (Cambridge), Catherine Grant (Birkbeck), Rosalind Galt (KCL), Debbie Martin (UCL). 

Full details - TBC

 

Screenings

The screenings will all be held at the Cambridge Arts Picturehouse.

Tuesday 8 May at 6pm - The Swamp (La Ciénaga)

Thursday 10 May at 6pm - The Holy Girl (La niña santa)

Tuesday 15 May at 6:30pm - The Headless Woman (La mujer sin cabeza)

Thursday 17 May at 6pm - Zama

 

One of Argentina’s and Latin America’s pre-eminent filmmakers begins a 16-day residency at Cambridge’s Centre for Film and Screen from tomorrow (May 5).



Blood and bodies: the messy meanings of a life-giving substance

By amb206 from University of Cambridge - School of Arts and Humanities. Published on May 03, 2018.

What is blood? Today we understand this precious fluid as essential to life. In medieval and early modern Europe, definitions of blood were almost too numerous to count. Blood was simultaneously the red fluid in human veins, a humour governing temperament, a waste product, a cause of corruption, a source of life and a medical cure.

In 1628, William Harvey, physician to James I and alumnus of Gonville & Caius College, made a discovery that changed the course of medicine and science. As the result of careful observation, he deduced that blood circulated around the body. Harvey’s discovery not only changed the way blood was thought to relate to the heart but revolutionised early science by demanding that human physiology be examined through empirical observation rather than philosophical discourse.

This turning point, and its profound repercussions for ideas about blood, is one of many strands explored in Blood Matters: Studies in European Literature and Thought, 1400-1700. A collection of essays edited by Bonnie Lander Johnson (English Faculty, Cambridge University) and Eleanor Decamp, it examines blood from a variety of literary, historical and philosophical perspectives.

“The strength of the collection is that, in a series of themed headings, it brings together scholarship on blood to bridge the conventional boundaries between disciplines,” says Lander Johnson. “The volume includes historical perspectives on practical uses of blood such as phlebotomy, butchery, alchemy and birth. Through literary approaches, it also examines metaphoric understandings of blood as wine, social class, sexual identity, family, and the self.”

Contributors include several Cambridge academics. Hester Lees-Jeffries (English Faculty) writes about bloodstains in Shakespeare (most notably, of course, in Macbeth) and early modern textile culture. Heather Webb (Modern and Medieval Languages) looks at medieval understandings of blood as a spirit that existed outside the body, binding people and communities together. Joe Moshenska (English Faculty) examines the classical literary trope of trees that bleed when their branches are broken.

“The idea for the book came from my previous work on chastity. I was struck that early modern writing about the body is all about fluids, especially blood. Blood was perceived as the vehicle for humours, the essence of being and the spirit – and something that could flow between people,” says Lander Johnson.

“I became fascinated by the fact that we use this word all the time but we have no real sense of what we mean. Our predecessors used it even more frequently and yet there was no scholarship that could help me to begin to understand how many things blood meant for them. A conference at Oxford in 2014 brought together a group of people working in related fields. The book reflects the excitement of those three days.”

Definitions of blood in Western European medical writing during the period covered by the book are changeable and conflicting. “The period’s many figurative uses of ‘blood’ are even more difficult to pin down. The term appeared in almost every sphere of life and thought and ran through discourses as significant as divine right theory, doctrinal and liturgical controversy, political reform, and family and institutional organisation,” says Lander Johnson.

“Blood, of course, was at the centre of the religious schism that split 16th-century society. The doctrinal dispute over transubstantiation caused ongoing disagreements over the degree to which the bread and wine taken during Mass were materially transformed into the body and blood of Christ, or were merely symbolic.”

The role of blood in sex and reproduction meant that it was routinely described as a force capable of both generation and corruption. Menstrual blood is a case in point. Menstruation was seen as a vital and purifying process, part of a natural cycle essential to human life. But menstrual blood and menstruating women were also thought to be corrupting.

In Shakespeare’s plays, blood makes many appearances, both spoken and staged, from bleeding wounds to the rebellious ‘high’ blood of youth. Lander Johnson examines Romeo and Juliet’s love affair in the light of early modern beliefs about weaning and sexual appetites.

“Writing about birth and infancy reveals that early moderns were as anxious about their children’s health as we are but for them the pressing questions were: should I breastfeed my baby myself or give it to a wet nurse? How and when should I wean it to food? What sort of food?” she says.

“The wrong decision at this early stage of life could have a fatal outcome and was thought to not only form the child’s blood in either a healthy or corrupted state but also to shape the child’s moral appetites for the rest of their lives.”

Blood is synonymous with family and, in elite circles, with dynasty. Contributor Katharine Craik (Oxford Brookes University) explores character and social class through references to blood in Shakespeare’s Henry IV and Henry V. In these plays about warfare and the relationships between royalty and common men, blood is often a substance that eliminates the differences between soldiers who die together in arms, their blood mingling in the dirt of the battlefield.

“Frequently these same descriptions turn into assertions of an essential difference between aristocratic and vulgar bloods,” says Lander Johnson. “Shakespeare is particularly inventive at building character through distinctions of this kind.”

In contrast, Ben Parsons (Leicester University) looks at blood and adolescence in the context of the medieval classroom where ‘too much blood’ was understood to cause wild and unruly behaviour. Medieval pedagogues were concerned about how the ‘full blood’ of students ought to be managed through the kind of material they were asked to read and when, the sort of food they ate while learning, and the style of punishment administered to those who were inattentive.

Blood Matters makes a valuable contribution to the history of the body and its place in literature and popular thought. It draws together scholarship that offers insight into both theory and practice during a period that saw the beginnings of empiricism and an overturning of the folklore that governed early medicine.

Today's scientists understand blood as a liquid comprising components essential to good health. But English remains a language peppered with references to blood that hint at our conflicted relationship with a liquid vital to human life.

A collection of essays explores understandings of a vital bodily fluid in the period 1400-1700. Its contributors offer insight into both theory and practice during a period that saw the start of empiricism and an overturning of the folklore that governed early medicine.

Detail from William Harvey's De motu cordis (experiment confirming direction of blood flow)


Living with artificial intelligence: how do we get it right?

By Anonymous from University of Cambridge - School of Arts and Humanities. Published on Feb 28, 2018.

This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That’s yesterday’s news. What’s next?

True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that are capable of human-level performance on the full range of tasks that we ourselves can tackle.

If so, then there’s little reason to think that it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and need to fit through a human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the Webb Space Telescope.

Once machines are better than us at designing even smarter machines, progress towards these limits could accelerate. What would this mean for us? Could we ensure a safe and worthwhile coexistence with such machines?

On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more we ask it to do for us, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, who didn’t really want his breakfast to turn to gold as he put it to his lips.

So we need to make sure that powerful AI machines are ‘human-friendly’ – that they have goals reliably aligned with our own values. One thing that makes this task difficult is that by the standards we want the machines to aim for, we ourselves do rather poorly. Humans are far from reliably human-friendly. We do many terrible things to each other and to many other sentient creatures with whom we share the planet. If superintelligent machines don’t do a lot better than us, we’ll be in deep trouble. We’ll have powerful new intelligence amplifying the dark sides of our own fallible natures.

For safety’s sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they’ll have the smarts for the job. If there are routes to the uplands, they’ll be better than us at finding them, and steering us in the right direction. They might be our guides to a much better world.

However, there are two big problems with this utopian vision. One is how we get the machines started on the journey, the other is what it would mean to reach this destination. The ‘getting started’ problem is that we need to tell the machines what they’re looking for with sufficient clarity and precision that we can be confident that they will find it – whatever ‘it’ actually turns out to be. This is a daunting challenge, given that we are confused and conflicted about the ideals ourselves, and different communities might have different views.

The ‘destination’ problem is that, in putting ourselves in the hands of these moral guides and gatekeepers, we might be sacrificing our own autonomy – an important part of what makes us human.

Just to focus on one aspect of these difficulties, we are deeply tribal creatures. We find it very easy to ignore the suffering of strangers, and even to contribute to it, at least indirectly. For our own sakes, we should hope that AI will do better. It is not just that we might find ourselves at the mercy of some other tribe’s AI, but that we could not trust our own, if we had taught it that not all suffering matters. This means that as tribal and morally fallible creatures, we need to point the machines in the direction of something better. How do we do that? That’s the getting started problem.

As for the destination problem, suppose that we succeed. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own tribes, for example.

Loss of freedom to behave badly isn’t always a bad thing, of course: denying ourselves the freedom to keep slaves, or to put children to work in factories, or to smoke in restaurants are signs of progress. But are we ready for ethical overlords – sanctimonious silicon curtailing our options? They might be so good at doing it that we don’t notice the fences; but is this the future we want, a life in a well-curated moral zoo?

These issues might seem far-fetched, but they are already on our doorsteps. Imagine we want an AI to handle resource allocation decisions in our health system, for example. It might do so much more fairly and efficiently than humans can manage, with benefits for patients and taxpayers. But we’d need to specify its goals correctly (e.g. to avoid discriminatory practices), and we’d be depriving some humans (e.g. senior doctors) of some of the discretion they presently enjoy. So we already face the getting started and destination problems. And they are only going to get harder.

This isn’t the first time that a powerful new technology has had moral implications. Speaking about the dangers of thermonuclear weapons in 1954, Bertrand Russell argued that to avoid wiping ourselves out “we have to learn to think in a new way”. He urged his listeners to set aside tribal allegiances and “consider yourself only as a member of a biological species... whose disappearance none of us can desire.”

We have survived the nuclear risk so far, but now we have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if so it will require the same cooperative spirit, the same willingness to set aside tribalism, that Russell had in mind.

But that’s where the parallel stops. Avoiding nuclear war means business as usual. Getting the long-term future of life with AI right means a very different world. Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we’ll need to give them moral authority, too. That means a radical end to human exceptionalism. All the more reason to think about the destination now, and to be careful about what we wish for.


Professor Huw Price and Dr Karina Vold are at the Faculty of Philosophy and the Leverhulme Centre for the Future of Intelligence, where they work on 'Agents and persons'. This theme explores the nature and future of AI agency and personhood, and its impact on our human sense of what it means to be a person.

Powerful AI needs to be reliably aligned with human values. Does this mean that AI will eventually have to police those values? Cambridge philosophers Huw Price and Karina Vold consider the trade-off between safety and autonomy in the era of superintelligence.



In tech we trust?

By lw355 from University of Cambridge - School of Arts and Humanities. Published on Feb 23, 2018.

Dr Jat Singh is familiar with breaking new ground and working across disciplines. Even so, he and colleagues were pleasantly surprised by how much enthusiasm has greeted their new Strategic Research Initiative on Trustworthy Technologies, which brings together science, technology and humanities researchers from across the University.

In fact, Singh, a researcher in Cambridge’s Department of Computer Science and Technology, has been collaborating with lawyers for several years: “A legal perspective is paramount when you’re researching the technical dimensions to compliance, accountability and trust in emerging ICT; although the Computer Lab is not the usual home for lawyers, we have two joining soon.”

Governance and public trust present some of the greatest challenges in technology today. The European General Data Protection Regulation (GDPR), which comes into force this year, has brought forward debates such as whether individuals have a ‘right to an explanation’ regarding decisions made by machines, and introduces stiff penalties for breaching data protection rules. “With penalties including fines of up to 4% of global turnover or €20 million, people are realising that they need to take data protection much more seriously,” he says.
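In concrete terms (a worked reading of the rule as quoted, on the standard understanding that the higher of the two figures applies): the cap is max(4% of global annual turnover, €20 million). A firm with €1 billion in annual turnover would therefore face a ceiling of €40 million, while for smaller firms the flat €20 million figure is the binding one.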

Singh is particularly interested in how data-driven systems and algorithms – including machine learning – will soon underpin and automate everything from transport networks to council services.

As we work, shop and travel, computers and mobile phones already collect, transmit and process much data about us; as the ‘Internet of Things’ continues to instrument the physical world, machines will increasingly mediate and influence our lives.

It’s a future that raises profound issues of privacy, security, safety and ultimately trust, says Singh, whose research is funded by an Engineering and Physical Sciences Research Council Fellowship: “We work on mechanisms for better transparency, control and agency in systems, so that, for instance, if I give data to someone or something, there are means for ensuring they’re doing the right things with it. We are also active in policy discussions to help better align the worlds of technology and law.”

What it means to trust machine learning systems also concerns Dr Adrian Weller. Before becoming a senior research fellow in the Department of Engineering and a Turing Fellow at The Alan Turing Institute, he spent many years working in trading for leading investment banks and hedge funds, and has seen first-hand how machine learning is changing the way we live and work.

“Not long ago, many markets were traded on exchanges by people in pits screaming and yelling,” Weller recalls. “Today, most market making and order matching is handled by computers. Automated algorithms can typically provide tighter, more responsive markets – and liquid markets are good for society.”

But cutting humans out of the loop can have unintended consequences, as the flash crash of 2010 shows. During 36 minutes on 6 May, nearly one trillion dollars were wiped off US stock markets as an unusually large sell order produced an emergent coordinated response from automated algorithms. “The flash crash was an important example illustrating that over time, as we have more AI agents operating in the real world, they may interact in ways that are hard to predict,” he says.

Algorithms are also beginning to be involved in critical decisions about our lives and liberty. In medicine, machine learning is helping diagnose diseases such as cancer and diabetic retinopathy; in US courts, algorithms are used to inform decisions about bail, sentencing and parole; and on social media and the web, our personal data and browsing history shape the news stories and advertisements we see.

How much we trust the ‘black box’ of machine learning systems, both as individuals and society, is clearly important. “There are settings, such as criminal justice, where we need to be able to ask why a system arrived at its conclusion – to check that appropriate process was followed, and to enable meaningful challenge,” says Weller. “Equally, to have effective real-world deployment of algorithmic systems, people will have to trust them.”

But even if we can lift the lid on these black boxes, how do we interpret what’s going on inside? “There are many kinds of transparency,” he explains. “A user contesting a decision needs a different kind of transparency to a developer who wants to debug a system. And a third form of transparency might be needed to ensure a system is accountable if something goes wrong, for example an accident involving a driverless car.”

If we can make them trustworthy and transparent, how can we ensure that algorithms do not discriminate unfairly against particular groups? While it might be useful for Google to advertise products it ‘thinks’ we are most likely to buy, it is more disquieting to discover the assumptions it makes based on our name or postcode.

When Latanya Sweeney, Professor of Government and Technology in Residence at Harvard University, tried to track down one of her academic papers by Googling her name, she was shocked to be presented with ads suggesting that she had been arrested. After much research, she discovered that “black-sounding” names were 25% more likely to result in the delivery of this kind of advertising.

Like Sweeney, Weller is both disturbed and intrigued by examples of machine-learned discrimination. “It’s a worry,” he acknowledges. “And people sometimes stop there – they assume it’s a case of garbage in, garbage out, end of story. In fact, it’s just the beginning, because we’re developing techniques that can automatically detect and remove some forms of bias.”

Transparency, reliability and trustworthiness are at the core of Weller’s work at the Leverhulme Centre for the Future of Intelligence and The Alan Turing Institute. His project grapples with how to make machine-learning decisions interpretable, develop new ways to ensure that AI systems perform well in real-world settings, and examine whether empathy is possible – or desirable – in AI.

Machine learning systems are here to stay. Whether they are a force for good rather than a source of division and discrimination depends partly on researchers such as Singh and Weller. The stakes are high, but so are the opportunities. Universities have a vital role to play, both as critic and conscience of society. Academics can help society imagine what lies ahead and decide what we want from machine learning – and what it would be wise to guard against.

Weller believes the future of work is a huge issue: “Many jobs will be substantially altered if not replaced by machines in coming decades. We need to think about how to deal with these big changes.” And academics must keep talking as well as thinking. “We’re grappling with pressing and important issues,” he concludes. “As technical experts we need to engage with society and talk about what we’re doing so that policy makers can try to work towards policy that’s technically and legally sensible.”


Fairness, trust and transparency are qualities we usually associate with organisations or individuals. Today, these attributes might also apply to algorithms. As machine learning systems become more complex and pervasive, Cambridge researchers believe it’s time for new thinking about new technology.

Want to hear more?

Join us at the Cambridge Science Festival to hear Adrian Weller discuss how we can ensure AI systems are transparent, reliable and trustworthy. 

Thursday 15 March 2018, 7:30pm - 8:30pm

Mill Lane Lecture Rooms, 8 Mill Lane, Cambridge, UK, CB2 1RW



International experts sound the alarm on the malicious use of AI in unique report

By sjr81 from University of Cambridge - School of Arts and Humanities. Published on Feb 21, 2018.

Twenty-six experts on the security implications of emerging technologies have jointly authored a ground-breaking report – sounding the alarm about the potential malicious use of artificial intelligence (AI) by rogue states, criminals, and terrorists.

For many decades hype outstripped fact in terms of AI and machine learning. No longer.
Seán Ó hÉigeartaigh


Fake news ‘vaccine’: online game may ‘inoculate’ by simulating propaganda tactics

By fpjl2 from University of Cambridge - School of Arts and Humanities. Published on Feb 20, 2018.

A new online game puts players in the shoes of an aspiring propagandist to give the public a taste of the techniques and motivations behind the spread of disinformation – potentially “inoculating” them against the influence of so-called fake news in the process.

Researchers at the University of Cambridge have already shown that briefly exposing people to tactics used by fake news producers can act as a “psychological vaccine” against bogus anti-science campaigns.

While the previous study focused on disinformation about climate science, the new online game is an experiment in providing “general immunity” against the wide range of fake news that has infected public debate.

The game encourages players to stoke anger, mistrust and fear in the public by manipulating digital news and social media within the simulation. 

Players build audiences for their fake news sites by publishing polarising falsehoods, deploying Twitter bots, photoshopping evidence, and inciting conspiracy theories in the wake of public tragedy – all while maintaining a “credibility score” to remain as persuasive as possible.

A pilot study conducted with teenagers in a Dutch high school used an early paper-and-pen trial of the game, and showed the perceived “reliability” of fake news to be diminished in those who played compared with a control group.

The research and education project, a collaboration between Cambridge researchers and Dutch media collective DROG, is launching an English version of the game online today at www.fakenewsgame.org.

The psychological theory behind the research is called “inoculation”:

“A biological vaccine administers a small dose of the disease to build immunity. Similarly, inoculation theory suggests that exposure to a weak or demystified version of an argument makes it easier to refute when confronted with more persuasive claims,” says Dr Sander van der Linden, Director of Cambridge University’s Social Decision-Making Lab.

“If you know what it is like to walk in the shoes of someone who is actively trying to deceive you, it should increase your ability to spot and resist the techniques of deceit. We want to help grow ‘mental antibodies’ that can provide some immunity against the rapid spread of misinformation.”

Based in part on existing studies of online propaganda, and taking cues from actual conspiracy theories about organisations such as the United Nations, the game is set to be translated for countries such as Ukraine, where disinformation casts a heavy shadow.

There are also plans to adapt the framework of the game for anti-radicalisation purposes, as many of the same manipulation techniques – using false information to provoke intense emotions, for example – are commonly deployed by recruiters for religious extremist groups.

“You don’t have to be a master spin doctor to create effective disinformation. Anyone can start a site and artificially amplify it through Twitter bots, for example. But recognising and resisting fake news doesn’t require a PhD in media studies either,” says Jon Roozenbeek, a researcher from Cambridge’s Department of Slavonic Studies and one of the game’s designers.

“We aren’t trying to drastically change behavior, but instead trigger a simple thought process to help foster critical and informed news consumption.”

Roozenbeek points out that some efforts to combat fake news are seen as ideologically charged. “The framework of our game allows players to lean towards the left or right of the political spectrum. It’s the experience of misleading through news that counts,” he says.

The pilot study in the Netherlands using a paper version of the game involved 95 students with an average age of 16, randomly divided into treatment and control.

This version of the game focused on the refugee crisis, and all participants were randomly presented with fabricated news articles on the topic at the end of the experiment.

The treatment group were assigned roles – alarmist, denier, conspiracy theorist or clickbait monger – and tasked with distorting a government fact sheet on asylum seekers using a set of cards outlining common propaganda tactics consistent with their role.    

The treatment group subsequently rated the fabricated news articles as significantly less reliable than did the control group, who had not produced their own fake article. Researchers describe the results of this small study as limited but promising. The study has been accepted for publication in the Journal of Risk Research.

The team are aiming to take their “fake news vaccine” trials to the next level with today’s launch of the online game.

With content written mostly by the Cambridge researchers along with Ruurd Oosterwoud, founder of DROG, the game only takes a few minutes to complete. The hope is that players will then share it to help create a large anonymous dataset of journeys through the game.  

The researchers can then use this data to refine techniques for increasing media literacy and fake news resilience in a ‘post-truth’ world. “We try to let players experience what it is like to create a filter bubble so they are more likely to realise they may be living in one,” adds van der Linden.

A new experiment, launching today online, aims to help ‘inoculate’ against disinformation by providing a small dose of perspective from a “fake news tycoon”. A pilot study has shown some early success in building resistance to fake news among teenagers.   

A screen shot of the Fake News Game on a smart phone.


Kettle's Yard is back

By sjr81 from University of Cambridge - School of Arts and Humanities. Published on Feb 12, 2018.

We thought you might like a look inside the 'new' Kettle's Yard, which reopened to the public on Saturday, February 10, to learn more about its past – and future.

As Kettle's Yard opens its doors following a two-year, multi-million pound redevelopment and transformation of its gallery spaces, work by 38 internationally renowned contemporary and historic artists has gone on display in a spectacular opening show.


Opinion: What Ancient Greece can teach us about toxic masculinity today

By ts657 from University of Cambridge - School of Arts and Humanities. Published on Feb 08, 2018.

Comedy and tragedy masks

‘Toxic masculinity’ has its roots in Ancient Greece, and some of today’s most damaging myths around sexual norms can be traced back to early literature from the time, as Professor Mary Beard discusses in her latest book, Women & Power: A Manifesto.

Euripides’ Hippolytus has toxic masculinity on every page; Greek myths are populated by rapists who are monstrous or otherworldly, while Medusa is an early example of victim blaming. Of course, in some texts, rapists are condemned and victims believed. But the ending is usually the same – triumph for the aggressor, tragedy for the survivor.

In Hippolytus, the titular male hero challenges sexual norms because he is celibate, by some counts asexual, preferring to spend his time outdoors. He is also a pious young man devoted to Artemis, goddess of the wilderness and of virginity.

Aphrodite, as goddess of sexual love, is none too impressed. Hippolytus refuses to worship her. To exact her revenge, Aphrodite causes Hippolytus’ stepmother, Phaedra, to fall in love with him. Phaedra sexually harasses him, and his resistance leads her to falsely accuse him of rape in her suicide note. Hippolytus flees in disgrace and is killed. A sad tale, and far more complex than this brief summary can show.

My work training University of Cambridge students to be active bystanders, as part of the University’s Breaking the Silence campaign, has made me think more about Hippolytus and the concepts of masculinity that stretch back to ancient times.

Hippolytus’s father Theseus would rather believe his son is a rapist than accept that he does not fit the definition of a ‘real man’. What kind of man doesn’t want sex after all; what young prince left at home with his young and beautiful stepmother wouldn’t be tempted to get in bed with her? When deciding sexual and gender norms, we often make emotionally based value judgments. These create false beliefs that are some of the most resistant to truth, according to one US study.

Challenging myths and stereotypes

I cannot help but wonder whether society’s restricted definition of masculinity is contributing to the staggering statistics we see about the prevalence of sexual harassment and sexual violence on college campuses, as has been documented in the NUS report Hidden Marks. ‘Toxic’ norms of male behaviour are interrogated in anti-harassment programmes such as Cambridge’s Good Lad Initiative or the Twitter movement #HowIWillChange.

The images in popular culture, from men’s magazines to Hollywood movies, not to mention pornography so readily accessible on the internet, show a very restricted kind of masculinity.  The kind where aggression is rewarded and celebrated. 

Is it surprising, then, that so many of today’s young men seem to lack the confidence to be OK with taking things slow?  With not going out for the sole purpose of getting laid?  Isn’t that what everyone else is doing after all?

Challenging myths and stereotypes is also central to Cambridge’s bystander intervention programme.

We use social norms theory to show that what is perceived as the dominant view may well not be.  The ‘silent majority’ is strong.  And it only takes one or two people to stop being silent to change what is perceived to be normal and acceptable. 

We are empowering the students in our workshops to challenge the stereotypes, to see that it’s OK for them or their male friends to be a different kind of man.  Helping students to understand the culture, and perceptions, that enable sexual violence to take place is an important foundation for preparing them to be active bystanders.

Making a difference

Sex offender ‘monsters’ are as prevalent in today’s media as they were on the ancient stage. Rachel Krys, co-director of the End Violence Against Women Coalition, describes these stereotypes as unhelpful, allowing unacceptable behaviour short of sexual assault to be disassociated from perpetration. According to the coalition, most perpetrators “look normal, can be quite charming, and are often part of your group”. When we move away from the idea that perpetrators have to be monsters, we can begin to own and change unacceptable behaviours in our friends, our group and even ourselves.

It’s clear these are complex issues, and we know it’s not easy standing up to your friends, or going against the crowd.  Intervening may be awkward, and it may feel uncomfortable.  But it can make a real difference, not just for potential victims but also for potential perpetrators.

A recent study of London commuters shows that only 11% of women who were sexually harassed or assaulted on the Underground were helped by a bystander.

The report describes victims’ devastation at finding that, even when surrounded by people, they were unsafe. Bystanders witnessing their abuse and doing nothing left victims with the lifelong impression that no one cared.

There are also so many different ways to intervene, and it is not just about confronting people or taking a stand in a crowd.  The workshops help students practice intervention skills in realistic scenarios that could come up in their day-to-day university life, and explore the different options that may be available to them. 

It has been encouraging to see how the students participating have already started to gain not only confidence, but also awareness of how prevalent some of these situations are, and how what might seem like a very small action can make such a big difference.

Are we taking at least a small step to changing the culture at the University of Cambridge?  I certainly hope so.

Tori McKee is Tutorial Department Manager at Jesus College. Join this week's Breaking the Silence campaign to increase bystander interventions to stop sexual harassment as part of National Sexual Abuse and Sexual Violence Awareness Week 2018. Download materials at www.breakingthesilence.cam.ac.uk.

 Tori McKee, a PhD scholar in Classical Studies, looks at ancient and modern ways of being a man



Artificial intelligence is growing up fast: what’s next for thinking machines?

By cjb250 from University of Cambridge - School of Arts and Humanities. Published on Feb 06, 2018.

We are well on the way to a world in which many aspects of our daily lives will depend on AI systems.

Within a decade, machines might diagnose patients with the learned expertise of not just one doctor but thousands. They might make judiciary recommendations based on vast datasets of legal decisions and complex regulations. And they will almost certainly know exactly what’s around the corner in autonomous vehicles.

“Machine capabilities are growing,” says Dr Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence (CFI). “Machines will perform the tasks that we don’t want to: the mundane jobs, the dangerous jobs. And they’ll do the tasks we aren’t capable of – those involving too much data for a human to process, or where the machine is simply faster, better, cheaper.”

Dr Mateja Jamnik, AI expert at the Department of Computer Science and Technology, agrees: “Everything is going in the direction of augmenting human performance – helping humans, cooperating with humans, enabling humans to concentrate on the areas where humans are intrinsically better such as strategy, creativity and empathy.” 

Part of the attraction of AI is that future technologies will perform tasks autonomously, without humans needing to monitor activities every step of the way. In other words, machines of the future will need to think for themselves. But, although computers today outperform humans on many tasks, including learning from data and making decisions, they can still trip up on things that are really quite trivial for us.

Take, for instance, working out the formula for the area of a parallelogram. Humans might use a diagram to visualise how cutting off the corners and reassembling it as a rectangle simplifies the problem. Machines, however, may “use calculus or integrate a function. This works, but it’s like using a sledgehammer to crack a nut,” says Jamnik, who was recently appointed Specialist Adviser to the House of Lords Select Committee on AI.
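To make the contrast concrete (a worked illustration, taking base $b$ and height $h$): the calculus route computes the area as $A = \int_0^h b\,\mathrm{d}y = bh$, summing horizontal strips of length $b$, while the diagram reaches the same $A = bh$ in one step, by slicing the overhanging triangle off one side of the parallelogram and reattaching it on the other to form a $b \times h$ rectangle.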

“When I was a child, I was fascinated by the beauty and elegance of mathematical solutions. I wondered how people came up with such intuitive answers. Today, I work with neuroscientists and experimental psychologists to investigate this human ability to reason and think flexibly, and to make computers do the same.”

Jamnik believes that AI systems that can choose so-called heuristic approaches – employing practical, often visual, approaches to problem solving – in a similar way to humans will be an essential component of human-like computers. They will be needed, for instance, so that machines can explain their workings to humans – an important part of the transparency of decision-making that we will require of AI.

With funding from the Engineering and Physical Sciences Research Council and the Leverhulme Trust, she is building systems that have begun to reason like humans through diagrams. Her aim now is to enable them to move flexibly between different “modalities of reasoning”, just as humans have the agility to switch between methods when problem solving. 

 Being able to model one aspect of human intelligence in computers raises the question of what other aspects would be useful. And in fact how ‘human-like’ would we want AI systems to be? This is what interests Professor José Hernandez-Orallo, from the Universitat Politècnica de València in Spain and Visiting Fellow at the CFI.

“We typically put humans as the ultimate goal of AI because we have an anthropocentric view of intelligence that places humans at the pinnacle of a monolith,” says Hernandez-Orallo. “But human intelligence is just one of many kinds. Certain human skills, such as reasoning, will be important in future systems. But perhaps we want to build systems that ‘fill the gaps that humans cannot reach’, whether it’s AI that thinks in non-human ways or AI that doesn’t think at all.

“I believe that future machines can be more powerful than humans not just because they are faster but because they can have cognitive functionalities that are inherently not human.” This raises a difficulty, says Hernandez-Orallo: “How do we measure the intelligence of the systems that we build? Any definition of intelligence needs to be linked to a way of measuring it, otherwise it’s like trying to define electricity without a way of showing it.”

The intelligence tests we use today – such as psychometric tests or animal cognition tests – are not suitable for measuring intelligence of a new kind, he explains. Perhaps the most famous test for AI is that devised by 1950s Cambridge computer scientist Alan Turing. To pass the Turing Test, a computer must fool a human into believing it is human. “Turing never meant it as a test of the sort of AI that is becoming possible – apart from anything else, it’s all or nothing and cannot be used to rank AI,” says Hernandez-Orallo.

In his recently published book The Measure of All Minds, he argues for the development of “universal tests of intelligence” – those that measure the same skill or capability independently of the subject, whether it’s a robot, a human or an octopus.

His work at the CFI as part of the ‘Kinds of Intelligence’ project, led by Dr Marta Halina, is asking not only what these tests might look like but also how their measurement can be built into the development of AI. Hernandez-Orallo sees a very practical application of such tests: the future job market. “I can imagine a time when universal tests would provide a measure of what’s needed to accomplish a job, whether it’s by a human or a machine.”

Cave is also interested in the impact of AI on future jobs, discussing this in a report on the ethics and governance of AI recently submitted to the House of Lords Select Committee on AI on behalf of researchers at Cambridge, Oxford, Imperial College and the University of California at Berkeley. “AI systems currently remain narrow in their range of abilities by comparison with a human. But the breadth of their capacities is increasing rapidly in ways that will pose new ethical and governance challenges – as well as create new opportunities,” says Cave. “Many of these risks and benefits will be related to the impact these new capacities will have on the economy, and the labour market in particular.”

Hernandez-Orallo adds: “Much has been written about the jobs that will be at risk in the future. This happens every time there is a major shift in the economy. But just as some machines will do tasks that humans currently carry out, other machines will help humans do what they currently cannot – providing enhanced cognitive assistance or replacing lost functions such as memory, hearing or sight.”

Jamnik also sees opportunities in the age of intelligent machines: “As with any revolution, there is change. Yes, some jobs will become obsolete. But history tells us that there will be jobs appearing. These will capitalise on inherently human qualities. Others will be jobs that we can’t even conceive of – memory augmentation practitioners, data creators, data bias correctors, and so on. That’s one reason I think this is perhaps the most exciting time in the history of humanity.”


Our lives are already enhanced by AI – or at least an AI in its infancy – with technologies using algorithms that help them to learn from our behaviour. As AI grows up and starts to think, not just to learn, we ask how human-like we want their intelligence to be, and what impact machines will have on our jobs.



Judges announced on BBC's short story awards with First Story and Cambridge University

From School of Arts and Humanities. Published on Dec 11, 2017.

The BBC National Short Story Award with Cambridge University (NSSA) today calls for submissions for the 13th year with television presenter, author and actress Mel Giedroyc chairing the judging panel for the 2018 award. Mel, who has co-hosted a myriad of television shows including The Great British Bake Off, has written two books, From Here to Maternity (2005) and Going Ga-Ga (2007). Mel’s counterpart on the BBC Young Writers’ Award with First Story and Cambridge University (YWA) is BBC Radio 1 and CBBC’s Book Club presenter Katie Thistleton, who will chair the judging panel for the teenage award as it opens for submissions for the fourth year.

Polly Blakesley awarded Pushkin House Russian Book Prize 2017

From School of Arts and Humanities. Published on Jul 11, 2017.

School of Arts and Humanities Newsletter Lent Term 2017

From School of Arts and Humanities. Published on Mar 24, 2017.


Arts and Humanities appoints new Head of School

From School of Arts and Humanities. Published on Mar 16, 2017.

Emer Prof Baroness Onora O'Neill wins 2017 Holberg Prize

From School of Arts and Humanities. Published on Mar 15, 2017.

Vice Chancellor's Public Engagement and Impacts Awards are open!

From School of Arts and Humanities. Published on Mar 10, 2017.

The deadline for nominations to the Vice Chancellor's Public Engagement and Impact Awards is April 21st.

Masterclass: Coordinating Complex Funding Bids

From School of Arts and Humanities. Published on Mar 07, 2017.

Abandoned Liszt opera brought to life - 170 years later

From School of Arts and Humanities. Published on Mar 07, 2017.

Capitalism on the Edge Lecture - last in the series!

From School of Arts and Humanities. Published on Feb 27, 2017.

Capitalism on the Edge - What can women do to change how capitalism works?

Sidgwick Site Equalities Improvement Network talk March 7th

From School of Arts and Humanities. Published on Feb 22, 2017.

The Future of the Professions

From School of Arts and Humanities. Published on Feb 17, 2017.

Richard and Daniel Susskind will talk about their latest book, The Future of the Professions, on February 23rd at CRASSH.

New book explores how articulating language is humans' greatest gift

From School of Arts and Humanities. Published on Feb 14, 2017.

The Wonders of Language: How to make noises and influence people by Prof Ian Roberts (DTAL) was published last week by CUP and is now on display in the CUP bookshop.

Newton Awards deadline fast approaching!

From School of Arts and Humanities. Published on Feb 03, 2017.

The deadline for the Newton International Fellowships, Newton Mobility Grants and Newton Advanced Fellowships is 15 March 2017

2017 Teaching Forum open for registration

From School of Arts and Humanities. Published on Feb 03, 2017.

University launches 2017 Impact and Public Engagement Awards

From School of Arts and Humanities. Published on Feb 01, 2017.

The 2017 University Impact and Public Engagement Awards are now open!

New starters in the Office of the School of Arts and Humanities

From School of Arts and Humanities. Published on Feb 01, 2017.

DAAD Lecture: Imperial Violence and Mobilised Nations

From School of Arts and Humanities. Published on Jan 26, 2017.

Cambridge DAAD Hub: Lecture by Leibniz Prize winner Prof. Dr. Lutz Raphael, 23 February, 5pm, Room 2 Mill Lane Lecture Theatres

CRASSH Communications Manager Vacancy

From School of Arts and Humanities. Published on Jan 25, 2017.

Vacancy: Communications Manager in CRASSH

School welcomes new Council representatives

From School of Arts and Humanities. Published on Jan 24, 2017.

Athena Swan Surgery Wednesday 25 January

From School of Arts and Humanities. Published on Jan 24, 2017.

First SSEIN Talk February 1st

From School of Arts and Humanities. Published on Jan 24, 2017.

The first Sidgwick Site Equalities Improvement Network talk on Wednesday February 1st will be given by Jacqueline Scott on 'Gender Inequalities in Production and Reproduction'.

Opening Reception for Gormley Sculpture on Sidgwick Site

From School of Arts and Humanities. Published on Dec 02, 2016.

Music Faculty Teaching Prizes 2016

From School of Arts and Humanities. Published on Dec 02, 2016.

Congratulations to the 2016 winners of the Faculty of Music's Teaching Prize!

School Newsletter Michaelmas Term 2016

From School of Arts and Humanities. Published on Dec 01, 2016.

Winners announced for the Inaugural Arts and Humanities Impact Pilot Fund

From School of Arts and Humanities. Published on Nov 17, 2016.

Cambridge support for MPs' position on the importance of language skills

From School of Arts and Humanities. Published on Oct 19, 2016.

On Monday 17th October, the All Party Parliamentary Group on Modern Languages called on the UK Government to support languages education.

History of Art Department responds to the AQA decision to drop Art History A Level

From School of Arts and Humanities. Published on Oct 18, 2016.

Postgraduate Open Day - Wednesday November 2nd

From School of Arts and Humanities. Published on Oct 18, 2016.

Faculties and departments in the School of Arts and Humanities will be attending the University’s Postgraduate Open Day being held on Wednesday November 2nd 10:00 AM to 4:30 PM.

Ensuring Artificial Intelligence benefits all mankind

From School of Arts and Humanities. Published on Oct 18, 2016.

Ambitious new centre launches at University of Cambridge