AI Ethics: send money, guns & lawyers

Photo: Iulian Ursu (cc)

My speech notes, for a talk given to the Westminster Abbey Institute on 31 May 2018

This evening I’d like to present a problem within what I believe to be the most transformative technology of our lives: artificial intelligence. I’ll suggest why I think that problem will involve some colossal rows involving money, guns, and lawyers. And as well as explaining the problem, I’d like your help to find the right way for us to respond professionally and personally, so I look forward to the discussion afterwards.

  • Who am I, and why am I here?
    • I’m a married father of two – living in south west London. Missing the family during half term holiday.
    • Like you I’m an alumnus of the Fellows’ programme at the Institute. I’ve always sought an inflated sense of purpose in what I do, which is easy to find in public life. But it is difficult to hang on to that sense of purpose without moral and spiritual reflection, which I think the Institute catalyses.
    • I’m someone who has slipped between the public and private sector, technology and policy, institutions and startups. I’ve worked in HM Treasury, and in Africa; in the Dept. Energy & Climate Change, and in a small Engineering charity; in the Cabinet Office and in an AI firm; because I really value the perspective that one can get by moving to a different position.
    • Right now, I work at ASI Data Science. We help people create artificial intelligence to solve business and policy problems. I’d say we’re now the leading specialist AI agency in Europe. My job is to make sure we don’t go bust.
      • We have worked with more than 150 organisations as diverse as Amnesty International on their marketing, Easyjet on their staff scheduling; Isaac Physics to help people learn faster; and Siemens, on predictive maintenance for their trains;
      • In Government, we’ve used AI in the Home Office to help spot and remove terrorist propaganda on social media platforms; in the Department for Education to help better forecast the number of teachers required in each local authority; we’ve used it in the NHS to work towards a more effective way to predict the recurrence of cancer; and in local government to help better identify houses in multiple occupation.
      • We believe that AI is for Everyone. It is a remarkable technology that has the potential to solve hard problems and automate hard work. It isn’t just for the Internet Advertising giants. It isn’t even just for companies and governments. It should benefit workers, and customers, and citizens: everyone.
  • What is AI?
    • I think it is helpful to define AI in the context of science and the scientific method:
      • Science as described by Karl Popper, is the process of collecting, analysing, and presenting data from the world and testing theories that describe that data.
      • That second step – analysis and deduction – is the most mysterious: it requires human intelligence, which to Karl Popper was a black box.
      • Data Science is the Scientific method using software. Artificial Intelligence is the set of techniques that replicate that analytical, deductive step that is done by human intelligence in the scientific method.
    • So AI isn’t magic, it’s science. And it isn’t just advanced statistics. AI is different because:
      • More complex functional approximation – e.g. Image recognition
      • Continuously learns – e.g. AlphaGo Zero
      • The goal is usually automation not human ‘insights’ e.g. trading algorithms
    • AI started in the 1950s, and has progressed in fits and starts, or ‘winters’ and ‘summers’ as they are called in the field. For simplicity there are two types of AI to be aware of:
      • Symbolic AI (1950s-1980s): Pre-defined models; Hand-coded rules; Small data sets
      • Statistical AI or machine learning (1990s-present): Learning algorithms; Unspecified rules; Large data sets
    • Symbolic AI failed because it was always constrained to working on very small problems – it couldn’t deal with the complexity of the real world. Statistical AI has advanced because of the accessibility of vast sources of data; and the exponential increase in computing power.
    • Money, Guns & Lawyers are all intricately bound up in the future of AI:
      • Enormous amounts of money are now pouring into the field. Like the historical investment in electrification or the Internet, I don’t expect it to slow down any time soon.
      • Military at the forefront of experimentation – autonomous weapons particularly controversial. Quick show of hands – who would be comfortable with the British Military using autonomous weapons?
        • We have said we won’t – but missile defence systems are largely autonomous already, as are cyber attacks.
        • Russian and Israeli firms (e.g. Kalashnikov and Airobotics) are developing autonomous combat drones using AI, and more will follow.
        • South Korean guns on the sentry posts along the border with North Korea are partly autonomous.
      • And if you thought the conversations provoked by GDPR were tricky – the lawyers are going to have an absolute field day with AI – who is liable for the decisions that AI models make? What counts as discrimination? What rationale has to be provided for decisions made by machines? The list goes on and on.
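The symbolic-versus-statistical distinction above can be made concrete with a toy sketch. This is entirely my own illustration, not an example from the talk: a hand-coded rule sits next to a "rule" that is left unspecified until labelled data arrives.

```python
# Symbolic AI: a human writes the rule by hand before any data is seen.
def symbolic_spam_filter(message: str) -> bool:
    """Hand-coded rule: flag messages containing known trigger phrases."""
    return any(phrase in message.lower() for phrase in ("winner", "free money"))

# Statistical AI / machine learning: the rule is learned from examples.
def learn_threshold(examples):
    """Learn a decision threshold on message length from labelled data.

    Picks the length cut-off that best separates spam from ham in the
    training examples -- the 'rule' is unspecified until the data arrives.
    """
    best_cut, best_correct = 0, -1
    for cut in sorted({len(text) for text, _ in examples}):
        correct = sum((len(text) >= cut) == is_spam for text, is_spam in examples)
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut

training_data = [
    ("hi, lunch at noon?", False),
    ("meeting moved to 3pm", False),
    ("WINNER!! claim your free money prize now, act fast", True),
    ("you have been selected for an exclusive reward, click here", True),
]

cut = learn_threshold(training_data)

def learned_spam_filter(message: str) -> bool:
    """Apply the threshold learned from the training data."""
    return len(message) >= cut
```

The toy learned filter generalises only as well as its data, which is the point of the contrast: symbolic AI was limited by what its authors could anticipate, statistical AI by what its data contains.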
  • Why does AI matter as a technology?
    • General Purpose Technology – like electricity, or the shipping container, or machine tools – there are so many secondary uses and implications – we can’t even imagine all of them.
    • Let’s take self-driving cars. They are coming. One example of social and economic transformation driven by AI, but one among many. I suspect our children may never need to learn to drive. What are the implications of AI in cars (h/t to Benedict Evans):
      • Reduce c.1 million road deaths/year globally. Something over 90% of all accidents are caused by driver error, and a third of fatal accidents in the USA involve alcohol. These accidents carry a huge economic cost: property damage, medical and emergency services, legal costs, lost work and congestion. One UK analysis found a cost of £30bn every year.
      • Fewer people employed as drivers. There are something over 230,000 taxi and private car drivers in the USA and around 1.5m long-haul truck-drivers.
      • Taxis become 75% cheaper, because the wage of the person driving the vehicle accounts for three quarters of the cost.
      • Higher road capacity & faster journeys. No lanes, no separation, no stopping distances, and no signals, means profoundly different traffic patterns. Accidents themselves cause as much as a third of congestion.
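The 75% figure follows directly from the cost split. A back-of-envelope check (the £20 fare is my own illustrative assumption, not a figure from the talk):

```python
# If the driver's wage is three quarters of the fare, removing the
# driver cuts the price by exactly that share.
fare = 20.00                       # illustrative fare in pounds (assumed)
wage_share = 0.75                  # driver wage as a fraction of the fare
driverless_fare = fare * (1 - wage_share)
saving = (fare - driverless_fare) / fare
print(driverless_fare)             # 5.0
print(f"{saving:.0%}")             # 75%
```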

Slightly more speculatively, we might project:

      • No more looking for parking spaces & no more need for on-street parking
      • Collapsed distinction between cars & buses
      • Renaissance of Rural Pubs
      • Media Consumption time rises
      • Falling crime from car camera footage
    • But AI isn’t attracting such a lot of interest just because of its economic potential – or even because of the potential for apocalypse (we don’t worry about asteroids in the same way) – but because AI is entangled in stories about ourselves – about our identities.
  • AI and what it means to be human
    • A lot of people, from Alan Turing in the 1950s to Stuart Russell today, along with most of Silicon Valley, and influential philosophers like Peter Singer and Julian Savulescu all appear to fix on cognition and analytical intelligence as the principal way to define a person, and a person’s value.
    • It is interesting to me that even the critics of AI – like Yuval Harari, the author of Homo Deus, are often just as reductionist in their comparison between human and machine intelligence.
    • I think that is an impoverished definition: I think character trumps cognition, but we’ll come back to that.
    • Defined in this way – it sets machine intelligence up to be in direct competition with human intelligence, and therefore with us as people. Stories spread and anxiety grows about how machines will judge us. How they may replace us.
    • And in terms of cognition – the machines are getting brighter. It may be helpful to think about intelligence in some qualitative categories rather than just as a general scale. In ascending order of difficulty for machines we have:
      • Calculation – this is simple
      • Prediction – this can be complex, but is now common
      • Recognition – this is new
      • Understanding – contextual intuition is easy for us v. hard for machines
    • Computers are getting better at this at the same time that we’re increasingly aware of our own cognitive limitations. We aren’t often straightforwardly rational. We can be and usually are biased, emotional, lazy, or distracted in our thinking – that’s me anyway – I’m less sure about you.
    • Our cognitive biases are widely exploited. Quite a few methods of psychological manipulation are now used as reliable business models – advertising in particular.
    • And it is easy to think that emotional or subjective thinking is less good than the super rational kind. That we should only ever aspire to the life scientific.
    • But what if it takes more than intelligence to understand the world? What if intelligence is insufficient? What if, as Martin Heidegger suggested, the world is more than a set of facts?
  • AI Ethics: how will AI be used?
    • Like any technology or tool, AI is a capability that opens up a series of possibilities. It isn’t inherently good or bad, but it will be used in positive and less positive ways, and I’d love to explore not only what that might look like, but how we can understand the current framework and biases that are likely to characterise the ethos of applied AI.
    • The most common way of thinking about AI ethics is to imagine all the things that could go wrong, why they might go wrong, and start to construct ways of avoiding those failures.
      • Autonomous vehicles crashing
      • AI Chatbots spitting obscenities into the Internet
      • More scarily – Deep Fakes
      • Less visibly – algorithms used to triage applications discriminating on ethnicity
    • But I think an equally fruitful way to explore AI ethics is to focus on what ‘going right’ looks like:
      • Cost efficiency or customer satisfaction?
      • Efficient allocation of labour, or full employment?
      • And in an era where even the quality of our sleep is now quantified – what should happen to those things that count, but cannot be counted? Like mercy? Or grace?
      • Most counter-intuitively, might a purely maximalist approach to optimising outcomes overlook the value within its opposites: the power within weakness, the trap of independence, or the good that can emerge from suffering?
    • AI as a technology lends itself to an ethic of rationality where the world really is just a set of facts, and where a Western Cartesian, objective distant representation of the world is thought sufficient to make sense of it.
    • One reason for the uptake of AI across government is because it is entirely in tune with the gospel of New Public Management:
      • The philosophy that you can’t manage what you can’t measure. It started in the private sector: Tom Peters – In Search of Excellence (1982).
      • Then it came to the public sector: Osborne and Gaebler – Reinventing Government (1992).
      • It was (excellently) embodied by Michael Barber – who set up the PM’s Delivery Unit – and has recently written – How to Run a Government
      • Remarkably resilient management philosophy. Was reflected in a centrist political consensus for the last 25 years.  Almost unobjectionable. Who could be against efficiency?
      • Strengths:
        • It broke open the closed shop of the professions – teaching, medicine, law, and recognised that accountability is necessary, even for experts.
        • Offers a freedom from rules – by specifying outcomes and outputs, and encouraging innovation in process terms.
        • Not partisan but technocratic – so can be embraced by left and right – you can target equality or performance – as long as you’ve got measurable KPIs it works.
    • However AI within a philosophy of managerialism looks rather threatening (unless you’re a manager). ‘Good’ looks like taking people out of the loop, with the goal of greater efficiency.
  • And managerialism has recently run aground:
    • The bureaucracy of DWP dramatised in I, Daniel Blake. The bureaucracy of the Home Office in the Windrush controversy. The bureaucracy of HMRC when it comes to charitable donations and reclaiming tax. In the NHS. Yes, even the NHS: the unsubtle hints that terminating a foetus with Down’s syndrome is probably for the best. And most prosaically – how many times have you been kept on hold? How many times have you been funnelled into a ‘process’ for the convenience of the company or organisation, rather than the citizen?
    • All five are examples of managerialism deployed in the interests of the organisation rather than the citizen.
    • Moreover, we are everywhere faced with ‘wicked problems’ that can’t be solved analytically, or with more operational efficiency.
      • Debt counselling → relational imperative, not a transactional one.
      • Adult Skills Retraining → motivation of students and providers and employers, changing culture as well as services.
      • Brexit → not an economic argument, but a political clash of identities
    • I suspect they can only be solved relationally – not transactionally, because they involve a clash of values rather than an analytical mystery.
  • An ethics of AI that just delivers efficiency could do more harm than good:
    • In Healthcare → automating diagnostics → what about the patient-doctor relationship?
    • In Education → personalised pedagogy through MOOCs → what about the inspiration of human tutorials?
  • Out of the box, AI works to get tasks and transactions done; AI doesn’t help the formation of character, or the provision of kindness. In the language of David Brooks – AI is about the résumé virtues, not the eulogy virtues.
  • Unless we recognise the essence of public services and duties within relationships and character, we will use AI to optimise our way to an impersonal and unresponsive state.
  • What can we do?
    • Well, I’d love to hear your suggestions. But here are three that occur to me:
    • First – Let’s not fall prey to CP Snow’s Two Cultures – in his 1959 lecture Snow declared “If the scientists have the future in their bones,…then the traditional culture responds by wishing the future did not exist.” Arguably that division continues today. We can stand against it by learning how to commission, sponsor and assure the use of AI where we work, for example by taking ‘AI for Executives’-type training.
    • Second, we should lobby for the use of AI, but also for it to be transparent and accountable – don’t blindly trust the machines, and don’t just rely on the lawyers.
    • Last, and perhaps most critically, we should use the efficiency savings from AI to invest in relationships, community, and social cohesion – for ourselves, as well as for the public that we serve.

In conclusion

I believe AI is a general purpose technology, much like electricity or computers, that has such broad application that in 50 years time there will scarcely be a corner of life untouched by its impact.

But there are two possible futures that involve artificial intelligence. Not the apocalyptic vs. the ecstatic visions of commentators such as Elon Musk and Ray Kurzweil that capture the headlines. That is merely a speculative debate about what the technology will be able to do in 20 years’ time. More important for us right now are the two ways in which the technology will be used today.

We stand today between a future that is increasingly quantified, optimised, and managed; and a future that emphasises the relational heart of work and services. These two futures will co-exist, but they will be felt very differently. Because of the power of AI, I think it will tip the balance, one way or another. And there will be rows. Those rows will involve money, guns and lawyers. The way those rows are resolved will define our politics, our experience of work, and our social integration for a generation to come.

Questions?

I’m grateful to a number of people for their thoughts and suggestions for this talk – including Adrian Brown, Will Davies, Sally Phillips….


Freedom to flounder

“To food, friends, and freedom.” My traditional toast at dinner is usually readily echoed, whoever happens to be at our table. Yet if I were to substitute Freedom for a more specific word that historically has had much of the same meaning – salvation – then many of our guests might choke on their Brussels sprouts.

Over the last century, the West has rightly pursued the ideal of freedom from external constraint, whether legal, social or political. The effect has thankfully given greater liberty (if not yet equality) to individuals, and particularly women, and minorities. Yet, we now face a host of social challenges from having forgotten the second kind of freedom: freedom from internal obstacles; salvation from ourselves and from our fathomless ability to screw things up. The notion of salvation promises freedom from the less beautiful chambers of our hearts; but sounds alien to modern ears because we too easily forget our own human frailties.

In an age where almost everything is permitted, our leaders and commentators have lost the moral courage to articulate how we can be saved from our instinctive choices. Poor diets have created an epidemic of diabetes. Poor financial discipline contributes to debt. Lack of community commitment has led to epic levels of loneliness. How many couples get help to navigate the useful constraints of marriage? How many of us feel confident to justify our own conception of virtue in an age of cultural relativism? Collectively, an excess of freedom has left us floundering.

Faith in Action – Episode 10

James Perry – Faith in Business

Transcript:

Hello. I’m Richard Sargeant, and this is Faith in Action, a podcast about how faith affects the way we live and work today.

What difference does faith make in business? With me to explore faith in business is James Perry, co-founder of the ready meal company COOK, and also the director of B Lab UK, which is a support organisation for B corporations.

James, welcome. What are B corporations? Why do they matter?

B corporations have been created in response to this idea that business has become quite narrow in its focus, particularly in the second half of the 20th century, and become very focused on maximizing shareholder value. Whereas business can potentially perform a much broader role in creating value for communities, the environment and employees. So, B corporations are a different kind of company.

They sound a bit like community interest companies, or co-ops. Where do B corps sit in the mix of social economy firms?

So, there are about 3.1 million different enterprises in Britain. About 2.7 million of them are companies limited by shares, and about 400,000 are a sort of salami-sliced group of community interest companies, industrial provident societies, co-operatives, companies limited by guarantee and so on. B corporations are about companies limited by shares, which means they have no asset lock. So, they are for-profit companies. The other 400,000 in the social economy have some form of asset lock, which means that they cannot access mainstream capital, which creates a constraint on their ability to grow and mainstream. Business is the most powerful force invented by mankind; it shapes our society in more profound ways than any other man-made institution, and I suppose our point is that business can be repurposed to intentionally create social and environmental value, rather than just seeking to create shareholder value and obey laws.

How do we know this isn’t just greenwash? What did COOK have to do to become a B corporation?

Well, there are two things that one has to do. The first one is, one has to change one’s constitution. So, in America, they literally have created new laws, because it’s illegal to do anything other than maximize shareholder value within their current corporation. So, they’ve created benefit corporations; they’ve passed laws in 31 states. In Britain, we have a more flexible companies act, but to become a B corporation you have to change your memorandum and articles of association to assert that the company is being operated for stakeholders, defined as communities, the environment and employees, as well as shareholders, and they rank alongside one another, rather than having this default system of shareholder primacy in regular companies. So, that’s the first part. You have to change your legal constitution. And the second part is, you have to pass a performance assessment, of how well you perform against those other stakeholder groups. So, you have to complete what’s called a B impact assessment, which assesses your performance against your employees, communities, the environment and governance, as well as your impact business model. So, it’s a pretty comprehensive certification.

It sounds a bit techy. What are the practical differences for staff, or for people shopping at COOK, that they might notice from becoming a B corp?

So, a lot of the changes are about fixing the roof, in terms of your business practices. We had to change, literally, hundreds of things when we certified as a B corporation. For example, we never used to measure our water usage. Now we do, and we have found that we are using about a quarter of the water per portion produced compared with before we started measuring it. So, a lot of it’s to do with measuring what matters, and you’re incentivized to deliver better performance on those sorts of things. There are other, bigger things, such as we never used to have a profit share, but we now have a profit share for all employees. We used to always seek to employ vulnerable or marginalized people, but we never did it intentionally, and now we have a target, so that of our 850 employees, two percent of them come from underserved groups, such as former addicts, or offenders on release from prison, and we’re moving that from two percent to three percent this year. And things like that, where you start to intentionally measure your positive impact, is basically what it’s all about.

Is this something the staff notice?

One of the reasons we did it was, we always had a problem when we were talking to our staff, and saying we wanted the business to be positive, socially and environmentally, because they didn’t believe us. And they said, “Well, ultimately it’s a private company. It’s making you rich. We know you want to believe this, but, actually, at the end of the day, that’s the effect of what’s going on here. So, why are you burdening us with all these additional things?” And the minute we became a B corporation, they realized we were serious. We changed the legal constitution of the company. It was like it opened up the doors of perception to them, in terms of what their role was. So, instead of coming to work to be paid, to give us what we want and then go off in their free time and do what they want, what we found was they started realizing they were coming to work to be the change they wanted to see in the world, and they could actually use our business as a platform to effect that change. So, the ideas started coming. And what’s incredible is this unleashing of creative energy when people start to conceive of business as a platform to do those things, rather than just enrich shareholders.

And how far has this got – the B corporations movement in the UK? Where is it going next?

So, it started in the US about nine years ago, but it’s a global problem in terms of this narrow role that business has been assigned, or restricted itself to. And now it’s a movement that’s in about 48 countries, in about 131 industries. There are 1,700 B corporations. The B corporation economy, globally, is the same size as a small country: $28 billion of revenue. And it’s growing very fast. We launched in Britain in September last year. There are 89 B corporations now in Britain, with revenues of about £700–800 million. So, it’s very early days. You know, we see this as something which has the power to really change the whole economy, but, obviously, it’s going to take time.

I think, James, you previously said that you were looking to combine care with capital, and profit with purpose. I wondered if you might reflect on that, and perhaps explain how you saw faith fitting in.

Yeah, I mean, I suppose I go back to the moment when Jesus died, and the curtain in the temple was ripped from top to bottom. And for me, what that symbolizes is the end of a sacred-secular divide. And I think what we’ve done in our society is institutionalized a sacred-secular divide, particularly with respect to money and capital. So, I think that what the Bible is teaching is, effectively, stewardship. It’s teaching a holistic care and concern for all of God’s creation. And I think in modern society, in modern Western society, what we’ve done, we’ve essentially made the error of separating the sacred from the secular. So, we sort of go to work from Monday to Friday and earn money, and on Sunday we go to church. And I think in the way we manage our money, we’ve also replicated that. So, we invest our money in some fairly rapacious places which don’t do any good for God’s creation, and then we give it away to charities to try and repair some of that damage. And I think that what the Bible teaches us is a much more holistic approach, and I think that’s really where this is rooted.

And where did you come to this sense of integration, of stewardship? You’re Christian yourself, James?

Yes. And, I mean, it was really from that point, I suppose. I was raised by my parents as a Christian. The thing that I suppose I rebelled against, in terms of my faith, was what I perceived to be a sacred-secular divide, and a certain level of dissonance between what I heard on a Sunday and what I saw going on from Monday to Saturday. And that troubled me. And so, when I started my career, I assumed, as a Christian, that one would integrate spiritual and social considerations along with material considerations. But I learned quite quickly that’s not how the economy was structured, and that’s not how I was trained to think. And that troubled me, I never really accepted it, and I’ve been rebelling against it ever since.

And what does that look like? How did your faith actually work itself out in work? When it came to COOK, for example. Those spiritual or social considerations. Did you take some decisions early on that were influenced by your faith?

Well, another thing, I suppose, is that, you know, I never really believed in creating– in the spirit of not creating a sacred-secular divide, of creating this kind of ghetto, where Christians exist in isolation of the real world. And so, when we were designing the business, we were trying to design a good business. You know, we weren’t– It wasn’t a theocracy. We were deliberately trying to create something which was inclusive, so the faith, if you like, was intrinsic rather than extrinsic. And that’s definitely something which I feel quite strongly about. I think Christians pursuing their faith, and expressions of their faith in the world, is a very powerful thing. I think when they decide not to do that in the world, and to do that behind closed doors, amongst other Christians, it becomes rather exclusive and can be counterproductive. So, that’s one of the reasons I like working with secular movements, like the B corporation movement, which is not a faith movement. There’s plenty of people of faith in the B corporation movement, but it’s not a faith movement in itself, and I think that that’s sort of my theory of change.

James, you’ve worked on both the retail side and also the investment side of the social economy, with a firm called Panahpur. Could you tell us a little about what Panahpur does, and how that’s also influenced by the sort of integrated principles of faith and life that you describe?

Yes. So, Panahpur was founded in 1907, by Col. Sydney Long Jacob, who was an engineer in the Indian Raj. And he came across some orphans by the side of the road, who weren’t being cared for, and he created a community, and raised them in this community, and gave money, and the village was called Panahpur, which means “place of refuge” in Hindi. And over the years, he and his descendants gave money into this foundation called Panahpur, and when I became involved 15 years ago, it was operating as a very old fashioned grantmaking charitable trust. It had financial assets in the city that were maximizing their financial returns, and then it was using the income to give grants to alleviate social distress. And it seemed to us that that was completely insane. The evidence was suggesting that the financial markets were leading to increased inequality, and that the social distress that the charities we supported were seeking to alleviate was, in large degree, caused by some of the injustices institutionalized in the current financial system. So, our capital was supporting something which we were then using the income to try and alleviate. And that didn’t seem to us to be a very efficient way to go about things. So, we said, “Instead, why can’t we use our capital, our firepower, for our purpose?” which was to support excluded and vulnerable people. And doing that, we started looking for investments which had a positive social and environmental impact.

A lot of social funds seem to be limited by the appetite of the investor community to take lower returns. Is that the case at Panahpur, or do you think that this has, again, a broader relevance for investment funds and investors?

On one hand, there’s been an ethical investment movement, which is based on negative screens. So, “I don’t want to invest in tobacco, or porn, or arms, or alcohol.” Those kind of things. And that’s effectively an exclusion of those stocks. Now, those stocks might be quite high performing stocks. You know, tobacco stocks, over the last 10 or 15 years have performed very well, and by excluding yourself from high performing stocks you then have a lower return. And that’s broadly been the experience of the ethical investment market. We’re talking about something very different here, which is a positive screen. It’s saying, “I want to invest in things that are having a positive impact on social environments, society or the environment.” And so, you might, for example, find a renewable energy public utility company, like Good Energy. Now, that stock has performed very well, because, actually, it’s on the right side of history, it’s on the right side of the policy environment, it’s on the right side of where the consumer sentiment is going. And those sorts of businesses can create a greater level of value over the long term, because sustainability is designed in, they attract better talent, because people want to work for good businesses more and more. And there are a bunch of reasons why those sort of businesses are pretty well set up to thrive in the longer term. So, I think this is a kind of whole new approach to investment, which is still very immature, but it’s not necessarily a trade off between doing well and doing good.

Do you think that it is just the passage of time that we’ll see the social investment market expand, and the B corporations and social economy expand, or are there particular blockers in the path of trying to do well and do good?

I mean, I think both. There are certainly blockers. Not all things can provide a risk adjusted return. That’s why there are substantial policy interventions from government to try to overcome the asymmetry between the supply of capital and the demand for capital amongst part of the social economy, and that’s quite right. That’s how it should be. So, you know, a lot of charities wanting to raise capital to support bids for contracts to government, the upside might not cover the potential downside, and there’s some sort of subsidy required. And that’s part of developing that side of the market. And then there’s the non-asset locked side, and the mission-led business side, where those constraints don’t necessarily exist. So, I think there are blockers, but I think the broad overview is that the worm has turned, and shareholder capitalism, short term-ism, all those sorts of things, are unsustainable, and I think more and more people realize that. And younger people, the millennials, coming through into leadership just don’t want to see that persist.

On that optimistic note, James, thank you so much.

Pleasure. Thank you.

Faith in Action – Episode 9

Luke Hoare – Faith in the Military

Transcript:

Sargeant: Hello, I’m Richard Sargeant, and this is Faith in Action, a podcast about how faith affects how we live and work today. It’s said that there are no atheists in foxholes, but while everyone wants God on their side in battle, there’s always been an uneasy relationship between earthly force and divine direction. With me to explore the role of faith in the military is Major Luke Hoare of the Army Air Corps, who has served in Iraq and Afghanistan.

Luke, there’s a rich history of connection between faith and the military, from medals with crosses on them, to hymns like “Onward, Christian Soldiers”, to priests accompanying troops into battle from the Old Testament onwards. Is faith still relevant in the military, or is it a relic of the past?

Hoare: I think it’s very relevant, and soldiers certainly feel it to be relevant. I think there are two angles from which I would look at this. The first is that, as you correctly identified, every single modern professional army has a relationship with religion. I think that’s because the immediacy of your job means that you are more likely to experience the extremes of life where you also meet religion: births, deaths, funerals, so on and so forth, and the rituals around those help as a coping strategy. But I think there’s probably more to it than that. I was thinking this morning, the first person to recognize Jesus [after he had died] was a centurion at the foot of the cross. He said, “Behold, this is the king of the Jews,” and soldiering and religion have a pretty healthy relationship with each other. There’s nothing irreligious or, indeed, immoral about being a soldier, and the best soldiers I know are the most moral. They have a set of values, and a lot of our values come from a rich Christian tradition.

Sargeant: You mentioned Jesus and the centurion, but the Gospels seem to present an ethic of suffering service and non-retaliation…

Faith in Action – Episode 8

Krish Kandiah – Faith in the Family

Transcript:

Richard: Hello, I’m Richard Sargeant and this is Faith in Action, a podcast about how faith affects the way we live and work today. The family is a haven in a heartless world, but do the faithful have a distinct vision of how to create that refuge? With me to discuss this is Krish Kandiah, founder and director of Home for Good. Krish, welcome.

Krish: Thank you. Nice to be here.

Richard: What is Home for Good?

Krish: Home for Good is a movement of people passionate about making sure the most vulnerable in our society get the love and attention that they need. Currently, in the UK, there are around four thousand children waiting for adoption. They’re often not babies. They’re older children, siblings, children with additional needs, children from black and minority ethnic backgrounds. And we’re also short of foster carers to the tune of 8,600. We believe that fostering and adoption are a fantastic contribution that you can make to the life of a child. A lot of people tell me they’re interested in justice and passionate about it, so I say, well, here’s a real way that you can make the rubber hit the road. Open up your home. Open up your family and let’s welcome children that need loving parents in their lives. Let’s welcome them into our households.

Richard: That sounds fantastic. I don’t know very much about fostering or adoption – you said that there are a lot of people waiting – is that something that’s gone up in recent years?