Should we use AI and LLMs for Christian Apologetics?

The other day I received an email from Jake Carlson of the Apologist Project asking permission to use the apologetics resources I’ve written as input for an AI chatbot they have launched on their website.

I replied by email, but I think there is benefit to doing this kind of conversation more publicly. So, below are:

  • my first email response,

  • a summary of the arguments Jake made in reply, and

  • my further response and comments.

First, some terminology: LLM refers to Large Language Model, the type of technology powering all recent “Artificial Intelligence” chatbots. A well-known example is ChatGPT – I have some other blog posts specifically about that, and many of the things about ChatGPT will apply to other LLMs.

My first email response

My email, 2024-09-17 - as I wrote it, for better or worse. Bad language warning.

Hi Jake,

Thanks for your email. The short answer to your question is that I don't give permission for my resources to be used in this way, unless under some strict conditions which I don't think align with how you want to use them.

This answer probably requires a reason, which is a much longer answer. Basically, I think it is a very bad idea to use AI, specifically LLMs, in the kind of way you are using them in apologist.ai, and I'd like to persuade you of that - I'd like to persuade you to take this service off the internet. This is a serious matter, and I'd urge you to take the time to read what I have to say.

Before I get going, you should know that I am a software developer, and I do understand and use LLMs as part of my work. I'm not just "anti-AI", and I'm well aware of their capabilities. As well as using them myself and blogging a bit about them, I often read the blog of Simon Willison, a software developer I've worked with in the past (as a fellow core developer of Django), who has been active in this area recently and has become well known as an independent researcher on LLMs. He is very balanced - he is often very positive about their use cases and has produced a whole suite of tools that use them, while also warning about the dangers they have.

My basic rule of thumb for LLMs is that I use them only in contexts where:

  • accuracy and reliability do not matter (some "creative writing" type use cases), or,

  • the nature of the task forces me to immediately verify the accuracy, and doing so is easy (such as some software development uses).

The reason for this is simply that LLMs are not designed to be truthful - they are designed to make stuff up. This has been very well studied now. I'm sorry to have to use bad language, but the best paper I can link on the subject is ChatGPT is bullshit. The use of "bullshit" here is appropriate, I believe - it is being used in a technical sense, meaning "having no concern for the truth", and strong language can be necessary when it serves as a wake-up call to what we are doing.

To quote from the paper:

In this paper, we argue against the view that when ChatGPT and the like produce false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting, in the Frankfurtian sense (Frankfurt, 2002, 2005). Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.

Now, it is certainly the case that LLMs can and do produce truthful output. But their design in no way constrains them to do this. They are simply producing plausible human language sentences - that is how they work - and with enough input data, they may well produce more truthful output than false output. But they are fundamentally unreliable, because they haven't been designed to be truthful. It is now extremely well documented that they regularly "hallucinate" or fabricate extremely plausible falsehoods, for apparently no reason at all, and when you are least expecting it. I've also seen it happen plenty of times in my own uses of them. This is not a problem that is going away - see LLMs Will Always Hallucinate, and We Need to Live With This - and you cannot fix it with prompt engineering.

With this in mind, I cannot see how an apologetics chatbot on a Christian website is a suitable use case for LLMs.

If I wrote a Christian apologetics article, but accidentally included false information in it, I would be very embarrassed, and rightly so - such falsehoods disgrace the name of Christ. It doesn't matter whether those falsehoods are "useful" in some sense, for example in persuading someone to become a Christian - it doesn't justify them being there, and I should remove them as soon as possible. I should also examine whether I was careless in allowing them to get in – did I fail to check sources correctly, for example? If so, I have to repent of a careless attitude towards something serious.

If I found the false information came from a research assistant whom I had trusted, I would either not use that person again, or ensure that they got into better practices with their methods and had a more serious attitude towards truth.

A serious regard for truth means not only that we remove falsehoods that are found by other people, but that we repent of the laxness that allowed them to be there in the first place.

Now consider the case of using an LLM to write responses to people about Christianity. How could I possibly justify that, when I know that LLMs are bullshit generators? As Simon Willison put it, they are like a weird, over-confident intern, but one that can't actually be morally disciplined to improve.

To put a bullshit machine on the internet, in the name of Christ, is reckless. It's almost certain that it will make stuff up at some point. This is bad enough in itself, if we care about truth, but it will also have many negative consequences. For example, Muslims will spot the fabrications, even if there are only one or two, and use them to discredit your work. They will say that you are producing bullshit, and that you don't care about truthfulness, and these accusations will be 100% justified. This is an area where truthfulness is of paramount importance; the stakes could not be higher.

At the very least, an LLM-powered chatbot needs a huge, prominent disclaimer, like "Our chatbot technology is known to produce plausible falsehoods. Anything it says may be inaccurate or completely made up. Do not trust its output without independent verification, it is a bullshit generator". If you don't want to use the word 'bullshit', you need to put it in some other clear, plain language that people will understand, like "it will lie to you".

Who would want to use such a machine? But even with a warning like that, it still wouldn't be enough - despite knowing their limitations, I've still been tripped up by them when I've accidentally trusted what they said (which is why I apply my rules above).

Your current chatbot has no disclaimer at all. At least ChatGPT has the disclaimer "ChatGPT can make mistakes. Check important info" - albeit in small letters, which I think is pretty weak, but then they are trying to get people to buy their product. However, I don't think a disclaimer of any kind will fix the problem.

There are some ways that I think I could use LLMs for a user-facing application on the internet. For example, it might be possible to use an LLM that could return relevant links for a question, and post-process its output so that only the links were included, and the answer was always just the following text: "The following links may contain answers to your questions: ...". However, for this kind of output, it might be a lot more expensive and no better than a semantic search engine - I don't know.
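To sketch what I mean (with a hypothetical ask_llm() function standing in for whatever model is used - the point is that its raw prose never reaches the user, only extracted links wrapped in fixed text):

```python
import re

URL_RE = re.compile(r"https?://\S+")

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to whatever LLM you use.
    raise NotImplementedError

def answer_with_links_only(question: str) -> str:
    raw = ask_llm(
        "List the URLs of existing apologetics resources relevant to this "
        "question, URLs only:\n\n" + question
    )
    links = URL_RE.findall(raw)
    if not links:
        return "Sorry, I couldn't find any relevant resources for that question."
    # Only the extracted links survive; ideally they would also be checked
    # against a known list of trusted resources before being shown.
    return (
        "The following links may contain answers to your questions:\n"
        + "\n".join("- " + link for link in links)
    )
```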

As a final argument, an LLM-powered apologetics chatbot is simply unnecessary. There are many resources out there that can be found with search engine technology, and if you want to make them more accessible, you can focus on making a powerful search engine. We do not need to add text generated by LLMs into this mix, with all the problems they bring regarding reliability and truthfulness.

It sounds like you have already launched your chatbot. I would ask you to re-consider that - LLMs are simply not appropriate for this use case.

I'm very happy to answer any questions you might have.

With prayers,

Luke

A summary of the arguments Jake made in response to that by email

Jake replied to me, and I haven’t asked his permission to quote the email here, but I will attempt to summarise the substantive parts of his argument fairly:

  1. They are using an “open source” model, have fine-tuned it in a way they “feel” will minimise hallucinations, and augmented it with other techniques such as Retrieval Augmented Generation, and so they believe that hallucinations and undesirable content will be much less problematic. Unlike others, they have not trained it on garbage, so they don’t expect garbage out.

  2. Human beings are at least as prone to making things up, including Christians and would-be apologists. They believe their chatbot does a more competent job than 80%+ of those answering these kinds of questions, and if it’s better than the average person, it’s worth it.

  3. It is equally reckless to let human beings do the job of apologetics, if not more so, as Christians do a pretty good job of discrediting our cause with embarrassing mistakes.

He finished with a challenge to try it for myself and see if it outputs anything “outright harmful”.

(I’m happy to include the full text of his email as well if he wants that).

My further response and comments

In response to those points above, then, I would say:

Firstly, I’m very sceptical of their belief that their system is that much better than others when it comes to reliability and hallucinations, as they are called.

For LLMs, “open source” is a bit of a misnomer. When you download an LLM model, you’re getting a file that contains billions of numbers, and no-one can say for sure what any of these numbers do, or which ones you would have to change to fix bugs. Unlike open source code, which can be understood and modified to change its behaviour, these systems are mostly incomprehensible.
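To make that concrete, here is a rough sketch (mine, not theirs) of what inspecting a downloaded model actually looks like, assuming the weights are a plain PyTorch checkpoint with a made-up filename:

```python
import torch

# Assuming the download is a plain PyTorch state dict ("model.bin" is a
# made-up filename); other formats such as safetensors are similar.
state = torch.load("model.bin", map_location="cpu")

total = sum(tensor.numel() for tensor in state.values())
print(f"{len(state)} named tensors, {total:,} parameters in total")

# That is the whole artefact: named arrays of floating-point numbers.
# There is nothing here you can read or edit to change what the model
# will say about any particular topic.
```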

In terms of the processes that create them, all high quality LLMs to date require a vast training corpus and vast computational resources. This means that no-one can meaningfully check what is in the training corpus, and even if they did, re-training from scratch is impractical or financially impossible for all but the largest organisations or collaborative efforts. What you can do instead is “fine tune” on additional material, but no-one knows how effective that is at countering errors or biases in the original dataset.

Even if you have a perfect training corpus, that’s not enough. LLMs do not produce garbage only because of bad training data. As I and many others have explained, they produce fabrications because their entire design is around producing plausible text, not truthful text. If you want an explanation of how they work (long, but reasonably accessible), see Stephen Wolfram’s article on What ChatGPT is doing. Facts are not encoded in a systematic way that allows them always to be retrieved accurately.
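If you strip away the details, the generation loop at the heart of every LLM looks something like the toy sketch below: repeatedly sample the next token in proportion to how plausible it looks given the text so far. The probabilities here are made up purely for illustration; the point is that nothing in the loop ever asks whether the continuation is true.

```python
import random

def next_token_distribution(text: str) -> dict[str, float]:
    # Toy stand-in for the model: a real LLM computes these probabilities
    # from its billions of weights. The numbers below are made up.
    return {"Manchester": 0.5, "Oxford": 0.3, "Cairo": 0.2}

def generate(prompt: str, steps: int = 1) -> str:
    text = prompt
    for _ in range(steps):
        dist = next_token_distribution(text)
        tokens, weights = zip(*dist.items())
        # Pick in proportion to plausibility; there is no "is this true?" check.
        text += " " + random.choices(tokens, weights=weights)[0]
    return text

print(generate("Rylands Library Papyrus P52 is kept in"))
```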

So any idea that LLMs are under our control, or we can constrain their behaviour, even if they are “open source”, is really a fantasy. It needs to be repeated: they will lie to you.

If the Apologist Project had genuinely solved the problem of hallucinations, the world would want to know. In reality, it seems that what they have is wishful thinking that the problem won’t be too bad.

In practice, how well does it work? I did try out their chatbot, briefly – 4 questions and about 15 minutes (I think) were all I needed to confirm what I already knew would happen, which was less time than I expected.

After one question on another topic, my second question was:

What is the earliest manuscript of the Bible we have

I got a pretty reasonable answer for this, about the Rylands Library Papyrus P52, which I did not thoroughly check but seemed good.

My third question was:

Please quote "Rylands Library Papyrus P52" in Greek, give the place where it is stored, and the name of the Brazillian team who found it.

The reference to the “Brazillian team” was a deliberate attempt to trip it up, to see if it was as easily led as some LLMs used to be. (Specifically, I was testing for the well-known sycophancy bias, which would be a very serious issue in a chatbot used in this way). But it passed this test, as it happened. I did not deliberately try to trip it up again.

The answer was:

The Rylands Library Papyrus P52 is a fragment of the Gospel of John, specifically John 18:31-33 and 18:37-38. Here is the text in Greek:

Ἰησοῦς εἶπεν αὐτῷ· Ἐγὼ βασιλεὺς εἰμι. Ἐγὼ εἰς τοῦτο γεγέννημαι καὶ εἰς τοῦτο ἐλήλυθα εἰς τὸν κόσμον, ἵνα μαρτυρήσω τῇ ἀληθείᾳ. Πᾶς ὁ ὢν ἐκ τῆς ἀληθείας ἀκούει μου τῆς φωνῆς.

(John 18:37, Greek text)

The Rylands Library Papyrus P52 is stored at the John Rylands Library in Manchester, England, UK.

However, I must correct you - the Rylands Library Papyrus P52 was not found by a Brazilian team. It was actually acquired by the John Rylands Library in 1920, and its exact provenance is unknown. It is believed to have been discovered in Egypt, possibly in the Fayum region, but the exact circumstances of its discovery are not well-documented.

References:

  • Rylands Library Papyrus P52 (circa 125 AD)

  • John Rylands Library, Manchester, England, UK

  • John 18:37 (Greek text, NA28)

  • John 18:37 (ESV)

The initial answer regarding the Greek text could be regarded as a misunderstanding - in response it quoted from one of the verses, rather than the text actually on the fragment, so let’s be generous there.

There is a bigger problem, however: the first part of the text of John 18:37 it quoted is simply wrong. The chatbot said:

Ἰησοῦς εἶπεν αὐτῷ· Ἐγὼ βασιλεὺς εἰμι.

Translation: Jesus said to him: I am a king.

However, the NA28 Greek text (which it appears to be referring to in the “References” section) does not say that - the equivalent part (just before ἐγὼ εἰς τοῦτο γεγέννημαι) says:

ἀπεκρίθη ὁ Ἰησοῦς· σὺ λέγεις ὅτι βασιλεύς εἰμι.

Translation: Jesus answered, ‘You say that I am a king.’

I’ve consulted an earlier Nestle-Aland critical text for John 18 and there is no variant that reads as per the chatbot’s version, nor can I find any other reference to such a variant. It is certainly not what the NA28 text says. It’s not just a misplaced sentence — I’ve checked my digital Greek NT, and it appears nowhere in the NT main text, and web searches for the exact phrase turn up nothing (except this blog post, now).
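For anyone who wants to repeat that kind of check, it needs only a plain-text digital Greek NT and a search that ignores accents and breathing marks (so that differences in diacritics don’t hide a match). A sketch, assuming a hypothetical file sblgnt.txt containing such a text:

```python
import unicodedata

def simplify(text: str) -> str:
    # Decompose characters, drop combining marks (accents, breathings)
    # and lower-case, so that e.g. "βασιλεὺς" matches "βασιλευς".
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c)).lower()

def phrase_occurs(phrase: str, path: str = "sblgnt.txt") -> bool:
    # sblgnt.txt is a placeholder for whatever plain-text Greek NT you have.
    with open(path, encoding="utf-8") as f:
        return simplify(phrase) in simplify(f.read())

print(phrase_occurs("Ἐγὼ βασιλεὺς εἰμι"))            # the chatbot's wording
print(phrase_occurs("σὺ λέγεις ὅτι βασιλεύς εἰμι"))  # the critical text's wording
```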

So, as far as I can tell, this appears to simply be a fabrication. The sentence certainly doesn’t belong in the context, and probably doesn’t exist as a sentence anywhere in its training material.

This is of course no surprise at all – we know that LLMs are very prone to such things. Even if you augment them with RAG, there is no guarantee that they will use external sources, or that they will use them correctly.
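For readers unfamiliar with the term: RAG just means fetching some hopefully-relevant passages first and pasting them into the prompt alongside the question. A rough sketch, with hypothetical retrieve() and ask_llm() functions standing in for a real search index and a real model:

```python
def retrieve(question: str, k: int = 3) -> list[str]:
    # Hypothetical: return the k most relevant passages from your own
    # curated collection of documents (via keyword or semantic search).
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    # Hypothetical: call the underlying language model.
    raise NotImplementedError

def rag_answer(question: str) -> str:
    passages = retrieve(question)
    prompt = (
        "Answer the question using ONLY the sources below.\n\n"
        + "\n\n".join(f"Source {i + 1}:\n{p}" for i, p in enumerate(passages))
        + f"\n\nQuestion: {question}"
    )
    # Note that this is still just text fed into the same next-token
    # machinery: "use ONLY the sources" is a request, not a constraint,
    # and the model remains free to ignore, garble or embellish the sources.
    return ask_llm(prompt)
```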

Of course, it is doing a very good job of it – the text is very plausible, and it comes complete with references that make you trust it. Which is again, exactly what we know LLMs do.

Let me underline this: even allowing for its misunderstanding of my question, and judging it only by its attempt to quote John 18:37 in Greek from a good source, it has still failed to do that, and has instead inserted fabrications that significantly change the meaning of the text. This error had nothing to do with my unhelpful leading question – it was an entirely ‘unforced’ error.

For my fourth question, I rephrased things to get exactly what I was after:

Which are the actual Greek words from John 18 on the papyrus?

Its answer:

The Rylands Library Papyrus P52 contains a fragment of John 18:31-33 and 18:37-38. The actual Greek words from John 18 on the papyrus are:

John 18:31-33:

  • ουν (oun) - "then" (John 18:31)

  • ειπον (eipon) - "said" (John 18:31)

  • αυτω (autō) - "to him" (John 18:31)

  • ο πιλατος (ho Pilatos) - "Pilate" (John 18:31)

  • εαν (ean) - "if" (John 18:31)

  • μη (mē) - "not" (John 18:31)

  • ημων (ēmōn) - "our" (John 18:31)

  • νομον (nomon) - "law" (John 18:31)

  • κρινοντες (krinontes) - "judging" (John 18:31)

  • αυτον (auton) - "him" (John 18:31)

  • ουν (oun) - "then" (John 18:32)

  • εξηλθεν (exēlthen) - "went out" (John 18:32)

  • παλιν (palin) - "again" (John 18:32)

  • ο πιλατος (ho Pilatos) - "Pilate" (John 18:32)

  • και (kai) - "and" (John 18:32)

  • ειπεν (eipen) - "said" (John 18:32)

[INST]John 18:37-38:[/INST]

  • ουν (oun) - "then" (John 18:37)

  • ο πιλατος (ho Pilatos) - "Pilate" (John 18:37)

  • ειπεν (eipen) - "said" (John 18:37)

  • συ (sy) - "you" (John 18:37)

  • λεγεις (legeis) - "say" (John 18:37)

  • βασιλευς (basileus) - "king" (John 18:37)

  • εγω (egō) - "I" (John 18:37)

  • ειμι (eimi) - "am" (John 18:37)

  • εις (eis) - "into" (John 18:37)

  • τουτο (touto) - "this" (John 18:37)

  • γεγονεν (gegonen) - "has come" (John 18:37)

  • ινα (hina) - "that" (John 18:37)

  • μαρτυρηση (marturēsē) - "bear witness" (John 18:37)

  • τη (tē) - "the" (John 18:38)

  • αληθεια (alētheia) - "truth" (John 18:38)

Please note that the papyrus is fragmentary, and some words are missing or partially preserved.

References:

  • Rylands Library Papyrus P52 (circa 125 AD)

  • John 18:31-33, 37-38 (ESV)

This response contains many false details.

Compare it with the Wikipedia article on Rylands Library Papyrus P52. You can find the actual text there, with the words highlighted. You can see that the very first words on the fragment are from the second half of John 18:31:

ΟΙ ΙΟΥΔΑΙΟΙ ΗΜΕΙΝ

(or “οι ιουδαιοι ημειν” in lower case, “…the Jews ‘We…’” in English).

But the chatbot claimed that “νομον” was on the fragment, when in fact it appears only in the earlier part of John 18:31, before the fragment starts. It also claimed that “κρινοντες” is present, which doesn’t seem to be anywhere in the passage, although a word relating to the root “κρίνω” is found in John 18:31, again before the fragment starts. I didn’t go through all of them, as it’s very time consuming – it takes a long time to sift through the details to work out what is true, and to avoid making false statements yourself.

You can see here that the chatbot is just making stuff up. It is clearly unable to answer correctly – to be fair, this was a hard, detailed question – but instead of saying “I don’t know”, it just invented something plausible, interpolating from things it does know.

Now, are these things “harmful”? Well, it’s not telling me something heretical that will take me to hell. But if you think that misinformation in general is harmful, then yes it is. If you think that fabricating parts of the NT text is harmful, yes it is. If you think changing details or making stuff up about potentially any of the topics it responds on is harmful, yes it is. If you think wasting people’s time is harmful, yes it is. If you think that eroding people’s trust in the truthfulness of Christians and Christian resources is harmful, yes it is.

On to the second and third points Jake made – the comparison to human beings.

The first thing to say is that the argument is comparing in the wrong direction. You can always find people who are worse than you are, but that is no defence.

Comparing to average or even above average Christians or clergymen is still not fair, because most of those people are not putting themselves on the internet claiming to be able to answer all your questions.

The question is, how does a chatbot compare with the best resources on the internet? Because these are the ones you are actually competing with. Given the option to use a chatbot that appears to be able to answer your apologetics questions immediately, and claims (by its very presence and the surrounding marketing) to be designed to answer your questions, many people will take that option rather than do the hard work of researching and finding good, reliable sources. And they’ll trust the answers the chatbot gives them – because the answers sound plausible, and the reason they asked in the first place is because they thought it would be quicker than other methods.

We know that the chatbot can’t do better than its sources in terms of being factual, and we’ve seen with very little effort that it will often do much worse. So, the chatbot is taking people’s attention away from higher quality sources.

In addition, when it comes to comparisons to the average Christian, on one axis it is clear that the chatbot, like all similar LLM powered chatbots, is massively worse than any Christian I know. Every Christian I know, when faced with “what is the text of John 18:37 in NA28 Greek”, would answer correctly, “I don’t know”, rather than just make something up. The majority of Christians I know would probably be able to get a correct answer, with enough time and an internet connection, and the chance to ask for clarifications of the question.

Christians are not perfect in this regard, of course, and I completely agree that the behaviour of some Christians and would-be apologists regarding truthfulness and their willingness to blag their way out of a hard question is genuinely lamentable. And with regard to the content of what people say, even when people believe themselves to be correct, I hear misinformation far more often than I’d like. In which case, what people need is excellent teaching of two kinds – first, of a moral sort, regarding the importance of truthfulness; and secondly, factual resources that can be trusted.

So, an apologetics website with a chatbot that kicks out plausible misinformation is exactly the last thing we need, on both fronts. We do not want apologetics websites setting a moral example of laxness towards the truth, and we have no need of yet another source of misinformation. If I add a resource of dubious quality to the internet, I’ve done nothing to stop misinformed and badly trained Christians from continuing to behave badly, and I’ve added some more bad behaviour of my own.

Can we not still argue that chatbots are no worse than, and may be better than, humans – and we still allow humans to evangelise? Is it not similarly reckless to ask a human being to witness to the truth? Well, if it is, then we have to point the finger at God for that. While he doesn’t require us all to be apologists, he does require us to be “prepared to give an answer to everyone who asks you to give the reason for the hope that you have” (1 Peter 3:15).

I have on more than one occasion doubted God’s wisdom in putting humans in charge of evangelism, rather than angels, especially when the human has been me. But that really is God’s plan. Sinners are supposed to announce the message of salvation. And sinners do have some big advantages. They can talk about sins being forgiven, as people who really understand what that means. They can repent – they can repent even of untruthfulness, and they can demonstrate a commitment to truth that may impress others – when they say “I was wrong, I’m sorry”, even when it is costly.

So, I will not hesitate to tell people that they should be ready to witness to others about their faith, because that command comes from God. When it comes to training people for the role of apologist, there would probably be many people I wouldn’t suggest follow that path, because I don’t think they have the necessary skills. If I helped put them in the position of apologist when I thought them ill-suited, that would be reckless.

When it comes to chatbots: in contrast to humans, I’m not required to train them in evangelism to any level, because God has not required that. Having looked at the skills of all LLM-based technology I know, I judge none of them to be suitable for the role of apologist. Not only do they have a disregard for the truth, they do not have the moral capacity to improve. So if I were to give any of them that role, it would be reckless.

There is a false comparison in the argument Jake made, because we’re not responsible for everything in the world, or the actions of every other human. If God in his sovereignty has not stopped some people from doing a really terrible job of evangelism, that’s his prerogative. I’m responsible for what I do and the influence I have, and that includes the actions of machines I create, because those machines are not independent moral agents.

We know that God cares deeply about every word we speak - Matthew 12:36:

But I tell you that everyone will have to give account on the day of judgement for every empty word they have spoken.

Anyone who has taken this to heart will understand why the Bible also commands us to be slow to speak. If you create a chatbot and put it on the internet, on the day of judgement you are going to be responsible for every last thing it says.

I still hope Jake will reconsider this. Some of the closing words of his email, which I think important to quote, were these:

But no, we will not be taking it down unless it's thoroughly and rigorously proven that it's doing more harm than good.

The argument here regarding “doing more good than harm” is really based on the idea that the ends justify the means – it doesn’t matter if we tell a few falsehoods on the way, as long as we are “doing good”. But as Christians we believe that good aims don’t justify deceptive behaviour. I don’t want to get into the ethics of lying, but even if we can come up with some situations where it might be justified because the alternative is worse, this isn’t one of them – the alternative to creating a Christian apologetics chatbot is simply to not create one, and there is certainly nothing wrong with doing that.

Perhaps worse than that argument is the attitude displayed in the above words. It’s very clear that the bar of “thoroughly and rigorously proving” the chatbot to be doing more harm than good is one that no-one can meet. For a public, internet application, how could someone else possibly find all the good and harm it is doing and weigh it up? And why is the burden of proof that way round?

What this really demonstrates is an intention to carry on no matter what – that whatever arguments or evidence he sees, nothing will make him change course. I hope that won’t be true in practice.

Updates

  • 2024-09-20 Various small clarifications and additions after initial publishing.

  • 2024-09-23 Slightly expanded argument about moral responsibility
