
73: Consequences of Language (with Nick Enfield and Morten Christiansen)

When language was innovated, what happened next? How did it change our abilities — and our responsibilities — to each other? Dr Nick Enfield shares ideas from his new book, Consequences of Language.

Plus: Have large language models (like GPT) disproven a key argument for the innateness of language? Dr Morten Christiansen takes us through the implications for nativism and language learning.



Patreon supporters

Huge thanks to all our great patrons! Your support means a lot to us. Special thanks to:

  • Iztin
  • Termy
  • Elías
  • Matt
  • Whitney
  • Helen
  • Jack
  • PharaohKatt
  • LordMortis
  • gramaryen
  • Larry
  • Kristofer
  • AndyB
  • James
  • Nigel
  • Meredith
  • Kate
  • Nasrin
  • Joanna
  • Ayesha
  • Moe
  • Steele
  • Margareth
  • Manú
  • Rodger
  • Rhian
  • Colleen
  • Ignacio
  • Sonic Snejhog
  • Kevin
  • Jeff
  • Andy from Logophilius
  • Stan
  • Kathy
  • Rach
  • Felicity
  • Amir
  • Canny Archer
  • O Tim
  • Alyssa
  • Chris

And our newest patrons:

  • At the Listener level: Erik
  • and EvgenSk, who bumped their pledge up to the Listener level.

Become a Patreon supporter yourself and get access to bonus episodes and more!


Show notes

17 Squares In A Larger Square | Know Your Meme
https://knowyourmeme.com/memes/17-squares-in-a-larger-square

Aristophanes, Clouds. LCL 488: 94-95
https://www.loebclassics.com/view/aristophanes-clouds/1998/pb_LCL488.95.xml

How Humans Went From Hissing Like Geese To Flipping The Bird
https://www.atlasobscura.com/articles/how-humans-went-from-hissing-like-geese-to-flipping-the-bird

Meaning and Origin of the Phrase ‘To Get The Bird’ | Word Histories
https://wordhistories.net/2017/01/21/get-the-bird/

Canada rules that flipping the middle finger is a ‘God-given’ right
https://www.npr.org/2023/03/10/1162629535/canada-flipping-middle-finger-ruling-god-given-right

Judge rules the F-word has officially lost its shock value in the workplace
https://metro.co.uk/2023/01/30/judge-rules-the-f-word-has-officially-lost-its-shock-value-in-the-workplace-18187786/

OMG! Is swearing still taboo?
https://www.theguardian.com/science/2023/feb/09/is-swearing-still-taboo

Is the c-word offensive? Court rules on sandwich board referencing Tony Abbott
https://www.sbs.com.au/news/article/is-the-c-word-offensive-court-rules-on-sandwich-board-referencing-tony-abbott/hxab6ay0q

This Is Really, Really, Fuckin’ Brilliant! | Slate
https://slate.com/news-and-politics/2008/03/this-is-really-really-fuckin-brilliant.html

Using the F-word in PG-13/12A movies
https://www.imdb.com/news/ni45115340

“Fck” | BoJack Horseman Wiki
https://bojackhorseman.fandom.com/wiki/%22Fck%22

Sour fight ends with FDA ruling soy and nut milks can still be called “milk”
https://arstechnica.com/science/2023/02/almond-milk-can-keep-its-name-despite-lack-of-lactation-fda-says/

Planet Money, Episode 399: Can You Patent A Steak?
https://www.npr.org/sections/money/2015/04/22/401491625/episode-399-can-you-patent-a-steak

Synthetic Milk That Promises to Fight Climate Change
https://www.bbvaopenmind.com/en/science/scientific-insights/synthetic-milk-promises-fight-climate-change/

Synthetic Milk Is Coming, And It Could Radically Shake Up Dairy
https://www.sciencealert.com/synthetic-milk-is-coming-and-it-could-radically-shake-up-dairy

ANU Press | Something’s Gotta Change: Redefining Collaborative Linguistic Research | Authored by: Lesley Woods
https://press.anu.edu.au/publications/series/asia-pacific-linguistics/somethings-gotta-change

Large Language Models Demonstrate the Potential of Statistical Learning in Language | Cognitive Science
https://doi.org/10.1111/cogs.13256

Morten H. Christiansen
https://psychology.cornell.edu/morten-h-christiansen

Google Engineer Claims AI Chatbot Is Sentient: Why That Matters
https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/#

Reali and Christiansen (2010). Uncovering the Richness of the Stimulus: Structure Dependence and Indirect Statistical Evidence
https://onlinelibrary.wiley.com/doi/abs/10.1207/s15516709cog0000_28

Enfield and Sidnell – Consequences of Language: From Primary to Enhanced Intersubjectivity | MIT Press (read online for free)
https://mitpress.mit.edu/9780262372732/consequences-of-language/

‘Tell me exactly what’s happened’: When linguistic choices affect the efficiency of emergency calls for cardiac arrest
https://pubmed.ncbi.nlm.nih.gov/28599999/

[PDF] Margaret Gilbert | Walking Together: A Paradigmatic Social Phenomenon
http://faculty.www.umb.edu/steven.levine/Courses/Action/Gilbert,%20Walking_Together.pdf

Charles Hockett: Design Features of Human Language
http://abacus.bates.edu/acad/depts/biobook/Hockett.htm

Nick Enfield’s website
http://nickenfield.org

Graupel Isn’t Snow, Nor Sleet, Nor Hail, So What the Heck Is It?
https://science.howstuffworks.com/nature/climate-weather/atmospheric/graupel.htm

@Logophilius on Mastodon: “Last year we had ‘quiet quitting,’ so now, of course, we have ‘quiet hiring,’ a new name for an old phenomenon: https://fortune.com/2023/02/08/what-is-quiet-hiring-new-workplace-trend/ @becauselangpod”
https://indieauthors.social/@Logophilius/109904294684442401

What Is Quiet Hiring, and Should You Worry About It?
https://www.makeuseof.com/what-is-quiet-hiring/

Move over, quiet quitting. ‘Rage applying’ is the latest form of worker revenge
https://www.cbc.ca/radio/costofliving/rage-applying-1.6759642

Mike’s Mic | YouTube
https://www.youtube.com/@mikesmic

And then something we talked about but which didn’t make the episode. We still thought you might want to see it.

Nuggets: Kiwi Video Paints Addiction In Simple Moving Terms
https://www.huffpost.com/archive/ca/entry/nuggets-kiwi-video-paints-addiction-in-simple-moving-terms_n_6211766


Transcript

[Transcript provided by SpeechDocs Podcast Transcription]

DANIEL: What’s the thing that I say? I don’t have my intro written here. So I’m just gonna…

BEN: Hello, and welcome to Because Language.

HEDVIG: Yeah. Hello, and welcome to Because Language.

DANIEL: Hey, I like this. Why don’t you do it this time?

BEN: Okay.

HEDVIG: [LAUGHS]

BEN: [COUGHS] Ahem. Let’s see how a real pro gets the job done! All right. Hello, and welcome to Because Language, a weekly show about the science of language. Let’s meet the team. No. No. No, I’m not feeling it. I’m not loving it.

HEDVIG: I liked it.

BEN: Daniel?

DANIEL: But you’ve given me an onramp. So now, here I go.

BEN: All right.

HEDVIG: I thought it was good.

[BECAUSE LANGUAGE THEME]

DANIEL: Hello, and welcome to Because Language, a show about linguistics, the science of language. I’m Daniel Midgley. Let’s meet the team. She’s a linguist, but she recently rocked the mathematical world by taking the minimum space required to hold 17 squares, and she managed to pack 18 squares into it! It’s Hedvig Skirgård.

BEN: Daniel is referencing maths stuff!

HEDVIG: Yes, he is.

BEN: And I know what he means. I am furiously shaking my head right now.

DANIEL: [LAUGHS]

HEDVIG: I don’t… haven’t done the square challenge thing, but I like math. And I’ve been thinking math.

BEN: Sorry, is this the triangle… the rearrange the triangle one?

HEDVIG: No, it is a square thing. It was going viral for a while. Uh… it’s very silly.

DANIEL: It’s like, what’s the most economical way to pack in 17 squares of equal size?

BEN: Oh, right, right, right.

DANIEL: Yeah, you managed to pack in 18. How’d you do it, Hedvig?

HEDVIG: Did I do it?

DANIEL: Making the squares smaller! Okay!

BEN: Infinity Gauntlet. It’s the only rational explanation.

DANIEL: He did not rock the mathematical world when he took the minimum space required to hold 18 squares, and managed to pack 17 squares into it. It’s Ben Ainslie.

BEN: I like wiggle room. I like a little buffer zone. I like a little safety in my little pack zone. You never know when a knick-knack, a paddywhack, or a doodad will need to be incorporated, unplanned, into the stuff.

HEDVIG: I feel like I should mention that it’s like morning for me, and I am slowly rewiring my brain. We’re just lying. That’s what we’re doing.

DANIEL: Yeah. We’re to the part of the show where the intros are just straight-up lies.

HEDVIG: Okay, good.

DANIEL: It’s a bit.

HEDVIG: Okay, good. Good! Good. I was like, “Had I done a thing that I don’t know about?” or just lying. Okay, good.

BEN: Do you know what makes great podcasting even greater?

HEDVIG: Oh, well.

BEN: It’s when we just really explain everything we’re doing.

HEDVIG: [DRINKS TEA] Fuck off.

BEN: [LAUGHS]

HEDVIG: Fuck right off. There is a particular style of comedy that is very popular in AngloWorld that is just straight-up lying.

BEN: It’s true. That is true. That is a thing.

HEDVIG: It is sometimes a bit much because it doesn’t have like a setup. It doesn’t have an obvious… It’s just like, “Oh, I’m just going to lie now.”

BEN: The markers are very unclear. It is true. You absolutely… You’re absolutely bang on.

DANIEL: No paralinguistic markers.

HEDVIG: Yeah. And I’m just… Ben here is the non-linguist who’s like, “Oh, I represent non-linguists. I’m trying to make the show accessible.” I’ll represent non-Anglos. Okay?

BEN: Okay. Making the show more, like, non… Anyway, yes, I agree.

HEDVIG: Accessible is the word you’re looking for.

DANIEL: And I represent the non-young.

BEN: You do.

DANIEL: I do.

BEN: You and your boomer farming sims from the chum bucket of a website you found one time.

DANIEL: I’m three years away from boomer. I’m an elder Gen X.

BEN: [LAUGHS]

DANIEL: And I will keep having to say this throughout the show.

BEN: Fingernails stuck firmly into the precipice.

DANIEL: [SIGHS] Hey, it’s good to see you two, you three.

HEDVIG: Thank you.

DANIEL: Later on, we’re going to be chatting with Dr Nick Enfield about his new book, Consequences of Language, by Nick Enfield and Jack Sidnell. We always ask: what was it about humans that enabled us to acquire language? And we always say stuff like, “Oh, living in groups, walking on two legs, having the vocal tract that we do, finding relevance in other people’s actions, stuff like that, bigger brains.” But we don’t always ask this question: once we acquired the ability to use language, what happened then? What did that make possible?

HEDVIG: Mmmm.

DANIEL: Do you have any guesses? What do you think we’re going to find?

BEN: A lot of fighting. Like, when you have opinions you can share vocally, people are going to beef.

DANIEL: Yeah, okay, but then you don’t have physical fighting. You have verbal fighting. So, that maybe took the place of physical fighting or both?

BEN: True.

HEDVIG: I think one of the consequences of language, once it’s occurred in the human species, is the ability to plan and coordinate better. To be like, “Next Thursday — not this Thursday, the Thursday after that — we’re all meeting at this place and having a big old party because my godson has his birthday.”

DANIEL: But then, that just enabled us to say, “Uh, when you say next Thursday,” “Oh, I missed it, because…”

HEDVIG: Yeah, but still we could be like, “No, no, no. It wasn’t me you saw in that restaurant with that guy last weekend. It was my lookalike. I’m not cheating on you,” etc. We can do things.

DANIEL: Last Thursday. Yes.

BEN: I shudder to imagine — if anyone listening to this has ever tried to get a D&D party together on a regular basis, or any of those just horrific social logistics situations — if this is what we’re able to do with language, imagine how fucked it was beforehand.

DANIEL: Yeah, totally.

HEDVIG: Very Zen. Very in the now. It’s just like, “What is around me? What do I care about?”

BEN: Yeah, possibly! Yeah, it’s true. Maybe it was just way better!

DANIEL: [LAUGHS]

HEDVIG: And then, maybe like circadian rhythms, like, maybe we could do that. My cats, for example, they know that when I wake up, that is an event that is associated with certain actions.

BEN: Food!

HEDVIG: Yeah, food.

DANIEL: Maybe people, when language first started, they were like, “Ugh, I can’t keep track of this new technology.” [LAUGHTER] Well, Hedvig, I think you’re quite right…

HEDVIG: [VICTORY NOTE] Oh, yeah. Points.

DANIEL: …because one of the things about the prologue of the book is that one of the big, big consequences of language was intersubjectivity. That’s this word that keeps coming up over and over again.

HEDVIG: [WHISPERS] Intersubjectivity.

DANIEL: What it means is, for example, right now, we three have come together in an interaction.

HEDVIG: Uh-huh.

DANIEL: What we’re doing is we’re making meaning together. We’re adding knowledge together onto a shared pile of knowledge called the common ground.

HEDVIG: Okay.

DANIEL: It’s sometimes called shared intentionality or we-intention, and that’s going to be huge and I’m going to be talking to Dr Nick Enfield about it. So, I’m really looking forward to that.

BEN: I feel like there is really just a good opportunity to make a joke about the Tragedy of the Commons here, and I’m just not able to make it work. But I want everyone to know, just imagine a really good Tragedy of the Commons joke coming in here and… yeahhh.

DANIEL: We’ll just bookmark that.

BEN: Hey, listening audience, you just do the work of making the joke in your head. Thank you.

[LAUGHTER]

DANIEL: Okay.

HEDVIG: Yeah.

DANIEL: You know, our last episode was a bonus episode with newly minted speechie, PharaohKatt.

HEDVIG: Yes!

DANIEL: Yay.

HEDVIG: I’ve already put into practice something she taught me.

DANIEL: Wha… what is it?

HEDVIG: Well, she said that when you got something stuck in your throat and you want to cough, you shouldn’t, like, then drink water.

DANIEL: Yep. Yep.

BEN: ‘Cause inwards-outwards, inwards-outwards?

HEDVIG: Yeah. You’re trying to get things up and down another pipe, and putting water in that equation might not help.

DANIEL: Doesn’t work. Yep.

HEDVIG: And I have coughed a couple of times since we talked to her, and I was like, “Oh, I shouldn’t reach for water. I’m going to have my cough.” Cough, cough, cough, cough, cough.

DANIEL: Yeah. Well, she taught us how to roll our Rs as well, so that was… Which is weird because she can’t do it. Hmm. Anyway.

BEN: How fascinating.

DANIEL: So, she brought us news and words, and she answered our questions about being a speechie, including some junk science that we should steer clear of. So, if you want to hear bonus episodes the minute they come out, be a patron at the Listener level. There are other levels too, but every patron gets Discord access, where we talk about shows, we ask and answer each other’s questions, we share cat and dog pics.

BEN: That’s where cool people like PharaohKatt hang out.

DANIEL: That’s right. We also play games. So Diego is running a game right now, where he gives us movie titles as they’ve been translated into other languages, and we have to guess the movie in English. It’s really fun. [LAUGHS]

BEN: Heaps of fun… I’m getting in on that.

DANIEL: So, that’s just one perk. There are others. And it’s just a dollar a month to support the show and keep your favourite language podcast going. That’s patreon.com/becauselangpod. Thanks to all our great patrons.

BEN: Heyyyyyy, Daniel?

DANIEL: Yeahhhhh?

BEN: What’s going on in the world of linguistics for the last — however long? — Indeterminate, one to three weeks?

DANIEL: In the recent past. This one’s about flippin’ the bird. In Canada!

BEN: Pulling the finger?

DANIEL: Yup. Wait, pulling the finger is…

BEN: Just making sure we’re talking about the same thing.

DANIEL: Wait. Well, if it’s the middle finger, yes.

BEN: Right. Well, but see, I’m wondering, does flipping the bird extend to other rude hand gestures? Does it? I don’t know.

HEDVIG: I don’t think it does.

DANIEL: It doesn’t seem to.

HEDVIG: I think only Americans call it flipping the bird.

DANIEL: Well, this is a funny thing, because the reference to flipping the bird might be older than you thought. I did some digging on why we call it flipping the bird, before we even get to the news story.

HEDVIG: Okay.

DANIEL: So, in the 1600s, if you were an actor on stage and the audience didn’t like you, they might hiss, right? [MAKES A HISSING SOUND]

HEDVIG: Okay.

DANIEL: And this was known as giving someone the goose.

BEN: Oh, ’cause that’s what geese do when they’re angry at you. Cool, cool, cool, cool, cool. I’m following so far.

DANIEL: Okay. But then later on, it became known as giving someone the bird or giving someone the big bird.

HEDVIG: Okay.

DANIEL: So, there’s a link there between doing something disrespectful and tying it to birds. And it’s possible that at some point, the gesture took the name over from the hissing. But we don’t see flipping the bird… We see THE BIRD earlier, but we don’t see FLIPPING THE BIRD. You could give someone the bird, but you couldn’t flip it until about 1968.

HEDVIG: Okay.

BEN: And I’m assuming that’s to do with the actual hand gesture, right? Because you physically flip your hand over and then the finger comes up.

DANIEL: Correct. The gesture itself is way, way… I was going to get to this later if we wanted to, but what the heck? We’ll do it now. We can see the middle finger as a rude gesture as early as 419 BCE. That’s when Aristophanes wrote his play, The Clouds, where a character named Strepsiades gives Socrates the finger.

HEDVIG: Wow. And how do we know it’s that finger?

DANIEL: Because the text says — and I’m reading this from Loeb Classics — Socrates wants to teach somebody. He says, “You should recognise which rhythms are shaped for marches, say, and which by the finger,” by which he means the dactylic meter. That was called the finger. So, Strepsiades makes a joke and says, “By the finger? Well, that one I know, by Zeus.” And Socrates says, “Well, tell me then.” And Strepsiades says, “What could it be but this finger here? (raising his middle finger to Socrates).”

BEN: There we go. That’s pretty explicit! [LAUGHS]

HEDVIG: Ah, okay.

BEN: Now but…

DANIEL: But…

BEN: …is that like a weird fossil gesture and then it, like, fell out of usage and wasn’t really a thing for like 2,000 years or 1,500 years, and then it came back?

DANIEL: That might be really spot on, because when I was doing the research for this, I would find exactly two things. Number one, ancient Rome, and then number two, the 1800s, and just like this massive gap in the middle. In Shakespearean times, you wouldn’t give somebody your middle finger. You’d bite your thumb instead. And it’s not in every culture. In some places, it means somewhere between one and five. And in other cultures, it means your older brother, because it’s taller than the adjacent fingers.

BEN: [LAUGHS] Yeah, yeah, yeah.

HEDVIG: Okay. Because I really associate it with the United States of America. For me, that was like an import into Swedish culture from American culture. And we call it the fuck-you finger.

DANIEL: Right.

BEN: That’s fun.

HEDVIG: Because that’s what it does.

DANIEL: Keep in mind also that it in turn was probably imported to America by Italian immigrants somewhere along the line. The Romans called it the digitus impudicus or the unchaste finger. [LAUGHS]

BEN: Unchaste finger.

HEDVIG: Yeah, okay. Yeah. Funny.

DANIEL: Anyway.

HEDVIG: Okay, what’s the news?

DANIEL: The news is that this came up in a court case, and a Canadian judge ruled that it was not, in this case, legally actionable.

BEN: Okay.

DANIEL: This comes from a neighborly dispute in Montreal. There was bad blood between them already, so they were always fighting. But during one dispute, one neighbor flipped off the other and called him names. There were allegedly threatening gestures, which the flipper-off-er denied, and it went to court, whereupon the judge, Dennis Galiatsatos, said, “It is not a crime to give someone the finger. Flipping the proverbial bird is a God-given, charter-enshrined right that belongs to every red-blooded Canadian.” Isn’t that funny?

HEDVIG: That is funny. Okay. Presumably doing gestures that are actually threatening, like pointing at someone and then pulling a hand across your throat, that’s still probably a threat. But flipping the bird, a fuck you, just means fuck you, which isn’t…

DANIEL: That’s right.

HEDVIG: …a threat of imminent violence.

DANIEL: That’s correct.

BEN: Is it illegal or is it actionable to say fuck you? Is that a thing?

DANIEL: Now, that’s the subject of another court case that happened in the USA a little while ago, because somebody got fired, and there was a tense meeting where he used the F word. I forget the actual quote. And he took it to a wrongful dismissal claim. And the people at that workplace said, “Well, he was profane in meetings and said the F word.” And the judge said that in this case, it was not legally actionable because it was so common. And according to the judge, “The words allegedly used in our view are fairly commonplace and do not carry the shock value they might have done in another time.” So, that was an interesting wrap-up to that case.

BEN: To be honest, I’m a bit surprised on that one, because employment and fair work stuff is usually way, way, way, way more permissive of spurious things like this, right? Like, an employer has the right to do a bunch of stuff that isn’t legally enshrined. So, I would have thought, ‘Yeah, an employer could be like, “I want to fire you because you say fuck a lot and that’s why.”‘ Yeah, that’s interesting.

HEDVIG: But also, he said, “Fuck you” in a meeting where he was getting fired, right?

BEN: [LAUGHS]

HEDVIG: Or they were getting fired.

DANIEL: I’m not sure if it was the actual meeting where he was getting fired or if it was just in a tense meeting around that time that led up to the firing. Not sure about the context there.

HEDVIG: That sounds… yeah, mm, okay. All right! Okay. So, flipping the bird in Canada is not illegal.

BEN: It’s legally enshrined! There’s precedent. [LAUGHS]

HEDVIG: Yes.

DANIEL: Legally enshrined.

HEDVIG: It’s like how saying CUNT in Australia is legal, right?

DANIEL: Yes, it is. It’s something you can do.

BEN: Is it really?

DANIEL: Yeah. There was that guy, right? With a sign. With the sandwich board.

BEN: I don’t know. Don’t pretend like I know. I am ignorant.

HEDVIG: It’s big news. Big news. Everyone who’s Aussie had…

BEN: Big news in the C-word RSS feed I’ve got just ticking away.

HEDVIG: It’s important. It’s important.

DANIEL: To be fair, I think we covered it so long ago that it was actually in the RTRFM Studios when we covered it!

BEN: Okay.

HEDVIG: I remember it!

BEN: Fair enough.

DANIEL: That long ago! I do too.

BEN: Which is the ancient and distant past.

DANIEL: Yes.

HEDVIG: I want to hear from our listeners, what is a court case you know about that makes it legal to say nasty words in your country?

BEN: Okay. [LAUGHS] Let’s bring all the swears.

HEDVIG: So, we have: CUNT in Australia is fine, flipping someone off in Canada is fine, and saying fuck you or fuck off in the US is fine. What else are we allowed to do?

DANIEL: Let’s not forget Bono when he said, “This is fucking great,” at an award show. And the US FCC, the Federal Communications Commission, didn’t fine the network, for an unusual reason. They said…

HEDVIG: He’s not American?

DANIEL: Well, he wasn’t talking about sex. He was just using it as an intensifier.

BEN: [LAUGHS]

HEDVIG: Oh, my god!

DANIEL: If he’d said, “We should all be fucking right now,” then that would be different. That was in 2004. I just thought that was such an interesting rationale.

BEN: That is so amazing.

HEDVIG: That means that people can say FUCK a lot more than they do, because it’s very rarely referring to the action of copulating.

DANIEL: Yeah!

HEDVIG: Very rarely. In fact, like…

BEN: Well, I guess that’s why you get one FUCK in PG-13, right? That’s the rule?

DANIEL: And BoJack Horseman as well.

BEN: There’s only one FUCK in BoJack Horseman, really?

DANIEL: Famously, there’s only one FUCK in every season of BoJack Horseman. And when they use it, it means that the relationship has irrevocably changed.

BEN: There we go.

HEDVIG: Oh, okay.

DANIEL: So, we’ll be following those rationales. So, what are our rationales? That it’s so common, or it’s not about actual sex, or it’s a God-given right for every red-blooded Canadian. Wow.

HEDVIG: Mm.

DANIEL: Okay.

BEN: [LAUGHS]

DANIEL: I can say, “Fuck you.” God said.

BEN: Yeah, because my blood is red, the bird comes out.

DANIEL: LordMortis suggested this one about nut fluids.

BEN: Ah.

HEDVIG: Uh-huh.

BEN: My fave. Not nut fluids. I’m not a huge fan of nut fluids but calling them nut fluids is fantastic and I enjoy it.

HEDVIG: Yeah. Yeah. I like all that kind of stuff. I like calling coffee BEAN JUICE.

BEN: Water is… water… This is the joke that just never quits for me.

HEDVIG: Water.

BEN: Anytime I’m having a big, like I’m thirsty, I do the [DOES A SATISFIED SMACK] “Ahh, freshly squeezed cloud.”

HEDVIG: That’s cute.

DANIEL: There was somebody I know who saw me drinking some sugar-free soda, and I said, “It’s mostly water.” She said, “Yes, it’s the fun water.” So, now I call it… [CROSSTALK]

BEN: [LAUGHS] It’s the opposite of that, Daniel! We’ve been through this!

DANIEL: The fun water.

HEDVIG: Yeah.

DANIEL: [LAUGHS] Yeah, without going into that discussion.

BEN: Sugar-free soda is the oatmeal cookie of the soda world, full stop. The kind with raisins.

DANIEL: I really think you’re overestimating how much fun sugar is. I don’t think it’s that fun.

BEN: Absolutely not.

DANIEL: [SIGH] As far as fake milk goes, we’ve seen a few of these disputes: makers of milk and meat are trying to work the refs a little bit. They’re trying to engage in some regulatory capture, because they want makers of fake meat and fake milk — nut fluids — to have to change the name because, as they contend, the makers of these products are trying to confuse consumers. Make them feel like they’re getting real meat or real milk, and they’re not.

HEDVIG: Right.

BEN: Okay. So, let’s put a line under that…

DANIEL: Yeah.

HEDVIG: Okay.

BEN: …and call it what it is, a lie.

DANIEL: A lie.

BEN: What they’re trying to do is create protected brand equity, right? So, they’re doing exactly what the Chardonnay region has tried to do and the Cognac region has tried to do — and successfully done, by the way — all of these sorts of things. You can’t call it champagne unless it’s from the region; you’ve got to call it sparkling white wine, so on and so forth, right? To create a privileged position for their product amongst other products in the market, which is exactly what the dairy and the meat industry and all that sort of stuff is trying to do. What is a bit dumb of them, in my opinion, is that it is a real limited-run strategy to try and attain advantage for your product. Because I’m not a big drinker, but I would throw it to anyone listening: no one fucking cares about whether it’s champagne or sparkling white wine anymore, or at least very few people within the market do.

HEDVIG: [DOUBTFUL NOISES] Mm? Mmm???

BEN: If your sparkling white wine comes from France, but it comes from a different part of France that is still making excellent champagne beverage, like bubbly white wine, no one cares. Does it confer them a small advantage? Probably. Is it going to stop the eventual replacement of these horrifically polluting industries by environmentally friendly alternatives? Obviously not.

HEDVIG: And also, it feels like it’s a bit different, because milk and meat are really broad generic categories.

BEN: Yeah.

HEDVIG: Whereas champagne and cognac or whatever are much more restricted, and they carry a prestige that is associated with the name. I don’t know if the same things really apply to meat and milk. What’s happened in the public consciousness, I feel, is that we’ve transferred the meaning of the word MILK: it isn’t about the origin, it’s about the function in your diet. The same is true for MEAT. So, I have vegetarian meatballs, because I like to have my potatoes in brown sauce and meatballs and lingonberry jam, because I’m a Swedish person and I like that.

BEN: Because you are Swedish as fuck! [LAUGHS]

HEDVIG: I want to have this dish. And then, that thing is supposed to be a certain way, but then exactly where that comes from, meh.

DANIEL: Don’t forget that the word MEAT in Old English meant any kind of food. Even by 1530, we were saying things like “Give thy horse meat so he be shoed well,” and nobody was suggesting throwing flesh to the horse. MEAT in this case was really oats and things.

HEDVIG: Oh, my god! Swedish word for food is MAT.

BEN: Oh, there we go.

DANIEL: Oh, wow, there we go. Kept it.

HEDVIG: I never connected that!

BEN: Fun! Good old Germanic roots.

HEDVIG: Okay. Fun!

DANIEL: Oh, yeah. All right.

BEN: I anticipate what will happen on the meat front specifically is that cuts will become protected.

HEDVIG: They are already.

BEN: So, you won’t be able to say Impossible Meat tenderloin or anything like that. Like, if you want to use specific cuts that reference an animal’s biology, then you’re going to have to farm the meat from a cow, so to speak, or like a lamb or whatever.

HEDVIG: Yeah. I listened to another podcast, I think it was 99% Invisible, but it might have been another one, but I think it was. There are people who are researching new cuts of meat…

BEN: Yes.

HEDVIG: …and they are trying to seek patents for them and stuff. I didn’t even know that. I thought that all the ways you could cut a cow were found out.

DANIEL: We’re kind of done. [LAUGHS]

BEN: Yeah, it’s really interesting, isn’t it? I listened to the same podcast, and you are right. I’m pretty certain it was a 99PI one. [It was probably Planet Money 399: Can You Patent A Steak? — D]

HEDVIG: Yeah.

BEN: Yeah. The idea that if you cut a certain bit of meat in a certain way, you’ll create an entirely new thing that you can then patent in the same way, say that… this is a big… So, the version of that that happens in the horticultural world is new varieties of apples and pears and that sort of thing. So, if you blend something just right and you get a really tasty fruit, you can patent that variety for a surprisingly long period of time and make mad coin off it. Same with the cuts of meat, you can really clean up.

HEDVIG: I heard though that you can only clean up if people use a name. If you have an apple that is a Pink Lady, but you don’t call it a Pink Lady. Anyway, sorry, we keep…

BEN: Yeah.

HEDVIG: This is such an… Daniel, yeah.

BEN: We should do a show.

DANIEL: Pulling it back. So, there’s been some movement on the milk front. The US FDA, the Food and Drug Administration… you see, in 1973, they defined milk as, here’s the quote. This is really tasty. “The lacteal secretion, practically free from colostrum” — the really goopy, fatty stuff — “obtained by the complete milking of one or more healthy cows.” So, the cow aspect was definitely part of the FDA’s definition. But now, the FDA has kind of signaled a reversal, because they say, they — the nut fluids — are made from plant materials rather than the lacteal secretion of cows, but because there’s no confusion, it’s okay. They did some focus groups, they got comments, and the FDA says, “The comments and information we reviewed indicate that consumers understand plant-based milk alternatives to be different products than milk. Consumers generally do not mistake plant-based milk alternatives for milk.” In other words, they’re not buying plant-based milk because they think it’s milk. They’re buying it because they know it’s not.

BEN: Now, but that is interesting, because that may start to change in the not too distant future…

DANIEL: Yes, of course.

BEN: …because in the same way that Impossible Foods and Beyond Meat are trying to create a different kind of plant-based protein, I imagine not very far down the pipeline, there’s going to be, like, protein innovation houses, because everything’s a startup now, that will try and create a non-animal-based milk simulacrum that is very much designed to be a cow milk replacement, in a way that almond milk and oat milk and all those aren’t trying to be. They’re like, “We are almond milk, we are oat milk,” where this one is going to be like, “It’s milk as you know it, but not from where you know it,” or some crap like that or something.

DANIEL: Yeah. Well, that is the next stage. There are already attempts to make — in fact, startups who are making — synthetic milk using something called precision fermentation. It’s bioidentical to regular milk. You start with special yeast that’s been implanted with the relevant gene, and then you use precision fermentation. You get the same protein as cow milk, and then you just add flavorings and oils to get the right sort of mouthfeel. That way, you get less cows burping methane, less potential for animal abuse. Synthetic milk, I will snap that up as soon as I can.

HEDVIG: Wow!

DANIEL: So, that’ll be the next frontier in the milk war, the milk language war.

BEN: I just had to do a whole… You don’t have to put this in the show, but I just did a whole individual case study about the various protein innovation houses. It’s interesting.

DANIEL: That’s something that I would totally invest in. I would love to see that.

BEN: I would recommend that you don’t. For now.

DANIEL: Because it’s too expensive and there’s a long way off?

BEN: Yeah, they’re not there yet and you’ve got this network of different players, and like, one or two of them are going to succeed, and the rest are going to fail miserably.

DANIEL: Yeah. Okay. So, when a clear winner emerges, then bank on that hard.

BEN: Basically.

DANIEL: Thank you. Well, Hedvig, there’s been a new book out by our pal, Lesley Woods. Is that not correct?

HEDVIG: Yeah, Lesley Woods, who is an Australian Indigenous woman who is a PhD candidate at ANU, where I did my PhD. So, I always have a soft spot for her.

BEN: ~Your almer mater.~

[LAUGHTER]

HEDVIG: She is also a very smart and kind woman. She’s written a book called Something’s Gotta Change: Redefining Collaborative Linguistic Research, about the way that linguistic research interacts with the ownership of knowledge by Indigenous communities, what’s been happening, and what needs to change. The book touches on some points that we discussed with her in Episode 35, where we also had Alice Gaby on as a guest and the lovely Ayesha Marshall. That episode is actually called Something’s Got to Change, and her book is called Something’s Gotta Change. So, easy to find, easy to link. The book is available for free from ANU Press, and I’ve had a read of the introduction and some parts of it already, and it looks really cool. I really recommend it to everyone who is studying linguistics or who is interested in global diversity, cultural diversity, and those things. It’s worthwhile to think hard about what research is and is doing. It’s not a politically neutral thing that we’re doing, and she makes a lot of very, very good points.

DANIEL: In the show, we had a bit about how linguistics really needs to decolonise, and be for the benefit of the people who are the custodians of that language. Is that kind of the themes that the book touches on and how to get it done?

HEDVIG: Yes, that is one of the things she touches on. I haven’t read the whole book. It just came out, so I’ve only read parts of it.

DANIEL: Sounds like that might come up in a future episode with Lesley Woods. We’ll see what happens there.

HEDVIG: Oh, should we have her on a second time? Yeah.

DANIEL: I think that’d be a fantastic idea.

HEDVIG: I think she’s a very, very smart person. So, I’d love that.

DANIEL: All right, last news item. I came across a paper that’s come out recently by Pablo Contreras Kallens, Ross Deans Kristensen‐McLachlan, and Dr Morten Christiansen, who we had on the show earlier, in the episode called The Language Game with Nick Chater. So, here’s the thing. We’ve all been talking about the variants of GPT. GPT-4 has just come out. Some of us are playing with that. A lot of us are playing with ChatGPT. Everyone’s socks have been knocked off. It just seems so lifelike. There have been some great things about it, some bad things about it, some worries about it. But this is some research about it.

Now, generativists — people in the Chomskyan tradition — have long claimed that it’s not… and I hope I’m not straw-manning here. I don’t want to do that. But it’s been alleged that language is to some extent innate, that there is a universal grammar inside of human brains, that language is a bioprogram. There have been stronger and weaker versions of this proposed. But one of the common themes of nativism is that, because something about language is already inside of us when we’re born, it helps us to acquire grammar quicker than you would normally have to if you were just learning it from stats alone. And some people have even said you can’t learn it from stats alone, from just observation alone, that you need to have a universal grammar in you. Otherwise, it’s not possible.

BEN: The simple dude or dudette or theyvette take on this is, like, there is something magical inside us. Just us, nothing else. And it is like a magic seed that allows language to be a thing. Without that seed, no language.

HEDVIG: And since the inception of that idea, what that seed is has been whittled down to a smaller and smaller thing as we’ve been learning more about languages and how children acquire languages. So, by now, those little rules are quite general and very, very basic. We’re not talking about hard-coded whole grammar. And if you want to hear more about this before we go off on a tangent explaining nativism again, we have episodes 37 and 38 on generativism, where you can hear real full-blooded generativists talking about it, which could be a good idea. [LAUGHS]

BEN: [LAUGHS] People smarter than uzzz.

DANIEL: Now, one of the things that came up in those discussions is something called the poverty of the stimulus. This is an argument that people make for nativism, the idea that some language is inborn. In particular, they say that children gain mastery of language structures that they haven’t heard, that they haven’t received evidence for, and that the stimulus is therefore too impoverished for language to be learned from it alone.

HEDVIG: Can I do a dumbed-down version?

DANIEL: Please.

HEDVIG: Children are learning language surprisingly fast given the things that they are hearing. So the idea is that they’re not hearing enough things to explain how good they are at language. And therefore, there must be something that comes with their brain as they’re born that helps them use the minimal input that this argument claims that they get…

BEN: The magic seed.

HEDVIG: …to take that and get to language. So, that’s when people say poverty of the stimulus, which sounds like such a fancy term, it just means kids don’t get enough data to explain how good they are.

DANIEL: Yep, that’s it.

HEDVIG: And people who are studying large language models are saying, “Look, we gave a bunch of data — without any kind of innate understanding of language, without any of this hard wiring — to a computer, and it learned language.” And as Daniel and Morten discuss, it learns language very well.

DANIEL: In fact, can I even use less inflammatory language? Maybe we shouldn’t say large language models like GPTs have learned language, but they are able to do grammatical output. Whatever else they’re able to do, whether you think they’re silly or they’re not or they’re irrelevant or they’re dangerous, you’ve got to admit that the stuff they come out with is grammatical.

BEN: Definitely.

DANIEL: So, it IS possible to learn grammar from data alone. So, this was a paper called “Large language models demonstrate the potential of statistical learning in language”, and I had a chat with Dr Morten Christiansen. He is a cognitive scientist at Cornell University. He’s the William R. Kenan, Jr. Professor of Psychology. He’s author of The Language Game, and he’s also the author of this paper. Since the paper talks about statistical learning, I decided to ask him, “What is that?”

MORTEN CHRISTIANSEN: Yes, you might think of it as distributional learning. Essentially, they show what can be learned from being exposed to loads and loads of input. So, what these models pick up on are distributional regularities based on loads and loads of text.

DANIEL: Distributional regularities. So, is this the way that I know that if someone says the word TREMENDOUS, it’s probably describing something good. But if I say COLOSSAL, then that’s always tied to something bad. It’s always a colossal failure. It’s always a colossal blunder. It’s never a tremendous failure. Is that what we’re talking about?

MORTEN: Not quite. I think what you’re referring to are things like collocations, that certain words tend to co-occur together, or certain types of words co-occur together. But this is really more about what you could pick up if you just had loads and loads of experience. Now, what happens is that certain words, like what you are suggesting, do tend to co-occur together. People have been looking at this for a long time. And so, words of the same kind tend to occur in the same context. So, nouns tend to occur in the same context as other nouns, and verbs as other verbs. And this is just very simple distributional information. But what we are learning from these large language models like ChatGPT is that they can actually learn quite a lot from just exposure to language.

DANIEL: Okay. So, what we’re saying is that large language models are capable of learning to make grammatical output just by looking at linguistic input alone.

MORTEN: Yes. So, what these models do, at least models like GPT, is, given the past input, try to predict the most probable next word, based on what they have learned from being exposed to millions and millions of words.
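To make “predict the next probable word” concrete, here’s a toy sketch in Python. It is not how GPT works internally — GPT uses a neural network over subword tokens — but counting which word tends to follow which is the simplest form of the distributional learning Morten describes. The tiny corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then suggest the most frequent follower. Large language models do this
# with neural networks over vast corpora, but the training objective --
# predict the next word from the preceding context -- is the same idea.
corpus = "the dog chased the cat and the cat chased the mouse".split()

follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Most frequent word seen after `word` in the corpus, or None."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))     # -> 'cat' ('cat' follows 'the' twice)
print(predict_next("chased"))  # -> 'the'
```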

DANIEL: Would it be fair to say that large language models are spicy autocomplete?

MORTEN: Yes, they are autocomplete on steroids.

DANIEL: Okay, great. Okay, I feel comfortable with that. Now, let’s talk about what we’re not saying. We’re not saying that large language models understand language like humans do.

MORTEN: Yes. So, it’s easy to get very excited about these models. I am very excited about it. But it’s also easy to overhype them. Famously, there was this former employee from Google who thought that their large language model had consciousness. Certainly, we’re not suggesting anything like that or that there is any kind of general intelligence. And so, they don’t really understand language in the same way that you and I understand language, because they’re missing a lot of the key ingredients that go into language. This is in part what we talked about in the book, The Language Game, that we also have talked about on this show before.

DANIEL: Yes.

MORTEN: But what they do do is that when you look at the output… So, even though sometimes they will say things that are nonsensical… Unfortunately, these models can also be racist or sexist or bigoted in other kinds of ways. But setting that aside, and the fact that they can say silly stuff, what they produce in output… Even the people who are criticising these models for a variety of reasons, when they provide examples, what is striking about those examples is that they are always grammatical.

DANIEL: Okay.

MORTEN: And so, what these models are able to do, given loads and loads of input, is to produce grammatical language from just being exposed to loads and loads of language.

DANIEL: They can produce grammatical language just by observation alone. So what? Why is this important? What hangs on this argument?

MORTEN: Well, this speaks to a very old argument in the study of language, namely an argument that’s referred to as the poverty of the stimulus. So, when people started looking at language no more than half a century ago, it seemed to many people — and I think quite reasonably so at the time — that the input that kids got, at least from what was understood at the time, was too messy, too incomplete, missing all sorts of information, so that they would never be able to really figure out how their language worked. That is, they wouldn’t be able to generate the kind of grammatical abilities that we see adult individuals use. And so, the suggestion was that in order for them to do so, they needed to have some built-in knowledge of language.

DANIEL: But the computer doesn’t have any built-in knowledge at all and it’s still able to do it. I mean, it’s able to do grammatical stuff.

MORTEN: Yes, it’s able to do grammatical output. That’s what we are suggesting in this paper. We’re not suggesting that it’s capturing meaning or how we use language in context, but rather that what these models show is that it’s possible to develop the ability for grammatical language, even though they have no built-in knowledge of language; there’s no so-called universal grammar or anything like that in these models. They’re simply just getting exposure to language. And from that, they’re able to essentially produce grammatical language that is on par with what we see from most adults.

DANIEL: Okay. Now, I’ve heard an argument that large language models are not really relevant to poverty of the stimulus arguments, because in order to get them to do grammatical output, you had to bucket them with way more data than children ever get. Do you think this is a good counterargument?

MORTEN: Well, it’s certainly the case that these models are getting loads and loads of input. So roughly, it may correspond to, I think, something in the order of 20,000 human years or something like that, in terms of input. However, these models are also missing loads and loads of stuff that kids actually have. So, in a sense, what they’re getting as input is really impoverished. When kids are learning language, they’re getting all sort of other information as input. They’re living in the world, they have bodies, they have the context of interaction with other people. And the models are missing that. They’re just essentially getting loads and loads of input that they are soaking up the statistics of. And so, in a sense, that’s rather impoverished, ironically.

DANIEL: Okay.

MORTEN: So, that’s one thing. But there have also been some preliminary studies that have tried to give models like these less input. And it turns out that they actually can learn quite a lot of language even if they get less input. There’s one study by Hosseini et al, where they found that the models were able to do quite well when they got only about a hundred million words, which roughly corresponds to what a kid would be exposed to during the first 10 years of their life — probably a lower bound on that.

Now, it’s an empirical question how well they will do more generally. But I think what is nice about these models is that we now have fully functioning models that can create grammatical language just like a human can. And so, now we can experiment with them. I think this is what we are suggesting in this paper, is that they provide an opportunity for language scientists to do all sort of tests on them. So, we can now treat the question of, “Can language be learned from experience alone?” empirically by using these models. We can see, can they develop patterns of regularities that would allow them to model children’s language behaviour early on if they’re trained in that way?
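Some back-of-envelope arithmetic on the scale gap Morten describes. The figures below are our assumptions, not his or the paper’s: a commonly cited ballpark of roughly ten million words of input per childhood year, and the roughly 300 billion training tokens publicly reported for GPT-3. The result lands in the same ballpark as Morten’s “20,000 human years.”

```python
# Rough scale comparison; both figures are assumptions for illustration:
# ~10 million words/year of child-directed input (a common ballpark),
# ~300 billion training tokens (the publicly reported GPT-3 figure).
WORDS_PER_CHILD_YEAR = 10_000_000
GPT3_TRAINING_TOKENS = 300_000_000_000

# -> 30000.0 human-years of input: the order of magnitude Morten cites.
print(GPT3_TRAINING_TOKENS / WORDS_PER_CHILD_YEAR)

# The ~100 million words in the study Morten mentions:
print(100_000_000 / WORDS_PER_CHILD_YEAR)  # -> 10.0, roughly a child's first decade
```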

DANIEL: So, is that it? Have we done away with nativism, or have we just dealt with one of its major arguments?

MORTEN: I think we’re only really dealing with one of the arguments. So, I think part of the original argument was also this notion that what the child had to do was to identify the correct grammar for the language that they’re learning. I think, again, as I was saying, it made sense at the time, because back about 50 years ago, there was very little knowledge about the kind of input that kids get. There was very little knowledge about what kids actually could learn. There were not a lot of empirical studies of that. And also, we knew very little about how powerful statistical learning can actually be.

But of course, these days, we know much more about the input that kids get, and it turns out that it’s much richer than was thought originally. We know much more about how children learn, and it turns out that they’re actually quite sophisticated learners. And lastly, with models like GPT and other models, we now know that statistical learning or distributional learning is incredibly powerful with enough input. And so, I think it’s time to revisit the notion of the poverty of the stimulus, given what we now know.

DANIEL: So, if I were to argue against the poverty of the stimulus argument, I’ve noticed that people who make this argument tend to focus on things that children don’t hear, but they’re able to do anyway. For example, they’re able to come up with a sentence like, “Is the man who is singing tall?” They never say, “Is the man who singing is tall?” They do it right even though — it was argued — they never heard that kind of data. And the reason we’re able to do these sentences right is because we’re all born with a universal grammar in our brains that all languages conform to and that helps us to overcome any gaps in the data we hear. So, what you seem to be saying is that kids actually get great data. These rare sentences that children were presumed never to have heard, they actually do hear, and these so-called rare sentences might actually be common in real life. Is that what we’re saying?

MORTEN: Not quite. And in fact, I had a paper a few years ago with a former graduate student, Florencia Reali, where we showed that you don’t actually need direct examples of these kind of sentences in order to produce the right questions, but there’s actually indirect statistical information. And so, essentially, we did very simplified versions of these large language models that we have today. And so, this was on a much smaller scale. What we were able to show is that it’s possible for a very simple model to use indirect statistical evidence to figure out how to produce the grammatical questions and not make any errors.

And importantly, the modeling that we did was also able to predict the kind of errors that children do make, because it’s not the case that they don’t make errors trying to form these kinds of questions. But what the errors show is that they seem to be using multi-word chunk combinations, and sometimes that can make them produce errors, and our modeling was able to show that. So, I think the power of these new language models is that they show that there is lots and lots of indirect statistical information that can be put together by these large networks to essentially create grammatical language.

So, you don’t actually need explicit examples of, “Is the man who is smoking tall?” in order to be able to produce that question right the first time. You can actually rely on having come across subparts of that sentence in one form or another, and then piece that together on the fly, and this is what these models seem to be able to do.
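Here’s a loose sketch of that indirect-evidence idea. The toy corpus is invented; the real Reali and Christiansen (2010) study used child-directed speech and more careful models. The point it illustrates: bigram statistics learned from declaratives and simple questions — never from the complex question type itself — still prefer the grammatical form of that unseen question.

```python
from collections import Counter

# Toy version of the Reali & Christiansen idea: the corpus contains
# declaratives and simple questions, but never a question formed from
# a sentence with a relative clause. Indirect bigram statistics still
# favour the grammatical version of that unseen question type.
corpus_sentences = [
    "the man who is singing is tall",
    "the girl who is running is fast",
    "is the dog hungry",
    "is the man tall",
]

bigrams, unigrams = Counter(), Counter()
for sentence in corpus_sentences:
    words = ["<s>"] + sentence.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

def score(sentence, alpha=0.1):
    """Product of smoothed bigram probabilities; higher = more familiar."""
    words = ["<s>"] + sentence.split()
    prob = 1.0
    for w1, w2 in zip(words, words[1:]):
        prob *= (bigrams[(w1, w2)] + alpha) / (unigrams[w1] + alpha * len(unigrams))
    return prob

grammatical = "is the man who is singing tall"
ungrammatical = "is the man who singing is tall"
print(score(grammatical) > score(ungrammatical))  # -> True
```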

DANIEL: Okay. So, it sounds like we’re arguing for something like construction grammar where children are able to do these complicated sentences because they pick up hunks of things that they hear and put them together. Is that kind of how it works?

MORTEN: Yes, I think that’s how it works. And so, essentially, what these models seem to be picking up on are multiword combinations that can then be reused and put together in novel ways. That’s certainly a lot like what we might think of as construction grammar. Now, construction grammar is normally thought of as a relationship between form and meaning; that is, a word or a set of words with some sort of associated meaning. As noted earlier, these models don’t really have a separate representation of meaning as such, as far as we know. That’s actually one of the sort of interesting things. One of the problems with the models is that we don’t have any insight into them. We can’t really see the machinery behind them. That’s a problem for empirical research as such. But they certainly seem to be capturing some aspects of what we might think of as construction grammar.

DANIEL: What’s the best argument that you’ve heard against your paper? The one that makes you go, “Ooh, okay”? Because I’ve seen people discussing this and saying, “Well, actually, large language models aren’t very relevant to how people do language,” or “It doesn’t really put paid to nativism or the poverty of the stimulus argument.” Have you gotten any word back from people who disagree with you? What are they saying?

MORTEN: I haven’t had any word back directly. There are various kinds of arguments against the models — not necessarily against our paper, but against these models in general — in that they’re not really capturing language because they don’t have meaning and so on. I certainly agree that they don’t capture all aspects of language. There are many aspects of language they don’t capture. And also, they are not really capturing the multimodality of language and so on.

But I think what these models do show us is the power of distributional learning, just how much can be learned from just being exposed to language and picking up on subtle distributional patterns. I think this is where their force is, as it were. And this allows us to figure out what else is needed in order to explain human language. This is where we can use them as an empirical tool to figure out what else is needed in order to explain the full capacity of human language. I think it’s incredibly humbling to see just how far these models can get in terms of generating grammatical language just from being exposed to loads and loads of words.

DANIEL: What about the world is different now? We’ve seen that large language models have been really successful, and they’ve really captivated people’s attention and they might be able to change a lot in the workforce or in the way people do things. What’s different now?

MORTEN: Well, lots of things are different. So, I’m actually on a committee here at Cornell University to figure out what the implications of these large language models are for education and how we might integrate them. There are going to be both positive and negative sides to it. For example, ChatGPT can be used to essentially generate answers to loads and loads of quizzes or tests that are used to evaluate students on their knowledge in a variety of areas. That’s, of course, a problem, because if they just type it all into ChatGPT and copy out the answers from that, they can probably do fairly well in many cases, maybe a B minus or something like that. But of course, it wouldn’t necessarily mean they’ve learned anything. So, we need to figure out how to deal with that in education.

But also, there’s many other aspects of societies where they can be both helpful or problematic. They can be used to generate false information, for example. And they do have one scary part, and that is that they sometimes hallucinate, meaning that every now and then they will actually make up facts and they look like they’re quite confident about it, even though it’s just stuff they’re making up. So, they’re really good at putting words together and making it sound very convincing, but it could just be something that they’re making up out of thin air. That’s, of course, a problem.

DANIEL: Well, last night I got bored and I was trying to do something with a spreadsheet. I was trying to pull first names out of a list of people. And I just went to ChatGPT and I said, “I’m using Apple Numbers. Could you please write me an equation to pull first names out? But if it has this problem, then I need you to do something else.” And it said, “Sure, here it is.” It just explained to me what everything did. And I tried it and it worked. But the thing is, I could test it to make sure that it was right. I think that’s something that we need to have. We need to be smarter about the answers that we get.
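For the curious, here’s the same task sketched in Python rather than as an Apple Numbers formula (the formula Daniel actually got from ChatGPT isn’t quoted in the episode, so this is our own illustration): take everything before the first space, with a fallback for single-word names. And, as Daniel says, the important part is that you can test it.

```python
# The task Daniel describes, sketched in Python rather than a Numbers
# formula: pull the first name out of "First Last" strings, falling
# back to the whole string when there is no space to split on.
people = ["Ada Lovelace", "Grace Hopper", "Cher"]

def first_name(full_name: str) -> str:
    """Everything before the first space, or the whole name if none."""
    return full_name.split(" ", 1)[0]

print([first_name(p) for p in people])  # ['Ada', 'Grace', 'Cher']
```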

Like, I would love it if it worked like Google. Google doesn’t tell you the right answer. Well, it does try to sometimes. But it just gives you a list of 10 likely answers and it says, “Well, here’s 10 websites that I think are relevant to the thing you asked for.” You have to check them out. If GPT or large language models managed to do that, if they said, “Here are some likely responses, but you need to check them out and here’s how you can check them out.” I would love it. That would be very useful.

MORTEN: I think you can essentially ask it to do that too. So, I think there are a lot of opportunities here, as you’re saying. We can collaborate with these kinds of models to come up with certain ways of thinking. So, for example: what should I write as the conclusion of this paper I’m working on? You can put in some suggestions and some information about what you’ve been working on, and then they can provide various kinds of suggestions. You can work through these and say, “Oh, this is a good idea,” and rework that in your own words and so on. So, that way it becomes more of a collaboration.

I think one of the potentials of these kinds of models is that they can help us get more into the writing process. Typically, the writing process involves three stages, though they’re not separate as such. First, there’s the part where we think about what we want to write about, we gather information and so on. Then there’s the writing part: getting stuff down on paper, or actually on the computer these days. The last part is editing. That’s the part that people tend not to spend a lot of time on; oftentimes, they will spend much more time on generating text as opposed to actually editing it. But if we can use ChatGPT to provide text for us, we can focus more on the critical aspects of writing, namely editing and revising what we have written. Of course, it requires a different way of teaching how we write, and different ways of working with computers.

There’s also another potential advantage of large language models: there are a number of people who have problems generating text, either because they are uncomfortable with it or for other reasons. For example, we have many people here at different universities, or in different contexts, who have to write something in a language that’s not their first. That can create a lot of problems, and it can be a hurdle to generating text. So, we can use these models to help generate text, and then the person, the second language learner, for example, can go through and edit more. So, it can help level the playing field for quite a large number of individuals who have problems in terms of generating language.

So, there are these opportunities, but there are also a lot of caveats and negative sides to these models as well. And how it’s all going to shake out in the end, I really don’t know. But I think if we try to think productively about it and creatively about it, they can be used in a positive way. So, we might think of these models in this regard as a little bit like calculators. In the old days, before calculators, there was the abacus, of course. But then the calculator came along. That didn’t mean that we stopped learning times tables, but there was less emphasis on it, because we have these useful devices.

Likewise, we can use them in a similar way. But of course, the danger is that whereas the calculator will always give you the right answer, these models, at least the current models, might sometimes actually give the wrong answer. So, it requires people to approach it in a critical way, a little bit like your suggestion earlier that these models could produce stuff that would be more like what you get in a Google search.

DANIEL: I think it would be really good if we approached it skeptically. I know I’m a skeptic, but I feel like skepticism is just a really useful tool when we’re approaching anything from a calculator to a web search to a chat session. But you know, I feel like while there’s a lot of alarmist sentiment — because we always feel suspicious of AI and computers — this discussion between the two of us has made me feel actually quite optimistic, quite forward-looking. I feel like there are, like you say, some real opportunities here for good.

MORTEN: I think there is. Again, it’s crucial to keep all the caveats in mind. We need to be careful about that. Certainly, they can create bigoted language and language that’s in other ways harmful. So, it’s important to be aware of that. And so, part of what we need to train ourselves, and students, and people in general to do is to be more critical towards the information that we get. That’s something we need in general, but we might perhaps need to put more weight on it broadly in society. If we do that, then I think these models can be used for good.

DANIEL: The paper is “Large language models demonstrate the potential of statistical learning in language”. It’s by Pablo Contreras Kallens, Ross Deans Kristensen‐McLachlan, and Dr Morten Christiansen, who’s been talking with me now. Morten, thanks so much for the chat. I really enjoyed talking to you. It’s great to hear what you’ve been up to.

MORTEN: Thank you for having me to do it. So much fun.

DANIEL: Okay, team, what do you reckon?

BEN: I don’t know. This is like a layperson take, and this doesn’t have to go on the show. But people can do grammar without knowing grammar, without having ever learned grammar. Right? Like, if you ask a person explicitly: how does the grammar function in your language, most people cannot answer that question. Right?

DANIEL: I couldn’t.

BEN: The classic thing is you learn grammar when you learn a second language, because you learn the grammar of your first language as you’re starting to learn the rules of language, and then blah, blah, blah, blah, blah.

DANIEL: Okay.

BEN: So, I just think there’s something we can note here is that you can make a thing, whether that’s a human thing or a computer thing, do grammar without that thing understanding grammar.

HEDVIG: Yeah, for sure.

DANIEL: Yeah. So, there’s a difference between implicit and explicit grammar. We all have implicit grammar knowledge, but you only have explicit grammar knowledge if you’re a linguist or you’ve done some study of a second language. So, that’s a really good distinction. All right, so what did you think? Do you think that made the case that various flavors of GPT have put away that version of nativism?

HEDVIG: I’m not entirely convinced, if I’m fully honest. I really appreciated the discussion back and forth between you and Morten, but there were some loose ends for me. One of them was related to what Daniel just said, which was what we mean when we say knowing language: making grammatical sentences but not understanding the meaning of them. Kids don’t get the entire internet fed to them, like ChatGPT does. ChatGPT gets a lot. A lot, a lot. Morten made the point that even when you give it less, it still makes grammatical sentences. But what kids get, like Morten pointed out, is also a back and forth, an actual meaning attached to things: they say things, and then their parents either do or don’t do the things they want them to do. So, they get a feedback loop. And that’s a very different kind of input, I think, than just a bunch of sentences, find the statistical correlates. So, I’m not sure they’re really comparable like that.

DANIEL: Well, I think you’re right. I wouldn’t try to compare the two, just like I wouldn’t try to compare birds and airplanes. In the early days of airplanes, they thought that airplanes would have to flap their wings, and there were all these machines that would flap wings, and then people realised, “Oh, wait, birds are using a principle and we can use the same principle, so that now birds and airplanes don’t act the same, but they can do the same thing. They can fly.”

HEDVIG: Right.

DANIEL: Do you think that’s a good analogy?

HEDVIG: I think that’s a great analogy, actually. I think it’s really good. I think it’s very weird of humans to expect that computers learn and do things the same way that we do. They don’t need to. There are different types of intelligence. You see this with all kinds of artificial intelligence all the time. They’re a different kind of intelligence. They’re not going to be just like us, and that’s fine. But if we’re using what we know about them to make inferences about what kids can do, then maybe the question is not whether they’re identical. The question is whether they’re similar enough, whether it tells us anything. I think it tells us something. The fact that it could do grammatical sentences based on this input alone tells us something.

DANIEL: Yeah, I think it does too.

HEDVIG: I’m not saying it doesn’t tell us anything. I’m just unsure. I’ve seen some of the people… There’s been a bit of a back and forth about this paper on Twitter among other places, and some people are feeling strawmanned, and some people are feeling misunderstood. I don’t know, I still think it’s an interesting paper and tells us something, but it’s a complex issue.

DANIEL: Oh, Chomsky, you’re still dragging us into unprovable, cross-disciplinary, decades-old debates.

BEN: We don’t talk about the gorilla!

DANIEL: How do you do it?

HEDVIG: Wait, I’m not even sure… do we need him?

DANIEL: Do we need Chomsky, or do we need to go there?

HEDVIG: No. Aren’t generativists doing fine without him? Is that weird to say?

DANIEL: Okay. Well, this is one thing that’s not clear to me. When people say they’re generativists, do they mean they’re in the Chomskyan tradition or do they just mean that they’re doing formal syntax?

HEDVIG: Yeah, they mean that they’re doing formal syntax.

DANIEL: Well, I wish we had clarity on that, because it sounds like people are signing up to stuff that they’re not signing up to. Okay. One thing that I do think it shows us is the power of distributional learning. So, remember, that’s the bit where I noticed that the word BLUE occurs in certain contexts with certain words, and then I noticed that RED appears in the same sort of contexts and GREEN occurs in the same sort of contexts. So, then I can say, “Oh, okay, there’s something in common between those words.” We know where to put words because of their distribution in text. So, distributional learning is a super powerful tool. And if children are in fact using that, then that’s just a huge thing that they get, which has been in dispute. The distributional hypothesis was really disputed for a long, long time, and it’s good to see a confirmation of this.
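(A note to make Daniel’s BLUE/RED/GREEN point concrete: the toy corpus, window size, and similarity measure below are all invented for illustration; none of this is from the paper under discussion. The idea is simply that words keeping the same company end up with similar co-occurrence vectors.)

```python
# A toy sketch of distributional learning: words that occur in
# similar contexts end up with similar co-occurrence vectors.
from collections import Counter
from math import sqrt

corpus = ("the blue car the red car the green car "
          "a blue sky a red sky the dog barked").split()

def context_vector(word, window=2):
    """Count the words that appear within `window` tokens of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            counts.update(c for c in corpus[max(0, i - window):i + window + 1]
                          if c != word)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

blue, red, dog = (context_vector(w) for w in ("blue", "red", "dog"))
print(cosine(blue, red))  # high: BLUE and RED occur in the same contexts
print(cosine(blue, dog))  # lower: DOG keeps different company
```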

HEDVIG: I’m not sure it’s a whole lot of confirmation. It’s confirmation of some parts of it, maybe. Again, making inferences from computers to humans has some limitations. We should also remember that ChatGPT and OpenAI and all these services are spitting back to us what we’ve given them. Whereas kids actually have novel ideas in their head. ChatGPT doesn’t know anything that we didn’t tell it.

BEN: I’m going to be honest, that is not my experience with children.

DANIEL: [LAUGHS] The high school teacher. Found him.

HEDVIG: I think Ben and I have talked about this before. Kids are actually not so original. I agree with you.

BEN: Yeah, yeah. [LAUGHS]

HEDVIG: When I worked in school, they would just draw Super Mario characters all the time.

BEN: One of the things that I have…

HEDVIG: Yeah, fashion kids.

DANIEL: [LAUGHS]

BEN: No, not kids. Humans now. I’m going bigger. [LAUGHS] One of the things that I feel like ChatGPT might be showing us is… spoiler alert, if you’re still making your way through Westworld, just tune out for literally 30 seconds… one of the big reveals in that series is that, instead of, “Oh, when will the robots and the replicants become as good as humans?”, it’s, “Actually, humans are disgustingly simple, logic-differentiating machines.” And so, when we do this ChatGPT stuff, part of me is wondering: are we eventually just going to narrow it down to, actually, humans are a lot less amazing and complex and nuanced and original than we thought we were?

HEDVIG: Oh, yeah! I think we definitely aren’t as original as we think. That’s why I think people freak out so much about when they get an ad for a perfume. They’re like, “Oh, Instagram is listening to me. They know that I talked about it.” It’s like, “No, all the girls who are 34 and who live in that area, they’re all getting that perfume. You’re not that special.”

BEN: [LAUGHS]

HEDVIG: So, hold on to your panties.

BEN: I’m just very simple.

HEDVIG: Yeah, you’re just very simple and it’s fine.

BEN: Look, I think capitalism has heaps to answer for there. It serves the market for us to be nurtured into a really, really predictable set of heuristics.

HEDVIG: Yeah.

BEN: So, I don’t know if we’re necessarily inherently simple. I don’t want to be quite that simplistic. But I think, certainly in the world structures that we have built — without wanting to get too woo-woo conspiracy theory about it — it serves the powers that be for us to be simplistic creatures that are easily… I’m burning myself here. Really easily entertained by silly little videogames with pixels that look really cool and all that kind of stuff.

HEDVIG: And have to get a whole big white wedding dress, because that’s what everyone gets. And have to get a diamond ring, because that means that you’re engaged and shit like that.

DANIEL: And that’s life. But then on top of that, we have language, which is so powerful and can be harnessed to make sentences of literally infinite complexity. And what do we use it for? We say, “Pass the tea.” We just use this… we have this repertoire.

HEDVIG: And that’s beautiful and that’s fine.

DANIEL: And it’s good and this is how we run our lives, but it’s a really good…

BEN: The Zen Buddhists would say there’s nothing more wonderful than sharing tea with a person of value.

HEDVIG: Yes.

DANIEL: Hmm. The thought that keeps coming back to me is that life is life, and then language is a low-resolution representation of life, of reality.

HEDVIG: Okay.

BEN: I teach this in my course all the time to high school students. When you are making something in the media, you are taking the real world and you’re creating a deeply flawed [LAUGHS] stand-in for that thing. And anytime you have to do that — to use your language, Daniel — anytime you make a representation, you’re making choices: what goes in, what goes out, blah, blah, blah, blah, blah.

HEDVIG: When you want to do something complex, you might use a shorthand or a trope, because you can’t explain that thing in itself with your low-resolution image. So then, you get crutches like tropes and things, and then you perpetuate stereotypes. It’s all great. We should all do media studies. We should be a media studies podcast instead of linguistics podcast. Daniel?

DANIEL: Okay!

BEN: Yeah, there’s certainly not enough of those podcasts out there. Three white people talking about what’s going on in the media.

HEDVIG: No, not what’s going on in the media. What is representation?

BEN: Okay.

DANIEL: Well, I feel like we’ve run the whole language thing into the ground.

BEN: [EXCITED] Oh, I get to be the expert, [DANIEL LAUGHS] and you get to be the dummies! This would be so good. This would be really fun.

DANIEL: Sounds like a bonus.

HEDVIG: Oh, I upset my cats. Sorry.

DANIEL: What I’m looking forward to is something that Dr Christiansen said, “With all that large language models can do, what else beyond that are humans doing?” That’s going to be the fascinating bit for me. What’s human about language now that large language models are here?

BEN: Trying to get laid.

HEDVIG: I don’t know.

BEN: That’s it.

HEDVIG: I don’t know! I just want to drink my tea and pet my cat. I don’t know.

[LAUGHTER]

DANIEL: Why do I need to be, like, a Superman?

BEN: I like that the one working linguist in this podcast, when presented with the philosophy of linguistics [HEDVIG GIGGLES] with big questions is like, [IN A WHINING VOICE] “No, no, don’t make me think about that.”

HEDVIG: I mean, I don’t know. What do we do? We form connections, we make groups, we build meaning…

DANIEL: We have a body and we’re connected to the real world.

BEN: Yeah. Meatspace.

DANIEL: Meatspace.

BEN: Gross, disgusting meatspace.

HEDVIG: Yeah, we’re connected to the meatspace. Though, don’t you sometimes feel like a body snatcher?

DANIEL: Nnnnnno.

BEN: Only when I’ve snatched a body. Is that what you mean?

HEDVIG: You know the movie, Inside Out?

BEN: Yes.

HEDVIG: Okay. They’re little guys.

BEN: Do you feel like that? Little people behind the controls?

HEDVIG: Yeah, except there’s just one.

DANIEL: You’re a homunculus!

BEN: Oh, so, like in Men in Black?

HEDVIG: Yeah.

BEN: Okay.

HEDVIG: And it’s like driving around my little ship, my little body, boop, boop, boop, and sometimes my body is very silly. It’s like, “Oh, I’m experiencing fear.” And my little guy in the control office is like, “There’s nothing to be afraid of. You’re just having anxiety.” And my entire body is like, “No no no, I’m going to die.”

BEN: I would love that level of detachment, but I’m avoidant as fuck. So, I don’t know what that says.

HEDVIG: No, I have massive detachment.

DANIEL: You’ve got a Ratatouille.

HEDVIG: I have a theory that I try to present to my Zumba girlfriends that periods make you better at detachment.

DANIEL: Oh.

BEN: Mm. Man, you guys roll deep in that Zumba class!

HEDVIG: Yeah, yeah, yeah.

BEN: I think other groups would discuss Married at First Sight or something. You’d be like, “Have you ever wondered if we disassociate more effectively because of menstruation? I think about that all the time.”

HEDVIG: I did try to present this, and they were all very patient listening to me, and then they were like, “We don’t understand or agree.”

[LAUGHTER]

HEDVIG: So, I don’t know how well it went.

DANIEL: Was that your German or…?

HEDVIG: This was my immigrant girl group. So, mixed.

DANIEL: Oh, okay.

BEN: So, you say you don’t have the backstop of linguistic difficulties.

HEDVIG: I think that was the least of it. No, it isn’t linguists. [IN A SINGSONG VOICE] I have non-linguist friends in a new country.

DANIEL: Oh, my goodness.

BEN: Gen pop.

HEDVIG: Yeah, we go out for Chinese food. It’s really nice. We found one good Chinese place. We keep going to it.

BEN: Yesss. Do they do yum cha?

HEDVIG: I don’t know.

BEN: Steam trolleys and that sort of thing?

HEDVIG: I think it’s a bit Sichuan. So, we get a very nice braised aubergine, and we had a very nice fish stew last time. It’s gorgeous.

DANIEL: You know what I appreciate about you two?

HEDVIG: No.

BEN: What.

DANIEL: Is that if ever I get too high up there in the rarefied air of philosophy, you bring it back down to yum cha.

HEDVIG: Yeah yeah yeah.

BEN: Always.

DANIEL: That’s a beautiful thing.

BEN: You can never, ever worry. If you feel like you need to be grounded, talk to Ben about food and I will drag you back to earth so quickly.

DANIEL: [LAUGHS] Thanks to Dr Morten Christiansen for that chat. And now, it’s time to play Related or Not, in which we take a look at two words. You have to guess whether they’re etymologically related or if the resemblance between them is merely coincidental. We’re using a variety of sources to check our answers. So, let’s get started.

BEN: Most importantly of all, it is an opportunity for one of Ben or Hedvig to be right and the other be wrong, which is the greatest of all things.

DANIEL: This is going to be fun. I’m keeping track. So, for this episode, I have not one pair of words but two pairs of words, and they’re both related to joking, making japes.

HEDVIG: Okay.

DANIEL: The first one. JOSH. When I say, I was just joshing you, “I was just joshing your socks.”

HEDVIG: Joshing around.

DANIEL: Does that relate to the name JOSHUA? What do you think? Joshing around. Is it related to the name Joshua?

HEDVIG: Yea-nah.

BEN: I’m going to go with yes. I’m going to say there was a famous humorist somewhere that did a newspaper for ages, and it was like something… Do you know what? I reckon it wasn’t first name. I reckon it was last name, like, a…

DANIEL: Interesting.

BEN: …byline type thing and it became joshing around.

DANIEL: Okay. Hedvig, you said yea-nah?

HEDVIG: Yea, nah.

DANIEL: Okay. Can you think of an alternate path that joshing could have gotten to us by?

HEDVIG: Just somehow the same as JOKE, somehow.

DANIEL: All right. Well, here’s the answer. In fact, the two are related.

HEDVIG: Oh, fuck.

BEN: Get it in!

DANIEL: Ben’s got this one.

BEN: Come on. Yes. Please tell me for the reason I said, because that’ll make me feel even better.

DANIEL: Well, there was a US humorist, Josh Billings, and he’s sometimes thought of as a candidate. But the problem with that is that he started writing a little too late, after the term pops up. He didn’t start writing till 1860, whereas the term JOSHING — which is always capitalised in the early writings, by the way, so it’s probably a proper noun — showed up in 1845, 15 years earlier. What’s likely is that JOSH was being used as a generic name for a person. We see this a lot. Tom, Dick, and Harry. Or Tom for male animals.

HEDVIG: Johnny Come Lately.

DANIEL: Johnny Come Lately. Average Joe, right?

HEDVIG: Mm-hmm.

DANIEL: For women, we see Molly, which is just any girl. We see that in MOLLYDOOKER, somebody who uses their left hand, because Molly was considered effeminate, which is really sad. But also, MOLLYCODDLING somebody. That Molly is the name Molly, because Molly was a generic girl’s name. So, that one does appear to be related: JOSHING comes from the name Josh.

Second one, GAG. When someone writes a gag for a movie or a play, does that have anything to do with GAGGING someone, stopping them from talking? Related or not?

HEDVIG: Stopping from talking is… one thing that gagging means.

DANIEL: Yep. I decided to focus on that sense.

HEDVIG: Yeah. No, I’m just going all in on that this is an explicit episode.

DANIEL: [LAUGHS]

BEN: Of the two… I know we probably want to breeze right past that, but not on my watch.

[LAUGHTER, HEDVIG SNORTS]

BEN: Is it not the same thing?

HEDVIG: Having things stuck in your oral cavities.

BEN: Yeah. Is that not the same thing as being prevented from talking? Am I…?

DANIEL: That is the same thing.

BEN: Okay. Little bit of semantic drift, perhaps?

DANIEL: Yes, it is. It is semantic drift. When you see GAG as in physically stopping someone’s mouth, that’s from the 1500s, and then to GAG somebody, to forcibly stop them in an authoritarian way from talking about a topic, we see that in the 1600s. But I’ll go back to the question. Do either of those have anything to do with the joke GAG?

BEN: Yes.

HEDVIG: Yes.

BEN: I say yes. Ah, what? You can’t say yes. I said yes!

DANIEL: [LAUGHS]

BEN: I say yes, because… I got in there first.

HEDVIG: I thought yes…

BEN: I claim it.

HEDVIG: …very early.

BEN: I say yes because when you’re laughing properly, like if you’re doing a good old belly laugh, you can’t talk.

HEDVIG: Yeah, same.

DANIEL: Interesting. Okay.

BEN: Or even more than that, in the classic vaudevillian thing, maybe people are smashing the peanuts or the popcorn or whatever, and a big unexpected laugh happens and they actually choke and gag.

DANIEL: Okay. Well, here’s the answer. The two senses of GAG are in fact related. You’re both correct.

HEDVIG: Yes.

DANIEL: Yay.

HEDVIG: Status quo.

BEN: But why, Daniel?

DANIEL: Let’s start with the GAG in your mouth. Probably called that because it sounds like somebody gagging. [MAKES GAGGING SOUNDS]

BEN: I guess. But onomatopoeias are always super weird anyway, really.

DANIEL: Yeah, they are. But GAG, as in a joke, has taken a certain path. It started out in the 1700s meaning to trick someone, to deceive them, to take them in with talk. And it’s probably because you’re taking something untrue and you’re shoving it down their throat, seeing if they’ll swallow it. “Ah, you’ll swallow this, you’ll swallow anything.” We still say that. Then by the 1800s, it meant a big story or a tall tale, and then by 1863, it meant a joke. So, the two are in fact related.

BEN: Wicked.

HEDVIG: Nice.

BEN: Most importantly of all, Ben wins.

DANIEL: [CHUCKLES]

HEDVIG: Yeah, he did.

DANIEL: Two to one.

HEDVIG: We say, “He swallowed it with all the fat and hair.”

DANIEL: Eww.

BEN: Oh, meaning, like, he really just took… like he didn’t even look.

HEDVIG: He swallowed everything, [SWEDISH at 01:13:03] Like all.

DANIEL: Gosh. Well, this has been an iteration of Related or Not. We’re looking for sponsors for this game. So, if you have a product that doesn’t suck and we don’t hate you, get in touch and you can sponsor an episode of Because Language.

HEDVIG: To be fair, we like a lot of things and people.

BEN: Yeah.

[MUSIC]

DANIEL: I am experiencing intersubjectivity with Dr Nick Enfield of the University of Sydney. Along with Jack Sidnell, he’s the author of Consequences of Language: From Primary to Enhanced Intersubjectivity. Wow. Nick, thanks for coming on the show for the second time.

NICK ENFIELD: Thanks for having me. It’s a pleasure to be here.

DANIEL: Tell me something about language that you think is really, really cool.

NICK: Oh, there’s so many cool things about language. Sometimes, I have to stand up in front of first years, which I’m doing this year: I’m teaching first-year linguistics. The things that get them interested are just so diverse. Some of them will find it extremely cool that our vocal apparatus moves at an unbelievable speed. Some people will find it cool that how you frame somebody’s involvement in an incident might make the difference between them going to jail or not going to jail. I think there’s a ton of really interesting things. Probably one that really gets students excited is one I spoke to students about this week, which is the little differences that make a big difference. So, I’m not sure if you know this study of emergency phone calls in Perth, but they looked at emergency calls for cardiac emergencies, and the call takers were asked to follow a script. That’s the standard procedure. So, they get the name and location of the person, and then they ask, “Tell me exactly what happened.”

DANIEL: Yeah.

NICK: And what happens is that people will give a kind of narrative when you ask that question. So, they’ll say, “Well, we were in our bathroom getting ready for bed, and my wife turned to me and said, ‘I feel funny,’ and collapsed on the floor,” and so on. So, there’s a narrative that gets elicited. Now, sometimes the call takers slightly differ in how they deliver that line. So, instead of saying, “Tell me exactly what happened,” they say, “Tell me exactly what’S happened”, with the present perfect form of the construction, with the apostrophe S on WHAT.

DANIEL: Right. Okay.

NICK: It turns out that people respond to that question not so much with a longer narrative, but more of a focus on the here and now, the situation at hand, as would fit the form there with the apostrophe S.

DANIEL: Because the present perfect is for present relevance, right? “I ate a pizza” means it happened in the past. But if I say “I’ve eaten a pizza”, that means it happened in the past, but it’s also relevant now.

NICK: Exactly. So, the question is, what has happened? That turns to a more here-and-now summary, in a sense, of what’s going on right now. On balance, that results in faster dispatching of ambulances to emergencies.

DANIEL: That is bonkers! And it’s just an S.

NICK: It’s just an S.

DANIEL: That’s the most amazing thing I’ve heard all day!

NICK: Yeah, it’s pretty amazing. And you know, I think there’s plenty more things like that, which we just need to dig around and find. I think it’s a beautiful illustration of how these little things make a big difference. When you have a bit of linguistic knowledge, you can dig into it and explain why that is. The more people learn about the system of language they’re wielding, the more agency they get in situations like that.

DANIEL: That’s very, very cool. Also, I love teaching first years. It’s so much fun.

NICK: Yeah, I agree. It’s wonderful.

DANIEL: Earlier in this episode, this very episode in which you are appearing, we chatted with Dr Morten Christiansen about large language models and GPT-whatever, whatever version is current. We have wondered about the difference between language, the way that large language models do it, and language the way that we do it. And we’ve kind of been asking, “Well, what are we doing that they’re not? What else is going on when humans do language?” There’s a lot.

NICK: There’s a lot. It’s a very exciting field at the moment. Obviously, very fast moving, and a day or two’s lag in broadcasting a conversation may mean that the whole field of large language models and AI of language has changed.

DANIEL: God. That’s true.

NICK: But the issue is the same. It’s just incredible, as I think everybody agrees at the moment, what these AIs can do with language. And I think it’s really drawing our attention to… Well, actually, I want to rephrase. I don’t think it’s actually drawing our attention quite in the way that it should. But what I find from looking at this phenomenon is that people want to say that the machine can’t understand in the same way that we do.

DANIEL: Yeah, I’ve said that.

NICK: The machine, it’s an autocompleter or whatever. People like to come up with these kinds of dismissive ways of talking about what the AI can do. It’s certainly impressive what the AI can do, but I find it interesting that people are so willing to dismiss what’s going on underneath it, as if they have a very clear sense of what it means for another human to understand. Now, we all have a very clear grasp, at least, on how it feels to understand something. But when we are talking to someone else, you know, what do we really know about how they’re understanding what we’re saying? There’s this huge gap between the surface behaviour of responding to something that you’ve just said or engaging in a conversation, and what’s going on underneath the surface in the person’s mind.

People like to say that AI should be able to explain the actions it takes, and often it’s said, “This is one of the big differences between AIs and humans.” But if you get a human to explain their action, for example, why they said something they said, we’re pretty terrible at it if you have any way of testing the accuracy of it. But what we really know about human explanations is that they’re made up after the fact, they’re rationalisations, they’re justifications. They actually don’t bear a lot of relation to the deep underlying causes which you would ultimately want to locate in neurons firing. That’s not the kind of thing we have any capability to understand.

So, I find it interesting that people are, on the one hand, so willing to accept stories about understanding among humans, and at the same time, find it so hard to accept the idea that a machine could have natural language understanding. So, I think that’s one big piece that isn’t really getting a lot of attention. And I think the other one, which is maybe not surprising to you, knowing what I work on, is that you don’t really have a social relationship with GPT-3 or GPT-4. When I say you don’t really have… you know, like everything in the world, we anthropomorphise it and we feel connected to it in certain kinds of ways. But I think this is where you get a real difference: GPT-3 or GPT-4 or whatever the AI is, it doesn’t have any commitment to us. So, we don’t have a relationship in the classic sense of investing in each other, having expectations of each other, knowing each other in the same sense as we would apply to our personal relationships that get us through life.

So, I think that the mutual commitments of social interaction, that’s the real thing that’s missing, and a lot of the time I feel that people are overly focused — which is perhaps standard in linguistics — overly focused on the informational side, whether the AI is using an underlying model of information processing that’s the same as the human one for language. But the social side of it is, to my mind, massively important and that’s where I see a big difference.

DANIEL: And when people imagine that large language models have achieved sentience, what they’re essentially doing is they’re hallucinating the social side in the same way that people think that the people on TV are somehow their friends in a para-social way, it’s like they’re meeting more than halfway and imagining a social relationship.

NICK: Yeah, I agree. What’s interesting is that — similar to the point I was making about people’s readiness to assume the richest interpretation of human relations — when you say that you’re hallucinating the social side of your interaction with an AI, what’s to say you aren’t hallucinating the social side of your interaction with a human being, like, the interaction that we’re having now? Just because you’re hallucinating, it doesn’t mean it isn’t there, but it may be that at the same time as it being there, you’re also hallucinating it in the same way as you hallucinate the so-called sentience of an AI.

So, underneath, I think there’s something very important about our attribution to other agents, our attribution of sentience and similar concepts to other agents. We’re often trying to locate it in the other agent, whereas actually it’s mostly being projected by us.

DANIEL: I am really surprised to hear you saying this, because I thought that you were going to say something like, “Oh, well, obviously, we’re doing a lot of things that computers aren’t doing. We’re combining our intention, we are making guesses about the other person’s knowledge and current mental state”, and yet it sounds like you’re making a case for a little bit of skepticism about the idea that we are not stochastic parrots. That’s a term that’s come up lately. Are we stochastic parrots? Are we simply repeating statistical patterns? Because early humans didn’t get their language knowledge by plowing through a billion words of data and hunting down statistical patterns.

NICK: Well, no, humans do do that. But of course, we don’t have access to anything like that much data. The obvious, very big difference between what’s going on under the hood with these AIs as opposed to humans is that any human will acquire language and be able to use it with the tiniest fraction of the input that a large language model needs, right?

DANIEL: Yeah, we bootstrap.

NICK: Yeah. Of course, there’s a whole lot of inference. Of course, there’s a whole lot of attribution and a lot of mind-reading and all the rest of it. But my point before was, I was saying that when we’re attributing intentions to others, what their actual intentions are is another matter.

DANIEL: Yeah. That’s true.

NICK: We attribute intentions to them and then act in accordance with that. And then what happens after that, in the next step of an interaction, becomes public, and the interpretation becomes public, and it either gets disputed or repaired or it doesn’t. So, I guess, coming back to your surprise, [DANIEL LAUGHS] my sense of the more, I don’t know, psychological or informational side is that we don’t really have much idea about what’s actually going on in our interactions under the surface. But where I would really want to say the action is, is more in the public behaviour that happens around the things that get said when we use language.

So, you know, this idea that we’re stochastic parrots or what have you is really about whether we apply statistical learning, whether we’re doing autocompletion, and all of that. Certainly, people have claimed that we do a lot of recycling and anticipating in language. But what I would really want to focus on is the interpersonal commitments that come about through using language. That’s not quite the same as the mind-reading stuff you mentioned before; it’s more about something that follows from the things we say, rather than being part of the process of saying and understanding. And that is that we build accountability, and we agree to the accountability that comes with social interaction and the use of language.

What do I mean by that? Well, I mean that language is a joint commitment. It’s a joint activity. So, if we’re having a conversation in the same way as any other kind of social interaction, baking a cake together, or going for a walk together, or whatever the case may be, we’ve signed up to this joint activity, we’ve committed to giving each other our attention and our time, we’ve committed to using words and phrases in the way that they’re normally used, shall we say? And whenever there’s a departure from any of those things, well, we have to account for it. We’ve got a duty to sort of explain what’s happening, to express our surprise if something goes away from the norm, and so forth.

So, I think that’s where you really see the human coming into things: in people orienting to the commitments that they’ve made, and where you get into a normative framework rather than just an information processing framework, which I think is what most people are focusing on in relation to things like large language models. It’s the normative framework of joint commitment that I would really want to focus on as being particular to human interaction.

DANIEL: Okay, and that’s what we’re doing right now. We’ve decided to meet together at this time online, and we are committed to having a good discussion, obeying certain norms, contributing together to a pile of knowledge between us called the common ground and all that human stuff like that.

NICK: Absolutely. There are infinite ways in which I could fail and create a problem through talking about irrelevant stuff or just failing to talk at all, giving you one word answers, you name it. I could do a lot of things. I could just hang up the phone. That would be really costly, right? That would be a major problem for this particular interaction, but also for things like our future relationship, matters of reputation, all these kinds of things. I don’t think there’s any evidence that something like GPT-3 or GPT-4 has any kind of skin in that game.

DANIEL: You use the metaphor in the book about sawing a log, with one of those two-person saws where one person holds each end, and it reminded me: it’s said that in the logging days of the Pacific Northwest (this is my country, the USA), when a couple wanted to get married, they first had to get on opposite sides of one of these saws and saw a log together. And I always liked that metaphor, because it really does show that you need to be able to communicate. You need to commit to a joint activity. You need to be engaged in the pushing and pulling. You have to know the sequence of what to do and when to do it. And I just thought that was a super good metaphor for language. Also, when you’re looking at the PDF of the book, one side of the sawing picture is on one page and the other side is on the other page. So, if you go up-down, up-down, up-down between the two pages, you get a little animation of them sawing the log.

NICK: Nice. I did not realise that. That’s excellent.

DANIEL: Get the PDF, folks.

NICK: Yeah, we really like that. We like that metaphor. It’s obviously the one we started with. But I should say it’s not so much a metaphor as being a first step in building up an understanding of what joint activity and intersubjectivity really looks like. So, the sawing a log with two people case is good, because it gets you into the idea of joint action while keeping the activity in question really inflexible in some sense. So, if I’m pushing the saw, you’ve got to be pulling it. At the very least, it’s going in your direction. So, there’s this complete physical dependence between what I’m doing and what you’re doing in relation to this common focus.

I like the case, because it’s triadic rather than dyadic. We think about the interaction involving two people, like the two people in the example you gave who are getting married, but don’t forget there’s the saw there. There are three entities involved. The two humans are not just coordinating. They’re coordinating around the saw and around the activity that is taking place there. So, they’ve got a physical activity that they have to engage in. And so, they’re literally coordinating their bodies around that activity. And they’re also coordinating around the nonphysical aspect of that interaction, which is things like, well, in that case, what’s their common goal? The common goal is, well, they want to separate a plank-shaped piece of wood from the log that they’re sawing. It’s not really until they’ve got the saw all the way through that they’ve achieved that goal. So, there’s the visible physical aspect of coordination and the nonphysical, normative goal-oriented aspect of the interaction. And that’s where you really get true normatively oriented joint commitment, and that’s where you get intersubjectivity.

DANIEL: Okay. So, this joint commitment, this joint intention, blending our intentions to do a thing, this is what you call in the book “intersubjectivity”.

NICK: Yeah, that’s right. So, intersubjectivity is, of course, one of those words that’s used in so many ways it’s not funny. We obviously allow that other people have defined the word in the ways that they like. For us, intersubjectivity is really this strong sense of joint commitment and joint orientation of two individuals around some point of coordination or around some common goal, which is crucially regimented or upheld by this accountability that comes into play. So, I think the accountability side has a few interesting features. One is the idea that you are able to rebuke the other person when you have a joint commitment in place.

DANIEL: “What are you doing? Don’t push, pull! Come on!” [LAUGHS]

NICK: Exactly. And that, really interestingly, invokes language. So, if you are in a joint commitment with someone… This is something that Margaret Gilbert, the philosopher of joint action, pointed out years ago: even something as simple as going for a walk together brings in this joint commitment, and it brings in the right to rebuke the other. If, for example, they’re walking too fast and they leave you lagging behind, you can call out and say, “Hey, I can’t keep up with you.” Now, you don’t have the right to rebuke another person who’s randomly walking in front of you on a city street. You didn’t make any agreement with them to go for a walk together. They’re just walking in the same direction as you.

Language comes into play exactly when you’re making these kinds of rebukes or sanctioning people for not doing their part, because you have to use language to characterise the problem. You’ve got to say something like, “Hey, this is a conversation,” or “We’re on a walk together. We’re not jogging,” or whatever words you invoke. That is where your framework for accountability is built. So, you’ve got a particular language you speak, and the language furnishes you with concepts that are heavily culture specific and heavily practical within your culture. And one of the main things we use words for is characterising right-wrong, good-bad, appropriate-inappropriate with respect to things like: are people upholding their commitments.

DANIEL: Okay. So, we’ve got two things on the table here. We’ve got intersubjectivity, where we agree to combine our intention to do a thing, either go for a walk or have a conversation on a podcast. But then that brings accountability, the act of combining your intention with somebody implies that you have responsibilities to make it go okay.

NICK: Yeah, that’s right. You have responsibilities, but crucially, those responsibilities… rights always have obligations that go with them, and obligations always have rights that go with them. So, when you don’t uphold your responsibility, that means that others have rights, and indeed sometimes duties, to sanction you for that. That’s what keeps everybody calibrated and aligned, in a sense. This is, in a sense, kind of basic microsociology, but we’ve tried to tie it in with language and point out that the very idea of sanctioning deviant behaviour is, in some sense, really very much done through language. That’s one key point.

But the other key point is that language itself is subject to all of this same normative regulation. This is another key part of what we’re trying to talk about in the book. Others, of course, have made this point. Herb Clark has been a real proponent of this idea that using language is itself a joint activity, and it’s also used for organising joint activities and coordinating joint activities. So, it’s used for organising and coordinating itself. It’s used for policing itself. It’s used for sanctioning people who aren’t upholding their part of the bargain. So, a simple example would be something like, “Answer me, I asked you a question.” That would be, again, using language to characterise what I just did, using language. So, I’m using words to describe the fact that what I did was a question, that what was due from you was an answer, and calling you out for not doing it. That’s a really nice example of language being turned on itself.

So, for us, that’s one of the key features of language that is criterial for what we call the enhanced intersubjectivity of human life. That is the very possibility of using language reflexively, of turning language back onto itself, of using the communicative system to communicate about itself, which is something that was long ago identified as one of the unique features of human language. It was one of Hockett’s so-called design features of human language (not in the original set, but in a subsequent set, which included reflexivity).

DANIEL: So, you mentioned the phrase “enhanced intersubjectivity”. I want to get into this, because the book is called Consequences of Language. Just implying — tell me if I’m getting this wrong — that before language we had intersubjectivity, we could join our intentions up to do stuff. But after language arose, it was enhanced. Can you tell me what you think that…? Have I got that right, first of all? And second, what did intersubjectivity look like before language and after?

NICK: Well, what we try to do in the book is start out with examples like the two-person sawing, which is nonlinguistic. And that’s why it’s a nice example. You don’t have to use language to coordinate that activity, and indeed… of course you can. Sometimes, you have to if things go wrong. But if things are running well and you’re with a familiar partner and so on, you don’t have to use words at all to get that going.

DANIEL: Yep.

NICK: So, when we talk about basic intersubjectivity, that is for us a kind of prerequisite for language, very much drawing on work by people like Mike Tomasello, who’ve really fleshed this out, and many others who’ve argued that you need a certain kind of social cognition in order to get language off the ground at all. That includes things like a certain theory of mind and a certain capacity for joint attention, the kinds of things that you mentioned earlier. So, following arguments by someone like Mike Tomasello, you’ve got this idea that a certain underlying infrastructure in humans made language possible in the first place. I think that’s what we mean by basic intersubjectivity. That then allows language to emerge and to evolve in a historical time frame.

And once you’ve got language circulating in a community, specific properties of linguistic structure, for example, words and the things that they mean, then that’s where you get this enhanced intersubjectivity developing. That’s pretty much what I was describing when I was talking about the richness of individual languages, which have their own vocabularies and their own grammatical resources. Those resources allow you to be very specific about what we are coordinating around, what game we’re playing right now, what you’re accountable for doing, how I would sanction you: all of the specific aspects of how interaction is played out, including the accountability that we have. All of that is very linguistically rich. In a sense, that’s what we mean by enhanced intersubjectivity, because you get very rich local conventions around what you are accountable for and how that accountability is played out.

DANIEL: And I was thinking of how this relates… You mentioned word meanings, but I was also thinking about how this relates to grammar. In the book, you say, “Grammar carries a huge functional load in building intersubjectivity by increasing common ground through productive generative mechanisms for linking subjects to predicates in novel propositions.” Can you just walk me through that sentence, because I feel like it’s super important?

NICK: [LAUGHS] Sounds a bit technical, doesn’t it?

DANIEL: I mean yeah, [LAUGHS] but that’s that kind of book.

NICK: What we’re doing there is simply acknowledging and pointing out something really basic that linguists have pointed out about language itself, which is the classic Humboldt/Chomsky point that languages allow you to say things that haven’t been said before; languages allow you to productively take small bits of conventionalised meaning and say completely novel, unique things. This is a standard observation about language, and it is what structure allows you to do in language. You’ve got a lexicon, you’ve got combinatoric rules, and you have effectively infinite productivity of expression. But our point is that this isn’t just a fun fact about what languages make available. It’s actually this incredibly powerful creative feature that people can then draw on, not just in conveying information, but in characterising what people are accountable for having done or what people are accountable for doing next.
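(A note to make that productivity point concrete: the five-rule grammar below is invented for illustration, not taken from the book. Because a noun phrase can contain a verb phrase, which can contain another noun phrase, even this tiny lexicon generates an unbounded set of novel sentences.)

```python
# A toy generative grammar: a small lexicon plus combinatoric rules
# yields effectively infinite productivity of expression.
import random

grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursion lives here
    "VP": [["V", "NP"], ["V"]],
    "N":  [["buffalo"], ["hunter"], ["plain"]],
    "V":  [["saw"], ["crossed"], ["moved"]],
}

def expand(symbol):
    """Rewrite a symbol until only terminal words remain."""
    if symbol not in grammar:  # a terminal word
        return [symbol]
    production = random.choice(grammar[symbol])
    return [word for part in production for word in expand(part)]

print(" ".join(expand("S")))
# e.g. "the hunter that saw the buffalo moved" (output varies per run)
```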

So, I can use the creative resources of language to describe what you just did, or to describe a situation that I’m interested in talking about. When people talk about this capacity of language, they usually give classic examples like, “Oh, look, there’s a lot of buffalo congregating down on the plain. Let’s go hunting.” So, it’s about updating each other with news. But we’re focusing much more on how what people do with language is not so much that as setting out our own stance on what has happened or what someone has done, precisely in order to do things like call them out, share attention, or coordinate around a stance towards that person. So, for example, if we’re gossiping about someone, we’re going to characterise what they’ve said in a very particular way, so we can align and say, “Yeah, we’re on the same team. We see this scene in the same kind of way.” The generative capacity of language allows you to really finely tune that, and then over time to calibrate that with your social associates.

DANIEL: You mentioned an example of this in the book, because what you’re saying is that we’ve got to not just say stuff, but we’ve also got to report stuff. One grammar thing that helped us to do that was like using complementisers like THAT. You don’t just want to say, “The buffalo are on the move.” You want to say, “I heard THAT the buffalo are on the move,” or “John told me THAT the buffalo are on the move.” This is an important part of the whole intersubjectivity thing, and grammar helps us to do that.

NICK: That’s right. Quoted speech is this phenomenally powerful thing. All languages allow you to report what someone else said. And this is something that is wholly reliant on this unique design feature of reflexivity of the communicative system. So, the very capacity of language to refer to the system itself is the only thing that allows you to have reported speech. So, I can say, “Well, John said, blah, blah.” “John said there are buffalo down on the plain.” What I’m doing there is using speech to refer to speech. Informationally, that’s the basic thing I’m doing. But what I’m also doing is separating out the animator of this message, that is, the person whose mouth is actually producing the sounds that correspond to it. So, I’m the one who is saying these words, that there are buffalo on the plain. Quoted speech means I can separate the speaker from the person who is responsible for, in this case, the truth or falsehood of the claim. So, if I say, “Well, John said there were buffalo down on the plain,” and then I went down there and there’s none there, then the guy’s a liar. I can do that because of quoted speech. Then we can agree: oh, this person is a liar, or they’re not our friend, or whatever it is. And quoted speech is this massively powerful thing, because it separates these components of agency. This is what Goffman referred to as the animator and the author and the principal of a particular message. So, reflexive language, and quoted speech in particular, allows us to separate the one who’s responsible for the meaning of the words from the one who actually, let’s say, pronounces the words and lets other people know them.

What it also does is that it allows you to extend your agency. I can ask you to carry a message to someone in the next village. Just give a message. Just go and visit someone and give them a message, tell them something. That’s the thing that language does and has done forever. It’s incredibly powerful from an agency perspective, but that is not possible with other forms of communication in the biological world.

DANIEL: Yeah, you can’t do that without language. It would be super-duper hard.

NICK: Yeah, and it’s the reflexive capacity of language in particular that allows you to do that.

DANIEL: All right, so, it sounds like in some ways, language allowed us to do stuff that we were already doing. Like, we still had intersubjectivity, we could still combine our agency before language, but it allowed us to do it better. In fact, it strapped a rocket engine to it.

NICK: Exactly.

DANIEL: But then there are some things that we couldn’t do. And so, this being able to separate ourselves from the story we’re telling and do different things with it: once we had language, then we could hedge, or we could give evidence, or we could do lots of things about the story we’re telling, like the buffalo on the plain.

NICK: That’s right. We’ll probably never know, but it’s an interesting question as to how much of our current grammatical machinery is purely emergent from this very basic, but — I don’t know — quasi-magical innovation of language that allows you to use it to refer to itself. So, it allows you to separate the message from the messenger, and that little gap is exactly where you start inserting things like hedges, and evidentiality markers, and quotations, and things like that. It allows you to put frills around the message itself, and to put the message on a little tray and then put [DANIEL LAUGHS] annotations on it, in some sense. Whereas in any other kind of system, the message just comes at you. So, that’s very limiting in some interesting ways, and bringing language into the equation is certainly a rocket engine of sorts.

And I think evolutionarily… So, Robin Dunbar is very well known for promoting ideas related to ones we’ve been talking about today, that language’s real function is about things like reputation management, social relations, and so on. And I think that he’s rightly being criticised for not appreciating the power of the real technical aspects of human language that make that possible.

DANIEL: Okay. And what else happens when humanity gets language?

NICK: Well, this is an idea that we touch on, I think, in the book, but it’s something that I’ve been thinking much more about recently. It really follows from what we discuss in the book, and that is that there’s a kind of lock-in that follows from the historical development of a complex object of cultural accountability like a language. There’s a billion… well, there’s an infinite sort of set of possible concepts that you could develop. But of course, we only develop, in any given culture, a subset of those things. So, in a given culture over time, historically, we build up through language a kind of worldview, if you like. We build up in language a set of concepts that people become accountable for knowing and living in terms of. And once those things start circulating, then you naturally gravitate towards those ideas and not all the other possible ideas. So, a worldview is empowering within your own community, in some sense. It allows you to calibrate, but it also draws you away from other ways of seeing the world, other kinds of accountability that could be out there. I think that’s a classic theme in anthropology, which says that one of the reasons we need to study linguistic diversity, cultural diversity, and social diversity is that we’re locked in, in ways that are very hard to see from the inside when we’re a member of a particular culture.

So, I often quote Benjamin Lee Whorf [DANIEL LAUGHS], who had this idea that if you were a member of a group of people who could only see blue, you couldn’t understand what that means until you’d had the opportunity to see other colours; then you would look back at your current worldview and realise that you’re not seeing all the ways of being that might be available. I think one of the consequences of language is that you end up speaking only one language. Of course, most people in the world speak more than one, but they don’t speak 6,000. They’re going to speak two or three or four. And so, one of the really important consequences of language is that the diversification of language, which is driven by local calibration of people within communities, results in this kind of lock-in and path dependency for people who are agents within those social infrastructures.

DANIEL: I was chuckling a little bit because you mentioned Whorf and I was already thinking, “Well, this sounds dangerously Whorfian.” But it’s kind of not, because strong Whorfianism would say, “If you only have a word for the colour blue, that would affect your ability to perceive other colours.” Whereas what you’re saying is that if there’s a group of people that see only blue, then that’s going to affect the lexicon. That’s the opposite direction.

NICK: Oh, I’m not sure. When it comes to Whorf, there’s a curious historical question about Whorf, and his life, and what he wrote, and what he meant, and all of that. It’s one of these kinds of Rorschach tests, in a way. You see different things in it. What I like to point out as often as I can about the material that Whorf wrote is that, while everybody focuses on things like perception, and reality, and the physical world, and time, and space, and all of that, he was often writing about accountability and how language is used in justifying and explaining and accounting for things that happen.

So, his most cited and most famous anecdotes about linguistic determinism are these cases from his fire insurance days. He’s talking about people explaining why they were careless with a cigarette, or why they didn’t expect a certain substance to burn, because of the ways in which they were labelled. I think the thing that often gets missed is precisely that people were using those linguistic labels not as triggers for thought, but as tools for defending themselves, tools for justifying themselves, tools for explaining the decisions that they made.

And so, I think that’s precisely what we’re talking about in this book: you reach for the tools that are at hand, and that’s what historically specialised languages provide you with. A set of tools. Tools for what? Tools for explaining what’s going on, for holding others to account, for calling people out, for characterising the events and situations that you see around you in ways that everybody can understand.

So, coming back to your question about blue, if you have a group of people who only see blue, I think another way to put it would be you have a group of people who only have a word for blue and no other colours. Well, you would predict that there’d be a different kind of matrix of accountability around things like how tolerant people are of you getting the wrong shade of paint at the paint store when someone asked you to pick it up. Because in certain cultures, you’re going to have greater capacity to call out this difference in colour. Why would you do that? Well, because, “No, pass me the other colour, not that one.” When you have the language for that, well, that’s what it enables you to do. So, I think that people are focusing so much on perception and these individual internal functions of language when really what’s so important is the coordinating functions around language.

DANIEL: One of the things that we’ve done by talking about language on the show with so many great people is that my bulletin board, which is now tangled in red string between the different elements of language, has just acquired a couple of new nodes. So, I had been thinking about joint intention, cognitive brain power, social factors, grammar, semantics. But now, I’ve got intersubjectivity and I’ve got accountability, and those are two really important factors that I just hadn’t considered before.

NICK: Great. That’s fantastic. One of the phrases that we use in the book and also in our previous book, The Concept of Action, is the phrase: the tyranny of accountability. This is an idea borrowed from sociologists like Goffman and Garfinkel. It’s the idea that you cannot operate in a society without being accountable for what you do, in the sense that people will view you within the kind of matrix of the local set of norms. And whether you like it or not, people will judge you in that way. If you don’t like it, well, you may have to move to another society, or go away from people, or put up with being stigmatised. But the use of language in everyday life is no different from other aspects of social practice. We are working within a tyranny of accountability, and that’s got very positive aspects. It’s what allows us to coordinate so powerfully, but at the same time, it constrains us in these interesting ways.

DANIEL: The book is Consequences of Language: From Primary to Enhanced Intersubjectivity. It’s available from MIT Press and it is open access, so you can read it right now. We’ll have a link on our website, becauselanguage.com. The book is by Dr Jack Sidnell and my guest, Dr Nick Enfield of the University of Sydney. Nick, how can people find out what you’re doing?

NICK: Well, I have a website, nickenfield.org. Not sure quite how up to date it is, but it’s not bad. I think it’s not bad.

DANIEL: Yep.

NICK: So, yeah, that’s where they can sort of see books and links, and things like that. I’ve got a Twitter handle, but Twitter is a bit kinda weird these days. It’s getting less… It’s a bit sad, really, because it doesn’t seem to be as interesting as it used to be. Anyway, I am on Twitter, but I don’t really post that much these days.

DANIEL: You’re detached. There’s a Twitter detachment.

NICK: Yeah, a little bit. I haven’t gone anywhere else, but anyway, I do have a Twitter handle, but nothing else.

DANIEL: Okay. So, nickenfield.org. We’ll be encouraging people to check that out. We’ll drop a link on our website there as well.

NICK: Great.

DANIEL: Thanks again for the chat. This has been great.

NICK: Yeah, no worries. I really appreciate the interest, and thanks for featuring the book. We’re really delighted.

[TRANSITIONAL MUSIC]

DANIEL: Let’s go into Words of the Week.

HEDVIG: Okay.

BEN: What have we got?

DANIEL: Here we go. The first one, and you have to guess what it is. This was suggested by Diego on our Discord. Graupel. Graupel, G-R-A-U-P-E-L.

BEN: Graupel.

DANIEL: I’m still learning new words, and this is one of them.

BEN: G-R-A-U, Graupel.

DANIEL: Graupel.

BEN: Graupel.

DANIEL: What do you think it is?

HEDVIG: Graupel.

DANIEL: I should come up with three alternatives.

HEDVIG: Yeah, you should.

DANIEL: Okay.

BEN: Yeah, I’m struggling on this. Grapple and chortle… nah, because there’s no A-U there.

DANIEL: Yeah.

BEN: Graupel, groan. And what is aupul? So, I’m really trying.

DANIEL: Does it mean, A, is it a method of extracting your car from a sandy place? “We managed to graupel it out.” Is it a kind of snow, or is it something you put on your breakfast cereal?

BEN: I think it’s a kind of snow.

HEDVIG: I think it’s something you put in your breakfast cereal.

DANIEL: It’s a kind of snow.

BEN: Oh! Three plays one, motherfucker!

DANIEL: The crowd goes wild! So, it’s not hail, it’s not sleet, but it looks like all of them. It’s kind of a mixture of snow crystals and ice and you can scrunch it, so it’s not super hard.

BEN: Now, is this because of unprecedented snowfall in the Northwestern United States? Is that what’s going on there?

DANIEL: I haven’t seen that it’s more or less common. Hedvig, have you seen this before?

HEDVIG: I think we had that recently.

DANIEL: Really?

HEDVIG: I was calling it hail snow.

BEN: Right, right.

HEDVIG: We’ve had a lot of weird weather in Central Europe lately.

DANIEL: It might have been graupel.

BEN: Now, is this an official… sorry, is this an official meteorological term or is this something that’s just popped up, because people needed a word for it and they didn’t have a word for it?

DANIEL: It is an official term. It’s been used now… It’s about a hundred years old and I just never encountered it before.

BEN: I fucking love shit like this.

DANIEL: I know.

BEN: Like, when you do boats, and there are random weird words that don’t exist anywhere else and haven’t been used, but every now and then, you need a word for the thing. I love it. I love it. I’m here for it.

DANIEL: So, you know how hail forms. Rain falls and it freezes, but then it gets tossed up by wind and collects another layer, and gets tossed and tossed and tossed until it’s so heavy that it falls. That’s hail. But graupel happens when snow falls, and then it goes through a layer of supercooled water droplets. That’s water droplets that are colder than freezing. And the reason they don’t turn into ice is that there’s no dust up there. For some reason, the air is just so clean that the ice has no nucleus to form around, but then the snow becomes that nucleus, and so the water freezes onto it and falls. Again, it looks like hail, but it’s squishable. It’s like that really nice ice that you see sometimes in ice machines. You know the kind?

HEDVIG: The kind that’s snow?

BEN: Yeah, I do know the kind you mean.

DANIEL: Where it’s like there’s a drink fountain and it’s kinda scrunchy?

BEN: It’s not quite snow, but it’s not quite not snow.

HEDVIG: We call it the hugging or squeeze snow. It’s really good for snowballs, but it’s snow, it’s not ice, and it makes a very pleasing sound.

BEN: Yeah. I think you’re talking about a different thing though, Hedvig. I think this needs to be a little bit more hail like. It needs to be a bit less snow like.

DANIEL: There’s a cue to the way it looks in the name, because the word GRAUPE comes from Germanic languages and it’s the word for pearl barley.

HEDVIG: Yes, I had to buy it recently. I was very confused.

DANIEL: Use it all the time?

BEN: [LAUGHS] What is this?

HEDVIG: I had a Ukrainian friend coming for Christmas, and I was trying to cook a Ukrainian Christmas dish and it involved pearl wheat.

DANIEL: Oh.

HEDVIG: I spend a lot of… No, sorry. It involved wheat berries. I’ve learned a lot about wheat and the kinds of things that are similar to wheat, like pearl barley, which is not wheat berries.

DANIEL: No, not the same. Very tasty in soup though. Very hearty, very warming.

HEDVIG: Yeah, we have some.

DANIEL: So, that’s GRAUPEL.

BEN: What is the next Word of the Week?

HEDVIG: Yeah. [LAUGHS]

DANIEL: Okay, we got a couple. One of them, Andy from Logophilius said last year we had QUIET QUITTING. So now, of course, we have QUIET HIRING, a new name for an old phenomenon.

HEDVIG: What is this?

DANIEL: Quiet quitting, as we know, is kind of a dumb term for just doing what you’re supposed to do and not busting your butt in the rat race. This comes from an article from makeuseof.com. It says, “While quiet quitting is about employees doing the bare minimum at their jobs, quiet hiring is about employers trying to acquire more talent and bandwidth without hiring more people.”

BEN: Ah, okay.

DANIEL: In other words, what they always do.

BEN: Yeah. Is it just like, “We’re just going to add this to your duty statement,” and it’s like, “What?”

DANIEL: Okay.

HEDVIG: Ohh.

DANIEL: I think of that as job creep, but quiet hiring is another term that’s been proposed.

HEDVIG: Because hiring sounds like it’s more people.

BEN: Mm-hmm. Yeah, well, quitting sounds like you quit, but you don’t, so.

DANIEL: I guess.

HEDVIG: Yeah, they’re both bad.

[LAUGHTER]

DANIEL: They are both. Then this one suggested by Magistra Annie, RAGE APPLYING.

BEN: Oh, I think I can tell what this is.

DANIEL: You do? Go ahead.

HEDVIG: Yeah, me too.

BEN: I’m going to paint a little picture, do a little bit of improv for the audience. Actually, this is way too close to the truth. I’ve just had a terrible week at work, and I, in a fit of frustration and rage, load up whatever my local job search thing is. In Australia, it’s called Seek, and in America, it’s called monster.com, whatever, and I just start applying for stuff that’s relevant [DANIEL LAUGHS] to me, because fuck ’em. That’s why! And you never know, shit might fall your way.

DANIEL: Yep. You might apply to 20 different jobs very quickly. Somebody named Amanda on TikTok did this with #rageapplying. Got quite popular. I think she managed to get a job that was like $25,000 more in salary than the one she had. So, very encouraging to see more examples of RAGE. We’ll add this to rage donation, rage quit, road rage, and word rage.

HEDVIG: That’s really cool. It’s not something academics can do, because our job applications are always like, “Write a really long thing. Go through our really complicated platform and do this kind of indexing of all your publications ever.” And honestly, it’s like a couple of days of full-time work to apply to anything.

DANIEL: Mm-hmm.

BEN: Yeah. Every time you see… and yours is not the only industry to do it, but every time you see an industry that’s just got those super Byzantine archaic bullshit rules, you’re just like, “Why? Why?” How much work are you really saving the fucking appointee board with this stuff, when you could just give them a load of PDFs of all your published work and all the rest of it and just be like, “Here’s a picture of me smiling proving that I’m not creepy and here’s one page of why I want the job.”

HEDVIG: Right. That is great if you think that people will read it. Some of the places that do have a very low barrier for applying are also the places that let an AI read all resumes and pick the top 10 that a human reads.

BEN: Which we’ve learned, unfortunately, is probably doing a better job than the humans would do anyway now.

HEDVIG: Yes, but they’re also getting… There’s a great John Oliver episode. They’re also getting trained on what human people picked as good candidates. So, in one case, it was like…

BEN: Which is really racist and shit and sexist and…

HEDVIG: It was like people called Jared who played lacrosse in college are really good candidates for this company.

BEN: Oh, clearly, the people on appointee boards have not met enough people who played lacrosse because those guys are weird.

DANIEL: Well, the important thing for this industry is to sustain that rage. Just keep it burning.

BEN: Just keep it going.

DANIEL: Recently, we saw the collapse of Silicon Valley Bank, and this tweet came from Hank Green. “I assumed Silicon Valley Bank was some crypto thing, but it’s a bank. One of the biggest in the US, specifically serving the startup community, and its Wikipedia page has just gone past tense.” I think “going past tense” would be a really good metaphor for death. Oh, no, it’s gone past tense.

BEN: That’s cool. I like that.

DANIEL: Yeah.

BEN: Silicon Valley Bank WAS a lar… Yeah, it’s great. I dig it.

HEDVIG: Yeah, I like it.

DANIEL: Let’s go on to our last one. This one’s from Cesar @Calidor on Twitter. “Hi, Because Lang pod. Have we seen this use of a ratatouille before?” We touched on it earlier on.

BEN: So on board.

DANIEL: Yeah.

BEN: If we’re talking about the Pixar film, I’m here. I’m ready.

DANIEL: We are. The tweet is from Mattie Rose @Lubchansky. Has anyone seen the Elvis movie with Austin Butler? I have not.

BEN: I’m deeply disinterested in that entire aspect of Western culture.

DANIEL: Yeah, me too.

HEDVIG: I don’t know what it is.

DANIEL: The tweet is, “Austin Butler experienced complete ego death and is to this day, letting the ghost of Elvis drive him around like a ratatouille. And for what?”

BEN: [LAUGHS] Like a rat…

DANIEL: Like a ratatouille.

HEDVIG: That’s me!

BEN: [LAUGHS] Yes!

HEDVIG: Is that a Body Snatcher? No, I’m a ratatouille.

BEN: You’re a raccacoonie. That’s a reference, by the way, everyone.

DANIEL: Okay.

HEDVIG: Yeah, I love it.

DANIEL: It’s funny how everyone thinks that Ratatouille is the name of the rat when it’s Remy. But anyway…

BEN: It’s Remy! The person who is getting ratatouille is Linguini.

DANIEL: Yep. And what I love about that film is that everybody can have some talent. Linguini wasn’t a great chef, but he was a really great server.

BEN: You know what? Linguini is actually, if I’m being honest, the hero of that text because — and this is what makes that film low-key really good — he’s just a really nice, compassionate dude who gives people a fair shake and just is really open to things, and that is what brings everything together.

HEDVIG: And if you want to see a good human version of the rat, you should follow the Australian comedian, Mike’s Mic. Mike’s Mic, he’s a YouTuber, he’s on Twitter, he’s on everything, and he does a lot of fun commentary on 1990s, early 2000s TV culture. And his fans lovingly call him the Ratatouille Rat.

BEN: Okay.

HEDVIG: And he looks like it.

DANIEL: All right.

HEDVIG: And he likes it.

BEN: Okay.

DANIEL: Link on our website, becauselanguage.com. So, GRAUPEL, QUIET HIRING, RAGE APPLYING, GONE PAST TENSE, and A RATATOUILLE are our Words of the Week. Big thanks to everybody who suggested things for this episode. Thanks to Dr Nick Enfield, to Dr Morten Christiansen, to Dustin of Sandman Stories, who still recommends us to everyone, to the team at SpeechDocs who transcribe all the words, and most of all, to you patrons, who give us so much support and make it possible to keep the show going and keep the show free for everybody.

BEN: Hedvig, take it away.

HEDVIG: If you like our show… which I hope you do. If you’ve reached this part of the show and you don’t like our show, you’re one of my special population of hate listeners. And honestly, as I always say, a download is a download. Thank you.

BEN: A win is a win.

HEDVIG: Win is a win. Win is a win. But if you do like the show, and that’s why you listened: to those of you who are listening right now, I would like to tell you about various ways you can help us. We like this show, and we hope that people will listen to it. We don’t spend tons of money advertising it in lots of different places. We rely on people who like it to tell people, because we think word of mouth is honestly a better way of spreading news about shit like this.

So, one of the things you can do is follow us in various social spaces like Twitter and Facebook and stuff; we are @becauselangpod on all of those places. You can leave us a message by going to our website, becauselanguage.com, and clicking the SpeakPipe button, and you can record your own little voice through your laptop or computer, through your web browser. No need to download any extra software. And we can hear your sweet, sweet, sweet voice.

DANIEL: Why do you think nobody uses it? Because it’s been ages. Is it because people hate hearing themselves?

BEN: Ah, perhaps, because no one listens to this part of the show, like, at all? They hear that we’re doing the big, long finish up thing, and like I do to every podcast, I stop listening at that point.

DANIEL: Mm.

HEDVIG: You can also send us an email at hello@becauselanguage.com. You can tell your friends about us or leave a review, especially on Podchaser.

BEN: Or if you really, really want to, you can come and hang out with a really funky, fun, fresh, fly group of people on our Discord server. And the way you do that is by becoming a patron. You also get bonus episodes when they are made, not like whenever we get around to releasing them. And you will be continuing to make it possible for SpeechDocs to make transcripts of me saying things. I’m so sorry, yet again, SpeechDocs. Your job is not one I would wish on my worst enemy.

Shoutout to our top patrons. They are Iztin, Termy, Elías, Matt, Whitney, Helen, Jack, PharaohKatt, LordMortis, gramaryen, Larry, Kristofer, AndyB, James, Nigel, Meredith, Kate, Nasrin, Joanna, Ayesha, Moe, Steele, Margareth, Manú, Rodger, Rhian, Colleen, Ignacio, Sonic Snejhog, Kevin, [TAKES A DEEP BREATH] Jeff, Andy from Logophilius, Stan, Kathy, Rach, Felicity, Amir, Canny Archer, O Tim, Alyssa, and Chris. And to our newest patron at the listener level, Erik with a K, and Evgensk or EvgenSK, who bumped their pledge up to the Listener level. Thanks to all our very most wonderful patrons.

DANIEL: Our theme music has been written and performed by Drew Krapljanov, who’s a member of Ryan Beno and of Didion’s Bible. Thanks for listening. We will catch you next time. Because Language.

HEDVIG: Pew, pew, pew.

BEN: Pew, pew, pew.

DANIEL: Pew, pew, pew.

[BOOP]

DANIEL: Hey, Ben, do you have a pop shield nearby? Sorry.

BEN: Sorry. Yes.

DANIEL: Thank you.

HEDVIG: Can I tell you something while Ben’s doing his thing?

DANIEL: Yes, please, please, please.

HEDVIG: He’s looking back. He might like this. Put his headphone in. Put your headphones in so you can hear me. Stupid man. Oh, we can call them all kinds of names and he wouldn’t know, hey?

DANIEL: [CHUCKLES] I’m not going to. He’ll probably listen back. This will be the one show that he ever listens back to.

HEDVIG: [LAUGHS] Okay, fair enough.

DANIEL: No, he won’t. [LAUGHS] But anyway.

HEDVIG: What do you want to say? Okay. All right. Now that we’re done talking shit about Daniel.

BEN: [LAUGHS]

DANIEL: What?

[BOOP]

DANIEL: Etymonline says GAG is perhaps influenced by Old Norse gag-hals, “with head thrown back”. Hedvig, does that ping anything for you, gag-hals?

HEDVIG: There’s GAPHALS. Gapa: open your mouth, gape.

DANIEL: Right. Okay.

BEN: Oh, yeah.

HEDVIG: How are you spelling it?

DANIEL: G-A-G hyphen H-A-L-S.

HEDVIG: I just remembered I suck when people say words.

BEN: [LAUGHS]

[Transcript provided by SpeechDocs Podcast Transcription]
