How do large language models (LLMs) do their thing, and is it anything like how we do our thing? What can we learn from the software? The answer might involve constructions — pairings of form and meaning that we use to make language. And here to discuss it with us is constructionist pioneer and linguistic legend, Professor Adele Goldberg.
Timestamps
- Intros: 1:14
- News: 7:13
- Related or Not: 34:18
- Interview with Adele Goldberg: 46:40
- Words of the Week: 1:38:19
- The Reads: 1:56:50
- Bonus chat with Adele Goldberg: 2:03:16
- Outtakes: 2:13:11
Listen to this episode
Patreon supporters
Thanks to all our patrons! Here are our patrons at the Supporter level.
- Kevin
- Keith
- Kristofer
- Kathy
- Whitney
- Wolfdog
- Fiona
- Felicity
- Faux Frenchie
- Chris W
- Chris L (because W comes before L in this alphabet)
- Colleen
- Canny Archer
- Mignon
- Meredith
- Molly Dee
- Manú
- Margareth
- Ignacio
- Iztin
- Lyssa
- Linguistic C̷̛̤̰̳͉̺͕̋̚̚͠h̸͈̪̤͇̥͛͂a̶̡̢̛͕̰͈͗͋̐̚o̷̟̹͈̞̔̊͆͑͒̃s̵̍̒̊̈́̚̚ͅ
- Luis
- LordMortis
- Laura
- Larry
- gramaryen
- Elías
- Nikoli
- Nigel
- Nasrin
- Helen
- Rene
- Rodger
- Rach
- Rachel
- O Tim
- PharaohKatt
- Ayesha
- Amy
- Amir
- Alyssa
- Aldo
- Andy from Logophilius
- Andy B
- Ariaflame
- Diego
- sæ̃m
- Sonic Snejhog
- Steele
- Stan
- Tony
- Tadhg
- J0HNTR0Y
- Joanna
- Jack
- James
And our newest patrons:
- At the Listener level: Virginie
- At the Friend level: Christelle, Ben C, Pterrorsaurus hex, and Hellen
- And our newest free members: X, bailee, Courtney, Xavier, Janet, ⴰⵢⴻⵍ, kit, John, Pat, Mary, and Susan
Become a Patreon supporter yourself and get access to bonus episodes and more!
Become a Patron!
Show notes
Magistra Annie’s attempts to get ChatGPT to generate a photo of an empty room, with no elephants.
Shared from our Discord, with permission.
Reddit has many such elephant threads. Here’s one.
They might be, huh? | Reddit r/ChatGPT
https://www.reddit.com/r/ChatGPT/comments/1j08g1w/they_might_be_huh/
How many fingers? | Reddit
https://www.reddit.com/r/ChatGPT/comments/1bok0fy/how_many_fingers/
Study shows vision-language models can’t handle queries with negation words
https://news.mit.edu/2025/study-shows-vision-language-models-cant-handle-negation-words-queries-0514
[PDF] Vision-Language Models Do Not Understand Negation
https://arxiv.org/pdf/2501.09425
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
https://machinelearning.apple.com/research/illusion-of-thinking
[PDF] The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
Language and economic behaviour: Future tense use causes less not more temporal discounting
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0317422
The now-deleted thread from Adele, shared with permission
Chomsky’s response to Chris Knight’s chapter in the new ‘Responsibility of Intellectuals’ book
https://chomsky.info/20190905-2/
Weizenbaum’s nightmares: how the inventor of the first chatbot turned against AI (The ELIZA effect)
https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai
Google Ngram Viewer results for family terms. They seem to flip around 1970.
Google Ngram Viewer link
[PDF] Susanne Gahl and Susan Garnsey. (2006). Knowledge of grammar includes knowledge of syntactic probabilities.
https://linguistics.berkeley.edu/~gahl/GahlGarnsey2006.pdf
The Bias Detective: Psychologist Jennifer Eberhardt explores the roots of unconscious bias—and its tragic consequences for U.S. society. | Science
https://www.science.org/content/article/meet-psychologist-exploring-unconscious-bias-and-its-tragic-consequences-society
Sam Altman Admits That Saying “Please” and “Thank You” to ChatGPT Is Wasting Millions of Dollars in Computing Power
https://futurism.com/altman-please-thanks-chatgpt
Trump lashes out over viral ‘TACO trade’ meme. What does it stand for?
https://abcnews.go.com/Politics/trump-lashes-viral-taco-trade-meme-stand/story?id=122323324
Trump Always Chickens Out | Wikipedia
https://en.wikipedia.org/wiki/Trump_Always_Chickens_Out
Quitting men: Hope Woodard’s ‘boysober’ movement | Ladies, We Need to Talk
https://www.abc.net.au/listen/programs/ladies-we-need-to-talk/hope-woodard-boysober-men-dating-quit-ladies-we-need-to-talk-pod/104900532
Teens Are Practicing ‘Appstinence’ — & It Reveals How They Really Feel About Their Phones
https://www.sheknows.com/parenting/articles/1234886518/teens-appstinence-trend/
Biztrepar: una nueva palabra para un concepto clave en emprendimiento
https://www.prensalibre.com/opinion/columnasdiarias/biztrepar-una-nueva-palabra-para-un-concepto-clave-en-emprendimiento/

Transcript
[Transcript provided by SpeechDocs Podcast Transcription]
DANIEL: Can you see my screen right now?
BEN: Yes.
HEDVIG: I can.
BEN: Yeah, it’s working well.
DANIEL: Okay, good. It’s working great.
HEDVIG: “A new Hong Kong drama about three deaths…”
DANIEL: [SINGING] ♫ That’s not the one ♫.
HEDVIG: Okay.
DANIEL: Sorry.
BEN: I mean, it’s this sort of radio that’s the best sort of radio.
DANIEL: Oh, just…!
HEDVIG: Mm-hmm. What else can we see on his…?
BEN: Oh, oh, let’s see what other… This is like when the lecturer has just…
HEDVIG: There’s a text that says learning…
BEN: …like their personal…
HEDVIG: Yeah.
BEN: …stuff up for a second and you try and extract as much meaning.
HEDVIG: Oh, I think there are his notes that we don’t usually get to see behind there.
BEN: Yeah, yeah.
DANIEL: This is why I have two desktops. And when I’m lecturing, I always have only the slides on one of the desktops. It’s so good.
BEN: Oh, no, Daniel, live free my guy. Just let it all hang out. I enjoy allowing a little slice of my personal email account to just flash up for like a few seconds. I even lean into it. I play along. I’m like, “Oh, oh, sorry, guys. That’s my personal email.” I knew what I did.
DANIEL: This is not right.
[BECAUSE LANGUAGE THEME]
DANIEL: Hello and welcome to Because Language, a show about linguistics, the science of language. My name is Daniel Midgley. Let’s meet the team. First up, Ben Ainslie, who’s just come back from a massive cycling trip across the USA. How did that go, Ben?
BEN: Uh, good. Thank you, Daniel. Are we…? Okay.
HEDVIG: He’s been back for a while!
BEN: Yeah. I feel… I sense a bit coming for the simple reason that I’ve been back from said trip for nearly a calendar year, but that’s okay.
DANIEL: I know.
BEN: You know what? “Yes and”, Daniel.
DANIEL: I just…
BEN: I loved it. It was great. I particularly enjoyed Orca in Colorado.
DANIEL: Oh, nice. I just wanted to give you and our audience a slight, surreal trip back in time.
BEN: Okay.
DANIEL: Yeah. Just to see if anybody got confused.
HEDVIG: Ben, do you buy this?
BEN: Well, I buy that something is afoot. I don’t know if I buy that Daniel’s stated reason is the reason.
DANIEL: I was trying to come up with something.
BEN: Okay. [LAUGHS]
HEDVIG: Okay, okay.
BEN: [LAUGHS] Just good old-fashioned, shameless brain foggery, yes.
DANIEL: Explains everything.
HEDVIG: That’s fine. That’s fine.
DANIEL: And we also have linguist, Hedvig Skirgård. Hedvig.
HEDVIG: Hello. Hi.
DANIEL: Tell us about your headwear because you started by saying, “Please don’t make fun of me.”
HEDVIG: In Europe, summers get hotter with every year. And I now live in Germany, have done for almost five years now. And Germany is south of the Nordics. So, it’s even hotter here than in my home country. And I also live in an attic flat, which means that all the heat from all the apartments below comes up to us, and all the sun from above comes onto us.
BEN: Thanks, physics.
HEDVIG: Thanks, physics. And it’s very hot. And this weekend it’s going to get into… We’re even going to maybe reach 30, which maybe doesn’t sound a lot to Australians, but if you have a house that is built for keeping heat in, in winter…
DANIEL: This is going on a lot. This is going on longer…
HEDVIG: …and don’t have an AC.
DANIEL: I’m regretting this.
HEDVIG: Okay. The hat has gel packs in it. You put it in the freezer and then you put it on your head, and then your head cools down, and then you can think.
DANIEL: I was confused because I wasn’t expecting a hat in hot weather, but it’s a hat with gel packs.
HEDVIG: It’s lovely. So, they’re all super cool.
BEN: But for the Australian audience, I think the best thing here is, Hedvig is wearing inverse hot water bottles on her head.
DANIEL: On her head.
HEDVIG: And I can really recommend… It doesn’t matter what brand, but if you are in a similar situation to me, I can really recommend these hats. Also, if you’re a migraine sufferer, I can recommend these hats.
DANIEL: Oh, wow.
HEDVIG: Also, hangover’s pretty good.
DANIEL: I just love how you seem to be like a wacky inventor who made an invention to keep her brain cool.
HEDVIG: No, what happens is I have an idea, and I first strapped just gel packs to my head with shawls, but it kept getting like moving around.
DANIEL: I’m imagining duct tape. [LAUGHS]
BEN: Yeah, me too. That’s exactly what I was imagining.
HEDVIG: It was a whole thing and it would get wet as well. And it was a whole thing. Anyway, and then I was like, I can’t be the first person to have this problem. Very often in my life, I assume I am not alone in whatever problem I’m having.
DANIEL: I can’t be the only one.
HEDVIG: And then I sit down… and like a proper Millennial, I sit down at the big computer, the big screen, and then I google a lot, and then I make a spreadsheet, and then I look at prices and things, and then I purchase.
DANIEL: Amazing. Welcome to you and to your hat.
HEDVIG: We’re happy to be here.
DANIEL: Today on the show, we’re talking about large language models. We’ve been interested in large language models for quite a while because they’re a very successful implementation of language technology. They can do amazing things. But then one day on the bad place, which is Twitter, [LAUGHTER] I saw a very good thread about some things that large language models can teach us. And we’ve talked about this with, say, Morten Christiansen and other linguists as well. This particular thread was by Professor Adele Goldberg of Princeton University. And I messaged her and said, “Could I use this in my class?” And we talked about it. And then after a while, I was like, “Actually, do you want to come on the show and tell us about this?” And then, realising that Adele is the originator of the constructionist approach to grammar known as Construction Grammar, I’ve always wanted to do a deep dive on that. So, we got together, Hedvig, and me and Adele Goldberg, and we had a big old discussion. Hedvig, you were there. Wasn’t it great?
HEDVIG: It was so great. I also made a bit of an ass of myself, I think, a couple of times. Ben, I don’t think you understand.
DANIEL: Me too.
HEDVIG: This seems like she’s like such a big deal. She’s so cool and so smart.
BEN: I’ve got to be honest. I’m able to infer from contextual clues from both of you. Now, I don’t claim to be a person who reads other people particularly well. But, guys, I’m here to tell you that you don’t need one of them right now. A real blunt force instrument is able to read what’s going on. Like, you were squeeing. There’s no other way for it. You were like Swifties…
HEDVIG: She’s so cool!
BEN: …who have seen Taylor in the real world.
HEDVIG: She’s so nice as well.
DANIEL: She’s super cool. Oh, my gosh.
BEN: Okay, okay, okay, so rein it in, you two, Jesus.
DANIEL: Did you see the way that she… Okay, so it was a wide-ranging discussion about a lot of things, and we think you’re going to love it.
HEDVIG: Yeah.
DANIEL: Also, pretty soon we’re doing a Mailbag episode with another person that I really enjoy, Martha Barnette of the podcast, A Way with Words. She does that show with our pal, Grant Barrett. Remember lexicographer Grant Barrett, who we’ve had on the show like nine times?
BEN: They do great work.
DANIEL: Yep, they do. So, we’re going to have her on to talk about a new book she’s been doing.
HEDVIG: Oh.
DANIEL: She’s written a book called Friends with Words, which is a good title.
HEDVIG: Ooh.
DANIEL: If you’re a patron at the Listener Level, you’ll be able to hear that the moment it comes out. So go sign up at a level of your choice. Even free, you can be a free patron. That’s patreon.com/becauselangpod.
BEN: But before we get into what I can only assume is going to be an embarrassingly tragic freakout on the part of both of you, should we check in [HEDVIG LAUGHS] with what’s going on in some hard news before we see you guys just dissolve into, like, fangirl puddles?
DANIEL: You know how we have a Discord server, and our patrons are on it sharing stuff and memeing it up and sharing wordle stats and posting photos?
HEDVIG: Mm-hmm.
BEN: It is simultaneously a deeply lovely and a deeply nerdy place.
DANIEL: Those two things are not mutually exclusive.
BEN: No.
DANIEL: On our Discord, one of our patrons, Magistra Annie, revealed her attempts to get ChatGPT to generate an entirely blank image with no elephants.
BEN: Ah, okay. I see what’s going on, I think.
HEDVIG: Oh.
DANIEL: With permission, I’ll show you what she got, here it is. She asked it to generate an image of a completely empty room with no elephants, and it generated a picture of an elephant. So, she asked, “Hey, what’s on the wall of that room?” “Good question. It looks like there might be some subtle texture or shadowing on the wall, but rest assured, there are no elephants in the room.” So, she got it to do it again. It just kept doing it. No elephants here. Nope, nope.
HEDVIG: There are elephants in this picture.
DANIEL: Dang it, there’s an elephant. The thing is it just couldn’t do it. It couldn’t generate a room with no elephants with the prompt, “Please give me a room with no elephants.” It just had to stick in one elephant.
BEN: Just to be clear to our listeners as well. It’s not like you couldn’t get a room to be generated with no elephants in it, but you can’t do it by saying “no elephants”. If you gave it the prompt, “Make an empty room,” it would have done it lickety-split. But something about the anchoring reality of that keyword is so strong within the prompt that it can’t help but work with it.
DANIEL: And it is for us too.
HEDVIG: Exactly. To be fair, if I tell you, “In your head, imagine a room without elephants,” it’s going to be pretty… You’re going to be like, “Oh, I heard the word ELEPHANTS.”
BEN: Okay, so what’s interesting about this is I don’t fully agree with what you’ve just said in the way that you framed it because you just said, “Imagine a room without elephants,” and I can do that instantly.
DANIEL: No, what you do first is you imagine the elephants, and then you have to work.
BEN: I don’t think I do.
DANIEL: Oh, okay.
BEN: Especially because you guys are saying it in the order, “Imagine an empty room with no…” and then the last thing at the end. So, by the time you get to ELEPHANTS, it’s actually you’ve already been primed with all of the stuff you need to picture an empty room already.
HEDVIG: Ben, I think you’re abnormal.
BEN: Now, I will concede that if you say, “Don’t think about a bunny,” immediately bunny, right?
DANIEL: Yes.
HEDVIG: Okay.
BEN: That one, I can fully go there with you.
DANIEL: Of course, I’m thinking of George Lakoff’s book, Don’t Think of an Elephant, from the Frame Lab. They do great work.
HEDVIG: Yeah. I don’t think that the large language model, when it’s making a picture with an elephant in it, when it’s being instructed to make a room without an elephant, I don’t think it’s doing the same thing as we are with “Don’t think of an elephant,” right?
DANIEL: What do you think is going on?
BEN: Intuitively, I wouldn’t think so either, because I don’t think we’re at the stage where the cognition of the human brain and how the large language model are working the same way necessarily.
DANIEL: No, you’re right.
HEDVIG: Yeah, I don’t think so, but it is interesting. I also think that humans in general, we know that semantic entities cluster in our head and sometimes opposites will cluster to each other.
DANIEL: Ah, because a large language model will know that two words are synonyms not because they appear in the same neighbourhood, but because they have similar neighbours. But BLACK and WHITE also have the same distribution…
BEN: Yeah. Right, right, right, right.
DANIEL: …they have the same neighbourhood, so opposites… Mmm.
BEN: Fuzzy.
HEDVIG: Yeah.
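The distributional idea Daniel and Hedvig are circling — words with similar neighbours get similar representations, which lumps antonyms like BLACK and WHITE together — can be sketched with a toy co-occurrence count. The four-sentence corpus below is invented purely for illustration; real models learn dense embeddings over billions of tokens, not raw counts.

```python
from collections import Counter
from math import sqrt

# Invented toy corpus: BLACK and WHITE occur in identical neighbourhoods,
# so their context-count vectors come out nearly the same.
corpus = [
    "the black cat sat on the mat",
    "the white cat sat on the mat",
    "a black dog ran in the park",
    "a white dog ran in the park",
]

def context_vector(word, window=2):
    """Count the words that appear within `window` positions of `word`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for t in tokens[lo:hi] if t != word)
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm

black, white, mat = context_vector("black"), context_vector("white"), context_vector("mat")
print(cosine(black, white))  # high: the opposites share a distribution
print(cosine(black, mat))    # lower: genuinely different neighbourhoods
```

In this toy setup the antonyms come out as similar as any synonym pair would — which is exactly the fuzziness Ben sums up.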
DANIEL: Well, Magistra Annie’s attempt to generate a picture with no elephants was, for me, evidence that it’s kind of hard for an AI to do negation.
HEDVIG: Yes.
DANIEL: But now, there’s a new study from Kumail Alhamoud of MIT and a team showing that vision language models have a hard time of it too. Now, a vision language model is kind of like a large language model, except that it uses language to find things in images. So, let’s say I have a ton of photographs, and I say pull up all the images of cats or anything with a bicycle in it or like a doctor might look for this phrase: “Bilateral consolidation with no evidence of pneumonia.”
BEN: So, sort of one of the… What we’ve talked about before as being one of the many, many unsolved problems in search in the contemporary world, right? Like, how do you find that one photo in your phone amongst the 7,000 photos you’ve taken?
DANIEL: Mm-hmm. That’s right. But trying to find something with NO in it, when you specify, “Find me something with no bicycles,” it’s tough, it fails.
BEN: Oh, that’s so crazy.
HEDVIG: Oh.
DANIEL: Yeah.
BEN: Because you would imagine… Like, even I, as the person who learned approximately 45 seconds of Python when he was 17, I reckon with a little bit of googling, I could figure out how to create a batch script — not for vision, obviously, because that’s like super advanced. But it seems intuitively really easy to go, “Okay, make a list with all of the things that I’ve just said and now give me everything not on the list.”
DANIEL: Yeah, there should be…
HEDVIG: But in order for that, if you ask something, anything, for example, “Show me all the pictures without cats in it,” that thing needs to have a good idea of all the ways cats can look.
BEN: But we are suggesting that these models are doing that job at least somewhat okay, right?
DANIEL: Yeah, that part’s okay, if the part is…
BEN: If I ask ChatGPT to make a blank room, it can do that really good.
DANIEL: Yep.
HEDVIG: If you say, “Make a picture of an elephant,” it can do that pretty good as well, so.
BEN: Right. But there seems to be something more complicated than… So clearly, from what I’m inferring, from what Daniel’s saying, there’s something about negation that is throwing them off in a way that maybe we don’t even fully understand. If I was coding this right now, and any negation that got put in as a prompt, like, “Show me a picture without cats,” I would make the script go, okay, cool, step one, show me all the pictures with cats, great. Now, step two, show me all of the other pictures. That’s your answer.
DANIEL: Just subtract that. But you’re describing what it doesn’t look like. I’m saying: what does a not-cat look like? It could look like a bicycle. It could look like a bus. It could look like a paramecium. I mean, this is a weird thing. So, this team decided to check out why negation was hard for a vision-based model and also how to do better. So, they asked questions like affirmation questions, “Please find things that include cats.” Or negation, “Show me all images that don’t have cats.” Or hybrid, “This image has cats but not dogs.” Or, “This person is cooking but not eating. Can you show me that?”
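Ben’s two-step set-subtraction script really is trivial once you have labels — which is Daniel’s point. A minimal sketch, with an invented, pre-labelled photo library; in reality the hard part is getting from pixels to those tags, because a “not-cat” can look like anything:

```python
# Ben's idea: step one, find everything WITH cats; step two, return the rest.
# The labelled library below is invented for illustration — real photos
# don't come pre-tagged, which is where the vision model has to do the work.
photos = {
    "img1.jpg": {"cat", "sofa"},
    "img2.jpg": {"bicycle", "street"},
    "img3.jpg": {"cat", "dog"},
    "img4.jpg": {"paramecium"},
}

def without(label):
    """Return every photo whose tag set does not contain `label`."""
    with_label = {name for name, tags in photos.items() if label in tags}
    return sorted(set(photos) - with_label)

print(without("cat"))  # → ['img2.jpg', 'img4.jpg']
```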
BEN: Oh, interesting.
HEDVIG: Oh.
DANIEL: Yeah. And they found that all the models that they looked at performed very poorly on negation, the second case.
BEN: Okay, so there is something going on with negation for these models.
DANIEL: There’s something very hard. And they describe this as an affirmation bias. And they say that these models “often adopt shortcut strategies that ignore critical words such as NO.”
BEN: Oh, so it’s just they’ve got blind spots to negation words.
DANIEL: Yeah, they call it the affirmation bias. And it’s just a problem in natural language processing. And it’s especially a concern because if doctors are looking for, “Show me pictures of lungs that don’t have pneumonia,” or, “Show me a picture of this specific condition that doesn’t have this other condition,” then it could get it wrong.
BEN: Now, is this… Do we think this is an eminently solvable problem? We’ve just identified that we’ve trained stuff with a bias, and so now we correct that bias in the training?
DANIEL: Well, they think so. They think that the way to fix it is to address negation specifically in training.
BEN: That would make sense.
DANIEL: Like, look for things that involve nuanced expressions like negation or complex syntactic structures. And that way, they can capture a little bit more about this negation thing that we do in human language.
HEDVIG: I wonder if this is also true for other function words, like any or every, because what I’m hearing is that it takes a thing and then it maybe picks out the major noun phrases or verbs.
DANIEL: Mm-hmm. Yeah.
HEDVIG: Does it do well with other small things like, yeah, if, when, all these small function words, maybe it struggles with a lot of them as well, but people have mainly paid attention to the negation ones.
DANIEL: I bet it would have trouble with the word BUT, like “but not this” that kind of thing.
BEN: Oh.
HEDVIG: Yeah, something like that, or “if this, then that.”
DANIEL: There’s another story that crossed my desk this week, and I think it got a lot of attention. It’s from some researchers at Apple who came out kind of skeptical on reasoning for computer models. Did anyone see this paper?
HEDVIG: No.
BEN: So, explain the keywords, because reasoning can mean a lot of things, and I’m guessing that it’s meaning something in a jargony sense.
DANIEL: Okay, well, we know about large language models. We have all seen those and we have played with them. But now, I want to just shift a little bit to large reasoning models, LRMs not LLMs.
BEN: What do they do?
DANIEL: These are like large language models, and they use some of the same architecture, but they’re actually trying to also do some thinky work in addition to just putting out language. Have you ever done a thing where you say, “Okay, I want you to tell me how many fingers there are on five hands, and I want you to go through step by step and tell me your chain of thought.”
HEDVIG: Oh, one of the… My employer, they have decided to buy access to a bunch of large language models for us, and some of those models will first spit out this thinking process. So, I’ll ask it “Oh, how do I do this thing? What’s the name of this argument for this function in R?” or something. And it’ll be like, “So, I think that you want to know something about the programming language R and this function and this package.” And I’m like, “Yeah, yeah, yeah, cool, cool.” [LAUGHTER]
BEN: “Skip to the end, sweetheart. I ain’t got all day.” I don’t know why you’re a noir gumshoe detective in my mental reenacting of this, but you clearly are.
HEDVIG: Yeah, yeah. Yeah.
DANIEL: But what it’s doing is it’s going through its “chain of thought”, CoT, and that’s a strategy that large reasoning models do. They also engage in what’s known as self-reflection, which I guess is kind of a human way of putting it. But they try to get the large reasoning model to reflect on its own process and think about what it’s doing.
HEDVIG: Okay.
DANIEL: So, a bunch of researchers at Apple did some investigation into some LRMs like Claude 3.7 Sonnet thinking, Gemini Thinking, DeepSeek-R1, and they decided to test them on some puzzle tasks. Okay, so tell me if you’ve heard these puzzle tasks. Tower of Hanoi.
BEN: Nope.
DANIEL: It’s those disks where there’s a big disc on the bottom and a slightly smaller disc on top of it and a much smaller disc on top of that.
BEN: Like a child’s toy.
DANIEL: Like a pyramid and you’re not allowed to put a big one on a small one, but you’ve got to somehow go, move number one to there, move number two to there.
BEN: Oh, gotcha. Yeah, yeah, yeah.
DANIEL: There’s a checker jumping task where you’ve got to jump checkers and make the picture look a certain way. There’s the river crossing problem where you’ve got like a goat and some wheat or whatever it is.
BEN: Yes. And they can’t be on the thing at the same time and blah, blah, blah. Yeah, yeah, yeah.
DANIEL: That. Yes, yes, yes.
HEDVIG: Cabbage. And a sheep and a wolf.
DANIEL: And then, the blocks world where you say, put a triangle block on top of this big square or something like that and it’s got to make it look a certain way. Now, the reason they use these puzzles is because you can increase the difficulty until it falls apart and then you can see exactly where it went wrong.
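A side note on why Tower of Hanoi is such a clean difficulty dial: the optimal solution for n discs takes exactly 2**n − 1 moves, so adding one disc doubles the work. Here’s the classic recursive algorithm — a standard textbook version, not the paper’s actual test harness:

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Move n discs from source peg to target peg, never putting a
    bigger disc on a smaller one. Returns the list of moves."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # clear the top n-1 discs aside
    moves.append((source, target))               # move the biggest disc
    hanoi(n - 1, spare, target, source, moves)   # stack the n-1 discs back on top
    return moves

for n in (3, 5, 10):
    print(n, len(hanoi(n)))  # 2**n - 1 moves: 7, 31, 1023
```

That exponential blow-up is what lets the researchers crank complexity one disc at a time and watch for the exact point where a model’s “reasoning” collapses.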
BEN: So, you’re kind of almost stress testing the reasoning chain of these models with like difficult… I mean, it really does sound like we’re getting pretty Ex Machina.
[LAUGHTER]
DANIEL: It is a bit. It is a bit.
BEN: [WITH A GERMAN ACCENT] Do the puzzles, do them faster.
DANIEL: But you can see what they’re doing.
BEN: Yeah.
DANIEL: They’re testing large reasoning models, but they were also testing large language models to see if they were any better or any worse.
BEN: Okay.
HEDVIG: Mm-hmm.
DANIEL: And they found three different things that happened. In the first regime, where complexity was low, they found that large language models were just as good as the large reasoning models; they both did okay.
BEN: Yeah.
DANIEL: In the second regime with medium complexity, they found that the large reasoning models started to pull ahead, started to get a bit better. And then, they got to the third regime where problem complexity was higher, and they found that both the large language models and the large reasoning models just fell apart.
BEN: Okay, so we still need human beings to do the traveling salesman, basically. [LAUGHS]
DANIEL: It sounds that way. Their conclusion was that even though the large reasoning models could sound pretty sophisticated, they weren’t really developing reasoning. They were really just doing pattern matching.
BEN: So, sort of in the same way that we’ve sort of poured cold water on this idea that ChatGPT — that the large language models — are thinking. And we go, hmm, they’re doing a pretty impressive sort of autocomplete.
HEDVIG: Yeah.
BEN: This research is kind of suggesting that the reasoning models are doing a reasoning version of the same thing.
DANIEL: Yeah.
BEN: They are mimicking reasoning far more effectively than they are actually reasoning.
DANIEL: I think so. And a lot of people are saying, “Oh, sour grapes, Apple. Your AI sucks. So of course you don’t think it’s going to work. Of course you’re becoming AI skeptics.” But I think that this is an interesting attempt to test the boundaries of these systems.
BEN: All right, look, you will not find a person more willing to criticise Apple than I am, we know this. Any inherently cynical enterprise, I’m on board with…
DANIEL: Okay.
BEN: …just on a baseline.
DANIEL: On principle. [LAUGHS]
BEN: Yeah. If your thing is just like, “Everyone says, this thing is awesome. We don’t think so. Let’s fuck with it,” like, I’m down, that is a research premise that I am on board with.
DANIEL: Hey, let’s take our last news story.
HEDVIG: Yes.
DANIEL: To start off with, Ben, I’m going to ask you a question.
BEN: Got it.
DANIEL: Would you rather have $10 now or $10 in a month?
BEN: Now.
DANIEL: Why now? It’s still $10.
BEN: Daniel, have you checked what inflation rates are doing, you financially illiterate imbecile?
DANIEL: Fine. If you’re going to go that way.
BEN: That $10 in a month is going to be worth materially less.
DANIEL: What are you talking about? Interest rates are super low.
BEN: I want… No, not interest rates. Inflation rates, Daniel.
HEDVIG: Inflation.
BEN: Inflation rates.
DANIEL: Yeah. They are tied.
BEN: If you give me $10 in a month, that $10 is worth less than the $10 now. But that’s not actually why I want $10 now, I want $10 now because I want a snacky. I want a snacky now.
DANIEL: You don’t want it in a month?
BEN: I don’t want a snacky in a month. I want a snacky right now.
HEDVIG: And it is the same. It’s not the marshmallow test. You don’t get $20.
BEN: Yes. Yeah, yeah. It’s just I want a good thing straight away.
DANIEL: Well, okay, wait a minute. What if it was? What if it was a kind of marshmallow test? What if I offered you $20 in a month or $10 now?
BEN: Mm.
DANIEL: Now, it starts looking better, doesn’t it?
BEN: Still $10 now.
DANIEL: Okay. [LAUGHS]
HEDVIG: It also has to do with how much you believe in Daniel’s ability to pay you $20 a month from now.
BEN: Yeah, yeah, there’s all sorts of things at work here.
DANIEL: Wait, so what are we saying?
BEN: Especially my income, so this is trivial. We’re talking about it at a level of money that’s trivial to me personally.
DANIEL: Yes.
HEDVIG: Yeah.
BEN: But, Daniel, I sense that you’re driving towards a point, so let’s keep it in the realm of just silliness. Okay, I do want the $20 in a month rather than the $10 now because that’s the more rational choice.
DANIEL: It would be, yes.
BEN: Pure economics in a classroom, year 9, supply and demand, blah, blah, blah. Yeah, I want the $20.
DANIEL: It’s okay if you want the $10 now. I could go up. I could go $50 in a month. I could go $100.
BEN: But that’s a boring conversation because there will be a point where that will be true for everyone.
DANIEL: That’s right.
BEN: So, yes.
DANIEL: I’m just trying to find Ben’s point. At what point will he crack?
BEN: Ah, 500 bucks.
DANIEL: Okay, interesting. Now, what you’re doing, Ben, when you say, “I don’t want the $10 in a month, I want the $10 now,” is you’re kind of discounting the future. This is called temporal discounting, so remember that term because it’s going to come up.
BEN: Temporal discounting. Got it.
DANIEL: Yep. Now, I think we all remember the economist, Keith Chen, and his work on language and saving money. Hedvig, can you give me a rundown if you remember this one to any detail?
HEDVIG: I remember the basic goalposts. So, Keith Chen — I think it was him and a team, or was the first paper single-author? Never know.
DANIEL: I don’t remember, but I know that he’s worked with a lot of people.
HEDVIG: Yes. Basically, some economists were interested in how you can predict whether certain people are likely to save money or not and what correlates with that. So, like trust and institution, you could imagine matters or your basic income, etc. And they had one idea which had to do with that maybe it has to do with the language you speak. And…
BEN: Okay.
HEDVIG: …based on the language you speak and how it codifies tense, and in particular future tense, that might prime you to think of the future as either a different place to now. And then, you have to remember, “Oh, is it a different place that you believe exists or is it a different place that you don’t believe exists or that you have less faith in?”
BEN: This is sounding quite Whorfian.
DANIEL: It is very Whorfian.
HEDVIG: Yes, this is the linguistic relativity idea that… It could also be hard to know which one is the chicken and the egg, because it could be that the culture conceptualises tense a certain way, so you don’t really know.
BEN: Yeah, well, straight away, that was my thought. Like, Germans famously are savers, but is that culture or is that language?
DANIEL: Mm.
HEDVIG: That was one of the things. So, he did a study first where he tested this and found a relationship between saving and whether or not your language had a grammatical future tense, as indexed by — I believe they used WALS, The World Atlas of Language Structures — chapters on tense by Östen Dahl. And then later, there were some people in linguistics and cultural evolution that said, “Hmm. We’re not sure about this. We think that maybe this effect has to do with just language families and shared inheritance and proximity. So, it’s not really about the language you speak. It’s that you have some other third factor that you’re all inheriting or sharing.”
And this is where the coolest thing that has ever happened in research in linguistics happened, which was that Seán Roberts, friend of the show, contacted Keith Chen and said, “I think we should do a follow up study where we do some more robust statistical tests. Would you like to co-author this with me? And we will publish it regardless of the results. If my skepticism is right, we’ll publish it. Your results stand, we’ll publish it,” which is the coolest thing that has ever happened.
DANIEL: I know, I love this so much. And he’s done this before.
BEN: So, now can we explain to non-academics why you guys are frothing on this so hard?
DANIEL: It’s just cool. It’s just like somebody…
BEN: That’s not a good explanation. It just is, man.
DANIEL: Excuse me, but that’s science.
HEDVIG: Technically, scientists are supposed to hold these sorts of high moral ideals.
BEN: Right.
HEDVIG: We’re supposed to replicate studies, we’re supposed to share data, we’re supposed to be open to critique, etc. However, people do science, and people are often influenced by things like, “Will I have a salary?”
BEN: Yeah. Yeah.
HEDVIG: “Is my idea novel? Is someone else encroaching on my idea? Is someone criticising it?” Both because there can be pride involved. There can also just be material concerns, like your employment involved.
BEN: Yeah, yeah. Like, the publish or perish kind of dynamic and all that kind of stuff, right.
HEDVIG: And then also we have the whole publishing network which promotes certain kinds of papers to be published. We don’t tend to publish negative results, we don’t tend to publish insignificant results, etc. So, all these things are sort of forces that work against this sort of like scientist ideal.
BEN: So, these guys were being a little bit punk rock.
HEDVIG: It really is so cool.
BEN: We’re going to fucking do it either way, man. We’re just going to do it. It’s cool.
DANIEL: Just fucking…
HEDVIG: It’s so cool. It’s very cool.
BEN: That’s very cool. I now understand.
HEDVIG: It’s such a fun episode for me, so yes, and… So, they then published a paper where they found that these effects were entirely discounted or came to naught if you…
DANIEL: It fell apart.
BEN: Yeah, okay.
HEDVIG: It fell apart. But now, we have the third step in the saga. So now, we have a team of Cole Robertson, Seán Roberts, Asifa Majid, and Robin Dunbar, who have published a new paper just a couple of… what is it? Two weeks ago. Language and Economic Behaviour: Future Tense Use Causes Less, Not More Temporal Discounting. So, they have found a relationship, and it is the other way around from what was found in the first episode of this saga.
BEN: Okay, so we’ve gone all the way around the horn, and we’re now at the opposite Whorfian conclusion.
DANIEL: I think it’s pretty much anti-Whorfian. Let’s talk about what was going on here. So, if your language has grammatical future tense — make a prediction, Ben — will you be better or worse at saving money? What do you think that the Whorfian first paper proposed?
BEN: The Whorfian first paper, okay.
DANIEL: Am I better at saving money or am I not?
BEN: Whorfians would say that you would be better at saving money because you have more language to understand concepts like the future and what’s coming and stuff.
DANIEL: Okay, that’s what I thought when I first heard of this work, but secretly, it’s not what Chen said in the beginning.
BEN: Okay.
DANIEL: Chen said the opposite. He said if you have no grammatical future tense, you’ll be better at saving money because there’s no distinction grammatically between present and future.
BEN: Oh, okay, right.
DANIEL: The present and the future are kind of the same thing, so there’s no temporal disc… You won’t temporally discount the future because it kind of is the present, at least grammatically so. The future is not some weird different thing. But the question is, do speakers actually temporally discount if they use the future tense? And how can we find out? Now, for this one, they did the experiment a little differently. They looked at English. Now, in English, if you want to talk about rain, let’s just fill in the blank: “Tomorrow, it…” Now, go ahead and try filling that in with RAIN or a bunch of words.
BEN: It will rain.
DANIEL: It will rain, right?
BEN: Yeah.
DANIEL: What if I said, “Tomorrow, it rains”?
BEN: I mean, I would assume you like turtlenecks and slam poetry, but sure.
DANIEL: Precisely. Precisely. Because in English, you kind of have to mark the future: “Tomorrow, it will rain.” Whereas in Dutch, you could say, “Tomorrow, it rains.” And that is the way that you do it. In Dutch, it’s not obligatory, but in English, it kind of is.
Now, they gave English and Dutch speakers a chance to fill in the blanks and saw if they used future or if they didn’t. And then, they also gave them tests like the one I gave you, “Do you want $10 now or do you want $50 in a month?” And they tried to see if they would match up. You would think that if your language strongly distinguishes the future and the present — if there’s a split — you discount the future more, and you’d go for that $10 right now because it’s different from the present and it’s riskier, and they’re not the same thing.
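The “$10 now or $50 in a month” choice is how temporal discounting is measured, and the idea can be made concrete with a little arithmetic. The sketch below is purely illustrative (the function names and the simple exponential discount model are mine, not the paper’s): a per-month discount factor shrinks future rewards, so a steep discounter takes the smaller, sooner money.

```python
# An illustrative sketch of temporal discounting, not the study's method.
# A discount factor d in (0, 1] says how much a reward one month away
# is worth today; smaller d means steeper discounting of the future.

def present_value(amount, months, d):
    """Value today of `amount` received `months` from now,
    under exponential discounting with per-month factor `d`."""
    return amount * d ** months

def prefers_now(now_amount, later_amount, months, d):
    """True if the immediate reward beats the discounted later one."""
    return now_amount > present_value(later_amount, months, d)

# A patient chooser (d = 0.9): $50 in a month is worth $45 today,
# so they wait rather than take $10 now.
print(prefers_now(10, 50, 1, 0.9))
# A steep discounter (d = 0.15): $50 in a month is worth $7.50 today,
# so they grab the $10.
print(prefers_now(10, 50, 1, 0.15))
```

The Whorfian prediction under discussion is then just: does marking the future grammatically correlate with a smaller discount factor, or a larger one?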
BEN: Okay. Again, that was the original hypothesis, right?
DANIEL: That was the idea.
BEN: Yeah.
DANIEL: They found the opposite.
BEN: Okay.
DANIEL: They said, “We found that English speakers who habitually make greater use of the future tense actually discount less, not more.”
BEN: Okay.
HEDVIG: Oh.
DANIEL: So, they found the opposite of the Whorfian conclusion. They tried to figure out why. They thought maybe English modal verbs like MIGHT and MAY, that’s the thing that…
HEDVIG: Yes.
DANIEL: …we use in English a lot. And they think that might make the future seem more risky, so that’s one possibility.
HEDVIG: Because that’s the thing about the future: unlike the past, it is not so definite. I may say, “It will rain tomorrow,” but I could be mistaken in that belief. It is a belief I have about tomorrow. I don’t know what’s going to happen tomorrow. That’s why you find that “It’s going to rain tomorrow” is a little bit less certain than “It will rain tomorrow,” or “It shall rain tomorrow,” or something like that.
DANIEL: It’s going to rain tomorrow.
HEDVIG: The future is an unknown place, whereas the past… well, we can have slightly different beliefs about what happened, especially far back, but it’s a different beast. Futures are weird. There are many languages where you have ways of expressing the future, often many different ones, because you want to express different amounts of certainty.
DANIEL: That’s true. That’s interesting.
HEDVIG: Right. This is why some people don’t… Sometimes, future markers are called a mode or mood instead of a tense, because mood has to do with the relationship between the event and other possible worlds: like, if you do this, I will do that, or something like that.
BEN: [LAUGHS] It’s the vibe, man.
HEDVIG: Future is funky. So, what was interesting in this study is that they didn’t treat all English and Dutch speakers wholesale as having or not having a future tense. They actually studied how much these particular speakers used these constructions, which I think is a really cool advancement on the previous studies.
DANIEL: Yeah, person by person. It’s a very cool study.
BEN: Yeah.
HEDVIG: It’s a very cool study, but it is comparing Dutch and English specifically. And as I’ve said before on this show, Dutch and English speakers are very similar in many, many ways. English people are just short Dutch people is my slogan.
DANIEL: Is that right?
BEN: [LAUGHS]
HEDVIG: Yeah.
DANIEL: How can we test that?
BEN: Oh, Hedvig.
HEDVIG: I have a British husband. I’ve lived in the Netherlands. They’re very similar. They both like mashed potatoes and a sausage and going across the world and doing trade and colonising people. And it rains a lot where they live. They have many similarities.
DANIEL: There you have it, folks. That’s it.
HEDVIG: So, while this is a really cool study, we would like to see this kind of study with more pairs or more samples, right?
BEN: Yes. Sure. Yeah. Yeah.
DANIEL: Yeah.
HEDVIG: But I think the paradigm and the way of testing is very cool.
BEN: And get outside the Eurosphere as well, right? Tell me how this is done in Southeast Asia and the Philippines and the Solomon Islands and all that kind of stuff.
DANIEL: Yeah. Still, props to the team. Congratulations for getting this one out. I really enjoyed it, and it’s very cool. All right, are we ready for Related or Not?
BEN: Ooh, bring it on, bring it on.
HEDVIG: Pen and paper. Pen and paper. Pen, paper found.
DANIEL: Our theme this time comes from Aristemo with a little help from me.
[LATEST RELATED OR NOT THEME]
BEN: That was short and sweet. I like it.
DANIEL: First one is from me.
BEN: Oh, okay.
DANIEL: I was running home with my daughter. My daughter was running home from the park, and I was sort of moving along with her, and she said… I said, “You’re running fast.” And she said, “I don’t run. I lope.” LOPE is a good word, isn’t it?
HEDVIG: Ah, mm-hmm. Yep.
DANIEL: She said, “I lope… like a wolf.”
HEDVIG: Yes.
DANIEL: Now, I thought that the juxtaposition of LOPE and WOLF was interesting. Can you guess why?
HEDVIG: Yes.
DANIEL: Why?
HEDVIG: Because WOLF in Latin is lupus.
BEN: Yes. Okay.
HEDVIG: I thought you were going to go LOPE / LEAP.
DANIEL: Interesting. Okay, let’s do it. So, I want to ask, do you find a connection between LOPE and Latin LUPUS, wolf…
BEN: Okay.
HEDVIG: Yeah.
BEN: Yeah.
HEDVIG: Yeah.
DANIEL: …and are either of them related to LEAP?
BEN: Okay.
HEDVIG: Yeah. Yeah.
DANIEL: I said no. That was my guess. And then, I looked it up.
BEN: My intuition is all unrelated on this one.
DANIEL: Okay.
HEDVIG: I for sure think that LEAP and LOPE are related.
DANIEL: Is that one of those Norse things?
HEDVIG: What are you talking about? It’s like moving forward. An L and a P, just… And Swedish LÖPA means “to run”. So, like…
DANIEL: Does it?
HEDVIG: It’s got to be… Yeah, LÖPA.
BEN: Yeah. So, I had figured that LOPE was an archaic or a borrowed word for running, or like gait or trot or something like that.
HEDVIG: Yeah.
BEN: LEAP, for me, feels like it could be… Because we’re talking about quite short words here, there’s a decent chance that something could kind of like flop over and end up feeling quite like another thing.
HEDVIG: Yeah.
BEN: Jumping and running for me, oh, it’s right on that borderline of, like, yeah, semantically, not far apart. I fully concede that.
HEDVIG: Not far apart? They’re like basically the same thing.
BEN: See, that’s where you lose me. I don’t think they’re basically the same thing.
HEDVIG: Right. What do you think about this? So, to LEAP, but what about a great leap, as in the great leap forward?
BEN: Yeah, yeah. What about it?
HEDVIG: If someone was running and then just took one huge step, would that be them taking a leap?
BEN: Ah, no, because it’s got to be like a jump? Like, it’s got to be like… Anyway, anyway, anyway, look, I’m going with all unrelated. I’m sticking with my gut on that one.
DANIEL: Okay. And, Hedvig, you think that LUPUS is on its own, but LEAP and LOPE are related?
HEDVIG: Yeah, I do, actually. I think that WOLF is unrelated, but I’m not so sure, but I think so.
DANIEL: Okay, Hedvig wins. LEAP and LOPE both come from an Old Norse word hlaupa, and Latin LUPUS is not the same thing at all. This one’s from Marianne, who says, “I’m a linguist and professor at the University of the Virgin Islands.” Hello out there to you.
BEN: Oh, cool.
HEDVIG: Oh, which one?
DANIEL: Oh, I think we’re talking US. “And I heard about your podcast at the recent LingComm25 conference. I don’t do podcasts, so I appreciate it greatly that you’ve got transcripts available too, and have been making my way through them with great pleasure.”
BEN: Oh, there we go. There you go, patrons. We are doing the Lord’s work for Marianne.
DANIEL: I’m glad we’re doing this. “I’ve got a suggestion for your Related or Not feature, a pair of words, QUAY.”
BEN: Yep.
HEDVIG: Okay. Mm-hmm.
DANIEL: “Meaning a landing place or a wharf place where ships could be loaded and unloaded. And CAY, meaning a low-lying island, sandbar or protruding reef.”
BEN: Oh, yes.
HEDVIG: Oh, shit. Another one, fuck nah.
BEN: Hard related for me.
DANIEL: Oh, okay, really? Okay.
BEN: Big time, yes.
DANIEL: Because they just look the same and they’re both.
BEN: Well, I feel like these two things are more semantically related than LEAP and fucking RUN, to be honest, because they are both like horizontal obstructions, low-lying in the water. I think, if I recall my maritime lexicography correctly, the thing about a quay is that it lies parallel to shore, as opposed to…
DANIEL: A jetty.
BEN: …a jetty which lies crossways to like some kind of shore.
DANIEL: Or a dock, perpendicular.
HEDVIG: Okay.
BEN: So, yeah, for me, a QUAY would be… A quay with a Q where ships land and are unloaded would be exactly the same thing effectively as the low-lying sandbar that like cuts across kind of thing. Okay, that’s me. I’m going related.
DANIEL: Are you not suspicious at all? At all?
BEN: No.
DANIEL: Okay, Hedvig?
HEDVIG: I have just learned only now that these are different things. So, thank you all for teaching me something. So, when people say… I have always been confused by all of these words. So, the Florida Keys are a bunch of islands, not a bunch of places where you land boats.
BEN: Correct.
HEDVIG: All right, cool. Because I was, yeah, confused.
BEN: The keys probably have several quays though.
DANIEL: Yeah, they might. And in fact, I’m treating KEY with a K and CAY with a C as roughly equivalent.
BEN: Yep, like regionalisation differences only kind of thing.
DANIEL: So, what do you reckon, Hedvig?
HEDVIG: Yeah, I’m going to guess they’re related, but I’m suspicious.
BEN: What about you, Daniel?
DANIEL: I’m suspicious too. I said no because I am suspicious of maritime terms and pirates are wily.
BEN: Okay.
HEDVIG: And people who send in questions are trying to get us. [DANIEL LAUGHS]
BEN: [LAUGHS] Yeah, there is always that.
DANIEL: There’s a negative bias.
HEDVIG: Yeah.
DANIEL: Okay, well, so it’s me on my own again. Okay. The answer kind of depends on who you ask, but…
BEN: That’s unsatisfying.
HEDVIG: Oh, no.
DANIEL: …the sources that I checked mostly say no, not related. Now, let’s just see where we go. The word QUAY with a Q (I checked this in numerous sources) comes from French cai, which in turn comes from Gaulish caium, which goes back to an old Celtic word. It has a Q because of French, and we leaned that way. So, remember that: this one goes Gaulish. But CAY with a C comes from Spanish cayo, which ultimately goes back to Taíno, an Arawakan language, where cayo means “small island”, which is kind of like the Keys. So, two different places. However, you flip over to the OED, and it says “perhaps ultimately the same word.”
BEN: What?
DANIEL: But doesn’t say why.
BEN: Oh, that’s unsatisfying. No, I’m giving the win to Daniel on that one. That’s a terrible fudge at the end.
DANIEL: Yeah.
BEN: Like, blah, blah, blah, but maybe also the same.
DANIEL: I didn’t care for that at all. Interestingly, did you know that in the 17th century, the 1600s, you would open the door with a key /kei/? You would say /kei/. Hand me the /kei/.
BEN: Oh, really?
DANIEL: Darn, I’ve lost my /keiz/. And that influenced the spelling. I mean, you look at the word CAY, C-A-Y, and you say, “Well, that’s cay. Why are they saying key?” Well, because the name of the thing you open the door with underwent that sound change. And so, they changed the word and the spelling to go with it.
HEDVIG: Mm.
BEN: Wow.
DANIEL: Okay, last one.
HEDVIG: Wait, hold on.
DANIEL: Yes.
HEDVIG: I looked up quay with a Q on Etymonline, and it says that it goes French, Gaulish, Celtic, Proto-Indo-European.
DANIEL: Yes. I didn’t read the whole thing.
HEDVIG: So, we have a borrowing from Celtic to… Oh, sorry, of course we have a borrowing from Old Celtic to Gaulish, because Gaulish is a Celtic language. Hedvig’s mind is… I took off my hat.
BEN: What do you do for a living again? I can’t remember.
HEDVIG: I took off my hat.
[LAUGHTER]
DANIEL: Is that what it is?
HEDVIG: I’m putting my hat back on.
BEN: The fez comes back.
DANIEL: Get that thing back on. Thank you, Marianne, for that one.
HEDVIG: Thank you.
DANIEL: Last one from aengryballs.
HEDVIG: Mm-hmm.
DANIEL: Now, I’m going to get the pronunciation wrong.
HEDVIG: Okay.
DANIEL: But Vietnamese bồi, which means waiter. And the O has a couple of diacritics. And then English BOY, male servant.
HEDVIG: Uh-ah.
BEN: As in, “Here boy, fetch me my slippers.”
DANIEL: “I say, boy.”
BEN: Yeah. Okay.
DANIEL: Yep.
BEN: Wait. But hang on, I feel like we need to clarify here.
DANIEL: Yes.
BEN: But not all of the senses of BOY.
DANIEL: Nope, just the waiter sense.
BEN: Just one sense of BOY.
HEDVIG: But I think that aengryballs takes for granted that the waiter BOY in English and the male younger person is the same.
BEN: Well, no, but I… Okay.
HEDVIG: They’re just highlighting this one for us.
BEN: If we are associating that the word BOY in English, B-O-Y, meaning young person, is related to the BOY for… Oh, I guess they could be related, couldn’t they? Because like rich, douchey, white, French people could have gone over there during colonialism and were like, “Boy.”
HEDVIG: Mm-hmm.
DANIEL: They could have. And yet, this could be one of those things where there’s only so many sounds and syllables possible.
HEDVIG: Yeah.
BEN: I said that for the first Related or Not. And you know what happened to me, Daniel? I fucking didn’t win. So, you can take your sorrys and stuff them in a sack, mister.
DANIEL: Vietnamese words are only one syllable anyway, usually.
HEDVIG: But to be fair, LOPE and LEAP and LUPUS are slightly more than BOY.
BEN: Are they? They’re still one syllable.
DANIEL: Does anybody have a favorite coincidental pair? Mine is meli, which means honey in Greek and in Hawaiian, but it’s unrelated. It’s just that there are only so many things possible. And every once in a while, you get an overlap.
BEN: Especially with Hawaiian, when you drop that many phonemes out of a language, it’s like, well…
DANIEL: Of course, it sounds the same.
HEDVIG: I’m a basic bitch. I’m a big fan of OBRIGADO, ARIGATŌ, Portuguese and Japanese words for thanks.
DANIEL: Oh, that’s a good one.
BEN: Oh, yes.
DANIEL: Not related.
HEDVIG: And they are too long. They’re both too long.
BEN: I don’t have… I’m not sure I know enough of these, unfortunately. I believe NO is NO in both English and Spanish, how unusual. [HEDVIG LAUGHS]
DANIEL: That’s not coincidental.
HEDVIG: One of the definite articles in Samoan and French is LA and LE, and I get those confused.
DANIEL: Wow, okay. Well, time to put in those votes.
BEN: Yep, yep, yep.
HEDVIG: Okay.
DANIEL: I said totes a coincidence. That’s my answer, not at all.
BEN: Look, I’m banking on colonialism being a shitbag here, so I’m going to go related.
DANIEL: Okay.
HEDVIG: That’s true, my dear Benjamina, but that was the French, and they would say garçon.
BEN: True.
HEDVIG: So, I’m going to go for coincidence, not related.
DANIEL: Aengryballs says according to the Wiktionary… Okay, now this is not the most wonderful source. I couldn’t find anything else, but let’s see what it says. According to the Wiktionary, they are probably related. And the path is from English to French and then from French to Vietnamese. Nice going, Ben.
BEN: Feels so good. Never ever bet that colonialism is not at fault in some way.
DANIEL: If not here, then in other ways.
BEN: Ainslie’s razor. If colonialism being a shitbag could be an explanation for this, it probably is.
DANIEL: Thanks to aengryballs for that, and thanks to Aristemo for our Related or Not jingle theme. If you have a Related or Not jingle you want to throw us, just send it my way, because we’ll probably play it. hello@becauselanguage.com is my email, and my name is Daniel Midgley.
BEN: Jesus. Thanks. Thanks, Late Night Jazz DJ.
DANIEL: Hey, babies.
BEN: [LAUGHS] god.
[MUSIC]
[INTERVIEW BEGINS]
DANIEL: We are here with Professor Adele Goldberg, the M. Taylor Pyne Professor of Psychology at Princeton University, a pioneer of the framework known as the constructionist approach, and probably one of the maybe five people who actually understand how language works. Adele, thank you for coming and hanging out with us today.
ADELE: There may be four by your reckoning, I think. [LAUGHS]
DANIEL: Adele, we started this conversation because I noticed a thing that you wrote on Twitter back in the days. It was about large language models. And I said, “Wow, this is really interesting. Do you want to talk about it?” And you said yeah. And then, I realised that you’re exactly who I want to talk to because I’m kind of obsessed by constructionist approaches and I wanted to find out more about it. So, this is kind of a two-parter. What we can learn from large language models, and what I can learn about constructionist approaches to grammar.
ADELE: Okay, that is awesome. So, a constructionist approach starts from two basic ideas. One is that constructions are central, and constructions are defined simply as learned pairings of form with some kind of function. And the other aspect, the other motivation for the term, is that we believe that language is constructed, that is, that it’s learned. The key idea, the key insight of constructionist approaches is that words and grammar are the same kind of thing. So, words clearly are learned pairings of form (they have a certain phonological pattern) and function (they have a meaning). So, words are instances of constructions.
DANIEL: Okay, so the word MICROPHONE sounds like this: /maɪ kɹoʊ foʊn/. That’s its form…
ADELE: Mm-hmm. That’s right.
DANIEL: …but it also comes with a meaning attached and that is the thing I’m talking through.
ADELE: Exactly. And so, words are learned pairings of form and function. So are idioms like IN CHARGE OF or HIGH WINDS.
HEDVIG: KICK THE BUCKET.
ADELE: Also, KICK THE BUCKET, but it doesn’t have to have non-compositional meaning. So, KICK THE BUCKET, people make up stories about why it means what it means to die, but we don’t really know what the origin is. But in a lot of other cases, the meaning is interpretable, even if you’ve never heard it before, but you still know that you have heard it before, and that makes it a construction, that you’re aware of its frequency. It occurs more… The combination of words occurs more commonly than you’d expect if the words were just produced by chance or based on their meanings.
And then, the most important part to me are the phrasal patterns, the more abstract patterns. People refer to them as syntactic patterns that have really captured the imagination of a lot of linguists for the last 60, 70 years. And the reason for that is that there’s a lot of complexity in those formal patterns that hasn’t been well understood and, for various reasons, was assumed not to be learned, was assumed to be universal and unlearned, or so called innate. Innate is really hard to define, so I try to avoid that term, but what was meant was that somehow, it was biologically determined aspects of our syntax.
DANIEL: Well, what’s one of those?
ADELE: Well, there are none. There are none in reality. [DANIEL LAUGHS] But for instance, a lot of people still will restrict their syntactic claims to assume that you have only binary branching, meaning that you only have two parts so that if you imagine a little tree diagram that can only have two parts, and if you need a third part, you have to attach it to the conglomeration of two.
DANIEL: Okay.
ADELE: So, that’s a restriction. There’s actually not any evidence for it, but it restricts the grammar and people have assumed that is a universal and unlearned.
DANIEL: So, if I say something like… I’ve got DOG and that means a thing, I can attach BIG to that. And now those two things are like a thing.
ADELE: That’s right.
DANIEL: But if I want to now put THE onto there, I have to take THE and I have to stick it onto BIG DOG. I can’t have it be like a three-way branch, THE and BIG and DOG. I’ve got to do BIG DOG first and then I got to do THE onto BIG DOG and now I’ve got a little phrase there.
ADELE: That’s exactly right.
DANIEL: But that’s not right.
ADELE: Well, in that particular case, it makes sense because when you talk about constituent structure of syntax, like what are those little clumps, the units, they really are meaning based. All the tests for constituent structure depend on interpretation. So, people will argue that a phrase like “visiting relatives is…”… Like, “visiting relatives are dangerous?”
DANIEL: “Flying planes are dangerous.” No, wait, flying planes…
ADELE: Okay, yeah, so the verb messes it up.
DANIEL: I got it.
HEDVIG: I like “visiting relatives”. I just had relatives visiting, so I feel like we should go with the “visiting relatives are dangerous”.
DANIEL: How about “Visiting relatives can be annoying”? That’s ambiguous.
ADELE: Oh, thank you, that’s very helpful. That’s exactly right. And people will say that the evidence that there are two different tree structures comes from the ambiguity, comes from the fact that there are two different interpretations. So, we can turn that around and recognize that what’s really going on is that there are different ways to semantically, to meaningfully group the elements. And people can call that syntax, or you could call that unitizing of meaning.
DANIEL: Okay. Now, this is very different from an approach that I’ve been doing. Next week, I’m going to be doing an exercise with my Linguistics 101 students because we’re going to generate a sentence. And the way that I’ve been doing it, here’s what you do. You start off with a sentence and that’s an S. And then, S breaks down into a noun phrase and a verb phrase. So, you got a little tree, at the top of the tree, you got an S, and then there’s two branches, NP, VP for noun phrase and verb phrase. And then, I break down my noun phrase into other things like nouns or maybe determiner + nouns, stuff like that. And I break it down, break it down, break it down and then I’ve got like a tree.
And then, I’m going to walk over to my words and I’m going to start hauling words out of my lexicon and I’m going to plug those words in to the cool tree that I made and then I’ll have a sentence. So, I got two parts of my table. I got rules and then I got the lexicon.
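Daniel’s classroom procedure (build the tree from the rules table first, then haul words out of the lexicon) can be sketched as a toy program. Everything here, the rule set, the lexicon, and the function name, is invented for illustration; it just shows the “rules plus lexicon” setup:

```python
import random

# A toy phrase-structure generator, illustrating the classroom exercise.
# The grammar and lexicon are invented examples, not a serious grammar.

RULES = {
    "S":  [["NP", "VP"]],               # S -> NP VP
    "NP": [["Det", "N"], ["N"]],        # NP -> Det N | N
    "VP": [["V", "NP"], ["V"]],         # VP -> V NP | V
}
LEXICON = {
    "Det": ["the", "a"],
    "N":   ["dog", "linguist"],
    "V":   ["sees", "runs"],
}

def generate(symbol):
    """Recursively expand a symbol top-down into a list of words."""
    if symbol in RULES:                      # non-terminal: pick a rule
        expansion = random.choice(RULES[symbol])
        return [word for part in expansion for word in generate(part)]
    return [random.choice(LEXICON[symbol])]  # terminal: pick a word

print(" ".join(generate("S")))
```

Notably, nothing in this setup stops the grammar from pairing a verb with complements that verb never takes (it can produce “runs the dog” as happily as “sees the dog”): the rules are blind to the words, which is exactly the kind of objection Adele raises against the tree-first picture.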
ADELE: Exactly right. So, that’s the traditional view and that is the view that we think is wrongheaded. So, it’s fine to teach it in 101, everybody does.
DANIEL: Yeah.
ADELE: But think about it as an actual theory of how people produce utterances. It doesn’t quite make sense. Why in the world would you start out with a meaningless tree and build it and only then decide what you want to say? [DANIEL LAUGHS] That’s kind of crazy. And moreover, the phrase structure rules that you use there, that’s the term of the… That’s what they’re called when you say like a verb phrase goes to a verb plus something. Well, there’s nothing general you can say about verb phrases. Verb phrases come in many different flavors, and it depends largely on the verb and the meaning.
So, for instance, if you have the verb CONFIRM, you’re likely to have a direct object. It’s likely to be transitive. “She confirmed the date.” But it could be something else. It could be, “She confirmed that he was going.” And verbs have these preferences for occurring with certain complements versus other complements. Those tend to be specific to individual verbs. There are generalizations across the lexicon. But when you go to choose the words after you’ve built the tree, you can’t choose any noun or any verb. You have to be very careful, so you have to sort of decide ahead of time what you want to say and put those words in your lexicon for the purpose of illustration.
DANIEL: Yeah. The example that I’m chewing on for my lecture is HOUSE and HOME, they’re both nouns, and they’re both kind of synonymous. And I can say, “Daniel went home,” but I can’t say, “Daniel went house.”
ADELE: Right, exactly.
HEDVIG: Right. You know how you can have certain go-to sentences or something to signpost a certain piece of knowledge? And for construction grammar, for me, it’s the “He sneezed the napkin off the table” construction. Most people have probably never heard SNEEZE as a transitive motion verb. It’s usually just an intransitive thing: “He sneezed.” But if you were to be introduced to the construction “He sneezed a napkin off the table,” you actually have more material from the “off the table,” etc. So, you could think of it as a little template construction. You could insert certain verbs in there, and you can derive a meaning from that construction. You’ve learned something that is almost like a lexical item, but it’s larger than a lexical item.
ADELE: Exactly. So, that’s exactly the fundamental idea of constructions, that you have things that are like lexical items in that they pair form with some kind of function, but they can be large and they can be small. Exactly right.
DANIEL: Okay.
HEDVIG: It’s constructions all the way down and all the way up. For all intents and purposes, I guess…
DANIEL: “For all intents and purposes.”
HEDVIG: …morphemes are constructions and sentences are constructions.
ADELE: Well, okay, so there, I would stop you a little bit. So, morphemes would be like words with partially open slots. Because the morphemes don’t float around freely, we know where they go, and we also know what words they’ve appeared with. And we can create new ones, but we have PRE- with an open slot. And we know that PRE- occurs with something that can be construed to be relevant to time, like PRE-GAME is some kind of event that has a temporal component. PRE-HAIRCUT. PRE-1976. We can create new ones.
HEDVIG: PRE- my ex-boyfriend.
ADELE: But it has to have that form.
HEDVIG: Exactly, right.
DANIEL: So, if I’m not making trees in my mind when I say stuff and then populating those trees with words, what am I doing in a constructionist view? Let’s say that I’m using a sentence like, “Daniel ate his way through the Halloween candy,” what am I doing?
ADELE: Okay, perfect example. So, you’ve got a collection of constructions. You’ve got the pronoun HE. You’ve got a particular WAY construction. So, “ate his way through the Halloween candy” is an example of this very general construction and that’s a good example where you can put almost any verb in there. Even verbs that are normally obligatorily transitive. So, you can say, “He devoured his way through the Halloween candy,” even though you couldn’t say “he devoured” on its own.
DANIEL: Yeah, I would have to say, “Oh, now I’ve got two rules for DEVOUR. It’s intransitive and it’s also transitive somehow. I didn’t realise that, but I realise at this moment that it’s transitive because I made that sentence.” That’s weird.
ADELE: Exactly. So, a noncircular way of doing that is to recognize these larger patterns and then to understand that verbs have their preferences for occurring in different patterns. But you can also use the patterns to coerce verbs into new interpretations or in new contexts as you need them, yeah, as long as there’s not a better way of saying it. So, there are limits to how this coercion can work. Most English speakers wouldn’t say, “She explained him the news.” We would say, “She explained the news to him,” instead. And the reason we wouldn’t say the former and just coerce the verb typically is that there is a better way to express it. “She explained the news to him,” is right there. So, there’s no need to create a new pattern.
DANIEL: And yet, if I really, really wanted to, I could say, “Explain me that.”
ADELE: That’s right. And that particular example has become pretty familiar. And there are dialects that allow it, and second language learners often do it because there’s no semantic reason not to do it. So, there are cases where it works better. I think “explain me” is better than “explain him” because me in that construction is super, super common.
HEDVIG: Even US is worse.
ADELE: Yeah, “Explain us the story.” Exactly, exactly. So, people retain a huge amount of memory for the co-occurrences of particular words in particular patterns and the contexts we use them in. For words, this is super clear. I just came across an example from Benjamin Dreyer. Maybe you’ve had him on; if not, you should. He’s wonderfully sardonic.
DANIEL: Yeah, he’s a good follow.
ADELE: Yeah, he is. So, he just mentioned… he raised the question: when do we use the adjective LATE, as in “the late Pope Francis”? And he pointed out that it works for a maximum of about five years; you can’t use it beyond that. And you tend to use it when the other person doesn’t know that the person has died, or you’re not sure whether they know. And he pointed out you wouldn’t use it for a nefarious character. It implies some kind of respect.
HEDVIG: Hmm.
ADELE: So, this is the kind of thing that we all absorb but don’t know we know. For most of us, it would be hard to articulate those facts, but we obey them. I mean, if you look at corpus data, that’s the way it’s used.
HEDVIG: I’m having this problem right now because my husband is learning Swedish. I’m Swedish. And he found this list of the 5,000 most common Swedish words. So, he has little flashcards, and he goes through them. And for some reason, there are a lot of synonyms in it, like several different words for “reason” or “cause”. And he asks me, “Okay, these flashcards are telling me that these three different words all mean this. Hedvig, can you explain to me how they are different?” And I’m like, [EXHALES].
ADELE: Yeah.
HEDVIG: And I actually have to go and consult this Swedish corpus and look at the word neighbourhood. And then, I see things and I’m, “Yeah, this one is more when it’s like a nefarious thing.” Or, “This one is more when you’re shameful.” Or, “This one is more blah, blah,” but it’s very hard to extract when you’re just asked on the spot, but I do obey it.
ADELE: Exactly.
HEDVIG: I do it correctly…
ADELE: Exactly.
HEDVIG: …I hope.
ADELE: That’s right. And when you recognize that, and I think every learner of a new language comes to see it (“Wow, I don’t know what context to use this in”), you begin to realize that we have to be absorbing these correspondences in context not only for words but also for grammatical patterns, because certain grammatical patterns are used in certain contexts and not others. So, for example, “The exam is to start at six.” That’s just a statement of fact: the exam is going to start at 6 o’clock. But who would say that? You have to be a person of authority to say it. It wouldn’t be the student reminding the teacher, “the exam is to start at six”; it’s the teacher who would say it. So, even English, which is not known as a language with a lot of hierarchical, attitudinal structures, has some; you just have to look for them. But I love your example, because oftentimes these phrasal patterns convey these subtle aspects of meaning.
So, another example is what I like to call the gossip construction. So, something like, “It is blank of you to be here.” And how would you fill that in?
HEDVIG: Nice.
DANIEL: Good.
ADELE: Okay, good. And there is a dialect difference between British English speakers and American English speakers. Some prefer NICE and some prefer GOOD. American English usually prefers NICE. And I had one American speaker say, “Well, I thought good, but I thought it in a British accent.” [CHUCKLES] So, it turns out that is the most frequent adjective in that construction, but you don’t have to use NICE, you could say…
HEDVIG: Kind.
DANIEL: Great of you.
ADELE: It is kind… Yeah, or you could say, “It is annoying of you to be here.” Or, “It is nasty of you to be here.” It doesn’t have to be positive. But it’s always about some action that someone else has taken. So, the OF YOU phrase actually codes an agent. So, you can say, “It was nasty of him to go to the funeral,” or something like that, but the him has to be the agent of some action like going to the funeral. You wouldn’t say, “It’s nice of the dishwasher to save water.” That would sound odd.
DANIEL: I mean, it is nice for a dishwasher to do that, but that’s not…
ADELE: Okay. That’s right, but the content is there.
HEDVIG: Yeah.
ADELE: You wouldn’t use that phrasing. You changed it in a way that makes it make sense. But if you say, “It’s nice of the dishwasher to save water,” you’re anthropomorphizing the dishwasher. Like, good dishwasher. Thank you. [LAUGHTER] Otherwise, it doesn’t make sense. So, that’s an example where the formal pattern puts these constraints on the interpretation and it’s unusual for English.
HEDVIG: And there are so many of these that can seem… like, these larger constructions that you have to learn in the language that can seem sort of arbitrary. So, in English, for example, if you say, “You scoundrel,” you can only say negative things after you there, right? And if you were to say a neutral or positive thing, it’d be weird. So, if I say, “You linguist,” that would be like, you’re annoyed with linguists somehow. In Swedish, for reasons I don’t know, we actually use the possessive. So, we say, “your linguist,” “your scoundrel.”
ADELE: Mm-hmm. Interesting.
DANIEL: And in French, you would say “espèce de linguiste” which is like a kind of linguist, and that’s even worse.
HEDVIG: Right. But if you were just a second language learner and you just saw the pronoun and the word, you’d be like, “What is going on?” You have to learn that larger construction and what is allowed to go in there. So, these word correspondences and these patterns, is that what large language models do?
ADELE: So, we think so. There’s so much that we still need to learn about large language models, but we absolutely think so. And they’re getting language so well that it’s hard to stump the newest models. If you put in an example like “The exam is to start at six” and ask who would say it, they get it right. They explain the pattern to you. It’s really remarkable. So, I think they must be learning these larger patterns.
And there’s some exciting evidence from large language models that they’re also connecting the patterns. So, it’s important to realize that constructionists love to get into the weeds and talk about particular constructions with very nuanced meanings and functions. Unfortunately, that leads to a misunderstanding of constructionist approaches, that we’re butterfly collectors, but that’s not accurate. The way we actually view language is the way people have long viewed the lexicon, the mental dictionary of words, which is not a list. We know our knowledge of words is not a list. The words are interconnected. They’re related semantically. They’re related in terms of their formal patterns, their neighbourhoods and clusters. And that is true of all the idioms and collocations and phrasal patterns as well. They’re almost never completely idiosyncratic. There are a few, like BY AND LARGE (I don’t know where that came from), that are really uniquely bizarre. But most of these patterns have neighbours, and those neighbours support them.
DANIEL: So, here I was thinking: well, if I’m understanding language for a constructionist, they would say, “Daniel has a big, long list of everything that he’s likely to hear and all of the constructions that he knows. And somehow, he’s managed to save all of that in his memory.” But now, you’re saying, “No, that’s not really how it works. You know words and you know patterns that they occur in, and the patterns lock together with other patterns, and it’s all part of a big ball of knowledge, not a list.”
HEDVIG: Mm.
ADELE: Exactly. Thank you. It is not a list. That’s really, really important. It’s a network, a rich, interconnected network. And our memory, we know, is incredibly vast. Work on visual memory has shown that, and work on language has too. We remember one another’s voices over decades, and people will respond faster to new words spoken by the same person than by a new person. There’s all kinds of evidence for this really fine memory for specific details.
DANIEL: And we can also say things like, “Now, I’ve told you this before but,” which implies I’ve got a mental list of everything that I’ve said to everyone.
ADELE: Exactly. And as you get older, you tend to forget. We call it source memory. You forget where you learned it or who you said it to before. So, people tend to start repeating themselves. But absolutely, we have that to some extent, and we have it pretty good when we’re young, absolutely.
HEDVIG: All of these correspondences and patterns and neighbourhoods and networks are created by the productions we make and the ones we perceive from other people, by what we talk about. It doesn’t exist in some abstract form in our heads; it’s something we say to each other…
ADELE: Exactly.
HEDVIG: …and we hear it and we read it and we see it on TV, etc. But that also means that is how, in a way, bias creeps into large language models. And potentially maybe, if you believe in linguistic relativism, that could creep into the human brain as well. So, if the models are trained on certain semantic networks, for example, connecting certain ethnicities to certain negative statements, then those will come into the production as well. That’s how they get into our heads, and that’s probably how they get into large language models as well. So, these neighbourhoods are shaped by what we actually say and do, right?
ADELE: That’s absolutely right. And so, there are some biases baked in. Politicians are aware of this, that if you say it often enough, it will come to mind. I’m old enough to remember when Hillary Clinton was running and friends who liked her a lot, if you would say “Crooked…” they would say “…Hillary”. They didn’t believe it, but it was a word association that became so well known, so entrenched, that you couldn’t help but think of her name after that adjective. And so, politicians and advertisers make use of that.
HEDVIG: And that’s similar to a normal cognitive bias, normalcy bias: the more you experience something, the more normal you think it is. The more you see something on TV, the more you think, “Oh, this is how people behave.” So, that does mean that if you flood people with certain statements, like “crooked Hillary”, you can make them think, “Oh, that’s just what people have always called her. That’s how she is.”
ADELE: Right. People tend to assume that whatever’s familiar is true because the feeling of something being familiar makes it feel as if it’s true. But the large language models, they’re taking some steps, at least, to mitigate this. I have a particular example of this where in many languages, like in Farsi or Hungarian, the pronoun is gender neutral. And it doesn’t mean that the society doesn’t recognize the difference between male and female. People in Iran certainly do recognize the difference, but they use one form, ou, for both.
And so, just a few years ago, if you put into a large language model or if you used Google Translate and you gave it phrases with the same pronoun, OU, all in one session, it would translate in the… Again, this was like 2021, it would say, “She is beautiful,” “He is clever,” “He reads,” “She washes the dishes,” “He builds,” “She sews.” I kid you not, it goes on and on with these stereotypes. But if you do it today, and I did it this morning, that’s not the case. It says, “She is beautiful,” “She is smart,” “She reads,” “She washes the dishes,” “She builds,” and so on. And then, it clarifies, “Let me know if you’d like a version that uses the gender-neutral pronouns or HE instead of SHE.” So, someone helped it along with that, I think, to stabilize the pronouns in a passage.
HEDVIG: But it does mean that someone needs to be aware and care to do that.
ADELE: That’s right. That’s right.
HEDVIG: We can’t just make them run wild, because if they do this unsupervised, they will just spit back whatever we gave them.
ADELE: Absolutely, that’s right.
DANIEL: Well, I think we are to the point now where we can start talking about the points that you raised in that discussion about what we can learn from large language models. Can I just read and then I’ll get you to comment?
ADELE: Sure.
DANIEL: You start off by saying, “We don’t know that people represent language in a way that parallels large language models, but we have already learned major things. Number one, it is possible in principle to learn language in all its rich specificity and context-dependent generalisations from the statistics in massive amounts of text.” And then, you comment, “I did not believe that would be possible myself.” You didn’t think that it was possible to learn language from stats?
ADELE: Stats by a human being in an environment, I absolutely did. I knew that language is learned. We don’t need this unlearned knowledge. But I did think that we needed to understand the world. We needed to understand other people’s intentions. So, that is what surprised me, that it could do so well with meaning where it had no experience, no real-world experience. It was all text. That surprised me. So, I learned something from that. I was genuinely surprised.
Just a few years ago, in 2021 or 2019, I used to teach in class that I didn’t even understand the purpose of these large language models because we don’t just want to predict the next word. We want to convey a message. So, I was really flabbergasted when ChatGPT came on the scene, and I realized that these are getting meaning and they’ve only gotten better and better. So, I loved your response to that, Daniel. You said, “Why is it a big deal? Of course we can learn from statistics. Doesn’t everybody know that?” Or that’s what you were implying.
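Adele’s surprise here is about how much can be learned from next-word statistics alone. A minimal sketch of that idea is a toy bigram predictor; the mini-corpus below is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on vastly more text.
corpus = ("the exam is to start at six . the exam is hard . "
          "the exam is to start now .").split()

# Count bigrams: how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the toy corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("exam"))  # "is": the only attested continuation
print(predict_next("is"))    # "to": seen twice, vs "hard" once
```

Scaling this same idea up, with vastly more text and far richer context than a single preceding word, is roughly the training objective the models under discussion start from.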
DANIEL: Yeah.
ADELE: Those aren’t your exact words. [LAUGHTER] And you’re absolutely right. I mean, the constructionist perspective, I think, would be absolutely common sense if it weren’t for the fact that the dominant framework in traditional departments continues to be the idea that large parts of language are not learned, that syntax is somehow universal, and that our knowledge of words is a different kind of knowledge than our knowledge of grammar.
HEDVIG: I have to say, as the show’s European: at many European universities, and also in Australia and New Zealand, thankfully, that tradition is not as strong.
ADELE: You’re right, yeah.
HEDVIG: I did take a summer school with a mix of American and European students, and we had a class that featured Construction Grammar. The American students had had no intro to Construction Grammar, and the European ones were like, “Well, this is basic 101. You read at least a little bit of this.” It was a bit embarrassing, but it is changing.
ADELE: That’s so interesting. Yeah, no, it’s very true. And in fact, you’re right: Europe really is much more open-minded and presents a lot of different theories to its students. I love visiting Europe for that reason. Even in America it is changing, but on the East Coast in particular there’s still a shadow cast by Chomsky and his followers.
DANIEL: Yeah, I was going to say there are non-linguists who are just not getting what we’re talking about at all. So, I think we really need to address the elephant in the room. The dominant paradigm, when I was doing linguistics in America — and I thought everywhere — was the Chomskyan generativist thing, which is a lot like that exercise that I described where you construct trees and then you populate them with words. And one of the contentions is you can’t learn language from observation alone. And according to this idea, you don’t have to because a lot of the knowledge that we need for language is already inside our brains. That’s why children are able to learn so fast, because they just get some input and then their language module flips on. I hope I’m not straw-manning the generative argument.
But it would take a long time to learn all that stuff. And so, you don’t have to because it is innate in our brains and there is a universal grammar. All human languages are basically the same language with minor quirks. And so, when you are learning your first language, you’re basically learning the quirks. I hope I’m getting that right.
ADELE: Thank you for describing that so I didn’t have to. Yeah, that is the tradition. It’s been changing with new results, but that is basically the idea. And at the time Chomsky came up with that proposal (he was in part following his own mentors, like Harris), it wasn’t crazy, because we didn’t know what the input looked like. We didn’t know how repetitive it was. We didn’t know much about learning, so we didn’t know about statistical learning. People were using computers as the model, and at that time computers had very small memories but good computing power. So the assumption was that that’s true of our brain too: very little memory but good computing power.
And then, there’s another very interesting idea that makes me sympathetic with Chomsky in that… So, Chris Knight has proposed this idea that in the 1950s and 1960s, MIT was already getting lots of government grants for defense, and Chomsky was already antiwar and did not want any of his research to be used for government grants. And so, Chris Knight’s hypothesis is, and he backs this up in a long book with a lot of quotes, a lot of detail. I found it farfetched when I first heard about it, but when I read the book, I was kind of impressed. His hypothesis is that Chomsky did not want his research to be used for propaganda, and so he didn’t want it to be used by the military. So, he decided, “I am not going to do anything related to meaning. I’m only going to focus on form.”
HEDVIG: Mm, interesting.
DANIEL: Interesting.
ADELE: So, that is interesting, isn’t it?
DANIEL: Well, then that’s why you have the form and meaning split typical of generativism, which constructionist approaches attempt to heal.
ADELE: Exactly. And the traditional approach does separate syntax from the lexicon. That’s the key distinction. So, Steven Pinker made that popular in his book, Words and Rules, and argued that they’re two separate kinds of things…
DANIEL: Okay.
ADELE: …but for us, they’re the same kind of thing.
HEDVIG: I just had one question about what you guys were talking about earlier, that you were surprised the large language models could understand meaning so well. And it is true. Linguists often focus on the small scale: we think about words and clauses and sentences, and less often about longer units, what we might call paragraphs or stories. And it does seem to me, and I was wondering if it’s because the large language models are not embodied and not experiencing the world, that they are worse at meaning in larger units. Because that is what I experience when I interact with them: the sentences make sense, but then sometimes the paragraphs make less sense. Do you guys ever experience that?
DANIEL: I don’t think they are understanding meaning. So, that would track, if that’s the case.
ADELE: Okay, so that’s a really interesting perspective. I’ve been impressed, actually, about how well they do understand paragraphs. And a lot of people are with you, Daniel, in saying that they don’t understand. Well, we can’t believe they understand, but we really don’t understand how we understand. So, I think people come at that question with some idea that, “When I understand, I build mental models and I reason and I understand the world,” but we really don’t know the nuts and bolts of that at all. We don’t know what a mental model looks like in the brain.
It looks as if we use those. I’ve referred to mental models too. It’s a good level of description, but there’s so little we understand about the human brain that I think it’s too early to say that they’re not doing what we’re doing. There are differences for sure, but I think the parallels are really intriguing.
DANIEL: Okay, hot take! Okay, hot takes for days.
HEDVIG: Hot take. Yeah, we’re getting closer to Blade Runner and Battlestar Galactica. Yeah, yeah.
ADELE: Well, people are falling in love with these chatbots.
DANIEL: I know, I know. They fell in love with Eliza.
ADELE: Do they?
HEDVIG: They fall in love with scammers who feed them a script as well.
ADELE: Well, I’m not saying they should fall in love. I’m not saying that they are human. They don’t have authentic emotions. They don’t have experience in the world. And they need massively more text than any human could ever encounter. It’s something like 6,000 years’ worth of language that they need to do what they’re doing.
DANIEL: They still do okay with less.
ADELE: Yeah, that’s right. So, it’s not clear they need that much data. I mean, on the Constructionist perspective, where everything is a pattern of form and function that you use in particular contexts, if they didn’t understand context, they would not be using these constructions, right?
DANIEL: Yeah, that’s true.
ADELE: And then, they would sound odd, you would notice.
DANIEL: That’s true. That’s true. Okay, well, I think we’re ready to put point number one to bed. It is possible to learn language from observation alone, even if you’re a computer. It’s possible to put out grammatical output from stats alone.
ADELE: Yes.
DANIEL: And you don’t need a human brain to do it. You don’t need innate knowledge. It really does work. Okay, let’s go on to number two. This one confused me. You learned that “natural languages can be represented in a way that reflects quite human-like use, without separating words and rules, syntax from meaning.” Like that exercise I described for my students, where I separate the rules over there and the lexicon over there in this generativist approach, but large language models don’t have to do that. Can you tell me more about this?
ADELE: That’s right. So, there are no rules hardcoded in the models. There are generalizations that correspond to what you might want to call rules, but they emerge from the statistics in the input, just as word meaning does. The input is the same and the representations are the same: it’s all vectors. Meaning and grammar are not represented differently; they’re not in different parts of the system, as far as we can tell. It’s all combined. And so, that I think is really intriguing. And that’s been true since the early days of the connectionist modeling that inspired these large language models. You’re not dividing words and rules in these models, so I find that to be intriguing.
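One way to picture the “it’s all vectors” point: words and a phrasal construction can live in the same space, so a verb’s fit with a construction is just geometric closeness. The vectors below are hand-made for illustration, not learned from data:

```python
import math

# Toy vectors invented for illustration; in a real model these are
# learned from text, and one space encodes lexical and grammatical facts.
vectors = {
    "devour": [0.9, 0.8, 0.1],
    "eat":    [0.8, 0.7, 0.2],
    "sleep":  [0.1, 0.2, 0.9],
    # A construction can be represented in the same space as the words:
    "X-ed his way through Y": [0.7, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Consumption verbs sit closer to the WAY construction than SLEEP does,
# one way a verb's fit with a pattern could fall out of geometry alone.
way = vectors["X-ed his way through Y"]
print(cosine(vectors["devour"], way) > cosine(vectors["sleep"], way))  # True
```

The point of the sketch is only that no separate rule component is needed: a gradient preference for which verbs fit which patterns can be read straight off distances in one shared space.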
DANIEL: And that’s consistent with the constructionist approach, because I think we’re talking about the lexicon-syntax continuum, which is something that you’ve talked about a lot.
ADELE: That’s right. So, if you think about morphemes, then partially filled words, then canonical words, then compounds that we know we’ve heard, then collocations and idioms, and then more abstract, flexible grammatical patterns that often contain specific words. Like the WAY construction we mentioned before, “He devoured his way through his Halloween candy”: it has to be WAY. You can’t substitute PATH or anything else there. You can’t say, “He devoured his path through the Halloween candy.” So, there’s this integration of words and phrasal patterns.
DANIEL: The grammar grows out of the constructions that you use.
ADELE: Right. It emerges from the actual expressions that you hear.
HEDVIG: And that’s also how it potentially changes, because someone could be using a construction a particular way, and over time people talk about it a little bit differently in other contexts, and then people learning from that output make a slightly different generalisation, and that grows over time. And the larger the constructions are, I feel (which is not a very scientific thing to say), the more likely they are to change? Is that true? Well, it feels that way to me.
ADELE: I think empirically, it’s maybe the opposite. So, word meaning does change in the course of a lifetime, like LIT.
DANIEL: Yeah.
ADELE: First of all, it was LIGHTED when I was a kid, and now it’s LIT. And now, LIT means having a great party or whatever. And I think there’s a reason for that. The abstract grammatical patterns are generalizations over many items, so they’re more stable: you’d have to change a lot of those specific items before they shifted. The center of gravity would have to move. There is an interesting case where that happened on an intermediate scale. In English, we all used to say “uncles and aunts”, but that sounds odd now, right? You would prefer “aunts and uncles”. We all used to say “nephews and nieces” and “father and mother” and “pa and ma”. I’m not kidding you. In fact, there is still a male-first bias in English, which you can show for other kinds of two-noun expressions. But in this small class of phrases that refer to relatives, to family relations, there’s been a shift among many of them. So, we say “nieces and nephews”, female first, and “aunts and uncles”, “ma and pa”, “mother and father”. But that was a shift. That is a generalization, and it happened gradually over historical time, over the last century. And so, we’re actually using large language models now to see if we can replicate that shift.
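The kind of historical measurement Adele describes can be sketched as counting the two orderings per era. The counts below are invented for illustration; real work would query a historical corpus such as Google Books Ngrams:

```python
from collections import Counter

# Invented hit counts standing in for corpus frequencies per era.
hits = {
    1900: Counter({"uncles and aunts": 80, "aunts and uncles": 20}),
    2000: Counter({"uncles and aunts": 10, "aunts and uncles": 90}),
}

def female_first_share(year):
    """Proportion of tokens with the female-first ordering in that era."""
    c = hits[year]
    total = c["aunts and uncles"] + c["uncles and aunts"]
    return c["aunts and uncles"] / total

print(female_first_share(1900))  # 0.2
print(female_first_share(2000))  # 0.9
```

Tracking this share decade by decade, for each family-term pair, is how you would see whether one pair shifted first and the others followed.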
DANIEL: Fun!
HEDVIG: Ooh, it’s interesting.
ADELE: Yeah. And what happened there was one case shifted for reasons that I think I understand, and then other cases got sucked into the same pattern.
HEDVIG: Oh, analogy.
ADELE: Almost by analogy or by priming, that’s right.
HEDVIG: Yeah. That’s right.
DANIEL: Wow, okay.
HEDVIG: People say that’s a very strong force of change.
ADELE: That’s right.
DANIEL: Let’s go on to the third one. I think we’re actually intruding into it here. You say, “Large language models, like humans, are impacted by frequency and similarity. They offer us the opportunity of systematically probing how that works.” Well, it sounds like the “mums and dads” thing is sort of like that, because similarity, we say, “This thing is like that thing, so I’m going to drag all of these together into a new construction.”
ADELE: That’s right. So, it’s not conscious. Nobody woke up one morning and decided, “I should say ‘nieces and nephews’.” But through analogy, or through simple priming… Remember, the lexicon and the constructicon, or the construction net I should say, is a network. You go to find the meaning that you want to convey, say the “nieces and nephews” meaning. You could pronounce it “nephews and nieces”, because the meaning wouldn’t change much. But in that cluster, you’ve got activation from other cases where the female comes first. And that led over time to people producing “nieces and nephews”. And for us today, that’s the conventional way to say it, and we just learn that.
DANIEL: When you mentioned frequency effects, I was thinking of an example. It’s the thing where if I say, “My name is Daniel,” that’s something that people say all the time and it means a normal thing. When you say normal things, you mean normal things.
HEDVIG: Oh, this is Levinson’s pragmatics.
DANIEL: Yes, it is.
HEDVIG: Yes.
ADELE: Yeah, yeah.
DANIEL: But if I say, “I am the man they call Daniel,” or, “I was given the name Daniel,” that would be a very weird and uncommon way to introduce myself. That would be me, meaning something different.
ADELE: Yes, exactly. It’s not that you would never say it. There are certain contexts where you might, if you were a character in a novel, say, because we associate that kind of formulation with a very different context. It’s not a typical situation. That’s right.
HEDVIG: Yeah. I think Levinson had the ones like, “He closed the door,” versus, “He caused the door to close.” That would mean that he accidentally caused it to close.
ADELE: That’s right. That’s right. If you use the less common formulation, there’s going to be a special reason for it. Yeah, that’s true. But we are so aware of the fine-grained nuances. So, for instance, the word CONFIRMED, which I mentioned before, usually takes a direct object as its complement. You confirm the appointment. But it can also take a clausal complement: you can say, “He confirmed the appointment was on Saturday.” But the direct object, the simple noun phrase, is more common, more frequent.
And it turns out there was really great work by Susanne Gahl and Susan Garnsey showing that when you use the expected continuation, that is, a direct object, people shorten the pronunciation of CONFIRMED and don’t fully pronounce the final /d/ sound. We often do that, but we do it in contexts where the word is easy, where the processing is simple. “Confirmed the date” without the D, but “confirmed that the date was on Saturday” with the D. I mean, that’s crazy. It’s unconscious, but it shows how attuned we are to the frequencies, not just of words, but of how those words combine with more abstract patterns.
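The verb-bias idea behind this result can be sketched as simple conditional frequencies over complement types; the counts below are made up for illustration, not taken from Gahl and Garnsey’s data:

```python
from collections import Counter

# Invented counts of complement types observed with CONFIRM.
confirm = Counter({"direct_object": 170, "sentential_complement": 30})

def bias(counts, kind):
    """Estimated probability of a complement type given the verb."""
    return counts[kind] / sum(counts.values())

# A listener (or model) tracking these counts expects a direct object,
# which is the context where speakers reduce the final /d/ of CONFIRMED.
print(bias(confirm, "direct_object"))  # 0.85
```

The claim in the transcript is then that pronunciation tracks this expectedness: reduction shows up in the high-probability continuation, full articulation in the low-probability one.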
DANIEL: Wow.
HEDVIG: And that’s how you can get one word splitting into, further down the path, potentially two different words. Like, what happens with HAVE. It’s like, “I have three dogs” and “I have to go,” in one of those, I can do “I’ve got to go,” but can I do, “I’ve three dogs”? Most English speakers, I think, would say no. And if that goes on for a long enough time, you could get two different words from that one.
DANIEL: “Hafta”.
ADELE: Yeah, that’s right. So, the most frequent words are the most accessible in general from memory. And so, those tend to be the words that get coopted for new uses. And word meanings change because we’re always experiencing contexts that are slightly different, and in that slightly different context, we have to use the words we have. So, we use a word that’s good enough in that context and we thereby stretch it to a new meaning. Or, we use it in a way that is traditional, but someone else misinterprets what we say and then they start using it in a new way. So, both things happen.
DANIEL: Wow.
ADELE: Yeah, but that’s how meanings of words get shifted. And they get shifted fairly easily because it’s just the word and the context. You change the context, you change the meaning of the word slightly. But the more abstract phrasal patterns are slower to change. They also change over time, but they’re slower because again, they’re emergent generalizations over lots of instances. So, you’d need lots of instances to change for, like, basic word order to change in a language.
DANIEL: Finally, you say, “Large language models, like humans, absorb biases that can now be measured and potentially addressed.” And we’ve talked about bias just a while ago back at the beginning, but what have you noticed in the way of bias and maybe bias mitigation?
ADELE: Well, the clearest example I have is the interpretation of the gender-neutral pronouns and how that’s shifted. Bias is everywhere in language; many studies by linguists have shown that. Jennifer Eberhardt from Stanford has shown that when police stop and question Black versus white drivers, they use less respectful language when talking to the Black drivers, regardless of the severity of the purported offense involved. And language models, as Hedvig was saying, absorb those kinds of biases, and they would do the same unless they are…
HEDVIG: Told not to. Yeah.
ADELE: …engineered not to, that’s right. And one thing that makes these models so much more successful than the older versions is that there’s an extra layer of training that teaches them, basically, to be helpful and to avoid stereotypes and hateful language. They had human beings decide which responses were most helpful and which were biased, and used that to train the models. And that seems to have made the models so much better at interacting with people.
Partly because, you’re a linguist, you’ve heard of Grice, the idea that when we use language, we assume that others are being helpful, if you think about it. Otherwise, we wouldn’t use language. There would be no point. If I assumed that you weren’t being helpful, you might say random things, full of untruths and full of irrelevant statements, but we don’t do that. We assume that we’re being helpful and we generally are helpful in being relevant to one another. And these large language models were taught to do the same. And that’s when the magic started appearing and they started being very user friendly. I think those two things are related.
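The “extra layer of training” Adele describes is typically implemented by collecting human judgments about which of two responses is more helpful, then training a reward model so the preferred response scores higher. Here is a toy sketch of the standard pairwise (Bradley-Terry) preference loss in Python; the function name and the scalar scores are illustrative assumptions standing in for a neural network’s outputs, not anything from the interview:

```python
import math

# Toy sketch of preference training: a human rater picks the more
# helpful of two responses, and the reward model is penalised
# whenever the rejected response outscores the preferred one.

def preference_loss(score_preferred, score_rejected):
    """-log sigmoid(score_preferred - score_rejected):
    small when the preferred response already scores higher,
    large when the model prefers the rejected response."""
    diff = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# When the model agrees with the human rater, the loss is low...
low = preference_loss(2.0, -1.0)
# ...and when it disagrees, the loss is high.
high = preference_loss(-1.0, 2.0)
print(low < high)  # True
```

In real systems the scores come from a learned model over full text responses, and the trained reward model then guides further fine-tuning of the language model itself.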
DANIEL: Hmm.
HEDVIG: Yeah. And they also… I heard someone say you should think of your large language model as an improv partner, because if you are kind to it, it will also mirror that back to you, which I found generally is true as well when I personally use them.
ADELE: Yeah. So, they do react to your input, which is super interesting. Although Sam Altman from OpenAI recently said that the use of PLEASE and THANK YOU in these models is, like, killing the rainforest.
HEDVIG: I thought that was…
ADELE: There are big problems with these models. Don’t let me end without emphasizing that.
HEDVIG: I saw that earlier today. Apparently, when you ask it to do something and it gives a response and then you say THANK YOU, it needs to start up the whole model. And what is it, 1.5 ounces of water or something in order to…
ADELE: Okay.
HEDVIG: I mean, I think it’s just hardcoded in that if you say THANK YOU, it should say, “You’re welcome.”
ADELE: You’re right.
DANIEL: Because that’s what we do.
ADELE: That would be a good workaround, wouldn’t it? That’s right.
HEDVIG: And that’s not hard, right?
ADELE: That’s right.
HEDVIG: And allow for like one or two typos.
ADELE: Although even that is so full of context specificity. So, if you say thank you and it says, “You’re welcome,” it’s acting as though it really did you a favor. Whereas a lot of times it might have been something very simple, like you just said, “Is this grammatical English?” And it said, “Yes, it sounds fine.” And then you say, “Thank you.” Should it really say, “You’re welcome,” or should it say, “It’s nothing,” or should it say nothing at all. So, “you’re welcome” can sound a little, I don’t know, heavy handed, a little grandiose in certain contexts.
DANIEL: Mm-hmm.
ADELE: Yeah. So, everything we say and do is laden with these context-rich associations, which I think is fascinating. And that’s the kind of thing that constructionist approaches can in principle capture really well.
DANIEL: Well, I’ve learned so much about not just large language models, but about the whole Constructionist idea. So, I’m really grateful for that because I’ve been wanting to talk about that on the show for a long time. Adele, what’s a thing that people don’t really get about language but that you wish that they did?
ADELE: I think people think that they understand language and how it works, laypeople. But there’s so much that we don’t appreciate about language, these subtle aspects of language. In a psychology department, I’m aware of how typically, psychologists talk about how dumb people are, how we fall prey to mistakes and we can’t reason well and we’re biased. But when your focus is on language, you realize how smart we are, how carefully and accurately and well-chosen most of our words and phrases are, but we don’t really know how we do that. So, I think I would want to impress upon people that our knowledge is so vast, and it depends on memory, and it depends on context, and it depends on the function of language and we do learn it. So, these models are an existence proof that it is learnable in principle from statistics in the input. And yeah, I think when people are aware of language, it’s just as you are, you realize how fun and interesting everyday conversations can be.
HEDVIG: Yeah. And like you were saying earlier, how much data it takes for a large language model to do what it does, it’s like that meme, like, “Look what they have to do to mimic a fraction of our power.” I learned a language in, like, I don’t know, depending on how you count three to ten years as a human. And if we take all the input that we gave them and have them read it in a certain… even fast, it would take hundreds if not thousands of years, right?
ADELE: That’s right. That’s exactly right. Yeah, it really goes to show how smart we are. And we don’t use any ounces of water to think. Well, that’s not true, we do have to have some water.
HEDVIG: [LAUGHS] A little bit.
ADELE: Our energy efficiency is amazing.
HEDVIG: Speaking of, let me have some more of my sugar.
ADELE: They are a proof of… Yeah, Daniel drinks a sip of water. [LAUGHS]
DANIEL: Yeah. Well, Professor Adele Goldberg, luminary in the field of Constructionist approaches. Thank you so much for coming on and sharing your knowledge with us. It’s really, really great to talk to you.
ADELE: Thank you so much for having me. I really appreciate what you’re doing, and you’re doing a great job. It’s great to meet you.
DANIEL: Thanks.
HEDVIG: Thanks.
[INTERVIEW ENDS]
[MUSIC]
DANIEL: Let’s go on to Words of the Week. This one was suggested by James on our Discord. It’s TACO. Why is this relevant?
BEN: I suspect I know this one, I think.
DANIEL: Go on.
BEN: I believe it is a… I’ve just got to get the word right. Not an initialism, an acronym.
DANIEL: Nobody cares about the difference between initialisms and acronyms.
HEDVIG: Abbreviation.
DANIEL: It’s okay.
BEN: Excuse me, Daniel.
DANIEL: No one cares.
BEN: No, Daniel, I’ve been in our Discord. If you can look me in the face and tell me our people do not care about this distinction, I will call you a dirty fucking liar.
DANIEL: They don’t care. No…
BEN: They care.
DANIEL: …they don’t give a shit.
BEN: I don’t think they get…
HEDVIG: Some of them care. A bit.
DANIEL: My ABC audience gives a shit.
BEN: I don’t think they’re prescriptivists about it, but I think they care and they know the difference. They know the difference between POISONOUS and VENOMOUS…
HEDVIG: I think it’s okay to care. People can have hobbies.
BEN: …and they’re not pricks about it, but they’re going to care.
DANIEL: I’m asking them.
BEN: Okay, yeah. Let’s do a poll. Let’s do a poll. But I believe this is an acronym which, whether accurately or not, is purported to come from the financial world, and it stands for “Trump always chickens out”. And I think it is a thing that gets said in response to the fact that we’re starting to see somewhat less volatility around Trump’s announcements of tariffs or crazy other bullshit nonsense, because “Trump always chickens out”. So, on that explanation, people are making TACO trades in the belief that Trump will just not follow through on whatever ridiculous nonsense he spouted in the last 24 hours.
DANIEL: Yes, indeed. His laziness has prevented some problems but also caused more. TACO was coined by Financial Times columnist Robert Armstrong, but it has become a definite meme. And Trump hated it, by the way: one of the Oval Office reporters put a question to him about it and asked for his reaction, and his reaction has been described as unhappy.
BEN: [LAUGHS]
HEDVIG: I mean, at the core, he is a sassy reality TV show diva person. He loves a zinger, right? And this is a bit of a zinger. It’s easy to remember. It’s funny. It relates to things people have experienced, if not in the recent period, then older. Like, he never built that wall really. He was going to make Mexico pay for it.
BEN: You could absolutely picture him making something like this up, right?
DANIEL: Yep.
HEDVIG: Right. A lot of his strong stances have no follow-through.
BEN: Yeah, yeah.
HEDVIG: So, yeah, of course he hates it.
DANIEL: And tacos are great.
HEDVIG: And tacos are great!
BEN: Yeah, indeed.
DANIEL: He hates being compared to a Latine food, a food that Mexican people like. Well, everybody likes tacos. Who doesn’t like a good taco now and again? But I won’t go for a TACO Trump.
BEN: Oh, oh, especially white people tacos, oh, yes, siree, Bob.
HEDVIG: Oh, I got into trouble on TikTok because there was an American who was in Sweden and who said she liked Swedish tacos, which I don’t. And then, everyone piled on me and was like, “Oh, my god. I don’t know what you’re talking about.”
DANIEL: Oh, why don’t you?
BEN: How dare you besmirch our flavorless gruel.
DANIEL: Well, now I’ve got to ask, how is it different?
HEDVIG: It’s fine. No, it’s just that there’s a lot of cold, uncooked vegetables, and ever since I had like hot mince and like a hot… Like, ever since I had a nice hot burrito with cheese in it, I just don’t understand why. [BEN LAUGHS]
DANIEL: Okay, all right, cool.
HEDVIG: Yeah, I don’t get it, but that’s all right.
DANIEL: Well, thank you to James for that one. Let’s go on to our next one suggested by O Tim on our Discord: APPSTINENCE.
BEN: Ooh, I’ve got some guesses.
DANIEL: Mmm.
HEDVIG: Ooh, what is this?
DANIEL: Okay, go on then.
BEN: No, no. I bogarted the last one all right, I want Hedvig to take a stab.
HEDVIG: Is it that thing of you’re on Instagram and then you think, “Oh, my god,” you close it and then you immediately open it?
BEN: Well, that would be the opposite of appstinence. That would be like…
DANIEL: What’s the opposite?
HEDVIG: Well, you’re trying.
BEN: So, I had figured that this is a word for people who are engaging in the Christian equivalent of not having sex, but with their device in some way. Maybe they get a dumb phone or maybe they block themselves from their own apps or something like that. They engage in some sort of almost Ben Ainslie-esque puritanical kind of like, “Ah, this shit is all whack and I don’t want to do it. Blah, blah, blah. It’s too much noise.” Is it that, Daniel?
DANIEL: It is that.
BEN: Okay.
DANIEL: So, this is a concept that was popularised by Harvard student Gabriela Nguyen, who started a club for people who want to bring down their phone usage or eliminate it entirely. It’s got five different parts. Decrease: decrease your usage. Deactivate: start getting rid of your accounts one by one. Delete: delete the apps. Downgrade: downgrade your phone to a dumb phone, if you go that far. And then Depart: have real interactions, not virtual ones, which I think goes pretty far.
HEDVIG: That’s super far.
BEN: So, does abstinence. Like I think it is actually a… Is the word appellate for label? Is that… Am I thinking of the right word?
HEDVIG: Oh, like to call something, like appelle in French, but he has some sort of weird English word for it.
BEN: Yeah, like appellate.
DANIEL: Appellate. Yes.
BEN: Yeah, yeah. So, I think appstinence is an appropriate appellate. I was trying to do a thing with AP there, but then I didn’t…
DANIEL: Appellation.
HEDVIG: Yeah, it’s beautiful.
BEN: Yeah. Yeah.
DANIEL: I like it.
BEN: I do think it’s a fair one of those labels, because abstinence is itself pretty intense and not necessarily fully aligned with how human beings exist in the world, in a lot of cases.
DANIEL: Equally realistic, right? Yeah.
BEN: Yeah. So, I think it’s an appropriate label because it’s a pretty extreme version of this. I’m also on the spectrum of, “Hey, maybe we should switch off and go outside more and stuff,” but I’m not anywhere near down that end of the line.
DANIEL: Mm, that’s not my goal.
HEDVIG: Neither am I but I think that society should move more to making it possible to be like that.
BEN: Yes. That’s a good point.
HEDVIG: Because it is a problem when we have people who want to do that or older people who aren’t very tech savvy and if they no longer can file their taxes and go to their bank without having a smartphone, I think we might be in a bit of a bad situation. This is where public services like libraries, for example, could step in and help people along with certain tasks. I think it’s good. I had a related news story as well, if I may…
DANIEL: Yeah, go for it.
HEDVIG: …which is about children, because both of you are parents and a lot of parents have children who want to use smartphones. I believe this applies to both of you. Correct?
BEN: Correct.
DANIEL: Yep.
HEDVIG: And it can be very hard, I imagine, as a parent if you want to deny your child some or all of those things. You might say, “Oh, you’re only allowed like two hours a day,” or something. And in Sweden a couple of weeks ago, there was news about… I think this happens regularly every now and then, but it happened again, where a bunch of parents who all have kids in the same school get together and make a pact so that none of their kids can say, “Oh, the other kids…
BEN: Oh, like an abstinence pact? [LAUGHS]
DANIEL: Yes.
HEDVIG: So, they say none of them are going to get it. So, I think this is for very young kids. So, this is like between six and eight, nine.
BEN: Whoa.
HEDVIG: And they are going to wait with smartphones until grade seven, which is about 14, 15 years old.
BEN: Grade seven, 14, 15, really? Your grade seven is that old?
HEDVIG: Grades don’t…
DANIEL: I was 12 in year seven. Mm.
HEDVIG: Yeah. So, I’ve made a spreadsheet for this to explain it to my husband, but you know that schools aren’t the same in different countries. Like, grades don’t mean the same things.
DANIEL: Yeah.
BEN: But do you guys go to school until you’re like 23?
HEDVIG: No.
BEN: Okay. It seems like our year 10s are 15.
HEDVIG: Okay. So, you start counting…
DANIEL: That always seemed pretty young to me. I was 18 at the end of year 12. But I’m going through this kind of thing, yes, and we’re making decisions and trying to form good habits and good attitudes about online usage, but it also means I’m not going to say this is never going to happen. I can put my foot down on some things, like we’re not doing Roblox, but…
BEN: Yes. That’s going to be a hard no for me.
DANIEL: We don’t do YouTube because it’s a…
BEN: Yes also.
HEDVIG: No YouTube at all?
DANIEL: …radicalisation machine. That’s what it is.
BEN: It is, man. There’s some fucked up shit there.
DANIEL: Yep. No, we’re not doing that. But also, realistic internet usage, I think, is an important skill to be able to learn and knowing yourself. So, we’re feeling our way along here in this new reality.
BEN: I don’t know how successful I’d be because I am in a blended family and so I don’t have control of half of the equation. But I’m closer and closer to, I think, being a fairly extreme parent on this stuff now. Like, no smartphone whatsoever until probably 15 or 16. Same thing for any social media applications of any kind. Like, you can’t have an account on Instagram or whatever the kids are using now, man.
DANIEL: Yep.
BEN: And even to the extent of maybe no unsupervised… When I say unsupervised, no being on your computer doing miscellaneous things in your room alone.
DANIEL: In your room. That is a rule for us too.
BEN: Public space internet usage is, yeah, I think so.
DANIEL: Yep.
BEN: And I’ve got a boy as well, and for me, that contributes to this equation. Like, the radicalisation of boys in gamer spaces and stuff is dark. Like, I see it at school a lot. I see year seven boys with alt-right talking points just falling out of their mouths around trans people and stuff. It’s dark.
DANIEL: Yeah. We’re all going to have to decide how to navigate this.
BEN: Yes.
DANIEL: We kind of navigated it all alone, but I think we get a chance to help the next generation with the meager skills that we have. Oh, dear. Thanks, O Tim, for that one. Let’s move on to our next one.
BEN: Yes, let’s.
DANIEL: This one’s from Laura on our Discord: BOYSOBER.
BEN: Ooh.
HEDVIG: Boysober.
DANIEL: Boysober.
BEN: I’m not…
HEDVIG: Okay.
BEN: Oh, wait, yes, I do. Sorry, sorry. I get it. It took me a second, but I think I’ve immediately landed on what is going on here?
DANIEL: Hedvig, you got it?
HEDVIG: Really.
DANIEL: I’m going boysober.
HEDVIG: Boysober.
BEN: There’s a real theme to this.
HEDVIG: Is it like RAWDOGGING, a particular way of not drinking alcohol that boys do?
DANIEL: No, alcohol is not the substance, but instead boyness.
BEN: I think I’ve got absolutely a handle on this one. Not because I know it, but because I can immediately just put our finger on the zeitgeist of pop culture at the moment.
DANIEL: Haven’t we seen this with the 4B movement? Ben, take it away.
BEN: Yes.
HEDVIG: Oh, now I know.
BEN: Yeah. there you go. Hedvig, you can take it there.
HEDVIG: You just stay away from boys.
BEN: Yeah. So, I don’t know if this makes it to your TikTok feed, Hedvig, but I am seeing a nontrivial number of sort of female creators making content in some vein of, like, if you think sexuality is a choice, do you think I would choose men? Are you fucking crazy? Of course, sexuality isn’t a choice, you reckon I would line up this constellation of fuckboys and then go, “Ooh, that one, please.” No, I have no choice in this matter.
HEDVIG: I fully understand this, and I see a lot of this kind of talking as well in my social media feeds. And I know about the South Korean 4B movement where people don’t date, don’t have sex with, don’t marry, and don’t something else with men.
DANIEL: Have children.
HEDVIG: Have children. My problem is I am a woman, and I’m married to a man and I have been with other men and I have only been with nice men, and I basically only have nice men friends. So, this idea that these other men…
BEN: Hedvig, can you stop being the one outlier in everything, please?
HEDVIG: No, but I mean, I don’t think I am. I think there are nice boys out there. But sometimes, I come across a story of men doing something insane and the comments are full of people being like, “Oh, this isn’t surprising.” And I’m like, “Huh, surprising to me.” But it by no means means that I don’t believe it exists or don’t think it’s very common or anything. It’s just something that rarely happens to me.
BEN: So, BOYSOBER basically is a bunch of girls just being like, “Nah, I’m done.”
HEDVIG: That’s fair enough.
DANIEL: I’m done with the world of men, which is fair.
HEDVIG: Fool me once, shame on you. Fool me twice, shame on me. You can make choices. This might be a choice for you. Yeah.
DANIEL: On the show notes for this episode, we’ve got a link to an ABC podcast called Ladies, We Need to Talk. The episode is called Quitting Men: Hope Woodard’s ‘boysober’ movement. And you can check that out. It reminds me of other noun-adjective compounds like GIRL CRAZY or BOY CRAZY, LOVESICK, PUNCH DRUNK and SLAP HAPPY. Those are the ones I could think of, but it’s a…
BEN: Daniel, can I just quickly ask, PUNCH DRUNK — does that mean violent and drunk?
HEDVIG: Drunk on punch.
DANIEL: It doesn’t mean that you drank some punch. It means you got punched.
BEN: Oh.
DANIEL: And then, you…
HEDVIG: Punched by the drink.
DANIEL: No, it means you got punched and now you’re sort of walking around as though you’re in a drunken haze because you’ve been hurt.
BEN: I see. Okay.
DANIEL: That’s why.
HEDVIG: I’m learning so many new words today.
DANIEL: This is your linguistic education. Last one from Diego suggested on, yes, our Discord. This one’s in Spanish. How about a Spanish word of the week? BIZTREPAR. That’s the word.
BEN: BIZTREPAR.
DANIEL: BIZTREPAR, which seems to defy the rules of Spanish phonology. It broke my Spanish phonotactics. This comes from an article in Prensa Libre about some business terms that are coming up, and it mentions this one. It’s made of two words: BIZ, which is business. You borrow BUSINESS from English and then you shorten it, and it’s BIZ. There you go, now it’s Spanish.
HEDVIG: Like SHOWBIZ. Mm-hmm.
DANIEL: Exactly. How long have you been in the biz? And then, TREPAR which means, I think, to climb. I’m just reading from Spanish here, translating on the fly. “It evokes the image of somebody who climbs with determination, even though the terrain is… empinado”, I think that’s steep.
HEDVIG: Okay.
BEN: Okay. So, it describes a person who does this or a business who does this?
DANIEL: This is the activity. This is a verb, TREPAR. So, you’re climbing. It reminds me of starting a business on a shoestring, that’s the English phrase that it reminds me most of.
BEN: Oh, okay. So, we’re talking about these like out-of-your-garage sort of disruptive, innovative, “We’re going to like make it or break it,” kind of jobs.
DANIEL: You’re hauling yourself up by your shoestrings. So, let’s see, we’ve got TACO, APPSTINENCE, BOYSOBER, and BIZTREPAR. Our Words of the Week. We’ve got so many good comments. In fact, we’re going to stuff a lot of our comments into a whole episode here. But let’s talk about us and ous, U-S and O-U-S. Now, Hedvig and I were kind of surprised… Do you remember how we talked about callus, the hard bit of skin, and callous meaning unfeeling, one with U-S and one with O-U-S. And we were all kind of like, “Ah, I don’t know if these are related.” And so many of our listeners have written in and said: “Of course they’re related. Are you not aware of this thing?”
BEN: Was I not in this one?
DANIEL: Were you not there for this one?
BEN: Because I would have yelled at both of you, surely.
DANIEL: All right, go ahead.
BEN: These two are related.
DANIEL: Be like everyone else.
HEDVIG: Surely, I said they were related.
DANIEL: I think I said no, because I was just unaware that MUCUS was the noun and then MUCOUS was the adjective. It was something…
HEDVIG: Right.
DANIEL: It was this weird gap for me. Andy from Logophilius pointed it out. Stephen now writes us and says, “Like Andy, I was also a bit surprised at the confusion over CALLUS, U-S, and CALLOUS, O-U-S, but also wanted to comment on some of the etymology. The suffix O-U-S in English derives from -ōsus in Latin, which is a very productive derivational suffix that more or less translates to ‘full of’ or ‘rife with’. So, a CALLUS, U-S, is the thing, while CALLOUS, O-U-S, is the adjectival form that describes being rife with that thing. I believe the other examples Andy provided follow the same pattern.”
BEN: So, would that be like LICE and LOUSY.
HEDVIG: [GASPS] Ooh.
DANIEL: No, this one… that doesn’t follow our U-S, O-U-S pattern.
BEN: Mm-kay…
HEDVIG: Well…
BEN: But it would be the O-U-S, rife with the thing thing.
DANIEL: No, no, no, because LOUSE is the singular form.
HEDVIG: But it started that way, and then.
BEN: Oh, okay, fair.
DANIEL: So, that’s not Latin -ōsus.
BEN: Okay, yeah, yeah. My bad, my bad.
DANIEL: Good try, good try. But MUCUS and MUCOUS, right? It’s that sort of thing. SARCOPHAGUS and SARCOPHAGOUS, that was a fun one. Anyway, thank you, Stephen, for helping us, and thanks to our guest, Adele Goldberg. Thanks to everybody who gave stories, words and comments.
HEDVIG: Hey.
DANIEL: Thanks. What?
HEDVIG: There was one part of Stephen’s message where he said I was right and you skipped saying it.
DANIEL: Well, it seemed like the moment. What would you like to say? Would you please read it? Tell me the part you liked.
HEDVIG: He said that, “Andy seems to have described the nominal-adjectival distinction correctly and Hedvig was correct to suspect Latin.” So, I just wanted that to be noted…
DANIEL: Yay. Nice going.
BEN: I just love that you somehow managed to vocally convey the two fingertips touching together and then like, “Oh, no. Oh, I’m just a baby.”
HEDVIG: [LAUGHS]
DANIEL: Hedvig has… This is Hedvig’s way of saying, “I wasn’t wrong exactly. Please don’t hurt me. Give it all to Daniel. He’s the one with the gap, not me.” Thanks, Stephen. And thanks to our guest, Adele Goldberg. Thanks to everyone who gave stories, words and comments. Thanks to SpeechDocs for transcribing all the words. And to you great patrons.
HEDVIG: If you like our show, there are a number of ways you can help us in our endeavours so that we keep on making this show for you. You can follow us on all the places. We are becauselanguage.com on Bluesky and @becauselangpod on lots of other places, including the bad place, Twitter, which we don’t post to anymore. And I tried to visit Twitter the other day, and I tried to search for something and it’s…
DANIEL: Everything’s gone.
HEDVIG: Not useful. Not useful. Let’s just put it like that. You can also… if you want to contribute to our show, you can do like several of our listeners have done for this episode and send us in Related or Not, or Words of the Week or news items. We love to hear from you. You can either email us at hello@becauselanguage.com or you can go to our website, becauselanguage.com, and click SpeakPipe, which I believe is on the right side of the website and record a little message for us. You can also send us a voice note from your phone if you want, and then we can play your lovely, sweet voice on the radio.
DANIEL: We’re getting to those very soon.
HEDVIG: You can also tell a friend about us or write us a review in all of the places. Podchaser is one of our favorite places. Also, Apple Podcasts is a great place to leave reviews for podcasts that you like.
BEN: And if none of that works, you can also become a patron. You get stuff depending on the level, like live shows and bonus episodes and mailouts and shoutouts and Discord access and just generally being a really cool person. You also get to contribute to us paying the bills. And one of the bills is to transcribe the show which, as we learned today from Marianne at the University of the Virgin Islands, is some people’s preferred way to consume podcasts. It’s a bit crazy sounding to me, but I am not here to yuck anyone’s yum. So, Marianne, I am so stoked that the poor people at SpeechDocs who have to wade through my voice and turn it into typed words, like, that is finding an audience in your eyes, and you could help contribute to more of my words, finding their ways to Marianne’s eyes.
I’m going to read some names of our patrons now, but we’ve been changing up the order of the names because it was so boring to read the same order again and again. We didn’t realise, of course, the beast that we were creating as we unleashed the insanity which is Daniel’s reorganisation of the names week upon week. So, Daniel, what are we doing today?
DANIEL: This idea for supporter name order came from Elías. I posted something fun on our Discord. It appeared on Reddit about eight years ago. It seems that if you opened up a box of alphabet cookies from IKEA, which is a thing that you can get — alphabet cookies — and you counted all the letters, that box of alphabet cookies would have about the same letter frequencies that the actual Swedish language does. Isn’t that interesting? You’ll line them up and there’s a ton of A’s. So, I said, “I think I need to check this out.” And Elías said, “Sounds like a piece of information you could somehow use to order the names of Patreon supporters at the end of the next show,” and I agreed.
So, for this episode, I’m reading out the names of our supporters in alphabetical order, but alphabetical order means I took cookies out of the box at random until I had them all. And that order is alphabetical now. Just so you can see it, there’s a photo on our website. I’m just going to show this now. Where is it? There they are.
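Daniel’s procedure (drawing letters at random from a frequency-matched box and recording the order in which each letter first turns up) can be sketched in a few lines of Python. The frequency numbers below are rough illustrative values, not measured Swedish corpus figures:

```python
import random

# Illustrative relative weights for a handful of letters, standing in
# for the letter frequencies of Swedish (not exact corpus values).
FREQ = {"a": 9.4, "e": 10.1, "n": 8.5, "t": 7.7, "r": 8.4,
        "s": 6.6, "i": 5.8, "l": 5.3, "k": 3.1, "q": 0.5}

def cookie_order(freq, seed=None):
    """Draw letters with probability proportional to their frequency
    (like pulling cookies from the box) and record the order in which
    each distinct letter first appears."""
    rng = random.Random(seed)
    letters = list(freq)
    weights = list(freq.values())
    seen = []
    while len(seen) < len(letters):
        letter = rng.choices(letters, weights=weights, k=1)[0]
        if letter not in seen:
            seen.append(letter)
    return seen

print(cookie_order(FREQ))  # frequent letters tend to surface early
```

Because frequent letters are drawn more often, they tend to appear near the front of the resulting order, which matches Daniel’s observation that his cookie-drawn “alphabet” began with common letters.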
BEN: There you go.
DANIEL: Just for interest, it’s something like K, V, W, F, C, M, I, U, L, Q, and so on. All right, so here it goes. Go ahead, Ben, take it away.
BEN: [INHALES] Kevin, Keith, Kristofer, Kathy, Whitney, Wolfdog, Fiona, Felicity, Faux Frenchie, Chris W, Chris L (because W comes before L in this alphabet), Colleen, Canny Archer, Mignon, Meredith, Molly Dee, Manú, Margareth, Ignacio, Iztin, Lyssa, Linguistic…
BEN and DANIEL: C̷̛̤̰̳͉̺͕̋̚̚͠h̸͈̪̤͇̥͛͂a̶̡̢̛͕̰͈͗͋̐̚o̷̟̹͈̞̔̊͆͑͒̃s̵̍̒̊̈́̚̚ͅ…
BEN: Luis,
BEN and DANIEL: LordMortis.
BEN: Laura, Larry, gramaryen, Elías, who gave us this terrible, terrible idea. Nikoli, Nigel, Nasrin, Helen, Rene, Rodger, Rach, Rachel…
IN UNISON: O Tim.
BEN: PharaohKatt, Ayesha, Amy, Amir, Alyssa, Aldo, Andy from Logophilius, Andy B, Ariaflame, Diego, sæ̃m, Sonic Snejhog, Steele, Stan, Tony, Tadhg, J0HNTR0Y, Joanna, Jack, James. And thanks to our latest patron at the Listener level, Virginie.
DANIEL: Nice.
BEN: At the Friend level, Christelle, Ben C, and Pterrorsaurus hex, who started out as a free member but then upgraded to the Friend level because we’re that smack you just can’t quit. And Hellen. Hell-en? Is that what I would do with a double L there?
DANIEL: I think it’s just Hel-en.
HEDVIG: Hel-en.
DANIEL: I think so.
BEN: Okay, Hellen, but with two Ls. Hello to our newest free patrons, X — concerning — Bailee, Curtney. Curtney? [HEDVIG LAUGHS] I’m so sorry, Courtney. I’m so sorry.
HEDVIG: We have broken this man.
BEN: Xavier, Janet, ⴰⵢⴻⵍ /ajul/. What characters are those, Daniel?
DANIEL: I had to look that one up myself. It’s Berber. It’s the Berber alphabet.
HEDVIG: How cool.
BEN: That’s so fucking cool. ⴰⵢⴻⵍ, I don’t know who you are, but that shit’s dope. kit, John, Pat, Mary, and this is very telling. This last name is the name that I just named my horse in my most recent replay of Zelda: Breath of the Wild, Susan.
DANIEL: Wow.
BEN: Also, special thanks to anonymous supporter who sent us the equivalent of a yearly supporter membership. Oh, you go… I was about to say, “You go, girl,” but then I was like, I don’t know who you are and you could be any kind of whatever. Thank you to all of our patrons.
DANIEL: Our theme music was written and performed by Drew Krapljanov, who also performs with Ryan Beno and Didion’s Bible. Their new album is great. Check it out on Bandcamp. That’s not a paid ad. We just like the album. Thanks for listening. We’ll catch you next time. Because Language.
[BOOP]
DANIEL: Hi, Adele.
ADELE: Hello.
DANIEL: Thanks for jumping on again.
ADELE: Anytime.
DANIEL: So, I did that activity with my students. It’s a generative syntax thing. It’s the kind of thing you’d find in lots of linguistics units. It’s got phrase structure rules, which turn things into other things. And then down here, it’s got a lexicon, and we generated some sentences, and the sentences weren’t very good. So, we found some ways to make the rules better. It’s pretty laborious. And then in one tutorial, as the students were leaving, one student said, “It was a lot of work to generate a sentence when large language models can generate text so easily.” And I said, [PAUSES] “Yes.”
ADELE: Don’t put me out of a job.
DANIEL: Kind of. And I realised for the first time that I’ve ever taught this unit, the problem of generating grammatical sentences has been solved. And it wasn’t this stuff that did it. It wasn’t that. It was statistics. So, I’m thinking a lot about the generative program. It’s something that we’ve been using for 50 years to describe and generate sentences, but it didn’t get us there. Large language models did. And I just didn’t expect this to be subsumed. I am shook. I’m having a crisis.
ADELE: Oh, well, I’m sorry you’re having a crisis, but I think that’s a really important insight. I think you’re absolutely right. I think it is really meaningful that the rules are not hardcoded in the models and yet they do so much better. They sound so much more natural, because the rules don’t actually tell you that much. So, a classic phrase structure rule would say there’s a verb and then a possible direct object. And it’s true that the direct object comes after the verb, but you have to know that if the verb happens to be DEVOUR, then you absolutely have to have a direct object. “She devoured” sounds odd, but if it’s EAT, you don’t have to. You can say, “She ate,” and the direct object is understood.
And then you might realise that the direct object isn’t always after the verb. You can say, “Bagels, I like,” or bagel… Now, the semantic direct object is at the front of the sentence, and the differences are conditioned by the words and by the functions that you’re expressing, but the rule doesn’t tell you that much.
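The kind of grammar under discussion, a handful of phrase structure rules plus a lexicon, is easy to sketch in Python. This is a hypothetical toy grammar, not the one from Daniel’s class, and it shows Adele’s point: the VP rule makes the object optional across the board, so it can generate “she devoured” even though that sounds odd:

```python
import random

# A toy phrase structure grammar (hypothetical, for illustration).
# Each rule rewrites a symbol into a sequence of symbols; terminal
# symbols are drawn from a small lexicon.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["Pro"]],
    "VP": [["V"], ["V", "NP"]],  # object optional across the board --
                                 # the rule can't say DEVOUR needs one
                                 # while EAT doesn't
}
LEXICON = {
    "Det": ["the", "a"],
    "N":   ["bagel", "girl"],
    "Pro": ["she"],
    "V":   ["ate", "devoured"],
}

def generate(symbol="S"):
    """Expand a symbol by picking one of its rules (or one of its
    words) at random, recursing until only words remain."""
    if symbol in LEXICON:
        return [random.choice(LEXICON[symbol])]
    expansion = random.choice(RULES[symbol])
    words = []
    for sym in expansion:
        words.extend(generate(sym))
    return words

for _ in range(5):
    print(" ".join(generate()))  # may well produce "she devoured"
```

Nothing in the rules blocks the odd sentences; that fine-grained, word-specific knowledge is exactly what Adele argues the abstract rules leave out.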
DANIEL: Yeah, I don’t even have anything on here for function or like what you’re trying to get across or the situation or the context or anything.
ADELE: Right, right.
DANIEL: So, did large language models just invalidate phrase structure grammar? Is the Chomskyan model a relic now?
ADELE: Well, that’s a tough question. So, there are people who are looking at the large language models and trying to find evidence of, say, phrase structure trees in the large language models. And Chris Manning, who’s a leader in this, in computational linguistics, did find evidence that you can find something akin to a verb plus direct object. But do they have it as a crisp rule? I’m sure they don’t, because what would they do with that? You really have to go on a word-by-word level. And there are generalizations, but the generalizations aren’t driving what sounds natural because… I think last time I talked to you, we talked about confirm and the fact that it takes a direct object predominantly, but it can also take a clause. And it’s pronounced slightly differently depending on which one it is. You’re not going to get that from phrase structure trees. There’s just so much more fine-grained information that you need that these high-level abstractions may emerge from the specific generalizations, but without the specific facts, you can’t produce natural language.
DANIEL: Okay, okay. So, I guess there’s still value in looking at this kind of thing just to see where we’ve been, to dig into the substrate and to get our hands on some trees. Those formalisms are still pretty cool, even if they didn’t do the whole job of language generation.
ADELE: Well, they’re certainly… So, I do find it useful when I teach psychology of language to give students a couple of examples of how grouping things differently gives you different meanings. And that’s the way I talk about it. So, “cool girls and boys.” Are the boys cool? Well, it depends whether you group these things together or the bigger unit together. And people talk about that as constituent structure, and that’s one set of terminology to use, but it’s also about semantic grouping. Does cool go with just girls or does it go with the whole conjunction, girls and boys?
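[The two readings of “cool girls and boys” come from two groupings of the same words. A tiny interpreter over bracketed trees makes the difference concrete; the tree encoding here is an illustrative sketch, not a real parser.]

```python
# Two bracketings of "cool girls and boys". A small walker collects
# which nouns end up inside the scope of the adjective. Same words,
# different constituent structure, different meaning. (Illustrative.)

def modified_nouns(tree, modifier_active=False):
    """Return the set of nouns that the adjective applies to."""
    if isinstance(tree, str):
        return {tree} if modifier_active else set()
    op, *parts = tree
    if op == "mod":   # ("mod", adjective, phrase): adjective scopes over phrase
        _adj, phrase = parts
        return modified_nouns(phrase, modifier_active=True)
    if op == "and":   # ("and", left, right): conjunction of two phrases
        left, right = parts
        return (modified_nouns(left, modifier_active)
                | modified_nouns(right, modifier_active))

narrow = ("and", ("mod", "cool", "girls"), "boys")  # [[cool girls] and boys]
wide   = ("mod", "cool", ("and", "girls", "boys"))  # [cool [girls and boys]]

print(modified_nouns(narrow))  # {'girls'}
print(sorted(modified_nouns(wide)))  # ['boys', 'girls']
```

[Under the narrow grouping, only the girls are cool; under the wide grouping, cool scopes over the whole conjunction, which is the semantic-grouping framing Adele describes.]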
DANIEL: And then, there’s a whole another aspect where we’re in an interaction right now. And so, I can clarify, I can go back and say, “Did you mean that the girls were cool or were the boys cool too? What did you mean by that?”
ADELE: Yeah, that’s right, that’s right. And that’s been largely ignored. This fact that we coordinate our understanding through discussion, through repeating each other, through our gestures, through various subtle cues, we communicate whether we’re understanding each other or not.
DANIEL: Okay, I feel like I’m getting a better picture now.
ADELE: Yeah, yeah.
DANIEL: Just as a final thing, you’re one of the originators of construction approaches, construction grammar. Can you give me a really, really good example of what a construction is and how I use it?
ADELE: Sure. So, there are so many constructions to choose from. The one I started with was the double object construction. People have called it the Drosophila of linguistics because it’s so studied. But the double object construction is a verb followed by two noun phrases. A prototypical example is, “She gave him a book.” When you ask people what moop means in, “She mooped him something,” people automatically think of give. And there’s a good reason for that. There are other verbs, like get or make, that are more frequent overall in English and can appear in the construction, but give is the most frequent verb that appears in that construction, and people have this implicit awareness of it.
But what’s cool about the construction is that even if you’re not saying give, even if you say, “She baked him something,” the notion of giving is actually implicitly there. So, if you say, “She baked him something,” you imply that she’s intending to give him something. Now, she might not get there, but when she’s baking it, that’s her intention. She’s not going to throw that cake at him. She’s not going to do it instead of him and give it to some third party. She’s planning to give him that cake. And so, the notion of giving is associated with the construction, with the pattern even when give isn’t there. And so, that’s a nice example where there’s no lexical item, but the pattern itself, the word order, is associated with this function of meaning, of transfer.
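[The constructionist claim, that the [V NP NP] pattern itself contributes a transfer meaning even for a nonce verb like moop, can be sketched as a form-meaning pairing. The data structure and paraphrase below are an illustrative simplification of the idea, not Goldberg’s formal notation.]

```python
# A construction modeled as a form-meaning pairing: the ditransitive
# pattern [Subj V Obj1 Obj2] contributes "X intends to cause Y to
# receive Z" regardless of which verb fills the slot. (Illustrative.)
DITRANSITIVE = {
    "form": ["Subj", "V", "Obj1", "Obj2"],
    "meaning": "{subj} intends to cause {obj1} to receive {obj2}",
}

def construe(subj, verb, obj1, obj2):
    """Pair a ditransitive string with the construction's meaning."""
    sentence = f"{subj} {verb} {obj1} {obj2}"
    meaning = DITRANSITIVE["meaning"].format(subj=subj, obj1=obj1, obj2=obj2)
    return sentence, meaning

print(construe("She", "baked", "him", "a cake"))
print(construe("She", "mooped", "him", "something"))  # nonce verb, same meaning
```

[The transfer reading comes out the same whether the verb is give, bake, or the invented moop, which mirrors the point that the pattern, not the lexical item, carries the meaning.]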
DANIEL: So, when I say “to verb someone something,” it…
ADELE: Yes.
DANIEL: …that’s a construction.
ADELE: That’s right.
DANIEL: I can fit a lot of things in there like give or get or even toss, and…
ADELE: That’s right.
DANIEL: …it comes with a meaning like that somebody’s getting some kind of benefit.
ADELE: Yeah, or getting some kind of thing.
DANIEL: Thing, okay.
ADELE: Not just a benefit. It could be metaphorical. If you compare it to, “She baked a cake for him,” that could mean that they’re both bakers and he was sick, and so she took over and she made the wedding cake for the wedding. “She baked a cake for him.” But that’s different. “She baked him a cake,” oh, he’s going to get that cake for himself. Exactly.
So, where does that meaning come from? It’s not the case in every language. So, Bantu languages have a similar form, but their meaning is much more broad. They can put locations in that first object or instruments. They can put a lot of different kinds of meanings in that construction. But in English, it’s very tightly tied to this transfer idea.
So, the idea of construction grammar, though, I should say, is that it’s constructions all the way down. So, words are constructions, morphemes are constructions, idioms, including ones with open slots, are constructions. And these very abstract grammatical patterns are also constructions. So, they’re all constructions. But it was the double object construction, and when I first realized for myself that the verb really didn’t have to have anything to do with transfer, that convinced me that, “Oh, these constructions themselves are doing some of the work. It’s not all the words.” Yeah.
DANIEL: That’s fantastic. Adele Goldberg, thank you so much for coming on again and just straightening me out a little bit. Really appreciate that.
ADELE: Oh, my pleasure. You are a pleasure to talk to. Thank you so much.
HEDVIG: Can I tell Ben about the mess-up I did?
DANIEL: Yeah, sure.
BEN: Wait, wait.
HEDVIG: Can I tell Adele on the show about the mess-up I did?
DANIEL: I don’t even remember what you’re talking about.
HEDVIG: Okay.
BEN: We’re all learning this for the first time.
DANIEL: Yeah. Bring it.
HEDVIG: There are two Adele Goldbergs.
BEN: Oh, no.
HEDVIG: Okay. And at first, I thought we were interviewing the one we were. And then, something made me think we were interviewing the other one.
DANIEL: They’re both awesome.
HEDVIG: They’re both awesome. And they’re both…
DANIEL: I would interview the other one.
HEDVIG: Yeah. And I don’t know. I was flipping between the two, because also, I couldn’t believe that it was the one that it was, I think. And then when she came on, I realised I have no idea what her face looks like. So, I couldn’t tell when she came on either. And then, Daniel started introducing her, and I almost corrected him and was like, “No, Daniel, you’ve got the wrong one,” but he had the right one.
DANIEL: [LAUGHS] Thank goodness. Well, I didn’t care which one it was because I wanted the one that did those tweets and that was her.
HEDVIG: Exactly. Exactly.
DANIEL: But she turned out to be super cool and super nice. Oh, my god.
HEDVIG: Okay, okay, right. I really struggled with this when I was little because I think we’ve talked about this on the show before, because to me, the opposite of a cat is like everything in the world with a black hole where a cat should be. The opposite of a cat is not a dog for sure. Like, they’re furry and they have four legs and they’re pets, like they’re basically the same.
BEN: There is semantic similarity there.
HEDVIG: But you have to… If you imagine a list of everything a cat is, like carbon-based, matter not antimatter.
BEN: Oh, gosh. Oh, Jesus. [LAUGHS] You would have been an insufferable child. [LAUGHS]
HEDVIG: I think I was. So, the only thing I could come up with was that, like, “Well, in that case, the opposite of a cat has to be everything in the world minus cats,” or something like that.
BEN: But then you’re getting into Socratic form, like, what is the ur-cat, sort of thing?
HEDVIG: And then people were like very confused. I also struggled with the idea of people said that two things were the same because I was like, “Well, the atoms are not all in the same position.”
BEN: Oh, Jesus. [DANIEL LAUGHS]
HEDVIG: So, I think maybe the science were… Yeah.
BEN: So, perhaps you and large language and large vision models share the same fundamental bug in your programming is what we’re positing here.
DANIEL: Incidentally, why is the disease called lupus?
BEN: Oh, is it because you look mangy?
DANIEL: Apparently, it devours the affected part. And in fact, Etymonline says, in early medical writing, sometimes Englished as wolf, you got a bad case of wolf.
BEN: Oh, wow. Okay.
HEDVIG: What is it with diseases and animals? Because cancer is also a thing.
DANIEL: That’s right. I can’t think of any other examples though.
HEDVIG: No, I just realised that as well. [LAUGHTER] I got to lupus and cancer and I was like, “Shit. Oh, no.”
BEN: Oh, actually, no, no, there are other ones.
DANIEL: You can have crabs.
BEN: So, like ringworm, for instance, has nothing to do with worms.
DANIEL: No. It’s a fungus, isn’t it?
HEDVIG: It doesn’t?
BEN: It’s a fungal infection. Yeah.
DANIEL: My favorite Van Morrison song, by the way.
HEDVIG: I’m just thinking back at when I was like 13 or 14 on IRC chats and I…
BEN: I know, yep, you and I are exactly the same.
HEDVIG: How much fun I had and how nice it was to have friends online and…
BEN: Yes.
DANIEL: It was. It was great.
HEDVIG: Nice to have friends online.
BEN: I would put to you, Hedvig, as I have become a parent. And perhaps should you ever go down that path yourself, this will happen to you as well. I’m now looking at a lot of the things that were a feature of my youth, now with a parent’s perspective, going, “What the actual fuck? What did you…?” If you are anything like me, Hedvig, there are some fucking appalling things in 13-year-old you’s search history.
HEDVIG: No, not really.
BEN: Oh, really? Oh, well, then you and I are different people.
HEDVIG: I was not very interested in sex or violence at all.
BEN: I’m not… See, but here’s the thing I’m not even talking about sex and porn necessarily. I’m just talking about… Perhaps, I’m just a darker person, I’m not sure, but I was deeply, profoundly fascinated to try and find the most just terrible and appalling things that I could.
HEDVIG: Right. No. Yeah, I don’t know for whatever reason, I wasn’t. But people like you exist. So, I have to remind myself that like…
BEN: [LAUGHS] I am a… Perhaps, this is a better way to frame it. I was the product of a system that in turn empowers people like me and also ignores people like me. And I don’t mean in a, like a, “Oh, won’t someone think of the poor, straight white man,” but more just like young men are largely, unfortunately, not only left to their own devices, but also given like a totally free pass on things like familial responsibility and all this sort of stuff. And so, I know I’m going on a little bit of a rant here, but I heard someone recently, a creator on TikTok, discussing the “male loneliness epidemic.” And one of the ideas this person put forward was the idea that men only know how to be friends when there is a scaffold, whether that’s a hobby or teasing your mates at the pub or whatever. And when you lose the scaffold, men have no authentic friendship skills outside of the scaffolded experience, because no one in our culture is modeling to men how to have meaningful, authentic, emotionally good relationships and all that sort of thing.
HEDVIG: That’s relatively true for women as well. Like, as a 14-year-old girl, you go shopping, that’s like a scaffold. There are lots of things like that.
BEN: Perhaps, I have read and watched too much Babysitters Club. It could be true, but I have had it said to me by many women in the world that like, your friendships transcend simply like hanging out in the same space, that you will talk deeply and from a feelings perspective on the things that are important to you.
HEDVIG: Yeah, no, that’s true, that’s true. But just like… For example, like a lot of my friends now are like related to work and I think of work as a scaffold. But maybe adult men need… If they can’t learn to socialise without scaffolds, maybe they need new scaffolds, like things like… What’s that thing in Australia, Men’s Shed, etc.?
BEN: Yeah, yeah, yeah.
HEDVIG: Yeah. Yeah. But I’m…
BEN: I personally think maybe we could just give men emotional intelligence. It’s not impossible. We can do it.
DANIEL: Come on, science.
HEDVIG: Yeah, maybe you can give that to your son, but I don’t know if I can give that to you. No. [LAUGHS]
BEN: Yes, yes. Anyway, we’ve gone too far so that’s part of the reason why some of that stuff is why I would be like, “Ooh, I don’t want you just sniffing around the bottom of the internet.”
HEDVIG: Yeah. Fair enough.
BEN: Ooh, can I… Sorry.
DANIEL: Yeah.
BEN: Can I quickly share with you guys one moment? I don’t know. This might not happen to you as much, Hedvig, because you don’t have as much cause to hang out with people of a vastly different generation than you, I suspect. Daniel does because the people he teaches at university are a very different generation and I do because of my work. I was recently teaching my year 11s about the news media and journalism and the fourth estate and all this kind of stuff. And as a way to explain different ways to frame a story, I put up Al Jazeera as one of our new sources to analyse. And when I wanted to play it on the computer, I had to go through their YouTube livestream.
Now, I don’t know if you’ve been paying attention to world events in the last week, but boy, oh, boy, is there some fucking horrendous shit going down. And it was a story that involved the deaths of lots of people, blah, blah, blah. And a bunch of the students in my class started laughing. And I turned around with both barrels so ready to unload because it was a really heavy story, but I could immediately clock that some other dynamic was at play. I could tell that they weren’t laughing at what we were seeing in the livestream. This is how I know I am old and becoming increasingly irrelevant. Adjacent to the livestream when you watch a YouTube livestream is like a live chat window. Now, it had not occurred to me for one millisecond of my brain space to devote any attentional resources to that window, because for me and my generation, I know there is nothing of value there, right. Like, the live chat of a YouTube…
HEDVIG: It’s just people saying, “I’m from Ecuador. Who else is from Ecuador?”
BEN: Or in this particular instance, some gross person was going like, “Any teenage girls here?” [ONOMATOPOEIA] And so, that’s what got the students in the class laughing. But it was just so fascinating to me that they looked… That was a thing worthy of attention to that generation, because…
HEDVIG: Should not be that. Should not be that. I don’t know.
BEN: [LAUGHS] So, congratulations, Hedvig. Your response to this scenario means you are also old.
DANIEL: Nice.
HEDVIG: During the Conclave for the Pope, I looked at the chat.
BEN: What I love is that you can absolutely bank that as soon as you call Hedvig either old or irrelevant, immediately following that is some stammered defense of, like, “No, no, but, but, but.”
HEDVIG: I’m not old.
[Transcript provided by SpeechDocs Podcast Transcription]