papertalksai
Subscribe to our podcast using this RSS feed: https://feeds.soundcloud.com/users/soundcloud:users:1399330944/sounds.rss
Website : https://soundcloud.com/papertalksai
Last Episode : August 8, 2024 12:00am
Episodes
Episodes currently hosted on IPFS.
Unequal exchange of labour in the world economy
https://www.nature.com/articles/s41467-024-49687-y
Host: Hey everyone, welcome back to Paper Talks, the podcast where we dive into the latest scientific research and break down complex concepts for everyone to understand. I'm your host, Rose.
Guest: And I'm Jack, the co-host who's always getting schooled by Rose on these fascinating topics.
Host: You know, Jack, I was just thinking about the last time I went to the grocery store. I was looking at the prices of some fruits and vegetables, and I couldn't help but think about how much cheaper they are compared to what I'd pay for them in the United States. It got me thinking about the global trade of goods and how it affects prices in different countries.
Published 08/08
On the Phenomenon of Bullshit Jobs: A Work Rant by David Graeber
https://strikemag.org/bullshit-jobs
Host: Hey everyone, welcome back to Paper Talks! Today we are diving into an article titled "On the Phenomenon of Bullshit Jobs: A Work Rant" by David Graeber. It was published in 2013 in Strike! Magazine. Jack, you know, I've been reading a lot about the gig economy and how it's changing the way we work, and this article really got me thinking about the value of work and what it means to have a meaningful job.
Guest: Yeah, I've been thinking about that too. I mean, I'm a part-time stand-up comedian, so I guess I'm in the gig economy, but I'm not sure if I would call it a "bullshit job." I mean, I'm doing what I love, right? But I also have a day job, which is a bit more... traditional. And I've definitely had those days where I'm just like, "What am I even doing here?"
Host: Right. And that's exactly what Graeber is talking about. He starts the article by referencing John Maynard Keynes' prediction...
Published 08/06
Do Large Language Models Perform the Way People Expect? Measuring the Human Generalization Function
https://arxiv.org/abs/2406.01382
Host: Hey everyone, welcome back to Paper Talks! I'm your host, Rose, and I'm joined as always by the hilarious and insightful Jack.
Guest: Hey Rose, what are we talking about today? Something about AI? Because I'm still trying to figure out how to get my Roomba to actually clean the corners of my apartment. It's like the little guy's got a fear of angles.
Host: (chuckles) That's a great analogy, Jack! Today we're diving into a fascinating paper titled "Do Large Language Models Perform the Way People Expect? Measuring the Human Generalization Function," and it's going to be a bit of a deep dive.
Published 07/24
Knowledge Mechanisms in Large Language Models: A Survey and Perspective
https://arxiv.org/abs/2407.15017
Host: Hey everyone, welcome back to Paper Talks! Today, we're diving into a really fascinating paper that explores the mechanisms of knowledge in large language models. It's a big topic, Jack, so buckle up!
Guest: Oh, I'm ready. I've been wondering about this, you know? I've been using these AI tools more and more lately, and I'm just amazed at how much they seem to know. But, I also get a little freaked out sometimes, like, "How do they even do that?"
Host: Yeah, it's a bit of a mind-bender, isn't it? This paper tries to answer some of those questions and provide a framework for understanding how LLMs acquire, store, and utilize knowledge.
Guest: Okay, so, tell me, what's the big picture here?
Published 07/23
Are LLMs 'just' next-token predictors?
https://osf.io/preprints/osf/y34ur
Host: Well, that's where things get interesting. The authors of this paper argue that many people are too quick to dismiss LLMs as simply "next-token predictors." They say that LLMs are often labeled as just being able to predict the next word in a sequence, but that's a very simplistic view of how these models work. They believe that LLMs are capable of much more than that.
Guest: So, you're saying that LLMs might be able to think and understand things like humans do? That's a pretty bold claim!
Host: It is a bold claim, Jack, and the authors of this paper are careful to acknowledge that. They say that many people believe that LLMs are simply "just" function approximators or "stochastic parrots," and that they lack the cognitive capacities that humans possess. But they argue that ...
Published 07/20
Against choosing your political allegiances based on who is 'pro-crypto'
Host: Hey everyone, welcome back to Paper Talks! Today, we're diving into a blog post by Vitalik Buterin, the co-founder of Ethereum. It's called "Against Choosing Your Political Allegiances Based on Who is 'Pro-Crypto'."
Guest: Oh, man, this is a hot topic. I've seen so many people online, you know, just throwing their support behind politicians because they say they're pro-crypto. It's like, "Oh, this politician said they like Bitcoin, so I'm voting for them!" I'm not sure if that's the best way to make political decisions.
Host: Yeah, I know what you mean. And it's definitely something that Vitalik tackles head-on in this post. He starts by acknowledging that "crypto" has become a really important topic in ...
Published 07/17
The Art of Saying No: Contextual Noncompliance in Language Models
Host: Hey everyone, welcome back to Paper Talks! Today, Jack and I are diving into a fascinating paper titled "The Art of Saying No: Contextual Noncompliance in Language Models."
Guest: Oh, this sounds interesting! Is this about those chatbots that are always trying to be helpful, even when they shouldn't be? You know, like when you ask them for advice on how to build a bomb, and they're all like, "Here's a step-by-step guide!"
Host: (laughs) That's a great example, Jack! You're right, this paper dives into the different types of situations where language models should actually refuse to comply with a user's request. It goes beyond just safety concerns, which is what most previous research has focused on.
Published 07/17