You can't debate anything about transgenderism on reddit. You get downvoted to -5902 votes immediately and no one reads it. The only people who respond are the people who are trying to get comment karma because they know it's an easy battle.
So I appreciated debating the bot, even though it's still a large language model and couldn't delve into any depth.
Except on Reddit you also have to be increasingly careful that the person you want to go up against, and anyone who's commented in agreement with your opponent, aren't subreddit moderators, or you risk getting yourself shadowbanned (via the AutoModerator bot) across the quite possibly large part of Reddit they control.
You also have to be careful not to say anything that upsets the admins, or you'll find yourself permanently suspended without warning and properly shadow-banned if they catch you making a new account.
Which I presume is part of the context since "practice" is mentioned and debate teams practice. Also training debate teams seems like an obvious market for a tool.
And there's a judge (just like in competitive debate) choosing a winner on criteria that sound a lot like the criteria for competitive debate.
Or to put it another way, the criteria of your dissatisfaction sound a lot like the criticisms of competitive debate.
I have coached multiple formats for years. And no, that’s not how competitive debate works. At least not competitive debate worth participating in. (TBF there are a lot of terrible debate programs/leagues, so this may have been your experience.)
The quality bar in most nationally competitive debate — the type for which students might get college scholarships or do well in national qualifiers-only tournaments — is far higher than in online discussions. Students enter with thousands of pages of evidence and must frame arguments in terms of a few common logical structures that can have a lot of emergent complexity. It’s more like adversarial mathematical proof writing situated in the context of policy making than what people typically call “debate”. Even the delivery of speeches is, I bet, not at all what you would expect. See eg https://www.youtube.com/live/Kc-QrcxrkCw?feature=share
This is sometimes true even in cases where students aren’t allowed to bring outside evidence into the round, due to their prior experience in evidence-based formats.
I tried using this system to vet some common debate mainstays — a politics disadvantage about the debt ceiling, a federalism counterplan, a very condensed 1AC from this year’s topic, and a very condensed 1NC shell.
On the disad in particular, the system double-turns itself (google “double turn debate” for various explanations). This is a very novice mistake. The judge didn’t notice.
On the 1NC response the system ignores topicality, which is an instant loss. It’s the equivalent of a plaintiff ignoring a pre-trial motion to dismiss because they’re too busy writing up cross-examination questions for the defense witnesses. Again, the judge didn’t notice.
The system isn’t working for me anymore, but I bet you’d get similar issues with V/VC-style LD cases, where the system doesn’t understand simple theory about how the value and criterion interact, particularly subsumption-style arguments.
Even on more lay types of debates the system and judge fail, as documented in other comments.
I wouldn’t recommend using this system to practice competitive debate.
What's the point of a debate in which the speeches are completely unintelligible? They might as well just submit a written document if they're going to abuse the spoken word as aggressively as that.
> What's the point of a debate in which the speeches are completely unintelligible?
1. The speeches are intelligible. Maybe not to you. That's okay.
2. The oral presentation component still has value because it puts a hard constraint on the amount of time competitors have to formulate and execute on their strategy. It's an artificial constraint designed to train specific skills -- "thinking on your feet in the presence of a lot of information".
What's the point of competitive running, or of any team sport?
Given the highly artificial nature of the 100m dash, might as well just have each competitor run their race and mail in a video instead of having a track meet, right? What value could the competition provide?
Ultimately, these are simply highly constrained games. The rules are designed to make the game competitive, and children play the game as a way to learn and practice various skills. No one thinks that being great at competitive running or hitting a baseball will immediately translate into e.g. business acumen or excellence on the battlefield. For some reason it breaks people's minds that debate is just another competitive game.
I think it's because this is described with the word "debate". Running 100m really fast is still clearly running; it's just very fast. Anyone can run; not everyone can run that fast.
That video doesn't resemble anything anyone normal would recognize as debate. The point of debate is to persuade people effectively. If normal people can't even understand what you're saying, they can't possibly be persuaded, so you have totally failed at debate. If that competition were described as a speed-talking competition, then fine, no problem, but it's not.
Consider hangboarding [1]. It looks nothing like climbing, but all elite climbers spend an enormous amount of time on this activity. You will find the same in any other sport.
I want you to consider something important: this is an EDUCATIONAL activity. The activity does not exist for you. It exists for the students.
So: what is your learning objective? What do you want to teach the students?
> That video doesn't resemble anything anyone normal would recognize as debate.
Debates don't look like that video until students have 3-4 years of experience in competitive debate.
The thing that normal people recognize as debate is, in fact, a fairly useless educational activity after 3-6 months of participation.
Beyond that initial period, it doesn't teach good oration skills. It doesn't teach good negotiation skills. It doesn't even teach procedural skills that might be useful in a courtroom, boardroom, or even legislature. And it certainly doesn't teach research skills or quick thinking.
Debating for a lay audience is, to put it simply, a shallow and mostly non-transferable skill. At least after the first six months. It IS a useful skill, but spending more than a semester on that skill in the context of competitive debate tournaments is -- to put it mildly -- a massive waste of students' time.
The contrived game in the video above is designed to additionally teach:
1. research skills,
2. quick thinking, and
3. the ability to quickly synthesize a large amount of information into a coherent narrative.
Those skills are useful in boardrooms, in court rooms, and in legislative sessions. They are transferable, most of all, to the vast majority of high paying jobs where you do not spend your entire day talking to people. For example, holding all of that context in your head and quickly forming a narrative to solve a problem (winning the round) has a lot in common with debugging or working through a complicated mathematical model.
And students only progress to this version of debate after they have mastered the basics of persuasive oral communication (what you want to see). To wit: the "speak pretty for normal audiences" type of debate is available in the United States. Students who participate in the type of debate in this video regularly enter and win tournaments that have a more lay format. As a joke, for kicks. They have already largely mastered the lay format by the time they progress to the non-lay format. At least, that's true for my students.
> If normal people can't even understand what you're saying they can't possibly be persuaded so you have totally failed at debate.
Normal people don't judge these rounds, so this is largely a moot point. And, again, the students in this video are capable of speaking extremely persuasively to normal people. The fact that they choose to participate in a more pedagogically useful activity doesn't somehow erase their ability to do so.
Yes, the most time-efficient form of practice for BP (in my experience) was alternating opening speeches with a friend. 15 + 14 + 10 for reflection ≈ 40 minutes per speech.
In parliamentary styles, practicing opening speeches is definitely at the top of the list in terms of effective drills. So much hinges on getting off on the correct foot.
In other formats the first 2-4 speeches are pre-written verbatim, either entirely or as composable blocks, so you want to make sure those blocks can be chosen correctly and delivered efficiently, but practicing beyond that point is not a huge help.
But you can instead drill specific skills. I have a large pile of “10 minute round” templates that I specialize to each topic for the students. Each focuses on only one specific skill, usually in the second constructive or rebuttal phase (e.g., answering a link turn). Typically students who complete those drills a few times each and participate in a few scrimmages will make it to out rounds at their first novice tournament.
I made three vague one-sentence arguments and the judge announced that I won. I definitely felt like I got crushed by GPT, but I guess it went soft on me :/
2. It throws multiple arguments at you
3. Then it stops the debate after a certain point
4. It repeats its arguments
5. Then the AI judge says "AI provides logical consistency and use of evidence" despite it citing no sources and just brushing aside my counterarguments by saying I'm wrong and repeating its arguments.
Ultimately it wasn't all that different from regular GPT. Is it even possible for the AI judge to say that the user wins?