You’re Not Going to Like How Colleges Respond to ChatGPT
February 3, 2023

Reading Time: 7 minutes

It’s already happening: ChatGPT will lead colleges to increase student surveillance. Here’s how.

In the classrooms of the future—if there still are any—it’s easy to imagine the endpoint of an arms race: an artificial intelligence that generates the day’s lessons and prompts, a student-deployed A.I. that will surreptitiously do the assignment, and finally, a third-party A.I. that will determine whether any of the pupils actually did the work with their own fingers and brain. Loop complete; no humans needed. If you were to take all the hype about ChatGPT at face value, this might feel inevitable. It’s not.

But a response to the hit software demo, released by OpenAI in November to instant fanfare, is coming. You only have to look at how schools dealt with the potential externalities of newly essential tech during the pandemic to see how a similarly paranoid reaction to chatbots like ChatGPT could go—and how it shouldn’t. When schools had to shift on the fly to remote learning three years ago, there was a massive turn to what was at that point mainly enterprise software: Zoom. The rise in Zoom use was quickly followed by a panic that students would cheat if they were not properly surveilled. Opportunistic education technology companies were happy to jump in and offer more student surveillance as the solution, claiming that invading students’ kitchens, living rooms, and bedrooms was the only way to ensure academic integrity and the sanctity of the degrees they were working toward. Indeed, this cycle also played out in white-collar work.

Now we are seeing this once again in the fervor over ChatGPT and fears about student cheating. Already, teachers and instructors are worried about how the tech will be used to circumvent assignments, and companies are touting their own ‘artificial intelligence’ tools to battle the A.I. for the soul of education.

Consider the flood of essays that would have us believe that not only college English courses but the entire education system is imperiled by this technology. In separate pieces, the Atlantic proclaimed ‘The End of High-School English’ and announced that ‘The College Essay Is Dead.’ A Washington Post analysis asserted that ‘AI will almost certainly help kill the college essay.’ A recent research paper tells us that GPT-3 (a precursor to ChatGPT) passed a Wharton professor’s MBA exam.

Whenever fears of technology-aided plagiarism appear in schools and universities, it’s a safe bet that technology-aided plagiarism detection will be pitched as a solution. Almost concurrent with the wave of articles about the chatbot came a slew of pieces touting fixes. A Princeton student spent a chunk of his winter break creating GPTZero, an app he claims can detect whether a given piece of writing was produced by a human or by ChatGPT. Plagiarism-detection leviathan Turnitin is touting its own ‘A.I.’ solutions to confront the burgeoning issue. Meanwhile, instructors across the country are reportedly catching students submitting essays written by the chatbot. OpenAI itself, in a moment of selling us all both the affliction and the cure, has proposed plagiarism detection or even some form of watermark to signal when the tech has been used. Unfortunately, the detection tool it released is, by the company’s own admission, ‘not fully reliable.’
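
To see why such detection is shaky, it helps to look at the statistical signal these tools reportedly lean on. GPTZero’s creator has described scoring text by perplexity: the idea that a language model finds machine-generated prose more predictable than human writing. Below is a minimal sketch of that heuristic, assuming GPT-2 (via the Hugging Face transformers library) as the scoring model and a purely illustrative threshold; it is not GPTZero’s, Turnitin’s, or OpenAI’s actual implementation.

```python
# Minimal sketch of a perplexity-based "A.I. text" heuristic.
# Assumptions: GPT-2 as the scoring model and an arbitrary threshold;
# neither reflects any real detector's internals.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2's perplexity over `text`; lower means more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing input_ids as labels makes the model return the mean
        # next-token cross-entropy loss over the sequence.
        loss = model(input_ids=enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

THRESHOLD = 40.0  # illustrative only; real detectors calibrate on data

essay = "The industrial revolution fundamentally changed how people lived and worked."
score = perplexity(essay)
verdict = "possibly machine-generated" if score < THRESHOLD else "probably human"
print(f"perplexity = {score:.1f} -> {verdict}")
```

Even this toy version exposes the failure mode: plain, formulaic human writing can score as ‘machine-like,’ and lightly edited machine output can score as ‘human,’ which is part of why these tools remain, in OpenAI’s words, ‘not fully reliable.’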

Witnessing this cycle of tech deployment and tech solutionism forces us to ask: Why do we keep doing this? Although plagiarism is an easy target and certainly on the minds of teachers and professors when thinking about this technology, there are deeper questions we need to engage, questions that are erased when the focus is on branding students as cheaters and urging on an A.I. bakeoff between students and teachers. Questions like: What are the implications of using a technology trained on some of the worst texts on the internet? And: What does it mean when we cede creation and creativity to a machine?

One of the most interesting details in the ChatGPT media swirl requires subtle attention to the shifting goal posts of A.I. ethics. In a recent interview, OpenAI CEO Sam Altman asserted the need for society to adjust to generative-text tech: ‘We adapted to calculators and changed what we tested for in math class, I imagine.’ In that sentence-ending qualifier, we can tease out a debate habit that is decades old: technologists guessing at how teachers might adapt to technology. Altman ‘imagines’ what ‘we’ (the teachers) had to ‘change’ about our tests because of calculators. What OpenAI likely didn’t do during the building of ChatGPT is study the potential pedagogical impact of its tool.

Instead of ‘imagining’ what ChatGPT might do to the classroom, educators have to adapt discussions, activities, and assessments to the changed environment that it creates. Some of that work is exciting, like when many of us began to bring social media into the classroom to connect our students with outside thinkers or to collaborate in real time on shared documents. Some of it, however, is like what happens when we have to develop emergency plans for the possibility of an active shooter. We’ll do the work, but the adaptation a) might have been avoided, and b) only distracts from the job.

We could imagine another way this might have gone down. Consider what pedagogical testing for a tool like ChatGPT would look like: focus groups, experts, experimentation. Certainly the money is there for it. OpenAI is receiving investment interest from everywhere (after giving it $1 billion four years ago, Microsoft just invested another $10 billion) and has just launched a service that will allow companies to integrate models like ChatGPT into their own systems.

In all these discussions, it’s imperative to understand what this tool is—or more explicitly, what is required for it to exist. OpenAI paid outsourcing partner Sama $200,000 to teach ChatGPT how not to be violent, racist, or sexist. Sama’s workers were compensated between $1.50 and $2 an hour to keep ChatGPT from mimicking the worst kinds of human behavior. Kenya-based workers interviewed by Time reported being ‘mentally scarred’ by performing the job. Is it a surprise that the company wants to leave its ‘disruptive’ tool on the doorstep of the world’s schools with the sage advice of treating ChatGPT like a calculator? It shouldn’t be.

Teachers at many levels of our educational structure are going to be adapting to what A.I. text generation will do for, with, and to students in the coming years. Some of them will embrace the tool as a writing aid; others will bunker in and interrogate students whose papers feel auto-generated. ChatGPT has given us all interesting things to imagine and worry about. However, one thing we can be sure of is this: OpenAI is not thinking about educators very much. It has decided to ‘disrupt’ and walk away, with no afterthought about what schools should do with the program.

Almost every article about this technology has resorted to an appealing yet severely flawed argument: The tech is here and isn’t going anywhere, so we’d better learn to live with it. This is a genie out of the bottle, we are told—never mind that at the end of most genie stories, the genie goes back in the bottle, having inflicted some manner of damage. Writer and theorist L.M. Sacasas refers to this line of argument as the ‘Borg complex.’ Telling us that resistance to a particular technology is futile is a favorite talking point of technologists who release systems with few, if any, guardrails into the world and then put the onus on society to address most of the problems that arise. Let’s go back to that arms race we described. When the life cycle of a classroom activity is influenced at every phase by an A.I. instrument (assignment construction, student work, assessment), digital utopians might claim that students and teachers will have more opportunities for critical thinking because generating ideas—the grunt work of writing—isn’t taking up any of our time. Along this line of thinking, ChatGPT is just another calculator, but for language instead of numbers.

This assertion, that A.I. might ‘free up human workers to focus on more thoughtful—and ideally profitable—work,’ is wrongheaded at the outset. When it comes to writing (and everything that can be done with it), it’s all grunt work. Having an idea, composing it into language, and checking to see whether that language matches our original idea is a metacognitive process that changes us. It puts us in dialogue with ourselves and often with others as well. To outsource idea generation to an A.I. machine is to miss the constant revision that reflection causes in our thinking. Not to mention that the biggest difference between a calculator and ChatGPT is that a calculator doesn’t have to check its answer against the loud chaos of everything toxic and hateful that has ever been posted on the internet.

Call it an idealistic concept, but the classroom is one of the most common spaces in modern life for the potential of collective meaning-making. Not every classroom satisfies that goal, but when teachers and students begin ceding the genesis of their ideas to a highly advanced version of autocorrect, the potential for group discovery begins to evaporate. That more-cynical future is not on the minds of those who argue, ‘Like it or not, ChatGPT is here, so deal with it.’ It’s a failure of imagination to think that we must learn to live with an A.I. writing tool just because it was built.

Pedagogically speaking, focusing on the grunt work of trying out ideas—watching them develop, wither, and cede ground to better ones—is the most valuable time we can spend with our students. We surrender that time to Silicon Valley and the messy database that is the internet at the peril of our students.

It’s perhaps a good moment to step back and develop better responses to what’s occurred and what’s coming. Rather than upping the surveillance-and-detection stakes with tools that are, at best, spotty and unreliable, teachers can talk with students reflectively about what’s at stake with A.I.-generated text. At the same time, we need to continue building activities and assessments that make classroom work more specific and experiential. (ChatGPT probably won’t do so well with community observations or local interviews.) And we need to insist that, in the future, A.I. companies bring educators to the table to study the implications of their new tools. But we must also imagine an environment where we aren’t involuntarily sucked into a cycle of unproved tech constantly being foisted upon our lives. We’ve taken the important step of regulating other major industries (tobacco, pharmaceuticals, automobile manufacturing). Educators, and citizens in general, would benefit from a more public and reflective conversation, driven more by research than by profit speculation. In the meantime, ignore the surveillance pitches. We don’t need to give the future implied by ChatGPT a helping hand.

Reference: https://slate.com/technology/2023/02/chat-gpt-cheating-college-ai-detection.html
