The AI Surge in Classrooms
On November 30, 2022, OpenAI launched ChatGPT, and within six days, CEO Sam Altman announced it had hit a million users. Unlike human brains, large language models like ChatGPT don’t “think.” They pull from vast datasets, stringing together words based on patterns. Earlier AI chatbots, like Microsoft’s 2016 Tay, crashed and burned, shut down within 16 hours after spewing vile nonsense. But ChatGPT was different. It could chat smoothly, break down tricky concepts, and churn out coherent answers. By December 2022, Google execs were sweating, declaring a “code red” over fears AI might shake up their search empire.
Educators, meanwhile, faced a full-blown crisis. ChatGPT could spit out research summaries or draft essays in seconds, threatening to upend homework as we know it. Schools scrambled to set rules, but policing AI use proved tricky. Many campuses, including NYU, tried regulating it, often with little success.
AI as a Student’s Silent Partner
Back at the noodle shop, Alex opened his laptop to show me an AI-generated paper. Eugene, his quieter friend, leaned in, curious. A business major, Eugene uses AI for number-crunching but struggles with writing apps. “I got you,” Alex said, pulling up Claude.
One chat caught my eye, mentioning abolition. Alex explained he’d been assigned to read Robert Wedderburn, a 19th-century Jamaican abolitionist. “I wasn’t gonna read that,” he admitted with a smirk. He’d asked Claude for a summary, but it was too long for the 10 minutes he had before class. “So I told it, ‘Make it concise bullet points.’” He copied those into his notebook; his professor banned screens in class.
Then he showed us a paper for an art-history class about a museum exhibition. He’d visited the show, snapped photos of the artwork and wall text, and fed them to Claude with the professor’s prompt. “I’m doing the least work possible,” he said, “because this class isn’t my thing.” The first draft missed the mark, so he tweaked the prompt. The final essay scored an A-minus. “I kinda got the argument,” he said, “but if the prof quizzed me, I’d be screwed.” Reading it over his shoulder, I found it convincing but bland: typical undergrad fare. In 2007, I wouldn’t have batted an eye at its cookie-cutter style.
“I’m doing the least work possible, because this class isn’t my thing.” Alex, NYU undergrad
Eugene, more cautious, raised an eyebrow. “I wouldn’t just copy-paste like that. I’m too paranoid.” A high schooler when ChatGPT dropped, he’d dabbled with AI for essays but spotted its flaws early. “Did that pass the AI detector?” he asked.
The Cat-and-Mouse Game of Detection
When ChatGPT hit, professors rolled out countermeasures. Some required time-stamped Google Docs histories to track edits. Others shifted to in-class writing assignments. But catching AI use after submission is tougher. Tools like GPTZero, Copyleaks, and Originality.ai analyze text for machine-like patterns. Alex shrugged off the risk: “My art-history prof is old-school. Probably doesn’t know about those.” We tested his paper on a few detectors. One flagged a 28% chance of AI use; another, 61%. “Better than I thought,” Eugene said, impressed.
“If the prof quizzed me, I’d be screwed.” Alex, on his AI-written paper
I asked if he saw it as cheating. “Of course,” he shot back, laughing. “You kidding me?”
AI as Confidant and Ghostwriter
Alex’s laptop revealed more than just schoolwork. He’d asked ChatGPT if it was okay to run in Nike Dunks. It was his go-to for advice: dating tips, motivation during rough patches. His ChatGPT chat history was a diary of a young adult’s ups and downs. Most shockingly, he admitted using it to draft his NYU application. “Guess it’s dishonest,” he said with a grin, “but, screw it, I’m here.”
“Guess it’s dishonest, but, screw it, I’m here.” Alex, on his AI-crafted NYU application
The Bigger Picture
Alex and Eugene aren’t outliers. AI’s grip on academia is tightening. Students lean on it for efficiency, not enlightenment, turning tools meant for exploration into shortcuts. Professors, caught off-guard, are stuck playing catch-up in a game where the rules keep shifting. Detection tools are imperfect, and AI’s outputs are getting harder to spot. The line between cheating and cleverness blurs, and the stakes are high: grades, degrees, futures.
This isn’t just about lazy undergrads. It’s about a system that rewards output over understanding, where time-crunched students see AI as a lifeline, not a crutch. The real question isn’t whether they’re cheating but why they feel they have to. Overloaded schedules, sky-high tuition, a job market that demands credentials over curiosity: it’s no wonder Alex and his peers turn to AI to keep up.
“The real question isn’t whether they’re cheating but why they feel they have to.”
As we slurped our noodles, I couldn’t shake the feeling that AI’s role in education is a mirror held up to our priorities. If we’re churning out students who’d rather outsource their brains than engage them, maybe the problem isn’t the tech; it’s the game we’ve built around it. Alex and Eugene finished their broth, packed up, and headed to class. I stayed behind, wondering if we’re teaching the next generation to think or just to win.