✏️ Schools have an AI problem

ALSO: Apple's Phil Schiller may join OpenAI's board

Estimated Read Time: 4 minutes

Every student I know uses AI to write their papers, and they never get caught. But I also use AI to educate myself, to learn more. Schools need to address both sides of this: using AI to teach, and adapting their take-home work to the reality of AI-written essays. A recent study exposed just how inept universities are at catching lazy AI essays.

  • ✏️ Schools need to change with AI.

  • 🤝 Apple’s Phil Schiller might join OpenAI’s board.

  • 🤔 YouTube lets you take down deepfakes of yourself.

Read time: 2 minutes

✏️ Schools need to change with AI

What happened: Oh boy, it’s happening. Students are using AI to cheat on papers… who knew? 👀 Researchers at the University of Reading conducted an experiment where AI-generated exam answers, submitted under fake student identities, went undetected by professors and received higher grades than those of real students.

The details: 

  • The researchers created fake student profiles and submitted unedited answers generated by ChatGPT-4 for take-home online assessments in undergraduate courses. (Geez, not even reworded.)

  • Out of 33 AI-generated submissions, only one was flagged by the university's markers, while the rest received grades higher than the average real student's submission. 👍

  • The research aimed to determine if human educators could detect AI-generated exam responses. And… they can’t.

  • Hint: Most schools claim they can detect AI-written papers. They can't. It's well known that some genuine papers get flagged as "AI-written" while actual AI-generated papers sail through. Ask any student.

Why it matters: Leave it to schools and mass media to act surprised at the obvious. I can speak from second-hand experience: I know people in universities and classes who use ChatGPT to answer questions, write essays, and complete online quizzes. They get away with it every day. This includes nurses. How can you test someone's competence if they can cheat freely?

Takeaway: Schools need to change how they use AI. Not only should schools figure out cheat-proof testing methods, but they also need to use AI to teach. AI can be a better teacher because each student gets the individual attention they need, and an AI tutor draws on a database of "correct" information. But of course, all schools can think about is students cheating on their essays, not how they could use AI to teach better.

Another day fighting deepfakes at the office…

😬 Meta is changing its "Made with AI" tag to "AI Info" after facing backlash for labeling real photos (usually from photographers) as AI-generated. But even if the label changes, the tech for detecting AI-generated images has not. Meta continues to use metadata standards such as C2PA and IPTC to identify AI tools in editing workflows. For instance, photos edited with tools like Adobe's Generative Fill will still be tagged. Once again, deepfakes are shaping up to be a big deal for social media.

Read time: 1 minute

🤝 Apple’s Phil Schiller might join OpenAI’s board

The man himself.

What happened: Apple has reportedly secured a seat for Phil Schiller, the App Store chief and former marketing head, on OpenAI's nonprofit board in an observer role.

The details:

  • Phil Schiller will have an observer role on OpenAI’s board, allowing him to attend meetings but without voting rights or directorial power.

  • There is no financial transaction involved in the partnership (so far), but Apple is expected to receive a percentage of ChatGPT subscriptions made through its platforms in the future.

  • It's highly unusual for Apple executives to take board seats at partner companies.

Takeaway: Big Tech tightens its grip on AI. Apple is deepening its relationship with OpenAI as part of their ChatGPT deal. Microsoft also holds a non-voting observer position. Like it or not, OpenAI is the major player in this industry, and what it does will direct the course of AI.

Read time: 1 minute

🤔 YouTube lets you take down deepfakes of your face and voice

What happened: YouTube has implemented a new policy allowing individuals to request the removal of AI-generated or synthetic content that simulates their face or voice. This policy, quietly rolled out in June, aims to address privacy concerns associated with the rise of AI-generated media.

The details: 

  • People can now request the takedown of AI-generated content that mimics their face or voice under YouTube’s privacy request process.

  • Removal includes taking down the video and removing personal information from titles, descriptions and tags.

  • YouTube says it will consider whether the content is disclosed as synthetic, whether it uniquely identifies a person, and whether it qualifies as parody, satire, or public-interest content.

  • The policy makes special mention of public figures (think politicians) and of deepfakes depicting a crime.

  • Content creators are given 48 hours to respond to a complaint.

Why this matters: One of the biggest dangers of AI is deepfakes and false info. Protecting people from deepfakes needs to be a top priority. But YouTube is notorious for copyright-abuse problems, including unfair copyright claims and strikes on channels. I can't imagine it'll manage this feature any better. Can't wait for the stories about false deepfake claims on videos.

🍺 The future of beverage commercials…

Source: Midjourney

Give us your feedback!

Got feedback? Rate today’s newsletter by clicking below! 

Got more personal feedback? Fill in the box when you rate us and give us your criticism and feedback.

Thank you for reading! 

❤️Share the (ARTIFICIAL) Love!

Got a friend who wants to appear as smart as you? An AI fanatic, or someone shaking in their boots about getting replaced? Share us by clicking the link below and you'll get our love ;)