My Response to Medium’s AI-Generated Writing Policy
As a process-oriented individual, I’m becoming keenly aware of the implications of the growing world of AI-generated articles. There’s a lot to explore here, and I can see that the discussion is already happening online. So far I’ve only skimmed it, but I’m looking to delve a little deeper, as this issue looks like it’s only going to grow.
To explore my own perspective first, here are some questions that come to mind. Is having an article written simply a means to an end? Are articles merely a form of information dissemination? Or are they part of an emerging human communication process? These questions are far more fundamental than asking what practical use a chatbot or AI application might have for productivity.
Is this part of a broader social trend in which nearly everything becomes commodified? Even if something can be boundless, does that automatically mean we shouldn’t set limits? Well, talk of limitations, identification, and disclosure is already happening.
In early 2023, Medium writer Scott Lamb conducted a poll to gather feedback from Medium users — writers and readers — to explore how we’re approaching AI-generated writing on Medium. (Here’s a link to his original article.)
The article states that Medium has updated its distribution standards to include an AI-specific guideline: “We welcome the responsible use of AI-assistive technology on Medium. To promote transparency, and help set reader expectations, we require that…