Brand MU Day

    AI Megathread

    No Escape from Reality
    314 Posts 48 Posters 55.9k Views
    • Juniper

      If LLM chatbots weren’t so chronically wrong, using them to dodge adverts and engagement bombardment might actually be a decent use case.

      • Faraday @Juniper

        @Juniper said in AI Megathread:

        If LLM chatbots weren’t so chronically wrong, using them to dodge adverts and engagement bombardment might actually be a decent use case.

        Until there’s no more content for them to gobble up because all the websites they stole from have shut down.

        • Juniper

          Part of the reason we’re now drowning in content slop farms (which themselves use generative AI) is that advertising revenue made content profitable and incentivised churning it out as quickly as possible while eliminating any kind of quality standards.

          I don’t have a lot of sympathy for advertisers or those who are paid by them, because they form part of the ecosystem that put us in this mess. Let the model collapse; it stopped serving us long ago.

          • Faraday @Juniper

            @Juniper The existing ad model sucks, but there are other ways to solve that problem. If someone is doing work to put out professional content, they shouldn’t be expected to give it away for free, and it certainly shouldn’t be stolen from them by a plagiarism machine. They deserve compensation, whether that’s through a subscription or ads. I have no problem with, for instance, YouTube’s model where you get to choose between the two.

            But regardless of philosophy, what I’m talking about is simple cause and effect. ChatGPT has to get its information from human content creators. If OpenAI drives them all out of business, they’re just shooting themselves in the foot too. But by the time that happens, the damage to all the other creators will already have been done.

            • Pavel @Juniper

              @Juniper said in AI Megathread:

              revenue made content profitable and incentivised creating content as quickly as possible while eliminating any kind of standards for quality

              That’s just late-stage capitalism. Internet advertising standards are a symptom rather than a cause.


              • Ashkuri

                https://gizmodo.com/googles-veo-3-is-already-deepfaking-all-of-youtubes-most-smooth-brained-content-2000606144

                cool cool cool this isn’t problematic at all

                • Tez (Administrators)

                  @Rathenhope alerted me to this:

                  https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

                  Made me think of the intermittent discussions we’ve had about how to identify AI writing. Maybe this will be helpful or interesting for some of you!


                  • dvoraen @Tez

                    @Tez I do find it interesting, for sure.

                    What I find a little “funny” (it’s not, but I don’t have a better word for it) is that you would think OpenAI and the other LLM providers would offer tools to detect their own LLMs’ handiwork, if only as a capitalistic venture.

                    This article made me think of the so-called “AI” detectors, which I would contend verge on snake oil, especially since they produce both false positives and false negatives. The only people who could plausibly build a “foolproof” detector are the ones who provide the very thing you’re trying to detect. Even then, we’re getting into the whole schtick about random and pseudorandom number generation in computers, which is part of where LLMs get their “ideas” from.

                    • Faraday @Tez

                      @Tez said in AI Megathread:

                      https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

                      Just as a note, many (if not most) of these “signs of AI writing” are in fact signs of professional writing as well.

                      The so-called “ChatGPT Dash” is just the em dash, widely used by pro authors and well-known in Emily Dickinson poetry. Rule of three, “has been described”, parallelism… most of these are common writing tools that many people just weren’t aware of before. ChatGPT is able to imitate those tools because it stole the published work of actual writers.

                      Now if your coworker who couldn’t string a coherent paragraph together suddenly starts using elegant triplets and juxtaposition, it’s probably a sign that they’re using AI to write. Otherwise, it doesn’t mean much. And that’s why, to @dvoraen’s point, there is no reliable AI-writing detection tool that doesn’t throw a zillion false positives on real writing.
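                      To make that concrete, here’s a throwaway sketch (purely illustrative: the rules and the sample sentence are ones I made up, not anything from a real detector) of what happens if you build a checker out of those “signs”. Perfectly human, professional prose trips every rule:

```python
import re

# Toy "AI sign" checker: just counts the tells mentioned above.
# These patterns and the sample are invented for illustration only.
SIGNS = {
    "em_dash": re.compile("\u2014"),  # the so-called "ChatGPT dash"
    "stock_phrase": re.compile(r"\bhas been described\b", re.IGNORECASE),
    "rule_of_three": re.compile(r"\b\w+, \w+,? and \w+\b", re.IGNORECASE),  # "X, Y(,) and Z"
}

def naive_ai_score(text: str) -> int:
    """Count how many 'signs of AI writing' appear at least once."""
    return sum(1 for pattern in SIGNS.values() if pattern.search(text))

# A human-written, Dickinson-flavoured sentence in a professional register:
human_sample = (
    "Her verse\u2014spare, strange, and startling\u2014has been described "
    "as ahead of its time."
)

print(naive_ai_score(human_sample))  # 3 out of 3: a false positive
```

                      And flip it around: any model output that avoids those tics sails right through, so you get false negatives too.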

                      • Pavel

                        Agreed with all @Faraday said above.

                        The only way you can truly tell whether writing is LLM-generated, rather than simply in a style you’ve come to associate with LLMs, is to be comparative. It’s sort of like differentiating a student’s work from something their parent wrote, to use a reference from back in our day.

                        If you want to test someone you can’t physically be with to monitor, the best way – which is not a foolproof way – is to get them to write something reflective, about a mutual experience if possible. You’ll more easily see the main flaw in LLM writing: when it makes shit up. An essay written by ChatGPT is going to look like any of the thousands of good essays written in the last hundred years, because it’s copying them. It’ll probably even get most of the facts right. But a personal, reflective piece? Sure, the LLM can get the structure right, but it’ll just make shit up, because there’s no googling for the facts of someone’s personal experience.


                        • InkGolem @Pavel

                          @Pavel That only works when a student hands it in raw. Many I’ve encountered have been using AI to write the bulk of it and then editing and adding.

                          • Faraday @InkGolem

                            Students using AI is a real problem in public school, but what I find interesting is how it doesn’t seem to be as much of a problem in the homeschool community. When you take the pressure of grades off, and let kids write about things they’re passionate about, many (most?) of them don’t WANT to use AI.

                            • Ashkuri @Faraday

                              @Faraday said in AI Megathread:

                              it doesn’t seem to be as much of a problem in the homeschool community

                              I can’t find a source for this, maybe I’m not searching with the right terms. Can you link me?
