Brand MU Day
    AI Megathread

    No Escape from Reality
    368 Posts 50 Posters 56.4k Views
    • Trashcan @Faraday

      @Faraday
      There was cheating before AI and there were false accusations of cheating before AI detectors. Being falsely accused of using AI is no more serious than being accused of plagiarism.

      What is the alternative?

      he/him
      this machine kills fascists

      • Faraday @Trashcan

        @Trashcan I think you’re underestimating the psychological effect that takes place when people trust in tools. There’s a big difference between “I think this student may have cheated” and “This tool is telling me this student cheated” when laypeople don’t understand the limitations of the tool.

        I’ve studied human factors design, and there’s something that happens with people’s mindsets once a computer gets involved. We see this all the time, whether it’s reliance on facial recognition in criminal applications, self-driving cars, automated medical algorithms, etc.

        Also, plagiarism detectors are less impactful because they can point to a source and the teacher can do a human review to determine whether they think it’s too closely copied. That doesn’t work for AI detection. It’s all based on vibes, which can disproportionately impact minority populations (like neurodivergent and ESL students). I also highly doubt that hundreds of thousands of students are falsely accused of plagiarism each year, but I can’t prove it.

        As for the alternative? I don’t think there is one single silver bullet. IMHO we need structural change.

        • Yam

          Just to summarize, and please correct me: Trashcan thinks that SOME amount of false positives (1%) using tools is acceptable in the fight against AI, and Faraday thinks that ZERO false positives using tools is acceptable in the fight against AI? Am I understanding that you think it’s better to trust your gut here, Faraday?

          • Pavel @Faraday

            @Faraday said in AI Megathread:

            IMHO we need structural change.

            Agreed. It’s fundamentally not even really an “AI” problem at its core, but a sort of “humans relying on authorities instead of thinking” problem.

            He/Him. Opinions and views are solely my own unless specifically stated otherwise.
            BE AN ADULT

            • Faraday @Yam

              @Yam That isn’t exactly what I said. It’s a complex issue requiring multiple lines of defense, better education, and structural change. But I am saying that even 99% accuracy is too low.

              For example, say you have a self-driving car. Are you OK if it gets into an accident 1 out of every 100 times you drive it?

              Say you have a facial recognition program that law enforcement leans heavily on. Are you OK if it mis-identifies 1 out of every 100 suspects?

              I’m not.

              1% failure doesn’t sound like much until you multiply it across millions of cases.
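
              A back-of-the-envelope sketch of what that multiplication looks like (the enrollment and submission figures below are illustrative assumptions, not numbers from this thread):

              ```python
              # Rough scale of false accusations from a detector with a 1% false
              # positive rate. Both input figures are hypothetical placeholders.
              students = 15_000_000        # assumed: students screened per year
              essays_each = 5              # assumed: submissions checked per student
              false_positive_rate = 0.01   # the "99% accuracy" figure from the post

              submissions = students * essays_each
              false_accusations = submissions * false_positive_rate
              print(f"{false_accusations:,.0f} false accusations per year")
              # -> 750,000 false accusations per year
              ```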

              • Trashcan @Yam

                @Yam
                I think that some amount of mistakes in any system are acceptable. Nothing is flawless. To me the barrier that a system needs to clear is “better than any alternative”.

                With AI detectors, we’ve already seen that unassisted people get it right only 50-60% of the time, while certain detectors perform at a level where less than 1% of results are false positives. That seems better.

                @Faraday said in AI Megathread:

                say you have a self-driving car. Are you OK if it gets into an accident 1 out of every 100 times you drive it?

                There were about 6 million auto accidents in 2022. If the self-driving car (extrapolated to the whole population) would have caused 5 million accidents, it would be better.

                @Faraday said in AI Megathread:

                Say you have a facial recognition program that law enforcement leans heavily on. Are you OK if it mis-identifies 1 out of every 100 suspects?

                If this facial recognition program does a better job than humans, yes I am okay with it. Humans are notoriously poor eye witnesses.

                Eyewitness misidentification has been a leading cause of wrongful convictions across the United States. It has played a role in 70% of the more than 375 wrongful convictions overturned by DNA evidence. In Indiana, 36% of wrongful convictions have involved mistaken eyewitness identification.

                @Pavel said in AI Megathread:

                but a sort of “humans relying on authorities instead of thinking” problem

                There are cases when humans should rely on authorities instead of thinking. No one is advocating for completely disconnecting your brain while making any judgment, but authoritative sources can and should play a key role in decision-making.


                • Yam @Trashcan

                  @Trashcan said in AI Megathread:

                  There were about 6 million auto accidents in 2022. If the self-driving car (extrapolated to the whole population) would have caused 5 million accidents, it would be better.

                  Lol man, I have to agree. I realize that we’re generally anti-generative AI in art/writing here but I’ll be honest, if the computer drives the car better than my anxious ass, I’ll ride along.

                  • Jumpscare @Trashcan

                    @Trashcan said in AI Megathread:

                    There were about 6 million auto accidents in 2022. If the self-driving car (extrapolated to the whole population) would have caused 5 million accidents, it would be better.

                    Making cities walkable would be far better than throwing more money into the abyss that cities become when they’re overrun by self-driving cars.

                    Game-runner of Silent Heaven, a small-town horror MU.
                    https://silentheaven.org

                    • Faraday @Yam

                      @Yam said in AI Megathread:

                      if the computer drives the car better than my anxious ass, I’ll ride along.

                      That’s a big “if” though, and is the crux of my argument.

                      @Trashcan said in AI Megathread:

                      If this facial recognition program does a better job than humans, yes I am okay with it. Humans are notoriously poor eye witnesses.

                      The difference is that many people know that humans are notoriously poor eye witnesses. Many people trust machines more than they trust other humans, even when said machines are actually worse than the humans they’re replacing. That’s the psychological effect I’m referring to.

                      • NotSanni @Jumpscare

                        @Jumpscare said in AI Megathread:

                        @Trashcan said in AI Megathread:

                        There were about 6 million auto accidents in 2022. If the self-driving car (extrapolated to the whole population) would have caused 5 million accidents, it would be better.

                        Making cities walkable would be far better than throwing more money into the abyss that cities become when they’re overrun by self-driving cars.

                        Unfortunately, tech bros would rather reinvent bandaid solutions over and over again instead of actually working to improve the future.

                          • Yam @Jumpscare

                          @Jumpscare Walkable cities is a whole 'nother can of worms.
