Brand MU Day

    AI PBs

    Game Gab
    159 Posts 39 Posters 5.9k Views
    This topic was forked from PBs Tez
    • ProperPenguin @Faraday

      @Faraday said in AI PBs:

      But people on SO don’t generally hallucinate library functions that don’t exist

      The main thing I have wanted to use AI for that relates to code is generating regex. I am bad at regex. My brain just does not wrap around it.

      And oh my god, they are bad at it. When I told a few dev friends this, they got surprised and then tested on other instances (not just ChatGPT) and found yeah, it spits out a whole mess or sometimes it suddenly veers into turning your request into Python or similar.

      Regardless, I do think that reliance is one of the biggest risks with AI. So many people (the number growing every day; ask a teacher) will just grab whatever ChatGPT spits out without vetting it.

      After the first instance of a legal team doing this (and thus submitting filings filled with fake cases and other outright false information), I have been flabbergasted that it just keeps happening and I think this push to ‘get on the train’ is in part to blame.

      It is also telling (and I do not think I’ve seen this come up on the thread yet) that the investors in ChatGPT and other AI ventures are starting to pull out because their investments are not paying off.

      I am not fully without hope (even though I’ve been unemployed for 2 years and I’ve found myself going back to school, taking on more debt so I can pivot to a new career, despite tech writing being something I love to do) because I do think the bubble will burst. Between the fact that audiences are overwhelmingly underwhelmed by AI content (Disney has come out about several instances where they wanted to openly use it, but it failed for several reasons) and the environmental impact… I think it’s a tech that will fall off the map (some estimate it will happen as soon as 2026).

      • Pavel @Faraday

        @Faraday said in AI PBs:

        Maybe enough to fool a layperson, but not enough to actually BE competent.

        Hey, I’ve been doing that the hard way for years. People need to stop faking faking and fake skill authentically.

        He/Him. Opinions and views are solely my own unless specifically stated otherwise.
        BE AN ADULT

        • Faraday @ProperPenguin

          @ProperPenguin said in AI PBs:

          When I told a few dev friends this, they got surprised and then tested on other instances (not just ChatGPT) and found yeah, it spits out a whole mess or sometimes it suddenly veers into turning your request into Python or similar.

          Knowing how these things work, it is not at all surprising that they are bad at generating a custom regex. If you want a well-known one, maybe, but GenAI doesn’t truly think or reason. It generates statistically likely responses. Not correct ones.

          But vetting regexes is as hard as writing them, so you’re still not coming out ahead.
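          To make the vetting problem concrete, here’s a small sketch (the date-matching request and both patterns are hypothetical examples, not anything an actual model produced) of how a regex can look right at a glance and still be wrong:

```python
import re

# A pattern an assistant might plausibly emit for "match dates like YYYY-MM-DD".
# It looks correct on a quick read, which is exactly what makes vetting hard.
naive = re.compile(r"\d{4}-\d{2}-\d{2}")

print(bool(naive.fullmatch("2024-03-15")))  # matches a real date
print(bool(naive.fullmatch("2024-99-99")))  # also matches nonsense: \d{2} knows nothing about calendars

# A tighter version that constrains month (01-12) and day (01-31).
# Still imperfect: it ignores per-month lengths and leap years.
stricter = re.compile(r"\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])")

print(bool(stricter.fullmatch("2024-03-15")))
print(bool(stricter.fullmatch("2024-99-99")))
```

          Spotting the difference between those two patterns requires exactly the regex fluency the tool was supposed to replace.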

          • dvoraen

            I don’t have much to contribute here other than a serious amount of sardonic snark, so I’m just going to summarize the major points here with a metaphor:

            Generative “AI” (note the quotes) is like makeup. It can make things pretty, but unless you’re a skilled makeup artist or have a lot of experience with putting on your own cosmetics, there are probably going to be flaws, and eventually other people are going to notice and compare notes.

            Also, sooner or later the makeup has to be removed. We’ll leave that part open to interpretation; there’s many.

            (This metaphor also assumes the components of the makeup aren’t toxic, too!)

            • dvoraen

              This is a non-sequitur to “AI” PBs, but it feels so adjacent to the discussion (it overlaps it slightly) that I have to share it.

              This is for the programmers out there, especially @Faraday.

              https://youtu.be/CUfliPTbJu4

              • Faraday @dvoraen

                @dvoraen Heh I enjoyed that video, thanks.

                That reminds me of this post: My new hobby: watching AI slowly drive Microsoft employees insane. It’s reactions to a series of Pull Requests (the programmer equivalent of “Hey, I did a thing, someone check it out please and make sure I didn’t mess up”) from Copilot AI. The top comment says it all:

                I just looked at that first PR and I don’t know how you could trust any of it at some point. No real understanding of what it’s doing, it’s just guessing. So many errors, over and over again.

                Can AI tools make coding easier? Sure. Just in my lifetime I’ve seen code go from assembly language (low-level instructions that operate directly on the computer’s registers) to visual coding tools (like Scratch) that let my kid build a game like Frogger. But even with that astonishing advancement, we still need developers to figure out what to build in the first place, and then make sure what gets built does what the customer needs. AI is highly unlikely to replace that in the foreseeable future.

                In other news: AI industry horrified to face largest copyright class action ever certified

                They’ve warned that a single lawsuit raised by three authors over Anthropic’s AI training now threatens to “financially ruin” the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.

                I know it’s unlikely to succeed, but one can dream.

                • Hobbie

                  Here’s the “Fighting against AI in SRE” update for the past week.

                  • Experienced developer raised a PR for our IaC that was full of AI slop, which he only admitted to when pressed, and expected it to be waved through.
                  • Stumbled across Claude (free version) configuration in a very proprietary repo.
                  • C-level keeps setting up meetings every two weeks asking how we get AI in our product, unsatisfied with “it breaches GDPR so we don’t” as an answer.
                  • Got sent this painful little article which caused war flashbacks to my desktop support days: https://thenewstack.io/vibe-coding-the-shadow-it-problem-no-one-saw-coming/
                  • MisterBoring

                    This is an example of what I am hopeful AI will eventually do, which is help push scientific discovery forward.

                    https://www.bbc.com/news/articles/cgr94xxye2lo

                    Proud Member of the Pro-Mummy Alliance

                    • Faraday @MisterBoring

                      @MisterBoring said in AI PBs:

                      This is an example of what I am hopeful AI will eventually do, which is help push scientific discovery forward.

                      https://www.bbc.com/news/articles/cgr94xxye2lo

                      That is neat, though it’s worth noting that this doesn’t appear to be “Generative AI” in the usual sense of ChatGPT, etc., but a custom-trained model. With much of this, it’s not the technology itself that is the problem, it’s the way in which it’s being trained and used that people take issue with.

                      • MisterBoring @Faraday

                        @Faraday Totally. I feel like there are just as many groups building AI models trained solely on academic works and scientific knowledge, but those groups aren’t getting any sort of news coverage because it’s not as controversial.

