New development policy: code generated by a large language model or similar technology (e.g. ChatGPT, GitHub Copilot) is presumed to be tainted (i.e. of unclear copyright, not fitting NetBSD’s licensing goals) and cannot be committed to NetBSD.

https://www.NetBSD.org/developers/commit-guidelines.html

  • best_username_ever@sh.itjust.works

    It’s actually simple to detect: if the code sucks or is written by a bad programmer, and the docstrings are perfect, it’s AI. I’ve seen this more than once and it never fails.

    • Zos_Kia@lemmynsfw.com

      I’m confused: do people really use Copilot to write the whole thing and ship it without re-reading?

      • sugar_in_your_tea@sh.itjust.works

        I literally did an interview that went like this:

        1. Applicant used copilot to generate nontrivial amounts of the code
        2. Copilot generated the wrong code for a key part of the algorithm; applicant didn’t notice
        3. We pointed it out, they fixed it
        4. They had to refactor the code a bit, and ended up making the same exact mistake again
        5. We pointed out the error again…

        And that’s in an interview, where you should be extra careful to make a good impression…

      • neclimdul@lemmy.world

        Not specific to AI but someone flat out told me they didn’t even run the code to see it work. They didn’t understand why I would or expect that before accepting code. This was someone submitting code to a widely deployed open source project.

        So, I would expect the answer is yes, or will very soon be yes.

      • best_username_ever@sh.itjust.works

        Around me, most beginners who use that don’t have the skills to understand or even test what they get. They don’t want to learn, I guess; ChatGPT is easier.

        I recently suspected a new guy was using ChatGPT because everything seemed perfect (grammar, code formatting, classes made with design patterns, etc.) but the code was very wrong. So I did some pair programming with him and asked if we could debug his simple application. He didn’t know where the debug button was.

        • Zos_Kia@lemmynsfw.com

          Guilty as charged, ten years into the job and I never learned to use a debugger lol.

          Seriously though, that’s amazing to me; I’ve never met one of those… I guess 95% of them will churn out of the industry in less than five years…

          • Tja@programming.dev

            Debug button? There is a button that inserts `printf("%s:%d boop!\n", __func__, __LINE__);`?
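
            The joke above actually works as a one-line macro. A minimal sketch in C (the macro name `TRACE` is my own invention, not anything from the thread):

            ```c
            #include <stdio.h>

            /* Hypothetical "debug button": expands to a printf tagged with the
             * enclosing function name and the source line of the call site.
             * __func__ is standard C99; __LINE__ is an int, hence %d. */
            #define TRACE() printf("%s:%d boop!\n", __func__, __LINE__)

            static int add(int a, int b) {
                TRACE();   /* prints "add:<line> boop!" */
                return a + b;
            }

            int main(void) {
                TRACE();   /* prints "main:<line> boop!" */
                return add(2, 3) == 5 ? 0 : 1;
            }
            ```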

    • TimeSquirrel@kbin.social

      So your results are biased, because you’re not going to see the decent programmers who are just using it to take mundane tasks off their back (like generating boilerplate functions) while staying in control of the logic. You’re only ever going to catch the noobs trying to cheat without fully understanding what it is they’re doing.

      • best_username_ever@sh.itjust.works

        You’re only ever going to catch the noobs.

        That’s the fucking point. Juniors must learn, not copy-paste random stuff. I don’t care what seniors do.