• dgmib@lemmy.world · 4 months ago

    Sometimes ChatGPT/Copilot’s code predictions are scary good. Sometimes they’re batshit crazy. If you have the experience to tell the difference, it’s a great help.

    • restingboredface@sh.itjust.works · 4 months ago

      Yeah, but non-tech-savvy business leaders see that they can generate code with AI and think “why do I need a developer if I have this AI?”, while having no idea whether the code it produces is right or not. This stat should be shared broadly so leaders don’t overestimate the capability and fire people they will desperately need.

      • Boozilla@lemmy.world · 4 months ago

        Programming jobs will be safe for a while. They’ve been trying to eliminate those positions since at least the 90s, because coders are expensive and often lack social skills.

        But I do think the clock is ticking. We will see more and more sophisticated AI tools that are relatively idiot-proof and can do things like modify Salesforce or create complex new Tableau reports with a few mouse clicks. Jobs will be chiseled away, just as they have been for our unfortunate friends in graphic design.

        • BlameThePeacock@lemmy.ca · 4 months ago

          You, along with most people, are still looking at automation wrong. It has never been about removing people entirely, even with AI; it’s about doing the same work at lower cost.

          If you can eliminate one programmer from your four-person team by giving the other three AI to produce the same amount of work, congrats: you’ve just automated one programming job.

          Programming jobs aren’t going anywhere, but either the amount of code produced is about to skyrocket, or the number of employed programmers is going to drop (or, most likely, both).

          • myliltoehurts@lemm.ee · 4 months ago

            I wonder if this will also have a reverse tail end effect.

            Company uses AI (with devs) to produce a large amount of code -> code sits in prod for a few years with incremental changes -> dev roles rotate or shrink further over time -> company now needs to modernize and change a very large legacy codebase that nobody understands well enough to even feed into the AI -> company is now hiring more devs than before to figure out how to manage a legacy codebase 5-10x the size of what the team could realistically handle.

            Writing greenfield code is relatively easy; maintaining it over years, keeping it up to date and well understood while twisting it to fit every new requirement - now that’s hard.

            • BlameThePeacock@lemmy.ca · 4 months ago

              AI will help with that too; it’s going to be able to process entire codebases at once pretty soon.

              Given the visual capabilities now emerging, it can likely also do human-equivalent testing.

              One of the biggest AI tricks we haven’t seen much of in mainstream use yet is this kind of automated double-checking: the model generates an answer, then validates it before actually giving it to a human. Especially with code, there really isn’t anything stopping it from generating an answer, compiling it, hitting an error, regenerating, and repeating until the code passes all the unit tests or even a visual inspection.
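
              A minimal sketch of that loop (ask_llm here is a hypothetical stand-in for whatever completion API you use, with pytest as the validator):

                  import subprocess

                  def ask_llm(prompt: str) -> str:
                      """Hypothetical stand-in: call an LLM and return generated code."""
                      raise NotImplementedError

                  def generate_until_tests_pass(task: str, max_attempts: int = 5):
                      feedback = ""
                      for _ in range(max_attempts):
                          code = ask_llm(task + feedback)
                          with open("candidate.py", "w") as f:
                              f.write(code)
                          # Validate the candidate by running the project's unit tests.
                          result = subprocess.run(
                              ["python", "-m", "pytest", "tests/"],
                              capture_output=True, text=True,
                          )
                          if result.returncode == 0:
                              return code  # passes all tests; safe to show a human
                          # Otherwise, feed the failure output back and regenerate.
                          feedback = "\n\nYour previous attempt failed with:\n" + result.stdout
                      return None  # give up after max_attempts

              Nothing in the loop is specific to any one model; the trick is simply regenerating until an external check passes.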

              The big limit on this right now is sheer processing cost and the models’ context lengths. However, these costs are dropping faster than for any new tech we’ve seen, and this will likely be trivial in just a few years.

          • Tyrangle@lemmy.world · 4 months ago

            Right on. AI feels like a looming paradigm shift in our field that we can either scoff at for its flaws or start learning how to exploit for our benefit. As long as it ends up boosting productivity it’s probably something we’re going to have to learn to work with for job security.

            • BlameThePeacock@lemmy.ca · 4 months ago

              It’s already boosting productivity in many roles. That’s just going to accelerate as the models get better, the processing gets cheaper, and (as you said) people learn to use it better.

  • Crisps@lemmy.world · 4 months ago

    In the short term it really helps productivity, but in the end the reward for working faster is more work. Just doing the hard parts all day is going to burn developers out.

  • Veraxus@lemmy.world · 4 months ago (edited)

    I’m surprised it scores that well.

    Well, ok… that seems about right for languages like JavaScript or Python. But try it on languages with a reputation for being widely used to write terrible code, like Java or PHP (and hence trained on terrible code), and it’s actively detrimental even to experienced developers.

  • 0x01@lemmy.ml · 4 months ago

    I’m a 10-year pro, and I’ve changed my workflows completely to include both ChatGPT and Copilot. I have found that for the mundane, simple, common patterns, Copilot is correct close to 9 times out of 10, especially in my well-maintained repos.

    It seems like the accuracy of simple answers is directly proportional to the precision of my function and variable names.

    I haven’t typed a full for loop in a year thanks to Copilot; I treat it like an intent autocomplete.
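
    For instance (a made-up function, purely to illustrate the idea): type a precise name and signature, and the tool will usually offer the body unprompted.

        # Typing just this descriptive signature...
        def average_order_value_by_customer(orders: list[dict]) -> dict[str, float]:
            # ...usually gets a suggestion roughly like this:
            totals: dict[str, list[float]] = {}
            for order in orders:
                totals.setdefault(order["customer_id"], []).append(order["amount"])
            return {cid: sum(vals) / len(vals) for cid, vals in totals.items()}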

    ChatGPT, on the other hand, is remarkably useful for super well-laid-out questions, again with extreme precision in the terms you lay out. It has helped me in greenfield development with unique and insightful methodologies for accomplishing tasks that would normally require extensive documentation searching.

    Anyone who claims LLMs are a nothingburger is frankly wrong. With the right guidance my output has increased dramatically and my error rate has dropped slightly. I used to be able to put out about 1000 quality lines of change in a day (a poor metric, but a useful one); with the tools we have today, my output has at least doubled.

    Are LLMs miraculous? No, but they are incredibly powerful tools in the right hands.

    Don’t throw out the baby with the bathwater.

    • raspberriesareyummy@lemmy.world · 4 months ago

      I’m a 10-year pro,

      You wish. The sheer idea of calling yourself a “pro” disqualifies you. People who actually code and know what they are doing wouldn’t dream of giving themselves a label beyond “coder” / “programmer” / “SW Dev”, because they don’t have to. You are a muppet.

      • figaro@lemdro.id · 4 months ago

        Hey! So you may have noticed that you got downvoted into oblivion here. It is because of the unnecessary amount of negativity in your comment.

        In communication there are two parts: how a message is delivered, and how it is received. In this interaction, you clearly stated your point: giving yourself the title of “pro” often means the person is not a pro.

        What they received, however, is far different. They received: ugh this sweaty asshole is gatekeeping coding.

        If your goal was to convince this person not to call themselves a pro going forward, this may have been a failed communication event.

        • raspberriesareyummy@lemmy.world · 4 months ago

          While your measured response is appreciated, I hardly consider a few dozen downvotes relevant, nor do I care in this case. It’s telling that those who did respond to my comment seem to assume I consider myself a “pro”, when that’s 1) nothing I said, and 2) an expression my comment should make clear I find cringy. Outside memeable content, only idiots call themselves a “pro”. If something is my profession, I could see someone calling themselves a “professional <whatever>” (not that I would use it), but “professional” has a profoundly distinct ring to it, because it also refers to a code of conduct, a way of conducting business.

          “I’m a pro” and anything like it is just hot air coming from bullshitters who are mostly responsible for the enshittification of any given technology.

  • THCDenton@lemmy.world · 4 months ago

    It was pretty good for a while! Then they lowered its power, like Immortan Joe. Do not become addicted to AI.

  • Boozilla@lemmy.world · 4 months ago (edited)

    It’s been a tremendous help to me as I relearn how to code on some personal projects. I have written 5 little apps that are very useful to me for my hobbies.

    It’s also been helpful at work with some random database type stuff.

    But it definitely gets stuff wrong. A lot of stuff.

    The funny thing is, if you point out its mistakes, it often does better on subsequent attempts. It’s more an iterative process of refinement than a single prompt giving you the final answer.

    • Downcount@lemmy.world · 4 months ago

      The funny thing is, if you point out its mistakes, it often does better on subsequent attempts.

      Or it gets stuck in an endless loop of two different but equally wrong solutions.

      Me: This is my system, version x. I want to achieve this.

      ChatGPT: Here’s the solution.

      Me: But this only works with version y of the given system, not version x.

      ChatGPT: <Apology> Try this.

      Me: This is using a method that never existed in the framework.

      ChatGPT: <Apology> <Gives first solution again>

      • mozz@mbin.grits.dev · 4 months ago
        1. “Oh, I see the problem. In order to correct (what went wrong with the last implementation), we can (complete code re-implementation which also doesn’t work)”
        2. Goto 1
  • Ech@lemm.ee · 4 months ago

    For the umpteenth time: an LLM just puts words together; it isn’t a magic answer machine.

  • jsomae@lemmy.ml · 4 months ago

    Sure, but randomly guessing code would get you 0%. Getting 48% right is actually very impressive for an LLM compared to just a few years ago.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 4 months ago

      Exactly. I also find that it tends to do a pretty good job of pointing you in the right direction. It’s way faster than googling or digging through sites like Stack Overflow, because the answers are contextual: you can ask about the specific thing you want to do and get an answer that gives you a general idea of how to approach it. For example, I’ve found it great for crafting complex SQL queries. I don’t really care if the answer is perfect, as long as it gives me an idea of what I need to do.
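
      For example (an invented schema, purely to show the shape of answer you get back): ask for “each customer’s top product per month” and it will sketch something like:

          import sqlite3

          # Invented schema: orders(customer_id, product_id, amount, ordered_at)
          QUERY = """
          SELECT customer_id, month, product_id, total
          FROM (
              SELECT customer_id,
                     strftime('%Y-%m', ordered_at) AS month,
                     product_id,
                     SUM(amount) AS total,
                     RANK() OVER (
                         PARTITION BY customer_id, strftime('%Y-%m', ordered_at)
                         ORDER BY SUM(amount) DESC
                     ) AS rnk
              FROM orders
              GROUP BY customer_id, month, product_id
          ) ranked
          WHERE rnk = 1;
          """

          conn = sqlite3.connect("shop.db")  # assumes such a table exists
          for row in conn.execute(QUERY):
              print(row)

      Even if a detail is off, a sketch like this is usually enough to point you in the right direction.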

    • xthexder@l.sw0.com · 4 months ago

      Just useful enough to become incredibly dangerous to anyone who doesn’t know what they’re doing. Isn’t it great?

      • jsomae@lemmy.ml · 4 months ago

        Now non-coders can finally wield the foot-gun once reserved only for coders! /s

        Truth be told, computer engineering should really be something one needs a licence to practice commercially, just like regular engineering. In this modern era, where shoddy software can be just as ruinous to someone’s life as shoddy engineering, why is it not like this already?

        • iopq@lemmy.world · 4 months ago

          Look, nothing will blow up if I mess up my proxy setup on my machine. I just won’t have internet until I revert my change. Why would that be different if I were getting paid for it?

            • iopq@lemmy.world · 4 months ago

              I have to actually modify the code to properly package it for my distro, so it is engineering: I have to make decisions about how things work.

              • sajran@lemmy.ml · 4 months ago

                I don’t see how this supports your point, then. If “setting up a proxy” means “packaging it to run on thousands of user machines”, isn’t there an obvious and huge potential for a disastrous fuckup?

                • iopq@lemmy.world · 4 months ago

                  No, because it either runs the program successfully or it fails to launch. I don’t mess with the protocol. It runs as root because, when turned on, it needs to set iptables rules to act as a “global” proxy.
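
                  For the curious, a rough sketch of the root-requiring part (hypothetical port; a real tool would also exempt its own traffic so it doesn’t redirect itself in a loop):

                      import subprocess

                      def enable_global_proxy(port: int = 1080) -> None:
                          # Install a NAT rule redirecting outbound TCP to the
                          # local proxy port; this is why root is required.
                          subprocess.run(
                              ["iptables", "-t", "nat", "-A", "OUTPUT",
                               "-p", "tcp", "-j", "REDIRECT", "--to-ports", str(port)],
                              check=True,
                          )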

          • cows_are_underrated@discuss.tchncs.de · 4 months ago

            Nothing happens if you fuck up your proxy. But if you develop an app that gets very popular and don’t care about security, hackers may be able to take control of your whole server and do a lot of damage. And if you develop software for critical infrastructure, fucking up your security systems can actually cost human lives.

            • iopq@lemmy.world · 4 months ago

              Yes, but people with master’s degrees fuck this up too, so it’s not like some accreditation system will solve the issue of people making mistakes.

      • lurch (he/him)@sh.itjust.works · 4 months ago

        The one time it was helpful at work was when I used it to thank, and wish well, a person who was leaving a company we work with. I couldn’t come up with a good response, and ChatGPT spat out really good stuff in seconds. This is what it’s really good for.

        • grrgyle@slrpnk.net · 4 months ago

          Yeah, things that follow a kind of lexical “script” that you don’t want to get creative with would be pretty easy to generate: farewells, greetings, dear Johns, may-he-rest-in-peaces, etc.