• Kissaki@feddit.de · 1 year ago

    Current AI is not smarter than humans. It needs supervised training and then acts according to that. That’s inherently incompatible with novelty and correct exploration.

    • chicken@lemmy.dbzer0.com · 1 year ago

      This problem seems like the sort of thing machine learning could be good at, though. You have input binary code that doesn’t run, you want an output that does, and you have training data of inputs paired with their correct outputs.
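
      Something like this rough sketch, for illustration only (the byte-level toy model, the made-up training pair, and all the sizes are assumptions, not how a real tool would work):

          # Hypothetical sketch: supervised learning over (broken, fixed) binary pairs,
          # treating both as raw byte sequences. Everything here is a placeholder.
          import torch
          import torch.nn as nn

          PAD, VOCAB = 256, 257  # 0-255 are byte values, 256 is a padding token

          class ByteFixer(nn.Module):
              """Tiny encoder-decoder mapping an input byte sequence to an output one."""
              def __init__(self, dim=128):
                  super().__init__()
                  self.embed = nn.Embedding(VOCAB, dim, padding_idx=PAD)
                  self.encoder = nn.GRU(dim, dim, batch_first=True)
                  self.decoder = nn.GRU(dim, dim, batch_first=True)
                  self.out = nn.Linear(dim, VOCAB)

              def forward(self, broken, fixed_prefix):
                  _, h = self.encoder(self.embed(broken))             # summarize the broken binary
                  dec, _ = self.decoder(self.embed(fixed_prefix), h)  # teacher-forced decoding
                  return self.out(dec)                                # logits per output byte

          # toy training pair standing in for (binary that doesn't run, binary that does)
          broken = torch.randint(0, 256, (1, 64))
          fixed = torch.randint(0, 256, (1, 64))

          model = ByteFixer()
          opt = torch.optim.Adam(model.parameters(), lr=1e-3)
          loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

          for step in range(100):  # supervised loop: minimize error against the known-good output
              logits = model(broken, fixed[:, :-1])
              loss = loss_fn(logits.reshape(-1, VOCAB), fixed[:, 1:].reshape(-1))
              opt.zero_grad()
              loss.backward()
              opt.step()

      The hard part in practice would be getting enough (broken, fixed) pairs and a sequence model that can handle binaries far longer than a toy example like this.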

    • Send_me_nude_girls@feddit.de · 1 year ago

      AI is good at doing complex things but bad at doing easy things. Supervision is required at first for learning, of course; there’s no AI that works out of the box.

      • Kissaki@feddit.de · 1 year ago

        That assessment entirely depends on what you consider “complex” and “easy”.

        What do you mean by “bad at doing easy things but good at doing complex things”? I don’t see how something complex would work better than something easy.

        • Send_me_nude_girls@feddit.de · 1 year ago

          In short:

          Look up what AI does well right now: finding complex solutions to mathematical problems a human couldn’t, calculating things very fast, replicating natural language, etc.

          Look up what AI struggles with at the moment: drawing hands, recognizing objects, driving a car.

          This statement is only valid for the current state of things, as AI is advancing faster than most people can keep up with. Most people have yet to understand LLMs or generative AI models.

          That’s what I’m talking about. If you look at the process required to crack Denuvo, you’ll notice there’s a lot of guesswork involved, something AI is good at if trained properly. The number of people who know how to crack Denuvo and are willing to spend the time on it is shrinking by the day, while the amount of DRM-protected software is rising every day. We need automation soon.
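
          As a purely illustrative sketch of what automating that guesswork could look like (every function here is hypothetical; the “model” is just random byte flips and the verifier is a stand-in check, nothing to do with how Denuvo or any real cracking tool works):

              # Illustrative guess-and-check loop: a proposer suggests candidate patches,
              # a verifier tests whether the result runs, and only verified guesses count.
              # All of this is made up for illustration; nothing here models real DRM.
              import random

              def propose_patches(binary: bytes, n: int = 8) -> list[bytes]:
                  """Placeholder for a learned model: here it just flips random bytes."""
                  patches = []
                  for _ in range(n):
                      pos = random.randrange(len(binary))
                      patched = bytearray(binary)
                      patched[pos] ^= 0xFF
                      patches.append(bytes(patched))
                  return patches

              def runs_correctly(binary: bytes) -> bool:
                  """Placeholder verifier: a real one would execute or emulate the binary."""
                  return binary.count(0) % 7 == 0  # arbitrary stand-in success condition

              def search(binary: bytes, rounds: int = 100) -> bytes | None:
                  current = binary
                  for _ in range(rounds):
                      candidates = propose_patches(current)
                      for candidate in candidates:
                          if runs_correctly(candidate):
                              return candidate             # a guess that verifies is the result
                      current = random.choice(candidates)  # otherwise keep exploring
                  return None

              print(search(bytes(random.randrange(256) for _ in range(64))))

          The “AI” part would be replacing propose_patches with a model that makes far better guesses than random; the verification step is what keeps wrong guesses from mattering.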

          AI will soon be mandatory for software security, as malicious actors will use AI to find zero-day exploits and you’ll want an AI protecting you from those threats in real time. Antivirus software already works somewhat in that direction, but there’s still much room for improvement.