  • IIRC, when he did make it more explicit, the AI responded with “no, don’t do that”-style responses. He just kept up the metaphor, and since the AI had no such association in its training data, it responded the way a lover in that data would respond to their love saying they’d come home.

    Though I’d say that if a kid would shoot themself in response to anything a chatbot says to them, the issue is more about them having access to a gun at all than about the chatbot itself. Unless maybe the chatbot is volunteering weaknesses common in gun safes, and even then I’d say more fault lies with the parent who chose a shitty safe and raised a kid who would kill themself on the advice of their chatbot girlfriend.

  • My first Seagate HDD started clicking just after I purchased it, while I was moving data to it from my older drive. This was way back in the 00s. In a panic, I started moving the data back to the older drive (because I was moving instead of copying), and then THAT one started having issues too.

    Turns out that when I overclocked my CPU, I had forgotten to lock the PCI bus, which effectively overclocked the HDD interfaces too. It was OK until I tried moving massive amounts of data and the HDDs tried to keep up instead of letting the buffer fill and making the OS wait.

    I reversed the OC, and despite having come so close to failure, both HDDs lasted for years after that without further issues.

  • If it’s a topic that has been heavily discussed on the internet or in literature, LLMs can have good conversations about it. Take it all with a grain of salt because it will regurgitate common bad arguments as well as good ones, but if you challenge it, you can get it to argue against its own previous statements.

    It doesn’t handle things that are in flux very well, or things that require very specific consistency. It’s a probabilistic model: it looks at the existing tokens and predicts which token is most likely to come next. So a question about a specific version of something might get a response specific to that version, or the model might weigh other tokens more heavily than the version, or it might even start treating the whole thing like pseudocode, where descriptive language plays a bigger role than what specifically exists.
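
    To make “predicts which token is most likely to come next” concrete, here’s a toy bigram sketch in Python. It’s purely illustrative, not how any real LLM is implemented; real models use neural networks conditioned on long contexts, but the prediction step has the same shape:

    ```python
    from collections import Counter, defaultdict

    # Toy next-token predictor: count which token follows which in a tiny
    # corpus, then greedily pick the most frequent follower. Real models
    # condition on the whole context with a neural net, but they produce
    # the same kind of "most likely next token" guess.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    followers = defaultdict(Counter)
    for cur, nxt in zip(corpus, corpus[1:]):
        followers[cur][nxt] += 1

    def predict_next(token):
        # Greedy decoding: the single most common follower seen in training.
        counts = followers.get(token)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # -> "cat" (follows "the" twice; "mat" and "fish" once each)
    ```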

  • My guess is that what’s going on is there’s tons of pseudocode out there that looks like a real language but uses placeholder functions that don’t exist, and the LLM picked up on that pattern to the point where it just makes up functions, not realizing they would need to be implemented (because LLMs don’t realize things; they just match very complex patterns).
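
    A minimal, hypothetical sketch of that kind of pseudocode (every helper name here is invented; none of these functions exist anywhere, which is the point):

    ```python
    # Tutorial-style "pseudocode" that parses as valid Python. The module
    # itself runs, since it only defines the function, but actually calling
    # process_order() raises NameError because every helper is a placeholder
    # that was never implemented -- exactly the pattern a model could learn
    # to imitate by confidently calling functions that don't exist.
    def process_order(order):
        validate_order(order)                   # placeholder, defined nowhere
        total = calculate_total(order)          # placeholder, defined nowhere
        charge_customer(order.customer, total)  # placeholder, defined nowhere
        return send_confirmation(order)         # placeholder, defined nowhere
    ```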

  • Don’t get me wrong, it’s decent entertainment. It’s just disconnected from any kind of scientific or technical reality, and a part of me is rolling my eyes through a lot of it. And maybe a bit frustrated, because I like thinking about things, analyzing, and problem solving. I prefer hard magic systems over soft ones: there’s no point in thinking about a soft magic system, because it just does whatever the plot calls for whenever the plot calls for it, while a hard magic system has to build up to its payoffs and be clever to surprise viewers.

    Tony uses a soft technology system that defies thought.