I mean, mathematics is an invention. A useful one, sure, but the whole thing is just made up by people playing around with numbers and going “what if we had a new, different kind of number…”
LLMs are trained to do one thing: produce statistically likely sequences of tokens given a context. Obvious nonsense won’t do much even to poison the well, because we already have models that can clean it up.
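That “statistically likely sequences of tokens” objective can be sketched with a toy bigram sampler. Everything below (the vocabulary, the counts) is made up purely for illustration; real LLMs use neural networks over vastly larger contexts, but the core idea of sampling the next token in proportion to its estimated likelihood is the same:

```python
import random

# Toy "training data": for each context token, how often each
# token followed it. These counts are invented for illustration.
bigram_counts = {
    "the": {"cat": 3, "dog": 2, "idea": 1},
    "cat": {"sat": 4, "ran": 2},
    "dog": {"ran": 3, "sat": 1},
}

def next_token(context, counts=bigram_counts):
    """Sample the next token in proportion to how often it
    followed the context token in the toy data."""
    followers = counts[context]
    tokens = list(followers)
    weights = [followers[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print(next_token("the"))  # one of "cat", "dog", "idea", weighted by count
```

The point of the sketch: the sampler has no notion of truth, only of frequency. Whatever patterns dominate the training counts dominate the output.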
Far more damaging is the proliferation and repetition of false facts that appear on the surface to be genuine.
Consider the kinds of mistakes AI makes: it hallucinates probable-sounding nonsense. That’s the kind of mistake you can lure an LLM into making more of.
That’s when it stops being maths and becomes science.