

The study was meant to highlight the bias, not to endorse ChatGPT’s advice.
More because of regression in the US (note: it was a joke).
The problem is using LLMs for the wrong things while expecting correct answers.
I do not believe that LLMs will ever be able to replace humans in tasks designed for humans. The reason is that human tasks require tacit knowledge (i.e., job experience), and that knowledge is not written down anywhere in the training material.
However, we will soon start to have tasks designed for LLMs. It has already been observed that LLMs work better on material produced by other LLMs.
It’s neither. LLMs are statistical models: if the training material contains a bias (e.g., women being paid lower salaries), the output will reflect that bias.
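A toy illustration of the point, with made-up salary numbers and a deliberately trivial "model" (a per-group average) standing in for any statistical learner:

```python
# Toy illustration (made-up numbers): a model fit on biased data reproduces the bias.
import numpy as np

# Hypothetical training data where women were historically paid less.
rng = np.random.default_rng(0)
men_salaries = rng.normal(60000, 5000, 1000)    # drawn around 60k
women_salaries = rng.normal(52000, 5000, 1000)  # drawn around 52k (bias in the source data)

# The "model" here is just the conditional mean: the best constant prediction per group.
predicted_salary = {"man": men_salaries.mean(), "woman": women_salaries.mean()}

# The predictions faithfully reflect the gap present in the data, bias included.
print(predicted_salary)  # roughly {'man': 60000, 'woman': 52000}
```

An LLM is vastly more complicated than this, but the principle is the same: it optimizes to match its training distribution, so whatever skew is in the data shows up in the output.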
By the end of Trump’s term, China may also have better civil rights than the US.
You have more or less described RAG. It can improve alignment, but the underlying training is hard to overcome.
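For anyone unfamiliar, a minimal sketch of the RAG idea; `embed` and `generate` are placeholders for whatever embedding model and LLM you use, not a real API:

```python
# Minimal RAG sketch: retrieve relevant documents, then condition the model's answer on them.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a vector representation of `text`."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder: call an LLM with `prompt` and return its completion."""
    raise NotImplementedError

def rag_answer(question: str, documents: list[str], k: int = 3) -> str:
    # 1. Retrieve: rank documents by cosine similarity to the question.
    q = embed(question)
    doc_vectors = [embed(d) for d in documents]
    scores = [
        float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        for v in doc_vectors
    ]
    top_docs = [d for _, d in sorted(zip(scores, documents), reverse=True)[:k]]
    context = "\n\n".join(top_docs)

    # 2. Generate: the retrieved text steers the answer, but the model's
    #    pretrained weights (and their biases) still shape what comes out.
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    return generate(prompt)
```

The retrieval step can put better facts in front of the model, which is why it helps; it does not retrain the weights, which is why the original biases persist.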
See Grok, which bounces from “woke” results to “full Nazi” without ever hitting the midpoint Musk wants.