

I have always hung around non-mainstream forms of social media. The main issue is that the lack of “normies” soon turns them into bubbles where extreme or controversial ideas are normalized.
Maybe now they have a chance to demonstrate that these filters don’t work.
Plus: shady management of network connections that could enable mass surveillance, weak E2E encryption, and now a partnership with Musk.
Are hiring managers actually less likely to hire women if they ask for market-rate pay, as opposed to men when they do the same?
If, instead of giving passive-aggressive replies, you spent a moment reflecting on what I wrote, you would understand that ChatGPT reflects reality, including any bias. In short, the answer is yes, with high probability.
LLMs do not give the correct answer, just the most probable sequence of words based on their training.
Studies of that kind (there are hundreds of them) highlight two things:
1- LLMs can be incorrect, biased, or produce fabricated information (the so-called hallucinations).
2- The previous point stems from the training material, which proves the existence of bias in society.
In other words, having an LLM recommend lower salaries for women is proof that there is a gender gap.
You sort of described RAG (retrieval-augmented generation). It can improve alignment, but the training is hard to overcome.
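For reference, a minimal sketch of the RAG idea, with a toy bag-of-words retriever standing in for a real embedding model and vector database; the corpus, query, and `retrieve` helper are all invented for illustration:

```python
# RAG sketch: retrieve the most relevant document with a toy
# bag-of-words cosine similarity, then prepend it to the prompt so the
# model answers from the retrieved text rather than its training alone.
from collections import Counter
import math

# Hypothetical knowledge base the model was never trained on.
DOCS = [
    "Median salary for senior engineers in Berlin is 85k EUR.",
    "Remote roles in the EU typically list salary bands in the posting.",
]

def bow(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    # Pick the document most similar to the query.
    return max(DOCS, key=lambda d: cosine(bow(query), bow(d)))

query = "What salary should a senior engineer in Berlin ask for?"
context = retrieve(query)
# The augmented prompt is what actually goes to the LLM.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

Note that this only changes what the model sees at inference time; the weights, and any bias baked into them, stay the same.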
See Grok, which bounces from “woke” results to “full Nazi” without ever hitting the midpoint Musk wants.
The study wanted to highlight the bias, not to endorse ChatGPT’s advice.
More because of US regression (note: it was a joke)
The problem is using LLMs for the wrong things and expecting correct answers.
I do not believe that LLMs will ever be able to replace humans in tasks designed for humans. The reason is that human tasks require tacit knowledge (=job experience) and that stuff is not written down in training material.
However, we will soon start to have tasks designed for LLMs. It has already been observed that LLMs work better on material produced by other LLMs.
It’s neither. LLMs are statistical models: if the training material contains bias (women get lower salaries), the output will reflect that bias.
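A toy illustration of that point, using an invented, deliberately skewed corpus (the data and the `most_probable` helper are hypothetical). The “model” below is just frequency counts, but the mechanism is the same one at work in an LLM: the most probable continuation reproduces whatever skew the training text contains.

```python
# Bias in, bias out: a "model" that counts which salary word follows
# each profile in a made-up, deliberately biased training corpus, then
# emits the most frequent continuation (greedy decoding).
from collections import Counter, defaultdict

corpus = [
    ("woman engineer", "60k"), ("woman engineer", "60k"),
    ("woman engineer", "70k"),
    ("man engineer", "70k"), ("man engineer", "80k"),
    ("man engineer", "80k"),
]

counts = defaultdict(Counter)
for profile, salary in corpus:
    counts[profile][salary] += 1

def most_probable(profile: str) -> str:
    # Pick the highest-count continuation for this profile.
    return counts[profile].most_common(1)[0][0]

print(most_probable("woman engineer"))  # -> 60k
print(most_probable("man engineer"))    # -> 80k
```

Neither output is “correct” or “malicious”; it is just the statistics of the corpus read back to you.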
A better resource: https://www.goeuropean.org/
Also useful: https://isitamerican.eu/