As per the title: ask an LLM to flip a coin. Is "heads" biased in language?
I’m aware of how LLMs work internally; this is just an interesting observation.
Token generation uses pseudo-random sampling to produce more varied language, so in theory a run of flips from the model should tend toward semi-randomness.
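A minimal sketch of why sampling alone doesn't guarantee fairness: even with randomized (temperature-style) sampling, the flip is only as fair as the model's next-token distribution. The logit values below are made up for illustration; real models assign their own scores to "Heads" and "Tails" after a "flip a coin" prompt.

```python
import math
import random

# Hypothetical next-token logits after a "Flip a coin:" prompt.
# These numbers are invented for illustration, not taken from any real model.
LOGITS = {"Heads": 2.0, "Tails": 1.2}

def softmax(scores, temperature=1.0):
    """Convert logits to a probability distribution at a given temperature."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def flip(rng, temperature=1.0):
    """Sample one 'coin flip' token from the softmax distribution."""
    probs = softmax(LOGITS, temperature)
    return rng.choices(list(probs), weights=list(probs.values()))[0]

rng = random.Random(0)
flips = [flip(rng) for _ in range(10_000)]
heads_fraction = flips.count("Heads") / len(flips)
print(heads_fraction)  # ~0.69 here: biased logits give biased flips despite random sampling
```

Raising the temperature flattens the distribution toward 50/50, but at temperature 1 the sampler faithfully reproduces whatever head/tail imbalance the model learned from its training text.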
Comments URL: https://news.ycombinator.com/item?id=45563968
Points: 1
# Comments: 0
Source: news.ycombinator.com