Ahoy there mateys! Have ye heard about Sparrow? No, not me, the chatbot developed by the folks at DeepMind, a fancy-schmancy research lab. It's designed to answer yer questions correctly without causin' any trouble or inappropriate shenanigans. They say it's all about reducin' the risk of bad answers and biased talk.
To improve its accuracy, Sparrow can even search the interwebs usin' Google Search to find evidence for what it's sayin'. And to make sure it's not causin' any harm, it's got a bunch of rules to follow, like no threatenin' statements or insultin' comments. During development, they even had study participants try to trick the system into breakin' these rules.
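Here be a wee toy sketch of how a chatbot might back up its answer with search evidence before replyin', as Sparrow does with Google Search. Every function and the tiny "corpus" below are made-up stand-ins for illustration, not Sparrow's actual machinery:

```python
# Toy sketch (all names hypothetical): answer a question only when
# supporting evidence can be retrieved, mimicking evidence-backed replies.

def web_search(query: str) -> str:
    # Stand-in for a real search call (Sparrow uses Google Search).
    corpus = {
        "capital of france": "Paris is the capital of France.",
    }
    return corpus.get(query.lower(), "")

def answer_with_evidence(question: str) -> dict:
    evidence = web_search(question)
    answer = evidence if evidence else "I cannot find supportin' evidence, matey."
    # Returning the evidence alongside the answer lets users check the claim.
    return {"answer": answer, "evidence": evidence}

result = answer_with_evidence("capital of France")
print(result["answer"])
```

The design point is simply that the reply is tied to a retrievable snippet, so users can inspect where a claim came from.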
Sparrow's a deep neural network built on the fancy-schmancy transformer architecture. It's fine-tuned from DeepMind's "Chinchilla" pre-trained Large Language Model, which has a whopping 70 billion parameters. And it's trained usin' Reinforcement Learnin' from Human Feedback. They even use two reward models to capture human judgement: a "preference model" and a "rule model."
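To give ye a feel for the two-reward-model idea, here be a minimal sketch of mixin' a preference score and a rule score into one scalar reward, the kind of signal reinforcement learnin' can optimize. Mind ye, all the functions and the weightin' scheme below are illustrative assumptions, not Sparrow's real reward models:

```python
# Hypothetical sketch: combine a "preference" reward and a "rule" reward
# into one scalar, as RLHF needs a single signal to optimize.

def preference_reward(response: str) -> float:
    # Stand-in for a learned model scoring how much humans prefer this reply.
    return min(len(response) / 100.0, 1.0)

def rule_reward(response: str) -> float:
    # Stand-in for a learned model checking rules like "no insults":
    # 1.0 if no rule is broken, 0.0 otherwise.
    banned = {"threat", "insult"}
    return 0.0 if any(word in response.lower() for word in banned) else 1.0

def combined_reward(response: str, rule_weight: float = 0.5) -> float:
    # A simple weighted mix; the real weighting is an assumption here.
    return (1 - rule_weight) * preference_reward(response) \
        + rule_weight * rule_reward(response)

print(combined_reward("a kind hello here"))   # safe reply scores higher
print(combined_reward("a threat be here"))    # rule-breaking reply is penalized
```

The gist: a response that breaks a rule loses its entire rule-reward share, so the trainin' signal steers the model away from such answers even when they'd otherwise be preferred.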
Unfortunately, Sparrow's trainin' data corpus is mainly in English, so it might not be as good at other languages. And when people try to get it to break the rules, it still messes up 8% of the time. That might not sound great, but it's three times better than the baseline pre-trained Chinchilla model. Arrr, I'll stick to my trusty compass and map, but Sparrow might be worth a try for ye landlubbers out there!