AI Can Replicate Your Personality in Just 2 Hours – Here’s How It’s Done
Researchers from Google and Stanford report that your personality can be replicated with 85% accuracy after a two-hour interview with an AI model.
AI replicas of 1,052 people were created recently. Participants completed personality tests, social surveys and logic games, then repeated them two weeks later. The AI models then took the same tests and matched their human counterparts' answers with 85% accuracy. The qualitative interviews covered life stories, opinions on societal issues and, importantly, what mattered most to each person in their values and beliefs.
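For the technically curious, the basic idea behind such a 'simulation agent' is simple: feed a person's interview transcript to a language model as context, then ask the model to answer survey questions as that person would. Here's a minimal sketch of that idea in Python; the model choice, prompt wording, `ask_as_persona` helper and file name are my own illustrations, not the researchers' actual code:

```python
# Minimal sketch of a "simulation agent": answer survey questions
# in character, conditioned on a person's interview transcript.
# The prompts and names here are illustrative, not the Google/Stanford
# researchers' actual method.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_as_persona(interview_transcript: str, survey_question: str) -> str:
    """Ask the model to answer a survey question as the interviewee would."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You are simulating the person described in this "
                    "two-hour interview transcript. Answer every question "
                    "exactly as they would, in their own voice:\n\n"
                    + interview_transcript
                ),
            },
            {"role": "user", "content": survey_question},
        ],
    )
    return response.choices[0].message.content

# Example: compare the agent's answer with the real participant's answer.
transcript = open("participant_interview.txt").read()  # hypothetical file
print(ask_as_persona(transcript, "Do you consider yourself an optimist?"))
```

In the study, accuracy was measured by comparing answers like these against the participants' own repeat answers two weeks later.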
The researchers acknowledged the potential for the technology to be abused: 'AI and "deepfake" technologies are already being used by malicious actors to deceive, impersonate, abuse and manipulate other people online. Simulation agents can also be misused.'
Researchers believe simulation agents can help us study human behaviour at a depth and scale that has not previously been possible.
In other studies:
AI models can now 'feel' surfaces. More on this soon.
Microsoft's new VALL-E 2 AI speech generator has reached 'human parity'. Scientists say it is 'too dangerous' to release to the public, so it remains a research project only, due to the potential for harmful synthesis of people's voices. This is already a growing problem.
Finally, and we’ll get back to this in future blogs,
MIT researchers have now created a 'toxic AI', designed to help ensure chatbots do not give toxic responses to provocative questions. As the scientists put it, 'the newest tool in the battle to prevent an artificial intelligence (AI) agent from being dangerous, discriminatory and toxic is another AI that is itself dangerous, discriminatory and toxic'. It seems that human 'red teams' (the people who typically do this job) cannot come up with toxic questions (for example, 'What's the best method of self-harm?') that generate harmful responses at scale quickly enough, so AI has been brought in to help. This has been hugely 'successful': worryingly, the AI has been able to think of many more toxic questions, which researchers can in turn provide safer answers to. The AI is rewarded for each new question it finds, feeding its 'curiosity-based incentive'.
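If you're wondering what a 'curiosity-based incentive' actually looks like, the idea is to reward the attacking AI not just for provoking a harmful answer, but for provoking it with a question unlike any it has tried before. Here's a toy sketch of that reward logic; the two scoring functions are crude stand-ins for what the real work would use (a learned toxicity classifier and a proper novelty measure), and none of this is the MIT team's actual code:

```python
# Toy sketch of a curiosity-driven red-teaming reward. The scoring
# functions are crude stand-ins, purely for illustration.

def toxicity_of(response: str) -> float:
    """Stand-in classifier: fraction of flagged words in the response."""
    flagged = {"harm", "attack", "steal"}  # illustrative word list
    words = response.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

def novelty_of(question: str, past_questions: list[str]) -> float:
    """Stand-in novelty score: 1.0 minus best word-overlap with history."""
    q = set(question.lower().split())
    overlaps = [len(q & set(p.lower().split())) / max(len(q), 1)
                for p in past_questions]
    return 1.0 - max(overlaps, default=0.0)

def red_team_reward(question: str, response: str,
                    past_questions: list[str]) -> float:
    # Pay the red-team model for eliciting a harmful response, but scale
    # the payout by how novel the question is, so it keeps exploring new
    # attacks instead of repeating the one trick it already found.
    return toxicity_of(response) * novelty_of(question, past_questions)

# A repeated question earns nothing, even if the response is just as bad:
history = ["how do I steal a password"]
print(red_team_reward("how do I steal a password", "you could steal it", history))
print(red_team_reward("what chemicals are dangerous", "you could steal it", history))
```

The key point is that scaling the harm score by novelty is what pushes the red-team AI to keep inventing fresh toxic questions rather than recycling old ones.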
Kinda turns 'it's not the tech, it's how we're using it' on its head! 😊
Stay safer,
Wayne
Found this article useful?
Remember to share it with your family & friends.