Generative AI can easily be made malicious despite guardrails, say scholars

Researchers found an easy way to retrain publicly available neural networks so they would give in-depth answers to harmful questions, such as how to cheat on an exam, where to find pornography, or even how to kill a neighbor.
from Latest news https://ift.tt/DvudOof